First pullreq for arm of the 4.1 series, since I'm back from
holiday now. This is mostly my M-profile FPU series and Philippe's
devices.h cleanup. I have a pile of other patchsets to work through
in my to-review folder, but 42 patches is definitely quite
big enough to send now...

thanks
-- PMM

The following changes since commit 413a99a92c13ec408dcf2adaa87918dc81e890c8:

  Add Nios II semihosting support. (2019-04-29 16:09:51 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190429

for you to fetch changes up to 437cc27ddfded3bbab6afd5ac1761e0e195edba7:

  hw/devices: Move SMSC 91C111 declaration into a new header (2019-04-29 17:57:21 +0100)

----------------------------------------------------------------
target-arm queue:
 * remove "bag of random stuff" hw/devices.h header
 * implement FPU for Cortex-M and enable it for Cortex-M4 and -M33
 * hw/dma: Compile the bcm2835_dma device as common object
 * configure: Remove --source-path option
 * hw/ssi/xilinx_spips: Avoid variable length array
 * hw/arm/smmuv3: Remove SMMUNotifierNode

----------------------------------------------------------------
Eric Auger (1):
      hw/arm/smmuv3: Remove SMMUNotifierNode

Peter Maydell (28):
      hw/ssi/xilinx_spips: Avoid variable length array
      configure: Remove --source-path option
      target/arm: Make sure M-profile FPSCR RES0 bits are not settable
      hw/intc/armv7m_nvic: Allow reading of M-profile MVFR* registers
      target/arm: Implement dummy versions of M-profile FP-related registers
      target/arm: Disable most VFP sysregs for M-profile
      target/arm: Honour M-profile FP enable bits
      target/arm: Decode FP instructions for M profile
      target/arm: Clear CONTROL_S.SFPA in SG insn if FPU present
      target/arm: Handle SFPA and FPCA bits in reads and writes of CONTROL
      target/arm/helper: don't return early for STKOF faults during stacking
      target/arm: Handle floating point registers in exception entry
      target/arm: Implement v7m_update_fpccr()
      target/arm: Clear CONTROL.SFPA in BXNS and BLXNS
      target/arm: Clean excReturn bits when tail chaining
      target/arm: Allow for floating point in callee stack integrity check
      target/arm: Handle floating point registers in exception return
      target/arm: Move NS TBFLAG from bit 19 to bit 6
      target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags
      target/arm: Set FPCCR.S when executing M-profile floating point insns
      target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set
      target/arm: New helper function arm_v7m_mmu_idx_all()
      target/arm: New function armv7m_nvic_set_pending_lazyfp()
      target/arm: Add lazy-FP-stacking support to v7m_stack_write()
      target/arm: Implement M-profile lazy FP state preservation
      target/arm: Implement VLSTM for v7M CPUs with an FPU
      target/arm: Implement VLLDM for v7M CPUs with an FPU
      target/arm: Enable FPU for Cortex-M4 and Cortex-M33

Philippe Mathieu-Daudé (13):
      hw/dma: Compile the bcm2835_dma device as common object
      hw/arm/aspeed: Use TYPE_TMP105/TYPE_PCA9552 instead of hardcoded string
      hw/arm/nseries: Use TYPE_TMP105 instead of hardcoded string
      hw/display/tc6393xb: Remove unused functions
      hw/devices: Move TC6393XB declarations into a new header
      hw/devices: Move Blizzard declarations into a new header
      hw/devices: Move CBus declarations into a new header
      hw/devices: Move Gamepad declarations into a new header
      hw/devices: Move TI touchscreen declarations into a new header
      hw/devices: Move LAN9118 declarations into a new header
      hw/net/ne2000-isa: Add guards to the header
      hw/net/lan9118: Export TYPE_LAN9118 and use it instead of hardcoded string
      hw/devices: Move SMSC 91C111 declaration into a new header

 configure | 10 +-
 hw/dma/Makefile.objs | 2 +-
 include/hw/arm/omap.h | 6 +-
 include/hw/arm/smmu-common.h | 8 +-
 include/hw/devices.h | 62 ---
 include/hw/display/blizzard.h | 22 ++
 include/hw/display/tc6393xb.h | 24 ++
 include/hw/input/gamepad.h | 19 +
 include/hw/input/tsc2xxx.h | 36 ++
 include/hw/misc/cbus.h | 32 ++
 include/hw/net/lan9118.h | 21 +
 include/hw/net/ne2000-isa.h | 6 +
 include/hw/net/smc91c111.h | 19 +
 include/qemu/typedefs.h | 1 -
 target/arm/cpu.h | 95 ++++-
 target/arm/helper.h | 5 +
 target/arm/translate.h | 3 +
 hw/arm/aspeed.c | 13 +-
 hw/arm/exynos4_boards.c | 3 +-
 hw/arm/gumstix.c | 2 +-
 hw/arm/integratorcp.c | 2 +-
 hw/arm/kzm.c | 2 +-
 hw/arm/mainstone.c | 2 +-
 hw/arm/mps2-tz.c | 3 +-
 hw/arm/mps2.c | 2 +-
 hw/arm/nseries.c | 7 +-
 hw/arm/palm.c | 2 +-
 hw/arm/realview.c | 3 +-
 hw/arm/smmu-common.c | 6 +-
 hw/arm/smmuv3.c | 28 +-
 hw/arm/stellaris.c | 2 +-
 hw/arm/tosa.c | 2 +-
 hw/arm/versatilepb.c | 2 +-
 hw/arm/vexpress.c | 2 +-
 hw/display/blizzard.c | 2 +-
 hw/display/tc6393xb.c | 18 +-
 hw/input/stellaris_input.c | 2 +-
 hw/input/tsc2005.c | 2 +-
 hw/input/tsc210x.c | 4 +-
 hw/intc/armv7m_nvic.c | 261 +++++++++++++
 hw/misc/cbus.c | 2 +-
 hw/net/lan9118.c | 3 +-
 hw/net/smc91c111.c | 2 +-
 hw/ssi/xilinx_spips.c | 6 +-
 target/arm/cpu.c | 20 +
 target/arm/helper.c | 873 +++++++++++++++++++++++++++++++++++++++---
 target/arm/machine.c | 16 +
 target/arm/translate.c | 150 +++++++-
 target/arm/vfp_helper.c | 8 +
 MAINTAINERS | 7 +
 50 files changed, 1595 insertions(+), 235 deletions(-)
 delete mode 100644 include/hw/devices.h
 create mode 100644 include/hw/display/blizzard.h
 create mode 100644 include/hw/display/tc6393xb.h
 create mode 100644 include/hw/input/gamepad.h
 create mode 100644 include/hw/input/tsc2xxx.h
 create mode 100644 include/hw/misc/cbus.h
 create mode 100644 include/hw/net/lan9118.h
 create mode 100644 include/hw/net/smc91c111.h

Hi; this is the latest target-arm queue; most of this is a refactoring
patchset from RTH for the arm page-table-walk emulation.

thanks
-- PMM

The following changes since commit f1d33f55c47dfdaf8daacd618588ad3ae4c452d1:

  Merge tag 'pull-testing-gdbstub-plugins-gitdm-061022-3' of https://github.com/stsquad/qemu into staging (2022-10-06 07:11:56 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221010

for you to fetch changes up to 915f62844cf62e428c7c178149b5ff1cbe129b07:

  docs/system/arm/emulation.rst: Report FEAT_GTG support (2022-10-10 14:52:25 +0100)

----------------------------------------------------------------
target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

----------------------------------------------------------------
Jerome Forissier (2):
      target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
      hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3

Joel Stanley (1):
      docs/nuvoton: Update URL for images

Peter Maydell (4):
      target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
      target/arm: Don't allow guest to use unimplemented granule sizes
      target/arm: Use ARMGranuleSize in ARMVAParameters
      docs/system/arm/emulation.rst: Report FEAT_GTG support

Richard Henderson (21):
      target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
      target/arm: Make the final stage1+2 write to secure be unconditional
      target/arm: Add is_secure parameter to get_phys_addr_lpae
      target/arm: Fix S2 disabled check in S1_ptw_translate
      target/arm: Add is_secure parameter to regime_translation_disabled
      target/arm: Split out get_phys_addr_with_secure
      target/arm: Add is_secure parameter to v7m_read_half_insn
      target/arm: Add TBFLAG_M32.SECURE
      target/arm: Merge regime_is_secure into get_phys_addr
      target/arm: Add is_secure parameter to do_ats_write
      target/arm: Fold secure and non-secure a-profile mmu indexes
      target/arm: Reorg regime_translation_disabled
      target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
      target/arm: Introduce arm_hcr_el2_eff_secstate
      target/arm: Hoist read of *is_secure in S1_ptw_translate
      target/arm: Remove env argument from combined_attrs_fwb
      target/arm: Pass HCR to attribute subroutines.
      target/arm: Fix ATS12NSO* from S PL1
      target/arm: Split out get_phys_addr_disabled
      target/arm: Fix cacheattr in get_phys_addr_disabled
      target/arm: Use tlb_set_page_full

 docs/system/arm/emulation.rst | 1 +
 docs/system/arm/nuvoton.rst | 4 +-
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 181 ++++++++------
 target/arm/internals.h | 150 ++++++-----
 hw/arm/boot.c | 4 +
 target/arm/helper.c | 332 ++++++++++++++----------
 target/arm/kvm.c | 4 +-
 target/arm/m_helper.c | 29 ++-
 target/arm/ptw.c | 570 ++++++++++++++++++++++--------------------
 target/arm/tlb_helper.c | 9 +-
 target/arm/translate-a64.c | 8 -
 target/arm/translate.c | 9 +-
 13 files changed, 717 insertions(+), 586 deletions(-)
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
2
there is no pending signal to be taken. In commit 94ccff13382055
3
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
4
generic KVM code. Adopt the same approach for the use of the
5
ioctl in the Arm-specific KVM code (where we use it to create a
6
scratch VM for probing for various things).
2
7
3
This commit finally deletes "hw/devices.h".
8
For more information, see the mailing list thread:
9
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/
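(For reference, the change amounts to the retry pattern sketched below; the wrapper name is made up for illustration, the real change is inside kvm_arm_create_scratch_host_vcpu(), as the kvm.c hunk further down shows.)

    /* Illustrative sketch only: retry KVM_CREATE_VM while it fails with EINTR. */
    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int create_scratch_vm_fd(int kvmfd, int max_vm_pa_size)
    {
        int vmfd;

        do {
            vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
        } while (vmfd == -1 && errno == EINTR);

        return vmfd;   /* still negative on any real error */
    }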
4
10
5
Reviewed-by: Markus Armbruster <armbru@redhat.com>
11
Reported-by: Vitaly Chikunov <vt@altlinux.org>
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Message-id: 20190412165416.7977-13-philmd@redhat.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
14
Reviewed-by: Eric Auger <eric.auger@redhat.com>
15
Acked-by: Marc Zyngier <maz@kernel.org>
16
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
9
---
17
---
10
include/hw/devices.h | 11 -----------
18
target/arm/kvm.c | 4 +++-
11
include/hw/net/smc91c111.h | 19 +++++++++++++++++++
19
1 file changed, 3 insertions(+), 1 deletion(-)
12
hw/arm/gumstix.c | 2 +-
13
hw/arm/integratorcp.c | 2 +-
14
hw/arm/mainstone.c | 2 +-
15
hw/arm/realview.c | 2 +-
16
hw/arm/versatilepb.c | 2 +-
17
hw/net/smc91c111.c | 2 +-
18
8 files changed, 25 insertions(+), 17 deletions(-)
19
delete mode 100644 include/hw/devices.h
20
create mode 100644 include/hw/net/smc91c111.h
21
20
22
diff --git a/include/hw/devices.h b/include/hw/devices.h
21
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
23
deleted file mode 100644
24
index XXXXXXX..XXXXXXX
25
--- a/include/hw/devices.h
26
+++ /dev/null
27
@@ -XXX,XX +XXX,XX @@
28
-#ifndef QEMU_DEVICES_H
29
-#define QEMU_DEVICES_H
30
-
31
-/* Devices that have nowhere better to go. */
32
-
33
-#include "hw/hw.h"
34
-
35
-/* smc91c111.c */
36
-void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
37
-
38
-#endif
39
diff --git a/include/hw/net/smc91c111.h b/include/hw/net/smc91c111.h
40
new file mode 100644
41
index XXXXXXX..XXXXXXX
42
--- /dev/null
43
+++ b/include/hw/net/smc91c111.h
44
@@ -XXX,XX +XXX,XX @@
45
+/*
46
+ * SMSC 91C111 Ethernet interface emulation
47
+ *
48
+ * Copyright (c) 2005 CodeSourcery, LLC.
49
+ * Written by Paul Brook
50
+ *
51
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
52
+ * See the COPYING file in the top-level directory.
53
+ */
54
+
55
+#ifndef HW_NET_SMC91C111_H
56
+#define HW_NET_SMC91C111_H
57
+
58
+#include "hw/irq.h"
59
+#include "net/net.h"
60
+
61
+void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
62
+
63
+#endif
64
diff --git a/hw/arm/gumstix.c b/hw/arm/gumstix.c
65
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
66
--- a/hw/arm/gumstix.c
23
--- a/target/arm/kvm.c
67
+++ b/hw/arm/gumstix.c
24
+++ b/target/arm/kvm.c
68
@@ -XXX,XX +XXX,XX @@
25
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
69
#include "hw/arm/pxa.h"
26
if (max_vm_pa_size < 0) {
70
#include "net/net.h"
27
max_vm_pa_size = 0;
71
#include "hw/block/flash.h"
28
}
72
-#include "hw/devices.h"
29
- vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
73
+#include "hw/net/smc91c111.h"
30
+ do {
74
#include "hw/boards.h"
31
+ vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
75
#include "exec/address-spaces.h"
32
+ } while (vmfd == -1 && errno == EINTR);
76
#include "sysemu/qtest.h"
33
if (vmfd < 0) {
77
diff --git a/hw/arm/integratorcp.c b/hw/arm/integratorcp.c
34
goto err;
78
index XXXXXXX..XXXXXXX 100644
35
}
79
--- a/hw/arm/integratorcp.c
80
+++ b/hw/arm/integratorcp.c
81
@@ -XXX,XX +XXX,XX @@
82
#include "qemu-common.h"
83
#include "cpu.h"
84
#include "hw/sysbus.h"
85
-#include "hw/devices.h"
86
#include "hw/boards.h"
87
#include "hw/arm/arm.h"
88
#include "hw/misc/arm_integrator_debug.h"
89
+#include "hw/net/smc91c111.h"
90
#include "net/net.h"
91
#include "exec/address-spaces.h"
92
#include "sysemu/sysemu.h"
93
diff --git a/hw/arm/mainstone.c b/hw/arm/mainstone.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/hw/arm/mainstone.c
96
+++ b/hw/arm/mainstone.c
97
@@ -XXX,XX +XXX,XX @@
98
#include "hw/arm/pxa.h"
99
#include "hw/arm/arm.h"
100
#include "net/net.h"
101
-#include "hw/devices.h"
102
+#include "hw/net/smc91c111.h"
103
#include "hw/boards.h"
104
#include "hw/block/flash.h"
105
#include "hw/sysbus.h"
106
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/hw/arm/realview.c
109
+++ b/hw/arm/realview.c
110
@@ -XXX,XX +XXX,XX @@
111
#include "hw/sysbus.h"
112
#include "hw/arm/arm.h"
113
#include "hw/arm/primecell.h"
114
-#include "hw/devices.h"
115
#include "hw/net/lan9118.h"
116
+#include "hw/net/smc91c111.h"
117
#include "hw/pci/pci.h"
118
#include "net/net.h"
119
#include "sysemu/sysemu.h"
120
diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
121
index XXXXXXX..XXXXXXX 100644
122
--- a/hw/arm/versatilepb.c
123
+++ b/hw/arm/versatilepb.c
124
@@ -XXX,XX +XXX,XX @@
125
#include "cpu.h"
126
#include "hw/sysbus.h"
127
#include "hw/arm/arm.h"
128
-#include "hw/devices.h"
129
+#include "hw/net/smc91c111.h"
130
#include "net/net.h"
131
#include "sysemu/sysemu.h"
132
#include "hw/pci/pci.h"
133
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/hw/net/smc91c111.c
136
+++ b/hw/net/smc91c111.c
137
@@ -XXX,XX +XXX,XX @@
138
#include "qemu/osdep.h"
139
#include "hw/sysbus.h"
140
#include "net/net.h"
141
-#include "hw/devices.h"
142
+#include "hw/net/smc91c111.h"
143
#include "qemu/log.h"
144
/* For crc32 */
145
#include <zlib.h>
146
--
36
--
147
2.20.1
37
2.25.1
148
149
1
The M-profile FPCCR.S bit indicates the security status of
1
From: Jerome Forissier <jerome.forissier@linaro.org>
2
the floating point context. In the pseudocode ExecuteFPCheck()
3
function it is unconditionally set to match the current
4
security state whenever a floating point instruction is
5
executed.
6
2
7
Implement this by adding a new TB flag which tracks whether
3
Updates scr_write() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
8
FPCCR.S is different from the current security state, so
4
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
9
that we only need to emit the code to update it in the
5
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
10
less-common case when it is not already set correctly.
6
to 64-bit so that masking and bitwise not (~) behave as expected.
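(A standalone illustration of why the widening matters -- this is not QEMU code, and the bit position is only an example of a high SCR_EL3 bit: with a 32-bit constant, ~ produces a 32-bit mask and the AND silently clears bits 63:32.)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t scr = (1ULL << 41) | 1;   /* a high bit plus NS, purely as an example */
        uint32_t ns_32 = 1U;               /* old 32-bit style constant */
        uint64_t ns_64 = 1ULL;             /* widened constant */

        printf("%" PRIx64 "\n", scr & ~ns_32);  /* prints 0: the high bit is lost */
        printf("%" PRIx64 "\n", scr & ~ns_64);  /* prints 20000000000: high bit kept */
        return 0;
    }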
11
7
12
Note that we will add the handling for the other work done
8
This enables booting Linux with Trusted Firmware-A at EL3 with
13
by ExecuteFPCheck() in later commits.
9
"-M virt,secure=on -cpu max".
14
10
11
Cc: qemu-stable@nongnu.org
12
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
13
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
14
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
18
---
18
---
19
target/arm/cpu.h | 2 ++
19
target/arm/cpu.h | 54 ++++++++++++++++++++++-----------------------
20
target/arm/translate.h | 1 +
20
target/arm/helper.c | 5 ++++-
21
target/arm/helper.c | 5 +++++
21
2 files changed, 31 insertions(+), 28 deletions(-)
22
target/arm/translate.c | 20 ++++++++++++++++++++
23
4 files changed, 28 insertions(+)
24
22
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
26
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu.h
25
--- a/target/arm/cpu.h
28
+++ b/target/arm/cpu.h
26
+++ b/target/arm/cpu.h
29
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
27
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
30
FIELD(TBFLAG_A32, VFPEN, 7, 1)
28
31
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
29
#define HPFAR_NS (1ULL << 63)
32
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
30
33
+/* For M profile only, set if FPCCR.S does not match current security state */
31
-#define SCR_NS (1U << 0)
34
+FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)
32
-#define SCR_IRQ (1U << 1)
35
/* For M profile only, Handler (ie not Thread) mode */
33
-#define SCR_FIQ (1U << 2)
36
FIELD(TBFLAG_A32, HANDLER, 21, 1)
34
-#define SCR_EA (1U << 3)
37
/* For M profile only, whether we should generate stack-limit checks */
35
-#define SCR_FW (1U << 4)
38
diff --git a/target/arm/translate.h b/target/arm/translate.h
36
-#define SCR_AW (1U << 5)
39
index XXXXXXX..XXXXXXX 100644
37
-#define SCR_NET (1U << 6)
40
--- a/target/arm/translate.h
38
-#define SCR_SMD (1U << 7)
41
+++ b/target/arm/translate.h
39
-#define SCR_HCE (1U << 8)
42
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
40
-#define SCR_SIF (1U << 9)
43
bool v7m_handler_mode;
41
-#define SCR_RW (1U << 10)
44
bool v8m_secure; /* true if v8M and we're in Secure mode */
42
-#define SCR_ST (1U << 11)
45
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
43
-#define SCR_TWI (1U << 12)
46
+ bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
44
-#define SCR_TWE (1U << 13)
47
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
45
-#define SCR_TLOR (1U << 14)
48
* so that top level loop can generate correct syndrome information.
46
-#define SCR_TERR (1U << 15)
49
*/
47
-#define SCR_APK (1U << 16)
48
-#define SCR_API (1U << 17)
49
-#define SCR_EEL2 (1U << 18)
50
-#define SCR_EASE (1U << 19)
51
-#define SCR_NMEA (1U << 20)
52
-#define SCR_FIEN (1U << 21)
53
-#define SCR_ENSCXT (1U << 25)
54
-#define SCR_ATA (1U << 26)
55
-#define SCR_FGTEN (1U << 27)
56
-#define SCR_ECVEN (1U << 28)
57
-#define SCR_TWEDEN (1U << 29)
58
+#define SCR_NS (1ULL << 0)
59
+#define SCR_IRQ (1ULL << 1)
60
+#define SCR_FIQ (1ULL << 2)
61
+#define SCR_EA (1ULL << 3)
62
+#define SCR_FW (1ULL << 4)
63
+#define SCR_AW (1ULL << 5)
64
+#define SCR_NET (1ULL << 6)
65
+#define SCR_SMD (1ULL << 7)
66
+#define SCR_HCE (1ULL << 8)
67
+#define SCR_SIF (1ULL << 9)
68
+#define SCR_RW (1ULL << 10)
69
+#define SCR_ST (1ULL << 11)
70
+#define SCR_TWI (1ULL << 12)
71
+#define SCR_TWE (1ULL << 13)
72
+#define SCR_TLOR (1ULL << 14)
73
+#define SCR_TERR (1ULL << 15)
74
+#define SCR_APK (1ULL << 16)
75
+#define SCR_API (1ULL << 17)
76
+#define SCR_EEL2 (1ULL << 18)
77
+#define SCR_EASE (1ULL << 19)
78
+#define SCR_NMEA (1ULL << 20)
79
+#define SCR_FIEN (1ULL << 21)
80
+#define SCR_ENSCXT (1ULL << 25)
81
+#define SCR_ATA (1ULL << 26)
82
+#define SCR_FGTEN (1ULL << 27)
83
+#define SCR_ECVEN (1ULL << 28)
84
+#define SCR_TWEDEN (1ULL << 29)
85
#define SCR_TWEDEL MAKE_64BIT_MASK(30, 4)
86
#define SCR_TME (1ULL << 34)
87
#define SCR_AMVOFFEN (1ULL << 35)
50
diff --git a/target/arm/helper.c b/target/arm/helper.c
88
diff --git a/target/arm/helper.c b/target/arm/helper.c
51
index XXXXXXX..XXXXXXX 100644
89
index XXXXXXX..XXXXXXX 100644
52
--- a/target/arm/helper.c
90
--- a/target/arm/helper.c
53
+++ b/target/arm/helper.c
91
+++ b/target/arm/helper.c
54
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
92
@@ -XXX,XX +XXX,XX @@ static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
55
flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
93
static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
56
}
94
{
57
95
/* Begin with base v8.0 state. */
58
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
96
- uint32_t valid_mask = 0x3fff;
59
+ FIELD_EX32(env->v7m.fpccr[M_REG_S], V7M_FPCCR, S) != env->v7m.secure) {
97
+ uint64_t valid_mask = 0x3fff;
60
+ flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
98
ARMCPU *cpu = env_archcpu(env);
61
+ }
99
62
+
100
/*
63
*pflags = flags;
101
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
64
*cs_base = 0;
102
if (cpu_isar_feature(aa64_doublefault, cpu)) {
65
}
103
valid_mask |= SCR_EASE | SCR_NMEA;
66
diff --git a/target/arm/translate.c b/target/arm/translate.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/translate.c
69
+++ b/target/arm/translate.c
70
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
71
}
104
}
72
}
105
+ if (cpu_isar_feature(aa64_sme, cpu)) {
73
106
+ valid_mask |= SCR_ENTP2;
74
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
75
+ /* Handle M-profile lazy FP state mechanics */
76
+
77
+ /* Update ownership of FP context: set FPCCR.S to match current state */
78
+ if (s->v8m_fpccr_s_wrong) {
79
+ TCGv_i32 tmp;
80
+
81
+ tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
82
+ if (s->v8m_secure) {
83
+ tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
84
+ } else {
85
+ tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
86
+ }
87
+ store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
88
+ /* Don't need to do this for any further FP insns in this TB */
89
+ s->v8m_fpccr_s_wrong = false;
90
+ }
107
+ }
91
+ }
108
} else {
92
+
109
valid_mask &= ~(SCR_RW | SCR_ST);
93
if (extract32(insn, 28, 4) == 0xf) {
110
if (cpu_isar_feature(aa32_ras, cpu)) {
94
/*
95
* Encodings with T=1 (Thumb) or unconditional (ARM):
96
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
97
dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
98
regime_is_secure(env, dc->mmu_idx);
99
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
100
+ dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
101
dc->cp_regs = cpu->cp_regs;
102
dc->features = env->features;
103
104
--
111
--
105
2.20.1
112
2.25.1
106
107
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Joel Stanley <joel@jms.id.au>
2
2
3
Reviewed-by: Markus Armbruster <armbru@redhat.com>
3
openpower.xyz was retired some time ago. The OpenBMC Jenkins is where
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
images can be found these days.
5
Message-id: 20190412165416.7977-12-philmd@redhat.com
5
6
Signed-off-by: Joel Stanley <joel@jms.id.au>
7
Reviewed-by: Hao Wu <wuhaotsh@google.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20221004050042.22681-1-joel@jms.id.au
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
12
---
8
include/hw/net/lan9118.h | 2 ++
13
docs/system/arm/nuvoton.rst | 4 ++--
9
hw/arm/exynos4_boards.c | 3 ++-
14
1 file changed, 2 insertions(+), 2 deletions(-)
10
hw/arm/mps2-tz.c | 3 ++-
11
hw/net/lan9118.c | 1 -
12
4 files changed, 6 insertions(+), 3 deletions(-)
13
15
14
diff --git a/include/hw/net/lan9118.h b/include/hw/net/lan9118.h
16
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/net/lan9118.h
18
--- a/docs/system/arm/nuvoton.rst
17
+++ b/include/hw/net/lan9118.h
19
+++ b/docs/system/arm/nuvoton.rst
18
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ Boot options
19
#include "hw/irq.h"
21
20
#include "net/net.h"
22
The Nuvoton machines can boot from an OpenBMC firmware image, or directly into
21
23
a kernel using the ``-kernel`` option. OpenBMC images for ``quanta-gsj`` and
22
+#define TYPE_LAN9118 "lan9118"
24
-possibly others can be downloaded from the OpenPOWER jenkins :
23
+
25
+possibly others can be downloaded from the OpenBMC jenkins :
24
void lan9118_init(NICInfo *, uint32_t, qemu_irq);
26
25
27
- https://openpower.xyz/
26
#endif
28
+ https://jenkins.openbmc.org/
27
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
29
28
index XXXXXXX..XXXXXXX 100644
30
The firmware image should be attached as an MTD drive. Example :
29
--- a/hw/arm/exynos4_boards.c
31
30
+++ b/hw/arm/exynos4_boards.c
31
@@ -XXX,XX +XXX,XX @@
32
#include "hw/arm/arm.h"
33
#include "exec/address-spaces.h"
34
#include "hw/arm/exynos4210.h"
35
+#include "hw/net/lan9118.h"
36
#include "hw/boards.h"
37
38
#undef DEBUG
39
@@ -XXX,XX +XXX,XX @@ static void lan9215_init(uint32_t base, qemu_irq irq)
40
/* This should be a 9215 but the 9118 is close enough */
41
if (nd_table[0].used) {
42
qemu_check_nic_model(&nd_table[0], "lan9118");
43
- dev = qdev_create(NULL, "lan9118");
44
+ dev = qdev_create(NULL, TYPE_LAN9118);
45
qdev_set_nic_properties(dev, &nd_table[0]);
46
qdev_prop_set_uint32(dev, "mode_16bit", 1);
47
qdev_init_nofail(dev);
48
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
49
index XXXXXXX..XXXXXXX 100644
50
--- a/hw/arm/mps2-tz.c
51
+++ b/hw/arm/mps2-tz.c
52
@@ -XXX,XX +XXX,XX @@
53
#include "hw/arm/armsse.h"
54
#include "hw/dma/pl080.h"
55
#include "hw/ssi/pl022.h"
56
+#include "hw/net/lan9118.h"
57
#include "net/net.h"
58
#include "hw/core/split-irq.h"
59
60
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
61
* except that it doesn't support the checksum-offload feature.
62
*/
63
qemu_check_nic_model(nd, "lan9118");
64
- mms->lan9118 = qdev_create(NULL, "lan9118");
65
+ mms->lan9118 = qdev_create(NULL, TYPE_LAN9118);
66
qdev_set_nic_properties(mms->lan9118, nd);
67
qdev_init_nofail(mms->lan9118);
68
69
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/hw/net/lan9118.c
72
+++ b/hw/net/lan9118.c
73
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_packet = {
74
}
75
};
76
77
-#define TYPE_LAN9118 "lan9118"
78
#define LAN9118(obj) OBJECT_CHECK(lan9118_state, (obj), TYPE_LAN9118)
79
80
typedef struct {
81
--
32
--
82
2.20.1
33
2.25.1
83
34
84
35
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Thomas Huth <thuth@redhat.com>
3
The starting security state comes with the translation regime,
4
Reviewed-by: Markus Armbruster <armbru@redhat.com>
4
not the current state of arm_is_secure_below_el3().
5
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
6
Message-id: 20190412165416.7977-11-philmd@redhat.com
6
Create a new local variable, s2walk_secure, which does not need
7
to be written back to result->attrs.secure -- we compute that
8
value later, after the S2 walk is complete.
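(Reduced to booleans, the selection is roughly the standalone model below -- not QEMU code, argument names invented; the real logic is in the ptw.c hunk that follows.)

    #include <stdbool.h>
    #include <stdio.h>

    /* In Secure state the NS bit from the stage-1 walk picks between the
     * VSTCR_EL2.SW and VTCR_EL2.NSW controls; from Non-secure state the
     * stage-2 walk is always non-secure.
     */
    static bool s2walk_secure(bool is_secure, bool ipa_secure,
                              bool vstcr_sw, bool vtcr_nsw)
    {
        if (!is_secure) {
            return false;
        }
        return !(ipa_secure ? vstcr_sw : vtcr_nsw);
    }

    int main(void)
    {
        printf("%d\n", s2walk_secure(true, true, false, false));   /* 1 */
        printf("%d\n", s2walk_secure(true, true, true, false));    /* 0: SW redirects */
        printf("%d\n", s2walk_secure(false, false, false, false)); /* 0 */
        return 0;
    }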
9
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
14
---
9
include/hw/net/ne2000-isa.h | 6 ++++++
15
target/arm/ptw.c | 18 +++++++++---------
10
1 file changed, 6 insertions(+)
16
1 file changed, 9 insertions(+), 9 deletions(-)
11
17
12
diff --git a/include/hw/net/ne2000-isa.h b/include/hw/net/ne2000-isa.h
18
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
13
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
14
--- a/include/hw/net/ne2000-isa.h
20
--- a/target/arm/ptw.c
15
+++ b/include/hw/net/ne2000-isa.h
21
+++ b/target/arm/ptw.c
16
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
17
* This work is licensed under the terms of the GNU GPL, version 2 or later.
23
hwaddr ipa;
18
* See the COPYING file in the top-level directory.
24
int s1_prot;
19
*/
25
int ret;
20
+
26
- bool ipa_secure;
21
+#ifndef HW_NET_NE2K_ISA_H
27
+ bool ipa_secure, s2walk_secure;
22
+#define HW_NET_NE2K_ISA_H
28
ARMCacheAttrs cacheattrs1;
23
+
29
ARMMMUIdx s2_mmu_idx;
24
#include "hw/hw.h"
30
bool is_el0;
25
#include "hw/qdev.h"
31
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
26
#include "hw/isa/isa.h"
32
27
@@ -XXX,XX +XXX,XX @@ static inline ISADevice *isa_ne2000_init(ISABus *bus, int base, int irq,
33
ipa = result->phys;
28
}
34
ipa_secure = result->attrs.secure;
29
return d;
35
- if (arm_is_secure_below_el3(env)) {
30
}
36
- if (ipa_secure) {
31
+
37
- result->attrs.secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
32
+#endif
38
- } else {
39
- result->attrs.secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
40
- }
41
+ if (is_secure) {
42
+ /* Select TCR based on the NS bit from the S1 walk. */
43
+ s2walk_secure = !(ipa_secure
44
+ ? env->cp15.vstcr_el2 & VSTCR_SW
45
+ : env->cp15.vtcr_el2 & VTCR_NSW);
46
} else {
47
assert(!ipa_secure);
48
+ s2walk_secure = false;
49
}
50
51
- s2_mmu_idx = (result->attrs.secure
52
+ s2_mmu_idx = (s2walk_secure
53
? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
54
is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
55
56
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
57
result->cacheattrs);
58
59
/* Check if IPA translates to secure or non-secure PA space. */
60
- if (arm_is_secure_below_el3(env)) {
61
+ if (is_secure) {
62
if (ipa_secure) {
63
result->attrs.secure =
64
!(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
33
--
65
--
34
2.20.1
66
2.25.1
35
36
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
No code has used the tc6393xb_gpio_in_get() and tc6393xb_gpio_out_set()
3
While the stage2 call to get_phys_addr_lpae should never set
4
functions since their introduction in commit 88d2c950b002. Time to
4
attrs.secure when given a non-secure input, it's just as easy
5
remove them.
5
to make the final update to attrs.secure be unconditional and
6
false in the case of non-secure input.
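(Boiled down to booleans, the new unconditional update is roughly the standalone model below -- not QEMU code, argument names invented; the real expression is in the hunk that follows.)

    #include <stdbool.h>

    /* With a Secure starting regime, VSTCR.{SA,SW} force the output to the
     * non-secure PA space, and for a non-secure IPA the VTCR.{NSA,NSW} bits
     * do the same; with a Non-secure starting regime the output is always
     * non-secure.
     */
    static bool pa_space_is_secure(bool is_secure, bool ipa_secure,
                                   bool vstcr_sa_or_sw, bool vtcr_nsa_or_nsw)
    {
        return is_secure
            && !vstcr_sa_or_sw
            && (ipa_secure || !vtcr_nsa_or_nsw);
    }

    int main(void)
    {
        return !pa_space_is_secure(true, true, false, false); /* expect secure */
    }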
6
7
7
Suggested-by: Markus Armbruster <armbru@redhat.com>
8
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190412165416.7977-4-philmd@redhat.com
10
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
---
13
include/hw/devices.h | 3 ---
14
target/arm/ptw.c | 21 ++++++++++-----------
14
hw/display/tc6393xb.c | 16 ----------------
15
1 file changed, 10 insertions(+), 11 deletions(-)
15
2 files changed, 19 deletions(-)
16
16
17
diff --git a/include/hw/devices.h b/include/hw/devices.h
17
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/devices.h
19
--- a/target/arm/ptw.c
20
+++ b/include/hw/devices.h
20
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@ void retu_key_event(void *retu, int state);
21
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
22
typedef struct TC6393xbState TC6393xbState;
22
result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
23
TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
23
result->cacheattrs);
24
uint32_t base, qemu_irq irq);
24
25
-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
25
- /* Check if IPA translates to secure or non-secure PA space. */
26
- qemu_irq handler);
26
- if (is_secure) {
27
-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s);
27
- if (ipa_secure) {
28
qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
28
- result->attrs.secure =
29
29
- !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
30
#endif
30
- } else {
31
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
31
- result->attrs.secure =
32
index XXXXXXX..XXXXXXX 100644
32
- !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
33
--- a/hw/display/tc6393xb.c
33
- || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
34
+++ b/hw/display/tc6393xb.c
34
- }
35
@@ -XXX,XX +XXX,XX @@ struct TC6393xbState {
35
- }
36
blanked : 1;
36
+ /*
37
};
37
+ * Check if IPA translates to secure or non-secure PA space.
38
38
+ * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
39
-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s)
39
+ */
40
-{
40
+ result->attrs.secure =
41
- return s->gpio_in;
41
+ (is_secure
42
-}
42
+ && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
43
-
43
+ && (ipa_secure
44
static void tc6393xb_gpio_set(void *opaque, int line, int level)
44
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
45
{
45
+
46
// TC6393xbState *s = opaque;
46
return 0;
47
@@ -XXX,XX +XXX,XX @@ static void tc6393xb_gpio_set(void *opaque, int line, int level)
47
} else {
48
// FIXME: how does the chip reflect the GPIO input level change?
48
/*
49
}
50
51
-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
52
- qemu_irq handler)
53
-{
54
- if (line >= TC6393XB_GPIOS) {
55
- fprintf(stderr, "TC6393xb: no GPIO pin %d\n", line);
56
- return;
57
- }
58
-
59
- s->handler[line] = handler;
60
-}
61
-
62
static void tc6393xb_gpio_handler_update(TC6393xbState *s)
63
{
64
uint32_t level, diff;
65
--
49
--
66
2.20.1
50
2.25.1
67
68
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Markus Armbruster <armbru@redhat.com>
3
Remove the use of regime_is_secure from get_phys_addr_lpae,
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
using the new parameter instead.
5
Message-id: 20190412165416.7977-10-philmd@redhat.com
5
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
10
---
8
include/hw/devices.h | 3 ---
11
target/arm/ptw.c | 20 ++++++++++----------
9
include/hw/net/lan9118.h | 19 +++++++++++++++++++
12
1 file changed, 10 insertions(+), 10 deletions(-)
10
hw/arm/kzm.c | 2 +-
11
hw/arm/mps2.c | 2 +-
12
hw/arm/realview.c | 1 +
13
hw/arm/vexpress.c | 2 +-
14
hw/net/lan9118.c | 2 +-
15
7 files changed, 24 insertions(+), 7 deletions(-)
16
create mode 100644 include/hw/net/lan9118.h
17
13
18
diff --git a/include/hw/devices.h b/include/hw/devices.h
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/devices.h
16
--- a/target/arm/ptw.c
21
+++ b/include/hw/devices.h
17
+++ b/target/arm/ptw.c
22
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@
23
/* smc91c111.c */
19
24
void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
20
static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
25
21
MMUAccessType access_type, ARMMMUIdx mmu_idx,
26
-/* lan9118.c */
22
- bool s1_is_el0, GetPhysAddrResult *result,
27
-void lan9118_init(NICInfo *, uint32_t, qemu_irq);
23
- ARMMMUFaultInfo *fi)
28
-
24
+ bool is_secure, bool s1_is_el0,
29
#endif
25
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
30
diff --git a/include/hw/net/lan9118.h b/include/hw/net/lan9118.h
26
__attribute__((nonnull));
31
new file mode 100644
27
32
index XXXXXXX..XXXXXXX
28
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
33
--- /dev/null
29
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
34
+++ b/include/hw/net/lan9118.h
30
GetPhysAddrResult s2 = {};
35
@@ -XXX,XX +XXX,XX @@
31
int ret;
36
+/*
32
37
+ * SMSC LAN9118 Ethernet interface emulation
33
- ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
38
+ *
34
- &s2, fi);
39
+ * Copyright (c) 2009 CodeSourcery, LLC.
35
+ ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
40
+ * Written by Paul Brook
36
+ *is_secure, false, &s2, fi);
41
+ *
37
if (ret) {
42
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
38
assert(fi->type != ARMFault_None);
43
+ * See the COPYING file in the top-level directory.
39
fi->s2addr = addr;
44
+ */
40
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
45
+
41
*/
46
+#ifndef HW_NET_LAN9118_H
42
static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
47
+#define HW_NET_LAN9118_H
43
MMUAccessType access_type, ARMMMUIdx mmu_idx,
48
+
44
- bool s1_is_el0, GetPhysAddrResult *result,
49
+#include "hw/irq.h"
45
- ARMMMUFaultInfo *fi)
50
+#include "net/net.h"
46
+ bool is_secure, bool s1_is_el0,
51
+
47
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
52
+void lan9118_init(NICInfo *, uint32_t, qemu_irq);
48
{
53
+
49
ARMCPU *cpu = env_archcpu(env);
54
+#endif
50
/* Read an LPAE long-descriptor translation table. */
55
diff --git a/hw/arm/kzm.c b/hw/arm/kzm.c
51
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
56
index XXXXXXX..XXXXXXX 100644
52
* remain non-secure. We implement this by just ORing in the NSTable/NS
57
--- a/hw/arm/kzm.c
53
* bits at each step.
58
+++ b/hw/arm/kzm.c
54
*/
59
@@ -XXX,XX +XXX,XX @@
55
- tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
60
#include "qemu/error-report.h"
56
+ tableattrs = is_secure ? 0 : (1 << 4);
61
#include "exec/address-spaces.h"
57
for (;;) {
62
#include "net/net.h"
58
uint64_t descriptor;
63
-#include "hw/devices.h"
59
bool nstable;
64
+#include "hw/net/lan9118.h"
60
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
65
#include "hw/char/serial.h"
61
memset(result, 0, sizeof(*result));
66
#include "sysemu/qtest.h"
62
67
63
ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
68
diff --git a/hw/arm/mps2.c b/hw/arm/mps2.c
64
- is_el0, result, fi);
69
index XXXXXXX..XXXXXXX 100644
65
+ s2walk_secure, is_el0, result, fi);
70
--- a/hw/arm/mps2.c
66
fi->s2addr = ipa;
71
+++ b/hw/arm/mps2.c
67
72
@@ -XXX,XX +XXX,XX @@
68
/* Combine the S1 and S2 perms. */
73
#include "hw/timer/cmsdk-apb-timer.h"
69
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
74
#include "hw/timer/cmsdk-apb-dualtimer.h"
70
}
75
#include "hw/misc/mps2-scc.h"
71
76
-#include "hw/devices.h"
72
if (regime_using_lpae_format(env, mmu_idx)) {
77
+#include "hw/net/lan9118.h"
73
- return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
78
#include "net/net.h"
74
- result, fi);
79
75
+ return get_phys_addr_lpae(env, address, access_type, mmu_idx,
80
typedef enum MPS2FPGAType {
76
+ is_secure, false, result, fi);
81
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
77
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
82
index XXXXXXX..XXXXXXX 100644
78
return get_phys_addr_v6(env, address, access_type, mmu_idx,
83
--- a/hw/arm/realview.c
79
is_secure, result, fi);
84
+++ b/hw/arm/realview.c
85
@@ -XXX,XX +XXX,XX @@
86
#include "hw/arm/arm.h"
87
#include "hw/arm/primecell.h"
88
#include "hw/devices.h"
89
+#include "hw/net/lan9118.h"
90
#include "hw/pci/pci.h"
91
#include "net/net.h"
92
#include "sysemu/sysemu.h"
93
diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/hw/arm/vexpress.c
96
+++ b/hw/arm/vexpress.c
97
@@ -XXX,XX +XXX,XX @@
98
#include "hw/sysbus.h"
99
#include "hw/arm/arm.h"
100
#include "hw/arm/primecell.h"
101
-#include "hw/devices.h"
102
+#include "hw/net/lan9118.h"
103
#include "hw/i2c/i2c.h"
104
#include "net/net.h"
105
#include "sysemu/sysemu.h"
106
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/hw/net/lan9118.c
109
+++ b/hw/net/lan9118.c
110
@@ -XXX,XX +XXX,XX @@
111
#include "hw/sysbus.h"
112
#include "net/net.h"
113
#include "net/eth.h"
114
-#include "hw/devices.h"
115
+#include "hw/net/lan9118.h"
116
#include "sysemu/sysemu.h"
117
#include "hw/ptimer.h"
118
#include "qemu/log.h"
119
--
80
--
120
2.20.1
81
2.25.1
121
82
122
83
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Suggested-by: Markus Armbruster <armbru@redhat.com>
3
Pass the correct stage2 mmu_idx to regime_translation_disabled,
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
which we computed afterward.
5
Message-id: 20190412165416.7977-3-philmd@redhat.com
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
10
---
9
hw/arm/nseries.c | 3 ++-
11
target/arm/ptw.c | 6 +++---
10
1 file changed, 2 insertions(+), 1 deletion(-)
12
1 file changed, 3 insertions(+), 3 deletions(-)
11
13
12
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
13
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
14
--- a/hw/arm/nseries.c
16
--- a/target/arm/ptw.c
15
+++ b/hw/arm/nseries.c
17
+++ b/target/arm/ptw.c
16
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
17
#include "hw/boards.h"
19
hwaddr addr, bool *is_secure,
18
#include "hw/i2c/i2c.h"
20
ARMMMUFaultInfo *fi)
19
#include "hw/devices.h"
21
{
20
+#include "hw/misc/tmp105.h"
22
+ ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
21
#include "hw/block/flash.h"
23
+
22
#include "hw/hw.h"
24
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
23
#include "hw/bt.h"
25
- !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
24
@@ -XXX,XX +XXX,XX @@ static void n8x0_i2c_setup(struct n800_s *s)
26
- ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
25
qemu_register_powerdown_notifier(&n8x0_system_powerdown_notifier);
27
- : ARMMMUIdx_Stage2;
26
28
+ !regime_translation_disabled(env, s2_mmu_idx)) {
27
/* Attach a TMP105 PM chip (A0 wired to ground) */
29
GetPhysAddrResult s2 = {};
28
- dev = i2c_create_slave(i2c, "tmp105", N8X0_TMP105_ADDR);
30
int ret;
29
+ dev = i2c_create_slave(i2c, TYPE_TMP105, N8X0_TMP105_ADDR);
30
qdev_connect_gpio_out(dev, 0, tmp_irq);
31
}
32
31
33
--
32
--
34
2.20.1
33
2.25.1
35
36
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reviewed-by: Thomas Huth <thuth@redhat.com>
3
Remove the use of regime_is_secure from regime_translation_disabled,
4
Reviewed-by: Cédric Le Goater <clg@kaod.org>
4
using the new parameter instead.
5
Reviewed-by: Markus Armbruster <armbru@redhat.com>
5
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
This fixes a bug in S1_ptw_translate and get_phys_addr where we had
7
Message-id: 20190412165416.7977-2-philmd@redhat.com
7
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
8
Stage2 is disabled, affecting FEAT_SEL2.
9
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
15
---
10
hw/arm/aspeed.c | 13 +++++++++----
16
target/arm/ptw.c | 20 +++++++++++---------
11
1 file changed, 9 insertions(+), 4 deletions(-)
17
1 file changed, 11 insertions(+), 9 deletions(-)
12
18
13
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
19
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
14
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/arm/aspeed.c
21
--- a/target/arm/ptw.c
16
+++ b/hw/arm/aspeed.c
22
+++ b/target/arm/ptw.c
17
@@ -XXX,XX +XXX,XX @@
23
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
18
#include "hw/arm/aspeed_soc.h"
19
#include "hw/boards.h"
20
#include "hw/i2c/smbus_eeprom.h"
21
+#include "hw/misc/pca9552.h"
22
+#include "hw/misc/tmp105.h"
23
#include "qemu/log.h"
24
#include "sysemu/block-backend.h"
25
#include "hw/loader.h"
26
@@ -XXX,XX +XXX,XX @@ static void ast2500_evb_i2c_init(AspeedBoardState *bmc)
27
eeprom_buf);
28
29
/* The AST2500 EVB expects a LM75 but a TMP105 is compatible */
30
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7), "tmp105", 0x4d);
31
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7),
32
+ TYPE_TMP105, 0x4d);
33
34
/* The AST2500 EVB does not have an RTC. Let's pretend that one is
35
* plugged on the I2C bus header */
36
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
37
AspeedSoCState *soc = &bmc->soc;
38
uint8_t *eeprom_buf = g_malloc0(8 * 1024);
39
40
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), "pca9552", 0x60);
41
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), TYPE_PCA9552,
42
+ 0x60);
43
44
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 4), "tmp423", 0x4c);
45
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 5), "tmp423", 0x4c);
46
47
/* The Witherspoon expects a TMP275 but a TMP105 is compatible */
48
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), "tmp105", 0x4a);
49
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), TYPE_TMP105,
50
+ 0x4a);
51
52
/* The witherspoon board expects Epson RX8900 I2C RTC but a ds1338 is
53
* good enough */
54
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
55
56
smbus_eeprom_init_one(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), 0x51,
57
eeprom_buf);
58
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), "pca9552",
59
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), TYPE_PCA9552,
60
0x60);
61
}
24
}
62
25
26
/* Return true if the specified stage of address translation is disabled */
27
-static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
28
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
29
+ bool is_secure)
30
{
31
uint64_t hcr_el2;
32
33
if (arm_feature(env, ARM_FEATURE_M)) {
34
- switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
35
+ switch (env->v7m.mpu_ctrl[is_secure] &
36
(R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
37
case R_V7M_MPU_CTRL_ENABLE_MASK:
38
/* Enabled, but not for HardFault and NMI */
39
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
40
41
if (hcr_el2 & HCR_TGE) {
42
/* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
43
- if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
44
+ if (!is_secure && regime_el(env, mmu_idx) == 1) {
45
return true;
46
}
47
}
48
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
49
ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
50
51
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
52
- !regime_translation_disabled(env, s2_mmu_idx)) {
53
+ !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
54
GetPhysAddrResult s2 = {};
55
int ret;
56
57
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
58
uint32_t base;
59
bool is_user = regime_is_user(env, mmu_idx);
60
61
- if (regime_translation_disabled(env, mmu_idx)) {
62
+ if (regime_translation_disabled(env, mmu_idx, is_secure)) {
63
/* MPU disabled. */
64
result->phys = address;
65
result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
66
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
67
result->page_size = TARGET_PAGE_SIZE;
68
result->prot = 0;
69
70
- if (regime_translation_disabled(env, mmu_idx) ||
71
+ if (regime_translation_disabled(env, mmu_idx, secure) ||
72
m_is_ppb_region(env, address)) {
73
/*
74
* MPU disabled or M profile PPB access: use default memory map.
75
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
76
* are done in arm_v7m_load_vector(), which always does a direct
77
* read using address_space_ldl(), rather than going via this function.
78
*/
79
- if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
80
+ if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
81
hit = true;
82
} else if (m_is_ppb_region(env, address)) {
83
hit = true;
84
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
85
result, fi);
86
87
/* If S1 fails or S2 is disabled, return early. */
88
- if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
89
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
90
+ is_secure)) {
91
return ret;
92
}
93
94
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
95
96
/* Definitely a real MMU, not an MPU */
97
98
- if (regime_translation_disabled(env, mmu_idx)) {
99
+ if (regime_translation_disabled(env, mmu_idx, is_secure)) {
100
uint64_t hcr;
101
uint8_t memattr;
102
63
--
103
--
64
2.20.1
104
2.25.1
65
105
66
106
1
Implement the VLLDM instruction for v7M for the FPU present case.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Retain the existing get_phys_addr interface using the security
4
state derived from mmu_idx. Move the kerneldoc comments to the
5
header file where they belong.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190416125744.27770-26-peter.maydell@linaro.org
6
---
11
---
7
target/arm/helper.h | 1 +
12
target/arm/internals.h | 40 ++++++++++++++++++++++++++++++++++++++
8
target/arm/helper.c | 54 ++++++++++++++++++++++++++++++++++++++++++
13
target/arm/ptw.c | 44 ++++++++++++++----------------------------
9
target/arm/translate.c | 2 +-
14
2 files changed, 55 insertions(+), 29 deletions(-)
10
3 files changed, 56 insertions(+), 1 deletion(-)
11
15
12
diff --git a/target/arm/helper.h b/target/arm/helper.h
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
13
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.h
18
--- a/target/arm/internals.h
15
+++ b/target/arm/helper.h
19
+++ b/target/arm/internals.h
16
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
20
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
17
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
21
ARMCacheAttrs cacheattrs;
18
22
} GetPhysAddrResult;
19
DEF_HELPER_2(v7m_vlstm, void, env, i32)
23
20
+DEF_HELPER_2(v7m_vlldm, void, env, i32)
24
+/**
21
25
+ * get_phys_addr_with_secure: get the physical address for a virtual address
22
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
26
+ * @env: CPUARMState
23
27
+ * @address: virtual address to get physical address for
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
+ * @access_type: 0 for read, 1 for write, 2 for execute
29
+ * @mmu_idx: MMU index indicating required translation regime
30
+ * @is_secure: security state for the access
31
+ * @result: set on translation success.
32
+ * @fi: set to fault info if the translation fails
33
+ *
34
+ * Find the physical address corresponding to the given virtual address,
35
+ * by doing a translation table walk on MMU based systems or using the
36
+ * MPU state on MPU based systems.
37
+ *
38
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
39
+ * prot and page_size may not be filled in, and the populated fsr value provides
40
+ * information on why the translation aborted, in the format of a
41
+ * DFSR/IFSR fault register, with the following caveats:
42
+ * * we honour the short vs long DFSR format differences.
43
+ * * the WnR bit is never set (the caller must do this).
44
+ * * for PSMAv5 based systems we don't bother to return a full FSR format
45
+ * value.
46
+ */
47
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
48
+ MMUAccessType access_type,
49
+ ARMMMUIdx mmu_idx, bool is_secure,
50
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
51
+ __attribute__((nonnull));
52
+
53
+/**
54
+ * get_phys_addr: get the physical address for a virtual address
55
+ * @env: CPUARMState
56
+ * @address: virtual address to get physical address for
57
+ * @access_type: 0 for read, 1 for write, 2 for execute
58
+ * @mmu_idx: MMU index indicating required translation regime
59
+ * @result: set on translation success.
60
+ * @fi: set to fault info if the translation fails
61
+ *
62
+ * Similarly, but use the security regime of @mmu_idx.
63
+ */
64
bool get_phys_addr(CPUARMState *env, target_ulong address,
65
MMUAccessType access_type, ARMMMUIdx mmu_idx,
66
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
67
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
25
index XXXXXXX..XXXXXXX 100644
68
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/helper.c
69
--- a/target/arm/ptw.c
27
+++ b/target/arm/helper.c
70
+++ b/target/arm/ptw.c
28
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
71
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
29
g_assert_not_reached();
72
return ret;
30
}
73
}
31
74
32
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
75
-/**
76
- * get_phys_addr - get the physical address for this virtual address
77
- *
78
- * Find the physical address corresponding to the given virtual address,
79
- * by doing a translation table walk on MMU based systems or using the
80
- * MPU state on MPU based systems.
81
- *
82
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
83
- * prot and page_size may not be filled in, and the populated fsr value provides
84
- * information on why the translation aborted, in the format of a
85
- * DFSR/IFSR fault register, with the following caveats:
86
- * * we honour the short vs long DFSR format differences.
87
- * * the WnR bit is never set (the caller must do this).
88
- * * for PSMAv5 based systems we don't bother to return a full FSR format
89
- * value.
90
- *
91
- * @env: CPUARMState
92
- * @address: virtual address to get physical address for
93
- * @access_type: 0 for read, 1 for write, 2 for execute
94
- * @mmu_idx: MMU index indicating required translation regime
95
- * @result: set on translation success.
96
- * @fi: set to fault info if the translation fails
97
- */
98
-bool get_phys_addr(CPUARMState *env, target_ulong address,
99
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
100
- GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
101
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
102
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
103
+ bool is_secure, GetPhysAddrResult *result,
104
+ ARMMMUFaultInfo *fi)
105
{
106
ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
107
- bool is_secure = regime_is_secure(env, mmu_idx);
108
109
if (mmu_idx != s1_mmu_idx) {
110
/*
111
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
112
ARMMMUIdx s2_mmu_idx;
113
bool is_el0;
114
115
- ret = get_phys_addr(env, address, access_type, s1_mmu_idx,
116
- result, fi);
117
+ ret = get_phys_addr_with_secure(env, address, access_type,
118
+ s1_mmu_idx, is_secure, result, fi);
119
120
/* If S1 fails or S2 is disabled, return early. */
121
if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
122
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
123
}
124
}
125
126
+bool get_phys_addr(CPUARMState *env, target_ulong address,
127
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
128
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
33
+{
129
+{
34
+ /* translate.c should never generate calls here in user-only mode */
130
+ return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
35
+ g_assert_not_reached();
131
+ regime_is_secure(env, mmu_idx),
132
+ result, fi);
36
+}
133
+}
37
+
134
+
38
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
135
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
136
MemTxAttrs *attrs)
39
{
137
{
40
/* The TT instructions can be used by unprivileged code, but in
41
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
42
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
43
}
44
45
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
46
+{
47
+ /* fptr is the value of Rn, the frame pointer we load the FP regs from */
48
+ assert(env->v7m.secure);
49
+
50
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
51
+ return;
52
+ }
53
+
54
+ /* Check access to the coprocessor is permitted */
55
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
56
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
57
+ }
58
+
59
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
60
+ /* State in FP is still valid */
61
+ env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
62
+ } else {
63
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
64
+ int i;
65
+ uint32_t fpscr;
66
+
67
+ if (fptr & 7) {
68
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
69
+ }
70
+
71
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
72
+ uint32_t slo, shi;
73
+ uint64_t dn;
74
+ uint32_t faddr = fptr + 4 * i;
75
+
76
+ if (i >= 16) {
77
+ faddr += 8; /* skip the slot for the FPSCR */
78
+ }
79
+
80
+ slo = cpu_ldl_data(env, faddr);
81
+ shi = cpu_ldl_data(env, faddr + 4);
82
+
83
+ dn = (uint64_t) shi << 32 | slo;
84
+ *aa32_vfp_dreg(env, i / 2) = dn;
85
+ }
86
+ fpscr = cpu_ldl_data(env, fptr + 0x40);
87
+ vfp_set_fpscr(env, fpscr);
88
+ }
89
+
90
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
91
+}
92
+
93
static bool v7m_push_stack(ARMCPU *cpu)
94
{
95
/* Do the "set up stack frame" part of exception entry,
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
101
TCGv_i32 fptr = load_reg(s, rn);
102
103
if (extract32(insn, 20, 1)) {
104
- /* VLLDM */
105
+ gen_helper_v7m_vlldm(cpu_env, fptr);
106
} else {
107
gen_helper_v7m_vlstm(cpu_env, fptr);
108
}
109
--
138
--
110
2.20.1
139
2.25.1
111
112
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.

As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
context preservation is enabled. Before executing any floating-point
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
indicate that there is no active floating point context then we
must create a new context (by initializing FPSCR and setting
FPCA/SFPA to indicate that the context is now active). In the
pseudocode this is handled by ExecuteFPCheck().

Implement this with a new TB flag which tracks whether we
need to create a new FP context.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-20-peter.maydell@linaro.org
---
 target/arm/cpu.h | 2 ++
 target/arm/translate.h | 1 +
 target/arm/helper.c | 13 +++++++++++++
 target/arm/translate.c | 29 +++++++++++++++++++++++++++++
 4 files changed, 45 insertions(+)
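As an illustrative aside (not part of the patch), the condition the new TB flag caches reduces to a small bit test. A minimal self-contained model of it, with the struct and mask names invented here rather than taken from QEMU, looks like this:

/* Model of the "need a new FP context?" check described above. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FPCCR_ASPEN  (1u << 31)
#define CONTROL_FPCA (1u << 2)
#define CONTROL_SFPA (1u << 3)

struct m_state {
    bool secure;
    uint32_t fpccr[2];    /* [0] = NS bank, [1] = S bank */
    uint32_t control_s;   /* CONTROL, S bank (FPCA/SFPA live here) */
};

static bool new_fp_ctxt_needed(const struct m_state *s)
{
    if (!(s->fpccr[s->secure] & FPCCR_ASPEN)) {
        return false;                      /* lazy context creation off */
    }
    if (!(s->control_s & CONTROL_FPCA)) {
        return true;                       /* no active FP context */
    }
    return s->secure && !(s->control_s & CONTROL_SFPA);
}

int main(void)
{
    struct m_state s = { .secure = true, .fpccr = { 0, FPCCR_ASPEN } };
    printf("%d\n", new_fp_ctxt_needed(&s));   /* prints 1 */
    return 0;
}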
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
23
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/cpu.h
22
--- a/target/arm/m_helper.c
25
+++ b/target/arm/cpu.h
23
+++ b/target/arm/m_helper.c
26
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
24
@@ -XXX,XX +XXX,XX @@ static bool do_v7m_function_return(ARMCPU *cpu)
27
FIELD(TBFLAG_A32, VFPEN, 7, 1)
25
return true;
28
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
26
}
29
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
27
30
+/* For M profile only, set if we must create a new FP context */
28
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
31
+FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
29
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
32
/* For M profile only, set if FPCCR.S does not match current security state */
30
uint32_t addr, uint16_t *insn)
33
FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)
31
{
34
/* For M profile only, Handler (ie not Thread) mode */
32
/*
35
diff --git a/target/arm/translate.h b/target/arm/translate.h
33
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
36
index XXXXXXX..XXXXXXX 100644
34
ARMMMUFaultInfo fi = {};
37
--- a/target/arm/translate.h
35
MemTxResult txres;
38
+++ b/target/arm/translate.h
36
39
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
37
- v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx,
40
bool v8m_secure; /* true if v8M and we're in Secure mode */
38
- regime_is_secure(env, mmu_idx), &sattrs);
41
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
39
+ v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, secure, &sattrs);
42
bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
40
if (!sattrs.nsc || sattrs.ns) {
43
+ bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
41
/*
44
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
42
* This must be the second half of the insn, and it straddles a
45
* so that top level loop can generate correct syndrome information.
43
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
46
*/
44
/* We want to do the MPU lookup as secure; work out what mmu_idx that is */
47
diff --git a/target/arm/helper.c b/target/arm/helper.c
45
mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
48
index XXXXXXX..XXXXXXX 100644
46
49
--- a/target/arm/helper.c
47
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
50
+++ b/target/arm/helper.c
48
+ if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15], &insn)) {
51
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
49
return false;
52
flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
53
}
50
}
54
51
55
+ if (arm_feature(env, ARM_FEATURE_M) &&
52
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
56
+ (env->v7m.fpccr[env->v7m.secure] & R_V7M_FPCCR_ASPEN_MASK) &&
53
goto gen_invep;
57
+ (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) ||
58
+ (env->v7m.secure &&
59
+ !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)))) {
60
+ /*
61
+ * ASPEN is set, but FPCA/SFPA indicate that there is no active
62
+ * FP context; we must create a new FP context before executing
63
+ * any FP insn.
64
+ */
65
+ flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
66
+ }
67
+
68
*pflags = flags;
69
*cs_base = 0;
70
}
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate.c
74
+++ b/target/arm/translate.c
75
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
76
/* Don't need to do this for any further FP insns in this TB */
77
s->v8m_fpccr_s_wrong = false;
78
}
79
+
80
+ if (s->v7m_new_fp_ctxt_needed) {
81
+ /*
82
+ * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA
83
+ * and the FPSCR.
84
+ */
85
+ TCGv_i32 control, fpscr;
86
+ uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
87
+
88
+ fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
89
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
90
+ tcg_temp_free_i32(fpscr);
91
+ /*
92
+ * We don't need to arrange to end the TB, because the only
93
+ * parts of FPSCR which we cache in the TB flags are the VECLEN
94
+ * and VECSTRIDE, and those don't exist for M-profile.
95
+ */
96
+
97
+ if (s->v8m_secure) {
98
+ bits |= R_V7M_CONTROL_SFPA_MASK;
99
+ }
100
+ control = load_cpu_field(v7m.control[M_REG_S]);
101
+ tcg_gen_ori_i32(control, control, bits);
102
+ store_cpu_field(control, v7m.control[M_REG_S]);
103
+ /* Don't need to do this for any further FP insns in this TB */
104
+ s->v7m_new_fp_ctxt_needed = false;
105
+ }
106
}
54
}
107
55
108
if (extract32(insn, 28, 4) == 0xf) {
56
- if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
109
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
57
+ if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15] + 2, &insn)) {
110
regime_is_secure(env, dc->mmu_idx);
58
return false;
111
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
59
}
112
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
113
+ dc->v7m_new_fp_ctxt_needed =
114
+ FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
115
dc->cp_regs = cpu->cp_regs;
116
dc->features = env->features;
117
60
118
--
61
--
119
2.20.1
62
2.25.1
120
63
121
64
diff view generated by jsdifflib
1
We are close to running out of TB flags for AArch32; we could
1
From: Richard Henderson <richard.henderson@linaro.org>
2
start using the cs_base word, but before we do that we can
3
economise on our usage by sharing the same bits for the VFP
4
VECSTRIDE field and the XScale XSCALE_CPAR field. This
5
works because no XScale CPU ever had VFP.
6
2
3
Remove the use of regime_is_secure from arm_tr_init_disas_context.
4
Instead, provide the value of v8m_secure directly from tb_flags.
5
Rather than use regime_is_secure, use the env->v7m.secure directly,
6
as per arm_mmu_idx_el.
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20221001162318.153420-8-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
10
---
12
---
11
target/arm/cpu.h | 10 ++++++----
13
target/arm/cpu.h | 2 ++
12
target/arm/cpu.c | 7 +++++++
14
target/arm/helper.c | 4 ++++
13
target/arm/helper.c | 6 +++++-
15
target/arm/translate.c | 3 +--
14
target/arm/translate.c | 9 +++++++--
16
3 files changed, 7 insertions(+), 2 deletions(-)
15
4 files changed, 25 insertions(+), 7 deletions(-)
16
17
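A toy model of the bit-sharing idea may help; the field layout and helper names below are invented for the sketch and are not QEMU's TBFLAG definitions. The key point is the assert that the two users of the field can never coexist:

/* Toy model of sharing one 2-bit TB-flag field between two features
 * that can never be present on the same CPU. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SHARED_SHIFT 4
#define SHARED_MASK  (3u << SHARED_SHIFT)

struct cpu_features { bool has_vfp; bool is_xscale; };

static uint32_t encode_flags(const struct cpu_features *f,
                             uint32_t vecstride, uint32_t cpar)
{
    /* The sharing is only sound because no XScale CPU has VFP. */
    assert(!(f->has_vfp && f->is_xscale));

    uint32_t field = f->is_xscale ? cpar : vecstride;
    return (field << SHARED_SHIFT) & SHARED_MASK;
}

static void decode_flags(const struct cpu_features *f, uint32_t flags,
                         uint32_t *vecstride, uint32_t *cpar)
{
    uint32_t field = (flags & SHARED_MASK) >> SHARED_SHIFT;
    if (f->is_xscale) {
        *cpar = field;
        *vecstride = 0;
    } else {
        *vecstride = field;
        *cpar = 0;
    }
}

int main(void)
{
    struct cpu_features f = { .has_vfp = true, .is_xscale = false };
    uint32_t vs, cp;
    decode_flags(&f, encode_flags(&f, 2, 0), &vs, &cp);
    printf("vecstride=%u cpar=%u\n", vs, cp);   /* vecstride=2 cpar=0 */
    return 0;
}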
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
20
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
21
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
22
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */
22
FIELD(TBFLAG_A32, THUMB, 0, 1)
23
FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */
23
FIELD(TBFLAG_A32, VECLEN, 1, 3)
24
/* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */
24
FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
25
FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */
25
+/*
26
+/* Set if in secure mode */
26
+ * We store the bottom two bits of the CPAR as TB flags and handle
27
+FIELD(TBFLAG_M32, SECURE, 6, 1)
27
+ * checks on the other bits at runtime. This shares the same bits as
28
28
+ * VECSTRIDE, which is OK as no XScale CPU has VFP.
29
+ */
30
+FIELD(TBFLAG_A32, XSCALE_CPAR, 4, 2)
31
/*
29
/*
32
* Indicates whether cp register reads and writes by guest code should access
30
* Bit usage when in AArch64 state
33
* the secure or nonsecure bank of banked registers; note that this is not
34
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
35
FIELD(TBFLAG_A32, VFPEN, 7, 1)
36
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
37
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
38
-/* We store the bottom two bits of the CPAR as TB flags and handle
39
- * checks on the other bits at runtime
40
- */
41
-FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
42
/* For M profile only, Handler (ie not Thread) mode */
43
FIELD(TBFLAG_A32, HANDLER, 21, 1)
44
/* For M profile only, whether we should generate stack-limit checks */
45
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/cpu.c
48
+++ b/target/arm/cpu.c
49
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
50
set_feature(env, ARM_FEATURE_THUMB_DSP);
51
}
52
53
+ /*
54
+ * We rely on no XScale CPU having VFP so we can use the same bits in the
55
+ * TB flags field for VECSTRIDE and XSCALE_CPAR.
56
+ */
57
+ assert(!(arm_feature(env, ARM_FEATURE_VFP) &&
58
+ arm_feature(env, ARM_FEATURE_XSCALE)));
59
+
60
if (arm_feature(env, ARM_FEATURE_V7) &&
61
!arm_feature(env, ARM_FEATURE_M) &&
62
!arm_feature(env, ARM_FEATURE_PMSA)) {
63
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
diff --git a/target/arm/helper.c b/target/arm/helper.c
64
index XXXXXXX..XXXXXXX 100644
32
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/helper.c
33
--- a/target/arm/helper.c
66
+++ b/target/arm/helper.c
34
+++ b/target/arm/helper.c
67
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
35
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el,
68
|| arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
36
DP_TBFLAG_M32(flags, STACKCHECK, 1);
69
flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
70
}
71
- flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
72
+ /* Note that XSCALE_CPAR shares bits with VECSTRIDE */
73
+ if (arm_feature(env, ARM_FEATURE_XSCALE)) {
74
+ flags = FIELD_DP32(flags, TBFLAG_A32,
75
+ XSCALE_CPAR, env->cp15.c15_cpar);
76
+ }
77
}
37
}
78
38
79
flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
39
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) {
40
+ DP_TBFLAG_M32(flags, SECURE, 1);
41
+ }
42
+
43
return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
44
}
45
80
diff --git a/target/arm/translate.c b/target/arm/translate.c
46
diff --git a/target/arm/translate.c b/target/arm/translate.c
81
index XXXXXXX..XXXXXXX 100644
47
index XXXXXXX..XXXXXXX 100644
82
--- a/target/arm/translate.c
48
--- a/target/arm/translate.c
83
+++ b/target/arm/translate.c
49
+++ b/target/arm/translate.c
84
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
50
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
85
dc->fp_excp_el = FIELD_EX32(tb_flags, TBFLAG_ANY, FPEXC_EL);
51
dc->vfp_enabled = 1;
86
dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
52
dc->be_data = MO_TE;
87
dc->vec_len = FIELD_EX32(tb_flags, TBFLAG_A32, VECLEN);
53
dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER);
88
- dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
54
- dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
89
- dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
55
- regime_is_secure(env, dc->mmu_idx);
90
+ if (arm_feature(env, ARM_FEATURE_XSCALE)) {
56
+ dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE);
91
+ dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
57
dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK);
92
+ dc->vec_stride = 0;
58
dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG);
93
+ } else {
59
dc->v7m_new_fp_ctxt_needed =
94
+ dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
95
+ dc->c15_cpar = 0;
96
+ }
97
dc->v7m_handler_mode = FIELD_EX32(tb_flags, TBFLAG_A32, HANDLER);
98
dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
99
regime_is_secure(env, dc->mmu_idx);
100
--
60
--
101
2.20.1
61
2.25.1
102
103
From: Richard Henderson <richard.henderson@linaro.org>

This is the last use of regime_is_secure; remove it
entirely before changing the layout of ARMMMUIdx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 42 ----------------------------------------
 target/arm/ptw.c | 44 ++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 44 deletions(-)

From: Eric Auger <eric.auger@redhat.com>

The SMMUNotifierNode struct is not necessary and brings extra
complexity so let's remove it. We now directly track the SMMUDevices
which have registered IOMMU MR notifiers.

This is inspired from the same transformation on intel-iommu
done in commit b4a4ba0d68f50f218ee3957b6638dbee32a5eeef
("intel-iommu: remove IntelIOMMUNotifierNode")

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-id: 20190409160219.19026-1-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/smmu-common.h | 8 ++------
 hw/arm/smmu-common.c | 6 +++---
 hw/arm/smmuv3.c | 28 +++++++---------------------
 3 files changed, 12 insertions(+), 30 deletions(-)
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
22
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/arm/smmu-common.h
17
--- a/target/arm/internals.h
24
+++ b/include/hw/arm/smmu-common.h
18
+++ b/target/arm/internals.h
25
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUDevice {
19
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
26
AddressSpace as;
27
uint32_t cfg_cache_hits;
28
uint32_t cfg_cache_misses;
29
+ QLIST_ENTRY(SMMUDevice) next;
30
} SMMUDevice;
31
32
-typedef struct SMMUNotifierNode {
33
- SMMUDevice *sdev;
34
- QLIST_ENTRY(SMMUNotifierNode) next;
35
-} SMMUNotifierNode;
36
-
37
typedef struct SMMUPciBus {
38
PCIBus *bus;
39
SMMUDevice *pbdev[0]; /* Parent array is sparse, so dynamically alloc */
40
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUState {
41
GHashTable *iotlb;
42
SMMUPciBus *smmu_pcibus_by_bus_num[SMMU_PCI_BUS_MAX];
43
PCIBus *pci_bus;
44
- QLIST_HEAD(, SMMUNotifierNode) notifiers_list;
45
+ QLIST_HEAD(, SMMUDevice) devices_with_notifiers;
46
uint8_t bus_num;
47
PCIBus *primary_bus;
48
} SMMUState;
49
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/hw/arm/smmu-common.c
52
+++ b/hw/arm/smmu-common.c
53
@@ -XXX,XX +XXX,XX @@ inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
54
/* Unmap all notifiers of all mr's */
55
void smmu_inv_notifiers_all(SMMUState *s)
56
{
57
- SMMUNotifierNode *node;
58
+ SMMUDevice *sdev;
59
60
- QLIST_FOREACH(node, &s->notifiers_list, next) {
61
- smmu_inv_notifiers_mr(&node->sdev->iommu);
62
+ QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
63
+ smmu_inv_notifiers_mr(&sdev->iommu);
64
}
20
}
65
}
21
}
66
22
67
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
23
-/* Return true if this address translation regime is secure */
24
-static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
25
-{
26
- switch (mmu_idx) {
27
- case ARMMMUIdx_E10_0:
28
- case ARMMMUIdx_E10_1:
29
- case ARMMMUIdx_E10_1_PAN:
30
- case ARMMMUIdx_E20_0:
31
- case ARMMMUIdx_E20_2:
32
- case ARMMMUIdx_E20_2_PAN:
33
- case ARMMMUIdx_Stage1_E0:
34
- case ARMMMUIdx_Stage1_E1:
35
- case ARMMMUIdx_Stage1_E1_PAN:
36
- case ARMMMUIdx_E2:
37
- case ARMMMUIdx_Stage2:
38
- case ARMMMUIdx_MPrivNegPri:
39
- case ARMMMUIdx_MUserNegPri:
40
- case ARMMMUIdx_MPriv:
41
- case ARMMMUIdx_MUser:
42
- return false;
43
- case ARMMMUIdx_SE3:
44
- case ARMMMUIdx_SE10_0:
45
- case ARMMMUIdx_SE10_1:
46
- case ARMMMUIdx_SE10_1_PAN:
47
- case ARMMMUIdx_SE20_0:
48
- case ARMMMUIdx_SE20_2:
49
- case ARMMMUIdx_SE20_2_PAN:
50
- case ARMMMUIdx_Stage1_SE0:
51
- case ARMMMUIdx_Stage1_SE1:
52
- case ARMMMUIdx_Stage1_SE1_PAN:
53
- case ARMMMUIdx_SE2:
54
- case ARMMMUIdx_Stage2_S:
55
- case ARMMMUIdx_MSPrivNegPri:
56
- case ARMMMUIdx_MSUserNegPri:
57
- case ARMMMUIdx_MSPriv:
58
- case ARMMMUIdx_MSUser:
59
- return true;
60
- default:
61
- g_assert_not_reached();
62
- }
63
-}
64
-
65
static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
66
{
67
switch (mmu_idx) {
68
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
68
index XXXXXXX..XXXXXXX 100644
69
index XXXXXXX..XXXXXXX 100644
69
--- a/hw/arm/smmuv3.c
70
--- a/target/arm/ptw.c
70
+++ b/hw/arm/smmuv3.c
71
+++ b/target/arm/ptw.c
71
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
72
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
72
/* invalidate an asid/iova tuple in all mr's */
73
MMUAccessType access_type, ARMMMUIdx mmu_idx,
73
static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
74
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
74
{
75
{
75
- SMMUNotifierNode *node;
76
+ bool is_secure;
76
+ SMMUDevice *sdev;
77
+
77
78
+ switch (mmu_idx) {
78
- QLIST_FOREACH(node, &s->notifiers_list, next) {
79
+ case ARMMMUIdx_E10_0:
79
- IOMMUMemoryRegion *mr = &node->sdev->iommu;
80
+ case ARMMMUIdx_E10_1:
80
+ QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
81
+ case ARMMMUIdx_E10_1_PAN:
81
+ IOMMUMemoryRegion *mr = &sdev->iommu;
82
+ case ARMMMUIdx_E20_0:
82
IOMMUNotifier *n;
83
+ case ARMMMUIdx_E20_2:
83
84
+ case ARMMMUIdx_E20_2_PAN:
84
trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova);
85
+ case ARMMMUIdx_Stage1_E0:
85
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
86
+ case ARMMMUIdx_Stage1_E1:
86
SMMUDevice *sdev = container_of(iommu, SMMUDevice, iommu);
87
+ case ARMMMUIdx_Stage1_E1_PAN:
87
SMMUv3State *s3 = sdev->smmu;
88
+ case ARMMMUIdx_E2:
88
SMMUState *s = &(s3->smmu_state);
89
+ case ARMMMUIdx_Stage2:
89
- SMMUNotifierNode *node = NULL;
90
+ case ARMMMUIdx_MPrivNegPri:
90
- SMMUNotifierNode *next_node = NULL;
91
+ case ARMMMUIdx_MUserNegPri:
91
92
+ case ARMMMUIdx_MPriv:
92
if (new & IOMMU_NOTIFIER_MAP) {
93
+ case ARMMMUIdx_MUser:
93
int bus_num = pci_bus_num(sdev->bus);
94
+ is_secure = false;
94
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
95
+ break;
95
96
+ case ARMMMUIdx_SE3:
96
if (old == IOMMU_NOTIFIER_NONE) {
97
+ case ARMMMUIdx_SE10_0:
97
trace_smmuv3_notify_flag_add(iommu->parent_obj.name);
98
+ case ARMMMUIdx_SE10_1:
98
- node = g_malloc0(sizeof(*node));
99
+ case ARMMMUIdx_SE10_1_PAN:
99
- node->sdev = sdev;
100
+ case ARMMMUIdx_SE20_0:
100
- QLIST_INSERT_HEAD(&s->notifiers_list, node, next);
101
+ case ARMMMUIdx_SE20_2:
101
- return;
102
+ case ARMMMUIdx_SE20_2_PAN:
102
- }
103
+ case ARMMMUIdx_Stage1_SE0:
103
-
104
+ case ARMMMUIdx_Stage1_SE1:
104
- /* update notifier node with new flags */
105
+ case ARMMMUIdx_Stage1_SE1_PAN:
105
- QLIST_FOREACH_SAFE(node, &s->notifiers_list, next, next_node) {
106
+ case ARMMMUIdx_SE2:
106
- if (node->sdev == sdev) {
107
+ case ARMMMUIdx_Stage2_S:
107
- if (new == IOMMU_NOTIFIER_NONE) {
108
+ case ARMMMUIdx_MSPrivNegPri:
108
- trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
109
+ case ARMMMUIdx_MSUserNegPri:
109
- QLIST_REMOVE(node, next);
110
+ case ARMMMUIdx_MSPriv:
110
- g_free(node);
111
+ case ARMMMUIdx_MSUser:
111
- }
112
+ is_secure = true;
112
- return;
113
+ break;
113
- }
114
+ default:
114
+ QLIST_INSERT_HEAD(&s->devices_with_notifiers, sdev, next);
115
+ g_assert_not_reached();
115
+ } else if (new == IOMMU_NOTIFIER_NONE) {
116
+ }
116
+ trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
117
return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
117
+ QLIST_REMOVE(sdev, next);
118
- regime_is_secure(env, mmu_idx),
118
}
119
- result, fi);
120
+ is_secure, result, fi);
119
}
121
}
120
122
123
hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
121
--
124
--
122
2.20.1
125
2.25.1
123
124
Deleted patch
In the stripe8() function we use a variable length array; however
we know that the maximum length required is MAX_NUM_BUSSES. Use
a fixed-length array and an assert instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190328152635.2794-1-peter.maydell@linaro.org
---
 hw/ssi/xilinx_spips.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)

 static inline void stripe8(uint8_t *x, int num, bool dir)
 {
-    uint8_t r[num];
-    memset(r, 0, sizeof(uint8_t) * num);
+    uint8_t r[MAX_NUM_BUSSES];
     int idx[2] = {0, 0};
     int bit[2] = {0, 7};
     int d = dir;

+    assert(num <= MAX_NUM_BUSSES);
+    memset(r, 0, sizeof(uint8_t) * num);
+
     for (idx[0] = 0; idx[0] < num; ++idx[0]) {
         for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
             r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
--
2.20.1
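The same VLA-to-fixed-array pattern in isolation, as a tiny self-contained example; MAX_ELEMS and the function are invented for the sketch and this is not the QEMU code:

/* Replace a variable-length array with a fixed-size scratch buffer
 * plus an assert on the documented upper bound. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_ELEMS 8

static void reverse_bytes(uint8_t *x, int num)
{
    uint8_t r[MAX_ELEMS];            /* was: uint8_t r[num]; */

    assert(num <= MAX_ELEMS);        /* enforce the bound explicitly */
    memset(r, 0, sizeof(uint8_t) * num);

    for (int i = 0; i < num; i++) {
        r[i] = x[num - 1 - i];
    }
    memcpy(x, r, num);
}

int main(void)
{
    uint8_t buf[4] = { 1, 2, 3, 4 };
    reverse_bytes(buf, 4);
    printf("%d %d %d %d\n", buf[0], buf[1], buf[2], buf[3]); /* 4 3 2 1 */
    return 0;
}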
Deleted patch
1
Normally configure identifies the source path by looking
2
at the location where the configure script itself exists.
3
We also provide a --source-path option which lets the user
4
manually override this.
5
1
6
There isn't really an obvious use case for the --source-path
7
option, and in commit 927128222b0a91f56c13a in 2017 we
8
accidentally added some logic that looks at $source_path
9
before the command line option that overrides it has been
10
processed.
11
12
The fact that nobody complained suggests that there isn't
13
any use of this option and we aren't testing it either;
14
remove it. This allows us to move the "make $source_path
15
absolute" logic up so that there is no window in the script
16
where $source_path is set but not yet absolute.
17
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
20
Message-id: 20190318134019.23729-1-peter.maydell@linaro.org
21
---
22
configure | 10 ++--------
23
1 file changed, 2 insertions(+), 8 deletions(-)
24
25
diff --git a/configure b/configure
26
index XXXXXXX..XXXXXXX 100755
27
--- a/configure
28
+++ b/configure
29
@@ -XXX,XX +XXX,XX @@ ld_has() {
30
31
# default parameters
32
source_path=$(dirname "$0")
33
+# make source path absolute
34
+source_path=$(cd "$source_path"; pwd)
35
cpu=""
36
iasl="iasl"
37
interp_prefix="/usr/gnemul/qemu-%M"
38
@@ -XXX,XX +XXX,XX @@ for opt do
39
;;
40
--cxx=*) CXX="$optarg"
41
;;
42
- --source-path=*) source_path="$optarg"
43
- ;;
44
--cpu=*) cpu="$optarg"
45
;;
46
--extra-cflags=*) QEMU_CFLAGS="$QEMU_CFLAGS $optarg"
47
@@ -XXX,XX +XXX,XX @@ if test "$debug_info" = "yes"; then
48
LDFLAGS="-g $LDFLAGS"
49
fi
50
51
-# make source path absolute
52
-source_path=$(cd "$source_path"; pwd)
53
-
54
# running configure in the source tree?
55
# we know that's the case if configure is there.
56
if test -f "./configure"; then
57
@@ -XXX,XX +XXX,XX @@ for opt do
58
;;
59
--interp-prefix=*) interp_prefix="$optarg"
60
;;
61
- --source-path=*)
62
- ;;
63
--cross-prefix=*)
64
;;
65
--cc=*)
66
@@ -XXX,XX +XXX,XX @@ $(echo Available targets: $default_target_list | \
67
--target-list-exclude=LIST exclude a set of targets from the default target-list
68
69
Advanced options (experts only):
70
- --source-path=PATH path of source code [$source_path]
71
--cross-prefix=PREFIX use PREFIX for compile tools [$cross_prefix]
72
--cc=CC use C compiler CC [$cc]
73
--iasl=IASL use ACPI compiler IASL [$iasl]
74
--
75
2.20.1
76
77
Deleted patch
Enforce that for M-profile various FPSCR bits which are RES0 there
but have defined meanings on A-profile are never settable. This
ensures that M-profile code can't enable the A-profile behaviour
(notably vector length/stride handling) by accident.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-2-peter.maydell@linaro.org
---
 target/arm/vfp_helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
         val &= ~FPCR_FZ16;
     }

+    if (arm_feature(env, ARM_FEATURE_M)) {
+        /*
+         * M profile FPSCR is RES0 for the QC, STRIDE, FZ16, LEN bits
+         * and also for the trapped-exception-handling bits IxE.
+         */
+        val &= 0xf7c0009f;
+    }
+
     /*
      * We don't implement trapped exception handling, so the
      * trap enable bits, IDE|IXE|UFE|OFE|DZE|IOE are all RAZ/WI (not RES0!)
--
2.20.1
Deleted patch
For M-profile the MVFR* ID registers are memory mapped, in the
range we implement via the NVIC. Allow them to be read.
(If the CPU has no FPU, these registers are defined to be RAZ.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-3-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             return 0;
         }
         return cpu->env.v7m.sfar;
+    case 0xf40: /* MVFR0 */
+        return cpu->isar.mvfr0;
+    case 0xf44: /* MVFR1 */
+        return cpu->isar.mvfr1;
+    case 0xf48: /* MVFR2 */
+        return cpu->isar.mvfr2;
     default:
     bad_offset:
         qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset);
--
2.20.1
Deleted patch
1
The M-profile floating point support has three associated config
2
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
3
CPACR and NSACR have behaviour other than reads-as-zero.
4
Add support for all of these as simple reads-as-written registers.
5
We will hook up actual functionality later.
6
1
7
The main complexity here is handling the FPCCR register, which
8
has a mix of banked and unbanked bits.
9
10
Note that we don't share storage with the A-profile
11
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
12
is quite similar, for two reasons:
13
* the M profile CPACR is banked between security states
14
* it preserves the invariant that M profile uses no state
15
inside the cp15 substruct
16
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20190416125744.27770-4-peter.maydell@linaro.org
20
---
21
target/arm/cpu.h | 34 ++++++++++++
22
hw/intc/armv7m_nvic.c | 125 ++++++++++++++++++++++++++++++++++++++++++
23
target/arm/cpu.c | 5 ++
24
target/arm/machine.c | 16 ++++++
25
4 files changed, 180 insertions(+)
26
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
28
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.h
30
+++ b/target/arm/cpu.h
31
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
32
uint32_t scr[M_REG_NUM_BANKS];
33
uint32_t msplim[M_REG_NUM_BANKS];
34
uint32_t psplim[M_REG_NUM_BANKS];
35
+ uint32_t fpcar[M_REG_NUM_BANKS];
36
+ uint32_t fpccr[M_REG_NUM_BANKS];
37
+ uint32_t fpdscr[M_REG_NUM_BANKS];
38
+ uint32_t cpacr[M_REG_NUM_BANKS];
39
+ uint32_t nsacr;
40
} v7m;
41
42
/* Information associated with an exception about to be taken:
43
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CSSELR, LEVEL, 1, 3)
44
*/
45
FIELD(V7M_CSSELR, INDEX, 0, 4)
46
47
+/* v7M FPCCR bits */
48
+FIELD(V7M_FPCCR, LSPACT, 0, 1)
49
+FIELD(V7M_FPCCR, USER, 1, 1)
50
+FIELD(V7M_FPCCR, S, 2, 1)
51
+FIELD(V7M_FPCCR, THREAD, 3, 1)
52
+FIELD(V7M_FPCCR, HFRDY, 4, 1)
53
+FIELD(V7M_FPCCR, MMRDY, 5, 1)
54
+FIELD(V7M_FPCCR, BFRDY, 6, 1)
55
+FIELD(V7M_FPCCR, SFRDY, 7, 1)
56
+FIELD(V7M_FPCCR, MONRDY, 8, 1)
57
+FIELD(V7M_FPCCR, SPLIMVIOL, 9, 1)
58
+FIELD(V7M_FPCCR, UFRDY, 10, 1)
59
+FIELD(V7M_FPCCR, RES0, 11, 15)
60
+FIELD(V7M_FPCCR, TS, 26, 1)
61
+FIELD(V7M_FPCCR, CLRONRETS, 27, 1)
62
+FIELD(V7M_FPCCR, CLRONRET, 28, 1)
63
+FIELD(V7M_FPCCR, LSPENS, 29, 1)
64
+FIELD(V7M_FPCCR, LSPEN, 30, 1)
65
+FIELD(V7M_FPCCR, ASPEN, 31, 1)
66
+/* These bits are banked. Others are non-banked and live in the M_REG_S bank */
67
+#define R_V7M_FPCCR_BANKED_MASK \
68
+ (R_V7M_FPCCR_LSPACT_MASK | \
69
+ R_V7M_FPCCR_USER_MASK | \
70
+ R_V7M_FPCCR_THREAD_MASK | \
71
+ R_V7M_FPCCR_MMRDY_MASK | \
72
+ R_V7M_FPCCR_SPLIMVIOL_MASK | \
73
+ R_V7M_FPCCR_UFRDY_MASK | \
74
+ R_V7M_FPCCR_ASPEN_MASK)
75
+
76
/*
77
* System register ID fields.
78
*/
79
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
80
index XXXXXXX..XXXXXXX 100644
81
--- a/hw/intc/armv7m_nvic.c
82
+++ b/hw/intc/armv7m_nvic.c
83
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
84
}
85
case 0xd84: /* CSSELR */
86
return cpu->env.v7m.csselr[attrs.secure];
87
+ case 0xd88: /* CPACR */
88
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
89
+ return 0;
90
+ }
91
+ return cpu->env.v7m.cpacr[attrs.secure];
92
+ case 0xd8c: /* NSACR */
93
+ if (!attrs.secure || !arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
94
+ return 0;
95
+ }
96
+ return cpu->env.v7m.nsacr;
97
/* TODO: Implement debug registers. */
98
case 0xd90: /* MPU_TYPE */
99
/* Unified MPU; if the MPU is not present this value is zero */
100
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
101
return 0;
102
}
103
return cpu->env.v7m.sfar;
104
+ case 0xf34: /* FPCCR */
105
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
106
+ return 0;
107
+ }
108
+ if (attrs.secure) {
109
+ return cpu->env.v7m.fpccr[M_REG_S];
110
+ } else {
111
+ /*
112
+ * NS can read LSPEN, CLRONRET and MONRDY. It can read
113
+ * BFRDY and HFRDY if AIRCR.BFHFNMINS != 0;
114
+ * other non-banked bits RAZ.
115
+ * TODO: MONRDY should RAZ/WI if DEMCR.SDME is set.
116
+ */
117
+ uint32_t value = cpu->env.v7m.fpccr[M_REG_S];
118
+ uint32_t mask = R_V7M_FPCCR_LSPEN_MASK |
119
+ R_V7M_FPCCR_CLRONRET_MASK |
120
+ R_V7M_FPCCR_MONRDY_MASK;
121
+
122
+ if (s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
123
+ mask |= R_V7M_FPCCR_BFRDY_MASK | R_V7M_FPCCR_HFRDY_MASK;
124
+ }
125
+
126
+ value &= mask;
127
+
128
+ value |= cpu->env.v7m.fpccr[M_REG_NS];
129
+ return value;
130
+ }
131
+ case 0xf38: /* FPCAR */
132
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
133
+ return 0;
134
+ }
135
+ return cpu->env.v7m.fpcar[attrs.secure];
136
+ case 0xf3c: /* FPDSCR */
137
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
138
+ return 0;
139
+ }
140
+ return cpu->env.v7m.fpdscr[attrs.secure];
141
case 0xf40: /* MVFR0 */
142
return cpu->isar.mvfr0;
143
case 0xf44: /* MVFR1 */
144
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
145
cpu->env.v7m.csselr[attrs.secure] = value & R_V7M_CSSELR_INDEX_MASK;
146
}
147
break;
148
+ case 0xd88: /* CPACR */
149
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
150
+ /* We implement only the Floating Point extension's CP10/CP11 */
151
+ cpu->env.v7m.cpacr[attrs.secure] = value & (0xf << 20);
152
+ }
153
+ break;
154
+ case 0xd8c: /* NSACR */
155
+ if (attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
156
+ /* We implement only the Floating Point extension's CP10/CP11 */
157
+ cpu->env.v7m.nsacr = value & (3 << 10);
158
+ }
159
+ break;
160
case 0xd90: /* MPU_TYPE */
161
return; /* RO */
162
case 0xd94: /* MPU_CTRL */
163
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
164
}
165
break;
166
}
167
+ case 0xf34: /* FPCCR */
168
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
169
+ /* Not all bits here are banked. */
170
+ uint32_t fpccr_s;
171
+
172
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
173
+ /* Don't allow setting of bits not present in v7M */
174
+ value &= (R_V7M_FPCCR_LSPACT_MASK |
175
+ R_V7M_FPCCR_USER_MASK |
176
+ R_V7M_FPCCR_THREAD_MASK |
177
+ R_V7M_FPCCR_HFRDY_MASK |
178
+ R_V7M_FPCCR_MMRDY_MASK |
179
+ R_V7M_FPCCR_BFRDY_MASK |
180
+ R_V7M_FPCCR_MONRDY_MASK |
181
+ R_V7M_FPCCR_LSPEN_MASK |
182
+ R_V7M_FPCCR_ASPEN_MASK);
183
+ }
184
+ value &= ~R_V7M_FPCCR_RES0_MASK;
185
+
186
+ if (!attrs.secure) {
187
+ /* Some non-banked bits are configurably writable by NS */
188
+ fpccr_s = cpu->env.v7m.fpccr[M_REG_S];
189
+ if (!(fpccr_s & R_V7M_FPCCR_LSPENS_MASK)) {
190
+ uint32_t lspen = FIELD_EX32(value, V7M_FPCCR, LSPEN);
191
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, LSPEN, lspen);
192
+ }
193
+ if (!(fpccr_s & R_V7M_FPCCR_CLRONRETS_MASK)) {
194
+ uint32_t cor = FIELD_EX32(value, V7M_FPCCR, CLRONRET);
195
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, CLRONRET, cor);
196
+ }
197
+ if ((s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
198
+ uint32_t hfrdy = FIELD_EX32(value, V7M_FPCCR, HFRDY);
199
+ uint32_t bfrdy = FIELD_EX32(value, V7M_FPCCR, BFRDY);
200
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
201
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
202
+ }
203
+ /* TODO MONRDY should RAZ/WI if DEMCR.SDME is set */
204
+ {
205
+ uint32_t monrdy = FIELD_EX32(value, V7M_FPCCR, MONRDY);
206
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, MONRDY, monrdy);
207
+ }
208
+
209
+ /*
210
+ * All other non-banked bits are RAZ/WI from NS; write
211
+ * just the banked bits to fpccr[M_REG_NS].
212
+ */
213
+ value &= R_V7M_FPCCR_BANKED_MASK;
214
+ cpu->env.v7m.fpccr[M_REG_NS] = value;
215
+ } else {
216
+ fpccr_s = value;
217
+ }
218
+ cpu->env.v7m.fpccr[M_REG_S] = fpccr_s;
219
+ }
220
+ break;
221
+ case 0xf38: /* FPCAR */
222
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
223
+ value &= ~7;
224
+ cpu->env.v7m.fpcar[attrs.secure] = value;
225
+ }
226
+ break;
227
+ case 0xf3c: /* FPDSCR */
228
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
229
+ value &= 0x07c00000;
230
+ cpu->env.v7m.fpdscr[attrs.secure] = value;
231
+ }
232
+ break;
233
case 0xf50: /* ICIALLU */
234
case 0xf58: /* ICIMVAU */
235
case 0xf5c: /* DCIMVAC */
236
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
237
index XXXXXXX..XXXXXXX 100644
238
--- a/target/arm/cpu.c
239
+++ b/target/arm/cpu.c
240
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
241
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
242
}
243
244
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
245
+ env->v7m.fpccr[M_REG_NS] = R_V7M_FPCCR_ASPEN_MASK;
246
+ env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
247
+ R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
248
+ }
249
/* Unlike A/R profile, M profile defines the reset LR value */
250
env->regs[14] = 0xffffffff;
251
252
diff --git a/target/arm/machine.c b/target/arm/machine.c
253
index XXXXXXX..XXXXXXX 100644
254
--- a/target/arm/machine.c
255
+++ b/target/arm/machine.c
256
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_v8m = {
257
}
258
};
259
260
+static const VMStateDescription vmstate_m_fp = {
261
+ .name = "cpu/m/fp",
262
+ .version_id = 1,
263
+ .minimum_version_id = 1,
264
+ .needed = vfp_needed,
265
+ .fields = (VMStateField[]) {
266
+ VMSTATE_UINT32_ARRAY(env.v7m.fpcar, ARMCPU, M_REG_NUM_BANKS),
267
+ VMSTATE_UINT32_ARRAY(env.v7m.fpccr, ARMCPU, M_REG_NUM_BANKS),
268
+ VMSTATE_UINT32_ARRAY(env.v7m.fpdscr, ARMCPU, M_REG_NUM_BANKS),
269
+ VMSTATE_UINT32_ARRAY(env.v7m.cpacr, ARMCPU, M_REG_NUM_BANKS),
270
+ VMSTATE_UINT32(env.v7m.nsacr, ARMCPU),
271
+ VMSTATE_END_OF_LIST()
272
+ }
273
+};
274
+
275
static const VMStateDescription vmstate_m = {
276
.name = "cpu/m",
277
.version_id = 4,
278
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
279
&vmstate_m_scr,
280
&vmstate_m_other_sp,
281
&vmstate_m_v8m,
282
+ &vmstate_m_fp,
283
NULL
284
}
285
};
286
--
287
2.20.1
288
289
Deleted patch
1
The only "system register" that M-profile floating point exposes
2
via the VMRS/VMRS instructions is FPSCR, and it does not have
3
the odd special case for rd==15. Add a check to ensure we only
4
expose FPSCR.
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190416125744.27770-5-peter.maydell@linaro.org
9
---
10
target/arm/translate.c | 19 +++++++++++++++++--
11
1 file changed, 17 insertions(+), 2 deletions(-)
12
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
16
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
18
}
19
}
20
} else { /* !dp */
21
+ bool is_sysreg;
22
+
23
if ((insn & 0x6f) != 0x00)
24
return 1;
25
rn = VFP_SREG_N(insn);
26
+
27
+ is_sysreg = extract32(insn, 21, 1);
28
+
29
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
30
+ /*
31
+ * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
32
+ * Writes to R15 are UNPREDICTABLE; we choose to undef.
33
+ */
34
+ if (is_sysreg && (rd == 15 || (rn >> 1) != ARM_VFP_FPSCR)) {
35
+ return 1;
36
+ }
37
+ }
38
+
39
if (insn & ARM_CP_RW_BIT) {
40
/* vfp->arm */
41
- if (insn & (1 << 21)) {
42
+ if (is_sysreg) {
43
/* system register */
44
rn >>= 1;
45
46
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
47
}
48
} else {
49
/* arm->vfp */
50
- if (insn & (1 << 21)) {
51
+ if (is_sysreg) {
52
rn >>= 1;
53
/* system register */
54
switch (rn) {
55
--
56
2.20.1
57
58
From: Richard Henderson <richard.henderson@linaro.org>

Use get_phys_addr_with_secure directly. For a-profile, this is the
one place where the value of is_secure may not equal arm_is_secure(env).

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

Handle floating point registers in exception entry.
This corresponds to the FP-specific parts of the pseudocode
functions ActivateException() and PushStack().

We defer the code corresponding to UpdateFPCCR() to a later patch.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-11-peter.maydell@linaro.org
---
 target/arm/helper.c | 98 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 95 insertions(+), 3 deletions(-)
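For readers following the pseudocode, the frame-size selection in PushStack() is the part that is easiest to show in isolation. The following self-contained model, with struct and field names invented rather than taken from QEMU, mirrors the 0x20 / 0x68 / 0xa8 choice made in the hunk below (the 0xa8 case also stacks s16-s31):

/* Model of the exception-entry stack frame size choice. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct m_fp_state {
    bool fpca;        /* CONTROL.FPCA: FP context currently active */
    bool secure;      /* current security state */
    bool nsacr_cp10;  /* NSACR.CP10: NS access to the FPU permitted */
    bool fpccr_ts;    /* FPCCR_S.TS */
};

static uint32_t exc_frame_size(const struct m_fp_state *s)
{
    if (s->fpca && (s->secure || s->nsacr_cp10)) {
        return (s->secure && s->fpccr_ts) ? 0xa8 : 0x68;
    }
    return 0x20;   /* basic frame, no FP registers stacked */
}

int main(void)
{
    struct m_fp_state s = { .fpca = true, .secure = true, .fpccr_ts = true };
    printf("0x%x\n", exc_frame_size(&s));   /* prints 0xa8 */
    return 0;
}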
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
18
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
19
switch_v7m_security_state(env, targets_secure);
19
20
write_v7m_control_spsel(env, 0);
20
#ifdef CONFIG_TCG
21
arm_clear_exclusive(env);
21
static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
22
+ /* Clear SFPA and FPCA (has no effect if no FPU) */
22
- MMUAccessType access_type, ARMMMUIdx mmu_idx)
23
+ env->v7m.control[M_REG_S] &=
23
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
24
+ ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
24
+ bool is_secure)
25
/* Clear IT bits */
25
{
26
env->condexec_bits = 0;
26
bool ret;
27
env->regs[14] = lr;
27
uint64_t par64;
28
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
28
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
29
uint32_t xpsr = xpsr_read(env);
29
ARMMMUFaultInfo fi = {};
30
uint32_t frameptr = env->regs[13];
30
GetPhysAddrResult res = {};
31
ARMMMUIdx mmu_idx = arm_mmu_idx(env);
31
32
+ uint32_t framesize;
32
- ret = get_phys_addr(env, value, access_type, mmu_idx, &res, &fi);
33
+ bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
33
+ ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx,
34
+
34
+ is_secure, &res, &fi);
35
+ if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
35
36
+ (env->v7m.secure || nsacr_cp10)) {
36
/*
37
+ if (env->v7m.secure &&
37
* ATS operations only do S1 or S1+S2 translations, so we never
38
+ env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
38
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
39
+ framesize = 0xa8;
39
switch (el) {
40
+ } else {
40
case 3:
41
+ framesize = 0x68;
41
mmu_idx = ARMMMUIdx_SE3;
42
+ }
42
+ secure = true;
43
+ } else {
43
break;
44
+ framesize = 0x20;
44
case 2:
45
+ }
45
g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
46
46
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
47
/* Align stack pointer if the guest wants that */
47
switch (el) {
48
if ((frameptr & 4) &&
48
case 3:
49
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
49
mmu_idx = ARMMMUIdx_SE10_0;
50
xpsr |= XPSR_SPREALIGN;
50
+ secure = true;
51
break;
52
case 2:
53
g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
54
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
55
case 4:
56
/* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
57
mmu_idx = ARMMMUIdx_E10_1;
58
+ secure = false;
59
break;
60
case 6:
61
/* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
62
mmu_idx = ARMMMUIdx_E10_0;
63
+ secure = false;
64
break;
65
default:
66
g_assert_not_reached();
51
}
67
}
52
68
53
- frameptr -= 0x20;
69
- par64 = do_ats_write(env, value, access_type, mmu_idx);
54
+ xpsr &= ~XPSR_SFPA;
70
+ par64 = do_ats_write(env, value, access_type, mmu_idx, secure);
55
+ if (env->v7m.secure &&
71
56
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
72
A32_BANKED_CURRENT_REG_SET(env, par, par64);
57
+ xpsr |= XPSR_SFPA;
73
#else
58
+ }
74
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
59
+
75
MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
60
+ frameptr -= framesize;
76
uint64_t par64;
61
77
62
if (arm_feature(env, ARM_FEATURE_V8)) {
78
- par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2);
63
uint32_t limit = v7m_sp_limit(env);
79
+ /* There is no SecureEL2 for AArch32. */
64
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
80
+ par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false);
65
v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
81
66
v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
82
A32_BANKED_CURRENT_REG_SET(env, par, par64);
67
83
#else
68
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
84
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
69
+ /* FPU is active, try to save its registers */
85
break;
70
+ bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
86
case 6: /* AT S1E3R, AT S1E3W */
71
+ bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
87
mmu_idx = ARMMMUIdx_SE3;
72
+
88
+ secure = true;
73
+ if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
89
break;
74
+ qemu_log_mask(CPU_LOG_INT,
90
default:
75
+ "...SecureFault because LSPACT and FPCA both set\n");
91
g_assert_not_reached();
76
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
92
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
77
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
93
g_assert_not_reached();
78
+ } else if (!env->v7m.secure && !nsacr_cp10) {
94
}
79
+ qemu_log_mask(CPU_LOG_INT,
95
80
+ "...Secure UsageFault with CFSR.NOCP because "
96
- env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx);
81
+ "NSACR.CP10 prevents stacking FP regs\n");
97
+ env->cp15.par_el[1] = do_ats_write(env, value, access_type,
82
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
98
+ mmu_idx, secure);
83
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
99
#else
84
+ } else {
100
/* Handled by hardware accelerator. */
85
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
101
g_assert_not_reached();
86
+ /* Lazy stacking disabled, save registers now */
87
+ int i;
88
+ bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
89
+ arm_current_el(env) != 0);
90
+
91
+ if (stacked_ok && !cpacr_pass) {
92
+ /*
93
+ * Take UsageFault if CPACR forbids access. The pseudocode
94
+ * here does a full CheckCPEnabled() but we know the NSACR
95
+ * check can never fail as we have already handled that.
96
+ */
97
+ qemu_log_mask(CPU_LOG_INT,
98
+ "...UsageFault with CFSR.NOCP because "
99
+ "CPACR.CP10 prevents stacking FP regs\n");
100
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
101
+ env->v7m.secure);
102
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
103
+ stacked_ok = false;
104
+ }
105
+
106
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
107
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
108
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
109
+ uint32_t slo = extract64(dn, 0, 32);
110
+ uint32_t shi = extract64(dn, 32, 32);
111
+
112
+ if (i >= 16) {
113
+ faddr += 8; /* skip the slot for the FPSCR */
114
+ }
115
+ stacked_ok = stacked_ok &&
116
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
117
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
118
+ }
119
+ stacked_ok = stacked_ok &&
120
+ v7m_stack_write(cpu, frameptr + 0x60,
121
+ vfp_get_fpscr(env), mmu_idx, false);
122
+ if (cpacr_pass) {
123
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
124
+ *aa32_vfp_dreg(env, i / 2) = 0;
125
+ }
126
+ vfp_set_fpscr(env, 0);
127
+ }
128
+ } else {
129
+ /* Lazy stacking enabled, save necessary info to stack later */
130
+ /* TODO : equivalent of UpdateFPCCR() pseudocode */
131
+ }
132
+ }
133
+ }
134
+
135
/*
136
* If we broke a stack limit then SP was already updated earlier;
137
* otherwise we update SP regardless of whether any of the stack
138
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
139
140
if (arm_feature(env, ARM_FEATURE_V8)) {
141
lr = R_V7M_EXCRET_RES1_MASK |
142
- R_V7M_EXCRET_DCRS_MASK |
143
- R_V7M_EXCRET_FTYPE_MASK;
144
+ R_V7M_EXCRET_DCRS_MASK;
145
/* The S bit indicates whether we should return to Secure
146
* or NonSecure (ie our current state).
147
* The ES bit indicates whether we're taking this exception
148
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
149
if (env->v7m.secure) {
150
lr |= R_V7M_EXCRET_S_MASK;
151
}
152
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
153
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
154
+ }
155
} else {
156
lr = R_V7M_EXCRET_RES1_MASK |
157
R_V7M_EXCRET_S_MASK |
158
--
102
--
159
2.20.1
103
2.25.1
160
161
From: Richard Henderson <richard.henderson@linaro.org>

For a-profile aarch64, which does not bank system registers, it takes
quite a lot of code to switch between security states. In the process,
registers such as TCR_EL{1,2} must be swapped, which in itself requires
the flushing of softmmu tlbs. Therefore it doesn't buy us anything to
separate tlbs by security state.

Retain the distinction between Stage2 and Stage2_S.

This will be important as we implement FEAT_RME, and do not wish to
add a third set of mmu indexes for Realm state.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h | 2 +-
 target/arm/cpu.h | 72 +++++++------------
 target/arm/internals.h | 31 +-------
 target/arm/helper.c | 144 +++++++++++++------------------
 target/arm/ptw.c | 25 ++-----
 target/arm/translate-a64.c | 8 ---
 target/arm/translate.c | 6 +-
 7 files changed, 85 insertions(+), 203 deletions(-)

The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-24-peter.maydell@linaro.org
---
 target/arm/cpu.h | 3 ++
 target/arm/helper.h | 2 +
 target/arm/translate.h | 1 +
 target/arm/helper.c | 112 +++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c | 22 ++++++++
 5 files changed, 140 insertions(+)
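As a purely illustrative aside, the lazy-preservation idea reduces to a deferred save keyed off FPCCR.LSPACT: exception entry only reserves space and sets the bit, and the first FP instruction in the handler performs the save. A minimal self-contained model of that ExecuteFPCheck()-style check, with all names invented for the sketch, is:

/* Model of lazy FP state preservation: the save reserved at exception
 * entry is only performed when the first FP instruction executes. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct lazy_fp {
    bool lspact;      /* FPCCR.LSPACT: lazy save pending */
    uint32_t fpcar;   /* address reserved for the FP frame */
};

static void save_frame(const struct lazy_fp *s)
{
    printf("writing s0-s15 + FPSCR to 0x%x\n", s->fpcar);
}

/* Called before executing an FP instruction. */
static void execute_fp_check(struct lazy_fp *s)
{
    if (s->lspact) {
        save_frame(s);        /* perform the deferred stacking now */
        s->lspact = false;
    }
}

int main(void)
{
    struct lazy_fp s = { .lspact = true, .fpcar = 0x2000ffa0 };
    execute_fp_check(&s);     /* triggers the deferred save exactly once */
    execute_fp_check(&s);     /* second FP insn: nothing left to do */
    return 0;
}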
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu-param.h
31
+++ b/target/arm/cpu-param.h
32
@@ -XXX,XX +XXX,XX @@
33
# define TARGET_PAGE_BITS_MIN 10
34
#endif
35
36
-#define NB_MMU_MODES 15
37
+#define NB_MMU_MODES 8
38
39
#endif
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
40
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
42
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
43
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@
44
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
24
#define EXCP_NOCP 17 /* v7M NOCP UsageFault */
45
* table over and over.
25
#define EXCP_INVSTATE 18 /* v7M INVSTATE UsageFault */
46
* 6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access
26
#define EXCP_STKOF 19 /* v8M STKOF UsageFault */
47
* Never (PAN) bit within PSTATE.
27
+#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
48
+ * 7. we fold together the secure and non-secure regimes for A-profile,
28
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
49
+ * because there are no banked system registers for aarch64, so the
29
50
+ * process of switching between secure and non-secure is
30
#define ARMV7M_EXCP_RESET 1
51
+ * already heavyweight.
31
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
52
*
32
FIELD(TBFLAG_A32, VFPEN, 7, 1)
53
* This gives us the following list of cases:
33
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
54
*
34
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
55
- * NS EL0 EL1&0 stage 1+2 (aka NS PL0)
35
+/* For M profile only, set if FPCCR.LSPACT is set */
56
- * NS EL1 EL1&0 stage 1+2 (aka NS PL1)
36
+FIELD(TBFLAG_A32, LSPACT, 18, 1)
57
- * NS EL1 EL1&0 stage 1+2 +PAN
37
/* For M profile only, set if we must create a new FP context */
58
- * NS EL0 EL2&0
38
FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
59
- * NS EL2 EL2&0
39
/* For M profile only, set if FPCCR.S does not match current security state */
60
- * NS EL2 EL2&0 +PAN
40
diff --git a/target/arm/helper.h b/target/arm/helper.h
61
- * NS EL2 (aka NS PL2)
62
- * S EL0 EL1&0 (aka S PL0)
63
- * S EL1 EL1&0 (not used if EL3 is 32 bit)
64
- * S EL1 EL1&0 +PAN
65
- * S EL3 (aka S PL1)
66
+ * EL0 EL1&0 stage 1+2 (aka NS PL0)
67
+ * EL1 EL1&0 stage 1+2 (aka NS PL1)
68
+ * EL1 EL1&0 stage 1+2 +PAN
69
+ * EL0 EL2&0
70
+ * EL2 EL2&0
71
+ * EL2 EL2&0 +PAN
72
+ * EL2 (aka NS PL2)
73
+ * EL3 (aka S PL1)
74
*
75
- * for a total of 11 different mmu_idx.
76
+ * for a total of 8 different mmu_idx.
77
*
78
* R profile CPUs have an MPU, but can use the same set of MMU indexes
79
- * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
80
- * NS EL2 if we ever model a Cortex-R52).
81
+ * as A profile. They only need to distinguish EL0 and EL1 (and
82
+ * EL2 if we ever model a Cortex-R52).
83
*
84
* M profile CPUs are rather different as they do not have a true MMU.
85
* They have the following different MMU indexes:
86
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
87
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
88
#define ARM_MMU_IDX_M 0x40 /* M profile */
89
90
-/* Meanings of the bits for A profile mmu idx values */
91
-#define ARM_MMU_IDX_A_NS 0x8
92
-
93
/* Meanings of the bits for M profile mmu idx values */
94
#define ARM_MMU_IDX_M_PRIV 0x1
95
#define ARM_MMU_IDX_M_NEGPRI 0x2
96
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
97
/*
98
* A-profile.
99
*/
100
- ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A,
101
- ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A,
102
- ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A,
103
- ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A,
104
- ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A,
105
- ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A,
106
- ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A,
107
- ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
108
-
109
- ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS,
110
- ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS,
111
- ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
112
- ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS,
113
- ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS,
114
- ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS,
115
- ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS,
116
+ ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
117
+ ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
118
+ ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
119
+ ARMMMUIdx_E20_2 = 3 | ARM_MMU_IDX_A,
120
+ ARMMMUIdx_E10_1_PAN = 4 | ARM_MMU_IDX_A,
121
+ ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A,
122
+ ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
123
+ ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,
124
125
/*
126
* These are not allocated TLBs and are used only for AT system
127
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
128
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
129
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
130
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
131
- ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB,
132
- ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB,
133
- ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB,
134
/*
135
* Not allocated a TLB: used only for second stage of an S12 page
136
* table walk, or for descriptor loads during first stage of an S1
137
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
138
* then various TLB flush insns which currently are no-ops or flush
139
* only stage 1 MMU indexes will need to change to flush stage 2.
140
*/
141
- ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB,
142
- ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB,
143
+ ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
144
+ ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
145
146
/*
147
* M-profile.
148
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
149
TO_CORE_BIT(E2),
150
TO_CORE_BIT(E20_2),
151
TO_CORE_BIT(E20_2_PAN),
152
- TO_CORE_BIT(SE10_0),
153
- TO_CORE_BIT(SE20_0),
154
- TO_CORE_BIT(SE10_1),
155
- TO_CORE_BIT(SE20_2),
156
- TO_CORE_BIT(SE10_1_PAN),
157
- TO_CORE_BIT(SE20_2_PAN),
158
- TO_CORE_BIT(SE2),
159
- TO_CORE_BIT(SE3),
160
+ TO_CORE_BIT(E3),
161
162
TO_CORE_BIT(MUser),
163
TO_CORE_BIT(MPriv),
164
diff --git a/target/arm/internals.h b/target/arm/internals.h
41
index XXXXXXX..XXXXXXX 100644
165
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/helper.h
166
--- a/target/arm/internals.h
43
+++ b/target/arm/helper.h
167
+++ b/target/arm/internals.h
44
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(v7m_blxns, void, env, i32)
168
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
45
169
case ARMMMUIdx_Stage1_E0:
46
DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
170
case ARMMMUIdx_Stage1_E1:
47
171
case ARMMMUIdx_Stage1_E1_PAN:
48
+DEF_HELPER_1(v7m_preserve_fp_state, void, env)
172
- case ARMMMUIdx_Stage1_SE0:
49
+
173
- case ARMMMUIdx_Stage1_SE1:
50
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
174
- case ARMMMUIdx_Stage1_SE1_PAN:
51
175
case ARMMMUIdx_E10_0:
52
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
176
case ARMMMUIdx_E10_1:
53
diff --git a/target/arm/translate.h b/target/arm/translate.h
177
case ARMMMUIdx_E10_1_PAN:
54
index XXXXXXX..XXXXXXX 100644
178
case ARMMMUIdx_E20_0:
55
--- a/target/arm/translate.h
179
case ARMMMUIdx_E20_2:
56
+++ b/target/arm/translate.h
180
case ARMMMUIdx_E20_2_PAN:
57
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
181
- case ARMMMUIdx_SE10_0:
58
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
182
- case ARMMMUIdx_SE10_1:
59
bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
183
- case ARMMMUIdx_SE10_1_PAN:
60
bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
184
- case ARMMMUIdx_SE20_0:
61
+ bool v7m_lspact; /* FPCCR.LSPACT set */
185
- case ARMMMUIdx_SE20_2:
62
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
186
- case ARMMMUIdx_SE20_2_PAN:
63
* so that top level loop can generate correct syndrome information.
187
return true;
64
*/
188
default:
189
return false;
190
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
191
{
192
switch (mmu_idx) {
193
case ARMMMUIdx_Stage1_E1_PAN:
194
- case ARMMMUIdx_Stage1_SE1_PAN:
195
case ARMMMUIdx_E10_1_PAN:
196
case ARMMMUIdx_E20_2_PAN:
197
- case ARMMMUIdx_SE10_1_PAN:
198
- case ARMMMUIdx_SE20_2_PAN:
199
return true;
200
default:
201
return false;
202
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
203
static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
204
{
205
switch (mmu_idx) {
206
- case ARMMMUIdx_SE20_0:
207
- case ARMMMUIdx_SE20_2:
208
- case ARMMMUIdx_SE20_2_PAN:
209
case ARMMMUIdx_E20_0:
210
case ARMMMUIdx_E20_2:
211
case ARMMMUIdx_E20_2_PAN:
212
case ARMMMUIdx_Stage2:
213
case ARMMMUIdx_Stage2_S:
214
- case ARMMMUIdx_SE2:
215
case ARMMMUIdx_E2:
216
return 2;
217
- case ARMMMUIdx_SE3:
218
+ case ARMMMUIdx_E3:
219
return 3;
220
- case ARMMMUIdx_SE10_0:
221
- case ARMMMUIdx_Stage1_SE0:
222
- return arm_el_is_aa64(env, 3) ? 1 : 3;
223
- case ARMMMUIdx_SE10_1:
224
- case ARMMMUIdx_SE10_1_PAN:
225
+ case ARMMMUIdx_E10_0:
226
case ARMMMUIdx_Stage1_E0:
227
+ return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3;
228
case ARMMMUIdx_Stage1_E1:
229
case ARMMMUIdx_Stage1_E1_PAN:
230
- case ARMMMUIdx_Stage1_SE1:
231
- case ARMMMUIdx_Stage1_SE1_PAN:
232
- case ARMMMUIdx_E10_0:
233
case ARMMMUIdx_E10_1:
234
case ARMMMUIdx_E10_1_PAN:
235
case ARMMMUIdx_MPrivNegPri:
236
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
237
case ARMMMUIdx_Stage1_E0:
238
case ARMMMUIdx_Stage1_E1:
239
case ARMMMUIdx_Stage1_E1_PAN:
240
- case ARMMMUIdx_Stage1_SE0:
241
- case ARMMMUIdx_Stage1_SE1:
242
- case ARMMMUIdx_Stage1_SE1_PAN:
243
return true;
244
default:
245
return false;
65
diff --git a/target/arm/helper.c b/target/arm/helper.c
246
diff --git a/target/arm/helper.c b/target/arm/helper.c
66
index XXXXXXX..XXXXXXX 100644
247
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/helper.c
248
--- a/target/arm/helper.c
68
+++ b/target/arm/helper.c
249
+++ b/target/arm/helper.c
69
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
250
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
70
g_assert_not_reached();
251
/* Begin with base v8.0 state. */
71
}
252
uint64_t valid_mask = 0x3fff;
72
253
ARMCPU *cpu = env_archcpu(env);
73
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
254
+ uint64_t changed;
74
+{
255
75
+ /* translate.c should never generate calls here in user-only mode */
256
/*
76
+ g_assert_not_reached();
257
* Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always
77
+}
258
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
78
+
259
79
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
260
/* Clear all-context RES0 bits. */
80
{
261
value &= valid_mask;
81
/* The TT instructions can be used by unprivileged code, but in
262
- raw_write(env, ri, value);
82
@@ -XXX,XX +XXX,XX @@ pend_fault:
263
+ changed = env->cp15.scr_el3 ^ value;
83
return false;
264
+ env->cp15.scr_el3 = value;
84
}
85
86
+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
87
+{
88
+ /*
89
+ * Preserve FP state (because LSPACT was set and we are about
90
+ * to execute an FP instruction). This corresponds to the
91
+ * PreserveFPState() pseudocode.
92
+ * We may throw an exception if the stacking fails.
93
+ */
94
+ ARMCPU *cpu = arm_env_get_cpu(env);
95
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
96
+ bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
97
+ bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
98
+ bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
99
+ uint32_t fpcar = env->v7m.fpcar[is_secure];
100
+ bool stacked_ok = true;
101
+ bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
102
+ bool take_exception;
103
+
104
+ /* Take the iothread lock as we are going to touch the NVIC */
105
+ qemu_mutex_lock_iothread();
106
+
107
+ /* Check the background context had access to the FPU */
108
+ if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
109
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
110
+ env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
111
+ stacked_ok = false;
112
+ } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
113
+ armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
114
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
115
+ stacked_ok = false;
116
+ }
117
+
118
+ if (!splimviol && stacked_ok) {
119
+ /* We only stack if the stack limit wasn't violated */
120
+ int i;
121
+ ARMMMUIdx mmu_idx;
122
+
123
+ mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
124
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
125
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
126
+ uint32_t faddr = fpcar + 4 * i;
127
+ uint32_t slo = extract64(dn, 0, 32);
128
+ uint32_t shi = extract64(dn, 32, 32);
129
+
130
+ if (i >= 16) {
131
+ faddr += 8; /* skip the slot for the FPSCR */
132
+ }
133
+ stacked_ok = stacked_ok &&
134
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
135
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
136
+ }
137
+
138
+ stacked_ok = stacked_ok &&
139
+ v7m_stack_write(cpu, fpcar + 0x40,
140
+ vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
141
+ }
142
+
265
+
143
+ /*
266
+ /*
144
+ * We definitely pended an exception, but it's possible that it
267
+ * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then
145
+ * might not be able to be taken now. If its priority permits us
268
+ * we must invalidate all TLBs below EL3.
146
+ * to take it now, then we must not update the LSPACT or FP regs,
147
+ * but instead jump out to take the exception immediately.
148
+ * If it's just pending and won't be taken until the current
149
+ * handler exits, then we do update LSPACT and the FP regs.
150
+ */
269
+ */
151
+ take_exception = !stacked_ok &&
270
+ if (changed & SCR_NS) {
152
+ armv7m_nvic_can_take_pending_exception(env->nvic);
271
+ tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 |
153
+
272
+ ARMMMUIdxBit_E20_0 |
154
+ qemu_mutex_unlock_iothread();
273
+ ARMMMUIdxBit_E10_1 |
155
+
274
+ ARMMMUIdxBit_E20_2 |
156
+ if (take_exception) {
275
+ ARMMMUIdxBit_E10_1_PAN |
157
+ raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
276
+ ARMMMUIdxBit_E20_2_PAN |
277
+ ARMMMUIdxBit_E2));
158
+ }
278
+ }
159
+
279
}
160
+ env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
280
161
+
281
static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
162
+ if (ts) {
282
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
163
+ /* Clear s0 to s31 and the FPSCR */
283
case ARMMMUIdx_E20_0:
164
+ int i;
284
case ARMMMUIdx_E20_2:
165
+
285
case ARMMMUIdx_E20_2_PAN:
166
+ for (i = 0; i < 32; i += 2) {
286
- case ARMMMUIdx_SE20_0:
167
+ *aa32_vfp_dreg(env, i / 2) = 0;
287
- case ARMMMUIdx_SE20_2:
168
+ }
288
- case ARMMMUIdx_SE20_2_PAN:
169
+ vfp_set_fpscr(env, 0);
289
return GTIMER_HYP;
170
+ }
290
default:
171
+ /*
291
return GTIMER_PHYS;
172
+ * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
292
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
173
+ * unchanged.
293
case ARMMMUIdx_E20_0:
174
+ */
294
case ARMMMUIdx_E20_2:
175
+}
295
case ARMMMUIdx_E20_2_PAN:
176
+
296
- case ARMMMUIdx_SE20_0:
177
/* Write to v7M CONTROL.SPSEL bit for the specified security bank.
297
- case ARMMMUIdx_SE20_2:
178
* This may change the current stack pointer between Main and Process
298
- case ARMMMUIdx_SE20_2_PAN:
179
* stack pointers if it is done for the CONTROL register for the current
299
return GTIMER_HYPVIRT;
180
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
300
default:
181
[EXCP_NOCP] = "v7M NOCP UsageFault",
301
return GTIMER_VIRT;
182
[EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
302
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
183
[EXCP_STKOF] = "v8M STKOF UsageFault",
303
/* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */
184
+ [EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
304
switch (el) {
185
};
305
case 3:
186
306
- mmu_idx = ARMMMUIdx_SE3;
187
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
307
+ mmu_idx = ARMMMUIdx_E3;
188
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
308
secure = true;
189
return;
309
break;
310
case 2:
311
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
312
/* fall through */
313
case 1:
314
if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
315
- mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
316
- : ARMMMUIdx_Stage1_E1_PAN);
317
+ mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
318
} else {
319
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
320
+ mmu_idx = ARMMMUIdx_Stage1_E1;
321
}
322
break;
323
default:
324
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
325
/* stage 1 current state PL0: ATS1CUR, ATS1CUW */
326
switch (el) {
327
case 3:
328
- mmu_idx = ARMMMUIdx_SE10_0;
329
+ mmu_idx = ARMMMUIdx_E10_0;
330
secure = true;
331
break;
332
case 2:
333
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
334
mmu_idx = ARMMMUIdx_Stage1_E0;
335
break;
336
case 1:
337
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
338
+ mmu_idx = ARMMMUIdx_Stage1_E0;
339
break;
340
default:
341
g_assert_not_reached();
342
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
343
switch (ri->opc1) {
344
case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
345
if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
346
- mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
347
- : ARMMMUIdx_Stage1_E1_PAN);
348
+ mmu_idx = ARMMMUIdx_Stage1_E1_PAN;
349
} else {
350
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
351
+ mmu_idx = ARMMMUIdx_Stage1_E1;
352
}
353
break;
354
case 4: /* AT S1E2R, AT S1E2W */
355
- mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2;
356
+ mmu_idx = ARMMMUIdx_E2;
357
break;
358
case 6: /* AT S1E3R, AT S1E3W */
359
- mmu_idx = ARMMMUIdx_SE3;
360
+ mmu_idx = ARMMMUIdx_E3;
361
secure = true;
362
break;
363
default:
364
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
190
}
365
}
191
break;
366
break;
192
+ case EXCP_LAZYFP:
367
case 2: /* AT S1E0R, AT S1E0W */
193
+ /*
368
- mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
194
+ * We already pended the specific exception in the NVIC in the
369
+ mmu_idx = ARMMMUIdx_Stage1_E0;
195
+ * v7m_preserve_fp_state() helper function.
370
break;
196
+ */
371
case 4: /* AT S12E1R, AT S12E1W */
372
- mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
373
+ mmu_idx = ARMMMUIdx_E10_1;
374
break;
375
case 6: /* AT S12E0R, AT S12E0W */
376
- mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0;
377
+ mmu_idx = ARMMMUIdx_E10_0;
378
break;
379
default:
380
g_assert_not_reached();
381
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
382
uint16_t mask = ARMMMUIdxBit_E20_2 |
383
ARMMMUIdxBit_E20_2_PAN |
384
ARMMMUIdxBit_E20_0;
385
-
386
- if (arm_is_secure_below_el3(env)) {
387
- mask >>= ARM_MMU_IDX_A_NS;
388
- }
389
-
390
tlb_flush_by_mmuidx(env_cpu(env), mask);
391
}
392
raw_write(env, ri, value);
393
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
394
uint16_t mask = ARMMMUIdxBit_E10_1 |
395
ARMMMUIdxBit_E10_1_PAN |
396
ARMMMUIdxBit_E10_0;
397
-
398
- if (arm_is_secure_below_el3(env)) {
399
- mask >>= ARM_MMU_IDX_A_NS;
400
- }
401
-
402
tlb_flush_by_mmuidx(cs, mask);
403
raw_write(env, ri, value);
404
}
405
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
406
ARMMMUIdxBit_E10_1_PAN |
407
ARMMMUIdxBit_E10_0;
408
}
409
-
410
- if (arm_is_secure_below_el3(env)) {
411
- mask >>= ARM_MMU_IDX_A_NS;
412
- }
413
-
414
return mask;
415
}
416
417
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
418
mmu_idx = ARMMMUIdx_E10_0;
419
}
420
421
- if (arm_is_secure_below_el3(env)) {
422
- mmu_idx &= ~ARM_MMU_IDX_A_NS;
423
- }
424
-
425
return tlbbits_for_regime(env, mmu_idx, addr);
426
}
427
428
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
429
* stage 2 translations, whereas most other scopes only invalidate
430
* stage 1 translations.
431
*/
432
- if (arm_is_secure_below_el3(env)) {
433
- return ARMMMUIdxBit_SE10_1 |
434
- ARMMMUIdxBit_SE10_1_PAN |
435
- ARMMMUIdxBit_SE10_0;
436
- } else {
437
- return ARMMMUIdxBit_E10_1 |
438
- ARMMMUIdxBit_E10_1_PAN |
439
- ARMMMUIdxBit_E10_0;
440
- }
441
+ return (ARMMMUIdxBit_E10_1 |
442
+ ARMMMUIdxBit_E10_1_PAN |
443
+ ARMMMUIdxBit_E10_0);
444
}
445
446
static int e2_tlbmask(CPUARMState *env)
447
{
448
- if (arm_is_secure_below_el3(env)) {
449
- return ARMMMUIdxBit_SE20_0 |
450
- ARMMMUIdxBit_SE20_2 |
451
- ARMMMUIdxBit_SE20_2_PAN |
452
- ARMMMUIdxBit_SE2;
453
- } else {
454
- return ARMMMUIdxBit_E20_0 |
455
- ARMMMUIdxBit_E20_2 |
456
- ARMMMUIdxBit_E20_2_PAN |
457
- ARMMMUIdxBit_E2;
458
- }
459
+ return (ARMMMUIdxBit_E20_0 |
460
+ ARMMMUIdxBit_E20_2 |
461
+ ARMMMUIdxBit_E20_2_PAN |
462
+ ARMMMUIdxBit_E2);
463
}
464
465
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
466
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
467
ARMCPU *cpu = env_archcpu(env);
468
CPUState *cs = CPU(cpu);
469
470
- tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3);
471
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3);
472
}
473
474
static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
475
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
476
{
477
CPUState *cs = env_cpu(env);
478
479
- tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3);
480
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3);
481
}
482
483
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
484
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
485
CPUState *cs = CPU(cpu);
486
uint64_t pageaddr = sextract64(value << 12, 0, 56);
487
488
- tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3);
489
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3);
490
}
491
492
static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
493
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
494
{
495
CPUState *cs = env_cpu(env);
496
uint64_t pageaddr = sextract64(value << 12, 0, 56);
497
- bool secure = arm_is_secure_below_el3(env);
498
- int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2;
499
- int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2,
500
- pageaddr);
501
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
502
503
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
504
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
505
+ ARMMMUIdxBit_E2, bits);
506
}
507
508
static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
509
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
510
{
511
CPUState *cs = env_cpu(env);
512
uint64_t pageaddr = sextract64(value << 12, 0, 56);
513
- int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr);
514
+ int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr);
515
516
tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
517
- ARMMMUIdxBit_SE3, bits);
518
+ ARMMMUIdxBit_E3, bits);
519
}
520
521
#ifdef TARGET_AARCH64
522
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env,
523
524
static int vae2_tlbmask(CPUARMState *env)
525
{
526
- return (arm_is_secure_below_el3(env)
527
- ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2);
528
+ return ARMMMUIdxBit_E2;
529
}
530
531
static void tlbi_aa64_rvae2_write(CPUARMState *env,
532
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3_write(CPUARMState *env,
533
* flush-last-level-only.
534
*/
535
536
- do_rvae_write(env, value, ARMMMUIdxBit_SE3,
537
- tlb_force_broadcast(env));
538
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env));
539
}
540
541
static void tlbi_aa64_rvae3is_write(CPUARMState *env,
542
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
543
* flush-last-level-only or inner/outer specific flushes.
544
*/
545
546
- do_rvae_write(env, value, ARMMMUIdxBit_SE3, true);
547
+ do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
548
}
549
#endif
550
551
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
552
/* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
553
if (el == 0) {
554
ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
555
- el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0)
556
- ? 2 : 1;
557
+ el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1;
558
}
559
return env->cp15.sctlr_el[el];
560
}
561
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
562
switch (mmu_idx) {
563
case ARMMMUIdx_E10_0:
564
case ARMMMUIdx_E20_0:
565
- case ARMMMUIdx_SE10_0:
566
- case ARMMMUIdx_SE20_0:
567
return 0;
568
case ARMMMUIdx_E10_1:
569
case ARMMMUIdx_E10_1_PAN:
570
- case ARMMMUIdx_SE10_1:
571
- case ARMMMUIdx_SE10_1_PAN:
572
return 1;
573
case ARMMMUIdx_E2:
574
case ARMMMUIdx_E20_2:
575
case ARMMMUIdx_E20_2_PAN:
576
- case ARMMMUIdx_SE2:
577
- case ARMMMUIdx_SE20_2:
578
- case ARMMMUIdx_SE20_2_PAN:
579
return 2;
580
- case ARMMMUIdx_SE3:
581
+ case ARMMMUIdx_E3:
582
return 3;
583
default:
584
g_assert_not_reached();
585
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
586
}
587
break;
588
case 3:
589
- return ARMMMUIdx_SE3;
590
+ return ARMMMUIdx_E3;
591
default:
592
g_assert_not_reached();
593
}
594
595
- if (arm_is_secure_below_el3(env)) {
596
- idx &= ~ARM_MMU_IDX_A_NS;
597
- }
598
-
599
return idx;
600
}
601
602
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
603
switch (mmu_idx) {
604
case ARMMMUIdx_E10_1:
605
case ARMMMUIdx_E10_1_PAN:
606
- case ARMMMUIdx_SE10_1:
607
- case ARMMMUIdx_SE10_1_PAN:
608
/* TODO: ARMv8.3-NV */
609
DP_TBFLAG_A64(flags, UNPRIV, 1);
610
break;
611
case ARMMMUIdx_E20_2:
612
case ARMMMUIdx_E20_2_PAN:
613
- case ARMMMUIdx_SE20_2:
614
- case ARMMMUIdx_SE20_2_PAN:
615
/*
616
* Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
617
* gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
618
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
619
index XXXXXXX..XXXXXXX 100644
620
--- a/target/arm/ptw.c
621
+++ b/target/arm/ptw.c
622
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
623
ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
624
{
625
switch (mmu_idx) {
626
- case ARMMMUIdx_SE10_0:
627
- return ARMMMUIdx_Stage1_SE0;
628
- case ARMMMUIdx_SE10_1:
629
- return ARMMMUIdx_Stage1_SE1;
630
- case ARMMMUIdx_SE10_1_PAN:
631
- return ARMMMUIdx_Stage1_SE1_PAN;
632
case ARMMMUIdx_E10_0:
633
return ARMMMUIdx_Stage1_E0;
634
case ARMMMUIdx_E10_1:
635
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
636
static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
637
{
638
switch (mmu_idx) {
639
- case ARMMMUIdx_SE10_0:
640
case ARMMMUIdx_E20_0:
641
- case ARMMMUIdx_SE20_0:
642
case ARMMMUIdx_Stage1_E0:
643
- case ARMMMUIdx_Stage1_SE0:
644
case ARMMMUIdx_MUser:
645
case ARMMMUIdx_MSUser:
646
case ARMMMUIdx_MUserNegPri:
647
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
648
649
s2_mmu_idx = (s2walk_secure
650
? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
651
- is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
652
+ is_el0 = mmu_idx == ARMMMUIdx_E10_0;
653
654
/*
655
* S1 is done, now do S2 translation.
656
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
657
case ARMMMUIdx_Stage1_E1:
658
case ARMMMUIdx_Stage1_E1_PAN:
659
case ARMMMUIdx_E2:
660
+ is_secure = arm_is_secure_below_el3(env);
197
+ break;
661
+ break;
198
default:
662
case ARMMMUIdx_Stage2:
199
cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
663
case ARMMMUIdx_MPrivNegPri:
200
return; /* Never happens. Keep compiler happy. */
664
case ARMMMUIdx_MUserNegPri:
201
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
665
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
202
flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
666
case ARMMMUIdx_MUser:
203
}
667
is_secure = false;
204
668
break;
205
+ if (arm_feature(env, ARM_FEATURE_M)) {
669
- case ARMMMUIdx_SE3:
206
+ bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
670
- case ARMMMUIdx_SE10_0:
207
+
671
- case ARMMMUIdx_SE10_1:
208
+ if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
672
- case ARMMMUIdx_SE10_1_PAN:
209
+ flags = FIELD_DP32(flags, TBFLAG_A32, LSPACT, 1);
673
- case ARMMMUIdx_SE20_0:
210
+ }
674
- case ARMMMUIdx_SE20_2:
211
+ }
675
- case ARMMMUIdx_SE20_2_PAN:
212
+
676
- case ARMMMUIdx_Stage1_SE0:
213
*pflags = flags;
677
- case ARMMMUIdx_Stage1_SE1:
214
*cs_base = 0;
678
- case ARMMMUIdx_Stage1_SE1_PAN:
215
}
679
- case ARMMMUIdx_SE2:
680
+ case ARMMMUIdx_E3:
681
case ARMMMUIdx_Stage2_S:
682
case ARMMMUIdx_MSPrivNegPri:
683
case ARMMMUIdx_MSUserNegPri:
684
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
685
index XXXXXXX..XXXXXXX 100644
686
--- a/target/arm/translate-a64.c
687
+++ b/target/arm/translate-a64.c
688
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
689
case ARMMMUIdx_E20_2_PAN:
690
useridx = ARMMMUIdx_E20_0;
691
break;
692
- case ARMMMUIdx_SE10_1:
693
- case ARMMMUIdx_SE10_1_PAN:
694
- useridx = ARMMMUIdx_SE10_0;
695
- break;
696
- case ARMMMUIdx_SE20_2:
697
- case ARMMMUIdx_SE20_2_PAN:
698
- useridx = ARMMMUIdx_SE20_0;
699
- break;
700
default:
701
g_assert_not_reached();
702
}
216
diff --git a/target/arm/translate.c b/target/arm/translate.c
703
diff --git a/target/arm/translate.c b/target/arm/translate.c
217
index XXXXXXX..XXXXXXX 100644
704
index XXXXXXX..XXXXXXX 100644
218
--- a/target/arm/translate.c
705
--- a/target/arm/translate.c
219
+++ b/target/arm/translate.c
706
+++ b/target/arm/translate.c
220
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
707
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
221
if (arm_dc_feature(s, ARM_FEATURE_M)) {
708
* otherwise, access as if at PL0.
222
/* Handle M-profile lazy FP state mechanics */
709
*/
223
710
switch (s->mmu_idx) {
224
+ /* Trigger lazy-state preservation if necessary */
711
+ case ARMMMUIdx_E3:
225
+ if (s->v7m_lspact) {
712
case ARMMMUIdx_E2: /* this one is UNPREDICTABLE */
226
+ /*
713
case ARMMMUIdx_E10_0:
227
+ * Lazy state saving affects external memory and also the NVIC,
714
case ARMMMUIdx_E10_1:
228
+ * so we must mark it as an IO operation for icount.
715
case ARMMMUIdx_E10_1_PAN:
229
+ */
716
return arm_to_core_mmu_idx(ARMMMUIdx_E10_0);
230
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
717
- case ARMMMUIdx_SE3:
231
+ gen_io_start();
718
- case ARMMMUIdx_SE10_0:
232
+ }
719
- case ARMMMUIdx_SE10_1:
233
+ gen_helper_v7m_preserve_fp_state(cpu_env);
720
- case ARMMMUIdx_SE10_1_PAN:
234
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
721
- return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0);
235
+ gen_io_end();
722
case ARMMMUIdx_MUser:
236
+ }
723
case ARMMMUIdx_MPriv:
237
+ /*
724
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
238
+ * If the preserve_fp_state helper doesn't throw an exception
239
+ * then it will clear LSPACT; we don't need to repeat this for
240
+ * any further FP insns in this TB.
241
+ */
242
+ s->v7m_lspact = false;
243
+ }
244
+
245
/* Update ownership of FP context: set FPCCR.S to match current state */
246
if (s->v8m_fpccr_s_wrong) {
247
TCGv_i32 tmp;
248
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
249
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
250
dc->v7m_new_fp_ctxt_needed =
251
FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
252
+ dc->v7m_lspact = FIELD_EX32(tb_flags, TBFLAG_A32, LSPACT);
253
dc->cp_regs = cpu->cp_regs;
254
dc->features = env->features;
255
256
--
725
--
257
2.20.1
726
2.25.1
258
259
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Since uWireSlave is only used in this new header, there is no
need to expose it via "qemu/typedefs.h".

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-9-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/omap.h | 6 +-----
 include/hw/devices.h | 15 ---------------
 include/hw/input/tsc2xxx.h | 36 ++++++++++++++++++++++++++++++++++++
 include/qemu/typedefs.h | 1 -
 hw/arm/nseries.c | 2 +-
 hw/arm/palm.c | 2 +-
 hw/input/tsc2005.c | 2 +-
 hw/input/tsc210x.c | 4 ++--
 MAINTAINERS | 2 ++
 9 files changed, 44 insertions(+), 26 deletions(-)
 create mode 100644 include/hw/input/tsc2xxx.h

From: Richard Henderson <richard.henderson@linaro.org>

Use a switch on mmu_idx for the a-profile indexes, instead of
three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)
22
13
23
diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
24
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/arm/omap.h
16
--- a/target/arm/ptw.c
26
+++ b/include/hw/arm/omap.h
17
+++ b/target/arm/ptw.c
27
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
28
#include "exec/memory.h"
19
29
# define hw_omap_h        "omap.h"
20
hcr_el2 = arm_hcr_el2_eff(env);
30
#include "hw/irq.h"
21
31
+#include "hw/input/tsc2xxx.h"
22
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
32
#include "target/arm/cpu-qom.h"
23
+ switch (mmu_idx) {
33
#include "qemu/log.h"
24
+ case ARMMMUIdx_Stage2:
34
25
+ case ARMMMUIdx_Stage2_S:
35
@@ -XXX,XX +XXX,XX @@ qemu_irq *omap_mpuio_in_get(struct omap_mpuio_s *s);
26
/* HCR.DC means HCR.VM behaves as 1 */
36
void omap_mpuio_out_set(struct omap_mpuio_s *s, int line, qemu_irq handler);
27
return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
37
void omap_mpuio_key(struct omap_mpuio_s *s, int row, int col, int down);
28
- }
38
29
39
-struct uWireSlave {
30
- if (hcr_el2 & HCR_TGE) {
40
- uint16_t (*receive)(void *opaque);
31
+ case ARMMMUIdx_E10_0:
41
- void (*send)(void *opaque, uint16_t data);
32
+ case ARMMMUIdx_E10_1:
42
- void *opaque;
33
+ case ARMMMUIdx_E10_1_PAN:
43
-};
34
/* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
44
struct omap_uwire_s;
35
- if (!is_secure && regime_el(env, mmu_idx) == 1) {
45
void omap_uwire_attach(struct omap_uwire_s *s,
36
+ if (!is_secure && (hcr_el2 & HCR_TGE)) {
46
uWireSlave *slave, int chipselect);
37
return true;
47
diff --git a/include/hw/devices.h b/include/hw/devices.h
38
}
48
index XXXXXXX..XXXXXXX 100644
39
- }
49
--- a/include/hw/devices.h
40
+ break;
50
+++ b/include/hw/devices.h
41
51
@@ -XXX,XX +XXX,XX @@
42
- if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
52
/* Devices that have nowhere better to go. */
43
+ case ARMMMUIdx_Stage1_E0:
53
44
+ case ARMMMUIdx_Stage1_E1:
54
#include "hw/hw.h"
45
+ case ARMMMUIdx_Stage1_E1_PAN:
55
-#include "ui/console.h"
46
/* HCR.DC means SCTLR_EL1.M behaves as 0 */
56
47
- return true;
57
/* smc91c111.c */
48
+ if (hcr_el2 & HCR_DC) {
58
void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
49
+ return true;
59
@@ -XXX,XX +XXX,XX @@ void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
50
+ }
60
/* lan9118.c */
51
+ break;
61
void lan9118_init(NICInfo *, uint32_t, qemu_irq);
62
63
-/* tsc210x.c */
64
-uWireSlave *tsc2102_init(qemu_irq pint);
65
-uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
66
-I2SCodec *tsc210x_codec(uWireSlave *chip);
67
-uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
68
-void tsc210x_set_transform(uWireSlave *chip,
69
- MouseTransformInfo *info);
70
-void tsc210x_key_event(uWireSlave *chip, int key, int down);
71
-
72
-/* tsc2005.c */
73
-void *tsc2005_init(qemu_irq pintdav);
74
-uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
75
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
76
-
77
#endif
78
diff --git a/include/hw/input/tsc2xxx.h b/include/hw/input/tsc2xxx.h
79
new file mode 100644
80
index XXXXXXX..XXXXXXX
81
--- /dev/null
82
+++ b/include/hw/input/tsc2xxx.h
83
@@ -XXX,XX +XXX,XX @@
84
+/*
85
+ * TI touchscreen controller
86
+ *
87
+ * Copyright (c) 2006 Andrzej Zaborowski
88
+ * Copyright (C) 2008 Nokia Corporation
89
+ *
90
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
91
+ * See the COPYING file in the top-level directory.
92
+ */
93
+
52
+
94
+#ifndef HW_INPUT_TSC2XXX_H
53
+ case ARMMMUIdx_E20_0:
95
+#define HW_INPUT_TSC2XXX_H
54
+ case ARMMMUIdx_E20_2:
55
+ case ARMMMUIdx_E20_2_PAN:
56
+ case ARMMMUIdx_E2:
57
+ case ARMMMUIdx_E3:
58
+ break;
96
+
59
+
97
+#include "hw/irq.h"
60
+ default:
98
+#include "ui/console.h"
61
+ g_assert_not_reached();
99
+
62
}
100
+typedef struct uWireSlave {
63
101
+ uint16_t (*receive)(void *opaque);
64
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
102
+ void (*send)(void *opaque, uint16_t data);
103
+ void *opaque;
104
+} uWireSlave;
105
+
106
+/* tsc210x.c */
107
+uWireSlave *tsc2102_init(qemu_irq pint);
108
+uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
109
+I2SCodec *tsc210x_codec(uWireSlave *chip);
110
+uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
111
+void tsc210x_set_transform(uWireSlave *chip, MouseTransformInfo *info);
112
+void tsc210x_key_event(uWireSlave *chip, int key, int down);
113
+
114
+/* tsc2005.c */
115
+void *tsc2005_init(qemu_irq pintdav);
116
+uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
117
+void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
118
+
119
+#endif
120
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
121
index XXXXXXX..XXXXXXX 100644
122
--- a/include/qemu/typedefs.h
123
+++ b/include/qemu/typedefs.h
124
@@ -XXX,XX +XXX,XX @@ typedef struct RAMBlock RAMBlock;
125
typedef struct Range Range;
126
typedef struct SHPCDevice SHPCDevice;
127
typedef struct SSIBus SSIBus;
128
-typedef struct uWireSlave uWireSlave;
129
typedef struct VirtIODevice VirtIODevice;
130
typedef struct Visitor Visitor;
131
typedef void SaveStateHandler(QEMUFile *f, void *opaque);
132
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/hw/arm/nseries.c
135
+++ b/hw/arm/nseries.c
136
@@ -XXX,XX +XXX,XX @@
137
#include "ui/console.h"
138
#include "hw/boards.h"
139
#include "hw/i2c/i2c.h"
140
-#include "hw/devices.h"
141
#include "hw/display/blizzard.h"
142
+#include "hw/input/tsc2xxx.h"
143
#include "hw/misc/cbus.h"
144
#include "hw/misc/tmp105.h"
145
#include "hw/block/flash.h"
146
diff --git a/hw/arm/palm.c b/hw/arm/palm.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/hw/arm/palm.c
149
+++ b/hw/arm/palm.c
150
@@ -XXX,XX +XXX,XX @@
151
#include "hw/arm/omap.h"
152
#include "hw/boards.h"
153
#include "hw/arm/arm.h"
154
-#include "hw/devices.h"
155
+#include "hw/input/tsc2xxx.h"
156
#include "hw/loader.h"
157
#include "exec/address-spaces.h"
158
#include "cpu.h"
159
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/hw/input/tsc2005.c
162
+++ b/hw/input/tsc2005.c
163
@@ -XXX,XX +XXX,XX @@
164
#include "hw/hw.h"
165
#include "qemu/timer.h"
166
#include "ui/console.h"
167
-#include "hw/devices.h"
168
+#include "hw/input/tsc2xxx.h"
169
#include "trace.h"
170
171
#define TSC_CUT_RESOLUTION(value, p)    ((value) >> (16 - (p ? 12 : 10)))
172
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
173
index XXXXXXX..XXXXXXX 100644
174
--- a/hw/input/tsc210x.c
175
+++ b/hw/input/tsc210x.c
176
@@ -XXX,XX +XXX,XX @@
177
#include "audio/audio.h"
178
#include "qemu/timer.h"
179
#include "ui/console.h"
180
-#include "hw/arm/omap.h"    /* For I2SCodec and uWireSlave */
181
-#include "hw/devices.h"
182
+#include "hw/arm/omap.h" /* For I2SCodec */
183
+#include "hw/input/tsc2xxx.h"
184
185
#define TSC_DATA_REGISTERS_PAGE        0x0
186
#define TSC_CONTROL_REGISTERS_PAGE    0x1
187
diff --git a/MAINTAINERS b/MAINTAINERS
188
index XXXXXXX..XXXXXXX 100644
189
--- a/MAINTAINERS
190
+++ b/MAINTAINERS
191
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
192
F: hw/misc/cbus.c
193
F: hw/timer/twl92230.c
194
F: include/hw/display/blizzard.h
195
+F: include/hw/input/tsc2xxx.h
196
F: include/hw/misc/cbus.h
197
198
Palm
199
@@ -XXX,XX +XXX,XX @@ L: qemu-arm@nongnu.org
200
S: Odd Fixes
201
F: hw/arm/palm.c
202
F: hw/input/tsc210x.c
203
+F: include/hw/input/tsc2xxx.h
204
205
Raspberry Pi
206
M: Peter Maydell <peter.maydell@linaro.org>
207
--
65
--
208
2.20.1
66
2.25.1
209
210
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-8-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 3 ---
 include/hw/input/gamepad.h | 19 +++++++++++++++++++
 hw/arm/stellaris.c | 2 +-
 hw/input/stellaris_input.c | 2 +-
 MAINTAINERS | 1 +
 5 files changed, 22 insertions(+), 5 deletions(-)
 create mode 100644 include/hw/input/gamepad.h

From: Richard Henderson <richard.henderson@linaro.org>

The effect of TGE does not only apply to non-secure state,
now that Secure EL2 exists.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
15
13
16
diff --git a/include/hw/devices.h b/include/hw/devices.h
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/devices.h
16
--- a/target/arm/ptw.c
19
+++ b/include/hw/devices.h
17
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ void *tsc2005_init(qemu_irq pintdav);
18
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
21
uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
19
case ARMMMUIdx_E10_0:
22
void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
20
case ARMMMUIdx_E10_1:
23
21
case ARMMMUIdx_E10_1_PAN:
24
-/* stellaris_input.c */
22
- /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
25
-void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
23
- if (!is_secure && (hcr_el2 & HCR_TGE)) {
26
-
24
+ /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */
27
#endif
25
+ if (hcr_el2 & HCR_TGE) {
28
diff --git a/include/hw/input/gamepad.h b/include/hw/input/gamepad.h
26
return true;
29
new file mode 100644
27
}
30
index XXXXXXX..XXXXXXX
28
break;
31
--- /dev/null
32
+++ b/include/hw/input/gamepad.h
33
@@ -XXX,XX +XXX,XX @@
34
+/*
35
+ * Gamepad style buttons connected to IRQ/GPIO lines
36
+ *
37
+ * Copyright (c) 2007 CodeSourcery.
38
+ * Written by Paul Brook
39
+ *
40
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
41
+ * See the COPYING file in the top-level directory.
42
+ */
43
+
44
+#ifndef HW_INPUT_GAMEPAD_H
45
+#define HW_INPUT_GAMEPAD_H
46
+
47
+#include "hw/irq.h"
48
+
49
+/* stellaris_input.c */
50
+void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
51
+
52
+#endif
53
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/arm/stellaris.c
56
+++ b/hw/arm/stellaris.c
57
@@ -XXX,XX +XXX,XX @@
58
#include "hw/sysbus.h"
59
#include "hw/ssi/ssi.h"
60
#include "hw/arm/arm.h"
61
-#include "hw/devices.h"
62
#include "qemu/timer.h"
63
#include "hw/i2c/i2c.h"
64
#include "net/net.h"
65
@@ -XXX,XX +XXX,XX @@
66
#include "sysemu/sysemu.h"
67
#include "hw/arm/armv7m.h"
68
#include "hw/char/pl011.h"
69
+#include "hw/input/gamepad.h"
70
#include "hw/watchdog/cmsdk-apb-watchdog.h"
71
#include "hw/misc/unimp.h"
72
#include "cpu.h"
73
diff --git a/hw/input/stellaris_input.c b/hw/input/stellaris_input.c
74
index XXXXXXX..XXXXXXX 100644
75
--- a/hw/input/stellaris_input.c
76
+++ b/hw/input/stellaris_input.c
77
@@ -XXX,XX +XXX,XX @@
78
*/
79
#include "qemu/osdep.h"
80
#include "hw/hw.h"
81
-#include "hw/devices.h"
82
+#include "hw/input/gamepad.h"
83
#include "ui/console.h"
84
85
typedef struct {
86
diff --git a/MAINTAINERS b/MAINTAINERS
87
index XXXXXXX..XXXXXXX 100644
88
--- a/MAINTAINERS
89
+++ b/MAINTAINERS
90
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
91
L: qemu-arm@nongnu.org
92
S: Maintained
93
F: hw/*/stellaris*
94
+F: include/hw/input/gamepad.h
95
96
Versatile Express
97
M: Peter Maydell <peter.maydell@linaro.org>
98
--
29
--
99
2.20.1
30
2.25.1
100
101
Add a new helper function which returns the MMU index to use
for v7M, where the caller specifies all of the security
state, privilege level and whether the execution priority
is negative, and reimplement the existing
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.

We are going to need this for the lazy-FP-stacking code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-21-peter.maydell@linaro.org
---
 target/arm/cpu.h | 7 +++++++
 target/arm/helper.c | 14 +++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

For page walking, we may require HCR for a security state
that is not "current".

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 20 +++++++++++++-------
 target/arm/helper.c | 11 ++++++++---
 2 files changed, 21 insertions(+), 10 deletions(-)
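The point of the new interface can be seen in a small sketch (illustration
only, not a hunk from this patch; the wrapper function is hypothetical, but
the arm_hcr_el2_eff_secstate() signature matches the one added to cpu.h
below): a stage-1 walk for the Secure regime can ask for the HCR_EL2 bits
that apply to that regime even while the CPU itself is currently
Non-secure, instead of always going through arm_hcr_el2_eff():

    /* Hypothetical helper illustrating the new interface */
    static bool s1_walk_sees_dc(CPUARMState *env, bool is_secure)
    {
        uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);

        /* HCR_EL2.DC: stage 1 behaves as if SCTLR_EL1.M were 0 */
        return (hcr & HCR_DC) != 0;
    }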
16
14
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
17
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
19
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
22
}
20
* Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
21
* This corresponds to the pseudocode EL2Enabled()
22
*/
23
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
24
+{
25
+ return arm_feature(env, ARM_FEATURE_EL2)
26
+ && (!secure || (env->cp15.scr_el3 & SCR_EEL2));
27
+}
28
+
29
static inline bool arm_is_el2_enabled(CPUARMState *env)
30
{
31
- if (arm_feature(env, ARM_FEATURE_EL2)) {
32
- if (arm_is_secure_below_el3(env)) {
33
- return (env->cp15.scr_el3 & SCR_EEL2) != 0;
34
- }
35
- return true;
36
- }
37
- return false;
38
+ return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env));
23
}
39
}
24
40
25
+/*
41
#else
26
+ * Return the MMU index for a v7M CPU with all relevant information
42
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
27
+ * manually specified.
43
return false;
28
+ */
44
}
29
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
45
30
+ bool secstate, bool priv, bool negpri);
46
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
47
+{
48
+ return false;
49
+}
31
+
50
+
32
/* Return the MMU index for a v7M CPU in the specified security and
51
static inline bool arm_is_el2_enabled(CPUARMState *env)
33
* privilege state.
52
{
53
return false;
54
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
55
* "for all purposes other than a direct read or write access of HCR_EL2."
56
* Not included here is HCR_RW.
34
*/
57
*/
58
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure);
59
uint64_t arm_hcr_el2_eff(CPUARMState *env);
60
uint64_t arm_hcrx_el2_eff(CPUARMState *env);
61
35
diff --git a/target/arm/helper.c b/target/arm/helper.c
62
diff --git a/target/arm/helper.c b/target/arm/helper.c
36
index XXXXXXX..XXXXXXX 100644
63
index XXXXXXX..XXXXXXX 100644
37
--- a/target/arm/helper.c
64
--- a/target/arm/helper.c
38
+++ b/target/arm/helper.c
65
+++ b/target/arm/helper.c
39
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
66
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
40
return 0;
41
}
67
}
42
68
43
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
69
/*
44
- bool secstate, bool priv)
70
- * Return the effective value of HCR_EL2.
45
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
71
+ * Return the effective value of HCR_EL2, at the given security state.
46
+ bool secstate, bool priv, bool negpri)
72
* Bits that are not included here:
73
* RW (read from SCR_EL3.RW as needed)
74
*/
75
-uint64_t arm_hcr_el2_eff(CPUARMState *env)
76
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure)
47
{
77
{
48
ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
78
uint64_t ret = env->cp15.hcr_el2;
49
79
50
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
80
- if (!arm_is_el2_enabled(env)) {
51
mmu_idx |= ARM_MMU_IDX_M_PRIV;
81
+ if (!arm_is_el2_enabled_secstate(env, secure)) {
52
}
82
/*
53
83
* "This register has no effect if EL2 is not enabled in the
54
- if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
84
* current Security state". This is ARMv8.4-SecEL2 speak for
55
+ if (negpri) {
85
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
56
mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
86
return ret;
57
}
58
59
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
60
return mmu_idx;
61
}
87
}
62
88
63
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
89
+uint64_t arm_hcr_el2_eff(CPUARMState *env)
64
+ bool secstate, bool priv)
65
+{
90
+{
66
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
91
+ return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env));
67
+
68
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
69
+}
92
+}
70
+
93
+
71
/* Return the MMU index for a v7M CPU in the specified security state */
94
/*
72
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
95
* Corresponds to ARM pseudocode function ELIsInHost().
73
{
96
*/
74
--
97
--
75
2.20.1
98
2.25.1
76
77
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-7-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 14 --------------
 include/hw/misc/cbus.h | 32 ++++++++++++++++++++++++++++++++
 hw/arm/nseries.c | 1 +
 hw/misc/cbus.c | 2 +-
 MAINTAINERS | 1 +
 5 files changed, 35 insertions(+), 15 deletions(-)
 create mode 100644 include/hw/misc/cbus.h

From: Richard Henderson <richard.henderson@linaro.org>

Rename the argument to is_secure_ptr, and introduce a
local variable is_secure with the value. We only write
back to the pointer toward the end of the function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)
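In outline (an illustrative sketch of the idiom rather than code from the
patch; apart from is_secure_ptr/is_secure the names are invented), the
rework follows the usual "work on a local copy, write back once" pattern:

    static hwaddr walk_step(CPUARMState *env, hwaddr addr, bool *is_secure_ptr)
    {
        bool is_secure = *is_secure_ptr;   /* take a local copy up front */

        /* ... the walk reads, and may update, is_secure ... */

        *is_secure_ptr = is_secure;        /* single write-back at the end */
        return addr;
    }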
16
14
17
diff --git a/include/hw/devices.h b/include/hw/devices.h
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/devices.h
17
--- a/target/arm/ptw.c
20
+++ b/include/hw/devices.h
18
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
19
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
22
/* stellaris_input.c */
20
23
void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
21
/* Translate a S1 pagetable walk through S2 if needed. */
24
22
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
25
-/* cbus.c */
23
- hwaddr addr, bool *is_secure,
26
-typedef struct {
24
+ hwaddr addr, bool *is_secure_ptr,
27
- qemu_irq clk;
25
ARMMMUFaultInfo *fi)
28
- qemu_irq dat;
26
{
29
- qemu_irq sel;
27
- ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
30
-} CBus;
28
+ bool is_secure = *is_secure_ptr;
31
-CBus *cbus_init(qemu_irq dat_out);
29
+ ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
32
-void cbus_attach(CBus *bus, void *slave_opaque);
30
33
-
31
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
34
-void *retu_init(qemu_irq irq, int vilma);
32
- !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
35
-void *tahvo_init(qemu_irq irq, int betty);
33
+ !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
36
-
34
GetPhysAddrResult s2 = {};
37
-void retu_key_event(void *retu, int state);
35
int ret;
38
-
36
39
#endif
37
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
40
diff --git a/include/hw/misc/cbus.h b/include/hw/misc/cbus.h
38
- *is_secure, false, &s2, fi);
41
new file mode 100644
39
+ is_secure, false, &s2, fi);
42
index XXXXXXX..XXXXXXX
40
if (ret) {
43
--- /dev/null
41
assert(fi->type != ARMFault_None);
44
+++ b/include/hw/misc/cbus.h
42
fi->s2addr = addr;
45
@@ -XXX,XX +XXX,XX @@
43
fi->stage2 = true;
46
+/*
44
fi->s1ptw = true;
47
+ * CBUS three-pin bus and the Retu / Betty / Tahvo / Vilma / Avilma /
45
- fi->s1ns = !*is_secure;
48
+ * Hinku / Vinku / Ahne / Pihi chips used in various Nokia platforms.
46
+ fi->s1ns = !is_secure;
49
+ * Based on reverse-engineering of a linux driver.
47
return ~0;
50
+ *
48
}
51
+ * Copyright (C) 2008 Nokia Corporation
49
if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
52
+ * Written by Andrzej Zaborowski
50
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
53
+ *
51
fi->s2addr = addr;
54
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
52
fi->stage2 = true;
55
+ * See the COPYING file in the top-level directory.
53
fi->s1ptw = true;
56
+ */
54
- fi->s1ns = !*is_secure;
57
+
55
+ fi->s1ns = !is_secure;
58
+#ifndef HW_MISC_CBUS_H
56
return ~0;
59
+#define HW_MISC_CBUS_H
57
}
60
+
58
61
+#include "hw/irq.h"
59
if (arm_is_secure_below_el3(env)) {
62
+
60
/* Check if page table walk is to secure or non-secure PA space. */
63
+typedef struct {
61
- if (*is_secure) {
64
+ qemu_irq clk;
62
- *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
65
+ qemu_irq dat;
63
+ if (is_secure) {
66
+ qemu_irq sel;
64
+ is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
67
+} CBus;
65
} else {
68
+
66
- *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
69
+CBus *cbus_init(qemu_irq dat_out);
67
+ is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
70
+void cbus_attach(CBus *bus, void *slave_opaque);
68
}
71
+
69
+ *is_secure_ptr = is_secure;
72
+void *retu_init(qemu_irq irq, int vilma);
70
} else {
73
+void *tahvo_init(qemu_irq irq, int betty);
71
- assert(!*is_secure);
74
+
72
+ assert(!is_secure);
75
+void retu_key_event(void *retu, int state);
73
}
76
+
74
77
+#endif
75
addr = s2.phys;
78
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
79
index XXXXXXX..XXXXXXX 100644
80
--- a/hw/arm/nseries.c
81
+++ b/hw/arm/nseries.c
82
@@ -XXX,XX +XXX,XX @@
83
#include "hw/i2c/i2c.h"
84
#include "hw/devices.h"
85
#include "hw/display/blizzard.h"
86
+#include "hw/misc/cbus.h"
87
#include "hw/misc/tmp105.h"
88
#include "hw/block/flash.h"
89
#include "hw/hw.h"
90
diff --git a/hw/misc/cbus.c b/hw/misc/cbus.c
91
index XXXXXXX..XXXXXXX 100644
92
--- a/hw/misc/cbus.c
93
+++ b/hw/misc/cbus.c
94
@@ -XXX,XX +XXX,XX @@
95
#include "qemu/osdep.h"
96
#include "hw/hw.h"
97
#include "hw/irq.h"
98
-#include "hw/devices.h"
99
+#include "hw/misc/cbus.h"
100
#include "sysemu/sysemu.h"
101
102
//#define DEBUG
103
diff --git a/MAINTAINERS b/MAINTAINERS
104
index XXXXXXX..XXXXXXX 100644
105
--- a/MAINTAINERS
106
+++ b/MAINTAINERS
107
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
108
F: hw/misc/cbus.c
109
F: hw/timer/twl92230.c
110
F: include/hw/display/blizzard.h
111
+F: include/hw/misc/cbus.h
112
113
Palm
114
M: Andrzej Zaborowski <balrogg@gmail.com>
115
--
76
--
116
2.20.1
77
2.25.1
117
118
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Add entries for the Blizzard device in MAINTAINERS.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-6-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 7 -------
 include/hw/display/blizzard.h | 22 ++++++++++++++++++++++
 hw/arm/nseries.c | 1 +
 hw/display/blizzard.c | 2 +-
 MAINTAINERS | 2 ++
 5 files changed, 26 insertions(+), 8 deletions(-)
 create mode 100644 include/hw/display/blizzard.h

From: Richard Henderson <richard.henderson@linaro.org>

This value is unused.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
18
12
19
diff --git a/include/hw/devices.h b/include/hw/devices.h
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
20
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
21
--- a/include/hw/devices.h
15
--- a/target/arm/ptw.c
22
+++ b/include/hw/devices.h
16
+++ b/target/arm/ptw.c
23
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
17
@@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
24
/* stellaris_input.c */
18
* s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
25
void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
19
* combined attributes in MAIR_EL1 format.
26
20
*/
27
-/* blizzard.c */
21
-static uint8_t combined_attrs_fwb(CPUARMState *env,
28
-void *s1d13745_init(qemu_irq gpio_int);
22
- ARMCacheAttrs s1, ARMCacheAttrs s2)
29
-void s1d13745_write(void *opaque, int dc, uint16_t value);
23
+static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
30
-void s1d13745_write_block(void *opaque, int dc,
24
{
31
- void *buf, size_t len, int pitch);
25
switch (s2.attrs) {
32
-uint16_t s1d13745_read(void *opaque, int dc);
26
case 7:
33
-
27
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
34
/* cbus.c */
28
35
typedef struct {
29
/* Combine memory type and cacheability attributes */
36
qemu_irq clk;
30
if (arm_hcr_el2_eff(env) & HCR_FWB) {
37
diff --git a/include/hw/display/blizzard.h b/include/hw/display/blizzard.h
31
- ret.attrs = combined_attrs_fwb(env, s1, s2);
38
new file mode 100644
32
+ ret.attrs = combined_attrs_fwb(s1, s2);
39
index XXXXXXX..XXXXXXX
33
} else {
40
--- /dev/null
34
ret.attrs = combined_attrs_nofwb(env, s1, s2);
41
+++ b/include/hw/display/blizzard.h
35
}
42
@@ -XXX,XX +XXX,XX @@
43
+/*
44
+ * Epson S1D13744/S1D13745 (Blizzard/Hailstorm/Tornado) LCD/TV controller.
45
+ *
46
+ * Copyright (C) 2008 Nokia Corporation
47
+ * Written by Andrzej Zaborowski
48
+ *
49
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
50
+ * See the COPYING file in the top-level directory.
51
+ */
52
+
53
+#ifndef HW_DISPLAY_BLIZZARD_H
54
+#define HW_DISPLAY_BLIZZARD_H
55
+
56
+#include "hw/irq.h"
57
+
58
+void *s1d13745_init(qemu_irq gpio_int);
59
+void s1d13745_write(void *opaque, int dc, uint16_t value);
60
+void s1d13745_write_block(void *opaque, int dc,
61
+ void *buf, size_t len, int pitch);
62
+uint16_t s1d13745_read(void *opaque, int dc);
63
+
64
+#endif
65
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/hw/arm/nseries.c
68
+++ b/hw/arm/nseries.c
69
@@ -XXX,XX +XXX,XX @@
70
#include "hw/boards.h"
71
#include "hw/i2c/i2c.h"
72
#include "hw/devices.h"
73
+#include "hw/display/blizzard.h"
74
#include "hw/misc/tmp105.h"
75
#include "hw/block/flash.h"
76
#include "hw/hw.h"
77
diff --git a/hw/display/blizzard.c b/hw/display/blizzard.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/hw/display/blizzard.c
80
+++ b/hw/display/blizzard.c
81
@@ -XXX,XX +XXX,XX @@
82
#include "qemu/osdep.h"
83
#include "qemu-common.h"
84
#include "ui/console.h"
85
-#include "hw/devices.h"
86
+#include "hw/display/blizzard.h"
87
#include "ui/pixel_ops.h"
88
89
typedef void (*blizzard_fn_t)(uint8_t *, const uint8_t *, unsigned int);
90
diff --git a/MAINTAINERS b/MAINTAINERS
91
index XXXXXXX..XXXXXXX 100644
92
--- a/MAINTAINERS
93
+++ b/MAINTAINERS
94
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
95
L: qemu-arm@nongnu.org
96
S: Odd Fixes
97
F: hw/arm/nseries.c
98
+F: hw/display/blizzard.c
99
F: hw/input/lm832x.c
100
F: hw/input/tsc2005.c
101
F: hw/misc/cbus.c
102
F: hw/timer/twl92230.c
103
+F: include/hw/display/blizzard.h
104
105
Palm
106
M: Andrzej Zaborowski <balrogg@gmail.com>
107
--
36
--
108
2.20.1
37
2.25.1
109
110
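Similarly, a minimal sketch of how a board might use the relocated Blizzard
declarations; the function and variable names here are placeholders rather
than the actual nseries hookup:

    #include "hw/display/blizzard.h"

    static void *board_blizzard_init(qemu_irq gpio_int)
    {
        void *lcd = s1d13745_init(gpio_int);

        /* 'dc' selects between the command/register and data channels */
        s1d13745_write(lcd, 0, 0x0000);
        (void)s1d13745_read(lcd, 0);
        return lcd;
    }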
1
Like AArch64, M-profile floating point has no FPEXC enable
1
From: Richard Henderson <richard.henderson@linaro.org>
2
bit to gate floating point, so always set the VFPEN TB flag.
3
2
4
M-profile also has CPACR and NSACR similar to A-profile;
3
These subroutines did not need ENV for anything except
5
they behave slightly differently:
4
retrieving the effective value of HCR anyway.
6
* the CPACR is banked between Secure and Non-Secure
7
* if the NSACR forces a trap then this is taken to
8
the Secure state, not the Non-Secure state
9
5
10
Honour the CPACR and NSACR settings. The NSACR handling
6
We have computed the effective value of HCR in the callers,
11
requires us to borrow the exception.target_el field
7
and this will be especially important for interpreting HCR
12
(usually meaningless for M profile) to distinguish the
8
in a non-current security state.
13
NOCP UsageFault taken to Secure state from the more
14
usual fault taken to the current security state.
15
9
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20221001162318.153420-17-richard.henderson@linaro.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
19
---
14
---
20
target/arm/helper.c | 55 +++++++++++++++++++++++++++++++++++++++---
15
target/arm/ptw.c | 30 +++++++++++++++++-------------
21
target/arm/translate.c | 10 ++++++--
16
1 file changed, 17 insertions(+), 13 deletions(-)
22
2 files changed, 60 insertions(+), 5 deletions(-)
23
17
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
25
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/helper.c
20
--- a/target/arm/ptw.c
27
+++ b/target/arm/helper.c
21
+++ b/target/arm/ptw.c
28
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
22
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
29
return target_el;
23
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
30
}
24
}
31
25
32
+/*
26
-static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
33
+ * Return true if the v7M CPACR permits access to the FPU for the specified
27
+static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
34
+ * security state and privilege level.
28
{
35
+ */
29
/*
36
+static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
30
* For an S1 page table walk, the stage 1 attributes are always
37
+{
31
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
38
+ switch (extract32(env->v7m.cpacr[is_secure], 20, 2)) {
32
* when cacheattrs.attrs bit [2] is 0.
39
+ case 0:
33
*/
40
+ case 2: /* UNPREDICTABLE: we treat like 0 */
34
assert(cacheattrs.is_s2_format);
41
+ return false;
35
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
42
+ case 1:
36
+ if (hcr & HCR_FWB) {
43
+ return is_priv;
37
return (cacheattrs.attrs & 0x4) == 0;
44
+ case 3:
38
} else {
45
+ return true;
39
return (cacheattrs.attrs & 0xc) == 0;
46
+ default:
40
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
47
+ g_assert_not_reached();
41
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
48
+ }
42
!regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
49
+}
43
GetPhysAddrResult s2 = {};
44
+ uint64_t hcr;
45
int ret;
46
47
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
48
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
49
fi->s1ns = !is_secure;
50
return ~0;
51
}
52
- if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
53
- ptw_attrs_are_device(env, s2.cacheattrs)) {
50
+
54
+
51
static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
55
+ hcr = arm_hcr_el2_eff(env);
52
ARMMMUIdx mmu_idx, bool ignfault)
56
+ if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
57
/*
58
* PTW set and S1 walk touched S2 Device memory:
59
* generate Permission fault.
60
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
61
* ref: shared/translation/attrs/S2AttrDecode()
62
* .../S2ConvertAttrsHints()
63
*/
64
-static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
65
+static uint8_t convert_stage2_attrs(uint64_t hcr, uint8_t s2attrs)
53
{
66
{
54
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
67
uint8_t hiattr = extract32(s2attrs, 2, 2);
55
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
68
uint8_t loattr = extract32(s2attrs, 0, 2);
56
break;
69
uint8_t hihint = 0, lohint = 0;
57
case EXCP_NOCP:
70
58
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
71
if (hiattr != 0) { /* normal memory */
59
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
72
- if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
60
+ {
73
+ if (hcr & HCR_CD) { /* cache disabled */
61
+ /*
74
hiattr = loattr = 1; /* non-cacheable */
62
+ * NOCP might be directed to something other than the current
75
} else {
63
+ * security state if this fault is because of NSACR; we indicate
76
if (hiattr != 1) { /* Write-through or write-back */
64
+ * the target security state using exception.target_el.
77
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
65
+ */
78
* s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
66
+ int target_secstate;
79
* combined attributes in MAIR_EL1 format.
67
+
80
*/
68
+ if (env->exception.target_el == 3) {
81
-static uint8_t combined_attrs_nofwb(CPUARMState *env,
69
+ target_secstate = M_REG_S;
82
+static uint8_t combined_attrs_nofwb(uint64_t hcr,
70
+ } else {
83
ARMCacheAttrs s1, ARMCacheAttrs s2)
71
+ target_secstate = env->v7m.secure;
84
{
72
+ }
85
uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
73
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
86
74
+ env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
87
- s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
75
break;
88
+ s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs);
76
+ }
89
77
case EXCP_INVSTATE:
90
s1lo = extract32(s1.attrs, 0, 4);
78
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
91
s2lo = extract32(s2_mair_attrs, 0, 4);
79
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
92
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2)
80
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
93
* @s1: Attributes from stage 1 walk
81
return 0;
94
* @s2: Attributes from stage 2 walk
95
*/
96
-static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
97
+static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
98
ARMCacheAttrs s1, ARMCacheAttrs s2)
99
{
100
ARMCacheAttrs ret;
101
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
82
}
102
}
83
103
84
+ if (arm_feature(env, ARM_FEATURE_M)) {
104
/* Combine memory type and cacheability attributes */
85
+ /* CPACR can cause a NOCP UsageFault taken to current security state */
105
- if (arm_hcr_el2_eff(env) & HCR_FWB) {
86
+ if (!v7m_cpacr_pass(env, env->v7m.secure, cur_el != 0)) {
106
+ if (hcr & HCR_FWB) {
87
+ return 1;
107
ret.attrs = combined_attrs_fwb(s1, s2);
88
+ }
108
} else {
89
+
109
- ret.attrs = combined_attrs_nofwb(env, s1, s2);
90
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure) {
110
+ ret.attrs = combined_attrs_nofwb(hcr, s1, s2);
91
+ if (!extract32(env->v7m.nsacr, 10, 1)) {
92
+ /* FP insns cause a NOCP UsageFault taken to Secure */
93
+ return 3;
94
+ }
95
+ }
96
+
97
+ return 0;
98
+ }
99
+
100
/* The CPACR controls traps to EL1, or PL1 if we're 32 bit:
101
* 0, 2 : trap EL0 and EL1/PL1 accesses
102
* 1 : trap only EL0 accesses
103
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
104
flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
105
flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
106
if (env->vfp.xregs[ARM_VFP_FPEXC] & (1 << 30)
107
- || arm_el_is_aa64(env, 1)) {
108
+ || arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
109
flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
110
}
111
flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
112
diff --git a/target/arm/translate.c b/target/arm/translate.c
113
index XXXXXXX..XXXXXXX 100644
114
--- a/target/arm/translate.c
115
+++ b/target/arm/translate.c
116
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
117
* for attempts to execute invalid vfp/neon encodings with FP disabled.
118
*/
119
if (s->fp_excp_el) {
120
- gen_exception_insn(s, 4, EXCP_UDEF,
121
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
122
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
123
+ gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
124
+ s->fp_excp_el);
125
+ } else {
126
+ gen_exception_insn(s, 4, EXCP_UDEF,
127
+ syn_fp_access_trap(1, 0xe, false),
128
+ s->fp_excp_el);
129
+ }
130
return 0;
131
}
111
}
132
112
113
/*
114
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
115
ARMCacheAttrs cacheattrs1;
116
ARMMMUIdx s2_mmu_idx;
117
bool is_el0;
118
+ uint64_t hcr;
119
120
ret = get_phys_addr_with_secure(env, address, access_type,
121
s1_mmu_idx, is_secure, result, fi);
122
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
123
}
124
125
/* Combine the S1 and S2 cache attributes. */
126
- if (arm_hcr_el2_eff(env) & HCR_DC) {
127
+ hcr = arm_hcr_el2_eff(env);
128
+ if (hcr & HCR_DC) {
129
/*
130
* HCR.DC forces the first stage attributes to
131
* Normal Non-Shareable,
132
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
133
}
134
cacheattrs1.shareability = 0;
135
}
136
- result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
137
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
138
result->cacheattrs);
139
140
/*
133
--
141
--
134
2.20.1
142
2.25.1
135
136
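To restate the access rule that the new v7m_cpacr_pass() helper above
encodes, standalone and only for readability (the helper name below is
illustrative, the logic is the same as in the hunk):

    /*
     * CPACR.CP10 (bits [21:20], banked between Secure and Non-secure)
     * gates FPU access on M-profile:
     *   0b00, 0b10 -> no access (0b10 is UNPREDICTABLE, treated as 0b00)
     *   0b01       -> privileged access only
     *   0b11       -> full access
     */
    static bool cpacr_cp10_allows(uint32_t cpacr, bool is_priv)
    {
        switch (extract32(cpacr, 20, 2)) {
        case 0:
        case 2:
            return false;
        case 1:
            return is_priv;
        case 3:
            return true;
        default:
            g_assert_not_reached();
        }
    }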
Deleted patch
1
Correct the decode of the M-profile "coprocessor and
2
floating-point instructions" space:
3
* op0 == 0b11 is always unallocated
4
* if the CPU has an FPU then all insns with op1 == 0b101
5
are floating point and go to disas_vfp_insn()
6
1
7
For the moment we leave VLLDM and VLSTM as NOPs; in
8
a later commit we will fill in the proper implementation
9
for the case where an FPU is present.
10
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20190416125744.27770-7-peter.maydell@linaro.org
14
---
15
target/arm/translate.c | 26 ++++++++++++++++++++++----
16
1 file changed, 22 insertions(+), 4 deletions(-)
17
18
diff --git a/target/arm/translate.c b/target/arm/translate.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate.c
21
+++ b/target/arm/translate.c
22
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
23
case 6: case 7: case 14: case 15:
24
/* Coprocessor. */
25
if (arm_dc_feature(s, ARM_FEATURE_M)) {
26
- /* We don't currently implement M profile FP support,
27
- * so this entire space should give a NOCP fault, with
28
- * the exception of the v8M VLLDM and VLSTM insns, which
29
- * must be NOPs in Secure state and UNDEF in Nonsecure state.
30
+ /* 0b111x_11xx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx */
31
+ if (extract32(insn, 24, 2) == 3) {
32
+ goto illegal_op; /* op0 = 0b11 : unallocated */
33
+ }
34
+
35
+ /*
36
+ * Decode VLLDM and VLSTM first: these are nonstandard because:
37
+ * * if there is no FPU then these insns must NOP in
38
+ * Secure state and UNDEF in Nonsecure state
39
+ * * if there is an FPU then these insns do not have
40
+ * the usual behaviour that disas_vfp_insn() provides of
41
+ * being controlled by CPACR/NSACR enable bits or the
42
+ * lazy-stacking logic.
43
*/
44
if (arm_dc_feature(s, ARM_FEATURE_V8) &&
45
(insn & 0xffa00f00) == 0xec200a00) {
46
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
47
/* Just NOP since FP support is not implemented */
48
break;
49
}
50
+ if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
51
+ ((insn >> 8) & 0xe) == 10) {
52
+ /* FP, and the CPU supports it */
53
+ if (disas_vfp_insn(s, insn)) {
54
+ goto illegal_op;
55
+ }
56
+ break;
57
+ }
58
+
59
/* All other insns: NOCP */
60
gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
61
default_exception_el(s));
62
--
63
2.20.1
64
65
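The decode fix described above boils down to two field checks; a sketch of
just those checks (variable names illustrative, field positions per the
Thumb2 coprocessor encoding used in the hunk):

    /* insn[25:24] is op0: 0b11 is always unallocated on M-profile */
    bool unallocated = extract32(insn, 24, 2) == 3;

    /* insn[11:8] is the coprocessor number: 0b101x (cp10/cp11) is the FP space */
    bool is_fp_space = ((insn >> 8) & 0xe) == 10;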
1
Currently the code in v7m_push_stack() that detects a violation
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of the v8M stack limit simply returns early when it finds one. This
3
is OK for the current integer-only code, but won't work for the
4
floating point handling we're about to add. We need to continue
5
executing the rest of the function so that we check for other
6
exceptions like not having permission to use the FPU and so
7
that we correctly set the FPCCR state if we are doing lazy
8
stacking. Refactor to avoid the early return.
9
2
3
Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
4
that we use is_secure instead of the current security state.
5
These AT* operations have been broken since arm_hcr_el2_eff
6
gained a check for "el2 enabled" for Secure EL2.
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20221001162318.153420-18-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20190416125744.27770-10-peter.maydell@linaro.org
13
---
12
---
14
target/arm/helper.c | 23 ++++++++++++++++++-----
13
target/arm/ptw.c | 8 ++++----
15
1 file changed, 18 insertions(+), 5 deletions(-)
14
1 file changed, 4 insertions(+), 4 deletions(-)
16
15
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
18
--- a/target/arm/ptw.c
20
+++ b/target/arm/helper.c
19
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
20
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
22
* should ignore further stack faults trying to process
23
* that derived exception.)
24
*/
25
- bool stacked_ok;
26
+ bool stacked_ok = true, limitviol = false;
27
CPUARMState *env = &cpu->env;
28
uint32_t xpsr = xpsr_read(env);
29
uint32_t frameptr = env->regs[13];
30
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
31
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
32
env->v7m.secure);
33
env->regs[13] = limit;
34
- return true;
35
+ /*
36
+ * We won't try to perform any further memory accesses but
37
+ * we must continue through the following code to check for
38
+ * permission faults during FPU state preservation, and we
39
+ * must update FPCCR if lazy stacking is enabled.
40
+ */
41
+ limitviol = true;
42
+ stacked_ok = false;
43
}
21
}
44
}
22
}
45
23
46
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
24
- hcr_el2 = arm_hcr_el2_eff(env);
47
* (which may be taken in preference to the one we started with
25
+ hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure);
48
* if it has higher priority).
26
49
*/
27
switch (mmu_idx) {
50
- stacked_ok =
28
case ARMMMUIdx_Stage2:
51
+ stacked_ok = stacked_ok &&
29
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
52
v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
30
return ~0;
53
v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
31
}
54
v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
32
55
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
33
- hcr = arm_hcr_el2_eff(env);
56
v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
34
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
57
v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
35
if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
58
36
/*
59
- /* Update SP regardless of whether any of the stack accesses failed. */
37
* PTW set and S1 walk touched S2 Device memory:
60
- env->regs[13] = frameptr;
38
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
61
+ /*
39
}
62
+ * If we broke a stack limit then SP was already updated earlier;
40
63
+ * otherwise we update SP regardless of whether any of the stack
41
/* Combine the S1 and S2 cache attributes. */
64
+ * accesses failed or we took some other kind of fault.
42
- hcr = arm_hcr_el2_eff(env);
65
+ */
43
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
66
+ if (!limitviol) {
44
if (hcr & HCR_DC) {
67
+ env->regs[13] = frameptr;
45
/*
68
+ }
46
* HCR.DC forces the first stage attributes to
69
47
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
70
return !stacked_ok;
48
result->page_size = TARGET_PAGE_SIZE;
71
}
49
50
/* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
51
- hcr = arm_hcr_el2_eff(env);
52
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
53
result->cacheattrs.shareability = 0;
54
result->cacheattrs.is_s2_format = false;
55
if (hcr & HCR_DC) {
72
--
56
--
73
2.20.1
57
2.25.1
74
75
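The shape of the v7m_push_stack() refactoring is easiest to see in
isolation; a condensed sketch of the new control flow, relying on the
surrounding function's locals and not the full function from the hunk:

    bool stacked_ok = true, limitviol = false;

    if (frameptr < limit) {
        /* Stack limit violation: pend the UsageFault but keep going so
         * the FP permission checks and FPCCR update still happen. */
        env->regs[13] = limit;
        limitviol = true;
        stacked_ok = false;
    }

    stacked_ok = stacked_ok &&
        v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
        v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false);
        /* ... remaining frame words ... */

    if (!limitviol) {
        /* SP was already clamped to the limit in the violation case */
        env->regs[13] = frameptr;
    }
    return !stacked_ok;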
1
In the v7M architecture, if an exception is generated in the process
1
From: Richard Henderson <richard.henderson@linaro.org>
2
of doing the lazy stacking of FP registers, the handling of
3
possible escalation to HardFault is treated differently to the normal
4
approach: it works based on the saved information about exception
5
readiness that was stored in the FPCCR when the stack frame was
6
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
7
which pends exceptions during lazy stacking, and implements
8
this logic.
9
2
10
This corresponds to the pseudocode TakePreserveFPException().
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20221001162318.153420-19-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/ptw.c | 138 +++++++++++++++++++++++++----------------------
9
1 file changed, 74 insertions(+), 64 deletions(-)
11
10
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20190416125744.27770-22-peter.maydell@linaro.org
15
---
16
target/arm/cpu.h | 12 ++++++
17
hw/intc/armv7m_nvic.c | 96 +++++++++++++++++++++++++++++++++++++++++++
18
2 files changed, 108 insertions(+)
19
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
13
--- a/target/arm/ptw.c
23
+++ b/target/arm/cpu.h
14
+++ b/target/arm/ptw.c
24
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
15
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
25
* a different exception).
16
return ret;
26
*/
17
}
27
void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
18
28
+/**
19
+/*
29
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
20
+ * MMU disabled. S1 addresses within aa64 translation regimes are
30
+ * @opaque: the NVIC
21
+ * still checked for bounds -- see AArch64.S1DisabledOutput().
31
+ * @irq: the exception number to mark pending
32
+ * @secure: false for non-banked exceptions or for the nonsecure
33
+ * version of a banked exception, true for the secure version of a banked
34
+ * exception.
35
+ *
36
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
37
+ * generated in the course of lazy stacking of FP registers.
38
+ */
22
+ */
39
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
23
+static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
40
/**
24
+ MMUAccessType access_type,
41
* armv7m_nvic_get_pending_irq_info: return highest priority pending
25
+ ARMMMUIdx mmu_idx, bool is_secure,
42
* exception, and whether it targets Secure state
26
+ GetPhysAddrResult *result,
43
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
27
+ ARMMMUFaultInfo *fi)
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/intc/armv7m_nvic.c
46
+++ b/hw/intc/armv7m_nvic.c
47
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
48
do_armv7m_nvic_set_pending(opaque, irq, secure, true);
49
}
50
51
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
52
+{
28
+{
53
+ /*
29
+ uint64_t hcr;
54
+ * Pend an exception during lazy FP stacking. This differs
30
+ uint8_t memattr;
55
+ * from the usual exception pending because the logic for
56
+ * whether we should escalate depends on the saved context
57
+ * in the FPCCR register, not on the current state of the CPU/NVIC.
58
+ */
59
+ NVICState *s = (NVICState *)opaque;
60
+ bool banked = exc_is_banked(irq);
61
+ VecInfo *vec;
62
+ bool targets_secure;
63
+ bool escalate = false;
64
+ /*
65
+ * We will only look at bits in fpccr if this is a banked exception
66
+ * (in which case 'secure' tells us whether it is the S or NS version).
67
+ * All the bits for the non-banked exceptions are in fpccr_s.
68
+ */
69
+ uint32_t fpccr_s = s->cpu->env.v7m.fpccr[M_REG_S];
70
+ uint32_t fpccr = s->cpu->env.v7m.fpccr[secure];
71
+
31
+
72
+ assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
32
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
73
+ assert(!secure || banked);
33
+ int r_el = regime_el(env, mmu_idx);
34
+ if (arm_el_is_aa64(env, r_el)) {
35
+ int pamax = arm_pamax(env_archcpu(env));
36
+ uint64_t tcr = env->cp15.tcr_el[r_el];
37
+ int addrtop, tbi;
74
+
38
+
75
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
39
+ tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
40
+ if (access_type == MMU_INST_FETCH) {
41
+ tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
42
+ }
43
+ tbi = (tbi >> extract64(address, 55, 1)) & 1;
44
+ addrtop = (tbi ? 55 : 63);
76
+
45
+
77
+ targets_secure = banked ? secure : exc_targets_secure(s, irq);
46
+ if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
47
+ fi->type = ARMFault_AddressSize;
48
+ fi->level = 0;
49
+ fi->stage2 = false;
50
+ return 1;
51
+ }
78
+
52
+
79
+ switch (irq) {
53
+ /*
80
+ case ARMV7M_EXCP_DEBUG:
54
+ * When TBI is disabled, we've just validated that all of the
81
+ if (!(fpccr_s & R_V7M_FPCCR_MONRDY_MASK)) {
55
+ * bits above PAMax are zero, so logically we only need to
82
+ /* Ignore DebugMonitor exception */
56
+ * clear the top byte for TBI. But it's clearer to follow
83
+ return;
57
+ * the pseudocode set of addrdesc.paddress.
84
+ }
58
+ */
85
+ break;
59
+ address = extract64(address, 0, 52);
86
+ case ARMV7M_EXCP_MEM:
87
+ escalate = !(fpccr & R_V7M_FPCCR_MMRDY_MASK);
88
+ break;
89
+ case ARMV7M_EXCP_USAGE:
90
+ escalate = !(fpccr & R_V7M_FPCCR_UFRDY_MASK);
91
+ break;
92
+ case ARMV7M_EXCP_BUS:
93
+ escalate = !(fpccr_s & R_V7M_FPCCR_BFRDY_MASK);
94
+ break;
95
+ case ARMV7M_EXCP_SECURE:
96
+ escalate = !(fpccr_s & R_V7M_FPCCR_SFRDY_MASK);
97
+ break;
98
+ default:
99
+ g_assert_not_reached();
100
+ }
101
+
102
+ if (escalate) {
103
+ /*
104
+ * Escalate to HardFault: faults that initially targeted Secure
105
+ * continue to do so, even if HF normally targets NonSecure.
106
+ */
107
+ irq = ARMV7M_EXCP_HARD;
108
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
109
+ (targets_secure ||
110
+ !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK))) {
111
+ vec = &s->sec_vectors[irq];
112
+ } else {
113
+ vec = &s->vectors[irq];
114
+ }
60
+ }
115
+ }
61
+ }
116
+
62
+
117
+ if (!vec->enabled ||
63
+ result->phys = address;
118
+ nvic_exec_prio(s) <= exc_group_prio(s, vec->prio, secure)) {
64
+ result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
119
+ if (!(fpccr_s & R_V7M_FPCCR_HFRDY_MASK)) {
65
+ result->page_size = TARGET_PAGE_SIZE;
120
+ /*
66
+
121
+ * We want to escalate to HardFault but the context the
67
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
122
+ * FP state belongs to prevents the exception pre-empting.
68
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
123
+ */
69
+ result->cacheattrs.shareability = 0;
124
+ cpu_abort(&s->cpu->parent_obj,
70
+ result->cacheattrs.is_s2_format = false;
125
+ "Lockup: can't escalate to HardFault during "
71
+ if (hcr & HCR_DC) {
126
+ "lazy FP register stacking\n");
72
+ if (hcr & HCR_DCT) {
73
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
74
+ } else {
75
+ memattr = 0xff; /* Normal, WB, RWA */
127
+ }
76
+ }
77
+ } else if (access_type == MMU_INST_FETCH) {
78
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
79
+ memattr = 0xee; /* Normal, WT, RA, NT */
80
+ } else {
81
+ memattr = 0x44; /* Normal, NC, No */
82
+ }
83
+ result->cacheattrs.shareability = 2; /* outer sharable */
84
+ } else {
85
+ memattr = 0x00; /* Device, nGnRnE */
128
+ }
86
+ }
129
+
87
+ result->cacheattrs.attrs = memattr;
130
+ if (escalate) {
88
+ return 0;
131
+ s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
132
+ }
133
+ if (!vec->pending) {
134
+ vec->pending = 1;
135
+ /*
136
+ * We do not call nvic_irq_update(), because we know our caller
137
+ * is going to handle causing us to take the exception by
138
+ * raising EXCP_LAZYFP, so raising the IRQ line would be
139
+ * pointless extra work. We just need to recompute the
140
+ * priorities so that armv7m_nvic_can_take_pending_exception()
141
+ * returns the right answer.
142
+ */
143
+ nvic_recompute_state(s);
144
+ }
145
+}
89
+}
146
+
90
+
147
/* Make pending IRQ active. */
91
bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
148
void armv7m_nvic_acknowledge_irq(void *opaque)
92
MMUAccessType access_type, ARMMMUIdx mmu_idx,
149
{
93
bool is_secure, GetPhysAddrResult *result,
94
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
95
/* Definitely a real MMU, not an MPU */
96
97
if (regime_translation_disabled(env, mmu_idx, is_secure)) {
98
- uint64_t hcr;
99
- uint8_t memattr;
100
-
101
- /*
102
- * MMU disabled. S1 addresses within aa64 translation regimes are
103
- * still checked for bounds -- see AArch64.TranslateAddressS1Off.
104
- */
105
- if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
106
- int r_el = regime_el(env, mmu_idx);
107
- if (arm_el_is_aa64(env, r_el)) {
108
- int pamax = arm_pamax(env_archcpu(env));
109
- uint64_t tcr = env->cp15.tcr_el[r_el];
110
- int addrtop, tbi;
111
-
112
- tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
113
- if (access_type == MMU_INST_FETCH) {
114
- tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
115
- }
116
- tbi = (tbi >> extract64(address, 55, 1)) & 1;
117
- addrtop = (tbi ? 55 : 63);
118
-
119
- if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
120
- fi->type = ARMFault_AddressSize;
121
- fi->level = 0;
122
- fi->stage2 = false;
123
- return 1;
124
- }
125
-
126
- /*
127
- * When TBI is disabled, we've just validated that all of the
128
- * bits above PAMax are zero, so logically we only need to
129
- * clear the top byte for TBI. But it's clearer to follow
130
- * the pseudocode set of addrdesc.paddress.
131
- */
132
- address = extract64(address, 0, 52);
133
- }
134
- }
135
- result->phys = address;
136
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
137
- result->page_size = TARGET_PAGE_SIZE;
138
-
139
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
140
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
141
- result->cacheattrs.shareability = 0;
142
- result->cacheattrs.is_s2_format = false;
143
- if (hcr & HCR_DC) {
144
- if (hcr & HCR_DCT) {
145
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
146
- } else {
147
- memattr = 0xff; /* Normal, WB, RWA */
148
- }
149
- } else if (access_type == MMU_INST_FETCH) {
150
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
151
- memattr = 0xee; /* Normal, WT, RA, NT */
152
- } else {
153
- memattr = 0x44; /* Normal, NC, No */
154
- }
155
- result->cacheattrs.shareability = 2; /* outer sharable */
156
- } else {
157
- memattr = 0x00; /* Device, nGnRnE */
158
- }
159
- result->cacheattrs.attrs = memattr;
160
- return 0;
161
+ return get_phys_addr_disabled(env, address, access_type, mmu_idx,
162
+ is_secure, result, fi);
163
}
164
-
165
if (regime_using_lpae_format(env, mmu_idx)) {
166
return get_phys_addr_lpae(env, address, access_type, mmu_idx,
167
is_secure, false, result, fi);
150
--
168
--
151
2.20.1
169
2.25.1
152
153
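For quick reference, the attribute selection that get_phys_addr_disabled()
inherits from the old inline code (MAIR-format values, as in the hunk above):

    /*
     * Attributes returned when the stage 1 MMU is disabled:
     *   HCR_EL2.DC && HCR_EL2.DCT   -> 0xf0  Tagged Normal, WB, RWA
     *   HCR_EL2.DC                  -> 0xff  Normal, WB, RWA
     *   instruction fetch, SCTLR.I  -> 0xee  Normal, WT, RA, NT (outer shareable)
     *   instruction fetch, !SCTLR.I -> 0x44  Normal, NC         (outer shareable)
     *   otherwise                   -> 0x00  Device-nGnRnE
     */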
1
Handle floating point registers in exception return.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
This corresponds to pseudocode functions ValidateExceptionReturn(),
3
ExceptionReturn(), PopStack() and ConsumeExcStackFrame().
4
2
3
Do not apply memattr or shareability for Stage2 translations.
4
Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the
5
pseudocode in AArch64.S1DisabledOutput.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20221001162318.153420-20-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20190416125744.27770-16-peter.maydell@linaro.org
8
---
11
---
9
target/arm/helper.c | 142 +++++++++++++++++++++++++++++++++++++++++++-
12
target/arm/ptw.c | 48 +++++++++++++++++++++++++-----------------------
10
1 file changed, 141 insertions(+), 1 deletion(-)
13
1 file changed, 25 insertions(+), 23 deletions(-)
11
14
12
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
13
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/helper.c
17
--- a/target/arm/ptw.c
15
+++ b/target/arm/helper.c
18
+++ b/target/arm/ptw.c
16
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
19
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
17
bool rettobase = false;
20
GetPhysAddrResult *result,
18
bool exc_secure = false;
21
ARMMMUFaultInfo *fi)
19
bool return_to_secure;
22
{
20
+ bool ftype;
23
- uint64_t hcr;
21
+ bool restore_s16_s31;
24
- uint8_t memattr;
22
25
+ uint8_t memattr = 0x00; /* Device nGnRnE */
23
/* If we're not in Handler mode then jumps to magic exception-exit
26
+ uint8_t shareability = 0; /* non-sharable */
24
* addresses don't have magic behaviour. However for the v8M
27
25
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
28
if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
26
excret);
29
int r_el = regime_el(env, mmu_idx);
27
}
28
29
+ ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
30
+
30
+
31
+ if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
31
if (arm_el_is_aa64(env, r_el)) {
32
+ qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
32
int pamax = arm_pamax(env_archcpu(env));
33
+ "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
33
uint64_t tcr = env->cp15.tcr_el[r_el];
34
+ "if FPU not present\n",
34
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
35
+ excret);
35
*/
36
+ ftype = true;
36
address = extract64(address, 0, 52);
37
+ }
37
}
38
+
38
+
39
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
39
+ /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
40
/* EXC_RETURN.ES validation check (R_SMFL). We must do this before
40
+ if (r_el == 1) {
41
* we pick which FAULTMASK to clear.
41
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
42
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
42
+ if (hcr & HCR_DC) {
43
*/
43
+ if (hcr & HCR_DCT) {
44
write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
44
+ memattr = 0xf0; /* Tagged, Normal, WB, RWA */
45
45
+ } else {
46
+ /*
46
+ memattr = 0xff; /* Normal, WB, RWA */
47
+ * Clear scratch FP values left in caller saved registers; this
48
+ * must happen before any kind of tail chaining.
49
+ */
50
+ if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
51
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
52
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
53
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
54
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
55
+ qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
56
+ "stackframe: error during lazy state deactivation\n");
57
+ v7m_exception_taken(cpu, excret, true, false);
58
+ return;
59
+ } else {
60
+ /* Clear s0..s15 and FPSCR */
61
+ int i;
62
+
63
+ for (i = 0; i < 16; i += 2) {
64
+ *aa32_vfp_dreg(env, i / 2) = 0;
65
+ }
66
+ vfp_set_fpscr(env, 0);
67
+ }
68
+ }
69
+
70
if (sfault) {
71
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
72
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
73
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
74
}
75
}
76
77
+ if (!ftype) {
78
+ /* FP present and we need to handle it */
79
+ if (!return_to_secure &&
80
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
81
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
82
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
83
+ qemu_log_mask(CPU_LOG_INT,
84
+ "...taking SecureFault on existing stackframe: "
85
+ "Secure LSPACT set but exception return is "
86
+ "not to secure state\n");
87
+ v7m_exception_taken(cpu, excret, true, false);
88
+ return;
89
+ }
90
+
91
+ restore_s16_s31 = return_to_secure &&
92
+ (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
93
+
94
+ if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
95
+ /* State in FPU is still valid, just clear LSPACT */
96
+ env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
97
+ } else {
98
+ int i;
99
+ uint32_t fpscr;
100
+ bool cpacr_pass, nsacr_pass;
101
+
102
+ cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
103
+ return_to_priv);
104
+ nsacr_pass = return_to_secure ||
105
+ extract32(env->v7m.nsacr, 10, 1);
106
+
107
+ if (!cpacr_pass) {
108
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
109
+ return_to_secure);
110
+ env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
111
+ qemu_log_mask(CPU_LOG_INT,
112
+ "...taking UsageFault on existing "
113
+ "stackframe: CPACR.CP10 prevents unstacking "
114
+ "FP regs\n");
115
+ v7m_exception_taken(cpu, excret, true, false);
116
+ return;
117
+ } else if (!nsacr_pass) {
118
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
119
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
120
+ qemu_log_mask(CPU_LOG_INT,
121
+ "...taking Secure UsageFault on existing "
122
+ "stackframe: NSACR.CP10 prevents unstacking "
123
+ "FP regs\n");
124
+ v7m_exception_taken(cpu, excret, true, false);
125
+ return;
126
+ }
127
+
128
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
129
+ uint32_t slo, shi;
130
+ uint64_t dn;
131
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
132
+
133
+ if (i >= 16) {
134
+ faddr += 8; /* Skip the slot for the FPSCR */
135
+ }
136
+
137
+ pop_ok = pop_ok &&
138
+ v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
139
+ v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
140
+
141
+ if (!pop_ok) {
142
+ break;
143
+ }
144
+
145
+ dn = (uint64_t)shi << 32 | slo;
146
+ *aa32_vfp_dreg(env, i / 2) = dn;
147
+ }
148
+ pop_ok = pop_ok &&
149
+ v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
150
+ if (pop_ok) {
151
+ vfp_set_fpscr(env, fpscr);
152
+ }
153
+ if (!pop_ok) {
154
+ /*
155
+ * These regs are 0 if security extension present;
156
+ * otherwise merely UNKNOWN. We zero always.
157
+ */
158
+ for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
159
+ *aa32_vfp_dreg(env, i / 2) = 0;
160
+ }
161
+ vfp_set_fpscr(env, 0);
162
+ }
47
+ }
163
+ }
48
+ }
164
+ }
49
+ }
165
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
50
+ if (memattr == 0 && access_type == MMU_INST_FETCH) {
166
+ V7M_CONTROL, FPCA, !ftype);
51
+ if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
167
+
52
+ memattr = 0xee; /* Normal, WT, RA, NT */
168
/* Commit to consuming the stack frame */
53
+ } else {
169
frameptr += 0x20;
54
+ memattr = 0x44; /* Normal, NC, No */
170
+ if (!ftype) {
171
+ frameptr += 0x48;
172
+ if (restore_s16_s31) {
173
+ frameptr += 0x40;
174
+ }
55
+ }
56
+ shareability = 2; /* outer sharable */
175
+ }
57
+ }
176
/* Undo stack alignment (the SPREALIGN bit indicates that the original
58
+ result->cacheattrs.is_s2_format = false;
177
* pre-exception SP was not 8-aligned and we added a padding word to
178
* align it, so we undo this by ORing in the bit that increases it
179
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
180
*frame_sp_p = frameptr;
181
}
59
}
182
/* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
60
183
- xpsr_write(env, xpsr, ~XPSR_SPREALIGN);
61
result->phys = address;
184
+ xpsr_write(env, xpsr, ~(XPSR_SPREALIGN | XPSR_SFPA));
62
result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
185
+
63
result->page_size = TARGET_PAGE_SIZE;
186
+ if (env->v7m.secure) {
64
-
187
+ bool sfpa = xpsr & XPSR_SFPA;
65
- /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
188
+
66
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
189
+ env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
67
- result->cacheattrs.shareability = 0;
190
+ V7M_CONTROL, SFPA, sfpa);
68
- result->cacheattrs.is_s2_format = false;
191
+ }
69
- if (hcr & HCR_DC) {
192
70
- if (hcr & HCR_DCT) {
193
/* The restored xPSR exception field will be zero if we're
71
- memattr = 0xf0; /* Tagged, Normal, WB, RWA */
194
* resuming in Thread mode. If that doesn't match what the
72
- } else {
73
- memattr = 0xff; /* Normal, WB, RWA */
74
- }
75
- } else if (access_type == MMU_INST_FETCH) {
76
- if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
77
- memattr = 0xee; /* Normal, WT, RA, NT */
78
- } else {
79
- memattr = 0x44; /* Normal, NC, No */
80
- }
81
- result->cacheattrs.shareability = 2; /* outer sharable */
82
- } else {
83
- memattr = 0x00; /* Device, nGnRnE */
84
- }
85
+ result->cacheattrs.shareability = shareability;
86
result->cacheattrs.attrs = memattr;
87
return 0;
88
}
195
--
89
--
196
2.20.1
90
2.25.1
197
198
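The offsets used when unstacking FP state above imply the following v8M
extended exception frame layout relative to frameptr; listed here only as
a reading aid for the hunk:

    /*
     * 0x00..0x1f  r0-r3, r12, lr, return address, xPSR  (basic frame, 0x20 bytes)
     * 0x20..0x5f  s0-s15
     * 0x60        FPSCR
     * 0x64        reserved                              (FP extension: 0x48 bytes)
     * 0x68..0xa7  s16-s31, only when FPCCR_S.TS is set  (a further 0x40 bytes)
     */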
1
The magic value pushed onto the callee stack as an integrity
1
From: Richard Henderson <richard.henderson@linaro.org>
2
check is different if floating point is present.
3
2
3
Adjust GetPhysAddrResult to fill in CPUTLBEntryFull,
4
so that it may be passed directly to tlb_set_page_full.
5
6
The change is large, but mostly mechanical. The major
7
non-mechanical change is page_size -> lg_page_size.
8
Most of the time this is obvious, and is related to
9
TARGET_PAGE_BITS.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20221001162318.153420-21-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20190416125744.27770-15-peter.maydell@linaro.org
7
---
15
---
8
target/arm/helper.c | 22 +++++++++++++++++++---
16
target/arm/internals.h | 5 +-
9
1 file changed, 19 insertions(+), 3 deletions(-)
17
target/arm/helper.c | 12 +--
18
target/arm/m_helper.c | 20 ++---
19
target/arm/ptw.c | 179 ++++++++++++++++++++--------------------
20
target/arm/tlb_helper.c | 9 +-
21
5 files changed, 111 insertions(+), 114 deletions(-)
10
22
23
diff --git a/target/arm/internals.h b/target/arm/internals.h
24
index XXXXXXX..XXXXXXX 100644
25
--- a/target/arm/internals.h
26
+++ b/target/arm/internals.h
27
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
28
29
/* Fields that are valid upon success. */
30
typedef struct GetPhysAddrResult {
31
- hwaddr phys;
32
- target_ulong page_size;
33
- int prot;
34
- MemTxAttrs attrs;
35
+ CPUTLBEntryFull f;
36
ARMCacheAttrs cacheattrs;
37
} GetPhysAddrResult;
38
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/helper.c
41
--- a/target/arm/helper.c
14
+++ b/target/arm/helper.c
42
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ load_fail:
43
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
44
/* Create a 64-bit PAR */
45
par64 = (1 << 11); /* LPAE bit always set */
46
if (!ret) {
47
- par64 |= res.phys & ~0xfffULL;
48
- if (!res.attrs.secure) {
49
+ par64 |= res.f.phys_addr & ~0xfffULL;
50
+ if (!res.f.attrs.secure) {
51
par64 |= (1 << 9); /* NS */
52
}
53
par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */
54
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
55
*/
56
if (!ret) {
57
/* We do not set any attribute bits in the PAR */
58
- if (res.page_size == (1 << 24)
59
+ if (res.f.lg_page_size == 24
60
&& arm_feature(env, ARM_FEATURE_V7)) {
61
- par64 = (res.phys & 0xff000000) | (1 << 1);
62
+ par64 = (res.f.phys_addr & 0xff000000) | (1 << 1);
63
} else {
64
- par64 = res.phys & 0xfffff000;
65
+ par64 = res.f.phys_addr & 0xfffff000;
66
}
67
- if (!res.attrs.secure) {
68
+ if (!res.f.attrs.secure) {
69
par64 |= (1 << 9); /* NS */
70
}
71
} else {
72
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
73
index XXXXXXX..XXXXXXX 100644
74
--- a/target/arm/m_helper.c
75
+++ b/target/arm/m_helper.c
76
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
77
}
78
goto pend_fault;
79
}
80
- address_space_stl_le(arm_addressspace(cs, res.attrs), res.phys, value,
81
- res.attrs, &txres);
82
+ address_space_stl_le(arm_addressspace(cs, res.f.attrs), res.f.phys_addr,
83
+ value, res.f.attrs, &txres);
84
if (txres != MEMTX_OK) {
85
/* BusFault trying to write the data */
86
if (mode == STACK_LAZYFP) {
87
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
88
goto pend_fault;
89
}
90
91
- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
92
- res.attrs, &txres);
93
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
94
+ res.f.phys_addr, res.f.attrs, &txres);
95
if (txres != MEMTX_OK) {
96
/* BusFault trying to read the data */
97
qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
98
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
99
qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
100
return false;
101
}
102
- *insn = address_space_lduw_le(arm_addressspace(cs, res.attrs), res.phys,
103
- res.attrs, &txres);
104
+ *insn = address_space_lduw_le(arm_addressspace(cs, res.f.attrs),
105
+ res.f.phys_addr, res.f.attrs, &txres);
106
if (txres != MEMTX_OK) {
107
env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
108
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
109
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx,
110
}
111
return false;
112
}
113
- value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys,
114
- res.attrs, &txres);
115
+ value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
116
+ res.f.phys_addr, res.f.attrs, &txres);
117
if (txres != MEMTX_OK) {
118
/* BusFault trying to read the data */
119
qemu_log_mask(CPU_LOG_INT,
120
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
121
} else {
122
mrvalid = true;
123
}
124
- r = res.prot & PAGE_READ;
125
- rw = res.prot & PAGE_WRITE;
126
+ r = res.f.prot & PAGE_READ;
127
+ rw = res.f.prot & PAGE_WRITE;
128
} else {
129
r = false;
130
rw = false;
131
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
132
index XXXXXXX..XXXXXXX 100644
133
--- a/target/arm/ptw.c
134
+++ b/target/arm/ptw.c
135
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
136
assert(!is_secure);
137
}
138
139
- addr = s2.phys;
140
+ addr = s2.f.phys_addr;
141
}
142
return addr;
143
}
144
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
145
/* 1Mb section. */
146
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
147
ap = (desc >> 10) & 3;
148
- result->page_size = 1024 * 1024;
149
+ result->f.lg_page_size = 20; /* 1MB */
150
} else {
151
/* Lookup l2 entry. */
152
if (type == 1) {
153
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
154
case 1: /* 64k page. */
155
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
156
ap = (desc >> (4 + ((address >> 13) & 6))) & 3;
157
- result->page_size = 0x10000;
158
+ result->f.lg_page_size = 16;
159
break;
160
case 2: /* 4k page. */
161
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
162
ap = (desc >> (4 + ((address >> 9) & 6))) & 3;
163
- result->page_size = 0x1000;
164
+ result->f.lg_page_size = 12;
165
break;
166
case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */
167
if (type == 1) {
168
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
169
if (arm_feature(env, ARM_FEATURE_XSCALE)
170
|| arm_feature(env, ARM_FEATURE_V6)) {
171
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
172
- result->page_size = 0x1000;
173
+ result->f.lg_page_size = 12;
174
} else {
175
/*
176
* UNPREDICTABLE in ARMv5; we choose to take a
177
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
178
}
179
} else {
180
phys_addr = (desc & 0xfffffc00) | (address & 0x3ff);
181
- result->page_size = 0x400;
182
+ result->f.lg_page_size = 10;
183
}
184
ap = (desc >> 4) & 3;
185
break;
186
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
187
g_assert_not_reached();
188
}
189
}
190
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
191
- result->prot |= result->prot ? PAGE_EXEC : 0;
192
- if (!(result->prot & (1 << access_type))) {
193
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
194
+ result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
195
+ if (!(result->f.prot & (1 << access_type))) {
196
/* Access permission fault. */
197
fi->type = ARMFault_Permission;
198
goto do_fault;
199
}
200
- result->phys = phys_addr;
201
+ result->f.phys_addr = phys_addr;
202
return false;
203
do_fault:
204
fi->domain = domain;
205
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
206
phys_addr = (desc & 0xff000000) | (address & 0x00ffffff);
207
phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32;
208
phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36;
209
- result->page_size = 0x1000000;
210
+ result->f.lg_page_size = 24; /* 16MB */
211
} else {
212
/* Section. */
213
phys_addr = (desc & 0xfff00000) | (address & 0x000fffff);
214
- result->page_size = 0x100000;
215
+ result->f.lg_page_size = 20; /* 1MB */
216
}
217
ap = ((desc >> 10) & 3) | ((desc >> 13) & 4);
218
xn = desc & (1 << 4);
219
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
220
case 1: /* 64k page. */
221
phys_addr = (desc & 0xffff0000) | (address & 0xffff);
222
xn = desc & (1 << 15);
223
- result->page_size = 0x10000;
224
+ result->f.lg_page_size = 16;
225
break;
226
case 2: case 3: /* 4k page. */
227
phys_addr = (desc & 0xfffff000) | (address & 0xfff);
228
xn = desc & 1;
229
- result->page_size = 0x1000;
230
+ result->f.lg_page_size = 12;
231
break;
232
default:
233
/* Never happens, but compiler isn't smart enough to tell. */
234
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
235
}
236
}
237
if (domain_prot == 3) {
238
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
239
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
240
} else {
241
if (pxn && !regime_is_user(env, mmu_idx)) {
242
xn = 1;
243
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
244
fi->type = ARMFault_AccessFlag;
245
goto do_fault;
246
}
247
- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
248
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1);
249
} else {
250
- result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
251
+ result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
252
}
253
- if (result->prot && !xn) {
254
- result->prot |= PAGE_EXEC;
255
+ if (result->f.prot && !xn) {
256
+ result->f.prot |= PAGE_EXEC;
257
}
258
- if (!(result->prot & (1 << access_type))) {
259
+ if (!(result->f.prot & (1 << access_type))) {
260
/* Access permission fault. */
261
fi->type = ARMFault_Permission;
262
goto do_fault;
263
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
264
* the CPU doesn't support TZ or this is a non-secure translation
265
* regime, because the attribute will already be non-secure.
266
*/
267
- result->attrs.secure = false;
268
+ result->f.attrs.secure = false;
269
}
270
- result->phys = phys_addr;
271
+ result->f.phys_addr = phys_addr;
272
return false;
273
do_fault:
274
fi->domain = domain;
275
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
276
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
277
ns = mmu_idx == ARMMMUIdx_Stage2;
278
xn = extract32(attrs, 11, 2);
279
- result->prot = get_S2prot(env, ap, xn, s1_is_el0);
280
+ result->f.prot = get_S2prot(env, ap, xn, s1_is_el0);
281
} else {
282
ns = extract32(attrs, 3, 1);
283
xn = extract32(attrs, 12, 1);
284
pxn = extract32(attrs, 11, 1);
285
- result->prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
286
+ result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn);
287
}
288
289
fault_type = ARMFault_Permission;
290
- if (!(result->prot & (1 << access_type))) {
291
+ if (!(result->f.prot & (1 << access_type))) {
292
goto do_fault;
293
}
294
295
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
296
* the CPU doesn't support TZ or this is a non-secure translation
297
* regime, because the attribute will already be non-secure.
298
*/
299
- result->attrs.secure = false;
300
+ result->f.attrs.secure = false;
301
}
302
/* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
303
if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
304
- arm_tlb_bti_gp(&result->attrs) = true;
305
+ arm_tlb_bti_gp(&result->f.attrs) = true;
306
}
307
308
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
309
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
310
result->cacheattrs.shareability = extract32(attrs, 6, 2);
311
}
312
313
- result->phys = descaddr;
314
- result->page_size = page_size;
315
+ result->f.phys_addr = descaddr;
316
+ result->f.lg_page_size = ctz64(page_size);
317
return false;
318
319
do_fault:
320
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
321
322
if (regime_translation_disabled(env, mmu_idx, is_secure)) {
323
/* MPU disabled. */
324
- result->phys = address;
325
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
326
+ result->f.phys_addr = address;
327
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
328
return false;
329
}
330
331
- result->phys = address;
332
+ result->f.phys_addr = address;
333
for (n = 7; n >= 0; n--) {
334
base = env->cp15.c6_region[n];
335
if ((base & 1) == 0) {
336
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
337
fi->level = 1;
338
return true;
339
}
340
- result->prot = PAGE_READ | PAGE_WRITE;
341
+ result->f.prot = PAGE_READ | PAGE_WRITE;
342
break;
343
case 2:
344
- result->prot = PAGE_READ;
345
+ result->f.prot = PAGE_READ;
346
if (!is_user) {
347
- result->prot |= PAGE_WRITE;
348
+ result->f.prot |= PAGE_WRITE;
349
}
350
break;
351
case 3:
352
- result->prot = PAGE_READ | PAGE_WRITE;
353
+ result->f.prot = PAGE_READ | PAGE_WRITE;
354
break;
355
case 5:
356
if (is_user) {
357
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
358
fi->level = 1;
359
return true;
360
}
361
- result->prot = PAGE_READ;
362
+ result->f.prot = PAGE_READ;
363
break;
364
case 6:
365
- result->prot = PAGE_READ;
366
+ result->f.prot = PAGE_READ;
367
break;
368
default:
369
/* Bad permission. */
370
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
371
fi->level = 1;
372
return true;
373
}
374
- result->prot |= PAGE_EXEC;
375
+ result->f.prot |= PAGE_EXEC;
16
return false;
376
return false;
17
}
377
}
18
378
19
+static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
379
static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx,
20
+{
380
- int32_t address, int *prot)
21
+ /*
381
+ int32_t address, uint8_t *prot)
22
+ * Return the integrity signature value for the callee-saves
23
+ * stack frame section. @lr is the exception return payload/LR value
24
+ * whose FType bit forms bit 0 of the signature if FP is present.
25
+ */
26
+ uint32_t sig = 0xfefa125a;
27
+
28
+ if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
29
+ sig |= 1;
30
+ }
31
+ return sig;
32
+}
33
+
34
static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
35
bool ignore_faults)
36
{
382
{
37
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
383
if (!arm_feature(env, ARM_FEATURE_M)) {
38
bool stacked_ok;
384
*prot = PAGE_READ | PAGE_WRITE;
39
uint32_t limit;
385
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
40
bool want_psp;
386
int n;
41
+ uint32_t sig;
387
bool is_user = regime_is_user(env, mmu_idx);
42
388
43
if (dotailchain) {
389
- result->phys = address;
44
bool mode = lr & R_V7M_EXCRET_MODE_MASK;
390
- result->page_size = TARGET_PAGE_SIZE;
45
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
391
- result->prot = 0;
46
/* Write as much of the stack frame as we can. A write failure may
392
+ result->f.phys_addr = address;
47
* cause us to pend a derived exception.
393
+ result->f.lg_page_size = TARGET_PAGE_BITS;
394
+ result->f.prot = 0;
395
396
if (regime_translation_disabled(env, mmu_idx, secure) ||
397
m_is_ppb_region(env, address)) {
398
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
399
* which always does a direct read using address_space_ldl(), rather
400
* than going via this function, so we don't need to check that here.
401
*/
402
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
403
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
404
} else { /* MPU enabled */
405
for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) {
406
/* region search */
407
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
408
if (ranges_overlap(base, rmask,
409
address & TARGET_PAGE_MASK,
410
TARGET_PAGE_SIZE)) {
411
- result->page_size = 1;
412
+ result->f.lg_page_size = 0;
413
}
414
continue;
415
}
416
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
417
continue;
418
}
419
if (rsize < TARGET_PAGE_BITS) {
420
- result->page_size = 1 << rsize;
421
+ result->f.lg_page_size = rsize;
422
}
423
break;
424
}
425
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
426
fi->type = ARMFault_Background;
427
return true;
428
}
429
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
430
+ get_phys_addr_pmsav7_default(env, mmu_idx, address,
431
+ &result->f.prot);
432
} else { /* a MPU hit! */
433
uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3);
434
uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1);
435
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
436
case 5:
437
break; /* no access */
438
case 3:
439
- result->prot |= PAGE_WRITE;
440
+ result->f.prot |= PAGE_WRITE;
441
/* fall through */
442
case 2:
443
case 6:
444
- result->prot |= PAGE_READ | PAGE_EXEC;
445
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
446
break;
447
case 7:
448
/* for v7M, same as 6; for R profile a reserved value */
449
if (arm_feature(env, ARM_FEATURE_M)) {
450
- result->prot |= PAGE_READ | PAGE_EXEC;
451
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
452
break;
453
}
454
/* fall through */
455
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
456
case 1:
457
case 2:
458
case 3:
459
- result->prot |= PAGE_WRITE;
460
+ result->f.prot |= PAGE_WRITE;
461
/* fall through */
462
case 5:
463
case 6:
464
- result->prot |= PAGE_READ | PAGE_EXEC;
465
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
466
break;
467
case 7:
468
/* for v7M, same as 6; for R profile a reserved value */
469
if (arm_feature(env, ARM_FEATURE_M)) {
470
- result->prot |= PAGE_READ | PAGE_EXEC;
471
+ result->f.prot |= PAGE_READ | PAGE_EXEC;
472
break;
473
}
474
/* fall through */
475
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
476
477
/* execute never */
478
if (xn) {
479
- result->prot &= ~PAGE_EXEC;
480
+ result->f.prot &= ~PAGE_EXEC;
481
}
482
}
483
}
484
485
fi->type = ARMFault_Permission;
486
fi->level = 1;
487
- return !(result->prot & (1 << access_type));
488
+ return !(result->f.prot & (1 << access_type));
489
}
490
491
bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
492
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
493
uint32_t addr_page_base = address & TARGET_PAGE_MASK;
494
uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
495
496
- result->page_size = TARGET_PAGE_SIZE;
497
- result->phys = address;
498
- result->prot = 0;
499
+ result->f.lg_page_size = TARGET_PAGE_BITS;
500
+ result->f.phys_addr = address;
501
+ result->f.prot = 0;
502
if (mregion) {
503
*mregion = -1;
504
}
505
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
506
ranges_overlap(base, limit - base + 1,
507
addr_page_base,
508
TARGET_PAGE_SIZE)) {
509
- result->page_size = 1;
510
+ result->f.lg_page_size = 0;
511
}
512
continue;
513
}
514
515
if (base > addr_page_base || limit < addr_page_limit) {
516
- result->page_size = 1;
517
+ result->f.lg_page_size = 0;
518
}
519
520
if (matchregion != -1) {
521
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
522
523
if (matchregion == -1) {
524
/* hit using the background region */
525
- get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot);
526
+ get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot);
527
} else {
528
uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2);
529
uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1);
530
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
531
xn = 1;
532
}
533
534
- result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
535
- if (result->prot && !xn && !(pxn && !is_user)) {
536
- result->prot |= PAGE_EXEC;
537
+ result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap);
538
+ if (result->f.prot && !xn && !(pxn && !is_user)) {
539
+ result->f.prot |= PAGE_EXEC;
540
}
541
/*
542
* We don't need to look the attribute up in the MAIR0/MAIR1
543
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
544
545
fi->type = ARMFault_Permission;
546
fi->level = 1;
547
- return !(result->prot & (1 << access_type));
548
+ return !(result->f.prot & (1 << access_type));
549
}
550
551
static bool v8m_is_sau_exempt(CPUARMState *env,
552
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
553
} else {
554
fi->type = ARMFault_QEMU_SFault;
555
}
556
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
557
- result->phys = address;
558
- result->prot = 0;
559
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
560
+ result->f.phys_addr = address;
561
+ result->f.prot = 0;
562
return true;
563
}
564
} else {
565
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
566
* might downgrade a secure access to nonsecure.
567
*/
568
if (sattrs.ns) {
569
- result->attrs.secure = false;
570
+ result->f.attrs.secure = false;
571
} else if (!secure) {
572
/*
573
* NS access to S memory must fault.
574
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
575
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
576
*/
577
fi->type = ARMFault_QEMU_SFault;
578
- result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
579
- result->phys = address;
580
- result->prot = 0;
581
+ result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS;
582
+ result->f.phys_addr = address;
583
+ result->f.prot = 0;
584
return true;
585
}
586
}
587
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
588
ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, secure,
589
result, fi, NULL);
590
if (sattrs.subpage) {
591
- result->page_size = 1;
592
+ result->f.lg_page_size = 0;
593
}
594
return ret;
595
}
596
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
597
result->cacheattrs.is_s2_format = false;
598
}
599
600
- result->phys = address;
601
- result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
602
- result->page_size = TARGET_PAGE_SIZE;
603
+ result->f.phys_addr = address;
604
+ result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
605
+ result->f.lg_page_size = TARGET_PAGE_BITS;
606
result->cacheattrs.shareability = shareability;
607
result->cacheattrs.attrs = memattr;
608
return 0;
609
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
610
return ret;
611
}
612
613
- ipa = result->phys;
614
- ipa_secure = result->attrs.secure;
615
+ ipa = result->f.phys_addr;
616
+ ipa_secure = result->f.attrs.secure;
617
if (is_secure) {
618
/* Select TCR based on the NS bit from the S1 walk. */
619
s2walk_secure = !(ipa_secure
620
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
621
* Save the stage1 results so that we may merge
622
* prot and cacheattrs later.
623
*/
624
- s1_prot = result->prot;
625
+ s1_prot = result->f.prot;
626
cacheattrs1 = result->cacheattrs;
627
memset(result, 0, sizeof(*result));
628
629
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
630
fi->s2addr = ipa;
631
632
/* Combine the S1 and S2 perms. */
633
- result->prot &= s1_prot;
634
+ result->f.prot &= s1_prot;
635
636
/* If S2 fails, return early. */
637
if (ret) {
638
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
639
* Check if IPA translates to secure or non-secure PA space.
640
* Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
641
*/
642
- result->attrs.secure =
643
+ result->f.attrs.secure =
644
(is_secure
645
&& !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
646
&& (ipa_secure
647
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
648
* cannot upgrade an non-secure translation regime's attributes
649
* to secure.
48
*/
650
*/
49
+ sig = v7m_integrity_sig(env, lr);
651
- result->attrs.secure = is_secure;
50
stacked_ok =
652
- result->attrs.user = regime_is_user(env, mmu_idx);
51
- v7m_stack_write(cpu, frameptr, 0xfefa125b, mmu_idx, ignore_faults) &&
653
+ result->f.attrs.secure = is_secure;
52
+ v7m_stack_write(cpu, frameptr, sig, mmu_idx, ignore_faults) &&
654
+ result->f.attrs.user = regime_is_user(env, mmu_idx);
53
v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
655
54
ignore_faults) &&
656
/*
55
v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
657
* Fast Context Switch Extension. This doesn't exist at all in v8.
56
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
658
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
57
if (return_to_secure &&
659
58
((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
660
if (arm_feature(env, ARM_FEATURE_PMSA)) {
59
(excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
661
bool ret;
60
- uint32_t expected_sig = 0xfefa125b;
662
- result->page_size = TARGET_PAGE_SIZE;
61
uint32_t actual_sig;
663
+ result->f.lg_page_size = TARGET_PAGE_BITS;
62
664
63
pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
665
if (arm_feature(env, ARM_FEATURE_V8)) {
64
666
/* PMSAv8 */
65
- if (pop_ok && expected_sig != actual_sig) {
667
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
66
+ if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
668
(access_type == MMU_DATA_STORE ? "writing" : "execute"),
67
/* Take a SecureFault on the current stack */
669
(uint32_t)address, mmu_idx,
68
env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
670
ret ? "Miss" : "Hit",
69
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
671
- result->prot & PAGE_READ ? 'r' : '-',
672
- result->prot & PAGE_WRITE ? 'w' : '-',
673
- result->prot & PAGE_EXEC ? 'x' : '-');
674
+ result->f.prot & PAGE_READ ? 'r' : '-',
675
+ result->f.prot & PAGE_WRITE ? 'w' : '-',
676
+ result->f.prot & PAGE_EXEC ? 'x' : '-');
677
678
return ret;
679
}
680
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
681
bool ret;
682
683
ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
684
- *attrs = res.attrs;
685
+ *attrs = res.f.attrs;
686
687
if (ret) {
688
return -1;
689
}
690
- return res.phys;
691
+ return res.f.phys_addr;
692
}
693
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
694
index XXXXXXX..XXXXXXX 100644
695
--- a/target/arm/tlb_helper.c
696
+++ b/target/arm/tlb_helper.c
697
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
698
* target page size are handled specially, so for those we
699
* pass in the exact addresses.
700
*/
701
- if (res.page_size >= TARGET_PAGE_SIZE) {
702
- res.phys &= TARGET_PAGE_MASK;
703
+ if (res.f.lg_page_size >= TARGET_PAGE_BITS) {
704
+ res.f.phys_addr &= TARGET_PAGE_MASK;
705
address &= TARGET_PAGE_MASK;
706
}
707
/* Notice and record tagged memory. */
708
if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
709
- arm_tlb_mte_tagged(&res.attrs) = true;
710
+ arm_tlb_mte_tagged(&res.f.attrs) = true;
711
}
712
713
- tlb_set_page_with_attrs(cs, address, res.phys, res.attrs,
714
- res.prot, mmu_idx, res.page_size);
715
+ tlb_set_page_full(cs, mmu_idx, address, &res.f);
716
return true;
717
} else if (probe) {
718
return false;
70
--
719
--
71
2.20.1
720
2.25.1
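
A side note on the lg_page_size conversion in the hunks above: CPUTLBEntryFull stores the log2 of the page size rather than the size itself, and because every valid page size is a power of two, counting trailing zero bits recovers exactly that log. A quick illustrative check (plain C, not QEMU code; ctz64_demo is only a stand-in for QEMU's ctz64() helper):

#include <assert.h>
#include <stdint.h>

/* Stand-in for QEMU's ctz64(): count trailing zero bits (v must be non-zero). */
static int ctz64_demo(uint64_t v)
{
    int n = 0;
    while (!(v & 1)) {
        v >>= 1;
        n++;
    }
    return n;
}

int main(void)
{
    assert(ctz64_demo(4096) == 12);   /* 4K page  -> lg_page_size 12 */
    assert(ctz64_demo(16384) == 14);  /* 16K page -> lg_page_size 14 */
    assert(ctz64_demo(65536) == 16);  /* 64K page -> lg_page_size 16 */
    return 0;
}

Comparing logs instead of byte counts (lg_page_size >= TARGET_PAGE_BITS in place of page_size >= TARGET_PAGE_SIZE) is equivalent because log2 is monotonic.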
72
73
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
1
From: Jerome Forissier <jerome.forissier@linaro.org>
2
2
3
This device is used by both ARM (BCM2836, for raspi2) and AArch64
3
According to the Linux kernel booting.rst [1], CPTR_EL3.ESM and
4
(BCM2837, for raspi3) targets, and is not CPU-specific.
4
SCR_EL3.EnTP2 must be initialized to 1 when EL3 is present and FEAT_SME
5
Move it to the common-obj list, so we build it once for all targets.
5
is advertised. This has to be taken care of when QEMU boots directly
6
into the kernel (i.e., "-M virt,secure=on -cpu max -kernel Image").
6
7
7
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Cc: qemu-stable@nongnu.org
8
Message-id: 20190427133028.12874-1-philmd@redhat.com
9
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
10
Link: [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst?h=v6.0#n321
11
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
12
Message-id: 20221003145641.1921467-1-jerome.forissier@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
15
---
12
hw/dma/Makefile.objs | 2 +-
16
hw/arm/boot.c | 4 ++++
13
1 file changed, 1 insertion(+), 1 deletion(-)
17
1 file changed, 4 insertions(+)
14
18
15
diff --git a/hw/dma/Makefile.objs b/hw/dma/Makefile.objs
19
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
16
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/dma/Makefile.objs
21
--- a/hw/arm/boot.c
18
+++ b/hw/dma/Makefile.objs
22
+++ b/hw/arm/boot.c
19
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_XLNX_ZYNQMP_ARM) += xlnx-zdma.o
23
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
20
24
if (cpu_isar_feature(aa64_sve, cpu)) {
21
obj-$(CONFIG_OMAP) += omap_dma.o soc_dma.o
25
env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
22
obj-$(CONFIG_PXA2XX) += pxa2xx_dma.o
26
}
23
-obj-$(CONFIG_RASPI) += bcm2835_dma.o
27
+ if (cpu_isar_feature(aa64_sme, cpu)) {
24
+common-obj-$(CONFIG_RASPI) += bcm2835_dma.o
28
+ env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK;
29
+ env->cp15.scr_el3 |= SCR_ENTP2;
30
+ }
31
/* AArch64 kernels never boot in secure mode */
32
assert(!info->secure_boot);
33
/* This hook is only supported for AArch32 currently:
25
--
34
--
26
2.20.1
35
2.25.1
27
28
diff view generated by jsdifflib
1
Implement the VLSTM instruction for v7M for the FPU present case.
1
Arm CPUs support some subset of the granule (page) sizes 4K, 16K and
2
2
64K. The guest selects the one it wants using bits in the TCR_ELx
3
registers. If it tries to program these registers with a value that
4
is either reserved or which requests a size that the CPU does not
5
implement, the architecture requires that the CPU behaves as if the
6
field was programmed to some size that has been implemented.
7
Currently we don't implement this, and instead let the guest use any
8
granule size, even if the CPU ID register fields say it isn't
9
present.
10
11
Make aa64_va_parameters() check against the supported granule size
12
and force use of a different one if it is not implemented.
13
14
(A subsequent commit will make ARMVAParameters use the new enum
15
rather than the current pair of using16k/using64k bools.)
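
For context, a guest that does not want to rely on this fallback behaviour can check ID_AA64MMFR0_EL1 before programming TCR_ELx.TG0/TG1. The sketch below is illustrative only (the function name is invented here); the field interpretations mirror the isar_feature_aa64_tgran4/16/64 helpers added by this patch:

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative only: is a given stage 1 granule implemented, judging by
 * the value read from ID_AA64MMFR0_EL1?
 */
bool stage1_granule_implemented(uint64_t mmfr0, unsigned granule_bits)
{
    switch (granule_bits) {
    case 12: /* 4K: TGran4, bits [31:28], signed field; negative => absent */
        return ((mmfr0 >> 28) & 0xf) < 8;
    case 14: /* 16K: TGran16, bits [23:20], 0 => absent, >= 1 => present */
        return ((mmfr0 >> 20) & 0xf) >= 1;
    case 16: /* 64K: TGran64, bits [27:24], signed field; negative => absent */
        return ((mmfr0 >> 24) & 0xf) < 8;
    default:
        return false;
    }
}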
16
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
19
Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org
5
Message-id: 20190416125744.27770-25-peter.maydell@linaro.org
6
---
20
---
7
target/arm/cpu.h | 2 +
21
target/arm/cpu.h | 33 +++++++++++++
8
target/arm/helper.h | 2 +
22
target/arm/internals.h | 9 ++++
9
target/arm/helper.c | 84 ++++++++++++++++++++++++++++++++++++++++++
23
target/arm/helper.c | 102 +++++++++++++++++++++++++++++++++++++----
10
target/arm/translate.c | 15 +++++++-
24
3 files changed, 136 insertions(+), 8 deletions(-)
11
4 files changed, 102 insertions(+), 1 deletion(-)
12
25
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
26
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
28
--- a/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
29
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@
30
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id)
18
#define EXCP_INVSTATE 18 /* v7M INVSTATE UsageFault */
31
return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id));
19
#define EXCP_STKOF 19 /* v8M STKOF UsageFault */
32
}
20
#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
33
21
+#define EXCP_LSERR 21 /* v8M LSERR SecureFault */
34
+static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id)
22
+#define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
35
+{
23
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
36
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0;
24
37
+}
25
#define ARMV7M_EXCP_RESET 1
38
+
26
diff --git a/target/arm/helper.h b/target/arm/helper.h
39
+static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id)
40
+{
41
+ return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1;
42
+}
43
+
44
+static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id)
45
+{
46
+ return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0;
47
+}
48
+
49
+static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id)
50
+{
51
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2);
52
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id));
53
+}
54
+
55
+static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id)
56
+{
57
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2);
58
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id));
59
+}
60
+
61
+static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id)
62
+{
63
+ unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2);
64
+ return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id));
65
+}
66
+
67
static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
68
{
69
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
70
diff --git a/target/arm/internals.h b/target/arm/internals.h
27
index XXXXXXX..XXXXXXX 100644
71
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/helper.h
72
--- a/target/arm/internals.h
29
+++ b/target/arm/helper.h
73
+++ b/target/arm/internals.h
30
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
74
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
31
75
return valid;
32
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
76
}
33
77
34
+DEF_HELPER_2(v7m_vlstm, void, env, i32)
78
+/* Granule size (i.e. page size) */
35
+
79
+typedef enum ARMGranuleSize {
36
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
80
+ /* Same order as TG0 encoding */
37
81
+ Gran4K,
38
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
82
+ Gran64K,
83
+ Gran16K,
84
+ GranInvalid,
85
+} ARMGranuleSize;
86
+
87
/*
88
* Parameters of a given virtual address, as extracted from the
89
* translation control register (TCR) for a given regime.
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
90
diff --git a/target/arm/helper.c b/target/arm/helper.c
40
index XXXXXXX..XXXXXXX 100644
91
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/helper.c
92
--- a/target/arm/helper.c
42
+++ b/target/arm/helper.c
93
+++ b/target/arm/helper.c
43
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
94
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx)
44
g_assert_not_reached();
45
}
46
47
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
48
+{
49
+ /* translate.c should never generate calls here in user-only mode */
50
+ g_assert_not_reached();
51
+}
52
+
53
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
54
{
55
/* The TT instructions can be used by unprivileged code, but in
56
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
57
}
95
}
58
}
96
}
59
97
60
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
98
+static ARMGranuleSize tg0_to_gran_size(int tg)
61
+{
99
+{
62
+ /* fptr is the value of Rn, the frame pointer we store the FP regs to */
100
+ switch (tg) {
63
+ bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
101
+ case 0:
64
+ bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
102
+ return Gran4K;
65
+
103
+ case 1:
66
+ assert(env->v7m.secure);
104
+ return Gran64K;
67
+
105
+ case 2:
68
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
106
+ return Gran16K;
69
+ return;
107
+ default:
70
+ }
108
+ return GranInvalid;
71
+
109
+ }
72
+ /* Check access to the coprocessor is permitted */
110
+}
73
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
111
+
74
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
112
+static ARMGranuleSize tg1_to_gran_size(int tg)
75
+ }
113
+{
76
+
114
+ switch (tg) {
77
+ if (lspact) {
115
+ case 1:
78
+ /* LSPACT should not be active when there is active FP state */
116
+ return Gran16K;
79
+ raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
117
+ case 2:
80
+ }
118
+ return Gran4K;
81
+
119
+ case 3:
82
+ if (fptr & 7) {
120
+ return Gran64K;
83
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
121
+ default:
84
+ }
122
+ return GranInvalid;
85
+
123
+ }
124
+}
125
+
126
+static inline bool have4k(ARMCPU *cpu, bool stage2)
127
+{
128
+ return stage2 ? cpu_isar_feature(aa64_tgran4_2, cpu)
129
+ : cpu_isar_feature(aa64_tgran4, cpu);
130
+}
131
+
132
+static inline bool have16k(ARMCPU *cpu, bool stage2)
133
+{
134
+ return stage2 ? cpu_isar_feature(aa64_tgran16_2, cpu)
135
+ : cpu_isar_feature(aa64_tgran16, cpu);
136
+}
137
+
138
+static inline bool have64k(ARMCPU *cpu, bool stage2)
139
+{
140
+ return stage2 ? cpu_isar_feature(aa64_tgran64_2, cpu)
141
+ : cpu_isar_feature(aa64_tgran64, cpu);
142
+}
143
+
144
+static ARMGranuleSize sanitize_gran_size(ARMCPU *cpu, ARMGranuleSize gran,
145
+ bool stage2)
146
+{
147
+ switch (gran) {
148
+ case Gran4K:
149
+ if (have4k(cpu, stage2)) {
150
+ return gran;
151
+ }
152
+ break;
153
+ case Gran16K:
154
+ if (have16k(cpu, stage2)) {
155
+ return gran;
156
+ }
157
+ break;
158
+ case Gran64K:
159
+ if (have64k(cpu, stage2)) {
160
+ return gran;
161
+ }
162
+ break;
163
+ case GranInvalid:
164
+ break;
165
+ }
86
+ /*
166
+ /*
87
+ * Note that we do not use v7m_stack_write() here, because the
167
+ * If the guest selects a granule size that isn't implemented,
88
+ * accesses should not set the FSR bits for stacking errors if they
168
+ * the architecture requires that we behave as if it selected one
89
+ * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
169
+ * that is (with an IMPDEF choice of which one to pick). We choose
90
+ * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
170
+ * to implement the smallest supported granule size.
91
+ * and longjmp out.
92
+ */
171
+ */
93
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
172
+ if (have4k(cpu, stage2)) {
94
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
173
+ return Gran4K;
95
+ int i;
174
+ }
96
+
175
+ if (have16k(cpu, stage2)) {
97
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
176
+ return Gran16K;
98
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
177
+ }
99
+ uint32_t faddr = fptr + 4 * i;
178
+ assert(have64k(cpu, stage2));
100
+ uint32_t slo = extract64(dn, 0, 32);
179
+ return Gran64K;
101
+ uint32_t shi = extract64(dn, 32, 32);
180
+}
102
+
181
+
103
+ if (i >= 16) {
182
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
104
+ faddr += 8; /* skip the slot for the FPSCR */
183
ARMMMUIdx mmu_idx, bool data)
105
+ }
106
+ cpu_stl_data(env, faddr, slo);
107
+ cpu_stl_data(env, faddr + 4, shi);
108
+ }
109
+ cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
110
+
111
+ /*
112
+ * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
113
+ * leave them unchanged, matching our choice in v7m_preserve_fp_state.
114
+ */
115
+ if (ts) {
116
+ for (i = 0; i < 32; i += 2) {
117
+ *aa32_vfp_dreg(env, i / 2) = 0;
118
+ }
119
+ vfp_set_fpscr(env, 0);
120
+ }
121
+ } else {
122
+ v7m_update_fpccr(env, fptr, false);
123
+ }
124
+
125
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
126
+}
127
+
128
static bool v7m_push_stack(ARMCPU *cpu)
129
{
184
{
130
/* Do the "set up stack frame" part of exception entry,
185
uint64_t tcr = regime_tcr(env, mmu_idx);
131
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
186
bool epd, hpd, using16k, using64k, tsz_oob, ds;
132
[EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
187
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
133
[EXCP_STKOF] = "v8M STKOF UsageFault",
188
+ ARMGranuleSize gran;
134
[EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
189
ARMCPU *cpu = env_archcpu(env);
135
+ [EXCP_LSERR] = "v8M LSERR UsageFault",
190
+ bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S;
136
+ [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
191
137
};
192
if (!regime_has_2_ranges(mmu_idx)) {
138
193
select = 0;
139
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
194
tsz = extract32(tcr, 0, 6);
140
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
195
- using64k = extract32(tcr, 14, 1);
141
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
196
- using16k = extract32(tcr, 15, 1);
142
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
197
- if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
143
break;
198
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
144
+ case EXCP_LSERR:
199
+ if (stage2) {
145
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
200
/* VTCR_EL2 */
146
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
201
hpd = false;
147
+ break;
202
} else {
148
+ case EXCP_UNALIGNED:
203
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
149
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
204
select = extract64(va, 55, 1);
150
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
205
if (!select) {
151
+ break;
206
tsz = extract32(tcr, 0, 6);
152
case EXCP_SWI:
207
+ gran = tg0_to_gran_size(extract32(tcr, 14, 2));
153
/* The PC already points to the next instruction. */
208
epd = extract32(tcr, 7, 1);
154
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
209
sh = extract32(tcr, 12, 2);
155
diff --git a/target/arm/translate.c b/target/arm/translate.c
210
- using64k = extract32(tcr, 14, 1);
156
index XXXXXXX..XXXXXXX 100644
211
- using16k = extract32(tcr, 15, 1);
157
--- a/target/arm/translate.c
212
hpd = extract64(tcr, 41, 1);
158
+++ b/target/arm/translate.c
213
} else {
159
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
214
- int tg = extract32(tcr, 30, 2);
160
if (!s->v8m_secure || (insn & 0x0040f0ff)) {
215
- using16k = tg == 1;
161
goto illegal_op;
216
- using64k = tg == 3;
162
}
217
tsz = extract32(tcr, 16, 6);
163
- /* Just NOP since FP support is not implemented */
218
+ gran = tg1_to_gran_size(extract32(tcr, 30, 2));
164
+
219
epd = extract32(tcr, 23, 1);
165
+ if (arm_dc_feature(s, ARM_FEATURE_VFP)) {
220
sh = extract32(tcr, 28, 2);
166
+ TCGv_i32 fptr = load_reg(s, rn);
221
hpd = extract64(tcr, 42, 1);
167
+
222
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
168
+ if (extract32(insn, 20, 1)) {
223
ds = extract64(tcr, 59, 1);
169
+ /* VLLDM */
224
}
170
+ } else {
225
171
+ gen_helper_v7m_vlstm(cpu_env, fptr);
226
+ gran = sanitize_gran_size(cpu, gran, stage2);
172
+ }
227
+ using64k = gran == Gran64K;
173
+ tcg_temp_free_i32(fptr);
228
+ using16k = gran == Gran16K;
174
+
229
+
175
+ /* End the TB, because we have updated FP control bits */
230
if (cpu_isar_feature(aa64_st, cpu)) {
176
+ s->base.is_jmp = DISAS_UPDATE;
231
max_tsz = 48 - using64k;
177
+ }
232
} else {
178
break;
179
}
180
if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
181
--
233
--
182
2.20.1
234
2.25.1
183
184
diff view generated by jsdifflib
1
Implement the code which updates the FPCCR register on an
1
Now we have an enum for the granule size, use it in the
2
exception entry where we are going to use lazy FP stacking.
2
ARMVAParameters struct instead of the using16k/using64k bools.
3
We have to defer to the NVIC to determine whether the
4
various exceptions are currently ready or not.
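
One step the ptw.c hunk below relies on without spelling it out: translation table descriptors are 8 bytes, so a granule of 2^n bytes holds 2^(n-3) descriptors and each lookup level resolves n-3 address bits, which is why the stride becomes arm_granule_bits(param.gran) - 3. A minimal illustrative check (plain C, not QEMU code):

#include <assert.h>

/* Address bits resolved per table level for a granule of 2^granule_bits bytes. */
int level_stride_bits(int granule_bits)
{
    return granule_bits - 3; /* 8-byte descriptors: 2^(n-3) entries per table */
}

int main(void)
{
    assert(level_stride_bits(12) == 9);  /* 4K granule:  512 entries per level  */
    assert(level_stride_bits(14) == 11); /* 16K granule: 2048 entries per level */
    assert(level_stride_bits(16) == 13); /* 64K granule: 8192 entries per level */
    return 0;
}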
5
3
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20190416125744.27770-12-peter.maydell@linaro.org
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org
8
---
7
---
9
target/arm/cpu.h | 14 +++++++++
8
target/arm/internals.h | 23 +++++++++++++++++++++--
10
hw/intc/armv7m_nvic.c | 34 ++++++++++++++++++++++
9
target/arm/helper.c | 39 ++++++++++++++++++++++++++++-----------
11
target/arm/helper.c | 67 ++++++++++++++++++++++++++++++++++++++++++-
10
target/arm/ptw.c | 8 +-------
12
3 files changed, 114 insertions(+), 1 deletion(-)
11
3 files changed, 50 insertions(+), 20 deletions(-)
13
12
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
15
--- a/target/arm/internals.h
17
+++ b/target/arm/cpu.h
16
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
17
@@ -XXX,XX +XXX,XX @@ typedef enum ARMGranuleSize {
19
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
18
GranInvalid,
20
*/
19
} ARMGranuleSize;
21
int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
20
22
+/**
21
+/**
23
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
22
+ * arm_granule_bits: Return address size of the granule in bits
24
+ * @opaque: the NVIC
25
+ * @irq: the exception number to mark pending
26
+ * @secure: false for non-banked exceptions or for the nonsecure
27
+ * version of a banked exception, true for the secure version of a banked
28
+ * exception.
29
+ *
23
+ *
30
+ * Return whether an exception is "ready", i.e. whether the exception is
24
+ * Return the address size of the granule in bits. This corresponds
31
+ * enabled and is configured at a priority which would allow it to
25
+ * to the pseudocode TGxGranuleBits().
32
+ * interrupt the current execution priority. This controls whether the
33
+ * RDY bit for it in the FPCCR is set.
34
+ */
26
+ */
35
+bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
27
+static inline int arm_granule_bits(ARMGranuleSize gran)
36
/**
37
* armv7m_nvic_raw_execution_priority: return the raw execution priority
38
* @opaque: the NVIC
39
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/hw/intc/armv7m_nvic.c
42
+++ b/hw/intc/armv7m_nvic.c
43
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
44
return ret;
45
}
46
47
+bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
48
+{
28
+{
49
+ /*
29
+ switch (gran) {
50
+ * Return whether an exception is "ready", i.e. it is enabled and is
30
+ case Gran64K:
51
+ * configured at a priority which would allow it to interrupt the
31
+ return 16;
52
+ * current execution priority.
32
+ case Gran16K:
53
+ *
33
+ return 14;
54
+ * irq and secure have the same semantics as for armv7m_nvic_set_pending():
34
+ case Gran4K:
55
+ * for non-banked exceptions secure is always false; for banked exceptions
35
+ return 12;
56
+ * it indicates which of the exceptions is required.
36
+ default:
57
+ */
37
+ g_assert_not_reached();
58
+ NVICState *s = (NVICState *)opaque;
59
+ bool banked = exc_is_banked(irq);
60
+ VecInfo *vec;
61
+ int running = nvic_exec_prio(s);
62
+
63
+ assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
64
+ assert(!secure || banked);
65
+
66
+ /*
67
+ * HardFault is an odd special case: we always check against -1,
68
+ * even if we're secure and HardFault has priority -3; we never
69
+ * need to check for enabled state.
70
+ */
71
+ if (irq == ARMV7M_EXCP_HARD) {
72
+ return running > -1;
73
+ }
38
+ }
74
+
75
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
76
+
77
+ return vec->enabled &&
78
+ exc_group_prio(s, vec->prio, secure) < running;
79
+}
39
+}
80
+
40
+
81
/* callback when external interrupt line is changed */
41
/*
82
static void set_irq_level(void *opaque, int n, int level)
42
* Parameters of a given virtual address, as extracted from the
83
{
43
* translation control register (TCR) for a given regime.
44
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
45
bool tbi : 1;
46
bool epd : 1;
47
bool hpd : 1;
48
- bool using16k : 1;
49
- bool using64k : 1;
50
bool tsz_oob : 1; /* tsz has been clamped to legal range */
51
bool ds : 1;
52
+ ARMGranuleSize gran : 2;
53
} ARMVAParameters;
54
55
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
84
diff --git a/target/arm/helper.c b/target/arm/helper.c
56
diff --git a/target/arm/helper.c b/target/arm/helper.c
85
index XXXXXXX..XXXXXXX 100644
57
index XXXXXXX..XXXXXXX 100644
86
--- a/target/arm/helper.c
58
--- a/target/arm/helper.c
87
+++ b/target/arm/helper.c
59
+++ b/target/arm/helper.c
88
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
60
@@ -XXX,XX +XXX,XX @@ typedef struct {
89
env->thumb = addr & 1;
61
uint64_t length;
90
}
62
} TLBIRange;
91
63
92
+static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
64
+static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg)
93
+ bool apply_splim)
94
+{
65
+{
95
+ /*
66
+ /*
96
+ * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
67
+ * Note that the TLBI range TG field encoding differs from both
97
+ * that we will need later in order to do lazy FP reg stacking.
68
+ * TG0 and TG1 encodings.
98
+ */
69
+ */
99
+ bool is_secure = env->v7m.secure;
70
+ switch (tg) {
100
+ void *nvic = env->nvic;
71
+ case 1:
101
+ /*
72
+ return Gran4K;
102
+ * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
73
+ case 2:
103
+ * are banked and we want to update the bit in the bank for the
74
+ return Gran16K;
104
+ * current security state; and in one case we want to specifically
75
+ case 3:
105
+ * update the NS banked version of a bit even if we are secure.
76
+ return Gran64K;
106
+ */
77
+ default:
107
+ uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
78
+ return GranInvalid;
108
+ uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
109
+ uint32_t *fpccr = &env->v7m.fpccr[is_secure];
110
+ bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
111
+
112
+ env->v7m.fpcar[is_secure] = frameptr & ~0x7;
113
+
114
+ if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
115
+ bool splimviol;
116
+ uint32_t splim = v7m_sp_limit(env);
117
+ bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
118
+ (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
119
+
120
+ splimviol = !ign && frameptr < splim;
121
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
122
+ }
123
+
124
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
125
+
126
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
127
+
128
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
129
+
130
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
131
+ !arm_v7m_is_handler_mode(env));
132
+
133
+ hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
134
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
135
+
136
+ bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
137
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
138
+
139
+ mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
140
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
141
+
142
+ ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
143
+ *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
144
+
145
+ monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
146
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
147
+
148
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
149
+ s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
150
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
151
+
152
+ sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
153
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
154
+ }
79
+ }
155
+}
80
+}
156
+
81
+
157
static bool v7m_push_stack(ARMCPU *cpu)
82
static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
83
uint64_t value)
158
{
84
{
159
/* Do the "set up stack frame" part of exception entry,
85
@@ -XXX,XX +XXX,XX @@ static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx,
160
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
86
uint64_t select = sextract64(value, 36, 1);
161
}
87
ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true);
88
TLBIRange ret = { };
89
+ ARMGranuleSize gran;
90
91
page_size_granule = extract64(value, 46, 2);
92
+ gran = tlbi_range_tg_to_gran_size(page_size_granule);
93
94
/* The granule encoded in value must match the granule in use. */
95
- if (page_size_granule != (param.using64k ? 3 : param.using16k ? 2 : 1)) {
96
+ if (gran != param.gran) {
97
qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n",
98
page_size_granule);
99
return ret;
100
}
101
102
- page_shift = (page_size_granule - 1) * 2 + 12;
103
+ page_shift = arm_granule_bits(gran);
104
num = extract64(value, 39, 5);
105
scale = extract64(value, 44, 2);
106
exponent = (5 * scale) + 1;
107
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
108
ARMMMUIdx mmu_idx, bool data)
109
{
110
uint64_t tcr = regime_tcr(env, mmu_idx);
111
- bool epd, hpd, using16k, using64k, tsz_oob, ds;
112
+ bool epd, hpd, tsz_oob, ds;
113
int select, tsz, tbi, max_tsz, min_tsz, ps, sh;
114
ARMGranuleSize gran;
115
ARMCPU *cpu = env_archcpu(env);
116
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
117
}
118
119
gran = sanitize_gran_size(cpu, gran, stage2);
120
- using64k = gran == Gran64K;
121
- using16k = gran == Gran16K;
122
123
if (cpu_isar_feature(aa64_st, cpu)) {
124
- max_tsz = 48 - using64k;
125
+ max_tsz = 48 - (gran == Gran64K);
126
} else {
127
max_tsz = 39;
128
}
129
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
130
* adjust the effective value of DS, as documented.
131
*/
132
min_tsz = 16;
133
- if (using64k) {
134
+ if (gran == Gran64K) {
135
if (cpu_isar_feature(aa64_lva, cpu)) {
136
min_tsz = 12;
137
}
138
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
139
switch (mmu_idx) {
140
case ARMMMUIdx_Stage2:
141
case ARMMMUIdx_Stage2_S:
142
- if (using16k) {
143
+ if (gran == Gran16K) {
144
ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu);
162
} else {
145
} else {
163
/* Lazy stacking enabled, save necessary info to stack later */
146
ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu);
164
- /* TODO : equivalent of UpdateFPCCR() pseudocode */
165
+ v7m_update_fpccr(env, frameptr + 0x20, true);
166
}
147
}
148
break;
149
default:
150
- if (using16k) {
151
+ if (gran == Gran16K) {
152
ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu);
153
} else {
154
ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu);
155
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
156
.tbi = tbi,
157
.epd = epd,
158
.hpd = hpd,
159
- .using16k = using16k,
160
- .using64k = using64k,
161
.tsz_oob = tsz_oob,
162
.ds = ds,
163
+ .gran = gran,
164
};
165
}
166
167
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
168
index XXXXXXX..XXXXXXX 100644
169
--- a/target/arm/ptw.c
170
+++ b/target/arm/ptw.c
171
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
167
}
172
}
168
}
173
}
174
175
- if (param.using64k) {
176
- stride = 13;
177
- } else if (param.using16k) {
178
- stride = 11;
179
- } else {
180
- stride = 9;
181
- }
182
+ stride = arm_granule_bits(param.gran) - 3;
183
184
/*
185
* Note that QEMU ignores shareability and cacheability attributes,
169
--
186
--
170
2.20.1
187
2.25.1
171
172
diff view generated by jsdifflib
1
If the floating point extension is present, then the SG instruction
1
FEAT_GTG is a change to the ID register ID_AA64MMFR0_EL1 so that it
2
must clear the CONTROL_S.SFPA bit. Implement this.
2
can report a different set of supported granule (page) sizes for
3
stage 1 and stage 2 translation tables. As of commit c20281b2a5048
4
we already report the granule sizes that way for '-cpu max', and now
5
we also correctly make attempts to use unimplemented granule sizes
6
fail, so we can report the support of the feature in the
7
documentation.
3
8
4
(On a no-FPU system the bit will always be zero, so we don't need
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)
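
On the FEAT_GTG point above: the stage 2 capability is reported through the separate TGran4_2, TGran16_2 and TGran64_2 fields of ID_AA64MMFR0_EL1, where the value 0 means "same as the corresponding stage 1 field". An illustrative decode for the 4K case, mirroring the isar_feature_aa64_tgran4_2 helper added earlier in this series (not QEMU code; field positions per the Arm ARM):

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative only: is the 4K granule usable for stage 2 translation?
 * TGran4_2 is ID_AA64MMFR0_EL1 bits [43:40]:
 *   0 = follow the stage 1 TGran4 field, 1 = not implemented,
 *   2 = implemented, 3 = implemented with LPA2 (52-bit addresses).
 */
bool stage2_4k_granule_implemented(uint64_t mmfr0)
{
    unsigned t = (mmfr0 >> 40) & 0xf;
    bool stage1_4k = ((mmfr0 >> 28) & 0xf) < 8; /* TGran4, signed field */

    return t >= 2 || (t == 0 && stage1_4k);
}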
6
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20221003162315.2833797-4-peter.maydell@linaro.org
9
Message-id: 20190416125744.27770-8-peter.maydell@linaro.org
10
---
12
---
11
target/arm/helper.c | 1 +
13
docs/system/arm/emulation.rst | 1 +
12
1 file changed, 1 insertion(+)
14
1 file changed, 1 insertion(+)
13
15
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
18
--- a/docs/system/arm/emulation.rst
17
+++ b/target/arm/helper.c
19
+++ b/docs/system/arm/emulation.rst
18
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
20
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
19
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
21
- FEAT_FRINTTS (Floating-point to integer instructions)
20
", executing it\n", env->regs[15]);
22
- FEAT_FlagM (Flag manipulation instructions v2)
21
env->regs[14] &= ~1;
23
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
22
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
24
+- FEAT_GTG (Guest translation granule size)
23
switch_v7m_security_state(env, true);
25
- FEAT_HCX (Support for the HCRX_EL2 register)
24
xpsr_write(env, 0, XPSR_IT);
26
- FEAT_HPDS (Hierarchical permission disables)
25
env->regs[15] += 4;
27
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
26
--
28
--
27
2.20.1
29
2.25.1
28
29
diff view generated by jsdifflib
Deleted patch
1
The M-profile CONTROL register has two bits -- SFPA and FPCA --
2
which relate to floating-point support, and should be RES0 otherwise.
3
Handle them correctly in the MSR/MRS register access code.
4
Neither is banked between security states, so they are stored
5
in v7m.control[M_REG_S] regardless of current security state.
6
1
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-9-peter.maydell@linaro.org
10
---
11
target/arm/helper.c | 57 ++++++++++++++++++++++++++++++++++++++-------
12
1 file changed, 49 insertions(+), 8 deletions(-)
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
19
return xpsr_read(env) & mask;
20
break;
21
case 20: /* CONTROL */
22
- return env->v7m.control[env->v7m.secure];
23
+ {
24
+ uint32_t value = env->v7m.control[env->v7m.secure];
25
+ if (!env->v7m.secure) {
26
+ /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
27
+ value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
28
+ }
29
+ return value;
30
+ }
31
case 0x94: /* CONTROL_NS */
32
/* We have to handle this here because unprivileged Secure code
33
* can read the NS CONTROL register.
34
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
35
if (!env->v7m.secure) {
36
return 0;
37
}
38
- return env->v7m.control[M_REG_NS];
39
+ return env->v7m.control[M_REG_NS] |
40
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
41
}
42
43
if (el == 0) {
44
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
45
*/
46
uint32_t mask = extract32(maskreg, 8, 4);
47
uint32_t reg = extract32(maskreg, 0, 8);
48
+ int cur_el = arm_current_el(env);
49
50
- if (arm_current_el(env) == 0 && reg > 7) {
51
- /* only xPSR sub-fields may be written by unprivileged */
52
+ if (cur_el == 0 && reg > 7 && reg != 20) {
53
+ /*
54
+ * only xPSR sub-fields and CONTROL.SFPA may be written by
55
+ * unprivileged code
56
+ */
57
return;
58
}
59
60
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
61
env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
62
env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
63
}
64
+ /*
65
+ * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
66
+ * RES0 if the FPU is not present, and is stored in the S bank
67
+ */
68
+ if (arm_feature(env, ARM_FEATURE_VFP) &&
69
+ extract32(env->v7m.nsacr, 10, 1)) {
70
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
71
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
72
+ }
73
return;
74
case 0x98: /* SP_NS */
75
{
76
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
77
env->v7m.faultmask[env->v7m.secure] = val & 1;
78
break;
79
case 20: /* CONTROL */
80
- /* Writing to the SPSEL bit only has an effect if we are in
81
+ /*
82
+ * Writing to the SPSEL bit only has an effect if we are in
83
* thread mode; other bits can be updated by any privileged code.
84
* write_v7m_control_spsel() deals with updating the SPSEL bit in
85
* env->v7m.control, so we only need update the others.
86
* For v7M, we must just ignore explicit writes to SPSEL in handler
87
* mode; for v8M the write is permitted but will have no effect.
88
+ * All these bits are writes-ignored from non-privileged code,
89
+ * except for SFPA.
90
*/
91
- if (arm_feature(env, ARM_FEATURE_V8) ||
92
- !arm_v7m_is_handler_mode(env)) {
93
+ if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
94
+ !arm_v7m_is_handler_mode(env))) {
95
write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
96
}
97
- if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
98
+ if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
99
env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
100
env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
101
}
102
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
103
+ /*
104
+ * SFPA is RAZ/WI from NS or if no FPU.
105
+ * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
106
+ * Both are stored in the S bank.
107
+ */
108
+ if (env->v7m.secure) {
109
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
110
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
111
+ }
112
+ if (cur_el > 0 &&
113
+ (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
114
+ extract32(env->v7m.nsacr, 10, 1))) {
115
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
116
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
117
+ }
118
+ }
119
break;
120
default:
121
bad_reg:
122
--
123
2.20.1
124
125
diff view generated by jsdifflib
Deleted patch
1
For v8M floating point support, transitions from Secure
2
to Non-secure state via BXNS and BLXNS must clear the
3
CONTROL.SFPA bit. (This corresponds to the pseudocode
4
BranchToNS() function.)
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190416125744.27770-13-peter.maydell@linaro.org
9
---
10
target/arm/helper.c | 4 ++++
11
1 file changed, 4 insertions(+)
12
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
18
/* translate.c should have made BXNS UNDEF unless we're secure */
19
assert(env->v7m.secure);
20
21
+ if (!(dest & 1)) {
22
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
23
+ }
24
switch_v7m_security_state(env, dest & 1);
25
env->thumb = 1;
26
env->regs[15] = dest & ~1;
27
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
28
*/
29
write_v7m_exception(env, 1);
30
}
31
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
32
switch_v7m_security_state(env, 0);
33
env->thumb = 1;
34
env->regs[15] = dest;
35
--
36
2.20.1
37
38
diff view generated by jsdifflib
Deleted patch
1
The TailChain() pseudocode specifies that a tail chaining
2
exception should sanitize the excReturn all-ones bits and
3
(if there is no FPU) the excReturn FType bits; we weren't
4
doing this.
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190416125744.27770-14-peter.maydell@linaro.org
9
---
10
target/arm/helper.c | 8 ++++++++
11
1 file changed, 8 insertions(+)
12
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
18
qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
19
targets_secure ? "secure" : "nonsecure", exc);
20
21
+ if (dotailchain) {
22
+ /* Sanitize LR FType and PREFIX bits */
23
+ if (!arm_feature(env, ARM_FEATURE_VFP)) {
24
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
25
+ }
26
+ lr = deposit32(lr, 24, 8, 0xff);
27
+ }
28
+
29
if (arm_feature(env, ARM_FEATURE_V8)) {
30
if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
31
(lr & R_V7M_EXCRET_S_MASK)) {
32
--
33
2.20.1
34
35
diff view generated by jsdifflib
Deleted patch
1
Move the NS TBFLAG down from bit 19 to bit 6, which has not
2
been used since commit c1e3781090b9d36c60 in 2015, when we
3
started passing the entire MMU index in the TB flags rather
4
than just a 'privilege level' bit.
5
1
6
This rearrangement is not strictly necessary, but means that
7
we can put M-profile-only bits next to each other rather
8
than scattered across the flag word.
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20190416125744.27770-17-peter.maydell@linaro.org
13
---
14
target/arm/cpu.h | 11 ++++++-----
15
1 file changed, 6 insertions(+), 5 deletions(-)
16
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
22
FIELD(TBFLAG_A32, THUMB, 0, 1)
23
FIELD(TBFLAG_A32, VECLEN, 1, 3)
24
FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
25
+/*
26
+ * Indicates whether cp register reads and writes by guest code should access
27
+ * the secure or nonsecure bank of banked registers; note that this is not
28
+ * the same thing as the current security state of the processor!
29
+ */
30
+FIELD(TBFLAG_A32, NS, 6, 1)
31
FIELD(TBFLAG_A32, VFPEN, 7, 1)
32
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
33
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
34
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
35
* checks on the other bits at runtime
36
*/
37
FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
38
-/* Indicates whether cp register reads and writes by guest code should access
39
- * the secure or nonsecure bank of banked registers; note that this is not
40
- * the same thing as the current security state of the processor!
41
- */
42
-FIELD(TBFLAG_A32, NS, 19, 1)
43
/* For M profile only, Handler (ie not Thread) mode */
44
FIELD(TBFLAG_A32, HANDLER, 21, 1)
45
/* For M profile only, whether we should generate stack-limit checks */
46
--
47
2.20.1
48
49
diff view generated by jsdifflib
Deleted patch
Pushing registers to the stack for v7M needs to handle three cases:
 * the "normal" case where we pend exceptions
 * an "ignore faults" case where we set FSR bits but
   do not pend exceptions (this is used when we are
   handling some kinds of derived exception on exception entry)
 * a "lazy FP stacking" case, where different FSR bits
   are set and the exception is pended differently

Implement this by changing the existing flag argument that
tells us whether to ignore faults or not into an enum that
specifies which of the 3 modes we should handle.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-23-peter.maydell@linaro.org
---
target/arm/helper.c | 118 +++++++++++++++++++++++++++++---------------
1 file changed, 79 insertions(+), 39 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
}
}

+/*
+ * What kind of stack write are we doing? This affects how exceptions
+ * generated during the stacking are treated.
+ */
+typedef enum StackingMode {
+ STACK_NORMAL,
+ STACK_IGNFAULTS,
+ STACK_LAZYFP,
+} StackingMode;
+
static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
- ARMMMUIdx mmu_idx, bool ignfault)
+ ARMMMUIdx mmu_idx, StackingMode mode)
{
CPUState *cs = CPU(cpu);
CPUARMState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
&attrs, &prot, &page_size, &fi, NULL)) {
/* MPU/SAU lookup failed */
if (fi.type == ARMFault_QEMU_SFault) {
- qemu_log_mask(CPU_LOG_INT,
- "...SecureFault with SFSR.AUVIOL during stacking\n");
- env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
+ if (mode == STACK_LAZYFP) {
+ qemu_log_mask(CPU_LOG_INT,
+ "...SecureFault with SFSR.LSPERR "
+ "during lazy stacking\n");
+ env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
+ } else {
+ qemu_log_mask(CPU_LOG_INT,
+ "...SecureFault with SFSR.AUVIOL "
+ "during stacking\n");
+ env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
+ }
+ env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
env->v7m.sfar = addr;
exc = ARMV7M_EXCP_SECURE;
exc_secure = false;
} else {
- qemu_log_mask(CPU_LOG_INT, "...MemManageFault with CFSR.MSTKERR\n");
- env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
+ if (mode == STACK_LAZYFP) {
+ qemu_log_mask(CPU_LOG_INT,
+ "...MemManageFault with CFSR.MLSPERR\n");
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;
+ } else {
+ qemu_log_mask(CPU_LOG_INT,
+ "...MemManageFault with CFSR.MSTKERR\n");
+ env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
+ }
exc = ARMV7M_EXCP_MEM;
exc_secure = secure;
}
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
attrs, &txres);
if (txres != MEMTX_OK) {
/* BusFault trying to write the data */
- qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
- env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
+ if (mode == STACK_LAZYFP) {
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
+ } else {
+ qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
+ env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
+ }
exc = ARMV7M_EXCP_BUS;
exc_secure = false;
goto pend_fault;
@@ -XXX,XX +XXX,XX @@ pend_fault:
* later if we have two derived exceptions.
* The only case when we must not pend the exception but instead
* throw it away is if we are doing the push of the callee registers
- * and we've already generated a derived exception. Even in this
- * case we will still update the fault status registers.
+ * and we've already generated a derived exception (this is indicated
+ * by the caller passing STACK_IGNFAULTS). Even in this case we will
+ * still update the fault status registers.
*/
- if (!ignfault) {
+ switch (mode) {
+ case STACK_NORMAL:
armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
+ break;
+ case STACK_LAZYFP:
+ armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
+ break;
+ case STACK_IGNFAULTS:
+ break;
}
return false;
}
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
uint32_t limit;
bool want_psp;
uint32_t sig;
+ StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;

if (dotailchain) {
bool mode = lr & R_V7M_EXCRET_MODE_MASK;
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
*/
sig = v7m_integrity_sig(env, lr);
stacked_ok =
- v7m_stack_write(cpu, frameptr, sig, mmu_idx, ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
- ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
- ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx,
- ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx,
- ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx,
- ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx,
- ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx,
- ignore_faults) &&
- v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx,
- ignore_faults);
+ v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
+ v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);

/* Update SP regardless of whether any of the stack accesses failed. */
*frame_sp_p = frameptr;
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
* if it has higher priority).
*/
stacked_ok = stacked_ok &&
- v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
- v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
- v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
- v7m_stack_write(cpu, frameptr + 12, env->regs[3], mmu_idx, false) &&
- v7m_stack_write(cpu, frameptr + 16, env->regs[12], mmu_idx, false) &&
- v7m_stack_write(cpu, frameptr + 20, env->regs[14], mmu_idx, false) &&
- v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
- v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
+ v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, frameptr + 4, env->regs[1],
+ mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, frameptr + 8, env->regs[2],
+ mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, frameptr + 12, env->regs[3],
+ mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, frameptr + 16, env->regs[12],
+ mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, frameptr + 20, env->regs[14],
+ mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, frameptr + 24, env->regs[15],
+ mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);

if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
/* FPU is active, try to save its registers */
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
faddr += 8; /* skip the slot for the FPSCR */
}
stacked_ok = stacked_ok &&
- v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
- v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
+ v7m_stack_write(cpu, faddr, slo,
+ mmu_idx, STACK_NORMAL) &&
+ v7m_stack_write(cpu, faddr + 4, shi,
+ mmu_idx, STACK_NORMAL);
}
stacked_ok = stacked_ok &&
v7m_stack_write(cpu, frameptr + 0x60,
- vfp_get_fpscr(env), mmu_idx, false);
+ vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
if (cpacr_pass) {
for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
*aa32_vfp_dreg(env, i / 2) = 0;
--
2.20.1
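
The shape of this refactoring is easier to see without the MPU and NVIC
plumbing around it. The self-contained sketch below models just the
control flow: the stacking mode picks both which fault-status bits get
recorded and whether the exception is pended normally, pended via the
lazy-FP path, or not pended at all. All names are simplified stand-ins,
not the real QEMU helpers.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum StackingMode {
        STACK_NORMAL,
        STACK_IGNFAULTS,
        STACK_LAZYFP,
    } StackingMode;

    /* Model of v7m_stack_write()'s fault handling: returns false on a
     * failed write, after recording fault bits and (maybe) pending.
     */
    static bool stack_write(StackingMode mode, bool write_failed)
    {
        if (!write_failed) {
            return true;
        }

        if (mode == STACK_LAZYFP) {
            printf("record LSPERR-style fault status bits\n");
        } else {
            printf("record STKERR/MSTKERR-style fault status bits\n");
        }

        switch (mode) {
        case STACK_NORMAL:
            printf("pend derived exception\n");
            break;
        case STACK_LAZYFP:
            printf("pend exception via the lazy-FP path\n");
            break;
        case STACK_IGNFAULTS:
            /* fault bits were recorded above, but nothing is pended */
            break;
        }
        return false;
    }

    int main(void)
    {
        stack_write(STACK_NORMAL, true);
        stack_write(STACK_IGNFAULTS, true);
        stack_write(STACK_LAZYFP, true);
        return 0;
    }

A three-valued enum keeps this in one parameter; a second bool argument
would instead allow a meaningless "ignore faults while lazy stacking"
combination.
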
Deleted patch
Enable the FPU by default for the Cortex-M4 and Cortex-M33.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-27-peter.maydell@linaro.org
---
target/arm/cpu.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
set_feature(&cpu->env, ARM_FEATURE_M);
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
cpu->midr = 0x410fc240; /* r0p0 */
cpu->pmsav7_dregion = 8;
+ cpu->isar.mvfr0 = 0x10110021;
+ cpu->isar.mvfr1 = 0x11000011;
+ cpu->isar.mvfr2 = 0x00000000;
cpu->id_pfr0 = 0x00000030;
cpu->id_pfr1 = 0x00000200;
cpu->id_dfr0 = 0x00100000;
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
cpu->midr = 0x410fd213; /* r0p3 */
cpu->pmsav7_dregion = 16;
cpu->sau_sregion = 8;
+ cpu->isar.mvfr0 = 0x10110021;
+ cpu->isar.mvfr1 = 0x11000011;
+ cpu->isar.mvfr2 = 0x00000040;
cpu->id_pfr0 = 0x00000030;
cpu->id_pfr1 = 0x00000210;
cpu->id_dfr0 = 0x00200000;
--
2.20.1
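
The MVFR0/MVFR1/MVFR2 values are what actually advertise the FPU to the
guest. A quick way to sanity-check an ID register value like the
0x10110021 used here is to split it into its 4-bit fields; the sketch
below does that, assuming the usual MVFR0 layout (SIMDReg in bits
[3:0], FPSP in [7:4], FPDP in [11:8], and so on up to FPRound in
[31:28]). The helper is illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Return 4-bit field 'idx' of an MVFR0 value (idx 0 = bits [3:0]). */
    static unsigned mvfr0_field(uint32_t mvfr0, unsigned idx)
    {
        return (mvfr0 >> (idx * 4)) & 0xf;
    }

    int main(void)
    {
        static const char *const names[8] = {
            "SIMDReg", "FPSP", "FPDP", "FPTrap",
            "FPDivide", "FPSqrt", "FPShVec", "FPRound",
        };
        uint32_t mvfr0 = 0x10110021;  /* value set for Cortex-M4/M33 above */
        unsigned i;

        for (i = 0; i < 8; i++) {
            printf("%-8s = %u\n", names[i], mvfr0_field(mvfr0, i));
        }
        return 0;
    }

For this value FPSP decodes to 2 and FPDP to 0, i.e. a single-precision
only FPU, which is what these M-profile cores provide.
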
Deleted patch
From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-5-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/devices.h | 6 ------
include/hw/display/tc6393xb.h | 24 ++++++++++++++++++++++++
hw/arm/tosa.c | 2 +-
hw/display/tc6393xb.c | 2 +-
MAINTAINERS | 1 +
5 files changed, 27 insertions(+), 8 deletions(-)
create mode 100644 include/hw/display/tc6393xb.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void *tahvo_init(qemu_irq irq, int betty);

void retu_key_event(void *retu, int state);

-/* tc6393xb.c */
-typedef struct TC6393xbState TC6393xbState;
-TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
- uint32_t base, qemu_irq irq);
-qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
-
#endif
diff --git a/include/hw/display/tc6393xb.h b/include/hw/display/tc6393xb.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/display/tc6393xb.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Toshiba TC6393XB I/O Controller.
+ * Found in Sharp Zaurus SL-6000 (tosa) or some
+ * Toshiba e-Series PDAs.
+ *
+ * Copyright (c) 2007 Hervé Poussineau
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_DISPLAY_TC6393XB_H
+#define HW_DISPLAY_TC6393XB_H
+
+#include "exec/memory.h"
+#include "hw/irq.h"
+
+typedef struct TC6393xbState TC6393xbState;
+
+TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
+ uint32_t base, qemu_irq irq);
+qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
+
+#endif
diff --git a/hw/arm/tosa.c b/hw/arm/tosa.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/tosa.c
+++ b/hw/arm/tosa.c
@@ -XXX,XX +XXX,XX @@
#include "hw/hw.h"
#include "hw/arm/pxa.h"
#include "hw/arm/arm.h"
-#include "hw/devices.h"
#include "hw/arm/sharpsl.h"
#include "hw/pcmcia.h"
#include "hw/boards.h"
+#include "hw/display/tc6393xb.h"
#include "hw/i2c/i2c.h"
#include "hw/ssi/ssi.h"
#include "hw/sysbus.h"
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/tc6393xb.c
+++ b/hw/display/tc6393xb.c
@@ -XXX,XX +XXX,XX @@
#include "qapi/error.h"
#include "qemu/host-utils.h"
#include "hw/hw.h"
-#include "hw/devices.h"
+#include "hw/display/tc6393xb.h"
#include "hw/block/flash.h"
#include "ui/console.h"
#include "ui/pixel_ops.h"
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/misc/mst_fpga.c
F: hw/misc/max111x.c
F: include/hw/arm/pxa.h
F: include/hw/arm/sharpsl.h
+F: include/hw/display/tc6393xb.h

SABRELITE / i.MX6
M: Peter Maydell <peter.maydell@linaro.org>
--
2.20.1
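
The pattern applied here is worth spelling out: each device gets its
own header that exposes an opaque state type plus a small init and
accessor API, so board code includes "hw/display/tc6393xb.h" directly
instead of the catch-all hw/devices.h. The self-contained sketch below
shows that shape with invented "mydev" names (the real API is the
tc6393xb_init()/tc6393xb_l3v_get() pair moved above).

    #include <stdio.h>
    #include <stdlib.h>

    /* --- what would live in the device's public header --- */
    typedef struct MyDevState MyDevState;       /* opaque to boards */
    MyDevState *mydev_init(unsigned long base);
    int mydev_irq_line(MyDevState *s);

    /* --- what would live in the device model itself --- */
    struct MyDevState {
        unsigned long base;
        int irq_line;
    };

    MyDevState *mydev_init(unsigned long base)
    {
        MyDevState *s = calloc(1, sizeof(*s));

        if (s) {
            s->base = base;
            s->irq_line = 42;
        }
        return s;
    }

    int mydev_irq_line(MyDevState *s)
    {
        return s->irq_line;
    }

    /* --- what would live in the board file --- */
    int main(void)
    {
        MyDevState *dev = mydev_init(0x10000000UL);

        if (!dev) {
            return 1;
        }
        printf("device mapped at 0x10000000, irq line %d\n",
               mydev_irq_line(dev));
        free(dev);
        return 0;
    }

The struct definition stays private to the .c file, so callers can only
go through the declared functions, and the old hw/devices.h entry can be
deleted without touching boards that never used this device.
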