Hi; this is the latest target-arm queue; most of this is a refactoring
patchset from RTH for the arm page-table-walk emulation.

thanks
-- PMM

The following changes since commit f1d33f55c47dfdaf8daacd618588ad3ae4c452d1:

  Merge tag 'pull-testing-gdbstub-plugins-gitdm-061022-3' of https://github.com/stsquad/qemu into staging (2022-10-06 07:11:56 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221010

for you to fetch changes up to 915f62844cf62e428c7c178149b5ff1cbe129b07:

  docs/system/arm/emulation.rst: Report FEAT_GTG support (2022-10-10 14:52:25 +0100)

----------------------------------------------------------------
target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

----------------------------------------------------------------
Jerome Forissier (2):
      target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
      hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3

Joel Stanley (1):
      docs/nuvoton: Update URL for images

Peter Maydell (4):
      target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
      target/arm: Don't allow guest to use unimplemented granule sizes
      target/arm: Use ARMGranuleSize in ARMVAParameters
      docs/system/arm/emulation.rst: Report FEAT_GTG support

Richard Henderson (21):
      target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
      target/arm: Make the final stage1+2 write to secure be unconditional
      target/arm: Add is_secure parameter to get_phys_addr_lpae
      target/arm: Fix S2 disabled check in S1_ptw_translate
      target/arm: Add is_secure parameter to regime_translation_disabled
      target/arm: Split out get_phys_addr_with_secure
      target/arm: Add is_secure parameter to v7m_read_half_insn
      target/arm: Add TBFLAG_M32.SECURE
      target/arm: Merge regime_is_secure into get_phys_addr
      target/arm: Add is_secure parameter to do_ats_write
      target/arm: Fold secure and non-secure a-profile mmu indexes
      target/arm: Reorg regime_translation_disabled
      target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
      target/arm: Introduce arm_hcr_el2_eff_secstate
      target/arm: Hoist read of *is_secure in S1_ptw_translate
      target/arm: Remove env argument from combined_attrs_fwb
      target/arm: Pass HCR to attribute subroutines.
      target/arm: Fix ATS12NSO* from S PL1
      target/arm: Split out get_phys_addr_disabled
      target/arm: Fix cacheattr in get_phys_addr_disabled
      target/arm: Use tlb_set_page_full

 docs/system/arm/emulation.rst |   1 +
 docs/system/arm/nuvoton.rst   |   4 +-
 target/arm/cpu-param.h        |   2 +-
 target/arm/cpu.h              | 181 ++++++++------
 target/arm/internals.h        | 150 ++++++-----
 hw/arm/boot.c                 |   4 +
 target/arm/helper.c           | 332 ++++++++++++++----------
 target/arm/kvm.c              |   4 +-
 target/arm/m_helper.c         |  29 ++-
 target/arm/ptw.c              | 570 ++++++++++++++++++++++--------------------
 target/arm/tlb_helper.c       |   9 +-
 target/arm/translate-a64.c    |   8 -
 target/arm/translate.c        |   9 +-
 13 files changed, 717 insertions(+), 586 deletions(-)

Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff13382055
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).

For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/
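
The pattern being adopted here is the standard POSIX retry-on-EINTR
idiom (glibc packages the same idea as its TEMP_FAILURE_RETRY macro).
As a stand-alone sketch -- ioctl_retry_eintr() is a hypothetical helper
for illustration, not something in the QEMU tree:

    #include <errno.h>
    #include <sys/ioctl.h>

    /* Reissue an ioctl for as long as it fails with EINTR: that errno
     * only means the call was interrupted, not that it went wrong. */
    static int ioctl_retry_eintr(int fd, unsigned long request,
                                 unsigned long arg)
    {
        int ret;
        do {
            ret = ioctl(fd, request, arg);
        } while (ret == -1 && errno == EINTR);
        return ret;
    }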

Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
---
 target/arm/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
     if (max_vm_pa_size < 0) {
         max_vm_pa_size = 0;
     }
-    vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    do {
+        vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    } while (vmfd == -1 && errno == EINTR);
     if (vmfd < 0) {
         goto err;
     }
--
2.25.1

From: Jerome Forissier <jerome.forissier@linaro.org>

Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 54 ++++++++++++++++++++++-----------------------
 target/arm/helper.c |  5 ++++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 
 #define HPFAR_NS (1ULL << 63)
 
-#define SCR_NS (1U << 0)
-#define SCR_IRQ (1U << 1)
-#define SCR_FIQ (1U << 2)
-#define SCR_EA (1U << 3)
-#define SCR_FW (1U << 4)
-#define SCR_AW (1U << 5)
-#define SCR_NET (1U << 6)
-#define SCR_SMD (1U << 7)
-#define SCR_HCE (1U << 8)
-#define SCR_SIF (1U << 9)
-#define SCR_RW (1U << 10)
-#define SCR_ST (1U << 11)
-#define SCR_TWI (1U << 12)
-#define SCR_TWE (1U << 13)
-#define SCR_TLOR (1U << 14)
-#define SCR_TERR (1U << 15)
-#define SCR_APK (1U << 16)
-#define SCR_API (1U << 17)
-#define SCR_EEL2 (1U << 18)
-#define SCR_EASE (1U << 19)
-#define SCR_NMEA (1U << 20)
-#define SCR_FIEN (1U << 21)
-#define SCR_ENSCXT (1U << 25)
-#define SCR_ATA (1U << 26)
-#define SCR_FGTEN (1U << 27)
-#define SCR_ECVEN (1U << 28)
-#define SCR_TWEDEN (1U << 29)
+#define SCR_NS (1ULL << 0)
+#define SCR_IRQ (1ULL << 1)
+#define SCR_FIQ (1ULL << 2)
+#define SCR_EA (1ULL << 3)
+#define SCR_FW (1ULL << 4)
+#define SCR_AW (1ULL << 5)
+#define SCR_NET (1ULL << 6)
+#define SCR_SMD (1ULL << 7)
+#define SCR_HCE (1ULL << 8)
+#define SCR_SIF (1ULL << 9)
+#define SCR_RW (1ULL << 10)
+#define SCR_ST (1ULL << 11)
+#define SCR_TWI (1ULL << 12)
+#define SCR_TWE (1ULL << 13)
+#define SCR_TLOR (1ULL << 14)
+#define SCR_TERR (1ULL << 15)
+#define SCR_APK (1ULL << 16)
+#define SCR_API (1ULL << 17)
+#define SCR_EEL2 (1ULL << 18)
+#define SCR_EASE (1ULL << 19)
+#define SCR_NMEA (1ULL << 20)
+#define SCR_FIEN (1ULL << 21)
+#define SCR_ENSCXT (1ULL << 25)
+#define SCR_ATA (1ULL << 26)
+#define SCR_FGTEN (1ULL << 27)
+#define SCR_ECVEN (1ULL << 28)
+#define SCR_TWEDEN (1ULL << 29)
 #define SCR_TWEDEL MAKE_64BIT_MASK(30, 4)
 #define SCR_TME (1ULL << 34)
 #define SCR_AMVOFFEN (1ULL << 35)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     /* Begin with base v8.0 state. */
-    uint32_t valid_mask = 0x3fff;
+    uint64_t valid_mask = 0x3fff;
     ARMCPU *cpu = env_archcpu(env);
 
     /*
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         if (cpu_isar_feature(aa64_doublefault, cpu)) {
             valid_mask |= SCR_EASE | SCR_NMEA;
         }
+        if (cpu_isar_feature(aa64_sme, cpu)) {
+            valid_mask |= SCR_ENTP2;
+        }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
         if (cpu_isar_feature(aa32_ras, cpu)) {
--
2.25.1

From: Joel Stanley <joel@jms.id.au>

openpower.xyz was retired some time ago. The OpenBMC Jenkins is where
images can be found these days.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20221004050042.22681-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Boot options
 
 The Nuvoton machines can boot from an OpenBMC firmware image, or directly into
 a kernel using the ``-kernel`` option. OpenBMC images for ``quanta-gsj`` and
-possibly others can be downloaded from the OpenPOWER jenkins :
+possibly others can be downloaded from the OpenBMC jenkins :
 
- https://openpower.xyz/
+ https://jenkins.openbmc.org/
 
 The firmware image should be attached as an MTD drive. Example :
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().

Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.
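
Condensed into a stand-alone sketch (plain booleans stand in for the
VSTCR_EL2.SW and VTCR_EL2.NSW register bits), the selection the patch
introduces looks like this:

    #include <stdbool.h>

    /* For a walk that started in Secure state, the NS bit produced by
     * the stage-1 walk (ipa_secure) picks which TCR bit can flip the
     * stage-2 walk to the Non-secure translation regime. */
    static bool s2walk_secure(bool is_secure, bool ipa_secure,
                              bool vstcr_sw, bool vtcr_nsw)
    {
        if (!is_secure) {
            return false;    /* NS stage-1 walk: stage 2 is NS too */
        }
        return ipa_secure ? !vstcr_sw : !vtcr_nsw;
    }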

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         hwaddr ipa;
         int s1_prot;
         int ret;
-        bool ipa_secure;
+        bool ipa_secure, s2walk_secure;
         ARMCacheAttrs cacheattrs1;
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
         ipa = result->phys;
         ipa_secure = result->attrs.secure;
-        if (arm_is_secure_below_el3(env)) {
-            if (ipa_secure) {
-                result->attrs.secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
-            } else {
-                result->attrs.secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
-            }
+        if (is_secure) {
+            /* Select TCR based on the NS bit from the S1 walk. */
+            s2walk_secure = !(ipa_secure
+                              ? env->cp15.vstcr_el2 & VSTCR_SW
+                              : env->cp15.vtcr_el2 & VTCR_NSW);
         } else {
             assert(!ipa_secure);
+            s2walk_secure = false;
         }
 
-        s2_mmu_idx = (result->attrs.secure
+        s2_mmu_idx = (s2walk_secure
                       ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
         is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                                                  result->cacheattrs);
 
         /* Check if IPA translates to secure or non-secure PA space. */
-        if (arm_is_secure_below_el3(env)) {
+        if (is_secure) {
             if (ipa_secure) {
                 result->attrs.secure =
                     !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.
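
That the single expression is equivalent to the old if/else ladder can
be checked case by case. A stand-alone sketch, with sa_sw and nsa_nsw
as boolean stand-ins for the VSTCR_EL2.{SA,SW} and VTCR_EL2.{NSA,NSW}
tests:

    #include <stdbool.h>

    /* Old shape: assign only when is_secure; otherwise attrs.secure
     * keeps its (already false) value. */
    static bool secure_pa_old(bool is_secure, bool ipa_secure,
                              bool sa_sw, bool nsa_nsw)
    {
        bool secure = false;
        if (is_secure) {
            secure = ipa_secure ? !sa_sw : !(nsa_nsw || sa_sw);
        }
        return secure;
    }

    /* New shape: one unconditional expression; false whenever the
     * input is non-secure, matching the old behaviour. */
    static bool secure_pa_new(bool is_secure, bool ipa_secure,
                              bool sa_sw, bool nsa_nsw)
    {
        return is_secure && !sa_sw && (ipa_secure || !nsa_nsw);
    }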

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
                                                 result->cacheattrs);
 
-        /* Check if IPA translates to secure or non-secure PA space. */
-        if (is_secure) {
-            if (ipa_secure) {
-                result->attrs.secure =
-                    !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
-            } else {
-                result->attrs.secure =
-                    !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
-                    || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
-            }
-        }
+        /*
+         * Check if IPA translates to secure or non-secure PA space.
+         * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
+         */
+        result->attrs.secure =
+            (is_secure
+             && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+             && (ipa_secure
+                 || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+
         return 0;
     } else {
         /*
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
     __attribute__((nonnull));
 
 /* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         GetPhysAddrResult s2 = {};
         int ret;
 
-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
-                                 &s2, fi);
+        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
+                                 *is_secure, false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
  */
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
     /* Read an LPAE long-descriptor translation table. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
      * remain non-secure. We implement this by just ORing in the NSTable/NS
      * bits at each step.
      */
-    tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
+    tableattrs = is_secure ? 0 : (1 << 4);
     for (;;) {
         uint64_t descriptor;
         bool nstable;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             memset(result, 0, sizeof(*result));
 
             ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
-                                     is_el0, result, fi);
+                                     s2walk_secure, is_el0, result, fi);
             fi->s2addr = ipa;
 
             /* Combine the S1 and S2 perms. */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }
 
     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
-                                  result, fi);
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+                                  is_secure, false, result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         return get_phys_addr_v6(env, address, access_type, mmu_idx,
                                 is_secure, result, fi);
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, bool *is_secure,
                                ARMMMUFaultInfo *fi)
 {
+    ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
-        ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
-                                          : ARMMMUIdx_Stage2;
+        !regime_translation_disabled(env, s2_mmu_idx)) {
         GetPhysAddrResult s2 = {};
         int ret;
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.

This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
 }
 
 /* Return true if the specified stage of address translation is disabled */
-static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
+                                        bool is_secure)
 {
     uint64_t hcr_el2;
 
     if (arm_feature(env, ARM_FEATURE_M)) {
-        switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
+        switch (env->v7m.mpu_ctrl[is_secure] &
                 (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
         case R_V7M_MPU_CTRL_ENABLE_MASK:
             /* Enabled, but not for HardFault and NMI */
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
 
     if (hcr_el2 & HCR_TGE) {
         /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+        if (!is_secure && regime_el(env, mmu_idx) == 1) {
             return true;
         }
     }
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
 
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, s2_mmu_idx)) {
+        !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
         GetPhysAddrResult s2 = {};
         int ret;
 
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
     uint32_t base;
     bool is_user = regime_is_user(env, mmu_idx);
 
-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         /* MPU disabled. */
         result->phys = address;
         result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     result->page_size = TARGET_PAGE_SIZE;
     result->prot = 0;
 
-    if (regime_translation_disabled(env, mmu_idx) ||
+    if (regime_translation_disabled(env, mmu_idx, secure) ||
         m_is_ppb_region(env, address)) {
         /*
          * MPU disabled or M profile PPB access: use default memory map.
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
      * are done in arm_v7m_load_vector(), which always does a direct
      * read using address_space_ldl(), rather than going via this function.
      */
-    if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
+    if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
         hit = true;
     } else if (m_is_ppb_region(env, address)) {
         hit = true;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                                 result, fi);
 
             /* If S1 fails or S2 is disabled, return early. */
-            if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
+            if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
+                                                   is_secure)) {
                 return ret;
             }
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
 
     /* Definitely a real MMU, not an MPU */
 
-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         uint64_t hcr;
         uint8_t memattr;
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Retain the existing get_phys_addr interface using the security
state derived from mmu_idx. Move the kerneldoc comments to the
header file where they belong.
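
A hypothetical caller sketch of the new entry point (env, va and
mmu_idx assumed to be in scope; note the function returns false on
success, per the kerneldoc below):

    GetPhysAddrResult res = {};
    ARMMMUFaultInfo fi = {};

    if (!get_phys_addr_with_secure(env, va, MMU_DATA_LOAD, mmu_idx,
                                   true, &res, &fi)) {
        /* res.phys, res.attrs, res.prot and res.page_size are valid */
    }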

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 40 ++++++++++++++++++++++++++++++++++
 target/arm/ptw.c       | 44 ++++++++++++++----------------------------
 2 files changed, 55 insertions(+), 29 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
     ARMCacheAttrs cacheattrs;
 } GetPhysAddrResult;
 
+/**
+ * get_phys_addr_with_secure: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @is_secure: security state for the access
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Find the physical address corresponding to the given virtual address,
+ * by doing a translation table walk on MMU based systems or using the
+ * MPU state on MPU based systems.
+ *
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a
+ * DFSR/IFSR fault register, with the following caveats:
+ *  * we honour the short vs long DFSR format differences.
+ *  * the WnR bit is never set (the caller must do this).
+ *  * for PSMAv5 based systems we don't bother to return a full FSR format
+ *    value.
+ */
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type,
+                               ARMMMUIdx mmu_idx, bool is_secure,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+    __attribute__((nonnull));
+
+/**
+ * get_phys_addr: get the physical address for a virtual address
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @result: set on translation success.
+ * @fi: set to fault info if the translation fails
+ *
+ * Similarly, but use the security regime of @mmu_idx.
+ */
 bool get_phys_addr(CPUARMState *env, target_ulong address,
                    MMUAccessType access_type, ARMMMUIdx mmu_idx,
                    GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
     return ret;
 }
 
-/**
- * get_phys_addr - get the physical address for this virtual address
- *
- * Find the physical address corresponding to the given virtual address,
- * by doing a translation table walk on MMU based systems or using the
- * MPU state on MPU based systems.
- *
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
- * prot and page_size may not be filled in, and the populated fsr value provides
- * information on why the translation aborted, in the format of a
- * DFSR/IFSR fault register, with the following caveats:
- *  * we honour the short vs long DFSR format differences.
- *  * the WnR bit is never set (the caller must do this).
- *  * for PSMAv5 based systems we don't bother to return a full FSR format
- *    value.
- *
- * @env: CPUARMState
- * @address: virtual address to get physical address for
- * @access_type: 0 for read, 1 for write, 2 for execute
- * @mmu_idx: MMU index indicating required translation regime
- * @result: set on translation success.
- * @fi: set to fault info if the translation fails
- */
-bool get_phys_addr(CPUARMState *env, target_ulong address,
-                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                   GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
+                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                               bool is_secure, GetPhysAddrResult *result,
+                               ARMMMUFaultInfo *fi)
 {
     ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
-    bool is_secure = regime_is_secure(env, mmu_idx);
 
     if (mmu_idx != s1_mmu_idx) {
         /*
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             ARMMMUIdx s2_mmu_idx;
             bool is_el0;
 
-            ret = get_phys_addr(env, address, access_type, s1_mmu_idx,
-                                result, fi);
+            ret = get_phys_addr_with_secure(env, address, access_type,
+                                            s1_mmu_idx, is_secure, result, fi);
 
             /* If S1 fails or S2 is disabled, return early. */
             if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         }
     }
 
+bool get_phys_addr(CPUARMState *env, target_ulong address,
+                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                   GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+{
+    return get_phys_addr_with_secure(env, address, access_type, mmu_idx,
+                                     regime_is_secure(env, mmu_idx),
+                                     result, fi);
+}
+
 hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
                                          MemTxAttrs *attrs)
 {
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from v7m_read_half_insn, using
the new parameter instead.

As it happens, both callers pass true, propagated from the argument
to arm_v7m_mmu_idx_for_secstate which created the mmu_idx argument,
but that is a detail of v7m_handle_execute_nsc we need not expose
to the callee.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static bool do_v7m_function_return(ARMCPU *cpu)
     return true;
 }
 
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
+static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure,
                                uint32_t addr, uint16_t *insn)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
     ARMMMUFaultInfo fi = {};
     MemTxResult txres;
 
-    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx,
-                        regime_is_secure(env, mmu_idx), &sattrs);
+    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, secure, &sattrs);
     if (!sattrs.nsc || sattrs.ns) {
         /*
          * This must be the second half of the insn, and it straddles a
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
     /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
     mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
 
-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
+    if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15], &insn)) {
         return false;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
         goto gen_invep;
     }
 
-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
+    if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15] + 2, &insn)) {
         return false;
     }
 
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from arm_tr_init_disas_context.
Instead, provide the value of v8m_secure directly from tb_flags.
Rather than use regime_is_secure, use the env->v7m.secure directly,
as per arm_mmu_idx_el.
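
The flag follows the usual tb_flags pattern: it is computed once when
the hflags are rebuilt and simply read back at translation time, so the
translator no longer consults regime_is_secure. A stand-alone sketch of
what the FIELD/DP/EX machinery boils down to for this one bit, with
simplified stand-ins for the registerfields.h helpers so it compiles
outside the QEMU tree:

    #include <stdbool.h>
    #include <stdint.h>

    #define TBFLAG_M32_SECURE_SHIFT 6
    #define TBFLAG_M32_SECURE_MASK  (1u << TBFLAG_M32_SECURE_SHIFT)

    /* roughly what DP_TBFLAG_M32(flags, SECURE, 1) deposits */
    static uint32_t set_secure_flag(uint32_t tb_flags, bool secure)
    {
        return (tb_flags & ~TBFLAG_M32_SECURE_MASK) |
               ((uint32_t)secure << TBFLAG_M32_SECURE_SHIFT);
    }

    /* roughly what EX_TBFLAG_M32(tb_flags, SECURE) extracts */
    static bool get_secure_flag(uint32_t tb_flags)
    {
        return tb_flags & TBFLAG_M32_SECURE_MASK;
    }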
10 | 7 | ||
11 | Number of SAU regions is typically a configurable | 8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
12 | CPU parameter, but this patch doesn't provide a | 9 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
13 | QEMU CPU property for it. We can easily add one when | 10 | Message-id: 20221001162318.153420-8-richard.henderson@linaro.org |
14 | we have a board that requires it. | ||
15 | |||
16 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
17 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
18 | Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org | ||
19 | --- | 12 | --- |
20 | target/arm/cpu.h | 10 +++++ | 13 | target/arm/cpu.h | 2 ++ |
21 | hw/intc/armv7m_nvic.c | 116 ++++++++++++++++++++++++++++++++++++++++++++++++++ | 14 | target/arm/helper.c | 4 ++++ |
22 | target/arm/cpu.c | 27 ++++++++++++ | 15 | target/arm/translate.c | 3 +-- |
23 | target/arm/machine.c | 14 ++++++ | 16 | 3 files changed, 7 insertions(+), 2 deletions(-) |
24 | 4 files changed, 167 insertions(+) | ||
25 | 17 | ||
26 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 18 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
27 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
28 | --- a/target/arm/cpu.h | 20 | --- a/target/arm/cpu.h |
29 | +++ b/target/arm/cpu.h | 21 | +++ b/target/arm/cpu.h |
30 | @@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState { | 22 | @@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */ |
31 | uint32_t mair1[M_REG_NUM_BANKS]; | 23 | FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */ |
32 | } pmsav8; | 24 | /* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */ |
33 | 25 | FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */ | |
34 | + /* v8M SAU */ | 26 | +/* Set if in secure mode */ |
35 | + struct { | 27 | +FIELD(TBFLAG_M32, SECURE, 6, 1) |
36 | + uint32_t *rbar; | 28 | |
37 | + uint32_t *rlar; | 29 | /* |
38 | + uint32_t rnr; | 30 | * Bit usage when in AArch64 state |
39 | + uint32_t ctrl; | 31 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
40 | + } sau; | ||
41 | + | ||
42 | void *nvic; | ||
43 | const struct arm_boot_info *boot_info; | ||
44 | /* Store GICv3CPUState to access from this struct */ | ||
45 | @@ -XXX,XX +XXX,XX @@ struct ARMCPU { | ||
46 | bool has_mpu; | ||
47 | /* PMSAv7 MPU number of supported regions */ | ||
48 | uint32_t pmsav7_dregion; | ||
49 | + /* v8M SAU number of supported regions */ | ||
50 | + uint32_t sau_sregion; | ||
51 | |||
52 | /* PSCI conduit used to invoke PSCI methods | ||
53 | * 0 - disabled, 1 - smc, 2 - hvc | ||
54 | diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c | ||
55 | index XXXXXXX..XXXXXXX 100644 | 32 | index XXXXXXX..XXXXXXX 100644 |
56 | --- a/hw/intc/armv7m_nvic.c | 33 | --- a/target/arm/helper.c |
57 | +++ b/hw/intc/armv7m_nvic.c | 34 | +++ b/target/arm/helper.c |
58 | @@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs) | 35 | @@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el, |
59 | goto bad_offset; | 36 | DP_TBFLAG_M32(flags, STACKCHECK, 1); |
60 | } | ||
61 | return cpu->env.pmsav8.mair1[attrs.secure]; | ||
62 | + case 0xdd0: /* SAU_CTRL */ | ||
63 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
64 | + goto bad_offset; | ||
65 | + } | ||
66 | + if (!attrs.secure) { | ||
67 | + return 0; | ||
68 | + } | ||
69 | + return cpu->env.sau.ctrl; | ||
70 | + case 0xdd4: /* SAU_TYPE */ | ||
71 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
72 | + goto bad_offset; | ||
73 | + } | ||
74 | + if (!attrs.secure) { | ||
75 | + return 0; | ||
76 | + } | ||
77 | + return cpu->sau_sregion; | ||
78 | + case 0xdd8: /* SAU_RNR */ | ||
79 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
80 | + goto bad_offset; | ||
81 | + } | ||
82 | + if (!attrs.secure) { | ||
83 | + return 0; | ||
84 | + } | ||
85 | + return cpu->env.sau.rnr; | ||
86 | + case 0xddc: /* SAU_RBAR */ | ||
87 | + { | ||
88 | + int region = cpu->env.sau.rnr; | ||
89 | + | ||
90 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
91 | + goto bad_offset; | ||
92 | + } | ||
93 | + if (!attrs.secure) { | ||
94 | + return 0; | ||
95 | + } | ||
96 | + if (region >= cpu->sau_sregion) { | ||
97 | + return 0; | ||
98 | + } | ||
99 | + return cpu->env.sau.rbar[region]; | ||
100 | + } | ||
101 | + case 0xde0: /* SAU_RLAR */ | ||
102 | + { | ||
103 | + int region = cpu->env.sau.rnr; | ||
104 | + | ||
105 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
106 | + goto bad_offset; | ||
107 | + } | ||
108 | + if (!attrs.secure) { | ||
109 | + return 0; | ||
110 | + } | ||
111 | + if (region >= cpu->sau_sregion) { | ||
112 | + return 0; | ||
113 | + } | ||
114 | + return cpu->env.sau.rlar[region]; | ||
115 | + } | ||
116 | case 0xde4: /* SFSR */ | ||
117 | if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
118 | goto bad_offset; | ||
119 | @@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value, | ||
120 | * only affect cacheability, and we don't implement caching. | ||
121 | */ | ||
122 | break; | ||
123 | + case 0xdd0: /* SAU_CTRL */ | ||
124 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
125 | + goto bad_offset; | ||
126 | + } | ||
127 | + if (!attrs.secure) { | ||
128 | + return; | ||
129 | + } | ||
130 | + cpu->env.sau.ctrl = value & 3; | ||
| + break; | ||
131 | + case 0xdd4: /* SAU_TYPE */ | ||
132 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
133 | + goto bad_offset; | ||
134 | + } | ||
135 | + break; | ||
136 | + case 0xdd8: /* SAU_RNR */ | ||
137 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
138 | + goto bad_offset; | ||
139 | + } | ||
140 | + if (!attrs.secure) { | ||
141 | + return; | ||
142 | + } | ||
143 | + if (value >= cpu->sau_sregion) { | ||
144 | + qemu_log_mask(LOG_GUEST_ERROR, "SAU region out of range %" | ||
145 | + PRIu32 "/%" PRIu32 "\n", | ||
146 | + value, cpu->sau_sregion); | ||
147 | + } else { | ||
148 | + cpu->env.sau.rnr = value; | ||
149 | + } | ||
150 | + break; | ||
151 | + case 0xddc: /* SAU_RBAR */ | ||
152 | + { | ||
153 | + int region = cpu->env.sau.rnr; | ||
154 | + | ||
155 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
156 | + goto bad_offset; | ||
157 | + } | ||
158 | + if (!attrs.secure) { | ||
159 | + return; | ||
160 | + } | ||
161 | + if (region >= cpu->sau_sregion) { | ||
162 | + return; | ||
163 | + } | ||
164 | + cpu->env.sau.rbar[region] = value & ~0x1f; | ||
165 | + tlb_flush(CPU(cpu)); | ||
166 | + break; | ||
167 | + } | ||
168 | + case 0xde0: /* SAU_RLAR */ | ||
169 | + { | ||
170 | + int region = cpu->env.sau.rnr; | ||
171 | + | ||
172 | + if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
173 | + goto bad_offset; | ||
174 | + } | ||
175 | + if (!attrs.secure) { | ||
176 | + return; | ||
177 | + } | ||
178 | + if (region >= cpu->sau_sregion) { | ||
179 | + return; | ||
180 | + } | ||
181 | + cpu->env.sau.rlar[region] = value & ~0x1c; | ||
182 | + tlb_flush(CPU(cpu)); | ||
183 | + break; | ||
184 | + } | ||
185 | case 0xde4: /* SFSR */ | ||
186 | if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) { | ||
187 | goto bad_offset; | ||
188 | diff --git a/target/arm/cpu.c b/target/arm/cpu.c | ||
189 | index XXXXXXX..XXXXXXX 100644 | ||
190 | --- a/target/arm/cpu.c | ||
191 | +++ b/target/arm/cpu.c | ||
192 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s) | ||
193 | env->pmsav8.mair1[M_REG_S] = 0; | ||
194 | } | 37 | } |
195 | 38 | ||
196 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { | 39 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) { |
197 | + if (cpu->sau_sregion > 0) { | 40 | + DP_TBFLAG_M32(flags, SECURE, 1); |
198 | + memset(env->sau.rbar, 0, sizeof(*env->sau.rbar) * cpu->sau_sregion); | ||
199 | + memset(env->sau.rlar, 0, sizeof(*env->sau.rlar) * cpu->sau_sregion); | ||
200 | + } | ||
201 | + env->sau.rnr = 0; | ||
202 | + /* SAU_CTRL reset value is IMPDEF; we choose 0, which is what | ||
203 | + * the Cortex-M33 does. | ||
204 | + */ | ||
205 | + env->sau.ctrl = 0; | ||
206 | + } | 41 | + } |
207 | + | 42 | + |
208 | set_flush_to_zero(1, &env->vfp.standard_fp_status); | 43 | return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags); |
209 | set_flush_inputs_to_zero(1, &env->vfp.standard_fp_status); | ||
210 | set_default_nan_mode(1, &env->vfp.standard_fp_status); | ||
211 | @@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp) | ||
212 | } | ||
213 | } | ||
214 | |||
215 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { | ||
216 | + uint32_t nr = cpu->sau_sregion; | ||
217 | + | ||
218 | + if (nr > 0xff) { | ||
219 | + error_setg(errp, "v8M SAU #regions invalid %" PRIu32, nr); | ||
220 | + return; | ||
221 | + } | ||
222 | + | ||
223 | + if (nr) { | ||
224 | + env->sau.rbar = g_new0(uint32_t, nr); | ||
225 | + env->sau.rlar = g_new0(uint32_t, nr); | ||
226 | + } | ||
227 | + } | ||
228 | + | ||
229 | if (arm_feature(env, ARM_FEATURE_EL3)) { | ||
230 | set_feature(env, ARM_FEATURE_VBAR); | ||
231 | } | ||
232 | @@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj) | ||
233 | cpu->midr = 0x410fc240; /* r0p0 */ | ||
234 | cpu->pmsav7_dregion = 8; | ||
235 | } | 44 | } |
236 | + | 45 | |
237 | static void arm_v7m_class_init(ObjectClass *oc, void *data) | 46 | diff --git a/target/arm/translate.c b/target/arm/translate.c |
238 | { | ||
239 | CPUClass *cc = CPU_CLASS(oc); | ||
240 | diff --git a/target/arm/machine.c b/target/arm/machine.c | ||
241 | index XXXXXXX..XXXXXXX 100644 | 47 | index XXXXXXX..XXXXXXX 100644 |
242 | --- a/target/arm/machine.c | 48 | --- a/target/arm/translate.c |
243 | +++ b/target/arm/machine.c | 49 | +++ b/target/arm/translate.c |
244 | @@ -XXX,XX +XXX,XX @@ static bool s_rnr_vmstate_validate(void *opaque, int version_id) | 50 | @@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs) |
245 | return cpu->env.pmsav7.rnr[M_REG_S] < cpu->pmsav7_dregion; | 51 | dc->vfp_enabled = 1; |
246 | } | 52 | dc->be_data = MO_TE; |
247 | 53 | dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER); | |
248 | +static bool sau_rnr_vmstate_validate(void *opaque, int version_id) | 54 | - dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && |
249 | +{ | 55 | - regime_is_secure(env, dc->mmu_idx); |
250 | + ARMCPU *cpu = opaque; | 56 | + dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE); |
251 | + | 57 | dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK); |
252 | + return cpu->env.sau.rnr < cpu->sau_sregion; | 58 | dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG); |
253 | +} | 59 | dc->v7m_new_fp_ctxt_needed = |
254 | + | ||
255 | static bool m_security_needed(void *opaque) | ||
256 | { | ||
257 | ARMCPU *cpu = opaque; | ||
258 | @@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_security = { | ||
259 | VMSTATE_UINT32(env.v7m.cfsr[M_REG_S], ARMCPU), | ||
260 | VMSTATE_UINT32(env.v7m.sfsr, ARMCPU), | ||
261 | VMSTATE_UINT32(env.v7m.sfar, ARMCPU), | ||
262 | + VMSTATE_VARRAY_UINT32(env.sau.rbar, ARMCPU, sau_sregion, 0, | ||
263 | + vmstate_info_uint32, uint32_t), | ||
264 | + VMSTATE_VARRAY_UINT32(env.sau.rlar, ARMCPU, sau_sregion, 0, | ||
265 | + vmstate_info_uint32, uint32_t), | ||
266 | + VMSTATE_UINT32(env.sau.rnr, ARMCPU), | ||
267 | + VMSTATE_VALIDATE("SAU_RNR is valid", sau_rnr_vmstate_validate), | ||
268 | + VMSTATE_UINT32(env.sau.ctrl, ARMCPU), | ||
269 | VMSTATE_END_OF_LIST() | ||
270 | } | ||
271 | }; | ||
272 | -- | 60 | -- |
273 | 2.7.4 | 61 | 2.25.1 |
274 | |||
275 | diff view generated by jsdifflib |
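
For the SAU patch above, the guest-visible side looks like this: secure code programs the registers through the v8M System Control Space. The snippet below is a hedged guest-side sketch, not QEMU code; the addresses are the architectural SCS locations for the 0xdd0..0xde0 offsets decoded in nvic_readl()/nvic_writel(), the masks mirror the & ~0x1f and & ~0x1c handling in the patch, and the helper name is invented. With this patch alone the writes only read back as written:

    #include <stdint.h>

    #define SAU_CTRL (*(volatile uint32_t *)0xE000EDD0)
    #define SAU_TYPE (*(volatile uint32_t *)0xE000EDD4)
    #define SAU_RNR  (*(volatile uint32_t *)0xE000EDD8)
    #define SAU_RBAR (*(volatile uint32_t *)0xE000EDDC)
    #define SAU_RLAR (*(volatile uint32_t *)0xE000EDE0)

    /* Hypothetical helper: configure SAU region n to span base..limit. */
    static void sau_set_region(uint32_t n, uint32_t base, uint32_t limit)
    {
        if (n >= (SAU_TYPE & 0xff)) {     /* SAU_TYPE.SREGION, i.e. cpu->sau_sregion */
            return;                       /* out of range, as the RNR check above */
        }
        SAU_RNR  = n;                     /* select region n */
        SAU_RBAR = base & ~0x1fu;         /* bits [4:0] are RES0 */
        SAU_RLAR = (limit & ~0x1cu) | 1u; /* bit 0 ENABLE (bit 1 is NSC) */
        SAU_CTRL |= 1u;                   /* SAU_CTRL.ENABLE */
    }
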
1 | Add support for v8M and in particular the security extension | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | to the exception entry code. This requires changes to: | ||
3 | * calculate the exception-return magic LR value | ||
4 | * push the callee-saves registers in certain cases | ||
5 | * clear registers when taking non-secure exceptions to avoid | ||
6 | leaking information from the interrupted secure code | ||
7 | * switch to the correct security state on entry | ||
8 | * use the vector table for the security state we're targeting | ||
9 | 2 | ||
3 | This is the last use of regime_is_secure; remove it | ||
4 | entirely before changing the layout of ARMMMUIdx. | ||
5 | |||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20221001162318.153420-9-richard.henderson@linaro.org | ||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
11 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
12 | Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org | ||
13 | --- | 10 | --- |
14 | target/arm/helper.c | 165 +++++++++++++++++++++++++++++++++++++++++++++------- | 11 | target/arm/internals.h | 42 ---------------------------------------- |
15 | 1 file changed, 145 insertions(+), 20 deletions(-) | 12 | target/arm/ptw.c | 44 ++++++++++++++++++++++++++++++++++++++++-- |
13 | 2 files changed, 42 insertions(+), 44 deletions(-) | ||
16 | 14 | ||
17 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 15 | diff --git a/target/arm/internals.h b/target/arm/internals.h |
18 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
19 | --- a/target/arm/helper.c | 17 | --- a/target/arm/internals.h |
20 | +++ b/target/arm/helper.c | 18 | +++ b/target/arm/internals.h |
21 | @@ -XXX,XX +XXX,XX @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, | 19 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx) |
22 | } | 20 | } |
23 | } | 21 | } |
24 | 22 | ||
25 | -static uint32_t arm_v7m_load_vector(ARMCPU *cpu) | 23 | -/* Return true if this address translation regime is secure */ |
26 | +static uint32_t arm_v7m_load_vector(ARMCPU *cpu, bool targets_secure) | 24 | -static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx) |
25 | -{ | ||
26 | - switch (mmu_idx) { | ||
27 | - case ARMMMUIdx_E10_0: | ||
28 | - case ARMMMUIdx_E10_1: | ||
29 | - case ARMMMUIdx_E10_1_PAN: | ||
30 | - case ARMMMUIdx_E20_0: | ||
31 | - case ARMMMUIdx_E20_2: | ||
32 | - case ARMMMUIdx_E20_2_PAN: | ||
33 | - case ARMMMUIdx_Stage1_E0: | ||
34 | - case ARMMMUIdx_Stage1_E1: | ||
35 | - case ARMMMUIdx_Stage1_E1_PAN: | ||
36 | - case ARMMMUIdx_E2: | ||
37 | - case ARMMMUIdx_Stage2: | ||
38 | - case ARMMMUIdx_MPrivNegPri: | ||
39 | - case ARMMMUIdx_MUserNegPri: | ||
40 | - case ARMMMUIdx_MPriv: | ||
41 | - case ARMMMUIdx_MUser: | ||
42 | - return false; | ||
43 | - case ARMMMUIdx_SE3: | ||
44 | - case ARMMMUIdx_SE10_0: | ||
45 | - case ARMMMUIdx_SE10_1: | ||
46 | - case ARMMMUIdx_SE10_1_PAN: | ||
47 | - case ARMMMUIdx_SE20_0: | ||
48 | - case ARMMMUIdx_SE20_2: | ||
49 | - case ARMMMUIdx_SE20_2_PAN: | ||
50 | - case ARMMMUIdx_Stage1_SE0: | ||
51 | - case ARMMMUIdx_Stage1_SE1: | ||
52 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
53 | - case ARMMMUIdx_SE2: | ||
54 | - case ARMMMUIdx_Stage2_S: | ||
55 | - case ARMMMUIdx_MSPrivNegPri: | ||
56 | - case ARMMMUIdx_MSUserNegPri: | ||
57 | - case ARMMMUIdx_MSPriv: | ||
58 | - case ARMMMUIdx_MSUser: | ||
59 | - return true; | ||
60 | - default: | ||
61 | - g_assert_not_reached(); | ||
62 | - } | ||
63 | -} | ||
64 | - | ||
65 | static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
27 | { | 66 | { |
28 | CPUState *cs = CPU(cpu); | 67 | switch (mmu_idx) { |
29 | CPUARMState *env = &cpu->env; | 68 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
30 | MemTxResult result; | 69 | index XXXXXXX..XXXXXXX 100644 |
31 | - hwaddr vec = env->v7m.vecbase[env->v7m.secure] + env->v7m.exception * 4; | 70 | --- a/target/arm/ptw.c |
32 | + hwaddr vec = env->v7m.vecbase[targets_secure] + env->v7m.exception * 4; | 71 | +++ b/target/arm/ptw.c |
33 | uint32_t addr; | 72 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, |
34 | 73 | MMUAccessType access_type, ARMMMUIdx mmu_idx, | |
35 | addr = address_space_ldl(cs->as, vec, | 74 | GetPhysAddrResult *result, ARMMMUFaultInfo *fi) |
36 | @@ -XXX,XX +XXX,XX @@ static uint32_t arm_v7m_load_vector(ARMCPU *cpu) | 75 | { |
37 | * Since we don't model Lockup, we just report this guest error | 76 | + bool is_secure; |
38 | * via cpu_abort(). | 77 | + |
39 | */ | 78 | + switch (mmu_idx) { |
40 | - cpu_abort(cs, "Failed to read from exception vector table " | 79 | + case ARMMMUIdx_E10_0: |
41 | - "entry %08x\n", (unsigned)vec); | 80 | + case ARMMMUIdx_E10_1: |
42 | + cpu_abort(cs, "Failed to read from %s exception vector table " | 81 | + case ARMMMUIdx_E10_1_PAN: |
43 | + "entry %08x\n", targets_secure ? "secure" : "nonsecure", | 82 | + case ARMMMUIdx_E20_0: |
44 | + (unsigned)vec); | 83 | + case ARMMMUIdx_E20_2: |
45 | } | 84 | + case ARMMMUIdx_E20_2_PAN: |
46 | return addr; | 85 | + case ARMMMUIdx_Stage1_E0: |
86 | + case ARMMMUIdx_Stage1_E1: | ||
87 | + case ARMMMUIdx_Stage1_E1_PAN: | ||
88 | + case ARMMMUIdx_E2: | ||
89 | + case ARMMMUIdx_Stage2: | ||
90 | + case ARMMMUIdx_MPrivNegPri: | ||
91 | + case ARMMMUIdx_MUserNegPri: | ||
92 | + case ARMMMUIdx_MPriv: | ||
93 | + case ARMMMUIdx_MUser: | ||
94 | + is_secure = false; | ||
95 | + break; | ||
96 | + case ARMMMUIdx_SE3: | ||
97 | + case ARMMMUIdx_SE10_0: | ||
98 | + case ARMMMUIdx_SE10_1: | ||
99 | + case ARMMMUIdx_SE10_1_PAN: | ||
100 | + case ARMMMUIdx_SE20_0: | ||
101 | + case ARMMMUIdx_SE20_2: | ||
102 | + case ARMMMUIdx_SE20_2_PAN: | ||
103 | + case ARMMMUIdx_Stage1_SE0: | ||
104 | + case ARMMMUIdx_Stage1_SE1: | ||
105 | + case ARMMMUIdx_Stage1_SE1_PAN: | ||
106 | + case ARMMMUIdx_SE2: | ||
107 | + case ARMMMUIdx_Stage2_S: | ||
108 | + case ARMMMUIdx_MSPrivNegPri: | ||
109 | + case ARMMMUIdx_MSUserNegPri: | ||
110 | + case ARMMMUIdx_MSPriv: | ||
111 | + case ARMMMUIdx_MSUser: | ||
112 | + is_secure = true; | ||
113 | + break; | ||
114 | + default: | ||
115 | + g_assert_not_reached(); | ||
116 | + } | ||
117 | return get_phys_addr_with_secure(env, address, access_type, mmu_idx, | ||
118 | - regime_is_secure(env, mmu_idx), | ||
119 | - result, fi); | ||
120 | + is_secure, result, fi); | ||
47 | } | 121 | } |
48 | 122 | ||
49 | -static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr) | 123 | hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, |
50 | +static void v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain) | ||
51 | +{ | ||
52 | + /* For v8M, push the callee-saves register part of the stack frame. | ||
53 | + * Compare the v8M pseudocode PushCalleeStack(). | ||
54 | + * In the tailchaining case this may not be the current stack. | ||
55 | + */ | ||
56 | + CPUARMState *env = &cpu->env; | ||
57 | + CPUState *cs = CPU(cpu); | ||
58 | + uint32_t *frame_sp_p; | ||
59 | + uint32_t frameptr; | ||
60 | + | ||
61 | + if (dotailchain) { | ||
62 | + frame_sp_p = get_v7m_sp_ptr(env, true, | ||
63 | + lr & R_V7M_EXCRET_MODE_MASK, | ||
64 | + lr & R_V7M_EXCRET_SPSEL_MASK); | ||
65 | + } else { | ||
66 | + frame_sp_p = &env->regs[13]; | ||
67 | + } | ||
68 | + | ||
69 | + frameptr = *frame_sp_p - 0x28; | ||
70 | + | ||
71 | + stl_phys(cs->as, frameptr, 0xfefa125b); | ||
72 | + stl_phys(cs->as, frameptr + 0x8, env->regs[4]); | ||
73 | + stl_phys(cs->as, frameptr + 0xc, env->regs[5]); | ||
74 | + stl_phys(cs->as, frameptr + 0x10, env->regs[6]); | ||
75 | + stl_phys(cs->as, frameptr + 0x14, env->regs[7]); | ||
76 | + stl_phys(cs->as, frameptr + 0x18, env->regs[8]); | ||
77 | + stl_phys(cs->as, frameptr + 0x1c, env->regs[9]); | ||
78 | + stl_phys(cs->as, frameptr + 0x20, env->regs[10]); | ||
79 | + stl_phys(cs->as, frameptr + 0x24, env->regs[11]); | ||
80 | + | ||
81 | + *frame_sp_p = frameptr; | ||
82 | +} | ||
83 | + | ||
84 | +static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain) | ||
85 | { | ||
86 | /* Do the "take the exception" parts of exception entry, | ||
87 | * but not the pushing of state to the stack. This is | ||
88 | @@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr) | ||
89 | */ | ||
90 | CPUARMState *env = &cpu->env; | ||
91 | uint32_t addr; | ||
92 | + bool targets_secure; | ||
93 | + | ||
94 | + targets_secure = armv7m_nvic_acknowledge_irq(env->nvic); | ||
95 | |||
96 | - armv7m_nvic_acknowledge_irq(env->nvic); | ||
97 | + if (arm_feature(env, ARM_FEATURE_V8)) { | ||
98 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY) && | ||
99 | + (lr & R_V7M_EXCRET_S_MASK)) { | ||
100 | + /* The background code (the owner of the registers in the | ||
101 | + * exception frame) is Secure. This means it may either already | ||
102 | + * have or now needs to push callee-saves registers. | ||
103 | + */ | ||
104 | + if (targets_secure) { | ||
105 | + if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) { | ||
106 | + /* We took an exception from Secure to NonSecure | ||
107 | + * (which means the callee-saved registers got stacked) | ||
108 | + * and are now tailchaining to a Secure exception. | ||
109 | + * Clear DCRS so eventual return from this Secure | ||
110 | + * exception unstacks the callee-saved registers. | ||
111 | + */ | ||
112 | + lr &= ~R_V7M_EXCRET_DCRS_MASK; | ||
113 | + } | ||
114 | + } else { | ||
115 | + /* We're going to a non-secure exception; push the | ||
116 | + * callee-saves registers to the stack now, if they're | ||
117 | + * not already saved. | ||
118 | + */ | ||
119 | + if (lr & R_V7M_EXCRET_DCRS_MASK && | ||
120 | + !(dotailchain && (lr & R_V7M_EXCRET_ES_MASK))) { | ||
121 | + v7m_push_callee_stack(cpu, lr, dotailchain); | ||
122 | + } | ||
123 | + lr |= R_V7M_EXCRET_DCRS_MASK; | ||
124 | + } | ||
125 | + } | ||
126 | + | ||
127 | + lr &= ~R_V7M_EXCRET_ES_MASK; | ||
128 | + if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) { | ||
129 | + lr |= R_V7M_EXCRET_ES_MASK; | ||
130 | + } | ||
131 | + lr &= ~R_V7M_EXCRET_SPSEL_MASK; | ||
132 | + if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) { | ||
133 | + lr |= R_V7M_EXCRET_SPSEL_MASK; | ||
134 | + } | ||
135 | + | ||
136 | + /* Clear registers if necessary to prevent non-secure exception | ||
137 | + * code being able to see register values from secure code. | ||
138 | + * Where register values become architecturally UNKNOWN we leave | ||
139 | + * them with their previous values. | ||
140 | + */ | ||
141 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { | ||
142 | + if (!targets_secure) { | ||
143 | + /* Always clear the caller-saved registers (they have been | ||
144 | + * pushed to the stack earlier in v7m_push_stack()). | ||
145 | + * Clear callee-saved registers if the background code is | ||
146 | + * Secure (in which case these regs were saved in | ||
147 | + * v7m_push_callee_stack()). | ||
148 | + */ | ||
149 | + int i; | ||
150 | + | ||
151 | + for (i = 0; i < 13; i++) { | ||
152 | + /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */ | ||
153 | + if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) { | ||
154 | + env->regs[i] = 0; | ||
155 | + } | ||
156 | + } | ||
157 | + /* Clear EAPSR */ | ||
158 | + xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT); | ||
159 | + } | ||
160 | + } | ||
161 | + } | ||
162 | + | ||
163 | + /* Switch to target security state -- must do this before writing SPSEL */ | ||
164 | + switch_v7m_security_state(env, targets_secure); | ||
165 | write_v7m_control_spsel(env, 0); | ||
166 | arm_clear_exclusive(env); | ||
167 | /* Clear IT bits */ | ||
168 | env->condexec_bits = 0; | ||
169 | env->regs[14] = lr; | ||
170 | - addr = arm_v7m_load_vector(cpu); | ||
171 | + addr = arm_v7m_load_vector(cpu, targets_secure); | ||
172 | env->regs[15] = addr & 0xfffffffe; | ||
173 | env->thumb = addr & 1; | ||
174 | } | ||
175 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | ||
176 | if (sfault) { | ||
177 | env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK; | ||
178 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); | ||
179 | - v7m_exception_taken(cpu, excret); | ||
180 | + v7m_exception_taken(cpu, excret, true); | ||
181 | qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " | ||
182 | "stackframe: failed EXC_RETURN.ES validity check\n"); | ||
183 | return; | ||
184 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | ||
185 | */ | ||
186 | env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; | ||
187 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure); | ||
188 | - v7m_exception_taken(cpu, excret); | ||
189 | + v7m_exception_taken(cpu, excret, true); | ||
190 | qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " | ||
191 | "stackframe: failed exception return integrity check\n"); | ||
192 | return; | ||
193 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | ||
194 | /* Take a SecureFault on the current stack */ | ||
195 | env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK; | ||
196 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); | ||
197 | - v7m_exception_taken(cpu, excret); | ||
198 | + v7m_exception_taken(cpu, excret, true); | ||
199 | qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " | ||
200 | "stackframe: failed exception return integrity " | ||
201 | "signature check\n"); | ||
202 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | ||
203 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, | ||
204 | env->v7m.secure); | ||
205 | env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; | ||
206 | - v7m_exception_taken(cpu, excret); | ||
207 | + v7m_exception_taken(cpu, excret, true); | ||
208 | qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " | ||
209 | "stackframe: failed exception return integrity " | ||
210 | "check\n"); | ||
211 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | ||
212 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false); | ||
213 | env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; | ||
214 | v7m_push_stack(cpu); | ||
215 | - v7m_exception_taken(cpu, excret); | ||
216 | + v7m_exception_taken(cpu, excret, false); | ||
217 | qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: " | ||
218 | "failed exception return integrity check\n"); | ||
219 | return; | ||
220 | @@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) | ||
221 | return; /* Never happens. Keep compiler happy. */ | ||
222 | } | ||
223 | |||
224 | - lr = R_V7M_EXCRET_RES1_MASK | | ||
225 | - R_V7M_EXCRET_S_MASK | | ||
226 | - R_V7M_EXCRET_DCRS_MASK | | ||
227 | - R_V7M_EXCRET_FTYPE_MASK | | ||
228 | - R_V7M_EXCRET_ES_MASK; | ||
229 | - if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) { | ||
230 | - lr |= R_V7M_EXCRET_SPSEL_MASK; | ||
231 | + if (arm_feature(env, ARM_FEATURE_V8)) { | ||
232 | + lr = R_V7M_EXCRET_RES1_MASK | | ||
233 | + R_V7M_EXCRET_DCRS_MASK | | ||
234 | + R_V7M_EXCRET_FTYPE_MASK; | ||
235 | + /* The S bit indicates whether we should return to Secure | ||
236 | + * or NonSecure (ie our current state). | ||
237 | + * The ES bit indicates whether we're taking this exception | ||
238 | + * to Secure or NonSecure (ie our target state). We set it | ||
239 | + * later, in v7m_exception_taken(). | ||
240 | + * The SPSEL bit is also set in v7m_exception_taken() for v8M. | ||
241 | + * This corresponds to the ARM ARM pseudocode for v8M setting | ||
242 | + * some LR bits in PushStack() and some in ExceptionTaken(); | ||
243 | + * the distinction matters for the tailchain cases where we | ||
244 | + * can take an exception without pushing the stack. | ||
245 | + */ | ||
246 | + if (env->v7m.secure) { | ||
247 | + lr |= R_V7M_EXCRET_S_MASK; | ||
248 | + } | ||
249 | + } else { | ||
250 | + lr = R_V7M_EXCRET_RES1_MASK | | ||
251 | + R_V7M_EXCRET_S_MASK | | ||
252 | + R_V7M_EXCRET_DCRS_MASK | | ||
253 | + R_V7M_EXCRET_FTYPE_MASK | | ||
254 | + R_V7M_EXCRET_ES_MASK; | ||
255 | + if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) { | ||
256 | + lr |= R_V7M_EXCRET_SPSEL_MASK; | ||
257 | + } | ||
258 | } | ||
259 | if (!arm_v7m_is_handler_mode(env)) { | ||
260 | lr |= R_V7M_EXCRET_MODE_MASK; | ||
261 | } | ||
262 | |||
263 | v7m_push_stack(cpu); | ||
264 | - v7m_exception_taken(cpu, lr); | ||
265 | + v7m_exception_taken(cpu, lr, false); | ||
266 | qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception); | ||
267 | } | ||
268 | |||
269 | -- | 124 | -- |
270 | 2.7.4 | 125 | 2.25.1 |
271 | |||
272 | diff view generated by jsdifflib |
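
The 0x28-byte frame that v7m_push_callee_stack() lays down in the exception-entry patch above is easier to see as a struct. This is an illustrative sketch only; the struct and its name are invented here, but the offsets match the stl_phys() calls at frameptr = *frame_sp_p - 0x28:

    #include <stdint.h>

    /* Sketch of the v8M callee-save ("extended") frame layout */
    struct v8m_callee_frame {
        uint32_t integrity_sig;  /* 0x00: 0xfefa125b */
        uint32_t reserved;       /* 0x04: not written by the patch */
        uint32_t r4_to_r11[8];   /* 0x08..0x24: callee-saved core registers */
    };

The signature at offset 0 is what the return path verifies; a mismatch takes the "failed exception return integrity signature check" SecureFault path in do_v7m_exception_exit().
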
1 | In v8M, more bits are defined in the exception-return magic | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | values; update the code that checks these so we accept | ||
3 | the v8M values when the CPU permits them. | ||
4 | 2 | ||
3 | Use get_phys_addr_with_secure directly. For A-profile, this is the | ||
4 | one place where the value of is_secure may not equal arm_is_secure(env). | ||
5 | |||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20221001162318.153420-10-richard.henderson@linaro.org | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
6 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org | ||
8 | --- | 10 | --- |
9 | target/arm/helper.c | 73 ++++++++++++++++++++++++++++++++++++++++++----------- | 11 | target/arm/helper.c | 19 ++++++++++++++----- |
10 | 1 file changed, 58 insertions(+), 15 deletions(-) | 12 | 1 file changed, 14 insertions(+), 5 deletions(-) |
11 | 13 | ||
12 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 14 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
13 | index XXXXXXX..XXXXXXX 100644 | 15 | index XXXXXXX..XXXXXXX 100644 |
14 | --- a/target/arm/helper.c | 16 | --- a/target/arm/helper.c |
15 | +++ b/target/arm/helper.c | 17 | +++ b/target/arm/helper.c |
16 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 18 | @@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri, |
17 | uint32_t excret; | 19 | |
18 | uint32_t xpsr; | 20 | #ifdef CONFIG_TCG |
19 | bool ufault = false; | 21 | static uint64_t do_ats_write(CPUARMState *env, uint64_t value, |
20 | - bool return_to_sp_process = false; | 22 | - MMUAccessType access_type, ARMMMUIdx mmu_idx) |
21 | - bool return_to_handler = false; | 23 | + MMUAccessType access_type, ARMMMUIdx mmu_idx, |
22 | + bool sfault = false; | 24 | + bool is_secure) |
23 | + bool return_to_sp_process; | 25 | { |
24 | + bool return_to_handler; | 26 | bool ret; |
25 | bool rettobase = false; | 27 | uint64_t par64; |
26 | bool exc_secure = false; | 28 | @@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value, |
27 | bool return_to_secure; | 29 | ARMMMUFaultInfo fi = {}; |
28 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 30 | GetPhysAddrResult res = {}; |
29 | excret); | 31 | |
30 | } | 32 | - ret = get_phys_addr(env, value, access_type, mmu_idx, &res, &fi); |
31 | 33 | + ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx, | |
32 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { | 34 | + is_secure, &res, &fi); |
33 | + /* EXC_RETURN.ES validation check (R_SMFL). We must do this before | 35 | |
34 | + * we pick which FAULTMASK to clear. | 36 | /* |
35 | + */ | 37 | * ATS operations only do S1 or S1+S2 translations, so we never |
36 | + if (!env->v7m.secure && | 38 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) |
37 | + ((excret & R_V7M_EXCRET_ES_MASK) || | 39 | switch (el) { |
38 | + !(excret & R_V7M_EXCRET_DCRS_MASK))) { | 40 | case 3: |
39 | + sfault = 1; | 41 | mmu_idx = ARMMMUIdx_SE3; |
40 | + /* For all other purposes, treat ES as 0 (R_HXSR) */ | 42 | + secure = true; |
41 | + excret &= ~R_V7M_EXCRET_ES_MASK; | 43 | break; |
42 | + } | 44 | case 2: |
43 | + } | 45 | g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ |
44 | + | 46 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) |
45 | if (env->v7m.exception != ARMV7M_EXCP_NMI) { | 47 | switch (el) { |
46 | /* Auto-clear FAULTMASK on return from other than NMI. | 48 | case 3: |
47 | * If the security extension is implemented then this only | 49 | mmu_idx = ARMMMUIdx_SE10_0; |
48 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 50 | + secure = true; |
51 | break; | ||
52 | case 2: | ||
53 | g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ | ||
54 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
55 | case 4: | ||
56 | /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */ | ||
57 | mmu_idx = ARMMMUIdx_E10_1; | ||
58 | + secure = false; | ||
59 | break; | ||
60 | case 6: | ||
61 | /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */ | ||
62 | mmu_idx = ARMMMUIdx_E10_0; | ||
63 | + secure = false; | ||
64 | break; | ||
65 | default: | ||
49 | g_assert_not_reached(); | 66 | g_assert_not_reached(); |
50 | } | 67 | } |
51 | 68 | ||
52 | + return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK); | 69 | - par64 = do_ats_write(env, value, access_type, mmu_idx); |
53 | + return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK; | 70 | + par64 = do_ats_write(env, value, access_type, mmu_idx, secure); |
54 | return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && | 71 | |
55 | (excret & R_V7M_EXCRET_S_MASK); | 72 | A32_BANKED_CURRENT_REG_SET(env, par, par64); |
56 | 73 | #else | |
57 | - switch (excret & 0xf) { | 74 | @@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri, |
58 | - case 1: /* Return to Handler */ | 75 | MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD; |
59 | - return_to_handler = true; | 76 | uint64_t par64; |
60 | - break; | 77 | |
61 | - case 13: /* Return to Thread using Process stack */ | 78 | - par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2); |
62 | - return_to_sp_process = true; | 79 | + /* There is no SecureEL2 for AArch32. */ |
63 | - /* fall through */ | 80 | + par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false); |
64 | - case 9: /* Return to Thread using Main stack */ | 81 | |
65 | - if (!rettobase && | 82 | A32_BANKED_CURRENT_REG_SET(env, par, par64); |
66 | - !(env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_NONBASETHRDENA_MASK)) { | 83 | #else |
67 | + if (arm_feature(env, ARM_FEATURE_V8)) { | 84 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, |
68 | + if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) { | 85 | break; |
69 | + /* UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP); | 86 | case 6: /* AT S1E3R, AT S1E3W */ |
70 | + * we choose to take the UsageFault. | 87 | mmu_idx = ARMMMUIdx_SE3; |
71 | + */ | 88 | + secure = true; |
72 | + if ((excret & R_V7M_EXCRET_S_MASK) || | 89 | break; |
73 | + (excret & R_V7M_EXCRET_ES_MASK) || | 90 | default: |
74 | + !(excret & R_V7M_EXCRET_DCRS_MASK)) { | 91 | g_assert_not_reached(); |
75 | + ufault = true; | 92 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, |
76 | + } | 93 | g_assert_not_reached(); |
77 | + } | ||
78 | + if (excret & R_V7M_EXCRET_RES0_MASK) { | ||
79 | ufault = true; | ||
80 | } | ||
81 | - break; | ||
82 | - default: | ||
83 | - ufault = true; | ||
84 | + } else { | ||
85 | + /* For v7M we only recognize certain combinations of the low bits */ | ||
86 | + switch (excret & 0xf) { | ||
87 | + case 1: /* Return to Handler */ | ||
88 | + break; | ||
89 | + case 13: /* Return to Thread using Process stack */ | ||
90 | + case 9: /* Return to Thread using Main stack */ | ||
91 | + /* We only need to check NONBASETHRDENA for v7M, because in | ||
92 | + * v8M this bit does not exist (it is RES1). | ||
93 | + */ | ||
94 | + if (!rettobase && | ||
95 | + !(env->v7m.ccr[env->v7m.secure] & | ||
96 | + R_V7M_CCR_NONBASETHRDENA_MASK)) { | ||
97 | + ufault = true; | ||
98 | + } | ||
99 | + break; | ||
100 | + default: | ||
101 | + ufault = true; | ||
102 | + } | ||
103 | + } | ||
104 | + | ||
105 | + if (sfault) { | ||
106 | + env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK; | ||
107 | + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); | ||
108 | + v7m_exception_taken(cpu, excret); | ||
109 | + qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " | ||
110 | + "stackframe: failed EXC_RETURN.ES validity check\n"); | ||
111 | + return; | ||
112 | } | 94 | } |
113 | 95 | ||
114 | if (ufault) { | 96 | - env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx); |
97 | + env->cp15.par_el[1] = do_ats_write(env, value, access_type, | ||
98 | + mmu_idx, secure); | ||
99 | #else | ||
100 | /* Handled by hardware accelerator. */ | ||
101 | g_assert_not_reached(); | ||
115 | -- | 102 | -- |
116 | 2.7.4 | 103 | 2.25.1 |
117 | |||
118 | diff view generated by jsdifflib |
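
The v7M arm of the rewritten EXC_RETURN check above accepts exactly three low-nibble encodings; everything else takes the ufault path. Below is a stand-alone sketch of that decode, assuming a hypothetical helper (in the v8M arm these bits are instead covered by the RES0/RES1 and S/ES/DCRS field checks):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical decoder for the v7M EXC_RETURN values accepted above. */
    static bool v7m_excret_decode(uint32_t excret,
                                  bool *to_handler, bool *use_process_sp)
    {
        switch (excret & 0xf) {
        case 1:  /* return to Handler mode, Main stack */
            *to_handler = true;  *use_process_sp = false; return true;
        case 9:  /* return to Thread mode, Main stack */
            *to_handler = false; *use_process_sp = false; return true;
        case 13: /* return to Thread mode, Process stack */
            *to_handler = false; *use_process_sp = true;  return true;
        default: /* not recognised: QEMU raises a UsageFault */
            return false;
        }
    }
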
1 | Now that we can handle the CONTROL.SPSEL bit not necessarily being | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | in sync with the current stack pointer, we can restore the correct | ||
3 | security state on exception return. This happens before we start | ||
4 | to read registers off the stack frame, but after we have taken | 3 | For A-profile AArch64, which does not bank system registers, it takes |
5 | possible usage faults for bad exception return magic values and | ||
6 | updated CONTROL.SPSEL. | ||
7 | 2 | ||
3 | For a-profile aarch64, which does not bank system registers, it takes | ||
4 | quite a lot of code to switch between security states. In the process, | ||
5 | registers such as TCR_EL{1,2} must be swapped, which in itself requires | ||
6 | the flushing of softmmu tlbs. Therefore it doesn't buy us anything to | ||
7 | separate tlbs by security state. | ||
8 | |||
9 | Retain the distinction between Stage2 and Stage2_S. | ||
10 | |||
11 | This will be important as we implement FEAT_RME, and do not wish to | ||
12 | add a third set of mmu indexes for Realm state. | ||
13 | |||
14 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
15 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
16 | Message-id: 20221001162318.153420-11-richard.henderson@linaro.org | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 17 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
10 | Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org | ||
11 | --- | 18 | --- |
12 | target/arm/helper.c | 2 ++ | 19 | target/arm/cpu-param.h | 2 +- |
13 | 1 file changed, 2 insertions(+) | 20 | target/arm/cpu.h | 72 +++++++------------ |
21 | target/arm/internals.h | 31 +------- | ||
22 | target/arm/helper.c | 144 +++++++++++++------------------------ | ||
23 | target/arm/ptw.c | 25 ++----- | ||
24 | target/arm/translate-a64.c | 8 --- | ||
25 | target/arm/translate.c | 6 +- | ||
26 | 7 files changed, 85 insertions(+), 203 deletions(-) | ||
14 | 27 | ||
28 | diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h | ||
29 | index XXXXXXX..XXXXXXX 100644 | ||
30 | --- a/target/arm/cpu-param.h | ||
31 | +++ b/target/arm/cpu-param.h | ||
32 | @@ -XXX,XX +XXX,XX @@ | ||
33 | # define TARGET_PAGE_BITS_MIN 10 | ||
34 | #endif | ||
35 | |||
36 | -#define NB_MMU_MODES 15 | ||
37 | +#define NB_MMU_MODES 8 | ||
38 | |||
39 | #endif | ||
40 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | ||
41 | index XXXXXXX..XXXXXXX 100644 | ||
42 | --- a/target/arm/cpu.h | ||
43 | +++ b/target/arm/cpu.h | ||
44 | @@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync); | ||
45 | * table over and over. | ||
46 | * 6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access | ||
47 | * Never (PAN) bit within PSTATE. | ||
48 | + * 7. we fold together the secure and non-secure regimes for A-profile, | ||
49 | + * because there are no banked system registers for aarch64, so the | ||
50 | + * process of switching between secure and non-secure is | ||
51 | + * already heavyweight. | ||
52 | * | ||
53 | * This gives us the following list of cases: | ||
54 | * | ||
55 | - * NS EL0 EL1&0 stage 1+2 (aka NS PL0) | ||
56 | - * NS EL1 EL1&0 stage 1+2 (aka NS PL1) | ||
57 | - * NS EL1 EL1&0 stage 1+2 +PAN | ||
58 | - * NS EL0 EL2&0 | ||
59 | - * NS EL2 EL2&0 | ||
60 | - * NS EL2 EL2&0 +PAN | ||
61 | - * NS EL2 (aka NS PL2) | ||
62 | - * S EL0 EL1&0 (aka S PL0) | ||
63 | - * S EL1 EL1&0 (not used if EL3 is 32 bit) | ||
64 | - * S EL1 EL1&0 +PAN | ||
65 | - * S EL3 (aka S PL1) | ||
66 | + * EL0 EL1&0 stage 1+2 (aka NS PL0) | ||
67 | + * EL1 EL1&0 stage 1+2 (aka NS PL1) | ||
68 | + * EL1 EL1&0 stage 1+2 +PAN | ||
69 | + * EL0 EL2&0 | ||
70 | + * EL2 EL2&0 | ||
71 | + * EL2 EL2&0 +PAN | ||
72 | + * EL2 (aka NS PL2) | ||
73 | + * EL3 (aka S PL1) | ||
74 | * | ||
75 | - * for a total of 11 different mmu_idx. | ||
76 | + * for a total of 8 different mmu_idx. | ||
77 | * | ||
78 | * R profile CPUs have an MPU, but can use the same set of MMU indexes | ||
79 | - * as A profile. They only need to distinguish NS EL0 and NS EL1 (and | ||
80 | - * NS EL2 if we ever model a Cortex-R52). | ||
81 | + * as A profile. They only need to distinguish EL0 and EL1 (and | ||
82 | + * EL2 if we ever model a Cortex-R52). | ||
83 | * | ||
84 | * M profile CPUs are rather different as they do not have a true MMU. | ||
85 | * They have the following different MMU indexes: | ||
86 | @@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync); | ||
87 | #define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */ | ||
88 | #define ARM_MMU_IDX_M 0x40 /* M profile */ | ||
89 | |||
90 | -/* Meanings of the bits for A profile mmu idx values */ | ||
91 | -#define ARM_MMU_IDX_A_NS 0x8 | ||
92 | - | ||
93 | /* Meanings of the bits for M profile mmu idx values */ | ||
94 | #define ARM_MMU_IDX_M_PRIV 0x1 | ||
95 | #define ARM_MMU_IDX_M_NEGPRI 0x2 | ||
96 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | ||
97 | /* | ||
98 | * A-profile. | ||
99 | */ | ||
100 | - ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A, | ||
101 | - ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A, | ||
102 | - ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A, | ||
103 | - ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A, | ||
104 | - ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A, | ||
105 | - ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A, | ||
106 | - ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A, | ||
107 | - ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A, | ||
108 | - | ||
109 | - ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS, | ||
110 | - ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS, | ||
111 | - ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS, | ||
112 | - ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS, | ||
113 | - ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS, | ||
114 | - ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS, | ||
115 | - ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS, | ||
116 | + ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A, | ||
117 | + ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A, | ||
118 | + ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A, | ||
119 | + ARMMMUIdx_E20_2 = 3 | ARM_MMU_IDX_A, | ||
120 | + ARMMMUIdx_E10_1_PAN = 4 | ARM_MMU_IDX_A, | ||
121 | + ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A, | ||
122 | + ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A, | ||
123 | + ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A, | ||
124 | |||
125 | /* | ||
126 | * These are not allocated TLBs and are used only for AT system | ||
127 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | ||
128 | ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB, | ||
129 | ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB, | ||
130 | ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB, | ||
131 | - ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB, | ||
132 | - ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB, | ||
133 | - ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB, | ||
134 | /* | ||
135 | * Not allocated a TLB: used only for second stage of an S12 page | ||
136 | * table walk, or for descriptor loads during first stage of an S1 | ||
137 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | ||
138 | * then various TLB flush insns which currently are no-ops or flush | ||
139 | * only stage 1 MMU indexes will need to change to flush stage 2. | ||
140 | */ | ||
141 | - ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB, | ||
142 | - ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB, | ||
143 | + ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB, | ||
144 | + ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB, | ||
145 | |||
146 | /* | ||
147 | * M-profile. | ||
148 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit { | ||
149 | TO_CORE_BIT(E2), | ||
150 | TO_CORE_BIT(E20_2), | ||
151 | TO_CORE_BIT(E20_2_PAN), | ||
152 | - TO_CORE_BIT(SE10_0), | ||
153 | - TO_CORE_BIT(SE20_0), | ||
154 | - TO_CORE_BIT(SE10_1), | ||
155 | - TO_CORE_BIT(SE20_2), | ||
156 | - TO_CORE_BIT(SE10_1_PAN), | ||
157 | - TO_CORE_BIT(SE20_2_PAN), | ||
158 | - TO_CORE_BIT(SE2), | ||
159 | - TO_CORE_BIT(SE3), | ||
160 | + TO_CORE_BIT(E3), | ||
161 | |||
162 | TO_CORE_BIT(MUser), | ||
163 | TO_CORE_BIT(MPriv), | ||
164 | diff --git a/target/arm/internals.h b/target/arm/internals.h | ||
165 | index XXXXXXX..XXXXXXX 100644 | ||
166 | --- a/target/arm/internals.h | ||
167 | +++ b/target/arm/internals.h | ||
168 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx) | ||
169 | case ARMMMUIdx_Stage1_E0: | ||
170 | case ARMMMUIdx_Stage1_E1: | ||
171 | case ARMMMUIdx_Stage1_E1_PAN: | ||
172 | - case ARMMMUIdx_Stage1_SE0: | ||
173 | - case ARMMMUIdx_Stage1_SE1: | ||
174 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
175 | case ARMMMUIdx_E10_0: | ||
176 | case ARMMMUIdx_E10_1: | ||
177 | case ARMMMUIdx_E10_1_PAN: | ||
178 | case ARMMMUIdx_E20_0: | ||
179 | case ARMMMUIdx_E20_2: | ||
180 | case ARMMMUIdx_E20_2_PAN: | ||
181 | - case ARMMMUIdx_SE10_0: | ||
182 | - case ARMMMUIdx_SE10_1: | ||
183 | - case ARMMMUIdx_SE10_1_PAN: | ||
184 | - case ARMMMUIdx_SE20_0: | ||
185 | - case ARMMMUIdx_SE20_2: | ||
186 | - case ARMMMUIdx_SE20_2_PAN: | ||
187 | return true; | ||
188 | default: | ||
189 | return false; | ||
190 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
191 | { | ||
192 | switch (mmu_idx) { | ||
193 | case ARMMMUIdx_Stage1_E1_PAN: | ||
194 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
195 | case ARMMMUIdx_E10_1_PAN: | ||
196 | case ARMMMUIdx_E20_2_PAN: | ||
197 | - case ARMMMUIdx_SE10_1_PAN: | ||
198 | - case ARMMMUIdx_SE20_2_PAN: | ||
199 | return true; | ||
200 | default: | ||
201 | return false; | ||
202 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
203 | static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
204 | { | ||
205 | switch (mmu_idx) { | ||
206 | - case ARMMMUIdx_SE20_0: | ||
207 | - case ARMMMUIdx_SE20_2: | ||
208 | - case ARMMMUIdx_SE20_2_PAN: | ||
209 | case ARMMMUIdx_E20_0: | ||
210 | case ARMMMUIdx_E20_2: | ||
211 | case ARMMMUIdx_E20_2_PAN: | ||
212 | case ARMMMUIdx_Stage2: | ||
213 | case ARMMMUIdx_Stage2_S: | ||
214 | - case ARMMMUIdx_SE2: | ||
215 | case ARMMMUIdx_E2: | ||
216 | return 2; | ||
217 | - case ARMMMUIdx_SE3: | ||
218 | + case ARMMMUIdx_E3: | ||
219 | return 3; | ||
220 | - case ARMMMUIdx_SE10_0: | ||
221 | - case ARMMMUIdx_Stage1_SE0: | ||
222 | - return arm_el_is_aa64(env, 3) ? 1 : 3; | ||
223 | - case ARMMMUIdx_SE10_1: | ||
224 | - case ARMMMUIdx_SE10_1_PAN: | ||
225 | + case ARMMMUIdx_E10_0: | ||
226 | case ARMMMUIdx_Stage1_E0: | ||
227 | + return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3; | ||
228 | case ARMMMUIdx_Stage1_E1: | ||
229 | case ARMMMUIdx_Stage1_E1_PAN: | ||
230 | - case ARMMMUIdx_Stage1_SE1: | ||
231 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
232 | - case ARMMMUIdx_E10_0: | ||
233 | case ARMMMUIdx_E10_1: | ||
234 | case ARMMMUIdx_E10_1_PAN: | ||
235 | case ARMMMUIdx_MPrivNegPri: | ||
236 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx) | ||
237 | case ARMMMUIdx_Stage1_E0: | ||
238 | case ARMMMUIdx_Stage1_E1: | ||
239 | case ARMMMUIdx_Stage1_E1_PAN: | ||
240 | - case ARMMMUIdx_Stage1_SE0: | ||
241 | - case ARMMMUIdx_Stage1_SE1: | ||
242 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
243 | return true; | ||
244 | default: | ||
245 | return false; | ||
15 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 246 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
16 | index XXXXXXX..XXXXXXX 100644 | 247 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/target/arm/helper.c | 248 | --- a/target/arm/helper.c |
18 | +++ b/target/arm/helper.c | 249 | +++ b/target/arm/helper.c |
19 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 250 | @@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) |
251 | /* Begin with base v8.0 state. */ | ||
252 | uint64_t valid_mask = 0x3fff; | ||
253 | ARMCPU *cpu = env_archcpu(env); | ||
254 | + uint64_t changed; | ||
255 | |||
256 | /* | ||
257 | * Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always | ||
258 | @@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
259 | |||
260 | /* Clear all-context RES0 bits. */ | ||
261 | value &= valid_mask; | ||
262 | - raw_write(env, ri, value); | ||
263 | + changed = env->cp15.scr_el3 ^ value; | ||
264 | + env->cp15.scr_el3 = value; | ||
265 | + | ||
266 | + /* | ||
267 | + * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then | ||
268 | + * we must invalidate all TLBs below EL3. | ||
269 | + */ | ||
270 | + if (changed & SCR_NS) { | ||
271 | + tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 | | ||
272 | + ARMMMUIdxBit_E20_0 | | ||
273 | + ARMMMUIdxBit_E10_1 | | ||
274 | + ARMMMUIdxBit_E20_2 | | ||
275 | + ARMMMUIdxBit_E10_1_PAN | | ||
276 | + ARMMMUIdxBit_E20_2_PAN | | ||
277 | + ARMMMUIdxBit_E2)); | ||
278 | + } | ||
279 | } | ||
280 | |||
281 | static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri) | ||
282 | @@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env) | ||
283 | case ARMMMUIdx_E20_0: | ||
284 | case ARMMMUIdx_E20_2: | ||
285 | case ARMMMUIdx_E20_2_PAN: | ||
286 | - case ARMMMUIdx_SE20_0: | ||
287 | - case ARMMMUIdx_SE20_2: | ||
288 | - case ARMMMUIdx_SE20_2_PAN: | ||
289 | return GTIMER_HYP; | ||
290 | default: | ||
291 | return GTIMER_PHYS; | ||
292 | @@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env) | ||
293 | case ARMMMUIdx_E20_0: | ||
294 | case ARMMMUIdx_E20_2: | ||
295 | case ARMMMUIdx_E20_2_PAN: | ||
296 | - case ARMMMUIdx_SE20_0: | ||
297 | - case ARMMMUIdx_SE20_2: | ||
298 | - case ARMMMUIdx_SE20_2_PAN: | ||
299 | return GTIMER_HYPVIRT; | ||
300 | default: | ||
301 | return GTIMER_VIRT; | ||
302 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
303 | /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */ | ||
304 | switch (el) { | ||
305 | case 3: | ||
306 | - mmu_idx = ARMMMUIdx_SE3; | ||
307 | + mmu_idx = ARMMMUIdx_E3; | ||
308 | secure = true; | ||
309 | break; | ||
310 | case 2: | ||
311 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
312 | /* fall through */ | ||
313 | case 1: | ||
314 | if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) { | ||
315 | - mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN | ||
316 | - : ARMMMUIdx_Stage1_E1_PAN); | ||
317 | + mmu_idx = ARMMMUIdx_Stage1_E1_PAN; | ||
318 | } else { | ||
319 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; | ||
320 | + mmu_idx = ARMMMUIdx_Stage1_E1; | ||
321 | } | ||
322 | break; | ||
323 | default: | ||
324 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
325 | /* stage 1 current state PL0: ATS1CUR, ATS1CUW */ | ||
326 | switch (el) { | ||
327 | case 3: | ||
328 | - mmu_idx = ARMMMUIdx_SE10_0; | ||
329 | + mmu_idx = ARMMMUIdx_E10_0; | ||
330 | secure = true; | ||
331 | break; | ||
332 | case 2: | ||
333 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
334 | mmu_idx = ARMMMUIdx_Stage1_E0; | ||
335 | break; | ||
336 | case 1: | ||
337 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; | ||
338 | + mmu_idx = ARMMMUIdx_Stage1_E0; | ||
339 | break; | ||
340 | default: | ||
341 | g_assert_not_reached(); | ||
342 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, | ||
343 | switch (ri->opc1) { | ||
344 | case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */ | ||
345 | if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) { | ||
346 | - mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN | ||
347 | - : ARMMMUIdx_Stage1_E1_PAN); | ||
348 | + mmu_idx = ARMMMUIdx_Stage1_E1_PAN; | ||
349 | } else { | ||
350 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; | ||
351 | + mmu_idx = ARMMMUIdx_Stage1_E1; | ||
352 | } | ||
353 | break; | ||
354 | case 4: /* AT S1E2R, AT S1E2W */ | ||
355 | - mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2; | ||
356 | + mmu_idx = ARMMMUIdx_E2; | ||
357 | break; | ||
358 | case 6: /* AT S1E3R, AT S1E3W */ | ||
359 | - mmu_idx = ARMMMUIdx_SE3; | ||
360 | + mmu_idx = ARMMMUIdx_E3; | ||
361 | secure = true; | ||
362 | break; | ||
363 | default: | ||
364 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, | ||
365 | } | ||
366 | break; | ||
367 | case 2: /* AT S1E0R, AT S1E0W */ | ||
368 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; | ||
369 | + mmu_idx = ARMMMUIdx_Stage1_E0; | ||
370 | break; | ||
371 | case 4: /* AT S12E1R, AT S12E1W */ | ||
372 | - mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1; | ||
373 | + mmu_idx = ARMMMUIdx_E10_1; | ||
374 | break; | ||
375 | case 6: /* AT S12E0R, AT S12E0W */ | ||
376 | - mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0; | ||
377 | + mmu_idx = ARMMMUIdx_E10_0; | ||
378 | break; | ||
379 | default: | ||
380 | g_assert_not_reached(); | ||
381 | @@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
382 | uint16_t mask = ARMMMUIdxBit_E20_2 | | ||
383 | ARMMMUIdxBit_E20_2_PAN | | ||
384 | ARMMMUIdxBit_E20_0; | ||
385 | - | ||
386 | - if (arm_is_secure_below_el3(env)) { | ||
387 | - mask >>= ARM_MMU_IDX_A_NS; | ||
388 | - } | ||
389 | - | ||
390 | tlb_flush_by_mmuidx(env_cpu(env), mask); | ||
391 | } | ||
392 | raw_write(env, ri, value); | ||
393 | @@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
394 | uint16_t mask = ARMMMUIdxBit_E10_1 | | ||
395 | ARMMMUIdxBit_E10_1_PAN | | ||
396 | ARMMMUIdxBit_E10_0; | ||
397 | - | ||
398 | - if (arm_is_secure_below_el3(env)) { | ||
399 | - mask >>= ARM_MMU_IDX_A_NS; | ||
400 | - } | ||
401 | - | ||
402 | tlb_flush_by_mmuidx(cs, mask); | ||
403 | raw_write(env, ri, value); | ||
404 | } | ||
405 | @@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env) | ||
406 | ARMMMUIdxBit_E10_1_PAN | | ||
407 | ARMMMUIdxBit_E10_0; | ||
408 | } | ||
409 | - | ||
410 | - if (arm_is_secure_below_el3(env)) { | ||
411 | - mask >>= ARM_MMU_IDX_A_NS; | ||
412 | - } | ||
413 | - | ||
414 | return mask; | ||
415 | } | ||
416 | |||
417 | @@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr) | ||
418 | mmu_idx = ARMMMUIdx_E10_0; | ||
419 | } | ||
420 | |||
421 | - if (arm_is_secure_below_el3(env)) { | ||
422 | - mmu_idx &= ~ARM_MMU_IDX_A_NS; | ||
423 | - } | ||
424 | - | ||
425 | return tlbbits_for_regime(env, mmu_idx, addr); | ||
426 | } | ||
427 | |||
428 | @@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env) | ||
429 | * stage 2 translations, whereas most other scopes only invalidate | ||
430 | * stage 1 translations. | ||
20 | */ | 431 | */ |
21 | write_v7m_control_spsel(env, return_to_sp_process); | 432 | - if (arm_is_secure_below_el3(env)) { |
22 | 433 | - return ARMMMUIdxBit_SE10_1 | | |
23 | + switch_v7m_security_state(env, return_to_secure); | 434 | - ARMMMUIdxBit_SE10_1_PAN | |
24 | + | 435 | - ARMMMUIdxBit_SE10_0; |
25 | { | 436 | - } else { |
26 | /* The stack pointer we should be reading the exception frame from | 437 | - return ARMMMUIdxBit_E10_1 | |
27 | * depends on bits in the magic exception return type value (and | 438 | - ARMMMUIdxBit_E10_1_PAN | |
439 | - ARMMMUIdxBit_E10_0; | ||
440 | - } | ||
441 | + return (ARMMMUIdxBit_E10_1 | | ||
442 | + ARMMMUIdxBit_E10_1_PAN | | ||
443 | + ARMMMUIdxBit_E10_0); | ||
444 | } | ||
445 | |||
446 | static int e2_tlbmask(CPUARMState *env) | ||
447 | { | ||
448 | - if (arm_is_secure_below_el3(env)) { | ||
449 | - return ARMMMUIdxBit_SE20_0 | | ||
450 | - ARMMMUIdxBit_SE20_2 | | ||
451 | - ARMMMUIdxBit_SE20_2_PAN | | ||
452 | - ARMMMUIdxBit_SE2; | ||
453 | - } else { | ||
454 | - return ARMMMUIdxBit_E20_0 | | ||
455 | - ARMMMUIdxBit_E20_2 | | ||
456 | - ARMMMUIdxBit_E20_2_PAN | | ||
457 | - ARMMMUIdxBit_E2; | ||
458 | - } | ||
459 | + return (ARMMMUIdxBit_E20_0 | | ||
460 | + ARMMMUIdxBit_E20_2 | | ||
461 | + ARMMMUIdxBit_E20_2_PAN | | ||
462 | + ARMMMUIdxBit_E2); | ||
463 | } | ||
464 | |||
465 | static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
466 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
467 | ARMCPU *cpu = env_archcpu(env); | ||
468 | CPUState *cs = CPU(cpu); | ||
469 | |||
470 | - tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3); | ||
471 | + tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3); | ||
472 | } | ||
473 | |||
474 | static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
475 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
476 | { | ||
477 | CPUState *cs = env_cpu(env); | ||
478 | |||
479 | - tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3); | ||
480 | + tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3); | ||
481 | } | ||
482 | |||
483 | static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
484 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
485 | CPUState *cs = CPU(cpu); | ||
486 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
487 | |||
488 | - tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3); | ||
489 | + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3); | ||
490 | } | ||
491 | |||
492 | static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
493 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
494 | { | ||
495 | CPUState *cs = env_cpu(env); | ||
496 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
497 | - bool secure = arm_is_secure_below_el3(env); | ||
498 | - int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2; | ||
499 | - int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2, | ||
500 | - pageaddr); | ||
501 | + int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr); | ||
502 | |||
503 | - tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); | ||
504 | + tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
505 | + ARMMMUIdxBit_E2, bits); | ||
506 | } | ||
507 | |||
508 | static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
509 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
510 | { | ||
511 | CPUState *cs = env_cpu(env); | ||
512 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
513 | - int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr); | ||
514 | + int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr); | ||
515 | |||
516 | tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
517 | - ARMMMUIdxBit_SE3, bits); | ||
518 | + ARMMMUIdxBit_E3, bits); | ||
519 | } | ||
520 | |||
521 | #ifdef TARGET_AARCH64 | ||
522 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env, | ||
523 | |||
524 | static int vae2_tlbmask(CPUARMState *env) | ||
525 | { | ||
526 | - return (arm_is_secure_below_el3(env) | ||
527 | - ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2); | ||
528 | + return ARMMMUIdxBit_E2; | ||
529 | } | ||
530 | |||
531 | static void tlbi_aa64_rvae2_write(CPUARMState *env, | ||
532 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3_write(CPUARMState *env, | ||
533 | * flush-last-level-only. | ||
534 | */ | ||
535 | |||
536 | - do_rvae_write(env, value, ARMMMUIdxBit_SE3, | ||
537 | - tlb_force_broadcast(env)); | ||
538 | + do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env)); | ||
539 | } | ||
540 | |||
541 | static void tlbi_aa64_rvae3is_write(CPUARMState *env, | ||
542 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env, | ||
543 | * flush-last-level-only or inner/outer specific flushes. | ||
544 | */ | ||
545 | |||
546 | - do_rvae_write(env, value, ARMMMUIdxBit_SE3, true); | ||
547 | + do_rvae_write(env, value, ARMMMUIdxBit_E3, true); | ||
548 | } | ||
549 | #endif | ||
550 | |||
551 | @@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el) | ||
552 | /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */ | ||
553 | if (el == 0) { | ||
554 | ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0); | ||
555 | - el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0) | ||
556 | - ? 2 : 1; | ||
557 | + el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1; | ||
558 | } | ||
559 | return env->cp15.sctlr_el[el]; | ||
560 | } | ||
561 | @@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) | ||
562 | switch (mmu_idx) { | ||
563 | case ARMMMUIdx_E10_0: | ||
564 | case ARMMMUIdx_E20_0: | ||
565 | - case ARMMMUIdx_SE10_0: | ||
566 | - case ARMMMUIdx_SE20_0: | ||
567 | return 0; | ||
568 | case ARMMMUIdx_E10_1: | ||
569 | case ARMMMUIdx_E10_1_PAN: | ||
570 | - case ARMMMUIdx_SE10_1: | ||
571 | - case ARMMMUIdx_SE10_1_PAN: | ||
572 | return 1; | ||
573 | case ARMMMUIdx_E2: | ||
574 | case ARMMMUIdx_E20_2: | ||
575 | case ARMMMUIdx_E20_2_PAN: | ||
576 | - case ARMMMUIdx_SE2: | ||
577 | - case ARMMMUIdx_SE20_2: | ||
578 | - case ARMMMUIdx_SE20_2_PAN: | ||
579 | return 2; | ||
580 | - case ARMMMUIdx_SE3: | ||
581 | + case ARMMMUIdx_E3: | ||
582 | return 3; | ||
583 | default: | ||
584 | g_assert_not_reached(); | ||
585 | @@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el) | ||
586 | } | ||
587 | break; | ||
588 | case 3: | ||
589 | - return ARMMMUIdx_SE3; | ||
590 | + return ARMMMUIdx_E3; | ||
591 | default: | ||
592 | g_assert_not_reached(); | ||
593 | } | ||
594 | |||
595 | - if (arm_is_secure_below_el3(env)) { | ||
596 | - idx &= ~ARM_MMU_IDX_A_NS; | ||
597 | - } | ||
598 | - | ||
599 | return idx; | ||
600 | } | ||
601 | |||
602 | @@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el, | ||
603 | switch (mmu_idx) { | ||
604 | case ARMMMUIdx_E10_1: | ||
605 | case ARMMMUIdx_E10_1_PAN: | ||
606 | - case ARMMMUIdx_SE10_1: | ||
607 | - case ARMMMUIdx_SE10_1_PAN: | ||
608 | /* TODO: ARMv8.3-NV */ | ||
609 | DP_TBFLAG_A64(flags, UNPRIV, 1); | ||
610 | break; | ||
611 | case ARMMMUIdx_E20_2: | ||
612 | case ARMMMUIdx_E20_2_PAN: | ||
613 | - case ARMMMUIdx_SE20_2: | ||
614 | - case ARMMMUIdx_SE20_2_PAN: | ||
615 | /* | ||
616 | * Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is | ||
617 | * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR. | ||
618 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
619 | index XXXXXXX..XXXXXXX 100644 | ||
620 | --- a/target/arm/ptw.c | ||
621 | +++ b/target/arm/ptw.c | ||
622 | @@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu) | ||
623 | ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx) | ||
624 | { | ||
625 | switch (mmu_idx) { | ||
626 | - case ARMMMUIdx_SE10_0: | ||
627 | - return ARMMMUIdx_Stage1_SE0; | ||
628 | - case ARMMMUIdx_SE10_1: | ||
629 | - return ARMMMUIdx_Stage1_SE1; | ||
630 | - case ARMMMUIdx_SE10_1_PAN: | ||
631 | - return ARMMMUIdx_Stage1_SE1_PAN; | ||
632 | case ARMMMUIdx_E10_0: | ||
633 | return ARMMMUIdx_Stage1_E0; | ||
634 | case ARMMMUIdx_E10_1: | ||
635 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
636 | static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
637 | { | ||
638 | switch (mmu_idx) { | ||
639 | - case ARMMMUIdx_SE10_0: | ||
640 | case ARMMMUIdx_E20_0: | ||
641 | - case ARMMMUIdx_SE20_0: | ||
642 | case ARMMMUIdx_Stage1_E0: | ||
643 | - case ARMMMUIdx_Stage1_SE0: | ||
644 | case ARMMMUIdx_MUser: | ||
645 | case ARMMMUIdx_MSUser: | ||
646 | case ARMMMUIdx_MUserNegPri: | ||
647 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
648 | |||
649 | s2_mmu_idx = (s2walk_secure | ||
650 | ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2); | ||
651 | - is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0; | ||
652 | + is_el0 = mmu_idx == ARMMMUIdx_E10_0; | ||
653 | |||
654 | /* | ||
655 | * S1 is done, now do S2 translation. | ||
656 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
657 | case ARMMMUIdx_Stage1_E1: | ||
658 | case ARMMMUIdx_Stage1_E1_PAN: | ||
659 | case ARMMMUIdx_E2: | ||
660 | + is_secure = arm_is_secure_below_el3(env); | ||
661 | + break; | ||
662 | case ARMMMUIdx_Stage2: | ||
663 | case ARMMMUIdx_MPrivNegPri: | ||
664 | case ARMMMUIdx_MUserNegPri: | ||
665 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
666 | case ARMMMUIdx_MUser: | ||
667 | is_secure = false; | ||
668 | break; | ||
669 | - case ARMMMUIdx_SE3: | ||
670 | - case ARMMMUIdx_SE10_0: | ||
671 | - case ARMMMUIdx_SE10_1: | ||
672 | - case ARMMMUIdx_SE10_1_PAN: | ||
673 | - case ARMMMUIdx_SE20_0: | ||
674 | - case ARMMMUIdx_SE20_2: | ||
675 | - case ARMMMUIdx_SE20_2_PAN: | ||
676 | - case ARMMMUIdx_Stage1_SE0: | ||
677 | - case ARMMMUIdx_Stage1_SE1: | ||
678 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
679 | - case ARMMMUIdx_SE2: | ||
680 | + case ARMMMUIdx_E3: | ||
681 | case ARMMMUIdx_Stage2_S: | ||
682 | case ARMMMUIdx_MSPrivNegPri: | ||
683 | case ARMMMUIdx_MSUserNegPri: | ||
684 | diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c | ||
685 | index XXXXXXX..XXXXXXX 100644 | ||
686 | --- a/target/arm/translate-a64.c | ||
687 | +++ b/target/arm/translate-a64.c | ||
688 | @@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s) | ||
689 | case ARMMMUIdx_E20_2_PAN: | ||
690 | useridx = ARMMMUIdx_E20_0; | ||
691 | break; | ||
692 | - case ARMMMUIdx_SE10_1: | ||
693 | - case ARMMMUIdx_SE10_1_PAN: | ||
694 | - useridx = ARMMMUIdx_SE10_0; | ||
695 | - break; | ||
696 | - case ARMMMUIdx_SE20_2: | ||
697 | - case ARMMMUIdx_SE20_2_PAN: | ||
698 | - useridx = ARMMMUIdx_SE20_0; | ||
699 | - break; | ||
700 | default: | ||
701 | g_assert_not_reached(); | ||
702 | } | ||
703 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
704 | index XXXXXXX..XXXXXXX 100644 | ||
705 | --- a/target/arm/translate.c | ||
706 | +++ b/target/arm/translate.c | ||
707 | @@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s) | ||
708 | * otherwise, access as if at PL0. | ||
709 | */ | ||
710 | switch (s->mmu_idx) { | ||
711 | + case ARMMMUIdx_E3: | ||
712 | case ARMMMUIdx_E2: /* this one is UNPREDICTABLE */ | ||
713 | case ARMMMUIdx_E10_0: | ||
714 | case ARMMMUIdx_E10_1: | ||
715 | case ARMMMUIdx_E10_1_PAN: | ||
716 | return arm_to_core_mmu_idx(ARMMMUIdx_E10_0); | ||
717 | - case ARMMMUIdx_SE3: | ||
718 | - case ARMMMUIdx_SE10_0: | ||
719 | - case ARMMMUIdx_SE10_1: | ||
720 | - case ARMMMUIdx_SE10_1_PAN: | ||
721 | - return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0); | ||
722 | case ARMMMUIdx_MUser: | ||
723 | case ARMMMUIdx_MPriv: | ||
724 | return arm_to_core_mmu_idx(ARMMMUIdx_MUser); | ||
28 | -- | 725 | -- |
29 | 2.7.4 | 726 | 2.25.1 |
30 | |||
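The shape of this series is easiest to see in helpers like vae2_tlbmask() above: once the parallel ARMMMUIdx_SE* encodings are gone, the index no longer has to be re-selected against the current security state at each call site. A compressed sketch of the pattern, with bit values and names invented for illustration (not the real QEMU definitions):

    #include <stdbool.h>

    enum { IDXBIT_E2 = 1 << 0, IDXBIT_SE2 = 1 << 1 }; /* illustrative values */

    /* Before the series: two parallel index bits, re-selected against
     * the current security state at every call site. */
    static int tlbmask_old(bool secure)
    {
        return secure ? IDXBIT_SE2 : IDXBIT_E2;
    }

    /* After: one index bit; security travels with the walk instead of
     * being encoded into the index. */
    static int tlbmask_new(void)
    {
        return IDXBIT_E2;
    }

Security becomes a property of the individual page-table walk, carried alongside the index, which is what the later patches in the series build on.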
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | Use a switch on mmu_idx for the a-profile indexes, instead of | ||
4 | three different if's vs regime_el and arm_mmu_idx_is_stage1_of_2. | ||
5 | |||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20221001162318.153420-12-richard.henderson@linaro.org | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | --- | ||
11 | target/arm/ptw.c | 32 +++++++++++++++++++++++++------- | ||
12 | 1 file changed, 25 insertions(+), 7 deletions(-) | ||
13 | |||
14 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
15 | index XXXXXXX..XXXXXXX 100644 | ||
16 | --- a/target/arm/ptw.c | ||
17 | +++ b/target/arm/ptw.c | ||
18 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
19 | |||
20 | hcr_el2 = arm_hcr_el2_eff(env); | ||
21 | |||
22 | - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { | ||
23 | + switch (mmu_idx) { | ||
24 | + case ARMMMUIdx_Stage2: | ||
25 | + case ARMMMUIdx_Stage2_S: | ||
26 | /* HCR.DC means HCR.VM behaves as 1 */ | ||
27 | return (hcr_el2 & (HCR_DC | HCR_VM)) == 0; | ||
28 | - } | ||
29 | |||
30 | - if (hcr_el2 & HCR_TGE) { | ||
31 | + case ARMMMUIdx_E10_0: | ||
32 | + case ARMMMUIdx_E10_1: | ||
33 | + case ARMMMUIdx_E10_1_PAN: | ||
34 | /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */ | ||
35 | - if (!is_secure && regime_el(env, mmu_idx) == 1) { | ||
36 | + if (!is_secure && (hcr_el2 & HCR_TGE)) { | ||
37 | return true; | ||
38 | } | ||
39 | - } | ||
40 | + break; | ||
41 | |||
42 | - if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) { | ||
43 | + case ARMMMUIdx_Stage1_E0: | ||
44 | + case ARMMMUIdx_Stage1_E1: | ||
45 | + case ARMMMUIdx_Stage1_E1_PAN: | ||
46 | /* HCR.DC means SCTLR_EL1.M behaves as 0 */ | ||
47 | - return true; | ||
48 | + if (hcr_el2 & HCR_DC) { | ||
49 | + return true; | ||
50 | + } | ||
51 | + break; | ||
52 | + | ||
53 | + case ARMMMUIdx_E20_0: | ||
54 | + case ARMMMUIdx_E20_2: | ||
55 | + case ARMMMUIdx_E20_2_PAN: | ||
56 | + case ARMMMUIdx_E2: | ||
57 | + case ARMMMUIdx_E3: | ||
58 | + break; | ||
59 | + | ||
60 | + default: | ||
61 | + g_assert_not_reached(); | ||
62 | } | ||
63 | |||
64 | return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0; | ||
65 | -- | ||
66 | 2.25.1 |
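The switch form also makes the dispatch easy to model in isolation. A self-contained sketch of the logic as it stands after this patch, using simplified regime names and architectural bit positions rather than QEMU's real types:

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_VM   (1ull << 0)    /* architectural bit positions */
    #define HCR_DC   (1ull << 12)
    #define HCR_TGE  (1ull << 27)
    #define SCTLR_M  (1ull << 0)

    enum Regime { R_STAGE2, R_EL10, R_STAGE1, R_EL2_EL3 };

    static bool translation_disabled(enum Regime r, uint64_t hcr,
                                     uint64_t sctlr, bool is_secure)
    {
        switch (r) {
        case R_STAGE2:
            /* HCR.DC means HCR.VM behaves as 1 */
            return (hcr & (HCR_DC | HCR_VM)) == 0;
        case R_EL10:
            /* TGE means NS EL0/1 act as if SCTLR_EL1.M is zero */
            if (!is_secure && (hcr & HCR_TGE)) {
                return true;
            }
            break;
        case R_STAGE1:
            /* HCR.DC means SCTLR_EL1.M behaves as 0 */
            if (hcr & HCR_DC) {
                return true;
            }
            break;
        case R_EL2_EL3:
            break;
        }
        return (sctlr & SCTLR_M) == 0;
    }

With every a-profile regime named in a case, the real function's unreachable default can assert instead of silently falling through.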
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | Now that Secure EL2 exists, the effect of TGE is no longer | ||
4 | limited to non-secure state. | ||
5 | |||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20221001162318.153420-13-richard.henderson@linaro.org | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | --- | ||
11 | target/arm/ptw.c | 4 ++-- | ||
12 | 1 file changed, 2 insertions(+), 2 deletions(-) | ||
13 | |||
14 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
15 | index XXXXXXX..XXXXXXX 100644 | ||
16 | --- a/target/arm/ptw.c | ||
17 | +++ b/target/arm/ptw.c | ||
18 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
19 | case ARMMMUIdx_E10_0: | ||
20 | case ARMMMUIdx_E10_1: | ||
21 | case ARMMMUIdx_E10_1_PAN: | ||
22 | - /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */ | ||
23 | - if (!is_secure && (hcr_el2 & HCR_TGE)) { | ||
24 | + /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */ | ||
25 | + if (hcr_el2 & HCR_TGE) { | ||
26 | return true; | ||
27 | } | ||
28 | break; | ||
29 | -- | ||
30 | 2.25.1 |
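The corrected condition reduces to a one-line predicate. A minimal sketch, assuming hcr already holds the effective HCR_EL2 value (zero when EL2 is not enabled in the relevant security state):

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_TGE (1ull << 27)  /* architectural bit position */

    /* hcr must be the *effective* HCR_EL2 value, which is zero when
     * EL2 is not enabled in the relevant security state. */
    static bool stage1_forced_off_by_tge(uint64_t hcr)
    {
        return (hcr & HCR_TGE) != 0;  /* no longer qualified by !is_secure */
    }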
1 | In the v7M architecture, there is an invariant that if the CPU is | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | in Handler mode then the CONTROL.SPSEL bit cannot be nonzero. | ||
3 | This in turn means that the current stack pointer is always | ||
4 | indicated by CONTROL.SPSEL, even though Handler mode always uses | ||
5 | the Main stack pointer. | ||
6 | 2 | ||
7 | In v8M, this invariant is removed, and CONTROL.SPSEL may now | 3 | For page walking, we may require HCR for a security state |
8 | be nonzero in Handler mode (though Handler mode still always | 4 | that is not "current". |
9 | uses the Main stack pointer). In preparation for this change, | ||
10 | change how we handle this bit: rename switch_v7m_sp() to | ||
11 | the now more accurate write_v7m_control_spsel(), and make it | ||
12 | check both the handler mode state and the SPSEL bit. | ||
13 | 5 | ||
14 | Note that this implicitly changes the point at which we switch | 6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
15 | active SP on exception exit from before we pop the exception | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
16 | frame to after it. | 8 | Message-id: 20221001162318.153420-14-richard.henderson@linaro.org |
17 | |||
18 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
19 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
20 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
21 | Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org | ||
22 | --- | 10 | --- |
23 | target/arm/cpu.h | 8 ++++++- | 11 | target/arm/cpu.h | 20 +++++++++++++------- |
24 | hw/intc/armv7m_nvic.c | 2 +- | 12 | target/arm/helper.c | 11 ++++++++--- |
25 | target/arm/helper.c | 65 ++++++++++++++++++++++++++++++++++----------------- | 13 | 2 files changed, 21 insertions(+), 10 deletions(-) |
26 | 3 files changed, 51 insertions(+), 24 deletions(-) | ||
27 | 14 | ||
28 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 15 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
29 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
30 | --- a/target/arm/cpu.h | 17 | --- a/target/arm/cpu.h |
31 | +++ b/target/arm/cpu.h | 18 | +++ b/target/arm/cpu.h |
32 | @@ -XXX,XX +XXX,XX @@ void pmccntr_sync(CPUARMState *env); | 19 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env) |
33 | #define PSTATE_MODE_EL1t 4 | 20 | * Return true if the current security state has AArch64 EL2 or AArch32 Hyp. |
34 | #define PSTATE_MODE_EL0t 0 | 21 | * This corresponds to the pseudocode EL2Enabled() |
35 | 22 | */ | |
36 | +/* Write a new value to v7m.exception, thus transitioning into or out | 23 | +static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure) |
37 | + * of Handler mode; this may result in a change of active stack pointer. | 24 | +{ |
38 | + */ | 25 | + return arm_feature(env, ARM_FEATURE_EL2) |
39 | +void write_v7m_exception(CPUARMState *env, uint32_t new_exc); | 26 | + && (!secure || (env->cp15.scr_el3 & SCR_EEL2)); |
27 | +} | ||
40 | + | 28 | + |
41 | /* Map EL and handler into a PSTATE_MODE. */ | 29 | static inline bool arm_is_el2_enabled(CPUARMState *env) |
42 | static inline unsigned int aarch64_pstate_mode(unsigned int el, bool handler) | ||
43 | { | 30 | { |
44 | @@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask) | 31 | - if (arm_feature(env, ARM_FEATURE_EL2)) { |
45 | env->condexec_bits |= (val >> 8) & 0xfc; | 32 | - if (arm_is_secure_below_el3(env)) { |
46 | } | 33 | - return (env->cp15.scr_el3 & SCR_EEL2) != 0; |
47 | if (mask & XPSR_EXCP) { | 34 | - } |
48 | - env->v7m.exception = val & XPSR_EXCP; | 35 | - return true; |
49 | + /* Note that this only happens on exception exit */ | 36 | - } |
50 | + write_v7m_exception(env, val & XPSR_EXCP); | 37 | - return false; |
51 | } | 38 | + return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env)); |
52 | } | 39 | } |
53 | 40 | ||
54 | diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c | 41 | #else |
55 | index XXXXXXX..XXXXXXX 100644 | 42 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env) |
56 | --- a/hw/intc/armv7m_nvic.c | 43 | return false; |
57 | +++ b/hw/intc/armv7m_nvic.c | 44 | } |
58 | @@ -XXX,XX +XXX,XX @@ bool armv7m_nvic_acknowledge_irq(void *opaque) | 45 | |
59 | vec->active = 1; | 46 | +static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure) |
60 | vec->pending = 0; | 47 | +{ |
61 | 48 | + return false; | |
62 | - env->v7m.exception = s->vectpending; | 49 | +} |
63 | + write_v7m_exception(env, s->vectpending); | 50 | + |
64 | 51 | static inline bool arm_is_el2_enabled(CPUARMState *env) | |
65 | nvic_irq_update(s); | 52 | { |
53 | return false; | ||
54 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env) | ||
55 | * "for all purposes other than a direct read or write access of HCR_EL2." | ||
56 | * Not included here is HCR_RW. | ||
57 | */ | ||
58 | +uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure); | ||
59 | uint64_t arm_hcr_el2_eff(CPUARMState *env); | ||
60 | uint64_t arm_hcrx_el2_eff(CPUARMState *env); | ||
66 | 61 | ||
67 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 62 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
68 | index XXXXXXX..XXXXXXX 100644 | 63 | index XXXXXXX..XXXXXXX 100644 |
69 | --- a/target/arm/helper.c | 64 | --- a/target/arm/helper.c |
70 | +++ b/target/arm/helper.c | 65 | +++ b/target/arm/helper.c |
71 | @@ -XXX,XX +XXX,XX @@ static bool v7m_using_psp(CPUARMState *env) | 66 | @@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri, |
72 | env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK; | ||
73 | } | 67 | } |
74 | 68 | ||
75 | -/* Switch to V7M main or process stack pointer. */ | 69 | /* |
76 | -static void switch_v7m_sp(CPUARMState *env, bool new_spsel) | 70 | - * Return the effective value of HCR_EL2. |
77 | +/* Write to v7M CONTROL.SPSEL bit. This may change the current | 71 | + * Return the effective value of HCR_EL2, at the given security state. |
78 | + * stack pointer between Main and Process stack pointers. | 72 | * Bits that are not included here: |
79 | + */ | 73 | * RW (read from SCR_EL3.RW as needed) |
80 | +static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel) | 74 | */ |
75 | -uint64_t arm_hcr_el2_eff(CPUARMState *env) | ||
76 | +uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure) | ||
81 | { | 77 | { |
82 | uint32_t tmp; | 78 | uint64_t ret = env->cp15.hcr_el2; |
83 | - uint32_t old_control = env->v7m.control[env->v7m.secure]; | 79 | |
84 | - bool old_spsel = old_control & R_V7M_CONTROL_SPSEL_MASK; | 80 | - if (!arm_is_el2_enabled(env)) { |
85 | + bool new_is_psp, old_is_psp = v7m_using_psp(env); | 81 | + if (!arm_is_el2_enabled_secstate(env, secure)) { |
86 | + | 82 | /* |
87 | + env->v7m.control[env->v7m.secure] = | 83 | * "This register has no effect if EL2 is not enabled in the |
88 | + deposit32(env->v7m.control[env->v7m.secure], | 84 | * current Security state". This is ARMv8.4-SecEL2 speak for |
89 | + R_V7M_CONTROL_SPSEL_SHIFT, | 85 | @@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env) |
90 | + R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); | 86 | return ret; |
91 | + | 87 | } |
92 | + new_is_psp = v7m_using_psp(env); | 88 | |
93 | 89 | +uint64_t arm_hcr_el2_eff(CPUARMState *env) | |
94 | - if (old_spsel != new_spsel) { | 90 | +{ |
95 | + if (old_is_psp != new_is_psp) { | 91 | + return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env)); |
96 | tmp = env->v7m.other_sp; | ||
97 | env->v7m.other_sp = env->regs[13]; | ||
98 | env->regs[13] = tmp; | ||
99 | + } | ||
100 | +} | 92 | +} |
101 | + | 93 | + |
102 | +void write_v7m_exception(CPUARMState *env, uint32_t new_exc) | 94 | /* |
103 | +{ | 95 | * Corresponds to ARM pseudocode function ELIsInHost(). |
104 | + /* Write a new value to v7m.exception, thus transitioning into or out | 96 | */ |
105 | + * of Handler mode; this may result in a change of active stack pointer. | ||
106 | + */ | ||
107 | + bool new_is_psp, old_is_psp = v7m_using_psp(env); | ||
108 | + uint32_t tmp; | ||
109 | |||
110 | - env->v7m.control[env->v7m.secure] = deposit32(old_control, | ||
111 | - R_V7M_CONTROL_SPSEL_SHIFT, | ||
112 | - R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); | ||
113 | + env->v7m.exception = new_exc; | ||
114 | + | ||
115 | + new_is_psp = v7m_using_psp(env); | ||
116 | + | ||
117 | + if (old_is_psp != new_is_psp) { | ||
118 | + tmp = env->v7m.other_sp; | ||
119 | + env->v7m.other_sp = env->regs[13]; | ||
120 | + env->regs[13] = tmp; | ||
121 | } | ||
122 | } | ||
123 | |||
124 | @@ -XXX,XX +XXX,XX @@ static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, | ||
125 | bool want_psp = threadmode && spsel; | ||
126 | |||
127 | if (secure == env->v7m.secure) { | ||
128 | - /* Currently switch_v7m_sp switches SP as it updates SPSEL, | ||
129 | - * so the SP we want is always in regs[13]. | ||
130 | - * When we decouple SPSEL from the actually selected SP | ||
131 | - * we need to check want_psp against v7m_using_psp() | ||
132 | - * to see whether we need regs[13] or v7m.other_sp. | ||
133 | - */ | ||
134 | - return &env->regs[13]; | ||
135 | + if (want_psp == v7m_using_psp(env)) { | ||
136 | + return &env->regs[13]; | ||
137 | + } else { | ||
138 | + return &env->v7m.other_sp; | ||
139 | + } | ||
140 | } else { | ||
141 | if (want_psp) { | ||
142 | return &env->v7m.other_ss_psp; | ||
143 | @@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr) | ||
144 | uint32_t addr; | ||
145 | |||
146 | armv7m_nvic_acknowledge_irq(env->nvic); | ||
147 | - switch_v7m_sp(env, 0); | ||
148 | + write_v7m_control_spsel(env, 0); | ||
149 | arm_clear_exclusive(env); | ||
150 | /* Clear IT bits */ | ||
151 | env->condexec_bits = 0; | ||
152 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | ||
153 | return; | ||
154 | } | ||
155 | |||
156 | - /* Set CONTROL.SPSEL from excret.SPSEL. For QEMU this currently | ||
157 | - * causes us to switch the active SP, but we will change this | ||
158 | - * later to not do that so we can support v8M. | ||
159 | + /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in | ||
160 | + * Handler mode (and will be until we write the new XPSR.Interrupt | ||
161 | + * field) this does not switch around the current stack pointer. | ||
162 | */ | ||
163 | - switch_v7m_sp(env, return_to_sp_process); | ||
164 | + write_v7m_control_spsel(env, return_to_sp_process); | ||
165 | |||
166 | { | ||
167 | /* The stack pointer we should be reading the exception frame from | ||
168 | @@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val) | ||
169 | case 20: /* CONTROL */ | ||
170 | /* Writing to the SPSEL bit only has an effect if we are in | ||
171 | * thread mode; other bits can be updated by any privileged code. | ||
172 | - * switch_v7m_sp() deals with updating the SPSEL bit in | ||
173 | + * write_v7m_control_spsel() deals with updating the SPSEL bit in | ||
174 | * env->v7m.control, so we only need update the others. | ||
175 | */ | ||
176 | if (!arm_v7m_is_handler_mode(env)) { | ||
177 | - switch_v7m_sp(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0); | ||
178 | + write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0); | ||
179 | } | ||
180 | env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK; | ||
181 | env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK; | ||
182 | -- | 97 | -- |
183 | 2.7.4 | 98 | 2.25.1 |
184 | |||
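The new entry point lets translation code evaluate HCR_EL2 for the security state it is actually walking. A sketch of a caller, assuming QEMU's CPUARMState and the helper added by this patch; not a complete compilation unit:

    /* Hypothetical walker fragment: regime_is_secure describes the
     * regime being walked, which after an AT instruction need not
     * match the CPU's current security state. */
    static uint64_t regime_hcr(CPUARMState *env, bool regime_is_secure)
    {
        return arm_hcr_el2_eff_secstate(env, regime_is_secure);
    }

The arm_is_el2_enabled_secstate() check folded into the helper is what short-circuits the result when EL2 is not enabled in the given security state.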
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | Rename the argument to is_secure_ptr, and introduce a | ||
4 | local variable is_secure holding its value. We only write | ||
5 | back to the pointer toward the end of the function. | ||
6 | |||
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
8 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
9 | Message-id: 20221001162318.153420-15-richard.henderson@linaro.org | ||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
11 | --- | ||
12 | target/arm/ptw.c | 22 ++++++++++++---------- | ||
13 | 1 file changed, 12 insertions(+), 10 deletions(-) | ||
14 | |||
15 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
16 | index XXXXXXX..XXXXXXX 100644 | ||
17 | --- a/target/arm/ptw.c | ||
18 | +++ b/target/arm/ptw.c | ||
19 | @@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs) | ||
20 | |||
21 | /* Translate a S1 pagetable walk through S2 if needed. */ | ||
22 | static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
23 | - hwaddr addr, bool *is_secure, | ||
24 | + hwaddr addr, bool *is_secure_ptr, | ||
25 | ARMMMUFaultInfo *fi) | ||
26 | { | ||
27 | - ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2; | ||
28 | + bool is_secure = *is_secure_ptr; | ||
29 | + ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2; | ||
30 | |||
31 | if (arm_mmu_idx_is_stage1_of_2(mmu_idx) && | ||
32 | - !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) { | ||
33 | + !regime_translation_disabled(env, s2_mmu_idx, is_secure)) { | ||
34 | GetPhysAddrResult s2 = {}; | ||
35 | int ret; | ||
36 | |||
37 | ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, | ||
38 | - *is_secure, false, &s2, fi); | ||
39 | + is_secure, false, &s2, fi); | ||
40 | if (ret) { | ||
41 | assert(fi->type != ARMFault_None); | ||
42 | fi->s2addr = addr; | ||
43 | fi->stage2 = true; | ||
44 | fi->s1ptw = true; | ||
45 | - fi->s1ns = !*is_secure; | ||
46 | + fi->s1ns = !is_secure; | ||
47 | return ~0; | ||
48 | } | ||
49 | if ((arm_hcr_el2_eff(env) & HCR_PTW) && | ||
50 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
51 | fi->s2addr = addr; | ||
52 | fi->stage2 = true; | ||
53 | fi->s1ptw = true; | ||
54 | - fi->s1ns = !*is_secure; | ||
55 | + fi->s1ns = !is_secure; | ||
56 | return ~0; | ||
57 | } | ||
58 | |||
59 | if (arm_is_secure_below_el3(env)) { | ||
60 | /* Check if page table walk is to secure or non-secure PA space. */ | ||
61 | - if (*is_secure) { | ||
62 | - *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW); | ||
63 | + if (is_secure) { | ||
64 | + is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW); | ||
65 | } else { | ||
66 | - *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW); | ||
67 | + is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW); | ||
68 | } | ||
69 | + *is_secure_ptr = is_secure; | ||
70 | } else { | ||
71 | - assert(!*is_secure); | ||
72 | + assert(!is_secure); | ||
73 | } | ||
74 | |||
75 | addr = s2.phys; | ||
76 | -- | ||
77 | 2.25.1 |
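The rewrite follows a common in/out-pointer tidy-up: snapshot the pointed-to value into a local, compute with the local, and write back once on the path that changes it. A generic illustration, not QEMU code; the sw/nsw names only echo the VSTCR_EL2/VTCR_EL2 bits tested in the patch:

    #include <stdbool.h>

    static void update_walk_security(bool *is_secure_ptr, bool sw, bool nsw)
    {
        bool is_secure = *is_secure_ptr;      /* read once */

        if (is_secure) {
            is_secure = !sw;
        } else {
            is_secure = !nsw;
        }

        *is_secure_ptr = is_secure;           /* single write-back */
    }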
New patch | |||
---|---|---|---|
1 | From: Richard Henderson <richard.henderson@linaro.org> | ||
1 | 2 | ||
3 | The env argument is unused. | ||
4 | |||
5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Message-id: 20221001162318.153420-16-richard.henderson@linaro.org | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | --- | ||
10 | target/arm/ptw.c | 5 ++--- | ||
11 | 1 file changed, 2 insertions(+), 3 deletions(-) | ||
12 | |||
13 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
14 | index XXXXXXX..XXXXXXX 100644 | ||
15 | --- a/target/arm/ptw.c | ||
16 | +++ b/target/arm/ptw.c | ||
17 | @@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr) | ||
18 | * s1 and s2 for the HCR_EL2.FWB == 1 case, returning the | ||
19 | * combined attributes in MAIR_EL1 format. | ||
20 | */ | ||
21 | -static uint8_t combined_attrs_fwb(CPUARMState *env, | ||
22 | - ARMCacheAttrs s1, ARMCacheAttrs s2) | ||
23 | +static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2) | ||
24 | { | ||
25 | switch (s2.attrs) { | ||
26 | case 7: | ||
27 | @@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env, | ||
28 | |||
29 | /* Combine memory type and cacheability attributes */ | ||
30 | if (arm_hcr_el2_eff(env) & HCR_FWB) { | ||
31 | - ret.attrs = combined_attrs_fwb(env, s1, s2); | ||
32 | + ret.attrs = combined_attrs_fwb(s1, s2); | ||
33 | } else { | ||
34 | ret.attrs = combined_attrs_nofwb(env, s1, s2); | ||
35 | } | ||
36 | -- | ||
37 | 2.25.1 |
1 | Currently our M profile exception return code switches to the | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | target stack pointer relatively early in the process, before | ||
3 | it tries to pop the exception frame off the stack. This is | ||
4 | awkward for v8M for two reasons: | ||
5 | * in v8M the process vs main stack pointer is not selected | ||
6 | purely by the value of CONTROL.SPSEL, so updating SPSEL | ||
7 | and relying on that to switch to the right stack pointer | ||
8 | won't work | ||
9 | * the stack we should be reading the stack frame from and | ||
10 | the stack we will eventually switch to might not be the | ||
11 | same if the guest is doing strange things | ||
12 | 2 | ||
13 | Change our exception return code to use a 'frame pointer' | 3 | These subroutines did not need ENV for anything except |
14 | to read the exception frame rather than assuming that we | 4 | retrieving the effective value of HCR anyway. |
15 | can switch the live stack pointer this early. | ||
16 | 5 | ||
6 | We have computed the effective value of HCR in the callers, | ||
7 | and this will be especially important for interpreting HCR | ||
8 | in a non-current security state. | ||
9 | |||
10 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
11 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
12 | Message-id: 20221001162318.153420-17-richard.henderson@linaro.org | ||
17 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
18 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
19 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
20 | Message-id: 1506092407-26985-3-git-send-email-peter.maydell@linaro.org | ||
21 | --- | 14 | --- |
22 | target/arm/helper.c | 130 +++++++++++++++++++++++++++++++++++++++------------- | 15 | target/arm/ptw.c | 30 +++++++++++++++++------------- |
23 | 1 file changed, 98 insertions(+), 32 deletions(-) | 16 | 1 file changed, 17 insertions(+), 13 deletions(-) |
24 | 17 | ||
25 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 18 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
26 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
27 | --- a/target/arm/helper.c | 20 | --- a/target/arm/ptw.c |
28 | +++ b/target/arm/helper.c | 21 | +++ b/target/arm/ptw.c |
29 | @@ -XXX,XX +XXX,XX @@ static void v7m_push(CPUARMState *env, uint32_t val) | 22 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, |
30 | stl_phys(cs->as, env->regs[13], val); | 23 | return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0; |
31 | } | 24 | } |
32 | 25 | ||
33 | -static uint32_t v7m_pop(CPUARMState *env) | 26 | -static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs) |
34 | -{ | 27 | +static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs) |
35 | - CPUState *cs = CPU(arm_env_get_cpu(env)); | ||
36 | - uint32_t val; | ||
37 | - | ||
38 | - val = ldl_phys(cs->as, env->regs[13]); | ||
39 | - env->regs[13] += 4; | ||
40 | - return val; | ||
41 | -} | ||
42 | - | ||
43 | /* Return true if we're using the process stack pointer (not the MSP) */ | ||
44 | static bool v7m_using_psp(CPUARMState *env) | ||
45 | { | 28 | { |
46 | @@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) | 29 | /* |
47 | env->regs[15] = dest & ~1; | 30 | * For an S1 page table walk, the stage 1 attributes are always |
48 | } | 31 | @@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs) |
49 | 32 | * when cacheattrs.attrs bit [2] is 0. | |
50 | +static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode, | 33 | */ |
51 | + bool spsel) | 34 | assert(cacheattrs.is_s2_format); |
52 | +{ | 35 | - if (arm_hcr_el2_eff(env) & HCR_FWB) { |
53 | + /* Return a pointer to the location where we currently store the | 36 | + if (hcr & HCR_FWB) { |
54 | + * stack pointer for the requested security state and thread mode. | 37 | return (cacheattrs.attrs & 0x4) == 0; |
55 | + * This pointer will become invalid if the CPU state is updated | 38 | } else { |
56 | + * such that the stack pointers are switched around (eg changing | 39 | return (cacheattrs.attrs & 0xc) == 0; |
57 | + * the SPSEL control bit). | 40 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, |
58 | + * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode(). | 41 | if (arm_mmu_idx_is_stage1_of_2(mmu_idx) && |
59 | + * Unlike that pseudocode, we require the caller to pass us in the | 42 | !regime_translation_disabled(env, s2_mmu_idx, is_secure)) { |
60 | + * SPSEL control bit value; this is because we also use this | 43 | GetPhysAddrResult s2 = {}; |
61 | + * function in handling of pushing of the callee-saves registers | 44 | + uint64_t hcr; |
62 | + * part of the v8M stack frame (pseudocode PushCalleeStack()), | 45 | int ret; |
63 | + * and in the tailchain codepath the SPSEL bit comes from the exception | 46 | |
64 | + * return magic LR value from the previous exception. The pseudocode | 47 | ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, |
65 | + * opencodes the stack-selection in PushCalleeStack(), but we prefer | 48 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, |
66 | + * to make this utility function generic enough to do the job. | 49 | fi->s1ns = !is_secure; |
67 | + */ | 50 | return ~0; |
68 | + bool want_psp = threadmode && spsel; | 51 | } |
52 | - if ((arm_hcr_el2_eff(env) & HCR_PTW) && | ||
53 | - ptw_attrs_are_device(env, s2.cacheattrs)) { | ||
69 | + | 54 | + |
70 | + if (secure == env->v7m.secure) { | 55 | + hcr = arm_hcr_el2_eff(env); |
71 | + /* Currently switch_v7m_sp switches SP as it updates SPSEL, | 56 | + if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) { |
72 | + * so the SP we want is always in regs[13]. | 57 | /* |
73 | + * When we decouple SPSEL from the actually selected SP | 58 | * PTW set and S1 walk touched S2 Device memory: |
74 | + * we need to check want_psp against v7m_using_psp() | 59 | * generate Permission fault. |
75 | + * to see whether we need regs[13] or v7m.other_sp. | 60 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, |
76 | + */ | 61 | * ref: shared/translation/attrs/S2AttrDecode() |
77 | + return &env->regs[13]; | 62 | * .../S2ConvertAttrsHints() |
78 | + } else { | 63 | */ |
79 | + if (want_psp) { | 64 | -static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs) |
80 | + return &env->v7m.other_ss_psp; | 65 | +static uint8_t convert_stage2_attrs(uint64_t hcr, uint8_t s2attrs) |
81 | + } else { | ||
82 | + return &env->v7m.other_ss_msp; | ||
83 | + } | ||
84 | + } | ||
85 | +} | ||
86 | + | ||
87 | static uint32_t arm_v7m_load_vector(ARMCPU *cpu) | ||
88 | { | 66 | { |
89 | CPUState *cs = CPU(cpu); | 67 | uint8_t hiattr = extract32(s2attrs, 2, 2); |
90 | @@ -XXX,XX +XXX,XX @@ static void v7m_push_stack(ARMCPU *cpu) | 68 | uint8_t loattr = extract32(s2attrs, 0, 2); |
91 | static void do_v7m_exception_exit(ARMCPU *cpu) | 69 | uint8_t hihint = 0, lohint = 0; |
70 | |||
71 | if (hiattr != 0) { /* normal memory */ | ||
72 | - if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */ | ||
73 | + if (hcr & HCR_CD) { /* cache disabled */ | ||
74 | hiattr = loattr = 1; /* non-cacheable */ | ||
75 | } else { | ||
76 | if (hiattr != 1) { /* Write-through or write-back */ | ||
77 | @@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2) | ||
78 | * s1 and s2 for the HCR_EL2.FWB == 0 case, returning the | ||
79 | * combined attributes in MAIR_EL1 format. | ||
80 | */ | ||
81 | -static uint8_t combined_attrs_nofwb(CPUARMState *env, | ||
82 | +static uint8_t combined_attrs_nofwb(uint64_t hcr, | ||
83 | ARMCacheAttrs s1, ARMCacheAttrs s2) | ||
92 | { | 84 | { |
93 | CPUARMState *env = &cpu->env; | 85 | uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs; |
94 | + CPUState *cs = CPU(cpu); | 86 | |
95 | uint32_t excret; | 87 | - s2_mair_attrs = convert_stage2_attrs(env, s2.attrs); |
96 | uint32_t xpsr; | 88 | + s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs); |
97 | bool ufault = false; | 89 | |
98 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 90 | s1lo = extract32(s1.attrs, 0, 4); |
99 | bool return_to_handler = false; | 91 | s2lo = extract32(s2_mair_attrs, 0, 4); |
100 | bool rettobase = false; | 92 | @@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2) |
101 | bool exc_secure = false; | 93 | * @s1: Attributes from stage 1 walk |
102 | + bool return_to_secure; | 94 | * @s2: Attributes from stage 2 walk |
103 | 95 | */ | |
104 | /* We can only get here from an EXCP_EXCEPTION_EXIT, and | 96 | -static ARMCacheAttrs combine_cacheattrs(CPUARMState *env, |
105 | * gen_bx_excret() enforces the architectural rule | 97 | +static ARMCacheAttrs combine_cacheattrs(uint64_t hcr, |
106 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 98 | ARMCacheAttrs s1, ARMCacheAttrs s2) |
107 | g_assert_not_reached(); | 99 | { |
100 | ARMCacheAttrs ret; | ||
101 | @@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env, | ||
108 | } | 102 | } |
109 | 103 | ||
110 | + return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && | 104 | /* Combine memory type and cacheability attributes */ |
111 | + (excret & R_V7M_EXCRET_S_MASK); | 105 | - if (arm_hcr_el2_eff(env) & HCR_FWB) { |
112 | + | 106 | + if (hcr & HCR_FWB) { |
113 | switch (excret & 0xf) { | 107 | ret.attrs = combined_attrs_fwb(s1, s2); |
114 | case 1: /* Return to Handler */ | 108 | } else { |
115 | return_to_handler = true; | 109 | - ret.attrs = combined_attrs_nofwb(env, s1, s2); |
116 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 110 | + ret.attrs = combined_attrs_nofwb(hcr, s1, s2); |
117 | return; | ||
118 | } | 111 | } |
119 | 112 | ||
120 | - /* Switch to the target stack. */ | 113 | /* |
121 | + /* Set CONTROL.SPSEL from excret.SPSEL. For QEMU this currently | 114 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
122 | + * causes us to switch the active SP, but we will change this | 115 | ARMCacheAttrs cacheattrs1; |
123 | + * later to not do that so we can support v8M. | 116 | ARMMMUIdx s2_mmu_idx; |
124 | + */ | 117 | bool is_el0; |
125 | switch_v7m_sp(env, return_to_sp_process); | 118 | + uint64_t hcr; |
126 | - /* Pop registers. */ | 119 | |
127 | - env->regs[0] = v7m_pop(env); | 120 | ret = get_phys_addr_with_secure(env, address, access_type, |
128 | - env->regs[1] = v7m_pop(env); | 121 | s1_mmu_idx, is_secure, result, fi); |
129 | - env->regs[2] = v7m_pop(env); | 122 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
130 | - env->regs[3] = v7m_pop(env); | 123 | } |
131 | - env->regs[12] = v7m_pop(env); | 124 | |
132 | - env->regs[14] = v7m_pop(env); | 125 | /* Combine the S1 and S2 cache attributes. */ |
133 | - env->regs[15] = v7m_pop(env); | 126 | - if (arm_hcr_el2_eff(env) & HCR_DC) { |
134 | - if (env->regs[15] & 1) { | 127 | + hcr = arm_hcr_el2_eff(env); |
135 | - qemu_log_mask(LOG_GUEST_ERROR, | 128 | + if (hcr & HCR_DC) { |
136 | - "M profile return from interrupt with misaligned " | 129 | /* |
137 | - "PC is UNPREDICTABLE\n"); | 130 | * HCR.DC forces the first stage attributes to |
138 | - /* Actual hardware seems to ignore the lsbit, and there are several | 131 | * Normal Non-Shareable, |
139 | - * RTOSes out there which incorrectly assume the r15 in the stack | 132 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
140 | - * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value. | 133 | } |
141 | + | 134 | cacheattrs1.shareability = 0; |
142 | + { | 135 | } |
143 | + /* The stack pointer we should be reading the exception frame from | 136 | - result->cacheattrs = combine_cacheattrs(env, cacheattrs1, |
144 | + * depends on bits in the magic exception return type value (and | 137 | + result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1, |
145 | + * for v8M isn't necessarily the stack pointer we will eventually | 138 | result->cacheattrs); |
146 | + * end up resuming execution with). Get a pointer to the location | 139 | |
147 | + * in the CPU state struct where the SP we need is currently being | 140 | /* |
148 | + * stored; we will use and modify it in place. | ||
149 | + * We use this limited C variable scope so we don't accidentally | ||
150 | + * use 'frame_sp_p' after we do something that makes it invalid. | ||
151 | + */ | ||
152 | + uint32_t *frame_sp_p = get_v7m_sp_ptr(env, | ||
153 | + return_to_secure, | ||
154 | + !return_to_handler, | ||
155 | + return_to_sp_process); | ||
156 | + uint32_t frameptr = *frame_sp_p; | ||
157 | + | ||
158 | + /* Pop registers. TODO: make these accesses use the correct | ||
159 | + * attributes and address space (S/NS, priv/unpriv) and handle | ||
160 | + * memory transaction failures. | ||
161 | */ | ||
162 | - env->regs[15] &= ~1U; | ||
163 | + env->regs[0] = ldl_phys(cs->as, frameptr); | ||
164 | + env->regs[1] = ldl_phys(cs->as, frameptr + 0x4); | ||
165 | + env->regs[2] = ldl_phys(cs->as, frameptr + 0x8); | ||
166 | + env->regs[3] = ldl_phys(cs->as, frameptr + 0xc); | ||
167 | + env->regs[12] = ldl_phys(cs->as, frameptr + 0x10); | ||
168 | + env->regs[14] = ldl_phys(cs->as, frameptr + 0x14); | ||
169 | + env->regs[15] = ldl_phys(cs->as, frameptr + 0x18); | ||
170 | + if (env->regs[15] & 1) { | ||
171 | + qemu_log_mask(LOG_GUEST_ERROR, | ||
172 | + "M profile return from interrupt with misaligned " | ||
173 | + "PC is UNPREDICTABLE\n"); | ||
174 | + /* Actual hardware seems to ignore the lsbit, and there are several | ||
175 | + * RTOSes out there which incorrectly assume the r15 in the stack | ||
176 | + * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value. | ||
177 | + */ | ||
178 | + env->regs[15] &= ~1U; | ||
179 | + } | ||
180 | + xpsr = ldl_phys(cs->as, frameptr + 0x1c); | ||
181 | + | ||
182 | + /* Commit to consuming the stack frame */ | ||
183 | + frameptr += 0x20; | ||
184 | + /* Undo stack alignment (the SPREALIGN bit indicates that the original | ||
185 | + * pre-exception SP was not 8-aligned and we added a padding word to | ||
186 | + * align it, so we undo this by ORing in the bit that increases it | ||
187 | + * from the current 8-aligned value to the 8-unaligned value. (Adding 4 | ||
188 | + * would work too but a logical OR is how the pseudocode specifies it.) | ||
189 | + */ | ||
190 | + if (xpsr & XPSR_SPREALIGN) { | ||
191 | + frameptr |= 4; | ||
192 | + } | ||
193 | + *frame_sp_p = frameptr; | ||
194 | } | ||
195 | - xpsr = v7m_pop(env); | ||
196 | + /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */ | ||
197 | xpsr_write(env, xpsr, ~XPSR_SPREALIGN); | ||
198 | - /* Undo stack alignment. */ | ||
199 | - if (xpsr & XPSR_SPREALIGN) { | ||
200 | - env->regs[13] |= 4; | ||
201 | - } | ||
202 | |||
203 | /* The restored xPSR exception field will be zero if we're | ||
204 | * resuming in Thread mode. If that doesn't match what the | ||
205 | -- | 141 | -- |
206 | 2.7.4 | 142 | 2.25.1 |
207 | |||
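Passing the effective HCR down as a plain argument makes the attribute helpers pure functions of their inputs. A sketch of the new convention for ptw_attrs_are_device(), using the architectural FWB bit and a simplified signature, not the QEMU original:

    #include <stdbool.h>
    #include <stdint.h>

    #define HCR_FWB (1ull << 46)  /* architectural bit position */

    /* After this patch the helper tests the hcr value it is handed
     * instead of recomputing the effective HCR_EL2 internally. */
    static bool attrs_are_device(uint64_t hcr, uint8_t s2attrs)
    {
        if (hcr & HCR_FWB) {
            return (s2attrs & 0x4) == 0;  /* FWB format: bit [2] clear */
        }
        return (s2attrs & 0xc) == 0;      /* non-FWB: bits [3:2] clear */
    }

The caller computes hcr once (in the following patch, via arm_hcr_el2_eff_secstate()) and every helper below it sees the same value.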
1 | In cpu_mmu_index() we try to do this: | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | if (env->v7m.secure) { | ||
3 | mmu_idx += ARMMMUIdx_MSUser; | ||
4 | } | ||
5 | but it will give the wrong answer, because ARMMMUIdx_MSUser | ||
6 | includes the 0x40 ARM_MMU_IDX_M field, and so does the | ||
7 | mmu_idx we're adding to, and we'll end up with 0x8n rather | ||
8 | than 0x4n. This error is then nullified by the call to | ||
9 | arm_to_core_mmu_idx() which masks out the high part, but | ||
10 | we're about to factor out the code that calculates the | ||
11 | ARMMMUIdx values so it can be used without passing it through | ||
12 | arm_to_core_mmu_idx(), so fix this bug first. | ||
13 | 2 | ||
3 | Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so | ||
4 | that we use is_secure instead of the current security state. | ||
5 | These AT* operations have been broken since arm_hcr_el2_eff | ||
6 | gained a check for "el2 enabled" for Secure EL2. | ||
7 | |||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
10 | Message-id: 20221001162318.153420-18-richard.henderson@linaro.org | ||
14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
15 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
16 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
17 | Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org | ||
18 | --- | 12 | --- |
19 | target/arm/cpu.h | 12 +++++++----- | 13 | target/arm/ptw.c | 8 ++++---- |
20 | 1 file changed, 7 insertions(+), 5 deletions(-) | 14 | 1 file changed, 4 insertions(+), 4 deletions(-) |
21 | 15 | ||
22 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 16 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
23 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
24 | --- a/target/arm/cpu.h | 18 | --- a/target/arm/ptw.c |
25 | +++ b/target/arm/cpu.h | 19 | +++ b/target/arm/ptw.c |
26 | @@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch) | 20 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, |
27 | int el = arm_current_el(env); | ||
28 | |||
29 | if (arm_feature(env, ARM_FEATURE_M)) { | ||
30 | - ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv; | ||
31 | + ARMMMUIdx mmu_idx; | ||
32 | |||
33 | - if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) { | ||
34 | - mmu_idx = ARMMMUIdx_MNegPri; | ||
35 | + if (el == 0) { | ||
36 | + mmu_idx = env->v7m.secure ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser; | ||
37 | + } else { | ||
38 | + mmu_idx = env->v7m.secure ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv; | ||
39 | } | 21 | } |
40 | 22 | } | |
41 | - if (env->v7m.secure) { | 23 | |
42 | - mmu_idx += ARMMMUIdx_MSUser; | 24 | - hcr_el2 = arm_hcr_el2_eff(env); |
43 | + if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) { | 25 | + hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure); |
44 | + mmu_idx = env->v7m.secure ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri; | 26 | |
27 | switch (mmu_idx) { | ||
28 | case ARMMMUIdx_Stage2: | ||
29 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
30 | return ~0; | ||
45 | } | 31 | } |
46 | 32 | ||
47 | return arm_to_core_mmu_idx(mmu_idx); | 33 | - hcr = arm_hcr_el2_eff(env); |
34 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
35 | if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) { | ||
36 | /* | ||
37 | * PTW set and S1 walk touched S2 Device memory: | ||
38 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
39 | } | ||
40 | |||
41 | /* Combine the S1 and S2 cache attributes. */ | ||
42 | - hcr = arm_hcr_el2_eff(env); | ||
43 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
44 | if (hcr & HCR_DC) { | ||
45 | /* | ||
46 | * HCR.DC forces the first stage attributes to | ||
47 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
48 | result->page_size = TARGET_PAGE_SIZE; | ||
49 | |||
50 | /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | ||
51 | - hcr = arm_hcr_el2_eff(env); | ||
52 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
53 | result->cacheattrs.shareability = 0; | ||
54 | result->cacheattrs.is_s2_format = false; | ||
55 | if (hcr & HCR_DC) { | ||
48 | -- | 56 | -- |
49 | 2.7.4 | 57 | 2.25.1 |
50 | |||
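The failure mode being fixed: arm_hcr_el2_eff() consults the CPU's current security state, so an AT S1E1R issued for the other state (possible once Secure EL2 exists) evaluated HCR_EL2 against the wrong state. A sketch of the fixed call pattern, assuming QEMU's CPUARMState, the HCR_PTW define, and the arm_hcr_el2_eff_secstate() helper from earlier in this series; illustrative only:

    static bool s1_ptw_device_fault(CPUARMState *env, bool is_secure,
                                    bool s2_attrs_are_device)
    {
        uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);

        /* HCR_EL2.PTW: an S1 walk that touches S2 Device memory faults. */
        return (hcr & HCR_PTW) && s2_attrs_are_device;
    }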
1 | For v8M, exceptions from Secure to Non-Secure state will save | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | callee-saved registers to the exception frame as well as the | ||
3 | caller-saved registers. Add support for unstacking these | ||
4 | registers in exception exit when necessary. | ||
5 | 2 | ||
3 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
4 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
5 | Message-id: 20221001162318.153420-19-richard.henderson@linaro.org | ||
6 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 6 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
7 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org | ||
9 | --- | 7 | --- |
10 | target/arm/helper.c | 30 ++++++++++++++++++++++++++++++ | 8 | target/arm/ptw.c | 138 +++++++++++++++++++++++++---------------------- |
11 | 1 file changed, 30 insertions(+) | 9 | 1 file changed, 74 insertions(+), 64 deletions(-) |
12 | 10 | ||
13 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 11 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
14 | index XXXXXXX..XXXXXXX 100644 | 12 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/target/arm/helper.c | 13 | --- a/target/arm/ptw.c |
16 | +++ b/target/arm/helper.c | 14 | +++ b/target/arm/ptw.c |
17 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 15 | @@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr, |
18 | "for destination state is UNPREDICTABLE\n"); | 16 | return ret; |
19 | } | 17 | } |
20 | 18 | ||
21 | + /* Do we need to pop callee-saved registers? */ | 19 | +/* |
22 | + if (return_to_secure && | 20 | + * MMU disabled. S1 addresses within aa64 translation regimes are |
23 | + ((excret & R_V7M_EXCRET_ES_MASK) == 0 || | 21 | + * still checked for bounds -- see AArch64.S1DisabledOutput(). |
24 | + (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) { | 22 | + */ |
25 | + uint32_t expected_sig = 0xfefa125b; | 23 | +static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, |
26 | + uint32_t actual_sig = ldl_phys(cs->as, frameptr); | 24 | + MMUAccessType access_type, |
25 | + ARMMMUIdx mmu_idx, bool is_secure, | ||
26 | + GetPhysAddrResult *result, | ||
27 | + ARMMMUFaultInfo *fi) | ||
28 | +{ | ||
29 | + uint64_t hcr; | ||
30 | + uint8_t memattr; | ||
27 | + | 31 | + |
28 | + if (expected_sig != actual_sig) { | 32 | + if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { |
29 | + /* Take a SecureFault on the current stack */ | 33 | + int r_el = regime_el(env, mmu_idx); |
30 | + env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK; | 34 | + if (arm_el_is_aa64(env, r_el)) { |
31 | + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); | 35 | + int pamax = arm_pamax(env_archcpu(env)); |
32 | + v7m_exception_taken(cpu, excret); | 36 | + uint64_t tcr = env->cp15.tcr_el[r_el]; |
33 | + qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " | 37 | + int addrtop, tbi; |
34 | + "stackframe: failed exception return integrity " | 38 | + |
35 | + "signature check\n"); | 39 | + tbi = aa64_va_parameter_tbi(tcr, mmu_idx); |
36 | + return; | 40 | + if (access_type == MMU_INST_FETCH) { |
41 | + tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); | ||
42 | + } | ||
43 | + tbi = (tbi >> extract64(address, 55, 1)) & 1; | ||
44 | + addrtop = (tbi ? 55 : 63); | ||
45 | + | ||
46 | + if (extract64(address, pamax, addrtop - pamax + 1) != 0) { | ||
47 | + fi->type = ARMFault_AddressSize; | ||
48 | + fi->level = 0; | ||
49 | + fi->stage2 = false; | ||
50 | + return 1; | ||
37 | + } | 51 | + } |
38 | + | 52 | + |
39 | + env->regs[4] = ldl_phys(cs->as, frameptr + 0x8); | 53 | + /* |
40 | + env->regs[5] = ldl_phys(cs->as, frameptr + 0xc); | 54 | + * When TBI is disabled, we've just validated that all of the |
41 | + env->regs[6] = ldl_phys(cs->as, frameptr + 0x10); | 55 | + * bits above PAMax are zero, so logically we only need to |
42 | + env->regs[7] = ldl_phys(cs->as, frameptr + 0x14); | 56 | + * clear the top byte for TBI. But it's clearer to follow |
43 | + env->regs[8] = ldl_phys(cs->as, frameptr + 0x18); | 57 | + * the pseudocode set of addrdesc.paddress. |
44 | + env->regs[9] = ldl_phys(cs->as, frameptr + 0x1c); | 58 | + */ |
45 | + env->regs[10] = ldl_phys(cs->as, frameptr + 0x20); | 59 | + address = extract64(address, 0, 52); |
46 | + env->regs[11] = ldl_phys(cs->as, frameptr + 0x24); | 60 | + } |
61 | + } | ||
47 | + | 62 | + |
48 | + frameptr += 0x28; | 63 | + result->phys = address; |
64 | + result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
65 | + result->page_size = TARGET_PAGE_SIZE; | ||
66 | + | ||
67 | + /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | ||
68 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
69 | + result->cacheattrs.shareability = 0; | ||
70 | + result->cacheattrs.is_s2_format = false; | ||
71 | + if (hcr & HCR_DC) { | ||
72 | + if (hcr & HCR_DCT) { | ||
73 | + memattr = 0xf0; /* Tagged, Normal, WB, RWA */ | ||
74 | + } else { | ||
75 | + memattr = 0xff; /* Normal, WB, RWA */ | ||
49 | + } | 76 | + } |
77 | + } else if (access_type == MMU_INST_FETCH) { | ||
78 | + if (regime_sctlr(env, mmu_idx) & SCTLR_I) { | ||
79 | + memattr = 0xee; /* Normal, WT, RA, NT */ | ||
80 | + } else { | ||
81 | + memattr = 0x44; /* Normal, NC, No */ | ||
82 | + } | ||
83 | + result->cacheattrs.shareability = 2; /* outer sharable */ | ||
84 | + } else { | ||
85 | + memattr = 0x00; /* Device, nGnRnE */ | ||
86 | + } | ||
87 | + result->cacheattrs.attrs = memattr; | ||
88 | + return 0; | ||
89 | +} | ||
50 | + | 90 | + |
51 | /* Pop registers. TODO: make these accesses use the correct | 91 | bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
52 | * attributes and address space (S/NS, priv/unpriv) and handle | 92 | MMUAccessType access_type, ARMMMUIdx mmu_idx, |
53 | * memory transaction failures. | 93 | bool is_secure, GetPhysAddrResult *result, |
94 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
95 | /* Definitely a real MMU, not an MPU */ | ||
96 | |||
97 | if (regime_translation_disabled(env, mmu_idx, is_secure)) { | ||
98 | - uint64_t hcr; | ||
99 | - uint8_t memattr; | ||
100 | - | ||
101 | - /* | ||
102 | - * MMU disabled. S1 addresses within aa64 translation regimes are | ||
103 | - * still checked for bounds -- see AArch64.TranslateAddressS1Off. | ||
104 | - */ | ||
105 | - if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { | ||
106 | - int r_el = regime_el(env, mmu_idx); | ||
107 | - if (arm_el_is_aa64(env, r_el)) { | ||
108 | - int pamax = arm_pamax(env_archcpu(env)); | ||
109 | - uint64_t tcr = env->cp15.tcr_el[r_el]; | ||
110 | - int addrtop, tbi; | ||
111 | - | ||
112 | - tbi = aa64_va_parameter_tbi(tcr, mmu_idx); | ||
113 | - if (access_type == MMU_INST_FETCH) { | ||
114 | - tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); | ||
115 | - } | ||
116 | - tbi = (tbi >> extract64(address, 55, 1)) & 1; | ||
117 | - addrtop = (tbi ? 55 : 63); | ||
118 | - | ||
119 | - if (extract64(address, pamax, addrtop - pamax + 1) != 0) { | ||
120 | - fi->type = ARMFault_AddressSize; | ||
121 | - fi->level = 0; | ||
122 | - fi->stage2 = false; | ||
123 | - return 1; | ||
124 | - } | ||
125 | - | ||
126 | - /* | ||
127 | - * When TBI is disabled, we've just validated that all of the | ||
128 | - * bits above PAMax are zero, so logically we only need to | ||
129 | - * clear the top byte for TBI. But it's clearer to follow | ||
130 | - * the pseudocode set of addrdesc.paddress. | ||
131 | - */ | ||
132 | - address = extract64(address, 0, 52); | ||
133 | - } | ||
134 | - } | ||
135 | - result->phys = address; | ||
136 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
137 | - result->page_size = TARGET_PAGE_SIZE; | ||
138 | - | ||
139 | - /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | ||
140 | - hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
141 | - result->cacheattrs.shareability = 0; | ||
142 | - result->cacheattrs.is_s2_format = false; | ||
143 | - if (hcr & HCR_DC) { | ||
144 | - if (hcr & HCR_DCT) { | ||
145 | - memattr = 0xf0; /* Tagged, Normal, WB, RWA */ | ||
146 | - } else { | ||
147 | - memattr = 0xff; /* Normal, WB, RWA */ | ||
148 | - } | ||
149 | - } else if (access_type == MMU_INST_FETCH) { | ||
150 | - if (regime_sctlr(env, mmu_idx) & SCTLR_I) { | ||
151 | - memattr = 0xee; /* Normal, WT, RA, NT */ | ||
152 | - } else { | ||
153 | - memattr = 0x44; /* Normal, NC, No */ | ||
154 | - } | ||
155 | - result->cacheattrs.shareability = 2; /* outer sharable */ | ||
156 | - } else { | ||
157 | - memattr = 0x00; /* Device, nGnRnE */ | ||
158 | - } | ||
159 | - result->cacheattrs.attrs = memattr; | ||
160 | - return 0; | ||
161 | + return get_phys_addr_disabled(env, address, access_type, mmu_idx, | ||
162 | + is_secure, result, fi); | ||
163 | } | ||
164 | - | ||
165 | if (regime_using_lpae_format(env, mmu_idx)) { | ||
166 | return get_phys_addr_lpae(env, address, access_type, mmu_idx, | ||
167 | is_secure, false, result, fi); | ||
54 | -- | 168 | -- |
55 | 2.7.4 | 169 | 2.25.1 |
56 | |||
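The out-of-range check in the new get_phys_addr_disabled() above is dense; a minimal standalone sketch of the same arithmetic follows (extract64() is reimplemented here with QEMU's semantics, address_in_range() is an invented name, and the per-address TBI0/TBI1 selection is collapsed into a single bool for brevity):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Same semantics as QEMU's extract64(): 'len' bits starting at 'start'. */
    static uint64_t extract64(uint64_t value, int start, int len)
    {
        return (value >> start) & (~0ULL >> (64 - len));
    }

    /*
     * With TBI enabled the top byte is ignored, so only bits [55:pamax]
     * must be zero; with TBI disabled the check covers bits [63:pamax].
     */
    static bool address_in_range(uint64_t address, int pamax, bool tbi)
    {
        int addrtop = tbi ? 55 : 63;
        return extract64(address, pamax, addrtop - pamax + 1) == 0;
    }

    int main(void)
    {
        uint64_t va = 0xff00000000001000ULL; /* top byte set, bits [55:48] clear */
        printf("%d\n", address_in_range(va, 48, true));  /* 1: TBI hides the tag byte */
        printf("%d\n", address_in_range(va, 48, false)); /* 0: address-size fault */
        return 0;
    }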
1 | ARM v8M specifies that the INVPC usage fault for mismatched | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | xPSR exception field and handler mode bit should be checked | ||
3 | before updating the PSR and SP, so that the fault is taken | ||
4 | with the existing stack frame rather than by pushing a new one. | ||
5 | Perform this check in the right place for v8M. | ||
6 | 2 | ||
7 | Since v7M specifies in its pseudocode that this usage fault | 3 | Do not apply memattr or shareability for Stage2 translations. |
8 | check should happen later, we have to retain the original | 4 | Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the |
9 | code for that check rather than being able to merge the two. | 5 | pseudocode in AArch64.S1DisabledOutput. |
10 | (The distinction is architecturally visible but only in | ||
11 | very obscure corner cases like attempting an invalid exception | ||
12 | return with an exception frame in read only memory.) | ||
13 | 6 | ||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Message-id: 20221001162318.153420-20-richard.henderson@linaro.org | ||
14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
15 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
16 | Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org | ||
17 | --- | 11 | --- |
18 | target/arm/helper.c | 30 +++++++++++++++++++++++++++--- | 12 | target/arm/ptw.c | 48 +++++++++++++++++++++++++----------------------- |
19 | 1 file changed, 27 insertions(+), 3 deletions(-) | 13 | 1 file changed, 25 insertions(+), 23 deletions(-) |
20 | 14 | ||
21 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 15 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
22 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
23 | --- a/target/arm/helper.c | 17 | --- a/target/arm/ptw.c |
24 | +++ b/target/arm/helper.c | 18 | +++ b/target/arm/ptw.c |
25 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 19 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, |
20 | GetPhysAddrResult *result, | ||
21 | ARMMMUFaultInfo *fi) | ||
22 | { | ||
23 | - uint64_t hcr; | ||
24 | - uint8_t memattr; | ||
25 | + uint8_t memattr = 0x00; /* Device nGnRnE */ | ||
26 | + uint8_t shareability = 0; /* non-sharable */ | ||
27 | |||
28 | if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { | ||
29 | int r_el = regime_el(env, mmu_idx); | ||
30 | + | ||
31 | if (arm_el_is_aa64(env, r_el)) { | ||
32 | int pamax = arm_pamax(env_archcpu(env)); | ||
33 | uint64_t tcr = env->cp15.tcr_el[r_el]; | ||
34 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, | ||
35 | */ | ||
36 | address = extract64(address, 0, 52); | ||
26 | } | 37 | } |
27 | xpsr = ldl_phys(cs->as, frameptr + 0x1c); | 38 | + |
28 | 39 | + /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | |
29 | + if (arm_feature(env, ARM_FEATURE_V8)) { | 40 | + if (r_el == 1) { |
30 | + /* For v8M we have to check whether the xPSR exception field | 41 | + uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure); |
31 | + * matches the EXCRET value for return to handler/thread | 42 | + if (hcr & HCR_DC) { |
32 | + * before we commit to changing the SP and xPSR. | 43 | + if (hcr & HCR_DCT) { |
33 | + */ | 44 | + memattr = 0xf0; /* Tagged, Normal, WB, RWA */ |
34 | + bool will_be_handler = (xpsr & XPSR_EXCP) != 0; | 45 | + } else { |
35 | + if (return_to_handler != will_be_handler) { | 46 | + memattr = 0xff; /* Normal, WB, RWA */ |
36 | + /* Take an INVPC UsageFault on the current stack. | 47 | + } |
37 | + * By this point we will have switched to the security state | ||
38 | + * for the background state, so this UsageFault will target | ||
39 | + * that state. | ||
40 | + */ | ||
41 | + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, | ||
42 | + env->v7m.secure); | ||
43 | + env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; | ||
44 | + v7m_exception_taken(cpu, excret); | ||
45 | + qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " | ||
46 | + "stackframe: failed exception return integrity " | ||
47 | + "check\n"); | ||
48 | + return; | ||
49 | + } | 48 | + } |
50 | + } | 49 | + } |
51 | + | 50 | + if (memattr == 0 && access_type == MMU_INST_FETCH) { |
52 | /* Commit to consuming the stack frame */ | 51 | + if (regime_sctlr(env, mmu_idx) & SCTLR_I) { |
53 | frameptr += 0x20; | 52 | + memattr = 0xee; /* Normal, WT, RA, NT */ |
54 | /* Undo stack alignment (the SPREALIGN bit indicates that the original | 53 | + } else { |
55 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 54 | + memattr = 0x44; /* Normal, NC, No */ |
56 | /* The restored xPSR exception field will be zero if we're | 55 | + } |
57 | * resuming in Thread mode. If that doesn't match what the | 56 | + shareability = 2; /* outer sharable */ |
58 | * exception return excret specified then this is a UsageFault. | 57 | + } |
59 | + * v7M requires we make this check here; v8M did it earlier. | 58 | + result->cacheattrs.is_s2_format = false; |
60 | */ | 59 | } |
61 | if (return_to_handler != arm_v7m_is_handler_mode(env)) { | 60 | |
62 | - /* Take an INVPC UsageFault by pushing the stack again. | 61 | result->phys = address; |
63 | - * TODO: the v8M version of this code should target the | 62 | result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; |
64 | - * background state for this exception. | 63 | result->page_size = TARGET_PAGE_SIZE; |
65 | + /* Take an INVPC UsageFault by pushing the stack again; | 64 | - |
66 | + * we know we're v7M so this is never a Secure UsageFault. | 65 | - /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ |
67 | */ | 66 | - hcr = arm_hcr_el2_eff_secstate(env, is_secure); |
68 | + assert(!arm_feature(env, ARM_FEATURE_V8)); | 67 | - result->cacheattrs.shareability = 0; |
69 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false); | 68 | - result->cacheattrs.is_s2_format = false; |
70 | env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK; | 69 | - if (hcr & HCR_DC) { |
71 | v7m_push_stack(cpu); | 70 | - if (hcr & HCR_DCT) { |
71 | - memattr = 0xf0; /* Tagged, Normal, WB, RWA */ | ||
72 | - } else { | ||
73 | - memattr = 0xff; /* Normal, WB, RWA */ | ||
74 | - } | ||
75 | - } else if (access_type == MMU_INST_FETCH) { | ||
76 | - if (regime_sctlr(env, mmu_idx) & SCTLR_I) { | ||
77 | - memattr = 0xee; /* Normal, WT, RA, NT */ | ||
78 | - } else { | ||
79 | - memattr = 0x44; /* Normal, NC, No */ | ||
80 | - } | ||
81 | - result->cacheattrs.shareability = 2; /* outer sharable */ | ||
82 | - } else { | ||
83 | - memattr = 0x00; /* Device, nGnRnE */ | ||
84 | - } | ||
85 | + result->cacheattrs.shareability = shareability; | ||
86 | result->cacheattrs.attrs = memattr; | ||
87 | return 0; | ||
88 | } | ||
72 | -- | 89 | -- |
73 | 2.7.4 | 90 | 2.25.1 |
74 | |||
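The memattr byte set up above uses the MAIR_ELx attribute encoding (upper nibble outer, lower nibble inner attributes). A small lookup sketch, echoing the comments from the patch; describe_memattr() is a hypothetical helper, not QEMU code:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the memattr values that get_phys_addr_disabled() can produce. */
    static const char *describe_memattr(uint8_t memattr)
    {
        switch (memattr) {
        case 0x00: return "Device, nGnRnE";
        case 0x44: return "Normal, NC";
        case 0xee: return "Normal, WT, RA, NT";
        case 0xf0: return "Tagged, Normal, WB, RWA";
        case 0xff: return "Normal, WB, RWA";
        default:   return "(not produced here)";
        }
    }

    int main(void)
    {
        const uint8_t vals[] = { 0x00, 0x44, 0xee, 0xf0, 0xff };
        for (unsigned i = 0; i < sizeof(vals); i++) {
            printf("0x%02x: %s\n", vals[i], describe_memattr(vals[i]));
        }
        return 0;
    }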
1 | In the v8M architecture, return from an exception to a PC which | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | has bit 0 set is not UNPREDICTABLE; it is defined that bit 0 | ||
3 | is discarded [R_HRJH]. Restrict our complaint about this to v7M. | ||
4 | 2 | ||
3 | Adjust GetPhysAddrResult to fill in CPUTLBEntryFull, | ||
4 | so that it may be passed directly to tlb_set_page_full. | ||
5 | |||
6 | The change is large, but mostly mechanical. The major | ||
7 | non-mechanical change is page_size -> lg_page_size. | ||
8 | Most of the time this is obvious, and is related to | ||
9 | TARGET_PAGE_BITS. | ||
10 | |||
11 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
12 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
13 | Message-id: 20221001162318.153420-21-richard.henderson@linaro.org | ||
5 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
6 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
7 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org | ||
9 | --- | 15 | --- |
10 | target/arm/helper.c | 22 +++++++++++++++------- | 16 | target/arm/internals.h | 5 +- |
11 | 1 file changed, 15 insertions(+), 7 deletions(-) | 17 | target/arm/helper.c | 12 +-- |
18 | target/arm/m_helper.c | 20 ++--- | ||
19 | target/arm/ptw.c | 179 ++++++++++++++++++++-------------------- | ||
20 | target/arm/tlb_helper.c | 9 +- | ||
21 | 5 files changed, 111 insertions(+), 114 deletions(-) | ||
12 | 22 | ||
23 | diff --git a/target/arm/internals.h b/target/arm/internals.h | ||
24 | index XXXXXXX..XXXXXXX 100644 | ||
25 | --- a/target/arm/internals.h | ||
26 | +++ b/target/arm/internals.h | ||
27 | @@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs { | ||
28 | |||
29 | /* Fields that are valid upon success. */ | ||
30 | typedef struct GetPhysAddrResult { | ||
31 | - hwaddr phys; | ||
32 | - target_ulong page_size; | ||
33 | - int prot; | ||
34 | - MemTxAttrs attrs; | ||
35 | + CPUTLBEntryFull f; | ||
36 | ARMCacheAttrs cacheattrs; | ||
37 | } GetPhysAddrResult; | ||
38 | |||
13 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 39 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
14 | index XXXXXXX..XXXXXXX 100644 | 40 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/target/arm/helper.c | 41 | --- a/target/arm/helper.c |
16 | +++ b/target/arm/helper.c | 42 | +++ b/target/arm/helper.c |
17 | @@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu) | 43 | @@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value, |
18 | env->regs[12] = ldl_phys(cs->as, frameptr + 0x10); | 44 | /* Create a 64-bit PAR */ |
19 | env->regs[14] = ldl_phys(cs->as, frameptr + 0x14); | 45 | par64 = (1 << 11); /* LPAE bit always set */ |
20 | env->regs[15] = ldl_phys(cs->as, frameptr + 0x18); | 46 | if (!ret) { |
21 | + | 47 | - par64 |= res.phys & ~0xfffULL; |
22 | + /* Returning from an exception with a PC with bit 0 set is defined | 48 | - if (!res.attrs.secure) { |
23 | + * behaviour on v8M (bit 0 is ignored), but for v7M it was specified | 49 | + par64 |= res.f.phys_addr & ~0xfffULL; |
24 | + * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore | 50 | + if (!res.f.attrs.secure) { |
25 | + * the lsbit, and there are several RTOSes out there which incorrectly | 51 | par64 |= (1 << 9); /* NS */ |
26 | + * assume the r15 in the stack frame should be a Thumb-style "lsbit | 52 | } |
27 | + * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but | 53 | par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */ |
28 | + * complain about the badly behaved guest. | 54 | @@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value, |
29 | + */ | 55 | */ |
30 | if (env->regs[15] & 1) { | 56 | if (!ret) { |
31 | - qemu_log_mask(LOG_GUEST_ERROR, | 57 | /* We do not set any attribute bits in the PAR */ |
32 | - "M profile return from interrupt with misaligned " | 58 | - if (res.page_size == (1 << 24) |
33 | - "PC is UNPREDICTABLE\n"); | 59 | + if (res.f.lg_page_size == 24 |
34 | - /* Actual hardware seems to ignore the lsbit, and there are several | 60 | && arm_feature(env, ARM_FEATURE_V7)) { |
35 | - * RTOSes out there which incorrectly assume the r15 in the stack | 61 | - par64 = (res.phys & 0xff000000) | (1 << 1); |
36 | - * frame should be a Thumb-style "lsbit indicates ARM/Thumb" value. | 62 | + par64 = (res.f.phys_addr & 0xff000000) | (1 << 1); |
37 | - */ | 63 | } else { |
38 | env->regs[15] &= ~1U; | 64 | - par64 = res.phys & 0xfffff000; |
39 | + if (!arm_feature(env, ARM_FEATURE_V8)) { | 65 | + par64 = res.f.phys_addr & 0xfffff000; |
40 | + qemu_log_mask(LOG_GUEST_ERROR, | 66 | } |
41 | + "M profile return from interrupt with misaligned " | 67 | - if (!res.attrs.secure) { |
42 | + "PC is UNPREDICTABLE on v7M\n"); | 68 | + if (!res.f.attrs.secure) { |
43 | + } | 69 | par64 |= (1 << 9); /* NS */ |
44 | } | 70 | } |
45 | + | 71 | } else { |
46 | xpsr = ldl_phys(cs->as, frameptr + 0x1c); | 72 | diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c |
73 | index XXXXXXX..XXXXXXX 100644 | ||
74 | --- a/target/arm/m_helper.c | ||
75 | +++ b/target/arm/m_helper.c | ||
76 | @@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value, | ||
77 | } | ||
78 | goto pend_fault; | ||
79 | } | ||
80 | - address_space_stl_le(arm_addressspace(cs, res.attrs), res.phys, value, | ||
81 | - res.attrs, &txres); | ||
82 | + address_space_stl_le(arm_addressspace(cs, res.f.attrs), res.f.phys_addr, | ||
83 | + value, res.f.attrs, &txres); | ||
84 | if (txres != MEMTX_OK) { | ||
85 | /* BusFault trying to write the data */ | ||
86 | if (mode == STACK_LAZYFP) { | ||
87 | @@ -XXX,XX +XXX,XX @@ static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr, | ||
88 | goto pend_fault; | ||
89 | } | ||
90 | |||
91 | - value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys, | ||
92 | - res.attrs, &txres); | ||
93 | + value = address_space_ldl(arm_addressspace(cs, res.f.attrs), | ||
94 | + res.f.phys_addr, res.f.attrs, &txres); | ||
95 | if (txres != MEMTX_OK) { | ||
96 | /* BusFault trying to read the data */ | ||
97 | qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n"); | ||
98 | @@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure, | ||
99 | qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n"); | ||
100 | return false; | ||
101 | } | ||
102 | - *insn = address_space_lduw_le(arm_addressspace(cs, res.attrs), res.phys, | ||
103 | - res.attrs, &txres); | ||
104 | + *insn = address_space_lduw_le(arm_addressspace(cs, res.f.attrs), | ||
105 | + res.f.phys_addr, res.f.attrs, &txres); | ||
106 | if (txres != MEMTX_OK) { | ||
107 | env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; | ||
108 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); | ||
109 | @@ -XXX,XX +XXX,XX @@ static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx, | ||
110 | } | ||
111 | return false; | ||
112 | } | ||
113 | - value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys, | ||
114 | - res.attrs, &txres); | ||
115 | + value = address_space_ldl(arm_addressspace(cs, res.f.attrs), | ||
116 | + res.f.phys_addr, res.f.attrs, &txres); | ||
117 | if (txres != MEMTX_OK) { | ||
118 | /* BusFault trying to read the data */ | ||
119 | qemu_log_mask(CPU_LOG_INT, | ||
120 | @@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) | ||
121 | } else { | ||
122 | mrvalid = true; | ||
123 | } | ||
124 | - r = res.prot & PAGE_READ; | ||
125 | - rw = res.prot & PAGE_WRITE; | ||
126 | + r = res.f.prot & PAGE_READ; | ||
127 | + rw = res.f.prot & PAGE_WRITE; | ||
128 | } else { | ||
129 | r = false; | ||
130 | rw = false; | ||
131 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
132 | index XXXXXXX..XXXXXXX 100644 | ||
133 | --- a/target/arm/ptw.c | ||
134 | +++ b/target/arm/ptw.c | ||
135 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
136 | assert(!is_secure); | ||
137 | } | ||
138 | |||
139 | - addr = s2.phys; | ||
140 | + addr = s2.f.phys_addr; | ||
141 | } | ||
142 | return addr; | ||
143 | } | ||
144 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
145 | /* 1Mb section. */ | ||
146 | phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); | ||
147 | ap = (desc >> 10) & 3; | ||
148 | - result->page_size = 1024 * 1024; | ||
149 | + result->f.lg_page_size = 20; /* 1MB */ | ||
150 | } else { | ||
151 | /* Lookup l2 entry. */ | ||
152 | if (type == 1) { | ||
153 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
154 | case 1: /* 64k page. */ | ||
155 | phys_addr = (desc & 0xffff0000) | (address & 0xffff); | ||
156 | ap = (desc >> (4 + ((address >> 13) & 6))) & 3; | ||
157 | - result->page_size = 0x10000; | ||
158 | + result->f.lg_page_size = 16; | ||
159 | break; | ||
160 | case 2: /* 4k page. */ | ||
161 | phys_addr = (desc & 0xfffff000) | (address & 0xfff); | ||
162 | ap = (desc >> (4 + ((address >> 9) & 6))) & 3; | ||
163 | - result->page_size = 0x1000; | ||
164 | + result->f.lg_page_size = 12; | ||
165 | break; | ||
166 | case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */ | ||
167 | if (type == 1) { | ||
168 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
169 | if (arm_feature(env, ARM_FEATURE_XSCALE) | ||
170 | || arm_feature(env, ARM_FEATURE_V6)) { | ||
171 | phys_addr = (desc & 0xfffff000) | (address & 0xfff); | ||
172 | - result->page_size = 0x1000; | ||
173 | + result->f.lg_page_size = 12; | ||
174 | } else { | ||
175 | /* | ||
176 | * UNPREDICTABLE in ARMv5; we choose to take a | ||
177 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
178 | } | ||
179 | } else { | ||
180 | phys_addr = (desc & 0xfffffc00) | (address & 0x3ff); | ||
181 | - result->page_size = 0x400; | ||
182 | + result->f.lg_page_size = 10; | ||
183 | } | ||
184 | ap = (desc >> 4) & 3; | ||
185 | break; | ||
186 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
187 | g_assert_not_reached(); | ||
188 | } | ||
189 | } | ||
190 | - result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
191 | - result->prot |= result->prot ? PAGE_EXEC : 0; | ||
192 | - if (!(result->prot & (1 << access_type))) { | ||
193 | + result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
194 | + result->f.prot |= result->f.prot ? PAGE_EXEC : 0; | ||
195 | + if (!(result->f.prot & (1 << access_type))) { | ||
196 | /* Access permission fault. */ | ||
197 | fi->type = ARMFault_Permission; | ||
198 | goto do_fault; | ||
199 | } | ||
200 | - result->phys = phys_addr; | ||
201 | + result->f.phys_addr = phys_addr; | ||
202 | return false; | ||
203 | do_fault: | ||
204 | fi->domain = domain; | ||
205 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
206 | phys_addr = (desc & 0xff000000) | (address & 0x00ffffff); | ||
207 | phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32; | ||
208 | phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36; | ||
209 | - result->page_size = 0x1000000; | ||
210 | + result->f.lg_page_size = 24; /* 16MB */ | ||
211 | } else { | ||
212 | /* Section. */ | ||
213 | phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); | ||
214 | - result->page_size = 0x100000; | ||
215 | + result->f.lg_page_size = 20; /* 1MB */ | ||
216 | } | ||
217 | ap = ((desc >> 10) & 3) | ((desc >> 13) & 4); | ||
218 | xn = desc & (1 << 4); | ||
219 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
220 | case 1: /* 64k page. */ | ||
221 | phys_addr = (desc & 0xffff0000) | (address & 0xffff); | ||
222 | xn = desc & (1 << 15); | ||
223 | - result->page_size = 0x10000; | ||
224 | + result->f.lg_page_size = 16; | ||
225 | break; | ||
226 | case 2: case 3: /* 4k page. */ | ||
227 | phys_addr = (desc & 0xfffff000) | (address & 0xfff); | ||
228 | xn = desc & 1; | ||
229 | - result->page_size = 0x1000; | ||
230 | + result->f.lg_page_size = 12; | ||
231 | break; | ||
232 | default: | ||
233 | /* Never happens, but compiler isn't smart enough to tell. */ | ||
234 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
235 | } | ||
236 | } | ||
237 | if (domain_prot == 3) { | ||
238 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
239 | + result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
240 | } else { | ||
241 | if (pxn && !regime_is_user(env, mmu_idx)) { | ||
242 | xn = 1; | ||
243 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
244 | fi->type = ARMFault_AccessFlag; | ||
245 | goto do_fault; | ||
246 | } | ||
247 | - result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1); | ||
248 | + result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1); | ||
249 | } else { | ||
250 | - result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
251 | + result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
252 | } | ||
253 | - if (result->prot && !xn) { | ||
254 | - result->prot |= PAGE_EXEC; | ||
255 | + if (result->f.prot && !xn) { | ||
256 | + result->f.prot |= PAGE_EXEC; | ||
257 | } | ||
258 | - if (!(result->prot & (1 << access_type))) { | ||
259 | + if (!(result->f.prot & (1 << access_type))) { | ||
260 | /* Access permission fault. */ | ||
261 | fi->type = ARMFault_Permission; | ||
262 | goto do_fault; | ||
263 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
264 | * the CPU doesn't support TZ or this is a non-secure translation | ||
265 | * regime, because the attribute will already be non-secure. | ||
266 | */ | ||
267 | - result->attrs.secure = false; | ||
268 | + result->f.attrs.secure = false; | ||
269 | } | ||
270 | - result->phys = phys_addr; | ||
271 | + result->f.phys_addr = phys_addr; | ||
272 | return false; | ||
273 | do_fault: | ||
274 | fi->domain = domain; | ||
275 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, | ||
276 | if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { | ||
277 | ns = mmu_idx == ARMMMUIdx_Stage2; | ||
278 | xn = extract32(attrs, 11, 2); | ||
279 | - result->prot = get_S2prot(env, ap, xn, s1_is_el0); | ||
280 | + result->f.prot = get_S2prot(env, ap, xn, s1_is_el0); | ||
281 | } else { | ||
282 | ns = extract32(attrs, 3, 1); | ||
283 | xn = extract32(attrs, 12, 1); | ||
284 | pxn = extract32(attrs, 11, 1); | ||
285 | - result->prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn); | ||
286 | + result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn); | ||
287 | } | ||
288 | |||
289 | fault_type = ARMFault_Permission; | ||
290 | - if (!(result->prot & (1 << access_type))) { | ||
291 | + if (!(result->f.prot & (1 << access_type))) { | ||
292 | goto do_fault; | ||
293 | } | ||
294 | |||
295 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, | ||
296 | * the CPU doesn't support TZ or this is a non-secure translation | ||
297 | * regime, because the attribute will already be non-secure. | ||
298 | */ | ||
299 | - result->attrs.secure = false; | ||
300 | + result->f.attrs.secure = false; | ||
301 | } | ||
302 | /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */ | ||
303 | if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) { | ||
304 | - arm_tlb_bti_gp(&result->attrs) = true; | ||
305 | + arm_tlb_bti_gp(&result->f.attrs) = true; | ||
306 | } | ||
307 | |||
308 | if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { | ||
309 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, | ||
310 | result->cacheattrs.shareability = extract32(attrs, 6, 2); | ||
311 | } | ||
312 | |||
313 | - result->phys = descaddr; | ||
314 | - result->page_size = page_size; | ||
315 | + result->f.phys_addr = descaddr; | ||
316 | + result->f.lg_page_size = ctz64(page_size); | ||
317 | return false; | ||
318 | |||
319 | do_fault: | ||
320 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
321 | |||
322 | if (regime_translation_disabled(env, mmu_idx, is_secure)) { | ||
323 | /* MPU disabled. */ | ||
324 | - result->phys = address; | ||
325 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
326 | + result->f.phys_addr = address; | ||
327 | + result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
328 | return false; | ||
329 | } | ||
330 | |||
331 | - result->phys = address; | ||
332 | + result->f.phys_addr = address; | ||
333 | for (n = 7; n >= 0; n--) { | ||
334 | base = env->cp15.c6_region[n]; | ||
335 | if ((base & 1) == 0) { | ||
336 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
337 | fi->level = 1; | ||
338 | return true; | ||
339 | } | ||
340 | - result->prot = PAGE_READ | PAGE_WRITE; | ||
341 | + result->f.prot = PAGE_READ | PAGE_WRITE; | ||
342 | break; | ||
343 | case 2: | ||
344 | - result->prot = PAGE_READ; | ||
345 | + result->f.prot = PAGE_READ; | ||
346 | if (!is_user) { | ||
347 | - result->prot |= PAGE_WRITE; | ||
348 | + result->f.prot |= PAGE_WRITE; | ||
349 | } | ||
350 | break; | ||
351 | case 3: | ||
352 | - result->prot = PAGE_READ | PAGE_WRITE; | ||
353 | + result->f.prot = PAGE_READ | PAGE_WRITE; | ||
354 | break; | ||
355 | case 5: | ||
356 | if (is_user) { | ||
357 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
358 | fi->level = 1; | ||
359 | return true; | ||
360 | } | ||
361 | - result->prot = PAGE_READ; | ||
362 | + result->f.prot = PAGE_READ; | ||
363 | break; | ||
364 | case 6: | ||
365 | - result->prot = PAGE_READ; | ||
366 | + result->f.prot = PAGE_READ; | ||
367 | break; | ||
368 | default: | ||
369 | /* Bad permission. */ | ||
370 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
371 | fi->level = 1; | ||
372 | return true; | ||
373 | } | ||
374 | - result->prot |= PAGE_EXEC; | ||
375 | + result->f.prot |= PAGE_EXEC; | ||
376 | return false; | ||
377 | } | ||
378 | |||
379 | static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
380 | - int32_t address, int *prot) | ||
381 | + int32_t address, uint8_t *prot) | ||
382 | { | ||
383 | if (!arm_feature(env, ARM_FEATURE_M)) { | ||
384 | *prot = PAGE_READ | PAGE_WRITE; | ||
385 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
386 | int n; | ||
387 | bool is_user = regime_is_user(env, mmu_idx); | ||
388 | |||
389 | - result->phys = address; | ||
390 | - result->page_size = TARGET_PAGE_SIZE; | ||
391 | - result->prot = 0; | ||
392 | + result->f.phys_addr = address; | ||
393 | + result->f.lg_page_size = TARGET_PAGE_BITS; | ||
394 | + result->f.prot = 0; | ||
395 | |||
396 | if (regime_translation_disabled(env, mmu_idx, secure) || | ||
397 | m_is_ppb_region(env, address)) { | ||
398 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
399 | * which always does a direct read using address_space_ldl(), rather | ||
400 | * than going via this function, so we don't need to check that here. | ||
401 | */ | ||
402 | - get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot); | ||
403 | + get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot); | ||
404 | } else { /* MPU enabled */ | ||
405 | for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) { | ||
406 | /* region search */ | ||
407 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
408 | if (ranges_overlap(base, rmask, | ||
409 | address & TARGET_PAGE_MASK, | ||
410 | TARGET_PAGE_SIZE)) { | ||
411 | - result->page_size = 1; | ||
412 | + result->f.lg_page_size = 0; | ||
413 | } | ||
414 | continue; | ||
415 | } | ||
416 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
417 | continue; | ||
418 | } | ||
419 | if (rsize < TARGET_PAGE_BITS) { | ||
420 | - result->page_size = 1 << rsize; | ||
421 | + result->f.lg_page_size = rsize; | ||
422 | } | ||
423 | break; | ||
424 | } | ||
425 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
426 | fi->type = ARMFault_Background; | ||
427 | return true; | ||
428 | } | ||
429 | - get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot); | ||
430 | + get_phys_addr_pmsav7_default(env, mmu_idx, address, | ||
431 | + &result->f.prot); | ||
432 | } else { /* a MPU hit! */ | ||
433 | uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3); | ||
434 | uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1); | ||
435 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
436 | case 5: | ||
437 | break; /* no access */ | ||
438 | case 3: | ||
439 | - result->prot |= PAGE_WRITE; | ||
440 | + result->f.prot |= PAGE_WRITE; | ||
441 | /* fall through */ | ||
442 | case 2: | ||
443 | case 6: | ||
444 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
445 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
446 | break; | ||
447 | case 7: | ||
448 | /* for v7M, same as 6; for R profile a reserved value */ | ||
449 | if (arm_feature(env, ARM_FEATURE_M)) { | ||
450 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
451 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
452 | break; | ||
453 | } | ||
454 | /* fall through */ | ||
455 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
456 | case 1: | ||
457 | case 2: | ||
458 | case 3: | ||
459 | - result->prot |= PAGE_WRITE; | ||
460 | + result->f.prot |= PAGE_WRITE; | ||
461 | /* fall through */ | ||
462 | case 5: | ||
463 | case 6: | ||
464 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
465 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
466 | break; | ||
467 | case 7: | ||
468 | /* for v7M, same as 6; for R profile a reserved value */ | ||
469 | if (arm_feature(env, ARM_FEATURE_M)) { | ||
470 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
471 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
472 | break; | ||
473 | } | ||
474 | /* fall through */ | ||
475 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
476 | |||
477 | /* execute never */ | ||
478 | if (xn) { | ||
479 | - result->prot &= ~PAGE_EXEC; | ||
480 | + result->f.prot &= ~PAGE_EXEC; | ||
481 | } | ||
482 | } | ||
483 | } | ||
484 | |||
485 | fi->type = ARMFault_Permission; | ||
486 | fi->level = 1; | ||
487 | - return !(result->prot & (1 << access_type)); | ||
488 | + return !(result->f.prot & (1 << access_type)); | ||
489 | } | ||
490 | |||
491 | bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
492 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
493 | uint32_t addr_page_base = address & TARGET_PAGE_MASK; | ||
494 | uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1); | ||
495 | |||
496 | - result->page_size = TARGET_PAGE_SIZE; | ||
497 | - result->phys = address; | ||
498 | - result->prot = 0; | ||
499 | + result->f.lg_page_size = TARGET_PAGE_BITS; | ||
500 | + result->f.phys_addr = address; | ||
501 | + result->f.prot = 0; | ||
502 | if (mregion) { | ||
503 | *mregion = -1; | ||
504 | } | ||
505 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
506 | ranges_overlap(base, limit - base + 1, | ||
507 | addr_page_base, | ||
508 | TARGET_PAGE_SIZE)) { | ||
509 | - result->page_size = 1; | ||
510 | + result->f.lg_page_size = 0; | ||
511 | } | ||
512 | continue; | ||
513 | } | ||
514 | |||
515 | if (base > addr_page_base || limit < addr_page_limit) { | ||
516 | - result->page_size = 1; | ||
517 | + result->f.lg_page_size = 0; | ||
518 | } | ||
519 | |||
520 | if (matchregion != -1) { | ||
521 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
522 | |||
523 | if (matchregion == -1) { | ||
524 | /* hit using the background region */ | ||
525 | - get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot); | ||
526 | + get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot); | ||
527 | } else { | ||
528 | uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2); | ||
529 | uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1); | ||
530 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
531 | xn = 1; | ||
532 | } | ||
533 | |||
534 | - result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap); | ||
535 | - if (result->prot && !xn && !(pxn && !is_user)) { | ||
536 | - result->prot |= PAGE_EXEC; | ||
537 | + result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap); | ||
538 | + if (result->f.prot && !xn && !(pxn && !is_user)) { | ||
539 | + result->f.prot |= PAGE_EXEC; | ||
540 | } | ||
541 | /* | ||
542 | * We don't need to look the attribute up in the MAIR0/MAIR1 | ||
543 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
544 | |||
545 | fi->type = ARMFault_Permission; | ||
546 | fi->level = 1; | ||
547 | - return !(result->prot & (1 << access_type)); | ||
548 | + return !(result->f.prot & (1 << access_type)); | ||
549 | } | ||
550 | |||
551 | static bool v8m_is_sau_exempt(CPUARMState *env, | ||
552 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
553 | } else { | ||
554 | fi->type = ARMFault_QEMU_SFault; | ||
555 | } | ||
556 | - result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; | ||
557 | - result->phys = address; | ||
558 | - result->prot = 0; | ||
559 | + result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS; | ||
560 | + result->f.phys_addr = address; | ||
561 | + result->f.prot = 0; | ||
562 | return true; | ||
563 | } | ||
564 | } else { | ||
565 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
566 | * might downgrade a secure access to nonsecure. | ||
567 | */ | ||
568 | if (sattrs.ns) { | ||
569 | - result->attrs.secure = false; | ||
570 | + result->f.attrs.secure = false; | ||
571 | } else if (!secure) { | ||
572 | /* | ||
573 | * NS access to S memory must fault. | ||
574 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
575 | * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt(). | ||
576 | */ | ||
577 | fi->type = ARMFault_QEMU_SFault; | ||
578 | - result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; | ||
579 | - result->phys = address; | ||
580 | - result->prot = 0; | ||
581 | + result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS; | ||
582 | + result->f.phys_addr = address; | ||
583 | + result->f.prot = 0; | ||
584 | return true; | ||
585 | } | ||
586 | } | ||
587 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
588 | ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, secure, | ||
589 | result, fi, NULL); | ||
590 | if (sattrs.subpage) { | ||
591 | - result->page_size = 1; | ||
592 | + result->f.lg_page_size = 0; | ||
593 | } | ||
594 | return ret; | ||
595 | } | ||
596 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, | ||
597 | result->cacheattrs.is_s2_format = false; | ||
598 | } | ||
599 | |||
600 | - result->phys = address; | ||
601 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
602 | - result->page_size = TARGET_PAGE_SIZE; | ||
603 | + result->f.phys_addr = address; | ||
604 | + result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
605 | + result->f.lg_page_size = TARGET_PAGE_BITS; | ||
606 | result->cacheattrs.shareability = shareability; | ||
607 | result->cacheattrs.attrs = memattr; | ||
608 | return 0; | ||
609 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
610 | return ret; | ||
611 | } | ||
612 | |||
613 | - ipa = result->phys; | ||
614 | - ipa_secure = result->attrs.secure; | ||
615 | + ipa = result->f.phys_addr; | ||
616 | + ipa_secure = result->f.attrs.secure; | ||
617 | if (is_secure) { | ||
618 | /* Select TCR based on the NS bit from the S1 walk. */ | ||
619 | s2walk_secure = !(ipa_secure | ||
620 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
621 | * Save the stage1 results so that we may merge | ||
622 | * prot and cacheattrs later. | ||
623 | */ | ||
624 | - s1_prot = result->prot; | ||
625 | + s1_prot = result->f.prot; | ||
626 | cacheattrs1 = result->cacheattrs; | ||
627 | memset(result, 0, sizeof(*result)); | ||
628 | |||
629 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
630 | fi->s2addr = ipa; | ||
631 | |||
632 | /* Combine the S1 and S2 perms. */ | ||
633 | - result->prot &= s1_prot; | ||
634 | + result->f.prot &= s1_prot; | ||
635 | |||
636 | /* If S2 fails, return early. */ | ||
637 | if (ret) { | ||
638 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
639 | * Check if IPA translates to secure or non-secure PA space. | ||
640 | * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA. | ||
641 | */ | ||
642 | - result->attrs.secure = | ||
643 | + result->f.attrs.secure = | ||
644 | (is_secure | ||
645 | && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)) | ||
646 | && (ipa_secure | ||
647 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
648 | * cannot upgrade an non-secure translation regime's attributes | ||
649 | * to secure. | ||
650 | */ | ||
651 | - result->attrs.secure = is_secure; | ||
652 | - result->attrs.user = regime_is_user(env, mmu_idx); | ||
653 | + result->f.attrs.secure = is_secure; | ||
654 | + result->f.attrs.user = regime_is_user(env, mmu_idx); | ||
655 | |||
656 | /* | ||
657 | * Fast Context Switch Extension. This doesn't exist at all in v8. | ||
658 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
659 | |||
660 | if (arm_feature(env, ARM_FEATURE_PMSA)) { | ||
661 | bool ret; | ||
662 | - result->page_size = TARGET_PAGE_SIZE; | ||
663 | + result->f.lg_page_size = TARGET_PAGE_BITS; | ||
47 | 664 | ||
48 | if (arm_feature(env, ARM_FEATURE_V8)) { | 665 | if (arm_feature(env, ARM_FEATURE_V8)) { |
666 | /* PMSAv8 */ | ||
667 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
668 | (access_type == MMU_DATA_STORE ? "writing" : "execute"), | ||
669 | (uint32_t)address, mmu_idx, | ||
670 | ret ? "Miss" : "Hit", | ||
671 | - result->prot & PAGE_READ ? 'r' : '-', | ||
672 | - result->prot & PAGE_WRITE ? 'w' : '-', | ||
673 | - result->prot & PAGE_EXEC ? 'x' : '-'); | ||
674 | + result->f.prot & PAGE_READ ? 'r' : '-', | ||
675 | + result->f.prot & PAGE_WRITE ? 'w' : '-', | ||
676 | + result->f.prot & PAGE_EXEC ? 'x' : '-'); | ||
677 | |||
678 | return ret; | ||
679 | } | ||
680 | @@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, | ||
681 | bool ret; | ||
682 | |||
683 | ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi); | ||
684 | - *attrs = res.attrs; | ||
685 | + *attrs = res.f.attrs; | ||
686 | |||
687 | if (ret) { | ||
688 | return -1; | ||
689 | } | ||
690 | - return res.phys; | ||
691 | + return res.f.phys_addr; | ||
692 | } | ||
693 | diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c | ||
694 | index XXXXXXX..XXXXXXX 100644 | ||
695 | --- a/target/arm/tlb_helper.c | ||
696 | +++ b/target/arm/tlb_helper.c | ||
697 | @@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size, | ||
698 | * target page size are handled specially, so for those we | ||
699 | * pass in the exact addresses. | ||
700 | */ | ||
701 | - if (res.page_size >= TARGET_PAGE_SIZE) { | ||
702 | - res.phys &= TARGET_PAGE_MASK; | ||
703 | + if (res.f.lg_page_size >= TARGET_PAGE_BITS) { | ||
704 | + res.f.phys_addr &= TARGET_PAGE_MASK; | ||
705 | address &= TARGET_PAGE_MASK; | ||
706 | } | ||
707 | /* Notice and record tagged memory. */ | ||
708 | if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) { | ||
709 | - arm_tlb_mte_tagged(&res.attrs) = true; | ||
710 | + arm_tlb_mte_tagged(&res.f.attrs) = true; | ||
711 | } | ||
712 | |||
713 | - tlb_set_page_with_attrs(cs, address, res.phys, res.attrs, | ||
714 | - res.prot, mmu_idx, res.page_size); | ||
715 | + tlb_set_page_full(cs, mmu_idx, address, &res.f); | ||
716 | return true; | ||
717 | } else if (probe) { | ||
718 | return false; | ||
49 | -- | 719 | -- |
50 | 2.7.4 | 720 | 2.25.1 |
51 | |||
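The page_size -> lg_page_size conversion that the commit message calls mechanical is just a count of trailing zero bits on a power-of-two size. A standalone sketch, using the GCC/Clang __builtin_ctzll() in place of QEMU's ctz64():

    #include <stdint.h>
    #include <stdio.h>

    /* log2 of a power-of-two page size, as stored in CPUTLBEntryFull. */
    static unsigned lg_page_size(uint64_t page_size)
    {
        return (unsigned)__builtin_ctzll(page_size);
    }

    int main(void)
    {
        printf("%u %u %u %u\n",
               lg_page_size(0x1000),      /* 12 -> 4K  */
               lg_page_size(0x10000),     /* 16 -> 64K */
               lg_page_size(0x100000),    /* 20 -> 1MB */
               lg_page_size(0x1000000));  /* 24 -> 16MB */
        return 0;
    }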
1 | From: Michael Olbrich <m.olbrich@pengutronix.de> | 1 | From: Jerome Forissier <jerome.forissier@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | The current code checks if the next block exceeds the size of the card. | 3 | According to the Linux kernel booting.rst [1], CPTR_EL3.ESM and |
4 | This generates an error while reading the last block of the card. | 4 | SCR_EL3.EnTP2 must be initialized to 1 when EL3 is present and FEAT_SME |
5 | Do the out-of-bounds check when starting to read a new block to fix this. | 5 | is advertised. This has to be taken care of when QEMU boots directly |
6 | 6 | into the kernel (i.e., "-M virt,secure=on -cpu max -kernel Image"). | |
7 | This issue became visible with increased error checking in Linux 4.13. | ||
8 | 7 | ||
9 | Cc: qemu-stable@nongnu.org | 8 | Cc: qemu-stable@nongnu.org |
10 | Signed-off-by: Michael Olbrich <m.olbrich@pengutronix.de> | 9 | Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max") |
11 | Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> | 10 | Link: [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst?h=v6.0#n321 |
12 | Message-id: 20170916091611.10241-1-m.olbrich@pengutronix.de | 11 | Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> |
12 | Message-id: 20221003145641.1921467-1-jerome.forissier@linaro.org | ||
13 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
14 | --- | 15 | --- |
15 | hw/sd/sd.c | 12 ++++++------ | 16 | hw/arm/boot.c | 4 ++++ |
16 | 1 file changed, 6 insertions(+), 6 deletions(-) | 17 | 1 file changed, 4 insertions(+) |
17 | 18 | ||
18 | diff --git a/hw/sd/sd.c b/hw/sd/sd.c | 19 | diff --git a/hw/arm/boot.c b/hw/arm/boot.c |
19 | index XXXXXXX..XXXXXXX 100644 | 20 | index XXXXXXX..XXXXXXX 100644 |
20 | --- a/hw/sd/sd.c | 21 | --- a/hw/arm/boot.c |
21 | +++ b/hw/sd/sd.c | 22 | +++ b/hw/arm/boot.c |
22 | @@ -XXX,XX +XXX,XX @@ uint8_t sd_read_data(SDState *sd) | 23 | @@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque) |
23 | break; | 24 | if (cpu_isar_feature(aa64_sve, cpu)) { |
24 | 25 | env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK; | |
25 | case 18: /* CMD18: READ_MULTIPLE_BLOCK */ | 26 | } |
26 | - if (sd->data_offset == 0) | 27 | + if (cpu_isar_feature(aa64_sme, cpu)) { |
27 | + if (sd->data_offset == 0) { | 28 | + env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK; |
28 | + if (sd->data_start + io_len > sd->size) { | 29 | + env->cp15.scr_el3 |= SCR_ENTP2; |
29 | + sd->card_status |= ADDRESS_ERROR; | 30 | + } |
30 | + return 0x00; | 31 | /* AArch64 kernels never boot in secure mode */ |
31 | + } | 32 | assert(!info->secure_boot); |
32 | BLK_READ_BLOCK(sd->data_start, io_len); | 33 | /* This hook is only supported for AArch32 currently: |
33 | + } | ||
34 | ret = sd->data[sd->data_offset ++]; | ||
35 | |||
36 | if (sd->data_offset >= io_len) { | ||
37 | @@ -XXX,XX +XXX,XX @@ uint8_t sd_read_data(SDState *sd) | ||
38 | break; | ||
39 | } | ||
40 | } | ||
41 | - | ||
42 | - if (sd->data_start + io_len > sd->size) { | ||
43 | - sd->card_status |= ADDRESS_ERROR; | ||
44 | - break; | ||
45 | - } | ||
46 | } | ||
47 | break; | ||
48 | |||
49 | -- | 34 | -- |
50 | 2.7.4 | 35 | 2.25.1 |
51 | |||
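The sd.c fix above works because the bounds check now runs before each block is read, so a read that ends exactly at the card size passes. A minimal sketch of the corrected predicate (read_block_ok() and the sizes are invented for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A block read is valid if it ends at or before the end of the card. */
    static bool read_block_ok(uint64_t data_start, uint64_t io_len,
                              uint64_t card_size)
    {
        return data_start + io_len <= card_size;
    }

    int main(void)
    {
        uint64_t card_size = 512 * 1024, io_len = 512;
        /* The last block ends exactly at the card size: now accepted. */
        printf("%d\n", read_block_ok(card_size - io_len, io_len, card_size)); /* 1 */
        /* One block past the end: ADDRESS_ERROR in the device model. */
        printf("%d\n", read_block_ok(card_size, io_len, card_size));          /* 0 */
        return 0;
    }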
1 | For the SG instruction and secure function return we are going | 1 | Arm CPUs support some subset of the granule (page) sizes 4K, 16K and |
---|---|---|---|
2 | to want to do memory accesses using the MMU index of the CPU | 2 | 64K. The guest selects the one it wants using bits in the TCR_ELx |
3 | in secure state, even though the CPU is currently in non-secure | 3 | registers. If it tries to program these registers with a value that |
4 | state. Write arm_v7m_mmu_idx_for_secstate() to do this job, | 4 | is either reserved or which requests a size that the CPU does not |
5 | and use it in cpu_mmu_index(). | 5 | implement, the architecture requires that the CPU behaves as if the |
6 | 6 | field was programmed to some size that has been implemented. | |
7 | Currently we don't implement this, and instead let the guest use any | ||
8 | granule size, even if the CPU ID register fields say it isn't | ||
9 | present. | ||
10 | |||
11 | Make aa64_va_parameters() check against the supported granule size | ||
12 | and force use of a different one if it is not implemented. | ||
13 | |||
14 | (A subsequent commit will make ARMVAParameters use the new enum | ||
15 | rather than the current pair of using16k/using64k bools.) | ||
16 | |||
17 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 18 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
8 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 19 | Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org |
9 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
10 | Message-id: 1506092407-26985-17-git-send-email-peter.maydell@linaro.org | ||
11 | --- | 20 | --- |
12 | target/arm/cpu.h | 32 +++++++++++++++++++++----------- | 21 | target/arm/cpu.h | 33 +++++++++++++ |
13 | 1 file changed, 21 insertions(+), 11 deletions(-) | 22 | target/arm/internals.h | 9 ++++ |
23 | target/arm/helper.c | 102 +++++++++++++++++++++++++++++++++++++---- | ||
24 | 3 files changed, 136 insertions(+), 8 deletions(-) | ||
14 | 25 | ||
15 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | 26 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h |
16 | index XXXXXXX..XXXXXXX 100644 | 27 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/target/arm/cpu.h | 28 | --- a/target/arm/cpu.h |
18 | +++ b/target/arm/cpu.h | 29 | +++ b/target/arm/cpu.h |
19 | @@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) | 30 | @@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id) |
31 | return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id)); | ||
32 | } | ||
33 | |||
34 | +static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id) | ||
35 | +{ | ||
36 | + return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0; | ||
37 | +} | ||
38 | + | ||
39 | +static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id) | ||
40 | +{ | ||
41 | + return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1; | ||
42 | +} | ||
43 | + | ||
44 | +static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id) | ||
45 | +{ | ||
46 | + return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0; | ||
47 | +} | ||
48 | + | ||
49 | +static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id) | ||
50 | +{ | ||
51 | + unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2); | ||
52 | + return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id)); | ||
53 | +} | ||
54 | + | ||
55 | +static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id) | ||
56 | +{ | ||
57 | + unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2); | ||
58 | + return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id)); | ||
59 | +} | ||
60 | + | ||
61 | +static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id) | ||
62 | +{ | ||
63 | + unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2); | ||
64 | + return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id)); | ||
65 | +} | ||
66 | + | ||
67 | static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id) | ||
68 | { | ||
69 | return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0; | ||
70 | diff --git a/target/arm/internals.h b/target/arm/internals.h | ||
71 | index XXXXXXX..XXXXXXX 100644 | ||
72 | --- a/target/arm/internals.h | ||
73 | +++ b/target/arm/internals.h | ||
74 | @@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id) | ||
75 | return valid; | ||
76 | } | ||
77 | |||
78 | +/* Granule size (i.e. page size) */ | ||
79 | +typedef enum ARMGranuleSize { | ||
80 | + /* Same order as TG0 encoding */ | ||
81 | + Gran4K, | ||
82 | + Gran64K, | ||
83 | + Gran16K, | ||
84 | + GranInvalid, | ||
85 | +} ARMGranuleSize; | ||
86 | + | ||
87 | /* | ||
88 | * Parameters of a given virtual address, as extracted from the | ||
89 | * translation control register (TCR) for a given regime. | ||
90 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
91 | index XXXXXXX..XXXXXXX 100644 | ||
92 | --- a/target/arm/helper.c | ||
93 | +++ b/target/arm/helper.c | ||
94 | @@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx) | ||
20 | } | 95 | } |
21 | } | 96 | } |
22 | 97 | ||
23 | +/* Return the MMU index for a v7M CPU in the specified security state */ | 98 | +static ARMGranuleSize tg0_to_gran_size(int tg) |
24 | +static inline ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, | 99 | +{ |
25 | + bool secstate) | 100 | + switch (tg) { |
26 | +{ | 101 | + case 0: |
27 | + int el = arm_current_el(env); | 102 | + return Gran4K; |
28 | + ARMMMUIdx mmu_idx; | 103 | + case 1: |
29 | + | 104 | + return Gran64K; |
30 | + if (el == 0) { | 105 | + case 2: |
31 | + mmu_idx = secstate ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser; | 106 | + return Gran16K; |
32 | + } else { | 107 | + default: |
33 | + mmu_idx = secstate ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv; | 108 | + return GranInvalid; |
34 | + } | 109 | + } |
35 | + | 110 | +} |
36 | + if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) { | 111 | + |
37 | + mmu_idx = secstate ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri; | 112 | +static ARMGranuleSize tg1_to_gran_size(int tg) |
38 | + } | 113 | +{ |
39 | + | 114 | + switch (tg) { |
40 | + return mmu_idx; | 115 | + case 1: |
41 | +} | 116 | + return Gran16K; |
42 | + | 117 | + case 2: |
43 | /* Determine the current mmu_idx to use for normal loads/stores */ | 118 | + return Gran4K; |
44 | static inline int cpu_mmu_index(CPUARMState *env, bool ifetch) | 119 | + case 3: |
120 | + return Gran64K; | ||
121 | + default: | ||
122 | + return GranInvalid; | ||
123 | + } | ||
124 | +} | ||
125 | + | ||
126 | +static inline bool have4k(ARMCPU *cpu, bool stage2) | ||
127 | +{ | ||
128 | + return stage2 ? cpu_isar_feature(aa64_tgran4_2, cpu) | ||
129 | + : cpu_isar_feature(aa64_tgran4, cpu); | ||
130 | +} | ||
131 | + | ||
132 | +static inline bool have16k(ARMCPU *cpu, bool stage2) | ||
133 | +{ | ||
134 | + return stage2 ? cpu_isar_feature(aa64_tgran16_2, cpu) | ||
135 | + : cpu_isar_feature(aa64_tgran16, cpu); | ||
136 | +} | ||
137 | + | ||
138 | +static inline bool have64k(ARMCPU *cpu, bool stage2) | ||
139 | +{ | ||
140 | + return stage2 ? cpu_isar_feature(aa64_tgran64_2, cpu) | ||
141 | + : cpu_isar_feature(aa64_tgran64, cpu); | ||
142 | +} | ||
143 | + | ||
144 | +static ARMGranuleSize sanitize_gran_size(ARMCPU *cpu, ARMGranuleSize gran, | ||
145 | + bool stage2) | ||
146 | +{ | ||
147 | + switch (gran) { | ||
148 | + case Gran4K: | ||
149 | + if (have4k(cpu, stage2)) { | ||
150 | + return gran; | ||
151 | + } | ||
152 | + break; | ||
153 | + case Gran16K: | ||
154 | + if (have16k(cpu, stage2)) { | ||
155 | + return gran; | ||
156 | + } | ||
157 | + break; | ||
158 | + case Gran64K: | ||
159 | + if (have64k(cpu, stage2)) { | ||
160 | + return gran; | ||
161 | + } | ||
162 | + break; | ||
163 | + case GranInvalid: | ||
164 | + break; | ||
165 | + } | ||
166 | + /* | ||
167 | + * If the guest selects a granule size that isn't implemented, | ||
168 | + * the architecture requires that we behave as if it selected one | ||
169 | + * that is (with an IMPDEF choice of which one to pick). We choose | ||
170 | + * to implement the smallest supported granule size. | ||
171 | + */ | ||
172 | + if (have4k(cpu, stage2)) { | ||
173 | + return Gran4K; | ||
174 | + } | ||
175 | + if (have16k(cpu, stage2)) { | ||
176 | + return Gran16K; | ||
177 | + } | ||
178 | + assert(have64k(cpu, stage2)); | ||
179 | + return Gran64K; | ||
180 | +} | ||
181 | + | ||
182 | ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
183 | ARMMMUIdx mmu_idx, bool data) | ||
45 | { | 184 | { |
46 | int el = arm_current_el(env); | 185 | uint64_t tcr = regime_tcr(env, mmu_idx); |
47 | 186 | bool epd, hpd, using16k, using64k, tsz_oob, ds; | |
48 | if (arm_feature(env, ARM_FEATURE_M)) { | 187 | int select, tsz, tbi, max_tsz, min_tsz, ps, sh; |
49 | - ARMMMUIdx mmu_idx; | 188 | + ARMGranuleSize gran; |
50 | - | 189 | ARMCPU *cpu = env_archcpu(env); |
51 | - if (el == 0) { | 190 | + bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S; |
52 | - mmu_idx = env->v7m.secure ? ARMMMUIdx_MSUser : ARMMMUIdx_MUser; | 191 | |
53 | - } else { | 192 | if (!regime_has_2_ranges(mmu_idx)) { |
54 | - mmu_idx = env->v7m.secure ? ARMMMUIdx_MSPriv : ARMMMUIdx_MPriv; | 193 | select = 0; |
55 | - } | 194 | tsz = extract32(tcr, 0, 6); |
56 | - | 195 | - using64k = extract32(tcr, 14, 1); |
57 | - if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) { | 196 | - using16k = extract32(tcr, 15, 1); |
58 | - mmu_idx = env->v7m.secure ? ARMMMUIdx_MSNegPri : ARMMMUIdx_MNegPri; | 197 | - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { |
59 | - } | 198 | + gran = tg0_to_gran_size(extract32(tcr, 14, 2)); |
60 | + ARMMMUIdx mmu_idx = arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure); | 199 | + if (stage2) { |
61 | 200 | /* VTCR_EL2 */ | |
62 | return arm_to_core_mmu_idx(mmu_idx); | 201 | hpd = false; |
202 | } else { | ||
203 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
204 | select = extract64(va, 55, 1); | ||
205 | if (!select) { | ||
206 | tsz = extract32(tcr, 0, 6); | ||
207 | + gran = tg0_to_gran_size(extract32(tcr, 14, 2)); | ||
208 | epd = extract32(tcr, 7, 1); | ||
209 | sh = extract32(tcr, 12, 2); | ||
210 | - using64k = extract32(tcr, 14, 1); | ||
211 | - using16k = extract32(tcr, 15, 1); | ||
212 | hpd = extract64(tcr, 41, 1); | ||
213 | } else { | ||
214 | - int tg = extract32(tcr, 30, 2); | ||
215 | - using16k = tg == 1; | ||
216 | - using64k = tg == 3; | ||
217 | tsz = extract32(tcr, 16, 6); | ||
218 | + gran = tg1_to_gran_size(extract32(tcr, 30, 2)); | ||
219 | epd = extract32(tcr, 23, 1); | ||
220 | sh = extract32(tcr, 28, 2); | ||
221 | hpd = extract64(tcr, 42, 1); | ||
222 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
223 | ds = extract64(tcr, 59, 1); | ||
63 | } | 224 | } |
225 | |||
226 | + gran = sanitize_gran_size(cpu, gran, stage2); | ||
227 | + using64k = gran == Gran64K; | ||
228 | + using16k = gran == Gran16K; | ||
229 | + | ||
230 | if (cpu_isar_feature(aa64_st, cpu)) { | ||
231 | max_tsz = 48 - using64k; | ||
232 | } else { | ||
64 | -- | 233 | -- |
65 | 2.7.4 | 234 | 2.25.1 |
66 | |||
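A side note on the TG0/TG1 helpers in the hunk above: the two TCR fields use
different encodings for the same granule sizes, which is why the patch gives
each field its own mapping function. The following standalone sketch
re-declares the enum and both mappings locally so it compiles on its own; it
is an illustration, not part of the patch:

#include <stdio.h>

typedef enum { Gran4K, Gran16K, Gran64K, GranInvalid } ARMGranuleSize;

/* Local copies of the two mappings from the hunk above. */
static ARMGranuleSize tg0_to_gran_size(int tg)
{
    switch (tg) {
    case 0:
        return Gran4K;
    case 1:
        return Gran64K;
    case 2:
        return Gran16K;
    default:
        return GranInvalid;
    }
}

static ARMGranuleSize tg1_to_gran_size(int tg)
{
    switch (tg) {
    case 1:
        return Gran16K;
    case 2:
        return Gran4K;
    case 3:
        return Gran64K;
    default:
        return GranInvalid;
    }
}

int main(void)
{
    static const char *names[] = { "4K", "16K", "64K", "Invalid" };

    for (int tg = 0; tg < 4; tg++) {
        printf("tg=%d: TG0 -> %-8s TG1 -> %s\n",
               tg, names[tg0_to_gran_size(tg)], names[tg1_to_gran_size(tg)]);
    }
    return 0;
}

Printed out, the asymmetry is plain: the value 2 selects 16K via TG0 but 4K
via TG1, 0 is only valid in TG0, and 3 only in TG1.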
1 | Implement the security attribute lookups for memory accesses | 1 | Now that we have an enum for the granule size, use it in the |
---|---|---|---|
2 | in the get_phys_addr() functions, causing these to generate | 2 | ARMVAParameters struct instead of the using16k/using64k bools. |
3 | various kinds of SecureFault for bad accesses. | ||
4 | |||
5 | The major subtlety in this code relates to handling of the | ||
6 | case when the security attributes the SAU assigns to the | ||
7 | address don't match the current security state of the CPU. | ||
8 | |||
9 | In the ARM ARM pseudocode for validating instruction | ||
10 | accesses, the security attributes of the address determine | ||
11 | whether the Secure or NonSecure MPU state is used. At face | ||
12 | value, handling this would require us to encode the relevant | ||
13 | bits of state into mmu_idx for both S and NS at once, which | ||
14 | would result in our needing 16 mmu indexes. Fortunately we | ||
15 | don't actually need to do this because a mismatch between | ||
16 | address attributes and CPU state means either: | ||
17 | * some kind of fault (usually a SecureFault, but in theory | ||
18 | perhaps a UsageFault for unaligned access to Device memory) | ||
19 | * execution of the SG instruction in NS state from a | ||
20 | Secure & NonSecure code region | ||
21 | |||
22 | The purpose of SG is simply to flip the CPU into Secure | ||
23 | state, so we can handle it by emulating execution of that | ||
24 | instruction directly in arm_v7m_cpu_do_interrupt(), which | ||
25 | means we can treat all the mismatch cases as "throw an | ||
26 | exception" and we don't need to encode the state of the | ||
27 | other MPU bank into our mmu_idx values. | ||
28 | |||
29 | This commit doesn't include the actual emulation of SG; | ||
30 | it also doesn't include implementation of the IDAU, which | ||
31 | is a per-board way to specify hard-coded memory attributes | ||
32 | for addresses; these override the CPU-internal SAU if they | ||
33 | specify a more secure setting than the SAU is programmed to. | ||
34 | 3 | ||
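The net effect of the reasoning above on instruction fetches can be boiled
down to a few lines. This toy model mirrors the check the patch adds to
get_phys_addr_pmsav8() below (the names follow the patch, but the standalone
harness is illustrative, not QEMU code):

#include <stdbool.h>
#include <stdio.h>

enum fake_fsr { FSR_NONE = 0, M_FAKE_FSR_NSC_EXEC, M_FAKE_FSR_SFAULT };

/* Toy model of the insn-fetch dispatch described above. */
static enum fake_fsr insn_fetch_check(bool cpu_secure,
                                      bool attr_ns, bool attr_nsc)
{
    if (attr_ns != !cpu_secure) {
        /* Attribute/state mismatch: always some kind of exception.
         * S&NSC memory fetched from NS state might be a valid SG
         * instruction, so it gets its own fake FSR value for the
         * exception handler to examine.
         */
        return attr_nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT;
    }
    return FSR_NONE; /* attributes agree: translate in the current bank */
}

int main(void)
{
    /* NS CPU fetching from Secure, NS-Callable memory: SG candidate */
    printf("%d\n", insn_fetch_check(false, false, true));
    return 0;
}

Everything else, data accesses included, either proceeds in the current
bank's MPU or raises M_FAKE_FSR_SFAULT, so no extra mmu_idx values are
needed.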
35 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 4 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
36 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | 5 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
37 | Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org | 6 | Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org |
38 | --- | 7 | --- |
39 | target/arm/internals.h | 15 ++++ | 8 | target/arm/internals.h | 23 +++++++++++++++++++++-- |
40 | target/arm/helper.c | 182 ++++++++++++++++++++++++++++++++++++++++++++++++- | 9 | target/arm/helper.c | 39 ++++++++++++++++++++++++++++----------- |
41 | 2 files changed, 195 insertions(+), 2 deletions(-) | 10 | target/arm/ptw.c | 8 +------- |
11 | 3 files changed, 50 insertions(+), 20 deletions(-) | ||
42 | 12 | ||
43 | diff --git a/target/arm/internals.h b/target/arm/internals.h | 13 | diff --git a/target/arm/internals.h b/target/arm/internals.h |
44 | index XXXXXXX..XXXXXXX 100644 | 14 | index XXXXXXX..XXXXXXX 100644 |
45 | --- a/target/arm/internals.h | 15 | --- a/target/arm/internals.h |
46 | +++ b/target/arm/internals.h | 16 | +++ b/target/arm/internals.h |
47 | @@ -XXX,XX +XXX,XX @@ FIELD(V7M_EXCRET, DCRS, 5, 1) | 17 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMGranuleSize { |
48 | FIELD(V7M_EXCRET, S, 6, 1) | 18 | GranInvalid, |
49 | FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */ | 19 | } ARMGranuleSize; |
50 | 20 | ||
51 | +/* We use a few fake FSR values for internal purposes in M profile. | 21 | +/** |
52 | + * M profile cores don't have A/R format FSRs, but currently our | 22 | + * arm_granule_bits: Return address size of the granule in bits |
53 | + * get_phys_addr() code assumes A/R profile and reports failures via | 23 | + * |
54 | + * an A/R format FSR value. We then translate that into the proper | 24 | + * Return the address size of the granule in bits. This corresponds |
55 | + * M profile exception and FSR status bit in arm_v7m_cpu_do_interrupt(). | 25 | + * to the pseudocode TGxGranuleBits(). |
56 | + * Mostly the FSR values we use for this are those defined for v7PMSA, | ||
57 | + * since we share some of that codepath. A few kinds of fault are | ||
58 | + * only for M profile and have no A/R equivalent, though, so we have | ||
59 | + * to pick a value from the reserved range (which we never otherwise | ||
60 | + * generate) to use for these. | ||
61 | + * These values will never be visible to the guest. | ||
62 | + */ | 26 | + */ |
63 | +#define M_FAKE_FSR_NSC_EXEC 0xf /* NS executing in S&NSC memory */ | 27 | +static inline int arm_granule_bits(ARMGranuleSize gran) |
64 | +#define M_FAKE_FSR_SFAULT 0xe /* SecureFault INVTRAN, INVEP or AUVIOL */ | 28 | +{ |
29 | + switch (gran) { | ||
30 | + case Gran64K: | ||
31 | + return 16; | ||
32 | + case Gran16K: | ||
33 | + return 14; | ||
34 | + case Gran4K: | ||
35 | + return 12; | ||
36 | + default: | ||
37 | + g_assert_not_reached(); | ||
38 | + } | ||
39 | +} | ||
65 | + | 40 | + |
66 | /* | 41 | /* |
67 | * For AArch64, map a given EL to an index in the banked_spsr array. | 42 | * Parameters of a given virtual address, as extracted from the |
68 | * Note that this mapping and the AArch32 mapping defined in bank_number() | 43 | * translation control register (TCR) for a given regime. |
44 | @@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters { | ||
45 | bool tbi : 1; | ||
46 | bool epd : 1; | ||
47 | bool hpd : 1; | ||
48 | - bool using16k : 1; | ||
49 | - bool using64k : 1; | ||
50 | bool tsz_oob : 1; /* tsz has been clamped to legal range */ | ||
51 | bool ds : 1; | ||
52 | + ARMGranuleSize gran : 2; | ||
53 | } ARMVAParameters; | ||
54 | |||
55 | ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
69 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 56 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
70 | index XXXXXXX..XXXXXXX 100644 | 57 | index XXXXXXX..XXXXXXX 100644 |
71 | --- a/target/arm/helper.c | 58 | --- a/target/arm/helper.c |
72 | +++ b/target/arm/helper.c | 59 | +++ b/target/arm/helper.c |
73 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address, | 60 | @@ -XXX,XX +XXX,XX @@ typedef struct { |
74 | target_ulong *page_size_ptr, uint32_t *fsr, | 61 | uint64_t length; |
75 | ARMMMUFaultInfo *fi); | 62 | } TLBIRange; |
76 | 63 | ||
77 | +/* Security attributes for an address, as returned by v8m_security_lookup. */ | 64 | +static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg) |
78 | +typedef struct V8M_SAttributes { | ||
79 | + bool ns; | ||
80 | + bool nsc; | ||
81 | + uint8_t sregion; | ||
82 | + bool srvalid; | ||
83 | + uint8_t iregion; | ||
84 | + bool irvalid; | ||
85 | +} V8M_SAttributes; | ||
86 | + | ||
87 | /* Definitions for the PMCCNTR and PMCR registers */ | ||
88 | #define PMCRD 0x8 | ||
89 | #define PMCRC 0x4 | ||
90 | @@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs) | ||
91 | * raises the fault, in the A profile short-descriptor format. | ||
92 | */ | ||
93 | switch (env->exception.fsr & 0xf) { | ||
94 | + case M_FAKE_FSR_NSC_EXEC: | ||
95 | + /* Exception generated when we try to execute code at an address | ||
96 | + * which is marked as Secure & Non-Secure Callable and the CPU | ||
97 | + * is in the Non-Secure state. The only instruction which can | ||
98 | + * be executed like this is SG (and that only if both halves of | ||
99 | + * the SG instruction have the same security attributes.) | ||
100 | + * Everything else must generate an INVEP SecureFault, so we | ||
101 | + * emulate the SG instruction here. | ||
102 | + * TODO: actually emulate SG. | ||
103 | + */ | ||
104 | + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; | ||
105 | + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); | ||
106 | + qemu_log_mask(CPU_LOG_INT, | ||
107 | + "...really SecureFault with SFSR.INVEP\n"); | ||
108 | + break; | ||
109 | + case M_FAKE_FSR_SFAULT: | ||
110 | + /* Various flavours of SecureFault for attempts to execute or | ||
111 | + * access data in the wrong security state. | ||
112 | + */ | ||
113 | + switch (cs->exception_index) { | ||
114 | + case EXCP_PREFETCH_ABORT: | ||
115 | + if (env->v7m.secure) { | ||
116 | + env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK; | ||
117 | + qemu_log_mask(CPU_LOG_INT, | ||
118 | + "...really SecureFault with SFSR.INVTRAN\n"); | ||
119 | + } else { | ||
120 | + env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK; | ||
121 | + qemu_log_mask(CPU_LOG_INT, | ||
122 | + "...really SecureFault with SFSR.INVEP\n"); | ||
123 | + } | ||
124 | + break; | ||
125 | + case EXCP_DATA_ABORT: | ||
126 | + /* This must be an NS access to S memory */ | ||
127 | + env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK; | ||
128 | + qemu_log_mask(CPU_LOG_INT, | ||
129 | + "...really SecureFault with SFSR.AUVIOL\n"); | ||
130 | + break; | ||
131 | + } | ||
132 | + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); | ||
133 | + break; | ||
134 | case 0x8: /* External Abort */ | ||
135 | switch (cs->exception_index) { | ||
136 | case EXCP_PREFETCH_ABORT: | ||
137 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
138 | return !(*prot & (1 << access_type)); | ||
139 | } | ||
140 | |||
141 | +static bool v8m_is_sau_exempt(CPUARMState *env, | ||
142 | + uint32_t address, MMUAccessType access_type) | ||
143 | +{ | 65 | +{ |
144 | + /* The architecture specifies that certain address ranges are | 66 | + /* |
145 | + * exempt from v8M SAU/IDAU checks. | 67 | + * Note that the TLBI range TG field encoding differs from both |
68 | + * TG0 and TG1 encodings. | ||
146 | + */ | 69 | + */ |
147 | + return | 70 | + switch (tg) { |
148 | + (access_type == MMU_INST_FETCH && m_is_system_region(env, address)) || | 71 | + case 1: |
149 | + (address >= 0xe0000000 && address <= 0xe0002fff) || | 72 | + return Gran4K; |
150 | + (address >= 0xe000e000 && address <= 0xe000efff) || | 73 | + case 2: |
151 | + (address >= 0xe002e000 && address <= 0xe002efff) || | 74 | + return Gran16K; |
152 | + (address >= 0xe0040000 && address <= 0xe0041fff) || | 75 | + case 3: |
153 | + (address >= 0xe00ff000 && address <= 0xe00fffff); | 76 | + return Gran64K; |
154 | +} | 77 | + default: |
155 | + | 78 | + return GranInvalid; |
156 | +static void v8m_security_lookup(CPUARMState *env, uint32_t address, | ||
157 | + MMUAccessType access_type, ARMMMUIdx mmu_idx, | ||
158 | + V8M_SAttributes *sattrs) | ||
159 | +{ | ||
160 | + /* Look up the security attributes for this address. Compare the | ||
161 | + * pseudocode SecurityCheck() function. | ||
162 | + * We assume the caller has zero-initialized *sattrs. | ||
163 | + */ | ||
164 | + ARMCPU *cpu = arm_env_get_cpu(env); | ||
165 | + int r; | ||
166 | + | ||
167 | + /* TODO: implement IDAU */ | ||
168 | + | ||
169 | + if (access_type == MMU_INST_FETCH && extract32(address, 28, 4) == 0xf) { | ||
170 | + /* 0xf0000000..0xffffffff is always S for insn fetches */ | ||
171 | + return; | ||
172 | + } | ||
173 | + | ||
174 | + if (v8m_is_sau_exempt(env, address, access_type)) { | ||
175 | + sattrs->ns = !regime_is_secure(env, mmu_idx); | ||
176 | + return; | ||
177 | + } | ||
178 | + | ||
179 | + switch (env->sau.ctrl & 3) { | ||
180 | + case 0: /* SAU.ENABLE == 0, SAU.ALLNS == 0 */ | ||
181 | + break; | ||
182 | + case 2: /* SAU.ENABLE == 0, SAU.ALLNS == 1 */ | ||
183 | + sattrs->ns = true; | ||
184 | + break; | ||
185 | + default: /* SAU.ENABLE == 1 */ | ||
186 | + for (r = 0; r < cpu->sau_sregion; r++) { | ||
187 | + if (env->sau.rlar[r] & 1) { | ||
188 | + uint32_t base = env->sau.rbar[r] & ~0x1f; | ||
189 | + uint32_t limit = env->sau.rlar[r] | 0x1f; | ||
190 | + | ||
191 | + if (base <= address && limit >= address) { | ||
192 | + if (sattrs->srvalid) { | ||
193 | + /* If we hit in more than one region then we must report | ||
194 | + * as Secure, not NS-Callable, with no valid region | ||
195 | + * number info. | ||
196 | + */ | ||
197 | + sattrs->ns = false; | ||
198 | + sattrs->nsc = false; | ||
199 | + sattrs->sregion = 0; | ||
200 | + sattrs->srvalid = false; | ||
201 | + break; | ||
202 | + } else { | ||
203 | + if (env->sau.rlar[r] & 2) { | ||
204 | + sattrs->nsc = true; | ||
205 | + } else { | ||
206 | + sattrs->ns = true; | ||
207 | + } | ||
208 | + sattrs->srvalid = true; | ||
209 | + sattrs->sregion = r; | ||
210 | + } | ||
211 | + } | ||
212 | + } | ||
213 | + } | ||
214 | + | ||
215 | + /* TODO when we support the IDAU then it may override the result here */ | ||
216 | + break; | ||
217 | + } | 79 | + } |
218 | +} | 80 | +} |
219 | + | 81 | + |
220 | static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | 82 | static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx, |
221 | MMUAccessType access_type, ARMMMUIdx mmu_idx, | 83 | uint64_t value) |
222 | - hwaddr *phys_ptr, int *prot, uint32_t *fsr) | ||
223 | + hwaddr *phys_ptr, MemTxAttrs *txattrs, | ||
224 | + int *prot, uint32_t *fsr) | ||
225 | { | 84 | { |
226 | ARMCPU *cpu = arm_env_get_cpu(env); | 85 | @@ -XXX,XX +XXX,XX @@ static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx, |
227 | bool is_user = regime_is_user(env, mmu_idx); | 86 | uint64_t select = sextract64(value, 36, 1); |
228 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | 87 | ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true); |
229 | int n; | 88 | TLBIRange ret = { }; |
230 | int matchregion = -1; | 89 | + ARMGranuleSize gran; |
231 | bool hit = false; | 90 | |
232 | + V8M_SAttributes sattrs = {}; | 91 | page_size_granule = extract64(value, 46, 2); |
233 | 92 | + gran = tlbi_range_tg_to_gran_size(page_size_granule); | |
234 | *phys_ptr = address; | 93 | |
235 | *prot = 0; | 94 | /* The granule encoded in value must match the granule in use. */ |
236 | 95 | - if (page_size_granule != (param.using64k ? 3 : param.using16k ? 2 : 1)) { | |
237 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { | 96 | + if (gran != param.gran) { |
238 | + v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs); | 97 | qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n", |
239 | + if (access_type == MMU_INST_FETCH) { | 98 | page_size_granule); |
240 | + /* Instruction fetches always use the MMU bank and the | 99 | return ret; |
241 | + * transaction attribute determined by the fetch address, | 100 | } |
242 | + * regardless of CPU state. This is painful for QEMU | 101 | |
243 | + * to handle, because it would mean we need to encode | 102 | - page_shift = (page_size_granule - 1) * 2 + 12; |
244 | + * into the mmu_idx not just the (user, negpri) information | 103 | + page_shift = arm_granule_bits(gran); |
245 | + * for the current security state but also that for the | 104 | num = extract64(value, 39, 5); |
246 | + * other security state, which would balloon the number | 105 | scale = extract64(value, 44, 2); |
247 | + * of mmu_idx values needed alarmingly. | 106 | exponent = (5 * scale) + 1; |
248 | + * Fortunately we can avoid this because it's not actually | 107 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, |
249 | + * possible to arbitrarily execute code from memory with | 108 | ARMMMUIdx mmu_idx, bool data) |
250 | + * the wrong security attribute: it will always generate | 109 | { |
251 | + * an exception of some kind or another, apart from the | 110 | uint64_t tcr = regime_tcr(env, mmu_idx); |
252 | + * special case of an NS CPU executing an SG instruction | 111 | - bool epd, hpd, using16k, using64k, tsz_oob, ds; |
253 | + * in S&NSC memory. So we always just fail the translation | 112 | + bool epd, hpd, tsz_oob, ds; |
254 | + * here and sort things out in the exception handler | 113 | int select, tsz, tbi, max_tsz, min_tsz, ps, sh; |
255 | + * (including possibly emulating an SG instruction). | 114 | ARMGranuleSize gran; |
256 | + */ | 115 | ARMCPU *cpu = env_archcpu(env); |
257 | + if (sattrs.ns != !secure) { | 116 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, |
258 | + *fsr = sattrs.nsc ? M_FAKE_FSR_NSC_EXEC : M_FAKE_FSR_SFAULT; | 117 | } |
259 | + return true; | 118 | |
260 | + } | 119 | gran = sanitize_gran_size(cpu, gran, stage2); |
261 | + } else { | 120 | - using64k = gran == Gran64K; |
262 | + /* For data accesses we always use the MMU bank indicated | 121 | - using16k = gran == Gran16K; |
263 | + * by the current CPU state, but the security attributes | 122 | |
264 | + * might downgrade a secure access to nonsecure. | 123 | if (cpu_isar_feature(aa64_st, cpu)) { |
265 | + */ | 124 | - max_tsz = 48 - using64k; |
266 | + if (sattrs.ns) { | 125 | + max_tsz = 48 - (gran == Gran64K); |
267 | + txattrs->secure = false; | 126 | } else { |
268 | + } else if (!secure) { | 127 | max_tsz = 39; |
269 | + /* NS access to S memory must fault. | 128 | } |
270 | + * Architecturally we should first check whether the | 129 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, |
271 | + * MPU information for this address indicates that we | 130 | * adjust the effective value of DS, as documented. |
272 | + * are doing an unaligned access to Device memory, which | 131 | */ |
273 | + * should generate a UsageFault instead. QEMU does not | 132 | min_tsz = 16; |
274 | + * currently check for that kind of unaligned access though. | 133 | - if (using64k) { |
275 | + * If we added it we would need to do so as a special case | 134 | + if (gran == Gran64K) { |
276 | + * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt(). | 135 | if (cpu_isar_feature(aa64_lva, cpu)) { |
277 | + */ | 136 | min_tsz = 12; |
278 | + *fsr = M_FAKE_FSR_SFAULT; | 137 | } |
279 | + return true; | 138 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, |
280 | + } | 139 | switch (mmu_idx) { |
281 | + } | 140 | case ARMMMUIdx_Stage2: |
282 | + } | 141 | case ARMMMUIdx_Stage2_S: |
283 | + | 142 | - if (using16k) { |
284 | /* Unlike the ARM ARM pseudocode, we don't need to check whether this | 143 | + if (gran == Gran16K) { |
285 | * was an exception vector read from the vector table (which is always | 144 | ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu); |
286 | * done using the default system address map), because those accesses | 145 | } else { |
287 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address, | 146 | ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu); |
288 | if (arm_feature(env, ARM_FEATURE_V8)) { | 147 | } |
289 | /* PMSAv8 */ | 148 | break; |
290 | ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx, | 149 | default: |
291 | - phys_ptr, prot, fsr); | 150 | - if (using16k) { |
292 | + phys_ptr, attrs, prot, fsr); | 151 | + if (gran == Gran16K) { |
293 | } else if (arm_feature(env, ARM_FEATURE_V7)) { | 152 | ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu); |
294 | /* PMSAv7 */ | 153 | } else { |
295 | ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx, | 154 | ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu); |
155 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
156 | .tbi = tbi, | ||
157 | .epd = epd, | ||
158 | .hpd = hpd, | ||
159 | - .using16k = using16k, | ||
160 | - .using64k = using64k, | ||
161 | .tsz_oob = tsz_oob, | ||
162 | .ds = ds, | ||
163 | + .gran = gran, | ||
164 | }; | ||
165 | } | ||
166 | |||
167 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
168 | index XXXXXXX..XXXXXXX 100644 | ||
169 | --- a/target/arm/ptw.c | ||
170 | +++ b/target/arm/ptw.c | ||
171 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, | ||
172 | } | ||
173 | } | ||
174 | |||
175 | - if (param.using64k) { | ||
176 | - stride = 13; | ||
177 | - } else if (param.using16k) { | ||
178 | - stride = 11; | ||
179 | - } else { | ||
180 | - stride = 9; | ||
181 | - } | ||
182 | + stride = arm_granule_bits(param.gran) - 3; | ||
183 | |||
184 | /* | ||
185 | * Note that QEMU ignores shareability and cacheability attributes, | ||
296 | -- | 186 | -- |
297 | 2.7.4 | 187 | 2.25.1 |
298 | |||
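The subtraction of 3 in the new ptw.c stride calculation above comes from the
descriptor size: translation table descriptors are 8 bytes, so a granule of
2^n bytes holds 2^(n-3) descriptors and each level resolves n-3 bits of the
address. A quick standalone check of the arithmetic, re-declaring
arm_granule_bits() with the values the patch adds to internals.h:

#include <assert.h>

typedef enum { Gran4K, Gran16K, Gran64K, GranInvalid } ARMGranuleSize;

/* Same mapping as the internals.h helper in the hunk above. */
static int arm_granule_bits(ARMGranuleSize gran)
{
    switch (gran) {
    case Gran4K:
        return 12;
    case Gran16K:
        return 14;
    case Gran64K:
        return 16;
    default:
        assert(0 && "invalid granule");
        return -1;
    }
}

int main(void)
{
    /* 8-byte descriptors => stride = granule_bits - 3 */
    assert(arm_granule_bits(Gran4K) - 3 == 9);   /* replaces literal 9 */
    assert(arm_granule_bits(Gran16K) - 3 == 11); /* replaces literal 11 */
    assert(arm_granule_bits(Gran64K) - 3 == 13); /* replaces literal 13 */
    return 0;
}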
1 | When we added support for the new SHCSR bits in v8M in commit | 1 | FEAT_GTG is a change to the ID register ID_AA64MMFR0_EL1 so that it |
---|---|---|---|
2 | 437d59c17e9 the code to support writing to the new HARDFAULTPENDED | 2 | can report a different set of supported granule (page) sizes for |
3 | bit was accidentally only added for non-secure writes; the | 3 | stage 1 and stage 2 translation tables. As of commit c20281b2a5048 |
4 | secure banked version of the bit should also be writable. | 4 | we already report the granule sizes that way for '-cpu max', and now |
5 | we also correctly make attempts to use unimplemented granule sizes | ||
6 | fail, so we can report support for the feature in the | ||
7 | documentation. | ||
5 | 8 | ||
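For reference, the stage-2 granule fields that FEAT_GTG is about can be
decoded by hand as below. This is an illustrative helper based on the
ID_AA64MMFR0_EL1 field layout in the Arm ARM, not QEMU's cpu_isar_feature()
machinery; the TGran*_2 fields are the stage-2 overrides:

#include <stdbool.h>
#include <stdint.h>

/* Does this ID register value advertise 4K granules at stage 2? */
static bool stage2_supports_4k(uint64_t id_aa64mmfr0)
{
    unsigned tgran4_2 = (id_aa64mmfr0 >> 40) & 0xf; /* TGran4_2 */

    if (tgran4_2 == 0) {
        /* 0b0000: stage 2 support is whatever stage 1 reports */
        unsigned tgran4 = (id_aa64mmfr0 >> 28) & 0xf; /* TGran4 */
        return tgran4 != 0xf; /* 0b1111 means "not implemented" */
    }
    return tgran4_2 >= 2; /* 0b0010 supported, 0b0011 with 52-bit PAs */
}

Without FEAT_GTG the *_2 fields read as zero and stage 2 simply inherits the
stage-1 granule set, which is why reporting the feature only became accurate
once attempts to use unimplemented granule sizes actually fail.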
9 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
6 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
7 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | 11 | Message-id: 20221003162315.2833797-4-peter.maydell@linaro.org |
8 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
9 | Message-id: 1506092407-26985-21-git-send-email-peter.maydell@linaro.org | ||
10 | --- | 12 | --- |
11 | hw/intc/armv7m_nvic.c | 1 + | 13 | docs/system/arm/emulation.rst | 1 + |
12 | 1 file changed, 1 insertion(+) | 14 | 1 file changed, 1 insertion(+) |
13 | 15 | ||
14 | diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c | 16 | diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst |
15 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/hw/intc/armv7m_nvic.c | 18 | --- a/docs/system/arm/emulation.rst |
17 | +++ b/hw/intc/armv7m_nvic.c | 19 | +++ b/docs/system/arm/emulation.rst |
18 | @@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value, | 20 | @@ -XXX,XX +XXX,XX @@ the following architecture extensions: |
19 | s->sec_vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0; | 21 | - FEAT_FRINTTS (Floating-point to integer instructions) |
20 | s->sec_vectors[ARMV7M_EXCP_USAGE].enabled = | 22 | - FEAT_FlagM (Flag manipulation instructions v2) |
21 | (value & (1 << 18)) != 0; | 23 | - FEAT_FlagM2 (Enhancements to flag manipulation instructions) |
22 | + s->sec_vectors[ARMV7M_EXCP_HARD].pending = (value & (1 << 21)) != 0; | 24 | +- FEAT_GTG (Guest translation granule size) |
23 | /* SecureFault not banked, but RAZ/WI to NS */ | 25 | - FEAT_HCX (Support for the HCRX_EL2 register) |
24 | s->vectors[ARMV7M_EXCP_SECURE].active = (value & (1 << 4)) != 0; | 26 | - FEAT_HPDS (Hierarchical permission disables) |
25 | s->vectors[ARMV7M_EXCP_SECURE].enabled = (value & (1 << 19)) != 0; | 27 | - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions) |
26 | -- | 28 | -- |
27 | 2.7.4 | 29 | 2.25.1 |
28 | |||