Hi; this is the latest target-arm queue; most of this is a refactoring
patchset from RTH for the arm page-table-walk emulation.

-- PMM

The following changes since commit f1d33f55c47dfdaf8daacd618588ad3ae4c452d1:

  Merge tag 'pull-testing-gdbstub-plugins-gitdm-061022-3' of https://github.com/stsquad/qemu into staging (2022-10-06 07:11:56 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221010

for you to fetch changes up to 915f62844cf62e428c7c178149b5ff1cbe129b07:

  docs/system/arm/emulation.rst: Report FEAT_GTG support (2022-10-10 14:52:25 +0100)

----------------------------------------------------------------
target-arm queue:
 * Retry KVM_CREATE_VM call if it fails EINTR
 * allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
 * docs/nuvoton: Update URL for images
 * refactoring of page table walk code
 * hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3
 * Don't allow guest to use unimplemented granule sizes
 * Report FEAT_GTG support

----------------------------------------------------------------
Jerome Forissier (2):
      target/arm: allow setting SCR_EL3.EnTP2 when FEAT_SME is implemented
      hw/arm/boot: set CPTR_EL3.ESM and SCR_EL3.EnTP2 when booting Linux with EL3

Joel Stanley (1):
      docs/nuvoton: Update URL for images

Peter Maydell (4):
      target/arm/kvm: Retry KVM_CREATE_VM call if it fails EINTR
      target/arm: Don't allow guest to use unimplemented granule sizes
      target/arm: Use ARMGranuleSize in ARMVAParameters
      docs/system/arm/emulation.rst: Report FEAT_GTG support

Richard Henderson (21):
      target/arm: Split s2walk_secure from ipa_secure in get_phys_addr
      target/arm: Make the final stage1+2 write to secure be unconditional
      target/arm: Add is_secure parameter to get_phys_addr_lpae
      target/arm: Fix S2 disabled check in S1_ptw_translate
      target/arm: Add is_secure parameter to regime_translation_disabled
      target/arm: Split out get_phys_addr_with_secure
      target/arm: Add is_secure parameter to v7m_read_half_insn
      target/arm: Add TBFLAG_M32.SECURE
      target/arm: Merge regime_is_secure into get_phys_addr
      target/arm: Add is_secure parameter to do_ats_write
      target/arm: Fold secure and non-secure a-profile mmu indexes
      target/arm: Reorg regime_translation_disabled
      target/arm: Drop secure check for HCR.TGE vs SCTLR_EL1.M
      target/arm: Introduce arm_hcr_el2_eff_secstate
      target/arm: Hoist read of *is_secure in S1_ptw_translate
      target/arm: Remove env argument from combined_attrs_fwb
      target/arm: Pass HCR to attribute subroutines.
      target/arm: Fix ATS12NSO* from S PL1
      target/arm: Split out get_phys_addr_disabled
      target/arm: Fix cacheattr in get_phys_addr_disabled
      target/arm: Use tlb_set_page_full

 docs/system/arm/emulation.rst |   1 +
 docs/system/arm/nuvoton.rst   |   4 +-
 target/arm/cpu-param.h        |   2 +-
 target/arm/cpu.h              | 181 ++++++++------
 target/arm/internals.h        | 150 ++++++-----
 hw/arm/boot.c                 |   4 +
 target/arm/helper.c           | 332 ++++++++++++++----------
 target/arm/kvm.c              |   4 +-
 target/arm/m_helper.c         |  29 ++-
 target/arm/ptw.c              | 570 ++++++++++++++++++++++--------------------
 target/arm/tlb_helper.c       |   9 +-
 target/arm/translate-a64.c    |   8 -
 target/arm/translate.c        |   9 +-
 13 files changed, 717 insertions(+), 586 deletions(-)

Occasionally the KVM_CREATE_VM ioctl can return EINTR, even though
there is no pending signal to be taken. In commit 94ccff13382055
we added a retry-on-EINTR loop to the KVM_CREATE_VM call in the
generic KVM code. Adopt the same approach for the use of the
ioctl in the Arm-specific KVM code (where we use it to create a
scratch VM for probing for various things).

For more information, see the mailing list thread:
https://lore.kernel.org/qemu-devel/8735e0s1zw.wl-maz@kernel.org/

Reported-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Message-id: 20220930113824.1933293-1-peter.maydell@linaro.org
---
 target/arm/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
     if (max_vm_pa_size < 0) {
         max_vm_pa_size = 0;
     }
-    vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    do {
+        vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
+    } while (vmfd == -1 && errno == EINTR);
     if (vmfd < 0) {
         goto err;
     }
--
2.25.1

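The retry idiom above generalizes to any system call that can fail with
EINTR without having had any effect. As a minimal standalone sketch of the
pattern (the probe_fd() helper and its use of open() are illustrative
assumptions, not QEMU code):

    #include <errno.h>
    #include <fcntl.h>

    /*
     * Hypothetical helper: keep reissuing a syscall that was interrupted
     * by a signal before it could do anything.  A return of -1 with
     * errno == EINTR means "nothing happened, try again"; any other
     * failure is reported to the caller as usual.
     */
    static int probe_fd(const char *path)
    {
        int fd;

        do {
            fd = open(path, O_RDONLY);
        } while (fd == -1 && errno == EINTR);

        return fd; /* valid fd, or -1 with errno set to the real error */
    }
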
From: Jerome Forissier <jerome.forissier@linaro.org>

Updates write_scr() to allow setting SCR_EL3.EnTP2 when FEAT_SME is
implemented. SCR_EL3 being a 64-bit register, valid_mask is changed
to uint64_t and the SCR_* constants in target/arm/cpu.h are extended
to 64-bit so that masking and bitwise not (~) behave as expected.

This enables booting Linux with Trusted Firmware-A at EL3 with
"-M virt,secure=on -cpu max".

Cc: qemu-stable@nongnu.org
Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max")
Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221004072354.27037-1-jerome.forissier@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 54 +++++++++++++++++++++++++++---------------------------
 target/arm/helper.c |  5 ++++-
 2 files changed, 31 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)

 #define HPFAR_NS (1ULL << 63)

-#define SCR_NS (1U << 0)
-#define SCR_IRQ (1U << 1)
-#define SCR_FIQ (1U << 2)
-#define SCR_EA (1U << 3)
-#define SCR_FW (1U << 4)
-#define SCR_AW (1U << 5)
-#define SCR_NET (1U << 6)
-#define SCR_SMD (1U << 7)
-#define SCR_HCE (1U << 8)
-#define SCR_SIF (1U << 9)
-#define SCR_RW (1U << 10)
-#define SCR_ST (1U << 11)
-#define SCR_TWI (1U << 12)
-#define SCR_TWE (1U << 13)
-#define SCR_TLOR (1U << 14)
-#define SCR_TERR (1U << 15)
-#define SCR_APK (1U << 16)
-#define SCR_API (1U << 17)
-#define SCR_EEL2 (1U << 18)
-#define SCR_EASE (1U << 19)
-#define SCR_NMEA (1U << 20)
-#define SCR_FIEN (1U << 21)
-#define SCR_ENSCXT (1U << 25)
-#define SCR_ATA (1U << 26)
-#define SCR_FGTEN (1U << 27)
-#define SCR_ECVEN (1U << 28)
-#define SCR_TWEDEN (1U << 29)
+#define SCR_NS (1ULL << 0)
+#define SCR_IRQ (1ULL << 1)
+#define SCR_FIQ (1ULL << 2)
+#define SCR_EA (1ULL << 3)
+#define SCR_FW (1ULL << 4)
+#define SCR_AW (1ULL << 5)
+#define SCR_NET (1ULL << 6)
+#define SCR_SMD (1ULL << 7)
+#define SCR_HCE (1ULL << 8)
+#define SCR_SIF (1ULL << 9)
+#define SCR_RW (1ULL << 10)
+#define SCR_ST (1ULL << 11)
+#define SCR_TWI (1ULL << 12)
+#define SCR_TWE (1ULL << 13)
+#define SCR_TLOR (1ULL << 14)
+#define SCR_TERR (1ULL << 15)
+#define SCR_APK (1ULL << 16)
+#define SCR_API (1ULL << 17)
+#define SCR_EEL2 (1ULL << 18)
+#define SCR_EASE (1ULL << 19)
+#define SCR_NMEA (1ULL << 20)
+#define SCR_FIEN (1ULL << 21)
+#define SCR_ENSCXT (1ULL << 25)
+#define SCR_ATA (1ULL << 26)
+#define SCR_FGTEN (1ULL << 27)
+#define SCR_ECVEN (1ULL << 28)
+#define SCR_TWEDEN (1ULL << 29)
 #define SCR_TWEDEL MAKE_64BIT_MASK(30, 4)
 #define SCR_TME (1ULL << 34)
 #define SCR_AMVOFFEN (1ULL << 35)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void vbar_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     /* Begin with base v8.0 state. */
-    uint32_t valid_mask = 0x3fff;
+    uint64_t valid_mask = 0x3fff;
     ARMCPU *cpu = env_archcpu(env);

     /*
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         if (cpu_isar_feature(aa64_doublefault, cpu)) {
             valid_mask |= SCR_EASE | SCR_NMEA;
         }
+        if (cpu_isar_feature(aa64_sme, cpu)) {
+            valid_mask |= SCR_ENTP2;
+        }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
         if (cpu_isar_feature(aa32_ras, cpu)) {
--
2.25.1

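To see concretely why "masking and bitwise not (~) behave as expected"
requires the 64-bit constants: with a 32-bit constant, ~FLAG is computed in
32 bits and then zero-extended, so using it to clear one flag in a 64-bit
mask silently wipes bits 32-63 as well. A self-contained sketch (FLAG_A_32
and FLAG_A_64 are made-up values, not the real SCR_* bits):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define FLAG_A_32 (1U << 10)   /* 32-bit constant, as before the patch */
    #define FLAG_A_64 (1ULL << 10) /* 64-bit constant, as after the patch */

    int main(void)
    {
        uint64_t mask = (1ULL << 48) | FLAG_A_64; /* a high bit plus flag A */

        /*
         * ~FLAG_A_32 is 0xfffffbff, zero-extended to 64 bits before the
         * AND, so bit 48 is cleared along with bit 10: result is 0.
         */
        uint64_t broken = mask & ~FLAG_A_32;

        /* ~FLAG_A_64 keeps bits 32..63 set; only bit 10 is cleared. */
        uint64_t fixed = mask & ~FLAG_A_64;

        printf("broken: 0x%016" PRIx64 "\n", broken); /* 0x0000000000000000 */
        printf("fixed:  0x%016" PRIx64 "\n", fixed);  /* 0x0001000000000000 */
        return 0;
    }
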
From: Joel Stanley <joel@jms.id.au>

openpower.xyz was retired some time ago. The OpenBMC Jenkins is where
images can be found these days.

Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20221004050042.22681-1-joel@jms.id.au
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/nuvoton.rst
+++ b/docs/system/arm/nuvoton.rst
@@ -XXX,XX +XXX,XX @@ Boot options

 The Nuvoton machines can boot from an OpenBMC firmware image, or directly into
 a kernel using the ``-kernel`` option. OpenBMC images for ``quanta-gsj`` and
-possibly others can be downloaded from the OpenPOWER jenkins :
+possibly others can be downloaded from the OpenBMC jenkins :

-  https://openpower.xyz/
+  https://jenkins.openbmc.org/

 The firmware image should be attached as an MTD drive. Example :

--
2.25.1

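The "Example" the trailing context line refers to lies outside this hunk.
Purely as an illustration of the sort of invocation that section describes
(the machine type and image filename here are assumptions; see
docs/system/arm/nuvoton.rst in the tree for the canonical command):

    $ qemu-system-arm -machine quanta-gsj -nographic \
          -drive file=image-bmc,format=raw,if=mtd
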
From: Richard Henderson <richard.henderson@linaro.org>

The starting security state comes with the translation regime,
not the current state of arm_is_secure_below_el3().

Create a new local variable, s2walk_secure, which does not need
to be written back to result->attrs.secure -- we compute that
value later, after the S2 walk is complete.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         hwaddr ipa;
         int s1_prot;
         int ret;
-        bool ipa_secure;
+        bool ipa_secure, s2walk_secure;
         ARMCacheAttrs cacheattrs1;
         ARMMMUIdx s2_mmu_idx;
         bool is_el0;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

         ipa = result->phys;
         ipa_secure = result->attrs.secure;
-        if (arm_is_secure_below_el3(env)) {
-            if (ipa_secure) {
-                result->attrs.secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
-            } else {
-                result->attrs.secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
-            }
+        if (is_secure) {
+            /* Select TCR based on the NS bit from the S1 walk. */
+            s2walk_secure = !(ipa_secure
+                              ? env->cp15.vstcr_el2 & VSTCR_SW
+                              : env->cp15.vtcr_el2 & VTCR_NSW);
         } else {
             assert(!ipa_secure);
+            s2walk_secure = false;
         }

-        s2_mmu_idx = (result->attrs.secure
+        s2_mmu_idx = (s2walk_secure
                       ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
         is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                                                  result->cacheattrs);

         /* Check if IPA translates to secure or non-secure PA space. */
-        if (arm_is_secure_below_el3(env)) {
+        if (is_secure) {
             if (ipa_secure) {
                 result->attrs.secure =
                     !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
--
2.25.1

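Since the new expression packs the whole decision into one negated
conditional, it may help to restate it outside the walker. A hedged sketch
with plain booleans standing in for the register-field tests (the helper
name and flattened parameters are illustrative, not QEMU code):

    #include <stdbool.h>

    /*
     * Mirror of the selection the patch introduces: given the security
     * state of the stage-1 output (ipa_secure), pick the security state
     * the stage-2 walk itself runs in.
     */
    static bool select_s2walk_secure(bool is_secure, bool ipa_secure,
                                     bool vstcr_sw, bool vtcr_nsw)
    {
        if (!is_secure) {
            /* A non-secure stage 1 can only feed a non-secure stage 2. */
            return false;
        }
        /*
         * A secure IPA is walked secure unless VSTCR_EL2.SW is set; a
         * non-secure IPA is walked secure unless VTCR_EL2.NSW is set.
         */
        return !(ipa_secure ? vstcr_sw : vtcr_nsw);
    }
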
From: Richard Henderson <richard.henderson@linaro.org>

While the stage2 call to get_phys_addr_lpae should never set
attrs.secure when given a non-secure input, it's just as easy
to make the final update to attrs.secure be unconditional and
false in the case of non-secure input.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221007152159.1414065-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         result->cacheattrs = combine_cacheattrs(env, cacheattrs1,
                                                 result->cacheattrs);

-        /* Check if IPA translates to secure or non-secure PA space. */
-        if (is_secure) {
-            if (ipa_secure) {
-                result->attrs.secure =
-                    !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW));
-            } else {
-                result->attrs.secure =
-                    !((env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))
-                    || (env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)));
-            }
-        }
+        /*
+         * Check if IPA translates to secure or non-secure PA space.
+         * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
+         */
+        result->attrs.secure =
+            (is_secure
+             && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+             && (ipa_secure
+                 || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+
         return 0;
     } else {
         /*
--
2.25.1

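A quick way to convince yourself that the flattened expression matches the
old nested form is to enumerate every input combination. A standalone
sketch, where vs and v stand in for the VSTCR_(SA|SW) and VTCR_(NSA|NSW)
tests (the function names are illustrative, not QEMU code):

    #include <assert.h>
    #include <stdbool.h>

    /* Old shape: conditional writes, defaulting to non-secure. */
    static bool old_form(bool is_secure, bool ipa_secure, bool vs, bool v)
    {
        bool secure = false;
        if (is_secure) {
            secure = ipa_secure ? !vs : !(v || vs);
        }
        return secure;
    }

    /* New shape: one unconditional assignment, as in the patch. */
    static bool new_form(bool is_secure, bool ipa_secure, bool vs, bool v)
    {
        return is_secure && !vs && (ipa_secure || !v);
    }

    int main(void)
    {
        /* Exhaustively compare the two forms over all 16 inputs. */
        for (int i = 0; i < 16; i++) {
            bool a = i & 1, b = i & 2, c = i & 4, d = i & 8;
            assert(old_form(a, b, c, d) == new_form(a, b, c, d));
        }
        return 0;
    }
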
From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from get_phys_addr_lpae,
using the new parameter instead.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@

 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
     __attribute__((nonnull));

 /* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         GetPhysAddrResult s2 = {};
         int ret;

-        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
-                                 &s2, fi);
+        ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
+                                 *is_secure, false, &s2, fi);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
  */
 static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0, GetPhysAddrResult *result,
-                               ARMMMUFaultInfo *fi)
+                               bool is_secure, bool s1_is_el0,
+                               GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
     /* Read an LPAE long-descriptor translation table. */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
      * remain non-secure. We implement this by just ORing in the NSTable/NS
      * bits at each step.
      */
-    tableattrs = regime_is_secure(env, mmu_idx) ? 0 : (1 << 4);
+    tableattrs = is_secure ? 0 : (1 << 4);
     for (;;) {
         uint64_t descriptor;
         bool nstable;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             memset(result, 0, sizeof(*result));

             ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
-                                     is_el0, result, fi);
+                                     s2walk_secure, is_el0, result, fi);
             fi->s2addr = ipa;

             /* Combine the S1 and S2 perms. */
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }

     if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
-                                  result, fi);
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx,
+                                  is_secure, false, result, fi);
     } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
         return get_phys_addr_v6(env, address, access_type, mmu_idx,
                                 is_secure, result, fi);
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Pass the correct stage2 mmu_idx to regime_translation_disabled,
which we computed afterward.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20221001162318.153420-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, bool *is_secure,
                                ARMMMUFaultInfo *fi)
 {
+    ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+
     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
-        ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
-                                          : ARMMMUIdx_Stage2;
+        !regime_translation_disabled(env, s2_mmu_idx)) {
         GetPhysAddrResult s2 = {};
         int ret;

--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Remove the use of regime_is_secure from regime_translation_disabled,
using the new parameter instead.

This fixes a bug in S1_ptw_translate and get_phys_addr where we had
passed ARMMMUIdx_Stage2 and not ARMMMUIdx_Stage2_S to determine if
Stage2 is disabled, affecting FEAT_SEL2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221001162318.153420-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)
 }

 /* Return true if the specified stage of address translation is disabled */
-static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
+static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
+                                        bool is_secure)
 {
     uint64_t hcr_el2;

     if (arm_feature(env, ARM_FEATURE_M)) {
-        switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
+        switch (env->v7m.mpu_ctrl[is_secure] &
                 (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
         case R_V7M_MPU_CTRL_ENABLE_MASK:
             /* Enabled, but not for HardFault and NMI */
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)

     if (hcr_el2 & HCR_TGE) {
         /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
-        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+        if (!is_secure && regime_el(env, mmu_idx) == 1) {
             return true;
         }
     }
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
     ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;

     if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
-        !regime_translation_disabled(env, s2_mmu_idx)) {
+        !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) {
         GetPhysAddrResult s2 = {};
         int ret;

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
     uint32_t base;
     bool is_user = regime_is_user(env, mmu_idx);

-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         /* MPU disabled. */
         result->phys = address;
         result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
     result->page_size = TARGET_PAGE_SIZE;
     result->prot = 0;

-    if (regime_translation_disabled(env, mmu_idx) ||
+    if (regime_translation_disabled(env, mmu_idx, secure) ||
         m_is_ppb_region(env, address)) {
         /*
          * MPU disabled or M profile PPB access: use default memory map.
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
      * are done in arm_v7m_load_vector(), which always does a direct
      * read using address_space_ldl(), rather than going via this function.
      */
-    if (regime_translation_disabled(env, mmu_idx)) { /* MPU disabled */
+    if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
         hit = true;
     } else if (m_is_ppb_region(env, address)) {
         hit = true;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
                                        result, fi);

             /* If S1 fails or S2 is disabled, return early. */
-            if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
+            if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
+                                                   is_secure)) {
                 return ret;
             }

@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,

     /* Definitely a real MMU, not an MPU */

-    if (regime_translation_disabled(env, mmu_idx)) {
+    if (regime_translation_disabled(env, mmu_idx, is_secure)) {
         uint64_t hcr;
         uint8_t memattr;

--
2.25.1

1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | This seems a bit more readable than using offsetof CPU_DoubleU. | 3 | Retain the existing get_phys_addr interface using the security |
4 | state derived from mmu_idx. Move the kerneldoc comments to the | ||
5 | header file where they belong. | ||
4 | 6 | ||
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 8 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
6 | Message-id: 20201030022618.785675-5-richard.henderson@linaro.org | 9 | Message-id: 20221001162318.153420-6-richard.henderson@linaro.org |
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | --- | 11 | --- |
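
The shape of this change, for readers skimming the diff below: the new
get_phys_addr_with_secure() takes the security state as an explicit
parameter, and the old get_phys_addr() becomes a thin wrapper that derives
it from the mmu_idx. A minimal standalone C sketch of that wrapper pattern
(stand-in types and a made-up derivation rule, not the real QEMU
definitions):

    #include <stdbool.h>

    typedef int MMUIdx;                  /* stand-in for ARMMMUIdx */

    /* stand-in for regime_is_secure(); the >= 16 rule is invented */
    static bool regime_secure(MMUIdx idx)
    {
        return idx >= 16;
    }

    /* extended interface: the caller states the security state explicitly */
    static bool translate_with_secure(MMUIdx idx, bool is_secure)
    {
        /* a real implementation would do the translation table walk here */
        return is_secure;
    }

    /* legacy interface: unchanged callers get the derived default */
    static bool translate(MMUIdx idx)
    {
        return translate_with_secure(idx, regime_secure(idx));
    }

    int main(void)
    {
        return translate(17) ? 0 : 1;    /* 17 is "secure" under the rule above */
    }

Keeping the old entry point as a one-line forwarder means none of the
existing callers need to change in this patch.
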
10 | target/arm/translate.c | 13 ++++--------- | 12 | target/arm/internals.h | 40 ++++++++++++++++++++++++++++++++++++++ |
11 | 1 file changed, 4 insertions(+), 9 deletions(-) | 13 | target/arm/ptw.c | 44 ++++++++++++++---------------------------- |
14 | 2 files changed, 55 insertions(+), 29 deletions(-) | ||
12 | 15 | ||
13 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 16 | diff --git a/target/arm/internals.h b/target/arm/internals.h |
14 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/target/arm/translate.c | 18 | --- a/target/arm/internals.h |
16 | +++ b/target/arm/translate.c | 19 | +++ b/target/arm/internals.h |
17 | @@ -XXX,XX +XXX,XX @@ static long neon_element_offset(int reg, int element, MemOp size) | 20 | @@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult { |
18 | return neon_full_reg_offset(reg) + ofs; | 21 | ARMCacheAttrs cacheattrs; |
22 | } GetPhysAddrResult; | ||
23 | |||
24 | +/** | ||
25 | + * get_phys_addr_with_secure: get the physical address for a virtual address | ||
26 | + * @env: CPUARMState | ||
27 | + * @address: virtual address to get physical address for | ||
28 | + * @access_type: 0 for read, 1 for write, 2 for execute | ||
29 | + * @mmu_idx: MMU index indicating required translation regime | ||
30 | + * @is_secure: security state for the access | ||
31 | + * @result: set on translation success. | ||
32 | + * @fi: set to fault info if the translation fails | ||
33 | + * | ||
34 | + * Find the physical address corresponding to the given virtual address, | ||
35 | + * by doing a translation table walk on MMU based systems or using the | ||
36 | + * MPU state on MPU based systems. | ||
37 | + * | ||
38 | + * Returns false if the translation was successful. Otherwise, phys_ptr, attrs, | ||
39 | + * prot and page_size may not be filled in, and the populated fsr value provides | ||
40 | + * information on why the translation aborted, in the format of a | ||
41 | + * DFSR/IFSR fault register, with the following caveats: | ||
42 | + * * we honour the short vs long DFSR format differences. | ||
43 | + * * the WnR bit is never set (the caller must do this). | ||
44 | + * * for PSMAv5 based systems we don't bother to return a full FSR format | ||
45 | + * value. | ||
46 | + */ | ||
47 | +bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
48 | + MMUAccessType access_type, | ||
49 | + ARMMMUIdx mmu_idx, bool is_secure, | ||
50 | + GetPhysAddrResult *result, ARMMMUFaultInfo *fi) | ||
51 | + __attribute__((nonnull)); | ||
52 | + | ||
53 | +/** | ||
54 | + * get_phys_addr: get the physical address for a virtual address | ||
55 | + * @env: CPUARMState | ||
56 | + * @address: virtual address to get physical address for | ||
57 | + * @access_type: 0 for read, 1 for write, 2 for execute | ||
58 | + * @mmu_idx: MMU index indicating required translation regime | ||
59 | + * @result: set on translation success. | ||
60 | + * @fi: set to fault info if the translation fails | ||
61 | + * | ||
62 | + * Similarly, but use the security regime of @mmu_idx. | ||
63 | + */ | ||
64 | bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
65 | MMUAccessType access_type, ARMMMUIdx mmu_idx, | ||
66 | GetPhysAddrResult *result, ARMMMUFaultInfo *fi) | ||
67 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
68 | index XXXXXXX..XXXXXXX 100644 | ||
69 | --- a/target/arm/ptw.c | ||
70 | +++ b/target/arm/ptw.c | ||
71 | @@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env, | ||
72 | return ret; | ||
19 | } | 73 | } |
20 | 74 | ||
21 | -static inline long vfp_reg_offset(bool dp, unsigned reg) | 75 | -/** |
22 | +/* Return the offset of a VFP Dreg (dp = true) or VFP Sreg (dp = false). */ | 76 | - * get_phys_addr - get the physical address for this virtual address |
23 | +static long vfp_reg_offset(bool dp, unsigned reg) | 77 | - * |
78 | - * Find the physical address corresponding to the given virtual address, | ||
79 | - * by doing a translation table walk on MMU based systems or using the | ||
80 | - * MPU state on MPU based systems. | ||
81 | - * | ||
82 | - * Returns false if the translation was successful. Otherwise, phys_ptr, attrs, | ||
83 | - * prot and page_size may not be filled in, and the populated fsr value provides | ||
84 | - * information on why the translation aborted, in the format of a | ||
85 | - * DFSR/IFSR fault register, with the following caveats: | ||
86 | - * * we honour the short vs long DFSR format differences. | ||
87 | - * * the WnR bit is never set (the caller must do this). | ||
88 | - * * for PSMAv5 based systems we don't bother to return a full FSR format | ||
89 | - * value. | ||
90 | - * | ||
91 | - * @env: CPUARMState | ||
92 | - * @address: virtual address to get physical address for | ||
93 | - * @access_type: 0 for read, 1 for write, 2 for execute | ||
94 | - * @mmu_idx: MMU index indicating required translation regime | ||
95 | - * @result: set on translation success. | ||
96 | - * @fi: set to fault info if the translation fails | ||
97 | - */ | ||
98 | -bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
99 | - MMUAccessType access_type, ARMMMUIdx mmu_idx, | ||
100 | - GetPhysAddrResult *result, ARMMMUFaultInfo *fi) | ||
101 | +bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
102 | + MMUAccessType access_type, ARMMMUIdx mmu_idx, | ||
103 | + bool is_secure, GetPhysAddrResult *result, | ||
104 | + ARMMMUFaultInfo *fi) | ||
24 | { | 105 | { |
25 | if (dp) { | 106 | ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx); |
26 | - return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]); | 107 | - bool is_secure = regime_is_secure(env, mmu_idx); |
27 | + return neon_element_offset(reg, 0, MO_64); | 108 | |
28 | } else { | 109 | if (mmu_idx != s1_mmu_idx) { |
29 | - long ofs = offsetof(CPUARMState, vfp.zregs[reg >> 2].d[(reg >> 1) & 1]); | 110 | /* |
30 | - if (reg & 1) { | 111 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, |
31 | - ofs += offsetof(CPU_DoubleU, l.upper); | 112 | ARMMMUIdx s2_mmu_idx; |
32 | - } else { | 113 | bool is_el0; |
33 | - ofs += offsetof(CPU_DoubleU, l.lower); | 114 | |
34 | - } | 115 | - ret = get_phys_addr(env, address, access_type, s1_mmu_idx, |
35 | - return ofs; | 116 | - result, fi); |
36 | + return neon_element_offset(reg >> 1, reg & 1, MO_32); | 117 | + ret = get_phys_addr_with_secure(env, address, access_type, |
118 | + s1_mmu_idx, is_secure, result, fi); | ||
119 | |||
120 | /* If S1 fails or S2 is disabled, return early. */ | ||
121 | if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2, | ||
122 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
37 | } | 123 | } |
38 | } | 124 | } |
39 | 125 | ||
126 | +bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
127 | + MMUAccessType access_type, ARMMMUIdx mmu_idx, | ||
128 | + GetPhysAddrResult *result, ARMMMUFaultInfo *fi) | ||
129 | +{ | ||
130 | + return get_phys_addr_with_secure(env, address, access_type, mmu_idx, | ||
131 | + regime_is_secure(env, mmu_idx), | ||
132 | + result, fi); | ||
133 | +} | ||
134 | + | ||
135 | hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, | ||
136 | MemTxAttrs *attrs) | ||
137 | { | ||
40 | -- | 138 | -- |
41 | 2.20.1 | 139 | 2.25.1 |
42 | |||
1 | In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | arm_v7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el(). | 2 | |
3 | This is incorrect when the security state being queried is not the | ||
4 | current one, because arm_current_el() uses the current security state | ||
5 | to determine which of the banked CONTROL.nPRIV bits to look at. | ||
6 | The effect was that if (for instance) Secure state was in privileged | ||
7 | mode but Non-Secure was not, then we would return the wrong MMU index. | 7 | to arm_v7m_mmu_idx_for_secstate, which created the mmu_idx argument, |
8 | 2 | ||
9 | The only places where we are using this function in a way that could | 3 | Remove the use of regime_is_secure from v7m_read_half_insn, using |
10 | trigger this bug are for the stack loads during a v8M function-return | 4 | the new parameter instead. |
11 | and for the instruction fetch of a v8M SG insn. | ||
12 | 5 | ||
13 | Fix the bug by expanding out the M-profile version of the | 6 | As it happens, both callers pass true, propagated from the argument |
14 | arm_current_el() logic inline so it can use the passed-in secstate | 14 | Message-id: 20221001162318.153420-7-richard.henderson@linaro.org |
15 | rather than env->v7m.secure. | 8 | but that is a detail of v7m_handle_execute_nsc we need not expose |
9 | to the callee. | ||
16 | 10 | ||
11 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
12 | Reviewed-by: Alex Bennée <alex.bennee@linaro.org> | ||
13 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
14 | Message-id: 20221001162318.153420-7-richard.henderson@linaro.org | ||
17 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 15 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
18 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
19 | Message-id: 20201022164408.13214-1-peter.maydell@linaro.org | ||
20 | --- | 16 | --- |
21 | target/arm/m_helper.c | 3 ++- | 17 | target/arm/m_helper.c | 9 ++++----- |
22 | 1 file changed, 2 insertions(+), 1 deletion(-) | 18 | 1 file changed, 4 insertions(+), 5 deletions(-) |
23 | 19 | ||
24 | diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c | 20 | diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c |
25 | index XXXXXXX..XXXXXXX 100644 | 21 | index XXXXXXX..XXXXXXX 100644 |
26 | --- a/target/arm/m_helper.c | 22 | --- a/target/arm/m_helper.c |
27 | +++ b/target/arm/m_helper.c | 23 | +++ b/target/arm/m_helper.c |
28 | @@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env, | 24 | @@ -XXX,XX +XXX,XX @@ static bool do_v7m_function_return(ARMCPU *cpu) |
29 | /* Return the MMU index for a v7M CPU in the specified security state */ | 25 | return true; |
30 | ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate) | 26 | } |
27 | |||
28 | -static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, | ||
29 | +static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure, | ||
30 | uint32_t addr, uint16_t *insn) | ||
31 | { | 31 | { |
32 | - bool priv = arm_current_el(env) != 0; | 32 | /* |
33 | + bool priv = arm_v7m_is_handler_mode(env) || | 33 | @@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, |
34 | + !(env->v7m.control[secstate] & 1); | 34 | ARMMMUFaultInfo fi = {}; |
35 | 35 | MemTxResult txres; | |
36 | return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv); | 36 | |
37 | } | 37 | - v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, |
38 | - regime_is_secure(env, mmu_idx), &sattrs); | ||
39 | + v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, secure, &sattrs); | ||
40 | if (!sattrs.nsc || sattrs.ns) { | ||
41 | /* | ||
42 | * This must be the second half of the insn, and it straddles a | ||
43 | @@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu) | ||
44 | /* We want to do the MPU lookup as secure; work out what mmu_idx that is */ | ||
45 | mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true); | ||
46 | |||
47 | - if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) { | ||
48 | + if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15], &insn)) { | ||
49 | return false; | ||
50 | } | ||
51 | |||
52 | @@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu) | ||
53 | goto gen_invep; | ||
54 | } | ||
55 | |||
56 | - if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) { | ||
57 | + if (!v7m_read_half_insn(cpu, mmu_idx, true, env->regs[15] + 2, &insn)) { | ||
58 | return false; | ||
59 | } | ||
60 | |||
38 | -- | 61 | -- |
39 | 2.20.1 | 62 | 2.25.1 |
40 | 63 | ||
41 | 64 | diff view generated by jsdifflib |
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | We can use proper widening loads to extend 32-bit inputs, | 3 | Remove the use of regime_is_secure from arm_tr_init_disas_context. |
4 | and skip the "widenfn" step. | 4 | Instead, provide the value of v8m_secure directly from tb_flags. |
5 | Rather than use regime_is_secure, use the env->v7m.secure directly, | ||
6 | as per arm_mmu_idx_el. | ||
5 | 7 | ||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 9 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 20201030022618.785675-12-richard.henderson@linaro.org | 10 | Message-id: 20221001162318.153420-8-richard.henderson@linaro.org |
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
10 | --- | 12 | --- |
11 | target/arm/translate.c | 6 +++ | 13 | target/arm/cpu.h | 2 ++ |
12 | target/arm/translate-neon.c.inc | 66 ++++++++++++++++++--------------- | 14 | target/arm/helper.c | 4 ++++ |
13 | 2 files changed, 43 insertions(+), 29 deletions(-) | 15 | target/arm/translate.c | 3 +-- |
16 | 3 files changed, 7 insertions(+), 2 deletions(-) | ||
14 | 17 | ||
18 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | ||
19 | index XXXXXXX..XXXXXXX 100644 | ||
20 | --- a/target/arm/cpu.h | ||
21 | +++ b/target/arm/cpu.h | ||
22 | @@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_M32, NEW_FP_CTXT_NEEDED, 3, 1) /* Not cached. */ | ||
23 | FIELD(TBFLAG_M32, FPCCR_S_WRONG, 4, 1) /* Not cached. */ | ||
24 | /* Set if MVE insns are definitely not predicated by VPR or LTPSIZE */ | ||
25 | FIELD(TBFLAG_M32, MVE_NO_PRED, 5, 1) /* Not cached. */ | ||
26 | +/* Set if in secure mode */ | ||
27 | +FIELD(TBFLAG_M32, SECURE, 6, 1) | ||
28 | |||
29 | /* | ||
30 | * Bit usage when in AArch64 state | ||
31 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
32 | index XXXXXXX..XXXXXXX 100644 | ||
33 | --- a/target/arm/helper.c | ||
34 | +++ b/target/arm/helper.c | ||
35 | @@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_m32(CPUARMState *env, int fp_el, | ||
36 | DP_TBFLAG_M32(flags, STACKCHECK, 1); | ||
37 | } | ||
38 | |||
39 | + if (arm_feature(env, ARM_FEATURE_M_SECURITY) && env->v7m.secure) { | ||
40 | + DP_TBFLAG_M32(flags, SECURE, 1); | ||
41 | + } | ||
42 | + | ||
43 | return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags); | ||
44 | } | ||
45 | |||
15 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 46 | diff --git a/target/arm/translate.c b/target/arm/translate.c |
16 | index XXXXXXX..XXXXXXX 100644 | 47 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/target/arm/translate.c | 48 | --- a/target/arm/translate.c |
18 | +++ b/target/arm/translate.c | 49 | +++ b/target/arm/translate.c |
19 | @@ -XXX,XX +XXX,XX @@ static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop) | 50 | @@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs) |
20 | long off = neon_element_offset(reg, ele, memop); | 51 | dc->vfp_enabled = 1; |
21 | 52 | dc->be_data = MO_TE; | |
22 | switch (memop) { | 53 | dc->v7m_handler_mode = EX_TBFLAG_M32(tb_flags, HANDLER); |
23 | + case MO_SL: | 54 | - dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) && |
24 | + tcg_gen_ld32s_i64(dest, cpu_env, off); | 55 | - regime_is_secure(env, dc->mmu_idx); |
25 | + break; | 56 | + dc->v8m_secure = EX_TBFLAG_M32(tb_flags, SECURE); |
26 | + case MO_UL: | 57 | dc->v8m_stackcheck = EX_TBFLAG_M32(tb_flags, STACKCHECK); |
27 | + tcg_gen_ld32u_i64(dest, cpu_env, off); | 58 | dc->v8m_fpccr_s_wrong = EX_TBFLAG_M32(tb_flags, FPCCR_S_WRONG); |
28 | + break; | 59 | dc->v7m_new_fp_ctxt_needed = |
29 | case MO_Q: | ||
30 | tcg_gen_ld_i64(dest, cpu_env, off); | ||
31 | break; | ||
32 | diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc | ||
33 | index XXXXXXX..XXXXXXX 100644 | ||
34 | --- a/target/arm/translate-neon.c.inc | ||
35 | +++ b/target/arm/translate-neon.c.inc | ||
36 | @@ -XXX,XX +XXX,XX @@ static bool trans_Vimm_1r(DisasContext *s, arg_1reg_imm *a) | ||
37 | static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
38 | NeonGenWidenFn *widenfn, | ||
39 | NeonGenTwo64OpFn *opfn, | ||
40 | - bool src1_wide) | ||
41 | + int src1_mop, int src2_mop) | ||
42 | { | ||
43 | /* 3-regs different lengths, prewidening case (VADDL/VSUBL/VAADW/VSUBW) */ | ||
44 | TCGv_i64 rn0_64, rn1_64, rm_64; | ||
45 | - TCGv_i32 rm; | ||
46 | |||
47 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
48 | return false; | ||
49 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
50 | return false; | ||
51 | } | ||
52 | |||
53 | - if (!widenfn || !opfn) { | ||
54 | + if (!opfn) { | ||
55 | /* size == 3 case, which is an entirely different insn group */ | ||
56 | return false; | ||
57 | } | ||
58 | |||
59 | - if ((a->vd & 1) || (src1_wide && (a->vn & 1))) { | ||
60 | + if ((a->vd & 1) || (src1_mop == MO_Q && (a->vn & 1))) { | ||
61 | return false; | ||
62 | } | ||
63 | |||
64 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
65 | rn1_64 = tcg_temp_new_i64(); | ||
66 | rm_64 = tcg_temp_new_i64(); | ||
67 | |||
68 | - if (src1_wide) { | ||
69 | - read_neon_element64(rn0_64, a->vn, 0, MO_64); | ||
70 | + if (src1_mop >= 0) { | ||
71 | + read_neon_element64(rn0_64, a->vn, 0, src1_mop); | ||
72 | } else { | ||
73 | TCGv_i32 tmp = tcg_temp_new_i32(); | ||
74 | read_neon_element32(tmp, a->vn, 0, MO_32); | ||
75 | widenfn(rn0_64, tmp); | ||
76 | tcg_temp_free_i32(tmp); | ||
77 | } | ||
78 | - rm = tcg_temp_new_i32(); | ||
79 | - read_neon_element32(rm, a->vm, 0, MO_32); | ||
80 | + if (src2_mop >= 0) { | ||
81 | + read_neon_element64(rm_64, a->vm, 0, src2_mop); | ||
82 | + } else { | ||
83 | + TCGv_i32 tmp = tcg_temp_new_i32(); | ||
84 | + read_neon_element32(tmp, a->vm, 0, MO_32); | ||
85 | + widenfn(rm_64, tmp); | ||
86 | + tcg_temp_free_i32(tmp); | ||
87 | + } | ||
88 | |||
89 | - widenfn(rm_64, rm); | ||
90 | - tcg_temp_free_i32(rm); | ||
91 | opfn(rn0_64, rn0_64, rm_64); | ||
92 | |||
93 | /* | ||
94 | * Load second pass inputs before storing the first pass result, to | ||
95 | * avoid incorrect results if a narrow input overlaps with the result. | ||
96 | */ | ||
97 | - if (src1_wide) { | ||
98 | - read_neon_element64(rn1_64, a->vn, 1, MO_64); | ||
99 | + if (src1_mop >= 0) { | ||
100 | + read_neon_element64(rn1_64, a->vn, 1, src1_mop); | ||
101 | } else { | ||
102 | TCGv_i32 tmp = tcg_temp_new_i32(); | ||
103 | read_neon_element32(tmp, a->vn, 1, MO_32); | ||
104 | widenfn(rn1_64, tmp); | ||
105 | tcg_temp_free_i32(tmp); | ||
106 | } | ||
107 | - rm = tcg_temp_new_i32(); | ||
108 | - read_neon_element32(rm, a->vm, 1, MO_32); | ||
109 | + if (src2_mop >= 0) { | ||
110 | + read_neon_element64(rm_64, a->vm, 1, src2_mop); | ||
111 | + } else { | ||
112 | + TCGv_i32 tmp = tcg_temp_new_i32(); | ||
113 | + read_neon_element32(tmp, a->vm, 1, MO_32); | ||
114 | + widenfn(rm_64, tmp); | ||
115 | + tcg_temp_free_i32(tmp); | ||
116 | + } | ||
117 | |||
118 | write_neon_element64(rn0_64, a->vd, 0, MO_64); | ||
119 | |||
120 | - widenfn(rm_64, rm); | ||
121 | - tcg_temp_free_i32(rm); | ||
122 | opfn(rn1_64, rn1_64, rm_64); | ||
123 | write_neon_element64(rn1_64, a->vd, 1, MO_64); | ||
124 | |||
125 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
126 | return true; | ||
127 | } | ||
128 | |||
129 | -#define DO_PREWIDEN(INSN, S, EXT, OP, SRC1WIDE) \ | ||
130 | +#define DO_PREWIDEN(INSN, S, OP, SRC1WIDE, SIGN) \ | ||
131 | static bool trans_##INSN##_3d(DisasContext *s, arg_3diff *a) \ | ||
132 | { \ | ||
133 | static NeonGenWidenFn * const widenfn[] = { \ | ||
134 | gen_helper_neon_widen_##S##8, \ | ||
135 | gen_helper_neon_widen_##S##16, \ | ||
136 | - tcg_gen_##EXT##_i32_i64, \ | ||
137 | - NULL, \ | ||
138 | + NULL, NULL, \ | ||
139 | }; \ | ||
140 | static NeonGenTwo64OpFn * const addfn[] = { \ | ||
141 | gen_helper_neon_##OP##l_u16, \ | ||
142 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
143 | tcg_gen_##OP##_i64, \ | ||
144 | NULL, \ | ||
145 | }; \ | ||
146 | - return do_prewiden_3d(s, a, widenfn[a->size], \ | ||
147 | - addfn[a->size], SRC1WIDE); \ | ||
148 | + int narrow_mop = a->size == MO_32 ? MO_32 | SIGN : -1; \ | ||
149 | + return do_prewiden_3d(s, a, widenfn[a->size], addfn[a->size], \ | ||
150 | + SRC1WIDE ? MO_Q : narrow_mop, \ | ||
151 | + narrow_mop); \ | ||
152 | } | ||
153 | |||
154 | -DO_PREWIDEN(VADDL_S, s, ext, add, false) | ||
155 | -DO_PREWIDEN(VADDL_U, u, extu, add, false) | ||
156 | -DO_PREWIDEN(VSUBL_S, s, ext, sub, false) | ||
157 | -DO_PREWIDEN(VSUBL_U, u, extu, sub, false) | ||
158 | -DO_PREWIDEN(VADDW_S, s, ext, add, true) | ||
159 | -DO_PREWIDEN(VADDW_U, u, extu, add, true) | ||
160 | -DO_PREWIDEN(VSUBW_S, s, ext, sub, true) | ||
161 | -DO_PREWIDEN(VSUBW_U, u, extu, sub, true) | ||
162 | +DO_PREWIDEN(VADDL_S, s, add, false, MO_SIGN) | ||
163 | +DO_PREWIDEN(VADDL_U, u, add, false, 0) | ||
164 | +DO_PREWIDEN(VSUBL_S, s, sub, false, MO_SIGN) | ||
165 | +DO_PREWIDEN(VSUBL_U, u, sub, false, 0) | ||
166 | +DO_PREWIDEN(VADDW_S, s, add, true, MO_SIGN) | ||
167 | +DO_PREWIDEN(VADDW_U, u, add, true, 0) | ||
168 | +DO_PREWIDEN(VSUBW_S, s, sub, true, MO_SIGN) | ||
169 | +DO_PREWIDEN(VSUBW_U, u, sub, true, 0) | ||
170 | |||
171 | static bool do_narrow_3d(DisasContext *s, arg_3diff *a, | ||
172 | NeonGenTwo64OpFn *opfn, NeonGenNarrowFn *narrowfn) | ||
173 | -- | 60 | -- |
174 | 2.20.1 | 61 | 2.25.1 |
175 | |||
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Replace all uses of neon_load/store_reg64 within translate-neon.c.inc. | 3 | This is the last use of regime_is_secure; remove it |
4 | entirely before changing the layout of ARMMMUIdx. | ||
4 | 5 | ||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
6 | Message-id: 20201030022618.785675-9-richard.henderson@linaro.org | 8 | Message-id: 20221001162318.153420-9-richard.henderson@linaro.org |
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | --- | 10 | --- |
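
The mechanical shape of the replacement: a flat 64-bit register number
(reg + pass) becomes a (register, element) pair. A simplified standalone
sketch of that accessor style (the two-element register file here is
illustrative, not QEMU's layout):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t vreg[32][2];    /* 32 vector registers, 2 x 64-bit elements */

    /* like read_neon_element64(dest, reg, ele, MO_64) */
    static uint64_t read_element64(int reg, int elt)
    {
        return vreg[reg][elt];
    }

    /* like write_neon_element64(src, reg, ele, MO_64) */
    static void write_element64(uint64_t src, int reg, int elt)
    {
        vreg[reg][elt] = src;
    }

    int main(void)
    {
        /* old style: neon_store_reg64(tmp, vd + 1) -> new: (vd, 1) */
        write_element64(0x1234, 3, 1);
        printf("%llx\n", (unsigned long long)read_element64(3, 1));
        return 0;
    }

Routing every access through one accessor keeps the element offset
computation (neon_element_offset()) in a single place.
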
10 | target/arm/translate.c | 26 +++++++++ | 11 | target/arm/internals.h | 42 ---------------------------------------- |
11 | target/arm/translate-neon.c.inc | 94 ++++++++++++++++----------------- | 12 | target/arm/ptw.c | 44 ++++++++++++++++++++++++++++++++++++++++-- |
12 | 2 files changed, 73 insertions(+), 47 deletions(-) | 13 | 2 files changed, 42 insertions(+), 44 deletions(-) |
13 | 14 | ||
14 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 15 | diff --git a/target/arm/internals.h b/target/arm/internals.h |
15 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/target/arm/translate.c | 17 | --- a/target/arm/internals.h |
17 | +++ b/target/arm/translate.c | 18 | +++ b/target/arm/internals.h |
18 | @@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop) | 19 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx) |
19 | } | 20 | } |
20 | } | 21 | } |
21 | 22 | ||
22 | +static void read_neon_element64(TCGv_i64 dest, int reg, int ele, MemOp memop) | 23 | -/* Return true if this address translation regime is secure */ |
23 | +{ | 24 | -static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx) |
24 | + long off = neon_element_offset(reg, ele, memop); | 25 | -{ |
26 | - switch (mmu_idx) { | ||
27 | - case ARMMMUIdx_E10_0: | ||
28 | - case ARMMMUIdx_E10_1: | ||
29 | - case ARMMMUIdx_E10_1_PAN: | ||
30 | - case ARMMMUIdx_E20_0: | ||
31 | - case ARMMMUIdx_E20_2: | ||
32 | - case ARMMMUIdx_E20_2_PAN: | ||
33 | - case ARMMMUIdx_Stage1_E0: | ||
34 | - case ARMMMUIdx_Stage1_E1: | ||
35 | - case ARMMMUIdx_Stage1_E1_PAN: | ||
36 | - case ARMMMUIdx_E2: | ||
37 | - case ARMMMUIdx_Stage2: | ||
38 | - case ARMMMUIdx_MPrivNegPri: | ||
39 | - case ARMMMUIdx_MUserNegPri: | ||
40 | - case ARMMMUIdx_MPriv: | ||
41 | - case ARMMMUIdx_MUser: | ||
42 | - return false; | ||
43 | - case ARMMMUIdx_SE3: | ||
44 | - case ARMMMUIdx_SE10_0: | ||
45 | - case ARMMMUIdx_SE10_1: | ||
46 | - case ARMMMUIdx_SE10_1_PAN: | ||
47 | - case ARMMMUIdx_SE20_0: | ||
48 | - case ARMMMUIdx_SE20_2: | ||
49 | - case ARMMMUIdx_SE20_2_PAN: | ||
50 | - case ARMMMUIdx_Stage1_SE0: | ||
51 | - case ARMMMUIdx_Stage1_SE1: | ||
52 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
53 | - case ARMMMUIdx_SE2: | ||
54 | - case ARMMMUIdx_Stage2_S: | ||
55 | - case ARMMMUIdx_MSPrivNegPri: | ||
56 | - case ARMMMUIdx_MSUserNegPri: | ||
57 | - case ARMMMUIdx_MSPriv: | ||
58 | - case ARMMMUIdx_MSUser: | ||
59 | - return true; | ||
60 | - default: | ||
61 | - g_assert_not_reached(); | ||
62 | - } | ||
63 | -} | ||
64 | - | ||
65 | static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
66 | { | ||
67 | switch (mmu_idx) { | ||
68 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
69 | index XXXXXXX..XXXXXXX 100644 | ||
70 | --- a/target/arm/ptw.c | ||
71 | +++ b/target/arm/ptw.c | ||
72 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
73 | MMUAccessType access_type, ARMMMUIdx mmu_idx, | ||
74 | GetPhysAddrResult *result, ARMMMUFaultInfo *fi) | ||
75 | { | ||
76 | + bool is_secure; | ||
25 | + | 77 | + |
26 | + switch (memop) { | 78 | + switch (mmu_idx) { |
27 | + case MO_Q: | 79 | + case ARMMMUIdx_E10_0: |
28 | + tcg_gen_ld_i64(dest, cpu_env, off); | 80 | + case ARMMMUIdx_E10_1: |
81 | + case ARMMMUIdx_E10_1_PAN: | ||
82 | + case ARMMMUIdx_E20_0: | ||
83 | + case ARMMMUIdx_E20_2: | ||
84 | + case ARMMMUIdx_E20_2_PAN: | ||
85 | + case ARMMMUIdx_Stage1_E0: | ||
86 | + case ARMMMUIdx_Stage1_E1: | ||
87 | + case ARMMMUIdx_Stage1_E1_PAN: | ||
88 | + case ARMMMUIdx_E2: | ||
89 | + case ARMMMUIdx_Stage2: | ||
90 | + case ARMMMUIdx_MPrivNegPri: | ||
91 | + case ARMMMUIdx_MUserNegPri: | ||
92 | + case ARMMMUIdx_MPriv: | ||
93 | + case ARMMMUIdx_MUser: | ||
94 | + is_secure = false; | ||
95 | + break; | ||
96 | + case ARMMMUIdx_SE3: | ||
97 | + case ARMMMUIdx_SE10_0: | ||
98 | + case ARMMMUIdx_SE10_1: | ||
99 | + case ARMMMUIdx_SE10_1_PAN: | ||
100 | + case ARMMMUIdx_SE20_0: | ||
101 | + case ARMMMUIdx_SE20_2: | ||
102 | + case ARMMMUIdx_SE20_2_PAN: | ||
103 | + case ARMMMUIdx_Stage1_SE0: | ||
104 | + case ARMMMUIdx_Stage1_SE1: | ||
105 | + case ARMMMUIdx_Stage1_SE1_PAN: | ||
106 | + case ARMMMUIdx_SE2: | ||
107 | + case ARMMMUIdx_Stage2_S: | ||
108 | + case ARMMMUIdx_MSPrivNegPri: | ||
109 | + case ARMMMUIdx_MSUserNegPri: | ||
110 | + case ARMMMUIdx_MSPriv: | ||
111 | + case ARMMMUIdx_MSUser: | ||
112 | + is_secure = true; | ||
29 | + break; | 113 | + break; |
30 | + default: | 114 | + default: |
31 | + g_assert_not_reached(); | 115 | + g_assert_not_reached(); |
32 | + } | 116 | + } |
33 | +} | 117 | return get_phys_addr_with_secure(env, address, access_type, mmu_idx, |
34 | + | 118 | - regime_is_secure(env, mmu_idx), |
35 | static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop) | 119 | - result, fi); |
36 | { | 120 | + is_secure, result, fi); |
37 | long off = neon_element_offset(reg, ele, memop); | ||
38 | @@ -XXX,XX +XXX,XX @@ static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop) | ||
39 | } | ||
40 | } | 121 | } |
41 | 122 | ||
42 | +static void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop) | 123 | hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, |
43 | +{ | ||
44 | + long off = neon_element_offset(reg, ele, memop); | ||
45 | + | ||
46 | + switch (memop) { | ||
47 | + case MO_64: | ||
48 | + tcg_gen_st_i64(src, cpu_env, off); | ||
49 | + break; | ||
50 | + default: | ||
51 | + g_assert_not_reached(); | ||
52 | + } | ||
53 | +} | ||
54 | + | ||
55 | static TCGv_ptr vfp_reg_ptr(bool dp, int reg) | ||
56 | { | ||
57 | TCGv_ptr ret = tcg_temp_new_ptr(); | ||
58 | diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc | ||
59 | index XXXXXXX..XXXXXXX 100644 | ||
60 | --- a/target/arm/translate-neon.c.inc | ||
61 | +++ b/target/arm/translate-neon.c.inc | ||
62 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_64(DisasContext *s, arg_2reg_shift *a, | ||
63 | for (pass = 0; pass < a->q + 1; pass++) { | ||
64 | TCGv_i64 tmp = tcg_temp_new_i64(); | ||
65 | |||
66 | - neon_load_reg64(tmp, a->vm + pass); | ||
67 | + read_neon_element64(tmp, a->vm, pass, MO_64); | ||
68 | fn(tmp, cpu_env, tmp, constimm); | ||
69 | - neon_store_reg64(tmp, a->vd + pass); | ||
70 | + write_neon_element64(tmp, a->vd, pass, MO_64); | ||
71 | tcg_temp_free_i64(tmp); | ||
72 | } | ||
73 | tcg_temp_free_i64(constimm); | ||
74 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a, | ||
75 | rd = tcg_temp_new_i32(); | ||
76 | |||
77 | /* Load both inputs first to avoid potential overwrite if rm == rd */ | ||
78 | - neon_load_reg64(rm1, a->vm); | ||
79 | - neon_load_reg64(rm2, a->vm + 1); | ||
80 | + read_neon_element64(rm1, a->vm, 0, MO_64); | ||
81 | + read_neon_element64(rm2, a->vm, 1, MO_64); | ||
82 | |||
83 | shiftfn(rm1, rm1, constimm); | ||
84 | narrowfn(rd, cpu_env, rm1); | ||
85 | @@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a, | ||
86 | tcg_gen_shli_i64(tmp, tmp, a->shift); | ||
87 | tcg_gen_andi_i64(tmp, tmp, ~widen_mask); | ||
88 | } | ||
89 | - neon_store_reg64(tmp, a->vd); | ||
90 | + write_neon_element64(tmp, a->vd, 0, MO_64); | ||
91 | |||
92 | widenfn(tmp, rm1); | ||
93 | tcg_temp_free_i32(rm1); | ||
94 | @@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a, | ||
95 | tcg_gen_shli_i64(tmp, tmp, a->shift); | ||
96 | tcg_gen_andi_i64(tmp, tmp, ~widen_mask); | ||
97 | } | ||
98 | - neon_store_reg64(tmp, a->vd + 1); | ||
99 | + write_neon_element64(tmp, a->vd, 1, MO_64); | ||
100 | tcg_temp_free_i64(tmp); | ||
101 | return true; | ||
102 | } | ||
103 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
104 | rm_64 = tcg_temp_new_i64(); | ||
105 | |||
106 | if (src1_wide) { | ||
107 | - neon_load_reg64(rn0_64, a->vn); | ||
108 | + read_neon_element64(rn0_64, a->vn, 0, MO_64); | ||
109 | } else { | ||
110 | TCGv_i32 tmp = tcg_temp_new_i32(); | ||
111 | read_neon_element32(tmp, a->vn, 0, MO_32); | ||
112 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
113 | * avoid incorrect results if a narrow input overlaps with the result. | ||
114 | */ | ||
115 | if (src1_wide) { | ||
116 | - neon_load_reg64(rn1_64, a->vn + 1); | ||
117 | + read_neon_element64(rn1_64, a->vn, 1, MO_64); | ||
118 | } else { | ||
119 | TCGv_i32 tmp = tcg_temp_new_i32(); | ||
120 | read_neon_element32(tmp, a->vn, 1, MO_32); | ||
121 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
122 | rm = tcg_temp_new_i32(); | ||
123 | read_neon_element32(rm, a->vm, 1, MO_32); | ||
124 | |||
125 | - neon_store_reg64(rn0_64, a->vd); | ||
126 | + write_neon_element64(rn0_64, a->vd, 0, MO_64); | ||
127 | |||
128 | widenfn(rm_64, rm); | ||
129 | tcg_temp_free_i32(rm); | ||
130 | opfn(rn1_64, rn1_64, rm_64); | ||
131 | - neon_store_reg64(rn1_64, a->vd + 1); | ||
132 | + write_neon_element64(rn1_64, a->vd, 1, MO_64); | ||
133 | |||
134 | tcg_temp_free_i64(rn0_64); | ||
135 | tcg_temp_free_i64(rn1_64); | ||
136 | @@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a, | ||
137 | rd0 = tcg_temp_new_i32(); | ||
138 | rd1 = tcg_temp_new_i32(); | ||
139 | |||
140 | - neon_load_reg64(rn_64, a->vn); | ||
141 | - neon_load_reg64(rm_64, a->vm); | ||
142 | + read_neon_element64(rn_64, a->vn, 0, MO_64); | ||
143 | + read_neon_element64(rm_64, a->vm, 0, MO_64); | ||
144 | |||
145 | opfn(rn_64, rn_64, rm_64); | ||
146 | |||
147 | narrowfn(rd0, rn_64); | ||
148 | |||
149 | - neon_load_reg64(rn_64, a->vn + 1); | ||
150 | - neon_load_reg64(rm_64, a->vm + 1); | ||
151 | + read_neon_element64(rn_64, a->vn, 1, MO_64); | ||
152 | + read_neon_element64(rm_64, a->vm, 1, MO_64); | ||
153 | |||
154 | opfn(rn_64, rn_64, rm_64); | ||
155 | |||
156 | @@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a, | ||
157 | /* Don't store results until after all loads: they might overlap */ | ||
158 | if (accfn) { | ||
159 | tmp = tcg_temp_new_i64(); | ||
160 | - neon_load_reg64(tmp, a->vd); | ||
161 | + read_neon_element64(tmp, a->vd, 0, MO_64); | ||
162 | accfn(tmp, tmp, rd0); | ||
163 | - neon_store_reg64(tmp, a->vd); | ||
164 | - neon_load_reg64(tmp, a->vd + 1); | ||
165 | + write_neon_element64(tmp, a->vd, 0, MO_64); | ||
166 | + read_neon_element64(tmp, a->vd, 1, MO_64); | ||
167 | accfn(tmp, tmp, rd1); | ||
168 | - neon_store_reg64(tmp, a->vd + 1); | ||
169 | + write_neon_element64(tmp, a->vd, 1, MO_64); | ||
170 | tcg_temp_free_i64(tmp); | ||
171 | } else { | ||
172 | - neon_store_reg64(rd0, a->vd); | ||
173 | - neon_store_reg64(rd1, a->vd + 1); | ||
174 | + write_neon_element64(rd0, a->vd, 0, MO_64); | ||
175 | + write_neon_element64(rd1, a->vd, 1, MO_64); | ||
176 | } | ||
177 | |||
178 | tcg_temp_free_i64(rd0); | ||
179 | @@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a, | ||
180 | |||
181 | if (accfn) { | ||
182 | TCGv_i64 t64 = tcg_temp_new_i64(); | ||
183 | - neon_load_reg64(t64, a->vd); | ||
184 | + read_neon_element64(t64, a->vd, 0, MO_64); | ||
185 | accfn(t64, t64, rn0_64); | ||
186 | - neon_store_reg64(t64, a->vd); | ||
187 | - neon_load_reg64(t64, a->vd + 1); | ||
188 | + write_neon_element64(t64, a->vd, 0, MO_64); | ||
189 | + read_neon_element64(t64, a->vd, 1, MO_64); | ||
190 | accfn(t64, t64, rn1_64); | ||
191 | - neon_store_reg64(t64, a->vd + 1); | ||
192 | + write_neon_element64(t64, a->vd, 1, MO_64); | ||
193 | tcg_temp_free_i64(t64); | ||
194 | } else { | ||
195 | - neon_store_reg64(rn0_64, a->vd); | ||
196 | - neon_store_reg64(rn1_64, a->vd + 1); | ||
197 | + write_neon_element64(rn0_64, a->vd, 0, MO_64); | ||
198 | + write_neon_element64(rn1_64, a->vd, 1, MO_64); | ||
199 | } | ||
200 | tcg_temp_free_i64(rn0_64); | ||
201 | tcg_temp_free_i64(rn1_64); | ||
202 | @@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a) | ||
203 | right = tcg_temp_new_i64(); | ||
204 | dest = tcg_temp_new_i64(); | ||
205 | |||
206 | - neon_load_reg64(right, a->vn); | ||
207 | - neon_load_reg64(left, a->vm); | ||
208 | + read_neon_element64(right, a->vn, 0, MO_64); | ||
209 | + read_neon_element64(left, a->vm, 0, MO_64); | ||
210 | tcg_gen_extract2_i64(dest, right, left, a->imm * 8); | ||
211 | - neon_store_reg64(dest, a->vd); | ||
212 | + write_neon_element64(dest, a->vd, 0, MO_64); | ||
213 | |||
214 | tcg_temp_free_i64(left); | ||
215 | tcg_temp_free_i64(right); | ||
216 | @@ -XXX,XX +XXX,XX @@ static bool trans_VEXT(DisasContext *s, arg_VEXT *a) | ||
217 | destright = tcg_temp_new_i64(); | ||
218 | |||
219 | if (a->imm < 8) { | ||
220 | - neon_load_reg64(right, a->vn); | ||
221 | - neon_load_reg64(middle, a->vn + 1); | ||
222 | + read_neon_element64(right, a->vn, 0, MO_64); | ||
223 | + read_neon_element64(middle, a->vn, 1, MO_64); | ||
224 | tcg_gen_extract2_i64(destright, right, middle, a->imm * 8); | ||
225 | - neon_load_reg64(left, a->vm); | ||
226 | + read_neon_element64(left, a->vm, 0, MO_64); | ||
227 | tcg_gen_extract2_i64(destleft, middle, left, a->imm * 8); | ||
228 | } else { | ||
229 | - neon_load_reg64(right, a->vn + 1); | ||
230 | - neon_load_reg64(middle, a->vm); | ||
231 | + read_neon_element64(right, a->vn, 1, MO_64); | ||
232 | + read_neon_element64(middle, a->vm, 0, MO_64); | ||
233 | tcg_gen_extract2_i64(destright, right, middle, (a->imm - 8) * 8); | ||
234 | - neon_load_reg64(left, a->vm + 1); | ||
235 | + read_neon_element64(left, a->vm, 1, MO_64); | ||
236 | tcg_gen_extract2_i64(destleft, middle, left, (a->imm - 8) * 8); | ||
237 | } | ||
238 | |||
239 | - neon_store_reg64(destright, a->vd); | ||
240 | - neon_store_reg64(destleft, a->vd + 1); | ||
241 | + write_neon_element64(destright, a->vd, 0, MO_64); | ||
242 | + write_neon_element64(destleft, a->vd, 1, MO_64); | ||
243 | |||
244 | tcg_temp_free_i64(destright); | ||
245 | tcg_temp_free_i64(destleft); | ||
246 | @@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a, | ||
247 | |||
248 | if (accfn) { | ||
249 | TCGv_i64 tmp64 = tcg_temp_new_i64(); | ||
250 | - neon_load_reg64(tmp64, a->vd + pass); | ||
251 | + read_neon_element64(tmp64, a->vd, pass, MO_64); | ||
252 | accfn(rd_64, tmp64, rd_64); | ||
253 | tcg_temp_free_i64(tmp64); | ||
254 | } | ||
255 | - neon_store_reg64(rd_64, a->vd + pass); | ||
256 | + write_neon_element64(rd_64, a->vd, pass, MO_64); | ||
257 | tcg_temp_free_i64(rd_64); | ||
258 | } | ||
259 | return true; | ||
260 | @@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a, | ||
261 | rd0 = tcg_temp_new_i32(); | ||
262 | rd1 = tcg_temp_new_i32(); | ||
263 | |||
264 | - neon_load_reg64(rm, a->vm); | ||
265 | + read_neon_element64(rm, a->vm, 0, MO_64); | ||
266 | narrowfn(rd0, cpu_env, rm); | ||
267 | - neon_load_reg64(rm, a->vm + 1); | ||
268 | + read_neon_element64(rm, a->vm, 1, MO_64); | ||
269 | narrowfn(rd1, cpu_env, rm); | ||
270 | write_neon_element32(rd0, a->vd, 0, MO_32); | ||
271 | write_neon_element32(rd1, a->vd, 1, MO_32); | ||
272 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a) | ||
273 | |||
274 | widenfn(rd, rm0); | ||
275 | tcg_gen_shli_i64(rd, rd, 8 << a->size); | ||
276 | - neon_store_reg64(rd, a->vd); | ||
277 | + write_neon_element64(rd, a->vd, 0, MO_64); | ||
278 | widenfn(rd, rm1); | ||
279 | tcg_gen_shli_i64(rd, rd, 8 << a->size); | ||
280 | - neon_store_reg64(rd, a->vd + 1); | ||
281 | + write_neon_element64(rd, a->vd, 1, MO_64); | ||
282 | |||
283 | tcg_temp_free_i64(rd); | ||
284 | tcg_temp_free_i32(rm0); | ||
285 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSWP(DisasContext *s, arg_2misc *a) | ||
286 | rm = tcg_temp_new_i64(); | ||
287 | rd = tcg_temp_new_i64(); | ||
288 | for (pass = 0; pass < (a->q ? 2 : 1); pass++) { | ||
289 | - neon_load_reg64(rm, a->vm + pass); | ||
290 | - neon_load_reg64(rd, a->vd + pass); | ||
291 | - neon_store_reg64(rm, a->vd + pass); | ||
292 | - neon_store_reg64(rd, a->vm + pass); | ||
293 | + read_neon_element64(rm, a->vm, pass, MO_64); | ||
294 | + read_neon_element64(rd, a->vd, pass, MO_64); | ||
295 | + write_neon_element64(rm, a->vd, pass, MO_64); | ||
296 | + write_neon_element64(rd, a->vm, pass, MO_64); | ||
297 | } | ||
298 | tcg_temp_free_i64(rm); | ||
299 | tcg_temp_free_i64(rd); | ||
300 | -- | 124 | -- |
301 | 2.20.1 | 125 | 2.25.1 |
302 | |||
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | The only uses of this function are for loading VFP | 3 | Use get_phys_addr_with_secure directly. For A-profile, this is the |
4 | single-precision values, which have nothing to do with NEON. | 4 | one place where the value of is_secure may not equal arm_is_secure(env). |
5 | 5 | ||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 20201030022618.785675-8-richard.henderson@linaro.org | 8 | Message-id: 20221001162318.153420-10-richard.henderson@linaro.org |
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
10 | --- | 10 | --- |
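
The interesting part of the helper.c change below is that the ATS
instruction itself names the translation regime it wants, so both mmu_idx
and is_secure are chosen per operation instead of being read back from the
CPU's current state. A hedged sketch of that decision shape (the regime
names are illustrative stand-ins, not the real ARMMMUIdx values):

    #include <stdbool.h>
    #include <stdio.h>

    enum regime { STAGE1_EL1, STAGE1_SEL1, SEL3 };

    static void ats_pick(int current_el, bool secure_regime,
                         enum regime *r, bool *is_secure)
    {
        if (current_el == 3) {
            *r = SEL3;              /* EL3 lookups are Secure in this sketch */
            *is_secure = true;
        } else if (secure_regime) {
            *r = STAGE1_SEL1;
            *is_secure = true;
        } else {
            *r = STAGE1_EL1;
            *is_secure = false;
        }
    }

    int main(void)
    {
        enum regime r;
        bool sec;
        ats_pick(3, false, &r, &sec);
        printf("regime=%d secure=%d\n", (int)r, (int)sec);
        return 0;
    }

That per-operation choice is why this is the one spot where is_secure can
legitimately differ from arm_is_secure(env).
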
11 | target/arm/translate.c | 4 +- | 11 | target/arm/helper.c | 19 ++++++++++++++----- |
12 | target/arm/translate-vfp.c.inc | 184 ++++++++++++++++----------------- | 12 | 1 file changed, 14 insertions(+), 5 deletions(-) |
13 | 2 files changed, 94 insertions(+), 94 deletions(-) | ||
14 | 13 | ||
15 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 14 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
16 | index XXXXXXX..XXXXXXX 100644 | 15 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/target/arm/translate.c | 16 | --- a/target/arm/helper.c |
18 | +++ b/target/arm/translate.c | 17 | +++ b/target/arm/helper.c |
19 | @@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg64(TCGv_i64 var, int reg) | 18 | @@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri, |
20 | tcg_gen_st_i64(var, cpu_env, vfp_reg_offset(1, reg)); | 19 | |
21 | } | 20 | #ifdef CONFIG_TCG |
22 | 21 | static uint64_t do_ats_write(CPUARMState *env, uint64_t value, | |
23 | -static inline void neon_load_reg32(TCGv_i32 var, int reg) | 22 | - MMUAccessType access_type, ARMMMUIdx mmu_idx) |
24 | +static inline void vfp_load_reg32(TCGv_i32 var, int reg) | 23 | + MMUAccessType access_type, ARMMMUIdx mmu_idx, |
24 | + bool is_secure) | ||
25 | { | 25 | { |
26 | tcg_gen_ld_i32(var, cpu_env, vfp_reg_offset(false, reg)); | 26 | bool ret; |
27 | } | 27 | uint64_t par64; |
28 | 28 | @@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value, | |
29 | -static inline void neon_store_reg32(TCGv_i32 var, int reg) | 29 | ARMMMUFaultInfo fi = {}; |
30 | +static inline void vfp_store_reg32(TCGv_i32 var, int reg) | 30 | GetPhysAddrResult res = {}; |
31 | { | 31 | |
32 | tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg)); | 32 | - ret = get_phys_addr(env, value, access_type, mmu_idx, &res, &fi); |
33 | } | 33 | + ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx, |
34 | diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc | 34 | + is_secure, &res, &fi); |
35 | index XXXXXXX..XXXXXXX 100644 | 35 | |
36 | --- a/target/arm/translate-vfp.c.inc | 36 | /* |
37 | +++ b/target/arm/translate-vfp.c.inc | 37 | * ATS operations only do S1 or S1+S2 translations, so we never |
38 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a) | 38 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) |
39 | frn = tcg_temp_new_i32(); | 39 | switch (el) { |
40 | frm = tcg_temp_new_i32(); | 40 | case 3: |
41 | dest = tcg_temp_new_i32(); | 41 | mmu_idx = ARMMMUIdx_SE3; |
42 | - neon_load_reg32(frn, rn); | 42 | + secure = true; |
43 | - neon_load_reg32(frm, rm); | ||
44 | + vfp_load_reg32(frn, rn); | ||
45 | + vfp_load_reg32(frm, rm); | ||
46 | switch (a->cc) { | ||
47 | case 0: /* eq: Z */ | ||
48 | tcg_gen_movcond_i32(TCG_COND_EQ, dest, cpu_ZF, zero, | ||
49 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSEL(DisasContext *s, arg_VSEL *a) | ||
50 | if (sz == 1) { | ||
51 | tcg_gen_andi_i32(dest, dest, 0xffff); | ||
52 | } | ||
53 | - neon_store_reg32(dest, rd); | ||
54 | + vfp_store_reg32(dest, rd); | ||
55 | tcg_temp_free_i32(frn); | ||
56 | tcg_temp_free_i32(frm); | ||
57 | tcg_temp_free_i32(dest); | ||
58 | @@ -XXX,XX +XXX,XX @@ static bool trans_VRINT(DisasContext *s, arg_VRINT *a) | ||
59 | TCGv_i32 tcg_res; | ||
60 | tcg_op = tcg_temp_new_i32(); | ||
61 | tcg_res = tcg_temp_new_i32(); | ||
62 | - neon_load_reg32(tcg_op, rm); | ||
63 | + vfp_load_reg32(tcg_op, rm); | ||
64 | if (sz == 1) { | ||
65 | gen_helper_rinth(tcg_res, tcg_op, fpst); | ||
66 | } else { | ||
67 | gen_helper_rints(tcg_res, tcg_op, fpst); | ||
68 | } | ||
69 | - neon_store_reg32(tcg_res, rd); | ||
70 | + vfp_store_reg32(tcg_res, rd); | ||
71 | tcg_temp_free_i32(tcg_op); | ||
72 | tcg_temp_free_i32(tcg_res); | ||
73 | } | ||
74 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a) | ||
75 | gen_helper_vfp_tould(tcg_res, tcg_double, tcg_shift, fpst); | ||
76 | } | ||
77 | tcg_gen_extrl_i64_i32(tcg_tmp, tcg_res); | ||
78 | - neon_store_reg32(tcg_tmp, rd); | ||
79 | + vfp_store_reg32(tcg_tmp, rd); | ||
80 | tcg_temp_free_i32(tcg_tmp); | ||
81 | tcg_temp_free_i64(tcg_res); | ||
82 | tcg_temp_free_i64(tcg_double); | ||
83 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a) | ||
84 | TCGv_i32 tcg_single, tcg_res; | ||
85 | tcg_single = tcg_temp_new_i32(); | ||
86 | tcg_res = tcg_temp_new_i32(); | ||
87 | - neon_load_reg32(tcg_single, rm); | ||
88 | + vfp_load_reg32(tcg_single, rm); | ||
89 | if (sz == 1) { | ||
90 | if (is_signed) { | ||
91 | gen_helper_vfp_toslh(tcg_res, tcg_single, tcg_shift, fpst); | ||
92 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT(DisasContext *s, arg_VCVT *a) | ||
93 | gen_helper_vfp_touls(tcg_res, tcg_single, tcg_shift, fpst); | ||
94 | } | ||
95 | } | ||
96 | - neon_store_reg32(tcg_res, rd); | ||
97 | + vfp_store_reg32(tcg_res, rd); | ||
98 | tcg_temp_free_i32(tcg_res); | ||
99 | tcg_temp_free_i32(tcg_single); | ||
100 | } | ||
101 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_half(DisasContext *s, arg_VMOV_single *a) | ||
102 | if (a->l) { | ||
103 | /* VFP to general purpose register */ | ||
104 | tmp = tcg_temp_new_i32(); | ||
105 | - neon_load_reg32(tmp, a->vn); | ||
106 | + vfp_load_reg32(tmp, a->vn); | ||
107 | tcg_gen_andi_i32(tmp, tmp, 0xffff); | ||
108 | store_reg(s, a->rt, tmp); | ||
109 | } else { | ||
110 | /* general purpose register to VFP */ | ||
111 | tmp = load_reg(s, a->rt); | ||
112 | tcg_gen_andi_i32(tmp, tmp, 0xffff); | ||
113 | - neon_store_reg32(tmp, a->vn); | ||
114 | + vfp_store_reg32(tmp, a->vn); | ||
115 | tcg_temp_free_i32(tmp); | ||
116 | } | ||
117 | |||
118 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a) | ||
119 | if (a->l) { | ||
120 | /* VFP to general purpose register */ | ||
121 | tmp = tcg_temp_new_i32(); | ||
122 | - neon_load_reg32(tmp, a->vn); | ||
123 | + vfp_load_reg32(tmp, a->vn); | ||
124 | if (a->rt == 15) { | ||
125 | /* Set the 4 flag bits in the CPSR. */ | ||
126 | gen_set_nzcv(tmp); | ||
127 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_single(DisasContext *s, arg_VMOV_single *a) | ||
128 | } else { | ||
129 | /* general purpose register to VFP */ | ||
130 | tmp = load_reg(s, a->rt); | ||
131 | - neon_store_reg32(tmp, a->vn); | ||
132 | + vfp_store_reg32(tmp, a->vn); | ||
133 | tcg_temp_free_i32(tmp); | ||
134 | } | ||
135 | |||
136 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a) | ||
137 | if (a->op) { | ||
138 | /* fpreg to gpreg */ | ||
139 | tmp = tcg_temp_new_i32(); | ||
140 | - neon_load_reg32(tmp, a->vm); | ||
141 | + vfp_load_reg32(tmp, a->vm); | ||
142 | store_reg(s, a->rt, tmp); | ||
143 | tmp = tcg_temp_new_i32(); | ||
144 | - neon_load_reg32(tmp, a->vm + 1); | ||
145 | + vfp_load_reg32(tmp, a->vm + 1); | ||
146 | store_reg(s, a->rt2, tmp); | ||
147 | } else { | ||
148 | /* gpreg to fpreg */ | ||
149 | tmp = load_reg(s, a->rt); | ||
150 | - neon_store_reg32(tmp, a->vm); | ||
151 | + vfp_store_reg32(tmp, a->vm); | ||
152 | tcg_temp_free_i32(tmp); | ||
153 | tmp = load_reg(s, a->rt2); | ||
154 | - neon_store_reg32(tmp, a->vm + 1); | ||
155 | + vfp_store_reg32(tmp, a->vm + 1); | ||
156 | tcg_temp_free_i32(tmp); | ||
157 | } | ||
158 | |||
159 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_64_dp(DisasContext *s, arg_VMOV_64_dp *a) | ||
160 | if (a->op) { | ||
161 | /* fpreg to gpreg */ | ||
162 | tmp = tcg_temp_new_i32(); | ||
163 | - neon_load_reg32(tmp, a->vm * 2); | ||
164 | + vfp_load_reg32(tmp, a->vm * 2); | ||
165 | store_reg(s, a->rt, tmp); | ||
166 | tmp = tcg_temp_new_i32(); | ||
167 | - neon_load_reg32(tmp, a->vm * 2 + 1); | ||
168 | + vfp_load_reg32(tmp, a->vm * 2 + 1); | ||
169 | store_reg(s, a->rt2, tmp); | ||
170 | } else { | ||
171 | /* gpreg to fpreg */ | ||
172 | tmp = load_reg(s, a->rt); | ||
173 | - neon_store_reg32(tmp, a->vm * 2); | ||
174 | + vfp_store_reg32(tmp, a->vm * 2); | ||
175 | tcg_temp_free_i32(tmp); | ||
176 | tmp = load_reg(s, a->rt2); | ||
177 | - neon_store_reg32(tmp, a->vm * 2 + 1); | ||
178 | + vfp_store_reg32(tmp, a->vm * 2 + 1); | ||
179 | tcg_temp_free_i32(tmp); | ||
180 | } | ||
181 | |||
182 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_hp(DisasContext *s, arg_VLDR_VSTR_sp *a) | ||
183 | tmp = tcg_temp_new_i32(); | ||
184 | if (a->l) { | ||
185 | gen_aa32_ld16u(s, tmp, addr, get_mem_index(s)); | ||
186 | - neon_store_reg32(tmp, a->vd); | ||
187 | + vfp_store_reg32(tmp, a->vd); | ||
188 | } else { | ||
189 | - neon_load_reg32(tmp, a->vd); | ||
190 | + vfp_load_reg32(tmp, a->vd); | ||
191 | gen_aa32_st16(s, tmp, addr, get_mem_index(s)); | ||
192 | } | ||
193 | tcg_temp_free_i32(tmp); | ||
194 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDR_VSTR_sp(DisasContext *s, arg_VLDR_VSTR_sp *a) | ||
195 | tmp = tcg_temp_new_i32(); | ||
196 | if (a->l) { | ||
197 | gen_aa32_ld32u(s, tmp, addr, get_mem_index(s)); | ||
198 | - neon_store_reg32(tmp, a->vd); | ||
199 | + vfp_store_reg32(tmp, a->vd); | ||
200 | } else { | ||
201 | - neon_load_reg32(tmp, a->vd); | ||
202 | + vfp_load_reg32(tmp, a->vd); | ||
203 | gen_aa32_st32(s, tmp, addr, get_mem_index(s)); | ||
204 | } | ||
205 | tcg_temp_free_i32(tmp); | ||
206 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDM_VSTM_sp(DisasContext *s, arg_VLDM_VSTM_sp *a) | ||
207 | if (a->l) { | ||
208 | /* load */ | ||
209 | gen_aa32_ld32u(s, tmp, addr, get_mem_index(s)); | ||
210 | - neon_store_reg32(tmp, a->vd + i); | ||
211 | + vfp_store_reg32(tmp, a->vd + i); | ||
212 | } else { | ||
213 | /* store */ | ||
214 | - neon_load_reg32(tmp, a->vd + i); | ||
215 | + vfp_load_reg32(tmp, a->vd + i); | ||
216 | gen_aa32_st32(s, tmp, addr, get_mem_index(s)); | ||
217 | } | ||
218 | tcg_gen_addi_i32(addr, addr, offset); | ||
219 | @@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn, | ||
220 | fd = tcg_temp_new_i32(); | ||
221 | fpst = fpstatus_ptr(FPST_FPCR); | ||
222 | |||
223 | - neon_load_reg32(f0, vn); | ||
224 | - neon_load_reg32(f1, vm); | ||
225 | + vfp_load_reg32(f0, vn); | ||
226 | + vfp_load_reg32(f1, vm); | ||
227 | |||
228 | for (;;) { | ||
229 | if (reads_vd) { | ||
230 | - neon_load_reg32(fd, vd); | ||
231 | + vfp_load_reg32(fd, vd); | ||
232 | } | ||
233 | fn(fd, f0, f1, fpst); | ||
234 | - neon_store_reg32(fd, vd); | ||
235 | + vfp_store_reg32(fd, vd); | ||
236 | |||
237 | if (veclen == 0) { | ||
238 | break; | 43 | break; |
239 | @@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_sp(DisasContext *s, VFPGen3OpSPFn *fn, | 44 | case 2: |
240 | veclen--; | 45 | g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ |
241 | vd = vfp_advance_sreg(vd, delta_d); | 46 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) |
242 | vn = vfp_advance_sreg(vn, delta_d); | 47 | switch (el) { |
243 | - neon_load_reg32(f0, vn); | 48 | case 3: |
244 | + vfp_load_reg32(f0, vn); | 49 | mmu_idx = ARMMMUIdx_SE10_0; |
245 | if (delta_m) { | 50 | + secure = true; |
246 | vm = vfp_advance_sreg(vm, delta_m); | ||
247 | - neon_load_reg32(f1, vm); | ||
248 | + vfp_load_reg32(f1, vm); | ||
249 | } | ||
250 | } | ||
251 | |||
252 | @@ -XXX,XX +XXX,XX @@ static bool do_vfp_3op_hp(DisasContext *s, VFPGen3OpSPFn *fn, | ||
253 | fd = tcg_temp_new_i32(); | ||
254 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
255 | |||
256 | - neon_load_reg32(f0, vn); | ||
257 | - neon_load_reg32(f1, vm); | ||
258 | + vfp_load_reg32(f0, vn); | ||
259 | + vfp_load_reg32(f1, vm); | ||
260 | |||
261 | if (reads_vd) { | ||
262 | - neon_load_reg32(fd, vd); | ||
263 | + vfp_load_reg32(fd, vd); | ||
264 | } | ||
265 | fn(fd, f0, f1, fpst); | ||
266 | - neon_store_reg32(fd, vd); | ||
267 | + vfp_store_reg32(fd, vd); | ||
268 | |||
269 | tcg_temp_free_i32(f0); | ||
270 | tcg_temp_free_i32(f1); | ||
271 | @@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm) | ||
272 | f0 = tcg_temp_new_i32(); | ||
273 | fd = tcg_temp_new_i32(); | ||
274 | |||
275 | - neon_load_reg32(f0, vm); | ||
276 | + vfp_load_reg32(f0, vm); | ||
277 | |||
278 | for (;;) { | ||
279 | fn(fd, f0); | ||
280 | - neon_store_reg32(fd, vd); | ||
281 | + vfp_store_reg32(fd, vd); | ||
282 | |||
283 | if (veclen == 0) { | ||
284 | break; | 51 | break; |
285 | @@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm) | 52 | case 2: |
286 | /* single source one-many */ | 53 | g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */ |
287 | while (veclen--) { | 54 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) |
288 | vd = vfp_advance_sreg(vd, delta_d); | 55 | case 4: |
289 | - neon_store_reg32(fd, vd); | 56 | /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */ |
290 | + vfp_store_reg32(fd, vd); | 57 | mmu_idx = ARMMMUIdx_E10_1; |
291 | } | 58 | + secure = false; |
292 | break; | 59 | break; |
293 | } | 60 | case 6: |
294 | @@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_sp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm) | 61 | /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */ |
295 | veclen--; | 62 | mmu_idx = ARMMMUIdx_E10_0; |
296 | vd = vfp_advance_sreg(vd, delta_d); | 63 | + secure = false; |
297 | vm = vfp_advance_sreg(vm, delta_m); | 64 | break; |
298 | - neon_load_reg32(f0, vm); | 65 | default: |
299 | + vfp_load_reg32(f0, vm); | ||
300 | } | ||
301 | |||
302 | tcg_temp_free_i32(f0); | ||
303 | @@ -XXX,XX +XXX,XX @@ static bool do_vfp_2op_hp(DisasContext *s, VFPGen2OpSPFn *fn, int vd, int vm) | ||
304 | } | ||
305 | |||
306 | f0 = tcg_temp_new_i32(); | ||
307 | - neon_load_reg32(f0, vm); | ||
308 | + vfp_load_reg32(f0, vm); | ||
309 | fn(f0, f0); | ||
310 | - neon_store_reg32(f0, vd); | ||
311 | + vfp_store_reg32(f0, vd); | ||
312 | tcg_temp_free_i32(f0); | ||
313 | |||
314 | return true; | ||
315 | @@ -XXX,XX +XXX,XX @@ static bool do_vfm_hp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d) | ||
316 | vm = tcg_temp_new_i32(); | ||
317 | vd = tcg_temp_new_i32(); | ||
318 | |||
319 | - neon_load_reg32(vn, a->vn); | ||
320 | - neon_load_reg32(vm, a->vm); | ||
321 | + vfp_load_reg32(vn, a->vn); | ||
322 | + vfp_load_reg32(vm, a->vm); | ||
323 | if (neg_n) { | ||
324 | /* VFNMS, VFMS */ | ||
325 | gen_helper_vfp_negh(vn, vn); | ||
326 | } | ||
327 | - neon_load_reg32(vd, a->vd); | ||
328 | + vfp_load_reg32(vd, a->vd); | ||
329 | if (neg_d) { | ||
330 | /* VFNMA, VFNMS */ | ||
331 | gen_helper_vfp_negh(vd, vd); | ||
332 | } | ||
333 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
334 | gen_helper_vfp_muladdh(vd, vn, vm, vd, fpst); | ||
335 | - neon_store_reg32(vd, a->vd); | ||
336 | + vfp_store_reg32(vd, a->vd); | ||
337 | |||
338 | tcg_temp_free_ptr(fpst); | ||
339 | tcg_temp_free_i32(vn); | ||
340 | @@ -XXX,XX +XXX,XX @@ static bool do_vfm_sp(DisasContext *s, arg_VFMA_sp *a, bool neg_n, bool neg_d) | ||
341 | vm = tcg_temp_new_i32(); | ||
342 | vd = tcg_temp_new_i32(); | ||
343 | |||
344 | - neon_load_reg32(vn, a->vn); | ||
345 | - neon_load_reg32(vm, a->vm); | ||
346 | + vfp_load_reg32(vn, a->vn); | ||
347 | + vfp_load_reg32(vm, a->vm); | ||
348 | if (neg_n) { | ||
349 | /* VFNMS, VFMS */ | ||
350 | gen_helper_vfp_negs(vn, vn); | ||
351 | } | ||
352 | - neon_load_reg32(vd, a->vd); | ||
353 | + vfp_load_reg32(vd, a->vd); | ||
354 | if (neg_d) { | ||
355 | /* VFNMA, VFNMS */ | ||
356 | gen_helper_vfp_negs(vd, vd); | ||
357 | } | ||
358 | fpst = fpstatus_ptr(FPST_FPCR); | ||
359 | gen_helper_vfp_muladds(vd, vn, vm, vd, fpst); | ||
360 | - neon_store_reg32(vd, a->vd); | ||
361 | + vfp_store_reg32(vd, a->vd); | ||
362 | |||
363 | tcg_temp_free_ptr(fpst); | ||
364 | tcg_temp_free_i32(vn); | ||
365 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_hp(DisasContext *s, arg_VMOV_imm_sp *a) | ||
366 | } | ||
367 | |||
368 | fd = tcg_const_i32(vfp_expand_imm(MO_16, a->imm)); | ||
369 | - neon_store_reg32(fd, a->vd); | ||
370 | + vfp_store_reg32(fd, a->vd); | ||
371 | tcg_temp_free_i32(fd); | ||
372 | return true; | ||
373 | } | ||
374 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a) | ||
375 | fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm)); | ||
376 | |||
377 | for (;;) { | ||
378 | - neon_store_reg32(fd, vd); | ||
379 | + vfp_store_reg32(fd, vd); | ||
380 | |||
381 | if (veclen == 0) { | ||
382 | break; | ||
383 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_hp(DisasContext *s, arg_VCMP_sp *a) | ||
384 | vd = tcg_temp_new_i32(); | ||
385 | vm = tcg_temp_new_i32(); | ||
386 | |||
387 | - neon_load_reg32(vd, a->vd); | ||
388 | + vfp_load_reg32(vd, a->vd); | ||
389 | if (a->z) { | ||
390 | tcg_gen_movi_i32(vm, 0); | ||
391 | } else { | ||
392 | - neon_load_reg32(vm, a->vm); | ||
393 | + vfp_load_reg32(vm, a->vm); | ||
394 | } | ||
395 | |||
396 | if (a->e) { | ||
397 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCMP_sp(DisasContext *s, arg_VCMP_sp *a) | ||
398 | vd = tcg_temp_new_i32(); | ||
399 | vm = tcg_temp_new_i32(); | ||
400 | |||
401 | - neon_load_reg32(vd, a->vd); | ||
402 | + vfp_load_reg32(vd, a->vd); | ||
403 | if (a->z) { | ||
404 | tcg_gen_movi_i32(vm, 0); | ||
405 | } else { | ||
406 | - neon_load_reg32(vm, a->vm); | ||
407 | + vfp_load_reg32(vm, a->vm); | ||
408 | } | ||
409 | |||
410 | if (a->e) { | ||
411 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f32_f16(DisasContext *s, arg_VCVT_f32_f16 *a) | ||
412 | /* The T bit tells us if we want the low or high 16 bits of Vm */ | ||
413 | tcg_gen_ld16u_i32(tmp, cpu_env, vfp_f16_offset(a->vm, a->t)); | ||
414 | gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp_mode); | ||
415 | - neon_store_reg32(tmp, a->vd); | ||
416 | + vfp_store_reg32(tmp, a->vd); | ||
417 | tcg_temp_free_i32(ahp_mode); | ||
418 | tcg_temp_free_ptr(fpst); | ||
419 | tcg_temp_free_i32(tmp); | ||
420 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a) | ||
421 | ahp_mode = get_ahp_flag(); | ||
422 | tmp = tcg_temp_new_i32(); | ||
423 | |||
424 | - neon_load_reg32(tmp, a->vm); | ||
425 | + vfp_load_reg32(tmp, a->vm); | ||
426 | gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp_mode); | ||
427 | tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t)); | ||
428 | tcg_temp_free_i32(ahp_mode); | ||
429 | @@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_hp(DisasContext *s, arg_VRINTR_sp *a) | ||
430 | } | ||
431 | |||
432 | tmp = tcg_temp_new_i32(); | ||
433 | - neon_load_reg32(tmp, a->vm); | ||
434 | + vfp_load_reg32(tmp, a->vm); | ||
435 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
436 | gen_helper_rinth(tmp, tmp, fpst); | ||
437 | - neon_store_reg32(tmp, a->vd); | ||
438 | + vfp_store_reg32(tmp, a->vd); | ||
439 | tcg_temp_free_ptr(fpst); | ||
440 | tcg_temp_free_i32(tmp); | ||
441 | return true; | ||
442 | @@ -XXX,XX +XXX,XX @@ static bool trans_VRINTR_sp(DisasContext *s, arg_VRINTR_sp *a) | ||
443 | } | ||
444 | |||
445 | tmp = tcg_temp_new_i32(); | ||
446 | - neon_load_reg32(tmp, a->vm); | ||
447 | + vfp_load_reg32(tmp, a->vm); | ||
448 | fpst = fpstatus_ptr(FPST_FPCR); | ||
449 | gen_helper_rints(tmp, tmp, fpst); | ||
450 | - neon_store_reg32(tmp, a->vd); | ||
451 | + vfp_store_reg32(tmp, a->vd); | ||
452 | tcg_temp_free_ptr(fpst); | ||
453 | tcg_temp_free_i32(tmp); | ||
454 | return true; | ||
455 | @@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_hp(DisasContext *s, arg_VRINTZ_sp *a) | ||
456 | } | ||
457 | |||
458 | tmp = tcg_temp_new_i32(); | ||
459 | - neon_load_reg32(tmp, a->vm); | ||
460 | + vfp_load_reg32(tmp, a->vm); | ||
461 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
462 | tcg_rmode = tcg_const_i32(float_round_to_zero); | ||
463 | gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst); | ||
464 | gen_helper_rinth(tmp, tmp, fpst); | ||
465 | gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst); | ||
466 | - neon_store_reg32(tmp, a->vd); | ||
467 | + vfp_store_reg32(tmp, a->vd); | ||
468 | tcg_temp_free_ptr(fpst); | ||
469 | tcg_temp_free_i32(tcg_rmode); | ||
470 | tcg_temp_free_i32(tmp); | ||
471 | @@ -XXX,XX +XXX,XX @@ static bool trans_VRINTZ_sp(DisasContext *s, arg_VRINTZ_sp *a) | ||
472 | } | ||
473 | |||
474 | tmp = tcg_temp_new_i32(); | ||
475 | - neon_load_reg32(tmp, a->vm); | ||
476 | + vfp_load_reg32(tmp, a->vm); | ||
477 | fpst = fpstatus_ptr(FPST_FPCR); | ||
478 | tcg_rmode = tcg_const_i32(float_round_to_zero); | ||
479 | gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst); | ||
480 | gen_helper_rints(tmp, tmp, fpst); | ||
481 | gen_helper_set_rmode(tcg_rmode, tcg_rmode, fpst); | ||
482 | - neon_store_reg32(tmp, a->vd); | ||
483 | + vfp_store_reg32(tmp, a->vd); | ||
484 | tcg_temp_free_ptr(fpst); | ||
485 | tcg_temp_free_i32(tcg_rmode); | ||
486 | tcg_temp_free_i32(tmp); | ||
487 | @@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_hp(DisasContext *s, arg_VRINTX_sp *a) | ||
488 | } | ||
489 | |||
490 | tmp = tcg_temp_new_i32(); | ||
491 | - neon_load_reg32(tmp, a->vm); | ||
492 | + vfp_load_reg32(tmp, a->vm); | ||
493 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
494 | gen_helper_rinth_exact(tmp, tmp, fpst); | ||
495 | - neon_store_reg32(tmp, a->vd); | ||
496 | + vfp_store_reg32(tmp, a->vd); | ||
497 | tcg_temp_free_ptr(fpst); | ||
498 | tcg_temp_free_i32(tmp); | ||
499 | return true; | ||
500 | @@ -XXX,XX +XXX,XX @@ static bool trans_VRINTX_sp(DisasContext *s, arg_VRINTX_sp *a) | ||
501 | } | ||
502 | |||
503 | tmp = tcg_temp_new_i32(); | ||
504 | - neon_load_reg32(tmp, a->vm); | ||
505 | + vfp_load_reg32(tmp, a->vm); | ||
506 | fpst = fpstatus_ptr(FPST_FPCR); | ||
507 | gen_helper_rints_exact(tmp, tmp, fpst); | ||
508 | - neon_store_reg32(tmp, a->vd); | ||
509 | + vfp_store_reg32(tmp, a->vd); | ||
510 | tcg_temp_free_ptr(fpst); | ||
511 | tcg_temp_free_i32(tmp); | ||
512 | return true; | ||
513 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp(DisasContext *s, arg_VCVT_sp *a) | ||
514 | |||
515 | vm = tcg_temp_new_i32(); | ||
516 | vd = tcg_temp_new_i64(); | ||
517 | - neon_load_reg32(vm, a->vm); | ||
518 | + vfp_load_reg32(vm, a->vm); | ||
519 | gen_helper_vfp_fcvtds(vd, vm, cpu_env); | ||
520 | neon_store_reg64(vd, a->vd); | ||
521 | tcg_temp_free_i32(vm); | ||
522 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp(DisasContext *s, arg_VCVT_dp *a) | ||
523 | vm = tcg_temp_new_i64(); | ||
524 | neon_load_reg64(vm, a->vm); | ||
525 | gen_helper_vfp_fcvtsd(vd, vm, cpu_env); | ||
526 | - neon_store_reg32(vd, a->vd); | ||
527 | + vfp_store_reg32(vd, a->vd); | ||
528 | tcg_temp_free_i32(vd); | ||
529 | tcg_temp_free_i64(vm); | ||
530 | return true; | ||
531 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a) | ||
532 | } | ||
533 | |||
534 | vm = tcg_temp_new_i32(); | ||
535 | - neon_load_reg32(vm, a->vm); | ||
536 | + vfp_load_reg32(vm, a->vm); | ||
537 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
538 | if (a->s) { | ||
539 | /* i32 -> f16 */ | ||
540 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_hp(DisasContext *s, arg_VCVT_int_sp *a) | ||
541 | /* u32 -> f16 */ | ||
542 | gen_helper_vfp_uitoh(vm, vm, fpst); | ||
543 | } | ||
544 | - neon_store_reg32(vm, a->vd); | ||
545 | + vfp_store_reg32(vm, a->vd); | ||
546 | tcg_temp_free_i32(vm); | ||
547 | tcg_temp_free_ptr(fpst); | ||
548 | return true; | ||
549 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a) | ||
550 | } | ||
551 | |||
552 | vm = tcg_temp_new_i32(); | ||
553 | - neon_load_reg32(vm, a->vm); | ||
554 | + vfp_load_reg32(vm, a->vm); | ||
555 | fpst = fpstatus_ptr(FPST_FPCR); | ||
556 | if (a->s) { | ||
557 | /* i32 -> f32 */ | ||
558 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_sp(DisasContext *s, arg_VCVT_int_sp *a) | ||
559 | /* u32 -> f32 */ | ||
560 | gen_helper_vfp_uitos(vm, vm, fpst); | ||
561 | } | ||
562 | - neon_store_reg32(vm, a->vd); | ||
563 | + vfp_store_reg32(vm, a->vd); | ||
564 | tcg_temp_free_i32(vm); | ||
565 | tcg_temp_free_ptr(fpst); | ||
566 | return true; | ||
567 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_int_dp(DisasContext *s, arg_VCVT_int_dp *a) | ||
568 | |||
569 | vm = tcg_temp_new_i32(); | ||
570 | vd = tcg_temp_new_i64(); | ||
571 | - neon_load_reg32(vm, a->vm); | ||
572 | + vfp_load_reg32(vm, a->vm); | ||
573 | fpst = fpstatus_ptr(FPST_FPCR); | ||
574 | if (a->s) { | ||
575 | /* i32 -> f64 */ | ||
576 | @@ -XXX,XX +XXX,XX @@ static bool trans_VJCVT(DisasContext *s, arg_VJCVT *a) | ||
577 | vd = tcg_temp_new_i32(); | ||
578 | neon_load_reg64(vm, a->vm); | ||
579 | gen_helper_vjcvt(vd, vm, cpu_env); | ||
580 | - neon_store_reg32(vd, a->vd); | ||
581 | + vfp_store_reg32(vd, a->vd); | ||
582 | tcg_temp_free_i64(vm); | ||
583 | tcg_temp_free_i32(vd); | ||
584 | return true; | ||
585 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a) | ||
586 | frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm); | ||
587 | |||
588 | vd = tcg_temp_new_i32(); | ||
589 | - neon_load_reg32(vd, a->vd); | ||
590 | + vfp_load_reg32(vd, a->vd); | ||
591 | |||
592 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
593 | shift = tcg_const_i32(frac_bits); | ||
594 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_hp(DisasContext *s, arg_VCVT_fix_sp *a) | ||
595 | g_assert_not_reached(); | 66 | g_assert_not_reached(); |
596 | } | 67 | } |
597 | 68 | ||
598 | - neon_store_reg32(vd, a->vd); | 69 | - par64 = do_ats_write(env, value, access_type, mmu_idx); |
599 | + vfp_store_reg32(vd, a->vd); | 70 | + par64 = do_ats_write(env, value, access_type, mmu_idx, secure); |
600 | tcg_temp_free_i32(vd); | 71 | |
601 | tcg_temp_free_i32(shift); | 72 | A32_BANKED_CURRENT_REG_SET(env, par, par64); |
602 | tcg_temp_free_ptr(fpst); | 73 | #else |
603 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a) | 74 | @@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri, |
604 | frac_bits = (a->opc & 1) ? (32 - a->imm) : (16 - a->imm); | 75 | MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD; |
605 | 76 | uint64_t par64; | |
606 | vd = tcg_temp_new_i32(); | 77 | |
607 | - neon_load_reg32(vd, a->vd); | 78 | - par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2); |
608 | + vfp_load_reg32(vd, a->vd); | 79 | + /* There is no SecureEL2 for AArch32. */ |
609 | 80 | + par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false); | |
610 | fpst = fpstatus_ptr(FPST_FPCR); | 81 | |
611 | shift = tcg_const_i32(frac_bits); | 82 | A32_BANKED_CURRENT_REG_SET(env, par, par64); |
612 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_fix_sp(DisasContext *s, arg_VCVT_fix_sp *a) | 83 | #else |
84 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, | ||
85 | break; | ||
86 | case 6: /* AT S1E3R, AT S1E3W */ | ||
87 | mmu_idx = ARMMMUIdx_SE3; | ||
88 | + secure = true; | ||
89 | break; | ||
90 | default: | ||
91 | g_assert_not_reached(); | ||
92 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, | ||
613 | g_assert_not_reached(); | 93 | g_assert_not_reached(); |
614 | } | 94 | } |
615 | 95 | ||
616 | - neon_store_reg32(vd, a->vd); | 96 | - env->cp15.par_el[1] = do_ats_write(env, value, access_type, mmu_idx); |
617 | + vfp_store_reg32(vd, a->vd); | 97 | + env->cp15.par_el[1] = do_ats_write(env, value, access_type, |
618 | tcg_temp_free_i32(vd); | 98 | + mmu_idx, secure); |
619 | tcg_temp_free_i32(shift); | 99 | #else |
620 | tcg_temp_free_ptr(fpst); | 100 | /* Handled by hardware accelerator. */ |
621 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a) | 101 | g_assert_not_reached(); |
622 | |||
623 | fpst = fpstatus_ptr(FPST_FPCR_F16); | ||
624 | vm = tcg_temp_new_i32(); | ||
625 | - neon_load_reg32(vm, a->vm); | ||
626 | + vfp_load_reg32(vm, a->vm); | ||
627 | |||
628 | if (a->s) { | ||
629 | if (a->rz) { | ||
630 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_hp_int(DisasContext *s, arg_VCVT_sp_int *a) | ||
631 | gen_helper_vfp_touih(vm, vm, fpst); | ||
632 | } | ||
633 | } | ||
634 | - neon_store_reg32(vm, a->vd); | ||
635 | + vfp_store_reg32(vm, a->vd); | ||
636 | tcg_temp_free_i32(vm); | ||
637 | tcg_temp_free_ptr(fpst); | ||
638 | return true; | ||
639 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a) | ||
640 | |||
641 | fpst = fpstatus_ptr(FPST_FPCR); | ||
642 | vm = tcg_temp_new_i32(); | ||
643 | - neon_load_reg32(vm, a->vm); | ||
644 | + vfp_load_reg32(vm, a->vm); | ||
645 | |||
646 | if (a->s) { | ||
647 | if (a->rz) { | ||
648 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_sp_int(DisasContext *s, arg_VCVT_sp_int *a) | ||
649 | gen_helper_vfp_touis(vm, vm, fpst); | ||
650 | } | ||
651 | } | ||
652 | - neon_store_reg32(vm, a->vd); | ||
653 | + vfp_store_reg32(vm, a->vd); | ||
654 | tcg_temp_free_i32(vm); | ||
655 | tcg_temp_free_ptr(fpst); | ||
656 | return true; | ||
657 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_dp_int(DisasContext *s, arg_VCVT_dp_int *a) | ||
658 | gen_helper_vfp_touid(vd, vm, fpst); | ||
659 | } | ||
660 | } | ||
661 | - neon_store_reg32(vd, a->vd); | ||
662 | + vfp_store_reg32(vd, a->vd); | ||
663 | tcg_temp_free_i32(vd); | ||
664 | tcg_temp_free_i64(vm); | ||
665 | tcg_temp_free_ptr(fpst); | ||
666 | @@ -XXX,XX +XXX,XX @@ static bool trans_VINS(DisasContext *s, arg_VINS *a) | ||
667 | /* Insert low half of Vm into high half of Vd */ | ||
668 | rm = tcg_temp_new_i32(); | ||
669 | rd = tcg_temp_new_i32(); | ||
670 | - neon_load_reg32(rm, a->vm); | ||
671 | - neon_load_reg32(rd, a->vd); | ||
672 | + vfp_load_reg32(rm, a->vm); | ||
673 | + vfp_load_reg32(rd, a->vd); | ||
674 | tcg_gen_deposit_i32(rd, rd, rm, 16, 16); | ||
675 | - neon_store_reg32(rd, a->vd); | ||
676 | + vfp_store_reg32(rd, a->vd); | ||
677 | tcg_temp_free_i32(rm); | ||
678 | tcg_temp_free_i32(rd); | ||
679 | return true; | ||
680 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOVX(DisasContext *s, arg_VINS *a) | ||
681 | |||
682 | /* Set Vd to high half of Vm */ | ||
683 | rm = tcg_temp_new_i32(); | ||
684 | - neon_load_reg32(rm, a->vm); | ||
685 | + vfp_load_reg32(rm, a->vm); | ||
686 | tcg_gen_shri_i32(rm, rm, 16); | ||
687 | - neon_store_reg32(rm, a->vd); | ||
688 | + vfp_store_reg32(rm, a->vd); | ||
689 | tcg_temp_free_i32(rm); | ||
690 | return true; | ||
691 | } | ||
692 | -- | 102 | -- |
693 | 2.20.1 | 103 | 2.25.1 |
694 | |||
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | These are the only users of neon_reg_offset, so remove that. | 3 | For a-profile aarch64, which does not bank system registers, it takes |
4 | quite a lot of code to switch between security states. In the process, | ||
5 | registers such as TCR_EL{1,2} must be swapped, which in itself requires | ||
6 | the flushing of softmmu tlbs. Therefore it doesn't buy us anything to | ||
7 | separate tlbs by security state. | ||
4 | 8 | ||
9 | Retain the distinction between Stage2 and Stage2_S. | ||
10 | |||
11 | This will be important as we implement FEAT_RME, and do not wish to | ||
12 | add a third set of mmu indexes for Realm state. | ||
13 | |||
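As an illustration of the trade-off (a hedged sketch, not part of the patch; the helper name is invented, but SCR_NS, the ARMMMUIdxBit_* values and tlb_flush_by_mmuidx() all appear in the diff below): with one shared set of mmu indexes, a flip of SCR_EL3.NS simply invalidates the shared TLB entries below EL3, instead of keeping parallel secure/non-secure index sets alive.

    /* Sketch only: flush everything below EL3 when the NS bit changes,
     * since all those translations may now belong to the other world. */
    static void sketch_scr_ns_flipped(CPUState *cs, uint64_t old_scr,
                                      uint64_t new_scr)
    {
        if ((old_scr ^ new_scr) & SCR_NS) {
            tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E10_0 |
                                    ARMMMUIdxBit_E10_1 |
                                    ARMMMUIdxBit_E10_1_PAN |
                                    ARMMMUIdxBit_E20_0 |
                                    ARMMMUIdxBit_E20_2 |
                                    ARMMMUIdxBit_E20_2_PAN |
                                    ARMMMUIdxBit_E2);
        }
    }
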
14 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 15 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
6 | Message-id: 20201030022618.785675-4-richard.henderson@linaro.org | 16 | Message-id: 20221001162318.153420-11-richard.henderson@linaro.org |
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 17 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | --- | 18 | --- |
10 | target/arm/translate.c | 14 ++------------ | 19 | target/arm/cpu-param.h | 2 +- |
11 | 1 file changed, 2 insertions(+), 12 deletions(-) | 20 | target/arm/cpu.h | 72 +++++++------------ |
21 | target/arm/internals.h | 31 +------- | ||
22 | target/arm/helper.c | 144 +++++++++++++------------------------ | ||
23 | target/arm/ptw.c | 25 ++----- | ||
24 | target/arm/translate-a64.c | 8 --- | ||
25 | target/arm/translate.c | 6 +- | ||
26 | 7 files changed, 85 insertions(+), 203 deletions(-) | ||
12 | 27 | ||
28 | diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h | ||
29 | index XXXXXXX..XXXXXXX 100644 | ||
30 | --- a/target/arm/cpu-param.h | ||
31 | +++ b/target/arm/cpu-param.h | ||
32 | @@ -XXX,XX +XXX,XX @@ | ||
33 | # define TARGET_PAGE_BITS_MIN 10 | ||
34 | #endif | ||
35 | |||
36 | -#define NB_MMU_MODES 15 | ||
37 | +#define NB_MMU_MODES 8 | ||
38 | |||
39 | #endif | ||
40 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | ||
41 | index XXXXXXX..XXXXXXX 100644 | ||
42 | --- a/target/arm/cpu.h | ||
43 | +++ b/target/arm/cpu.h | ||
44 | @@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync); | ||
45 | * table over and over. | ||
46 | * 6. we need separate EL1/EL2 mmu_idx for handling the Privileged Access | ||
47 | * Never (PAN) bit within PSTATE. | ||
48 | + * 7. we fold together the secure and non-secure regimes for A-profile, | ||
49 | + * because there are no banked system registers for aarch64, so the | ||
50 | + * process of switching between secure and non-secure is | ||
51 | + * already heavyweight. | ||
52 | * | ||
53 | * This gives us the following list of cases: | ||
54 | * | ||
55 | - * NS EL0 EL1&0 stage 1+2 (aka NS PL0) | ||
56 | - * NS EL1 EL1&0 stage 1+2 (aka NS PL1) | ||
57 | - * NS EL1 EL1&0 stage 1+2 +PAN | ||
58 | - * NS EL0 EL2&0 | ||
59 | - * NS EL2 EL2&0 | ||
60 | - * NS EL2 EL2&0 +PAN | ||
61 | - * NS EL2 (aka NS PL2) | ||
62 | - * S EL0 EL1&0 (aka S PL0) | ||
63 | - * S EL1 EL1&0 (not used if EL3 is 32 bit) | ||
64 | - * S EL1 EL1&0 +PAN | ||
65 | - * S EL3 (aka S PL1) | ||
66 | + * EL0 EL1&0 stage 1+2 (aka NS PL0) | ||
67 | + * EL1 EL1&0 stage 1+2 (aka NS PL1) | ||
68 | + * EL1 EL1&0 stage 1+2 +PAN | ||
69 | + * EL0 EL2&0 | ||
70 | + * EL2 EL2&0 | ||
71 | + * EL2 EL2&0 +PAN | ||
72 | + * EL2 (aka NS PL2) | ||
73 | + * EL3 (aka S PL1) | ||
74 | * | ||
75 | - * for a total of 11 different mmu_idx. | ||
76 | + * for a total of 8 different mmu_idx. | ||
77 | * | ||
78 | * R profile CPUs have an MPU, but can use the same set of MMU indexes | ||
79 | - * as A profile. They only need to distinguish NS EL0 and NS EL1 (and | ||
80 | - * NS EL2 if we ever model a Cortex-R52). | ||
81 | + * as A profile. They only need to distinguish EL0 and EL1 (and | ||
82 | + * EL2 if we ever model a Cortex-R52). | ||
83 | * | ||
84 | * M profile CPUs are rather different as they do not have a true MMU. | ||
85 | * They have the following different MMU indexes: | ||
86 | @@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync); | ||
87 | #define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */ | ||
88 | #define ARM_MMU_IDX_M 0x40 /* M profile */ | ||
89 | |||
90 | -/* Meanings of the bits for A profile mmu idx values */ | ||
91 | -#define ARM_MMU_IDX_A_NS 0x8 | ||
92 | - | ||
93 | /* Meanings of the bits for M profile mmu idx values */ | ||
94 | #define ARM_MMU_IDX_M_PRIV 0x1 | ||
95 | #define ARM_MMU_IDX_M_NEGPRI 0x2 | ||
96 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | ||
97 | /* | ||
98 | * A-profile. | ||
99 | */ | ||
100 | - ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A, | ||
101 | - ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A, | ||
102 | - ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A, | ||
103 | - ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A, | ||
104 | - ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A, | ||
105 | - ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A, | ||
106 | - ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A, | ||
107 | - ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A, | ||
108 | - | ||
109 | - ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS, | ||
110 | - ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS, | ||
111 | - ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS, | ||
112 | - ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS, | ||
113 | - ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS, | ||
114 | - ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS, | ||
115 | - ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS, | ||
116 | + ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A, | ||
117 | + ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A, | ||
118 | + ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A, | ||
119 | + ARMMMUIdx_E20_2 = 3 | ARM_MMU_IDX_A, | ||
120 | + ARMMMUIdx_E10_1_PAN = 4 | ARM_MMU_IDX_A, | ||
121 | + ARMMMUIdx_E20_2_PAN = 5 | ARM_MMU_IDX_A, | ||
122 | + ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A, | ||
123 | + ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A, | ||
124 | |||
125 | /* | ||
126 | * These are not allocated TLBs and are used only for AT system | ||
127 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | ||
128 | ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB, | ||
129 | ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB, | ||
130 | ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB, | ||
131 | - ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB, | ||
132 | - ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB, | ||
133 | - ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB, | ||
134 | /* | ||
135 | * Not allocated a TLB: used only for second stage of an S12 page | ||
136 | * table walk, or for descriptor loads during first stage of an S1 | ||
137 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx { | ||
138 | * then various TLB flush insns which currently are no-ops or flush | ||
139 | * only stage 1 MMU indexes will need to change to flush stage 2. | ||
140 | */ | ||
141 | - ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB, | ||
142 | - ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB, | ||
143 | + ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB, | ||
144 | + ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB, | ||
145 | |||
146 | /* | ||
147 | * M-profile. | ||
148 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit { | ||
149 | TO_CORE_BIT(E2), | ||
150 | TO_CORE_BIT(E20_2), | ||
151 | TO_CORE_BIT(E20_2_PAN), | ||
152 | - TO_CORE_BIT(SE10_0), | ||
153 | - TO_CORE_BIT(SE20_0), | ||
154 | - TO_CORE_BIT(SE10_1), | ||
155 | - TO_CORE_BIT(SE20_2), | ||
156 | - TO_CORE_BIT(SE10_1_PAN), | ||
157 | - TO_CORE_BIT(SE20_2_PAN), | ||
158 | - TO_CORE_BIT(SE2), | ||
159 | - TO_CORE_BIT(SE3), | ||
160 | + TO_CORE_BIT(E3), | ||
161 | |||
162 | TO_CORE_BIT(MUser), | ||
163 | TO_CORE_BIT(MPriv), | ||
164 | diff --git a/target/arm/internals.h b/target/arm/internals.h | ||
165 | index XXXXXXX..XXXXXXX 100644 | ||
166 | --- a/target/arm/internals.h | ||
167 | +++ b/target/arm/internals.h | ||
168 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx) | ||
169 | case ARMMMUIdx_Stage1_E0: | ||
170 | case ARMMMUIdx_Stage1_E1: | ||
171 | case ARMMMUIdx_Stage1_E1_PAN: | ||
172 | - case ARMMMUIdx_Stage1_SE0: | ||
173 | - case ARMMMUIdx_Stage1_SE1: | ||
174 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
175 | case ARMMMUIdx_E10_0: | ||
176 | case ARMMMUIdx_E10_1: | ||
177 | case ARMMMUIdx_E10_1_PAN: | ||
178 | case ARMMMUIdx_E20_0: | ||
179 | case ARMMMUIdx_E20_2: | ||
180 | case ARMMMUIdx_E20_2_PAN: | ||
181 | - case ARMMMUIdx_SE10_0: | ||
182 | - case ARMMMUIdx_SE10_1: | ||
183 | - case ARMMMUIdx_SE10_1_PAN: | ||
184 | - case ARMMMUIdx_SE20_0: | ||
185 | - case ARMMMUIdx_SE20_2: | ||
186 | - case ARMMMUIdx_SE20_2_PAN: | ||
187 | return true; | ||
188 | default: | ||
189 | return false; | ||
190 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
191 | { | ||
192 | switch (mmu_idx) { | ||
193 | case ARMMMUIdx_Stage1_E1_PAN: | ||
194 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
195 | case ARMMMUIdx_E10_1_PAN: | ||
196 | case ARMMMUIdx_E20_2_PAN: | ||
197 | - case ARMMMUIdx_SE10_1_PAN: | ||
198 | - case ARMMMUIdx_SE20_2_PAN: | ||
199 | return true; | ||
200 | default: | ||
201 | return false; | ||
202 | @@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
203 | static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
204 | { | ||
205 | switch (mmu_idx) { | ||
206 | - case ARMMMUIdx_SE20_0: | ||
207 | - case ARMMMUIdx_SE20_2: | ||
208 | - case ARMMMUIdx_SE20_2_PAN: | ||
209 | case ARMMMUIdx_E20_0: | ||
210 | case ARMMMUIdx_E20_2: | ||
211 | case ARMMMUIdx_E20_2_PAN: | ||
212 | case ARMMMUIdx_Stage2: | ||
213 | case ARMMMUIdx_Stage2_S: | ||
214 | - case ARMMMUIdx_SE2: | ||
215 | case ARMMMUIdx_E2: | ||
216 | return 2; | ||
217 | - case ARMMMUIdx_SE3: | ||
218 | + case ARMMMUIdx_E3: | ||
219 | return 3; | ||
220 | - case ARMMMUIdx_SE10_0: | ||
221 | - case ARMMMUIdx_Stage1_SE0: | ||
222 | - return arm_el_is_aa64(env, 3) ? 1 : 3; | ||
223 | - case ARMMMUIdx_SE10_1: | ||
224 | - case ARMMMUIdx_SE10_1_PAN: | ||
225 | + case ARMMMUIdx_E10_0: | ||
226 | case ARMMMUIdx_Stage1_E0: | ||
227 | + return arm_el_is_aa64(env, 3) || !arm_is_secure_below_el3(env) ? 1 : 3; | ||
228 | case ARMMMUIdx_Stage1_E1: | ||
229 | case ARMMMUIdx_Stage1_E1_PAN: | ||
230 | - case ARMMMUIdx_Stage1_SE1: | ||
231 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
232 | - case ARMMMUIdx_E10_0: | ||
233 | case ARMMMUIdx_E10_1: | ||
234 | case ARMMMUIdx_E10_1_PAN: | ||
235 | case ARMMMUIdx_MPrivNegPri: | ||
236 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx) | ||
237 | case ARMMMUIdx_Stage1_E0: | ||
238 | case ARMMMUIdx_Stage1_E1: | ||
239 | case ARMMMUIdx_Stage1_E1_PAN: | ||
240 | - case ARMMMUIdx_Stage1_SE0: | ||
241 | - case ARMMMUIdx_Stage1_SE1: | ||
242 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
243 | return true; | ||
244 | default: | ||
245 | return false; | ||
246 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
247 | index XXXXXXX..XXXXXXX 100644 | ||
248 | --- a/target/arm/helper.c | ||
249 | +++ b/target/arm/helper.c | ||
250 | @@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
251 | /* Begin with base v8.0 state. */ | ||
252 | uint64_t valid_mask = 0x3fff; | ||
253 | ARMCPU *cpu = env_archcpu(env); | ||
254 | + uint64_t changed; | ||
255 | |||
256 | /* | ||
257 | * Because SCR_EL3 is the "real" cpreg and SCR is the alias, reset always | ||
258 | @@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
259 | |||
260 | /* Clear all-context RES0 bits. */ | ||
261 | value &= valid_mask; | ||
262 | - raw_write(env, ri, value); | ||
263 | + changed = env->cp15.scr_el3 ^ value; | ||
264 | + env->cp15.scr_el3 = value; | ||
265 | + | ||
266 | + /* | ||
267 | + * If SCR_EL3.NS changes, i.e. arm_is_secure_below_el3, then | ||
268 | + * we must invalidate all TLBs below EL3. | ||
269 | + */ | ||
270 | + if (changed & SCR_NS) { | ||
271 | + tlb_flush_by_mmuidx(env_cpu(env), (ARMMMUIdxBit_E10_0 | | ||
272 | + ARMMMUIdxBit_E20_0 | | ||
273 | + ARMMMUIdxBit_E10_1 | | ||
274 | + ARMMMUIdxBit_E20_2 | | ||
275 | + ARMMMUIdxBit_E10_1_PAN | | ||
276 | + ARMMMUIdxBit_E20_2_PAN | | ||
277 | + ARMMMUIdxBit_E2)); | ||
278 | + } | ||
279 | } | ||
280 | |||
281 | static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri) | ||
282 | @@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env) | ||
283 | case ARMMMUIdx_E20_0: | ||
284 | case ARMMMUIdx_E20_2: | ||
285 | case ARMMMUIdx_E20_2_PAN: | ||
286 | - case ARMMMUIdx_SE20_0: | ||
287 | - case ARMMMUIdx_SE20_2: | ||
288 | - case ARMMMUIdx_SE20_2_PAN: | ||
289 | return GTIMER_HYP; | ||
290 | default: | ||
291 | return GTIMER_PHYS; | ||
292 | @@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env) | ||
293 | case ARMMMUIdx_E20_0: | ||
294 | case ARMMMUIdx_E20_2: | ||
295 | case ARMMMUIdx_E20_2_PAN: | ||
296 | - case ARMMMUIdx_SE20_0: | ||
297 | - case ARMMMUIdx_SE20_2: | ||
298 | - case ARMMMUIdx_SE20_2_PAN: | ||
299 | return GTIMER_HYPVIRT; | ||
300 | default: | ||
301 | return GTIMER_VIRT; | ||
302 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
303 | /* stage 1 current state PL1: ATS1CPR, ATS1CPW, ATS1CPRP, ATS1CPWP */ | ||
304 | switch (el) { | ||
305 | case 3: | ||
306 | - mmu_idx = ARMMMUIdx_SE3; | ||
307 | + mmu_idx = ARMMMUIdx_E3; | ||
308 | secure = true; | ||
309 | break; | ||
310 | case 2: | ||
311 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
312 | /* fall through */ | ||
313 | case 1: | ||
314 | if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) { | ||
315 | - mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN | ||
316 | - : ARMMMUIdx_Stage1_E1_PAN); | ||
317 | + mmu_idx = ARMMMUIdx_Stage1_E1_PAN; | ||
318 | } else { | ||
319 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; | ||
320 | + mmu_idx = ARMMMUIdx_Stage1_E1; | ||
321 | } | ||
322 | break; | ||
323 | default: | ||
324 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
325 | /* stage 1 current state PL0: ATS1CUR, ATS1CUW */ | ||
326 | switch (el) { | ||
327 | case 3: | ||
328 | - mmu_idx = ARMMMUIdx_SE10_0; | ||
329 | + mmu_idx = ARMMMUIdx_E10_0; | ||
330 | secure = true; | ||
331 | break; | ||
332 | case 2: | ||
333 | @@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value) | ||
334 | mmu_idx = ARMMMUIdx_Stage1_E0; | ||
335 | break; | ||
336 | case 1: | ||
337 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; | ||
338 | + mmu_idx = ARMMMUIdx_Stage1_E0; | ||
339 | break; | ||
340 | default: | ||
341 | g_assert_not_reached(); | ||
342 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, | ||
343 | switch (ri->opc1) { | ||
344 | case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */ | ||
345 | if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) { | ||
346 | - mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN | ||
347 | - : ARMMMUIdx_Stage1_E1_PAN); | ||
348 | + mmu_idx = ARMMMUIdx_Stage1_E1_PAN; | ||
349 | } else { | ||
350 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1; | ||
351 | + mmu_idx = ARMMMUIdx_Stage1_E1; | ||
352 | } | ||
353 | break; | ||
354 | case 4: /* AT S1E2R, AT S1E2W */ | ||
355 | - mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2; | ||
356 | + mmu_idx = ARMMMUIdx_E2; | ||
357 | break; | ||
358 | case 6: /* AT S1E3R, AT S1E3W */ | ||
359 | - mmu_idx = ARMMMUIdx_SE3; | ||
360 | + mmu_idx = ARMMMUIdx_E3; | ||
361 | secure = true; | ||
362 | break; | ||
363 | default: | ||
364 | @@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri, | ||
365 | } | ||
366 | break; | ||
367 | case 2: /* AT S1E0R, AT S1E0W */ | ||
368 | - mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0; | ||
369 | + mmu_idx = ARMMMUIdx_Stage1_E0; | ||
370 | break; | ||
371 | case 4: /* AT S12E1R, AT S12E1W */ | ||
372 | - mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1; | ||
373 | + mmu_idx = ARMMMUIdx_E10_1; | ||
374 | break; | ||
375 | case 6: /* AT S12E0R, AT S12E0W */ | ||
376 | - mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_E10_0; | ||
377 | + mmu_idx = ARMMMUIdx_E10_0; | ||
378 | break; | ||
379 | default: | ||
380 | g_assert_not_reached(); | ||
381 | @@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
382 | uint16_t mask = ARMMMUIdxBit_E20_2 | | ||
383 | ARMMMUIdxBit_E20_2_PAN | | ||
384 | ARMMMUIdxBit_E20_0; | ||
385 | - | ||
386 | - if (arm_is_secure_below_el3(env)) { | ||
387 | - mask >>= ARM_MMU_IDX_A_NS; | ||
388 | - } | ||
389 | - | ||
390 | tlb_flush_by_mmuidx(env_cpu(env), mask); | ||
391 | } | ||
392 | raw_write(env, ri, value); | ||
393 | @@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
394 | uint16_t mask = ARMMMUIdxBit_E10_1 | | ||
395 | ARMMMUIdxBit_E10_1_PAN | | ||
396 | ARMMMUIdxBit_E10_0; | ||
397 | - | ||
398 | - if (arm_is_secure_below_el3(env)) { | ||
399 | - mask >>= ARM_MMU_IDX_A_NS; | ||
400 | - } | ||
401 | - | ||
402 | tlb_flush_by_mmuidx(cs, mask); | ||
403 | raw_write(env, ri, value); | ||
404 | } | ||
405 | @@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env) | ||
406 | ARMMMUIdxBit_E10_1_PAN | | ||
407 | ARMMMUIdxBit_E10_0; | ||
408 | } | ||
409 | - | ||
410 | - if (arm_is_secure_below_el3(env)) { | ||
411 | - mask >>= ARM_MMU_IDX_A_NS; | ||
412 | - } | ||
413 | - | ||
414 | return mask; | ||
415 | } | ||
416 | |||
417 | @@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr) | ||
418 | mmu_idx = ARMMMUIdx_E10_0; | ||
419 | } | ||
420 | |||
421 | - if (arm_is_secure_below_el3(env)) { | ||
422 | - mmu_idx &= ~ARM_MMU_IDX_A_NS; | ||
423 | - } | ||
424 | - | ||
425 | return tlbbits_for_regime(env, mmu_idx, addr); | ||
426 | } | ||
427 | |||
428 | @@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env) | ||
429 | * stage 2 translations, whereas most other scopes only invalidate | ||
430 | * stage 1 translations. | ||
431 | */ | ||
432 | - if (arm_is_secure_below_el3(env)) { | ||
433 | - return ARMMMUIdxBit_SE10_1 | | ||
434 | - ARMMMUIdxBit_SE10_1_PAN | | ||
435 | - ARMMMUIdxBit_SE10_0; | ||
436 | - } else { | ||
437 | - return ARMMMUIdxBit_E10_1 | | ||
438 | - ARMMMUIdxBit_E10_1_PAN | | ||
439 | - ARMMMUIdxBit_E10_0; | ||
440 | - } | ||
441 | + return (ARMMMUIdxBit_E10_1 | | ||
442 | + ARMMMUIdxBit_E10_1_PAN | | ||
443 | + ARMMMUIdxBit_E10_0); | ||
444 | } | ||
445 | |||
446 | static int e2_tlbmask(CPUARMState *env) | ||
447 | { | ||
448 | - if (arm_is_secure_below_el3(env)) { | ||
449 | - return ARMMMUIdxBit_SE20_0 | | ||
450 | - ARMMMUIdxBit_SE20_2 | | ||
451 | - ARMMMUIdxBit_SE20_2_PAN | | ||
452 | - ARMMMUIdxBit_SE2; | ||
453 | - } else { | ||
454 | - return ARMMMUIdxBit_E20_0 | | ||
455 | - ARMMMUIdxBit_E20_2 | | ||
456 | - ARMMMUIdxBit_E20_2_PAN | | ||
457 | - ARMMMUIdxBit_E2; | ||
458 | - } | ||
459 | + return (ARMMMUIdxBit_E20_0 | | ||
460 | + ARMMMUIdxBit_E20_2 | | ||
461 | + ARMMMUIdxBit_E20_2_PAN | | ||
462 | + ARMMMUIdxBit_E2); | ||
463 | } | ||
464 | |||
465 | static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
466 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
467 | ARMCPU *cpu = env_archcpu(env); | ||
468 | CPUState *cs = CPU(cpu); | ||
469 | |||
470 | - tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_SE3); | ||
471 | + tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_E3); | ||
472 | } | ||
473 | |||
474 | static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
475 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
476 | { | ||
477 | CPUState *cs = env_cpu(env); | ||
478 | |||
479 | - tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_SE3); | ||
480 | + tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_E3); | ||
481 | } | ||
482 | |||
483 | static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
484 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
485 | CPUState *cs = CPU(cpu); | ||
486 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
487 | |||
488 | - tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_SE3); | ||
489 | + tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_E3); | ||
490 | } | ||
491 | |||
492 | static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
493 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
494 | { | ||
495 | CPUState *cs = env_cpu(env); | ||
496 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
497 | - bool secure = arm_is_secure_below_el3(env); | ||
498 | - int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2; | ||
499 | - int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2, | ||
500 | - pageaddr); | ||
501 | + int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr); | ||
502 | |||
503 | - tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits); | ||
504 | + tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
505 | + ARMMMUIdxBit_E2, bits); | ||
506 | } | ||
507 | |||
508 | static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
509 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri, | ||
510 | { | ||
511 | CPUState *cs = env_cpu(env); | ||
512 | uint64_t pageaddr = sextract64(value << 12, 0, 56); | ||
513 | - int bits = tlbbits_for_regime(env, ARMMMUIdx_SE3, pageaddr); | ||
514 | + int bits = tlbbits_for_regime(env, ARMMMUIdx_E3, pageaddr); | ||
515 | |||
516 | tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, | ||
517 | - ARMMMUIdxBit_SE3, bits); | ||
518 | + ARMMMUIdxBit_E3, bits); | ||
519 | } | ||
520 | |||
521 | #ifdef TARGET_AARCH64 | ||
522 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env, | ||
523 | |||
524 | static int vae2_tlbmask(CPUARMState *env) | ||
525 | { | ||
526 | - return (arm_is_secure_below_el3(env) | ||
527 | - ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2); | ||
528 | + return ARMMMUIdxBit_E2; | ||
529 | } | ||
530 | |||
531 | static void tlbi_aa64_rvae2_write(CPUARMState *env, | ||
532 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3_write(CPUARMState *env, | ||
533 | * flush-last-level-only. | ||
534 | */ | ||
535 | |||
536 | - do_rvae_write(env, value, ARMMMUIdxBit_SE3, | ||
537 | - tlb_force_broadcast(env)); | ||
538 | + do_rvae_write(env, value, ARMMMUIdxBit_E3, tlb_force_broadcast(env)); | ||
539 | } | ||
540 | |||
541 | static void tlbi_aa64_rvae3is_write(CPUARMState *env, | ||
542 | @@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env, | ||
543 | * flush-last-level-only or inner/outer specific flushes. | ||
544 | */ | ||
545 | |||
546 | - do_rvae_write(env, value, ARMMMUIdxBit_SE3, true); | ||
547 | + do_rvae_write(env, value, ARMMMUIdxBit_E3, true); | ||
548 | } | ||
549 | #endif | ||
550 | |||
551 | @@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el) | ||
552 | /* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */ | ||
553 | if (el == 0) { | ||
554 | ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0); | ||
555 | - el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0) | ||
556 | - ? 2 : 1; | ||
557 | + el = mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1; | ||
558 | } | ||
559 | return env->cp15.sctlr_el[el]; | ||
560 | } | ||
561 | @@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx) | ||
562 | switch (mmu_idx) { | ||
563 | case ARMMMUIdx_E10_0: | ||
564 | case ARMMMUIdx_E20_0: | ||
565 | - case ARMMMUIdx_SE10_0: | ||
566 | - case ARMMMUIdx_SE20_0: | ||
567 | return 0; | ||
568 | case ARMMMUIdx_E10_1: | ||
569 | case ARMMMUIdx_E10_1_PAN: | ||
570 | - case ARMMMUIdx_SE10_1: | ||
571 | - case ARMMMUIdx_SE10_1_PAN: | ||
572 | return 1; | ||
573 | case ARMMMUIdx_E2: | ||
574 | case ARMMMUIdx_E20_2: | ||
575 | case ARMMMUIdx_E20_2_PAN: | ||
576 | - case ARMMMUIdx_SE2: | ||
577 | - case ARMMMUIdx_SE20_2: | ||
578 | - case ARMMMUIdx_SE20_2_PAN: | ||
579 | return 2; | ||
580 | - case ARMMMUIdx_SE3: | ||
581 | + case ARMMMUIdx_E3: | ||
582 | return 3; | ||
583 | default: | ||
584 | g_assert_not_reached(); | ||
585 | @@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el) | ||
586 | } | ||
587 | break; | ||
588 | case 3: | ||
589 | - return ARMMMUIdx_SE3; | ||
590 | + return ARMMMUIdx_E3; | ||
591 | default: | ||
592 | g_assert_not_reached(); | ||
593 | } | ||
594 | |||
595 | - if (arm_is_secure_below_el3(env)) { | ||
596 | - idx &= ~ARM_MMU_IDX_A_NS; | ||
597 | - } | ||
598 | - | ||
599 | return idx; | ||
600 | } | ||
601 | |||
602 | @@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el, | ||
603 | switch (mmu_idx) { | ||
604 | case ARMMMUIdx_E10_1: | ||
605 | case ARMMMUIdx_E10_1_PAN: | ||
606 | - case ARMMMUIdx_SE10_1: | ||
607 | - case ARMMMUIdx_SE10_1_PAN: | ||
608 | /* TODO: ARMv8.3-NV */ | ||
609 | DP_TBFLAG_A64(flags, UNPRIV, 1); | ||
610 | break; | ||
611 | case ARMMMUIdx_E20_2: | ||
612 | case ARMMMUIdx_E20_2_PAN: | ||
613 | - case ARMMMUIdx_SE20_2: | ||
614 | - case ARMMMUIdx_SE20_2_PAN: | ||
615 | /* | ||
616 | * Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is | ||
617 | * gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR. | ||
618 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
619 | index XXXXXXX..XXXXXXX 100644 | ||
620 | --- a/target/arm/ptw.c | ||
621 | +++ b/target/arm/ptw.c | ||
622 | @@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu) | ||
623 | ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx) | ||
624 | { | ||
625 | switch (mmu_idx) { | ||
626 | - case ARMMMUIdx_SE10_0: | ||
627 | - return ARMMMUIdx_Stage1_SE0; | ||
628 | - case ARMMMUIdx_SE10_1: | ||
629 | - return ARMMMUIdx_Stage1_SE1; | ||
630 | - case ARMMMUIdx_SE10_1_PAN: | ||
631 | - return ARMMMUIdx_Stage1_SE1_PAN; | ||
632 | case ARMMMUIdx_E10_0: | ||
633 | return ARMMMUIdx_Stage1_E0; | ||
634 | case ARMMMUIdx_E10_1: | ||
635 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
636 | static bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx) | ||
637 | { | ||
638 | switch (mmu_idx) { | ||
639 | - case ARMMMUIdx_SE10_0: | ||
640 | case ARMMMUIdx_E20_0: | ||
641 | - case ARMMMUIdx_SE20_0: | ||
642 | case ARMMMUIdx_Stage1_E0: | ||
643 | - case ARMMMUIdx_Stage1_SE0: | ||
644 | case ARMMMUIdx_MUser: | ||
645 | case ARMMMUIdx_MSUser: | ||
646 | case ARMMMUIdx_MUserNegPri: | ||
647 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
648 | |||
649 | s2_mmu_idx = (s2walk_secure | ||
650 | ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2); | ||
651 | - is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0; | ||
652 | + is_el0 = mmu_idx == ARMMMUIdx_E10_0; | ||
653 | |||
654 | /* | ||
655 | * S1 is done, now do S2 translation. | ||
656 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
657 | case ARMMMUIdx_Stage1_E1: | ||
658 | case ARMMMUIdx_Stage1_E1_PAN: | ||
659 | case ARMMMUIdx_E2: | ||
660 | + is_secure = arm_is_secure_below_el3(env); | ||
661 | + break; | ||
662 | case ARMMMUIdx_Stage2: | ||
663 | case ARMMMUIdx_MPrivNegPri: | ||
664 | case ARMMMUIdx_MUserNegPri: | ||
665 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address, | ||
666 | case ARMMMUIdx_MUser: | ||
667 | is_secure = false; | ||
668 | break; | ||
669 | - case ARMMMUIdx_SE3: | ||
670 | - case ARMMMUIdx_SE10_0: | ||
671 | - case ARMMMUIdx_SE10_1: | ||
672 | - case ARMMMUIdx_SE10_1_PAN: | ||
673 | - case ARMMMUIdx_SE20_0: | ||
674 | - case ARMMMUIdx_SE20_2: | ||
675 | - case ARMMMUIdx_SE20_2_PAN: | ||
676 | - case ARMMMUIdx_Stage1_SE0: | ||
677 | - case ARMMMUIdx_Stage1_SE1: | ||
678 | - case ARMMMUIdx_Stage1_SE1_PAN: | ||
679 | - case ARMMMUIdx_SE2: | ||
680 | + case ARMMMUIdx_E3: | ||
681 | case ARMMMUIdx_Stage2_S: | ||
682 | case ARMMMUIdx_MSPrivNegPri: | ||
683 | case ARMMMUIdx_MSUserNegPri: | ||
684 | diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c | ||
685 | index XXXXXXX..XXXXXXX 100644 | ||
686 | --- a/target/arm/translate-a64.c | ||
687 | +++ b/target/arm/translate-a64.c | ||
688 | @@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s) | ||
689 | case ARMMMUIdx_E20_2_PAN: | ||
690 | useridx = ARMMMUIdx_E20_0; | ||
691 | break; | ||
692 | - case ARMMMUIdx_SE10_1: | ||
693 | - case ARMMMUIdx_SE10_1_PAN: | ||
694 | - useridx = ARMMMUIdx_SE10_0; | ||
695 | - break; | ||
696 | - case ARMMMUIdx_SE20_2: | ||
697 | - case ARMMMUIdx_SE20_2_PAN: | ||
698 | - useridx = ARMMMUIdx_SE20_0; | ||
699 | - break; | ||
700 | default: | ||
701 | g_assert_not_reached(); | ||
702 | } | ||
13 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 703 | diff --git a/target/arm/translate.c b/target/arm/translate.c |
14 | index XXXXXXX..XXXXXXX 100644 | 704 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/target/arm/translate.c | 705 | --- a/target/arm/translate.c |
16 | +++ b/target/arm/translate.c | 706 | +++ b/target/arm/translate.c |
17 | @@ -XXX,XX +XXX,XX @@ static inline long vfp_reg_offset(bool dp, unsigned reg) | 707 | @@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s) |
18 | } | 708 | * otherwise, access as if at PL0. |
19 | } | 709 | */ |
20 | 710 | switch (s->mmu_idx) { | |
21 | -/* Return the offset of a 32-bit piece of a NEON register. | 711 | + case ARMMMUIdx_E3: |
22 | - zero is the least significant end of the register. */ | 712 | case ARMMMUIdx_E2: /* this one is UNPREDICTABLE */ |
23 | -static inline long | 713 | case ARMMMUIdx_E10_0: |
24 | -neon_reg_offset (int reg, int n) | 714 | case ARMMMUIdx_E10_1: |
25 | -{ | 715 | case ARMMMUIdx_E10_1_PAN: |
26 | - int sreg; | 716 | return arm_to_core_mmu_idx(ARMMMUIdx_E10_0); |
27 | - sreg = reg * 2 + n; | 717 | - case ARMMMUIdx_SE3: |
28 | - return vfp_reg_offset(0, sreg); | 718 | - case ARMMMUIdx_SE10_0: |
29 | -} | 719 | - case ARMMMUIdx_SE10_1: |
30 | - | 720 | - case ARMMMUIdx_SE10_1_PAN: |
31 | static TCGv_i32 neon_load_reg(int reg, int pass) | 721 | - return arm_to_core_mmu_idx(ARMMMUIdx_SE10_0); |
32 | { | 722 | case ARMMMUIdx_MUser: |
33 | TCGv_i32 tmp = tcg_temp_new_i32(); | 723 | case ARMMMUIdx_MPriv: |
34 | - tcg_gen_ld_i32(tmp, cpu_env, neon_reg_offset(reg, pass)); | 724 | return arm_to_core_mmu_idx(ARMMMUIdx_MUser); |
35 | + tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32)); | ||
36 | return tmp; | ||
37 | } | ||
38 | |||
39 | static void neon_store_reg(int reg, int pass, TCGv_i32 var) | ||
40 | { | ||
41 | - tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass)); | ||
42 | + tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32)); | ||
43 | tcg_temp_free_i32(var); | ||
44 | } | ||
45 | |||
46 | -- | 725 | -- |
47 | 2.20.1 | 726 | 2.25.1 |
48 | |||
1 | From: Philippe Mathieu-Daudé <philmd@redhat.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Use the BIT_ULL() macro to ensure we use 64-bit arithmetic. | 3 | Use a switch on mmu_idx for the a-profile indexes, instead of |
4 | This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN): | 4 | three separate if statements against regime_el and arm_mmu_idx_is_stage1_of_2. |
5 | 5 | ||
6 | CID 1432363 (#1 of 1): Unintentional integer overflow: | ||
7 | |||
8 | overflow_before_widen: | ||
9 | Potentially overflowing expression 1 << scale with type int | ||
10 | (32 bits, signed) is evaluated using 32-bit arithmetic, and | ||
11 | then used in a context that expects an expression of type | ||
12 | hwaddr (64 bits, unsigned). | ||
13 | |||
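To make the overflow concrete (a standalone sketch, not the device code; BIT_ULL() comes from qemu/bitops.h, where it expands to 1ULL shifted left by its argument):

    #include <stdint.h>

    uint64_t pages_bad(int num, int scale)
    {
        /* 1 << scale is computed as a 32-bit int, so for scale >= 31
         * the value is already wrong before it widens to uint64_t */
        return (uint64_t)(num + 1) * (1 << scale);
    }

    uint64_t pages_good(int num, int scale)
    {
        /* widen before shifting -- this is what BIT_ULL(scale) does */
        return (uint64_t)(num + 1) * (1ULL << scale);
    }
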
14 | Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> | ||
15 | Acked-by: Eric Auger <eric.auger@redhat.com> | ||
16 | Message-id: 20201030144617.1535064-1-philmd@redhat.com | ||
17 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20221001162318.153420-12-richard.henderson@linaro.org | ||
18 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
19 | --- | 10 | --- |
20 | hw/arm/smmuv3.c | 3 ++- | 11 | target/arm/ptw.c | 32 +++++++++++++++++++++++++------- |
21 | 1 file changed, 2 insertions(+), 1 deletion(-) | 12 | 1 file changed, 25 insertions(+), 7 deletions(-) |
22 | 13 | ||
23 | diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c | 14 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
24 | index XXXXXXX..XXXXXXX 100644 | 15 | index XXXXXXX..XXXXXXX 100644 |
25 | --- a/hw/arm/smmuv3.c | 16 | --- a/target/arm/ptw.c |
26 | +++ b/hw/arm/smmuv3.c | 17 | +++ b/target/arm/ptw.c |
27 | @@ -XXX,XX +XXX,XX @@ | 18 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, |
28 | */ | 19 | |
29 | 20 | hcr_el2 = arm_hcr_el2_eff(env); | |
30 | #include "qemu/osdep.h" | 21 | |
31 | +#include "qemu/bitops.h" | 22 | - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { |
32 | #include "hw/irq.h" | 23 | + switch (mmu_idx) { |
33 | #include "hw/sysbus.h" | 24 | + case ARMMMUIdx_Stage2: |
34 | #include "migration/vmstate.h" | 25 | + case ARMMMUIdx_Stage2_S: |
35 | @@ -XXX,XX +XXX,XX @@ static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd) | 26 | /* HCR.DC means HCR.VM behaves as 1 */ |
36 | scale = CMD_SCALE(cmd); | 27 | return (hcr_el2 & (HCR_DC | HCR_VM)) == 0; |
37 | num = CMD_NUM(cmd); | 28 | - } |
38 | ttl = CMD_TTL(cmd); | 29 | |
39 | - num_pages = (num + 1) * (1 << (scale)); | 30 | - if (hcr_el2 & HCR_TGE) { |
40 | + num_pages = (num + 1) * BIT_ULL(scale); | 31 | + case ARMMMUIdx_E10_0: |
32 | + case ARMMMUIdx_E10_1: | ||
33 | + case ARMMMUIdx_E10_1_PAN: | ||
34 | /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */ | ||
35 | - if (!is_secure && regime_el(env, mmu_idx) == 1) { | ||
36 | + if (!is_secure && (hcr_el2 & HCR_TGE)) { | ||
37 | return true; | ||
38 | } | ||
39 | - } | ||
40 | + break; | ||
41 | |||
42 | - if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) { | ||
43 | + case ARMMMUIdx_Stage1_E0: | ||
44 | + case ARMMMUIdx_Stage1_E1: | ||
45 | + case ARMMMUIdx_Stage1_E1_PAN: | ||
46 | /* HCR.DC means SCTLR_EL1.M behaves as 0 */ | ||
47 | - return true; | ||
48 | + if (hcr_el2 & HCR_DC) { | ||
49 | + return true; | ||
50 | + } | ||
51 | + break; | ||
52 | + | ||
53 | + case ARMMMUIdx_E20_0: | ||
54 | + case ARMMMUIdx_E20_2: | ||
55 | + case ARMMMUIdx_E20_2_PAN: | ||
56 | + case ARMMMUIdx_E2: | ||
57 | + case ARMMMUIdx_E3: | ||
58 | + break; | ||
59 | + | ||
60 | + default: | ||
61 | + g_assert_not_reached(); | ||
41 | } | 62 | } |
42 | 63 | ||
43 | if (type == SMMU_CMD_TLBI_NH_VA) { | 64 | return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0; |
44 | -- | 65 | -- |
45 | 2.20.1 | 66 | 2.25.1 |
46 | |||
1 | The helper functions for performing the udot/sdot operations against | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | a scalar were not using an address-swizzling macro when converting | ||
3 | the index of the scalar element into a pointer into the vm array. | 3 | Now that Secure EL2 exists, the effect of TGE no longer applies |
4 | This had no effect on little-endian hosts but meant we generated | 4 | only to non-secure state. |
5 | incorrect results on big-endian hosts. | ||
6 | 2 | ||
7 | For these insns, the index is indexing over groups of 4 8-bit values, | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
8 | so 32 bits per indexed entity, and H4() is therefore what we want. | 4 | now that Secure EL2 exists. |
9 | (For Neon the only possible input indexes are 0 and 1.) | ||
10 | 5 | ||
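For readers unfamiliar with the swizzling macros, a hedged sketch of the idea (modelled on QEMU's H4(); the exact definitions live in the target/arm headers): vector registers are stored as host-endian 64-bit words, so on a big-endian host the two 32-bit elements within each word sit in the opposite order from the guest's view, and H4() flips the low index bit to compensate.

    #ifdef HOST_WORDS_BIGENDIAN        /* host-endian define of this era */
    #define H4(x)  ((x) ^ 1)           /* swap 32-bit lanes in a 64-bit word */
    #else
    #define H4(x)  (x)                 /* little-endian host already matches */
    #endif

    /* address a 32-bit group of four int8_t lanes, as the dot-product
     * helpers do */
    static inline int8_t *scalar_group(int8_t *vm, int index)
    {
        return vm + H4(index) * 4;
    }
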
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20221001162318.153420-13-richard.henderson@linaro.org | ||
11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
12 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
13 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
14 | Message-id: 20201028191712.4910-3-peter.maydell@linaro.org | ||
15 | --- | 10 | --- |
16 | target/arm/vec_helper.c | 4 ++-- | 11 | target/arm/ptw.c | 4 ++-- |
17 | 1 file changed, 2 insertions(+), 2 deletions(-) | 12 | 1 file changed, 2 insertions(+), 2 deletions(-) |
18 | 13 | ||
19 | diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c | 14 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
20 | index XXXXXXX..XXXXXXX 100644 | 15 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/target/arm/vec_helper.c | 16 | --- a/target/arm/ptw.c |
22 | +++ b/target/arm/vec_helper.c | 17 | +++ b/target/arm/ptw.c |
23 | @@ -XXX,XX +XXX,XX @@ void HELPER(gvec_sdot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc) | 18 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, |
24 | intptr_t index = simd_data(desc); | 19 | case ARMMMUIdx_E10_0: |
25 | uint32_t *d = vd; | 20 | case ARMMMUIdx_E10_1: |
26 | int8_t *n = vn; | 21 | case ARMMMUIdx_E10_1_PAN: |
27 | - int8_t *m_indexed = (int8_t *)vm + index * 4; | 22 | - /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */ |
28 | + int8_t *m_indexed = (int8_t *)vm + H4(index) * 4; | 23 | - if (!is_secure && (hcr_el2 & HCR_TGE)) { |
29 | 24 | + /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */ | |
30 | /* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd. | 25 | + if (hcr_el2 & HCR_TGE) { |
31 | * Otherwise opr_sz is a multiple of 16. | 26 | return true; |
32 | @@ -XXX,XX +XXX,XX @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm, uint32_t desc) | 27 | } |
33 | intptr_t index = simd_data(desc); | 28 | break; |
34 | uint32_t *d = vd; | ||
35 | uint8_t *n = vn; | ||
36 | - uint8_t *m_indexed = (uint8_t *)vm + index * 4; | ||
37 | + uint8_t *m_indexed = (uint8_t *)vm + H4(index) * 4; | ||
38 | |||
39 | /* Notice the special case of opr_sz == 8, from aa64/aa32 advsimd. | ||
40 | * Otherwise opr_sz is a multiple of 16. | ||
41 | -- | 29 | -- |
42 | 2.20.1 | 30 | 2.25.1 |
43 | |||
1 | From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | HCR should be applied when NS is set, not when it is cleared. | 3 | For page walking, we may require HCR for a security state |
4 | that is not "current". | ||
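As a usage sketch for the new interface (hypothetical caller; arm_hcr_el2_eff_secstate() itself is added in the diff below, and HCR_VM is an existing HCR_EL2 bit definition): a page-table walk for an explicitly chosen security state asks for that state's effective HCR_EL2 rather than the CPU's current one.

    static bool walk_uses_stage2(CPUARMState *env, bool walk_secure)
    {
        /* effective HCR_EL2 for the requested state, not the current one */
        uint64_t hcr = arm_hcr_el2_eff_secstate(env, walk_secure);
        return (hcr & HCR_VM) != 0;
    }
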
4 | 5 | ||
5 | Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> | ||
6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
8 | Message-id: 20221001162318.153420-14-richard.henderson@linaro.org | ||
7 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
8 | --- | 10 | --- |
9 | target/arm/helper.c | 5 ++--- | 11 | target/arm/cpu.h | 20 +++++++++++++------- |
10 | 1 file changed, 2 insertions(+), 3 deletions(-) | 12 | target/arm/helper.c | 11 ++++++++--- |
13 | 2 files changed, 21 insertions(+), 10 deletions(-) | ||
11 | 14 | ||
15 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | ||
16 | index XXXXXXX..XXXXXXX 100644 | ||
17 | --- a/target/arm/cpu.h | ||
18 | +++ b/target/arm/cpu.h | ||
19 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env) | ||
20 | * Return true if the current security state has AArch64 EL2 or AArch32 Hyp. | ||
21 | * This corresponds to the pseudocode EL2Enabled() | ||
22 | */ | ||
23 | +static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure) | ||
24 | +{ | ||
25 | + return arm_feature(env, ARM_FEATURE_EL2) | ||
26 | + && (!secure || (env->cp15.scr_el3 & SCR_EEL2)); | ||
27 | +} | ||
28 | + | ||
29 | static inline bool arm_is_el2_enabled(CPUARMState *env) | ||
30 | { | ||
31 | - if (arm_feature(env, ARM_FEATURE_EL2)) { | ||
32 | - if (arm_is_secure_below_el3(env)) { | ||
33 | - return (env->cp15.scr_el3 & SCR_EEL2) != 0; | ||
34 | - } | ||
35 | - return true; | ||
36 | - } | ||
37 | - return false; | ||
38 | + return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env)); | ||
39 | } | ||
40 | |||
41 | #else | ||
42 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env) | ||
43 | return false; | ||
44 | } | ||
45 | |||
46 | +static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure) | ||
47 | +{ | ||
48 | + return false; | ||
49 | +} | ||
50 | + | ||
51 | static inline bool arm_is_el2_enabled(CPUARMState *env) | ||
52 | { | ||
53 | return false; | ||
54 | @@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env) | ||
55 | * "for all purposes other than a direct read or write access of HCR_EL2." | ||
56 | * Not included here is HCR_RW. | ||
57 | */ | ||
58 | +uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure); | ||
59 | uint64_t arm_hcr_el2_eff(CPUARMState *env); | ||
60 | uint64_t arm_hcrx_el2_eff(CPUARMState *env); | ||
61 | |||
12 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 62 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
13 | index XXXXXXX..XXXXXXX 100644 | 63 | index XXXXXXX..XXXXXXX 100644 |
14 | --- a/target/arm/helper.c | 64 | --- a/target/arm/helper.c |
15 | +++ b/target/arm/helper.c | 65 | +++ b/target/arm/helper.c |
16 | @@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri, | 66 | @@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri, |
67 | } | ||
17 | 68 | ||
18 | /* | 69 | /* |
19 | * Non-IS variants of TLB operations are upgraded to | 70 | - * Return the effective value of HCR_EL2. |
20 | - * IS versions if we are at NS EL1 and HCR_EL2.FB is set to | 71 | + * Return the effective value of HCR_EL2, at the given security state. |
21 | + * IS versions if we are at EL1 and HCR_EL2.FB is effectively set to | 72 | * Bits that are not included here: |
22 | * force broadcast of these operations. | 73 | * RW (read from SCR_EL3.RW as needed) |
23 | */ | 74 | */ |
24 | static bool tlb_force_broadcast(CPUARMState *env) | 75 | -uint64_t arm_hcr_el2_eff(CPUARMState *env) |
76 | +uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure) | ||
25 | { | 77 | { |
26 | - return (env->cp15.hcr_el2 & HCR_FB) && | 78 | uint64_t ret = env->cp15.hcr_el2; |
27 | - arm_current_el(env) == 1 && arm_is_secure_below_el3(env); | 79 | |
28 | + return arm_current_el(env) == 1 && (arm_hcr_el2_eff(env) & HCR_FB); | 80 | - if (!arm_is_el2_enabled(env)) { |
81 | + if (!arm_is_el2_enabled_secstate(env, secure)) { | ||
82 | /* | ||
83 | * "This register has no effect if EL2 is not enabled in the | ||
84 | * current Security state". This is ARMv8.4-SecEL2 speak for | ||
85 | @@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env) | ||
86 | return ret; | ||
29 | } | 87 | } |
30 | 88 | ||
31 | static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri, | 89 | +uint64_t arm_hcr_el2_eff(CPUARMState *env) |
90 | +{ | ||
91 | + return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env)); | ||
92 | +} | ||
93 | + | ||
94 | /* | ||
95 | * Corresponds to ARM pseudocode function ELIsInHost(). | ||
96 | */ | ||
32 | -- | 97 | -- |
33 | 2.20.1 | 98 | 2.25.1 |
34 | |||
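The shape of this refactoring recurs through the rest of the series: a predicate that implicitly used the current security state grows an explicit "secure" parameter, and the old name survives as a thin wrapper over the current state. A minimal standalone sketch of the pattern follows; every type and name in it is invented for illustration, not QEMU API:

    #include <stdbool.h>

    /* Toy stand-in for the CPU state; fields only model the idea. */
    typedef struct {
        bool have_el2;          /* models ARM_FEATURE_EL2 */
        bool secure_el2_ok;     /* models SCR_EL3.EEL2 */
        bool currently_secure;  /* models arm_is_secure_below_el3() */
    } ToyCPU;

    /* New form: the caller names the security state explicitly. */
    static bool toy_el2_enabled_secstate(const ToyCPU *c, bool secure)
    {
        return c->have_el2 && (!secure || c->secure_el2_ok);
    }

    /* Old entry point, kept so existing callers need no change. */
    static bool toy_el2_enabled(const ToyCPU *c)
    {
        return toy_el2_enabled_secstate(c, c->currently_secure);
    }

The payoff comes later in the series, when the page-table walker has to evaluate HCR_EL2 for a security state other than the one currently executing.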
1 | The kerneldoc script currently emits Sphinx markup for a macro with | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | arguments that uses the c:function directive. This is correct for | ||
3 | Sphinx versions earlier than Sphinx 3, where c:macro doesn't allow | ||
4 | documentation of macros with arguments and c:function is not picky | ||
5 | about the syntax of what it is passed. However, in Sphinx 3 the | ||
6 | c:macro directive was enhanced to support macros with arguments, | ||
7 | and c:function was made more picky about what syntax it accepted. | ||
8 | 2 | ||
9 | When kerneldoc is told that it needs to produce output for Sphinx | 3 | Rename the argument to is_secure_ptr, and introduce a |
10 | 3 or later, make it emit c:function only for functions and c:macro | 4 | local variable is_secure with the value. We only write |
11 | for macros with arguments. We assume that anything with a return | 5 | back to the pointer toward the end of the function. |
12 | type is a function and anything without is a macro. | ||
13 | 6 | ||
14 | This fixes the Sphinx error: | 7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
8 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
9 | Message-id: 20221001162318.153420-15-richard.henderson@linaro.org | ||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
11 | --- | ||
12 | target/arm/ptw.c | 22 ++++++++++++---------- | ||
13 | 1 file changed, 12 insertions(+), 10 deletions(-) | ||
15 | 14 | ||
16 | /home/petmay01/linaro/qemu-from-laptop/qemu/docs/../include/qom/object.h:155:Error in declarator | 15 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
17 | If declarator-id with parameters (e.g., 'void f(int arg)'): | 16 | index XXXXXXX..XXXXXXX 100644 |
18 | Invalid C declaration: Expected identifier in nested name. [error at 25] | 17 | --- a/target/arm/ptw.c |
19 | DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME) | 18 | +++ b/target/arm/ptw.c |
20 | -------------------------^ | 19 | @@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs) |
21 | If parenthesis in noptr-declarator (e.g., 'void (*f(int arg))(double)'): | 20 | |
22 | Error in declarator or parameters | 21 | /* Translate a S1 pagetable walk through S2 if needed. */ |
23 | Invalid C declaration: Expecting "(" in parameters. [error at 39] | 22 | static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, |
24 | DECLARE_INSTANCE_CHECKER ( InstanceType, OBJ_NAME, TYPENAME) | 23 | - hwaddr addr, bool *is_secure, |
25 | ---------------------------------------^ | 24 | + hwaddr addr, bool *is_secure_ptr, |
26 | 25 | ARMMMUFaultInfo *fi) | |
27 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 26 | { |
28 | Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> | 27 | - ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2; |
29 | Tested-by: Stefan Hajnoczi <stefanha@redhat.com> | 28 | + bool is_secure = *is_secure_ptr; |
30 | Message-id: 20201030174700.7204-2-peter.maydell@linaro.org | 29 | + ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2; |
31 | --- | 30 | |
32 | scripts/kernel-doc | 18 +++++++++++++++++- | 31 | if (arm_mmu_idx_is_stage1_of_2(mmu_idx) && |
33 | 1 file changed, 17 insertions(+), 1 deletion(-) | 32 | - !regime_translation_disabled(env, s2_mmu_idx, *is_secure)) { |
34 | 33 | + !regime_translation_disabled(env, s2_mmu_idx, is_secure)) { | |
35 | diff --git a/scripts/kernel-doc b/scripts/kernel-doc | 34 | GetPhysAddrResult s2 = {}; |
36 | index XXXXXXX..XXXXXXX 100755 | 35 | int ret; |
37 | --- a/scripts/kernel-doc | 36 | |
38 | +++ b/scripts/kernel-doc | 37 | ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, |
39 | @@ -XXX,XX +XXX,XX @@ sub output_function_rst(%) { | 38 | - *is_secure, false, &s2, fi); |
40 | output_highlight_rst($args{'purpose'}); | 39 | + is_secure, false, &s2, fi); |
41 | $start = "\n\n**Syntax**\n\n ``"; | 40 | if (ret) { |
42 | } else { | 41 | assert(fi->type != ARMFault_None); |
43 | - print ".. c:function:: "; | 42 | fi->s2addr = addr; |
44 | + if ((split(/\./, $sphinx_version))[0] >= 3) { | 43 | fi->stage2 = true; |
45 | + # Sphinx 3 and later distinguish macros and functions and | 44 | fi->s1ptw = true; |
46 | + # complain if you use c:function with something that's not | 45 | - fi->s1ns = !*is_secure; |
47 | + # syntactically valid as a function declaration. | 46 | + fi->s1ns = !is_secure; |
48 | + # We assume that anything with a return type is a function | 47 | return ~0; |
49 | + # and anything without is a macro. | 48 | } |
50 | + if ($args{'functiontype'} ne "") { | 49 | if ((arm_hcr_el2_eff(env) & HCR_PTW) && |
51 | + print ".. c:function:: "; | 50 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, |
52 | + } else { | 51 | fi->s2addr = addr; |
53 | + print ".. c:macro:: "; | 52 | fi->stage2 = true; |
54 | + } | 53 | fi->s1ptw = true; |
55 | + } else { | 54 | - fi->s1ns = !*is_secure; |
56 | + # Older Sphinx don't support documenting macros that take | 55 | + fi->s1ns = !is_secure; |
57 | + # arguments with c:macro, and don't complain about the use | 56 | return ~0; |
58 | + # of c:function for this. | 57 | } |
59 | + print ".. c:function:: "; | 58 | |
60 | + } | 59 | if (arm_is_secure_below_el3(env)) { |
61 | } | 60 | /* Check if page table walk is to secure or non-secure PA space. */ |
62 | if ($args{'functiontype'} ne "") { | 61 | - if (*is_secure) { |
63 | $start .= $args{'functiontype'} . " " . $args{'function'} . " ("; | 62 | - *is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW); |
63 | + if (is_secure) { | ||
64 | + is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW); | ||
65 | } else { | ||
66 | - *is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW); | ||
67 | + is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW); | ||
68 | } | ||
69 | + *is_secure_ptr = is_secure; | ||
70 | } else { | ||
71 | - assert(!*is_secure); | ||
72 | + assert(!is_secure); | ||
73 | } | ||
74 | |||
75 | addr = s2.phys; | ||
64 | -- | 76 | -- |
65 | 2.20.1 | 77 | 2.25.1 |
66 | |||
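Two things worth noting here. On the kerneldoc side, the practical effect is that a macro such as DECLARE_INSTANCE_CHECKER(...) from the quoted error is now emitted as a c:macro directive when building with Sphinx 3 or later, and as c:function (as before) on older Sphinx. On the ptw.c side, the rename pins down a small C idiom: dereference an in/out pointer parameter once into a local, work on the local, and write back only on the path where the caller must observe the update. A hedged sketch with invented names:

    #include <stdbool.h>

    /* Invented predicate standing in for the VSTCR/VTCR space checks. */
    static bool toy_flips_space(bool secure)
    {
        return secure;  /* arbitrary toy condition */
    }

    static int toy_walk(bool *is_secure_ptr)
    {
        bool is_secure = *is_secure_ptr;    /* dereference exactly once */

        if (toy_flips_space(is_secure)) {
            is_secure = !is_secure;         /* cheap local updates */
            *is_secure_ptr = is_secure;     /* single visible write-back */
        }
        return is_secure ? 1 : 0;
    }

Keeping the store on one path matters in the real function, where the other arm of the conditional only asserts and must not write.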
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | In both cases, we can sink the write-back and perform | 3 | This value is unused. |
4 | the accumulate into the normal destination temps. | ||
5 | 4 | ||
6 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
7 | Message-id: 20201030022618.785675-11-richard.henderson@linaro.org | ||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 6 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
7 | Message-id: 20221001162318.153420-16-richard.henderson@linaro.org | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
10 | --- | 9 | --- |
11 | target/arm/translate-neon.c.inc | 23 +++++++++-------------- | 10 | target/arm/ptw.c | 5 ++--- |
12 | 1 file changed, 9 insertions(+), 14 deletions(-) | 11 | 1 file changed, 2 insertions(+), 3 deletions(-) |
13 | 12 | ||
14 | diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc | 13 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
15 | index XXXXXXX..XXXXXXX 100644 | 14 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/target/arm/translate-neon.c.inc | 15 | --- a/target/arm/ptw.c |
17 | +++ b/target/arm/translate-neon.c.inc | 16 | +++ b/target/arm/ptw.c |
18 | @@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a, | 17 | @@ -XXX,XX +XXX,XX @@ static uint8_t force_cacheattr_nibble_wb(uint8_t attr) |
19 | if (accfn) { | 18 | * s1 and s2 for the HCR_EL2.FWB == 1 case, returning the |
20 | tmp = tcg_temp_new_i64(); | 19 | * combined attributes in MAIR_EL1 format. |
21 | read_neon_element64(tmp, a->vd, 0, MO_64); | 20 | */ |
22 | - accfn(tmp, tmp, rd0); | 21 | -static uint8_t combined_attrs_fwb(CPUARMState *env, |
23 | - write_neon_element64(tmp, a->vd, 0, MO_64); | 22 | - ARMCacheAttrs s1, ARMCacheAttrs s2) |
24 | + accfn(rd0, tmp, rd0); | 23 | +static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2) |
25 | read_neon_element64(tmp, a->vd, 1, MO_64); | 24 | { |
26 | - accfn(tmp, tmp, rd1); | 25 | switch (s2.attrs) { |
27 | - write_neon_element64(tmp, a->vd, 1, MO_64); | 26 | case 7: |
28 | + accfn(rd1, tmp, rd1); | 27 | @@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env, |
29 | tcg_temp_free_i64(tmp); | 28 | |
30 | - } else { | 29 | /* Combine memory type and cacheability attributes */ |
31 | - write_neon_element64(rd0, a->vd, 0, MO_64); | 30 | if (arm_hcr_el2_eff(env) & HCR_FWB) { |
32 | - write_neon_element64(rd1, a->vd, 1, MO_64); | 31 | - ret.attrs = combined_attrs_fwb(env, s1, s2); |
32 | + ret.attrs = combined_attrs_fwb(s1, s2); | ||
33 | } else { | ||
34 | ret.attrs = combined_attrs_nofwb(env, s1, s2); | ||
33 | } | 35 | } |
34 | |||
35 | + write_neon_element64(rd0, a->vd, 0, MO_64); | ||
36 | + write_neon_element64(rd1, a->vd, 1, MO_64); | ||
37 | tcg_temp_free_i64(rd0); | ||
38 | tcg_temp_free_i64(rd1); | ||
39 | |||
40 | @@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a, | ||
41 | if (accfn) { | ||
42 | TCGv_i64 t64 = tcg_temp_new_i64(); | ||
43 | read_neon_element64(t64, a->vd, 0, MO_64); | ||
44 | - accfn(t64, t64, rn0_64); | ||
45 | - write_neon_element64(t64, a->vd, 0, MO_64); | ||
46 | + accfn(rn0_64, t64, rn0_64); | ||
47 | read_neon_element64(t64, a->vd, 1, MO_64); | ||
48 | - accfn(t64, t64, rn1_64); | ||
49 | - write_neon_element64(t64, a->vd, 1, MO_64); | ||
50 | + accfn(rn1_64, t64, rn1_64); | ||
51 | tcg_temp_free_i64(t64); | ||
52 | - } else { | ||
53 | - write_neon_element64(rn0_64, a->vd, 0, MO_64); | ||
54 | - write_neon_element64(rn1_64, a->vd, 1, MO_64); | ||
55 | } | ||
56 | + | ||
57 | + write_neon_element64(rn0_64, a->vd, 0, MO_64); | ||
58 | + write_neon_element64(rn1_64, a->vd, 1, MO_64); | ||
59 | tcg_temp_free_i64(rn0_64); | ||
60 | tcg_temp_free_i64(rn1_64); | ||
61 | return true; | ||
62 | -- | 36 | -- |
63 | 2.20.1 | 37 | 2.25.1 |
64 | |||
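Sinking the write-back like this is a recurring TCG tidy-up: when both arms of a conditional end in the same stores, let the arms only compute values and move the stores below the branch. A toy restatement in plain C, with invented names:

    /*
     * Before: each arm stored d0/d1 itself, so the stores were
     * duplicated. After: one shared, unconditional write-back path.
     */
    static void toy_finish(int *d0, int *d1, int rd0, int rd1,
                           bool accumulate)
    {
        if (accumulate) {
            rd0 += *d0;   /* fold the old destination into the temp */
            rd1 += *d1;
        }
        *d0 = rd0;        /* shared write-back */
        *d1 = rd1;
    }

The companion change on the ptw.c side is the mirror-image cleanup for plain C: a parameter that no longer earns its keep is simply dropped.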
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | This function makes it clear that we're talking about the whole | 3 | These subroutines did not need ENV for anything except |
4 | register, and not the 32-bit piece at index 0. This fixes a bug | 4 | retrieving the effective value of HCR anyway. |
5 | when running on a big-endian host. | ||
6 | 5 | ||
6 | We have computed the effective value of HCR in the callers, | ||
7 | and this will be especially important for interpreting HCR | ||
8 | in a non-current security state. | ||
9 | |||
10 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 11 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
8 | Message-id: 20201030022618.785675-2-richard.henderson@linaro.org | 12 | Message-id: 20221001162318.153420-17-richard.henderson@linaro.org |
9 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
11 | --- | 14 | --- |
12 | target/arm/translate.c | 8 ++++++ | 15 | target/arm/ptw.c | 30 +++++++++++++++++------------- |
13 | target/arm/translate-neon.c.inc | 44 ++++++++++++++++----------------- | 16 | 1 file changed, 17 insertions(+), 13 deletions(-) |
14 | target/arm/translate-vfp.c.inc | 2 +- | ||
15 | 3 files changed, 31 insertions(+), 23 deletions(-) | ||
16 | 17 | ||
17 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 18 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
18 | index XXXXXXX..XXXXXXX 100644 | 19 | index XXXXXXX..XXXXXXX 100644 |
19 | --- a/target/arm/translate.c | 20 | --- a/target/arm/ptw.c |
20 | +++ b/target/arm/translate.c | 21 | +++ b/target/arm/ptw.c |
21 | @@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm) | 22 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, |
22 | unallocated_encoding(s); | 23 | return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0; |
23 | } | 24 | } |
24 | 25 | ||
25 | +/* | 26 | -static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs) |
26 | + * Return the offset of a "full" NEON Dreg. | 27 | +static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs) |
27 | + */ | 28 | { |
28 | +static long neon_full_reg_offset(unsigned reg) | 29 | /* |
29 | +{ | 30 | * For an S1 page table walk, the stage 1 attributes are always |
30 | + return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]); | 31 | @@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs) |
31 | +} | 32 | * when cacheattrs.attrs bit [2] is 0. |
33 | */ | ||
34 | assert(cacheattrs.is_s2_format); | ||
35 | - if (arm_hcr_el2_eff(env) & HCR_FWB) { | ||
36 | + if (hcr & HCR_FWB) { | ||
37 | return (cacheattrs.attrs & 0x4) == 0; | ||
38 | } else { | ||
39 | return (cacheattrs.attrs & 0xc) == 0; | ||
40 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
41 | if (arm_mmu_idx_is_stage1_of_2(mmu_idx) && | ||
42 | !regime_translation_disabled(env, s2_mmu_idx, is_secure)) { | ||
43 | GetPhysAddrResult s2 = {}; | ||
44 | + uint64_t hcr; | ||
45 | int ret; | ||
46 | |||
47 | ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, | ||
48 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
49 | fi->s1ns = !is_secure; | ||
50 | return ~0; | ||
51 | } | ||
52 | - if ((arm_hcr_el2_eff(env) & HCR_PTW) && | ||
53 | - ptw_attrs_are_device(env, s2.cacheattrs)) { | ||
32 | + | 54 | + |
33 | static inline long vfp_reg_offset(bool dp, unsigned reg) | 55 | + hcr = arm_hcr_el2_eff(env); |
56 | + if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) { | ||
57 | /* | ||
58 | * PTW set and S1 walk touched S2 Device memory: | ||
59 | * generate Permission fault. | ||
60 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
61 | * ref: shared/translation/attrs/S2AttrDecode() | ||
62 | * .../S2ConvertAttrsHints() | ||
63 | */ | ||
64 | -static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs) | ||
65 | +static uint8_t convert_stage2_attrs(uint64_t hcr, uint8_t s2attrs) | ||
34 | { | 66 | { |
35 | if (dp) { | 67 | uint8_t hiattr = extract32(s2attrs, 2, 2); |
36 | diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc | 68 | uint8_t loattr = extract32(s2attrs, 0, 2); |
37 | index XXXXXXX..XXXXXXX 100644 | 69 | uint8_t hihint = 0, lohint = 0; |
38 | --- a/target/arm/translate-neon.c.inc | 70 | |
39 | +++ b/target/arm/translate-neon.c.inc | 71 | if (hiattr != 0) { /* normal memory */ |
40 | @@ -XXX,XX +XXX,XX @@ neon_element_offset(int reg, int element, MemOp size) | 72 | - if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */ |
41 | ofs ^= 8 - element_size; | 73 | + if (hcr & HCR_CD) { /* cache disabled */ |
74 | hiattr = loattr = 1; /* non-cacheable */ | ||
75 | } else { | ||
76 | if (hiattr != 1) { /* Write-through or write-back */ | ||
77 | @@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2) | ||
78 | * s1 and s2 for the HCR_EL2.FWB == 0 case, returning the | ||
79 | * combined attributes in MAIR_EL1 format. | ||
80 | */ | ||
81 | -static uint8_t combined_attrs_nofwb(CPUARMState *env, | ||
82 | +static uint8_t combined_attrs_nofwb(uint64_t hcr, | ||
83 | ARMCacheAttrs s1, ARMCacheAttrs s2) | ||
84 | { | ||
85 | uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs; | ||
86 | |||
87 | - s2_mair_attrs = convert_stage2_attrs(env, s2.attrs); | ||
88 | + s2_mair_attrs = convert_stage2_attrs(hcr, s2.attrs); | ||
89 | |||
90 | s1lo = extract32(s1.attrs, 0, 4); | ||
91 | s2lo = extract32(s2_mair_attrs, 0, 4); | ||
92 | @@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_fwb(ARMCacheAttrs s1, ARMCacheAttrs s2) | ||
93 | * @s1: Attributes from stage 1 walk | ||
94 | * @s2: Attributes from stage 2 walk | ||
95 | */ | ||
96 | -static ARMCacheAttrs combine_cacheattrs(CPUARMState *env, | ||
97 | +static ARMCacheAttrs combine_cacheattrs(uint64_t hcr, | ||
98 | ARMCacheAttrs s1, ARMCacheAttrs s2) | ||
99 | { | ||
100 | ARMCacheAttrs ret; | ||
101 | @@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env, | ||
42 | } | 102 | } |
43 | #endif | 103 | |
44 | - return neon_reg_offset(reg, 0) + ofs; | 104 | /* Combine memory type and cacheability attributes */ |
45 | + return neon_full_reg_offset(reg) + ofs; | 105 | - if (arm_hcr_el2_eff(env) & HCR_FWB) { |
46 | } | 106 | + if (hcr & HCR_FWB) { |
47 | 107 | ret.attrs = combined_attrs_fwb(s1, s2); | |
48 | static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop) | 108 | } else { |
49 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLD_all_lanes(DisasContext *s, arg_VLD_all_lanes *a) | 109 | - ret.attrs = combined_attrs_nofwb(env, s1, s2); |
50 | * We cannot write 16 bytes at once because the | 110 | + ret.attrs = combined_attrs_nofwb(hcr, s1, s2); |
51 | * destination is unaligned. | ||
52 | */ | ||
53 | - tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0), | ||
54 | + tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd), | ||
55 | 8, 8, tmp); | ||
56 | - tcg_gen_gvec_mov(0, neon_reg_offset(vd + 1, 0), | ||
57 | - neon_reg_offset(vd, 0), 8, 8); | ||
58 | + tcg_gen_gvec_mov(0, neon_full_reg_offset(vd + 1), | ||
59 | + neon_full_reg_offset(vd), 8, 8); | ||
60 | } else { | ||
61 | - tcg_gen_gvec_dup_i32(size, neon_reg_offset(vd, 0), | ||
62 | + tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(vd), | ||
63 | vec_size, vec_size, tmp); | ||
64 | } | ||
65 | tcg_gen_addi_i32(addr, addr, 1 << size); | ||
66 | @@ -XXX,XX +XXX,XX @@ static bool trans_VLDST_single(DisasContext *s, arg_VLDST_single *a) | ||
67 | static bool do_3same(DisasContext *s, arg_3same *a, GVecGen3Fn fn) | ||
68 | { | ||
69 | int vec_size = a->q ? 16 : 8; | ||
70 | - int rd_ofs = neon_reg_offset(a->vd, 0); | ||
71 | - int rn_ofs = neon_reg_offset(a->vn, 0); | ||
72 | - int rm_ofs = neon_reg_offset(a->vm, 0); | ||
73 | + int rd_ofs = neon_full_reg_offset(a->vd); | ||
74 | + int rn_ofs = neon_full_reg_offset(a->vn); | ||
75 | + int rm_ofs = neon_full_reg_offset(a->vm); | ||
76 | |||
77 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
78 | return false; | ||
79 | @@ -XXX,XX +XXX,XX @@ static bool do_vector_2sh(DisasContext *s, arg_2reg_shift *a, GVecGen2iFn *fn) | ||
80 | { | ||
81 | /* Handle a 2-reg-shift insn which can be vectorized. */ | ||
82 | int vec_size = a->q ? 16 : 8; | ||
83 | - int rd_ofs = neon_reg_offset(a->vd, 0); | ||
84 | - int rm_ofs = neon_reg_offset(a->vm, 0); | ||
85 | + int rd_ofs = neon_full_reg_offset(a->vd); | ||
86 | + int rm_ofs = neon_full_reg_offset(a->vm); | ||
87 | |||
88 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
89 | return false; | ||
90 | @@ -XXX,XX +XXX,XX @@ static bool do_fp_2sh(DisasContext *s, arg_2reg_shift *a, | ||
91 | { | ||
92 | /* FP operations in 2-reg-and-shift group */ | ||
93 | int vec_size = a->q ? 16 : 8; | ||
94 | - int rd_ofs = neon_reg_offset(a->vd, 0); | ||
95 | - int rm_ofs = neon_reg_offset(a->vm, 0); | ||
96 | + int rd_ofs = neon_full_reg_offset(a->vd); | ||
97 | + int rm_ofs = neon_full_reg_offset(a->vm); | ||
98 | TCGv_ptr fpst; | ||
99 | |||
100 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
101 | @@ -XXX,XX +XXX,XX @@ static bool do_1reg_imm(DisasContext *s, arg_1reg_imm *a, | ||
102 | return true; | ||
103 | } | 111 | } |
104 | 112 | ||
105 | - reg_ofs = neon_reg_offset(a->vd, 0); | 113 | /* |
106 | + reg_ofs = neon_full_reg_offset(a->vd); | 114 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
107 | vec_size = a->q ? 16 : 8; | 115 | ARMCacheAttrs cacheattrs1; |
108 | imm = asimd_imm_const(a->imm, a->cmode, a->op); | 116 | ARMMMUIdx s2_mmu_idx; |
109 | 117 | bool is_el0; | |
110 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMULL_P_3d(DisasContext *s, arg_3diff *a) | 118 | + uint64_t hcr; |
111 | return true; | 119 | |
112 | } | 120 | ret = get_phys_addr_with_secure(env, address, access_type, |
113 | 121 | s1_mmu_idx, is_secure, result, fi); | |
114 | - tcg_gen_gvec_3_ool(neon_reg_offset(a->vd, 0), | 122 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
115 | - neon_reg_offset(a->vn, 0), | 123 | } |
116 | - neon_reg_offset(a->vm, 0), | 124 | |
117 | + tcg_gen_gvec_3_ool(neon_full_reg_offset(a->vd), | 125 | /* Combine the S1 and S2 cache attributes. */ |
118 | + neon_full_reg_offset(a->vn), | 126 | - if (arm_hcr_el2_eff(env) & HCR_DC) { |
119 | + neon_full_reg_offset(a->vm), | 127 | + hcr = arm_hcr_el2_eff(env); |
120 | 16, 16, 0, fn_gvec); | 128 | + if (hcr & HCR_DC) { |
121 | return true; | 129 | /* |
122 | } | 130 | * HCR.DC forces the first stage attributes to |
123 | @@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a, | 131 | * Normal Non-Shareable, |
124 | { | 132 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
125 | /* Two registers and a scalar, using gvec */ | 133 | } |
126 | int vec_size = a->q ? 16 : 8; | 134 | cacheattrs1.shareability = 0; |
127 | - int rd_ofs = neon_reg_offset(a->vd, 0); | 135 | } |
128 | - int rn_ofs = neon_reg_offset(a->vn, 0); | 136 | - result->cacheattrs = combine_cacheattrs(env, cacheattrs1, |
129 | + int rd_ofs = neon_full_reg_offset(a->vd); | 137 | + result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1, |
130 | + int rn_ofs = neon_full_reg_offset(a->vn); | 138 | result->cacheattrs); |
131 | int rm_ofs; | 139 | |
132 | int idx; | 140 | /* |
133 | TCGv_ptr fpstatus; | ||
134 | @@ -XXX,XX +XXX,XX @@ static bool do_2scalar_fp_vec(DisasContext *s, arg_2scalar *a, | ||
135 | /* a->vm is M:Vm, which encodes both register and index */ | ||
136 | idx = extract32(a->vm, a->size + 2, 2); | ||
137 | a->vm = extract32(a->vm, 0, a->size + 2); | ||
138 | - rm_ofs = neon_reg_offset(a->vm, 0); | ||
139 | + rm_ofs = neon_full_reg_offset(a->vm); | ||
140 | |||
141 | fpstatus = fpstatus_ptr(a->size == 1 ? FPST_STD_F16 : FPST_STD); | ||
142 | tcg_gen_gvec_3_ptr(rd_ofs, rn_ofs, rm_ofs, fpstatus, | ||
143 | @@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a) | ||
144 | return true; | ||
145 | } | ||
146 | |||
147 | - tcg_gen_gvec_dup_mem(a->size, neon_reg_offset(a->vd, 0), | ||
148 | + tcg_gen_gvec_dup_mem(a->size, neon_full_reg_offset(a->vd), | ||
149 | neon_element_offset(a->vm, a->index, a->size), | ||
150 | a->q ? 16 : 8, a->q ? 16 : 8); | ||
151 | return true; | ||
152 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a) | ||
153 | static bool do_2misc_vec(DisasContext *s, arg_2misc *a, GVecGen2Fn *fn) | ||
154 | { | ||
155 | int vec_size = a->q ? 16 : 8; | ||
156 | - int rd_ofs = neon_reg_offset(a->vd, 0); | ||
157 | - int rm_ofs = neon_reg_offset(a->vm, 0); | ||
158 | + int rd_ofs = neon_full_reg_offset(a->vd); | ||
159 | + int rm_ofs = neon_full_reg_offset(a->vm); | ||
160 | |||
161 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
162 | return false; | ||
163 | diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc | ||
164 | index XXXXXXX..XXXXXXX 100644 | ||
165 | --- a/target/arm/translate-vfp.c.inc | ||
166 | +++ b/target/arm/translate-vfp.c.inc | ||
167 | @@ -XXX,XX +XXX,XX @@ static bool trans_VDUP(DisasContext *s, arg_VDUP *a) | ||
168 | } | ||
169 | |||
170 | tmp = load_reg(s, a->rt); | ||
171 | - tcg_gen_gvec_dup_i32(size, neon_reg_offset(a->vn, 0), | ||
172 | + tcg_gen_gvec_dup_i32(size, neon_full_reg_offset(a->vn), | ||
173 | vec_size, vec_size, tmp); | ||
174 | tcg_temp_free_i32(tmp); | ||
175 | |||
176 | -- | 141 | -- |
177 | 2.20.1 | 142 | 2.25.1 |
178 | |||
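The XOR in neon_element_offset() is compact enough to deserve a worked example. A 16-bit element at index 1 sits at byte offset 2 on a little-endian host; on a big-endian host, where each 8-byte unit is kept in host order, the same architectural element lives at byte 2 ^ (8 - 2) = 4. A standalone restatement of the computation, with HOST_BIG_ENDIAN standing in for the real build-time test:

    static long toy_element_offset(int element, int element_size)
    {
        long ofs = element * element_size;   /* little-endian offset */
    #ifdef HOST_BIG_ENDIAN
        /*
         * Flip the position within each 8-byte unit. Elements never
         * straddle units, so only the low three bits of ofs change.
         */
        if (element_size < 8) {
            ofs ^= 8 - element_size;
        }
    #endif
        return ofs;
    }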
1 | In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | meant we were using the H4() address swizzler macro rather than the | ||
3 | H2() macro, which is required for 2-byte data. This had no effect on | 3 | Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so
4 | little-endian hosts but meant we put the result data into the | ||
5 | destination Dreg in the wrong order on big-endian hosts. | ||
6 | 2 | ||
3 | Use arm_hcr_el2_eff_secstate instead of arm_hcr_el2_eff, so | ||
4 | that we use is_secure instead of the current security state. | ||
5 | These AT* operations have been broken since arm_hcr_el2_eff | ||
6 | gained a check for "el2 enabled" for Secure EL2. | ||
7 | |||
8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
10 | Message-id: 20221001162318.153420-18-richard.henderson@linaro.org | ||
7 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
8 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
9 | Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> | ||
10 | Message-id: 20201028191712.4910-2-peter.maydell@linaro.org | ||
11 | --- | 12 | --- |
12 | target/arm/vec_helper.c | 8 ++++---- | 13 | target/arm/ptw.c | 8 ++++---- |
13 | 1 file changed, 4 insertions(+), 4 deletions(-) | 14 | 1 file changed, 4 insertions(+), 4 deletions(-) |
14 | 15 | ||
15 | diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c | 16 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
16 | index XXXXXXX..XXXXXXX 100644 | 17 | index XXXXXXX..XXXXXXX 100644 |
17 | --- a/target/arm/vec_helper.c | 18 | --- a/target/arm/ptw.c |
18 | +++ b/target/arm/vec_helper.c | 19 | +++ b/target/arm/ptw.c |
19 | @@ -XXX,XX +XXX,XX @@ DO_ABA(gvec_uaba_d, uint64_t) | 20 | @@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx, |
20 | r2 = float16_##OP(m[H2(0)], m[H2(1)], fpst); \ | 21 | } |
21 | r3 = float16_##OP(m[H2(2)], m[H2(3)], fpst); \ | ||
22 | \ | ||
23 | - d[H4(0)] = r0; \ | ||
24 | - d[H4(1)] = r1; \ | ||
25 | - d[H4(2)] = r2; \ | ||
26 | - d[H4(3)] = r3; \ | ||
27 | + d[H2(0)] = r0; \ | ||
28 | + d[H2(1)] = r1; \ | ||
29 | + d[H2(2)] = r2; \ | ||
30 | + d[H2(3)] = r3; \ | ||
31 | } | 22 | } |
32 | 23 | ||
33 | DO_NEON_PAIRWISE(neon_padd, add) | 24 | - hcr_el2 = arm_hcr_el2_eff(env); |
25 | + hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure); | ||
26 | |||
27 | switch (mmu_idx) { | ||
28 | case ARMMMUIdx_Stage2: | ||
29 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
30 | return ~0; | ||
31 | } | ||
32 | |||
33 | - hcr = arm_hcr_el2_eff(env); | ||
34 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
35 | if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) { | ||
36 | /* | ||
37 | * PTW set and S1 walk touched S2 Device memory: | ||
38 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
39 | } | ||
40 | |||
41 | /* Combine the S1 and S2 cache attributes. */ | ||
42 | - hcr = arm_hcr_el2_eff(env); | ||
43 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
44 | if (hcr & HCR_DC) { | ||
45 | /* | ||
46 | * HCR.DC forces the first stage attributes to | ||
47 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
48 | result->page_size = TARGET_PAGE_SIZE; | ||
49 | |||
50 | /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | ||
51 | - hcr = arm_hcr_el2_eff(env); | ||
52 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
53 | result->cacheattrs.shareability = 0; | ||
54 | result->cacheattrs.is_s2_format = false; | ||
55 | if (hcr & HCR_DC) { | ||
34 | -- | 56 | -- |
35 | 2.20.1 | 57 | 2.25.1 |
36 | |||
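The H4()-versus-H2() mix-up fixed here is easier to see with the swizzlers written out. The values below match my reading of QEMU's definitions at the time, but treat them as illustrative: the point is that each macro flips an index within one 8-byte unit for its own element width, and both collapse to the identity on little-endian hosts, which is exactly why the bug was invisible there.

    #ifdef HOST_BIG_ENDIAN
    #define H2(x)  ((x) ^ 3)   /* 16-bit lanes: four per 8-byte unit */
    #define H4(x)  ((x) ^ 1)   /* 32-bit lanes: two per 8-byte unit */
    #else
    #define H2(x)  (x)
    #define H4(x)  (x)
    #endif

Indexing a uint16_t array with H4() on a big-endian host permutes lanes 0..3 into 1,0,3,2 rather than the required 3,2,1,0, which is the wrong-order symptom the commit message describes.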
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | This will shortly have users outside of translate-neon.c.inc. | 3 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
4 | |||
5 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 4 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
6 | Message-id: 20201030022618.785675-3-richard.henderson@linaro.org | 5 | Message-id: 20221001162318.153420-19-richard.henderson@linaro.org |
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 6 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | --- | 7 | --- |
10 | target/arm/translate.c | 20 ++++++++++++++++++++ | 8 | target/arm/ptw.c | 138 +++++++++++++++++++++++++---------------------- |
11 | target/arm/translate-neon.c.inc | 19 ------------------- | 9 | 1 file changed, 74 insertions(+), 64 deletions(-) |
12 | 2 files changed, 20 insertions(+), 19 deletions(-) | ||
13 | 10 | ||
14 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 11 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
15 | index XXXXXXX..XXXXXXX 100644 | 12 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/target/arm/translate.c | 13 | --- a/target/arm/ptw.c |
17 | +++ b/target/arm/translate.c | 14 | +++ b/target/arm/ptw.c |
18 | @@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg) | 15 | @@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr, |
19 | return offsetof(CPUARMState, vfp.zregs[reg >> 1].d[reg & 1]); | 16 | return ret; |
20 | } | 17 | } |
21 | 18 | ||
22 | +/* | 19 | +/* |
23 | + * Return the offset of a 2**SIZE piece of a NEON register, at index ELE, | 20 | + * MMU disabled. S1 addresses within aa64 translation regimes are |
24 | + * where 0 is the least significant end of the register. | 21 | + * still checked for bounds -- see AArch64.S1DisabledOutput(). |
25 | + */ | 22 | + */ |
26 | +static long neon_element_offset(int reg, int element, MemOp size) | 23 | +static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, |
24 | + MMUAccessType access_type, | ||
25 | + ARMMMUIdx mmu_idx, bool is_secure, | ||
26 | + GetPhysAddrResult *result, | ||
27 | + ARMMMUFaultInfo *fi) | ||
27 | +{ | 28 | +{ |
28 | + int element_size = 1 << size; | 29 | + uint64_t hcr; |
29 | + int ofs = element * element_size; | 30 | + uint8_t memattr; |
30 | +#ifdef HOST_WORDS_BIGENDIAN | 31 | + |
31 | + /* | 32 | + if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { |
32 | + * Calculate the offset assuming fully little-endian, | 33 | + int r_el = regime_el(env, mmu_idx); |
33 | + * then XOR to account for the order of the 8-byte units. | 34 | + if (arm_el_is_aa64(env, r_el)) { |
34 | + */ | 35 | + int pamax = arm_pamax(env_archcpu(env)); |
35 | + if (element_size < 8) { | 36 | + uint64_t tcr = env->cp15.tcr_el[r_el]; |
36 | + ofs ^= 8 - element_size; | 37 | + int addrtop, tbi; |
38 | + | ||
39 | + tbi = aa64_va_parameter_tbi(tcr, mmu_idx); | ||
40 | + if (access_type == MMU_INST_FETCH) { | ||
41 | + tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); | ||
42 | + } | ||
43 | + tbi = (tbi >> extract64(address, 55, 1)) & 1; | ||
44 | + addrtop = (tbi ? 55 : 63); | ||
45 | + | ||
46 | + if (extract64(address, pamax, addrtop - pamax + 1) != 0) { | ||
47 | + fi->type = ARMFault_AddressSize; | ||
48 | + fi->level = 0; | ||
49 | + fi->stage2 = false; | ||
50 | + return 1; | ||
51 | + } | ||
52 | + | ||
53 | + /* | ||
54 | + * When TBI is disabled, we've just validated that all of the | ||
55 | + * bits above PAMax are zero, so logically we only need to | ||
56 | + * clear the top byte for TBI. But it's clearer to follow | ||
57 | + * the pseudocode set of addrdesc.paddress. | ||
58 | + */ | ||
59 | + address = extract64(address, 0, 52); | ||
60 | + } | ||
37 | + } | 61 | + } |
38 | +#endif | 62 | + |
39 | + return neon_full_reg_offset(reg) + ofs; | 63 | + result->phys = address; |
64 | + result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
65 | + result->page_size = TARGET_PAGE_SIZE; | ||
66 | + | ||
67 | + /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | ||
68 | + hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
69 | + result->cacheattrs.shareability = 0; | ||
70 | + result->cacheattrs.is_s2_format = false; | ||
71 | + if (hcr & HCR_DC) { | ||
72 | + if (hcr & HCR_DCT) { | ||
73 | + memattr = 0xf0; /* Tagged, Normal, WB, RWA */ | ||
74 | + } else { | ||
75 | + memattr = 0xff; /* Normal, WB, RWA */ | ||
76 | + } | ||
77 | + } else if (access_type == MMU_INST_FETCH) { | ||
78 | + if (regime_sctlr(env, mmu_idx) & SCTLR_I) { | ||
79 | + memattr = 0xee; /* Normal, WT, RA, NT */ | ||
80 | + } else { | ||
81 | + memattr = 0x44; /* Normal, NC, No */ | ||
82 | + } | ||
83 | + result->cacheattrs.shareability = 2; /* outer sharable */ | ||
84 | + } else { | ||
85 | + memattr = 0x00; /* Device, nGnRnE */ | ||
86 | + } | ||
87 | + result->cacheattrs.attrs = memattr; | ||
88 | + return 0; | ||
40 | +} | 89 | +} |
41 | + | 90 | + |
42 | static inline long vfp_reg_offset(bool dp, unsigned reg) | 91 | bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
43 | { | 92 | MMUAccessType access_type, ARMMMUIdx mmu_idx, |
44 | if (dp) { | 93 | bool is_secure, GetPhysAddrResult *result, |
45 | diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc | 94 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, |
46 | index XXXXXXX..XXXXXXX 100644 | 95 | /* Definitely a real MMU, not an MPU */ |
47 | --- a/target/arm/translate-neon.c.inc | 96 | |
48 | +++ b/target/arm/translate-neon.c.inc | 97 | if (regime_translation_disabled(env, mmu_idx, is_secure)) { |
49 | @@ -XXX,XX +XXX,XX @@ static inline int neon_3same_fp_size(DisasContext *s, int x) | 98 | - uint64_t hcr; |
50 | #include "decode-neon-ls.c.inc" | 99 | - uint8_t memattr; |
51 | #include "decode-neon-shared.c.inc" | ||
52 | |||
53 | -/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE, | ||
54 | - * where 0 is the least significant end of the register. | ||
55 | - */ | ||
56 | -static inline long | ||
57 | -neon_element_offset(int reg, int element, MemOp size) | ||
58 | -{ | ||
59 | - int element_size = 1 << size; | ||
60 | - int ofs = element * element_size; | ||
61 | -#ifdef HOST_WORDS_BIGENDIAN | ||
62 | - /* Calculate the offset assuming fully little-endian, | ||
63 | - * then XOR to account for the order of the 8-byte units. | ||
64 | - */ | ||
65 | - if (element_size < 8) { | ||
66 | - ofs ^= 8 - element_size; | ||
67 | - } | ||
68 | -#endif | ||
69 | - return neon_full_reg_offset(reg) + ofs; | ||
70 | -} | ||
71 | - | 100 | - |
72 | static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop) | 101 | - /* |
73 | { | 102 | - * MMU disabled. S1 addresses within aa64 translation regimes are |
74 | long offset = neon_element_offset(reg, ele, mop & MO_SIZE); | 103 | - * still checked for bounds -- see AArch64.TranslateAddressS1Off. |
104 | - */ | ||
105 | - if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { | ||
106 | - int r_el = regime_el(env, mmu_idx); | ||
107 | - if (arm_el_is_aa64(env, r_el)) { | ||
108 | - int pamax = arm_pamax(env_archcpu(env)); | ||
109 | - uint64_t tcr = env->cp15.tcr_el[r_el]; | ||
110 | - int addrtop, tbi; | ||
111 | - | ||
112 | - tbi = aa64_va_parameter_tbi(tcr, mmu_idx); | ||
113 | - if (access_type == MMU_INST_FETCH) { | ||
114 | - tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx); | ||
115 | - } | ||
116 | - tbi = (tbi >> extract64(address, 55, 1)) & 1; | ||
117 | - addrtop = (tbi ? 55 : 63); | ||
118 | - | ||
119 | - if (extract64(address, pamax, addrtop - pamax + 1) != 0) { | ||
120 | - fi->type = ARMFault_AddressSize; | ||
121 | - fi->level = 0; | ||
122 | - fi->stage2 = false; | ||
123 | - return 1; | ||
124 | - } | ||
125 | - | ||
126 | - /* | ||
127 | - * When TBI is disabled, we've just validated that all of the | ||
128 | - * bits above PAMax are zero, so logically we only need to | ||
129 | - * clear the top byte for TBI. But it's clearer to follow | ||
130 | - * the pseudocode set of addrdesc.paddress. | ||
131 | - */ | ||
132 | - address = extract64(address, 0, 52); | ||
133 | - } | ||
134 | - } | ||
135 | - result->phys = address; | ||
136 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
137 | - result->page_size = TARGET_PAGE_SIZE; | ||
138 | - | ||
139 | - /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | ||
140 | - hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
141 | - result->cacheattrs.shareability = 0; | ||
142 | - result->cacheattrs.is_s2_format = false; | ||
143 | - if (hcr & HCR_DC) { | ||
144 | - if (hcr & HCR_DCT) { | ||
145 | - memattr = 0xf0; /* Tagged, Normal, WB, RWA */ | ||
146 | - } else { | ||
147 | - memattr = 0xff; /* Normal, WB, RWA */ | ||
148 | - } | ||
149 | - } else if (access_type == MMU_INST_FETCH) { | ||
150 | - if (regime_sctlr(env, mmu_idx) & SCTLR_I) { | ||
151 | - memattr = 0xee; /* Normal, WT, RA, NT */ | ||
152 | - } else { | ||
153 | - memattr = 0x44; /* Normal, NC, No */ | ||
154 | - } | ||
155 | - result->cacheattrs.shareability = 2; /* outer sharable */ | ||
156 | - } else { | ||
157 | - memattr = 0x00; /* Device, nGnRnE */ | ||
158 | - } | ||
159 | - result->cacheattrs.attrs = memattr; | ||
160 | - return 0; | ||
161 | + return get_phys_addr_disabled(env, address, access_type, mmu_idx, | ||
162 | + is_secure, result, fi); | ||
163 | } | ||
164 | - | ||
165 | if (regime_using_lpae_format(env, mmu_idx)) { | ||
166 | return get_phys_addr_lpae(env, address, access_type, mmu_idx, | ||
167 | is_secure, false, result, fi); | ||
75 | -- | 168 | -- |
76 | 2.20.1 | 169 | 2.25.1 |
77 | |||
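The factored-out get_phys_addr_disabled() keeps the AArch64 bounds check for disabled translation regimes: with TBI the tag byte is ignored, so only bits [55:pamax] must be zero; without TBI, bits [63:pamax]. A self-contained sketch of just that check, where toy_extract64 mirrors the usual extract64 helper and all names are mine:

    #include <stdint.h>
    #include <stdbool.h>

    /* Extract 'length' bits starting at 'start'; valid for length 1..64. */
    static uint64_t toy_extract64(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    static bool toy_s1_disabled_out_of_bounds(uint64_t address,
                                              int pamax, bool tbi)
    {
        int addrtop = tbi ? 55 : 63;
        return toy_extract64(address, pamax, addrtop - pamax + 1) != 0;
    }

For pamax = 48 with TBI enabled, any address with a bit set in [55:48] takes an ARMFault_AddressSize at level 0, exactly as in the hunk above.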
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | We can then use this to improve VMOV (scalar to gp) and | 3 | Do not apply memattr or shareability for Stage2 translations. |
4 | VMOV (gp to scalar) so that we simply perform the memory | 4 | Make sure to apply HCR_{DC,DCT} only to Regime_EL10, per the |
5 | operation that we wanted, rather than inserting or | 5 | pseudocode in AArch64.S1DisabledOutput. |
6 | extracting from a 32-bit quantity. | ||
7 | |||
8 | These were the last uses of neon_load/store_reg, so remove them. | ||
9 | 6 | ||
10 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 7 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> |
11 | Message-id: 20201030022618.785675-7-richard.henderson@linaro.org | ||
12 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 8 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
9 | Message-id: 20221001162318.153420-20-richard.henderson@linaro.org | ||
13 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
14 | --- | 11 | --- |
15 | target/arm/translate.c | 50 +++++++++++++----------- | 12 | target/arm/ptw.c | 48 +++++++++++++++++++++++++----------------------- |
16 | target/arm/translate-vfp.c.inc | 71 +++++----------------------------- | 13 | 1 file changed, 25 insertions(+), 23 deletions(-) |
17 | 2 files changed, 37 insertions(+), 84 deletions(-) | ||
18 | 14 | ||
19 | diff --git a/target/arm/translate.c b/target/arm/translate.c | 15 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c |
20 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
21 | --- a/target/arm/translate.c | 17 | --- a/target/arm/ptw.c |
22 | +++ b/target/arm/translate.c | 18 | +++ b/target/arm/ptw.c |
23 | @@ -XXX,XX +XXX,XX @@ static long neon_full_reg_offset(unsigned reg) | 19 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, |
24 | * Return the offset of a 2**SIZE piece of a NEON register, at index ELE, | 20 | GetPhysAddrResult *result, |
25 | * where 0 is the least significant end of the register. | 21 | ARMMMUFaultInfo *fi) |
26 | */ | ||
27 | -static long neon_element_offset(int reg, int element, MemOp size) | ||
28 | +static long neon_element_offset(int reg, int element, MemOp memop) | ||
29 | { | 22 | { |
30 | - int element_size = 1 << size; | 23 | - uint64_t hcr; |
31 | + int element_size = 1 << (memop & MO_SIZE); | 24 | - uint8_t memattr; |
32 | int ofs = element * element_size; | 25 | + uint8_t memattr = 0x00; /* Device nGnRnE */ |
33 | #ifdef HOST_WORDS_BIGENDIAN | 26 | + uint8_t shareability = 0; /* non-sharable */ |
34 | /* | 27 | |
35 | @@ -XXX,XX +XXX,XX @@ static long vfp_reg_offset(bool dp, unsigned reg) | 28 | if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) { |
29 | int r_el = regime_el(env, mmu_idx); | ||
30 | + | ||
31 | if (arm_el_is_aa64(env, r_el)) { | ||
32 | int pamax = arm_pamax(env_archcpu(env)); | ||
33 | uint64_t tcr = env->cp15.tcr_el[r_el]; | ||
34 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, | ||
35 | */ | ||
36 | address = extract64(address, 0, 52); | ||
37 | } | ||
38 | + | ||
39 | + /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ | ||
40 | + if (r_el == 1) { | ||
41 | + uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure); | ||
42 | + if (hcr & HCR_DC) { | ||
43 | + if (hcr & HCR_DCT) { | ||
44 | + memattr = 0xf0; /* Tagged, Normal, WB, RWA */ | ||
45 | + } else { | ||
46 | + memattr = 0xff; /* Normal, WB, RWA */ | ||
47 | + } | ||
48 | + } | ||
49 | + } | ||
50 | + if (memattr == 0 && access_type == MMU_INST_FETCH) { | ||
51 | + if (regime_sctlr(env, mmu_idx) & SCTLR_I) { | ||
52 | + memattr = 0xee; /* Normal, WT, RA, NT */ | ||
53 | + } else { | ||
54 | + memattr = 0x44; /* Normal, NC, No */ | ||
55 | + } | ||
56 | + shareability = 2; /* outer sharable */ | ||
57 | + } | ||
58 | + result->cacheattrs.is_s2_format = false; | ||
36 | } | 59 | } |
37 | } | 60 | |
38 | 61 | result->phys = address; | |
39 | -static TCGv_i32 neon_load_reg(int reg, int pass) | 62 | result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; |
40 | -{ | 63 | result->page_size = TARGET_PAGE_SIZE; |
41 | - TCGv_i32 tmp = tcg_temp_new_i32(); | ||
42 | - tcg_gen_ld_i32(tmp, cpu_env, neon_element_offset(reg, pass, MO_32)); | ||
43 | - return tmp; | ||
44 | -} | ||
45 | - | 64 | - |
46 | -static void neon_store_reg(int reg, int pass, TCGv_i32 var) | 65 | - /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */ |
47 | -{ | 66 | - hcr = arm_hcr_el2_eff_secstate(env, is_secure); |
48 | - tcg_gen_st_i32(var, cpu_env, neon_element_offset(reg, pass, MO_32)); | 67 | - result->cacheattrs.shareability = 0; |
49 | - tcg_temp_free_i32(var); | 68 | - result->cacheattrs.is_s2_format = false; |
50 | -} | 69 | - if (hcr & HCR_DC) { |
51 | - | 70 | - if (hcr & HCR_DCT) { |
52 | static inline void neon_load_reg64(TCGv_i64 var, int reg) | 71 | - memattr = 0xf0; /* Tagged, Normal, WB, RWA */ |
53 | { | 72 | - } else { |
54 | tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg)); | 73 | - memattr = 0xff; /* Normal, WB, RWA */ |
55 | @@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg) | ||
56 | tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg)); | ||
57 | } | ||
58 | |||
59 | -static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size) | ||
60 | +static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp memop) | ||
61 | { | ||
62 | - long off = neon_element_offset(reg, ele, size); | ||
63 | + long off = neon_element_offset(reg, ele, memop); | ||
64 | |||
65 | - switch (size) { | ||
66 | - case MO_32: | ||
67 | + switch (memop) { | ||
68 | + case MO_SB: | ||
69 | + tcg_gen_ld8s_i32(dest, cpu_env, off); | ||
70 | + break; | ||
71 | + case MO_UB: | ||
72 | + tcg_gen_ld8u_i32(dest, cpu_env, off); | ||
73 | + break; | ||
74 | + case MO_SW: | ||
75 | + tcg_gen_ld16s_i32(dest, cpu_env, off); | ||
76 | + break; | ||
77 | + case MO_UW: | ||
78 | + tcg_gen_ld16u_i32(dest, cpu_env, off); | ||
79 | + break; | ||
80 | + case MO_UL: | ||
81 | + case MO_SL: | ||
82 | tcg_gen_ld_i32(dest, cpu_env, off); | ||
83 | break; | ||
84 | default: | ||
85 | @@ -XXX,XX +XXX,XX @@ static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size) | ||
86 | } | ||
87 | } | ||
88 | |||
89 | -static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size) | ||
90 | +static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp memop) | ||
91 | { | ||
92 | - long off = neon_element_offset(reg, ele, size); | ||
93 | + long off = neon_element_offset(reg, ele, memop); | ||
94 | |||
95 | - switch (size) { | ||
96 | + switch (memop) { | ||
97 | + case MO_8: | ||
98 | + tcg_gen_st8_i32(src, cpu_env, off); | ||
99 | + break; | ||
100 | + case MO_16: | ||
101 | + tcg_gen_st16_i32(src, cpu_env, off); | ||
102 | + break; | ||
103 | case MO_32: | ||
104 | tcg_gen_st_i32(src, cpu_env, off); | ||
105 | break; | ||
106 | diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc | ||
107 | index XXXXXXX..XXXXXXX 100644 | ||
108 | --- a/target/arm/translate-vfp.c.inc | ||
109 | +++ b/target/arm/translate-vfp.c.inc | ||
110 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a) | ||
111 | { | ||
112 | /* VMOV scalar to general purpose register */ | ||
113 | TCGv_i32 tmp; | ||
114 | - int pass; | ||
115 | - uint32_t offset; | ||
116 | |||
117 | - /* SIZE == 2 is a VFP instruction; otherwise NEON. */ | ||
118 | - if (a->size == 2 | ||
119 | + /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */ | ||
120 | + if (a->size == MO_32 | ||
121 | ? !dc_isar_feature(aa32_fpsp_v2, s) | ||
122 | : !arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
123 | return false; | ||
124 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a) | ||
125 | return false; | ||
126 | } | ||
127 | |||
128 | - offset = a->index << a->size; | ||
129 | - pass = extract32(offset, 2, 1); | ||
130 | - offset = extract32(offset, 0, 2) * 8; | ||
131 | - | ||
132 | if (!vfp_access_check(s)) { | ||
133 | return true; | ||
134 | } | ||
135 | |||
136 | - tmp = neon_load_reg(a->vn, pass); | ||
137 | - switch (a->size) { | ||
138 | - case 0: | ||
139 | - if (offset) { | ||
140 | - tcg_gen_shri_i32(tmp, tmp, offset); | ||
141 | - } | 74 | - } |
142 | - if (a->u) { | 75 | - } else if (access_type == MMU_INST_FETCH) { |
143 | - gen_uxtb(tmp); | 76 | - if (regime_sctlr(env, mmu_idx) & SCTLR_I) { |
77 | - memattr = 0xee; /* Normal, WT, RA, NT */ | ||
144 | - } else { | 78 | - } else { |
145 | - gen_sxtb(tmp); | 79 | - memattr = 0x44; /* Normal, NC, No */ |
146 | - } | 80 | - } |
147 | - break; | 81 | - result->cacheattrs.shareability = 2; /* outer sharable */ |
148 | - case 1: | 82 | - } else { |
149 | - if (a->u) { | 83 | - memattr = 0x00; /* Device, nGnRnE */ |
150 | - if (offset) { | ||
151 | - tcg_gen_shri_i32(tmp, tmp, 16); | ||
152 | - } else { | ||
153 | - gen_uxth(tmp); | ||
154 | - } | ||
155 | - } else { | ||
156 | - if (offset) { | ||
157 | - tcg_gen_sari_i32(tmp, tmp, 16); | ||
158 | - } else { | ||
159 | - gen_sxth(tmp); | ||
160 | - } | ||
161 | - } | ||
162 | - break; | ||
163 | - case 2: | ||
164 | - break; | ||
165 | - } | 84 | - } |
166 | + tmp = tcg_temp_new_i32(); | 85 | + result->cacheattrs.shareability = shareability; |
167 | + read_neon_element32(tmp, a->vn, a->index, a->size | (a->u ? 0 : MO_SIGN)); | 86 | result->cacheattrs.attrs = memattr; |
168 | store_reg(s, a->rt, tmp); | 87 | return 0; |
169 | |||
170 | return true; | ||
171 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_to_gp(DisasContext *s, arg_VMOV_to_gp *a) | ||
172 | static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a) | ||
173 | { | ||
174 | /* VMOV general purpose register to scalar */ | ||
175 | - TCGv_i32 tmp, tmp2; | ||
176 | - int pass; | ||
177 | - uint32_t offset; | ||
178 | + TCGv_i32 tmp; | ||
179 | |||
180 | - /* SIZE == 2 is a VFP instruction; otherwise NEON. */ | ||
181 | - if (a->size == 2 | ||
182 | + /* SIZE == MO_32 is a VFP instruction; otherwise NEON. */ | ||
183 | + if (a->size == MO_32 | ||
184 | ? !dc_isar_feature(aa32_fpsp_v2, s) | ||
185 | : !arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
186 | return false; | ||
187 | @@ -XXX,XX +XXX,XX @@ static bool trans_VMOV_from_gp(DisasContext *s, arg_VMOV_from_gp *a) | ||
188 | return false; | ||
189 | } | ||
190 | |||
191 | - offset = a->index << a->size; | ||
192 | - pass = extract32(offset, 2, 1); | ||
193 | - offset = extract32(offset, 0, 2) * 8; | ||
194 | - | ||
195 | if (!vfp_access_check(s)) { | ||
196 | return true; | ||
197 | } | ||
198 | |||
199 | tmp = load_reg(s, a->rt); | ||
200 | - switch (a->size) { | ||
201 | - case 0: | ||
202 | - tmp2 = neon_load_reg(a->vn, pass); | ||
203 | - tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 8); | ||
204 | - tcg_temp_free_i32(tmp2); | ||
205 | - break; | ||
206 | - case 1: | ||
207 | - tmp2 = neon_load_reg(a->vn, pass); | ||
208 | - tcg_gen_deposit_i32(tmp, tmp2, tmp, offset, 16); | ||
209 | - tcg_temp_free_i32(tmp2); | ||
210 | - break; | ||
211 | - case 2: | ||
212 | - break; | ||
213 | - } | ||
214 | - neon_store_reg(a->vn, pass, tmp); | ||
215 | + write_neon_element32(tmp, a->vn, a->index, a->size); | ||
216 | + tcg_temp_free_i32(tmp); | ||
217 | |||
218 | return true; | ||
219 | } | 88 | } |
220 | -- | 89 | -- |
221 | 2.20.1 | 90 | 2.25.1 |
222 | |||
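The attribute selection the patch above settles on is worth restating in one place: Stage2 regimes skip it entirely, HCR_{DC,DCT} applies only when the regime is EL1&0, and instruction fetches fall back on SCTLR.I. A condensed sketch; the flag constants are illustrative stand-ins, not QEMU's:

    #include <stdint.h>
    #include <stdbool.h>

    #define TOY_HCR_DC   (1u << 0)   /* stand-in for HCR_DC */
    #define TOY_HCR_DCT  (1u << 1)   /* stand-in for HCR_DCT */

    static uint8_t toy_disabled_memattr(int regime_el, uint32_t hcr,
                                        bool is_ifetch, bool sctlr_i)
    {
        uint8_t memattr = 0x00;                   /* Device nGnRnE */

        if (regime_el == 1 && (hcr & TOY_HCR_DC)) {
            memattr = (hcr & TOY_HCR_DCT) ? 0xf0  /* Tagged, Normal, WB, RWA */
                                          : 0xff; /* Normal, WB, RWA */
        }
        if (memattr == 0 && is_ifetch) {
            memattr = sctlr_i ? 0xee              /* Normal, WT, RA, NT */
                              : 0x44;             /* Normal, NC */
        }
        return memattr;
    }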
1 | From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> | 1 | From: Richard Henderson <richard.henderson@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | Secure mode is not exempted from checking SCR_EL3.TLOR, and in the | 3 | Adjust GetPhysAddrResult to fill in CPUTLBEntryFull, |
4 | future HCR_EL2.TLOR when S-EL2 is enabled. | 4 | so that it may be passed directly to tlb_set_page_full. |
5 | 5 | ||
6 | Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> | 6 | The change is large, but mostly mechanical. The major |
7 | non-mechanical change is page_size -> lg_page_size. | ||
8 | Most of the time this is obvious, and is related to | ||
9 | TARGET_PAGE_BITS. | ||
10 | |||
11 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | ||
7 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | 12 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> |
13 | Message-id: 20221001162318.153420-21-richard.henderson@linaro.org | ||
8 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
9 | --- | 15 | --- |
10 | target/arm/helper.c | 19 +++++-------------- | 16 | target/arm/internals.h | 5 +- |
11 | 1 file changed, 5 insertions(+), 14 deletions(-) | 17 | target/arm/helper.c | 12 +-- |
18 | target/arm/m_helper.c | 20 ++--- | ||
19 | target/arm/ptw.c | 179 ++++++++++++++++++++-------------------- | ||
20 | target/arm/tlb_helper.c | 9 +- | ||
21 | 5 files changed, 111 insertions(+), 114 deletions(-) | ||
12 | 22 | ||
23 | diff --git a/target/arm/internals.h b/target/arm/internals.h | ||
24 | index XXXXXXX..XXXXXXX 100644 | ||
25 | --- a/target/arm/internals.h | ||
26 | +++ b/target/arm/internals.h | ||
27 | @@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs { | ||
28 | |||
29 | /* Fields that are valid upon success. */ | ||
30 | typedef struct GetPhysAddrResult { | ||
31 | - hwaddr phys; | ||
32 | - target_ulong page_size; | ||
33 | - int prot; | ||
34 | - MemTxAttrs attrs; | ||
35 | + CPUTLBEntryFull f; | ||
36 | ARMCacheAttrs cacheattrs; | ||
37 | } GetPhysAddrResult; | ||
38 | |||
13 | diff --git a/target/arm/helper.c b/target/arm/helper.c | 39 | diff --git a/target/arm/helper.c b/target/arm/helper.c |
14 | index XXXXXXX..XXXXXXX 100644 | 40 | index XXXXXXX..XXXXXXX 100644 |
15 | --- a/target/arm/helper.c | 41 | --- a/target/arm/helper.c |
16 | +++ b/target/arm/helper.c | 42 | +++ b/target/arm/helper.c |
17 | @@ -XXX,XX +XXX,XX @@ static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri) | 43 | @@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value, |
18 | #endif | 44 | /* Create a 64-bit PAR */ |
19 | 45 | par64 = (1 << 11); /* LPAE bit always set */ | |
20 | /* Shared logic between LORID and the rest of the LOR* registers. | 46 | if (!ret) { |
21 | - * Secure state has already been delt with. | 47 | - par64 |= res.phys & ~0xfffULL; |
22 | + * Secure state exclusion has already been dealt with. | 48 | - if (!res.attrs.secure) { |
23 | */ | 49 | + par64 |= res.f.phys_addr & ~0xfffULL; |
24 | -static CPAccessResult access_lor_ns(CPUARMState *env) | 50 | + if (!res.f.attrs.secure) { |
25 | +static CPAccessResult access_lor_ns(CPUARMState *env, | 51 | par64 |= (1 << 9); /* NS */ |
26 | + const ARMCPRegInfo *ri, bool isread) | 52 | } |
53 | par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */ | ||
54 | @@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value, | ||
55 | */ | ||
56 | if (!ret) { | ||
57 | /* We do not set any attribute bits in the PAR */ | ||
58 | - if (res.page_size == (1 << 24) | ||
59 | + if (res.f.lg_page_size == 24 | ||
60 | && arm_feature(env, ARM_FEATURE_V7)) { | ||
61 | - par64 = (res.phys & 0xff000000) | (1 << 1); | ||
62 | + par64 = (res.f.phys_addr & 0xff000000) | (1 << 1); | ||
63 | } else { | ||
64 | - par64 = res.phys & 0xfffff000; | ||
65 | + par64 = res.f.phys_addr & 0xfffff000; | ||
66 | } | ||
67 | - if (!res.attrs.secure) { | ||
68 | + if (!res.f.attrs.secure) { | ||
69 | par64 |= (1 << 9); /* NS */ | ||
70 | } | ||
71 | } else { | ||
72 | diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c | ||
73 | index XXXXXXX..XXXXXXX 100644 | ||
74 | --- a/target/arm/m_helper.c | ||
75 | +++ b/target/arm/m_helper.c | ||
76 | @@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value, | ||
77 | } | ||
78 | goto pend_fault; | ||
79 | } | ||
80 | - address_space_stl_le(arm_addressspace(cs, res.attrs), res.phys, value, | ||
81 | - res.attrs, &txres); | ||
82 | + address_space_stl_le(arm_addressspace(cs, res.f.attrs), res.f.phys_addr, | ||
83 | + value, res.f.attrs, &txres); | ||
84 | if (txres != MEMTX_OK) { | ||
85 | /* BusFault trying to write the data */ | ||
86 | if (mode == STACK_LAZYFP) { | ||
87 | @@ -XXX,XX +XXX,XX @@ static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr, | ||
88 | goto pend_fault; | ||
89 | } | ||
90 | |||
91 | - value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys, | ||
92 | - res.attrs, &txres); | ||
93 | + value = address_space_ldl(arm_addressspace(cs, res.f.attrs), | ||
94 | + res.f.phys_addr, res.f.attrs, &txres); | ||
95 | if (txres != MEMTX_OK) { | ||
96 | /* BusFault trying to read the data */ | ||
97 | qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n"); | ||
98 | @@ -XXX,XX +XXX,XX @@ static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, bool secure, | ||
99 | qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n"); | ||
100 | return false; | ||
101 | } | ||
102 | - *insn = address_space_lduw_le(arm_addressspace(cs, res.attrs), res.phys, | ||
103 | - res.attrs, &txres); | ||
104 | + *insn = address_space_lduw_le(arm_addressspace(cs, res.f.attrs), | ||
105 | + res.f.phys_addr, res.f.attrs, &txres); | ||
106 | if (txres != MEMTX_OK) { | ||
107 | env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK; | ||
108 | armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); | ||
109 | @@ -XXX,XX +XXX,XX @@ static bool v7m_read_sg_stack_word(ARMCPU *cpu, ARMMMUIdx mmu_idx, | ||
110 | } | ||
111 | return false; | ||
112 | } | ||
113 | - value = address_space_ldl(arm_addressspace(cs, res.attrs), res.phys, | ||
114 | - res.attrs, &txres); | ||
115 | + value = address_space_ldl(arm_addressspace(cs, res.f.attrs), | ||
116 | + res.f.phys_addr, res.f.attrs, &txres); | ||
117 | if (txres != MEMTX_OK) { | ||
118 | /* BusFault trying to read the data */ | ||
119 | qemu_log_mask(CPU_LOG_INT, | ||
120 | @@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) | ||
121 | } else { | ||
122 | mrvalid = true; | ||
123 | } | ||
124 | - r = res.prot & PAGE_READ; | ||
125 | - rw = res.prot & PAGE_WRITE; | ||
126 | + r = res.f.prot & PAGE_READ; | ||
127 | + rw = res.f.prot & PAGE_WRITE; | ||
128 | } else { | ||
129 | r = false; | ||
130 | rw = false; | ||
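All four m_helper.c hunks above are the same mechanical rename applied to one recurring pattern: translate, then access the resulting physical address with the translated attributes. Roughly (a sketch of the pattern, not the exact surrounding code):

    /* on a successful translation, res.f describes the page */
    value = address_space_ldl(arm_addressspace(cs, res.f.attrs),
                              res.f.phys_addr, res.f.attrs, &txres);
    if (txres != MEMTX_OK) {
        /* report BusFault */
    }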
131 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
132 | index XXXXXXX..XXXXXXX 100644 | ||
133 | --- a/target/arm/ptw.c | ||
134 | +++ b/target/arm/ptw.c | ||
135 | @@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
136 | assert(!is_secure); | ||
137 | } | ||
138 | |||
139 | - addr = s2.phys; | ||
140 | + addr = s2.f.phys_addr; | ||
141 | } | ||
142 | return addr; | ||
143 | } | ||
144 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
145 | /* 1Mb section. */ | ||
146 | phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); | ||
147 | ap = (desc >> 10) & 3; | ||
148 | - result->page_size = 1024 * 1024; | ||
149 | + result->f.lg_page_size = 20; /* 1MB */ | ||
150 | } else { | ||
151 | /* Lookup l2 entry. */ | ||
152 | if (type == 1) { | ||
153 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
154 | case 1: /* 64k page. */ | ||
155 | phys_addr = (desc & 0xffff0000) | (address & 0xffff); | ||
156 | ap = (desc >> (4 + ((address >> 13) & 6))) & 3; | ||
157 | - result->page_size = 0x10000; | ||
158 | + result->f.lg_page_size = 16; | ||
159 | break; | ||
160 | case 2: /* 4k page. */ | ||
161 | phys_addr = (desc & 0xfffff000) | (address & 0xfff); | ||
162 | ap = (desc >> (4 + ((address >> 9) & 6))) & 3; | ||
163 | - result->page_size = 0x1000; | ||
164 | + result->f.lg_page_size = 12; | ||
165 | break; | ||
166 | case 3: /* 1k page, or ARMv6/XScale "extended small (4k) page" */ | ||
167 | if (type == 1) { | ||
168 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
169 | if (arm_feature(env, ARM_FEATURE_XSCALE) | ||
170 | || arm_feature(env, ARM_FEATURE_V6)) { | ||
171 | phys_addr = (desc & 0xfffff000) | (address & 0xfff); | ||
172 | - result->page_size = 0x1000; | ||
173 | + result->f.lg_page_size = 12; | ||
174 | } else { | ||
175 | /* | ||
176 | * UNPREDICTABLE in ARMv5; we choose to take a | ||
177 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
178 | } | ||
179 | } else { | ||
180 | phys_addr = (desc & 0xfffffc00) | (address & 0x3ff); | ||
181 | - result->page_size = 0x400; | ||
182 | + result->f.lg_page_size = 10; | ||
183 | } | ||
184 | ap = (desc >> 4) & 3; | ||
185 | break; | ||
186 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address, | ||
187 | g_assert_not_reached(); | ||
188 | } | ||
189 | } | ||
190 | - result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
191 | - result->prot |= result->prot ? PAGE_EXEC : 0; | ||
192 | - if (!(result->prot & (1 << access_type))) { | ||
193 | + result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
194 | + result->f.prot |= result->f.prot ? PAGE_EXEC : 0; | ||
195 | + if (!(result->f.prot & (1 << access_type))) { | ||
196 | /* Access permission fault. */ | ||
197 | fi->type = ARMFault_Permission; | ||
198 | goto do_fault; | ||
199 | } | ||
200 | - result->phys = phys_addr; | ||
201 | + result->f.phys_addr = phys_addr; | ||
202 | return false; | ||
203 | do_fault: | ||
204 | fi->domain = domain; | ||
205 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
206 | phys_addr = (desc & 0xff000000) | (address & 0x00ffffff); | ||
207 | phys_addr |= (uint64_t)extract32(desc, 20, 4) << 32; | ||
208 | phys_addr |= (uint64_t)extract32(desc, 5, 4) << 36; | ||
209 | - result->page_size = 0x1000000; | ||
210 | + result->f.lg_page_size = 24; /* 16MB */ | ||
211 | } else { | ||
212 | /* Section. */ | ||
213 | phys_addr = (desc & 0xfff00000) | (address & 0x000fffff); | ||
214 | - result->page_size = 0x100000; | ||
215 | + result->f.lg_page_size = 20; /* 1MB */ | ||
216 | } | ||
217 | ap = ((desc >> 10) & 3) | ((desc >> 13) & 4); | ||
218 | xn = desc & (1 << 4); | ||
219 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
220 | case 1: /* 64k page. */ | ||
221 | phys_addr = (desc & 0xffff0000) | (address & 0xffff); | ||
222 | xn = desc & (1 << 15); | ||
223 | - result->page_size = 0x10000; | ||
224 | + result->f.lg_page_size = 16; | ||
225 | break; | ||
226 | case 2: case 3: /* 4k page. */ | ||
227 | phys_addr = (desc & 0xfffff000) | (address & 0xfff); | ||
228 | xn = desc & 1; | ||
229 | - result->page_size = 0x1000; | ||
230 | + result->f.lg_page_size = 12; | ||
231 | break; | ||
232 | default: | ||
233 | /* Never happens, but compiler isn't smart enough to tell. */ | ||
234 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
235 | } | ||
236 | } | ||
237 | if (domain_prot == 3) { | ||
238 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
239 | + result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
240 | } else { | ||
241 | if (pxn && !regime_is_user(env, mmu_idx)) { | ||
242 | xn = 1; | ||
243 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
244 | fi->type = ARMFault_AccessFlag; | ||
245 | goto do_fault; | ||
246 | } | ||
247 | - result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1); | ||
248 | + result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap >> 1); | ||
249 | } else { | ||
250 | - result->prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
251 | + result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot); | ||
252 | } | ||
253 | - if (result->prot && !xn) { | ||
254 | - result->prot |= PAGE_EXEC; | ||
255 | + if (result->f.prot && !xn) { | ||
256 | + result->f.prot |= PAGE_EXEC; | ||
257 | } | ||
258 | - if (!(result->prot & (1 << access_type))) { | ||
259 | + if (!(result->f.prot & (1 << access_type))) { | ||
260 | /* Access permission fault. */ | ||
261 | fi->type = ARMFault_Permission; | ||
262 | goto do_fault; | ||
263 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address, | ||
264 | * the CPU doesn't support TZ or this is a non-secure translation | ||
265 | * regime, because the attribute will already be non-secure. | ||
266 | */ | ||
267 | - result->attrs.secure = false; | ||
268 | + result->f.attrs.secure = false; | ||
269 | } | ||
270 | - result->phys = phys_addr; | ||
271 | + result->f.phys_addr = phys_addr; | ||
272 | return false; | ||
273 | do_fault: | ||
274 | fi->domain = domain; | ||
275 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, | ||
276 | if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { | ||
277 | ns = mmu_idx == ARMMMUIdx_Stage2; | ||
278 | xn = extract32(attrs, 11, 2); | ||
279 | - result->prot = get_S2prot(env, ap, xn, s1_is_el0); | ||
280 | + result->f.prot = get_S2prot(env, ap, xn, s1_is_el0); | ||
281 | } else { | ||
282 | ns = extract32(attrs, 3, 1); | ||
283 | xn = extract32(attrs, 12, 1); | ||
284 | pxn = extract32(attrs, 11, 1); | ||
285 | - result->prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn); | ||
286 | + result->f.prot = get_S1prot(env, mmu_idx, aarch64, ap, ns, xn, pxn); | ||
287 | } | ||
288 | |||
289 | fault_type = ARMFault_Permission; | ||
290 | - if (!(result->prot & (1 << access_type))) { | ||
291 | + if (!(result->f.prot & (1 << access_type))) { | ||
292 | goto do_fault; | ||
293 | } | ||
294 | |||
295 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, | ||
296 | * the CPU doesn't support TZ or this is a non-secure translation | ||
297 | * regime, because the attribute will already be non-secure. | ||
298 | */ | ||
299 | - result->attrs.secure = false; | ||
300 | + result->f.attrs.secure = false; | ||
301 | } | ||
302 | /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */ | ||
303 | if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) { | ||
304 | - arm_tlb_bti_gp(&result->attrs) = true; | ||
305 | + arm_tlb_bti_gp(&result->f.attrs) = true; | ||
306 | } | ||
307 | |||
308 | if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { | ||
309 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, | ||
310 | result->cacheattrs.shareability = extract32(attrs, 6, 2); | ||
311 | } | ||
312 | |||
313 | - result->phys = descaddr; | ||
314 | - result->page_size = page_size; | ||
315 | + result->f.phys_addr = descaddr; | ||
316 | + result->f.lg_page_size = ctz64(page_size); | ||
317 | return false; | ||
318 | |||
319 | do_fault: | ||
320 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
321 | |||
322 | if (regime_translation_disabled(env, mmu_idx, is_secure)) { | ||
323 | /* MPU disabled. */ | ||
324 | - result->phys = address; | ||
325 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
326 | + result->f.phys_addr = address; | ||
327 | + result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
328 | return false; | ||
329 | } | ||
330 | |||
331 | - result->phys = address; | ||
332 | + result->f.phys_addr = address; | ||
333 | for (n = 7; n >= 0; n--) { | ||
334 | base = env->cp15.c6_region[n]; | ||
335 | if ((base & 1) == 0) { | ||
336 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
337 | fi->level = 1; | ||
338 | return true; | ||
339 | } | ||
340 | - result->prot = PAGE_READ | PAGE_WRITE; | ||
341 | + result->f.prot = PAGE_READ | PAGE_WRITE; | ||
342 | break; | ||
343 | case 2: | ||
344 | - result->prot = PAGE_READ; | ||
345 | + result->f.prot = PAGE_READ; | ||
346 | if (!is_user) { | ||
347 | - result->prot |= PAGE_WRITE; | ||
348 | + result->f.prot |= PAGE_WRITE; | ||
349 | } | ||
350 | break; | ||
351 | case 3: | ||
352 | - result->prot = PAGE_READ | PAGE_WRITE; | ||
353 | + result->f.prot = PAGE_READ | PAGE_WRITE; | ||
354 | break; | ||
355 | case 5: | ||
356 | if (is_user) { | ||
357 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
358 | fi->level = 1; | ||
359 | return true; | ||
360 | } | ||
361 | - result->prot = PAGE_READ; | ||
362 | + result->f.prot = PAGE_READ; | ||
363 | break; | ||
364 | case 6: | ||
365 | - result->prot = PAGE_READ; | ||
366 | + result->f.prot = PAGE_READ; | ||
367 | break; | ||
368 | default: | ||
369 | /* Bad permission. */ | ||
370 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address, | ||
371 | fi->level = 1; | ||
372 | return true; | ||
373 | } | ||
374 | - result->prot |= PAGE_EXEC; | ||
375 | + result->f.prot |= PAGE_EXEC; | ||
376 | return false; | ||
377 | } | ||
378 | |||
379 | static void get_phys_addr_pmsav7_default(CPUARMState *env, ARMMMUIdx mmu_idx, | ||
380 | - int32_t address, int *prot) | ||
381 | + int32_t address, uint8_t *prot) | ||
27 | { | 382 | { |
28 | int el = arm_current_el(env); | 383 | if (!arm_feature(env, ARM_FEATURE_M)) { |
29 | 384 | *prot = PAGE_READ | PAGE_WRITE; | |
30 | @@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_ns(CPUARMState *env) | 385 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, |
31 | return CP_ACCESS_OK; | 386 | int n; |
387 | bool is_user = regime_is_user(env, mmu_idx); | ||
388 | |||
389 | - result->phys = address; | ||
390 | - result->page_size = TARGET_PAGE_SIZE; | ||
391 | - result->prot = 0; | ||
392 | + result->f.phys_addr = address; | ||
393 | + result->f.lg_page_size = TARGET_PAGE_BITS; | ||
394 | + result->f.prot = 0; | ||
395 | |||
396 | if (regime_translation_disabled(env, mmu_idx, secure) || | ||
397 | m_is_ppb_region(env, address)) { | ||
398 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
399 | * which always does a direct read using address_space_ldl(), rather | ||
400 | * than going via this function, so we don't need to check that here. | ||
401 | */ | ||
402 | - get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot); | ||
403 | + get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot); | ||
404 | } else { /* MPU enabled */ | ||
405 | for (n = (int)cpu->pmsav7_dregion - 1; n >= 0; n--) { | ||
406 | /* region search */ | ||
407 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
408 | if (ranges_overlap(base, rmask, | ||
409 | address & TARGET_PAGE_MASK, | ||
410 | TARGET_PAGE_SIZE)) { | ||
411 | - result->page_size = 1; | ||
412 | + result->f.lg_page_size = 0; | ||
413 | } | ||
414 | continue; | ||
415 | } | ||
416 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
417 | continue; | ||
418 | } | ||
419 | if (rsize < TARGET_PAGE_BITS) { | ||
420 | - result->page_size = 1 << rsize; | ||
421 | + result->f.lg_page_size = rsize; | ||
422 | } | ||
423 | break; | ||
424 | } | ||
425 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
426 | fi->type = ARMFault_Background; | ||
427 | return true; | ||
428 | } | ||
429 | - get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot); | ||
430 | + get_phys_addr_pmsav7_default(env, mmu_idx, address, | ||
431 | + &result->f.prot); | ||
432 | } else { /* a MPU hit! */ | ||
433 | uint32_t ap = extract32(env->pmsav7.dracr[n], 8, 3); | ||
434 | uint32_t xn = extract32(env->pmsav7.dracr[n], 12, 1); | ||
435 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
436 | case 5: | ||
437 | break; /* no access */ | ||
438 | case 3: | ||
439 | - result->prot |= PAGE_WRITE; | ||
440 | + result->f.prot |= PAGE_WRITE; | ||
441 | /* fall through */ | ||
442 | case 2: | ||
443 | case 6: | ||
444 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
445 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
446 | break; | ||
447 | case 7: | ||
448 | /* for v7M, same as 6; for R profile a reserved value */ | ||
449 | if (arm_feature(env, ARM_FEATURE_M)) { | ||
450 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
451 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
452 | break; | ||
453 | } | ||
454 | /* fall through */ | ||
455 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
456 | case 1: | ||
457 | case 2: | ||
458 | case 3: | ||
459 | - result->prot |= PAGE_WRITE; | ||
460 | + result->f.prot |= PAGE_WRITE; | ||
461 | /* fall through */ | ||
462 | case 5: | ||
463 | case 6: | ||
464 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
465 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
466 | break; | ||
467 | case 7: | ||
468 | /* for v7M, same as 6; for R profile a reserved value */ | ||
469 | if (arm_feature(env, ARM_FEATURE_M)) { | ||
470 | - result->prot |= PAGE_READ | PAGE_EXEC; | ||
471 | + result->f.prot |= PAGE_READ | PAGE_EXEC; | ||
472 | break; | ||
473 | } | ||
474 | /* fall through */ | ||
475 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address, | ||
476 | |||
477 | /* execute never */ | ||
478 | if (xn) { | ||
479 | - result->prot &= ~PAGE_EXEC; | ||
480 | + result->f.prot &= ~PAGE_EXEC; | ||
481 | } | ||
482 | } | ||
483 | } | ||
484 | |||
485 | fi->type = ARMFault_Permission; | ||
486 | fi->level = 1; | ||
487 | - return !(result->prot & (1 << access_type)); | ||
488 | + return !(result->f.prot & (1 << access_type)); | ||
32 | } | 489 | } |
33 | 490 | ||
34 | -static CPAccessResult access_lorid(CPUARMState *env, const ARMCPRegInfo *ri, | 491 | bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, |
35 | - bool isread) | 492 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, |
36 | -{ | 493 | uint32_t addr_page_base = address & TARGET_PAGE_MASK; |
37 | - if (arm_is_secure_below_el3(env)) { | 494 | uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1); |
38 | - /* Access ok in secure mode. */ | 495 | |
39 | - return CP_ACCESS_OK; | 496 | - result->page_size = TARGET_PAGE_SIZE; |
40 | - } | 497 | - result->phys = address; |
41 | - return access_lor_ns(env); | 498 | - result->prot = 0; |
42 | -} | 499 | + result->f.lg_page_size = TARGET_PAGE_BITS; |
43 | - | 500 | + result->f.phys_addr = address; |
44 | static CPAccessResult access_lor_other(CPUARMState *env, | 501 | + result->f.prot = 0; |
45 | const ARMCPRegInfo *ri, bool isread) | 502 | if (mregion) { |
46 | { | 503 | *mregion = -1; |
47 | @@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env, | 504 | } |
48 | /* Access denied in secure mode. */ | 505 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, |
49 | return CP_ACCESS_TRAP; | 506 | ranges_overlap(base, limit - base + 1, |
50 | } | 507 | addr_page_base, |
51 | - return access_lor_ns(env); | 508 | TARGET_PAGE_SIZE)) { |
52 | + return access_lor_ns(env, ri, isread); | 509 | - result->page_size = 1; |
510 | + result->f.lg_page_size = 0; | ||
511 | } | ||
512 | continue; | ||
513 | } | ||
514 | |||
515 | if (base > addr_page_base || limit < addr_page_limit) { | ||
516 | - result->page_size = 1; | ||
517 | + result->f.lg_page_size = 0; | ||
518 | } | ||
519 | |||
520 | if (matchregion != -1) { | ||
521 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
522 | |||
523 | if (matchregion == -1) { | ||
524 | /* hit using the background region */ | ||
525 | - get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->prot); | ||
526 | + get_phys_addr_pmsav7_default(env, mmu_idx, address, &result->f.prot); | ||
527 | } else { | ||
528 | uint32_t ap = extract32(env->pmsav8.rbar[secure][matchregion], 1, 2); | ||
529 | uint32_t xn = extract32(env->pmsav8.rbar[secure][matchregion], 0, 1); | ||
530 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
531 | xn = 1; | ||
532 | } | ||
533 | |||
534 | - result->prot = simple_ap_to_rw_prot(env, mmu_idx, ap); | ||
535 | - if (result->prot && !xn && !(pxn && !is_user)) { | ||
536 | - result->prot |= PAGE_EXEC; | ||
537 | + result->f.prot = simple_ap_to_rw_prot(env, mmu_idx, ap); | ||
538 | + if (result->f.prot && !xn && !(pxn && !is_user)) { | ||
539 | + result->f.prot |= PAGE_EXEC; | ||
540 | } | ||
541 | /* | ||
542 | * We don't need to look the attribute up in the MAIR0/MAIR1 | ||
543 | @@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address, | ||
544 | |||
545 | fi->type = ARMFault_Permission; | ||
546 | fi->level = 1; | ||
547 | - return !(result->prot & (1 << access_type)); | ||
548 | + return !(result->f.prot & (1 << access_type)); | ||
53 | } | 549 | } |
54 | 550 | ||
55 | /* | 551 | static bool v8m_is_sau_exempt(CPUARMState *env, |
56 | @@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo lor_reginfo[] = { | 552 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, |
57 | .type = ARM_CP_CONST, .resetvalue = 0 }, | 553 | } else { |
58 | { .name = "LORID_EL1", .state = ARM_CP_STATE_AA64, | 554 | fi->type = ARMFault_QEMU_SFault; |
59 | .opc0 = 3, .opc1 = 0, .crn = 10, .crm = 4, .opc2 = 7, | 555 | } |
60 | - .access = PL1_R, .accessfn = access_lorid, | 556 | - result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; |
61 | + .access = PL1_R, .accessfn = access_lor_ns, | 557 | - result->phys = address; |
62 | .type = ARM_CP_CONST, .resetvalue = 0 }, | 558 | - result->prot = 0; |
63 | REGINFO_SENTINEL | 559 | + result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS; |
64 | }; | 560 | + result->f.phys_addr = address; |
561 | + result->f.prot = 0; | ||
562 | return true; | ||
563 | } | ||
564 | } else { | ||
565 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
566 | * might downgrade a secure access to nonsecure. | ||
567 | */ | ||
568 | if (sattrs.ns) { | ||
569 | - result->attrs.secure = false; | ||
570 | + result->f.attrs.secure = false; | ||
571 | } else if (!secure) { | ||
572 | /* | ||
573 | * NS access to S memory must fault. | ||
574 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
575 | * for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt(). | ||
576 | */ | ||
577 | fi->type = ARMFault_QEMU_SFault; | ||
578 | - result->page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE; | ||
579 | - result->phys = address; | ||
580 | - result->prot = 0; | ||
581 | + result->f.lg_page_size = sattrs.subpage ? 0 : TARGET_PAGE_BITS; | ||
582 | + result->f.phys_addr = address; | ||
583 | + result->f.prot = 0; | ||
584 | return true; | ||
585 | } | ||
586 | } | ||
587 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address, | ||
588 | ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, secure, | ||
589 | result, fi, NULL); | ||
590 | if (sattrs.subpage) { | ||
591 | - result->page_size = 1; | ||
592 | + result->f.lg_page_size = 0; | ||
593 | } | ||
594 | return ret; | ||
595 | } | ||
596 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address, | ||
597 | result->cacheattrs.is_s2_format = false; | ||
598 | } | ||
599 | |||
600 | - result->phys = address; | ||
601 | - result->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
602 | - result->page_size = TARGET_PAGE_SIZE; | ||
603 | + result->f.phys_addr = address; | ||
604 | + result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC; | ||
605 | + result->f.lg_page_size = TARGET_PAGE_BITS; | ||
606 | result->cacheattrs.shareability = shareability; | ||
607 | result->cacheattrs.attrs = memattr; | ||
608 | return 0; | ||
609 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
610 | return ret; | ||
611 | } | ||
612 | |||
613 | - ipa = result->phys; | ||
614 | - ipa_secure = result->attrs.secure; | ||
615 | + ipa = result->f.phys_addr; | ||
616 | + ipa_secure = result->f.attrs.secure; | ||
617 | if (is_secure) { | ||
618 | /* Select TCR based on the NS bit from the S1 walk. */ | ||
619 | s2walk_secure = !(ipa_secure | ||
620 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
621 | * Save the stage1 results so that we may merge | ||
622 | * prot and cacheattrs later. | ||
623 | */ | ||
624 | - s1_prot = result->prot; | ||
625 | + s1_prot = result->f.prot; | ||
626 | cacheattrs1 = result->cacheattrs; | ||
627 | memset(result, 0, sizeof(*result)); | ||
628 | |||
629 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
630 | fi->s2addr = ipa; | ||
631 | |||
632 | /* Combine the S1 and S2 perms. */ | ||
633 | - result->prot &= s1_prot; | ||
634 | + result->f.prot &= s1_prot; | ||
635 | |||
636 | /* If S2 fails, return early. */ | ||
637 | if (ret) { | ||
638 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
639 | * Check if IPA translates to secure or non-secure PA space. | ||
640 | * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA. | ||
641 | */ | ||
642 | - result->attrs.secure = | ||
643 | + result->f.attrs.secure = | ||
644 | (is_secure | ||
645 | && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW)) | ||
646 | && (ipa_secure | ||
647 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
648 | * cannot upgrade an non-secure translation regime's attributes | ||
649 | * to secure. | ||
650 | */ | ||
651 | - result->attrs.secure = is_secure; | ||
652 | - result->attrs.user = regime_is_user(env, mmu_idx); | ||
653 | + result->f.attrs.secure = is_secure; | ||
654 | + result->f.attrs.user = regime_is_user(env, mmu_idx); | ||
655 | |||
656 | /* | ||
657 | * Fast Context Switch Extension. This doesn't exist at all in v8. | ||
658 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
659 | |||
660 | if (arm_feature(env, ARM_FEATURE_PMSA)) { | ||
661 | bool ret; | ||
662 | - result->page_size = TARGET_PAGE_SIZE; | ||
663 | + result->f.lg_page_size = TARGET_PAGE_BITS; | ||
664 | |||
665 | if (arm_feature(env, ARM_FEATURE_V8)) { | ||
666 | /* PMSAv8 */ | ||
667 | @@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address, | ||
668 | (access_type == MMU_DATA_STORE ? "writing" : "execute"), | ||
669 | (uint32_t)address, mmu_idx, | ||
670 | ret ? "Miss" : "Hit", | ||
671 | - result->prot & PAGE_READ ? 'r' : '-', | ||
672 | - result->prot & PAGE_WRITE ? 'w' : '-', | ||
673 | - result->prot & PAGE_EXEC ? 'x' : '-'); | ||
674 | + result->f.prot & PAGE_READ ? 'r' : '-', | ||
675 | + result->f.prot & PAGE_WRITE ? 'w' : '-', | ||
676 | + result->f.prot & PAGE_EXEC ? 'x' : '-'); | ||
677 | |||
678 | return ret; | ||
679 | } | ||
680 | @@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr, | ||
681 | bool ret; | ||
682 | |||
683 | ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi); | ||
684 | - *attrs = res.attrs; | ||
685 | + *attrs = res.f.attrs; | ||
686 | |||
687 | if (ret) { | ||
688 | return -1; | ||
689 | } | ||
690 | - return res.phys; | ||
691 | + return res.f.phys_addr; | ||
692 | } | ||
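All of the page_size to lg_page_size conversions in this file are exact powers of two, so the new field just stores the exponent (computed by ctz64() in the LPAE path). Collected from the hunks above:

    16MB supersection: 0x1000000 -> 24
    1MB section:       0x100000  -> 20
    64K page:          0x10000   -> 16
    4K page:           0x1000    -> 12
    1K page:           0x400     -> 10
    MPU subpage hit:   1         -> 0

which is why the old "page_size >= TARGET_PAGE_SIZE" test becomes "lg_page_size >= TARGET_PAGE_BITS" in tlb_helper.c below.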
693 | diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c | ||
694 | index XXXXXXX..XXXXXXX 100644 | ||
695 | --- a/target/arm/tlb_helper.c | ||
696 | +++ b/target/arm/tlb_helper.c | ||
697 | @@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size, | ||
698 | * target page size are handled specially, so for those we | ||
699 | * pass in the exact addresses. | ||
700 | */ | ||
701 | - if (res.page_size >= TARGET_PAGE_SIZE) { | ||
702 | - res.phys &= TARGET_PAGE_MASK; | ||
703 | + if (res.f.lg_page_size >= TARGET_PAGE_BITS) { | ||
704 | + res.f.phys_addr &= TARGET_PAGE_MASK; | ||
705 | address &= TARGET_PAGE_MASK; | ||
706 | } | ||
707 | /* Notice and record tagged memory. */ | ||
708 | if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) { | ||
709 | - arm_tlb_mte_tagged(&res.attrs) = true; | ||
710 | + arm_tlb_mte_tagged(&res.f.attrs) = true; | ||
711 | } | ||
712 | |||
713 | - tlb_set_page_with_attrs(cs, address, res.phys, res.attrs, | ||
714 | - res.prot, mmu_idx, res.page_size); | ||
715 | + tlb_set_page_full(cs, mmu_idx, address, &res.f); | ||
716 | return true; | ||
717 | } else if (probe) { | ||
718 | return false; | ||
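The tlb_helper.c hunk above is where the refactor pays off. Instead of unpacking the result field by field,

    tlb_set_page_with_attrs(cs, address, res.phys, res.attrs,
                            res.prot, mmu_idx, res.page_size);

the code hands the whole translated page description to the core TLB in one call,

    tlb_set_page_full(cs, mmu_idx, address, &res.f);

which also means per-page state carried in the attrs (the MTE bit set just above, and the BTI bit set in ptw.c) travels along without widening any function signature.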
65 | -- | 719 | -- |
66 | 2.20.1 | 720 | 2.25.1 |
67 | |||
1 | From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> | 1 | From: Jerome Forissier <jerome.forissier@linaro.org> |
---|---|---|---|
2 | 2 | ||
3 | When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so | 3 | According to the Linux kernel booting.rst [1], CPTR_EL3.ESM and |
4 | that SVE will not trap to EL3. | 4 | SCR_EL3.EnTP2 must be initialized to 1 when EL3 is present and FEAT_SME |
5 | is advertised. This has to be taken care of when QEMU boots directly | ||
6 | into the kernel (i.e., "-M virt,secure=on -cpu max -kernel Image"). | ||
5 | 7 | ||
6 | Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com> | 8 | Cc: qemu-stable@nongnu.org |
7 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | 9 | Fixes: 78cb9776662a ("target/arm: Enable SME for -cpu max") |
8 | Message-id: 20201030151541.11976-1-remi@remlab.net | 10 | Link: [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/arm64/booting.rst?h=v6.0#n321 |
11 | Signed-off-by: Jerome Forissier <jerome.forissier@linaro.org> | ||
12 | Message-id: 20221003145641.1921467-1-jerome.forissier@linaro.org | ||
13 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
9 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 14 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
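Some background, summarised here rather than taken from the patch: with EL3 present, CPTR_EL3.ESM == 0 traps every SME instruction to EL3, and SCR_EL3.EnTP2 == 0 traps TPIDR2_EL0 accesses from lower ELs to EL3. QEMU's -kernel flow has no firmware running at EL3 to field those traps, so do_cpu_reset() must open them itself, repeating the pattern already used for SVE (CPTR_EL3.EZ) in the hunk's context lines. The recurring shape, with placeholder names; the real ones appear in the diff:

    /* sketch: per optional extension, when booting a kernel with EL3 */
    if (cpu_isar_feature(feature, cpu)) {
        /* clear the reset-to-trapping state real firmware would clear */
        env->cp15.some_trap_reg |= DONT_TRAP_TO_EL3_BIT;
    }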
10 | --- | 15 | --- |
11 | hw/arm/boot.c | 3 +++ | 16 | hw/arm/boot.c | 4 ++++ |
12 | 1 file changed, 3 insertions(+) | 17 | 1 file changed, 4 insertions(+) |
13 | 18 | ||
14 | diff --git a/hw/arm/boot.c b/hw/arm/boot.c | 19 | diff --git a/hw/arm/boot.c b/hw/arm/boot.c |
15 | index XXXXXXX..XXXXXXX 100644 | 20 | index XXXXXXX..XXXXXXX 100644 |
16 | --- a/hw/arm/boot.c | 21 | --- a/hw/arm/boot.c |
17 | +++ b/hw/arm/boot.c | 22 | +++ b/hw/arm/boot.c |
18 | @@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque) | 23 | @@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque) |
19 | if (cpu_isar_feature(aa64_mte, cpu)) { | 24 | if (cpu_isar_feature(aa64_sve, cpu)) { |
20 | env->cp15.scr_el3 |= SCR_ATA; | 25 | env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK; |
21 | } | 26 | } |
22 | + if (cpu_isar_feature(aa64_sve, cpu)) { | 27 | + if (cpu_isar_feature(aa64_sme, cpu)) { |
23 | + env->cp15.cptr_el[3] |= CPTR_EZ; | 28 | + env->cp15.cptr_el[3] |= R_CPTR_EL3_ESM_MASK; |
29 | + env->cp15.scr_el3 |= SCR_ENTP2; | ||
24 | + } | 30 | + } |
25 | /* AArch64 kernels never boot in secure mode */ | 31 | /* AArch64 kernels never boot in secure mode */ |
26 | assert(!info->secure_boot); | 32 | assert(!info->secure_boot); |
27 | /* This hook is only supported for AArch32 currently: | 33 | /* This hook is only supported for AArch32 currently: |
28 | -- | 34 | -- |
29 | 2.20.1 | 35 | 2.25.1 |
30 | |||
1 | In gicv3_init_cpuif() we copy the ARMCPU gicv3_maintenance_interrupt | 1 | Arm CPUs support some subset of the granule (page) sizes 4K, 16K and |
---|---|---|---|
2 | into the GICv3CPUState struct's maintenance_irq field. This will | 2 | 64K. The guest selects the one it wants using bits in the TCR_ELx |
3 | only work if the board happens to have already wired up the CPU | 3 | registers. If it tries to program these registers with a value that |
4 | maintenance IRQ before the GIC was realized. Unfortunately this is | 4 | is either reserved or which requests a size that the CPU does not |
5 | not the case for the 'virt' board, and so the value that gets copied | 5 | implement, the architecture requires that the CPU behaves as if the |
6 | is NULL (since a qemu_irq is really a pointer to an IRQState struct | 6 | field was programmed to some size that has been implemented. |
7 | under the hood). The effect is that the CPU interface code never | 7 | Currently we don't implement this, and instead let the guest use any |
8 | actually raises the maintenance interrupt line. | 8 | granule size, even if the CPU ID register fields say it isn't |
9 | 9 | present. | |
10 | Instead, since the GICv3CPUState has a pointer to the CPUState, make | 10 | |
11 | the dereference at the point where we want to raise the interrupt, to | 11 | Make aa64_va_parameters() check against the supported granule size |
12 | avoid an implicit requirement on board code to wire things up in a | 12 | and force use of a different one if it is not implemented. |
13 | particular order. | 13 | |
14 | 14 | (A subsequent commit will make ARMVAParameters use the new enum | |
15 | Reported-by: Jose Martins <josemartins90@gmail.com> | 15 | rather than the current pair of using16k/using64k bools.) |
16 | |||
17 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> | ||
16 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 18 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
17 | Message-id: 20201009153904.28529-1-peter.maydell@linaro.org | 19 | Message-id: 20221003162315.2833797-2-peter.maydell@linaro.org |
18 | Reviewed-by: Luc Michel <luc@lmichel.fr> | ||
19 | --- | 20 | --- |
20 | include/hw/intc/arm_gicv3_common.h | 1 - | 21 | target/arm/cpu.h | 33 +++++++++++++ |
21 | hw/intc/arm_gicv3_cpuif.c | 5 ++--- | 22 | target/arm/internals.h | 9 ++++ |
22 | 2 files changed, 2 insertions(+), 4 deletions(-) | 23 | target/arm/helper.c | 102 +++++++++++++++++++++++++++++++++++++---- |
23 | 24 | 3 files changed, 136 insertions(+), 8 deletions(-) | |
24 | diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h | 25 | |
26 | diff --git a/target/arm/cpu.h b/target/arm/cpu.h | ||
25 | index XXXXXXX..XXXXXXX 100644 | 27 | index XXXXXXX..XXXXXXX 100644 |
26 | --- a/include/hw/intc/arm_gicv3_common.h | 28 | --- a/target/arm/cpu.h |
27 | +++ b/include/hw/intc/arm_gicv3_common.h | 29 | +++ b/target/arm/cpu.h |
28 | @@ -XXX,XX +XXX,XX @@ struct GICv3CPUState { | 30 | @@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tgran16_2_lpa2(const ARMISARegisters *id) |
29 | qemu_irq parent_fiq; | 31 | return t >= 3 || (t == 0 && isar_feature_aa64_tgran16_lpa2(id)); |
30 | qemu_irq parent_virq; | 32 | } |
31 | qemu_irq parent_vfiq; | 33 | |
32 | - qemu_irq maintenance_irq; | 34 | +static inline bool isar_feature_aa64_tgran4(const ARMISARegisters *id) |
33 | 35 | +{ | |
34 | /* Redistributor */ | 36 | + return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 0; |
35 | uint32_t level; /* Current IRQ level */ | 37 | +} |
36 | diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c | 38 | + |
39 | +static inline bool isar_feature_aa64_tgran16(const ARMISARegisters *id) | ||
40 | +{ | ||
41 | + return FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16) >= 1; | ||
42 | +} | ||
43 | + | ||
44 | +static inline bool isar_feature_aa64_tgran64(const ARMISARegisters *id) | ||
45 | +{ | ||
46 | + return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64) >= 0; | ||
47 | +} | ||
48 | + | ||
49 | +static inline bool isar_feature_aa64_tgran4_2(const ARMISARegisters *id) | ||
50 | +{ | ||
51 | + unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4_2); | ||
52 | + return t >= 2 || (t == 0 && isar_feature_aa64_tgran4(id)); | ||
53 | +} | ||
54 | + | ||
55 | +static inline bool isar_feature_aa64_tgran16_2(const ARMISARegisters *id) | ||
56 | +{ | ||
57 | + unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN16_2); | ||
58 | + return t >= 2 || (t == 0 && isar_feature_aa64_tgran16(id)); | ||
59 | +} | ||
60 | + | ||
61 | +static inline bool isar_feature_aa64_tgran64_2(const ARMISARegisters *id) | ||
62 | +{ | ||
63 | + unsigned t = FIELD_EX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN64_2); | ||
64 | + return t >= 2 || (t == 0 && isar_feature_aa64_tgran64(id)); | ||
65 | +} | ||
66 | + | ||
67 | static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id) | ||
68 | { | ||
69 | return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0; | ||
70 | diff --git a/target/arm/internals.h b/target/arm/internals.h | ||
37 | index XXXXXXX..XXXXXXX 100644 | 71 | index XXXXXXX..XXXXXXX 100644 |
38 | --- a/hw/intc/arm_gicv3_cpuif.c | 72 | --- a/target/arm/internals.h |
39 | +++ b/hw/intc/arm_gicv3_cpuif.c | 73 | +++ b/target/arm/internals.h |
40 | @@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs) | 74 | @@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id) |
41 | int irqlevel = 0; | 75 | return valid; |
42 | int fiqlevel = 0; | ||
43 | int maintlevel = 0; | ||
44 | + ARMCPU *cpu = ARM_CPU(cs->cpu); | ||
45 | |||
46 | idx = hppvi_index(cs); | ||
47 | trace_gicv3_cpuif_virt_update(gicv3_redist_affid(cs), idx); | ||
48 | @@ -XXX,XX +XXX,XX @@ static void gicv3_cpuif_virt_update(GICv3CPUState *cs) | ||
49 | |||
50 | qemu_set_irq(cs->parent_vfiq, fiqlevel); | ||
51 | qemu_set_irq(cs->parent_virq, irqlevel); | ||
52 | - qemu_set_irq(cs->maintenance_irq, maintlevel); | ||
53 | + qemu_set_irq(cpu->gicv3_maintenance_interrupt, maintlevel); | ||
54 | } | 76 | } |
55 | 77 | ||
56 | static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri) | 78 | +/* Granule size (i.e. page size) */ |
57 | @@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s) | 79 | +typedef enum ARMGranuleSize { |
58 | && cpu->gic_num_lrs) { | 80 | + /* Same order as TG0 encoding */ |
59 | int j; | 81 | + Gran4K, |
60 | 82 | + Gran64K, | |
61 | - cs->maintenance_irq = cpu->gicv3_maintenance_interrupt; | 83 | + Gran16K, |
62 | - | 84 | + GranInvalid, |
63 | cs->num_list_regs = cpu->gic_num_lrs; | 85 | +} ARMGranuleSize; |
64 | cs->vpribits = cpu->gic_vpribits; | 86 | + |
65 | cs->vprebits = cpu->gic_vprebits; | 87 | /* |
88 | * Parameters of a given virtual address, as extracted from the | ||
89 | * translation control register (TCR) for a given regime. | ||
90 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
91 | index XXXXXXX..XXXXXXX 100644 | ||
92 | --- a/target/arm/helper.c | ||
93 | +++ b/target/arm/helper.c | ||
94 | @@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tcma(uint64_t tcr, ARMMMUIdx mmu_idx) | ||
95 | } | ||
96 | } | ||
97 | |||
98 | +static ARMGranuleSize tg0_to_gran_size(int tg) | ||
99 | +{ | ||
100 | + switch (tg) { | ||
101 | + case 0: | ||
102 | + return Gran4K; | ||
103 | + case 1: | ||
104 | + return Gran64K; | ||
105 | + case 2: | ||
106 | + return Gran16K; | ||
107 | + default: | ||
108 | + return GranInvalid; | ||
109 | + } | ||
110 | +} | ||
111 | + | ||
112 | +static ARMGranuleSize tg1_to_gran_size(int tg) | ||
113 | +{ | ||
114 | + switch (tg) { | ||
115 | + case 1: | ||
116 | + return Gran16K; | ||
117 | + case 2: | ||
118 | + return Gran4K; | ||
119 | + case 3: | ||
120 | + return Gran64K; | ||
121 | + default: | ||
122 | + return GranInvalid; | ||
123 | + } | ||
124 | +} | ||
125 | + | ||
126 | +static inline bool have4k(ARMCPU *cpu, bool stage2) | ||
127 | +{ | ||
128 | + return stage2 ? cpu_isar_feature(aa64_tgran4_2, cpu) | ||
129 | + : cpu_isar_feature(aa64_tgran4, cpu); | ||
130 | +} | ||
131 | + | ||
132 | +static inline bool have16k(ARMCPU *cpu, bool stage2) | ||
133 | +{ | ||
134 | + return stage2 ? cpu_isar_feature(aa64_tgran16_2, cpu) | ||
135 | + : cpu_isar_feature(aa64_tgran16, cpu); | ||
136 | +} | ||
137 | + | ||
138 | +static inline bool have64k(ARMCPU *cpu, bool stage2) | ||
139 | +{ | ||
140 | + return stage2 ? cpu_isar_feature(aa64_tgran64_2, cpu) | ||
141 | + : cpu_isar_feature(aa64_tgran64, cpu); | ||
142 | +} | ||
143 | + | ||
144 | +static ARMGranuleSize sanitize_gran_size(ARMCPU *cpu, ARMGranuleSize gran, | ||
145 | + bool stage2) | ||
146 | +{ | ||
147 | + switch (gran) { | ||
148 | + case Gran4K: | ||
149 | + if (have4k(cpu, stage2)) { | ||
150 | + return gran; | ||
151 | + } | ||
152 | + break; | ||
153 | + case Gran16K: | ||
154 | + if (have16k(cpu, stage2)) { | ||
155 | + return gran; | ||
156 | + } | ||
157 | + break; | ||
158 | + case Gran64K: | ||
159 | + if (have64k(cpu, stage2)) { | ||
160 | + return gran; | ||
161 | + } | ||
162 | + break; | ||
163 | + case GranInvalid: | ||
164 | + break; | ||
165 | + } | ||
166 | + /* | ||
167 | + * If the guest selects a granule size that isn't implemented, | ||
168 | + * the architecture requires that we behave as if it selected one | ||
169 | + * that is (with an IMPDEF choice of which one to pick). We choose | ||
170 | + * to implement the smallest supported granule size. | ||
171 | + */ | ||
172 | + if (have4k(cpu, stage2)) { | ||
173 | + return Gran4K; | ||
174 | + } | ||
175 | + if (have16k(cpu, stage2)) { | ||
176 | + return Gran16K; | ||
177 | + } | ||
178 | + assert(have64k(cpu, stage2)); | ||
179 | + return Gran64K; | ||
180 | +} | ||
181 | + | ||
182 | ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
183 | ARMMMUIdx mmu_idx, bool data) | ||
184 | { | ||
185 | uint64_t tcr = regime_tcr(env, mmu_idx); | ||
186 | bool epd, hpd, using16k, using64k, tsz_oob, ds; | ||
187 | int select, tsz, tbi, max_tsz, min_tsz, ps, sh; | ||
188 | + ARMGranuleSize gran; | ||
189 | ARMCPU *cpu = env_archcpu(env); | ||
190 | + bool stage2 = mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S; | ||
191 | |||
192 | if (!regime_has_2_ranges(mmu_idx)) { | ||
193 | select = 0; | ||
194 | tsz = extract32(tcr, 0, 6); | ||
195 | - using64k = extract32(tcr, 14, 1); | ||
196 | - using16k = extract32(tcr, 15, 1); | ||
197 | - if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) { | ||
198 | + gran = tg0_to_gran_size(extract32(tcr, 14, 2)); | ||
199 | + if (stage2) { | ||
200 | /* VTCR_EL2 */ | ||
201 | hpd = false; | ||
202 | } else { | ||
203 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
204 | select = extract64(va, 55, 1); | ||
205 | if (!select) { | ||
206 | tsz = extract32(tcr, 0, 6); | ||
207 | + gran = tg0_to_gran_size(extract32(tcr, 14, 2)); | ||
208 | epd = extract32(tcr, 7, 1); | ||
209 | sh = extract32(tcr, 12, 2); | ||
210 | - using64k = extract32(tcr, 14, 1); | ||
211 | - using16k = extract32(tcr, 15, 1); | ||
212 | hpd = extract64(tcr, 41, 1); | ||
213 | } else { | ||
214 | - int tg = extract32(tcr, 30, 2); | ||
215 | - using16k = tg == 1; | ||
216 | - using64k = tg == 3; | ||
217 | tsz = extract32(tcr, 16, 6); | ||
218 | + gran = tg1_to_gran_size(extract32(tcr, 30, 2)); | ||
219 | epd = extract32(tcr, 23, 1); | ||
220 | sh = extract32(tcr, 28, 2); | ||
221 | hpd = extract64(tcr, 42, 1); | ||
222 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
223 | ds = extract64(tcr, 59, 1); | ||
224 | } | ||
225 | |||
226 | + gran = sanitize_gran_size(cpu, gran, stage2); | ||
227 | + using64k = gran == Gran64K; | ||
228 | + using16k = gran == Gran16K; | ||
229 | + | ||
230 | if (cpu_isar_feature(aa64_st, cpu)) { | ||
231 | max_tsz = 48 - using64k; | ||
232 | } else { | ||
66 | -- | 233 | -- |
67 | 2.20.1 | 234 | 2.25.1 |
68 | |||
1 | From: Richard Henderson <richard.henderson@linaro.org> | 1 | Now we have an enum for the granule size, use it in the |
---|---|---|---|
2 | ARMVAParameters struct instead of the using16k/using64k bools. | ||
2 | 3 | ||
3 | Model these off the aa64 read/write_vec_element functions. | 4 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
4 | Use it within translate-neon.c.inc. The new functions do | 5 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
5 | not allocate or free temps, so this rearranges the calling | 6 | Message-id: 20221003162315.2833797-3-peter.maydell@linaro.org |
6 | code a bit. | 7 | --- |
8 | target/arm/internals.h | 23 +++++++++++++++++++++-- | ||
9 | target/arm/helper.c | 39 ++++++++++++++++++++++++++++----------- | ||
10 | target/arm/ptw.c | 8 +------- | ||
11 | 3 files changed, 50 insertions(+), 20 deletions(-) | ||
7 | 12 | ||
8 | Signed-off-by: Richard Henderson <richard.henderson@linaro.org> | 13 | diff --git a/target/arm/internals.h b/target/arm/internals.h |
9 | Message-id: 20201030022618.785675-6-richard.henderson@linaro.org | ||
10 | Reviewed-by: Peter Maydell <peter.maydell@linaro.org> | ||
11 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | ||
12 | --- | ||
13 | target/arm/translate.c | 26 ++++ | ||
14 | target/arm/translate-neon.c.inc | 256 ++++++++++++++++++++------------ | ||
15 | 2 files changed, 183 insertions(+), 99 deletions(-) | ||
16 | |||
17 | diff --git a/target/arm/translate.c b/target/arm/translate.c | ||
18 | index XXXXXXX..XXXXXXX 100644 | 14 | index XXXXXXX..XXXXXXX 100644 |
19 | --- a/target/arm/translate.c | 15 | --- a/target/arm/internals.h |
20 | +++ b/target/arm/translate.c | 16 | +++ b/target/arm/internals.h |
21 | @@ -XXX,XX +XXX,XX @@ static inline void neon_store_reg32(TCGv_i32 var, int reg) | 17 | @@ -XXX,XX +XXX,XX @@ typedef enum ARMGranuleSize { |
22 | tcg_gen_st_i32(var, cpu_env, vfp_reg_offset(false, reg)); | 18 | GranInvalid, |
23 | } | 19 | } ARMGranuleSize; |
24 | 20 | ||
25 | +static void read_neon_element32(TCGv_i32 dest, int reg, int ele, MemOp size) | 21 | +/** |
22 | + * arm_granule_bits: Return address size of the granule in bits | ||
23 | + * | ||
24 | + * Return the address size of the granule in bits. This corresponds | ||
25 | + * to the pseudocode TGxGranuleBits(). | ||
26 | + */ | ||
27 | +static inline int arm_granule_bits(ARMGranuleSize gran) | ||
26 | +{ | 28 | +{ |
27 | + long off = neon_element_offset(reg, ele, size); | 29 | + switch (gran) { |
28 | + | 30 | + case Gran64K: |
29 | + switch (size) { | 31 | + return 16; |
30 | + case MO_32: | 32 | + case Gran16K: |
31 | + tcg_gen_ld_i32(dest, cpu_env, off); | 33 | + return 14; |
32 | + break; | 34 | + case Gran4K: |
35 | + return 12; | ||
33 | + default: | 36 | + default: |
34 | + g_assert_not_reached(); | 37 | + g_assert_not_reached(); |
35 | + } | 38 | + } |
36 | +} | 39 | +} |
37 | + | 40 | + |
38 | +static void write_neon_element32(TCGv_i32 src, int reg, int ele, MemOp size) | 41 | /* |
42 | * Parameters of a given virtual address, as extracted from the | ||
43 | * translation control register (TCR) for a given regime. | ||
44 | @@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters { | ||
45 | bool tbi : 1; | ||
46 | bool epd : 1; | ||
47 | bool hpd : 1; | ||
48 | - bool using16k : 1; | ||
49 | - bool using64k : 1; | ||
50 | bool tsz_oob : 1; /* tsz has been clamped to legal range */ | ||
51 | bool ds : 1; | ||
52 | + ARMGranuleSize gran : 2; | ||
53 | } ARMVAParameters; | ||
54 | |||
55 | ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
56 | diff --git a/target/arm/helper.c b/target/arm/helper.c | ||
57 | index XXXXXXX..XXXXXXX 100644 | ||
58 | --- a/target/arm/helper.c | ||
59 | +++ b/target/arm/helper.c | ||
60 | @@ -XXX,XX +XXX,XX @@ typedef struct { | ||
61 | uint64_t length; | ||
62 | } TLBIRange; | ||
63 | |||
64 | +static ARMGranuleSize tlbi_range_tg_to_gran_size(int tg) | ||
39 | +{ | 65 | +{ |
40 | + long off = neon_element_offset(reg, ele, size); | 66 | + /* |
41 | + | 67 | + * Note that the TLBI range TG field encoding differs from both |
42 | + switch (size) { | 68 | + * TG0 and TG1 encodings. |
43 | + case MO_32: | 69 | + */ |
44 | + tcg_gen_st_i32(src, cpu_env, off); | 70 | + switch (tg) { |
45 | + break; | 71 | + case 1: |
72 | + return Gran4K; | ||
73 | + case 2: | ||
74 | + return Gran16K; | ||
75 | + case 3: | ||
76 | + return Gran64K; | ||
46 | + default: | 77 | + default: |
47 | + g_assert_not_reached(); | 78 | + return GranInvalid; |
48 | + } | 79 | + } |
49 | +} | 80 | +} |
50 | + | 81 | + |
51 | static TCGv_ptr vfp_reg_ptr(bool dp, int reg) | 82 | static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx, |
83 | uint64_t value) | ||
52 | { | 84 | { |
53 | TCGv_ptr ret = tcg_temp_new_ptr(); | 85 | @@ -XXX,XX +XXX,XX @@ static TLBIRange tlbi_aa64_get_range(CPUARMState *env, ARMMMUIdx mmuidx, |
54 | diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc | 86 | uint64_t select = sextract64(value, 36, 1); |
87 | ARMVAParameters param = aa64_va_parameters(env, select, mmuidx, true); | ||
88 | TLBIRange ret = { }; | ||
89 | + ARMGranuleSize gran; | ||
90 | |||
91 | page_size_granule = extract64(value, 46, 2); | ||
92 | + gran = tlbi_range_tg_to_gran_size(page_size_granule); | ||
93 | |||
94 | /* The granule encoded in value must match the granule in use. */ | ||
95 | - if (page_size_granule != (param.using64k ? 3 : param.using16k ? 2 : 1)) { | ||
96 | + if (gran != param.gran) { | ||
97 | qemu_log_mask(LOG_GUEST_ERROR, "Invalid tlbi page size granule %d\n", | ||
98 | page_size_granule); | ||
99 | return ret; | ||
100 | } | ||
101 | |||
102 | - page_shift = (page_size_granule - 1) * 2 + 12; | ||
103 | + page_shift = arm_granule_bits(gran); | ||
104 | num = extract64(value, 39, 5); | ||
105 | scale = extract64(value, 44, 2); | ||
106 | exponent = (5 * scale) + 1; | ||
107 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
108 | ARMMMUIdx mmu_idx, bool data) | ||
109 | { | ||
110 | uint64_t tcr = regime_tcr(env, mmu_idx); | ||
111 | - bool epd, hpd, using16k, using64k, tsz_oob, ds; | ||
112 | + bool epd, hpd, tsz_oob, ds; | ||
113 | int select, tsz, tbi, max_tsz, min_tsz, ps, sh; | ||
114 | ARMGranuleSize gran; | ||
115 | ARMCPU *cpu = env_archcpu(env); | ||
116 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
117 | } | ||
118 | |||
119 | gran = sanitize_gran_size(cpu, gran, stage2); | ||
120 | - using64k = gran == Gran64K; | ||
121 | - using16k = gran == Gran16K; | ||
122 | |||
123 | if (cpu_isar_feature(aa64_st, cpu)) { | ||
124 | - max_tsz = 48 - using64k; | ||
125 | + max_tsz = 48 - (gran == Gran64K); | ||
126 | } else { | ||
127 | max_tsz = 39; | ||
128 | } | ||
129 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
130 | * adjust the effective value of DS, as documented. | ||
131 | */ | ||
132 | min_tsz = 16; | ||
133 | - if (using64k) { | ||
134 | + if (gran == Gran64K) { | ||
135 | if (cpu_isar_feature(aa64_lva, cpu)) { | ||
136 | min_tsz = 12; | ||
137 | } | ||
138 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
139 | switch (mmu_idx) { | ||
140 | case ARMMMUIdx_Stage2: | ||
141 | case ARMMMUIdx_Stage2_S: | ||
142 | - if (using16k) { | ||
143 | + if (gran == Gran16K) { | ||
144 | ds = cpu_isar_feature(aa64_tgran16_2_lpa2, cpu); | ||
145 | } else { | ||
146 | ds = cpu_isar_feature(aa64_tgran4_2_lpa2, cpu); | ||
147 | } | ||
148 | break; | ||
149 | default: | ||
150 | - if (using16k) { | ||
151 | + if (gran == Gran16K) { | ||
152 | ds = cpu_isar_feature(aa64_tgran16_lpa2, cpu); | ||
153 | } else { | ||
154 | ds = cpu_isar_feature(aa64_tgran4_lpa2, cpu); | ||
155 | @@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va, | ||
156 | .tbi = tbi, | ||
157 | .epd = epd, | ||
158 | .hpd = hpd, | ||
159 | - .using16k = using16k, | ||
160 | - .using64k = using64k, | ||
161 | .tsz_oob = tsz_oob, | ||
162 | .ds = ds, | ||
163 | + .gran = gran, | ||
164 | }; | ||
165 | } | ||
166 | |||
167 | diff --git a/target/arm/ptw.c b/target/arm/ptw.c | ||
55 | index XXXXXXX..XXXXXXX 100644 | 168 | index XXXXXXX..XXXXXXX 100644 |
56 | --- a/target/arm/translate-neon.c.inc | 169 | --- a/target/arm/ptw.c |
57 | +++ b/target/arm/translate-neon.c.inc | 170 | +++ b/target/arm/ptw.c |
58 | @@ -XXX,XX +XXX,XX @@ static bool do_3same_pair(DisasContext *s, arg_3same *a, NeonGenTwoOpFn *fn) | 171 | @@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address, |
59 | * early. Since Q is 0 there are always just two passes, so instead | ||
60 | * of a complicated loop over each pass we just unroll. | ||
61 | */ | ||
62 | - tmp = neon_load_reg(a->vn, 0); | ||
63 | - tmp2 = neon_load_reg(a->vn, 1); | ||
64 | + tmp = tcg_temp_new_i32(); | ||
65 | + tmp2 = tcg_temp_new_i32(); | ||
66 | + tmp3 = tcg_temp_new_i32(); | ||
67 | + | ||
68 | + read_neon_element32(tmp, a->vn, 0, MO_32); | ||
69 | + read_neon_element32(tmp2, a->vn, 1, MO_32); | ||
70 | fn(tmp, tmp, tmp2); | ||
71 | - tcg_temp_free_i32(tmp2); | ||
72 | |||
73 | - tmp3 = neon_load_reg(a->vm, 0); | ||
74 | - tmp2 = neon_load_reg(a->vm, 1); | ||
75 | + read_neon_element32(tmp3, a->vm, 0, MO_32); | ||
76 | + read_neon_element32(tmp2, a->vm, 1, MO_32); | ||
77 | fn(tmp3, tmp3, tmp2); | ||
78 | - tcg_temp_free_i32(tmp2); | ||
79 | |||
80 | - neon_store_reg(a->vd, 0, tmp); | ||
81 | - neon_store_reg(a->vd, 1, tmp3); | ||
82 | + write_neon_element32(tmp, a->vd, 0, MO_32); | ||
83 | + write_neon_element32(tmp3, a->vd, 1, MO_32); | ||
84 | + | ||
85 | + tcg_temp_free_i32(tmp); | ||
86 | + tcg_temp_free_i32(tmp2); | ||
87 | + tcg_temp_free_i32(tmp3); | ||
88 | return true; | ||
89 | } | ||
90 | |||
91 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a, | ||
92 | * 2-reg-and-shift operations, size < 3 case, where the | ||
93 | * helper needs to be passed cpu_env. | ||
94 | */ | ||
95 | - TCGv_i32 constimm; | ||
96 | + TCGv_i32 constimm, tmp; | ||
97 | int pass; | ||
98 | |||
99 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
100 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_env_32(DisasContext *s, arg_2reg_shift *a, | ||
101 | * by immediate using the variable shift operations. | ||
102 | */ | ||
103 | constimm = tcg_const_i32(dup_const(a->size, a->shift)); | ||
104 | + tmp = tcg_temp_new_i32(); | ||
105 | |||
106 | for (pass = 0; pass < (a->q ? 4 : 2); pass++) { | ||
107 | - TCGv_i32 tmp = neon_load_reg(a->vm, pass); | ||
108 | + read_neon_element32(tmp, a->vm, pass, MO_32); | ||
109 | fn(tmp, cpu_env, tmp, constimm); | ||
110 | - neon_store_reg(a->vd, pass, tmp); | ||
111 | + write_neon_element32(tmp, a->vd, pass, MO_32); | ||
112 | } | ||
113 | + tcg_temp_free_i32(tmp); | ||
114 | tcg_temp_free_i32(constimm); | ||
115 | return true; | ||
116 | } | ||
117 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_64(DisasContext *s, arg_2reg_shift *a, | ||
118 | constimm = tcg_const_i64(-a->shift); | ||
119 | rm1 = tcg_temp_new_i64(); | ||
120 | rm2 = tcg_temp_new_i64(); | ||
121 | + rd = tcg_temp_new_i32(); | ||
122 | |||
123 | /* Load both inputs first to avoid potential overwrite if rm == rd */ | ||
124 | neon_load_reg64(rm1, a->vm); | ||
125 | neon_load_reg64(rm2, a->vm + 1); | ||
126 | |||
127 | shiftfn(rm1, rm1, constimm); | ||
128 | - rd = tcg_temp_new_i32(); | ||
129 | narrowfn(rd, cpu_env, rm1); | ||
130 | - neon_store_reg(a->vd, 0, rd); | ||
131 | + write_neon_element32(rd, a->vd, 0, MO_32); | ||
132 | |||
133 | shiftfn(rm2, rm2, constimm); | ||
134 | - rd = tcg_temp_new_i32(); | ||
135 | narrowfn(rd, cpu_env, rm2); | ||
136 | - neon_store_reg(a->vd, 1, rd); | ||
137 | + write_neon_element32(rd, a->vd, 1, MO_32); | ||
138 | |||
139 | + tcg_temp_free_i32(rd); | ||
140 | tcg_temp_free_i64(rm1); | ||
141 | tcg_temp_free_i64(rm2); | ||
142 | tcg_temp_free_i64(constimm); | ||
143 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a, | ||
144 | constimm = tcg_const_i32(imm); | ||
145 | |||
146 | /* Load all inputs first to avoid potential overwrite */ | ||
147 | - rm1 = neon_load_reg(a->vm, 0); | ||
148 | - rm2 = neon_load_reg(a->vm, 1); | ||
149 | - rm3 = neon_load_reg(a->vm + 1, 0); | ||
150 | - rm4 = neon_load_reg(a->vm + 1, 1); | ||
151 | + rm1 = tcg_temp_new_i32(); | ||
152 | + rm2 = tcg_temp_new_i32(); | ||
153 | + rm3 = tcg_temp_new_i32(); | ||
154 | + rm4 = tcg_temp_new_i32(); | ||
155 | + read_neon_element32(rm1, a->vm, 0, MO_32); | ||
156 | + read_neon_element32(rm2, a->vm, 1, MO_32); | ||
157 | + read_neon_element32(rm3, a->vm, 2, MO_32); | ||
158 | + read_neon_element32(rm4, a->vm, 3, MO_32); | ||
159 | rtmp = tcg_temp_new_i64(); | ||
160 | |||
161 | shiftfn(rm1, rm1, constimm); | ||
162 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a, | ||
163 | tcg_temp_free_i32(rm2); | ||
164 | |||
165 | narrowfn(rm1, cpu_env, rtmp); | ||
166 | - neon_store_reg(a->vd, 0, rm1); | ||
167 | + write_neon_element32(rm1, a->vd, 0, MO_32); | ||
168 | + tcg_temp_free_i32(rm1); | ||
169 | |||
170 | shiftfn(rm3, rm3, constimm); | ||
171 | shiftfn(rm4, rm4, constimm); | ||
172 | @@ -XXX,XX +XXX,XX @@ static bool do_2shift_narrow_32(DisasContext *s, arg_2reg_shift *a, | ||
173 | |||
174 | narrowfn(rm3, cpu_env, rtmp); | ||
175 | tcg_temp_free_i64(rtmp); | ||
176 | - neon_store_reg(a->vd, 1, rm3); | ||
177 | + write_neon_element32(rm3, a->vd, 1, MO_32); | ||
178 | + tcg_temp_free_i32(rm3); | ||
179 | return true; | ||
180 | } | ||
181 | |||
182 | @@ -XXX,XX +XXX,XX @@ static bool do_vshll_2sh(DisasContext *s, arg_2reg_shift *a, | ||
183 | widen_mask = dup_const(a->size + 1, widen_mask); | ||
184 | } | ||
185 | |||
186 | - rm0 = neon_load_reg(a->vm, 0); | ||
187 | - rm1 = neon_load_reg(a->vm, 1); | ||
188 | + rm0 = tcg_temp_new_i32(); | ||
189 | + rm1 = tcg_temp_new_i32(); | ||
190 | + read_neon_element32(rm0, a->vm, 0, MO_32); | ||
191 | + read_neon_element32(rm1, a->vm, 1, MO_32); | ||
192 | tmp = tcg_temp_new_i64(); | ||
193 | |||
194 | widenfn(tmp, rm0); | ||
195 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
196 | if (src1_wide) { | ||
197 | neon_load_reg64(rn0_64, a->vn); | ||
198 | } else { | ||
199 | - TCGv_i32 tmp = neon_load_reg(a->vn, 0); | ||
200 | + TCGv_i32 tmp = tcg_temp_new_i32(); | ||
201 | + read_neon_element32(tmp, a->vn, 0, MO_32); | ||
202 | widenfn(rn0_64, tmp); | ||
203 | tcg_temp_free_i32(tmp); | ||
204 | } | ||
205 | - rm = neon_load_reg(a->vm, 0); | ||
206 | + rm = tcg_temp_new_i32(); | ||
207 | + read_neon_element32(rm, a->vm, 0, MO_32); | ||
208 | |||
209 | widenfn(rm_64, rm); | ||
210 | tcg_temp_free_i32(rm); | ||
211 | @@ -XXX,XX +XXX,XX @@ static bool do_prewiden_3d(DisasContext *s, arg_3diff *a, | ||
212 | if (src1_wide) { | ||
213 | neon_load_reg64(rn1_64, a->vn + 1); | ||
214 | } else { | ||
215 | - TCGv_i32 tmp = neon_load_reg(a->vn, 1); | ||
216 | + TCGv_i32 tmp = tcg_temp_new_i32(); | ||
217 | + read_neon_element32(tmp, a->vn, 1, MO_32); | ||
218 | widenfn(rn1_64, tmp); | ||
219 | tcg_temp_free_i32(tmp); | ||
220 | } | ||
221 | - rm = neon_load_reg(a->vm, 1); | ||
222 | + rm = tcg_temp_new_i32(); | ||
223 | + read_neon_element32(rm, a->vm, 1, MO_32); | ||
224 | |||
225 | neon_store_reg64(rn0_64, a->vd); | ||
226 | |||
227 | @@ -XXX,XX +XXX,XX @@ static bool do_narrow_3d(DisasContext *s, arg_3diff *a, | ||
228 | |||
229 | narrowfn(rd1, rn_64); | ||
230 | |||
231 | - neon_store_reg(a->vd, 0, rd0); | ||
232 | - neon_store_reg(a->vd, 1, rd1); | ||
233 | + write_neon_element32(rd0, a->vd, 0, MO_32); | ||
234 | + write_neon_element32(rd1, a->vd, 1, MO_32); | ||
235 | |||
236 | + tcg_temp_free_i32(rd0); | ||
237 | + tcg_temp_free_i32(rd1); | ||
238 | tcg_temp_free_i64(rn_64); | ||
239 | tcg_temp_free_i64(rm_64); | ||
240 | |||
241 | @@ -XXX,XX +XXX,XX @@ static bool do_long_3d(DisasContext *s, arg_3diff *a, | ||
242 | rd0 = tcg_temp_new_i64(); | ||
243 | rd1 = tcg_temp_new_i64(); | ||
244 | |||
245 | - rn = neon_load_reg(a->vn, 0); | ||
246 | - rm = neon_load_reg(a->vm, 0); | ||
247 | + rn = tcg_temp_new_i32(); | ||
248 | + rm = tcg_temp_new_i32(); | ||
249 | + read_neon_element32(rn, a->vn, 0, MO_32); | ||
250 | + read_neon_element32(rm, a->vm, 0, MO_32); | ||
251 | opfn(rd0, rn, rm); | ||
252 | - tcg_temp_free_i32(rn); | ||
253 | - tcg_temp_free_i32(rm); | ||
254 | |||
255 | - rn = neon_load_reg(a->vn, 1); | ||
256 | - rm = neon_load_reg(a->vm, 1); | ||
257 | + read_neon_element32(rn, a->vn, 1, MO_32); | ||
258 | + read_neon_element32(rm, a->vm, 1, MO_32); | ||
259 | opfn(rd1, rn, rm); | ||
260 | tcg_temp_free_i32(rn); | ||
261 | tcg_temp_free_i32(rm); | ||
262 | @@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var) | ||
263 | |||
264 | static inline TCGv_i32 neon_get_scalar(int size, int reg) | ||
265 | { | ||
266 | - TCGv_i32 tmp; | ||
267 | - if (size == 1) { | ||
268 | - tmp = neon_load_reg(reg & 7, reg >> 4); | ||
269 | + TCGv_i32 tmp = tcg_temp_new_i32(); | ||
270 | + if (size == MO_16) { | ||
271 | + read_neon_element32(tmp, reg & 7, reg >> 4, MO_32); | ||
272 | if (reg & 8) { | ||
273 | gen_neon_dup_high16(tmp); | ||
274 | } else { | ||
275 | gen_neon_dup_low16(tmp); | ||
276 | } | ||
277 | } else { | ||
278 | - tmp = neon_load_reg(reg & 15, reg >> 4); | ||
279 | + read_neon_element32(tmp, reg & 15, reg >> 4, MO_32); | ||
280 | } | ||
281 | return tmp; | ||
282 | } | ||
283 | @@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a, | ||
284 | * perform an accumulation operation of that result into the | ||
285 | * destination. | ||
286 | */ | ||
287 | - TCGv_i32 scalar; | ||
288 | + TCGv_i32 scalar, tmp; | ||
289 | int pass; | ||
290 | |||
291 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
292 | @@ -XXX,XX +XXX,XX @@ static bool do_2scalar(DisasContext *s, arg_2scalar *a, | ||
293 | } | ||
294 | |||
295 | scalar = neon_get_scalar(a->size, a->vm); | ||
296 | + tmp = tcg_temp_new_i32(); | ||
297 | |||
298 | for (pass = 0; pass < (a->q ? 4 : 2); pass++) { | ||
299 | - TCGv_i32 tmp = neon_load_reg(a->vn, pass); | ||
300 | + read_neon_element32(tmp, a->vn, pass, MO_32); | ||
301 | opfn(tmp, tmp, scalar); | ||
302 | if (accfn) { | ||
303 | - TCGv_i32 rd = neon_load_reg(a->vd, pass); | ||
304 | + TCGv_i32 rd = tcg_temp_new_i32(); | ||
305 | + read_neon_element32(rd, a->vd, pass, MO_32); | ||
306 | accfn(tmp, rd, tmp); | ||
307 | tcg_temp_free_i32(rd); | ||
308 | } | ||
309 | - neon_store_reg(a->vd, pass, tmp); | ||
310 | + write_neon_element32(tmp, a->vd, pass, MO_32); | ||
311 | } | ||
312 | + tcg_temp_free_i32(tmp); | ||
313 | tcg_temp_free_i32(scalar); | ||
314 | return true; | ||
315 | } | ||
316 | @@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a, | ||
317 | * performs a kind of fused op-then-accumulate using a helper | ||
318 | * function that takes all of rd, rn and the scalar at once. | ||
319 | */ | ||
320 | - TCGv_i32 scalar; | ||
321 | + TCGv_i32 scalar, rn, rd; | ||
322 | int pass; | ||
323 | |||
324 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
325 | @@ -XXX,XX +XXX,XX @@ static bool do_vqrdmlah_2sc(DisasContext *s, arg_2scalar *a, | ||
326 | } | ||
327 | |||
328 | scalar = neon_get_scalar(a->size, a->vm); | ||
329 | + rn = tcg_temp_new_i32(); | ||
330 | + rd = tcg_temp_new_i32(); | ||
331 | |||
332 | for (pass = 0; pass < (a->q ? 4 : 2); pass++) { | ||
333 | - TCGv_i32 rn = neon_load_reg(a->vn, pass); | ||
334 | - TCGv_i32 rd = neon_load_reg(a->vd, pass); | ||
335 | + read_neon_element32(rn, a->vn, pass, MO_32); | ||
336 | + read_neon_element32(rd, a->vd, pass, MO_32); | ||
337 | opfn(rd, cpu_env, rn, scalar, rd); | ||
338 | - tcg_temp_free_i32(rn); | ||
339 | - neon_store_reg(a->vd, pass, rd); | ||
340 | + write_neon_element32(rd, a->vd, pass, MO_32); | ||
341 | } | ||
342 | + tcg_temp_free_i32(rn); | ||
343 | + tcg_temp_free_i32(rd); | ||
344 | tcg_temp_free_i32(scalar); | ||
345 | |||
346 | return true; | ||
347 | @@ -XXX,XX +XXX,XX @@ static bool do_2scalar_long(DisasContext *s, arg_2scalar *a, | ||
348 | scalar = neon_get_scalar(a->size, a->vm); | ||
349 | |||
350 | /* Load all inputs before writing any outputs, in case of overlap */ | ||
351 | - rn = neon_load_reg(a->vn, 0); | ||
352 | + rn = tcg_temp_new_i32(); | ||
353 | + read_neon_element32(rn, a->vn, 0, MO_32); | ||
354 | rn0_64 = tcg_temp_new_i64(); | ||
355 | opfn(rn0_64, rn, scalar); | ||
356 | - tcg_temp_free_i32(rn); | ||
357 | |||
358 | - rn = neon_load_reg(a->vn, 1); | ||
359 | + read_neon_element32(rn, a->vn, 1, MO_32); | ||
360 | rn1_64 = tcg_temp_new_i64(); | ||
361 | opfn(rn1_64, rn, scalar); | ||
362 | tcg_temp_free_i32(rn); | ||
363 | @@ -XXX,XX +XXX,XX @@ static bool trans_VTBL(DisasContext *s, arg_VTBL *a) | ||
364 | return false; | ||
365 | } | ||
366 | n <<= 3; | ||
367 | + tmp = tcg_temp_new_i32(); | ||
368 | if (a->op) { | ||
369 | - tmp = neon_load_reg(a->vd, 0); | ||
370 | + read_neon_element32(tmp, a->vd, 0, MO_32); | ||
371 | } else { | ||
372 | - tmp = tcg_temp_new_i32(); | ||
373 | tcg_gen_movi_i32(tmp, 0); | ||
374 | } | ||
375 | - tmp2 = neon_load_reg(a->vm, 0); | ||
376 | + tmp2 = tcg_temp_new_i32(); | ||
377 | + read_neon_element32(tmp2, a->vm, 0, MO_32); | ||
378 | ptr1 = vfp_reg_ptr(true, a->vn); | ||
379 | tmp4 = tcg_const_i32(n); | ||
380 | gen_helper_neon_tbl(tmp2, tmp2, tmp, ptr1, tmp4); | ||
381 | - tcg_temp_free_i32(tmp); | ||
382 | + | ||
383 | if (a->op) { | ||
384 | - tmp = neon_load_reg(a->vd, 1); | ||
385 | + read_neon_element32(tmp, a->vd, 1, MO_32); | ||
386 | } else { | ||
387 | - tmp = tcg_temp_new_i32(); | ||
388 | tcg_gen_movi_i32(tmp, 0); | ||
389 | } | ||
390 | - tmp3 = neon_load_reg(a->vm, 1); | ||
391 | + tmp3 = tcg_temp_new_i32(); | ||
392 | + read_neon_element32(tmp3, a->vm, 1, MO_32); | ||
393 | gen_helper_neon_tbl(tmp3, tmp3, tmp, ptr1, tmp4); | ||
394 | + tcg_temp_free_i32(tmp); | ||
395 | tcg_temp_free_i32(tmp4); | ||
396 | tcg_temp_free_ptr(ptr1); | ||
397 | - neon_store_reg(a->vd, 0, tmp2); | ||
398 | - neon_store_reg(a->vd, 1, tmp3); | ||
399 | - tcg_temp_free_i32(tmp); | ||
400 | + | ||
401 | + write_neon_element32(tmp2, a->vd, 0, MO_32); | ||
402 | + write_neon_element32(tmp3, a->vd, 1, MO_32); | ||
403 | + tcg_temp_free_i32(tmp2); | ||
404 | + tcg_temp_free_i32(tmp3); | ||
405 | return true; | ||
406 | } | ||
407 | |||
408 | @@ -XXX,XX +XXX,XX @@ static bool trans_VDUP_scalar(DisasContext *s, arg_VDUP_scalar *a) | ||
409 | static bool trans_VREV64(DisasContext *s, arg_VREV64 *a) | ||
410 | { | ||
411 | int pass, half; | ||
412 | + TCGv_i32 tmp[2]; | ||
413 | |||
414 | if (!arm_dc_feature(s, ARM_FEATURE_NEON)) { | ||
415 | return false; | ||
416 | @@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a) | ||
417 | return true; | ||
418 | } | ||
419 | |||
420 | - for (pass = 0; pass < (a->q ? 2 : 1); pass++) { | ||
421 | - TCGv_i32 tmp[2]; | ||
422 | + tmp[0] = tcg_temp_new_i32(); | ||
423 | + tmp[1] = tcg_temp_new_i32(); | ||
424 | |||
425 | + for (pass = 0; pass < (a->q ? 2 : 1); pass++) { | ||
426 | for (half = 0; half < 2; half++) { | ||
427 | - tmp[half] = neon_load_reg(a->vm, pass * 2 + half); | ||
428 | + read_neon_element32(tmp[half], a->vm, pass * 2 + half, MO_32); | ||
429 | switch (a->size) { | ||
430 | case 0: | ||
431 | tcg_gen_bswap32_i32(tmp[half], tmp[half]); | ||
432 | @@ -XXX,XX +XXX,XX @@ static bool trans_VREV64(DisasContext *s, arg_VREV64 *a) | ||
433 | g_assert_not_reached(); | ||
434 | } | ||
435 | } | ||
436 | - neon_store_reg(a->vd, pass * 2, tmp[1]); | ||
437 | - neon_store_reg(a->vd, pass * 2 + 1, tmp[0]); | ||
438 | + write_neon_element32(tmp[1], a->vd, pass * 2, MO_32); | ||
439 | + write_neon_element32(tmp[0], a->vd, pass * 2 + 1, MO_32); | ||
440 | } | ||
441 | + | ||
442 | + tcg_temp_free_i32(tmp[0]); | ||
443 | + tcg_temp_free_i32(tmp[1]); | ||
444 | return true; | ||
445 | } | ||
446 | |||
447 | @@ -XXX,XX +XXX,XX @@ static bool do_2misc_pairwise(DisasContext *s, arg_2misc *a, | ||
448 | rm0_64 = tcg_temp_new_i64(); | ||
449 | rm1_64 = tcg_temp_new_i64(); | ||
450 | rd_64 = tcg_temp_new_i64(); | ||
451 | - tmp = neon_load_reg(a->vm, pass * 2); | ||
452 | + | ||
453 | + tmp = tcg_temp_new_i32(); | ||
454 | + read_neon_element32(tmp, a->vm, pass * 2, MO_32); | ||
455 | widenfn(rm0_64, tmp); | ||
456 | - tcg_temp_free_i32(tmp); | ||
457 | - tmp = neon_load_reg(a->vm, pass * 2 + 1); | ||
458 | + read_neon_element32(tmp, a->vm, pass * 2 + 1, MO_32); | ||
459 | widenfn(rm1_64, tmp); | ||
460 | tcg_temp_free_i32(tmp); | ||
461 | + | ||
462 | opfn(rd_64, rm0_64, rm1_64); | ||
463 | tcg_temp_free_i64(rm0_64); | ||
464 | tcg_temp_free_i64(rm1_64); | ||
465 | @@ -XXX,XX +XXX,XX @@ static bool do_vmovn(DisasContext *s, arg_2misc *a, | ||
466 | narrowfn(rd0, cpu_env, rm); | ||
467 | neon_load_reg64(rm, a->vm + 1); | ||
468 | narrowfn(rd1, cpu_env, rm); | ||
469 | - neon_store_reg(a->vd, 0, rd0); | ||
470 | - neon_store_reg(a->vd, 1, rd1); | ||
471 | + write_neon_element32(rd0, a->vd, 0, MO_32); | ||
472 | + write_neon_element32(rd1, a->vd, 1, MO_32); | ||
473 | + tcg_temp_free_i32(rd0); | ||
474 | + tcg_temp_free_i32(rd1); | ||
475 | tcg_temp_free_i64(rm); | ||
476 | return true; | ||
477 | } | ||
478 | @@ -XXX,XX +XXX,XX @@ static bool trans_VSHLL(DisasContext *s, arg_2misc *a) | ||
479 | } | ||
480 | |||
481 | rd = tcg_temp_new_i64(); | ||
482 | + rm0 = tcg_temp_new_i32(); | ||
483 | + rm1 = tcg_temp_new_i32(); | ||
484 | |||
485 | - rm0 = neon_load_reg(a->vm, 0); | ||
486 | - rm1 = neon_load_reg(a->vm, 1); | ||
487 | + read_neon_element32(rm0, a->vm, 0, MO_32); | ||
488 | + read_neon_element32(rm1, a->vm, 1, MO_32); | ||
489 | |||
490 | widenfn(rd, rm0); | ||
491 | tcg_gen_shli_i64(rd, rd, 8 << a->size); | ||
492 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F16_F32(DisasContext *s, arg_2misc *a) | ||
493 | |||
494 | fpst = fpstatus_ptr(FPST_STD); | ||
495 | ahp = get_ahp_flag(); | ||
496 | - tmp = neon_load_reg(a->vm, 0); | ||
497 | + tmp = tcg_temp_new_i32(); | ||
498 | + read_neon_element32(tmp, a->vm, 0, MO_32); | ||
499 | gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp); | ||
500 | - tmp2 = neon_load_reg(a->vm, 1); | ||
501 | + tmp2 = tcg_temp_new_i32(); | ||
502 | + read_neon_element32(tmp2, a->vm, 1, MO_32); | ||
503 | gen_helper_vfp_fcvt_f32_to_f16(tmp2, tmp2, fpst, ahp); | ||
504 | tcg_gen_shli_i32(tmp2, tmp2, 16); | ||
505 | tcg_gen_or_i32(tmp2, tmp2, tmp); | ||
506 | - tcg_temp_free_i32(tmp); | ||
507 | - tmp = neon_load_reg(a->vm, 2); | ||
508 | + read_neon_element32(tmp, a->vm, 2, MO_32); | ||
509 | gen_helper_vfp_fcvt_f32_to_f16(tmp, tmp, fpst, ahp); | ||
510 | - tmp3 = neon_load_reg(a->vm, 3); | ||
511 | - neon_store_reg(a->vd, 0, tmp2); | ||
512 | + tmp3 = tcg_temp_new_i32(); | ||
513 | + read_neon_element32(tmp3, a->vm, 3, MO_32); | ||
514 | + write_neon_element32(tmp2, a->vd, 0, MO_32); | ||
515 | + tcg_temp_free_i32(tmp2); | ||
516 | gen_helper_vfp_fcvt_f32_to_f16(tmp3, tmp3, fpst, ahp); | ||
517 | tcg_gen_shli_i32(tmp3, tmp3, 16); | ||
518 | tcg_gen_or_i32(tmp3, tmp3, tmp); | ||
519 | - neon_store_reg(a->vd, 1, tmp3); | ||
520 | + write_neon_element32(tmp3, a->vd, 1, MO_32); | ||
521 | + tcg_temp_free_i32(tmp3); | ||
522 | tcg_temp_free_i32(tmp); | ||
523 | tcg_temp_free_i32(ahp); | ||
524 | tcg_temp_free_ptr(fpst); | ||
525 | @@ -XXX,XX +XXX,XX @@ static bool trans_VCVT_F32_F16(DisasContext *s, arg_2misc *a) | ||
526 | fpst = fpstatus_ptr(FPST_STD); | ||
527 | ahp = get_ahp_flag(); | ||
528 | tmp3 = tcg_temp_new_i32(); | ||
529 | - tmp = neon_load_reg(a->vm, 0); | ||
530 | - tmp2 = neon_load_reg(a->vm, 1); | ||
531 | + tmp2 = tcg_temp_new_i32(); | ||
532 | + tmp = tcg_temp_new_i32(); | ||
533 | + read_neon_element32(tmp, a->vm, 0, MO_32); | ||
534 | + read_neon_element32(tmp2, a->vm, 1, MO_32); | ||
535 | tcg_gen_ext16u_i32(tmp3, tmp); | ||
536 | gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp); | ||
537 | - neon_store_reg(a->vd, 0, tmp3); | ||
538 | + write_neon_element32(tmp3, a->vd, 0, MO_32); | ||
539 | tcg_gen_shri_i32(tmp, tmp, 16); | ||
540 | gen_helper_vfp_fcvt_f16_to_f32(tmp, tmp, fpst, ahp); | ||
541 | - neon_store_reg(a->vd, 1, tmp); | ||
542 | - tmp3 = tcg_temp_new_i32(); | ||
543 | + write_neon_element32(tmp, a->vd, 1, MO_32); | ||
544 | + tcg_temp_free_i32(tmp); | ||
545 | tcg_gen_ext16u_i32(tmp3, tmp2); | ||
546 | gen_helper_vfp_fcvt_f16_to_f32(tmp3, tmp3, fpst, ahp); | ||
547 | - neon_store_reg(a->vd, 2, tmp3); | ||
548 | + write_neon_element32(tmp3, a->vd, 2, MO_32); | ||
549 | + tcg_temp_free_i32(tmp3); | ||
550 | tcg_gen_shri_i32(tmp2, tmp2, 16); | ||
551 | gen_helper_vfp_fcvt_f16_to_f32(tmp2, tmp2, fpst, ahp); | ||
552 | - neon_store_reg(a->vd, 3, tmp2); | ||
553 | + write_neon_element32(tmp2, a->vd, 3, MO_32); | ||
554 | + tcg_temp_free_i32(tmp2); | ||
555 | tcg_temp_free_i32(ahp); | ||
556 | tcg_temp_free_ptr(fpst); | ||
557 | |||
558 | @@ -XXX,XX +XXX,XX @@ DO_2M_CRYPTO(SHA256SU0, aa32_sha2, 2) | ||
559 | |||
560 | static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn) | ||
561 | { | ||
562 | + TCGv_i32 tmp; | ||
563 | int pass; | ||
564 | |||
565 | /* Handle a 2-reg-misc operation by iterating 32 bits at a time */ | ||
566 | @@ -XXX,XX +XXX,XX @@ static bool do_2misc(DisasContext *s, arg_2misc *a, NeonGenOneOpFn *fn) | ||
567 | return true; | ||
568 | } | ||
569 | |||
570 | + tmp = tcg_temp_new_i32(); | ||
571 | for (pass = 0; pass < (a->q ? 4 : 2); pass++) { | ||
572 | - TCGv_i32 tmp = neon_load_reg(a->vm, pass); | ||
573 | + read_neon_element32(tmp, a->vm, pass, MO_32); | ||
574 | fn(tmp, tmp); | ||
575 | - neon_store_reg(a->vd, pass, tmp); | ||
576 | + write_neon_element32(tmp, a->vd, pass, MO_32); | ||
577 | } | ||
578 | + tcg_temp_free_i32(tmp); | ||
579 | |||
580 | return true; | ||
581 | } | ||
582 | @@ -XXX,XX +XXX,XX @@ static bool trans_VTRN(DisasContext *s, arg_2misc *a) | ||
583 | return true; | ||
584 | } | ||
585 | |||
586 | - if (a->size == 2) { | ||
587 | + tmp = tcg_temp_new_i32(); | ||
588 | + tmp2 = tcg_temp_new_i32(); | ||
589 | + if (a->size == MO_32) { | ||
590 | for (pass = 0; pass < (a->q ? 4 : 2); pass += 2) { | ||
591 | - tmp = neon_load_reg(a->vm, pass); | ||
592 | - tmp2 = neon_load_reg(a->vd, pass + 1); | ||
593 | - neon_store_reg(a->vm, pass, tmp2); | ||
594 | - neon_store_reg(a->vd, pass + 1, tmp); | ||
595 | + read_neon_element32(tmp, a->vm, pass, MO_32); | ||
596 | + read_neon_element32(tmp2, a->vd, pass + 1, MO_32); | ||
597 | + write_neon_element32(tmp2, a->vm, pass, MO_32); | ||
598 | + write_neon_element32(tmp, a->vd, pass + 1, MO_32); | ||
599 | } | ||
600 | } else { | ||
601 | for (pass = 0; pass < (a->q ? 4 : 2); pass++) { | ||
602 | - tmp = neon_load_reg(a->vm, pass); | ||
603 | - tmp2 = neon_load_reg(a->vd, pass); | ||
604 | - if (a->size == 0) { | ||
605 | + read_neon_element32(tmp, a->vm, pass, MO_32); | ||
606 | + read_neon_element32(tmp2, a->vd, pass, MO_32); | ||
607 | + if (a->size == MO_8) { | ||
608 | gen_neon_trn_u8(tmp, tmp2); | ||
609 | } else { | ||
610 | gen_neon_trn_u16(tmp, tmp2); | ||
611 | } | ||
612 | - neon_store_reg(a->vm, pass, tmp2); | ||
613 | - neon_store_reg(a->vd, pass, tmp); | ||
614 | + write_neon_element32(tmp2, a->vm, pass, MO_32); | ||
615 | + write_neon_element32(tmp, a->vd, pass, MO_32); | ||
616 | } | 172 | } |
617 | } | 173 | } |
618 | + tcg_temp_free_i32(tmp); | 174 | |
619 | + tcg_temp_free_i32(tmp2); | 175 | - if (param.using64k) { |
620 | return true; | 176 | - stride = 13; |
621 | } | 177 | - } else if (param.using16k) { |
178 | - stride = 11; | ||
179 | - } else { | ||
180 | - stride = 9; | ||
181 | - } | ||
182 | + stride = arm_granule_bits(param.gran) - 3; | ||
183 | |||
184 | /* | ||
185 | * Note that QEMU ignores shareability and cacheability attributes, | ||
622 | -- | 186 | -- |
623 | 2.20.1 | 187 | 2.25.1 |
624 | |||
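The conversions above all follow one pattern, sketched below. This is a paraphrase of the do_2misc() hunk, not new code from the series: a single TCG temporary is allocated before the per-pass loop, read_neon_element32()/write_neon_element32() move values between it and the Neon register file with an explicit element size, and the temporary is freed once afterwards, where the old neon_load_reg() allocated a fresh temporary on every pass.

```c
/*
 * Sketch of the recurring idiom from the hunks above (QEMU translator
 * context assumed, i.e. DisasContext, arg_2misc and NeonGenOneOpFn as
 * declared in translate-neon.c.inc).
 */
TCGv_i32 tmp = tcg_temp_new_i32();
int pass;

for (pass = 0; pass < (a->q ? 4 : 2); pass++) {
    read_neon_element32(tmp, a->vm, pass, MO_32);   /* load one element */
    fn(tmp, tmp);                                   /* operate in place */
    write_neon_element32(tmp, a->vd, pass, MO_32);  /* store the result */
}
tcg_temp_free_i32(tmp);
```

The stride hunk in the second column encodes the arithmetic behind the old constants: a page-table descriptor is 8 bytes, so a granule of 2^n bytes holds 2^(n-3) descriptors per level. For the change to be behaviour-preserving, arm_granule_bits() must return 16, 14 and 12 for the 64K, 16K and 4K granules, giving back the previous strides of 13, 11 and 9.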
1 | On some hosts (e.g. Ubuntu Bionic) pkg-config returns a set of | 1 | FEAT_GTG is a change to the ID register ID_AA64MMFR0_EL1 so that it |
---|---|---|---|
2 | libraries for gio-2.0 which don't actually work when compiling | 2 | can report a different set of supported granule (page) sizes for |
3 | statically. (Specifically, the returned library string includes | 3 | stage 1 and stage 2 translation tables. As of commit c20281b2a5048 |
4 | -lmount, but not -lblkid which -lmount depends upon, so linking | 4 | we already report the granule sizes that way for '-cpu max', and now |
5 | fails due to missing symbols.) | 5 | we also correctly make attempts to use unimplemented granule sizes |
6 | fail, so we can report the support of the feature in the | ||
7 | documentation. | ||
6 | 8 | ||
7 | Check that the libraries work, and don't enable gio if they don't, | 9 | Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
8 | in the same way we do for gnutls. | 10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> |
11 | Message-id: 20221003162315.2833797-4-peter.maydell@linaro.org | ||
12 | --- | ||
13 | docs/system/arm/emulation.rst | 1 + | ||
14 | 1 file changed, 1 insertion(+) | ||
9 | 15 | ||
10 | Signed-off-by: Peter Maydell <peter.maydell@linaro.org> | 16 | diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst |
11 | Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> | 17 | index XXXXXXX..XXXXXXX 100644 |
12 | Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> | 18 | --- a/docs/system/arm/emulation.rst |
13 | Message-id: 20200928160402.7961-1-peter.maydell@linaro.org | 19 | +++ b/docs/system/arm/emulation.rst |
14 | --- | 20 | @@ -XXX,XX +XXX,XX @@ the following architecture extensions: |
15 | configure | 10 +++++++++- | 21 | - FEAT_FRINTTS (Floating-point to integer instructions) |
16 | 1 file changed, 9 insertions(+), 1 deletion(-) | 22 | - FEAT_FlagM (Flag manipulation instructions v2) |
17 | 23 | - FEAT_FlagM2 (Enhancements to flag manipulation instructions) | |
18 | diff --git a/configure b/configure | 24 | +- FEAT_GTG (Guest translation granule size) |
19 | index XXXXXXX..XXXXXXX 100755 | 25 | - FEAT_HCX (Support for the HCRX_EL2 register) |
20 | --- a/configure | 26 | - FEAT_HPDS (Hierarchical permission disables) |
21 | +++ b/configure | 27 | - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions) |
22 | @@ -XXX,XX +XXX,XX @@ if test "$static" = yes && test "$mingw32" = yes; then | ||
23 | fi | ||
24 | |||
25 | if $pkg_config --atleast-version=$glib_req_ver gio-2.0; then | ||
26 | - gio=yes | ||
27 | gio_cflags=$($pkg_config --cflags gio-2.0) | ||
28 | gio_libs=$($pkg_config --libs gio-2.0) | ||
29 | gdbus_codegen=$($pkg_config --variable=gdbus_codegen gio-2.0) | ||
30 | if [ ! -x "$gdbus_codegen" ]; then | ||
31 | gdbus_codegen= | ||
32 | fi | ||
33 | + # Check that the libraries actually work -- Ubuntu 18.04 ships | ||
34 | + # with pkg-config --static --libs data for gio-2.0 that is missing | ||
35 | + # -lblkid and will give a link error. | ||
36 | + write_c_skeleton | ||
37 | + if compile_prog "" "$gio_libs" ; then | ||
38 | + gio=yes | ||
39 | + else | ||
40 | + gio=no | ||
41 | + fi | ||
42 | else | ||
43 | gio=no | ||
44 | fi | ||
45 | -- | 28 | -- |
46 | 2.20.1 | 29 | 2.25.1 |
47 | |||
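For reference, a minimal sketch of what the new configure check builds and links. The conftest source is assumed to match what write_c_skeleton emits (an empty main()), and the compiler invocation in the comment is illustrative rather than configure's exact command line:

```c
/*
 * Illustrative conftest: an empty program whose only purpose is to
 * exercise the link step, roughly
 *
 *     cc conftest.c $(pkg-config --static --libs gio-2.0) -o conftest
 *
 * On an affected host the reported libs include -lmount but not
 * -lblkid, the link fails with undefined references, and configure
 * falls back to gio=no instead of breaking the build later.
 */
int main(void)
{
    return 0;
}
```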