As promised, another pullreq... This one's mostly RTH's patches.

thanks
-- PMM

The following changes since commit 784c2e4f232adf5ef47a84a262ec72a07d068d6a:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2018-10-19 15:30:40 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181019

for you to fetch changes up to 88c9add25e7120e8622796c81ad3f3fb7f8d40e7:

  target/arm: Only flush tlb if ASID changes (2018-10-19 17:38:48 +0100)

----------------------------------------------------------------
target-arm queue:
 * ssi-sd: Make devices picking up backends unavailable with -device
 * Add support for VCPU event states
 * Move towards making ID registers the source of truth for
   whether a guest CPU implements a feature, rather than having
   parallel ID registers and feature bit flags (a sketch of the
   new style follows the diffstat below)
 * Implement various HCR hypervisor trap/config bits
 * Get IL bit correct for v7 syndrome values
 * Report correct syndrome for FP/SIMD traps to Hyp mode
 * hw/arm/boot: Increase compliance with kernel arm64 boot protocol
 * Refactor A32 Neon to use generic vector infrastructure
 * Fix a bug in A32 VLD2 "(multiple 2-element structures)" insn
 * net: cadence_gem: Report features correctly in ID register
 * Avoid some unnecessary TLB flushes on TTBR register writes

----------------------------------------------------------------
Dongjiu Geng (1):
      target/arm: Add support for VCPU event states

Edgar E. Iglesias (2):
      net: cadence_gem: Announce availability of priority queues
      net: cadence_gem: Announce 64bit addressing support

Markus Armbruster (1):
      ssi-sd: Make devices picking up backends unavailable with -device

Peter Maydell (10):
      target/arm: Improve debug logging of AArch32 exception return
      target/arm: Make switch_mode() file-local
      target/arm: Implement HCR.FB
      target/arm: Implement HCR.DC
      target/arm: ISR_EL1 bits track virtual interrupts if IMO/FMO set
      target/arm: Implement HCR.VI and VF
      target/arm: Implement HCR.PTW
      target/arm: New utility function to extract EC from syndrome
      target/arm: Get IL bit correct for v7 syndrome values
      target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode

Richard Henderson (30):
      target/arm: Move some system registers into a substructure
      target/arm: V8M should not imply V7VE
      target/arm: Convert v8 extensions from feature bits to isar tests
      target/arm: Convert division from feature bits to isar0 tests
      target/arm: Convert jazelle from feature bit to isar1 test
      target/arm: Convert t32ee from feature bit to isar3 test
      target/arm: Convert sve from feature bit to aa64pfr0 test
      target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
      target/arm: Hoist address increment for vector memory ops
      target/arm: Don't call tcg_clear_temp_count
      target/arm: Use tcg_gen_gvec_dup_i64 for LD[1-4]R
      target/arm: Promote consecutive memory ops for aa64
      target/arm: Mark some arrays const
      target/arm: Use gvec for NEON VDUP
      target/arm: Use gvec for NEON VMOV, VMVN, VBIC & VORR (immediate)
      target/arm: Use gvec for NEON_3R_LOGIC insns
      target/arm: Use gvec for NEON_3R_VADD_VSUB insns
      target/arm: Use gvec for NEON_2RM_VMN, NEON_2RM_VNEG
      target/arm: Use gvec for NEON_3R_VMUL
      target/arm: Use gvec for VSHR, VSHL
      target/arm: Use gvec for VSRA
      target/arm: Use gvec for VSRI, VSLI
      target/arm: Use gvec for NEON_3R_VML
      target/arm: Use gvec for NEON_3R_VTST_VCEQ, NEON_3R_VCGT, NEON_3R_VCGE
      target/arm: Use gvec for NEON VLD all lanes
      target/arm: Reorg NEON VLD/VST all elements
      target/arm: Promote consecutive memory ops for aa32
      target/arm: Reorg NEON VLD/VST single element to one lane
      target/arm: Remove writefn from TTBR0_EL3
      target/arm: Only flush tlb if ASID changes

Stewart Hildebrand (1):
      hw/arm/boot: Increase compliance with kernel arm64 boot protocol

 target/arm/cpu.h            |  227 ++++++-
 target/arm/internals.h      |   45 +-
 target/arm/kvm_arm.h        |   24 +
 target/arm/translate.h      |   21 +
 hw/arm/boot.c               |   18 +
 hw/intc/armv7m_nvic.c       |   12 +-
 hw/net/cadence_gem.c        |    9 +-
 hw/sd/ssi-sd.c              |    2 +
 linux-user/aarch64/signal.c |    4 +-
 linux-user/elfload.c        |   60 +-
 linux-user/syscall.c        |   10 +-
 target/arm/cpu.c            |  242 ++++----
 target/arm/cpu64.c          |  148 +++--
 target/arm/helper.c         |  397 ++++++++----
 target/arm/kvm.c            |   60 ++
 target/arm/kvm32.c          |   13 +
 target/arm/kvm64.c          |   15 +-
 target/arm/machine.c        |   28 +-
 target/arm/op_helper.c      |    2 +-
 target/arm/translate-a64.c  |  715 ++++-----------------
 target/arm/translate.c      | 1451 ++++++++++++++++++++++++++++---------------
 21 files changed, 2021 insertions(+), 1482 deletions(-)
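A quick sketch of the "ID registers as source of truth" direction, for
readers who want the flavour before diving into the series.  This is an
illustration modelled on the conversion patches, not code quoted from
them; the FIELD declaration and the helper name are assumptions:

/* Sketch only: a feature test backed by an ID register field rather
 * than an ARM_FEATURE_* flag.  Assumes FIELD(ID_ISAR5, AES, 4, 4) is
 * declared and that the new ARMISARegisters substructure carries the
 * id_isar5 value. */
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
{
    /* ID_ISAR5.AES == 0 means the AES instructions are absent */
    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
}

Callers then ask what the CPU's ID register says instead of testing a
hand-maintained feature bit, so ID values supplied by KVM give the
right answer automatically.
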
From: Markus Armbruster <armbru@redhat.com>

Device models aren't supposed to go on fishing expeditions for
backends.  They should expose suitable properties for the user to set.
For onboard devices, board code sets them.

Device ssi-sd picks up its block backend in its init() method with
drive_get_next() instead.  This mistake is already marked FIXME since
commit af9e40a.

Unset user_creatable to remove the mistake from our external
interface.  Since the SSI bus doesn't support hotplug, only -device
can be affected.  Only certain ARM machines have ssi-sd and provide an
SSI bus for it; this patch breaks -device ssi-sd for these machines.
No actual use of -device ssi-sd is known.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20181009060835.4608-1-armbru@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/ssi-sd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/ssi-sd.c
+++ b/hw/sd/ssi-sd.c
@@ -XXX,XX +XXX,XX @@ static void ssi_sd_class_init(ObjectClass *klass, void *data)
     k->cs_polarity = SSI_CS_LOW;
     dc->vmsd = &vmstate_ssi_sd;
     dc->reset = ssi_sd_reset;
+    /* Reason: init() method uses drive_get_next() */
+    dc->user_creatable = false;
 }

 static const TypeInfo ssi_sd_info = {
--
2.19.1
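For contrast with the FIXME being fenced off above: the pattern the
commit message prefers is a block property that the user or board code
sets explicitly.  A hypothetical sketch only; in particular the 'blk'
field is an assumption and does not exist in today's ssi_sd_state:

/* Hypothetical alternative, not part of this patch: expose the
 * backend as a property instead of calling drive_get_next(). */
static Property ssi_sd_properties[] = {
    DEFINE_PROP_DRIVE("drive", ssi_sd_state, blk),  /* 'blk' is assumed */
    DEFINE_PROP_END_OF_LIST(),
};

/* ...and in ssi_sd_class_init():  dc->props = ssi_sd_properties; */

Until something like that is done, unsetting user_creatable keeps the
broken -device path out of the external interface.
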
From: Dongjiu Geng <gengdongjiu@huawei.com>

This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the yet-missing SError
exception.  It also supports migration of the exception state.

The SError exception state consists of the SError pending state and
the ESR value; kvm_put/get_vcpu_events() is called when system
registers are set or read.  On migration, if the source machine has an
SError pending, QEMU migrates it regardless of whether the target
machine supports specifying a guest ESR value, because a target
without that support can still inject the SError with a zero ESR
value.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     |  7 ++++++
 target/arm/kvm_arm.h | 24 ++++++++++++++++++
 target/arm/kvm.c     | 60 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/kvm32.c   | 13 ++++++++++
 target/arm/kvm64.c   | 13 ++++++++++
 target/arm/machine.c | 22 ++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
      */
     } exception;

+    /* Information associated with an SError */
+    struct {
+        uint8_t pending;
+        uint8_t has_esr;
+        uint64_t esr;
+    } serror;
+
     /* Thumb-2 EE state. */
     uint32_t teecr;
     uint32_t teehbr;
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm_arm.h
+++ b/target/arm/kvm_arm.h
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu);
  */
 void kvm_arm_reset_vcpu(ARMCPU *cpu);

+/**
+ * kvm_arm_init_serror_injection:
+ * @cs: CPUState
+ *
+ * Check whether KVM can set guest SError syndrome.
+ */
+void kvm_arm_init_serror_injection(CPUState *cs);
+
+/**
+ * kvm_get_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Get VCPU related state from kvm.
+ */
+int kvm_get_vcpu_events(ARMCPU *cpu);
+
+/**
+ * kvm_put_vcpu_events:
+ * @cpu: ARMCPU
+ *
+ * Put VCPU related state to kvm.
+ */
+int kvm_put_vcpu_events(ARMCPU *cpu);
+
 #ifdef CONFIG_KVM
 /**
  * kvm_arm_create_scratch_host_vcpu:
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
 };

 static bool cap_has_mp_state;
+static bool cap_has_inject_serror_esr;

 static ARMHostCPUFeatures arm_host_cpu_features;

@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
     return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
 }

+void kvm_arm_init_serror_injection(CPUState *cs)
+{
+    cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
+                                    KVM_CAP_ARM_INJECT_SERROR_ESR);
+}
+
 bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
                                       int *fdarray,
                                       struct kvm_vcpu_init *init)
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
     return 0;
 }

+int kvm_put_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    events.exception.serror_pending = env->serror.pending;
+
+    /* Inject SError to guest with specified syndrome if host kernel
+     * supports it, otherwise inject SError without syndrome.
+     */
+    if (cap_has_inject_serror_esr) {
+        events.exception.serror_has_esr = env->serror.has_esr;
+        events.exception.serror_esr = env->serror.esr;
+    }
+
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to put vcpu events");
+    }
+
+    return ret;
+}
+
+int kvm_get_vcpu_events(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    struct kvm_vcpu_events events;
+    int ret;
+
+    if (!kvm_has_vcpu_events()) {
+        return 0;
+    }
+
+    memset(&events, 0, sizeof(events));
+    ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
+    if (ret) {
+        error_report("failed to get vcpu events");
+        return ret;
+    }
+
+    env->serror.pending = events.exception.serror_pending;
+    env->serror.has_esr = events.exception.serror_has_esr;
+    env->serror.esr = events.exception.serror_esr;
+
+    return 0;
+}
+
 void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
 {
 }
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm32.c
+++ b/target/arm/kvm32.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
     }
     cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;

+    /* Check whether userspace can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }

@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }

+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     /* Note that we do not call write_cpustate_to_list()
      * here, so we are only writing the tuple list back to
      * KVM. This is safe because nothing can change the
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpscr(env, fpscr);

+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)

     kvm_arm_init_debug(cs);

+    /* Check whether user space can specify guest syndrome value */
+    kvm_arm_init_serror_injection(cs);
+
     return kvm_arm_init_cpreg_list(cpu);
 }

@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
         return ret;
     }

+    ret = kvm_put_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_list_to_kvmstate(cpu, level)) {
         return EINVAL;
     }
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
     }
     vfp_set_fpcr(env, fpr);

+    ret = kvm_get_vcpu_events(cpu);
+    if (ret) {
+        return ret;
+    }
+
     if (!write_kvmstate_to_list(cpu)) {
         return EINVAL;
     }
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
 };
 #endif /* AARCH64 */

+static bool serror_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+    CPUARMState *env = &cpu->env;
+
+    return env->serror.pending != 0;
+}
+
+static const VMStateDescription vmstate_serror = {
+    .name = "cpu/serror",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = serror_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT8(env.serror.pending, ARMCPU),
+        VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
+        VMSTATE_UINT64(env.serror.esr, ARMCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static bool m_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
 #ifdef TARGET_AARCH64
         &vmstate_sve,
 #endif
+        &vmstate_serror,
         NULL
     }
 };
--
2.19.1
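For context on what the two new ioctl wrappers exchange: the arm64 UAPI
layout below reflects my reading of the 4.19-era kernel headers; check
it against <asm/kvm.h> rather than trusting this mail:

/* arm64 struct kvm_vcpu_events as assumed here; the patch only
 * touches the 'exception' member. */
struct kvm_vcpu_events {
    struct {
        __u8  serror_pending;   /* mirrors env->serror.pending */
        __u8  serror_has_esr;   /* needs KVM_CAP_ARM_INJECT_SERROR_ESR */
        __u8  pad[6];           /* align serror_esr to 8 bytes */
        __u64 serror_esr;       /* mirrors env->serror.esr */
    } exception;
    __u32 reserved[12];
};

The capability guard in kvm_put_vcpu_events() matters because, as I
understand it, a kernel without KVM_CAP_ARM_INJECT_SERROR_ESR rejects
a nonzero serror_has_esr, while a zero-ESR SError can always be
injected.
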
From: Richard Henderson <richard.henderson@linaro.org>

Create struct ARMISARegisters, to be accessed during translation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      |  32 ++++----
 hw/intc/armv7m_nvic.c |  12 +--
 target/arm/cpu.c      | 178 +++++++++++++++++++++---------------------
 target/arm/cpu64.c    |  70 ++++++++---------
 target/arm/helper.c   |  28 +++----
 5 files changed, 162 insertions(+), 158 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
      * ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
      * is used for reset values of non-constant registers; no reset_
      * prefix means a constant register.
+     * Some of these registers are split out into a substructure that
+     * is shared with the translators to control the ISA.
      */
+    struct ARMISARegisters {
+        uint32_t id_isar0;
+        uint32_t id_isar1;
+        uint32_t id_isar2;
+        uint32_t id_isar3;
+        uint32_t id_isar4;
+        uint32_t id_isar5;
+        uint32_t id_isar6;
+        uint32_t mvfr0;
+        uint32_t mvfr1;
+        uint32_t mvfr2;
+        uint64_t id_aa64isar0;
+        uint64_t id_aa64isar1;
+        uint64_t id_aa64pfr0;
+        uint64_t id_aa64pfr1;
+    } isar;
     uint32_t midr;
     uint32_t revidr;
     uint32_t reset_fpsid;
-    uint32_t mvfr0;
-    uint32_t mvfr1;
-    uint32_t mvfr2;
     uint32_t ctr;
     uint32_t reset_sctlr;
     uint32_t id_pfr0;
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint32_t id_mmfr2;
     uint32_t id_mmfr3;
     uint32_t id_mmfr4;
-    uint32_t id_isar0;
-    uint32_t id_isar1;
-    uint32_t id_isar2;
-    uint32_t id_isar3;
-    uint32_t id_isar4;
-    uint32_t id_isar5;
-    uint32_t id_isar6;
-    uint64_t id_aa64pfr0;
-    uint64_t id_aa64pfr1;
     uint64_t id_aa64dfr0;
     uint64_t id_aa64dfr1;
     uint64_t id_aa64afr0;
     uint64_t id_aa64afr1;
-    uint64_t id_aa64isar0;
-    uint64_t id_aa64isar1;
     uint64_t id_aa64mmfr0;
     uint64_t id_aa64mmfr1;
     uint32_t dbgdidr;
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     case 0xd5c: /* MMFR3. */
         return cpu->id_mmfr3;
     case 0xd60: /* ISAR0. */
-        return cpu->id_isar0;
+        return cpu->isar.id_isar0;
     case 0xd64: /* ISAR1. */
-        return cpu->id_isar1;
+        return cpu->isar.id_isar1;
     case 0xd68: /* ISAR2. */
-        return cpu->id_isar2;
+        return cpu->isar.id_isar2;
     case 0xd6c: /* ISAR3. */
-        return cpu->id_isar3;
+        return cpu->isar.id_isar3;
     case 0xd70: /* ISAR4. */
-        return cpu->id_isar4;
+        return cpu->isar.id_isar4;
     case 0xd74: /* ISAR5. */
-        return cpu->id_isar5;
+        return cpu->isar.id_isar5;
     case 0xd78: /* CLIDR */
         return cpu->clidr;
     case 0xd7c: /* CTR */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
     g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);

     env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
-    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
-    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
-    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
+    env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
+    env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
+    env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;

     cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
     s->halted = cpu->start_powered_off;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
          */
         cpu->id_pfr1 &= ~0xf0;
-        cpu->id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 &= ~0xf000;
     }

     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * registers if we don't have EL2. These are id_pfr1[15:12] and
          * id_aa64pfr0_el1[11:8].
          */
-        cpu->id_aa64pfr0 &= ~0xf00;
+        cpu->isar.id_aa64pfr0 &= ~0xf00;
         cpu->id_pfr1 &= ~0xf000;
     }

@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4107b362;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }

@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
     cpu->midr = 0x4117b363;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222110;
-    cpu->id_isar0 = 0x00140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231111;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231111;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 7;
 }

@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fb767;
     cpu->reset_fpsid = 0x410120b5;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00050078;
     cpu->id_pfr0 = 0x111;
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01130003;
     cpu->id_mmfr1 = 0x10030302;
     cpu->id_mmfr2 = 0x01222100;
-    cpu->id_isar0 = 0x0140011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11231121;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x01141;
+    cpu->isar.id_isar0 = 0x0140011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11231121;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x01141;
     cpu->reset_auxcr = 7;
 }

@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     cpu->midr = 0x410fb022;
     cpu->reset_fpsid = 0x410120b4;
-    cpu->mvfr0 = 0x11111111;
-    cpu->mvfr1 = 0x00000000;
+    cpu->isar.mvfr0 = 0x11111111;
+    cpu->isar.mvfr1 = 0x00000000;
     cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
     cpu->id_pfr0 = 0x111;
     cpu->id_pfr1 = 0x1;
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
     cpu->id_mmfr0 = 0x01100103;
     cpu->id_mmfr1 = 0x10020302;
     cpu->id_mmfr2 = 0x01222000;
-    cpu->id_isar0 = 0x00100011;
-    cpu->id_isar1 = 0x12002111;
-    cpu->id_isar2 = 0x11221011;
-    cpu->id_isar3 = 0x01102131;
-    cpu->id_isar4 = 0x141;
+    cpu->isar.id_isar0 = 0x00100011;
+    cpu->isar.id_isar1 = 0x12002111;
+    cpu->isar.id_isar2 = 0x11221011;
+    cpu->isar.id_isar3 = 0x01102131;
+    cpu->isar.id_isar4 = 0x141;
     cpu->reset_auxcr = 1;
 }

@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }

 static void cortex_m4_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x00000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01141110;
-    cpu->id_isar1 = 0x02111000;
-    cpu->id_isar2 = 0x21112231;
-    cpu->id_isar3 = 0x01111110;
-    cpu->id_isar4 = 0x01310102;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01141110;
+    cpu->isar.id_isar1 = 0x02111000;
+    cpu->isar.id_isar2 = 0x21112231;
+    cpu->isar.id_isar3 = 0x01111110;
+    cpu->isar.id_isar4 = 0x01310102;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
 }

 static void cortex_m33_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01000000;
     cpu->id_mmfr3 = 0x00000000;
-    cpu->id_isar0 = 0x01101110;
-    cpu->id_isar1 = 0x02212000;
-    cpu->id_isar2 = 0x20232232;
-    cpu->id_isar3 = 0x01111131;
-    cpu->id_isar4 = 0x01310132;
-    cpu->id_isar5 = 0x00000000;
-    cpu->id_isar6 = 0x00000000;
+    cpu->isar.id_isar0 = 0x01101110;
+    cpu->isar.id_isar1 = 0x02212000;
+    cpu->isar.id_isar2 = 0x20232232;
+    cpu->isar.id_isar3 = 0x01111131;
+    cpu->isar.id_isar4 = 0x01310132;
+    cpu->isar.id_isar5 = 0x00000000;
+    cpu->isar.id_isar6 = 0x00000000;
     cpu->clidr = 0x00000000;
     cpu->ctr = 0x8000c000;
 }
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     cpu->id_mmfr1 = 0x00000000;
     cpu->id_mmfr2 = 0x01200000;
     cpu->id_mmfr3 = 0x0211;
-    cpu->id_isar0 = 0x02101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232141;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x0010142;
-    cpu->id_isar5 = 0x0;
-    cpu->id_isar6 = 0x0;
+    cpu->isar.id_isar0 = 0x02101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232141;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x0010142;
+    cpu->isar.id_isar5 = 0x0;
+    cpu->isar.id_isar6 = 0x0;
     cpu->mp_is_up = true;
     cpu->pmsav7_dregion = 16;
     define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
     cpu->reset_fpsid = 0x410330c0;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x00011111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x00011111;
     cpu->ctr = 0x82048004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01202000;
     cpu->id_mmfr3 = 0x11;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x12112111;
-    cpu->id_isar2 = 0x21232031;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x12112111;
+    cpu->isar.id_isar2 = 0x21232031;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x15141000;
     cpu->clidr = (1 << 27) | (2 << 24) | 3;
     cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_CBAR);
     cpu->midr = 0x410fc090;
     cpu->reset_fpsid = 0x41033090;
-    cpu->mvfr0 = 0x11110222;
-    cpu->mvfr1 = 0x01111111;
+    cpu->isar.mvfr0 = 0x11110222;
+    cpu->isar.mvfr1 = 0x01111111;
     cpu->ctr = 0x80038003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x1031;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01230000;
     cpu->id_mmfr3 = 0x00002111;
-    cpu->id_isar0 = 0x00101111;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x00111142;
+    cpu->isar.id_isar0 = 0x00101111;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x00111142;
     cpu->dbgdidr = 0x35141000;
     cpu->clidr = (1 << 27) | (1 << 24) | 3;
     cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
     cpu->midr = 0x410fc075;
     cpu->reset_fpsid = 0x41023075;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x84448003;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     /* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
      * table 4-41 gives 0x02101110, which includes the arm div insns.
      */
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f005;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
     cpu->midr = 0x412fc0f1;
     cpu->reset_fpsid = 0x410430f0;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x11111111;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x11111111;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50078;
     cpu->id_pfr0 = 0x00001131;
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->id_mmfr1 = 0x20000000;
     cpu->id_mmfr2 = 0x01240000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232041;
-    cpu->id_isar3 = 0x11112131;
-    cpu->id_isar4 = 0x10011142;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232041;
+    cpu->isar.id_isar3 = 0x11112131;
+    cpu->isar.id_isar4 = 0x10011142;
     cpu->dbgdidr = 0x3515f021;
     cpu->clidr = 0x0a200023;
     cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->midr = 0x411fd070;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->midr = 0x410fd034;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034070;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x84448004; /* L1Ip = VIPT */
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_isar6 = 0;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->midr = 0x410fd083;
     cpu->revidr = 0x00000000;
     cpu->reset_fpsid = 0x41034080;
-    cpu->mvfr0 = 0x10110222;
-    cpu->mvfr1 = 0x12111111;
-    cpu->mvfr2 = 0x00000043;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
     cpu->ctr = 0x8444c004;
     cpu->reset_sctlr = 0x00c50838;
     cpu->id_pfr0 = 0x00000131;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->id_mmfr1 = 0x40000000;
     cpu->id_mmfr2 = 0x01260000;
     cpu->id_mmfr3 = 0x02102211;
-    cpu->id_isar0 = 0x02101110;
-    cpu->id_isar1 = 0x13112111;
-    cpu->id_isar2 = 0x21232042;
-    cpu->id_isar3 = 0x01112131;
-    cpu->id_isar4 = 0x00011142;
-    cpu->id_isar5 = 0x00011121;
-    cpu->id_aa64pfr0 = 0x00002222;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
     cpu->pmceid0 = 0x00000000;
     cpu->pmceid1 = 0x00000000;
-    cpu->id_aa64isar0 = 0x00011120;
+    cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
     cpu->clidr = 0x0a200023;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
 static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
-    uint64_t pfr0 = cpu->id_aa64pfr0;
+    uint64_t pfr0 = cpu->isar.id_aa64pfr0;

     if (env->gicv3state) {
         pfr0 |= 1 << 24;
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar0 },
+              .resetvalue = cpu->isar.id_isar0 },
             { .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar1 },
+              .resetvalue = cpu->isar.id_isar1 },
             { .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar2 },
+              .resetvalue = cpu->isar.id_isar2 },
             { .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar3 },
+              .resetvalue = cpu->isar.id_isar3 },
             { .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar4 },
+              .resetvalue = cpu->isar.id_isar4 },
             { .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar5 },
+              .resetvalue = cpu->isar.id_isar5 },
             { .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_isar6 },
+              .resetvalue = cpu->isar.id_isar6 },
             REGINFO_SENTINEL
         };
         define_arm_cp_regs(cpu, v6_idregs);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64pfr1},
+              .resetvalue = cpu->isar.id_aa64pfr1},
             { .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar0 },
+              .resetvalue = cpu->isar.id_aa64isar0 },
             { .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->id_aa64isar1 },
+              .resetvalue = cpu->isar.id_aa64isar1 },
             { .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr0 },
+              .resetvalue = cpu->isar.mvfr0 },
             { .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr1 },
+              .resetvalue = cpu->isar.mvfr1 },
             { .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
-              .resetvalue = cpu->mvfr2 },
+              .resetvalue = cpu->isar.mvfr2 },
             { .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
--
2.19.1
175
669
176
670
From: Richard Henderson <richard.henderson@linaro.org>

Instantiating mps2-an505 (cortex-m33) will fail make check when
V7VE asserts that ID_ISAR0.Divide includes ARM division. It is
also wrong to include ARM_FEATURE_LPAE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
 
     /* Some features automatically imply others: */
     if (arm_feature(env, ARM_FEATURE_V8)) {
-        set_feature(env, ARM_FEATURE_V7VE);
+        if (arm_feature(env, ARM_FEATURE_M)) {
+            set_feature(env, ARM_FEATURE_V7);
+        } else {
+            set_feature(env, ARM_FEATURE_V7VE);
+        }
     }
     if (arm_feature(env, ARM_FEATURE_V7VE)) {
         /* v7 Virtualization Extensions. In real hardware this implies
--
2.19.1
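For context on the hunk above: arm_feature() and set_feature() are plain
bitmask helpers over a 64-bit CPUARMState::features word, so "feature A
implies feature B" is just an extra bit set at realize time. A minimal
sketch of the idiom (paraphrased from target/arm/cpu.h and cpu.c of this
era; illustrative, not part of the patch):

    /* Each ARM_FEATURE_* enum value names one bit in env->features. */
    static inline void set_feature(CPUARMState *env, int feature)
    {
        env->features |= 1ULL << feature;    /* mark the feature present */
    }

    static inline int arm_feature(CPUARMState *env, int feature)
    {
        return (env->features & (1ULL << feature)) != 0;
    }

With the change above, a v8 M-profile core such as cortex-m33 picks up
only the V7 bit, so none of the V7VE implications (LPAE, ARM-encoding
division) are applied to it.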
From: Richard Henderson <richard.henderson@linaro.org>

Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 131 +++++++++++++++++++++++++++++++++----
 target/arm/translate.h     |   7 ++
 linux-user/elfload.c       |  46 ++++++++-----
 target/arm/cpu.c           |  27 +++++---
 target/arm/cpu64.c         |  57 +++++++++-------
 target/arm/translate-a64.c | 101 ++++++++++++++--------------
 target/arm/translate.c     |  36 +++++-----
 7 files changed, 273 insertions(+), 132 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
     PSCI_ON_PENDING = 2
 } ARMPSCIState;
 
+typedef struct ARMISARegisters ARMISARegisters;
+
 /**
  * ARMCPU:
  * @env: #CPUARMState
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_LPAE, /* has Large Physical Address Extension */
     ARM_FEATURE_V8,
     ARM_FEATURE_AARCH64, /* supports 64 bit mode */
-    ARM_FEATURE_V8_AES, /* implements AES part of v8 Crypto Extensions */
     ARM_FEATURE_CBAR, /* has cp15 CBAR */
     ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
     ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
     ARM_FEATURE_EL2, /* has EL2 Virtualization support */
     ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
-    ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
     ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
     ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
     ARM_FEATURE_SVE, /* has Scalable Vector Extension */
-    ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
-    ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
-    ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
-    ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
 
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c. */
 extern const uint64_t pred_esz_masks[4];
 
+/*
+ * 32-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
+}
+
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
+}
+
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
+}
+
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
+}
+
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
+}
+
+/*
+ * 64-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
+}
+
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
+}
+
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
+}
+
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
+}
+
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
+}
+
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
+}
+
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
+}
+
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
+}
+
+/*
+ * Forward to the above feature tests given an ARMCPU pointer.
+ */
+#define cpu_isar_feature(name, cpu) \
+    ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
+
 #endif
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@
 /* internal defines */
 typedef struct DisasContext {
     DisasContextBase base;
+    const ARMISARegisters *isar;
 
     target_ulong pc;
     target_ulong page_start;
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
     return ret;
 }
 
+/*
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
+ */
+#define dc_isar_feature(name, ctx) \
+    ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
+
 #endif /* TARGET_ARM_TRANSLATE_H */
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     /* probe for the extra features */
 #define GET_FEATURE(feat, hwcap) \
     do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
+
+#define GET_FEATURE_ID(feat, hwcap) \
+    do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
+
     /* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
     GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
     GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
     ARMCPU *cpu = ARM_CPU(thread_cpu);
     uint32_t hwcaps = 0;
 
-    GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
-    GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
-    GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
-    GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
-    GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
+    GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
+    GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
+    GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
+    GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
+    GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
     return hwcaps;
 }
 
 #undef GET_FEATURE
+#undef GET_FEATURE_ID
 
 #else
 /* 64 bit ARM definitions */
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     /* probe for the extra features */
 #define GET_FEATURE(feat, hwcap) \
     do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
-    GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
-    GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
-    GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
-    GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
-    GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
-    GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
-    GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
-    GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
-    GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
+#define GET_FEATURE_ID(feat, hwcap) \
+    do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
+
+    GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
+    GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
+    GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
+    GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
+    GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
+    GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
+    GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
+    GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
+    GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
     GET_FEATURE(ARM_FEATURE_V8_FP16,
                 ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
-    GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
-    GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
-    GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
-    GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
+    GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
+    GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
+    GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
+    GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
     GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
+
 #undef GET_FEATURE
+#undef GET_FEATURE_ID
 
     return hwcaps;
 }
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
         cortex_a15_initfn(obj);
 #ifdef CONFIG_USER_ONLY
         /* We don't set these in system emulation mode for the moment,
-         * since we don't correctly set the ID registers to advertise them,
+         * since we don't correctly set (all of) the ID registers to
+         * advertise them.
          */
         set_feature(&cpu->env, ARM_FEATURE_V8);
-        set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-        set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-        set_feature(&cpu->env, ARM_FEATURE_CRC);
-        set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
-        set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
-        set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
+        {
+            uint32_t t;
+
+            t = cpu->isar.id_isar5;
+            t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+            t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+            t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+            t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+            t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+            t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+            cpu->isar.id_isar5 = t;
+
+            t = cpu->isar.id_isar6;
+            t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+            cpu->isar.id_isar6 = t;
+        }
 #endif
     }
 }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     if (kvm_enabled()) {
         kvm_arm_set_cpu_features_from_host(cpu);
     } else {
+        uint64_t t;
+        uint32_t u;
         aarch64_a57_initfn(obj);
+
+        t = cpu->isar.id_aa64isar0;
+        t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
+        t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
+        t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
+        cpu->isar.id_aa64isar0 = t;
+
+        t = cpu->isar.id_aa64isar1;
+        t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
+        cpu->isar.id_aa64isar1 = t;
+
+        /* Replicate the same data to the 32-bit id registers. */
+        u = cpu->isar.id_isar5;
+        u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
+        u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
+        u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
+        u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
+        u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
+        u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
+        cpu->isar.id_isar5 = u;
+
+        u = cpu->isar.id_isar6;
+        u = FIELD_DP32(u, ID_ISAR6, DP, 1);
+        cpu->isar.id_isar6 = u;
+
 #ifdef CONFIG_USER_ONLY
         /* We don't set these in system emulation mode for the moment,
          * since we don't correctly set the ID registers to advertise them,
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
          * whereas the architecture requires them to be present in both if
          * present in either.
          */
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
-        set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
-        set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
-        set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
         set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
-        set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
         set_feature(&cpu->env, ARM_FEATURE_SVE);
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         }
         if (rt2 == 31
             && ((rt | rs) & 1) == 0
-            && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+            && dc_isar_feature(aa64_atomics, s)) {
             /* CASP / CASPL */
             gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         }
         if (rt2 == 31
             && ((rt | rs) & 1) == 0
-            && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+            && dc_isar_feature(aa64_atomics, s)) {
             /* CASPA / CASPAL */
             gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
     case 0xb: /* CASL */
     case 0xe: /* CASA */
     case 0xf: /* CASAL */
-        if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+        if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
             gen_compare_and_swap(s, rs, rt, rn, size);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
     int rs = extract32(insn, 16, 5);
     int rn = extract32(insn, 5, 5);
     int o3_opc = extract32(insn, 12, 4);
-    int feature = ARM_FEATURE_V8_ATOMICS;
     TCGv_i64 tcg_rn, tcg_rs;
     AtomicThreeOpFn *fn;
 
-    if (is_vector) {
+    if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
-        unallocated_encoding(s);
-        return;
-    }
 
     if (rn == 31) {
         gen_check_sp_alignment(s);
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
     TCGv_i64 tcg_acc, tcg_val;
     TCGv_i32 tcg_bytes;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_CRC)
+    if (!dc_isar_feature(aa64_crc32, s)
         || (sf == 1 && sz != 3)
         || (sf == 0 && sz == 3)) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
     bool u = extract32(insn, 29, 1);
     TCGv_i32 ele1, ele2, ele3;
     TCGv_i64 res;
-    int feature;
+    bool feature;
 
     switch (u * 16 + opcode) {
     case 0x10: /* SQRDMLAH (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_RDM;
+        feature = dc_isar_feature(aa64_rdm, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
         return;
     }
     if (size == 3) {
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
+        if (!dc_isar_feature(aa64_pmull, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
     int size = extract32(insn, 22, 2);
     bool u = extract32(insn, 29, 1);
     bool is_q = extract32(insn, 30, 1);
-    int feature, rot;
+    bool feature;
+    int rot;
 
     switch (u * 16 + opcode) {
     case 0x10: /* SQRDMLAH (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_RDM;
+        feature = dc_isar_feature(aa64_rdm, s);
         break;
     case 0x02: /* SDOT (vector) */
     case 0x12: /* UDOT (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_DOTPROD;
+        feature = dc_isar_feature(aa64_dp, s);
         break;
     case 0x18: /* FCMLA, #0 */
     case 0x19: /* FCMLA, #90 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_FCMA;
+        feature = dc_isar_feature(aa64_fcma, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         break;
     case 0x1d: /* SQRDMLAH */
     case 0x1f: /* SQRDMLSH */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
+        if (!dc_isar_feature(aa64_rdm, s)) {
             unallocated_encoding(s);
             return;
         }
         break;
     case 0x0e: /* SDOT */
     case 0x1e: /* UDOT */
-        if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+        if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case 0x13: /* FCMLA #90 */
     case 0x15: /* FCMLA #180 */
     case 0x17: /* FCMLA #270 */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
+        if (!dc_isar_feature(aa64_fcma, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
     TCGv_i32 tcg_decrypt;
     CryptoThreeOpIntFn *genfn;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
-        || size != 0) {
+    if (!dc_isar_feature(aa64_aes, s) || size != 0) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
     int rd = extract32(insn, 0, 5);
     CryptoThreeOpFn *genfn;
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
-    int feature = ARM_FEATURE_V8_SHA256;
+    bool feature;
 
     if (size != 0) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
     case 2: /* SHA1M */
     case 3: /* SHA1SU0 */
         genfn = NULL;
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         break;
     case 4: /* SHA256H */
         genfn = gen_helper_crypto_sha256h;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     case 5: /* SHA256H2 */
         genfn = gen_helper_crypto_sha256h2;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     case 6: /* SHA256SU1 */
         genfn = gen_helper_crypto_sha256su1;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
 
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
     CryptoTwoOpFn *genfn;
-    int feature;
+    bool feature;
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
 
     if (size != 0) {
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
 
     switch (opcode) {
     case 0: /* SHA1H */
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         genfn = gen_helper_crypto_sha1h;
         break;
     case 1: /* SHA1SU1 */
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         genfn = gen_helper_crypto_sha1su1;
         break;
     case 2: /* SHA256SU0 */
-        feature = ARM_FEATURE_V8_SHA256;
+        feature = dc_isar_feature(aa64_sha256, s);
         genfn = gen_helper_crypto_sha256su0;
         break;
     default:
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
         return;
     }
 
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
     int rm = extract32(insn, 16, 5);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    int feature;
+    bool feature;
     CryptoThreeOpFn *genfn;
 
     if (o == 0) {
         switch (opcode) {
         case 0: /* SHA512H */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512h;
             break;
         case 1: /* SHA512H2 */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512h2;
             break;
         case 2: /* SHA512SU1 */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512su1;
             break;
         case 3: /* RAX1 */
-            feature = ARM_FEATURE_V8_SHA3;
+            feature = dc_isar_feature(aa64_sha3, s);
             genfn = NULL;
             break;
         }
     } else {
         switch (opcode) {
         case 0: /* SM3PARTW1 */
-            feature = ARM_FEATURE_V8_SM3;
+            feature = dc_isar_feature(aa64_sm3, s);
             genfn = gen_helper_crypto_sm3partw1;
             break;
         case 1: /* SM3PARTW2 */
-            feature = ARM_FEATURE_V8_SM3;
+            feature = dc_isar_feature(aa64_sm3, s);
             genfn = gen_helper_crypto_sm3partw2;
             break;
         case 2: /* SM4EKEY */
-            feature = ARM_FEATURE_V8_SM4;
+            feature = dc_isar_feature(aa64_sm4, s);
             genfn = gen_helper_crypto_sm4ekey;
             break;
         default:
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
         }
     }
 
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
-    int feature;
+    bool feature;
     CryptoTwoOpFn *genfn;
 
     switch (opcode) {
     case 0: /* SHA512SU0 */
-        feature = ARM_FEATURE_V8_SHA512;
+        feature = dc_isar_feature(aa64_sha512, s);
         genfn = gen_helper_crypto_sha512su0;
         break;
     case 1: /* SM4E */
-        feature = ARM_FEATURE_V8_SM4;
+        feature = dc_isar_feature(aa64_sm4, s);
         genfn = gen_helper_crypto_sm4e;
         break;
     default:
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
         return;
     }
 
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
     int ra = extract32(insn, 10, 5);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    int feature;
+    bool feature;
 
     switch (op0) {
     case 0: /* EOR3 */
     case 1: /* BCAX */
-        feature = ARM_FEATURE_V8_SHA3;
+        feature = dc_isar_feature(aa64_sha3, s);
         break;
     case 2: /* SM3SS1 */
-        feature = ARM_FEATURE_V8_SM3;
+        feature = dc_isar_feature(aa64_sm3, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
 
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
     TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
     int pass;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
+    if (!dc_isar_feature(aa64_sha3, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
     TCGv_i32 tcg_imm2, tcg_opcode;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
+    if (!dc_isar_feature(aa64_sm3, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     ARMCPU *arm_cpu = arm_env_get_cpu(env);
     int bound;
 
+    dc->isar = &arm_cpu->isar;
     dc->pc = dc->base.pc_first;
     dc->condjmp = 0;
 
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
 static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
                          int q, int rd, int rn, int rm)
 {
-    if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
+    if (dc_isar_feature(aa32_rdm, s)) {
         int opr_sz = (1 + q) * 8;
         tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
                            vfp_reg_offset(1, rn),
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             return 1;
         }
         if (!u) { /* SHA-1 */
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
+            if (!dc_isar_feature(aa32_sha1, s)) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
             tcg_temp_free_i32(tmp4);
         } else { /* SHA-256 */
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
+            if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         if (op == 14 && size == 2) {
             TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
 
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
+            if (!dc_isar_feature(aa32_pmull, s)) {
                 return 1;
             }
             tcg_rn = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         {
             NeonGenThreeOpEnvFn *fn;
 
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
+            if (!dc_isar_feature(aa32_rdm, s)) {
                 return 1;
             }
             if (u && ((rd | rn) & 1)) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             break;
         }
         case NEON_2RM_AESE: case NEON_2RM_AESMC:
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
-                || ((rm | rd) & 1)) {
+            if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tcg_temp_free_i32(tmp3);
             break;
         case NEON_2RM_SHA1H:
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
-                || ((rm | rd) & 1)) {
+            if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             }
             /* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
             if (q) {
-                if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
+                if (!dc_isar_feature(aa32_sha2, s)) {
                     return 1;
                 }
-            } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
+            } else if (!dc_isar_feature(aa32_sha1, s)) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
         /* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
         int size = extract32(insn, 20, 1);
         data = extract32(insn, 23, 2); /* rot */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
+        if (!dc_isar_feature(aa32_vcma, s)
             || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
             return 1;
         }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
         /* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
         int size = extract32(insn, 20, 1);
         data = extract32(insn, 24, 1); /* rot */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
+        if (!dc_isar_feature(aa32_vcma, s)
             || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
             return 1;
         }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
     } else if ((insn & 0xfeb00f00) == 0xfc200d00) {
         /* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
         bool u = extract32(insn, 4, 1);
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+        if (!dc_isar_feature(aa32_dp, s)) {
             return 1;
         }
         fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
         int size = extract32(insn, 23, 1);
         int index;
 
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
+        if (!dc_isar_feature(aa32_vcma, s)) {
             return 1;
         }
         if (size == 0) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
     } else if ((insn & 0xffb00f00) == 0xfe200d00) {
         /* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
         int u = extract32(insn, 4, 1);
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+        if (!dc_isar_feature(aa32_dp, s)) {
             return 1;
         }
         fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
              * op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
              * Bits 8, 10 and 11 should be zero.
              */
-            if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
-                (c & 0xd) != 0) {
+            if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
                 goto illegal_op;
             }
 
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
             case 0x28:
             case 0x29:
             case 0x2a:
-                if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
+                if (!dc_isar_feature(aa32_crc32, s)) {
                     goto illegal_op;
                 }
                 break;
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     CPUARMState *env = cs->env_ptr;
     ARMCPU *cpu = arm_env_get_cpu(env);
 
+    dc->isar = &cpu->isar;
     dc->pc = dc->base.pc_first;
     dc->condjmp = 0;
 
--
2.19.1
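The pay-off of the conversion above is that translate-time checks and the
guest-visible ID registers can no longer drift apart: both read the same
ARMISARegisters fields. A minimal sketch of how a decoder gates an
instruction under the new scheme (the function name is invented for
illustration; dc_isar_feature and isar_feature_aa32_dp are the helpers
defined in this patch, and returning 1 means UNDEF by the conventions of
the disas_neon_* routines above):

    /* Hypothetical decode fragment: refuse the insn unless ID_ISAR6.DP
     * advertises the dot product instructions.
     */
    static int disas_example_dot_insn(DisasContext *s, uint32_t insn)
    {
        if (!dc_isar_feature(aa32_dp, s)) {
            return 1;    /* not advertised in ID_ISAR6: UNDEF */
        }
        /* ... emit the gvec ops for the dot product here ... */
        return 0;
    }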
From: Richard Henderson <richard.henderson@linaro.org>

Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case. Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 12 ++++++++++--
 linux-user/elfload.c   |  4 ++--
 target/arm/cpu.c       | 10 +---------
 target/arm/translate.c |  4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_VFP3,
     ARM_FEATURE_VFP_FP16,
     ARM_FEATURE_NEON,
-    ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
     ARM_FEATURE_M, /* Microcontroller profile. */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
     ARM_FEATURE_THUMB2EE,
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_V5,
     ARM_FEATURE_STRONGARM,
     ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
-    ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
     ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
     ARM_FEATURE_GENERIC_TIMER,
     ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
 /*
  * 32-bit feature tests via id registers.
  */
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
+}
+
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
     GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
-    GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
-    GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
+    GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
+    GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
     /* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
      * Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
      * ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * Presence of EL2 itself is ARM_FEATURE_EL2, and of the
          * Security Extensions is ARM_FEATURE_EL3.
          */
-        set_feature(env, ARM_FEATURE_ARM_DIV);
+        assert(cpu_isar_feature(arm_div, cpu));
         set_feature(env, ARM_FEATURE_LPAE);
         set_feature(env, ARM_FEATURE_V7);
     }
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     if (arm_feature(env, ARM_FEATURE_V5)) {
         set_feature(env, ARM_FEATURE_V4T);
     }
-    if (arm_feature(env, ARM_FEATURE_M)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
-    if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
-        set_feature(env, ARM_FEATURE_THUMB_DIV);
-    }
     if (arm_feature(env, ARM_FEATURE_VFP4)) {
         set_feature(env, ARM_FEATURE_VFP3);
         set_feature(env, ARM_FEATURE_VFP_FP16);
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
     ARMCPU *cpu = ARM_CPU(obj);
 
     set_feature(&cpu->env, ARM_FEATURE_V7);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
-    set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
    set_feature(&cpu->env, ARM_FEATURE_V7MP);
     set_feature(&cpu->env, ARM_FEATURE_PMSA);
     cpu->midr = 0x411fc153; /* r1p3 */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                 case 1:
                 case 3:
                     /* SDIV, UDIV */
-                    if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
+                    if (!dc_isar_feature(arm_div, s)) {
                         goto illegal_op;
                     }
                     if (((insn >> 5) & 7) || (rd != 15)) {
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
                 tmp2 = load_reg(s, rm);
                 if ((op & 0x50) == 0x10) {
                     /* sdiv, udiv */
-                    if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
+                    if (!dc_isar_feature(thumb_div, s)) {
                         goto illegal_op;
                     }
                     if (op & 0x20)
--
2.19.1
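The conversion above swaps boolean feature flags for tests on the architected ID register fields, so the emulated feature set follows whatever ID register values the CPU model advertises. The shape of such a test is just a field extract and compare. The following standalone sketch is illustrative only, not QEMU code: the struct, helper and sample register values are invented here, assuming ID_ISAR0.Divide occupies bits [27:24] as described in the ARM ARM.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Invented stand-in for the per-CPU bundle of ID register values. */
typedef struct {
    uint32_t id_isar0;
} ExampleISARegs;

/* Extract a bitfield, like a hand-rolled FIELD_EX32(). */
static uint32_t field_ex32(uint32_t reg, int shift, int len)
{
    return (reg >> shift) & ((1u << len) - 1);
}

/* ID_ISAR0.Divide: 1 means Thumb SDIV/UDIV, 2 means A32 as well. */
static bool example_arm_div(const ExampleISARegs *id)
{
    return field_ex32(id->id_isar0, 24, 4) > 1;
}

int main(void)
{
    ExampleISARegs with_div = { .id_isar0 = 2u << 24 };  /* Divide == 2 */
    ExampleISARegs without  = { .id_isar0 = 0 };         /* Divide == 0 */

    printf("A32 SDIV/UDIV advertised: %d\n", example_arm_div(&with_div));
    printf("A32 SDIV/UDIV absent:     %d\n", example_arm_div(&without));
    return 0;
}

Once tests like this are the source of truth, enabling or downgrading a feature for a given CPU model is a matter of changing one register value rather than hunting down scattered feature bits.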
From: Richard Henderson <richard.henderson@linaro.org>

Having V6 alone imply jazelle was wrong for cortex-m0.
Change to an assertion for V6 & !M.

This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       |  6 +++++-
 target/arm/cpu.c       | 17 ++++++++++++++---
 target/arm/translate.c |  2 +-
 3 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
-    ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
     ARM_FEATURE_SVE, /* has Scalable Vector Extension */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
 }
 
+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
     if (arm_feature(env, ARM_FEATURE_V6)) {
         set_feature(env, ARM_FEATURE_V5);
-        set_feature(env, ARM_FEATURE_JAZELLE);
         if (!arm_feature(env, ARM_FEATURE_M)) {
+            assert(cpu_isar_feature(jazelle, cpu));
             set_feature(env, ARM_FEATURE_AUXCR);
         }
     }
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_VFP);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
-    set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
     cpu->midr = 0x41069265;
     cpu->reset_fpsid = 0x41011090;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00090078;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
 }
 
 static void arm946_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_AUXCR);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
-    set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
     cpu->midr = 0x4106a262;
     cpu->reset_fpsid = 0x410110a0;
     cpu->ctr = 0x1dd20d2;
     cpu->reset_sctlr = 0x00090078;
     cpu->reset_auxcr = 1;
+
+    /*
+     * ARMv5 does not have the ID_ISAR registers, but we can still
+     * set the field to indicate Jazelle support within QEMU.
+     */
+    cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
+
     {
         /* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
         ARMCPRegInfo ifar = {
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@
 #define ENABLE_ARCH_5     arm_dc_feature(s, ARM_FEATURE_V5)
 /* currently all emulated v5 cores are also v5TE, so don't bother */
 #define ENABLE_ARCH_5TE   arm_dc_feature(s, ARM_FEATURE_V5)
-#define ENABLE_ARCH_5J    arm_dc_feature(s, ARM_FEATURE_JAZELLE)
+#define ENABLE_ARCH_5J    dc_isar_feature(jazelle, s)
 #define ENABLE_ARCH_6     arm_dc_feature(s, ARM_FEATURE_V6)
 #define ENABLE_ARCH_6K    arm_dc_feature(s, ARM_FEATURE_V6K)
 #define ENABLE_ARCH_6T2   arm_dc_feature(s, ARM_FEATURE_THUMB2)
--
2.19.1


From: Subbaraya Sundeep <sundeep.lkml@gmail.com>

Modelled Microsemi's Smartfusion2 SPI controller.

Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170920201737.25723-4-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/Makefile.objs     |   1 +
 include/hw/ssi/mss-spi.h |  58 +++++++
 hw/ssi/mss-spi.c         | 404 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 463 insertions(+)
 create mode 100644 include/hw/ssi/mss-spi.h
 create mode 100644 hw/ssi/mss-spi.c

diff --git a/hw/ssi/Makefile.objs b/hw/ssi/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/Makefile.objs
+++ b/hw/ssi/Makefile.objs
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_XILINX_SPI) += xilinx_spi.o
 common-obj-$(CONFIG_XILINX_SPIPS) += xilinx_spips.o
 common-obj-$(CONFIG_ASPEED_SOC) += aspeed_smc.o
 common-obj-$(CONFIG_STM32F2XX_SPI) += stm32f2xx_spi.o
+common-obj-$(CONFIG_MSF2) += mss-spi.o
 
 obj-$(CONFIG_OMAP) += omap_spi.o
 obj-$(CONFIG_IMX) += imx_spi.o
diff --git a/include/hw/ssi/mss-spi.h b/include/hw/ssi/mss-spi.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/ssi/mss-spi.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Microsemi SmartFusion2 SPI
+ *
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#ifndef HW_MSS_SPI_H
+#define HW_MSS_SPI_H
+
+#include "hw/sysbus.h"
+#include "hw/ssi/ssi.h"
+#include "qemu/fifo32.h"
+
+#define TYPE_MSS_SPI   "mss-spi"
+#define MSS_SPI(obj)   OBJECT_CHECK(MSSSpiState, (obj), TYPE_MSS_SPI)
+
+#define R_SPI_MAX      16
+
+typedef struct MSSSpiState {
+    SysBusDevice parent_obj;
+
+    MemoryRegion mmio;
+
+    qemu_irq irq;
+
+    qemu_irq cs_line;
+
+    SSIBus *spi;
+
+    Fifo32 rx_fifo;
+    Fifo32 tx_fifo;
+
+    int fifo_depth;
+    uint32_t frame_count;
+    bool enabled;
+
+    uint32_t regs[R_SPI_MAX];
+} MSSSpiState;
+
+#endif /* HW_MSS_SPI_H */
diff --git a/hw/ssi/mss-spi.c b/hw/ssi/mss-spi.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/ssi/mss-spi.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Block model of SPI controller present in
+ * Microsemi's SmartFusion2 and SmartFusion SoCs.
+ *
+ * Copyright (C) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "hw/ssi/mss-spi.h"
+#include "qemu/log.h"
+
+#ifndef MSS_SPI_ERR_DEBUG
+#define MSS_SPI_ERR_DEBUG   0
+#endif
+
+#define DB_PRINT_L(lvl, fmt, args...) do { \
+    if (MSS_SPI_ERR_DEBUG >= lvl) { \
+        qemu_log("%s: " fmt "\n", __func__, ## args); \
+    } \
+} while (0);
+
+#define DB_PRINT(fmt, args...) DB_PRINT_L(1, fmt, ## args)
+
+#define FIFO_CAPACITY         32
+
+#define R_SPI_CONTROL         0
+#define R_SPI_DFSIZE          1
+#define R_SPI_STATUS          2
+#define R_SPI_INTCLR          3
+#define R_SPI_RX              4
+#define R_SPI_TX              5
+#define R_SPI_CLKGEN          6
+#define R_SPI_SS              7
+#define R_SPI_MIS             8
+#define R_SPI_RIS             9
+
+#define S_TXDONE              (1 << 0)
+#define S_RXRDY               (1 << 1)
+#define S_RXCHOVRF            (1 << 2)
+#define S_RXFIFOFUL           (1 << 4)
+#define S_RXFIFOFULNXT        (1 << 5)
+#define S_RXFIFOEMP           (1 << 6)
+#define S_RXFIFOEMPNXT        (1 << 7)
+#define S_TXFIFOFUL           (1 << 8)
+#define S_TXFIFOFULNXT        (1 << 9)
+#define S_TXFIFOEMP           (1 << 10)
+#define S_TXFIFOEMPNXT        (1 << 11)
+#define S_FRAMESTART          (1 << 12)
+#define S_SSEL                (1 << 13)
+#define S_ACTIVE              (1 << 14)
+
+#define C_ENABLE              (1 << 0)
+#define C_MODE                (1 << 1)
+#define C_INTRXDATA           (1 << 4)
+#define C_INTTXDATA           (1 << 5)
+#define C_INTRXOVRFLO         (1 << 6)
+#define C_SPS                 (1 << 26)
+#define C_BIGFIFO             (1 << 29)
+#define C_RESET               (1 << 31)
+
+#define FRAMESZ_MASK          0x1F
+#define FMCOUNT_MASK          0x00FFFF00
+#define FMCOUNT_SHIFT         8
+
+static void txfifo_reset(MSSSpiState *s)
+{
+    fifo32_reset(&s->tx_fifo);
+
+    s->regs[R_SPI_STATUS] &= ~S_TXFIFOFUL;
+    s->regs[R_SPI_STATUS] |= S_TXFIFOEMP;
+}
+
+static void rxfifo_reset(MSSSpiState *s)
+{
+    fifo32_reset(&s->rx_fifo);
+
+    s->regs[R_SPI_STATUS] &= ~S_RXFIFOFUL;
+    s->regs[R_SPI_STATUS] |= S_RXFIFOEMP;
+}
+
+static void set_fifodepth(MSSSpiState *s)
+{
+    unsigned int size = s->regs[R_SPI_DFSIZE] & FRAMESZ_MASK;
+
+    if (size <= 8) {
+        s->fifo_depth = 32;
+    } else if (size <= 16) {
+        s->fifo_depth = 16;
+    } else if (size <= 32) {
+        s->fifo_depth = 8;
+    } else {
+        s->fifo_depth = 4;
+    }
+}
+
+static void update_mis(MSSSpiState *s)
+{
+    uint32_t reg = s->regs[R_SPI_CONTROL];
+    uint32_t tmp;
+
+    /*
+     * form the Control register interrupt enable bits
+     * same as RIS, MIS and Interrupt clear registers for simplicity
+     */
+    tmp = ((reg & C_INTRXOVRFLO) >> 4) | ((reg & C_INTRXDATA) >> 3) |
+           ((reg & C_INTTXDATA) >> 5);
+    s->regs[R_SPI_MIS] |= tmp & s->regs[R_SPI_RIS];
+}
+
+static void spi_update_irq(MSSSpiState *s)
+{
+    int irq;
+
+    update_mis(s);
+    irq = !!(s->regs[R_SPI_MIS]);
+
+    qemu_set_irq(s->irq, irq);
+}
+
+static void mss_spi_reset(DeviceState *d)
+{
+    MSSSpiState *s = MSS_SPI(d);
+
+    memset(s->regs, 0, sizeof s->regs);
+    s->regs[R_SPI_CONTROL] = 0x80000102;
+    s->regs[R_SPI_DFSIZE] = 0x4;
+    s->regs[R_SPI_STATUS] = S_SSEL | S_TXFIFOEMP | S_RXFIFOEMP;
+    s->regs[R_SPI_CLKGEN] = 0x7;
+    s->regs[R_SPI_RIS] = 0x0;
+
+    s->fifo_depth = 4;
+    s->frame_count = 1;
+    s->enabled = false;
+
+    rxfifo_reset(s);
+    txfifo_reset(s);
+}
+
+static uint64_t
+spi_read(void *opaque, hwaddr addr, unsigned int size)
+{
+    MSSSpiState *s = opaque;
+    uint32_t ret = 0;
+
+    addr >>= 2;
+    switch (addr) {
+    case R_SPI_RX:
+        s->regs[R_SPI_STATUS] &= ~S_RXFIFOFUL;
+        s->regs[R_SPI_STATUS] &= ~S_RXCHOVRF;
+        ret = fifo32_pop(&s->rx_fifo);
+        if (fifo32_is_empty(&s->rx_fifo)) {
+            s->regs[R_SPI_STATUS] |= S_RXFIFOEMP;
+        }
+        break;
+
+    case R_SPI_MIS:
+        update_mis(s);
+        ret = s->regs[R_SPI_MIS];
+        break;
+
+    default:
+        if (addr < ARRAY_SIZE(s->regs)) {
+            ret = s->regs[addr];
+        } else {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__,
+                          addr * 4);
+            return ret;
+        }
+        break;
+    }
+
+    DB_PRINT("addr=0x%" HWADDR_PRIx " = 0x%" PRIx32, addr * 4, ret);
+    spi_update_irq(s);
+    return ret;
+}
+
+static void assert_cs(MSSSpiState *s)
+{
+    qemu_set_irq(s->cs_line, 0);
+}
+
+static void deassert_cs(MSSSpiState *s)
+{
+    qemu_set_irq(s->cs_line, 1);
+}
+
+static void spi_flush_txfifo(MSSSpiState *s)
+{
+    uint32_t tx;
+    uint32_t rx;
+    bool sps = !!(s->regs[R_SPI_CONTROL] & C_SPS);
+
+    /*
+     * Chip Select(CS) is automatically controlled by this controller.
+     * If SPS bit is set in Control register then CS is asserted
+     * until all the frames set in frame count of Control register are
+     * transferred. If SPS is not set then CS pulses between frames.
+     * Note that Slave Select register specifies which of the CS line
+     * has to be controlled automatically by controller. Bits SS[7:1] are for
+     * masters in FPGA fabric since we model only Microcontroller subsystem
+     * of Smartfusion2 we control only one CS(SS[0]) line.
+     */
+    while (!fifo32_is_empty(&s->tx_fifo) && s->frame_count) {
+        assert_cs(s);
+
+        s->regs[R_SPI_STATUS] &= ~(S_TXDONE | S_RXRDY);
+
+        tx = fifo32_pop(&s->tx_fifo);
+        DB_PRINT("data tx:0x%" PRIx32, tx);
+        rx = ssi_transfer(s->spi, tx);
+        DB_PRINT("data rx:0x%" PRIx32, rx);
+
+        if (fifo32_num_used(&s->rx_fifo) == s->fifo_depth) {
+            s->regs[R_SPI_STATUS] |= S_RXCHOVRF;
+            s->regs[R_SPI_RIS] |= S_RXCHOVRF;
+        } else {
+            fifo32_push(&s->rx_fifo, rx);
+            s->regs[R_SPI_STATUS] &= ~S_RXFIFOEMP;
+            if (fifo32_num_used(&s->rx_fifo) == (s->fifo_depth - 1)) {
+                s->regs[R_SPI_STATUS] |= S_RXFIFOFULNXT;
+            } else if (fifo32_num_used(&s->rx_fifo) == s->fifo_depth) {
+                s->regs[R_SPI_STATUS] |= S_RXFIFOFUL;
+            }
+        }
+        s->frame_count--;
+        if (!sps) {
+            deassert_cs(s);
+        }
+    }
+
+    if (!s->frame_count) {
+        s->frame_count = (s->regs[R_SPI_CONTROL] & FMCOUNT_MASK) >>
+                            FMCOUNT_SHIFT;
+        deassert_cs(s);
+        s->regs[R_SPI_RIS] |= S_TXDONE | S_RXRDY;
+        s->regs[R_SPI_STATUS] |= S_TXDONE | S_RXRDY;
+    }
+}
+
+static void spi_write(void *opaque, hwaddr addr,
+                      uint64_t val64, unsigned int size)
+{
+    MSSSpiState *s = opaque;
+    uint32_t value = val64;
+
+    DB_PRINT("addr=0x%" HWADDR_PRIx " =0x%" PRIx32, addr, value);
+    addr >>= 2;
+
+    switch (addr) {
+    case R_SPI_TX:
+        /* adding to already full FIFO */
+        if (fifo32_num_used(&s->tx_fifo) == s->fifo_depth) {
+            break;
+        }
+        s->regs[R_SPI_STATUS] &= ~S_TXFIFOEMP;
+        fifo32_push(&s->tx_fifo, value);
+        if (fifo32_num_used(&s->tx_fifo) == (s->fifo_depth - 1)) {
+            s->regs[R_SPI_STATUS] |= S_TXFIFOFULNXT;
+        } else if (fifo32_num_used(&s->tx_fifo) == s->fifo_depth) {
+            s->regs[R_SPI_STATUS] |= S_TXFIFOFUL;
+        }
+        if (s->enabled) {
+            spi_flush_txfifo(s);
+        }
+        break;
+
+    case R_SPI_CONTROL:
+        s->regs[R_SPI_CONTROL] = value;
+        if (value & C_BIGFIFO) {
+            set_fifodepth(s);
+        } else {
+            s->fifo_depth = 4;
+        }
+        s->enabled = value & C_ENABLE;
+        s->frame_count = (value & FMCOUNT_MASK) >> FMCOUNT_SHIFT;
+        if (value & C_RESET) {
+            mss_spi_reset(DEVICE(s));
+        }
+        break;
+
+    case R_SPI_DFSIZE:
+        if (s->enabled) {
+            break;
+        }
+        s->regs[R_SPI_DFSIZE] = value;
+        break;
+
+    case R_SPI_INTCLR:
+        s->regs[R_SPI_INTCLR] = value;
+        if (value & S_TXDONE) {
+            s->regs[R_SPI_RIS] &= ~S_TXDONE;
+        }
+        if (value & S_RXRDY) {
+            s->regs[R_SPI_RIS] &= ~S_RXRDY;
+        }
+        if (value & S_RXCHOVRF) {
+            s->regs[R_SPI_RIS] &= ~S_RXCHOVRF;
+        }
+        break;
+
+    case R_SPI_MIS:
+    case R_SPI_STATUS:
+    case R_SPI_RIS:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "%s: Write to read only register 0x%" HWADDR_PRIx "\n",
+                      __func__, addr * 4);
+        break;
+
+    default:
+        if (addr < ARRAY_SIZE(s->regs)) {
+            s->regs[addr] = value;
+        } else {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: Bad offset 0x%" HWADDR_PRIx "\n", __func__,
+                          addr * 4);
+        }
+        break;
+    }
+
+    spi_update_irq(s);
+}
+
+static const MemoryRegionOps spi_ops = {
+    .read = spi_read,
+    .write = spi_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 4
+    }
+};
+
+static void mss_spi_realize(DeviceState *dev, Error **errp)
+{
+    MSSSpiState *s = MSS_SPI(dev);
+    SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
+
+    s->spi = ssi_create_bus(dev, "spi");
+
+    sysbus_init_irq(sbd, &s->irq);
+    ssi_auto_connect_slaves(dev, &s->cs_line, s->spi);
+    sysbus_init_irq(sbd, &s->cs_line);
+
+    memory_region_init_io(&s->mmio, OBJECT(s), &spi_ops, s,
+                          TYPE_MSS_SPI, R_SPI_MAX * 4);
+    sysbus_init_mmio(sbd, &s->mmio);
+
+    fifo32_create(&s->tx_fifo, FIFO_CAPACITY);
+    fifo32_create(&s->rx_fifo, FIFO_CAPACITY);
+}
+
+static const VMStateDescription vmstate_mss_spi = {
+    .name = TYPE_MSS_SPI,
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_FIFO32(tx_fifo, MSSSpiState),
+        VMSTATE_FIFO32(rx_fifo, MSSSpiState),
+        VMSTATE_UINT32_ARRAY(regs, MSSSpiState, R_SPI_MAX),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static void mss_spi_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->realize = mss_spi_realize;
+    dc->reset = mss_spi_reset;
+    dc->vmsd = &vmstate_mss_spi;
+}
+
+static const TypeInfo mss_spi_info = {
+    .name           = TYPE_MSS_SPI,
+    .parent         = TYPE_SYS_BUS_DEVICE,
+    .instance_size  = sizeof(MSSSpiState),
+    .class_init     = mss_spi_class_init,
+};
+
+static void mss_spi_register_types(void)
+{
+    type_register_static(&mss_spi_info);
+}
+
+type_init(mss_spi_register_types)
--
2.7.4
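One detail of the mss-spi model above that is easy to miss is set_fifodepth(): when the guest sets C_BIGFIFO, the controller trades FIFO depth against frame width, and falls back to 4 entries otherwise. Here is a standalone sketch of just that mapping; the thresholds mirror the patch, but the harness around them is invented for the example.

#include <stdio.h>

/* Narrower frames get a deeper FIFO, mirroring set_fifodepth() above. */
static int fifo_depth_for_frame_size(unsigned int size)
{
    if (size <= 8) {
        return 32;
    } else if (size <= 16) {
        return 16;
    } else if (size <= 32) {
        return 8;
    }
    return 4;   /* anything wider falls back to 4 entries */
}

int main(void)
{
    unsigned int sizes[] = { 4, 8, 12, 16, 24, 32 };
    unsigned int i;

    for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        printf("frame size %2u -> FIFO depth %d\n",
               sizes[i], fifo_depth_for_frame_size(sizes[i]));
    }
    return 0;
}

The model re-derives the depth on every Control register write, which is why R_SPI_DFSIZE writes are ignored while the controller is enabled.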
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     | 6 +++++-
 linux-user/elfload.c | 2 +-
 target/arm/cpu.c     | 4 ----
 target/arm/helper.c  | 2 +-
 target/arm/machine.c | 3 +--
 5 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_NEON,
     ARM_FEATURE_M, /* Microcontroller profile.  */
     ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling.  */
-    ARM_FEATURE_THUMB2EE,
     ARM_FEATURE_V7MP,    /* v7 Multiprocessing Extensions */
     ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
     ARM_FEATURE_V4T,
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
 }
 
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
+}
+
 static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
 {
     return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
     GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
     GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
-    GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
+    GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
     GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
     GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
     GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7);
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     cpu->midr = 0x410fc080;
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_VFP3);
     set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     /* Note that A9 supports the MP extensions even for
      * A9UP and single-core A9MP (which are both different
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_V7VE);
     set_feature(&cpu->env, ARM_FEATURE_VFP4);
     set_feature(&cpu->env, ARM_FEATURE_NEON);
-    set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
         define_arm_cp_regs(cpu, vmsa_cp_reginfo);
     }
-    if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
+    if (cpu_isar_feature(t32ee, cpu)) {
         define_arm_cp_regs(cpu, t2ee_cp_reginfo);
     }
     if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
 static bool thumb2ee_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
-    CPUARMState *env = &cpu->env;
 
-    return arm_feature(env, ARM_FEATURE_THUMB2EE);
+    return cpu_isar_feature(t32ee, cpu);
 }
 
 static const VMStateDescription vmstate_thumb2ee = {
--
2.19.1


Now that we have a banked FAULTMASK register and banked exceptions,
we can implement the correct check in cpu_mmu_index() for whether
the MPU_CTRL.HFNMIENA bit's effect should apply. This bit causes
handlers which have requested a negative execution priority to run
with the MPU disabled. In v8M the test has to check this for the
current security state and so takes account of banking.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-17-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      | 21 ++++++++++++++++-----
 hw/intc/armv7m_nvic.c | 29 +++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq);
  * (v8M ARM ARM I_PKLD.)
  */
 int armv7m_nvic_raw_execution_priority(void *opaque);
+/**
+ * armv7m_nvic_neg_prio_requested: return true if the requested execution
+ * priority is negative for the specified security state.
+ * @opaque: the NVIC
+ * @secure: the security state to test
+ * This corresponds to the pseudocode IsReqExecPriNeg().
+ */
+#ifndef CONFIG_USER_ONLY
+bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure);
+#else
+static inline bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
+{
+    return false;
+}
+#endif
 
 /* Interface for defining coprocessor registers.
  * Registers are defined in tables of arm_cp_reginfo structs
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
     if (arm_feature(env, ARM_FEATURE_M)) {
         ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv;
 
-        /* Execution priority is negative if FAULTMASK is set or
-         * we're in a HardFault or NMI handler.
-         */
-        if ((env->v7m.exception > 0 && env->v7m.exception <= 3)
-            || env->v7m.faultmask[env->v7m.secure]) {
+        if (armv7m_nvic_neg_prio_requested(env->nvic, env->v7m.secure)) {
             mmu_idx = ARMMMUIdx_MNegPri;
         }
 
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
     return MIN(running, s->exception_prio);
 }
 
+bool armv7m_nvic_neg_prio_requested(void *opaque, bool secure)
+{
+    /* Return true if the requested execution priority is negative
+     * for the specified security state, ie that security state
+     * has an active NMI or HardFault or has set its FAULTMASK.
+     * Note that this is not the same as whether the execution
+     * priority is actually negative (for instance AIRCR.PRIS may
+     * mean we don't allow FAULTMASK_NS to actually make the execution
+     * priority negative). Compare pseudocode IsReqExcPriNeg().
+     */
+    NVICState *s = opaque;
+
+    if (s->cpu->env.v7m.faultmask[secure]) {
+        return true;
+    }
+
+    if (secure ? s->sec_vectors[ARMV7M_EXCP_HARD].active :
+        s->vectors[ARMV7M_EXCP_HARD].active) {
+        return true;
+    }
+
+    if (s->vectors[ARMV7M_EXCP_NMI].active &&
+        exc_targets_secure(s, ARMV7M_EXCP_NMI) == secure) {
+        return true;
+    }
+
+    return false;
+}
+
 bool armv7m_nvic_can_take_pending_exception(void *opaque)
 {
     NVICState *s = opaque;
--
2.7.4
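The decision armv7m_nvic_neg_prio_requested() makes above is worth restating: a security state has requested-negative priority if its FAULTMASK is set, its HardFault is active, or an NMI targeting that state is active. A minimal standalone model of that check follows; the structure and field names here are invented for illustration and are not the QEMU NVIC types.

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for the per-security-state NVIC bookkeeping. */
typedef struct {
    bool faultmask[2];          /* indexed by security state: [0]=NS, [1]=S */
    bool hardfault_active[2];   /* banked HardFault active flags */
    bool nmi_active;
    bool nmi_targets_secure;
} ExampleNVIC;

static bool neg_prio_requested(const ExampleNVIC *s, bool secure)
{
    if (s->faultmask[secure]) {
        return true;
    }
    if (s->hardfault_active[secure]) {
        return true;
    }
    if (s->nmi_active && s->nmi_targets_secure == secure) {
        return true;
    }
    return false;
}

int main(void)
{
    /* Secure FAULTMASK set, nothing else active. */
    ExampleNVIC s = { .faultmask = { false, true } };

    printf("NS requested-negative: %d\n", neg_prio_requested(&s, false));
    printf("S  requested-negative: %d\n", neg_prio_requested(&s, true));
    return 0;
}

As the patch's comment notes, "requested" is the operative word: AIRCR.PRIS can still prevent the non-secure request from producing an actually-negative execution priority.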
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h            | 16 +++++++++++++++-
 linux-user/aarch64/signal.c |  4 ++--
 linux-user/elfload.c        |  2 +-
 linux-user/syscall.c        | 10 ++++++----
 target/arm/cpu64.c          |  5 ++++-
 target/arm/helper.c         |  9 ++++++---
 target/arm/machine.c        |  3 +--
 target/arm/translate-a64.c  |  4 ++--
 8 files changed, 37 insertions(+), 16 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
 FIELD(ID_AA64ISAR1, SB, 36, 4)
 FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
 
+FIELD(ID_AA64PFR0, EL0, 0, 4)
+FIELD(ID_AA64PFR0, EL1, 4, 4)
+FIELD(ID_AA64PFR0, EL2, 8, 4)
+FIELD(ID_AA64PFR0, EL3, 12, 4)
+FIELD(ID_AA64PFR0, FP, 16, 4)
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
+FIELD(ID_AA64PFR0, GIC, 24, 4)
+FIELD(ID_AA64PFR0, RAS, 28, 4)
+FIELD(ID_AA64PFR0, SVE, 32, 4)
+
 QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
 
 /* If adding a feature bit which corresponds to a Linux ELF
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
-    ARM_FEATURE_SVE, /* has Scalable Vector Extension */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
 }
 
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
+}
+
 /*
  * Forward to the above feature tests given an ARMCPU pointer.
  */
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/signal.c
+++ b/linux-user/aarch64/signal.c
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
             break;
 
         case TARGET_SVE_MAGIC:
-            if (arm_feature(env, ARM_FEATURE_SVE)) {
+            if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
                 vq = (env->vfp.zcr_el[1] & 0xf) + 1;
                 sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
                 if (!sve && size == sve_size) {
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
                               &layout);
 
     /* SVE state needs saving only if it exists.  */
-    if (arm_feature(env, ARM_FEATURE_SVE)) {
+    if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
         vq = (env->vfp.zcr_el[1] & 0xf) + 1;
         sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
         sve_ofs = alloc_sigframe_space(sve_size, &layout);
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
     GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
     GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
-    GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
+    GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
 
 #undef GET_FEATURE
 #undef GET_FEATURE_ID
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
          * even though the current architectural maximum is VQ=16.
          */
        ret = -TARGET_EINVAL;
-        if (arm_feature(cpu_env, ARM_FEATURE_SVE)
+        if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
            && arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
            CPUARMState *env = cpu_env;
            ARMCPU *cpu = arm_env_get_cpu(env);
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
        return ret;
    case TARGET_PR_SVE_GET_VL:
        ret = -TARGET_EINVAL;
-        if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
-            CPUARMState *env = cpu_env;
-            ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
+        {
+            ARMCPU *cpu = arm_env_get_cpu(cpu_env);
+            if (cpu_isar_feature(aa64_sve, cpu)) {
+                ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
+            }
        }
        return ret;
 #endif /* AARCH64 */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
         cpu->isar.id_aa64isar1 = t;
 
+        t = cpu->isar.id_aa64pfr0;
+        t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
+        cpu->isar.id_aa64pfr0 = t;
+
         /* Replicate the same data to the 32-bit id registers.  */
         u = cpu->isar.id_isar5;
         u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
          * present in either.
          */
         set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
-        set_feature(&cpu->env, ARM_FEATURE_SVE);
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
          */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_one_arm_cp_reg(cpu, &sctlr);
     }
 
-    if (arm_feature(env, ARM_FEATURE_SVE)) {
+    if (cpu_isar_feature(aa64_sve, cpu)) {
         define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
         if (arm_feature(env, ARM_FEATURE_EL2)) {
             define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
     uint32_t flags;
 
     if (is_a64(env)) {
+        ARMCPU *cpu = arm_env_get_cpu(env);
+
         *pc = env->pc;
         flags = ARM_TBFLAG_AARCH64_STATE_MASK;
         /* Get control bits for tagged addresses */
         flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
         flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
 
-        if (arm_feature(env, ARM_FEATURE_SVE)) {
+        if (cpu_isar_feature(aa64_sve, cpu)) {
             int sve_el = sve_exception_el(env, current_el);
             uint32_t zcr_len;
 
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
 void aarch64_sve_change_el(CPUARMState *env, int old_el,
                            int new_el, bool el0_a64)
 {
+    ARMCPU *cpu = arm_env_get_cpu(env);
     int old_len, new_len;
     bool old_a64, new_a64;
 
     /* Nothing to do if no SVE.  */
-    if (!arm_feature(env, ARM_FEATURE_SVE)) {
+    if (!cpu_isar_feature(aa64_sve, cpu)) {
         return;
     }
 
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
 static bool sve_needed(void *opaque)
 {
     ARMCPU *cpu = opaque;
-    CPUARMState *env = &cpu->env;
 
-    return arm_feature(env, ARM_FEATURE_SVE);
+    return cpu_isar_feature(aa64_sve, cpu);
 }
 
 /* The first two words of each Zreg is stored in VFP state.  */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
     cpu_fprintf(f, "     FPCR=%08x FPSR=%08x\n",
                 vfp_get_fpcr(env), vfp_get_fpsr(env));
 
-    if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
+    if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
         int j, zcr_len = sve_zcr_len_for_el(env, el);
 
         for (i = 0; i <= FFR_PRED_NUM; i++) {
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
         unallocated_encoding(s);
         break;
     case 0x2:
-        if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
+        if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
            unallocated_encoding(s);
        }
        break;
--
2.19.1
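With ID_AA64PFR0 now declared via FIELD(), the SVE test above reduces to reading the 4-bit field at bit 32 of a 64-bit register. A self-contained sketch of that extraction (the helper and sample values are invented for illustration, not the QEMU FIELD_EX64 machinery):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Extract a bitfield from a 64-bit register, like FIELD_EX64(). */
static uint64_t field_ex64(uint64_t reg, int shift, int len)
{
    return (reg >> shift) & ((1ull << len) - 1);
}

/* ID_AA64PFR0.SVE is bits [35:32]; nonzero means SVE is implemented. */
static bool example_aa64_sve(uint64_t id_aa64pfr0)
{
    return field_ex64(id_aa64pfr0, 32, 4) != 0;
}

int main(void)
{
    uint64_t with_sve = 1ull << 32;   /* SVE field == 1 */
    uint64_t without  = 0;

    printf("SVE advertised: %d\n", example_aa64_sve(with_sve));
    printf("SVE absent:     %d\n", example_aa64_sve(without));
    return 0;
}

A pleasant side effect of this style, visible in the patch, is that -cpu max enables SVE simply by writing 1 into the field with FIELD_DP64(), and every consumer (signal frames, hwcaps, system registers, the disassembler) picks it up from the same place.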
Make the armv7m_nvic_set_pending() and armv7m_nvic_clear_pending()
functions take a bool indicating whether to pend the secure
or non-secure version of a banked interrupt, and update the
callsites accordingly.

In most callsites we can simply pass the correct security
state in; in a couple of cases we use TODO comments to indicate
that we will return to this code in a subsequent commit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-10-git-send-email-peter.maydell@linaro.org
---
 target/arm/cpu.h      | 14 ++++++++++-
 hw/intc/armv7m_nvic.c | 64 ++++++++++++++++++++++++++++++++++++++-------------
 target/arm/helper.c   | 24 +++++++++++--------
 hw/intc/trace-events  |  4 ++--
 4 files changed, 77 insertions(+), 29 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool armv7m_nvic_can_take_pending_exception(void *opaque)
     return true;
 }
 #endif
-void armv7m_nvic_set_pending(void *opaque, int irq);
+/**
+ * armv7m_nvic_set_pending: mark the specified exception as pending
+ * @opaque: the NVIC
+ * @irq: the exception number to mark pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Marks the specified exception as pending. Note that we will assert()
+ * if @secure is true and @irq does not specify one of the fixed set
+ * of architecturally banked exceptions.
+ */
+void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
 void armv7m_nvic_acknowledge_irq(void *opaque);
 /**
  * armv7m_nvic_complete_irq: complete specified interrupt or exception
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_irq_update(NVICState *s)
     qemu_set_irq(s->excpout, lvl);
 }
 
-static void armv7m_nvic_clear_pending(void *opaque, int irq)
+/**
+ * armv7m_nvic_clear_pending: mark the specified exception as not pending
+ * @opaque: the NVIC
+ * @irq: the exception number to mark as not pending
+ * @secure: false for non-banked exceptions or for the nonsecure
+ * version of a banked exception, true for the secure version of a banked
+ * exception.
+ *
+ * Marks the specified exception as not pending. Note that we will assert()
+ * if @secure is true and @irq does not specify one of the fixed set
+ * of architecturally banked exceptions.
+ */
+static void armv7m_nvic_clear_pending(void *opaque, int irq, bool secure)
 {
     NVICState *s = (NVICState *)opaque;
     VecInfo *vec;
 
     assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
 
-    vec = &s->vectors[irq];
-    trace_nvic_clear_pending(irq, vec->enabled, vec->prio);
+    if (secure) {
+        assert(exc_is_banked(irq));
+        vec = &s->sec_vectors[irq];
+    } else {
+        vec = &s->vectors[irq];
+    }
+    trace_nvic_clear_pending(irq, secure, vec->enabled, vec->prio);
     if (vec->pending) {
         vec->pending = 0;
         nvic_irq_update(s);
     }
 }
 
-void armv7m_nvic_set_pending(void *opaque, int irq)
+void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
 {
     NVICState *s = (NVICState *)opaque;
+    bool banked = exc_is_banked(irq);
     VecInfo *vec;
 
     assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
+    assert(!secure || banked);
 
-    vec = &s->vectors[irq];
-    trace_nvic_set_pending(irq, vec->enabled, vec->prio);
+    vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
 
+    trace_nvic_set_pending(irq, secure, vec->enabled, vec->prio);
 
     if (irq >= ARMV7M_EXCP_HARD && irq < ARMV7M_EXCP_PENDSV) {
         /* If a synchronous exception is pending then it may be
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq)
                          "(current priority %d)\n", irq, running);
         }
 
-        /* We can do the escalation, so we take HardFault instead */
+        /* We can do the escalation, so we take HardFault instead.
+         * If BFHFNMINS is set then we escalate to the banked HF for
+         * the target security state of the original exception; otherwise
+         * we take a Secure HardFault.
+         */
         irq = ARMV7M_EXCP_HARD;
-        vec = &s->vectors[irq];
+        if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
+            (secure ||
+             !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK))) {
+            vec = &s->sec_vectors[irq];
+        } else {
+            vec = &s->vectors[irq];
+        }
+        /* HF may be banked but there is only one shared HFSR */
         s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
     }
 }
@@ -XXX,XX +XXX,XX @@ static void set_irq_level(void *opaque, int n, int level)
     if (level != vec->level) {
         vec->level = level;
         if (level) {
-            armv7m_nvic_set_pending(s, n);
+            armv7m_nvic_set_pending(s, n, false);
         }
     }
 }
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
     }
     case 0xd04: /* Interrupt Control State. */
         if (value & (1 << 31)) {
-            armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI);
+            armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);
         }
         if (value & (1 << 28)) {
-            armv7m_nvic_set_pending(s, ARMV7M_EXCP_PENDSV);
+            armv7m_nvic_set_pending(s, ARMV7M_EXCP_PENDSV, attrs.secure);
         } else if (value & (1 << 27)) {
-            armv7m_nvic_clear_pending(s, ARMV7M_EXCP_PENDSV);
+            armv7m_nvic_clear_pending(s, ARMV7M_EXCP_PENDSV, attrs.secure);
         }
         if (value & (1 << 26)) {
-            armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK);
+            armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, attrs.secure);
         } else if (value & (1 << 25)) {
-            armv7m_nvic_clear_pending(s, ARMV7M_EXCP_SYSTICK);
+            armv7m_nvic_clear_pending(s, ARMV7M_EXCP_SYSTICK, attrs.secure);
         }
         break;
     case 0xd08: /* Vector Table Offset. */
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
     {
         int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
         if (excnum < s->num_irq) {
-            armv7m_nvic_set_pending(s, excnum);
+            armv7m_nvic_set_pending(s, excnum, false);
         }
         break;
     }
@@ -XXX,XX +XXX,XX @@ static void nvic_systick_trigger(void *opaque, int n, int level)
         /* SysTick just asked us to pend its exception.
          * (This is different from an external interrupt line's
          * behaviour.)
+         * TODO: when we implement the banked systicks we must make
+         * this pend the correct banked exception.
          */
-        armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK);
+        armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK, false);
     }
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      * stack, directly take a usage fault on the current stack.
      */
     env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
-    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
+    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
     v7m_exception_taken(cpu, excret);
     qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                   "stackframe: failed exception return integrity check\n");
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      * exception return excret specified then this is a UsageFault.
      */
     if (return_to_handler != arm_v7m_is_handler_mode(env)) {
-        /* Take an INVPC UsageFault by pushing the stack again. */
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
+        /* Take an INVPC UsageFault by pushing the stack again.
+         * TODO: the v8M version of this code should target the
+         * background state for this exception.
+         */
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
         v7m_push_stack(cpu);
         v7m_exception_taken(cpu, excret);
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
        handle it.  */
     switch (cs->exception_index) {
     case EXCP_UDEF:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
         break;
     case EXCP_NOCP:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
         break;
     case EXCP_INVSTATE:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE);
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
         break;
     case EXCP_SWI:
         /* The PC already points to the next instruction.  */
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC);
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
         break;
     case EXCP_PREFETCH_ABORT:
     case EXCP_DATA_ABORT:
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
                          env->v7m.bfar);
                 break;
             }
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS);
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
             break;
         default:
             /* All other FSR values are either MPU faults or "can't happen
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
                          env->v7m.mmfar[env->v7m.secure]);
                 break;
             }
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM);
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
+                                    env->v7m.secure);
             break;
         }
         break;
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)


From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 17 +++++++++++++++-
 linux-user/elfload.c       |  6 +-----
 target/arm/cpu64.c         | 16 ++++++++-------
 target/arm/helper.c        |  2 +-
 target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
 target/arm/translate.c     |  6 +++---
 6 files changed, 50 insertions(+), 37 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
-    ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
 
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
     return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
 }
 
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
+{
+    /*
+     * This is a placeholder for use by VCMA until the rest of
+     * the ARMv8.2-FP16 extension is implemented for aa32 mode.
+     * At which point we can properly set and check MVFR1.FPHP.
+     */
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
 }
 
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
+{
+    /* We always set the AdvSIMD and FP fields identically wrt FP16.  */
+    return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
+}
+
 static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     hwcaps |= ARM_HWCAP_A64_ASIMD;
 
     /* probe for the extra features */
-#define GET_FEATURE(feat, hwcap) \
-    do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
 #define GET_FEATURE_ID(feat, hwcap) \
     do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
 
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
     GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
     GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
-    GET_FEATURE(ARM_FEATURE_V8_FP16,
-                ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
+    GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
     GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
     GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
     GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
     GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
     GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
 
-#undef GET_FEATURE
 #undef GET_FEATURE_ID
 
     return hwcaps;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 
         t = cpu->isar.id_aa64pfr0;
         t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
+        t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
+        t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
         cpu->isar.id_aa64pfr0 = t;
 
         /* Replicate the same data to the 32-bit id registers.  */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
         u = FIELD_DP32(u, ID_ISAR6, DP, 1);
         cpu->isar.id_isar6 = u;
 
-#ifdef CONFIG_USER_ONLY
-        /* We don't set these in system emulation mode for the moment,
-         * since we don't correctly set the ID registers to advertise them,
-         * and in some cases they're only available in AArch64 and not AArch32,
-         * whereas the architecture requires them to be present in both if
-         * present in either.
+        /*
+         * FIXME: We do not yet support ARMv8.2-fp16 for AArch32 yet,
+         * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
+         * but it is also not legal to enable SVE without support for FP16,
+         * and enabling SVE in system mode is more useful in the short term.
          */
-        set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
+
+#ifdef CONFIG_USER_ONLY
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
          */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
     uint32_t changed;
 
     /* When ARMv8.2-FP16 is not supported, FZ16 is RES0.  */
-    if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
+    if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
         val &= ~FPCR_FZ16;
     }
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         break;
     case 3:
         size = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         break;
     case 3:
         size = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         break;
     case 3:
         sz = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
         handle_fp_1src_double(s, opcode, rd, rn);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
         handle_fp_2src_double(s, opcode, rd, rn, rm);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
         handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
         break;
     case 3:
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         break;
     case 3:
         sz = MO_16;
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
     case 1: /* float64 */
         break;
     case 3: /* float16 */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
         break;
     case 0x6: /* 16-bit float, 32-bit int */
     case 0xe: /* 16-bit float, 64-bit int */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
     case 1: /* float64 */
         break;
     case 3: /* float16 */
-        if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
         /* fallthru */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
          */
         is_min = extract32(size, 1, 1);
         is_fp = true;
-        if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!is_u && dc_isar_feature(aa64_fp16, s)) {
             size = 1;
         } else if (!is_u || !is_q || extract32(size, 0, 1)) {
             unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
 
     if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
         /* Check for FMOV (vector, immediate) - half-precision */
-        if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
+        if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
     case 0x2f: /* FMINP */
         /* FP op, size[0] is 32 or 64 bit*/
         if (!u) {
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+            if (!dc_isar_feature(aa64_fp16, s)) {
                 unallocated_encoding(s);
                 return;
             } else {
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
         size = MO_32;
     } else if (immh & 2) {
         size = MO_16;
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
         size = MO_32;
     } else if (immh & 0x2) {
         size = MO_16;
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+        if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
         return;
     }
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
     }
 
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
     TCGv_ptr fpst;
     bool pairwise = false;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
     case 0x1c: /* FCADD, #90 */
     case 0x1e: /* FCADD, #270 */
         if (size == 0
-            || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
+            || (size == 1 && !dc_isar_feature(aa64_fp16, s))
             || (size == 3 && !is_q)) {
             unallocated_encoding(s);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
     bool need_fpst = true;
     int rmode;
 
-    if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (!dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         }
         break;
     }
-    if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
+    if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
         unallocated_encoding(s);
         return;
     }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
         int size = extract32(insn, 20, 1);
         data = extract32(insn, 23, 2); /* rot */
         if (!dc_isar_feature(aa32_vcma, s)
-            || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
+            || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
             return 1;
         }
         fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
         int size = extract32(insn, 20, 1);
         data = extract32(insn, 24, 1); /* rot */
         if (!dc_isar_feature(aa32_vcma, s)
-            || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
339
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
340
return 1;
341
}
342
fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
343
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
344
return 1;
345
}
346
if (size == 0) {
347
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
348
+ if (!dc_isar_feature(aa32_fp16_arith, s)) {
349
return 1;
255
}
350
}
256
}
351
/* For fp16, rm is just Vm, and index is M. */
257
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG);
258
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
259
break;
260
case EXCP_IRQ:
261
break;
262
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
263
index XXXXXXX..XXXXXXX 100644
264
--- a/hw/intc/trace-events
265
+++ b/hw/intc/trace-events
266
@@ -XXX,XX +XXX,XX @@ nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
267
nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
268
nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
269
nvic_escalate_disabled(int irq) "NVIC escalating irq %d to HardFault: disabled"
270
-nvic_set_pending(int irq, int en, int prio) "NVIC set pending irq %d (enabled: %d priority %d)"
271
-nvic_clear_pending(int irq, int en, int prio) "NVIC clear pending irq %d (enabled: %d priority %d)"
272
+nvic_set_pending(int irq, bool secure, int en, int prio) "NVIC set pending irq %d secure-bank %d (enabled: %d priority %d)"
273
+nvic_clear_pending(int irq, bool secure, int en, int prio) "NVIC clear pending irq %d secure-bank %d (enabled: %d priority %d)"
274
nvic_set_pending_level(int irq) "NVIC set pending: irq %d higher prio than vectpending: setting irq line to 1"
275
nvic_acknowledge_irq(int irq, int prio) "NVIC acknowledge IRQ: %d now active (prio %d)"
276
nvic_complete_irq(int irq) "NVIC complete IRQ %d"
277
--
352
--
278
2.7.4
353
2.19.1
279
354
280
355
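The dc_isar_feature() tests above read half-precision support straight out of the guest's ID register state rather than a parallel ARM_FEATURE_* flag. As a rough sketch of what such a predicate looks like (field position per the Arm ARM; an illustration, not a copy of QEMU's exact macros):

    /* ID_AA64PFR0_EL1.FP (bits [19:16]) == 1 means FP is present and
     * includes half-precision support.
     */
    static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
    {
        return extract64(id->id_aa64pfr0, 16, 4) == 1;
    }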
Don't use old_mmio in the memory region ops struct.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505580378-9044-7-git-send-email-peter.maydell@linaro.org
---
 hw/arm/omap2.c | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/hw/arm/omap2.c b/hw/arm/omap2.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/omap2.c
+++ b/hw/arm/omap2.c
@@ -XXX,XX +XXX,XX @@ static void omap_sysctl_write(void *opaque, hwaddr addr,
     }
 }

+static uint64_t omap_sysctl_readfn(void *opaque, hwaddr addr,
+                                   unsigned size)
+{
+    switch (size) {
+    case 1:
+        return omap_sysctl_read8(opaque, addr);
+    case 2:
+        return omap_badwidth_read32(opaque, addr); /* TODO */
+    case 4:
+        return omap_sysctl_read(opaque, addr);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void omap_sysctl_writefn(void *opaque, hwaddr addr,
+                                uint64_t value, unsigned size)
+{
+    switch (size) {
+    case 1:
+        omap_sysctl_write8(opaque, addr, value);
+        break;
+    case 2:
+        omap_badwidth_write32(opaque, addr, value); /* TODO */
+        break;
+    case 4:
+        omap_sysctl_write(opaque, addr, value);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static const MemoryRegionOps omap_sysctl_ops = {
-    .old_mmio = {
-        .read = {
-            omap_sysctl_read8,
-            omap_badwidth_read32,    /* TODO */
-            omap_sysctl_read,
-        },
-        .write = {
-            omap_sysctl_write8,
-            omap_badwidth_write32,    /* TODO */
-            omap_sysctl_write,
-        },
-    },
+    .read = omap_sysctl_readfn,
+    .write = omap_sysctl_writefn,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
-- 
2.7.4

For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
---
 target/arm/internals.h | 18 ++++++++++++++++++
 target/arm/helper.c | 10 ++++++++++
 target/arm/translate.c | 7 +------
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
     }
 }

+/**
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
+ * @psr: Program Status Register indicating CPU mode
+ *
+ * Returns, for debug logging purposes, a printable representation
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
+ * the low bits of the specified PSR.
+ */
+static inline const char *aarch32_mode_name(uint32_t psr)
+{
+    static const char cpu_mode_names[16][4] = {
+        "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
+        "???", "???", "hyp", "und", "???", "???", "???", "sys"
+    };
+
+    return cpu_mode_names[psr & 0xf];
+}
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
             mask |= CPSR_IL;
             val |= CPSR_IL;
         }
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "Illegal AArch32 mode switch attempt from %s to %s\n",
+                      aarch32_mode_name(env->uncached_cpsr),
+                      aarch32_mode_name(val));
     } else {
+        qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
+                      write_type == CPSRWriteExceptionReturn ?
+                      "Exception return from AArch32" :
+                      "AArch32 mode switch from",
+                      aarch32_mode_name(env->uncached_cpsr),
+                      aarch32_mode_name(val), env->regs[15]);
         switch_mode(env, val & CPSR_M);
     }
 }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
     translator_loop(ops, &dc.base, cpu, tb);
 }

-static const char *cpu_mode_names[16] = {
-  "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
-  "???", "???", "hyp", "und", "???", "???", "???", "sys"
-};
-
 void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                         int flags)
 {
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
                     psr & CPSR_V ? 'V' : '-',
                     psr & CPSR_T ? 'T' : 'A',
                     ns_status,
-                    cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
+                    aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
     }

     if (flags & CPU_DUMP_FPU) {
-- 
2.19.1

Drop the use of old_mmio in the omap2_gpio memory ops.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505580378-9044-3-git-send-email-peter.maydell@linaro.org
---
 hw/gpio/omap_gpio.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/hw/gpio/omap_gpio.c b/hw/gpio/omap_gpio.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/gpio/omap_gpio.c
+++ b/hw/gpio/omap_gpio.c
@@ -XXX,XX +XXX,XX @@ static void omap2_gpio_module_write(void *opaque, hwaddr addr,
     }
 }

-static uint32_t omap2_gpio_module_readp(void *opaque, hwaddr addr)
+static uint64_t omap2_gpio_module_readp(void *opaque, hwaddr addr,
+                                        unsigned size)
 {
     return omap2_gpio_module_read(opaque, addr & ~3) >> ((addr & 3) << 3);
 }

 static void omap2_gpio_module_writep(void *opaque, hwaddr addr,
-                                     uint32_t value)
+                                     uint64_t value, unsigned size)
 {
     uint32_t cur = 0;
     uint32_t mask = 0xffff;

+    if (size == 4) {
+        omap2_gpio_module_write(opaque, addr, value);
+        return;
+    }
+
     switch (addr & ~3) {
     case 0x00:    /* GPIO_REVISION */
     case 0x14:    /* GPIO_SYSSTATUS */
@@ -XXX,XX +XXX,XX @@ static void omap2_gpio_module_writep(void *opaque, hwaddr addr,
 }

 static const MemoryRegionOps omap2_gpio_module_ops = {
-    .old_mmio = {
-        .read = {
-            omap2_gpio_module_readp,
-            omap2_gpio_module_readp,
-            omap2_gpio_module_read,
-        },
-        .write = {
-            omap2_gpio_module_writep,
-            omap2_gpio_module_writep,
-            omap2_gpio_module_write,
-        },
-    },
+    .read = omap2_gpio_module_readp,
+    .write = omap2_gpio_module_writep,
+    .valid.min_access_size = 1,
+    .valid.max_access_size = 4,
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
-- 
2.7.4

The switch_mode() function is defined in target/arm/helper.c and used
only in that file and nowhere else, so we can make it file-local
rather than global.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
---
 target/arm/internals.h | 1 -
 target/arm/helper.c | 6 ++++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
     g_assert_not_reached();
 }

-void switch_mode(CPUARMState *, int);
 void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
 void arm_translate_init(void);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
                                 V8M_SAttributes *sattrs);
 #endif

+static void switch_mode(CPUARMState *env, int mode);
+
 static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
 {
     int nregs;
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
     return 0;
 }

-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);

@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)

 #else

-void switch_mode(CPUARMState *env, int mode)
+static void switch_mode(CPUARMState *env, int mode)
 {
     int old_mode;
     int i;
-- 
2.19.1
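The aarch32_mode_name() helper added above makes the log lines self-describing; a hypothetical standalone use, with a made-up PSR value:

    uint32_t psr = 0x000001d3;    /* M[4:0] = 0b10011: Supervisor mode */
    /* aarch32_mode_name() indexes its table with psr & 0xf,
     * so this prints "svc".
     */
    qemu_log_mask(CPU_LOG_INT, "mode %s\n", aarch32_mode_name(psr));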
The HCR.FB virtualization configuration register bit requests that
TLB maintenance, branch predictor invalidate-all and icache
invalidate-all operations performed in NS EL1 should be upgraded
from "local CPU only" to "broadcast within Inner Shareable domain".
For QEMU we NOP the branch predictor and icache operations, so
we only need to upgrade the TLB invalidates:
 AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
         ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
 AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
         TLBI VALE1, TLBI VAALE1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
 1 file changed, 116 insertions(+), 75 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     raw_write(env, ri, value);
 }

-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate all (TLBIALL) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate by ASID (TLBIASID) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
 /* IS variants of TLB operations must affect all cores */
 static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
 }

+/*
+ * Non-IS variants of TLB operations are upgraded to
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
+ * force broadcast of these operations.
+ */
+static bool tlb_force_broadcast(CPUARMState *env)
+{
+    return (env->cp15.hcr_el2 & HCR_FB) &&
+        arm_current_el(env) == 1 && !arm_is_secure_below_el3(env);
+}
+
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate all (TLBIALL) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiall_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimva_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate by ASID (TLBIASID) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiasid_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimvaa_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
 static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
  * Page D4-1736 (DDI0487A.b)
  */

-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                    uint64_t value)
-{
-    CPUState *cs = ENV_GET_CPU(env);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S1SE1 |
-                            ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S12NSE1 |
-                            ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                       uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }

+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    CPUState *cs = ENV_GET_CPU(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vmalle1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S1SE1 |
+                            ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S12NSE1 |
+                            ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
 }

-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                 uint64_t value)
-{
-    /* Invalidate by VA, EL1&0 (AArch64 version).
-     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
-     * since we don't support flush-for-specific-ASID-only or
-     * flush-last-level-only.
-     */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-    CPUState *cs = CPU(cpu);
-    uint64_t pageaddr = sextract64(value << 12, 0, 56);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S1SE1 |
-                                 ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S12NSE1 |
-                                 ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }

+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                 uint64_t value)
+{
+    /* Invalidate by VA, EL1&0 (AArch64 version).
+     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+    uint64_t pageaddr = sextract64(value << 12, 0, 56);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vae1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S1SE1 |
+                                 ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S12NSE1 |
+                                 ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t value)
 {
-- 
2.19.1
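Every conversion in the patch above follows the same shape; reduced to a sketch with a placeholder op name (tlbi_op_write and tlbi_op_is_write are illustrative, not real functions):

    static void tlbi_op_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
    {
        if (tlb_force_broadcast(env)) {
            /* upgrade to the Inner Shareable (broadcast) variant */
            tlbi_op_is_write(env, NULL, value);
            return;
        }
        /* ...the original local-CPU-only invalidate... */
    }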
Instead of looking up the pending priority
in nvic_pending_prio(), cache it in a new state struct
field. The calculation of the pending priority given
the interrupt number is more complicated in v8M with
the security extension, so the caching will be worthwhile.

This changes nvic_pending_prio() from returning a full
(group + subpriority) priority value to returning a group
priority. This doesn't require changes to its callsites
because we use it only in comparisons of the form
execution_prio > nvic_pending_prio()
and execution priority is always a group priority, so
a test (exec prio > full prio) is true if and only if
(exec prio > group prio).

(Architecturally the expected comparison is with the
group priority for this sort of "would we preempt" test;
we were only doing a test with a full priority as an
optimisation to avoid the mask, which is possible
precisely because the two comparisons always give the
same answer.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-5-git-send-email-peter.maydell@linaro.org
---
 include/hw/intc/armv7m_nvic.h | 2 ++
 hw/intc/armv7m_nvic.c | 23 +++++++++++++----------
 hw/intc/trace-events | 2 +-
 3 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
  *  - vectpending
  *  - vectpending_is_secure
  *  - exception_prio
+ *  - vectpending_prio
  */
 unsigned int vectpending; /* highest prio pending enabled exception */
 /* true if vectpending is a banked secure exception, ie it is in
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
  */
 bool vectpending_is_s_banked;
 int exception_prio; /* group prio of the highest prio active exception */
+int vectpending_prio; /* group prio of the exception in vectpending */

 MemoryRegion sysregmem;
 MemoryRegion sysreg_ns_mem;
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static const uint8_t nvic_id[] = {

 static int nvic_pending_prio(NVICState *s)
 {
-    /* return the priority of the current pending interrupt,
+    /* return the group priority of the current pending interrupt,
      * or NVIC_NOEXC_PRIO if no interrupt is pending
      */
-    return s->vectpending ? s->vectors[s->vectpending].prio : NVIC_NOEXC_PRIO;
+    return s->vectpending_prio;
 }

 /* Return the value of the ISCR RETTOBASE bit:
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
         active_prio &= nvic_gprio_mask(s);
     }

+    if (pend_prio > 0) {
+        pend_prio &= nvic_gprio_mask(s);
+    }
+
     s->vectpending = pend_irq;
+    s->vectpending_prio = pend_prio;
     s->exception_prio = active_prio;

-    trace_nvic_recompute_state(s->vectpending, s->exception_prio);
+    trace_nvic_recompute_state(s->vectpending,
+                               s->vectpending_prio,
+                               s->exception_prio);
 }

 /* Return the current execution priority of the CPU
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
     CPUARMState *env = &s->cpu->env;
     const int pending = s->vectpending;
     const int running = nvic_exec_prio(s);
-    int pendgroupprio;
     VecInfo *vec;

     assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
     assert(vec->enabled);
     assert(vec->pending);

-    pendgroupprio = vec->prio;
-    if (pendgroupprio > 0) {
-        pendgroupprio &= nvic_gprio_mask(s);
-    }
-
-    assert(pendgroupprio < running);
+    assert(s->vectpending_prio < running);

-    trace_nvic_acknowledge_irq(pending, vec->prio);
+    trace_nvic_acknowledge_irq(pending, s->vectpending_prio);

     vec->active = 1;
     vec->pending = 0;
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
     s->exception_prio = NVIC_NOEXC_PRIO;
     s->vectpending = 0;
     s->vectpending_is_s_banked = false;
+    s->vectpending_prio = NVIC_NOEXC_PRIO;
 }

 static void nvic_systick_trigger(void *opaque, int n, int level)
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -XXX,XX +XXX,XX @@ gicv3_redist_set_irq(uint32_t cpu, int irq, int level) "GICv3 redistributor 0x%x
 gicv3_redist_send_sgi(uint32_t cpu, int irq) "GICv3 redistributor 0x%x pending SGI %d"

 # hw/intc/armv7m_nvic.c
-nvic_recompute_state(int vectpending, int exception_prio) "NVIC state recomputed: vectpending %d exception_prio %d"
+nvic_recompute_state(int vectpending, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d vectpending_prio %d exception_prio %d"
 nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
 nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
 nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
-- 
2.7.4

The HCR.DC virtualization configuration register bit has the
following effects:
 * SCTLR.M behaves as if it is 0 for all purposes except
   direct reads of the bit
 * HCR.VM behaves as if it is 1 for all purposes except
   direct reads of the bit
 * the memory type produced by the first stage of the EL1&EL0
   translation regime is Normal Non-Shareable,
   Inner Write-Back Read-Allocate Write-Allocate,
   Outer Write-Back Read-Allocate Write-Allocate.

Implement this behaviour.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
---
 target/arm/helper.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
      * * The Non-secure TTBCR.EAE bit is set to 1
      * * The implementation includes EL2, and the value of HCR.VM is 1
      *
+     * (Note that HCR.DC makes HCR.VM behave as if it is 1.)
+     *
      * ATS1Hx always uses the 64bit format (not supported yet).
      */
     format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);

     if (arm_feature(env, ARM_FEATURE_EL2)) {
         if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
-            format64 |= env->cp15.hcr_el2 & HCR_VM;
+            format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
         } else {
             format64 |= arm_current_el(env) == 2;
         }
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
     }

     if (mmu_idx == ARMMMUIdx_S2NS) {
-        return (env->cp15.hcr_el2 & HCR_VM) == 0;
+        /* HCR.DC means HCR.VM behaves as 1 */
+        return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
     }

     if (env->cp15.hcr_el2 & HCR_TGE) {
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
         }
     }

+    if ((env->cp15.hcr_el2 & HCR_DC) &&
+        (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
+        /* HCR.DC means SCTLR_EL1.M behaves as 0 */
+        return true;
+    }
+
     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
 }

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,

     /* Combine the S1 and S2 cache attributes, if needed */
     if (!ret && cacheattrs != NULL) {
+        if (env->cp15.hcr_el2 & HCR_DC) {
+            /*
+             * HCR.DC forces the first stage attributes to
+             *  Normal Non-Shareable,
+             *  Inner Write-Back Read-Allocate Write-Allocate,
+             *  Outer Write-Back Read-Allocate Write-Allocate.
+             */
+            cacheattrs->attrs = 0xff;
+            cacheattrs->shareability = 0;
+        }
         *cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
     }

-- 
2.19.1
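The group-priority masking that nvic_recompute_state() now applies to the pending priority works like this (hypothetical PRIGROUP value, purely for illustration):

    /* With AIRCR.PRIGROUP = 5, nvic_gprio_mask() yields ~0 << 6 = ...c0,
     * so a raw priority of 0x55 has group priority 0x40; the low bits
     * are subpriority and never affect preemption decisions.
     */
    int group_prio = 0x55 & (~0 << (5 + 1));    /* == 0x40 */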
Update nvic_exec_prio() to support the v8M changes:
 * BASEPRI, FAULTMASK and PRIMASK are all banked
 * AIRCR.PRIS can affect NS priorities
 * AIRCR.BFHFNMINS affects FAULTMASK behaviour

These changes mean that it's no longer possible to
definitely say that if FAULTMASK is set it overrides
PRIMASK, and if PRIMASK is set it overrides BASEPRI
(since if PRIMASK_NS is set and AIRCR.PRIS is set then
whether that 0x80 priority should take effect or the
priority in BASEPRI_S depends on the value of BASEPRI_S,
for instance). So we switch to the same approach used
by the pseudocode of working through BASEPRI, PRIMASK
and FAULTMASK and overriding the previous values if
needed.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-16-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 51 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 42 insertions(+), 9 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
 static inline int nvic_exec_prio(NVICState *s)
 {
     CPUARMState *env = &s->cpu->env;
-    int running;
+    int running = NVIC_NOEXC_PRIO;

-    if (env->v7m.faultmask[env->v7m.secure]) {
-        running = -1;
-    } else if (env->v7m.primask[env->v7m.secure]) {
+    if (env->v7m.basepri[M_REG_NS] > 0) {
+        running = exc_group_prio(s, env->v7m.basepri[M_REG_NS], M_REG_NS);
+    }
+
+    if (env->v7m.basepri[M_REG_S] > 0) {
+        int basepri = exc_group_prio(s, env->v7m.basepri[M_REG_S], M_REG_S);
+        if (running > basepri) {
+            running = basepri;
+        }
+    }
+
+    if (env->v7m.primask[M_REG_NS]) {
+        if (env->v7m.aircr & R_V7M_AIRCR_PRIS_MASK) {
+            if (running > NVIC_NS_PRIO_LIMIT) {
+                running = NVIC_NS_PRIO_LIMIT;
+            }
+        } else {
+            running = 0;
+        }
+    }
+
+    if (env->v7m.primask[M_REG_S]) {
         running = 0;
-    } else if (env->v7m.basepri[env->v7m.secure] > 0) {
-        running = env->v7m.basepri[env->v7m.secure] &
-            nvic_gprio_mask(s, env->v7m.secure);
-    } else {
-        running = NVIC_NOEXC_PRIO; /* lower than any possible priority */
     }
+
+    if (env->v7m.faultmask[M_REG_NS]) {
+        if (env->v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
+            running = -1;
+        } else {
+            if (env->v7m.aircr & R_V7M_AIRCR_PRIS_MASK) {
+                if (running > NVIC_NS_PRIO_LIMIT) {
+                    running = NVIC_NS_PRIO_LIMIT;
+                }
+            } else {
+                running = 0;
+            }
+        }
+    }
+
+    if (env->v7m.faultmask[M_REG_S]) {
+        running = (env->v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) ? -3 : -1;
+    }
+
     /* consider priority of active handler */
     return MIN(running, s->exception_prio);
 }
-- 
2.7.4

The A/I/F bits in ISR_EL1 should track the virtual interrupt
status, not the physical interrupt status, if the associated
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
always showing the physical interrupt status.

We don't currently implement anything to do with external
aborts, so this applies only to the I and F bits (though it
ought to be possible for the outer guest to present a virtual
external abort to the inner guest, even if QEMU doesn't
emulate physical external aborts, so there is missing
functionality in this area).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
---
 target/arm/helper.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
     CPUState *cs = ENV_GET_CPU(env);
     uint64_t ret = 0;

-    if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
-        ret |= CPSR_I;
+    if (arm_hcr_el2_imo(env)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+            ret |= CPSR_I;
+        }
+    } else {
+        if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
+            ret |= CPSR_I;
+        }
     }
-    if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
-        ret |= CPSR_F;
+
+    if (arm_hcr_el2_fmo(env)) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+            ret |= CPSR_F;
+        }
+    } else {
+        if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
+            ret |= CPSR_F;
+        }
     }
+
     /* External aborts are not possible in QEMU so A bit is always clear */
     return ret;
 }
-- 
2.19.1
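Restating one bit of the isr_read() change above as a sketch: with HCR_EL2.IMO set, the I bit reflects the virtual rather than the physical IRQ line (VIRQ and VFIQ live in cs->interrupt_request):

    bool i_bit = arm_hcr_el2_imo(env)
               ? !!(cs->interrupt_request & CPU_INTERRUPT_VIRQ)
               : !!(cs->interrupt_request & CPU_INTERRUPT_HARD);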
Make the set_prio() function take a bool indicating
whether to pend the secure or non-secure version of a banked
interrupt, and use this to implement the correct banking
semantics for the SHPR registers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-11-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 96 ++++++++++++++++++++++++++++++++++++++++++++++-----
 hw/intc/trace-events | 2 +-
 2 files changed, 88 insertions(+), 10 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_raw_execution_priority(void *opaque)
     return s->exception_prio;
 }

-/* caller must call nvic_irq_update() after this */
-static void set_prio(NVICState *s, unsigned irq, uint8_t prio)
+/* caller must call nvic_irq_update() after this.
+ * secure indicates the bank to use for banked exceptions (we assert if
+ * we are passed secure=true for a non-banked exception).
+ */
+static void set_prio(NVICState *s, unsigned irq, bool secure, uint8_t prio)
 {
     assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
     assert(irq < s->num_irq);

-    s->vectors[irq].prio = prio;
+    if (secure) {
+        assert(exc_is_banked(irq));
+        s->sec_vectors[irq].prio = prio;
+    } else {
+        s->vectors[irq].prio = prio;
+    }
+
+    trace_nvic_set_prio(irq, secure, prio);
+}
+
+/* Return the current raw priority register value.
+ * secure indicates the bank to use for banked exceptions (we assert if
+ * we are passed secure=true for a non-banked exception).
+ */
+static int get_prio(NVICState *s, unsigned irq, bool secure)
+{
+    assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
+    assert(irq < s->num_irq);

-    trace_nvic_set_prio(irq, prio);
+    if (secure) {
+        assert(exc_is_banked(irq));
+        return s->sec_vectors[irq].prio;
+    } else {
+        return s->vectors[irq].prio;
+    }
 }

 /* Recompute state and assert irq line accordingly.
@@ -XXX,XX +XXX,XX @@ static bool nvic_user_access_ok(NVICState *s, hwaddr offset, MemTxAttrs attrs)
     }
 }

+static int shpr_bank(NVICState *s, int exc, MemTxAttrs attrs)
+{
+    /* Behaviour for the SHPR register field for this exception:
+     * return M_REG_NS to use the nonsecure vector (including for
+     * non-banked exceptions), M_REG_S for the secure version of
+     * a banked exception, and -1 if this field should RAZ/WI.
+     */
+    switch (exc) {
+    case ARMV7M_EXCP_MEM:
+    case ARMV7M_EXCP_USAGE:
+    case ARMV7M_EXCP_SVC:
+    case ARMV7M_EXCP_PENDSV:
+    case ARMV7M_EXCP_SYSTICK:
+        /* Banked exceptions */
+        return attrs.secure;
+    case ARMV7M_EXCP_BUS:
+        /* Not banked, RAZ/WI from nonsecure if BFHFNMINS is zero */
+        if (!attrs.secure &&
+            !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
+            return -1;
+        }
+        return M_REG_NS;
+    case ARMV7M_EXCP_SECURE:
+        /* Not banked, RAZ/WI from nonsecure */
+        if (!attrs.secure) {
+            return -1;
+        }
+        return M_REG_NS;
+    case ARMV7M_EXCP_DEBUG:
+        /* Not banked. TODO should RAZ/WI if DEMCR.SDME is set */
+        return M_REG_NS;
+    case 8 ... 10:
+    case 13:
+        /* RES0 */
+        return -1;
+    default:
+        /* Not reachable due to decode of SHPR register addresses */
+        g_assert_not_reached();
+    }
+}
+
 static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
                                     uint64_t *data, unsigned size,
                                     MemTxAttrs attrs)
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
             }
         }
         break;
-    case 0xd18 ... 0xd23: /* System Handler Priority. */
+    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
         val = 0;
         for (i = 0; i < size; i++) {
-            val |= s->vectors[(offset - 0xd14) + i].prio << (i * 8);
+            unsigned hdlidx = (offset - 0xd14) + i;
+            int sbank = shpr_bank(s, hdlidx, attrs);
+
+            if (sbank < 0) {
+                continue;
+            }
+            val = deposit32(val, i * 8, 8, get_prio(s, hdlidx, sbank));
         }
         break;
     case 0xfe0 ... 0xfff: /* ID. */
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,

         for (i = 0; i < size && startvec + i < s->num_irq; i++) {
             if (attrs.secure || s->itns[startvec + i]) {
-                set_prio(s, startvec + i, (value >> (i * 8)) & 0xff);
+                set_prio(s, startvec + i, false, (value >> (i * 8)) & 0xff);
             }
         }
         nvic_irq_update(s);
         return MEMTX_OK;
-    case 0xd18 ... 0xd23: /* System Handler Priority. */
+    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
         for (i = 0; i < size; i++) {
             unsigned hdlidx = (offset - 0xd14) + i;
-            set_prio(s, hdlidx, (value >> (i * 8)) & 0xff);
+            int newprio = extract32(value, i * 8, 8);
+            int sbank = shpr_bank(s, hdlidx, attrs);
+
+            if (sbank < 0) {
+                continue;
+            }
+            set_prio(s, hdlidx, sbank, newprio);
         }
         nvic_irq_update(s);
         return MEMTX_OK;
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -XXX,XX +XXX,XX @@ gicv3_redist_send_sgi(uint32_t cpu, int irq) "GICv3 redistributor 0x%x pending S
 # hw/intc/armv7m_nvic.c
 nvic_recompute_state(int vectpending, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d vectpending_prio %d exception_prio %d"
 nvic_recompute_state_secure(int vectpending, bool vectpending_is_s_banked, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d is_s_banked %d vectpending_prio %d exception_prio %d"
-nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
+nvic_set_prio(int irq, bool secure, uint8_t prio) "NVIC set irq %d secure-bank %d priority %d"
 nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
 nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
 nvic_escalate_disabled(int irq) "NVIC escalating irq %d to HardFault: disabled"
-- 
2.7.4

The HCR_EL2 VI and VF bits are supposed to track whether there is
a pending virtual IRQ or virtual FIQ. For QEMU we store the
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
 * if the register is read we must get these bit values from
   cs->interrupt_request
 * if the register is written then we must write the bit
   values back into cs->interrupt_request

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
---
 target/arm/helper.c | 47 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 43 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 {
     ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = ENV_GET_CPU(env);
     uint64_t valid_mask = HCR_MASK;

     if (arm_feature(env, ARM_FEATURE_EL3)) {
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     /* Clear RES0 bits. */
     value &= valid_mask;

+    /*
+     * VI and VF are kept in cs->interrupt_request. Modifying that
+     * requires that we have the iothread lock, which is done by
+     * marking the reginfo structs as ARM_CP_IO.
+     * Note that if a write to HCR pends a VIRQ or VFIQ it is never
+     * possible for it to be taken immediately, because VIRQ and
+     * VFIQ are masked unless running at EL0 or EL1, and HCR
+     * can only be written at EL2.
+     */
+    g_assert(qemu_mutex_iothread_locked());
+    if (value & HCR_VI) {
+        cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
+    } else {
+        cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
+    }
+    if (value & HCR_VF) {
+        cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
+    } else {
+        cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
+    }
+    value &= ~(HCR_VI | HCR_VF);
+
     /* These bits change the MMU setup:
      * HCR_VM enables stage 2 translation
      * HCR_PTW forbids certain page-table setups
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
     hcr_write(env, NULL, value);
 }

+static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* The VI and VF bits live in cs->interrupt_request */
+    uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
+    CPUState *cs = ENV_GET_CPU(env);
+
+    if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
+        ret |= HCR_VI;
+    }
+    if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
+        ret |= HCR_VF;
+    }
+    return ret;
+}
+
 static const ARMCPRegInfo el2_cp_reginfo[] = {
     { .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
+      .type = ARM_CP_IO,
       .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
-      .writefn = hcr_write },
+      .writefn = hcr_write, .readfn = hcr_read },
     { .name = "HCR", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
-      .writefn = hcr_writelow },
+      .writefn = hcr_writelow, .readfn = hcr_read },
     { .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
       .type = ARM_CP_ALIAS,
       .opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {

 static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
     { .name = "HCR2", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_ALIAS,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
       .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
       .access = PL2_RW,
       .fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
-- 
2.19.1
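The invariant established by the patch above, written out as a sketch (assuming EL2 context and the iothread lock held, as hcr_write() requires):

    hcr_write(env, NULL, env->cp15.hcr_el2 | HCR_VI);
    /* the virtual IRQ is now pending ... */
    assert(cs->interrupt_request & CPU_INTERRUPT_VIRQ);
    /* ... and a read reconstructs the VI bit from that state */
    assert(hcr_read(env, NULL) & HCR_VI);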
The ICSR NVIC register is banked for v8M. This doesn't
require any new state, but it does mean that some bits
are controlled by BFHFNMINS and some bits must work
with the correct banked exception. There is also a
PENDNMICLR bit, new in v8M.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-18-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 45 ++++++++++++++++++++++++++++++++-------------
 1 file changed, 32 insertions(+), 13 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
     }
     case 0xd00: /* CPUID Base. */
         return cpu->midr;
-    case 0xd04: /* Interrupt Control State. */
+    case 0xd04: /* Interrupt Control State (ICSR) */
         /* VECTACTIVE */
         val = cpu->env.v7m.exception;
         /* VECTPENDING */
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         if (nvic_rettobase(s)) {
             val |= (1 << 11);
         }
-        /* PENDSTSET */
-        if (s->vectors[ARMV7M_EXCP_SYSTICK].pending) {
-            val |= (1 << 26);
-        }
-        /* PENDSVSET */
-        if (s->vectors[ARMV7M_EXCP_PENDSV].pending) {
-            val |= (1 << 28);
+        if (attrs.secure) {
+            /* PENDSTSET */
+            if (s->sec_vectors[ARMV7M_EXCP_SYSTICK].pending) {
+                val |= (1 << 26);
+            }
+            /* PENDSVSET */
+            if (s->sec_vectors[ARMV7M_EXCP_PENDSV].pending) {
+                val |= (1 << 28);
+            }
+        } else {
+            /* PENDSTSET */
+            if (s->vectors[ARMV7M_EXCP_SYSTICK].pending) {
+                val |= (1 << 26);
+            }
+            /* PENDSVSET */
+            if (s->vectors[ARMV7M_EXCP_PENDSV].pending) {
+                val |= (1 << 28);
+            }
         }
         /* NMIPENDSET */
-        if (s->vectors[ARMV7M_EXCP_NMI].pending) {
+        if ((cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) &&
+            s->vectors[ARMV7M_EXCP_NMI].pending) {
             val |= (1 << 31);
         }
-        /* ISRPREEMPT not implemented */
+        /* ISRPREEMPT: RES0 when halting debug not implemented */
+        /* STTNS: RES0 for the Main Extension */
         return val;
     case 0xd08: /* Vector Table Offset. */
         return cpu->env.v7m.vecbase[attrs.secure];
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         nvic_irq_update(s);
         break;
     }
-    case 0xd04: /* Interrupt Control State. */
-        if (value & (1 << 31)) {
-            armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);
+    case 0xd04: /* Interrupt Control State (ICSR) */
+        if (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
+            if (value & (1 << 31)) {
+                armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);
+            } else if (value & (1 << 30) &&
+                       arm_feature(&cpu->env, ARM_FEATURE_V8)) {
+                /* PENDNMICLR didn't exist in v7M */
+                armv7m_nvic_clear_pending(s, ARMV7M_EXCP_NMI, false);
+            }
         }
         if (value & (1 << 28)) {
             armv7m_nvic_set_pending(s, ARMV7M_EXCP_PENDSV, attrs.secure);
-- 
2.7.4

If the HCR_EL2 PTW virtualization configuration register bit
is set, then this means that a stage 2 Permission fault must
be generated if a stage 1 translation table access is made
to an address that is mapped as Device memory in stage 2.
Implement this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
---
 target/arm/helper.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
         hwaddr s2pa;
         int s2prot;
         int ret;
+        ARMCacheAttrs cacheattrs = {};
+        ARMCacheAttrs *pcacheattrs = NULL;
+
+        if (env->cp15.hcr_el2 & HCR_PTW) {
+            /*
+             * PTW means we must fault if this S1 walk touches S2 Device
+             * memory; otherwise we don't care about the attributes and can
+             * save the S2 translation the effort of computing them.
+             */
+            pcacheattrs = &cacheattrs;
+        }

         ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
-                                 &txattrs, &s2prot, &s2size, fi, NULL);
+                                 &txattrs, &s2prot, &s2size, fi, pcacheattrs);
         if (ret) {
             assert(fi->type != ARMFault_None);
             fi->s2addr = addr;
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             fi->s1ptw = true;
             return ~0;
         }
+        if (pcacheattrs && (pcacheattrs->attrs & 0xf0) == 0) {
+            /* Access was to Device memory: generate Permission fault */
+            fi->type = ARMFault_Permission;
+            fi->s2addr = addr;
+            fi->stage2 = true;
+            fi->s1ptw = true;
+            return ~0;
+        }
         addr = s2pa;
     }
     return addr;
-- 
2.19.1

If AIRCR.BFHFNMINS is clear, then although NonSecure HardFault
can still be pended via SHCSR.HARDFAULTPENDED it mustn't actually
preempt execution. The simple way to achieve this is to clear the
enable bit for it, since the enable bit isn't guest visible.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-15-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
                 (R_V7M_AIRCR_SYSRESETREQS_MASK |
                  R_V7M_AIRCR_BFHFNMINS_MASK |
                  R_V7M_AIRCR_PRIS_MASK);
-            /* BFHFNMINS changes the priority of Secure HardFault */
+            /* BFHFNMINS changes the priority of Secure HardFault, and
+             * allows a pending Non-secure HardFault to preempt (which
+             * we implement by marking it enabled).
+             */
             if (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
                 s->sec_vectors[ARMV7M_EXCP_HARD].prio = -3;
+                s->vectors[ARMV7M_EXCP_HARD].enabled = 1;
             } else {
                 s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
+                s->vectors[ARMV7M_EXCP_HARD].enabled = 0;
             }
         }
         nvic_irq_update(s);
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
     NVICState *s = NVIC(dev);

     s->vectors[ARMV7M_EXCP_NMI].enabled = 1;
-    s->vectors[ARMV7M_EXCP_HARD].enabled = 1;
     /* MEM, BUS, and USAGE are enabled through
      * the System Handler Control register
      */
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)

     /* AIRCR.BFHFNMINS resets to 0 so Secure HF is priority -1 (R_CMTC) */
     s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
+    /* If AIRCR.BFHFNMINS is 0 then NS HF is (effectively) disabled */
+    s->vectors[ARMV7M_EXCP_HARD].enabled = 0;
 }

     /* Strictly speaking the reset handler should be enabled.
-- 
2.7.4

Create and use a utility function to extract the EC field
from a syndrome, rather than open-coding the shift.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
---
 target/arm/internals.h | 5 +++++
 target/arm/helper.c | 4 ++--
 target/arm/kvm64.c | 2 +-
 target/arm/op_helper.c | 2 +-
 4 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
 #define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
 #define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)

+static inline uint32_t syn_get_ec(uint32_t syn)
+{
+    return syn >> ARM_EL_EC_SHIFT;
+}
+
 /* Utility functions for constructing various kinds of syndrome value.
  * Note that in general we follow the AArch64 syndrome values; in a
  * few cases the value in HSR for exceptions taken to AArch32 Hyp
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
     uint32_t moe;

     /* If this is a debug exception we must update the DBGDSCR.MOE bits */
-    switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
+    switch (syn_get_ec(env->exception.syndrome)) {
     case EC_BREAKPOINT:
     case EC_BREAKPOINT_SAME_EL:
         moe = 1;
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
     if (qemu_loglevel_mask(CPU_LOG_INT)
         && !excp_is_internal(cs->exception_index)) {
         qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
-                      env->exception.syndrome >> ARM_EL_EC_SHIFT,
+                      syn_get_ec(env->exception.syndrome),
                       env->exception.syndrome);
     }

diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)

 bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
 {
-    int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
+    int hsr_ec = syn_get_ec(debug_exit->hsr);
     ARMCPU *cpu = ARM_CPU(cs);
     CPUClass *cc = CPU_GET_CLASS(cs);
     CPUARMState *env = &cpu->env;
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
          * (see DDI0478C.a D1.10.4)
          */
         target_el = 2;
-        if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
+        if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
             syndrome = syn_uncategorized();
         }
     }
-- 
2.19.1
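The syn_get_ec() helper introduced above simply isolates the 6-bit exception
class from a syndrome word. A minimal standalone sketch of the same pattern,
using the ARM_EL_* shift values visible in the hunk above (the example
syndrome value is hypothetical):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define ARM_EL_EC_SHIFT 26          /* EC field occupies syndrome bits [31:26] */
#define ARM_EL_IL       (1u << 25)  /* 32-bit instruction length bit */

/* Same shape as the new helper: shift the EC field down to bit 0. */
static inline uint32_t syn_get_ec(uint32_t syn)
{
    return syn >> ARM_EL_EC_SHIFT;
}

int main(void)
{
    uint32_t syn = (0x07u << ARM_EL_EC_SHIFT) | ARM_EL_IL; /* hypothetical syndrome */
    printf("EC=0x%" PRIx32 " IL=%u\n", syn_get_ec(syn),
           (syn & ARM_EL_IL) ? 1u : 0u);
    return 0;
}

Every open-coded "syndrome >> ARM_EL_EC_SHIFT" then becomes a call to this
one helper, which is what the mechanical replacements above do.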
1
In v8M the MSR and MRS instructions have extra register value
1
For the v7 version of the Arm architecture, the IL bit in
2
encodings to allow secure code to access the non-secure banked
2
syndrome register values where the field is not valid was
3
versions of various special registers.
3
defined to be UNK/SBZP. In v8 this is RES1, which is what
4
QEMU currently implements. Handle the desired v7 behaviour
5
by squashing the IL bit for the affected cases:
6
* EC == EC_UNCATEGORIZED
7
* prefetch aborts
8
* data aborts where ISV is 0
4
9
5
(We don't implement the MSPLIM_NS or PSPLIM_NS aliases, because
10
(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
6
we don't currently implement the stack limit registers at all.)
11
section G7.2.70, "illegal state exception", can't happen
12
on a v7 CPU.)
13
14
This deals with a corner case noted in a comment.
7
15
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 1505240046-11454-2-git-send-email-peter.maydell@linaro.org
18
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
11
---
19
---
12
target/arm/helper.c | 110 ++++++++++++++++++++++++++++++++++++++++++++++++++++
20
target/arm/internals.h | 7 ++-----
13
1 file changed, 110 insertions(+)
21
target/arm/helper.c | 13 +++++++++++++
22
2 files changed, 15 insertions(+), 5 deletions(-)
14
23
24
diff --git a/target/arm/internals.h b/target/arm/internals.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/internals.h
27
+++ b/target/arm/internals.h
28
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
29
/* Utility functions for constructing various kinds of syndrome value.
30
* Note that in general we follow the AArch64 syndrome values; in a
31
* few cases the value in HSR for exceptions taken to AArch32 Hyp
32
- * mode differs slightly, so if we ever implemented Hyp mode then the
33
- * syndrome value would need some massaging on exception entry.
34
- * (One example of this is that AArch64 defaults to IL bit set for
35
- * exceptions which don't specifically indicate information about the
36
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
37
+ * mode differs slightly, and we fix this up when populating HSR in
38
+ * arm_cpu_do_interrupt_aarch32_hyp().
39
*/
40
static inline uint32_t syn_uncategorized(void)
41
{
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
42
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
44
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
45
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
46
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
20
break;
21
case 20: /* CONTROL */
22
return env->v7m.control[env->v7m.secure];
23
+ case 0x94: /* CONTROL_NS */
24
+ /* We have to handle this here because unprivileged Secure code
25
+ * can read the NS CONTROL register.
26
+ */
27
+ if (!env->v7m.secure) {
28
+ return 0;
29
+ }
30
+ return env->v7m.control[M_REG_NS];
31
}
47
}
32
48
33
if (el == 0) {
49
if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
34
return 0; /* unprivileged reads others as zero */
50
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
35
}
51
+ /*
36
52
+ * QEMU syndrome values are v8-style. v7 has the IL bit
37
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
53
+ * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
38
+ switch (reg) {
54
+ * If this is a v7 CPU, squash the IL bit in those cases.
39
+ case 0x88: /* MSP_NS */
40
+ if (!env->v7m.secure) {
41
+ return 0;
42
+ }
43
+ return env->v7m.other_ss_msp;
44
+ case 0x89: /* PSP_NS */
45
+ if (!env->v7m.secure) {
46
+ return 0;
47
+ }
48
+ return env->v7m.other_ss_psp;
49
+ case 0x90: /* PRIMASK_NS */
50
+ if (!env->v7m.secure) {
51
+ return 0;
52
+ }
53
+ return env->v7m.primask[M_REG_NS];
54
+ case 0x91: /* BASEPRI_NS */
55
+ if (!env->v7m.secure) {
56
+ return 0;
57
+ }
58
+ return env->v7m.basepri[M_REG_NS];
59
+ case 0x93: /* FAULTMASK_NS */
60
+ if (!env->v7m.secure) {
61
+ return 0;
62
+ }
63
+ return env->v7m.faultmask[M_REG_NS];
64
+ case 0x98: /* SP_NS */
65
+ {
66
+ /* This gives the non-secure SP selected based on whether we're
67
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
68
+ */
55
+ */
69
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
56
+ if (cs->exception_index == EXCP_PREFETCH_ABORT ||
70
+
57
+ (cs->exception_index == EXCP_DATA_ABORT &&
71
+ if (!env->v7m.secure) {
58
+ !(env->exception.syndrome & ARM_EL_ISV)) ||
72
+ return 0;
59
+ syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
73
+ }
60
+ env->exception.syndrome &= ~ARM_EL_IL;
74
+ if (!arm_v7m_is_handler_mode(env) && spsel) {
75
+ return env->v7m.other_ss_psp;
76
+ } else {
77
+ return env->v7m.other_ss_msp;
78
+ }
61
+ }
79
+ }
62
+ }
80
+ default:
63
env->cp15.esr_el[2] = env->exception.syndrome;
81
+ break;
82
+ }
83
+ }
84
+
85
switch (reg) {
86
case 8: /* MSP */
87
return (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_SPSEL_MASK) ?
88
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
89
return;
90
}
64
}
91
65
92
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
93
+ switch (reg) {
94
+ case 0x88: /* MSP_NS */
95
+ if (!env->v7m.secure) {
96
+ return;
97
+ }
98
+ env->v7m.other_ss_msp = val;
99
+ return;
100
+ case 0x89: /* PSP_NS */
101
+ if (!env->v7m.secure) {
102
+ return;
103
+ }
104
+ env->v7m.other_ss_psp = val;
105
+ return;
106
+ case 0x90: /* PRIMASK_NS */
107
+ if (!env->v7m.secure) {
108
+ return;
109
+ }
110
+ env->v7m.primask[M_REG_NS] = val & 1;
111
+ return;
112
+ case 0x91: /* BASEPRI_NS */
113
+ if (!env->v7m.secure) {
114
+ return;
115
+ }
116
+ env->v7m.basepri[M_REG_NS] = val & 0xff;
117
+ return;
118
+ case 0x93: /* FAULTMASK_NS */
119
+ if (!env->v7m.secure) {
120
+ return;
121
+ }
122
+ env->v7m.faultmask[M_REG_NS] = val & 1;
123
+ return;
124
+ case 0x98: /* SP_NS */
125
+ {
126
+ /* This gives the non-secure SP selected based on whether we're
127
+ * currently in handler mode or not, using the NS CONTROL.SPSEL.
128
+ */
129
+ bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
130
+
131
+ if (!env->v7m.secure) {
132
+ return;
133
+ }
134
+ if (!arm_v7m_is_handler_mode(env) && spsel) {
135
+ env->v7m.other_ss_psp = val;
136
+ } else {
137
+ env->v7m.other_ss_msp = val;
138
+ }
139
+ return;
140
+ }
141
+ default:
142
+ break;
143
+ }
144
+ }
145
+
146
switch (reg) {
147
case 0 ... 7: /* xPSR sub-fields */
148
/* only APSR is actually writable */
149
--
66
--
150
2.7.4
67
2.19.1
151
68
152
69
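Putting the v7 IL fixup above in one place: on a v7 CPU the IL bit must not
be reported for uncategorized exceptions, prefetch aborts, or data aborts
with ISV clear, so it is squashed before the syndrome is written to HSR.
A self-contained sketch of that logic (the EXCP_* enumerators here are
stand-ins for illustration, not QEMU's real values):

#include <stdbool.h>
#include <stdint.h>

#define ARM_EL_EC_SHIFT  26
#define ARM_EL_IL        (1u << 25)
#define ARM_EL_ISV       (1u << 24)
#define EC_UNCATEGORIZED 0x00

enum { EXCP_PREFETCH_ABORT = 1, EXCP_DATA_ABORT = 2 }; /* stand-in indexes */

/* Clear IL where v7 defines it as UNK/SBZP; v8 keeps it RES1. */
static uint32_t v7_squash_il(int excp, uint32_t syn, bool is_v8)
{
    if (!is_v8 &&
        (excp == EXCP_PREFETCH_ABORT ||
         (excp == EXCP_DATA_ABORT && !(syn & ARM_EL_ISV)) ||
         (syn >> ARM_EL_EC_SHIFT) == EC_UNCATEGORIZED)) {
        syn &= ~ARM_EL_IL;
    }
    return syn;
}

int main(void)
{
    /* A data abort with ISV clear: IL must read as zero on v7. */
    return v7_squash_il(EXCP_DATA_ABORT, ARM_EL_IL, false) == 0 ? 0 : 1;
}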
1
Update the nvic_recompute_state() code to handle the security
1
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
2
extension and its associated banked registers.
2
provided in HSR has more information than is reported to AArch64.
3
3
Specifically, there are extra fields TA and coproc which indicate
4
Code that uses the resulting cached state (ie the irq
4
whether the trapped instruction was FP or SIMD. Add this extra
5
acknowledge and complete code) will be updated in a later
5
information to the syndromes we construct, and mask it out when
6
commit.
6
taking the exception to AArch64.
7
7
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 1505240046-11454-9-git-send-email-peter.maydell@linaro.org
10
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
11
---
11
---
12
hw/intc/armv7m_nvic.c | 151 ++++++++++++++++++++++++++++++++++++++++++++++++--
12
target/arm/internals.h | 14 +++++++++++++-
13
hw/intc/trace-events | 1 +
13
target/arm/helper.c | 9 +++++++++
14
2 files changed, 147 insertions(+), 5 deletions(-)
14
target/arm/translate.c | 8 ++++----
15
3 files changed, 26 insertions(+), 5 deletions(-)
15
16
16
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/armv7m_nvic.c
19
--- a/target/arm/internals.h
19
+++ b/hw/intc/armv7m_nvic.c
20
+++ b/target/arm/internals.h
20
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
21
* (higher than the highest possible priority value)
22
* few cases the value in HSR for exceptions taken to AArch32 Hyp
23
* mode differs slightly, and we fix this up when populating HSR in
24
* arm_cpu_do_interrupt_aarch32_hyp().
25
+ * The exception is FP/SIMD access traps -- these report extra information
26
+ * when taking an exception to AArch32. For those we include the extra coproc
27
+ * and TA fields, and mask them out when taking the exception to AArch64.
22
*/
28
*/
23
#define NVIC_NOEXC_PRIO 0x100
29
static inline uint32_t syn_uncategorized(void)
24
+/* Maximum priority of non-secure exceptions when AIRCR.PRIS is set */
30
{
25
+#define NVIC_NS_PRIO_LIMIT 0x80
31
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
26
32
27
static const uint8_t nvic_id[] = {
33
static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
28
0x00, 0xb0, 0x1b, 0x00, 0x0d, 0xe0, 0x05, 0xb1
34
{
29
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
35
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
30
return false;
36
return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
31
}
37
| (is_16bit ? 0 : ARM_EL_IL)
32
38
- | (cv << 24) | (cond << 20);
33
+static bool exc_is_banked(int exc)
39
+ | (cv << 24) | (cond << 20) | 0xa;
34
+{
35
+ /* Return true if this is one of the limited set of exceptions which
36
+ * are banked (and thus have state in sec_vectors[])
37
+ */
38
+ return exc == ARMV7M_EXCP_HARD ||
39
+ exc == ARMV7M_EXCP_MEM ||
40
+ exc == ARMV7M_EXCP_USAGE ||
41
+ exc == ARMV7M_EXCP_SVC ||
42
+ exc == ARMV7M_EXCP_PENDSV ||
43
+ exc == ARMV7M_EXCP_SYSTICK;
44
+}
40
+}
45
+
41
+
46
/* Return a mask word which clears the subpriority bits from
42
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
47
* a priority value for an M-profile exception, leaving only
48
* the group priority.
49
*/
50
-static inline uint32_t nvic_gprio_mask(NVICState *s)
51
+static inline uint32_t nvic_gprio_mask(NVICState *s, bool secure)
52
+{
43
+{
53
+ return ~0U << (s->prigroup[secure] + 1);
44
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
54
+}
45
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
55
+
46
+ | (is_16bit ? 0 : ARM_EL_IL)
56
+static bool exc_targets_secure(NVICState *s, int exc)
47
+ | (cv << 24) | (cond << 20) | (1 << 5);
57
+{
48
}
58
+ /* Return true if this non-banked exception targets Secure state. */
49
59
+ if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
50
static inline uint32_t syn_sve_access_trap(void)
60
+ return false;
51
diff --git a/target/arm/helper.c b/target/arm/helper.c
61
+ }
52
index XXXXXXX..XXXXXXX 100644
62
+
53
--- a/target/arm/helper.c
63
+ if (exc >= NVIC_FIRST_IRQ) {
54
+++ b/target/arm/helper.c
64
+ return !s->itns[exc];
55
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
65
+ }
56
case EXCP_HVC:
66
+
57
case EXCP_HYP_TRAP:
67
+ /* Function shouldn't be called for banked exceptions. */
58
case EXCP_SMC:
68
+ assert(!exc_is_banked(exc));
59
+ if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
69
+
60
+ /*
70
+ switch (exc) {
61
+ * QEMU internal FP/SIMD syndromes from AArch32 include the
71
+ case ARMV7M_EXCP_NMI:
62
+ * TA and coproc fields which are only exposed if the exception
72
+ case ARMV7M_EXCP_BUS:
63
+ * is taken to AArch32 Hyp mode. Mask them out to get a valid
73
+ return !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
64
+ * AArch64 format syndrome.
74
+ case ARMV7M_EXCP_SECURE:
65
+ */
75
+ return true;
66
+ env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
76
+ case ARMV7M_EXCP_DEBUG:
77
+ /* TODO: controlled by DEMCR.SDME, which we don't yet implement */
78
+ return false;
79
+ default:
80
+ /* reset, and reserved (unused) low exception numbers.
81
+ * We'll get called by code that loops through all the exception
82
+ * numbers, but it doesn't matter what we return here as these
83
+ * non-existent exceptions will never be pended or active.
84
+ */
85
+ return true;
86
+ }
87
+}
88
+
89
+static int exc_group_prio(NVICState *s, int rawprio, bool targets_secure)
90
+{
91
+ /* Return the group priority for this exception, given its raw
92
+ * (group-and-subgroup) priority value and whether it is targeting
93
+ * secure state or not.
94
+ */
95
+ if (rawprio < 0) {
96
+ return rawprio;
97
+ }
98
+ rawprio &= nvic_gprio_mask(s, targets_secure);
99
+ /* AIRCR.PRIS causes us to squash all NS priorities into the
100
+ * lower half of the total range
101
+ */
102
+ if (!targets_secure &&
103
+ (s->cpu->env.v7m.aircr & R_V7M_AIRCR_PRIS_MASK)) {
104
+ rawprio = (rawprio >> 1) + NVIC_NS_PRIO_LIMIT;
105
+ }
106
+ return rawprio;
107
+}
108
+
109
+/* Recompute vectpending and exception_prio for a CPU which implements
110
+ * the Security extension
111
+ */
112
+static void nvic_recompute_state_secure(NVICState *s)
113
{
114
- return ~0U << (s->prigroup[M_REG_NS] + 1);
115
+ int i, bank;
116
+ int pend_prio = NVIC_NOEXC_PRIO;
117
+ int active_prio = NVIC_NOEXC_PRIO;
118
+ int pend_irq = 0;
119
+ bool pending_is_s_banked = false;
120
+
121
+ /* R_CQRV: precedence is by:
122
+ * - lowest group priority; if both the same then
123
+ * - lowest subpriority; if both the same then
124
+ * - lowest exception number; if both the same (ie banked) then
125
+ * - secure exception takes precedence
126
+ * Compare pseudocode RawExecutionPriority.
127
+ * Annoyingly, now we have two prigroup values (for S and NS)
128
+ * we can't do the loop comparison on raw priority values.
129
+ */
130
+ for (i = 1; i < s->num_irq; i++) {
131
+ for (bank = M_REG_S; bank >= M_REG_NS; bank--) {
132
+ VecInfo *vec;
133
+ int prio;
134
+ bool targets_secure;
135
+
136
+ if (bank == M_REG_S) {
137
+ if (!exc_is_banked(i)) {
138
+ continue;
139
+ }
140
+ vec = &s->sec_vectors[i];
141
+ targets_secure = true;
142
+ } else {
143
+ vec = &s->vectors[i];
144
+ targets_secure = !exc_is_banked(i) && exc_targets_secure(s, i);
145
+ }
146
+
147
+ prio = exc_group_prio(s, vec->prio, targets_secure);
148
+ if (vec->enabled && vec->pending && prio < pend_prio) {
149
+ pend_prio = prio;
150
+ pend_irq = i;
151
+ pending_is_s_banked = (bank == M_REG_S);
152
+ }
153
+ if (vec->active && prio < active_prio) {
154
+ active_prio = prio;
155
+ }
156
+ }
67
+ }
157
+ }
68
env->cp15.esr_el[new_el] = env->exception.syndrome;
158
+
69
break;
159
+ s->vectpending_is_s_banked = pending_is_s_banked;
70
case EXCP_IRQ:
160
+ s->vectpending = pend_irq;
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
161
+ s->vectpending_prio = pend_prio;
72
index XXXXXXX..XXXXXXX 100644
162
+ s->exception_prio = active_prio;
73
--- a/target/arm/translate.c
163
+
74
+++ b/target/arm/translate.c
164
+ trace_nvic_recompute_state_secure(s->vectpending,
75
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
165
+ s->vectpending_is_s_banked,
76
*/
166
+ s->vectpending_prio,
77
if (s->fp_excp_el) {
167
+ s->exception_prio);
78
gen_exception_insn(s, 4, EXCP_UDEF,
168
}
79
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
169
80
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
170
/* Recompute vectpending and exception_prio */
81
return 0;
171
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
172
int active_prio = NVIC_NOEXC_PRIO;
173
int pend_irq = 0;
174
175
+ /* In theory we could write one function that handled both
176
+ * the "security extension present" and "not present"; however
177
+ * the security-related changes significantly complicate the
178
+ * recomputation just by themselves and mixing both cases together
179
+ * would be even worse, so we retain a separate non-secure-only
180
+ * version for CPUs which don't implement the security extension.
181
+ */
182
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
183
+ nvic_recompute_state_secure(s);
184
+ return;
185
+ }
186
+
187
for (i = 1; i < s->num_irq; i++) {
188
VecInfo *vec = &s->vectors[i];
189
190
@@ -XXX,XX +XXX,XX @@ static void nvic_recompute_state(NVICState *s)
191
}
82
}
192
83
193
if (active_prio > 0) {
84
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
194
- active_prio &= nvic_gprio_mask(s);
85
*/
195
+ active_prio &= nvic_gprio_mask(s, false);
86
if (s->fp_excp_el) {
87
gen_exception_insn(s, 4, EXCP_UDEF,
88
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
89
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
90
return 0;
196
}
91
}
197
92
198
if (pend_prio > 0) {
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
199
- pend_prio &= nvic_gprio_mask(s);
94
200
+ pend_prio &= nvic_gprio_mask(s, false);
95
if (s->fp_excp_el) {
96
gen_exception_insn(s, 4, EXCP_UDEF,
97
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
98
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
99
return 0;
201
}
100
}
202
101
if (!s->vfp_enabled) {
203
s->vectpending = pend_irq;
102
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
204
@@ -XXX,XX +XXX,XX @@ static inline int nvic_exec_prio(NVICState *s)
103
205
} else if (env->v7m.primask[env->v7m.secure]) {
104
if (s->fp_excp_el) {
206
running = 0;
105
gen_exception_insn(s, 4, EXCP_UDEF,
207
} else if (env->v7m.basepri[env->v7m.secure] > 0) {
106
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
208
- running = env->v7m.basepri[env->v7m.secure] & nvic_gprio_mask(s);
107
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
209
+ running = env->v7m.basepri[env->v7m.secure] &
108
return 0;
210
+ nvic_gprio_mask(s, env->v7m.secure);
211
} else {
212
running = NVIC_NOEXC_PRIO; /* lower than any possible priority */
213
}
109
}
214
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
110
if (!s->vfp_enabled) {
215
index XXXXXXX..XXXXXXX 100644
216
--- a/hw/intc/trace-events
217
+++ b/hw/intc/trace-events
218
@@ -XXX,XX +XXX,XX @@ gicv3_redist_send_sgi(uint32_t cpu, int irq) "GICv3 redistributor 0x%x pending S
219
220
# hw/intc/armv7m_nvic.c
221
nvic_recompute_state(int vectpending, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d vectpending_prio %d exception_prio %d"
222
+nvic_recompute_state_secure(int vectpending, bool vectpending_is_s_banked, int vectpending_prio, int exception_prio) "NVIC state recomputed: vectpending %d is_s_banked %d vectpending_prio %d exception_prio %d"
223
nvic_set_prio(int irq, uint8_t prio) "NVIC set irq %d priority %d"
224
nvic_irq_update(int vectpending, int pendprio, int exception_prio, int level) "NVIC vectpending %d pending prio %d exception_prio %d: setting irq line to %d"
225
nvic_escalate_prio(int irq, int irqprio, int runprio) "NVIC escalating irq %d to HardFault: insufficient priority %d >= %d"
226
--
111
--
227
2.7.4
112
2.19.1
228
113
229
114
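The NVIC change above computes effective priorities in two steps: mask off
the subpriority bits selected by the (now banked) PRIGROUP field, then, if
AIRCR.PRIS is set, compress Non-secure priorities into the bottom half of
the range. A standalone sketch of that arithmetic, reusing the
NVIC_NS_PRIO_LIMIT value from the patch:

#include <stdbool.h>
#include <stdio.h>

#define NVIC_NS_PRIO_LIMIT 0x80

/* PRIGROUP selects how many low-order bits are subpriority. */
static unsigned gprio_mask(unsigned prigroup)
{
    return ~0u << (prigroup + 1);
}

/* Effective group priority, as in exc_group_prio() above. */
static int group_prio(int rawprio, unsigned prigroup,
                      bool targets_secure, bool pris)
{
    if (rawprio < 0) {
        return rawprio;     /* fixed negative priorities pass through */
    }
    rawprio &= gprio_mask(prigroup);
    if (!targets_secure && pris) {
        rawprio = (rawprio >> 1) + NVIC_NS_PRIO_LIMIT;
    }
    return rawprio;
}

int main(void)
{
    /* NS priority 0x60 with PRIGROUP 0 and PRIS set squashes to 0xb0,
     * so it ranks below every Secure priority under 0x80.
     */
    printf("%#x\n", group_prio(0x60, 0, false, true));
    return 0;
}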
1
Handle banking of SHCSR: some register bits are banked between
1
From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>
2
Secure and Non-Secure, and some are only accessible to Secure.
3
2
3
"The Image must be placed text_offset bytes from a 2MB aligned base
4
address anywhere in usable system RAM and called there."
5
6
For the virt board, we write our startup bootloader at the very
7
bottom of RAM, so that bit can't be used for the image. To avoid
8
overlap in case the image requests to be loaded at an offset
9
smaller than the size of our bootloader, we increment the load offset to the
10
next 2MB.
11
12
This fixes a boot failure for Xen AArch64.
13
14
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
15
Tested-by: Andre Przywara <andre.przywara@arm.com>
16
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
17
[PMM: Rephrased a comment a bit]
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1505240046-11454-19-git-send-email-peter.maydell@linaro.org
7
---
20
---
8
hw/intc/armv7m_nvic.c | 221 ++++++++++++++++++++++++++++++++++++++------------
21
hw/arm/boot.c | 18 ++++++++++++++++++
9
1 file changed, 169 insertions(+), 52 deletions(-)
22
1 file changed, 18 insertions(+)
10
23
11
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
24
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
12
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/intc/armv7m_nvic.c
26
--- a/hw/arm/boot.c
14
+++ b/hw/intc/armv7m_nvic.c
27
+++ b/hw/arm/boot.c
15
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
28
@@ -XXX,XX +XXX,XX @@
16
val = cpu->env.v7m.ccr[attrs.secure];
29
#include "qemu/config-file.h"
17
val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
30
#include "qemu/option.h"
18
return val;
31
#include "exec/address-spaces.h"
19
- case 0xd24: /* System Handler Status. */
32
+#include "qemu/units.h"
20
+ case 0xd24: /* System Handler Control and State (SHCSR) */
33
21
val = 0;
34
/* Kernel boot protocol is specified in the kernel docs
22
- if (s->vectors[ARMV7M_EXCP_MEM].active) {
35
* Documentation/arm/Booting and Documentation/arm64/booting.txt
23
- val |= (1 << 0);
36
@@ -XXX,XX +XXX,XX @@
24
- }
37
#define ARM64_TEXT_OFFSET_OFFSET 8
25
- if (s->vectors[ARMV7M_EXCP_BUS].active) {
38
#define ARM64_MAGIC_OFFSET 56
26
- val |= (1 << 1);
39
27
- }
40
+#define BOOTLOADER_MAX_SIZE (4 * KiB)
28
- if (s->vectors[ARMV7M_EXCP_USAGE].active) {
41
+
29
- val |= (1 << 3);
42
AddressSpace *arm_boot_address_space(ARMCPU *cpu,
30
+ if (attrs.secure) {
43
const struct arm_boot_info *info)
31
+ if (s->sec_vectors[ARMV7M_EXCP_MEM].active) {
44
{
32
+ val |= (1 << 0);
45
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
33
+ }
46
code[i] = tswap32(insn);
34
+ if (s->sec_vectors[ARMV7M_EXCP_HARD].active) {
47
}
35
+ val |= (1 << 2);
48
36
+ }
49
+ assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
37
+ if (s->sec_vectors[ARMV7M_EXCP_USAGE].active) {
50
+
38
+ val |= (1 << 3);
51
rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
39
+ }
52
40
+ if (s->sec_vectors[ARMV7M_EXCP_SVC].active) {
53
g_free(code);
41
+ val |= (1 << 7);
54
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
42
+ }
55
memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
43
+ if (s->sec_vectors[ARMV7M_EXCP_PENDSV].active) {
56
if (hdrvals[1] != 0) {
44
+ val |= (1 << 10);
57
kernel_load_offset = le64_to_cpu(hdrvals[0]);
45
+ }
58
+
46
+ if (s->sec_vectors[ARMV7M_EXCP_SYSTICK].active) {
59
+ /*
47
+ val |= (1 << 11);
60
+ * We write our startup "bootloader" at the very bottom of RAM,
48
+ }
61
+ * so that bit can't be used for the image. Luckily the Image
49
+ if (s->sec_vectors[ARMV7M_EXCP_USAGE].pending) {
62
+ * format specification is that the image requests only an offset
50
+ val |= (1 << 12);
63
+ * from a 2MB boundary, not an absolute load address. So if the
51
+ }
64
+ * image requests an offset that might mean it overlaps with the
52
+ if (s->sec_vectors[ARMV7M_EXCP_MEM].pending) {
65
+ * bootloader, we can just load it starting at 2MB+offset rather
53
+ val |= (1 << 13);
66
+ * than 0MB + offset.
54
+ }
67
+ */
55
+ if (s->sec_vectors[ARMV7M_EXCP_SVC].pending) {
68
+ if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
56
+ val |= (1 << 15);
69
+ kernel_load_offset += 2 * MiB;
57
+ }
58
+ if (s->sec_vectors[ARMV7M_EXCP_MEM].enabled) {
59
+ val |= (1 << 16);
60
+ }
61
+ if (s->sec_vectors[ARMV7M_EXCP_USAGE].enabled) {
62
+ val |= (1 << 18);
63
+ }
64
+ if (s->sec_vectors[ARMV7M_EXCP_HARD].pending) {
65
+ val |= (1 << 21);
66
+ }
67
+ /* SecureFault is not banked but is always RAZ/WI to NS */
68
+ if (s->vectors[ARMV7M_EXCP_SECURE].active) {
69
+ val |= (1 << 4);
70
+ }
71
+ if (s->vectors[ARMV7M_EXCP_SECURE].enabled) {
72
+ val |= (1 << 19);
73
+ }
74
+ if (s->vectors[ARMV7M_EXCP_SECURE].pending) {
75
+ val |= (1 << 20);
76
+ }
77
+ } else {
78
+ if (s->vectors[ARMV7M_EXCP_MEM].active) {
79
+ val |= (1 << 0);
80
+ }
81
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
82
+ /* HARDFAULTACT, HARDFAULTPENDED not present in v7M */
83
+ if (s->vectors[ARMV7M_EXCP_HARD].active) {
84
+ val |= (1 << 2);
85
+ }
86
+ if (s->vectors[ARMV7M_EXCP_HARD].pending) {
87
+ val |= (1 << 21);
88
+ }
89
+ }
90
+ if (s->vectors[ARMV7M_EXCP_USAGE].active) {
91
+ val |= (1 << 3);
92
+ }
93
+ if (s->vectors[ARMV7M_EXCP_SVC].active) {
94
+ val |= (1 << 7);
95
+ }
96
+ if (s->vectors[ARMV7M_EXCP_PENDSV].active) {
97
+ val |= (1 << 10);
98
+ }
99
+ if (s->vectors[ARMV7M_EXCP_SYSTICK].active) {
100
+ val |= (1 << 11);
101
+ }
102
+ if (s->vectors[ARMV7M_EXCP_USAGE].pending) {
103
+ val |= (1 << 12);
104
+ }
105
+ if (s->vectors[ARMV7M_EXCP_MEM].pending) {
106
+ val |= (1 << 13);
107
+ }
108
+ if (s->vectors[ARMV7M_EXCP_SVC].pending) {
109
+ val |= (1 << 15);
110
+ }
111
+ if (s->vectors[ARMV7M_EXCP_MEM].enabled) {
112
+ val |= (1 << 16);
113
+ }
114
+ if (s->vectors[ARMV7M_EXCP_USAGE].enabled) {
115
+ val |= (1 << 18);
116
+ }
70
+ }
117
}
71
}
118
- if (s->vectors[ARMV7M_EXCP_SVC].active) {
72
}
119
- val |= (1 << 7);
73
120
+ if (attrs.secure || (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
121
+ if (s->vectors[ARMV7M_EXCP_BUS].active) {
122
+ val |= (1 << 1);
123
+ }
124
+ if (s->vectors[ARMV7M_EXCP_BUS].pending) {
125
+ val |= (1 << 14);
126
+ }
127
+ if (s->vectors[ARMV7M_EXCP_BUS].enabled) {
128
+ val |= (1 << 17);
129
+ }
130
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8) &&
131
+ s->vectors[ARMV7M_EXCP_NMI].active) {
132
+ /* NMIACT is not present in v7M */
133
+ val |= (1 << 5);
134
+ }
135
}
136
+
137
+ /* TODO: this is RAZ/WI from NS if DEMCR.SDME is set */
138
if (s->vectors[ARMV7M_EXCP_DEBUG].active) {
139
val |= (1 << 8);
140
}
141
- if (s->vectors[ARMV7M_EXCP_PENDSV].active) {
142
- val |= (1 << 10);
143
- }
144
- if (s->vectors[ARMV7M_EXCP_SYSTICK].active) {
145
- val |= (1 << 11);
146
- }
147
- if (s->vectors[ARMV7M_EXCP_USAGE].pending) {
148
- val |= (1 << 12);
149
- }
150
- if (s->vectors[ARMV7M_EXCP_MEM].pending) {
151
- val |= (1 << 13);
152
- }
153
- if (s->vectors[ARMV7M_EXCP_BUS].pending) {
154
- val |= (1 << 14);
155
- }
156
- if (s->vectors[ARMV7M_EXCP_SVC].pending) {
157
- val |= (1 << 15);
158
- }
159
- if (s->vectors[ARMV7M_EXCP_MEM].enabled) {
160
- val |= (1 << 16);
161
- }
162
- if (s->vectors[ARMV7M_EXCP_BUS].enabled) {
163
- val |= (1 << 17);
164
- }
165
- if (s->vectors[ARMV7M_EXCP_USAGE].enabled) {
166
- val |= (1 << 18);
167
- }
168
return val;
169
case 0xd28: /* Configurable Fault Status. */
170
/* The BFSR bits [15:8] are shared between security states
171
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
172
173
cpu->env.v7m.ccr[attrs.secure] = value;
174
break;
175
- case 0xd24: /* System Handler Control. */
176
- s->vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
177
- s->vectors[ARMV7M_EXCP_BUS].active = (value & (1 << 1)) != 0;
178
- s->vectors[ARMV7M_EXCP_USAGE].active = (value & (1 << 3)) != 0;
179
- s->vectors[ARMV7M_EXCP_SVC].active = (value & (1 << 7)) != 0;
180
+ case 0xd24: /* System Handler Control and State (SHCSR) */
181
+ if (attrs.secure) {
182
+ s->sec_vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
183
+ /* Secure HardFault active bit cannot be written */
184
+ s->sec_vectors[ARMV7M_EXCP_USAGE].active = (value & (1 << 3)) != 0;
185
+ s->sec_vectors[ARMV7M_EXCP_SVC].active = (value & (1 << 7)) != 0;
186
+ s->sec_vectors[ARMV7M_EXCP_PENDSV].active =
187
+ (value & (1 << 10)) != 0;
188
+ s->sec_vectors[ARMV7M_EXCP_SYSTICK].active =
189
+ (value & (1 << 11)) != 0;
190
+ s->sec_vectors[ARMV7M_EXCP_USAGE].pending =
191
+ (value & (1 << 12)) != 0;
192
+ s->sec_vectors[ARMV7M_EXCP_MEM].pending = (value & (1 << 13)) != 0;
193
+ s->sec_vectors[ARMV7M_EXCP_SVC].pending = (value & (1 << 15)) != 0;
194
+ s->sec_vectors[ARMV7M_EXCP_MEM].enabled = (value & (1 << 16)) != 0;
195
+ s->sec_vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
196
+ s->sec_vectors[ARMV7M_EXCP_USAGE].enabled =
197
+ (value & (1 << 18)) != 0;
198
+ /* SecureFault not banked, but RAZ/WI to NS */
199
+ s->vectors[ARMV7M_EXCP_SECURE].active = (value & (1 << 4)) != 0;
200
+ s->vectors[ARMV7M_EXCP_SECURE].enabled = (value & (1 << 19)) != 0;
201
+ s->vectors[ARMV7M_EXCP_SECURE].pending = (value & (1 << 20)) != 0;
202
+ } else {
203
+ s->vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
204
+ if (arm_feature(&cpu->env, ARM_FEATURE_V8)) {
205
+ /* HARDFAULTPENDED is not present in v7M */
206
+ s->vectors[ARMV7M_EXCP_HARD].pending = (value & (1 << 21)) != 0;
207
+ }
208
+ s->vectors[ARMV7M_EXCP_USAGE].active = (value & (1 << 3)) != 0;
209
+ s->vectors[ARMV7M_EXCP_SVC].active = (value & (1 << 7)) != 0;
210
+ s->vectors[ARMV7M_EXCP_PENDSV].active = (value & (1 << 10)) != 0;
211
+ s->vectors[ARMV7M_EXCP_SYSTICK].active = (value & (1 << 11)) != 0;
212
+ s->vectors[ARMV7M_EXCP_USAGE].pending = (value & (1 << 12)) != 0;
213
+ s->vectors[ARMV7M_EXCP_MEM].pending = (value & (1 << 13)) != 0;
214
+ s->vectors[ARMV7M_EXCP_SVC].pending = (value & (1 << 15)) != 0;
215
+ s->vectors[ARMV7M_EXCP_MEM].enabled = (value & (1 << 16)) != 0;
216
+ s->vectors[ARMV7M_EXCP_USAGE].enabled = (value & (1 << 18)) != 0;
217
+ }
218
+ if (attrs.secure || (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
219
+ s->vectors[ARMV7M_EXCP_BUS].active = (value & (1 << 1)) != 0;
220
+ s->vectors[ARMV7M_EXCP_BUS].pending = (value & (1 << 14)) != 0;
221
+ s->vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
222
+ }
223
+ /* NMIACT can only be written if the write is of a zero, with
224
+ * BFHFNMINS 1, and by the CPU in secure state via the NS alias.
225
+ */
226
+ if (!attrs.secure && cpu->env.v7m.secure &&
227
+ (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) &&
228
+ (value & (1 << 5)) == 0) {
229
+ s->vectors[ARMV7M_EXCP_NMI].active = 0;
230
+ }
231
+ /* HARDFAULTACT can only be written if the write is of a zero
232
+ * to the non-secure HardFault state by the CPU in secure state.
233
+ * The only case where we can be targeting the non-secure HF state
234
+ * when in secure state is if this is a write via the NS alias
235
+ * and BFHFNMINS is 1.
236
+ */
237
+ if (!attrs.secure && cpu->env.v7m.secure &&
238
+ (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) &&
239
+ (value & (1 << 2)) == 0) {
240
+ s->vectors[ARMV7M_EXCP_HARD].active = 0;
241
+ }
242
+
243
+ /* TODO: this is RAZ/WI from NS if DEMCR.SDME is set */
244
s->vectors[ARMV7M_EXCP_DEBUG].active = (value & (1 << 8)) != 0;
245
- s->vectors[ARMV7M_EXCP_PENDSV].active = (value & (1 << 10)) != 0;
246
- s->vectors[ARMV7M_EXCP_SYSTICK].active = (value & (1 << 11)) != 0;
247
- s->vectors[ARMV7M_EXCP_USAGE].pending = (value & (1 << 12)) != 0;
248
- s->vectors[ARMV7M_EXCP_MEM].pending = (value & (1 << 13)) != 0;
249
- s->vectors[ARMV7M_EXCP_BUS].pending = (value & (1 << 14)) != 0;
250
- s->vectors[ARMV7M_EXCP_SVC].pending = (value & (1 << 15)) != 0;
251
- s->vectors[ARMV7M_EXCP_MEM].enabled = (value & (1 << 16)) != 0;
252
- s->vectors[ARMV7M_EXCP_BUS].enabled = (value & (1 << 17)) != 0;
253
- s->vectors[ARMV7M_EXCP_USAGE].enabled = (value & (1 << 18)) != 0;
254
nvic_irq_update(s);
255
break;
256
case 0xd28: /* Configurable Fault Status. */
257
--
74
--
258
2.7.4
75
2.19.1
259
76
260
77
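The boot.c change above keys off one observation: the arm64 Image header
asks for an offset from a 2MB-aligned base, not an absolute address, so
QEMU is free to pick the next 2MB boundary when the requested offset would
collide with its boot stub at the bottom of RAM. A self-contained sketch of
the offset fixup (constants as in the patch):

#include <stdint.h>
#include <stdio.h>

#define KiB 1024ULL
#define MiB (1024ULL * 1024)
#define BOOTLOADER_MAX_SIZE (4 * KiB)

/* Bump the load offset past the boot stub at the bottom of RAM. */
static uint64_t fixup_load_offset(uint64_t kernel_load_offset)
{
    if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
        kernel_load_offset += 2 * MiB;
    }
    return kernel_load_offset;
}

int main(void)
{
    /* Typical text_offset of 0x80000: left unchanged. */
    printf("0x%llx\n", (unsigned long long)fixup_load_offset(0x80000));
    /* Offset 0 would overlap the stub: becomes 0x200000. */
    printf("0x%llx\n", (unsigned long long)fixup_load_offset(0));
    return 0;
}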
1
In the A64 decoder, we have a lot of references to section numbers
1
From: Richard Henderson <rth@twiddle.net>
2
from version A.a of the v8A ARM ARM (DDI0487). This version of the
3
document is now long obsolete (we are currently on revision B.a),
4
and various intervening versions renumbered all the sections.
5
2
6
The most recent B.a version of the document doesn't assign
3
This can reduce the number of opcodes required for certain
7
section numbers at all to the individual instruction classes
4
complex forms of load-multiple (e.g. ld4.16b).
8
in the way that the various A.x versions did. The simplest thing
9
to do is just to delete all the out-of-date C.x.x references.
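Richard Henderson's change in the diff below is loop-invariant hoisting at
the TCG level: rather than emitting a fresh add-immediate op for every
element, the element size is materialised once as a constant temp and
reused across the loop. A sketch of the pattern (it assumes QEMU's
2018-era TCG API, exactly the calls used in the diff; the function name
and arguments here are illustrative only):

/* Emits one pointer bump per element but only a single constant. */
static void emit_element_address_updates(TCGv_i64 tcg_addr, int ebytes,
                                         int elements)
{
    TCGv_i64 tcg_ebytes = tcg_const_i64(ebytes);  /* materialised once */
    int e;

    for (e = 0; e < elements; e++) {
        /* ... emit the load/store for the element at tcg_addr ... */
        tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes); /* no per-element addi */
    }
    tcg_temp_free_i64(tcg_ebytes);
}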
10
5
6
Signed-off-by: Richard Henderson <rth@twiddle.net>
7
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
13
Message-id: 20170915150849.23557-1-peter.maydell@linaro.org
14
---
10
---
15
target/arm/translate-a64.c | 227 +++++++++++++++++++++++----------------------
11
target/arm/translate-a64.c | 12 ++++++++----
16
1 file changed, 114 insertions(+), 113 deletions(-)
12
1 file changed, 8 insertions(+), 4 deletions(-)
17
13
18
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
19
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/translate-a64.c
16
--- a/target/arm/translate-a64.c
21
+++ b/target/arm/translate-a64.c
17
+++ b/target/arm/translate-a64.c
22
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
18
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
23
}
19
bool is_store = !extract32(insn, 22, 1);
24
20
bool is_postidx = extract32(insn, 23, 1);
25
/*
21
bool is_q = extract32(insn, 30, 1);
26
- * the instruction disassembly implemented here matches
22
- TCGv_i64 tcg_addr, tcg_rn;
27
- * the instruction encoding classifications in chapter 3 (C3)
23
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
28
- * of the ARM Architecture Reference Manual (DDI0487A_a)
24
29
+ * The instruction disassembly implemented here matches
25
int ebytes = 1 << size;
30
+ * the instruction encoding classifications in chapter C4
26
int elements = (is_q ? 128 : 64) / (8 << size);
31
+ * of the ARM Architecture Reference Manual (DDI0487B_a);
27
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
32
+ * classification names and decode diagrams here should generally
28
tcg_rn = cpu_reg_sp(s, rn);
33
+ * match up with those in the manual.
29
tcg_addr = tcg_temp_new_i64();
34
*/
30
tcg_gen_mov_i64(tcg_addr, tcg_rn);
35
31
+ tcg_ebytes = tcg_const_i64(ebytes);
36
-/* C3.2.7 Unconditional branch (immediate)
32
37
+/* Unconditional branch (immediate)
33
for (r = 0; r < rpt; r++) {
38
* 31 30 26 25 0
34
int e;
39
* +----+-----------+-------------------------------------+
35
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
40
* | op | 0 0 1 0 1 | imm26 |
36
clear_vec_high(s, is_q, tt);
41
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
37
}
42
uint64_t addr = s->pc + sextract32(insn, 0, 26) * 4 - 4;
38
}
43
39
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
44
if (insn & (1U << 31)) {
40
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
45
- /* C5.6.26 BL Branch with link */
41
tt = (tt + 1) % 32;
46
+ /* BL Branch with link */
42
}
47
tcg_gen_movi_i64(cpu_reg(s, 30), s->pc);
43
}
44
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
45
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
46
}
48
}
47
}
49
48
+ tcg_temp_free_i64(tcg_ebytes);
50
- /* C5.6.20 B Branch / C5.6.26 BL Branch with link */
51
+ /* B Branch / BL Branch with link */
52
gen_goto_tb(s, 0, addr);
53
}
54
55
-/* C3.2.1 Compare & branch (immediate)
56
+/* Compare and branch (immediate)
57
* 31 30 25 24 23 5 4 0
58
* +----+-------------+----+---------------------+--------+
59
* | sf | 0 1 1 0 1 0 | op | imm19 | Rt |
60
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
61
gen_goto_tb(s, 1, addr);
62
}
63
64
-/* C3.2.5 Test & branch (immediate)
65
+/* Test and branch (immediate)
66
* 31 30 25 24 23 19 18 5 4 0
67
* +----+-------------+----+-------+-------------+------+
68
* | b5 | 0 1 1 0 1 1 | op | b40 | imm14 | Rt |
69
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
70
gen_goto_tb(s, 1, addr);
71
}
72
73
-/* C3.2.2 / C5.6.19 Conditional branch (immediate)
74
+/* Conditional branch (immediate)
75
* 31 25 24 23 5 4 3 0
76
* +---------------+----+---------------------+----+------+
77
* | 0 1 0 1 0 1 0 | o1 | imm19 | o0 | cond |
78
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
79
}
80
}
81
82
-/* C5.6.68 HINT */
83
+/* HINT instruction group, including various allocated HINTs */
84
static void handle_hint(DisasContext *s, uint32_t insn,
85
unsigned int op1, unsigned int op2, unsigned int crm)
86
{
87
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
88
}
89
}
90
91
-/* C5.6.130 MSR (immediate) - move immediate to processor state field */
92
+/* MSR (immediate) - move immediate to processor state field */
93
static void handle_msr_i(DisasContext *s, uint32_t insn,
94
unsigned int op1, unsigned int op2, unsigned int crm)
95
{
96
@@ -XXX,XX +XXX,XX @@ static void gen_set_nzcv(TCGv_i64 tcg_rt)
97
tcg_temp_free_i32(nzcv);
98
}
99
100
-/* C5.6.129 MRS - move from system register
101
- * C5.6.131 MSR (register) - move to system register
102
- * C5.6.204 SYS
103
- * C5.6.205 SYSL
104
+/* MRS - move from system register
105
+ * MSR (register) - move to system register
106
+ * SYS
107
+ * SYSL
108
* These are all essentially the same insn in 'read' and 'write'
109
* versions, with varying op0 fields.
110
*/
111
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
112
}
113
}
114
115
-/* C3.2.4 System
116
+/* System
117
* 31 22 21 20 19 18 16 15 12 11 8 7 5 4 0
118
* +---------------------+---+-----+-----+-------+-------+-----+------+
119
* | 1 1 0 1 0 1 0 1 0 0 | L | op0 | op1 | CRn | CRm | op2 | Rt |
120
@@ -XXX,XX +XXX,XX @@ static void disas_system(DisasContext *s, uint32_t insn)
121
return;
122
}
123
switch (crn) {
124
- case 2: /* C5.6.68 HINT */
125
+ case 2: /* HINT (including allocated hints like NOP, YIELD, etc) */
126
handle_hint(s, insn, op1, op2, crm);
127
break;
128
case 3: /* CLREX, DSB, DMB, ISB */
129
handle_sync(s, insn, op1, op2, crm);
130
break;
131
- case 4: /* C5.6.130 MSR (immediate) */
132
+ case 4: /* MSR (immediate) */
133
handle_msr_i(s, insn, op1, op2, crm);
134
break;
135
default:
136
@@ -XXX,XX +XXX,XX @@ static void disas_system(DisasContext *s, uint32_t insn)
137
handle_sys(s, insn, l, op0, op1, op2, crn, crm, rt);
138
}
139
140
-/* C3.2.3 Exception generation
141
+/* Exception generation
142
*
143
* 31 24 23 21 20 5 4 2 1 0
144
* +-----------------+-----+------------------------+-----+----+
145
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
146
}
147
}
148
149
-/* C3.2.7 Unconditional branch (register)
150
+/* Unconditional branch (register)
151
* 31 25 24 21 20 16 15 10 9 5 4 0
152
* +---------------+-------+-------+-------+------+-------+
153
* | 1 1 0 1 0 1 1 | opc | op2 | op3 | Rn | op4 |
154
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
155
s->base.is_jmp = DISAS_JUMP;
156
}
157
158
-/* C3.2 Branches, exception generating and system instructions */
159
+/* Branches, exception generating and system instructions */
160
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
161
{
162
switch (extract32(insn, 25, 7)) {
163
@@ -XXX,XX +XXX,XX @@ static bool disas_ldst_compute_iss_sf(int size, bool is_signed, int opc)
164
return regsize == 64;
165
}
166
167
-/* C3.3.6 Load/store exclusive
168
+/* Load/store exclusive
169
*
170
* 31 30 29 24 23 22 21 20 16 15 14 10 9 5 4 0
171
* +-----+-------------+----+---+----+------+----+-------+------+------+
172
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
173
}
174
175
/*
176
- * C3.3.5 Load register (literal)
177
+ * Load register (literal)
178
*
179
* 31 30 29 27 26 25 24 23 5 4 0
180
* +-----+-------+---+-----+-------------------+-------+
181
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)
182
}
183
184
/*
185
- * C5.6.80 LDNP (Load Pair - non-temporal hint)
186
- * C5.6.81 LDP (Load Pair - non vector)
187
- * C5.6.82 LDPSW (Load Pair Signed Word - non vector)
188
- * C5.6.176 STNP (Store Pair - non-temporal hint)
189
- * C5.6.177 STP (Store Pair - non vector)
190
- * C6.3.165 LDNP (Load Pair of SIMD&FP - non-temporal hint)
191
- * C6.3.165 LDP (Load Pair of SIMD&FP)
192
- * C6.3.284 STNP (Store Pair of SIMD&FP - non-temporal hint)
193
- * C6.3.284 STP (Store Pair of SIMD&FP)
194
+ * LDNP (Load Pair - non-temporal hint)
195
+ * LDP (Load Pair - non vector)
196
+ * LDPSW (Load Pair Signed Word - non vector)
197
+ * STNP (Store Pair - non-temporal hint)
198
+ * STP (Store Pair - non vector)
199
+ * LDNP (Load Pair of SIMD&FP - non-temporal hint)
200
+ * LDP (Load Pair of SIMD&FP)
201
+ * STNP (Store Pair of SIMD&FP - non-temporal hint)
202
+ * STP (Store Pair of SIMD&FP)
203
*
204
* 31 30 29 27 26 25 24 23 22 21 15 14 10 9 5 4 0
205
* +-----+-------+---+---+-------+---+-----------------------------+
206
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
207
}
208
209
/*
210
- * C3.3.8 Load/store (immediate post-indexed)
211
- * C3.3.9 Load/store (immediate pre-indexed)
212
- * C3.3.12 Load/store (unscaled immediate)
213
+ * Load/store (immediate post-indexed)
214
+ * Load/store (immediate pre-indexed)
215
+ * Load/store (unscaled immediate)
216
*
217
* 31 30 29 27 26 25 24 23 22 21 20 12 11 10 9 5 4 0
218
* +----+-------+---+-----+-----+---+--------+-----+------+------+
219
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_imm9(DisasContext *s, uint32_t insn,
220
}
221
222
/*
223
- * C3.3.10 Load/store (register offset)
224
+ * Load/store (register offset)
225
*
226
* 31 30 29 27 26 25 24 23 22 21 20 16 15 13 12 11 10 9 5 4 0
227
* +----+-------+---+-----+-----+---+------+-----+--+-----+----+----+
228
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
229
}
230
231
/*
232
- * C3.3.13 Load/store (unsigned immediate)
233
+ * Load/store (unsigned immediate)
234
*
235
* 31 30 29 27 26 25 24 23 22 21 10 9 5
236
* +----+-------+---+-----+-----+------------+-------+------+
237
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
238
}
239
}
240
241
-/* C3.3.1 AdvSIMD load/store multiple structures
242
+/* AdvSIMD load/store multiple structures
243
*
244
* 31 30 29 23 22 21 16 15 12 11 10 9 5 4 0
245
* +---+---+---------------+---+-------------+--------+------+------+------+
246
* | 0 | Q | 0 0 1 1 0 0 0 | L | 0 0 0 0 0 0 | opcode | size | Rn | Rt |
247
* +---+---+---------------+---+-------------+--------+------+------+------+
248
*
249
- * C3.3.2 AdvSIMD load/store multiple structures (post-indexed)
250
+ * AdvSIMD load/store multiple structures (post-indexed)
251
*
252
* 31 30 29 23 22 21 20 16 15 12 11 10 9 5 4 0
253
* +---+---+---------------+---+---+---------+--------+------+------+------+
254
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
255
tcg_temp_free_i64(tcg_addr);
49
tcg_temp_free_i64(tcg_addr);
256
}
50
}
257
51
258
-/* C3.3.3 AdvSIMD load/store single structure
259
+/* AdvSIMD load/store single structure
260
*
261
* 31 30 29 23 22 21 20 16 15 13 12 11 10 9 5 4 0
262
* +---+---+---------------+-----+-----------+-----+---+------+------+------+
263
* | 0 | Q | 0 0 1 1 0 1 0 | L R | 0 0 0 0 0 | opc | S | size | Rn | Rt |
264
* +---+---+---------------+-----+-----------+-----+---+------+------+------+
265
*
266
- * C3.3.4 AdvSIMD load/store single structure (post-indexed)
267
+ * AdvSIMD load/store single structure (post-indexed)
268
*
269
* 31 30 29 23 22 21 20 16 15 13 12 11 10 9 5 4 0
270
* +---+---+---------------+-----+-----------+-----+---+------+------+------+
271
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
52
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
53
bool replicate = false;
54
int index = is_q << 3 | S << 2 | size;
55
int ebytes, xs;
56
- TCGv_i64 tcg_addr, tcg_rn;
57
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
58
59
switch (scale) {
60
case 3:
61
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
62
tcg_rn = cpu_reg_sp(s, rn);
63
tcg_addr = tcg_temp_new_i64();
64
tcg_gen_mov_i64(tcg_addr, tcg_rn);
65
+ tcg_ebytes = tcg_const_i64(ebytes);
66
67
for (xs = 0; xs < selem; xs++) {
68
if (replicate) {
69
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
70
do_vec_st(s, rt, index, tcg_addr, scale);
71
}
72
}
73
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
74
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
75
rt = (rt + 1) % 32;
76
}
77
78
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
79
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
80
}
81
}
82
+ tcg_temp_free_i64(tcg_ebytes);
272
tcg_temp_free_i64(tcg_addr);
83
tcg_temp_free_i64(tcg_addr);
273
}
84
}
274
85
275
-/* C3.3 Loads and stores */
276
+/* Loads and stores */
277
static void disas_ldst(DisasContext *s, uint32_t insn)
278
{
279
switch (extract32(insn, 24, 6)) {
280
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
281
}
282
}
283
284
-/* C3.4.6 PC-rel. addressing
285
+/* PC-rel. addressing
286
* 31 30 29 28 24 23 5 4 0
287
* +----+-------+-----------+-------------------+------+
288
* | op | immlo | 1 0 0 0 0 | immhi | Rd |
289
@@ -XXX,XX +XXX,XX @@ static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
290
}
291
292
/*
293
- * C3.4.1 Add/subtract (immediate)
294
+ * Add/subtract (immediate)
295
*
296
* 31 30 29 28 24 23 22 21 10 9 5 4 0
297
* +--+--+--+-----------+-----+-------------+-----+-----+
298
@@ -XXX,XX +XXX,XX @@ static bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
299
return true;
300
}
301
302
-/* C3.4.4 Logical (immediate)
303
+/* Logical (immediate)
304
* 31 30 29 28 23 22 21 16 15 10 9 5 4 0
305
* +----+-----+-------------+---+------+------+------+------+
306
* | sf | opc | 1 0 0 1 0 0 | N | immr | imms | Rn | Rd |
307
@@ -XXX,XX +XXX,XX @@ static void disas_logic_imm(DisasContext *s, uint32_t insn)
308
}
309
310
/*
311
- * C3.4.5 Move wide (immediate)
312
+ * Move wide (immediate)
313
*
314
* 31 30 29 28 23 22 21 20 5 4 0
315
* +--+-----+-------------+-----+----------------+------+
316
@@ -XXX,XX +XXX,XX @@ static void disas_movw_imm(DisasContext *s, uint32_t insn)
317
}
318
}
319
320
-/* C3.4.2 Bitfield
321
+/* Bitfield
322
* 31 30 29 28 23 22 21 16 15 10 9 5 4 0
323
* +----+-----+-------------+---+------+------+------+------+
324
* | sf | opc | 1 0 0 1 1 0 | N | immr | imms | Rn | Rd |
325
@@ -XXX,XX +XXX,XX @@ static void disas_bitfield(DisasContext *s, uint32_t insn)
326
}
327
}
328
329
-/* C3.4.3 Extract
330
+/* Extract
331
* 31 30 29 28 23 22 21 20 16 15 10 9 5 4 0
332
* +----+------+-------------+---+----+------+--------+------+------+
333
* | sf | op21 | 1 0 0 1 1 1 | N | o0 | Rm | imms | Rn | Rd |
334
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
335
}
336
}
337
338
-/* C3.4 Data processing - immediate */
339
+/* Data processing - immediate */
340
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
341
{
342
switch (extract32(insn, 23, 6)) {
343
@@ -XXX,XX +XXX,XX @@ static void shift_reg_imm(TCGv_i64 dst, TCGv_i64 src, int sf,
344
}
345
}
346
347
-/* C3.5.10 Logical (shifted register)
348
+/* Logical (shifted register)
349
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
350
* +----+-----+-----------+-------+---+------+--------+------+------+
351
* | sf | opc | 0 1 0 1 0 | shift | N | Rm | imm6 | Rn | Rd |
352
@@ -XXX,XX +XXX,XX @@ static void disas_logic_reg(DisasContext *s, uint32_t insn)
353
}
354
355
/*
356
- * C3.5.1 Add/subtract (extended register)
357
+ * Add/subtract (extended register)
358
*
359
* 31|30|29|28 24|23 22|21|20 16|15 13|12 10|9 5|4 0|
360
* +--+--+--+-----------+-----+--+-------+------+------+----+----+
361
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_ext_reg(DisasContext *s, uint32_t insn)
362
}
363
364
/*
365
- * C3.5.2 Add/subtract (shifted register)
366
+ * Add/subtract (shifted register)
367
*
368
* 31 30 29 28 24 23 22 21 20 16 15 10 9 5 4 0
369
* +--+--+--+-----------+-----+--+-------+---------+------+------+
370
@@ -XXX,XX +XXX,XX @@ static void disas_add_sub_reg(DisasContext *s, uint32_t insn)
371
tcg_temp_free_i64(tcg_result);
372
}
373
374
-/* C3.5.9 Data-processing (3 source)
375
-
376
- 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
377
- +--+------+-----------+------+------+----+------+------+------+
378
- |sf| op54 | 1 1 0 1 1 | op31 | Rm | o0 | Ra | Rn | Rd |
379
- +--+------+-----------+------+------+----+------+------+------+
380
-
381
+/* Data-processing (3 source)
382
+ *
383
+ * 31 30 29 28 24 23 21 20 16 15 14 10 9 5 4 0
384
+ * +--+------+-----------+------+------+----+------+------+------+
385
+ * |sf| op54 | 1 1 0 1 1 | op31 | Rm | o0 | Ra | Rn | Rd |
386
+ * +--+------+-----------+------+------+----+------+------+------+
387
*/
388
static void disas_data_proc_3src(DisasContext *s, uint32_t insn)
389
{
390
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_3src(DisasContext *s, uint32_t insn)
391
tcg_temp_free_i64(tcg_tmp);
392
}
393
394
-/* C3.5.3 - Add/subtract (with carry)
395
+/* Add/subtract (with carry)
396
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 10 9 5 4 0
397
* +--+--+--+------------------------+------+---------+------+-----+
398
* |sf|op| S| 1 1 0 1 0 0 0 0 | rm | opcode2 | Rn | Rd |
399
@@ -XXX,XX +XXX,XX @@ static void disas_adc_sbc(DisasContext *s, uint32_t insn)
400
}
401
}
402
403
-/* C3.5.4 - C3.5.5 Conditional compare (immediate / register)
404
+/* Conditional compare (immediate / register)
405
* 31 30 29 28 27 26 25 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
406
* +--+--+--+------------------------+--------+------+----+--+------+--+-----+
407
* |sf|op| S| 1 1 0 1 0 0 1 0 |imm5/rm | cond |i/r |o2| Rn |o3|nzcv |
408
@@ -XXX,XX +XXX,XX @@ static void disas_cc(DisasContext *s, uint32_t insn)
409
tcg_temp_free_i32(tcg_t2);
410
}
411
412
-/* C3.5.6 Conditional select
413
+/* Conditional select
414
* 31 30 29 28 21 20 16 15 12 11 10 9 5 4 0
415
* +----+----+---+-----------------+------+------+-----+------+------+
416
* | sf | op | S | 1 1 0 1 0 1 0 0 | Rm | cond | op2 | Rn | Rd |
417
@@ -XXX,XX +XXX,XX @@ static void handle_rbit(DisasContext *s, unsigned int sf,
418
}
419
}
420
421
-/* C5.6.149 REV with sf==1, opcode==3 ("REV64") */
422
+/* REV with sf==1, opcode==3 ("REV64") */
423
static void handle_rev64(DisasContext *s, unsigned int sf,
424
unsigned int rn, unsigned int rd)
425
{
426
@@ -XXX,XX +XXX,XX @@ static void handle_rev64(DisasContext *s, unsigned int sf,
427
tcg_gen_bswap64_i64(cpu_reg(s, rd), cpu_reg(s, rn));
428
}
429
430
-/* C5.6.149 REV with sf==0, opcode==2
431
- * C5.6.151 REV32 (sf==1, opcode==2)
432
+/* REV with sf==0, opcode==2
433
+ * REV32 (sf==1, opcode==2)
434
*/
435
static void handle_rev32(DisasContext *s, unsigned int sf,
436
unsigned int rn, unsigned int rd)
437
@@ -XXX,XX +XXX,XX @@ static void handle_rev32(DisasContext *s, unsigned int sf,
438
}
439
}
440
441
-/* C5.6.150 REV16 (opcode==1) */
442
+/* REV16 (opcode==1) */
443
static void handle_rev16(DisasContext *s, unsigned int sf,
444
unsigned int rn, unsigned int rd)
445
{
446
@@ -XXX,XX +XXX,XX @@ static void handle_rev16(DisasContext *s, unsigned int sf,
447
tcg_temp_free_i64(tcg_tmp);
448
}
449
450
-/* C3.5.7 Data-processing (1 source)
451
+/* Data-processing (1 source)
452
* 31 30 29 28 21 20 16 15 10 9 5 4 0
453
* +----+---+---+-----------------+---------+--------+------+------+
454
* | sf | 1 | S | 1 1 0 1 0 1 1 0 | opcode2 | opcode | Rn | Rd |
455
@@ -XXX,XX +XXX,XX @@ static void handle_div(DisasContext *s, bool is_signed, unsigned int sf,
456
}
457
}
458
459
-/* C5.6.115 LSLV, C5.6.118 LSRV, C5.6.17 ASRV, C5.6.154 RORV */
460
+/* LSLV, LSRV, ASRV, RORV */
461
static void handle_shift_reg(DisasContext *s,
462
enum a64_shift_type shift_type, unsigned int sf,
463
unsigned int rm, unsigned int rn, unsigned int rd)
464
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
465
tcg_temp_free_i32(tcg_bytes);
466
}
467
468
-/* C3.5.8 Data-processing (2 source)
469
+/* Data-processing (2 source)
470
* 31 30 29 28 21 20 16 15 10 9 5 4 0
471
* +----+---+---+-----------------+------+--------+------+------+
472
* | sf | 0 | S | 1 1 0 1 0 1 1 0 | Rm | opcode | Rn | Rd |
473
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
474
}
475
}
476
477
-/* C3.5 Data processing - register */
478
+/* Data processing - register */
479
static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
480
{
481
switch (extract32(insn, 24, 5)) {
482
@@ -XXX,XX +XXX,XX @@ static void handle_fp_compare(DisasContext *s, bool is_double,
483
tcg_temp_free_i64(tcg_flags);
484
}
485
486
-/* C3.6.22 Floating point compare
487
+/* Floating point compare
488
* 31 30 29 28 24 23 22 21 20 16 15 14 13 10 9 5 4 0
489
* +---+---+---+-----------+------+---+------+-----+---------+------+-------+
490
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | op | 1 0 0 0 | Rn | op2 |
491
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
492
handle_fp_compare(s, type, rn, rm, opc & 1, opc & 2);
493
}
494
495
-/* C3.6.23 Floating point conditional compare
496
+/* Floating point conditional compare
497
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 3 0
498
* +---+---+---+-----------+------+---+------+------+-----+------+----+------+
499
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | cond | 0 1 | Rn | op | nzcv |
500
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
501
}
502
}
503
504
-/* C3.6.24 Floating point conditional select
505
+/* Floating point conditional select
506
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
507
* +---+---+---+-----------+------+---+------+------+-----+------+------+
508
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | cond | 1 1 | Rn | Rd |
509
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
510
tcg_temp_free_i64(t_true);
511
}
512
513
-/* C3.6.25 Floating-point data-processing (1 source) - single precision */
514
+/* Floating-point data-processing (1 source) - single precision */
515
static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
516
{
517
TCGv_ptr fpst;
518
@@ -XXX,XX +XXX,XX @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
519
tcg_temp_free_i32(tcg_res);
520
}
521
522
-/* C3.6.25 Floating-point data-processing (1 source) - double precision */
523
+/* Floating-point data-processing (1 source) - double precision */
524
static void handle_fp_1src_double(DisasContext *s, int opcode, int rd, int rn)
525
{
526
TCGv_ptr fpst;
527
@@ -XXX,XX +XXX,XX @@ static void handle_fp_fcvt(DisasContext *s, int opcode,
528
}
529
}
530
531
-/* C3.6.25 Floating point data-processing (1 source)
532
+/* Floating point data-processing (1 source)
533
* 31 30 29 28 24 23 22 21 20 15 14 10 9 5 4 0
534
* +---+---+---+-----------+------+---+--------+-----------+------+------+
535
* | M | 0 | S | 1 1 1 1 0 | type | 1 | opcode | 1 0 0 0 0 | Rn | Rd |
536
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
537
}
538
}
539
540
-/* C3.6.26 Floating-point data-processing (2 source) - single precision */
541
+/* Floating-point data-processing (2 source) - single precision */
542
static void handle_fp_2src_single(DisasContext *s, int opcode,
543
int rd, int rn, int rm)
544
{
545
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_single(DisasContext *s, int opcode,
546
tcg_temp_free_i32(tcg_res);
547
}
548
549
-/* C3.6.26 Floating-point data-processing (2 source) - double precision */
550
+/* Floating-point data-processing (2 source) - double precision */
551
static void handle_fp_2src_double(DisasContext *s, int opcode,
552
int rd, int rn, int rm)
553
{
554
@@ -XXX,XX +XXX,XX @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
555
tcg_temp_free_i64(tcg_res);
556
}
557
558
-/* C3.6.26 Floating point data-processing (2 source)
559
+/* Floating point data-processing (2 source)
560
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
561
* +---+---+---+-----------+------+---+------+--------+-----+------+------+
562
* | M | 0 | S | 1 1 1 1 0 | type | 1 | Rm | opcode | 1 0 | Rn | Rd |
563
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
564
}
565
}
566
567
-/* C3.6.27 Floating-point data-processing (3 source) - single precision */
568
+/* Floating-point data-processing (3 source) - single precision */
569
static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
570
int rd, int rn, int rm, int ra)
571
{
572
@@ -XXX,XX +XXX,XX @@ static void handle_fp_3src_single(DisasContext *s, bool o0, bool o1,
573
tcg_temp_free_i32(tcg_res);
574
}
575
576
-/* C3.6.27 Floating-point data-processing (3 source) - double precision */
577
+/* Floating-point data-processing (3 source) - double precision */
578
static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
579
int rd, int rn, int rm, int ra)
580
{
581
@@ -XXX,XX +XXX,XX @@ static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
582
tcg_temp_free_i64(tcg_res);
583
}
584
585
-/* C3.6.27 Floating point data-processing (3 source)
586
+/* Floating point data-processing (3 source)
587
* 31 30 29 28 24 23 22 21 20 16 15 14 10 9 5 4 0
588
* +---+---+---+-----------+------+----+------+----+------+------+------+
589
* | M | 0 | S | 1 1 1 1 1 | type | o1 | Rm | o0 | Ra | Rn | Rd |
590
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
591
}
592
}
593
594
-/* C3.6.28 Floating point immediate
595
+/* Floating point immediate
596
* 31 30 29 28 24 23 22 21 20 13 12 10 9 5 4 0
597
* +---+---+---+-----------+------+---+------------+-------+------+------+
598
* | M | 0 | S | 1 1 1 1 0 | type | 1 | imm8 | 1 0 0 | imm5 | Rd |
599
@@ -XXX,XX +XXX,XX @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
600
tcg_temp_free_i32(tcg_shift);
601
}
602
603
-/* C3.6.29 Floating point <-> fixed point conversions
604
+/* Floating point <-> fixed point conversions
605
* 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
606
* +----+---+---+-----------+------+---+-------+--------+-------+------+------+
607
* | sf | 0 | S | 1 1 1 1 0 | type | 0 | rmode | opcode | scale | Rn | Rd |
608
@@ -XXX,XX +XXX,XX @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
609
}
610
}
611
612
-/* C3.6.30 Floating point <-> integer conversions
613
+/* Floating point <-> integer conversions
614
* 31 30 29 28 24 23 22 21 20 19 18 16 15 10 9 5 4 0
615
* +----+---+---+-----------+------+---+-------+-----+-------------+----+----+
616
* | sf | 0 | S | 1 1 1 1 0 | type | 1 | rmode | opc | 0 0 0 0 0 0 | Rn | Rd |
617
@@ -XXX,XX +XXX,XX @@ static void do_ext64(DisasContext *s, TCGv_i64 tcg_left, TCGv_i64 tcg_right,
618
tcg_temp_free_i64(tcg_tmp);
619
}
620
621
-/* C3.6.1 EXT
622
+/* EXT
623
* 31 30 29 24 23 22 21 20 16 15 14 11 10 9 5 4 0
624
* +---+---+-------------+-----+---+------+---+------+---+------+------+
625
* | 0 | Q | 1 0 1 1 1 0 | op2 | 0 | Rm | 0 | imm4 | 0 | Rn | Rd |
626
@@ -XXX,XX +XXX,XX @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
627
tcg_temp_free_i64(tcg_resh);
628
}
629
630
-/* C3.6.2 TBL/TBX
631
+/* TBL/TBX
632
* 31 30 29 24 23 22 21 20 16 15 14 13 12 11 10 9 5 4 0
633
* +---+---+-------------+-----+---+------+---+-----+----+-----+------+------+
634
* | 0 | Q | 0 0 1 1 1 0 | op2 | 0 | Rm | 0 | len | op | 0 0 | Rn | Rd |
635
@@ -XXX,XX +XXX,XX @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
636
tcg_temp_free_i64(tcg_resh);
637
}
638
639
-/* C3.6.3 ZIP/UZP/TRN
640
+/* ZIP/UZP/TRN
641
* 31 30 29 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
642
* +---+---+-------------+------+---+------+---+------------------+------+
643
* | 0 | Q | 0 0 1 1 1 0 | size | 0 | Rm | 0 | opc | 1 0 | Rn | Rd |
644
@@ -XXX,XX +XXX,XX @@ static void do_minmaxop(DisasContext *s, TCGv_i32 tcg_elt1, TCGv_i32 tcg_elt2,
645
}
646
}
647
648
-/* C3.6.4 AdvSIMD across lanes
649
+/* AdvSIMD across lanes
650
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
651
* +---+---+---+-----------+------+-----------+--------+-----+------+------+
652
* | 0 | Q | U | 0 1 1 1 0 | size | 1 1 0 0 0 | opcode | 1 0 | Rn | Rd |
653
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
654
tcg_temp_free_i64(tcg_res);
655
}
656
657
-/* C6.3.31 DUP (Element, Vector)
658
+/* DUP (Element, Vector)
659
*
660
* 31 30 29 21 20 16 15 10 9 5 4 0
661
* +---+---+-------------------+--------+-------------+------+------+
662
@@ -XXX,XX +XXX,XX @@ static void handle_simd_dupe(DisasContext *s, int is_q, int rd, int rn,
663
tcg_temp_free_i64(tmp);
664
}
665
666
-/* C6.3.31 DUP (element, scalar)
667
+/* DUP (element, scalar)
668
* 31 21 20 16 15 10 9 5 4 0
669
* +-----------------------+--------+-------------+------+------+
670
* | 0 1 0 1 1 1 1 0 0 0 0 | imm5 | 0 0 0 0 0 1 | Rn | Rd |
671
@@ -XXX,XX +XXX,XX @@ static void handle_simd_dupes(DisasContext *s, int rd, int rn,
672
tcg_temp_free_i64(tmp);
673
}
674
675
-/* C6.3.32 DUP (General)
676
+/* DUP (General)
677
*
678
* 31 30 29 21 20 16 15 10 9 5 4 0
679
* +---+---+-------------------+--------+-------------+------+------+
680
@@ -XXX,XX +XXX,XX @@ static void handle_simd_dupg(DisasContext *s, int is_q, int rd, int rn,
681
}
682
}
683
684
-/* C6.3.150 INS (Element)
685
+/* INS (Element)
686
*
687
* 31 21 20 16 15 14 11 10 9 5 4 0
688
* +-----------------------+--------+------------+---+------+------+
689
@@ -XXX,XX +XXX,XX @@ static void handle_simd_inse(DisasContext *s, int rd, int rn,
690
}
691
692
693
-/* C6.3.151 INS (General)
694
+/* INS (General)
695
*
696
* 31 21 20 16 15 10 9 5 4 0
697
* +-----------------------+--------+-------------+------+------+
698
@@ -XXX,XX +XXX,XX @@ static void handle_simd_insg(DisasContext *s, int rd, int rn, int imm5)
699
}
700
701
/*
702
- * C6.3.321 UMOV (General)
703
- * C6.3.237 SMOV (General)
704
+ * UMOV (General)
705
+ * SMOV (General)
706
*
707
* 31 30 29 21 20 16 15 12 10 9 5 4 0
708
* +---+---+-------------------+--------+-------------+------+------+
709
@@ -XXX,XX +XXX,XX @@ static void handle_simd_umov_smov(DisasContext *s, int is_q, int is_signed,
710
}
711
}
712
713
-/* C3.6.5 AdvSIMD copy
714
+/* AdvSIMD copy
715
* 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
716
* +---+---+----+-----------------+------+---+------+---+------+------+
717
* | 0 | Q | op | 0 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
718
@@ -XXX,XX +XXX,XX @@ static void disas_simd_copy(DisasContext *s, uint32_t insn)
719
}
720
}
721
722
-/* C3.6.6 AdvSIMD modified immediate
723
+/* AdvSIMD modified immediate
724
* 31 30 29 28 19 18 16 15 12 11 10 9 5 4 0
725
* +---+---+----+---------------------+-----+-------+----+---+-------+------+
726
* | 0 | Q | op | 0 1 1 1 1 0 0 0 0 0 | abc | cmode | o2 | 1 | defgh | Rd |
727
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
728
tcg_temp_free_i64(tcg_imm);
729
}
730
731
-/* C3.6.7 AdvSIMD scalar copy
732
+/* AdvSIMD scalar copy
733
* 31 30 29 28 21 20 16 15 14 11 10 9 5 4 0
734
* +-----+----+-----------------+------+---+------+---+------+------+
735
* | 0 1 | op | 1 1 1 1 0 0 0 0 | imm5 | 0 | imm4 | 1 | Rn | Rd |
736
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_copy(DisasContext *s, uint32_t insn)
737
handle_simd_dupes(s, rd, rn, imm5);
738
}
739
740
-/* C3.6.8 AdvSIMD scalar pairwise
741
+/* AdvSIMD scalar pairwise
742
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
743
* +-----+---+-----------+------+-----------+--------+-----+------+------+
744
* | 0 1 | U | 1 1 1 1 0 | size | 1 1 0 0 0 | opcode | 1 0 | Rn | Rd |
745
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
746
tcg_temp_free_i32(tcg_rmode);
747
}
748
749
-/* C3.6.9 AdvSIMD scalar shift by immediate
750
+/* AdvSIMD scalar shift by immediate
751
* 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
752
* +-----+---+-------------+------+------+--------+---+------+------+
753
* | 0 1 | U | 1 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
754
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_shift_imm(DisasContext *s, uint32_t insn)
755
}
756
}
757
758
-/* C3.6.10 AdvSIMD scalar three different
759
+/* AdvSIMD scalar three different
760
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
761
* +-----+---+-----------+------+---+------+--------+-----+------+------+
762
* | 0 1 | U | 1 1 1 1 0 | size | 1 | Rm | opcode | 0 0 | Rn | Rd |
763
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
764
}
765
}
766
767
-/* C3.6.11 AdvSIMD scalar three same
768
+/* AdvSIMD scalar three same
769
* 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
770
* +-----+---+-----------+------+---+------+--------+---+------+------+
771
* | 0 1 | U | 1 1 1 1 0 | size | 1 | Rm | opcode | 1 | Rn | Rd |
772
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
773
}
774
}
775
776
-/* C3.6.12 AdvSIMD scalar two reg misc
777
+/* AdvSIMD scalar two reg misc
778
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
779
* +-----+---+-----------+------+-----------+--------+-----+------+------+
780
* | 0 1 | U | 1 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
781
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
782
}
783
784
785
-/* C3.6.14 AdvSIMD shift by immediate
786
+/* AdvSIMD shift by immediate
787
* 31 30 29 28 23 22 19 18 16 15 11 10 9 5 4 0
788
* +---+---+---+-------------+------+------+--------+---+------+------+
789
* | 0 | Q | U | 0 1 1 1 1 0 | immh | immb | opcode | 1 | Rn | Rd |
790
@@ -XXX,XX +XXX,XX @@ static void handle_pmull_64(DisasContext *s, int is_q, int rd, int rn, int rm)
791
tcg_temp_free_i64(tcg_res);
792
}
793
794
-/* C3.6.15 AdvSIMD three different
795
+/* AdvSIMD three different
796
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
797
* +---+---+---+-----------+------+---+------+--------+-----+------+------+
798
* | 0 | Q | U | 0 1 1 1 0 | size | 1 | Rm | opcode | 0 0 | Rn | Rd |
799
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
800
}
801
}
802
803
-/* C3.6.16 AdvSIMD three same
804
+/* AdvSIMD three same
805
* 31 30 29 28 24 23 22 21 20 16 15 11 10 9 5 4 0
806
* +---+---+---+-----------+------+---+------+--------+---+------+------+
807
* | 0 | Q | U | 0 1 1 1 0 | size | 1 | Rm | opcode | 1 | Rn | Rd |
808
@@ -XXX,XX +XXX,XX @@ static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
809
}
810
}
811
812
-/* C3.6.17 AdvSIMD two reg misc
813
+/* AdvSIMD two reg misc
814
* 31 30 29 28 24 23 22 21 17 16 12 11 10 9 5 4 0
815
* +---+---+---+-----------+------+-----------+--------+-----+------+------+
816
* | 0 | Q | U | 0 1 1 1 0 | size | 1 0 0 0 0 | opcode | 1 0 | Rn | Rd |
817
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
818
}
819
}
820
821
-/* C3.6.13 AdvSIMD scalar x indexed element
822
+/* AdvSIMD scalar x indexed element
823
* 31 30 29 28 24 23 22 21 20 19 16 15 12 11 10 9 5 4 0
824
* +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
825
* | 0 1 | U | 1 1 1 1 1 | size | L | M | Rm | opc | H | 0 | Rn | Rd |
826
* +-----+---+-----------+------+---+---+------+-----+---+---+------+------+
827
- * C3.6.18 AdvSIMD vector x indexed element
828
+ * AdvSIMD vector x indexed element
829
* 31 30 29 28 24 23 22 21 20 19 16 15 12 11 10 9 5 4 0
830
* +---+---+---+-----------+------+---+---+------+-----+---+---+------+------+
831
* | 0 | Q | U | 0 1 1 1 1 | size | L | M | Rm | opc | H | 0 | Rn | Rd |
832
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
833
}
834
}
835
836
-/* C3.6.19 Crypto AES
837
+/* Crypto AES
838
* 31 24 23 22 21 17 16 12 11 10 9 5 4 0
839
* +-----------------+------+-----------+--------+-----+------+------+
840
* | 0 1 0 0 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
841
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
842
tcg_temp_free_i32(tcg_decrypt);
843
}
844
845
-/* C3.6.20 Crypto three-reg SHA
846
+/* Crypto three-reg SHA
847
* 31 24 23 22 21 20 16 15 14 12 11 10 9 5 4 0
848
* +-----------------+------+---+------+---+--------+-----+------+------+
849
* | 0 1 0 1 1 1 1 0 | size | 0 | Rm | 0 | opcode | 0 0 | Rn | Rd |
850
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
851
tcg_temp_free_i32(tcg_rm_regno);
852
}
853
854
-/* C3.6.21 Crypto two-reg SHA
855
+/* Crypto two-reg SHA
856
* 31 24 23 22 21 17 16 12 11 10 9 5 4 0
857
* +-----------------+------+-----------+--------+-----+------+------+
858
* | 0 1 0 1 1 1 1 0 | size | 1 0 1 0 0 | opcode | 1 0 | Rn | Rd |
859
--
86
--
860
2.7.4
87
2.19.1
861
88
862
89
1
Update the static_ops functions to use new-style mmio
1
From: Richard Henderson <richard.henderson@linaro.org>
2
rather than the legacy old_mmio functions.
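As a rough sketch of what the new-style interface looks like (the demo_*
names are illustrative, not part of this patch), a device supplies one
size-aware read and one size-aware write callback plus access-size limits,
instead of separate byte/halfword/word handlers:

    /* Minimal new-style MemoryRegionOps, assuming "exec/memory.h";
     * the region is backed by a single 32-bit value, as in palm.c.
     */
    static uint64_t demo_read(void *opaque, hwaddr offset, unsigned size)
    {
        uint32_t *val = opaque;

        /* one callback serves byte, halfword and word reads; 'size'
         * is the access width in bytes, so shift the addressed byte
         * lane into the low bits
         */
        return *val >> ((offset & 3) * 8);
    }

    static void demo_write(void *opaque, hwaddr offset, uint64_t value,
                           unsigned size)
    {
        /* writes to this demo region are simply ignored */
    }

    static const MemoryRegionOps demo_ops = {
        .read = demo_read,
        .write = demo_write,
        .valid.min_access_size = 1,
        .valid.max_access_size = 4,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };

The conversion in the diff below follows the same shape.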
3
2
3
This is done generically in translator_loop.
4
5
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1505580378-9044-2-git-send-email-peter.maydell@linaro.org
7
---
11
---
8
hw/arm/palm.c | 30 ++++++++++--------------------
12
target/arm/translate-a64.c | 1 -
9
1 file changed, 10 insertions(+), 20 deletions(-)
13
target/arm/translate.c | 1 -
14
2 files changed, 2 deletions(-)
10
15
11
diff --git a/hw/arm/palm.c b/hw/arm/palm.c
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/arm/palm.c
18
--- a/target/arm/translate-a64.c
14
+++ b/hw/arm/palm.c
19
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
16
#include "exec/address-spaces.h"
21
17
#include "cpu.h"
22
static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
18
19
-static uint32_t static_readb(void *opaque, hwaddr offset)
20
+static uint64_t static_read(void *opaque, hwaddr offset, unsigned size)
21
{
23
{
22
- uint32_t *val = (uint32_t *) opaque;
24
- tcg_clear_temp_count();
23
- return *val >> ((offset & 3) << 3);
24
-}
25
+ uint32_t *val = (uint32_t *)opaque;
26
+ uint32_t sizemask = 7 >> size;
27
28
-static uint32_t static_readh(void *opaque, hwaddr offset)
29
-{
30
- uint32_t *val = (uint32_t *) opaque;
31
- return *val >> ((offset & 1) << 3);
32
-}
33
-
34
-static uint32_t static_readw(void *opaque, hwaddr offset)
35
-{
36
- uint32_t *val = (uint32_t *) opaque;
37
- return *val >> ((offset & 0) << 3);
38
+ return *val >> ((offset & sizemask) << 3);
39
}
25
}
40
26
41
-static void static_write(void *opaque, hwaddr offset,
27
static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
42
- uint32_t value)
28
diff --git a/target/arm/translate.c b/target/arm/translate.c
43
+static void static_write(void *opaque, hwaddr offset, uint64_t value,
29
index XXXXXXX..XXXXXXX 100644
44
+ unsigned size)
30
--- a/target/arm/translate.c
45
{
31
+++ b/target/arm/translate.c
46
#ifdef SPY
32
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
47
printf("%s: value %08lx written at " PA_FMT "\n",
33
tcg_gen_movi_i32(tmp, 0);
48
@@ -XXX,XX +XXX,XX @@ static void static_write(void *opaque, hwaddr offset,
34
store_cpu_field(tmp, condexec_bits);
35
}
36
- tcg_clear_temp_count();
49
}
37
}
50
38
51
static const MemoryRegionOps static_ops = {
39
static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
52
- .old_mmio = {
53
- .read = { static_readb, static_readh, static_readw, },
54
- .write = { static_write, static_write, static_write, },
55
- },
56
+ .read = static_read,
57
+ .write = static_write,
58
+ .valid.min_access_size = 1,
59
+ .valid.max_access_size = 4,
60
.endianness = DEVICE_NATIVE_ENDIAN,
61
};
62
63
--
40
--
64
2.7.4
41
2.19.1
65
42
66
43
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 28 +++-------------------------
9
1 file changed, 3 insertions(+), 25 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
16
for (xs = 0; xs < selem; xs++) {
17
if (replicate) {
18
/* Load and replicate to all elements */
19
- uint64_t mulconst;
20
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
21
22
tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
23
get_mem_index(s), s->be_data + scale);
24
- switch (scale) {
25
- case 0:
26
- mulconst = 0x0101010101010101ULL;
27
- break;
28
- case 1:
29
- mulconst = 0x0001000100010001ULL;
30
- break;
31
- case 2:
32
- mulconst = 0x0000000100000001ULL;
33
- break;
34
- case 3:
35
- mulconst = 0;
36
- break;
37
- default:
38
- g_assert_not_reached();
39
- }
40
- if (mulconst) {
41
- tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
42
- }
43
- write_vec_element(s, tcg_tmp, rt, 0, MO_64);
44
- if (is_q) {
45
- write_vec_element(s, tcg_tmp, rt, 1, MO_64);
46
- }
47
+ tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
48
+ (is_q + 1) * 8, vec_full_reg_size(s),
49
+ tcg_tmp);
50
tcg_temp_free_i64(tcg_tmp);
51
- clear_vec_high(s, is_q, rt);
52
} else {
53
/* Load/store one element per register */
54
if (is_load) {
55
--
56
2.19.1
57
58
1
From: Subbaraya Sundeep <sundeep.lkml@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The SmartFusion2 SoC has a hardened microcontroller subsystem
3
For a sequence of loads or stores from a single register,
4
and a flash-based FPGA fabric. This patch adds support for the
4
little-endian operations can be promoted to an 8-byte op.
5
microcontroller subsystem in the SoC.
5
This can reduce the number of operations by a factor of 8.
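As a minimal sketch of the idea, using the same variables as the hunk
below: when all elements come from a single register and the data is
little-endian, the element size is simply widened to 64 bits before the
load/store loop runs, so e.g. an LD1 of a 16-byte vector of bytes issues
2 eight-byte loads instead of 16 one-byte loads:

    /* selem == 1: consecutive elements from a single register, so
     * adjacent little-endian elements can merge into one 64-bit op
     */
    if (selem == 1 && endian == MO_LE) {
        size = 3;                            /* MO_64 */
    }
    ebytes = 1 << size;                      /* bytes per element */
    elements = (is_q ? 16 : 8) / ebytes;     /* ops per vector */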
6
6
7
Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
8
Message-id: 20181011205206.3552-5-richard.henderson@linaro.org
9
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20170920201737.25723-5-f4bug@amsat.org
11
[PMD: drop cpu_model to directly use cpu type, check m3clk non null]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
11
---
14
hw/arm/Makefile.objs | 1 +
12
target/arm/translate-a64.c | 66 +++++++++++++++++++++++---------------
15
include/hw/arm/msf2-soc.h | 67 +++++++++++
13
1 file changed, 40 insertions(+), 26 deletions(-)
16
hw/arm/msf2-soc.c | 238 ++++++++++++++++++++++++++++++++++++++++
17
default-configs/arm-softmmu.mak | 1 +
18
4 files changed, 307 insertions(+)
19
create mode 100644 include/hw/arm/msf2-soc.h
20
create mode 100644 hw/arm/msf2-soc.c
21
14
22
diff --git a/hw/arm/Makefile.objs b/hw/arm/Makefile.objs
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
23
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/arm/Makefile.objs
17
--- a/target/arm/translate-a64.c
25
+++ b/hw/arm/Makefile.objs
18
+++ b/target/arm/translate-a64.c
26
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_FSL_IMX31) += fsl-imx31.o kzm.o
19
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
27
obj-$(CONFIG_FSL_IMX6) += fsl-imx6.o sabrelite.o
20
28
obj-$(CONFIG_ASPEED_SOC) += aspeed_soc.o aspeed.o
21
/* Store from vector register to memory */
29
obj-$(CONFIG_MPS2) += mps2.o
22
static void do_vec_st(DisasContext *s, int srcidx, int element,
30
+obj-$(CONFIG_MSF2) += msf2-soc.o
23
- TCGv_i64 tcg_addr, int size)
31
diff --git a/include/hw/arm/msf2-soc.h b/include/hw/arm/msf2-soc.h
24
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
32
new file mode 100644
25
{
33
index XXXXXXX..XXXXXXX
26
- TCGMemOp memop = s->be_data + size;
34
--- /dev/null
27
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
35
+++ b/include/hw/arm/msf2-soc.h
28
36
@@ -XXX,XX +XXX,XX @@
29
read_vec_element(s, tcg_tmp, srcidx, element, size);
37
+/*
30
- tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
38
+ * Microsemi Smartfusion2 SoC
31
+ tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
39
+ *
32
40
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
33
tcg_temp_free_i64(tcg_tmp);
41
+ *
34
}
42
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
35
43
+ * of this software and associated documentation files (the "Software"), to deal
36
/* Load from memory to vector register */
44
+ * in the Software without restriction, including without limitation the rights
37
static void do_vec_ld(DisasContext *s, int destidx, int element,
45
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
38
- TCGv_i64 tcg_addr, int size)
46
+ * copies of the Software, and to permit persons to whom the Software is
39
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
47
+ * furnished to do so, subject to the following conditions:
40
{
48
+ *
41
- TCGMemOp memop = s->be_data + size;
49
+ * The above copyright notice and this permission notice shall be included in
42
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
50
+ * all copies or substantial portions of the Software.
43
51
+ *
44
- tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
52
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
45
+ tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
53
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
46
write_vec_element(s, tcg_tmp, destidx, element, size);
54
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
47
55
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
48
tcg_temp_free_i64(tcg_tmp);
56
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
49
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
57
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
50
bool is_postidx = extract32(insn, 23, 1);
58
+ * THE SOFTWARE.
51
bool is_q = extract32(insn, 30, 1);
59
+ */
52
TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
60
+
53
+ TCGMemOp endian = s->be_data;
61
+#ifndef HW_ARM_MSF2_SOC_H
54
62
+#define HW_ARM_MSF2_SOC_H
55
- int ebytes = 1 << size;
63
+
56
- int elements = (is_q ? 128 : 64) / (8 << size);
64
+#include "hw/arm/armv7m.h"
57
+ int ebytes; /* bytes per element */
65
+#include "hw/timer/mss-timer.h"
58
+ int elements; /* elements per vector */
66
+#include "hw/misc/msf2-sysreg.h"
59
int rpt; /* num iterations */
67
+#include "hw/ssi/mss-spi.h"
60
int selem; /* structure elements */
68
+
61
int r;
69
+#define TYPE_MSF2_SOC "msf2-soc"
62
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
70
+#define MSF2_SOC(obj) OBJECT_CHECK(MSF2State, (obj), TYPE_MSF2_SOC)
63
gen_check_sp_alignment(s);
71
+
64
}
72
+#define MSF2_NUM_SPIS 2
65
73
+#define MSF2_NUM_UARTS 2
66
+ /* For our purposes, bytes are always little-endian. */
74
+
67
+ if (size == 0) {
75
+/*
68
+ endian = MO_LE;
76
+ * System timer consists of two programmable 32-bit
77
+ * decrementing counters that generate individual interrupts to
78
+ * the Cortex-M3 processor
79
+ */
80
+#define MSF2_NUM_TIMERS 2
81
+
82
+typedef struct MSF2State {
83
+ /*< private >*/
84
+ SysBusDevice parent_obj;
85
+ /*< public >*/
86
+
87
+ ARMv7MState armv7m;
88
+
89
+ char *cpu_type;
90
+ char *part_name;
91
+ uint64_t envm_size;
92
+ uint64_t esram_size;
93
+
94
+ uint32_t m3clk;
95
+ uint8_t apb0div;
96
+ uint8_t apb1div;
97
+
98
+ MSF2SysregState sysreg;
99
+ MSSTimerState timer;
100
+ MSSSpiState spi[MSF2_NUM_SPIS];
101
+} MSF2State;
102
+
103
+#endif
104
diff --git a/hw/arm/msf2-soc.c b/hw/arm/msf2-soc.c
105
new file mode 100644
106
index XXXXXXX..XXXXXXX
107
--- /dev/null
108
+++ b/hw/arm/msf2-soc.c
109
@@ -XXX,XX +XXX,XX @@
110
+/*
111
+ * SmartFusion2 SoC emulation.
112
+ *
113
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
114
+ *
115
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
116
+ * of this software and associated documentation files (the "Software"), to deal
117
+ * in the Software without restriction, including without limitation the rights
118
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
119
+ * copies of the Software, and to permit persons to whom the Software is
120
+ * furnished to do so, subject to the following conditions:
121
+ *
122
+ * The above copyright notice and this permission notice shall be included in
123
+ * all copies or substantial portions of the Software.
124
+ *
125
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
126
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
127
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
128
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
129
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
130
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
131
+ * THE SOFTWARE.
132
+ */
133
+
134
+#include "qemu/osdep.h"
135
+#include "qapi/error.h"
136
+#include "qemu-common.h"
137
+#include "hw/arm/arm.h"
138
+#include "exec/address-spaces.h"
139
+#include "hw/char/serial.h"
140
+#include "hw/boards.h"
141
+#include "sysemu/block-backend.h"
142
+#include "qemu/cutils.h"
143
+#include "hw/arm/msf2-soc.h"
144
+#include "hw/misc/unimp.h"
145
+
146
+#define MSF2_TIMER_BASE 0x40004000
147
+#define MSF2_SYSREG_BASE 0x40038000
148
+
149
+#define ENVM_BASE_ADDRESS 0x60000000
150
+
151
+#define SRAM_BASE_ADDRESS 0x20000000
152
+
153
+#define MSF2_ENVM_MAX_SIZE (512 * K_BYTE)
154
+
155
+/*
156
+ * eSRAM max size is 80k without SECDED (single error correction and
157
+ * dual error detection) feature and 64k with SECDED.
158
+ * We do not support SECDED now.
159
+ */
160
+#define MSF2_ESRAM_MAX_SIZE (80 * K_BYTE)
161
+
162
+static const uint32_t spi_addr[MSF2_NUM_SPIS] = { 0x40001000 , 0x40011000 };
163
+static const uint32_t uart_addr[MSF2_NUM_UARTS] = { 0x40000000 , 0x40010000 };
164
+
165
+static const int spi_irq[MSF2_NUM_SPIS] = { 2, 3 };
166
+static const int uart_irq[MSF2_NUM_UARTS] = { 10, 11 };
167
+static const int timer_irq[MSF2_NUM_TIMERS] = { 14, 15 };
168
+
169
+static void m2sxxx_soc_initfn(Object *obj)
170
+{
171
+ MSF2State *s = MSF2_SOC(obj);
172
+ int i;
173
+
174
+ object_initialize(&s->armv7m, sizeof(s->armv7m), TYPE_ARMV7M);
175
+ qdev_set_parent_bus(DEVICE(&s->armv7m), sysbus_get_default());
176
+
177
+ object_initialize(&s->sysreg, sizeof(s->sysreg), TYPE_MSF2_SYSREG);
178
+ qdev_set_parent_bus(DEVICE(&s->sysreg), sysbus_get_default());
179
+
180
+ object_initialize(&s->timer, sizeof(s->timer), TYPE_MSS_TIMER);
181
+ qdev_set_parent_bus(DEVICE(&s->timer), sysbus_get_default());
182
+
183
+ for (i = 0; i < MSF2_NUM_SPIS; i++) {
184
+ object_initialize(&s->spi[i], sizeof(s->spi[i]),
185
+ TYPE_MSS_SPI);
186
+ qdev_set_parent_bus(DEVICE(&s->spi[i]), sysbus_get_default());
187
+ }
188
+}
189
+
190
+static void m2sxxx_soc_realize(DeviceState *dev_soc, Error **errp)
191
+{
192
+ MSF2State *s = MSF2_SOC(dev_soc);
193
+ DeviceState *dev, *armv7m;
194
+ SysBusDevice *busdev;
195
+ Error *err = NULL;
196
+ int i;
197
+
198
+ MemoryRegion *system_memory = get_system_memory();
199
+ MemoryRegion *nvm = g_new(MemoryRegion, 1);
200
+ MemoryRegion *nvm_alias = g_new(MemoryRegion, 1);
201
+ MemoryRegion *sram = g_new(MemoryRegion, 1);
202
+
203
+ memory_region_init_rom(nvm, NULL, "MSF2.eNVM", s->envm_size,
204
+ &error_fatal);
205
+ /*
206
+ * On power-on, the eNVM region 0x60000000 is automatically
207
+ * remapped to the Cortex-M3 processor executable region
208
+ * start address (0x0). We do not support remapping other eNVM,
209
+ * eSRAM and DDR regions by the guest (via Sysreg) currently.
210
+ */
211
+ memory_region_init_alias(nvm_alias, NULL, "MSF2.eNVM",
212
+ nvm, 0, s->envm_size);
213
+
214
+ memory_region_add_subregion(system_memory, ENVM_BASE_ADDRESS, nvm);
215
+ memory_region_add_subregion(system_memory, 0, nvm_alias);
216
+
217
+ memory_region_init_ram(sram, NULL, "MSF2.eSRAM", s->esram_size,
218
+ &error_fatal);
219
+ memory_region_add_subregion(system_memory, SRAM_BASE_ADDRESS, sram);
220
+
221
+ armv7m = DEVICE(&s->armv7m);
222
+ qdev_prop_set_uint32(armv7m, "num-irq", 81);
223
+ qdev_prop_set_string(armv7m, "cpu-type", s->cpu_type);
224
+ object_property_set_link(OBJECT(&s->armv7m), OBJECT(get_system_memory()),
225
+ "memory", &error_abort);
226
+ object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);
227
+ if (err != NULL) {
228
+ error_propagate(errp, err);
229
+ return;
230
+ }
69
+ }
231
+
70
+
232
+ if (!s->m3clk) {
71
+ /* Consecutive little-endian elements from a single register
233
+ error_setg(errp, "Invalid m3clk value");
72
+ * can be promoted to a larger little-endian operation.
234
+ error_append_hint(errp, "m3clk can not be zero\n");
73
+ */
235
+ return;
74
+ if (selem == 1 && endian == MO_LE) {
75
+ size = 3;
236
+ }
76
+ }
237
+ system_clock_scale = NANOSECONDS_PER_SECOND / s->m3clk;
77
+ ebytes = 1 << size;
78
+ elements = (is_q ? 16 : 8) / ebytes;
238
+
79
+
239
+ for (i = 0; i < MSF2_NUM_UARTS; i++) {
80
tcg_rn = cpu_reg_sp(s, rn);
240
+ if (serial_hds[i]) {
81
tcg_addr = tcg_temp_new_i64();
241
+ serial_mm_init(get_system_memory(), uart_addr[i], 2,
82
tcg_gen_mov_i64(tcg_addr, tcg_rn);
242
+ qdev_get_gpio_in(armv7m, uart_irq[i]),
83
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
243
+ 115200, serial_hds[i], DEVICE_NATIVE_ENDIAN);
84
for (r = 0; r < rpt; r++) {
85
int e;
86
for (e = 0; e < elements; e++) {
87
- int tt = (rt + r) % 32;
88
int xs;
89
for (xs = 0; xs < selem; xs++) {
90
+ int tt = (rt + r + xs) % 32;
91
if (is_store) {
92
- do_vec_st(s, tt, e, tcg_addr, size);
93
+ do_vec_st(s, tt, e, tcg_addr, size, endian);
94
} else {
95
- do_vec_ld(s, tt, e, tcg_addr, size);
96
-
97
- /* For non-quad operations, setting a slice of the low
98
- * 64 bits of the register clears the high 64 bits (in
99
- * the ARM ARM pseudocode this is implicit in the fact
100
- * that 'rval' is a 64 bit wide variable).
101
- * For quad operations, we might still need to zero the
102
- * high bits of SVE. We optimize by noticing that we only
103
- * need to do this the first time we touch a register.
104
- */
105
- if (e == 0 && (r == 0 || xs == selem - 1)) {
106
- clear_vec_high(s, is_q, tt);
107
- }
108
+ do_vec_ld(s, tt, e, tcg_addr, size, endian);
109
}
110
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
111
- tt = (tt + 1) % 32;
112
}
113
}
114
}
115
116
+ if (!is_store) {
117
+ /* For non-quad operations, setting a slice of the low
118
+ * 64 bits of the register clears the high 64 bits (in
119
+ * the ARM ARM pseudocode this is implicit in the fact
120
+ * that 'rval' is a 64 bit wide variable).
121
+ * For quad operations, we might still need to zero the
122
+ * high bits of SVE.
123
+ */
124
+ for (r = 0; r < rpt * selem; r++) {
125
+ int tt = (rt + r) % 32;
126
+ clear_vec_high(s, is_q, tt);
244
+ }
127
+ }
245
+ }
128
+ }
246
+
129
+
247
+ dev = DEVICE(&s->timer);
130
if (is_postidx) {
248
+ /* APB0 clock is the timer input clock */
131
int rm = extract32(insn, 16, 5);
249
+ qdev_prop_set_uint32(dev, "clock-frequency", s->m3clk / s->apb0div);
132
if (rm == 31) {
250
+ object_property_set_bool(OBJECT(&s->timer), true, "realized", &err);
133
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
251
+ if (err != NULL) {
134
} else {
252
+ error_propagate(errp, err);
135
/* Load/store one element per register */
253
+ return;
136
if (is_load) {
254
+ }
137
- do_vec_ld(s, rt, index, tcg_addr, scale);
255
+ busdev = SYS_BUS_DEVICE(dev);
138
+ do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
256
+ sysbus_mmio_map(busdev, 0, MSF2_TIMER_BASE);
139
} else {
257
+ sysbus_connect_irq(busdev, 0,
140
- do_vec_st(s, rt, index, tcg_addr, scale);
258
+ qdev_get_gpio_in(armv7m, timer_irq[0]));
141
+ do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
259
+ sysbus_connect_irq(busdev, 1,
142
}
260
+ qdev_get_gpio_in(armv7m, timer_irq[1]));
143
}
261
+
144
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
262
+ dev = DEVICE(&s->sysreg);
263
+ qdev_prop_set_uint32(dev, "apb0divisor", s->apb0div);
264
+ qdev_prop_set_uint32(dev, "apb1divisor", s->apb1div);
265
+ object_property_set_bool(OBJECT(&s->sysreg), true, "realized", &err);
266
+ if (err != NULL) {
267
+ error_propagate(errp, err);
268
+ return;
269
+ }
270
+ busdev = SYS_BUS_DEVICE(dev);
271
+ sysbus_mmio_map(busdev, 0, MSF2_SYSREG_BASE);
272
+
273
+ for (i = 0; i < MSF2_NUM_SPIS; i++) {
274
+ gchar *bus_name;
275
+
276
+ object_property_set_bool(OBJECT(&s->spi[i]), true, "realized", &err);
277
+ if (err != NULL) {
278
+ error_propagate(errp, err);
279
+ return;
280
+ }
281
+
282
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->spi[i]), 0, spi_addr[i]);
283
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->spi[i]), 0,
284
+ qdev_get_gpio_in(armv7m, spi_irq[i]));
285
+
286
+ /* Alias controller SPI bus to the SoC itself */
287
+ bus_name = g_strdup_printf("spi%d", i);
288
+ object_property_add_alias(OBJECT(s), bus_name,
289
+ OBJECT(&s->spi[i]), "spi",
290
+ &error_abort);
291
+ g_free(bus_name);
292
+ }
293
+
294
+ /* Below devices are not modelled yet. */
295
+ create_unimplemented_device("i2c_0", 0x40002000, 0x1000);
296
+ create_unimplemented_device("dma", 0x40003000, 0x1000);
297
+ create_unimplemented_device("watchdog", 0x40005000, 0x1000);
298
+ create_unimplemented_device("i2c_1", 0x40012000, 0x1000);
299
+ create_unimplemented_device("gpio", 0x40013000, 0x1000);
300
+ create_unimplemented_device("hs-dma", 0x40014000, 0x1000);
301
+ create_unimplemented_device("can", 0x40015000, 0x1000);
302
+ create_unimplemented_device("rtc", 0x40017000, 0x1000);
303
+ create_unimplemented_device("apb_config", 0x40020000, 0x10000);
304
+ create_unimplemented_device("emac", 0x40041000, 0x1000);
305
+ create_unimplemented_device("usb", 0x40043000, 0x1000);
306
+}
307
+
308
+static Property m2sxxx_soc_properties[] = {
309
+ /*
310
+ * part-name specifies the type of SmartFusion2 device variant (this
311
+ * property is for information purposes only).
312
+ */
313
+ DEFINE_PROP_STRING("cpu-type", MSF2State, cpu_type),
314
+ DEFINE_PROP_STRING("part-name", MSF2State, part_name),
315
+ DEFINE_PROP_UINT64("eNVM-size", MSF2State, envm_size, MSF2_ENVM_MAX_SIZE),
316
+ DEFINE_PROP_UINT64("eSRAM-size", MSF2State, esram_size,
317
+ MSF2_ESRAM_MAX_SIZE),
318
+ /* Libero GUI shows 100Mhz as default for clocks */
319
+ DEFINE_PROP_UINT32("m3clk", MSF2State, m3clk, 100 * 1000000),
320
+ /* default divisors in Libero GUI */
321
+ DEFINE_PROP_UINT8("apb0div", MSF2State, apb0div, 2),
322
+ DEFINE_PROP_UINT8("apb1div", MSF2State, apb1div, 2),
323
+ DEFINE_PROP_END_OF_LIST(),
324
+};
325
+
326
+static void m2sxxx_soc_class_init(ObjectClass *klass, void *data)
327
+{
328
+ DeviceClass *dc = DEVICE_CLASS(klass);
329
+
330
+ dc->realize = m2sxxx_soc_realize;
331
+ dc->props = m2sxxx_soc_properties;
332
+}
333
+
334
+static const TypeInfo m2sxxx_soc_info = {
335
+ .name = TYPE_MSF2_SOC,
336
+ .parent = TYPE_SYS_BUS_DEVICE,
337
+ .instance_size = sizeof(MSF2State),
338
+ .instance_init = m2sxxx_soc_initfn,
339
+ .class_init = m2sxxx_soc_class_init,
340
+};
341
+
342
+static void m2sxxx_soc_types(void)
343
+{
344
+ type_register_static(&m2sxxx_soc_info);
345
+}
346
+
347
+type_init(m2sxxx_soc_types)
348
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
349
index XXXXXXX..XXXXXXX 100644
350
--- a/default-configs/arm-softmmu.mak
351
+++ b/default-configs/arm-softmmu.mak
352
@@ -XXX,XX +XXX,XX @@ CONFIG_ACPI=y
353
CONFIG_SMBIOS=y
354
CONFIG_ASPEED_SOC=y
355
CONFIG_GPIO_KEY=y
356
+CONFIG_MSF2=y
357
--
145
--
358
2.7.4
146
2.19.1
359
147
360
148
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
6
[PMM: drop change to now-deleted cpu_mode_names array]
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/translate.c | 4 ++--
11
1 file changed, 2 insertions(+), 2 deletions(-)
12
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/translate.c
16
+++ b/target/arm/translate.c
17
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;
18
19
#include "exec/gen-icount.h"
20
21
-static const char *regnames[] =
22
+static const char * const regnames[] =
23
{ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
24
"r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
25
26
@@ -XXX,XX +XXX,XX @@ static struct {
27
int nregs;
28
int interleave;
29
int spacing;
30
-} neon_ls_element_type[11] = {
31
+} const neon_ls_element_type[11] = {
32
{4, 4, 1},
33
{4, 4, 2},
34
{4, 1, 1},
35
--
36
2.19.1
37
38
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Also introduces neon_element_offset to find the env offset
4
of a specific element within a neon register.
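On a big-endian host the 8-byte units of the register file are stored
host-endian, so the little-endian element offset is fixed up with an XOR.
A worked example for MO_16, restating the hunk below as a sketch:

    long ofs = element * element_size;   /* offset in LE layout */
    #ifdef HOST_WORDS_BIGENDIAN
    if (element_size < 8) {
        /* MO_16: element_size = 2, so ofs ^= 6 maps LE offsets
         * {0, 2, 4, 6} to {6, 4, 2, 0}, reversing the halfwords
         * within each 64-bit unit while leaving the units alone
         */
        ofs ^= 8 - element_size;
    }
    #endif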
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
12
1 file changed, 36 insertions(+), 27 deletions(-)
13
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
17
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
19
return vfp_reg_offset(0, sreg);
20
}
21
22
+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
23
+ * where 0 is the least significant end of the register.
24
+ */
25
+static inline long
26
+neon_element_offset(int reg, int element, TCGMemOp size)
27
+{
28
+ int element_size = 1 << size;
29
+ int ofs = element * element_size;
30
+#ifdef HOST_WORDS_BIGENDIAN
31
+ /* Calculate the offset assuming fully little-endian,
32
+ * then XOR to account for the order of the 8-byte units.
33
+ */
34
+ if (element_size < 8) {
35
+ ofs ^= 8 - element_size;
36
+ }
37
+#endif
38
+ return neon_reg_offset(reg, 0) + ofs;
39
+}
40
+
41
static TCGv_i32 neon_load_reg(int reg, int pass)
42
{
43
TCGv_i32 tmp = tcg_temp_new_i32();
44
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
45
tmp = load_reg(s, rd);
46
if (insn & (1 << 23)) {
47
/* VDUP */
48
- if (size == 0) {
49
- gen_neon_dup_u8(tmp, 0);
50
- } else if (size == 1) {
51
- gen_neon_dup_low16(tmp);
52
- }
53
- for (n = 0; n <= pass * 2; n++) {
54
- tmp2 = tcg_temp_new_i32();
55
- tcg_gen_mov_i32(tmp2, tmp);
56
- neon_store_reg(rn, n, tmp2);
57
- }
58
- neon_store_reg(rn, n, tmp);
59
+ int vec_size = pass ? 16 : 8;
60
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
61
+ vec_size, vec_size, tmp);
62
+ tcg_temp_free_i32(tmp);
63
} else {
64
/* VMOV */
65
switch (size) {
66
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
67
tcg_temp_free_i32(tmp);
68
} else if ((insn & 0x380) == 0) {
69
/* VDUP */
70
+ int element;
71
+ TCGMemOp size;
72
+
73
if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
74
return 1;
75
}
76
- if (insn & (1 << 19)) {
77
- tmp = neon_load_reg(rm, 1);
78
- } else {
79
- tmp = neon_load_reg(rm, 0);
80
- }
81
if (insn & (1 << 16)) {
82
- gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
83
+ size = MO_8;
84
+ element = (insn >> 17) & 7;
85
} else if (insn & (1 << 17)) {
86
- if ((insn >> 18) & 1)
87
- gen_neon_dup_high16(tmp);
88
- else
89
- gen_neon_dup_low16(tmp);
90
+ size = MO_16;
91
+ element = (insn >> 18) & 3;
92
+ } else {
93
+ size = MO_32;
94
+ element = (insn >> 19) & 1;
95
}
96
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
97
- tmp2 = tcg_temp_new_i32();
98
- tcg_gen_mov_i32(tmp2, tmp);
99
- neon_store_reg(rd, pass, tmp2);
100
- }
101
- tcg_temp_free_i32(tmp);
102
+ tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
103
+ neon_element_offset(rm, element, size),
104
+ q ? 16 : 8, q ? 16 : 8);
105
} else {
106
return 1;
107
}
108
--
109
2.19.1
110
111
1
For v8M, the NVIC has a new set of registers per interrupt,
1
From: Richard Henderson <richard.henderson@linaro.org>
2
NVIC_ITNS<n>. These determine whether the interrupt targets Secure
3
or Non-secure state. Implement the register read/write code for
4
these, and make them cause NVIC_IABR, NVIC_ICER, NVIC_ISER,
5
NVIC_ICPR, NVIC_IPR and NVIC_ISPR to RAZ/WI for non-secure
6
accesses to fields corresponding to interrupts which are
7
configured to target secure state.
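The guard used throughout the register accessors reduces to a single
predicate per interrupt bit (a sketch; the hunks below apply it to the
enable, pending, active and priority fields):

    /* A Non-secure access may only see an interrupt whose ITNS bit
     * routes it to Non-secure; otherwise the field is RAZ/WI.
     */
    if (attrs.secure || s->itns[startvec + i]) {
        /* ... read or update s->vectors[startvec + i] ... */
    }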
8
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 1505240046-11454-8-git-send-email-peter.maydell@linaro.org
12
---
7
---
13
include/hw/intc/armv7m_nvic.h | 3 ++
8
target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
14
hw/intc/armv7m_nvic.c | 74 +++++++++++++++++++++++++++++++++++++++----
9
1 file changed, 39 insertions(+), 28 deletions(-)
15
2 files changed, 70 insertions(+), 7 deletions(-)
16
10
17
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/intc/armv7m_nvic.h
13
--- a/target/arm/translate.c
20
+++ b/include/hw/intc/armv7m_nvic.h
14
+++ b/target/arm/translate.c
21
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
22
/* The PRIGROUP field in AIRCR is banked */
16
return 1;
23
uint32_t prigroup[M_REG_NUM_BANKS];
17
}
24
18
} else { /* (insn & 0x00380080) == 0 */
25
+ /* v8M NVIC_ITNS state (stored as a bool per bit) */
19
- int invert;
26
+ bool itns[NVIC_MAX_VECTORS];
20
+ int invert, reg_ofs, vec_size;
27
+
21
+
28
/* The following fields are all cached state that can be recalculated
22
if (q && (rd & 1)) {
29
* from the vectors[] and sec_vectors[] arrays and the prigroup field:
23
return 1;
30
* - vectpending
24
}
31
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
25
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
32
index XXXXXXX..XXXXXXX 100644
26
break;
33
--- a/hw/intc/armv7m_nvic.c
27
case 14:
34
+++ b/hw/intc/armv7m_nvic.c
28
imm |= (imm << 8) | (imm << 16) | (imm << 24);
35
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
29
- if (invert)
36
switch (offset) {
30
+ if (invert) {
37
case 4: /* Interrupt Control Type. */
31
imm = ~imm;
38
return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
32
+ }
39
+ case 0x380 ... 0x3bf: /* NVIC_ITNS<n> */
33
break;
40
+ {
34
case 15:
41
+ int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;
35
if (invert) {
42
+ int i;
36
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
37
| ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
38
break;
39
}
40
- if (invert)
41
+ if (invert) {
42
imm = ~imm;
43
+ }
44
45
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
46
- if (op & 1 && op < 12) {
47
- tmp = neon_load_reg(rd, pass);
48
- if (invert) {
49
- /* The immediate value has already been inverted, so
50
- BIC becomes AND. */
51
- tcg_gen_andi_i32(tmp, tmp, imm);
52
- } else {
53
- tcg_gen_ori_i32(tmp, tmp, imm);
54
- }
55
+ reg_ofs = neon_reg_offset(rd, 0);
56
+ vec_size = q ? 16 : 8;
43
+
57
+
44
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
58
+ if (op & 1 && op < 12) {
45
+ goto bad_offset;
59
+ if (invert) {
46
+ }
60
+ /* The immediate value has already been inverted,
47
+ if (!attrs.secure) {
61
+ * so BIC becomes AND.
48
+ return 0;
62
+ */
49
+ }
63
+ tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
50
+ val = 0;
64
+ vec_size, vec_size);
51
+ for (i = 0; i < 32 && startvec + i < s->num_irq; i++) {
65
} else {
52
+ if (s->itns[startvec + i]) {
66
- /* VMOV, VMVN. */
53
+ val |= (1 << i);
67
- tmp = tcg_temp_new_i32();
54
+ }
68
- if (op == 14 && invert) {
55
+ }
69
- int n;
56
+ return val;
70
- uint32_t val;
57
+ }
71
- val = 0;
58
case 0xd00: /* CPUID Base. */
72
- for (n = 0; n < 4; n++) {
59
return cpu->midr;
73
- if (imm & (1 << (n + (pass & 1) * 4)))
60
case 0xd04: /* Interrupt Control State. */
74
- val |= 0xff << (n * 8);
61
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
75
- }
62
ARMCPU *cpu = s->cpu;
76
- tcg_gen_movi_i32(tmp, val);
63
77
- } else {
64
switch (offset) {
78
- tcg_gen_movi_i32(tmp, imm);
65
+ case 0x380 ... 0x3bf: /* NVIC_ITNS<n> */
79
- }
66
+ {
80
+ tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
67
+ int startvec = 32 * (offset - 0x380) + NVIC_FIRST_IRQ;
81
+ vec_size, vec_size);
68
+ int i;
82
+ }
83
+ } else {
84
+ /* VMOV, VMVN. */
85
+ if (op == 14 && invert) {
86
+ TCGv_i64 t64 = tcg_temp_new_i64();
69
+
87
+
70
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
88
+ for (pass = 0; pass <= q; ++pass) {
71
+ goto bad_offset;
89
+ uint64_t val = 0;
72
+ }
90
+ int n;
73
+ if (!attrs.secure) {
91
+
74
+ break;
92
+ for (n = 0; n < 8; n++) {
75
+ }
93
+ if (imm & (1 << (n + pass * 8))) {
76
+ for (i = 0; i < 32 && startvec + i < s->num_irq; i++) {
94
+ val |= 0xffull << (n * 8);
77
+ s->itns[startvec + i] = (value >> i) & 1;
95
+ }
78
+ }
96
+ }
79
+ nvic_irq_update(s);
97
+ tcg_gen_movi_i64(t64, val);
80
+ break;
98
+ neon_store_reg64(t64, rd + pass);
81
+ }
99
+ }
82
case 0xd04: /* Interrupt Control State. */
100
+ tcg_temp_free_i64(t64);
83
if (value & (1 << 31)) {
101
+ } else {
84
armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI);
102
+ tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
85
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
103
}
86
startvec = offset - 0x180 + NVIC_FIRST_IRQ; /* vector # */
104
- neon_store_reg(rd, pass, tmp);
87
88
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
89
- if (s->vectors[startvec + i].enabled) {
90
+ if (s->vectors[startvec + i].enabled &&
91
+ (attrs.secure || s->itns[startvec + i])) {
92
val |= (1 << i);
93
}
105
}
94
}
106
}
95
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
107
} else { /* (insn & 0x00800010 == 0x00800000) */
96
val = 0;
97
startvec = offset - 0x280 + NVIC_FIRST_IRQ; /* vector # */
98
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
99
- if (s->vectors[startvec + i].pending) {
100
+ if (s->vectors[startvec + i].pending &&
101
+ (attrs.secure || s->itns[startvec + i])) {
102
val |= (1 << i);
103
}
104
}
105
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
106
startvec = offset - 0x300 + NVIC_FIRST_IRQ; /* vector # */
107
108
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
109
- if (s->vectors[startvec + i].active) {
110
+ if (s->vectors[startvec + i].active &&
111
+ (attrs.secure || s->itns[startvec + i])) {
112
val |= (1 << i);
113
}
114
}
115
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
116
startvec = offset - 0x400 + NVIC_FIRST_IRQ; /* vector # */
117
118
for (i = 0; i < size && startvec + i < s->num_irq; i++) {
119
- val |= s->vectors[startvec + i].prio << (8 * i);
120
+ if (attrs.secure || s->itns[startvec + i]) {
121
+ val |= s->vectors[startvec + i].prio << (8 * i);
122
+ }
123
}
124
break;
125
case 0xd18 ... 0xd23: /* System Handler Priority. */
126
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
127
startvec = 8 * (offset - 0x180) + NVIC_FIRST_IRQ;
128
129
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
130
- if (value & (1 << i)) {
131
+ if (value & (1 << i) &&
132
+ (attrs.secure || s->itns[startvec + i])) {
133
s->vectors[startvec + i].enabled = setval;
134
}
135
}
136
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
137
startvec = 8 * (offset - 0x280) + NVIC_FIRST_IRQ; /* vector # */
138
139
for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
140
- if (value & (1 << i)) {
141
+ if (value & (1 << i) &&
142
+ (attrs.secure || s->itns[startvec + i])) {
143
s->vectors[startvec + i].pending = setval;
144
}
145
}
146
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
147
startvec = 8 * (offset - 0x400) + NVIC_FIRST_IRQ; /* vector # */
148
149
for (i = 0; i < size && startvec + i < s->num_irq; i++) {
150
- set_prio(s, startvec + i, (value >> (i * 8)) & 0xff);
151
+ if (attrs.secure || s->itns[startvec + i]) {
152
+ set_prio(s, startvec + i, (value >> (i * 8)) & 0xff);
153
+ }
154
}
155
nvic_irq_update(s);
156
return MEMTX_OK;
157
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_nvic_security = {
158
VMSTATE_STRUCT_ARRAY(sec_vectors, NVICState, NVIC_INTERNAL_VECTORS, 1,
159
vmstate_VecInfo, VecInfo),
160
VMSTATE_UINT32(prigroup[M_REG_S], NVICState),
161
+ VMSTATE_BOOL_ARRAY(itns, NVICState, NVIC_MAX_VECTORS),
162
VMSTATE_END_OF_LIST()
163
}
164
};
165
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
166
s->vectpending = 0;
167
s->vectpending_is_s_banked = false;
168
s->vectpending_prio = NVIC_NOEXC_PRIO;
169
+
170
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
171
+ memset(s->itns, 0, sizeof(s->itns));
172
+ } else {
173
+ /* This state is constant and not guest accessible in a non-security
174
+ * NVIC; we set the bits to true to avoid having to do a feature
175
+ * bit check in the NVIC enable/pend/etc register accessors.
176
+ */
177
+ int i;
178
+
179
+ for (i = NVIC_FIRST_IRQ; i < ARRAY_SIZE(s->itns); i++) {
180
+ s->itns[i] = true;
181
+ }
182
+ }
183
}
184
185
static void nvic_systick_trigger(void *opaque, int n, int level)
186
--
108
--
187
2.7.4
109
2.19.1
188
110
189
111
1
From: Subbaraya Sundeep <sundeep.lkml@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Modelled the system timer in Microsemi's SmartFusion2 SoC.
3
Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.
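All three expanders are variations on one bitwise-select identity; as a
scalar sketch (bsl64 is an illustrative name, not from the patch):

    /* Select rn where the mask bit is set, rm where it is clear:
     * rm ^ ((rn ^ rm) & mask) == (rn & mask) | (rm & ~mask)
     */
    static inline uint64_t bsl64(uint64_t mask, uint64_t rn, uint64_t rm)
    {
        return rm ^ ((rn ^ rm) & mask);
    }

VBIT and VBIF differ only in which operand supplies the mask.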
4
The timer has two 32-bit down counters and two interrupts.
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Subbaraya Sundeep <sundeep.lkml@gmail.com>
6
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
7
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 20170920201737.25723-2-f4bug@amsat.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
9
---
13
hw/timer/Makefile.objs | 1 +
10
target/arm/translate.h | 6 ++
14
include/hw/timer/mss-timer.h | 64 ++++++++++
11
target/arm/translate-a64.c | 61 --------------
15
hw/timer/mss-timer.c | 289 +++++++++++++++++++++++++++++++++++++++++++
12
target/arm/translate.c | 162 +++++++++++++++++++++++++++----------
16
3 files changed, 354 insertions(+)
13
 3 files changed, 124 insertions(+), 105 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
     return ret;
 }
 
+/* Vector operations shared between ARM and AArch64.  */
+extern const GVecGen3 bsl_op;
+extern const GVecGen3 bit_op;
+extern const GVecGen3 bif_op;
+
 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
  */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
     }
 }
 
-static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rm);
-    tcg_gen_and_i64(rn, rn, rd);
-    tcg_gen_xor_i64(rd, rm, rn);
-}
-
-static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rd);
-    tcg_gen_and_i64(rn, rn, rm);
-    tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
-    tcg_gen_xor_i64(rn, rn, rd);
-    tcg_gen_andc_i64(rn, rn, rm);
-    tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rm);
-    tcg_gen_and_vec(vece, rn, rn, rd);
-    tcg_gen_xor_vec(vece, rd, rm, rn);
-}
-
-static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rd);
-    tcg_gen_and_vec(vece, rn, rn, rm);
-    tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
-static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
-    tcg_gen_xor_vec(vece, rn, rn, rd);
-    tcg_gen_andc_vec(vece, rn, rn, rm);
-    tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
 /* Logic op (opcode == 3) subgroup of C3.6.16. */
 static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
 {
-    static const GVecGen3 bsl_op = {
-        .fni8 = gen_bsl_i64,
-        .fniv = gen_bsl_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-    static const GVecGen3 bit_op = {
-        .fni8 = gen_bit_i64,
-        .fniv = gen_bit_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-    static const GVecGen3 bif_op = {
-        .fni8 = gen_bif_i64,
-        .fniv = gen_bif_vec,
-        .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-        .load_dest = true
-    };
-
     int rd = extract32(insn, 0, 5);
     int rn = extract32(insn, 5, 5);
     int rm = extract32(insn, 16, 5);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     return 0;
 }
 
-/* Bitwise select.  dest = c ? t : f.  Clobbers T and F.  */
-static void gen_neon_bsl(TCGv_i32 dest, TCGv_i32 t, TCGv_i32 f, TCGv_i32 c)
-{
-    tcg_gen_and_i32(t, t, c);
-    tcg_gen_andc_i32(f, f, c);
-    tcg_gen_or_i32(dest, t, f);
-}
-
 static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
 {
     switch (size) {
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
     return 1;
 }
 
+/*
+ * Expanders for VBitOps_VBIF, VBIT, VBSL.
+ */
+static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rm);
+    tcg_gen_and_i64(rn, rn, rd);
+    tcg_gen_xor_i64(rd, rm, rn);
+}
+
+static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rd);
+    tcg_gen_and_i64(rn, rn, rm);
+    tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+    tcg_gen_xor_i64(rn, rn, rd);
+    tcg_gen_andc_i64(rn, rn, rm);
+    tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rm);
+    tcg_gen_and_vec(vece, rn, rn, rd);
+    tcg_gen_xor_vec(vece, rd, rm, rn);
+}
+
+static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rd);
+    tcg_gen_and_vec(vece, rn, rn, rm);
+    tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+    tcg_gen_xor_vec(vece, rn, rn, rd);
+    tcg_gen_andc_vec(vece, rn, rn, rm);
+    tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+const GVecGen3 bsl_op = {
+    .fni8 = gen_bsl_i64,
+    .fniv = gen_bsl_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+const GVecGen3 bit_op = {
+    .fni8 = gen_bit_i64,
+    .fniv = gen_bit_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
+const GVecGen3 bif_op = {
+    .fni8 = gen_bif_i64,
+    .fniv = gen_bif_vec,
+    .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+    .load_dest = true
+};
+
 /* Translate a NEON data processing instruction.  Return nonzero if the
    instruction is invalid.
    We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 {
     int op;
     int q;
-    int rd, rn, rm;
+    int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
     int size;
     int shift;
     int pass;
     int count;
     int pairwise;
     int u;
+    int vec_size;
     uint32_t imm, mask;
     TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
     TCGv_ptr ptr1, ptr2, ptr3;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
     VFP_DREG_N(rn, insn);
     VFP_DREG_M(rm, insn);
     size = (insn >> 20) & 3;
+    vec_size = q ? 16 : 8;
+    rd_ofs = neon_reg_offset(rd, 0);
+    rn_ofs = neon_reg_offset(rn, 0);
+    rm_ofs = neon_reg_offset(rm, 0);
+
     if ((insn & (1 << 23)) == 0) {
         /* Three register same length.  */
         op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                               q, rd, rn, rm);
         }
         return 1;
+
+    case NEON_3R_LOGIC: /* Logic ops.  */
+        switch ((u << 2) | size) {
+        case 0: /* VAND */
+            tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            break;
+        case 1: /* VBIC */
+            tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
+                              vec_size, vec_size);
+            break;
+        case 2:
+            if (rn == rm) {
+                /* VMOV */
+                tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
+            } else {
+                /* VORR */
+                tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
+                                vec_size, vec_size);
+            }
+            break;
+        case 3: /* VORN */
+            tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            break;
+        case 4: /* VEOR */
+            tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            break;
+        case 5: /* VBSL */
+            tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                           vec_size, vec_size, &bsl_op);
+            break;
+        case 6: /* VBIT */
+            tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                           vec_size, vec_size, &bit_op);
+            break;
+        case 7: /* VBIF */
+            tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+                           vec_size, vec_size, &bif_op);
+            break;
+        }
+        return 0;
     }
-    if (size == 3 && op != NEON_3R_LOGIC) {
+    if (size == 3) {
         /* 64-bit element instructions.  */
         for (pass = 0; pass < (q ? 2 : 1); pass++) {
             neon_load_reg64(cpu_V0, rn + pass);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         case NEON_3R_VRHADD:
             GEN_NEON_INTEGER_OP(rhadd);
             break;
-        case NEON_3R_LOGIC: /* Logic ops.  */
-            switch ((u << 2) | size) {
-            case 0: /* VAND */
-                tcg_gen_and_i32(tmp, tmp, tmp2);
-                break;
-            case 1: /* BIC */
-                tcg_gen_andc_i32(tmp, tmp, tmp2);
-                break;
-            case 2: /* VORR */
-                tcg_gen_or_i32(tmp, tmp, tmp2);
-                break;
-            case 3: /* VORN */
-                tcg_gen_orc_i32(tmp, tmp, tmp2);
-                break;
-            case 4: /* VEOR */
-                tcg_gen_xor_i32(tmp, tmp, tmp2);
-                break;
-            case 5: /* VBSL */
-                tmp3 = neon_load_reg(rd, pass);
-                gen_neon_bsl(tmp, tmp, tmp2, tmp3);
-                tcg_temp_free_i32(tmp3);
-                break;
-            case 6: /* VBIT */
-                tmp3 = neon_load_reg(rd, pass);
-                gen_neon_bsl(tmp, tmp, tmp3, tmp2);
-                tcg_temp_free_i32(tmp3);
-                break;
-            case 7: /* VBIF */
-                tmp3 = neon_load_reg(rd, pass);
-                gen_neon_bsl(tmp, tmp3, tmp, tmp2);
-                tcg_temp_free_i32(tmp3);
-                break;
-            }
-            break;
         case NEON_3R_VHSUB:
             GEN_NEON_INTEGER_OP(hsub);
             break;
--
2.19.1
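
The xor/and/xor sequence in the gen_bsl/bit/bif expanders above is the classic branch-free bitwise select. A minimal standalone C sketch of the identity, for reference (the helper name here is illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Bitwise select: where c has a 1 bit take t, elsewhere take f.
 * ((t ^ f) & c) ^ f is the same dataflow the expanders emit:
 * two XORs and one AND, no branches and no extra temporary.
 */
static uint64_t bitsel64(uint64_t t, uint64_t f, uint64_t c)
{
    return ((t ^ f) & c) ^ f;
}

int main(void)
{
    uint64_t c = 0xffff0000ffff0000ull;
    /* Equivalent to the naive (t & c) | (f & ~c) formulation. */
    assert(bitsel64(0x1111111111111111ull, 0x2222222222222222ull, c) ==
           ((0x1111111111111111ull & c) | (0x2222222222222222ull & ~c)));
    return 0;
}
```

VBSL, VBIT and VBIF are all this one operation with the three registers playing different roles: for VBSL the destination supplies the select mask, for VBIT/VBIF a source operand does.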

 create mode 100644 include/hw/timer/mss-timer.h
 create mode 100644 hw/timer/mss-timer.c

diff --git a/hw/timer/Makefile.objs b/hw/timer/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/Makefile.objs
+++ b/hw/timer/Makefile.objs
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_ASPEED_SOC) += aspeed_timer.o
 
 common-obj-$(CONFIG_SUN4V_RTC) += sun4v-rtc.o
 common-obj-$(CONFIG_CMSDK_APB_TIMER) += cmsdk-apb-timer.o
+common-obj-$(CONFIG_MSF2) += mss-timer.o
diff --git a/include/hw/timer/mss-timer.h b/include/hw/timer/mss-timer.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/timer/mss-timer.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Microsemi SmartFusion2 Timer.
+ *
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#ifndef HW_MSS_TIMER_H
+#define HW_MSS_TIMER_H
+
+#include "hw/sysbus.h"
+#include "hw/ptimer.h"
+
+#define TYPE_MSS_TIMER     "mss-timer"
+#define MSS_TIMER(obj) OBJECT_CHECK(MSSTimerState, \
+                          (obj), TYPE_MSS_TIMER)
+
+/*
+ * There are two 32-bit down counting timers.
+ * Timers 1 and 2 can be concatenated into a single 64-bit Timer
+ * that operates either in Periodic mode or in One-shot mode.
+ * Writing 1 to the TIM64_MODE register bit 0 sets the Timers in 64-bit mode.
+ * In 64-bit mode, writing to the 32-bit registers has no effect.
+ * Similarly, in 32-bit mode, writing to the 64-bit mode registers
+ * has no effect. Only two 32-bit timers are supported currently.
+ */
+#define NUM_TIMERS        2
+
+#define R_TIM1_MAX        6
+
+struct Msf2Timer {
+    QEMUBH *bh;
+    ptimer_state *ptimer;
+
+    uint32_t regs[R_TIM1_MAX];
+    qemu_irq irq;
+};
+
+typedef struct MSSTimerState {
+    SysBusDevice parent_obj;
+
+    MemoryRegion mmio;
+    uint32_t freq_hz;
+    struct Msf2Timer timers[NUM_TIMERS];
+} MSSTimerState;
+
+#endif /* HW_MSS_TIMER_H */
diff --git a/hw/timer/mss-timer.c b/hw/timer/mss-timer.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/hw/timer/mss-timer.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * Block model of System timer present in
+ * Microsemi's SmartFusion2 and SmartFusion SoCs.
+ *
+ * Copyright (c) 2017 Subbaraya Sundeep <sundeep.lkml@gmail.com>.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/main-loop.h"
+#include "qemu/log.h"
+#include "hw/timer/mss-timer.h"
+
+#ifndef MSS_TIMER_ERR_DEBUG
+#define MSS_TIMER_ERR_DEBUG  0
+#endif
+
+#define DB_PRINT_L(lvl, fmt, args...) do { \
+    if (MSS_TIMER_ERR_DEBUG >= lvl) { \
+        qemu_log("%s: " fmt "\n", __func__, ## args); \
+    } \
+} while (0);
+
+#define DB_PRINT(fmt, args...) DB_PRINT_L(1, fmt, ## args)
+
+#define R_TIM_VAL         0
+#define R_TIM_LOADVAL     1
+#define R_TIM_BGLOADVAL   2
+#define R_TIM_CTRL        3
+#define R_TIM_RIS         4
+#define R_TIM_MIS         5
+
+#define TIMER_CTRL_ENBL     (1 << 0)
+#define TIMER_CTRL_ONESHOT  (1 << 1)
+#define TIMER_CTRL_INTR     (1 << 2)
+#define TIMER_RIS_ACK       (1 << 0)
+#define TIMER_RST_CLR       (1 << 6)
+#define TIMER_MODE          (1 << 0)
+
+static void timer_update_irq(struct Msf2Timer *st)
+{
+    bool isr, ier;
+
+    isr = !!(st->regs[R_TIM_RIS] & TIMER_RIS_ACK);
+    ier = !!(st->regs[R_TIM_CTRL] & TIMER_CTRL_INTR);
+    qemu_set_irq(st->irq, (ier && isr));
+}
+
+static void timer_update(struct Msf2Timer *st)
+{
+    uint64_t count;
+
+    if (!(st->regs[R_TIM_CTRL] & TIMER_CTRL_ENBL)) {
+        ptimer_stop(st->ptimer);
+        return;
+    }
+
+    count = st->regs[R_TIM_LOADVAL];
+    ptimer_set_limit(st->ptimer, count, 1);
+    ptimer_run(st->ptimer, 1);
+}
+
+static uint64_t
+timer_read(void *opaque, hwaddr offset, unsigned int size)
+{
+    MSSTimerState *t = opaque;
+    hwaddr addr;
+    struct Msf2Timer *st;
+    uint32_t ret = 0;
+    int timer = 0;
+    int isr;
+    int ier;
+
+    addr = offset >> 2;
+    /*
+     * The two independent timers have the same base address.
+     * Based on the address passed, figure out which timer is being used.
+     */
+    if ((addr >= R_TIM1_MAX) && (addr < NUM_TIMERS * R_TIM1_MAX)) {
+        timer = 1;
+        addr -= R_TIM1_MAX;
+    }
+
+    st = &t->timers[timer];
+
+    switch (addr) {
+    case R_TIM_VAL:
+        ret = ptimer_get_count(st->ptimer);
+        break;
+
+    case R_TIM_MIS:
+        isr = !!(st->regs[R_TIM_RIS] & TIMER_RIS_ACK);
+        ier = !!(st->regs[R_TIM_CTRL] & TIMER_CTRL_INTR);
+        ret = ier & isr;
+        break;
+
+    default:
+        if (addr < R_TIM1_MAX) {
+            ret = st->regs[addr];
+        } else {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          TYPE_MSS_TIMER": 64-bit mode not supported\n");
+            return ret;
+        }
+        break;
+    }
+
+    DB_PRINT("timer=%d 0x%" HWADDR_PRIx "=0x%" PRIx32, timer, offset,
+             ret);
+    return ret;
+}
+
+static void
+timer_write(void *opaque, hwaddr offset,
+            uint64_t val64, unsigned int size)
+{
+    MSSTimerState *t = opaque;
+    hwaddr addr;
+    struct Msf2Timer *st;
+    int timer = 0;
+    uint32_t value = val64;
+
+    addr = offset >> 2;
+    /*
+     * The two independent timers have the same base address.
+     * Based on the addr passed, figure out which timer is being used.
+     */
+    if ((addr >= R_TIM1_MAX) && (addr < NUM_TIMERS * R_TIM1_MAX)) {
+        timer = 1;
+        addr -= R_TIM1_MAX;
+    }
+
+    st = &t->timers[timer];
+
+    DB_PRINT("addr=0x%" HWADDR_PRIx " val=0x%" PRIx32 " (timer=%d)", offset,
+             value, timer);
+
+    switch (addr) {
+    case R_TIM_CTRL:
+        st->regs[R_TIM_CTRL] = value;
+        timer_update(st);
+        break;
+
+    case R_TIM_RIS:
+        if (value & TIMER_RIS_ACK) {
+            st->regs[R_TIM_RIS] &= ~TIMER_RIS_ACK;
+        }
+        break;
+
+    case R_TIM_LOADVAL:
+        st->regs[R_TIM_LOADVAL] = value;
+        if (st->regs[R_TIM_CTRL] & TIMER_CTRL_ENBL) {
+            timer_update(st);
+        }
+        break;
+
+    case R_TIM_BGLOADVAL:
+        st->regs[R_TIM_BGLOADVAL] = value;
+        st->regs[R_TIM_LOADVAL] = value;
+        break;
+
+    case R_TIM_VAL:
+    case R_TIM_MIS:
+        break;
+
+    default:
+        if (addr < R_TIM1_MAX) {
+            st->regs[addr] = value;
+        } else {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          TYPE_MSS_TIMER": 64-bit mode not supported\n");
+            return;
+        }
+        break;
+    }
+    timer_update_irq(st);
+}
+
+static const MemoryRegionOps timer_ops = {
+    .read = timer_read,
+    .write = timer_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 4
+    }
+};
+
+static void timer_hit(void *opaque)
+{
+    struct Msf2Timer *st = opaque;
+
+    st->regs[R_TIM_RIS] |= TIMER_RIS_ACK;
+
+    if (!(st->regs[R_TIM_CTRL] & TIMER_CTRL_ONESHOT)) {
+        timer_update(st);
+    }
+    timer_update_irq(st);
+}
+
+static void mss_timer_init(Object *obj)
+{
+    MSSTimerState *t = MSS_TIMER(obj);
+    int i;
+
+    /* Init all the ptimers.  */
+    for (i = 0; i < NUM_TIMERS; i++) {
+        struct Msf2Timer *st = &t->timers[i];
+
+        st->bh = qemu_bh_new(timer_hit, st);
+        st->ptimer = ptimer_init(st->bh, PTIMER_POLICY_DEFAULT);
+        ptimer_set_freq(st->ptimer, t->freq_hz);
+        sysbus_init_irq(SYS_BUS_DEVICE(obj), &st->irq);
+    }
+
+    memory_region_init_io(&t->mmio, OBJECT(t), &timer_ops, t, TYPE_MSS_TIMER,
+                          NUM_TIMERS * R_TIM1_MAX * 4);
+    sysbus_init_mmio(SYS_BUS_DEVICE(obj), &t->mmio);
+}
+
+static const VMStateDescription vmstate_timers = {
+    .name = "mss-timer-block",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_PTIMER(ptimer, struct Msf2Timer),
+        VMSTATE_UINT32_ARRAY(regs, struct Msf2Timer, R_TIM1_MAX),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static const VMStateDescription vmstate_mss_timer = {
+    .name = TYPE_MSS_TIMER,
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(freq_hz, MSSTimerState),
+        VMSTATE_STRUCT_ARRAY(timers, MSSTimerState, NUM_TIMERS, 0,
+                             vmstate_timers, struct Msf2Timer),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static Property mss_timer_properties[] = {
+    /* The Libero GUI shows 100 MHz as the default for clocks */
+    DEFINE_PROP_UINT32("clock-frequency", MSSTimerState, freq_hz,
+                       100 * 1000000),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void mss_timer_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+
+    dc->props = mss_timer_properties;
+    dc->vmsd = &vmstate_mss_timer;
+}
+
+static const TypeInfo mss_timer_info = {
+    .name          = TYPE_MSS_TIMER,
+    .parent        = TYPE_SYS_BUS_DEVICE,
+    .instance_size = sizeof(MSSTimerState),
+    .instance_init = mss_timer_init,
+    .class_init    = mss_timer_class_init,
+};
+
+static void mss_timer_register_types(void)
+{
+    type_register_static(&mss_timer_info);
+}
+
+type_init(mss_timer_register_types)
--
2.7.4
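
For context, this is roughly how a guest would drive one of the two down-counters in the model above. The base address is a made-up assumption for illustration; only the word offsets correspond to the R_TIM_* register indices:

```c
#include <stdint.h>

#define MSS_TIMER_BASE 0x40004000u   /* assumed mapping, not from the patch */

static volatile uint32_t *const tim = (volatile uint32_t *)MSS_TIMER_BASE;

static void timer1_start_periodic(uint32_t ticks)
{
    tim[1] = ticks;                  /* R_TIM_LOADVAL: reload value */
    tim[3] = (1u << 0) | (1u << 2);  /* R_TIM_CTRL: ENBL | INTR */
}

static void timer1_irq_ack(void)
{
    tim[4] = 1u << 0;                /* R_TIM_RIS: write 1 clears ACK */
}
```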

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             break;
         }
         return 0;
+
+    case NEON_3R_VADD_VSUB:
+        if (u) {
+            tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+        } else {
+            tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+        }
+        return 0;
     }
     if (size == 3) {
         /* 64-bit element instructions.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                  cpu_V1, cpu_V0);
             }
             break;
-        case NEON_3R_VADD_VSUB:
-            if (u) {
-                tcg_gen_sub_i64(CPU_V001);
-            } else {
-                tcg_gen_add_i64(CPU_V001);
-            }
-            break;
         default:
             abort();
         }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tmp2 = neon_load_reg(rd, pass);
             gen_neon_add(size, tmp, tmp2);
             break;
-        case NEON_3R_VADD_VSUB:
-            if (!u) { /* VADD */
-                gen_neon_add(size, tmp, tmp2);
-            } else { /* VSUB */
-                switch (size) {
-                case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
-                case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-            }
-            break;
        case NEON_3R_VTST_VCEQ:
            if (!u) { /* VTST */
                switch (size) {
--
2.19.1
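
What the two gvec calls above buy us: a single call that expands to host vector instructions where possible, instead of a hand-written loop over 32-bit passes. Element-wise, the semantics are just this (a C sketch of the 8-bit lane case, for illustration):

```c
#include <stdint.h>

/* Lane-wise VADD/VSUB over a D (8-byte) or Q (16-byte) register:
 * each lane wraps independently; no carry propagates between lanes.
 */
static void vadd8(uint8_t *d, const uint8_t *n, const uint8_t *m, int oprsz)
{
    for (int i = 0; i < oprsz; i++) {
        d[i] = (uint8_t)(n[i] + m[i]);
    }
}

static void vsub8(uint8_t *d, const uint8_t *n, const uint8_t *m, int oprsz)
{
    for (int i = 0; i < oprsz; i++) {
        d[i] = (uint8_t)(n[i] - m[i]);
    }
}
```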

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             tcg_temp_free_ptr(ptr1);
             tcg_temp_free_ptr(ptr2);
             break;
+
+        case NEON_2RM_VMVN:
+            tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
+            break;
+        case NEON_2RM_VNEG:
+            tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
+            break;
+
         default:
         elementwise:
             for (pass = 0; pass < (q ? 4 : 2); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 case NEON_2RM_VCNT:
                     gen_helper_neon_cnt_u8(tmp, tmp);
                     break;
-                case NEON_2RM_VMVN:
-                    tcg_gen_not_i32(tmp, tmp);
-                    break;
                 case NEON_2RM_VQABS:
                     switch (size) {
                     case 0:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     default: abort();
                     }
                     break;
-                case NEON_2RM_VNEG:
-                    tmp2 = tcg_const_i32(0);
-                    gen_neon_rsb(size, tmp, tmp2);
-                    tcg_temp_free_i32(tmp2);
-                    break;
                 case NEON_2RM_VCGT0_F:
                 {
                     TCGv_ptr fpstatus = get_fpstatus_ptr(1);
--
2.19.1
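
Lane-wise, the two operations hoisted out of the elementwise loop are trivial; a C sketch of one 32-bit lane, using unsigned arithmetic so the wraparound is well defined:

```c
#include <stdint.h>

static uint32_t vmvn32(uint32_t m) { return ~m; }      /* bitwise NOT */
static uint32_t vneg32(uint32_t m) { return 0u - m; }  /* two's complement */
```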

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                             vec_size, vec_size);
         }
         return 0;
+
+    case NEON_3R_VMUL: /* VMUL */
+        if (u) {
+            /* Polynomial case allows only P8 and is handled below.  */
+            if (size != 0) {
+                return 1;
+            }
+        } else {
+            tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
+                             vec_size, vec_size);
+            return 0;
+        }
+        break;
     }
     if (size == 3) {
         /* 64-bit element instructions.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             return 1;
         }
         break;
-    case NEON_3R_VMUL:
-        if (u && (size != 0)) {
-            /* UNDEF on invalid size for polynomial subcase */
-            return 1;
-        }
-        break;
     case NEON_3R_VFM_VQRDMLSH:
         if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
             return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             }
             break;
         case NEON_3R_VMUL:
-            if (u) { /* polynomial */
-                gen_helper_neon_mul_p8(tmp, tmp, tmp2);
-            } else { /* Integer */
-                switch (size) {
-                case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
-                case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
-                case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
-                default: abort();
-                }
-            }
+            /* VMUL.P8; other cases already eliminated.  */
+            gen_helper_neon_mul_p8(tmp, tmp, tmp2);
             break;
         case NEON_3R_VPMAX:
             GEN_NEON_INTEGER_OP(pmax);
--
2.19.1
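
The polynomial subcase that still goes through the helper is a carryless multiply over GF(2), which the integer gvec multiply cannot express. A plain C model of one 8-bit lane, for illustration:

```c
#include <stdint.h>

/* VMUL.P8: multiply two bit-polynomials, truncated to 8 bits.
 * Partial products are combined with XOR, so no carries propagate.
 */
static uint8_t pmul8(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++) {
        if (b & (1u << i)) {
            r ^= (uint8_t)(a << i);
        }
    }
    return r;
}
```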

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 70 +++++++++++++++++++++++++++++-------------
 1 file changed, 48 insertions(+), 22 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 size--;
             }
             shift = (insn >> 16) & ((1 << (3 + size)) - 1);
-            /* To avoid excessive duplication of ops we implement shift
-               by immediate using the variable shift operations.  */
             if (op < 8) {
                 /* Shift by immediate:
                    VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 }
                 /* Right shifts are encoded as N - shift, where N is the
                    element size in bits.  */
-                if (op <= 4)
+                if (op <= 4) {
                     shift = shift - (1 << (size + 3));
+                }
+
+                switch (op) {
+                case 0:  /* VSHR */
+                    /* Right shift comes here negative.  */
+                    shift = -shift;
+                    /* Shifts larger than the element size are architecturally
+                     * valid.  Unsigned results in all zeros; signed results
+                     * in all sign bits.
+                     */
+                    if (!u) {
+                        tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
+                                          MIN(shift, (8 << size) - 1),
+                                          vec_size, vec_size);
+                    } else if (shift >= 8 << size) {
+                        tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
+                    } else {
+                        tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
+                                          vec_size, vec_size);
+                    }
+                    return 0;
+
+                case 5: /* VSHL, VSLI */
+                    if (!u) { /* VSHL */
+                        /* Shifts larger than the element size are
+                         * architecturally valid and result in zero.
+                         */
+                        if (shift >= 8 << size) {
+                            tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
+                        } else {
+                            tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
+                                              vec_size, vec_size);
+                        }
+                        return 0;
+                    }
+                    break;
+                }
+
                 if (size == 3) {
                     count = q + 1;
                 } else {
                     count = q ? 4: 2;
                 }
-                switch (size) {
-                case 0:
-                    imm = (uint8_t) shift;
-                    imm |= imm << 8;
-                    imm |= imm << 16;
-                    break;
-                case 1:
-                    imm = (uint16_t) shift;
-                    imm |= imm << 16;
-                    break;
-                case 2:
-                case 3:
-                    imm = shift;
-                    break;
-                default:
-                    abort();
-                }
+
+                /* To avoid excessive duplication of ops we implement shift
+                 * by immediate using the variable shift operations.
+                 */
+                imm = dup_const(size, shift);
 
                 for (pass = 0; pass < count; pass++) {
                     if (size == 3) {
                         neon_load_reg64(cpu_V0, rm + pass);
                         tcg_gen_movi_i64(cpu_V1, imm);
                         switch (op) {
-                        case 0: /* VSHR */
                         case 1: /* VSRA */
                             if (u)
                                 gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                                  cpu_V0, cpu_V1);
                             }
                             break;
+                        default:
+                            g_assert_not_reached();
                         }
                         if (op == 1 || op == 3) {
                             /* Accumulate.  */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                         tmp2 = tcg_temp_new_i32();
                         tcg_gen_movi_i32(tmp2, imm);
                         switch (op) {
-                        case 0: /* VSHR */
                         case 1: /* VSRA */
                             GEN_NEON_INTEGER_OP(shl);
                             break;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                         case 7: /* VQSHL */
                             GEN_NEON_INTEGER_OP_ENV(qshl);
                             break;
+                        default:
+                            g_assert_not_reached();
                         }
                         tcg_temp_free_i32(tmp2);
 
--
2.19.1

In v7M, the fixed-priority exceptions are:
 Reset: -3
 NMI: -2
 HardFault: -1

In v8M, this changes because Secure HardFault may need
to be prioritised above NMI:
 Reset: -4
 Secure HardFault if AIRCR.BFHFNMINS == 1: -3
 NMI: -2
 Secure HardFault if AIRCR.BFHFNMINS == 0: -1
 NonSecure HardFault: -1

Make these changes, including support for changing the
priority of Secure HardFault as AIRCR.BFHFNMINS changes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505240046-11454-14-git-send-email-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
                 (R_V7M_AIRCR_SYSRESETREQS_MASK |
                  R_V7M_AIRCR_BFHFNMINS_MASK |
                  R_V7M_AIRCR_PRIS_MASK);
+            /* BFHFNMINS changes the priority of Secure HardFault */
+            if (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
+                s->sec_vectors[ARMV7M_EXCP_HARD].prio = -3;
+            } else {
+                s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
+            }
         }
         nvic_irq_update(s);
     }
@@ -XXX,XX +XXX,XX @@ static int nvic_post_load(void *opaque, int version_id)
 {
     NVICState *s = opaque;
     unsigned i;
+    int resetprio;
 
     /* Check for out of range priority settings */
-    if (s->vectors[ARMV7M_EXCP_RESET].prio != -3 ||
+    resetprio = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? -4 : -3;
+
+    if (s->vectors[ARMV7M_EXCP_RESET].prio != resetprio ||
         s->vectors[ARMV7M_EXCP_NMI].prio != -2 ||
         s->vectors[ARMV7M_EXCP_HARD].prio != -1) {
         return 1;
@@ -XXX,XX +XXX,XX @@ static int nvic_security_post_load(void *opaque, int version_id)
     int i;
 
     /* Check for out of range priority settings */
-    if (s->sec_vectors[ARMV7M_EXCP_HARD].prio != -1) {
+    if (s->sec_vectors[ARMV7M_EXCP_HARD].prio != -1
+        && s->sec_vectors[ARMV7M_EXCP_HARD].prio != -3) {
+        /* We can't cross-check against AIRCR.BFHFNMINS as we don't know
+         * if the CPU state has been migrated yet; a mismatch won't
+         * cause the emulation to blow up, though.
+         */
         return 1;
     }
     for (i = ARMV7M_EXCP_MEM; i < ARRAY_SIZE(s->sec_vectors); i++) {
@@ -XXX,XX +XXX,XX @@ static Property props_nvic[] = {
 
 static void armv7m_nvic_reset(DeviceState *dev)
 {
+    int resetprio;
     NVICState *s = NVIC(dev);
 
     s->vectors[ARMV7M_EXCP_NMI].enabled = 1;
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
     s->vectors[ARMV7M_EXCP_PENDSV].enabled = 1;
     s->vectors[ARMV7M_EXCP_SYSTICK].enabled = 1;
 
-    s->vectors[ARMV7M_EXCP_RESET].prio = -3;
+    resetprio = arm_feature(&s->cpu->env, ARM_FEATURE_V8) ? -4 : -3;
+    s->vectors[ARMV7M_EXCP_RESET].prio = resetprio;
     s->vectors[ARMV7M_EXCP_NMI].prio = -2;
     s->vectors[ARMV7M_EXCP_HARD].prio = -1;
 
--
2.7.4
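
The fixed-priority rules in the nvic patch above condense to two small functions; a sketch restating the commit message (the function names are illustrative, not QEMU's):

```c
#include <stdbool.h>

/* v7M: Reset -3, NMI -2, HardFault -1.
 * v8M: Reset drops to -4 and Secure HardFault moves above NMI (-3)
 * when AIRCR.BFHFNMINS is set; otherwise it stays at -1 alongside
 * the NonSecure HardFault.
 */
static int reset_prio(bool v8m)
{
    return v8m ? -4 : -3;
}

static int sec_hardfault_prio(bool bfhfnmins)
{
    return bfhfnmins ? -3 : -1;
}
```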

From: Richard Henderson <richard.henderson@linaro.org>

Move ssra_op and usra_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h     |   2 +
 target/arm/translate-a64.c | 106 ----------------------
 target/arm/translate.c     | 139 ++++++++++++++++++++++++++++++++++---
 3 files changed, 130 insertions(+), 117 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
 extern const GVecGen3 bsl_op;
 extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
+extern const GVecGen2i ssra_op[4];
+extern const GVecGen2i usra_op[4];
 
 /*
  * Forward to the isar_feature_* tests given a DisasContext pointer.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
     }
 }
 
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_sar8i_i64(a, a, shift);
-    tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_sar16i_i64(a, a, shift);
-    tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_sari_i32(a, a, shift);
-    tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_sari_i64(a, a, shift);
-    tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    tcg_gen_sari_vec(vece, a, a, sh);
-    tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_shr8i_i64(a, a, shift);
-    tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_vec_shr16i_i64(a, a, shift);
-    tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
-    tcg_gen_shri_i32(a, a, shift);
-    tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
-    tcg_gen_shri_i64(a, a, shift);
-    tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
-    tcg_gen_shri_vec(vece, a, a, sh);
-    tcg_gen_add_vec(vece, d, d, a);
-}
-
 static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
     uint64_t mask = dup_const(MO_8, 0xff >> shift);
@@ -XXX,XX +XXX,XX @@ static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
 static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
                                  int immh, int immb, int opcode, int rn, int rd)
 {
-    static const GVecGen2i ssra_op[4] = {
-        { .fni8 = gen_ssra8_i64,
-          .fniv = gen_ssra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_8 },
-        { .fni8 = gen_ssra16_i64,
-          .fniv = gen_ssra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_16 },
-        { .fni4 = gen_ssra32_i32,
-          .fniv = gen_ssra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_32 },
-        { .fni8 = gen_ssra64_i64,
-          .fniv = gen_ssra_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opc = INDEX_op_sari_vec,
-          .vece = MO_64 },
-    };
-    static const GVecGen2i usra_op[4] = {
-        { .fni8 = gen_usra8_i64,
-          .fniv = gen_usra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_8, },
-        { .fni8 = gen_usra16_i64,
-          .fniv = gen_usra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_16, },
-        { .fni4 = gen_usra32_i32,
-          .fniv = gen_usra_vec,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_32, },
-        { .fni8 = gen_usra64_i64,
-          .fniv = gen_usra_vec,
-          .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .load_dest = true,
-          .opc = INDEX_op_shri_vec,
-          .vece = MO_64, },
-    };
     static const GVecGen2i sri_op[4] = {
         { .fni8 = gen_shr8_ins_i64,
           .fniv = gen_shr_ins_vec,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ const GVecGen3 bif_op = {
     .load_dest = true
 };
 
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_sar8i_i64(a, a, shift);
+    tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_sar16i_i64(a, a, shift);
+    tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_sari_i32(a, a, shift);
+    tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_sari_i64(a, a, shift);
+    tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    tcg_gen_sari_vec(vece, a, a, sh);
+    tcg_gen_add_vec(vece, d, d, a);
+}
+
+const GVecGen2i ssra_op[4] = {
+    { .fni8 = gen_ssra8_i64,
+      .fniv = gen_ssra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_8 },
+    { .fni8 = gen_ssra16_i64,
+      .fniv = gen_ssra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_16 },
+    { .fni4 = gen_ssra32_i32,
+      .fniv = gen_ssra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_32 },
+    { .fni8 = gen_ssra64_i64,
+      .fniv = gen_ssra_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .load_dest = true,
+      .opc = INDEX_op_sari_vec,
+      .vece = MO_64 },
+};
+
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_shr8i_i64(a, a, shift);
+    tcg_gen_vec_add8_i64(d, d, a);
+}
+
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_vec_shr16i_i64(a, a, shift);
+    tcg_gen_vec_add16_i64(d, d, a);
+}
+
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
+{
+    tcg_gen_shri_i32(a, a, shift);
+    tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
+{
+    tcg_gen_shri_i64(a, a, shift);
+    tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
+{
+    tcg_gen_shri_vec(vece, a, a, sh);
+    tcg_gen_add_vec(vece, d, d, a);
+}
+
+const GVecGen2i usra_op[4] = {
+    { .fni8 = gen_usra8_i64,
+      .fniv = gen_usra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_8, },
+    { .fni8 = gen_usra16_i64,
+      .fniv = gen_usra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_16, },
+    { .fni4 = gen_usra32_i32,
+      .fniv = gen_usra_vec,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_32, },
+    { .fni8 = gen_usra64_i64,
+      .fniv = gen_usra_vec,
+      .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+      .load_dest = true,
+      .opc = INDEX_op_shri_vec,
+      .vece = MO_64, },
+};
 
 /* Translate a NEON data processing instruction.  Return nonzero if the
    instruction is invalid.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                 }
                 return 0;
 
+                case 1: /* VSRA */
+                    /* Right shift comes here negative.  */
+                    shift = -shift;
+                    /* Shifts larger than the element size are architecturally
+                     * valid.  Unsigned results in all zeros; signed results
+                     * in all sign bits.
+                     */
+                    if (!u) {
+                        tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
+                                        MIN(shift, (8 << size) - 1),
+                                        &ssra_op[size]);
+                    } else if (shift >= 8 << size) {
+                        /* rd += 0 */
+                    } else {
+                        tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
+                                        shift, &usra_op[size]);
+                    }
+                    return 0;
+
                 case 5: /* VSHL, VSLI */
                     if (!u) { /* VSHL */
                         /* Shifts larger than the element size are
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     neon_load_reg64(cpu_V0, rm + pass);
                     tcg_gen_movi_i64(cpu_V1, imm);
                     switch (op) {
-                    case 1: /* VSRA */
-                        if (u)
-                            gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
-                        else
-                            gen_helper_neon_shl_s64(cpu_V0, cpu_V0, cpu_V1);
-                        break;
                     case 2: /* VRSHR */
                     case 3: /* VRSRA */
                         if (u)
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     default:
                         g_assert_not_reached();
                     }
-                    if (op == 1 || op == 3) {
+                    if (op == 3) {
                         /* Accumulate.  */
                         neon_load_reg64(cpu_V1, rd + pass);
                         tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     tmp2 = tcg_temp_new_i32();
                     tcg_gen_movi_i32(tmp2, imm);
                     switch (op) {
-                    case 1: /* VSRA */
-                        GEN_NEON_INTEGER_OP(shl);
-                        break;
                     case 2: /* VRSHR */
                     case 3: /* VRSRA */
                         GEN_NEON_INTEGER_OP(rshl);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     }
                     tcg_temp_free_i32(tmp2);
 
-                    if (op == 1 || op == 3) {
+                    if (op == 3) {
                         /* Accumulate.  */
                         tmp2 = neon_load_reg(rd, pass);
                         gen_neon_add(size, tmp, tmp2);
--
2.19.1
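
Per lane, the ssra/usra expanders moved above compute "shift right, then accumulate into the destination". One 32-bit lane in C, for illustration (this assumes the usual arithmetic behaviour of >> on signed values):

```c
#include <stdint.h>

/* USRA: unsigned (logical) shift right, accumulate. */
static uint32_t usra32(uint32_t d, uint32_t a, unsigned sh)
{
    return d + (a >> sh);
}

/* SSRA: signed (arithmetic) shift right, accumulate. */
static int32_t ssra32(int32_t d, int32_t a, unsigned sh)
{
    return d + (a >> sh);
}
```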
1
Don't use old_mmio in the memory region ops struct.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Move shi_op and sli_op expanders from translate-a64.c.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-6-git-send-email-peter.maydell@linaro.org
6
---
9
---
7
hw/i2c/omap_i2c.c | 44 ++++++++++++++++++++++++++++++++------------
10
target/arm/translate.h | 2 +
8
1 file changed, 32 insertions(+), 12 deletions(-)
11
target/arm/translate-a64.c | 152 +----------------------
12
target/arm/translate.c | 244 ++++++++++++++++++++++++++-----------
13
3 files changed, 179 insertions(+), 219 deletions(-)
9
14
10
diff --git a/hw/i2c/omap_i2c.c b/hw/i2c/omap_i2c.c
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
11
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/i2c/omap_i2c.c
17
--- a/target/arm/translate.h
13
+++ b/hw/i2c/omap_i2c.c
18
+++ b/target/arm/translate.h
14
@@ -XXX,XX +XXX,XX @@ static void omap_i2c_writeb(void *opaque, hwaddr addr,
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
20
extern const GVecGen3 bif_op;
21
extern const GVecGen2i ssra_op[4];
22
extern const GVecGen2i usra_op[4];
23
+extern const GVecGen2i sri_op[4];
24
+extern const GVecGen2i sli_op[4];
25
26
/*
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
15
}
33
}
16
}
34
}
17
35
18
+static uint64_t omap_i2c_readfn(void *opaque, hwaddr addr,
36
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
19
+ unsigned size)
37
-{
20
+{
38
- uint64_t mask = dup_const(MO_8, 0xff >> shift);
21
+ switch (size) {
39
- TCGv_i64 t = tcg_temp_new_i64();
22
+ case 2:
40
-
23
+ return omap_i2c_read(opaque, addr);
41
- tcg_gen_shri_i64(t, a, shift);
24
+ default:
42
- tcg_gen_andi_i64(t, t, mask);
25
+ return omap_badwidth_read16(opaque, addr);
43
- tcg_gen_andi_i64(d, d, ~mask);
44
- tcg_gen_or_i64(d, d, t);
45
- tcg_temp_free_i64(t);
46
-}
47
-
48
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
49
-{
50
- uint64_t mask = dup_const(MO_16, 0xffff >> shift);
51
- TCGv_i64 t = tcg_temp_new_i64();
52
-
53
- tcg_gen_shri_i64(t, a, shift);
54
- tcg_gen_andi_i64(t, t, mask);
55
- tcg_gen_andi_i64(d, d, ~mask);
56
- tcg_gen_or_i64(d, d, t);
57
- tcg_temp_free_i64(t);
58
-}
59
-
60
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
61
-{
62
- tcg_gen_shri_i32(a, a, shift);
63
- tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
64
-}
65
-
66
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
67
-{
68
- tcg_gen_shri_i64(a, a, shift);
69
- tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
70
-}
71
-
72
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
73
-{
74
- uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
75
- TCGv_vec t = tcg_temp_new_vec_matching(d);
76
- TCGv_vec m = tcg_temp_new_vec_matching(d);
77
-
78
- tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
79
- tcg_gen_shri_vec(vece, t, a, sh);
80
- tcg_gen_and_vec(vece, d, d, m);
81
- tcg_gen_or_vec(vece, d, d, t);
82
-
83
- tcg_temp_free_vec(t);
84
- tcg_temp_free_vec(m);
85
-}
86
-
87
/* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
88
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
89
int immh, int immb, int opcode, int rn, int rd)
90
{
91
- static const GVecGen2i sri_op[4] = {
92
- { .fni8 = gen_shr8_ins_i64,
93
- .fniv = gen_shr_ins_vec,
94
- .load_dest = true,
95
- .opc = INDEX_op_shri_vec,
96
- .vece = MO_8 },
97
- { .fni8 = gen_shr16_ins_i64,
98
- .fniv = gen_shr_ins_vec,
99
- .load_dest = true,
100
- .opc = INDEX_op_shri_vec,
101
- .vece = MO_16 },
102
- { .fni4 = gen_shr32_ins_i32,
103
- .fniv = gen_shr_ins_vec,
104
- .load_dest = true,
105
- .opc = INDEX_op_shri_vec,
106
- .vece = MO_32 },
107
- { .fni8 = gen_shr64_ins_i64,
108
- .fniv = gen_shr_ins_vec,
109
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
110
- .load_dest = true,
111
- .opc = INDEX_op_shri_vec,
112
- .vece = MO_64 },
113
- };
114
-
115
int size = 32 - clz32(immh) - 1;
116
int immhb = immh << 3 | immb;
117
int shift = 2 * (8 << size) - immhb;
118
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
119
clear_vec_high(s, is_q, rd);
120
}
121
122
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
123
-{
124
- uint64_t mask = dup_const(MO_8, 0xff << shift);
125
- TCGv_i64 t = tcg_temp_new_i64();
126
-
127
- tcg_gen_shli_i64(t, a, shift);
128
- tcg_gen_andi_i64(t, t, mask);
129
- tcg_gen_andi_i64(d, d, ~mask);
130
- tcg_gen_or_i64(d, d, t);
131
- tcg_temp_free_i64(t);
132
-}
133
-
134
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
135
-{
136
- uint64_t mask = dup_const(MO_16, 0xffff << shift);
137
- TCGv_i64 t = tcg_temp_new_i64();
138
-
139
- tcg_gen_shli_i64(t, a, shift);
140
- tcg_gen_andi_i64(t, t, mask);
141
- tcg_gen_andi_i64(d, d, ~mask);
142
- tcg_gen_or_i64(d, d, t);
143
- tcg_temp_free_i64(t);
144
-}
145
-
146
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
147
-{
148
- tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
149
-}
150
-
151
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
152
-{
153
- tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
154
-}
155
-
156
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
157
-{
158
- uint64_t mask = (1ull << sh) - 1;
159
- TCGv_vec t = tcg_temp_new_vec_matching(d);
160
- TCGv_vec m = tcg_temp_new_vec_matching(d);
161
-
162
- tcg_gen_dupi_vec(vece, m, mask);
163
- tcg_gen_shli_vec(vece, t, a, sh);
164
- tcg_gen_and_vec(vece, d, d, m);
165
- tcg_gen_or_vec(vece, d, d, t);
166
-
167
- tcg_temp_free_vec(t);
168
- tcg_temp_free_vec(m);
169
-}
170
-
171
/* SHL/SLI - Vector shift left */
172
static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
173
int immh, int immb, int opcode, int rn, int rd)
174
{
175
- static const GVecGen2i shi_op[4] = {
176
- { .fni8 = gen_shl8_ins_i64,
177
- .fniv = gen_shl_ins_vec,
178
- .opc = INDEX_op_shli_vec,
179
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
180
- .load_dest = true,
181
- .vece = MO_8 },
182
- { .fni8 = gen_shl16_ins_i64,
183
- .fniv = gen_shl_ins_vec,
184
- .opc = INDEX_op_shli_vec,
185
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
- .load_dest = true,
187
- .vece = MO_16 },
188
- { .fni4 = gen_shl32_ins_i32,
189
- .fniv = gen_shl_ins_vec,
190
- .opc = INDEX_op_shli_vec,
191
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
192
- .load_dest = true,
193
- .vece = MO_32 },
194
- { .fni8 = gen_shl64_ins_i64,
195
- .fniv = gen_shl_ins_vec,
196
- .opc = INDEX_op_shli_vec,
197
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
198
- .load_dest = true,
199
- .vece = MO_64 },
200
- };
201
int size = 32 - clz32(immh) - 1;
202
int immhb = immh << 3 | immb;
203
int shift = immhb - (8 << size);
204
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
205
}
206
207
if (insert) {
208
- gen_gvec_op2i(s, is_q, rd, rn, shift, &shi_op[size]);
209
+ gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
210
} else {
211
gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
212
}
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/target/arm/translate.c
216
+++ b/target/arm/translate.c
217
@@ -XXX,XX +XXX,XX @@ const GVecGen2i usra_op[4] = {
218
.vece = MO_64, },
219
};
220
221
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
222
+{
223
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
224
+ TCGv_i64 t = tcg_temp_new_i64();
225
+
226
+ tcg_gen_shri_i64(t, a, shift);
227
+ tcg_gen_andi_i64(t, t, mask);
228
+ tcg_gen_andi_i64(d, d, ~mask);
229
+ tcg_gen_or_i64(d, d, t);
230
+ tcg_temp_free_i64(t);
231
+}
232
+
233
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
234
+{
235
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
236
+ TCGv_i64 t = tcg_temp_new_i64();
237
+
238
+ tcg_gen_shri_i64(t, a, shift);
239
+ tcg_gen_andi_i64(t, t, mask);
240
+ tcg_gen_andi_i64(d, d, ~mask);
241
+ tcg_gen_or_i64(d, d, t);
242
+ tcg_temp_free_i64(t);
243
+}
244
+
245
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
246
+{
247
+ tcg_gen_shri_i32(a, a, shift);
248
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
249
+}
250
+
251
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
252
+{
253
+ tcg_gen_shri_i64(a, a, shift);
254
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
255
+}
256
+
257
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
258
+{
259
+ if (sh == 0) {
260
+ tcg_gen_mov_vec(d, a);
261
+ } else {
262
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
263
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
264
+
265
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
266
+ tcg_gen_shri_vec(vece, t, a, sh);
267
+ tcg_gen_and_vec(vece, d, d, m);
268
+ tcg_gen_or_vec(vece, d, d, t);
269
+
270
+ tcg_temp_free_vec(t);
271
+ tcg_temp_free_vec(m);
26
+ }
272
+ }
27
+}
273
+}
28
+
274
+
29
+static void omap_i2c_writefn(void *opaque, hwaddr addr,
275
+const GVecGen2i sri_op[4] = {
30
+ uint64_t value, unsigned size)
276
+ { .fni8 = gen_shr8_ins_i64,
31
+{
277
+ .fniv = gen_shr_ins_vec,
32
+ switch (size) {
278
+ .load_dest = true,
33
+ case 1:
279
+ .opc = INDEX_op_shri_vec,
34
+ /* Only the last fifo write can be 8 bit. */
280
+ .vece = MO_8 },
35
+ omap_i2c_writeb(opaque, addr, value);
281
+ { .fni8 = gen_shr16_ins_i64,
36
+ break;
282
+ .fniv = gen_shr_ins_vec,
37
+ case 2:
283
+ .load_dest = true,
38
+ omap_i2c_write(opaque, addr, value);
284
+ .opc = INDEX_op_shri_vec,
39
+ break;
285
+ .vece = MO_16 },
40
+ default:
286
+ { .fni4 = gen_shr32_ins_i32,
41
+ omap_badwidth_write16(opaque, addr, value);
287
+ .fniv = gen_shr_ins_vec,
42
+ break;
288
+ .load_dest = true,
289
+ .opc = INDEX_op_shri_vec,
290
+ .vece = MO_32 },
291
+ { .fni8 = gen_shr64_ins_i64,
292
+ .fniv = gen_shr_ins_vec,
293
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
294
+ .load_dest = true,
295
+ .opc = INDEX_op_shri_vec,
296
+ .vece = MO_64 },
297
+};
298
+
299
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
300
+{
301
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
302
+ TCGv_i64 t = tcg_temp_new_i64();
303
+
304
+ tcg_gen_shli_i64(t, a, shift);
305
+ tcg_gen_andi_i64(t, t, mask);
306
+ tcg_gen_andi_i64(d, d, ~mask);
307
+ tcg_gen_or_i64(d, d, t);
308
+ tcg_temp_free_i64(t);
309
+}
310
+
311
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
312
+{
313
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
314
+ TCGv_i64 t = tcg_temp_new_i64();
315
+
316
+ tcg_gen_shli_i64(t, a, shift);
317
+ tcg_gen_andi_i64(t, t, mask);
318
+ tcg_gen_andi_i64(d, d, ~mask);
319
+ tcg_gen_or_i64(d, d, t);
320
+ tcg_temp_free_i64(t);
321
+}
322
+
323
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
324
+{
325
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
326
+}
327
+
328
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
329
+{
330
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
331
+}
332
+
333
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
334
+{
335
+ if (sh == 0) {
336
+ tcg_gen_mov_vec(d, a);
337
+ } else {
338
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
339
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
340
+
341
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
342
+ tcg_gen_shli_vec(vece, t, a, sh);
343
+ tcg_gen_and_vec(vece, d, d, m);
344
+ tcg_gen_or_vec(vece, d, d, t);
345
+
346
+ tcg_temp_free_vec(t);
347
+ tcg_temp_free_vec(m);
43
+ }
348
+ }
44
+}
349
+}
45
+
350
+
46
static const MemoryRegionOps omap_i2c_ops = {
351
+const GVecGen2i sli_op[4] = {
47
- .old_mmio = {
352
+ { .fni8 = gen_shl8_ins_i64,
48
- .read = {
353
+ .fniv = gen_shl_ins_vec,
49
- omap_badwidth_read16,
354
+ .load_dest = true,
50
- omap_i2c_read,
355
+ .opc = INDEX_op_shli_vec,
51
- omap_badwidth_read16,
356
+ .vece = MO_8 },
52
- },
357
+ { .fni8 = gen_shl16_ins_i64,
53
- .write = {
358
+ .fniv = gen_shl_ins_vec,
54
- omap_i2c_writeb, /* Only the last fifo write can be 8 bit. */
359
+ .load_dest = true,
55
- omap_i2c_write,
360
+ .opc = INDEX_op_shli_vec,
56
- omap_badwidth_write16,
361
+ .vece = MO_16 },
57
- },
362
+ { .fni4 = gen_shl32_ins_i32,
58
- },
363
+ .fniv = gen_shl_ins_vec,
59
+ .read = omap_i2c_readfn,
364
+ .load_dest = true,
60
+ .write = omap_i2c_writefn,
365
+ .opc = INDEX_op_shli_vec,
61
+ .valid.min_access_size = 1,
366
+ .vece = MO_32 },
62
+ .valid.max_access_size = 4,
367
+ { .fni8 = gen_shl64_ins_i64,
63
.endianness = DEVICE_NATIVE_ENDIAN,
368
+ .fniv = gen_shl_ins_vec,
64
};
369
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
65
370
+ .load_dest = true,
371
+ .opc = INDEX_op_shli_vec,
372
+ .vece = MO_64 },
373
+};
374
+
375
/* Translate a NEON data processing instruction. Return nonzero if the
376
instruction is invalid.
377
We process data in a mixture of 32-bit and 64-bit chunks.
378
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
379
int pairwise;
380
int u;
381
int vec_size;
382
- uint32_t imm, mask;
383
+ uint32_t imm;
384
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
385
TCGv_ptr ptr1, ptr2, ptr3;
386
TCGv_i64 tmp64;
387
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
388
}
389
return 0;
390
391
+ case 4: /* VSRI */
392
+ if (!u) {
393
+ return 1;
394
+ }
395
+ /* Right shift comes here negative. */
396
+ shift = -shift;
397
+ /* Shift out of range leaves destination unchanged. */
398
+ if (shift < 8 << size) {
399
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
400
+ shift, &sri_op[size]);
401
+ }
402
+ return 0;
403
+
404
case 5: /* VSHL, VSLI */
405
- if (!u) { /* VSHL */
406
+ if (u) { /* VSLI */
407
+ /* Shift out of range leaves destination unchanged. */
408
+ if (shift < 8 << size) {
409
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
410
+ vec_size, shift, &sli_op[size]);
411
+ }
412
+ } else { /* VSHL */
413
/* Shifts larger than the element size are
414
* architecturally valid and results in zero.
415
*/
416
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
417
tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
418
vec_size, vec_size);
419
}
420
- return 0;
421
}
422
- break;
423
+ return 0;
424
}
425
426
if (size == 3) {
427
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
428
else
429
gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
430
break;
431
- case 4: /* VSRI */
432
- case 5: /* VSHL, VSLI */
433
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
434
- break;
435
case 6: /* VQSHLU */
436
gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
437
cpu_V0, cpu_V1);
438
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
439
/* Accumulate. */
440
neon_load_reg64(cpu_V1, rd + pass);
441
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
442
- } else if (op == 4 || (op == 5 && u)) {
443
- /* Insert */
444
- neon_load_reg64(cpu_V1, rd + pass);
445
- uint64_t mask;
446
- if (shift < -63 || shift > 63) {
447
- mask = 0;
448
- } else {
449
- if (op == 4) {
450
- mask = 0xffffffffffffffffull >> -shift;
451
- } else {
452
- mask = 0xffffffffffffffffull << shift;
453
- }
454
- }
455
- tcg_gen_andi_i64(cpu_V1, cpu_V1, ~mask);
456
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
457
}
458
neon_store_reg64(cpu_V0, rd + pass);
459
} else { /* size < 3 */
460
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
461
case 3: /* VRSRA */
462
GEN_NEON_INTEGER_OP(rshl);
463
break;
464
- case 4: /* VSRI */
465
- case 5: /* VSHL, VSLI */
466
- switch (size) {
467
- case 0: gen_helper_neon_shl_u8(tmp, tmp, tmp2); break;
468
- case 1: gen_helper_neon_shl_u16(tmp, tmp, tmp2); break;
469
- case 2: gen_helper_neon_shl_u32(tmp, tmp, tmp2); break;
470
- default: abort();
471
- }
472
- break;
473
case 6: /* VQSHLU */
474
switch (size) {
475
case 0:
476
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
477
tmp2 = neon_load_reg(rd, pass);
478
gen_neon_add(size, tmp, tmp2);
479
tcg_temp_free_i32(tmp2);
480
- } else if (op == 4 || (op == 5 && u)) {
481
- /* Insert */
482
- switch (size) {
483
- case 0:
484
- if (op == 4)
485
- mask = 0xff >> -shift;
486
- else
487
- mask = (uint8_t)(0xff << shift);
488
- mask |= mask << 8;
489
- mask |= mask << 16;
490
- break;
491
- case 1:
492
- if (op == 4)
493
- mask = 0xffff >> -shift;
494
- else
495
- mask = (uint16_t)(0xffff << shift);
496
- mask |= mask << 16;
497
- break;
498
- case 2:
499
- if (shift < -31 || shift > 31) {
500
- mask = 0;
501
- } else {
502
- if (op == 4)
503
- mask = 0xffffffffu >> -shift;
504
- else
505
- mask = 0xffffffffu << shift;
506
- }
507
- break;
508
- default:
509
- abort();
510
- }
511
- tmp2 = neon_load_reg(rd, pass);
512
- tcg_gen_andi_i32(tmp, tmp, mask);
513
- tcg_gen_andi_i32(tmp2, tmp2, ~mask);
514
- tcg_gen_or_i32(tmp, tmp, tmp2);
515
- tcg_temp_free_i32(tmp2);
516
}
517
neon_store_reg(rd, pass, tmp);
518
}
66
--
519
--
67
2.7.4
520
2.19.1
68
521
69
522
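
A quick standalone illustration of the insert-shift semantics that the
sri_op/sli_op expanders above implement (the decoder hands right shifts
over as negative values, hence the shift = -shift): a minimal C model of
one 32-bit lane, not QEMU code.

    /* SRI/SLI on one 32-bit element: shift the source, then merge it
     * into the destination under the mask of bits actually written.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t sri32(uint32_t d, uint32_t m, int shift) /* 1..32 */
    {
        if (shift == 32) {
            return d;                   /* all destination bits kept */
        }
        uint32_t mask = 0xffffffffu >> shift;
        return (d & ~mask) | ((m >> shift) & mask);
    }

    static uint32_t sli32(uint32_t d, uint32_t m, int shift) /* 0..31 */
    {
        uint32_t mask = 0xffffffffu << shift;
        return (d & ~mask) | ((m << shift) & mask);
    }

    int main(void)
    {
        printf("%08x\n", sri32(0xaaaaaaaa, 0xffffffff, 8)); /* aaffffff */
        printf("%08x\n", sli32(0xaaaaaaaa, 0xffffffff, 8)); /* ffffffaa */
        return 0;
    }

An out-of-range shift leaves the destination unchanged, which is why the
translator only emits the gvec op when shift < 8 << size.
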
New patch
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Move mla_op and mls_op expanders from translate-a64.c.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/translate.h | 2 +
11
target/arm/translate-a64.c | 106 -----------------------------
12
target/arm/translate.c | 134 ++++++++++++++++++++++++++++++++-----
13
3 files changed, 120 insertions(+), 122 deletions(-)
14
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
18
+++ b/target/arm/translate.h
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
20
extern const GVecGen3 bsl_op;
21
extern const GVecGen3 bit_op;
22
extern const GVecGen3 bif_op;
23
+extern const GVecGen3 mla_op[4];
24
+extern const GVecGen3 mls_op[4];
25
extern const GVecGen2i ssra_op[4];
26
extern const GVecGen2i usra_op[4];
27
extern const GVecGen2i sri_op[4];
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
33
}
34
}
35
36
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
37
-{
38
- gen_helper_neon_mul_u8(a, a, b);
39
- gen_helper_neon_add_u8(d, d, a);
40
-}
41
-
42
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
43
-{
44
- gen_helper_neon_mul_u16(a, a, b);
45
- gen_helper_neon_add_u16(d, d, a);
46
-}
47
-
48
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
49
-{
50
- tcg_gen_mul_i32(a, a, b);
51
- tcg_gen_add_i32(d, d, a);
52
-}
53
-
54
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
55
-{
56
- tcg_gen_mul_i64(a, a, b);
57
- tcg_gen_add_i64(d, d, a);
58
-}
59
-
60
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
61
-{
62
- tcg_gen_mul_vec(vece, a, a, b);
63
- tcg_gen_add_vec(vece, d, d, a);
64
-}
65
-
66
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
67
-{
68
- gen_helper_neon_mul_u8(a, a, b);
69
- gen_helper_neon_sub_u8(d, d, a);
70
-}
71
-
72
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
73
-{
74
- gen_helper_neon_mul_u16(a, a, b);
75
- gen_helper_neon_sub_u16(d, d, a);
76
-}
77
-
78
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
79
-{
80
- tcg_gen_mul_i32(a, a, b);
81
- tcg_gen_sub_i32(d, d, a);
82
-}
83
-
84
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
85
-{
86
- tcg_gen_mul_i64(a, a, b);
87
- tcg_gen_sub_i64(d, d, a);
88
-}
89
-
90
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
91
-{
92
- tcg_gen_mul_vec(vece, a, a, b);
93
- tcg_gen_sub_vec(vece, d, d, a);
94
-}
95
-
96
/* Integer op subgroup of C3.6.16. */
97
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
98
{
99
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
100
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
.vece = MO_64 },
102
};
103
- static const GVecGen3 mla_op[4] = {
104
- { .fni4 = gen_mla8_i32,
105
- .fniv = gen_mla_vec,
106
- .opc = INDEX_op_mul_vec,
107
- .load_dest = true,
108
- .vece = MO_8 },
109
- { .fni4 = gen_mla16_i32,
110
- .fniv = gen_mla_vec,
111
- .opc = INDEX_op_mul_vec,
112
- .load_dest = true,
113
- .vece = MO_16 },
114
- { .fni4 = gen_mla32_i32,
115
- .fniv = gen_mla_vec,
116
- .opc = INDEX_op_mul_vec,
117
- .load_dest = true,
118
- .vece = MO_32 },
119
- { .fni8 = gen_mla64_i64,
120
- .fniv = gen_mla_vec,
121
- .opc = INDEX_op_mul_vec,
122
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
123
- .load_dest = true,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen3 mls_op[4] = {
127
- { .fni4 = gen_mls8_i32,
128
- .fniv = gen_mls_vec,
129
- .opc = INDEX_op_mul_vec,
130
- .load_dest = true,
131
- .vece = MO_8 },
132
- { .fni4 = gen_mls16_i32,
133
- .fniv = gen_mls_vec,
134
- .opc = INDEX_op_mul_vec,
135
- .load_dest = true,
136
- .vece = MO_16 },
137
- { .fni4 = gen_mls32_i32,
138
- .fniv = gen_mls_vec,
139
- .opc = INDEX_op_mul_vec,
140
- .load_dest = true,
141
- .vece = MO_32 },
142
- { .fni8 = gen_mls64_i64,
143
- .fniv = gen_mls_vec,
144
- .opc = INDEX_op_mul_vec,
145
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
146
- .load_dest = true,
147
- .vece = MO_64 },
148
- };
149
150
int is_q = extract32(insn, 30, 1);
151
int u = extract32(insn, 29, 1);
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
157
#define NEON_3R_VABA 15
158
#define NEON_3R_VADD_VSUB 16
159
#define NEON_3R_VTST_VCEQ 17
160
-#define NEON_3R_VML 18 /* VMLA, VMLAL, VMLS, VMLSL */
161
+#define NEON_3R_VML 18 /* VMLA, VMLS */
162
#define NEON_3R_VMUL 19
163
#define NEON_3R_VPMAX 20
164
#define NEON_3R_VPMIN 21
165
@@ -XXX,XX +XXX,XX @@ const GVecGen2i sli_op[4] = {
166
.vece = MO_64 },
167
};
168
169
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
170
+{
171
+ gen_helper_neon_mul_u8(a, a, b);
172
+ gen_helper_neon_add_u8(d, d, a);
173
+}
174
+
175
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
176
+{
177
+ gen_helper_neon_mul_u8(a, a, b);
178
+ gen_helper_neon_sub_u8(d, d, a);
179
+}
180
+
181
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
182
+{
183
+ gen_helper_neon_mul_u16(a, a, b);
184
+ gen_helper_neon_add_u16(d, d, a);
185
+}
186
+
187
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
188
+{
189
+ gen_helper_neon_mul_u16(a, a, b);
190
+ gen_helper_neon_sub_u16(d, d, a);
191
+}
192
+
193
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
194
+{
195
+ tcg_gen_mul_i32(a, a, b);
196
+ tcg_gen_add_i32(d, d, a);
197
+}
198
+
199
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
200
+{
201
+ tcg_gen_mul_i32(a, a, b);
202
+ tcg_gen_sub_i32(d, d, a);
203
+}
204
+
205
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
206
+{
207
+ tcg_gen_mul_i64(a, a, b);
208
+ tcg_gen_add_i64(d, d, a);
209
+}
210
+
211
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
212
+{
213
+ tcg_gen_mul_i64(a, a, b);
214
+ tcg_gen_sub_i64(d, d, a);
215
+}
216
+
217
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
218
+{
219
+ tcg_gen_mul_vec(vece, a, a, b);
220
+ tcg_gen_add_vec(vece, d, d, a);
221
+}
222
+
223
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
224
+{
225
+ tcg_gen_mul_vec(vece, a, a, b);
226
+ tcg_gen_sub_vec(vece, d, d, a);
227
+}
228
+
229
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
230
+ * these tables are shared with AArch64 which does support them.
231
+ */
232
+const GVecGen3 mla_op[4] = {
233
+ { .fni4 = gen_mla8_i32,
234
+ .fniv = gen_mla_vec,
235
+ .opc = INDEX_op_mul_vec,
236
+ .load_dest = true,
237
+ .vece = MO_8 },
238
+ { .fni4 = gen_mla16_i32,
239
+ .fniv = gen_mla_vec,
240
+ .opc = INDEX_op_mul_vec,
241
+ .load_dest = true,
242
+ .vece = MO_16 },
243
+ { .fni4 = gen_mla32_i32,
244
+ .fniv = gen_mla_vec,
245
+ .opc = INDEX_op_mul_vec,
246
+ .load_dest = true,
247
+ .vece = MO_32 },
248
+ { .fni8 = gen_mla64_i64,
249
+ .fniv = gen_mla_vec,
250
+ .opc = INDEX_op_mul_vec,
251
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
252
+ .load_dest = true,
253
+ .vece = MO_64 },
254
+};
255
+
256
+const GVecGen3 mls_op[4] = {
257
+ { .fni4 = gen_mls8_i32,
258
+ .fniv = gen_mls_vec,
259
+ .opc = INDEX_op_mul_vec,
260
+ .load_dest = true,
261
+ .vece = MO_8 },
262
+ { .fni4 = gen_mls16_i32,
263
+ .fniv = gen_mls_vec,
264
+ .opc = INDEX_op_mul_vec,
265
+ .load_dest = true,
266
+ .vece = MO_16 },
267
+ { .fni4 = gen_mls32_i32,
268
+ .fniv = gen_mls_vec,
269
+ .opc = INDEX_op_mul_vec,
270
+ .load_dest = true,
271
+ .vece = MO_32 },
272
+ { .fni8 = gen_mls64_i64,
273
+ .fniv = gen_mls_vec,
274
+ .opc = INDEX_op_mul_vec,
275
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
276
+ .load_dest = true,
277
+ .vece = MO_64 },
278
+};
279
+
280
/* Translate a NEON data processing instruction. Return nonzero if the
281
instruction is invalid.
282
We process data in a mixture of 32-bit and 64-bit chunks.
283
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
284
return 0;
285
}
286
break;
287
+
288
+ case NEON_3R_VML: /* VMLA, VMLS */
289
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
290
+ u ? &mls_op[size] : &mla_op[size]);
291
+ return 0;
292
}
293
+
294
if (size == 3) {
295
/* 64-bit element instructions. */
296
for (pass = 0; pass < (q ? 2 : 1); pass++) {
297
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
298
}
299
}
300
break;
301
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
302
- switch (size) {
303
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
304
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
305
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
306
- default: abort();
307
- }
308
- tcg_temp_free_i32(tmp2);
309
- tmp2 = neon_load_reg(rd, pass);
310
- if (u) { /* VMLS */
311
- gen_neon_rsb(size, tmp, tmp2);
312
- } else { /* VMLA */
313
- gen_neon_add(size, tmp, tmp2);
314
- }
315
- break;
316
case NEON_3R_VMUL:
317
/* VMUL.P8; other cases already eliminated. */
318
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
319
--
320
2.19.1
321
322
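
What the .fni4 callbacks in the mla_op/mls_op tables above compute per
32-bit lane amounts to the following standalone sketch (not QEMU code);
load_dest = true is what makes the old destination value an input:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t mla32(uint32_t d, uint32_t a, uint32_t b)
    {
        return d + a * b;               /* VMLA: d += a * b */
    }

    static uint32_t mls32(uint32_t d, uint32_t a, uint32_t b)
    {
        return d - a * b;               /* VMLS: d -= a * b */
    }

    int main(void)
    {
        /* prints 22 and 4294967294 (-2 modulo 2^32) */
        printf("%u %u\n", mla32(10, 3, 4), mls32(10, 3, 4));
        return 0;
    }
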
1
For the v8M security extension, some exceptions must be banked
1
From: Richard Henderson <richard.henderson@linaro.org>
2
between security states. Add the new vecinfo array which holds
2
3
the state for the banked exceptions and migrate it if the
3
Move cmtst_op expanders from translate-a64.c.
4
CPU to which the NVIC is attached implements the security extension.
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
---
9
---
9
include/hw/intc/armv7m_nvic.h | 14 ++++++++++++
10
target/arm/translate.h | 2 +
10
hw/intc/armv7m_nvic.c | 53 ++++++++++++++++++++++++++++++++++++++++++-
11
target/arm/translate-a64.c | 38 ------------------
11
2 files changed, 66 insertions(+), 1 deletion(-)
12
target/arm/translate.c | 81 +++++++++++++++++++++++++++-----------
12
13
3 files changed, 60 insertions(+), 61 deletions(-)
13
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
14
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
14
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/intc/armv7m_nvic.h
17
--- a/target/arm/translate.h
16
+++ b/include/hw/intc/armv7m_nvic.h
18
+++ b/target/arm/translate.h
17
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
18
20
extern const GVecGen3 bif_op;
19
/* Highest permitted number of exceptions (architectural limit) */
21
extern const GVecGen3 mla_op[4];
20
#define NVIC_MAX_VECTORS 512
22
extern const GVecGen3 mls_op[4];
21
+/* Number of internal exceptions */
23
+extern const GVecGen3 cmtst_op[4];
22
+#define NVIC_INTERNAL_VECTORS 16
24
extern const GVecGen2i ssra_op[4];
23
25
extern const GVecGen2i usra_op[4];
24
typedef struct VecInfo {
26
extern const GVecGen2i sri_op[4];
25
/* Exception priorities can range from -3 to 255; only the unmodifiable
27
extern const GVecGen2i sli_op[4];
26
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
28
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
27
ARMCPU *cpu;
29
28
30
/*
29
VecInfo vectors[NVIC_MAX_VECTORS];
31
* Forward to the isar_feature_* tests given a DisasContext pointer.
30
+ /* If the v8M security extension is implemented, some of the internal
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
31
+ * exceptions are banked between security states (ie there exists both
32
+ * a Secure and a NonSecure version of the exception and its state):
33
+ * HardFault, MemManage, UsageFault, SVCall, PendSV, SysTick (R_PJHV)
34
+ * The rest (including all the external exceptions) are not banked, though
35
+ * they may be configurable to target either Secure or NonSecure state.
36
+ * We store the secure exception state in sec_vectors[] for the banked
37
+ * exceptions, and otherwise use only vectors[] (including for exceptions
38
+ * like SecureFault that unconditionally target Secure state).
39
+ * Entries in sec_vectors[] for non-banked exception numbers are unused.
40
+ */
41
+ VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
42
uint32_t prigroup;
43
44
/* vectpending and exception_prio are both cached state that can
45
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
46
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/intc/armv7m_nvic.c
34
--- a/target/arm/translate-a64.c
48
+++ b/hw/intc/armv7m_nvic.c
35
+++ b/target/arm/translate-a64.c
49
@@ -XXX,XX +XXX,XX @@
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
50
* For historical reasons QEMU tends to use "interrupt" and
51
* "exception" more or less interchangeably.
52
*/
53
-#define NVIC_FIRST_IRQ 16
54
+#define NVIC_FIRST_IRQ NVIC_INTERNAL_VECTORS
55
#define NVIC_MAX_IRQ (NVIC_MAX_VECTORS - NVIC_FIRST_IRQ)
56
57
/* Effective running priority of the CPU when no exception is active
58
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_VecInfo = {
59
}
37
}
38
}
39
40
-/* CMTST : test is "if (X & Y != 0)". */
41
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
42
-{
43
- tcg_gen_and_i32(d, a, b);
44
- tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
45
- tcg_gen_neg_i32(d, d);
46
-}
47
-
48
-static void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
49
-{
50
- tcg_gen_and_i64(d, a, b);
51
- tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
52
- tcg_gen_neg_i64(d, d);
53
-}
54
-
55
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
56
-{
57
- tcg_gen_and_vec(vece, d, a, b);
58
- tcg_gen_dupi_vec(vece, a, 0);
59
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
60
-}
61
-
62
static void handle_3same_64(DisasContext *s, int opcode, bool u,
63
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
64
{
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
66
/* Integer op subgroup of C3.6.16. */
67
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
68
{
69
- static const GVecGen3 cmtst_op[4] = {
70
- { .fni4 = gen_helper_neon_tst_u8,
71
- .fniv = gen_cmtst_vec,
72
- .vece = MO_8 },
73
- { .fni4 = gen_helper_neon_tst_u16,
74
- .fniv = gen_cmtst_vec,
75
- .vece = MO_16 },
76
- { .fni4 = gen_cmtst_i32,
77
- .fniv = gen_cmtst_vec,
78
- .vece = MO_32 },
79
- { .fni8 = gen_cmtst_i64,
80
- .fniv = gen_cmtst_vec,
81
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
82
- .vece = MO_64 },
83
- };
84
-
85
int is_q = extract32(insn, 30, 1);
86
int u = extract32(insn, 29, 1);
87
int size = extract32(insn, 22, 2);
88
diff --git a/target/arm/translate.c b/target/arm/translate.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate.c
91
+++ b/target/arm/translate.c
92
@@ -XXX,XX +XXX,XX @@ const GVecGen3 mls_op[4] = {
93
.vece = MO_64 },
60
};
94
};
61
95
62
+static bool nvic_security_needed(void *opaque)
96
+/* CMTST : test is "if (X & Y != 0)". */
97
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
63
+{
98
+{
64
+ NVICState *s = opaque;
99
+ tcg_gen_and_i32(d, a, b);
65
+
100
+ tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
66
+ return arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY);
101
+ tcg_gen_neg_i32(d, d);
67
+}
102
+}
68
+
103
+
69
+static int nvic_security_post_load(void *opaque, int version_id)
104
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
70
+{
105
+{
71
+ NVICState *s = opaque;
106
+ tcg_gen_and_i64(d, a, b);
72
+ int i;
107
+ tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
73
+
108
+ tcg_gen_neg_i64(d, d);
74
+ /* Check for out of range priority settings */
75
+ if (s->sec_vectors[ARMV7M_EXCP_HARD].prio != -1) {
76
+ return 1;
77
+ }
78
+ for (i = ARMV7M_EXCP_MEM; i < ARRAY_SIZE(s->sec_vectors); i++) {
79
+ if (s->sec_vectors[i].prio & ~0xff) {
80
+ return 1;
81
+ }
82
+ }
83
+ return 0;
84
+}
109
+}
85
+
110
+
86
+static const VMStateDescription vmstate_nvic_security = {
111
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
87
+ .name = "nvic/m-security",
112
+{
88
+ .version_id = 1,
113
+ tcg_gen_and_vec(vece, d, a, b);
89
+ .minimum_version_id = 1,
114
+ tcg_gen_dupi_vec(vece, a, 0);
90
+ .needed = nvic_security_needed,
115
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
91
+ .post_load = &nvic_security_post_load,
116
+}
92
+ .fields = (VMStateField[]) {
117
+
93
+ VMSTATE_STRUCT_ARRAY(sec_vectors, NVICState, NVIC_INTERNAL_VECTORS, 1,
118
+const GVecGen3 cmtst_op[4] = {
94
+ vmstate_VecInfo, VecInfo),
119
+ { .fni4 = gen_helper_neon_tst_u8,
95
+ VMSTATE_END_OF_LIST()
120
+ .fniv = gen_cmtst_vec,
96
+ }
121
+ .vece = MO_8 },
122
+ { .fni4 = gen_helper_neon_tst_u16,
123
+ .fniv = gen_cmtst_vec,
124
+ .vece = MO_16 },
125
+ { .fni4 = gen_cmtst_i32,
126
+ .fniv = gen_cmtst_vec,
127
+ .vece = MO_32 },
128
+ { .fni8 = gen_cmtst_i64,
129
+ .fniv = gen_cmtst_vec,
130
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
131
+ .vece = MO_64 },
97
+};
132
+};
98
+
133
+
99
static const VMStateDescription vmstate_nvic = {
134
/* Translate a NEON data processing instruction. Return nonzero if the
100
.name = "armv7m_nvic",
135
instruction is invalid.
101
.version_id = 4,
136
We process data in a mixture of 32-bit and 64-bit chunks.
102
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_nvic = {
137
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
103
vmstate_VecInfo, VecInfo),
138
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
104
VMSTATE_UINT32(prigroup, NVICState),
139
u ? &mls_op[size] : &mla_op[size]);
105
VMSTATE_END_OF_LIST()
140
return 0;
106
+ },
141
+
107
+ .subsections = (const VMStateDescription*[]) {
142
+ case NEON_3R_VTST_VCEQ:
108
+ &vmstate_nvic_security,
143
+ if (u) { /* VCEQ */
109
+ NULL
144
+ tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
110
}
145
+ vec_size, vec_size);
111
};
146
+ } else { /* VTST */
112
147
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
113
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
148
+ vec_size, vec_size, &cmtst_op[size]);
114
s->vectors[ARMV7M_EXCP_NMI].prio = -2;
149
+ }
115
s->vectors[ARMV7M_EXCP_HARD].prio = -1;
150
+ return 0;
116
151
+
117
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY)) {
152
+ case NEON_3R_VCGT:
118
+ s->sec_vectors[ARMV7M_EXCP_HARD].enabled = 1;
153
+ tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
119
+ s->sec_vectors[ARMV7M_EXCP_SVC].enabled = 1;
154
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
120
+ s->sec_vectors[ARMV7M_EXCP_PENDSV].enabled = 1;
155
+ return 0;
121
+ s->sec_vectors[ARMV7M_EXCP_SYSTICK].enabled = 1;
156
+
122
+
157
+ case NEON_3R_VCGE:
123
+ /* AIRCR.BFHFNMINS resets to 0 so Secure HF is priority -1 (R_CMTC) */
158
+ tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
124
+ s->sec_vectors[ARMV7M_EXCP_HARD].prio = -1;
159
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
125
+ }
160
+ return 0;
126
+
161
}
127
/* Strictly speaking the reset handler should be enabled.
162
128
* However, we don't simulate soft resets through the NVIC,
163
if (size == 3) {
129
* and the reset vector should never be pended.
164
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
165
case NEON_3R_VQSUB:
166
GEN_NEON_INTEGER_OP_ENV(qsub);
167
break;
168
- case NEON_3R_VCGT:
169
- GEN_NEON_INTEGER_OP(cgt);
170
- break;
171
- case NEON_3R_VCGE:
172
- GEN_NEON_INTEGER_OP(cge);
173
- break;
174
case NEON_3R_VSHL:
175
GEN_NEON_INTEGER_OP(shl);
176
break;
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
178
tmp2 = neon_load_reg(rd, pass);
179
gen_neon_add(size, tmp, tmp2);
180
break;
181
- case NEON_3R_VTST_VCEQ:
182
- if (!u) { /* VTST */
183
- switch (size) {
184
- case 0: gen_helper_neon_tst_u8(tmp, tmp, tmp2); break;
185
- case 1: gen_helper_neon_tst_u16(tmp, tmp, tmp2); break;
186
- case 2: gen_helper_neon_tst_u32(tmp, tmp, tmp2); break;
187
- default: abort();
188
- }
189
- } else { /* VCEQ */
190
- switch (size) {
191
- case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
192
- case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
193
- case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
194
- default: abort();
195
- }
196
- }
197
- break;
198
case NEON_3R_VMUL:
199
/* VMUL.P8; other cases already eliminated. */
200
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
130
--
201
--
131
2.7.4
202
2.19.1
132
203
133
204
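
The CMTST predicate generated above -- "all ones if (X & Y) != 0, else
all zeros", which is what the setcond+neg pair in gen_cmtst_i32 encodes
-- can be checked with a scalar model like this (illustrative only, not
QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t cmtst32(uint32_t a, uint32_t b)
    {
        return -(uint32_t)((a & b) != 0);   /* 0xffffffff or 0 */
    }

    int main(void)
    {
        /* prints ffffffff 00000000 */
        printf("%08x %08x\n", cmtst32(0x10, 0x30), cmtst32(0x10, 0x20));
        return 0;
    }
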
1
With banked exceptions, just the exception number in
1
From: Richard Henderson <richard.henderson@linaro.org>
2
s->vectpending is no longer sufficient to uniquely identify
3
the pending exception. Add a vectpending_is_s_banked bool
4
which is true if the exception is using the sec_vectors[]
5
array.
6
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
5
[PMM: added parens in ?: expression]
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 1505240046-11454-4-git-send-email-peter.maydell@linaro.org
9
---
8
---
10
include/hw/intc/armv7m_nvic.h | 11 +++++++++--
9
target/arm/translate.c | 81 ++++++++++++++----------------------------
11
hw/intc/armv7m_nvic.c | 1 +
10
1 file changed, 26 insertions(+), 55 deletions(-)
12
2 files changed, 10 insertions(+), 2 deletions(-)
13
11
14
diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/include/hw/intc/armv7m_nvic.h
14
--- a/target/arm/translate.c
17
+++ b/include/hw/intc/armv7m_nvic.h
15
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
16
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
19
VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
17
tcg_temp_free_i32(tmp);
20
uint32_t prigroup;
21
22
- /* vectpending and exception_prio are both cached state that can
23
- * be recalculated from the vectors[] array and the prigroup field.
24
+ /* The following fields are all cached state that can be recalculated
25
+ * from the vectors[] and sec_vectors[] arrays and the prigroup field:
26
+ * - vectpending
27
+ * - vectpending_is_secure
28
+ * - exception_prio
29
*/
30
unsigned int vectpending; /* highest prio pending enabled exception */
31
+ /* true if vectpending is a banked secure exception, ie it is in
32
+ * sec_vectors[] rather than vectors[]
33
+ */
34
+ bool vectpending_is_s_banked;
35
int exception_prio; /* group prio of the highest prio active exception */
36
37
MemoryRegion sysregmem;
38
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/hw/intc/armv7m_nvic.c
41
+++ b/hw/intc/armv7m_nvic.c
42
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
43
44
s->exception_prio = NVIC_NOEXC_PRIO;
45
s->vectpending = 0;
46
+ s->vectpending_is_s_banked = false;
47
}
18
}
48
19
49
static void nvic_systick_trigger(void *opaque, int n, int level)
20
-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
21
-{
22
- TCGv_i32 tmp = tcg_temp_new_i32();
23
- if (shift)
24
- tcg_gen_shri_i32(var, var, shift);
25
- tcg_gen_ext8u_i32(var, var);
26
- tcg_gen_shli_i32(tmp, var, 8);
27
- tcg_gen_or_i32(var, var, tmp);
28
- tcg_gen_shli_i32(tmp, var, 16);
29
- tcg_gen_or_i32(var, var, tmp);
30
- tcg_temp_free_i32(tmp);
31
-}
32
-
33
static void gen_neon_dup_low16(TCGv_i32 var)
34
{
35
TCGv_i32 tmp = tcg_temp_new_i32();
36
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
37
tcg_temp_free_i32(tmp);
38
}
39
40
-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
41
-{
42
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
43
- TCGv_i32 tmp = tcg_temp_new_i32();
44
- switch (size) {
45
- case 0:
46
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
47
- gen_neon_dup_u8(tmp, 0);
48
- break;
49
- case 1:
50
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
51
- gen_neon_dup_low16(tmp);
52
- break;
53
- case 2:
54
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
55
- break;
56
- default: /* Avoid compiler warnings. */
57
- abort();
58
- }
59
- return tmp;
60
-}
61
-
62
static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
63
uint32_t dp)
64
{
65
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
66
int load;
67
int shift;
68
int n;
69
+ int vec_size;
70
TCGv_i32 addr;
71
TCGv_i32 tmp;
72
TCGv_i32 tmp2;
73
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
74
}
75
addr = tcg_temp_new_i32();
76
load_reg_var(s, addr, rn);
77
- if (nregs == 1) {
78
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
79
- tmp = gen_load_and_replicate(s, addr, size);
80
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
81
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
82
- if (insn & (1 << 5)) {
83
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
84
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
85
- }
86
- tcg_temp_free_i32(tmp);
87
- } else {
88
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
89
- stride = (insn & (1 << 5)) ? 2 : 1;
90
- for (reg = 0; reg < nregs; reg++) {
91
- tmp = gen_load_and_replicate(s, addr, size);
92
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
93
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
94
- tcg_temp_free_i32(tmp);
95
- tcg_gen_addi_i32(addr, addr, 1 << size);
96
- rd += stride;
97
+
98
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
99
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
100
+ */
101
+ stride = (insn & (1 << 5)) ? 2 : 1;
102
+ vec_size = nregs == 1 ? stride * 8 : 8;
103
+
104
+ tmp = tcg_temp_new_i32();
105
+ for (reg = 0; reg < nregs; reg++) {
106
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
107
+ s->be_data | size);
108
+ if ((rd & 1) && vec_size == 16) {
109
+ /* We cannot write 16 bytes at once because the
110
+ * destination is unaligned.
111
+ */
112
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
113
+ 8, 8, tmp);
114
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
115
+ neon_reg_offset(rd, 0), 8, 8);
116
+ } else {
117
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
118
+ vec_size, vec_size, tmp);
119
}
120
+ tcg_gen_addi_i32(addr, addr, 1 << size);
121
+ rd += stride;
122
}
123
+ tcg_temp_free_i32(tmp);
124
tcg_temp_free_i32(addr);
125
stride = (1 << size) * nregs;
126
} else {
50
--
127
--
51
2.7.4
128
2.19.1
52
129
53
130
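
The "load one element and replicate" step that tcg_gen_gvec_dup_i32
performs in the VLD-to-all-lanes patch above can be sketched in plain C
as follows (size follows the insn encoding: 0 = byte, 1 = halfword,
2 = word; a little-endian host is assumed; not QEMU code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void dup_into(uint8_t *vec, int vec_bytes, uint32_t elt, int size)
    {
        int ebytes = 1 << size;
        for (int i = 0; i < vec_bytes; i += ebytes) {
            memcpy(vec + i, &elt, ebytes);  /* broadcast the element */
        }
    }

    int main(void)
    {
        uint8_t d[8];
        dup_into(d, sizeof(d), 0xab, 0);    /* prints abababababababab */
        for (int i = 0; i < 8; i++) {
            printf("%02x", d[i]);
        }
        printf("\n");
        return 0;
    }
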
1
Don't use the old_mmio struct in memory region ops.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Instead of shifts and masks, use direct loads and stores from the neon
4
register file. Mirror the iteration structure of the ARM pseudocode
5
more closely. Correct the parameters of the VLD2 A2 insn.
6
7
Note that this includes a bugfix for handling of the insn
8
"VLD2 (multiple 2-element structures)" -- we were using an
9
incorrect stride value.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-5-git-send-email-peter.maydell@linaro.org
6
---
15
---
7
hw/timer/omap_gptimer.c | 49 +++++++++++++++++++++++++++++++++++++------------
16
target/arm/translate.c | 170 ++++++++++++++++++-----------------------
8
1 file changed, 37 insertions(+), 12 deletions(-)
17
1 file changed, 74 insertions(+), 96 deletions(-)
9
18
10
diff --git a/hw/timer/omap_gptimer.c b/hw/timer/omap_gptimer.c
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
11
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/timer/omap_gptimer.c
21
--- a/target/arm/translate.c
13
+++ b/hw/timer/omap_gptimer.c
22
+++ b/target/arm/translate.c
14
@@ -XXX,XX +XXX,XX @@ static void omap_gp_timer_writeh(void *opaque, hwaddr addr,
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
15
s->writeh = (uint16_t) value;
24
return tmp;
16
}
25
}
17
26
18
+static uint64_t omap_gp_timer_readfn(void *opaque, hwaddr addr,
27
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
19
+ unsigned size)
20
+{
28
+{
21
+ switch (size) {
29
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
22
+ case 1:
30
+
23
+ return omap_badwidth_read32(opaque, addr);
31
+ switch (mop) {
24
+ case 2:
32
+ case MO_UB:
25
+ return omap_gp_timer_readh(opaque, addr);
33
+ tcg_gen_ld8u_i64(var, cpu_env, offset);
26
+ case 4:
34
+ break;
27
+ return omap_gp_timer_readw(opaque, addr);
35
+ case MO_UW:
36
+ tcg_gen_ld16u_i64(var, cpu_env, offset);
37
+ break;
38
+ case MO_UL:
39
+ tcg_gen_ld32u_i64(var, cpu_env, offset);
40
+ break;
41
+ case MO_Q:
42
+ tcg_gen_ld_i64(var, cpu_env, offset);
43
+ break;
28
+ default:
44
+ default:
29
+ g_assert_not_reached();
45
+ g_assert_not_reached();
30
+ }
46
+ }
31
+}
47
+}
32
+
48
+
33
+static void omap_gp_timer_writefn(void *opaque, hwaddr addr,
49
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
34
+ uint64_t value, unsigned size)
50
{
51
tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
52
tcg_temp_free_i32(var);
53
}
54
55
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
35
+{
56
+{
57
+ long offset = neon_element_offset(reg, ele, size);
58
+
36
+ switch (size) {
59
+ switch (size) {
37
+ case 1:
60
+ case MO_8:
38
+ omap_badwidth_write32(opaque, addr, value);
61
+ tcg_gen_st8_i64(var, cpu_env, offset);
39
+ break;
62
+ break;
40
+ case 2:
63
+ case MO_16:
41
+ omap_gp_timer_writeh(opaque, addr, value);
64
+ tcg_gen_st16_i64(var, cpu_env, offset);
42
+ break;
65
+ break;
43
+ case 4:
66
+ case MO_32:
44
+ omap_gp_timer_write(opaque, addr, value);
67
+ tcg_gen_st32_i64(var, cpu_env, offset);
68
+ break;
69
+ case MO_64:
70
+ tcg_gen_st_i64(var, cpu_env, offset);
45
+ break;
71
+ break;
46
+ default:
72
+ default:
47
+ g_assert_not_reached();
73
+ g_assert_not_reached();
48
+ }
74
+ }
49
+}
75
+}
50
+
76
+
51
static const MemoryRegionOps omap_gp_timer_ops = {
77
static inline void neon_load_reg64(TCGv_i64 var, int reg)
52
- .old_mmio = {
78
{
53
- .read = {
79
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
54
- omap_badwidth_read32,
80
@@ -XXX,XX +XXX,XX @@ static struct {
55
- omap_gp_timer_readh,
81
int interleave;
56
- omap_gp_timer_readw,
82
int spacing;
57
- },
83
} const neon_ls_element_type[11] = {
58
- .write = {
84
- {4, 4, 1},
59
- omap_badwidth_write32,
85
- {4, 4, 2},
60
- omap_gp_timer_writeh,
86
+ {1, 4, 1},
61
- omap_gp_timer_write,
87
+ {1, 4, 2},
62
- },
88
{4, 1, 1},
63
- },
89
- {4, 2, 1},
64
+ .read = omap_gp_timer_readfn,
90
- {3, 3, 1},
65
+ .write = omap_gp_timer_writefn,
91
- {3, 3, 2},
66
+ .valid.min_access_size = 1,
92
+ {2, 2, 2},
67
+ .valid.max_access_size = 4,
93
+ {1, 3, 1},
68
.endianness = DEVICE_NATIVE_ENDIAN,
94
+ {1, 3, 2},
95
{3, 1, 1},
96
{1, 1, 1},
97
- {2, 2, 1},
98
- {2, 2, 2},
99
+ {1, 2, 1},
100
+ {1, 2, 2},
101
{2, 1, 1}
69
};
102
};
70
103
104
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
105
int shift;
106
int n;
107
int vec_size;
108
+ int mmu_idx;
109
+ TCGMemOp endian;
110
TCGv_i32 addr;
111
TCGv_i32 tmp;
112
TCGv_i32 tmp2;
113
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
114
rn = (insn >> 16) & 0xf;
115
rm = insn & 0xf;
116
load = (insn & (1 << 21)) != 0;
117
+ endian = s->be_data;
118
+ mmu_idx = get_mem_index(s);
119
if ((insn & (1 << 23)) == 0) {
120
/* Load store all elements. */
121
op = (insn >> 8) & 0xf;
122
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
123
nregs = neon_ls_element_type[op].nregs;
124
interleave = neon_ls_element_type[op].interleave;
125
spacing = neon_ls_element_type[op].spacing;
126
- if (size == 3 && (interleave | spacing) != 1)
127
+ if (size == 3 && (interleave | spacing) != 1) {
128
return 1;
129
+ }
130
+ tmp64 = tcg_temp_new_i64();
131
addr = tcg_temp_new_i32();
132
+ tmp2 = tcg_const_i32(1 << size);
133
load_reg_var(s, addr, rn);
134
- stride = (1 << size) * interleave;
135
for (reg = 0; reg < nregs; reg++) {
136
- if (interleave > 2 || (interleave == 2 && nregs == 2)) {
137
- load_reg_var(s, addr, rn);
138
- tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
139
- } else if (interleave == 2 && nregs == 4 && reg == 2) {
140
- load_reg_var(s, addr, rn);
141
- tcg_gen_addi_i32(addr, addr, 1 << size);
142
- }
143
- if (size == 3) {
144
- tmp64 = tcg_temp_new_i64();
145
- if (load) {
146
- gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
147
- neon_store_reg64(tmp64, rd);
148
- } else {
149
- neon_load_reg64(tmp64, rd);
150
- gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
151
- }
152
- tcg_temp_free_i64(tmp64);
153
- tcg_gen_addi_i32(addr, addr, stride);
154
- } else {
155
- for (pass = 0; pass < 2; pass++) {
156
- if (size == 2) {
157
- if (load) {
158
- tmp = tcg_temp_new_i32();
159
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
160
- neon_store_reg(rd, pass, tmp);
161
- } else {
162
- tmp = neon_load_reg(rd, pass);
163
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
164
- tcg_temp_free_i32(tmp);
165
- }
166
- tcg_gen_addi_i32(addr, addr, stride);
167
- } else if (size == 1) {
168
- if (load) {
169
- tmp = tcg_temp_new_i32();
170
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
171
- tcg_gen_addi_i32(addr, addr, stride);
172
- tmp2 = tcg_temp_new_i32();
173
- gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
174
- tcg_gen_addi_i32(addr, addr, stride);
175
- tcg_gen_shli_i32(tmp2, tmp2, 16);
176
- tcg_gen_or_i32(tmp, tmp, tmp2);
177
- tcg_temp_free_i32(tmp2);
178
- neon_store_reg(rd, pass, tmp);
179
- } else {
180
- tmp = neon_load_reg(rd, pass);
181
- tmp2 = tcg_temp_new_i32();
182
- tcg_gen_shri_i32(tmp2, tmp, 16);
183
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
184
- tcg_temp_free_i32(tmp);
185
- tcg_gen_addi_i32(addr, addr, stride);
186
- gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
187
- tcg_temp_free_i32(tmp2);
188
- tcg_gen_addi_i32(addr, addr, stride);
189
- }
190
- } else /* size == 0 */ {
191
- if (load) {
192
- tmp2 = NULL;
193
- for (n = 0; n < 4; n++) {
194
- tmp = tcg_temp_new_i32();
195
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
196
- tcg_gen_addi_i32(addr, addr, stride);
197
- if (n == 0) {
198
- tmp2 = tmp;
199
- } else {
200
- tcg_gen_shli_i32(tmp, tmp, n * 8);
201
- tcg_gen_or_i32(tmp2, tmp2, tmp);
202
- tcg_temp_free_i32(tmp);
203
- }
204
- }
205
- neon_store_reg(rd, pass, tmp2);
206
- } else {
207
- tmp2 = neon_load_reg(rd, pass);
208
- for (n = 0; n < 4; n++) {
209
- tmp = tcg_temp_new_i32();
210
- if (n == 0) {
211
- tcg_gen_mov_i32(tmp, tmp2);
212
- } else {
213
- tcg_gen_shri_i32(tmp, tmp2, n * 8);
214
- }
215
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
216
- tcg_temp_free_i32(tmp);
217
- tcg_gen_addi_i32(addr, addr, stride);
218
- }
219
- tcg_temp_free_i32(tmp2);
220
- }
221
+ for (n = 0; n < 8 >> size; n++) {
222
+ int xs;
223
+ for (xs = 0; xs < interleave; xs++) {
224
+ int tt = rd + reg + spacing * xs;
225
+
226
+ if (load) {
227
+ gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
228
+ neon_store_element64(tt, n, size, tmp64);
229
+ } else {
230
+ neon_load_element64(tmp64, tt, n, size);
231
+ gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
232
}
233
+ tcg_gen_add_i32(addr, addr, tmp2);
234
}
235
}
236
- rd += spacing;
237
}
238
tcg_temp_free_i32(addr);
239
- stride = nregs * 8;
240
+ tcg_temp_free_i32(tmp2);
241
+ tcg_temp_free_i64(tmp64);
242
+ stride = nregs * interleave * 8;
243
} else {
244
size = (insn >> 10) & 3;
245
if (size == 3) {
71
--
246
--
72
2.7.4
247
2.19.1
73
248
74
249
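
The element iteration in the rewritten disas_neon_ls_insn above -- and
the stride fix for "VLD2 (multiple 2-element structures)" -- boils down
to de-interleaving consecutive memory elements across the destination
registers. A standalone sketch with byte elements, interleave == 2 and
spacing == 1 (illustrative values, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t mem[16];
        uint8_t d[2][8];                /* two 64-bit Neon registers */
        for (int i = 0; i < 16; i++) {
            mem[i] = i;
        }
        /* structure element xs of pair n goes to register xs */
        for (int n = 0; n < 8; n++) {
            for (int xs = 0; xs < 2; xs++) {
                d[xs][n] = mem[n * 2 + xs];
            }
        }
        printf("%02x %02x\n", d[0][1], d[1][1]);    /* prints 02 03 */
        return 0;
    }
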
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
For a sequence of loads or stores from a single register,
4
little-endian operations can be promoted to an 8-byte op.
5
This can reduce the number of operations by a factor of 8.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/translate.c | 10 ++++++++++
14
1 file changed, 10 insertions(+)
15
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate.c
19
+++ b/target/arm/translate.c
20
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
21
if (size == 3 && (interleave | spacing) != 1) {
22
return 1;
23
}
24
+ /* For our purposes, bytes are always little-endian. */
25
+ if (size == 0) {
26
+ endian = MO_LE;
27
+ }
28
+ /* Consecutive little-endian elements from a single register
29
+ * can be promoted to a larger little-endian operation.
30
+ */
31
+ if (interleave == 1 && endian == MO_LE) {
32
+ size = 3;
33
+ }
34
tmp64 = tcg_temp_new_i64();
35
addr = tcg_temp_new_i32();
36
tmp2 = tcg_const_i32(1 << size);
37
--
38
2.19.1
39
40
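
The promotion is safe because, viewed little-endian, eight consecutive
byte elements and a single 64-bit element have the same memory image
when they target one register (interleave == 1). A standalone check
(assumes a little-endian host; not QEMU code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint8_t bytes[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        uint64_t le = 0;
        for (int i = 0; i < 8; i++) {
            le |= (uint64_t)bytes[i] << (8 * i);    /* LE compose */
        }
        uint8_t img[8];
        memcpy(img, &le, 8);                        /* LE host image */
        printf("%d\n", memcmp(img, bytes, 8) == 0); /* prints 1 */
        return 0;
    }
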
1
Don't use the old_mmio field in the memory region ops struct.
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Instead of shifts and masks, use direct loads and stores from
4
the neon register file.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 1505580378-9044-4-git-send-email-peter.maydell@linaro.org
6
---
10
---
7
hw/timer/omap_synctimer.c | 35 +++++++++++++++++++++--------------
11
target/arm/translate.c | 92 +++++++++++++++++++++++-------------------
8
1 file changed, 21 insertions(+), 14 deletions(-)
12
1 file changed, 50 insertions(+), 42 deletions(-)
9
13
10
diff --git a/hw/timer/omap_synctimer.c b/hw/timer/omap_synctimer.c
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
11
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
12
--- a/hw/timer/omap_synctimer.c
16
--- a/target/arm/translate.c
13
+++ b/hw/timer/omap_synctimer.c
17
+++ b/target/arm/translate.c
14
@@ -XXX,XX +XXX,XX @@ static uint32_t omap_synctimer_readh(void *opaque, hwaddr addr)
18
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
15
}
19
return tmp;
16
}
20
}
17
21
18
-static void omap_synctimer_write(void *opaque, hwaddr addr,
22
+static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
19
- uint32_t value)
20
+static uint64_t omap_synctimer_readfn(void *opaque, hwaddr addr,
21
+ unsigned size)
22
+{
23
+{
23
+ switch (size) {
24
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
24
+ case 1:
25
+
25
+ return omap_badwidth_read32(opaque, addr);
26
+ switch (mop) {
26
+ case 2:
27
+ case MO_UB:
27
+ return omap_synctimer_readh(opaque, addr);
28
+ tcg_gen_ld8u_i32(var, cpu_env, offset);
28
+ case 4:
29
+ break;
29
+ return omap_synctimer_readw(opaque, addr);
30
+ case MO_UW:
31
+ tcg_gen_ld16u_i32(var, cpu_env, offset);
32
+ break;
33
+ case MO_UL:
34
+ tcg_gen_ld_i32(var, cpu_env, offset);
35
+ break;
30
+ default:
36
+ default:
31
+ g_assert_not_reached();
37
+ g_assert_not_reached();
32
+ }
38
+ }
33
+}
39
+}
34
+
40
+
35
+static void omap_synctimer_writefn(void *opaque, hwaddr addr,
41
static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
36
+ uint64_t value, unsigned size)
37
{
42
{
38
OMAP_BAD_REG(addr);
43
long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
44
@@ -XXX,XX +XXX,XX @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
45
tcg_temp_free_i32(var);
39
}
46
}
40
47
41
static const MemoryRegionOps omap_synctimer_ops = {
48
+static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
42
- .old_mmio = {
49
+{
43
- .read = {
50
+ long offset = neon_element_offset(reg, ele, size);
44
- omap_badwidth_read32,
51
+
45
- omap_synctimer_readh,
52
+ switch (size) {
46
- omap_synctimer_readw,
53
+ case MO_8:
47
- },
54
+ tcg_gen_st8_i32(var, cpu_env, offset);
48
- .write = {
55
+ break;
49
- omap_badwidth_write32,
56
+ case MO_16:
50
- omap_synctimer_write,
57
+ tcg_gen_st16_i32(var, cpu_env, offset);
51
- omap_synctimer_write,
58
+ break;
52
- },
59
+ case MO_32:
53
- },
60
+ tcg_gen_st_i32(var, cpu_env, offset);
54
+ .read = omap_synctimer_readfn,
61
+ break;
55
+ .write = omap_synctimer_writefn,
62
+ default:
56
+ .valid.min_access_size = 1,
63
+ g_assert_not_reached();
57
+ .valid.max_access_size = 4,
64
+ }
58
.endianness = DEVICE_NATIVE_ENDIAN,
65
+}
59
};
66
+
60
67
static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
68
{
69
long offset = neon_element_offset(reg, ele, size);
70
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
71
int stride;
72
int size;
73
int reg;
74
- int pass;
75
int load;
76
- int shift;
77
int n;
78
int vec_size;
79
int mmu_idx;
80
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
81
} else {
82
/* Single element. */
83
int idx = (insn >> 4) & 0xf;
84
- pass = (insn >> 7) & 1;
85
+ int reg_idx;
86
switch (size) {
87
case 0:
88
- shift = ((insn >> 5) & 3) * 8;
89
+ reg_idx = (insn >> 5) & 7;
90
stride = 1;
91
break;
92
case 1:
93
- shift = ((insn >> 6) & 1) * 16;
94
+ reg_idx = (insn >> 6) & 3;
95
stride = (insn & (1 << 5)) ? 2 : 1;
96
break;
97
case 2:
98
- shift = 0;
99
+ reg_idx = (insn >> 7) & 1;
100
stride = (insn & (1 << 6)) ? 2 : 1;
101
break;
102
default:
103
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
104
*/
105
return 1;
106
}
107
+ tmp = tcg_temp_new_i32();
108
addr = tcg_temp_new_i32();
109
load_reg_var(s, addr, rn);
110
for (reg = 0; reg < nregs; reg++) {
111
if (load) {
112
- tmp = tcg_temp_new_i32();
113
- switch (size) {
114
- case 0:
115
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
116
- break;
117
- case 1:
118
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
119
- break;
120
- case 2:
121
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
122
- break;
123
- default: /* Avoid compiler warnings. */
124
- abort();
125
- }
126
- if (size != 2) {
127
- tmp2 = neon_load_reg(rd, pass);
128
- tcg_gen_deposit_i32(tmp, tmp2, tmp,
129
- shift, size ? 16 : 8);
130
- tcg_temp_free_i32(tmp2);
131
- }
132
- neon_store_reg(rd, pass, tmp);
133
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
134
+ s->be_data | size);
135
+ neon_store_element(rd, reg_idx, size, tmp);
136
} else { /* Store */
137
- tmp = neon_load_reg(rd, pass);
138
- if (shift)
139
- tcg_gen_shri_i32(tmp, tmp, shift);
140
- switch (size) {
141
- case 0:
142
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
143
- break;
144
- case 1:
145
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
146
- break;
147
- case 2:
148
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
149
- break;
150
- }
151
- tcg_temp_free_i32(tmp);
152
+ neon_load_element(tmp, rd, reg_idx, size);
153
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
154
+ s->be_data | size);
155
}
156
rd += stride;
157
tcg_gen_addi_i32(addr, addr, 1 << size);
158
}
159
tcg_temp_free_i32(addr);
160
+ tcg_temp_free_i32(tmp);
161
stride = nregs * (1 << size);
162
}
163
}
61
--
164
--
62
2.7.4
165
2.19.1
63
166
64
167
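
The idea behind neon_element_offset() as used above -- address an
element directly in the register file instead of extracting it with
shifts and masks -- can be modelled as below for a little-endian host
layout (the real function also corrects for big-endian hosts; this is a
sketch, not QEMU code):

    #include <stdio.h>

    /* byte offset of element 'ele' (2^size bytes) in D register 'reg' */
    static long element_offset(int reg, int ele, int size)
    {
        return reg * 8 + ((long)ele << size);
    }

    int main(void)
    {
        /* halfword element 3 of d2: 2 * 8 + 3 * 2 = 22 */
        printf("%ld\n", element_offset(2, 3, 1));
        return 0;
    }
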
New patch
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
2
3
Announce the availability of the various priority queues.
4
This fixes an issue where guest kernels would miss to
5
configure secondary queues due to inproper feature bits.
6
7
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20181017213932.19973-2-edgar.iglesias@gmail.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/net/cadence_gem.c | 8 +++++++-
13
1 file changed, 7 insertions(+), 1 deletion(-)
14
15
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/net/cadence_gem.c
18
+++ b/hw/net/cadence_gem.c
19
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
20
int i;
21
CadenceGEMState *s = CADENCE_GEM(d);
22
const uint8_t *a;
23
+ uint32_t queues_mask = 0;
24
25
DB_PRINT("\n");
26
27
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
28
s->regs[GEM_DESCONF] = 0x02500111;
29
s->regs[GEM_DESCONF2] = 0x2ab13fff;
30
s->regs[GEM_DESCONF5] = 0x002f2045;
31
- s->regs[GEM_DESCONF6] = 0x00000200;
32
+ s->regs[GEM_DESCONF6] = 0x0;
33
+
34
+ if (s->num_priority_queues > 1) {
35
+ queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
36
+ s->regs[GEM_DESCONF6] |= queues_mask;
37
+ }
38
39
/* Set MAC address */
40
a = &s->conf.macaddr.a[0];
41
--
42
2.19.1
43
44
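
The queue mask written to GEM_DESCONF6 above sets one bit per secondary
priority queue: QEMU's MAKE_64BIT_MASK(shift, length) expands to
((~0ULL) >> (64 - length)) << shift. A standalone check, with the mask
reimplemented locally and an assumed num_priority_queues of 4:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t make_64bit_mask(unsigned shift, unsigned length)
    {
        return ((~0ULL) >> (64 - length)) << shift;
    }

    int main(void)
    {
        /* 4 queues -> bits 1..3 set for queues 1..3; prints 0xe */
        printf("0x%llx\n", (unsigned long long)make_64bit_mask(1, 3));
        return 0;
    }
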
1
When escalating to HardFault, we must go into Lockup if we
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
can't take the synchronous HardFault because the current
3
execution priority is already at or below the priority of
4
HardFault. In v7M HF is always priority -1 so a simple < 0
5
comparison sufficed; in v8M the priority of HardFault can
6
vary depending on whether it is a Secure or NonSecure
7
HardFault, so we must check against the priority of the
8
HardFault exception vector we're about to use.
9
2
3
Announce 64bit addressing support.
4
5
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20181017213932.19973-3-edgar.iglesias@gmail.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 1505240046-11454-13-git-send-email-peter.maydell@linaro.org
13
---
10
---
14
hw/intc/armv7m_nvic.c | 23 ++++++++++++-----------
11
hw/net/cadence_gem.c | 3 ++-
15
1 file changed, 12 insertions(+), 11 deletions(-)
12
1 file changed, 2 insertions(+), 1 deletion(-)
16
13
17
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
14
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/armv7m_nvic.c
16
--- a/hw/net/cadence_gem.c
20
+++ b/hw/intc/armv7m_nvic.c
17
+++ b/hw/net/cadence_gem.c
21
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
18
@@ -XXX,XX +XXX,XX @@
22
}
19
#define GEM_DESCONF4 (0x0000028C/4)
23
20
#define GEM_DESCONF5 (0x00000290/4)
24
if (escalate) {
21
#define GEM_DESCONF6 (0x00000294/4)
25
- if (running < 0) {
22
+#define GEM_DESCONF6_64B_MASK (1U << 23)
26
- /* We want to escalate to HardFault but we can't take a
23
#define GEM_DESCONF7 (0x00000298/4)
27
- * synchronous HardFault at this point either. This is a
24
28
- * Lockup condition due to a guest bug. We don't model
25
#define GEM_INT_Q1_STATUS (0x00000400 / 4)
29
- * Lockup, so report via cpu_abort() instead.
26
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
30
- */
27
s->regs[GEM_DESCONF] = 0x02500111;
31
- cpu_abort(&s->cpu->parent_obj,
28
s->regs[GEM_DESCONF2] = 0x2ab13fff;
32
- "Lockup: can't escalate %d to HardFault "
29
s->regs[GEM_DESCONF5] = 0x002f2045;
33
- "(current priority %d)\n", irq, running);
30
- s->regs[GEM_DESCONF6] = 0x0;
34
- }
31
+ s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;
35
32
36
- /* We can do the escalation, so we take HardFault instead.
33
if (s->num_priority_queues > 1) {
37
+ /* We need to escalate this exception to a synchronous HardFault.
34
queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
38
* If BFHFNMINS is set then we escalate to the banked HF for
39
* the target security state of the original exception; otherwise
40
* we take a Secure HardFault.
41
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
42
} else {
43
vec = &s->vectors[irq];
44
}
45
+ if (running <= vec->prio) {
46
+ /* We want to escalate to HardFault but we can't take the
47
+ * synchronous HardFault at this point either. This is a
48
+ * Lockup condition due to a guest bug. We don't model
49
+ * Lockup, so report via cpu_abort() instead.
50
+ */
51
+ cpu_abort(&s->cpu->parent_obj,
52
+ "Lockup: can't escalate %d to HardFault "
53
+ "(current priority %d)\n", irq, running);
54
+ }
55
+
56
/* HF may be banked but there is only one shared HFSR */
57
s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
58
}
59
--
35
--
60
2.7.4
36
2.19.1
61
37
62
38
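
The escalation rule in the HardFault patch above, in miniature: the
synchronous HardFault can be taken only if its priority is numerically
lower than the current execution priority; otherwise the guest has
caused a Lockup condition. Illustrative values, not QEMU code:

    #include <stdio.h>

    static int must_lockup(int running_prio, int hf_prio)
    {
        return running_prio <= hf_prio;
    }

    int main(void)
    {
        printf("%d\n", must_lockup(-1, -1));    /* already at HF prio: 1 */
        printf("%d\n", must_lockup(3, -1));     /* escalation works: 0 */
        return 0;
    }
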
1
In armv7m_nvic_set_pending() we have to compare the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
priority of an exception against the execution priority
3
to decide whether it needs to be escalated to HardFault.
4
In the specification this is a comparison against the
5
exception's group priority; for v7M we implemented it
6
as a comparison against the raw exception priority
7
because the two comparisons will always give the
8
same answer. For v8M the existence of AIRCR.PRIS and
9
the possibility of different PRIGROUP values for secure
10
and nonsecure exceptions means we need to explicitly
11
calculate the vector's group priority for this check.
12
2
3
The EL3 version of this register does not include an ASID,
4
and so the tlb_flush performed by vmsa_ttbr_write is not needed.
5
6
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20181019015617.22583-2-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 1505240046-11454-12-git-send-email-peter.maydell@linaro.org
16
---
11
---
17
hw/intc/armv7m_nvic.c | 2 +-
12
target/arm/helper.c | 2 +-
18
1 file changed, 1 insertion(+), 1 deletion(-)
13
1 file changed, 1 insertion(+), 1 deletion(-)
19
14
20
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/armv7m_nvic.c
17
--- a/target/arm/helper.c
23
+++ b/hw/intc/armv7m_nvic.c
18
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure)
19
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
25
int running = nvic_exec_prio(s);
20
.fieldoffset = offsetof(CPUARMState, cp15.mvbar) },
26
bool escalate = false;
21
{ .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64,
27
22
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0,
28
- if (vec->prio >= running) {
23
- .access = PL3_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
29
+ if (exc_group_prio(s, vec->prio, secure) >= running) {
24
+ .access = PL3_RW, .resetvalue = 0,
30
trace_nvic_escalate_prio(irq, vec->prio, running);
25
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) },
31
escalate = true;
26
{ .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
32
} else if (!vec->enabled) {
27
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
33
--
28
--
34
2.7.4
29
2.19.1
35
30
36
31
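
The group-priority computation this check now relies on masks off the
subpriority bits selected by AIRCR.PRIGROUP, while negative (fixed)
priorities pass through unchanged. A rough standalone model of the
PRIGROUP rule (an assumption-laden sketch, not QEMU's exc_group_prio):

    #include <stdio.h>

    static int group_prio(int raw, int prigroup)
    {
        if (raw < 0) {
            return raw;             /* NMI/HardFault fixed priorities */
        }
        return raw & ~((1 << (prigroup + 1)) - 1);
    }

    int main(void)
    {
        /* subpriority bits [4:0] dropped; prints 0x20 */
        printf("0x%x\n", group_prio(0x23, 4));
        return 0;
    }
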
1
Update the code in nvic_rettobase() so that it checks the
1
From: Richard Henderson <richard.henderson@linaro.org>
2
sec_vectors[] array as well as the vectors[] array if needed.
3
2
3
Since QEMU does not implement ASIDs, changes to the ASID must flush the
4
TLB. However, if the ASID does not change, there is no reason to flush.
5
6
In testing a boot of the Ubuntu installer to the first menu, this reduces
7
the number of flushes by 30%, or nearly 600k instances.
8
9
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 1505240046-11454-7-git-send-email-peter.maydell@linaro.org
7
---
15
---
8
hw/intc/armv7m_nvic.c | 5 ++++-
16
target/arm/helper.c | 8 +++-----
9
1 file changed, 4 insertions(+), 1 deletion(-)
17
1 file changed, 3 insertions(+), 5 deletions(-)
10
18
11
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/intc/armv7m_nvic.c
21
--- a/target/arm/helper.c
14
+++ b/hw/intc/armv7m_nvic.c
22
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static int nvic_pending_prio(NVICState *s)
23
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
16
static bool nvic_rettobase(NVICState *s)
24
static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
25
uint64_t value)
17
{
26
{
18
int irq, nhand = 0;
27
- /* 64 bit accesses to the TTBRs can change the ASID and so we
19
+ bool check_sec = arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY);
28
- * must flush the TLB.
20
29
- */
21
for (irq = ARMV7M_EXCP_RESET; irq < s->num_irq; irq++) {
30
- if (cpreg_field_is_64bit(ri)) {
22
- if (s->vectors[irq].active) {
31
+ /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
23
+ if (s->vectors[irq].active ||
32
+ if (cpreg_field_is_64bit(ri) &&
24
+ (check_sec && irq < NVIC_INTERNAL_VECTORS &&
33
+ extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
25
+ s->sec_vectors[irq].active)) {
34
ARMCPU *cpu = arm_env_get_cpu(env);
26
nhand++;
35
-
27
if (nhand == 2) {
36
tlb_flush(CPU(cpu));
28
return 0;
37
}
38
raw_write(env, ri, value);
29
--
39
--
30
2.7.4
40
2.19.1
31
41
32
42
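
The new test above flushes only when the ASID actually changes: for a
64-bit TTBR write the ASID lives in bits [63:48], which is what
extract64(raw_read(env, ri) ^ value, 48, 16) inspects. A standalone
model of that comparison (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    static int asid_changed(uint64_t old_ttbr, uint64_t new_ttbr)
    {
        return (((old_ttbr ^ new_ttbr) >> 48) & 0xffff) != 0;
    }

    int main(void)
    {
        uint64_t a = 0x0001000000080000ULL;     /* ASID 1 */
        uint64_t b = 0x0001000000090000ULL;     /* same ASID, new base */
        uint64_t c = 0x0002000000080000ULL;     /* ASID 2 */
        /* prints 0 1 */
        printf("%d %d\n", asid_changed(a, b), asid_changed(a, c));
        return 0;
    }
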