As promised, another pullreq... This one's mostly RTH's patches.

thanks
-- PMM

The following changes since commit 784c2e4f232adf5ef47a84a262ec72a07d068d6a:

  Merge remote-tracking branch 'remotes/jasowang/tags/net-pull-request' into staging (2018-10-19 15:30:40 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20181019

for you to fetch changes up to 88c9add25e7120e8622796c81ad3f3fb7f8d40e7:

  target/arm: Only flush tlb if ASID changes (2018-10-19 17:38:48 +0100)

----------------------------------------------------------------
target-arm queue:
 * ssi-sd: Make devices picking up backends unavailable with -device
 * Add support for VCPU event states
 * Move towards making ID registers the source of truth for
   whether a guest CPU implements a feature, rather than having
   parallel ID registers and feature bit flags
 * Implement various HCR hypervisor trap/config bits
 * Get IL bit correct for v7 syndrome values
 * Report correct syndrome for FP/SIMD traps to Hyp mode
 * hw/arm/boot: Increase compliance with kernel arm64 boot protocol
 * Refactor A32 Neon to use generic vector infrastructure
 * Fix a bug in A32 VLD2 "(multiple 2-element structures)" insn
 * net: cadence_gem: Report features correctly in ID register
 * Avoid some unnecessary TLB flushes on TTBR register writes

----------------------------------------------------------------
Dongjiu Geng (1):
      target/arm: Add support for VCPU event states

Edgar E. Iglesias (2):
      net: cadence_gem: Announce availability of priority queues
      net: cadence_gem: Announce 64bit addressing support

Markus Armbruster (1):
      ssi-sd: Make devices picking up backends unavailable with -device

Peter Maydell (10):
      target/arm: Improve debug logging of AArch32 exception return
      target/arm: Make switch_mode() file-local
      target/arm: Implement HCR.FB
      target/arm: Implement HCR.DC
      target/arm: ISR_EL1 bits track virtual interrupts if IMO/FMO set
      target/arm: Implement HCR.VI and VF
      target/arm: Implement HCR.PTW
      target/arm: New utility function to extract EC from syndrome
      target/arm: Get IL bit correct for v7 syndrome values
      target/arm: Report correct syndrome for FP/SIMD traps to Hyp mode

Richard Henderson (30):
      target/arm: Move some system registers into a substructure
      target/arm: V8M should not imply V7VE
      target/arm: Convert v8 extensions from feature bits to isar tests
      target/arm: Convert division from feature bits to isar0 tests
      target/arm: Convert jazelle from feature bit to isar1 test
      target/arm: Convert t32ee from feature bit to isar3 test
      target/arm: Convert sve from feature bit to aa64pfr0 test
      target/arm: Convert v8.2-fp16 from feature bit to aa64pfr0 test
      target/arm: Hoist address increment for vector memory ops
      target/arm: Don't call tcg_clear_temp_count
      target/arm: Use tcg_gen_gvec_dup_i64 for LD[1-4]R
      target/arm: Promote consecutive memory ops for aa64
      target/arm: Mark some arrays const
      target/arm: Use gvec for NEON VDUP
      target/arm: Use gvec for NEON VMOV, VMVN, VBIC & VORR (immediate)
      target/arm: Use gvec for NEON_3R_LOGIC insns
      target/arm: Use gvec for NEON_3R_VADD_VSUB insns
      target/arm: Use gvec for NEON_2RM_VMN, NEON_2RM_VNEG
      target/arm: Use gvec for NEON_3R_VMUL
      target/arm: Use gvec for VSHR, VSHL
      target/arm: Use gvec for VSRA
      target/arm: Use gvec for VSRI, VSLI
      target/arm: Use gvec for NEON_3R_VML
      target/arm: Use gvec for NEON_3R_VTST_VCEQ, NEON_3R_VCGT, NEON_3R_VCGE
      target/arm: Use gvec for NEON VLD all lanes
      target/arm: Reorg NEON VLD/VST all elements
      target/arm: Promote consecutive memory ops for aa32
      target/arm: Reorg NEON VLD/VST single element to one lane
      target/arm: Remove writefn from TTBR0_EL3
      target/arm: Only flush tlb if ASID changes

Stewart Hildebrand (1):
      hw/arm/boot: Increase compliance with kernel arm64 boot protocol

 target/arm/cpu.h            |  227 ++++++-
 target/arm/internals.h      |   45 +-
 target/arm/kvm_arm.h        |   24 +
 target/arm/translate.h      |   21 +
 hw/arm/boot.c               |   18 +
 hw/intc/armv7m_nvic.c       |   12 +-
 hw/net/cadence_gem.c        |    9 +-
 hw/sd/ssi-sd.c              |    2 +
 linux-user/aarch64/signal.c |    4 +-
 linux-user/elfload.c        |   60 +-
 linux-user/syscall.c        |   10 +-
 target/arm/cpu.c            |  242 ++++----
 target/arm/cpu64.c          |  148 +++--
 target/arm/helper.c         |  397 ++++++++----
 target/arm/kvm.c            |   60 ++
 target/arm/kvm32.c          |   13 +
 target/arm/kvm64.c          |   15 +-
 target/arm/machine.c        |   28 +-
 target/arm/op_helper.c      |    2 +-
 target/arm/translate-a64.c  |  715 ++++-----------------
 target/arm/translate.c      | 1451 ++++++++++++++++++++++++++++---------------
 21 files changed, 2021 insertions(+), 1482 deletions(-)

From: Markus Armbruster <armbru@redhat.com>

Device models aren't supposed to go on fishing expeditions for
backends.  They should expose suitable properties for the user to set.
For onboard devices, board code sets them.

Device ssi-sd picks up its block backend in its init() method with
drive_get_next() instead.  This mistake is already marked FIXME since
commit af9e40a.

Unset user_creatable to remove the mistake from our external
interface.  Since the SSI bus doesn't support hotplug, only -device
can be affected.  Only certain ARM machines have ssi-sd and provide an
SSI bus for it; this patch breaks -device ssi-sd for these machines.
No actual use of -device ssi-sd is known.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-id: 20181009060835.4608-1-armbru@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/ssi-sd.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/ssi-sd.c
+++ b/hw/sd/ssi-sd.c
@@ -XXX,XX +XXX,XX @@ static void ssi_sd_class_init(ObjectClass *klass, void *data)
     k->cs_polarity = SSI_CS_LOW;
     dc->vmsd = &vmstate_ssi_sd;
     dc->reset = ssi_sd_reset;
+    /* Reason: init() method uses drive_get_next() */
+    dc->user_creatable = false;
 }
 
 static const TypeInfo ssi_sd_info = {
--
2.19.1

From: Dongjiu Geng <gengdongjiu@huawei.com>

This patch extends the qemu-kvm state sync logic with support for
KVM_GET/SET_VCPU_EVENTS, giving access to the previously missing SError
exception state and allowing that state to be migrated.

The SError exception state consists of a pending flag and an ESR value;
kvm_put/get_vcpu_events() are called when the system registers are set or
fetched.  On migration, if the source machine has an SError pending, QEMU
migrates it regardless of whether the target machine supports specifying a
guest ESR value: a target without that support can still inject the SError
with a zero ESR value.

Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1538067351-23931-3-git-send-email-gengdongjiu@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     |  7 ++++++
 target/arm/kvm_arm.h | 24 ++++++++++++++++++
 target/arm/kvm.c     | 60 ++++++++++++++++++++++++++++++++++++++++++++
 target/arm/kvm32.c   | 13 ++++++++++
 target/arm/kvm64.c   | 13 ++++++++++
 target/arm/machine.c | 22 ++++++++++++++++
 6 files changed, 139 insertions(+)

uint8_t gicr_ipriorityr[GIC_INTERNAL];
29
index XXXXXXX..XXXXXXX 100644
24
30
--- a/target/arm/cpu.h
25
/* CPU interface */
31
+++ b/target/arm/cpu.h
26
+ uint64_t icc_sre_el1;
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
27
uint64_t icc_ctlr_el1[2];
33
*/
28
uint64_t icc_pmr_el1;
34
} exception;
29
uint64_t icc_bpr[3];
35
30
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
36
+ /* Information associated with an SError */
31
index XXXXXXX..XXXXXXX 100644
37
+ struct {
32
--- a/hw/intc/arm_gicv3_common.c
38
+ uint8_t pending;
33
+++ b/hw/intc/arm_gicv3_common.c
39
+ uint8_t has_esr;
34
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gicv3_cpu_virt = {
40
+ uint64_t esr;
35
}
41
+ } serror;
42
+
43
/* Thumb-2 EE state. */
44
uint32_t teecr;
45
uint32_t teehbr;
46
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
47
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/kvm_arm.h
49
+++ b/target/arm/kvm_arm.h
50
@@ -XXX,XX +XXX,XX @@ bool write_kvmstate_to_list(ARMCPU *cpu);
51
*/
52
void kvm_arm_reset_vcpu(ARMCPU *cpu);
53
54
+/**
55
+ * kvm_arm_init_serror_injection:
56
+ * @cs: CPUState
57
+ *
58
+ * Check whether KVM can set guest SError syndrome.
59
+ */
60
+void kvm_arm_init_serror_injection(CPUState *cs);
61
+
62
+/**
63
+ * kvm_get_vcpu_events:
64
+ * @cpu: ARMCPU
65
+ *
66
+ * Get VCPU related state from kvm.
67
+ */
68
+int kvm_get_vcpu_events(ARMCPU *cpu);
69
+
70
+/**
71
+ * kvm_put_vcpu_events:
72
+ * @cpu: ARMCPU
73
+ *
74
+ * Put VCPU related state to kvm.
75
+ */
76
+int kvm_put_vcpu_events(ARMCPU *cpu);
77
+
78
#ifdef CONFIG_KVM
79
/**
80
* kvm_arm_create_scratch_host_vcpu:
81
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
82
index XXXXXXX..XXXXXXX 100644
83
--- a/target/arm/kvm.c
84
+++ b/target/arm/kvm.c
85
@@ -XXX,XX +XXX,XX @@ const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
36
};
86
};
37
87
38
+static int icc_sre_el1_reg_pre_load(void *opaque)
88
static bool cap_has_mp_state;
39
+{
89
+static bool cap_has_inject_serror_esr;
40
+ GICv3CPUState *cs = opaque;
90
41
+
91
static ARMHostCPUFeatures arm_host_cpu_features;
42
+ /*
92
43
+ * If the sre_el1 subsection is not transferred this
93
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
44
+ * means SRE_EL1 is 0x7 (which might not be the same as
94
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
45
+ * our reset value).
95
}
46
+ */
96
47
+ cs->icc_sre_el1 = 0x7;
97
+void kvm_arm_init_serror_injection(CPUState *cs)
98
+{
99
+ cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
100
+ KVM_CAP_ARM_INJECT_SERROR_ESR);
101
+}
102
+
103
bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
104
int *fdarray,
105
struct kvm_vcpu_init *init)
106
@@ -XXX,XX +XXX,XX @@ int kvm_arm_sync_mpstate_to_qemu(ARMCPU *cpu)
107
return 0;
108
}
109
110
+int kvm_put_vcpu_events(ARMCPU *cpu)
111
+{
112
+ CPUARMState *env = &cpu->env;
113
+ struct kvm_vcpu_events events;
114
+ int ret;
115
+
116
+ if (!kvm_has_vcpu_events()) {
117
+ return 0;
118
+ }
119
+
120
+ memset(&events, 0, sizeof(events));
121
+ events.exception.serror_pending = env->serror.pending;
122
+
123
+ /* Inject SError to guest with specified syndrome if host kernel
124
+ * supports it, otherwise inject SError without syndrome.
125
+ */
126
+ if (cap_has_inject_serror_esr) {
127
+ events.exception.serror_has_esr = env->serror.has_esr;
128
+ events.exception.serror_esr = env->serror.esr;
129
+ }
130
+
131
+ ret = kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
132
+ if (ret) {
133
+ error_report("failed to put vcpu events");
134
+ }
135
+
136
+ return ret;
137
+}
138
+
139
+int kvm_get_vcpu_events(ARMCPU *cpu)
140
+{
141
+ CPUARMState *env = &cpu->env;
142
+ struct kvm_vcpu_events events;
143
+ int ret;
144
+
145
+ if (!kvm_has_vcpu_events()) {
146
+ return 0;
147
+ }
148
+
149
+ memset(&events, 0, sizeof(events));
150
+ ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
151
+ if (ret) {
152
+ error_report("failed to get vcpu events");
153
+ return ret;
154
+ }
155
+
156
+ env->serror.pending = events.exception.serror_pending;
157
+ env->serror.has_esr = events.exception.serror_has_esr;
158
+ env->serror.esr = events.exception.serror_esr;
159
+
48
+ return 0;
160
+ return 0;
49
+}
161
+}
50
+
162
+
51
+static bool icc_sre_el1_reg_needed(void *opaque)
163
void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
52
+{
164
{
53
+ GICv3CPUState *cs = opaque;
165
}
54
+
166
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
55
+ return cs->icc_sre_el1 != 7;
167
index XXXXXXX..XXXXXXX 100644
56
+}
168
--- a/target/arm/kvm32.c
57
+
169
+++ b/target/arm/kvm32.c
58
+const VMStateDescription vmstate_gicv3_cpu_sre_el1 = {
170
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
59
+ .name = "arm_gicv3_cpu/sre_el1",
171
}
172
cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;
173
174
+ /* Check whether userspace can specify guest syndrome value */
175
+ kvm_arm_init_serror_injection(cs);
176
+
177
return kvm_arm_init_cpreg_list(cpu);
178
}
179
180
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
181
return ret;
182
}
183
184
+ ret = kvm_put_vcpu_events(cpu);
185
+ if (ret) {
186
+ return ret;
187
+ }
188
+
189
/* Note that we do not call write_cpustate_to_list()
190
* here, so we are only writing the tuple list back to
191
* KVM. This is safe because nothing can change the
192
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
193
}
194
vfp_set_fpscr(env, fpscr);
195
196
+ ret = kvm_get_vcpu_events(cpu);
197
+ if (ret) {
198
+ return ret;
199
+ }
200
+
201
if (!write_kvmstate_to_list(cpu)) {
202
return EINVAL;
203
}
204
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
205
index XXXXXXX..XXXXXXX 100644
206
--- a/target/arm/kvm64.c
207
+++ b/target/arm/kvm64.c
208
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
209
210
kvm_arm_init_debug(cs);
211
212
+ /* Check whether user space can specify guest syndrome value */
213
+ kvm_arm_init_serror_injection(cs);
214
+
215
return kvm_arm_init_cpreg_list(cpu);
216
}
217
218
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
219
return ret;
220
}
221
222
+ ret = kvm_put_vcpu_events(cpu);
223
+ if (ret) {
224
+ return ret;
225
+ }
226
+
227
if (!write_list_to_kvmstate(cpu, level)) {
228
return EINVAL;
229
}
230
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
231
}
232
vfp_set_fpcr(env, fpr);
233
234
+ ret = kvm_get_vcpu_events(cpu);
235
+ if (ret) {
236
+ return ret;
237
+ }
238
+
239
if (!write_kvmstate_to_list(cpu)) {
240
return EINVAL;
241
}
242
diff --git a/target/arm/machine.c b/target/arm/machine.c
243
index XXXXXXX..XXXXXXX 100644
244
--- a/target/arm/machine.c
245
+++ b/target/arm/machine.c
246
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
247
};
248
#endif /* AARCH64 */
249
250
+static bool serror_needed(void *opaque)
251
+{
252
+ ARMCPU *cpu = opaque;
253
+ CPUARMState *env = &cpu->env;
254
+
255
+ return env->serror.pending != 0;
256
+}
257
+
258
+static const VMStateDescription vmstate_serror = {
259
+ .name = "cpu/serror",
60
+ .version_id = 1,
260
+ .version_id = 1,
61
+ .minimum_version_id = 1,
261
+ .minimum_version_id = 1,
62
+ .pre_load = icc_sre_el1_reg_pre_load,
262
+ .needed = serror_needed,
63
+ .needed = icc_sre_el1_reg_needed,
64
+ .fields = (VMStateField[]) {
263
+ .fields = (VMStateField[]) {
65
+ VMSTATE_UINT64(icc_sre_el1, GICv3CPUState),
264
+ VMSTATE_UINT8(env.serror.pending, ARMCPU),
265
+ VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
266
+ VMSTATE_UINT64(env.serror.esr, ARMCPU),
66
+ VMSTATE_END_OF_LIST()
267
+ VMSTATE_END_OF_LIST()
67
+ }
268
+ }
68
+};
269
+};
69
+
270
+
70
static const VMStateDescription vmstate_gicv3_cpu = {
271
static bool m_needed(void *opaque)
71
.name = "arm_gicv3_cpu",
272
{
72
.version_id = 1,
273
ARMCPU *cpu = opaque;
73
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gicv3_cpu = {
274
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
74
.subsections = (const VMStateDescription * []) {
275
#ifdef TARGET_AARCH64
75
&vmstate_gicv3_cpu_virt,
276
&vmstate_sve,
277
#endif
278
+ &vmstate_serror,
76
NULL
279
NULL
77
+ },
78
+ .subsections = (const VMStateDescription * []) {
79
+ &vmstate_gicv3_cpu_sre_el1,
80
+ NULL
81
}
280
}
82
};
281
};
83
84
--
282
--
85
2.7.4
283
2.19.1
86
284
87
285
From: Richard Henderson <richard.henderson@linaro.org>

Create struct ARMISARegisters, to be accessed during translation.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      |  32 ++++----
 hw/intc/armv7m_nvic.c |  12 +--
 target/arm/cpu.c      | 178 +++++++++++++++++++++---------------------
 target/arm/cpu64.c    |  70 ++++++++---------
 target/arm/helper.c   |  28 +++----
 5 files changed, 162 insertions(+), 158 deletions(-)

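Note: the large diff below is mostly mechanical.  As a simplified, compilable
sketch of what it introduces (ExampleCPU and example_read_id_isar0 are
illustrative names only; the real definitions live in target/arm/cpu.h):

#include <stdint.h>

/* Sketch of the new substructure; the real struct carries these fields. */
typedef struct ARMISARegisters {
    uint32_t id_isar0, id_isar1, id_isar2, id_isar3,
             id_isar4, id_isar5, id_isar6;
    uint32_t mvfr0, mvfr1, mvfr2;
    uint64_t id_aa64isar0, id_aa64isar1;
    uint64_t id_aa64pfr0, id_aa64pfr1;
} ARMISARegisters;

/* Illustrative stand-in for ARMCPU, which embeds the substructure as "isar". */
typedef struct ExampleCPU {
    ARMISARegisters isar;
} ExampleCPU;

/* Call sites change from cpu->id_isar0 to cpu->isar.id_isar0. */
static inline uint32_t example_read_id_isar0(const ExampleCPU *cpu)
{
    return cpu->isar.id_isar0;
}
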
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
new file mode 100644
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX
19
--- a/target/arm/cpu.h
17
--- /dev/null
20
+++ b/target/arm/cpu.h
18
+++ b/include/hw/arm/armv7m_nvic.h
21
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
19
@@ -XXX,XX +XXX,XX @@
22
* ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
20
+/*
23
* is used for reset values of non-constant registers; no reset_
21
+ * ARMv7M NVIC object
24
* prefix means a constant register.
22
+ *
25
+ * Some of these registers are split out into a substructure that
23
+ * Copyright (c) 2017 Linaro Ltd
26
+ * is shared with the translators to control the ISA.
24
+ * Written by Peter Maydell <peter.maydell@linaro.org>
27
*/
25
+ *
28
+ struct ARMISARegisters {
26
+ * This code is licensed under the GPL version 2 or later.
29
+ uint32_t id_isar0;
27
+ */
30
+ uint32_t id_isar1;
28
+
31
+ uint32_t id_isar2;
29
+#ifndef HW_ARM_ARMV7M_NVIC_H
32
+ uint32_t id_isar3;
30
+#define HW_ARM_ARMV7M_NVIC_H
33
+ uint32_t id_isar4;
31
+
34
+ uint32_t id_isar5;
32
+#include "target/arm/cpu.h"
35
+ uint32_t id_isar6;
33
+#include "hw/sysbus.h"
36
+ uint32_t mvfr0;
34
+
37
+ uint32_t mvfr1;
35
+#define TYPE_NVIC "armv7m_nvic"
38
+ uint32_t mvfr2;
36
+
39
+ uint64_t id_aa64isar0;
37
+#define NVIC(obj) \
40
+ uint64_t id_aa64isar1;
38
+ OBJECT_CHECK(NVICState, (obj), TYPE_NVIC)
41
+ uint64_t id_aa64pfr0;
39
+
42
+ uint64_t id_aa64pfr1;
40
+/* Highest permitted number of exceptions (architectural limit) */
43
+ } isar;
41
+#define NVIC_MAX_VECTORS 512
44
uint32_t midr;
42
+
45
uint32_t revidr;
43
+typedef struct VecInfo {
46
uint32_t reset_fpsid;
44
+ /* Exception priorities can range from -3 to 255; only the unmodifiable
47
- uint32_t mvfr0;
45
+ * priority values for RESET, NMI and HardFault can be negative.
48
- uint32_t mvfr1;
46
+ */
49
- uint32_t mvfr2;
47
+ int16_t prio;
50
uint32_t ctr;
48
+ uint8_t enabled;
51
uint32_t reset_sctlr;
49
+ uint8_t pending;
52
uint32_t id_pfr0;
50
+ uint8_t active;
53
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
51
+ uint8_t level; /* exceptions <=15 never set level */
54
uint32_t id_mmfr2;
52
+} VecInfo;
55
uint32_t id_mmfr3;
53
+
56
uint32_t id_mmfr4;
54
+typedef struct NVICState {
57
- uint32_t id_isar0;
55
+ /*< private >*/
58
- uint32_t id_isar1;
56
+ SysBusDevice parent_obj;
59
- uint32_t id_isar2;
57
+ /*< public >*/
60
- uint32_t id_isar3;
58
+
61
- uint32_t id_isar4;
59
+ ARMCPU *cpu;
62
- uint32_t id_isar5;
60
+
63
- uint32_t id_isar6;
61
+ VecInfo vectors[NVIC_MAX_VECTORS];
64
- uint64_t id_aa64pfr0;
62
+ uint32_t prigroup;
65
- uint64_t id_aa64pfr1;
63
+
66
uint64_t id_aa64dfr0;
64
+ /* vectpending and exception_prio are both cached state that can
67
uint64_t id_aa64dfr1;
65
+ * be recalculated from the vectors[] array and the prigroup field.
68
uint64_t id_aa64afr0;
66
+ */
69
uint64_t id_aa64afr1;
67
+ unsigned int vectpending; /* highest prio pending enabled exception */
70
- uint64_t id_aa64isar0;
68
+ int exception_prio; /* group prio of the highest prio active exception */
71
- uint64_t id_aa64isar1;
69
+
72
uint64_t id_aa64mmfr0;
70
+ struct {
73
uint64_t id_aa64mmfr1;
71
+ uint32_t control;
74
uint32_t dbgdidr;
72
+ uint32_t reload;
73
+ int64_t tick;
74
+ QEMUTimer *timer;
75
+ } systick;
76
+
77
+ MemoryRegion sysregmem;
78
+ MemoryRegion container;
79
+
80
+ uint32_t num_irq;
81
+ qemu_irq excpout;
82
+ qemu_irq sysresetreq;
83
+} NVICState;
84
+
85
+#endif
86
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
75
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
87
index XXXXXXX..XXXXXXX 100644
76
index XXXXXXX..XXXXXXX 100644
88
--- a/hw/intc/armv7m_nvic.c
77
--- a/hw/intc/armv7m_nvic.c
89
+++ b/hw/intc/armv7m_nvic.c
78
+++ b/hw/intc/armv7m_nvic.c
90
@@ -XXX,XX +XXX,XX @@
79
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
91
#include "hw/sysbus.h"
80
case 0xd5c: /* MMFR3. */
92
#include "qemu/timer.h"
81
return cpu->id_mmfr3;
93
#include "hw/arm/arm.h"
82
case 0xd60: /* ISAR0. */
94
+#include "hw/arm/armv7m_nvic.h"
83
- return cpu->id_isar0;
95
#include "target/arm/cpu.h"
84
+ return cpu->isar.id_isar0;
96
#include "exec/address-spaces.h"
85
case 0xd64: /* ISAR1. */
97
#include "qemu/log.h"
86
- return cpu->id_isar1;
98
@@ -XXX,XX +XXX,XX @@
87
+ return cpu->isar.id_isar1;
99
* "exception" more or less interchangeably.
88
case 0xd68: /* ISAR2. */
100
*/
89
- return cpu->id_isar2;
101
#define NVIC_FIRST_IRQ 16
90
+ return cpu->isar.id_isar2;
102
-#define NVIC_MAX_VECTORS 512
91
case 0xd6c: /* ISAR3. */
103
#define NVIC_MAX_IRQ (NVIC_MAX_VECTORS - NVIC_FIRST_IRQ)
92
- return cpu->id_isar3;
104
93
+ return cpu->isar.id_isar3;
105
/* Effective running priority of the CPU when no exception is active
94
case 0xd70: /* ISAR4. */
106
@@ -XXX,XX +XXX,XX @@
95
- return cpu->id_isar4;
107
*/
96
+ return cpu->isar.id_isar4;
108
#define NVIC_NOEXC_PRIO 0x100
97
case 0xd74: /* ISAR5. */
109
98
- return cpu->id_isar5;
110
-typedef struct VecInfo {
99
+ return cpu->isar.id_isar5;
111
- /* Exception priorities can range from -3 to 255; only the unmodifiable
100
case 0xd78: /* CLIDR */
112
- * priority values for RESET, NMI and HardFault can be negative.
101
return cpu->clidr;
113
- */
102
case 0xd7c: /* CTR */
114
- int16_t prio;
103
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
115
- uint8_t enabled;
104
index XXXXXXX..XXXXXXX 100644
116
- uint8_t pending;
105
--- a/target/arm/cpu.c
117
- uint8_t active;
106
+++ b/target/arm/cpu.c
118
- uint8_t level; /* exceptions <=15 never set level */
107
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
119
-} VecInfo;
108
g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);
120
-
109
121
-typedef struct NVICState {
110
env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
122
- /*< private >*/
111
- env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
123
- SysBusDevice parent_obj;
112
- env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
124
- /*< public >*/
113
- env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
125
-
114
+ env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
126
- ARMCPU *cpu;
115
+ env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
127
-
116
+ env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
128
- VecInfo vectors[NVIC_MAX_VECTORS];
117
129
- uint32_t prigroup;
118
cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
130
-
119
s->halted = cpu->start_powered_off;
131
- /* vectpending and exception_prio are both cached state that can
120
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
132
- * be recalculated from the vectors[] array and the prigroup field.
121
* registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
133
- */
122
*/
134
- unsigned int vectpending; /* highest prio pending enabled exception */
123
cpu->id_pfr1 &= ~0xf0;
135
- int exception_prio; /* group prio of the highest prio active exception */
124
- cpu->id_aa64pfr0 &= ~0xf000;
136
-
125
+ cpu->isar.id_aa64pfr0 &= ~0xf000;
137
- struct {
126
}
138
- uint32_t control;
127
139
- uint32_t reload;
128
if (!cpu->has_el2) {
140
- int64_t tick;
129
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
141
- QEMUTimer *timer;
130
* registers if we don't have EL2. These are id_pfr1[15:12] and
142
- } systick;
131
* id_aa64pfr0_el1[11:8].
143
-
132
*/
144
- MemoryRegion sysregmem;
133
- cpu->id_aa64pfr0 &= ~0xf00;
145
- MemoryRegion container;
134
+ cpu->isar.id_aa64pfr0 &= ~0xf00;
146
-
135
cpu->id_pfr1 &= ~0xf000;
147
- uint32_t num_irq;
136
}
148
- qemu_irq excpout;
137
149
- qemu_irq sysresetreq;
138
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
150
-} NVICState;
139
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
151
-
140
cpu->midr = 0x4107b362;
152
-#define TYPE_NVIC "armv7m_nvic"
141
cpu->reset_fpsid = 0x410120b4;
153
-
142
- cpu->mvfr0 = 0x11111111;
154
-#define NVIC(obj) \
143
- cpu->mvfr1 = 0x00000000;
155
- OBJECT_CHECK(NVICState, (obj), TYPE_NVIC)
144
+ cpu->isar.mvfr0 = 0x11111111;
156
-
145
+ cpu->isar.mvfr1 = 0x00000000;
157
static const uint8_t nvic_id[] = {
146
cpu->ctr = 0x1dd20d2;
158
0x00, 0xb0, 0x1b, 0x00, 0x0d, 0xe0, 0x05, 0xb1
147
cpu->reset_sctlr = 0x00050078;
159
};
148
cpu->id_pfr0 = 0x111;
149
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
150
cpu->id_mmfr0 = 0x01130003;
151
cpu->id_mmfr1 = 0x10030302;
152
cpu->id_mmfr2 = 0x01222110;
153
- cpu->id_isar0 = 0x00140011;
154
- cpu->id_isar1 = 0x12002111;
155
- cpu->id_isar2 = 0x11231111;
156
- cpu->id_isar3 = 0x01102131;
157
- cpu->id_isar4 = 0x141;
158
+ cpu->isar.id_isar0 = 0x00140011;
159
+ cpu->isar.id_isar1 = 0x12002111;
160
+ cpu->isar.id_isar2 = 0x11231111;
161
+ cpu->isar.id_isar3 = 0x01102131;
162
+ cpu->isar.id_isar4 = 0x141;
163
cpu->reset_auxcr = 7;
164
}
165
166
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
167
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
168
cpu->midr = 0x4117b363;
169
cpu->reset_fpsid = 0x410120b4;
170
- cpu->mvfr0 = 0x11111111;
171
- cpu->mvfr1 = 0x00000000;
172
+ cpu->isar.mvfr0 = 0x11111111;
173
+ cpu->isar.mvfr1 = 0x00000000;
174
cpu->ctr = 0x1dd20d2;
175
cpu->reset_sctlr = 0x00050078;
176
cpu->id_pfr0 = 0x111;
177
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
178
cpu->id_mmfr0 = 0x01130003;
179
cpu->id_mmfr1 = 0x10030302;
180
cpu->id_mmfr2 = 0x01222110;
181
- cpu->id_isar0 = 0x00140011;
182
- cpu->id_isar1 = 0x12002111;
183
- cpu->id_isar2 = 0x11231111;
184
- cpu->id_isar3 = 0x01102131;
185
- cpu->id_isar4 = 0x141;
186
+ cpu->isar.id_isar0 = 0x00140011;
187
+ cpu->isar.id_isar1 = 0x12002111;
188
+ cpu->isar.id_isar2 = 0x11231111;
189
+ cpu->isar.id_isar3 = 0x01102131;
190
+ cpu->isar.id_isar4 = 0x141;
191
cpu->reset_auxcr = 7;
192
}
193
194
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
195
set_feature(&cpu->env, ARM_FEATURE_EL3);
196
cpu->midr = 0x410fb767;
197
cpu->reset_fpsid = 0x410120b5;
198
- cpu->mvfr0 = 0x11111111;
199
- cpu->mvfr1 = 0x00000000;
200
+ cpu->isar.mvfr0 = 0x11111111;
201
+ cpu->isar.mvfr1 = 0x00000000;
202
cpu->ctr = 0x1dd20d2;
203
cpu->reset_sctlr = 0x00050078;
204
cpu->id_pfr0 = 0x111;
205
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
206
cpu->id_mmfr0 = 0x01130003;
207
cpu->id_mmfr1 = 0x10030302;
208
cpu->id_mmfr2 = 0x01222100;
209
- cpu->id_isar0 = 0x0140011;
210
- cpu->id_isar1 = 0x12002111;
211
- cpu->id_isar2 = 0x11231121;
212
- cpu->id_isar3 = 0x01102131;
213
- cpu->id_isar4 = 0x01141;
214
+ cpu->isar.id_isar0 = 0x0140011;
215
+ cpu->isar.id_isar1 = 0x12002111;
216
+ cpu->isar.id_isar2 = 0x11231121;
217
+ cpu->isar.id_isar3 = 0x01102131;
218
+ cpu->isar.id_isar4 = 0x01141;
219
cpu->reset_auxcr = 7;
220
}
221
222
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
223
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
224
cpu->midr = 0x410fb022;
225
cpu->reset_fpsid = 0x410120b4;
226
- cpu->mvfr0 = 0x11111111;
227
- cpu->mvfr1 = 0x00000000;
228
+ cpu->isar.mvfr0 = 0x11111111;
229
+ cpu->isar.mvfr1 = 0x00000000;
230
cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
231
cpu->id_pfr0 = 0x111;
232
cpu->id_pfr1 = 0x1;
233
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
234
cpu->id_mmfr0 = 0x01100103;
235
cpu->id_mmfr1 = 0x10020302;
236
cpu->id_mmfr2 = 0x01222000;
237
- cpu->id_isar0 = 0x00100011;
238
- cpu->id_isar1 = 0x12002111;
239
- cpu->id_isar2 = 0x11221011;
240
- cpu->id_isar3 = 0x01102131;
241
- cpu->id_isar4 = 0x141;
242
+ cpu->isar.id_isar0 = 0x00100011;
243
+ cpu->isar.id_isar1 = 0x12002111;
244
+ cpu->isar.id_isar2 = 0x11221011;
245
+ cpu->isar.id_isar3 = 0x01102131;
246
+ cpu->isar.id_isar4 = 0x141;
247
cpu->reset_auxcr = 1;
248
}
249
250
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
251
cpu->id_mmfr1 = 0x00000000;
252
cpu->id_mmfr2 = 0x00000000;
253
cpu->id_mmfr3 = 0x00000000;
254
- cpu->id_isar0 = 0x01141110;
255
- cpu->id_isar1 = 0x02111000;
256
- cpu->id_isar2 = 0x21112231;
257
- cpu->id_isar3 = 0x01111110;
258
- cpu->id_isar4 = 0x01310102;
259
- cpu->id_isar5 = 0x00000000;
260
- cpu->id_isar6 = 0x00000000;
261
+ cpu->isar.id_isar0 = 0x01141110;
262
+ cpu->isar.id_isar1 = 0x02111000;
263
+ cpu->isar.id_isar2 = 0x21112231;
264
+ cpu->isar.id_isar3 = 0x01111110;
265
+ cpu->isar.id_isar4 = 0x01310102;
266
+ cpu->isar.id_isar5 = 0x00000000;
267
+ cpu->isar.id_isar6 = 0x00000000;
268
}
269
270
static void cortex_m4_initfn(Object *obj)
271
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
272
cpu->id_mmfr1 = 0x00000000;
273
cpu->id_mmfr2 = 0x00000000;
274
cpu->id_mmfr3 = 0x00000000;
275
- cpu->id_isar0 = 0x01141110;
276
- cpu->id_isar1 = 0x02111000;
277
- cpu->id_isar2 = 0x21112231;
278
- cpu->id_isar3 = 0x01111110;
279
- cpu->id_isar4 = 0x01310102;
280
- cpu->id_isar5 = 0x00000000;
281
- cpu->id_isar6 = 0x00000000;
282
+ cpu->isar.id_isar0 = 0x01141110;
283
+ cpu->isar.id_isar1 = 0x02111000;
284
+ cpu->isar.id_isar2 = 0x21112231;
285
+ cpu->isar.id_isar3 = 0x01111110;
286
+ cpu->isar.id_isar4 = 0x01310102;
287
+ cpu->isar.id_isar5 = 0x00000000;
288
+ cpu->isar.id_isar6 = 0x00000000;
289
}
290
291
static void cortex_m33_initfn(Object *obj)
292
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
293
cpu->id_mmfr1 = 0x00000000;
294
cpu->id_mmfr2 = 0x01000000;
295
cpu->id_mmfr3 = 0x00000000;
296
- cpu->id_isar0 = 0x01101110;
297
- cpu->id_isar1 = 0x02212000;
298
- cpu->id_isar2 = 0x20232232;
299
- cpu->id_isar3 = 0x01111131;
300
- cpu->id_isar4 = 0x01310132;
301
- cpu->id_isar5 = 0x00000000;
302
- cpu->id_isar6 = 0x00000000;
303
+ cpu->isar.id_isar0 = 0x01101110;
304
+ cpu->isar.id_isar1 = 0x02212000;
305
+ cpu->isar.id_isar2 = 0x20232232;
306
+ cpu->isar.id_isar3 = 0x01111131;
307
+ cpu->isar.id_isar4 = 0x01310132;
308
+ cpu->isar.id_isar5 = 0x00000000;
309
+ cpu->isar.id_isar6 = 0x00000000;
310
cpu->clidr = 0x00000000;
311
cpu->ctr = 0x8000c000;
312
}
313
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
314
cpu->id_mmfr1 = 0x00000000;
315
cpu->id_mmfr2 = 0x01200000;
316
cpu->id_mmfr3 = 0x0211;
317
- cpu->id_isar0 = 0x02101111;
318
- cpu->id_isar1 = 0x13112111;
319
- cpu->id_isar2 = 0x21232141;
320
- cpu->id_isar3 = 0x01112131;
321
- cpu->id_isar4 = 0x0010142;
322
- cpu->id_isar5 = 0x0;
323
- cpu->id_isar6 = 0x0;
324
+ cpu->isar.id_isar0 = 0x02101111;
325
+ cpu->isar.id_isar1 = 0x13112111;
326
+ cpu->isar.id_isar2 = 0x21232141;
327
+ cpu->isar.id_isar3 = 0x01112131;
328
+ cpu->isar.id_isar4 = 0x0010142;
329
+ cpu->isar.id_isar5 = 0x0;
330
+ cpu->isar.id_isar6 = 0x0;
331
cpu->mp_is_up = true;
332
cpu->pmsav7_dregion = 16;
333
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
334
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
335
set_feature(&cpu->env, ARM_FEATURE_EL3);
336
cpu->midr = 0x410fc080;
337
cpu->reset_fpsid = 0x410330c0;
338
- cpu->mvfr0 = 0x11110222;
339
- cpu->mvfr1 = 0x00011111;
340
+ cpu->isar.mvfr0 = 0x11110222;
341
+ cpu->isar.mvfr1 = 0x00011111;
342
cpu->ctr = 0x82048004;
343
cpu->reset_sctlr = 0x00c50078;
344
cpu->id_pfr0 = 0x1031;
345
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
346
cpu->id_mmfr1 = 0x20000000;
347
cpu->id_mmfr2 = 0x01202000;
348
cpu->id_mmfr3 = 0x11;
349
- cpu->id_isar0 = 0x00101111;
350
- cpu->id_isar1 = 0x12112111;
351
- cpu->id_isar2 = 0x21232031;
352
- cpu->id_isar3 = 0x11112131;
353
- cpu->id_isar4 = 0x00111142;
354
+ cpu->isar.id_isar0 = 0x00101111;
355
+ cpu->isar.id_isar1 = 0x12112111;
356
+ cpu->isar.id_isar2 = 0x21232031;
357
+ cpu->isar.id_isar3 = 0x11112131;
358
+ cpu->isar.id_isar4 = 0x00111142;
359
cpu->dbgdidr = 0x15141000;
360
cpu->clidr = (1 << 27) | (2 << 24) | 3;
361
cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
362
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
363
set_feature(&cpu->env, ARM_FEATURE_CBAR);
364
cpu->midr = 0x410fc090;
365
cpu->reset_fpsid = 0x41033090;
366
- cpu->mvfr0 = 0x11110222;
367
- cpu->mvfr1 = 0x01111111;
368
+ cpu->isar.mvfr0 = 0x11110222;
369
+ cpu->isar.mvfr1 = 0x01111111;
370
cpu->ctr = 0x80038003;
371
cpu->reset_sctlr = 0x00c50078;
372
cpu->id_pfr0 = 0x1031;
373
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
374
cpu->id_mmfr1 = 0x20000000;
375
cpu->id_mmfr2 = 0x01230000;
376
cpu->id_mmfr3 = 0x00002111;
377
- cpu->id_isar0 = 0x00101111;
378
- cpu->id_isar1 = 0x13112111;
379
- cpu->id_isar2 = 0x21232041;
380
- cpu->id_isar3 = 0x11112131;
381
- cpu->id_isar4 = 0x00111142;
382
+ cpu->isar.id_isar0 = 0x00101111;
383
+ cpu->isar.id_isar1 = 0x13112111;
384
+ cpu->isar.id_isar2 = 0x21232041;
385
+ cpu->isar.id_isar3 = 0x11112131;
386
+ cpu->isar.id_isar4 = 0x00111142;
387
cpu->dbgdidr = 0x35141000;
388
cpu->clidr = (1 << 27) | (1 << 24) | 3;
389
cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
390
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
391
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
392
cpu->midr = 0x410fc075;
393
cpu->reset_fpsid = 0x41023075;
394
- cpu->mvfr0 = 0x10110222;
395
- cpu->mvfr1 = 0x11111111;
396
+ cpu->isar.mvfr0 = 0x10110222;
397
+ cpu->isar.mvfr1 = 0x11111111;
398
cpu->ctr = 0x84448003;
399
cpu->reset_sctlr = 0x00c50078;
400
cpu->id_pfr0 = 0x00001131;
401
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
402
/* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
403
* table 4-41 gives 0x02101110, which includes the arm div insns.
404
*/
405
- cpu->id_isar0 = 0x02101110;
406
- cpu->id_isar1 = 0x13112111;
407
- cpu->id_isar2 = 0x21232041;
408
- cpu->id_isar3 = 0x11112131;
409
- cpu->id_isar4 = 0x10011142;
410
+ cpu->isar.id_isar0 = 0x02101110;
411
+ cpu->isar.id_isar1 = 0x13112111;
412
+ cpu->isar.id_isar2 = 0x21232041;
413
+ cpu->isar.id_isar3 = 0x11112131;
414
+ cpu->isar.id_isar4 = 0x10011142;
415
cpu->dbgdidr = 0x3515f005;
416
cpu->clidr = 0x0a200023;
417
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
418
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
419
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
420
cpu->midr = 0x412fc0f1;
421
cpu->reset_fpsid = 0x410430f0;
422
- cpu->mvfr0 = 0x10110222;
423
- cpu->mvfr1 = 0x11111111;
424
+ cpu->isar.mvfr0 = 0x10110222;
425
+ cpu->isar.mvfr1 = 0x11111111;
426
cpu->ctr = 0x8444c004;
427
cpu->reset_sctlr = 0x00c50078;
428
cpu->id_pfr0 = 0x00001131;
429
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
430
cpu->id_mmfr1 = 0x20000000;
431
cpu->id_mmfr2 = 0x01240000;
432
cpu->id_mmfr3 = 0x02102211;
433
- cpu->id_isar0 = 0x02101110;
434
- cpu->id_isar1 = 0x13112111;
435
- cpu->id_isar2 = 0x21232041;
436
- cpu->id_isar3 = 0x11112131;
437
- cpu->id_isar4 = 0x10011142;
438
+ cpu->isar.id_isar0 = 0x02101110;
439
+ cpu->isar.id_isar1 = 0x13112111;
440
+ cpu->isar.id_isar2 = 0x21232041;
441
+ cpu->isar.id_isar3 = 0x11112131;
442
+ cpu->isar.id_isar4 = 0x10011142;
443
cpu->dbgdidr = 0x3515f021;
444
cpu->clidr = 0x0a200023;
445
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
446
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
447
index XXXXXXX..XXXXXXX 100644
448
--- a/target/arm/cpu64.c
449
+++ b/target/arm/cpu64.c
450
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
451
cpu->midr = 0x411fd070;
452
cpu->revidr = 0x00000000;
453
cpu->reset_fpsid = 0x41034070;
454
- cpu->mvfr0 = 0x10110222;
455
- cpu->mvfr1 = 0x12111111;
456
- cpu->mvfr2 = 0x00000043;
457
+ cpu->isar.mvfr0 = 0x10110222;
458
+ cpu->isar.mvfr1 = 0x12111111;
459
+ cpu->isar.mvfr2 = 0x00000043;
460
cpu->ctr = 0x8444c004;
461
cpu->reset_sctlr = 0x00c50838;
462
cpu->id_pfr0 = 0x00000131;
463
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
464
cpu->id_mmfr1 = 0x40000000;
465
cpu->id_mmfr2 = 0x01260000;
466
cpu->id_mmfr3 = 0x02102211;
467
- cpu->id_isar0 = 0x02101110;
468
- cpu->id_isar1 = 0x13112111;
469
- cpu->id_isar2 = 0x21232042;
470
- cpu->id_isar3 = 0x01112131;
471
- cpu->id_isar4 = 0x00011142;
472
- cpu->id_isar5 = 0x00011121;
473
- cpu->id_isar6 = 0;
474
- cpu->id_aa64pfr0 = 0x00002222;
475
+ cpu->isar.id_isar0 = 0x02101110;
476
+ cpu->isar.id_isar1 = 0x13112111;
477
+ cpu->isar.id_isar2 = 0x21232042;
478
+ cpu->isar.id_isar3 = 0x01112131;
479
+ cpu->isar.id_isar4 = 0x00011142;
480
+ cpu->isar.id_isar5 = 0x00011121;
481
+ cpu->isar.id_isar6 = 0;
482
+ cpu->isar.id_aa64pfr0 = 0x00002222;
483
cpu->id_aa64dfr0 = 0x10305106;
484
cpu->pmceid0 = 0x00000000;
485
cpu->pmceid1 = 0x00000000;
486
- cpu->id_aa64isar0 = 0x00011120;
487
+ cpu->isar.id_aa64isar0 = 0x00011120;
488
cpu->id_aa64mmfr0 = 0x00001124;
489
cpu->dbgdidr = 0x3516d000;
490
cpu->clidr = 0x0a200023;
491
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
492
cpu->midr = 0x410fd034;
493
cpu->revidr = 0x00000000;
494
cpu->reset_fpsid = 0x41034070;
495
- cpu->mvfr0 = 0x10110222;
496
- cpu->mvfr1 = 0x12111111;
497
- cpu->mvfr2 = 0x00000043;
498
+ cpu->isar.mvfr0 = 0x10110222;
499
+ cpu->isar.mvfr1 = 0x12111111;
500
+ cpu->isar.mvfr2 = 0x00000043;
501
cpu->ctr = 0x84448004; /* L1Ip = VIPT */
502
cpu->reset_sctlr = 0x00c50838;
503
cpu->id_pfr0 = 0x00000131;
504
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
505
cpu->id_mmfr1 = 0x40000000;
506
cpu->id_mmfr2 = 0x01260000;
507
cpu->id_mmfr3 = 0x02102211;
508
- cpu->id_isar0 = 0x02101110;
509
- cpu->id_isar1 = 0x13112111;
510
- cpu->id_isar2 = 0x21232042;
511
- cpu->id_isar3 = 0x01112131;
512
- cpu->id_isar4 = 0x00011142;
513
- cpu->id_isar5 = 0x00011121;
514
- cpu->id_isar6 = 0;
515
- cpu->id_aa64pfr0 = 0x00002222;
516
+ cpu->isar.id_isar0 = 0x02101110;
517
+ cpu->isar.id_isar1 = 0x13112111;
518
+ cpu->isar.id_isar2 = 0x21232042;
519
+ cpu->isar.id_isar3 = 0x01112131;
520
+ cpu->isar.id_isar4 = 0x00011142;
521
+ cpu->isar.id_isar5 = 0x00011121;
522
+ cpu->isar.id_isar6 = 0;
523
+ cpu->isar.id_aa64pfr0 = 0x00002222;
524
cpu->id_aa64dfr0 = 0x10305106;
525
- cpu->id_aa64isar0 = 0x00011120;
526
+ cpu->isar.id_aa64isar0 = 0x00011120;
527
cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
528
cpu->dbgdidr = 0x3516d000;
529
cpu->clidr = 0x0a200023;
530
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
531
cpu->midr = 0x410fd083;
532
cpu->revidr = 0x00000000;
533
cpu->reset_fpsid = 0x41034080;
534
- cpu->mvfr0 = 0x10110222;
535
- cpu->mvfr1 = 0x12111111;
536
- cpu->mvfr2 = 0x00000043;
537
+ cpu->isar.mvfr0 = 0x10110222;
538
+ cpu->isar.mvfr1 = 0x12111111;
539
+ cpu->isar.mvfr2 = 0x00000043;
540
cpu->ctr = 0x8444c004;
541
cpu->reset_sctlr = 0x00c50838;
542
cpu->id_pfr0 = 0x00000131;
543
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
544
cpu->id_mmfr1 = 0x40000000;
545
cpu->id_mmfr2 = 0x01260000;
546
cpu->id_mmfr3 = 0x02102211;
547
- cpu->id_isar0 = 0x02101110;
548
- cpu->id_isar1 = 0x13112111;
549
- cpu->id_isar2 = 0x21232042;
550
- cpu->id_isar3 = 0x01112131;
551
- cpu->id_isar4 = 0x00011142;
552
- cpu->id_isar5 = 0x00011121;
553
- cpu->id_aa64pfr0 = 0x00002222;
554
+ cpu->isar.id_isar0 = 0x02101110;
555
+ cpu->isar.id_isar1 = 0x13112111;
556
+ cpu->isar.id_isar2 = 0x21232042;
557
+ cpu->isar.id_isar3 = 0x01112131;
558
+ cpu->isar.id_isar4 = 0x00011142;
559
+ cpu->isar.id_isar5 = 0x00011121;
560
+ cpu->isar.id_aa64pfr0 = 0x00002222;
561
cpu->id_aa64dfr0 = 0x10305106;
562
cpu->pmceid0 = 0x00000000;
563
cpu->pmceid1 = 0x00000000;
564
- cpu->id_aa64isar0 = 0x00011120;
565
+ cpu->isar.id_aa64isar0 = 0x00011120;
566
cpu->id_aa64mmfr0 = 0x00001124;
567
cpu->dbgdidr = 0x3516d000;
568
cpu->clidr = 0x0a200023;
569
diff --git a/target/arm/helper.c b/target/arm/helper.c
570
index XXXXXXX..XXXXXXX 100644
571
--- a/target/arm/helper.c
572
+++ b/target/arm/helper.c
573
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
574
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
575
{
576
ARMCPU *cpu = arm_env_get_cpu(env);
577
- uint64_t pfr0 = cpu->id_aa64pfr0;
578
+ uint64_t pfr0 = cpu->isar.id_aa64pfr0;
579
580
if (env->gicv3state) {
581
pfr0 |= 1 << 24;
582
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
583
{ .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
584
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
585
.access = PL1_R, .type = ARM_CP_CONST,
586
- .resetvalue = cpu->id_isar0 },
587
+ .resetvalue = cpu->isar.id_isar0 },
588
{ .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
589
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
590
.access = PL1_R, .type = ARM_CP_CONST,
591
- .resetvalue = cpu->id_isar1 },
592
+ .resetvalue = cpu->isar.id_isar1 },
593
{ .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
594
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
595
.access = PL1_R, .type = ARM_CP_CONST,
596
- .resetvalue = cpu->id_isar2 },
597
+ .resetvalue = cpu->isar.id_isar2 },
598
{ .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
599
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
600
.access = PL1_R, .type = ARM_CP_CONST,
601
- .resetvalue = cpu->id_isar3 },
602
+ .resetvalue = cpu->isar.id_isar3 },
603
{ .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
604
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
605
.access = PL1_R, .type = ARM_CP_CONST,
606
- .resetvalue = cpu->id_isar4 },
607
+ .resetvalue = cpu->isar.id_isar4 },
608
{ .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
609
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
610
.access = PL1_R, .type = ARM_CP_CONST,
611
- .resetvalue = cpu->id_isar5 },
612
+ .resetvalue = cpu->isar.id_isar5 },
613
{ .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
614
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
615
.access = PL1_R, .type = ARM_CP_CONST,
616
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
617
{ .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
618
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
619
.access = PL1_R, .type = ARM_CP_CONST,
620
- .resetvalue = cpu->id_isar6 },
621
+ .resetvalue = cpu->isar.id_isar6 },
622
REGINFO_SENTINEL
623
};
624
define_arm_cp_regs(cpu, v6_idregs);
625
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
626
{ .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
627
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
628
.access = PL1_R, .type = ARM_CP_CONST,
629
- .resetvalue = cpu->id_aa64pfr1},
630
+ .resetvalue = cpu->isar.id_aa64pfr1},
631
{ .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
632
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
633
.access = PL1_R, .type = ARM_CP_CONST,
634
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
635
{ .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
636
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
637
.access = PL1_R, .type = ARM_CP_CONST,
638
- .resetvalue = cpu->id_aa64isar0 },
639
+ .resetvalue = cpu->isar.id_aa64isar0 },
640
{ .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
641
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
642
.access = PL1_R, .type = ARM_CP_CONST,
643
- .resetvalue = cpu->id_aa64isar1 },
644
+ .resetvalue = cpu->isar.id_aa64isar1 },
645
{ .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
646
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
647
.access = PL1_R, .type = ARM_CP_CONST,
648
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
649
{ .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
650
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
651
.access = PL1_R, .type = ARM_CP_CONST,
652
- .resetvalue = cpu->mvfr0 },
653
+ .resetvalue = cpu->isar.mvfr0 },
654
{ .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
655
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
656
.access = PL1_R, .type = ARM_CP_CONST,
657
- .resetvalue = cpu->mvfr1 },
658
+ .resetvalue = cpu->isar.mvfr1 },
659
{ .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
660
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
661
.access = PL1_R, .type = ARM_CP_CONST,
662
- .resetvalue = cpu->mvfr2 },
663
+ .resetvalue = cpu->isar.mvfr2 },
664
{ .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
665
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
666
.access = PL1_R, .type = ARM_CP_CONST,
160
--
667
--
161
2.7.4
668
2.19.1
162
669
163
670
From: Richard Henderson <richard.henderson@linaro.org>

Instantiating mps2-an505 (cortex-m33) will fail make check when
V7VE asserts that ID_ISAR0.Divide includes ARM division.  It is
also wrong to include ARM_FEATURE_LPAE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
 
     /* Some features automatically imply others: */
     if (arm_feature(env, ARM_FEATURE_V8)) {
-        set_feature(env, ARM_FEATURE_V7VE);
+        if (arm_feature(env, ARM_FEATURE_M)) {
+            set_feature(env, ARM_FEATURE_V7);
+        } else {
+            set_feature(env, ARM_FEATURE_V7VE);
+        }
     }
     if (arm_feature(env, ARM_FEATURE_V7VE)) {
         /* v7 Virtualization Extensions. In real hardware this implies
--
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 131 +++++++++++++++++++++++++++++++++----
 target/arm/translate.h     |   7 ++
 linux-user/elfload.c       |  46 ++++++++-----
 target/arm/cpu.c           |  27 +++++---
 target/arm/cpu64.c         |  57 +++++++++-------
 target/arm/translate-a64.c | 101 ++++++++++++++--------------
 target/arm/translate.c     |  36 +++++-----
 7 files changed, 273 insertions(+), 132 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
     PSCI_ON_PENDING = 2
 } ARMPSCIState;
 
+typedef struct ARMISARegisters ARMISARegisters;
+
 /**
  * ARMCPU:
  * @env: #CPUARMState
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_LPAE, /* has Large Physical Address Extension */
     ARM_FEATURE_V8,
     ARM_FEATURE_AARCH64, /* supports 64 bit mode */
-    ARM_FEATURE_V8_AES, /* implements AES part of v8 Crypto Extensions */
     ARM_FEATURE_CBAR, /* has cp15 CBAR */
     ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
     ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
     ARM_FEATURE_EL2, /* has EL2 Virtualization support */
     ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
-    ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
     ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
     ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
     ARM_FEATURE_SVE, /* has Scalable Vector Extension */
-    ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
-    ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
-    ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
-    ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };
 
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c. */
 extern const uint64_t pred_esz_masks[4];
 
+/*
+ * 32-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
+}
+
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
+}
+
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
+}
+
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
+}
+
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
+}
+
+/*
+ * 64-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
+}
+
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
+}
+
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
+}
+
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
+}
+
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
+}
+
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
+}
+
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
+}
+
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
+}
+
+/*
+ * Forward to the above feature tests given an ARMCPU pointer.
+ */
+#define cpu_isar_feature(name, cpu) \
+    ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
+
 #endif
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@
 /* internal defines */
 typedef struct DisasContext {
     DisasContextBase base;
+    const ARMISARegisters *isar;
 
     target_ulong pc;
     target_ulong page_start;
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
     return ret;
 }
 
+/*
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
+ */
+#define dc_isar_feature(name, ctx) \
+    ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
+
 #endif /* TARGET_ARM_TRANSLATE_H */
211
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
212
index XXXXXXX..XXXXXXX 100644
213
--- a/linux-user/elfload.c
214
+++ b/linux-user/elfload.c
215
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
216
/* probe for the extra features */
217
#define GET_FEATURE(feat, hwcap) \
218
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
219
+
220
+#define GET_FEATURE_ID(feat, hwcap) \
221
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
222
+
223
/* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
224
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
225
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
226
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
227
ARMCPU *cpu = ARM_CPU(thread_cpu);
228
uint32_t hwcaps = 0;
229
230
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
231
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
232
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
233
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
234
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
235
+ GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
236
+ GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
237
+ GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
238
+ GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
239
+ GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
240
return hwcaps;
241
}
242
243
#undef GET_FEATURE
244
+#undef GET_FEATURE_ID
245
246
#else
247
/* 64 bit ARM definitions */
248
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
249
/* probe for the extra features */
250
#define GET_FEATURE(feat, hwcap) \
251
do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
252
- GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
253
- GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
254
- GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
255
- GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
256
- GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
257
- GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
258
- GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
259
- GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
260
- GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
261
+#define GET_FEATURE_ID(feat, hwcap) \
262
+ do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
263
+
264
+ GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
265
+ GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
266
+ GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
267
+ GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
268
+ GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
269
+ GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
270
+ GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
271
+ GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
272
+ GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
273
GET_FEATURE(ARM_FEATURE_V8_FP16,
274
ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
275
- GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
276
- GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
277
- GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
278
- GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
279
+ GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
280
+ GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
281
+ GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
282
+ GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
283
GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
284
+
285
#undef GET_FEATURE
286
+#undef GET_FEATURE_ID
287
288
return hwcaps;
289
}
290
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
291
index XXXXXXX..XXXXXXX 100644
292
--- a/target/arm/cpu.c
293
+++ b/target/arm/cpu.c
294
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
295
cortex_a15_initfn(obj);
296
#ifdef CONFIG_USER_ONLY
297
/* We don't set these in system emulation mode for the moment,
298
- * since we don't correctly set the ID registers to advertise them,
299
+ * since we don't correctly set (all of) the ID registers to
300
+ * advertise them.
301
*/
302
set_feature(&cpu->env, ARM_FEATURE_V8);
303
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
304
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
305
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
306
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
307
- set_feature(&cpu->env, ARM_FEATURE_CRC);
308
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
309
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
310
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
311
+ {
312
+ uint32_t t;
313
+
314
+ t = cpu->isar.id_isar5;
315
+ t = FIELD_DP32(t, ID_ISAR5, AES, 2);
316
+ t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
317
+ t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
318
+ t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
319
+ t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
320
+ t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
321
+ cpu->isar.id_isar5 = t;
322
+
323
+ t = cpu->isar.id_isar6;
324
+ t = FIELD_DP32(t, ID_ISAR6, DP, 1);
325
+ cpu->isar.id_isar6 = t;
326
+ }
327
#endif
328
}
329
}
330
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
331
index XXXXXXX..XXXXXXX 100644
332
--- a/target/arm/cpu64.c
333
+++ b/target/arm/cpu64.c
334
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
335
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
336
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
337
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
338
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
339
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
340
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
341
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
342
- set_feature(&cpu->env, ARM_FEATURE_CRC);
343
set_feature(&cpu->env, ARM_FEATURE_EL2);
344
set_feature(&cpu->env, ARM_FEATURE_EL3);
345
set_feature(&cpu->env, ARM_FEATURE_PMU);
346
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
347
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
348
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
349
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
350
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
351
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
352
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
353
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
354
- set_feature(&cpu->env, ARM_FEATURE_CRC);
355
set_feature(&cpu->env, ARM_FEATURE_EL2);
356
set_feature(&cpu->env, ARM_FEATURE_EL3);
357
set_feature(&cpu->env, ARM_FEATURE_PMU);
358
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
359
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
360
set_feature(&cpu->env, ARM_FEATURE_AARCH64);
361
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
362
- set_feature(&cpu->env, ARM_FEATURE_V8_AES);
363
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
364
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
365
- set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
366
- set_feature(&cpu->env, ARM_FEATURE_CRC);
367
set_feature(&cpu->env, ARM_FEATURE_EL2);
368
set_feature(&cpu->env, ARM_FEATURE_EL3);
369
set_feature(&cpu->env, ARM_FEATURE_PMU);
370
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
371
if (kvm_enabled()) {
372
kvm_arm_set_cpu_features_from_host(cpu);
373
} else {
374
+ uint64_t t;
375
+ uint32_t u;
376
aarch64_a57_initfn(obj);
377
+
378
+ t = cpu->isar.id_aa64isar0;
379
+ t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
380
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
381
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
382
+ t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
383
+ t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
384
+ t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
385
+ t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
386
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
387
+ t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
388
+ t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
389
+ cpu->isar.id_aa64isar0 = t;
390
+
391
+ t = cpu->isar.id_aa64isar1;
392
+ t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
393
+ cpu->isar.id_aa64isar1 = t;
394
+
395
+ /* Replicate the same data to the 32-bit id registers. */
396
+ u = cpu->isar.id_isar5;
397
+ u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
398
+ u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
399
+ u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
400
+ u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
401
+ u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
402
+ u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
403
+ cpu->isar.id_isar5 = u;
404
+
405
+ u = cpu->isar.id_isar6;
406
+ u = FIELD_DP32(u, ID_ISAR6, DP, 1);
407
+ cpu->isar.id_isar6 = u;
408
+
409
#ifdef CONFIG_USER_ONLY
410
/* We don't set these in system emulation mode for the moment,
411
* since we don't correctly set the ID registers to advertise them,
412
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
413
* whereas the architecture requires them to be present in both if
414
* present in either.
415
*/
416
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
417
- set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
418
- set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
419
- set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
420
- set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
421
- set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
422
- set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
423
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
424
- set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
425
set_feature(&cpu->env, ARM_FEATURE_SVE);
426
/* For usermode -cpu max we can use a larger and more efficient DCZ
427
* blocksize since we don't have to follow what the hardware does.
428
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
429
index XXXXXXX..XXXXXXX 100644
430
--- a/target/arm/translate-a64.c
431
+++ b/target/arm/translate-a64.c
432
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
433
}
434
if (rt2 == 31
435
&& ((rt | rs) & 1) == 0
436
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
437
+ && dc_isar_feature(aa64_atomics, s)) {
438
/* CASP / CASPL */
439
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
440
return;
441
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
442
}
443
if (rt2 == 31
444
&& ((rt | rs) & 1) == 0
445
- && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
446
+ && dc_isar_feature(aa64_atomics, s)) {
447
/* CASPA / CASPAL */
448
gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
449
return;
450
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
451
case 0xb: /* CASL */
452
case 0xe: /* CASA */
453
case 0xf: /* CASAL */
454
- if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
455
+ if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
456
gen_compare_and_swap(s, rs, rt, rn, size);
457
return;
458
}
459
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
460
int rs = extract32(insn, 16, 5);
461
int rn = extract32(insn, 5, 5);
462
int o3_opc = extract32(insn, 12, 4);
463
- int feature = ARM_FEATURE_V8_ATOMICS;
464
TCGv_i64 tcg_rn, tcg_rs;
465
AtomicThreeOpFn *fn;
466
467
- if (is_vector) {
468
+ if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
469
unallocated_encoding(s);
470
return;
471
}
472
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
473
unallocated_encoding(s);
474
return;
475
}
476
- if (!arm_dc_feature(s, feature)) {
477
- unallocated_encoding(s);
478
- return;
479
- }
480
481
if (rn == 31) {
482
gen_check_sp_alignment(s);
483
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
484
TCGv_i64 tcg_acc, tcg_val;
485
TCGv_i32 tcg_bytes;
486
487
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)
488
+ if (!dc_isar_feature(aa64_crc32, s)
489
|| (sf == 1 && sz != 3)
490
|| (sf == 0 && sz == 3)) {
491
unallocated_encoding(s);
492
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
493
bool u = extract32(insn, 29, 1);
494
TCGv_i32 ele1, ele2, ele3;
495
TCGv_i64 res;
496
- int feature;
497
+ bool feature;
498
499
switch (u * 16 + opcode) {
500
case 0x10: /* SQRDMLAH (vector) */
501
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
502
unallocated_encoding(s);
503
return;
504
}
505
- feature = ARM_FEATURE_V8_RDM;
506
+ feature = dc_isar_feature(aa64_rdm, s);
507
break;
508
default:
509
unallocated_encoding(s);
510
return;
511
}
512
- if (!arm_dc_feature(s, feature)) {
513
+ if (!feature) {
514
unallocated_encoding(s);
515
return;
516
}
517
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
518
return;
519
}
520
if (size == 3) {
521
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
522
+ if (!dc_isar_feature(aa64_pmull, s)) {
523
unallocated_encoding(s);
524
return;
525
}
526
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
527
int size = extract32(insn, 22, 2);
528
bool u = extract32(insn, 29, 1);
529
bool is_q = extract32(insn, 30, 1);
530
- int feature, rot;
531
+ bool feature;
532
+ int rot;
533
534
switch (u * 16 + opcode) {
535
case 0x10: /* SQRDMLAH (vector) */
536
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
537
unallocated_encoding(s);
538
return;
539
}
540
- feature = ARM_FEATURE_V8_RDM;
541
+ feature = dc_isar_feature(aa64_rdm, s);
542
break;
543
case 0x02: /* SDOT (vector) */
544
case 0x12: /* UDOT (vector) */
545
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
546
unallocated_encoding(s);
547
return;
548
}
549
- feature = ARM_FEATURE_V8_DOTPROD;
550
+ feature = dc_isar_feature(aa64_dp, s);
551
break;
552
case 0x18: /* FCMLA, #0 */
553
case 0x19: /* FCMLA, #90 */
554
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
555
unallocated_encoding(s);
556
return;
557
}
558
- feature = ARM_FEATURE_V8_FCMA;
559
+ feature = dc_isar_feature(aa64_fcma, s);
560
break;
561
default:
562
unallocated_encoding(s);
563
return;
564
}
565
- if (!arm_dc_feature(s, feature)) {
566
+ if (!feature) {
567
unallocated_encoding(s);
568
return;
569
}
570
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
571
break;
572
case 0x1d: /* SQRDMLAH */
573
case 0x1f: /* SQRDMLSH */
574
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
575
+ if (!dc_isar_feature(aa64_rdm, s)) {
576
unallocated_encoding(s);
577
return;
578
}
579
break;
580
case 0x0e: /* SDOT */
581
case 0x1e: /* UDOT */
582
- if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
583
+ if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
584
unallocated_encoding(s);
585
return;
586
}
587
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
588
case 0x13: /* FCMLA #90 */
589
case 0x15: /* FCMLA #180 */
590
case 0x17: /* FCMLA #270 */
591
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
592
+ if (!dc_isar_feature(aa64_fcma, s)) {
593
unallocated_encoding(s);
594
return;
595
}
596
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
597
TCGv_i32 tcg_decrypt;
598
CryptoThreeOpIntFn *genfn;
599
600
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
601
- || size != 0) {
602
+ if (!dc_isar_feature(aa64_aes, s) || size != 0) {
603
unallocated_encoding(s);
604
return;
605
}
606
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
607
int rd = extract32(insn, 0, 5);
608
CryptoThreeOpFn *genfn;
609
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
610
- int feature = ARM_FEATURE_V8_SHA256;
611
+ bool feature;
612
613
if (size != 0) {
614
unallocated_encoding(s);
615
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
616
case 2: /* SHA1M */
617
case 3: /* SHA1SU0 */
618
genfn = NULL;
619
- feature = ARM_FEATURE_V8_SHA1;
620
+ feature = dc_isar_feature(aa64_sha1, s);
621
break;
622
case 4: /* SHA256H */
623
genfn = gen_helper_crypto_sha256h;
624
+ feature = dc_isar_feature(aa64_sha256, s);
625
break;
626
case 5: /* SHA256H2 */
627
genfn = gen_helper_crypto_sha256h2;
628
+ feature = dc_isar_feature(aa64_sha256, s);
629
break;
630
case 6: /* SHA256SU1 */
631
genfn = gen_helper_crypto_sha256su1;
632
+ feature = dc_isar_feature(aa64_sha256, s);
633
break;
634
default:
635
unallocated_encoding(s);
636
return;
637
}
638
639
- if (!arm_dc_feature(s, feature)) {
640
+ if (!feature) {
641
unallocated_encoding(s);
642
return;
643
}
644
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
645
int rn = extract32(insn, 5, 5);
646
int rd = extract32(insn, 0, 5);
647
CryptoTwoOpFn *genfn;
648
- int feature;
649
+ bool feature;
650
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
651
652
if (size != 0) {
653
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
654
655
switch (opcode) {
656
case 0: /* SHA1H */
657
- feature = ARM_FEATURE_V8_SHA1;
658
+ feature = dc_isar_feature(aa64_sha1, s);
659
genfn = gen_helper_crypto_sha1h;
660
break;
661
case 1: /* SHA1SU1 */
662
- feature = ARM_FEATURE_V8_SHA1;
663
+ feature = dc_isar_feature(aa64_sha1, s);
664
genfn = gen_helper_crypto_sha1su1;
665
break;
666
case 2: /* SHA256SU0 */
667
- feature = ARM_FEATURE_V8_SHA256;
668
+ feature = dc_isar_feature(aa64_sha256, s);
669
genfn = gen_helper_crypto_sha256su0;
670
break;
671
default:
672
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
673
return;
674
}
675
676
- if (!arm_dc_feature(s, feature)) {
677
+ if (!feature) {
678
unallocated_encoding(s);
679
return;
680
}
681
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
682
int rm = extract32(insn, 16, 5);
683
int rn = extract32(insn, 5, 5);
684
int rd = extract32(insn, 0, 5);
685
- int feature;
686
+ bool feature;
687
CryptoThreeOpFn *genfn;
688
689
if (o == 0) {
690
switch (opcode) {
691
case 0: /* SHA512H */
692
- feature = ARM_FEATURE_V8_SHA512;
693
+ feature = dc_isar_feature(aa64_sha512, s);
694
genfn = gen_helper_crypto_sha512h;
695
break;
696
case 1: /* SHA512H2 */
697
- feature = ARM_FEATURE_V8_SHA512;
698
+ feature = dc_isar_feature(aa64_sha512, s);
699
genfn = gen_helper_crypto_sha512h2;
700
break;
701
case 2: /* SHA512SU1 */
702
- feature = ARM_FEATURE_V8_SHA512;
703
+ feature = dc_isar_feature(aa64_sha512, s);
704
genfn = gen_helper_crypto_sha512su1;
705
break;
706
case 3: /* RAX1 */
707
- feature = ARM_FEATURE_V8_SHA3;
708
+ feature = dc_isar_feature(aa64_sha3, s);
709
genfn = NULL;
710
break;
711
}
712
} else {
713
switch (opcode) {
714
case 0: /* SM3PARTW1 */
715
- feature = ARM_FEATURE_V8_SM3;
716
+ feature = dc_isar_feature(aa64_sm3, s);
717
genfn = gen_helper_crypto_sm3partw1;
718
break;
719
case 1: /* SM3PARTW2 */
720
- feature = ARM_FEATURE_V8_SM3;
721
+ feature = dc_isar_feature(aa64_sm3, s);
722
genfn = gen_helper_crypto_sm3partw2;
723
break;
724
case 2: /* SM4EKEY */
725
- feature = ARM_FEATURE_V8_SM4;
726
+ feature = dc_isar_feature(aa64_sm4, s);
727
genfn = gen_helper_crypto_sm4ekey;
728
break;
729
default:
730
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
731
}
732
}
733
734
- if (!arm_dc_feature(s, feature)) {
735
+ if (!feature) {
736
unallocated_encoding(s);
737
return;
738
}
739
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
740
int rn = extract32(insn, 5, 5);
741
int rd = extract32(insn, 0, 5);
742
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
743
- int feature;
744
+ bool feature;
745
CryptoTwoOpFn *genfn;
746
747
switch (opcode) {
748
case 0: /* SHA512SU0 */
749
- feature = ARM_FEATURE_V8_SHA512;
750
+ feature = dc_isar_feature(aa64_sha512, s);
751
genfn = gen_helper_crypto_sha512su0;
752
break;
753
case 1: /* SM4E */
754
- feature = ARM_FEATURE_V8_SM4;
755
+ feature = dc_isar_feature(aa64_sm4, s);
756
genfn = gen_helper_crypto_sm4e;
757
break;
758
default:
759
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
760
return;
761
}
762
763
- if (!arm_dc_feature(s, feature)) {
764
+ if (!feature) {
765
unallocated_encoding(s);
766
return;
767
}
768
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
769
int ra = extract32(insn, 10, 5);
770
int rn = extract32(insn, 5, 5);
771
int rd = extract32(insn, 0, 5);
772
- int feature;
773
+ bool feature;
774
775
switch (op0) {
776
case 0: /* EOR3 */
777
case 1: /* BCAX */
778
- feature = ARM_FEATURE_V8_SHA3;
779
+ feature = dc_isar_feature(aa64_sha3, s);
780
break;
781
case 2: /* SM3SS1 */
782
- feature = ARM_FEATURE_V8_SM3;
783
+ feature = dc_isar_feature(aa64_sm3, s);
784
break;
785
default:
786
unallocated_encoding(s);
787
return;
788
}
789
790
- if (!arm_dc_feature(s, feature)) {
791
+ if (!feature) {
792
unallocated_encoding(s);
793
return;
794
}
795
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
796
TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
797
int pass;
798
799
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
800
+ if (!dc_isar_feature(aa64_sha3, s)) {
801
unallocated_encoding(s);
802
return;
803
}
804
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
805
TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
806
TCGv_i32 tcg_imm2, tcg_opcode;
807
808
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
809
+ if (!dc_isar_feature(aa64_sm3, s)) {
810
unallocated_encoding(s);
811
return;
812
}
813
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
814
ARMCPU *arm_cpu = arm_env_get_cpu(env);
815
int bound;
816
817
+ dc->isar = &arm_cpu->isar;
818
dc->pc = dc->base.pc_first;
819
dc->condjmp = 0;
820
821
diff --git a/target/arm/translate.c b/target/arm/translate.c
822
index XXXXXXX..XXXXXXX 100644
823
--- a/target/arm/translate.c
824
+++ b/target/arm/translate.c
825
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
826
static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
827
int q, int rd, int rn, int rm)
828
{
829
- if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
830
+ if (dc_isar_feature(aa32_rdm, s)) {
831
int opr_sz = (1 + q) * 8;
832
tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
833
vfp_reg_offset(1, rn),
834
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
835
return 1;
836
}
837
if (!u) { /* SHA-1 */
838
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
839
+ if (!dc_isar_feature(aa32_sha1, s)) {
840
return 1;
841
}
842
ptr1 = vfp_reg_ptr(true, rd);
843
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
844
gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
845
tcg_temp_free_i32(tmp4);
846
} else { /* SHA-256 */
847
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
848
+ if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
849
return 1;
850
}
851
ptr1 = vfp_reg_ptr(true, rd);
852
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
853
if (op == 14 && size == 2) {
854
TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
855
856
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
857
+ if (!dc_isar_feature(aa32_pmull, s)) {
858
return 1;
859
}
860
tcg_rn = tcg_temp_new_i64();
861
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
862
{
863
NeonGenThreeOpEnvFn *fn;
864
865
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
866
+ if (!dc_isar_feature(aa32_rdm, s)) {
867
return 1;
868
}
869
if (u && ((rd | rn) & 1)) {
870
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
871
break;
872
}
873
case NEON_2RM_AESE: case NEON_2RM_AESMC:
874
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
875
- || ((rm | rd) & 1)) {
876
+ if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
877
return 1;
878
}
879
ptr1 = vfp_reg_ptr(true, rd);
880
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
881
tcg_temp_free_i32(tmp3);
882
break;
883
case NEON_2RM_SHA1H:
884
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
885
- || ((rm | rd) & 1)) {
886
+ if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
887
return 1;
888
}
889
ptr1 = vfp_reg_ptr(true, rd);
890
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
891
}
892
/* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
893
if (q) {
894
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
895
+ if (!dc_isar_feature(aa32_sha2, s)) {
896
return 1;
897
}
898
- } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
899
+ } else if (!dc_isar_feature(aa32_sha1, s)) {
900
return 1;
901
}
902
ptr1 = vfp_reg_ptr(true, rd);
903
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
904
/* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
905
int size = extract32(insn, 20, 1);
906
data = extract32(insn, 23, 2); /* rot */
907
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
908
+ if (!dc_isar_feature(aa32_vcma, s)
909
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
910
return 1;
911
}
912
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
913
/* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
914
int size = extract32(insn, 20, 1);
915
data = extract32(insn, 24, 1); /* rot */
916
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
917
+ if (!dc_isar_feature(aa32_vcma, s)
918
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
919
return 1;
920
}
921
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
922
} else if ((insn & 0xfeb00f00) == 0xfc200d00) {
923
/* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
924
bool u = extract32(insn, 4, 1);
925
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
926
+ if (!dc_isar_feature(aa32_dp, s)) {
927
return 1;
928
}
929
fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
930
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
931
int size = extract32(insn, 23, 1);
932
int index;
933
934
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
935
+ if (!dc_isar_feature(aa32_vcma, s)) {
936
return 1;
937
}
938
if (size == 0) {
939
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
940
} else if ((insn & 0xffb00f00) == 0xfe200d00) {
941
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
942
int u = extract32(insn, 4, 1);
943
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
944
+ if (!dc_isar_feature(aa32_dp, s)) {
945
return 1;
946
}
947
fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
948
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
949
* op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
950
* Bits 8, 10 and 11 should be zero.
951
*/
952
- if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
953
- (c & 0xd) != 0) {
954
+ if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
955
goto illegal_op;
956
}
957
958
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
959
case 0x28:
960
case 0x29:
961
case 0x2a:
962
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
963
+ if (!dc_isar_feature(aa32_crc32, s)) {
964
goto illegal_op;
965
}
966
break;
967
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
968
CPUARMState *env = cs->env_ptr;
969
ARMCPU *cpu = arm_env_get_cpu(env);
970
971
+ dc->isar = &cpu->isar;
972
dc->pc = dc->base.pc_first;
973
dc->condjmp = 0;
974
975
--
976
2.19.1
977
978
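To see the shape of the conversion at a glance: each former ARM_FEATURE_V8_* flag becomes a small predicate that reads a field out of the new ARMISARegisters copy of the ID registers, and cpu_isar_feature()/dc_isar_feature() simply forward to those predicates. The standalone sketch below mirrors the pattern outside QEMU: the register struct is cut down to one register, FIELD_EX32() is replaced by a local extract32(), and the ID_ISAR5.AES position (bits [7:4], with value 2 additionally implying PMULL) is taken from the Arm ARM rather than from this patch.

/* Standalone sketch of the ID-register feature-test pattern -- NOT QEMU code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t id_isar5;
} ARMISARegistersSketch;

/* Local stand-in for QEMU's extract32()/FIELD_EX32(). */
static inline uint32_t extract32(uint32_t value, int start, int length)
{
    return (value >> start) & ((1u << length) - 1);
}

static inline bool sketch_feature_aa32_aes(const ARMISARegistersSketch *id)
{
    return extract32(id->id_isar5, 4, 4) != 0;   /* ID_ISAR5.AES != 0 */
}

static inline bool sketch_feature_aa32_pmull(const ARMISARegistersSketch *id)
{
    return extract32(id->id_isar5, 4, 4) > 1;    /* AES == 2 => PMULL too */
}

/* Forwarding macro in the spirit of cpu_isar_feature()/dc_isar_feature(). */
#define SKETCH_ISAR_FEATURE(name, idp) sketch_feature_##name(idp)

int main(void)
{
    ARMISARegistersSketch id = { .id_isar5 = 2u << 4 };   /* AES field = 2 */

    printf("aes=%d pmull=%d\n",
           SKETCH_ISAR_FEATURE(aa32_aes, &id),
           SKETCH_ISAR_FEATURE(aa32_pmull, &id));
    return 0;
}

In the real code the forwarding macros only differ in where the registers live: cpu_isar_feature() goes through cpu->isar on an ARMCPU, while dc_isar_feature() goes through the dc->isar pointer that this patch adds to DisasContext.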
1
From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add gicv3state void pointer to CPUARMState struct
3
Both arm and thumb2 division are controlled by the same ISAR field,
4
to store GICv3CPUState.
4
which takes care of the arm implies thumb case. Having M imply
5
thumb2 division was wrong for cortex-m0, which is v6m and does not
6
have thumb2 at all, much less thumb2 division.
5
7
6
In use cases such as CPU reset, we need to reset the
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
GICv3CPUState of the CPU. In such a scenario, this pointer
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
comes in handy.
10
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
9
10
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Eric Auger <eric.auger@redhat.com>
13
Message-id: 1487850673-26455-5-git-send-email-vijay.kilari@gmail.com
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
13
---
16
hw/intc/gicv3_internal.h | 2 ++
14
target/arm/cpu.h | 12 ++++++++++--
17
target/arm/cpu.h | 2 ++
15
linux-user/elfload.c | 4 ++--
18
hw/intc/arm_gicv3_common.c | 2 ++
16
target/arm/cpu.c | 10 +---------
19
hw/intc/arm_gicv3_cpuif.c | 8 ++++++++
17
target/arm/translate.c | 4 ++--
20
4 files changed, 14 insertions(+)
18
4 files changed, 15 insertions(+), 15 deletions(-)
21
19
22
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
23
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/intc/gicv3_internal.h
25
+++ b/hw/intc/gicv3_internal.h
26
@@ -XXX,XX +XXX,XX @@ static inline void gicv3_cache_all_target_cpustates(GICv3State *s)
27
}
28
}
29
30
+void gicv3_set_gicv3state(CPUState *cpu, GICv3CPUState *s);
31
+
32
#endif /* QEMU_ARM_GICV3_INTERNAL_H */
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
22
--- a/target/arm/cpu.h
36
+++ b/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
38
25
ARM_FEATURE_VFP3,
39
void *nvic;
26
ARM_FEATURE_VFP_FP16,
40
const struct arm_boot_info *boot_info;
27
ARM_FEATURE_NEON,
41
+ /* Store GICv3CPUState to access from this struct */
28
- ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
42
+ void *gicv3state;
29
ARM_FEATURE_M, /* Microcontroller profile. */
43
} CPUARMState;
30
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
44
31
ARM_FEATURE_THUMB2EE,
45
/**
32
@@ -XXX,XX +XXX,XX @@ enum arm_features {
46
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
33
ARM_FEATURE_V5,
34
ARM_FEATURE_STRONGARM,
35
ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
36
- ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
37
ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
38
ARM_FEATURE_GENERIC_TIMER,
39
ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
40
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
41
/*
42
* 32-bit feature tests via id registers.
43
*/
44
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
45
+{
46
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
47
+}
48
+
49
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
50
+{
51
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
52
+}
53
+
54
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
55
{
56
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
57
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
47
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/intc/arm_gicv3_common.c
59
--- a/linux-user/elfload.c
49
+++ b/hw/intc/arm_gicv3_common.c
60
+++ b/linux-user/elfload.c
50
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp)
61
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
51
62
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
52
s->cpu[i].cpu = cpu;
63
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
53
s->cpu[i].gic = s;
64
GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
54
+ /* Store GICv3CPUState in CPUARMState gicv3state pointer */
65
- GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
55
+ gicv3_set_gicv3state(cpu, &s->cpu[i]);
66
- GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
56
67
+ GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
57
/* Pre-construct the GICR_TYPER:
68
+ GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
58
* For our implementation:
69
/* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
59
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
70
* Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
71
* ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
72
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
60
index XXXXXXX..XXXXXXX 100644
73
index XXXXXXX..XXXXXXX 100644
61
--- a/hw/intc/arm_gicv3_cpuif.c
74
--- a/target/arm/cpu.c
62
+++ b/hw/intc/arm_gicv3_cpuif.c
75
+++ b/target/arm/cpu.c
63
@@ -XXX,XX +XXX,XX @@
76
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
64
#include "gicv3_internal.h"
77
* Presence of EL2 itself is ARM_FEATURE_EL2, and of the
65
#include "cpu.h"
78
* Security Extensions is ARM_FEATURE_EL3.
66
79
*/
67
+void gicv3_set_gicv3state(CPUState *cpu, GICv3CPUState *s)
80
- set_feature(env, ARM_FEATURE_ARM_DIV);
68
+{
81
+ assert(cpu_isar_feature(arm_div, cpu));
69
+ ARMCPU *arm_cpu = ARM_CPU(cpu);
82
set_feature(env, ARM_FEATURE_LPAE);
70
+ CPUARMState *env = &arm_cpu->env;
83
set_feature(env, ARM_FEATURE_V7);
71
+
84
}
72
+ env->gicv3state = (void *)s;
85
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
73
+};
86
if (arm_feature(env, ARM_FEATURE_V5)) {
74
+
87
set_feature(env, ARM_FEATURE_V4T);
75
static GICv3CPUState *icc_cs_from_env(CPUARMState *env)
88
}
76
{
89
- if (arm_feature(env, ARM_FEATURE_M)) {
77
/* Given the CPU, find the right GICv3CPUState struct.
90
- set_feature(env, ARM_FEATURE_THUMB_DIV);
91
- }
92
- if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
93
- set_feature(env, ARM_FEATURE_THUMB_DIV);
94
- }
95
if (arm_feature(env, ARM_FEATURE_VFP4)) {
96
set_feature(env, ARM_FEATURE_VFP3);
97
set_feature(env, ARM_FEATURE_VFP_FP16);
98
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
99
ARMCPU *cpu = ARM_CPU(obj);
100
101
set_feature(&cpu->env, ARM_FEATURE_V7);
102
- set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
103
- set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
104
set_feature(&cpu->env, ARM_FEATURE_V7MP);
105
set_feature(&cpu->env, ARM_FEATURE_PMSA);
106
cpu->midr = 0x411fc153; /* r1p3 */
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
112
case 1:
113
case 3:
114
/* SDIV, UDIV */
115
- if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
116
+ if (!dc_isar_feature(arm_div, s)) {
117
goto illegal_op;
118
}
119
if (((insn >> 5) & 7) || (rd != 15)) {
120
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
121
tmp2 = load_reg(s, rm);
122
if ((op & 0x50) == 0x10) {
123
/* sdiv, udiv */
124
- if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
125
+ if (!dc_isar_feature(thumb_div, s)) {
126
goto illegal_op;
127
}
128
if (op & 0x20)
78
--
129
--
79
2.7.4
130
2.19.1
80
131
81
132
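The "arm implies thumb" remark above is a property of the field encoding rather than of any extra bookkeeping: ID_ISAR0.DIVIDE is nonzero when the Thumb SDIV/UDIV encodings are implemented and greater than 1 when the A32 encodings are implemented as well, so isar_feature_arm_div() can never be true while isar_feature_thumb_div() is false. A standalone sketch of that invariant (not QEMU code; the bits [27:24] field position comes from the Arm ARM, not from this patch):

/* DIVIDE == 0: no SDIV/UDIV; == 1: Thumb only; >= 2: A32 as well. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool thumb_div(uint32_t id_isar0) { return ((id_isar0 >> 24) & 0xf) != 0; }
static bool arm_div(uint32_t id_isar0)   { return ((id_isar0 >> 24) & 0xf) > 1;  }

int main(void)
{
    for (uint32_t divide = 0; divide <= 2; divide++) {
        uint32_t id_isar0 = divide << 24;
        /* A32 division present implies Thumb division present. */
        assert(!arm_div(id_isar0) || thumb_div(id_isar0));
    }
    return 0;
}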
1
From: Clement Deschamps <clement.deschamps@antfield.fr>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Provide a new function sdbus_reparent_card() in sd core for reparenting
3
Having V6 alone imply jazelle was wrong for cortex-m0.
4
a card from one SDBus to another.
4
Change to an assertion for V6 & !M.
5
5
6
This function is required by the raspi platform, where the two SD
6
This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
7
controllers can be dynamically switched.
7
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.
8
8
9
Signed-off-by: Clement Deschamps <clement.deschamps@antfield.fr>
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1488293711-14195-3-git-send-email-peter.maydell@linaro.org
13
Message-id: 20170224164021.9066-3-clement.deschamps@antfield.fr
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
[PMM: added a doc comment to the header file; changed to
16
use new behaviour of qdev_set_parent_bus()]
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
14
---
19
include/hw/sd/sd.h | 11 +++++++++++
15
target/arm/cpu.h | 6 +++++-
20
hw/sd/core.c | 27 +++++++++++++++++++++++++++
16
target/arm/cpu.c | 17 ++++++++++++++---
21
2 files changed, 38 insertions(+)
17
target/arm/translate.c | 2 +-
18
3 files changed, 20 insertions(+), 5 deletions(-)
22
19
23
diff --git a/include/hw/sd/sd.h b/include/hw/sd/sd.h
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
24
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/sd/sd.h
22
--- a/target/arm/cpu.h
26
+++ b/include/hw/sd/sd.h
23
+++ b/target/arm/cpu.h
27
@@ -XXX,XX +XXX,XX @@ uint8_t sdbus_read_data(SDBus *sd);
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
28
bool sdbus_data_ready(SDBus *sd);
25
ARM_FEATURE_PMU, /* has PMU support */
29
bool sdbus_get_inserted(SDBus *sd);
26
ARM_FEATURE_VBAR, /* has cp15 VBAR */
30
bool sdbus_get_readonly(SDBus *sd);
27
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
31
+/**
28
- ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
32
+ * sdbus_reparent_card: Reparent an SD card from one controller to another
29
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
33
+ * @from: controller bus to remove card from
30
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
34
+ * @to: controller bus to move card to
31
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
35
+ *
32
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
36
+ * Reparent an SD card, effectively unplugging it from one controller
33
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
37
+ * and inserting it into another. This is useful for SoCs like the
38
+ * bcm2835 which have two SD controllers and connect a single SD card
39
+ * to them, selected by the guest reprogramming GPIO line routing.
40
+ */
41
+void sdbus_reparent_card(SDBus *from, SDBus *to);
42
43
/* Functions to be used by SD devices to report back to qdevified controllers */
44
void sdbus_set_inserted(SDBus *sd, bool inserted);
45
diff --git a/hw/sd/core.c b/hw/sd/core.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/sd/core.c
48
+++ b/hw/sd/core.c
49
@@ -XXX,XX +XXX,XX @@ void sdbus_set_readonly(SDBus *sdbus, bool readonly)
50
}
51
}
34
}
52
35
53
+void sdbus_reparent_card(SDBus *from, SDBus *to)
36
+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
54
+{
37
+{
55
+ SDState *card = get_card(from);
38
+ return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
56
+ SDCardClass *sc;
57
+ bool readonly;
58
+
59
+ /* We directly reparent the card object rather than implementing this
60
+ * as a hotpluggable connection because we don't want to expose SD cards
61
+ * to users as being hotpluggable, and we can get away with it in this
62
+ * limited use case. This could perhaps be implemented more cleanly in
63
+ * future by adding support to the hotplug infrastructure for "device
64
+ * can be hotplugged only via code, not by user".
65
+ */
66
+
67
+ if (!card) {
68
+ return;
69
+ }
70
+
71
+ sc = SD_CARD_GET_CLASS(card);
72
+ readonly = sc->get_readonly(card);
73
+
74
+ sdbus_set_inserted(from, false);
75
+ qdev_set_parent_bus(DEVICE(card), &to->qbus);
76
+ sdbus_set_inserted(to, true);
77
+ sdbus_set_readonly(to, readonly);
78
+}
39
+}
79
+
40
+
80
static const TypeInfo sd_bus_info = {
41
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
81
.name = TYPE_SD_BUS,
42
{
82
.parent = TYPE_BUS,
43
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
44
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/cpu.c
47
+++ b/target/arm/cpu.c
48
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
49
}
50
if (arm_feature(env, ARM_FEATURE_V6)) {
51
set_feature(env, ARM_FEATURE_V5);
52
- set_feature(env, ARM_FEATURE_JAZELLE);
53
if (!arm_feature(env, ARM_FEATURE_M)) {
54
+ assert(cpu_isar_feature(jazelle, cpu));
55
set_feature(env, ARM_FEATURE_AUXCR);
56
}
57
}
58
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
59
set_feature(&cpu->env, ARM_FEATURE_VFP);
60
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
61
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
62
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
63
cpu->midr = 0x41069265;
64
cpu->reset_fpsid = 0x41011090;
65
cpu->ctr = 0x1dd20d2;
66
cpu->reset_sctlr = 0x00090078;
67
+
68
+ /*
69
+ * ARMv5 does not have the ID_ISAR registers, but we can still
70
+ * set the field to indicate Jazelle support within QEMU.
71
+ */
72
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
73
}
74
75
static void arm946_initfn(Object *obj)
76
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
77
set_feature(&cpu->env, ARM_FEATURE_AUXCR);
78
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
79
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
80
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
81
cpu->midr = 0x4106a262;
82
cpu->reset_fpsid = 0x410110a0;
83
cpu->ctr = 0x1dd20d2;
84
cpu->reset_sctlr = 0x00090078;
85
cpu->reset_auxcr = 1;
86
+
87
+ /*
88
+ * ARMv5 does not have the ID_ISAR registers, but we can still
89
+ * set the field to indicate Jazelle support within QEMU.
90
+ */
91
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
92
+
93
{
94
/* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
95
ARMCPRegInfo ifar = {
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@
101
#define ENABLE_ARCH_5 arm_dc_feature(s, ARM_FEATURE_V5)
102
/* currently all emulated v5 cores are also v5TE, so don't bother */
103
#define ENABLE_ARCH_5TE arm_dc_feature(s, ARM_FEATURE_V5)
104
-#define ENABLE_ARCH_5J arm_dc_feature(s, ARM_FEATURE_JAZELLE)
105
+#define ENABLE_ARCH_5J dc_isar_feature(jazelle, s)
106
#define ENABLE_ARCH_6 arm_dc_feature(s, ARM_FEATURE_V6)
107
#define ENABLE_ARCH_6K arm_dc_feature(s, ARM_FEATURE_V6K)
108
#define ENABLE_ARCH_6T2 arm_dc_feature(s, ARM_FEATURE_THUMB2)
83
--
109
--
84
2.7.4
110
2.19.1
85
111
86
112
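For readers unfamiliar with the SD layer, a hypothetical board-level caller of the new helper might look like the fragment below. Only the sdbus_reparent_card(SDBus *from, SDBus *to) prototype comes from this patch; the demo state type and its members are invented for illustration, and on the real raspi2 the call sits behind the GPIO routing logic mentioned in the header comment, so this is a sketch rather than the actual board code.

/* Hypothetical caller -- BCM2835DemoState and its members are made up;
 * only sdbus_reparent_card() is real (added by the hunk above). */
typedef struct {
    SDBus *sdhci_bus;    /* bus of the Arasan SDHCI controller */
    SDBus *sdhost_bus;   /* bus of the bcm2835 SDHost controller */
} BCM2835DemoState;

static void demo_route_sd(BCM2835DemoState *s, bool to_sdhost)
{
    if (to_sdhost) {
        sdbus_reparent_card(s->sdhci_bus, s->sdhost_bus);
    } else {
        sdbus_reparent_card(s->sdhost_bus, s->sdhci_bus);
    }
}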
1
From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Reset CPU interface registers of GICv3 when CPU is reset.
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
For this, an ARMCPRegInfo struct is registered for a single ICC
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
register, whose resetfn is called when the CPU is reset.
5
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
6
7
All the ICC registers are reset by a single register
8
reset function instead of calling resetfn for each ICC
9
register.
10
11
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Message-id: 1487850673-26455-6-git-send-email-vijay.kilari@gmail.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
8
---
17
hw/intc/arm_gicv3_kvm.c | 60 +++++++++++++++++++++++++++++++++++++++++++++++++
9
target/arm/cpu.h | 6 +++++-
18
1 file changed, 60 insertions(+)
10
linux-user/elfload.c | 2 +-
11
target/arm/cpu.c | 4 ----
12
target/arm/helper.c | 2 +-
13
target/arm/machine.c | 3 +--
14
5 files changed, 8 insertions(+), 9 deletions(-)
19
15
20
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/arm_gicv3_kvm.c
18
--- a/target/arm/cpu.h
23
+++ b/hw/intc/arm_gicv3_kvm.c
19
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_get(GICv3State *s)
20
@@ -XXX,XX +XXX,XX @@ enum arm_features {
25
}
21
ARM_FEATURE_NEON,
22
ARM_FEATURE_M, /* Microcontroller profile. */
23
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
24
- ARM_FEATURE_THUMB2EE,
25
ARM_FEATURE_V7MP, /* v7 Multiprocessing Extensions */
26
ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
27
ARM_FEATURE_V4T,
28
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
29
return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
26
}
30
}
27
31
28
+static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
32
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
29
+{
33
+{
30
+ ARMCPU *cpu;
34
+ return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
31
+ GICv3State *s;
32
+ GICv3CPUState *c;
33
+
34
+ c = (GICv3CPUState *)env->gicv3state;
35
+ s = c->gic;
36
+ cpu = ARM_CPU(c->cpu);
37
+
38
+ /* Initialize to actual HW supported configuration */
39
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS,
40
+ KVM_VGIC_ATTR(ICC_CTLR_EL1, cpu->mp_affinity),
41
+ &c->icc_ctlr_el1[GICV3_NS], false);
42
+
43
+ c->icc_ctlr_el1[GICV3_S] = c->icc_ctlr_el1[GICV3_NS];
44
+ c->icc_pmr_el1 = 0;
45
+ c->icc_bpr[GICV3_G0] = GIC_MIN_BPR;
46
+ c->icc_bpr[GICV3_G1] = GIC_MIN_BPR;
47
+ c->icc_bpr[GICV3_G1NS] = GIC_MIN_BPR;
48
+
49
+ c->icc_sre_el1 = 0x7;
50
+ memset(c->icc_apr, 0, sizeof(c->icc_apr));
51
+ memset(c->icc_igrpen, 0, sizeof(c->icc_igrpen));
52
+}
35
+}
53
+
36
+
54
static void kvm_arm_gicv3_reset(DeviceState *dev)
37
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
55
{
38
{
56
GICv3State *s = ARM_GICV3_COMMON(dev);
39
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
57
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_reset(DeviceState *dev)
40
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
58
kvm_arm_gicv3_put(s);
41
index XXXXXXX..XXXXXXX 100644
42
--- a/linux-user/elfload.c
43
+++ b/linux-user/elfload.c
44
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
45
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
46
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
47
GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
48
- GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
49
+ GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
50
GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
51
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
52
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
53
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/cpu.c
56
+++ b/target/arm/cpu.c
57
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
58
set_feature(&cpu->env, ARM_FEATURE_V7);
59
set_feature(&cpu->env, ARM_FEATURE_VFP3);
60
set_feature(&cpu->env, ARM_FEATURE_NEON);
61
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
62
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
63
set_feature(&cpu->env, ARM_FEATURE_EL3);
64
cpu->midr = 0x410fc080;
65
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
66
set_feature(&cpu->env, ARM_FEATURE_VFP3);
67
set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
68
set_feature(&cpu->env, ARM_FEATURE_NEON);
69
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
70
set_feature(&cpu->env, ARM_FEATURE_EL3);
71
/* Note that A9 supports the MP extensions even for
72
* A9UP and single-core A9MP (which are both different
73
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
74
set_feature(&cpu->env, ARM_FEATURE_V7VE);
75
set_feature(&cpu->env, ARM_FEATURE_VFP4);
76
set_feature(&cpu->env, ARM_FEATURE_NEON);
77
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
78
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
79
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
80
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
81
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
82
set_feature(&cpu->env, ARM_FEATURE_V7VE);
83
set_feature(&cpu->env, ARM_FEATURE_VFP4);
84
set_feature(&cpu->env, ARM_FEATURE_NEON);
85
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
86
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
87
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
88
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
89
diff --git a/target/arm/helper.c b/target/arm/helper.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/helper.c
92
+++ b/target/arm/helper.c
93
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
94
define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
95
define_arm_cp_regs(cpu, vmsa_cp_reginfo);
96
}
97
- if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
98
+ if (cpu_isar_feature(t32ee, cpu)) {
99
define_arm_cp_regs(cpu, t2ee_cp_reginfo);
100
}
101
if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
102
diff --git a/target/arm/machine.c b/target/arm/machine.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/machine.c
105
+++ b/target/arm/machine.c
106
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
107
static bool thumb2ee_needed(void *opaque)
108
{
109
ARMCPU *cpu = opaque;
110
- CPUARMState *env = &cpu->env;
111
112
- return arm_feature(env, ARM_FEATURE_THUMB2EE);
113
+ return cpu_isar_feature(t32ee, cpu);
59
}
114
}
60
115
61
+/*
116
static const VMStateDescription vmstate_thumb2ee = {
62
+ * CPU interface registers of GIC needs to be reset on CPU reset.
63
+ * For the calling arm_gicv3_icc_reset() on CPU reset, we register
64
+ * below ARMCPRegInfo. As we reset the whole cpu interface under single
65
+ * register reset, we define only one register of CPU interface instead
66
+ * of defining all the registers.
67
+ */
68
+static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
69
+ { .name = "ICC_CTLR_EL1", .state = ARM_CP_STATE_BOTH,
70
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 4,
71
+ /*
72
+ * If ARM_CP_NOP is used, resetfn is not called,
73
+ * So ARM_CP_NO_RAW is appropriate type.
74
+ */
75
+ .type = ARM_CP_NO_RAW,
76
+ .access = PL1_RW,
77
+ .readfn = arm_cp_read_zero,
78
+ .writefn = arm_cp_write_ignore,
79
+ /*
80
+ * We hang the whole cpu interface reset routine off here
81
+ * rather than parcelling it out into one little function
82
+ * per register
83
+ */
84
+ .resetfn = arm_gicv3_icc_reset,
85
+ },
86
+ REGINFO_SENTINEL
87
+};
88
+
89
static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
90
{
91
GICv3State *s = KVM_ARM_GICV3(dev);
92
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
93
94
gicv3_init_irqs_and_mmio(s, kvm_arm_gicv3_set_irq, NULL);
95
96
+ for (i = 0; i < s->num_cpu; i++) {
97
+ ARMCPU *cpu = ARM_CPU(qemu_get_cpu(i));
98
+
99
+ define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
100
+ }
101
+
102
/* Try to create the device via the device control API */
103
s->dev_fd = kvm_create_device(kvm_state, KVM_DEV_TYPE_ARM_VGIC_V3, false);
104
if (s->dev_fd < 0) {
105
--
117
--
106
2.7.4
118
2.19.1
107
119
108
120
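Several hunks in this series backfill an ID register field with FIELD_DP32()/FIELD_DP64() (the ID_ISAR1.JAZELLE update above, the ID_AA64PFR0.SVE update in the next patch). That is a plain read-modify-write deposit; the standalone sketch below shows the same operation with the macro replaced by a local deposit32(), using an arbitrary 4-bit field at bits [27:24] purely as an example (not QEMU code).

#include <assert.h>
#include <stdint.h>

/* Local stand-in for QEMU's deposit32()/FIELD_DP32(). */
static uint32_t deposit32(uint32_t reg, int start, int length, uint32_t val)
{
    uint32_t mask = ((1u << length) - 1) << start;

    return (reg & ~mask) | ((val << start) & mask);
}

int main(void)
{
    uint32_t id = 0x11011121;              /* arbitrary starting value */
    uint32_t t = deposit32(id, 24, 4, 2);  /* set the example field to 2 */

    assert(((t >> 24) & 0xf) == 2);        /* the field reads back as 2 */
    assert((t >> 28) == (id >> 28));       /* neighbouring field untouched */
    assert((t & 0x00ffffff) == (id & 0x00ffffff));
    return 0;
}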
New patch
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/cpu.h | 16 +++++++++++++++-
10
linux-user/aarch64/signal.c | 4 ++--
11
linux-user/elfload.c | 2 +-
12
linux-user/syscall.c | 10 ++++++----
13
target/arm/cpu64.c | 5 ++++-
14
target/arm/helper.c | 9 ++++++---
15
target/arm/machine.c | 3 +--
16
target/arm/translate-a64.c | 4 ++--
17
8 files changed, 37 insertions(+), 16 deletions(-)
18
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
24
FIELD(ID_AA64ISAR1, SB, 36, 4)
25
FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
26
27
+FIELD(ID_AA64PFR0, EL0, 0, 4)
28
+FIELD(ID_AA64PFR0, EL1, 4, 4)
29
+FIELD(ID_AA64PFR0, EL2, 8, 4)
30
+FIELD(ID_AA64PFR0, EL3, 12, 4)
31
+FIELD(ID_AA64PFR0, FP, 16, 4)
32
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
33
+FIELD(ID_AA64PFR0, GIC, 24, 4)
34
+FIELD(ID_AA64PFR0, RAS, 28, 4)
35
+FIELD(ID_AA64PFR0, SVE, 32, 4)
36
+
37
QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
38
39
/* If adding a feature bit which corresponds to a Linux ELF
40
@@ -XXX,XX +XXX,XX @@ enum arm_features {
41
ARM_FEATURE_PMU, /* has PMU support */
42
ARM_FEATURE_VBAR, /* has cp15 VBAR */
43
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
44
- ARM_FEATURE_SVE, /* has Scalable Vector Extension */
45
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
46
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
47
};
48
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
49
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
50
}
51
52
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
53
+{
54
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
55
+}
56
+
57
/*
58
* Forward to the above feature tests given an ARMCPU pointer.
59
*/
60
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/linux-user/aarch64/signal.c
63
+++ b/linux-user/aarch64/signal.c
64
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
65
break;
66
67
case TARGET_SVE_MAGIC:
68
- if (arm_feature(env, ARM_FEATURE_SVE)) {
69
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
70
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
71
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
72
if (!sve && size == sve_size) {
73
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
74
&layout);
75
76
/* SVE state needs saving only if it exists. */
77
- if (arm_feature(env, ARM_FEATURE_SVE)) {
78
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
79
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
80
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
81
sve_ofs = alloc_sigframe_space(sve_size, &layout);
82
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/linux-user/elfload.c
85
+++ b/linux-user/elfload.c
86
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
87
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
88
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
89
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
90
- GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
91
+ GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
92
93
#undef GET_FEATURE
94
#undef GET_FEATURE_ID
95
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/linux-user/syscall.c
98
+++ b/linux-user/syscall.c
99
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
100
* even though the current architectural maximum is VQ=16.
101
*/
102
ret = -TARGET_EINVAL;
103
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)
104
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
105
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
106
CPUARMState *env = cpu_env;
107
ARMCPU *cpu = arm_env_get_cpu(env);
108
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
109
return ret;
110
case TARGET_PR_SVE_GET_VL:
111
ret = -TARGET_EINVAL;
112
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
113
- CPUARMState *env = cpu_env;
114
- ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
115
+ {
116
+ ARMCPU *cpu = arm_env_get_cpu(cpu_env);
117
+ if (cpu_isar_feature(aa64_sve, cpu)) {
118
+ ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
119
+ }
120
}
121
return ret;
122
#endif /* AARCH64 */
123
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/target/arm/cpu64.c
126
+++ b/target/arm/cpu64.c
127
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
128
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
129
cpu->isar.id_aa64isar1 = t;
130
131
+ t = cpu->isar.id_aa64pfr0;
132
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
133
+ cpu->isar.id_aa64pfr0 = t;
134
+
135
/* Replicate the same data to the 32-bit id registers. */
136
u = cpu->isar.id_isar5;
137
u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
139
* present in either.
140
*/
141
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
142
- set_feature(&cpu->env, ARM_FEATURE_SVE);
143
/* For usermode -cpu max we can use a larger and more efficient DCZ
144
* blocksize since we don't have to follow what the hardware does.
145
*/
146
diff --git a/target/arm/helper.c b/target/arm/helper.c
147
index XXXXXXX..XXXXXXX 100644
148
--- a/target/arm/helper.c
149
+++ b/target/arm/helper.c
150
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
151
define_one_arm_cp_reg(cpu, &sctlr);
152
}
153
154
- if (arm_feature(env, ARM_FEATURE_SVE)) {
155
+ if (cpu_isar_feature(aa64_sve, cpu)) {
156
define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
157
if (arm_feature(env, ARM_FEATURE_EL2)) {
158
define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
159
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
160
uint32_t flags;
161
162
if (is_a64(env)) {
163
+ ARMCPU *cpu = arm_env_get_cpu(env);
164
+
165
*pc = env->pc;
166
flags = ARM_TBFLAG_AARCH64_STATE_MASK;
167
/* Get control bits for tagged addresses */
168
flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
169
flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
170
171
- if (arm_feature(env, ARM_FEATURE_SVE)) {
172
+ if (cpu_isar_feature(aa64_sve, cpu)) {
173
int sve_el = sve_exception_el(env, current_el);
174
uint32_t zcr_len;
175
176
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
177
void aarch64_sve_change_el(CPUARMState *env, int old_el,
178
int new_el, bool el0_a64)
179
{
180
+ ARMCPU *cpu = arm_env_get_cpu(env);
181
int old_len, new_len;
182
bool old_a64, new_a64;
183
184
/* Nothing to do if no SVE. */
185
- if (!arm_feature(env, ARM_FEATURE_SVE)) {
186
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
187
return;
188
}
189
190
diff --git a/target/arm/machine.c b/target/arm/machine.c
191
index XXXXXXX..XXXXXXX 100644
192
--- a/target/arm/machine.c
193
+++ b/target/arm/machine.c
194
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
195
static bool sve_needed(void *opaque)
196
{
197
ARMCPU *cpu = opaque;
198
- CPUARMState *env = &cpu->env;
199
200
- return arm_feature(env, ARM_FEATURE_SVE);
201
+ return cpu_isar_feature(aa64_sve, cpu);
202
}
203
204
/* The first two words of each Zreg is stored in VFP state. */
205
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
206
index XXXXXXX..XXXXXXX 100644
207
--- a/target/arm/translate-a64.c
208
+++ b/target/arm/translate-a64.c
209
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
210
cpu_fprintf(f, " FPCR=%08x FPSR=%08x\n",
211
vfp_get_fpcr(env), vfp_get_fpsr(env));
212
213
- if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
214
+ if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
215
int j, zcr_len = sve_zcr_len_for_el(env, el);
216
217
for (i = 0; i <= FFR_PRED_NUM; i++) {
218
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
219
unallocated_encoding(s);
220
break;
221
case 0x2:
222
- if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
223
+ if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
224
unallocated_encoding(s);
225
}
226
break;
227
--
228
2.19.1
229
230
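The patch above is part of the move towards ID-register fields, rather than ad-hoc feature bits, as the source of truth. A simplified, self-contained illustration of what an isar_feature test boils down to (plain shifts and masks here; the real code uses QEMU's FIELD/FIELD_EX64 macros):

    /* Illustrative only: test a 4-bit ID register field, as the SVE check
     * above does for ID_AA64PFR0 bits [35:32].
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ID_AA64PFR0_SVE_SHIFT  32
    #define ID_AA64PFR0_SVE_LENGTH 4

    static uint64_t extract64(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    static bool isar_feature_aa64_sve(uint64_t id_aa64pfr0)
    {
        /* A nonzero field value means the feature is implemented. */
        return extract64(id_aa64pfr0, ID_AA64PFR0_SVE_SHIFT,
                         ID_AA64PFR0_SVE_LENGTH) != 0;
    }

    int main(void)
    {
        uint64_t pfr0 = 1ULL << ID_AA64PFR0_SVE_SHIFT;   /* SVE field = 1 */

        printf("SVE present: %s\n", isar_feature_aa64_sve(pfr0) ? "yes" : "no");
        return 0;
    }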
1
The SysTick timer isn't really part of the NVIC proper;
1
From: Richard Henderson <richard.henderson@linaro.org>
2
we just modelled it that way back when we couldn't
3
easily have devices that only occupied a small chunk
4
of a memory region. Split it out into its own device.
5
2
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 1487604965-23220-10-git-send-email-peter.maydell@linaro.org
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
---
8
---
10
hw/timer/Makefile.objs | 1 +
9
target/arm/cpu.h | 17 +++++++++++++++-
11
include/hw/arm/armv7m_nvic.h | 10 +-
10
linux-user/elfload.c | 6 +-----
12
include/hw/timer/armv7m_systick.h | 34 ++++++
11
target/arm/cpu64.c | 16 ++++++++-------
13
hw/intc/armv7m_nvic.c | 160 ++++++-------------------
12
target/arm/helper.c | 2 +-
14
hw/timer/armv7m_systick.c | 240 ++++++++++++++++++++++++++++++++++++++
13
target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
15
hw/timer/trace-events | 6 +
14
target/arm/translate.c | 6 +++---
16
6 files changed, 318 insertions(+), 133 deletions(-)
15
6 files changed, 50 insertions(+), 37 deletions(-)
17
create mode 100644 include/hw/timer/armv7m_systick.h
18
create mode 100644 hw/timer/armv7m_systick.c
19
16
20
diff --git a/hw/timer/Makefile.objs b/hw/timer/Makefile.objs
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/timer/Makefile.objs
19
--- a/target/arm/cpu.h
23
+++ b/hw/timer/Makefile.objs
20
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@ enum arm_features {
25
common-obj-$(CONFIG_ARM_TIMER) += arm_timer.o
22
ARM_FEATURE_PMU, /* has PMU support */
26
common-obj-$(CONFIG_ARM_MPTIMER) += arm_mptimer.o
23
ARM_FEATURE_VBAR, /* has cp15 VBAR */
27
+common-obj-$(CONFIG_ARM_V7M) += armv7m_systick.o
24
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
28
common-obj-$(CONFIG_A9_GTIMER) += a9gtimer.o
25
- ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
29
common-obj-$(CONFIG_CADENCE) += cadence_ttc.o
26
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
30
common-obj-$(CONFIG_DS1338) += ds1338.o
31
diff --git a/include/hw/arm/armv7m_nvic.h b/include/hw/arm/armv7m_nvic.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/include/hw/arm/armv7m_nvic.h
34
+++ b/include/hw/arm/armv7m_nvic.h
35
@@ -XXX,XX +XXX,XX @@
36
37
#include "target/arm/cpu.h"
38
#include "hw/sysbus.h"
39
+#include "hw/timer/armv7m_systick.h"
40
41
#define TYPE_NVIC "armv7m_nvic"
42
43
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
44
unsigned int vectpending; /* highest prio pending enabled exception */
45
int exception_prio; /* group prio of the highest prio active exception */
46
47
- struct {
48
- uint32_t control;
49
- uint32_t reload;
50
- int64_t tick;
51
- QEMUTimer *timer;
52
- } systick;
53
-
54
MemoryRegion sysregmem;
55
MemoryRegion container;
56
57
uint32_t num_irq;
58
qemu_irq excpout;
59
qemu_irq sysresetreq;
60
+
61
+ SysTickState systick;
62
} NVICState;
63
64
#endif
65
diff --git a/include/hw/timer/armv7m_systick.h b/include/hw/timer/armv7m_systick.h
66
new file mode 100644
67
index XXXXXXX..XXXXXXX
68
--- /dev/null
69
+++ b/include/hw/timer/armv7m_systick.h
70
@@ -XXX,XX +XXX,XX @@
71
+/*
72
+ * ARMv7M SysTick timer
73
+ *
74
+ * Copyright (c) 2006-2007 CodeSourcery.
75
+ * Written by Paul Brook
76
+ * Copyright (c) 2017 Linaro Ltd
77
+ * Written by Peter Maydell
78
+ *
79
+ * This code is licensed under the GPL (version 2 or later).
80
+ */
81
+
82
+#ifndef HW_TIMER_ARMV7M_SYSTICK_H
83
+#define HW_TIMER_ARMV7M_SYSTICK_H
84
+
85
+#include "hw/sysbus.h"
86
+
87
+#define TYPE_SYSTICK "armv7m_systick"
88
+
89
+#define SYSTICK(obj) OBJECT_CHECK(SysTickState, (obj), TYPE_SYSTICK)
90
+
91
+typedef struct SysTickState {
92
+ /*< private >*/
93
+ SysBusDevice parent_obj;
94
+ /*< public >*/
95
+
96
+ uint32_t control;
97
+ uint32_t reload;
98
+ int64_t tick;
99
+ QEMUTimer *timer;
100
+ MemoryRegion iomem;
101
+ qemu_irq irq;
102
+} SysTickState;
103
+
104
+#endif
105
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
106
index XXXXXXX..XXXXXXX 100644
107
--- a/hw/intc/armv7m_nvic.c
108
+++ b/hw/intc/armv7m_nvic.c
109
@@ -XXX,XX +XXX,XX @@ static const uint8_t nvic_id[] = {
110
0x00, 0xb0, 0x1b, 0x00, 0x0d, 0xe0, 0x05, 0xb1
111
};
27
};
112
28
113
-/* qemu timers run at 1GHz. We want something closer to 1MHz. */
29
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
114
-#define SYSTICK_SCALE 1000ULL
30
return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
115
-
31
}
116
-#define SYSTICK_ENABLE (1 << 0)
32
117
-#define SYSTICK_TICKINT (1 << 1)
33
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
118
-#define SYSTICK_CLKSOURCE (1 << 2)
119
-#define SYSTICK_COUNTFLAG (1 << 16)
120
-
121
-int system_clock_scale;
122
-
123
-/* Conversion factor from qemu timer to SysTick frequencies. */
124
-static inline int64_t systick_scale(NVICState *s)
125
-{
126
- if (s->systick.control & SYSTICK_CLKSOURCE)
127
- return system_clock_scale;
128
- else
129
- return 1000;
130
-}
131
-
132
-static void systick_reload(NVICState *s, int reset)
133
-{
134
- /* The Cortex-M3 Devices Generic User Guide says that "When the
135
- * ENABLE bit is set to 1, the counter loads the RELOAD value from the
136
- * SYST RVR register and then counts down". So, we need to check the
137
- * ENABLE bit before reloading the value.
138
- */
139
- if ((s->systick.control & SYSTICK_ENABLE) == 0) {
140
- return;
141
- }
142
-
143
- if (reset)
144
- s->systick.tick = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
145
- s->systick.tick += (s->systick.reload + 1) * systick_scale(s);
146
- timer_mod(s->systick.timer, s->systick.tick);
147
-}
148
-
149
-static void systick_timer_tick(void * opaque)
150
-{
151
- NVICState *s = (NVICState *)opaque;
152
- s->systick.control |= SYSTICK_COUNTFLAG;
153
- if (s->systick.control & SYSTICK_TICKINT) {
154
- /* Trigger the interrupt. */
155
- armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK);
156
- }
157
- if (s->systick.reload == 0) {
158
- s->systick.control &= ~SYSTICK_ENABLE;
159
- } else {
160
- systick_reload(s, 0);
161
- }
162
-}
163
-
164
-static void systick_reset(NVICState *s)
165
-{
166
- s->systick.control = 0;
167
- s->systick.reload = 0;
168
- s->systick.tick = 0;
169
- timer_del(s->systick.timer);
170
-}
171
-
172
static int nvic_pending_prio(NVICState *s)
173
{
174
/* return the priority of the current pending interrupt,
175
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset)
176
switch (offset) {
177
case 4: /* Interrupt Control Type. */
178
return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
179
- case 0x10: /* SysTick Control and Status. */
180
- val = s->systick.control;
181
- s->systick.control &= ~SYSTICK_COUNTFLAG;
182
- return val;
183
- case 0x14: /* SysTick Reload Value. */
184
- return s->systick.reload;
185
- case 0x18: /* SysTick Current Value. */
186
- {
187
- int64_t t;
188
- if ((s->systick.control & SYSTICK_ENABLE) == 0)
189
- return 0;
190
- t = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
191
- if (t >= s->systick.tick)
192
- return 0;
193
- val = ((s->systick.tick - (t + 1)) / systick_scale(s)) + 1;
194
- /* The interrupt in triggered when the timer reaches zero.
195
- However the counter is not reloaded until the next clock
196
- tick. This is a hack to return zero during the first tick. */
197
- if (val > s->systick.reload)
198
- val = 0;
199
- return val;
200
- }
201
- case 0x1c: /* SysTick Calibration Value. */
202
- return 10000;
203
case 0xd00: /* CPUID Base. */
204
return cpu->midr;
205
case 0xd04: /* Interrupt Control State. */
206
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset)
207
static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value)
208
{
209
ARMCPU *cpu = s->cpu;
210
- uint32_t oldval;
211
+
212
switch (offset) {
213
- case 0x10: /* SysTick Control and Status. */
214
- oldval = s->systick.control;
215
- s->systick.control &= 0xfffffff8;
216
- s->systick.control |= value & 7;
217
- if ((oldval ^ value) & SYSTICK_ENABLE) {
218
- int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
219
- if (value & SYSTICK_ENABLE) {
220
- if (s->systick.tick) {
221
- s->systick.tick += now;
222
- timer_mod(s->systick.timer, s->systick.tick);
223
- } else {
224
- systick_reload(s, 1);
225
- }
226
- } else {
227
- timer_del(s->systick.timer);
228
- s->systick.tick -= now;
229
- if (s->systick.tick < 0)
230
- s->systick.tick = 0;
231
- }
232
- } else if ((oldval ^ value) & SYSTICK_CLKSOURCE) {
233
- /* This is a hack. Force the timer to be reloaded
234
- when the reference clock is changed. */
235
- systick_reload(s, 1);
236
- }
237
- break;
238
- case 0x14: /* SysTick Reload Value. */
239
- s->systick.reload = value;
240
- break;
241
- case 0x18: /* SysTick Current Value. Writes reload the timer. */
242
- systick_reload(s, 1);
243
- s->systick.control &= ~SYSTICK_COUNTFLAG;
244
- break;
245
case 0xd04: /* Interrupt Control State. */
246
if (value & (1 << 31)) {
247
armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI);
248
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_VecInfo = {
249
250
static const VMStateDescription vmstate_nvic = {
251
.name = "armv7m_nvic",
252
- .version_id = 3,
253
- .minimum_version_id = 3,
254
+ .version_id = 4,
255
+ .minimum_version_id = 4,
256
.post_load = &nvic_post_load,
257
.fields = (VMStateField[]) {
258
VMSTATE_STRUCT_ARRAY(vectors, NVICState, NVIC_MAX_VECTORS, 1,
259
vmstate_VecInfo, VecInfo),
260
- VMSTATE_UINT32(systick.control, NVICState),
261
- VMSTATE_UINT32(systick.reload, NVICState),
262
- VMSTATE_INT64(systick.tick, NVICState),
263
- VMSTATE_TIMER_PTR(systick.timer, NVICState),
264
VMSTATE_UINT32(prigroup, NVICState),
265
VMSTATE_END_OF_LIST()
266
}
267
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_reset(DeviceState *dev)
268
269
s->exception_prio = NVIC_NOEXC_PRIO;
270
s->vectpending = 0;
271
+}
272
273
- systick_reset(s);
274
+static void nvic_systick_trigger(void *opaque, int n, int level)
275
+{
34
+{
276
+ NVICState *s = opaque;
35
+ /*
277
+
36
+ * This is a placeholder for use by VCMA until the rest of
278
+ if (level) {
37
+ * the ARMv8.2-FP16 extension is implemented for aa32 mode.
279
+ /* SysTick just asked us to pend its exception.
38
+ * At which point we can properly set and check MVFR1.FPHP.
280
+ * (This is different from an external interrupt line's
39
+ */
281
+ * behaviour.)
40
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
282
+ */
283
+ armv7m_nvic_set_pending(s, ARMV7M_EXCP_SYSTICK);
284
+ }
285
}
286
287
static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
288
{
289
NVICState *s = NVIC(dev);
290
+ SysBusDevice *systick_sbd;
291
+ Error *err = NULL;
292
293
s->cpu = ARM_CPU(qemu_get_cpu(0));
294
assert(s->cpu);
295
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
296
/* include space for internal exception vectors */
297
s->num_irq += NVIC_FIRST_IRQ;
298
299
+ object_property_set_bool(OBJECT(&s->systick), true, "realized", &err);
300
+ if (err != NULL) {
301
+ error_propagate(errp, err);
302
+ return;
303
+ }
304
+ systick_sbd = SYS_BUS_DEVICE(&s->systick);
305
+ sysbus_connect_irq(systick_sbd, 0,
306
+ qdev_get_gpio_in_named(dev, "systick-trigger", 0));
307
+
308
/* The NVIC and System Control Space (SCS) starts at 0xe000e000
309
* and looks like this:
310
* 0x004 - ICTR
311
- * 0x010 - 0x1c - systick
312
+ * 0x010 - 0xff - systick
313
* 0x100..0x7ec - NVIC
314
* 0x7f0..0xcff - Reserved
315
* 0xd00..0xd3c - SCS registers
316
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
317
memory_region_init_io(&s->sysregmem, OBJECT(s), &nvic_sysreg_ops, s,
318
"nvic_sysregs", 0x1000);
319
memory_region_add_subregion(&s->container, 0, &s->sysregmem);
320
+ memory_region_add_subregion_overlap(&s->container, 0x10,
321
+ sysbus_mmio_get_region(systick_sbd, 0),
322
+ 1);
323
324
sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
325
-
326
- s->systick.timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, systick_timer_tick, s);
327
}
328
329
static void armv7m_nvic_instance_init(Object *obj)
330
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_instance_init(Object *obj)
331
NVICState *nvic = NVIC(obj);
332
SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
333
334
+ object_initialize(&nvic->systick, sizeof(nvic->systick), TYPE_SYSTICK);
335
+ qdev_set_parent_bus(DEVICE(&nvic->systick), sysbus_get_default());
336
+
337
sysbus_init_irq(sbd, &nvic->excpout);
338
qdev_init_gpio_out_named(dev, &nvic->sysresetreq, "SYSRESETREQ", 1);
339
+ qdev_init_gpio_in_named(dev, nvic_systick_trigger, "systick-trigger", 1);
340
}
341
342
static void armv7m_nvic_class_init(ObjectClass *klass, void *data)
343
diff --git a/hw/timer/armv7m_systick.c b/hw/timer/armv7m_systick.c
344
new file mode 100644
345
index XXXXXXX..XXXXXXX
346
--- /dev/null
347
+++ b/hw/timer/armv7m_systick.c
348
@@ -XXX,XX +XXX,XX @@
349
+/*
350
+ * ARMv7M SysTick timer
351
+ *
352
+ * Copyright (c) 2006-2007 CodeSourcery.
353
+ * Written by Paul Brook
354
+ * Copyright (c) 2017 Linaro Ltd
355
+ * Written by Peter Maydell
356
+ *
357
+ * This code is licensed under the GPL (version 2 or later).
358
+ */
359
+
360
+#include "qemu/osdep.h"
361
+#include "hw/timer/armv7m_systick.h"
362
+#include "qemu-common.h"
363
+#include "hw/sysbus.h"
364
+#include "qemu/timer.h"
365
+#include "qemu/log.h"
366
+#include "trace.h"
367
+
368
+/* qemu timers run at 1GHz. We want something closer to 1MHz. */
369
+#define SYSTICK_SCALE 1000ULL
370
+
371
+#define SYSTICK_ENABLE (1 << 0)
372
+#define SYSTICK_TICKINT (1 << 1)
373
+#define SYSTICK_CLKSOURCE (1 << 2)
374
+#define SYSTICK_COUNTFLAG (1 << 16)
375
+
376
+int system_clock_scale;
377
+
378
+/* Conversion factor from qemu timer to SysTick frequencies. */
379
+static inline int64_t systick_scale(SysTickState *s)
380
+{
381
+ if (s->control & SYSTICK_CLKSOURCE) {
382
+ return system_clock_scale;
383
+ } else {
384
+ return 1000;
385
+ }
386
+}
41
+}
387
+
42
+
388
+static void systick_reload(SysTickState *s, int reset)
43
/*
44
* 64-bit feature tests via id registers.
45
*/
46
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
47
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
48
}
49
50
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
389
+{
51
+{
390
+ /* The Cortex-M3 Devices Generic User Guide says that "When the
52
+ /* We always set the AdvSIMD and FP fields identically wrt FP16. */
391
+ * ENABLE bit is set to 1, the counter loads the RELOAD value from the
53
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
392
+ * SYST RVR register and then counts down". So, we need to check the
393
+ * ENABLE bit before reloading the value.
394
+ */
395
+ trace_systick_reload();
396
+
397
+ if ((s->control & SYSTICK_ENABLE) == 0) {
398
+ return;
399
+ }
400
+
401
+ if (reset) {
402
+ s->tick = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
403
+ }
404
+ s->tick += (s->reload + 1) * systick_scale(s);
405
+ timer_mod(s->timer, s->tick);
406
+}
54
+}
407
+
55
+
408
+static void systick_timer_tick(void *opaque)
56
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
409
+{
57
{
410
+ SysTickState *s = (SysTickState *)opaque;
58
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
59
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/linux-user/elfload.c
62
+++ b/linux-user/elfload.c
63
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
64
hwcaps |= ARM_HWCAP_A64_ASIMD;
65
66
/* probe for the extra features */
67
-#define GET_FEATURE(feat, hwcap) \
68
- do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
69
#define GET_FEATURE_ID(feat, hwcap) \
70
do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
71
72
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
73
GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
74
GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
75
GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
76
- GET_FEATURE(ARM_FEATURE_V8_FP16,
77
- ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
78
+ GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
79
GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
80
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
81
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
82
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
83
GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
84
85
-#undef GET_FEATURE
86
#undef GET_FEATURE_ID
87
88
return hwcaps;
89
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/cpu64.c
92
+++ b/target/arm/cpu64.c
93
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
94
95
t = cpu->isar.id_aa64pfr0;
96
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
97
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
98
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
99
cpu->isar.id_aa64pfr0 = t;
100
101
/* Replicate the same data to the 32-bit id registers. */
102
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
103
u = FIELD_DP32(u, ID_ISAR6, DP, 1);
104
cpu->isar.id_isar6 = u;
105
106
-#ifdef CONFIG_USER_ONLY
107
- /* We don't set these in system emulation mode for the moment,
108
- * since we don't correctly set the ID registers to advertise them,
109
- * and in some cases they're only available in AArch64 and not AArch32,
110
- * whereas the architecture requires them to be present in both if
111
- * present in either.
112
+ /*
113
+ * FIXME: We do not yet support ARMv8.2-fp16 for AArch32,
114
+ * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
115
+ * but it is also not legal to enable SVE without support for FP16,
116
+ * and enabling SVE in system mode is more useful in the short term.
117
*/
118
- set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
411
+
119
+
412
+ trace_systick_timer_tick();
120
+#ifdef CONFIG_USER_ONLY
413
+
121
/* For usermode -cpu max we can use a larger and more efficient DCZ
414
+ s->control |= SYSTICK_COUNTFLAG;
122
* blocksize since we don't have to follow what the hardware does.
415
+ if (s->control & SYSTICK_TICKINT) {
123
*/
416
+ /* Tell the NVIC to pend the SysTick exception */
124
diff --git a/target/arm/helper.c b/target/arm/helper.c
417
+ qemu_irq_pulse(s->irq);
125
index XXXXXXX..XXXXXXX 100644
418
+ }
126
--- a/target/arm/helper.c
419
+ if (s->reload == 0) {
127
+++ b/target/arm/helper.c
420
+ s->control &= ~SYSTICK_ENABLE;
128
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
421
+ } else {
129
uint32_t changed;
422
+ systick_reload(s, 0);
130
423
+ }
131
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
424
+}
132
- if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
425
+
133
+ if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
426
+static uint64_t systick_read(void *opaque, hwaddr addr, unsigned size)
134
val &= ~FPCR_FZ16;
427
+{
135
}
428
+ SysTickState *s = opaque;
136
429
+ uint32_t val;
137
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
430
+
138
index XXXXXXX..XXXXXXX 100644
431
+ switch (addr) {
139
--- a/target/arm/translate-a64.c
432
+ case 0x0: /* SysTick Control and Status. */
140
+++ b/target/arm/translate-a64.c
433
+ val = s->control;
141
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
434
+ s->control &= ~SYSTICK_COUNTFLAG;
142
break;
435
+ break;
143
case 3:
436
+ case 0x4: /* SysTick Reload Value. */
144
size = MO_16;
437
+ val = s->reload;
145
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
438
+ break;
146
+ if (dc_isar_feature(aa64_fp16, s)) {
439
+ case 0x8: /* SysTick Current Value. */
147
break;
440
+ {
148
}
441
+ int64_t t;
149
/* fallthru */
442
+
150
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
443
+ if ((s->control & SYSTICK_ENABLE) == 0) {
151
break;
444
+ val = 0;
152
case 3:
445
+ break;
153
size = MO_16;
446
+ }
154
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
447
+ t = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
155
+ if (dc_isar_feature(aa64_fp16, s)) {
448
+ if (t >= s->tick) {
156
break;
449
+ val = 0;
157
}
450
+ break;
158
/* fallthru */
451
+ }
159
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
452
+ val = ((s->tick - (t + 1)) / systick_scale(s)) + 1;
160
break;
453
+ /* The interrupt is triggered when the timer reaches zero.
161
case 3:
454
+ However the counter is not reloaded until the next clock
162
sz = MO_16;
455
+ tick. This is a hack to return zero during the first tick. */
163
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
456
+ if (val > s->reload) {
164
+ if (dc_isar_feature(aa64_fp16, s)) {
457
+ val = 0;
165
break;
458
+ }
166
}
459
+ break;
167
/* fallthru */
460
+ }
168
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
461
+ case 0xc: /* SysTick Calibration Value. */
169
handle_fp_1src_double(s, opcode, rd, rn);
462
+ val = 10000;
170
break;
463
+ break;
171
case 3:
464
+ default:
172
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
465
+ val = 0;
173
+ if (!dc_isar_feature(aa64_fp16, s)) {
466
+ qemu_log_mask(LOG_GUEST_ERROR,
174
unallocated_encoding(s);
467
+ "SysTick: Bad read offset 0x%" HWADDR_PRIx "\n", addr);
175
return;
468
+ break;
176
}
469
+ }
177
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
470
+
178
handle_fp_2src_double(s, opcode, rd, rn, rm);
471
+ trace_systick_read(addr, val, size);
179
break;
472
+ return val;
180
case 3:
473
+}
181
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
474
+
182
+ if (!dc_isar_feature(aa64_fp16, s)) {
475
+static void systick_write(void *opaque, hwaddr addr,
183
unallocated_encoding(s);
476
+ uint64_t value, unsigned size)
184
return;
477
+{
185
}
478
+ SysTickState *s = opaque;
186
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
479
+
187
handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
480
+ trace_systick_write(addr, value, size);
188
break;
481
+
189
case 3:
482
+ switch (addr) {
190
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
483
+ case 0x0: /* SysTick Control and Status. */
191
+ if (!dc_isar_feature(aa64_fp16, s)) {
484
+ {
192
unallocated_encoding(s);
485
+ uint32_t oldval = s->control;
193
return;
486
+
194
}
487
+ s->control &= 0xfffffff8;
195
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
488
+ s->control |= value & 7;
196
break;
489
+ if ((oldval ^ value) & SYSTICK_ENABLE) {
197
case 3:
490
+ int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
198
sz = MO_16;
491
+ if (value & SYSTICK_ENABLE) {
199
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
492
+ if (s->tick) {
200
+ if (dc_isar_feature(aa64_fp16, s)) {
493
+ s->tick += now;
201
break;
494
+ timer_mod(s->timer, s->tick);
202
}
495
+ } else {
203
/* fallthru */
496
+ systick_reload(s, 1);
204
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
497
+ }
205
case 1: /* float64 */
498
+ } else {
206
break;
499
+ timer_del(s->timer);
207
case 3: /* float16 */
500
+ s->tick -= now;
208
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
501
+ if (s->tick < 0) {
209
+ if (dc_isar_feature(aa64_fp16, s)) {
502
+ s->tick = 0;
210
break;
503
+ }
211
}
504
+ }
212
/* fallthru */
505
+ } else if ((oldval ^ value) & SYSTICK_CLKSOURCE) {
213
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
506
+ /* This is a hack. Force the timer to be reloaded
214
break;
507
+ when the reference clock is changed. */
215
case 0x6: /* 16-bit float, 32-bit int */
508
+ systick_reload(s, 1);
216
case 0xe: /* 16-bit float, 64-bit int */
509
+ }
217
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
510
+ break;
218
+ if (dc_isar_feature(aa64_fp16, s)) {
511
+ }
219
break;
512
+ case 0x4: /* SysTick Reload Value. */
220
}
513
+ s->reload = value;
221
/* fallthru */
514
+ break;
222
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
515
+ case 0x8: /* SysTick Current Value. Writes reload the timer. */
223
case 1: /* float64 */
516
+ systick_reload(s, 1);
224
break;
517
+ s->control &= ~SYSTICK_COUNTFLAG;
225
case 3: /* float16 */
518
+ break;
226
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
519
+ default:
227
+ if (dc_isar_feature(aa64_fp16, s)) {
520
+ qemu_log_mask(LOG_GUEST_ERROR,
228
break;
521
+ "SysTick: Bad write offset 0x%" HWADDR_PRIx "\n", addr);
229
}
522
+ }
230
/* fallthru */
523
+}
231
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
524
+
232
*/
525
+static const MemoryRegionOps systick_ops = {
233
is_min = extract32(size, 1, 1);
526
+ .read = systick_read,
234
is_fp = true;
527
+ .write = systick_write,
235
- if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
528
+ .endianness = DEVICE_NATIVE_ENDIAN,
236
+ if (!is_u && dc_isar_feature(aa64_fp16, s)) {
529
+ .valid.min_access_size = 4,
237
size = 1;
530
+ .valid.max_access_size = 4,
238
} else if (!is_u || !is_q || extract32(size, 0, 1)) {
531
+};
239
unallocated_encoding(s);
532
+
240
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
533
+static void systick_reset(DeviceState *dev)
241
534
+{
242
if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
535
+ SysTickState *s = SYSTICK(dev);
243
/* Check for FMOV (vector, immediate) - half-precision */
536
+
244
- if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
537
+ s->control = 0;
245
+ if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
538
+ s->reload = 0;
246
unallocated_encoding(s);
539
+ s->tick = 0;
247
return;
540
+ timer_del(s->timer);
248
}
541
+}
249
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
542
+
250
case 0x2f: /* FMINP */
543
+static void systick_instance_init(Object *obj)
251
/* FP op, size[0] is 32 or 64 bit*/
544
+{
252
if (!u) {
545
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
253
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
546
+ SysTickState *s = SYSTICK(obj);
254
+ if (!dc_isar_feature(aa64_fp16, s)) {
547
+
255
unallocated_encoding(s);
548
+ memory_region_init_io(&s->iomem, obj, &systick_ops, s, "systick", 0xe0);
256
return;
549
+ sysbus_init_mmio(sbd, &s->iomem);
257
} else {
550
+ sysbus_init_irq(sbd, &s->irq);
258
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
551
+ s->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, systick_timer_tick, s);
259
size = MO_32;
552
+}
260
} else if (immh & 2) {
553
+
261
size = MO_16;
554
+static const VMStateDescription vmstate_systick = {
262
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
555
+ .name = "armv7m_systick",
263
+ if (!dc_isar_feature(aa64_fp16, s)) {
556
+ .version_id = 1,
264
unallocated_encoding(s);
557
+ .minimum_version_id = 1,
265
return;
558
+ .fields = (VMStateField[]) {
266
}
559
+ VMSTATE_UINT32(control, SysTickState),
267
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
560
+ VMSTATE_UINT32(reload, SysTickState),
268
size = MO_32;
561
+ VMSTATE_INT64(tick, SysTickState),
269
} else if (immh & 0x2) {
562
+ VMSTATE_TIMER_PTR(timer, SysTickState),
270
size = MO_16;
563
+ VMSTATE_END_OF_LIST()
271
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
564
+ }
272
+ if (!dc_isar_feature(aa64_fp16, s)) {
565
+};
273
unallocated_encoding(s);
566
+
274
return;
567
+static void systick_class_init(ObjectClass *klass, void *data)
275
}
568
+{
276
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
569
+ DeviceClass *dc = DEVICE_CLASS(klass);
277
return;
570
+
278
}
571
+ dc->vmsd = &vmstate_systick;
279
572
+ dc->reset = systick_reset;
280
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
573
+}
281
+ if (!dc_isar_feature(aa64_fp16, s)) {
574
+
282
unallocated_encoding(s);
575
+static const TypeInfo armv7m_systick_info = {
283
}
576
+ .name = TYPE_SYSTICK,
284
577
+ .parent = TYPE_SYS_BUS_DEVICE,
285
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
578
+ .instance_init = systick_instance_init,
286
TCGv_ptr fpst;
579
+ .instance_size = sizeof(SysTickState),
287
bool pairwise = false;
580
+ .class_init = systick_class_init,
288
581
+};
289
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
582
+
290
+ if (!dc_isar_feature(aa64_fp16, s)) {
583
+static void armv7m_systick_register_types(void)
291
unallocated_encoding(s);
584
+{
292
return;
585
+ type_register_static(&armv7m_systick_info);
293
}
586
+}
294
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
587
+
295
case 0x1c: /* FCADD, #90 */
588
+type_init(armv7m_systick_register_types)
296
case 0x1e: /* FCADD, #270 */
589
diff --git a/hw/timer/trace-events b/hw/timer/trace-events
297
if (size == 0
590
index XXXXXXX..XXXXXXX 100644
298
- || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
591
--- a/hw/timer/trace-events
299
+ || (size == 1 && !dc_isar_feature(aa64_fp16, s))
592
+++ b/hw/timer/trace-events
300
|| (size == 3 && !is_q)) {
593
@@ -XXX,XX +XXX,XX @@ aspeed_timer_ctrl_pulse_enable(uint8_t i, bool enable) "Timer %" PRIu8 ": %d"
301
unallocated_encoding(s);
594
aspeed_timer_set_ctrl2(uint32_t value) "Value: 0x%" PRIx32
302
return;
595
aspeed_timer_set_value(int timer, int reg, uint32_t value) "Timer %d register %d: 0x%" PRIx32
303
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
596
aspeed_timer_read(uint64_t offset, unsigned size, uint64_t value) "From 0x%" PRIx64 ": of size %u: 0x%" PRIx64
304
bool need_fpst = true;
597
+
305
int rmode;
598
+# hw/timer/armv7m_systick.c
306
599
+systick_reload(void) "systick reload"
307
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
600
+systick_timer_tick(void) "systick timer tick"
308
+ if (!dc_isar_feature(aa64_fp16, s)) {
601
+systick_read(uint64_t addr, uint32_t value, unsigned size) "systick read addr 0x%" PRIx64 " data 0x%" PRIx32 " size %u"
309
unallocated_encoding(s);
602
+systick_write(uint64_t addr, uint32_t value, unsigned size) "systick write addr 0x%" PRIx64 " data 0x%" PRIx32 " size %u"
310
return;
311
}
312
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
313
}
314
break;
315
}
316
- if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
317
+ if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
318
unallocated_encoding(s);
319
return;
320
}
321
diff --git a/target/arm/translate.c b/target/arm/translate.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/translate.c
324
+++ b/target/arm/translate.c
325
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
326
int size = extract32(insn, 20, 1);
327
data = extract32(insn, 23, 2); /* rot */
328
if (!dc_isar_feature(aa32_vcma, s)
329
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
330
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
331
return 1;
332
}
333
fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
334
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
335
int size = extract32(insn, 20, 1);
336
data = extract32(insn, 24, 1); /* rot */
337
if (!dc_isar_feature(aa32_vcma, s)
338
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
339
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
340
return 1;
341
}
342
fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
343
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
344
return 1;
345
}
346
if (size == 0) {
347
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
348
+ if (!dc_isar_feature(aa32_fp16_arith, s)) {
349
return 1;
350
}
351
/* For fp16, rm is just Vm, and index is M. */
603
--
352
--
604
2.7.4
353
2.19.1
605
354
606
355
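For readers following the SysTick split above: the device never stores the down-counter directly, it recomputes it from the QEMU timer deadline on each read. A standalone sketch of that calculation, with an assumed fixed scale standing in for the CLKSOURCE-dependent systick_scale():

    /* Illustrative model of the SysTick "current value" read, not the QEMU
     * device itself.  SCALE_NS stands in for systick_scale().
     */
    #include <stdint.h>
    #include <stdio.h>

    #define SCALE_NS 1000   /* assumed: one SysTick tick per 1000 ns */

    static uint32_t systick_current_value(int64_t deadline_ns, int64_t now_ns,
                                          uint32_t reload)
    {
        int64_t val;

        if (now_ns >= deadline_ns) {
            return 0;                    /* timer already expired */
        }
        val = (deadline_ns - (now_ns + 1)) / SCALE_NS + 1;
        if (val > reload) {
            val = 0;                     /* first tick after a reload reads as 0 */
        }
        return (uint32_t)val;
    }

    int main(void)
    {
        int64_t deadline = 10001 * (int64_t)SCALE_NS;   /* RELOAD = 10000 */

        /* A few ticks after the reload the counter reads 9996. */
        printf("%u\n", (unsigned)systick_current_value(deadline, 5 * SCALE_NS, 10000));
        return 0;
    }

The "+1 then compare against RELOAD" step mirrors the hack in the code above that makes the register read as zero during the first tick after a reload.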
1
Abstract the "load kernel" code out of armv7m_init() into its own
1
For AArch32, exception return happens through certain kinds
2
function. This includes the registration of the CPU reset function,
2
of CPSR write. We don't currently have any CPU_LOG_INT logging
3
to parallel how we handle this for A profile cores.
3
of these events (unlike AArch64, where we log in the ERET
4
instruction). Add some suitable logging.
4
5
5
We make the function public so that boards which choose to
6
This will log exception returns like this:
6
directly instantiate an ARMv7M device object can call it.
7
Exception return from AArch32 hyp to usr PC 0x80100374
8
9
paralleling the existing logging in the exception_return
10
helper for AArch64 exception returns:
11
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
12
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c
13
14
(Note that an AArch32 exception return can only be
15
AArch32->AArch32, never to AArch64.)
7
16
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
18
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
19
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
11
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
12
Message-id: 1487604965-23220-2-git-send-email-peter.maydell@linaro.org
13
---
20
---
14
include/hw/arm/arm.h | 12 ++++++++++++
21
target/arm/internals.h | 18 ++++++++++++++++++
15
hw/arm/armv7m.c | 23 ++++++++++++++++++-----
22
target/arm/helper.c | 10 ++++++++++
16
2 files changed, 30 insertions(+), 5 deletions(-)
23
target/arm/translate.c | 7 +------
24
3 files changed, 29 insertions(+), 6 deletions(-)
17
25
18
diff --git a/include/hw/arm/arm.h b/include/hw/arm/arm.h
26
diff --git a/target/arm/internals.h b/target/arm/internals.h
19
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/arm.h
28
--- a/target/arm/internals.h
21
+++ b/include/hw/arm/arm.h
29
+++ b/target/arm/internals.h
22
@@ -XXX,XX +XXX,XX @@ typedef enum {
30
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
23
/* armv7m.c */
31
}
24
DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
32
}
25
const char *kernel_filename, const char *cpu_model);
33
26
+/**
34
+/**
27
+ * armv7m_load_kernel:
35
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
28
+ * @cpu: CPU
36
+ * @psr: Program Status Register indicating CPU mode
29
+ * @kernel_filename: file to load
30
+ * @mem_size: mem_size: maximum image size to load
31
+ *
37
+ *
32
+ * Load the guest image for an ARMv7M system. This must be called by
38
+ * Returns, for debug logging purposes, a printable representation
33
+ * any ARMv7M board, either directly or via armv7m_init(). (This is
39
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
34
+ * necessary to ensure that the CPU resets correctly on system reset,
40
+ * the low bits of the specified PSR.
35
+ * as well as for kernel loading.)
36
+ */
41
+ */
37
+void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size);
42
+static inline const char *aarch32_mode_name(uint32_t psr)
38
43
+{
39
/*
44
+ static const char cpu_mode_names[16][4] = {
40
* struct used as a parameter of the arm_load_kernel machine init
45
+ "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
41
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
46
+ "???", "???", "hyp", "und", "???", "???", "???", "sys"
42
index XXXXXXX..XXXXXXX 100644
47
+ };
43
--- a/hw/arm/armv7m.c
48
+
44
+++ b/hw/arm/armv7m.c
49
+ return cpu_mode_names[psr & 0xf];
45
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
46
ARMCPU *cpu;
47
CPUARMState *env;
48
DeviceState *nvic;
49
- int image_size;
50
- uint64_t entry;
51
- uint64_t lowaddr;
52
- int big_endian;
53
54
if (cpu_model == NULL) {
55
    cpu_model = "cortex-m3";
56
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
57
qdev_init_nofail(nvic);
58
sysbus_connect_irq(SYS_BUS_DEVICE(nvic), 0,
59
qdev_get_gpio_in(DEVICE(cpu), ARM_CPU_IRQ));
60
+ armv7m_load_kernel(cpu, kernel_filename, mem_size);
61
+ return nvic;
62
+}
50
+}
63
+
51
+
64
+void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
52
#endif
65
+{
53
diff --git a/target/arm/helper.c b/target/arm/helper.c
66
+ int image_size;
54
index XXXXXXX..XXXXXXX 100644
67
+ uint64_t entry;
55
--- a/target/arm/helper.c
68
+ uint64_t lowaddr;
56
+++ b/target/arm/helper.c
69
+ int big_endian;
57
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
70
58
mask |= CPSR_IL;
71
#ifdef TARGET_WORDS_BIGENDIAN
59
val |= CPSR_IL;
72
big_endian = 1;
60
}
73
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
61
+ qemu_log_mask(LOG_GUEST_ERROR,
62
+ "Illegal AArch32 mode switch attempt from %s to %s\n",
63
+ aarch32_mode_name(env->uncached_cpsr),
64
+ aarch32_mode_name(val));
65
} else {
66
+ qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
67
+ write_type == CPSRWriteExceptionReturn ?
68
+ "Exception return from AArch32" :
69
+ "AArch32 mode switch from",
70
+ aarch32_mode_name(env->uncached_cpsr),
71
+ aarch32_mode_name(val), env->regs[15]);
72
switch_mode(env, val & CPSR_M);
74
}
73
}
75
}
74
}
76
75
diff --git a/target/arm/translate.c b/target/arm/translate.c
77
+ /* CPU objects (unlike devices) are not automatically reset on system
76
index XXXXXXX..XXXXXXX 100644
78
+ * reset, so we must always register a handler to do so. Unlike
77
--- a/target/arm/translate.c
79
+ * A-profile CPUs, we don't need to do anything special in the
78
+++ b/target/arm/translate.c
80
+ * handler to arrange that it starts correctly.
79
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
81
+ * This is arguably the wrong place to do this, but it matches the
80
translator_loop(ops, &dc.base, cpu, tb);
82
+ * way A-profile does it. Note that this means that every M profile
83
+ * board must call this function!
84
+ */
85
qemu_register_reset(armv7m_reset, cpu);
86
- return nvic;
87
}
81
}
88
82
89
static Property bitband_properties[] = {
83
-static const char *cpu_mode_names[16] = {
84
- "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
85
- "???", "???", "hyp", "und", "???", "???", "???", "sys"
86
-};
87
-
88
void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
89
int flags)
90
{
91
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
92
psr & CPSR_V ? 'V' : '-',
93
psr & CPSR_T ? 'T' : 'A',
94
ns_status,
95
- cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
96
+ aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
97
}
98
99
if (flags & CPU_DUMP_FPU) {
90
--
100
--
91
2.7.4
101
2.19.1
92
102
93
103
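The logging patch above hinges on aarch32_mode_name(), which is just a table lookup on the low four PSR bits. A self-contained sketch (the table is copied from the patch; the PSR values in main() are made-up examples):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static const char *aarch32_mode_name(uint32_t psr)
    {
        static const char cpu_mode_names[16][4] = {
            "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
            "???", "???", "hyp", "und", "???", "???", "???", "sys"
        };

        return cpu_mode_names[psr & 0xf];
    }

    int main(void)
    {
        uint32_t old_psr = 0x1a;        /* M bits 0b11010 -> hyp */
        uint32_t new_psr = 0x10;        /* M bits 0b10000 -> usr */
        uint32_t pc = 0x80100374;

        printf("Exception return from AArch32 %s to %s PC 0x%" PRIx32 "\n",
               aarch32_mode_name(old_psr), aarch32_mode_name(new_psr), pc);
        return 0;
    }

Run as-is, this prints the same "Exception return from AArch32 hyp to usr PC 0x80100374" line quoted in the commit message.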
1
Make the legacy armv7m_init() function use the newly QOMified
1
The switch_mode() function is defined in target/arm/helper.c and used
2
armv7m object rather than doing everything by hand.
2
only in that file and nowhere else, so we can make it file-local
3
3
rather than global.
4
We can return the armv7m object rather than the NVIC from
5
armv7m_init() because its interface to the rest of the
6
board (GPIOs, etc) is identical.
7
4
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
7
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
11
Message-id: 1487604965-23220-5-git-send-email-peter.maydell@linaro.org
12
---
8
---
13
hw/arm/armv7m.c | 49 ++++++++++++-------------------------------------
9
target/arm/internals.h | 1 -
14
1 file changed, 12 insertions(+), 37 deletions(-)
10
target/arm/helper.c | 6 ++++--
11
2 files changed, 4 insertions(+), 3 deletions(-)
15
12
16
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
13
diff --git a/target/arm/internals.h b/target/arm/internals.h
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/arm/armv7m.c
15
--- a/target/arm/internals.h
19
+++ b/hw/arm/armv7m.c
16
+++ b/target/arm/internals.h
20
@@ -XXX,XX +XXX,XX @@ static void bitband_init(Object *obj)
17
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
21
sysbus_init_mmio(dev, &s->iomem);
18
g_assert_not_reached();
22
}
19
}
23
20
24
-static void armv7m_bitband_init(void)
21
-void switch_mode(CPUARMState *, int);
25
-{
22
void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
26
- DeviceState *dev;
23
void arm_translate_init(void);
27
-
24
28
- dev = qdev_create(NULL, TYPE_BITBAND);
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
29
- qdev_prop_set_uint32(dev, "base", 0x20000000);
26
index XXXXXXX..XXXXXXX 100644
30
- qdev_init_nofail(dev);
27
--- a/target/arm/helper.c
31
- sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, 0x22000000);
28
+++ b/target/arm/helper.c
32
-
29
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
33
- dev = qdev_create(NULL, TYPE_BITBAND);
30
V8M_SAttributes *sattrs);
34
- qdev_prop_set_uint32(dev, "base", 0x40000000);
31
#endif
35
- qdev_init_nofail(dev);
32
36
- sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, 0x42000000);
33
+static void switch_mode(CPUARMState *env, int mode);
37
-}
34
+
38
-
35
static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
39
/* Board init. */
40
41
static const hwaddr bitband_input_addr[ARMV7M_NUM_BITBANDS] = {
42
@@ -XXX,XX +XXX,XX @@ static void armv7m_reset(void *opaque)
43
44
/* Init CPU and memory for a v7-M based board.
45
mem_size is in bytes.
46
- Returns the NVIC array. */
47
+ Returns the ARMv7M device. */
48
49
DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
50
const char *kernel_filename, const char *cpu_model)
51
{
36
{
52
- ARMCPU *cpu;
37
int nregs;
53
- CPUARMState *env;
38
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
54
- DeviceState *nvic;
39
return 0;
55
+ DeviceState *armv7m;
56
57
if (cpu_model == NULL) {
58
-    cpu_model = "cortex-m3";
59
+ cpu_model = "cortex-m3";
60
}
61
- cpu = cpu_arm_init(cpu_model);
62
- if (cpu == NULL) {
63
- fprintf(stderr, "Unable to find CPU definition\n");
64
- exit(1);
65
- }
66
- env = &cpu->env;
67
-
68
- armv7m_bitband_init();
69
-
70
- nvic = qdev_create(NULL, "armv7m_nvic");
71
- qdev_prop_set_uint32(nvic, "num-irq", num_irq);
72
- env->nvic = nvic;
73
- qdev_init_nofail(nvic);
74
- sysbus_connect_irq(SYS_BUS_DEVICE(nvic), 0,
75
- qdev_get_gpio_in(DEVICE(cpu), ARM_CPU_IRQ));
76
- armv7m_load_kernel(cpu, kernel_filename, mem_size);
77
- return nvic;
78
+
79
+ armv7m = qdev_create(NULL, "armv7m");
80
+ qdev_prop_set_uint32(armv7m, "num-irq", num_irq);
81
+ qdev_prop_set_string(armv7m, "cpu-model", cpu_model);
82
+ /* This will exit with an error if the user passed us a bad cpu_model */
83
+ qdev_init_nofail(armv7m);
84
+
85
+ armv7m_load_kernel(ARM_CPU(first_cpu), kernel_filename, mem_size);
86
+ return armv7m;
87
}
40
}
88
41
89
void armv7m_load_kernel(ARMCPU *cpu, const char *kernel_filename, int mem_size)
42
-void switch_mode(CPUARMState *env, int mode)
43
+static void switch_mode(CPUARMState *env, int mode)
44
{
45
ARMCPU *cpu = arm_env_get_cpu(env);
46
47
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
48
49
#else
50
51
-void switch_mode(CPUARMState *env, int mode)
52
+static void switch_mode(CPUARMState *env, int mode)
53
{
54
int old_mode;
55
int i;
90
--
56
--
91
2.7.4
57
2.19.1
92
58
93
59
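The switch_mode() change above is the standard C idiom for making a helper file-local: add static to the definition and provide a forward declaration so callers that appear earlier in the file still compile. A tiny sketch with hypothetical names:

    #include <stdio.h>

    static void switch_widget_mode(int mode);   /* forward declaration */

    static void use_widget(void)
    {
        switch_widget_mode(3);                  /* caller defined before the body */
    }

    static void switch_widget_mode(int mode)    /* now file-local */
    {
        printf("switching to mode %d\n", mode);
    }

    int main(void)
    {
        use_widget();
        return 0;
    }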
New patch
1
1
The HCR.FB virtualization configuration register bit requests that
2
TLB maintenance, branch predictor invalidate-all and icache
3
invalidate-all operations performed in NS EL1 should be upgraded
4
from "local CPU only to "broadcast within Inner Shareable domain".
5
For QEMU we NOP the branch predictor and icache operations, so
6
we only need to upgrade the TLB invalidates:
7
AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
8
ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
9
AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
10
TLBI VALE1, TLBI VAALE1
11
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
15
---
16
target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
17
1 file changed, 116 insertions(+), 75 deletions(-)
18
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
raw_write(env, ri, value);
25
}
26
27
-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
- uint64_t value)
29
-{
30
- /* Invalidate all (TLBIALL) */
31
- ARMCPU *cpu = arm_env_get_cpu(env);
32
-
33
- tlb_flush(CPU(cpu));
34
-}
35
-
36
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
37
- uint64_t value)
38
-{
39
- /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
40
- ARMCPU *cpu = arm_env_get_cpu(env);
41
-
42
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
43
-}
44
-
45
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
46
- uint64_t value)
47
-{
48
- /* Invalidate by ASID (TLBIASID) */
49
- ARMCPU *cpu = arm_env_get_cpu(env);
50
-
51
- tlb_flush(CPU(cpu));
52
-}
53
-
54
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
55
- uint64_t value)
56
-{
57
- /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
58
- ARMCPU *cpu = arm_env_get_cpu(env);
59
-
60
- tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
61
-}
62
-
63
/* IS variants of TLB operations must affect all cores */
64
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
65
uint64_t value)
66
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
67
tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
68
}
69
70
+/*
71
+ * Non-IS variants of TLB operations are upgraded to
72
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
73
+ * force broadcast of these operations.
74
+ */
75
+static bool tlb_force_broadcast(CPUARMState *env)
76
+{
77
+ return (env->cp15.hcr_el2 & HCR_FB) &&
78
+ arm_current_el(env) == 1 && !arm_is_secure_below_el3(env);
79
+}
80
+
81
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
82
+ uint64_t value)
83
+{
84
+ /* Invalidate all (TLBIALL) */
85
+ ARMCPU *cpu = arm_env_get_cpu(env);
86
+
87
+ if (tlb_force_broadcast(env)) {
88
+ tlbiall_is_write(env, NULL, value);
89
+ return;
90
+ }
91
+
92
+ tlb_flush(CPU(cpu));
93
+}
94
+
95
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
96
+ uint64_t value)
97
+{
98
+ /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
99
+ ARMCPU *cpu = arm_env_get_cpu(env);
100
+
101
+ if (tlb_force_broadcast(env)) {
102
+ tlbimva_is_write(env, NULL, value);
103
+ return;
104
+ }
105
+
106
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
107
+}
108
+
109
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
+ uint64_t value)
111
+{
112
+ /* Invalidate by ASID (TLBIASID) */
113
+ ARMCPU *cpu = arm_env_get_cpu(env);
114
+
115
+ if (tlb_force_broadcast(env)) {
116
+ tlbiasid_is_write(env, NULL, value);
117
+ return;
118
+ }
119
+
120
+ tlb_flush(CPU(cpu));
121
+}
122
+
123
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
+ uint64_t value)
125
+{
126
+ /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
127
+ ARMCPU *cpu = arm_env_get_cpu(env);
128
+
129
+ if (tlb_force_broadcast(env)) {
130
+ tlbimvaa_is_write(env, NULL, value);
131
+ return;
132
+ }
133
+
134
+ tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
135
+}
136
+
137
static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
138
uint64_t value)
139
{
140
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
141
* Page D4-1736 (DDI0487A.b)
142
*/
143
144
-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
145
- uint64_t value)
146
-{
147
- CPUState *cs = ENV_GET_CPU(env);
148
-
149
- if (arm_is_secure_below_el3(env)) {
150
- tlb_flush_by_mmuidx(cs,
151
- ARMMMUIdxBit_S1SE1 |
152
- ARMMMUIdxBit_S1SE0);
153
- } else {
154
- tlb_flush_by_mmuidx(cs,
155
- ARMMMUIdxBit_S12NSE1 |
156
- ARMMMUIdxBit_S12NSE0);
157
- }
158
-}
159
-
160
static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
uint64_t value)
162
{
163
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
164
}
165
}
166
167
+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
168
+ uint64_t value)
169
+{
170
+ CPUState *cs = ENV_GET_CPU(env);
171
+
172
+ if (tlb_force_broadcast(env)) {
173
+ tlbi_aa64_vmalle1is_write(env, NULL, value);
174
+ return;
175
+ }
176
+
177
+ if (arm_is_secure_below_el3(env)) {
178
+ tlb_flush_by_mmuidx(cs,
179
+ ARMMMUIdxBit_S1SE1 |
180
+ ARMMMUIdxBit_S1SE0);
181
+ } else {
182
+ tlb_flush_by_mmuidx(cs,
183
+ ARMMMUIdxBit_S12NSE1 |
184
+ ARMMMUIdxBit_S12NSE0);
185
+ }
186
+}
187
+
188
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
189
uint64_t value)
190
{
191
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
192
tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
193
}
194
195
-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
- uint64_t value)
197
-{
198
- /* Invalidate by VA, EL1&0 (AArch64 version).
199
- * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
200
- * since we don't support flush-for-specific-ASID-only or
201
- * flush-last-level-only.
202
- */
203
- ARMCPU *cpu = arm_env_get_cpu(env);
204
- CPUState *cs = CPU(cpu);
205
- uint64_t pageaddr = sextract64(value << 12, 0, 56);
206
-
207
- if (arm_is_secure_below_el3(env)) {
208
- tlb_flush_page_by_mmuidx(cs, pageaddr,
209
- ARMMMUIdxBit_S1SE1 |
210
- ARMMMUIdxBit_S1SE0);
211
- } else {
212
- tlb_flush_page_by_mmuidx(cs, pageaddr,
213
- ARMMMUIdxBit_S12NSE1 |
214
- ARMMMUIdxBit_S12NSE0);
215
- }
216
-}
217
-
218
static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
219
uint64_t value)
220
{
221
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
222
}
223
}
224
225
+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
226
+ uint64_t value)
227
+{
228
+ /* Invalidate by VA, EL1&0 (AArch64 version).
229
+ * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
230
+ * since we don't support flush-for-specific-ASID-only or
231
+ * flush-last-level-only.
232
+ */
233
+ ARMCPU *cpu = arm_env_get_cpu(env);
234
+ CPUState *cs = CPU(cpu);
235
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
236
+
237
+ if (tlb_force_broadcast(env)) {
238
+ tlbi_aa64_vae1is_write(env, NULL, value);
239
+ return;
240
+ }
241
+
242
+ if (arm_is_secure_below_el3(env)) {
243
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
244
+ ARMMMUIdxBit_S1SE1 |
245
+ ARMMMUIdxBit_S1SE0);
246
+ } else {
247
+ tlb_flush_page_by_mmuidx(cs, pageaddr,
248
+ ARMMMUIdxBit_S12NSE1 |
249
+ ARMMMUIdxBit_S12NSE0);
250
+ }
251
+}
252
+
253
static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
254
uint64_t value)
255
{
256
--
257
2.19.1
258
259
1
Make the ARMv7M object take a memory region link which it uses
1
The HCR.DC virtualization configuration register bit has the
2
to wire up the bitband devices rather than having them always put
2
following effects:
3
themselves in the system address space.
3
* SCTLR.M behaves as if it is 0 for all purposes except
4
direct reads of the bit
5
* HCR.VM behaves as if it is 1 for all purposes except
6
direct reads of the bit
7
* the memory type produced by the first stage of the EL1&EL0
8
translation regime is Normal Non-Shareable,
9
Inner Write-Back Read-Allocate Write-Allocate,
10
Outer Write-Back Read-Allocate Write-Allocate.
11
12
Implement this behaviour.
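
As a rough summary of how these effects combine for the NS EL1&0 regime, here is an illustrative sketch (not part of the patch; the helper names are invented):

    /* Illustrative only: the "effective" values once HCR.DC is honoured. */
    static bool eff_stage1_mmu_enabled(uint64_t hcr, uint64_t sctlr)
    {
        /* SCTLR.M still reads back unchanged, but behaves as 0 */
        return !(hcr & HCR_DC) && (sctlr & SCTLR_M);
    }

    static bool eff_stage2_enabled(uint64_t hcr)
    {
        /* HCR.VM still reads back unchanged, but behaves as 1 */
        return (hcr & (HCR_VM | HCR_DC)) != 0;
    }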
4
13
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 1487604965-23220-6-git-send-email-peter.maydell@linaro.org
16
Message-id: 20181012144235.19646-5-peter.maydell@linaro.org
8
---
17
---
9
include/hw/arm/armv7m.h | 10 ++++++++++
18
target/arm/helper.c | 23 +++++++++++++++++++++--
10
hw/arm/armv7m.c | 23 ++++++++++++++++++++++-
19
1 file changed, 21 insertions(+), 2 deletions(-)
11
2 files changed, 32 insertions(+), 1 deletion(-)
12
20
13
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
15
--- a/include/hw/arm/armv7m.h
23
--- a/target/arm/helper.c
16
+++ b/include/hw/arm/armv7m.h
24
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ typedef struct {
25
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
18
* + Named GPIO output SYSRESETREQ: signalled for guest AIRCR.SYSRESETREQ
26
* * The Non-secure TTBCR.EAE bit is set to 1
19
* + Property "cpu-model": CPU model to instantiate
27
* * The implementation includes EL2, and the value of HCR.VM is 1
20
* + Property "num-irq": number of external IRQ lines
28
*
21
+ * + Property "memory": MemoryRegion defining the physical address space
29
+ * (Note that HCR.DC makes HCR.VM behave as if it is 1.)
22
+ * that CPU accesses see. (The NVIC, bitbanding and other CPU-internal
30
+ *
23
+ * devices will be automatically layered on top of this view.)
31
* ATS1Hx always uses the 64bit format (not supported yet).
24
*/
32
*/
25
typedef struct ARMv7MState {
33
format64 = arm_s1_regime_using_lpae_format(env, mmu_idx);
26
/*< private >*/
34
27
@@ -XXX,XX +XXX,XX @@ typedef struct ARMv7MState {
35
if (arm_feature(env, ARM_FEATURE_EL2)) {
28
BitBandState bitband[ARMV7M_NUM_BITBANDS];
36
if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
29
ARMCPU *cpu;
37
- format64 |= env->cp15.hcr_el2 & HCR_VM;
30
38
+ format64 |= env->cp15.hcr_el2 & (HCR_VM | HCR_DC);
31
+ /* MemoryRegion we pass to the CPU, with our devices layered on
39
} else {
32
+ * top of the ones the board provides in board_memory.
40
format64 |= arm_current_el(env) == 2;
33
+ */
41
}
34
+ MemoryRegion container;
42
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
35
+
43
}
36
/* Properties */
44
37
char *cpu_model;
45
if (mmu_idx == ARMMMUIdx_S2NS) {
38
+ /* MemoryRegion the board provides to us (with its devices, RAM, etc) */
46
- return (env->cp15.hcr_el2 & HCR_VM) == 0;
39
+ MemoryRegion *board_memory;
47
+ /* HCR.DC means HCR.VM behaves as 1 */
40
} ARMv7MState;
48
+ return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
41
49
}
42
#endif
50
43
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
51
if (env->cp15.hcr_el2 & HCR_TGE) {
44
index XXXXXXX..XXXXXXX 100644
52
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
45
--- a/hw/arm/armv7m.c
53
}
46
+++ b/hw/arm/armv7m.c
54
}
47
@@ -XXX,XX +XXX,XX @@
55
48
#include "elf.h"
56
+ if ((env->cp15.hcr_el2 & HCR_DC) &&
49
#include "sysemu/qtest.h"
57
+ (mmu_idx == ARMMMUIdx_S1NSE0 || mmu_idx == ARMMMUIdx_S1NSE1)) {
50
#include "qemu/error-report.h"
58
+ /* HCR.DC means SCTLR_EL1.M behaves as 0 */
51
+#include "exec/address-spaces.h"
59
+ return true;
52
53
/* Bitbanded IO. Each word corresponds to a single bit. */
54
55
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)
56
57
/* Can't init the cpu here, we don't yet know which model to use */
58
59
+ object_property_add_link(obj, "memory",
60
+ TYPE_MEMORY_REGION,
61
+ (Object **)&s->board_memory,
62
+ qdev_prop_allow_set_link_before_realize,
63
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
64
+ &error_abort);
65
+ memory_region_init(&s->container, obj, "armv7m-container", UINT64_MAX);
66
+
67
object_initialize(&s->nvic, sizeof(s->nvic), "armv7m_nvic");
68
qdev_set_parent_bus(DEVICE(&s->nvic), sysbus_get_default());
69
object_property_add_alias(obj, "num-irq",
70
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
71
const char *typename;
72
CPUClass *cc;
73
74
+ if (!s->board_memory) {
75
+ error_setg(errp, "memory property was not set");
76
+ return;
77
+ }
60
+ }
78
+
61
+
79
+ memory_region_add_subregion_overlap(&s->container, 0, s->board_memory, -1);
62
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
80
+
81
cpustr = g_strsplit(s->cpu_model, ",", 2);
82
83
oc = cpu_class_by_name(TYPE_ARM_CPU, cpustr[0]);
84
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
85
return;
86
}
87
88
+ object_property_set_link(OBJECT(s->cpu), OBJECT(&s->container), "memory",
89
+ &error_abort);
90
object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
91
if (err != NULL) {
92
error_propagate(errp, err);
93
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
94
return;
95
}
96
97
- sysbus_mmio_map(sbd, 0, bitband_output_addr[i]);
98
+ memory_region_add_subregion(&s->container, bitband_output_addr[i],
99
+ sysbus_mmio_get_region(sbd, 0));
100
}
101
}
63
}
102
64
103
@@ -XXX,XX +XXX,XX @@ DeviceState *armv7m_init(MemoryRegion *system_memory, int mem_size, int num_irq,
65
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
104
armv7m = qdev_create(NULL, "armv7m");
66
105
qdev_prop_set_uint32(armv7m, "num-irq", num_irq);
67
/* Combine the S1 and S2 cache attributes, if needed */
106
qdev_prop_set_string(armv7m, "cpu-model", cpu_model);
68
if (!ret && cacheattrs != NULL) {
107
+ object_property_set_link(OBJECT(armv7m), OBJECT(get_system_memory()),
69
+ if (env->cp15.hcr_el2 & HCR_DC) {
108
+ "memory", &error_abort);
70
+ /*
109
/* This will exit with an error if the user passed us a bad cpu_model */
71
+ * HCR.DC forces the first stage attributes to
110
qdev_init_nofail(armv7m);
72
+ * Normal Non-Shareable,
73
+ * Inner Write-Back Read-Allocate Write-Allocate,
74
+ * Outer Write-Back Read-Allocate Write-Allocate.
75
+ */
76
+ cacheattrs->attrs = 0xff;
77
+ cacheattrs->shareability = 0;
78
+ }
79
*cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
80
}
111
81
112
--
82
--
113
2.7.4
83
2.19.1
114
84
115
85
diff view generated by jsdifflib
New patch
1
The A/I/F bits in ISR_EL1 should track the virtual interrupt
2
status, not the physical interrupt status, if the associated
3
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
4
always showing the physical interrupt status.
1
5
6
We don't currently implement anything to do with external
7
aborts, so this applies only to the I and F bits (though it
8
ought to be possible for the outer guest to present a virtual
9
external abort to the inner guest, even if QEMU doesn't
10
emulate physical external aborts, so there is missing
11
functionality in this area).
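
The selection rule being implemented boils down to the following (illustrative helper, not the patch's code; the F bit works the same way with FMO/VFIQ/FIQ):

    static bool isr_i_pending(bool imo, bool virq_pending, bool irq_pending)
    {
        /* With IMO set, ISR_EL1.I reflects the virtual IRQ line;
         * otherwise it reflects the physical one. */
        return imo ? virq_pending : irq_pending;
    }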
12
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
16
---
17
target/arm/helper.c | 22 ++++++++++++++++++----
18
1 file changed, 18 insertions(+), 4 deletions(-)
19
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
25
CPUState *cs = ENV_GET_CPU(env);
26
uint64_t ret = 0;
27
28
- if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
29
- ret |= CPSR_I;
30
+ if (arm_hcr_el2_imo(env)) {
31
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
32
+ ret |= CPSR_I;
33
+ }
34
+ } else {
35
+ if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
36
+ ret |= CPSR_I;
37
+ }
38
}
39
- if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
40
- ret |= CPSR_F;
41
+
42
+ if (arm_hcr_el2_fmo(env)) {
43
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
44
+ ret |= CPSR_F;
45
+ }
46
+ } else {
47
+ if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
48
+ ret |= CPSR_F;
49
+ }
50
}
51
+
52
/* External aborts are not possible in QEMU so A bit is always clear */
53
return ret;
54
}
55
--
56
2.19.1
57
58
1
Instead of the bitband device doing a cpu_physical_memory_read/write,
1
The HCR_EL2 VI and VF bits are supposed to track whether there is
2
make it take a MemoryRegion which specifies where it should be
2
a pending virtual IRQ or virtual FIQ. For QEMU we store the
3
accessing, and use address_space_read/write to access the
3
pending VIRQ/VFIQ status in cs->interrupt_request, so this means:
4
corresponding AddressSpace.
4
* if the register is read we must get these bit values from
5
5
cs->interrupt_request
6
Since this entails pretty much a rewrite, convert away from
6
* if the register is written then we must write the bit
7
old_mmio in the process.
7
values back into cs->interrupt_request
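
The invariant after this change, sketched as an illustrative helper (not the patch itself): cp15.hcr_el2 never holds VI/VF, and a read reconstructs them from the pending-interrupt state:

    static uint64_t effective_hcr(uint64_t stored_hcr, bool virq, bool vfiq)
    {
        uint64_t v = stored_hcr & ~(HCR_VI | HCR_VF);
        if (virq) {
            v |= HCR_VI;
        }
        if (vfiq) {
            v |= HCR_VF;
        }
        return v;
    }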
8
8
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 1487604965-23220-8-git-send-email-peter.maydell@linaro.org
11
Message-id: 20181012144235.19646-7-peter.maydell@linaro.org
12
---
12
---
13
include/hw/arm/armv7m.h | 2 +
13
target/arm/helper.c | 47 +++++++++++++++++++++++++++++++++++++++++----
14
hw/arm/armv7m.c | 166 +++++++++++++++++++++++-------------------------
14
1 file changed, 43 insertions(+), 4 deletions(-)
15
2 files changed, 81 insertions(+), 87 deletions(-)
16
15
17
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
16
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
19
--- a/include/hw/arm/armv7m.h
18
--- a/target/arm/helper.c
20
+++ b/include/hw/arm/armv7m.h
19
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@ typedef struct {
20
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
22
SysBusDevice parent_obj;
21
static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
23
/*< public >*/
24
25
+ AddressSpace *source_as;
26
MemoryRegion iomem;
27
uint32_t base;
28
+ MemoryRegion *source_memory;
29
} BitBandState;
30
31
#define TYPE_ARMV7M "armv7m"
32
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/arm/armv7m.c
35
+++ b/hw/arm/armv7m.c
36
@@ -XXX,XX +XXX,XX @@
37
/* Bitbanded IO. Each word corresponds to a single bit. */
38
39
/* Get the byte address of the real memory for a bitband access. */
40
-static inline uint32_t bitband_addr(void * opaque, uint32_t addr)
41
+static inline hwaddr bitband_addr(BitBandState *s, hwaddr offset)
42
{
22
{
43
- uint32_t res;
23
ARMCPU *cpu = arm_env_get_cpu(env);
44
-
24
+ CPUState *cs = ENV_GET_CPU(env);
45
- res = *(uint32_t *)opaque;
25
uint64_t valid_mask = HCR_MASK;
46
- res |= (addr & 0x1ffffff) >> 5;
26
47
- return res;
27
if (arm_feature(env, ARM_FEATURE_EL3)) {
48
-
28
@@ -XXX,XX +XXX,XX @@ static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
49
-}
29
/* Clear RES0 bits. */
50
-
30
value &= valid_mask;
51
-static uint32_t bitband_readb(void *opaque, hwaddr offset)
31
52
-{
32
+ /*
53
- uint8_t v;
33
+ * VI and VF are kept in cs->interrupt_request. Modifying that
54
- cpu_physical_memory_read(bitband_addr(opaque, offset), &v, 1);
34
+ * requires that we have the iothread lock, which is done by
55
- return (v & (1 << ((offset >> 2) & 7))) != 0;
35
+ * marking the reginfo structs as ARM_CP_IO.
56
-}
36
+ * Note that if a write to HCR pends a VIRQ or VFIQ it is never
57
-
37
+ * possible for it to be taken immediately, because VIRQ and
58
-static void bitband_writeb(void *opaque, hwaddr offset,
38
+ * VFIQ are masked unless running at EL0 or EL1, and HCR
59
- uint32_t value)
39
+ * can only be written at EL2.
60
-{
40
+ */
61
- uint32_t addr;
41
+ g_assert(qemu_mutex_iothread_locked());
62
- uint8_t mask;
42
+ if (value & HCR_VI) {
63
- uint8_t v;
43
+ cs->interrupt_request |= CPU_INTERRUPT_VIRQ;
64
- addr = bitband_addr(opaque, offset);
44
+ } else {
65
- mask = (1 << ((offset >> 2) & 7));
45
+ cs->interrupt_request &= ~CPU_INTERRUPT_VIRQ;
66
- cpu_physical_memory_read(addr, &v, 1);
46
+ }
67
- if (value & 1)
47
+ if (value & HCR_VF) {
68
- v |= mask;
48
+ cs->interrupt_request |= CPU_INTERRUPT_VFIQ;
69
- else
49
+ } else {
70
- v &= ~mask;
50
+ cs->interrupt_request &= ~CPU_INTERRUPT_VFIQ;
71
- cpu_physical_memory_write(addr, &v, 1);
51
+ }
72
-}
52
+ value &= ~(HCR_VI | HCR_VF);
73
-
53
+
74
-static uint32_t bitband_readw(void *opaque, hwaddr offset)
54
/* These bits change the MMU setup:
75
-{
55
* HCR_VM enables stage 2 translation
76
- uint32_t addr;
56
* HCR_PTW forbids certain page-table setups
77
- uint16_t mask;
57
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
78
- uint16_t v;
58
hcr_write(env, NULL, value);
79
- addr = bitband_addr(opaque, offset) & ~1;
80
- mask = (1 << ((offset >> 2) & 15));
81
- mask = tswap16(mask);
82
- cpu_physical_memory_read(addr, &v, 2);
83
- return (v & mask) != 0;
84
-}
85
-
86
-static void bitband_writew(void *opaque, hwaddr offset,
87
- uint32_t value)
88
-{
89
- uint32_t addr;
90
- uint16_t mask;
91
- uint16_t v;
92
- addr = bitband_addr(opaque, offset) & ~1;
93
- mask = (1 << ((offset >> 2) & 15));
94
- mask = tswap16(mask);
95
- cpu_physical_memory_read(addr, &v, 2);
96
- if (value & 1)
97
- v |= mask;
98
- else
99
- v &= ~mask;
100
- cpu_physical_memory_write(addr, &v, 2);
101
+ return s->base | (offset & 0x1ffffff) >> 5;
102
}
59
}
103
60
104
-static uint32_t bitband_readl(void *opaque, hwaddr offset)
61
+static uint64_t hcr_read(CPUARMState *env, const ARMCPRegInfo *ri)
105
+static MemTxResult bitband_read(void *opaque, hwaddr offset,
62
+{
106
+ uint64_t *data, unsigned size, MemTxAttrs attrs)
63
+ /* The VI and VF bits live in cs->interrupt_request */
107
{
64
+ uint64_t ret = env->cp15.hcr_el2 & ~(HCR_VI | HCR_VF);
108
- uint32_t addr;
65
+ CPUState *cs = ENV_GET_CPU(env);
109
- uint32_t mask;
110
- uint32_t v;
111
- addr = bitband_addr(opaque, offset) & ~3;
112
- mask = (1 << ((offset >> 2) & 31));
113
- mask = tswap32(mask);
114
- cpu_physical_memory_read(addr, &v, 4);
115
- return (v & mask) != 0;
116
+ BitBandState *s = opaque;
117
+ uint8_t buf[4];
118
+ MemTxResult res;
119
+ int bitpos, bit;
120
+ hwaddr addr;
121
+
66
+
122
+ assert(size <= 4);
67
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
123
+
68
+ ret |= HCR_VI;
124
+ /* Find address in underlying memory and round down to multiple of size */
125
+ addr = bitband_addr(s, offset) & (-size);
126
+ res = address_space_read(s->source_as, addr, attrs, buf, size);
127
+ if (res) {
128
+ return res;
129
+ }
69
+ }
130
+ /* Bit position in the N bytes read... */
70
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
131
+ bitpos = (offset >> 2) & ((size * 8) - 1);
71
+ ret |= HCR_VF;
132
+ /* ...converted to byte in buffer and bit in byte */
133
+ bit = (buf[bitpos >> 3] >> (bitpos & 7)) & 1;
134
+ *data = bit;
135
+ return MEMTX_OK;
136
}
137
138
-static void bitband_writel(void *opaque, hwaddr offset,
139
- uint32_t value)
140
+static MemTxResult bitband_write(void *opaque, hwaddr offset, uint64_t value,
141
+ unsigned size, MemTxAttrs attrs)
142
{
143
- uint32_t addr;
144
- uint32_t mask;
145
- uint32_t v;
146
- addr = bitband_addr(opaque, offset) & ~3;
147
- mask = (1 << ((offset >> 2) & 31));
148
- mask = tswap32(mask);
149
- cpu_physical_memory_read(addr, &v, 4);
150
- if (value & 1)
151
- v |= mask;
152
- else
153
- v &= ~mask;
154
- cpu_physical_memory_write(addr, &v, 4);
155
+ BitBandState *s = opaque;
156
+ uint8_t buf[4];
157
+ MemTxResult res;
158
+ int bitpos, bit;
159
+ hwaddr addr;
160
+
161
+ assert(size <= 4);
162
+
163
+ /* Find address in underlying memory and round down to multiple of size */
164
+ addr = bitband_addr(s, offset) & (-size);
165
+ res = address_space_read(s->source_as, addr, attrs, buf, size);
166
+ if (res) {
167
+ return res;
168
+ }
72
+ }
169
+ /* Bit position in the N bytes read... */
73
+ return ret;
170
+ bitpos = (offset >> 2) & ((size * 8) - 1);
171
+ /* ...converted to byte in buffer and bit in byte */
172
+ bit = 1 << (bitpos & 7);
173
+ if (value & 1) {
174
+ buf[bitpos >> 3] |= bit;
175
+ } else {
176
+ buf[bitpos >> 3] &= ~bit;
177
+ }
178
+ return address_space_write(s->source_as, addr, attrs, buf, size);
179
}
180
181
static const MemoryRegionOps bitband_ops = {
182
- .old_mmio = {
183
- .read = { bitband_readb, bitband_readw, bitband_readl, },
184
- .write = { bitband_writeb, bitband_writew, bitband_writel, },
185
- },
186
+ .read_with_attrs = bitband_read,
187
+ .write_with_attrs = bitband_write,
188
.endianness = DEVICE_NATIVE_ENDIAN,
189
+ .impl.min_access_size = 1,
190
+ .impl.max_access_size = 4,
191
+ .valid.min_access_size = 1,
192
+ .valid.max_access_size = 4,
193
};
194
195
static void bitband_init(Object *obj)
196
@@ -XXX,XX +XXX,XX @@ static void bitband_init(Object *obj)
197
BitBandState *s = BITBAND(obj);
198
SysBusDevice *dev = SYS_BUS_DEVICE(obj);
199
200
- memory_region_init_io(&s->iomem, obj, &bitband_ops, &s->base,
201
+ object_property_add_link(obj, "source-memory",
202
+ TYPE_MEMORY_REGION,
203
+ (Object **)&s->source_memory,
204
+ qdev_prop_allow_set_link_before_realize,
205
+ OBJ_PROP_LINK_UNREF_ON_RELEASE,
206
+ &error_abort);
207
+ memory_region_init_io(&s->iomem, obj, &bitband_ops, s,
208
"bitband", 0x02000000);
209
sysbus_init_mmio(dev, &s->iomem);
210
}
211
212
+static void bitband_realize(DeviceState *dev, Error **errp)
213
+{
214
+ BitBandState *s = BITBAND(dev);
215
+
216
+ if (!s->source_memory) {
217
+ error_setg(errp, "source-memory property not set");
218
+ return;
219
+ }
220
+
221
+ s->source_as = address_space_init_shareable(s->source_memory,
222
+ "bitband-source");
223
+}
74
+}
224
+
75
+
225
/* Board init. */
76
static const ARMCPRegInfo el2_cp_reginfo[] = {
226
77
{ .name = "HCR_EL2", .state = ARM_CP_STATE_AA64,
227
static const hwaddr bitband_input_addr[ARMV7M_NUM_BITBANDS] = {
78
+ .type = ARM_CP_IO,
228
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
79
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
229
error_propagate(errp, err);
80
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
230
return;
81
- .writefn = hcr_write },
231
}
82
+ .writefn = hcr_write, .readfn = hcr_read },
232
+ object_property_set_link(obj, OBJECT(s->board_memory),
83
{ .name = "HCR", .state = ARM_CP_STATE_AA32,
233
+ "source-memory", &error_abort);
84
- .type = ARM_CP_ALIAS,
234
object_property_set_bool(obj, true, "realized", &err);
85
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
235
if (err != NULL) {
86
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
236
error_propagate(errp, err);
87
.access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
237
@@ -XXX,XX +XXX,XX @@ static void bitband_class_init(ObjectClass *klass, void *data)
88
- .writefn = hcr_writelow },
238
{
89
+ .writefn = hcr_writelow, .readfn = hcr_read },
239
DeviceClass *dc = DEVICE_CLASS(klass);
90
{ .name = "ELR_EL2", .state = ARM_CP_STATE_AA64,
240
91
.type = ARM_CP_ALIAS,
241
+ dc->realize = bitband_realize;
92
.opc0 = 3, .opc1 = 4, .crn = 4, .crm = 0, .opc2 = 1,
242
dc->props = bitband_properties;
93
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
243
}
94
244
95
static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
96
{ .name = "HCR2", .state = ARM_CP_STATE_AA32,
97
- .type = ARM_CP_ALIAS,
98
+ .type = ARM_CP_ALIAS | ARM_CP_IO,
99
.cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
100
.access = PL2_RW,
101
.fieldoffset = offsetofhigh32(CPUARMState, cp15.hcr_el2),
245
--
102
--
246
2.7.4
103
2.19.1
247
104
248
105
New patch
1
If the HCR_EL2 PTW virtualization configuration register bit
2
is set, then this means that a stage 2 Permission fault must
3
be generated if a stage 1 translation table access is made
4
to an address that is mapped as Device memory in stage 2.
5
Implement this.
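
The Device-memory test relies on the 8-bit MAIR-style attribute byte that get_phys_addr_lpae() returns; a sketch of the check (illustrative helper name, not from the patch):

    static bool s2_attrs_are_device(uint8_t attrs)
    {
        /* An attribute byte with a zero upper nibble encodes one of the
         * Device memory types, which is what HCR_EL2.PTW must reject
         * for stage 1 translation table walks. */
        return (attrs & 0xf0) == 0;
    }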
1
6
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20181012144235.19646-8-peter.maydell@linaro.org
10
---
11
target/arm/helper.c | 21 ++++++++++++++++++++-
12
1 file changed, 20 insertions(+), 1 deletion(-)
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
19
hwaddr s2pa;
20
int s2prot;
21
int ret;
22
+ ARMCacheAttrs cacheattrs = {};
23
+ ARMCacheAttrs *pcacheattrs = NULL;
24
+
25
+ if (env->cp15.hcr_el2 & HCR_PTW) {
26
+ /*
27
+ * PTW means we must fault if this S1 walk touches S2 Device
28
+ * memory; otherwise we don't care about the attributes and can
29
+ * save the S2 translation the effort of computing them.
30
+ */
31
+ pcacheattrs = &cacheattrs;
32
+ }
33
34
ret = get_phys_addr_lpae(env, addr, 0, ARMMMUIdx_S2NS, &s2pa,
35
- &txattrs, &s2prot, &s2size, fi, NULL);
36
+ &txattrs, &s2prot, &s2size, fi, pcacheattrs);
37
if (ret) {
38
assert(fi->type != ARMFault_None);
39
fi->s2addr = addr;
40
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
41
fi->s1ptw = true;
42
return ~0;
43
}
44
+ if (pcacheattrs && (pcacheattrs->attrs & 0xf0) == 0) {
45
+ /* Access was to Device memory: generate Permission fault */
46
+ fi->type = ARMFault_Permission;
47
+ fi->s2addr = addr;
48
+ fi->stage2 = true;
49
+ fi->s1ptw = true;
50
+ return ~0;
51
+ }
52
addr = s2pa;
53
}
54
return addr;
55
--
56
2.19.1
57
58
1
The NVIC is a core v7M device that exists for all v7M CPUs;
1
Create and use a utility function to extract the EC field
2
put it under a CONFIG_ARM_V7M rather than hiding it under
2
from a syndrome, rather than open-coding the shift.
3
CONFIG_STELLARIS.
4
5
(We'll use CONFIG_ARM_V7M for the SysTick device too
6
when we split it out of the NVIC.)
7
3
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
11
Message-id: 1487604965-23220-9-git-send-email-peter.maydell@linaro.org
12
---
7
---
13
hw/intc/Makefile.objs | 2 +-
8
target/arm/internals.h | 5 +++++
14
default-configs/arm-softmmu.mak | 2 ++
9
target/arm/helper.c | 4 ++--
15
2 files changed, 3 insertions(+), 1 deletion(-)
10
target/arm/kvm64.c | 2 +-
11
target/arm/op_helper.c | 2 +-
12
4 files changed, 9 insertions(+), 4 deletions(-)
16
13
17
diff --git a/hw/intc/Makefile.objs b/hw/intc/Makefile.objs
14
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/Makefile.objs
16
--- a/target/arm/internals.h
20
+++ b/hw/intc/Makefile.objs
17
+++ b/target/arm/internals.h
21
@@ -XXX,XX +XXX,XX @@ obj-$(CONFIG_APIC) += apic.o apic_common.o
18
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
22
obj-$(CONFIG_ARM_GIC_KVM) += arm_gic_kvm.o
19
#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
23
obj-$(call land,$(CONFIG_ARM_GIC_KVM),$(TARGET_AARCH64)) += arm_gicv3_kvm.o
20
#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
24
obj-$(call land,$(CONFIG_ARM_GIC_KVM),$(TARGET_AARCH64)) += arm_gicv3_its_kvm.o
21
25
-obj-$(CONFIG_STELLARIS) += armv7m_nvic.o
22
+static inline uint32_t syn_get_ec(uint32_t syn)
26
+obj-$(CONFIG_ARM_V7M) += armv7m_nvic.o
23
+{
27
obj-$(CONFIG_EXYNOS4) += exynos4210_gic.o exynos4210_combiner.o
24
+ return syn >> ARM_EL_EC_SHIFT;
28
obj-$(CONFIG_GRLIB) += grlib_irqmp.o
25
+}
29
obj-$(CONFIG_IOAPIC) += ioapic.o
26
+
30
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
27
/* Utility functions for constructing various kinds of syndrome value.
28
* Note that in general we follow the AArch64 syndrome values; in a
29
* few cases the value in HSR for exceptions taken to AArch32 Hyp
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
32
--- a/default-configs/arm-softmmu.mak
32
--- a/target/arm/helper.c
33
+++ b/default-configs/arm-softmmu.mak
33
+++ b/target/arm/helper.c
34
@@ -XXX,XX +XXX,XX @@ CONFIG_ARM11MPCORE=y
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
35
CONFIG_A9MPCORE=y
35
uint32_t moe;
36
CONFIG_A15MPCORE=y
36
37
37
/* If this is a debug exception we must update the DBGDSCR.MOE bits */
38
+CONFIG_ARM_V7M=y
38
- switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
39
+
39
+ switch (syn_get_ec(env->exception.syndrome)) {
40
CONFIG_ARM_GIC=y
40
case EC_BREAKPOINT:
41
CONFIG_ARM_GIC_KVM=$(CONFIG_KVM)
41
case EC_BREAKPOINT_SAME_EL:
42
CONFIG_ARM_TIMER=y
42
moe = 1;
43
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
44
if (qemu_loglevel_mask(CPU_LOG_INT)
45
&& !excp_is_internal(cs->exception_index)) {
46
qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
47
- env->exception.syndrome >> ARM_EL_EC_SHIFT,
48
+ syn_get_ec(env->exception.syndrome),
49
env->exception.syndrome);
50
}
51
52
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/target/arm/kvm64.c
55
+++ b/target/arm/kvm64.c
56
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
57
58
bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
59
{
60
- int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
61
+ int hsr_ec = syn_get_ec(debug_exit->hsr);
62
ARMCPU *cpu = ARM_CPU(cs);
63
CPUClass *cc = CPU_GET_CLASS(cs);
64
CPUARMState *env = &cpu->env;
65
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/op_helper.c
68
+++ b/target/arm/op_helper.c
69
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
70
* (see DDI0478C.a D1.10.4)
71
*/
72
target_el = 2;
73
- if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
74
+ if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
75
syndrome = syn_uncategorized();
76
}
77
}
43
--
78
--
44
2.7.4
79
2.19.1
45
80
46
81
1
Make the NVIC device expose a memory region for its users
1
For the v7 version of the Arm architecture, the IL bit in
2
to map, rather than mapping itself into the system memory
2
syndrome register values where the field is not valid was
3
space on realize, and get the one user (the ARMv7M object)
3
defined to be UNK/SBZP. In v8 this is RES1, which is what
4
to do this.
4
QEMU currently implements. Handle the desired v7 behaviour
5
by squashing the IL bit for the affected cases:
6
* EC == EC_UNCATEGORIZED
7
* prefetch aborts
8
* data aborts where ISV is 0
9
10
(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
11
section G7.2.70, "illegal state exception", can't happen
12
on a v7 CPU.)
13
14
This deals with a corner case noted in a comment.
5
15
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 1487604965-23220-7-git-send-email-peter.maydell@linaro.org
18
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
9
---
19
---
10
hw/arm/armv7m.c | 7 ++++++-
20
target/arm/internals.h | 7 ++-----
11
hw/intc/armv7m_nvic.c | 7 ++-----
21
target/arm/helper.c | 13 +++++++++++++
12
2 files changed, 8 insertions(+), 6 deletions(-)
22
2 files changed, 15 insertions(+), 5 deletions(-)
13
23
14
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
24
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/armv7m.c
26
--- a/target/arm/internals.h
17
+++ b/hw/arm/armv7m.c
27
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ static void armv7m_instance_init(Object *obj)
28
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
19
static void armv7m_realize(DeviceState *dev, Error **errp)
29
/* Utility functions for constructing various kinds of syndrome value.
30
* Note that in general we follow the AArch64 syndrome values; in a
31
* few cases the value in HSR for exceptions taken to AArch32 Hyp
32
- * mode differs slightly, so if we ever implemented Hyp mode then the
33
- * syndrome value would need some massaging on exception entry.
34
- * (One example of this is that AArch64 defaults to IL bit set for
35
- * exceptions which don't specifically indicate information about the
36
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
37
+ * mode differs slightly, and we fix this up when populating HSR in
38
+ * arm_cpu_do_interrupt_aarch32_hyp().
39
*/
40
static inline uint32_t syn_uncategorized(void)
20
{
41
{
21
ARMv7MState *s = ARMV7M(dev);
42
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
+ SysBusDevice *sbd;
23
Error *err = NULL;
24
int i;
25
char **cpustr;
26
@@ -XXX,XX +XXX,XX @@ static void armv7m_realize(DeviceState *dev, Error **errp)
27
qdev_pass_gpios(DEVICE(&s->nvic), dev, "SYSRESETREQ");
28
29
/* Wire the NVIC up to the CPU */
30
- sysbus_connect_irq(SYS_BUS_DEVICE(&s->nvic), 0,
31
+ sbd = SYS_BUS_DEVICE(&s->nvic);
32
+ sysbus_connect_irq(sbd, 0,
33
qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
34
s->cpu->env.nvic = &s->nvic;
35
36
+ memory_region_add_subregion(&s->container, 0xe000e000,
37
+ sysbus_mmio_get_region(sbd, 0));
38
+
39
for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
40
Object *obj = OBJECT(&s->bitband[i]);
41
SysBusDevice *sbd = SYS_BUS_DEVICE(&s->bitband[i]);
42
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
43
index XXXXXXX..XXXXXXX 100644
43
index XXXXXXX..XXXXXXX 100644
44
--- a/hw/intc/armv7m_nvic.c
44
--- a/target/arm/helper.c
45
+++ b/hw/intc/armv7m_nvic.c
45
+++ b/target/arm/helper.c
46
@@ -XXX,XX +XXX,XX @@
46
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
47
#include "hw/arm/arm.h"
47
}
48
#include "hw/arm/armv7m_nvic.h"
48
49
#include "target/arm/cpu.h"
49
if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
50
-#include "exec/address-spaces.h"
50
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
51
#include "qemu/log.h"
51
+ /*
52
#include "trace.h"
52
+ * QEMU syndrome values are v8-style. v7 has the IL bit
53
53
+ * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
54
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
54
+ * If this is a v7 CPU, squash the IL bit in those cases.
55
"nvic_sysregs", 0x1000);
55
+ */
56
memory_region_add_subregion(&s->container, 0, &s->sysregmem);
56
+ if (cs->exception_index == EXCP_PREFETCH_ABORT ||
57
57
+ (cs->exception_index == EXCP_DATA_ABORT &&
58
- /* Map the whole thing into system memory at the location required
58
+ !(env->exception.syndrome & ARM_EL_ISV)) ||
59
- * by the v7M architecture.
59
+ syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
60
- */
60
+ env->exception.syndrome &= ~ARM_EL_IL;
61
- memory_region_add_subregion(get_system_memory(), 0xe000e000, &s->container);
61
+ }
62
+ sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->container);
62
+ }
63
+
63
env->cp15.esr_el[2] = env->exception.syndrome;
64
s->systick.timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, systick_timer_tick, s);
64
}
65
}
66
65
67
--
66
--
68
2.7.4
67
2.19.1
69
68
70
69
1
Instead of qdev_set_parent_bus() silently doing the wrong
1
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
2
thing if it's handed a device that's already on a bus,
2
provided in HSR has more information than is reported to AArch64.
3
have it remove the device from the old bus and add it to
3
Specifically, there are extra fields TA and coproc which indicate
4
the new one. This is useful for the raspi2 sdcard.
4
whether the trapped instruction was FP or SIMD. Add this extra
5
information to the syndromes we construct, and mask it out when
6
taking the exception to AArch64.
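
For reference (stated here as an assumption, not taken from the patch): TA sits in bit 5 and coproc in bits [3:0] of the AArch32 HSR encoding, and the low 20 ISS bits are RES0 in the AArch64 view of this trap, so the conversion is a single mask:

    static uint32_t syn_fp_simd_for_aarch64(uint32_t syn32)
    {
        /* Illustrative helper: drop the AArch32-only TA/coproc fields */
        return syn32 & ~MAKE_64BIT_MASK(0, 20);
    }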
5
7
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 1488293711-14195-2-git-send-email-peter.maydell@linaro.org
10
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
9
---
11
---
10
hw/core/qdev.c | 14 ++++++++++++++
12
target/arm/internals.h | 14 +++++++++++++-
11
1 file changed, 14 insertions(+)
13
target/arm/helper.c | 9 +++++++++
14
target/arm/translate.c | 8 ++++----
15
3 files changed, 26 insertions(+), 5 deletions(-)
12
16
13
diff --git a/hw/core/qdev.c b/hw/core/qdev.c
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/core/qdev.c
19
--- a/target/arm/internals.h
16
+++ b/hw/core/qdev.c
20
+++ b/target/arm/internals.h
17
@@ -XXX,XX +XXX,XX @@ static void bus_add_child(BusState *bus, DeviceState *child)
21
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
18
22
* few cases the value in HSR for exceptions taken to AArch32 Hyp
19
void qdev_set_parent_bus(DeviceState *dev, BusState *bus)
23
* mode differs slightly, and we fix this up when populating HSR in
24
* arm_cpu_do_interrupt_aarch32_hyp().
25
+ * The exception is FP/SIMD access traps -- these report extra information
26
+ * when taking an exception to AArch32. For those we include the extra coproc
27
+ * and TA fields, and mask them out when taking the exception to AArch64.
28
*/
29
static inline uint32_t syn_uncategorized(void)
20
{
30
{
21
+ bool replugging = dev->parent_bus != NULL;
31
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
32
33
static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
34
{
35
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
36
return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
37
| (is_16bit ? 0 : ARM_EL_IL)
38
- | (cv << 24) | (cond << 20);
39
+ | (cv << 24) | (cond << 20) | 0xa;
40
+}
22
+
41
+
23
+ if (replugging) {
42
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
24
+ /* Keep a reference to the device while it's not plugged into
43
+{
25
+ * any bus, to avoid it potentially evaporating when it is
44
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
26
+ * dereffed in bus_remove_child().
45
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
27
+ */
46
+ | (is_16bit ? 0 : ARM_EL_IL)
28
+ object_ref(OBJECT(dev));
47
+ | (cv << 24) | (cond << 20) | (1 << 5);
29
+ bus_remove_child(dev->parent_bus, dev);
30
+ object_unref(OBJECT(dev->parent_bus));
31
+ }
32
dev->parent_bus = bus;
33
object_ref(OBJECT(bus));
34
bus_add_child(bus, dev);
35
+ if (replugging) {
36
+ object_unref(OBJECT(dev));
37
+ }
38
}
48
}
39
49
40
/* Create a new device. This only initializes the device state
50
static inline uint32_t syn_sve_access_trap(void)
51
diff --git a/target/arm/helper.c b/target/arm/helper.c
52
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/helper.c
54
+++ b/target/arm/helper.c
55
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
56
case EXCP_HVC:
57
case EXCP_HYP_TRAP:
58
case EXCP_SMC:
59
+ if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
60
+ /*
61
+ * QEMU internal FP/SIMD syndromes from AArch32 include the
62
+ * TA and coproc fields which are only exposed if the exception
63
+ * is taken to AArch32 Hyp mode. Mask them out to get a valid
64
+ * AArch64 format syndrome.
65
+ */
66
+ env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
67
+ }
68
env->cp15.esr_el[new_el] = env->exception.syndrome;
69
break;
70
case EXCP_IRQ:
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate.c
74
+++ b/target/arm/translate.c
75
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
76
*/
77
if (s->fp_excp_el) {
78
gen_exception_insn(s, 4, EXCP_UDEF,
79
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
80
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
81
return 0;
82
}
83
84
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
85
*/
86
if (s->fp_excp_el) {
87
gen_exception_insn(s, 4, EXCP_UDEF,
88
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
89
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
90
return 0;
91
}
92
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
94
95
if (s->fp_excp_el) {
96
gen_exception_insn(s, 4, EXCP_UDEF,
97
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
98
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
99
return 0;
100
}
101
if (!s->vfp_enabled) {
102
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
103
104
if (s->fp_excp_el) {
105
gen_exception_insn(s, 4, EXCP_UDEF,
106
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
107
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
108
return 0;
109
}
110
if (!s->vfp_enabled) {
41
--
111
--
42
2.7.4
112
2.19.1
43
113
44
114
New patch
1
From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>
1
2
3
"The Image must be placed text_offset bytes from a 2MB aligned base
4
address anywhere in usable system RAM and called there."
5
6
For the virt board, we write our startup bootloader at the very
7
bottom of RAM, so that bit can't be used for the image. To avoid
8
overlap in case the image requests to be loaded at an offset
9
smaller than our bootloader, we increment the load offset to the
10
next 2MB.
11
12
This fixes a boot failure for Xen AArch64.
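
A worked example of the rule (the offsets here are assumptions, not taken from the patch): an Image header asking for text_offset 0x1000 would land inside the bootloader region, so it is loaded at 2 MiB + 0x1000; the traditional Linux text_offset of 0x80000 already clears BOOTLOADER_MAX_SIZE and is left alone. In code form:

    static hwaddr adjusted_text_offset(hwaddr text_offset)
    {
        /* Sketch of the check the patch adds to load_aarch64_image() */
        return text_offset < BOOTLOADER_MAX_SIZE ? text_offset + 2 * MiB
                                                 : text_offset;
    }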
13
14
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
15
Tested-by: Andre Przywara <andre.przywara@arm.com>
16
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
17
[PMM: Rephrased a comment a bit]
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
---
21
hw/arm/boot.c | 18 ++++++++++++++++++
22
1 file changed, 18 insertions(+)
23
24
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/arm/boot.c
27
+++ b/hw/arm/boot.c
28
@@ -XXX,XX +XXX,XX @@
29
#include "qemu/config-file.h"
30
#include "qemu/option.h"
31
#include "exec/address-spaces.h"
32
+#include "qemu/units.h"
33
34
/* Kernel boot protocol is specified in the kernel docs
35
* Documentation/arm/Booting and Documentation/arm64/booting.txt
36
@@ -XXX,XX +XXX,XX @@
37
#define ARM64_TEXT_OFFSET_OFFSET 8
38
#define ARM64_MAGIC_OFFSET 56
39
40
+#define BOOTLOADER_MAX_SIZE (4 * KiB)
41
+
42
AddressSpace *arm_boot_address_space(ARMCPU *cpu,
43
const struct arm_boot_info *info)
44
{
45
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
46
code[i] = tswap32(insn);
47
}
48
49
+ assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
50
+
51
rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
52
53
g_free(code);
54
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
55
memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
56
if (hdrvals[1] != 0) {
57
kernel_load_offset = le64_to_cpu(hdrvals[0]);
58
+
59
+ /*
60
+ * We write our startup "bootloader" at the very bottom of RAM,
61
+ * so that bit can't be used for the image. Luckily the Image
62
+ * format specification is that the image requests only an offset
63
+ * from a 2MB boundary, not an absolute load address. So if the
64
+ * image requests an offset that might mean it overlaps with the
65
+ * bootloader, we can just load it starting at 2MB+offset rather
66
+ * than 0MB + offset.
67
+ */
68
+ if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
69
+ kernel_load_offset += 2 * MiB;
70
+ }
71
}
72
}
73
74
--
75
2.19.1
76
77
1
The local variable 'nvic' in stm32f205_soc_realize() no longer
1
From: Richard Henderson <rth@twiddle.net>
2
holds a direct pointer to the NVIC device; it is a pointer to
3
the ARMv7M container object. Rename it 'armv7m' accordingly.
4
2
3
This can reduce the number of opcodes required for certain
4
complex forms of load-multiple (e.g. ld4.16b).
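
The saving comes from how tcg_gen_addi_i64() expands a non-zero immediate (a constant temp plus an add) every time it is called, versus paying for one constant temp per instruction; a minimal sketch of the two forms, assuming that expansion:

    /* Before: per element, the immediate is rematerialised every time */
    tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);

    /* After: one constant temp per instruction... */
    TCGv_i64 tcg_ebytes = tcg_const_i64(ebytes);
    /* ...and per element only the add remains */
    tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);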
5
6
Signed-off-by: Richard Henderson <rth@twiddle.net>
7
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Message-id: 1487604965-23220-12-git-send-email-peter.maydell@linaro.org
10
---
10
---
11
hw/arm/stm32f205_soc.c | 18 +++++++++---------
11
target/arm/translate-a64.c | 12 ++++++++----
12
1 file changed, 9 insertions(+), 9 deletions(-)
12
1 file changed, 8 insertions(+), 4 deletions(-)
13
13
14
diff --git a/hw/arm/stm32f205_soc.c b/hw/arm/stm32f205_soc.c
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/stm32f205_soc.c
16
--- a/target/arm/translate-a64.c
17
+++ b/hw/arm/stm32f205_soc.c
17
+++ b/target/arm/translate-a64.c
18
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_initfn(Object *obj)
18
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
19
static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
19
bool is_store = !extract32(insn, 22, 1);
20
{
20
bool is_postidx = extract32(insn, 23, 1);
21
STM32F205State *s = STM32F205_SOC(dev_soc);
21
bool is_q = extract32(insn, 30, 1);
22
- DeviceState *dev, *nvic;
22
- TCGv_i64 tcg_addr, tcg_rn;
23
+ DeviceState *dev, *armv7m;
23
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
24
SysBusDevice *busdev;
24
25
Error *err = NULL;
25
int ebytes = 1 << size;
26
int i;
26
int elements = (is_q ? 128 : 64) / (8 << size);
27
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
27
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
28
vmstate_register_ram_global(sram);
28
tcg_rn = cpu_reg_sp(s, rn);
29
memory_region_add_subregion(system_memory, SRAM_BASE_ADDRESS, sram);
29
tcg_addr = tcg_temp_new_i64();
30
30
tcg_gen_mov_i64(tcg_addr, tcg_rn);
31
- nvic = DEVICE(&s->armv7m);
31
+ tcg_ebytes = tcg_const_i64(ebytes);
32
- qdev_prop_set_uint32(nvic, "num-irq", 96);
32
33
- qdev_prop_set_string(nvic, "cpu-model", s->cpu_model);
33
for (r = 0; r < rpt; r++) {
34
+ armv7m = DEVICE(&s->armv7m);
34
int e;
35
+ qdev_prop_set_uint32(armv7m, "num-irq", 96);
35
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
36
+ qdev_prop_set_string(armv7m, "cpu-model", s->cpu_model);
36
clear_vec_high(s, is_q, tt);
37
object_property_set_link(OBJECT(&s->armv7m), OBJECT(get_system_memory()),
37
}
38
"memory", &error_abort);
38
}
39
object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);
39
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
40
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
40
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
41
tt = (tt + 1) % 32;
42
}
43
}
44
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
45
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
46
}
41
}
47
}
42
busdev = SYS_BUS_DEVICE(dev);
48
+ tcg_temp_free_i64(tcg_ebytes);
43
sysbus_mmio_map(busdev, 0, 0x40013800);
49
tcg_temp_free_i64(tcg_addr);
44
- sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(nvic, 71));
50
}
45
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, 71));
51
46
52
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
47
/* Attach UART (uses USART registers) and USART controllers */
53
bool replicate = false;
48
for (i = 0; i < STM_NUM_USARTS; i++) {
54
int index = is_q << 3 | S << 2 | size;
49
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
55
int ebytes, xs;
56
- TCGv_i64 tcg_addr, tcg_rn;
57
+ TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
58
59
switch (scale) {
60
case 3:
61
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
62
tcg_rn = cpu_reg_sp(s, rn);
63
tcg_addr = tcg_temp_new_i64();
64
tcg_gen_mov_i64(tcg_addr, tcg_rn);
65
+ tcg_ebytes = tcg_const_i64(ebytes);
66
67
for (xs = 0; xs < selem; xs++) {
68
if (replicate) {
69
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
70
do_vec_st(s, rt, index, tcg_addr, scale);
71
}
50
}
72
}
51
busdev = SYS_BUS_DEVICE(dev);
73
- tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
52
sysbus_mmio_map(busdev, 0, usart_addr[i]);
74
+ tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
53
- sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(nvic, usart_irq[i]));
75
rt = (rt + 1) % 32;
54
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, usart_irq[i]));
55
}
76
}
56
77
57
/* Timer 2 to 5 */
78
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
58
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
79
tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
59
}
80
}
60
busdev = SYS_BUS_DEVICE(dev);
61
sysbus_mmio_map(busdev, 0, timer_addr[i]);
62
- sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(nvic, timer_irq[i]));
63
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, timer_irq[i]));
64
}
81
}
65
82
+ tcg_temp_free_i64(tcg_ebytes);
66
/* ADC 1 to 3 */
83
tcg_temp_free_i64(tcg_addr);
67
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
68
return;
69
}
70
qdev_connect_gpio_out(DEVICE(s->adc_irqs), 0,
71
- qdev_get_gpio_in(nvic, ADC_IRQ));
72
+ qdev_get_gpio_in(armv7m, ADC_IRQ));
73
74
for (i = 0; i < STM_NUM_ADCS; i++) {
75
dev = DEVICE(&(s->adc[i]));
76
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
77
}
78
busdev = SYS_BUS_DEVICE(dev);
79
sysbus_mmio_map(busdev, 0, spi_addr[i]);
80
- sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(nvic, spi_irq[i]));
81
+ sysbus_connect_irq(busdev, 0, qdev_get_gpio_in(armv7m, spi_irq[i]));
82
}
83
}
84
}
84
85
85
--
86
--
86
2.7.4
87
2.19.1
87
88
88
89
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is done generically in translator_loop.
4
5
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/translate-a64.c | 1 -
13
target/arm/translate.c | 1 -
14
2 files changed, 2 deletions(-)
15
16
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-a64.c
19
+++ b/target/arm/translate-a64.c
20
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
21
22
static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
23
{
24
- tcg_clear_temp_count();
25
}
26
27
static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
28
diff --git a/target/arm/translate.c b/target/arm/translate.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate.c
31
+++ b/target/arm/translate.c
32
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
33
tcg_gen_movi_i32(tmp, 0);
34
store_cpu_field(tmp, condexec_bits);
35
}
36
- tcg_clear_temp_count();
37
}
38
39
static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
40
--
41
2.19.1
42
43
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
target/arm/translate-a64.c | 28 +++-------------------------
9
1 file changed, 3 insertions(+), 25 deletions(-)
10
11
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate-a64.c
14
+++ b/target/arm/translate-a64.c
15
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
16
for (xs = 0; xs < selem; xs++) {
17
if (replicate) {
18
/* Load and replicate to all elements */
19
- uint64_t mulconst;
20
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
21
22
tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
23
get_mem_index(s), s->be_data + scale);
24
- switch (scale) {
25
- case 0:
26
- mulconst = 0x0101010101010101ULL;
27
- break;
28
- case 1:
29
- mulconst = 0x0001000100010001ULL;
30
- break;
31
- case 2:
32
- mulconst = 0x0000000100000001ULL;
33
- break;
34
- case 3:
35
- mulconst = 0;
36
- break;
37
- default:
38
- g_assert_not_reached();
39
- }
40
- if (mulconst) {
41
- tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
42
- }
43
- write_vec_element(s, tcg_tmp, rt, 0, MO_64);
44
- if (is_q) {
45
- write_vec_element(s, tcg_tmp, rt, 1, MO_64);
46
- }
47
+ tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
48
+ (is_q + 1) * 8, vec_full_reg_size(s),
49
+ tcg_tmp);
50
tcg_temp_free_i64(tcg_tmp);
51
- clear_vec_high(s, is_q, rt);
52
} else {
53
/* Load/store one element per register */
54
if (is_load) {
55
--
56
2.19.1
57
58
diff view generated by jsdifflib
1
From: Clement Deschamps <clement.deschamps@antfield.fr>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This adds the bcm2835_sdhost and bcm2835_gpio to the BCM2835 platform.
3
For a sequence of loads or stores from a single register,
4
little-endian operations can be promoted to an 8-byte op.
5
This can reduce the number of operations by a factor of 8.
4
6
5
For supporting the SD controller selection (alternate function of GPIOs
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
48-53), the bcm2835_gpio now exposes an sdbus.
8
Message-id: 20181011205206.3552-5-richard.henderson@linaro.org
7
It also has a link to both the sdbus of sdhci and sdhost controllers,
8
and the card is reparented from one bus to another when the alternate
9
function of GPIOs 48-53 is modified.
10
11
Signed-off-by: Clement Deschamps <clement.deschamps@antfield.fr>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 1488293711-14195-5-git-send-email-peter.maydell@linaro.org
15
Message-id: 20170224164021.9066-5-clement.deschamps@antfield.fr
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
11
---
19
include/hw/arm/bcm2835_peripherals.h | 4 ++++
12
target/arm/translate-a64.c | 66 +++++++++++++++++++++++---------------
20
hw/arm/bcm2835_peripherals.c | 43 ++++++++++++++++++++++++++++++++++--
13
1 file changed, 40 insertions(+), 26 deletions(-)
21
2 files changed, 45 insertions(+), 2 deletions(-)
22
14
23
diff --git a/include/hw/arm/bcm2835_peripherals.h b/include/hw/arm/bcm2835_peripherals.h
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
24
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/arm/bcm2835_peripherals.h
17
--- a/target/arm/translate-a64.c
26
+++ b/include/hw/arm/bcm2835_peripherals.h
18
+++ b/target/arm/translate-a64.c
27
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
28
#include "hw/misc/bcm2835_rng.h"
20
29
#include "hw/misc/bcm2835_mbox.h"
21
/* Store from vector register to memory */
30
#include "hw/sd/sdhci.h"
22
static void do_vec_st(DisasContext *s, int srcidx, int element,
31
+#include "hw/sd/bcm2835_sdhost.h"
23
- TCGv_i64 tcg_addr, int size)
32
+#include "hw/gpio/bcm2835_gpio.h"
24
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
33
25
{
34
#define TYPE_BCM2835_PERIPHERALS "bcm2835-peripherals"
26
- TCGMemOp memop = s->be_data + size;
35
#define BCM2835_PERIPHERALS(obj) \
27
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
36
@@ -XXX,XX +XXX,XX @@ typedef struct BCM2835PeripheralState {
28
37
BCM2835RngState rng;
29
read_vec_element(s, tcg_tmp, srcidx, element, size);
38
BCM2835MboxState mboxes;
30
- tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
39
SDHCIState sdhci;
31
+ tcg_gen_qemu_st_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
40
+ BCM2835SDHostState sdhost;
32
41
+ BCM2835GpioState gpio;
33
tcg_temp_free_i64(tcg_tmp);
42
} BCM2835PeripheralState;
43
44
#endif /* BCM2835_PERIPHERALS_H */
45
diff --git a/hw/arm/bcm2835_peripherals.c b/hw/arm/bcm2835_peripherals.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/arm/bcm2835_peripherals.c
48
+++ b/hw/arm/bcm2835_peripherals.c
49
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
50
object_property_add_child(obj, "sdhci", OBJECT(&s->sdhci), NULL);
51
qdev_set_parent_bus(DEVICE(&s->sdhci), sysbus_get_default());
52
53
+ /* SDHOST */
54
+ object_initialize(&s->sdhost, sizeof(s->sdhost), TYPE_BCM2835_SDHOST);
55
+ object_property_add_child(obj, "sdhost", OBJECT(&s->sdhost), NULL);
56
+ qdev_set_parent_bus(DEVICE(&s->sdhost), sysbus_get_default());
57
+
58
/* DMA Channels */
59
object_initialize(&s->dma, sizeof(s->dma), TYPE_BCM2835_DMA);
60
object_property_add_child(obj, "dma", OBJECT(&s->dma), NULL);
61
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_init(Object *obj)
62
63
object_property_add_const_link(OBJECT(&s->dma), "dma-mr",
64
OBJECT(&s->gpu_bus_mr), &error_abort);
65
+
66
+ /* GPIO */
67
+ object_initialize(&s->gpio, sizeof(s->gpio), TYPE_BCM2835_GPIO);
68
+ object_property_add_child(obj, "gpio", OBJECT(&s->gpio), NULL);
69
+ qdev_set_parent_bus(DEVICE(&s->gpio), sysbus_get_default());
70
+
71
+ object_property_add_const_link(OBJECT(&s->gpio), "sdbus-sdhci",
72
+ OBJECT(&s->sdhci.sdbus), &error_abort);
73
+ object_property_add_const_link(OBJECT(&s->gpio), "sdbus-sdhost",
74
+ OBJECT(&s->sdhost.sdbus), &error_abort);
75
}
34
}
76
35
77
static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
36
/* Load from memory to vector register */
78
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
37
static void do_vec_ld(DisasContext *s, int destidx, int element,
79
sysbus_connect_irq(SYS_BUS_DEVICE(&s->sdhci), 0,
38
- TCGv_i64 tcg_addr, int size)
80
qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
39
+ TCGv_i64 tcg_addr, int size, TCGMemOp endian)
81
INTERRUPT_ARASANSDIO));
40
{
82
- object_property_add_alias(OBJECT(s), "sd-bus", OBJECT(&s->sdhci), "sd-bus",
41
- TCGMemOp memop = s->be_data + size;
83
- &err);
42
TCGv_i64 tcg_tmp = tcg_temp_new_i64();
84
+
43
85
+ /* SDHOST */
44
- tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), memop);
86
+ object_property_set_bool(OBJECT(&s->sdhost), true, "realized", &err);
45
+ tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr, get_mem_index(s), endian | size);
87
if (err) {
46
write_vec_element(s, tcg_tmp, destidx, element, size);
88
error_propagate(errp, err);
47
89
return;
48
tcg_temp_free_i64(tcg_tmp);
49
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
50
bool is_postidx = extract32(insn, 23, 1);
51
bool is_q = extract32(insn, 30, 1);
52
TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
53
+ TCGMemOp endian = s->be_data;
54
55
- int ebytes = 1 << size;
56
- int elements = (is_q ? 128 : 64) / (8 << size);
57
+ int ebytes; /* bytes per element */
58
+ int elements; /* elements per vector */
59
int rpt; /* num iterations */
60
int selem; /* structure elements */
61
int r;
62
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
63
gen_check_sp_alignment(s);
90
}
64
}
91
65
92
+ memory_region_add_subregion(&s->peri_mr, MMCI0_OFFSET,
66
+ /* For our purposes, bytes are always little-endian. */
93
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->sdhost), 0));
67
+ if (size == 0) {
94
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->sdhost), 0,
68
+ endian = MO_LE;
95
+ qdev_get_gpio_in_named(DEVICE(&s->ic), BCM2835_IC_GPU_IRQ,
96
+ INTERRUPT_SDIO));
97
+
98
/* DMA Channels */
99
object_property_set_bool(OBJECT(&s->dma), true, "realized", &err);
100
if (err) {
101
@@ -XXX,XX +XXX,XX @@ static void bcm2835_peripherals_realize(DeviceState *dev, Error **errp)
102
BCM2835_IC_GPU_IRQ,
103
INTERRUPT_DMA0 + n));
104
}
105
+
106
+ /* GPIO */
107
+ object_property_set_bool(OBJECT(&s->gpio), true, "realized", &err);
108
+ if (err) {
109
+ error_propagate(errp, err);
110
+ return;
111
+ }
69
+ }
112
+
70
+
113
+ memory_region_add_subregion(&s->peri_mr, GPIO_OFFSET,
71
+ /* Consecutive little-endian elements from a single register
114
+ sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->gpio), 0));
72
+ * can be promoted to a larger little-endian operation.
73
+ */
74
+ if (selem == 1 && endian == MO_LE) {
75
+ size = 3;
76
+ }
77
+ ebytes = 1 << size;
78
+ elements = (is_q ? 16 : 8) / ebytes;
115
+
79
+
116
+ object_property_add_alias(OBJECT(s), "sd-bus", OBJECT(&s->gpio), "sd-bus",
80
tcg_rn = cpu_reg_sp(s, rn);
117
+ &err);
81
tcg_addr = tcg_temp_new_i64();
118
+ if (err) {
82
tcg_gen_mov_i64(tcg_addr, tcg_rn);
119
+ error_propagate(errp, err);
83
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
120
+ return;
84
for (r = 0; r < rpt; r++) {
85
int e;
86
for (e = 0; e < elements; e++) {
87
- int tt = (rt + r) % 32;
88
int xs;
89
for (xs = 0; xs < selem; xs++) {
90
+ int tt = (rt + r + xs) % 32;
91
if (is_store) {
92
- do_vec_st(s, tt, e, tcg_addr, size);
93
+ do_vec_st(s, tt, e, tcg_addr, size, endian);
94
} else {
95
- do_vec_ld(s, tt, e, tcg_addr, size);
96
-
97
- /* For non-quad operations, setting a slice of the low
98
- * 64 bits of the register clears the high 64 bits (in
99
- * the ARM ARM pseudocode this is implicit in the fact
100
- * that 'rval' is a 64 bit wide variable).
101
- * For quad operations, we might still need to zero the
102
- * high bits of SVE. We optimize by noticing that we only
103
- * need to do this the first time we touch a register.
104
- */
105
- if (e == 0 && (r == 0 || xs == selem - 1)) {
106
- clear_vec_high(s, is_q, tt);
107
- }
108
+ do_vec_ld(s, tt, e, tcg_addr, size, endian);
109
}
110
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
111
- tt = (tt + 1) % 32;
112
}
113
}
114
}
115
116
+ if (!is_store) {
117
+ /* For non-quad operations, setting a slice of the low
118
+ * 64 bits of the register clears the high 64 bits (in
119
+ * the ARM ARM pseudocode this is implicit in the fact
120
+ * that 'rval' is a 64 bit wide variable).
121
+ * For quad operations, we might still need to zero the
122
+ * high bits of SVE.
123
+ */
124
+ for (r = 0; r < rpt * selem; r++) {
125
+ int tt = (rt + r) % 32;
126
+ clear_vec_high(s, is_q, tt);
127
+ }
121
+ }
128
+ }
122
}
129
+
123
130
if (is_postidx) {
124
static void bcm2835_peripherals_class_init(ObjectClass *oc, void *data)
131
int rm = extract32(insn, 16, 5);
132
if (rm == 31) {
133
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
134
} else {
135
/* Load/store one element per register */
136
if (is_load) {
137
- do_vec_ld(s, rt, index, tcg_addr, scale);
138
+ do_vec_ld(s, rt, index, tcg_addr, scale, s->be_data);
139
} else {
140
- do_vec_st(s, rt, index, tcg_addr, scale);
141
+ do_vec_st(s, rt, index, tcg_addr, scale, s->be_data);
142
}
143
}
144
tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
--
2.7.4

--
2.19.1

New patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
[PMM: drop change to now-deleted cpu_mode_names array]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;
 
 #include "exec/gen-icount.h"
 
-static const char *regnames[] =
+static const char * const regnames[] =
     { "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
       "r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };
 
@@ -XXX,XX +XXX,XX @@ static struct {
     int nregs;
    int interleave;
     int spacing;
-} neon_ls_element_type[11] = {
+} const neon_ls_element_type[11] = {
     {4, 4, 1},
     {4, 4, 2},
     {4, 1, 1},
--
2.19.1

New patch
From: Richard Henderson <richard.henderson@linaro.org>

Also introduces neon_element_offset to find the env offset
of a specific element within a neon register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 27 deletions(-)

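As a side note on the new helper, here is a small standalone sketch (plain C, not QEMU code) of the offset calculation it performs, showing how the XOR accounts for big-endian hosts storing each 64-bit unit of the register with its sub-units reversed:

#include <stdio.h>

/* Illustration only.  'size' is log2 of the element size in bytes,
 * mirroring the TCGMemOp encoding used in the patch. */
static long element_offset(int element, int size, int host_big_endian)
{
    int element_size = 1 << size;
    long ofs = element * element_size;   /* little-endian layout */
    if (host_big_endian && element_size < 8) {
        /* On a big-endian host the order within each 8-byte unit is
         * reversed, so flip the offset inside that unit. */
        ofs ^= 8 - element_size;
    }
    return ofs;
}

int main(void)
{
    /* 16-bit elements (size = 1): element 0 sits at offset 0 on a
     * little-endian host but at offset 6 of the first 64-bit unit on a
     * big-endian host; element 4 maps to 8 and 14 respectively. */
    printf("%ld %ld\n", element_offset(0, 1, 0), element_offset(0, 1, 1));
    printf("%ld %ld\n", element_offset(4, 1, 0), element_offset(4, 1, 1));
    return 0;
}
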
diff --git a/target/arm/translate.c b/target/arm/translate.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/translate.c
17
+++ b/target/arm/translate.c
18
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
19
return vfp_reg_offset(0, sreg);
20
}
21
22
+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
23
+ * where 0 is the least significant end of the register.
24
+ */
25
+static inline long
26
+neon_element_offset(int reg, int element, TCGMemOp size)
27
+{
28
+ int element_size = 1 << size;
29
+ int ofs = element * element_size;
30
+#ifdef HOST_WORDS_BIGENDIAN
31
+ /* Calculate the offset assuming fully little-endian,
32
+ * then XOR to account for the order of the 8-byte units.
33
+ */
34
+ if (element_size < 8) {
35
+ ofs ^= 8 - element_size;
36
+ }
37
+#endif
38
+ return neon_reg_offset(reg, 0) + ofs;
39
+}
40
+
41
static TCGv_i32 neon_load_reg(int reg, int pass)
42
{
43
TCGv_i32 tmp = tcg_temp_new_i32();
44
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
45
tmp = load_reg(s, rd);
46
if (insn & (1 << 23)) {
47
/* VDUP */
48
- if (size == 0) {
49
- gen_neon_dup_u8(tmp, 0);
50
- } else if (size == 1) {
51
- gen_neon_dup_low16(tmp);
52
- }
53
- for (n = 0; n <= pass * 2; n++) {
54
- tmp2 = tcg_temp_new_i32();
55
- tcg_gen_mov_i32(tmp2, tmp);
56
- neon_store_reg(rn, n, tmp2);
57
- }
58
- neon_store_reg(rn, n, tmp);
59
+ int vec_size = pass ? 16 : 8;
60
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
61
+ vec_size, vec_size, tmp);
62
+ tcg_temp_free_i32(tmp);
63
} else {
64
/* VMOV */
65
switch (size) {
66
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
67
tcg_temp_free_i32(tmp);
68
} else if ((insn & 0x380) == 0) {
69
/* VDUP */
70
+ int element;
71
+ TCGMemOp size;
72
+
73
if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
74
return 1;
75
}
76
- if (insn & (1 << 19)) {
77
- tmp = neon_load_reg(rm, 1);
78
- } else {
79
- tmp = neon_load_reg(rm, 0);
80
- }
81
if (insn & (1 << 16)) {
82
- gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
83
+ size = MO_8;
84
+ element = (insn >> 17) & 7;
85
} else if (insn & (1 << 17)) {
86
- if ((insn >> 18) & 1)
87
- gen_neon_dup_high16(tmp);
88
- else
89
- gen_neon_dup_low16(tmp);
90
+ size = MO_16;
91
+ element = (insn >> 18) & 3;
92
+ } else {
93
+ size = MO_32;
94
+ element = (insn >> 19) & 1;
95
}
96
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
97
- tmp2 = tcg_temp_new_i32();
98
- tcg_gen_mov_i32(tmp2, tmp);
99
- neon_store_reg(rd, pass, tmp2);
100
- }
101
- tcg_temp_free_i32(tmp);
102
+ tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
103
+ neon_element_offset(rm, element, size),
104
+ q ? 16 : 8, q ? 16 : 8);
105
} else {
106
return 1;
107
}
108
--
109
2.19.1
110
111
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
 1 file changed, 39 insertions(+), 28 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
16
return 1;
17
}
18
} else { /* (insn & 0x00380080) == 0 */
19
- int invert;
20
+ int invert, reg_ofs, vec_size;
21
+
22
if (q && (rd & 1)) {
23
return 1;
24
}
25
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
26
break;
27
case 14:
28
imm |= (imm << 8) | (imm << 16) | (imm << 24);
29
- if (invert)
30
+ if (invert) {
31
imm = ~imm;
32
+ }
33
break;
34
case 15:
35
if (invert) {
36
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
37
| ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
38
break;
39
}
40
- if (invert)
41
+ if (invert) {
42
imm = ~imm;
43
+ }
44
45
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
46
- if (op & 1 && op < 12) {
47
- tmp = neon_load_reg(rd, pass);
48
- if (invert) {
49
- /* The immediate value has already been inverted, so
50
- BIC becomes AND. */
51
- tcg_gen_andi_i32(tmp, tmp, imm);
52
- } else {
53
- tcg_gen_ori_i32(tmp, tmp, imm);
54
- }
55
+ reg_ofs = neon_reg_offset(rd, 0);
56
+ vec_size = q ? 16 : 8;
57
+
58
+ if (op & 1 && op < 12) {
59
+ if (invert) {
60
+ /* The immediate value has already been inverted,
61
+ * so BIC becomes AND.
62
+ */
63
+ tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
64
+ vec_size, vec_size);
65
} else {
66
- /* VMOV, VMVN. */
67
- tmp = tcg_temp_new_i32();
68
- if (op == 14 && invert) {
69
- int n;
70
- uint32_t val;
71
- val = 0;
72
- for (n = 0; n < 4; n++) {
73
- if (imm & (1 << (n + (pass & 1) * 4)))
74
- val |= 0xff << (n * 8);
75
- }
76
- tcg_gen_movi_i32(tmp, val);
77
- } else {
78
- tcg_gen_movi_i32(tmp, imm);
79
- }
80
+ tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
81
+ vec_size, vec_size);
82
+ }
83
+ } else {
84
+ /* VMOV, VMVN. */
85
+ if (op == 14 && invert) {
86
+ TCGv_i64 t64 = tcg_temp_new_i64();
87
+
88
+ for (pass = 0; pass <= q; ++pass) {
89
+ uint64_t val = 0;
90
+ int n;
91
+
92
+ for (n = 0; n < 8; n++) {
93
+ if (imm & (1 << (n + pass * 8))) {
94
+ val |= 0xffull << (n * 8);
95
+ }
96
+ }
97
+ tcg_gen_movi_i64(t64, val);
98
+ neon_store_reg64(t64, rd + pass);
99
+ }
100
+ tcg_temp_free_i64(t64);
101
+ } else {
102
+ tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
103
}
104
- neon_store_reg(rd, pass, tmp);
105
}
106
}
107
} else { /* (insn & 0x00800010 == 0x00800000) */
108
--
109
2.19.1
110
111
From: Paolo Bonzini <pbonzini@redhat.com>

virtio_mmio.h would be deleted; I am leaving it in though it was a
mistake to add it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---

From: Richard Henderson <richard.henderson@linaro.org>

Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
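As a quick reference for what is being moved, a scalar sketch (plain C, not the TCG gvec expanders themselves) of the three bitwise-insert operations, written in the same xor/and/xor form as the gen_*_i64 helpers in the patch body:

#include <stdint.h>

/* VBSL: per bit, result = sel ? n : m (sel is the destination input). */
static uint64_t bsl(uint64_t sel, uint64_t n, uint64_t m)
{
    return m ^ ((n ^ m) & sel);
}

/* VBIT: insert bits of n into d where the mask m is set. */
static uint64_t bit(uint64_t d, uint64_t n, uint64_t m)
{
    return d ^ ((n ^ d) & m);
}

/* VBIF: insert bits of n into d where the mask m is clear. */
static uint64_t bif(uint64_t d, uint64_t n, uint64_t m)
{
    return d ^ ((n ^ d) & ~m);
}

Each expression expands to the usual and/or formulation, e.g. bsl() equals
(n & sel) | (m & ~sel), so the shared expanders keep the Neon semantics while
clobbering only one input.
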
9
include/standard-headers/asm-x86/hyperv.h | 8 +
10
target/arm/translate.h | 6 ++
10
include/standard-headers/linux/input-event-codes.h | 2 +-
11
target/arm/translate-a64.c | 61 --------------
11
include/standard-headers/linux/pci_regs.h | 25 ++
12
target/arm/translate.c | 162 +++++++++++++++++++++++++++----------
12
include/standard-headers/linux/virtio_ids.h | 1 +
13
3 files changed, 124 insertions(+), 105 deletions(-)
13
linux-headers/asm-arm/kvm.h | 15 +
14
14
linux-headers/asm-arm/unistd-common.h | 357 ++++++++++++++++++
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
15
linux-headers/asm-arm/unistd-eabi.h | 5 +
16
linux-headers/asm-arm/unistd-oabi.h | 17 +
17
linux-headers/asm-arm/unistd.h | 419 +--------------------
18
linux-headers/asm-arm64/kvm.h | 13 +
19
linux-headers/asm-powerpc/kvm.h | 27 ++
20
linux-headers/asm-powerpc/unistd.h | 1 +
21
linux-headers/asm-x86/kvm_para.h | 13 +-
22
linux-headers/linux/kvm.h | 24 +-
23
linux-headers/linux/kvm_para.h | 2 +
24
linux-headers/linux/userfaultfd.h | 67 +++-
25
linux-headers/linux/vfio.h | 10 +
26
17 files changed, 577 insertions(+), 429 deletions(-)
27
create mode 100644 linux-headers/asm-arm/unistd-common.h
28
create mode 100644 linux-headers/asm-arm/unistd-eabi.h
29
create mode 100644 linux-headers/asm-arm/unistd-oabi.h
30
31
diff --git a/include/standard-headers/asm-x86/hyperv.h b/include/standard-headers/asm-x86/hyperv.h
32
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
33
--- a/include/standard-headers/asm-x86/hyperv.h
17
--- a/target/arm/translate.h
34
+++ b/include/standard-headers/asm-x86/hyperv.h
18
+++ b/target/arm/translate.h
35
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
36
*/
20
return ret;
37
#define HV_X64_MSR_STAT_PAGES_AVAILABLE        (1 << 8)
21
}
38
22
39
+/* Crash MSR available */
23
+
40
+#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10)
24
+/* Vector operations shared between ARM and AArch64. */
25
+extern const GVecGen3 bsl_op;
26
+extern const GVecGen3 bit_op;
27
+extern const GVecGen3 bif_op;
41
+
28
+
42
/*
29
/*
43
* Feature identification: EBX indicates which flags were specified at
30
* Forward to the isar_feature_* tests given a DisasContext pointer.
44
* partition creation. The format is the same as the partition creation
45
@@ -XXX,XX +XXX,XX @@
46
*/
31
*/
47
#define HV_X64_RELAXED_TIMING_RECOMMENDED    (1 << 5)
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/target/arm/translate-a64.c
35
+++ b/target/arm/translate-a64.c
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
37
}
38
}
39
40
-static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
41
-{
42
- tcg_gen_xor_i64(rn, rn, rm);
43
- tcg_gen_and_i64(rn, rn, rd);
44
- tcg_gen_xor_i64(rd, rm, rn);
45
-}
46
-
47
-static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
48
-{
49
- tcg_gen_xor_i64(rn, rn, rd);
50
- tcg_gen_and_i64(rn, rn, rm);
51
- tcg_gen_xor_i64(rd, rd, rn);
52
-}
53
-
54
-static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
55
-{
56
- tcg_gen_xor_i64(rn, rn, rd);
57
- tcg_gen_andc_i64(rn, rn, rm);
58
- tcg_gen_xor_i64(rd, rd, rn);
59
-}
60
-
61
-static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
62
-{
63
- tcg_gen_xor_vec(vece, rn, rn, rm);
64
- tcg_gen_and_vec(vece, rn, rn, rd);
65
- tcg_gen_xor_vec(vece, rd, rm, rn);
66
-}
67
-
68
-static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
69
-{
70
- tcg_gen_xor_vec(vece, rn, rn, rd);
71
- tcg_gen_and_vec(vece, rn, rn, rm);
72
- tcg_gen_xor_vec(vece, rd, rd, rn);
73
-}
74
-
75
-static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
76
-{
77
- tcg_gen_xor_vec(vece, rn, rn, rd);
78
- tcg_gen_andc_vec(vece, rn, rn, rm);
79
- tcg_gen_xor_vec(vece, rd, rd, rn);
80
-}
81
-
82
/* Logic op (opcode == 3) subgroup of C3.6.16. */
83
static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
84
{
85
- static const GVecGen3 bsl_op = {
86
- .fni8 = gen_bsl_i64,
87
- .fniv = gen_bsl_vec,
88
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
89
- .load_dest = true
90
- };
91
- static const GVecGen3 bit_op = {
92
- .fni8 = gen_bit_i64,
93
- .fniv = gen_bit_vec,
94
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
95
- .load_dest = true
96
- };
97
- static const GVecGen3 bif_op = {
98
- .fni8 = gen_bif_i64,
99
- .fniv = gen_bif_vec,
100
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
- .load_dest = true
102
- };
103
-
104
int rd = extract32(insn, 0, 5);
105
int rn = extract32(insn, 5, 5);
106
int rm = extract32(insn, 16, 5);
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
112
return 0;
113
}
114
115
-/* Bitwise select. dest = c ? t : f. Clobbers T and F. */
116
-static void gen_neon_bsl(TCGv_i32 dest, TCGv_i32 t, TCGv_i32 f, TCGv_i32 c)
117
-{
118
- tcg_gen_and_i32(t, t, c);
119
- tcg_gen_andc_i32(f, f, c);
120
- tcg_gen_or_i32(dest, t, f);
121
-}
122
-
123
static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
124
{
125
switch (size) {
126
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
127
return 1;
128
}
48
129
49
+/*
130
+/*
50
+ * Crash notification flag.
131
+ * Expanders for VBitOps_VBIF, VBIT, VBSL.
51
+ */
132
+ */
52
+#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
133
+static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
53
+
134
+{
54
/* MSR used to identify the guest OS. */
135
+ tcg_gen_xor_i64(rn, rn, rm);
55
#define HV_X64_MSR_GUEST_OS_ID            0x40000000
136
+ tcg_gen_and_i64(rn, rn, rd);
56
137
+ tcg_gen_xor_i64(rd, rm, rn);
57
diff --git a/include/standard-headers/linux/input-event-codes.h b/include/standard-headers/linux/input-event-codes.h
138
+}
58
index XXXXXXX..XXXXXXX 100644
139
+
59
--- a/include/standard-headers/linux/input-event-codes.h
140
+static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
60
+++ b/include/standard-headers/linux/input-event-codes.h
141
+{
61
@@ -XXX,XX +XXX,XX @@
142
+ tcg_gen_xor_i64(rn, rn, rd);
62
* Control a data application associated with the currently viewed channel,
143
+ tcg_gen_and_i64(rn, rn, rm);
63
* e.g. teletext or data broadcast application (MHEG, MHP, HbbTV, etc.)
144
+ tcg_gen_xor_i64(rd, rd, rn);
64
*/
145
+}
65
-#define KEY_DATA            0x275
146
+
66
+#define KEY_DATA            0x277
147
+static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
67
148
+{
68
#define BTN_TRIGGER_HAPPY        0x2c0
149
+ tcg_gen_xor_i64(rn, rn, rd);
69
#define BTN_TRIGGER_HAPPY1        0x2c0
150
+ tcg_gen_andc_i64(rn, rn, rm);
70
diff --git a/include/standard-headers/linux/pci_regs.h b/include/standard-headers/linux/pci_regs.h
151
+ tcg_gen_xor_i64(rd, rd, rn);
71
index XXXXXXX..XXXXXXX 100644
152
+}
72
--- a/include/standard-headers/linux/pci_regs.h
153
+
73
+++ b/include/standard-headers/linux/pci_regs.h
154
+static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
74
@@ -XXX,XX +XXX,XX @@
155
+{
75
#define LINUX_PCI_REGS_H
156
+ tcg_gen_xor_vec(vece, rn, rn, rm);
76
157
+ tcg_gen_and_vec(vece, rn, rn, rd);
77
/*
158
+ tcg_gen_xor_vec(vece, rd, rm, rn);
78
+ * Conventional PCI and PCI-X Mode 1 devices have 256 bytes of
159
+}
79
+ * configuration space. PCI-X Mode 2 and PCIe devices have 4096 bytes of
160
+
80
+ * configuration space.
161
+static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
81
+ */
162
+{
82
+#define PCI_CFG_SPACE_SIZE    256
163
+ tcg_gen_xor_vec(vece, rn, rn, rd);
83
+#define PCI_CFG_SPACE_EXP_SIZE    4096
164
+ tcg_gen_and_vec(vece, rn, rn, rm);
84
+
165
+ tcg_gen_xor_vec(vece, rd, rd, rn);
85
+/*
166
+}
86
* Under PCI, each device has 256 bytes of configuration address space,
167
+
87
* of which the first 64 bytes are standardized as follows:
168
+static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
88
*/
169
+{
89
@@ -XXX,XX +XXX,XX @@
170
+ tcg_gen_xor_vec(vece, rn, rn, rd);
90
#define PCI_EXT_CAP_ID_PMUX    0x1A    /* Protocol Multiplexing */
171
+ tcg_gen_andc_vec(vece, rn, rn, rm);
91
#define PCI_EXT_CAP_ID_PASID    0x1B    /* Process Address Space ID */
172
+ tcg_gen_xor_vec(vece, rd, rd, rn);
92
#define PCI_EXT_CAP_ID_DPC    0x1D    /* Downstream Port Containment */
173
+}
93
+#define PCI_EXT_CAP_ID_L1SS    0x1E    /* L1 PM Substates */
174
+
94
#define PCI_EXT_CAP_ID_PTM    0x1F    /* Precision Time Measurement */
175
+const GVecGen3 bsl_op = {
95
#define PCI_EXT_CAP_ID_MAX    PCI_EXT_CAP_ID_PTM
176
+ .fni8 = gen_bsl_i64,
96
177
+ .fniv = gen_bsl_vec,
97
@@ -XXX,XX +XXX,XX @@
178
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
98
#define PCI_EXP_DPC_STATUS        8    /* DPC Status */
179
+ .load_dest = true
99
#define PCI_EXP_DPC_STATUS_TRIGGER    0x01    /* Trigger Status */
100
#define PCI_EXP_DPC_STATUS_INTERRUPT    0x08    /* Interrupt Status */
101
+#define PCI_EXP_DPC_RP_BUSY        0x10    /* Root Port Busy */
102
103
#define PCI_EXP_DPC_SOURCE_ID        10    /* DPC Source Identifier */
104
105
@@ -XXX,XX +XXX,XX @@
106
#define PCI_PTM_CTRL_ENABLE        0x00000001 /* PTM enable */
107
#define PCI_PTM_CTRL_ROOT        0x00000002 /* Root select */
108
109
+/* L1 PM Substates */
110
+#define PCI_L1SS_CAP         4    /* capability register */
111
+#define PCI_L1SS_CAP_PCIPM_L1_2     1    /* PCI PM L1.2 Support */
112
+#define PCI_L1SS_CAP_PCIPM_L1_1     2    /* PCI PM L1.1 Support */
113
+#define PCI_L1SS_CAP_ASPM_L1_2         4    /* ASPM L1.2 Support */
114
+#define PCI_L1SS_CAP_ASPM_L1_1         8    /* ASPM L1.1 Support */
115
+#define PCI_L1SS_CAP_L1_PM_SS        16    /* L1 PM Substates Support */
116
+#define PCI_L1SS_CTL1         8    /* Control Register 1 */
117
+#define PCI_L1SS_CTL1_PCIPM_L1_2    1    /* PCI PM L1.2 Enable */
118
+#define PCI_L1SS_CTL1_PCIPM_L1_1    2    /* PCI PM L1.1 Support */
119
+#define PCI_L1SS_CTL1_ASPM_L1_2    4    /* ASPM L1.2 Support */
120
+#define PCI_L1SS_CTL1_ASPM_L1_1    8    /* ASPM L1.1 Support */
121
+#define PCI_L1SS_CTL1_L1SS_MASK    0x0000000F
122
+#define PCI_L1SS_CTL2         0xC    /* Control Register 2 */
123
+
124
#endif /* LINUX_PCI_REGS_H */
125
diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
126
index XXXXXXX..XXXXXXX 100644
127
--- a/include/standard-headers/linux/virtio_ids.h
128
+++ b/include/standard-headers/linux/virtio_ids.h
129
@@ -XXX,XX +XXX,XX @@
130
#define VIRTIO_ID_INPUT 18 /* virtio input */
131
#define VIRTIO_ID_VSOCK 19 /* virtio vsock transport */
132
#define VIRTIO_ID_CRYPTO 20 /* virtio crypto */
133
+
134
#endif /* _LINUX_VIRTIO_IDS_H */
135
diff --git a/linux-headers/asm-arm/kvm.h b/linux-headers/asm-arm/kvm.h
136
index XXXXXXX..XXXXXXX 100644
137
--- a/linux-headers/asm-arm/kvm.h
138
+++ b/linux-headers/asm-arm/kvm.h
139
@@ -XXX,XX +XXX,XX @@ struct kvm_regs {
140
/* Supported VGICv3 address types */
141
#define KVM_VGIC_V3_ADDR_TYPE_DIST    2
142
#define KVM_VGIC_V3_ADDR_TYPE_REDIST    3
143
+#define KVM_VGIC_ITS_ADDR_TYPE        4
144
145
#define KVM_VGIC_V3_DIST_SIZE        SZ_64K
146
#define KVM_VGIC_V3_REDIST_SIZE        (2 * SZ_64K)
147
+#define KVM_VGIC_V3_ITS_SIZE        (2 * SZ_64K)
148
149
#define KVM_ARM_VCPU_POWER_OFF        0 /* CPU is started in OFF state */
150
#define KVM_ARM_VCPU_PSCI_0_2        1 /* CPU uses PSCI v0.2 */
151
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
152
#define KVM_DEV_ARM_VGIC_GRP_CPU_REGS    2
153
#define KVM_DEV_ARM_VGIC_CPUID_SHIFT    32
154
#define KVM_DEV_ARM_VGIC_CPUID_MASK    (0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
155
+#define KVM_DEV_ARM_VGIC_V3_MPIDR_SHIFT 32
156
+#define KVM_DEV_ARM_VGIC_V3_MPIDR_MASK \
157
+            (0xffffffffULL << KVM_DEV_ARM_VGIC_V3_MPIDR_SHIFT)
158
#define KVM_DEV_ARM_VGIC_OFFSET_SHIFT    0
159
#define KVM_DEV_ARM_VGIC_OFFSET_MASK    (0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
160
+#define KVM_DEV_ARM_VGIC_SYSREG_INSTR_MASK (0xffff)
161
#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS    3
162
#define KVM_DEV_ARM_VGIC_GRP_CTRL 4
163
+#define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5
164
+#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
165
+#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
166
+#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT    10
167
+#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
168
+            (0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
169
+#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INTID_MASK 0x3ff
170
+#define VGIC_LEVEL_INFO_LINE_LEVEL    0
171
+
172
#define KVM_DEV_ARM_VGIC_CTRL_INIT 0
173
174
/* KVM_IRQ_LINE irq field index values */
175
diff --git a/linux-headers/asm-arm/unistd-common.h b/linux-headers/asm-arm/unistd-common.h
176
new file mode 100644
177
index XXXXXXX..XXXXXXX
178
--- /dev/null
179
+++ b/linux-headers/asm-arm/unistd-common.h
180
@@ -XXX,XX +XXX,XX @@
181
+#ifndef _ASM_ARM_UNISTD_COMMON_H
182
+#define _ASM_ARM_UNISTD_COMMON_H 1
183
+
184
+#define __NR_restart_syscall (__NR_SYSCALL_BASE + 0)
185
+#define __NR_exit (__NR_SYSCALL_BASE + 1)
186
+#define __NR_fork (__NR_SYSCALL_BASE + 2)
187
+#define __NR_read (__NR_SYSCALL_BASE + 3)
188
+#define __NR_write (__NR_SYSCALL_BASE + 4)
189
+#define __NR_open (__NR_SYSCALL_BASE + 5)
190
+#define __NR_close (__NR_SYSCALL_BASE + 6)
191
+#define __NR_creat (__NR_SYSCALL_BASE + 8)
192
+#define __NR_link (__NR_SYSCALL_BASE + 9)
193
+#define __NR_unlink (__NR_SYSCALL_BASE + 10)
194
+#define __NR_execve (__NR_SYSCALL_BASE + 11)
195
+#define __NR_chdir (__NR_SYSCALL_BASE + 12)
196
+#define __NR_mknod (__NR_SYSCALL_BASE + 14)
197
+#define __NR_chmod (__NR_SYSCALL_BASE + 15)
198
+#define __NR_lchown (__NR_SYSCALL_BASE + 16)
199
+#define __NR_lseek (__NR_SYSCALL_BASE + 19)
200
+#define __NR_getpid (__NR_SYSCALL_BASE + 20)
201
+#define __NR_mount (__NR_SYSCALL_BASE + 21)
202
+#define __NR_setuid (__NR_SYSCALL_BASE + 23)
203
+#define __NR_getuid (__NR_SYSCALL_BASE + 24)
204
+#define __NR_ptrace (__NR_SYSCALL_BASE + 26)
205
+#define __NR_pause (__NR_SYSCALL_BASE + 29)
206
+#define __NR_access (__NR_SYSCALL_BASE + 33)
207
+#define __NR_nice (__NR_SYSCALL_BASE + 34)
208
+#define __NR_sync (__NR_SYSCALL_BASE + 36)
209
+#define __NR_kill (__NR_SYSCALL_BASE + 37)
210
+#define __NR_rename (__NR_SYSCALL_BASE + 38)
211
+#define __NR_mkdir (__NR_SYSCALL_BASE + 39)
212
+#define __NR_rmdir (__NR_SYSCALL_BASE + 40)
213
+#define __NR_dup (__NR_SYSCALL_BASE + 41)
214
+#define __NR_pipe (__NR_SYSCALL_BASE + 42)
215
+#define __NR_times (__NR_SYSCALL_BASE + 43)
216
+#define __NR_brk (__NR_SYSCALL_BASE + 45)
217
+#define __NR_setgid (__NR_SYSCALL_BASE + 46)
218
+#define __NR_getgid (__NR_SYSCALL_BASE + 47)
219
+#define __NR_geteuid (__NR_SYSCALL_BASE + 49)
220
+#define __NR_getegid (__NR_SYSCALL_BASE + 50)
221
+#define __NR_acct (__NR_SYSCALL_BASE + 51)
222
+#define __NR_umount2 (__NR_SYSCALL_BASE + 52)
223
+#define __NR_ioctl (__NR_SYSCALL_BASE + 54)
224
+#define __NR_fcntl (__NR_SYSCALL_BASE + 55)
225
+#define __NR_setpgid (__NR_SYSCALL_BASE + 57)
226
+#define __NR_umask (__NR_SYSCALL_BASE + 60)
227
+#define __NR_chroot (__NR_SYSCALL_BASE + 61)
228
+#define __NR_ustat (__NR_SYSCALL_BASE + 62)
229
+#define __NR_dup2 (__NR_SYSCALL_BASE + 63)
230
+#define __NR_getppid (__NR_SYSCALL_BASE + 64)
231
+#define __NR_getpgrp (__NR_SYSCALL_BASE + 65)
232
+#define __NR_setsid (__NR_SYSCALL_BASE + 66)
233
+#define __NR_sigaction (__NR_SYSCALL_BASE + 67)
234
+#define __NR_setreuid (__NR_SYSCALL_BASE + 70)
235
+#define __NR_setregid (__NR_SYSCALL_BASE + 71)
236
+#define __NR_sigsuspend (__NR_SYSCALL_BASE + 72)
237
+#define __NR_sigpending (__NR_SYSCALL_BASE + 73)
238
+#define __NR_sethostname (__NR_SYSCALL_BASE + 74)
239
+#define __NR_setrlimit (__NR_SYSCALL_BASE + 75)
240
+#define __NR_getrusage (__NR_SYSCALL_BASE + 77)
241
+#define __NR_gettimeofday (__NR_SYSCALL_BASE + 78)
242
+#define __NR_settimeofday (__NR_SYSCALL_BASE + 79)
243
+#define __NR_getgroups (__NR_SYSCALL_BASE + 80)
244
+#define __NR_setgroups (__NR_SYSCALL_BASE + 81)
245
+#define __NR_symlink (__NR_SYSCALL_BASE + 83)
246
+#define __NR_readlink (__NR_SYSCALL_BASE + 85)
247
+#define __NR_uselib (__NR_SYSCALL_BASE + 86)
248
+#define __NR_swapon (__NR_SYSCALL_BASE + 87)
249
+#define __NR_reboot (__NR_SYSCALL_BASE + 88)
250
+#define __NR_munmap (__NR_SYSCALL_BASE + 91)
251
+#define __NR_truncate (__NR_SYSCALL_BASE + 92)
252
+#define __NR_ftruncate (__NR_SYSCALL_BASE + 93)
253
+#define __NR_fchmod (__NR_SYSCALL_BASE + 94)
254
+#define __NR_fchown (__NR_SYSCALL_BASE + 95)
255
+#define __NR_getpriority (__NR_SYSCALL_BASE + 96)
256
+#define __NR_setpriority (__NR_SYSCALL_BASE + 97)
257
+#define __NR_statfs (__NR_SYSCALL_BASE + 99)
258
+#define __NR_fstatfs (__NR_SYSCALL_BASE + 100)
259
+#define __NR_syslog (__NR_SYSCALL_BASE + 103)
260
+#define __NR_setitimer (__NR_SYSCALL_BASE + 104)
261
+#define __NR_getitimer (__NR_SYSCALL_BASE + 105)
262
+#define __NR_stat (__NR_SYSCALL_BASE + 106)
263
+#define __NR_lstat (__NR_SYSCALL_BASE + 107)
264
+#define __NR_fstat (__NR_SYSCALL_BASE + 108)
265
+#define __NR_vhangup (__NR_SYSCALL_BASE + 111)
266
+#define __NR_wait4 (__NR_SYSCALL_BASE + 114)
267
+#define __NR_swapoff (__NR_SYSCALL_BASE + 115)
268
+#define __NR_sysinfo (__NR_SYSCALL_BASE + 116)
269
+#define __NR_fsync (__NR_SYSCALL_BASE + 118)
270
+#define __NR_sigreturn (__NR_SYSCALL_BASE + 119)
271
+#define __NR_clone (__NR_SYSCALL_BASE + 120)
272
+#define __NR_setdomainname (__NR_SYSCALL_BASE + 121)
273
+#define __NR_uname (__NR_SYSCALL_BASE + 122)
274
+#define __NR_adjtimex (__NR_SYSCALL_BASE + 124)
275
+#define __NR_mprotect (__NR_SYSCALL_BASE + 125)
276
+#define __NR_sigprocmask (__NR_SYSCALL_BASE + 126)
277
+#define __NR_init_module (__NR_SYSCALL_BASE + 128)
278
+#define __NR_delete_module (__NR_SYSCALL_BASE + 129)
279
+#define __NR_quotactl (__NR_SYSCALL_BASE + 131)
280
+#define __NR_getpgid (__NR_SYSCALL_BASE + 132)
281
+#define __NR_fchdir (__NR_SYSCALL_BASE + 133)
282
+#define __NR_bdflush (__NR_SYSCALL_BASE + 134)
283
+#define __NR_sysfs (__NR_SYSCALL_BASE + 135)
284
+#define __NR_personality (__NR_SYSCALL_BASE + 136)
285
+#define __NR_setfsuid (__NR_SYSCALL_BASE + 138)
286
+#define __NR_setfsgid (__NR_SYSCALL_BASE + 139)
287
+#define __NR__llseek (__NR_SYSCALL_BASE + 140)
288
+#define __NR_getdents (__NR_SYSCALL_BASE + 141)
289
+#define __NR__newselect (__NR_SYSCALL_BASE + 142)
290
+#define __NR_flock (__NR_SYSCALL_BASE + 143)
291
+#define __NR_msync (__NR_SYSCALL_BASE + 144)
292
+#define __NR_readv (__NR_SYSCALL_BASE + 145)
293
+#define __NR_writev (__NR_SYSCALL_BASE + 146)
294
+#define __NR_getsid (__NR_SYSCALL_BASE + 147)
295
+#define __NR_fdatasync (__NR_SYSCALL_BASE + 148)
296
+#define __NR__sysctl (__NR_SYSCALL_BASE + 149)
297
+#define __NR_mlock (__NR_SYSCALL_BASE + 150)
298
+#define __NR_munlock (__NR_SYSCALL_BASE + 151)
299
+#define __NR_mlockall (__NR_SYSCALL_BASE + 152)
300
+#define __NR_munlockall (__NR_SYSCALL_BASE + 153)
301
+#define __NR_sched_setparam (__NR_SYSCALL_BASE + 154)
302
+#define __NR_sched_getparam (__NR_SYSCALL_BASE + 155)
303
+#define __NR_sched_setscheduler (__NR_SYSCALL_BASE + 156)
304
+#define __NR_sched_getscheduler (__NR_SYSCALL_BASE + 157)
305
+#define __NR_sched_yield (__NR_SYSCALL_BASE + 158)
306
+#define __NR_sched_get_priority_max (__NR_SYSCALL_BASE + 159)
307
+#define __NR_sched_get_priority_min (__NR_SYSCALL_BASE + 160)
308
+#define __NR_sched_rr_get_interval (__NR_SYSCALL_BASE + 161)
309
+#define __NR_nanosleep (__NR_SYSCALL_BASE + 162)
310
+#define __NR_mremap (__NR_SYSCALL_BASE + 163)
311
+#define __NR_setresuid (__NR_SYSCALL_BASE + 164)
312
+#define __NR_getresuid (__NR_SYSCALL_BASE + 165)
313
+#define __NR_poll (__NR_SYSCALL_BASE + 168)
314
+#define __NR_nfsservctl (__NR_SYSCALL_BASE + 169)
315
+#define __NR_setresgid (__NR_SYSCALL_BASE + 170)
316
+#define __NR_getresgid (__NR_SYSCALL_BASE + 171)
317
+#define __NR_prctl (__NR_SYSCALL_BASE + 172)
318
+#define __NR_rt_sigreturn (__NR_SYSCALL_BASE + 173)
319
+#define __NR_rt_sigaction (__NR_SYSCALL_BASE + 174)
320
+#define __NR_rt_sigprocmask (__NR_SYSCALL_BASE + 175)
321
+#define __NR_rt_sigpending (__NR_SYSCALL_BASE + 176)
322
+#define __NR_rt_sigtimedwait (__NR_SYSCALL_BASE + 177)
323
+#define __NR_rt_sigqueueinfo (__NR_SYSCALL_BASE + 178)
324
+#define __NR_rt_sigsuspend (__NR_SYSCALL_BASE + 179)
325
+#define __NR_pread64 (__NR_SYSCALL_BASE + 180)
326
+#define __NR_pwrite64 (__NR_SYSCALL_BASE + 181)
327
+#define __NR_chown (__NR_SYSCALL_BASE + 182)
328
+#define __NR_getcwd (__NR_SYSCALL_BASE + 183)
329
+#define __NR_capget (__NR_SYSCALL_BASE + 184)
330
+#define __NR_capset (__NR_SYSCALL_BASE + 185)
331
+#define __NR_sigaltstack (__NR_SYSCALL_BASE + 186)
332
+#define __NR_sendfile (__NR_SYSCALL_BASE + 187)
333
+#define __NR_vfork (__NR_SYSCALL_BASE + 190)
334
+#define __NR_ugetrlimit (__NR_SYSCALL_BASE + 191)
335
+#define __NR_mmap2 (__NR_SYSCALL_BASE + 192)
336
+#define __NR_truncate64 (__NR_SYSCALL_BASE + 193)
337
+#define __NR_ftruncate64 (__NR_SYSCALL_BASE + 194)
338
+#define __NR_stat64 (__NR_SYSCALL_BASE + 195)
339
+#define __NR_lstat64 (__NR_SYSCALL_BASE + 196)
340
+#define __NR_fstat64 (__NR_SYSCALL_BASE + 197)
341
+#define __NR_lchown32 (__NR_SYSCALL_BASE + 198)
342
+#define __NR_getuid32 (__NR_SYSCALL_BASE + 199)
343
+#define __NR_getgid32 (__NR_SYSCALL_BASE + 200)
344
+#define __NR_geteuid32 (__NR_SYSCALL_BASE + 201)
345
+#define __NR_getegid32 (__NR_SYSCALL_BASE + 202)
346
+#define __NR_setreuid32 (__NR_SYSCALL_BASE + 203)
347
+#define __NR_setregid32 (__NR_SYSCALL_BASE + 204)
348
+#define __NR_getgroups32 (__NR_SYSCALL_BASE + 205)
349
+#define __NR_setgroups32 (__NR_SYSCALL_BASE + 206)
350
+#define __NR_fchown32 (__NR_SYSCALL_BASE + 207)
351
+#define __NR_setresuid32 (__NR_SYSCALL_BASE + 208)
352
+#define __NR_getresuid32 (__NR_SYSCALL_BASE + 209)
353
+#define __NR_setresgid32 (__NR_SYSCALL_BASE + 210)
354
+#define __NR_getresgid32 (__NR_SYSCALL_BASE + 211)
355
+#define __NR_chown32 (__NR_SYSCALL_BASE + 212)
356
+#define __NR_setuid32 (__NR_SYSCALL_BASE + 213)
357
+#define __NR_setgid32 (__NR_SYSCALL_BASE + 214)
358
+#define __NR_setfsuid32 (__NR_SYSCALL_BASE + 215)
359
+#define __NR_setfsgid32 (__NR_SYSCALL_BASE + 216)
360
+#define __NR_getdents64 (__NR_SYSCALL_BASE + 217)
361
+#define __NR_pivot_root (__NR_SYSCALL_BASE + 218)
362
+#define __NR_mincore (__NR_SYSCALL_BASE + 219)
363
+#define __NR_madvise (__NR_SYSCALL_BASE + 220)
364
+#define __NR_fcntl64 (__NR_SYSCALL_BASE + 221)
365
+#define __NR_gettid (__NR_SYSCALL_BASE + 224)
366
+#define __NR_readahead (__NR_SYSCALL_BASE + 225)
367
+#define __NR_setxattr (__NR_SYSCALL_BASE + 226)
368
+#define __NR_lsetxattr (__NR_SYSCALL_BASE + 227)
369
+#define __NR_fsetxattr (__NR_SYSCALL_BASE + 228)
370
+#define __NR_getxattr (__NR_SYSCALL_BASE + 229)
371
+#define __NR_lgetxattr (__NR_SYSCALL_BASE + 230)
372
+#define __NR_fgetxattr (__NR_SYSCALL_BASE + 231)
373
+#define __NR_listxattr (__NR_SYSCALL_BASE + 232)
374
+#define __NR_llistxattr (__NR_SYSCALL_BASE + 233)
375
+#define __NR_flistxattr (__NR_SYSCALL_BASE + 234)
376
+#define __NR_removexattr (__NR_SYSCALL_BASE + 235)
377
+#define __NR_lremovexattr (__NR_SYSCALL_BASE + 236)
378
+#define __NR_fremovexattr (__NR_SYSCALL_BASE + 237)
379
+#define __NR_tkill (__NR_SYSCALL_BASE + 238)
380
+#define __NR_sendfile64 (__NR_SYSCALL_BASE + 239)
381
+#define __NR_futex (__NR_SYSCALL_BASE + 240)
382
+#define __NR_sched_setaffinity (__NR_SYSCALL_BASE + 241)
383
+#define __NR_sched_getaffinity (__NR_SYSCALL_BASE + 242)
384
+#define __NR_io_setup (__NR_SYSCALL_BASE + 243)
385
+#define __NR_io_destroy (__NR_SYSCALL_BASE + 244)
386
+#define __NR_io_getevents (__NR_SYSCALL_BASE + 245)
387
+#define __NR_io_submit (__NR_SYSCALL_BASE + 246)
388
+#define __NR_io_cancel (__NR_SYSCALL_BASE + 247)
389
+#define __NR_exit_group (__NR_SYSCALL_BASE + 248)
390
+#define __NR_lookup_dcookie (__NR_SYSCALL_BASE + 249)
391
+#define __NR_epoll_create (__NR_SYSCALL_BASE + 250)
392
+#define __NR_epoll_ctl (__NR_SYSCALL_BASE + 251)
393
+#define __NR_epoll_wait (__NR_SYSCALL_BASE + 252)
394
+#define __NR_remap_file_pages (__NR_SYSCALL_BASE + 253)
395
+#define __NR_set_tid_address (__NR_SYSCALL_BASE + 256)
396
+#define __NR_timer_create (__NR_SYSCALL_BASE + 257)
397
+#define __NR_timer_settime (__NR_SYSCALL_BASE + 258)
398
+#define __NR_timer_gettime (__NR_SYSCALL_BASE + 259)
399
+#define __NR_timer_getoverrun (__NR_SYSCALL_BASE + 260)
400
+#define __NR_timer_delete (__NR_SYSCALL_BASE + 261)
401
+#define __NR_clock_settime (__NR_SYSCALL_BASE + 262)
402
+#define __NR_clock_gettime (__NR_SYSCALL_BASE + 263)
403
+#define __NR_clock_getres (__NR_SYSCALL_BASE + 264)
404
+#define __NR_clock_nanosleep (__NR_SYSCALL_BASE + 265)
405
+#define __NR_statfs64 (__NR_SYSCALL_BASE + 266)
406
+#define __NR_fstatfs64 (__NR_SYSCALL_BASE + 267)
407
+#define __NR_tgkill (__NR_SYSCALL_BASE + 268)
408
+#define __NR_utimes (__NR_SYSCALL_BASE + 269)
409
+#define __NR_arm_fadvise64_64 (__NR_SYSCALL_BASE + 270)
410
+#define __NR_pciconfig_iobase (__NR_SYSCALL_BASE + 271)
411
+#define __NR_pciconfig_read (__NR_SYSCALL_BASE + 272)
412
+#define __NR_pciconfig_write (__NR_SYSCALL_BASE + 273)
413
+#define __NR_mq_open (__NR_SYSCALL_BASE + 274)
414
+#define __NR_mq_unlink (__NR_SYSCALL_BASE + 275)
415
+#define __NR_mq_timedsend (__NR_SYSCALL_BASE + 276)
416
+#define __NR_mq_timedreceive (__NR_SYSCALL_BASE + 277)
417
+#define __NR_mq_notify (__NR_SYSCALL_BASE + 278)
418
+#define __NR_mq_getsetattr (__NR_SYSCALL_BASE + 279)
419
+#define __NR_waitid (__NR_SYSCALL_BASE + 280)
420
+#define __NR_socket (__NR_SYSCALL_BASE + 281)
421
+#define __NR_bind (__NR_SYSCALL_BASE + 282)
422
+#define __NR_connect (__NR_SYSCALL_BASE + 283)
423
+#define __NR_listen (__NR_SYSCALL_BASE + 284)
424
+#define __NR_accept (__NR_SYSCALL_BASE + 285)
425
+#define __NR_getsockname (__NR_SYSCALL_BASE + 286)
426
+#define __NR_getpeername (__NR_SYSCALL_BASE + 287)
427
+#define __NR_socketpair (__NR_SYSCALL_BASE + 288)
428
+#define __NR_send (__NR_SYSCALL_BASE + 289)
429
+#define __NR_sendto (__NR_SYSCALL_BASE + 290)
430
+#define __NR_recv (__NR_SYSCALL_BASE + 291)
431
+#define __NR_recvfrom (__NR_SYSCALL_BASE + 292)
432
+#define __NR_shutdown (__NR_SYSCALL_BASE + 293)
433
+#define __NR_setsockopt (__NR_SYSCALL_BASE + 294)
434
+#define __NR_getsockopt (__NR_SYSCALL_BASE + 295)
435
+#define __NR_sendmsg (__NR_SYSCALL_BASE + 296)
436
+#define __NR_recvmsg (__NR_SYSCALL_BASE + 297)
437
+#define __NR_semop (__NR_SYSCALL_BASE + 298)
438
+#define __NR_semget (__NR_SYSCALL_BASE + 299)
439
+#define __NR_semctl (__NR_SYSCALL_BASE + 300)
440
+#define __NR_msgsnd (__NR_SYSCALL_BASE + 301)
441
+#define __NR_msgrcv (__NR_SYSCALL_BASE + 302)
442
+#define __NR_msgget (__NR_SYSCALL_BASE + 303)
443
+#define __NR_msgctl (__NR_SYSCALL_BASE + 304)
444
+#define __NR_shmat (__NR_SYSCALL_BASE + 305)
445
+#define __NR_shmdt (__NR_SYSCALL_BASE + 306)
446
+#define __NR_shmget (__NR_SYSCALL_BASE + 307)
447
+#define __NR_shmctl (__NR_SYSCALL_BASE + 308)
448
+#define __NR_add_key (__NR_SYSCALL_BASE + 309)
449
+#define __NR_request_key (__NR_SYSCALL_BASE + 310)
450
+#define __NR_keyctl (__NR_SYSCALL_BASE + 311)
451
+#define __NR_semtimedop (__NR_SYSCALL_BASE + 312)
452
+#define __NR_vserver (__NR_SYSCALL_BASE + 313)
453
+#define __NR_ioprio_set (__NR_SYSCALL_BASE + 314)
454
+#define __NR_ioprio_get (__NR_SYSCALL_BASE + 315)
455
+#define __NR_inotify_init (__NR_SYSCALL_BASE + 316)
456
+#define __NR_inotify_add_watch (__NR_SYSCALL_BASE + 317)
457
+#define __NR_inotify_rm_watch (__NR_SYSCALL_BASE + 318)
458
+#define __NR_mbind (__NR_SYSCALL_BASE + 319)
459
+#define __NR_get_mempolicy (__NR_SYSCALL_BASE + 320)
460
+#define __NR_set_mempolicy (__NR_SYSCALL_BASE + 321)
461
+#define __NR_openat (__NR_SYSCALL_BASE + 322)
462
+#define __NR_mkdirat (__NR_SYSCALL_BASE + 323)
463
+#define __NR_mknodat (__NR_SYSCALL_BASE + 324)
464
+#define __NR_fchownat (__NR_SYSCALL_BASE + 325)
465
+#define __NR_futimesat (__NR_SYSCALL_BASE + 326)
466
+#define __NR_fstatat64 (__NR_SYSCALL_BASE + 327)
467
+#define __NR_unlinkat (__NR_SYSCALL_BASE + 328)
468
+#define __NR_renameat (__NR_SYSCALL_BASE + 329)
469
+#define __NR_linkat (__NR_SYSCALL_BASE + 330)
470
+#define __NR_symlinkat (__NR_SYSCALL_BASE + 331)
471
+#define __NR_readlinkat (__NR_SYSCALL_BASE + 332)
472
+#define __NR_fchmodat (__NR_SYSCALL_BASE + 333)
473
+#define __NR_faccessat (__NR_SYSCALL_BASE + 334)
474
+#define __NR_pselect6 (__NR_SYSCALL_BASE + 335)
475
+#define __NR_ppoll (__NR_SYSCALL_BASE + 336)
476
+#define __NR_unshare (__NR_SYSCALL_BASE + 337)
477
+#define __NR_set_robust_list (__NR_SYSCALL_BASE + 338)
478
+#define __NR_get_robust_list (__NR_SYSCALL_BASE + 339)
479
+#define __NR_splice (__NR_SYSCALL_BASE + 340)
480
+#define __NR_arm_sync_file_range (__NR_SYSCALL_BASE + 341)
481
+#define __NR_tee (__NR_SYSCALL_BASE + 342)
482
+#define __NR_vmsplice (__NR_SYSCALL_BASE + 343)
483
+#define __NR_move_pages (__NR_SYSCALL_BASE + 344)
484
+#define __NR_getcpu (__NR_SYSCALL_BASE + 345)
485
+#define __NR_epoll_pwait (__NR_SYSCALL_BASE + 346)
486
+#define __NR_kexec_load (__NR_SYSCALL_BASE + 347)
487
+#define __NR_utimensat (__NR_SYSCALL_BASE + 348)
488
+#define __NR_signalfd (__NR_SYSCALL_BASE + 349)
489
+#define __NR_timerfd_create (__NR_SYSCALL_BASE + 350)
490
+#define __NR_eventfd (__NR_SYSCALL_BASE + 351)
491
+#define __NR_fallocate (__NR_SYSCALL_BASE + 352)
492
+#define __NR_timerfd_settime (__NR_SYSCALL_BASE + 353)
493
+#define __NR_timerfd_gettime (__NR_SYSCALL_BASE + 354)
494
+#define __NR_signalfd4 (__NR_SYSCALL_BASE + 355)
495
+#define __NR_eventfd2 (__NR_SYSCALL_BASE + 356)
496
+#define __NR_epoll_create1 (__NR_SYSCALL_BASE + 357)
497
+#define __NR_dup3 (__NR_SYSCALL_BASE + 358)
498
+#define __NR_pipe2 (__NR_SYSCALL_BASE + 359)
499
+#define __NR_inotify_init1 (__NR_SYSCALL_BASE + 360)
500
+#define __NR_preadv (__NR_SYSCALL_BASE + 361)
501
+#define __NR_pwritev (__NR_SYSCALL_BASE + 362)
502
+#define __NR_rt_tgsigqueueinfo (__NR_SYSCALL_BASE + 363)
503
+#define __NR_perf_event_open (__NR_SYSCALL_BASE + 364)
504
+#define __NR_recvmmsg (__NR_SYSCALL_BASE + 365)
505
+#define __NR_accept4 (__NR_SYSCALL_BASE + 366)
506
+#define __NR_fanotify_init (__NR_SYSCALL_BASE + 367)
507
+#define __NR_fanotify_mark (__NR_SYSCALL_BASE + 368)
508
+#define __NR_prlimit64 (__NR_SYSCALL_BASE + 369)
509
+#define __NR_name_to_handle_at (__NR_SYSCALL_BASE + 370)
510
+#define __NR_open_by_handle_at (__NR_SYSCALL_BASE + 371)
511
+#define __NR_clock_adjtime (__NR_SYSCALL_BASE + 372)
512
+#define __NR_syncfs (__NR_SYSCALL_BASE + 373)
513
+#define __NR_sendmmsg (__NR_SYSCALL_BASE + 374)
514
+#define __NR_setns (__NR_SYSCALL_BASE + 375)
515
+#define __NR_process_vm_readv (__NR_SYSCALL_BASE + 376)
516
+#define __NR_process_vm_writev (__NR_SYSCALL_BASE + 377)
517
+#define __NR_kcmp (__NR_SYSCALL_BASE + 378)
518
+#define __NR_finit_module (__NR_SYSCALL_BASE + 379)
519
+#define __NR_sched_setattr (__NR_SYSCALL_BASE + 380)
520
+#define __NR_sched_getattr (__NR_SYSCALL_BASE + 381)
521
+#define __NR_renameat2 (__NR_SYSCALL_BASE + 382)
522
+#define __NR_seccomp (__NR_SYSCALL_BASE + 383)
523
+#define __NR_getrandom (__NR_SYSCALL_BASE + 384)
524
+#define __NR_memfd_create (__NR_SYSCALL_BASE + 385)
525
+#define __NR_bpf (__NR_SYSCALL_BASE + 386)
526
+#define __NR_execveat (__NR_SYSCALL_BASE + 387)
527
+#define __NR_userfaultfd (__NR_SYSCALL_BASE + 388)
528
+#define __NR_membarrier (__NR_SYSCALL_BASE + 389)
529
+#define __NR_mlock2 (__NR_SYSCALL_BASE + 390)
530
+#define __NR_copy_file_range (__NR_SYSCALL_BASE + 391)
531
+#define __NR_preadv2 (__NR_SYSCALL_BASE + 392)
532
+#define __NR_pwritev2 (__NR_SYSCALL_BASE + 393)
533
+#define __NR_pkey_mprotect (__NR_SYSCALL_BASE + 394)
534
+#define __NR_pkey_alloc (__NR_SYSCALL_BASE + 395)
535
+#define __NR_pkey_free (__NR_SYSCALL_BASE + 396)
536
+
537
+#endif /* _ASM_ARM_UNISTD_COMMON_H */
538
diff --git a/linux-headers/asm-arm/unistd-eabi.h b/linux-headers/asm-arm/unistd-eabi.h
539
new file mode 100644
540
index XXXXXXX..XXXXXXX
541
--- /dev/null
542
+++ b/linux-headers/asm-arm/unistd-eabi.h
543
@@ -XXX,XX +XXX,XX @@
544
+#ifndef _ASM_ARM_UNISTD_EABI_H
545
+#define _ASM_ARM_UNISTD_EABI_H 1
546
+
547
+
548
+#endif /* _ASM_ARM_UNISTD_EABI_H */
549
diff --git a/linux-headers/asm-arm/unistd-oabi.h b/linux-headers/asm-arm/unistd-oabi.h
550
new file mode 100644
551
index XXXXXXX..XXXXXXX
552
--- /dev/null
553
+++ b/linux-headers/asm-arm/unistd-oabi.h
554
@@ -XXX,XX +XXX,XX @@
555
+#ifndef _ASM_ARM_UNISTD_OABI_H
556
+#define _ASM_ARM_UNISTD_OABI_H 1
557
+
558
+#define __NR_time (__NR_SYSCALL_BASE + 13)
559
+#define __NR_umount (__NR_SYSCALL_BASE + 22)
560
+#define __NR_stime (__NR_SYSCALL_BASE + 25)
561
+#define __NR_alarm (__NR_SYSCALL_BASE + 27)
562
+#define __NR_utime (__NR_SYSCALL_BASE + 30)
563
+#define __NR_getrlimit (__NR_SYSCALL_BASE + 76)
564
+#define __NR_select (__NR_SYSCALL_BASE + 82)
565
+#define __NR_readdir (__NR_SYSCALL_BASE + 89)
566
+#define __NR_mmap (__NR_SYSCALL_BASE + 90)
567
+#define __NR_socketcall (__NR_SYSCALL_BASE + 102)
568
+#define __NR_syscall (__NR_SYSCALL_BASE + 113)
569
+#define __NR_ipc (__NR_SYSCALL_BASE + 117)
570
+
571
+#endif /* _ASM_ARM_UNISTD_OABI_H */
572
diff --git a/linux-headers/asm-arm/unistd.h b/linux-headers/asm-arm/unistd.h
573
index XXXXXXX..XXXXXXX 100644
574
--- a/linux-headers/asm-arm/unistd.h
575
+++ b/linux-headers/asm-arm/unistd.h
576
@@ -XXX,XX +XXX,XX @@
577
578
#if defined(__thumb__) || defined(__ARM_EABI__)
579
#define __NR_SYSCALL_BASE    0
580
+#include <asm/unistd-eabi.h>
581
#else
582
#define __NR_SYSCALL_BASE    __NR_OABI_SYSCALL_BASE
583
+#include <asm/unistd-oabi.h>
584
#endif
585
586
-/*
587
- * This file contains the system call numbers.
588
- */
589
-
590
-#define __NR_restart_syscall        (__NR_SYSCALL_BASE+ 0)
591
-#define __NR_exit            (__NR_SYSCALL_BASE+ 1)
592
-#define __NR_fork            (__NR_SYSCALL_BASE+ 2)
593
-#define __NR_read            (__NR_SYSCALL_BASE+ 3)
594
-#define __NR_write            (__NR_SYSCALL_BASE+ 4)
595
-#define __NR_open            (__NR_SYSCALL_BASE+ 5)
596
-#define __NR_close            (__NR_SYSCALL_BASE+ 6)
597
-                    /* 7 was sys_waitpid */
598
-#define __NR_creat            (__NR_SYSCALL_BASE+ 8)
599
-#define __NR_link            (__NR_SYSCALL_BASE+ 9)
600
-#define __NR_unlink            (__NR_SYSCALL_BASE+ 10)
601
-#define __NR_execve            (__NR_SYSCALL_BASE+ 11)
602
-#define __NR_chdir            (__NR_SYSCALL_BASE+ 12)
603
-#define __NR_time            (__NR_SYSCALL_BASE+ 13)
604
-#define __NR_mknod            (__NR_SYSCALL_BASE+ 14)
605
-#define __NR_chmod            (__NR_SYSCALL_BASE+ 15)
606
-#define __NR_lchown            (__NR_SYSCALL_BASE+ 16)
607
-                    /* 17 was sys_break */
608
-                    /* 18 was sys_stat */
609
-#define __NR_lseek            (__NR_SYSCALL_BASE+ 19)
610
-#define __NR_getpid            (__NR_SYSCALL_BASE+ 20)
611
-#define __NR_mount            (__NR_SYSCALL_BASE+ 21)
612
-#define __NR_umount            (__NR_SYSCALL_BASE+ 22)
613
-#define __NR_setuid            (__NR_SYSCALL_BASE+ 23)
614
-#define __NR_getuid            (__NR_SYSCALL_BASE+ 24)
615
-#define __NR_stime            (__NR_SYSCALL_BASE+ 25)
616
-#define __NR_ptrace            (__NR_SYSCALL_BASE+ 26)
617
-#define __NR_alarm            (__NR_SYSCALL_BASE+ 27)
618
-                    /* 28 was sys_fstat */
619
-#define __NR_pause            (__NR_SYSCALL_BASE+ 29)
620
-#define __NR_utime            (__NR_SYSCALL_BASE+ 30)
621
-                    /* 31 was sys_stty */
622
-                    /* 32 was sys_gtty */
623
-#define __NR_access            (__NR_SYSCALL_BASE+ 33)
624
-#define __NR_nice            (__NR_SYSCALL_BASE+ 34)
625
-                    /* 35 was sys_ftime */
626
-#define __NR_sync            (__NR_SYSCALL_BASE+ 36)
627
-#define __NR_kill            (__NR_SYSCALL_BASE+ 37)
628
-#define __NR_rename            (__NR_SYSCALL_BASE+ 38)
629
-#define __NR_mkdir            (__NR_SYSCALL_BASE+ 39)
630
-#define __NR_rmdir            (__NR_SYSCALL_BASE+ 40)
631
-#define __NR_dup            (__NR_SYSCALL_BASE+ 41)
632
-#define __NR_pipe            (__NR_SYSCALL_BASE+ 42)
633
-#define __NR_times            (__NR_SYSCALL_BASE+ 43)
634
-                    /* 44 was sys_prof */
635
-#define __NR_brk            (__NR_SYSCALL_BASE+ 45)
636
-#define __NR_setgid            (__NR_SYSCALL_BASE+ 46)
637
-#define __NR_getgid            (__NR_SYSCALL_BASE+ 47)
638
-                    /* 48 was sys_signal */
639
-#define __NR_geteuid            (__NR_SYSCALL_BASE+ 49)
640
-#define __NR_getegid            (__NR_SYSCALL_BASE+ 50)
641
-#define __NR_acct            (__NR_SYSCALL_BASE+ 51)
642
-#define __NR_umount2            (__NR_SYSCALL_BASE+ 52)
643
-                    /* 53 was sys_lock */
644
-#define __NR_ioctl            (__NR_SYSCALL_BASE+ 54)
645
-#define __NR_fcntl            (__NR_SYSCALL_BASE+ 55)
646
-                    /* 56 was sys_mpx */
647
-#define __NR_setpgid            (__NR_SYSCALL_BASE+ 57)
648
-                    /* 58 was sys_ulimit */
649
-                    /* 59 was sys_olduname */
650
-#define __NR_umask            (__NR_SYSCALL_BASE+ 60)
651
-#define __NR_chroot            (__NR_SYSCALL_BASE+ 61)
652
-#define __NR_ustat            (__NR_SYSCALL_BASE+ 62)
653
-#define __NR_dup2            (__NR_SYSCALL_BASE+ 63)
654
-#define __NR_getppid            (__NR_SYSCALL_BASE+ 64)
655
-#define __NR_getpgrp            (__NR_SYSCALL_BASE+ 65)
656
-#define __NR_setsid            (__NR_SYSCALL_BASE+ 66)
657
-#define __NR_sigaction            (__NR_SYSCALL_BASE+ 67)
658
-                    /* 68 was sys_sgetmask */
659
-                    /* 69 was sys_ssetmask */
660
-#define __NR_setreuid            (__NR_SYSCALL_BASE+ 70)
661
-#define __NR_setregid            (__NR_SYSCALL_BASE+ 71)
662
-#define __NR_sigsuspend            (__NR_SYSCALL_BASE+ 72)
663
-#define __NR_sigpending            (__NR_SYSCALL_BASE+ 73)
664
-#define __NR_sethostname        (__NR_SYSCALL_BASE+ 74)
665
-#define __NR_setrlimit            (__NR_SYSCALL_BASE+ 75)
666
-#define __NR_getrlimit            (__NR_SYSCALL_BASE+ 76)    /* Back compat 2GB limited rlimit */
667
-#define __NR_getrusage            (__NR_SYSCALL_BASE+ 77)
668
-#define __NR_gettimeofday        (__NR_SYSCALL_BASE+ 78)
669
-#define __NR_settimeofday        (__NR_SYSCALL_BASE+ 79)
670
-#define __NR_getgroups            (__NR_SYSCALL_BASE+ 80)
671
-#define __NR_setgroups            (__NR_SYSCALL_BASE+ 81)
672
-#define __NR_select            (__NR_SYSCALL_BASE+ 82)
673
-#define __NR_symlink            (__NR_SYSCALL_BASE+ 83)
674
-                    /* 84 was sys_lstat */
675
-#define __NR_readlink            (__NR_SYSCALL_BASE+ 85)
676
-#define __NR_uselib            (__NR_SYSCALL_BASE+ 86)
677
-#define __NR_swapon            (__NR_SYSCALL_BASE+ 87)
678
-#define __NR_reboot            (__NR_SYSCALL_BASE+ 88)
679
-#define __NR_readdir            (__NR_SYSCALL_BASE+ 89)
680
-#define __NR_mmap            (__NR_SYSCALL_BASE+ 90)
681
-#define __NR_munmap            (__NR_SYSCALL_BASE+ 91)
682
-#define __NR_truncate            (__NR_SYSCALL_BASE+ 92)
683
-#define __NR_ftruncate            (__NR_SYSCALL_BASE+ 93)
684
-#define __NR_fchmod            (__NR_SYSCALL_BASE+ 94)
685
-#define __NR_fchown            (__NR_SYSCALL_BASE+ 95)
686
-#define __NR_getpriority        (__NR_SYSCALL_BASE+ 96)
687
-#define __NR_setpriority        (__NR_SYSCALL_BASE+ 97)
688
-                    /* 98 was sys_profil */
689
-#define __NR_statfs            (__NR_SYSCALL_BASE+ 99)
690
-#define __NR_fstatfs            (__NR_SYSCALL_BASE+100)
691
-                    /* 101 was sys_ioperm */
692
-#define __NR_socketcall            (__NR_SYSCALL_BASE+102)
693
-#define __NR_syslog            (__NR_SYSCALL_BASE+103)
694
-#define __NR_setitimer            (__NR_SYSCALL_BASE+104)
695
-#define __NR_getitimer            (__NR_SYSCALL_BASE+105)
696
-#define __NR_stat            (__NR_SYSCALL_BASE+106)
697
-#define __NR_lstat            (__NR_SYSCALL_BASE+107)
698
-#define __NR_fstat            (__NR_SYSCALL_BASE+108)
699
-                    /* 109 was sys_uname */
700
-                    /* 110 was sys_iopl */
701
-#define __NR_vhangup            (__NR_SYSCALL_BASE+111)
702
-                    /* 112 was sys_idle */
703
-#define __NR_syscall            (__NR_SYSCALL_BASE+113) /* syscall to call a syscall! */
704
-#define __NR_wait4            (__NR_SYSCALL_BASE+114)
705
-#define __NR_swapoff            (__NR_SYSCALL_BASE+115)
706
-#define __NR_sysinfo            (__NR_SYSCALL_BASE+116)
707
-#define __NR_ipc            (__NR_SYSCALL_BASE+117)
708
-#define __NR_fsync            (__NR_SYSCALL_BASE+118)
709
-#define __NR_sigreturn            (__NR_SYSCALL_BASE+119)
710
-#define __NR_clone            (__NR_SYSCALL_BASE+120)
711
-#define __NR_setdomainname        (__NR_SYSCALL_BASE+121)
712
-#define __NR_uname            (__NR_SYSCALL_BASE+122)
713
-                    /* 123 was sys_modify_ldt */
714
-#define __NR_adjtimex            (__NR_SYSCALL_BASE+124)
715
-#define __NR_mprotect            (__NR_SYSCALL_BASE+125)
716
-#define __NR_sigprocmask        (__NR_SYSCALL_BASE+126)
717
-                    /* 127 was sys_create_module */
718
-#define __NR_init_module        (__NR_SYSCALL_BASE+128)
719
-#define __NR_delete_module        (__NR_SYSCALL_BASE+129)
720
-                    /* 130 was sys_get_kernel_syms */
721
-#define __NR_quotactl            (__NR_SYSCALL_BASE+131)
722
-#define __NR_getpgid            (__NR_SYSCALL_BASE+132)
723
-#define __NR_fchdir            (__NR_SYSCALL_BASE+133)
724
-#define __NR_bdflush            (__NR_SYSCALL_BASE+134)
725
-#define __NR_sysfs            (__NR_SYSCALL_BASE+135)
726
-#define __NR_personality        (__NR_SYSCALL_BASE+136)
727
-                    /* 137 was sys_afs_syscall */
728
-#define __NR_setfsuid            (__NR_SYSCALL_BASE+138)
729
-#define __NR_setfsgid            (__NR_SYSCALL_BASE+139)
730
-#define __NR__llseek            (__NR_SYSCALL_BASE+140)
731
-#define __NR_getdents            (__NR_SYSCALL_BASE+141)
732
-#define __NR__newselect            (__NR_SYSCALL_BASE+142)
733
-#define __NR_flock            (__NR_SYSCALL_BASE+143)
734
-#define __NR_msync            (__NR_SYSCALL_BASE+144)
735
-#define __NR_readv            (__NR_SYSCALL_BASE+145)
736
-#define __NR_writev            (__NR_SYSCALL_BASE+146)
737
-#define __NR_getsid            (__NR_SYSCALL_BASE+147)
738
-#define __NR_fdatasync            (__NR_SYSCALL_BASE+148)
739
-#define __NR__sysctl            (__NR_SYSCALL_BASE+149)
740
-#define __NR_mlock            (__NR_SYSCALL_BASE+150)
741
-#define __NR_munlock            (__NR_SYSCALL_BASE+151)
742
-#define __NR_mlockall            (__NR_SYSCALL_BASE+152)
743
-#define __NR_munlockall            (__NR_SYSCALL_BASE+153)
744
-#define __NR_sched_setparam        (__NR_SYSCALL_BASE+154)
745
-#define __NR_sched_getparam        (__NR_SYSCALL_BASE+155)
746
-#define __NR_sched_setscheduler        (__NR_SYSCALL_BASE+156)
747
-#define __NR_sched_getscheduler        (__NR_SYSCALL_BASE+157)
748
-#define __NR_sched_yield        (__NR_SYSCALL_BASE+158)
749
-#define __NR_sched_get_priority_max    (__NR_SYSCALL_BASE+159)
750
-#define __NR_sched_get_priority_min    (__NR_SYSCALL_BASE+160)
751
-#define __NR_sched_rr_get_interval    (__NR_SYSCALL_BASE+161)
752
-#define __NR_nanosleep            (__NR_SYSCALL_BASE+162)
753
-#define __NR_mremap            (__NR_SYSCALL_BASE+163)
754
-#define __NR_setresuid            (__NR_SYSCALL_BASE+164)
755
-#define __NR_getresuid            (__NR_SYSCALL_BASE+165)
756
-                    /* 166 was sys_vm86 */
757
-                    /* 167 was sys_query_module */
758
-#define __NR_poll            (__NR_SYSCALL_BASE+168)
759
-#define __NR_nfsservctl            (__NR_SYSCALL_BASE+169)
760
-#define __NR_setresgid            (__NR_SYSCALL_BASE+170)
761
-#define __NR_getresgid            (__NR_SYSCALL_BASE+171)
762
-#define __NR_prctl            (__NR_SYSCALL_BASE+172)
763
-#define __NR_rt_sigreturn        (__NR_SYSCALL_BASE+173)
764
-#define __NR_rt_sigaction        (__NR_SYSCALL_BASE+174)
765
-#define __NR_rt_sigprocmask        (__NR_SYSCALL_BASE+175)
766
-#define __NR_rt_sigpending        (__NR_SYSCALL_BASE+176)
767
-#define __NR_rt_sigtimedwait        (__NR_SYSCALL_BASE+177)
768
-#define __NR_rt_sigqueueinfo        (__NR_SYSCALL_BASE+178)
769
-#define __NR_rt_sigsuspend        (__NR_SYSCALL_BASE+179)
770
-#define __NR_pread64            (__NR_SYSCALL_BASE+180)
771
-#define __NR_pwrite64            (__NR_SYSCALL_BASE+181)
772
-#define __NR_chown            (__NR_SYSCALL_BASE+182)
773
-#define __NR_getcwd            (__NR_SYSCALL_BASE+183)
774
-#define __NR_capget            (__NR_SYSCALL_BASE+184)
775
-#define __NR_capset            (__NR_SYSCALL_BASE+185)
776
-#define __NR_sigaltstack        (__NR_SYSCALL_BASE+186)
777
-#define __NR_sendfile            (__NR_SYSCALL_BASE+187)
778
-                    /* 188 reserved */
779
-                    /* 189 reserved */
780
-#define __NR_vfork            (__NR_SYSCALL_BASE+190)
781
-#define __NR_ugetrlimit            (__NR_SYSCALL_BASE+191)    /* SuS compliant getrlimit */
782
-#define __NR_mmap2            (__NR_SYSCALL_BASE+192)
783
-#define __NR_truncate64            (__NR_SYSCALL_BASE+193)
784
-#define __NR_ftruncate64        (__NR_SYSCALL_BASE+194)
785
-#define __NR_stat64            (__NR_SYSCALL_BASE+195)
786
-#define __NR_lstat64            (__NR_SYSCALL_BASE+196)
787
-#define __NR_fstat64            (__NR_SYSCALL_BASE+197)
788
-#define __NR_lchown32            (__NR_SYSCALL_BASE+198)
789
-#define __NR_getuid32            (__NR_SYSCALL_BASE+199)
790
-#define __NR_getgid32            (__NR_SYSCALL_BASE+200)
791
-#define __NR_geteuid32            (__NR_SYSCALL_BASE+201)
792
-#define __NR_getegid32            (__NR_SYSCALL_BASE+202)
793
-#define __NR_setreuid32            (__NR_SYSCALL_BASE+203)
794
-#define __NR_setregid32            (__NR_SYSCALL_BASE+204)
795
-#define __NR_getgroups32        (__NR_SYSCALL_BASE+205)
796
-#define __NR_setgroups32        (__NR_SYSCALL_BASE+206)
797
-#define __NR_fchown32            (__NR_SYSCALL_BASE+207)
798
-#define __NR_setresuid32        (__NR_SYSCALL_BASE+208)
799
-#define __NR_getresuid32        (__NR_SYSCALL_BASE+209)
800
-#define __NR_setresgid32        (__NR_SYSCALL_BASE+210)
801
-#define __NR_getresgid32        (__NR_SYSCALL_BASE+211)
802
-#define __NR_chown32            (__NR_SYSCALL_BASE+212)
803
-#define __NR_setuid32            (__NR_SYSCALL_BASE+213)
804
-#define __NR_setgid32            (__NR_SYSCALL_BASE+214)
805
-#define __NR_setfsuid32            (__NR_SYSCALL_BASE+215)
806
-#define __NR_setfsgid32            (__NR_SYSCALL_BASE+216)
807
-#define __NR_getdents64            (__NR_SYSCALL_BASE+217)
808
-#define __NR_pivot_root            (__NR_SYSCALL_BASE+218)
809
-#define __NR_mincore            (__NR_SYSCALL_BASE+219)
810
-#define __NR_madvise            (__NR_SYSCALL_BASE+220)
811
-#define __NR_fcntl64            (__NR_SYSCALL_BASE+221)
812
-                    /* 222 for tux */
813
-                    /* 223 is unused */
814
-#define __NR_gettid            (__NR_SYSCALL_BASE+224)
815
-#define __NR_readahead            (__NR_SYSCALL_BASE+225)
816
-#define __NR_setxattr            (__NR_SYSCALL_BASE+226)
817
-#define __NR_lsetxattr            (__NR_SYSCALL_BASE+227)
818
-#define __NR_fsetxattr            (__NR_SYSCALL_BASE+228)
819
-#define __NR_getxattr            (__NR_SYSCALL_BASE+229)
820
-#define __NR_lgetxattr            (__NR_SYSCALL_BASE+230)
821
-#define __NR_fgetxattr            (__NR_SYSCALL_BASE+231)
822
-#define __NR_listxattr            (__NR_SYSCALL_BASE+232)
823
-#define __NR_llistxattr            (__NR_SYSCALL_BASE+233)
824
-#define __NR_flistxattr            (__NR_SYSCALL_BASE+234)
825
-#define __NR_removexattr        (__NR_SYSCALL_BASE+235)
826
-#define __NR_lremovexattr        (__NR_SYSCALL_BASE+236)
827
-#define __NR_fremovexattr        (__NR_SYSCALL_BASE+237)
828
-#define __NR_tkill            (__NR_SYSCALL_BASE+238)
829
-#define __NR_sendfile64            (__NR_SYSCALL_BASE+239)
830
-#define __NR_futex            (__NR_SYSCALL_BASE+240)
831
-#define __NR_sched_setaffinity        (__NR_SYSCALL_BASE+241)
832
-#define __NR_sched_getaffinity        (__NR_SYSCALL_BASE+242)
833
-#define __NR_io_setup            (__NR_SYSCALL_BASE+243)
834
-#define __NR_io_destroy            (__NR_SYSCALL_BASE+244)
835
-#define __NR_io_getevents        (__NR_SYSCALL_BASE+245)
836
-#define __NR_io_submit            (__NR_SYSCALL_BASE+246)
837
-#define __NR_io_cancel            (__NR_SYSCALL_BASE+247)
838
-#define __NR_exit_group            (__NR_SYSCALL_BASE+248)
839
-#define __NR_lookup_dcookie        (__NR_SYSCALL_BASE+249)
840
-#define __NR_epoll_create        (__NR_SYSCALL_BASE+250)
841
-#define __NR_epoll_ctl            (__NR_SYSCALL_BASE+251)
842
-#define __NR_epoll_wait            (__NR_SYSCALL_BASE+252)
843
-#define __NR_remap_file_pages        (__NR_SYSCALL_BASE+253)
844
-                    /* 254 for set_thread_area */
845
-                    /* 255 for get_thread_area */
846
-#define __NR_set_tid_address        (__NR_SYSCALL_BASE+256)
847
-#define __NR_timer_create        (__NR_SYSCALL_BASE+257)
848
-#define __NR_timer_settime        (__NR_SYSCALL_BASE+258)
849
-#define __NR_timer_gettime        (__NR_SYSCALL_BASE+259)
850
-#define __NR_timer_getoverrun        (__NR_SYSCALL_BASE+260)
851
-#define __NR_timer_delete        (__NR_SYSCALL_BASE+261)
852
-#define __NR_clock_settime        (__NR_SYSCALL_BASE+262)
853
-#define __NR_clock_gettime        (__NR_SYSCALL_BASE+263)
854
-#define __NR_clock_getres        (__NR_SYSCALL_BASE+264)
855
-#define __NR_clock_nanosleep        (__NR_SYSCALL_BASE+265)
856
-#define __NR_statfs64            (__NR_SYSCALL_BASE+266)
857
-#define __NR_fstatfs64            (__NR_SYSCALL_BASE+267)
858
-#define __NR_tgkill            (__NR_SYSCALL_BASE+268)
859
-#define __NR_utimes            (__NR_SYSCALL_BASE+269)
860
-#define __NR_arm_fadvise64_64        (__NR_SYSCALL_BASE+270)
861
-#define __NR_pciconfig_iobase        (__NR_SYSCALL_BASE+271)
862
-#define __NR_pciconfig_read        (__NR_SYSCALL_BASE+272)
863
-#define __NR_pciconfig_write        (__NR_SYSCALL_BASE+273)
864
-#define __NR_mq_open            (__NR_SYSCALL_BASE+274)
865
-#define __NR_mq_unlink            (__NR_SYSCALL_BASE+275)
866
-#define __NR_mq_timedsend        (__NR_SYSCALL_BASE+276)
867
-#define __NR_mq_timedreceive        (__NR_SYSCALL_BASE+277)
868
-#define __NR_mq_notify            (__NR_SYSCALL_BASE+278)
869
-#define __NR_mq_getsetattr        (__NR_SYSCALL_BASE+279)
870
-#define __NR_waitid            (__NR_SYSCALL_BASE+280)
871
-#define __NR_socket            (__NR_SYSCALL_BASE+281)
872
-#define __NR_bind            (__NR_SYSCALL_BASE+282)
873
-#define __NR_connect            (__NR_SYSCALL_BASE+283)
874
-#define __NR_listen            (__NR_SYSCALL_BASE+284)
875
-#define __NR_accept            (__NR_SYSCALL_BASE+285)
876
-#define __NR_getsockname        (__NR_SYSCALL_BASE+286)
877
-#define __NR_getpeername        (__NR_SYSCALL_BASE+287)
878
-#define __NR_socketpair            (__NR_SYSCALL_BASE+288)
879
-#define __NR_send            (__NR_SYSCALL_BASE+289)
880
-#define __NR_sendto            (__NR_SYSCALL_BASE+290)
881
-#define __NR_recv            (__NR_SYSCALL_BASE+291)
882
-#define __NR_recvfrom            (__NR_SYSCALL_BASE+292)
883
-#define __NR_shutdown            (__NR_SYSCALL_BASE+293)
884
-#define __NR_setsockopt            (__NR_SYSCALL_BASE+294)
885
-#define __NR_getsockopt            (__NR_SYSCALL_BASE+295)
886
-#define __NR_sendmsg            (__NR_SYSCALL_BASE+296)
887
-#define __NR_recvmsg            (__NR_SYSCALL_BASE+297)
888
-#define __NR_semop            (__NR_SYSCALL_BASE+298)
889
-#define __NR_semget            (__NR_SYSCALL_BASE+299)
890
-#define __NR_semctl            (__NR_SYSCALL_BASE+300)
891
-#define __NR_msgsnd            (__NR_SYSCALL_BASE+301)
892
-#define __NR_msgrcv            (__NR_SYSCALL_BASE+302)
893
-#define __NR_msgget            (__NR_SYSCALL_BASE+303)
894
-#define __NR_msgctl            (__NR_SYSCALL_BASE+304)
895
-#define __NR_shmat            (__NR_SYSCALL_BASE+305)
896
-#define __NR_shmdt            (__NR_SYSCALL_BASE+306)
897
-#define __NR_shmget            (__NR_SYSCALL_BASE+307)
898
-#define __NR_shmctl            (__NR_SYSCALL_BASE+308)
899
-#define __NR_add_key            (__NR_SYSCALL_BASE+309)
900
-#define __NR_request_key        (__NR_SYSCALL_BASE+310)
901
-#define __NR_keyctl            (__NR_SYSCALL_BASE+311)
902
-#define __NR_semtimedop            (__NR_SYSCALL_BASE+312)
903
-#define __NR_vserver            (__NR_SYSCALL_BASE+313)
904
-#define __NR_ioprio_set            (__NR_SYSCALL_BASE+314)
905
-#define __NR_ioprio_get            (__NR_SYSCALL_BASE+315)
906
-#define __NR_inotify_init        (__NR_SYSCALL_BASE+316)
907
-#define __NR_inotify_add_watch        (__NR_SYSCALL_BASE+317)
908
-#define __NR_inotify_rm_watch        (__NR_SYSCALL_BASE+318)
909
-#define __NR_mbind            (__NR_SYSCALL_BASE+319)
910
-#define __NR_get_mempolicy        (__NR_SYSCALL_BASE+320)
911
-#define __NR_set_mempolicy        (__NR_SYSCALL_BASE+321)
912
-#define __NR_openat            (__NR_SYSCALL_BASE+322)
913
-#define __NR_mkdirat            (__NR_SYSCALL_BASE+323)
914
-#define __NR_mknodat            (__NR_SYSCALL_BASE+324)
915
-#define __NR_fchownat            (__NR_SYSCALL_BASE+325)
916
-#define __NR_futimesat            (__NR_SYSCALL_BASE+326)
917
-#define __NR_fstatat64            (__NR_SYSCALL_BASE+327)
918
-#define __NR_unlinkat            (__NR_SYSCALL_BASE+328)
919
-#define __NR_renameat            (__NR_SYSCALL_BASE+329)
920
-#define __NR_linkat            (__NR_SYSCALL_BASE+330)
921
-#define __NR_symlinkat            (__NR_SYSCALL_BASE+331)
922
-#define __NR_readlinkat            (__NR_SYSCALL_BASE+332)
923
-#define __NR_fchmodat            (__NR_SYSCALL_BASE+333)
924
-#define __NR_faccessat            (__NR_SYSCALL_BASE+334)
925
-#define __NR_pselect6            (__NR_SYSCALL_BASE+335)
926
-#define __NR_ppoll            (__NR_SYSCALL_BASE+336)
927
-#define __NR_unshare            (__NR_SYSCALL_BASE+337)
928
-#define __NR_set_robust_list        (__NR_SYSCALL_BASE+338)
929
-#define __NR_get_robust_list        (__NR_SYSCALL_BASE+339)
930
-#define __NR_splice            (__NR_SYSCALL_BASE+340)
931
-#define __NR_arm_sync_file_range    (__NR_SYSCALL_BASE+341)
932
+#include <asm/unistd-common.h>
933
#define __NR_sync_file_range2        __NR_arm_sync_file_range
934
-#define __NR_tee            (__NR_SYSCALL_BASE+342)
935
-#define __NR_vmsplice            (__NR_SYSCALL_BASE+343)
936
-#define __NR_move_pages            (__NR_SYSCALL_BASE+344)
937
-#define __NR_getcpu            (__NR_SYSCALL_BASE+345)
938
-#define __NR_epoll_pwait        (__NR_SYSCALL_BASE+346)
939
-#define __NR_kexec_load            (__NR_SYSCALL_BASE+347)
940
-#define __NR_utimensat            (__NR_SYSCALL_BASE+348)
941
-#define __NR_signalfd            (__NR_SYSCALL_BASE+349)
942
-#define __NR_timerfd_create        (__NR_SYSCALL_BASE+350)
943
-#define __NR_eventfd            (__NR_SYSCALL_BASE+351)
944
-#define __NR_fallocate            (__NR_SYSCALL_BASE+352)
945
-#define __NR_timerfd_settime        (__NR_SYSCALL_BASE+353)
946
-#define __NR_timerfd_gettime        (__NR_SYSCALL_BASE+354)
947
-#define __NR_signalfd4            (__NR_SYSCALL_BASE+355)
948
-#define __NR_eventfd2            (__NR_SYSCALL_BASE+356)
949
-#define __NR_epoll_create1        (__NR_SYSCALL_BASE+357)
950
-#define __NR_dup3            (__NR_SYSCALL_BASE+358)
951
-#define __NR_pipe2            (__NR_SYSCALL_BASE+359)
952
-#define __NR_inotify_init1        (__NR_SYSCALL_BASE+360)
953
-#define __NR_preadv            (__NR_SYSCALL_BASE+361)
954
-#define __NR_pwritev            (__NR_SYSCALL_BASE+362)
955
-#define __NR_rt_tgsigqueueinfo        (__NR_SYSCALL_BASE+363)
956
-#define __NR_perf_event_open        (__NR_SYSCALL_BASE+364)
957
-#define __NR_recvmmsg            (__NR_SYSCALL_BASE+365)
958
-#define __NR_accept4            (__NR_SYSCALL_BASE+366)
959
-#define __NR_fanotify_init        (__NR_SYSCALL_BASE+367)
960
-#define __NR_fanotify_mark        (__NR_SYSCALL_BASE+368)
961
-#define __NR_prlimit64            (__NR_SYSCALL_BASE+369)
962
-#define __NR_name_to_handle_at        (__NR_SYSCALL_BASE+370)
963
-#define __NR_open_by_handle_at        (__NR_SYSCALL_BASE+371)
964
-#define __NR_clock_adjtime        (__NR_SYSCALL_BASE+372)
965
-#define __NR_syncfs            (__NR_SYSCALL_BASE+373)
966
-#define __NR_sendmmsg            (__NR_SYSCALL_BASE+374)
967
-#define __NR_setns            (__NR_SYSCALL_BASE+375)
968
-#define __NR_process_vm_readv        (__NR_SYSCALL_BASE+376)
969
-#define __NR_process_vm_writev        (__NR_SYSCALL_BASE+377)
970
-#define __NR_kcmp            (__NR_SYSCALL_BASE+378)
971
-#define __NR_finit_module        (__NR_SYSCALL_BASE+379)
972
-#define __NR_sched_setattr        (__NR_SYSCALL_BASE+380)
973
-#define __NR_sched_getattr        (__NR_SYSCALL_BASE+381)
974
-#define __NR_renameat2            (__NR_SYSCALL_BASE+382)
975
-#define __NR_seccomp            (__NR_SYSCALL_BASE+383)
976
-#define __NR_getrandom            (__NR_SYSCALL_BASE+384)
977
-#define __NR_memfd_create        (__NR_SYSCALL_BASE+385)
978
-#define __NR_bpf            (__NR_SYSCALL_BASE+386)
979
-#define __NR_execveat            (__NR_SYSCALL_BASE+387)
980
-#define __NR_userfaultfd        (__NR_SYSCALL_BASE+388)
981
-#define __NR_membarrier            (__NR_SYSCALL_BASE+389)
982
-#define __NR_mlock2            (__NR_SYSCALL_BASE+390)
983
-#define __NR_copy_file_range        (__NR_SYSCALL_BASE+391)
984
-#define __NR_preadv2            (__NR_SYSCALL_BASE+392)
985
-#define __NR_pwritev2            (__NR_SYSCALL_BASE+393)
986
987
/*
988
* The following SWIs are ARM private.
989
@@ -XXX,XX +XXX,XX @@
990
#define __ARM_NR_usr32            (__ARM_NR_BASE+4)
991
#define __ARM_NR_set_tls        (__ARM_NR_BASE+5)
992
993
-/*
994
- * The following syscalls are obsolete and no longer available for EABI.
995
- */
996
-#if defined(__ARM_EABI__)
997
-#undef __NR_time
998
-#undef __NR_umount
999
-#undef __NR_stime
1000
-#undef __NR_alarm
1001
-#undef __NR_utime
1002
-#undef __NR_getrlimit
1003
-#undef __NR_select
1004
-#undef __NR_readdir
1005
-#undef __NR_mmap
1006
-#undef __NR_socketcall
1007
-#undef __NR_syscall
1008
-#undef __NR_ipc
1009
-#endif
1010
-
1011
#endif /* __ASM_ARM_UNISTD_H */
1012
diff --git a/linux-headers/asm-arm64/kvm.h b/linux-headers/asm-arm64/kvm.h
1013
index XXXXXXX..XXXXXXX 100644
1014
--- a/linux-headers/asm-arm64/kvm.h
1015
+++ b/linux-headers/asm-arm64/kvm.h
1016
@@ -XXX,XX +XXX,XX @@ struct kvm_arch_memory_slot {
1017
#define KVM_DEV_ARM_VGIC_GRP_CPU_REGS    2
1018
#define KVM_DEV_ARM_VGIC_CPUID_SHIFT    32
1019
#define KVM_DEV_ARM_VGIC_CPUID_MASK    (0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
1020
+#define KVM_DEV_ARM_VGIC_V3_MPIDR_SHIFT 32
1021
+#define KVM_DEV_ARM_VGIC_V3_MPIDR_MASK \
1022
+            (0xffffffffULL << KVM_DEV_ARM_VGIC_V3_MPIDR_SHIFT)
1023
#define KVM_DEV_ARM_VGIC_OFFSET_SHIFT    0
1024
#define KVM_DEV_ARM_VGIC_OFFSET_MASK    (0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
1025
+#define KVM_DEV_ARM_VGIC_SYSREG_INSTR_MASK (0xffff)
1026
#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS    3
1027
#define KVM_DEV_ARM_VGIC_GRP_CTRL    4
1028
+#define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5
1029
+#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
1030
+#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
1031
+#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT    10
1032
+#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
1033
+            (0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
1034
+#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INTID_MASK    0x3ff
1035
+#define VGIC_LEVEL_INFO_LINE_LEVEL    0
1036
+
1037
#define KVM_DEV_ARM_VGIC_CTRL_INIT    0
1038
1039
/* Device Control API on vcpu fd */
1040
diff --git a/linux-headers/asm-powerpc/kvm.h b/linux-headers/asm-powerpc/kvm.h
1041
index XXXXXXX..XXXXXXX 100644
1042
--- a/linux-headers/asm-powerpc/kvm.h
1043
+++ b/linux-headers/asm-powerpc/kvm.h
1044
@@ -XXX,XX +XXX,XX @@ struct kvm_get_htab_header {
1045
    __u16    n_invalid;
1046
};
1047
1048
+/* For KVM_PPC_CONFIGURE_V3_MMU */
1049
+struct kvm_ppc_mmuv3_cfg {
1050
+    __u64    flags;
1051
+    __u64    process_table;    /* second doubleword of partition table entry */
1052
+};
180
+};
1053
+
181
+
1054
+/* Flag values for KVM_PPC_CONFIGURE_V3_MMU */
182
+const GVecGen3 bit_op = {
1055
+#define KVM_PPC_MMUV3_RADIX    1    /* 1 = radix mode, 0 = HPT */
183
+ .fni8 = gen_bit_i64,
1056
+#define KVM_PPC_MMUV3_GTSE    2    /* global translation shootdown enb. */
184
+ .fniv = gen_bit_vec,
1057
+
185
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1058
+/* For KVM_PPC_GET_RMMU_INFO */
186
+ .load_dest = true
1059
+struct kvm_ppc_rmmu_info {
1060
+    struct kvm_ppc_radix_geom {
1061
+        __u8    page_shift;
1062
+        __u8    level_bits[4];
1063
+        __u8    pad[3];
1064
+    }    geometries[8];
1065
+    __u32    ap_encodings[8];
1066
+};
187
+};
1067
+
188
+
1068
/* Per-vcpu XICS interrupt controller state */
189
+const GVecGen3 bif_op = {
1069
#define KVM_REG_PPC_ICP_STATE    (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0x8c)
190
+ .fni8 = gen_bif_i64,
1070
191
+ .fniv = gen_bif_vec,
1071
@@ -XXX,XX +XXX,XX @@ struct kvm_get_htab_header {
192
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
1072
#define KVM_REG_PPC_SPRG9    (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xba)
193
+ .load_dest = true
1073
#define KVM_REG_PPC_DBSR    (KVM_REG_PPC | KVM_REG_SIZE_U32 | 0xbb)
1074
1075
+/* POWER9 registers */
1076
+#define KVM_REG_PPC_TIDR    (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbc)
1077
+#define KVM_REG_PPC_PSSCR    (KVM_REG_PPC | KVM_REG_SIZE_U64 | 0xbd)
1078
+
1079
/* Transactional Memory checkpointed state:
1080
* This is all GPRs, all VSX regs and a subset of SPRs
1081
*/
1082
@@ -XXX,XX +XXX,XX @@ struct kvm_get_htab_header {
1083
#define KVM_REG_PPC_TM_VSCR    (KVM_REG_PPC_TM | KVM_REG_SIZE_U32 | 0x67)
1084
#define KVM_REG_PPC_TM_DSCR    (KVM_REG_PPC_TM | KVM_REG_SIZE_U64 | 0x68)
1085
#define KVM_REG_PPC_TM_TAR    (KVM_REG_PPC_TM | KVM_REG_SIZE_U64 | 0x69)
1086
+#define KVM_REG_PPC_TM_XER    (KVM_REG_PPC_TM | KVM_REG_SIZE_U64 | 0x6a)
1087
1088
/* PPC64 eXternal Interrupt Controller Specification */
1089
#define KVM_DEV_XICS_GRP_SOURCES    1    /* 64-bit source attributes */
1090
@@ -XXX,XX +XXX,XX @@ struct kvm_get_htab_header {
1091
#define KVM_XICS_LEVEL_SENSITIVE    (1ULL << 40)
1092
#define KVM_XICS_MASKED        (1ULL << 41)
1093
#define KVM_XICS_PENDING        (1ULL << 42)
1094
+#define KVM_XICS_PRESENTED        (1ULL << 43)
1095
+#define KVM_XICS_QUEUED        (1ULL << 44)
1096
1097
#endif /* __LINUX_KVM_POWERPC_H */
1098
diff --git a/linux-headers/asm-powerpc/unistd.h b/linux-headers/asm-powerpc/unistd.h
1099
index XXXXXXX..XXXXXXX 100644
1100
--- a/linux-headers/asm-powerpc/unistd.h
1101
+++ b/linux-headers/asm-powerpc/unistd.h
1102
@@ -XXX,XX +XXX,XX @@
1103
#define __NR_copy_file_range    379
1104
#define __NR_preadv2        380
1105
#define __NR_pwritev2        381
1106
+#define __NR_kexec_file_load    382
1107
1108
#endif /* _ASM_POWERPC_UNISTD_H_ */
1109
diff --git a/linux-headers/asm-x86/kvm_para.h b/linux-headers/asm-x86/kvm_para.h
1110
index XXXXXXX..XXXXXXX 100644
1111
--- a/linux-headers/asm-x86/kvm_para.h
1112
+++ b/linux-headers/asm-x86/kvm_para.h
1113
@@ -XXX,XX +XXX,XX @@ struct kvm_steal_time {
1114
    __u64 steal;
1115
    __u32 version;
1116
    __u32 flags;
1117
-    __u32 pad[12];
1118
+    __u8 preempted;
1119
+    __u8 u8_pad[3];
1120
+    __u32 pad[11];
1121
+};
194
+};
1122
+
195
+
1123
+#define KVM_CLOCK_PAIRING_WALLCLOCK 0
196
+
1124
+struct kvm_clock_pairing {
197
/* Translate a NEON data processing instruction. Return nonzero if the
1125
+    __s64 sec;
198
instruction is invalid.
1126
+    __s64 nsec;
199
We process data in a mixture of 32-bit and 64-bit chunks.
1127
+    __u64 tsc;
200
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
1128
+    __u32 flags;
201
{
1129
+    __u32 pad[9];
202
int op;
1130
};
203
int q;
1131
204
- int rd, rn, rm;
1132
#define KVM_STEAL_ALIGNMENT_BITS 5
205
+ int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
1133
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
206
int size;
1134
index XXXXXXX..XXXXXXX 100644
207
int shift;
1135
--- a/linux-headers/linux/kvm.h
208
int pass;
1136
+++ b/linux-headers/linux/kvm.h
209
int count;
1137
@@ -XXX,XX +XXX,XX @@ struct kvm_hyperv_exit {
210
int pairwise;
1138
struct kvm_run {
211
int u;
1139
    /* in */
212
+ int vec_size;
1140
    __u8 request_interrupt_window;
213
uint32_t imm, mask;
1141
-    __u8 padding1[7];
214
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
1142
+    __u8 immediate_exit;
215
TCGv_ptr ptr1, ptr2, ptr3;
1143
+    __u8 padding1[6];
216
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
1144
217
VFP_DREG_N(rn, insn);
1145
    /* out */
218
VFP_DREG_M(rm, insn);
1146
    __u32 exit_reason;
219
size = (insn >> 20) & 3;
1147
@@ -XXX,XX +XXX,XX @@ struct kvm_enable_cap {
220
+ vec_size = q ? 16 : 8;
1148
};
221
+ rd_ofs = neon_reg_offset(rd, 0);
1149
222
+ rn_ofs = neon_reg_offset(rn, 0);
1150
/* for KVM_PPC_GET_PVINFO */
223
+ rm_ofs = neon_reg_offset(rm, 0);
1151
+
224
+
1152
+#define KVM_PPC_PVINFO_FLAGS_EV_IDLE (1<<0)
225
if ((insn & (1 << 23)) == 0) {
1153
+
226
/* Three register same length. */
1154
struct kvm_ppc_pvinfo {
227
op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
1155
    /* out */
228
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
1156
    __u32 flags;
229
q, rd, rn, rm);
1157
@@ -XXX,XX +XXX,XX @@ struct kvm_ppc_smmu_info {
230
}
1158
    struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ];
231
return 1;
1159
};
232
+
1160
233
+ case NEON_3R_LOGIC: /* Logic ops. */
1161
-#define KVM_PPC_PVINFO_FLAGS_EV_IDLE (1<<0)
234
+ switch ((u << 2) | size) {
1162
+/* for KVM_PPC_RESIZE_HPT_{PREPARE,COMMIT} */
235
+ case 0: /* VAND */
1163
+struct kvm_ppc_resize_hpt {
236
+ tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
1164
+    __u64 flags;
237
+ vec_size, vec_size);
1165
+    __u32 shift;
238
+ break;
1166
+    __u32 pad;
239
+ case 1: /* VBIC */
1167
+};
240
+ tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
1168
241
+ vec_size, vec_size);
1169
#define KVMIO 0xAE
242
+ break;
1170
243
+ case 2:
1171
@@ -XXX,XX +XXX,XX @@ struct kvm_ppc_smmu_info {
244
+ if (rn == rm) {
1172
#define KVM_CAP_S390_USER_INSTR0 130
245
+ /* VMOV */
1173
#define KVM_CAP_MSI_DEVID 131
246
+ tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
1174
#define KVM_CAP_PPC_HTM 132
247
+ } else {
1175
+#define KVM_CAP_SPAPR_RESIZE_HPT 133
248
+ /* VORR */
1176
+#define KVM_CAP_PPC_MMU_RADIX 134
249
+ tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
1177
+#define KVM_CAP_PPC_MMU_HASH_V3 135
250
+ vec_size, vec_size);
1178
+#define KVM_CAP_IMMEDIATE_EXIT 136
251
+ }
1179
252
+ break;
1180
#ifdef KVM_CAP_IRQ_ROUTING
253
+ case 3: /* VORN */
1181
254
+ tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
1182
@@ -XXX,XX +XXX,XX @@ struct kvm_s390_ucas_mapping {
255
+ vec_size, vec_size);
1183
#define KVM_ARM_SET_DEVICE_ADDR     _IOW(KVMIO, 0xab, struct kvm_arm_device_addr)
256
+ break;
1184
/* Available with KVM_CAP_PPC_RTAS */
257
+ case 4: /* VEOR */
1185
#define KVM_PPC_RTAS_DEFINE_TOKEN _IOW(KVMIO, 0xac, struct kvm_rtas_token_args)
258
+ tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
1186
+/* Available with KVM_CAP_SPAPR_RESIZE_HPT */
259
+ vec_size, vec_size);
1187
+#define KVM_PPC_RESIZE_HPT_PREPARE _IOR(KVMIO, 0xad, struct kvm_ppc_resize_hpt)
260
+ break;
1188
+#define KVM_PPC_RESIZE_HPT_COMMIT _IOR(KVMIO, 0xae, struct kvm_ppc_resize_hpt)
261
+ case 5: /* VBSL */
1189
+/* Available with KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_HASH_MMU_V3 */
262
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
1190
+#define KVM_PPC_CONFIGURE_V3_MMU _IOW(KVMIO, 0xaf, struct kvm_ppc_mmuv3_cfg)
263
+ vec_size, vec_size, &bsl_op);
1191
+/* Available with KVM_CAP_PPC_RADIX_MMU */
264
+ break;
1192
+#define KVM_PPC_GET_RMMU_INFO     _IOW(KVMIO, 0xb0, struct kvm_ppc_rmmu_info)
265
+ case 6: /* VBIT */
1193
266
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
1194
/* ioctl for vm fd */
267
+ vec_size, vec_size, &bit_op);
1195
#define KVM_CREATE_DEVICE     _IOWR(KVMIO, 0xe0, struct kvm_create_device)
268
+ break;
1196
diff --git a/linux-headers/linux/kvm_para.h b/linux-headers/linux/kvm_para.h
269
+ case 7: /* VBIF */
1197
index XXXXXXX..XXXXXXX 100644
270
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
1198
--- a/linux-headers/linux/kvm_para.h
271
+ vec_size, vec_size, &bif_op);
1199
+++ b/linux-headers/linux/kvm_para.h
272
+ break;
1200
@@ -XXX,XX +XXX,XX @@
273
+ }
1201
#define KVM_EFAULT        EFAULT
274
+ return 0;
1202
#define KVM_E2BIG        E2BIG
275
}
1203
#define KVM_EPERM        EPERM
276
- if (size == 3 && op != NEON_3R_LOGIC) {
1204
+#define KVM_EOPNOTSUPP        95
277
+ if (size == 3) {
1205
278
/* 64-bit element instructions. */
1206
#define KVM_HC_VAPIC_POLL_IRQ        1
279
for (pass = 0; pass < (q ? 2 : 1); pass++) {
1207
#define KVM_HC_MMU_OP            2
280
neon_load_reg64(cpu_V0, rn + pass);
1208
@@ -XXX,XX +XXX,XX @@
281
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
1209
#define KVM_HC_MIPS_GET_CLOCK_FREQ    6
282
case NEON_3R_VRHADD:
1210
#define KVM_HC_MIPS_EXIT_VM        7
283
GEN_NEON_INTEGER_OP(rhadd);
1211
#define KVM_HC_MIPS_CONSOLE_OUTPUT    8
284
break;
1212
+#define KVM_HC_CLOCK_PAIRING        9
285
- case NEON_3R_LOGIC: /* Logic ops. */
1213
286
- switch ((u << 2) | size) {
1214
/*
287
- case 0: /* VAND */
1215
* hypercalls use architecture specific
288
- tcg_gen_and_i32(tmp, tmp, tmp2);
1216
diff --git a/linux-headers/linux/userfaultfd.h b/linux-headers/linux/userfaultfd.h
289
- break;
1217
index XXXXXXX..XXXXXXX 100644
290
- case 1: /* BIC */
1218
--- a/linux-headers/linux/userfaultfd.h
291
- tcg_gen_andc_i32(tmp, tmp, tmp2);
1219
+++ b/linux-headers/linux/userfaultfd.h
292
- break;
1220
@@ -XXX,XX +XXX,XX @@
293
- case 2: /* VORR */
1221
294
- tcg_gen_or_i32(tmp, tmp, tmp2);
1222
#include <linux/types.h>
295
- break;
1223
296
- case 3: /* VORN */
1224
-#define UFFD_API ((__u64)0xAA)
297
- tcg_gen_orc_i32(tmp, tmp, tmp2);
1225
/*
298
- break;
1226
- * After implementing the respective features it will become:
299
- case 4: /* VEOR */
1227
- * #define UFFD_API_FEATURES (UFFD_FEATURE_PAGEFAULT_FLAG_WP | \
300
- tcg_gen_xor_i32(tmp, tmp, tmp2);
1228
- *             UFFD_FEATURE_EVENT_FORK)
301
- break;
1229
+ * If the UFFDIO_API is upgraded someday, the UFFDIO_UNREGISTER and
302
- case 5: /* VBSL */
1230
+ * UFFDIO_WAKE ioctls should be defined as _IOW and not as _IOR. In
303
- tmp3 = neon_load_reg(rd, pass);
1231
+ * userfaultfd.h we assumed the kernel was reading (instead _IOC_READ
304
- gen_neon_bsl(tmp, tmp, tmp2, tmp3);
1232
+ * means the userland is reading).
305
- tcg_temp_free_i32(tmp3);
1233
*/
306
- break;
1234
-#define UFFD_API_FEATURES (0)
307
- case 6: /* VBIT */
1235
+#define UFFD_API ((__u64)0xAA)
308
- tmp3 = neon_load_reg(rd, pass);
1236
+#define UFFD_API_FEATURES (UFFD_FEATURE_EVENT_FORK |        \
309
- gen_neon_bsl(tmp, tmp, tmp3, tmp2);
1237
+             UFFD_FEATURE_EVENT_REMAP |        \
310
- tcg_temp_free_i32(tmp3);
1238
+             UFFD_FEATURE_EVENT_MADVDONTNEED |    \
311
- break;
1239
+             UFFD_FEATURE_MISSING_HUGETLBFS |    \
312
- case 7: /* VBIF */
1240
+             UFFD_FEATURE_MISSING_SHMEM)
313
- tmp3 = neon_load_reg(rd, pass);
1241
#define UFFD_API_IOCTLS                \
314
- gen_neon_bsl(tmp, tmp3, tmp, tmp2);
1242
    ((__u64)1 << _UFFDIO_REGISTER |        \
315
- tcg_temp_free_i32(tmp3);
1243
     (__u64)1 << _UFFDIO_UNREGISTER |    \
316
- break;
1244
@@ -XXX,XX +XXX,XX @@
317
- }
1245
    ((__u64)1 << _UFFDIO_WAKE |        \
318
- break;
1246
     (__u64)1 << _UFFDIO_COPY |        \
319
case NEON_3R_VHSUB:
1247
     (__u64)1 << _UFFDIO_ZEROPAGE)
320
GEN_NEON_INTEGER_OP(hsub);
1248
+#define UFFD_API_RANGE_IOCTLS_BASIC        \
321
break;
1249
+    ((__u64)1 << _UFFDIO_WAKE |        \
1250
+     (__u64)1 << _UFFDIO_COPY)
1251
1252
/*
1253
* Valid ioctl command number range with this API is from 0x00 to
1254
@@ -XXX,XX +XXX,XX @@ struct uffd_msg {
1255
        } pagefault;
1256
1257
        struct {
1258
+            __u32    ufd;
1259
+        } fork;
1260
+
1261
+        struct {
1262
+            __u64    from;
1263
+            __u64    to;
1264
+            __u64    len;
1265
+        } remap;
1266
+
1267
+        struct {
1268
+            __u64    start;
1269
+            __u64    end;
1270
+        } madv_dn;
1271
+
1272
+        struct {
1273
            /* unused reserved fields */
1274
            __u64    reserved1;
1275
            __u64    reserved2;
1276
@@ -XXX,XX +XXX,XX @@ struct uffd_msg {
1277
* Start at 0x12 and not at 0 to be more strict against bugs.
1278
*/
1279
#define UFFD_EVENT_PAGEFAULT    0x12
1280
-#if 0 /* not available yet */
1281
#define UFFD_EVENT_FORK        0x13
1282
-#endif
1283
+#define UFFD_EVENT_REMAP    0x14
1284
+#define UFFD_EVENT_MADVDONTNEED    0x15
1285
1286
/* flags for UFFD_EVENT_PAGEFAULT */
1287
#define UFFD_PAGEFAULT_FLAG_WRITE    (1<<0)    /* If this was a write fault */
1288
@@ -XXX,XX +XXX,XX @@ struct uffdio_api {
1289
     * Note: UFFD_EVENT_PAGEFAULT and UFFD_PAGEFAULT_FLAG_WRITE
1290
     * are to be considered implicitly always enabled in all kernels as
1291
     * long as the uffdio_api.api requested matches UFFD_API.
1292
+     *
1293
+     * UFFD_FEATURE_MISSING_HUGETLBFS means an UFFDIO_REGISTER
1294
+     * with UFFDIO_REGISTER_MODE_MISSING mode will succeed on
1295
+     * hugetlbfs virtual memory ranges. Adding or not adding
1296
+     * UFFD_FEATURE_MISSING_HUGETLBFS to uffdio_api.features has
1297
+     * no real functional effect after UFFDIO_API returns, but
1298
+     * it's only useful for an initial feature set probe at
1299
+     * UFFDIO_API time. There are two ways to use it:
1300
+     *
1301
+     * 1) by adding UFFD_FEATURE_MISSING_HUGETLBFS to the
1302
+     * uffdio_api.features before calling UFFDIO_API, an error
1303
+     * will be returned by UFFDIO_API on a kernel without
1304
+     * hugetlbfs missing support
1305
+     *
1306
+     * 2) the UFFD_FEATURE_MISSING_HUGETLBFS can not be added in
1307
+     * uffdio_api.features and instead it will be set by the
1308
+     * kernel in the uffdio_api.features if the kernel supports
1309
+     * it, so userland can later check if the feature flag is
1310
+     * present in uffdio_api.features after UFFDIO_API
1311
+     * succeeded.
1312
+     *
1313
+     * UFFD_FEATURE_MISSING_SHMEM works the same as
1314
+     * UFFD_FEATURE_MISSING_HUGETLBFS, but it applies to shmem
1315
+     * (i.e. tmpfs and other shmem based APIs).
1316
     */
1317
-#if 0 /* not available yet */
1318
#define UFFD_FEATURE_PAGEFAULT_FLAG_WP        (1<<0)
1319
#define UFFD_FEATURE_EVENT_FORK            (1<<1)
1320
-#endif
1321
+#define UFFD_FEATURE_EVENT_REMAP        (1<<2)
1322
+#define UFFD_FEATURE_EVENT_MADVDONTNEED        (1<<3)
1323
+#define UFFD_FEATURE_MISSING_HUGETLBFS        (1<<4)
1324
+#define UFFD_FEATURE_MISSING_SHMEM        (1<<5)
1325
    __u64 features;
1326
1327
    __u64 ioctls;
1328
diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
1329
index XXXXXXX..XXXXXXX 100644
1330
--- a/linux-headers/linux/vfio.h
1331
+++ b/linux-headers/linux/vfio.h
1332
@@ -XXX,XX +XXX,XX @@ struct vfio_device_info {
1333
};
1334
#define VFIO_DEVICE_GET_INFO        _IO(VFIO_TYPE, VFIO_BASE + 7)
1335
1336
+/*
1337
+ * Vendor driver using Mediated device framework should provide device_api
1338
+ * attribute in supported type attribute groups. Device API string should be one
1339
+ * of the following corresponding to device flags in vfio_device_info structure.
1340
+ */
1341
+
1342
+#define VFIO_DEVICE_API_PCI_STRING        "vfio-pci"
1343
+#define VFIO_DEVICE_API_PLATFORM_STRING        "vfio-platform"
1344
+#define VFIO_DEVICE_API_AMBA_STRING        "vfio-amba"
1345
+
1346
/**
1347
* VFIO_DEVICE_GET_REGION_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 8,
1348
*                 struct vfio_region_info)
--
2.7.4

--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 29 ++++++++++-------------------
1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
16
break;
17
}
18
return 0;
19
+
20
+ case NEON_3R_VADD_VSUB:
21
+ if (u) {
22
+ tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
23
+ vec_size, vec_size);
24
+ } else {
25
+ tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
26
+ vec_size, vec_size);
27
+ }
28
+ return 0;
29
}
30
if (size == 3) {
31
/* 64-bit element instructions. */
32
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
33
cpu_V1, cpu_V0);
34
}
35
break;
36
- case NEON_3R_VADD_VSUB:
37
- if (u) {
38
- tcg_gen_sub_i64(CPU_V001);
39
- } else {
40
- tcg_gen_add_i64(CPU_V001);
41
- }
42
- break;
43
default:
44
abort();
45
}
46
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
47
tmp2 = neon_load_reg(rd, pass);
48
gen_neon_add(size, tmp, tmp2);
49
break;
50
- case NEON_3R_VADD_VSUB:
51
- if (!u) { /* VADD */
52
- gen_neon_add(size, tmp, tmp2);
53
- } else { /* VSUB */
54
- switch (size) {
55
- case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
56
- case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
57
- case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
58
- default: abort();
59
- }
60
- }
61
- break;
62
case NEON_3R_VTST_VCEQ:
63
if (!u) { /* VTST */
64
switch (size) {
65
--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
16
tcg_temp_free_ptr(ptr1);
17
tcg_temp_free_ptr(ptr2);
18
break;
19
+
20
+ case NEON_2RM_VMVN:
21
+ tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
22
+ break;
23
+ case NEON_2RM_VNEG:
24
+ tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
25
+ break;
26
+
27
default:
28
elementwise:
29
for (pass = 0; pass < (q ? 4 : 2); pass++) {
30
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
31
case NEON_2RM_VCNT:
32
gen_helper_neon_cnt_u8(tmp, tmp);
33
break;
34
- case NEON_2RM_VMVN:
35
- tcg_gen_not_i32(tmp, tmp);
36
- break;
37
case NEON_2RM_VQABS:
38
switch (size) {
39
case 0:
40
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
41
default: abort();
42
}
43
break;
44
- case NEON_2RM_VNEG:
45
- tmp2 = tcg_const_i32(0);
46
- gen_neon_rsb(size, tmp, tmp2);
47
- tcg_temp_free_i32(tmp2);
48
- break;
49
case NEON_2RM_VCGT0_F:
50
{
51
TCGv_ptr fpstatus = get_fpstatus_ptr(1);
52
--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
16
vec_size, vec_size);
17
}
18
return 0;
19
+
20
+ case NEON_3R_VMUL: /* VMUL */
21
+ if (u) {
22
+ /* Polynomial case allows only P8 and is handled below. */
23
+ if (size != 0) {
24
+ return 1;
25
+ }
26
+ } else {
27
+ tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
28
+ vec_size, vec_size);
29
+ return 0;
30
+ }
31
+ break;
32
}
33
if (size == 3) {
34
/* 64-bit element instructions. */
35
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
36
return 1;
37
}
38
break;
39
- case NEON_3R_VMUL:
40
- if (u && (size != 0)) {
41
- /* UNDEF on invalid size for polynomial subcase */
42
- return 1;
43
- }
44
- break;
45
case NEON_3R_VFM_VQRDMLSH:
46
if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
47
return 1;
48
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
49
}
50
break;
51
case NEON_3R_VMUL:
52
- if (u) { /* polynomial */
53
- gen_helper_neon_mul_p8(tmp, tmp, tmp2);
54
- } else { /* Integer */
55
- switch (size) {
56
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
57
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
58
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
59
- default: abort();
60
- }
61
- }
62
+ /* VMUL.P8; other cases already eliminated. */
63
+ gen_helper_neon_mul_p8(tmp, tmp, tmp2);
64
break;
65
case NEON_3R_VPMAX:
66
GEN_NEON_INTEGER_OP(pmax);
67
--
2.19.1


From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.c | 70 +++++++++++++++++++++++++++++-------------
1 file changed, 48 insertions(+), 22 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
16
size--;
17
}
18
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
19
- /* To avoid excessive duplication of ops we implement shift
20
- by immediate using the variable shift operations. */
21
if (op < 8) {
22
/* Shift by immediate:
23
VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
24
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
25
}
26
/* Right shifts are encoded as N - shift, where N is the
27
element size in bits. */
28
- if (op <= 4)
29
+ if (op <= 4) {
30
shift = shift - (1 << (size + 3));
31
+ }
32
+
33
+ switch (op) {
34
+ case 0: /* VSHR */
35
+ /* Right shift comes here negative. */
36
+ shift = -shift;
37
+ /* Shifts larger than the element size are architecturally
38
+ * valid. Unsigned results in all zeros; signed results
39
+ * in all sign bits.
40
+ */
41
+ if (!u) {
42
+ tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
43
+ MIN(shift, (8 << size) - 1),
44
+ vec_size, vec_size);
45
+ } else if (shift >= 8 << size) {
46
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
47
+ } else {
48
+ tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
49
+ vec_size, vec_size);
50
+ }
51
+ return 0;
52
+
53
+ case 5: /* VSHL, VSLI */
54
+ if (!u) { /* VSHL */
55
+ /* Shifts larger than the element size are
56
+ * architecturally valid and results in zero.
57
+ */
58
+ if (shift >= 8 << size) {
59
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
60
+ } else {
61
+ tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
62
+ vec_size, vec_size);
63
+ }
64
+ return 0;
65
+ }
66
+ break;
67
+ }
68
+
69
if (size == 3) {
70
count = q + 1;
71
} else {
72
count = q ? 4: 2;
73
}
74
- switch (size) {
75
- case 0:
76
- imm = (uint8_t) shift;
77
- imm |= imm << 8;
78
- imm |= imm << 16;
79
- break;
80
- case 1:
81
- imm = (uint16_t) shift;
82
- imm |= imm << 16;
83
- break;
84
- case 2:
85
- case 3:
86
- imm = shift;
87
- break;
88
- default:
89
- abort();
90
- }
91
+
92
+ /* To avoid excessive duplication of ops we implement shift
93
+ * by immediate using the variable shift operations.
94
+ */
95
+ imm = dup_const(size, shift);
96
97
for (pass = 0; pass < count; pass++) {
98
if (size == 3) {
99
neon_load_reg64(cpu_V0, rm + pass);
100
tcg_gen_movi_i64(cpu_V1, imm);
101
switch (op) {
102
- case 0: /* VSHR */
103
case 1: /* VSRA */
104
if (u)
105
gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
106
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
107
cpu_V0, cpu_V1);
108
}
109
break;
110
+ default:
111
+ g_assert_not_reached();
112
}
113
if (op == 1 || op == 3) {
114
/* Accumulate. */
115
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
116
tmp2 = tcg_temp_new_i32();
117
tcg_gen_movi_i32(tmp2, imm);
118
switch (op) {
119
- case 0: /* VSHR */
120
case 1: /* VSRA */
121
GEN_NEON_INTEGER_OP(shl);
122
break;
123
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
124
case 7: /* VQSHL */
125
GEN_NEON_INTEGER_OP_ENV(qshl);
126
break;
127
+ default:
128
+ g_assert_not_reached();
129
}
130
tcg_temp_free_i32(tmp2);
131
132
--
133
2.19.1
134
135
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Move ssra_op and usra_op expanders from translate-a64.c.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/translate.h | 2 +
11
target/arm/translate-a64.c | 106 ----------------------------
12
target/arm/translate.c | 139 ++++++++++++++++++++++++++++++++++---
13
3 files changed, 130 insertions(+), 117 deletions(-)
14
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
18
+++ b/target/arm/translate.h
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
20
extern const GVecGen3 bsl_op;
21
extern const GVecGen3 bit_op;
22
extern const GVecGen3 bif_op;
23
+extern const GVecGen2i ssra_op[4];
24
+extern const GVecGen2i usra_op[4];
25
26
/*
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
33
}
34
}
35
36
-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
37
-{
38
- tcg_gen_vec_sar8i_i64(a, a, shift);
39
- tcg_gen_vec_add8_i64(d, d, a);
40
-}
41
-
42
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
43
-{
44
- tcg_gen_vec_sar16i_i64(a, a, shift);
45
- tcg_gen_vec_add16_i64(d, d, a);
46
-}
47
-
48
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
49
-{
50
- tcg_gen_sari_i32(a, a, shift);
51
- tcg_gen_add_i32(d, d, a);
52
-}
53
-
54
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
55
-{
56
- tcg_gen_sari_i64(a, a, shift);
57
- tcg_gen_add_i64(d, d, a);
58
-}
59
-
60
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
61
-{
62
- tcg_gen_sari_vec(vece, a, a, sh);
63
- tcg_gen_add_vec(vece, d, d, a);
64
-}
65
-
66
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
67
-{
68
- tcg_gen_vec_shr8i_i64(a, a, shift);
69
- tcg_gen_vec_add8_i64(d, d, a);
70
-}
71
-
72
-static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
73
-{
74
- tcg_gen_vec_shr16i_i64(a, a, shift);
75
- tcg_gen_vec_add16_i64(d, d, a);
76
-}
77
-
78
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
79
-{
80
- tcg_gen_shri_i32(a, a, shift);
81
- tcg_gen_add_i32(d, d, a);
82
-}
83
-
84
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
85
-{
86
- tcg_gen_shri_i64(a, a, shift);
87
- tcg_gen_add_i64(d, d, a);
88
-}
89
-
90
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
91
-{
92
- tcg_gen_shri_vec(vece, a, a, sh);
93
- tcg_gen_add_vec(vece, d, d, a);
94
-}
95
-
96
static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
97
{
98
uint64_t mask = dup_const(MO_8, 0xff >> shift);
99
@@ -XXX,XX +XXX,XX @@ static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
100
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
101
int immh, int immb, int opcode, int rn, int rd)
102
{
103
- static const GVecGen2i ssra_op[4] = {
104
- { .fni8 = gen_ssra8_i64,
105
- .fniv = gen_ssra_vec,
106
- .load_dest = true,
107
- .opc = INDEX_op_sari_vec,
108
- .vece = MO_8 },
109
- { .fni8 = gen_ssra16_i64,
110
- .fniv = gen_ssra_vec,
111
- .load_dest = true,
112
- .opc = INDEX_op_sari_vec,
113
- .vece = MO_16 },
114
- { .fni4 = gen_ssra32_i32,
115
- .fniv = gen_ssra_vec,
116
- .load_dest = true,
117
- .opc = INDEX_op_sari_vec,
118
- .vece = MO_32 },
119
- { .fni8 = gen_ssra64_i64,
120
- .fniv = gen_ssra_vec,
121
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
122
- .load_dest = true,
123
- .opc = INDEX_op_sari_vec,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen2i usra_op[4] = {
127
- { .fni8 = gen_usra8_i64,
128
- .fniv = gen_usra_vec,
129
- .load_dest = true,
130
- .opc = INDEX_op_shri_vec,
131
- .vece = MO_8, },
132
- { .fni8 = gen_usra16_i64,
133
- .fniv = gen_usra_vec,
134
- .load_dest = true,
135
- .opc = INDEX_op_shri_vec,
136
- .vece = MO_16, },
137
- { .fni4 = gen_usra32_i32,
138
- .fniv = gen_usra_vec,
139
- .load_dest = true,
140
- .opc = INDEX_op_shri_vec,
141
- .vece = MO_32, },
142
- { .fni8 = gen_usra64_i64,
143
- .fniv = gen_usra_vec,
144
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
145
- .load_dest = true,
146
- .opc = INDEX_op_shri_vec,
147
- .vece = MO_64, },
148
- };
149
static const GVecGen2i sri_op[4] = {
150
{ .fni8 = gen_shr8_ins_i64,
151
.fniv = gen_shr_ins_vec,
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ const GVecGen3 bif_op = {
157
.load_dest = true
158
};
159
160
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
161
+{
162
+ tcg_gen_vec_sar8i_i64(a, a, shift);
163
+ tcg_gen_vec_add8_i64(d, d, a);
164
+}
165
+
166
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
167
+{
168
+ tcg_gen_vec_sar16i_i64(a, a, shift);
169
+ tcg_gen_vec_add16_i64(d, d, a);
170
+}
171
+
172
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
173
+{
174
+ tcg_gen_sari_i32(a, a, shift);
175
+ tcg_gen_add_i32(d, d, a);
176
+}
177
+
178
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
179
+{
180
+ tcg_gen_sari_i64(a, a, shift);
181
+ tcg_gen_add_i64(d, d, a);
182
+}
183
+
184
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
185
+{
186
+ tcg_gen_sari_vec(vece, a, a, sh);
187
+ tcg_gen_add_vec(vece, d, d, a);
188
+}
189
+
190
+const GVecGen2i ssra_op[4] = {
191
+ { .fni8 = gen_ssra8_i64,
192
+ .fniv = gen_ssra_vec,
193
+ .load_dest = true,
194
+ .opc = INDEX_op_sari_vec,
195
+ .vece = MO_8 },
196
+ { .fni8 = gen_ssra16_i64,
197
+ .fniv = gen_ssra_vec,
198
+ .load_dest = true,
199
+ .opc = INDEX_op_sari_vec,
200
+ .vece = MO_16 },
201
+ { .fni4 = gen_ssra32_i32,
202
+ .fniv = gen_ssra_vec,
203
+ .load_dest = true,
204
+ .opc = INDEX_op_sari_vec,
205
+ .vece = MO_32 },
206
+ { .fni8 = gen_ssra64_i64,
207
+ .fniv = gen_ssra_vec,
208
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
209
+ .load_dest = true,
210
+ .opc = INDEX_op_sari_vec,
211
+ .vece = MO_64 },
212
+};
213
+
214
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
215
+{
216
+ tcg_gen_vec_shr8i_i64(a, a, shift);
217
+ tcg_gen_vec_add8_i64(d, d, a);
218
+}
219
+
220
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
221
+{
222
+ tcg_gen_vec_shr16i_i64(a, a, shift);
223
+ tcg_gen_vec_add16_i64(d, d, a);
224
+}
225
+
226
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
227
+{
228
+ tcg_gen_shri_i32(a, a, shift);
229
+ tcg_gen_add_i32(d, d, a);
230
+}
231
+
232
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
233
+{
234
+ tcg_gen_shri_i64(a, a, shift);
235
+ tcg_gen_add_i64(d, d, a);
236
+}
237
+
238
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
239
+{
240
+ tcg_gen_shri_vec(vece, a, a, sh);
241
+ tcg_gen_add_vec(vece, d, d, a);
242
+}
243
+
244
+const GVecGen2i usra_op[4] = {
245
+ { .fni8 = gen_usra8_i64,
246
+ .fniv = gen_usra_vec,
247
+ .load_dest = true,
248
+ .opc = INDEX_op_shri_vec,
249
+ .vece = MO_8, },
250
+ { .fni8 = gen_usra16_i64,
251
+ .fniv = gen_usra_vec,
252
+ .load_dest = true,
253
+ .opc = INDEX_op_shri_vec,
254
+ .vece = MO_16, },
255
+ { .fni4 = gen_usra32_i32,
256
+ .fniv = gen_usra_vec,
257
+ .load_dest = true,
258
+ .opc = INDEX_op_shri_vec,
259
+ .vece = MO_32, },
260
+ { .fni8 = gen_usra64_i64,
261
+ .fniv = gen_usra_vec,
262
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
263
+ .load_dest = true,
264
+ .opc = INDEX_op_shri_vec,
265
+ .vece = MO_64, },
266
+};
267
268
/* Translate a NEON data processing instruction. Return nonzero if the
269
instruction is invalid.
270
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
271
}
272
return 0;
273
274
+ case 1: /* VSRA */
275
+ /* Right shift comes here negative. */
276
+ shift = -shift;
277
+ /* Shifts larger than the element size are architecturally
278
+ * valid. Unsigned results in all zeros; signed results
279
+ * in all sign bits.
280
+ */
281
+ if (!u) {
282
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
283
+ MIN(shift, (8 << size) - 1),
284
+ &ssra_op[size]);
285
+ } else if (shift >= 8 << size) {
286
+ /* rd += 0 */
287
+ } else {
288
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
289
+ shift, &usra_op[size]);
290
+ }
291
+ return 0;
292
+
293
case 5: /* VSHL, VSLI */
294
if (!u) { /* VSHL */
295
/* Shifts larger than the element size are
296
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
297
neon_load_reg64(cpu_V0, rm + pass);
298
tcg_gen_movi_i64(cpu_V1, imm);
299
switch (op) {
300
- case 1: /* VSRA */
301
- if (u)
302
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
303
- else
304
- gen_helper_neon_shl_s64(cpu_V0, cpu_V0, cpu_V1);
305
- break;
306
case 2: /* VRSHR */
307
case 3: /* VRSRA */
308
if (u)
309
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
310
default:
311
g_assert_not_reached();
312
}
313
- if (op == 1 || op == 3) {
314
+ if (op == 3) {
315
/* Accumulate. */
316
neon_load_reg64(cpu_V1, rd + pass);
317
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
318
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
319
tmp2 = tcg_temp_new_i32();
320
tcg_gen_movi_i32(tmp2, imm);
321
switch (op) {
322
- case 1: /* VSRA */
323
- GEN_NEON_INTEGER_OP(shl);
324
- break;
325
case 2: /* VRSHR */
326
case 3: /* VRSRA */
327
GEN_NEON_INTEGER_OP(rshl);
328
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
329
}
330
tcg_temp_free_i32(tmp2);
331
332
- if (op == 1 || op == 3) {
333
+ if (op == 3) {
334
/* Accumulate. */
335
tmp2 = neon_load_reg(rd, pass);
336
gen_neon_add(size, tmp, tmp2);
337
--
338
2.19.1
339
340
From: Richard Henderson <richard.henderson@linaro.org>

Move shi_op and sli_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-15-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate.h | 2 +
target/arm/translate-a64.c | 152 +----------------------
target/arm/translate.c | 244 ++++++++++++++++++++++++++-----------
3 files changed, 179 insertions(+), 219 deletions(-)

From: Vijaya Kumar K <Vijaya.Kumar@cavium.com>

This actually implements pre_save and post_load methods for in-kernel
vGICv3.

Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
Message-id: 1487850673-26455-4-git-send-email-vijay.kilari@gmail.com
[PMM:
* use decimal, not 0bnnn
* fixed typo in names of ICC_APR0R_EL1 and ICC_AP1R_EL1
* completely rearranged the get and put functions to read and write
the state in a natural order, rather than mixing distributor and
redistributor state together]
Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@cavium.com>
[Vijay:
* Update macro KVM_VGIC_ATTR
* Use 32 bit access for gicd and gicr
* GICD_IROUTER, GICD_TYPER, GICR_PROPBASER and GICR_PENDBASER reg
access are changed from 64-bit to 32-bit access
* Add ICC_SRE_EL1 save and restore
* Dropped translate_fn mechanism and coded functions to handle
save and restore of edge_trigger and priority
* Number of APnR register saved/restored based on number of
priority bits supported]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/gicv3_internal.h | 1 +
hw/intc/arm_gicv3_kvm.c | 573 +++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 558 insertions(+), 16 deletions(-)
14
34
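(Aside, not part of the patch: the distributor save/restore below walks each register group with a per-interrupt field width, so one 32-bit GICD register covers 32/width interrupts: 4 for the 8-bit priority registers, 16 for the 2-bit config registers, 32 for the 1-bit enable/pending/active bitmaps. A standalone sketch of that stride, with assumed names and GIC_INTERNAL assumed to be 32:

    #include <stdio.h>

    #define GIC_INTERNAL 32                /* SGIs/PPIs, skipped by the loops */

    /* number of 32-bit distributor registers needed for the SPIs when each
     * interrupt takes 'field_width' bits (cf. for_each_dist_irq_reg below) */
    static int dist_reg_count(int num_irq, int field_width)
    {
        int per_reg = 32 / field_width;
        return (num_irq - GIC_INTERNAL + per_reg - 1) / per_reg;
    }

    int main(void)
    {
        printf("%d %d %d\n", dist_reg_count(256, 8),
               dist_reg_count(256, 2), dist_reg_count(256, 1));
        return 0;
    }

For a 256-interrupt GIC this prints 56 14 7, matching one access per 32-bit register in the loops below.)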
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
35
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
36
--- a/hw/intc/gicv3_internal.h
17
--- a/target/arm/translate.h
37
+++ b/hw/intc/gicv3_internal.h
18
+++ b/target/arm/translate.h
38
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
39
#define ICC_CTLR_EL1_EOIMODE (1U << 1)
20
extern const GVecGen3 bif_op;
40
#define ICC_CTLR_EL1_PMHE (1U << 6)
21
extern const GVecGen2i ssra_op[4];
41
#define ICC_CTLR_EL1_PRIBITS_SHIFT 8
22
extern const GVecGen2i usra_op[4];
42
+#define ICC_CTLR_EL1_PRIBITS_MASK (7U << ICC_CTLR_EL1_PRIBITS_SHIFT)
23
+extern const GVecGen2i sri_op[4];
43
#define ICC_CTLR_EL1_IDBITS_SHIFT 11
24
+extern const GVecGen2i sli_op[4];
44
#define ICC_CTLR_EL1_SEIS (1U << 14)
25
45
#define ICC_CTLR_EL1_A3V (1U << 15)
26
/*
46
diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
27
* Forward to the isar_feature_* tests given a DisasContext pointer.
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
47
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
48
--- a/hw/intc/arm_gicv3_kvm.c
30
--- a/target/arm/translate-a64.c
49
+++ b/hw/intc/arm_gicv3_kvm.c
31
+++ b/target/arm/translate-a64.c
50
@@ -XXX,XX +XXX,XX @@
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
51
#include "qapi/error.h"
33
}
52
#include "hw/intc/arm_gicv3_common.h"
53
#include "hw/sysbus.h"
54
+#include "qemu/error-report.h"
55
#include "sysemu/kvm.h"
56
#include "kvm_arm.h"
57
+#include "gicv3_internal.h"
58
#include "vgic_common.h"
59
#include "migration/migration.h"
60
61
@@ -XXX,XX +XXX,XX @@
62
#define KVM_ARM_GICV3_GET_CLASS(obj) \
63
OBJECT_GET_CLASS(KVMARMGICv3Class, (obj), TYPE_KVM_ARM_GICV3)
64
65
+#define KVM_DEV_ARM_VGIC_SYSREG(op0, op1, crn, crm, op2) \
66
+ (ARM64_SYS_REG_SHIFT_MASK(op0, OP0) | \
67
+ ARM64_SYS_REG_SHIFT_MASK(op1, OP1) | \
68
+ ARM64_SYS_REG_SHIFT_MASK(crn, CRN) | \
69
+ ARM64_SYS_REG_SHIFT_MASK(crm, CRM) | \
70
+ ARM64_SYS_REG_SHIFT_MASK(op2, OP2))
71
+
72
+#define ICC_PMR_EL1 \
73
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 4, 6, 0)
74
+#define ICC_BPR0_EL1 \
75
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 8, 3)
76
+#define ICC_AP0R_EL1(n) \
77
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 8, 4 | n)
78
+#define ICC_AP1R_EL1(n) \
79
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 9, n)
80
+#define ICC_BPR1_EL1 \
81
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 12, 3)
82
+#define ICC_CTLR_EL1 \
83
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 12, 4)
84
+#define ICC_SRE_EL1 \
85
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 12, 5)
86
+#define ICC_IGRPEN0_EL1 \
87
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 12, 6)
88
+#define ICC_IGRPEN1_EL1 \
89
+ KVM_DEV_ARM_VGIC_SYSREG(3, 0, 12, 12, 7)
90
+
91
typedef struct KVMARMGICv3Class {
92
ARMGICv3CommonClass parent_class;
93
DeviceRealize parent_realize;
94
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_set_irq(void *opaque, int irq, int level)
95
kvm_arm_gic_set_irq(s->num_irq, irq, level);
96
}
34
}
97
35
98
+#define KVM_VGIC_ATTR(reg, typer) \
36
-static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
99
+ ((typer & KVM_DEV_ARM_VGIC_V3_MPIDR_MASK) | (reg))
37
-{
100
+
38
- uint64_t mask = dup_const(MO_8, 0xff >> shift);
101
+static inline void kvm_gicd_access(GICv3State *s, int offset,
39
- TCGv_i64 t = tcg_temp_new_i64();
102
+ uint32_t *val, bool write)
40
-
103
+{
41
- tcg_gen_shri_i64(t, a, shift);
104
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
42
- tcg_gen_andi_i64(t, t, mask);
105
+ KVM_VGIC_ATTR(offset, 0),
43
- tcg_gen_andi_i64(d, d, ~mask);
106
+ val, write);
44
- tcg_gen_or_i64(d, d, t);
107
+}
45
- tcg_temp_free_i64(t);
108
+
46
-}
109
+static inline void kvm_gicr_access(GICv3State *s, int offset, int cpu,
47
-
110
+ uint32_t *val, bool write)
48
-static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
111
+{
49
-{
112
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_REDIST_REGS,
50
- uint64_t mask = dup_const(MO_16, 0xffff >> shift);
113
+ KVM_VGIC_ATTR(offset, s->cpu[cpu].gicr_typer),
51
- TCGv_i64 t = tcg_temp_new_i64();
114
+ val, write);
52
-
115
+}
53
- tcg_gen_shri_i64(t, a, shift);
116
+
54
- tcg_gen_andi_i64(t, t, mask);
117
+static inline void kvm_gicc_access(GICv3State *s, uint64_t reg, int cpu,
55
- tcg_gen_andi_i64(d, d, ~mask);
118
+ uint64_t *val, bool write)
56
- tcg_gen_or_i64(d, d, t);
119
+{
57
- tcg_temp_free_i64(t);
120
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS,
58
-}
121
+ KVM_VGIC_ATTR(reg, s->cpu[cpu].gicr_typer),
59
-
122
+ val, write);
60
-static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
123
+}
61
-{
124
+
62
- tcg_gen_shri_i32(a, a, shift);
125
+static inline void kvm_gic_line_level_access(GICv3State *s, int irq, int cpu,
63
- tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
126
+ uint32_t *val, bool write)
64
-}
127
+{
65
-
128
+ kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO,
66
-static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
129
+ KVM_VGIC_ATTR(irq, s->cpu[cpu].gicr_typer) |
67
-{
130
+ (VGIC_LEVEL_INFO_LINE_LEVEL <<
68
- tcg_gen_shri_i64(a, a, shift);
131
+ KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT),
69
- tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
132
+ val, write);
70
-}
133
+}
71
-
134
+
72
-static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
135
+/* Loop through each distributor IRQ related register; since bits
73
-{
136
+ * corresponding to SPIs and PPIs are RAZ/WI when affinity routing
74
- uint64_t mask = (2ull << ((8 << vece) - 1)) - 1;
137
+ * is enabled, we skip those.
75
- TCGv_vec t = tcg_temp_new_vec_matching(d);
138
+ */
76
- TCGv_vec m = tcg_temp_new_vec_matching(d);
139
+#define for_each_dist_irq_reg(_irq, _max, _field_width) \
77
-
140
+ for (_irq = GIC_INTERNAL; _irq < _max; _irq += (32 / _field_width))
78
- tcg_gen_dupi_vec(vece, m, mask ^ (mask >> sh));
141
+
79
- tcg_gen_shri_vec(vece, t, a, sh);
142
+static void kvm_dist_get_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
80
- tcg_gen_and_vec(vece, d, d, m);
143
+{
81
- tcg_gen_or_vec(vece, d, d, t);
144
+ uint32_t reg, *field;
82
-
145
+ int irq;
83
- tcg_temp_free_vec(t);
146
+
84
- tcg_temp_free_vec(m);
147
+ field = (uint32_t *)bmp;
85
-}
148
+ for_each_dist_irq_reg(irq, s->num_irq, 8) {
86
-
149
+ kvm_gicd_access(s, offset, &reg, false);
87
/* SSHR[RA]/USHR[RA] - Vector shift right (optional rounding/accumulate) */
150
+ *field = reg;
88
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
151
+ offset += 4;
89
int immh, int immb, int opcode, int rn, int rd)
152
+ field++;
90
{
91
- static const GVecGen2i sri_op[4] = {
92
- { .fni8 = gen_shr8_ins_i64,
93
- .fniv = gen_shr_ins_vec,
94
- .load_dest = true,
95
- .opc = INDEX_op_shri_vec,
96
- .vece = MO_8 },
97
- { .fni8 = gen_shr16_ins_i64,
98
- .fniv = gen_shr_ins_vec,
99
- .load_dest = true,
100
- .opc = INDEX_op_shri_vec,
101
- .vece = MO_16 },
102
- { .fni4 = gen_shr32_ins_i32,
103
- .fniv = gen_shr_ins_vec,
104
- .load_dest = true,
105
- .opc = INDEX_op_shri_vec,
106
- .vece = MO_32 },
107
- { .fni8 = gen_shr64_ins_i64,
108
- .fniv = gen_shr_ins_vec,
109
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
110
- .load_dest = true,
111
- .opc = INDEX_op_shri_vec,
112
- .vece = MO_64 },
113
- };
114
-
115
int size = 32 - clz32(immh) - 1;
116
int immhb = immh << 3 | immb;
117
int shift = 2 * (8 << size) - immhb;
118
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
119
clear_vec_high(s, is_q, rd);
120
}
121
122
-static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
123
-{
124
- uint64_t mask = dup_const(MO_8, 0xff << shift);
125
- TCGv_i64 t = tcg_temp_new_i64();
126
-
127
- tcg_gen_shli_i64(t, a, shift);
128
- tcg_gen_andi_i64(t, t, mask);
129
- tcg_gen_andi_i64(d, d, ~mask);
130
- tcg_gen_or_i64(d, d, t);
131
- tcg_temp_free_i64(t);
132
-}
133
-
134
-static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
135
-{
136
- uint64_t mask = dup_const(MO_16, 0xffff << shift);
137
- TCGv_i64 t = tcg_temp_new_i64();
138
-
139
- tcg_gen_shli_i64(t, a, shift);
140
- tcg_gen_andi_i64(t, t, mask);
141
- tcg_gen_andi_i64(d, d, ~mask);
142
- tcg_gen_or_i64(d, d, t);
143
- tcg_temp_free_i64(t);
144
-}
145
-
146
-static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
147
-{
148
- tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
149
-}
150
-
151
-static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
152
-{
153
- tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
154
-}
155
-
156
-static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
157
-{
158
- uint64_t mask = (1ull << sh) - 1;
159
- TCGv_vec t = tcg_temp_new_vec_matching(d);
160
- TCGv_vec m = tcg_temp_new_vec_matching(d);
161
-
162
- tcg_gen_dupi_vec(vece, m, mask);
163
- tcg_gen_shli_vec(vece, t, a, sh);
164
- tcg_gen_and_vec(vece, d, d, m);
165
- tcg_gen_or_vec(vece, d, d, t);
166
-
167
- tcg_temp_free_vec(t);
168
- tcg_temp_free_vec(m);
169
-}
170
-
171
/* SHL/SLI - Vector shift left */
172
static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
173
int immh, int immb, int opcode, int rn, int rd)
174
{
175
- static const GVecGen2i shi_op[4] = {
176
- { .fni8 = gen_shl8_ins_i64,
177
- .fniv = gen_shl_ins_vec,
178
- .opc = INDEX_op_shli_vec,
179
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
180
- .load_dest = true,
181
- .vece = MO_8 },
182
- { .fni8 = gen_shl16_ins_i64,
183
- .fniv = gen_shl_ins_vec,
184
- .opc = INDEX_op_shli_vec,
185
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
186
- .load_dest = true,
187
- .vece = MO_16 },
188
- { .fni4 = gen_shl32_ins_i32,
189
- .fniv = gen_shl_ins_vec,
190
- .opc = INDEX_op_shli_vec,
191
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
192
- .load_dest = true,
193
- .vece = MO_32 },
194
- { .fni8 = gen_shl64_ins_i64,
195
- .fniv = gen_shl_ins_vec,
196
- .opc = INDEX_op_shli_vec,
197
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
198
- .load_dest = true,
199
- .vece = MO_64 },
200
- };
201
int size = 32 - clz32(immh) - 1;
202
int immhb = immh << 3 | immb;
203
int shift = immhb - (8 << size);
204
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shli(DisasContext *s, bool is_q, bool insert,
205
}
206
207
if (insert) {
208
- gen_gvec_op2i(s, is_q, rd, rn, shift, &shi_op[size]);
209
+ gen_gvec_op2i(s, is_q, rd, rn, shift, &sli_op[size]);
210
} else {
211
gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shli, size);
212
}
213
diff --git a/target/arm/translate.c b/target/arm/translate.c
214
index XXXXXXX..XXXXXXX 100644
215
--- a/target/arm/translate.c
216
+++ b/target/arm/translate.c
217
@@ -XXX,XX +XXX,XX @@ const GVecGen2i usra_op[4] = {
218
.vece = MO_64, },
219
};
220
221
+static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
222
+{
223
+ uint64_t mask = dup_const(MO_8, 0xff >> shift);
224
+ TCGv_i64 t = tcg_temp_new_i64();
225
+
226
+ tcg_gen_shri_i64(t, a, shift);
227
+ tcg_gen_andi_i64(t, t, mask);
228
+ tcg_gen_andi_i64(d, d, ~mask);
229
+ tcg_gen_or_i64(d, d, t);
230
+ tcg_temp_free_i64(t);
231
+}
232
+
233
+static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
234
+{
235
+ uint64_t mask = dup_const(MO_16, 0xffff >> shift);
236
+ TCGv_i64 t = tcg_temp_new_i64();
237
+
238
+ tcg_gen_shri_i64(t, a, shift);
239
+ tcg_gen_andi_i64(t, t, mask);
240
+ tcg_gen_andi_i64(d, d, ~mask);
241
+ tcg_gen_or_i64(d, d, t);
242
+ tcg_temp_free_i64(t);
243
+}
244
+
245
+static void gen_shr32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
246
+{
247
+ tcg_gen_shri_i32(a, a, shift);
248
+ tcg_gen_deposit_i32(d, d, a, 0, 32 - shift);
249
+}
250
+
251
+static void gen_shr64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
252
+{
253
+ tcg_gen_shri_i64(a, a, shift);
254
+ tcg_gen_deposit_i64(d, d, a, 0, 64 - shift);
255
+}
256
+
257
+static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
258
+{
259
+ if (sh == 0) {
260
+ tcg_gen_mov_vec(d, a);
261
+ } else {
262
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
263
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
264
+
265
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK((8 << vece) - sh, sh));
266
+ tcg_gen_shri_vec(vece, t, a, sh);
267
+ tcg_gen_and_vec(vece, d, d, m);
268
+ tcg_gen_or_vec(vece, d, d, t);
269
+
270
+ tcg_temp_free_vec(t);
271
+ tcg_temp_free_vec(m);
153
+ }
272
+ }
154
+}
273
+}
155
+
274
+
156
+static void kvm_dist_put_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
275
+const GVecGen2i sri_op[4] = {
157
+{
276
+ { .fni8 = gen_shr8_ins_i64,
158
+ uint32_t reg, *field;
277
+ .fniv = gen_shr_ins_vec,
159
+ int irq;
278
+ .load_dest = true,
160
+
279
+ .opc = INDEX_op_shri_vec,
161
+ field = (uint32_t *)bmp;
280
+ .vece = MO_8 },
162
+ for_each_dist_irq_reg(irq, s->num_irq, 8) {
281
+ { .fni8 = gen_shr16_ins_i64,
163
+ reg = *field;
282
+ .fniv = gen_shr_ins_vec,
164
+ kvm_gicd_access(s, offset, &reg, true);
283
+ .load_dest = true,
165
+ offset += 4;
284
+ .opc = INDEX_op_shri_vec,
166
+ field++;
285
+ .vece = MO_16 },
286
+ { .fni4 = gen_shr32_ins_i32,
287
+ .fniv = gen_shr_ins_vec,
288
+ .load_dest = true,
289
+ .opc = INDEX_op_shri_vec,
290
+ .vece = MO_32 },
291
+ { .fni8 = gen_shr64_ins_i64,
292
+ .fniv = gen_shr_ins_vec,
293
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
294
+ .load_dest = true,
295
+ .opc = INDEX_op_shri_vec,
296
+ .vece = MO_64 },
297
+};
298
+
299
+static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
300
+{
301
+ uint64_t mask = dup_const(MO_8, 0xff << shift);
302
+ TCGv_i64 t = tcg_temp_new_i64();
303
+
304
+ tcg_gen_shli_i64(t, a, shift);
305
+ tcg_gen_andi_i64(t, t, mask);
306
+ tcg_gen_andi_i64(d, d, ~mask);
307
+ tcg_gen_or_i64(d, d, t);
308
+ tcg_temp_free_i64(t);
309
+}
310
+
311
+static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
312
+{
313
+ uint64_t mask = dup_const(MO_16, 0xffff << shift);
314
+ TCGv_i64 t = tcg_temp_new_i64();
315
+
316
+ tcg_gen_shli_i64(t, a, shift);
317
+ tcg_gen_andi_i64(t, t, mask);
318
+ tcg_gen_andi_i64(d, d, ~mask);
319
+ tcg_gen_or_i64(d, d, t);
320
+ tcg_temp_free_i64(t);
321
+}
322
+
323
+static void gen_shl32_ins_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
324
+{
325
+ tcg_gen_deposit_i32(d, d, a, shift, 32 - shift);
326
+}
327
+
328
+static void gen_shl64_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
329
+{
330
+ tcg_gen_deposit_i64(d, d, a, shift, 64 - shift);
331
+}
332
+
333
+static void gen_shl_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
334
+{
335
+ if (sh == 0) {
336
+ tcg_gen_mov_vec(d, a);
337
+ } else {
338
+ TCGv_vec t = tcg_temp_new_vec_matching(d);
339
+ TCGv_vec m = tcg_temp_new_vec_matching(d);
340
+
341
+ tcg_gen_dupi_vec(vece, m, MAKE_64BIT_MASK(0, sh));
342
+ tcg_gen_shli_vec(vece, t, a, sh);
343
+ tcg_gen_and_vec(vece, d, d, m);
344
+ tcg_gen_or_vec(vece, d, d, t);
345
+
346
+ tcg_temp_free_vec(t);
347
+ tcg_temp_free_vec(m);
167
+ }
348
+ }
168
+}
349
+}
169
+
350
+
170
+static void kvm_dist_get_edge_trigger(GICv3State *s, uint32_t offset,
351
+const GVecGen2i sli_op[4] = {
171
+ uint32_t *bmp)
352
+ { .fni8 = gen_shl8_ins_i64,
172
+{
353
+ .fniv = gen_shl_ins_vec,
173
+ uint32_t reg;
354
+ .load_dest = true,
174
+ int irq;
355
+ .opc = INDEX_op_shli_vec,
175
+
356
+ .vece = MO_8 },
176
+ for_each_dist_irq_reg(irq, s->num_irq, 2) {
357
+ { .fni8 = gen_shl16_ins_i64,
177
+ kvm_gicd_access(s, offset, &reg, false);
358
+ .fniv = gen_shl_ins_vec,
178
+ reg = half_unshuffle32(reg >> 1);
359
+ .load_dest = true,
179
+ if (irq % 32 != 0) {
360
+ .opc = INDEX_op_shli_vec,
180
+ reg = (reg << 16);
361
+ .vece = MO_16 },
181
+ }
362
+ { .fni4 = gen_shl32_ins_i32,
182
+ *gic_bmp_ptr32(bmp, irq) |= reg;
363
+ .fniv = gen_shl_ins_vec,
183
+ offset += 4;
364
+ .load_dest = true,
184
+ }
365
+ .opc = INDEX_op_shli_vec,
185
+}
366
+ .vece = MO_32 },
186
+
367
+ { .fni8 = gen_shl64_ins_i64,
187
+static void kvm_dist_put_edge_trigger(GICv3State *s, uint32_t offset,
368
+ .fniv = gen_shl_ins_vec,
188
+ uint32_t *bmp)
369
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
189
+{
370
+ .load_dest = true,
190
+ uint32_t reg;
371
+ .opc = INDEX_op_shli_vec,
191
+ int irq;
372
+ .vece = MO_64 },
192
+
373
+};
193
+ for_each_dist_irq_reg(irq, s->num_irq, 2) {
374
+
194
+ reg = *gic_bmp_ptr32(bmp, irq);
375
/* Translate a NEON data processing instruction. Return nonzero if the
195
+ if (irq % 32 != 0) {
376
instruction is invalid.
196
+ reg = (reg & 0xffff0000) >> 16;
377
We process data in a mixture of 32-bit and 64-bit chunks.
197
+ } else {
378
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
198
+ reg = reg & 0xffff;
379
int pairwise;
199
+ }
380
int u;
200
+ reg = half_shuffle32(reg) << 1;
381
int vec_size;
201
+ kvm_gicd_access(s, offset, &reg, true);
382
- uint32_t imm, mask;
202
+ offset += 4;
383
+ uint32_t imm;
203
+ }
384
TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
204
+}
385
TCGv_ptr ptr1, ptr2, ptr3;
205
+
386
TCGv_i64 tmp64;
206
+static void kvm_gic_get_line_level_bmp(GICv3State *s, uint32_t *bmp)
387
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
207
+{
388
}
208
+ uint32_t reg;
389
return 0;
209
+ int irq;
390
210
+
391
+ case 4: /* VSRI */
211
+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
392
+ if (!u) {
212
+ kvm_gic_line_level_access(s, irq, 0, &reg, false);
393
+ return 1;
213
+ *gic_bmp_ptr32(bmp, irq) = reg;
394
+ }
214
+ }
395
+ /* Right shift comes here negative. */
215
+}
396
+ shift = -shift;
216
+
397
+ /* Shift out of range leaves destination unchanged. */
217
+static void kvm_gic_put_line_level_bmp(GICv3State *s, uint32_t *bmp)
398
+ if (shift < 8 << size) {
218
+{
399
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
219
+ uint32_t reg;
400
+ shift, &sri_op[size]);
220
+ int irq;
401
+ }
221
+
402
+ return 0;
222
+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
403
+
223
+ reg = *gic_bmp_ptr32(bmp, irq);
404
case 5: /* VSHL, VSLI */
224
+ kvm_gic_line_level_access(s, irq, 0, &reg, true);
405
- if (!u) { /* VSHL */
225
+ }
406
+ if (u) { /* VSLI */
226
+}
407
+ /* Shift out of range leaves destination unchanged. */
227
+
408
+ if (shift < 8 << size) {
228
+/* Read a bitmap register group from the kernel VGIC. */
409
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size,
229
+static void kvm_dist_getbmp(GICv3State *s, uint32_t offset, uint32_t *bmp)
410
+ vec_size, shift, &sli_op[size]);
230
+{
411
+ }
231
+ uint32_t reg;
412
+ } else { /* VSHL */
232
+ int irq;
413
/* Shifts larger than the element size are
233
+
414
* architecturally valid and results in zero.
234
+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
415
*/
235
+ kvm_gicd_access(s, offset, &reg, false);
416
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
236
+ *gic_bmp_ptr32(bmp, irq) = reg;
417
tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
237
+ offset += 4;
418
vec_size, vec_size);
238
+ }
419
}
239
+}
420
- return 0;
240
+
421
}
241
+static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
422
- break;
242
+ uint32_t clroffset, uint32_t *bmp)
423
+ return 0;
243
+{
424
}
244
+ uint32_t reg;
425
245
+ int irq;
426
if (size == 3) {
246
+
427
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
247
+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
428
else
248
+ /* If this bitmap is a set/clear register pair, first write to the
429
gen_helper_neon_rshl_s64(cpu_V0, cpu_V0, cpu_V1);
249
+ * clear-reg to clear all bits before using the set-reg to write
430
break;
250
+ * the 1 bits.
431
- case 4: /* VSRI */
251
+ */
432
- case 5: /* VSHL, VSLI */
252
+ if (clroffset != 0) {
433
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
253
+ reg = 0;
434
- break;
254
+ kvm_gicd_access(s, clroffset, &reg, true);
435
case 6: /* VQSHLU */
255
+ }
436
gen_helper_neon_qshlu_s64(cpu_V0, cpu_env,
256
+ reg = *gic_bmp_ptr32(bmp, irq);
437
cpu_V0, cpu_V1);
257
+ kvm_gicd_access(s, offset, &reg, true);
438
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
258
+ offset += 4;
439
/* Accumulate. */
259
+ }
440
neon_load_reg64(cpu_V1, rd + pass);
260
+}
441
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
261
+
442
- } else if (op == 4 || (op == 5 && u)) {
262
+static void kvm_arm_gicv3_check(GICv3State *s)
443
- /* Insert */
263
+{
444
- neon_load_reg64(cpu_V1, rd + pass);
264
+ uint32_t reg;
445
- uint64_t mask;
265
+ uint32_t num_irq;
446
- if (shift < -63 || shift > 63) {
266
+
447
- mask = 0;
267
+ /* Sanity checking s->num_irq */
448
- } else {
268
+ kvm_gicd_access(s, GICD_TYPER, &reg, false);
449
- if (op == 4) {
269
+ num_irq = ((reg & 0x1f) + 1) * 32;
450
- mask = 0xffffffffffffffffull >> -shift;
270
+
451
- } else {
271
+ if (num_irq < s->num_irq) {
452
- mask = 0xffffffffffffffffull << shift;
272
+ error_report("Model requests %u IRQs, but kernel supports max %u",
453
- }
273
+ s->num_irq, num_irq);
454
- }
274
+ abort();
455
- tcg_gen_andi_i64(cpu_V1, cpu_V1, ~mask);
275
+ }
456
- tcg_gen_or_i64(cpu_V0, cpu_V0, cpu_V1);
276
+}
457
}
277
+
458
neon_store_reg64(cpu_V0, rd + pass);
278
static void kvm_arm_gicv3_put(GICv3State *s)
459
} else { /* size < 3 */
279
{
460
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
280
- /* TODO */
461
case 3: /* VRSRA */
281
- DPRINTF("Cannot put kernel gic state, no kernel interface\n");
462
GEN_NEON_INTEGER_OP(rshl);
282
+ uint32_t regl, regh, reg;
463
break;
283
+ uint64_t reg64, redist_typer;
464
- case 4: /* VSRI */
284
+ int ncpu, i;
465
- case 5: /* VSHL, VSLI */
285
+
466
- switch (size) {
286
+ kvm_arm_gicv3_check(s);
467
- case 0: gen_helper_neon_shl_u8(tmp, tmp, tmp2); break;
287
+
468
- case 1: gen_helper_neon_shl_u16(tmp, tmp, tmp2); break;
288
+ kvm_gicr_access(s, GICR_TYPER, 0, &regl, false);
469
- case 2: gen_helper_neon_shl_u32(tmp, tmp, tmp2); break;
289
+ kvm_gicr_access(s, GICR_TYPER + 4, 0, &regh, false);
470
- default: abort();
290
+ redist_typer = ((uint64_t)regh << 32) | regl;
471
- }
291
+
472
- break;
292
+ reg = s->gicd_ctlr;
473
case 6: /* VQSHLU */
293
+ kvm_gicd_access(s, GICD_CTLR, &reg, true);
474
switch (size) {
294
+
475
case 0:
295
+ if (redist_typer & GICR_TYPER_PLPIS) {
476
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
296
+ /* Set base addresses before LPIs are enabled by GICR_CTLR write */
477
tmp2 = neon_load_reg(rd, pass);
297
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
478
gen_neon_add(size, tmp, tmp2);
298
+ GICv3CPUState *c = &s->cpu[ncpu];
479
tcg_temp_free_i32(tmp2);
299
+
480
- } else if (op == 4 || (op == 5 && u)) {
300
+ reg64 = c->gicr_propbaser;
481
- /* Insert */
301
+ regl = (uint32_t)reg64;
482
- switch (size) {
302
+ kvm_gicr_access(s, GICR_PROPBASER, ncpu, &regl, true);
483
- case 0:
303
+ regh = (uint32_t)(reg64 >> 32);
484
- if (op == 4)
304
+ kvm_gicr_access(s, GICR_PROPBASER + 4, ncpu, &regh, true);
485
- mask = 0xff >> -shift;
305
+
486
- else
306
+ reg64 = c->gicr_pendbaser;
487
- mask = (uint8_t)(0xff << shift);
307
+ if (!(c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS)) {
488
- mask |= mask << 8;
308
+ /* Setting PTZ is advised if LPIs are disabled, to reduce
489
- mask |= mask << 16;
309
+ * GIC initialization time.
490
- break;
310
+ */
491
- case 1:
311
+ reg64 |= GICR_PENDBASER_PTZ;
492
- if (op == 4)
312
+ }
493
- mask = 0xffff >> -shift;
313
+ regl = (uint32_t)reg64;
494
- else
314
+ kvm_gicr_access(s, GICR_PENDBASER, ncpu, &regl, true);
495
- mask = (uint16_t)(0xffff << shift);
315
+ regh = (uint32_t)(reg64 >> 32);
496
- mask |= mask << 16;
316
+ kvm_gicr_access(s, GICR_PENDBASER + 4, ncpu, &regh, true);
497
- break;
317
+ }
498
- case 2:
318
+ }
499
- if (shift < -31 || shift > 31) {
319
+
500
- mask = 0;
320
+ /* Redistributor state (one per CPU) */
501
- } else {
321
+
502
- if (op == 4)
322
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
503
- mask = 0xffffffffu >> -shift;
323
+ GICv3CPUState *c = &s->cpu[ncpu];
504
- else
324
+
505
- mask = 0xffffffffu << shift;
325
+ reg = c->gicr_ctlr;
506
- }
326
+ kvm_gicr_access(s, GICR_CTLR, ncpu, &reg, true);
507
- break;
327
+
508
- default:
328
+ reg = c->gicr_statusr[GICV3_NS];
509
- abort();
329
+ kvm_gicr_access(s, GICR_STATUSR, ncpu, &reg, true);
510
- }
330
+
511
- tmp2 = neon_load_reg(rd, pass);
331
+ reg = c->gicr_waker;
512
- tcg_gen_andi_i32(tmp, tmp, mask);
332
+ kvm_gicr_access(s, GICR_WAKER, ncpu, &reg, true);
513
- tcg_gen_andi_i32(tmp2, tmp2, ~mask);
333
+
514
- tcg_gen_or_i32(tmp, tmp, tmp2);
334
+ reg = c->gicr_igroupr0;
515
- tcg_temp_free_i32(tmp2);
335
+ kvm_gicr_access(s, GICR_IGROUPR0, ncpu, &reg, true);
516
}
336
+
517
neon_store_reg(rd, pass, tmp);
337
+ reg = ~0;
518
}
338
+ kvm_gicr_access(s, GICR_ICENABLER0, ncpu, &reg, true);
339
+ reg = c->gicr_ienabler0;
340
+ kvm_gicr_access(s, GICR_ISENABLER0, ncpu, &reg, true);
341
+
342
+ /* Restore config before pending so we treat level/edge correctly */
343
+ reg = half_shuffle32(c->edge_trigger >> 16) << 1;
344
+ kvm_gicr_access(s, GICR_ICFGR1, ncpu, &reg, true);
345
+
346
+ reg = c->level;
347
+ kvm_gic_line_level_access(s, 0, ncpu, &reg, true);
348
+
349
+ reg = ~0;
350
+ kvm_gicr_access(s, GICR_ICPENDR0, ncpu, &reg, true);
351
+ reg = c->gicr_ipendr0;
352
+ kvm_gicr_access(s, GICR_ISPENDR0, ncpu, &reg, true);
353
+
354
+ reg = ~0;
355
+ kvm_gicr_access(s, GICR_ICACTIVER0, ncpu, &reg, true);
356
+ reg = c->gicr_iactiver0;
357
+ kvm_gicr_access(s, GICR_ISACTIVER0, ncpu, &reg, true);
358
+
359
+ for (i = 0; i < GIC_INTERNAL; i += 4) {
360
+ reg = c->gicr_ipriorityr[i] |
361
+ (c->gicr_ipriorityr[i + 1] << 8) |
362
+ (c->gicr_ipriorityr[i + 2] << 16) |
363
+ (c->gicr_ipriorityr[i + 3] << 24);
364
+ kvm_gicr_access(s, GICR_IPRIORITYR + i, ncpu, &reg, true);
365
+ }
366
+ }
367
+
368
+ /* Distributor state (shared between all CPUs) */
369
+ reg = s->gicd_statusr[GICV3_NS];
370
+ kvm_gicd_access(s, GICD_STATUSR, &reg, true);
371
+
372
+ /* s->enable bitmap -> GICD_ISENABLERn */
373
+ kvm_dist_putbmp(s, GICD_ISENABLER, GICD_ICENABLER, s->enabled);
374
+
375
+ /* s->group bitmap -> GICD_IGROUPRn */
376
+ kvm_dist_putbmp(s, GICD_IGROUPR, 0, s->group);
377
+
378
+ /* Restore targets before pending to ensure the pending state is set on
379
+ * the appropriate CPU interfaces in the kernel
380
+ */
381
+
382
+ /* s->gicd_irouter[irq] -> GICD_IROUTERn
383
+ * We can't use kvm_dist_put() here because the registers are 64-bit
384
+ */
385
+ for (i = GIC_INTERNAL; i < s->num_irq; i++) {
386
+ uint32_t offset;
387
+
388
+ offset = GICD_IROUTER + (sizeof(uint32_t) * i);
389
+ reg = (uint32_t)s->gicd_irouter[i];
390
+ kvm_gicd_access(s, offset, &reg, true);
391
+
392
+ offset = GICD_IROUTER + (sizeof(uint32_t) * i) + 4;
393
+ reg = (uint32_t)(s->gicd_irouter[i] >> 32);
394
+ kvm_gicd_access(s, offset, &reg, true);
395
+ }
396
+
397
+ /* s->trigger bitmap -> GICD_ICFGRn
398
+ * (restore configuration registers before pending IRQs so we treat
399
+ * level/edge correctly)
400
+ */
401
+ kvm_dist_put_edge_trigger(s, GICD_ICFGR, s->edge_trigger);
402
+
403
+ /* s->level bitmap -> line_level */
404
+ kvm_gic_put_line_level_bmp(s, s->level);
405
+
406
+ /* s->pending bitmap -> GICD_ISPENDRn */
407
+ kvm_dist_putbmp(s, GICD_ISPENDR, GICD_ICPENDR, s->pending);
408
+
409
+ /* s->active bitmap -> GICD_ISACTIVERn */
410
+ kvm_dist_putbmp(s, GICD_ISACTIVER, GICD_ICACTIVER, s->active);
411
+
412
+ /* s->gicd_ipriority[] -> GICD_IPRIORITYRn */
413
+ kvm_dist_put_priority(s, GICD_IPRIORITYR, s->gicd_ipriority);
414
+
415
+ /* CPU Interface state (one per CPU) */
416
+
417
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
418
+ GICv3CPUState *c = &s->cpu[ncpu];
419
+ int num_pri_bits;
420
+
421
+ kvm_gicc_access(s, ICC_SRE_EL1, ncpu, &c->icc_sre_el1, true);
422
+ kvm_gicc_access(s, ICC_CTLR_EL1, ncpu,
423
+ &c->icc_ctlr_el1[GICV3_NS], true);
424
+ kvm_gicc_access(s, ICC_IGRPEN0_EL1, ncpu,
425
+ &c->icc_igrpen[GICV3_G0], true);
426
+ kvm_gicc_access(s, ICC_IGRPEN1_EL1, ncpu,
427
+ &c->icc_igrpen[GICV3_G1NS], true);
428
+ kvm_gicc_access(s, ICC_PMR_EL1, ncpu, &c->icc_pmr_el1, true);
429
+ kvm_gicc_access(s, ICC_BPR0_EL1, ncpu, &c->icc_bpr[GICV3_G0], true);
430
+ kvm_gicc_access(s, ICC_BPR1_EL1, ncpu, &c->icc_bpr[GICV3_G1NS], true);
431
+
432
+ num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] &
433
+ ICC_CTLR_EL1_PRIBITS_MASK) >>
434
+ ICC_CTLR_EL1_PRIBITS_SHIFT) + 1;
435
+
436
+ switch (num_pri_bits) {
437
+ case 7:
438
+ reg64 = c->icc_apr[GICV3_G0][3];
439
+ kvm_gicc_access(s, ICC_AP0R_EL1(3), ncpu, &reg64, true);
440
+ reg64 = c->icc_apr[GICV3_G0][2];
441
+ kvm_gicc_access(s, ICC_AP0R_EL1(2), ncpu, &reg64, true);
442
+ case 6:
443
+ reg64 = c->icc_apr[GICV3_G0][1];
444
+ kvm_gicc_access(s, ICC_AP0R_EL1(1), ncpu, &reg64, true);
445
+ default:
446
+ reg64 = c->icc_apr[GICV3_G0][0];
447
+ kvm_gicc_access(s, ICC_AP0R_EL1(0), ncpu, &reg64, true);
448
+ }
449
+
450
+ switch (num_pri_bits) {
451
+ case 7:
452
+ reg64 = c->icc_apr[GICV3_G1NS][3];
453
+ kvm_gicc_access(s, ICC_AP1R_EL1(3), ncpu, &reg64, true);
454
+ reg64 = c->icc_apr[GICV3_G1NS][2];
455
+ kvm_gicc_access(s, ICC_AP1R_EL1(2), ncpu, &reg64, true);
456
+ case 6:
457
+ reg64 = c->icc_apr[GICV3_G1NS][1];
458
+ kvm_gicc_access(s, ICC_AP1R_EL1(1), ncpu, &reg64, true);
459
+ default:
460
+ reg64 = c->icc_apr[GICV3_G1NS][0];
461
+ kvm_gicc_access(s, ICC_AP1R_EL1(0), ncpu, &reg64, true);
462
+ }
463
+ }
464
}
465
466
static void kvm_arm_gicv3_get(GICv3State *s)
467
{
468
- /* TODO */
469
- DPRINTF("Cannot get kernel gic state, no kernel interface\n");
470
+ uint32_t regl, regh, reg;
471
+ uint64_t reg64, redist_typer;
472
+ int ncpu, i;
473
+
474
+ kvm_arm_gicv3_check(s);
475
+
476
+ kvm_gicr_access(s, GICR_TYPER, 0, &regl, false);
477
+ kvm_gicr_access(s, GICR_TYPER + 4, 0, &regh, false);
478
+ redist_typer = ((uint64_t)regh << 32) | regl;
479
+
480
+ kvm_gicd_access(s, GICD_CTLR, &reg, false);
481
+ s->gicd_ctlr = reg;
482
+
483
+ /* Redistributor state (one per CPU) */
484
+
485
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
486
+ GICv3CPUState *c = &s->cpu[ncpu];
487
+
488
+ kvm_gicr_access(s, GICR_CTLR, ncpu, &reg, false);
489
+ c->gicr_ctlr = reg;
490
+
491
+ kvm_gicr_access(s, GICR_STATUSR, ncpu, &reg, false);
492
+ c->gicr_statusr[GICV3_NS] = reg;
493
+
494
+ kvm_gicr_access(s, GICR_WAKER, ncpu, &reg, false);
495
+ c->gicr_waker = reg;
496
+
497
+ kvm_gicr_access(s, GICR_IGROUPR0, ncpu, &reg, false);
498
+ c->gicr_igroupr0 = reg;
499
+ kvm_gicr_access(s, GICR_ISENABLER0, ncpu, &reg, false);
500
+ c->gicr_ienabler0 = reg;
501
+ kvm_gicr_access(s, GICR_ICFGR1, ncpu, &reg, false);
502
+ c->edge_trigger = half_unshuffle32(reg >> 1) << 16;
503
+ kvm_gic_line_level_access(s, 0, ncpu, &reg, false);
504
+ c->level = reg;
505
+ kvm_gicr_access(s, GICR_ISPENDR0, ncpu, &reg, false);
506
+ c->gicr_ipendr0 = reg;
507
+ kvm_gicr_access(s, GICR_ISACTIVER0, ncpu, &reg, false);
508
+ c->gicr_iactiver0 = reg;
509
+
510
+ for (i = 0; i < GIC_INTERNAL; i += 4) {
511
+ kvm_gicr_access(s, GICR_IPRIORITYR + i, ncpu, &reg, false);
512
+ c->gicr_ipriorityr[i] = extract32(reg, 0, 8);
513
+ c->gicr_ipriorityr[i + 1] = extract32(reg, 8, 8);
514
+ c->gicr_ipriorityr[i + 2] = extract32(reg, 16, 8);
515
+ c->gicr_ipriorityr[i + 3] = extract32(reg, 24, 8);
516
+ }
517
+ }
518
+
519
+ if (redist_typer & GICR_TYPER_PLPIS) {
520
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
521
+ GICv3CPUState *c = &s->cpu[ncpu];
522
+
523
+ kvm_gicr_access(s, GICR_PROPBASER, ncpu, &regl, false);
524
+ kvm_gicr_access(s, GICR_PROPBASER + 4, ncpu, &regh, false);
525
+ c->gicr_propbaser = ((uint64_t)regh << 32) | regl;
526
+
527
+ kvm_gicr_access(s, GICR_PENDBASER, ncpu, &regl, false);
528
+ kvm_gicr_access(s, GICR_PENDBASER + 4, ncpu, &regh, false);
529
+ c->gicr_pendbaser = ((uint64_t)regh << 32) | regl;
530
+ }
531
+ }
532
+
533
+ /* Distributor state (shared between all CPUs) */
534
+
535
+ kvm_gicd_access(s, GICD_STATUSR, &reg, false);
536
+ s->gicd_statusr[GICV3_NS] = reg;
537
+
538
+ /* GICD_IGROUPRn -> s->group bitmap */
539
+ kvm_dist_getbmp(s, GICD_IGROUPR, s->group);
540
+
541
+ /* GICD_ISENABLERn -> s->enabled bitmap */
542
+ kvm_dist_getbmp(s, GICD_ISENABLER, s->enabled);
543
+
544
+ /* Line level of irq */
545
+ kvm_gic_get_line_level_bmp(s, s->level);
546
+ /* GICD_ISPENDRn -> s->pending bitmap */
547
+ kvm_dist_getbmp(s, GICD_ISPENDR, s->pending);
548
+
549
+ /* GICD_ISACTIVERn -> s->active bitmap */
550
+ kvm_dist_getbmp(s, GICD_ISACTIVER, s->active);
551
+
552
+ /* GICD_ICFGRn -> s->trigger bitmap */
553
+ kvm_dist_get_edge_trigger(s, GICD_ICFGR, s->edge_trigger);
554
+
555
+ /* GICD_IPRIORITYRn -> s->gicd_ipriority[] */
556
+ kvm_dist_get_priority(s, GICD_IPRIORITYR, s->gicd_ipriority);
557
+
558
+ /* GICD_IROUTERn -> s->gicd_irouter[irq] */
559
+ for (i = GIC_INTERNAL; i < s->num_irq; i++) {
560
+ uint32_t offset;
561
+
562
+ offset = GICD_IROUTER + (sizeof(uint32_t) * i);
563
+ kvm_gicd_access(s, offset, &regl, false);
564
+ offset = GICD_IROUTER + (sizeof(uint32_t) * i) + 4;
565
+ kvm_gicd_access(s, offset, &regh, false);
566
+ s->gicd_irouter[i] = ((uint64_t)regh << 32) | regl;
567
+ }
568
+
569
+ /*****************************************************************
570
+ * CPU Interface(s) State
571
+ */
572
+
573
+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
574
+ GICv3CPUState *c = &s->cpu[ncpu];
575
+ int num_pri_bits;
576
+
577
+ kvm_gicc_access(s, ICC_SRE_EL1, ncpu, &c->icc_sre_el1, false);
578
+ kvm_gicc_access(s, ICC_CTLR_EL1, ncpu,
579
+ &c->icc_ctlr_el1[GICV3_NS], false);
580
+ kvm_gicc_access(s, ICC_IGRPEN0_EL1, ncpu,
581
+ &c->icc_igrpen[GICV3_G0], false);
582
+ kvm_gicc_access(s, ICC_IGRPEN1_EL1, ncpu,
583
+ &c->icc_igrpen[GICV3_G1NS], false);
584
+ kvm_gicc_access(s, ICC_PMR_EL1, ncpu, &c->icc_pmr_el1, false);
585
+ kvm_gicc_access(s, ICC_BPR0_EL1, ncpu, &c->icc_bpr[GICV3_G0], false);
586
+ kvm_gicc_access(s, ICC_BPR1_EL1, ncpu, &c->icc_bpr[GICV3_G1NS], false);
587
+ num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] &
588
+ ICC_CTLR_EL1_PRIBITS_MASK) >>
589
+ ICC_CTLR_EL1_PRIBITS_SHIFT) + 1;
590
+
591
+ switch (num_pri_bits) {
592
+ case 7:
593
+ kvm_gicc_access(s, ICC_AP0R_EL1(3), ncpu, &reg64, false);
594
+ c->icc_apr[GICV3_G0][3] = reg64;
595
+ kvm_gicc_access(s, ICC_AP0R_EL1(2), ncpu, &reg64, false);
596
+ c->icc_apr[GICV3_G0][2] = reg64;
597
+ case 6:
598
+ kvm_gicc_access(s, ICC_AP0R_EL1(1), ncpu, &reg64, false);
599
+ c->icc_apr[GICV3_G0][1] = reg64;
600
+ default:
601
+ kvm_gicc_access(s, ICC_AP0R_EL1(0), ncpu, &reg64, false);
602
+ c->icc_apr[GICV3_G0][0] = reg64;
603
+ }
604
+
605
+ switch (num_pri_bits) {
606
+ case 7:
607
+ kvm_gicc_access(s, ICC_AP1R_EL1(3), ncpu, &reg64, false);
608
+ c->icc_apr[GICV3_G1NS][3] = reg64;
609
+ kvm_gicc_access(s, ICC_AP1R_EL1(2), ncpu, &reg64, false);
610
+ c->icc_apr[GICV3_G1NS][2] = reg64;
611
+ case 6:
612
+ kvm_gicc_access(s, ICC_AP1R_EL1(1), ncpu, &reg64, false);
613
+ c->icc_apr[GICV3_G1NS][1] = reg64;
614
+ default:
615
+ kvm_gicc_access(s, ICC_AP1R_EL1(0), ncpu, &reg64, false);
616
+ c->icc_apr[GICV3_G1NS][0] = reg64;
617
+ }
618
+ }
619
}
620
621
static void kvm_arm_gicv3_reset(DeviceState *dev)
622
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_reset(DeviceState *dev)
623
DPRINTF("Reset\n");
624
625
kgc->parent_reset(dev);
626
+
627
+ if (s->migration_blocker) {
628
+ DPRINTF("Cannot put kernel gic state, no kernel interface\n");
629
+ return;
630
+ }
631
+
632
kvm_arm_gicv3_put(s);
633
}
634
635
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
636
637
gicv3_init_irqs_and_mmio(s, kvm_arm_gicv3_set_irq, NULL);
638
639
- /* Block migration of a KVM GICv3 device: the API for saving and restoring
640
- * the state in the kernel is not yet finalised in the kernel or
641
- * implemented in QEMU.
642
- */
643
- error_setg(&s->migration_blocker, "vGICv3 migration is not implemented");
644
- migrate_add_blocker(s->migration_blocker, &local_err);
645
- if (local_err) {
646
- error_propagate(errp, local_err);
647
- error_free(s->migration_blocker);
648
- return;
649
- }
650
-
651
/* Try to create the device via the device control API */
652
s->dev_fd = kvm_create_device(kvm_state, KVM_DEV_TYPE_ARM_VGIC_V3, false);
653
if (s->dev_fd < 0) {
654
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gicv3_realize(DeviceState *dev, Error **errp)
655
656
kvm_irqchip_commit_routes(kvm_state);
657
}
658
+
659
+ if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_DIST_REGS,
660
+ GICD_CTLR)) {
661
+ error_setg(&s->migration_blocker, "This operating system kernel does "
662
+ "not support vGICv3 migration");
663
+ migrate_add_blocker(s->migration_blocker, &local_err);
664
+ if (local_err) {
665
+ error_propagate(errp, local_err);
666
+ error_free(s->migration_blocker);
667
+ return;
668
+ }
669
+ }
670
}
671
672
static void kvm_arm_gicv3_class_init(ObjectClass *klass, void *data)
673
--
519
--
674
2.7.4
520
2.19.1
675
521
676
522
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Move mla_op and mls_op expanders from translate-a64.c.
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/translate.h | 2 +
11
target/arm/translate-a64.c | 106 -----------------------------
12
target/arm/translate.c | 134 ++++++++++++++++++++++++++++++++-----
13
3 files changed, 120 insertions(+), 122 deletions(-)
14
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
18
+++ b/target/arm/translate.h
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
20
extern const GVecGen3 bsl_op;
21
extern const GVecGen3 bit_op;
22
extern const GVecGen3 bif_op;
23
+extern const GVecGen3 mla_op[4];
24
+extern const GVecGen3 mls_op[4];
25
extern const GVecGen2i ssra_op[4];
26
extern const GVecGen2i usra_op[4];
27
extern const GVecGen2i sri_op[4];
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
29
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/translate-a64.c
31
+++ b/target/arm/translate-a64.c
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
33
}
34
}
35
36
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
37
-{
38
- gen_helper_neon_mul_u8(a, a, b);
39
- gen_helper_neon_add_u8(d, d, a);
40
-}
41
-
42
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
43
-{
44
- gen_helper_neon_mul_u16(a, a, b);
45
- gen_helper_neon_add_u16(d, d, a);
46
-}
47
-
48
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
49
-{
50
- tcg_gen_mul_i32(a, a, b);
51
- tcg_gen_add_i32(d, d, a);
52
-}
53
-
54
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
55
-{
56
- tcg_gen_mul_i64(a, a, b);
57
- tcg_gen_add_i64(d, d, a);
58
-}
59
-
60
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
61
-{
62
- tcg_gen_mul_vec(vece, a, a, b);
63
- tcg_gen_add_vec(vece, d, d, a);
64
-}
65
-
66
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
67
-{
68
- gen_helper_neon_mul_u8(a, a, b);
69
- gen_helper_neon_sub_u8(d, d, a);
70
-}
71
-
72
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
73
-{
74
- gen_helper_neon_mul_u16(a, a, b);
75
- gen_helper_neon_sub_u16(d, d, a);
76
-}
77
-
78
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
79
-{
80
- tcg_gen_mul_i32(a, a, b);
81
- tcg_gen_sub_i32(d, d, a);
82
-}
83
-
84
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
85
-{
86
- tcg_gen_mul_i64(a, a, b);
87
- tcg_gen_sub_i64(d, d, a);
88
-}
89
-
90
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
91
-{
92
- tcg_gen_mul_vec(vece, a, a, b);
93
- tcg_gen_sub_vec(vece, d, d, a);
94
-}
95
-
96
/* Integer op subgroup of C3.6.16. */
97
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
98
{
99
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
100
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
.vece = MO_64 },
102
};
103
- static const GVecGen3 mla_op[4] = {
104
- { .fni4 = gen_mla8_i32,
105
- .fniv = gen_mla_vec,
106
- .opc = INDEX_op_mul_vec,
107
- .load_dest = true,
108
- .vece = MO_8 },
109
- { .fni4 = gen_mla16_i32,
110
- .fniv = gen_mla_vec,
111
- .opc = INDEX_op_mul_vec,
112
- .load_dest = true,
113
- .vece = MO_16 },
114
- { .fni4 = gen_mla32_i32,
115
- .fniv = gen_mla_vec,
116
- .opc = INDEX_op_mul_vec,
117
- .load_dest = true,
118
- .vece = MO_32 },
119
- { .fni8 = gen_mla64_i64,
120
- .fniv = gen_mla_vec,
121
- .opc = INDEX_op_mul_vec,
122
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
123
- .load_dest = true,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen3 mls_op[4] = {
127
- { .fni4 = gen_mls8_i32,
128
- .fniv = gen_mls_vec,
129
- .opc = INDEX_op_mul_vec,
130
- .load_dest = true,
131
- .vece = MO_8 },
132
- { .fni4 = gen_mls16_i32,
133
- .fniv = gen_mls_vec,
134
- .opc = INDEX_op_mul_vec,
135
- .load_dest = true,
136
- .vece = MO_16 },
137
- { .fni4 = gen_mls32_i32,
138
- .fniv = gen_mls_vec,
139
- .opc = INDEX_op_mul_vec,
140
- .load_dest = true,
141
- .vece = MO_32 },
142
- { .fni8 = gen_mls64_i64,
143
- .fniv = gen_mls_vec,
144
- .opc = INDEX_op_mul_vec,
145
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
146
- .load_dest = true,
147
- .vece = MO_64 },
148
- };
149
150
int is_q = extract32(insn, 30, 1);
151
int u = extract32(insn, 29, 1);
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
157
#define NEON_3R_VABA 15
158
#define NEON_3R_VADD_VSUB 16
159
#define NEON_3R_VTST_VCEQ 17
160
-#define NEON_3R_VML 18 /* VMLA, VMLAL, VMLS, VMLSL */
161
+#define NEON_3R_VML 18 /* VMLA, VMLS */
162
#define NEON_3R_VMUL 19
163
#define NEON_3R_VPMAX 20
164
#define NEON_3R_VPMIN 21
165
@@ -XXX,XX +XXX,XX @@ const GVecGen2i sli_op[4] = {
166
.vece = MO_64 },
167
};
168
169
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
170
+{
171
+ gen_helper_neon_mul_u8(a, a, b);
172
+ gen_helper_neon_add_u8(d, d, a);
173
+}
174
+
175
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
176
+{
177
+ gen_helper_neon_mul_u8(a, a, b);
178
+ gen_helper_neon_sub_u8(d, d, a);
179
+}
180
+
181
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
182
+{
183
+ gen_helper_neon_mul_u16(a, a, b);
184
+ gen_helper_neon_add_u16(d, d, a);
185
+}
186
+
187
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
188
+{
189
+ gen_helper_neon_mul_u16(a, a, b);
190
+ gen_helper_neon_sub_u16(d, d, a);
191
+}
192
+
193
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
194
+{
195
+ tcg_gen_mul_i32(a, a, b);
196
+ tcg_gen_add_i32(d, d, a);
197
+}
198
+
199
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
200
+{
201
+ tcg_gen_mul_i32(a, a, b);
202
+ tcg_gen_sub_i32(d, d, a);
203
+}
204
+
205
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
206
+{
207
+ tcg_gen_mul_i64(a, a, b);
208
+ tcg_gen_add_i64(d, d, a);
209
+}
210
+
211
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
212
+{
213
+ tcg_gen_mul_i64(a, a, b);
214
+ tcg_gen_sub_i64(d, d, a);
215
+}
216
+
217
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
218
+{
219
+ tcg_gen_mul_vec(vece, a, a, b);
220
+ tcg_gen_add_vec(vece, d, d, a);
221
+}
222
+
223
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
224
+{
225
+ tcg_gen_mul_vec(vece, a, a, b);
226
+ tcg_gen_sub_vec(vece, d, d, a);
227
+}
228
+
229
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
230
+ * these tables are shared with AArch64 which does support them.
231
+ */
232
+const GVecGen3 mla_op[4] = {
233
+ { .fni4 = gen_mla8_i32,
234
+ .fniv = gen_mla_vec,
235
+ .opc = INDEX_op_mul_vec,
236
+ .load_dest = true,
237
+ .vece = MO_8 },
238
+ { .fni4 = gen_mla16_i32,
239
+ .fniv = gen_mla_vec,
240
+ .opc = INDEX_op_mul_vec,
241
+ .load_dest = true,
242
+ .vece = MO_16 },
243
+ { .fni4 = gen_mla32_i32,
244
+ .fniv = gen_mla_vec,
245
+ .opc = INDEX_op_mul_vec,
246
+ .load_dest = true,
247
+ .vece = MO_32 },
248
+ { .fni8 = gen_mla64_i64,
249
+ .fniv = gen_mla_vec,
250
+ .opc = INDEX_op_mul_vec,
251
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
252
+ .load_dest = true,
253
+ .vece = MO_64 },
254
+};
255
+
256
+const GVecGen3 mls_op[4] = {
257
+ { .fni4 = gen_mls8_i32,
258
+ .fniv = gen_mls_vec,
259
+ .opc = INDEX_op_mul_vec,
260
+ .load_dest = true,
261
+ .vece = MO_8 },
262
+ { .fni4 = gen_mls16_i32,
263
+ .fniv = gen_mls_vec,
264
+ .opc = INDEX_op_mul_vec,
265
+ .load_dest = true,
266
+ .vece = MO_16 },
267
+ { .fni4 = gen_mls32_i32,
268
+ .fniv = gen_mls_vec,
269
+ .opc = INDEX_op_mul_vec,
270
+ .load_dest = true,
271
+ .vece = MO_32 },
272
+ { .fni8 = gen_mls64_i64,
273
+ .fniv = gen_mls_vec,
274
+ .opc = INDEX_op_mul_vec,
275
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
276
+ .load_dest = true,
277
+ .vece = MO_64 },
278
+};
279
+
280
/* Translate a NEON data processing instruction. Return nonzero if the
281
instruction is invalid.
282
We process data in a mixture of 32-bit and 64-bit chunks.
283
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
284
return 0;
285
}
286
break;
287
+
288
+ case NEON_3R_VML: /* VMLA, VMLS */
289
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
290
+ u ? &mls_op[size] : &mla_op[size]);
291
+ return 0;
292
}
293
+
294
if (size == 3) {
295
/* 64-bit element instructions. */
296
for (pass = 0; pass < (q ? 2 : 1); pass++) {
297
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
298
}
299
}
300
break;
301
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
302
- switch (size) {
303
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
304
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
305
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
306
- default: abort();
307
- }
308
- tcg_temp_free_i32(tmp2);
309
- tmp2 = neon_load_reg(rd, pass);
310
- if (u) { /* VMLS */
311
- gen_neon_rsb(size, tmp, tmp2);
312
- } else { /* VMLA */
313
- gen_neon_add(size, tmp, tmp2);
314
- }
315
- break;
316
case NEON_3R_VMUL:
317
/* VMUL.P8; other cases already eliminated. */
318
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
319
--
320
2.19.1
321
322
1
From: Clement Deschamps <clement.deschamps@antfield.fr>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This adds the BCM2835 GPIO controller.
3
Move cmtst_op expanders from translate-a64.c.
4
4
5
It currently implements:
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
- The 54 GPIOs as outputs (qemu_irq)
6
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
7
- The SD controller selection via alternate function of GPIOs 48-53
8
9
Signed-off-by: Clement Deschamps <clement.deschamps@antfield.fr>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 1488293711-14195-4-git-send-email-peter.maydell@linaro.org
13
Message-id: 20170224164021.9066-4-clement.deschamps@antfield.fr
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
9
---
17
hw/gpio/Makefile.objs | 1 +
10
target/arm/translate.h | 2 +
18
include/hw/gpio/bcm2835_gpio.h | 39 +++++
11
target/arm/translate-a64.c | 38 ------------------
19
hw/gpio/bcm2835_gpio.c | 353 +++++++++++++++++++++++++++++++++++++++++
12
target/arm/translate.c | 81 +++++++++++++++++++++++++++-----------
20
3 files changed, 393 insertions(+)
13
3 files changed, 60 insertions(+), 61 deletions(-)
21
create mode 100644 include/hw/gpio/bcm2835_gpio.h
14
22
create mode 100644 hw/gpio/bcm2835_gpio.c
15
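(Aside, not part of the patch: the GPFSELn registers decoded below pack ten GPIOs per 32-bit register at three function-select bits each (0 = input, 1 = output, 4 = alternate function 0). A small standalone sketch of the extraction, with an illustrative helper name:

    #include <stdint.h>

    /* function select for 'gpio', given the raw GPFSEL0..GPFSEL5 values */
    static unsigned gpfsel_of(const uint32_t gpfsel[6], unsigned gpio)
    {
        return (gpfsel[gpio / 10] >> (3 * (gpio % 10))) & 0x7;
    }

The device model below uses the values programmed for GPIOs 48-53 (0 vs 4) to decide whether the SD card is routed to the SDHCI or the SDHost controller.)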
diff --git a/target/arm/translate.h b/target/arm/translate.h
23
24
diff --git a/hw/gpio/Makefile.objs b/hw/gpio/Makefile.objs
25
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/gpio/Makefile.objs
17
--- a/target/arm/translate.h
27
+++ b/hw/gpio/Makefile.objs
18
+++ b/target/arm/translate.h
28
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_GPIO_KEY) += gpio_key.o
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
29
20
extern const GVecGen3 bif_op;
30
obj-$(CONFIG_OMAP) += omap_gpio.o
21
extern const GVecGen3 mla_op[4];
31
obj-$(CONFIG_IMX) += imx_gpio.o
22
extern const GVecGen3 mls_op[4];
32
+obj-$(CONFIG_RASPI) += bcm2835_gpio.o
23
+extern const GVecGen3 cmtst_op[4];
33
diff --git a/include/hw/gpio/bcm2835_gpio.h b/include/hw/gpio/bcm2835_gpio.h
24
extern const GVecGen2i ssra_op[4];
34
new file mode 100644
25
extern const GVecGen2i usra_op[4];
35
index XXXXXXX..XXXXXXX
26
extern const GVecGen2i sri_op[4];
36
--- /dev/null
27
extern const GVecGen2i sli_op[4];
37
+++ b/include/hw/gpio/bcm2835_gpio.h
28
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
38
@@ -XXX,XX +XXX,XX @@
29
39
+/*
30
/*
40
+ * Raspberry Pi (BCM2835) GPIO Controller
31
* Forward to the isar_feature_* tests given a DisasContext pointer.
41
+ *
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
42
+ * Copyright (c) 2017 Antfield SAS
33
index XXXXXXX..XXXXXXX 100644
43
+ *
34
--- a/target/arm/translate-a64.c
44
+ * Authors:
35
+++ b/target/arm/translate-a64.c
45
+ * Clement Deschamps <clement.deschamps@antfield.fr>
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
46
+ * Luc Michel <luc.michel@antfield.fr>
37
}
47
+ *
38
}
48
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
39
49
+ * See the COPYING file in the top-level directory.
40
-/* CMTST : test is "if (X & Y != 0)". */
50
+ */
41
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
51
+
42
-{
52
+#ifndef BCM2835_GPIO_H
43
- tcg_gen_and_i32(d, a, b);
53
+#define BCM2835_GPIO_H
44
- tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
54
+
45
- tcg_gen_neg_i32(d, d);
55
+#include "hw/sd/sd.h"
46
-}
56
+
47
-
57
+typedef struct BCM2835GpioState {
48
-static void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
58
+ SysBusDevice parent_obj;
49
-{
59
+
50
- tcg_gen_and_i64(d, a, b);
60
+ MemoryRegion iomem;
51
- tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
61
+
52
- tcg_gen_neg_i64(d, d);
62
+ /* SDBus selector */
53
-}
63
+ SDBus sdbus;
54
-
64
+ SDBus *sdbus_sdhci;
55
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
65
+ SDBus *sdbus_sdhost;
56
-{
66
+
57
- tcg_gen_and_vec(vece, d, a, b);
67
+ uint8_t fsel[54];
58
- tcg_gen_dupi_vec(vece, a, 0);
68
+ uint32_t lev0, lev1;
59
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
69
+ uint8_t sd_fsel;
60
-}
70
+ qemu_irq out[54];
61
-
71
+} BCM2835GpioState;
62
static void handle_3same_64(DisasContext *s, int opcode, bool u,
72
+
63
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
73
+#define TYPE_BCM2835_GPIO "bcm2835_gpio"
64
{
74
+#define BCM2835_GPIO(obj) \
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
75
+ OBJECT_CHECK(BCM2835GpioState, (obj), TYPE_BCM2835_GPIO)
66
/* Integer op subgroup of C3.6.16. */
76
+
67
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
77
+#endif
68
{
78
diff --git a/hw/gpio/bcm2835_gpio.c b/hw/gpio/bcm2835_gpio.c
69
- static const GVecGen3 cmtst_op[4] = {
79
new file mode 100644
70
- { .fni4 = gen_helper_neon_tst_u8,
80
index XXXXXXX..XXXXXXX
71
- .fniv = gen_cmtst_vec,
81
--- /dev/null
72
- .vece = MO_8 },
82
+++ b/hw/gpio/bcm2835_gpio.c
73
- { .fni4 = gen_helper_neon_tst_u16,
83
@@ -XXX,XX +XXX,XX @@
74
- .fniv = gen_cmtst_vec,
84
+/*
75
- .vece = MO_16 },
85
+ * Raspberry Pi (BCM2835) GPIO Controller
76
- { .fni4 = gen_cmtst_i32,
86
+ *
77
- .fniv = gen_cmtst_vec,
87
+ * Copyright (c) 2017 Antfield SAS
78
- .vece = MO_32 },
88
+ *
79
- { .fni8 = gen_cmtst_i64,
89
+ * Authors:
80
- .fniv = gen_cmtst_vec,
90
+ * Clement Deschamps <clement.deschamps@antfield.fr>
81
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
91
+ * Luc Michel <luc.michel@antfield.fr>
82
- .vece = MO_64 },
92
+ *
83
- };
93
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
84
-
94
+ * See the COPYING file in the top-level directory.
85
int is_q = extract32(insn, 30, 1);
95
+ */
86
int u = extract32(insn, 29, 1);
96
+
87
int size = extract32(insn, 22, 2);
97
+#include "qemu/osdep.h"
88
diff --git a/target/arm/translate.c b/target/arm/translate.c
98
+#include "qemu/log.h"
89
index XXXXXXX..XXXXXXX 100644
99
+#include "qemu/timer.h"
90
--- a/target/arm/translate.c
100
+#include "qapi/error.h"
91
+++ b/target/arm/translate.c
101
+#include "hw/sysbus.h"
92
@@ -XXX,XX +XXX,XX @@ const GVecGen3 mls_op[4] = {
102
+#include "hw/sd/sd.h"
93
.vece = MO_64 },
103
+#include "hw/gpio/bcm2835_gpio.h"
94
};
104
+
95
105
+#define GPFSEL0 0x00
96
+/* CMTST : test is "if (X & Y != 0)". */
106
+#define GPFSEL1 0x04
97
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
107
+#define GPFSEL2 0x08
108
+#define GPFSEL3 0x0C
109
+#define GPFSEL4 0x10
110
+#define GPFSEL5 0x14
111
+#define GPSET0 0x1C
112
+#define GPSET1 0x20
113
+#define GPCLR0 0x28
114
+#define GPCLR1 0x2C
115
+#define GPLEV0 0x34
116
+#define GPLEV1 0x38
117
+#define GPEDS0 0x40
118
+#define GPEDS1 0x44
119
+#define GPREN0 0x4C
120
+#define GPREN1 0x50
121
+#define GPFEN0 0x58
122
+#define GPFEN1 0x5C
123
+#define GPHEN0 0x64
124
+#define GPHEN1 0x68
125
+#define GPLEN0 0x70
126
+#define GPLEN1 0x74
127
+#define GPAREN0 0x7C
128
+#define GPAREN1 0x80
129
+#define GPAFEN0 0x88
130
+#define GPAFEN1 0x8C
131
+#define GPPUD 0x94
132
+#define GPPUDCLK0 0x98
133
+#define GPPUDCLK1 0x9C
134
+
135
+static uint32_t gpfsel_get(BCM2835GpioState *s, uint8_t reg)
136
+{
98
+{
137
+ int i;
99
+ tcg_gen_and_i32(d, a, b);
138
+ uint32_t value = 0;
100
+ tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
139
+ for (i = 0; i < 10; i++) {
101
+ tcg_gen_neg_i32(d, d);
140
+ uint32_t index = 10 * reg + i;
141
+ if (index < sizeof(s->fsel)) {
142
+ value |= (s->fsel[index] & 0x7) << (3 * i);
143
+ }
144
+ }
145
+ return value;
146
+}
102
+}
147
+
103
+
148
+static void gpfsel_set(BCM2835GpioState *s, uint8_t reg, uint32_t value)
104
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
149
+{
105
+{
150
+ int i;
106
+ tcg_gen_and_i64(d, a, b);
151
+ for (i = 0; i < 10; i++) {
107
+ tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
152
+ uint32_t index = 10 * reg + i;
108
+ tcg_gen_neg_i64(d, d);
153
+ if (index < sizeof(s->fsel)) {
154
+ int fsel = (value >> (3 * i)) & 0x7;
155
+ s->fsel[index] = fsel;
156
+ }
157
+ }
158
+
159
+ /* SD controller selection (48-53) */
160
+ if (s->sd_fsel != 0
161
+ && (s->fsel[48] == 0) /* SD_CLK_R */
162
+ && (s->fsel[49] == 0) /* SD_CMD_R */
163
+ && (s->fsel[50] == 0) /* SD_DATA0_R */
164
+ && (s->fsel[51] == 0) /* SD_DATA1_R */
165
+ && (s->fsel[52] == 0) /* SD_DATA2_R */
166
+ && (s->fsel[53] == 0) /* SD_DATA3_R */
167
+ ) {
168
+ /* SDHCI controller selected */
169
+ sdbus_reparent_card(s->sdbus_sdhost, s->sdbus_sdhci);
170
+ s->sd_fsel = 0;
171
+ } else if (s->sd_fsel != 4
172
+ && (s->fsel[48] == 4) /* SD_CLK_R */
173
+ && (s->fsel[49] == 4) /* SD_CMD_R */
174
+ && (s->fsel[50] == 4) /* SD_DATA0_R */
175
+ && (s->fsel[51] == 4) /* SD_DATA1_R */
176
+ && (s->fsel[52] == 4) /* SD_DATA2_R */
177
+ && (s->fsel[53] == 4) /* SD_DATA3_R */
178
+ ) {
179
+ /* SDHost controller selected */
180
+ sdbus_reparent_card(s->sdbus_sdhci, s->sdbus_sdhost);
181
+ s->sd_fsel = 4;
182
+ }
183
+}
109
+}
184
+
110
+
185
+static int gpfsel_is_out(BCM2835GpioState *s, int index)
111
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
186
+{
112
+{
187
+ if (index >= 0 && index < 54) {
113
+ tcg_gen_and_vec(vece, d, a, b);
188
+ return s->fsel[index] == 1;
114
+ tcg_gen_dupi_vec(vece, a, 0);
189
+ }
115
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
190
+ return 0;
191
+}
116
+}
192
+
117
+
193
+static void gpset(BCM2835GpioState *s,
118
+const GVecGen3 cmtst_op[4] = {
194
+ uint32_t val, uint8_t start, uint8_t count, uint32_t *lev)
119
+ { .fni4 = gen_helper_neon_tst_u8,
195
+{
120
+ .fniv = gen_cmtst_vec,
196
+ uint32_t changes = val & ~*lev;
121
+ .vece = MO_8 },
197
+ uint32_t cur = 1;
122
+ { .fni4 = gen_helper_neon_tst_u16,
198
+
123
+ .fniv = gen_cmtst_vec,
199
+ int i;
124
+ .vece = MO_16 },
200
+ for (i = 0; i < count; i++) {
125
+ { .fni4 = gen_cmtst_i32,
201
+ if ((changes & cur) && (gpfsel_is_out(s, start + i))) {
126
+ .fniv = gen_cmtst_vec,
202
+ qemu_set_irq(s->out[start + i], 1);
127
+ .vece = MO_32 },
203
+ }
128
+ { .fni8 = gen_cmtst_i64,
204
+ cur <<= 1;
129
+ .fniv = gen_cmtst_vec,
205
+ }
130
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
206
+
131
+ .vece = MO_64 },
207
+ *lev |= val;
208
+}
209
+
210
+static void gpclr(BCM2835GpioState *s,
211
+ uint32_t val, uint8_t start, uint8_t count, uint32_t *lev)
212
+{
213
+ uint32_t changes = val & *lev;
214
+ uint32_t cur = 1;
215
+
216
+ int i;
217
+ for (i = 0; i < count; i++) {
218
+ if ((changes & cur) && (gpfsel_is_out(s, start + i))) {
219
+ qemu_set_irq(s->out[start + i], 0);
220
+ }
221
+ cur <<= 1;
222
+ }
223
+
224
+ *lev &= ~val;
225
+}
226
+
227
+static uint64_t bcm2835_gpio_read(void *opaque, hwaddr offset,
228
+ unsigned size)
229
+{
230
+ BCM2835GpioState *s = (BCM2835GpioState *)opaque;
231
+
232
+ switch (offset) {
233
+ case GPFSEL0:
234
+ case GPFSEL1:
235
+ case GPFSEL2:
236
+ case GPFSEL3:
237
+ case GPFSEL4:
238
+ case GPFSEL5:
239
+ return gpfsel_get(s, offset / 4);
240
+ case GPSET0:
241
+ case GPSET1:
242
+ /* Write Only */
243
+ return 0;
244
+ case GPCLR0:
245
+ case GPCLR1:
246
+ /* Write Only */
247
+ return 0;
248
+ case GPLEV0:
249
+ return s->lev0;
250
+ case GPLEV1:
251
+ return s->lev1;
252
+ case GPEDS0:
253
+ case GPEDS1:
254
+ case GPREN0:
255
+ case GPREN1:
256
+ case GPFEN0:
257
+ case GPFEN1:
258
+ case GPHEN0:
259
+ case GPHEN1:
260
+ case GPLEN0:
261
+ case GPLEN1:
262
+ case GPAREN0:
263
+ case GPAREN1:
264
+ case GPAFEN0:
265
+ case GPAFEN1:
266
+ case GPPUD:
267
+ case GPPUDCLK0:
268
+ case GPPUDCLK1:
269
+ /* Not implemented */
270
+ return 0;
271
+ default:
272
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset %"HWADDR_PRIx"\n",
273
+ __func__, offset);
274
+ break;
275
+ }
276
+
277
+ return 0;
278
+}
279
+
280
+static void bcm2835_gpio_write(void *opaque, hwaddr offset,
281
+ uint64_t value, unsigned size)
282
+{
283
+ BCM2835GpioState *s = (BCM2835GpioState *)opaque;
284
+
285
+ switch (offset) {
286
+ case GPFSEL0:
287
+ case GPFSEL1:
288
+ case GPFSEL2:
289
+ case GPFSEL3:
290
+ case GPFSEL4:
291
+ case GPFSEL5:
292
+ gpfsel_set(s, offset / 4, value);
293
+ break;
294
+ case GPSET0:
295
+ gpset(s, value, 0, 32, &s->lev0);
296
+ break;
297
+ case GPSET1:
298
+ gpset(s, value, 32, 22, &s->lev1);
299
+ break;
300
+ case GPCLR0:
301
+ gpclr(s, value, 0, 32, &s->lev0);
302
+ break;
303
+ case GPCLR1:
304
+ gpclr(s, value, 32, 22, &s->lev1);
305
+ break;
306
+ case GPLEV0:
307
+ case GPLEV1:
308
+ /* Read Only */
309
+ break;
310
+ case GPEDS0:
311
+ case GPEDS1:
312
+ case GPREN0:
313
+ case GPREN1:
314
+ case GPFEN0:
315
+ case GPFEN1:
316
+ case GPHEN0:
317
+ case GPHEN1:
318
+ case GPLEN0:
319
+ case GPLEN1:
320
+ case GPAREN0:
321
+ case GPAREN1:
322
+ case GPAFEN0:
323
+ case GPAFEN1:
324
+ case GPPUD:
325
+ case GPPUDCLK0:
326
+ case GPPUDCLK1:
327
+ /* Not implemented */
328
+ break;
329
+ default:
330
+ goto err_out;
331
+ }
332
+ return;
333
+
334
+err_out:
335
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad offset %"HWADDR_PRIx"\n",
336
+ __func__, offset);
337
+}
338
+
339
+static void bcm2835_gpio_reset(DeviceState *dev)
340
+{
341
+ BCM2835GpioState *s = BCM2835_GPIO(dev);
342
+
343
+ int i;
344
+ for (i = 0; i < 6; i++) {
345
+ gpfsel_set(s, i, 0);
346
+ }
347
+
348
+ s->sd_fsel = 0;
349
+
350
+ /* SDHCI is selected by default */
351
+ sdbus_reparent_card(&s->sdbus, s->sdbus_sdhci);
352
+
353
+ s->lev0 = 0;
354
+ s->lev1 = 0;
355
+}
356
+
357
+static const MemoryRegionOps bcm2835_gpio_ops = {
358
+ .read = bcm2835_gpio_read,
359
+ .write = bcm2835_gpio_write,
360
+ .endianness = DEVICE_NATIVE_ENDIAN,
361
+};
132
+};
362
+
133
+
363
+static const VMStateDescription vmstate_bcm2835_gpio = {
134
/* Translate a NEON data processing instruction. Return nonzero if the
364
+ .name = "bcm2835_gpio",
135
instruction is invalid.
365
+ .version_id = 1,
136
We process data in a mixture of 32-bit and 64-bit chunks.
366
+ .minimum_version_id = 1,
137
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
367
+ .fields = (VMStateField[]) {
138
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
368
+ VMSTATE_UINT8_ARRAY(fsel, BCM2835GpioState, 54),
139
u ? &mls_op[size] : &mla_op[size]);
369
+ VMSTATE_UINT32(lev0, BCM2835GpioState),
140
return 0;
370
+ VMSTATE_UINT32(lev1, BCM2835GpioState),
141
+
371
+ VMSTATE_UINT8(sd_fsel, BCM2835GpioState),
142
+ case NEON_3R_VTST_VCEQ:
372
+ VMSTATE_END_OF_LIST()
143
+ if (u) { /* VCEQ */
373
+ }
144
+ tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
374
+};
145
+ vec_size, vec_size);
375
+
146
+ } else { /* VTST */
376
+static void bcm2835_gpio_init(Object *obj)
147
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
377
+{
148
+ vec_size, vec_size, &cmtst_op[size]);
378
+ BCM2835GpioState *s = BCM2835_GPIO(obj);
149
+ }
379
+ DeviceState *dev = DEVICE(obj);
150
+ return 0;
380
+ SysBusDevice *sbd = SYS_BUS_DEVICE(obj);
151
+
381
+
152
+ case NEON_3R_VCGT:
382
+ qbus_create_inplace(&s->sdbus, sizeof(s->sdbus),
153
+ tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
383
+ TYPE_SD_BUS, DEVICE(s), "sd-bus");
154
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
384
+
155
+ return 0;
385
+ memory_region_init_io(&s->iomem, obj,
156
+
386
+ &bcm2835_gpio_ops, s, "bcm2835_gpio", 0x1000);
157
+ case NEON_3R_VCGE:
387
+ sysbus_init_mmio(sbd, &s->iomem);
158
+ tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
388
+ qdev_init_gpio_out(dev, s->out, 54);
159
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
389
+}
160
+ return 0;
390
+
161
}
391
+static void bcm2835_gpio_realize(DeviceState *dev, Error **errp)
162
392
+{
163
if (size == 3) {
393
+ BCM2835GpioState *s = BCM2835_GPIO(dev);
164
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
394
+ Object *obj;
165
case NEON_3R_VQSUB:
395
+ Error *err = NULL;
166
GEN_NEON_INTEGER_OP_ENV(qsub);
396
+
167
break;
397
+ obj = object_property_get_link(OBJECT(dev), "sdbus-sdhci", &err);
168
- case NEON_3R_VCGT:
398
+ if (obj == NULL) {
169
- GEN_NEON_INTEGER_OP(cgt);
399
+ error_setg(errp, "%s: required sdhci link not found: %s",
170
- break;
400
+ __func__, error_get_pretty(err));
171
- case NEON_3R_VCGE:
401
+ return;
172
- GEN_NEON_INTEGER_OP(cge);
402
+ }
173
- break;
403
+ s->sdbus_sdhci = SD_BUS(obj);
174
case NEON_3R_VSHL:
404
+
175
GEN_NEON_INTEGER_OP(shl);
405
+ obj = object_property_get_link(OBJECT(dev), "sdbus-sdhost", &err);
176
break;
406
+ if (obj == NULL) {
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
407
+ error_setg(errp, "%s: required sdhost link not found: %s",
178
tmp2 = neon_load_reg(rd, pass);
408
+ __func__, error_get_pretty(err));
179
gen_neon_add(size, tmp, tmp2);
409
+ return;
180
break;
410
+ }
181
- case NEON_3R_VTST_VCEQ:
411
+ s->sdbus_sdhost = SD_BUS(obj);
182
- if (!u) { /* VTST */
412
+}
183
- switch (size) {
413
+
184
- case 0: gen_helper_neon_tst_u8(tmp, tmp, tmp2); break;
414
+static void bcm2835_gpio_class_init(ObjectClass *klass, void *data)
185
- case 1: gen_helper_neon_tst_u16(tmp, tmp, tmp2); break;
415
+{
186
- case 2: gen_helper_neon_tst_u32(tmp, tmp, tmp2); break;
416
+ DeviceClass *dc = DEVICE_CLASS(klass);
187
- default: abort();
417
+
188
- }
418
+ dc->vmsd = &vmstate_bcm2835_gpio;
189
- } else { /* VCEQ */
419
+ dc->realize = &bcm2835_gpio_realize;
190
- switch (size) {
420
+ dc->reset = &bcm2835_gpio_reset;
191
- case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
421
+}
192
- case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
422
+
193
- case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
423
+static const TypeInfo bcm2835_gpio_info = {
194
- default: abort();
424
+ .name = TYPE_BCM2835_GPIO,
195
- }
425
+ .parent = TYPE_SYS_BUS_DEVICE,
196
- }
426
+ .instance_size = sizeof(BCM2835GpioState),
197
- break;
427
+ .instance_init = bcm2835_gpio_init,
198
case NEON_3R_VMUL:
428
+ .class_init = bcm2835_gpio_class_init,
199
/* VMUL.P8; other cases already eliminated. */
429
+};
200
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
430
+
431
+static void bcm2835_gpio_register_types(void)
432
+{
433
+ type_register_static(&bcm2835_gpio_info);
434
+}
435
+
436
+type_init(bcm2835_gpio_register_types)
437
--
201
--
438
2.7.4
202
2.19.1
439
203
440
204
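Two reading aids for the interleaved hunks above, given as plain-C sketches rather than QEMU code.

The SD routing logic in gpfsel_set() reparents the card between the two controllers once all six SD pads (GPIOs 48-53) agree on a function: function 0 selects the SDHCI controller, ALT0 (function 4) selects the SDHost controller. A hypothetical helper expressing that test:

    #include <stdint.h>

    static int sd_pads_all_at(const uint8_t fsel[54], int func)
    {
        for (int pin = 48; pin <= 53; pin++) {   /* SD_CLK, SD_CMD, SD_DATA0-3 */
            if (fsel[pin] != func) {
                return 0;
            }
        }
        return 1;
    }

The gen_cmtst_* helpers implement the VTST comparison: a lane becomes all ones when the two inputs share any set bit, and zero otherwise. Reference semantics for one 32-bit lane (illustrative only):

    #include <stdint.h>

    static uint32_t cmtst_u32(uint32_t a, uint32_t b)
    {
        /* and, compare-not-equal-to-zero, negate: 0 or all-ones */
        return (a & b) != 0 ? UINT32_MAX : 0;
    }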
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
5
[PMM: added parens in ?: expression]
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
---
9
target/arm/translate.c | 81 ++++++++++++++----------------------------
10
1 file changed, 26 insertions(+), 55 deletions(-)
11
12
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/translate.c
15
+++ b/target/arm/translate.c
16
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
17
tcg_temp_free_i32(tmp);
18
}
19
20
-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
21
-{
22
- TCGv_i32 tmp = tcg_temp_new_i32();
23
- if (shift)
24
- tcg_gen_shri_i32(var, var, shift);
25
- tcg_gen_ext8u_i32(var, var);
26
- tcg_gen_shli_i32(tmp, var, 8);
27
- tcg_gen_or_i32(var, var, tmp);
28
- tcg_gen_shli_i32(tmp, var, 16);
29
- tcg_gen_or_i32(var, var, tmp);
30
- tcg_temp_free_i32(tmp);
31
-}
32
-
33
static void gen_neon_dup_low16(TCGv_i32 var)
34
{
35
TCGv_i32 tmp = tcg_temp_new_i32();
36
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
37
tcg_temp_free_i32(tmp);
38
}
39
40
-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
41
-{
42
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
43
- TCGv_i32 tmp = tcg_temp_new_i32();
44
- switch (size) {
45
- case 0:
46
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
47
- gen_neon_dup_u8(tmp, 0);
48
- break;
49
- case 1:
50
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
51
- gen_neon_dup_low16(tmp);
52
- break;
53
- case 2:
54
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
55
- break;
56
- default: /* Avoid compiler warnings. */
57
- abort();
58
- }
59
- return tmp;
60
-}
61
-
62
static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
63
uint32_t dp)
64
{
65
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
66
int load;
67
int shift;
68
int n;
69
+ int vec_size;
70
TCGv_i32 addr;
71
TCGv_i32 tmp;
72
TCGv_i32 tmp2;
73
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
74
}
75
addr = tcg_temp_new_i32();
76
load_reg_var(s, addr, rn);
77
- if (nregs == 1) {
78
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
79
- tmp = gen_load_and_replicate(s, addr, size);
80
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
81
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
82
- if (insn & (1 << 5)) {
83
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
84
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
85
- }
86
- tcg_temp_free_i32(tmp);
87
- } else {
88
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
89
- stride = (insn & (1 << 5)) ? 2 : 1;
90
- for (reg = 0; reg < nregs; reg++) {
91
- tmp = gen_load_and_replicate(s, addr, size);
92
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
93
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
94
- tcg_temp_free_i32(tmp);
95
- tcg_gen_addi_i32(addr, addr, 1 << size);
96
- rd += stride;
97
+
98
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
99
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
100
+ */
101
+ stride = (insn & (1 << 5)) ? 2 : 1;
102
+ vec_size = nregs == 1 ? stride * 8 : 8;
103
+
104
+ tmp = tcg_temp_new_i32();
105
+ for (reg = 0; reg < nregs; reg++) {
106
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
107
+ s->be_data | size);
108
+ if ((rd & 1) && vec_size == 16) {
109
+ /* We cannot write 16 bytes at once because the
110
+ * destination is unaligned.
111
+ */
112
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
113
+ 8, 8, tmp);
114
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
115
+ neon_reg_offset(rd, 0), 8, 8);
116
+ } else {
117
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
118
+ vec_size, vec_size, tmp);
119
}
120
+ tcg_gen_addi_i32(addr, addr, 1 << size);
121
+ rd += stride;
122
}
123
+ tcg_temp_free_i32(tmp);
124
tcg_temp_free_i32(addr);
125
stride = (1 << size) * nregs;
126
} else {
127
--
128
2.19.1
129
130
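As a reading aid for the load-to-all-lanes rewrite above: the gvec dup that replaces gen_load_and_replicate() writes one loaded element replicated across the destination register. A plain-C sketch of that replication (dup_to_dreg is a made-up name, not a QEMU function; lanes are shown in the guest's little-endian byte order):

    #include <stdint.h>

    static void dup_to_dreg(uint8_t dreg[8], uint32_t element, int size)
    {
        int esize = 1 << size;              /* size: 0=byte, 1=half, 2=word */
        for (int i = 0; i < 8; i++) {
            dreg[i] = (element >> (8 * (i % esize))) & 0xff;
        }
    }

For the double-Dreg forms the same value is then copied into the second register, which is what the unaligned-destination special case above does with a dup into rd followed by a vector move into rd + 1.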
New patch
1
1
From: Richard Henderson <richard.henderson@linaro.org>
2
3
Instead of shifts and masks, use direct loads and stores from the neon
4
register file. Mirror the iteration structure of the ARM pseudocode
5
more closely. Correct the parameters of the VLD2 A2 insn.
6
7
Note that this includes a bugfix for handling of the insn
8
"VLD2 (multiple 2-element structures)" -- we were using an
9
incorrect stride value.
10
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
target/arm/translate.c | 170 ++++++++++++++++++-----------------------
17
1 file changed, 74 insertions(+), 96 deletions(-)
18
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
22
+++ b/target/arm/translate.c
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
24
return tmp;
25
}
26
27
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
28
+{
29
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
30
+
31
+ switch (mop) {
32
+ case MO_UB:
33
+ tcg_gen_ld8u_i64(var, cpu_env, offset);
34
+ break;
35
+ case MO_UW:
36
+ tcg_gen_ld16u_i64(var, cpu_env, offset);
37
+ break;
38
+ case MO_UL:
39
+ tcg_gen_ld32u_i64(var, cpu_env, offset);
40
+ break;
41
+ case MO_Q:
42
+ tcg_gen_ld_i64(var, cpu_env, offset);
43
+ break;
44
+ default:
45
+ g_assert_not_reached();
46
+ }
47
+}
48
+
49
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
50
{
51
tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
52
tcg_temp_free_i32(var);
53
}
54
55
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
56
+{
57
+ long offset = neon_element_offset(reg, ele, size);
58
+
59
+ switch (size) {
60
+ case MO_8:
61
+ tcg_gen_st8_i64(var, cpu_env, offset);
62
+ break;
63
+ case MO_16:
64
+ tcg_gen_st16_i64(var, cpu_env, offset);
65
+ break;
66
+ case MO_32:
67
+ tcg_gen_st32_i64(var, cpu_env, offset);
68
+ break;
69
+ case MO_64:
70
+ tcg_gen_st_i64(var, cpu_env, offset);
71
+ break;
72
+ default:
73
+ g_assert_not_reached();
74
+ }
75
+}
76
+
77
static inline void neon_load_reg64(TCGv_i64 var, int reg)
78
{
79
tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
80
@@ -XXX,XX +XXX,XX @@ static struct {
81
int interleave;
82
int spacing;
83
} const neon_ls_element_type[11] = {
84
- {4, 4, 1},
85
- {4, 4, 2},
86
+ {1, 4, 1},
87
+ {1, 4, 2},
88
{4, 1, 1},
89
- {4, 2, 1},
90
- {3, 3, 1},
91
- {3, 3, 2},
92
+ {2, 2, 2},
93
+ {1, 3, 1},
94
+ {1, 3, 2},
95
{3, 1, 1},
96
{1, 1, 1},
97
- {2, 2, 1},
98
- {2, 2, 2},
99
+ {1, 2, 1},
100
+ {1, 2, 2},
101
{2, 1, 1}
102
};
103
104
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
105
int shift;
106
int n;
107
int vec_size;
108
+ int mmu_idx;
109
+ TCGMemOp endian;
110
TCGv_i32 addr;
111
TCGv_i32 tmp;
112
TCGv_i32 tmp2;
113
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
114
rn = (insn >> 16) & 0xf;
115
rm = insn & 0xf;
116
load = (insn & (1 << 21)) != 0;
117
+ endian = s->be_data;
118
+ mmu_idx = get_mem_index(s);
119
if ((insn & (1 << 23)) == 0) {
120
/* Load store all elements. */
121
op = (insn >> 8) & 0xf;
122
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
123
nregs = neon_ls_element_type[op].nregs;
124
interleave = neon_ls_element_type[op].interleave;
125
spacing = neon_ls_element_type[op].spacing;
126
- if (size == 3 && (interleave | spacing) != 1)
127
+ if (size == 3 && (interleave | spacing) != 1) {
128
return 1;
129
+ }
130
+ tmp64 = tcg_temp_new_i64();
131
addr = tcg_temp_new_i32();
132
+ tmp2 = tcg_const_i32(1 << size);
133
load_reg_var(s, addr, rn);
134
- stride = (1 << size) * interleave;
135
for (reg = 0; reg < nregs; reg++) {
136
- if (interleave > 2 || (interleave == 2 && nregs == 2)) {
137
- load_reg_var(s, addr, rn);
138
- tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
139
- } else if (interleave == 2 && nregs == 4 && reg == 2) {
140
- load_reg_var(s, addr, rn);
141
- tcg_gen_addi_i32(addr, addr, 1 << size);
142
- }
143
- if (size == 3) {
144
- tmp64 = tcg_temp_new_i64();
145
- if (load) {
146
- gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
147
- neon_store_reg64(tmp64, rd);
148
- } else {
149
- neon_load_reg64(tmp64, rd);
150
- gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
151
- }
152
- tcg_temp_free_i64(tmp64);
153
- tcg_gen_addi_i32(addr, addr, stride);
154
- } else {
155
- for (pass = 0; pass < 2; pass++) {
156
- if (size == 2) {
157
- if (load) {
158
- tmp = tcg_temp_new_i32();
159
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
160
- neon_store_reg(rd, pass, tmp);
161
- } else {
162
- tmp = neon_load_reg(rd, pass);
163
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
164
- tcg_temp_free_i32(tmp);
165
- }
166
- tcg_gen_addi_i32(addr, addr, stride);
167
- } else if (size == 1) {
168
- if (load) {
169
- tmp = tcg_temp_new_i32();
170
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
171
- tcg_gen_addi_i32(addr, addr, stride);
172
- tmp2 = tcg_temp_new_i32();
173
- gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
174
- tcg_gen_addi_i32(addr, addr, stride);
175
- tcg_gen_shli_i32(tmp2, tmp2, 16);
176
- tcg_gen_or_i32(tmp, tmp, tmp2);
177
- tcg_temp_free_i32(tmp2);
178
- neon_store_reg(rd, pass, tmp);
179
- } else {
180
- tmp = neon_load_reg(rd, pass);
181
- tmp2 = tcg_temp_new_i32();
182
- tcg_gen_shri_i32(tmp2, tmp, 16);
183
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
184
- tcg_temp_free_i32(tmp);
185
- tcg_gen_addi_i32(addr, addr, stride);
186
- gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
187
- tcg_temp_free_i32(tmp2);
188
- tcg_gen_addi_i32(addr, addr, stride);
189
- }
190
- } else /* size == 0 */ {
191
- if (load) {
192
- tmp2 = NULL;
193
- for (n = 0; n < 4; n++) {
194
- tmp = tcg_temp_new_i32();
195
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
196
- tcg_gen_addi_i32(addr, addr, stride);
197
- if (n == 0) {
198
- tmp2 = tmp;
199
- } else {
200
- tcg_gen_shli_i32(tmp, tmp, n * 8);
201
- tcg_gen_or_i32(tmp2, tmp2, tmp);
202
- tcg_temp_free_i32(tmp);
203
- }
204
- }
205
- neon_store_reg(rd, pass, tmp2);
206
- } else {
207
- tmp2 = neon_load_reg(rd, pass);
208
- for (n = 0; n < 4; n++) {
209
- tmp = tcg_temp_new_i32();
210
- if (n == 0) {
211
- tcg_gen_mov_i32(tmp, tmp2);
212
- } else {
213
- tcg_gen_shri_i32(tmp, tmp2, n * 8);
214
- }
215
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
216
- tcg_temp_free_i32(tmp);
217
- tcg_gen_addi_i32(addr, addr, stride);
218
- }
219
- tcg_temp_free_i32(tmp2);
220
- }
221
+ for (n = 0; n < 8 >> size; n++) {
222
+ int xs;
223
+ for (xs = 0; xs < interleave; xs++) {
224
+ int tt = rd + reg + spacing * xs;
225
+
226
+ if (load) {
227
+ gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
228
+ neon_store_element64(tt, n, size, tmp64);
229
+ } else {
230
+ neon_load_element64(tmp64, tt, n, size);
231
+ gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
232
}
233
+ tcg_gen_add_i32(addr, addr, tmp2);
234
}
235
}
236
- rd += spacing;
237
}
238
tcg_temp_free_i32(addr);
239
- stride = nregs * 8;
240
+ tcg_temp_free_i32(tmp2);
241
+ tcg_temp_free_i64(tmp64);
242
+ stride = nregs * interleave * 8;
243
} else {
244
size = (insn >> 10) & 3;
245
if (size == 3) {
246
--
247
2.19.1
248
249
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
For a sequence of loads or stores from a single register,
4
little-endian operations can be promoted to an 8-byte op.
5
This can reduce the number of operations by a factor of 8.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
target/arm/translate.c | 10 ++++++++++
14
1 file changed, 10 insertions(+)
15
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate.c
19
+++ b/target/arm/translate.c
20
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
21
if (size == 3 && (interleave | spacing) != 1) {
22
return 1;
23
}
24
+ /* For our purposes, bytes are always little-endian. */
25
+ if (size == 0) {
26
+ endian = MO_LE;
27
+ }
28
+ /* Consecutive little-endian elements from a single register
29
+ * can be promoted to a larger little-endian operation.
30
+ */
31
+ if (interleave == 1 && endian == MO_LE) {
32
+ size = 3;
33
+ }
34
tmp64 = tcg_temp_new_i64();
35
addr = tcg_temp_new_i32();
36
tmp2 = tcg_const_i32(1 << size);
37
--
38
2.19.1
39
40
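A condensed view of the promotion rule introduced above (sketch only; promote_element_size is illustrative, not part of the patch):

    #include <stdbool.h>

    static int promote_element_size(int size, int interleave, bool big_endian)
    {
        if (size == 0) {            /* bytes are endianness-agnostic */
            big_endian = false;
        }
        if (interleave == 1 && !big_endian) {
            size = 3;               /* one 8-byte access per D register */
        }
        return size;
    }

For example, a VLD1.8 of two D registers goes from sixteen byte-sized accesses to two 8-byte accesses, which is the factor of 8 mentioned in the commit message.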
1
Create a proper QOM object for the armv7m container, which
1
From: Richard Henderson <richard.henderson@linaro.org>
2
holds the CPU, the NVIC and the bitband regions.
3
2
3
Instead of shifts and masks, use direct loads and stores from
4
the neon register file.
5
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181011205206.3552-21-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
6
Message-id: 1487604965-23220-4-git-send-email-peter.maydell@linaro.org
7
---
10
---
8
include/hw/arm/armv7m.h | 51 ++++++++++++++++++
11
target/arm/translate.c | 92 +++++++++++++++++++++++-------------------
9
hw/arm/armv7m.c | 139 +++++++++++++++++++++++++++++++++++++++++++-----
12
1 file changed, 50 insertions(+), 42 deletions(-)
10
2 files changed, 178 insertions(+), 12 deletions(-)
11
create mode 100644 include/hw/arm/armv7m.h
12
13
13
diff --git a/include/hw/arm/armv7m.h b/include/hw/arm/armv7m.h
14
diff --git a/target/arm/translate.c b/target/arm/translate.c
14
new file mode 100644
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX
16
--- a/target/arm/translate.c
16
--- /dev/null
17
+++ b/target/arm/translate.c
17
+++ b/include/hw/arm/armv7m.h
18
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
18
@@ -XXX,XX +XXX,XX @@
19
return tmp;
19
+/*
20
}
20
+ * ARMv7M CPU object
21
21
+ *
22
+static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
22
+ * Copyright (c) 2017 Linaro Ltd
23
+{
23
+ * Written by Peter Maydell <peter.maydell@linaro.org>
24
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
24
+ *
25
+ * This code is licensed under the GPL version 2 or later.
26
+ */
27
+
25
+
28
+#ifndef HW_ARM_ARMV7M_H
26
+ switch (mop) {
29
+#define HW_ARM_ARMV7M_H
27
+ case MO_UB:
30
+
28
+ tcg_gen_ld8u_i32(var, cpu_env, offset);
31
+#include "hw/sysbus.h"
29
+ break;
32
+#include "hw/arm/armv7m_nvic.h"
30
+ case MO_UW:
33
+
31
+ tcg_gen_ld16u_i32(var, cpu_env, offset);
34
+#define TYPE_BITBAND "ARM,bitband-memory"
32
+ break;
35
+#define BITBAND(obj) OBJECT_CHECK(BitBandState, (obj), TYPE_BITBAND)
33
+ case MO_UL:
36
+
34
+ tcg_gen_ld_i32(var, cpu_env, offset);
37
+typedef struct {
35
+ break;
38
+ /*< private >*/
36
+ default:
39
+ SysBusDevice parent_obj;
37
+ g_assert_not_reached();
40
+ /*< public >*/
41
+
42
+ MemoryRegion iomem;
43
+ uint32_t base;
44
+} BitBandState;
45
+
46
+#define TYPE_ARMV7M "armv7m"
47
+#define ARMV7M(obj) OBJECT_CHECK(ARMv7MState, (obj), TYPE_ARMV7M)
48
+
49
+#define ARMV7M_NUM_BITBANDS 2
50
+
51
+/* ARMv7M container object.
52
+ * + Unnamed GPIO input lines: external IRQ lines for the NVIC
53
+ * + Named GPIO output SYSRESETREQ: signalled for guest AIRCR.SYSRESETREQ
54
+ * + Property "cpu-model": CPU model to instantiate
55
+ * + Property "num-irq": number of external IRQ lines
56
+ */
57
+typedef struct ARMv7MState {
58
+ /*< private >*/
59
+ SysBusDevice parent_obj;
60
+ /*< public >*/
61
+ NVICState nvic;
62
+ BitBandState bitband[ARMV7M_NUM_BITBANDS];
63
+ ARMCPU *cpu;
64
+
65
+ /* Properties */
66
+ char *cpu_model;
67
+} ARMv7MState;
68
+
69
+#endif
70
diff --git a/hw/arm/armv7m.c b/hw/arm/armv7m.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/hw/arm/armv7m.c
73
+++ b/hw/arm/armv7m.c
74
@@ -XXX,XX +XXX,XX @@
75
*/
76
77
#include "qemu/osdep.h"
78
+#include "hw/arm/armv7m.h"
79
#include "qapi/error.h"
80
#include "qemu-common.h"
81
#include "cpu.h"
82
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps bitband_ops = {
83
.endianness = DEVICE_NATIVE_ENDIAN,
84
};
85
86
-#define TYPE_BITBAND "ARM,bitband-memory"
87
-#define BITBAND(obj) OBJECT_CHECK(BitBandState, (obj), TYPE_BITBAND)
88
-
89
-typedef struct {
90
- /*< private >*/
91
- SysBusDevice parent_obj;
92
- /*< public >*/
93
-
94
- MemoryRegion iomem;
95
- uint32_t base;
96
-} BitBandState;
97
-
98
static void bitband_init(Object *obj)
99
{
100
BitBandState *s = BITBAND(obj);
101
@@ -XXX,XX +XXX,XX @@ static void armv7m_bitband_init(void)
102
103
/* Board init. */
104
105
+static const hwaddr bitband_input_addr[ARMV7M_NUM_BITBANDS] = {
106
+ 0x20000000, 0x40000000
107
+};
108
+
109
+static const hwaddr bitband_output_addr[ARMV7M_NUM_BITBANDS] = {
110
+ 0x22000000, 0x42000000
111
+};
112
+
113
+static void armv7m_instance_init(Object *obj)
114
+{
115
+ ARMv7MState *s = ARMV7M(obj);
116
+ int i;
117
+
118
+ /* Can't init the cpu here, we don't yet know which model to use */
119
+
120
+ object_initialize(&s->nvic, sizeof(s->nvic), "armv7m_nvic");
121
+ qdev_set_parent_bus(DEVICE(&s->nvic), sysbus_get_default());
122
+ object_property_add_alias(obj, "num-irq",
123
+ OBJECT(&s->nvic), "num-irq", &error_abort);
124
+
125
+ for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
126
+ object_initialize(&s->bitband[i], sizeof(s->bitband[i]), TYPE_BITBAND);
127
+ qdev_set_parent_bus(DEVICE(&s->bitband[i]), sysbus_get_default());
128
+ }
38
+ }
129
+}
39
+}
130
+
40
+
131
+static void armv7m_realize(DeviceState *dev, Error **errp)
41
static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
42
{
43
long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
44
@@ -XXX,XX +XXX,XX @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
45
tcg_temp_free_i32(var);
46
}
47
48
+static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
132
+{
49
+{
133
+ ARMv7MState *s = ARMV7M(dev);
50
+ long offset = neon_element_offset(reg, ele, size);
134
+ Error *err = NULL;
135
+ int i;
136
+ char **cpustr;
137
+ ObjectClass *oc;
138
+ const char *typename;
139
+ CPUClass *cc;
140
+
51
+
141
+ cpustr = g_strsplit(s->cpu_model, ",", 2);
52
+ switch (size) {
142
+
53
+ case MO_8:
143
+ oc = cpu_class_by_name(TYPE_ARM_CPU, cpustr[0]);
54
+ tcg_gen_st8_i32(var, cpu_env, offset);
144
+ if (!oc) {
55
+ break;
145
+ error_setg(errp, "Unknown CPU model %s", cpustr[0]);
56
+ case MO_16:
146
+ g_strfreev(cpustr);
57
+ tcg_gen_st16_i32(var, cpu_env, offset);
147
+ return;
58
+ break;
148
+ }
59
+ case MO_32:
149
+
60
+ tcg_gen_st_i32(var, cpu_env, offset);
150
+ cc = CPU_CLASS(oc);
61
+ break;
151
+ typename = object_class_get_name(oc);
62
+ default:
152
+ cc->parse_features(typename, cpustr[1], &err);
63
+ g_assert_not_reached();
153
+ g_strfreev(cpustr);
154
+ if (err) {
155
+ error_propagate(errp, err);
156
+ return;
157
+ }
158
+
159
+ s->cpu = ARM_CPU(object_new(typename));
160
+ if (!s->cpu) {
161
+ error_setg(errp, "Unknown CPU model %s", s->cpu_model);
162
+ return;
163
+ }
164
+
165
+ object_property_set_bool(OBJECT(s->cpu), true, "realized", &err);
166
+ if (err != NULL) {
167
+ error_propagate(errp, err);
168
+ return;
169
+ }
170
+
171
+ /* Note that we must realize the NVIC after the CPU */
172
+ object_property_set_bool(OBJECT(&s->nvic), true, "realized", &err);
173
+ if (err != NULL) {
174
+ error_propagate(errp, err);
175
+ return;
176
+ }
177
+
178
+ /* Alias the NVIC's input and output GPIOs as our own so the board
179
+ * code can wire them up. (We do this in realize because the
180
+ * NVIC doesn't create the input GPIO array until realize.)
181
+ */
182
+ qdev_pass_gpios(DEVICE(&s->nvic), dev, NULL);
183
+ qdev_pass_gpios(DEVICE(&s->nvic), dev, "SYSRESETREQ");
184
+
185
+ /* Wire the NVIC up to the CPU */
186
+ sysbus_connect_irq(SYS_BUS_DEVICE(&s->nvic), 0,
187
+ qdev_get_gpio_in(DEVICE(s->cpu), ARM_CPU_IRQ));
188
+ s->cpu->env.nvic = &s->nvic;
189
+
190
+ for (i = 0; i < ARRAY_SIZE(s->bitband); i++) {
191
+ Object *obj = OBJECT(&s->bitband[i]);
192
+ SysBusDevice *sbd = SYS_BUS_DEVICE(&s->bitband[i]);
193
+
194
+ object_property_set_int(obj, bitband_input_addr[i], "base", &err);
195
+ if (err != NULL) {
196
+ error_propagate(errp, err);
197
+ return;
198
+ }
199
+ object_property_set_bool(obj, true, "realized", &err);
200
+ if (err != NULL) {
201
+ error_propagate(errp, err);
202
+ return;
203
+ }
204
+
205
+ sysbus_mmio_map(sbd, 0, bitband_output_addr[i]);
206
+ }
64
+ }
207
+}
65
+}
208
+
66
+
209
+static Property armv7m_properties[] = {
67
static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
210
+ DEFINE_PROP_STRING("cpu-model", ARMv7MState, cpu_model),
211
+ DEFINE_PROP_END_OF_LIST(),
212
+};
213
+
214
+static void armv7m_class_init(ObjectClass *klass, void *data)
215
+{
216
+ DeviceClass *dc = DEVICE_CLASS(klass);
217
+
218
+ dc->realize = armv7m_realize;
219
+ dc->props = armv7m_properties;
220
+}
221
+
222
+static const TypeInfo armv7m_info = {
223
+ .name = TYPE_ARMV7M,
224
+ .parent = TYPE_SYS_BUS_DEVICE,
225
+ .instance_size = sizeof(ARMv7MState),
226
+ .instance_init = armv7m_instance_init,
227
+ .class_init = armv7m_class_init,
228
+};
229
+
230
static void armv7m_reset(void *opaque)
231
{
68
{
232
ARMCPU *cpu = opaque;
69
long offset = neon_element_offset(reg, ele, size);
233
@@ -XXX,XX +XXX,XX @@ static const TypeInfo bitband_info = {
70
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
234
static void armv7m_register_types(void)
71
int stride;
235
{
72
int size;
236
type_register_static(&bitband_info);
73
int reg;
237
+ type_register_static(&armv7m_info);
74
- int pass;
238
}
75
int load;
239
76
- int shift;
240
type_init(armv7m_register_types)
77
int n;
78
int vec_size;
79
int mmu_idx;
80
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
81
} else {
82
/* Single element. */
83
int idx = (insn >> 4) & 0xf;
84
- pass = (insn >> 7) & 1;
85
+ int reg_idx;
86
switch (size) {
87
case 0:
88
- shift = ((insn >> 5) & 3) * 8;
89
+ reg_idx = (insn >> 5) & 7;
90
stride = 1;
91
break;
92
case 1:
93
- shift = ((insn >> 6) & 1) * 16;
94
+ reg_idx = (insn >> 6) & 3;
95
stride = (insn & (1 << 5)) ? 2 : 1;
96
break;
97
case 2:
98
- shift = 0;
99
+ reg_idx = (insn >> 7) & 1;
100
stride = (insn & (1 << 6)) ? 2 : 1;
101
break;
102
default:
103
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
104
*/
105
return 1;
106
}
107
+ tmp = tcg_temp_new_i32();
108
addr = tcg_temp_new_i32();
109
load_reg_var(s, addr, rn);
110
for (reg = 0; reg < nregs; reg++) {
111
if (load) {
112
- tmp = tcg_temp_new_i32();
113
- switch (size) {
114
- case 0:
115
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
116
- break;
117
- case 1:
118
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
119
- break;
120
- case 2:
121
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
122
- break;
123
- default: /* Avoid compiler warnings. */
124
- abort();
125
- }
126
- if (size != 2) {
127
- tmp2 = neon_load_reg(rd, pass);
128
- tcg_gen_deposit_i32(tmp, tmp2, tmp,
129
- shift, size ? 16 : 8);
130
- tcg_temp_free_i32(tmp2);
131
- }
132
- neon_store_reg(rd, pass, tmp);
133
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
134
+ s->be_data | size);
135
+ neon_store_element(rd, reg_idx, size, tmp);
136
} else { /* Store */
137
- tmp = neon_load_reg(rd, pass);
138
- if (shift)
139
- tcg_gen_shri_i32(tmp, tmp, shift);
140
- switch (size) {
141
- case 0:
142
- gen_aa32_st8(s, tmp, addr, get_mem_index(s));
143
- break;
144
- case 1:
145
- gen_aa32_st16(s, tmp, addr, get_mem_index(s));
146
- break;
147
- case 2:
148
- gen_aa32_st32(s, tmp, addr, get_mem_index(s));
149
- break;
150
- }
151
- tcg_temp_free_i32(tmp);
152
+ neon_load_element(tmp, rd, reg_idx, size);
153
+ gen_aa32_st_i32(s, tmp, addr, get_mem_index(s),
154
+ s->be_data | size);
155
}
156
rd += stride;
157
tcg_gen_addi_i32(addr, addr, 1 << size);
158
}
159
tcg_temp_free_i32(addr);
160
+ tcg_temp_free_i32(tmp);
161
stride = nregs * (1 << size);
162
}
163
}
241
--
164
--
242
2.7.4
165
2.19.1
243
166
244
167
1
Switch the stm32f205 SoC to create the armv7m object directly
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
rather than via the armv7m_init() wrapper. This fits better
3
with the SoC model's very QOMified design.
4
2
5
In particular this means we can push loading the guest image
3
Announce the availability of the various priority queues.
6
out to the top level board code where it belongs, rather
4
This fixes an issue where guest kernels would fail to
7
than the SoC object having a QOM property for the filename
5
configure secondary queues due to improper feature bits.
8
to load.
9
6
7
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
8
Message-id: 20181017213932.19973-2-edgar.iglesias@gmail.com
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
13
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
14
Message-id: 1487604965-23220-11-git-send-email-peter.maydell@linaro.org
15
---
11
---
16
include/hw/arm/stm32f205_soc.h | 4 +++-
12
hw/net/cadence_gem.c | 8 +++++++-
17
hw/arm/netduino2.c | 7 ++++---
13
1 file changed, 7 insertions(+), 1 deletion(-)
18
hw/arm/stm32f205_soc.c | 16 +++++++++++++---
19
3 files changed, 20 insertions(+), 7 deletions(-)
20
14
21
diff --git a/include/hw/arm/stm32f205_soc.h b/include/hw/arm/stm32f205_soc.h
15
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
22
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/arm/stm32f205_soc.h
17
--- a/hw/net/cadence_gem.c
24
+++ b/include/hw/arm/stm32f205_soc.h
18
+++ b/hw/net/cadence_gem.c
25
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
26
#include "hw/adc/stm32f2xx_adc.h"
20
int i;
27
#include "hw/or-irq.h"
21
CadenceGEMState *s = CADENCE_GEM(d);
28
#include "hw/ssi/stm32f2xx_spi.h"
22
const uint8_t *a;
29
+#include "hw/arm/armv7m.h"
23
+ uint32_t queues_mask = 0;
30
24
31
#define TYPE_STM32F205_SOC "stm32f205-soc"
25
DB_PRINT("\n");
32
#define STM32F205_SOC(obj) \
26
33
@@ -XXX,XX +XXX,XX @@ typedef struct STM32F205State {
27
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
34
SysBusDevice parent_obj;
28
s->regs[GEM_DESCONF] = 0x02500111;
35
/*< public >*/
29
s->regs[GEM_DESCONF2] = 0x2ab13fff;
36
30
s->regs[GEM_DESCONF5] = 0x002f2045;
37
- char *kernel_filename;
31
- s->regs[GEM_DESCONF6] = 0x00000200;
38
char *cpu_model;
32
+ s->regs[GEM_DESCONF6] = 0x0;
39
40
+ ARMv7MState armv7m;
41
+
33
+
42
STM32F2XXSyscfgState syscfg;
34
+ if (s->num_priority_queues > 1) {
43
STM32F2XXUsartState usart[STM_NUM_USARTS];
35
+ queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
44
STM32F2XXTimerState timer[STM_NUM_TIMERS];
36
+ s->regs[GEM_DESCONF6] |= queues_mask;
45
diff --git a/hw/arm/netduino2.c b/hw/arm/netduino2.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/arm/netduino2.c
48
+++ b/hw/arm/netduino2.c
49
@@ -XXX,XX +XXX,XX @@
50
#include "hw/boards.h"
51
#include "qemu/error-report.h"
52
#include "hw/arm/stm32f205_soc.h"
53
+#include "hw/arm/arm.h"
54
55
static void netduino2_init(MachineState *machine)
56
{
57
DeviceState *dev;
58
59
dev = qdev_create(NULL, TYPE_STM32F205_SOC);
60
- if (machine->kernel_filename) {
61
- qdev_prop_set_string(dev, "kernel-filename", machine->kernel_filename);
62
- }
63
qdev_prop_set_string(dev, "cpu-model", "cortex-m3");
64
object_property_set_bool(OBJECT(dev), true, "realized", &error_fatal);
65
+
66
+ armv7m_load_kernel(ARM_CPU(first_cpu), machine->kernel_filename,
67
+ FLASH_SIZE);
68
}
69
70
static void netduino2_machine_init(MachineClass *mc)
71
diff --git a/hw/arm/stm32f205_soc.c b/hw/arm/stm32f205_soc.c
72
index XXXXXXX..XXXXXXX 100644
73
--- a/hw/arm/stm32f205_soc.c
74
+++ b/hw/arm/stm32f205_soc.c
75
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_initfn(Object *obj)
76
STM32F205State *s = STM32F205_SOC(obj);
77
int i;
78
79
+ object_initialize(&s->armv7m, sizeof(s->armv7m), TYPE_ARMV7M);
80
+ qdev_set_parent_bus(DEVICE(&s->armv7m), sysbus_get_default());
81
+
82
object_initialize(&s->syscfg, sizeof(s->syscfg), TYPE_STM32F2XX_SYSCFG);
83
qdev_set_parent_bus(DEVICE(&s->syscfg), sysbus_get_default());
84
85
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
86
vmstate_register_ram_global(sram);
87
memory_region_add_subregion(system_memory, SRAM_BASE_ADDRESS, sram);
88
89
- nvic = armv7m_init(get_system_memory(), FLASH_SIZE, 96,
90
- s->kernel_filename, s->cpu_model);
91
+ nvic = DEVICE(&s->armv7m);
92
+ qdev_prop_set_uint32(nvic, "num-irq", 96);
93
+ qdev_prop_set_string(nvic, "cpu-model", s->cpu_model);
94
+ object_property_set_link(OBJECT(&s->armv7m), OBJECT(get_system_memory()),
95
+ "memory", &error_abort);
96
+ object_property_set_bool(OBJECT(&s->armv7m), true, "realized", &err);
97
+ if (err != NULL) {
98
+ error_propagate(errp, err);
99
+ return;
100
+ }
37
+ }
101
38
102
/* System configuration controller */
39
/* Set MAC address */
103
dev = DEVICE(&s->syscfg);
40
a = &s->conf.macaddr.a[0];
104
@@ -XXX,XX +XXX,XX @@ static void stm32f205_soc_realize(DeviceState *dev_soc, Error **errp)
105
}
106
107
static Property stm32f205_soc_properties[] = {
108
- DEFINE_PROP_STRING("kernel-filename", STM32F205State, kernel_filename),
109
DEFINE_PROP_STRING("cpu-model", STM32F205State, cpu_model),
110
DEFINE_PROP_END_OF_LIST(),
111
};
112
--
41
--
113
2.7.4
42
2.19.1
114
43
115
44
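For the cadence_gem change above: DESCONF6 advertises one bit per priority queue beyond queue 0, so the value written at reset sets bits [num_priority_queues-1:1]. A standalone sketch of the arithmetic, assuming MAKE_64BIT_MASK(shift, length) builds a mask of `length` bits starting at bit `shift`, as in QEMU's bitops.h:

    #include <stdint.h>

    static uint32_t desconf6_queue_bits(int num_priority_queues)
    {
        if (num_priority_queues <= 1) {
            return 0;
        }
        return ((1u << (num_priority_queues - 1)) - 1) << 1;
    }

    /* e.g. 2 queues -> 0x2, 4 queues -> 0xe, 8 queues -> 0xfe */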
New patch
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
1
2
3
Announce 64bit addressing support.
4
5
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Message-id: 20181017213932.19973-3-edgar.iglesias@gmail.com
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/net/cadence_gem.c | 3 ++-
12
1 file changed, 2 insertions(+), 1 deletion(-)
13
14
diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/net/cadence_gem.c
17
+++ b/hw/net/cadence_gem.c
18
@@ -XXX,XX +XXX,XX @@
19
#define GEM_DESCONF4 (0x0000028C/4)
20
#define GEM_DESCONF5 (0x00000290/4)
21
#define GEM_DESCONF6 (0x00000294/4)
22
+#define GEM_DESCONF6_64B_MASK (1U << 23)
23
#define GEM_DESCONF7 (0x00000298/4)
24
25
#define GEM_INT_Q1_STATUS (0x00000400 / 4)
26
@@ -XXX,XX +XXX,XX @@ static void gem_reset(DeviceState *d)
27
s->regs[GEM_DESCONF] = 0x02500111;
28
s->regs[GEM_DESCONF2] = 0x2ab13fff;
29
s->regs[GEM_DESCONF5] = 0x002f2045;
30
- s->regs[GEM_DESCONF6] = 0x0;
31
+ s->regs[GEM_DESCONF6] = GEM_DESCONF6_64B_MASK;
32
33
if (s->num_priority_queues > 1) {
34
queues_mask = MAKE_64BIT_MASK(1, s->num_priority_queues - 1);
35
--
36
2.19.1
37
38
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
The EL3 version of this register does not include an ASID,
4
and so the tlb_flush performed by vmsa_ttbr_write is not needed.
5
6
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20181019015617.22583-2-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.c | 2 +-
13
1 file changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
20
.fieldoffset = offsetof(CPUARMState, cp15.mvbar) },
21
{ .name = "TTBR0_EL3", .state = ARM_CP_STATE_AA64,
22
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 0,
23
- .access = PL3_RW, .writefn = vmsa_ttbr_write, .resetvalue = 0,
24
+ .access = PL3_RW, .resetvalue = 0,
25
.fieldoffset = offsetof(CPUARMState, cp15.ttbr0_el[3]) },
26
{ .name = "TCR_EL3", .state = ARM_CP_STATE_AA64,
27
.opc0 = 3, .opc1 = 6, .crn = 2, .crm = 0, .opc2 = 2,
28
--
29
2.19.1
30
31
1
From: Paolo Bonzini <pbonzini@redhat.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The linux-headers/asm-arm/unistd.h file has been split into three
3
Since QEMU does not implement ASIDs, changes to the ASID must flush the
4
sub-files, so copy them along as well. However, building them requires
4
TLB. However, if the ASID does not change, there is no reason to flush.
5
setting ARCH rather than SRCARCH.
6
5
7
SRCARCH defaults to $(ARCH) anyway; to avoid future occurrences of
6
In testing a boot of the Ubuntu installer to the first menu, this reduces
8
the same problem, use ARCH for all architectures where SRCARCH=ARCH.
7
the number of flushes by 30%, or nearly 600k instances.
9
Currently these are all except x86, sparc, sh and tile.
10
8
11
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
9
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
12
Message-id: 20170221122920.16245-2-pbonzini@redhat.com
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
---
15
scripts/update-linux-headers.sh | 13 ++++++++++++-
16
target/arm/helper.c | 8 +++-----
16
1 file changed, 12 insertions(+), 1 deletion(-)
17
1 file changed, 3 insertions(+), 5 deletions(-)
17
18
18
diff --git a/scripts/update-linux-headers.sh b/scripts/update-linux-headers.sh
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
19
index XXXXXXX..XXXXXXX 100755
20
index XXXXXXX..XXXXXXX 100644
20
--- a/scripts/update-linux-headers.sh
21
--- a/target/arm/helper.c
21
+++ b/scripts/update-linux-headers.sh
22
+++ b/target/arm/helper.c
22
@@ -XXX,XX +XXX,XX @@ for arch in $ARCHLIST; do
23
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
23
continue
24
static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
24
fi
25
uint64_t value)
25
26
{
26
- make -C "$linux" INSTALL_HDR_PATH="$tmpdir" SRCARCH=$arch headers_install
27
- /* 64 bit accesses to the TTBRs can change the ASID and so we
27
+ if [ "$arch" = x86 ]; then
28
- * must flush the TLB.
28
+ arch_var=SRCARCH
29
- */
29
+ else
30
- if (cpreg_field_is_64bit(ri)) {
30
+ arch_var=ARCH
31
+ /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
31
+ fi
32
+ if (cpreg_field_is_64bit(ri) &&
32
+
33
+ extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
33
+ make -C "$linux" INSTALL_HDR_PATH="$tmpdir" $arch_var=$arch headers_install
34
ARMCPU *cpu = arm_env_get_cpu(env);
34
35
-
35
rm -rf "$output/linux-headers/asm-$arch"
36
tlb_flush(CPU(cpu));
36
mkdir -p "$output/linux-headers/asm-$arch"
37
}
37
@@ -XXX,XX +XXX,XX @@ for arch in $ARCHLIST; do
38
raw_write(env, ri, value);
38
cp_portable "$tmpdir/include/asm/kvm_virtio.h" "$output/include/standard-headers/asm-s390/"
39
cp_portable "$tmpdir/include/asm/virtio-ccw.h" "$output/include/standard-headers/asm-s390/"
40
fi
41
+ if [ $arch = arm ]; then
42
+ cp "$tmpdir/include/asm/unistd-eabi.h" "$output/linux-headers/asm-arm/"
43
+ cp "$tmpdir/include/asm/unistd-oabi.h" "$output/linux-headers/asm-arm/"
44
+ cp "$tmpdir/include/asm/unistd-common.h" "$output/linux-headers/asm-arm/"
45
+ fi
46
if [ $arch = x86 ]; then
47
cp_portable "$tmpdir/include/asm/hyperv.h" "$output/include/standard-headers/asm-x86/"
48
cp "$tmpdir/include/asm/unistd_32.h" "$output/linux-headers/asm-x86/"
49
--
39
--
50
2.7.4
40
2.19.1
51
41
52
42
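The flush-avoidance test in the final patch only looks at the ASID field of the old and new TTBR values: for a 64-bit write the ASID occupies bits [63:48], so XORing the two values and extracting that field detects a change. A minimal sketch of the check (asid_changed is an illustrative name, not a QEMU helper):

    #include <stdbool.h>
    #include <stdint.h>

    static bool asid_changed(uint64_t old_ttbr, uint64_t new_ttbr)
    {
        /* mirrors extract64(old ^ new, 48, 16) != 0 in the patch */
        return (((old_ttbr ^ new_ttbr) >> 48) & 0xffff) != 0;
    }

A narrower (8-bit) ASID still falls inside this field, so at worst the test flushes slightly more often than strictly necessary, never less.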