First pullreq for arm of the 4.1 series, since I'm back from
holiday now. This is mostly my M-profile FPU series and Philippe's
devices.h cleanup. I have a pile of other patchsets to work through
in my to-review folder, but 42 patches is definitely quite
big enough to send now...

thanks
-- PMM

The following changes since commit 413a99a92c13ec408dcf2adaa87918dc81e890c8:

  Add Nios II semihosting support. (2019-04-29 16:09:51 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190429

for you to fetch changes up to 437cc27ddfded3bbab6afd5ac1761e0e195edba7:

  hw/devices: Move SMSC 91C111 declaration into a new header (2019-04-29 17:57:21 +0100)

----------------------------------------------------------------
target-arm queue:
 * remove "bag of random stuff" hw/devices.h header
 * implement FPU for Cortex-M and enable it for Cortex-M4 and -M33
 * hw/dma: Compile the bcm2835_dma device as common object
 * configure: Remove --source-path option
 * hw/ssi/xilinx_spips: Avoid variable length array
 * hw/arm/smmuv3: Remove SMMUNotifierNode

----------------------------------------------------------------
Eric Auger (1):
      hw/arm/smmuv3: Remove SMMUNotifierNode

Peter Maydell (28):
      hw/ssi/xilinx_spips: Avoid variable length array
      configure: Remove --source-path option
      target/arm: Make sure M-profile FPSCR RES0 bits are not settable
      hw/intc/armv7m_nvic: Allow reading of M-profile MVFR* registers
      target/arm: Implement dummy versions of M-profile FP-related registers
      target/arm: Disable most VFP sysregs for M-profile
      target/arm: Honour M-profile FP enable bits
      target/arm: Decode FP instructions for M profile
      target/arm: Clear CONTROL_S.SFPA in SG insn if FPU present
      target/arm: Handle SFPA and FPCA bits in reads and writes of CONTROL
      target/arm/helper: don't return early for STKOF faults during stacking
      target/arm: Handle floating point registers in exception entry
      target/arm: Implement v7m_update_fpccr()
      target/arm: Clear CONTROL.SFPA in BXNS and BLXNS
      target/arm: Clean excReturn bits when tail chaining
      target/arm: Allow for floating point in callee stack integrity check
      target/arm: Handle floating point registers in exception return
      target/arm: Move NS TBFLAG from bit 19 to bit 6
      target/arm: Overlap VECSTRIDE and XSCALE_CPAR TB flags
      target/arm: Set FPCCR.S when executing M-profile floating point insns
      target/arm: Activate M-profile floating point context when FPCCR.ASPEN is set
      target/arm: New helper function arm_v7m_mmu_idx_all()
      target/arm: New function armv7m_nvic_set_pending_lazyfp()
      target/arm: Add lazy-FP-stacking support to v7m_stack_write()
      target/arm: Implement M-profile lazy FP state preservation
      target/arm: Implement VLSTM for v7M CPUs with an FPU
      target/arm: Implement VLLDM for v7M CPUs with an FPU
      target/arm: Enable FPU for Cortex-M4 and Cortex-M33

Philippe Mathieu-Daudé (13):
      hw/dma: Compile the bcm2835_dma device as common object
      hw/arm/aspeed: Use TYPE_TMP105/TYPE_PCA9552 instead of hardcoded string
      hw/arm/nseries: Use TYPE_TMP105 instead of hardcoded string
      hw/display/tc6393xb: Remove unused functions
      hw/devices: Move TC6393XB declarations into a new header
      hw/devices: Move Blizzard declarations into a new header
      hw/devices: Move CBus declarations into a new header
      hw/devices: Move Gamepad declarations into a new header
      hw/devices: Move TI touchscreen declarations into a new header
      hw/devices: Move LAN9118 declarations into a new header
      hw/net/ne2000-isa: Add guards to the header
      hw/net/lan9118: Export TYPE_LAN9118 and use it instead of hardcoded string
      hw/devices: Move SMSC 91C111 declaration into a new header

 configure                     |  10 +-
 hw/dma/Makefile.objs          |   2 +-
 include/hw/arm/omap.h         |   6 +-
 include/hw/arm/smmu-common.h  |   8 +-
 include/hw/devices.h          |  62 ---
 include/hw/display/blizzard.h |  22 ++
 include/hw/display/tc6393xb.h |  24 ++
 include/hw/input/gamepad.h    |  19 +
 include/hw/input/tsc2xxx.h    |  36 ++
 include/hw/misc/cbus.h        |  32 ++
 include/hw/net/lan9118.h      |  21 +
 include/hw/net/ne2000-isa.h   |   6 +
 include/hw/net/smc91c111.h    |  19 +
 include/qemu/typedefs.h       |   1 -
 target/arm/cpu.h              |  95 ++++-
 target/arm/helper.h           |   5 +
 target/arm/translate.h        |   3 +
 hw/arm/aspeed.c               |  13 +-
 hw/arm/exynos4_boards.c       |   3 +-
 hw/arm/gumstix.c              |   2 +-
 hw/arm/integratorcp.c         |   2 +-
 hw/arm/kzm.c                  |   2 +-
 hw/arm/mainstone.c            |   2 +-
 hw/arm/mps2-tz.c              |   3 +-
 hw/arm/mps2.c                 |   2 +-
 hw/arm/nseries.c              |   7 +-
 hw/arm/palm.c                 |   2 +-
 hw/arm/realview.c             |   3 +-
 hw/arm/smmu-common.c          |   6 +-
 hw/arm/smmuv3.c               |  28 +-
 hw/arm/stellaris.c            |   2 +-
 hw/arm/tosa.c                 |   2 +-
 hw/arm/versatilepb.c          |   2 +-
 hw/arm/vexpress.c             |   2 +-
 hw/display/blizzard.c         |   2 +-
 hw/display/tc6393xb.c         |  18 +-
 hw/input/stellaris_input.c    |   2 +-
 hw/input/tsc2005.c            |   2 +-
 hw/input/tsc210x.c            |   4 +-
 hw/intc/armv7m_nvic.c         | 261 +++++++++++++
 hw/misc/cbus.c                |   2 +-
 hw/net/lan9118.c              |   3 +-
 hw/net/smc91c111.c            |   2 +-
 hw/ssi/xilinx_spips.c         |   6 +-
 target/arm/cpu.c              |  20 +
 target/arm/helper.c           | 873 +++++++++++++++++++++++++++++++++++++++---
 target/arm/machine.c          |  16 +
 target/arm/translate.c        | 150 +++++++-
 target/arm/vfp_helper.c       |   8 +
 MAINTAINERS                   |   7 +
 50 files changed, 1595 insertions(+), 235 deletions(-)
 delete mode 100644 include/hw/devices.h
 create mode 100644 include/hw/display/blizzard.h
 create mode 100644 include/hw/display/tc6393xb.h
 create mode 100644 include/hw/input/gamepad.h
 create mode 100644 include/hw/input/tsc2xxx.h
 create mode 100644 include/hw/misc/cbus.h
 create mode 100644 include/hw/net/lan9118.h
 create mode 100644 include/hw/net/smc91c111.h

From: Eric Auger <eric.auger@redhat.com>

The SMMUNotifierNode struct is not necessary and brings extra
complexity so let's remove it. We now directly track the SMMUDevices
which have registered IOMMU MR notifiers.

This is inspired from the same transformation on intel-iommu
done in commit b4a4ba0d68f50f218ee3957b6638dbee32a5eeef
("intel-iommu: remove IntelIOMMUNotifierNode")

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-id: 20190409160219.19026-1-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/smmu-common.h |  8 ++------
 hw/arm/smmu-common.c         |  6 +++---
 hw/arm/smmuv3.c              | 28 +++++++---------------------
 3 files changed, 12 insertions(+), 30 deletions(-)

diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUDevice {
     AddressSpace as;
     uint32_t cfg_cache_hits;
     uint32_t cfg_cache_misses;
+    QLIST_ENTRY(SMMUDevice) next;
 } SMMUDevice;
 
-typedef struct SMMUNotifierNode {
-    SMMUDevice *sdev;
-    QLIST_ENTRY(SMMUNotifierNode) next;
-} SMMUNotifierNode;
-
 typedef struct SMMUPciBus {
     PCIBus       *bus;
     SMMUDevice   *pbdev[0]; /* Parent array is sparse, so dynamically alloc */
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUState {
     GHashTable *iotlb;
     SMMUPciBus *smmu_pcibus_by_bus_num[SMMU_PCI_BUS_MAX];
     PCIBus *pci_bus;
-    QLIST_HEAD(, SMMUNotifierNode) notifiers_list;
+    QLIST_HEAD(, SMMUDevice) devices_with_notifiers;
     uint8_t bus_num;
     PCIBus *primary_bus;
 } SMMUState;
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -XXX,XX +XXX,XX @@ inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
 /* Unmap all notifiers of all mr's */
 void smmu_inv_notifiers_all(SMMUState *s)
 {
-    SMMUNotifierNode *node;
+    SMMUDevice *sdev;
 
-    QLIST_FOREACH(node, &s->notifiers_list, next) {
-        smmu_inv_notifiers_mr(&node->sdev->iommu);
+    QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
+        smmu_inv_notifiers_mr(&sdev->iommu);
     }
 }
 
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
 /* invalidate an asid/iova tuple in all mr's */
 static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
 {
-    SMMUNotifierNode *node;
+    SMMUDevice *sdev;
 
-    QLIST_FOREACH(node, &s->notifiers_list, next) {
-        IOMMUMemoryRegion *mr = &node->sdev->iommu;
+    QLIST_FOREACH(sdev, &s->devices_with_notifiers, next) {
+        IOMMUMemoryRegion *mr = &sdev->iommu;
         IOMMUNotifier *n;
 
         trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova);
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
     SMMUDevice *sdev = container_of(iommu, SMMUDevice, iommu);
     SMMUv3State *s3 = sdev->smmu;
     SMMUState *s = &(s3->smmu_state);
-    SMMUNotifierNode *node = NULL;
-    SMMUNotifierNode *next_node = NULL;
 
     if (new & IOMMU_NOTIFIER_MAP) {
         int bus_num = pci_bus_num(sdev->bus);
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
 
     if (old == IOMMU_NOTIFIER_NONE) {
         trace_smmuv3_notify_flag_add(iommu->parent_obj.name);
-        node = g_malloc0(sizeof(*node));
-        node->sdev = sdev;
-        QLIST_INSERT_HEAD(&s->notifiers_list, node, next);
-        return;
-    }
-
-    /* update notifier node with new flags */
-    QLIST_FOREACH_SAFE(node, &s->notifiers_list, next, next_node) {
-        if (node->sdev == sdev) {
-            if (new == IOMMU_NOTIFIER_NONE) {
-                trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
-                QLIST_REMOVE(node, next);
-                g_free(node);
-            }
-            return;
-        }
-    }
+        QLIST_INSERT_HEAD(&s->devices_with_notifiers, sdev, next);
+    } else if (new == IOMMU_NOTIFIER_NONE) {
+        trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
+        QLIST_REMOVE(sdev, next);
     }
--
2.20.1

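The change above is the classic intrusive-list refactor: rather than
allocating a wrapper node per device and searching the list on every
flag change, the QLIST_ENTRY linkage is embedded in SMMUDevice itself,
so insertion and removal are O(1) and there is nothing to allocate or
free. A standalone sketch of the pattern, using BSD <sys/queue.h>
(which QEMU's QLIST_* macros mirror); Device and the function names
here are illustrative stand-ins, not QEMU identifiers:

#include <stdbool.h>
#include <stdio.h>
#include <sys/queue.h>

typedef struct Device {
    const char *name;
    bool has_notifier;
    LIST_ENTRY(Device) next;    /* linkage lives inside the object */
} Device;

static LIST_HEAD(, Device) devices_with_notifiers =
    LIST_HEAD_INITIALIZER(devices_with_notifiers);

static void notifier_flag_changed(Device *d, bool enabled)
{
    if (!d->has_notifier && enabled) {
        /* first notifier registered: start tracking this device */
        LIST_INSERT_HEAD(&devices_with_notifiers, d, next);
    } else if (d->has_notifier && !enabled) {
        /* last notifier removed: no wrapper node to search or free */
        LIST_REMOVE(d, next);
    }
    d->has_notifier = enabled;
}

int main(void)
{
    Device a = { .name = "dev-a" }, b = { .name = "dev-b" };
    Device *d;

    notifier_flag_changed(&a, true);
    notifier_flag_changed(&b, true);
    notifier_flag_changed(&a, false);

    LIST_FOREACH(d, &devices_with_notifiers, next) {
        printf("%s has notifiers\n", d->name);   /* prints dev-b */
    }
    return 0;
}
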
In the stripe8() function we use a variable length array; however
we know that the maximum length required is MAX_NUM_BUSSES. Use
a fixed-length array and an assert instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Message-id: 20190328152635.2794-1-peter.maydell@linaro.org
---
 hw/ssi/xilinx_spips.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_reset(DeviceState *d)
 
 static inline void stripe8(uint8_t *x, int num, bool dir)
 {
-    uint8_t r[num];
-    memset(r, 0, sizeof(uint8_t) * num);
+    uint8_t r[MAX_NUM_BUSSES];
     int idx[2] = {0, 0};
     int bit[2] = {0, 7};
     int d = dir;
 
+    assert(num <= MAX_NUM_BUSSES);
+    memset(r, 0, sizeof(uint8_t) * num);
+
     for (idx[0] = 0; idx[0] < num; ++idx[0]) {
         for (bit[0] = 7; bit[0] >= 0; bit[0]--) {
             r[idx[!d]] |= x[idx[d]] & 1 << bit[d] ? 1 << bit[!d] : 0;
--
2.20.1

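The fix above is a reusable pattern for getting rid of variable length
arrays: size the buffer for the documented worst case and assert the
runtime bound. A minimal sketch, where MAX_BUSSES and process() are
made-up stand-ins for the driver's MAX_NUM_BUSSES and stripe8():

#include <assert.h>
#include <string.h>

#define MAX_BUSSES 2

static void process(unsigned char *x, int num)
{
    unsigned char r[MAX_BUSSES];    /* fixed size: stack use is bounded */

    assert(num <= MAX_BUSSES);      /* enforce the documented maximum */
    memset(r, 0, num);              /* still only touch the bytes in use */

    for (int i = 0; i < num; i++) {
        r[i] = x[num - 1 - i];      /* placeholder transformation */
    }
    memcpy(x, r, num);
}
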
Normally configure identifies the source path by looking
at the location where the configure script itself exists.
We also provide a --source-path option which lets the user
manually override this.

There isn't really an obvious use case for the --source-path
option, and in commit 927128222b0a91f56c13a in 2017 we
accidentally added some logic that looks at $source_path
before the command line option that overrides it has been
processed.

The fact that nobody complained suggests that there isn't
any use of this option and we aren't testing it either;
remove it. This allows us to move the "make $source_path
absolute" logic up so that there is no window in the script
where $source_path is set but not yet absolute.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-id: 20190318134019.23729-1-peter.maydell@linaro.org
---
 configure | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/configure b/configure
index XXXXXXX..XXXXXXX 100755
--- a/configure
+++ b/configure
@@ -XXX,XX +XXX,XX @@ ld_has() {
 
 # default parameters
 source_path=$(dirname "$0")
+# make source path absolute
+source_path=$(cd "$source_path"; pwd)
 cpu=""
 iasl="iasl"
 interp_prefix="/usr/gnemul/qemu-%M"
@@ -XXX,XX +XXX,XX @@ for opt do
   ;;
   --cxx=*) CXX="$optarg"
   ;;
-  --source-path=*) source_path="$optarg"
-  ;;
   --cpu=*) cpu="$optarg"
   ;;
   --extra-cflags=*) QEMU_CFLAGS="$QEMU_CFLAGS $optarg"
@@ -XXX,XX +XXX,XX @@ if test "$debug_info" = "yes"; then
     LDFLAGS="-g $LDFLAGS"
 fi
 
-# make source path absolute
-source_path=$(cd "$source_path"; pwd)
-
 # running configure in the source tree?
 # we know that's the case if configure is there.
 if test -f "./configure"; then
@@ -XXX,XX +XXX,XX @@ for opt do
   ;;
   --interp-prefix=*) interp_prefix="$optarg"
   ;;
-  --source-path=*)
-  ;;
   --cross-prefix=*)
   ;;
   --cc=*)
@@ -XXX,XX +XXX,XX @@ $(echo Available targets: $default_target_list | \
 --target-list-exclude=LIST exclude a set of targets from the default target-list
 
 Advanced options (experts only):
-  --source-path=PATH       path of source code [$source_path]
   --cross-prefix=PREFIX    use PREFIX for compile tools [$cross_prefix]
   --cc=CC                  use C compiler CC [$cc]
   --iasl=IASL              use ACPI compiler IASL [$iasl]
--
2.20.1

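The bug class described above -- consuming a value before the command
line option that overrides it has been processed -- is easy to
reproduce in any language. A toy C illustration (standing in for the
shell logic; nothing below is from QEMU's configure):

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *source_path = "detected-default";

    /*
     * The hazard: using source_path before this loop finishes would
     * bake in the default even when --source-path=... appears in argv.
     * The patch resolves it the simple way: remove the override, so
     * the early use of the detected value is always correct.
     */
    for (int i = 1; i < argc; i++) {
        if (strncmp(argv[i], "--source-path=", 14) == 0) {
            source_path = argv[i] + 14;
        }
    }
    printf("source path: %s\n", source_path);  /* safe: parsing is done */
    return 0;
}
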
Enforce that for M-profile various FPSCR bits which are RES0 there
but have defined meanings on A-profile are never settable. This
ensures that M-profile code can't enable the A-profile behaviour
(notably vector length/stride handling) by accident.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-2-peter.maydell@linaro.org
---
 target/arm/vfp_helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
         val &= ~FPCR_FZ16;
     }
 
+    if (arm_feature(env, ARM_FEATURE_M)) {
+        /*
+         * M profile FPSCR is RES0 for the QC, STRIDE, FZ16, LEN bits
+         * and also for the trapped-exception-handling bits IxE.
+         */
+        val &= 0xf7c0009f;
+    }
+
     /*
      * We don't implement trapped exception handling, so the
      * trap enable bits, IDE|IXE|UFE|OFE|DZE|IOE are all RAZ/WI (not RES0!)
--
2.20.1

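The 0xf7c0009f constant is the complement of the M-profile RES0 set:
it keeps NZCV, AHP/DN/FZ/RMode and the cumulative exception flags while
clearing QC, STRIDE, FZ16, LEN and the trap enable bits. A quick check
against the architectural FPSCR layout (the field macros below are
local shorthand for this example, not QEMU identifiers):

#include <assert.h>
#include <stdint.h>

#define FPSCR_NZCV   (0xfu << 28)   /* N, Z, C, V condition flags */
#define FPSCR_AHP    (1u << 26)     /* alternative half-precision */
#define FPSCR_DN     (1u << 25)     /* default NaN mode */
#define FPSCR_FZ     (1u << 24)     /* flush-to-zero mode */
#define FPSCR_RMODE  (3u << 22)     /* rounding mode */
#define FPSCR_CUMUL  0x9fu          /* IDC, IXC, UFC, OFC, DZC, IOC */

int main(void)
{
    uint32_t kept = FPSCR_NZCV | FPSCR_AHP | FPSCR_DN | FPSCR_FZ |
                    FPSCR_RMODE | FPSCR_CUMUL;

    assert(kept == 0xf7c0009fu);    /* the mask applied in the patch */
    return 0;
}
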
For M-profile the MVFR* ID registers are memory mapped, in the
range we implement via the NVIC. Allow them to be read.
(If the CPU has no FPU, these registers are defined to be RAZ.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-3-peter.maydell@linaro.org
---
 hw/intc/armv7m_nvic.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             return 0;
         }
         return cpu->env.v7m.sfar;
+    case 0xf40: /* MVFR0 */
+        return cpu->isar.mvfr0;
+    case 0xf44: /* MVFR1 */
+        return cpu->isar.mvfr1;
+    case 0xf48: /* MVFR2 */
+        return cpu->isar.mvfr2;
     default:
     bad_offset:
         qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset);
--
2.20.1

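Because the 0xf40/0xf44/0xf48 offsets sit in the M-profile System
Control Space at 0xE000E000, guest firmware can now probe FPU features
with ordinary loads. A guest-side sketch (the absolute addresses follow
from the architecture; the MVFR0 double-precision field decode is from
the Arm ARM, not from QEMU, and the whole register reads as zero when
there is no FPU):

#include <stdint.h>

#define MVFR0 (*(volatile uint32_t *)0xE000EF40u)
#define MVFR1 (*(volatile uint32_t *)0xE000EF44u)
#define MVFR2 (*(volatile uint32_t *)0xE000EF48u)

/* Nonzero if the CPU implements double-precision floating point. */
static int cpu_has_double_precision(void)
{
    return (MVFR0 >> 8) & 0xf;      /* MVFR0 bits [11:8]: FPDP */
}
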
The M-profile floating point support has three associated config
registers: FPCAR, FPCCR and FPDSCR. It also makes the registers
CPACR and NSACR have behaviour other than reads-as-zero.
Add support for all of these as simple reads-as-written registers.
We will hook up actual functionality later.

The main complexity here is handling the FPCCR register, which
has a mix of banked and unbanked bits.

Note that we don't share storage with the A-profile
cpu->cp15.nsacr and cpu->cp15.cpacr_el1, though the behaviour
is quite similar, for two reasons:
 * the M profile CPACR is banked between security states
 * it preserves the invariant that M profile uses no state
   inside the cp15 substruct

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-4-peter.maydell@linaro.org
---
 target/arm/cpu.h      |  34 ++++++++++++
 hw/intc/armv7m_nvic.c | 125 ++++++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c      |   5 ++
 target/arm/machine.c  |  16 ++++++
 4 files changed, 180 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint32_t scr[M_REG_NUM_BANKS];
         uint32_t msplim[M_REG_NUM_BANKS];
         uint32_t psplim[M_REG_NUM_BANKS];
+        uint32_t fpcar[M_REG_NUM_BANKS];
+        uint32_t fpccr[M_REG_NUM_BANKS];
+        uint32_t fpdscr[M_REG_NUM_BANKS];
+        uint32_t cpacr[M_REG_NUM_BANKS];
+        uint32_t nsacr;
     } v7m;
 
     /* Information associated with an exception about to be taken:
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CSSELR, LEVEL, 1, 3)
  */
 FIELD(V7M_CSSELR, INDEX, 0, 4)
 
+/* v7M FPCCR bits */
+FIELD(V7M_FPCCR, LSPACT, 0, 1)
+FIELD(V7M_FPCCR, USER, 1, 1)
+FIELD(V7M_FPCCR, S, 2, 1)
+FIELD(V7M_FPCCR, THREAD, 3, 1)
+FIELD(V7M_FPCCR, HFRDY, 4, 1)
+FIELD(V7M_FPCCR, MMRDY, 5, 1)
+FIELD(V7M_FPCCR, BFRDY, 6, 1)
+FIELD(V7M_FPCCR, SFRDY, 7, 1)
+FIELD(V7M_FPCCR, MONRDY, 8, 1)
+FIELD(V7M_FPCCR, SPLIMVIOL, 9, 1)
+FIELD(V7M_FPCCR, UFRDY, 10, 1)
+FIELD(V7M_FPCCR, RES0, 11, 15)
+FIELD(V7M_FPCCR, TS, 26, 1)
+FIELD(V7M_FPCCR, CLRONRETS, 27, 1)
+FIELD(V7M_FPCCR, CLRONRET, 28, 1)
+FIELD(V7M_FPCCR, LSPENS, 29, 1)
+FIELD(V7M_FPCCR, LSPEN, 30, 1)
+FIELD(V7M_FPCCR, ASPEN, 31, 1)
+/* These bits are banked. Others are non-banked and live in the M_REG_S bank */
+#define R_V7M_FPCCR_BANKED_MASK                 \
+    (R_V7M_FPCCR_LSPACT_MASK |                  \
+     R_V7M_FPCCR_USER_MASK |                    \
+     R_V7M_FPCCR_THREAD_MASK |                  \
+     R_V7M_FPCCR_MMRDY_MASK |                   \
+     R_V7M_FPCCR_SPLIMVIOL_MASK |               \
+     R_V7M_FPCCR_UFRDY_MASK |                   \
+     R_V7M_FPCCR_ASPEN_MASK)
+
 /*
  * System register ID fields.
  */
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
     case 0xd84: /* CSSELR */
         return cpu->env.v7m.csselr[attrs.secure];
+    case 0xd88: /* CPACR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+            return 0;
+        }
+        return cpu->env.v7m.cpacr[attrs.secure];
+    case 0xd8c: /* NSACR */
+        if (!attrs.secure || !arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+            return 0;
+        }
+        return cpu->env.v7m.nsacr;
     /* TODO: Implement debug registers.  */
     case 0xd90: /* MPU_TYPE */
         /* Unified MPU; if the MPU is not present this value is zero */
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
             return 0;
         }
         return cpu->env.v7m.sfar;
+    case 0xf34: /* FPCCR */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
+            return 0;
+        }
+        if (attrs.secure) {
+            return cpu->env.v7m.fpccr[M_REG_S];
+        } else {
+            /*
+             * NS can read LSPEN, CLRONRET and MONRDY. It can read
+             * BFRDY and HFRDY if AIRCR.BFHFNMINS != 0;
+             * other non-banked bits RAZ.
+             * TODO: MONRDY should RAZ/WI if DEMCR.SDME is set.
+             */
+            uint32_t value = cpu->env.v7m.fpccr[M_REG_S];
+            uint32_t mask = R_V7M_FPCCR_LSPEN_MASK |
+                R_V7M_FPCCR_CLRONRET_MASK |
+                R_V7M_FPCCR_MONRDY_MASK;
+
+            if (s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
+                mask |= R_V7M_FPCCR_BFRDY_MASK | R_V7M_FPCCR_HFRDY_MASK;
+            }
+
+            value &= mask;
+
+            value |= cpu->env.v7m.fpccr[M_REG_NS];
+
133
+ error_report("failed to put vcpu events");
126
+ value &= mask;
134
+ }
127
+
135
+
128
+ value |= cpu->env.v7m.fpccr[M_REG_NS];
136
+ return ret;
129
+ return value;
137
+}
130
+ }
138
+
131
+ case 0xf38: /* FPCAR */
139
+int kvm_get_vcpu_events(ARMCPU *cpu)
132
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
140
+{
133
+ return 0;
141
+ CPUARMState *env = &cpu->env;
134
+ }
142
+ struct kvm_vcpu_events events;
135
+ return cpu->env.v7m.fpcar[attrs.secure];
143
+ int ret;
136
+ case 0xf3c: /* FPDSCR */
144
+
137
+ if (!arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
145
+ if (!kvm_has_vcpu_events()) {
138
+ return 0;
146
+ return 0;
139
+ }
147
+ }
140
+ return cpu->env.v7m.fpdscr[attrs.secure];
148
+
141
case 0xf40: /* MVFR0 */
149
+ memset(&events, 0, sizeof(events));
142
return cpu->isar.mvfr0;
150
+ ret = kvm_vcpu_ioctl(CPU(cpu), KVM_GET_VCPU_EVENTS, &events);
143
case 0xf44: /* MVFR1 */
151
+ if (ret) {
144
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
152
+ error_report("failed to get vcpu events");
145
cpu->env.v7m.csselr[attrs.secure] = value & R_V7M_CSSELR_INDEX_MASK;
153
+ return ret;
146
}
154
+ }
147
break;
155
+
148
+ case 0xd88: /* CPACR */
156
+ env->serror.pending = events.exception.serror_pending;
149
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
157
+ env->serror.has_esr = events.exception.serror_has_esr;
150
+ /* We implement only the Floating Point extension's CP10/CP11 */
158
+ env->serror.esr = events.exception.serror_esr;
151
+ cpu->env.v7m.cpacr[attrs.secure] = value & (0xf << 20);
159
+
152
+ }
160
+ return 0;
153
+ break;
161
+}
154
+ case 0xd8c: /* NSACR */
162
+
155
+ if (attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
163
void kvm_arch_pre_run(CPUState *cs, struct kvm_run *run)
156
+ /* We implement only the Floating Point extension's CP10/CP11 */
164
{
157
+ cpu->env.v7m.nsacr = value & (3 << 10);
165
}
158
+ }
166
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
159
+ break;
167
index XXXXXXX..XXXXXXX 100644
160
case 0xd90: /* MPU_TYPE */
168
--- a/target/arm/kvm32.c
161
return; /* RO */
169
+++ b/target/arm/kvm32.c
162
case 0xd94: /* MPU_CTRL */
170
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
163
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
171
}
164
}
172
cpu->mp_affinity = mpidr & ARM32_AFFINITY_MASK;
165
break;
173
166
}
174
+ /* Check whether userspace can specify guest syndrome value */
167
+ case 0xf34: /* FPCCR */
175
+ kvm_arm_init_serror_injection(cs);
168
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
176
+
169
+ /* Not all bits here are banked. */
177
return kvm_arm_init_cpreg_list(cpu);
170
+ uint32_t fpccr_s;
178
}
171
+
179
172
+ if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
180
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
173
+ /* Don't allow setting of bits not present in v7M */
181
return ret;
174
+ value &= (R_V7M_FPCCR_LSPACT_MASK |
182
}
175
+ R_V7M_FPCCR_USER_MASK |
183
176
+ R_V7M_FPCCR_THREAD_MASK |
184
+ ret = kvm_put_vcpu_events(cpu);
177
+ R_V7M_FPCCR_HFRDY_MASK |
185
+ if (ret) {
178
+ R_V7M_FPCCR_MMRDY_MASK |
186
+ return ret;
179
+ R_V7M_FPCCR_BFRDY_MASK |
187
+ }
180
+ R_V7M_FPCCR_MONRDY_MASK |
188
+
181
+ R_V7M_FPCCR_LSPEN_MASK |
189
/* Note that we do not call write_cpustate_to_list()
182
+ R_V7M_FPCCR_ASPEN_MASK);
190
* here, so we are only writing the tuple list back to
183
+ }
191
* KVM. This is safe because nothing can change the
184
+ value &= ~R_V7M_FPCCR_RES0_MASK;
192
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
185
+
193
}
186
+ if (!attrs.secure) {
194
vfp_set_fpscr(env, fpscr);
187
+ /* Some non-banked bits are configurably writable by NS */
195
188
+ fpccr_s = cpu->env.v7m.fpccr[M_REG_S];
196
+ ret = kvm_get_vcpu_events(cpu);
189
+ if (!(fpccr_s & R_V7M_FPCCR_LSPENS_MASK)) {
197
+ if (ret) {
190
+ uint32_t lspen = FIELD_EX32(value, V7M_FPCCR, LSPEN);
198
+ return ret;
191
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, LSPEN, lspen);
199
+ }
192
+ }
200
+
193
+ if (!(fpccr_s & R_V7M_FPCCR_CLRONRETS_MASK)) {
201
if (!write_kvmstate_to_list(cpu)) {
194
+ uint32_t cor = FIELD_EX32(value, V7M_FPCCR, CLRONRET);
202
return EINVAL;
195
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, CLRONRET, cor);
203
}
196
+ }
204
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
197
+ if ((s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK)) {
205
index XXXXXXX..XXXXXXX 100644
198
+ uint32_t hfrdy = FIELD_EX32(value, V7M_FPCCR, HFRDY);
206
--- a/target/arm/kvm64.c
199
+ uint32_t bfrdy = FIELD_EX32(value, V7M_FPCCR, BFRDY);
207
+++ b/target/arm/kvm64.c
200
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
208
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
201
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
209
202
+ }
210
kvm_arm_init_debug(cs);
203
+ /* TODO MONRDY should RAZ/WI if DEMCR.SDME is set */
211
204
+ {
212
+ /* Check whether user space can specify guest syndrome value */
205
+ uint32_t monrdy = FIELD_EX32(value, V7M_FPCCR, MONRDY);
213
+ kvm_arm_init_serror_injection(cs);
206
+ fpccr_s = FIELD_DP32(fpccr_s, V7M_FPCCR, MONRDY, monrdy);
214
+
207
+ }
215
return kvm_arm_init_cpreg_list(cpu);
208
+
216
}
209
+ /*
217
210
+ * All other non-banked bits are RAZ/WI from NS; write
218
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
211
+ * just the banked bits to fpccr[M_REG_NS].
219
return ret;
212
+ */
220
}
213
+ value &= R_V7M_FPCCR_BANKED_MASK;
221
214
+ cpu->env.v7m.fpccr[M_REG_NS] = value;
222
+ ret = kvm_put_vcpu_events(cpu);
215
+ } else {
223
+ if (ret) {
216
+ fpccr_s = value;
224
+ return ret;
217
+ }
225
+ }
218
+ cpu->env.v7m.fpccr[M_REG_S] = fpccr_s;
226
+
219
+ }
227
if (!write_list_to_kvmstate(cpu, level)) {
220
+ break;
228
return EINVAL;
221
+ case 0xf38: /* FPCAR */
229
}
222
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
230
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
223
+ value &= ~7;
231
}
224
+ cpu->env.v7m.fpcar[attrs.secure] = value;
232
vfp_set_fpcr(env, fpr);
225
+ }
233
226
+ break;
234
+ ret = kvm_get_vcpu_events(cpu);
227
+ case 0xf3c: /* FPDSCR */
235
+ if (ret) {
228
+ if (arm_feature(&cpu->env, ARM_FEATURE_VFP)) {
236
+ return ret;
229
+ value &= 0x07c00000;
237
+ }
230
+ cpu->env.v7m.fpdscr[attrs.secure] = value;
238
+
231
+ }
239
if (!write_kvmstate_to_list(cpu)) {
232
+ break;
240
return EINVAL;
233
case 0xf50: /* ICIALLU */
241
}
234
case 0xf58: /* ICIMVAU */
235
case 0xf5c: /* DCIMVAC */
236
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
237
index XXXXXXX..XXXXXXX 100644
238
--- a/target/arm/cpu.c
239
+++ b/target/arm/cpu.c
240
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
241
env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
242
}
243
244
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
245
+ env->v7m.fpccr[M_REG_NS] = R_V7M_FPCCR_ASPEN_MASK;
246
+ env->v7m.fpccr[M_REG_S] = R_V7M_FPCCR_ASPEN_MASK |
247
+ R_V7M_FPCCR_LSPEN_MASK | R_V7M_FPCCR_S_MASK;
248
+ }
249
/* Unlike A/R profile, M profile defines the reset LR value */
250
env->regs[14] = 0xffffffff;
251
242
diff --git a/target/arm/machine.c b/target/arm/machine.c
252
diff --git a/target/arm/machine.c b/target/arm/machine.c
243
index XXXXXXX..XXXXXXX 100644
253
index XXXXXXX..XXXXXXX 100644
244
--- a/target/arm/machine.c
254
--- a/target/arm/machine.c
245
+++ b/target/arm/machine.c
255
+++ b/target/arm/machine.c
246
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
256
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_v8m = {
257
}
247
};
258
};
248
#endif /* AARCH64 */
259
249
260
+static const VMStateDescription vmstate_m_fp = {
250
+static bool serror_needed(void *opaque)
261
+ .name = "cpu/m/fp",
251
+{
252
+ ARMCPU *cpu = opaque;
253
+ CPUARMState *env = &cpu->env;
254
+
255
+ return env->serror.pending != 0;
256
+}
257
+
258
+static const VMStateDescription vmstate_serror = {
259
+ .name = "cpu/serror",
260
+ .version_id = 1,
262
+ .version_id = 1,
261
+ .minimum_version_id = 1,
263
+ .minimum_version_id = 1,
262
+ .needed = serror_needed,
264
+ .needed = vfp_needed,
263
+ .fields = (VMStateField[]) {
265
+ .fields = (VMStateField[]) {
264
+ VMSTATE_UINT8(env.serror.pending, ARMCPU),
266
+ VMSTATE_UINT32_ARRAY(env.v7m.fpcar, ARMCPU, M_REG_NUM_BANKS),
265
+ VMSTATE_UINT8(env.serror.has_esr, ARMCPU),
267
+ VMSTATE_UINT32_ARRAY(env.v7m.fpccr, ARMCPU, M_REG_NUM_BANKS),
266
+ VMSTATE_UINT64(env.serror.esr, ARMCPU),
268
+ VMSTATE_UINT32_ARRAY(env.v7m.fpdscr, ARMCPU, M_REG_NUM_BANKS),
269
+ VMSTATE_UINT32_ARRAY(env.v7m.cpacr, ARMCPU, M_REG_NUM_BANKS),
270
+ VMSTATE_UINT32(env.v7m.nsacr, ARMCPU),
267
+ VMSTATE_END_OF_LIST()
271
+ VMSTATE_END_OF_LIST()
268
+ }
272
+ }
269
+};
273
+};
270
+
274
+
271
static bool m_needed(void *opaque)
275
static const VMStateDescription vmstate_m = {
272
{
276
.name = "cpu/m",
273
ARMCPU *cpu = opaque;
277
.version_id = 4,
274
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
278
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
275
#ifdef TARGET_AARCH64
279
&vmstate_m_scr,
276
&vmstate_sve,
280
&vmstate_m_other_sp,
277
#endif
281
&vmstate_m_v8m,
278
+ &vmstate_serror,
282
+ &vmstate_m_fp,
279
NULL
283
NULL
280
}
284
}
281
};
285
};
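Both of the new vmstate blocks above use QEMU's optional-subsection mechanism: a subsection is only put on the migration wire when its .needed callback returns true, so streams from builds without the feature still load cleanly. A minimal sketch of the pattern, using a hypothetical MyDevState device (the names here are illustrative, not part of either patch):

    /* Sketch only: MyDevState, feature_enabled and feature_reg are
     * hypothetical; the .needed wiring is the real idiom.
     */
    static bool myfeature_needed(void *opaque)
    {
        MyDevState *s = opaque;

        /* Send the subsection only when there is state worth sending */
        return s->feature_enabled;
    }

    static const VMStateDescription vmstate_myfeature = {
        .name = "mydev/myfeature",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = myfeature_needed,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(feature_reg, MyDevState),
            VMSTATE_END_OF_LIST()
        }
    };

An incoming stream that lacks the subsection simply leaves the fields at their reset values, which is why both patches gate .needed on the relevant state being live.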
282
--
286
--
283
2.19.1
287
2.20.1
284
288
285
289
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The only "system register" that M-profile floating point exposes
2
via the VMRS/VMSR instructions is FPSCR, and it does not have
3
the odd special case for rd==15. Add a check to ensure we only
4
expose FPSCR.
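As a rough sketch (illustrative, using the architectural field positions rather than the translate.c decode variables), the new check boils down to:

    /* Sketch: is this M-profile VMRS/VMSR encoding one we accept? */
    static bool m_vfp_sysreg_ok(uint32_t insn)
    {
        bool is_sysreg = extract32(insn, 21, 1); /* VMRS/VMSR, not VMOV */
        int rt = extract32(insn, 12, 4);         /* Arm core register */
        int sysreg = extract32(insn, 16, 4);     /* VFP system register */

        /* Only FPSCR exists; Rt == 15 is UNPREDICTABLE, we choose UNDEF */
        return !is_sysreg || (rt != 15 && sysreg == ARM_VFP_FPSCR);
    }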
2
5
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4
Message-id: 20181011205206.3552-13-richard.henderson@linaro.org
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20190416125744.27770-5-peter.maydell@linaro.org
7
---
9
---
8
target/arm/translate.c | 70 +++++++++++++++++++++++++++++-------------
10
target/arm/translate.c | 19 +++++++++++++++++--
9
1 file changed, 48 insertions(+), 22 deletions(-)
11
1 file changed, 17 insertions(+), 2 deletions(-)
10
12
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
13
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
15
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
16
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
17
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
16
size--;
18
}
17
}
18
shift = (insn >> 16) & ((1 << (3 + size)) - 1);
19
- /* To avoid excessive duplication of ops we implement shift
20
- by immediate using the variable shift operations. */
21
if (op < 8) {
22
/* Shift by immediate:
23
VSHR, VSRA, VRSHR, VRSRA, VSRI, VSHL, VQSHL, VQSHLU. */
24
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
25
}
19
}
26
/* Right shifts are encoded as N - shift, where N is the
20
} else { /* !dp */
27
element size in bits. */
21
+ bool is_sysreg;
28
- if (op <= 4)
22
+
29
+ if (op <= 4) {
23
if ((insn & 0x6f) != 0x00)
30
shift = shift - (1 << (size + 3));
24
return 1;
25
rn = VFP_SREG_N(insn);
26
+
27
+ is_sysreg = extract32(insn, 21, 1);
28
+
29
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
30
+ /*
31
+ * The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
32
+ * Writes to R15 are UNPREDICTABLE; we choose to undef.
33
+ */
34
+ if (is_sysreg && (rd == 15 || (rn >> 1) != ARM_VFP_FPSCR)) {
35
+ return 1;
36
+ }
31
+ }
37
+ }
32
+
38
+
33
+ switch (op) {
39
if (insn & ARM_CP_RW_BIT) {
34
+ case 0: /* VSHR */
40
/* vfp->arm */
35
+ /* Right shift comes here negative. */
41
- if (insn & (1 << 21)) {
36
+ shift = -shift;
42
+ if (is_sysreg) {
37
+ /* Shifts larger than the element size are architecturally
43
/* system register */
38
+ * valid. Unsigned results in all zeros; signed results
44
rn >>= 1;
39
+ * in all sign bits.
45
40
+ */
46
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
41
+ if (!u) {
47
}
42
+ tcg_gen_gvec_sari(size, rd_ofs, rm_ofs,
43
+ MIN(shift, (8 << size) - 1),
44
+ vec_size, vec_size);
45
+ } else if (shift >= 8 << size) {
46
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
47
+ } else {
48
+ tcg_gen_gvec_shri(size, rd_ofs, rm_ofs, shift,
49
+ vec_size, vec_size);
50
+ }
51
+ return 0;
52
+
53
+ case 5: /* VSHL, VSLI */
54
+ if (!u) { /* VSHL */
55
+ /* Shifts larger than the element size are
56
+ * architecturally valid and result in zero.
57
+ */
58
+ if (shift >= 8 << size) {
59
+ tcg_gen_gvec_dup8i(rd_ofs, vec_size, vec_size, 0);
60
+ } else {
61
+ tcg_gen_gvec_shli(size, rd_ofs, rm_ofs, shift,
62
+ vec_size, vec_size);
63
+ }
64
+ return 0;
65
+ }
66
+ break;
67
+ }
68
+
69
if (size == 3) {
70
count = q + 1;
71
} else {
48
} else {
72
count = q ? 4: 2;
49
/* arm->vfp */
73
}
50
- if (insn & (1 << 21)) {
74
- switch (size) {
51
+ if (is_sysreg) {
75
- case 0:
52
rn >>= 1;
76
- imm = (uint8_t) shift;
53
/* system register */
77
- imm |= imm << 8;
54
switch (rn) {
78
- imm |= imm << 16;
79
- break;
80
- case 1:
81
- imm = (uint16_t) shift;
82
- imm |= imm << 16;
83
- break;
84
- case 2:
85
- case 3:
86
- imm = shift;
87
- break;
88
- default:
89
- abort();
90
- }
91
+
92
+ /* To avoid excessive duplication of ops we implement shift
93
+ * by immediate using the variable shift operations.
94
+ */
95
+ imm = dup_const(size, shift);
96
97
for (pass = 0; pass < count; pass++) {
98
if (size == 3) {
99
neon_load_reg64(cpu_V0, rm + pass);
100
tcg_gen_movi_i64(cpu_V1, imm);
101
switch (op) {
102
- case 0: /* VSHR */
103
case 1: /* VSRA */
104
if (u)
105
gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
106
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
107
cpu_V0, cpu_V1);
108
}
109
break;
110
+ default:
111
+ g_assert_not_reached();
112
}
113
if (op == 1 || op == 3) {
114
/* Accumulate. */
115
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
116
tmp2 = tcg_temp_new_i32();
117
tcg_gen_movi_i32(tmp2, imm);
118
switch (op) {
119
- case 0: /* VSHR */
120
case 1: /* VSRA */
121
GEN_NEON_INTEGER_OP(shl);
122
break;
123
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
124
case 7: /* VQSHL */
125
GEN_NEON_INTEGER_OP_ENV(qshl);
126
break;
127
+ default:
128
+ g_assert_not_reached();
129
}
130
tcg_temp_free_i32(tmp2);
131
132
--
55
--
133
2.19.1
56
2.20.1
134
57
135
58
1
For traps of FP/SIMD instructions to AArch32 Hyp mode, the syndrome
1
Like AArch64, M-profile floating point has no FPEXC enable
2
provided in HSR has more information than is reported to AArch64.
2
bit to gate floating point, so always set the VFPEN TB flag.
3
Specifically, there are extra fields TA and coproc which indicate
3
4
whether the trapped instruction was FP or SIMD. Add this extra
4
M-profile also has CPACR and NSACR similar to A-profile;
5
information to the syndromes we construct, and mask it out when
5
they behave slightly differently:
6
taking the exception to AArch64.
6
* the CPACR is banked between Secure and Non-Secure
7
* if the NSACR forces a trap then this is taken to
8
the Secure state, not the Non-Secure state
9
10
Honour the CPACR and NSACR settings. The NSACR handling
11
requires us to borrow the exception.target_el field
12
(usually meaningless for M profile) to distinguish the
13
NOCP UsageFault taken to Secure state from the more
14
usual fault taken to the current security state.
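Condensing the helper.c changes below, the access check reads roughly like this (sketch only; the real code lives in fp_exception_el() and the new v7m_cpacr_pass()):

    /* Sketch: which state takes the fault for an M-profile FP access? */
    static int m_fp_excp_target(CPUARMState *env, int cur_el)
    {
        /* CPACR.CP10: 0b00/0b10 deny, 0b01 privileged-only, 0b11 allow */
        if (!v7m_cpacr_pass(env, env->v7m.secure, cur_el != 0)) {
            return 1;  /* NOCP UsageFault in the current security state */
        }
        if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure &&
            !extract32(env->v7m.nsacr, 10, 1)) {
            return 3;  /* NSACR.CP10 clear: fault goes to Secure state */
        }
        return 0;      /* FP access permitted */
    }

The 1/3 return values reuse the A-profile "target EL" convention, which is what lets exception.target_el distinguish the Secure-targeted NOCP fault from the usual one.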
7
15
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20181012144235.19646-11-peter.maydell@linaro.org
18
Message-id: 20190416125744.27770-6-peter.maydell@linaro.org
11
---
19
---
12
target/arm/internals.h | 14 +++++++++++++-
20
target/arm/helper.c | 55 +++++++++++++++++++++++++++++++++++++++---
13
target/arm/helper.c | 9 +++++++++
21
target/arm/translate.c | 10 ++++++--
14
target/arm/translate.c | 8 ++++----
22
2 files changed, 60 insertions(+), 5 deletions(-)
15
3 files changed, 26 insertions(+), 5 deletions(-)
16
23
17
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/internals.h
20
+++ b/target/arm/internals.h
21
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
22
* few cases the value in HSR for exceptions taken to AArch32 Hyp
23
* mode differs slightly, and we fix this up when populating HSR in
24
* arm_cpu_do_interrupt_aarch32_hyp().
25
+ * The exception is FP/SIMD access traps -- these report extra information
26
+ * when taking an exception to AArch32. For those we include the extra coproc
27
+ * and TA fields, and mask them out when taking the exception to AArch64.
28
*/
29
static inline uint32_t syn_uncategorized(void)
30
{
31
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
32
33
static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
34
{
35
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
36
return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
37
| (is_16bit ? 0 : ARM_EL_IL)
38
- | (cv << 24) | (cond << 20);
39
+ | (cv << 24) | (cond << 20) | 0xa;
40
+}
41
+
42
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
43
+{
44
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
45
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
46
+ | (is_16bit ? 0 : ARM_EL_IL)
47
+ | (cv << 24) | (cond << 20) | (1 << 5);
48
}
49
50
static inline uint32_t syn_sve_access_trap(void)
51
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
52
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
53
--- a/target/arm/helper.c
26
--- a/target/arm/helper.c
54
+++ b/target/arm/helper.c
27
+++ b/target/arm/helper.c
55
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
28
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
56
case EXCP_HVC:
29
return target_el;
57
case EXCP_HYP_TRAP:
30
}
58
case EXCP_SMC:
31
59
+ if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
32
+/*
60
+ /*
33
+ * Return true if the v7M CPACR permits access to the FPU for the specified
61
+ * QEMU internal FP/SIMD syndromes from AArch32 include the
34
+ * security state and privilege level.
62
+ * TA and coproc fields which are only exposed if the exception
35
+ */
63
+ * is taken to AArch32 Hyp mode. Mask them out to get a valid
36
+static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
64
+ * AArch64 format syndrome.
37
+{
65
+ */
38
+ switch (extract32(env->v7m.cpacr[is_secure], 20, 2)) {
66
+ env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
39
+ case 0:
40
+ case 2: /* UNPREDICTABLE: we treat like 0 */
41
+ return false;
42
+ case 1:
43
+ return is_priv;
44
+ case 3:
45
+ return true;
46
+ default:
47
+ g_assert_not_reached();
48
+ }
49
+}
50
+
51
static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
52
ARMMMUIdx mmu_idx, bool ignfault)
53
{
54
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
55
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
56
break;
57
case EXCP_NOCP:
58
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
59
- env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
60
+ {
61
+ /*
62
+ * NOCP might be directed to something other than the current
63
+ * security state if this fault is because of NSACR; we indicate
64
+ * the target security state using exception.target_el.
65
+ */
66
+ int target_secstate;
67
+
68
+ if (env->exception.target_el == 3) {
69
+ target_secstate = M_REG_S;
70
+ } else {
71
+ target_secstate = env->v7m.secure;
67
+ }
72
+ }
68
env->cp15.esr_el[new_el] = env->exception.syndrome;
73
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, target_secstate);
74
+ env->v7m.cfsr[target_secstate] |= R_V7M_CFSR_NOCP_MASK;
69
break;
75
break;
70
case EXCP_IRQ:
76
+ }
77
case EXCP_INVSTATE:
78
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
79
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
80
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
81
return 0;
82
}
83
84
+ if (arm_feature(env, ARM_FEATURE_M)) {
85
+ /* CPACR can cause a NOCP UsageFault taken to current security state */
86
+ if (!v7m_cpacr_pass(env, env->v7m.secure, cur_el != 0)) {
87
+ return 1;
88
+ }
89
+
90
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) && !env->v7m.secure) {
91
+ if (!extract32(env->v7m.nsacr, 10, 1)) {
92
+ /* FP insns cause a NOCP UsageFault taken to Secure */
93
+ return 3;
94
+ }
95
+ }
96
+
97
+ return 0;
98
+ }
99
+
100
/* The CPACR controls traps to EL1, or PL1 if we're 32 bit:
101
* 0, 2 : trap EL0 and EL1/PL1 accesses
102
* 1 : trap only EL0 accesses
103
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
104
flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
105
flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
106
if (env->vfp.xregs[ARM_VFP_FPEXC] & (1 << 30)
107
- || arm_el_is_aa64(env, 1)) {
108
+ || arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
109
flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
110
}
111
flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
112
diff --git a/target/arm/translate.c b/target/arm/translate.c
72
index XXXXXXX..XXXXXXX 100644
113
index XXXXXXX..XXXXXXX 100644
73
--- a/target/arm/translate.c
114
--- a/target/arm/translate.c
74
+++ b/target/arm/translate.c
115
+++ b/target/arm/translate.c
75
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
116
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
117
* for attempts to execute invalid vfp/neon encodings with FP disabled.
76
*/
118
*/
77
if (s->fp_excp_el) {
119
if (s->fp_excp_el) {
78
gen_exception_insn(s, 4, EXCP_UDEF,
120
- gen_exception_insn(s, 4, EXCP_UDEF,
79
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
121
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
80
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
122
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
123
+ gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
124
+ s->fp_excp_el);
125
+ } else {
126
+ gen_exception_insn(s, 4, EXCP_UDEF,
127
+ syn_fp_access_trap(1, 0xe, false),
128
+ s->fp_excp_el);
129
+ }
81
return 0;
130
return 0;
82
}
131
}
83
132
84
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
85
*/
86
if (s->fp_excp_el) {
87
gen_exception_insn(s, 4, EXCP_UDEF,
88
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
89
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
90
return 0;
91
}
92
93
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
94
95
if (s->fp_excp_el) {
96
gen_exception_insn(s, 4, EXCP_UDEF,
97
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
98
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
99
return 0;
100
}
101
if (!s->vfp_enabled) {
102
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
103
104
if (s->fp_excp_el) {
105
gen_exception_insn(s, 4, EXCP_UDEF,
106
- syn_fp_access_trap(1, 0xe, false), s->fp_excp_el);
107
+ syn_simd_access_trap(1, 0xe, false), s->fp_excp_el);
108
return 0;
109
}
110
if (!s->vfp_enabled) {
111
--
133
--
112
2.19.1
134
2.20.1
113
135
114
136
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Correct the decode of the M-profile "coprocessor and
2
floating-point instructions" space:
3
* op0 == 0b11 is always unallocated
4
* if the CPU has an FPU then all insns with op1 == 0b101
5
are floating point and go to disas_vfp_insn()
2
6
3
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
For the moment we leave VLLDM and VLSTM as NOPs; in
4
Message-id: 20181011205206.3552-10-richard.henderson@linaro.org
8
a later commit we will fill in the proper implementation
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
for the case where an FPU is present.
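Schematically the new decode order can be summarised as below (a sketch with a hypothetical classifier function; the real code also checks VLLDM/VLSTM before the FP test, since those bypass the enable checks):

    /* Sketch: classify an M-profile coprocessor-space encoding */
    typedef enum { CP_UNALLOCATED, CP_FP, CP_NOCP } MCpClass;

    static MCpClass m_cp_classify(uint32_t insn, bool have_fpu)
    {
        if (extract32(insn, 24, 2) == 3) {
            return CP_UNALLOCATED;      /* op0 == 0b11 */
        }
        if (have_fpu && ((insn >> 8) & 0xe) == 0xa) {
            return CP_FP;               /* op1 == 0b101x: disas_vfp_insn() */
        }
        return CP_NOCP;                 /* everything else: NOCP fault */
    }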
10
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20190416125744.27770-7-peter.maydell@linaro.org
7
---
14
---
8
target/arm/translate.c | 29 ++++++++++-------------------
15
target/arm/translate.c | 26 ++++++++++++++++++++++----
9
1 file changed, 10 insertions(+), 19 deletions(-)
16
1 file changed, 22 insertions(+), 4 deletions(-)
10
17
11
diff --git a/target/arm/translate.c b/target/arm/translate.c
18
diff --git a/target/arm/translate.c b/target/arm/translate.c
12
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/translate.c
20
--- a/target/arm/translate.c
14
+++ b/target/arm/translate.c
21
+++ b/target/arm/translate.c
15
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
22
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
23
case 6: case 7: case 14: case 15:
24
/* Coprocessor. */
25
if (arm_dc_feature(s, ARM_FEATURE_M)) {
26
- /* We don't currently implement M profile FP support,
27
- * so this entire space should give a NOCP fault, with
28
- * the exception of the v8M VLLDM and VLSTM insns, which
29
- * must be NOPs in Secure state and UNDEF in Nonsecure state.
30
+ /* 0b111x_11xx_xxxx_xxxx_xxxx_xxxx_xxxx_xxxx */
31
+ if (extract32(insn, 24, 2) == 3) {
32
+ goto illegal_op; /* op0 = 0b11 : unallocated */
33
+ }
34
+
35
+ /*
36
+ * Decode VLLDM and VLSTM first: these are nonstandard because:
37
+ * * if there is no FPU then these insns must NOP in
38
+ * Secure state and UNDEF in Nonsecure state
39
+ * * if there is an FPU then these insns do not have
40
+ * the usual behaviour that disas_vfp_insn() provides of
41
+ * being controlled by CPACR/NSACR enable bits or the
42
+ * lazy-stacking logic.
43
*/
44
if (arm_dc_feature(s, ARM_FEATURE_V8) &&
45
(insn & 0xffa00f00) == 0xec200a00) {
46
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
47
/* Just NOP since FP support is not implemented */
16
break;
48
break;
17
}
49
}
18
return 0;
50
+ if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
51
+ ((insn >> 8) & 0xe) == 10) {
52
+ /* FP, and the CPU supports it */
53
+ if (disas_vfp_insn(s, insn)) {
54
+ goto illegal_op;
55
+ }
56
+ break;
57
+ }
19
+
58
+
20
+ case NEON_3R_VADD_VSUB:
59
/* All other insns: NOCP */
21
+ if (u) {
60
gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
22
+ tcg_gen_gvec_sub(size, rd_ofs, rn_ofs, rm_ofs,
61
default_exception_el(s));
23
+ vec_size, vec_size);
24
+ } else {
25
+ tcg_gen_gvec_add(size, rd_ofs, rn_ofs, rm_ofs,
26
+ vec_size, vec_size);
27
+ }
28
+ return 0;
29
}
30
if (size == 3) {
31
/* 64-bit element instructions. */
32
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
33
cpu_V1, cpu_V0);
34
}
35
break;
36
- case NEON_3R_VADD_VSUB:
37
- if (u) {
38
- tcg_gen_sub_i64(CPU_V001);
39
- } else {
40
- tcg_gen_add_i64(CPU_V001);
41
- }
42
- break;
43
default:
44
abort();
45
}
46
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
47
tmp2 = neon_load_reg(rd, pass);
48
gen_neon_add(size, tmp, tmp2);
49
break;
50
- case NEON_3R_VADD_VSUB:
51
- if (!u) { /* VADD */
52
- gen_neon_add(size, tmp, tmp2);
53
- } else { /* VSUB */
54
- switch (size) {
55
- case 0: gen_helper_neon_sub_u8(tmp, tmp, tmp2); break;
56
- case 1: gen_helper_neon_sub_u16(tmp, tmp, tmp2); break;
57
- case 2: tcg_gen_sub_i32(tmp, tmp, tmp2); break;
58
- default: abort();
59
- }
60
- }
61
- break;
62
case NEON_3R_VTST_VCEQ:
63
if (!u) { /* VTST */
64
switch (size) {
65
--
62
--
66
2.19.1
63
2.20.1
67
64
68
65
1
From: Richard Henderson <richard.henderson@linaro.org>
1
If the floating point extension is present, then the SG instruction
2
must clear the CONTROL_S.SFPA bit. Implement this.
2
3
3
Since QEMU does not implement ASIDs, changes to the ASID must flush the
4
(On a no-FPU system the bit will always be zero, so we don't need
4
TLB. However, if the ASID does not change, there is no reason to flush.
5
to make the clearing of the bit conditional on ARM_FEATURE_VFP.)
5
6
6
In testing a boot of the Ubuntu installer to the first menu, this reduces
7
the number of flushes by 30%, or nearly 600k instances.
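The comparison itself is tiny: for a 64-bit TTBR write the ASID occupies bits [63:48], so XOR-ing the old and new values and testing that field is all that is needed. A self-contained sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: does a 64-bit TTBR write change the ASID (bits [63:48])? */
    static bool ttbr_asid_changed(uint64_t old_val, uint64_t new_val)
    {
        return ((old_val ^ new_val) >> 48) != 0;
    }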
8
9
Reviewed-by: Aaron Lindsay <aaron@os.amperecomputing.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
13
Message-id: 20181019015617.22583-3-richard.henderson@linaro.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-8-peter.maydell@linaro.org
15
---
10
---
16
target/arm/helper.c | 8 +++-----
11
target/arm/helper.c | 1 +
17
1 file changed, 3 insertions(+), 5 deletions(-)
12
1 file changed, 1 insertion(+)
18
13
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_el1_write(CPUARMState *env, const ARMCPRegInfo *ri,
18
@@ -XXX,XX +XXX,XX @@ static bool v7m_handle_execute_nsc(ARMCPU *cpu)
24
static void vmsa_ttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
19
qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
25
uint64_t value)
20
", executing it\n", env->regs[15]);
26
{
21
env->regs[14] &= ~1;
27
- /* 64 bit accesses to the TTBRs can change the ASID and so we
22
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
28
- * must flush the TLB.
23
switch_v7m_security_state(env, true);
29
- */
24
xpsr_write(env, 0, XPSR_IT);
30
- if (cpreg_field_is_64bit(ri)) {
25
env->regs[15] += 4;
31
+ /* If the ASID changes (with a 64-bit write), we must flush the TLB. */
32
+ if (cpreg_field_is_64bit(ri) &&
33
+ extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
34
ARMCPU *cpu = arm_env_get_cpu(env);
35
-
36
tlb_flush(CPU(cpu));
37
}
38
raw_write(env, ri, value);
39
--
26
--
40
2.19.1
27
2.20.1
41
28
42
29
1
Create and use a utility function to extract the EC field
1
The M-profile CONTROL register has two bits -- SFPA and FPCA --
2
from a syndrome, rather than open-coding the shift.
2
which relate to floating-point support, and should be RES0 otherwise.
3
Handle them correctly in the MSR/MRS register access code.
4
Neither is banked between security states, so they are stored
5
in v7m.control[M_REG_S] regardless of current security state.
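In outline, the MRS read path for CONTROL then becomes something like this (a sketch of the logic in the hunk below, not a drop-in function):

    /* Sketch: compose the CONTROL value seen by MRS */
    static uint32_t v7m_read_control(CPUARMState *env)
    {
        uint32_t value = env->v7m.control[env->v7m.secure];

        if (!env->v7m.secure) {
            /* SFPA is RAZ/WI from NS; FPCA lives in the M_REG_S bank */
            value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
        }
        return value;
    }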
3
6
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181012144235.19646-9-peter.maydell@linaro.org
9
Message-id: 20190416125744.27770-9-peter.maydell@linaro.org
7
---
10
---
8
target/arm/internals.h | 5 +++++
11
target/arm/helper.c | 57 ++++++++++++++++++++++++++++++++++++++-------
9
target/arm/helper.c | 4 ++--
12
1 file changed, 49 insertions(+), 8 deletions(-)
10
target/arm/kvm64.c | 2 +-
11
target/arm/op_helper.c | 2 +-
12
4 files changed, 9 insertions(+), 4 deletions(-)
13
13
14
diff --git a/target/arm/internals.h b/target/arm/internals.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/internals.h
17
+++ b/target/arm/internals.h
18
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
19
#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
20
#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
21
22
+static inline uint32_t syn_get_ec(uint32_t syn)
23
+{
24
+ return syn >> ARM_EL_EC_SHIFT;
25
+}
26
+
27
/* Utility functions for constructing various kinds of syndrome value.
28
* Note that in general we follow the AArch64 syndrome values; in a
29
* few cases the value in HSR for exceptions taken to AArch32 Hyp
30
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
31
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
32
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
33
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
34
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
18
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
35
uint32_t moe;
19
return xpsr_read(env) & mask;
36
20
break;
37
/* If this is a debug exception we must update the DBGDSCR.MOE bits */
21
case 20: /* CONTROL */
38
- switch (env->exception.syndrome >> ARM_EL_EC_SHIFT) {
22
- return env->v7m.control[env->v7m.secure];
39
+ switch (syn_get_ec(env->exception.syndrome)) {
23
+ {
40
case EC_BREAKPOINT:
24
+ uint32_t value = env->v7m.control[env->v7m.secure];
41
case EC_BREAKPOINT_SAME_EL:
25
+ if (!env->v7m.secure) {
42
moe = 1;
26
+ /* SFPA is RAZ/WI from NS; FPCA is stored in the M_REG_S bank */
43
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_interrupt(CPUState *cs)
27
+ value |= env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK;
44
if (qemu_loglevel_mask(CPU_LOG_INT)
28
+ }
45
&& !excp_is_internal(cs->exception_index)) {
29
+ return value;
46
qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
30
+ }
47
- env->exception.syndrome >> ARM_EL_EC_SHIFT,
31
case 0x94: /* CONTROL_NS */
48
+ syn_get_ec(env->exception.syndrome),
32
/* We have to handle this here because unprivileged Secure code
49
env->exception.syndrome);
33
* can read the NS CONTROL register.
34
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
35
if (!env->v7m.secure) {
36
return 0;
37
}
38
- return env->v7m.control[M_REG_NS];
39
+ return env->v7m.control[M_REG_NS] |
40
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK);
50
}
41
}
51
42
52
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
43
if (el == 0) {
53
index XXXXXXX..XXXXXXX 100644
44
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
54
--- a/target/arm/kvm64.c
45
*/
55
+++ b/target/arm/kvm64.c
46
uint32_t mask = extract32(maskreg, 8, 4);
56
@@ -XXX,XX +XXX,XX @@ int kvm_arch_remove_sw_breakpoint(CPUState *cs, struct kvm_sw_breakpoint *bp)
47
uint32_t reg = extract32(maskreg, 0, 8);
57
48
+ int cur_el = arm_current_el(env);
58
bool kvm_arm_handle_debug(CPUState *cs, struct kvm_debug_exit_arch *debug_exit)
49
59
{
50
- if (arm_current_el(env) == 0 && reg > 7) {
60
- int hsr_ec = debug_exit->hsr >> ARM_EL_EC_SHIFT;
51
- /* only xPSR sub-fields may be written by unprivileged */
61
+ int hsr_ec = syn_get_ec(debug_exit->hsr);
52
+ if (cur_el == 0 && reg > 7 && reg != 20) {
62
ARMCPU *cpu = ARM_CPU(cs);
53
+ /*
63
CPUClass *cc = CPU_GET_CLASS(cs);
54
+ * only xPSR sub-fields and CONTROL.SFPA may be written by
64
CPUARMState *env = &cpu->env;
55
+ * unprivileged code
65
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
56
+ */
66
index XXXXXXX..XXXXXXX 100644
57
return;
67
--- a/target/arm/op_helper.c
58
}
68
+++ b/target/arm/op_helper.c
59
69
@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
60
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
70
* (see DDI0478C.a D1.10.4)
61
env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
62
env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
63
}
64
+ /*
65
+ * SFPA is RAZ/WI from NS. FPCA is RO if NSACR.CP10 == 0,
66
+ * RES0 if the FPU is not present, and is stored in the S bank
67
+ */
68
+ if (arm_feature(env, ARM_FEATURE_VFP) &&
69
+ extract32(env->v7m.nsacr, 10, 1)) {
70
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
71
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
72
+ }
73
return;
74
case 0x98: /* SP_NS */
75
{
76
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
77
env->v7m.faultmask[env->v7m.secure] = val & 1;
78
break;
79
case 20: /* CONTROL */
80
- /* Writing to the SPSEL bit only has an effect if we are in
81
+ /*
82
+ * Writing to the SPSEL bit only has an effect if we are in
83
* thread mode; other bits can be updated by any privileged code.
84
* write_v7m_control_spsel() deals with updating the SPSEL bit in
85
* env->v7m.control, so we only need update the others.
86
* For v7M, we must just ignore explicit writes to SPSEL in handler
87
* mode; for v8M the write is permitted but will have no effect.
88
+ * All these bits are writes-ignored from non-privileged code,
89
+ * except for SFPA.
71
*/
90
*/
72
target_el = 2;
91
- if (arm_feature(env, ARM_FEATURE_V8) ||
73
- if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
92
- !arm_v7m_is_handler_mode(env)) {
74
+ if (syn_get_ec(syndrome) == EC_ADVSIMDFPACCESSTRAP) {
93
+ if (cur_el > 0 && (arm_feature(env, ARM_FEATURE_V8) ||
75
syndrome = syn_uncategorized();
94
+ !arm_v7m_is_handler_mode(env))) {
95
write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
76
}
96
}
77
}
97
- if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
98
+ if (cur_el > 0 && arm_feature(env, ARM_FEATURE_M_MAIN)) {
99
env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
100
env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
101
}
102
+ if (arm_feature(env, ARM_FEATURE_VFP)) {
103
+ /*
104
+ * SFPA is RAZ/WI from NS or if no FPU.
105
+ * FPCA is RO if NSACR.CP10 == 0, RES0 if the FPU is not present.
106
+ * Both are stored in the S bank.
107
+ */
108
+ if (env->v7m.secure) {
109
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
110
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_SFPA_MASK;
111
+ }
112
+ if (cur_el > 0 &&
113
+ (env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_SECURITY) ||
114
+ extract32(env->v7m.nsacr, 10, 1))) {
115
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
116
+ env->v7m.control[M_REG_S] |= val & R_V7M_CONTROL_FPCA_MASK;
117
+ }
118
+ }
119
break;
120
default:
121
bad_reg:
78
--
122
--
79
2.19.1
123
2.20.1
80
124
81
125
1
The A/I/F bits in ISR_EL1 should track the virtual interrupt
1
Currently the code in v7m_push_stack() which detects a violation
2
status, not the physical interrupt status, if the associated
2
of the v8M stack limit simply returns early when it detects one. This
3
HCR_EL2.AMO/IMO/FMO bit is set. Implement this, rather than
3
is OK for the current integer-only code, but won't work for the
4
always showing the physical interrupt status.
4
floating point handling we're about to add. We need to continue
5
5
executing the rest of the function so that we check for other
6
We don't currently implement anything to do with external
6
exceptions like not having permission to use the FPU and so
7
aborts, so this applies only to the I and F bits (though it
7
that we correctly set the FPCCR state if we are doing lazy
8
ought to be possible for the outer guest to present a virtual
8
stacking. Refactor to avoid the early return.
9
external abort to the inner guest, even if QEMU doesn't
10
emulate physical external aborts, so there is missing
11
functionality in this area).
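Per bit, the selection is simply "virtual line if the HCR_EL2 routing bit is set, physical line otherwise". A sketch for the I bit (F is the same shape with FMO/VFIQ/FIQ):

    /* Sketch: ISR_EL1.I selects between virtual and physical IRQ status */
    static uint64_t isr_i_bit(CPUARMState *env, CPUState *cs)
    {
        if (arm_hcr_el2_imo(env)) {
            return (cs->interrupt_request & CPU_INTERRUPT_VIRQ) ? CPSR_I : 0;
        }
        return (cs->interrupt_request & CPU_INTERRUPT_HARD) ? CPSR_I : 0;
    }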
12
9
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20181012144235.19646-6-peter.maydell@linaro.org
12
Message-id: 20190416125744.27770-10-peter.maydell@linaro.org
16
---
13
---
17
target/arm/helper.c | 22 ++++++++++++++++++----
14
target/arm/helper.c | 23 ++++++++++++++++++-----
18
1 file changed, 18 insertions(+), 4 deletions(-)
15
1 file changed, 18 insertions(+), 5 deletions(-)
19
16
20
diff --git a/target/arm/helper.c b/target/arm/helper.c
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
21
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/helper.c
19
--- a/target/arm/helper.c
23
+++ b/target/arm/helper.c
20
+++ b/target/arm/helper.c
24
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
21
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
25
CPUState *cs = ENV_GET_CPU(env);
22
* should ignore further stack faults trying to process
26
uint64_t ret = 0;
23
* that derived exception.)
27
24
*/
28
- if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
25
- bool stacked_ok;
29
- ret |= CPSR_I;
26
+ bool stacked_ok = true, limitviol = false;
30
+ if (arm_hcr_el2_imo(env)) {
27
CPUARMState *env = &cpu->env;
31
+ if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
28
uint32_t xpsr = xpsr_read(env);
32
+ ret |= CPSR_I;
29
uint32_t frameptr = env->regs[13];
33
+ }
30
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
34
+ } else {
31
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
35
+ if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
32
env->v7m.secure);
36
+ ret |= CPSR_I;
33
env->regs[13] = limit;
37
+ }
34
- return true;
35
+ /*
36
+ * We won't try to perform any further memory accesses but
37
+ * we must continue through the following code to check for
38
+ * permission faults during FPU state preservation, and we
39
+ * must update FPCCR if lazy stacking is enabled.
40
+ */
41
+ limitviol = true;
42
+ stacked_ok = false;
43
}
38
}
44
}
39
- if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
45
40
- ret |= CPSR_F;
46
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
41
+
47
* (which may be taken in preference to the one we started with
42
+ if (arm_hcr_el2_fmo(env)) {
48
* if it has higher priority).
43
+ if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
49
*/
44
+ ret |= CPSR_F;
50
- stacked_ok =
45
+ }
51
+ stacked_ok = stacked_ok &&
46
+ } else {
52
v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
47
+ if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
53
v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
48
+ ret |= CPSR_F;
54
v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
49
+ }
55
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
50
}
56
v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
51
+
57
v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
52
/* External aborts are not possible in QEMU so A bit is always clear */
58
53
return ret;
59
- /* Update SP regardless of whether any of the stack accesses failed. */
60
- env->regs[13] = frameptr;
61
+ /*
62
+ * If we broke a stack limit then SP was already updated earlier;
63
+ * otherwise we update SP regardless of whether any of the stack
64
+ * accesses failed or we took some other kind of fault.
65
+ */
66
+ if (!limitviol) {
67
+ env->regs[13] = frameptr;
68
+ }
69
70
return !stacked_ok;
54
}
71
}
55
--
72
--
56
2.19.1
73
2.20.1
57
74
58
75
1
For the v7 version of the Arm architecture, the IL bit in
1
Handle floating point registers in exception entry.
2
syndrome register values where the field is not valid was
2
This corresponds to the FP-specific parts of the pseudocode
3
defined to be UNK/SBZP. In v8 this is RES1, which is what
3
functions ActivateException() and PushStack().
4
QEMU currently implements. Handle the desired v7 behaviour
5
by squashing the IL bit for the affected cases:
6
* EC == EC_UNCATEGORIZED
7
* prefetch aborts
8
* data aborts where ISV is 0
9
4
10
(The fourth case listed in the v8 Arm ARM DDI 0487C.a in
5
We defer the code corresponding to UpdateFPCCR() to a later patch.
11
section G7.2.70, "illegal state exception", can't happen
12
on a v7 CPU.)
13
14
This deals with a corner case noted in a comment.
15
6
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
Message-id: 20181012144235.19646-10-peter.maydell@linaro.org
9
Message-id: 20190416125744.27770-11-peter.maydell@linaro.org
19
---
10
---
20
target/arm/internals.h | 7 ++-----
11
target/arm/helper.c | 98 +++++++++++++++++++++++++++++++++++++++++++--
21
target/arm/helper.c | 13 +++++++++++++
12
1 file changed, 95 insertions(+), 3 deletions(-)
22
2 files changed, 15 insertions(+), 5 deletions(-)
23
13
24
diff --git a/target/arm/internals.h b/target/arm/internals.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/internals.h
27
+++ b/target/arm/internals.h
28
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_get_ec(uint32_t syn)
29
/* Utility functions for constructing various kinds of syndrome value.
30
* Note that in general we follow the AArch64 syndrome values; in a
31
* few cases the value in HSR for exceptions taken to AArch32 Hyp
32
- * mode differs slightly, so if we ever implemented Hyp mode then the
33
- * syndrome value would need some massaging on exception entry.
34
- * (One example of this is that AArch64 defaults to IL bit set for
35
- * exceptions which don't specifically indicate information about the
36
- * trapping instruction, whereas AArch32 defaults to IL bit clear.)
37
+ * mode differs slightly, and we fix this up when populating HSR in
38
+ * arm_cpu_do_interrupt_aarch32_hyp().
39
*/
40
static inline uint32_t syn_uncategorized(void)
41
{
42
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
43
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/helper.c
16
--- a/target/arm/helper.c
45
+++ b/target/arm/helper.c
17
+++ b/target/arm/helper.c
46
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
18
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
19
switch_v7m_security_state(env, targets_secure);
20
write_v7m_control_spsel(env, 0);
21
arm_clear_exclusive(env);
22
+ /* Clear SFPA and FPCA (has no effect if no FPU) */
23
+ env->v7m.control[M_REG_S] &=
24
+ ~(R_V7M_CONTROL_FPCA_MASK | R_V7M_CONTROL_SFPA_MASK);
25
/* Clear IT bits */
26
env->condexec_bits = 0;
27
env->regs[14] = lr;
28
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
29
uint32_t xpsr = xpsr_read(env);
30
uint32_t frameptr = env->regs[13];
31
ARMMMUIdx mmu_idx = arm_mmu_idx(env);
32
+ uint32_t framesize;
33
+ bool nsacr_cp10 = extract32(env->v7m.nsacr, 10, 1);
34
+
35
+ if ((env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) &&
36
+ (env->v7m.secure || nsacr_cp10)) {
37
+ if (env->v7m.secure &&
38
+ env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK) {
39
+ framesize = 0xa8;
40
+ } else {
41
+ framesize = 0x68;
42
+ }
43
+ } else {
44
+ framesize = 0x20;
45
+ }
46
47
/* Align stack pointer if the guest wants that */
48
if ((frameptr & 4) &&
49
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
50
xpsr |= XPSR_SPREALIGN;
47
}
51
}
48
52
49
if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
53
- frameptr -= 0x20;
50
+ if (!arm_feature(env, ARM_FEATURE_V8)) {
54
+ xpsr &= ~XPSR_SFPA;
51
+ /*
55
+ if (env->v7m.secure &&
52
+ * QEMU syndrome values are v8-style. v7 has the IL bit
56
+ (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
53
+ * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
57
+ xpsr |= XPSR_SFPA;
54
+ * If this is a v7 CPU, squash the IL bit in those cases.
58
+ }
55
+ */
59
+
56
+ if (cs->exception_index == EXCP_PREFETCH_ABORT ||
60
+ frameptr -= framesize;
57
+ (cs->exception_index == EXCP_DATA_ABORT &&
61
58
+ !(env->exception.syndrome & ARM_EL_ISV)) ||
62
if (arm_feature(env, ARM_FEATURE_V8)) {
59
+ syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
63
uint32_t limit = v7m_sp_limit(env);
60
+ env->exception.syndrome &= ~ARM_EL_IL;
64
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
65
v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
66
v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
67
68
+ if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
69
+ /* FPU is active, try to save its registers */
70
+ bool fpccr_s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
71
+ bool lspact = env->v7m.fpccr[fpccr_s] & R_V7M_FPCCR_LSPACT_MASK;
72
+
73
+ if (lspact && arm_feature(env, ARM_FEATURE_M_SECURITY)) {
74
+ qemu_log_mask(CPU_LOG_INT,
75
+ "...SecureFault because LSPACT and FPCA both set\n");
76
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
77
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
78
+ } else if (!env->v7m.secure && !nsacr_cp10) {
79
+ qemu_log_mask(CPU_LOG_INT,
80
+ "...Secure UsageFault with CFSR.NOCP because "
81
+ "NSACR.CP10 prevents stacking FP regs\n");
82
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
83
+ env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
84
+ } else {
85
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
86
+ /* Lazy stacking disabled, save registers now */
87
+ int i;
88
+ bool cpacr_pass = v7m_cpacr_pass(env, env->v7m.secure,
89
+ arm_current_el(env) != 0);
90
+
91
+ if (stacked_ok && !cpacr_pass) {
92
+ /*
93
+ * Take UsageFault if CPACR forbids access. The pseudocode
94
+ * here does a full CheckCPEnabled() but we know the NSACR
95
+ * check can never fail as we have already handled that.
96
+ */
97
+ qemu_log_mask(CPU_LOG_INT,
98
+ "...UsageFault with CFSR.NOCP because "
99
+ "CPACR.CP10 prevents stacking FP regs\n");
100
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
101
+ env->v7m.secure);
102
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
103
+ stacked_ok = false;
104
+ }
105
+
106
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
107
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
108
+ uint32_t faddr = frameptr + 0x20 + 4 * i;
109
+ uint32_t slo = extract64(dn, 0, 32);
110
+ uint32_t shi = extract64(dn, 32, 32);
111
+
112
+ if (i >= 16) {
113
+ faddr += 8; /* skip the slot for the FPSCR */
114
+ }
115
+ stacked_ok = stacked_ok &&
116
+ v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
117
+ v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
118
+ }
119
+ stacked_ok = stacked_ok &&
120
+ v7m_stack_write(cpu, frameptr + 0x60,
121
+ vfp_get_fpscr(env), mmu_idx, false);
122
+ if (cpacr_pass) {
123
+ for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
124
+ *aa32_vfp_dreg(env, i / 2) = 0;
125
+ }
126
+ vfp_set_fpscr(env, 0);
127
+ }
128
+ } else {
129
+ /* Lazy stacking enabled, save necessary info to stack later */
130
+ /* TODO : equivalent of UpdateFPCCR() pseudocode */
61
+ }
131
+ }
62
+ }
132
+ }
63
env->cp15.esr_el[2] = env->exception.syndrome;
133
+ }
64
}
134
+
65
135
/*
136
* If we broke a stack limit then SP was already updated earlier;
137
* otherwise we update SP regardless of whether any of the stack
138
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
139
140
if (arm_feature(env, ARM_FEATURE_V8)) {
141
lr = R_V7M_EXCRET_RES1_MASK |
142
- R_V7M_EXCRET_DCRS_MASK |
143
- R_V7M_EXCRET_FTYPE_MASK;
144
+ R_V7M_EXCRET_DCRS_MASK;
145
/* The S bit indicates whether we should return to Secure
146
* or NonSecure (ie our current state).
147
* The ES bit indicates whether we're taking this exception
148
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
149
if (env->v7m.secure) {
150
lr |= R_V7M_EXCRET_S_MASK;
151
}
152
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
153
+ lr |= R_V7M_EXCRET_FTYPE_MASK;
154
+ }
155
} else {
156
lr = R_V7M_EXCRET_RES1_MASK |
157
R_V7M_EXCRET_S_MASK |
66
--
158
--
67
2.19.1
159
2.20.1
68
160
69
161
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Implement the code which updates the FPCCR register on an
2
exception entry where we are going to use lazy FP stacking.
3
We have to defer to the NVIC to determine whether the
4
various exceptions are currently ready or not.
2
5
3
Create struct ARMISARegisters, to be accessed during translation.
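For the FPCCR update, each XRDY bit records whether exception X could preempt right now, which only the NVIC knows; hence the new armv7m_nvic_get_ready_status() query declared below. Roughly, deriving one RDY bit looks like this (illustrative; the patch's update routine sets several bits this way):

    /* Sketch: derive FPCCR.HFRDY from the NVIC's view of HardFault */
    static void update_hfrdy(CPUARMState *env, bool is_secure)
    {
        bool hfrdy = armv7m_nvic_get_ready_status(env->nvic,
                                                  ARMV7M_EXCP_HARD, false);

        env->v7m.fpccr[is_secure] =
            FIELD_DP32(env->v7m.fpccr[is_secure], V7M_FPCCR, HFRDY, hfrdy);
    }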
4
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20181016223115.24100-2-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20190416125744.27770-12-peter.maydell@linaro.org
9
---
8
---
10
target/arm/cpu.h | 32 ++++----
9
target/arm/cpu.h | 14 +++++++++
11
hw/intc/armv7m_nvic.c | 12 +--
10
hw/intc/armv7m_nvic.c | 34 ++++++++++++++++++++++
12
target/arm/cpu.c | 178 +++++++++++++++++++++---------------------
11
target/arm/helper.c | 67 ++++++++++++++++++++++++++++++++++++++++++-
13
target/arm/cpu64.c | 70 ++++++++---------
12
3 files changed, 114 insertions(+), 1 deletion(-)
14
target/arm/helper.c | 28 +++----
15
5 files changed, 162 insertions(+), 158 deletions(-)
16
13
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
16
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
17
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
18
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque);
22
* ARMv7AR ARM Architecture Reference Manual. A reset_ prefix
19
* (Ignoring -1, this is the same as the RETTOBASE value before completion.)
23
* is used for reset values of non-constant registers; no reset_
20
*/
24
* prefix means a constant register.
21
int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure);
25
+ * Some of these registers are split out into a substructure that
22
+/**
26
+ * is shared with the translators to control the ISA.
23
+ * armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
27
*/
24
+ * @opaque: the NVIC
28
+ struct ARMISARegisters {
25
+ * @irq: the exception number to mark pending
29
+ uint32_t id_isar0;
26
+ * @secure: false for non-banked exceptions or for the nonsecure
30
+ uint32_t id_isar1;
27
+ * version of a banked exception, true for the secure version of a banked
31
+ uint32_t id_isar2;
28
+ * exception.
32
+ uint32_t id_isar3;
29
+ *
33
+ uint32_t id_isar4;
30
+ * Return whether an exception is "ready", i.e. whether the exception is
34
+ uint32_t id_isar5;
31
+ * enabled and is configured at a priority which would allow it to
35
+ uint32_t id_isar6;
32
+ * interrupt the current execution priority. This controls whether the
36
+ uint32_t mvfr0;
33
+ * RDY bit for it in the FPCCR is set.
37
+ uint32_t mvfr1;
34
+ */
38
+ uint32_t mvfr2;
35
+bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure);
39
+ uint64_t id_aa64isar0;
36
/**
40
+ uint64_t id_aa64isar1;
37
* armv7m_nvic_raw_execution_priority: return the raw execution priority
41
+ uint64_t id_aa64pfr0;
38
* @opaque: the NVIC
42
+ uint64_t id_aa64pfr1;
43
+ } isar;
44
uint32_t midr;
45
uint32_t revidr;
46
uint32_t reset_fpsid;
47
- uint32_t mvfr0;
48
- uint32_t mvfr1;
49
- uint32_t mvfr2;
50
uint32_t ctr;
51
uint32_t reset_sctlr;
52
uint32_t id_pfr0;
53
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
54
uint32_t id_mmfr2;
55
uint32_t id_mmfr3;
56
uint32_t id_mmfr4;
57
- uint32_t id_isar0;
58
- uint32_t id_isar1;
59
- uint32_t id_isar2;
60
- uint32_t id_isar3;
61
- uint32_t id_isar4;
62
- uint32_t id_isar5;
63
- uint32_t id_isar6;
64
- uint64_t id_aa64pfr0;
65
- uint64_t id_aa64pfr1;
66
uint64_t id_aa64dfr0;
67
uint64_t id_aa64dfr1;
68
uint64_t id_aa64afr0;
69
uint64_t id_aa64afr1;
70
- uint64_t id_aa64isar0;
71
- uint64_t id_aa64isar1;
72
uint64_t id_aa64mmfr0;
73
uint64_t id_aa64mmfr1;
74
uint32_t dbgdidr;
75
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
39
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
76
index XXXXXXX..XXXXXXX 100644
40
index XXXXXXX..XXXXXXX 100644
77
--- a/hw/intc/armv7m_nvic.c
41
--- a/hw/intc/armv7m_nvic.c
78
+++ b/hw/intc/armv7m_nvic.c
42
+++ b/hw/intc/armv7m_nvic.c
79
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
43
@@ -XXX,XX +XXX,XX @@ int armv7m_nvic_complete_irq(void *opaque, int irq, bool secure)
80
case 0xd5c: /* MMFR3. */
44
return ret;
81
return cpu->id_mmfr3;
82
case 0xd60: /* ISAR0. */
83
- return cpu->id_isar0;
84
+ return cpu->isar.id_isar0;
85
case 0xd64: /* ISAR1. */
86
- return cpu->id_isar1;
87
+ return cpu->isar.id_isar1;
88
case 0xd68: /* ISAR2. */
89
- return cpu->id_isar2;
90
+ return cpu->isar.id_isar2;
91
case 0xd6c: /* ISAR3. */
92
- return cpu->id_isar3;
93
+ return cpu->isar.id_isar3;
94
case 0xd70: /* ISAR4. */
95
- return cpu->id_isar4;
96
+ return cpu->isar.id_isar4;
97
case 0xd74: /* ISAR5. */
98
- return cpu->id_isar5;
99
+ return cpu->isar.id_isar5;
100
case 0xd78: /* CLIDR */
101
return cpu->clidr;
102
case 0xd7c: /* CTR */
103
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
104
index XXXXXXX..XXXXXXX 100644
105
--- a/target/arm/cpu.c
106
+++ b/target/arm/cpu.c
107
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
108
g_hash_table_foreach(cpu->cp_regs, cp_reg_check_reset, cpu);
109
110
env->vfp.xregs[ARM_VFP_FPSID] = cpu->reset_fpsid;
111
- env->vfp.xregs[ARM_VFP_MVFR0] = cpu->mvfr0;
112
- env->vfp.xregs[ARM_VFP_MVFR1] = cpu->mvfr1;
113
- env->vfp.xregs[ARM_VFP_MVFR2] = cpu->mvfr2;
114
+ env->vfp.xregs[ARM_VFP_MVFR0] = cpu->isar.mvfr0;
115
+ env->vfp.xregs[ARM_VFP_MVFR1] = cpu->isar.mvfr1;
116
+ env->vfp.xregs[ARM_VFP_MVFR2] = cpu->isar.mvfr2;
117
118
cpu->power_state = cpu->start_powered_off ? PSCI_OFF : PSCI_ON;
119
s->halted = cpu->start_powered_off;
120
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
121
* registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
122
*/
123
cpu->id_pfr1 &= ~0xf0;
124
- cpu->id_aa64pfr0 &= ~0xf000;
125
+ cpu->isar.id_aa64pfr0 &= ~0xf000;
126
}
127
128
if (!cpu->has_el2) {
129
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
130
* registers if we don't have EL2. These are id_pfr1[15:12] and
131
* id_aa64pfr0_el1[11:8].
132
*/
133
- cpu->id_aa64pfr0 &= ~0xf00;
134
+ cpu->isar.id_aa64pfr0 &= ~0xf00;
135
cpu->id_pfr1 &= ~0xf000;
136
}
137
138
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
139
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
140
cpu->midr = 0x4107b362;
141
cpu->reset_fpsid = 0x410120b4;
142
- cpu->mvfr0 = 0x11111111;
143
- cpu->mvfr1 = 0x00000000;
144
+ cpu->isar.mvfr0 = 0x11111111;
145
+ cpu->isar.mvfr1 = 0x00000000;
146
cpu->ctr = 0x1dd20d2;
147
cpu->reset_sctlr = 0x00050078;
148
cpu->id_pfr0 = 0x111;
149
@@ -XXX,XX +XXX,XX @@ static void arm1136_r2_initfn(Object *obj)
150
cpu->id_mmfr0 = 0x01130003;
151
cpu->id_mmfr1 = 0x10030302;
152
cpu->id_mmfr2 = 0x01222110;
153
- cpu->id_isar0 = 0x00140011;
154
- cpu->id_isar1 = 0x12002111;
155
- cpu->id_isar2 = 0x11231111;
156
- cpu->id_isar3 = 0x01102131;
157
- cpu->id_isar4 = 0x141;
158
+ cpu->isar.id_isar0 = 0x00140011;
159
+ cpu->isar.id_isar1 = 0x12002111;
160
+ cpu->isar.id_isar2 = 0x11231111;
161
+ cpu->isar.id_isar3 = 0x01102131;
162
+ cpu->isar.id_isar4 = 0x141;
163
cpu->reset_auxcr = 7;
164
}
45
}
165
46
166
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
47
+bool armv7m_nvic_get_ready_status(void *opaque, int irq, bool secure)
167
set_feature(&cpu->env, ARM_FEATURE_CACHE_BLOCK_OPS);
48
+{
168
cpu->midr = 0x4117b363;
49
+ /*
169
cpu->reset_fpsid = 0x410120b4;
50
+ * Return whether an exception is "ready", i.e. it is enabled and is
170
- cpu->mvfr0 = 0x11111111;
51
+ * configured at a priority which would allow it to interrupt the
171
- cpu->mvfr1 = 0x00000000;
52
+ * current execution priority.
172
+ cpu->isar.mvfr0 = 0x11111111;
53
+ *
173
+ cpu->isar.mvfr1 = 0x00000000;
54
+ * irq and secure have the same semantics as for armv7m_nvic_set_pending():
174
cpu->ctr = 0x1dd20d2;
55
+ * for non-banked exceptions secure is always false; for banked exceptions
175
cpu->reset_sctlr = 0x00050078;
56
+ * it indicates which of the exceptions is required.
176
cpu->id_pfr0 = 0x111;
57
+ */
177
@@ -XXX,XX +XXX,XX @@ static void arm1136_initfn(Object *obj)
58
+ NVICState *s = (NVICState *)opaque;
178
cpu->id_mmfr0 = 0x01130003;
59
+ bool banked = exc_is_banked(irq);
179
cpu->id_mmfr1 = 0x10030302;
60
+ VecInfo *vec;
180
cpu->id_mmfr2 = 0x01222110;
61
+ int running = nvic_exec_prio(s);
181
- cpu->id_isar0 = 0x00140011;
62
+
182
- cpu->id_isar1 = 0x12002111;
63
+ assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
183
- cpu->id_isar2 = 0x11231111;
64
+ assert(!secure || banked);
184
- cpu->id_isar3 = 0x01102131;
65
+
185
- cpu->id_isar4 = 0x141;
66
+ /*
186
+ cpu->isar.id_isar0 = 0x00140011;
67
+ * HardFault is an odd special case: we always check against -1,
187
+ cpu->isar.id_isar1 = 0x12002111;
68
+ * even if we're secure and HardFault has priority -3; we never
188
+ cpu->isar.id_isar2 = 0x11231111;
69
+ * need to check for enabled state.
189
+ cpu->isar.id_isar3 = 0x01102131;
70
+ */
190
+ cpu->isar.id_isar4 = 0x141;
71
+ if (irq == ARMV7M_EXCP_HARD) {
191
cpu->reset_auxcr = 7;
72
+ return running > -1;
192
}
73
+ }
193
74
+
194
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
75
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
195
set_feature(&cpu->env, ARM_FEATURE_EL3);
76
+
196
cpu->midr = 0x410fb767;
77
+ return vec->enabled &&
197
cpu->reset_fpsid = 0x410120b5;
78
+ exc_group_prio(s, vec->prio, secure) < running;
198
- cpu->mvfr0 = 0x11111111;
79
+}
199
- cpu->mvfr1 = 0x00000000;
80
+
200
+ cpu->isar.mvfr0 = 0x11111111;
81
/* callback when external interrupt line is changed */
201
+ cpu->isar.mvfr1 = 0x00000000;
82
static void set_irq_level(void *opaque, int n, int level)
202
cpu->ctr = 0x1dd20d2;
83
{
203
cpu->reset_sctlr = 0x00050078;
204
cpu->id_pfr0 = 0x111;
205
@@ -XXX,XX +XXX,XX @@ static void arm1176_initfn(Object *obj)
206
cpu->id_mmfr0 = 0x01130003;
207
cpu->id_mmfr1 = 0x10030302;
208
cpu->id_mmfr2 = 0x01222100;
209
- cpu->id_isar0 = 0x0140011;
210
- cpu->id_isar1 = 0x12002111;
211
- cpu->id_isar2 = 0x11231121;
212
- cpu->id_isar3 = 0x01102131;
213
- cpu->id_isar4 = 0x01141;
214
+ cpu->isar.id_isar0 = 0x0140011;
215
+ cpu->isar.id_isar1 = 0x12002111;
216
+ cpu->isar.id_isar2 = 0x11231121;
217
+ cpu->isar.id_isar3 = 0x01102131;
218
+ cpu->isar.id_isar4 = 0x01141;
219
cpu->reset_auxcr = 7;
220
}
221
222
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
223
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
224
cpu->midr = 0x410fb022;
225
cpu->reset_fpsid = 0x410120b4;
226
- cpu->mvfr0 = 0x11111111;
227
- cpu->mvfr1 = 0x00000000;
228
+ cpu->isar.mvfr0 = 0x11111111;
229
+ cpu->isar.mvfr1 = 0x00000000;
230
cpu->ctr = 0x1d192992; /* 32K icache 32K dcache */
231
cpu->id_pfr0 = 0x111;
232
cpu->id_pfr1 = 0x1;
233
@@ -XXX,XX +XXX,XX @@ static void arm11mpcore_initfn(Object *obj)
234
cpu->id_mmfr0 = 0x01100103;
235
cpu->id_mmfr1 = 0x10020302;
236
cpu->id_mmfr2 = 0x01222000;
237
- cpu->id_isar0 = 0x00100011;
238
- cpu->id_isar1 = 0x12002111;
239
- cpu->id_isar2 = 0x11221011;
240
- cpu->id_isar3 = 0x01102131;
241
- cpu->id_isar4 = 0x141;
242
+ cpu->isar.id_isar0 = 0x00100011;
243
+ cpu->isar.id_isar1 = 0x12002111;
244
+ cpu->isar.id_isar2 = 0x11221011;
245
+ cpu->isar.id_isar3 = 0x01102131;
246
+ cpu->isar.id_isar4 = 0x141;
247
cpu->reset_auxcr = 1;
248
}
249
250
@@ -XXX,XX +XXX,XX @@ static void cortex_m3_initfn(Object *obj)
251
cpu->id_mmfr1 = 0x00000000;
252
cpu->id_mmfr2 = 0x00000000;
253
cpu->id_mmfr3 = 0x00000000;
254
- cpu->id_isar0 = 0x01141110;
255
- cpu->id_isar1 = 0x02111000;
256
- cpu->id_isar2 = 0x21112231;
257
- cpu->id_isar3 = 0x01111110;
258
- cpu->id_isar4 = 0x01310102;
259
- cpu->id_isar5 = 0x00000000;
260
- cpu->id_isar6 = 0x00000000;
261
+ cpu->isar.id_isar0 = 0x01141110;
262
+ cpu->isar.id_isar1 = 0x02111000;
263
+ cpu->isar.id_isar2 = 0x21112231;
264
+ cpu->isar.id_isar3 = 0x01111110;
265
+ cpu->isar.id_isar4 = 0x01310102;
266
+ cpu->isar.id_isar5 = 0x00000000;
267
+ cpu->isar.id_isar6 = 0x00000000;
268
}
269
270
static void cortex_m4_initfn(Object *obj)
271
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
272
cpu->id_mmfr1 = 0x00000000;
273
cpu->id_mmfr2 = 0x00000000;
274
cpu->id_mmfr3 = 0x00000000;
275
- cpu->id_isar0 = 0x01141110;
276
- cpu->id_isar1 = 0x02111000;
277
- cpu->id_isar2 = 0x21112231;
278
- cpu->id_isar3 = 0x01111110;
279
- cpu->id_isar4 = 0x01310102;
280
- cpu->id_isar5 = 0x00000000;
281
- cpu->id_isar6 = 0x00000000;
282
+ cpu->isar.id_isar0 = 0x01141110;
283
+ cpu->isar.id_isar1 = 0x02111000;
284
+ cpu->isar.id_isar2 = 0x21112231;
285
+ cpu->isar.id_isar3 = 0x01111110;
286
+ cpu->isar.id_isar4 = 0x01310102;
287
+ cpu->isar.id_isar5 = 0x00000000;
288
+ cpu->isar.id_isar6 = 0x00000000;
289
}
290
291
static void cortex_m33_initfn(Object *obj)
292
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
293
cpu->id_mmfr1 = 0x00000000;
294
cpu->id_mmfr2 = 0x01000000;
295
cpu->id_mmfr3 = 0x00000000;
296
- cpu->id_isar0 = 0x01101110;
297
- cpu->id_isar1 = 0x02212000;
298
- cpu->id_isar2 = 0x20232232;
299
- cpu->id_isar3 = 0x01111131;
300
- cpu->id_isar4 = 0x01310132;
301
- cpu->id_isar5 = 0x00000000;
302
- cpu->id_isar6 = 0x00000000;
303
+ cpu->isar.id_isar0 = 0x01101110;
304
+ cpu->isar.id_isar1 = 0x02212000;
305
+ cpu->isar.id_isar2 = 0x20232232;
306
+ cpu->isar.id_isar3 = 0x01111131;
307
+ cpu->isar.id_isar4 = 0x01310132;
308
+ cpu->isar.id_isar5 = 0x00000000;
309
+ cpu->isar.id_isar6 = 0x00000000;
310
cpu->clidr = 0x00000000;
311
cpu->ctr = 0x8000c000;
312
}
313
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
314
cpu->id_mmfr1 = 0x00000000;
315
cpu->id_mmfr2 = 0x01200000;
316
cpu->id_mmfr3 = 0x0211;
317
- cpu->id_isar0 = 0x02101111;
318
- cpu->id_isar1 = 0x13112111;
319
- cpu->id_isar2 = 0x21232141;
320
- cpu->id_isar3 = 0x01112131;
321
- cpu->id_isar4 = 0x0010142;
322
- cpu->id_isar5 = 0x0;
323
- cpu->id_isar6 = 0x0;
324
+ cpu->isar.id_isar0 = 0x02101111;
325
+ cpu->isar.id_isar1 = 0x13112111;
326
+ cpu->isar.id_isar2 = 0x21232141;
327
+ cpu->isar.id_isar3 = 0x01112131;
328
+ cpu->isar.id_isar4 = 0x0010142;
329
+ cpu->isar.id_isar5 = 0x0;
330
+ cpu->isar.id_isar6 = 0x0;
331
cpu->mp_is_up = true;
332
cpu->pmsav7_dregion = 16;
333
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
334
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
335
set_feature(&cpu->env, ARM_FEATURE_EL3);
336
cpu->midr = 0x410fc080;
337
cpu->reset_fpsid = 0x410330c0;
338
- cpu->mvfr0 = 0x11110222;
339
- cpu->mvfr1 = 0x00011111;
340
+ cpu->isar.mvfr0 = 0x11110222;
341
+ cpu->isar.mvfr1 = 0x00011111;
342
cpu->ctr = 0x82048004;
343
cpu->reset_sctlr = 0x00c50078;
344
cpu->id_pfr0 = 0x1031;
345
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
346
cpu->id_mmfr1 = 0x20000000;
347
cpu->id_mmfr2 = 0x01202000;
348
cpu->id_mmfr3 = 0x11;
349
- cpu->id_isar0 = 0x00101111;
350
- cpu->id_isar1 = 0x12112111;
351
- cpu->id_isar2 = 0x21232031;
352
- cpu->id_isar3 = 0x11112131;
353
- cpu->id_isar4 = 0x00111142;
354
+ cpu->isar.id_isar0 = 0x00101111;
355
+ cpu->isar.id_isar1 = 0x12112111;
356
+ cpu->isar.id_isar2 = 0x21232031;
357
+ cpu->isar.id_isar3 = 0x11112131;
358
+ cpu->isar.id_isar4 = 0x00111142;
359
cpu->dbgdidr = 0x15141000;
360
cpu->clidr = (1 << 27) | (2 << 24) | 3;
361
cpu->ccsidr[0] = 0xe007e01a; /* 16k L1 dcache. */
362
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
363
set_feature(&cpu->env, ARM_FEATURE_CBAR);
364
cpu->midr = 0x410fc090;
365
cpu->reset_fpsid = 0x41033090;
366
- cpu->mvfr0 = 0x11110222;
367
- cpu->mvfr1 = 0x01111111;
368
+ cpu->isar.mvfr0 = 0x11110222;
369
+ cpu->isar.mvfr1 = 0x01111111;
370
cpu->ctr = 0x80038003;
371
cpu->reset_sctlr = 0x00c50078;
372
cpu->id_pfr0 = 0x1031;
373
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
374
cpu->id_mmfr1 = 0x20000000;
375
cpu->id_mmfr2 = 0x01230000;
376
cpu->id_mmfr3 = 0x00002111;
377
- cpu->id_isar0 = 0x00101111;
378
- cpu->id_isar1 = 0x13112111;
379
- cpu->id_isar2 = 0x21232041;
380
- cpu->id_isar3 = 0x11112131;
381
- cpu->id_isar4 = 0x00111142;
382
+ cpu->isar.id_isar0 = 0x00101111;
383
+ cpu->isar.id_isar1 = 0x13112111;
384
+ cpu->isar.id_isar2 = 0x21232041;
385
+ cpu->isar.id_isar3 = 0x11112131;
386
+ cpu->isar.id_isar4 = 0x00111142;
387
cpu->dbgdidr = 0x35141000;
388
cpu->clidr = (1 << 27) | (1 << 24) | 3;
389
cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
390
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
391
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A7;
392
cpu->midr = 0x410fc075;
393
cpu->reset_fpsid = 0x41023075;
394
- cpu->mvfr0 = 0x10110222;
395
- cpu->mvfr1 = 0x11111111;
396
+ cpu->isar.mvfr0 = 0x10110222;
397
+ cpu->isar.mvfr1 = 0x11111111;
398
cpu->ctr = 0x84448003;
399
cpu->reset_sctlr = 0x00c50078;
400
cpu->id_pfr0 = 0x00001131;
401
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
402
/* a7_mpcore_r0p5_trm, page 4-4 gives 0x01101110; but
403
* table 4-41 gives 0x02101110, which includes the arm div insns.
404
*/
405
- cpu->id_isar0 = 0x02101110;
406
- cpu->id_isar1 = 0x13112111;
407
- cpu->id_isar2 = 0x21232041;
408
- cpu->id_isar3 = 0x11112131;
409
- cpu->id_isar4 = 0x10011142;
410
+ cpu->isar.id_isar0 = 0x02101110;
411
+ cpu->isar.id_isar1 = 0x13112111;
412
+ cpu->isar.id_isar2 = 0x21232041;
413
+ cpu->isar.id_isar3 = 0x11112131;
414
+ cpu->isar.id_isar4 = 0x10011142;
415
cpu->dbgdidr = 0x3515f005;
416
cpu->clidr = 0x0a200023;
417
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
418
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
419
cpu->kvm_target = QEMU_KVM_ARM_TARGET_CORTEX_A15;
420
cpu->midr = 0x412fc0f1;
421
cpu->reset_fpsid = 0x410430f0;
422
- cpu->mvfr0 = 0x10110222;
423
- cpu->mvfr1 = 0x11111111;
424
+ cpu->isar.mvfr0 = 0x10110222;
425
+ cpu->isar.mvfr1 = 0x11111111;
426
cpu->ctr = 0x8444c004;
427
cpu->reset_sctlr = 0x00c50078;
428
cpu->id_pfr0 = 0x00001131;
429
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
430
cpu->id_mmfr1 = 0x20000000;
431
cpu->id_mmfr2 = 0x01240000;
432
cpu->id_mmfr3 = 0x02102211;
433
- cpu->id_isar0 = 0x02101110;
434
- cpu->id_isar1 = 0x13112111;
435
- cpu->id_isar2 = 0x21232041;
436
- cpu->id_isar3 = 0x11112131;
437
- cpu->id_isar4 = 0x10011142;
438
+ cpu->isar.id_isar0 = 0x02101110;
439
+ cpu->isar.id_isar1 = 0x13112111;
440
+ cpu->isar.id_isar2 = 0x21232041;
441
+ cpu->isar.id_isar3 = 0x11112131;
442
+ cpu->isar.id_isar4 = 0x10011142;
443
cpu->dbgdidr = 0x3515f021;
444
cpu->clidr = 0x0a200023;
445
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
446
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
447
index XXXXXXX..XXXXXXX 100644
448
--- a/target/arm/cpu64.c
449
+++ b/target/arm/cpu64.c
450
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
451
cpu->midr = 0x411fd070;
452
cpu->revidr = 0x00000000;
453
cpu->reset_fpsid = 0x41034070;
454
- cpu->mvfr0 = 0x10110222;
455
- cpu->mvfr1 = 0x12111111;
456
- cpu->mvfr2 = 0x00000043;
457
+ cpu->isar.mvfr0 = 0x10110222;
458
+ cpu->isar.mvfr1 = 0x12111111;
459
+ cpu->isar.mvfr2 = 0x00000043;
460
cpu->ctr = 0x8444c004;
461
cpu->reset_sctlr = 0x00c50838;
462
cpu->id_pfr0 = 0x00000131;
463
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
464
cpu->id_mmfr1 = 0x40000000;
465
cpu->id_mmfr2 = 0x01260000;
466
cpu->id_mmfr3 = 0x02102211;
467
- cpu->id_isar0 = 0x02101110;
468
- cpu->id_isar1 = 0x13112111;
469
- cpu->id_isar2 = 0x21232042;
470
- cpu->id_isar3 = 0x01112131;
471
- cpu->id_isar4 = 0x00011142;
472
- cpu->id_isar5 = 0x00011121;
473
- cpu->id_isar6 = 0;
474
- cpu->id_aa64pfr0 = 0x00002222;
475
+ cpu->isar.id_isar0 = 0x02101110;
476
+ cpu->isar.id_isar1 = 0x13112111;
477
+ cpu->isar.id_isar2 = 0x21232042;
478
+ cpu->isar.id_isar3 = 0x01112131;
479
+ cpu->isar.id_isar4 = 0x00011142;
480
+ cpu->isar.id_isar5 = 0x00011121;
481
+ cpu->isar.id_isar6 = 0;
482
+ cpu->isar.id_aa64pfr0 = 0x00002222;
483
cpu->id_aa64dfr0 = 0x10305106;
484
cpu->pmceid0 = 0x00000000;
485
cpu->pmceid1 = 0x00000000;
486
- cpu->id_aa64isar0 = 0x00011120;
487
+ cpu->isar.id_aa64isar0 = 0x00011120;
488
cpu->id_aa64mmfr0 = 0x00001124;
489
cpu->dbgdidr = 0x3516d000;
490
cpu->clidr = 0x0a200023;
491
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
492
cpu->midr = 0x410fd034;
493
cpu->revidr = 0x00000000;
494
cpu->reset_fpsid = 0x41034070;
495
- cpu->mvfr0 = 0x10110222;
496
- cpu->mvfr1 = 0x12111111;
497
- cpu->mvfr2 = 0x00000043;
498
+ cpu->isar.mvfr0 = 0x10110222;
499
+ cpu->isar.mvfr1 = 0x12111111;
500
+ cpu->isar.mvfr2 = 0x00000043;
501
cpu->ctr = 0x84448004; /* L1Ip = VIPT */
502
cpu->reset_sctlr = 0x00c50838;
503
cpu->id_pfr0 = 0x00000131;
504
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
505
cpu->id_mmfr1 = 0x40000000;
506
cpu->id_mmfr2 = 0x01260000;
507
cpu->id_mmfr3 = 0x02102211;
508
- cpu->id_isar0 = 0x02101110;
509
- cpu->id_isar1 = 0x13112111;
510
- cpu->id_isar2 = 0x21232042;
511
- cpu->id_isar3 = 0x01112131;
512
- cpu->id_isar4 = 0x00011142;
513
- cpu->id_isar5 = 0x00011121;
514
- cpu->id_isar6 = 0;
515
- cpu->id_aa64pfr0 = 0x00002222;
516
+ cpu->isar.id_isar0 = 0x02101110;
517
+ cpu->isar.id_isar1 = 0x13112111;
518
+ cpu->isar.id_isar2 = 0x21232042;
519
+ cpu->isar.id_isar3 = 0x01112131;
520
+ cpu->isar.id_isar4 = 0x00011142;
521
+ cpu->isar.id_isar5 = 0x00011121;
522
+ cpu->isar.id_isar6 = 0;
523
+ cpu->isar.id_aa64pfr0 = 0x00002222;
524
cpu->id_aa64dfr0 = 0x10305106;
525
- cpu->id_aa64isar0 = 0x00011120;
526
+ cpu->isar.id_aa64isar0 = 0x00011120;
527
cpu->id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
528
cpu->dbgdidr = 0x3516d000;
529
cpu->clidr = 0x0a200023;
530
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
531
cpu->midr = 0x410fd083;
532
cpu->revidr = 0x00000000;
533
cpu->reset_fpsid = 0x41034080;
534
- cpu->mvfr0 = 0x10110222;
535
- cpu->mvfr1 = 0x12111111;
536
- cpu->mvfr2 = 0x00000043;
537
+ cpu->isar.mvfr0 = 0x10110222;
538
+ cpu->isar.mvfr1 = 0x12111111;
539
+ cpu->isar.mvfr2 = 0x00000043;
540
cpu->ctr = 0x8444c004;
541
cpu->reset_sctlr = 0x00c50838;
542
cpu->id_pfr0 = 0x00000131;
543
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
544
cpu->id_mmfr1 = 0x40000000;
545
cpu->id_mmfr2 = 0x01260000;
546
cpu->id_mmfr3 = 0x02102211;
547
- cpu->id_isar0 = 0x02101110;
548
- cpu->id_isar1 = 0x13112111;
549
- cpu->id_isar2 = 0x21232042;
550
- cpu->id_isar3 = 0x01112131;
551
- cpu->id_isar4 = 0x00011142;
552
- cpu->id_isar5 = 0x00011121;
553
- cpu->id_aa64pfr0 = 0x00002222;
554
+ cpu->isar.id_isar0 = 0x02101110;
555
+ cpu->isar.id_isar1 = 0x13112111;
556
+ cpu->isar.id_isar2 = 0x21232042;
557
+ cpu->isar.id_isar3 = 0x01112131;
558
+ cpu->isar.id_isar4 = 0x00011142;
559
+ cpu->isar.id_isar5 = 0x00011121;
560
+ cpu->isar.id_aa64pfr0 = 0x00002222;
561
cpu->id_aa64dfr0 = 0x10305106;
562
cpu->pmceid0 = 0x00000000;
563
cpu->pmceid1 = 0x00000000;
564
- cpu->id_aa64isar0 = 0x00011120;
565
+ cpu->isar.id_aa64isar0 = 0x00011120;
566
cpu->id_aa64mmfr0 = 0x00001124;
567
cpu->dbgdidr = 0x3516d000;
568
cpu->clidr = 0x0a200023;
569
diff --git a/target/arm/helper.c b/target/arm/helper.c
84
diff --git a/target/arm/helper.c b/target/arm/helper.c
570
index XXXXXXX..XXXXXXX 100644
85
index XXXXXXX..XXXXXXX 100644
571
--- a/target/arm/helper.c
86
--- a/target/arm/helper.c
572
+++ b/target/arm/helper.c
87
+++ b/target/arm/helper.c
573
@@ -XXX,XX +XXX,XX @@ static uint64_t id_pfr1_read(CPUARMState *env, const ARMCPRegInfo *ri)
88
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
574
static uint64_t id_aa64pfr0_read(CPUARMState *env, const ARMCPRegInfo *ri)
89
env->thumb = addr & 1;
90
}
91
92
+static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
93
+ bool apply_splim)
94
+{
95
+ /*
96
+ * Like the pseudocode UpdateFPCCR: save state in FPCAR and FPCCR
97
+ * that we will need later in order to do lazy FP reg stacking.
98
+ */
99
+ bool is_secure = env->v7m.secure;
100
+ void *nvic = env->nvic;
101
+ /*
102
+ * Some bits are unbanked and live always in fpccr[M_REG_S]; some bits
103
+ * are banked and we want to update the bit in the bank for the
104
+ * current security state; and in one case we want to specifically
105
+ * update the NS banked version of a bit even if we are secure.
106
+ */
107
+ uint32_t *fpccr_s = &env->v7m.fpccr[M_REG_S];
108
+ uint32_t *fpccr_ns = &env->v7m.fpccr[M_REG_NS];
109
+ uint32_t *fpccr = &env->v7m.fpccr[is_secure];
110
+ bool hfrdy, bfrdy, mmrdy, ns_ufrdy, s_ufrdy, sfrdy, monrdy;
111
+
112
+ env->v7m.fpcar[is_secure] = frameptr & ~0x7;
113
+
114
+ if (apply_splim && arm_feature(env, ARM_FEATURE_V8)) {
115
+ bool splimviol;
116
+ uint32_t splim = v7m_sp_limit(env);
117
+ bool ign = armv7m_nvic_neg_prio_requested(nvic, is_secure) &&
118
+ (env->v7m.ccr[is_secure] & R_V7M_CCR_STKOFHFNMIGN_MASK);
119
+
120
+ splimviol = !ign && frameptr < splim;
121
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, SPLIMVIOL, splimviol);
122
+ }
123
+
124
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, LSPACT, 1);
125
+
126
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, S, is_secure);
127
+
128
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, USER, arm_current_el(env) == 0);
129
+
130
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, THREAD,
131
+ !arm_v7m_is_handler_mode(env));
132
+
133
+ hfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_HARD, false);
134
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, HFRDY, hfrdy);
135
+
136
+ bfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_BUS, false);
137
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, BFRDY, bfrdy);
138
+
139
+ mmrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_MEM, is_secure);
140
+ *fpccr = FIELD_DP32(*fpccr, V7M_FPCCR, MMRDY, mmrdy);
141
+
142
+ ns_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, false);
143
+ *fpccr_ns = FIELD_DP32(*fpccr_ns, V7M_FPCCR, UFRDY, ns_ufrdy);
144
+
145
+ monrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_DEBUG, false);
146
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, MONRDY, monrdy);
147
+
148
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
149
+ s_ufrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_USAGE, true);
150
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, UFRDY, s_ufrdy);
151
+
152
+ sfrdy = armv7m_nvic_get_ready_status(nvic, ARMV7M_EXCP_SECURE, false);
153
+ *fpccr_s = FIELD_DP32(*fpccr_s, V7M_FPCCR, SFRDY, sfrdy);
154
+ }
155
+}
156
+
157
static bool v7m_push_stack(ARMCPU *cpu)
575
{
158
{
576
ARMCPU *cpu = arm_env_get_cpu(env);
159
/* Do the "set up stack frame" part of exception entry,
577
- uint64_t pfr0 = cpu->id_aa64pfr0;
160
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
578
+ uint64_t pfr0 = cpu->isar.id_aa64pfr0;
161
}
579
162
} else {
580
if (env->gicv3state) {
163
/* Lazy stacking enabled, save necessary info to stack later */
581
pfr0 |= 1 << 24;
164
- /* TODO : equivalent of UpdateFPCCR() pseudocode */
582
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
165
+ v7m_update_fpccr(env, frameptr + 0x20, true);
583
{ .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
166
}
584
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
167
}
585
.access = PL1_R, .type = ARM_CP_CONST,
168
}
586
- .resetvalue = cpu->id_isar0 },
587
+ .resetvalue = cpu->isar.id_isar0 },
588
{ .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
589
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
590
.access = PL1_R, .type = ARM_CP_CONST,
591
- .resetvalue = cpu->id_isar1 },
592
+ .resetvalue = cpu->isar.id_isar1 },
593
{ .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
594
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
595
.access = PL1_R, .type = ARM_CP_CONST,
596
- .resetvalue = cpu->id_isar2 },
597
+ .resetvalue = cpu->isar.id_isar2 },
598
{ .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
599
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
600
.access = PL1_R, .type = ARM_CP_CONST,
601
- .resetvalue = cpu->id_isar3 },
602
+ .resetvalue = cpu->isar.id_isar3 },
603
{ .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
604
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
605
.access = PL1_R, .type = ARM_CP_CONST,
606
- .resetvalue = cpu->id_isar4 },
607
+ .resetvalue = cpu->isar.id_isar4 },
608
{ .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
609
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
610
.access = PL1_R, .type = ARM_CP_CONST,
611
- .resetvalue = cpu->id_isar5 },
612
+ .resetvalue = cpu->isar.id_isar5 },
613
{ .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
614
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
615
.access = PL1_R, .type = ARM_CP_CONST,
616
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
617
{ .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
618
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
619
.access = PL1_R, .type = ARM_CP_CONST,
620
- .resetvalue = cpu->id_isar6 },
621
+ .resetvalue = cpu->isar.id_isar6 },
622
REGINFO_SENTINEL
623
};
624
define_arm_cp_regs(cpu, v6_idregs);
625
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
626
{ .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
627
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
628
.access = PL1_R, .type = ARM_CP_CONST,
629
- .resetvalue = cpu->id_aa64pfr1},
630
+ .resetvalue = cpu->isar.id_aa64pfr1},
631
{ .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
632
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
633
.access = PL1_R, .type = ARM_CP_CONST,
634
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
635
{ .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
636
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
637
.access = PL1_R, .type = ARM_CP_CONST,
638
- .resetvalue = cpu->id_aa64isar0 },
639
+ .resetvalue = cpu->isar.id_aa64isar0 },
640
{ .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
641
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
642
.access = PL1_R, .type = ARM_CP_CONST,
643
- .resetvalue = cpu->id_aa64isar1 },
644
+ .resetvalue = cpu->isar.id_aa64isar1 },
645
{ .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
646
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
647
.access = PL1_R, .type = ARM_CP_CONST,
648
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
649
{ .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
650
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
651
.access = PL1_R, .type = ARM_CP_CONST,
652
- .resetvalue = cpu->mvfr0 },
653
+ .resetvalue = cpu->isar.mvfr0 },
654
{ .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
655
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
656
.access = PL1_R, .type = ARM_CP_CONST,
657
- .resetvalue = cpu->mvfr1 },
658
+ .resetvalue = cpu->isar.mvfr1 },
659
{ .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
660
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
661
.access = PL1_R, .type = ARM_CP_CONST,
662
- .resetvalue = cpu->mvfr2 },
663
+ .resetvalue = cpu->isar.mvfr2 },
664
{ .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
665
.opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
666
.access = PL1_R, .type = ARM_CP_CONST,
667
--
169
--
668
2.19.1
170
2.20.1
669
171
670
172
diff view generated by jsdifflib
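The v7m_update_fpccr() code above leans heavily on QEMU's registerfields.h deposit macro. As a rough standalone sketch of what FIELD_DP32() boils down to (names and the demo values here are illustrative):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal stand-in for QEMU's FIELD_DP32(reg, R, F, val): deposit VAL
 * into the LENGTH-bit field at SHIFT, leaving the other bits alone.
 * (The real macro gets SHIFT/LENGTH from constants that FIELD() in
 * include/hw/registerfields.h generates; length < 32 assumed here.) */
static uint32_t field_dp32(uint32_t reg, unsigned shift, unsigned length,
                           uint32_t val)
{
    uint32_t mask = ((1u << length) - 1) << shift;

    return (reg & ~mask) | ((val << shift) & mask);
}

int main(void)
{
    /* e.g. setting a one-bit LSPACT-style flag at bit 0 of an FPCCR image */
    uint32_t fpccr = 0;

    fpccr = field_dp32(fpccr, 0, 1, 1);
    printf("fpccr = 0x%08" PRIx32 "\n", fpccr); /* prints 0x00000001 */
    return 0;
}
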
For v8M floating point support, transitions from Secure
to Non-secure state via BXNS and BLXNS must clear the
CONTROL.SFPA bit. (This corresponds to the pseudocode
BranchToNS() function.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-13-peter.maydell@linaro.org
---
 target/arm/helper.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
     /* translate.c should have made BXNS UNDEF unless we're secure */
     assert(env->v7m.secure);

+    if (!(dest & 1)) {
+        env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
+    }
     switch_v7m_security_state(env, dest & 1);
     env->thumb = 1;
     env->regs[15] = dest & ~1;
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
          */
         write_v7m_exception(env, 1);
     }
+    env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_SFPA_MASK;
     switch_v7m_security_state(env, 0);
     env->thumb = 1;
     env->regs[15] = dest;
--
2.20.1

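As background to the guard added in v7m_bxns(): bit 0 of the branch target selects the destination security state in v8M, so SFPA (Secure-only state) is cleared exactly when that bit is 0. An illustrative standalone sketch, not part of the patch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* In v8M, bit 0 of a BXNS/BLXNS branch target selects the destination
 * security state (1 = stay Secure, 0 = go Non-secure). */
static bool leaves_secure_state(uint32_t dest)
{
    return (dest & 1) == 0;
}

int main(void)
{
    printf("dest 0x8001 leaves Secure? %d\n", leaves_secure_state(0x8001));
    printf("dest 0x8000 leaves Secure? %d\n", leaves_secure_state(0x8000));
    return 0;
}
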
The TailChain() pseudocode specifies that a tail chaining
exception should sanitize the excReturn all-ones bits and
(if there is no FPU) the excReturn FType bits; we weren't
doing this.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-14-peter.maydell@linaro.org
---
 target/arm/helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
     qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
                   targets_secure ? "secure" : "nonsecure", exc);

+    if (dotailchain) {
+        /* Sanitize LR FType and PREFIX bits */
+        if (!arm_feature(env, ARM_FEATURE_VFP)) {
+            lr |= R_V7M_EXCRET_FTYPE_MASK;
+        }
+        lr = deposit32(lr, 24, 8, 0xff);
+    }
+
     if (arm_feature(env, ARM_FEATURE_V8)) {
         if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
             (lr & R_V7M_EXCRET_S_MASK)) {
--
2.20.1

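deposit32() used above is QEMU's generic bitfield-store helper; the call deposit32(lr, 24, 8, 0xff) forces the EXC_RETURN prefix bits [31:24] back to all-ones. A standalone equivalent, for illustration (the sample lr value is hypothetical):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Standalone equivalent of QEMU's deposit32(value, start, length, field):
 * overwrite LENGTH bits of VALUE starting at bit START with FIELD.
 * (Assumes 0 < length <= 32 - start, as the real helper asserts.) */
static uint32_t deposit32(uint32_t value, int start, int length,
                          uint32_t field)
{
    uint32_t mask = (~0u >> (32 - length)) << start;

    return (value & ~mask) | ((field << start) & mask);
}

int main(void)
{
    uint32_t lr = 0x00000fd1; /* hypothetical partially-corrupted EXC_RETURN */

    /* Sanitize the prefix bits [31:24] back to all-ones, as in the patch */
    printf("0x%08" PRIx32 " -> 0x%08" PRIx32 "\n",
           lr, deposit32(lr, 24, 8, 0xff)); /* 0x00000fd1 -> 0xff000fd1 */
    return 0;
}
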
The magic value pushed onto the callee stack as an integrity
check is different if floating point is present.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-15-peter.maydell@linaro.org
---
 target/arm/helper.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ load_fail:
     return false;
 }

+static uint32_t v7m_integrity_sig(CPUARMState *env, uint32_t lr)
+{
+    /*
+     * Return the integrity signature value for the callee-saves
+     * stack frame section. @lr is the exception return payload/LR value
+     * whose FType bit forms bit 0 of the signature if FP is present.
+     */
+    uint32_t sig = 0xfefa125a;
+
+    if (!arm_feature(env, ARM_FEATURE_VFP) || (lr & R_V7M_EXCRET_FTYPE_MASK)) {
+        sig |= 1;
+    }
+    return sig;
+}
+
 static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
                                   bool ignore_faults)
 {
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
     bool stacked_ok;
     uint32_t limit;
     bool want_psp;
+    uint32_t sig;

     if (dotailchain) {
         bool mode = lr & R_V7M_EXCRET_MODE_MASK;
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
     /* Write as much of the stack frame as we can. A write failure may
      * cause us to pend a derived exception.
      */
+    sig = v7m_integrity_sig(env, lr);
     stacked_ok =
-        v7m_stack_write(cpu, frameptr, 0xfefa125b, mmu_idx, ignore_faults) &&
+        v7m_stack_write(cpu, frameptr, sig, mmu_idx, ignore_faults) &&
         v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
                         ignore_faults) &&
         v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     if (return_to_secure &&
         ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
          (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
-        uint32_t expected_sig = 0xfefa125b;
         uint32_t actual_sig;

         pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);

-        if (pop_ok && expected_sig != actual_sig) {
+        if (pop_ok && v7m_integrity_sig(env, excret) != actual_sig) {
             /* Take a SecureFault on the current stack */
             env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
             armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
--
2.20.1

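To make the new signature scheme concrete: bit 0 of the magic value is now derived from EXC_RETURN.FType, so a CPU without an FPU keeps producing the old 0xfefa125b constant. A standalone restatement (FType taken as bit 4, per the v8M EXC_RETURN layout):

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EXCRET_FTYPE (1u << 4) /* FType is bit 4 of EXC_RETURN in v8M */

/* Standalone restatement of the patch's v7m_integrity_sig(): bit 0 of
 * the signature is set unless the CPU has an FPU *and* the frame holds
 * FP state (FType == 0). */
static uint32_t integrity_sig(bool have_fp, uint32_t excret)
{
    uint32_t sig = 0xfefa125a;

    if (!have_fp || (excret & EXCRET_FTYPE)) {
        sig |= 1;
    }
    return sig;
}

int main(void)
{
    /* No FPU: always the old constant 0xfefa125b */
    printf("0x%08" PRIx32 "\n", integrity_sig(false, EXCRET_FTYPE));
    /* FPU present and FP state in the frame: 0xfefa125a */
    printf("0x%08" PRIx32 "\n", integrity_sig(true, 0));
    return 0;
}
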
Handle floating point registers in exception return.
This corresponds to pseudocode functions ValidateExceptionReturn(),
ExceptionReturn(), PopStack() and ConsumeExcStackFrame().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-16-peter.maydell@linaro.org
---
 target/arm/helper.c | 142 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 141 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     bool rettobase = false;
     bool exc_secure = false;
     bool return_to_secure;
+    bool ftype;
+    bool restore_s16_s31;

     /* If we're not in Handler mode then jumps to magic exception-exit
      * addresses don't have magic behaviour. However for the v8M
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
                       excret);
     }

+    ftype = excret & R_V7M_EXCRET_FTYPE_MASK;
+
+    if (!arm_feature(env, ARM_FEATURE_VFP) && !ftype) {
+        qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero FTYPE in exception "
+                      "exit PC value 0x%" PRIx32 " is UNPREDICTABLE "
+                      "if FPU not present\n",
+                      excret);
+        ftype = true;
+    }
+
     if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
         /* EXC_RETURN.ES validation check (R_SMFL). We must do this before
          * we pick which FAULTMASK to clear.
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      */
     write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);

+    /*
+     * Clear scratch FP values left in caller saved registers; this
+     * must happen before any kind of tail chaining.
+     */
+    if ((env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_CLRONRET_MASK) &&
+        (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
+        if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
+            env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+            qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                          "stackframe: error during lazy state deactivation\n");
+            v7m_exception_taken(cpu, excret, true, false);
+            return;
+        } else {
+            /* Clear s0..s15 and FPSCR */
+            int i;
+
+            for (i = 0; i < 16; i += 2) {
+                *aa32_vfp_dreg(env, i / 2) = 0;
+            }
+            vfp_set_fpscr(env, 0);
+        }
+    }
+
     if (sfault) {
         env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             }
         }

+        if (!ftype) {
+            /* FP present and we need to handle it */
+            if (!return_to_secure &&
+                (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK)) {
+                armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+                env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
+                qemu_log_mask(CPU_LOG_INT,
+                              "...taking SecureFault on existing stackframe: "
+                              "Secure LSPACT set but exception return is "
+                              "not to secure state\n");
+                v7m_exception_taken(cpu, excret, true, false);
+                return;
+            }
+
+            restore_s16_s31 = return_to_secure &&
+                (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
+
+            if (env->v7m.fpccr[return_to_secure] & R_V7M_FPCCR_LSPACT_MASK) {
+                /* State in FPU is still valid, just clear LSPACT */
+                env->v7m.fpccr[return_to_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
+            } else {
+                int i;
+                uint32_t fpscr;
+                bool cpacr_pass, nsacr_pass;
+
+                cpacr_pass = v7m_cpacr_pass(env, return_to_secure,
+                                            return_to_priv);
+                nsacr_pass = return_to_secure ||
+                    extract32(env->v7m.nsacr, 10, 1);
+
+                if (!cpacr_pass) {
+                    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                            return_to_secure);
+                    env->v7m.cfsr[return_to_secure] |= R_V7M_CFSR_NOCP_MASK;
+                    qemu_log_mask(CPU_LOG_INT,
+                                  "...taking UsageFault on existing "
+                                  "stackframe: CPACR.CP10 prevents unstacking "
+                                  "FP regs\n");
+                    v7m_exception_taken(cpu, excret, true, false);
+                    return;
+                } else if (!nsacr_pass) {
+                    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, true);
+                    env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_INVPC_MASK;
+                    qemu_log_mask(CPU_LOG_INT,
+                                  "...taking Secure UsageFault on existing "
+                                  "stackframe: NSACR.CP10 prevents unstacking "
+                                  "FP regs\n");
+                    v7m_exception_taken(cpu, excret, true, false);
+                    return;
+                }
+
+                for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
+                    uint32_t slo, shi;
+                    uint64_t dn;
+                    uint32_t faddr = frameptr + 0x20 + 4 * i;
+
+                    if (i >= 16) {
+                        faddr += 8; /* Skip the slot for the FPSCR */
+                    }
+
+                    pop_ok = pop_ok &&
+                        v7m_stack_read(cpu, &slo, faddr, mmu_idx) &&
+                        v7m_stack_read(cpu, &shi, faddr + 4, mmu_idx);
+
+                    if (!pop_ok) {
+                        break;
+                    }
+
+                    dn = (uint64_t)shi << 32 | slo;
+                    *aa32_vfp_dreg(env, i / 2) = dn;
+                }
+                pop_ok = pop_ok &&
+                    v7m_stack_read(cpu, &fpscr, frameptr + 0x60, mmu_idx);
+                if (pop_ok) {
+                    vfp_set_fpscr(env, fpscr);
+                }
+                if (!pop_ok) {
+                    /*
+                     * These regs are 0 if security extension present;
+                     * otherwise merely UNKNOWN. We zero always.
+                     */
+                    for (i = 0; i < (restore_s16_s31 ? 32 : 16); i += 2) {
+                        *aa32_vfp_dreg(env, i / 2) = 0;
+                    }
+                    vfp_set_fpscr(env, 0);
+                }
+            }
+        }
+
+        env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
+                                               V7M_CONTROL, FPCA, !ftype);
+
         /* Commit to consuming the stack frame */
         frameptr += 0x20;
+        if (!ftype) {
+            frameptr += 0x48;
+            if (restore_s16_s31) {
+                frameptr += 0x40;
+            }
+        }
         /* Undo stack alignment (the SPREALIGN bit indicates that the original
          * pre-exception SP was not 8-aligned and we added a padding word to
          * align it, so we undo this by ORing in the bit that increases it
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         *frame_sp_p = frameptr;
     }
     /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
-    xpsr_write(env, xpsr, ~XPSR_SPREALIGN);
+    xpsr_write(env, xpsr, ~(XPSR_SPREALIGN | XPSR_SFPA));
+
+    if (env->v7m.secure) {
+        bool sfpa = xpsr & XPSR_SFPA;
+
+        env->v7m.control[M_REG_S] = FIELD_DP32(env->v7m.control[M_REG_S],
+                                               V7M_CONTROL, SFPA, sfpa);
+    }

     /* The restored xPSR exception field will be zero if we're
      * resuming in Thread mode. If that doesn't match what the
--
2.20.1

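The faddr computation in the unstacking loop implies the following layout for the FP section of the frame; a standalone sketch derived from that loop (illustrative only):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Offsets within the extended exception frame, as implied by the
 * unstacking loop: the FP section starts at +0x20, FPSCR lives at
 * +0x60, and s16-s31 (when present) start at +0x68. i is the
 * single-precision register index. */
static uint32_t fp_reg_offset(int i)
{
    uint32_t off = 0x20 + 4 * i;

    if (i >= 16) {
        off += 8; /* skip the FPSCR slot and its reserved partner word */
    }
    return off;
}

int main(void)
{
    printf("s0  at +0x%02" PRIx32 "\n", fp_reg_offset(0));  /* +0x20 */
    printf("s15 at +0x%02" PRIx32 "\n", fp_reg_offset(15)); /* +0x5c */
    printf("s16 at +0x%02" PRIx32 "\n", fp_reg_offset(16)); /* +0x68 */
    return 0;
}

The same layout accounts for the frameptr adjustments above: 0x48 covers s0-s15 plus FPSCR and a reserved word, and a further 0x40 covers s16-s31.
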
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Move the NS TBFLAG down from bit 19 to bit 6, which has not
2
been used since commit c1e3781090b9d36c60 in 2015, when we
3
started passing the entire MMU index in the TB flags rather
4
than just a 'privilege level' bit.
2
5
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
6
This rearrangement is not strictly necessary, but means that
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
we can put M-profile-only bits next to each other rather
5
Message-id: 20181016223115.24100-7-richard.henderson@linaro.org
8
than scattered across the flag word.
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20190416125744.27770-17-peter.maydell@linaro.org
8
---
13
---
9
target/arm/cpu.h | 6 +++++-
14
target/arm/cpu.h | 11 ++++++-----
10
linux-user/elfload.c | 2 +-
15
1 file changed, 6 insertions(+), 5 deletions(-)
11
target/arm/cpu.c | 4 ----
12
target/arm/helper.c | 2 +-
13
target/arm/machine.c | 3 +--
14
5 files changed, 8 insertions(+), 9 deletions(-)
15
16
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
19
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
20
@@ -XXX,XX +XXX,XX @@ enum arm_features {
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
21
ARM_FEATURE_NEON,
22
FIELD(TBFLAG_A32, THUMB, 0, 1)
22
ARM_FEATURE_M, /* Microcontroller profile. */
23
FIELD(TBFLAG_A32, VECLEN, 1, 3)
23
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
24
FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
24
- ARM_FEATURE_THUMB2EE,
25
+/*
25
ARM_FEATURE_V7MP, /* v7 Multiprocessing Extensions */
26
+ * Indicates whether cp register reads and writes by guest code should access
26
ARM_FEATURE_V7VE, /* v7 Virtualization Extensions (non-EL2 parts) */
27
+ * the secure or nonsecure bank of banked registers; note that this is not
27
ARM_FEATURE_V4T,
28
+ * the same thing as the current security state of the processor!
28
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_jazelle(const ARMISARegisters *id)
29
+ */
29
return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
30
+FIELD(TBFLAG_A32, NS, 6, 1)
30
}
31
FIELD(TBFLAG_A32, VFPEN, 7, 1)
31
32
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
32
+static inline bool isar_feature_t32ee(const ARMISARegisters *id)
33
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
33
+{
34
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
34
+ return FIELD_EX32(id->id_isar3, ID_ISAR3, T32EE) != 0;
35
* checks on the other bits at runtime
35
+}
36
*/
36
+
37
FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
37
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
38
-/* Indicates whether cp register reads and writes by guest code should access
38
{
39
- * the secure or nonsecure bank of banked registers; note that this is not
39
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
40
- * the same thing as the current security state of the processor!
40
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
41
- */
41
index XXXXXXX..XXXXXXX 100644
42
-FIELD(TBFLAG_A32, NS, 19, 1)
42
--- a/linux-user/elfload.c
43
/* For M profile only, Handler (ie not Thread) mode */
43
+++ b/linux-user/elfload.c
44
FIELD(TBFLAG_A32, HANDLER, 21, 1)
44
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
45
/* For M profile only, whether we should generate stack-limit checks */
45
GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
46
GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
47
GET_FEATURE(ARM_FEATURE_IWMMXT, ARM_HWCAP_ARM_IWMMXT);
48
- GET_FEATURE(ARM_FEATURE_THUMB2EE, ARM_HWCAP_ARM_THUMBEE);
49
+ GET_FEATURE_ID(t32ee, ARM_HWCAP_ARM_THUMBEE);
50
GET_FEATURE(ARM_FEATURE_NEON, ARM_HWCAP_ARM_NEON);
51
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
52
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
53
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/cpu.c
56
+++ b/target/arm/cpu.c
57
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
58
set_feature(&cpu->env, ARM_FEATURE_V7);
59
set_feature(&cpu->env, ARM_FEATURE_VFP3);
60
set_feature(&cpu->env, ARM_FEATURE_NEON);
61
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
62
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
63
set_feature(&cpu->env, ARM_FEATURE_EL3);
64
cpu->midr = 0x410fc080;
65
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
66
set_feature(&cpu->env, ARM_FEATURE_VFP3);
67
set_feature(&cpu->env, ARM_FEATURE_VFP_FP16);
68
set_feature(&cpu->env, ARM_FEATURE_NEON);
69
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
70
set_feature(&cpu->env, ARM_FEATURE_EL3);
71
/* Note that A9 supports the MP extensions even for
72
* A9UP and single-core A9MP (which are both different
73
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
74
set_feature(&cpu->env, ARM_FEATURE_V7VE);
75
set_feature(&cpu->env, ARM_FEATURE_VFP4);
76
set_feature(&cpu->env, ARM_FEATURE_NEON);
77
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
78
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
79
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
80
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
81
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
82
set_feature(&cpu->env, ARM_FEATURE_V7VE);
83
set_feature(&cpu->env, ARM_FEATURE_VFP4);
84
set_feature(&cpu->env, ARM_FEATURE_NEON);
85
- set_feature(&cpu->env, ARM_FEATURE_THUMB2EE);
86
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
87
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
88
set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
89
diff --git a/target/arm/helper.c b/target/arm/helper.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/helper.c
92
+++ b/target/arm/helper.c
93
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
94
define_arm_cp_regs(cpu, vmsa_pmsa_cp_reginfo);
95
define_arm_cp_regs(cpu, vmsa_cp_reginfo);
96
}
97
- if (arm_feature(env, ARM_FEATURE_THUMB2EE)) {
98
+ if (cpu_isar_feature(t32ee, cpu)) {
99
define_arm_cp_regs(cpu, t2ee_cp_reginfo);
100
}
101
if (arm_feature(env, ARM_FEATURE_GENERIC_TIMER)) {
102
diff --git a/target/arm/machine.c b/target/arm/machine.c
103
index XXXXXXX..XXXXXXX 100644
104
--- a/target/arm/machine.c
105
+++ b/target/arm/machine.c
106
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
107
static bool thumb2ee_needed(void *opaque)
108
{
109
ARMCPU *cpu = opaque;
110
- CPUARMState *env = &cpu->env;
111
112
- return arm_feature(env, ARM_FEATURE_THUMB2EE);
113
+ return cpu_isar_feature(t32ee, cpu);
114
}
115
116
static const VMStateDescription vmstate_thumb2ee = {
117
--
46
--
118
2.19.1
47
2.20.1
119
48
120
49
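The conversion above is the recurring pattern in this series: replace a parallel ARM_FEATURE_* flag with a predicate derived from the ID registers themselves, so the ID registers become the single source of truth. A minimal standalone sketch of that pattern (plain C with a hand-rolled extract32(); the T32EE bit position is our assumption, and QEMU's real code uses the FIELD_EX32() macro rather than these defines):

    #include <stdint.h>
    #include <stdio.h>

    /* Equivalent of QEMU's extract32() for this example. */
    static inline uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0u >> (32 - length));
    }

    /* Assumed field position: T32EE as a 4-bit field at bits [31:28]
     * of ID_ISAR3; in QEMU the extraction is done by FIELD_EX32(). */
    #define ID_ISAR3_T32EE_SHIFT  28
    #define ID_ISAR3_T32EE_LENGTH  4

    static int isar_feature_t32ee(uint32_t id_isar3)
    {
        return extract32(id_isar3, ID_ISAR3_T32EE_SHIFT,
                         ID_ISAR3_T32EE_LENGTH) != 0;
    }

    int main(void)
    {
        uint32_t id_isar3 = 1u << ID_ISAR3_T32EE_SHIFT; /* advertise T32EE */
        printf("t32ee: %d\n", isar_feature_t32ee(id_isar3));
        return 0;
    }

The benefit of deriving the feature test from the ID register is that guest-visible ID values and QEMU's internal behaviour can no longer disagree.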
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
We are close to running out of TB flags for AArch32; we could
2
start using the cs_base word, but before we do that we can
3
economise on our usage by sharing the same bits for the VFP
4
VECSTRIDE field and the XScale XSCALE_CPAR field. This
5
works because no XScale CPU ever had VFP.
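A compilable toy model of this bit-sharing (the two-bit field at bits [5:4] is illustrative, not the exact TBFLAG_A32 layout):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Both logical fields occupy the same bits of the flags word; which
     * interpretation applies depends on whether the CPU is XScale. */
    #define SHARED_SHIFT 4
    #define SHARED_MASK  (3u << SHARED_SHIFT)

    struct cpu { int is_xscale; uint32_t vecstride; uint32_t cpar; };

    static uint32_t encode_flags(const struct cpu *c)
    {
        uint32_t field = c->is_xscale ? c->cpar : c->vecstride;
        return (field << SHARED_SHIFT) & SHARED_MASK;
    }

    static void decode_flags(const struct cpu *c, uint32_t flags,
                             uint32_t *vecstride, uint32_t *cpar)
    {
        uint32_t field = (flags & SHARED_MASK) >> SHARED_SHIFT;
        if (c->is_xscale) {        /* no XScale CPU has VFP... */
            *cpar = field;
            *vecstride = 0;
        } else {                   /* ...so non-XScale CPUs own the bits */
            *vecstride = field;
            *cpar = 0;
        }
    }

    int main(void)
    {
        struct cpu c = { .is_xscale = 1, .cpar = 2 };
        uint32_t vs, cp;
        decode_flags(&c, encode_flags(&c), &vs, &cp);
        assert(cp == 2 && vs == 0);
        printf("cpar=%u vecstride=%u\n", cp, vs);
        return 0;
    }

The overlap is safe precisely because the two producers are mutually exclusive, which is why both the encode and decode sides must be gated on the same feature test.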
2
6
3
Having V6 alone imply jazelle was wrong for cortex-m0.
4
Change to an assertion for V6 & !M.
5
6
This was harmless, because the only place we tested ARM_FEATURE_JAZELLE
7
was for 'bxj' in disas_arm(), which is unreachable for M-profile cores.
8
9
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20181016223115.24100-6-richard.henderson@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20190416125744.27770-18-peter.maydell@linaro.org
14
---
10
---
15
target/arm/cpu.h | 6 +++++-
11
target/arm/cpu.h | 10 ++++++----
16
target/arm/cpu.c | 17 ++++++++++++++---
12
target/arm/cpu.c | 7 +++++++
17
target/arm/translate.c | 2 +-
13
target/arm/helper.c | 6 +++++-
18
3 files changed, 20 insertions(+), 5 deletions(-)
14
target/arm/translate.c | 9 +++++++--
15
4 files changed, 25 insertions(+), 7 deletions(-)
19
16
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
19
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
21
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
25
ARM_FEATURE_PMU, /* has PMU support */
22
FIELD(TBFLAG_A32, THUMB, 0, 1)
26
ARM_FEATURE_VBAR, /* has cp15 VBAR */
23
FIELD(TBFLAG_A32, VECLEN, 1, 3)
27
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
24
FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
28
- ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
25
+/*
29
ARM_FEATURE_SVE, /* has Scalable Vector Extension */
26
+ * We store the bottom two bits of the CPAR as TB flags and handle
30
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
27
+ * checks on the other bits at runtime. This shares the same bits as
31
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
28
+ * VECSTRIDE, which is OK as no XScale CPU has VFP.
32
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_arm_div(const ARMISARegisters *id)
29
+ */
33
return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
30
+FIELD(TBFLAG_A32, XSCALE_CPAR, 4, 2)
34
}
31
/*
35
32
* Indicates whether cp register reads and writes by guest code should access
36
+static inline bool isar_feature_jazelle(const ARMISARegisters *id)
33
* the secure or nonsecure bank of banked registers; note that this is not
37
+{
34
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
38
+ return FIELD_EX32(id->id_isar1, ID_ISAR1, JAZELLE) != 0;
35
FIELD(TBFLAG_A32, VFPEN, 7, 1)
39
+}
36
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
40
+
37
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
41
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
38
-/* We store the bottom two bits of the CPAR as TB flags and handle
42
{
39
- * checks on the other bits at runtime
43
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
40
- */
41
-FIELD(TBFLAG_A32, XSCALE_CPAR, 17, 2)
42
/* For M profile only, Handler (ie not Thread) mode */
43
FIELD(TBFLAG_A32, HANDLER, 21, 1)
44
/* For M profile only, whether we should generate stack-limit checks */
44
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
45
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
45
index XXXXXXX..XXXXXXX 100644
46
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/cpu.c
47
--- a/target/arm/cpu.c
47
+++ b/target/arm/cpu.c
48
+++ b/target/arm/cpu.c
48
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
49
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
50
set_feature(env, ARM_FEATURE_THUMB_DSP);
49
}
51
}
50
if (arm_feature(env, ARM_FEATURE_V6)) {
52
51
set_feature(env, ARM_FEATURE_V5);
53
+ /*
52
- set_feature(env, ARM_FEATURE_JAZELLE);
54
+ * We rely on no XScale CPU having VFP so we can use the same bits in the
53
if (!arm_feature(env, ARM_FEATURE_M)) {
55
+ * TB flags field for VECSTRIDE and XSCALE_CPAR.
54
+ assert(cpu_isar_feature(jazelle, cpu));
56
+ */
55
set_feature(env, ARM_FEATURE_AUXCR);
57
+ assert(!(arm_feature(env, ARM_FEATURE_VFP) &&
58
+ arm_feature(env, ARM_FEATURE_XSCALE)));
59
+
60
if (arm_feature(env, ARM_FEATURE_V7) &&
61
!arm_feature(env, ARM_FEATURE_M) &&
62
!arm_feature(env, ARM_FEATURE_PMSA)) {
63
diff --git a/target/arm/helper.c b/target/arm/helper.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/helper.c
66
+++ b/target/arm/helper.c
67
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
68
|| arm_el_is_aa64(env, 1) || arm_feature(env, ARM_FEATURE_M)) {
69
flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
56
}
70
}
71
- flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
72
+ /* Note that XSCALE_CPAR shares bits with VECSTRIDE */
73
+ if (arm_feature(env, ARM_FEATURE_XSCALE)) {
74
+ flags = FIELD_DP32(flags, TBFLAG_A32,
75
+ XSCALE_CPAR, env->cp15.c15_cpar);
76
+ }
57
}
77
}
58
@@ -XXX,XX +XXX,XX @@ static void arm926_initfn(Object *obj)
78
59
set_feature(&cpu->env, ARM_FEATURE_VFP);
79
flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
60
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
61
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
62
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
63
cpu->midr = 0x41069265;
64
cpu->reset_fpsid = 0x41011090;
65
cpu->ctr = 0x1dd20d2;
66
cpu->reset_sctlr = 0x00090078;
67
+
68
+ /*
69
+ * ARMv5 does not have the ID_ISAR registers, but we can still
70
+ * set the field to indicate Jazelle support within QEMU.
71
+ */
72
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
73
}
74
75
static void arm946_initfn(Object *obj)
76
@@ -XXX,XX +XXX,XX @@ static void arm1026_initfn(Object *obj)
77
set_feature(&cpu->env, ARM_FEATURE_AUXCR);
78
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
79
set_feature(&cpu->env, ARM_FEATURE_CACHE_TEST_CLEAN);
80
- set_feature(&cpu->env, ARM_FEATURE_JAZELLE);
81
cpu->midr = 0x4106a262;
82
cpu->reset_fpsid = 0x410110a0;
83
cpu->ctr = 0x1dd20d2;
84
cpu->reset_sctlr = 0x00090078;
85
cpu->reset_auxcr = 1;
86
+
87
+ /*
88
+ * ARMv5 does not have the ID_ISAR registers, but we can still
89
+ * set the field to indicate Jazelle support within QEMU.
90
+ */
91
+ cpu->isar.id_isar1 = FIELD_DP32(cpu->isar.id_isar1, ID_ISAR1, JAZELLE, 1);
92
+
93
{
94
/* The 1026 had an IFAR at c6,c0,0,1 rather than the ARMv6 c6,c0,0,2 */
95
ARMCPRegInfo ifar = {
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
80
diff --git a/target/arm/translate.c b/target/arm/translate.c
97
index XXXXXXX..XXXXXXX 100644
81
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/translate.c
82
--- a/target/arm/translate.c
99
+++ b/target/arm/translate.c
83
+++ b/target/arm/translate.c
100
@@ -XXX,XX +XXX,XX @@
84
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
101
#define ENABLE_ARCH_5 arm_dc_feature(s, ARM_FEATURE_V5)
85
dc->fp_excp_el = FIELD_EX32(tb_flags, TBFLAG_ANY, FPEXC_EL);
102
/* currently all emulated v5 cores are also v5TE, so don't bother */
86
dc->vfp_enabled = FIELD_EX32(tb_flags, TBFLAG_A32, VFPEN);
103
#define ENABLE_ARCH_5TE arm_dc_feature(s, ARM_FEATURE_V5)
87
dc->vec_len = FIELD_EX32(tb_flags, TBFLAG_A32, VECLEN);
104
-#define ENABLE_ARCH_5J arm_dc_feature(s, ARM_FEATURE_JAZELLE)
88
- dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
105
+#define ENABLE_ARCH_5J dc_isar_feature(jazelle, s)
89
- dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
106
#define ENABLE_ARCH_6 arm_dc_feature(s, ARM_FEATURE_V6)
90
+ if (arm_feature(env, ARM_FEATURE_XSCALE)) {
107
#define ENABLE_ARCH_6K arm_dc_feature(s, ARM_FEATURE_V6K)
91
+ dc->c15_cpar = FIELD_EX32(tb_flags, TBFLAG_A32, XSCALE_CPAR);
108
#define ENABLE_ARCH_6T2 arm_dc_feature(s, ARM_FEATURE_THUMB2)
92
+ dc->vec_stride = 0;
93
+ } else {
94
+ dc->vec_stride = FIELD_EX32(tb_flags, TBFLAG_A32, VECSTRIDE);
95
+ dc->c15_cpar = 0;
96
+ }
97
dc->v7m_handler_mode = FIELD_EX32(tb_flags, TBFLAG_A32, HANDLER);
98
dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
99
regime_is_secure(env, dc->mmu_idx);
109
--
100
--
110
2.19.1
101
2.20.1
111
102
112
103
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The M-profile FPCCR.S bit indicates the security status of
2
the floating point context. In the pseudocode ExecuteFPCheck()
3
function it is unconditionally set to match the current
4
security state whenever a floating point instruction is
5
executed.
2
6
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Implement this by adding a new TB flag which tracks whether
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
FPCCR.S is different from the current security state, so
5
Message-id: 20181016223115.24100-9-richard.henderson@linaro.org
9
that we only need to emit the code to update it in the
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
less-common case when it is not already set correctly.
11
12
Note that we will add the handling for the other work done
13
by ExecuteFPCheck() in later commits.
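The pattern here is worth spelling out: the comparison is done once per translation block and cached as a TB flag, so the generated fixup code is emitted at most once per block. A minimal model of that translate-time caching (plain C; the struct and function names are stand-ins, not QEMU's TCG API):

    #include <stdbool.h>
    #include <stdio.h>

    /* The translator caches "FPCCR.S disagrees with the current security
     * state" as a per-block flag, emits the fixup before the first FP
     * insn only, and clears the flag so later FP insns in the same block
     * emit nothing extra. */
    struct dc { bool fpccr_s_wrong; bool secure; };

    static void emit_fpccr_s_fixup(struct dc *s)
    {
        printf("emit: FPCCR.S := %d\n", s->secure);
    }

    static void translate_fp_insn(struct dc *s)
    {
        if (s->fpccr_s_wrong) {
            emit_fpccr_s_fixup(s);
            s->fpccr_s_wrong = false; /* no further fixups in this TB */
        }
        printf("emit: FP operation\n");
    }

    int main(void)
    {
        struct dc s = { .fpccr_s_wrong = true, .secure = true };
        translate_fp_insn(&s);  /* emits fixup + op */
        translate_fp_insn(&s);  /* emits op only */
        return 0;
    }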
14
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20190416125744.27770-19-peter.maydell@linaro.org
8
---
18
---
9
target/arm/cpu.h | 17 +++++++++++++++-
19
target/arm/cpu.h | 2 ++
10
linux-user/elfload.c | 6 +-----
20
target/arm/translate.h | 1 +
11
target/arm/cpu64.c | 16 ++++++++-------
21
target/arm/helper.c | 5 +++++
12
target/arm/helper.c | 2 +-
22
target/arm/translate.c | 20 ++++++++++++++++++++
13
target/arm/translate-a64.c | 40 +++++++++++++++++++-------------------
23
4 files changed, 28 insertions(+)
14
target/arm/translate.c | 6 +++---
15
6 files changed, 50 insertions(+), 37 deletions(-)
16
24
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
27
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
28
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ enum arm_features {
29
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
22
ARM_FEATURE_PMU, /* has PMU support */
30
FIELD(TBFLAG_A32, VFPEN, 7, 1)
23
ARM_FEATURE_VBAR, /* has cp15 VBAR */
31
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
24
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
32
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
25
- ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
33
+/* For M profile only, set if FPCCR.S does not match current security state */
26
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
34
+FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)
27
};
35
/* For M profile only, Handler (ie not Thread) mode */
28
36
FIELD(TBFLAG_A32, HANDLER, 21, 1)
29
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
37
/* For M profile only, whether we should generate stack-limit checks */
30
return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
38
diff --git a/target/arm/translate.h b/target/arm/translate.h
31
}
32
33
+static inline bool isar_feature_aa32_fp16_arith(const ARMISARegisters *id)
34
+{
35
+ /*
36
+ * This is a placeholder for use by VCMA until the rest of
37
+ * the ARMv8.2-FP16 extension is implemented for aa32 mode.
38
+ * At which point we can properly set and check MVFR1.FPHP.
39
+ */
40
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
41
+}
42
+
43
/*
44
* 64-bit feature tests via id registers.
45
*/
46
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
47
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
48
}
49
50
+static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
51
+{
52
+ /* We always set the AdvSIMD and FP fields identically wrt FP16. */
53
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, FP) == 1;
54
+}
55
+
56
static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
57
{
58
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
59
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
60
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
61
--- a/linux-user/elfload.c
40
--- a/target/arm/translate.h
62
+++ b/linux-user/elfload.c
41
+++ b/target/arm/translate.h
63
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
42
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
64
hwcaps |= ARM_HWCAP_A64_ASIMD;
43
bool v7m_handler_mode;
65
44
bool v8m_secure; /* true if v8M and we're in Secure mode */
66
/* probe for the extra features */
45
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
67
-#define GET_FEATURE(feat, hwcap) \
46
+ bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
68
- do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
47
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
69
#define GET_FEATURE_ID(feat, hwcap) \
48
* so that top level loop can generate correct syndrome information.
70
do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
49
*/
71
72
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
73
GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
74
GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
75
GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
76
- GET_FEATURE(ARM_FEATURE_V8_FP16,
77
- ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
78
+ GET_FEATURE_ID(aa64_fp16, ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
79
GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
80
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
81
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
82
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
83
GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
84
85
-#undef GET_FEATURE
86
#undef GET_FEATURE_ID
87
88
return hwcaps;
89
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/cpu64.c
92
+++ b/target/arm/cpu64.c
93
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
94
95
t = cpu->isar.id_aa64pfr0;
96
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
97
+ t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
98
+ t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
99
cpu->isar.id_aa64pfr0 = t;
100
101
/* Replicate the same data to the 32-bit id registers. */
102
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
103
u = FIELD_DP32(u, ID_ISAR6, DP, 1);
104
cpu->isar.id_isar6 = u;
105
106
-#ifdef CONFIG_USER_ONLY
107
- /* We don't set these in system emulation mode for the moment,
108
- * since we don't correctly set the ID registers to advertise them,
109
- * and in some cases they're only available in AArch64 and not AArch32,
110
- * whereas the architecture requires them to be present in both if
111
- * present in either.
112
+ /*
113
+ * FIXME: We do not yet support ARMv8.2-fp16 for AArch32,
114
+ * so do not set MVFR1.FPHP. Strictly speaking this is not legal,
115
+ * but it is also not legal to enable SVE without support for FP16,
116
+ * and enabling SVE in system mode is more useful in the short term.
117
*/
118
- set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
119
+
120
+#ifdef CONFIG_USER_ONLY
121
/* For usermode -cpu max we can use a larger and more efficient DCZ
122
* blocksize since we don't have to follow what the hardware does.
123
*/
124
diff --git a/target/arm/helper.c b/target/arm/helper.c
50
diff --git a/target/arm/helper.c b/target/arm/helper.c
125
index XXXXXXX..XXXXXXX 100644
51
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/helper.c
52
--- a/target/arm/helper.c
127
+++ b/target/arm/helper.c
53
+++ b/target/arm/helper.c
128
@@ -XXX,XX +XXX,XX @@ void HELPER(vfp_set_fpscr)(CPUARMState *env, uint32_t val)
54
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
129
uint32_t changed;
55
flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
130
131
/* When ARMv8.2-FP16 is not supported, FZ16 is RES0. */
132
- if (!arm_feature(env, ARM_FEATURE_V8_FP16)) {
133
+ if (!cpu_isar_feature(aa64_fp16, arm_env_get_cpu(env))) {
134
val &= ~FPCR_FZ16;
135
}
56
}
136
57
137
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
58
+ if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
138
index XXXXXXX..XXXXXXX 100644
59
+ FIELD_EX32(env->v7m.fpccr[M_REG_S], V7M_FPCCR, S) != env->v7m.secure) {
139
--- a/target/arm/translate-a64.c
60
+ flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
140
+++ b/target/arm/translate-a64.c
61
+ }
141
@@ -XXX,XX +XXX,XX @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
62
+
142
break;
63
*pflags = flags;
143
case 3:
64
*cs_base = 0;
144
size = MO_16;
65
}
145
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
146
+ if (dc_isar_feature(aa64_fp16, s)) {
147
break;
148
}
149
/* fallthru */
150
@@ -XXX,XX +XXX,XX @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
151
break;
152
case 3:
153
size = MO_16;
154
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
155
+ if (dc_isar_feature(aa64_fp16, s)) {
156
break;
157
}
158
/* fallthru */
159
@@ -XXX,XX +XXX,XX @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
160
break;
161
case 3:
162
sz = MO_16;
163
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
164
+ if (dc_isar_feature(aa64_fp16, s)) {
165
break;
166
}
167
/* fallthru */
168
@@ -XXX,XX +XXX,XX @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
169
handle_fp_1src_double(s, opcode, rd, rn);
170
break;
171
case 3:
172
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
173
+ if (!dc_isar_feature(aa64_fp16, s)) {
174
unallocated_encoding(s);
175
return;
176
}
177
@@ -XXX,XX +XXX,XX @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
178
handle_fp_2src_double(s, opcode, rd, rn, rm);
179
break;
180
case 3:
181
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
182
+ if (!dc_isar_feature(aa64_fp16, s)) {
183
unallocated_encoding(s);
184
return;
185
}
186
@@ -XXX,XX +XXX,XX @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
187
handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
188
break;
189
case 3:
190
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
191
+ if (!dc_isar_feature(aa64_fp16, s)) {
192
unallocated_encoding(s);
193
return;
194
}
195
@@ -XXX,XX +XXX,XX @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
196
break;
197
case 3:
198
sz = MO_16;
199
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
200
+ if (dc_isar_feature(aa64_fp16, s)) {
201
break;
202
}
203
/* fallthru */
204
@@ -XXX,XX +XXX,XX @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
205
case 1: /* float64 */
206
break;
207
case 3: /* float16 */
208
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
209
+ if (dc_isar_feature(aa64_fp16, s)) {
210
break;
211
}
212
/* fallthru */
213
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
214
break;
215
case 0x6: /* 16-bit float, 32-bit int */
216
case 0xe: /* 16-bit float, 64-bit int */
217
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
218
+ if (dc_isar_feature(aa64_fp16, s)) {
219
break;
220
}
221
/* fallthru */
222
@@ -XXX,XX +XXX,XX @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
223
case 1: /* float64 */
224
break;
225
case 3: /* float16 */
226
- if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
227
+ if (dc_isar_feature(aa64_fp16, s)) {
228
break;
229
}
230
/* fallthru */
231
@@ -XXX,XX +XXX,XX @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
232
*/
233
is_min = extract32(size, 1, 1);
234
is_fp = true;
235
- if (!is_u && arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
236
+ if (!is_u && dc_isar_feature(aa64_fp16, s)) {
237
size = 1;
238
} else if (!is_u || !is_q || extract32(size, 0, 1)) {
239
unallocated_encoding(s);
240
@@ -XXX,XX +XXX,XX @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
241
242
if (o2 != 0 || ((cmode == 0xf) && is_neg && !is_q)) {
243
/* Check for FMOV (vector, immediate) - half-precision */
244
- if (!(arm_dc_feature(s, ARM_FEATURE_V8_FP16) && o2 && cmode == 0xf)) {
245
+ if (!(dc_isar_feature(aa64_fp16, s) && o2 && cmode == 0xf)) {
246
unallocated_encoding(s);
247
return;
248
}
249
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
250
case 0x2f: /* FMINP */
251
/* FP op, size[0] is 32 or 64 bit*/
252
if (!u) {
253
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
254
+ if (!dc_isar_feature(aa64_fp16, s)) {
255
unallocated_encoding(s);
256
return;
257
} else {
258
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
259
size = MO_32;
260
} else if (immh & 2) {
261
size = MO_16;
262
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
263
+ if (!dc_isar_feature(aa64_fp16, s)) {
264
unallocated_encoding(s);
265
return;
266
}
267
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
268
size = MO_32;
269
} else if (immh & 0x2) {
270
size = MO_16;
271
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
272
+ if (!dc_isar_feature(aa64_fp16, s)) {
273
unallocated_encoding(s);
274
return;
275
}
276
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
277
return;
278
}
279
280
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
281
+ if (!dc_isar_feature(aa64_fp16, s)) {
282
unallocated_encoding(s);
283
}
284
285
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
286
TCGv_ptr fpst;
287
bool pairwise = false;
288
289
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
290
+ if (!dc_isar_feature(aa64_fp16, s)) {
291
unallocated_encoding(s);
292
return;
293
}
294
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
295
case 0x1c: /* FCADD, #90 */
296
case 0x1e: /* FCADD, #270 */
297
if (size == 0
298
- || (size == 1 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))
299
+ || (size == 1 && !dc_isar_feature(aa64_fp16, s))
300
|| (size == 3 && !is_q)) {
301
unallocated_encoding(s);
302
return;
303
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
304
bool need_fpst = true;
305
int rmode;
306
307
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
308
+ if (!dc_isar_feature(aa64_fp16, s)) {
309
unallocated_encoding(s);
310
return;
311
}
312
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
313
}
314
break;
315
}
316
- if (is_fp16 && !arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
317
+ if (is_fp16 && !dc_isar_feature(aa64_fp16, s)) {
318
unallocated_encoding(s);
319
return;
320
}
321
diff --git a/target/arm/translate.c b/target/arm/translate.c
66
diff --git a/target/arm/translate.c b/target/arm/translate.c
322
index XXXXXXX..XXXXXXX 100644
67
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/translate.c
68
--- a/target/arm/translate.c
324
+++ b/target/arm/translate.c
69
+++ b/target/arm/translate.c
325
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
70
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
326
int size = extract32(insn, 20, 1);
327
data = extract32(insn, 23, 2); /* rot */
328
if (!dc_isar_feature(aa32_vcma, s)
329
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
330
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
331
return 1;
332
}
71
}
333
fn_gvec_ptr = size ? gen_helper_gvec_fcmlas : gen_helper_gvec_fcmlah;
72
}
334
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
73
335
int size = extract32(insn, 20, 1);
74
+ if (arm_dc_feature(s, ARM_FEATURE_M)) {
336
data = extract32(insn, 24, 1); /* rot */
75
+ /* Handle M-profile lazy FP state mechanics */
337
if (!dc_isar_feature(aa32_vcma, s)
76
+
338
- || (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
77
+ /* Update ownership of FP context: set FPCCR.S to match current state */
339
+ || (!size && !dc_isar_feature(aa32_fp16_arith, s))) {
78
+ if (s->v8m_fpccr_s_wrong) {
340
return 1;
79
+ TCGv_i32 tmp;
341
}
80
+
342
fn_gvec_ptr = size ? gen_helper_gvec_fcadds : gen_helper_gvec_fcaddh;
81
+ tmp = load_cpu_field(v7m.fpccr[M_REG_S]);
343
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
82
+ if (s->v8m_secure) {
344
return 1;
83
+ tcg_gen_ori_i32(tmp, tmp, R_V7M_FPCCR_S_MASK);
345
}
84
+ } else {
346
if (size == 0) {
85
+ tcg_gen_andi_i32(tmp, tmp, ~R_V7M_FPCCR_S_MASK);
347
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
86
+ }
348
+ if (!dc_isar_feature(aa32_fp16_arith, s)) {
87
+ store_cpu_field(tmp, v7m.fpccr[M_REG_S]);
349
return 1;
88
+ /* Don't need to do this for any further FP insns in this TB */
350
}
89
+ s->v8m_fpccr_s_wrong = false;
351
/* For fp16, rm is just Vm, and index is M. */
90
+ }
91
+ }
92
+
93
if (extract32(insn, 28, 4) == 0xf) {
94
/*
95
* Encodings with T=1 (Thumb) or unconditional (ARM):
96
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
97
dc->v8m_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
98
regime_is_secure(env, dc->mmu_idx);
99
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
100
+ dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
101
dc->cp_regs = cpu->cp_regs;
102
dc->features = env->features;
103
352
--
104
--
353
2.19.1
105
2.20.1
354
106
355
107
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
The M-profile FPCCR.ASPEN bit indicates that automatic floating-point
2
context preservation is enabled. Before executing any floating-point
3
instruction, if FPCCR.ASPEN is set and the CONTROL FPCA/SFPA bits
4
indicate that there is no active floating point context then we
5
must create a new context (by initializing FPSCR and setting
6
FPCA/SFPA to indicate that the context is now active). In the
7
pseudocode this is handled by ExecuteFPCheck().
2
8
3
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
9
Implement this with a new TB flag which tracks whether we
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
need to create a new FP context.
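A compilable sketch of just the ExecuteFPCheck() condition described above (the struct is illustrative; the field names follow FPCCR.ASPEN and CONTROL.FPCA/SFPA from the architecture):

    #include <stdbool.h>
    #include <stdio.h>

    /* With ASPEN set, a new FP context is needed if FPCA is clear, or,
     * when executing in the Secure state, if SFPA is clear. */
    struct m_state {
        bool aspen;   /* FPCCR.ASPEN */
        bool fpca;    /* CONTROL.FPCA */
        bool sfpa;    /* CONTROL.SFPA (Secure only) */
        bool secure;
    };

    static bool new_fp_ctxt_needed(const struct m_state *s)
    {
        return s->aspen && (!s->fpca || (s->secure && !s->sfpa));
    }

    int main(void)
    {
        struct m_state s = { .aspen = true, .fpca = false, .secure = false };
        printf("new context needed: %d\n", new_fp_ctxt_needed(&s));
        return 0;
    }

Because this condition changes only on FPCCR/CONTROL writes, it can be evaluated when the TB flags are computed rather than on every FP instruction.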
5
Message-id: 20181016223115.24100-8-richard.henderson@linaro.org
11
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20190416125744.27770-20-peter.maydell@linaro.org
8
---
15
---
9
target/arm/cpu.h | 16 +++++++++++++++-
16
target/arm/cpu.h | 2 ++
10
linux-user/aarch64/signal.c | 4 ++--
17
target/arm/translate.h | 1 +
11
linux-user/elfload.c | 2 +-
18
target/arm/helper.c | 13 +++++++++++++
12
linux-user/syscall.c | 10 ++++++----
19
target/arm/translate.c | 29 +++++++++++++++++++++++++++++
13
target/arm/cpu64.c | 5 ++++-
20
4 files changed, 45 insertions(+)
14
target/arm/helper.c | 9 ++++++---
15
target/arm/machine.c | 3 +--
16
target/arm/translate-a64.c | 4 ++--
17
8 files changed, 37 insertions(+), 16 deletions(-)
18
21
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
24
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
25
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64ISAR1, FRINTTS, 32, 4)
26
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
24
FIELD(ID_AA64ISAR1, SB, 36, 4)
27
FIELD(TBFLAG_A32, VFPEN, 7, 1)
25
FIELD(ID_AA64ISAR1, SPECRES, 40, 4)
28
FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
26
29
FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
27
+FIELD(ID_AA64PFR0, EL0, 0, 4)
30
+/* For M profile only, set if we must create a new FP context */
28
+FIELD(ID_AA64PFR0, EL1, 4, 4)
31
+FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
29
+FIELD(ID_AA64PFR0, EL2, 8, 4)
32
/* For M profile only, set if FPCCR.S does not match current security state */
30
+FIELD(ID_AA64PFR0, EL3, 12, 4)
33
FIELD(TBFLAG_A32, FPCCR_S_WRONG, 20, 1)
31
+FIELD(ID_AA64PFR0, FP, 16, 4)
34
/* For M profile only, Handler (ie not Thread) mode */
32
+FIELD(ID_AA64PFR0, ADVSIMD, 20, 4)
35
diff --git a/target/arm/translate.h b/target/arm/translate.h
33
+FIELD(ID_AA64PFR0, GIC, 24, 4)
34
+FIELD(ID_AA64PFR0, RAS, 28, 4)
35
+FIELD(ID_AA64PFR0, SVE, 32, 4)
36
+
37
QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
38
39
/* If adding a feature bit which corresponds to a Linux ELF
40
@@ -XXX,XX +XXX,XX @@ enum arm_features {
41
ARM_FEATURE_PMU, /* has PMU support */
42
ARM_FEATURE_VBAR, /* has cp15 VBAR */
43
ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
44
- ARM_FEATURE_SVE, /* has Scalable Vector Extension */
45
ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
46
ARM_FEATURE_M_MAIN, /* M profile Main Extension */
47
};
48
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
49
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
50
}
51
52
+static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
53
+{
54
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
55
+}
56
+
57
/*
58
* Forward to the above feature tests given an ARMCPU pointer.
59
*/
60
diff --git a/linux-user/aarch64/signal.c b/linux-user/aarch64/signal.c
61
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
62
--- a/linux-user/aarch64/signal.c
37
--- a/target/arm/translate.h
63
+++ b/linux-user/aarch64/signal.c
38
+++ b/target/arm/translate.h
64
@@ -XXX,XX +XXX,XX @@ static int target_restore_sigframe(CPUARMState *env,
39
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
65
break;
40
bool v8m_secure; /* true if v8M and we're in Secure mode */
66
41
bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
67
case TARGET_SVE_MAGIC:
42
bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
68
- if (arm_feature(env, ARM_FEATURE_SVE)) {
43
+ bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
69
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
44
/* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
70
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
45
* so that top level loop can generate correct syndrome information.
71
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
46
*/
72
if (!sve && size == sve_size) {
73
@@ -XXX,XX +XXX,XX @@ static void target_setup_frame(int usig, struct target_sigaction *ka,
74
&layout);
75
76
/* SVE state needs saving only if it exists. */
77
- if (arm_feature(env, ARM_FEATURE_SVE)) {
78
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(env))) {
79
vq = (env->vfp.zcr_el[1] & 0xf) + 1;
80
sve_size = QEMU_ALIGN_UP(TARGET_SVE_SIG_CONTEXT_SIZE(vq), 16);
81
sve_ofs = alloc_sigframe_space(sve_size, &layout);
82
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/linux-user/elfload.c
85
+++ b/linux-user/elfload.c
86
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
87
GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
88
GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
89
GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
90
- GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
91
+ GET_FEATURE_ID(aa64_sve, ARM_HWCAP_A64_SVE);
92
93
#undef GET_FEATURE
94
#undef GET_FEATURE_ID
95
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
96
index XXXXXXX..XXXXXXX 100644
97
--- a/linux-user/syscall.c
98
+++ b/linux-user/syscall.c
99
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
100
* even though the current architectural maximum is VQ=16.
101
*/
102
ret = -TARGET_EINVAL;
103
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)
104
+ if (cpu_isar_feature(aa64_sve, arm_env_get_cpu(cpu_env))
105
&& arg2 >= 0 && arg2 <= 512 * 16 && !(arg2 & 15)) {
106
CPUARMState *env = cpu_env;
107
ARMCPU *cpu = arm_env_get_cpu(env);
108
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
109
return ret;
110
case TARGET_PR_SVE_GET_VL:
111
ret = -TARGET_EINVAL;
112
- if (arm_feature(cpu_env, ARM_FEATURE_SVE)) {
113
- CPUARMState *env = cpu_env;
114
- ret = ((env->vfp.zcr_el[1] & 0xf) + 1) * 16;
115
+ {
116
+ ARMCPU *cpu = arm_env_get_cpu(cpu_env);
117
+ if (cpu_isar_feature(aa64_sve, cpu)) {
118
+ ret = ((cpu->env.vfp.zcr_el[1] & 0xf) + 1) * 16;
119
+ }
120
}
121
return ret;
122
#endif /* AARCH64 */
123
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/target/arm/cpu64.c
126
+++ b/target/arm/cpu64.c
127
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
128
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
129
cpu->isar.id_aa64isar1 = t;
130
131
+ t = cpu->isar.id_aa64pfr0;
132
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
133
+ cpu->isar.id_aa64pfr0 = t;
134
+
135
/* Replicate the same data to the 32-bit id registers. */
136
u = cpu->isar.id_isar5;
137
u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
139
* present in either.
140
*/
141
set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
142
- set_feature(&cpu->env, ARM_FEATURE_SVE);
143
/* For usermode -cpu max we can use a larger and more efficient DCZ
144
* blocksize since we don't have to follow what the hardware does.
145
*/
146
diff --git a/target/arm/helper.c b/target/arm/helper.c
47
diff --git a/target/arm/helper.c b/target/arm/helper.c
147
index XXXXXXX..XXXXXXX 100644
48
index XXXXXXX..XXXXXXX 100644
148
--- a/target/arm/helper.c
49
--- a/target/arm/helper.c
149
+++ b/target/arm/helper.c
50
+++ b/target/arm/helper.c
150
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
51
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
151
define_one_arm_cp_reg(cpu, &sctlr);
52
flags = FIELD_DP32(flags, TBFLAG_A32, FPCCR_S_WRONG, 1);
152
}
53
}
153
54
154
- if (arm_feature(env, ARM_FEATURE_SVE)) {
55
+ if (arm_feature(env, ARM_FEATURE_M) &&
155
+ if (cpu_isar_feature(aa64_sve, cpu)) {
56
+ (env->v7m.fpccr[env->v7m.secure] & R_V7M_FPCCR_ASPEN_MASK) &&
156
define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
57
+ (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) ||
157
if (arm_feature(env, ARM_FEATURE_EL2)) {
58
+ (env->v7m.secure &&
158
define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
59
+ !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)))) {
159
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
60
+ /*
160
uint32_t flags;
61
+ * ASPEN is set, but FPCA/SFPA indicate that there is no active
161
62
+ * FP context; we must create a new FP context before executing
162
if (is_a64(env)) {
63
+ * any FP insn.
163
+ ARMCPU *cpu = arm_env_get_cpu(env);
64
+ */
65
+ flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
66
+ }
164
+
67
+
165
*pc = env->pc;
68
*pflags = flags;
166
flags = ARM_TBFLAG_AARCH64_STATE_MASK;
69
*cs_base = 0;
167
/* Get control bits for tagged addresses */
70
}
168
flags |= (arm_regime_tbi0(env, mmu_idx) << ARM_TBFLAG_TBI0_SHIFT);
71
diff --git a/target/arm/translate.c b/target/arm/translate.c
169
flags |= (arm_regime_tbi1(env, mmu_idx) << ARM_TBFLAG_TBI1_SHIFT);
72
index XXXXXXX..XXXXXXX 100644
170
73
--- a/target/arm/translate.c
171
- if (arm_feature(env, ARM_FEATURE_SVE)) {
74
+++ b/target/arm/translate.c
172
+ if (cpu_isar_feature(aa64_sve, cpu)) {
75
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
173
int sve_el = sve_exception_el(env, current_el);
76
/* Don't need to do this for any further FP insns in this TB */
174
uint32_t zcr_len;
77
s->v8m_fpccr_s_wrong = false;
175
78
}
176
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq)
79
+
177
void aarch64_sve_change_el(CPUARMState *env, int old_el,
80
+ if (s->v7m_new_fp_ctxt_needed) {
178
int new_el, bool el0_a64)
81
+ /*
179
{
82
+ * Create new FP context by updating CONTROL.FPCA, CONTROL.SFPA
180
+ ARMCPU *cpu = arm_env_get_cpu(env);
83
+ * and the FPSCR.
181
int old_len, new_len;
84
+ */
182
bool old_a64, new_a64;
85
+ TCGv_i32 control, fpscr;
183
86
+ uint32_t bits = R_V7M_CONTROL_FPCA_MASK;
184
/* Nothing to do if no SVE. */
87
+
185
- if (!arm_feature(env, ARM_FEATURE_SVE)) {
88
+ fpscr = load_cpu_field(v7m.fpdscr[s->v8m_secure]);
186
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
89
+ gen_helper_vfp_set_fpscr(cpu_env, fpscr);
187
return;
90
+ tcg_temp_free_i32(fpscr);
91
+ /*
92
+ * We don't need to arrange to end the TB, because the only
93
+ * parts of FPSCR which we cache in the TB flags are the VECLEN
94
+ * and VECSTRIDE, and those don't exist for M-profile.
95
+ */
96
+
97
+ if (s->v8m_secure) {
98
+ bits |= R_V7M_CONTROL_SFPA_MASK;
99
+ }
100
+ control = load_cpu_field(v7m.control[M_REG_S]);
101
+ tcg_gen_ori_i32(control, control, bits);
102
+ store_cpu_field(control, v7m.control[M_REG_S]);
103
+ /* Don't need to do this for any further FP insns in this TB */
104
+ s->v7m_new_fp_ctxt_needed = false;
105
+ }
188
}
106
}
189
107
190
diff --git a/target/arm/machine.c b/target/arm/machine.c
108
if (extract32(insn, 28, 4) == 0xf) {
191
index XXXXXXX..XXXXXXX 100644
109
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
192
--- a/target/arm/machine.c
110
regime_is_secure(env, dc->mmu_idx);
193
+++ b/target/arm/machine.c
111
dc->v8m_stackcheck = FIELD_EX32(tb_flags, TBFLAG_A32, STACKCHECK);
194
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_iwmmxt = {
112
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
195
static bool sve_needed(void *opaque)
113
+ dc->v7m_new_fp_ctxt_needed =
196
{
114
+ FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
197
ARMCPU *cpu = opaque;
115
dc->cp_regs = cpu->cp_regs;
198
- CPUARMState *env = &cpu->env;
116
dc->features = env->features;
199
117
200
- return arm_feature(env, ARM_FEATURE_SVE);
201
+ return cpu_isar_feature(aa64_sve, cpu);
202
}
203
204
/* The first two words of each Zreg is stored in VFP state. */
205
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
206
index XXXXXXX..XXXXXXX 100644
207
--- a/target/arm/translate-a64.c
208
+++ b/target/arm/translate-a64.c
209
@@ -XXX,XX +XXX,XX @@ void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
210
cpu_fprintf(f, " FPCR=%08x FPSR=%08x\n",
211
vfp_get_fpcr(env), vfp_get_fpsr(env));
212
213
- if (arm_feature(env, ARM_FEATURE_SVE) && sve_exception_el(env, el) == 0) {
214
+ if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) == 0) {
215
int j, zcr_len = sve_zcr_len_for_el(env, el);
216
217
for (i = 0; i <= FFR_PRED_NUM; i++) {
218
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
219
unallocated_encoding(s);
220
break;
221
case 0x2:
222
- if (!arm_dc_feature(s, ARM_FEATURE_SVE) || !disas_sve(s, insn)) {
223
+ if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
224
unallocated_encoding(s);
225
}
226
break;
227
--
118
--
228
2.19.1
119
2.20.1
229
120
230
121
diff view generated by jsdifflib
1
The switch_mode() function is defined in target/arm/helper.c and used
1
Add a new helper function which returns the MMU index to use
2
only in that file, so we can make it file-local
2
for v7M, where the caller specifies all of the security
3
rather than global.
3
state, privilege level and whether the execution priority
4
is negative, and reimplement the existing
5
arm_v7m_mmu_idx_for_secstate_and_priv() in terms of it.
6
7
We are going to need this for the lazy-FP-stacking code.
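The shape of the refactoring, reduced to a standalone sketch (the index encoding and the negpri query here are stand-ins for the real NVIC code, not QEMU's actual bit assignments):

    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned mmu_idx_t;

    /* Fully-explicit helper: every input is a parameter, so a caller
     * that already knows the answers (e.g. lazy FP stacking, working
     * from saved FPCCR state) can supply them directly. */
    static mmu_idx_t v7m_mmu_idx_all(bool secstate, bool priv, bool negpri)
    {
        mmu_idx_t idx = 0;
        if (priv)     idx |= 1;
        if (negpri)   idx |= 2;
        if (secstate) idx |= 4;
        return idx;
    }

    static bool neg_prio_requested(bool secstate)
    {
        (void)secstate;
        return false; /* stand-in for armv7m_nvic_neg_prio_requested() */
    }

    /* The existing API becomes a thin wrapper deriving negpri itself. */
    static mmu_idx_t v7m_mmu_idx_for_secstate_and_priv(bool secstate,
                                                       bool priv)
    {
        return v7m_mmu_idx_all(secstate, priv,
                               neg_prio_requested(secstate));
    }

    int main(void)
    {
        printf("idx=%u\n", v7m_mmu_idx_for_secstate_and_priv(true, true));
        return 0;
    }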
4
8
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20181012144235.19646-3-peter.maydell@linaro.org
11
Message-id: 20190416125744.27770-21-peter.maydell@linaro.org
8
---
12
---
9
target/arm/internals.h | 1 -
13
target/arm/cpu.h | 7 +++++++
10
target/arm/helper.c | 6 ++++--
14
target/arm/helper.c | 14 +++++++++++---
11
2 files changed, 4 insertions(+), 3 deletions(-)
15
2 files changed, 18 insertions(+), 3 deletions(-)
12
16
13
diff --git a/target/arm/internals.h b/target/arm/internals.h
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/internals.h
19
--- a/target/arm/cpu.h
16
+++ b/target/arm/internals.h
20
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ static inline int bank_number(int mode)
21
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
18
g_assert_not_reached();
22
}
19
}
23
}
20
24
21
-void switch_mode(CPUARMState *, int);
25
+/*
22
void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu);
26
+ * Return the MMU index for a v7M CPU with all relevant information
23
void arm_translate_init(void);
27
+ * manually specified.
24
28
+ */
29
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
30
+ bool secstate, bool priv, bool negpri);
31
+
32
/* Return the MMU index for a v7M CPU in the specified security and
33
* privilege state.
34
*/
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
35
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
36
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
37
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
38
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
39
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
30
V8M_SAttributes *sattrs);
31
#endif
32
33
+static void switch_mode(CPUARMState *env, int mode);
34
+
35
static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg)
36
{
37
int nregs;
38
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
39
return 0;
40
return 0;
40
}
41
}
41
42
42
-void switch_mode(CPUARMState *env, int mode)
43
-ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
43
+static void switch_mode(CPUARMState *env, int mode)
44
- bool secstate, bool priv)
45
+ARMMMUIdx arm_v7m_mmu_idx_all(CPUARMState *env,
46
+ bool secstate, bool priv, bool negpri)
44
{
47
{
45
ARMCPU *cpu = arm_env_get_cpu(env);
48
ARMMMUIdx mmu_idx = ARM_MMU_IDX_M;
46
49
47
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env)
50
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
48
51
mmu_idx |= ARM_MMU_IDX_M_PRIV;
49
#else
52
}
50
53
51
-void switch_mode(CPUARMState *env, int mode)
54
- if (armv7m_nvic_neg_prio_requested(env->nvic, secstate)) {
52
+static void switch_mode(CPUARMState *env, int mode)
55
+ if (negpri) {
56
mmu_idx |= ARM_MMU_IDX_M_NEGPRI;
57
}
58
59
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
60
return mmu_idx;
61
}
62
63
+ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
64
+ bool secstate, bool priv)
65
+{
66
+ bool negpri = armv7m_nvic_neg_prio_requested(env->nvic, secstate);
67
+
68
+ return arm_v7m_mmu_idx_all(env, secstate, priv, negpri);
69
+}
70
+
71
/* Return the MMU index for a v7M CPU in the specified security state */
72
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
53
{
73
{
54
int old_mode;
55
int i;
56
--
74
--
57
2.19.1
75
2.20.1
58
76
59
77
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
In the v7M architecture, if an exception is generated in the process
2
of doing the lazy stacking of FP registers, the handling of
3
possible escalation to HardFault is treated differently to the normal
4
approach: it works based on the saved information about exception
5
readiness that was stored in the FPCCR when the stack frame was
6
created. Provide a new function armv7m_nvic_set_pending_lazyfp()
7
which pends exceptions during lazy stacking and implements
8
this logic.
2
9
3
Instead of shifts and masks, use direct loads and stores from the neon
10
This corresponds to the pseudocode TakePreserveFPException().
4
register file. Mirror the iteration structure of the ARM pseudocode
5
more closely. Correct the parameters of the VLD2 A2 insn.
6
11
7
Note that this includes a bugfix for handling of the insn
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
"VLD2 (multiple 2-element structures)" -- we were using an
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
incorrect stride value.
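A reduced model of the lazy-stacking escalation rule (the enum and readiness fields are stand-ins for the real FPCCR bits; the real function also handles DebugMonitor, SecureFault and the HFRDY lockup case):

    #include <stdbool.h>
    #include <stdio.h>

    enum exc { EXC_MEM, EXC_USAGE, EXC_BUS };

    /* Readiness bits saved in FPCCR when the stack frame was created;
     * escalation is decided from these, not from live NVIC state. */
    struct saved_fpccr { bool mmrdy, ufrdy, bfrdy; };

    static enum exc resolve_lazyfp(enum exc irq, const struct saved_fpccr *f,
                                   bool *escalated)
    {
        bool ready = true;
        switch (irq) {
        case EXC_MEM:   ready = f->mmrdy; break;
        case EXC_USAGE: ready = f->ufrdy; break;
        case EXC_BUS:   ready = f->bfrdy; break;
        }
        *escalated = !ready;
        return irq; /* caller redirects to HardFault if escalated */
    }

    int main(void)
    {
        struct saved_fpccr f = { .mmrdy = false, .ufrdy = true,
                                 .bfrdy = true };
        bool esc;
        resolve_lazyfp(EXC_MEM, &f, &esc);
        printf("escalate to HardFault: %d\n", esc);
        return 0;
    }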
14
Message-id: 20190416125744.27770-22-peter.maydell@linaro.org
15
---
16
target/arm/cpu.h | 12 ++++++
17
hw/intc/armv7m_nvic.c | 96 +++++++++++++++++++++++++++++++++++++++++++
18
2 files changed, 108 insertions(+)
10
19
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
20
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
Message-id: 20181011205206.3552-19-richard.henderson@linaro.org
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
target/arm/translate.c | 170 ++++++++++++++++++-----------------------
17
1 file changed, 74 insertions(+), 96 deletions(-)
18
19
diff --git a/target/arm/translate.c b/target/arm/translate.c
20
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/translate.c
22
--- a/target/arm/cpu.h
22
+++ b/target/arm/translate.c
23
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 neon_load_reg(int reg, int pass)
24
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending(void *opaque, int irq, bool secure);
24
return tmp;
25
* a different exception).
26
*/
27
void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure);
28
+/**
29
+ * armv7m_nvic_set_pending_lazyfp: mark this lazy FP exception as pending
30
+ * @opaque: the NVIC
31
+ * @irq: the exception number to mark pending
32
+ * @secure: false for non-banked exceptions or for the nonsecure
33
+ * version of a banked exception, true for the secure version of a banked
34
+ * exception.
35
+ *
36
+ * Similar to armv7m_nvic_set_pending(), but specifically for exceptions
37
+ * generated in the course of lazy stacking of FP registers.
38
+ */
39
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure);
40
/**
41
* armv7m_nvic_get_pending_irq_info: return highest priority pending
42
* exception, and whether it targets Secure state
43
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/hw/intc/armv7m_nvic.c
46
+++ b/hw/intc/armv7m_nvic.c
47
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_set_pending_derived(void *opaque, int irq, bool secure)
48
do_armv7m_nvic_set_pending(opaque, irq, secure, true);
25
}
49
}
26
50
27
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
51
+void armv7m_nvic_set_pending_lazyfp(void *opaque, int irq, bool secure)
28
+{
52
+{
29
+ long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
53
+ /*
54
+ * Pend an exception during lazy FP stacking. This differs
55
+ * from the usual exception pending because the logic for
56
+ * whether we should escalate depends on the saved context
57
+ * in the FPCCR register, not on the current state of the CPU/NVIC.
58
+ */
59
+ NVICState *s = (NVICState *)opaque;
60
+ bool banked = exc_is_banked(irq);
61
+ VecInfo *vec;
62
+ bool targets_secure;
63
+ bool escalate = false;
64
+ /*
65
+ * We will only look at bits in fpccr if this is a banked exception
66
+ * (in which case 'secure' tells us whether it is the S or NS version).
67
+ * All the bits for the non-banked exceptions are in fpccr_s.
68
+ */
69
+ uint32_t fpccr_s = s->cpu->env.v7m.fpccr[M_REG_S];
70
+ uint32_t fpccr = s->cpu->env.v7m.fpccr[secure];
30
+
71
+
31
+ switch (mop) {
72
+ assert(irq > ARMV7M_EXCP_RESET && irq < s->num_irq);
32
+ case MO_UB:
73
+ assert(!secure || banked);
33
+ tcg_gen_ld8u_i64(var, cpu_env, offset);
74
+
75
+ vec = (banked && secure) ? &s->sec_vectors[irq] : &s->vectors[irq];
76
+
77
+ targets_secure = banked ? secure : exc_targets_secure(s, irq);
78
+
79
+ switch (irq) {
80
+ case ARMV7M_EXCP_DEBUG:
81
+ if (!(fpccr_s & R_V7M_FPCCR_MONRDY_MASK)) {
82
+ /* Ignore DebugMonitor exception */
83
+ return;
84
+ }
34
+ break;
85
+ break;
35
+ case MO_UW:
86
+ case ARMV7M_EXCP_MEM:
36
+ tcg_gen_ld16u_i64(var, cpu_env, offset);
87
+ escalate = !(fpccr & R_V7M_FPCCR_MMRDY_MASK);
37
+ break;
88
+ break;
38
+ case MO_UL:
89
+ case ARMV7M_EXCP_USAGE:
39
+ tcg_gen_ld32u_i64(var, cpu_env, offset);
90
+ escalate = !(fpccr & R_V7M_FPCCR_UFRDY_MASK);
40
+ break;
91
+ break;
41
+ case MO_Q:
92
+ case ARMV7M_EXCP_BUS:
42
+ tcg_gen_ld_i64(var, cpu_env, offset);
93
+ escalate = !(fpccr_s & R_V7M_FPCCR_BFRDY_MASK);
94
+ break;
95
+ case ARMV7M_EXCP_SECURE:
96
+ escalate = !(fpccr_s & R_V7M_FPCCR_SFRDY_MASK);
43
+ break;
97
+ break;
44
+ default:
98
+ default:
45
+ g_assert_not_reached();
99
+ g_assert_not_reached();
46
+ }
100
+ }
47
+}
48
+
101
+
49
static void neon_store_reg(int reg, int pass, TCGv_i32 var)
102
+ if (escalate) {
50
{
103
+ /*
51
tcg_gen_st_i32(var, cpu_env, neon_reg_offset(reg, pass));
104
+ * Escalate to HardFault: faults that initially targeted Secure
52
tcg_temp_free_i32(var);
105
+ * continue to do so, even if HF normally targets NonSecure.
53
}
106
+ */
54
107
+ irq = ARMV7M_EXCP_HARD;
55
+static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
108
+ if (arm_feature(&s->cpu->env, ARM_FEATURE_M_SECURITY) &&
56
+{
109
+ (targets_secure ||
57
+ long offset = neon_element_offset(reg, ele, size);
110
+ !(s->cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK))) {
111
+ vec = &s->sec_vectors[irq];
112
+ } else {
113
+ vec = &s->vectors[irq];
114
+ }
115
+ }
58
+
116
+
59
+ switch (size) {
117
+ if (!vec->enabled ||
60
+ case MO_8:
118
+ nvic_exec_prio(s) <= exc_group_prio(s, vec->prio, secure)) {
61
+ tcg_gen_st8_i64(var, cpu_env, offset);
119
+ if (!(fpccr_s & R_V7M_FPCCR_HFRDY_MASK)) {
62
+ break;
120
+ /*
63
+ case MO_16:
121
+ * We want to escalate to HardFault but the context the
64
+        tcg_gen_st16_i64(var, cpu_env, offset);
+        break;
+    case MO_32:
+        tcg_gen_st32_i64(var, cpu_env, offset);
+        break;
+    case MO_64:
+        tcg_gen_st_i64(var, cpu_env, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 static inline void neon_load_reg64(TCGv_i64 var, int reg)
 {
     tcg_gen_ld_i64(var, cpu_env, vfp_reg_offset(1, reg));
@@ -XXX,XX +XXX,XX @@ static struct {
     int interleave;
     int spacing;
 } const neon_ls_element_type[11] = {
-    {4, 4, 1},
-    {4, 4, 2},
+    {1, 4, 1},
+    {1, 4, 2},
     {4, 1, 1},
-    {4, 2, 1},
-    {3, 3, 1},
-    {3, 3, 2},
+    {2, 2, 2},
+    {1, 3, 1},
+    {1, 3, 2},
     {3, 1, 1},
     {1, 1, 1},
-    {2, 2, 1},
-    {2, 2, 2},
+    {1, 2, 1},
+    {1, 2, 2},
     {2, 1, 1}
 };

@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int shift;
     int n;
     int vec_size;
+    int mmu_idx;
+    TCGMemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     rn = (insn >> 16) & 0xf;
     rm = insn & 0xf;
     load = (insn & (1 << 21)) != 0;
+    endian = s->be_data;
+    mmu_idx = get_mem_index(s);
     if ((insn & (1 << 23)) == 0) {
         /* Load store all elements.  */
         op = (insn >> 8) & 0xf;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
         nregs = neon_ls_element_type[op].nregs;
         interleave = neon_ls_element_type[op].interleave;
         spacing = neon_ls_element_type[op].spacing;
-        if (size == 3 && (interleave | spacing) != 1)
+        if (size == 3 && (interleave | spacing) != 1) {
             return 1;
+        }
+        tmp64 = tcg_temp_new_i64();
         addr = tcg_temp_new_i32();
+        tmp2 = tcg_const_i32(1 << size);
         load_reg_var(s, addr, rn);
-        stride = (1 << size) * interleave;
         for (reg = 0; reg < nregs; reg++) {
-            if (interleave > 2 || (interleave == 2 && nregs == 2)) {
-                load_reg_var(s, addr, rn);
-                tcg_gen_addi_i32(addr, addr, (1 << size) * reg);
-            } else if (interleave == 2 && nregs == 4 && reg == 2) {
-                load_reg_var(s, addr, rn);
-                tcg_gen_addi_i32(addr, addr, 1 << size);
-            }
-            if (size == 3) {
-                tmp64 = tcg_temp_new_i64();
-                if (load) {
-                    gen_aa32_ld64(s, tmp64, addr, get_mem_index(s));
-                    neon_store_reg64(tmp64, rd);
-                } else {
-                    neon_load_reg64(tmp64, rd);
-                    gen_aa32_st64(s, tmp64, addr, get_mem_index(s));
-                }
-                tcg_temp_free_i64(tmp64);
-                tcg_gen_addi_i32(addr, addr, stride);
-            } else {
-                for (pass = 0; pass < 2; pass++) {
-                    if (size == 2) {
-                        if (load) {
-                            tmp = tcg_temp_new_i32();
-                            gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
-                            neon_store_reg(rd, pass, tmp);
-                        } else {
-                            tmp = neon_load_reg(rd, pass);
-                            gen_aa32_st32(s, tmp, addr, get_mem_index(s));
-                            tcg_temp_free_i32(tmp);
-                        }
-                        tcg_gen_addi_i32(addr, addr, stride);
-                    } else if (size == 1) {
-                        if (load) {
-                            tmp = tcg_temp_new_i32();
-                            gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
-                            tcg_gen_addi_i32(addr, addr, stride);
-                            tmp2 = tcg_temp_new_i32();
-                            gen_aa32_ld16u(s, tmp2, addr, get_mem_index(s));
-                            tcg_gen_addi_i32(addr, addr, stride);
-                            tcg_gen_shli_i32(tmp2, tmp2, 16);
-                            tcg_gen_or_i32(tmp, tmp, tmp2);
-                            tcg_temp_free_i32(tmp2);
-                            neon_store_reg(rd, pass, tmp);
-                        } else {
-                            tmp = neon_load_reg(rd, pass);
-                            tmp2 = tcg_temp_new_i32();
-                            tcg_gen_shri_i32(tmp2, tmp, 16);
-                            gen_aa32_st16(s, tmp, addr, get_mem_index(s));
-                            tcg_temp_free_i32(tmp);
-                            tcg_gen_addi_i32(addr, addr, stride);
-                            gen_aa32_st16(s, tmp2, addr, get_mem_index(s));
-                            tcg_temp_free_i32(tmp2);
-                            tcg_gen_addi_i32(addr, addr, stride);
-                        }
-                    } else /* size == 0 */ {
-                        if (load) {
-                            tmp2 = NULL;
-                            for (n = 0; n < 4; n++) {
-                                tmp = tcg_temp_new_i32();
-                                gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
-                                tcg_gen_addi_i32(addr, addr, stride);
-                                if (n == 0) {
-                                    tmp2 = tmp;
-                                } else {
-                                    tcg_gen_shli_i32(tmp, tmp, n * 8);
-                                    tcg_gen_or_i32(tmp2, tmp2, tmp);
-                                    tcg_temp_free_i32(tmp);
-                                }
-                            }
-                            neon_store_reg(rd, pass, tmp2);
-                        } else {
-                            tmp2 = neon_load_reg(rd, pass);
-                            for (n = 0; n < 4; n++) {
-                                tmp = tcg_temp_new_i32();
-                                if (n == 0) {
-                                    tcg_gen_mov_i32(tmp, tmp2);
-                                } else {
-                                    tcg_gen_shri_i32(tmp, tmp2, n * 8);
-                                }
-                                gen_aa32_st8(s, tmp, addr, get_mem_index(s));
-                                tcg_temp_free_i32(tmp);
-                                tcg_gen_addi_i32(addr, addr, stride);
-                            }
-                            tcg_temp_free_i32(tmp2);
-                        }
+            for (n = 0; n < 8 >> size; n++) {
+                int xs;
+                for (xs = 0; xs < interleave; xs++) {
+                    int tt = rd + reg + spacing * xs;
+
+                    if (load) {
+                        gen_aa32_ld_i64(s, tmp64, addr, mmu_idx, endian | size);
+                        neon_store_element64(tt, n, size, tmp64);
+                    } else {
+                        neon_load_element64(tmp64, tt, n, size);
+                        gen_aa32_st_i64(s, tmp64, addr, mmu_idx, endian | size);
                     }
+                    tcg_gen_add_i32(addr, addr, tmp2);
                 }
             }
-            rd += spacing;
         }
         tcg_temp_free_i32(addr);
-        stride = nregs * 8;
+        tcg_temp_free_i32(tmp2);
+        tcg_temp_free_i64(tmp64);
+        stride = nregs * interleave * 8;
     } else {
         size = (insn >> 10) & 3;
         if (size == 3) {
--
2.19.1


+         * FP state belongs to prevents the exception pre-empting.
+         */
+        cpu_abort(&s->cpu->parent_obj,
+                  "Lockup: can't escalate to HardFault during "
+                  "lazy FP register stacking\n");
+    }
+}
+
+    if (escalate) {
+        s->cpu->env.v7m.hfsr |= R_V7M_HFSR_FORCED_MASK;
+    }
+    if (!vec->pending) {
+        vec->pending = 1;
+        /*
+         * We do not call nvic_irq_update(), because we know our caller
+         * is going to handle causing us to take the exception by
+         * raising EXCP_LAZYFP, so raising the IRQ line would be
+         * pointless extra work. We just need to recompute the
+         * priorities so that armv7m_nvic_can_take_pending_exception()
+         * returns the right answer.
+         */
+        nvic_recompute_state(s);
+    }
+}
+
 /* Make pending IRQ active.  */
 void armv7m_nvic_acknowledge_irq(void *opaque)
 {
--
2.20.1

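The lockup rule enforced by the cpu_abort() above is worth spelling out:
lazy state preservation runs on behalf of the context whose FP state is
being saved, so an escalated HardFault that cannot pre-empt that context
has nowhere to go. A standalone toy model of the decision, with made-up
priority values (this is illustrative C, not QEMU code):

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * An exception can pre-empt only if it is enabled and would run at
     * a higher urgency (numerically lower priority) than what is active.
     */
    static bool can_preempt(bool enabled, int exc_prio, int active_prio)
    {
        return enabled && exc_prio < active_prio;
    }

    int main(void)
    {
        /*
         * HardFault is architecturally priority -1; suppose lazy FP
         * stacking is being done for a context already at priority -1.
         */
        if (!can_preempt(true, -1, -1)) {
            puts("lockup: HardFault cannot pre-empt during lazy stacking");
        }
        return 0;
    }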
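For the disas_neon_ls_insn() rewrite earlier, the new {nregs, interleave,
spacing} semantics are easiest to check by running the index arithmetic by
hand: interleave is how many registers one structure spans, spacing is
their register stride, and nregs repeats the pattern. A standalone
host-side sketch (not QEMU code; the table entry and size chosen are just
an example) that prints the element placement for the {1, 2, 1} entry,
one of the VLD2 forms, with 32-bit elements:

    #include <stdio.h>

    static const struct { int nregs, interleave, spacing; } typ[11] = {
        {1, 4, 1}, {1, 4, 2}, {4, 1, 1}, {2, 2, 2}, {1, 3, 1}, {1, 3, 2},
        {3, 1, 1}, {1, 1, 1}, {1, 2, 1}, {1, 2, 2}, {2, 1, 1},
    };

    int main(void)
    {
        int op = 8;       /* table entry {1, 2, 1} */
        int size = 2;     /* 32-bit elements */
        int rd = 0;       /* base D register */
        int addr = 0;     /* byte offset standing in for a real address */

        for (int reg = 0; reg < typ[op].nregs; reg++) {
            for (int n = 0; n < 8 >> size; n++) {
                for (int xs = 0; xs < typ[op].interleave; xs++) {
                    int tt = rd + reg + typ[op].spacing * xs;
                    printf("mem+%2d -> d%d[%d]\n", addr, tt, n);
                    addr += 1 << size;
                }
            }
        }
        return 0;
    }

Running it shows the de-interleave: mem+0 and mem+8 land in d0, mem+4 and
mem+12 in d1, which is exactly the VLD2 access pattern the single generic
loop now produces.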
The HCR.FB virtualization configuration register bit requests that
TLB maintenance, branch predictor invalidate-all and icache
invalidate-all operations performed in NS EL1 should be upgraded
from "local CPU only" to "broadcast within Inner Shareable domain".
For QEMU we NOP the branch predictor and icache operations, so
we only need to upgrade the TLB invalidates:
 AArch32 TLBIALL, TLBIMVA, TLBIASID, DTLBIALL, DTLBIMVA, DTLBIASID,
         ITLBIALL, ITLBIMVA, ITLBIASID, TLBIMVAA, TLBIMVAL, TLBIMVAAL
 AArch64 TLBI VMALLE1, TLBI VAE1, TLBI ASIDE1, TLBI VAAE1,
         TLBI VALE1, TLBI VAALE1

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 191 +++++++++++++++++++++++++++-----------------
 1 file changed, 116 insertions(+), 75 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     raw_write(env, ri, value);
 }

-static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate all (TLBIALL) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                          uint64_t value)
-{
-    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
-static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate by ASID (TLBIASID) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush(CPU(cpu));
-}
-
-static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                           uint64_t value)
-{
-    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-
-    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
-}
-
 /* IS variants of TLB operations must affect all cores */
 static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
@@ -XXX,XX +XXX,XX @@ static void tlbimvaa_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_page_all_cpus_synced(cs, value & TARGET_PAGE_MASK);
 }

+/*
+ * Non-IS variants of TLB operations are upgraded to
+ * IS versions if we are at NS EL1 and HCR_EL2.FB is set to
+ * force broadcast of these operations.
+ */
+static bool tlb_force_broadcast(CPUARMState *env)
+{
+    return (env->cp15.hcr_el2 & HCR_FB) &&
+        arm_current_el(env) == 1 && !arm_is_secure_below_el3(env);
+}
+
+static void tlbiall_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate all (TLBIALL) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiall_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimva_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    /* Invalidate single TLB entry by MVA and ASID (TLBIMVA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimva_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
+static void tlbiasid_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate by ASID (TLBIASID) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbiasid_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush(CPU(cpu));
+}
+
+static void tlbimvaa_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    /* Invalidate single entry by MVA, all ASIDs (TLBIMVAA) */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbimvaa_is_write(env, NULL, value);
+        return;
+    }
+
+    tlb_flush_page(CPU(cpu), value & TARGET_PAGE_MASK);
+}
+
 static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_access(CPUARMState *env,
  * Page D4-1736 (DDI0487A.b)
  */

-static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                    uint64_t value)
-{
-    CPUState *cs = ENV_GET_CPU(env);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S1SE1 |
-                            ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_by_mmuidx(cs,
-                            ARMMMUIdxBit_S12NSE1 |
-                            ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                       uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }

+static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    uint64_t value)
+{
+    CPUState *cs = ENV_GET_CPU(env);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vmalle1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S1SE1 |
+                            ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_by_mmuidx(cs,
+                            ARMMMUIdxBit_S12NSE1 |
+                            ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                   uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
 }

-static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                                 uint64_t value)
-{
-    /* Invalidate by VA, EL1&0 (AArch64 version).
-     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
-     * since we don't support flush-for-specific-ASID-only or
-     * flush-last-level-only.
-     */
-    ARMCPU *cpu = arm_env_get_cpu(env);
-    CPUState *cs = CPU(cpu);
-    uint64_t pageaddr = sextract64(value << 12, 0, 56);
-
-    if (arm_is_secure_below_el3(env)) {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S1SE1 |
-                                 ARMMMUIdxBit_S1SE0);
-    } else {
-        tlb_flush_page_by_mmuidx(cs, pageaddr,
-                                 ARMMMUIdxBit_S12NSE1 |
-                                 ARMMMUIdxBit_S12NSE0);
-    }
-}
-
 static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                  uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }

+static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                                 uint64_t value)
+{
+    /* Invalidate by VA, EL1&0 (AArch64 version).
+     * Currently handles all of VAE1, VAAE1, VAALE1 and VALE1,
+     * since we don't support flush-for-specific-ASID-only or
+     * flush-last-level-only.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    CPUState *cs = CPU(cpu);
+    uint64_t pageaddr = sextract64(value << 12, 0, 56);
+
+    if (tlb_force_broadcast(env)) {
+        tlbi_aa64_vae1is_write(env, NULL, value);
+        return;
+    }
+
+    if (arm_is_secure_below_el3(env)) {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S1SE1 |
+                                 ARMMMUIdxBit_S1SE0);
+    } else {
+        tlb_flush_page_by_mmuidx(cs, pageaddr,
+                                 ARMMMUIdxBit_S12NSE1 |
+                                 ARMMMUIdxBit_S12NSE0);
+    }
+}
+
 static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t value)
 {
--
2.19.1


Pushing registers to the stack for v7M needs to handle three cases:
 * the "normal" case where we pend exceptions
 * an "ignore faults" case where we set FSR bits but
   do not pend exceptions (this is used when we are
   handling some kinds of derived exception on exception entry)
 * a "lazy FP stacking" case, where different FSR bits
   are set and the exception is pended differently

Implement this by changing the existing flag argument that
tells us whether to ignore faults or not into an enum that
specifies which of the 3 modes we should handle.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-23-peter.maydell@linaro.org
---
 target/arm/helper.c | 118 +++++++++++++++++++++++++++++---------------
 1 file changed, 79 insertions(+), 39 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool v7m_cpacr_pass(CPUARMState *env, bool is_secure, bool is_priv)
     }
 }

+/*
+ * What kind of stack write are we doing? This affects how exceptions
+ * generated during the stacking are treated.
+ */
+typedef enum StackingMode {
+    STACK_NORMAL,
+    STACK_IGNFAULTS,
+    STACK_LAZYFP,
+} StackingMode;
+
 static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
-                            ARMMMUIdx mmu_idx, bool ignfault)
+                            ARMMMUIdx mmu_idx, StackingMode mode)
 {
     CPUState *cs = CPU(cpu);
     CPUARMState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
                       &attrs, &prot, &page_size, &fi, NULL)) {
         /* MPU/SAU lookup failed */
         if (fi.type == ARMFault_QEMU_SFault) {
-            qemu_log_mask(CPU_LOG_INT,
-                          "...SecureFault with SFSR.AUVIOL during stacking\n");
-            env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
+            if (mode == STACK_LAZYFP) {
+                qemu_log_mask(CPU_LOG_INT,
+                              "...SecureFault with SFSR.LSPERR "
+                              "during lazy stacking\n");
+                env->v7m.sfsr |= R_V7M_SFSR_LSPERR_MASK;
+            } else {
+                qemu_log_mask(CPU_LOG_INT,
+                              "...SecureFault with SFSR.AUVIOL "
+                              "during stacking\n");
+                env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
+            }
+            env->v7m.sfsr |= R_V7M_SFSR_SFARVALID_MASK;
             env->v7m.sfar = addr;
             exc = ARMV7M_EXCP_SECURE;
             exc_secure = false;
         } else {
-            qemu_log_mask(CPU_LOG_INT, "...MemManageFault with CFSR.MSTKERR\n");
-            env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
+            if (mode == STACK_LAZYFP) {
+                qemu_log_mask(CPU_LOG_INT,
+                              "...MemManageFault with CFSR.MLSPERR\n");
+                env->v7m.cfsr[secure] |= R_V7M_CFSR_MLSPERR_MASK;
+            } else {
+                qemu_log_mask(CPU_LOG_INT,
+                              "...MemManageFault with CFSR.MSTKERR\n");
+                env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
+            }
             exc = ARMV7M_EXCP_MEM;
             exc_secure = secure;
         }
@@ -XXX,XX +XXX,XX @@ static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
                          attrs, &txres);
     if (txres != MEMTX_OK) {
         /* BusFault trying to write the data */
-        qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
-        env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
+        if (mode == STACK_LAZYFP) {
+            qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.LSPERR\n");
+            env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_LSPERR_MASK;
+        } else {
+            qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
+            env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
+        }
         exc = ARMV7M_EXCP_BUS;
         exc_secure = false;
         goto pend_fault;
@@ -XXX,XX +XXX,XX @@ pend_fault:
      * later if we have two derived exceptions.
      * The only case when we must not pend the exception but instead
      * throw it away is if we are doing the push of the callee registers
-     * and we've already generated a derived exception. Even in this
-     * case we will still update the fault status registers.
+     * and we've already generated a derived exception (this is indicated
+     * by the caller passing STACK_IGNFAULTS). Even in this case we will
+     * still update the fault status registers.
      */
-    if (!ignfault) {
+    switch (mode) {
+    case STACK_NORMAL:
         armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
+        break;
+    case STACK_LAZYFP:
+        armv7m_nvic_set_pending_lazyfp(env->nvic, exc, exc_secure);
+        break;
+    case STACK_IGNFAULTS:
+        break;
     }
     return false;
 }
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
     uint32_t limit;
     bool want_psp;
     uint32_t sig;
+    StackingMode smode = ignore_faults ? STACK_IGNFAULTS : STACK_NORMAL;

     if (dotailchain) {
         bool mode = lr & R_V7M_EXCRET_MODE_MASK;
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
      */
     sig = v7m_integrity_sig(env, lr);
     stacked_ok =
-        v7m_stack_write(cpu, frameptr, sig, mmu_idx, ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
-                        ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
-                        ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx,
-                        ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx,
-                        ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx,
-                        ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx,
-                        ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx,
-                        ignore_faults) &&
-        v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx,
-                        ignore_faults);
+        v7m_stack_write(cpu, frameptr, sig, mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, smode) &&
+        v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, smode);

     /* Update SP regardless of whether any of the stack accesses failed. */
     *frame_sp_p = frameptr;
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
      * if it has higher priority).
      */
     stacked_ok = stacked_ok &&
-        v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
-        v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
-        v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
-        v7m_stack_write(cpu, frameptr + 12, env->regs[3], mmu_idx, false) &&
-        v7m_stack_write(cpu, frameptr + 16, env->regs[12], mmu_idx, false) &&
-        v7m_stack_write(cpu, frameptr + 20, env->regs[14], mmu_idx, false) &&
-        v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
-        v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
+        v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 4, env->regs[1],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 8, env->regs[2],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 12, env->regs[3],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 16, env->regs[12],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 20, env->regs[14],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 24, env->regs[15],
+                        mmu_idx, STACK_NORMAL) &&
+        v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, STACK_NORMAL);

     if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK) {
         /* FPU is active, try to save its registers */
@@ -XXX,XX +XXX,XX @@ static bool v7m_push_stack(ARMCPU *cpu)
                     faddr += 8; /* skip the slot for the FPSCR */
                 }
                 stacked_ok = stacked_ok &&
-                    v7m_stack_write(cpu, faddr, slo, mmu_idx, false) &&
-                    v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, false);
+                    v7m_stack_write(cpu, faddr, slo,
+                                    mmu_idx, STACK_NORMAL) &&
+                    v7m_stack_write(cpu, faddr + 4, shi,
+                                    mmu_idx, STACK_NORMAL);
             }
             stacked_ok = stacked_ok &&
                 v7m_stack_write(cpu, frameptr + 0x60,
-                                vfp_get_fpscr(env), mmu_idx, false);
+                                vfp_get_fpscr(env), mmu_idx, STACK_NORMAL);
             if (cpacr_pass) {
                 for (i = 0; i < ((framesize == 0xa8) ? 32 : 16); i += 2) {
                     *aa32_vfp_dreg(env, i / 2) = 0;
--
2.20.1

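A quick review aid: the mode-to-fault-status mapping that v7m_stack_write()
now implements can be summarised in a few lines. This is a standalone toy
summary (illustrative names, not QEMU code); the real function additionally
chooses between armv7m_nvic_set_pending_derived(),
armv7m_nvic_set_pending_lazyfp(), and not pending at all:

    #include <stdio.h>

    enum mode { NORMAL, IGNFAULTS, LAZYFP };
    enum fault { SECURE_FAULT, MPU_FAULT, BUS_FAULT };

    /* Which status bit a failed stack write sets, by mode and fault type */
    static const char *fsr_bit(enum mode m, enum fault f)
    {
        switch (f) {
        case SECURE_FAULT:
            return m == LAZYFP ? "SFSR.LSPERR" : "SFSR.AUVIOL";
        case MPU_FAULT:
            return m == LAZYFP ? "CFSR.MLSPERR" : "CFSR.MSTKERR";
        case BUS_FAULT:
            return m == LAZYFP ? "BFSR.LSPERR" : "BFSR.STKERR";
        }
        return "?";
    }

    int main(void)
    {
        printf("lazy MPU fault sets %s\n", fsr_bit(LAZYFP, MPU_FAULT));
        printf("normal bus fault sets %s\n", fsr_bit(NORMAL, BUS_FAULT));
        printf("lazy secure fault sets %s\n", fsr_bit(LAZYFP, SECURE_FAULT));
        return 0;
    }

Note that STACK_IGNFAULTS still sets the same bits as STACK_NORMAL; the
only difference in that mode is that no exception is pended.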
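The pattern in the HCR.FB patch above is uniform across all the handlers:
each non-IS write function first asks whether HCR_EL2.FB forces an upgrade
and, if so, tail-calls its existing Inner Shareable twin. A compilable
standalone sketch of the shape (the types and names here are simplified
stand-ins, not QEMU's):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool hcr_fb;      /* HCR_EL2.FB */
        int  current_el;
        bool secure;
    } Env;

    static bool force_broadcast(const Env *env)
    {
        /* HCR_EL2.FB only upgrades operations performed at NS EL1 */
        return env->hcr_fb && env->current_el == 1 && !env->secure;
    }

    static void tlbi_vae1is(Env *env, unsigned long va)
    {
        (void)env;
        printf("flush page 0x%lx on all CPUs\n", va);
    }

    static void tlbi_vae1(Env *env, unsigned long va)
    {
        if (force_broadcast(env)) {
            tlbi_vae1is(env, va);   /* upgraded to Inner Shareable */
            return;
        }
        printf("flush page 0x%lx on this CPU only\n", va);
    }

    int main(void)
    {
        Env ns_el1 = { .hcr_fb = true, .current_el = 1, .secure = false };
        Env s_el1  = { .hcr_fb = true, .current_el = 1, .secure = true };

        tlbi_vae1(&ns_el1, 0x80000);  /* broadcast */
        tlbi_vae1(&s_el1, 0x80000);   /* local: HCR is ignored in Secure */
        return 0;
    }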
The M-profile architecture floating point system supports
lazy FP state preservation, where FP registers are not
pushed to the stack when an exception occurs but are instead
only saved if and when the first FP instruction in the exception
handler is executed. Implement this in QEMU, corresponding
to the check of LSPACT in the pseudocode ExecuteFPCheck().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-24-peter.maydell@linaro.org
---
 target/arm/cpu.h       |   3 ++
 target/arm/helper.h    |   2 +
 target/arm/translate.h |   1 +
 target/arm/helper.c    | 112 +++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c |  22 ++++++++
 5 files changed, 140 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_NOCP           17   /* v7M NOCP UsageFault */
 #define EXCP_INVSTATE       18   /* v7M INVSTATE UsageFault */
 #define EXCP_STKOF          19   /* v8M STKOF UsageFault */
+#define EXCP_LAZYFP         20   /* v7M fault during lazy FP stacking */
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */

 #define ARMV7M_EXCP_RESET   1
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, NS, 6, 1)
 FIELD(TBFLAG_A32, VFPEN, 7, 1)
 FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
 FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
+/* For M profile only, set if FPCCR.LSPACT is set */
+FIELD(TBFLAG_A32, LSPACT, 18, 1)
 /* For M profile only, set if we must create a new FP context */
 FIELD(TBFLAG_A32, NEW_FP_CTXT_NEEDED, 19, 1)
 /* For M profile only, set if FPCCR.S does not match current security state */
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(v7m_blxns, void, env, i32)

 DEF_HELPER_3(v7m_tt, i32, env, i32, i32)

+DEF_HELPER_1(v7m_preserve_fp_state, void, env)
+
 DEF_HELPER_2(v8m_stackcheck, void, env, i32)

 DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     bool v8m_stackcheck; /* true if we need to perform v8M stack limit checks */
     bool v8m_fpccr_s_wrong; /* true if v8M FPCCR.S != v8m_secure */
     bool v7m_new_fp_ctxt_needed; /* ASPEN set but no active FP context */
+    bool v7m_lspact; /* FPCCR.LSPACT set */
     /* Immediate value in AArch32 SVC insn; must be set if is_jmp == DISAS_SWI
      * so that top level loop can generate correct syndrome information.
      */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
     g_assert_not_reached();
 }

+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
+{
+    /* translate.c should never generate calls here in user-only mode */
+    g_assert_not_reached();
+}
+
 uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
 {
     /* The TT instructions can be used by unprivileged code, but in
@@ -XXX,XX +XXX,XX @@ pend_fault:
     return false;
 }

+void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
+{
+    /*
+     * Preserve FP state (because LSPACT was set and we are about
+     * to execute an FP instruction). This corresponds to the
+     * PreserveFPState() pseudocode.
+     * We may throw an exception if the stacking fails.
+     */
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
+    bool negpri = !(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_HFRDY_MASK);
+    bool is_priv = !(env->v7m.fpccr[is_secure] & R_V7M_FPCCR_USER_MASK);
+    bool splimviol = env->v7m.fpccr[is_secure] & R_V7M_FPCCR_SPLIMVIOL_MASK;
+    uint32_t fpcar = env->v7m.fpcar[is_secure];
+    bool stacked_ok = true;
+    bool ts = is_secure && (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK);
+    bool take_exception;
+
+    /* Take the iothread lock as we are going to touch the NVIC */
+    qemu_mutex_lock_iothread();
+
+    /* Check the background context had access to the FPU */
+    if (!v7m_cpacr_pass(env, is_secure, is_priv)) {
+        armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, is_secure);
+        env->v7m.cfsr[is_secure] |= R_V7M_CFSR_NOCP_MASK;
+        stacked_ok = false;
+    } else if (!is_secure && !extract32(env->v7m.nsacr, 10, 1)) {
+        armv7m_nvic_set_pending_lazyfp(env->nvic, ARMV7M_EXCP_USAGE, M_REG_S);
+        env->v7m.cfsr[M_REG_S] |= R_V7M_CFSR_NOCP_MASK;
+        stacked_ok = false;
+    }
+
+    if (!splimviol && stacked_ok) {
+        /* We only stack if the stack limit wasn't violated */
+        int i;
+        ARMMMUIdx mmu_idx;
+
+        mmu_idx = arm_v7m_mmu_idx_all(env, is_secure, is_priv, negpri);
+        for (i = 0; i < (ts ? 32 : 16); i += 2) {
+            uint64_t dn = *aa32_vfp_dreg(env, i / 2);
+            uint32_t faddr = fpcar + 4 * i;
+            uint32_t slo = extract64(dn, 0, 32);
+            uint32_t shi = extract64(dn, 32, 32);
+
+            if (i >= 16) {
+                faddr += 8; /* skip the slot for the FPSCR */
+            }
+            stacked_ok = stacked_ok &&
+                v7m_stack_write(cpu, faddr, slo, mmu_idx, STACK_LAZYFP) &&
+                v7m_stack_write(cpu, faddr + 4, shi, mmu_idx, STACK_LAZYFP);
+        }
+
+        stacked_ok = stacked_ok &&
+            v7m_stack_write(cpu, fpcar + 0x40,
+                            vfp_get_fpscr(env), mmu_idx, STACK_LAZYFP);
+    }
+
+    /*
+     * We definitely pended an exception, but it's possible that it
+     * might not be able to be taken now. If its priority permits us
+     * to take it now, then we must not update the LSPACT or FP regs,
+     * but instead jump out to take the exception immediately.
+     * If it's just pending and won't be taken until the current
+     * handler exits, then we do update LSPACT and the FP regs.
+     */
+    take_exception = !stacked_ok &&
+        armv7m_nvic_can_take_pending_exception(env->nvic);
+
+    qemu_mutex_unlock_iothread();
+
+    if (take_exception) {
+        raise_exception_ra(env, EXCP_LAZYFP, 0, 1, GETPC());
+    }
+
+    env->v7m.fpccr[is_secure] &= ~R_V7M_FPCCR_LSPACT_MASK;
+
+    if (ts) {
+        /* Clear s0 to s31 and the FPSCR */
+        int i;
+
+        for (i = 0; i < 32; i += 2) {
+            *aa32_vfp_dreg(env, i / 2) = 0;
+        }
+        vfp_set_fpscr(env, 0);
+    }
+    /*
+     * Otherwise s0 to s15 and FPSCR are UNKNOWN; we choose to leave them
+     * unchanged.
+     */
+}
+
 /* Write to v7M CONTROL.SPSEL bit for the specified security bank.
  * This may change the current stack pointer between Main and Process
  * stack pointers if it is done for the CONTROL register for the current
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
             [EXCP_NOCP] = "v7M NOCP UsageFault",
             [EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
             [EXCP_STKOF] = "v8M STKOF UsageFault",
+            [EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
         };

         if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
             return;
         }
         break;
+    case EXCP_LAZYFP:
+        /*
+         * We already pended the specific exception in the NVIC in the
+         * v7m_preserve_fp_state() helper function.
+         */
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
         return; /* Never happens.  Keep compiler happy.  */
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
         flags = FIELD_DP32(flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED, 1);
     }

+    if (arm_feature(env, ARM_FEATURE_M)) {
+        bool is_secure = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
+
+        if (env->v7m.fpccr[is_secure] & R_V7M_FPCCR_LSPACT_MASK) {
+            flags = FIELD_DP32(flags, TBFLAG_A32, LSPACT, 1);
+        }
+    }
+
     *pflags = flags;
     *cs_base = 0;
 }
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
     if (arm_dc_feature(s, ARM_FEATURE_M)) {
         /* Handle M-profile lazy FP state mechanics */

+        /* Trigger lazy-state preservation if necessary */
+        if (s->v7m_lspact) {
+            /*
+             * Lazy state saving affects external memory and also the NVIC,
+             * so we must mark it as an IO operation for icount.
+             */
+            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+                gen_io_start();
+            }
+            gen_helper_v7m_preserve_fp_state(cpu_env);
+            if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
+                gen_io_end();
+            }
+            /*
+             * If the preserve_fp_state helper doesn't throw an exception
+             * then it will clear LSPACT; we don't need to repeat this for
+             * any further FP insns in this TB.
+             */
+            s->v7m_lspact = false;
+        }
+
         /* Update ownership of FP context: set FPCCR.S to match current state */
         if (s->v8m_fpccr_s_wrong) {
             TCGv_i32 tmp;
--
2.20.1


From: Richard Henderson <richard.henderson@linaro.org>

Most of the v8 extensions are self-contained within the ISAR
registers and are not implied by other feature bits, which
makes them the easiest to convert.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 131 +++++++++++++++++++++++++++++++++----
 target/arm/translate.h     |   7 ++
 linux-user/elfload.c       |  46 ++++++++-----
 target/arm/cpu.c           |  27 +++++---
 target/arm/cpu64.c         |  57 +++++++++-------
 target/arm/translate-a64.c | 101 ++++++++++++++--------------
 target/arm/translate.c     |  36 +++++-----
 7 files changed, 273 insertions(+), 132 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
     PSCI_ON_PENDING = 2
 } ARMPSCIState;

+typedef struct ARMISARegisters ARMISARegisters;
+
 /**
  * ARMCPU:
  * @env: #CPUARMState
@@ -XXX,XX +XXX,XX @@ enum arm_features {
     ARM_FEATURE_LPAE, /* has Large Physical Address Extension */
     ARM_FEATURE_V8,
     ARM_FEATURE_AARCH64, /* supports 64 bit mode */
-    ARM_FEATURE_V8_AES, /* implements AES part of v8 Crypto Extensions */
     ARM_FEATURE_CBAR, /* has cp15 CBAR */
     ARM_FEATURE_CRC, /* ARMv8 CRC instructions */
     ARM_FEATURE_CBAR_RO, /* has cp15 CBAR and it is read-only */
     ARM_FEATURE_EL2, /* has EL2 Virtualization support */
     ARM_FEATURE_EL3, /* has EL3 Secure monitor support */
-    ARM_FEATURE_V8_SHA1, /* implements SHA1 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA256, /* implements SHA256 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_PMULL, /* implements PMULL part of v8 Crypto Extensions */
     ARM_FEATURE_THUMB_DSP, /* DSP insns supported in the Thumb encodings */
     ARM_FEATURE_PMU, /* has PMU support */
     ARM_FEATURE_VBAR, /* has cp15 VBAR */
     ARM_FEATURE_M_SECURITY, /* M profile Security Extension */
     ARM_FEATURE_JAZELLE, /* has (trivial) Jazelle implementation */
     ARM_FEATURE_SVE, /* has Scalable Vector Extension */
-    ARM_FEATURE_V8_SHA512, /* implements SHA512 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
-    ARM_FEATURE_V8_ATOMICS, /* ARMv8.1-Atomics feature */
-    ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
-    ARM_FEATURE_V8_DOTPROD, /* implements v8.2 simd dot product */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
-    ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions.  */
     ARM_FEATURE_M_MAIN, /* M profile Main Extension */
 };

@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c.  */
 extern const uint64_t pred_esz_masks[4];

+/*
+ * 32-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
+}
+
+static inline bool isar_feature_aa32_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) > 1;
+}
+
+static inline bool isar_feature_aa32_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa32_sha2(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa32_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa32_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, RDM) != 0;
+}
+
+static inline bool isar_feature_aa32_vcma(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar5, ID_ISAR5, VCMA) != 0;
+}
+
+static inline bool isar_feature_aa32_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX32(id->id_isar6, ID_ISAR6, DP) != 0;
+}
+
+/*
+ * 64-bit feature tests via id registers.
+ */
+static inline bool isar_feature_aa64_aes(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) != 0;
+}
+
+static inline bool isar_feature_aa64_pmull(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, AES) > 1;
+}
+
+static inline bool isar_feature_aa64_sha1(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA1) != 0;
+}
+
+static inline bool isar_feature_aa64_sha256(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) != 0;
+}
+
+static inline bool isar_feature_aa64_sha512(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA2) > 1;
+}
+
+static inline bool isar_feature_aa64_crc32(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, CRC32) != 0;
+}
+
+static inline bool isar_feature_aa64_atomics(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, ATOMIC) != 0;
+}
+
+static inline bool isar_feature_aa64_rdm(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, RDM) != 0;
+}
+
+static inline bool isar_feature_aa64_sha3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SHA3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm3(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM3) != 0;
+}
+
+static inline bool isar_feature_aa64_sm4(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, SM4) != 0;
+}
+
+static inline bool isar_feature_aa64_dp(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar0, ID_AA64ISAR0, DP) != 0;
+}
+
+static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
+}
+
+/*
+ * Forward to the above feature tests given an ARMCPU pointer.
+ */
+#define cpu_isar_feature(name, cpu) \
+    ({ ARMCPU *cpu_ = (cpu); isar_feature_##name(&cpu_->isar); })
+
 #endif
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@
 /* internal defines */
 typedef struct DisasContext {
     DisasContextBase base;
+    const ARMISARegisters *isar;

     target_ulong pc;
     target_ulong page_start;
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
     return ret;
 }

+/*
+ * Forward to the isar_feature_* tests given a DisasContext pointer.
+ */
+#define dc_isar_feature(name, ctx) \
+    ({ DisasContext *ctx_ = (ctx); isar_feature_##name(ctx_->isar); })
+
 #endif /* TARGET_ARM_TRANSLATE_H */
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     /* probe for the extra features */
 #define GET_FEATURE(feat, hwcap) \
     do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
+
+#define GET_FEATURE_ID(feat, hwcap) \
+    do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
+
     /* EDSP is in v5TE and above, but all our v5 CPUs are v5TE */
     GET_FEATURE(ARM_FEATURE_V5, ARM_HWCAP_ARM_EDSP);
     GET_FEATURE(ARM_FEATURE_VFP, ARM_HWCAP_ARM_VFP);
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap2(void)
     ARMCPU *cpu = ARM_CPU(thread_cpu);
     uint32_t hwcaps = 0;

-    GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP2_ARM_AES);
-    GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP2_ARM_PMULL);
-    GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP2_ARM_SHA1);
-    GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP2_ARM_SHA2);
-    GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP2_ARM_CRC32);
+    GET_FEATURE_ID(aa32_aes, ARM_HWCAP2_ARM_AES);
+    GET_FEATURE_ID(aa32_pmull, ARM_HWCAP2_ARM_PMULL);
+    GET_FEATURE_ID(aa32_sha1, ARM_HWCAP2_ARM_SHA1);
+    GET_FEATURE_ID(aa32_sha2, ARM_HWCAP2_ARM_SHA2);
+    GET_FEATURE_ID(aa32_crc32, ARM_HWCAP2_ARM_CRC32);
     return hwcaps;
 }

 #undef GET_FEATURE
+#undef GET_FEATURE_ID

 #else
 /* 64 bit ARM definitions */
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
     /* probe for the extra features */
 #define GET_FEATURE(feat, hwcap) \
     do { if (arm_feature(&cpu->env, feat)) { hwcaps |= hwcap; } } while (0)
-    GET_FEATURE(ARM_FEATURE_V8_AES, ARM_HWCAP_A64_AES);
-    GET_FEATURE(ARM_FEATURE_V8_PMULL, ARM_HWCAP_A64_PMULL);
-    GET_FEATURE(ARM_FEATURE_V8_SHA1, ARM_HWCAP_A64_SHA1);
-    GET_FEATURE(ARM_FEATURE_V8_SHA256, ARM_HWCAP_A64_SHA2);
-    GET_FEATURE(ARM_FEATURE_CRC, ARM_HWCAP_A64_CRC32);
-    GET_FEATURE(ARM_FEATURE_V8_SHA3, ARM_HWCAP_A64_SHA3);
-    GET_FEATURE(ARM_FEATURE_V8_SM3, ARM_HWCAP_A64_SM3);
-    GET_FEATURE(ARM_FEATURE_V8_SM4, ARM_HWCAP_A64_SM4);
-    GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
+#define GET_FEATURE_ID(feat, hwcap) \
+    do { if (cpu_isar_feature(feat, cpu)) { hwcaps |= hwcap; } } while (0)
+
+    GET_FEATURE_ID(aa64_aes, ARM_HWCAP_A64_AES);
+    GET_FEATURE_ID(aa64_pmull, ARM_HWCAP_A64_PMULL);
+    GET_FEATURE_ID(aa64_sha1, ARM_HWCAP_A64_SHA1);
+    GET_FEATURE_ID(aa64_sha256, ARM_HWCAP_A64_SHA2);
+    GET_FEATURE_ID(aa64_sha512, ARM_HWCAP_A64_SHA512);
+    GET_FEATURE_ID(aa64_crc32, ARM_HWCAP_A64_CRC32);
+    GET_FEATURE_ID(aa64_sha3, ARM_HWCAP_A64_SHA3);
+    GET_FEATURE_ID(aa64_sm3, ARM_HWCAP_A64_SM3);
+    GET_FEATURE_ID(aa64_sm4, ARM_HWCAP_A64_SM4);
     GET_FEATURE(ARM_FEATURE_V8_FP16,
                 ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
-    GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
-    GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
-    GET_FEATURE(ARM_FEATURE_V8_DOTPROD, ARM_HWCAP_A64_ASIMDDP);
-    GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
+    GET_FEATURE_ID(aa64_atomics, ARM_HWCAP_A64_ATOMICS);
+    GET_FEATURE_ID(aa64_rdm, ARM_HWCAP_A64_ASIMDRDM);
+    GET_FEATURE_ID(aa64_dp, ARM_HWCAP_A64_ASIMDDP);
+    GET_FEATURE_ID(aa64_fcma, ARM_HWCAP_A64_FCMA);
     GET_FEATURE(ARM_FEATURE_SVE, ARM_HWCAP_A64_SVE);
+
 #undef GET_FEATURE
+#undef GET_FEATURE_ID

     return hwcaps;
 }
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
         cortex_a15_initfn(obj);
 #ifdef CONFIG_USER_ONLY
         /* We don't set these in system emulation mode for the moment,
-         * since we don't correctly set the ID registers to advertise them,
+         * since we don't correctly set (all of) the ID registers to
+         * advertise them.
          */
         set_feature(&cpu->env, ARM_FEATURE_V8);
-        set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-        set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-        set_feature(&cpu->env, ARM_FEATURE_CRC);
-        set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
-        set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
-        set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
+        {
+            uint32_t t;
+
+            t = cpu->isar.id_isar5;
+            t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+            t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+            t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+            t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+            t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+            t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+            cpu->isar.id_isar5 = t;
+
+            t = cpu->isar.id_isar6;
+            t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+            cpu->isar.id_isar6 = t;
+        }
 #endif
     }
 }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
     set_feature(&cpu->env, ARM_FEATURE_AARCH64);
     set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
-    set_feature(&cpu->env, ARM_FEATURE_V8_AES);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA1);
-    set_feature(&cpu->env, ARM_FEATURE_V8_SHA256);
-    set_feature(&cpu->env, ARM_FEATURE_V8_PMULL);
-    set_feature(&cpu->env, ARM_FEATURE_CRC);
     set_feature(&cpu->env, ARM_FEATURE_EL2);
     set_feature(&cpu->env, ARM_FEATURE_EL3);
     set_feature(&cpu->env, ARM_FEATURE_PMU);
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     if (kvm_enabled()) {
         kvm_arm_set_cpu_features_from_host(cpu);
     } else {
+        uint64_t t;
+        uint32_t u;
         aarch64_a57_initfn(obj);
+
+        t = cpu->isar.id_aa64isar0;
+        t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
+        t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
+        t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
+        t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
+        cpu->isar.id_aa64isar0 = t;
+
+        t = cpu->isar.id_aa64isar1;
+        t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
+        cpu->isar.id_aa64isar1 = t;
+
+        /* Replicate the same data to the 32-bit id registers.  */
+        u = cpu->isar.id_isar5;
+        u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
+        u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
+        u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
+        u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
+        u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
+        u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
+        cpu->isar.id_isar5 = u;
+
+        u = cpu->isar.id_isar6;
+        u = FIELD_DP32(u, ID_ISAR6, DP, 1);
+        cpu->isar.id_isar6 = u;
+
 #ifdef CONFIG_USER_ONLY
         /* We don't set these in system emulation mode for the moment,
          * since we don't correctly set the ID registers to advertise them,
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
          * whereas the architecture requires them to be present in both if
          * present in either.
          */
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA512);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SHA3);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SM3);
-        set_feature(&cpu->env, ARM_FEATURE_V8_SM4);
-        set_feature(&cpu->env, ARM_FEATURE_V8_ATOMICS);
-        set_feature(&cpu->env, ARM_FEATURE_V8_RDM);
-        set_feature(&cpu->env, ARM_FEATURE_V8_DOTPROD);
         set_feature(&cpu->env, ARM_FEATURE_V8_FP16);
-        set_feature(&cpu->env, ARM_FEATURE_V8_FCMA);
         set_feature(&cpu->env, ARM_FEATURE_SVE);
         /* For usermode -cpu max we can use a larger and more efficient DCZ
          * blocksize since we don't have to follow what the hardware does.
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         }
         if (rt2 == 31
             && ((rt | rs) & 1) == 0
-            && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+            && dc_isar_feature(aa64_atomics, s)) {
             /* CASP / CASPL */
             gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
         }
         if (rt2 == 31
             && ((rt | rs) & 1) == 0
-            && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+            && dc_isar_feature(aa64_atomics, s)) {
             /* CASPA / CASPAL */
             gen_compare_and_swap_pair(s, rs, rt, rn, size | 2);
             return;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
     case 0xb: /* CASL */
     case 0xe: /* CASA */
     case 0xf: /* CASAL */
-        if (rt2 == 31 && arm_dc_feature(s, ARM_FEATURE_V8_ATOMICS)) {
+        if (rt2 == 31 && dc_isar_feature(aa64_atomics, s)) {
             gen_compare_and_swap(s, rs, rt, rn, size);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
     int rs = extract32(insn, 16, 5);
     int rn = extract32(insn, 5, 5);
     int o3_opc = extract32(insn, 12, 4);
-    int feature = ARM_FEATURE_V8_ATOMICS;
     TCGv_i64 tcg_rn, tcg_rs;
     AtomicThreeOpFn *fn;

-    if (is_vector) {
+    if (is_vector || !dc_isar_feature(aa64_atomics, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
-        unallocated_encoding(s);
-        return;
-    }

     if (rn == 31) {
         gen_check_sp_alignment(s);
@@ -XXX,XX +XXX,XX @@ static void handle_crc32(DisasContext *s,
     TCGv_i64 tcg_acc, tcg_val;
     TCGv_i32 tcg_bytes;

-    if (!arm_dc_feature(s, ARM_FEATURE_CRC)
+    if (!dc_isar_feature(aa64_crc32, s)
         || (sf == 1 && sz != 3)
         || (sf == 0 && sz == 3)) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
     bool u = extract32(insn, 29, 1);
     TCGv_i32 ele1, ele2, ele3;
     TCGv_i64 res;
-    int feature;
+    bool feature;

     switch (u * 16 + opcode) {
     case 0x10: /* SQRDMLAH (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_same_extra(DisasContext *s,
             unallocated_encoding(s);
            return;
         }
-        feature = ARM_FEATURE_V8_RDM;
+        feature = dc_isar_feature(aa64_rdm, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
             return;
         }
         if (size == 3) {
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
+            if (!dc_isar_feature(aa64_pmull, s)) {
                 unallocated_encoding(s);
                 return;
             }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
     int size = extract32(insn, 22, 2);
     bool u = extract32(insn, 29, 1);
     bool is_q = extract32(insn, 30, 1);
-    int feature, rot;
+    bool feature;
+    int rot;

     switch (u * 16 + opcode) {
     case 0x10: /* SQRDMLAH (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_RDM;
+        feature = dc_isar_feature(aa64_rdm, s);
         break;
     case 0x02: /* SDOT (vector) */
     case 0x12: /* UDOT (vector) */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_DOTPROD;
+        feature = dc_isar_feature(aa64_dp, s);
         break;
     case 0x18: /* FCMLA, #0 */
     case 0x19: /* FCMLA, #90 */
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        feature = ARM_FEATURE_V8_FCMA;
+        feature = dc_isar_feature(aa64_fcma, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }
-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         break;
     case 0x1d: /* SQRDMLAH */
     case 0x1f: /* SQRDMLSH */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
+        if (!dc_isar_feature(aa64_rdm, s)) {
             unallocated_encoding(s);
             return;
         }
         break;
     case 0x0e: /* SDOT */
     case 0x1e: /* UDOT */
-        if (size != MO_32 || !arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
+        if (size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case 0x13: /* FCMLA #90 */
     case 0x15: /* FCMLA #180 */
     case 0x17: /* FCMLA #270 */
-        if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
+        if (!dc_isar_feature(aa64_fcma, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_aes(DisasContext *s, uint32_t insn)
     TCGv_i32 tcg_decrypt;
     CryptoThreeOpIntFn *genfn;

-    if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
-        || size != 0) {
+    if (!dc_isar_feature(aa64_aes, s) || size != 0) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
     int rd = extract32(insn, 0, 5);
     CryptoThreeOpFn *genfn;
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
-    int feature = ARM_FEATURE_V8_SHA256;
+    bool feature;

     if (size != 0) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha(DisasContext *s, uint32_t insn)
     case 2: /* SHA1M */
     case 3: /* SHA1SU0 */
         genfn = NULL;
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         break;
     case 4: /* SHA256H */
         genfn = gen_helper_crypto_sha256h;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     case 5: /* SHA256H2 */
         genfn = gen_helper_crypto_sha256h2;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     case 6: /* SHA256SU1 */
         genfn = gen_helper_crypto_sha256su1;
+        feature = dc_isar_feature(aa64_sha256, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }

-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
     CryptoTwoOpFn *genfn;
-    int feature;
+    bool feature;
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;

     if (size != 0) {
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)

     switch (opcode) {
     case 0: /* SHA1H */
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         genfn = gen_helper_crypto_sha1h;
         break;
     case 1: /* SHA1SU1 */
-        feature = ARM_FEATURE_V8_SHA1;
+        feature = dc_isar_feature(aa64_sha1, s);
         genfn = gen_helper_crypto_sha1su1;
         break;
     case 2: /* SHA256SU0 */
-        feature = ARM_FEATURE_V8_SHA256;
+        feature = dc_isar_feature(aa64_sha256, s);
         genfn = gen_helper_crypto_sha256su0;
         break;
     default:
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha(DisasContext *s, uint32_t insn)
         return;
     }

-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
     int rm = extract32(insn, 16, 5);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    int feature;
+    bool feature;
     CryptoThreeOpFn *genfn;

     if (o == 0) {
         switch (opcode) {
         case 0: /* SHA512H */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512h;
             break;
         case 1: /* SHA512H2 */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512h2;
             break;
         case 2: /* SHA512SU1 */
-            feature = ARM_FEATURE_V8_SHA512;
+            feature = dc_isar_feature(aa64_sha512, s);
             genfn = gen_helper_crypto_sha512su1;
             break;
         case 3: /* RAX1 */
-            feature = ARM_FEATURE_V8_SHA3;
+            feature = dc_isar_feature(aa64_sha3, s);
             genfn = NULL;
             break;
         }
     } else {
         switch (opcode) {
         case 0: /* SM3PARTW1 */
-            feature = ARM_FEATURE_V8_SM3;
+            feature = dc_isar_feature(aa64_sm3, s);
             genfn = gen_helper_crypto_sm3partw1;
             break;
         case 1: /* SM3PARTW2 */
-            feature = ARM_FEATURE_V8_SM3;
+            feature = dc_isar_feature(aa64_sm3, s);
             genfn = gen_helper_crypto_sm3partw2;
             break;
         case 2: /* SM4EKEY */
-            feature = ARM_FEATURE_V8_SM4;
+            feature = dc_isar_feature(aa64_sm4, s);
             genfn = gen_helper_crypto_sm4ekey;
             break;
         default:
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
         }
     }

-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr;
-    int feature;
+    bool feature;
     CryptoTwoOpFn *genfn;

     switch (opcode) {
     case 0: /* SHA512SU0 */
-        feature = ARM_FEATURE_V8_SHA512;
+        feature = dc_isar_feature(aa64_sha512, s);
         genfn = gen_helper_crypto_sha512su0;
         break;
     case 1: /* SM4E */
-        feature = ARM_FEATURE_V8_SM4;
+        feature = dc_isar_feature(aa64_sm4, s);
         genfn = gen_helper_crypto_sm4e;
         break;
     default:
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_two_reg_sha512(DisasContext *s, uint32_t insn)
         return;
     }

-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
     int ra = extract32(insn, 10, 5);
     int rn = extract32(insn, 5, 5);
     int rd = extract32(insn, 0, 5);
-    int feature;
+    bool feature;

     switch (op0) {
     case 0: /* EOR3 */
     case 1: /* BCAX */
-        feature = ARM_FEATURE_V8_SHA3;
+        feature = dc_isar_feature(aa64_sha3, s);
         break;
     case 2: /* SM3SS1 */
-        feature = ARM_FEATURE_V8_SM3;
+        feature = dc_isar_feature(aa64_sm3, s);
         break;
     default:
         unallocated_encoding(s);
         return;
     }

-    if (!arm_dc_feature(s, feature)) {
+    if (!feature) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
     TCGv_i64 tcg_op1, tcg_op2, tcg_res[2];
     int pass;

-    if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA3)) {
+    if (!dc_isar_feature(aa64_sha3, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void disas_crypto_three_reg_imm2(DisasContext *s, uint32_t insn)
     TCGv_ptr tcg_rd_ptr, tcg_rn_ptr, tcg_rm_ptr;
     TCGv_i32 tcg_imm2, tcg_opcode;

-    if (!arm_dc_feature(s, ARM_FEATURE_V8_SM3)) {
+    if (!dc_isar_feature(aa64_sm3, s)) {
         unallocated_encoding(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     ARMCPU *arm_cpu = arm_env_get_cpu(env);
     int bound;

+    dc->isar = &arm_cpu->isar;
     dc->pc = dc->base.pc_first;
     dc->condjmp = 0;

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static const uint8_t neon_2rm_sizes[] = {
 static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
                          int q, int rd, int rn, int rm)
 {
-    if (arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
+    if (dc_isar_feature(aa32_rdm, s)) {
         int opr_sz = (1 + q) * 8;
         tcg_gen_gvec_3_ptr(vfp_reg_offset(1, rd),
                            vfp_reg_offset(1, rn),
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             return 1;
         }
         if (!u) { /* SHA-1 */
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
+            if (!dc_isar_feature(aa32_sha1, s)) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             gen_helper_crypto_sha1_3reg(ptr1, ptr2, ptr3, tmp4);
             tcg_temp_free_i32(tmp4);
         } else { /* SHA-256 */
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
+            if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         if (op == 14 && size == 2) {
             TCGv_i64 tcg_rn, tcg_rm, tcg_rd;

-            if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
+            if (!dc_isar_feature(aa32_pmull, s)) {
                 return 1;
             }
             tcg_rn = tcg_temp_new_i64();
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
         {
             NeonGenThreeOpEnvFn *fn;

-            if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
+            if (!dc_isar_feature(aa32_rdm, s)) {
                 return 1;
             }
             if (u && ((rd | rn) & 1)) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             break;
         }
         case NEON_2RM_AESE: case NEON_2RM_AESMC:
-            if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
-                || ((rm | rd) & 1)) {
+            if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
                 return 1;
             }
             ptr1 = vfp_reg_ptr(true, rd);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
845
tcg_temp_free_i32(tmp4);
240
+ * any further FP insns in this TB.
846
} else { /* SHA-256 */
241
+ */
847
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256) || size == 3) {
242
+ s->v7m_lspact = false;
848
+ if (!dc_isar_feature(aa32_sha2, s) || size == 3) {
243
+ }
849
return 1;
244
+
850
}
245
/* Update ownership of FP context: set FPCCR.S to match current state */
851
ptr1 = vfp_reg_ptr(true, rd);
246
if (s->v8m_fpccr_s_wrong) {
852
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
247
TCGv_i32 tmp;
853
if (op == 14 && size == 2) {
854
TCGv_i64 tcg_rn, tcg_rm, tcg_rd;
855
856
- if (!arm_dc_feature(s, ARM_FEATURE_V8_PMULL)) {
857
+ if (!dc_isar_feature(aa32_pmull, s)) {
858
return 1;
859
}
860
tcg_rn = tcg_temp_new_i64();
861
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
862
{
863
NeonGenThreeOpEnvFn *fn;
864
865
- if (!arm_dc_feature(s, ARM_FEATURE_V8_RDM)) {
866
+ if (!dc_isar_feature(aa32_rdm, s)) {
867
return 1;
868
}
869
if (u && ((rd | rn) & 1)) {
870
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
871
break;
872
}
873
case NEON_2RM_AESE: case NEON_2RM_AESMC:
874
- if (!arm_dc_feature(s, ARM_FEATURE_V8_AES)
875
- || ((rm | rd) & 1)) {
876
+ if (!dc_isar_feature(aa32_aes, s) || ((rm | rd) & 1)) {
877
return 1;
878
}
879
ptr1 = vfp_reg_ptr(true, rd);
880
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
881
tcg_temp_free_i32(tmp3);
882
break;
883
case NEON_2RM_SHA1H:
884
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)
885
- || ((rm | rd) & 1)) {
886
+ if (!dc_isar_feature(aa32_sha1, s) || ((rm | rd) & 1)) {
887
return 1;
888
}
889
ptr1 = vfp_reg_ptr(true, rd);
890
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
891
}
892
/* bit 6 (q): set -> SHA256SU0, cleared -> SHA1SU1 */
893
if (q) {
894
- if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA256)) {
895
+ if (!dc_isar_feature(aa32_sha2, s)) {
896
return 1;
897
}
898
- } else if (!arm_dc_feature(s, ARM_FEATURE_V8_SHA1)) {
899
+ } else if (!dc_isar_feature(aa32_sha1, s)) {
900
return 1;
901
}
902
ptr1 = vfp_reg_ptr(true, rd);
903
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
904
/* VCMLA -- 1111 110R R.1S .... .... 1000 ...0 .... */
905
int size = extract32(insn, 20, 1);
906
data = extract32(insn, 23, 2); /* rot */
907
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
908
+ if (!dc_isar_feature(aa32_vcma, s)
909
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
910
return 1;
911
}
912
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
913
/* VCADD -- 1111 110R 1.0S .... .... 1000 ...0 .... */
914
int size = extract32(insn, 20, 1);
915
data = extract32(insn, 24, 1); /* rot */
916
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)
917
+ if (!dc_isar_feature(aa32_vcma, s)
918
|| (!size && !arm_dc_feature(s, ARM_FEATURE_V8_FP16))) {
919
return 1;
920
}
921
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_3same_ext(DisasContext *s, uint32_t insn)
922
} else if ((insn & 0xfeb00f00) == 0xfc200d00) {
923
/* V[US]DOT -- 1111 1100 0.10 .... .... 1101 .Q.U .... */
924
bool u = extract32(insn, 4, 1);
925
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
926
+ if (!dc_isar_feature(aa32_dp, s)) {
927
return 1;
928
}
929
fn_gvec = u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
930
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
931
int size = extract32(insn, 23, 1);
932
int index;
933
934
- if (!arm_dc_feature(s, ARM_FEATURE_V8_FCMA)) {
935
+ if (!dc_isar_feature(aa32_vcma, s)) {
936
return 1;
937
}
938
if (size == 0) {
939
@@ -XXX,XX +XXX,XX @@ static int disas_neon_insn_2reg_scalar_ext(DisasContext *s, uint32_t insn)
940
} else if ((insn & 0xffb00f00) == 0xfe200d00) {
941
/* V[US]DOT -- 1111 1110 0.10 .... .... 1101 .Q.U .... */
942
int u = extract32(insn, 4, 1);
943
- if (!arm_dc_feature(s, ARM_FEATURE_V8_DOTPROD)) {
944
+ if (!dc_isar_feature(aa32_dp, s)) {
945
return 1;
946
}
947
fn_gvec = u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
948
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
949
* op1 == 3 is UNPREDICTABLE but handle as UNDEFINED.
950
* Bits 8, 10 and 11 should be zero.
951
*/
952
- if (!arm_dc_feature(s, ARM_FEATURE_CRC) || op1 == 0x3 ||
953
- (c & 0xd) != 0) {
954
+ if (!dc_isar_feature(aa32_crc32, s) || op1 == 0x3 || (c & 0xd) != 0) {
955
goto illegal_op;
956
}
957
958
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
959
case 0x28:
960
case 0x29:
961
case 0x2a:
962
- if (!arm_dc_feature(s, ARM_FEATURE_CRC)) {
963
+ if (!dc_isar_feature(aa32_crc32, s)) {
964
goto illegal_op;
965
}
966
break;
967
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
248
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
968
CPUARMState *env = cs->env_ptr;
249
dc->v8m_fpccr_s_wrong = FIELD_EX32(tb_flags, TBFLAG_A32, FPCCR_S_WRONG);
969
ARMCPU *cpu = arm_env_get_cpu(env);
250
dc->v7m_new_fp_ctxt_needed =
970
251
FIELD_EX32(tb_flags, TBFLAG_A32, NEW_FP_CTXT_NEEDED);
971
+ dc->isar = &cpu->isar;
252
+ dc->v7m_lspact = FIELD_EX32(tb_flags, TBFLAG_A32, LSPACT);
972
dc->pc = dc->base.pc_first;
253
dc->cp_regs = cpu->cp_regs;
973
dc->condjmp = 0;
254
dc->features = env->features;
974
255
975
--
2.19.1

--
2.20.1

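As a reading aid, the shape of the conversion applied throughout the
hunks above, as a hedged sketch (names taken from the diff itself,
decode steps elided with ...):

    /* before: stash a feature enum, test it later */
    int feature = ARM_FEATURE_V8_SHA512;
    ...
    if (!arm_dc_feature(s, feature)) {
        unallocated_encoding(s);
        return;
    }

    /* after: test the ID register field directly at decode time */
    bool feature = dc_isar_feature(aa64_sha512, s);
    ...
    if (!feature) {
        unallocated_encoding(s);
        return;
    }
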
From: Richard Henderson <richard.henderson@linaro.org>

Both arm and thumb2 division are controlled by the same ISAR field,
which takes care of the arm implies thumb case. Having M imply
thumb2 division was wrong for cortex-m0, which is v6m and does not
have thumb2 at all, much less thumb2 division.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181016223115.24100-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 12 ++++++++++--
 linux-user/elfload.c | 4 ++--
 target/arm/cpu.c | 10 +---------
 target/arm/translate.c | 4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

Implement the VLSTM instruction for v7M for the FPU present case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-25-peter.maydell@linaro.org
---
 target/arm/cpu.h | 2 +
 target/arm/helper.h | 2 +
 target/arm/helper.c | 84 ++++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c | 15 +++++++-
 4 files changed, 102 insertions(+), 1 deletion(-)
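For reference, a minimal sketch of how the two new tests divide up the
single ID_ISAR0.Divide field (0 = no divide, 1 = Thumb only, 2 = Thumb
and ARM); the helpers appear verbatim in the cpu.h hunk below:

    static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
    {
        /* Divide != 0: SDIV/UDIV present in the Thumb encoding */
        return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
    }

    static inline bool isar_feature_arm_div(const ARMISARegisters *id)
    {
        /* Divide > 1: also present in the ARM encoding, so
         * arm_div implies thumb_div without a separate flag.
         */
        return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
    }
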
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
21
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/cpu.h
15
--- a/target/arm/cpu.h
23
+++ b/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
24
@@ -XXX,XX +XXX,XX @@ enum arm_features {
17
@@ -XXX,XX +XXX,XX @@
25
ARM_FEATURE_VFP3,
18
#define EXCP_INVSTATE 18 /* v7M INVSTATE UsageFault */
26
ARM_FEATURE_VFP_FP16,
19
#define EXCP_STKOF 19 /* v8M STKOF UsageFault */
27
ARM_FEATURE_NEON,
20
#define EXCP_LAZYFP 20 /* v7M fault during lazy FP stacking */
28
- ARM_FEATURE_THUMB_DIV, /* divide supported in Thumb encoding */
21
+#define EXCP_LSERR 21 /* v8M LSERR SecureFault */
29
ARM_FEATURE_M, /* Microcontroller profile. */
22
+#define EXCP_UNALIGNED 22 /* v7M UNALIGNED UsageFault */
30
ARM_FEATURE_OMAPCP, /* OMAP specific CP15 ops handling. */
23
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
31
ARM_FEATURE_THUMB2EE,
24
32
@@ -XXX,XX +XXX,XX @@ enum arm_features {
25
#define ARMV7M_EXCP_RESET 1
33
ARM_FEATURE_V5,
26
diff --git a/target/arm/helper.h b/target/arm/helper.h
34
ARM_FEATURE_STRONGARM,
27
index XXXXXXX..XXXXXXX 100644
35
ARM_FEATURE_VAPA, /* cp15 VA to PA lookups */
28
--- a/target/arm/helper.h
36
- ARM_FEATURE_ARM_DIV, /* divide supported in ARM encoding */
29
+++ b/target/arm/helper.h
37
ARM_FEATURE_VFP4, /* VFPv4 (implies that NEON is v2) */
30
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
38
ARM_FEATURE_GENERIC_TIMER,
31
39
ARM_FEATURE_MVFR, /* Media and VFP Feature Registers 0 and 1 */
32
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
40
@@ -XXX,XX +XXX,XX @@ extern const uint64_t pred_esz_masks[4];
33
41
/*
34
+DEF_HELPER_2(v7m_vlstm, void, env, i32)
42
* 32-bit feature tests via id registers.
35
+
43
*/
36
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
44
+static inline bool isar_feature_thumb_div(const ARMISARegisters *id)
37
38
DEF_HELPER_4(access_check_cp_reg, void, env, ptr, i32, i32)
39
diff --git a/target/arm/helper.c b/target/arm/helper.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/helper.c
42
+++ b/target/arm/helper.c
43
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_preserve_fp_state)(CPUARMState *env)
44
g_assert_not_reached();
45
}
46
47
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
45
+{
48
+{
46
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) != 0;
49
+ /* translate.c should never generate calls here in user-only mode */
50
+ g_assert_not_reached();
47
+}
51
+}
48
+
52
+
49
+static inline bool isar_feature_arm_div(const ARMISARegisters *id)
53
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
54
{
55
/* The TT instructions can be used by unprivileged code, but in
56
@@ -XXX,XX +XXX,XX @@ static void v7m_update_fpccr(CPUARMState *env, uint32_t frameptr,
57
}
58
}
59
60
+void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
50
+{
61
+{
51
+ return FIELD_EX32(id->id_isar0, ID_ISAR0, DIVIDE) > 1;
62
+ /* fptr is the value of Rn, the frame pointer we store the FP regs to */
63
+ bool s = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_S_MASK;
64
+ bool lspact = env->v7m.fpccr[s] & R_V7M_FPCCR_LSPACT_MASK;
65
+
66
+ assert(env->v7m.secure);
67
+
68
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
69
+ return;
70
+ }
71
+
72
+ /* Check access to the coprocessor is permitted */
73
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
74
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
75
+ }
76
+
77
+ if (lspact) {
78
+ /* LSPACT should not be active when there is active FP state */
79
+ raise_exception_ra(env, EXCP_LSERR, 0, 1, GETPC());
80
+ }
81
+
82
+ if (fptr & 7) {
83
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
84
+ }
85
+
86
+ /*
87
+ * Note that we do not use v7m_stack_write() here, because the
88
+ * accesses should not set the FSR bits for stacking errors if they
89
+ * fail. (In pseudocode terms, they are AccType_NORMAL, not AccType_STACK
90
+ * or AccType_LAZYFP). Faults in cpu_stl_data() will throw exceptions
91
+ * and longjmp out.
92
+ */
93
+ if (!(env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPEN_MASK)) {
94
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
95
+ int i;
96
+
97
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
98
+ uint64_t dn = *aa32_vfp_dreg(env, i / 2);
99
+ uint32_t faddr = fptr + 4 * i;
100
+ uint32_t slo = extract64(dn, 0, 32);
101
+ uint32_t shi = extract64(dn, 32, 32);
102
+
103
+ if (i >= 16) {
104
+ faddr += 8; /* skip the slot for the FPSCR */
105
+ }
106
+ cpu_stl_data(env, faddr, slo);
107
+ cpu_stl_data(env, faddr + 4, shi);
108
+ }
109
+ cpu_stl_data(env, fptr + 0x40, vfp_get_fpscr(env));
110
+
111
+ /*
112
+ * If TS is 0 then s0 to s15 and FPSCR are UNKNOWN; we choose to
113
+ * leave them unchanged, matching our choice in v7m_preserve_fp_state.
114
+ */
115
+ if (ts) {
116
+ for (i = 0; i < 32; i += 2) {
117
+ *aa32_vfp_dreg(env, i / 2) = 0;
118
+ }
119
+ vfp_set_fpscr(env, 0);
120
+ }
121
+ } else {
122
+ v7m_update_fpccr(env, fptr, false);
123
+ }
124
+
125
+ env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
52
+}
126
+}
53
+
127
+
54
static inline bool isar_feature_aa32_aes(const ARMISARegisters *id)
128
static bool v7m_push_stack(ARMCPU *cpu)
55
{
129
{
56
return FIELD_EX32(id->id_isar5, ID_ISAR5, AES) != 0;
130
/* Do the "set up stack frame" part of exception entry,
57
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
131
@@ -XXX,XX +XXX,XX @@ static void arm_log_exception(int idx)
58
index XXXXXXX..XXXXXXX 100644
132
[EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
59
--- a/linux-user/elfload.c
133
[EXCP_STKOF] = "v8M STKOF UsageFault",
60
+++ b/linux-user/elfload.c
134
[EXCP_LAZYFP] = "v7M exception during lazy FP stacking",
61
@@ -XXX,XX +XXX,XX @@ static uint32_t get_elf_hwcap(void)
135
+ [EXCP_LSERR] = "v8M LSERR UsageFault",
62
GET_FEATURE(ARM_FEATURE_VFP3, ARM_HWCAP_ARM_VFPv3);
136
+ [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
63
GET_FEATURE(ARM_FEATURE_V6K, ARM_HWCAP_ARM_TLS);
137
};
64
GET_FEATURE(ARM_FEATURE_VFP4, ARM_HWCAP_ARM_VFPv4);
138
65
- GET_FEATURE(ARM_FEATURE_ARM_DIV, ARM_HWCAP_ARM_IDIVA);
139
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
66
- GET_FEATURE(ARM_FEATURE_THUMB_DIV, ARM_HWCAP_ARM_IDIVT);
140
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
67
+ GET_FEATURE_ID(arm_div, ARM_HWCAP_ARM_IDIVA);
141
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
68
+ GET_FEATURE_ID(thumb_div, ARM_HWCAP_ARM_IDIVT);
142
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
69
/* All QEMU's VFPv3 CPUs have 32 registers, see VFP_DREG in translate.c.
143
break;
70
* Note that the ARM_HWCAP_ARM_VFPv3D16 bit is always the inverse of
144
+ case EXCP_LSERR:
71
* ARM_HWCAP_ARM_VFPD32 (and so always clear for QEMU); it is unrelated
145
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
72
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
146
+ env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
73
index XXXXXXX..XXXXXXX 100644
147
+ break;
74
--- a/target/arm/cpu.c
148
+ case EXCP_UNALIGNED:
75
+++ b/target/arm/cpu.c
149
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
76
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
150
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
77
* Presence of EL2 itself is ARM_FEATURE_EL2, and of the
151
+ break;
78
* Security Extensions is ARM_FEATURE_EL3.
152
case EXCP_SWI:
79
*/
153
/* The PC already points to the next instruction. */
80
- set_feature(env, ARM_FEATURE_ARM_DIV);
154
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
81
+ assert(cpu_isar_feature(arm_div, cpu));
82
set_feature(env, ARM_FEATURE_LPAE);
83
set_feature(env, ARM_FEATURE_V7);
84
}
85
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
86
if (arm_feature(env, ARM_FEATURE_V5)) {
87
set_feature(env, ARM_FEATURE_V4T);
88
}
89
- if (arm_feature(env, ARM_FEATURE_M)) {
90
- set_feature(env, ARM_FEATURE_THUMB_DIV);
91
- }
92
- if (arm_feature(env, ARM_FEATURE_ARM_DIV)) {
93
- set_feature(env, ARM_FEATURE_THUMB_DIV);
94
- }
95
if (arm_feature(env, ARM_FEATURE_VFP4)) {
96
set_feature(env, ARM_FEATURE_VFP3);
97
set_feature(env, ARM_FEATURE_VFP_FP16);
98
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
99
ARMCPU *cpu = ARM_CPU(obj);
100
101
set_feature(&cpu->env, ARM_FEATURE_V7);
102
- set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
103
- set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
104
set_feature(&cpu->env, ARM_FEATURE_V7MP);
105
set_feature(&cpu->env, ARM_FEATURE_PMSA);
106
cpu->midr = 0x411fc153; /* r1p3 */
107
diff --git a/target/arm/translate.c b/target/arm/translate.c
155
diff --git a/target/arm/translate.c b/target/arm/translate.c
108
index XXXXXXX..XXXXXXX 100644
156
index XXXXXXX..XXXXXXX 100644
109
--- a/target/arm/translate.c
157
--- a/target/arm/translate.c
110
+++ b/target/arm/translate.c
158
+++ b/target/arm/translate.c
111
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
112
case 1:
113
case 3:
114
/* SDIV, UDIV */
115
- if (!arm_dc_feature(s, ARM_FEATURE_ARM_DIV)) {
116
+ if (!dc_isar_feature(arm_div, s)) {
117
goto illegal_op;
118
}
119
if (((insn >> 5) & 7) || (rd != 15)) {
120
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
159
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
121
tmp2 = load_reg(s, rm);
160
if (!s->v8m_secure || (insn & 0x0040f0ff)) {
122
if ((op & 0x50) == 0x10) {
123
/* sdiv, udiv */
124
- if (!arm_dc_feature(s, ARM_FEATURE_THUMB_DIV)) {
125
+ if (!dc_isar_feature(thumb_div, s)) {
126
goto illegal_op;
161
goto illegal_op;
127
}
162
}
128
if (op & 0x20)
163
- /* Just NOP since FP support is not implemented */
164
+
165
+ if (arm_dc_feature(s, ARM_FEATURE_VFP)) {
166
+ TCGv_i32 fptr = load_reg(s, rn);
167
+
168
+ if (extract32(insn, 20, 1)) {
169
+ /* VLLDM */
170
+ } else {
171
+ gen_helper_v7m_vlstm(cpu_env, fptr);
172
+ }
173
+ tcg_temp_free_i32(fptr);
174
+
175
+ /* End the TB, because we have updated FP control bits */
176
+ s->base.is_jmp = DISAS_UPDATE;
177
+ }
178
break;
179
}
180
if (arm_dc_feature(s, ARM_FEATURE_VFP) &&
129
--
2.19.1

--
2.20.1

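For orientation, the stack frame written by the v7m_vlstm helper above
when lazy preservation is disabled, as a sketch derived from the faddr
arithmetic in the helper (the +8 skip and the 0x40 FPSCR offset come
from the code; the "reserved" label is an assumption):

    /*
     * fptr + 0x00 .. 0x3c : s0 - s15
     * fptr + 0x40         : FPSCR
     * fptr + 0x44         : reserved (skipped by faddr += 8)
     * fptr + 0x48 .. 0x84 : s16 - s31, only when FPCCR.TS is set
     */
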
For AArch32, exception return happens through certain kinds
of CPSR write. We don't currently have any CPU_LOG_INT logging
of these events (unlike AArch64, where we log in the ERET
instruction). Add some suitable logging.

This will log exception returns like this:
Exception return from AArch32 hyp to usr PC 0x80100374

paralleling the existing logging in the exception_return
helper for AArch64 exception returns:
Exception return from AArch64 EL2 to AArch64 EL0 PC 0x8003045c
Exception return from AArch64 EL2 to AArch32 EL0 PC 0x8003045c

(Note that an AArch32 exception return can only be
AArch32->AArch32, never to AArch64.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181012144235.19646-2-peter.maydell@linaro.org
---
 target/arm/internals.h | 18 ++++++++++++++++++
 target/arm/helper.c | 10 ++++++++++
 target/arm/translate.c | 7 +------
 3 files changed, 29 insertions(+), 6 deletions(-)

Implement the VLLDM instruction for v7M for the FPU present case.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190416125744.27770-26-peter.maydell@linaro.org
---
 target/arm/helper.h | 1 +
 target/arm/helper.c | 54 ++++++++++++++++++++++++++++++++++++++++++
 target/arm/translate.c | 2 +-
 3 files changed, 56 insertions(+), 1 deletion(-)
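A couple of worked examples of the mapping used in the log lines quoted
above (values per the cpu_mode_names table that this patch moves into
aarch32_mode_name(); illustrative only):

    /* PSR.M = 0x10 is "usr", 0x13 is "svc", 0x1a is "hyp" */
    assert(strcmp(aarch32_mode_name(0x40000010), "usr") == 0);
    assert(strcmp(aarch32_mode_name(0x0000001a), "hyp") == 0);
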
diff --git a/target/arm/internals.h b/target/arm/internals.h
12
diff --git a/target/arm/helper.h b/target/arm/helper.h
27
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/internals.h
14
--- a/target/arm/helper.h
29
+++ b/target/arm/internals.h
15
+++ b/target/arm/helper.h
30
@@ -XXX,XX +XXX,XX @@ static inline uint32_t v7m_sp_limit(CPUARMState *env)
16
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_3(v7m_tt, i32, env, i32, i32)
31
}
17
DEF_HELPER_1(v7m_preserve_fp_state, void, env)
32
}
18
33
19
DEF_HELPER_2(v7m_vlstm, void, env, i32)
34
+/**
20
+DEF_HELPER_2(v7m_vlldm, void, env, i32)
35
+ * aarch32_mode_name(): Return name of the AArch32 CPU mode
21
36
+ * @psr: Program Status Register indicating CPU mode
22
DEF_HELPER_2(v8m_stackcheck, void, env, i32)
37
+ *
23
38
+ * Returns, for debug logging purposes, a printable representation
39
+ * of the AArch32 CPU mode ("svc", "usr", etc) as indicated by
40
+ * the low bits of the specified PSR.
41
+ */
42
+static inline const char *aarch32_mode_name(uint32_t psr)
43
+{
44
+ static const char cpu_mode_names[16][4] = {
45
+ "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
46
+ "???", "???", "hyp", "und", "???", "???", "???", "sys"
47
+ };
48
+
49
+ return cpu_mode_names[psr & 0xf];
50
+}
51
+
52
#endif
53
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
54
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
55
--- a/target/arm/helper.c
26
--- a/target/arm/helper.c
56
+++ b/target/arm/helper.c
27
+++ b/target/arm/helper.c
57
@@ -XXX,XX +XXX,XX @@ void cpsr_write(CPUARMState *env, uint32_t val, uint32_t mask,
28
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
58
mask |= CPSR_IL;
29
g_assert_not_reached();
59
val |= CPSR_IL;
30
}
60
}
31
61
+ qemu_log_mask(LOG_GUEST_ERROR,
32
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
62
+ "Illegal AArch32 mode switch attempt from %s to %s\n",
33
+{
63
+ aarch32_mode_name(env->uncached_cpsr),
34
+ /* translate.c should never generate calls here in user-only mode */
64
+ aarch32_mode_name(val));
35
+ g_assert_not_reached();
65
} else {
36
+}
66
+ qemu_log_mask(CPU_LOG_INT, "%s %s to %s PC 0x%" PRIx32 "\n",
37
+
67
+ write_type == CPSRWriteExceptionReturn ?
38
uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
68
+ "Exception return from AArch32" :
39
{
69
+ "AArch32 mode switch from",
40
/* The TT instructions can be used by unprivileged code, but in
70
+ aarch32_mode_name(env->uncached_cpsr),
41
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_vlstm)(CPUARMState *env, uint32_t fptr)
71
+ aarch32_mode_name(val), env->regs[15]);
42
env->v7m.control[M_REG_S] &= ~R_V7M_CONTROL_FPCA_MASK;
72
switch_mode(env, val & CPSR_M);
43
}
73
}
44
74
}
45
+void HELPER(v7m_vlldm)(CPUARMState *env, uint32_t fptr)
46
+{
47
+ /* fptr is the value of Rn, the frame pointer we load the FP regs from */
48
+ assert(env->v7m.secure);
49
+
50
+ if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK)) {
51
+ return;
52
+ }
53
+
54
+ /* Check access to the coprocessor is permitted */
55
+ if (!v7m_cpacr_pass(env, true, arm_current_el(env) != 0)) {
56
+ raise_exception_ra(env, EXCP_NOCP, 0, 1, GETPC());
57
+ }
58
+
59
+ if (env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_LSPACT_MASK) {
60
+ /* State in FP is still valid */
61
+ env->v7m.fpccr[M_REG_S] &= ~R_V7M_FPCCR_LSPACT_MASK;
62
+ } else {
63
+ bool ts = env->v7m.fpccr[M_REG_S] & R_V7M_FPCCR_TS_MASK;
64
+ int i;
65
+ uint32_t fpscr;
66
+
67
+ if (fptr & 7) {
68
+ raise_exception_ra(env, EXCP_UNALIGNED, 0, 1, GETPC());
69
+ }
70
+
71
+ for (i = 0; i < (ts ? 32 : 16); i += 2) {
72
+ uint32_t slo, shi;
73
+ uint64_t dn;
74
+ uint32_t faddr = fptr + 4 * i;
75
+
76
+ if (i >= 16) {
77
+ faddr += 8; /* skip the slot for the FPSCR */
78
+ }
79
+
80
+ slo = cpu_ldl_data(env, faddr);
81
+ shi = cpu_ldl_data(env, faddr + 4);
82
+
83
+ dn = (uint64_t) shi << 32 | slo;
84
+ *aa32_vfp_dreg(env, i / 2) = dn;
85
+ }
86
+ fpscr = cpu_ldl_data(env, fptr + 0x40);
87
+ vfp_set_fpscr(env, fpscr);
88
+ }
89
+
90
+ env->v7m.control[M_REG_S] |= R_V7M_CONTROL_FPCA_MASK;
91
+}
92
+
93
static bool v7m_push_stack(ARMCPU *cpu)
94
{
95
/* Do the "set up stack frame" part of exception entry,
75
diff --git a/target/arm/translate.c b/target/arm/translate.c
96
diff --git a/target/arm/translate.c b/target/arm/translate.c
76
index XXXXXXX..XXXXXXX 100644
97
index XXXXXXX..XXXXXXX 100644
77
--- a/target/arm/translate.c
98
--- a/target/arm/translate.c
78
+++ b/target/arm/translate.c
99
+++ b/target/arm/translate.c
79
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUState *cpu, TranslationBlock *tb)
100
@@ -XXX,XX +XXX,XX @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
80
translator_loop(ops, &dc.base, cpu, tb);
101
TCGv_i32 fptr = load_reg(s, rn);
81
}
102
82
103
if (extract32(insn, 20, 1)) {
83
-static const char *cpu_mode_names[16] = {
104
- /* VLLDM */
84
- "usr", "fiq", "irq", "svc", "???", "???", "mon", "abt",
105
+ gen_helper_v7m_vlldm(cpu_env, fptr);
85
- "???", "???", "hyp", "und", "???", "???", "???", "sys"
106
} else {
86
-};
107
gen_helper_v7m_vlstm(cpu_env, fptr);
87
-
108
}
88
void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
89
int flags)
90
{
91
@@ -XXX,XX +XXX,XX @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
92
psr & CPSR_V ? 'V' : '-',
93
psr & CPSR_T ? 'T' : 'A',
94
ns_status,
95
- cpu_mode_names[psr & 0xf], (psr & 0x10) ? 32 : 26);
96
+ aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26);
97
}
98
99
if (flags & CPU_DUMP_FPU) {
100
--
109
--
101
2.19.1
110
2.20.1
102
111
103
112
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
Enable the FPU by default for the Cortex-M4 and Cortex-M33.
2
2
3
Instantiating mps2-an505 (cortex-m33) will fail make check when
4
V7VE asserts that ID_ISAR0.Divide includes ARM division. It is
5
also wrong to include ARM_FEATURE_LPAE.
6
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20181016223115.24100-3-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190416125744.27770-27-peter.maydell@linaro.org
11
---
6
---
12
target/arm/cpu.c | 6 +++++-
7
target/arm/cpu.c | 8 ++++++++
13
1 file changed, 5 insertions(+), 1 deletion(-)
8
1 file changed, 8 insertions(+)
14
9
15
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
10
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
16
index XXXXXXX..XXXXXXX 100644
11
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.c
12
--- a/target/arm/cpu.c
18
+++ b/target/arm/cpu.c
13
+++ b/target/arm/cpu.c
19
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
14
@@ -XXX,XX +XXX,XX @@ static void cortex_m4_initfn(Object *obj)
20
15
set_feature(&cpu->env, ARM_FEATURE_M);
21
/* Some features automatically imply others: */
16
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
22
if (arm_feature(env, ARM_FEATURE_V8)) {
17
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
23
- set_feature(env, ARM_FEATURE_V7VE);
18
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
24
+ if (arm_feature(env, ARM_FEATURE_M)) {
19
cpu->midr = 0x410fc240; /* r0p0 */
25
+ set_feature(env, ARM_FEATURE_V7);
20
cpu->pmsav7_dregion = 8;
26
+ } else {
21
+ cpu->isar.mvfr0 = 0x10110021;
27
+ set_feature(env, ARM_FEATURE_V7VE);
22
+ cpu->isar.mvfr1 = 0x11000011;
28
+ }
23
+ cpu->isar.mvfr2 = 0x00000000;
29
}
24
cpu->id_pfr0 = 0x00000030;
30
if (arm_feature(env, ARM_FEATURE_V7VE)) {
25
cpu->id_pfr1 = 0x00000200;
31
/* v7 Virtualization Extensions. In real hardware this implies
26
cpu->id_dfr0 = 0x00100000;
27
@@ -XXX,XX +XXX,XX @@ static void cortex_m33_initfn(Object *obj)
28
set_feature(&cpu->env, ARM_FEATURE_M_MAIN);
29
set_feature(&cpu->env, ARM_FEATURE_M_SECURITY);
30
set_feature(&cpu->env, ARM_FEATURE_THUMB_DSP);
31
+ set_feature(&cpu->env, ARM_FEATURE_VFP4);
32
cpu->midr = 0x410fd213; /* r0p3 */
33
cpu->pmsav7_dregion = 16;
34
cpu->sau_sregion = 8;
35
+ cpu->isar.mvfr0 = 0x10110021;
36
+ cpu->isar.mvfr1 = 0x11000011;
37
+ cpu->isar.mvfr2 = 0x00000040;
38
cpu->id_pfr0 = 0x00000030;
39
cpu->id_pfr1 = 0x00000210;
40
cpu->id_dfr0 = 0x00200000;
32
--
41
--
33
2.19.1
42
2.20.1
34
43
35
44
diff view generated by jsdifflib
Deleted patch
From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>

"The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there."

For the virt board, we write our startup bootloader at the very
bottom of RAM, so that bit can't be used for the image. To avoid
overlap in case the image requests to be loaded at an offset
smaller than our bootloader, we increment the load offset to the
next 2MB.

This fixes a boot failure for Xen AArch64.
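A worked example, with the constants used in the patch below: an Image
whose header requests text_offset 0 would land on top of the bootloader
at the bottom of RAM, so it is loaded at 2MiB + 0 instead; the common
request of 0x80000 is not below BOOTLOADER_MAX_SIZE (4KiB) and loads
unchanged:

    if (kernel_load_offset < BOOTLOADER_MAX_SIZE) { /* e.g. 0x0 */
        kernel_load_offset += 2 * MiB;              /* -> 0x200000 */
    }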
Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Message-id: b8a89518794b4436af0c151ed10de4fa@dornerworks.com
[PMM: Rephrased a comment a bit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/config-file.h"
 #include "qemu/option.h"
 #include "exec/address-spaces.h"
+#include "qemu/units.h"
 
 /* Kernel boot protocol is specified in the kernel docs
  * Documentation/arm/Booting and Documentation/arm64/booting.txt
@@ -XXX,XX +XXX,XX @@
 #define ARM64_TEXT_OFFSET_OFFSET 8
 #define ARM64_MAGIC_OFFSET 56
 
+#define BOOTLOADER_MAX_SIZE (4 * KiB)
+
 AddressSpace *arm_boot_address_space(ARMCPU *cpu,
                                      const struct arm_boot_info *info)
 {
@@ -XXX,XX +XXX,XX @@ static void write_bootloader(const char *name, hwaddr addr,
         code[i] = tswap32(insn);
     }
 
+    assert((len * sizeof(uint32_t)) < BOOTLOADER_MAX_SIZE);
+
     rom_add_blob_fixed_as(name, code, len * sizeof(uint32_t), addr, as);
 
     g_free(code);
@@ -XXX,XX +XXX,XX @@ static uint64_t load_aarch64_image(const char *filename, hwaddr mem_base,
     memcpy(&hdrvals, buffer + ARM64_TEXT_OFFSET_OFFSET, sizeof(hdrvals));
     if (hdrvals[1] != 0) {
         kernel_load_offset = le64_to_cpu(hdrvals[0]);
+
+        /*
+         * We write our startup "bootloader" at the very bottom of RAM,
+         * so that bit can't be used for the image. Luckily the Image
+         * format specification is that the image requests only an offset
+         * from a 2MB boundary, not an absolute load address. So if the
+         * image requests an offset that might mean it overlaps with the
+         * bootloader, we can just load it starting at 2MB+offset rather
+         * than 0MB + offset.
+         */
+        if (kernel_load_offset < BOOTLOADER_MAX_SIZE) {
+            kernel_load_offset += 2 * MiB;
+        }
     }
 }
 
--
2.19.1

Deleted patch
From: Richard Henderson <rth@twiddle.net>

This can reduce the number of opcodes required for certain
complex forms of load-multiple (e.g. ld4.16b).
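The saving comes from hoisting the element-size constant out of the
per-element loop; a hedged before/after sketch of the TCG emitted for
each element (calls as in the diff below):

    /* before: a fresh immediate add per element */
    tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);

    /* after: one constant temp, shared by every element */
    tcg_ebytes = tcg_const_i64(ebytes);
    ...
    tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
    ...
    tcg_temp_free_i64(tcg_ebytes);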
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 20181011205206.3552-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_store = !extract32(insn, 22, 1);
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
-    TCGv_i64 tcg_addr, tcg_rn;
+    TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
 
     int ebytes = 1 << size;
     int elements = (is_q ? 128 : 64) / (8 << size);
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
+    tcg_ebytes = tcg_const_i64(ebytes);
 
     for (r = 0; r < rpt; r++) {
         int e;
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
                 clear_vec_high(s, is_q, tt);
             }
         }
-        tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+        tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
         tt = (tt + 1) % 32;
     }
 }
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
             tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
         }
     }
+    tcg_temp_free_i64(tcg_ebytes);
     tcg_temp_free_i64(tcg_addr);
 }
 
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     bool replicate = false;
     int index = is_q << 3 | S << 2 | size;
     int ebytes, xs;
-    TCGv_i64 tcg_addr, tcg_rn;
+    TCGv_i64 tcg_addr, tcg_rn, tcg_ebytes;
 
     switch (scale) {
     case 3:
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     tcg_rn = cpu_reg_sp(s, rn);
     tcg_addr = tcg_temp_new_i64();
     tcg_gen_mov_i64(tcg_addr, tcg_rn);
+    tcg_ebytes = tcg_const_i64(ebytes);
 
     for (xs = 0; xs < selem; xs++) {
         if (replicate) {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
             do_vec_st(s, rt, index, tcg_addr, scale);
         }
-        tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
+        tcg_gen_add_i64(tcg_addr, tcg_addr, tcg_ebytes);
         rt = (rt + 1) % 32;
     }
 
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
             tcg_gen_add_i64(tcg_rn, tcg_rn, cpu_reg(s, rm));
         }
     }
+    tcg_temp_free_i64(tcg_ebytes);
     tcg_temp_free_i64(tcg_addr);
 }
 
--
2.19.1

From: Richard Henderson <richard.henderson@linaro.org>

For a sequence of loads or stores from a single register,
little-endian operations can be promoted to an 8-byte op.
This can reduce the number of operations by a factor of 8.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-20-richard.henderson@linaro.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     if (size == 3 && (interleave | spacing) != 1) {
         return 1;
     }
+    /* For our purposes, bytes are always little-endian. */
+    if (size == 0) {
+        endian = MO_LE;
+    }
+    /* Consecutive little-endian elements from a single register
+     * can be promoted to a larger little-endian operation.
+     */
+    if (interleave == 1 && endian == MO_LE) {
+        size = 3;
+    }
     tmp64 = tcg_temp_new_i64();
     addr = tcg_temp_new_i32();
     tmp2 = tcg_const_i32(1 << size);
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

This device is used by both ARM (BCM2836, for raspi2) and AArch64
(BCM2837, for raspi3) targets, and is not CPU-specific.
Move it to common object, so we build it once for all targets.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190427133028.12874-1-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/Makefile.objs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/dma/Makefile.objs b/hw/dma/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/Makefile.objs
+++ b/hw/dma/Makefile.objs
@@ -XXX,XX +XXX,XX @@ common-obj-$(CONFIG_XLNX_ZYNQMP_ARM) += xlnx-zdma.o
 
 obj-$(CONFIG_OMAP) += omap_dma.o soc_dma.o
 obj-$(CONFIG_PXA2XX) += pxa2xx_dma.o
-obj-$(CONFIG_RASPI) += bcm2835_dma.o
+common-obj-$(CONFIG_RASPI) += bcm2835_dma.o
--
2.20.1

1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
Move cmtst_op expanders from translate-a64.c.
3
Reviewed-by: Thomas Huth <thuth@redhat.com>
4
4
Reviewed-by: Cédric Le Goater <clg@kaod.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Markus Armbruster <armbru@redhat.com>
6
Message-id: 20181011205206.3552-17-richard.henderson@linaro.org
6
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20190412165416.7977-2-philmd@redhat.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
9
---
10
target/arm/translate.h | 2 +
10
hw/arm/aspeed.c | 13 +++++++++----
11
target/arm/translate-a64.c | 38 ------------------
11
1 file changed, 9 insertions(+), 4 deletions(-)
12
target/arm/translate.c | 81 +++++++++++++++++++++++++++-----------
13
3 files changed, 60 insertions(+), 61 deletions(-)
14
12
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
13
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
16
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
15
--- a/hw/arm/aspeed.c
18
+++ b/target/arm/translate.h
16
+++ b/hw/arm/aspeed.c
19
@@ -XXX,XX +XXX,XX @@ extern const GVecGen3 bit_op;
17
@@ -XXX,XX +XXX,XX @@
20
extern const GVecGen3 bif_op;
18
#include "hw/arm/aspeed_soc.h"
21
extern const GVecGen3 mla_op[4];
19
#include "hw/boards.h"
22
extern const GVecGen3 mls_op[4];
20
#include "hw/i2c/smbus_eeprom.h"
23
+extern const GVecGen3 cmtst_op[4];
21
+#include "hw/misc/pca9552.h"
24
extern const GVecGen2i ssra_op[4];
22
+#include "hw/misc/tmp105.h"
25
extern const GVecGen2i usra_op[4];
23
#include "qemu/log.h"
26
extern const GVecGen2i sri_op[4];
24
#include "sysemu/block-backend.h"
27
extern const GVecGen2i sli_op[4];
25
#include "hw/loader.h"
28
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b);
26
@@ -XXX,XX +XXX,XX @@ static void ast2500_evb_i2c_init(AspeedBoardState *bmc)
29
27
eeprom_buf);
30
/*
28
31
* Forward to the isar_feature_* tests given a DisasContext pointer.
29
/* The AST2500 EVB expects a LM75 but a TMP105 is compatible */
32
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
30
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7), "tmp105", 0x4d);
33
index XXXXXXX..XXXXXXX 100644
31
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7),
34
--- a/target/arm/translate-a64.c
32
+ TYPE_TMP105, 0x4d);
35
+++ b/target/arm/translate-a64.c
33
36
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
34
/* The AST2500 EVB does not have an RTC. Let's pretend that one is
37
}
35
* plugged on the I2C bus header */
36
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
37
AspeedSoCState *soc = &bmc->soc;
38
uint8_t *eeprom_buf = g_malloc0(8 * 1024);
39
40
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), "pca9552", 0x60);
41
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 3), TYPE_PCA9552,
42
+ 0x60);
43
44
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 4), "tmp423", 0x4c);
45
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 5), "tmp423", 0x4c);
46
47
/* The Witherspoon expects a TMP275 but a TMP105 is compatible */
48
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), "tmp105", 0x4a);
49
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 9), TYPE_TMP105,
50
+ 0x4a);
51
52
/* The witherspoon board expects Epson RX8900 I2C RTC but a ds1338 is
53
* good enough */
54
@@ -XXX,XX +XXX,XX @@ static void witherspoon_bmc_i2c_init(AspeedBoardState *bmc)
55
56
smbus_eeprom_init_one(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), 0x51,
57
eeprom_buf);
58
- i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), "pca9552",
59
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 11), TYPE_PCA9552,
60
0x60);
38
}
61
}
39
62
40
-/* CMTST : test is "if (X & Y != 0)". */
41
-static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
42
-{
43
- tcg_gen_and_i32(d, a, b);
44
- tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
45
- tcg_gen_neg_i32(d, d);
46
-}
47
-
48
-static void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
49
-{
50
- tcg_gen_and_i64(d, a, b);
51
- tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
52
- tcg_gen_neg_i64(d, d);
53
-}
54
-
55
-static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
56
-{
57
- tcg_gen_and_vec(vece, d, a, b);
58
- tcg_gen_dupi_vec(vece, a, 0);
59
- tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
60
-}
61
-
62
static void handle_3same_64(DisasContext *s, int opcode, bool u,
63
TCGv_i64 tcg_rd, TCGv_i64 tcg_rn, TCGv_i64 tcg_rm)
64
{
65
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
66
/* Integer op subgroup of C3.6.16. */
67
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
68
{
69
- static const GVecGen3 cmtst_op[4] = {
70
- { .fni4 = gen_helper_neon_tst_u8,
71
- .fniv = gen_cmtst_vec,
72
- .vece = MO_8 },
73
- { .fni4 = gen_helper_neon_tst_u16,
74
- .fniv = gen_cmtst_vec,
75
- .vece = MO_16 },
76
- { .fni4 = gen_cmtst_i32,
77
- .fniv = gen_cmtst_vec,
78
- .vece = MO_32 },
79
- { .fni8 = gen_cmtst_i64,
80
- .fniv = gen_cmtst_vec,
81
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
82
- .vece = MO_64 },
83
- };
84
-
85
int is_q = extract32(insn, 30, 1);
86
int u = extract32(insn, 29, 1);
87
int size = extract32(insn, 22, 2);
88
diff --git a/target/arm/translate.c b/target/arm/translate.c
89
index XXXXXXX..XXXXXXX 100644
90
--- a/target/arm/translate.c
91
+++ b/target/arm/translate.c
92
@@ -XXX,XX +XXX,XX @@ const GVecGen3 mls_op[4] = {
93
.vece = MO_64 },
94
};
95
96
+/* CMTST : test is "if (X & Y != 0)". */
97
+static void gen_cmtst_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
98
+{
99
+ tcg_gen_and_i32(d, a, b);
100
+ tcg_gen_setcondi_i32(TCG_COND_NE, d, d, 0);
101
+ tcg_gen_neg_i32(d, d);
102
+}
103
+
104
+void gen_cmtst_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
105
+{
106
+ tcg_gen_and_i64(d, a, b);
107
+ tcg_gen_setcondi_i64(TCG_COND_NE, d, d, 0);
108
+ tcg_gen_neg_i64(d, d);
109
+}
110
+
111
+static void gen_cmtst_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
112
+{
113
+ tcg_gen_and_vec(vece, d, a, b);
114
+ tcg_gen_dupi_vec(vece, a, 0);
115
+ tcg_gen_cmp_vec(TCG_COND_NE, vece, d, d, a);
116
+}
117
+
118
+const GVecGen3 cmtst_op[4] = {
119
+ { .fni4 = gen_helper_neon_tst_u8,
120
+ .fniv = gen_cmtst_vec,
121
+ .vece = MO_8 },
122
+ { .fni4 = gen_helper_neon_tst_u16,
123
+ .fniv = gen_cmtst_vec,
124
+ .vece = MO_16 },
125
+ { .fni4 = gen_cmtst_i32,
126
+ .fniv = gen_cmtst_vec,
127
+ .vece = MO_32 },
128
+ { .fni8 = gen_cmtst_i64,
129
+ .fniv = gen_cmtst_vec,
130
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
131
+ .vece = MO_64 },
132
+};
133
+
134
/* Translate a NEON data processing instruction. Return nonzero if the
135
instruction is invalid.
136
We process data in a mixture of 32-bit and 64-bit chunks.
137
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
138
tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
139
u ? &mls_op[size] : &mla_op[size]);
140
return 0;
141
+
142
+ case NEON_3R_VTST_VCEQ:
143
+ if (u) { /* VCEQ */
144
+ tcg_gen_gvec_cmp(TCG_COND_EQ, size, rd_ofs, rn_ofs, rm_ofs,
145
+ vec_size, vec_size);
146
+ } else { /* VTST */
147
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
148
+ vec_size, vec_size, &cmtst_op[size]);
149
+ }
150
+ return 0;
151
+
152
+ case NEON_3R_VCGT:
153
+ tcg_gen_gvec_cmp(u ? TCG_COND_GTU : TCG_COND_GT, size,
154
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
155
+ return 0;
156
+
157
+ case NEON_3R_VCGE:
158
+ tcg_gen_gvec_cmp(u ? TCG_COND_GEU : TCG_COND_GE, size,
159
+ rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size);
160
+ return 0;
161
}
162
163
if (size == 3) {
164
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
165
case NEON_3R_VQSUB:
166
GEN_NEON_INTEGER_OP_ENV(qsub);
167
break;
168
- case NEON_3R_VCGT:
169
- GEN_NEON_INTEGER_OP(cgt);
170
- break;
171
- case NEON_3R_VCGE:
172
- GEN_NEON_INTEGER_OP(cge);
173
- break;
174
case NEON_3R_VSHL:
175
GEN_NEON_INTEGER_OP(shl);
176
break;
177
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
178
tmp2 = neon_load_reg(rd, pass);
179
gen_neon_add(size, tmp, tmp2);
180
break;
181
- case NEON_3R_VTST_VCEQ:
182
- if (!u) { /* VTST */
183
- switch (size) {
184
- case 0: gen_helper_neon_tst_u8(tmp, tmp, tmp2); break;
185
- case 1: gen_helper_neon_tst_u16(tmp, tmp, tmp2); break;
186
- case 2: gen_helper_neon_tst_u32(tmp, tmp, tmp2); break;
187
- default: abort();
188
- }
189
- } else { /* VCEQ */
190
- switch (size) {
191
- case 0: gen_helper_neon_ceq_u8(tmp, tmp, tmp2); break;
192
- case 1: gen_helper_neon_ceq_u16(tmp, tmp, tmp2); break;
193
- case 2: gen_helper_neon_ceq_u32(tmp, tmp, tmp2); break;
194
- default: abort();
195
- }
196
- }
197
- break;
198
case NEON_3R_VMUL:
199
/* VMUL.P8; other cases already eliminated. */
200
gen_helper_neon_mul_p8(tmp, tmp, tmp2);
201
--
63
--
202
2.19.1
64
2.20.1
203
65
204
66
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Philippe Mathieu-Daudé <philmd@redhat.com>
2
2
3
Move mla_op and mls_op expanders from translate-a64.c.
3
Suggested-by: Markus Armbruster <armbru@redhat.com>
4
4
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20190412165416.7977-3-philmd@redhat.com
6
Message-id: 20181011205206.3552-16-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
8
---
10
target/arm/translate.h | 2 +
9
hw/arm/nseries.c | 3 ++-
11
target/arm/translate-a64.c | 106 -----------------------------
10
1 file changed, 2 insertions(+), 1 deletion(-)
12
target/arm/translate.c | 134 ++++++++++++++++++++++++++++++++-----
13
3 files changed, 120 insertions(+), 122 deletions(-)
14
11
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
12
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate.h
14
--- a/hw/arm/nseries.c
18
+++ b/target/arm/translate.h
15
+++ b/hw/arm/nseries.c
19
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
16
@@ -XXX,XX +XXX,XX @@
20
extern const GVecGen3 bsl_op;
17
#include "hw/boards.h"
21
extern const GVecGen3 bit_op;
18
#include "hw/i2c/i2c.h"
22
extern const GVecGen3 bif_op;
19
#include "hw/devices.h"
23
+extern const GVecGen3 mla_op[4];
20
+#include "hw/misc/tmp105.h"
24
+extern const GVecGen3 mls_op[4];
21
#include "hw/block/flash.h"
25
extern const GVecGen2i ssra_op[4];
22
#include "hw/hw.h"
26
extern const GVecGen2i usra_op[4];
23
#include "hw/bt.h"
27
extern const GVecGen2i sri_op[4];
24
@@ -XXX,XX +XXX,XX @@ static void n8x0_i2c_setup(struct n800_s *s)
28
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
25
qemu_register_powerdown_notifier(&n8x0_system_powerdown_notifier);
29
index XXXXXXX..XXXXXXX 100644
26
30
--- a/target/arm/translate-a64.c
27
/* Attach a TMP105 PM chip (A0 wired to ground) */
31
+++ b/target/arm/translate-a64.c
28
- dev = i2c_create_slave(i2c, "tmp105", N8X0_TMP105_ADDR);
32
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
29
+ dev = i2c_create_slave(i2c, TYPE_TMP105, N8X0_TMP105_ADDR);
33
}
30
qdev_connect_gpio_out(dev, 0, tmp_irq);
34
}
31
}
35
32
36
-static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
37
-{
38
- gen_helper_neon_mul_u8(a, a, b);
39
- gen_helper_neon_add_u8(d, d, a);
40
-}
41
-
42
-static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
43
-{
44
- gen_helper_neon_mul_u16(a, a, b);
45
- gen_helper_neon_add_u16(d, d, a);
46
-}
47
-
48
-static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
49
-{
50
- tcg_gen_mul_i32(a, a, b);
51
- tcg_gen_add_i32(d, d, a);
52
-}
53
-
54
-static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
55
-{
56
- tcg_gen_mul_i64(a, a, b);
57
- tcg_gen_add_i64(d, d, a);
58
-}
59
-
60
-static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
61
-{
62
- tcg_gen_mul_vec(vece, a, a, b);
63
- tcg_gen_add_vec(vece, d, d, a);
64
-}
65
-
66
-static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
67
-{
68
- gen_helper_neon_mul_u8(a, a, b);
69
- gen_helper_neon_sub_u8(d, d, a);
70
-}
71
-
72
-static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
73
-{
74
- gen_helper_neon_mul_u16(a, a, b);
75
- gen_helper_neon_sub_u16(d, d, a);
76
-}
77
-
78
-static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
79
-{
80
- tcg_gen_mul_i32(a, a, b);
81
- tcg_gen_sub_i32(d, d, a);
82
-}
83
-
84
-static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
85
-{
86
- tcg_gen_mul_i64(a, a, b);
87
- tcg_gen_sub_i64(d, d, a);
88
-}
89
-
90
-static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
91
-{
92
- tcg_gen_mul_vec(vece, a, a, b);
93
- tcg_gen_sub_vec(vece, d, d, a);
94
-}
95
-
96
/* Integer op subgroup of C3.6.16. */
97
static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
98
{
99
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
100
.prefer_i64 = TCG_TARGET_REG_BITS == 64,
101
.vece = MO_64 },
102
};
103
- static const GVecGen3 mla_op[4] = {
104
- { .fni4 = gen_mla8_i32,
105
- .fniv = gen_mla_vec,
106
- .opc = INDEX_op_mul_vec,
107
- .load_dest = true,
108
- .vece = MO_8 },
109
- { .fni4 = gen_mla16_i32,
110
- .fniv = gen_mla_vec,
111
- .opc = INDEX_op_mul_vec,
112
- .load_dest = true,
113
- .vece = MO_16 },
114
- { .fni4 = gen_mla32_i32,
115
- .fniv = gen_mla_vec,
116
- .opc = INDEX_op_mul_vec,
117
- .load_dest = true,
118
- .vece = MO_32 },
119
- { .fni8 = gen_mla64_i64,
120
- .fniv = gen_mla_vec,
121
- .opc = INDEX_op_mul_vec,
122
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
123
- .load_dest = true,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen3 mls_op[4] = {
127
- { .fni4 = gen_mls8_i32,
128
- .fniv = gen_mls_vec,
129
- .opc = INDEX_op_mul_vec,
130
- .load_dest = true,
131
- .vece = MO_8 },
132
- { .fni4 = gen_mls16_i32,
133
- .fniv = gen_mls_vec,
134
- .opc = INDEX_op_mul_vec,
135
- .load_dest = true,
136
- .vece = MO_16 },
137
- { .fni4 = gen_mls32_i32,
138
- .fniv = gen_mls_vec,
139
- .opc = INDEX_op_mul_vec,
140
- .load_dest = true,
141
- .vece = MO_32 },
142
- { .fni8 = gen_mls64_i64,
143
- .fniv = gen_mls_vec,
144
- .opc = INDEX_op_mul_vec,
145
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
146
- .load_dest = true,
147
- .vece = MO_64 },
148
- };
149
150
int is_q = extract32(insn, 30, 1);
151
int u = extract32(insn, 29, 1);
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ static void gen_neon_narrow_op(int op, int u, int size,
157
#define NEON_3R_VABA 15
158
#define NEON_3R_VADD_VSUB 16
159
#define NEON_3R_VTST_VCEQ 17
160
-#define NEON_3R_VML 18 /* VMLA, VMLAL, VMLS, VMLSL */
161
+#define NEON_3R_VML 18 /* VMLA, VMLS */
162
#define NEON_3R_VMUL 19
163
#define NEON_3R_VPMAX 20
164
#define NEON_3R_VPMIN 21
165
@@ -XXX,XX +XXX,XX @@ const GVecGen2i sli_op[4] = {
166
.vece = MO_64 },
167
};
168
169
+static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
170
+{
171
+ gen_helper_neon_mul_u8(a, a, b);
172
+ gen_helper_neon_add_u8(d, d, a);
173
+}
174
+
+static void gen_mls8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u8(a, a, b);
+ gen_helper_neon_sub_u8(d, d, a);
+}
+
+static void gen_mla16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_add_u16(d, d, a);
+}
+
+static void gen_mls16_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ gen_helper_neon_mul_u16(a, a, b);
+ gen_helper_neon_sub_u16(d, d, a);
+}
+
+static void gen_mla32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_add_i32(d, d, a);
+}
+
+static void gen_mls32_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
+{
+ tcg_gen_mul_i32(a, a, b);
+ tcg_gen_sub_i32(d, d, a);
+}
+
+static void gen_mla64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_add_i64(d, d, a);
+}
+
+static void gen_mls64_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
+{
+ tcg_gen_mul_i64(a, a, b);
+ tcg_gen_sub_i64(d, d, a);
+}
+
+static void gen_mla_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_add_vec(vece, d, d, a);
+}
+
+static void gen_mls_vec(unsigned vece, TCGv_vec d, TCGv_vec a, TCGv_vec b)
+{
+ tcg_gen_mul_vec(vece, a, a, b);
+ tcg_gen_sub_vec(vece, d, d, a);
+}
+
+/* Note that while NEON does not support VMLA and VMLS as 64-bit ops,
+ * these tables are shared with AArch64 which does support them.
+ */
+const GVecGen3 mla_op[4] = {
+ { .fni4 = gen_mla8_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni4 = gen_mla16_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_mla32_i32,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_mla64_i64,
+ .fniv = gen_mla_vec,
+ .opc = INDEX_op_mul_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .vece = MO_64 },
+};
+
+const GVecGen3 mls_op[4] = {
+ { .fni4 = gen_mls8_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_8 },
+ { .fni4 = gen_mls16_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_16 },
+ { .fni4 = gen_mls32_i32,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .load_dest = true,
+ .vece = MO_32 },
+ { .fni8 = gen_mls64_i64,
+ .fniv = gen_mls_vec,
+ .opc = INDEX_op_mul_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true,
+ .vece = MO_64 },
+};
+
 /* Translate a NEON data processing instruction. Return nonzero if the
 instruction is invalid.
 We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 return 0;
 }
 break;
+
+ case NEON_3R_VML: /* VMLA, VMLS */
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs, vec_size, vec_size,
+ u ? &mls_op[size] : &mla_op[size]);
+ return 0;
 }
+
 if (size == 3) {
 /* 64-bit element instructions. */
 for (pass = 0; pass < (q ? 2 : 1); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 }
 }
 break;
- case NEON_3R_VML: /* VMLA, VMLAL, VMLS,VMLSL */
- switch (size) {
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- tcg_temp_free_i32(tmp2);
- tmp2 = neon_load_reg(rd, pass);
- if (u) { /* VMLS */
- gen_neon_rsb(size, tmp, tmp2);
- } else { /* VMLA */
- gen_neon_add(size, tmp, tmp2);
- }
- break;
 case NEON_3R_VMUL:
 /* VMUL.P8; other cases already eliminated. */
 gen_helper_neon_mul_p8(tmp, tmp, tmp2);
--
2.19.1

--
2.20.1

diff view generated by jsdifflib
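A side note on the VMLA/VMLS expanders in the patch above: each gen_mla*/gen_mls* helper deliberately clobbers the multiplicand and accumulates into the destination. A minimal plain-C model of the per-lane semantics (illustrative only, not QEMU code; the function name is made up for this sketch):

    #include <stdint.h>

    /* Lane-wise multiply-accumulate over a 64-bit D register viewed as
     * eight u8 lanes: d[i] += a[i] * b[i], wrapping mod 256 like NEON.
     */
    static void vmla_u8(uint8_t d[8], const uint8_t a[8], const uint8_t b[8])
    {
        for (int i = 0; i < 8; i++) {
            d[i] = (uint8_t)(d[i] + a[i] * b[i]);
        }
    }

For VMLS, replace the addition with a subtraction, exactly mirroring the gen_mls* expanders.
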
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-18-richard.henderson@linaro.org
[PMM: added parens in ?: expression]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 81 ++++++++++++++----------------------------
 1 file changed, 26 insertions(+), 55 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void gen_vfp_msr(TCGv_i32 tmp)
 tcg_temp_free_i32(tmp);
 }

-static void gen_neon_dup_u8(TCGv_i32 var, int shift)
-{
- TCGv_i32 tmp = tcg_temp_new_i32();
- if (shift)
- tcg_gen_shri_i32(var, var, shift);
- tcg_gen_ext8u_i32(var, var);
- tcg_gen_shli_i32(tmp, var, 8);
- tcg_gen_or_i32(var, var, tmp);
- tcg_gen_shli_i32(tmp, var, 16);
- tcg_gen_or_i32(var, var, tmp);
- tcg_temp_free_i32(tmp);
-}
-
 static void gen_neon_dup_low16(TCGv_i32 var)
 {
 TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static void gen_neon_dup_high16(TCGv_i32 var)
 tcg_temp_free_i32(tmp);
 }

-static TCGv_i32 gen_load_and_replicate(DisasContext *s, TCGv_i32 addr, int size)
-{
- /* Load a single Neon element and replicate into a 32 bit TCG reg */
- TCGv_i32 tmp = tcg_temp_new_i32();
- switch (size) {
- case 0:
- gen_aa32_ld8u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_u8(tmp, 0);
- break;
- case 1:
- gen_aa32_ld16u(s, tmp, addr, get_mem_index(s));
- gen_neon_dup_low16(tmp);
- break;
- case 2:
- gen_aa32_ld32u(s, tmp, addr, get_mem_index(s));
- break;
- default: /* Avoid compiler warnings. */
- abort();
- }
- return tmp;
-}
-
 static int handle_vsel(uint32_t insn, uint32_t rd, uint32_t rn, uint32_t rm,
 uint32_t dp)
 {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 int load;
 int shift;
 int n;
+ int vec_size;
 TCGv_i32 addr;
 TCGv_i32 tmp;
 TCGv_i32 tmp2;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 }
 addr = tcg_temp_new_i32();
 load_reg_var(s, addr, rn);
- if (nregs == 1) {
- /* VLD1 to all lanes: bit 5 indicates how many Dregs to write */
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- if (insn & (1 << 5)) {
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd + 1, 1));
- }
- tcg_temp_free_i32(tmp);
- } else {
- /* VLD2/3/4 to all lanes: bit 5 indicates register stride */
- stride = (insn & (1 << 5)) ? 2 : 1;
- for (reg = 0; reg < nregs; reg++) {
- tmp = gen_load_and_replicate(s, addr, size);
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 0));
- tcg_gen_st_i32(tmp, cpu_env, neon_reg_offset(rd, 1));
- tcg_temp_free_i32(tmp);
- tcg_gen_addi_i32(addr, addr, 1 << size);
- rd += stride;
+
+ /* VLD1 to all lanes: bit 5 indicates how many Dregs to write.
+ * VLD2/3/4 to all lanes: bit 5 indicates register stride.
+ */
+ stride = (insn & (1 << 5)) ? 2 : 1;
+ vec_size = nregs == 1 ? stride * 8 : 8;
+
+ tmp = tcg_temp_new_i32();
+ for (reg = 0; reg < nregs; reg++) {
+ gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s),
+ s->be_data | size);
+ if ((rd & 1) && vec_size == 16) {
+ /* We cannot write 16 bytes at once because the
+ * destination is unaligned.
+ */
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ 8, 8, tmp);
+ tcg_gen_gvec_mov(0, neon_reg_offset(rd + 1, 0),
+ neon_reg_offset(rd, 0), 8, 8);
+ } else {
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rd, 0),
+ vec_size, vec_size, tmp);
+ }
 }
+ tcg_gen_addi_i32(addr, addr, 1 << size);
+ rd += stride;
 }
+ tcg_temp_free_i32(tmp);
 tcg_temp_free_i32(addr);
 stride = (1 << size) * nregs;
 } else {
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

No code used the tc6393xb_gpio_in_get() and tc6393xb_gpio_out_set()
functions since their introduction in commit 88d2c950b002. Time to
remove them.

Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-4-philmd@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 3 ---
 hw/display/tc6393xb.c | 16 ----------------
 2 files changed, 19 deletions(-)

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void retu_key_event(void *retu, int state);
 typedef struct TC6393xbState TC6393xbState;
 TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
 uint32_t base, qemu_irq irq);
-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
- qemu_irq handler);
-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s);
 qemu_irq tc6393xb_l3v_get(TC6393xbState *s);

 #endif
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/tc6393xb.c
+++ b/hw/display/tc6393xb.c
@@ -XXX,XX +XXX,XX @@ struct TC6393xbState {
 blanked : 1;
 };

-qemu_irq *tc6393xb_gpio_in_get(TC6393xbState *s)
-{
- return s->gpio_in;
-}
-
 static void tc6393xb_gpio_set(void *opaque, int line, int level)
 {
 // TC6393xbState *s = opaque;
@@ -XXX,XX +XXX,XX @@ static void tc6393xb_gpio_set(void *opaque, int line, int level)
 // FIXME: how does the chip reflect the GPIO input level change?
 }

-void tc6393xb_gpio_out_set(TC6393xbState *s, int line,
- qemu_irq handler)
-{
- if (line >= TC6393XB_GPIOS) {
- fprintf(stderr, "TC6393xb: no GPIO pin %d\n", line);
- return;
- }
-
- s->handler[line] = handler;
-}
-
 static void tc6393xb_gpio_handler_update(TC6393xbState *s)
 {
 uint32_t level, diff;
--
2.20.1

diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 vec_size, vec_size);
 }
 return 0;
+
+ case NEON_3R_VMUL: /* VMUL */
+ if (u) {
+ /* Polynomial case allows only P8 and is handled below. */
+ if (size != 0) {
+ return 1;
+ }
+ } else {
+ tcg_gen_gvec_mul(size, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ return 0;
+ }
+ break;
 }
 if (size == 3) {
 /* 64-bit element instructions. */
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 return 1;
 }
 break;
- case NEON_3R_VMUL:
- if (u && (size != 0)) {
- /* UNDEF on invalid size for polynomial subcase */
- return 1;
- }
- break;
 case NEON_3R_VFM_VQRDMLSH:
 if (!arm_dc_feature(s, ARM_FEATURE_VFP4)) {
 return 1;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 }
 break;
 case NEON_3R_VMUL:
- if (u) { /* polynomial */
- gen_helper_neon_mul_p8(tmp, tmp, tmp2);
- } else { /* Integer */
- switch (size) {
- case 0: gen_helper_neon_mul_u8(tmp, tmp, tmp2); break;
- case 1: gen_helper_neon_mul_u16(tmp, tmp, tmp2); break;
- case 2: tcg_gen_mul_i32(tmp, tmp, tmp2); break;
- default: abort();
- }
- }
+ /* VMUL.P8; other cases already eliminated. */
+ gen_helper_neon_mul_p8(tmp, tmp, tmp2);
 break;
 case NEON_3R_VPMAX:
 GEN_NEON_INTEGER_OP(pmax);
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-5-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 6 ------
 include/hw/display/tc6393xb.h | 24 ++++++++++++++++++++++++
 hw/arm/tosa.c | 2 +-
 hw/display/tc6393xb.c | 2 +-
 MAINTAINERS | 1 +
 5 files changed, 27 insertions(+), 8 deletions(-)
 create mode 100644 include/hw/display/tc6393xb.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void tahvo_init(qemu_irq irq, int betty);

 void retu_key_event(void *retu, int state);

-/* tc6393xb.c */
-typedef struct TC6393xbState TC6393xbState;
-TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
- uint32_t base, qemu_irq irq);
-qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
-
 #endif
diff --git a/include/hw/display/tc6393xb.h b/include/hw/display/tc6393xb.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/display/tc6393xb.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Toshiba TC6393XB I/O Controller.
+ * Found in Sharp Zaurus SL-6000 (tosa) or some
+ * Toshiba e-Series PDAs.
+ *
+ * Copyright (c) 2007 Hervé Poussineau
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_DISPLAY_TC6393XB_H
+#define HW_DISPLAY_TC6393XB_H
+
+#include "exec/memory.h"
+#include "hw/irq.h"
+
+typedef struct TC6393xbState TC6393xbState;
+
+TC6393xbState *tc6393xb_init(struct MemoryRegion *sysmem,
+ uint32_t base, qemu_irq irq);
+qemu_irq tc6393xb_l3v_get(TC6393xbState *s);
+
+#endif
diff --git a/hw/arm/tosa.c b/hw/arm/tosa.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/tosa.c
+++ b/hw/arm/tosa.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/hw.h"
 #include "hw/arm/pxa.h"
 #include "hw/arm/arm.h"
-#include "hw/devices.h"
 #include "hw/arm/sharpsl.h"
 #include "hw/pcmcia.h"
 #include "hw/boards.h"
+#include "hw/display/tc6393xb.h"
 #include "hw/i2c/i2c.h"
 #include "hw/ssi/ssi.h"
 #include "hw/sysbus.h"
diff --git a/hw/display/tc6393xb.c b/hw/display/tc6393xb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/tc6393xb.c
+++ b/hw/display/tc6393xb.c
@@ -XXX,XX +XXX,XX @@
 #include "qapi/error.h"
 #include "qemu/host-utils.h"
 #include "hw/hw.h"
-#include "hw/devices.h"
+#include "hw/display/tc6393xb.h"
 #include "hw/block/flash.h"
 #include "ui/console.h"
 #include "ui/pixel_ops.h"
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/misc/mst_fpga.c
 F: hw/misc/max111x.c
 F: include/hw/arm/pxa.h
 F: include/hw/arm/sharpsl.h
+F: include/hw/display/tc6393xb.h

 SABRELITE / i.MX6
 M: Peter Maydell <peter.maydell@linaro.org>
--
2.20.1

diff view generated by jsdifflib
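For readers unfamiliar with VMUL.P8, kept as the only helper-based case in the patch above: it is a polynomial (carryless) multiply over GF(2), not an integer multiply, which is why it cannot go through tcg_gen_gvec_mul(). A stand-alone C sketch of one 8-bit lane (illustrative only, not QEMU code):

    #include <stdint.h>

    /* Carryless multiply: XOR the shifted partial products instead of
     * adding them; the result is truncated to 8 bits per lane.
     */
    static uint8_t pmul8(uint8_t a, uint8_t b)
    {
        uint16_t r = 0;
        for (int i = 0; i < 8; i++) {
            if (b & (1u << i)) {
                r ^= (uint16_t)a << i;
            }
        }
        return (uint8_t)r;
    }
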
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 tcg_temp_free_ptr(ptr1);
 tcg_temp_free_ptr(ptr2);
 break;
+
+ case NEON_2RM_VMVN:
+ tcg_gen_gvec_not(0, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+ case NEON_2RM_VNEG:
+ tcg_gen_gvec_neg(size, rd_ofs, rm_ofs, vec_size, vec_size);
+ break;
+
 default:
 elementwise:
 for (pass = 0; pass < (q ? 4 : 2); pass++) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 case NEON_2RM_VCNT:
 gen_helper_neon_cnt_u8(tmp, tmp);
 break;
- case NEON_2RM_VMVN:
- tcg_gen_not_i32(tmp, tmp);
- break;
 case NEON_2RM_VQABS:
 switch (size) {
 case 0:
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 default: abort();
 }
 break;
- case NEON_2RM_VNEG:
- tmp2 = tcg_const_i32(0);
- gen_neon_rsb(size, tmp, tmp2);
- tcg_temp_free_i32(tmp2);
- break;
 case NEON_2RM_VCGT0_F:
 {
 TCGv_ptr fpstatus = get_fpstatus_ptr(1);
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Add entries for the Blizzard device in MAINTAINERS.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-6-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 7 -------
 include/hw/display/blizzard.h | 22 ++++++++++++++++++++++
 hw/arm/nseries.c | 1 +
 hw/display/blizzard.c | 2 +-
 MAINTAINERS | 2 ++
 5 files changed, 26 insertions(+), 8 deletions(-)
 create mode 100644 include/hw/display/blizzard.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
 /* stellaris_input.c */
 void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);

-/* blizzard.c */
-void *s1d13745_init(qemu_irq gpio_int);
-void s1d13745_write(void *opaque, int dc, uint16_t value);
-void s1d13745_write_block(void *opaque, int dc,
- void *buf, size_t len, int pitch);
-uint16_t s1d13745_read(void *opaque, int dc);
-
 /* cbus.c */
 typedef struct {
 qemu_irq clk;
diff --git a/include/hw/display/blizzard.h b/include/hw/display/blizzard.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/display/blizzard.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Epson S1D13744/S1D13745 (Blizzard/Hailstorm/Tornado) LCD/TV controller.
+ *
+ * Copyright (C) 2008 Nokia Corporation
+ * Written by Andrzej Zaborowski
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_DISPLAY_BLIZZARD_H
+#define HW_DISPLAY_BLIZZARD_H
+
+#include "hw/irq.h"
+
+void *s1d13745_init(qemu_irq gpio_int);
+void s1d13745_write(void *opaque, int dc, uint16_t value);
+void s1d13745_write_block(void *opaque, int dc,
+ void *buf, size_t len, int pitch);
+uint16_t s1d13745_read(void *opaque, int dc);
+
+#endif
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/boards.h"
 #include "hw/i2c/i2c.h"
 #include "hw/devices.h"
+#include "hw/display/blizzard.h"
 #include "hw/misc/tmp105.h"
 #include "hw/block/flash.h"
 #include "hw/hw.h"
diff --git a/hw/display/blizzard.c b/hw/display/blizzard.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/display/blizzard.c
+++ b/hw/display/blizzard.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "qemu-common.h"
 #include "ui/console.h"
-#include "hw/devices.h"
+#include "hw/display/blizzard.h"
 #include "ui/pixel_ops.h"

 typedef void (*blizzard_fn_t)(uint8_t *, const uint8_t *, unsigned int);
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Odd Fixes
 F: hw/arm/nseries.c
+F: hw/display/blizzard.c
 F: hw/input/lm832x.c
 F: hw/input/tsc2005.c
 F: hw/misc/cbus.c
 F: hw/timer/twl92230.c
+F: include/hw/display/blizzard.h

 Palm
 M: Andrzej Zaborowski <balrogg@gmail.com>
--
2.20.1

diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Move expanders for VBSL, VBIT, and VBIF from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h | 6 ++
 target/arm/translate-a64.c | 61 --------------
 target/arm/translate.c | 162 +++++++++++++++++++++++++++----------
 3 files changed, 124 insertions(+), 105 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
 return ret;
 }

+/* Vector operations shared between ARM and AArch64. */
+extern const GVecGen3 bsl_op;
+extern const GVecGen3 bit_op;
+extern const GVecGen3 bif_op;
+
 /*
 * Forward to the isar_feature_* tests given a DisasContext pointer.
 */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_three_reg_diff(DisasContext *s, uint32_t insn)
 }
 }

-static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
- tcg_gen_xor_i64(rn, rn, rm);
- tcg_gen_and_i64(rn, rn, rd);
- tcg_gen_xor_i64(rd, rm, rn);
-}
-
-static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
- tcg_gen_xor_i64(rn, rn, rd);
- tcg_gen_and_i64(rn, rn, rm);
- tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
-{
- tcg_gen_xor_i64(rn, rn, rd);
- tcg_gen_andc_i64(rn, rn, rm);
- tcg_gen_xor_i64(rd, rd, rn);
-}
-
-static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
- tcg_gen_xor_vec(vece, rn, rn, rm);
- tcg_gen_and_vec(vece, rn, rn, rd);
- tcg_gen_xor_vec(vece, rd, rm, rn);
-}
-
-static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
- tcg_gen_xor_vec(vece, rn, rn, rd);
- tcg_gen_and_vec(vece, rn, rn, rm);
- tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
-static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
-{
- tcg_gen_xor_vec(vece, rn, rn, rd);
- tcg_gen_andc_vec(vece, rn, rn, rm);
- tcg_gen_xor_vec(vece, rd, rd, rn);
-}
-
 /* Logic op (opcode == 3) subgroup of C3.6.16. */
 static void disas_simd_3same_logic(DisasContext *s, uint32_t insn)
 {
- static const GVecGen3 bsl_op = {
- .fni8 = gen_bsl_i64,
- .fniv = gen_bsl_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true
- };
- static const GVecGen3 bit_op = {
- .fni8 = gen_bit_i64,
- .fniv = gen_bit_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true
- };
- static const GVecGen3 bif_op = {
- .fni8 = gen_bif_i64,
- .fniv = gen_bif_vec,
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
- .load_dest = true
- };
-
 int rd = extract32(insn, 0, 5);
 int rn = extract32(insn, 5, 5);
 int rm = extract32(insn, 16, 5);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
 return 0;
 }

-/* Bitwise select. dest = c ? t : f. Clobbers T and F. */
-static void gen_neon_bsl(TCGv_i32 dest, TCGv_i32 t, TCGv_i32 f, TCGv_i32 c)
-{
- tcg_gen_and_i32(t, t, c);
- tcg_gen_andc_i32(f, f, c);
- tcg_gen_or_i32(dest, t, f);
-}
-
 static inline void gen_neon_narrow(int size, TCGv_i32 dest, TCGv_i64 src)
 {
 switch (size) {
@@ -XXX,XX +XXX,XX @@ static int do_v81_helper(DisasContext *s, gen_helper_gvec_3_ptr *fn,
 return 1;
 }

+/*
+ * Expanders for VBitOps_VBIF, VBIT, VBSL.
+ */
+static void gen_bsl_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+ tcg_gen_xor_i64(rn, rn, rm);
+ tcg_gen_and_i64(rn, rn, rd);
+ tcg_gen_xor_i64(rd, rm, rn);
+}
+
+static void gen_bit_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+ tcg_gen_xor_i64(rn, rn, rd);
+ tcg_gen_and_i64(rn, rn, rm);
+ tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bif_i64(TCGv_i64 rd, TCGv_i64 rn, TCGv_i64 rm)
+{
+ tcg_gen_xor_i64(rn, rn, rd);
+ tcg_gen_andc_i64(rn, rn, rm);
+ tcg_gen_xor_i64(rd, rd, rn);
+}
+
+static void gen_bsl_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+ tcg_gen_xor_vec(vece, rn, rn, rm);
+ tcg_gen_and_vec(vece, rn, rn, rd);
+ tcg_gen_xor_vec(vece, rd, rm, rn);
+}
+
+static void gen_bit_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+ tcg_gen_xor_vec(vece, rn, rn, rd);
+ tcg_gen_and_vec(vece, rn, rn, rm);
+ tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+static void gen_bif_vec(unsigned vece, TCGv_vec rd, TCGv_vec rn, TCGv_vec rm)
+{
+ tcg_gen_xor_vec(vece, rn, rn, rd);
+ tcg_gen_andc_vec(vece, rn, rn, rm);
+ tcg_gen_xor_vec(vece, rd, rd, rn);
+}
+
+const GVecGen3 bsl_op = {
+ .fni8 = gen_bsl_i64,
+ .fniv = gen_bsl_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true
+};
+
+const GVecGen3 bit_op = {
+ .fni8 = gen_bit_i64,
+ .fniv = gen_bit_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true
+};
+
+const GVecGen3 bif_op = {
+ .fni8 = gen_bif_i64,
+ .fniv = gen_bif_vec,
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
+ .load_dest = true
+};
+
+
 /* Translate a NEON data processing instruction. Return nonzero if the
 instruction is invalid.
 We process data in a mixture of 32-bit and 64-bit chunks.
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 {
 int op;
 int q;
- int rd, rn, rm;
+ int rd, rn, rm, rd_ofs, rn_ofs, rm_ofs;
 int size;
 int shift;
 int pass;
 int count;
 int pairwise;
 int u;
+ int vec_size;
 uint32_t imm, mask;
 TCGv_i32 tmp, tmp2, tmp3, tmp4, tmp5;
 TCGv_ptr ptr1, ptr2, ptr3;
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 VFP_DREG_N(rn, insn);
 VFP_DREG_M(rm, insn);
 size = (insn >> 20) & 3;
+ vec_size = q ? 16 : 8;
+ rd_ofs = neon_reg_offset(rd, 0);
+ rn_ofs = neon_reg_offset(rn, 0);
+ rm_ofs = neon_reg_offset(rm, 0);
+
 if ((insn & (1 << 23)) == 0) {
 /* Three register same length. */
 op = ((insn >> 7) & 0x1e) | ((insn >> 4) & 1);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 q, rd, rn, rm);
 }
 return 1;
+
+ case NEON_3R_LOGIC: /* Logic ops. */
+ switch ((u << 2) | size) {
+ case 0: /* VAND */
+ tcg_gen_gvec_and(0, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ break;
+ case 1: /* VBIC */
+ tcg_gen_gvec_andc(0, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ break;
+ case 2:
+ if (rn == rm) {
+ /* VMOV */
+ tcg_gen_gvec_mov(0, rd_ofs, rn_ofs, vec_size, vec_size);
+ } else {
+ /* VORR */
+ tcg_gen_gvec_or(0, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ }
+ break;
+ case 3: /* VORN */
+ tcg_gen_gvec_orc(0, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ break;
+ case 4: /* VEOR */
+ tcg_gen_gvec_xor(0, rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size);
+ break;
+ case 5: /* VBSL */
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size, &bsl_op);
+ break;
+ case 6: /* VBIT */
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size, &bit_op);
+ break;
+ case 7: /* VBIF */
+ tcg_gen_gvec_3(rd_ofs, rn_ofs, rm_ofs,
+ vec_size, vec_size, &bif_op);
+ break;
+ }
+ return 0;
 }
- if (size == 3 && op != NEON_3R_LOGIC) {
+ if (size == 3) {
 /* 64-bit element instructions. */
 for (pass = 0; pass < (q ? 2 : 1); pass++) {
 neon_load_reg64(cpu_V0, rn + pass);
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 case NEON_3R_VRHADD:
 GEN_NEON_INTEGER_OP(rhadd);
 break;
- case NEON_3R_LOGIC: /* Logic ops. */
- switch ((u << 2) | size) {
- case 0: /* VAND */
- tcg_gen_and_i32(tmp, tmp, tmp2);
- break;
- case 1: /* BIC */
- tcg_gen_andc_i32(tmp, tmp, tmp2);
- break;
- case 2: /* VORR */
- tcg_gen_or_i32(tmp, tmp, tmp2);
- break;
- case 3: /* VORN */
- tcg_gen_orc_i32(tmp, tmp, tmp2);
- break;
- case 4: /* VEOR */
- tcg_gen_xor_i32(tmp, tmp, tmp2);
- break;
- case 5: /* VBSL */
- tmp3 = neon_load_reg(rd, pass);
- gen_neon_bsl(tmp, tmp, tmp2, tmp3);
- tcg_temp_free_i32(tmp3);
- break;
- case 6: /* VBIT */
- tmp3 = neon_load_reg(rd, pass);
- gen_neon_bsl(tmp, tmp, tmp3, tmp2);
- tcg_temp_free_i32(tmp3);
- break;
- case 7: /* VBIF */
- tmp3 = neon_load_reg(rd, pass);
- gen_neon_bsl(tmp, tmp3, tmp, tmp2);
- tcg_temp_free_i32(tmp3);
- break;
- }
- break;
 case NEON_3R_VHSUB:
 GEN_NEON_INTEGER_OP(hsub);
 break;
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-7-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 14 --------------
 include/hw/misc/cbus.h | 32 ++++++++++++++++++++++++++++++++
 hw/arm/nseries.c | 1 +
 hw/misc/cbus.c | 2 +-
 MAINTAINERS | 1 +
 5 files changed, 35 insertions(+), 15 deletions(-)
 create mode 100644 include/hw/misc/cbus.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
 /* stellaris_input.c */
 void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);

-/* cbus.c */
-typedef struct {
- qemu_irq clk;
- qemu_irq dat;
- qemu_irq sel;
-} CBus;
-CBus *cbus_init(qemu_irq dat_out);
-void cbus_attach(CBus *bus, void *slave_opaque);
-
-void *retu_init(qemu_irq irq, int vilma);
-void *tahvo_init(qemu_irq irq, int betty);
-
-void retu_key_event(void *retu, int state);
-
 #endif
diff --git a/include/hw/misc/cbus.h b/include/hw/misc/cbus.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/misc/cbus.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * CBUS three-pin bus and the Retu / Betty / Tahvo / Vilma / Avilma /
+ * Hinku / Vinku / Ahne / Pihi chips used in various Nokia platforms.
+ * Based on reverse-engineering of a linux driver.
+ *
+ * Copyright (C) 2008 Nokia Corporation
+ * Written by Andrzej Zaborowski
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_MISC_CBUS_H
+#define HW_MISC_CBUS_H
+
+#include "hw/irq.h"
+
+typedef struct {
+ qemu_irq clk;
+ qemu_irq dat;
+ qemu_irq sel;
+} CBus;
+
+CBus *cbus_init(qemu_irq dat_out);
+void cbus_attach(CBus *bus, void *slave_opaque);
+
+void *retu_init(qemu_irq irq, int vilma);
+void *tahvo_init(qemu_irq irq, int betty);
+
+void retu_key_event(void *retu, int state);
+
+#endif
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/i2c/i2c.h"
 #include "hw/devices.h"
 #include "hw/display/blizzard.h"
+#include "hw/misc/cbus.h"
 #include "hw/misc/tmp105.h"
 #include "hw/block/flash.h"
 #include "hw/hw.h"
diff --git a/hw/misc/cbus.c b/hw/misc/cbus.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/cbus.c
+++ b/hw/misc/cbus.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "hw/hw.h"
 #include "hw/irq.h"
-#include "hw/devices.h"
+#include "hw/misc/cbus.h"
 #include "sysemu/sysemu.h"

 //#define DEBUG
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
 F: hw/misc/cbus.c
 F: hw/timer/twl92230.c
 F: include/hw/display/blizzard.h
+F: include/hw/misc/cbus.h

 Palm
 M: Andrzej Zaborowski <balrogg@gmail.com>
--
2.20.1

diff view generated by jsdifflib
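A note on the xor/and/xor sequences in the patch above: they compute a bitwise select without needing a scratch register. A plain-C sketch of the identity behind gen_bsl_i64 (illustrative only, not QEMU code):

    #include <stdint.h>

    /* Bitwise select: for each bit, take n where d has a 1, else m.
     * The xor/and/xor form used by the expanders is equivalent to
     * (n & d) | (m & ~d) but avoids a temporary.
     */
    static uint64_t bsl64(uint64_t d, uint64_t n, uint64_t m)
    {
        n ^= m;       /* differing bits of n and m */
        n &= d;       /* keep only those selected by d */
        return m ^ n; /* flip those bits of m toward n */
    }

Where d is 1 the result is m ^ (n ^ m) = n; where d is 0 it is m, matching the VBSL definition. VBIT and VBIF are the same identity with the operand roles permuted.
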
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 67 ++++++++++++++++++++++++------------------
 1 file changed, 39 insertions(+), 28 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 return 1;
 }
 } else { /* (insn & 0x00380080) == 0 */
- int invert;
+ int invert, reg_ofs, vec_size;
+
 if (q && (rd & 1)) {
 return 1;
 }
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 break;
 case 14:
 imm |= (imm << 8) | (imm << 16) | (imm << 24);
- if (invert)
+ if (invert) {
 imm = ~imm;
+ }
 break;
 case 15:
 if (invert) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 | ((imm & 0x40) ? (0x1f << 25) : (1 << 30));
 break;
 }
- if (invert)
+ if (invert) {
 imm = ~imm;
+ }

- for (pass = 0; pass < (q ? 4 : 2); pass++) {
- if (op & 1 && op < 12) {
- tmp = neon_load_reg(rd, pass);
- if (invert) {
- /* The immediate value has already been inverted, so
- BIC becomes AND. */
- tcg_gen_andi_i32(tmp, tmp, imm);
- } else {
- tcg_gen_ori_i32(tmp, tmp, imm);
- }
+ reg_ofs = neon_reg_offset(rd, 0);
+ vec_size = q ? 16 : 8;
+
+ if (op & 1 && op < 12) {
+ if (invert) {
+ /* The immediate value has already been inverted,
+ * so BIC becomes AND.
+ */
+ tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+ vec_size, vec_size);
 } else {
- /* VMOV, VMVN. */
- tmp = tcg_temp_new_i32();
- if (op == 14 && invert) {
- int n;
- uint32_t val;
- val = 0;
- for (n = 0; n < 4; n++) {
- if (imm & (1 << (n + (pass & 1) * 4)))
- val |= 0xff << (n * 8);
- }
- tcg_gen_movi_i32(tmp, val);
- } else {
- tcg_gen_movi_i32(tmp, imm);
- }
+ tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+ vec_size, vec_size);
+ }
+ } else {
+ /* VMOV, VMVN. */
+ if (op == 14 && invert) {
+ TCGv_i64 t64 = tcg_temp_new_i64();
+
+ for (pass = 0; pass <= q; ++pass) {
+ uint64_t val = 0;
+ int n;
+
+ for (n = 0; n < 8; n++) {
+ if (imm & (1 << (n + pass * 8))) {
+ val |= 0xffull << (n * 8);
+ }
+ }
+ tcg_gen_movi_i64(t64, val);
+ neon_store_reg64(t64, rd + pass);
+ }
+ tcg_temp_free_i64(t64);
+ } else {
+ tcg_gen_gvec_dup32i(reg_ofs, vec_size, vec_size, imm);
 }
- neon_store_reg(rd, pass, tmp);
 }
 }
 } else { /* (insn & 0x00800010 == 0x00800000) */
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-8-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 3 ---
 include/hw/input/gamepad.h | 19 +++++++++++++++++++
 hw/arm/stellaris.c | 2 +-
 hw/input/stellaris_input.c | 2 +-
 MAINTAINERS | 1 +
 5 files changed, 22 insertions(+), 5 deletions(-)
 create mode 100644 include/hw/input/gamepad.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@ void *tsc2005_init(qemu_irq pintdav);
 uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
 void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);

-/* stellaris_input.c */
-void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
-
 #endif
diff --git a/include/hw/input/gamepad.h b/include/hw/input/gamepad.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/input/gamepad.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * Gamepad style buttons connected to IRQ/GPIO lines
+ *
+ * Copyright (c) 2007 CodeSourcery.
+ * Written by Paul Brook
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_INPUT_GAMEPAD_H
+#define HW_INPUT_GAMEPAD_H
+
+#include "hw/irq.h"
+
+/* stellaris_input.c */
+void stellaris_gamepad_init(int n, qemu_irq *irq, const int *keycode);
+
+#endif
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/sysbus.h"
 #include "hw/ssi/ssi.h"
 #include "hw/arm/arm.h"
-#include "hw/devices.h"
 #include "qemu/timer.h"
 #include "hw/i2c/i2c.h"
 #include "net/net.h"
@@ -XXX,XX +XXX,XX @@
 #include "sysemu/sysemu.h"
 #include "hw/arm/armv7m.h"
 #include "hw/char/pl011.h"
+#include "hw/input/gamepad.h"
 #include "hw/watchdog/cmsdk-apb-watchdog.h"
 #include "hw/misc/unimp.h"
 #include "cpu.h"
diff --git a/hw/input/stellaris_input.c b/hw/input/stellaris_input.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/stellaris_input.c
+++ b/hw/input/stellaris_input.c
@@ -XXX,XX +XXX,XX @@
 */
 #include "qemu/osdep.h"
 #include "hw/hw.h"
-#include "hw/devices.h"
+#include "hw/input/gamepad.h"
 #include "ui/console.h"

 typedef struct {
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
 S: Maintained
 F: hw/*/stellaris*
+F: include/hw/input/gamepad.h

 Versatile Express
 M: Peter Maydell <peter.maydell@linaro.org>
--
2.20.1

diff view generated by jsdifflib
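The op == 14 case in the patch above expands each bit of an 8-bit immediate into a full byte of a 64-bit lane. A stand-alone C model of that expansion (illustrative only; the function name is made up):

    #include <stdint.h>

    /* Expand bit n of an 8-bit immediate into byte n of the result,
     * as the VMOV (immediate, op == 14, inverted) case does per D register.
     */
    static uint64_t expand_imm8_to_mask64(uint8_t imm)
    {
        uint64_t val = 0;
        for (int n = 0; n < 8; n++) {
            if (imm & (1u << n)) {
                val |= 0xffull << (n * 8);
            }
        }
        return val;
    }
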
From: Richard Henderson <richard.henderson@linaro.org>

Also introduces neon_element_offset to find the env offset
of a specific element within a neon register.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 63 ++++++++++++++++++++++++------------------
 1 file changed, 36 insertions(+), 27 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ neon_reg_offset (int reg, int n)
 return vfp_reg_offset(0, sreg);
 }

+/* Return the offset of a 2**SIZE piece of a NEON register, at index ELE,
+ * where 0 is the least significant end of the register.
+ */
+static inline long
+neon_element_offset(int reg, int element, TCGMemOp size)
+{
+ int element_size = 1 << size;
+ int ofs = element * element_size;
+#ifdef HOST_WORDS_BIGENDIAN
+ /* Calculate the offset assuming fully little-endian,
+ * then XOR to account for the order of the 8-byte units.
+ */
+ if (element_size < 8) {
+ ofs ^= 8 - element_size;
+ }
+#endif
+ return neon_reg_offset(reg, 0) + ofs;
+}
+
 static TCGv_i32 neon_load_reg(int reg, int pass)
 {
 TCGv_i32 tmp = tcg_temp_new_i32();
@@ -XXX,XX +XXX,XX @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
 tmp = load_reg(s, rd);
 if (insn & (1 << 23)) {
 /* VDUP */
- if (size == 0) {
- gen_neon_dup_u8(tmp, 0);
- } else if (size == 1) {
- gen_neon_dup_low16(tmp);
- }
- for (n = 0; n <= pass * 2; n++) {
- tmp2 = tcg_temp_new_i32();
- tcg_gen_mov_i32(tmp2, tmp);
- neon_store_reg(rn, n, tmp2);
- }
- neon_store_reg(rn, n, tmp);
+ int vec_size = pass ? 16 : 8;
+ tcg_gen_gvec_dup_i32(size, neon_reg_offset(rn, 0),
+ vec_size, vec_size, tmp);
+ tcg_temp_free_i32(tmp);
 } else {
 /* VMOV */
 switch (size) {
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
 tcg_temp_free_i32(tmp);
 } else if ((insn & 0x380) == 0) {
 /* VDUP */
+ int element;
+ TCGMemOp size;
+
 if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
 return 1;
 }
- if (insn & (1 << 19)) {
- tmp = neon_load_reg(rm, 1);
- } else {
- tmp = neon_load_reg(rm, 0);
- }
 if (insn & (1 << 16)) {
- gen_neon_dup_u8(tmp, ((insn >> 17) & 3) * 8);
+ size = MO_8;
+ element = (insn >> 17) & 7;
 } else if (insn & (1 << 17)) {
- if ((insn >> 18) & 1)
- gen_neon_dup_high16(tmp);
- else
- gen_neon_dup_low16(tmp);
+ size = MO_16;
+ element = (insn >> 18) & 3;
+ } else {
+ size = MO_32;
+ element = (insn >> 19) & 1;
 }
- for (pass = 0; pass < (q ? 4 : 2); pass++) {
- tmp2 = tcg_temp_new_i32();
- tcg_gen_mov_i32(tmp2, tmp);
- neon_store_reg(rd, pass, tmp2);
- }
- tcg_temp_free_i32(tmp);
+ tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
+ neon_element_offset(rm, element, size),
+ q ? 16 : 8, q ? 16 : 8);
 } else {
 return 1;
 }
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Since uWireSlave is only used in this new header, there is no
need to expose it via "qemu/typedefs.h".

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-9-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/omap.h | 6 +-----
 include/hw/devices.h | 15 ---------------
 include/hw/input/tsc2xxx.h | 36 ++++++++++++++++++++++++++++++++++++
 include/qemu/typedefs.h | 1 -
 hw/arm/nseries.c | 2 +-
 hw/arm/palm.c | 2 +-
 hw/input/tsc2005.c | 2 +-
 hw/input/tsc210x.c | 4 ++--
 MAINTAINERS | 2 ++
 9 files changed, 44 insertions(+), 26 deletions(-)
 create mode 100644 include/hw/input/tsc2xxx.h

diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/omap.h
+++ b/include/hw/arm/omap.h
@@ -XXX,XX +XXX,XX @@
 #include "exec/memory.h"
 # define hw_omap_h        "omap.h"
 #include "hw/irq.h"
+#include "hw/input/tsc2xxx.h"
 #include "target/arm/cpu-qom.h"
 #include "qemu/log.h"

@@ -XXX,XX +XXX,XX @@ qemu_irq *omap_mpuio_in_get(struct omap_mpuio_s *s);
 void omap_mpuio_out_set(struct omap_mpuio_s *s, int line, qemu_irq handler);
 void omap_mpuio_key(struct omap_mpuio_s *s, int row, int col, int down);

-struct uWireSlave {
- uint16_t (*receive)(void *opaque);
- void (*send)(void *opaque, uint16_t data);
- void *opaque;
-};
 struct omap_uwire_s;
 void omap_uwire_attach(struct omap_uwire_s *s,
 uWireSlave *slave, int chipselect);
diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@
 /* Devices that have nowhere better to go. */

 #include "hw/hw.h"
-#include "ui/console.h"

 /* smc91c111.c */
 void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
@@ -XXX,XX +XXX,XX @@ void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
 /* lan9118.c */
 void lan9118_init(NICInfo *, uint32_t, qemu_irq);

-/* tsc210x.c */
-uWireSlave *tsc2102_init(qemu_irq pint);
-uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
-I2SCodec *tsc210x_codec(uWireSlave *chip);
-uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
-void tsc210x_set_transform(uWireSlave *chip,
- MouseTransformInfo *info);
-void tsc210x_key_event(uWireSlave *chip, int key, int down);
-
-/* tsc2005.c */
-void *tsc2005_init(qemu_irq pintdav);
-uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
-void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
-
 #endif
diff --git a/include/hw/input/tsc2xxx.h b/include/hw/input/tsc2xxx.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/input/tsc2xxx.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * TI touchscreen controller
+ *
+ * Copyright (c) 2006 Andrzej Zaborowski
+ * Copyright (C) 2008 Nokia Corporation
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_INPUT_TSC2XXX_H
+#define HW_INPUT_TSC2XXX_H
+
+#include "hw/irq.h"
+#include "ui/console.h"
+
+typedef struct uWireSlave {
+ uint16_t (*receive)(void *opaque);
+ void (*send)(void *opaque, uint16_t data);
+ void *opaque;
+} uWireSlave;
+
+/* tsc210x.c */
+uWireSlave *tsc2102_init(qemu_irq pint);
+uWireSlave *tsc2301_init(qemu_irq penirq, qemu_irq kbirq, qemu_irq dav);
+I2SCodec *tsc210x_codec(uWireSlave *chip);
+uint32_t tsc210x_txrx(void *opaque, uint32_t value, int len);
+void tsc210x_set_transform(uWireSlave *chip, MouseTransformInfo *info);
+void tsc210x_key_event(uWireSlave *chip, int key, int down);
+
+/* tsc2005.c */
+void *tsc2005_init(qemu_irq pintdav);
+uint32_t tsc2005_txrx(void *opaque, uint32_t value, int len);
+void tsc2005_set_transform(void *opaque, MouseTransformInfo *info);
+
+#endif
diff --git a/include/qemu/typedefs.h b/include/qemu/typedefs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/typedefs.h
+++ b/include/qemu/typedefs.h
@@ -XXX,XX +XXX,XX @@ typedef struct RAMBlock RAMBlock;
 typedef struct Range Range;
 typedef struct SHPCDevice SHPCDevice;
 typedef struct SSIBus SSIBus;
-typedef struct uWireSlave uWireSlave;
 typedef struct VirtIODevice VirtIODevice;
 typedef struct Visitor Visitor;
 typedef void SaveStateHandler(QEMUFile *f, void *opaque);
diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@
 #include "ui/console.h"
 #include "hw/boards.h"
 #include "hw/i2c/i2c.h"
-#include "hw/devices.h"
 #include "hw/display/blizzard.h"
+#include "hw/input/tsc2xxx.h"
 #include "hw/misc/cbus.h"
 #include "hw/misc/tmp105.h"
 #include "hw/block/flash.h"
diff --git a/hw/arm/palm.c b/hw/arm/palm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/palm.c
+++ b/hw/arm/palm.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/omap.h"
 #include "hw/boards.h"
 #include "hw/arm/arm.h"
-#include "hw/devices.h"
+#include "hw/input/tsc2xxx.h"
 #include "hw/loader.h"
 #include "exec/address-spaces.h"
 #include "cpu.h"
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/tsc2005.c
+++ b/hw/input/tsc2005.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/hw.h"
 #include "qemu/timer.h"
 #include "ui/console.h"
-#include "hw/devices.h"
+#include "hw/input/tsc2xxx.h"
 #include "trace.h"

 #define TSC_CUT_RESOLUTION(value, p)    ((value) >> (16 - (p ? 12 : 10)))
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/input/tsc210x.c
+++ b/hw/input/tsc210x.c
@@ -XXX,XX +XXX,XX @@
 #include "audio/audio.h"
 #include "qemu/timer.h"
 #include "ui/console.h"
-#include "hw/arm/omap.h"    /* For I2SCodec and uWireSlave */
-#include "hw/devices.h"
+#include "hw/arm/omap.h" /* For I2SCodec */
+#include "hw/input/tsc2xxx.h"

 #define TSC_DATA_REGISTERS_PAGE        0x0
 #define TSC_CONTROL_REGISTERS_PAGE    0x1
diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: hw/input/tsc2005.c
 F: hw/misc/cbus.c
 F: hw/timer/twl92230.c
 F: include/hw/display/blizzard.h
+F: include/hw/input/tsc2xxx.h
 F: include/hw/misc/cbus.h

 Palm
@@ -XXX,XX +XXX,XX @@ L: qemu-arm@nongnu.org
 S: Odd Fixes
 F: hw/arm/palm.c
 F: hw/input/tsc210x.c
+F: include/hw/input/tsc2xxx.h

 Raspberry Pi
 M: Peter Maydell <peter.maydell@linaro.org>
--
2.20.1

diff view generated by jsdifflib
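The neon_element_offset() helper introduced above relies on an endianness trick: compute the little-endian byte offset, then XOR within each 8-byte unit on big-endian hosts. A host-independent C sketch of the same computation (illustrative only, not QEMU code):

    #include <stddef.h>

    /* Offset of element 'index' of width 'esize' (1, 2 or 4 bytes) inside
     * an 8-byte register image. On a big-endian host the bytes of each
     * 64-bit unit are stored reversed, so XOR flips the position within it.
     */
    static size_t element_offset(int index, int esize, int host_big_endian)
    {
        size_t ofs = (size_t)index * esize;
        if (host_big_endian && esize < 8) {
            ofs ^= 8 - esize;
        }
        return ofs;
    }
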
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-6-richard.henderson@linaro.org
[PMM: drop change to now-deleted cpu_mode_names array]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static TCGv_i64 cpu_F0d, cpu_F1d;

 #include "exec/gen-icount.h"

-static const char *regnames[] =
+static const char * const regnames[] =
 { "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
 "r8", "r9", "r10", "r11", "r12", "r13", "r14", "pc" };

@@ -XXX,XX +XXX,XX @@ static struct {
 int nregs;
 int interleave;
 int spacing;
-} neon_ls_element_type[11] = {
+} const neon_ls_element_type[11] = {
 {4, 4, 1},
 {4, 4, 2},
 {4, 1, 1},
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-10-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h | 3 ---
 include/hw/net/lan9118.h | 19 +++++++++++++++++++
 hw/arm/kzm.c | 2 +-
 hw/arm/mps2.c | 2 +-
 hw/arm/realview.c | 1 +
 hw/arm/vexpress.c | 2 +-
 hw/net/lan9118.c | 2 +-
 7 files changed, 24 insertions(+), 7 deletions(-)
 create mode 100644 include/hw/net/lan9118.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/devices.h
+++ b/include/hw/devices.h
@@ -XXX,XX +XXX,XX @@
 /* smc91c111.c */
 void smc91c111_init(NICInfo *, uint32_t, qemu_irq);

-/* lan9118.c */
-void lan9118_init(NICInfo *, uint32_t, qemu_irq);
-
 #endif
diff --git a/include/hw/net/lan9118.h b/include/hw/net/lan9118.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/net/lan9118.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * SMSC LAN9118 Ethernet interface emulation
+ *
+ * Copyright (c) 2009 CodeSourcery, LLC.
+ * Written by Paul Brook
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_NET_LAN9118_H
+#define HW_NET_LAN9118_H
+
+#include "hw/irq.h"
+#include "net/net.h"
+
+void lan9118_init(NICInfo *, uint32_t, qemu_irq);
+
+#endif
diff --git a/hw/arm/kzm.c b/hw/arm/kzm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/kzm.c
+++ b/hw/arm/kzm.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/error-report.h"
 #include "exec/address-spaces.h"
 #include "net/net.h"
-#include "hw/devices.h"
+#include "hw/net/lan9118.h"
 #include "hw/char/serial.h"
 #include "sysemu/qtest.h"

diff --git a/hw/arm/mps2.c b/hw/arm/mps2.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2.c
+++ b/hw/arm/mps2.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/timer/cmsdk-apb-timer.h"
 #include "hw/timer/cmsdk-apb-dualtimer.h"
 #include "hw/misc/mps2-scc.h"
-#include "hw/devices.h"
+#include "hw/net/lan9118.h"
 #include "net/net.h"

 typedef enum MPS2FPGAType {
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/realview.c
+++ b/hw/arm/realview.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/arm.h"
 #include "hw/arm/primecell.h"
 #include "hw/devices.h"
+#include "hw/net/lan9118.h"
 #include "hw/pci/pci.h"
 #include "net/net.h"
 #include "sysemu/sysemu.h"
diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/vexpress.c
+++ b/hw/arm/vexpress.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/sysbus.h"
 #include "hw/arm/arm.h"
 #include "hw/arm/primecell.h"
-#include "hw/devices.h"
+#include "hw/net/lan9118.h"
 #include "hw/i2c/i2c.h"
 #include "net/net.h"
 #include "sysemu/sysemu.h"
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118.c
+++ b/hw/net/lan9118.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/sysbus.h"
 #include "net/net.h"
 #include "net/eth.h"
-#include "hw/devices.h"
+#include "hw/net/lan9118.h"
 #include "sysemu/sysemu.h"
 #include "hw/ptimer.h"
 #include "qemu/log.h"
--
2.20.1

diff view generated by jsdifflib
From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 28 +++-------------------------
 1 file changed, 3 insertions(+), 25 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
 for (xs = 0; xs < selem; xs++) {
 if (replicate) {
 /* Load and replicate to all elements */
- uint64_t mulconst;
 TCGv_i64 tcg_tmp = tcg_temp_new_i64();

 tcg_gen_qemu_ld_i64(tcg_tmp, tcg_addr,
 get_mem_index(s), s->be_data + scale);
- switch (scale) {
- case 0:
- mulconst = 0x0101010101010101ULL;
- break;
- case 1:
- mulconst = 0x0001000100010001ULL;
- break;
- case 2:
- mulconst = 0x0000000100000001ULL;
- break;
- case 3:
- mulconst = 0;
- break;
- default:
- g_assert_not_reached();
- }
- if (mulconst) {
- tcg_gen_muli_i64(tcg_tmp, tcg_tmp, mulconst);
- }
- write_vec_element(s, tcg_tmp, rt, 0, MO_64);
- if (is_q) {
- write_vec_element(s, tcg_tmp, rt, 1, MO_64);
- }
+ tcg_gen_gvec_dup_i64(scale, vec_full_reg_offset(s, rt),
+ (is_q + 1) * 8, vec_full_reg_size(s),
+ tcg_tmp);
 tcg_temp_free_i64(tcg_tmp);
- clear_vec_high(s, is_q, rt);
 } else {
 /* Load/store one element per register */
 if (is_load) {
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-11-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/ne2000-isa.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/hw/net/ne2000-isa.h b/include/hw/net/ne2000-isa.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/net/ne2000-isa.h
+++ b/include/hw/net/ne2000-isa.h
@@ -XXX,XX +XXX,XX @@
 * This work is licensed under the terms of the GNU GPL, version 2 or later.
 * See the COPYING file in the top-level directory.
 */
+
+#ifndef HW_NET_NE2K_ISA_H
+#define HW_NET_NE2K_ISA_H
+
 #include "hw/hw.h"
 #include "hw/qdev.h"
 #include "hw/isa/isa.h"
@@ -XXX,XX +XXX,XX @@ static inline ISADevice *isa_ne2000_init(ISABus *bus, int base, int irq,
 }
 return d;
 }
+
+#endif
--
2.20.1

diff view generated by jsdifflib
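The mulconst idiom removed by the patch above deserves a note: multiplying a zero-extended element by a pattern such as 0x0101010101010101 replicates it across a 64-bit lane, which is what tcg_gen_muli_i64() was used for before tcg_gen_gvec_dup_i64() subsumed it. A quick C illustration of the arithmetic (illustrative only):

    #include <stdint.h>

    /* Replicate an 8-bit value into all eight bytes of a uint64_t by
     * multiplication: v * 0x0101010101010101 places a copy of v at
     * every byte position, with no carries since each product is v << 8k.
     */
    static uint64_t replicate8(uint8_t v)
    {
        return (uint64_t)v * 0x0101010101010101ULL;
    }
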
From: Richard Henderson <richard.henderson@linaro.org>

Move ssra_op and usra_op expanders from translate-a64.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181011205206.3552-14-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h | 2 +
 target/arm/translate-a64.c | 106 ----------------------------
 target/arm/translate.c | 139 ++++++++++++++++++++++++++++++++++---
 3 files changed, 130 insertions(+), 117 deletions(-)

diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ static inline TCGv_i32 get_ahp_flag(void)
 extern const GVecGen3 bsl_op;
 extern const GVecGen3 bit_op;
 extern const GVecGen3 bif_op;
+extern const GVecGen2i ssra_op[4];
+extern const GVecGen2i usra_op[4];

 /*
 * Forward to the isar_feature_* tests given a DisasContext pointer.
 */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_simd_scalar_two_reg_misc(DisasContext *s, uint32_t insn)
 }
 }

-static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-
-static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_sar16i_i64(a, a, shift);
- tcg_gen_vec_add16_i64(d, d, a);
-}
-
-static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
-{
- tcg_gen_sari_i32(a, a, shift);
- tcg_gen_add_i32(d, d, a);
-}
-
-static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_sari_i64(a, a, shift);
- tcg_gen_add_i64(d, d, a);
-}
-
-static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
-{
- tcg_gen_sari_vec(vece, a, a, sh);
- tcg_gen_add_vec(vece, d, d, a);
-}
-
-static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
-{
- tcg_gen_vec_shr8i_i64(a, a, shift);
- tcg_gen_vec_add8_i64(d, d, a);
-}
-

From: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-12-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/net/lan9118.h | 2 ++
 hw/arm/exynos4_boards.c | 3 ++-
 hw/arm/mps2-tz.c | 3 ++-
 hw/net/lan9118.c | 1 -
 4 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/hw/net/lan9118.h b/include/hw/net/lan9118.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/net/lan9118.h
+++ b/include/hw/net/lan9118.h
@@ -XXX,XX +XXX,XX @@
 #include "hw/irq.h"
 #include "net/net.h"

+#define TYPE_LAN9118 "lan9118"
+
 void lan9118_init(NICInfo *, uint32_t, qemu_irq);

 #endif
diff --git a/hw/arm/exynos4_boards.c b/hw/arm/exynos4_boards.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/exynos4_boards.c
+++ b/hw/arm/exynos4_boards.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/arm.h"
 #include "exec/address-spaces.h"
 #include "hw/arm/exynos4210.h"
+#include "hw/net/lan9118.h"
 #include "hw/boards.h"

 #undef DEBUG
@@ -XXX,XX +XXX,XX @@ static void lan9215_init(uint32_t base, qemu_irq irq)
 /* This should be a 9215 but the 9118 is close enough */
 if (nd_table[0].used) {
 qemu_check_nic_model(&nd_table[0], "lan9118");
- dev = qdev_create(NULL, "lan9118");
+ dev = qdev_create(NULL, TYPE_LAN9118);
 qdev_set_nic_properties(dev, &nd_table[0]);
 qdev_prop_set_uint32(dev, "mode_16bit", 1);
 qdev_init_nofail(dev);
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mps2-tz.c
+++ b/hw/arm/mps2-tz.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/armsse.h"
 #include "hw/dma/pl080.h"
 #include "hw/ssi/pl022.h"
+#include "hw/net/lan9118.h"
 #include "net/net.h"
 #include "hw/core/split-irq.h"

@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
 * except that it doesn't support the checksum-offload feature.
 */
 qemu_check_nic_model(nd, "lan9118");
- mms->lan9118 = qdev_create(NULL, "lan9118");
+ mms->lan9118 = qdev_create(NULL, TYPE_LAN9118);
 qdev_set_nic_properties(mms->lan9118, nd);
 qdev_init_nofail(mms->lan9118);

diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118.c
+++ b/hw/net/lan9118.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_lan9118_packet = {
 }
 }
--
2.20.1
-}
77
-
78
-static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
79
-{
80
- tcg_gen_shri_i32(a, a, shift);
81
- tcg_gen_add_i32(d, d, a);
82
-}
83
-
84
-static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
85
-{
86
- tcg_gen_shri_i64(a, a, shift);
87
- tcg_gen_add_i64(d, d, a);
88
-}
89
-
90
-static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
91
-{
92
- tcg_gen_shri_vec(vece, a, a, sh);
93
- tcg_gen_add_vec(vece, d, d, a);
94
-}
95
-
96
static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
97
{
98
uint64_t mask = dup_const(MO_8, 0xff >> shift);
99
@@ -XXX,XX +XXX,XX @@ static void gen_shr_ins_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
100
static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
101
int immh, int immb, int opcode, int rn, int rd)
102
{
103
- static const GVecGen2i ssra_op[4] = {
104
- { .fni8 = gen_ssra8_i64,
105
- .fniv = gen_ssra_vec,
106
- .load_dest = true,
107
- .opc = INDEX_op_sari_vec,
108
- .vece = MO_8 },
109
- { .fni8 = gen_ssra16_i64,
110
- .fniv = gen_ssra_vec,
111
- .load_dest = true,
112
- .opc = INDEX_op_sari_vec,
113
- .vece = MO_16 },
114
- { .fni4 = gen_ssra32_i32,
115
- .fniv = gen_ssra_vec,
116
- .load_dest = true,
117
- .opc = INDEX_op_sari_vec,
118
- .vece = MO_32 },
119
- { .fni8 = gen_ssra64_i64,
120
- .fniv = gen_ssra_vec,
121
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
122
- .load_dest = true,
123
- .opc = INDEX_op_sari_vec,
124
- .vece = MO_64 },
125
- };
126
- static const GVecGen2i usra_op[4] = {
127
- { .fni8 = gen_usra8_i64,
128
- .fniv = gen_usra_vec,
129
- .load_dest = true,
130
- .opc = INDEX_op_shri_vec,
131
- .vece = MO_8, },
132
- { .fni8 = gen_usra16_i64,
133
- .fniv = gen_usra_vec,
134
- .load_dest = true,
135
- .opc = INDEX_op_shri_vec,
136
- .vece = MO_16, },
137
- { .fni4 = gen_usra32_i32,
138
- .fniv = gen_usra_vec,
139
- .load_dest = true,
140
- .opc = INDEX_op_shri_vec,
141
- .vece = MO_32, },
142
- { .fni8 = gen_usra64_i64,
143
- .fniv = gen_usra_vec,
144
- .prefer_i64 = TCG_TARGET_REG_BITS == 64,
145
- .load_dest = true,
146
- .opc = INDEX_op_shri_vec,
147
- .vece = MO_64, },
148
- };
149
static const GVecGen2i sri_op[4] = {
150
{ .fni8 = gen_shr8_ins_i64,
151
.fniv = gen_shr_ins_vec,
152
diff --git a/target/arm/translate.c b/target/arm/translate.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.c
155
+++ b/target/arm/translate.c
156
@@ -XXX,XX +XXX,XX @@ const GVecGen3 bif_op = {
157
.load_dest = true
158
};
75
};
159
76
160
+static void gen_ssra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
77
-#define TYPE_LAN9118 "lan9118"
161
+{
78
#define LAN9118(obj) OBJECT_CHECK(lan9118_state, (obj), TYPE_LAN9118)
162
+ tcg_gen_vec_sar8i_i64(a, a, shift);
79
163
+ tcg_gen_vec_add8_i64(d, d, a);
80
typedef struct {
164
+}
165
+
166
+static void gen_ssra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
167
+{
168
+ tcg_gen_vec_sar16i_i64(a, a, shift);
169
+ tcg_gen_vec_add16_i64(d, d, a);
170
+}
171
+
172
+static void gen_ssra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
173
+{
174
+ tcg_gen_sari_i32(a, a, shift);
175
+ tcg_gen_add_i32(d, d, a);
176
+}
177
+
178
+static void gen_ssra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
179
+{
180
+ tcg_gen_sari_i64(a, a, shift);
181
+ tcg_gen_add_i64(d, d, a);
182
+}
183
+
184
+static void gen_ssra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
185
+{
186
+ tcg_gen_sari_vec(vece, a, a, sh);
187
+ tcg_gen_add_vec(vece, d, d, a);
188
+}
189
+
190
+const GVecGen2i ssra_op[4] = {
191
+ { .fni8 = gen_ssra8_i64,
192
+ .fniv = gen_ssra_vec,
193
+ .load_dest = true,
194
+ .opc = INDEX_op_sari_vec,
195
+ .vece = MO_8 },
196
+ { .fni8 = gen_ssra16_i64,
197
+ .fniv = gen_ssra_vec,
198
+ .load_dest = true,
199
+ .opc = INDEX_op_sari_vec,
200
+ .vece = MO_16 },
201
+ { .fni4 = gen_ssra32_i32,
202
+ .fniv = gen_ssra_vec,
203
+ .load_dest = true,
204
+ .opc = INDEX_op_sari_vec,
205
+ .vece = MO_32 },
206
+ { .fni8 = gen_ssra64_i64,
207
+ .fniv = gen_ssra_vec,
208
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
209
+ .load_dest = true,
210
+ .opc = INDEX_op_sari_vec,
211
+ .vece = MO_64 },
212
+};
213
+
214
+static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
215
+{
216
+ tcg_gen_vec_shr8i_i64(a, a, shift);
217
+ tcg_gen_vec_add8_i64(d, d, a);
218
+}
219
+
220
+static void gen_usra16_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
221
+{
222
+ tcg_gen_vec_shr16i_i64(a, a, shift);
223
+ tcg_gen_vec_add16_i64(d, d, a);
224
+}
225
+
226
+static void gen_usra32_i32(TCGv_i32 d, TCGv_i32 a, int32_t shift)
227
+{
228
+ tcg_gen_shri_i32(a, a, shift);
229
+ tcg_gen_add_i32(d, d, a);
230
+}
231
+
232
+static void gen_usra64_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
233
+{
234
+ tcg_gen_shri_i64(a, a, shift);
235
+ tcg_gen_add_i64(d, d, a);
236
+}
237
+
238
+static void gen_usra_vec(unsigned vece, TCGv_vec d, TCGv_vec a, int64_t sh)
239
+{
240
+ tcg_gen_shri_vec(vece, a, a, sh);
241
+ tcg_gen_add_vec(vece, d, d, a);
242
+}
243
+
244
+const GVecGen2i usra_op[4] = {
245
+ { .fni8 = gen_usra8_i64,
246
+ .fniv = gen_usra_vec,
247
+ .load_dest = true,
248
+ .opc = INDEX_op_shri_vec,
249
+ .vece = MO_8, },
250
+ { .fni8 = gen_usra16_i64,
251
+ .fniv = gen_usra_vec,
252
+ .load_dest = true,
253
+ .opc = INDEX_op_shri_vec,
254
+ .vece = MO_16, },
255
+ { .fni4 = gen_usra32_i32,
256
+ .fniv = gen_usra_vec,
257
+ .load_dest = true,
258
+ .opc = INDEX_op_shri_vec,
259
+ .vece = MO_32, },
260
+ { .fni8 = gen_usra64_i64,
261
+ .fniv = gen_usra_vec,
262
+ .prefer_i64 = TCG_TARGET_REG_BITS == 64,
263
+ .load_dest = true,
264
+ .opc = INDEX_op_shri_vec,
265
+ .vece = MO_64, },
266
+};
267
268
/* Translate a NEON data processing instruction. Return nonzero if the
269
instruction is invalid.
270
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
271
}
272
return 0;
273
274
+ case 1: /* VSRA */
275
+ /* Right shift comes here negative. */
276
+ shift = -shift;
277
+ /* Shifts larger than the element size are architecturally
278
+ * valid. Unsigned results in all zeros; signed results
279
+ * in all sign bits.
280
+ */
281
+ if (!u) {
282
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
283
+ MIN(shift, (8 << size) - 1),
284
+ &ssra_op[size]);
285
+ } else if (shift >= 8 << size) {
286
+ /* rd += 0 */
287
+ } else {
288
+ tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
289
+ shift, &usra_op[size]);
290
+ }
291
+ return 0;
292
+
293
case 5: /* VSHL, VSLI */
294
if (!u) { /* VSHL */
295
/* Shifts larger than the element size are
296
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
297
neon_load_reg64(cpu_V0, rm + pass);
298
tcg_gen_movi_i64(cpu_V1, imm);
299
switch (op) {
300
- case 1: /* VSRA */
301
- if (u)
302
- gen_helper_neon_shl_u64(cpu_V0, cpu_V0, cpu_V1);
303
- else
304
- gen_helper_neon_shl_s64(cpu_V0, cpu_V0, cpu_V1);
305
- break;
306
case 2: /* VRSHR */
307
case 3: /* VRSRA */
308
if (u)
309
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
310
default:
311
g_assert_not_reached();
312
}
313
- if (op == 1 || op == 3) {
314
+ if (op == 3) {
315
/* Accumulate. */
316
neon_load_reg64(cpu_V1, rd + pass);
317
tcg_gen_add_i64(cpu_V0, cpu_V0, cpu_V1);
318
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
319
tmp2 = tcg_temp_new_i32();
320
tcg_gen_movi_i32(tmp2, imm);
321
switch (op) {
322
- case 1: /* VSRA */
323
- GEN_NEON_INTEGER_OP(shl);
324
- break;
325
case 2: /* VRSHR */
326
case 3: /* VRSRA */
327
GEN_NEON_INTEGER_OP(rshl);
328
@@ -XXX,XX +XXX,XX @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
329
}
330
tcg_temp_free_i32(tmp2);
331
332
- if (op == 1 || op == 3) {
333
+ if (op == 3) {
334
/* Accumulate. */
335
tmp2 = neon_load_reg(rd, pass);
336
gen_neon_add(size, tmp, tmp2);
337
--
81
--
338
2.19.1
82
2.20.1
339
83
340
84
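For reference, SSRA and USRA are NEON's accumulating shifts: each source element is shifted right (arithmetic when signed, logical when unsigned) and the result is added into the corresponding destination element, which is why every table entry above sets .load_dest. A scalar C sketch of the byte case (illustrative only, not QEMU code; it assumes >> on a negative signed value is an arithmetic shift, as GCC and Clang implement it):

#include <stdint.h>

/* d[i] += m[i] >> shift, with out-of-range shifts handled as the decoder
 * comment describes: for unsigned the addend becomes zero, for signed a
 * shift by the full element width behaves like a shift by width - 1. */
static void ssra8(int8_t *d, const int8_t *m, int shift, int n)
{
    int s = shift > 7 ? 7 : shift;  /* clamp, mirroring MIN(shift, (8 << size) - 1) */
    for (int i = 0; i < n; i++) {
        d[i] += m[i] >> s;          /* arithmetic shift, then accumulate */
    }
}

static void usra8(uint8_t *d, const uint8_t *m, int shift, int n)
{
    if (shift > 7) {
        return;                     /* rd += 0 */
    }
    for (int i = 0; i < n; i++) {
        d[i] += m[i] >> shift;      /* logical shift, then accumulate */
    }
}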
From: Richard Henderson <richard.henderson@linaro.org>

This is done generically in translator_loop.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20181011205206.3552-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 1 -
 target/arm/translate.c     | 1 -
 2 files changed, 2 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,

 static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
 {
-    tcg_clear_temp_count();
 }

 static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_start(DisasContextBase *dcbase, CPUState *cpu)
         tcg_gen_movi_i32(tmp, 0);
         store_cpu_field(tmp, condexec_bits);
     }
-    tcg_clear_temp_count();
 }

 static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
--
2.19.1

From: Philippe Mathieu-Daudé <philmd@redhat.com>

This commit finally deletes "hw/devices.h".

Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190412165416.7977-13-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/devices.h       | 11 -----------
 include/hw/net/smc91c111.h | 19 +++++++++++++++++++
 hw/arm/gumstix.c           |  2 +-
 hw/arm/integratorcp.c      |  2 +-
 hw/arm/mainstone.c         |  2 +-
 hw/arm/realview.c          |  2 +-
 hw/arm/versatilepb.c       |  2 +-
 hw/net/smc91c111.c         |  2 +-
 8 files changed, 25 insertions(+), 17 deletions(-)
 delete mode 100644 include/hw/devices.h
 create mode 100644 include/hw/net/smc91c111.h

diff --git a/include/hw/devices.h b/include/hw/devices.h
deleted file mode 100644
index XXXXXXX..XXXXXXX
--- a/include/hw/devices.h
+++ /dev/null
@@ -XXX,XX +XXX,XX @@
-#ifndef QEMU_DEVICES_H
-#define QEMU_DEVICES_H
-
-/* Devices that have nowhere better to go. */
-
-#include "hw/hw.h"
-
-/* smc91c111.c */
-void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
-
-#endif
diff --git a/include/hw/net/smc91c111.h b/include/hw/net/smc91c111.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/include/hw/net/smc91c111.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * SMSC 91C111 Ethernet interface emulation
+ *
+ * Copyright (c) 2005 CodeSourcery, LLC.
+ * Written by Paul Brook
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_NET_SMC91C111_H
+#define HW_NET_SMC91C111_H
+
+#include "hw/irq.h"
+#include "net/net.h"
+
+void smc91c111_init(NICInfo *, uint32_t, qemu_irq);
+
+#endif
diff --git a/hw/arm/gumstix.c b/hw/arm/gumstix.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/gumstix.c
+++ b/hw/arm/gumstix.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/pxa.h"
 #include "net/net.h"
 #include "hw/block/flash.h"
-#include "hw/devices.h"
+#include "hw/net/smc91c111.h"
 #include "hw/boards.h"
 #include "exec/address-spaces.h"
 #include "sysemu/qtest.h"
diff --git a/hw/arm/integratorcp.c b/hw/arm/integratorcp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/integratorcp.c
+++ b/hw/arm/integratorcp.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu-common.h"
 #include "cpu.h"
 #include "hw/sysbus.h"
-#include "hw/devices.h"
 #include "hw/boards.h"
 #include "hw/arm/arm.h"
 #include "hw/misc/arm_integrator_debug.h"
+#include "hw/net/smc91c111.h"
 #include "net/net.h"
 #include "exec/address-spaces.h"
 #include "sysemu/sysemu.h"
diff --git a/hw/arm/mainstone.c b/hw/arm/mainstone.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/mainstone.c
+++ b/hw/arm/mainstone.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/arm/pxa.h"
 #include "hw/arm/arm.h"
 #include "net/net.h"
-#include "hw/devices.h"
+#include "hw/net/smc91c111.h"
 #include "hw/boards.h"
 #include "hw/block/flash.h"
 #include "hw/sysbus.h"
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/realview.c
+++ b/hw/arm/realview.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/sysbus.h"
 #include "hw/arm/arm.h"
 #include "hw/arm/primecell.h"
-#include "hw/devices.h"
 #include "hw/net/lan9118.h"
+#include "hw/net/smc91c111.h"
 #include "hw/pci/pci.h"
 #include "net/net.h"
 #include "sysemu/sysemu.h"
diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/versatilepb.c
+++ b/hw/arm/versatilepb.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "hw/sysbus.h"
 #include "hw/arm/arm.h"
-#include "hw/devices.h"
+#include "hw/net/smc91c111.h"
 #include "net/net.h"
 #include "sysemu/sysemu.h"
 #include "hw/pci/pci.h"
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/smc91c111.c
+++ b/hw/net/smc91c111.c
@@ -XXX,XX +XXX,XX @@
 #include "qemu/osdep.h"
 #include "hw/sysbus.h"
 #include "net/net.h"
-#include "hw/devices.h"
+#include "hw/net/smc91c111.h"
 #include "qemu/log.h"
 /* For crc32 */
 #include <zlib.h>
--
2.20.1
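With the dedicated header in place, a board picks up smc91c111_init() by including it directly rather than via the deleted catch-all header. A hypothetical board fragment (the function name, base address and IRQ wiring are made up for illustration; only the smc91c111_init() signature comes from the header above):

#include "hw/net/smc91c111.h"  /* pulls in NICInfo and qemu_irq via its own includes */

/* Instantiate the NIC only if the user configured one; the 0x10010000
 * base address and the IRQ argument are illustrative, not a real board. */
static void my_board_net_init(qemu_irq nic_irq)
{
    if (nd_table[0].used) {
        qemu_check_nic_model(&nd_table[0], "smc91c111");
        smc91c111_init(&nd_table[0], 0x10010000, nic_irq);
    }
}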