This pullreq is (1) my GICv4 patches (2) most of the first third of RTH's
cleanup patchset (3) one patch fixing an smmuv3 bug...

v2 changes: fix build failure on aarch64 hosts by moving the
gicv3_add_its() and gicv3_foreach_its() functions to
arm_gicv3_its_common.h.

thanks
-- PMM

The following changes since commit a74782936dc6e979ce371dabda4b1c05624ea87f:

  Merge tag 'pull-migration-20220421a' of https://gitlab.com/dagrh/qemu into staging (2022-04-21 18:48:18 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220422-1

for you to fetch changes up to c3ca7d56c4790c2223122f7e84b71161cd36dbce:

  hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate() (2022-04-22 14:44:55 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement GICv4 emulation
 * Some cleanup patches in target/arm
 * hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate()

----------------------------------------------------------------
Peter Maydell (41):
      hw/intc/arm_gicv3_its: Add missing blank line
      hw/intc/arm_gicv3: Sanity-check num-cpu property
      hw/intc/arm_gicv3: Insist that redist region capacity matches CPU count
      hw/intc/arm_gicv3: Report correct PIDR0 values for ID registers
      target/arm/cpu.c: ignore VIRQ and VFIQ if no EL2
      hw/intc/arm_gicv3_its: Factor out "is intid a valid LPI ID?"
      hw/intc/arm_gicv3_its: Implement GITS_BASER2 for GICv4
      hw/intc/arm_gicv3_its: Implement VMAPI and VMAPTI
      hw/intc/arm_gicv3_its: Implement VMAPP
      hw/intc/arm_gicv3_its: Distinguish success and error cases of CMD_CONTINUE
      hw/intc/arm_gicv3_its: Factor out "find ITE given devid, eventid"
      hw/intc/arm_gicv3_its: Factor out CTE lookup sequence
      hw/intc/arm_gicv3_its: Split out process_its_cmd() physical interrupt code
      hw/intc/arm_gicv3_its: Handle virtual interrupts in process_its_cmd()
      hw/intc/arm_gicv3: Keep pointers to every connected ITS
      hw/intc/arm_gicv3_its: Implement VMOVP
      hw/intc/arm_gicv3_its: Implement VSYNC
      hw/intc/arm_gicv3_its: Implement INV command properly
      hw/intc/arm_gicv3_its: Implement INV for virtual interrupts
      hw/intc/arm_gicv3_its: Implement VMOVI
      hw/intc/arm_gicv3_its: Implement VINVALL
      hw/intc/arm_gicv3: Implement GICv4's new redistributor frame
      hw/intc/arm_gicv3: Implement new GICv4 redistributor registers
      hw/intc/arm_gicv3_cpuif: Split "update vIRQ/vFIQ" from gicv3_cpuif_virt_update()
      hw/intc/arm_gicv3_cpuif: Support vLPIs
      hw/intc/arm_gicv3_cpuif: Don't recalculate maintenance irq unnecessarily
      hw/intc/arm_gicv3_redist: Factor out "update hpplpi for one LPI" logic
      hw/intc/arm_gicv3_redist: Factor out "update hpplpi for all LPIs" logic
      hw/intc/arm_gicv3_redist: Recalculate hppvlpi on VPENDBASER writes
      hw/intc/arm_gicv3_redist: Factor out "update bit in pending table" code
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_process_vlpi()
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_vlpi_pending()
      hw/intc/arm_gicv3_redist: Use set_pending_table_bit() in mov handling
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_mov_vlpi()
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_vinvall()
      hw/intc/arm_gicv3_redist: Implement gicv3_redist_inv_vlpi()
      hw/intc/arm_gicv3: Update ID and feature registers for GICv4
      hw/intc/arm_gicv3: Allow 'revision' property to be set to 4
      hw/arm/virt: Use VIRT_GIC_VERSION_* enum values in create_gic()
      hw/arm/virt: Abstract out calculation of redistributor region capacity
      hw/arm/virt: Support TCG GICv4

Richard Henderson (19):
      target/arm: Update ISAR fields for ARMv8.8
      target/arm: Update SCR_EL3 bits to ARMv8.8
      target/arm: Update SCTLR bits to ARMv9.2
      target/arm: Change DisasContext.aarch64 to bool
      target/arm: Change CPUArchState.aarch64 to bool
      target/arm: Extend store_cpu_offset to take field size
      target/arm: Change DisasContext.thumb to bool
      target/arm: Change CPUArchState.thumb to bool
      target/arm: Remove fpexc32_access
      target/arm: Split out set_btype_raw
      target/arm: Split out gen_rebuild_hflags
      target/arm: Simplify GEN_SHIFT in translate.c
      target/arm: Simplify gen_sar
      target/arm: Simplify aa32 DISAS_WFI
      target/arm: Use tcg_constant in translate-m-nocp.c
      target/arm: Use tcg_constant in translate-neon.c
      target/arm: Use smin/smax for do_sat_addsub_32
      target/arm: Use tcg_constant in translate-vfp.c
      target/arm: Use tcg_constant_i32 in translate.h

Xiang Chen (1):
      hw/arm/smmuv3: Pass the actual perm to returned IOMMUTLBEntry in smmuv3_translate()

 docs/system/arm/virt.rst | 5 +-
 hw/intc/gicv3_internal.h | 213 +++++++-
 include/hw/arm/virt.h | 19 +-
 include/hw/intc/arm_gicv3_common.h | 13 +
 include/hw/intc/arm_gicv3_its_common.h | 19 +
 target/arm/cpu.h | 59 ++-
 target/arm/translate-a32.h | 13 +-
 target/arm/translate.h | 17 +-
 hw/arm/smmuv3.c | 2 +-
 hw/arm/virt.c | 102 +++-
 hw/intc/arm_gicv3_common.c | 54 +-
 hw/intc/arm_gicv3_cpuif.c | 195 ++++++--
 hw/intc/arm_gicv3_dist.c | 7 +-
 hw/intc/arm_gicv3_its.c | 876 +++++++++++++++++++++++++++------
 hw/intc/arm_gicv3_its_kvm.c | 2 +
 hw/intc/arm_gicv3_kvm.c | 5 +
 hw/intc/arm_gicv3_redist.c | 480 +++++++++++++++---
 linux-user/arm/cpu_loop.c | 2 +-
 target/arm/cpu.c | 16 +-
 target/arm/helper-a64.c | 4 +-
 target/arm/helper.c | 19 +-
 target/arm/hvf/hvf.c | 2 +-
 target/arm/m_helper.c | 6 +-
 target/arm/op_helper.c | 13 -
 target/arm/translate-a64.c | 50 +-
 target/arm/translate-m-nocp.c | 12 +-
 target/arm/translate-neon.c | 21 +-
 target/arm/translate-sve.c | 9 +-
 target/arm/translate-vfp.c | 76 +--
 target/arm/translate.c | 101 ++--
 hw/intc/trace-events | 18 +-
 31 files changed, 1890 insertions(+), 540 deletions(-)


target-arm queue: mostly patches from me this time round.
Nothing too exciting.

-- PMM

The following changes since commit 78ac2eebbab9150edf5d0d00e3648f5ebb599001:

  Merge tag 'artist-cursor-fix-final-pull-request' of https://github.com/hdeller/qemu-hppa into staging (2022-05-18 09:32:15 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220519

for you to fetch changes up to fab8ad39fb75a0d9f097db67b2a334444754e88e:

  target/arm: Use FIELD definitions for CPACR, CPTR_ELx (2022-05-19 18:34:10 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement FEAT_S2FWB
 * Implement FEAT_IDST
 * Drop unsupported_encoding() macro
 * hw/intc/arm_gicv3: Use correct number of priority bits for the CPU
 * Fix aarch64 debug register names
 * hw/adc/zynq-xadc: Use qemu_irq typedef
 * target/arm/helper.c: Delete stray obsolete comment
 * Make number of counters in PMCR follow the CPU
 * hw/arm/virt: Fix dtb nits
 * ptimer: Rename PTIMER_POLICY_DEFAULT to PTIMER_POLICY_LEGACY
 * target/arm: Fix PAuth keys access checks for disabled SEL2
 * Enable FEAT_HCX for -cpu max
 * Use FIELD definitions for CPACR, CPTR_ELx

----------------------------------------------------------------
Chris Howard (1):
      Fix aarch64 debug register names.

Florian Lugou (1):
      target/arm: Fix PAuth keys access checks for disabled SEL2

Peter Maydell (17):
      target/arm: Postpone interpretation of stage 2 descriptor attribute bits
      target/arm: Factor out FWB=0 specific part of combine_cacheattrs()
      target/arm: Implement FEAT_S2FWB
      target/arm: Enable FEAT_S2FWB for -cpu max
      target/arm: Implement FEAT_IDST
      target/arm: Drop unsupported_encoding() macro
      hw/intc/arm_gicv3_cpuif: Handle CPUs that don't specify GICv3 parameters
      hw/intc/arm_gicv3: report correct PRIbits field in ICV_CTLR_EL1
      hw/intc/arm_gicv3_kvm.c: Stop using GIC_MIN_BPR constant
      hw/intc/arm_gicv3: Support configurable number of physical priority bits
      hw/intc/arm_gicv3: Use correct number of priority bits for the CPU
      hw/intc/arm_gicv3: Provide ich_num_aprs()
      target/arm/helper.c: Delete stray obsolete comment
      target/arm: Make number of counters in PMCR follow the CPU
      hw/arm/virt: Fix incorrect non-secure flash dtb node name
      hw/arm/virt: Drop #size-cells and #address-cells from gpio-keys dtb node
      ptimer: Rename PTIMER_POLICY_DEFAULT to PTIMER_POLICY_LEGACY

Philippe Mathieu-Daudé (1):
      hw/adc/zynq-xadc: Use qemu_irq typedef

Richard Henderson (2):
      target/arm: Enable FEAT_HCX for -cpu max
      target/arm: Use FIELD definitions for CPACR, CPTR_ELx

 docs/system/arm/emulation.rst | 2 +
 include/hw/adc/zynq-xadc.h | 3 +-
 include/hw/intc/arm_gicv3_common.h | 8 +-
 include/hw/ptimer.h | 16 +-
 target/arm/cpregs.h | 24 +++
 target/arm/cpu.h | 76 +++++++-
 target/arm/internals.h | 11 +-
 target/arm/translate-a64.h | 9 -
 hw/adc/zynq-xadc.c | 4 +-
 hw/arm/boot.c | 2 +-
 hw/arm/musicpal.c | 2 +-
 hw/arm/virt.c | 4 +-
 hw/core/machine.c | 4 +-
 hw/dma/xilinx_axidma.c | 2 +-
 hw/dma/xlnx_csu_dma.c | 2 +-
 hw/intc/arm_gicv3_common.c | 5 +
 hw/intc/arm_gicv3_cpuif.c | 225 +++++++++++++++++-------
 hw/intc/arm_gicv3_kvm.c | 16 +-
 hw/m68k/mcf5206.c | 2 +-
 hw/m68k/mcf5208.c | 2 +-
 hw/net/can/xlnx-zynqmp-can.c | 2 +-
 hw/net/fsl_etsec/etsec.c | 2 +-
 hw/net/lan9118.c | 2 +-
 hw/rtc/exynos4210_rtc.c | 4 +-
 hw/timer/allwinner-a10-pit.c | 2 +-
 hw/timer/altera_timer.c | 2 +-
 hw/timer/arm_timer.c | 2 +-
 hw/timer/digic-timer.c | 2 +-
 hw/timer/etraxfs_timer.c | 6 +-
 hw/timer/exynos4210_mct.c | 6 +-
 hw/timer/exynos4210_pwm.c | 2 +-
 hw/timer/grlib_gptimer.c | 2 +-
 hw/timer/imx_epit.c | 4 +-
 hw/timer/imx_gpt.c | 2 +-
 hw/timer/mss-timer.c | 2 +-
 hw/timer/sh_timer.c | 2 +-
 hw/timer/slavio_timer.c | 2 +-
 hw/timer/xilinx_timer.c | 2 +-
 target/arm/cpu.c | 11 +-
 target/arm/cpu64.c | 30 ++++
 target/arm/cpu_tcg.c | 6 +
 target/arm/helper.c | 348 ++++++++++++++++++++++++++++---------
 target/arm/kvm64.c | 12 ++
 target/arm/op_helper.c | 9 +
 target/arm/translate-a64.c | 36 +++-
 tests/unit/ptimer-test.c | 6 +-
 46 files changed, 697 insertions(+), 228 deletions(-)

In the original Arm v8 two-stage translation, both stage 1 and stage
2 specify memory attributes (memory type, cacheability,
shareability); these are then combined to produce the overall memory
attributes for the whole stage 1+2 access. In QEMU we implement this
by having get_phys_addr() fill in an ARMCacheAttrs struct, and we
convert both the stage 1 and stage 2 attribute bit formats to the
same encoding (an 8-bit attribute value matching the MAIR_EL1 fields,
plus a 2-bit shareability value).

The new FEAT_S2FWB feature allows the guest to enable a different
interpretation of the attribute bits in the stage 2 descriptors.
These bits can now be used to control details of how the stage 1 and
2 attributes should be combined (for instance they can say "always
use the stage 1 attributes" or "ignore the stage 1 attributes and
always be Device memory"). This means we need to pass the raw bit
information for stage 2 down to the function which combines the stage
1 and stage 2 information.

Add a field to ARMCacheAttrs that indicates whether the attrs field
should be interpreted as MAIR format, or as the raw stage 2 attribute
bits from the descriptor, and store the appropriate values when
filling in cacheattrs.

We only need to interpret the attrs field in a few places:
 * in do_ats_write(), where we know to expect a MAIR value
   (there is no ATS instruction to do a stage-2-only walk)
 * in S1_ptw_translate(), where we want to know whether the
   combined S1 + S2 attributes indicate Device memory that
   should provoke a fault
 * in combine_cacheattrs(), which does the S1 + S2 combining
Update those places accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-2-peter.maydell@linaro.org
---
 target/arm/internals.h | 7 ++++++-
 target/arm/helper.c | 42 ++++++++++++++++++++++++++++++++++++------
 2 files changed, 42 insertions(+), 7 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/internals.h
44
+++ b/target/arm/internals.h
45
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
46
47
/* Cacheability and shareability attributes for a memory access */
48
typedef struct ARMCacheAttrs {
49
- unsigned int attrs:8; /* as in the MAIR register encoding */
50
+ /*
51
+ * If is_s2_format is true, attrs is the S2 descriptor bits [5:2]
52
+ * Otherwise, attrs is the same as the MAIR_EL1 8-bit format
53
+ */
54
+ unsigned int attrs:8;
55
unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
56
+ bool is_s2_format:1;
57
} ARMCacheAttrs;
58
59
bool get_phys_addr(CPUARMState *env, target_ulong address,
60
diff --git a/target/arm/helper.c b/target/arm/helper.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/helper.c
63
+++ b/target/arm/helper.c
64
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
65
ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
66
&prot, &page_size, &fi, &cacheattrs);
67
68
+ /*
69
+ * ATS operations only do S1 or S1+S2 translations, so we never
70
+ * have to deal with the ARMCacheAttrs format for S2 only.
71
+ */
72
+ assert(!cacheattrs.is_s2_format);
73
+
74
if (ret) {
75
/*
76
* Some kinds of translation fault must cause exceptions rather
77
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
78
return true;
79
}
80
81
+static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
82
+{
83
+ /*
84
+ * For an S1 page table walk, the stage 1 attributes are always
85
+ * some form of "this is Normal memory". The combined S1+S2
86
+ * attributes are therefore only Device if stage 2 specifies Device.
87
+ * With HCR_EL2.FWB == 0 this is when descriptor bits [5:4] are 0b00,
88
+ * ie when cacheattrs.attrs bits [3:2] are 0b00.
89
+ */
90
+ assert(cacheattrs.is_s2_format);
91
+ return (cacheattrs.attrs & 0xc) == 0;
92
+}
93
+
94
/* Translate a S1 pagetable walk through S2 if needed. */
95
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
96
hwaddr addr, bool *is_secure,
97
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
98
return ~0;
99
}
100
if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
101
- (cacheattrs.attrs & 0xf0) == 0) {
102
+ ptw_attrs_are_device(env, cacheattrs)) {
103
/*
104
* PTW set and S1 walk touched S2 Device memory:
105
* generate Permission fault.
106
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
107
}
108
109
if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
110
- cacheattrs->attrs = convert_stage2_attrs(env, extract32(attrs, 0, 4));
111
+ cacheattrs->is_s2_format = true;
112
+ cacheattrs->attrs = extract32(attrs, 0, 4);
113
} else {
114
/* Index into MAIR registers for cache attributes */
115
uint8_t attrindx = extract32(attrs, 0, 3);
116
uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
117
assert(attrindx <= 7);
118
+ cacheattrs->is_s2_format = false;
119
cacheattrs->attrs = extract64(mair, attrindx * 8, 8);
120
}
121
122
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
123
/* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
124
* and CombineS1S2Desc()
125
*
126
+ * @env: CPUARMState
127
* @s1: Attributes from stage 1 walk
128
* @s2: Attributes from stage 2 walk
129
*/
130
-static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2)
131
+static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
132
+ ARMCacheAttrs s1, ARMCacheAttrs s2)
133
{
134
uint8_t s1lo, s2lo, s1hi, s2hi;
135
ARMCacheAttrs ret;
136
bool tagged = false;
137
+ uint8_t s2_mair_attrs;
138
+
139
+ assert(s2.is_s2_format && !s1.is_s2_format);
140
+ ret.is_s2_format = false;
141
+
142
+ s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
143
144
if (s1.attrs == 0xf0) {
145
tagged = true;
146
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2)
147
}
148
149
s1lo = extract32(s1.attrs, 0, 4);
150
- s2lo = extract32(s2.attrs, 0, 4);
151
+ s2lo = extract32(s2_mair_attrs, 0, 4);
152
s1hi = extract32(s1.attrs, 4, 4);
153
- s2hi = extract32(s2.attrs, 4, 4);
154
+ s2hi = extract32(s2_mair_attrs, 4, 4);
155
156
/* Combine shareability attributes (table D4-43) */
157
if (s1.shareability == 2 || s2.shareability == 2) {
158
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
159
}
160
cacheattrs->shareability = 0;
161
}
162
- *cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
163
+ *cacheattrs = combine_cacheattrs(env, *cacheattrs, cacheattrs2);
164
165
/* Check if IPA translates to secure or non-secure PA space. */
166
if (arm_is_secure_below_el3(env)) {
167
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
168
/* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
169
hcr = arm_hcr_el2_eff(env);
170
cacheattrs->shareability = 0;
171
+ cacheattrs->is_s2_format = false;
172
if (hcr & HCR_DC) {
173
if (hcr & HCR_DCT) {
174
memattr = 0xf0; /* Tagged, Normal, WB, RWA */
175
--
176
2.25.1
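
A minimal standalone sketch of the representation change described above,
using invented names (CacheAttrs, s2_attrs_are_device) rather than the QEMU
ones; only the bit layout and the FWB == 0 Device test mirror the patch:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* attrs is MAIR-format unless is_s2_format is set, as in the patch. */
typedef struct CacheAttrs {
    unsigned int attrs:8;
    unsigned int shareability:2;
    bool is_s2_format:1;
} CacheAttrs;

/*
 * FWB == 0 rule: stage 2 descriptor bits [5:4] == 0b00 means Device,
 * i.e. bits [3:2] of the stored 4-bit stage 2 attrs are zero.
 */
static bool s2_attrs_are_device(CacheAttrs c)
{
    assert(c.is_s2_format);
    return (c.attrs & 0xc) == 0;
}

int main(void)
{
    CacheAttrs dev = { .attrs = 0x0, .is_s2_format = true }; /* Device-nGnRnE */
    CacheAttrs wb  = { .attrs = 0xf, .is_s2_format = true }; /* Normal WB */

    printf("dev: %d\n", s2_attrs_are_device(dev)); /* prints 1 */
    printf("wb:  %d\n", s2_attrs_are_device(wb));  /* prints 0 */
    return 0;
}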
Factor out the part of combine_cacheattrs() that is specific to
handling HCR_EL2.FWB == 0. This is the part where we combine the
memory type and cacheability attributes.

The "force Outer Shareable for Device or Normal Inner-NC Outer-NC"
logic remains in combine_cacheattrs() because it holds regardless
(this is the equivalent of the pseudocode EffectiveShareability()
function).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 88 +++++++++++++++++++++++++--------------------
 1 file changed, 50 insertions(+), 38 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
20
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
22
}
23
}
24
25
+/*
26
+ * Combine the memory type and cacheability attributes of
27
+ * s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
28
+ * combined attributes in MAIR_EL1 format.
29
+ */
30
+static uint8_t combined_attrs_nofwb(CPUARMState *env,
31
+ ARMCacheAttrs s1, ARMCacheAttrs s2)
32
+{
33
+ uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
34
+
35
+ s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
36
+
37
+ s1lo = extract32(s1.attrs, 0, 4);
38
+ s2lo = extract32(s2_mair_attrs, 0, 4);
39
+ s1hi = extract32(s1.attrs, 4, 4);
40
+ s2hi = extract32(s2_mair_attrs, 4, 4);
41
+
42
+ /* Combine memory type and cacheability attributes */
43
+ if (s1hi == 0 || s2hi == 0) {
44
+ /* Device has precedence over normal */
45
+ if (s1lo == 0 || s2lo == 0) {
46
+ /* nGnRnE has precedence over anything */
47
+ ret_attrs = 0;
48
+ } else if (s1lo == 4 || s2lo == 4) {
49
+ /* non-Reordering has precedence over Reordering */
50
+ ret_attrs = 4; /* nGnRE */
51
+ } else if (s1lo == 8 || s2lo == 8) {
52
+ /* non-Gathering has precedence over Gathering */
53
+ ret_attrs = 8; /* nGRE */
54
+ } else {
55
+ ret_attrs = 0xc; /* GRE */
56
+ }
57
+ } else { /* Normal memory */
58
+ /* Outer/inner cacheability combine independently */
59
+ ret_attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4
60
+ | combine_cacheattr_nibble(s1lo, s2lo);
61
+ }
62
+ return ret_attrs;
63
+}
64
+
65
/* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
66
* and CombineS1S2Desc()
67
*
68
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
69
static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
70
ARMCacheAttrs s1, ARMCacheAttrs s2)
71
{
72
- uint8_t s1lo, s2lo, s1hi, s2hi;
73
ARMCacheAttrs ret;
74
bool tagged = false;
75
- uint8_t s2_mair_attrs;
76
77
assert(s2.is_s2_format && !s1.is_s2_format);
78
ret.is_s2_format = false;
79
80
- s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
81
-
82
if (s1.attrs == 0xf0) {
83
tagged = true;
84
s1.attrs = 0xff;
85
}
86
87
- s1lo = extract32(s1.attrs, 0, 4);
88
- s2lo = extract32(s2_mair_attrs, 0, 4);
89
- s1hi = extract32(s1.attrs, 4, 4);
90
- s2hi = extract32(s2_mair_attrs, 4, 4);
91
-
92
/* Combine shareability attributes (table D4-43) */
93
if (s1.shareability == 2 || s2.shareability == 2) {
94
/* if either are outer-shareable, the result is outer-shareable */
95
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
96
}
97
98
/* Combine memory type and cacheability attributes */
99
- if (s1hi == 0 || s2hi == 0) {
100
- /* Device has precedence over normal */
101
- if (s1lo == 0 || s2lo == 0) {
102
- /* nGnRnE has precedence over anything */
103
- ret.attrs = 0;
104
- } else if (s1lo == 4 || s2lo == 4) {
105
- /* non-Reordering has precedence over Reordering */
106
- ret.attrs = 4; /* nGnRE */
107
- } else if (s1lo == 8 || s2lo == 8) {
108
- /* non-Gathering has precedence over Gathering */
109
- ret.attrs = 8; /* nGRE */
110
- } else {
111
- ret.attrs = 0xc; /* GRE */
112
- }
113
+ ret.attrs = combined_attrs_nofwb(env, s1, s2);
114
115
- /* Any location for which the resultant memory type is any
116
- * type of Device memory is always treated as Outer Shareable.
117
- */
118
+ /*
119
+ * Any location for which the resultant memory type is any
120
+ * type of Device memory is always treated as Outer Shareable.
121
+ * Any location for which the resultant memory type is Normal
122
+ * Inner Non-cacheable, Outer Non-cacheable is always treated
123
+ * as Outer Shareable.
124
+ * TODO: FEAT_XS adds another value (0x40) also meaning iNCoNC
125
+ */
126
+ if ((ret.attrs & 0xf0) == 0 || ret.attrs == 0x44) {
127
ret.shareability = 2;
128
- } else { /* Normal memory */
129
- /* Outer/inner cacheability combine independently */
130
- ret.attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4
131
- | combine_cacheattr_nibble(s1lo, s2lo);
132
-
133
- if (ret.attrs == 0x44) {
134
- /* Any location for which the resultant memory type is Normal
135
- * Inner Non-cacheable, Outer Non-cacheable is always treated
136
- * as Outer Shareable.
137
- */
138
- ret.shareability = 2;
139
- }
140
}
141
142
/* TODO: CombineS1S2Desc does not consider transient, only WB, RWA. */
143
--
144
2.25.1
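
The Device-precedence part of the FWB == 0 combining rule can be exercised on
its own; the program below is illustrative only (the QEMU function also
handles the Normal-memory path via combine_cacheattr_nibble(), which is out
of scope here):

#include <stdint.h>
#include <stdio.h>

/*
 * Combine the low (Device) nibbles of two MAIR-format attribute bytes,
 * assuming at least one of them is Device (high nibble zero):
 * nGnRnE beats everything, then nGnRE, then nGRE, otherwise GRE.
 */
static uint8_t combine_device_attrs(uint8_t s1, uint8_t s2)
{
    uint8_t s1lo = s1 & 0xf, s2lo = s2 & 0xf;

    if (s1lo == 0 || s2lo == 0) {
        return 0x0;   /* nGnRnE has precedence over anything */
    } else if (s1lo == 4 || s2lo == 4) {
        return 0x4;   /* non-Reordering has precedence over Reordering */
    } else if (s1lo == 8 || s2lo == 8) {
        return 0x8;   /* non-Gathering has precedence over Gathering */
    }
    return 0xc;       /* GRE */
}

int main(void)
{
    /* S1 Device-GRE (0x0c) combined with S2 Device-nGnRE (0x04) -> nGnRE */
    printf("0x%02x\n", (unsigned)combine_device_attrs(0x0c, 0x04));
    return 0;
}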
Implement the handling of FEAT_S2FWB; the meat of this is in the new
combined_attrs_fwb() function which combines S1 and S2 attributes
when HCR_EL2.FWB is set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-4-peter.maydell@linaro.org
---
 target/arm/cpu.h | 5 +++
 target/arm/helper.c | 84 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 86 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
16
+++ b/target/arm/cpu.h
17
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_st(const ARMISARegisters *id)
18
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, ST) != 0;
19
}
20
21
+static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
22
+{
23
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
24
+}
25
+
26
static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
27
{
28
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
29
diff --git a/target/arm/helper.c b/target/arm/helper.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/helper.c
32
+++ b/target/arm/helper.c
33
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
34
if (cpu_isar_feature(aa64_scxtnum, cpu)) {
35
valid_mask |= HCR_ENSCXT;
36
}
37
+ if (cpu_isar_feature(aa64_fwb, cpu)) {
38
+ valid_mask |= HCR_FWB;
39
+ }
40
}
41
42
/* Clear RES0 bits. */
43
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
44
* HCR_PTW forbids certain page-table setups
45
* HCR_DC disables stage1 and enables stage2 translation
46
* HCR_DCT enables tagging on (disabled) stage1 translation
47
+ * HCR_FWB changes the interpretation of stage2 descriptor bits
48
*/
49
- if ((env->cp15.hcr_el2 ^ value) & (HCR_VM | HCR_PTW | HCR_DC | HCR_DCT)) {
50
+ if ((env->cp15.hcr_el2 ^ value) &
51
+ (HCR_VM | HCR_PTW | HCR_DC | HCR_DCT | HCR_FWB)) {
52
tlb_flush(CPU(cpu));
53
}
54
env->cp15.hcr_el2 = value;
55
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
56
* attributes are therefore only Device if stage 2 specifies Device.
57
* With HCR_EL2.FWB == 0 this is when descriptor bits [5:4] are 0b00,
58
* ie when cacheattrs.attrs bits [3:2] are 0b00.
59
+ * With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
60
+ * when cacheattrs.attrs bit [2] is 0.
61
*/
62
assert(cacheattrs.is_s2_format);
63
- return (cacheattrs.attrs & 0xc) == 0;
64
+ if (arm_hcr_el2_eff(env) & HCR_FWB) {
65
+ return (cacheattrs.attrs & 0x4) == 0;
66
+ } else {
67
+ return (cacheattrs.attrs & 0xc) == 0;
68
+ }
69
}
70
71
/* Translate a S1 pagetable walk through S2 if needed. */
72
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_nofwb(CPUARMState *env,
73
return ret_attrs;
74
}
75
76
+static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
77
+{
78
+ /*
79
+ * Given the 4 bits specifying the outer or inner cacheability
80
+ * in MAIR format, return a value specifying Normal Write-Back,
81
+ * with the allocation and transient hints taken from the input
82
+ * if the input specified some kind of cacheable attribute.
83
+ */
84
+ if (attr == 0 || attr == 4) {
85
+ /*
86
+ * 0 == an UNPREDICTABLE encoding
87
+ * 4 == Non-cacheable
88
+ * Either way, force Write-Back RW allocate non-transient
89
+ */
90
+ return 0xf;
91
+ }
92
+ /* Change WriteThrough to WriteBack, keep allocation and transient hints */
93
+ return attr | 4;
94
+}
95
+
96
+/*
97
+ * Combine the memory type and cacheability attributes of
98
+ * s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
99
+ * combined attributes in MAIR_EL1 format.
100
+ */
101
+static uint8_t combined_attrs_fwb(CPUARMState *env,
102
+ ARMCacheAttrs s1, ARMCacheAttrs s2)
103
+{
104
+ switch (s2.attrs) {
105
+ case 7:
106
+ /* Use stage 1 attributes */
107
+ return s1.attrs;
108
+ case 6:
109
+ /*
110
+ * Force Normal Write-Back. Note that if S1 is Normal cacheable
111
+ * then we take the allocation hints from it; otherwise it is
112
+ * RW allocate, non-transient.
113
+ */
114
+ if ((s1.attrs & 0xf0) == 0) {
115
+ /* S1 is Device */
116
+ return 0xff;
117
+ }
118
+ /* Need to check the Inner and Outer nibbles separately */
119
+ return force_cacheattr_nibble_wb(s1.attrs & 0xf) |
120
+ force_cacheattr_nibble_wb(s1.attrs >> 4) << 4;
121
+ case 5:
122
+ /* If S1 attrs are Device, use them; otherwise Normal Non-cacheable */
123
+ if ((s1.attrs & 0xf0) == 0) {
124
+ return s1.attrs;
125
+ }
126
+ return 0x44;
127
+ case 0 ... 3:
128
+ /* Force Device, of subtype specified by S2 */
129
+ return s2.attrs << 2;
130
+ default:
131
+ /*
132
+ * RESERVED values (including RES0 descriptor bit [5] being nonzero);
133
+ * arbitrarily force Device.
134
+ */
135
+ return 0;
136
+ }
137
+}
138
+
139
/* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
140
* and CombineS1S2Desc()
141
*
142
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
143
}
144
145
/* Combine memory type and cacheability attributes */
146
- ret.attrs = combined_attrs_nofwb(env, s1, s2);
147
+ if (arm_hcr_el2_eff(env) & HCR_FWB) {
148
+ ret.attrs = combined_attrs_fwb(env, s1, s2);
149
+ } else {
150
+ ret.attrs = combined_attrs_nofwb(env, s1, s2);
151
+ }
152
153
/*
154
* Any location for which the resultant memory type is any
155
--
156
2.25.1
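
The FWB == 1 mapping can be tried out standalone as well; this sketch lifts
the switch logic from the patch into a small test program with invented
helper names (combine_fwb, force_nibble_wb). Here s2attrs is the raw stage 2
descriptor attribute field (bits [5:2]), and the case-range syntax is the GNU
extension the patch itself uses:

#include <stdint.h>
#include <stdio.h>

/* Force one MAIR cacheability nibble to Write-Back, keeping hints. */
static uint8_t force_nibble_wb(uint8_t attr)
{
    if (attr == 0 || attr == 4) {
        return 0xf;   /* UNPREDICTABLE or Non-cacheable: WB, RW allocate */
    }
    return attr | 4;  /* WriteThrough -> WriteBack, keep hints */
}

static uint8_t combine_fwb(uint8_t s1, uint8_t s2attrs)
{
    switch (s2attrs) {
    case 7:
        return s1;                            /* use stage 1 attributes */
    case 6:
        if ((s1 & 0xf0) == 0) {
            return 0xff;                      /* S1 Device: Normal WB RWA */
        }
        return force_nibble_wb(s1 & 0xf) | (force_nibble_wb(s1 >> 4) << 4);
    case 5:
        return (s1 & 0xf0) == 0 ? s1 : 0x44;  /* keep S1 Device, else NC */
    case 0 ... 3:
        return s2attrs << 2;                  /* force Device subtype */
    default:
        return 0;                             /* reserved: force Device */
    }
}

int main(void)
{
    printf("0x%02x\n", (unsigned)combine_fwb(0xff, 6)); /* WB stays WB: 0xff */
    printf("0x%02x\n", (unsigned)combine_fwb(0x44, 6)); /* NC forced to WB: 0xff */
    printf("0x%02x\n", (unsigned)combine_fwb(0xff, 5)); /* Normal -> NC: 0x44 */
    return 0;
}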
Enable FEAT_S2FWB for -cpu max. Since FEAT_S2FWB requires that
CLIDR_EL1.{LoUU,LoUIS} are zero, we explicitly squash these (the
inherited CLIDR_EL1 value from the Cortex-A57 has them as 1).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-5-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
14
index XXXXXXX..XXXXXXX 100644
15
--- a/docs/system/arm/emulation.rst
16
+++ b/docs/system/arm/emulation.rst
17
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
18
- FEAT_RAS (Reliability, availability, and serviceability)
19
- FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
20
- FEAT_RNG (Random number generator)
21
+- FEAT_S2FWB (Stage 2 forced Write-Back)
22
- FEAT_SB (Speculation Barrier)
23
- FEAT_SEL2 (Secure EL2)
24
- FEAT_SHA1 (SHA1 instructions)
25
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/cpu64.c
28
+++ b/target/arm/cpu64.c
29
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
30
{
31
ARMCPU *cpu = ARM_CPU(obj);
32
uint64_t t;
33
+ uint32_t u;
34
35
if (kvm_enabled() || hvf_enabled()) {
36
/* With KVM or HVF, '-cpu max' is identical to '-cpu host' */
37
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
38
t = FIELD_DP64(t, MIDR_EL1, REVISION, 0);
39
cpu->midr = t;
40
41
+ /*
42
+ * We're going to set FEAT_S2FWB, which mandates that CLIDR_EL1.{LoUU,LoUIS}
43
+ * are zero.
44
+ */
45
+ u = cpu->clidr;
46
+ u = FIELD_DP32(u, CLIDR_EL1, LOUIS, 0);
47
+ u = FIELD_DP32(u, CLIDR_EL1, LOUU, 0);
48
+ cpu->clidr = u;
49
+
50
t = cpu->isar.id_aa64isar0;
51
t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
52
t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
53
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
54
t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1); /* FEAT_IESB */
55
t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
56
t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
57
+ t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1); /* FEAT_S2FWB */
58
t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
59
t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
60
cpu->isar.id_aa64mmfr2 = t;
61
--
62
2.25.1
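
FIELD_DP32() in the patch deposits a value into a named register field; the
same bit manipulation written out with plain shifts and masks, using an
example CLIDR_EL1-style value (architecturally, LoUIS is bits [23:21] and
LoUU is bits [29:27]). This is an illustration, not the QEMU macro:

#include <stdint.h>
#include <stdio.h>

/* Deposit 'val' into 'len' bits of 'reg' starting at bit 'start'. */
static uint32_t deposit32(uint32_t reg, int start, int len, uint32_t val)
{
    uint32_t mask = ((1u << len) - 1) << start;
    return (reg & ~mask) | ((val << start) & mask);
}

int main(void)
{
    uint32_t clidr = 0x0a200023;         /* example value with LoUU = LoUIS = 1 */

    clidr = deposit32(clidr, 21, 3, 0);  /* LoUIS = 0 */
    clidr = deposit32(clidr, 27, 3, 0);  /* LoUU = 0 */
    printf("0x%08x\n", clidr);           /* prints 0x02000023 */
    return 0;
}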
The Armv8.4 feature FEAT_IDST specifies that exceptions generated by
read accesses to the feature ID space should report a syndrome code
of 0x18 (EC_SYSTEMREGISTERTRAP) rather than 0x00 (EC_UNCATEGORIZED).
The feature ID space is defined to be:
  op0 == 3, op1 == {0,1,3}, CRn == 0, CRm == {0-7}, op2 == {0-7}

In our implementation we might return the EC_UNCATEGORIZED syndrome
value for a system register access in four cases:
 * no reginfo struct in the hashtable
 * cp_access_ok() fails (ie ri->access doesn't permit the access)
 * ri->accessfn returns CP_ACCESS_TRAP_UNCATEGORIZED at runtime
 * ri->type includes ARM_CP_RAISES_EXC, and the readfn raises
   an UNDEF exception at runtime

We have very few regdefs that set ARM_CP_RAISES_EXC, and none of
them are in the feature ID space. (In the unlikely event that any
are added in future they would need to take care of setting the
correct syndrome themselves.) This patch deals with the other
three cases, and enables FEAT_IDST for AArch64 -cpu max.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509155457.3560724-1-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpregs.h | 24 ++++++++++++++++++++++++
 target/arm/cpu.h | 5 +++++
 target/arm/cpu64.c | 1 +
 target/arm/op_helper.c | 9 +++++++++
 target/arm/translate-a64.c | 28 ++++++++++++++++++++++++--
 6 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
34
index XXXXXXX..XXXXXXX 100644
35
--- a/docs/system/arm/emulation.rst
36
+++ b/docs/system/arm/emulation.rst
37
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
38
- FEAT_FlagM2 (Enhancements to flag manipulation instructions)
39
- FEAT_HPDS (Hierarchical permission disables)
40
- FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
41
+- FEAT_IDST (ID space trap handling)
42
- FEAT_IESB (Implicit error synchronization event)
43
- FEAT_JSCVT (JavaScript conversion instructions)
44
- FEAT_LOR (Limited ordering regions)
45
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
46
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/cpregs.h
48
+++ b/target/arm/cpregs.h
49
@@ -XXX,XX +XXX,XX @@ static inline bool cp_access_ok(int current_el,
50
/* Raw read of a coprocessor register (as needed for migration, etc) */
51
uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
52
53
+/*
54
+ * Return true if the cp register encoding is in the "feature ID space" as
55
+ * defined by FEAT_IDST (and thus should be reported with ER_ELx.EC
56
+ * as EC_SYSTEMREGISTERTRAP rather than EC_UNCATEGORIZED).
57
+ */
58
+static inline bool arm_cpreg_encoding_in_idspace(uint8_t opc0, uint8_t opc1,
59
+ uint8_t opc2,
60
+ uint8_t crn, uint8_t crm)
61
+{
62
+ return opc0 == 3 && (opc1 == 0 || opc1 == 1 || opc1 == 3) &&
63
+ crn == 0 && crm < 8;
64
+}
65
+
66
+/*
67
+ * As arm_cpreg_encoding_in_idspace(), but take the encoding from an
68
+ * ARMCPRegInfo.
69
+ */
70
+static inline bool arm_cpreg_in_idspace(const ARMCPRegInfo *ri)
71
+{
72
+ return ri->state == ARM_CP_STATE_AA64 &&
73
+ arm_cpreg_encoding_in_idspace(ri->opc0, ri->opc1, ri->opc2,
74
+ ri->crn, ri->crm);
75
+}
76
+
77
#endif /* TARGET_ARM_CPREGS_H */
78
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
79
index XXXXXXX..XXXXXXX 100644
80
--- a/target/arm/cpu.h
81
+++ b/target/arm/cpu.h
82
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
83
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
84
}
85
86
+static inline bool isar_feature_aa64_ids(const ARMISARegisters *id)
87
+{
88
+ return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, IDS) != 0;
89
+}
90
+
91
static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
92
{
93
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
94
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
95
index XXXXXXX..XXXXXXX 100644
96
--- a/target/arm/cpu64.c
97
+++ b/target/arm/cpu64.c
98
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
99
t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1); /* FEAT_IESB */
100
t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
101
t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
102
+ t = FIELD_DP64(t, ID_AA64MMFR2, IDS, 1); /* FEAT_IDST */
103
t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1); /* FEAT_S2FWB */
104
t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
105
t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
106
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
107
index XXXXXXX..XXXXXXX 100644
108
--- a/target/arm/op_helper.c
109
+++ b/target/arm/op_helper.c
110
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mrs_banked)(CPUARMState *env, uint32_t tgtmode, uint32_t regno)
111
void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
112
uint32_t isread)
113
{
114
+ ARMCPU *cpu = env_archcpu(env);
115
const ARMCPRegInfo *ri = rip;
116
CPAccessResult res = CP_ACCESS_OK;
117
int target_el;
118
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
119
case CP_ACCESS_TRAP:
120
break;
121
case CP_ACCESS_TRAP_UNCATEGORIZED:
122
+ if (cpu_isar_feature(aa64_ids, cpu) && isread &&
123
+ arm_cpreg_in_idspace(ri)) {
124
+ /*
125
+ * FEAT_IDST says this should be reported as EC_SYSTEMREGISTERTRAP,
126
+ * not EC_UNCATEGORIZED
127
+ */
128
+ break;
129
+ }
130
syndrome = syn_uncategorized();
131
break;
132
default:
133
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/target/arm/translate-a64.c
136
+++ b/target/arm/translate-a64.c
137
@@ -XXX,XX +XXX,XX @@ static void gen_set_nzcv(TCGv_i64 tcg_rt)
138
tcg_temp_free_i32(nzcv);
139
}
140
141
+static void gen_sysreg_undef(DisasContext *s, bool isread,
142
+ uint8_t op0, uint8_t op1, uint8_t op2,
143
+ uint8_t crn, uint8_t crm, uint8_t rt)
144
+{
145
+ /*
146
+ * Generate code to emit an UNDEF with correct syndrome
147
+ * information for a failed system register access.
148
+ * This is EC_UNCATEGORIZED (ie a standard UNDEF) in most cases,
149
+ * but if FEAT_IDST is implemented then read accesses to registers
150
+ * in the feature ID space are reported with the EC_SYSTEMREGISTERTRAP
151
+ * syndrome.
152
+ */
153
+ uint32_t syndrome;
154
+
155
+ if (isread && dc_isar_feature(aa64_ids, s) &&
156
+ arm_cpreg_encoding_in_idspace(op0, op1, op2, crn, crm)) {
157
+ syndrome = syn_aa64_sysregtrap(op0, op1, op2, crn, crm, rt, isread);
158
+ } else {
159
+ syndrome = syn_uncategorized();
160
+ }
161
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syndrome,
162
+ default_exception_el(s));
163
+}
164
+
165
/* MRS - move from system register
166
* MSR (register) - move to system register
167
* SYS
168
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
169
qemu_log_mask(LOG_UNIMP, "%s access to unsupported AArch64 "
170
"system register op0:%d op1:%d crn:%d crm:%d op2:%d\n",
171
isread ? "read" : "write", op0, op1, crn, crm, op2);
172
- unallocated_encoding(s);
173
+ gen_sysreg_undef(s, isread, op0, op1, op2, crn, crm, rt);
174
return;
175
}
176
177
/* Check access permissions */
178
if (!cp_access_ok(s->current_el, ri, isread)) {
179
- unallocated_encoding(s);
180
+ gen_sysreg_undef(s, isread, op0, op1, op2, crn, crm, rt);
181
return;
182
}
183
184
--
185
2.25.1
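
The feature-ID-space test quoted above is easy to check in isolation; a
sketch with an invented function name, where the encodings used in main()
are the architectural ones for ID_AA64MMFR2_EL1 and SCTLR_EL1:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* op0 == 3, op1 in {0, 1, 3}, CRn == 0, CRm in {0..7}; op2 is don't-care. */
static bool encoding_in_idspace(uint8_t op0, uint8_t op1, uint8_t op2,
                                uint8_t crn, uint8_t crm)
{
    (void)op2;
    return op0 == 3 && (op1 == 0 || op1 == 1 || op1 == 3) &&
           crn == 0 && crm < 8;
}

int main(void)
{
    /* ID_AA64MMFR2_EL1: op0=3 op1=0 CRn=0 CRm=7 op2=2 -> in the ID space */
    printf("%d\n", encoding_in_idspace(3, 0, 2, 0, 7));
    /* SCTLR_EL1: op0=3 op1=0 CRn=1 CRm=0 op2=0 -> not in the ID space */
    printf("%d\n", encoding_in_idspace(3, 0, 0, 1, 0));
    return 0;
}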
The unsupported_encoding() macro logs a LOG_UNIMP message and then
generates code to raise the usual exception for an unallocated
encoding. Back when we were still implementing the A64 decoder this
was helpful for flagging up when guest code was using something we
hadn't yet implemented. Now that we completely cover the A64
instruction set it is barely used. The only remaining uses are for
five instructions whose semantics are "UNDEF, unless being run under
external halting debug":
 * HLT (when not being used for semihosting)
 * DCPS1, DCPS2, DCPS3
 * DRPS

QEMU doesn't implement external halting debug, so for us the UNDEF is
the architecturally correct behaviour (because it's not possible to
execute these instructions with halting debug enabled). The
LOG_UNIMP doesn't serve a useful purpose; replace these uses of
unsupported_encoding() with unallocated_encoding(), and delete the
macro.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509160443.3561604-1-peter.maydell@linaro.org
---
 target/arm/translate-a64.h | 9 ---------
 target/arm/translate-a64.c | 8 ++++----
 2 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/translate-a64.h
32
+++ b/target/arm/translate-a64.h
33
@@ -XXX,XX +XXX,XX @@
34
#ifndef TARGET_ARM_TRANSLATE_A64_H
35
#define TARGET_ARM_TRANSLATE_A64_H
36
37
-#define unsupported_encoding(s, insn) \
38
- do { \
39
- qemu_log_mask(LOG_UNIMP, \
40
- "%s:%d: unsupported instruction encoding 0x%08x " \
41
- "at pc=%016" PRIx64 "\n", \
42
- __FILE__, __LINE__, insn, s->pc_curr); \
43
- unallocated_encoding(s); \
44
- } while (0)
45
-
46
TCGv_i64 new_tmp_a64(DisasContext *s);
47
TCGv_i64 new_tmp_a64_local(DisasContext *s);
48
TCGv_i64 new_tmp_a64_zero(DisasContext *s);
49
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/translate-a64.c
52
+++ b/target/arm/translate-a64.c
53
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
54
* with our 32-bit semihosting).
55
*/
56
if (s->current_el == 0) {
57
- unsupported_encoding(s, insn);
58
+ unallocated_encoding(s);
59
break;
60
}
61
#endif
62
gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
63
} else {
64
- unsupported_encoding(s, insn);
65
+ unallocated_encoding(s);
66
}
67
break;
68
case 5:
69
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
70
break;
71
}
72
/* DCPS1, DCPS2, DCPS3 */
73
- unsupported_encoding(s, insn);
74
+ unallocated_encoding(s);
75
break;
76
default:
77
unallocated_encoding(s);
78
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
79
if (op3 != 0 || op4 != 0 || rn != 0x1f) {
80
goto do_unallocated;
81
} else {
82
- unsupported_encoding(s, insn);
83
+ unallocated_encoding(s);
84
}
85
return;
86
87
--
88
2.25.1
We allow a GICv3 to be connected to any CPU, but we don't do anything
to handle the case where the CPU type doesn't in hardware have a
GICv3 CPU interface and so the various GIC configuration fields
(gic_num_lrs, vprebits, vpribits) are not specified.

The current behaviour is that we will add the EL1 CPU interface
registers, but will not put in the EL2 CPU interface registers, even
if the CPU has EL2, which will leave the GIC in a broken state and
probably result in the guest crashing as it tries to set it up. This
only affects the virt board when using the cortex-a15 or cortex-a7
CPU types (both 32-bit) with -machine gic-version=3 (or 'max')
and -machine virtualization=on.

Instead of failing to set up the EL2 registers, if the CPU doesn't
define the GIC configuration set it to a reasonable default, matching
the standard configuration for most Arm CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-2-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_cpuif.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/hw/intc/arm_gicv3_cpuif.c
28
+++ b/hw/intc/arm_gicv3_cpuif.c
29
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
30
ARMCPU *cpu = ARM_CPU(qemu_get_cpu(i));
31
GICv3CPUState *cs = &s->cpu[i];
32
33
+ /*
34
+ * If the CPU doesn't define a GICv3 configuration, probably because
35
+ * in real hardware it doesn't have one, then we use default values
36
+ * matching the one used by most Arm CPUs. This applies to:
37
+ * cpu->gic_num_lrs
38
+ * cpu->gic_vpribits
39
+ * cpu->gic_vprebits
40
+ */
41
+
42
/* Note that we can't just use the GICv3CPUState as an opaque pointer
43
* in define_arm_cp_regs_with_opaque(), because when we're called back
44
* it might be with code translated by CPU 0 but run by CPU 1, in
45
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
46
* get back to the GICv3CPUState from the CPUARMState.
47
*/
48
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
49
- if (arm_feature(&cpu->env, ARM_FEATURE_EL2)
50
- && cpu->gic_num_lrs) {
51
+ if (arm_feature(&cpu->env, ARM_FEATURE_EL2)) {
52
int j;
53
54
- cs->num_list_regs = cpu->gic_num_lrs;
55
- cs->vpribits = cpu->gic_vpribits;
56
- cs->vprebits = cpu->gic_vprebits;
57
+ cs->num_list_regs = cpu->gic_num_lrs ?: 4;
58
+ cs->vpribits = cpu->gic_vpribits ?: 5;
59
+ cs->vprebits = cpu->gic_vprebits ?: 5;
60
61
/* Check against architectural constraints: getting these
62
* wrong would be a bug in the CPU code defining these,
63
--
64
2.25.1
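
The defaulting in the patch relies on GNU C's binary `?:` operator, which
evaluates to the left operand unless it is zero; a compact illustration with
an invented struct and values (requires GCC or Clang):

#include <stdio.h>

struct cpu_cfg {
    int gic_num_lrs;
    int gic_vpribits;
    int gic_vprebits;
};

int main(void)
{
    struct cpu_cfg cfg = { 0, 0, 0 };  /* CPU gives no GICv3 configuration */

    int num_list_regs = cfg.gic_num_lrs ?: 4;
    int vpribits      = cfg.gic_vpribits ?: 5;
    int vprebits      = cfg.gic_vprebits ?: 5;

    printf("%d %d %d\n", num_list_regs, vpribits, vprebits); /* 4 5 5 */
    return 0;
}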
As noted in the comment, the PRIbits field in ICV_CTLR_EL1 is
supposed to match the ICH_VTR_EL2 PRIbits setting; that is, it is the
virtual priority bit setting, not the physical priority bit setting.
(For QEMU currently we always implement 8 bits of physical priority,
so the PRIbits field was previously 7, since it is defined to be
"priority bits - 1".)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-3-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-2-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_cpuif.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/arm_gicv3_cpuif.c
19
+++ b/hw/intc/arm_gicv3_cpuif.c
20
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
21
* should match the ones reported in ich_vtr_read().
22
*/
23
value = ICC_CTLR_EL1_A3V | (1 << ICC_CTLR_EL1_IDBITS_SHIFT) |
24
- (7 << ICC_CTLR_EL1_PRIBITS_SHIFT);
25
+ ((cs->vpribits - 1) << ICC_CTLR_EL1_PRIBITS_SHIFT);
26
27
if (cs->ich_vmcr_el2 & ICH_VMCR_EL2_VEOIM) {
28
value |= ICC_CTLR_EL1_EOIMODE;
29
--
30
2.25.1
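
Because PRIbits encodes "priority bits - 1", encoding and decoding the field
looks like this; an illustrative program (the shift value is an assumption
about the field position, bits [10:8], not taken from the patch), not QEMU
code:

#include <stdint.h>
#include <stdio.h>

#define PRIBITS_SHIFT 8            /* assume PRIbits lives at bits [10:8] */

int main(void)
{
    int vpribits = 5;              /* e.g. 5 bits of virtual priority */
    uint32_t ctlr = (uint32_t)(vpribits - 1) << PRIBITS_SHIFT;

    int field = (int)((ctlr >> PRIBITS_SHIFT) & 0x7);
    printf("PRIbits field = %d, priority bits = %d\n", field, field + 1);
    return 0;
}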
The GIC_MIN_BPR constant defines the minimum BPR value that the TCG
emulated GICv3 supports. We're currently using this also as the
value we reset the KVM GICv3 ICC_BPR registers to, but this is only
right by accident.

We want to make the emulated GICv3 use a configurable number of
priority bits, which means that GIC_MIN_BPR will no longer be a
constant. Replace the uses in the KVM reset code with literal 0,
plus a comment explaining why this is reasonable.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-4-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-3-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_kvm.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/intc/arm_gicv3_kvm.c
22
+++ b/hw/intc/arm_gicv3_kvm.c
23
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
24
s = c->gic;
25
26
c->icc_pmr_el1 = 0;
27
- c->icc_bpr[GICV3_G0] = GIC_MIN_BPR;
28
- c->icc_bpr[GICV3_G1] = GIC_MIN_BPR;
29
- c->icc_bpr[GICV3_G1NS] = GIC_MIN_BPR;
30
+ /*
31
+ * Architecturally the reset value of the ICC_BPR registers
32
+ * is UNKNOWN. We set them all to 0 here; when the kernel
33
+ * uses these values to program the ICH_VMCR_EL2 fields that
34
+ * determine the guest-visible ICC_BPR register values, the
35
+ * hardware's "writing a value less than the minimum sets
36
+ * the field to the minimum value" behaviour will result in
37
+ * them effectively resetting to the correct minimum value
38
+ * for the host GIC.
39
+ */
40
+ c->icc_bpr[GICV3_G0] = 0;
41
+ c->icc_bpr[GICV3_G1] = 0;
42
+ c->icc_bpr[GICV3_G1NS] = 0;
43
44
c->icc_sre_el1 = 0x7;
45
memset(c->icc_apr, 0, sizeof(c->icc_apr));
46
--
47
2.25.1
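
The reasoning in the new comment, namely that a write below the minimum BPR
is clamped up to the minimum so resetting the registers to 0 always lands on
the implementation's minimum, can be illustrated with a toy model (not the
KVM or GIC code):

#include <stdio.h>

static int bpr_write(int value, int min_bpr)
{
    return value < min_bpr ? min_bpr : value;  /* hardware clamps low writes */
}

int main(void)
{
    int min_bpr_ns = 1;                        /* e.g. Non-secure minimum of 1 */

    printf("%d\n", bpr_write(0, min_bpr_ns));  /* 0 is clamped to 1 */
    printf("%d\n", bpr_write(3, min_bpr_ns));  /* 3 is stored as written */
    return 0;
}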
The GICv3 code has always supported a configurable number of virtual
priority and preemption bits, but our implementation currently
hardcodes the number of physical priority bits at 8. This is not
what most hardware implementations provide; for instance the
Cortex-A53 provides only 5 bits of physical priority.

Make the number of physical priority/preemption bits driven by fields
in the GICv3CPUState, the way that we already do for virtual
priority/preemption bits. We set cs->pribits to 8, so there is no
behavioural change in this commit. A following commit will add the
machinery for CPUs to set this to the correct value for their
implementation.

Note that changing the number of priority bits would be a migration
compatibility break, because the semantics of the icc_apr[][] array
changes.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-5-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-4-peter.maydell@linaro.org
---
 include/hw/intc/arm_gicv3_common.h | 7 +-
 hw/intc/arm_gicv3_cpuif.c | 182 ++++++++++++++++++++---------
 2 files changed, 130 insertions(+), 59 deletions(-)

diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
28
index XXXXXXX..XXXXXXX 100644
29
--- a/include/hw/intc/arm_gicv3_common.h
30
+++ b/include/hw/intc/arm_gicv3_common.h
31
@@ -XXX,XX +XXX,XX @@
32
/* Maximum number of list registers (architectural limit) */
33
#define GICV3_LR_MAX 16
34
35
-/* Minimum BPR for Secure, or when security not enabled */
36
-#define GIC_MIN_BPR 0
37
-/* Minimum BPR for Nonsecure when security is enabled */
38
-#define GIC_MIN_BPR_NS (GIC_MIN_BPR + 1)
39
-
40
/* For some distributor fields we want to model the array of 32-bit
41
* register values which hold various bitmaps corresponding to enabled,
42
* pending, etc bits. These macros and functions facilitate that; the
43
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
44
int num_list_regs;
45
int vpribits; /* number of virtual priority bits */
46
int vprebits; /* number of virtual preemption bits */
47
+ int pribits; /* number of physical priority bits */
48
+ int prebits; /* number of physical preemption bits */
49
50
/* Current highest priority pending interrupt for this CPU.
51
* This is cached information that can be recalculated from the
52
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/hw/intc/arm_gicv3_cpuif.c
55
+++ b/hw/intc/arm_gicv3_cpuif.c
56
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
57
return intid;
58
}
59
60
+static uint32_t icc_fullprio_mask(GICv3CPUState *cs)
61
+{
62
+ /*
63
+ * Return a mask word which clears the unimplemented priority bits
64
+ * from a priority value for a physical interrupt. (Not to be confused
65
+ * with the group priority, whose mask depends on the value of BPR
66
+ * for the interrupt group.)
67
+ */
68
+ return ~0U << (8 - cs->pribits);
69
+}
70
+
71
+static inline int icc_min_bpr(GICv3CPUState *cs)
72
+{
73
+ /* The minimum BPR for the physical interface. */
74
+ return 7 - cs->prebits;
75
+}
76
+
77
+static inline int icc_min_bpr_ns(GICv3CPUState *cs)
78
+{
79
+ return icc_min_bpr(cs) + 1;
80
+}
81
+
82
+static inline int icc_num_aprs(GICv3CPUState *cs)
83
+{
84
+ /* Return the number of APR registers (1, 2, or 4) */
85
+ int aprmax = 1 << MAX(cs->prebits - 5, 0);
86
+ assert(aprmax <= ARRAY_SIZE(cs->icc_apr[0]));
87
+ return aprmax;
88
+}
89
+
90
static int icc_highest_active_prio(GICv3CPUState *cs)
91
{
92
/* Calculate the current running priority based on the set bits
93
@@ -XXX,XX +XXX,XX @@ static int icc_highest_active_prio(GICv3CPUState *cs)
94
*/
95
int i;
96
97
- for (i = 0; i < ARRAY_SIZE(cs->icc_apr[0]); i++) {
98
+ for (i = 0; i < icc_num_aprs(cs); i++) {
99
uint32_t apr = cs->icc_apr[GICV3_G0][i] |
100
cs->icc_apr[GICV3_G1][i] | cs->icc_apr[GICV3_G1NS][i];
101
102
if (!apr) {
103
continue;
104
}
105
- return (i * 32 + ctz32(apr)) << (GIC_MIN_BPR + 1);
106
+ return (i * 32 + ctz32(apr)) << (icc_min_bpr(cs) + 1);
107
}
108
/* No current active interrupts: return idle priority */
109
return 0xff;
110
@@ -XXX,XX +XXX,XX @@ static void icc_pmr_write(CPUARMState *env, const ARMCPRegInfo *ri,
111
112
trace_gicv3_icc_pmr_write(gicv3_redist_affid(cs), value);
113
114
- value &= 0xff;
115
+ value &= icc_fullprio_mask(cs);
116
117
if (arm_feature(env, ARM_FEATURE_EL3) && !arm_is_secure(env) &&
118
(env->cp15.scr_el3 & SCR_FIQ)) {
119
@@ -XXX,XX +XXX,XX @@ static void icc_activate_irq(GICv3CPUState *cs, int irq)
120
*/
121
uint32_t mask = icc_gprio_mask(cs, cs->hppi.grp);
122
int prio = cs->hppi.prio & mask;
123
- int aprbit = prio >> 1;
124
+ int aprbit = prio >> (8 - cs->prebits);
125
int regno = aprbit / 32;
126
int regbit = aprbit % 32;
127
128
@@ -XXX,XX +XXX,XX @@ static void icc_drop_prio(GICv3CPUState *cs, int grp)
129
*/
130
int i;
131
132
- for (i = 0; i < ARRAY_SIZE(cs->icc_apr[grp]); i++) {
133
+ for (i = 0; i < icc_num_aprs(cs); i++) {
134
uint64_t *papr = &cs->icc_apr[grp][i];
135
136
if (!*papr) {
137
@@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri,
138
return;
139
}
140
141
- minval = (grp == GICV3_G1NS) ? GIC_MIN_BPR_NS : GIC_MIN_BPR;
142
+ minval = (grp == GICV3_G1NS) ? icc_min_bpr_ns(cs) : icc_min_bpr(cs);
143
if (value < minval) {
144
value = minval;
145
}
146
@@ -XXX,XX +XXX,XX @@ static void icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
147
148
cs->icc_ctlr_el1[GICV3_S] = ICC_CTLR_EL1_A3V |
149
(1 << ICC_CTLR_EL1_IDBITS_SHIFT) |
150
- (7 << ICC_CTLR_EL1_PRIBITS_SHIFT);
151
+ ((cs->pribits - 1) << ICC_CTLR_EL1_PRIBITS_SHIFT);
152
cs->icc_ctlr_el1[GICV3_NS] = ICC_CTLR_EL1_A3V |
153
(1 << ICC_CTLR_EL1_IDBITS_SHIFT) |
154
- (7 << ICC_CTLR_EL1_PRIBITS_SHIFT);
155
+ ((cs->pribits - 1) << ICC_CTLR_EL1_PRIBITS_SHIFT);
156
cs->icc_pmr_el1 = 0;
157
- cs->icc_bpr[GICV3_G0] = GIC_MIN_BPR;
158
- cs->icc_bpr[GICV3_G1] = GIC_MIN_BPR;
159
- cs->icc_bpr[GICV3_G1NS] = GIC_MIN_BPR_NS;
160
+ cs->icc_bpr[GICV3_G0] = icc_min_bpr(cs);
161
+ cs->icc_bpr[GICV3_G1] = icc_min_bpr(cs);
162
+ cs->icc_bpr[GICV3_G1NS] = icc_min_bpr_ns(cs);
163
memset(cs->icc_apr, 0, sizeof(cs->icc_apr));
164
memset(cs->icc_igrpen, 0, sizeof(cs->icc_igrpen));
165
cs->icc_ctlr_el3 = ICC_CTLR_EL3_NDS | ICC_CTLR_EL3_A3V |
166
(1 << ICC_CTLR_EL3_IDBITS_SHIFT) |
167
- (7 << ICC_CTLR_EL3_PRIBITS_SHIFT);
168
+ ((cs->pribits - 1) << ICC_CTLR_EL3_PRIBITS_SHIFT);
169
170
memset(cs->ich_apr, 0, sizeof(cs->ich_apr));
171
cs->ich_hcr_el2 = 0;
172
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
173
.readfn = icc_ap_read,
174
.writefn = icc_ap_write,
175
},
176
- { .name = "ICC_AP0R1_EL1", .state = ARM_CP_STATE_BOTH,
177
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 5,
178
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
179
- .access = PL1_RW, .accessfn = gicv3_fiq_access,
180
- .readfn = icc_ap_read,
181
- .writefn = icc_ap_write,
182
- },
183
- { .name = "ICC_AP0R2_EL1", .state = ARM_CP_STATE_BOTH,
184
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 6,
185
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
186
- .access = PL1_RW, .accessfn = gicv3_fiq_access,
187
- .readfn = icc_ap_read,
188
- .writefn = icc_ap_write,
189
- },
190
- { .name = "ICC_AP0R3_EL1", .state = ARM_CP_STATE_BOTH,
191
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 7,
192
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
193
- .access = PL1_RW, .accessfn = gicv3_fiq_access,
194
- .readfn = icc_ap_read,
195
- .writefn = icc_ap_write,
196
- },
197
/* All the ICC_AP1R*_EL1 registers are banked */
198
{ .name = "ICC_AP1R0_EL1", .state = ARM_CP_STATE_BOTH,
199
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 0,
200
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
201
.readfn = icc_ap_read,
202
.writefn = icc_ap_write,
203
},
204
- { .name = "ICC_AP1R1_EL1", .state = ARM_CP_STATE_BOTH,
205
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 1,
206
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
207
- .access = PL1_RW, .accessfn = gicv3_irq_access,
208
- .readfn = icc_ap_read,
209
- .writefn = icc_ap_write,
210
- },
211
- { .name = "ICC_AP1R2_EL1", .state = ARM_CP_STATE_BOTH,
212
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 2,
213
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
214
- .access = PL1_RW, .accessfn = gicv3_irq_access,
215
- .readfn = icc_ap_read,
216
- .writefn = icc_ap_write,
217
- },
218
- { .name = "ICC_AP1R3_EL1", .state = ARM_CP_STATE_BOTH,
219
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 3,
220
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
221
- .access = PL1_RW, .accessfn = gicv3_irq_access,
222
- .readfn = icc_ap_read,
223
- .writefn = icc_ap_write,
224
- },
225
{ .name = "ICC_DIR_EL1", .state = ARM_CP_STATE_BOTH,
226
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 11, .opc2 = 1,
227
.type = ARM_CP_IO | ARM_CP_NO_RAW,
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
229
},
230
};
231
232
+static const ARMCPRegInfo gicv3_cpuif_icc_apxr1_reginfo[] = {
233
+ { .name = "ICC_AP0R1_EL1", .state = ARM_CP_STATE_BOTH,
234
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 5,
235
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
236
+ .access = PL1_RW, .accessfn = gicv3_fiq_access,
237
+ .readfn = icc_ap_read,
238
+ .writefn = icc_ap_write,
239
+ },
240
+ { .name = "ICC_AP1R1_EL1", .state = ARM_CP_STATE_BOTH,
241
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 1,
242
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
243
+ .access = PL1_RW, .accessfn = gicv3_irq_access,
244
+ .readfn = icc_ap_read,
245
+ .writefn = icc_ap_write,
246
+ },
247
+};
248
+
249
+static const ARMCPRegInfo gicv3_cpuif_icc_apxr23_reginfo[] = {
250
+ { .name = "ICC_AP0R2_EL1", .state = ARM_CP_STATE_BOTH,
251
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 6,
252
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
253
+ .access = PL1_RW, .accessfn = gicv3_fiq_access,
254
+ .readfn = icc_ap_read,
255
+ .writefn = icc_ap_write,
256
+ },
257
+ { .name = "ICC_AP0R3_EL1", .state = ARM_CP_STATE_BOTH,
258
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 7,
259
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
260
+ .access = PL1_RW, .accessfn = gicv3_fiq_access,
261
+ .readfn = icc_ap_read,
262
+ .writefn = icc_ap_write,
263
+ },
264
+ { .name = "ICC_AP1R2_EL1", .state = ARM_CP_STATE_BOTH,
265
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 2,
266
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
267
+ .access = PL1_RW, .accessfn = gicv3_irq_access,
268
+ .readfn = icc_ap_read,
269
+ .writefn = icc_ap_write,
270
+ },
271
+ { .name = "ICC_AP1R3_EL1", .state = ARM_CP_STATE_BOTH,
272
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 3,
273
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
274
+ .access = PL1_RW, .accessfn = gicv3_irq_access,
275
+ .readfn = icc_ap_read,
276
+ .writefn = icc_ap_write,
277
+ },
278
+};
279
+
280
static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
281
{
282
GICv3CPUState *cs = icc_cs_from_env(env);
283
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
284
* get back to the GICv3CPUState from the CPUARMState.
285
*/
286
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
287
+
288
+ /*
289
+ * For the moment, retain the existing behaviour of 8 priority bits;
290
+ * in a following commit we will take this from the CPU state,
291
+ * as we do for the virtual priority bits.
292
+ */
293
+ cs->pribits = 8;
294
+ /*
295
+ * The GICv3 has separate ID register fields for virtual priority
296
+ * and preemption bit values, but only a single ID register field
297
+ * for the physical priority bits. The preemption bit count is
298
+ * always the same as the priority bit count, except that 8 bits
299
+ * of priority means 7 preemption bits. We precalculate the
300
+ * preemption bits because it simplifies the code and makes the
301
+ * parallels between the virtual and physical bits of the GIC
302
+ * a bit clearer.
303
+ */
304
+ cs->prebits = cs->pribits;
305
+ if (cs->prebits == 8) {
306
+ cs->prebits--;
307
+ }
308
+ /*
309
+ * Check that CPU code defining pribits didn't violate
310
+ * architectural constraints our implementation relies on.
311
+ */
312
+ g_assert(cs->pribits >= 4 && cs->pribits <= 8);
313
+
314
+ /*
315
+ * gicv3_cpuif_reginfo[] defines ICC_AP*R0_EL1; add definitions
316
+ * for ICC_AP*R{1,2,3}_EL1 if the prebits value requires them.
317
+ */
318
+ if (cs->prebits >= 6) {
319
+ define_arm_cp_regs(cpu, gicv3_cpuif_icc_apxr1_reginfo);
320
+ }
321
+ if (cs->prebits == 7) {
322
+ define_arm_cp_regs(cpu, gicv3_cpuif_icc_apxr23_reginfo);
323
+ }
324
+
325
if (arm_feature(&cpu->env, ARM_FEATURE_EL2)) {
326
int j;
327
328
--
329
2.25.1
Deleted patch
1
Make the GICv3 set its number of bits of physical priority from the
2
implementation-specific value provided in the CPU state struct, in
3
the same way we already do for virtual priority bits. Because this
4
would be a migration compatibility break, we provide a property
5
force-8-bit-prio which is enabled for 7.0 and earlier versioned board
6
models to retain the legacy "always use 8 bits" behaviour.
7
1
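As a concrete illustration of the guest-visible effect (worked out from the diff below, nothing new; 'pribits_field' is just a sketch variable): ICC_CTLR_EL1.PRIbits is encoded as "priority bits minus one", so for the CPU models in this patch, which set gic_pribits = 5, the guest now reads

    pribits_field = cs->pribits - 1;   /* 5 - 1 == 4, previously always 7 */

unless the force-8-bit-prio compat property is set, in which case the old fixed value of 7 is retained.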
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20220512151457.3899052-6-peter.maydell@linaro.org
11
Message-id: 20220506162129.2896966-5-peter.maydell@linaro.org
12
---
13
include/hw/intc/arm_gicv3_common.h | 1 +
14
target/arm/cpu.h | 1 +
15
hw/core/machine.c | 4 +++-
16
hw/intc/arm_gicv3_common.c | 5 +++++
17
hw/intc/arm_gicv3_cpuif.c | 15 +++++++++++----
18
target/arm/cpu64.c | 6 ++++++
19
6 files changed, 27 insertions(+), 5 deletions(-)
20
21
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/intc/arm_gicv3_common.h
24
+++ b/include/hw/intc/arm_gicv3_common.h
25
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
26
uint32_t revision;
27
bool lpi_enable;
28
bool security_extn;
29
+ bool force_8bit_prio;
30
bool irq_reset_nonsecure;
31
bool gicd_no_migration_shift_bug;
32
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
36
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
38
int gic_num_lrs; /* number of list registers */
39
int gic_vpribits; /* number of virtual priority bits */
40
int gic_vprebits; /* number of virtual preemption bits */
41
+ int gic_pribits; /* number of physical priority bits */
42
43
/* Whether the cfgend input is high (i.e. this CPU should reset into
44
* big-endian mode). This setting isn't used directly: instead it modifies
45
diff --git a/hw/core/machine.c b/hw/core/machine.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/core/machine.c
48
+++ b/hw/core/machine.c
49
@@ -XXX,XX +XXX,XX @@
50
#include "hw/virtio/virtio-pci.h"
51
#include "qom/object_interfaces.h"
52
53
-GlobalProperty hw_compat_7_0[] = {};
54
+GlobalProperty hw_compat_7_0[] = {
55
+ { "arm-gicv3-common", "force-8-bit-prio", "on" },
56
+};
57
const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
58
59
GlobalProperty hw_compat_6_2[] = {
60
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/hw/intc/arm_gicv3_common.c
63
+++ b/hw/intc/arm_gicv3_common.c
64
@@ -XXX,XX +XXX,XX @@ static Property arm_gicv3_common_properties[] = {
65
DEFINE_PROP_UINT32("revision", GICv3State, revision, 3),
66
DEFINE_PROP_BOOL("has-lpi", GICv3State, lpi_enable, 0),
67
DEFINE_PROP_BOOL("has-security-extensions", GICv3State, security_extn, 0),
68
+ /*
69
+ * Compatibility property: force 8 bits of physical priority, even
70
+ * if the CPU being emulated should have fewer.
71
+ */
72
+ DEFINE_PROP_BOOL("force-8-bit-prio", GICv3State, force_8bit_prio, 0),
73
DEFINE_PROP_ARRAY("redist-region-count", GICv3State, nb_redist_regions,
74
redist_region_count, qdev_prop_uint32, uint32_t),
75
DEFINE_PROP_LINK("sysmem", GICv3State, dma, TYPE_MEMORY_REGION,
76
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
77
index XXXXXXX..XXXXXXX 100644
78
--- a/hw/intc/arm_gicv3_cpuif.c
79
+++ b/hw/intc/arm_gicv3_cpuif.c
80
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
81
* cpu->gic_num_lrs
82
* cpu->gic_vpribits
83
* cpu->gic_vprebits
84
+ * cpu->gic_pribits
85
*/
86
87
/* Note that we can't just use the GICv3CPUState as an opaque pointer
88
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
89
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
90
91
/*
92
- * For the moment, retain the existing behaviour of 8 priority bits;
93
- * in a following commit we will take this from the CPU state,
94
- * as we do for the virtual priority bits.
95
+ * The CPU implementation specifies the number of supported
96
+ * bits of physical priority. For backwards compatibility
97
+ * of migration, we have a compat property that forces use
98
+ * of 8 priority bits regardless of what the CPU really has.
99
*/
100
- cs->pribits = 8;
101
+ if (s->force_8bit_prio) {
102
+ cs->pribits = 8;
103
+ } else {
104
+ cs->pribits = cpu->gic_pribits ?: 5;
105
+ }
106
+
107
/*
108
* The GICv3 has separate ID register fields for virtual priority
109
* and preemption bit values, but only a single ID register field
110
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
111
index XXXXXXX..XXXXXXX 100644
112
--- a/target/arm/cpu64.c
113
+++ b/target/arm/cpu64.c
114
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
115
cpu->gic_num_lrs = 4;
116
cpu->gic_vpribits = 5;
117
cpu->gic_vprebits = 5;
118
+ cpu->gic_pribits = 5;
119
define_cortex_a72_a57_a53_cp_reginfo(cpu);
120
}
121
122
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
123
cpu->gic_num_lrs = 4;
124
cpu->gic_vpribits = 5;
125
cpu->gic_vprebits = 5;
126
+ cpu->gic_pribits = 5;
127
define_cortex_a72_a57_a53_cp_reginfo(cpu);
128
}
129
130
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
131
cpu->gic_num_lrs = 4;
132
cpu->gic_vpribits = 5;
133
cpu->gic_vprebits = 5;
134
+ cpu->gic_pribits = 5;
135
define_cortex_a72_a57_a53_cp_reginfo(cpu);
136
}
137
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
139
cpu->gic_num_lrs = 4;
140
cpu->gic_vpribits = 5;
141
cpu->gic_vprebits = 5;
142
+ cpu->gic_pribits = 5;
143
144
/* From B5.1 AdvSIMD AArch64 register summary */
145
cpu->isar.mvfr0 = 0x10110222;
146
@@ -XXX,XX +XXX,XX @@ static void aarch64_neoverse_n1_initfn(Object *obj)
147
cpu->gic_num_lrs = 4;
148
cpu->gic_vpribits = 5;
149
cpu->gic_vprebits = 5;
150
+ cpu->gic_pribits = 5;
151
152
/* From B5.1 AdvSIMD AArch64 register summary */
153
cpu->isar.mvfr0 = 0x10110222;
154
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
155
cpu->gic_num_lrs = 4;
156
cpu->gic_vpribits = 5;
157
cpu->gic_vprebits = 5;
158
+ cpu->gic_pribits = 5;
159
160
/* Suppport of A64FX's vector length are 128,256 and 512bit only */
161
aarch64_add_sve_properties(obj);
162
--
163
2.25.1
Deleted patch
1
We previously open-coded the expression for the number of virtual APR
2
registers and the assertion that it was not going to cause us to
3
overflow the cs->ich_apr[] array. Factor this out into a new
4
ich_num_aprs() function, for consistency with the icc_num_aprs()
5
function we just added for the physical APR handling.
6
1
7
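For a CPU model with gic_vprebits = 5 (as the models touched by this series set), the helper works out as follows (a worked example derived from the diff, not new behaviour):

    aprmax = 1 << (cs->vprebits - 5);   /* 1 << 0 == 1 active priority register */

so only ich_apr[grp][0] is ever touched for those CPUs; larger vprebits values would bring the remaining array entries into play.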
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220512151457.3899052-7-peter.maydell@linaro.org
10
Message-id: 20220506162129.2896966-6-peter.maydell@linaro.org
11
---
12
hw/intc/arm_gicv3_cpuif.c | 16 ++++++++++------
13
1 file changed, 10 insertions(+), 6 deletions(-)
14
15
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/intc/arm_gicv3_cpuif.c
18
+++ b/hw/intc/arm_gicv3_cpuif.c
19
@@ -XXX,XX +XXX,XX @@ static inline int icv_min_vbpr(GICv3CPUState *cs)
20
return 7 - cs->vprebits;
21
}
22
23
+static inline int ich_num_aprs(GICv3CPUState *cs)
24
+{
25
+ /* Return the number of virtual APR registers (1, 2, or 4) */
26
+ int aprmax = 1 << (cs->vprebits - 5);
27
+ assert(aprmax <= ARRAY_SIZE(cs->ich_apr[0]));
28
+ return aprmax;
29
+}
30
+
31
/* Simple accessor functions for LR fields */
32
static uint32_t ich_lr_vintid(uint64_t lr)
33
{
34
@@ -XXX,XX +XXX,XX @@ static int ich_highest_active_virt_prio(GICv3CPUState *cs)
35
* in the ICH Active Priority Registers.
36
*/
37
int i;
38
- int aprmax = 1 << (cs->vprebits - 5);
39
-
40
- assert(aprmax <= ARRAY_SIZE(cs->ich_apr[0]));
41
+ int aprmax = ich_num_aprs(cs);
42
43
for (i = 0; i < aprmax; i++) {
44
uint32_t apr = cs->ich_apr[GICV3_G0][i] |
45
@@ -XXX,XX +XXX,XX @@ static int icv_drop_prio(GICv3CPUState *cs)
46
* 32 bits are actually relevant.
47
*/
48
int i;
49
- int aprmax = 1 << (cs->vprebits - 5);
50
-
51
- assert(aprmax <= ARRAY_SIZE(cs->ich_apr[0]));
52
+ int aprmax = ich_num_aprs(cs);
53
54
for (i = 0; i < aprmax; i++) {
55
uint64_t *papr0 = &cs->ich_apr[GICV3_G0][i];
56
--
57
2.25.1
Deleted patch
1
From: Chris Howard <cvz185@web.de>
2
1
3
Give all the debug registers their correct names including the
4
index, rather than having multiple registers all with the
5
same name string, which is confusing when viewed over the
6
gdbstub interface.
7
8
Signed-off-by: CHRIS HOWARD <cvz185@web.de>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 4127D8CA-D54A-47C7-A039-0DB7361E30C0@web.de
11
[PMM: expanded commit message]
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
target/arm/helper.c | 16 ++++++++++++----
15
1 file changed, 12 insertions(+), 4 deletions(-)
16
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
20
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
22
}
23
24
for (i = 0; i < brps; i++) {
25
+ char *dbgbvr_el1_name = g_strdup_printf("DBGBVR%d_EL1", i);
26
+ char *dbgbcr_el1_name = g_strdup_printf("DBGBCR%d_EL1", i);
27
ARMCPRegInfo dbgregs[] = {
28
- { .name = "DBGBVR", .state = ARM_CP_STATE_BOTH,
29
+ { .name = dbgbvr_el1_name, .state = ARM_CP_STATE_BOTH,
30
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4,
31
.access = PL1_RW, .accessfn = access_tda,
32
.fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]),
33
.writefn = dbgbvr_write, .raw_writefn = raw_write
34
},
35
- { .name = "DBGBCR", .state = ARM_CP_STATE_BOTH,
36
+ { .name = dbgbcr_el1_name, .state = ARM_CP_STATE_BOTH,
37
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5,
38
.access = PL1_RW, .accessfn = access_tda,
39
.fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
40
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
41
},
42
};
43
define_arm_cp_regs(cpu, dbgregs);
44
+ g_free(dbgbvr_el1_name);
45
+ g_free(dbgbcr_el1_name);
46
}
47
48
for (i = 0; i < wrps; i++) {
49
+ char *dbgwvr_el1_name = g_strdup_printf("DBGWVR%d_EL1", i);
50
+ char *dbgwcr_el1_name = g_strdup_printf("DBGWCR%d_EL1", i);
51
ARMCPRegInfo dbgregs[] = {
52
- { .name = "DBGWVR", .state = ARM_CP_STATE_BOTH,
53
+ { .name = dbgwvr_el1_name, .state = ARM_CP_STATE_BOTH,
54
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6,
55
.access = PL1_RW, .accessfn = access_tda,
56
.fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]),
57
.writefn = dbgwvr_write, .raw_writefn = raw_write
58
},
59
- { .name = "DBGWCR", .state = ARM_CP_STATE_BOTH,
60
+ { .name = dbgwcr_el1_name, .state = ARM_CP_STATE_BOTH,
61
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7,
62
.access = PL1_RW, .accessfn = access_tda,
63
.fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
64
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
65
},
66
};
67
define_arm_cp_regs(cpu, dbgregs);
68
+ g_free(dbgwvr_el1_name);
69
+ g_free(dbgwcr_el1_name);
70
}
71
}
72
73
--
74
2.25.1
Deleted patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
1
3
Apart from hw/core/irq.c, which implements the forward-declared opaque
4
IRQState structure behind the qemu_irq typedef, hw/adc/zynq-xadc.{c,h} are the only files not
5
using the typedef. Fix this single exception.
6
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Reviewed-by: Bernhard Beschow <shentey@gmail.com>
9
Message-id: 20220509202035.50335-1-philippe.mathieu.daude@gmail.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/hw/adc/zynq-xadc.h | 3 +--
13
hw/adc/zynq-xadc.c | 4 ++--
14
2 files changed, 3 insertions(+), 4 deletions(-)
15
16
diff --git a/include/hw/adc/zynq-xadc.h b/include/hw/adc/zynq-xadc.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/include/hw/adc/zynq-xadc.h
19
+++ b/include/hw/adc/zynq-xadc.h
20
@@ -XXX,XX +XXX,XX @@ struct ZynqXADCState {
21
uint16_t xadc_dfifo[ZYNQ_XADC_FIFO_DEPTH];
22
uint16_t xadc_dfifo_entries;
23
24
- struct IRQState *qemu_irq;
25
-
26
+ qemu_irq irq;
27
};
28
29
#endif /* ZYNQ_XADC_H */
30
diff --git a/hw/adc/zynq-xadc.c b/hw/adc/zynq-xadc.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/adc/zynq-xadc.c
33
+++ b/hw/adc/zynq-xadc.c
34
@@ -XXX,XX +XXX,XX @@ static void zynq_xadc_update_ints(ZynqXADCState *s)
35
s->regs[INT_STS] |= INT_DFIFO_GTH;
36
}
37
38
- qemu_set_irq(s->qemu_irq, !!(s->regs[INT_STS] & ~s->regs[INT_MASK]));
39
+ qemu_set_irq(s->irq, !!(s->regs[INT_STS] & ~s->regs[INT_MASK]));
40
}
41
42
static void zynq_xadc_reset(DeviceState *d)
43
@@ -XXX,XX +XXX,XX @@ static void zynq_xadc_init(Object *obj)
44
memory_region_init_io(&s->iomem, obj, &xadc_ops, s, "zynq-xadc",
45
ZYNQ_XADC_MMIO_SIZE);
46
sysbus_init_mmio(sbd, &s->iomem);
47
- sysbus_init_irq(sbd, &s->qemu_irq);
48
+ sysbus_init_irq(sbd, &s->irq);
49
}
50
51
static const VMStateDescription vmstate_zynq_xadc = {
52
--
53
2.25.1
54
55
Deleted patch
1
In commit 88ce6c6ee85d we switched from directly fishing the number
2
of breakpoints and watchpoints out of the ID register fields to
3
abstracting out functions to do this job, but we forgot to delete the
4
now-obsolete comment in define_debug_regs() about the relation
5
between the ID field value and the actual number of breakpoints and
6
watchpoints. Delete the obsolete comment.
7
1
8
Reported-by: CHRIS HOWARD <cvz185@web.de>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20220513131801.4082712-1-peter.maydell@linaro.org
13
---
14
target/arm/helper.c | 1 -
15
1 file changed, 1 deletion(-)
16
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
20
+++ b/target/arm/helper.c
21
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
22
define_one_arm_cp_reg(cpu, &dbgdidr);
23
}
24
25
- /* Note that all these register fields hold "number of Xs minus 1". */
26
brps = arm_num_brps(cpu);
27
wrps = arm_num_wrps(cpu);
28
ctx_cmps = arm_num_ctx_cmps(cpu);
29
--
30
2.25.1
31
32
Deleted patch
1
Currently we give all the v7-and-up CPUs a PMU with 4 counters. This
2
means that we don't provide the 6 counters that are required by the
3
Arm BSA (Base System Architecture) specification if the CPU supports
4
the Virtualization extensions.
5
1
6
Instead of having a single PMCR_NUM_COUNTERS, make each CPU type
7
specify the PMCR reset value (obtained from the appropriate TRM), and
8
use the 'N' field of that value to define the number of counters
9
provided.
10
11
This means that we now supply 6 counters instead of 4 for:
12
Cortex-A9, Cortex-A15, Cortex-A53, Cortex-A57, Cortex-A72,
13
Cortex-A76, Neoverse-N1, '-cpu max'
14
This CPU goes from 4 to 8 counters:
15
A64FX
16
These CPUs remain with 4 counters:
17
Cortex-A7, Cortex-A8
18
This CPU goes down from 4 to 3 counters:
19
Cortex-R5
20
21
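A standalone sketch (not part of the patch) that checks the counts above, assuming only that PMCR_EL0.N occupies bits [15:11] of the reset values quoted from the TRMs:

    #include <stdio.h>

    /* PMCR_EL0.N: number of event counters, bits [15:11] */
    static unsigned pmcr_n(unsigned long pmcr)
    {
        return (pmcr >> 11) & 0x1f;
    }

    int main(void)
    {
        printf("Cortex-A57 0x41013000 -> N = %u\n", pmcr_n(0x41013000UL)); /* 6 */
        printf("A64FX      0x46014040 -> N = %u\n", pmcr_n(0x46014040UL)); /* 8 */
        printf("Cortex-R5  0x41151800 -> N = %u\n", pmcr_n(0x41151800UL)); /* 3 */
        return 0;
    }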
Note that because we now use the PMCR reset value of the specific
22
implementation, we no longer set the LC bit out of reset. This has
23
an UNKNOWN value out of reset for all cores with any AArch32 support,
24
so guest software should be setting it anyway if it wants it.
25
26
This change was originally landed in commit f7fb73b8cdd3f7 (during
27
the 6.0 release cycle) but was then reverted by commit
28
21c2dd77a6aa517 before that release because it did not work with KVM.
29
This version fixes that by creating the scratch vCPU in
30
kvm_arm_get_host_cpu_features() with the KVM_ARM_VCPU_PMU_V3 feature
31
if KVM supports it, and then only asking KVM for the PMCR_EL0 value
32
if the vCPU has a PMU.
33
34
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
35
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
36
[PMM: Added the correct value for a64fx]
37
Message-id: 20220513122852.4063586-1-peter.maydell@linaro.org
38
---
39
target/arm/cpu.h | 1 +
40
target/arm/internals.h | 4 +++-
41
target/arm/cpu64.c | 11 +++++++++++
42
target/arm/cpu_tcg.c | 6 ++++++
43
target/arm/helper.c | 25 ++++++++++++++-----------
44
target/arm/kvm64.c | 12 ++++++++++++
45
6 files changed, 47 insertions(+), 12 deletions(-)
46
47
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/cpu.h
50
+++ b/target/arm/cpu.h
51
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
52
uint64_t id_aa64dfr0;
53
uint64_t id_aa64dfr1;
54
uint64_t id_aa64zfr0;
55
+ uint64_t reset_pmcr_el0;
56
} isar;
57
uint64_t midr;
58
uint32_t revidr;
59
diff --git a/target/arm/internals.h b/target/arm/internals.h
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/internals.h
62
+++ b/target/arm/internals.h
63
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {
64
65
static inline uint32_t pmu_num_counters(CPUARMState *env)
66
{
67
- return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
68
+ ARMCPU *cpu = env_archcpu(env);
69
+
70
+ return (cpu->isar.reset_pmcr_el0 & PMCRN_MASK) >> PMCRN_SHIFT;
71
}
72
73
/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
74
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
75
index XXXXXXX..XXXXXXX 100644
76
--- a/target/arm/cpu64.c
77
+++ b/target/arm/cpu64.c
78
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
79
cpu->isar.id_aa64isar0 = 0x00011120;
80
cpu->isar.id_aa64mmfr0 = 0x00001124;
81
cpu->isar.dbgdidr = 0x3516d000;
82
+ cpu->isar.reset_pmcr_el0 = 0x41013000;
83
cpu->clidr = 0x0a200023;
84
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
85
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
86
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
87
cpu->isar.id_aa64isar0 = 0x00011120;
88
cpu->isar.id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
89
cpu->isar.dbgdidr = 0x3516d000;
90
+ cpu->isar.reset_pmcr_el0 = 0x41033000;
91
cpu->clidr = 0x0a200023;
92
cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
93
cpu->ccsidr[1] = 0x201fe00a; /* 32KB L1 icache */
94
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
95
cpu->isar.id_aa64isar0 = 0x00011120;
96
cpu->isar.id_aa64mmfr0 = 0x00001124;
97
cpu->isar.dbgdidr = 0x3516d000;
98
+ cpu->isar.reset_pmcr_el0 = 0x41023000;
99
cpu->clidr = 0x0a200023;
100
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
101
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
102
@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
103
cpu->isar.mvfr0 = 0x10110222;
104
cpu->isar.mvfr1 = 0x13211111;
105
cpu->isar.mvfr2 = 0x00000043;
106
+
107
+ /* From D5.1 AArch64 PMU register summary */
108
+ cpu->isar.reset_pmcr_el0 = 0x410b3000;
109
}
110
111
static void aarch64_neoverse_n1_initfn(Object *obj)
112
@@ -XXX,XX +XXX,XX @@ static void aarch64_neoverse_n1_initfn(Object *obj)
113
cpu->isar.mvfr0 = 0x10110222;
114
cpu->isar.mvfr1 = 0x13211111;
115
cpu->isar.mvfr2 = 0x00000043;
116
+
117
+ /* From D5.1 AArch64 PMU register summary */
118
+ cpu->isar.reset_pmcr_el0 = 0x410c3000;
119
}
120
121
void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
122
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
123
set_bit(1, cpu->sve_vq_supported); /* 256bit */
124
set_bit(3, cpu->sve_vq_supported); /* 512bit */
125
126
+ cpu->isar.reset_pmcr_el0 = 0x46014040;
127
+
128
/* TODO: Add A64FX specific HPC extension registers */
129
}
130
131
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
132
index XXXXXXX..XXXXXXX 100644
133
--- a/target/arm/cpu_tcg.c
134
+++ b/target/arm/cpu_tcg.c
135
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
136
cpu->ccsidr[1] = 0x2007e01a; /* 16k L1 icache. */
137
cpu->ccsidr[2] = 0xf0000000; /* No L2 icache. */
138
cpu->reset_auxcr = 2;
139
+ cpu->isar.reset_pmcr_el0 = 0x41002000;
140
define_arm_cp_regs(cpu, cortexa8_cp_reginfo);
141
}
142
143
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
144
cpu->clidr = (1 << 27) | (1 << 24) | 3;
145
cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
146
cpu->ccsidr[1] = 0x200fe019; /* 16k L1 icache. */
147
+ cpu->isar.reset_pmcr_el0 = 0x41093000;
148
define_arm_cp_regs(cpu, cortexa9_cp_reginfo);
149
}
150
151
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
152
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
153
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
154
cpu->ccsidr[2] = 0x711fe07a; /* 4096K L2 unified cache */
155
+ cpu->isar.reset_pmcr_el0 = 0x41072000;
156
define_arm_cp_regs(cpu, cortexa15_cp_reginfo); /* Same as A15 */
157
}
158
159
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
160
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
161
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
162
cpu->ccsidr[2] = 0x711fe07a; /* 4096K L2 unified cache */
163
+ cpu->isar.reset_pmcr_el0 = 0x410F3000;
164
define_arm_cp_regs(cpu, cortexa15_cp_reginfo);
165
}
166
167
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
168
cpu->isar.id_isar6 = 0x0;
169
cpu->mp_is_up = true;
170
cpu->pmsav7_dregion = 16;
171
+ cpu->isar.reset_pmcr_el0 = 0x41151800;
172
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
173
}
174
175
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
176
cpu->isar.id_isar5 = 0x00011121;
177
cpu->isar.id_isar6 = 0;
178
cpu->isar.dbgdidr = 0x3516d000;
179
+ cpu->isar.reset_pmcr_el0 = 0x41013000;
180
cpu->clidr = 0x0a200023;
181
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
182
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
183
diff --git a/target/arm/helper.c b/target/arm/helper.c
184
index XXXXXXX..XXXXXXX 100644
185
--- a/target/arm/helper.c
186
+++ b/target/arm/helper.c
187
@@ -XXX,XX +XXX,XX @@
188
#include "cpregs.h"
189
190
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
191
-#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */
192
193
#ifndef CONFIG_USER_ONLY
194
195
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
196
.resetvalue = 0,
197
.writefn = gt_hyp_ctl_write, .raw_writefn = raw_write },
198
#endif
199
- /* The only field of MDCR_EL2 that has a defined architectural reset value
200
- * is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N.
201
- */
202
- { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
203
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
204
- .access = PL2_RW, .resetvalue = PMCR_NUM_COUNTERS,
205
- .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2), },
206
{ .name = "HPFAR", .state = ARM_CP_STATE_AA32,
207
.cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
208
.access = PL2_RW, .accessfn = access_el3_aa32ns,
209
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
210
* field as main ID register, and we implement four counters in
211
* addition to the cycle count register.
212
*/
213
- unsigned int i, pmcrn = PMCR_NUM_COUNTERS;
214
+ unsigned int i, pmcrn = pmu_num_counters(&cpu->env);
215
ARMCPRegInfo pmcr = {
216
.name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
217
.access = PL0_RW,
218
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
219
.access = PL0_RW, .accessfn = pmreg_access,
220
.type = ARM_CP_IO,
221
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
222
- .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT) |
223
- PMCRLC,
224
+ .resetvalue = cpu->isar.reset_pmcr_el0,
225
.writefn = pmcr_write, .raw_writefn = raw_write,
226
};
227
+
228
define_one_arm_cp_reg(cpu, &pmcr);
229
define_one_arm_cp_reg(cpu, &pmcr64);
230
for (i = 0; i < pmcrn; i++) {
231
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
232
.type = ARM_CP_EL3_NO_EL2_C_NZ,
233
.fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
234
};
235
+ /*
236
+ * The only field of MDCR_EL2 that has a defined architectural reset
237
+ * value is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N.
238
+ */
239
+ ARMCPRegInfo mdcr_el2 = {
240
+ .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
241
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
242
+ .access = PL2_RW, .resetvalue = pmu_num_counters(env),
243
+ .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2),
244
+ };
245
+ define_one_arm_cp_reg(cpu, &mdcr_el2);
246
define_arm_cp_regs(cpu, vpidr_regs);
247
define_arm_cp_regs(cpu, el2_cp_reginfo);
248
if (arm_feature(env, ARM_FEATURE_V8)) {
249
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
250
index XXXXXXX..XXXXXXX 100644
251
--- a/target/arm/kvm64.c
252
+++ b/target/arm/kvm64.c
253
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
254
*/
255
int fdarray[3];
256
bool sve_supported;
257
+ bool pmu_supported = false;
258
uint64_t features = 0;
259
uint64_t t;
260
int err;
261
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
262
1 << KVM_ARM_VCPU_PTRAUTH_GENERIC);
263
}
264
265
+ if (kvm_arm_pmu_supported()) {
266
+ init.features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
267
+ pmu_supported = true;
268
+ }
269
+
270
if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
271
return false;
272
}
273
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
274
dbgdidr |= (1 << 15); /* RES1 bit */
275
ahcf->isar.dbgdidr = dbgdidr;
276
}
277
+
278
+ if (pmu_supported) {
279
+ /* PMCR_EL0 is only accessible if the vCPU has feature PMU_V3 */
280
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.reset_pmcr_el0,
281
+ ARM64_SYS_REG(3, 3, 9, 12, 0));
282
+ }
283
}
284
285
sve_supported = ioctl(fdarray[0], KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
286
--
287
2.25.1
Deleted patch
1
In the virt board with secure=on we put two nodes in the dtb
2
for flash devices: one for the secure-only flash, and one
3
for the non-secure flash. We get the reg properties for these
4
correct, but in the DT node name, which by convention includes
5
the base address of devices, we used the wrong address. Fix it.
6
1
7
Spotted by dtc, which will complain
8
Warning (unique_unit_address): /flash@0: duplicate unit-address (also used in node /secflash@0)
9
if you dump the dtb from QEMU with -machine dumpdtb=file.dtb
10
and then decompile it with dtc.
11
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20220513131316.4081539-2-peter.maydell@linaro.org
15
---
16
hw/arm/virt.c | 2 +-
17
1 file changed, 1 insertion(+), 1 deletion(-)
18
19
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/hw/arm/virt.c
22
+++ b/hw/arm/virt.c
23
@@ -XXX,XX +XXX,XX @@ static void virt_flash_fdt(VirtMachineState *vms,
24
qemu_fdt_setprop_string(ms->fdt, nodename, "secure-status", "okay");
25
g_free(nodename);
26
27
- nodename = g_strdup_printf("/flash@%" PRIx64, flashbase);
28
+ nodename = g_strdup_printf("/flash@%" PRIx64, flashbase + flashsize);
29
qemu_fdt_add_subnode(ms->fdt, nodename);
30
qemu_fdt_setprop_string(ms->fdt, nodename, "compatible", "cfi-flash");
31
qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
32
--
33
2.25.1
Deleted patch
1
The virt board generates a gpio-keys node in the dtb, but it
2
incorrectly gives this node #size-cells and #address-cells
3
properties. If you dump the dtb with 'machine dumpdtb=file.dtb'
4
and run it through dtc, dtc will warn about this:
5
1
6
Warning (avoid_unnecessary_addr_size): /gpio-keys: unnecessary #address-cells/#size-cells without "ranges" or child "reg" property
7
8
Remove the bogus properties.
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20220513131316.4081539-3-peter.maydell@linaro.org
13
---
14
hw/arm/virt.c | 2 --
15
1 file changed, 2 deletions(-)
16
17
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/virt.c
20
+++ b/hw/arm/virt.c
21
@@ -XXX,XX +XXX,XX @@ static void create_gpio_keys(char *fdt, DeviceState *pl061_dev,
22
23
qemu_fdt_add_subnode(fdt, "/gpio-keys");
24
qemu_fdt_setprop_string(fdt, "/gpio-keys", "compatible", "gpio-keys");
25
- qemu_fdt_setprop_cell(fdt, "/gpio-keys", "#size-cells", 0);
26
- qemu_fdt_setprop_cell(fdt, "/gpio-keys", "#address-cells", 1);
27
28
qemu_fdt_add_subnode(fdt, "/gpio-keys/poweroff");
29
qemu_fdt_setprop_string(fdt, "/gpio-keys/poweroff",
30
--
31
2.25.1
Deleted patch
1
The traditional ptimer behaviour includes a collection of weird edge
2
case behaviours. In 2016 we improved the ptimer implementation to
3
fix these and generally make the behaviour more flexible, with
4
ptimers opting in to the new behaviour by passing an appropriate set
5
of policy flags to ptimer_init(). For backwards-compatibility, we
6
defined PTIMER_POLICY_DEFAULT (which sets no flags) to give the old
7
weird behaviour.
8
1
9
This turns out to be a poor choice of name, because people writing
10
new devices which use ptimers are misled into thinking that the
11
default is probably a sensible choice of flags, when in fact it is
12
almost always not what you want. Rename PTIMER_POLICY_DEFAULT to
13
PTIMER_POLICY_LEGACY and beef up the comment to more clearly say that
14
new devices should not be using it.
15
16
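A minimal sketch of what a new device should do instead (hypothetical device and callback names; the flag combination is purely illustrative and uses only flags that already exist in include/hw/ptimer.h):

    /* Choose explicit policy flags that match the real hardware behaviour
     * instead of falling back to PTIMER_POLICY_LEGACY.
     */
    s->timer = ptimer_init(mydev_timer_tick, s,
                           PTIMER_POLICY_WRAP_AFTER_ONE_PERIOD |
                           PTIMER_POLICY_TRIGGER_ONLY_ON_DECREMENT);
    ptimer_transaction_begin(s->timer);
    ptimer_set_freq(s->timer, 1000000);   /* 1 MHz, arbitrary for the example */
    ptimer_transaction_commit(s->timer);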
The code-change part of this commit was produced by
17
sed -i -e 's/PTIMER_POLICY_DEFAULT/PTIMER_POLICY_LEGACY/g' $(git grep -l PTIMER_POLICY_DEFAULT)
18
with the exception of a test name string change in
19
tests/unit/ptimer-test.c which was added manually.
20
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
Message-id: 20220516103058.162280-1-peter.maydell@linaro.org
25
---
26
include/hw/ptimer.h | 16 ++++++++++++----
27
hw/arm/musicpal.c | 2 +-
28
hw/dma/xilinx_axidma.c | 2 +-
29
hw/dma/xlnx_csu_dma.c | 2 +-
30
hw/m68k/mcf5206.c | 2 +-
31
hw/m68k/mcf5208.c | 2 +-
32
hw/net/can/xlnx-zynqmp-can.c | 2 +-
33
hw/net/fsl_etsec/etsec.c | 2 +-
34
hw/net/lan9118.c | 2 +-
35
hw/rtc/exynos4210_rtc.c | 4 ++--
36
hw/timer/allwinner-a10-pit.c | 2 +-
37
hw/timer/altera_timer.c | 2 +-
38
hw/timer/arm_timer.c | 2 +-
39
hw/timer/digic-timer.c | 2 +-
40
hw/timer/etraxfs_timer.c | 6 +++---
41
hw/timer/exynos4210_mct.c | 6 +++---
42
hw/timer/exynos4210_pwm.c | 2 +-
43
hw/timer/grlib_gptimer.c | 2 +-
44
hw/timer/imx_epit.c | 4 ++--
45
hw/timer/imx_gpt.c | 2 +-
46
hw/timer/mss-timer.c | 2 +-
47
hw/timer/sh_timer.c | 2 +-
48
hw/timer/slavio_timer.c | 2 +-
49
hw/timer/xilinx_timer.c | 2 +-
50
tests/unit/ptimer-test.c | 6 +++---
51
25 files changed, 44 insertions(+), 36 deletions(-)
52
53
diff --git a/include/hw/ptimer.h b/include/hw/ptimer.h
54
index XXXXXXX..XXXXXXX 100644
55
--- a/include/hw/ptimer.h
56
+++ b/include/hw/ptimer.h
57
@@ -XXX,XX +XXX,XX @@
58
* to stderr when the guest attempts to enable the timer.
59
*/
60
61
-/* The default ptimer policy retains backward compatibility with the legacy
62
- * timers. Custom policies are adjusting the default one. Consider providing
63
- * a correct policy for your timer.
64
+/*
65
+ * The 'legacy' ptimer policy retains backward compatibility with the
66
+ * traditional ptimer behaviour from before policy flags were introduced.
67
+ * It has several weird behaviours which don't match typical hardware
68
+ * timer behaviour. For a new device using ptimers, you should not
69
+ * use PTIMER_POLICY_LEGACY, but instead check the actual behaviour
70
+ * that you need and specify the right set of policy flags to get that.
71
+ *
72
+ * If you are overhauling an existing device that uses PTIMER_POLICY_LEGACY
73
+ * and are in a position to check or test the real hardware behaviour,
74
+ * consider updating it to specify the right policy flags.
75
*
76
* The rough edges of the default policy:
77
* - Starting to run with a period = 0 emits error message and stops the
78
@@ -XXX,XX +XXX,XX @@
79
* since the last period, effectively restarting the timer with a
80
* counter = counter value at the moment of change (.i.e. one less).
81
*/
82
-#define PTIMER_POLICY_DEFAULT 0
83
+#define PTIMER_POLICY_LEGACY 0
84
85
/* Periodic timer counter stays with "0" for a one period before wrapping
86
* around. */
87
diff --git a/hw/arm/musicpal.c b/hw/arm/musicpal.c
88
index XXXXXXX..XXXXXXX 100644
89
--- a/hw/arm/musicpal.c
90
+++ b/hw/arm/musicpal.c
91
@@ -XXX,XX +XXX,XX @@ static void mv88w8618_timer_init(SysBusDevice *dev, mv88w8618_timer_state *s,
92
sysbus_init_irq(dev, &s->irq);
93
s->freq = freq;
94
95
- s->ptimer = ptimer_init(mv88w8618_timer_tick, s, PTIMER_POLICY_DEFAULT);
96
+ s->ptimer = ptimer_init(mv88w8618_timer_tick, s, PTIMER_POLICY_LEGACY);
97
}
98
99
static uint64_t mv88w8618_pit_read(void *opaque, hwaddr offset,
100
diff --git a/hw/dma/xilinx_axidma.c b/hw/dma/xilinx_axidma.c
101
index XXXXXXX..XXXXXXX 100644
102
--- a/hw/dma/xilinx_axidma.c
103
+++ b/hw/dma/xilinx_axidma.c
104
@@ -XXX,XX +XXX,XX @@ static void xilinx_axidma_realize(DeviceState *dev, Error **errp)
105
106
st->dma = s;
107
st->nr = i;
108
- st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_DEFAULT);
109
+ st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_LEGACY);
110
ptimer_transaction_begin(st->ptimer);
111
ptimer_set_freq(st->ptimer, s->freqhz);
112
ptimer_transaction_commit(st->ptimer);
113
diff --git a/hw/dma/xlnx_csu_dma.c b/hw/dma/xlnx_csu_dma.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/hw/dma/xlnx_csu_dma.c
116
+++ b/hw/dma/xlnx_csu_dma.c
117
@@ -XXX,XX +XXX,XX @@ static void xlnx_csu_dma_realize(DeviceState *dev, Error **errp)
118
sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->irq);
119
120
s->src_timer = ptimer_init(xlnx_csu_dma_src_timeout_hit,
121
- s, PTIMER_POLICY_DEFAULT);
122
+ s, PTIMER_POLICY_LEGACY);
123
124
s->attr = MEMTXATTRS_UNSPECIFIED;
125
126
diff --git a/hw/m68k/mcf5206.c b/hw/m68k/mcf5206.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/hw/m68k/mcf5206.c
129
+++ b/hw/m68k/mcf5206.c
130
@@ -XXX,XX +XXX,XX @@ static m5206_timer_state *m5206_timer_init(qemu_irq irq)
131
m5206_timer_state *s;
132
133
s = g_new0(m5206_timer_state, 1);
134
- s->timer = ptimer_init(m5206_timer_trigger, s, PTIMER_POLICY_DEFAULT);
135
+ s->timer = ptimer_init(m5206_timer_trigger, s, PTIMER_POLICY_LEGACY);
136
s->irq = irq;
137
m5206_timer_reset(s);
138
return s;
139
diff --git a/hw/m68k/mcf5208.c b/hw/m68k/mcf5208.c
140
index XXXXXXX..XXXXXXX 100644
141
--- a/hw/m68k/mcf5208.c
142
+++ b/hw/m68k/mcf5208.c
143
@@ -XXX,XX +XXX,XX @@ static void mcf5208_sys_init(MemoryRegion *address_space, qemu_irq *pic)
144
/* Timers. */
145
for (i = 0; i < 2; i++) {
146
s = g_new0(m5208_timer_state, 1);
147
- s->timer = ptimer_init(m5208_timer_trigger, s, PTIMER_POLICY_DEFAULT);
148
+ s->timer = ptimer_init(m5208_timer_trigger, s, PTIMER_POLICY_LEGACY);
149
memory_region_init_io(&s->iomem, NULL, &m5208_timer_ops, s,
150
"m5208-timer", 0x00004000);
151
memory_region_add_subregion(address_space, 0xfc080000 + 0x4000 * i,
152
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
153
index XXXXXXX..XXXXXXX 100644
154
--- a/hw/net/can/xlnx-zynqmp-can.c
155
+++ b/hw/net/can/xlnx-zynqmp-can.c
156
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)
157
158
/* Allocate a new timer. */
159
s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
160
- PTIMER_POLICY_DEFAULT);
161
+ PTIMER_POLICY_LEGACY);
162
163
ptimer_transaction_begin(s->can_timer);
164
165
diff --git a/hw/net/fsl_etsec/etsec.c b/hw/net/fsl_etsec/etsec.c
166
index XXXXXXX..XXXXXXX 100644
167
--- a/hw/net/fsl_etsec/etsec.c
168
+++ b/hw/net/fsl_etsec/etsec.c
169
@@ -XXX,XX +XXX,XX @@ static void etsec_realize(DeviceState *dev, Error **errp)
170
object_get_typename(OBJECT(dev)), dev->id, etsec);
171
qemu_format_nic_info_str(qemu_get_queue(etsec->nic), etsec->conf.macaddr.a);
172
173
- etsec->ptimer = ptimer_init(etsec_timer_hit, etsec, PTIMER_POLICY_DEFAULT);
174
+ etsec->ptimer = ptimer_init(etsec_timer_hit, etsec, PTIMER_POLICY_LEGACY);
175
ptimer_transaction_begin(etsec->ptimer);
176
ptimer_set_freq(etsec->ptimer, 100);
177
ptimer_transaction_commit(etsec->ptimer);
178
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
179
index XXXXXXX..XXXXXXX 100644
180
--- a/hw/net/lan9118.c
181
+++ b/hw/net/lan9118.c
182
@@ -XXX,XX +XXX,XX @@ static void lan9118_realize(DeviceState *dev, Error **errp)
183
s->pmt_ctrl = 1;
184
s->txp = &s->tx_packet;
185
186
- s->timer = ptimer_init(lan9118_tick, s, PTIMER_POLICY_DEFAULT);
187
+ s->timer = ptimer_init(lan9118_tick, s, PTIMER_POLICY_LEGACY);
188
ptimer_transaction_begin(s->timer);
189
ptimer_set_freq(s->timer, 10000);
190
ptimer_set_limit(s->timer, 0xffff, 1);
191
diff --git a/hw/rtc/exynos4210_rtc.c b/hw/rtc/exynos4210_rtc.c
192
index XXXXXXX..XXXXXXX 100644
193
--- a/hw/rtc/exynos4210_rtc.c
194
+++ b/hw/rtc/exynos4210_rtc.c
195
@@ -XXX,XX +XXX,XX @@ static void exynos4210_rtc_init(Object *obj)
196
Exynos4210RTCState *s = EXYNOS4210_RTC(obj);
197
SysBusDevice *dev = SYS_BUS_DEVICE(obj);
198
199
- s->ptimer = ptimer_init(exynos4210_rtc_tick, s, PTIMER_POLICY_DEFAULT);
200
+ s->ptimer = ptimer_init(exynos4210_rtc_tick, s, PTIMER_POLICY_LEGACY);
201
ptimer_transaction_begin(s->ptimer);
202
ptimer_set_freq(s->ptimer, RTC_BASE_FREQ);
203
exynos4210_rtc_update_freq(s, 0);
204
ptimer_transaction_commit(s->ptimer);
205
206
s->ptimer_1Hz = ptimer_init(exynos4210_rtc_1Hz_tick,
207
- s, PTIMER_POLICY_DEFAULT);
208
+ s, PTIMER_POLICY_LEGACY);
209
ptimer_transaction_begin(s->ptimer_1Hz);
210
ptimer_set_freq(s->ptimer_1Hz, RTC_BASE_FREQ);
211
ptimer_transaction_commit(s->ptimer_1Hz);
212
diff --git a/hw/timer/allwinner-a10-pit.c b/hw/timer/allwinner-a10-pit.c
213
index XXXXXXX..XXXXXXX 100644
214
--- a/hw/timer/allwinner-a10-pit.c
215
+++ b/hw/timer/allwinner-a10-pit.c
216
@@ -XXX,XX +XXX,XX @@ static void a10_pit_init(Object *obj)
217
218
tc->container = s;
219
tc->index = i;
220
- s->timer[i] = ptimer_init(a10_pit_timer_cb, tc, PTIMER_POLICY_DEFAULT);
221
+ s->timer[i] = ptimer_init(a10_pit_timer_cb, tc, PTIMER_POLICY_LEGACY);
222
}
223
}
224
225
diff --git a/hw/timer/altera_timer.c b/hw/timer/altera_timer.c
226
index XXXXXXX..XXXXXXX 100644
227
--- a/hw/timer/altera_timer.c
228
+++ b/hw/timer/altera_timer.c
229
@@ -XXX,XX +XXX,XX @@ static void altera_timer_realize(DeviceState *dev, Error **errp)
230
return;
231
}
232
233
- t->ptimer = ptimer_init(timer_hit, t, PTIMER_POLICY_DEFAULT);
234
+ t->ptimer = ptimer_init(timer_hit, t, PTIMER_POLICY_LEGACY);
235
ptimer_transaction_begin(t->ptimer);
236
ptimer_set_freq(t->ptimer, t->freq_hz);
237
ptimer_transaction_commit(t->ptimer);
238
diff --git a/hw/timer/arm_timer.c b/hw/timer/arm_timer.c
239
index XXXXXXX..XXXXXXX 100644
240
--- a/hw/timer/arm_timer.c
241
+++ b/hw/timer/arm_timer.c
242
@@ -XXX,XX +XXX,XX @@ static arm_timer_state *arm_timer_init(uint32_t freq)
243
s->freq = freq;
244
s->control = TIMER_CTRL_IE;
245
246
- s->timer = ptimer_init(arm_timer_tick, s, PTIMER_POLICY_DEFAULT);
247
+ s->timer = ptimer_init(arm_timer_tick, s, PTIMER_POLICY_LEGACY);
248
vmstate_register(NULL, VMSTATE_INSTANCE_ID_ANY, &vmstate_arm_timer, s);
249
return s;
250
}
251
diff --git a/hw/timer/digic-timer.c b/hw/timer/digic-timer.c
252
index XXXXXXX..XXXXXXX 100644
253
--- a/hw/timer/digic-timer.c
254
+++ b/hw/timer/digic-timer.c
255
@@ -XXX,XX +XXX,XX @@ static void digic_timer_init(Object *obj)
256
{
257
DigicTimerState *s = DIGIC_TIMER(obj);
258
259
- s->ptimer = ptimer_init(digic_timer_tick, NULL, PTIMER_POLICY_DEFAULT);
260
+ s->ptimer = ptimer_init(digic_timer_tick, NULL, PTIMER_POLICY_LEGACY);
261
262
/*
263
* FIXME: there is no documentation on Digic timer
264
diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
265
index XXXXXXX..XXXXXXX 100644
266
--- a/hw/timer/etraxfs_timer.c
267
+++ b/hw/timer/etraxfs_timer.c
268
@@ -XXX,XX +XXX,XX @@ static void etraxfs_timer_realize(DeviceState *dev, Error **errp)
269
ETRAXTimerState *t = ETRAX_TIMER(dev);
270
SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
271
272
- t->ptimer_t0 = ptimer_init(timer0_hit, t, PTIMER_POLICY_DEFAULT);
273
- t->ptimer_t1 = ptimer_init(timer1_hit, t, PTIMER_POLICY_DEFAULT);
274
- t->ptimer_wd = ptimer_init(watchdog_hit, t, PTIMER_POLICY_DEFAULT);
275
+ t->ptimer_t0 = ptimer_init(timer0_hit, t, PTIMER_POLICY_LEGACY);
276
+ t->ptimer_t1 = ptimer_init(timer1_hit, t, PTIMER_POLICY_LEGACY);
277
+ t->ptimer_wd = ptimer_init(watchdog_hit, t, PTIMER_POLICY_LEGACY);
278
279
sysbus_init_irq(sbd, &t->irq);
280
sysbus_init_irq(sbd, &t->nmi);
281
diff --git a/hw/timer/exynos4210_mct.c b/hw/timer/exynos4210_mct.c
282
index XXXXXXX..XXXXXXX 100644
283
--- a/hw/timer/exynos4210_mct.c
284
+++ b/hw/timer/exynos4210_mct.c
285
@@ -XXX,XX +XXX,XX @@ static void exynos4210_mct_init(Object *obj)
286
287
/* Global timer */
288
s->g_timer.ptimer_frc = ptimer_init(exynos4210_gfrc_event, s,
289
- PTIMER_POLICY_DEFAULT);
290
+ PTIMER_POLICY_LEGACY);
291
memset(&s->g_timer.reg, 0, sizeof(struct gregs));
292
293
/* Local timers */
294
for (i = 0; i < 2; i++) {
295
s->l_timer[i].tick_timer.ptimer_tick =
296
ptimer_init(exynos4210_ltick_event, &s->l_timer[i],
297
- PTIMER_POLICY_DEFAULT);
298
+ PTIMER_POLICY_LEGACY);
299
s->l_timer[i].ptimer_frc =
300
ptimer_init(exynos4210_lfrc_event, &s->l_timer[i],
301
- PTIMER_POLICY_DEFAULT);
302
+ PTIMER_POLICY_LEGACY);
303
s->l_timer[i].id = i;
304
}
305
306
diff --git a/hw/timer/exynos4210_pwm.c b/hw/timer/exynos4210_pwm.c
307
index XXXXXXX..XXXXXXX 100644
308
--- a/hw/timer/exynos4210_pwm.c
309
+++ b/hw/timer/exynos4210_pwm.c
310
@@ -XXX,XX +XXX,XX @@ static void exynos4210_pwm_init(Object *obj)
311
sysbus_init_irq(dev, &s->timer[i].irq);
312
s->timer[i].ptimer = ptimer_init(exynos4210_pwm_tick,
313
&s->timer[i],
314
- PTIMER_POLICY_DEFAULT);
315
+ PTIMER_POLICY_LEGACY);
316
s->timer[i].id = i;
317
s->timer[i].parent = s;
318
}
319
diff --git a/hw/timer/grlib_gptimer.c b/hw/timer/grlib_gptimer.c
320
index XXXXXXX..XXXXXXX 100644
321
--- a/hw/timer/grlib_gptimer.c
322
+++ b/hw/timer/grlib_gptimer.c
323
@@ -XXX,XX +XXX,XX @@ static void grlib_gptimer_realize(DeviceState *dev, Error **errp)
324
325
timer->unit = unit;
326
timer->ptimer = ptimer_init(grlib_gptimer_hit, timer,
327
- PTIMER_POLICY_DEFAULT);
328
+ PTIMER_POLICY_LEGACY);
329
timer->id = i;
330
331
/* One IRQ line for each timer */
332
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
333
index XXXXXXX..XXXXXXX 100644
334
--- a/hw/timer/imx_epit.c
335
+++ b/hw/timer/imx_epit.c
336
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
337
0x00001000);
338
sysbus_init_mmio(sbd, &s->iomem);
339
340
- s->timer_reload = ptimer_init(imx_epit_reload, s, PTIMER_POLICY_DEFAULT);
341
+ s->timer_reload = ptimer_init(imx_epit_reload, s, PTIMER_POLICY_LEGACY);
342
343
- s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_DEFAULT);
344
+ s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
345
}
346
347
static void imx_epit_class_init(ObjectClass *klass, void *data)
348
diff --git a/hw/timer/imx_gpt.c b/hw/timer/imx_gpt.c
349
index XXXXXXX..XXXXXXX 100644
350
--- a/hw/timer/imx_gpt.c
351
+++ b/hw/timer/imx_gpt.c
352
@@ -XXX,XX +XXX,XX @@ static void imx_gpt_realize(DeviceState *dev, Error **errp)
353
0x00001000);
354
sysbus_init_mmio(sbd, &s->iomem);
355
356
- s->timer = ptimer_init(imx_gpt_timeout, s, PTIMER_POLICY_DEFAULT);
357
+ s->timer = ptimer_init(imx_gpt_timeout, s, PTIMER_POLICY_LEGACY);
358
}
359
360
static void imx_gpt_class_init(ObjectClass *klass, void *data)
361
diff --git a/hw/timer/mss-timer.c b/hw/timer/mss-timer.c
362
index XXXXXXX..XXXXXXX 100644
363
--- a/hw/timer/mss-timer.c
364
+++ b/hw/timer/mss-timer.c
365
@@ -XXX,XX +XXX,XX @@ static void mss_timer_init(Object *obj)
366
for (i = 0; i < NUM_TIMERS; i++) {
367
struct Msf2Timer *st = &t->timers[i];
368
369
- st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_DEFAULT);
370
+ st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_LEGACY);
371
ptimer_transaction_begin(st->ptimer);
372
ptimer_set_freq(st->ptimer, t->freq_hz);
373
ptimer_transaction_commit(st->ptimer);
374
diff --git a/hw/timer/sh_timer.c b/hw/timer/sh_timer.c
375
index XXXXXXX..XXXXXXX 100644
376
--- a/hw/timer/sh_timer.c
377
+++ b/hw/timer/sh_timer.c
378
@@ -XXX,XX +XXX,XX @@ static void *sh_timer_init(uint32_t freq, int feat, qemu_irq irq)
379
s->enabled = 0;
380
s->irq = irq;
381
382
- s->timer = ptimer_init(sh_timer_tick, s, PTIMER_POLICY_DEFAULT);
383
+ s->timer = ptimer_init(sh_timer_tick, s, PTIMER_POLICY_LEGACY);
384
385
sh_timer_write(s, OFFSET_TCOR >> 2, s->tcor);
386
sh_timer_write(s, OFFSET_TCNT >> 2, s->tcnt);
387
diff --git a/hw/timer/slavio_timer.c b/hw/timer/slavio_timer.c
388
index XXXXXXX..XXXXXXX 100644
389
--- a/hw/timer/slavio_timer.c
390
+++ b/hw/timer/slavio_timer.c
391
@@ -XXX,XX +XXX,XX @@ static void slavio_timer_init(Object *obj)
392
tc->timer_index = i;
393
394
s->cputimer[i].timer = ptimer_init(slavio_timer_irq, tc,
395
- PTIMER_POLICY_DEFAULT);
396
+ PTIMER_POLICY_LEGACY);
397
ptimer_transaction_begin(s->cputimer[i].timer);
398
ptimer_set_period(s->cputimer[i].timer, TIMER_PERIOD);
399
ptimer_transaction_commit(s->cputimer[i].timer);
400
diff --git a/hw/timer/xilinx_timer.c b/hw/timer/xilinx_timer.c
401
index XXXXXXX..XXXXXXX 100644
402
--- a/hw/timer/xilinx_timer.c
403
+++ b/hw/timer/xilinx_timer.c
404
@@ -XXX,XX +XXX,XX @@ static void xilinx_timer_realize(DeviceState *dev, Error **errp)
405
406
xt->parent = t;
407
xt->nr = i;
408
- xt->ptimer = ptimer_init(timer_hit, xt, PTIMER_POLICY_DEFAULT);
409
+ xt->ptimer = ptimer_init(timer_hit, xt, PTIMER_POLICY_LEGACY);
410
ptimer_transaction_begin(xt->ptimer);
411
ptimer_set_freq(xt->ptimer, t->freq_hz);
412
ptimer_transaction_commit(xt->ptimer);
413
diff --git a/tests/unit/ptimer-test.c b/tests/unit/ptimer-test.c
414
index XXXXXXX..XXXXXXX 100644
415
--- a/tests/unit/ptimer-test.c
416
+++ b/tests/unit/ptimer-test.c
417
@@ -XXX,XX +XXX,XX @@ static void add_ptimer_tests(uint8_t policy)
418
char policy_name[256] = "";
419
char *tmp;
420
421
- if (policy == PTIMER_POLICY_DEFAULT) {
422
- g_sprintf(policy_name, "default");
423
+ if (policy == PTIMER_POLICY_LEGACY) {
424
+ g_sprintf(policy_name, "legacy");
425
}
426
427
if (policy & PTIMER_POLICY_WRAP_AFTER_ONE_PERIOD) {
428
@@ -XXX,XX +XXX,XX @@ static void add_ptimer_tests(uint8_t policy)
429
static void add_all_ptimer_policies_comb_tests(void)
430
{
431
int last_policy = PTIMER_POLICY_TRIGGER_ONLY_ON_DECREMENT;
432
- int policy = PTIMER_POLICY_DEFAULT;
433
+ int policy = PTIMER_POLICY_LEGACY;
434
435
for (; policy < (last_policy << 1); policy++) {
436
if ((policy & PTIMER_POLICY_TRIGGER_ONLY_ON_DECREMENT) &&
437
--
438
2.25.1
Deleted patch
1
From: Florian Lugou <florian.lugou@provenrun.com>
2
1
3
As per the description of the HCR_EL2.APK field in the ARMv8 ARM,
4
Pointer Authentication key accesses should only be trapped to Secure
5
EL2 if it is enabled.
6
7
Signed-off-by: Florian Lugou <florian.lugou@provenrun.com>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220517145242.1215271-1-florian.lugou@provenrun.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.c | 2 +-
13
1 file changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri,
20
int el = arm_current_el(env);
21
22
if (el < 2 &&
23
- arm_feature(env, ARM_FEATURE_EL2) &&
24
+ arm_is_el2_enabled(env) &&
25
!(arm_hcr_el2_eff(env) & HCR_APK)) {
26
return CP_ACCESS_TRAP_EL2;
27
}
28
--
29
2.25.1
Deleted patch
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1
3
This feature adds a new register, HCRX_EL2, which controls
4
many of the newer AArch64 features. So far the register is
5
effectively RES0, because none of those features are implemented yet.
6
7
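For illustration only (a hypothetical future consumer, not code in this patch), a feature gated by one of these bits would test the effective value rather than the raw register:

    if (arm_hcrx_el2_eff(env) & HCRX_MSCEN) {
        /* behave as if the feature controlled by MSCEN is enabled */
    }

and arm_hcrx_el2_eff() returns 0 whenever EL2 is disabled in the current security state or SCR_EL3.HXEn is clear, as the comment in the patch below describes.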
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20220517054850.177016-2-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/cpu.h | 20 ++++++++++++++++++
13
target/arm/cpu64.c | 1 +
14
target/arm/helper.c | 50 +++++++++++++++++++++++++++++++++++++++++++++
15
3 files changed, 71 insertions(+)
16
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
22
uint32_t pmsav5_data_ap; /* PMSAv5 MPU data access permissions */
23
uint32_t pmsav5_insn_ap; /* PMSAv5 MPU insn access permissions */
24
uint64_t hcr_el2; /* Hypervisor configuration register */
25
+ uint64_t hcrx_el2; /* Extended Hypervisor configuration register */
26
uint64_t scr_el3; /* Secure configuration register. */
union { /* Fault status registers. */
struct {
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
#define HCR_TWEDEN (1ULL << 59)
#define HCR_TWEDEL MAKE_64BIT_MASK(60, 4)

+#define HCRX_ENAS0 (1ULL << 0)
+#define HCRX_ENALS (1ULL << 1)
+#define HCRX_ENASR (1ULL << 2)
+#define HCRX_FNXS (1ULL << 3)
+#define HCRX_FGTNXS (1ULL << 4)
+#define HCRX_SMPME (1ULL << 5)
+#define HCRX_TALLINT (1ULL << 6)
+#define HCRX_VINMI (1ULL << 7)
+#define HCRX_VFNMI (1ULL << 8)
+#define HCRX_CMOW (1ULL << 9)
+#define HCRX_MCE2 (1ULL << 10)
+#define HCRX_MSCEN (1ULL << 11)
+
#define HPFAR_NS (1ULL << 63)

#define SCR_NS (1U << 0)
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
* Not included here is HCR_RW.
*/
uint64_t arm_hcr_el2_eff(CPUARMState *env);
+uint64_t arm_hcrx_el2_eff(CPUARMState *env);

/* Return true if the specified exception level is running in AArch64 state. */
static inline bool arm_el_is_aa64(CPUARMState *env, int el)
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
}

+static inline bool isar_feature_aa64_hcx(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HCX) != 0;
+}
+
static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* FEAT_PAN2 */
t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
+ t = FIELD_DP64(t, ID_AA64MMFR1, HCX, 1); /* FEAT_HCX */
cpu->isar.id_aa64mmfr1 = t;

t = cpu->isar.id_aa64mmfr2;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
return ret;
}

+static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
+ uint64_t value)
+{
+ uint64_t valid_mask = 0;
+
+ /* No features adding bits to HCRX are implemented. */
+
+ /* Clear RES0 bits. */
+ env->cp15.hcrx_el2 = value & valid_mask;
+}
+
+static CPAccessResult access_hxen(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ if (arm_current_el(env) < 3
+ && arm_feature(env, ARM_FEATURE_EL3)
+ && !(env->cp15.scr_el3 & SCR_HXEN)) {
+ return CP_ACCESS_TRAP_EL3;
+ }
+ return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo hcrx_el2_reginfo = {
+ .name = "HCRX_EL2", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 2,
+ .access = PL2_RW, .writefn = hcrx_write, .accessfn = access_hxen,
+ .fieldoffset = offsetof(CPUARMState, cp15.hcrx_el2),
+};
+
+/* Return the effective value of HCRX_EL2. */
+uint64_t arm_hcrx_el2_eff(CPUARMState *env)
+{
+ /*
+ * The bits in this register behave as 0 for all purposes other than
+ * direct reads of the register if:
+ * - EL2 is not enabled in the current security state,
+ * - SCR_EL3.HXEn is 0.
+ */
+ if (!arm_is_el2_enabled(env)
+ || (arm_feature(env, ARM_FEATURE_EL3)
+ && !(env->cp15.scr_el3 & SCR_HXEN))) {
+ return 0;
+ }
+ return env->cp15.hcrx_el2;
+}
+
static void cptr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
{
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_arm_cp_regs(cpu, zcr_reginfo);
}

+ if (cpu_isar_feature(aa64_hcx, cpu)) {
+ define_one_arm_cp_reg(cpu, &hcrx_el2_reginfo);
+ }
+
#ifdef TARGET_AARCH64
if (cpu_isar_feature(aa64_pauth, cpu)) {
define_arm_cp_regs(cpu, pauth_reginfo);
--
2.25.1

Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

We had a few CPTR_* bits defined, but missed quite a few.
Complete all of the fields up to ARMv9.2.
Use FIELD_EX64 instead of manual extract32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517054850.177016-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 44 +++++++++++++++++++++++++++++++-----
hw/arm/boot.c | 2 +-
target/arm/cpu.c | 11 ++++++---
target/arm/helper.c | 54 ++++++++++++++++++++++-----------------------
4 files changed, 75 insertions(+), 36 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
#define SCTLR_SPINTMASK (1ULL << 62) /* FEAT_NMI */
#define SCTLR_TIDCP (1ULL << 63) /* FEAT_TIDCP1 */

-#define CPTR_TCPAC (1U << 31)
-#define CPTR_TTA (1U << 20)
-#define CPTR_TFP (1U << 10)
-#define CPTR_TZ (1U << 8) /* CPTR_EL2 */
-#define CPTR_EZ (1U << 8) /* CPTR_EL3 */
+/* Bit definitions for CPACR (AArch32 only) */
+FIELD(CPACR, CP10, 20, 2)
+FIELD(CPACR, CP11, 22, 2)
+FIELD(CPACR, TRCDIS, 28, 1) /* matches CPACR_EL1.TTA */
+FIELD(CPACR, D32DIS, 30, 1) /* up to v7; RAZ in v8 */
+FIELD(CPACR, ASEDIS, 31, 1)
+
+/* Bit definitions for CPACR_EL1 (AArch64 only) */
+FIELD(CPACR_EL1, ZEN, 16, 2)
+FIELD(CPACR_EL1, FPEN, 20, 2)
+FIELD(CPACR_EL1, SMEN, 24, 2)
+FIELD(CPACR_EL1, TTA, 28, 1) /* matches CPACR.TRCDIS */
+
+/* Bit definitions for HCPTR (AArch32 only) */
+FIELD(HCPTR, TCP10, 10, 1)
+FIELD(HCPTR, TCP11, 11, 1)
+FIELD(HCPTR, TASE, 15, 1)
+FIELD(HCPTR, TTA, 20, 1)
+FIELD(HCPTR, TAM, 30, 1) /* matches CPTR_EL2.TAM */
+FIELD(HCPTR, TCPAC, 31, 1) /* matches CPTR_EL2.TCPAC */
+
+/* Bit definitions for CPTR_EL2 (AArch64 only) */
+FIELD(CPTR_EL2, TZ, 8, 1) /* !E2H */
+FIELD(CPTR_EL2, TFP, 10, 1) /* !E2H, matches HCPTR.TCP10 */
+FIELD(CPTR_EL2, TSM, 12, 1) /* !E2H */
+FIELD(CPTR_EL2, ZEN, 16, 2) /* E2H */
+FIELD(CPTR_EL2, FPEN, 20, 2) /* E2H */
+FIELD(CPTR_EL2, SMEN, 24, 2) /* E2H */
+FIELD(CPTR_EL2, TTA, 28, 1)
+FIELD(CPTR_EL2, TAM, 30, 1) /* matches HCPTR.TAM */
+FIELD(CPTR_EL2, TCPAC, 31, 1) /* matches HCPTR.TCPAC */
+
+/* Bit definitions for CPTR_EL3 (AArch64 only) */
+FIELD(CPTR_EL3, EZ, 8, 1)
+FIELD(CPTR_EL3, TFP, 10, 1)
+FIELD(CPTR_EL3, ESM, 12, 1)
+FIELD(CPTR_EL3, TTA, 20, 1)
+FIELD(CPTR_EL3, TAM, 30, 1)
+FIELD(CPTR_EL3, TCPAC, 31, 1)

#define MDCR_EPMAD (1U << 21)
#define MDCR_EDAD (1U << 20)
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
env->cp15.scr_el3 |= SCR_ATA;
}
if (cpu_isar_feature(aa64_sve, cpu)) {
- env->cp15.cptr_el[3] |= CPTR_EZ;
+ env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
}
/* AArch64 kernels never boot in secure mode */
assert(!info->secure_boot);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
/* Trap on btype=3 for PACIxSP. */
env->cp15.sctlr_el[1] |= SCTLR_BT0;
/* and to the FP/Neon instructions */
- env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
+ env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+ CPACR_EL1, FPEN, 3);
/* and to the SVE instructions */
- env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
+ env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+ CPACR_EL1, ZEN, 3);
/* with reasonable vector length */
if (cpu_isar_feature(aa64_sve, cpu)) {
env->vfp.zcr_el[1] =
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
} else {
#if defined(CONFIG_USER_ONLY)
/* Userspace expects access to cp10 and cp11 for FP/Neon */
- env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 4, 0xf);
+ env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+ CPACR, CP10, 3);
+ env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+ CPACR, CP11, 3);
#endif
}

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
*/
if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) {
/* VFP coprocessor: cp10 & cp11 [23:20] */
- mask |= (1 << 31) | (1 << 30) | (0xf << 20);
+ mask |= R_CPACR_ASEDIS_MASK |
+ R_CPACR_D32DIS_MASK |
+ R_CPACR_CP11_MASK |
+ R_CPACR_CP10_MASK;

if (!arm_feature(env, ARM_FEATURE_NEON)) {
/* ASEDIS [31] bit is RAO/WI */
- value |= (1 << 31);
+ value |= R_CPACR_ASEDIS_MASK;
}

/* VFPv3 and upwards with NEON implement 32 double precision
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
*/
if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) {
/* D32DIS [30] is RAO/WI if D16-31 are not implemented. */
- value |= (1 << 30);
+ value |= R_CPACR_D32DIS_MASK;
}
}
value &= mask;
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
*/
if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
!arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
- value &= ~(0xf << 20);
- value |= env->cp15.cpacr_el1 & (0xf << 20);
+ mask = R_CPACR_CP11_MASK | R_CPACR_CP10_MASK;
+ value = (value & ~mask) | (env->cp15.cpacr_el1 & mask);
}

env->cp15.cpacr_el1 = value;
@@ -XXX,XX +XXX,XX @@ static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri)

if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
!arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
- value &= ~(0xf << 20);
+ value = ~(R_CPACR_CP11_MASK | R_CPACR_CP10_MASK);
}
return value;
}
@@ -XXX,XX +XXX,XX @@ static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
if (arm_feature(env, ARM_FEATURE_V8)) {
/* Check if CPACR accesses are to be trapped to EL2 */
if (arm_current_el(env) == 1 && arm_is_el2_enabled(env) &&
- (env->cp15.cptr_el[2] & CPTR_TCPAC)) {
+ FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TCPAC)) {
return CP_ACCESS_TRAP_EL2;
/* Check if CPACR accesses are to be trapped to EL3 */
} else if (arm_current_el(env) < 3 &&
- (env->cp15.cptr_el[3] & CPTR_TCPAC)) {
+ FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, TCPAC)) {
return CP_ACCESS_TRAP_EL3;
}
}
@@ -XXX,XX +XXX,XX @@ static CPAccessResult cptr_access(CPUARMState *env, const ARMCPRegInfo *ri,
bool isread)
{
/* Check if CPTR accesses are set to trap to EL3 */
- if (arm_current_el(env) == 2 && (env->cp15.cptr_el[3] & CPTR_TCPAC)) {
+ if (arm_current_el(env) == 2 &&
+ FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, TCPAC)) {
return CP_ACCESS_TRAP_EL3;
}

@@ -XXX,XX +XXX,XX @@ static void cptr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
*/
if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
!arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
- value &= ~(0x3 << 10);
- value |= env->cp15.cptr_el[2] & (0x3 << 10);
+ uint64_t mask = R_HCPTR_TCP11_MASK | R_HCPTR_TCP10_MASK;
+ value = (value & ~mask) | (env->cp15.cptr_el[2] & mask);
}
env->cp15.cptr_el[2] = value;
}
@@ -XXX,XX +XXX,XX @@ static uint64_t cptr_el2_read(CPUARMState *env, const ARMCPRegInfo *ri)

if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
!arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
- value |= 0x3 << 10;
+ value |= R_HCPTR_TCP11_MASK | R_HCPTR_TCP10_MASK;
}
return value;
}
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
uint64_t hcr_el2 = arm_hcr_el2_eff(env);

if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
- /* Check CPACR.ZEN. */
- switch (extract32(env->cp15.cpacr_el1, 16, 2)) {
+ switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, ZEN)) {
case 1:
if (el != 0) {
break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
}

/* Check CPACR.FPEN. */
- switch (extract32(env->cp15.cpacr_el1, 20, 2)) {
+ switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, FPEN)) {
case 1:
if (el != 0) {
break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
*/
if (el <= 2) {
if (hcr_el2 & HCR_E2H) {
- /* Check CPTR_EL2.ZEN. */
- switch (extract32(env->cp15.cptr_el[2], 16, 2)) {
+ switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, ZEN)) {
case 1:
if (el != 0 || !(hcr_el2 & HCR_TGE)) {
break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
return 2;
}

- /* Check CPTR_EL2.FPEN. */
- switch (extract32(env->cp15.cptr_el[2], 20, 2)) {
+ switch (FIELD_EX32(env->cp15.cptr_el[2], CPTR_EL2, FPEN)) {
case 1:
if (el == 2 || !(hcr_el2 & HCR_TGE)) {
break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
return 0;
}
} else if (arm_is_el2_enabled(env)) {
- if (env->cp15.cptr_el[2] & CPTR_TZ) {
+ if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TZ)) {
return 2;
}
- if (env->cp15.cptr_el[2] & CPTR_TFP) {
+ if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TFP)) {
return 0;
}
}
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)

/* CPTR_EL3. Since EZ is negative we must check for EL3. */
if (arm_feature(env, ARM_FEATURE_EL3)
- && !(env->cp15.cptr_el[3] & CPTR_EZ)) {
+ && !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, EZ)) {
return 3;
}
#endif
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
* This register is ignored if E2H+TGE are both set.
*/
if ((hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
- int fpen = extract32(env->cp15.cpacr_el1, 20, 2);
+ int fpen = FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, FPEN);

switch (fpen) {
case 0:
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
*/
if (cur_el <= 2) {
if (hcr_el2 & HCR_E2H) {
- /* Check CPTR_EL2.FPEN. */
- switch (extract32(env->cp15.cptr_el[2], 20, 2)) {
+ switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, FPEN)) {
case 1:
if (cur_el != 0 || !(hcr_el2 & HCR_TGE)) {
break;
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
return 2;
}
} else if (arm_is_el2_enabled(env)) {
- if (env->cp15.cptr_el[2] & CPTR_TFP) {
+ if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TFP)) {
return 2;
}
}
}

/* CPTR_EL3 : present in v8 */
- if (env->cp15.cptr_el[3] & CPTR_TFP) {
+ if (FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, TFP)) {
/* Trap all FP ops to EL3 */
return 3;
}
--
2.25.1
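
For readers not already familiar with the FIELD macros that the deleted patch above converts to, the following is a minimal standalone sketch of the pattern. It uses simplified stand-ins for QEMU's extract64()/deposit64() helpers and the FIELD/FIELD_EX64/FIELD_DP64 macros from include/hw/registerfields.h; it is illustrative only, not the real definitions (the real FIELD() additionally defines an R_<reg>_<field>_MASK constant, which is what names like R_CPTR_EL3_EZ_MASK in the hw/arm/boot.c hunk refer to).

/*
 * Minimal, simplified stand-ins for QEMU's registerfields.h macros,
 * for illustration only.
 */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified equivalents of QEMU's extract64()/deposit64() helpers. */
static inline uint64_t extract64(uint64_t value, int start, int length)
{
    return (value >> start) & (~0ULL >> (64 - length));
}

static inline uint64_t deposit64(uint64_t value, int start, int length,
                                 uint64_t fieldval)
{
    uint64_t mask = (~0ULL >> (64 - length)) << start;
    return (value & ~mask) | ((fieldval << start) & mask);
}

/*
 * FIELD() defines named shift/length constants for a register field;
 * the real QEMU macro also defines an R_<reg>_<field>_MASK constant.
 */
#define FIELD(reg, field, shift, length)                         \
    enum { R_ ## reg ## _ ## field ## _SHIFT = (shift),          \
           R_ ## reg ## _ ## field ## _LENGTH = (length) };

/* Extract/deposit a named field from/into a 64-bit register value. */
#define FIELD_EX64(storage, reg, field)                          \
    extract64((storage), R_ ## reg ## _ ## field ## _SHIFT,      \
              R_ ## reg ## _ ## field ## _LENGTH)
#define FIELD_DP64(storage, reg, field, val)                     \
    deposit64((storage), R_ ## reg ## _ ## field ## _SHIFT,      \
              R_ ## reg ## _ ## field ## _LENGTH, (val))

/* Two of the fields the patch defines for CPACR_EL1. */
FIELD(CPACR_EL1, ZEN, 16, 2)
FIELD(CPACR_EL1, FPEN, 20, 2)

int main(void)
{
    uint64_t cpacr = 0;

    /* Old style: deposit64(cpacr, 20, 2, 3); new style names the field: */
    cpacr = FIELD_DP64(cpacr, CPACR_EL1, FPEN, 3);
    cpacr = FIELD_DP64(cpacr, CPACR_EL1, ZEN, 3);

    /* Old style: extract32(cpacr, 20, 2); new style: */
    assert(FIELD_EX64(cpacr, CPACR_EL1, FPEN) == 3);

    printf("cpacr_el1 = 0x%" PRIx64 "\n", cpacr);   /* prints 0x330000 */
    return 0;
}

The point of the conversion is that callers name the register and field (for example CPACR_EL1.FPEN) rather than hard-coding shift/length pairs such as 20 and 2 at every use site.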