target-arm queue: mostly patches from me this time round.
Nothing too exciting.

-- PMM
The following changes since commit 78ac2eebbab9150edf5d0d00e3648f5ebb599001:

  Merge tag 'artist-cursor-fix-final-pull-request' of https://github.com/hdeller/qemu-hppa into staging (2022-05-18 09:32:15 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220519

for you to fetch changes up to fab8ad39fb75a0d9f097db67b2a334444754e88e:

  target/arm: Use FIELD definitions for CPACR, CPTR_ELx (2022-05-19 18:34:10 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement FEAT_S2FWB
 * Implement FEAT_IDST
 * Drop unsupported_encoding() macro
 * hw/intc/arm_gicv3: Use correct number of priority bits for the CPU
 * Fix aarch64 debug register names
 * hw/adc/zynq-xadc: Use qemu_irq typedef
 * target/arm/helper.c: Delete stray obsolete comment
 * Make number of counters in PMCR follow the CPU
 * hw/arm/virt: Fix dtb nits
 * ptimer: Rename PTIMER_POLICY_DEFAULT to PTIMER_POLICY_LEGACY
 * target/arm: Fix PAuth keys access checks for disabled SEL2
 * Enable FEAT_HCX for -cpu max
 * Use FIELD definitions for CPACR, CPTR_ELx

----------------------------------------------------------------
Chris Howard (1):
      Fix aarch64 debug register names.

Florian Lugou (1):
      target/arm: Fix PAuth keys access checks for disabled SEL2

Peter Maydell (17):
      target/arm: Postpone interpretation of stage 2 descriptor attribute bits
      target/arm: Factor out FWB=0 specific part of combine_cacheattrs()
      target/arm: Implement FEAT_S2FWB
      target/arm: Enable FEAT_S2FWB for -cpu max
      target/arm: Implement FEAT_IDST
      target/arm: Drop unsupported_encoding() macro
      hw/intc/arm_gicv3_cpuif: Handle CPUs that don't specify GICv3 parameters
      hw/intc/arm_gicv3: report correct PRIbits field in ICV_CTLR_EL1
      hw/intc/arm_gicv3_kvm.c: Stop using GIC_MIN_BPR constant
      hw/intc/arm_gicv3: Support configurable number of physical priority bits
      hw/intc/arm_gicv3: Use correct number of priority bits for the CPU
      hw/intc/arm_gicv3: Provide ich_num_aprs()
      target/arm/helper.c: Delete stray obsolete comment
      target/arm: Make number of counters in PMCR follow the CPU
      hw/arm/virt: Fix incorrect non-secure flash dtb node name
      hw/arm/virt: Drop #size-cells and #address-cells from gpio-keys dtb node
      ptimer: Rename PTIMER_POLICY_DEFAULT to PTIMER_POLICY_LEGACY

Philippe Mathieu-Daudé (1):
      hw/adc/zynq-xadc: Use qemu_irq typedef

Richard Henderson (2):
      target/arm: Enable FEAT_HCX for -cpu max
      target/arm: Use FIELD definitions for CPACR, CPTR_ELx

 docs/system/arm/emulation.rst      |   2 +
 include/hw/adc/zynq-xadc.h         |   3 +-
 include/hw/intc/arm_gicv3_common.h |   8 +-
 include/hw/ptimer.h                |  16 +-
 target/arm/cpregs.h                |  24 +++
 target/arm/cpu.h                   |  76 +++++++-
 target/arm/internals.h             |  11 +-
 target/arm/translate-a64.h         |   9 -
 hw/adc/zynq-xadc.c                 |   4 +-
 hw/arm/boot.c                      |   2 +-
 hw/arm/musicpal.c                  |   2 +-
 hw/arm/virt.c                      |   4 +-
 hw/core/machine.c                  |   4 +-
 hw/dma/xilinx_axidma.c             |   2 +-
 hw/dma/xlnx_csu_dma.c              |   2 +-
 hw/intc/arm_gicv3_common.c         |   5 +
 hw/intc/arm_gicv3_cpuif.c          | 225 +++++++++++++++++-------
 hw/intc/arm_gicv3_kvm.c            |  16 +-
 hw/m68k/mcf5206.c                  |   2 +-
 hw/m68k/mcf5208.c                  |   2 +-
 hw/net/can/xlnx-zynqmp-can.c       |   2 +-
 hw/net/fsl_etsec/etsec.c           |   2 +-
 hw/net/lan9118.c                   |   2 +-
 hw/rtc/exynos4210_rtc.c            |   4 +-
 hw/timer/allwinner-a10-pit.c       |   2 +-
 hw/timer/altera_timer.c            |   2 +-
 hw/timer/arm_timer.c               |   2 +-
 hw/timer/digic-timer.c             |   2 +-
 hw/timer/etraxfs_timer.c           |   6 +-
 hw/timer/exynos4210_mct.c          |   6 +-
 hw/timer/exynos4210_pwm.c          |   2 +-
 hw/timer/grlib_gptimer.c           |   2 +-
 hw/timer/imx_epit.c                |   4 +-
 hw/timer/imx_gpt.c                 |   2 +-
 hw/timer/mss-timer.c               |   2 +-
 hw/timer/sh_timer.c                |   2 +-
 hw/timer/slavio_timer.c            |   2 +-
 hw/timer/xilinx_timer.c            |   2 +-
 target/arm/cpu.c                   |  11 +-
 target/arm/cpu64.c                 |  30 ++++
 target/arm/cpu_tcg.c               |   6 +
 target/arm/helper.c                | 348 ++++++++++++++++++++++++++++---------
 target/arm/kvm64.c                 |  12 ++
 target/arm/op_helper.c             |   9 +
 target/arm/translate-a64.c         |  36 +++-
 tests/unit/ptimer-test.c           |   6 +-
 46 files changed, 697 insertions(+), 228 deletions(-)
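The patches below repeatedly refer to the "MAIR format" attribute value: a stage 1 descriptor's 3-bit attribute index selects one 8-bit field of MAIR_EL1. As background, a standalone sketch of that lookup (plain C with an invented helper name, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: MAIR_EL1 packs eight 8-bit memory-attribute fields, one per
 * attribute index 0..7. A stage 1 descriptor's 3-bit AttrIndx field
 * selects which byte applies; that byte is the "MAIR format" attrs
 * value the commit messages below refer to.
 */
static uint8_t mair_attr_for_index(uint64_t mair, unsigned attrindx)
{
    assert(attrindx <= 7);                     /* only indexes 0..7 exist */
    return (uint8_t)(mair >> (attrindx * 8));  /* each field is 8 bits wide */
}
```

This mirrors the `extract64(mair, attrindx * 8, 8)` expression used in target/arm/helper.c.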
In the original Arm v8 two-stage translation, both stage 1 and stage
2 specify memory attributes (memory type, cacheability,
shareability); these are then combined to produce the overall memory
attributes for the whole stage 1+2 access. In QEMU we implement this
by having get_phys_addr() fill in an ARMCacheAttrs struct, and we
convert both the stage 1 and stage 2 attribute bit formats to the
same encoding (an 8-bit attribute value matching the MAIR_EL1 fields,
plus a 2-bit shareability value).

The new FEAT_S2FWB feature allows the guest to enable a different
interpretation of the attribute bits in the stage 2 descriptors.
These bits can now be used to control details of how the stage 1 and
2 attributes should be combined (for instance they can say "always
use the stage 1 attributes" or "ignore the stage 1 attributes and
always be Device memory"). This means we need to pass the raw bit
information for stage 2 down to the function which combines the stage
1 and stage 2 information.

Add a field to ARMCacheAttrs that indicates whether the attrs field
should be interpreted as MAIR format, or as the raw stage 2 attribute
bits from the descriptor, and store the appropriate values when
filling in cacheattrs.

We only need to interpret the attrs field in a few places:
 * in do_ats_write(), where we know to expect a MAIR value
   (there is no ATS instruction to do a stage-2-only walk)
 * in S1_ptw_translate(), where we want to know whether the
   combined S1 + S2 attributes indicate Device memory that
   should provoke a fault
 * in combine_cacheattrs(), which does the S1 + S2 combining
Update those places accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-2-peter.maydell@linaro.org
---
 target/arm/internals.h |  7 ++++++-
 target/arm/helper.c    | 42 ++++++++++++++++++++++++++++++++++++------
 2 files changed, 42 insertions(+), 7 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
 
 /* Cacheability and shareability attributes for a memory access */
 typedef struct ARMCacheAttrs {
-    unsigned int attrs:8; /* as in the MAIR register encoding */
+    /*
+     * If is_s2_format is true, attrs is the S2 descriptor bits [5:2]
+     * Otherwise, attrs is the same as the MAIR_EL1 8-bit format
+     */
+    unsigned int attrs:8;
     unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
+    bool is_s2_format:1;
 } ARMCacheAttrs;
 
 bool get_phys_addr(CPUARMState *env, target_ulong address,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
     ret = get_phys_addr(env, value, access_type, mmu_idx, &phys_addr, &attrs,
                         &prot, &page_size, &fi, &cacheattrs);
 
+    /*
+     * ATS operations only do S1 or S1+S2 translations, so we never
+     * have to deal with the ARMCacheAttrs format for S2 only.
+     */
+    assert(!cacheattrs.is_s2_format);
+
     if (ret) {
         /*
          * Some kinds of translation fault must cause exceptions rather
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
     return true;
 }
 
+static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
+{
+    /*
+     * For an S1 page table walk, the stage 1 attributes are always
+     * some form of "this is Normal memory". The combined S1+S2
+     * attributes are therefore only Device if stage 2 specifies Device.
+     * With HCR_EL2.FWB == 0 this is when descriptor bits [5:4] are 0b00,
+     * ie when cacheattrs.attrs bits [3:2] are 0b00.
+     */
+    assert(cacheattrs.is_s2_format);
+    return (cacheattrs.attrs & 0xc) == 0;
+}
+
 /* Translate a S1 pagetable walk through S2 if needed. */
 static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
                                hwaddr addr, bool *is_secure,
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
             return ~0;
         }
         if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
-            (cacheattrs.attrs & 0xf0) == 0) {
+            ptw_attrs_are_device(env, cacheattrs)) {
             /*
              * PTW set and S1 walk touched S2 Device memory:
              * generate Permission fault.
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
     }
 
     if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
-        cacheattrs->attrs = convert_stage2_attrs(env, extract32(attrs, 0, 4));
+        cacheattrs->is_s2_format = true;
+        cacheattrs->attrs = extract32(attrs, 0, 4);
     } else {
         /* Index into MAIR registers for cache attributes */
         uint8_t attrindx = extract32(attrs, 0, 3);
         uint64_t mair = env->cp15.mair_el[regime_el(env, mmu_idx)];
         assert(attrindx <= 7);
+        cacheattrs->is_s2_format = false;
         cacheattrs->attrs = extract64(mair, attrindx * 8, 8);
     }
 
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
 /* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
  * and CombineS1S2Desc()
  *
+ * @env: CPUARMState
  * @s1: Attributes from stage 1 walk
  * @s2: Attributes from stage 2 walk
  */
-static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2)
+static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
+                                        ARMCacheAttrs s1, ARMCacheAttrs s2)
 {
     uint8_t s1lo, s2lo, s1hi, s2hi;
     ARMCacheAttrs ret;
     bool tagged = false;
+    uint8_t s2_mair_attrs;
+
+    assert(s2.is_s2_format && !s1.is_s2_format);
+    ret.is_s2_format = false;
+
+    s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
 
     if (s1.attrs == 0xf0) {
         tagged = true;
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2)
     }
 
     s1lo = extract32(s1.attrs, 0, 4);
-    s2lo = extract32(s2.attrs, 0, 4);
+    s2lo = extract32(s2_mair_attrs, 0, 4);
     s1hi = extract32(s1.attrs, 4, 4);
-    s2hi = extract32(s2.attrs, 4, 4);
+    s2hi = extract32(s2_mair_attrs, 4, 4);
 
     /* Combine shareability attributes (table D4-43) */
     if (s1.shareability == 2 || s2.shareability == 2) {
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
             }
             cacheattrs->shareability = 0;
         }
-        *cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
+        *cacheattrs = combine_cacheattrs(env, *cacheattrs, cacheattrs2);
 
         /* Check if IPA translates to secure or non-secure PA space. */
         if (arm_is_secure_below_el3(env)) {
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
         hcr = arm_hcr_el2_eff(env);
         cacheattrs->shareability = 0;
+        cacheattrs->is_s2_format = false;
         if (hcr & HCR_DC) {
             if (hcr & HCR_DCT) {
                 memattr = 0xf0;    /* Tagged, Normal, WB, RWA */
--
2.25.1
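The Device-memory test that the patch above centralises in ptw_attrs_are_device() can be illustrated in isolation: with HCR_EL2.FWB == 0, the raw stage 2 descriptor attribute bits [5:2] are stored in attrs, and the memory is Device exactly when bits [3:2] of that field are zero. A standalone sketch (invented helper name, not the QEMU function itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch: s2_attrs holds the raw stage 2 descriptor attribute bits
 * [5:2] (what the patch stores when is_s2_format is true). Device
 * memory is signalled by bits [3:2] == 0b00, i.e. (attrs & 0xc) == 0;
 * the remaining low bits then select the Device sub-type.
 */
static bool s2_attrs_are_device(uint8_t s2_attrs)
{
    return (s2_attrs & 0xc) == 0;
}
```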
Factor out the part of combine_cacheattrs() that is specific to
handling HCR_EL2.FWB == 0. This is the part where we combine the
memory type and cacheability attributes.

The "force Outer Shareable for Device or Normal Inner-NC Outer-NC"
logic remains in combine_cacheattrs() because it holds regardless
(this is the equivalent of the pseudocode EffectiveShareability()
function).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 88 +++++++++++++++++++++++++--------------------
 1 file changed, 50 insertions(+), 38 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
     }
 }
 
+/*
+ * Combine the memory type and cacheability attributes of
+ * s1 and s2 for the HCR_EL2.FWB == 0 case, returning the
+ * combined attributes in MAIR_EL1 format.
+ */
+static uint8_t combined_attrs_nofwb(CPUARMState *env,
+                                    ARMCacheAttrs s1, ARMCacheAttrs s2)
+{
+    uint8_t s1lo, s2lo, s1hi, s2hi, s2_mair_attrs, ret_attrs;
+
+    s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
+
+    s1lo = extract32(s1.attrs, 0, 4);
+    s2lo = extract32(s2_mair_attrs, 0, 4);
+    s1hi = extract32(s1.attrs, 4, 4);
+    s2hi = extract32(s2_mair_attrs, 4, 4);
+
+    /* Combine memory type and cacheability attributes */
+    if (s1hi == 0 || s2hi == 0) {
+        /* Device has precedence over normal */
+        if (s1lo == 0 || s2lo == 0) {
+            /* nGnRnE has precedence over anything */
+            ret_attrs = 0;
+        } else if (s1lo == 4 || s2lo == 4) {
+            /* non-Reordering has precedence over Reordering */
+            ret_attrs = 4;  /* nGnRE */
+        } else if (s1lo == 8 || s2lo == 8) {
+            /* non-Gathering has precedence over Gathering */
+            ret_attrs = 8;  /* nGRE */
+        } else {
+            ret_attrs = 0xc; /* GRE */
+        }
+    } else { /* Normal memory */
+        /* Outer/inner cacheability combine independently */
+        ret_attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4
+                  | combine_cacheattr_nibble(s1lo, s2lo);
+    }
+    return ret_attrs;
+}
+
 /* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
  * and CombineS1S2Desc()
  *
@@ -XXX,XX +XXX,XX @@ static uint8_t combine_cacheattr_nibble(uint8_t s1, uint8_t s2)
 static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
70
ARMCacheAttrs s1, ARMCacheAttrs s2)
75
{
71
{
76
/* Perform a PMSAv8 MPU lookup (without also doing the SAU check
72
- uint8_t s1lo, s2lo, s1hi, s2hi;
77
* that a full phys-to-virt translation does).
73
ARMCacheAttrs ret;
78
* mregion is (if not NULL) set to the region number which matched,
74
bool tagged = false;
79
* or -1 if no region number is returned (MPU off, address did not
75
- uint8_t s2_mair_attrs;
80
* hit a region, address hit in multiple regions).
76
81
+ * We set is_subpage to true if the region hit doesn't cover the
77
assert(s2.is_s2_format && !s1.is_s2_format);
82
+ * entire TARGET_PAGE the address is within.
78
ret.is_s2_format = false;
83
*/
79
84
ARMCPU *cpu = arm_env_get_cpu(env);
80
- s2_mair_attrs = convert_stage2_attrs(env, s2.attrs);
85
bool is_user = regime_is_user(env, mmu_idx);
86
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
87
int n;
88
int matchregion = -1;
89
bool hit = false;
90
+ uint32_t addr_page_base = address & TARGET_PAGE_MASK;
91
+ uint32_t addr_page_limit = addr_page_base + (TARGET_PAGE_SIZE - 1);
92
93
+ *is_subpage = false;
94
*phys_ptr = address;
95
*prot = 0;
96
if (mregion) {
97
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
98
continue;
99
}
100
101
+ if (base > addr_page_base || limit < addr_page_limit) {
102
+ *is_subpage = true;
103
+ }
104
+
105
if (hit) {
106
/* Multiple regions match -- always a failure (unlike
107
* PMSAv7 where highest-numbered-region wins)
108
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
109
110
matchregion = n;
111
hit = true;
112
-
81
-
113
- if (base & ~TARGET_PAGE_MASK) {
82
if (s1.attrs == 0xf0) {
114
- qemu_log_mask(LOG_UNIMP,
83
tagged = true;
115
- "MPU_RBAR[%d]: No support for MPU region base"
84
s1.attrs = 0xff;
116
- "address of 0x%" PRIx32 ". Minimum alignment is "
117
- "%d\n",
118
- n, base, TARGET_PAGE_BITS);
119
- continue;
120
- }
121
- if ((limit + 1) & ~TARGET_PAGE_MASK) {
122
- qemu_log_mask(LOG_UNIMP,
123
- "MPU_RBAR[%d]: No support for MPU region limit"
124
- "address of 0x%" PRIx32 ". Minimum alignment is "
125
- "%d\n",
126
- n, limit, TARGET_PAGE_BITS);
127
- continue;
128
- }
129
}
130
}
85
}
131
86
132
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
87
- s1lo = extract32(s1.attrs, 0, 4);
133
88
- s2lo = extract32(s2_mair_attrs, 0, 4);
134
fi->type = ARMFault_Permission;
89
- s1hi = extract32(s1.attrs, 4, 4);
135
fi->level = 1;
90
- s2hi = extract32(s2_mair_attrs, 4, 4);
91
-
92
/* Combine shareability attributes (table D4-43) */
93
if (s1.shareability == 2 || s2.shareability == 2) {
94
/* if either are outer-shareable, the result is outer-shareable */
95
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
96
}
97
98
/* Combine memory type and cacheability attributes */
99
- if (s1hi == 0 || s2hi == 0) {
100
- /* Device has precedence over normal */
101
- if (s1lo == 0 || s2lo == 0) {
102
- /* nGnRnE has precedence over anything */
103
- ret.attrs = 0;
104
- } else if (s1lo == 4 || s2lo == 4) {
105
- /* non-Reordering has precedence over Reordering */
106
- ret.attrs = 4; /* nGnRE */
107
- } else if (s1lo == 8 || s2lo == 8) {
108
- /* non-Gathering has precedence over Gathering */
109
- ret.attrs = 8; /* nGRE */
110
- } else {
111
- ret.attrs = 0xc; /* GRE */
112
- }
113
+ ret.attrs = combined_attrs_nofwb(env, s1, s2);
114
115
- /* Any location for which the resultant memory type is any
116
- * type of Device memory is always treated as Outer Shareable.
117
- */
136
+ /*
118
+ /*
137
+ * Core QEMU code can't handle execution from small pages yet, so
119
+ * Any location for which the resultant memory type is any
138
+ * don't try it. This means any attempted execution will generate
120
+ * type of Device memory is always treated as Outer Shareable.
139
+ * an MPU exception, rather than eventually causing QEMU to exit in
121
+ * Any location for which the resultant memory type is Normal
140
+ * get_page_addr_code().
122
+ * Inner Non-cacheable, Outer Non-cacheable is always treated
123
+ * as Outer Shareable.
124
+ * TODO: FEAT_XS adds another value (0x40) also meaning iNCoNC
141
+ */
125
+ */
142
+ if (*is_subpage && (*prot & PAGE_EXEC)) {
126
+ if ((ret.attrs & 0xf0) == 0 || ret.attrs == 0x44) {
143
+ qemu_log_mask(LOG_UNIMP,
127
ret.shareability = 2;
144
+ "MPU: No support for execution from regions "
128
- } else { /* Normal memory */
145
+ "smaller than 1K\n");
129
- /* Outer/inner cacheability combine independently */
146
+ *prot &= ~PAGE_EXEC;
130
- ret.attrs = combine_cacheattr_nibble(s1hi, s2hi) << 4
147
+ }
131
- | combine_cacheattr_nibble(s1lo, s2lo);
148
return !(*prot & (1 << access_type));
132
-
149
}
133
- if (ret.attrs == 0x44) {
150
134
- /* Any location for which the resultant memory type is Normal
151
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
135
- * Inner Non-cacheable, Outer Non-cacheable is always treated
152
static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
136
- * as Outer Shareable.
153
MMUAccessType access_type, ARMMMUIdx mmu_idx,
137
- */
154
hwaddr *phys_ptr, MemTxAttrs *txattrs,
138
- ret.shareability = 2;
155
- int *prot, ARMMMUFaultInfo *fi)
139
- }
156
+ int *prot, target_ulong *page_size,
157
+ ARMMMUFaultInfo *fi)
158
{
159
uint32_t secure = regime_is_secure(env, mmu_idx);
160
V8M_SAttributes sattrs = {};
161
+ bool ret;
162
+ bool mpu_is_subpage;
163
164
if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
165
v8m_security_lookup(env, address, access_type, mmu_idx, &sattrs);
166
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
167
} else {
168
fi->type = ARMFault_QEMU_SFault;
169
}
170
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
171
*phys_ptr = address;
172
*prot = 0;
173
return true;
174
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
175
* for M_FAKE_FSR_SFAULT in arm_v7m_cpu_do_interrupt().
176
*/
177
fi->type = ARMFault_QEMU_SFault;
178
+ *page_size = sattrs.subpage ? 1 : TARGET_PAGE_SIZE;
179
*phys_ptr = address;
180
*prot = 0;
181
return true;
182
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
183
}
184
}
140
}
185
141
186
- return pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
142
/* TODO: CombineS1S2Desc does not consider transient, only WB, RWA. */
187
- txattrs, prot, fi, NULL);
188
+ ret = pmsav8_mpu_lookup(env, address, access_type, mmu_idx, phys_ptr,
189
+ txattrs, prot, &mpu_is_subpage, fi, NULL);
190
+ /*
191
+ * TODO: this is a temporary hack to ignore the fact that the SAU region
192
+ * is smaller than a page if this is an executable region. We never
193
+ * supported small MPU regions, but we did (accidentally) allow small
194
+ * SAU regions, and if we now made small SAU regions not be executable
195
+ * then this would break previously working guest code. We can't
196
+ * remove this until/unless we implement support for execution from
197
+ * small regions.
198
+ */
199
+ if (*prot & PAGE_EXEC) {
200
+ sattrs.subpage = false;
201
+ }
202
+ *page_size = sattrs.subpage || mpu_is_subpage ? 1 : TARGET_PAGE_SIZE;
203
+ return ret;
204
}
205
206
static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
207
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
208
if (arm_feature(env, ARM_FEATURE_V8)) {
209
/* PMSAv8 */
210
ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
211
- phys_ptr, attrs, prot, fi);
212
+ phys_ptr, attrs, prot, page_size, fi);
213
} else if (arm_feature(env, ARM_FEATURE_V7)) {
214
/* PMSAv7 */
215
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
216
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
217
uint32_t mregion;
218
bool targetpriv;
219
bool targetsec = env->v7m.secure;
220
+ bool is_subpage;
221
222
/* Work out what the security state and privilege level we're
223
* interested in is...
224
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
225
if (arm_current_el(env) != 0 || alt) {
226
/* We can ignore the return value as prot is always set */
227
pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
228
- &phys_addr, &attrs, &prot, &fi, &mregion);
229
+ &phys_addr, &attrs, &prot, &is_subpage,
230
+ &fi, &mregion);
231
if (mregion == -1) {
232
mrvalid = false;
233
mregion = 0;
234
--
143
--
235
2.17.1
144
2.25.1
236
237
diff view generated by jsdifflib
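As an aside for readers following the refactor: the FWB == 0 Device-precedence rule that combined_attrs_nofwb() preserves can be modelled outside QEMU in a few lines. This is an illustrative Python sketch, not QEMU code, and it covers only the low (memory-type) nibble in the case where at least one stage maps Device memory:

```python
# Standalone model (illustrative, not QEMU code) of the FWB == 0
# Device-precedence combining above: stricter Device subtypes
# (nGnRnE < nGnRE < nGRE < GRE) win over weaker ones.

def combine_nibble_nofwb(s1lo, s2lo):
    """Combine the low (memory-type) nibbles of two MAIR-format attrs."""
    if s1lo == 0 or s2lo == 0:
        return 0x0   # nGnRnE has precedence over anything
    if s1lo == 4 or s2lo == 4:
        return 0x4   # nGnRE: non-Reordering beats Reordering
    if s1lo == 8 or s2lo == 8:
        return 0x8   # nGRE: non-Gathering beats Gathering
    return 0xc       # GRE
```

The same table-driven precedence is what the C code expresses with its if/else chain; Normal memory instead combines the outer and inner nibbles independently via combine_cacheattr_nibble().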

Implement the handling of FEAT_S2FWB; the meat of this is in the new
combined_attrs_fwb() function which combines S1 and S2 attributes
when HCR_EL2.FWB is set.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-4-peter.maydell@linaro.org
---
 target/arm/cpu.h    |  5 +++
 target/arm/helper.c | 84 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 86 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_st(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, ST) != 0;
 }
 
+static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
+}
+
 static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
         if (cpu_isar_feature(aa64_scxtnum, cpu)) {
             valid_mask |= HCR_ENSCXT;
         }
+        if (cpu_isar_feature(aa64_fwb, cpu)) {
+            valid_mask |= HCR_FWB;
+        }
     }
 
     /* Clear RES0 bits. */
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
      * HCR_PTW forbids certain page-table setups
      * HCR_DC disables stage1 and enables stage2 translation
      * HCR_DCT enables tagging on (disabled) stage1 translation
+     * HCR_FWB changes the interpretation of stage2 descriptor bits
      */
-    if ((env->cp15.hcr_el2 ^ value) & (HCR_VM | HCR_PTW | HCR_DC | HCR_DCT)) {
+    if ((env->cp15.hcr_el2 ^ value) &
+        (HCR_VM | HCR_PTW | HCR_DC | HCR_DCT | HCR_FWB)) {
         tlb_flush(CPU(cpu));
     }
     env->cp15.hcr_el2 = value;
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(CPUARMState *env, ARMCacheAttrs cacheattrs)
      * attributes are therefore only Device if stage 2 specifies Device.
      * With HCR_EL2.FWB == 0 this is when descriptor bits [5:4] are 0b00,
      * ie when cacheattrs.attrs bits [3:2] are 0b00.
+     * With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
+     * when cacheattrs.attrs bit [2] is 0.
      */
     assert(cacheattrs.is_s2_format);
-    return (cacheattrs.attrs & 0xc) == 0;
+    if (arm_hcr_el2_eff(env) & HCR_FWB) {
+        return (cacheattrs.attrs & 0x4) == 0;
+    } else {
+        return (cacheattrs.attrs & 0xc) == 0;
+    }
 }
 
 /* Translate a S1 pagetable walk through S2 if needed. */
@@ -XXX,XX +XXX,XX @@ static uint8_t combined_attrs_nofwb(CPUARMState *env,
     return ret_attrs;
 }
 
+static uint8_t force_cacheattr_nibble_wb(uint8_t attr)
+{
+    /*
+     * Given the 4 bits specifying the outer or inner cacheability
+     * in MAIR format, return a value specifying Normal Write-Back,
+     * with the allocation and transient hints taken from the input
+     * if the input specified some kind of cacheable attribute.
+     */
+    if (attr == 0 || attr == 4) {
+        /*
+         * 0 == an UNPREDICTABLE encoding
+         * 4 == Non-cacheable
+         * Either way, force Write-Back RW allocate non-transient
+         */
+        return 0xf;
+    }
+    /* Change WriteThrough to WriteBack, keep allocation and transient hints */
+    return attr | 4;
+}
+
+/*
+ * Combine the memory type and cacheability attributes of
+ * s1 and s2 for the HCR_EL2.FWB == 1 case, returning the
+ * combined attributes in MAIR_EL1 format.
+ */
+static uint8_t combined_attrs_fwb(CPUARMState *env,
+                                  ARMCacheAttrs s1, ARMCacheAttrs s2)
+{
+    switch (s2.attrs) {
+    case 7:
+        /* Use stage 1 attributes */
+        return s1.attrs;
+    case 6:
+        /*
+         * Force Normal Write-Back. Note that if S1 is Normal cacheable
+         * then we take the allocation hints from it; otherwise it is
+         * RW allocate, non-transient.
+         */
+        if ((s1.attrs & 0xf0) == 0) {
+            /* S1 is Device */
+            return 0xff;
+        }
+        /* Need to check the Inner and Outer nibbles separately */
+        return force_cacheattr_nibble_wb(s1.attrs & 0xf) |
+            force_cacheattr_nibble_wb(s1.attrs >> 4) << 4;
+    case 5:
+        /* If S1 attrs are Device, use them; otherwise Normal Non-cacheable */
+        if ((s1.attrs & 0xf0) == 0) {
+            return s1.attrs;
+        }
+        return 0x44;
+    case 0 ... 3:
+        /* Force Device, of subtype specified by S2 */
+        return s2.attrs << 2;
+    default:
+        /*
+         * RESERVED values (including RES0 descriptor bit [5] being nonzero);
+         * arbitrarily force Device.
+         */
+        return 0;
+    }
+}
+
 /* Combine S1 and S2 cacheability/shareability attributes, per D4.5.4
  * and CombineS1S2Desc()
  *
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
     }
 
     /* Combine memory type and cacheability attributes */
-    ret.attrs = combined_attrs_nofwb(env, s1, s2);
+    if (arm_hcr_el2_eff(env) & HCR_FWB) {
+        ret.attrs = combined_attrs_fwb(env, s1, s2);
+    } else {
+        ret.attrs = combined_attrs_nofwb(env, s1, s2);
+    }
 
     /*
      * Any location for which the resultant memory type is any
--
2.25.1
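For reference, the FWB == 1 combining rules introduced above can be cross-checked against a standalone model. This is an illustrative Python sketch, not QEMU code; it mirrors the combined_attrs_fwb() switch but takes raw attribute bytes rather than ARMCacheAttrs structs:

```python
# Illustrative model (not QEMU code) of the FEAT_S2FWB combining rules:
# with HCR_EL2.FWB == 1 the stage-2 attrs field is no longer a MAIR-style
# encoding but a selector for how to combine with the stage-1 attributes.

def combined_attrs_fwb(s1_attrs, s2_attrs):
    if s2_attrs == 7:
        return s1_attrs                 # use stage-1 attributes as-is
    if s2_attrs == 6:
        if (s1_attrs & 0xf0) == 0:
            return 0xff                 # S1 is Device: Normal WB RW-allocate
        def wb(nibble):
            # force a cacheability nibble to Write-Back, keeping hints
            return 0xf if nibble in (0, 4) else (nibble | 4)
        return wb(s1_attrs & 0xf) | (wb(s1_attrs >> 4) << 4)
    if s2_attrs == 5:
        # S1 Device kept; otherwise Normal Non-cacheable
        return s1_attrs if (s1_attrs & 0xf0) == 0 else 0x44
    if 0 <= s2_attrs <= 3:
        return s2_attrs << 2            # Device, subtype chosen by S2
    return 0                            # reserved: arbitrarily force Device
```

For example, an S2 selector of 5 downgrades a cacheable S1 mapping to Normal Non-cacheable (0x44) but leaves an S1 Device mapping untouched, matching the C code's case 5.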

Enable the FEAT_S2FWB for -cpu max. Since FEAT_S2FWB requires that
CLIDR_EL1.{LoUU,LoUIS} are zero, we explicitly squash these (the
inherited CLIDR_EL1 value from the Cortex-A57 has them as 1).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220505183950.2781801-5-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst |  1 +
 target/arm/cpu64.c            | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_RAS (Reliability, availability, and serviceability)
 - FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
 - FEAT_RNG (Random number generator)
+- FEAT_S2FWB (Stage 2 forced Write-Back)
 - FEAT_SB (Speculation Barrier)
 - FEAT_SEL2 (Secure EL2)
 - FEAT_SHA1 (SHA1 instructions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
     uint64_t t;
+    uint32_t u;
 
     if (kvm_enabled() || hvf_enabled()) {
         /* With KVM or HVF, '-cpu max' is identical to '-cpu host' */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, MIDR_EL1, REVISION, 0);
     cpu->midr = t;
 
+    /*
+     * We're going to set FEAT_S2FWB, which mandates that CLIDR_EL1.{LoUU,LoUIS}
+     * are zero.
+     */
+    u = cpu->clidr;
+    u = FIELD_DP32(u, CLIDR_EL1, LOUIS, 0);
+    u = FIELD_DP32(u, CLIDR_EL1, LOUU, 0);
+    cpu->clidr = u;
+
     t = cpu->isar.id_aa64isar0;
     t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
     t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1); /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1); /* FEAT_S2FWB */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
     t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
     cpu->isar.id_aa64mmfr2 = t;
--
2.25.1
The Armv8.4 feature FEAT_IDST specifies that exceptions generated by
read accesses to the feature ID space should report a syndrome code
of 0x18 (EC_SYSTEMREGISTERTRAP) rather than 0x00 (EC_UNCATEGORIZED).
The feature ID space is defined to be:
 op0 == 3, op1 == {0,1,3}, CRn == 0, CRm == {0-7}, op2 == {0-7}

In our implementation we might return the EC_UNCATEGORIZED syndrome
value for a system register access in four cases:
 * no reginfo struct in the hashtable
 * cp_access_ok() fails (ie ri->access doesn't permit the access)
 * ri->accessfn returns CP_ACCESS_TRAP_UNCATEGORIZED at runtime
 * ri->type includes ARM_CP_RAISES_EXC, and the readfn raises
   an UNDEF exception at runtime

We have very few regdefs that set ARM_CP_RAISES_EXC, and none of
them are in the feature ID space. (In the unlikely event that any
are added in future they would need to take care of setting the
correct syndrome themselves.) This patch deals with the other
three cases, and enables FEAT_IDST for AArch64 -cpu max.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509155457.3560724-1-peter.maydell@linaro.org
---
 docs/system/arm/emulation.rst |  1 +
 target/arm/cpregs.h           | 24 ++++++++++++++++++++++++
 target/arm/cpu.h              |  5 +++++
 target/arm/cpu64.c            |  1 +
 target/arm/op_helper.c        |  9 +++++++++
 target/arm/translate-a64.c    | 28 ++++++++++++++++++++++++++--
 6 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_FlagM2 (Enhancements to flag manipulation instructions)
 - FEAT_HPDS (Hierarchical permission disables)
 - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
+- FEAT_IDST (ID space trap handling)
 - FEAT_IESB (Implicit error synchronization event)
 - FEAT_JSCVT (JavaScript conversion instructions)
 - FEAT_LOR (Limited ordering regions)
diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ static inline bool cp_access_ok(int current_el,
 /* Raw read of a coprocessor register (as needed for migration, etc) */
 uint64_t read_raw_cp_reg(CPUARMState *env, const ARMCPRegInfo *ri);
 
+/*
+ * Return true if the cp register encoding is in the "feature ID space" as
+ * defined by FEAT_IDST (and thus should be reported with ESR_ELx.EC
+ * as EC_SYSTEMREGISTERTRAP rather than EC_UNCATEGORIZED).
+ */
+static inline bool arm_cpreg_encoding_in_idspace(uint8_t opc0, uint8_t opc1,
+                                                 uint8_t opc2,
+                                                 uint8_t crn, uint8_t crm)
+{
+    return opc0 == 3 && (opc1 == 0 || opc1 == 1 || opc1 == 3) &&
+        crn == 0 && crm < 8;
+}
+
+/*
+ * As arm_cpreg_encoding_in_idspace(), but take the encoding from an
+ * ARMCPRegInfo.
+ */
+static inline bool arm_cpreg_in_idspace(const ARMCPRegInfo *ri)
+{
+    return ri->state == ARM_CP_STATE_AA64 &&
+        arm_cpreg_encoding_in_idspace(ri->opc0, ri->opc1, ri->opc2,
+                                      ri->crn, ri->crm);
+}
+
 #endif /* TARGET_ARM_CPREGS_H */
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fwb(const ARMISARegisters *id)
     return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, FWB) != 0;
 }
 
+static inline bool isar_feature_aa64_ids(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, IDS) != 0;
+}
+
 static inline bool isar_feature_aa64_bti(const ARMISARegisters *id)
 {
     return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, BT) != 0;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1);     /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1);  /* FEAT_LVA */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1);       /* FEAT_TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, IDS, 1);      /* FEAT_IDST */
     t = FIELD_DP64(t, ID_AA64MMFR2, FWB, 1);      /* FEAT_S2FWB */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1);      /* FEAT_TTL */
     t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2);      /* FEAT_BBM at level 2 */
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(mrs_banked)(CPUARMState *env, uint32_t tgtmode, uint32_t regno)
 void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
                                  uint32_t isread)
 {
+    ARMCPU *cpu = env_archcpu(env);
     const ARMCPRegInfo *ri = rip;
     CPAccessResult res = CP_ACCESS_OK;
     int target_el;
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
     case CP_ACCESS_TRAP:
         break;
     case CP_ACCESS_TRAP_UNCATEGORIZED:
+        if (cpu_isar_feature(aa64_ids, cpu) && isread &&
+            arm_cpreg_in_idspace(ri)) {
+            /*
+             * FEAT_IDST says this should be reported as EC_SYSTEMREGISTERTRAP,
+             * not EC_UNCATEGORIZED
+             */
+            break;
+        }
         syndrome = syn_uncategorized();
         break;
     default:
         g_assert_not_reached();
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void gen_set_nzcv(TCGv_i64 tcg_rt)
     tcg_temp_free_i32(nzcv);
 }
 
+static void gen_sysreg_undef(DisasContext *s, bool isread,
+                             uint8_t op0, uint8_t op1, uint8_t op2,
+                             uint8_t crn, uint8_t crm, uint8_t rt)
+{
+    /*
+     * Generate code to emit an UNDEF with correct syndrome
+     * information for a failed system register access.
+     * This is EC_UNCATEGORIZED (ie a standard UNDEF) in most cases,
+     * but if FEAT_IDST is implemented then read accesses to registers
+     * in the feature ID space are reported with the EC_SYSTEMREGISTERTRAP
+     * syndrome.
+     */
+    uint32_t syndrome;
+
+    if (isread && dc_isar_feature(aa64_ids, s) &&
+        arm_cpreg_encoding_in_idspace(op0, op1, op2, crn, crm)) {
+        syndrome = syn_aa64_sysregtrap(op0, op1, op2, crn, crm, rt, isread);
+    } else {
+        syndrome = syn_uncategorized();
+    }
+    gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syndrome,
+                       default_exception_el(s));
+}
+
 /* MRS - move from system register
  * MSR (register) - move to system register
  * SYS
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
         qemu_log_mask(LOG_UNIMP, "%s access to unsupported AArch64 "
                       "system register op0:%d op1:%d crn:%d crm:%d op2:%d\n",
                       isread ? "read" : "write", op0, op1, crn, crm, op2);
-        unallocated_encoding(s);
+        gen_sysreg_undef(s, isread, op0, op1, op2, crn, crm, rt);
         return;
     }
 
     /* Check access permissions */
     if (!cp_access_ok(s->current_el, ri, isread)) {
-        unallocated_encoding(s);
+        gen_sysreg_undef(s, isread, op0, op1, op2, crn, crm, rt);
         return;
     }
--
2.25.1

The unsupported_encoding() macro logs a LOG_UNIMP message and then
generates code to raise the usual exception for an unallocated
encoding. Back when we were still implementing the A64 decoder this
was helpful for flagging up when guest code was using something we
hadn't yet implemented. Now we completely cover the A64 instruction
set it is barely used. The only remaining uses are for five
instructions whose semantics are "UNDEF, unless being run under
external halting debug":
 * HLT (when not being used for semihosting)
 * DCPS1, DCPS2, DCPS3
 * DRPS

QEMU doesn't implement external halting debug, so for us the UNDEF is
the architecturally correct behaviour (because it's not possible to
execute these instructions with halting debug enabled). The
LOG_UNIMP doesn't serve a useful purpose; replace these uses of
unsupported_encoding() with unallocated_encoding(), and delete the
macro.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220509160443.3561604-1-peter.maydell@linaro.org
---
 target/arm/translate-a64.h | 9 ---------
 target/arm/translate-a64.c | 8 ++++----
 2 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@
 #ifndef TARGET_ARM_TRANSLATE_A64_H
 #define TARGET_ARM_TRANSLATE_A64_H
 
-#define unsupported_encoding(s, insn)                                    \
-    do {                                                                 \
-        qemu_log_mask(LOG_UNIMP,                                         \
-                      "%s:%d: unsupported instruction encoding 0x%08x "  \
-                      "at pc=%016" PRIx64 "\n",                          \
-                      __FILE__, __LINE__, insn, s->pc_curr);             \
-        unallocated_encoding(s);                                         \
-    } while (0)
-
 TCGv_i64 new_tmp_a64(DisasContext *s);
 TCGv_i64 new_tmp_a64_local(DisasContext *s);
 TCGv_i64 new_tmp_a64_zero(DisasContext *s);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
          * with our 32-bit semihosting).
          */
         if (s->current_el == 0) {
-            unsupported_encoding(s, insn);
+            unallocated_encoding(s);
             break;
         }
 #endif
         gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
     } else {
-        unsupported_encoding(s, insn);
+        unallocated_encoding(s);
     }
     break;
 case 5:
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
         break;
     }
     /* DCPS1, DCPS2, DCPS3 */
-    unsupported_encoding(s, insn);
+    unallocated_encoding(s);
     break;
 default:
     unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
         if (op3 != 0 || op4 != 0 || rn != 0x1f) {
             goto do_unallocated;
         }
     } else {
-        unsupported_encoding(s, insn);
+        unallocated_encoding(s);
     }
     return;
--
2.25.1

We allow a GICv3 to be connected to any CPU, but we don't do anything
to handle the case where the CPU type doesn't in hardware have a
GICv3 CPU interface and so the various GIC configuration fields
(gic_num_lrs, vprebits, vpribits) are not specified.

The current behaviour is that we will add the EL1 CPU interface
registers, but will not put in the EL2 CPU interface registers, even
if the CPU has EL2, which will leave the GIC in a broken state and
probably result in the guest crashing as it tries to set it up. This
only affects the virt board when using the cortex-a15 or cortex-a7
CPU types (both 32-bit) with -machine gic-version=3 (or 'max')
and -machine virtualization=on.

Instead of failing to set up the EL2 registers, if the CPU doesn't
define the GIC configuration set it to a reasonable default, matching
the standard configuration for most Arm CPUs.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-2-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_cpuif.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
         ARMCPU *cpu = ARM_CPU(qemu_get_cpu(i));
         GICv3CPUState *cs = &s->cpu[i];
 
+        /*
+         * If the CPU doesn't define a GICv3 configuration, probably because
+         * in real hardware it doesn't have one, then we use default values
+         * matching the one used by most Arm CPUs. This applies to:
+         *  cpu->gic_num_lrs
+         *  cpu->gic_vpribits
+         *  cpu->gic_vprebits
+         */
+
         /* Note that we can't just use the GICv3CPUState as an opaque pointer
          * in define_arm_cp_regs_with_opaque(), because when we're called back
          * it might be with code translated by CPU 0 but run by CPU 1, in
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
          * get back to the GICv3CPUState from the CPUARMState.
          */
         define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
-        if (arm_feature(&cpu->env, ARM_FEATURE_EL2)
-            && cpu->gic_num_lrs) {
+        if (arm_feature(&cpu->env, ARM_FEATURE_EL2)) {
             int j;
 
-            cs->num_list_regs = cpu->gic_num_lrs;
-            cs->vpribits = cpu->gic_vpribits;
-            cs->vprebits = cpu->gic_vprebits;
+            cs->num_list_regs = cpu->gic_num_lrs ?: 4;
+            cs->vpribits = cpu->gic_vpribits ?: 5;
+            cs->vprebits = cpu->gic_vprebits ?: 5;
 
             /* Check against architectural constraints: getting these
              * wrong would be a bug in the CPU code defining these,
--
2.25.1

As noted in the comment, the PRIbits field in ICV_CTLR_EL1 is
supposed to match the ICH_VTR_EL2 PRIbits setting; that is, it is the
virtual priority bit setting, not the physical priority bit setting.
(For QEMU currently we always implement 8 bits of physical priority,
so the PRIbits field was previously 7, since it is defined to be
"priority bits - 1".)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-3-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-2-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_cpuif.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
      * should match the ones reported in ich_vtr_read().
      */
     value = ICC_CTLR_EL1_A3V | (1 << ICC_CTLR_EL1_IDBITS_SHIFT) |
-        (7 << ICC_CTLR_EL1_PRIBITS_SHIFT);
+        ((cs->vpribits - 1) << ICC_CTLR_EL1_PRIBITS_SHIFT);
 
     if (cs->ich_vmcr_el2 & ICH_VMCR_EL2_VEOIM) {
         value |= ICC_CTLR_EL1_EOIMODE;
--
2.25.1

The GIC_MIN_BPR constant defines the minimum BPR value that the TCG
emulated GICv3 supports. We're currently using this also as the
value we reset the KVM GICv3 ICC_BPR registers to, but this is only
right by accident.

We want to make the emulated GICv3 use a configurable number of
priority bits, which means that GIC_MIN_BPR will no longer be a
constant. Replace the uses in the KVM reset code with literal 0,
plus a comment explaining why this is reasonable.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-4-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-3-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_kvm.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_kvm.c
+++ b/hw/intc/arm_gicv3_kvm.c
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
     s = c->gic;
 
     c->icc_pmr_el1 = 0;
-    c->icc_bpr[GICV3_G0] = GIC_MIN_BPR;
-    c->icc_bpr[GICV3_G1] = GIC_MIN_BPR;
-    c->icc_bpr[GICV3_G1NS] = GIC_MIN_BPR;
+    /*
+     * Architecturally the reset value of the ICC_BPR registers
+     * is UNKNOWN. We set them all to 0 here; when the kernel
+     * uses these values to program the ICH_VMCR_EL2 fields that
+     * determine the guest-visible ICC_BPR register values, the
+     * hardware's "writing a value less than the minimum sets
+     * the field to the minimum value" behaviour will result in
+     * them effectively resetting to the correct minimum value
+     * for the host GIC.
+     */
+    c->icc_bpr[GICV3_G0] = 0;
+    c->icc_bpr[GICV3_G1] = 0;
+    c->icc_bpr[GICV3_G1NS] = 0;
 
     c->icc_sre_el1 = 0x7;
     memset(c->icc_apr, 0, sizeof(c->icc_apr));
--
2.25.1

1
From: Eric Auger <eric.auger@redhat.com>
1
The GICv3 code has always supported a configurable number of virtual
2
priority and preemption bits, but our implementation currently
3
hardcodes the number of physical priority bits at 8. This is not
4
what most hardware implementations provide; for instance the
5
Cortex-A53 provides only 5 bits of physical priority.
2
6
3
We emulate a TLB cache of size SMMU_IOTLB_MAX_SIZE=256.
7
Make the number of physical priority/preemption bits driven by fields
4
It is implemented as a hash table whose key is a combination
8
in the GICv3CPUState, the way that we already do for virtual
5
of the 16b asid and 48b IOVA (Jenkins hash).
9
priority/preemption bits. We set cs->pribits to 8, so there is no
10
behavioural change in this commit. A following commit will add the
11
machinery for CPUs to set this to the correct value for their
12
implementation.
6
13
7
Entries are invalidated on TLB invalidation commands, either
14
Note that changing the number of priority bits would be a migration
8
globally, or per asid, or per asid/iova.
15
compatibility break, because the semantics of the icc_apr[][] array
16
changes.
9
17
10
Signed-off-by: Eric Auger <eric.auger@redhat.com>
11
Message-id: 1529653501-15358-4-git-send-email-eric.auger@redhat.com
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20220512151457.3899052-5-peter.maydell@linaro.org
21
Message-id: 20220506162129.2896966-4-peter.maydell@linaro.org
14
---
22
---
15
include/hw/arm/smmu-common.h | 13 +++++
23
include/hw/intc/arm_gicv3_common.h | 7 +-
16
hw/arm/smmu-common.c | 60 ++++++++++++++++++++++
24
hw/intc/arm_gicv3_cpuif.c | 182 ++++++++++++++++++++---------
17
hw/arm/smmuv3.c | 98 ++++++++++++++++++++++++++++++++++--
25
2 files changed, 130 insertions(+), 59 deletions(-)
18
hw/arm/trace-events | 9 ++++
19
4 files changed, 176 insertions(+), 4 deletions(-)
20
26
21
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
27
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
22
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
23
--- a/include/hw/arm/smmu-common.h
29
--- a/include/hw/intc/arm_gicv3_common.h
24
+++ b/include/hw/arm/smmu-common.h
30
+++ b/include/hw/intc/arm_gicv3_common.h
25
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUTransCfg {
31
@@ -XXX,XX +XXX,XX @@
26
uint8_t tbi; /* Top Byte Ignore */
32
/* Maximum number of list registers (architectural limit) */
27
uint16_t asid;
33
#define GICV3_LR_MAX 16
28
SMMUTransTableInfo tt[2];
34
29
+ uint32_t iotlb_hits; /* counts IOTLB hits for this asid */
35
-/* Minimum BPR for Secure, or when security not enabled */
30
+ uint32_t iotlb_misses; /* counts IOTLB misses for this asid */
36
-#define GIC_MIN_BPR 0
31
} SMMUTransCfg;
37
-/* Minimum BPR for Nonsecure when security is enabled */
32
38
-#define GIC_MIN_BPR_NS (GIC_MIN_BPR + 1)
33
typedef struct SMMUDevice {
39
-
34
@@ -XXX,XX +XXX,XX @@ typedef struct SMMUPciBus {
40
/* For some distributor fields we want to model the array of 32-bit
35
SMMUDevice *pbdev[0]; /* Parent array is sparse, so dynamically alloc */
41
* register values which hold various bitmaps corresponding to enabled,
36
} SMMUPciBus;
42
* pending, etc bits. These macros and functions facilitate that; the
37
43
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
38
+typedef struct SMMUIOTLBKey {
44
int num_list_regs;
39
+ uint64_t iova;
45
int vpribits; /* number of virtual priority bits */
40
+ uint16_t asid;
46
int vprebits; /* number of virtual preemption bits */
41
+} SMMUIOTLBKey;
47
+ int pribits; /* number of physical priority bits */
42
+
48
+ int prebits; /* number of physical preemption bits */
43
typedef struct SMMUState {
49
44
/* <private> */
50
/* Current highest priority pending interrupt for this CPU.
45
SysBusDevice dev;
51
* This is cached information that can be recalculated from the
46
@@ -XXX,XX +XXX,XX @@ SMMUTransTableInfo *select_tt(SMMUTransCfg *cfg, dma_addr_t iova);
52
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
47
/* Return the iommu mr associated to @sid, or NULL if none */
48
IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid);
49
50
+#define SMMU_IOTLB_MAX_SIZE 256
51
+
52
+void smmu_iotlb_inv_all(SMMUState *s);
53
+void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
54
+void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova);
55
+
56
#endif /* HW_ARM_SMMU_COMMON */
57
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
58
index XXXXXXX..XXXXXXX 100644
53
index XXXXXXX..XXXXXXX 100644
59
--- a/hw/arm/smmu-common.c
54
--- a/hw/intc/arm_gicv3_cpuif.c
60
+++ b/hw/arm/smmu-common.c
55
+++ b/hw/intc/arm_gicv3_cpuif.c
61
@@ -XXX,XX +XXX,XX @@
56
@@ -XXX,XX +XXX,XX @@ static uint64_t icv_iar_read(CPUARMState *env, const ARMCPRegInfo *ri)
62
#include "qom/cpu.h"
57
return intid;
63
#include "hw/qdev-properties.h"
58
}
64
#include "qapi/error.h"
59
65
+#include "qemu/jhash.h"
60
+static uint32_t icc_fullprio_mask(GICv3CPUState *cs)
66
67
#include "qemu/error-report.h"
68
#include "hw/arm/smmu-common.h"
69
#include "smmu-internal.h"
70
71
+/* IOTLB Management */
72
+
73
+inline void smmu_iotlb_inv_all(SMMUState *s)
74
+{
61
+{
75
+ trace_smmu_iotlb_inv_all();
62
+ /*
76
+ g_hash_table_remove_all(s->iotlb);
63
+ * Return a mask word which clears the unimplemented priority bits
64
+ * from a priority value for a physical interrupt. (Not to be confused
65
+ * with the group priority, whose mask depends on the value of BPR
66
+ * for the interrupt group.)
67
+ */
68
+ return ~0U << (8 - cs->pribits);
77
+}
69
+}
78
+
70
+
79
+static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,
71
+static inline int icc_min_bpr(GICv3CPUState *cs)
80
+ gpointer user_data)
81
+{
72
+{
82
+ uint16_t asid = *(uint16_t *)user_data;
73
+ /* The minimum BPR for the physical interface. */
83
+ SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
74
+ return 7 - cs->prebits;
84
+
85
+ return iotlb_key->asid == asid;
86
+}
75
+}
87
+
76
+
88
+inline void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova)
77
+static inline int icc_min_bpr_ns(GICv3CPUState *cs)
89
+{
78
+{
90
+ SMMUIOTLBKey key = {.asid = asid, .iova = iova};
79
+ return icc_min_bpr(cs) + 1;
91
+
92
+ trace_smmu_iotlb_inv_iova(asid, iova);
93
+ g_hash_table_remove(s->iotlb, &key);
94
+}
80
+}
95
+
81
+
96
+inline void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid)
82
+static inline int icc_num_aprs(GICv3CPUState *cs)
97
+{
83
+{
98
+ trace_smmu_iotlb_inv_asid(asid);
84
+ /* Return the number of APR registers (1, 2, or 4) */
99
+ g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
85
+ int aprmax = 1 << MAX(cs->prebits - 5, 0);
86
+ assert(aprmax <= ARRAY_SIZE(cs->icc_apr[0]));
87
+ return aprmax;
100
+}
88
+}
101
+
89
+
102
/* VMSAv8-64 Translation */
90
static int icc_highest_active_prio(GICv3CPUState *cs)
103
104
/**
105
@@ -XXX,XX +XXX,XX @@ IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid)
106
return NULL;
107
}
108
109
+static guint smmu_iotlb_key_hash(gconstpointer v)
110
+{
111
+ SMMUIOTLBKey *key = (SMMUIOTLBKey *)v;
112
+ uint32_t a, b, c;
113
+
114
+ /* Jenkins hash */
115
+ a = b = c = JHASH_INITVAL + sizeof(*key);
116
+ a += key->asid;
117
+ b += extract64(key->iova, 0, 32);
118
+ c += extract64(key->iova, 32, 32);
119
+
120
+ __jhash_mix(a, b, c);
121
+ __jhash_final(a, b, c);
122
+
123
+ return c;
124
+}
125
+
126
+static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
127
+{
128
+ const SMMUIOTLBKey *k1 = v1;
129
+ const SMMUIOTLBKey *k2 = v2;
130
+
131
+ return (k1->asid == k2->asid) && (k1->iova == k2->iova);
132
+}
133
+
134
static void smmu_base_realize(DeviceState *dev, Error **errp)
135
{
91
{
136
SMMUState *s = ARM_SMMU(dev);
92
/* Calculate the current running priority based on the set bits
137
@@ -XXX,XX +XXX,XX @@ static void smmu_base_realize(DeviceState *dev, Error **errp)
93
@@ -XXX,XX +XXX,XX @@ static int icc_highest_active_prio(GICv3CPUState *cs)
94
*/
95
int i;
96
97
- for (i = 0; i < ARRAY_SIZE(cs->icc_apr[0]); i++) {
98
+ for (i = 0; i < icc_num_aprs(cs); i++) {
99
uint32_t apr = cs->icc_apr[GICV3_G0][i] |
100
cs->icc_apr[GICV3_G1][i] | cs->icc_apr[GICV3_G1NS][i];
101
102
if (!apr) {
103
continue;
104
}
105
- return (i * 32 + ctz32(apr)) << (GIC_MIN_BPR + 1);
106
+ return (i * 32 + ctz32(apr)) << (icc_min_bpr(cs) + 1);
107
}
108
/* No current active interrupts: return idle priority */
109
return 0xff;
110
@@ -XXX,XX +XXX,XX @@ static void icc_pmr_write(CPUARMState *env, const ARMCPRegInfo *ri,
111
112
trace_gicv3_icc_pmr_write(gicv3_redist_affid(cs), value);
113
114
- value &= 0xff;
115
+ value &= icc_fullprio_mask(cs);
116
117
if (arm_feature(env, ARM_FEATURE_EL3) && !arm_is_secure(env) &&
118
(env->cp15.scr_el3 & SCR_FIQ)) {
119
@@ -XXX,XX +XXX,XX @@ static void icc_activate_irq(GICv3CPUState *cs, int irq)
120
*/
121
uint32_t mask = icc_gprio_mask(cs, cs->hppi.grp);
122
int prio = cs->hppi.prio & mask;
123
- int aprbit = prio >> 1;
124
+ int aprbit = prio >> (8 - cs->prebits);
125
int regno = aprbit / 32;
126
int regbit = aprbit % 32;
127
128
@@ -XXX,XX +XXX,XX @@ static void icc_drop_prio(GICv3CPUState *cs, int grp)
129
*/
130
int i;
131
132
- for (i = 0; i < ARRAY_SIZE(cs->icc_apr[grp]); i++) {
133
+ for (i = 0; i < icc_num_aprs(cs); i++) {
134
uint64_t *papr = &cs->icc_apr[grp][i];
135
136
if (!*papr) {
137
@@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri,
138
return;
138
return;
139
}
139
}
140
s->configs = g_hash_table_new_full(NULL, NULL, NULL, g_free);
140
141
+ s->iotlb = g_hash_table_new_full(smmu_iotlb_key_hash, smmu_iotlb_key_equal,
141
- minval = (grp == GICV3_G1NS) ? GIC_MIN_BPR_NS : GIC_MIN_BPR;
142
+ g_free, g_free);
142
+ minval = (grp == GICV3_G1NS) ? icc_min_bpr_ns(cs) : icc_min_bpr(cs);
143
s->smmu_pcibus_by_busptr = g_hash_table_new(NULL, NULL);
143
if (value < minval) {
144
144
value = minval;
145
if (s->primary_bus) {
146
@@ -XXX,XX +XXX,XX @@ static void smmu_base_reset(DeviceState *dev)
147
SMMUState *s = ARM_SMMU(dev);
148
149
g_hash_table_remove_all(s->configs);
150
+ g_hash_table_remove_all(s->iotlb);
151
}
152
153
static Property smmu_dev_properties[] = {
154
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
155
index XXXXXXX..XXXXXXX 100644
156
--- a/hw/arm/smmuv3.c
157
+++ b/hw/arm/smmuv3.c
158
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
159
SMMUEventInfo event = {.type = SMMU_EVT_NONE, .sid = sid};
160
SMMUPTWEventInfo ptw_info = {};
161
SMMUTranslationStatus status;
162
+ SMMUState *bs = ARM_SMMU(s);
163
+ uint64_t page_mask, aligned_addr;
164
+ IOMMUTLBEntry *cached_entry = NULL;
165
+ SMMUTransTableInfo *tt;
166
SMMUTransCfg *cfg = NULL;
167
IOMMUTLBEntry entry = {
168
.target_as = &address_space_memory,
169
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
170
.addr_mask = ~(hwaddr)0,
171
.perm = IOMMU_NONE,
172
};
173
+ SMMUIOTLBKey key, *new_key;
174
175
qemu_mutex_lock(&s->mutex);
176
177
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
178
goto epilogue;
179
}
145
}
180
146
@@ -XXX,XX +XXX,XX @@ static void icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
181
- if (smmu_ptw(cfg, addr, flag, &entry, &ptw_info)) {
147
182
+ tt = select_tt(cfg, addr);
148
cs->icc_ctlr_el1[GICV3_S] = ICC_CTLR_EL1_A3V |
183
+ if (!tt) {
149
(1 << ICC_CTLR_EL1_IDBITS_SHIFT) |
184
+ if (event.record_trans_faults) {
150
- (7 << ICC_CTLR_EL1_PRIBITS_SHIFT);
185
+ event.type = SMMU_EVT_F_TRANSLATION;
151
+ ((cs->pribits - 1) << ICC_CTLR_EL1_PRIBITS_SHIFT);
186
+ event.u.f_translation.addr = addr;
152
cs->icc_ctlr_el1[GICV3_NS] = ICC_CTLR_EL1_A3V |
187
+ event.u.f_translation.rnw = flag & 0x1;
153
(1 << ICC_CTLR_EL1_IDBITS_SHIFT) |
154
- (7 << ICC_CTLR_EL1_PRIBITS_SHIFT);
155
+ ((cs->pribits - 1) << ICC_CTLR_EL1_PRIBITS_SHIFT);
156
cs->icc_pmr_el1 = 0;
157
- cs->icc_bpr[GICV3_G0] = GIC_MIN_BPR;
158
- cs->icc_bpr[GICV3_G1] = GIC_MIN_BPR;
159
- cs->icc_bpr[GICV3_G1NS] = GIC_MIN_BPR_NS;
160
+ cs->icc_bpr[GICV3_G0] = icc_min_bpr(cs);
161
+ cs->icc_bpr[GICV3_G1] = icc_min_bpr(cs);
162
+ cs->icc_bpr[GICV3_G1NS] = icc_min_bpr_ns(cs);
163
memset(cs->icc_apr, 0, sizeof(cs->icc_apr));
164
memset(cs->icc_igrpen, 0, sizeof(cs->icc_igrpen));
165
cs->icc_ctlr_el3 = ICC_CTLR_EL3_NDS | ICC_CTLR_EL3_A3V |
166
(1 << ICC_CTLR_EL3_IDBITS_SHIFT) |
167
- (7 << ICC_CTLR_EL3_PRIBITS_SHIFT);
168
+ ((cs->pribits - 1) << ICC_CTLR_EL3_PRIBITS_SHIFT);
169
170
memset(cs->ich_apr, 0, sizeof(cs->ich_apr));
171
cs->ich_hcr_el2 = 0;
172
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
173
.readfn = icc_ap_read,
174
.writefn = icc_ap_write,
175
},
176
- { .name = "ICC_AP0R1_EL1", .state = ARM_CP_STATE_BOTH,
177
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 5,
178
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
179
- .access = PL1_RW, .accessfn = gicv3_fiq_access,
180
- .readfn = icc_ap_read,
181
- .writefn = icc_ap_write,
182
- },
183
- { .name = "ICC_AP0R2_EL1", .state = ARM_CP_STATE_BOTH,
184
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 6,
185
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
186
- .access = PL1_RW, .accessfn = gicv3_fiq_access,
187
- .readfn = icc_ap_read,
188
- .writefn = icc_ap_write,
189
- },
190
- { .name = "ICC_AP0R3_EL1", .state = ARM_CP_STATE_BOTH,
191
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 7,
192
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
193
- .access = PL1_RW, .accessfn = gicv3_fiq_access,
194
- .readfn = icc_ap_read,
195
- .writefn = icc_ap_write,
196
- },
197
/* All the ICC_AP1R*_EL1 registers are banked */
198
{ .name = "ICC_AP1R0_EL1", .state = ARM_CP_STATE_BOTH,
199
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 0,
200
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
201
.readfn = icc_ap_read,
202
.writefn = icc_ap_write,
203
},
204
- { .name = "ICC_AP1R1_EL1", .state = ARM_CP_STATE_BOTH,
205
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 1,
206
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
207
- .access = PL1_RW, .accessfn = gicv3_irq_access,
208
- .readfn = icc_ap_read,
209
- .writefn = icc_ap_write,
210
- },
211
- { .name = "ICC_AP1R2_EL1", .state = ARM_CP_STATE_BOTH,
212
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 2,
213
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
214
- .access = PL1_RW, .accessfn = gicv3_irq_access,
215
- .readfn = icc_ap_read,
216
- .writefn = icc_ap_write,
217
- },
218
- { .name = "ICC_AP1R3_EL1", .state = ARM_CP_STATE_BOTH,
219
- .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 3,
220
- .type = ARM_CP_IO | ARM_CP_NO_RAW,
221
- .access = PL1_RW, .accessfn = gicv3_irq_access,
222
- .readfn = icc_ap_read,
223
- .writefn = icc_ap_write,
224
- },
225
{ .name = "ICC_DIR_EL1", .state = ARM_CP_STATE_BOTH,
226
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 11, .opc2 = 1,
227
.type = ARM_CP_IO | ARM_CP_NO_RAW,
228
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
229
},
230
};
231
232
+static const ARMCPRegInfo gicv3_cpuif_icc_apxr1_reginfo[] = {
233
+ { .name = "ICC_AP0R1_EL1", .state = ARM_CP_STATE_BOTH,
234
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 5,
235
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
236
+ .access = PL1_RW, .accessfn = gicv3_fiq_access,
237
+ .readfn = icc_ap_read,
238
+ .writefn = icc_ap_write,
239
+ },
240
+ { .name = "ICC_AP1R1_EL1", .state = ARM_CP_STATE_BOTH,
241
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 1,
242
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
243
+ .access = PL1_RW, .accessfn = gicv3_irq_access,
244
+ .readfn = icc_ap_read,
245
+ .writefn = icc_ap_write,
246
+ },
247
+};
248
+
249
+static const ARMCPRegInfo gicv3_cpuif_icc_apxr23_reginfo[] = {
250
+ { .name = "ICC_AP0R2_EL1", .state = ARM_CP_STATE_BOTH,
251
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 6,
252
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
253
+ .access = PL1_RW, .accessfn = gicv3_fiq_access,
254
+ .readfn = icc_ap_read,
255
+ .writefn = icc_ap_write,
256
+ },
257
+ { .name = "ICC_AP0R3_EL1", .state = ARM_CP_STATE_BOTH,
258
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 8, .opc2 = 7,
259
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
260
+ .access = PL1_RW, .accessfn = gicv3_fiq_access,
261
+ .readfn = icc_ap_read,
262
+ .writefn = icc_ap_write,
263
+ },
264
+ { .name = "ICC_AP1R2_EL1", .state = ARM_CP_STATE_BOTH,
265
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 2,
266
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
267
+ .access = PL1_RW, .accessfn = gicv3_irq_access,
268
+ .readfn = icc_ap_read,
269
+ .writefn = icc_ap_write,
270
+ },
271
+ { .name = "ICC_AP1R3_EL1", .state = ARM_CP_STATE_BOTH,
272
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 3,
273
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
274
+ .access = PL1_RW, .accessfn = gicv3_irq_access,
275
+ .readfn = icc_ap_read,
276
+ .writefn = icc_ap_write,
277
+ },
278
+};
279
+
280
static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
281
{
282
GICv3CPUState *cs = icc_cs_from_env(env);
283
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
284
* get back to the GICv3CPUState from the CPUARMState.
285
*/
286
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
287
+
288
+ /*
289
+ * For the moment, retain the existing behaviour of 8 priority bits;
290
+ * in a following commit we will take this from the CPU state,
291
+ * as we do for the virtual priority bits.
292
+ */
293
+ cs->pribits = 8;
294
+ /*
295
+ * The GICv3 has separate ID register fields for virtual priority
296
+ * and preemption bit values, but only a single ID register field
297
+ * for the physical priority bits. The preemption bit count is
298
+ * always the same as the priority bit count, except that 8 bits
299
+ * of priority means 7 preemption bits. We precalculate the
300
+ * preemption bits because it simplifies the code and makes the
301
+ * parallels between the virtual and physical bits of the GIC
302
+ * a bit clearer.
303
+ */
304
+ cs->prebits = cs->pribits;
305
+ if (cs->prebits == 8) {
306
+ cs->prebits--;
188
+ }
307
+ }
189
+ status = SMMU_TRANS_ERROR;
308
+ /*
190
+ goto epilogue;
309
+ * Check that CPU code defining pribits didn't violate
191
+ }
310
+ * architectural constraints our implementation relies on.
192
+
311
+ */
193
+ page_mask = (1ULL << (tt->granule_sz)) - 1;
312
+ g_assert(cs->pribits >= 4 && cs->pribits <= 8);
194
+ aligned_addr = addr & ~page_mask;
313
+
195
+
314
+ /*
196
+ key.asid = cfg->asid;
315
+ * gicv3_cpuif_reginfo[] defines ICC_AP*R0_EL1; add definitions
197
+ key.iova = aligned_addr;
316
+ * for ICC_AP*R{1,2,3}_EL1 if the prebits value requires them.
198
+
317
+ */
199
+ cached_entry = g_hash_table_lookup(bs->iotlb, &key);
318
+ if (cs->prebits >= 6) {
200
+ if (cached_entry) {
319
+ define_arm_cp_regs(cpu, gicv3_cpuif_icc_apxr1_reginfo);
201
+ cfg->iotlb_hits++;
202
+ trace_smmu_iotlb_cache_hit(cfg->asid, aligned_addr,
203
+ cfg->iotlb_hits, cfg->iotlb_misses,
204
+ 100 * cfg->iotlb_hits /
205
+ (cfg->iotlb_hits + cfg->iotlb_misses));
206
+ if ((flag & IOMMU_WO) && !(cached_entry->perm & IOMMU_WO)) {
207
+ status = SMMU_TRANS_ERROR;
208
+ if (event.record_trans_faults) {
209
+ event.type = SMMU_EVT_F_PERMISSION;
210
+ event.u.f_permission.addr = addr;
211
+ event.u.f_permission.rnw = flag & 0x1;
212
+ }
213
+ } else {
214
+ status = SMMU_TRANS_SUCCESS;
215
+ }
320
+ }
216
+ goto epilogue;
321
+ if (cs->prebits == 7) {
217
+ }
322
+ define_arm_cp_regs(cpu, gicv3_cpuif_icc_apxr23_reginfo);
218
+
219
+ cfg->iotlb_misses++;
220
+ trace_smmu_iotlb_cache_miss(cfg->asid, addr & ~page_mask,
221
+ cfg->iotlb_hits, cfg->iotlb_misses,
222
+ 100 * cfg->iotlb_hits /
223
+ (cfg->iotlb_hits + cfg->iotlb_misses));
224
+
225
+ if (g_hash_table_size(bs->iotlb) >= SMMU_IOTLB_MAX_SIZE) {
226
+ smmu_iotlb_inv_all(bs);
227
+ }
228
+
229
+ cached_entry = g_new0(IOMMUTLBEntry, 1);
230
+
231
+ if (smmu_ptw(cfg, aligned_addr, flag, cached_entry, &ptw_info)) {
232
+ g_free(cached_entry);
233
switch (ptw_info.type) {
234
case SMMU_PTW_ERR_WALK_EABT:
235
event.type = SMMU_EVT_F_WALK_EABT;
236
@@ -XXX,XX +XXX,XX @@ static IOMMUTLBEntry smmuv3_translate(IOMMUMemoryRegion *mr, hwaddr addr,
237
}
238
status = SMMU_TRANS_ERROR;
239
} else {
240
+ new_key = g_new0(SMMUIOTLBKey, 1);
241
+ new_key->asid = cfg->asid;
242
+ new_key->iova = aligned_addr;
243
+ g_hash_table_insert(bs->iotlb, new_key, cached_entry);
244
status = SMMU_TRANS_SUCCESS;
245
}
246
247
@@ -XXX,XX +XXX,XX @@ epilogue:
248
switch (status) {
249
case SMMU_TRANS_SUCCESS:
250
entry.perm = flag;
251
+ entry.translated_addr = cached_entry->translated_addr +
252
+ (addr & page_mask);
253
+ entry.addr_mask = cached_entry->addr_mask;
254
trace_smmuv3_translate_success(mr->parent_obj.name, sid, addr,
255
entry.translated_addr, entry.perm);
256
break;
257
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
258
smmuv3_flush_config(sdev);
259
break;
260
}
261
- case SMMU_CMD_TLBI_NH_ALL:
262
case SMMU_CMD_TLBI_NH_ASID:
263
- case SMMU_CMD_TLBI_NH_VA:
264
+ {
265
+ uint16_t asid = CMD_ASID(&cmd);
266
+
267
+ trace_smmuv3_cmdq_tlbi_nh_asid(asid);
268
+ smmu_iotlb_inv_asid(bs, asid);
269
+ break;
270
+ }
323
+ }
271
+ case SMMU_CMD_TLBI_NH_ALL:
324
+
272
+ case SMMU_CMD_TLBI_NSNH_ALL:
325
if (arm_feature(&cpu->env, ARM_FEATURE_EL2)) {
273
+ trace_smmuv3_cmdq_tlbi_nh();
326
int j;
274
+ smmu_iotlb_inv_all(bs);
327
275
+ break;
276
case SMMU_CMD_TLBI_NH_VAA:
277
+ {
278
+ dma_addr_t addr = CMD_ADDR(&cmd);
279
+ uint16_t vmid = CMD_VMID(&cmd);
280
+
281
+ trace_smmuv3_cmdq_tlbi_nh_vaa(vmid, addr);
282
+ smmu_iotlb_inv_all(bs);
283
+ break;
284
+ }
285
+ case SMMU_CMD_TLBI_NH_VA:
286
+ {
287
+ uint16_t asid = CMD_ASID(&cmd);
288
+ uint16_t vmid = CMD_VMID(&cmd);
289
+ dma_addr_t addr = CMD_ADDR(&cmd);
290
+ bool leaf = CMD_LEAF(&cmd);
291
+
292
+ trace_smmuv3_cmdq_tlbi_nh_va(vmid, asid, addr, leaf);
293
+ smmu_iotlb_inv_iova(bs, asid, addr);
294
+ break;
295
+ }
296
case SMMU_CMD_TLBI_EL3_ALL:
297
case SMMU_CMD_TLBI_EL3_VA:
298
case SMMU_CMD_TLBI_EL2_ALL:
299
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
300
case SMMU_CMD_TLBI_EL2_VAA:
301
case SMMU_CMD_TLBI_S12_VMALL:
302
case SMMU_CMD_TLBI_S2_IPA:
303
- case SMMU_CMD_TLBI_NSNH_ALL:
304
case SMMU_CMD_ATC_INV:
305
case SMMU_CMD_PRI_RESP:
306
case SMMU_CMD_RESUME:
307
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
308
index XXXXXXX..XXXXXXX 100644
309
--- a/hw/arm/trace-events
310
+++ b/hw/arm/trace-events
311
@@ -XXX,XX +XXX,XX @@ smmu_ptw_invalid_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr,
312
smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t address) "stage=%d level=%d iova=0x%"PRIx64" base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" page address = 0x%"PRIx64
313
smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
314
smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
315
+smmu_iotlb_cache_hit(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
316
+smmu_iotlb_cache_miss(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache MISS asid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
317
+smmu_iotlb_inv_all(void) "IOTLB invalidate all"
318
+smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
319
+smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
320
321
#hw/arm/smmuv3.c
322
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
323
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_cfgi_ste_range(int start, int end) "start=0x%d - end=0x%d"
324
smmuv3_cmdq_cfgi_cd(uint32_t sid) "streamid = %d"
325
smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid %d (hits=%d, misses=%d, hit rate=%d)"
326
smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid %d (hits=%d, misses=%d, hit rate=%d)"
327
+smmuv3_cmdq_tlbi_nh_va(int vmid, int asid, uint64_t addr, bool leaf) "vmid =%d asid =%d addr=0x%"PRIx64" leaf=%d"
328
+smmuv3_cmdq_tlbi_nh_vaa(int vmid, uint64_t addr) "vmid =%d addr=0x%"PRIx64
329
+smmuv3_cmdq_tlbi_nh(void) ""
330
+smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
331
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
332
--
328
--
333
2.17.1
329
2.25.1
334
335
1
From: Cédric Le Goater <clg@kaod.org>
1
Make the GICv3 set its number of bits of physical priority from the
2
implementation-specific value provided in the CPU state struct, in
3
the same way we already do for virtual priority bits. Because this
4
would be a migration compatibility break, we provide a property
5
force-8-bit-prio which is enabled for 7.0 and earlier versioned board
6
models to retain the legacy "always use 8 bits" behaviour.
2
7
3
Also handle the fake transfers for dummy bytes in this setup
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
routine. It will be useful when we activate MMIO execution.
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20220512151457.3899052-6-peter.maydell@linaro.org
11
Message-id: 20220506162129.2896966-5-peter.maydell@linaro.org
12
---
13
include/hw/intc/arm_gicv3_common.h | 1 +
14
target/arm/cpu.h | 1 +
15
hw/core/machine.c | 4 +++-
16
hw/intc/arm_gicv3_common.c | 5 +++++
17
hw/intc/arm_gicv3_cpuif.c | 15 +++++++++++----
18
target/arm/cpu64.c | 6 ++++++
19
6 files changed, 27 insertions(+), 5 deletions(-)
5
20
6
Signed-off-by: Cédric Le Goater <clg@kaod.org>
21
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
7
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
8
Message-id: 20180612065716.10587-4-clg@kaod.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/ssi/aspeed_smc.c | 31 ++++++++++++++++---------------
12
1 file changed, 16 insertions(+), 15 deletions(-)
13
14
diff --git a/hw/ssi/aspeed_smc.c b/hw/ssi/aspeed_smc.c
15
index XXXXXXX..XXXXXXX 100644
22
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/ssi/aspeed_smc.c
23
--- a/include/hw/intc/arm_gicv3_common.h
17
+++ b/hw/ssi/aspeed_smc.c
24
+++ b/include/hw/intc/arm_gicv3_common.h
18
@@ -XXX,XX +XXX,XX @@ static int aspeed_smc_flash_dummies(const AspeedSMCFlash *fl)
25
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
19
return dummies;
26
uint32_t revision;
27
bool lpi_enable;
28
bool security_extn;
29
+ bool force_8bit_prio;
30
bool irq_reset_nonsecure;
31
bool gicd_no_migration_shift_bug;
32
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
36
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
38
int gic_num_lrs; /* number of list registers */
39
int gic_vpribits; /* number of virtual priority bits */
40
int gic_vprebits; /* number of virtual preemption bits */
41
+ int gic_pribits; /* number of physical priority bits */
42
43
/* Whether the cfgend input is high (i.e. this CPU should reset into
44
* big-endian mode). This setting isn't used directly: instead it modifies
45
diff --git a/hw/core/machine.c b/hw/core/machine.c
46
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/core/machine.c
48
+++ b/hw/core/machine.c
49
@@ -XXX,XX +XXX,XX @@
50
#include "hw/virtio/virtio-pci.h"
51
#include "qom/object_interfaces.h"
52
53
-GlobalProperty hw_compat_7_0[] = {};
54
+GlobalProperty hw_compat_7_0[] = {
55
+ { "arm-gicv3-common", "force-8-bit-prio", "on" },
56
+};
57
const size_t hw_compat_7_0_len = G_N_ELEMENTS(hw_compat_7_0);
58
59
GlobalProperty hw_compat_6_2[] = {
60
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/hw/intc/arm_gicv3_common.c
63
+++ b/hw/intc/arm_gicv3_common.c
64
@@ -XXX,XX +XXX,XX @@ static Property arm_gicv3_common_properties[] = {
65
DEFINE_PROP_UINT32("revision", GICv3State, revision, 3),
66
DEFINE_PROP_BOOL("has-lpi", GICv3State, lpi_enable, 0),
67
DEFINE_PROP_BOOL("has-security-extensions", GICv3State, security_extn, 0),
68
+ /*
69
+ * Compatibility property: force 8 bits of physical priority, even
70
+ * if the CPU being emulated should have fewer.
71
+ */
72
+ DEFINE_PROP_BOOL("force-8-bit-prio", GICv3State, force_8bit_prio, 0),
73
DEFINE_PROP_ARRAY("redist-region-count", GICv3State, nb_redist_regions,
74
redist_region_count, qdev_prop_uint32, uint32_t),
75
DEFINE_PROP_LINK("sysmem", GICv3State, dma, TYPE_MEMORY_REGION,
76
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
* cpu->gic_num_lrs
* cpu->gic_vpribits
* cpu->gic_vprebits
+ * cpu->gic_pribits
*/

/* Note that we can't just use the GICv3CPUState as an opaque pointer
@@ -XXX,XX +XXX,XX @@ void gicv3_init_cpuif(GICv3State *s)
define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);

/*
- * For the moment, retain the existing behaviour of 8 priority bits;
- * in a following commit we will take this from the CPU state,
- * as we do for the virtual priority bits.
+ * The CPU implementation specifies the number of supported
+ * bits of physical priority. For backwards compatibility
+ * of migration, we have a compat property that forces use
+ * of 8 priority bits regardless of what the CPU really has.
*/
- cs->pribits = 8;
+ if (s->force_8bit_prio) {
+ cs->pribits = 8;
+ } else {
+ cs->pribits = cpu->gic_pribits ?: 5;
+ }
+
/*
* The GICv3 has separate ID register fields for virtual priority
* and preemption bit values, but only a single ID register field
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;
define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;
define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;
define_cortex_a72_a57_a53_cp_reginfo(cpu);
}

@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;

/* From B5.1 AdvSIMD AArch64 register summary */
cpu->isar.mvfr0 = 0x10110222;
@@ -XXX,XX +XXX,XX @@ static void aarch64_neoverse_n1_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;

/* From B5.1 AdvSIMD AArch64 register summary */
cpu->isar.mvfr0 = 0x10110222;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
cpu->gic_num_lrs = 4;
cpu->gic_vpribits = 5;
cpu->gic_vprebits = 5;
+ cpu->gic_pribits = 5;

/* Suppport of A64FX's vector length are 128,256 and 512bit only */
aarch64_add_sve_properties(obj);
--
2.25.1

}

-static void aspeed_smc_flash_send_addr(AspeedSMCFlash *fl, uint32_t addr)
+static void aspeed_smc_flash_setup(AspeedSMCFlash *fl, uint32_t addr)
{
const AspeedSMCState *s = fl->controller;
uint8_t cmd = aspeed_smc_flash_cmd(fl);
+ int i;

/* Flash access can not exceed CS segment */
addr = aspeed_smc_check_segment_addr(fl, addr);
@@ -XXX,XX +XXX,XX @@ static void aspeed_smc_flash_send_addr(AspeedSMCFlash *fl, uint32_t addr)
ssi_transfer(s->spi, (addr >> 16) & 0xff);
ssi_transfer(s->spi, (addr >> 8) & 0xff);
ssi_transfer(s->spi, (addr & 0xff));
+
+ /*
+ * Use fake transfers to model dummy bytes. The value should
+ * be configured to some non-zero value in fast read mode and
+ * zero in read mode. But, as the HW allows inconsistent
+ * settings, let's check for fast read mode.
+ */
+ if (aspeed_smc_flash_mode(fl) == CTRL_FREADMODE) {
+ for (i = 0; i < aspeed_smc_flash_dummies(fl); i++) {
+ ssi_transfer(fl->controller->spi, 0xFF);
+ }
+ }
}

static uint64_t aspeed_smc_flash_read(void *opaque, hwaddr addr, unsigned size)
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_smc_flash_read(void *opaque, hwaddr addr, unsigned size)
case CTRL_READMODE:
case CTRL_FREADMODE:
aspeed_smc_flash_select(fl);
- aspeed_smc_flash_send_addr(fl, addr);
-
- /*
- * Use fake transfers to model dummy bytes. The value should
- * be configured to some non-zero value in fast read mode and
- * zero in read mode. But, as the HW allows inconsistent
- * settings, let's check for fast read mode.
- */
- if (aspeed_smc_flash_mode(fl) == CTRL_FREADMODE) {
- for (i = 0; i < aspeed_smc_flash_dummies(fl); i++) {
- ssi_transfer(fl->controller->spi, 0xFF);
- }
- }
+ aspeed_smc_flash_setup(fl, addr);

for (i = 0; i < size; i++) {
ret |= ssi_transfer(s->spi, 0x0) << (8 * i);

@@ -XXX,XX +XXX,XX @@ static void aspeed_smc_flash_write(void *opaque, hwaddr addr, uint64_t data,
break;
case CTRL_WRITEMODE:
aspeed_smc_flash_select(fl);
- aspeed_smc_flash_send_addr(fl, addr);
+ aspeed_smc_flash_setup(fl, addr);

for (i = 0; i < size; i++) {
ssi_transfer(s->spi, (data >> (8 * i)) & 0xff);
--
2.17.1

diff view generated by jsdifflib
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

TCMI_VERBOSE is no more used, drop the OMAP_8/16/32B_REG macros.

Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-9-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/arm/omap.h | 18 ------------------
hw/arm/omap1.c | 18 ++++++++++++------
2 files changed, 12 insertions(+), 24 deletions(-)

diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/omap.h
+++ b/include/hw/arm/omap.h
@@ -XXX,XX +XXX,XX @@ enum {
#define OMAP_GPIOSW_INVERTED    0x0001
#define OMAP_GPIOSW_OUTPUT    0x0002

-# define TCMI_VERBOSE            1
-
-# ifdef TCMI_VERBOSE
-# define OMAP_8B_REG(paddr)        \
- fprintf(stderr, "%s: 8-bit register " OMAP_FMT_plx "\n",    \
- __func__, paddr)
-# define OMAP_16B_REG(paddr)        \
- fprintf(stderr, "%s: 16-bit register " OMAP_FMT_plx "\n",    \
- __func__, paddr)
-# define OMAP_32B_REG(paddr)        \
- fprintf(stderr, "%s: 32-bit register " OMAP_FMT_plx "\n",    \
- __func__, paddr)
-# else
-# define OMAP_8B_REG(paddr)
-# define OMAP_16B_REG(paddr)
-# define OMAP_32B_REG(paddr)
-# endif
-
# define OMAP_MPUI_REG_MASK        0x000007ff

#endif /* hw_omap_h */
diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/omap1.c
+++ b/hw/arm/omap1.c
@@ -XXX,XX +XXX,XX @@
#include "qemu/cutils.h"
#include "qemu/bcd.h"

+static inline void omap_log_badwidth(const char *funcname, hwaddr addr, int sz)
+{
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: %d-bit register %#08" HWADDR_PRIx "\n",
+ funcname, 8 * sz, addr);
+}
+
/* Should signal the TCMI/GPMC */
uint32_t omap_badwidth_read8(void *opaque, hwaddr addr)
{
uint8_t ret;

- OMAP_8B_REG(addr);
+ omap_log_badwidth(__func__, addr, 1);
cpu_physical_memory_read(addr, &ret, 1);
return ret;
}
@@ -XXX,XX +XXX,XX @@ void omap_badwidth_write8(void *opaque, hwaddr addr,
{
uint8_t val8 = value;

- OMAP_8B_REG(addr);
+ omap_log_badwidth(__func__, addr, 1);
cpu_physical_memory_write(addr, &val8, 1);
}

@@ -XXX,XX +XXX,XX @@ uint32_t omap_badwidth_read16(void *opaque, hwaddr addr)
{
uint16_t ret;

- OMAP_16B_REG(addr);
+ omap_log_badwidth(__func__, addr, 2);
cpu_physical_memory_read(addr, &ret, 2);
return ret;
}
@@ -XXX,XX +XXX,XX @@ void omap_badwidth_write16(void *opaque, hwaddr addr,
{
uint16_t val16 = value;

- OMAP_16B_REG(addr);
+ omap_log_badwidth(__func__, addr, 2);
cpu_physical_memory_write(addr, &val16, 2);
}

@@ -XXX,XX +XXX,XX @@ uint32_t omap_badwidth_read32(void *opaque, hwaddr addr)
{
uint32_t ret;

- OMAP_32B_REG(addr);
+ omap_log_badwidth(__func__, addr, 4);
cpu_physical_memory_read(addr, &ret, 4);
return ret;
}
@@ -XXX,XX +XXX,XX @@ uint32_t omap_badwidth_read32(void *opaque, hwaddr addr)
void omap_badwidth_write32(void *opaque, hwaddr addr,
uint32_t value)
{
- OMAP_32B_REG(addr);
+ omap_log_badwidth(__func__, addr, 4);
cpu_physical_memory_write(addr, &value, 4);
}

--
2.17.1

We previously open-coded the expression for the number of virtual APR
registers and the assertion that it was not going to cause us to
overflow the cs->ich_apr[] array. Factor this out into a new
ich_num_aprs() function, for consistency with the icc_num_aprs()
function we just added for the physical APR handling.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220512151457.3899052-7-peter.maydell@linaro.org
Message-id: 20220506162129.2896966-6-peter.maydell@linaro.org
---
hw/intc/arm_gicv3_cpuif.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static inline int icv_min_vbpr(GICv3CPUState *cs)
return 7 - cs->vprebits;
}

+static inline int ich_num_aprs(GICv3CPUState *cs)
+{
+ /* Return the number of virtual APR registers (1, 2, or 4) */
+ int aprmax = 1 << (cs->vprebits - 5);
+ assert(aprmax <= ARRAY_SIZE(cs->ich_apr[0]));
+ return aprmax;
+}
+
/* Simple accessor functions for LR fields */
static uint32_t ich_lr_vintid(uint64_t lr)
{
@@ -XXX,XX +XXX,XX @@ static int ich_highest_active_virt_prio(GICv3CPUState *cs)
* in the ICH Active Priority Registers.
*/
int i;
- int aprmax = 1 << (cs->vprebits - 5);
-
- assert(aprmax <= ARRAY_SIZE(cs->ich_apr[0]));
+ int aprmax = ich_num_aprs(cs);

for (i = 0; i < aprmax; i++) {
uint32_t apr = cs->ich_apr[GICV3_G0][i] |
@@ -XXX,XX +XXX,XX @@ static int icv_drop_prio(GICv3CPUState *cs)
* 32 bits are actually relevant.
*/
int i;
- int aprmax = 1 << (cs->vprebits - 5);
-
- assert(aprmax <= ARRAY_SIZE(cs->ich_apr[0]));
+ int aprmax = ich_num_aprs(cs);

for (i = 0; i < aprmax; i++) {
uint64_t *papr0 = &cs->ich_apr[GICV3_G0][i];
--
2.25.1

diff view generated by jsdifflib
From: Cédric Le Goater <clg@kaod.org>

Only the flash type is strapped by HW. The 4BYTE mode is set by
firmware when the flash device is detected.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20180612065716.10587-3-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/ssi/aspeed_smc.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/hw/ssi/aspeed_smc.c b/hw/ssi/aspeed_smc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/aspeed_smc.c
+++ b/hw/ssi/aspeed_smc.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_smc_reset(DeviceState *d)
aspeed_smc_segment_to_reg(&s->ctrl->segments[i]);
}

- /* HW strapping for AST2500 FMC controllers */
+ /* HW strapping flash type for FMC controllers */
if (s->ctrl->segments == aspeed_segments_ast2500_fmc) {
/* flash type is fixed to SPI for CE0 and CE1 */
s->regs[s->r_conf] |= (CONF_FLASH_TYPE_SPI << CONF_FLASH_TYPE0);
s->regs[s->r_conf] |= (CONF_FLASH_TYPE_SPI << CONF_FLASH_TYPE1);
-
- /* 4BYTE mode is autodetected for CE0. Let's force it to 1 for
- * now */
- s->regs[s->r_ce_ctrl] |= (1 << (CTRL_EXTENDED0));
}

/* HW strapping for AST2400 FMC controllers (SCU70). Let's use the
* configuration of the palmetto-bmc machine */
if (s->ctrl->segments == aspeed_segments_fmc) {
s->regs[s->r_conf] |= (CONF_FLASH_TYPE_SPI << CONF_FLASH_TYPE0);
-
- s->regs[s->r_ce_ctrl] |= (1 << (CTRL_EXTENDED0));
}
}

--
2.17.1

From: Chris Howard <cvz185@web.de>

Give all the debug registers their correct names including the
index, rather than having multiple registers all with the
same name string, which is confusing when viewed over the
gdbstub interface.

Signed-off-by: CHRIS HOWARD <cvz185@web.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 4127D8CA-D54A-47C7-A039-0DB7361E30C0@web.de
[PMM: expanded commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
}

for (i = 0; i < brps; i++) {
+ char *dbgbvr_el1_name = g_strdup_printf("DBGBVR%d_EL1", i);
+ char *dbgbcr_el1_name = g_strdup_printf("DBGBCR%d_EL1", i);
ARMCPRegInfo dbgregs[] = {
- { .name = "DBGBVR", .state = ARM_CP_STATE_BOTH,
+ { .name = dbgbvr_el1_name, .state = ARM_CP_STATE_BOTH,
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 4,
.access = PL1_RW, .accessfn = access_tda,
.fieldoffset = offsetof(CPUARMState, cp15.dbgbvr[i]),
.writefn = dbgbvr_write, .raw_writefn = raw_write
},
- { .name = "DBGBCR", .state = ARM_CP_STATE_BOTH,
+ { .name = dbgbcr_el1_name, .state = ARM_CP_STATE_BOTH,
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 5,
.access = PL1_RW, .accessfn = access_tda,
.fieldoffset = offsetof(CPUARMState, cp15.dbgbcr[i]),
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
},
};
define_arm_cp_regs(cpu, dbgregs);
+ g_free(dbgbvr_el1_name);
+ g_free(dbgbcr_el1_name);
}

for (i = 0; i < wrps; i++) {
+ char *dbgwvr_el1_name = g_strdup_printf("DBGWVR%d_EL1", i);
+ char *dbgwcr_el1_name = g_strdup_printf("DBGWCR%d_EL1", i);
ARMCPRegInfo dbgregs[] = {
- { .name = "DBGWVR", .state = ARM_CP_STATE_BOTH,
+ { .name = dbgwvr_el1_name, .state = ARM_CP_STATE_BOTH,
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 6,
.access = PL1_RW, .accessfn = access_tda,
.fieldoffset = offsetof(CPUARMState, cp15.dbgwvr[i]),
.writefn = dbgwvr_write, .raw_writefn = raw_write
},
- { .name = "DBGWCR", .state = ARM_CP_STATE_BOTH,
+ { .name = dbgwcr_el1_name, .state = ARM_CP_STATE_BOTH,
.cp = 14, .opc0 = 2, .opc1 = 0, .crn = 0, .crm = i, .opc2 = 7,
.access = PL1_RW, .accessfn = access_tda,
.fieldoffset = offsetof(CPUARMState, cp15.dbgwcr[i]),
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
},
};
define_arm_cp_regs(cpu, dbgregs);
+ g_free(dbgwvr_el1_name);
+ g_free(dbgwcr_el1_name);
}
}

--
2.25.1

diff view generated by jsdifflib
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

These COMs are hard to find, and the company dropped support
a few years ago.

Per the "Gumstix Product Changes, Known Issues, and EOL" pdf:

- Phasing out: PXA270-based Verdex product line
September 2012

- Phasing out: PXA255-based Basix & Connex
September 2009

However there are still booting SD card images available, very
convenient to stress test the QEMU SD card implementation.
Therefore I volunteer to keep an eye on this file, while it
is useful for testing.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180606144706.29732-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
MAINTAINERS | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: include/hw/arm/digic.h
F: hw/*/digic*

Gumstix
+M: Philippe Mathieu-Daudé <f4bug@amsat.org>
L: qemu-devel@nongnu.org
L: qemu-arm@nongnu.org
-S: Orphan
+S: Odd Fixes
F: hw/arm/gumstix.c

i.MX31
--
2.17.1

diff view generated by jsdifflib
Deleted patch
From: Sai Pavan Boddu <saipava@xilinx.com>

Qspi dma has a burst length of 64 bytes, So limit the transactions w.r.t
dma-burst-size property.

Signed-off-by: Sai Pavan Boddu <saipava@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1529660880-30376-1-git-send-email-sai.pavan.boddu@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/ssi/xilinx_spips.h | 5 ++++-
hw/ssi/xilinx_spips.c | 23 ++++++++++++++++++++---
2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/include/hw/ssi/xilinx_spips.h b/include/hw/ssi/xilinx_spips.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/ssi/xilinx_spips.h
+++ b/include/hw/ssi/xilinx_spips.h
@@ -XXX,XX +XXX,XX @@ typedef struct XilinxSPIPS XilinxSPIPS;
/* Bite off 4k chunks at a time */
#define LQSPI_CACHE_SIZE 1024

+#define QSPI_DMA_MAX_BURST_SIZE 2048
+
typedef enum {
READ = 0x3, READ_4 = 0x13,
FAST_READ = 0xb, FAST_READ_4 = 0x0c,
@@ -XXX,XX +XXX,XX @@ typedef struct {
XilinxQSPIPS parent_obj;

StreamSlave *dma;
- uint8_t dma_buf[4];
int gqspi_irqline;

uint32_t regs[XLNX_ZYNQMP_SPIPS_R_MAX];
@@ -XXX,XX +XXX,XX @@ typedef struct {
uint8_t rx_fifo_g_align;
uint8_t tx_fifo_g_align;
bool man_start_com_g;
+ uint32_t dma_burst_size;
+ uint8_t dma_buf[QSPI_DMA_MAX_BURST_SIZE];
} XlnxZynqMPQSPIPS;

typedef struct XilinxSPIPSClass {
diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
{
size_t ret;
uint32_t num;
- const void *rxd = pop_buf(recv_fifo, 4, &num);
+ const void *rxd;
+ int len;
+
+ len = recv_fifo->num >= rq->dma_burst_size ? rq->dma_burst_size :
+ recv_fifo->num;
+ rxd = pop_buf(recv_fifo, len, &num);

memcpy(rq->dma_buf, rxd, num);

- ret = stream_push(rq->dma, rq->dma_buf, 4);
- assert(ret == 4);
+ ret = stream_push(rq->dma, rq->dma_buf, num);
+ assert(ret == num);
xlnx_zynqmp_qspips_check_flush(rq);
}
}
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(dev);
XilinxSPIPSClass *xsc = XILINX_SPIPS_GET_CLASS(s);

+ if (s->dma_burst_size > QSPI_DMA_MAX_BURST_SIZE) {
+ error_setg(errp,
+ "qspi dma burst size %u exceeds maximum limit %d",
+ s->dma_burst_size, QSPI_DMA_MAX_BURST_SIZE);
+ return;
+ }
xilinx_qspips_realize(dev, errp);
fifo8_create(&s->rx_fifo_g, xsc->rx_fifo_size);
fifo8_create(&s->tx_fifo_g, xsc->tx_fifo_size);
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_xlnx_zynqmp_qspips = {
}
};

+static Property xilinx_zynqmp_qspips_properties[] = {
+ DEFINE_PROP_UINT32("dma-burst-size", XlnxZynqMPQSPIPS, dma_burst_size, 64),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
static Property xilinx_qspips_properties[] = {
/* We had to turn this off for 2.10 as it is not compatible with migration.
* It can be enabled but will prevent the device to be migrated.
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_qspips_class_init(ObjectClass *klass, void * data)
dc->realize = xlnx_zynqmp_qspips_realize;
dc->reset = xlnx_zynqmp_qspips_reset;
dc->vmsd = &vmstate_xlnx_zynqmp_qspips;
+ dc->props = xilinx_zynqmp_qspips_properties;
xsc->reg_ops = &xlnx_zynqmp_qspips_ops;
xsc->rx_fifo_size = RXFF_A_Q;
xsc->tx_fifo_size = TXFF_A_Q;
--
2.17.1

diff view generated by jsdifflib
diff view generated by jsdifflib
Deleted patch
1
From: Joel Stanley <joel@jms.id.au>
2
1
3
This adds Cedric as the maintainer, with Andrew and I as reviewers, for
4
the ASPEED boards and the peripherals we have developed.
5
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
8
Acked-by: Cédric Le Goater <clg@kaod.org>
9
Signed-off-by: Joel Stanley <joel@jms.id.au>
10
Message-id: 20180625140055.32223-1-joel@jms.id.au
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
MAINTAINERS | 11 +++++++++++
14
1 file changed, 11 insertions(+)
15
16
diff --git a/MAINTAINERS b/MAINTAINERS
17
index XXXXXXX..XXXXXXX 100644
18
--- a/MAINTAINERS
19
+++ b/MAINTAINERS
20
@@ -XXX,XX +XXX,XX @@ M: Subbaraya Sundeep <sundeep.lkml@gmail.com>
21
S: Maintained
22
F: hw/arm/msf2-som.c
23
24
+ASPEED BMCs
25
+M: Cédric Le Goater <clg@kaod.org>
26
+R: Andrew Jeffery <andrew@aj.id.au>
27
+R: Joel Stanley <joel@jms.id.au>
28
+L: qemu-arm@nongnu.org
29
+S: Maintained
30
+F: hw/*/*aspeed*
31
+F: include/hw/*/*aspeed*
32
+F: hw/net/ftgmac100.c
33
+F: include/hw/net/ftgmac100.h
34
+
35
CRIS Machines
36
-------------
37
Axis Dev88
38
--
39
2.17.1
40
41
diff view generated by jsdifflib
Deleted patch
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
1
3
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
4
Reviewed-by: Thomas Huth <thuth@redhat.com>
5
Message-id: 20180624040609.17572-2-f4bug@amsat.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
hw/input/pckbd.c | 4 +++-
9
1 file changed, 3 insertions(+), 1 deletion(-)
10
11
diff --git a/hw/input/pckbd.c b/hw/input/pckbd.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/hw/input/pckbd.c
14
+++ b/hw/input/pckbd.c
15
@@ -XXX,XX +XXX,XX @@
16
* THE SOFTWARE.
17
*/
18
#include "qemu/osdep.h"
19
+#include "qemu/log.h"
20
#include "hw/hw.h"
21
#include "hw/isa/isa.h"
22
#include "hw/i386/pc.h"
23
@@ -XXX,XX +XXX,XX @@ static void kbd_write_command(void *opaque, hwaddr addr,
24
/* ignore that */
25
break;
26
default:
27
- fprintf(stderr, "qemu: unsupported keyboard cmd=0x%02x\n", (int)val);
28
+ qemu_log_mask(LOG_GUEST_ERROR,
29
+ "unsupported keyboard cmd=0x%02" PRIx64 "\n", val);
30
break;
31
}
32
}
33
--
34
2.17.1
35
36
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
2
2
3
hw_error() finally calls abort(), but there is no need to abort here.
3
Except hw/core/irq.c which implements the forward-declared opaque
4
qemu_irq structure, hw/adc/zynq-xadc.{c,h} are the only files not
5
using the typedef. Fix this single exception.
4
6
5
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
6
Reviewed-by: Thomas Huth <thuth@redhat.com>
8
Reviewed-by: Bernhard Beschow <shentey@gmail.com>
7
Message-id: 20180624040609.17572-14-f4bug@amsat.org
9
Message-id: 20220509202035.50335-1-philippe.mathieu.daude@gmail.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
11
---
10
hw/net/smc91c111.c | 9 +++++++--
12
include/hw/adc/zynq-xadc.h | 3 +--
11
1 file changed, 7 insertions(+), 2 deletions(-)
13
hw/adc/zynq-xadc.c | 4 ++--
14
2 files changed, 3 insertions(+), 4 deletions(-)
12
15
13
diff --git a/hw/net/smc91c111.c b/hw/net/smc91c111.c
16
diff --git a/include/hw/adc/zynq-xadc.h b/include/hw/adc/zynq-xadc.h
14
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/net/smc91c111.c
18
--- a/include/hw/adc/zynq-xadc.h
16
+++ b/hw/net/smc91c111.c
19
+++ b/include/hw/adc/zynq-xadc.h
17
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ struct ZynqXADCState {
18
#include "hw/sysbus.h"
21
uint16_t xadc_dfifo[ZYNQ_XADC_FIFO_DEPTH];
19
#include "net/net.h"
22
uint16_t xadc_dfifo_entries;
20
#include "hw/devices.h"
23
21
+#include "qemu/log.h"
24
- struct IRQState *qemu_irq;
22
/* For crc32 */
25
-
23
#include <zlib.h>
26
+ qemu_irq irq;
24
27
};
25
@@ -XXX,XX +XXX,XX @@ static void smc91c111_writeb(void *opaque, hwaddr offset,
28
26
}
29
#endif /* ZYNQ_XADC_H */
27
break;
30
diff --git a/hw/adc/zynq-xadc.c b/hw/adc/zynq-xadc.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/hw/adc/zynq-xadc.c
33
+++ b/hw/adc/zynq-xadc.c
34
@@ -XXX,XX +XXX,XX @@ static void zynq_xadc_update_ints(ZynqXADCState *s)
35
s->regs[INT_STS] |= INT_DFIFO_GTH;
28
}
36
}
29
- hw_error("smc91c111_write: Bad reg %d:%x\n", s->bank, (int)offset);
37
30
+ qemu_log_mask(LOG_GUEST_ERROR, "smc91c111_write(bank:%d) Illegal register"
38
- qemu_set_irq(s->qemu_irq, !!(s->regs[INT_STS] & ~s->regs[INT_MASK]));
31
+ " 0x%" HWADDR_PRIx " = 0x%x\n",
39
+ qemu_set_irq(s->irq, !!(s->regs[INT_STS] & ~s->regs[INT_MASK]));
32
+ s->bank, offset, value);
33
}
40
}
34
41
35
static uint32_t smc91c111_readb(void *opaque, hwaddr offset)
42
static void zynq_xadc_reset(DeviceState *d)
36
@@ -XXX,XX +XXX,XX @@ static uint32_t smc91c111_readb(void *opaque, hwaddr offset)
43
@@ -XXX,XX +XXX,XX @@ static void zynq_xadc_init(Object *obj)
37
}
44
memory_region_init_io(&s->iomem, obj, &xadc_ops, s, "zynq-xadc",
38
break;
45
ZYNQ_XADC_MMIO_SIZE);
39
}
46
sysbus_init_mmio(sbd, &s->iomem);
40
- hw_error("smc91c111_read: Bad reg %d:%x\n", s->bank, (int)offset);
47
- sysbus_init_irq(sbd, &s->qemu_irq);
41
+ qemu_log_mask(LOG_GUEST_ERROR, "smc91c111_read(bank:%d) Illegal register"
48
+ sysbus_init_irq(sbd, &s->irq);
42
+ " 0x%" HWADDR_PRIx "\n",
43
+ s->bank, offset);
44
return 0;
45
}
49
}
46
50
51
static const VMStateDescription vmstate_zynq_xadc = {
47
--
52
--
48
2.17.1
53
2.25.1
49
54
50
55
diff view generated by jsdifflib
From: Cédric Le Goater <clg@kaod.org>

The System Control Unit should be initialized first as it drives all
the configuration of the SoC and other device models.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Acked-by: Andrew Jeffery <andrew@aj.id.au>
Message-id: 20180622075700.5923-3-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/aspeed_soc.c | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed_soc.c
+++ b/hw/arm/aspeed_soc.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_init(Object *obj)
object_initialize(&s->cpu, sizeof(s->cpu), sc->info->cpu_type);
object_property_add_child(obj, "cpu", OBJECT(&s->cpu), NULL);

- object_initialize(&s->vic, sizeof(s->vic), TYPE_ASPEED_VIC);
- object_property_add_child(obj, "vic", OBJECT(&s->vic), NULL);
- qdev_set_parent_bus(DEVICE(&s->vic), sysbus_get_default());
-
- object_initialize(&s->timerctrl, sizeof(s->timerctrl), TYPE_ASPEED_TIMER);
- object_property_add_child(obj, "timerctrl", OBJECT(&s->timerctrl), NULL);
- qdev_set_parent_bus(DEVICE(&s->timerctrl), sysbus_get_default());
-
- object_initialize(&s->i2c, sizeof(s->i2c), TYPE_ASPEED_I2C);
- object_property_add_child(obj, "i2c", OBJECT(&s->i2c), NULL);
- qdev_set_parent_bus(DEVICE(&s->i2c), sysbus_get_default());
-
object_initialize(&s->scu, sizeof(s->scu), TYPE_ASPEED_SCU);
object_property_add_child(obj, "scu", OBJECT(&s->scu), NULL);
qdev_set_parent_bus(DEVICE(&s->scu), sysbus_get_default());
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_init(Object *obj)
object_property_add_alias(obj, "hw-prot-key", OBJECT(&s->scu),
"hw-prot-key", &error_abort);

+ object_initialize(&s->vic, sizeof(s->vic), TYPE_ASPEED_VIC);
+ object_property_add_child(obj, "vic", OBJECT(&s->vic), NULL);
+ qdev_set_parent_bus(DEVICE(&s->vic), sysbus_get_default());
+
+ object_initialize(&s->timerctrl, sizeof(s->timerctrl), TYPE_ASPEED_TIMER);
+ object_property_add_child(obj, "timerctrl", OBJECT(&s->timerctrl), NULL);
+ qdev_set_parent_bus(DEVICE(&s->timerctrl), sysbus_get_default());
+
+ object_initialize(&s->i2c, sizeof(s->i2c), TYPE_ASPEED_I2C);
+ object_property_add_child(obj, "i2c", OBJECT(&s->i2c), NULL);
+ qdev_set_parent_bus(DEVICE(&s->i2c), sysbus_get_default());
+
object_initialize(&s->fmc, sizeof(s->fmc), sc->info->fmc_typename);
object_property_add_child(obj, "fmc", OBJECT(&s->fmc), NULL);
qdev_set_parent_bus(DEVICE(&s->fmc), sysbus_get_default());
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
memory_region_add_subregion(get_system_memory(), ASPEED_SOC_SRAM_BASE,
&s->sram);

+ /* SCU */
+ object_property_set_bool(OBJECT(&s->scu), true, "realized", &err);
+ if (err) {
+ error_propagate(errp, err);
+ return;
+ }
+ sysbus_mmio_map(SYS_BUS_DEVICE(&s->scu), 0, ASPEED_SOC_SCU_BASE);
+
/* VIC */
object_property_set_bool(OBJECT(&s->vic), true, "realized", &err);
if (err) {
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
sysbus_connect_irq(SYS_BUS_DEVICE(&s->timerctrl), i, irq);
}
}

- /* SCU */
- object_property_set_bool(OBJECT(&s->scu), true, "realized", &err);
- if (err) {
- error_propagate(errp, err);
- return;
- }
- sysbus_mmio_map(SYS_BUS_DEVICE(&s->scu), 0, ASPEED_SOC_SCU_BASE);
-
/* UART - attach an 8250 to the IO space as our UART5 */
if (serial_hd(0)) {
qemu_irq uart5 = qdev_get_gpio_in(DEVICE(&s->vic), uart_irqs[4]);
--
2.17.1

In commit 88ce6c6ee85d we switched from directly fishing the number
of breakpoints and watchpoints out of the ID register fields to
abstracting out functions to do this job, but we forgot to delete the
now-obsolete comment in define_debug_regs() about the relation
between the ID field value and the actual number of breakpoints and
watchpoints. Delete the obsolete comment.

Reported-by: CHRIS HOWARD <cvz185@web.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220513131801.4082712-1-peter.maydell@linaro.org
---
target/arm/helper.c | 1 -
1 file changed, 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_debug_regs(ARMCPU *cpu)
define_one_arm_cp_reg(cpu, &dbgdidr);
}

- /* Note that all these register fields hold "number of Xs minus 1". */
brps = arm_num_brps(cpu);
wrps = arm_num_wrps(cpu);
ctx_cmps = arm_num_ctx_cmps(cpu);
--
2.25.1

diff view generated by jsdifflib
We want to handle small MPU region sizes for ARMv7M. To do this,
make get_phys_addr_pmsav7() set the page size to the region
size if it is less than TARGET_PAGE_SIZE, rather than working
only in TARGET_PAGE_SIZE chunks.

Since the core TCG code can't handle execution from small
MPU regions, we strip the exec permission from them so that
any execution attempts will cause an MPU exception, rather
than allowing it to end up with a cpu_abort() in
get_page_addr_code().

(The previous code's intention was to make any small page be
treated as having no permissions, but unfortunately errors
in the implementation meant that it didn't behave that way.
It's possible that some binaries using small regions were
accidentally working with our old behaviour and won't now.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-3-peter.maydell@linaro.org
---
target/arm/helper.c | 37 ++++++++++++++++++++++++++-----------
1 file changed, 26 insertions(+), 11 deletions(-)

Currently we give all the v7-and-up CPUs a PMU with 4 counters. This
means that we don't provide the 6 counters that are required by the
Arm BSA (Base System Architecture) specification if the CPU supports
the Virtualization extensions.

Instead of having a single PMCR_NUM_COUNTERS, make each CPU type
specify the PMCR reset value (obtained from the appropriate TRM), and
use the 'N' field of that value to define the number of counters
provided.

This means that we now supply 6 counters instead of 4 for:
Cortex-A9, Cortex-A15, Cortex-A53, Cortex-A57, Cortex-A72,
Cortex-A76, Neoverse-N1, '-cpu max'
This CPU goes from 4 to 8 counters:
A64FX
These CPUs remain with 4 counters:
Cortex-A7, Cortex-A8
This CPU goes down from 4 to 3 counters:
Cortex-R5

Note that because we now use the PMCR reset value of the specific
implementation, we no longer set the LC bit out of reset. This has
an UNKNOWN value out of reset for all cores with any AArch32 support,
so guest software should be setting it anyway if it wants it.

This change was originally landed in commit f7fb73b8cdd3f7 (during
the 6.0 release cycle) but was then reverted by commit
21c2dd77a6aa517 before that release because it did not work with KVM.
This version fixes that by creating the scratch vCPU in
kvm_arm_get_host_cpu_features() with the KVM_ARM_VCPU_PMU_V3 feature
if KVM supports it, and then only asking KVM for the PMCR_EL0 value
if the vCPU has a PMU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: Added the correct value for a64fx]
Message-id: 20220513122852.4063586-1-peter.maydell@linaro.org
---
target/arm/cpu.h | 1 +
target/arm/internals.h | 4 +++-
target/arm/cpu64.c | 11 +++++++++++
target/arm/cpu_tcg.c | 6 ++++++
target/arm/helper.c | 25 ++++++++++++-----------
target/arm/kvm64.c | 12 ++++++++++++
6 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
uint64_t id_aa64dfr0;
uint64_t id_aa64dfr1;
uint64_t id_aa64zfr0;
+ uint64_t reset_pmcr_el0;
} isar;
uint64_t midr;
uint32_t revidr;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ enum MVEECIState {

static inline uint32_t pmu_num_counters(CPUARMState *env)
{
- return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
+ ARMCPU *cpu = env_archcpu(env);
+
+ return (cpu->isar.reset_pmcr_el0 & PMCRN_MASK) >> PMCRN_SHIFT;
}

/* Bits allowed to be set/cleared for PMCNTEN* and PMINTEN* */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
cpu->isar.id_aa64isar0 = 0x00011120;
cpu->isar.id_aa64mmfr0 = 0x00001124;
cpu->isar.dbgdidr = 0x3516d000;
+ cpu->isar.reset_pmcr_el0 = 0x41013000;
cpu->clidr = 0x0a200023;
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
cpu->isar.id_aa64isar0 = 0x00011120;
cpu->isar.id_aa64mmfr0 = 0x00001122; /* 40 bit physical addr */
cpu->isar.dbgdidr = 0x3516d000;
+ cpu->isar.reset_pmcr_el0 = 0x41033000;
cpu->clidr = 0x0a200023;
cpu->ccsidr[0] = 0x700fe01a; /* 32KB L1 dcache */
cpu->ccsidr[1] = 0x201fe00a; /* 32KB L1 icache */
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
cpu->isar.id_aa64isar0 = 0x00011120;
cpu->isar.id_aa64mmfr0 = 0x00001124;
cpu->isar.dbgdidr = 0x3516d000;
+ cpu->isar.reset_pmcr_el0 = 0x41023000;
cpu->clidr = 0x0a200023;
100
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
101
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
102
@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
103
cpu->isar.mvfr0 = 0x10110222;
104
cpu->isar.mvfr1 = 0x13211111;
105
cpu->isar.mvfr2 = 0x00000043;
106
+
107
+ /* From D5.1 AArch64 PMU register summary */
108
+ cpu->isar.reset_pmcr_el0 = 0x410b3000;
109
}
110
111
static void aarch64_neoverse_n1_initfn(Object *obj)
112
@@ -XXX,XX +XXX,XX @@ static void aarch64_neoverse_n1_initfn(Object *obj)
113
cpu->isar.mvfr0 = 0x10110222;
114
cpu->isar.mvfr1 = 0x13211111;
115
cpu->isar.mvfr2 = 0x00000043;
116
+
117
+ /* From D5.1 AArch64 PMU register summary */
118
+ cpu->isar.reset_pmcr_el0 = 0x410c3000;
119
}
120
121
void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
122
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
123
set_bit(1, cpu->sve_vq_supported); /* 256bit */
124
set_bit(3, cpu->sve_vq_supported); /* 512bit */
125
126
+ cpu->isar.reset_pmcr_el0 = 0x46014040;
127
+
128
/* TODO: Add A64FX specific HPC extension registers */
129
}
130
131
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
132
index XXXXXXX..XXXXXXX 100644
133
--- a/target/arm/cpu_tcg.c
134
+++ b/target/arm/cpu_tcg.c
135
@@ -XXX,XX +XXX,XX @@ static void cortex_a8_initfn(Object *obj)
136
cpu->ccsidr[1] = 0x2007e01a; /* 16k L1 icache. */
137
cpu->ccsidr[2] = 0xf0000000; /* No L2 icache. */
138
cpu->reset_auxcr = 2;
139
+ cpu->isar.reset_pmcr_el0 = 0x41002000;
140
define_arm_cp_regs(cpu, cortexa8_cp_reginfo);
141
}
142
143
@@ -XXX,XX +XXX,XX @@ static void cortex_a9_initfn(Object *obj)
144
cpu->clidr = (1 << 27) | (1 << 24) | 3;
145
cpu->ccsidr[0] = 0xe00fe019; /* 16k L1 dcache. */
146
cpu->ccsidr[1] = 0x200fe019; /* 16k L1 icache. */
147
+ cpu->isar.reset_pmcr_el0 = 0x41093000;
148
define_arm_cp_regs(cpu, cortexa9_cp_reginfo);
149
}
150
151
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
152
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
153
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
154
cpu->ccsidr[2] = 0x711fe07a; /* 4096K L2 unified cache */
155
+ cpu->isar.reset_pmcr_el0 = 0x41072000;
156
define_arm_cp_regs(cpu, cortexa15_cp_reginfo); /* Same as A15 */
157
}
158
159
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
160
cpu->ccsidr[0] = 0x701fe00a; /* 32K L1 dcache */
161
cpu->ccsidr[1] = 0x201fe00a; /* 32K L1 icache */
162
cpu->ccsidr[2] = 0x711fe07a; /* 4096K L2 unified cache */
163
+ cpu->isar.reset_pmcr_el0 = 0x410F3000;
164
define_arm_cp_regs(cpu, cortexa15_cp_reginfo);
165
}
166
167
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
168
cpu->isar.id_isar6 = 0x0;
169
cpu->mp_is_up = true;
170
cpu->pmsav7_dregion = 16;
171
+ cpu->isar.reset_pmcr_el0 = 0x41151800;
172
define_arm_cp_regs(cpu, cortexr5_cp_reginfo);
173
}
174
175
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
176
cpu->isar.id_isar5 = 0x00011121;
177
cpu->isar.id_isar6 = 0;
178
cpu->isar.dbgdidr = 0x3516d000;
179
+ cpu->isar.reset_pmcr_el0 = 0x41013000;
180
cpu->clidr = 0x0a200023;
181
cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
182
cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
25
diff --git a/target/arm/helper.c b/target/arm/helper.c
183
diff --git a/target/arm/helper.c b/target/arm/helper.c
26
index XXXXXXX..XXXXXXX 100644
184
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/helper.c
185
--- a/target/arm/helper.c
28
+++ b/target/arm/helper.c
186
+++ b/target/arm/helper.c
29
@@ -XXX,XX +XXX,XX @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
187
@@ -XXX,XX +XXX,XX @@
30
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
188
#include "cpregs.h"
31
MMUAccessType access_type, ARMMMUIdx mmu_idx,
189
32
hwaddr *phys_ptr, int *prot,
190
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
33
+ target_ulong *page_size,
191
-#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */
34
ARMMMUFaultInfo *fi)
192
35
{
193
#ifndef CONFIG_USER_ONLY
36
ARMCPU *cpu = arm_env_get_cpu(env);
194
37
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
195
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
38
bool is_user = regime_is_user(env, mmu_idx);
196
.resetvalue = 0,
39
197
.writefn = gt_hyp_ctl_write, .raw_writefn = raw_write },
40
*phys_ptr = address;
198
#endif
41
+ *page_size = TARGET_PAGE_SIZE;
199
- /* The only field of MDCR_EL2 that has a defined architectural reset value
42
*prot = 0;
200
- * is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N.
43
201
- */
44
if (regime_translation_disabled(env, mmu_idx) ||
202
- { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
45
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
203
- .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
46
rsize++;
204
- .access = PL2_RW, .resetvalue = PMCR_NUM_COUNTERS,
47
}
205
- .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2), },
48
}
206
{ .name = "HPFAR", .state = ARM_CP_STATE_AA32,
49
- if (rsize < TARGET_PAGE_BITS) {
207
.cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
50
- qemu_log_mask(LOG_UNIMP,
208
.access = PL2_RW, .accessfn = access_el3_aa32ns,
51
- "DRSR[%d]: No support for MPU (sub)region size of"
209
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
52
- " %" PRIu32 " bytes. Minimum is %d.\n",
210
* field as main ID register, and we implement four counters in
53
- n, (1 << rsize), TARGET_PAGE_SIZE);
211
* addition to the cycle count register.
54
- continue;
212
*/
55
- }
213
- unsigned int i, pmcrn = PMCR_NUM_COUNTERS;
56
if (srdis) {
214
+ unsigned int i, pmcrn = pmu_num_counters(&cpu->env);
57
continue;
215
ARMCPRegInfo pmcr = {
58
}
216
.name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
59
+ if (rsize < TARGET_PAGE_BITS) {
217
.access = PL0_RW,
60
+ *page_size = 1 << rsize;
218
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
61
+ }
219
.access = PL0_RW, .accessfn = pmreg_access,
62
break;
220
.type = ARM_CP_IO,
221
.fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
222
- .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT) |
223
- PMCRLC,
224
+ .resetvalue = cpu->isar.reset_pmcr_el0,
225
.writefn = pmcr_write, .raw_writefn = raw_write,
226
};
227
+
228
define_one_arm_cp_reg(cpu, &pmcr);
229
define_one_arm_cp_reg(cpu, &pmcr64);
230
for (i = 0; i < pmcrn; i++) {
231
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
232
.type = ARM_CP_EL3_NO_EL2_C_NZ,
233
.fieldoffset = offsetof(CPUARMState, cp15.vmpidr_el2) },
234
};
235
+ /*
236
+ * The only field of MDCR_EL2 that has a defined architectural reset
237
+ * value is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N.
238
+ */
239
+ ARMCPRegInfo mdcr_el2 = {
240
+ .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
241
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
242
+ .access = PL2_RW, .resetvalue = pmu_num_counters(env),
243
+ .fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2),
244
+ };
245
+ define_one_arm_cp_reg(cpu, &mdcr_el2);
246
define_arm_cp_regs(cpu, vpidr_regs);
247
define_arm_cp_regs(cpu, el2_cp_reginfo);
248
if (arm_feature(env, ARM_FEATURE_V8)) {
249
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
250
index XXXXXXX..XXXXXXX 100644
251
--- a/target/arm/kvm64.c
252
+++ b/target/arm/kvm64.c
253
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
254
*/
255
int fdarray[3];
256
bool sve_supported;
257
+ bool pmu_supported = false;
258
uint64_t features = 0;
259
uint64_t t;
260
int err;
261
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
262
1 << KVM_ARM_VCPU_PTRAUTH_GENERIC);
263
}
264
265
+ if (kvm_arm_pmu_supported()) {
266
+ init.features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
267
+ pmu_supported = true;
268
+ }
269
+
270
if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
271
return false;
272
}
273
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
274
dbgdidr |= (1 << 15); /* RES1 bit */
275
ahcf->isar.dbgdidr = dbgdidr;
63
}
276
}
64
277
+
65
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
278
+ if (pmu_supported) {
66
279
+ /* PMCR_EL0 is only accessible if the vCPU has feature PMU_V3 */
67
fi->type = ARMFault_Permission;
280
+ err |= read_sys_reg64(fdarray[2], &ahcf->isar.reset_pmcr_el0,
68
fi->level = 1;
281
+ ARM64_SYS_REG(3, 3, 9, 12, 0));
69
+ /*
70
+ * Core QEMU code can't handle execution from small pages yet, so
71
+ * don't try it. This way we'll get an MPU exception, rather than
72
+ * eventually causing QEMU to exit in get_page_addr_code().
73
+ */
74
+ if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
75
+ qemu_log_mask(LOG_UNIMP,
76
+ "MPU: No support for execution from regions "
77
+ "smaller than 1K\n");
78
+ *prot &= ~PAGE_EXEC;
79
+ }
80
return !(*prot & (1 << access_type));
81
}
82
83
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
84
} else if (arm_feature(env, ARM_FEATURE_V7)) {
85
/* PMSAv7 */
86
ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
87
- phys_ptr, prot, fi);
88
+ phys_ptr, prot, page_size, fi);
89
} else {
90
/* Pre-v7 MPU */
91
ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
92
@@ -XXX,XX +XXX,XX @@ bool arm_tlb_fill(CPUState *cs, vaddr address,
93
core_to_arm_mmu_idx(env, mmu_idx), &phys_addr,
94
&attrs, &prot, &page_size, fi, NULL);
95
if (!ret) {
96
- /* Map a single [sub]page. */
97
- phys_addr &= TARGET_PAGE_MASK;
98
- address &= TARGET_PAGE_MASK;
99
+ /*
100
+ * Map a single [sub]page. Regions smaller than our declared
101
+ * target page size are handled specially, so for those we
102
+ * pass in the exact addresses.
103
+ */
104
+ if (page_size >= TARGET_PAGE_SIZE) {
105
+ phys_addr &= TARGET_PAGE_MASK;
106
+ address &= TARGET_PAGE_MASK;
107
+ }
282
+ }
108
tlb_set_page_with_attrs(cs, address, phys_addr, attrs,
283
}
109
prot, mmu_idx, page_size);
284
110
return 0;
285
sve_supported = ioctl(fdarray[0], KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
111
--
286
--
112
2.17.1
287
2.25.1
113
114
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180624040609.17572-12-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/net/stellaris_enet.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/stellaris_enet.c b/hw/net/stellaris_enet.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/stellaris_enet.c
+++ b/hw/net/stellaris_enet.c
@@ -XXX,XX +XXX,XX @@ static uint64_t stellaris_enet_read(void *opaque, hwaddr offset,
return s->np;
case 0x38: /* TR */
return 0;
- case 0x3c: /* Undocuented: Timestamp? */
+ case 0x3c: /* Undocumented: Timestamp? */
return 0;
default:
hw_error("stellaris_enet_read: Bad offset %x\n", (int)offset);
--
2.17.1

In the virt board with secure=on we put two nodes in the dtb
for flash devices: one for the secure-only flash, and one
for the non-secure flash. We get the reg properties for these
correct, but in the DT node name, which by convention includes
the base address of devices, we used the wrong address. Fix it.

Spotted by dtc, which will complain
Warning (unique_unit_address): /flash@0: duplicate unit-address (also used in node /secflash@0)
if you dump the dtb from QEMU with -machine dumpdtb=file.dtb
and then decompile it with dtc.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220513131316.4081539-2-peter.maydell@linaro.org
---
hw/arm/virt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void virt_flash_fdt(VirtMachineState *vms,
qemu_fdt_setprop_string(ms->fdt, nodename, "secure-status", "okay");
g_free(nodename);

- nodename = g_strdup_printf("/flash@%" PRIx64, flashbase);
+ nodename = g_strdup_printf("/flash@%" PRIx64, flashbase + flashsize);
qemu_fdt_add_subnode(ms->fdt, nodename);
qemu_fdt_setprop_string(ms->fdt, nodename, "compatible", "cfi-flash");
qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg",
--
2.25.1
Add support for MMU protection regions that are smaller than
TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
pages with a flag TLB_RECHECK. This flag causes us to always
take the slow-path for accesses. In the slow path we can then
special case them to always call tlb_fill() again, so we have
the correct information for the exact address being accessed.

This change allows us to handle reading and writing from small
regions; we cannot deal with execution from the small region.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180620130619.11362-2-peter.maydell@linaro.org
---
accel/tcg/softmmu_template.h | 24 ++++---
include/exec/cpu-all.h | 5 +-
accel/tcg/cputlb.c | 131 +++++++++++++++++++++++++++++------
3 files changed, 130 insertions(+), 30 deletions(-)

diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/softmmu_template.h
+++ b/accel/tcg/softmmu_template.h
@@ -XXX,XX +XXX,XX @@
static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
target_ulong addr,
- uintptr_t retaddr)
+ uintptr_t retaddr,
+ bool recheck)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
- return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, DATA_SIZE);
+ return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
+ DATA_SIZE);
}
#endif

@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,

/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
- res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
+ res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
+ tlb_addr & TLB_RECHECK);
res = TGT_LE(res);
return res;
}
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,

/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
- res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
+ res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
+ tlb_addr & TLB_RECHECK);
res = TGT_BE(res);
return res;
}
@@ -XXX,XX +XXX,XX @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
size_t mmu_idx, size_t index,
DATA_TYPE val,
target_ulong addr,
- uintptr_t retaddr)
+ uintptr_t retaddr,
+ bool recheck)
{
CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
- return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, DATA_SIZE);
+ return io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
+ recheck, DATA_SIZE);
}

void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
@@ -XXX,XX +XXX,XX @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_LE(val);
- glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
+ glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr,
+ retaddr, tlb_addr & TLB_RECHECK);
return;
}

@@ -XXX,XX +XXX,XX @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
/* ??? Note that the io helpers always read data in the target
byte ordering. We should push the LE/BE request down into io. */
val = TGT_BE(val);
- glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr);
+ glue(io_write, SUFFIX)(env, mmu_idx, index, val, addr, retaddr,
+ tlb_addr & TLB_RECHECK);
return;
}

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -XXX,XX +XXX,XX @@ CPUArchState *cpu_copy(CPUArchState *env);
#define TLB_NOTDIRTY (1 << (TARGET_PAGE_BITS - 2))
/* Set if TLB entry is an IO callback. */
#define TLB_MMIO (1 << (TARGET_PAGE_BITS - 3))
+/* Set if TLB entry must have MMU lookup repeated for every access */
+#define TLB_RECHECK (1 << (TARGET_PAGE_BITS - 4))

/* Use this mask to check interception with an alignment mask
* in a TCG backend.
*/
-#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)
+#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
+ | TLB_RECHECK)

void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf);
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
target_ulong code_address;
uintptr_t addend;
CPUTLBEntry *te, *tv, tn;
- hwaddr iotlb, xlat, sz;
+ hwaddr iotlb, xlat, sz, paddr_page;
+ target_ulong vaddr_page;
unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
int asidx = cpu_asidx_from_attrs(cpu, attrs);

assert_cpu_is_self(cpu);
- assert(size >= TARGET_PAGE_SIZE);
- if (size != TARGET_PAGE_SIZE) {
- tlb_add_large_page(env, vaddr, size);
- }

- sz = size;
- section = address_space_translate_for_iotlb(cpu, asidx, paddr, &xlat, &sz,
- attrs, &prot);
+ if (size < TARGET_PAGE_SIZE) {
+ sz = TARGET_PAGE_SIZE;
+ } else {
+ if (size > TARGET_PAGE_SIZE) {
+ tlb_add_large_page(env, vaddr, size);
+ }
+ sz = size;
+ }
+ vaddr_page = vaddr & TARGET_PAGE_MASK;
+ paddr_page = paddr & TARGET_PAGE_MASK;
+
+ section = address_space_translate_for_iotlb(cpu, asidx, paddr_page,
+ &xlat, &sz, attrs, &prot);
assert(sz >= TARGET_PAGE_SIZE);

tlb_debug("vaddr=" TARGET_FMT_lx " paddr=0x" TARGET_FMT_plx
" prot=%x idx=%d\n",
vaddr, paddr, prot, mmu_idx);

- address = vaddr;
- if (!memory_region_is_ram(section->mr) && !memory_region_is_romd(section->mr)) {
+ address = vaddr_page;
+ if (size < TARGET_PAGE_SIZE) {
+ /*
+ * Slow-path the TLB entries; we will repeat the MMU check and TLB
+ * fill on every access.
+ */
+ address |= TLB_RECHECK;
+ }
+ if (!memory_region_is_ram(section->mr) &&
+ !memory_region_is_romd(section->mr)) {
/* IO memory case */
address |= TLB_MMIO;
addend = 0;
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
}

code_address = address;
- iotlb = memory_region_section_get_iotlb(cpu, section, vaddr, paddr, xlat,
- prot, &address);
+ iotlb = memory_region_section_get_iotlb(cpu, section, vaddr_page,
+ paddr_page, xlat, prot, &address);

- index = (vaddr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ index = (vaddr_page >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
te = &env->tlb_table[mmu_idx][index];
/* do not discard the translation in te, evict it into a victim tlb */
tv = &env->tlb_v_table[mmu_idx][vidx];
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
* TARGET_PAGE_BITS, and either
* + the ram_addr_t of the page base of the target RAM (if NOTDIRTY or ROM)
* + the offset within section->mr of the page base (otherwise)
- * We subtract the vaddr (which is page aligned and thus won't
+ * We subtract the vaddr_page (which is page aligned and thus won't
* disturb the low bits) to give an offset which can be added to the
* (non-page-aligned) vaddr of the eventual memory access to get
* the MemoryRegion offset for the access. Note that the vaddr we
* subtract here is that of the page base, and not the same as the
* vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
*/
- env->iotlb[mmu_idx][index].addr = iotlb - vaddr;
+ env->iotlb[mmu_idx][index].addr = iotlb - vaddr_page;
env->iotlb[mmu_idx][index].attrs = attrs;

/* Now calculate the new entry */
- tn.addend = addend - vaddr;
+ tn.addend = addend - vaddr_page;
if (prot & PAGE_READ) {
tn.addr_read = address;
} else {
@@ -XXX,XX +XXX,XX @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
tn.addr_write = address | TLB_MMIO;
} else if (memory_region_is_ram(section->mr)
&& cpu_physical_memory_is_clean(
- memory_region_get_ram_addr(section->mr) + xlat)) {
+ memory_region_get_ram_addr(section->mr) + xlat)) {
tn.addr_write = address | TLB_NOTDIRTY;
} else {
tn.addr_write = address;
@@ -XXX,XX +XXX,XX @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)

static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
int mmu_idx,
- target_ulong addr, uintptr_t retaddr, int size)
+ target_ulong addr, uintptr_t retaddr,
+ bool recheck, int size)
{
CPUState *cpu = ENV_GET_CPU(env);
hwaddr mr_offset;
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
bool locked = false;
MemTxResult r;

+ if (recheck) {
+ /*
+ * This is a TLB_RECHECK access, where the MMU protection
+ * covers a smaller range than a target page, and we must
+ * repeat the MMU check here. This tlb_fill() call might
+ * longjump out if this access should cause a guest exception.
+ */
+ int index;
+ target_ulong tlb_addr;
+
+ tlb_fill(cpu, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr);
+
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_read;
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
+ /* RAM access */
+ uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
+
+ return ldn_p((void *)haddr, size);
+ }
+ /* Fall through for handling IO accesses */
+ }
+
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
mr = section->mr;
mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
int mmu_idx,
uint64_t val, target_ulong addr,
- uintptr_t retaddr, int size)
+ uintptr_t retaddr, bool recheck, int size)
{
CPUState *cpu = ENV_GET_CPU(env);
hwaddr mr_offset;
@@ -XXX,XX +XXX,XX @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
bool locked = false;
MemTxResult r;

+ if (recheck) {
+ /*
+ * This is a TLB_RECHECK access, where the MMU protection
+ * covers a smaller range than a target page, and we must
+ * repeat the MMU check here. This tlb_fill() call might
+ * longjump out if this access should cause a guest exception.
+ */
+ int index;
+ target_ulong tlb_addr;
+
+ tlb_fill(cpu, addr, size, MMU_DATA_STORE, mmu_idx, retaddr);
+
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
+ /* RAM access */
+ uintptr_t haddr = addr + env->tlb_table[mmu_idx][index].addend;
+
+ stn_p((void *)haddr, size, val);
+ return;
+ }
+ /* Fall through for handling IO accesses */
+ }
+
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
mr = section->mr;
mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
}
}
+
+ if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
+ /*
+ * This is a TLB_RECHECK access, where the MMU protection
+ * covers a smaller range than a target page, and we must
+ * repeat the MMU check here. This tlb_fill() call might
+ * longjump out if this access should cause a guest exception.
+ */
+ int index;
+ target_ulong tlb_addr;
+
+ tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
+
+ index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+ tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
+ if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
+ /* RAM access. We can't handle this, so for now just stop */
+ cpu_abort(cpu, "Unable to handle guest executing from RAM within "
+ "a small MPU region at 0x" TARGET_FMT_lx, addr);
+ }
+ /*
+ * Fall through to handle IO accesses (which will almost certainly
+ * also result in failure)
+ */
+ }
+
iotlbentry = &env->iotlb[mmu_idx][index];
section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
mr = section->mr;
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
tlb_addr = tlbe->addr_write & ~TLB_INVALID_MASK;
}

- /* Notice an IO access */
- if (unlikely(tlb_addr & TLB_MMIO)) {
+ /* Notice an IO access or a needs-MMU-lookup access */
+ if (unlikely(tlb_addr & (TLB_MMIO | TLB_RECHECK))) {
/* There's really nothing that can be done to
support this apart from stop-the-world. */
goto stop_the_world;
--
2.17.1

The virt board generates a gpio-keys node in the dtb, but it
incorrectly gives this node #size-cells and #address-cells
properties. If you dump the dtb with 'machine dumpdtb=file.dtb'
and run it through dtc, dtc will warn about this:

Warning (avoid_unnecessary_addr_size): /gpio-keys: unnecessary #address-cells/#size-cells without "ranges" or child "reg" property

Remove the bogus properties.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220513131316.4081539-3-peter.maydell@linaro.org
---
hw/arm/virt.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_gpio_keys(char *fdt, DeviceState *pl061_dev,

qemu_fdt_add_subnode(fdt, "/gpio-keys");
qemu_fdt_setprop_string(fdt, "/gpio-keys", "compatible", "gpio-keys");
- qemu_fdt_setprop_cell(fdt, "/gpio-keys", "#size-cells", 0);
- qemu_fdt_setprop_cell(fdt, "/gpio-keys", "#address-cells", 1);

qemu_fdt_add_subnode(fdt, "/gpio-keys/poweroff");
qemu_fdt_setprop_string(fdt, "/gpio-keys/poweroff",
--
2.25.1
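The TLB_RECHECK scheme in the patch above relies on the flag bits fitting below TARGET_PAGE_BITS in an otherwise page-aligned TLB entry address, so the flag can ride along with the address and be tested cheaply. A standalone sketch (plain Python, not QEMU code; TARGET_PAGE_BITS = 12 is an assumed example value) of the flag packing and of the "plain RAM after re-fill" test used in io_readx()/io_writex():

```python
# Flag bits live just below the page-size boundary of a page-aligned
# TLB entry address, mirroring the cpu-all.h definitions in the patch.
TARGET_PAGE_BITS = 12  # assumed example value
TARGET_PAGE_MASK = ~((1 << TARGET_PAGE_BITS) - 1) & 0xFFFFFFFF
TLB_INVALID_MASK = 1 << (TARGET_PAGE_BITS - 1)
TLB_NOTDIRTY     = 1 << (TARGET_PAGE_BITS - 2)
TLB_MMIO         = 1 << (TARGET_PAGE_BITS - 3)
TLB_RECHECK      = 1 << (TARGET_PAGE_BITS - 4)

def make_tlb_addr(vaddr_page, small_region):
    # tlb_set_page_with_attrs(): a sub-page region gets TLB_RECHECK,
    # forcing every access through the slow path.
    addr = vaddr_page & TARGET_PAGE_MASK
    if small_region:
        addr |= TLB_RECHECK
    return addr

def is_plain_ram(tlb_addr):
    # Mirrors the post-refill check: an entry with no flag bits other
    # than TLB_RECHECK set is an ordinary RAM access.
    flag_bits = ~(TARGET_PAGE_MASK | TLB_RECHECK) & 0xFFFFFFFF
    return (tlb_addr & flag_bits) == 0

entry = make_tlb_addr(0x40001000, small_region=True)
print(hex(entry))                      # 0x40001100: flag below page bits
print(is_plain_ram(entry))             # True
print(is_plain_ram(entry | TLB_MMIO))  # False: falls through to IO path
```

The same masking is why TLB_FLAGS_MASK must grow to include TLB_RECHECK: any flag left out of the mask would be misread as part of the page address.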
diff view generated by jsdifflib
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
The traditional ptimer behaviour includes a collection of weird edge
2
case behaviours. In 2016 we improved the ptimer implementation to
3
fix these and generally make the behaviour more flexible, with
4
ptimers opting in to the new behaviour by passing an appropriate set
5
of policy flags to ptimer_init(). For backwards-compatibility, we
6
defined PTIMER_POLICY_DEFAULT (which sets no flags) to give the old
weird behaviour.

This turns out to be a poor choice of name, because people writing
new devices which use ptimers are misled into thinking that the
default is probably a sensible choice of flags, when in fact it is
almost always not what you want. Rename PTIMER_POLICY_DEFAULT to
PTIMER_POLICY_LEGACY and beef up the comment to more clearly say that
new devices should not be using it.

The code-change part of this commit was produced by
  sed -i -e 's/PTIMER_POLICY_DEFAULT/PTIMER_POLICY_LEGACY/g' $(git grep -l PTIMER_POLICY_DEFAULT)
with the exception of a test name string change in
tests/unit/ptimer-test.c which was added manually.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Francisco Iglesias <francisco.iglesias@amd.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220516103058.162280-1-peter.maydell@linaro.org
---
 include/hw/ptimer.h          | 16 ++++++++++++----
 hw/arm/musicpal.c            |  2 +-
 hw/dma/xilinx_axidma.c       |  2 +-
 hw/dma/xlnx_csu_dma.c        |  2 +-
 hw/m68k/mcf5206.c            |  2 +-
 hw/m68k/mcf5208.c            |  2 +-
 hw/net/can/xlnx-zynqmp-can.c |  2 +-
 hw/net/fsl_etsec/etsec.c     |  2 +-
 hw/net/lan9118.c             |  2 +-
 hw/rtc/exynos4210_rtc.c      |  4 ++--
 hw/timer/allwinner-a10-pit.c |  2 +-
 hw/timer/altera_timer.c      |  2 +-
 hw/timer/arm_timer.c         |  2 +-
 hw/timer/digic-timer.c       |  2 +-
 hw/timer/etraxfs_timer.c     |  6 +++---
 hw/timer/exynos4210_mct.c    |  6 +++---
 hw/timer/exynos4210_pwm.c    |  2 +-
 hw/timer/grlib_gptimer.c     |  2 +-
 hw/timer/imx_epit.c          |  4 ++--
 hw/timer/imx_gpt.c           |  2 +-
 hw/timer/mss-timer.c         |  2 +-
 hw/timer/sh_timer.c          |  2 +-
 hw/timer/slavio_timer.c      |  2 +-
 hw/timer/xilinx_timer.c      |  2 +-
 tests/unit/ptimer-test.c     |  6 +++---
 25 files changed, 44 insertions(+), 36 deletions(-)
diff --git a/include/hw/ptimer.h b/include/hw/ptimer.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/ptimer.h
+++ b/include/hw/ptimer.h
@@ -XXX,XX +XXX,XX @@
  * to stderr when the guest attempts to enable the timer.
  */

-/* The default ptimer policy retains backward compatibility with the legacy
- * timers. Custom policies are adjusting the default one. Consider providing
- * a correct policy for your timer.
+/*
+ * The 'legacy' ptimer policy retains backward compatibility with the
+ * traditional ptimer behaviour from before policy flags were introduced.
+ * It has several weird behaviours which don't match typical hardware
+ * timer behaviour. For a new device using ptimers, you should not
+ * use PTIMER_POLICY_LEGACY, but instead check the actual behaviour
+ * that you need and specify the right set of policy flags to get that.
+ *
+ * If you are overhauling an existing device that uses PTIMER_POLICY_LEGACY
+ * and are in a position to check or test the real hardware behaviour,
+ * consider updating it to specify the right policy flags.
  *
  * The rough edges of the default policy:
  *  - Starting to run with a period = 0 emits error message and stops the
@@ -XXX,XX +XXX,XX @@
  *    since the last period, effectively restarting the timer with a
  *    counter = counter value at the moment of change (.i.e. one less).
  */
-#define PTIMER_POLICY_DEFAULT 0
+#define PTIMER_POLICY_LEGACY 0

 /* Periodic timer counter stays with "0" for a one period before wrapping
  * around.  */
diff --git a/hw/arm/musicpal.c b/hw/arm/musicpal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/musicpal.c
+++ b/hw/arm/musicpal.c
@@ -XXX,XX +XXX,XX @@ static void mv88w8618_timer_init(SysBusDevice *dev, mv88w8618_timer_state *s,
     sysbus_init_irq(dev, &s->irq);
     s->freq = freq;

-    s->ptimer = ptimer_init(mv88w8618_timer_tick, s, PTIMER_POLICY_DEFAULT);
+    s->ptimer = ptimer_init(mv88w8618_timer_tick, s, PTIMER_POLICY_LEGACY);
 }

 static uint64_t mv88w8618_pit_read(void *opaque, hwaddr offset,
diff --git a/hw/dma/xilinx_axidma.c b/hw/dma/xilinx_axidma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/xilinx_axidma.c
+++ b/hw/dma/xilinx_axidma.c
@@ -XXX,XX +XXX,XX @@ static void xilinx_axidma_realize(DeviceState *dev, Error **errp)

         st->dma = s;
         st->nr = i;
-        st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_DEFAULT);
+        st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_LEGACY);
         ptimer_transaction_begin(st->ptimer);
         ptimer_set_freq(st->ptimer, s->freqhz);
         ptimer_transaction_commit(st->ptimer);
diff --git a/hw/dma/xlnx_csu_dma.c b/hw/dma/xlnx_csu_dma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/xlnx_csu_dma.c
+++ b/hw/dma/xlnx_csu_dma.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_csu_dma_realize(DeviceState *dev, Error **errp)
     sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->irq);

     s->src_timer = ptimer_init(xlnx_csu_dma_src_timeout_hit,
-                               s, PTIMER_POLICY_DEFAULT);
+                               s, PTIMER_POLICY_LEGACY);

     s->attr = MEMTXATTRS_UNSPECIFIED;

diff --git a/hw/m68k/mcf5206.c b/hw/m68k/mcf5206.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/m68k/mcf5206.c
+++ b/hw/m68k/mcf5206.c
@@ -XXX,XX +XXX,XX @@ static m5206_timer_state *m5206_timer_init(qemu_irq irq)
     m5206_timer_state *s;

     s = g_new0(m5206_timer_state, 1);
-    s->timer = ptimer_init(m5206_timer_trigger, s, PTIMER_POLICY_DEFAULT);
+    s->timer = ptimer_init(m5206_timer_trigger, s, PTIMER_POLICY_LEGACY);
     s->irq = irq;
     m5206_timer_reset(s);
     return s;
diff --git a/hw/m68k/mcf5208.c b/hw/m68k/mcf5208.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/m68k/mcf5208.c
+++ b/hw/m68k/mcf5208.c
@@ -XXX,XX +XXX,XX @@ static void mcf5208_sys_init(MemoryRegion *address_space, qemu_irq *pic)
     /* Timers.  */
     for (i = 0; i < 2; i++) {
         s = g_new0(m5208_timer_state, 1);
-        s->timer = ptimer_init(m5208_timer_trigger, s, PTIMER_POLICY_DEFAULT);
+        s->timer = ptimer_init(m5208_timer_trigger, s, PTIMER_POLICY_LEGACY);
         memory_region_init_io(&s->iomem, NULL, &m5208_timer_ops, s,
                               "m5208-timer", 0x00004000);
         memory_region_add_subregion(address_space, 0xfc080000 + 0x4000 * i,
diff --git a/hw/net/can/xlnx-zynqmp-can.c b/hw/net/can/xlnx-zynqmp-can.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/can/xlnx-zynqmp-can.c
+++ b/hw/net/can/xlnx-zynqmp-can.c
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_can_realize(DeviceState *dev, Error **errp)

     /* Allocate a new timer. */
     s->can_timer = ptimer_init(xlnx_zynqmp_can_ptimer_cb, s,
-                               PTIMER_POLICY_DEFAULT);
+                               PTIMER_POLICY_LEGACY);

     ptimer_transaction_begin(s->can_timer);

diff --git a/hw/net/fsl_etsec/etsec.c b/hw/net/fsl_etsec/etsec.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/fsl_etsec/etsec.c
+++ b/hw/net/fsl_etsec/etsec.c
@@ -XXX,XX +XXX,XX @@ static void etsec_realize(DeviceState *dev, Error **errp)
                           object_get_typename(OBJECT(dev)), dev->id, etsec);
     qemu_format_nic_info_str(qemu_get_queue(etsec->nic), etsec->conf.macaddr.a);

-    etsec->ptimer = ptimer_init(etsec_timer_hit, etsec, PTIMER_POLICY_DEFAULT);
+    etsec->ptimer = ptimer_init(etsec_timer_hit, etsec, PTIMER_POLICY_LEGACY);
     ptimer_transaction_begin(etsec->ptimer);
     ptimer_set_freq(etsec->ptimer, 100);
     ptimer_transaction_commit(etsec->ptimer);
diff --git a/hw/net/lan9118.c b/hw/net/lan9118.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/lan9118.c
+++ b/hw/net/lan9118.c
@@ -XXX,XX +XXX,XX @@ static void lan9118_realize(DeviceState *dev, Error **errp)
     s->pmt_ctrl = 1;
     s->txp = &s->tx_packet;

-    s->timer = ptimer_init(lan9118_tick, s, PTIMER_POLICY_DEFAULT);
+    s->timer = ptimer_init(lan9118_tick, s, PTIMER_POLICY_LEGACY);
     ptimer_transaction_begin(s->timer);
     ptimer_set_freq(s->timer, 10000);
     ptimer_set_limit(s->timer, 0xffff, 1);
diff --git a/hw/rtc/exynos4210_rtc.c b/hw/rtc/exynos4210_rtc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/rtc/exynos4210_rtc.c
+++ b/hw/rtc/exynos4210_rtc.c
@@ -XXX,XX +XXX,XX @@ static void exynos4210_rtc_init(Object *obj)
     Exynos4210RTCState *s = EXYNOS4210_RTC(obj);
     SysBusDevice *dev = SYS_BUS_DEVICE(obj);

-    s->ptimer = ptimer_init(exynos4210_rtc_tick, s, PTIMER_POLICY_DEFAULT);
+    s->ptimer = ptimer_init(exynos4210_rtc_tick, s, PTIMER_POLICY_LEGACY);
     ptimer_transaction_begin(s->ptimer);
     ptimer_set_freq(s->ptimer, RTC_BASE_FREQ);
     exynos4210_rtc_update_freq(s, 0);
     ptimer_transaction_commit(s->ptimer);

     s->ptimer_1Hz = ptimer_init(exynos4210_rtc_1Hz_tick,
-                                s, PTIMER_POLICY_DEFAULT);
+                                s, PTIMER_POLICY_LEGACY);
     ptimer_transaction_begin(s->ptimer_1Hz);
     ptimer_set_freq(s->ptimer_1Hz, RTC_BASE_FREQ);
     ptimer_transaction_commit(s->ptimer_1Hz);
diff --git a/hw/timer/allwinner-a10-pit.c b/hw/timer/allwinner-a10-pit.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/allwinner-a10-pit.c
+++ b/hw/timer/allwinner-a10-pit.c
@@ -XXX,XX +XXX,XX @@ static void a10_pit_init(Object *obj)

         tc->container = s;
         tc->index = i;
-        s->timer[i] = ptimer_init(a10_pit_timer_cb, tc, PTIMER_POLICY_DEFAULT);
+        s->timer[i] = ptimer_init(a10_pit_timer_cb, tc, PTIMER_POLICY_LEGACY);
     }
 }

diff --git a/hw/timer/altera_timer.c b/hw/timer/altera_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/altera_timer.c
+++ b/hw/timer/altera_timer.c
@@ -XXX,XX +XXX,XX @@ static void altera_timer_realize(DeviceState *dev, Error **errp)
         return;
     }

-    t->ptimer = ptimer_init(timer_hit, t, PTIMER_POLICY_DEFAULT);
+    t->ptimer = ptimer_init(timer_hit, t, PTIMER_POLICY_LEGACY);
     ptimer_transaction_begin(t->ptimer);
     ptimer_set_freq(t->ptimer, t->freq_hz);
     ptimer_transaction_commit(t->ptimer);
diff --git a/hw/timer/arm_timer.c b/hw/timer/arm_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/arm_timer.c
+++ b/hw/timer/arm_timer.c
@@ -XXX,XX +XXX,XX @@ static arm_timer_state *arm_timer_init(uint32_t freq)
     s->freq = freq;
     s->control = TIMER_CTRL_IE;

-    s->timer = ptimer_init(arm_timer_tick, s, PTIMER_POLICY_DEFAULT);
+    s->timer = ptimer_init(arm_timer_tick, s, PTIMER_POLICY_LEGACY);
     vmstate_register(NULL, VMSTATE_INSTANCE_ID_ANY, &vmstate_arm_timer, s);
     return s;
 }
diff --git a/hw/timer/digic-timer.c b/hw/timer/digic-timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/digic-timer.c
+++ b/hw/timer/digic-timer.c
@@ -XXX,XX +XXX,XX @@ static void digic_timer_init(Object *obj)
 {
     DigicTimerState *s = DIGIC_TIMER(obj);

-    s->ptimer = ptimer_init(digic_timer_tick, NULL, PTIMER_POLICY_DEFAULT);
+    s->ptimer = ptimer_init(digic_timer_tick, NULL, PTIMER_POLICY_LEGACY);

     /*
      * FIXME: there is no documentation on Digic timer
diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/etraxfs_timer.c
+++ b/hw/timer/etraxfs_timer.c
@@ -XXX,XX +XXX,XX @@ static void etraxfs_timer_realize(DeviceState *dev, Error **errp)
     ETRAXTimerState *t = ETRAX_TIMER(dev);
     SysBusDevice *sbd = SYS_BUS_DEVICE(dev);

-    t->ptimer_t0 = ptimer_init(timer0_hit, t, PTIMER_POLICY_DEFAULT);
-    t->ptimer_t1 = ptimer_init(timer1_hit, t, PTIMER_POLICY_DEFAULT);
-    t->ptimer_wd = ptimer_init(watchdog_hit, t, PTIMER_POLICY_DEFAULT);
+    t->ptimer_t0 = ptimer_init(timer0_hit, t, PTIMER_POLICY_LEGACY);
+    t->ptimer_t1 = ptimer_init(timer1_hit, t, PTIMER_POLICY_LEGACY);
+    t->ptimer_wd = ptimer_init(watchdog_hit, t, PTIMER_POLICY_LEGACY);

     sysbus_init_irq(sbd, &t->irq);
     sysbus_init_irq(sbd, &t->nmi);
diff --git a/hw/timer/exynos4210_mct.c b/hw/timer/exynos4210_mct.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/exynos4210_mct.c
+++ b/hw/timer/exynos4210_mct.c
@@ -XXX,XX +XXX,XX @@ static void exynos4210_mct_init(Object *obj)

     /* Global timer */
     s->g_timer.ptimer_frc = ptimer_init(exynos4210_gfrc_event, s,
-                                        PTIMER_POLICY_DEFAULT);
+                                        PTIMER_POLICY_LEGACY);
     memset(&s->g_timer.reg, 0, sizeof(struct gregs));

     /* Local timers */
     for (i = 0; i < 2; i++) {
         s->l_timer[i].tick_timer.ptimer_tick =
             ptimer_init(exynos4210_ltick_event, &s->l_timer[i],
-                        PTIMER_POLICY_DEFAULT);
+                        PTIMER_POLICY_LEGACY);
         s->l_timer[i].ptimer_frc =
             ptimer_init(exynos4210_lfrc_event, &s->l_timer[i],
-                        PTIMER_POLICY_DEFAULT);
+                        PTIMER_POLICY_LEGACY);
         s->l_timer[i].id = i;
     }

diff --git a/hw/timer/exynos4210_pwm.c b/hw/timer/exynos4210_pwm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/exynos4210_pwm.c
+++ b/hw/timer/exynos4210_pwm.c
@@ -XXX,XX +XXX,XX @@ static void exynos4210_pwm_init(Object *obj)
         sysbus_init_irq(dev, &s->timer[i].irq);
         s->timer[i].ptimer = ptimer_init(exynos4210_pwm_tick,
                                          &s->timer[i],
-                                         PTIMER_POLICY_DEFAULT);
+                                         PTIMER_POLICY_LEGACY);
         s->timer[i].id = i;
         s->timer[i].parent = s;
     }
diff --git a/hw/timer/grlib_gptimer.c b/hw/timer/grlib_gptimer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/grlib_gptimer.c
+++ b/hw/timer/grlib_gptimer.c
@@ -XXX,XX +XXX,XX @@ static void grlib_gptimer_realize(DeviceState *dev, Error **errp)

         timer->unit = unit;
         timer->ptimer = ptimer_init(grlib_gptimer_hit, timer,
-                                    PTIMER_POLICY_DEFAULT);
+                                    PTIMER_POLICY_LEGACY);
         timer->id = i;

         /* One IRQ line for each timer */
diff --git a/hw/timer/imx_epit.c b/hw/timer/imx_epit.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/imx_epit.c
+++ b/hw/timer/imx_epit.c
@@ -XXX,XX +XXX,XX @@ static void imx_epit_realize(DeviceState *dev, Error **errp)
                           0x00001000);
     sysbus_init_mmio(sbd, &s->iomem);

-    s->timer_reload = ptimer_init(imx_epit_reload, s, PTIMER_POLICY_DEFAULT);
+    s->timer_reload = ptimer_init(imx_epit_reload, s, PTIMER_POLICY_LEGACY);

-    s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_DEFAULT);
+    s->timer_cmp = ptimer_init(imx_epit_cmp, s, PTIMER_POLICY_LEGACY);
 }

 static void imx_epit_class_init(ObjectClass *klass, void *data)
diff --git a/hw/timer/imx_gpt.c b/hw/timer/imx_gpt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/imx_gpt.c
+++ b/hw/timer/imx_gpt.c
@@ -XXX,XX +XXX,XX @@ static void imx_gpt_realize(DeviceState *dev, Error **errp)
                           0x00001000);
     sysbus_init_mmio(sbd, &s->iomem);

-    s->timer = ptimer_init(imx_gpt_timeout, s, PTIMER_POLICY_DEFAULT);
+    s->timer = ptimer_init(imx_gpt_timeout, s, PTIMER_POLICY_LEGACY);
 }

 static void imx_gpt_class_init(ObjectClass *klass, void *data)
diff --git a/hw/timer/mss-timer.c b/hw/timer/mss-timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/mss-timer.c
+++ b/hw/timer/mss-timer.c
@@ -XXX,XX +XXX,XX @@ static void mss_timer_init(Object *obj)
     for (i = 0; i < NUM_TIMERS; i++) {
         struct Msf2Timer *st = &t->timers[i];

-        st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_DEFAULT);
+        st->ptimer = ptimer_init(timer_hit, st, PTIMER_POLICY_LEGACY);
         ptimer_transaction_begin(st->ptimer);
         ptimer_set_freq(st->ptimer, t->freq_hz);
         ptimer_transaction_commit(st->ptimer);
diff --git a/hw/timer/sh_timer.c b/hw/timer/sh_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/sh_timer.c
+++ b/hw/timer/sh_timer.c
@@ -XXX,XX +XXX,XX @@ static void *sh_timer_init(uint32_t freq, int feat, qemu_irq irq)
     s->enabled = 0;
     s->irq = irq;

-    s->timer = ptimer_init(sh_timer_tick, s, PTIMER_POLICY_DEFAULT);
+    s->timer = ptimer_init(sh_timer_tick, s, PTIMER_POLICY_LEGACY);

     sh_timer_write(s, OFFSET_TCOR >> 2, s->tcor);
     sh_timer_write(s, OFFSET_TCNT >> 2, s->tcnt);
diff --git a/hw/timer/slavio_timer.c b/hw/timer/slavio_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/slavio_timer.c
+++ b/hw/timer/slavio_timer.c
@@ -XXX,XX +XXX,XX @@ static void slavio_timer_init(Object *obj)
         tc->timer_index = i;

         s->cputimer[i].timer = ptimer_init(slavio_timer_irq, tc,
-                                           PTIMER_POLICY_DEFAULT);
+                                           PTIMER_POLICY_LEGACY);
         ptimer_transaction_begin(s->cputimer[i].timer);
         ptimer_set_period(s->cputimer[i].timer, TIMER_PERIOD);
         ptimer_transaction_commit(s->cputimer[i].timer);
diff --git a/hw/timer/xilinx_timer.c b/hw/timer/xilinx_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/xilinx_timer.c
+++ b/hw/timer/xilinx_timer.c
@@ -XXX,XX +XXX,XX @@ static void xilinx_timer_realize(DeviceState *dev, Error **errp)

         xt->parent = t;
         xt->nr = i;
-        xt->ptimer = ptimer_init(timer_hit, xt, PTIMER_POLICY_DEFAULT);
+        xt->ptimer = ptimer_init(timer_hit, xt, PTIMER_POLICY_LEGACY);
         ptimer_transaction_begin(xt->ptimer);
         ptimer_set_freq(xt->ptimer, t->freq_hz);
         ptimer_transaction_commit(xt->ptimer);
diff --git a/tests/unit/ptimer-test.c b/tests/unit/ptimer-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/unit/ptimer-test.c
+++ b/tests/unit/ptimer-test.c
@@ -XXX,XX +XXX,XX @@ static void add_ptimer_tests(uint8_t policy)
     char policy_name[256] = "";
     char *tmp;

-    if (policy == PTIMER_POLICY_DEFAULT) {
-        g_sprintf(policy_name, "default");
+    if (policy == PTIMER_POLICY_LEGACY) {
+        g_sprintf(policy_name, "legacy");
     }

     if (policy & PTIMER_POLICY_WRAP_AFTER_ONE_PERIOD) {
@@ -XXX,XX +XXX,XX @@ static void add_ptimer_tests(uint8_t policy)
 static void add_all_ptimer_policies_comb_tests(void)
 {
     int last_policy = PTIMER_POLICY_TRIGGER_ONLY_ON_DECREMENT;
-    int policy = PTIMER_POLICY_DEFAULT;
+    int policy = PTIMER_POLICY_LEGACY;

     for (; policy < (last_policy << 1); policy++) {
         if ((policy & PTIMER_POLICY_TRIGGER_ONLY_ON_DECREMENT) &&
--
2.25.1
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-4-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/omap_dma.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/dma/omap_dma.c b/hw/dma/omap_dma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/omap_dma.c
+++ b/hw/dma/omap_dma.c
@@ -XXX,XX +XXX,XX @@
  * with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "qemu-common.h"
 #include "qemu/timer.h"
 #include "hw/arm/omap.h"
@@ -XXX,XX +XXX,XX @@ static int omap_dma_sys_read(struct omap_dma_s *s, int offset,
     case 0x480:	/* DMA_PCh0_SR */
     case 0x482:	/* DMA_PCh1_SR */
     case 0x4c0:	/* DMA_PChD_SR_0 */
-        printf("%s: Physical Channel Status Registers not implemented.\n",
-               __func__);
+        qemu_log_mask(LOG_UNIMP,
+                      "%s: Physical Channel Status Registers not implemented\n",
+                      __func__);
         *ret = 0xff;
         break;

--
2.17.1
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-5-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/dma/omap_dma.c | 64 +++++++++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 24 deletions(-)

diff --git a/hw/dma/omap_dma.c b/hw/dma/omap_dma.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/dma/omap_dma.c
+++ b/hw/dma/omap_dma.c
@@ -XXX,XX +XXX,XX @@ static int omap_dma_ch_reg_write(struct omap_dma_s *s,
         ch->burst[0] = (value & 0x0180) >> 7;
         ch->pack[0] = (value & 0x0040) >> 6;
         ch->port[0] = (enum omap_dma_port) ((value & 0x003c) >> 2);
-        if (ch->port[0] >= __omap_dma_port_last)
-            printf("%s: invalid DMA port %i\n", __func__,
-                   ch->port[0]);
-        if (ch->port[1] >= __omap_dma_port_last)
-            printf("%s: invalid DMA port %i\n", __func__,
-                   ch->port[1]);
+        if (ch->port[0] >= __omap_dma_port_last) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid DMA port %i\n",
+                          __func__, ch->port[0]);
+        }
+        if (ch->port[1] >= __omap_dma_port_last) {
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid DMA port %i\n",
+                          __func__, ch->port[1]);
+        }
         ch->data_type = 1 << (value & 3);
         if ((value & 3) == 3) {
-            printf("%s: bad data_type for DMA channel\n", __func__);
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bad data_type for DMA channel\n", __func__);
             ch->data_type >>= 1;
         }
         break;
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         if (value & 2)                        /* SOFTRESET */
             omap_dma_reset(s->dma);
         s->ocp = value & 0x3321;
-        if (((s->ocp >> 12) & 3) == 3)                /* MIDLEMODE */
-            fprintf(stderr, "%s: invalid DMA power mode\n", __func__);
+        if (((s->ocp >> 12) & 3) == 3) { /* MIDLEMODE */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid DMA power mode\n",
+                          __func__);
+        }
         return;

     case 0x78:	/* DMA4_GCR */
         s->gcr = value & 0x00ff00ff;
-        if ((value & 0xff) == 0x00)        /* MAX_CHANNEL_FIFO_DEPTH */
-            fprintf(stderr, "%s: wrong FIFO depth in GCR\n", __func__);
+        if ((value & 0xff) == 0x00) { /* MAX_CHANNEL_FIFO_DEPTH */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: wrong FIFO depth in GCR\n",
+                          __func__);
+        }
         return;

     case 0x80 ... 0xfff:
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
     case 0x00:	/* DMA4_CCR */
         ch->buf_disable = (value >> 25) & 1;
         ch->src_sync = (value >> 24) & 1;	/* XXX For CamDMA must be 1 */
-        if (ch->buf_disable && !ch->src_sync)
-            fprintf(stderr, "%s: Buffering disable is not allowed in "
-                    "destination synchronised mode\n", __func__);
+        if (ch->buf_disable && !ch->src_sync) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: Buffering disable is not allowed in "
+                          "destination synchronised mode\n", __func__);
+        }
         ch->prefetch = (value >> 23) & 1;
         ch->bs = (value >> 18) & 1;
         ch->transparent_copy = (value >> 17) & 1;
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         ch->suspend = (value & 0x0100) >> 8;
         ch->priority = (value & 0x0040) >> 6;
         ch->fs = (value & 0x0020) >> 5;
-        if (ch->fs && ch->bs && ch->mode[0] && ch->mode[1])
-            fprintf(stderr, "%s: For a packet transfer at least one port "
-                    "must be constant-addressed\n", __func__);
+        if (ch->fs && ch->bs && ch->mode[0] && ch->mode[1]) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: For a packet transfer at least one port "
+                          "must be constant-addressed\n", __func__);
+        }
         ch->sync = (value & 0x001f) | ((value >> 14) & 0x0060);
         /* XXX must be 0x01 for CamDMA */

@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         ch->endian_lock[0] =(value >> 20) & 1;
         ch->endian[1] =(value >> 19) & 1;
         ch->endian_lock[1] =(value >> 18) & 1;
-        if (ch->endian[0] != ch->endian[1])
-            fprintf(stderr, "%s: DMA endianness conversion enable attempt\n",
-                    __func__);
+        if (ch->endian[0] != ch->endian[1]) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: DMA endianness conversion enable attempt\n",
+                          __func__);
+        }
         ch->write_mode = (value >> 16) & 3;
         ch->burst[1] = (value & 0xc000) >> 14;
         ch->pack[1] = (value & 0x2000) >> 13;
@@ -XXX,XX +XXX,XX @@ static void omap_dma4_write(void *opaque, hwaddr addr,
         ch->burst[0] = (value & 0x0180) >> 7;
         ch->pack[0] = (value & 0x0040) >> 6;
         ch->translate[0] = (value & 0x003c) >> 2;
-        if (ch->translate[0] | ch->translate[1])
-            fprintf(stderr, "%s: bad MReqAddressTranslate sideband signal\n",
-                    __func__);
+        if (ch->translate[0] | ch->translate[1]) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bad MReqAddressTranslate sideband signal\n",
+                          __func__);
+        }
         ch->data_type = 1 << (value & 3);
         if ((value & 3) == 3) {
-            printf("%s: bad data_type for DMA channel\n", __func__);
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: bad data_type for DMA channel\n", __func__);
             ch->data_type >>= 1;
         }
         break;
--
2.17.1
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20180624040609.17572-6-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/omap_spi.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/hw/ssi/omap_spi.c b/hw/ssi/omap_spi.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/omap_spi.c
+++ b/hw/ssi/omap_spi.c
@@ -XXX,XX +XXX,XX @@
  * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
  */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "hw/arm/omap.h"

@@ -XXX,XX +XXX,XX @@ static void omap_mcspi_write(void *opaque, hwaddr addr,
     case 0x2c:	/* MCSPI_CHCONF */
         if ((value ^ s->ch[ch].config) & (3 << 14))	/* DMAR | DMAW */
             omap_mcspi_dmarequest_update(s->ch + ch);
-        if (((value >> 12) & 3) == 3)			/* TRM */
-            fprintf(stderr, "%s: invalid TRM value (3)\n", __func__);
-        if (((value >> 7) & 0x1f) < 3)			/* WL */
-            fprintf(stderr, "%s: invalid WL value (%" PRIx64 ")\n",
-                    __func__, (value >> 7) & 0x1f);
+        if (((value >> 12) & 3) == 3) { /* TRM */
+            qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid TRM value (3)\n",
+                          __func__);
+        }
+        if (((value >> 7) & 0x1f) < 3) { /* WL */
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "%s: invalid WL value (%" PRIx64 ")\n",
+                          __func__, (value >> 7) & 0x1f);
+        }
         s->ch[ch].config = value & 0x7fffff;
         break;

--
2.17.1
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-7-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/sd/omap_mmc.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/hw/sd/omap_mmc.c b/hw/sd/omap_mmc.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/sd/omap_mmc.c
+++ b/hw/sd/omap_mmc.c
@@ -XXX,XX +XXX,XX @@
  * with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "hw/arm/omap.h"
 #include "hw/sd/sd.h"
@@ -XXX,XX +XXX,XX @@ static void omap_mmc_write(void *opaque, hwaddr offset,
         s->enable = (value >> 11) & 1;
         s->be = (value >> 10) & 1;
         s->clkdiv = (value >> 0) & (s->rev >= 2 ? 0x3ff : 0xff);
-        if (s->mode != 0)
-            printf("SD mode %i unimplemented!\n", s->mode);
-        if (s->be != 0)
-            printf("SD FIFO byte sex unimplemented!\n");
+        if (s->mode != 0) {
+            qemu_log_mask(LOG_UNIMP,
+                          "omap_mmc_wr: mode #%i unimplemented\n", s->mode);
+        }
+        if (s->be != 0) {
+            qemu_log_mask(LOG_UNIMP,
+                          "omap_mmc_wr: Big Endian not implemented\n");
+        }
         if (s->dw != 0 && s->lines < 4)
             printf("4-bit SD bus enabled\n");
         if (!s->enable)
--
2.17.1
Deleted patch
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-8-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/i2c/omap_i2c.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/hw/i2c/omap_i2c.c b/hw/i2c/omap_i2c.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i2c/omap_i2c.c
+++ b/hw/i2c/omap_i2c.c
@@ -XXX,XX +XXX,XX @@
  * with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "hw/hw.h"
 #include "hw/i2c/i2c.h"
 #include "hw/arm/omap.h"
@@ -XXX,XX +XXX,XX @@ static void omap_i2c_write(void *opaque, hwaddr addr,
             }
             break;
         }
-        if ((value & (1 << 15)) && !(value & (1 << 10))) {	/* MST */
-            fprintf(stderr, "%s: I^2C slave mode not supported\n",
-                    __func__);
+        if ((value & (1 << 15)) && !(value & (1 << 10))) { /* MST */
+            qemu_log_mask(LOG_UNIMP, "%s: I^2C slave mode not supported\n",
+                          __func__);
             break;
         }
-        if ((value & (1 << 15)) && value & (1 << 8)) {		/* XA */
-            fprintf(stderr, "%s: 10-bit addressing mode not supported\n",
-                    __func__);
+        if ((value & (1 << 15)) && value & (1 << 8)) { /* XA */
+            qemu_log_mask(LOG_UNIMP,
+                          "%s: 10-bit addressing mode not supported\n",
+                          __func__);
             break;
         }
         if ((value & (1 << 15)) && value & (1 << 0)) {		/* STT */
@@ -XXX,XX +XXX,XX @@ static void omap_i2c_write(void *opaque, hwaddr addr,
             s->stat |= 0x3f;
             omap_i2c_interrupts_update(s);
         }
-        if (value & (1 << 15))					/* ST_EN */
-            fprintf(stderr, "%s: System Test not supported\n", __func__);
+        if (value & (1 << 15)) { /* ST_EN */
+            qemu_log_mask(LOG_UNIMP,
+                          "%s: System Test not supported\n", __func__);
+        }
         break;

     default:
--
2.17.1
From: Florian Lugou <florian.lugou@provenrun.com>

As per the description of the HCR_EL2.APK field in the ARMv8 ARM,
Pointer Authentication keys accesses should only be trapped to Secure
EL2 if it is enabled.

Signed-off-by: Florian Lugou <florian.lugou@provenrun.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517145242.1215271-1-florian.lugou@provenrun.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri,
     int el = arm_current_el(env);

     if (el < 2 &&
-        arm_feature(env, ARM_FEATURE_EL2) &&
+        arm_is_el2_enabled(env) &&
         !(arm_hcr_el2_eff(env) & HCR_APK)) {
         return CP_ACCESS_TRAP_EL2;
     }
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

This feature adds a new register, HCRX_EL2, which controls
many of the newer AArch64 features. So far the register is
effectively RES0, because none of the new features are done.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517054850.177016-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 20 ++++++++++++++++++
 target/arm/cpu64.c  |  1 +
 target/arm/helper.c | 50 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
     uint32_t pmsav5_data_ap; /* PMSAv5 MPU data access permissions */
     uint32_t pmsav5_insn_ap; /* PMSAv5 MPU insn access permissions */
     uint64_t hcr_el2; /* Hypervisor configuration register */
+    uint64_t hcrx_el2; /* Extended Hypervisor configuration register */
     uint64_t scr_el3; /* Secure configuration register.  */
     union { /* Fault status registers.  */
         struct {
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define HCR_TWEDEN    (1ULL << 59)
 #define HCR_TWEDEL    MAKE_64BIT_MASK(60, 4)

+#define HCRX_ENAS0    (1ULL << 0)
+#define HCRX_ENALS    (1ULL << 1)
+#define HCRX_ENASR    (1ULL << 2)
+#define HCRX_FNXS     (1ULL << 3)
+#define HCRX_FGTNXS   (1ULL << 4)
+#define HCRX_SMPME    (1ULL << 5)
+#define HCRX_TALLINT  (1ULL << 6)
+#define HCRX_VINMI    (1ULL << 7)
+#define HCRX_VFNMI    (1ULL << 8)
42
+#define HCRX_CMOW (1ULL << 9)
43
+#define HCRX_MCE2 (1ULL << 10)
44
+#define HCRX_MSCEN (1ULL << 11)
31
+
45
+
32
+/* Unmap the range of all the notifiers registered to @mr */
46
#define HPFAR_NS (1ULL << 63)
33
+void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr);
47
34
+
48
#define SCR_NS (1U << 0)
35
#endif /* HW_ARM_SMMU_COMMON */
49
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
36
diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
50
* Not included here is HCR_RW.
37
index XXXXXXX..XXXXXXX 100644
51
*/
38
--- a/hw/arm/smmu-common.c
52
uint64_t arm_hcr_el2_eff(CPUARMState *env);
39
+++ b/hw/arm/smmu-common.c
53
+uint64_t arm_hcrx_el2_eff(CPUARMState *env);
40
@@ -XXX,XX +XXX,XX @@ static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
54
41
return (k1->asid == k2->asid) && (k1->iova == k2->iova);
55
/* Return true if the specified exception level is running in AArch64 state. */
56
static inline bool arm_el_is_aa64(CPUARMState *env, int el)
57
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_ats1e1(const ARMISARegisters *id)
58
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, PAN) >= 2;
42
}
59
}
43
60
44
+/* Unmap the whole notifier's range */
61
+static inline bool isar_feature_aa64_hcx(const ARMISARegisters *id)
45
+static void smmu_unmap_notifier_range(IOMMUNotifier *n)
46
+{
62
+{
47
+ IOMMUTLBEntry entry;
63
+ return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, HCX) != 0;
48
+
49
+ entry.target_as = &address_space_memory;
50
+ entry.iova = n->start;
51
+ entry.perm = IOMMU_NONE;
52
+ entry.addr_mask = n->end - n->start;
53
+
54
+ memory_region_notify_one(n, &entry);
55
+}
64
+}
56
+
65
+
57
+/* Unmap all notifiers attached to @mr */
66
static inline bool isar_feature_aa64_uao(const ARMISARegisters *id)
58
+inline void smmu_inv_notifiers_mr(IOMMUMemoryRegion *mr)
67
{
68
return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, UAO) != 0;
69
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
70
index XXXXXXX..XXXXXXX 100644
71
--- a/target/arm/cpu64.c
72
+++ b/target/arm/cpu64.c
73
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
74
t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
75
t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* FEAT_PAN2 */
76
t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
77
+ t = FIELD_DP64(t, ID_AA64MMFR1, HCX, 1); /* FEAT_HCX */
78
cpu->isar.id_aa64mmfr1 = t;
79
80
t = cpu->isar.id_aa64mmfr2;
81
diff --git a/target/arm/helper.c b/target/arm/helper.c
82
index XXXXXXX..XXXXXXX 100644
83
--- a/target/arm/helper.c
84
+++ b/target/arm/helper.c
85
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
86
return ret;
87
}
88
89
+static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
90
+ uint64_t value)
59
+{
91
+{
60
+ IOMMUNotifier *n;
92
+ uint64_t valid_mask = 0;
61
+
93
+
62
+ trace_smmu_inv_notifiers_mr(mr->parent_obj.name);
94
+ /* No features adding bits to HCRX are implemented. */
63
+ IOMMU_NOTIFIER_FOREACH(n, mr) {
95
+
64
+ smmu_unmap_notifier_range(n);
96
+ /* Clear RES0 bits. */
65
+ }
97
+ env->cp15.hcrx_el2 = value & valid_mask;
66
+}
98
+}
67
+
99
+
68
+/* Unmap all notifiers of all mr's */
100
+static CPAccessResult access_hxen(CPUARMState *env, const ARMCPRegInfo *ri,
69
+void smmu_inv_notifiers_all(SMMUState *s)
101
+ bool isread)
70
+{
102
+{
71
+ SMMUNotifierNode *node;
103
+ if (arm_current_el(env) < 3
72
+
104
+ && arm_feature(env, ARM_FEATURE_EL3)
73
+ QLIST_FOREACH(node, &s->notifiers_list, next) {
105
+ && !(env->cp15.scr_el3 & SCR_HXEN)) {
74
+ smmu_inv_notifiers_mr(&node->sdev->iommu);
106
+ return CP_ACCESS_TRAP_EL3;
75
+ }
107
+ }
108
+ return CP_ACCESS_OK;
76
+}
109
+}
77
+
110
+
78
static void smmu_base_realize(DeviceState *dev, Error **errp)
111
+static const ARMCPRegInfo hcrx_el2_reginfo = {
112
+ .name = "HCRX_EL2", .state = ARM_CP_STATE_AA64,
113
+ .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 2,
114
+ .access = PL2_RW, .writefn = hcrx_write, .accessfn = access_hxen,
115
+ .fieldoffset = offsetof(CPUARMState, cp15.hcrx_el2),
116
+};
117
+
118
+/* Return the effective value of HCRX_EL2. */
119
+uint64_t arm_hcrx_el2_eff(CPUARMState *env)
120
+{
121
+ /*
122
+ * The bits in this register behave as 0 for all purposes other than
123
+ * direct reads of the register if:
124
+ * - EL2 is not enabled in the current security state,
125
+ * - SCR_EL3.HXEn is 0.
126
+ */
127
+ if (!arm_is_el2_enabled(env)
128
+ || (arm_feature(env, ARM_FEATURE_EL3)
129
+ && !(env->cp15.scr_el3 & SCR_HXEN))) {
130
+ return 0;
131
+ }
132
+ return env->cp15.hcrx_el2;
133
+}
134
+
135
static void cptr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
136
uint64_t value)
79
{
137
{
80
SMMUState *s = ARM_SMMU(dev);
138
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
81
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
139
define_arm_cp_regs(cpu, zcr_reginfo);
82
index XXXXXXX..XXXXXXX 100644
140
}
83
--- a/hw/arm/smmuv3.c
141
84
+++ b/hw/arm/smmuv3.c
142
+ if (cpu_isar_feature(aa64_hcx, cpu)) {
85
@@ -XXX,XX +XXX,XX @@ epilogue:
143
+ define_one_arm_cp_reg(cpu, &hcrx_el2_reginfo);
86
return entry;
87
}
88
89
+/**
90
+ * smmuv3_notify_iova - call the notifier @n for a given
91
+ * @asid and @iova tuple.
92
+ *
93
+ * @mr: IOMMU mr region handle
94
+ * @n: notifier to be called
95
+ * @asid: address space ID or negative value if we don't care
96
+ * @iova: iova
97
+ */
98
+static void smmuv3_notify_iova(IOMMUMemoryRegion *mr,
99
+ IOMMUNotifier *n,
100
+ int asid,
101
+ dma_addr_t iova)
102
+{
103
+ SMMUDevice *sdev = container_of(mr, SMMUDevice, iommu);
104
+ SMMUEventInfo event = {};
105
+ SMMUTransTableInfo *tt;
106
+ SMMUTransCfg *cfg;
107
+ IOMMUTLBEntry entry;
108
+
109
+ cfg = smmuv3_get_config(sdev, &event);
110
+ if (!cfg) {
111
+ qemu_log_mask(LOG_GUEST_ERROR,
112
+ "%s error decoding the configuration for iommu mr=%s\n",
113
+ __func__, mr->parent_obj.name);
114
+ return;
115
+ }
144
+ }
116
+
145
+
117
+ if (asid >= 0 && cfg->asid != asid) {
146
#ifdef TARGET_AARCH64
118
+ return;
147
if (cpu_isar_feature(aa64_pauth, cpu)) {
119
+ }
148
define_arm_cp_regs(cpu, pauth_reginfo);
120
+
121
+ tt = select_tt(cfg, iova);
122
+ if (!tt) {
123
+ return;
124
+ }
125
+
126
+ entry.target_as = &address_space_memory;
127
+ entry.iova = iova;
128
+ entry.addr_mask = (1 << tt->granule_sz) - 1;
129
+ entry.perm = IOMMU_NONE;
130
+
131
+ memory_region_notify_one(n, &entry);
132
+}
133
+
134
+/* invalidate an asid/iova tuple in all mr's */
135
+static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova)
136
+{
137
+ SMMUNotifierNode *node;
138
+
139
+ QLIST_FOREACH(node, &s->notifiers_list, next) {
140
+ IOMMUMemoryRegion *mr = &node->sdev->iommu;
141
+ IOMMUNotifier *n;
142
+
143
+ trace_smmuv3_inv_notifiers_iova(mr->parent_obj.name, asid, iova);
144
+
145
+ IOMMU_NOTIFIER_FOREACH(n, mr) {
146
+ smmuv3_notify_iova(mr, n, asid, iova);
147
+ }
148
+ }
149
+}
150
+
151
static int smmuv3_cmdq_consume(SMMUv3State *s)
152
{
153
SMMUState *bs = ARM_SMMU(s);
154
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
155
uint16_t asid = CMD_ASID(&cmd);
156
157
trace_smmuv3_cmdq_tlbi_nh_asid(asid);
158
+ smmu_inv_notifiers_all(&s->smmu_state);
159
smmu_iotlb_inv_asid(bs, asid);
160
break;
161
}
162
case SMMU_CMD_TLBI_NH_ALL:
163
case SMMU_CMD_TLBI_NSNH_ALL:
164
trace_smmuv3_cmdq_tlbi_nh();
165
+ smmu_inv_notifiers_all(&s->smmu_state);
166
smmu_iotlb_inv_all(bs);
167
break;
168
case SMMU_CMD_TLBI_NH_VAA:
169
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
170
uint16_t vmid = CMD_VMID(&cmd);
171
172
trace_smmuv3_cmdq_tlbi_nh_vaa(vmid, addr);
173
+ smmuv3_inv_notifiers_iova(bs, -1, addr);
174
smmu_iotlb_inv_all(bs);
175
break;
176
}
177
@@ -XXX,XX +XXX,XX @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
178
bool leaf = CMD_LEAF(&cmd);
179
180
trace_smmuv3_cmdq_tlbi_nh_va(vmid, asid, addr, leaf);
181
+ smmuv3_inv_notifiers_iova(bs, asid, addr);
182
smmu_iotlb_inv_iova(bs, asid, addr);
183
break;
184
}
185
@@ -XXX,XX +XXX,XX @@ static void smmuv3_notify_flag_changed(IOMMUMemoryRegion *iommu,
186
IOMMUNotifierFlag old,
187
IOMMUNotifierFlag new)
188
{
189
+ SMMUDevice *sdev = container_of(iommu, SMMUDevice, iommu);
190
+ SMMUv3State *s3 = sdev->smmu;
191
+ SMMUState *s = &(s3->smmu_state);
192
+ SMMUNotifierNode *node = NULL;
193
+ SMMUNotifierNode *next_node = NULL;
194
+
195
+ if (new & IOMMU_NOTIFIER_MAP) {
196
+ int bus_num = pci_bus_num(sdev->bus);
197
+ PCIDevice *pcidev = pci_find_device(sdev->bus, bus_num, sdev->devfn);
198
+
199
+ warn_report("SMMUv3 does not support notification on MAP: "
200
+ "device %s will not function properly", pcidev->name);
201
+ }
202
+
203
if (old == IOMMU_NOTIFIER_NONE) {
204
- warn_report("SMMUV3 does not support vhost/vfio integration yet: "
205
- "devices of those types will not function properly");
206
+ trace_smmuv3_notify_flag_add(iommu->parent_obj.name);
207
+ node = g_malloc0(sizeof(*node));
208
+ node->sdev = sdev;
209
+ QLIST_INSERT_HEAD(&s->notifiers_list, node, next);
210
+ return;
211
+ }
212
+
213
+ /* update notifier node with new flags */
214
+ QLIST_FOREACH_SAFE(node, &s->notifiers_list, next, next_node) {
215
+ if (node->sdev == sdev) {
216
+ if (new == IOMMU_NOTIFIER_NONE) {
217
+ trace_smmuv3_notify_flag_del(iommu->parent_obj.name);
218
+ QLIST_REMOVE(node, next);
219
+ g_free(node);
220
+ }
221
+ return;
222
+ }
223
}
224
}
225
226
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
227
index XXXXXXX..XXXXXXX 100644
228
--- a/hw/arm/trace-events
229
+++ b/hw/arm/trace-events
230
@@ -XXX,XX +XXX,XX @@ smmu_iotlb_cache_miss(uint16_t asid, uint64_t addr, uint32_t hit, uint32_t miss,
231
smmu_iotlb_inv_all(void) "IOTLB invalidate all"
232
smmu_iotlb_inv_asid(uint16_t asid) "IOTLB invalidate asid=%d"
233
smmu_iotlb_inv_iova(uint16_t asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
234
+smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
235
236
#hw/arm/smmuv3.c
237
smmuv3_read_mmio(uint64_t addr, uint64_t val, unsigned size, uint32_t r) "addr: 0x%"PRIx64" val:0x%"PRIx64" size: 0x%x(%d)"
238
@@ -XXX,XX +XXX,XX @@ smmuv3_cmdq_tlbi_nh_vaa(int vmid, uint64_t addr) "vmid =%d addr=0x%"PRIx64
239
smmuv3_cmdq_tlbi_nh(void) ""
240
smmuv3_cmdq_tlbi_nh_asid(uint16_t asid) "asid=%d"
241
smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid %d"
242
+smmuv3_notify_flag_add(const char *iommu) "ADD SMMUNotifier node for iommu mr=%s"
243
+smmuv3_notify_flag_del(const char *iommu) "DEL SMMUNotifier node for iommu mr=%s"
244
+smmuv3_inv_notifiers_iova(const char *name, uint16_t asid, uint64_t iova) "iommu mr=%s asid=%d iova=0x%"PRIx64
245
+
246
--
149
--
247
2.17.1
150
2.25.1
248
249
diff view generated by jsdifflib
From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20180624040609.17572-10-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/omap.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/hw/arm/omap.h b/include/hw/arm/omap.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/omap.h
+++ b/include/hw/arm/omap.h
@@ -XXX,XX +XXX,XX @@
 # define hw_omap_h "omap.h"
 #include "hw/irq.h"
 #include "target/arm/cpu-qom.h"
+#include "qemu/log.h"

 # define OMAP_EMIFS_BASE 0x00000000
 # define OMAP2_Q0_BASE 0x00000000
@@ -XXX,XX +XXX,XX @@ struct omap_mpu_state_s *omap2420_mpu_init(MemoryRegion *sysmem,
                                            unsigned long sdram_size,
                                            const char *core);

-#define OMAP_FMT_plx "%#08" HWADDR_PRIx
-
 uint32_t omap_badwidth_read8(void *opaque, hwaddr addr);
 void omap_badwidth_write8(void *opaque, hwaddr addr,
                           uint32_t value);
@@ -XXX,XX +XXX,XX @@ void omap_badwidth_write32(void *opaque, hwaddr addr,
 void omap_mpu_wakeup(void *opaque, int irq, int req);

 # define OMAP_BAD_REG(paddr)        \
-        fprintf(stderr, "%s: Bad register " OMAP_FMT_plx "\n",    \
-                __func__, paddr)
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad register %#08"HWADDR_PRIx"\n", \
+                      __func__, paddr)
 # define OMAP_RO_REG(paddr)        \
-        fprintf(stderr, "%s: Read-only register " OMAP_FMT_plx "\n",    \
-                __func__, paddr)
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: Read-only register %#08" \
+                      HWADDR_PRIx "\n", \
+                      __func__, paddr)

 /* OMAP-specific Linux bootloader tags for the ATAG_BOARD area
    (Board-specifc tags are not here) */
--
2.17.1

From: Richard Henderson <richard.henderson@linaro.org>

We had a few CPTR_* bits defined, but missed quite a few.
Complete all of the fields up to ARMv9.2.
Use FIELD_EX64 instead of manual extract32.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220517054850.177016-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 44 +++++++++++++++++++++++++++++-----
 hw/arm/boot.c       |  2 +-
 target/arm/cpu.c    | 11 ++++++---
 target/arm/helper.c | 54 ++++++++++++++++++++++-----------------------
 4 files changed, 75 insertions(+), 36 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_SPINTMASK (1ULL << 62) /* FEAT_NMI */
 #define SCTLR_TIDCP (1ULL << 63) /* FEAT_TIDCP1 */

-#define CPTR_TCPAC (1U << 31)
-#define CPTR_TTA (1U << 20)
-#define CPTR_TFP (1U << 10)
-#define CPTR_TZ (1U << 8) /* CPTR_EL2 */
-#define CPTR_EZ (1U << 8) /* CPTR_EL3 */
+/* Bit definitions for CPACR (AArch32 only) */
+FIELD(CPACR, CP10, 20, 2)
+FIELD(CPACR, CP11, 22, 2)
+FIELD(CPACR, TRCDIS, 28, 1) /* matches CPACR_EL1.TTA */
+FIELD(CPACR, D32DIS, 30, 1) /* up to v7; RAZ in v8 */
+FIELD(CPACR, ASEDIS, 31, 1)
+
+/* Bit definitions for CPACR_EL1 (AArch64 only) */
+FIELD(CPACR_EL1, ZEN, 16, 2)
+FIELD(CPACR_EL1, FPEN, 20, 2)
+FIELD(CPACR_EL1, SMEN, 24, 2)
+FIELD(CPACR_EL1, TTA, 28, 1) /* matches CPACR.TRCDIS */
+
+/* Bit definitions for HCPTR (AArch32 only) */
+FIELD(HCPTR, TCP10, 10, 1)
+FIELD(HCPTR, TCP11, 11, 1)
+FIELD(HCPTR, TASE, 15, 1)
+FIELD(HCPTR, TTA, 20, 1)
+FIELD(HCPTR, TAM, 30, 1) /* matches CPTR_EL2.TAM */
+FIELD(HCPTR, TCPAC, 31, 1) /* matches CPTR_EL2.TCPAC */
+
+/* Bit definitions for CPTR_EL2 (AArch64 only) */
+FIELD(CPTR_EL2, TZ, 8, 1) /* !E2H */
+FIELD(CPTR_EL2, TFP, 10, 1) /* !E2H, matches HCPTR.TCP10 */
+FIELD(CPTR_EL2, TSM, 12, 1) /* !E2H */
+FIELD(CPTR_EL2, ZEN, 16, 2) /* E2H */
+FIELD(CPTR_EL2, FPEN, 20, 2) /* E2H */
+FIELD(CPTR_EL2, SMEN, 24, 2) /* E2H */
+FIELD(CPTR_EL2, TTA, 28, 1)
+FIELD(CPTR_EL2, TAM, 30, 1) /* matches HCPTR.TAM */
+FIELD(CPTR_EL2, TCPAC, 31, 1) /* matches HCPTR.TCPAC */
+
+/* Bit definitions for CPTR_EL3 (AArch64 only) */
+FIELD(CPTR_EL3, EZ, 8, 1)
+FIELD(CPTR_EL3, TFP, 10, 1)
+FIELD(CPTR_EL3, ESM, 12, 1)
+FIELD(CPTR_EL3, TTA, 20, 1)
+FIELD(CPTR_EL3, TAM, 30, 1)
+FIELD(CPTR_EL3, TCPAC, 31, 1)

 #define MDCR_EPMAD (1U << 21)
 #define MDCR_EDAD (1U << 20)
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
             env->cp15.scr_el3 |= SCR_ATA;
         }
         if (cpu_isar_feature(aa64_sve, cpu)) {
-            env->cp15.cptr_el[3] |= CPTR_EZ;
+            env->cp15.cptr_el[3] |= R_CPTR_EL3_EZ_MASK;
         }
         /* AArch64 kernels never boot in secure mode */
         assert(!info->secure_boot);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
         /* Trap on btype=3 for PACIxSP. */
         env->cp15.sctlr_el[1] |= SCTLR_BT0;
         /* and to the FP/Neon instructions */
-        env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
+        env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+                                         CPACR_EL1, FPEN, 3);
         /* and to the SVE instructions */
-        env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
+        env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+                                         CPACR_EL1, ZEN, 3);
         /* with reasonable vector length */
         if (cpu_isar_feature(aa64_sve, cpu)) {
             env->vfp.zcr_el[1] =
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
     } else {
 #if defined(CONFIG_USER_ONLY)
         /* Userspace expects access to cp10 and cp11 for FP/Neon */
-        env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 4, 0xf);
+        env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+                                         CPACR, CP10, 3);
+        env->cp15.cpacr_el1 = FIELD_DP64(env->cp15.cpacr_el1,
+                                         CPACR, CP11, 3);
 #endif
     }

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
      */
     if (cpu_isar_feature(aa32_vfp_simd, env_archcpu(env))) {
         /* VFP coprocessor: cp10 & cp11 [23:20] */
-        mask |= (1 << 31) | (1 << 30) | (0xf << 20);
+        mask |= R_CPACR_ASEDIS_MASK |
+                R_CPACR_D32DIS_MASK |
+                R_CPACR_CP11_MASK |
+                R_CPACR_CP10_MASK;

         if (!arm_feature(env, ARM_FEATURE_NEON)) {
             /* ASEDIS [31] bit is RAO/WI */
-            value |= (1 << 31);
+            value |= R_CPACR_ASEDIS_MASK;
         }

         /* VFPv3 and upwards with NEON implement 32 double precision
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
          */
         if (!cpu_isar_feature(aa32_simd_r32, env_archcpu(env))) {
             /* D32DIS [30] is RAO/WI if D16-31 are not implemented. */
-            value |= (1 << 30);
+            value |= R_CPACR_D32DIS_MASK;
         }
     }
     value &= mask;
@@ -XXX,XX +XXX,XX @@ static void cpacr_write(CPUARMState *env, const ARMCPRegInfo *ri,
      */
     if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
         !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
-        value &= ~(0xf << 20);
-        value |= env->cp15.cpacr_el1 & (0xf << 20);
+        mask = R_CPACR_CP11_MASK | R_CPACR_CP10_MASK;
+        value = (value & ~mask) | (env->cp15.cpacr_el1 & mask);
     }

     env->cp15.cpacr_el1 = value;
@@ -XXX,XX +XXX,XX @@ static uint64_t cpacr_read(CPUARMState *env, const ARMCPRegInfo *ri)

     if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
         !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
-        value &= ~(0xf << 20);
+        value = ~(R_CPACR_CP11_MASK | R_CPACR_CP10_MASK);
     }
     return value;
 }
@@ -XXX,XX +XXX,XX @@ static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
     if (arm_feature(env, ARM_FEATURE_V8)) {
         /* Check if CPACR accesses are to be trapped to EL2 */
         if (arm_current_el(env) == 1 && arm_is_el2_enabled(env) &&
-            (env->cp15.cptr_el[2] & CPTR_TCPAC)) {
+            FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TCPAC)) {
             return CP_ACCESS_TRAP_EL2;
         /* Check if CPACR accesses are to be trapped to EL3 */
         } else if (arm_current_el(env) < 3 &&
-                   (env->cp15.cptr_el[3] & CPTR_TCPAC)) {
+                   FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, TCPAC)) {
             return CP_ACCESS_TRAP_EL3;
         }
     }
@@ -XXX,XX +XXX,XX @@ static CPAccessResult cptr_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                   bool isread)
 {
     /* Check if CPTR accesses are set to trap to EL3 */
-    if (arm_current_el(env) == 2 && (env->cp15.cptr_el[3] & CPTR_TCPAC)) {
+    if (arm_current_el(env) == 2 &&
+        FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, TCPAC)) {
         return CP_ACCESS_TRAP_EL3;
     }

@@ -XXX,XX +XXX,XX @@ static void cptr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
      */
     if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
         !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
-        value &= ~(0x3 << 10);
-        value |= env->cp15.cptr_el[2] & (0x3 << 10);
+        uint64_t mask = R_HCPTR_TCP11_MASK | R_HCPTR_TCP10_MASK;
+        value = (value & ~mask) | (env->cp15.cptr_el[2] & mask);
     }
     env->cp15.cptr_el[2] = value;
 }
@@ -XXX,XX +XXX,XX @@ static uint64_t cptr_el2_read(CPUARMState *env, const ARMCPRegInfo *ri)

     if (arm_feature(env, ARM_FEATURE_EL3) && !arm_el_is_aa64(env, 3) &&
         !arm_is_secure(env) && !extract32(env->cp15.nsacr, 10, 1)) {
-        value |= 0x3 << 10;
+        value |= R_HCPTR_TCP11_MASK | R_HCPTR_TCP10_MASK;
     }
     return value;
 }
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
     uint64_t hcr_el2 = arm_hcr_el2_eff(env);

     if (el <= 1 && (hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
-        /* Check CPACR.ZEN. */
-        switch (extract32(env->cp15.cpacr_el1, 16, 2)) {
+        switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, ZEN)) {
         case 1:
             if (el != 0) {
                 break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
         }

         /* Check CPACR.FPEN. */
-        switch (extract32(env->cp15.cpacr_el1, 20, 2)) {
+        switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, FPEN)) {
         case 1:
             if (el != 0) {
                 break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
      */
     if (el <= 2) {
         if (hcr_el2 & HCR_E2H) {
-            /* Check CPTR_EL2.ZEN. */
-            switch (extract32(env->cp15.cptr_el[2], 16, 2)) {
+            switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, ZEN)) {
             case 1:
                 if (el != 0 || !(hcr_el2 & HCR_TGE)) {
                     break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
                 return 2;
             }

-            /* Check CPTR_EL2.FPEN. */
-            switch (extract32(env->cp15.cptr_el[2], 20, 2)) {
+            switch (FIELD_EX32(env->cp15.cptr_el[2], CPTR_EL2, FPEN)) {
             case 1:
                 if (el == 2 || !(hcr_el2 & HCR_TGE)) {
                     break;
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
                 return 0;
             }
         } else if (arm_is_el2_enabled(env)) {
-            if (env->cp15.cptr_el[2] & CPTR_TZ) {
+            if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TZ)) {
                 return 2;
             }
-            if (env->cp15.cptr_el[2] & CPTR_TFP) {
+            if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TFP)) {
                 return 0;
             }
         }
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)

     /* CPTR_EL3. Since EZ is negative we must check for EL3. */
     if (arm_feature(env, ARM_FEATURE_EL3)
-        && !(env->cp15.cptr_el[3] & CPTR_EZ)) {
+        && !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, EZ)) {
         return 3;
     }
 #endif
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
      * This register is ignored if E2H+TGE are both set.
      */
     if ((hcr_el2 & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE)) {
-        int fpen = extract32(env->cp15.cpacr_el1, 20, 2);
+        int fpen = FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, FPEN);

         switch (fpen) {
         case 0:
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
      */
     if (cur_el <= 2) {
         if (hcr_el2 & HCR_E2H) {
-            /* Check CPTR_EL2.FPEN. */
-            switch (extract32(env->cp15.cptr_el[2], 20, 2)) {
+            switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, FPEN)) {
             case 1:
                 if (cur_el != 0 || !(hcr_el2 & HCR_TGE)) {
                     break;
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
                 return 2;
             }
         } else if (arm_is_el2_enabled(env)) {
-            if (env->cp15.cptr_el[2] & CPTR_TFP) {
+            if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TFP)) {
                 return 2;
             }
         }
     }

     /* CPTR_EL3 : present in v8 */
-    if (env->cp15.cptr_el[3] & CPTR_TFP) {
+    if (FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, TFP)) {
         /* Trap all FP ops to EL3 */
         return 3;
     }
--
2.25.1