ARM pullreq for rc1. All minor bugfixes, except for the sve-default-vector-length
patches, which are somewhere between a bugfix and a new feature.

thanks
-- PMM

The following changes since commit c077a998eb3fcae2d048e3baeb5bc592d30fddde:

  Merge remote-tracking branch 'remotes/rth-gitlab/tags/pull-tcg-20210726' into staging (2021-07-27 08:35:01 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210727

for you to fetch changes up to e229a179a503f2aee43a76888cf12fbdfe8a3749:

  hw: aspeed_gpio: Fix memory size (2021-07-27 11:00:00 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/smmuv3: Check 31st bit to see if CD is valid
 * qemu-options.hx: Fix formatting of -machine memory-backend option
 * hw: aspeed_gpio: Fix memory size
 * hw/arm/nseries: Display hexadecimal value with '0x' prefix
 * Add sve-default-vector-length cpu property
 * docs: Update path that mentions deprecated.rst
 * hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS
 * hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
 * hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
 * target/arm: Report M-profile alignment faults correctly to the guest
 * target/arm: Add missing 'return's after calling v7m_exception_taken()
 * target/arm: Enforce that M-profile SP low 2 bits are always zero

----------------------------------------------------------------
Joe Komlodi (1):
      hw/arm/smmuv3: Check 31st bit to see if CD is valid

Joel Stanley (1):
      hw: aspeed_gpio: Fix memory size

Mao Zhongyi (1):
      docs: Update path that mentions deprecated.rst

Peter Maydell (7):
      qemu-options.hx: Fix formatting of -machine memory-backend option
      target/arm: Enforce that M-profile SP low 2 bits are always zero
      target/arm: Add missing 'return's after calling v7m_exception_taken()
      target/arm: Report M-profile alignment faults correctly to the guest
      hw/intc/armv7m_nvic: ISCR.ISRPENDING is set for non-enabled pending interrupts
      hw/intc/armv7m_nvic: Correct size of ICSR.VECTPENDING
      hw/intc/armv7m_nvic: for v8.1M VECTPENDING hides S exceptions from NS

Philippe Mathieu-Daudé (1):
      hw/arm/nseries: Display hexadecimal value with '0x' prefix

Richard Henderson (3):
      target/arm: Correctly bound length in sve_zcr_get_valid_len
      target/arm: Export aarch64_sve_zcr_get_valid_len
      target/arm: Add sve-default-vector-length cpu property

 docs/system/arm/cpu-features.rst | 15 ++++++++++
 configure                        |  2 +-
 hw/arm/smmuv3-internal.h         |  2 +-
 target/arm/cpu.h                 |  5 ++++
 target/arm/internals.h           | 10 +++++++
 hw/arm/nseries.c                 |  2 +-
 hw/gpio/aspeed_gpio.c            |  3 +-
 hw/intc/armv7m_nvic.c            | 40 +++++++++++++++++++--------
 target/arm/cpu.c                 | 14 ++++++++--
 target/arm/cpu64.c               | 60 ++++++++++++++++++++++++++++++++++++++++
 target/arm/gdbstub.c             |  4 +++
 target/arm/helper.c              |  8 ++++--
 target/arm/m_helper.c            | 24 ++++++++++++----
 target/arm/translate.c           |  3 ++
 target/i386/cpu.c                |  2 +-
 MAINTAINERS                      |  2 +-
 qemu-options.hx                  | 30 +++++++++++---------
 17 files changed, 183 insertions(+), 43 deletions(-)

From: Joe Komlodi <joe.komlodi@xilinx.com>

The bit to see if a CD is valid is the last bit of the first word of the CD.

Signed-off-by: Joe Komlodi <joe.komlodi@xilinx.com>
Message-id: 1626728232-134665-2-git-send-email-joe.komlodi@xilinx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/smmuv3-internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/smmuv3-internal.h b/hw/arm/smmuv3-internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/smmuv3-internal.h
+++ b/hw/arm/smmuv3-internal.h
@@ -XXX,XX +XXX,XX @@ static inline int pa_range(STE *ste)
 
 /* CD fields */
 
-#define CD_VALID(x) extract32((x)->word[0], 30, 1)
+#define CD_VALID(x) extract32((x)->word[0], 31, 1)
 #define CD_ASID(x) extract32((x)->word[1], 16, 16)
 #define CD_TTB(x, sel) \
     ({ \
-- 
2.20.1
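
A quick standalone illustration of the bit numbering involved (not part of
the patch; the extract32() below is a simplified stand-in for QEMU's bitops
helper of the same name): extracting one bit at position 31 picks up the top
bit of the CD's first word, which is where the valid flag lives, whereas the
old code was looking one bit too low.

    #include <assert.h>
    #include <stdint.h>

    /* Simplified re-implementation of QEMU's extract32(), for illustration. */
    static uint32_t extract32(uint32_t value, int start, int length)
    {
        return (value >> start) & (~0U >> (32 - length));
    }

    int main(void)
    {
        uint32_t cd_word0 = 0x80000000;               /* only the V (valid) bit set */

        assert(extract32(cd_word0, 31, 1) == 1);      /* new code: reads bit 31 */
        assert(extract32(cd_word0, 30, 1) == 0);      /* old code looked at bit 30 */
        return 0;
    }
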

The documentation of the -machine memory-backend has some minor
formatting errors:
 * Misindentation of the initial line meant that the whole option
   section is incorrectly indented in the HTML output compared to
   the other -machine options
 * The examples weren't indented, which meant that they were formatted
   as plain run-on text including outputting the "::" as text.
 * The a) b) list has no rst-format markup so it is rendered as
   a single run-on paragraph

Fix the formatting.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20210719105257.3599-1-peter.maydell@linaro.org
---
 qemu-options.hx | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/qemu-options.hx b/qemu-options.hx
index XXXXXXX..XXXXXXX 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -XXX,XX +XXX,XX @@ SRST
         Enables or disables ACPI Heterogeneous Memory Attribute Table
         (HMAT) support. The default is off.
 
-      ``memory-backend='id'``
+    ``memory-backend='id'``
         An alternative to legacy ``-mem-path`` and ``mem-prealloc`` options.
         Allows to use a memory backend as main RAM.
 
         For example:
         ::
-        -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
-        -machine memory-backend=pc.ram
-        -m 512M
+
+            -object memory-backend-file,id=pc.ram,size=512M,mem-path=/hugetlbfs,prealloc=on,share=on
+            -machine memory-backend=pc.ram
+            -m 512M
 
         Migration compatibility note:
-        a) as backend id one shall use value of 'default-ram-id', advertised by
-        machine type (available via ``query-machines`` QMP command), if migration
-        to/from old QEMU (<5.0) is expected.
-        b) for machine types 4.0 and older, user shall
-        use ``x-use-canonical-path-for-ramblock-id=off`` backend option
-        if migration to/from old QEMU (<5.0) is expected.
+
+        * as backend id one shall use value of 'default-ram-id', advertised by
+          machine type (available via ``query-machines`` QMP command), if migration
+          to/from old QEMU (<5.0) is expected.
+        * for machine types 4.0 and older, user shall
+          use ``x-use-canonical-path-for-ramblock-id=off`` backend option
+          if migration to/from old QEMU (<5.0) is expected.
 
         For example:
         ::
-        -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
-        -machine memory-backend=pc.ram
-        -m 512M
+
+            -object memory-backend-ram,id=pc.ram,size=512M,x-use-canonical-path-for-ramblock-id=off
+            -machine memory-backend=pc.ram
+            -m 512M
 ERST
 
 HXCOMM Deprecated by -machine
-- 
2.20.1

For M-profile, unlike A-profile, the low 2 bits of SP are defined to be
RES0H, which is to say that they must be hardwired to zero so that
guest attempts to write non-zero values to them are ignored.

Implement this behaviour by masking out the low bits:
 * for writes to r13 by the gdbstub
 * for writes to any of the various flavours of SP via MSR
 * for writes to r13 via store_reg() in generated code

Note that all the direct uses of cpu_R[] in translate.c are in places
where the register is definitely not r13 (usually because that has
been checked for as an UNDEFINED or UNPREDICTABLE case and handled as
UNDEF).

All the other writes to regs[13] in C code are either:
 * A-profile only code
 * writes of values we can guarantee to be aligned, such as
   - writes of previous-SP-value plus or minus a 4-aligned constant
   - writes of the value in an SP limit register (which we already
     enforce to be aligned)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-2-peter.maydell@linaro.org
---
 target/arm/gdbstub.c   |  4 ++++
 target/arm/m_helper.c  | 14 ++++++------
 target/arm/translate.c |  3 +++
 3 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/target/arm/gdbstub.c b/target/arm/gdbstub.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/gdbstub.c
+++ b/target/arm/gdbstub.c
@@ -XXX,XX +XXX,XX @@ int arm_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n)
 
     if (n < 16) {
         /* Core integer register.  */
+        if (n == 13 && arm_feature(env, ARM_FEATURE_M)) {
+            /* M profile SP low bits are always 0 */
+            tmp &= ~3;
+        }
         env->regs[n] = tmp;
         return 4;
     }
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         if (!env->v7m.secure) {
             return;
         }
-        env->v7m.other_ss_msp = val;
+        env->v7m.other_ss_msp = val & ~3;
         return;
     case 0x89: /* PSP_NS */
         if (!env->v7m.secure) {
             return;
         }
-        env->v7m.other_ss_psp = val;
+        env->v7m.other_ss_psp = val & ~3;
         return;
     case 0x8a: /* MSPLIM_NS */
         if (!env->v7m.secure) {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
 
         limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
 
+        val &= ~0x3;
+
         if (val < limit) {
             raise_exception_ra(env, EXCP_STKOF, 0, 1, GETPC());
         }
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         break;
     case 8: /* MSP */
         if (v7m_using_psp(env)) {
-            env->v7m.other_sp = val;
+            env->v7m.other_sp = val & ~3;
         } else {
-            env->regs[13] = val;
+            env->regs[13] = val & ~3;
         }
         break;
     case 9: /* PSP */
         if (v7m_using_psp(env)) {
-            env->regs[13] = val;
+            env->regs[13] = val & ~3;
         } else {
-            env->v7m.other_sp = val;
+            env->v7m.other_sp = val & ~3;
         }
         break;
     case 10: /* MSPLIM */
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
          */
         tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
         s->base.is_jmp = DISAS_JUMP;
+    } else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
+        /* For M-profile SP bits [1:0] are always zero */
+        tcg_gen_andi_i32(var, var, ~3);
     }
     tcg_gen_mov_i32(cpu_R[reg], var);
     tcg_temp_free_i32(var);
-- 
2.20.1
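
For completeness, the masking rule being enforced is easy to demonstrate in
isolation; the snippet below is a host-side sketch of the architectural
behaviour (hypothetical helper name, not QEMU code): any value a guest writes
to an M-profile SP reads back with bits [1:0] forced to zero.

    #include <assert.h>
    #include <stdint.h>

    /* Model of the M-profile rule: SP bits [1:0] are hardwired to zero. */
    static uint32_t write_msp(uint32_t val)
    {
        return val & ~3u;
    }

    int main(void)
    {
        assert(write_msp(0x20001000) == 0x20001000);  /* aligned value unchanged */
        assert(write_msp(0x20001003) == 0x20001000);  /* low bits are ignored */
        return 0;
    }
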

In do_v7m_exception_exit(), we perform various checks as part of
performing the exception return.  If one of these checks fails, the
architecture requires that we take an appropriate exception on the
existing stackframe.  We implement this by calling
v7m_exception_taken() to set up to take the new exception, and then
immediately returning from do_v7m_exception_exit() without proceeding
any further with the unstack-and-exception-return process.

In a couple of checks that are new in v8.1M, we forgot the "return"
statement, with the effect that if bad code in the guest tripped over
these checks we would set up to take a UsageFault exception but then
blunder on trying to also unstack and return from the original
exception, with the probable result that the guest would crash.

Add the missing return statements.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-3-peter.maydell@linaro.org
---
 target/arm/m_helper.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                           "stackframe: NSACR prevents clearing FPU registers\n");
             v7m_exception_taken(cpu, excret, true, false);
+            return;
         } else if (!cpacr_pass) {
             armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
                                     exc_secure);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                           "stackframe: CPACR prevents clearing FPU registers\n");
             v7m_exception_taken(cpu, excret, true, false);
+            return;
         }
     }
     /* Clear s0..s15, FPSCR and VPR */
-- 
2.20.1
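
The shape of the bug is easier to see in a stripped-down sketch. The code
below is purely schematic (stub names, not QEMU's real helpers): once the
fault has been set up on the existing frame, the exception-return path has to
stop, which is exactly what the added 'return' statements guarantee.

    #include <stdbool.h>
    #include <stdio.h>

    /* Schematic stand-ins for the real checks and helpers. */
    static bool check_passed = false;

    static void take_exception(unsigned excret)
    {
        printf("take UsageFault on existing frame (excret=0x%x)\n", excret);
    }

    static void unstack_and_return(unsigned excret)
    {
        printf("unstack and return (excret=0x%x)\n", excret);
    }

    static void exception_exit(unsigned excret)
    {
        if (!check_passed) {
            take_exception(excret);
            return;   /* the missing 'return': without it we would also unstack */
        }
        unstack_and_return(excret);
    }

    int main(void)
    {
        exception_exit(0xfffffff9);
        return 0;
    }
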
All M profile CPUs are PMSA, so set the feature bit.
(We haven't actually implemented the M profile MPU register
interface yet, but setting this feature bit gives us closer
to correct behaviour for the MPU-disabled case.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-11-git-send-email-peter.maydell@linaro.org
---
target/arm/cpu.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);

+ /* M profile implies PMSA. We have to do this here rather than
+ * in realize with the other feature-implication checks because
+ * we look at the PMSA bit to see if we should add some properties.
+ */
+ if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
+ }
+
if (arm_feature(&cpu->env, ARM_FEATURE_CBAR) ||
arm_feature(&cpu->env, ARM_FEATURE_CBAR_RO)) {
qdev_property_add_static(DEVICE(obj), &arm_cpu_reset_cbar_property,
--
2.7.4

For M-profile, we weren't reporting alignment faults triggered by the
generic TCG code correctly to the guest. These get passed into
arm_v7m_cpu_do_interrupt() as an EXCP_DATA_ABORT with an A-profile
style exception.fsr value of 1. We didn't check for this, and so
they fell through into the default of "assume this is an MPU fault"
and were reported to the guest as a data access violation MPU fault.

Report these alignment faults as UsageFaults which set the UNALIGNED
bit in the UFSR.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-4-peter.maydell@linaro.org
---
target/arm/m_helper.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
env->v7m.sfsr |= R_V7M_SFSR_LSERR_MASK;
break;
case EXCP_UNALIGNED:
+ /* Unaligned faults reported by M-profile aware code */
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
break;
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
}
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
break;
+ case 0x1: /* Alignment fault reported by generic code */
+ qemu_log_mask(CPU_LOG_INT,
+ "...really UsageFault with UFSR.UNALIGNED\n");
+ env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNALIGNED_MASK;
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+ env->v7m.secure);
+ break;
default:
/*
* All other FSR values are either MPU faults or "can't happen
--
2.20.1
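For context, a rough guest-side sketch (not QEMU code; the 0xE000ED28 address and bit 24 position are the architectural CFSR/UFSR layout, and the handler name follows the usual CMSIS convention) of what this fix makes visible to M-profile guests: an alignment fault now arrives as a UsageFault with UFSR.UNALIGNED set.

#include <stdint.h>

#define SCB_CFSR        (*(volatile uint32_t *)0xE000ED28u)
#define CFSR_UNALIGNED  (1u << 24)   /* UFSR.UNALIGNED; UFSR occupies CFSR[31:16] */

void UsageFault_Handler(void)
{
    if (SCB_CFSR & CFSR_UNALIGNED) {
        SCB_CFSR = CFSR_UNALIGNED;   /* status bits are write-one-to-clear */
        /* a real handler would fix up or abort the faulting access here */
    }
    for (;;) {
        /* park */
    }
}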
1
From: Michael Davidsaver <mdavidsaver@gmail.com>
1
The ICSR.ISRPENDING bit is set when an external interrupt is pending.
2
This is true whether that external interrupt is enabled or not.
3
This means that we can't use 's->vectpending == 0' as a shortcut to
4
"ISRPENDING is zero", because s->vectpending indicates only the
5
highest priority pending enabled interrupt.
2
6
3
The M series MPU is almost the same as the already implemented R
7
Remove the incorrect optimization so that if there is no pending
4
profile MPU (v7 PMSA). So all we need to implement here is the MPU
8
enabled interrupt we fall through to scanning through the whole
5
register interface in the system register space.
9
interrupt array.
6
10
7
This implementation has the same restriction as the R profile MPU
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
that it doesn't permit regions to be sized down smaller than 1K.
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Message-id: 20210723162146.5167-5-peter.maydell@linaro.org
14
---
15
hw/intc/armv7m_nvic.c | 9 ++++-----
16
1 file changed, 4 insertions(+), 5 deletions(-)
9
17
10
We also do not yet implement support for MPU_CTRL.HFNMIENA; this
11
bit should if zero disable use of the MPU when running HardFault,
12
NMI or with FAULTMASK set to 1 (ie at an execution priority of
13
less than zero) -- if the MPU is enabled we don't treat these
14
cases any differently.
15
16
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
17
Message-id: 1493122030-32191-13-git-send-email-peter.maydell@linaro.org
18
[PMM: Keep all the bits in mpu_ctrl field, rather than
19
using SCTLR bits for them; drop broken HFNMIENA support;
20
various cleanup]
21
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
22
---
23
target/arm/cpu.h | 6 +++
24
hw/intc/armv7m_nvic.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++++++
25
target/arm/helper.c | 25 +++++++++++-
26
target/arm/machine.c | 5 ++-
27
4 files changed, 137 insertions(+), 3 deletions(-)
28
29
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/cpu.h
32
+++ b/target/arm/cpu.h
33
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
34
uint32_t dfsr; /* Debug Fault Status Register */
35
uint32_t mmfar; /* MemManage Fault Address */
36
uint32_t bfar; /* BusFault Address */
37
+ unsigned mpu_ctrl; /* MPU_CTRL (some bits kept in sctlr_el[1]) */
38
int exception;
39
} v7m;
40
41
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_DFSR, DWTTRAP, 2, 1)
42
FIELD(V7M_DFSR, VCATCH, 3, 1)
43
FIELD(V7M_DFSR, EXTERNAL, 4, 1)
44
45
+/* v7M MPU_CTRL bits */
46
+FIELD(V7M_MPU_CTRL, ENABLE, 0, 1)
47
+FIELD(V7M_MPU_CTRL, HFNMIENA, 1, 1)
48
+FIELD(V7M_MPU_CTRL, PRIVDEFENA, 2, 1)
49
+
50
/* If adding a feature bit which corresponds to a Linux ELF
51
* HWCAP bit, remember to update the feature-bit-to-hwcap
52
* mapping in linux-user/elfload.c:get_elf_hwcap().
53
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
18
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
54
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/intc/armv7m_nvic.c
20
--- a/hw/intc/armv7m_nvic.c
56
+++ b/hw/intc/armv7m_nvic.c
21
+++ b/hw/intc/armv7m_nvic.c
57
@@ -XXX,XX +XXX,XX @@
22
@@ -XXX,XX +XXX,XX @@ static bool nvic_isrpending(NVICState *s)
58
#include "hw/arm/arm.h"
59
#include "hw/arm/armv7m_nvic.h"
60
#include "target/arm/cpu.h"
61
+#include "exec/exec-all.h"
62
#include "qemu/log.h"
63
#include "trace.h"
64
65
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset)
66
case 0xd70: /* ISAR4. */
67
return 0x01310102;
68
/* TODO: Implement debug registers. */
69
+ case 0xd90: /* MPU_TYPE */
70
+ /* Unified MPU; if the MPU is not present this value is zero */
71
+ return cpu->pmsav7_dregion << 8;
72
+ break;
73
+ case 0xd94: /* MPU_CTRL */
74
+ return cpu->env.v7m.mpu_ctrl;
75
+ case 0xd98: /* MPU_RNR */
76
+ return cpu->env.cp15.c6_rgnr;
77
+ case 0xd9c: /* MPU_RBAR */
78
+ case 0xda4: /* MPU_RBAR_A1 */
79
+ case 0xdac: /* MPU_RBAR_A2 */
80
+ case 0xdb4: /* MPU_RBAR_A3 */
81
+ {
82
+ int region = cpu->env.cp15.c6_rgnr;
83
+
84
+ if (region >= cpu->pmsav7_dregion) {
85
+ return 0;
86
+ }
87
+ return (cpu->env.pmsav7.drbar[region] & 0x1f) | (region & 0xf);
88
+ }
89
+ case 0xda0: /* MPU_RASR */
90
+ case 0xda8: /* MPU_RASR_A1 */
91
+ case 0xdb0: /* MPU_RASR_A2 */
92
+ case 0xdb8: /* MPU_RASR_A3 */
93
+ {
94
+ int region = cpu->env.cp15.c6_rgnr;
95
+
96
+ if (region >= cpu->pmsav7_dregion) {
97
+ return 0;
98
+ }
99
+ return ((cpu->env.pmsav7.dracr[region] & 0xffff) << 16) |
100
+ (cpu->env.pmsav7.drsr[region] & 0xffff);
101
+ }
102
default:
103
qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset);
104
return 0;
105
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value)
106
qemu_log_mask(LOG_UNIMP,
107
"NVIC: Aux fault status registers unimplemented\n");
108
break;
109
+ case 0xd90: /* MPU_TYPE */
110
+ return; /* RO */
111
+ case 0xd94: /* MPU_CTRL */
112
+ if ((value &
113
+ (R_V7M_MPU_CTRL_HFNMIENA_MASK | R_V7M_MPU_CTRL_ENABLE_MASK))
114
+ == R_V7M_MPU_CTRL_HFNMIENA_MASK) {
115
+ qemu_log_mask(LOG_GUEST_ERROR, "MPU_CTRL: HFNMIENA and !ENABLE is "
116
+ "UNPREDICTABLE\n");
117
+ }
118
+ cpu->env.v7m.mpu_ctrl = value & (R_V7M_MPU_CTRL_ENABLE_MASK |
119
+ R_V7M_MPU_CTRL_HFNMIENA_MASK |
120
+ R_V7M_MPU_CTRL_PRIVDEFENA_MASK);
121
+ tlb_flush(CPU(cpu));
122
+ break;
123
+ case 0xd98: /* MPU_RNR */
124
+ if (value >= cpu->pmsav7_dregion) {
125
+ qemu_log_mask(LOG_GUEST_ERROR, "MPU region out of range %"
126
+ PRIu32 "/%" PRIu32 "\n",
127
+ value, cpu->pmsav7_dregion);
128
+ } else {
129
+ cpu->env.cp15.c6_rgnr = value;
130
+ }
131
+ break;
132
+ case 0xd9c: /* MPU_RBAR */
133
+ case 0xda4: /* MPU_RBAR_A1 */
134
+ case 0xdac: /* MPU_RBAR_A2 */
135
+ case 0xdb4: /* MPU_RBAR_A3 */
136
+ {
137
+ int region;
138
+
139
+ if (value & (1 << 4)) {
140
+ /* VALID bit means use the region number specified in this
141
+ * value and also update MPU_RNR.REGION with that value.
142
+ */
143
+ region = extract32(value, 0, 4);
144
+ if (region >= cpu->pmsav7_dregion) {
145
+ qemu_log_mask(LOG_GUEST_ERROR,
146
+ "MPU region out of range %u/%" PRIu32 "\n",
147
+ region, cpu->pmsav7_dregion);
148
+ return;
149
+ }
150
+ cpu->env.cp15.c6_rgnr = region;
151
+ } else {
152
+ region = cpu->env.cp15.c6_rgnr;
153
+ }
154
+
155
+ if (region >= cpu->pmsav7_dregion) {
156
+ return;
157
+ }
158
+
159
+ cpu->env.pmsav7.drbar[region] = value & ~0x1f;
160
+ tlb_flush(CPU(cpu));
161
+ break;
162
+ }
163
+ case 0xda0: /* MPU_RASR */
164
+ case 0xda8: /* MPU_RASR_A1 */
165
+ case 0xdb0: /* MPU_RASR_A2 */
166
+ case 0xdb8: /* MPU_RASR_A3 */
167
+ {
168
+ int region = cpu->env.cp15.c6_rgnr;
169
+
170
+ if (region >= cpu->pmsav7_dregion) {
171
+ return;
172
+ }
173
+
174
+ cpu->env.pmsav7.drsr[region] = value & 0xff3f;
175
+ cpu->env.pmsav7.dracr[region] = (value >> 16) & 0x173f;
176
+ tlb_flush(CPU(cpu));
177
+ break;
178
+ }
179
case 0xf00: /* Software Triggered Interrupt Register */
180
{
181
/* user mode can only write to STIR if CCR.USERSETMPEND permits it */
182
diff --git a/target/arm/helper.c b/target/arm/helper.c
183
index XXXXXXX..XXXXXXX 100644
184
--- a/target/arm/helper.c
185
+++ b/target/arm/helper.c
186
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
187
static inline bool regime_translation_disabled(CPUARMState *env,
188
ARMMMUIdx mmu_idx)
189
{
23
{
190
+ if (arm_feature(env, ARM_FEATURE_M)) {
24
int irq;
191
+ return !(env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_ENABLE_MASK);
25
192
+ }
26
- /* We can shortcut if the highest priority pending interrupt
193
+
27
- * happens to be external or if there is nothing pending.
194
if (mmu_idx == ARMMMUIdx_S2NS) {
28
+ /*
195
return (env->cp15.hcr_el2 & HCR_VM) == 0;
29
+ * We can shortcut if the highest priority pending interrupt
30
+ * happens to be external; if not we need to check the whole
31
+ * vectors[] array.
32
*/
33
if (s->vectpending > NVIC_FIRST_IRQ) {
34
return true;
196
}
35
}
197
@@ -XXX,XX +XXX,XX @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env,
36
- if (s->vectpending == 0) {
198
}
37
- return false;
199
}
38
- }
200
39
201
+static bool pmsav7_use_background_region(ARMCPU *cpu,
40
for (irq = NVIC_FIRST_IRQ; irq < s->num_irq; irq++) {
202
+ ARMMMUIdx mmu_idx, bool is_user)
41
if (s->vectors[irq].pending) {
203
+{
204
+ /* Return true if we should use the default memory map as a
205
+ * "background" region if there are no hits against any MPU regions.
206
+ */
207
+ CPUARMState *env = &cpu->env;
208
+
209
+ if (is_user) {
210
+ return false;
211
+ }
212
+
213
+ if (arm_feature(env, ARM_FEATURE_M)) {
214
+ return env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_PRIVDEFENA_MASK;
215
+ } else {
216
+ return regime_sctlr(env, mmu_idx) & SCTLR_BR;
217
+ }
218
+}
219
+
220
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
221
int access_type, ARMMMUIdx mmu_idx,
222
hwaddr *phys_ptr, int *prot, uint32_t *fsr)
223
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
224
}
225
226
if (n == -1) { /* no hits */
227
- if (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR)) {
228
+ if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
229
/* background fault */
230
*fsr = 0;
231
return true;
232
diff --git a/target/arm/machine.c b/target/arm/machine.c
233
index XXXXXXX..XXXXXXX 100644
234
--- a/target/arm/machine.c
235
+++ b/target/arm/machine.c
236
@@ -XXX,XX +XXX,XX @@ static bool m_needed(void *opaque)
237
238
static const VMStateDescription vmstate_m = {
239
.name = "cpu/m",
240
- .version_id = 3,
241
- .minimum_version_id = 3,
242
+ .version_id = 4,
243
+ .minimum_version_id = 4,
244
.needed = m_needed,
245
.fields = (VMStateField[]) {
246
VMSTATE_UINT32(env.v7m.vecbase, ARMCPU),
247
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
248
VMSTATE_UINT32(env.v7m.dfsr, ARMCPU),
249
VMSTATE_UINT32(env.v7m.mmfar, ARMCPU),
250
VMSTATE_UINT32(env.v7m.bfar, ARMCPU),
251
+ VMSTATE_UINT32(env.v7m.mpu_ctrl, ARMCPU),
252
VMSTATE_INT32(env.v7m.exception, ARMCPU),
253
VMSTATE_END_OF_LIST()
254
}
255
--
42
--
256
2.7.4
43
2.20.1
257
44
258
45
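To illustrate how a guest would drive the MPU register interface added above, here is a hedged bare-metal sketch (not part of the patch; the 0xE000ED9x addresses are the architectural v7-M MPU map, and the helper below is hypothetical):

#include <stdint.h>

#define MPU_TYPE (*(volatile uint32_t *)0xE000ED90u)
#define MPU_RNR  (*(volatile uint32_t *)0xE000ED98u)
#define MPU_RBAR (*(volatile uint32_t *)0xE000ED9Cu)
#define MPU_RASR (*(volatile uint32_t *)0xE000EDA0u)

/* Program one region: base must be aligned to the region size and
 * size_bits is log2(size), e.g. 17 for a 128KB region (minimum is 5, 32 bytes).
 */
static void mpu_set_region(unsigned region, uint32_t base,
                           unsigned size_bits, uint32_t access_attrs)
{
    if (region >= ((MPU_TYPE >> 8) & 0xff)) {
        return;                       /* DREGION says fewer regions than requested */
    }
    MPU_RNR  = region;                /* select the region */
    MPU_RBAR = base;                  /* VALID bit clear: use the MPU_RNR selection */
    MPU_RASR = access_attrs           /* AP/TEX/S/C/B/XN bits, caller supplied */
             | ((size_bits - 1) << 1) /* SIZE field, bits [5:1] */
             | 1u;                    /* ENABLE */
}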
We were setting the VBPR1 field of VMCR_EL2 to icv_min_vbpr()
on reset, but this is not correct. The field should reset to
the minimum value of ICV_BPR0_EL1 plus one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493226792-3237-2-git-send-email-peter.maydell@linaro.org
---
hw/intc/arm_gicv3_cpuif.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
cs->ich_hcr_el2 = 0;
memset(cs->ich_lr_el2, 0, sizeof(cs->ich_lr_el2));
cs->ich_vmcr_el2 = ICH_VMCR_EL2_VFIQEN |
- (icv_min_vbpr(cs) << ICH_VMCR_EL2_VBPR1_SHIFT) |
+ ((icv_min_vbpr(cs) + 1) << ICH_VMCR_EL2_VBPR1_SHIFT) |
(icv_min_vbpr(cs) << ICH_VMCR_EL2_VBPR0_SHIFT);
}
--
2.7.4

The VECTPENDING field in the ICSR is 9 bits wide, in bits [20:12] of
the register. We were incorrectly masking it to 8 bits, so it would
report the wrong value if the pending exception was greater than 256.
Fix the bug.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210723162146.5167-6-peter.maydell@linaro.org
---
hw/intc/armv7m_nvic.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
/* VECTACTIVE */
val = cpu->env.v7m.exception;
/* VECTPENDING */
- val |= (s->vectpending & 0xff) << 12;
+ val |= (s->vectpending & 0x1ff) << 12;
/* ISRPENDING - set if any external IRQ is pending */
if (nvic_isrpending(s)) {
val |= (1 << 22);
--
2.20.1
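Illustrative only (not from the patch): a guest reading ICSR at 0xE000ED04 would extract the now fully 9-bit field like this.

#include <stdint.h>

#define SCB_ICSR (*(volatile uint32_t *)0xE000ED04u)

static inline unsigned icsr_vectpending(void)
{
    return (SCB_ICSR >> 12) & 0x1ff;   /* bits [20:12], 9 bits wide */
}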
Deleted patch
icc_bpr_write() was not enforcing that writing a value below the
minimum for the BPR should behave as if the BPR was set to the
minimum value. This doesn't make a difference for the secure
BPRs (since we define the minimum for the QEMU implementation
as zero) but did mean we were allowing the NS BPR1 to be set to
0 when 1 should be the lowest value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493226792-3237-3-git-send-email-peter.maydell@linaro.org
---
hw/intc/arm_gicv3_cpuif.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri,
{
GICv3CPUState *cs = icc_cs_from_env(env);
int grp = (ri->crm == 8) ? GICV3_G0 : GICV3_G1;
+ uint64_t minval;

if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
icv_bpr_write(env, ri, value);
@@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri,
return;
}

+ minval = (grp == GICV3_G1NS) ? GIC_MIN_BPR_NS : GIC_MIN_BPR;
+ if (value < minval) {
+ value = minval;
+ }
+
cs->icc_bpr[grp] = value & 7;
gicv3_cpuif_update(cs);
}
--
2.7.4
Deleted patch
When identifying the DFSR format for an alignment fault, use
the mmu index that we are passed, rather than calling cpu_mmu_index()
to get the mmu index for the current CPU state. This doesn't actually
make any difference since the only cases where the current MMU index
differs from the index used for the load are the "unprivileged
load/store" instructions, and in that case the mmu index may
differ but the translation regime is the same (apart from the
"use from Hyp mode" case which is UNPREDICTABLE).
However it's the more logical thing to do.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-2-git-send-email-peter.maydell@linaro.org
---
target/arm/op_helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
/* the DFSR for an alignment fault depends on whether we're using
* the LPAE long descriptor format, or the short descriptor format
*/
- if (arm_s1_regime_using_lpae_format(env, cpu_mmu_index(env, false))) {
+ if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
env->exception.fsr = (1 << 9) | 0x21;
} else {
env->exception.fsr = 0x1;
--
2.7.4
Deleted patch
1
Make M profile use completely separate ARMMMUIdx values from
2
those that A profile CPUs use. This is a prelude to adding
3
support for the MPU and for v8M, which together will require
4
6 MMU indexes which don't map cleanly onto the A profile
5
uses:
6
non secure User
7
non secure Privileged
8
non secure Privileged, execution priority < 0
9
secure User
10
secure Privileged
11
secure Privileged, execution priority < 0
12
1
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 1493122030-32191-4-git-send-email-peter.maydell@linaro.org
15
---
16
target/arm/cpu.h | 21 +++++++++++++++++++--
17
target/arm/helper.c | 5 +++++
18
target/arm/translate.c | 3 +++
19
3 files changed, 27 insertions(+), 2 deletions(-)
20
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
25
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
26
* of the AT/ATS operations.
27
* The values used are carefully arranged to make mmu_idx => EL lookup easy.
28
*/
29
-#define ARM_MMU_IDX_A 0x10 /* A profile (and M profile, for the moment) */
30
+#define ARM_MMU_IDX_A 0x10 /* A profile */
31
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
32
+#define ARM_MMU_IDX_M 0x40 /* M profile */
33
34
#define ARM_MMU_IDX_TYPE_MASK (~0x7)
35
#define ARM_MMU_IDX_COREIDX_MASK 0x7
36
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
37
ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
38
ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
39
ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
40
+ ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
41
+ ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
42
/* Indexes below here don't have TLBs and are used only for AT system
43
* instructions or for the first stage of an S12 page table walk.
44
*/
45
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
46
ARMMMUIdxBit_S1SE0 = 1 << 4,
47
ARMMMUIdxBit_S1SE1 = 1 << 5,
48
ARMMMUIdxBit_S2NS = 1 << 6,
49
+ ARMMMUIdxBit_MUser = 1 << 0,
50
+ ARMMMUIdxBit_MPriv = 1 << 1,
51
} ARMMMUIdxBit;
52
53
#define MMU_USER_IDX 0
54
@@ -XXX,XX +XXX,XX @@ static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
55
56
static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
57
{
58
- return mmu_idx | ARM_MMU_IDX_A;
59
+ if (arm_feature(env, ARM_FEATURE_M)) {
60
+ return mmu_idx | ARM_MMU_IDX_M;
61
+ } else {
62
+ return mmu_idx | ARM_MMU_IDX_A;
63
+ }
64
}
65
66
/* Return the exception level we're running at if this is our mmu_idx */
67
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
68
switch (mmu_idx & ARM_MMU_IDX_TYPE_MASK) {
69
case ARM_MMU_IDX_A:
70
return mmu_idx & 3;
71
+ case ARM_MMU_IDX_M:
72
+ return mmu_idx & 1;
73
default:
74
g_assert_not_reached();
75
}
76
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
77
{
78
int el = arm_current_el(env);
79
80
+ if (arm_feature(env, ARM_FEATURE_M)) {
81
+ ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv;
82
+
83
+ return arm_to_core_mmu_idx(mmu_idx);
84
+ }
85
+
86
if (el < 2 && arm_is_secure_below_el3(env)) {
87
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0 + el);
88
}
89
diff --git a/target/arm/helper.c b/target/arm/helper.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/helper.c
92
+++ b/target/arm/helper.c
93
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
94
case ARMMMUIdx_S1SE1:
95
case ARMMMUIdx_S1NSE0:
96
case ARMMMUIdx_S1NSE1:
97
+ case ARMMMUIdx_MPriv:
98
+ case ARMMMUIdx_MUser:
99
return 1;
100
default:
101
g_assert_not_reached();
102
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
103
case ARMMMUIdx_S1NSE1:
104
case ARMMMUIdx_S1E2:
105
case ARMMMUIdx_S2NS:
106
+ case ARMMMUIdx_MPriv:
107
+ case ARMMMUIdx_MUser:
108
return false;
109
case ARMMMUIdx_S1E3:
110
case ARMMMUIdx_S1SE0:
111
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
112
switch (mmu_idx) {
113
case ARMMMUIdx_S1SE0:
114
case ARMMMUIdx_S1NSE0:
115
+ case ARMMMUIdx_MUser:
116
return true;
117
default:
118
return false;
119
diff --git a/target/arm/translate.c b/target/arm/translate.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/translate.c
122
+++ b/target/arm/translate.c
123
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
124
case ARMMMUIdx_S1SE0:
125
case ARMMMUIdx_S1SE1:
126
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
127
+ case ARMMMUIdx_MUser:
128
+ case ARMMMUIdx_MPriv:
129
+ return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
130
case ARMMMUIdx_S2NS:
131
default:
132
g_assert_not_reached();
133
--
134
2.7.4
135
136
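Purely as an illustration of the six regimes listed above (this is not the QEMU implementation), the (secure, privileged, negative-priority) combinations can be folded into a small index like so:

/* illustrative only: user=0, priv=1, priv-at-negative-priority=2,
 * with +3 selecting the Secure bank
 */
static inline int m_profile_regime_index(int secure, int priv, int negpri)
{
    int idx = priv ? (negpri ? 2 : 1) : 0;

    return idx + (secure ? 3 : 0);
}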
1
From: Cédric Le Goater <clg@kaod.org>
1
In Arm v8.1M the VECTPENDING field in the ICSR has new behaviour: if
2
the register is accessed NonSecure and the highest priority pending
3
enabled exception (that would be returned in the VECTPENDING field)
4
targets Secure, then the VECTPENDING field must read 1 rather than
5
the exception number of the pending exception. Implement this.
2
6
3
The Aspeed I2C controller maintains a state machine in the command
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
register, which is mostly used for debug.
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210723162146.5167-7-peter.maydell@linaro.org
10
---
11
hw/intc/armv7m_nvic.c | 31 ++++++++++++++++++++++++-------
12
1 file changed, 24 insertions(+), 7 deletions(-)
5
13
6
Let's start adding a few states to handle abnormal STOP
14
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
7
commands. Today, the model uses the busy status of the bus as a
8
condition to do so but it is not precise enough.
9
10
Also remove the ABNORMAL bit for failing TX commands. This is
11
incorrect with respect to the specs.
12
13
Signed-off-by: Cédric Le Goater <clg@kaod.org>
14
Message-id: 1494827476-1487-4-git-send-email-clg@kaod.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
hw/i2c/aspeed_i2c.c | 36 +++++++++++++++++++++++++++++++++---
18
1 file changed, 33 insertions(+), 3 deletions(-)
19
20
diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c
21
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/i2c/aspeed_i2c.c
16
--- a/hw/intc/armv7m_nvic.c
23
+++ b/hw/i2c/aspeed_i2c.c
17
+++ b/hw/intc/armv7m_nvic.c
24
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_i2c_bus_read(void *opaque, hwaddr offset,
18
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_acknowledge_irq(void *opaque)
25
}
19
nvic_irq_update(s);
26
}
20
}
27
21
28
+static void aspeed_i2c_set_state(AspeedI2CBus *bus, uint8_t state)
22
+static bool vectpending_targets_secure(NVICState *s)
29
+{
23
+{
30
+ bus->cmd &= ~(I2CD_TX_STATE_MASK << I2CD_TX_STATE_SHIFT);
24
+ /* Return true if s->vectpending targets Secure state */
31
+ bus->cmd |= (state & I2CD_TX_STATE_MASK) << I2CD_TX_STATE_SHIFT;
25
+ if (s->vectpending_is_s_banked) {
26
+ return true;
27
+ }
28
+ return !exc_is_banked(s->vectpending) &&
29
+ exc_targets_secure(s, s->vectpending);
32
+}
30
+}
33
+
31
+
34
+static uint8_t aspeed_i2c_get_state(AspeedI2CBus *bus)
32
void armv7m_nvic_get_pending_irq_info(void *opaque,
35
+{
33
int *pirq, bool *ptargets_secure)
36
+ return (bus->cmd >> I2CD_TX_STATE_SHIFT) & I2CD_TX_STATE_MASK;
37
+}
38
+
39
+/*
40
+ * The state machine needs some refinement. It is only used to track
41
+ * invalid STOP commands for the moment.
42
+ */
43
static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
44
{
34
{
45
bus->cmd &= ~0xFFFF;
35
@@ -XXX,XX +XXX,XX @@ void armv7m_nvic_get_pending_irq_info(void *opaque,
46
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
36
47
bus->intr_status = 0;
37
assert(pending > ARMV7M_EXCP_RESET && pending < s->num_irq);
48
38
49
if (bus->cmd & I2CD_M_START_CMD) {
39
- if (s->vectpending_is_s_banked) {
50
+ uint8_t state = aspeed_i2c_get_state(bus) & I2CD_MACTIVE ?
40
- targets_secure = true;
51
+ I2CD_MSTARTR : I2CD_MSTART;
41
- } else {
52
+
42
- targets_secure = !exc_is_banked(pending) &&
53
+ aspeed_i2c_set_state(bus, state);
43
- exc_targets_secure(s, pending);
54
+
44
- }
55
if (i2c_start_transfer(bus->bus, extract32(bus->buf, 1, 7),
45
+ targets_secure = vectpending_targets_secure(s);
56
extract32(bus->buf, 0, 1))) {
46
57
bus->intr_status |= I2CD_INTR_TX_NAK;
47
trace_nvic_get_pending_irq_info(pending, targets_secure);
58
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
48
59
if (!i2c_bus_busy(bus->bus)) {
49
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
60
return;
50
/* VECTACTIVE */
61
}
51
val = cpu->env.v7m.exception;
62
+ aspeed_i2c_set_state(bus, I2CD_MACTIVE);
52
/* VECTPENDING */
63
}
53
- val |= (s->vectpending & 0x1ff) << 12;
64
54
+ if (s->vectpending) {
65
if (bus->cmd & I2CD_M_TX_CMD) {
55
+ /*
66
+ aspeed_i2c_set_state(bus, I2CD_MTXD);
56
+ * From v8.1M VECTPENDING must read as 1 if accessed as
67
if (i2c_send(bus->bus, bus->buf)) {
57
+ * NonSecure and the highest priority pending and enabled
68
- bus->intr_status |= (I2CD_INTR_TX_NAK | I2CD_INTR_ABNORMAL);
58
+ * exception targets Secure.
69
+ bus->intr_status |= (I2CD_INTR_TX_NAK);
59
+ */
70
i2c_end_transfer(bus->bus);
60
+ int vp = s->vectpending;
71
} else {
61
+ if (!attrs.secure && arm_feature(&cpu->env, ARM_FEATURE_V8_1M) &&
72
bus->intr_status |= I2CD_INTR_TX_ACK;
62
+ vectpending_targets_secure(s)) {
73
}
63
+ vp = 1;
74
bus->cmd &= ~I2CD_M_TX_CMD;
64
+ }
75
+ aspeed_i2c_set_state(bus, I2CD_MACTIVE);
65
+ val |= (vp & 0x1ff) << 12;
76
}
66
+ }
77
67
/* ISRPENDING - set if any external IRQ is pending */
78
if (bus->cmd & (I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST)) {
68
if (nvic_isrpending(s)) {
79
- int ret = i2c_recv(bus->bus);
69
val |= (1 << 22);
80
+ int ret;
81
+
82
+ aspeed_i2c_set_state(bus, I2CD_MRXD);
83
+ ret = i2c_recv(bus->bus);
84
if (ret < 0) {
85
qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__);
86
ret = 0xff;
87
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
88
i2c_nack(bus->bus);
89
}
90
bus->cmd &= ~(I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST);
91
+ aspeed_i2c_set_state(bus, I2CD_MACTIVE);
92
}
93
94
if (bus->cmd & I2CD_M_STOP_CMD) {
95
- if (!i2c_bus_busy(bus->bus)) {
96
+ if (!(aspeed_i2c_get_state(bus) & I2CD_MACTIVE)) {
97
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: abnormal stop\n", __func__);
98
bus->intr_status |= I2CD_INTR_ABNORMAL;
99
} else {
100
+ aspeed_i2c_set_state(bus, I2CD_MSTOP);
101
i2c_end_transfer(bus->bus);
102
bus->intr_status |= I2CD_INTR_NORMAL_STOP;
103
}
104
bus->cmd &= ~I2CD_M_STOP_CMD;
105
+ aspeed_i2c_set_state(bus, I2CD_IDLE);
106
}
107
}
108
109
--
70
--
110
2.7.4
71
2.20.1
111
72
112
73
1
From: Andrew Jones <drjones@redhat.com>
1
From: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
2
2
3
This is based on patch Shannon Zhao originally posted.
3
Missed in commit f3478392 "docs: Move deprecation, build
4
and license info out of system/"
4
5
5
Cc: Shannon Zhao <zhaoshenglong@huawei.com>
6
Signed-off-by: Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
6
Signed-off-by: Andrew Jones <drjones@redhat.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org>
8
Message-id: 20210723065828.1336760-1-maozhongyi@cmss.chinamobile.com
8
Message-id: 20170529173751.3443-3-drjones@redhat.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
hw/arm/virt.c | 21 +++++++++++++++++++++
11
configure | 2 +-
12
1 file changed, 21 insertions(+)
12
target/i386/cpu.c | 2 +-
13
MAINTAINERS | 2 +-
14
3 files changed, 3 insertions(+), 3 deletions(-)
13
15
14
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
16
diff --git a/configure b/configure
17
index XXXXXXX..XXXXXXX 100755
18
--- a/configure
19
+++ b/configure
20
@@ -XXX,XX +XXX,XX @@ fi
21
22
if test -n "${deprecated_features}"; then
23
echo "Warning, deprecated features enabled."
24
- echo "Please see docs/system/deprecated.rst"
25
+ echo "Please see docs/about/deprecated.rst"
26
echo " features: ${deprecated_features}"
27
fi
28
29
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
15
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/virt.c
31
--- a/target/i386/cpu.c
17
+++ b/hw/arm/virt.c
32
+++ b/target/i386/cpu.c
18
@@ -XXX,XX +XXX,XX @@ static void create_fdt(VirtMachineState *vms)
33
@@ -XXX,XX +XXX,XX @@ static const X86CPUDefinition builtin_x86_defs[] = {
19
"clk24mhz");
34
* none", but this is just for compatibility while libvirt isn't
20
qemu_fdt_setprop_cell(fdt, "/apb-pclk", "phandle", vms->clock_phandle);
35
* adapted to resolve CPU model versions before creating VMs.
21
36
* See "Runnability guarantee of CPU models" at
22
+ if (have_numa_distance) {
37
- * docs/system/deprecated.rst.
23
+ int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
38
+ * docs/about/deprecated.rst.
24
+ uint32_t *matrix = g_malloc0(size);
39
*/
25
+ int idx, i, j;
40
X86CPUVersion default_cpu_version = 1;
26
+
41
27
+ for (i = 0; i < nb_numa_nodes; i++) {
42
diff --git a/MAINTAINERS b/MAINTAINERS
28
+ for (j = 0; j < nb_numa_nodes; j++) {
43
index XXXXXXX..XXXXXXX 100644
29
+ idx = (i * nb_numa_nodes + j) * 3;
44
--- a/MAINTAINERS
30
+ matrix[idx + 0] = cpu_to_be32(i);
45
+++ b/MAINTAINERS
31
+ matrix[idx + 1] = cpu_to_be32(j);
46
@@ -XXX,XX +XXX,XX @@ F: contrib/gitdm/*
32
+ matrix[idx + 2] = cpu_to_be32(numa_info[i].distance[j]);
47
33
+ }
48
Incompatible changes
34
+ }
49
R: libvir-list@redhat.com
35
+
50
-F: docs/system/deprecated.rst
36
+ qemu_fdt_add_subnode(fdt, "/distance-map");
51
+F: docs/about/deprecated.rst
37
+ qemu_fdt_setprop_string(fdt, "/distance-map", "compatible",
52
38
+ "numa-distance-map-v1");
53
Build System
39
+ qemu_fdt_setprop(fdt, "/distance-map", "distance-matrix",
54
------------
40
+ matrix, size);
41
+ g_free(matrix);
42
+ }
43
}
44
45
static void fdt_add_psci_node(const VirtMachineState *vms)
46
--
55
--
47
2.7.4
56
2.20.1
48
57
49
58
1
From: Michael Davidsaver <mdavidsaver@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add support for the M profile default memory map which is used
3
Currently, our only caller is sve_zcr_len_for_el, which has
4
if the MPU is not present or disabled.
4
already masked the length extracted from ZCR_ELx, so the
5
masking done here is a nop. But we will shortly have uses
6
from other locations, where the length will be unmasked.
5
7
6
The main differences in behaviour from implementing this
8
Saturate the length to ARM_MAX_VQ instead of truncating to
7
correctly are that we set the PAGE_EXEC attribute on
9
the low 4 bits.
8
the right regions of memory, such that device regions
9
are not executable.
10
10
11
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 1493122030-32191-10-git-send-email-peter.maydell@linaro.org
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
[PMM: rephrased comment and commit message; don't mark
13
Message-id: 20210723203344.968563-2-richard.henderson@linaro.org
14
the flash memory region as not-writable; list all
15
the cases in the default map explicitly rather than
16
using a 'default' case for the non-executable regions]
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
15
---
19
target/arm/helper.c | 41 ++++++++++++++++++++++++++++++++---------
16
target/arm/helper.c | 4 +++-
20
1 file changed, 32 insertions(+), 9 deletions(-)
17
1 file changed, 3 insertions(+), 1 deletion(-)
21
18
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
21
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
22
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env,
23
@@ -XXX,XX +XXX,XX @@ static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
27
ARMMMUIdx mmu_idx,
28
int32_t address, int *prot)
29
{
24
{
30
- *prot = PAGE_READ | PAGE_WRITE;
25
uint32_t end_len;
31
- switch (address) {
26
32
- case 0xF0000000 ... 0xFFFFFFFF:
27
- end_len = start_len &= 0xf;
33
- if (regime_sctlr(env, mmu_idx) & SCTLR_V) { /* hivecs execing is ok */
28
+ start_len = MIN(start_len, ARM_MAX_VQ - 1);
34
+ if (!arm_feature(env, ARM_FEATURE_M)) {
29
+ end_len = start_len;
35
+ *prot = PAGE_READ | PAGE_WRITE;
30
+
36
+ switch (address) {
31
if (!test_bit(start_len, cpu->sve_vq_map)) {
37
+ case 0xF0000000 ... 0xFFFFFFFF:
32
end_len = find_last_bit(cpu->sve_vq_map, start_len);
38
+ if (regime_sctlr(env, mmu_idx) & SCTLR_V) {
33
assert(end_len < start_len);
39
+ /* hivecs execing is ok */
40
+ *prot |= PAGE_EXEC;
41
+ }
42
+ break;
43
+ case 0x00000000 ... 0x7FFFFFFF:
44
*prot |= PAGE_EXEC;
45
+ break;
46
+ }
47
+ } else {
48
+ /* Default system address map for M profile cores.
49
+ * The architecture specifies which regions are execute-never;
50
+ * at the MPU level no other checks are defined.
51
+ */
52
+ switch (address) {
53
+ case 0x00000000 ... 0x1fffffff: /* ROM */
54
+ case 0x20000000 ... 0x3fffffff: /* SRAM */
55
+ case 0x60000000 ... 0x7fffffff: /* RAM */
56
+ case 0x80000000 ... 0x9fffffff: /* RAM */
57
+ *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
58
+ break;
59
+ case 0x40000000 ... 0x5fffffff: /* Peripheral */
60
+ case 0xa0000000 ... 0xbfffffff: /* Device */
61
+ case 0xc0000000 ... 0xdfffffff: /* Device */
62
+ case 0xe0000000 ... 0xffffffff: /* System */
63
+ *prot = PAGE_READ | PAGE_WRITE;
64
+ break;
65
+ default:
66
+ g_assert_not_reached();
67
}
68
- break;
69
- case 0x00000000 ... 0x7FFFFFFF:
70
- *prot |= PAGE_EXEC;
71
- break;
72
}
73
-
74
}
75
76
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
77
--
34
--
78
2.7.4
35
2.20.1
79
36
80
37
1
From: Michael Davidsaver <mdavidsaver@gmail.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Improve the "-d mmu" tracing for the PMSAv7 MPU translation
3
Rename from sve_zcr_get_valid_len and make accessible
4
process as an aid in debugging guest MPU configurations:
4
from outside of helper.c.
5
* fix a missing newline for a guest-error log
6
* report the region number with guest-error or unimp
7
logs of bad region register values
8
* add a log message for the overall result of the lookup
9
* print "0x" prefix for hex values
10
5
11
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 20210723203344.968563-3-richard.henderson@linaro.org
14
Message-id: 1493122030-32191-9-git-send-email-peter.maydell@linaro.org
15
[PMM: a little tidyup, report region number in all messages
16
rather than just one]
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
10
---
19
target/arm/helper.c | 39 +++++++++++++++++++++++++++------------
11
target/arm/internals.h | 10 ++++++++++
20
1 file changed, 27 insertions(+), 12 deletions(-)
12
target/arm/helper.c | 4 ++--
13
2 files changed, 12 insertions(+), 2 deletions(-)
21
14
15
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/internals.h
18
+++ b/target/arm/internals.h
19
@@ -XXX,XX +XXX,XX @@ void arm_translate_init(void);
20
void arm_cpu_synchronize_from_tb(CPUState *cs, const TranslationBlock *tb);
21
#endif /* CONFIG_TCG */
22
23
+/**
24
+ * aarch64_sve_zcr_get_valid_len:
25
+ * @cpu: cpu context
26
+ * @start_len: maximum len to consider
27
+ *
28
+ * Return the maximum supported sve vector length <= @start_len.
29
+ * Note that both @start_len and the return value are in units
30
+ * of ZCR_ELx.LEN, so the vector bit length is (x + 1) * 128.
31
+ */
32
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len);
33
34
enum arm_fprounding {
35
FPROUNDING_TIEEVEN,
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
36
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
38
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
39
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
40
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
27
}
41
return 0;
28
42
}
29
if (!rsize) {
43
30
- qemu_log_mask(LOG_GUEST_ERROR, "DRSR.Rsize field can not be 0");
44
-static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
31
+ qemu_log_mask(LOG_GUEST_ERROR,
45
+uint32_t aarch64_sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
32
+ "DRSR[%d]: Rsize field cannot be 0\n", n);
46
{
33
continue;
47
uint32_t end_len;
34
}
48
35
rsize++;
49
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
36
rmask = (1ull << rsize) - 1;
50
zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
37
38
if (base & rmask) {
39
- qemu_log_mask(LOG_GUEST_ERROR, "DRBAR %" PRIx32 " misaligned "
40
- "to DRSR region size, mask = %" PRIx32,
41
- base, rmask);
42
+ qemu_log_mask(LOG_GUEST_ERROR,
43
+ "DRBAR[%d]: 0x%" PRIx32 " misaligned "
44
+ "to DRSR region size, mask = 0x%" PRIx32 "\n",
45
+ n, base, rmask);
46
continue;
47
}
48
49
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
50
}
51
}
52
if (rsize < TARGET_PAGE_BITS) {
53
- qemu_log_mask(LOG_UNIMP, "No support for MPU (sub)region"
54
+ qemu_log_mask(LOG_UNIMP,
55
+ "DRSR[%d]: No support for MPU (sub)region "
56
"alignment of %" PRIu32 " bits. Minimum is %d\n",
57
- rsize, TARGET_PAGE_BITS);
58
+ n, rsize, TARGET_PAGE_BITS);
59
continue;
60
}
61
if (srdis) {
62
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
63
break;
64
default:
65
qemu_log_mask(LOG_GUEST_ERROR,
66
- "Bad value for AP bits in DRACR %"
67
- PRIx32 "\n", ap);
68
+ "DRACR[%d]: Bad value for AP bits: 0x%"
69
+ PRIx32 "\n", n, ap);
70
}
71
} else { /* Priv. mode AP bits decoding */
72
switch (ap) {
73
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
74
break;
75
default:
76
qemu_log_mask(LOG_GUEST_ERROR,
77
- "Bad value for AP bits in DRACR %"
78
- PRIx32 "\n", ap);
79
+ "DRACR[%d]: Bad value for AP bits: 0x%"
80
+ PRIx32 "\n", n, ap);
81
}
82
}
83
84
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
85
*/
86
if (arm_feature(env, ARM_FEATURE_PMSA) &&
87
arm_feature(env, ARM_FEATURE_V7)) {
88
+ bool ret;
89
*page_size = TARGET_PAGE_SIZE;
90
- return get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
91
- phys_ptr, prot, fsr);
92
+ ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
93
+ phys_ptr, prot, fsr);
94
+ qemu_log_mask(CPU_LOG_MMU, "PMSAv7 MPU lookup for %s at 0x%08" PRIx32
95
+ " mmu_idx %u -> %s (prot %c%c%c)\n",
96
+ access_type == 1 ? "reading" :
97
+ (access_type == 2 ? "writing" : "execute"),
98
+ (uint32_t)address, mmu_idx,
99
+ ret ? "Miss" : "Hit",
100
+ *prot & PAGE_READ ? 'r' : '-',
101
+ *prot & PAGE_WRITE ? 'w' : '-',
102
+ *prot & PAGE_EXEC ? 'x' : '-');
103
+
104
+ return ret;
105
}
51
}
106
52
107
if (regime_translation_disabled(env, mmu_idx)) {
53
- return sve_zcr_get_valid_len(cpu, zcr_len);
54
+ return aarch64_sve_zcr_get_valid_len(cpu, zcr_len);
55
}
56
57
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
108
--
58
--
109
2.7.4
59
2.20.1
110
60
111
61
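A small worked sketch (not QEMU code) of the ZCR_ELx.LEN units mentioned in the new doc comment: vector bits are (LEN + 1) * 128, and saturating at QEMU's ARM_MAX_VQ of 16 caps the result at 2048 bits instead of letting a 4-bit truncation wrap around.

#include <stdint.h>

#define ARM_MAX_VQ 16u   /* QEMU's maximum: 16 quadwords = 2048 bits */

static inline uint32_t zcr_len_to_vector_bits(uint32_t len)
{
    if (len > ARM_MAX_VQ - 1) {
        len = ARM_MAX_VQ - 1;        /* saturate instead of masking to 4 bits */
    }
    return (len + 1) * 128;
}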
1
ARM CPUs come in two flavours:
1
From: Richard Henderson <richard.henderson@linaro.org>
2
* proper MMU ("VMSA")
3
* only an MPU ("PMSA")
4
For PMSA, the MPU may be implemented, or not (in which case there
5
is default "always acts the same" behaviour, but it isn't guest
6
programmable).
7
2
8
QEMU is a bit confused about how we indicate this: we have an
3
Mirror the behaviour of /proc/sys/abi/sve_default_vector_length
9
ARM_FEATURE_MPU, but it's not clear whether this indicates
4
under the real linux kernel. We have no way of passing along
10
"PMSA, not VMSA" or "PMSA and MPU present" , and sometimes we
5
a real default across exec like the kernel can, but this is a
11
use it for one purpose and sometimes the other.
6
decent way of adjusting the startup vector length of a process.
12
7
13
Currently trying to implement a PMSA-without-MPU core won't
8
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/482
14
work correctly because we turn off the ARM_FEATURE_MPU bit
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
and then a lot of things which should still exist get
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
turned off too.
11
Message-id: 20210723203344.968563-4-richard.henderson@linaro.org
12
[PMM: tweaked docs formatting, document -1 special-case,
13
added fixup patch from RTH mentioning QEMU's maximum veclen.]
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
docs/system/arm/cpu-features.rst | 15 ++++++++
17
target/arm/cpu.h | 5 +++
18
target/arm/cpu.c | 14 ++++++--
19
target/arm/cpu64.c | 60 ++++++++++++++++++++++++++++++++
20
4 files changed, 92 insertions(+), 2 deletions(-)
17
21
18
As the first step in cleaning this up, rename the feature
22
diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
19
bit to ARM_FEATURE_PMSA, which indicates a PMSA CPU (with
23
index XXXXXXX..XXXXXXX 100644
20
or without MPU).
24
--- a/docs/system/arm/cpu-features.rst
21
25
+++ b/docs/system/arm/cpu-features.rst
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
26
@@ -XXX,XX +XXX,XX @@ verbose command lines. However, the recommended way to select vector
23
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
27
lengths is to explicitly enable each desired length. Therefore only
24
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
28
example's (1), (4), and (6) exhibit recommended uses of the properties.
25
Message-id: 1493122030-32191-5-git-send-email-peter.maydell@linaro.org
29
26
---
30
+SVE User-mode Default Vector Length Property
27
target/arm/cpu.h | 2 +-
31
+--------------------------------------------
28
target/arm/cpu.c | 12 ++++++------
32
+
29
target/arm/helper.c | 12 ++++++------
33
+For qemu-aarch64, the cpu property ``sve-default-vector-length=N`` is
30
target/arm/machine.c | 2 +-
34
+defined to mirror the Linux kernel parameter file
31
4 files changed, 14 insertions(+), 14 deletions(-)
35
+``/proc/sys/abi/sve_default_vector_length``. The default length, ``N``,
32
36
+is in units of bytes and must be between 16 and 8192.
37
+If not specified, the default vector length is 64.
38
+
39
+If the default length is larger than the maximum vector length enabled,
40
+the actual vector length will be reduced. Note that the maximum vector
41
+length supported by QEMU is 256.
42
+
43
+If this property is set to ``-1`` then the default vector length
44
+is set to the maximum possible length.
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
45
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
46
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
47
--- a/target/arm/cpu.h
36
+++ b/target/arm/cpu.h
48
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ enum arm_features {
49
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
38
ARM_FEATURE_V6K,
50
/* Used to set the maximum vector length the cpu will support. */
39
ARM_FEATURE_V7,
51
uint32_t sve_max_vq;
40
ARM_FEATURE_THUMB2,
52
41
- ARM_FEATURE_MPU, /* Only has Memory Protection Unit, not full MMU. */
53
+#ifdef CONFIG_USER_ONLY
42
+ ARM_FEATURE_PMSA, /* no MMU; may have Memory Protection Unit */
54
+ /* Used to set the default vector length at process start. */
43
ARM_FEATURE_VFP3,
55
+ uint32_t sve_default_vq;
44
ARM_FEATURE_VFP_FP16,
56
+#endif
45
ARM_FEATURE_NEON,
57
+
58
/*
59
* In sve_vq_map each set bit is a supported vector length of
60
* (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
46
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
61
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
47
index XXXXXXX..XXXXXXX 100644
62
index XXXXXXX..XXXXXXX 100644
48
--- a/target/arm/cpu.c
63
--- a/target/arm/cpu.c
49
+++ b/target/arm/cpu.c
64
+++ b/target/arm/cpu.c
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
65
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
51
&error_abort);
66
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
67
/* with reasonable vector length */
68
if (cpu_isar_feature(aa64_sve, cpu)) {
69
- env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);
70
+ env->vfp.zcr_el[1] =
71
+ aarch64_sve_zcr_get_valid_len(cpu, cpu->sve_default_vq - 1);
72
}
73
/*
74
* Enable TBI0 but not TBI1.
75
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
76
QLIST_INIT(&cpu->pre_el_change_hooks);
77
QLIST_INIT(&cpu->el_change_hooks);
78
79
-#ifndef CONFIG_USER_ONLY
80
+#ifdef CONFIG_USER_ONLY
81
+# ifdef TARGET_AARCH64
82
+ /*
83
+ * The linux kernel defaults to 512-bit vectors, when sve is supported.
84
+ * See documentation for /proc/sys/abi/sve_default_vector_length, and
85
+ * our corresponding sve-default-vector-length cpu property.
86
+ */
87
+ cpu->sve_default_vq = 4;
88
+# endif
89
+#else
90
/* Our inbound IRQ and FIQ lines */
91
if (kvm_enabled()) {
92
/* VIRQ and VFIQ are unused with KVM but we add them to maintain
93
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
94
index XXXXXXX..XXXXXXX 100644
95
--- a/target/arm/cpu64.c
96
+++ b/target/arm/cpu64.c
97
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
98
cpu->isar.id_aa64pfr0 = t;
99
}
100
101
+#ifdef CONFIG_USER_ONLY
102
+/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
103
+static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
104
+ const char *name, void *opaque,
105
+ Error **errp)
106
+{
107
+ ARMCPU *cpu = ARM_CPU(obj);
108
+ int32_t default_len, default_vq, remainder;
109
+
110
+ if (!visit_type_int32(v, name, &default_len, errp)) {
111
+ return;
112
+ }
113
+
114
+ /* Undocumented, but the kernel allows -1 to indicate "maximum". */
115
+ if (default_len == -1) {
116
+ cpu->sve_default_vq = ARM_MAX_VQ;
117
+ return;
118
+ }
119
+
120
+ default_vq = default_len / 16;
121
+ remainder = default_len % 16;
122
+
123
+ /*
124
+ * Note that the 512 max comes from include/uapi/asm/sve_context.h
125
+ * and is the maximum architectural width of ZCR_ELx.LEN.
126
+ */
127
+ if (remainder || default_vq < 1 || default_vq > 512) {
128
+ error_setg(errp, "cannot set sve-default-vector-length");
129
+ if (remainder) {
130
+ error_append_hint(errp, "Vector length not a multiple of 16\n");
131
+ } else if (default_vq < 1) {
132
+ error_append_hint(errp, "Vector length smaller than 16\n");
133
+ } else {
134
+ error_append_hint(errp, "Vector length larger than %d\n",
135
+ 512 * 16);
136
+ }
137
+ return;
138
+ }
139
+
140
+ cpu->sve_default_vq = default_vq;
141
+}
142
+
143
+static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
144
+ const char *name, void *opaque,
145
+ Error **errp)
146
+{
147
+ ARMCPU *cpu = ARM_CPU(obj);
148
+ int32_t value = cpu->sve_default_vq * 16;
149
+
150
+ visit_type_int32(v, name, &value, errp);
151
+}
152
+#endif
153
+
154
void aarch64_add_sve_properties(Object *obj)
155
{
156
uint32_t vq;
157
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
158
object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
159
cpu_arm_set_sve_vq, NULL, NULL);
52
}
160
}
53
161
+
54
- if (arm_feature(&cpu->env, ARM_FEATURE_MPU)) {
162
+#ifdef CONFIG_USER_ONLY
55
+ if (arm_feature(&cpu->env, ARM_FEATURE_PMSA)) {
163
+ /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
56
qdev_property_add_static(DEVICE(obj), &arm_cpu_has_mpu_property,
164
+ object_property_add(obj, "sve-default-vector-length", "int32",
57
&error_abort);
165
+ cpu_arm_get_sve_default_vec_len,
58
if (arm_feature(&cpu->env, ARM_FEATURE_V7)) {
166
+ cpu_arm_set_sve_default_vec_len, NULL, NULL);
59
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
167
+#endif
60
61
if (arm_feature(env, ARM_FEATURE_V7) &&
62
!arm_feature(env, ARM_FEATURE_M) &&
63
- !arm_feature(env, ARM_FEATURE_MPU)) {
64
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
65
/* v7VMSA drops support for the old ARMv5 tiny pages, so we
66
* can use 4K pages.
67
*/
68
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
69
}
70
71
if (!cpu->has_mpu) {
72
- unset_feature(env, ARM_FEATURE_MPU);
73
+ unset_feature(env, ARM_FEATURE_PMSA);
74
}
75
76
- if (arm_feature(env, ARM_FEATURE_MPU) &&
77
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
78
arm_feature(env, ARM_FEATURE_V7)) {
79
uint32_t nr = cpu->pmsav7_dregion;
80
81
@@ -XXX,XX +XXX,XX @@ static void arm946_initfn(Object *obj)
82
83
cpu->dtb_compatible = "arm,arm946";
84
set_feature(&cpu->env, ARM_FEATURE_V5);
85
- set_feature(&cpu->env, ARM_FEATURE_MPU);
86
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
87
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
88
cpu->midr = 0x41059461;
89
cpu->ctr = 0x0f004006;
90
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
91
set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
92
set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
93
set_feature(&cpu->env, ARM_FEATURE_V7MP);
94
- set_feature(&cpu->env, ARM_FEATURE_MPU);
95
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
96
cpu->midr = 0x411fc153; /* r1p3 */
97
cpu->id_pfr0 = 0x0131;
98
cpu->id_pfr1 = 0x001;
99
diff --git a/target/arm/helper.c b/target/arm/helper.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/target/arm/helper.c
102
+++ b/target/arm/helper.c
103
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
104
{
105
ARMCPU *cpu = arm_env_get_cpu(env);
106
107
- if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_MPU)
108
+ if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA)
109
&& !extended_addresses_enabled(env)) {
110
/* For VMSA (when not using the LPAE long descriptor page table
111
* format) this register includes the ASID, so do a TLB flush.
112
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
113
define_arm_cp_regs(cpu, v6k_cp_reginfo);
114
}
115
if (arm_feature(env, ARM_FEATURE_V7MP) &&
116
- !arm_feature(env, ARM_FEATURE_MPU)) {
117
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
118
define_arm_cp_regs(cpu, v7mp_cp_reginfo);
119
}
120
if (arm_feature(env, ARM_FEATURE_V7)) {
121
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
122
}
123
}
124
125
- if (arm_feature(env, ARM_FEATURE_MPU)) {
126
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
127
if (arm_feature(env, ARM_FEATURE_V6)) {
128
/* PMSAv6 not implemented */
129
assert(arm_feature(env, ARM_FEATURE_V7));
130
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
131
define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo);
132
}
133
define_arm_cp_regs(cpu, id_cp_reginfo);
134
- if (!arm_feature(env, ARM_FEATURE_MPU)) {
135
+ if (!arm_feature(env, ARM_FEATURE_PMSA)) {
136
define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo);
137
} else if (arm_feature(env, ARM_FEATURE_V7)) {
138
define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
139
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
140
/* pmsav7 has special handling for when MPU is disabled so call it before
141
* the common MMU/MPU disabled check below.
142
*/
143
- if (arm_feature(env, ARM_FEATURE_MPU) &&
144
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
145
arm_feature(env, ARM_FEATURE_V7)) {
146
*page_size = TARGET_PAGE_SIZE;
147
return get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
148
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
149
return 0;
150
}
151
152
- if (arm_feature(env, ARM_FEATURE_MPU)) {
153
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
154
/* Pre-v7 MPU */
155
*page_size = TARGET_PAGE_SIZE;
156
return get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
157
diff --git a/target/arm/machine.c b/target/arm/machine.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/target/arm/machine.c
160
+++ b/target/arm/machine.c
161
@@ -XXX,XX +XXX,XX @@ static bool pmsav7_needed(void *opaque)
162
ARMCPU *cpu = opaque;
163
CPUARMState *env = &cpu->env;
164
165
- return arm_feature(env, ARM_FEATURE_MPU) &&
166
+ return arm_feature(env, ARM_FEATURE_PMSA) &&
167
arm_feature(env, ARM_FEATURE_V7);
168
}
168
}
169
169
170
void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
170
--
171
--
171
2.7.4
172
2.20.1
172
173
173
174
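To observe the effect of the new property from inside a guest process, a hypothetical check (assuming a Linux/AArch64 toolchain; PR_SVE_GET_VL is the kernel's standard prctl) could look like the following; running it as e.g. 'qemu-aarch64 -cpu max,sve-default-vector-length=32 ./sve-vl' should print a byte count that follows the property.

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SVE_GET_VL
#define PR_SVE_GET_VL       51
#endif
#ifndef PR_SVE_VL_LEN_MASK
#define PR_SVE_VL_LEN_MASK  0xffff
#endif

int main(void)
{
    int ret = prctl(PR_SVE_GET_VL);

    if (ret < 0) {
        perror("PR_SVE_GET_VL");
        return 1;
    }
    printf("SVE vector length at startup: %d bytes\n", ret & PR_SVE_VL_LEN_MASK);
    return 0;
}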
Deleted patch
1
Fix the handling of QOM properties for PMSA CPUs with no MPU:
2
1
3
Allow no-MPU to be specified by either:
4
* has-mpu = false
5
* pmsav7_dregion = 0
6
and make setting one imply the other. Don't clear the PMSA
7
feature bit in this situation.
8
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Message-id: 1493122030-32191-6-git-send-email-peter.maydell@linaro.org
13
---
14
target/arm/cpu.c | 8 +++++++-
15
1 file changed, 7 insertions(+), 1 deletion(-)
16
17
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.c
20
+++ b/target/arm/cpu.c
21
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
22
cpu->id_pfr1 &= ~0xf000;
23
}
24
25
+ /* MPU can be configured out of a PMSA CPU either by setting has-mpu
26
+ * to false or by setting pmsav7-dregion to 0.
27
+ */
28
if (!cpu->has_mpu) {
29
- unset_feature(env, ARM_FEATURE_PMSA);
30
+ cpu->pmsav7_dregion = 0;
31
+ }
32
+ if (cpu->pmsav7_dregion == 0) {
33
+ cpu->has_mpu = false;
34
}
35
36
if (arm_feature(env, ARM_FEATURE_PMSA) &&
37
--
38
2.7.4
39
40
Deleted patch
1
If the CPU is a PMSA config with no MPU implemented, then the
2
SCTLR.M bit should be RAZ/WI, so that the guest can never
3
turn on the non-existent MPU.
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 1493122030-32191-7-git-send-email-peter.maydell@linaro.org
9
---
10
target/arm/helper.c | 5 +++++
11
1 file changed, 5 insertions(+)
12
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
18
return;
19
}
20
21
+ if (arm_feature(env, ARM_FEATURE_PMSA) && !cpu->has_mpu) {
22
+ /* M bit is RAZ/WI for PMSA with no MPU implemented */
23
+ value &= ~SCTLR_M;
24
+ }
25
+
26
raw_write(env, ri, value);
27
/* ??? Lots of these bits are not implemented. */
28
/* This may enable/disable the MMU, so do a TLB flush. */
29
--
30
2.7.4
31
32
Deleted patch
Now that we enforce both:
* pmsav7_dregion == 0 implies has_mpu == false
* PMSA with has_mpu == false means SCTLR.M cannot be set
we can remove a check on pmsav7_dregion from get_phys_addr_pmsav7(),
because we can only reach this code path if the MPU is enabled
(and so region_translation_disabled() returned false).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-8-git-send-email-peter.maydell@linaro.org
---
target/arm/helper.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
}

if (n == -1) { /* no hits */
- if (cpu->pmsav7_dregion &&
- (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR))) {
+ if (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR)) {
/* background fault */
*fsr = 0;
return true;
--
2.7.4
diff view generated by jsdifflib
Deleted patch
From: Michael Davidsaver <mdavidsaver@gmail.com>

General logic is that operations stopped by the MPU are MemManage,
and those which go through the MPU and are caught by the unassigned
access handler are BusFault. Distinguish these by looking at the
exception.fsr values, and set the CFSR bits and (if appropriate)
fill in the BFAR or MMFAR with the exception address.

Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Message-id: 1493122030-32191-12-git-send-email-peter.maydell@linaro.org
[PMM: i-side faults do not set BFAR/MMFAR, only d-side;
added some CPU_LOG_INT logging]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
target/arm/helper.c | 45 ++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 42 insertions(+), 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
break;
case EXCP_PREFETCH_ABORT:
case EXCP_DATA_ABORT:
- /* TODO: if we implemented the MPU registers, this is where we
- * should set the MMFAR, etc from exception.fsr and exception.vaddress.
+ /* Note that for M profile we don't have a guest facing FSR, but
+ * the env->exception.fsr will be populated by the code that
+ * raises the fault, in the A profile short-descriptor format.
*/
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM);
+ switch (env->exception.fsr & 0xf) {
+ case 0x8: /* External Abort */
+ switch (cs->exception_index) {
+ case EXCP_PREFETCH_ABORT:
+ env->v7m.cfsr |= R_V7M_CFSR_PRECISERR_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.PRECISERR\n");
+ break;
+ case EXCP_DATA_ABORT:
+ env->v7m.cfsr |=
+ (R_V7M_CFSR_IBUSERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
+ env->v7m.bfar = env->exception.vaddress;
+ qemu_log_mask(CPU_LOG_INT,
+ "...with CFSR.IBUSERR and BFAR 0x%x\n",
+ env->v7m.bfar);
+ break;
+ }
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS);
+ break;
+ default:
+ /* All other FSR values are either MPU faults or "can't happen
+ * for M profile" cases.
+ */
+ switch (cs->exception_index) {
+ case EXCP_PREFETCH_ABORT:
+ env->v7m.cfsr |= R_V7M_CFSR_IACCVIOL_MASK;
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
+ break;
+ case EXCP_DATA_ABORT:
+ env->v7m.cfsr |=
+ (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
+ env->v7m.mmfar = env->exception.vaddress;
+ qemu_log_mask(CPU_LOG_INT,
+ "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
+ env->v7m.mmfar);
+ break;
+ }
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM);
+ break;
+ }
break;
case EXCP_BKPT:
if (semihosting_enabled()) {
--
2.7.4
diff view generated by jsdifflib
Deleted patch
From: Cédric Le Goater <clg@kaod.org>

Multiple I2C commands can be fired simultaneously and the controller
executes the commands following these priorities:

(1) Master Start Command
(2) Master Transmit Command
(3) Slave Transmit Command or Master Receive Command
(4) Master Stop Command

The current code is incorrect with respect to the above sequence and
needs to be reworked to handle each individual command.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1494827476-1487-2-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/i2c/aspeed_i2c.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i2c/aspeed_i2c.c
+++ b/hw/i2c/aspeed_i2c.c
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_i2c_bus_read(void *opaque, hwaddr offset,

static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
{
+ bus->cmd &= ~0xFFFF;
bus->cmd |= value & 0xFFFF;
bus->intr_status = 0;

@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
bus->intr_status |= I2CD_INTR_TX_ACK;
}

- } else if (bus->cmd & I2CD_M_TX_CMD) {
+ /* START command is also a TX command, as the slave address is
+ * sent on the bus */
+ bus->cmd &= ~(I2CD_M_START_CMD | I2CD_M_TX_CMD);
+
+ /* No slave found */
+ if (!i2c_bus_busy(bus->bus)) {
+ return;
+ }
+ }
+
+ if (bus->cmd & I2CD_M_TX_CMD) {
if (i2c_send(bus->bus, bus->buf)) {
bus->intr_status |= (I2CD_INTR_TX_NAK | I2CD_INTR_ABNORMAL);
i2c_end_transfer(bus->bus);
} else {
bus->intr_status |= I2CD_INTR_TX_ACK;
}
+ bus->cmd &= ~I2CD_M_TX_CMD;
+ }

- } else if (bus->cmd & I2CD_M_RX_CMD) {
+ if (bus->cmd & I2CD_M_RX_CMD) {
int ret = i2c_recv(bus->bus);
if (ret < 0) {
qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__);
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
bus->intr_status |= I2CD_INTR_RX_DONE;
}
bus->buf = (ret & I2CD_BYTE_BUF_RX_MASK) << I2CD_BYTE_BUF_RX_SHIFT;
+ bus->cmd &= ~I2CD_M_RX_CMD;
}

if (bus->cmd & (I2CD_M_STOP_CMD | I2CD_M_S_RX_CMD_LAST)) {
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
i2c_end_transfer(bus->bus);
bus->intr_status |= I2CD_INTR_NORMAL_STOP;
}
+ bus->cmd &= ~I2CD_M_STOP_CMD;
}
-
- /* command is handled, reset it and check for interrupts */
- bus->cmd &= ~0xFFFF;
- aspeed_i2c_bus_raise_interrupt(bus);
}

static void aspeed_i2c_bus_write(void *opaque, hwaddr offset,
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_write(void *opaque, hwaddr offset,
}

aspeed_i2c_bus_handle_cmd(bus, value);
+ aspeed_i2c_bus_raise_interrupt(bus);
break;

default:
--
2.7.4
diff view generated by jsdifflib
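A rough editor-added sketch of the command handling described in the aspeed_i2c patch above. The CMD_* names and printf stand-ins are invented for illustration and are not the aspeed_i2c register bits or QEMU APIs; the point is only that each command bit the guest writes is checked independently, in priority order, and cleared once handled:

    #include <stdint.h>
    #include <stdio.h>

    enum {
        CMD_START = 1 << 0,   /* (1) start: also transmits the slave address */
        CMD_TX    = 1 << 1,   /* (2) master transmit */
        CMD_RX    = 1 << 2,   /* (3) master receive */
        CMD_STOP  = 1 << 3,   /* (4) master stop */
    };

    /* Process every command bit present in one register write, highest
     * priority first, clearing each bit once it has been handled. */
    static void handle_cmd(uint32_t *cmd)
    {
        if (*cmd & CMD_START) {
            printf("start: send slave address\n");
            /* the address byte counts as the TX byte, so consume both */
            *cmd &= ~(CMD_START | CMD_TX);
        }
        if (*cmd & CMD_TX) {
            printf("transmit data byte\n");
            *cmd &= ~CMD_TX;
        }
        if (*cmd & CMD_RX) {
            printf("receive data byte\n");
            *cmd &= ~CMD_RX;
        }
        if (*cmd & CMD_STOP) {
            printf("stop\n");
            *cmd &= ~CMD_STOP;
        }
    }

    int main(void)
    {
        /* A guest may request e.g. START, RX and STOP in a single write. */
        uint32_t cmd = CMD_START | CMD_RX | CMD_STOP;
        handle_cmd(&cmd);
        return 0;
    }

An if/else-if chain would stop after the first command bit it matched, which is why the patch turns the checks into a sequence of independent if statements.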
From: Andrew Jones <drjones@redhat.com>

Cc: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org>
Message-id: 20170529173751.3443-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/virt-acpi-build.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
if (nb_numa_nodes > 0) {
acpi_add_table(table_offsets, tables_blob);
build_srat(tables_blob, tables->linker, vms);
+ if (have_numa_distance) {
+ acpi_add_table(table_offsets, tables_blob);
+ build_slit(tables_blob, tables->linker);
+ }
}

if (its_class_name() && !vmc->no_its) {
--
2.7.4

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210726150953.1218690-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/nseries.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/nseries.c b/hw/arm/nseries.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/nseries.c
+++ b/hw/arm/nseries.c
@@ -XXX,XX +XXX,XX @@ static uint32_t mipid_txrx(void *opaque, uint32_t cmd, int len)
default:
bad_cmd:
qemu_log_mask(LOG_GUEST_ERROR,
- "%s: unknown command %02x\n", __func__, s->cmd);
+ "%s: unknown command 0x%02x\n", __func__, s->cmd);
break;
}

--
2.20.1
diff view generated by jsdifflib
From: Cédric Le Goater <clg@kaod.org>

Today, the LAST command is handled with the STOP command but this is
incorrect. Also nack the I2C bus when a LAST is issued.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1494827476-1487-3-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/i2c/aspeed_i2c.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/i2c/aspeed_i2c.c
+++ b/hw/i2c/aspeed_i2c.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
bus->cmd &= ~I2CD_M_TX_CMD;
}

- if (bus->cmd & I2CD_M_RX_CMD) {
+ if (bus->cmd & (I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST)) {
int ret = i2c_recv(bus->bus);
if (ret < 0) {
qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__);
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
bus->intr_status |= I2CD_INTR_RX_DONE;
}
bus->buf = (ret & I2CD_BYTE_BUF_RX_MASK) << I2CD_BYTE_BUF_RX_SHIFT;
- bus->cmd &= ~I2CD_M_RX_CMD;
+ if (bus->cmd & I2CD_M_S_RX_CMD_LAST) {
+ i2c_nack(bus->bus);
+ }
+ bus->cmd &= ~(I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST);
}

- if (bus->cmd & (I2CD_M_STOP_CMD | I2CD_M_S_RX_CMD_LAST)) {
+ if (bus->cmd & I2CD_M_STOP_CMD) {
if (!i2c_bus_busy(bus->bus)) {
bus->intr_status |= I2CD_INTR_ABNORMAL;
} else {
--
2.7.4

From: Joel Stanley <joel@jms.id.au>

The macro used to calculate the maximum memory size of the MMIO region
had a mistake, causing all GPIO models to create a mapping of 0x9D8.
The intent was to have it be 0x9D8 - 0x800.

This extra size doesn't matter on ast2400 and ast2500, which have a 4KB
region set aside for the GPIO controller.

On the ast2600 the 3.3V and 1.8V GPIO controllers are 2KB apart, so the
regions would overlap. Worse, the 1.8V controller would map over the
top of the following peripheral, which happens to be the RTC.

The mmio region used by each device is a maximum of 2KB, so avoid the
calculations and hard code this as the maximum.

Fixes: 36d737ee82b2 ("hw/gpio: Add in AST2600 specific implementation")
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Rashmica Gupta <rashmica.g@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20210713065854.134634-2-joel@jms.id.au
[PMM: fix autocorrect error in commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/gpio/aspeed_gpio.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/gpio/aspeed_gpio.c b/hw/gpio/aspeed_gpio.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/gpio/aspeed_gpio.c
+++ b/hw/gpio/aspeed_gpio.c
@@ -XXX,XX +XXX,XX @@
#define GPIO_1_8V_MEM_SIZE 0x9D8
#define GPIO_1_8V_REG_ARRAY_SIZE ((GPIO_1_8V_MEM_SIZE - \
GPIO_1_8V_REG_OFFSET) >> 2)
-#define GPIO_MAX_MEM_SIZE MAX(GPIO_3_6V_MEM_SIZE, GPIO_1_8V_MEM_SIZE)

static int aspeed_evaluate_irq(GPIOSets *regs, int gpio_prev_high, int gpio)
{
@@ -XXX,XX +XXX,XX @@ static void aspeed_gpio_realize(DeviceState *dev, Error **errp)
}

memory_region_init_io(&s->iomem, OBJECT(s), &aspeed_gpio_ops, s,
- TYPE_ASPEED_GPIO, GPIO_MAX_MEM_SIZE);
+ TYPE_ASPEED_GPIO, 0x800);

sysbus_init_mmio(sbd, &s->iomem);
}
--
2.20.1
diff view generated by jsdifflib
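To make the overlap described in the aspeed_gpio patch above concrete, here is the arithmetic as a small standalone check. The 0x800 spacing of the ast2600 controllers is taken from the commit message; the program itself is only an editor illustration, not QEMU code:

    #include <stdio.h>

    #define GPIO_1_8V_MEM_SIZE   0x9D8   /* end offset, per the patch */
    #define AST2600_CTRL_SPACING 0x800   /* 3.3V and 1.8V banks are 2KB apart */

    int main(void)
    {
        /* The old GPIO_MAX_MEM_SIZE evaluated to 0x9D8 (2520 bytes), not
         * the intended 0x9D8 - 0x800 (0x1D8), so each bank's MMIO window
         * was larger than the 2KB gap between the ast2600 controllers and
         * spilled into the next peripheral. Hard coding 0x800 keeps the
         * window within the gap. */
        printf("old size 0x%x, spacing 0x%x, overlap %s\n",
               GPIO_1_8V_MEM_SIZE, AST2600_CTRL_SPACING,
               GPIO_1_8V_MEM_SIZE > AST2600_CTRL_SPACING ? "yes" : "no");
        return 0;
    }

0x9D8 (2520 bytes) is larger than the 0x800 (2048 byte) gap between the two controllers, while the intended 0x9D8 - 0x800 = 0x1D8 would have fit.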
Deleted patch
From: Cédric Le Goater <clg@kaod.org>

Let's add an RTC to the palmetto BMC and an LM75 temperature sensor to
the AST2500 EVB to start with.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1494827476-1487-5-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/aspeed.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)

diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed.c
+++ b/hw/arm/aspeed.c
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedBoardConfig {
const char *fmc_model;
const char *spi_model;
uint32_t num_cs;
+ void (*i2c_init)(AspeedBoardState *bmc);
} AspeedBoardConfig;

enum {
@@ -XXX,XX +XXX,XX @@ enum {
SCU_AST2500_HW_STRAP_ACPI_ENABLE | \
SCU_HW_STRAP_SPI_MODE(SCU_HW_STRAP_SPI_MASTER))

+static void palmetto_bmc_i2c_init(AspeedBoardState *bmc);
+static void ast2500_evb_i2c_init(AspeedBoardState *bmc);
+
static const AspeedBoardConfig aspeed_boards[] = {
[PALMETTO_BMC] = {
.soc_name = "ast2400-a1",
@@ -XXX,XX +XXX,XX @@ static const AspeedBoardConfig aspeed_boards[] = {
.fmc_model = "n25q256a",
.spi_model = "mx25l25635e",
.num_cs = 1,
+ .i2c_init = palmetto_bmc_i2c_init,
},
[AST2500_EVB] = {
.soc_name = "ast2500-a1",
@@ -XXX,XX +XXX,XX @@ static const AspeedBoardConfig aspeed_boards[] = {
.fmc_model = "n25q256a",
.spi_model = "mx25l25635e",
.num_cs = 1,
+ .i2c_init = ast2500_evb_i2c_init,
},
[ROMULUS_BMC] = {
.soc_name = "ast2500-a1",
@@ -XXX,XX +XXX,XX @@ static void aspeed_board_init(MachineState *machine,
aspeed_board_binfo.ram_size = ram_size;
aspeed_board_binfo.loader_start = sc->info->sdram_base;

+ if (cfg->i2c_init) {
+ cfg->i2c_init(bmc);
+ }
+
arm_load_kernel(ARM_CPU(first_cpu), &aspeed_board_binfo);
}

+static void palmetto_bmc_i2c_init(AspeedBoardState *bmc)
+{
+ AspeedSoCState *soc = &bmc->soc;
+
+ /* The palmetto platform expects a ds3231 RTC but a ds1338 is
+ * enough to provide basic RTC features. Alarms will be missing */
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 0), "ds1338", 0x68);
+}
+
static void palmetto_bmc_init(MachineState *machine)
{
aspeed_board_init(machine, &aspeed_boards[PALMETTO_BMC]);
@@ -XXX,XX +XXX,XX @@ static const TypeInfo palmetto_bmc_type = {
.class_init = palmetto_bmc_class_init,
};

+static void ast2500_evb_i2c_init(AspeedBoardState *bmc)
+{
+ AspeedSoCState *soc = &bmc->soc;
+
+ /* The AST2500 EVB expects a LM75 but a TMP105 is compatible */
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7), "tmp105", 0x4d);
+}
+
static void ast2500_evb_init(MachineState *machine)
{
aspeed_board_init(machine, &aspeed_boards[AST2500_EVB]);
--
2.7.4
diff view generated by jsdifflib
Deleted patch
From: Cédric Le Goater <clg@kaod.org>

Largely inspired by the TMP105 temperature sensor, here is a model for
the TMP42{1,2,3} temperature sensors.

Specs can be found here:

    http://www.ti.com/lit/gpn/tmp421

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1494827476-1487-6-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/misc/Makefile.objs | 1 +
hw/misc/tmp421.c | 401 ++++++++++++++++++++++++++++++++++++++++
default-configs/arm-softmmu.mak | 1 +
3 files changed, 403 insertions(+)
create mode 100644 hw/misc/tmp421.c

diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
22
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/misc/Makefile.objs
24
+++ b/hw/misc/Makefile.objs
25
@@ -XXX,XX +XXX,XX @@
26
common-obj-$(CONFIG_APPLESMC) += applesmc.o
27
common-obj-$(CONFIG_MAX111X) += max111x.o
28
common-obj-$(CONFIG_TMP105) += tmp105.o
29
+common-obj-$(CONFIG_TMP421) += tmp421.o
30
common-obj-$(CONFIG_ISA_DEBUG) += debugexit.o
31
common-obj-$(CONFIG_SGA) += sga.o
32
common-obj-$(CONFIG_ISA_TESTDEV) += pc-testdev.o
33
diff --git a/hw/misc/tmp421.c b/hw/misc/tmp421.c
34
new file mode 100644
35
index XXXXXXX..XXXXXXX
36
--- /dev/null
37
+++ b/hw/misc/tmp421.c
38
@@ -XXX,XX +XXX,XX @@
39
+/*
40
+ * Texas Instruments TMP421 temperature sensor.
41
+ *
42
+ * Copyright (c) 2016 IBM Corporation.
43
+ *
44
+ * Largely inspired by :
45
+ *
46
+ * Texas Instruments TMP105 temperature sensor.
47
+ *
48
+ * Copyright (C) 2008 Nokia Corporation
49
+ * Written by Andrzej Zaborowski <andrew@openedhand.com>
50
+ *
51
+ * This program is free software; you can redistribute it and/or
52
+ * modify it under the terms of the GNU General Public License as
53
+ * published by the Free Software Foundation; either version 2 or
54
+ * (at your option) version 3 of the License.
55
+ *
56
+ * This program is distributed in the hope that it will be useful,
57
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
58
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
59
+ * GNU General Public License for more details.
60
+ *
61
+ * You should have received a copy of the GNU General Public License along
62
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
63
+ */
64
+
65
+#include "qemu/osdep.h"
66
+#include "hw/hw.h"
67
+#include "hw/i2c/i2c.h"
68
+#include "qapi/error.h"
69
+#include "qapi/visitor.h"
70
+
71
+/* Manufacturer / Device ID's */
72
+#define TMP421_MANUFACTURER_ID 0x55
73
+#define TMP421_DEVICE_ID 0x21
74
+#define TMP422_DEVICE_ID 0x22
75
+#define TMP423_DEVICE_ID 0x23
76
+
77
+typedef struct DeviceInfo {
78
+ int model;
79
+ const char *name;
80
+} DeviceInfo;
81
+
82
+static const DeviceInfo devices[] = {
83
+ { TMP421_DEVICE_ID, "tmp421" },
84
+ { TMP422_DEVICE_ID, "tmp422" },
85
+ { TMP423_DEVICE_ID, "tmp423" },
86
+};
87
+
88
+typedef struct TMP421State {
89
+ /*< private >*/
90
+ I2CSlave i2c;
91
+ /*< public >*/
92
+
93
+ int16_t temperature[4];
94
+
95
+ uint8_t status;
96
+ uint8_t config[2];
97
+ uint8_t rate;
98
+
99
+ uint8_t len;
100
+ uint8_t buf[2];
101
+ uint8_t pointer;
102
+
103
+} TMP421State;
104
+
105
+typedef struct TMP421Class {
106
+ I2CSlaveClass parent_class;
107
+ DeviceInfo *dev;
108
+} TMP421Class;
109
+
110
+#define TYPE_TMP421 "tmp421-generic"
111
+#define TMP421(obj) OBJECT_CHECK(TMP421State, (obj), TYPE_TMP421)
112
+
113
+#define TMP421_CLASS(klass) \
114
+ OBJECT_CLASS_CHECK(TMP421Class, (klass), TYPE_TMP421)
115
+#define TMP421_GET_CLASS(obj) \
116
+ OBJECT_GET_CLASS(TMP421Class, (obj), TYPE_TMP421)
117
+
118
+/* the TMP421 registers */
119
+#define TMP421_STATUS_REG 0x08
120
+#define TMP421_STATUS_BUSY (1 << 7)
121
+#define TMP421_CONFIG_REG_1 0x09
122
+#define TMP421_CONFIG_RANGE (1 << 2)
123
+#define TMP421_CONFIG_SHUTDOWN (1 << 6)
124
+#define TMP421_CONFIG_REG_2 0x0A
125
+#define TMP421_CONFIG_RC (1 << 2)
126
+#define TMP421_CONFIG_LEN (1 << 3)
127
+#define TMP421_CONFIG_REN (1 << 4)
128
+#define TMP421_CONFIG_REN2 (1 << 5)
129
+#define TMP421_CONFIG_REN3 (1 << 6)
130
+
131
+#define TMP421_CONVERSION_RATE_REG 0x0B
132
+#define TMP421_ONE_SHOT 0x0F
133
+
134
+#define TMP421_RESET 0xFC
135
+#define TMP421_MANUFACTURER_ID_REG 0xFE
136
+#define TMP421_DEVICE_ID_REG 0xFF
137
+
138
+#define TMP421_TEMP_MSB0 0x00
139
+#define TMP421_TEMP_MSB1 0x01
140
+#define TMP421_TEMP_MSB2 0x02
141
+#define TMP421_TEMP_MSB3 0x03
142
+#define TMP421_TEMP_LSB0 0x10
143
+#define TMP421_TEMP_LSB1 0x11
144
+#define TMP421_TEMP_LSB2 0x12
145
+#define TMP421_TEMP_LSB3 0x13
146
+
147
+static const int32_t mins[2] = { -40000, -55000 };
148
+static const int32_t maxs[2] = { 127000, 150000 };
149
+
150
+static void tmp421_get_temperature(Object *obj, Visitor *v, const char *name,
151
+ void *opaque, Error **errp)
152
+{
153
+ TMP421State *s = TMP421(obj);
154
+ bool ext_range = (s->config[0] & TMP421_CONFIG_RANGE);
155
+ int offset = ext_range * 64 * 256;
156
+ int64_t value;
157
+ int tempid;
158
+
159
+ if (sscanf(name, "temperature%d", &tempid) != 1) {
160
+ error_setg(errp, "error reading %s: %m", name);
161
+ return;
162
+ }
163
+
164
+ if (tempid >= 4 || tempid < 0) {
165
+ error_setg(errp, "error reading %s", name);
166
+ return;
167
+ }
168
+
169
+ value = ((s->temperature[tempid] - offset) * 1000 + 128) / 256;
170
+
171
+ visit_type_int(v, name, &value, errp);
172
+}
173
+
174
+/* Units are 0.001 centigrades relative to 0 C. s->temperature is 8.8
175
+ * fixed point, so units are 1/256 centigrades. A simple ratio will do.
176
+ */
177
+static void tmp421_set_temperature(Object *obj, Visitor *v, const char *name,
178
+ void *opaque, Error **errp)
179
+{
180
+ TMP421State *s = TMP421(obj);
181
+ Error *local_err = NULL;
182
+ int64_t temp;
183
+ bool ext_range = (s->config[0] & TMP421_CONFIG_RANGE);
184
+ int offset = ext_range * 64 * 256;
185
+ int tempid;
186
+
187
+ visit_type_int(v, name, &temp, &local_err);
188
+ if (local_err) {
189
+ error_propagate(errp, local_err);
190
+ return;
191
+ }
192
+
193
+ if (temp >= maxs[ext_range] || temp < mins[ext_range]) {
194
+ error_setg(errp, "value %" PRId64 ".%03" PRIu64 " °C is out of range",
195
+ temp / 1000, temp % 1000);
196
+ return;
197
+ }
198
+
199
+ if (sscanf(name, "temperature%d", &tempid) != 1) {
200
+ error_setg(errp, "error reading %s: %m", name);
201
+ return;
202
+ }
203
+
204
+ if (tempid >= 4 || tempid < 0) {
205
+ error_setg(errp, "error reading %s", name);
206
+ return;
207
+ }
208
+
209
+ s->temperature[tempid] = (int16_t) ((temp * 256 - 128) / 1000) + offset;
210
+}
211
+
212
+static void tmp421_read(TMP421State *s)
213
+{
214
+ TMP421Class *sc = TMP421_GET_CLASS(s);
215
+
216
+ s->len = 0;
217
+
218
+ switch (s->pointer) {
219
+ case TMP421_MANUFACTURER_ID_REG:
220
+ s->buf[s->len++] = TMP421_MANUFACTURER_ID;
221
+ break;
222
+ case TMP421_DEVICE_ID_REG:
223
+ s->buf[s->len++] = sc->dev->model;
224
+ break;
225
+ case TMP421_CONFIG_REG_1:
226
+ s->buf[s->len++] = s->config[0];
227
+ break;
228
+ case TMP421_CONFIG_REG_2:
229
+ s->buf[s->len++] = s->config[1];
230
+ break;
231
+ case TMP421_CONVERSION_RATE_REG:
232
+ s->buf[s->len++] = s->rate;
233
+ break;
234
+ case TMP421_STATUS_REG:
235
+ s->buf[s->len++] = s->status;
236
+ break;
237
+
238
+ /* FIXME: check for channel enablement in config registers */
239
+ case TMP421_TEMP_MSB0:
240
+ s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 8);
241
+ s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 0) & 0xf0;
242
+ break;
243
+ case TMP421_TEMP_MSB1:
244
+ s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 8);
245
+ s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 0) & 0xf0;
246
+ break;
247
+ case TMP421_TEMP_MSB2:
248
+ s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 8);
249
+ s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 0) & 0xf0;
250
+ break;
251
+ case TMP421_TEMP_MSB3:
252
+ s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 8);
253
+ s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 0) & 0xf0;
254
+ break;
255
+ case TMP421_TEMP_LSB0:
256
+ s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 0) & 0xf0;
257
+ break;
258
+ case TMP421_TEMP_LSB1:
259
+ s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 0) & 0xf0;
260
+ break;
261
+ case TMP421_TEMP_LSB2:
262
+ s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 0) & 0xf0;
263
+ break;
264
+ case TMP421_TEMP_LSB3:
265
+ s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 0) & 0xf0;
266
+ break;
267
+ }
268
+}
269
+
270
+static void tmp421_reset(I2CSlave *i2c);
271
+
272
+static void tmp421_write(TMP421State *s)
273
+{
274
+ switch (s->pointer) {
275
+ case TMP421_CONVERSION_RATE_REG:
276
+ s->rate = s->buf[0];
277
+ break;
278
+ case TMP421_CONFIG_REG_1:
279
+ s->config[0] = s->buf[0];
280
+ break;
281
+ case TMP421_CONFIG_REG_2:
282
+ s->config[1] = s->buf[0];
283
+ break;
284
+ case TMP421_RESET:
285
+ tmp421_reset(I2C_SLAVE(s));
286
+ break;
287
+ }
288
+}
289
+
290
+static int tmp421_rx(I2CSlave *i2c)
291
+{
292
+ TMP421State *s = TMP421(i2c);
293
+
294
+ if (s->len < 2) {
295
+ return s->buf[s->len++];
296
+ } else {
297
+ return 0xff;
298
+ }
299
+}
300
+
301
+static int tmp421_tx(I2CSlave *i2c, uint8_t data)
302
+{
303
+ TMP421State *s = TMP421(i2c);
304
+
305
+ if (s->len == 0) {
306
+ /* first byte is the register pointer for a read or write
307
+ * operation */
308
+ s->pointer = data;
309
+ s->len++;
310
+ } else if (s->len == 1) {
311
+ /* second byte is the data to write. The device only supports
312
+ * one byte writes */
313
+ s->buf[0] = data;
314
+ tmp421_write(s);
315
+ }
316
+
317
+ return 0;
318
+}
319
+
320
+static int tmp421_event(I2CSlave *i2c, enum i2c_event event)
321
+{
322
+ TMP421State *s = TMP421(i2c);
323
+
324
+ if (event == I2C_START_RECV) {
325
+ tmp421_read(s);
326
+ }
327
+
328
+ s->len = 0;
329
+ return 0;
330
+}
331
+
332
+static const VMStateDescription vmstate_tmp421 = {
333
+ .name = "TMP421",
334
+ .version_id = 0,
335
+ .minimum_version_id = 0,
336
+ .fields = (VMStateField[]) {
337
+ VMSTATE_UINT8(len, TMP421State),
338
+ VMSTATE_UINT8_ARRAY(buf, TMP421State, 2),
339
+ VMSTATE_UINT8(pointer, TMP421State),
340
+ VMSTATE_UINT8_ARRAY(config, TMP421State, 2),
341
+ VMSTATE_UINT8(status, TMP421State),
342
+ VMSTATE_UINT8(rate, TMP421State),
343
+ VMSTATE_INT16_ARRAY(temperature, TMP421State, 4),
344
+ VMSTATE_I2C_SLAVE(i2c, TMP421State),
345
+ VMSTATE_END_OF_LIST()
346
+ }
347
+};
348
+
349
+static void tmp421_reset(I2CSlave *i2c)
350
+{
351
+ TMP421State *s = TMP421(i2c);
352
+ TMP421Class *sc = TMP421_GET_CLASS(s);
353
+
354
+ memset(s->temperature, 0, sizeof(s->temperature));
355
+ s->pointer = 0;
356
+
357
+ s->config[0] = 0; /* TMP421_CONFIG_RANGE */
358
+
359
+ /* resistance correction and channel enablement */
360
+ switch (sc->dev->model) {
361
+ case TMP421_DEVICE_ID:
362
+ s->config[1] = 0x1c;
363
+ break;
364
+ case TMP422_DEVICE_ID:
365
+ s->config[1] = 0x3c;
366
+ break;
367
+ case TMP423_DEVICE_ID:
368
+ s->config[1] = 0x7c;
369
+ break;
370
+ }
371
+
372
+ s->rate = 0x7; /* 8Hz */
373
+ s->status = 0;
374
+}
375
+
376
+static int tmp421_init(I2CSlave *i2c)
377
+{
378
+ TMP421State *s = TMP421(i2c);
379
+
380
+ tmp421_reset(&s->i2c);
381
+
382
+ return 0;
383
+}
384
+
385
+static void tmp421_initfn(Object *obj)
386
+{
387
+ object_property_add(obj, "temperature0", "int",
388
+ tmp421_get_temperature,
389
+ tmp421_set_temperature, NULL, NULL, NULL);
390
+ object_property_add(obj, "temperature1", "int",
391
+ tmp421_get_temperature,
392
+ tmp421_set_temperature, NULL, NULL, NULL);
393
+ object_property_add(obj, "temperature2", "int",
394
+ tmp421_get_temperature,
395
+ tmp421_set_temperature, NULL, NULL, NULL);
396
+ object_property_add(obj, "temperature3", "int",
397
+ tmp421_get_temperature,
398
+ tmp421_set_temperature, NULL, NULL, NULL);
399
+}
400
+
401
+static void tmp421_class_init(ObjectClass *klass, void *data)
402
+{
403
+ DeviceClass *dc = DEVICE_CLASS(klass);
404
+ I2CSlaveClass *k = I2C_SLAVE_CLASS(klass);
405
+ TMP421Class *sc = TMP421_CLASS(klass);
406
+
407
+ k->init = tmp421_init;
408
+ k->event = tmp421_event;
409
+ k->recv = tmp421_rx;
410
+ k->send = tmp421_tx;
411
+ dc->vmsd = &vmstate_tmp421;
412
+ sc->dev = (DeviceInfo *) data;
413
+}
414
+
415
+static const TypeInfo tmp421_info = {
416
+ .name = TYPE_TMP421,
417
+ .parent = TYPE_I2C_SLAVE,
418
+ .instance_size = sizeof(TMP421State),
419
+ .instance_init = tmp421_initfn,
420
+ .class_init = tmp421_class_init,
421
+};
422
+
423
+static void tmp421_register_types(void)
424
+{
425
+ int i;
426
+
427
+ type_register_static(&tmp421_info);
428
+ for (i = 0; i < ARRAY_SIZE(devices); ++i) {
429
+ TypeInfo ti = {
430
+ .name = devices[i].name,
431
+ .parent = TYPE_TMP421,
432
+ .class_init = tmp421_class_init,
433
+ .class_data = (void *) &devices[i],
434
+ };
435
+ type_register(&ti);
436
+ }
437
+}
438
+
439
+type_init(tmp421_register_types)
440
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
441
index XXXXXXX..XXXXXXX 100644
442
--- a/default-configs/arm-softmmu.mak
443
+++ b/default-configs/arm-softmmu.mak
444
@@ -XXX,XX +XXX,XX @@ CONFIG_TWL92230=y
445
CONFIG_TSC2005=y
446
CONFIG_LM832X=y
447
CONFIG_TMP105=y
448
+CONFIG_TMP421=y
449
CONFIG_STELLARIS=y
450
CONFIG_STELLARIS_INPUT=y
451
CONFIG_STELLARIS_ENET=y
452
--
453
2.7.4
454
455
diff view generated by jsdifflib
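As a worked example of the 8.8 fixed-point conversion used by the tmp421 model above (standard range, so the extended-range offset is 0): the snippet below just reproduces the patch's set/get arithmetic for one value and is an editor illustration, not part of the device model:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t temp_millideg = 25000;   /* qom-set value: 25.000 C */
        /* set path, as in tmp421_set_temperature(): 6399 = 0x18FF */
        int16_t reg = (int16_t)((temp_millideg * 256 - 128) / 1000);
        /* get path, as in tmp421_get_temperature(): 24996 millidegrees */
        int64_t back = ((int64_t)reg * 1000 + 128) / 256;

        printf("register 0x%04x, MSB %d, LSB high nibble 0x%x0, read back %lld\n",
               (uint16_t)reg, reg >> 8, (reg & 0xf0) >> 4, (long long)back);
        return 0;
    }

The model stores temperatures in 1/256 degree units, so a value set in millidegrees does not always round-trip exactly: 25000 (25.000 C) is stored as 0x18FF and reads back as 24996.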
Deleted patch
From: Cédric Le Goater <clg@kaod.org>

Temperatures can be changed from the monitor with:

    (qemu) qom-set /machine/unattached/device[2] temperature0 12000

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1494827476-1487-7-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/arm/aspeed.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed.c
+++ b/hw/arm/aspeed.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_board_init(MachineState *machine,
static void palmetto_bmc_i2c_init(AspeedBoardState *bmc)
{
AspeedSoCState *soc = &bmc->soc;
+ DeviceState *dev;

/* The palmetto platform expects a ds3231 RTC but a ds1338 is
* enough to provide basic RTC features. Alarms will be missing */
i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 0), "ds1338", 0x68);
+
+ /* add a TMP423 temperature sensor */
+ dev = i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 2),
+ "tmp423", 0x4c);
+ object_property_set_int(OBJECT(dev), 31000, "temperature0", &error_abort);
+ object_property_set_int(OBJECT(dev), 28000, "temperature1", &error_abort);
+ object_property_set_int(OBJECT(dev), 20000, "temperature2", &error_abort);
+ object_property_set_int(OBJECT(dev), 110000, "temperature3", &error_abort);
}

static void palmetto_bmc_init(MachineState *machine)
--
2.7.4
diff view generated by jsdifflib