ARM pullreq; contains some patches that arrived while I
was on holiday, plus the series I sent off before going
away, which got reviewed while I was away.

thanks
-- PMM

The following changes since commit c077a998eb3fcae2d048e3baeb5bc592d30fddde:

  Merge remote-tracking branch 'remotes/riku/tags/pull-linux-user-20170531' into staging (2017-06-01 15:50:40 +0100)

are available in the git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20170601

for you to fetch changes up to cdc58be430b0bdeaef282e2e70f8135ae531616d:

  hw/arm/virt: fdt: generate distance-map when needed (2017-06-01 17:27:07 +0100)

----------------------------------------------------------------
target-arm queue:
 * virt: numa: provide ACPI distance info when needed
 * aspeed: fix i2c controller bugs
 * aspeed: add temperature sensor device
 * M profile: support MPU
 * gicv3: fix mishandling of BPR1, VBPR1
 * load_uboot_image: don't assume a full header read
 * libvixl: Correct build failures on NetBSD

----------------------------------------------------------------
Andrew Jones (3):
      load_uboot_image: don't assume a full header read
      hw/arm/virt-acpi-build: build SLIT when needed
      hw/arm/virt: fdt: generate distance-map when needed

Cédric Le Goater (6):
      aspeed/i2c: improve command handling
      aspeed/i2c: handle LAST command under the RX command
      aspeed/i2c: introduce a state machine
      aspeed: add some I2C devices to the Aspeed machines
      hw/misc: add a TMP42{1,2,3} device model
      aspeed: add a temp sensor device on I2C bus 3

Kamil Rytarowski (1):
      libvixl: Correct build failures on NetBSD

Michael Davidsaver (4):
      armv7m: Improve "-d mmu" tracing for PMSAv7 MPU
      armv7m: Implement M profile default memory map
      armv7m: Classify faults as MemManage or BusFault
      arm: add MPU support to M profile CPUs

Peter Maydell (12):
      hw/intc/arm_gicv3_cpuif: Fix reset value for VMCR_EL2.VBPR1
      hw/intc/arm_gicv3_cpuif: Don't let BPR be set below its minimum
      hw/intc/arm_gicv3_cpuif: Fix priority masking for NS BPR1
      arm: Use the mmu_idx we're passed in arm_cpu_do_unaligned_access()
      arm: Add support for M profile CPUs having different MMU index semantics
      arm: Use different ARMMMUIdx values for M profile
      arm: Clean up handling of no-MPU PMSA CPUs
      arm: Don't clear ARM_FEATURE_PMSA for no-mpu configs
      arm: Don't let no-MPU PMSA cores write to SCTLR.M
      arm: Remove unnecessary check on cpu->pmsav7_dregion
      arm: All M profile cores are PMSA
      arm: Implement HFNMIENA support for M profile MPU

Wei Huang (1):
      target/arm: clear PMUVER field of AA64DFR0 when vPMU=off

 disas/libvixl/Makefile.objs     |   3 +
 hw/misc/Makefile.objs           |   1 +
 target/arm/cpu.h                | 118 ++++++++++--
 target/arm/translate.h          |   2 +-
 hw/arm/aspeed.c                 |  36 ++++
 hw/arm/virt-acpi-build.c        |   4 +
 hw/arm/virt.c                   |  21 +++
 hw/core/loader.c                |   3 +-
 hw/i2c/aspeed_i2c.c             |  65 ++++++-
 hw/intc/arm_gicv3_cpuif.c       |  50 ++++-
 hw/intc/armv7m_nvic.c           | 104 +++++++++++
 hw/misc/tmp421.c                | 401 ++++++++++++++++++++++++++++++++++++++++
 target/arm/cpu.c                |  28 ++-
 target/arm/helper.c             | 338 ++++++++++++++++++++++-----------
 target/arm/machine.c            |   7 +-
 target/arm/op_helper.c          |   3 +-
 target/arm/translate-a64.c      |  18 +-
 target/arm/translate.c          |  14 +-
 default-configs/arm-softmmu.mak |   1 +
 19 files changed, 1060 insertions(+), 157 deletions(-)
 create mode 100644 hw/misc/tmp421.c


A last small set of bug fixes before rc1.

thanks
-- PMM

The following changes since commit ed8ad9728a9c0eec34db9dff61dfa2f1dd625637:

  Merge tag 'pull-tpm-2023-07-14-1' of https://github.com/stefanberger/qemu-tpm into staging (2023-07-15 14:54:04 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230717

for you to fetch changes up to c2c1c4a35c7c2b1a4140b0942b9797c857e476a4:

  hw/nvram: Avoid unnecessary Xilinx eFuse backstore write (2023-07-17 11:05:52 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/sbsa-ref: set 'slots' property of xhci
 * linux-user: Remove pointless NULL check in clock_adjtime handling
 * ptw: Fix S1_ptw_translate() debug path
 * ptw: Account for FEAT_RME when applying {N}SW, SA bits
 * accel/tcg: Zero-pad PC in TCG CPU exec trace lines
 * hw/nvram: Avoid unnecessary Xilinx eFuse backstore write

----------------------------------------------------------------
Peter Maydell (5):
      linux-user: Remove pointless NULL check in clock_adjtime handling
      target/arm/ptw.c: Add comments to S1Translate struct fields
      target/arm: Fix S1_ptw_translate() debug path
      target/arm/ptw.c: Account for FEAT_RME when applying {N}SW, SA bits
      accel/tcg: Zero-pad PC in TCG CPU exec trace lines

Tong Ho (1):
      hw/nvram: Avoid unnecessary Xilinx eFuse backstore write

Yuquan Wang (1):
      hw/arm/sbsa-ref: set 'slots' property of xhci

 accel/tcg/cpu-exec.c      |  4 +--
 accel/tcg/translate-all.c |  2 +-
 hw/arm/sbsa-ref.c         |  1 +
 hw/nvram/xlnx-efuse.c     | 11 ++++--
 linux-user/syscall.c      | 12 +++----
 target/arm/ptw.c          | 90 +++++++++++++++++++++++++++++++++++++++++------
 6 files changed, 98 insertions(+), 22 deletions(-)

From: Kamil Rytarowski <n54@gmx.com>

Ensure that C99 macros are defined regardless of the inclusion order of
headers in vixl. This is required at least on NetBSD.

The vixl/globals.h headers defines __STDC_CONSTANT_MACROS and must be
included before other system headers.

This file defines unconditionally the following macros, without altering
the original sources:
 - __STDC_CONSTANT_MACROS
 - __STDC_LIMIT_MACROS
 - __STDC_FORMAT_MACROS

Signed-off-by: Kamil Rytarowski <n54@gmx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170514051820.15985-1-n54@gmx.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 disas/libvixl/Makefile.objs | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/disas/libvixl/Makefile.objs b/disas/libvixl/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/disas/libvixl/Makefile.objs
+++ b/disas/libvixl/Makefile.objs
@@ -XXX,XX +XXX,XX @@ libvixl_OBJS = vixl/utils.o \
# The -Wno-sign-compare is needed only for gcc 4.6, which complains about
# some signed-unsigned equality comparisons which later gcc versions do not.
$(addprefix $(obj)/,$(libvixl_OBJS)): QEMU_CFLAGS := -I$(SRC_PATH)/disas/libvixl $(QEMU_CFLAGS) -Wno-sign-compare
+# Ensure that C99 macros are defined regardless of the inclusion order of
+# headers in vixl. This is required at least on NetBSD.
+$(addprefix $(obj)/,$(libvixl_OBJS)): QEMU_CFLAGS += -D__STDC_CONSTANT_MACROS -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS

common-obj-$(CONFIG_ARM_A64_DIS) += $(libvixl_OBJS)
--
2.7.4

From: Andrew Jones <drjones@redhat.com>

Don't allow load_uboot_image() to proceed when fewer bytes than
the header size were read.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Message-id: 20170524091315.20284-1-drjones@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/core/loader.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/core/loader.c b/hw/core/loader.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/loader.c
+++ b/hw/core/loader.c
@@ -XXX,XX +XXX,XX @@ static int load_uboot_image(const char *filename, hwaddr *ep, hwaddr *loadaddr,
return -1;

size = read(fd, hdr, sizeof(uboot_image_header_t));
- if (size < 0)
+ if (size < sizeof(uboot_image_header_t)) {
goto out;
+ }

bswap_uboot_header(hdr);

--
2.7.4

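The bug fixed above is the classic short-read case: read() can report success while returning fewer bytes than requested, and the old `size < 0` test only caught outright I/O errors. The following is a standalone sketch of the same defensive pattern (illustrative only, not QEMU code; the struct and helper names here are made up):

```c
#include <stdint.h>
#include <unistd.h>

typedef struct {
    uint32_t ih_magic;   /* stand-in for the real uboot_image_header_t */
    uint32_t ih_size;
} example_header_t;

static int read_header(int fd, example_header_t *hdr)
{
    ssize_t size = read(fd, hdr, sizeof(*hdr));

    /* 'size < 0' would only catch I/O errors; a short read still "succeeds" */
    if (size != (ssize_t)sizeof(*hdr)) {
        return -1;   /* error or truncated file: refuse to parse the header */
    }
    return 0;
}
```

Checking for the full header length covers both the error and truncated-file cases, so the code never goes on to parse uninitialised header bytes.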
From: Andrew Jones <drjones@redhat.com>

This is based on a patch Shannon Zhao originally posted.

Cc: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org>
Message-id: 20170529173751.3443-3-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt(VirtMachineState *vms)
"clk24mhz");
qemu_fdt_setprop_cell(fdt, "/apb-pclk", "phandle", vms->clock_phandle);

+ if (have_numa_distance) {
+ int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
+ uint32_t *matrix = g_malloc0(size);
+ int idx, i, j;
+
+ for (i = 0; i < nb_numa_nodes; i++) {
+ for (j = 0; j < nb_numa_nodes; j++) {
+ idx = (i * nb_numa_nodes + j) * 3;
+ matrix[idx + 0] = cpu_to_be32(i);
+ matrix[idx + 1] = cpu_to_be32(j);
+ matrix[idx + 2] = cpu_to_be32(numa_info[i].distance[j]);
+ }
+ }
+
+ qemu_fdt_add_subnode(fdt, "/distance-map");
+ qemu_fdt_setprop_string(fdt, "/distance-map", "compatible",
+ "numa-distance-map-v1");
+ qemu_fdt_setprop(fdt, "/distance-map", "distance-matrix",
+ matrix, size);
+ g_free(matrix);
+ }
}

static void fdt_add_psci_node(const VirtMachineState *vms)
--
2.7.4


From: Yuquan Wang <wangyuquan1236@phytium.com.cn>

This extends the slots of xhci to 64, since the default xhci_sysbus
just supports one slot.

Signed-off-by: Wang Yuquan <wangyuquan1236@phytium.com.cn>
Signed-off-by: Chen Baozi <chenbaozi@phytium.com.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20230710063750.473510-2-wangyuquan1236@phytium.com.cn
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void create_xhci(const SBSAMachineState *sms)
hwaddr base = sbsa_ref_memmap[SBSA_XHCI].base;
int irq = sbsa_ref_irqmap[SBSA_XHCI];
DeviceState *dev = qdev_new(TYPE_XHCI_SYSBUS);
+ qdev_prop_set_uint32(dev, "slots", XHCI_MAXSLOTS);

sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
--
2.34.1

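For reference, the "distance-matrix" property generated by the hw/arm/virt patch above is a flat array of (first node, second node, distance) triples, each cell a big-endian 32-bit value, placed under a /distance-map node with compatible "numa-distance-map-v1". A minimal standalone sketch of that layout for a hypothetical two-node system (not QEMU code; the distances are made up and htonl() stands in for cpu_to_be32()):

```c
#include <stdint.h>
#include <arpa/inet.h>   /* htonl() as a stand-in for cpu_to_be32() */

enum { NB_NODES = 2 };

static void build_distance_matrix(uint32_t matrix[NB_NODES * NB_NODES * 3])
{
    /* Example distances: 10 on the diagonal, 20 between the two nodes */
    static const uint32_t dist[NB_NODES][NB_NODES] = {
        { 10, 20 },
        { 20, 10 },
    };

    for (int i = 0; i < NB_NODES; i++) {
        for (int j = 0; j < NB_NODES; j++) {
            int idx = (i * NB_NODES + j) * 3;
            matrix[idx + 0] = htonl(i);             /* first node  */
            matrix[idx + 1] = htonl(j);             /* second node */
            matrix[idx + 2] = htonl(dist[i][j]);    /* distance    */
        }
    }
}
```

A guest kernel that understands numa-distance-map-v1 reads this back into the same N x N distance table the user supplied on the QEMU command line.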
When we calculate the mask to use to get the group priority from
an interrupt priority, the way that NS BPR1 is handled differs
from how BPR0 and S BPR1 work -- a BPR1 value of 1 means
the group priority is in bits [7:1], whereas for BPR0 and S BPR1
this is indicated by a 0 BPR value.

Subtract 1 from the BPR value before creating the mask if
we're using the NS BPR value, for both hardware and virtual
interrupts, as the GICv3 pseudocode does, and fix the comments
accordingly.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493226792-3237-4-git-send-email-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_cpuif.c | 42 ++++++++++++++++++++++++++++++++++++++----
 1 file changed, 38 insertions(+), 4 deletions(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static uint32_t icv_gprio_mask(GICv3CPUState *cs, int group)
{
/* Return a mask word which clears the subpriority bits from
* a priority value for a virtual interrupt in the specified group.
- * This depends on the VBPR value:
+ * This depends on the VBPR value.
+ * If using VBPR0 then:
* a BPR of 0 means the group priority bits are [7:1];
* a BPR of 1 means they are [7:2], and so on down to
* a BPR of 7 meaning no group priority bits at all.
+ * If using VBPR1 then:
+ * a BPR of 0 is impossible (the minimum value is 1)
+ * a BPR of 1 means the group priority bits are [7:1];
+ * a BPR of 2 means they are [7:2], and so on down to
+ * a BPR of 7 meaning the group priority is [7].
+ *
* Which BPR to use depends on the group of the interrupt and
* the current ICH_VMCR_EL2.VCBPR settings.
+ *
+ * This corresponds to the VGroupBits() pseudocode.
*/
+ int bpr;
+
if (group == GICV3_G1NS && cs->ich_vmcr_el2 & ICH_VMCR_EL2_VCBPR) {
group = GICV3_G0;
}

- return ~0U << (read_vbpr(cs, group) + 1);
+ bpr = read_vbpr(cs, group);
+ if (group == GICV3_G1NS) {
+ assert(bpr > 0);
+ bpr--;
+ }
+
+ return ~0U << (bpr + 1);
}

static bool icv_hppi_can_preempt(GICv3CPUState *cs, uint64_t lr)
@@ -XXX,XX +XXX,XX @@ static uint32_t icc_gprio_mask(GICv3CPUState *cs, int group)
{
/* Return a mask word which clears the subpriority bits from
* a priority value for an interrupt in the specified group.
- * This depends on the BPR value:
+ * This depends on the BPR value. For CBPR0 (S or NS):
* a BPR of 0 means the group priority bits are [7:1];
* a BPR of 1 means they are [7:2], and so on down to
* a BPR of 7 meaning no group priority bits at all.
+ * For CBPR1 NS:
+ * a BPR of 0 is impossible (the minimum value is 1)
+ * a BPR of 1 means the group priority bits are [7:1];
+ * a BPR of 2 means they are [7:2], and so on down to
+ * a BPR of 7 meaning the group priority is [7].
+ *
* Which BPR to use depends on the group of the interrupt and
* the current ICC_CTLR.CBPR settings.
+ *
+ * This corresponds to the GroupBits() pseudocode.
*/
+ int bpr;
+
if ((group == GICV3_G1 && cs->icc_ctlr_el1[GICV3_S] & ICC_CTLR_EL1_CBPR) ||
(group == GICV3_G1NS &&
cs->icc_ctlr_el1[GICV3_NS] & ICC_CTLR_EL1_CBPR)) {
group = GICV3_G0;
}

- return ~0U << ((cs->icc_bpr[group] & 7) + 1);
+ bpr = cs->icc_bpr[group] & 7;
+
+ if (group == GICV3_G1NS) {
+ assert(bpr > 0);
+ bpr--;
+ }
+
+ return ~0U << (bpr + 1);
}

static bool icc_no_enabled_hppi(GICv3CPUState *cs)
--
2.7.4


In the code for TARGET_NR_clock_adjtime, we set the pointer phtx to
the address of the local variable htx. This means it can never be
NULL, but later in the code we check it for NULL anyway. Coverity
complains about this (CID 1507683) because the NULL check comes after
a call to clock_adjtime() that assumes it is non-NULL.

Since phtx is always &htx, and is used only in three places, it's not
really necessary. Remove it, bringing the code structure into line
with that for TARGET_NR_clock_adjtime64, which already uses a simple
'&htx' when it wants a pointer to 'htx'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230623144410.1837261-1-peter.maydell@linaro.org
---
 linux-user/syscall.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(CPUArchState *cpu_env, int num, abi_long arg1,
#if defined(TARGET_NR_clock_adjtime) && defined(CONFIG_CLOCK_ADJTIME)
case TARGET_NR_clock_adjtime:
{
- struct timex htx, *phtx = &htx;
+ struct timex htx;

- if (target_to_host_timex(phtx, arg2) != 0) {
+ if (target_to_host_timex(&htx, arg2) != 0) {
return -TARGET_EFAULT;
}
- ret = get_errno(clock_adjtime(arg1, phtx));
- if (!is_error(ret) && phtx) {
- if (host_to_target_timex(arg2, phtx) != 0) {
- return -TARGET_EFAULT;
- }
+ ret = get_errno(clock_adjtime(arg1, &htx));
+ if (!is_error(ret) && host_to_target_timex(arg2, &htx)) {
+ return -TARGET_EFAULT;
}
}
return ret;
--
2.34.1

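The arithmetic behind the NS BPR1 fix in the GICv3 patch above can be seen in isolation: the group-priority mask is ~0 shifted left by (BPR + 1), and for NS BPR1 the register value is biased by one before the shift. A standalone sketch of that calculation (not the QEMU functions themselves):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint32_t gprio_mask(int bpr, bool ns_bpr1)
{
    if (ns_bpr1) {
        assert(bpr > 0);    /* NS BPR1 can never be 0 */
        bpr--;              /* bias so NS BPR1 == 1 behaves like BPR0 == 0 */
    }
    return ~0U << (bpr + 1);
}

int main(void)
{
    /* BPR0 == 1: group priority is bits [7:2], mask 0xfffffffc */
    assert(gprio_mask(1, false) == (~0U << 2));
    /* NS BPR1 == 1: group priority is bits [7:1], mask 0xfffffffe */
    assert(gprio_mask(1, true) == (~0U << 1));
    return 0;
}
```

So an NS BPR1 value of 1 now produces the same [7:1] grouping that a BPR0 of 0 does, matching the GroupBits()/VGroupBits() pseudocode the commit message refers to.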
We were setting the VBPR1 field of VMCR_EL2 to icv_min_vbpr()
on reset, but this is not correct. The field should reset to
the minimum value of ICV_BPR0_EL1 plus one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493226792-3237-2-git-send-email-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_cpuif.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
cs->ich_hcr_el2 = 0;
memset(cs->ich_lr_el2, 0, sizeof(cs->ich_lr_el2));
cs->ich_vmcr_el2 = ICH_VMCR_EL2_VFIQEN |
- (icv_min_vbpr(cs) << ICH_VMCR_EL2_VBPR1_SHIFT) |
+ ((icv_min_vbpr(cs) + 1) << ICH_VMCR_EL2_VBPR1_SHIFT) |
(icv_min_vbpr(cs) << ICH_VMCR_EL2_VBPR0_SHIFT);
}

--
2.7.4


Add comments to the in_* fields in the S1Translate struct
that explain what they're doing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-2-peter.maydell@linaro.org
---
 target/arm/ptw.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
#endif

typedef struct S1Translate {
+ /*
+ * in_mmu_idx : specifies which TTBR, TCR, etc to use for the walk.
+ * Together with in_space, specifies the architectural translation regime.
+ */
ARMMMUIdx in_mmu_idx;
+ /*
+ * in_ptw_idx: specifies which mmuidx to use for the actual
+ * page table descriptor load operations. This will be one of the
+ * ARMMMUIdx_Stage2* or one of the ARMMMUIdx_Phys_* indexes.
+ * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
+ * this field is updated accordingly.
+ */
ARMMMUIdx in_ptw_idx;
+ /*
+ * in_space: the security space for this walk. This plus
+ * the in_mmu_idx specify the architectural translation regime.
+ * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
+ * this field is updated accordingly.
+ *
+ * Note that the security space for the in_ptw_idx may be different
+ * from that for the in_mmu_idx. We do not need to explicitly track
+ * the in_ptw_idx security space because:
+ * - if the in_ptw_idx is an ARMMMUIdx_Phys_* then the mmuidx
+ * itself specifies the security space
+ * - if the in_ptw_idx is an ARMMMUIdx_Stage2* then the security
+ * space used for ptw reads is the same as that of the security
+ * space of the stage 1 translation for all cases except where
+ * stage 1 is Secure; in that case the only possibilities for
+ * the ptw read are Secure and NonSecure, and the in_ptw_idx
+ * value being Stage2 vs Stage2_S distinguishes those.
+ */
ARMSecuritySpace in_space;
+ /*
+ * in_secure: whether the translation regime is a Secure one.
+ * This is always equal to arm_space_is_secure(in_space).
+ * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
+ * this field is updated accordingly.
+ */
bool in_secure;
+ /*
+ * in_debug: is this a QEMU debug access (gdbstub, etc)? Debug
+ * accesses will not update the guest page table access flags
+ * and will not change the state of the softmmu TLBs.
+ */
bool in_debug;
/*
* If this is stage 2 of a stage 1+2 page table walk, then this must
--
2.34.1

From: Michael Davidsaver <mdavidsaver@gmail.com>

The M series MPU is almost the same as the already implemented R
profile MPU (v7 PMSA). So all we need to implement here is the MPU
register interface in the system register space.

This implementation has the same restriction as the R profile MPU
that it doesn't permit regions to be sized down smaller than 1K.

We also do not yet implement support for MPU_CTRL.HFNMIENA; this
bit, if zero, should disable use of the MPU when running HardFault,
NMI or with FAULTMASK set to 1 (ie at an execution priority of
less than zero) -- if the MPU is enabled we don't treat these
cases any differently.

Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
Message-id: 1493122030-32191-13-git-send-email-peter.maydell@linaro.org
[PMM: Keep all the bits in mpu_ctrl field, rather than
using SCTLR bits for them; drop broken HFNMIENA support;
various cleanup]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h      |   6 +++
 hw/intc/armv7m_nvic.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c   |  25 +++++++++++-
 target/arm/machine.c  |   5 ++-
 4 files changed, 137 insertions(+), 3 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
uint32_t dfsr; /* Debug Fault Status Register */
uint32_t mmfar; /* MemManage Fault Address */
uint32_t bfar; /* BusFault Address */
+ unsigned mpu_ctrl; /* MPU_CTRL (some bits kept in sctlr_el[1]) */
int exception;
} v7m;

@@ -XXX,XX +XXX,XX @@ FIELD(V7M_DFSR, DWTTRAP, 2, 1)
FIELD(V7M_DFSR, VCATCH, 3, 1)
FIELD(V7M_DFSR, EXTERNAL, 4, 1)

+/* v7M MPU_CTRL bits */
+FIELD(V7M_MPU_CTRL, ENABLE, 0, 1)
+FIELD(V7M_MPU_CTRL, HFNMIENA, 1, 1)
+FIELD(V7M_MPU_CTRL, PRIVDEFENA, 2, 1)
+
/* If adding a feature bit which corresponds to a Linux ELF
* HWCAP bit, remember to update the feature-bit-to-hwcap
* mapping in linux-user/elfload.c:get_elf_hwcap().
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@
#include "hw/arm/arm.h"
#include "hw/arm/armv7m_nvic.h"
#include "target/arm/cpu.h"
+#include "exec/exec-all.h"
#include "qemu/log.h"
#include "trace.h"

@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset)
case 0xd70: /* ISAR4. */
return 0x01310102;
/* TODO: Implement debug registers. */
+ case 0xd90: /* MPU_TYPE */
+ /* Unified MPU; if the MPU is not present this value is zero */
+ return cpu->pmsav7_dregion << 8;
+ break;
+ case 0xd94: /* MPU_CTRL */
+ return cpu->env.v7m.mpu_ctrl;
+ case 0xd98: /* MPU_RNR */
+ return cpu->env.cp15.c6_rgnr;
+ case 0xd9c: /* MPU_RBAR */
+ case 0xda4: /* MPU_RBAR_A1 */
+ case 0xdac: /* MPU_RBAR_A2 */
+ case 0xdb4: /* MPU_RBAR_A3 */
+ {
+ int region = cpu->env.cp15.c6_rgnr;
+
+ if (region >= cpu->pmsav7_dregion) {
+ return 0;
+ }
+ return (cpu->env.pmsav7.drbar[region] & 0x1f) | (region & 0xf);
+ }
+ case 0xda0: /* MPU_RASR */
+ case 0xda8: /* MPU_RASR_A1 */
+ case 0xdb0: /* MPU_RASR_A2 */
+ case 0xdb8: /* MPU_RASR_A3 */
+ {
+ int region = cpu->env.cp15.c6_rgnr;
+
+ if (region >= cpu->pmsav7_dregion) {
+ return 0;
+ }
+ return ((cpu->env.pmsav7.dracr[region] & 0xffff) << 16) |
+ (cpu->env.pmsav7.drsr[region] & 0xffff);
+ }
default:
qemu_log_mask(LOG_GUEST_ERROR, "NVIC: Bad read offset 0x%x\n", offset);
return 0;
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value)
qemu_log_mask(LOG_UNIMP,
"NVIC: Aux fault status registers unimplemented\n");
break;
+ case 0xd90: /* MPU_TYPE */
+ return; /* RO */
+ case 0xd94: /* MPU_CTRL */
+ if ((value &
+ (R_V7M_MPU_CTRL_HFNMIENA_MASK | R_V7M_MPU_CTRL_ENABLE_MASK))
+ == R_V7M_MPU_CTRL_HFNMIENA_MASK) {
+ qemu_log_mask(LOG_GUEST_ERROR, "MPU_CTRL: HFNMIENA and !ENABLE is "
+ "UNPREDICTABLE\n");
+ }
+ cpu->env.v7m.mpu_ctrl = value & (R_V7M_MPU_CTRL_ENABLE_MASK |
+ R_V7M_MPU_CTRL_HFNMIENA_MASK |
+ R_V7M_MPU_CTRL_PRIVDEFENA_MASK);
+ tlb_flush(CPU(cpu));
+ break;
+ case 0xd98: /* MPU_RNR */
+ if (value >= cpu->pmsav7_dregion) {
+ qemu_log_mask(LOG_GUEST_ERROR, "MPU region out of range %"
+ PRIu32 "/%" PRIu32 "\n",
+ value, cpu->pmsav7_dregion);
+ } else {
+ cpu->env.cp15.c6_rgnr = value;
+ }
+ break;
+ case 0xd9c: /* MPU_RBAR */
+ case 0xda4: /* MPU_RBAR_A1 */
+ case 0xdac: /* MPU_RBAR_A2 */
+ case 0xdb4: /* MPU_RBAR_A3 */
+ {
+ int region;
+
+ if (value & (1 << 4)) {
+ /* VALID bit means use the region number specified in this
+ * value and also update MPU_RNR.REGION with that value.
+ */
+ region = extract32(value, 0, 4);
+ if (region >= cpu->pmsav7_dregion) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "MPU region out of range %u/%" PRIu32 "\n",
+ region, cpu->pmsav7_dregion);
+ return;
+ }
+ cpu->env.cp15.c6_rgnr = region;
+ } else {
+ region = cpu->env.cp15.c6_rgnr;
+ }
+
+ if (region >= cpu->pmsav7_dregion) {
+ return;
+ }
+
+ cpu->env.pmsav7.drbar[region] = value & ~0x1f;
+ tlb_flush(CPU(cpu));
+ break;
+ }
+ case 0xda0: /* MPU_RASR */
+ case 0xda8: /* MPU_RASR_A1 */
+ case 0xdb0: /* MPU_RASR_A2 */
+ case 0xdb8: /* MPU_RASR_A3 */
+ {
+ int region = cpu->env.cp15.c6_rgnr;
+
+ if (region >= cpu->pmsav7_dregion) {
+ return;
+ }
+
+ cpu->env.pmsav7.drsr[region] = value & 0xff3f;
+ cpu->env.pmsav7.dracr[region] = (value >> 16) & 0x173f;
+ tlb_flush(CPU(cpu));
+ break;
+ }
case 0xf00: /* Software Triggered Interrupt Register */
{
/* user mode can only write to STIR if CCR.USERSETMPEND permits it */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
static inline bool regime_translation_disabled(CPUARMState *env,
ARMMMUIdx mmu_idx)
{
+ if (arm_feature(env, ARM_FEATURE_M)) {
+ return !(env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_ENABLE_MASK);
+ }
+
if (mmu_idx == ARMMMUIdx_S2NS) {
return (env->cp15.hcr_el2 & HCR_VM) == 0;
}
@@ -XXX,XX +XXX,XX @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env,
}
}

+static bool pmsav7_use_background_region(ARMCPU *cpu,
+ ARMMMUIdx mmu_idx, bool is_user)
+{
+ /* Return true if we should use the default memory map as a
+ * "background" region if there are no hits against any MPU regions.
+ */
+ CPUARMState *env = &cpu->env;
+
+ if (is_user) {
+ return false;
+ }
+
+ if (arm_feature(env, ARM_FEATURE_M)) {
+ return env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_PRIVDEFENA_MASK;
+ } else {
+ return regime_sctlr(env, mmu_idx) & SCTLR_BR;
+ }
+}
+
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
int access_type, ARMMMUIdx mmu_idx,
hwaddr *phys_ptr, int *prot, uint32_t *fsr)
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
}

if (n == -1) { /* no hits */
- if (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR)) {
+ if (!pmsav7_use_background_region(cpu, mmu_idx, is_user)) {
/* background fault */
*fsr = 0;
return true;
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static bool m_needed(void *opaque)

static const VMStateDescription vmstate_m = {
.name = "cpu/m",
- .version_id = 3,
- .minimum_version_id = 3,
+ .version_id = 4,
+ .minimum_version_id = 4,
.needed = m_needed,
.fields = (VMStateField[]) {
VMSTATE_UINT32(env.v7m.vecbase, ARMCPU),
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
VMSTATE_UINT32(env.v7m.dfsr, ARMCPU),
VMSTATE_UINT32(env.v7m.mmfar, ARMCPU),
VMSTATE_UINT32(env.v7m.bfar, ARMCPU),
+ VMSTATE_UINT32(env.v7m.mpu_ctrl, ARMCPU),
VMSTATE_INT32(env.v7m.exception, ARMCPU),
VMSTATE_END_OF_LIST()
}
--
2.7.4


In commit fe4a5472ccd6 we rearranged the logic in S1_ptw_translate()
so that the debug-access "call get_phys_addr_*" codepath is used both
when S1 is doing ptw reads from stage 2 and when it is doing ptw
reads from physical memory. However, we didn't update the
calculation of s2ptw->in_space and s2ptw->in_secure to account for
the "ptw reads from physical memory" case. This meant that debug
accesses when in Secure state broke.

Create a new function S2_security_space() which returns the
correct security space to use for the ptw load, and use it to
determine the correct .in_secure and .in_space fields for the
stage 2 lookup for the ptw load.

Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-3-peter.maydell@linaro.org
Fixes: fe4a5472ccd6 ("target/arm: Use get_phys_addr_with_struct in S1_ptw_translate")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 37 ++++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
}
}

+static ARMSecuritySpace S2_security_space(ARMSecuritySpace s1_space,
+ ARMMMUIdx s2_mmu_idx)
+{
+ /*
+ * Return the security space to use for stage 2 when doing
+ * the S1 page table descriptor load.
+ */
+ if (regime_is_stage2(s2_mmu_idx)) {
+ /*
+ * The security space for ptw reads is almost always the same
+ * as that of the security space of the stage 1 translation.
+ * The only exception is when stage 1 is Secure; in that case
+ * the ptw read might be to the Secure or the NonSecure space
+ * (but never Realm or Root), and the s2_mmu_idx tells us which.
+ * Root translations are always single-stage.
+ */
+ if (s1_space == ARMSS_Secure) {
+ return arm_secure_to_space(s2_mmu_idx == ARMMMUIdx_Stage2_S);
+ } else {
+ assert(s2_mmu_idx != ARMMMUIdx_Stage2_S);
+ assert(s1_space != ARMSS_Root);
+ return s1_space;
+ }
+ } else {
+ /* ptw loads are from phys: the mmu idx itself says which space */
+ return arm_phys_to_space(s2_mmu_idx);
+ }
+}
+
/* Translate a S1 pagetable walk through S2 if needed. */
static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
hwaddr addr, ARMMMUFaultInfo *fi)
{
- ARMSecuritySpace space = ptw->in_space;
bool is_secure = ptw->in_secure;
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
ARMMMUIdx s2_mmu_idx = ptw->in_ptw_idx;
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
* From gdbstub, do not use softmmu so that we don't modify the
* state of the cpu at all, including softmmu tlb contents.
*/
+ ARMSecuritySpace s2_space = S2_security_space(ptw->in_space, s2_mmu_idx);
S1Translate s2ptw = {
.in_mmu_idx = s2_mmu_idx,
.in_ptw_idx = ptw_idx_for_stage_2(env, s2_mmu_idx),
- .in_secure = s2_mmu_idx == ARMMMUIdx_Stage2_S,
- .in_space = (s2_mmu_idx == ARMMMUIdx_Stage2_S ? ARMSS_Secure
- : space == ARMSS_Realm ? ARMSS_Realm
- : ARMSS_NonSecure),
+ .in_secure = arm_space_is_secure(s2_space),
+ .in_space = s2_space,
.in_debug = true,
};
GetPhysAddrResult s2 = { };
--
2.34.1

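To make the register interface added by the armv7m_nvic patch above concrete, here is a guest's-eye sketch of programming one MPU region on a v7M core. It is illustrative only, not QEMU code: the register addresses are the architectural System Control Space ones corresponding to the 0xd90-0xda0 offsets handled above, and the RASR attribute value (size, access permissions, enable) is left to the caller rather than invented here.

```c
#include <stdint.h>

#define MPU_TYPE   (*(volatile uint32_t *)0xE000ED90u)
#define MPU_CTRL   (*(volatile uint32_t *)0xE000ED94u)
#define MPU_RNR    (*(volatile uint32_t *)0xE000ED98u)
#define MPU_RBAR   (*(volatile uint32_t *)0xE000ED9Cu)
#define MPU_RASR   (*(volatile uint32_t *)0xE000EDA0u)

#define MPU_RBAR_VALID       (1u << 4)   /* use region number in RBAR[3:0] */
#define MPU_CTRL_ENABLE      (1u << 0)
#define MPU_CTRL_PRIVDEFENA  (1u << 2)

void mpu_setup_region0(uint32_t base, uint32_t rasr_attrs)
{
    if (((MPU_TYPE >> 8) & 0xff) == 0) {
        return;                      /* MPU_TYPE.DREGION == 0: no MPU fitted */
    }
    /* Select region 0 via the RBAR VALID bit instead of writing MPU_RNR */
    MPU_RBAR = (base & ~0x1fu) | MPU_RBAR_VALID | 0 /* region number */;
    MPU_RASR = rasr_attrs;           /* size, AP and ENABLE bits */
    /* Enable the MPU, keeping the default map for privileged accesses */
    MPU_CTRL = MPU_CTRL_ENABLE | MPU_CTRL_PRIVDEFENA;
}
```

Note how the VALID bit (bit 4) in MPU_RBAR both selects the region and updates MPU_RNR.REGION, which is exactly the aliasing the nvic_writel() code above implements; the LOG_GUEST_ERROR cases (out-of-range region numbers, HFNMIENA without ENABLE) are the corners such guest code must avoid.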
icc_bpr_write() was not enforcing that writing a value below the
minimum for the BPR should behave as if the BPR was set to the
minimum value. This doesn't make a difference for the secure
BPRs (since we define the minimum for the QEMU implementation
as zero) but did mean we were allowing the NS BPR1 to be set to
0 when 1 should be the lowest value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493226792-3237-3-git-send-email-peter.maydell@linaro.org
---
 hw/intc/arm_gicv3_cpuif.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri,
{
GICv3CPUState *cs = icc_cs_from_env(env);
int grp = (ri->crm == 8) ? GICV3_G0 : GICV3_G1;
+ uint64_t minval;

if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
icv_bpr_write(env, ri, value);
@@ -XXX,XX +XXX,XX @@ static void icc_bpr_write(CPUARMState *env, const ARMCPRegInfo *ri,
return;
}

+ minval = (grp == GICV3_G1NS) ? GIC_MIN_BPR_NS : GIC_MIN_BPR;
+ if (value < minval) {
+ value = minval;
+ }
+
cs->icc_bpr[grp] = value & 7;
gicv3_cpuif_update(cs);
}
--
2.7.4


In get_phys_addr_twostage() the code that applies the effects of
VSTCR.{SA,SW} and VTCR.{NSA,NSW} only updates result->f.attrs.secure.
Now that we also have f.attrs.space for FEAT_RME, we need to keep the
two in sync.

These bits only have an effect for Secure space translations, not
for Root, so use the input in_space field to determine whether to
apply them rather than the input is_secure. This doesn't actually
make a difference because Root translations are never two-stage,
but it's a little clearer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-4-peter.maydell@linaro.org
---
 target/arm/ptw.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
hwaddr ipa;
int s1_prot, s1_lgpgsz;
bool is_secure = ptw->in_secure;
+ ARMSecuritySpace in_space = ptw->in_space;
bool ret, ipa_secure;
ARMCacheAttrs cacheattrs1;
ARMSecuritySpace ipa_space;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
* Check if IPA translates to secure or non-secure PA space.
* Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
*/
- result->f.attrs.secure =
- (is_secure
- && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
- && (ipa_secure
- || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+ if (in_space == ARMSS_Secure) {
+ result->f.attrs.secure =
+ !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+ && (ipa_secure
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW)));
+ result->f.attrs.space = arm_secure_to_space(result->f.attrs.secure);
+ }

return false;
}
--
2.34.1

From: Wei Huang <wei@redhat.com>

The PMUv3 driver of the Linux kernel (in arch/arm64/kernel/perf_event.c)
relies on the PMUVER field of id_aa64dfr0_el1 to decide if PMU support
is present or not. This patch clears the PMUVER field under TCG mode
when vPMU=off. Without it, PMUv3 will init inside guest VMs even
with vPMU=off. This patch also removes a redundant line inside the
if-statement.

Signed-off-by: Wei Huang <wei@redhat.com>
Message-id: 1495123889-32301-1-git-send-email-wei@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
}

if (!cpu->has_pmu) {
- cpu->has_pmu = false;
unset_feature(env, ARM_FEATURE_PMU);
+ cpu->id_aa64dfr0 &= ~0xf00;
}

if (!arm_feature(env, ARM_FEATURE_EL2)) {
--
2.7.4

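The one-line CPU change above works because PMUVer occupies bits [11:8] of ID_AA64DFR0_EL1, which is the field the guest's PMUv3 driver probes. A tiny standalone illustration with a made-up register value (not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t id_aa64dfr0 = 0x10305106;          /* made-up ID register value */

    assert(((id_aa64dfr0 >> 8) & 0xf) == 0x1);  /* PMUVer == 1: PMUv3 present */
    id_aa64dfr0 &= ~0xf00;                      /* the change in this patch   */
    assert(((id_aa64dfr0 >> 8) & 0xf) == 0);    /* PMUVer == 0: no PMU        */
    return 0;
}
```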
When identifying the DFSR format for an alignment fault, use
the mmu index that we are passed, rather than calling cpu_mmu_index()
to get the mmu index for the current CPU state. This doesn't actually
make any difference since the only cases where the current MMU index
differs from the index used for the load are the "unprivileged
load/store" instructions, and in that case the mmu index may
differ but the translation regime is the same (apart from the
"use from Hyp mode" case which is UNPREDICTABLE).
However it's the more logical thing to do.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1493122030-32191-2-git-send-email-peter.maydell@linaro.org
---
 target/arm/op_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
/* the DFSR for an alignment fault depends on whether we're using
* the LPAE long descriptor format, or the short descriptor format
*/
- if (arm_s1_regime_using_lpae_format(env, cpu_mmu_index(env, false))) {
+ if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
env->exception.fsr = (1 << 9) | 0x21;
} else {
env->exception.fsr = 0x1;
--
2.7.4

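For context on the two constants written to env->exception.fsr above, here is how they decode per the ARM ARM DFSR layouts (a standalone explanatory sketch, not QEMU code):

```c
#include <assert.h>

int main(void)
{
    unsigned long lpae_fsr  = (1 << 9) | 0x21;   /* 0x221 */
    unsigned long short_fsr = 0x1;

    assert((lpae_fsr & (1 << 9)) != 0);    /* bit 9: long-descriptor (LPAE) format   */
    assert((lpae_fsr & 0x3f) == 0x21);     /* status 0b100001: alignment fault       */
    assert(short_fsr == 0x1);              /* short format: FS = 0b00001, alignment  */
    return 0;
}
```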
diff view generated by jsdifflib
1
The M profile CPU's MPU has an awkward corner case which we
1
In commit f0a08b0913befbd we changed the type of the PC from
2
would like to implement with a different MMU index.
2
target_ulong to vaddr. In doing so we inadvertently dropped the
3
zero-padding on the PC in trace lines (the second item inside the []
4
in these lines). They used to look like this on AArch64, for
5
instance:
3
6
4
We can avoid having to bump the number of MMU modes ARM
7
Trace 0: 0x7f2260000100 [00000000/0000000040000000/00000061/ff200000]
5
uses, because some of our existing MMU indexes are only
6
used by non-M-profile CPUs, so we can borrow one.
7
To avoid that getting too confusing, clean up the code
8
to try to keep the two meanings of the index separate.
9
8
10
Instead of ARMMMUIdx enum values being identical to core QEMU
9
and now they look like this:
11
MMU index values, they are now the core index values with some
10
Trace 0: 0x7f4f50000100 [00000000/40000000/00000061/ff200000]
12
high bits set. Any particular CPU always uses the same high
13
bits (so eventually A profile cores and M profile cores will
14
use different bits). New functions arm_to_core_mmu_idx()
15
and core_to_arm_mmu_idx() convert between the two.
16
11
17
In general core index values are stored in 'int' types, and
12
and if the PC happens to be somewhere low like 0x5000
18
ARM values are stored in ARMMMUIdx types.
13
then the field is shown as /5000/.
19
14
15
This is because TARGET_FMT_lx is a "%08x" or "%016x" specifier,
16
depending on TARGET_LONG_SIZE, whereas VADDR_PRIx is just PRIx64
17
with no width specifier.
18
19
Restore the zero-padding by adding an 016 width specifier to
20
this tracing and a couple of others that were similarly recently
21
changed to use VADDR_PRIx without a width specifier.
22
23
We can't unfortunately restore the "32-bit guests are padded to
24
8 hex digits and 64-bit guests to 16 hex digits" behaviour so
25
easily.
26
27
Fixes: f0a08b0913befbd ("accel/tcg/cpu-exec.c: Widen pc to vaddr")
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Message-id: 1493122030-32191-3-git-send-email-peter.maydell@linaro.org
29
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
30
Reviewed-by: Anton Johansson <anjo@rev.ng>
31
Message-id: 20230711165434.4123674-1-peter.maydell@linaro.org
22
---
32
---
23
target/arm/cpu.h | 71 ++++++++++++++++-----
33
accel/tcg/cpu-exec.c | 4 ++--
24
target/arm/translate.h | 2 +-
34
accel/tcg/translate-all.c | 2 +-
25
target/arm/helper.c | 151 ++++++++++++++++++++++++---------------------
35
2 files changed, 3 insertions(+), 3 deletions(-)
26
target/arm/op_helper.c | 3 +-
27
target/arm/translate-a64.c | 18 ++++--
28
target/arm/translate.c | 10 +--
29
6 files changed, 156 insertions(+), 99 deletions(-)
30
36
31
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
37
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
32
index XXXXXXX..XXXXXXX 100644
38
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu.h
39
--- a/accel/tcg/cpu-exec.c
34
+++ b/target/arm/cpu.h
40
+++ b/accel/tcg/cpu-exec.c
35
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
41
@@ -XXX,XX +XXX,XX @@ static void log_cpu_exec(vaddr pc, CPUState *cpu,
36
* for the accesses done as part of a stage 1 page table walk, rather than
42
if (qemu_log_in_addr_range(pc)) {
37
* having to walk the stage 2 page table over and over.)
43
qemu_log_mask(CPU_LOG_EXEC,
38
*
44
"Trace %d: %p [%08" PRIx64
39
+ * The ARMMMUIdx and the mmu index value used by the core QEMU TLB code
45
- "/%" VADDR_PRIx "/%08x/%08x] %s\n",
40
+ * are not quite the same -- different CPU types (most notably M profile
46
+ "/%016" VADDR_PRIx "/%08x/%08x] %s\n",
41
+ * vs A/R profile) would like to use MMU indexes with different semantics,
47
cpu->cpu_index, tb->tc.ptr, tb->cs_base, pc,
42
+ * but since we don't ever need to use all of those in a single CPU we
48
tb->flags, tb->cflags, lookup_symbol(pc));
43
+ * can avoid setting NB_MMU_MODES to more than 8. The lower bits of
49
44
+ * ARMMMUIdx are the core TLB mmu index, and the higher bits are always
50
@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
45
+ * the same for any particular CPU.
51
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
46
+ * Variables of type ARMMUIdx are always full values, and the core
52
vaddr pc = log_pc(cpu, last_tb);
47
+ * index values are in variables of type 'int'.
53
if (qemu_log_in_addr_range(pc)) {
48
+ *
54
- qemu_log("Stopped execution of TB chain before %p [%"
49
* Our enumeration includes at the end some entries which are not "true"
55
+ qemu_log("Stopped execution of TB chain before %p [%016"
50
* mmu_idx values in that they don't have corresponding TLBs and are only
56
VADDR_PRIx "] %s\n",
51
* valid for doing slow path page table walks.
57
last_tb->tc.ptr, pc, lookup_symbol(pc));
52
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
58
}
53
* of the AT/ATS operations.
59
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
54
* The values used are carefully arranged to make mmu_idx => EL lookup easy.
55
*/
56
+#define ARM_MMU_IDX_A 0x10 /* A profile (and M profile, for the moment) */
57
+#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
58
+
59
+#define ARM_MMU_IDX_TYPE_MASK (~0x7)
60
+#define ARM_MMU_IDX_COREIDX_MASK 0x7
61
+
62
typedef enum ARMMMUIdx {
63
- ARMMMUIdx_S12NSE0 = 0,
64
- ARMMMUIdx_S12NSE1 = 1,
65
- ARMMMUIdx_S1E2 = 2,
66
- ARMMMUIdx_S1E3 = 3,
67
- ARMMMUIdx_S1SE0 = 4,
68
- ARMMMUIdx_S1SE1 = 5,
69
- ARMMMUIdx_S2NS = 6,
70
+ ARMMMUIdx_S12NSE0 = 0 | ARM_MMU_IDX_A,
71
+ ARMMMUIdx_S12NSE1 = 1 | ARM_MMU_IDX_A,
72
+ ARMMMUIdx_S1E2 = 2 | ARM_MMU_IDX_A,
73
+ ARMMMUIdx_S1E3 = 3 | ARM_MMU_IDX_A,
74
+ ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
75
+ ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
76
+ ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
77
/* Indexes below here don't have TLBs and are used only for AT system
78
* instructions or for the first stage of an S12 page table walk.
79
*/
80
- ARMMMUIdx_S1NSE0 = 7,
81
- ARMMMUIdx_S1NSE1 = 8,
82
+ ARMMMUIdx_S1NSE0 = 0 | ARM_MMU_IDX_NOTLB,
83
+ ARMMMUIdx_S1NSE1 = 1 | ARM_MMU_IDX_NOTLB,
84
} ARMMMUIdx;
85
86
+/* Bit macros for the core-mmu-index values for each index,
87
+ * for use when calling tlb_flush_by_mmuidx() and friends.
88
+ */
89
+typedef enum ARMMMUIdxBit {
90
+ ARMMMUIdxBit_S12NSE0 = 1 << 0,
91
+ ARMMMUIdxBit_S12NSE1 = 1 << 1,
92
+ ARMMMUIdxBit_S1E2 = 1 << 2,
93
+ ARMMMUIdxBit_S1E3 = 1 << 3,
94
+ ARMMMUIdxBit_S1SE0 = 1 << 4,
95
+ ARMMMUIdxBit_S1SE1 = 1 << 5,
96
+ ARMMMUIdxBit_S2NS = 1 << 6,
97
+} ARMMMUIdxBit;
98
+
99
#define MMU_USER_IDX 0
100
101
+static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
102
+{
103
+ return mmu_idx & ARM_MMU_IDX_COREIDX_MASK;
104
+}
105
+
106
+static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
107
+{
108
+ return mmu_idx | ARM_MMU_IDX_A;
109
+}
110
+
111
/* Return the exception level we're running at if this is our mmu_idx */
112
static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
113
{
114
- assert(mmu_idx < ARMMMUIdx_S2NS);
115
- return mmu_idx & 3;
116
+ switch (mmu_idx & ARM_MMU_IDX_TYPE_MASK) {
117
+ case ARM_MMU_IDX_A:
118
+ return mmu_idx & 3;
119
+ default:
120
+ g_assert_not_reached();
121
+ }
122
}
123
124
/* Determine the current mmu_idx to use for normal loads/stores */
125
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
126
int el = arm_current_el(env);
127
128
if (el < 2 && arm_is_secure_below_el3(env)) {
129
- return ARMMMUIdx_S1SE0 + el;
130
+ return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0 + el);
131
}
132
return el;
133
}
134
@@ -XXX,XX +XXX,XX @@ static inline uint32_t arm_regime_tbi1(CPUARMState *env, ARMMMUIdx mmu_idx)
135
static inline void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
136
target_ulong *cs_base, uint32_t *flags)
137
{
138
- ARMMMUIdx mmu_idx = cpu_mmu_index(env, false);
139
+ ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
140
if (is_a64(env)) {
141
*pc = env->pc;
142
*flags = ARM_TBFLAG_AARCH64_STATE_MASK;
143
@@ -XXX,XX +XXX,XX @@ static inline void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
144
<< ARM_TBFLAG_XSCALE_CPAR_SHIFT);
145
}
146
147
- *flags |= (mmu_idx << ARM_TBFLAG_MMUIDX_SHIFT);
148
+ *flags |= (arm_to_core_mmu_idx(mmu_idx) << ARM_TBFLAG_MMUIDX_SHIFT);
149
150
/* The SS_ACTIVE and PSTATE_SS bits correspond to the state machine
151
* states defined in the ARM ARM for software singlestep:
152
diff --git a/target/arm/translate.h b/target/arm/translate.h
153
index XXXXXXX..XXXXXXX 100644
60
index XXXXXXX..XXXXXXX 100644
154
--- a/target/arm/translate.h
61
--- a/accel/tcg/translate-all.c
155
+++ b/target/arm/translate.h
62
+++ b/accel/tcg/translate-all.c
156
@@ -XXX,XX +XXX,XX @@ static inline int arm_dc_feature(DisasContext *dc, int feature)
63
@@ -XXX,XX +XXX,XX @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
157
64
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
158
static inline int get_mem_index(DisasContext *s)
65
vaddr pc = log_pc(cpu, tb);
159
{
66
if (qemu_log_in_addr_range(pc)) {
160
- return s->mmu_idx;
67
- qemu_log("cpu_io_recompile: rewound execution of TB to %"
161
+ return arm_to_core_mmu_idx(s->mmu_idx);
68
+ qemu_log("cpu_io_recompile: rewound execution of TB to %016"
162
}
69
VADDR_PRIx "\n", pc);
163
164
/* Function used to determine the target exception EL when otherwise not known
165
diff --git a/target/arm/helper.c b/target/arm/helper.c
166
index XXXXXXX..XXXXXXX 100644
167
--- a/target/arm/helper.c
168
+++ b/target/arm/helper.c
169
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
170
CPUState *cs = ENV_GET_CPU(env);
171
172
tlb_flush_by_mmuidx(cs,
173
- (1 << ARMMMUIdx_S12NSE1) |
174
- (1 << ARMMMUIdx_S12NSE0) |
175
- (1 << ARMMMUIdx_S2NS));
176
+ ARMMMUIdxBit_S12NSE1 |
177
+ ARMMMUIdxBit_S12NSE0 |
178
+ ARMMMUIdxBit_S2NS);
179
}
180
181
static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
182
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
183
CPUState *cs = ENV_GET_CPU(env);
184
185
tlb_flush_by_mmuidx_all_cpus_synced(cs,
186
- (1 << ARMMMUIdx_S12NSE1) |
187
- (1 << ARMMMUIdx_S12NSE0) |
188
- (1 << ARMMMUIdx_S2NS));
189
+ ARMMMUIdxBit_S12NSE1 |
190
+ ARMMMUIdxBit_S12NSE0 |
191
+ ARMMMUIdxBit_S2NS);
192
}
193
194
static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri,
195
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
197
pageaddr = sextract64(value << 12, 0, 40);
198
199
- tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S2NS));
200
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS);
201
}
202
203
static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
204
@@ -XXX,XX +XXX,XX @@ static void tlbiipas2_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
205
pageaddr = sextract64(value << 12, 0, 40);
206
207
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
208
- (1 << ARMMMUIdx_S2NS));
209
+ ARMMMUIdxBit_S2NS);
210
}
211
212
static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
213
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
214
{
215
CPUState *cs = ENV_GET_CPU(env);
216
217
- tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E2));
218
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2);
219
}
220
221
static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
222
@@ -XXX,XX +XXX,XX @@ static void tlbiall_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
223
{
224
CPUState *cs = ENV_GET_CPU(env);
225
226
- tlb_flush_by_mmuidx_all_cpus_synced(cs, (1 << ARMMMUIdx_S1E2));
227
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2);
228
}
229
230
static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
231
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
232
CPUState *cs = ENV_GET_CPU(env);
233
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
234
235
- tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E2));
236
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E2);
237
}
238
239
static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
240
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
241
uint64_t pageaddr = value & ~MAKE_64BIT_MASK(0, 12);
242
243
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
244
- (1 << ARMMMUIdx_S1E2));
245
+ ARMMMUIdxBit_S1E2);
246
}
247
248
static const ARMCPRegInfo cp_reginfo[] = {
249
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
250
/* Accesses to VTTBR may change the VMID so we must flush the TLB. */
251
if (raw_read(env, ri) != value) {
252
tlb_flush_by_mmuidx(cs,
253
- (1 << ARMMMUIdx_S12NSE1) |
254
- (1 << ARMMMUIdx_S12NSE0) |
255
- (1 << ARMMMUIdx_S2NS));
256
+ ARMMMUIdxBit_S12NSE1 |
257
+ ARMMMUIdxBit_S12NSE0 |
258
+ ARMMMUIdxBit_S2NS);
259
raw_write(env, ri, value);
260
}
261
}
262
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
263
264
if (arm_is_secure_below_el3(env)) {
265
tlb_flush_by_mmuidx(cs,
266
- (1 << ARMMMUIdx_S1SE1) |
267
- (1 << ARMMMUIdx_S1SE0));
268
+ ARMMMUIdxBit_S1SE1 |
269
+ ARMMMUIdxBit_S1SE0);
270
} else {
271
tlb_flush_by_mmuidx(cs,
272
- (1 << ARMMMUIdx_S12NSE1) |
273
- (1 << ARMMMUIdx_S12NSE0));
274
+ ARMMMUIdxBit_S12NSE1 |
275
+ ARMMMUIdxBit_S12NSE0);
276
}
277
}
278
279
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
280
281
if (sec) {
282
tlb_flush_by_mmuidx_all_cpus_synced(cs,
283
- (1 << ARMMMUIdx_S1SE1) |
284
- (1 << ARMMMUIdx_S1SE0));
285
+ ARMMMUIdxBit_S1SE1 |
286
+ ARMMMUIdxBit_S1SE0);
287
} else {
288
tlb_flush_by_mmuidx_all_cpus_synced(cs,
289
- (1 << ARMMMUIdx_S12NSE1) |
290
- (1 << ARMMMUIdx_S12NSE0));
291
+ ARMMMUIdxBit_S12NSE1 |
292
+ ARMMMUIdxBit_S12NSE0);
293
}
294
}
295
296
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
297
298
if (arm_is_secure_below_el3(env)) {
299
tlb_flush_by_mmuidx(cs,
300
- (1 << ARMMMUIdx_S1SE1) |
301
- (1 << ARMMMUIdx_S1SE0));
302
+ ARMMMUIdxBit_S1SE1 |
303
+ ARMMMUIdxBit_S1SE0);
304
} else {
305
if (arm_feature(env, ARM_FEATURE_EL2)) {
306
tlb_flush_by_mmuidx(cs,
307
- (1 << ARMMMUIdx_S12NSE1) |
308
- (1 << ARMMMUIdx_S12NSE0) |
309
- (1 << ARMMMUIdx_S2NS));
310
+ ARMMMUIdxBit_S12NSE1 |
311
+ ARMMMUIdxBit_S12NSE0 |
312
+ ARMMMUIdxBit_S2NS);
313
} else {
314
tlb_flush_by_mmuidx(cs,
315
- (1 << ARMMMUIdx_S12NSE1) |
316
- (1 << ARMMMUIdx_S12NSE0));
317
+ ARMMMUIdxBit_S12NSE1 |
318
+ ARMMMUIdxBit_S12NSE0);
319
}
70
}
320
}
71
}
321
}
322
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2_write(CPUARMState *env, const ARMCPRegInfo *ri,
323
ARMCPU *cpu = arm_env_get_cpu(env);
324
CPUState *cs = CPU(cpu);
325
326
- tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E2));
327
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E2);
328
}
329
330
static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
331
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3_write(CPUARMState *env, const ARMCPRegInfo *ri,
332
ARMCPU *cpu = arm_env_get_cpu(env);
333
CPUState *cs = CPU(cpu);
334
335
- tlb_flush_by_mmuidx(cs, (1 << ARMMMUIdx_S1E3));
336
+ tlb_flush_by_mmuidx(cs, ARMMMUIdxBit_S1E3);
337
}
338
339
static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
340
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
341
342
if (sec) {
343
tlb_flush_by_mmuidx_all_cpus_synced(cs,
344
- (1 << ARMMMUIdx_S1SE1) |
345
- (1 << ARMMMUIdx_S1SE0));
346
+ ARMMMUIdxBit_S1SE1 |
347
+ ARMMMUIdxBit_S1SE0);
348
} else if (has_el2) {
349
tlb_flush_by_mmuidx_all_cpus_synced(cs,
350
- (1 << ARMMMUIdx_S12NSE1) |
351
- (1 << ARMMMUIdx_S12NSE0) |
352
- (1 << ARMMMUIdx_S2NS));
353
+ ARMMMUIdxBit_S12NSE1 |
354
+ ARMMMUIdxBit_S12NSE0 |
355
+ ARMMMUIdxBit_S2NS);
356
} else {
357
tlb_flush_by_mmuidx_all_cpus_synced(cs,
358
- (1 << ARMMMUIdx_S12NSE1) |
359
- (1 << ARMMMUIdx_S12NSE0));
360
+ ARMMMUIdxBit_S12NSE1 |
361
+ ARMMMUIdxBit_S12NSE0);
362
}
363
}
364
365
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
366
{
367
CPUState *cs = ENV_GET_CPU(env);
368
369
- tlb_flush_by_mmuidx_all_cpus_synced(cs, (1 << ARMMMUIdx_S1E2));
370
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E2);
371
}
372
373
static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
374
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_alle3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
375
{
376
CPUState *cs = ENV_GET_CPU(env);
377
378
- tlb_flush_by_mmuidx_all_cpus_synced(cs, (1 << ARMMMUIdx_S1E3));
379
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, ARMMMUIdxBit_S1E3);
380
}
381
382
static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
383
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1_write(CPUARMState *env, const ARMCPRegInfo *ri,
384
385
if (arm_is_secure_below_el3(env)) {
386
tlb_flush_page_by_mmuidx(cs, pageaddr,
387
- (1 << ARMMMUIdx_S1SE1) |
388
- (1 << ARMMMUIdx_S1SE0));
389
+ ARMMMUIdxBit_S1SE1 |
390
+ ARMMMUIdxBit_S1SE0);
391
} else {
392
tlb_flush_page_by_mmuidx(cs, pageaddr,
393
- (1 << ARMMMUIdx_S12NSE1) |
394
- (1 << ARMMMUIdx_S12NSE0));
395
+ ARMMMUIdxBit_S12NSE1 |
396
+ ARMMMUIdxBit_S12NSE0);
397
}
398
}
399
400
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
401
CPUState *cs = CPU(cpu);
402
uint64_t pageaddr = sextract64(value << 12, 0, 56);
403
404
- tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E2));
405
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E2);
406
}
407
408
static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
409
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
410
CPUState *cs = CPU(cpu);
411
uint64_t pageaddr = sextract64(value << 12, 0, 56);
412
413
- tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S1E3));
414
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S1E3);
415
}
416
417
static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
418
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
419
420
if (sec) {
421
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
422
- (1 << ARMMMUIdx_S1SE1) |
423
- (1 << ARMMMUIdx_S1SE0));
424
+ ARMMMUIdxBit_S1SE1 |
425
+ ARMMMUIdxBit_S1SE0);
426
} else {
427
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
428
- (1 << ARMMMUIdx_S12NSE1) |
429
- (1 << ARMMMUIdx_S12NSE0));
430
+ ARMMMUIdxBit_S12NSE1 |
431
+ ARMMMUIdxBit_S12NSE0);
432
}
433
}
434
435
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
436
uint64_t pageaddr = sextract64(value << 12, 0, 56);
437
438
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
439
- (1 << ARMMMUIdx_S1E2));
440
+ ARMMMUIdxBit_S1E2);
441
}
442
443
static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
444
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
445
uint64_t pageaddr = sextract64(value << 12, 0, 56);
446
447
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
448
- (1 << ARMMMUIdx_S1E3));
449
+ ARMMMUIdxBit_S1E3);
450
}
451
452
static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
453
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
454
455
pageaddr = sextract64(value << 12, 0, 48);
456
457
- tlb_flush_page_by_mmuidx(cs, pageaddr, (1 << ARMMMUIdx_S2NS));
458
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_S2NS);
459
}
460
461
static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
462
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
463
pageaddr = sextract64(value << 12, 0, 48);
464
465
tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr,
466
- (1 << ARMMMUIdx_S2NS));
467
+ ARMMMUIdxBit_S2NS);
468
}
469
470
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
471
@@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
472
return &env->cp15.tcr_el[regime_el(env, mmu_idx)];
473
}
474
475
+/* Convert a possible stage1+2 MMU index into the appropriate
476
+ * stage 1 MMU index
477
+ */
478
+static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
479
+{
480
+ if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
481
+ mmu_idx += (ARMMMUIdx_S1NSE0 - ARMMMUIdx_S12NSE0);
482
+ }
483
+ return mmu_idx;
484
+}
485
+
486
/* Returns TBI0 value for current regime el */
487
uint32_t arm_regime_tbi0(CPUARMState *env, ARMMMUIdx mmu_idx)
488
{
489
@@ -XXX,XX +XXX,XX @@ uint32_t arm_regime_tbi0(CPUARMState *env, ARMMMUIdx mmu_idx)
490
uint32_t el;
491
492
/* For EL0 and EL1, TBI is controlled by stage 1's TCR, so convert
493
- * a stage 1+2 mmu index into the appropriate stage 1 mmu index.
494
- */
495
- if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
496
- mmu_idx += ARMMMUIdx_S1NSE0;
497
- }
498
+ * a stage 1+2 mmu index into the appropriate stage 1 mmu index.
499
+ */
500
+ mmu_idx = stage_1_mmu_idx(mmu_idx);
501
502
tcr = regime_tcr(env, mmu_idx);
503
el = regime_el(env, mmu_idx);
504
@@ -XXX,XX +XXX,XX @@ uint32_t arm_regime_tbi1(CPUARMState *env, ARMMMUIdx mmu_idx)
505
uint32_t el;
506
507
/* For EL0 and EL1, TBI is controlled by stage 1's TCR, so convert
508
- * a stage 1+2 mmu index into the appropriate stage 1 mmu index.
509
- */
510
- if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
511
- mmu_idx += ARMMMUIdx_S1NSE0;
512
- }
513
+ * a stage 1+2 mmu index into the appropriate stage 1 mmu index.
514
+ */
515
+ mmu_idx = stage_1_mmu_idx(mmu_idx);
516
517
tcr = regime_tcr(env, mmu_idx);
518
el = regime_el(env, mmu_idx);
519
@@ -XXX,XX +XXX,XX @@ static inline bool regime_using_lpae_format(CPUARMState *env,
520
* on whether the long or short descriptor format is in use. */
521
bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
522
{
523
- if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
524
- mmu_idx += ARMMMUIdx_S1NSE0;
525
- }
526
+ mmu_idx = stage_1_mmu_idx(mmu_idx);
527
528
return regime_using_lpae_format(env, mmu_idx);
529
}
530
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
531
int ret;
532
533
ret = get_phys_addr(env, address, access_type,
534
- mmu_idx + ARMMMUIdx_S1NSE0, &ipa, attrs,
535
+ stage_1_mmu_idx(mmu_idx), &ipa, attrs,
536
prot, page_size, fsr, fi);
537
538
/* If S1 fails or S2 is disabled, return early. */
539
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
540
/*
541
* For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
542
*/
543
- mmu_idx += ARMMMUIdx_S1NSE0;
544
+ mmu_idx = stage_1_mmu_idx(mmu_idx);
545
}
546
}
547
548
@@ -XXX,XX +XXX,XX @@ bool arm_tlb_fill(CPUState *cs, vaddr address,
549
int ret;
550
MemTxAttrs attrs = {};
551
552
- ret = get_phys_addr(env, address, access_type, mmu_idx, &phys_addr,
553
+ ret = get_phys_addr(env, address, access_type,
554
+ core_to_arm_mmu_idx(env, mmu_idx), &phys_addr,
555
&attrs, &prot, &page_size, fsr, fi);
556
if (!ret) {
557
/* Map a single [sub]page. */
558
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
559
bool ret;
560
uint32_t fsr;
561
ARMMMUFaultInfo fi = {};
562
+ ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
563
564
*attrs = (MemTxAttrs) {};
565
566
- ret = get_phys_addr(env, addr, 0, cpu_mmu_index(env, false), &phys_addr,
567
+ ret = get_phys_addr(env, addr, 0, mmu_idx, &phys_addr,
568
attrs, &prot, &page_size, &fsr, &fi);
569
570
if (ret) {
571
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
572
index XXXXXXX..XXXXXXX 100644
573
--- a/target/arm/op_helper.c
574
+++ b/target/arm/op_helper.c
575
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
576
int target_el;
577
bool same_el;
578
uint32_t syn;
579
+ ARMMMUIdx arm_mmu_idx = core_to_arm_mmu_idx(env, mmu_idx);
580
581
if (retaddr) {
582
/* now we have a real cpu fault */
583
@@ -XXX,XX +XXX,XX @@ void arm_cpu_do_unaligned_access(CPUState *cs, vaddr vaddr,
584
/* the DFSR for an alignment fault depends on whether we're using
585
* the LPAE long descriptor format, or the short descriptor format
586
*/
587
- if (arm_s1_regime_using_lpae_format(env, mmu_idx)) {
588
+ if (arm_s1_regime_using_lpae_format(env, arm_mmu_idx)) {
589
env->exception.fsr = (1 << 9) | 0x21;
590
} else {
591
env->exception.fsr = 0x1;
592
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
593
index XXXXXXX..XXXXXXX 100644
594
--- a/target/arm/translate-a64.c
595
+++ b/target/arm/translate-a64.c
596
@@ -XXX,XX +XXX,XX @@ void a64_translate_init(void)
597
offsetof(CPUARMState, exclusive_high), "exclusive_high");
598
}
599
600
-static inline ARMMMUIdx get_a64_user_mem_index(DisasContext *s)
601
+static inline int get_a64_user_mem_index(DisasContext *s)
602
{
603
- /* Return the mmu_idx to use for A64 "unprivileged load/store" insns:
604
+ /* Return the core mmu_idx to use for A64 "unprivileged load/store" insns:
605
* if EL1, access as if EL0; otherwise access at current EL
606
*/
607
+ ARMMMUIdx useridx;
608
+
609
switch (s->mmu_idx) {
610
case ARMMMUIdx_S12NSE1:
611
- return ARMMMUIdx_S12NSE0;
612
+ useridx = ARMMMUIdx_S12NSE0;
613
+ break;
614
case ARMMMUIdx_S1SE1:
615
- return ARMMMUIdx_S1SE0;
616
+ useridx = ARMMMUIdx_S1SE0;
617
+ break;
618
case ARMMMUIdx_S2NS:
619
g_assert_not_reached();
620
default:
621
- return s->mmu_idx;
622
+ useridx = s->mmu_idx;
623
+ break;
624
}
625
+ return arm_to_core_mmu_idx(useridx);
626
}
627
628
void aarch64_cpu_dump_state(CPUState *cs, FILE *f,
629
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code_a64(ARMCPU *cpu, TranslationBlock *tb)
630
dc->be_data = ARM_TBFLAG_BE_DATA(tb->flags) ? MO_BE : MO_LE;
631
dc->condexec_mask = 0;
632
dc->condexec_cond = 0;
633
- dc->mmu_idx = ARM_TBFLAG_MMUIDX(tb->flags);
634
+ dc->mmu_idx = core_to_arm_mmu_idx(env, ARM_TBFLAG_MMUIDX(tb->flags));
635
dc->tbi0 = ARM_TBFLAG_TBI0(tb->flags);
636
dc->tbi1 = ARM_TBFLAG_TBI1(tb->flags);
637
dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
638
diff --git a/target/arm/translate.c b/target/arm/translate.c
639
index XXXXXXX..XXXXXXX 100644
640
--- a/target/arm/translate.c
641
+++ b/target/arm/translate.c
642
@@ -XXX,XX +XXX,XX @@ static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
643
disas_set_insn_syndrome(s, syn);
644
}
645
646
-static inline ARMMMUIdx get_a32_user_mem_index(DisasContext *s)
647
+static inline int get_a32_user_mem_index(DisasContext *s)
648
{
649
- /* Return the mmu_idx to use for A32/T32 "unprivileged load/store"
650
+ /* Return the core mmu_idx to use for A32/T32 "unprivileged load/store"
651
* insns:
652
* if PL2, UNPREDICTABLE (we choose to implement as if PL0)
653
* otherwise, access as if at PL0.
654
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx get_a32_user_mem_index(DisasContext *s)
655
case ARMMMUIdx_S1E2: /* this one is UNPREDICTABLE */
656
case ARMMMUIdx_S12NSE0:
657
case ARMMMUIdx_S12NSE1:
658
- return ARMMMUIdx_S12NSE0;
659
+ return arm_to_core_mmu_idx(ARMMMUIdx_S12NSE0);
660
case ARMMMUIdx_S1E3:
661
case ARMMMUIdx_S1SE0:
662
case ARMMMUIdx_S1SE1:
663
- return ARMMMUIdx_S1SE0;
664
+ return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
665
case ARMMMUIdx_S2NS:
666
default:
667
g_assert_not_reached();
668
@@ -XXX,XX +XXX,XX @@ void gen_intermediate_code(CPUARMState *env, TranslationBlock *tb)
669
dc->be_data = ARM_TBFLAG_BE_DATA(tb->flags) ? MO_BE : MO_LE;
670
dc->condexec_mask = (ARM_TBFLAG_CONDEXEC(tb->flags) & 0xf) << 1;
671
dc->condexec_cond = ARM_TBFLAG_CONDEXEC(tb->flags) >> 4;
672
- dc->mmu_idx = ARM_TBFLAG_MMUIDX(tb->flags);
673
+ dc->mmu_idx = core_to_arm_mmu_idx(env, ARM_TBFLAG_MMUIDX(tb->flags));
674
dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
675
#if !defined(CONFIG_USER_ONLY)
676
dc->user = (dc->current_el == 0);
677
--
2.7.4
--
2.34.1
Deleted patch
1
Make M profile use completely separate ARMMMUIdx values from
2
those that A profile CPUs use. This is a prelude to adding
3
support for the MPU and for v8M, which together will require
4
6 MMU indexes which don't map cleanly onto the A profile
5
uses:
6
non secure User
7
non secure Privileged
8
non secure Privileged, execution priority < 0
9
secure User
10
secure Privileged
11
secure Privileged, execution priority < 0
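
For illustration only, here is a compile-only sketch of what six such indexes could look like alongside a "this is an M profile index" flag; the names and the 0x40 flag value are invented for this example and are not the identifiers the series actually adds:

    /* Illustrative sketch: hypothetical index layout, not QEMU's real enum */
    #define SKETCH_MMU_IDX_M 0x40               /* marks an M-profile index */

    typedef enum {
        SketchIdx_MUserNS   = 0 | SKETCH_MMU_IDX_M,
        SketchIdx_MPrivNS   = 1 | SKETCH_MMU_IDX_M,
        SketchIdx_MNegPriNS = 2 | SKETCH_MMU_IDX_M,
        SketchIdx_MUserS    = 3 | SKETCH_MMU_IDX_M,
        SketchIdx_MPrivS    = 4 | SKETCH_MMU_IDX_M,
        SketchIdx_MNegPriS  = 5 | SKETCH_MMU_IDX_M
    } SketchARMMMUIdx;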
12
1
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Message-id: 1493122030-32191-4-git-send-email-peter.maydell@linaro.org
15
---
16
target/arm/cpu.h | 21 +++++++++++++++++++--
17
target/arm/helper.c | 5 +++++
18
target/arm/translate.c | 3 +++
19
3 files changed, 27 insertions(+), 2 deletions(-)
20
21
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu.h
24
+++ b/target/arm/cpu.h
25
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
26
* of the AT/ATS operations.
27
* The values used are carefully arranged to make mmu_idx => EL lookup easy.
28
*/
29
-#define ARM_MMU_IDX_A 0x10 /* A profile (and M profile, for the moment) */
30
+#define ARM_MMU_IDX_A 0x10 /* A profile */
31
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
32
+#define ARM_MMU_IDX_M 0x40 /* M profile */
33
34
#define ARM_MMU_IDX_TYPE_MASK (~0x7)
35
#define ARM_MMU_IDX_COREIDX_MASK 0x7
36
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
37
ARMMMUIdx_S1SE0 = 4 | ARM_MMU_IDX_A,
38
ARMMMUIdx_S1SE1 = 5 | ARM_MMU_IDX_A,
39
ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
40
+ ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
41
+ ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
42
/* Indexes below here don't have TLBs and are used only for AT system
43
* instructions or for the first stage of an S12 page table walk.
44
*/
45
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
46
ARMMMUIdxBit_S1SE0 = 1 << 4,
47
ARMMMUIdxBit_S1SE1 = 1 << 5,
48
ARMMMUIdxBit_S2NS = 1 << 6,
49
+ ARMMMUIdxBit_MUser = 1 << 0,
50
+ ARMMMUIdxBit_MPriv = 1 << 1,
51
} ARMMMUIdxBit;
52
53
#define MMU_USER_IDX 0
54
@@ -XXX,XX +XXX,XX @@ static inline int arm_to_core_mmu_idx(ARMMMUIdx mmu_idx)
55
56
static inline ARMMMUIdx core_to_arm_mmu_idx(CPUARMState *env, int mmu_idx)
57
{
58
- return mmu_idx | ARM_MMU_IDX_A;
59
+ if (arm_feature(env, ARM_FEATURE_M)) {
60
+ return mmu_idx | ARM_MMU_IDX_M;
61
+ } else {
62
+ return mmu_idx | ARM_MMU_IDX_A;
63
+ }
64
}
65
66
/* Return the exception level we're running at if this is our mmu_idx */
67
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
68
switch (mmu_idx & ARM_MMU_IDX_TYPE_MASK) {
69
case ARM_MMU_IDX_A:
70
return mmu_idx & 3;
71
+ case ARM_MMU_IDX_M:
72
+ return mmu_idx & 1;
73
default:
74
g_assert_not_reached();
75
}
76
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
77
{
78
int el = arm_current_el(env);
79
80
+ if (arm_feature(env, ARM_FEATURE_M)) {
81
+ ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv;
82
+
83
+ return arm_to_core_mmu_idx(mmu_idx);
84
+ }
85
+
86
if (el < 2 && arm_is_secure_below_el3(env)) {
87
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0 + el);
88
}
89
diff --git a/target/arm/helper.c b/target/arm/helper.c
90
index XXXXXXX..XXXXXXX 100644
91
--- a/target/arm/helper.c
92
+++ b/target/arm/helper.c
93
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
94
case ARMMMUIdx_S1SE1:
95
case ARMMMUIdx_S1NSE0:
96
case ARMMMUIdx_S1NSE1:
97
+ case ARMMMUIdx_MPriv:
98
+ case ARMMMUIdx_MUser:
99
return 1;
100
default:
101
g_assert_not_reached();
102
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
103
case ARMMMUIdx_S1NSE1:
104
case ARMMMUIdx_S1E2:
105
case ARMMMUIdx_S2NS:
106
+ case ARMMMUIdx_MPriv:
107
+ case ARMMMUIdx_MUser:
108
return false;
109
case ARMMMUIdx_S1E3:
110
case ARMMMUIdx_S1SE0:
111
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
112
switch (mmu_idx) {
113
case ARMMMUIdx_S1SE0:
114
case ARMMMUIdx_S1NSE0:
115
+ case ARMMMUIdx_MUser:
116
return true;
117
default:
118
return false;
119
diff --git a/target/arm/translate.c b/target/arm/translate.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/target/arm/translate.c
122
+++ b/target/arm/translate.c
123
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
124
case ARMMMUIdx_S1SE0:
125
case ARMMMUIdx_S1SE1:
126
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
127
+ case ARMMMUIdx_MUser:
128
+ case ARMMMUIdx_MPriv:
129
+ return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
130
case ARMMMUIdx_S2NS:
131
default:
132
g_assert_not_reached();
133
--
2.7.4
1
From: Tong Ho <tong.ho@amd.com>

Add a check in the bit-set operation to write the backstore
only if the affected bit is 0 before.

With this in place, there will be no need for callers to
do the checking in order to avoid unnecessary writes.

Signed-off-by: Tong Ho <tong.ho@amd.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/nvram/xlnx-efuse.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

ARM CPUs come in two flavours:
* proper MMU ("VMSA")
* only an MPU ("PMSA")
For PMSA, the MPU may be implemented, or not (in which case there
is default "always acts the same" behaviour, but it isn't guest
programmable).

QEMU is a bit confused about how we indicate this: we have an
ARM_FEATURE_MPU, but it's not clear whether this indicates
"PMSA, not VMSA" or "PMSA and MPU present" , and sometimes we
use it for one purpose and sometimes the other.

Currently trying to implement a PMSA-without-MPU core won't
work correctly because we turn off the ARM_FEATURE_MPU bit
and then a lot of things which should still exist get
turned off too.

As the first step in cleaning this up, rename the feature
bit to ARM_FEATURE_PMSA, which indicates a PMSA CPU (with
or without MPU).
21
17
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
diff --git a/hw/nvram/xlnx-efuse.c b/hw/nvram/xlnx-efuse.c
23
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
24
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
25
Message-id: 1493122030-32191-5-git-send-email-peter.maydell@linaro.org
26
---
27
target/arm/cpu.h | 2 +-
28
target/arm/cpu.c | 12 ++++++------
29
target/arm/helper.c | 12 ++++++------
30
target/arm/machine.c | 2 +-
31
4 files changed, 14 insertions(+), 14 deletions(-)
32
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
34
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
20
--- a/hw/nvram/xlnx-efuse.c
36
+++ b/target/arm/cpu.h
21
+++ b/hw/nvram/xlnx-efuse.c
37
@@ -XXX,XX +XXX,XX @@ enum arm_features {
22
@@ -XXX,XX +XXX,XX @@ static bool efuse_ro_bits_find(XlnxEFuse *s, uint32_t k)
38
ARM_FEATURE_V6K,
23
39
ARM_FEATURE_V7,
24
bool xlnx_efuse_set_bit(XlnxEFuse *s, unsigned int bit)
40
ARM_FEATURE_THUMB2,
25
{
41
- ARM_FEATURE_MPU, /* Only has Memory Protection Unit, not full MMU. */
26
+ uint32_t set, *row;
42
+ ARM_FEATURE_PMSA, /* no MMU; may have Memory Protection Unit */
27
+
43
ARM_FEATURE_VFP3,
28
if (efuse_ro_bits_find(s, bit)) {
44
ARM_FEATURE_VFP_FP16,
29
g_autofree char *path = object_get_canonical_path(OBJECT(s));
45
ARM_FEATURE_NEON,
30
46
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
31
@@ -XXX,XX +XXX,XX @@ bool xlnx_efuse_set_bit(XlnxEFuse *s, unsigned int bit)
47
index XXXXXXX..XXXXXXX 100644
32
return false;
48
--- a/target/arm/cpu.c
49
+++ b/target/arm/cpu.c
50
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
51
&error_abort);
52
}
33
}
53
34
54
- if (arm_feature(&cpu->env, ARM_FEATURE_MPU)) {
35
- s->fuse32[bit / 32] |= 1 << (bit % 32);
55
+ if (arm_feature(&cpu->env, ARM_FEATURE_PMSA)) {
36
- efuse_bdrv_sync(s, bit);
56
qdev_property_add_static(DEVICE(obj), &arm_cpu_has_mpu_property,
37
+ /* Avoid back-end write unless there is a real update */
57
&error_abort);
38
+ row = &s->fuse32[bit / 32];
58
if (arm_feature(&cpu->env, ARM_FEATURE_V7)) {
39
+ set = 1 << (bit % 32);
59
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
40
+ if (!(set & *row)) {
60
41
+ *row |= set;
61
if (arm_feature(env, ARM_FEATURE_V7) &&
42
+ efuse_bdrv_sync(s, bit);
62
!arm_feature(env, ARM_FEATURE_M) &&
43
+ }
63
- !arm_feature(env, ARM_FEATURE_MPU)) {
44
return true;
64
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
65
/* v7VMSA drops support for the old ARMv5 tiny pages, so we
66
* can use 4K pages.
67
*/
68
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
69
}
70
71
if (!cpu->has_mpu) {
72
- unset_feature(env, ARM_FEATURE_MPU);
73
+ unset_feature(env, ARM_FEATURE_PMSA);
74
}
75
76
- if (arm_feature(env, ARM_FEATURE_MPU) &&
77
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
78
arm_feature(env, ARM_FEATURE_V7)) {
79
uint32_t nr = cpu->pmsav7_dregion;
80
81
@@ -XXX,XX +XXX,XX @@ static void arm946_initfn(Object *obj)
82
83
cpu->dtb_compatible = "arm,arm946";
84
set_feature(&cpu->env, ARM_FEATURE_V5);
85
- set_feature(&cpu->env, ARM_FEATURE_MPU);
86
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
87
set_feature(&cpu->env, ARM_FEATURE_DUMMY_C15_REGS);
88
cpu->midr = 0x41059461;
89
cpu->ctr = 0x0f004006;
90
@@ -XXX,XX +XXX,XX @@ static void cortex_r5_initfn(Object *obj)
91
set_feature(&cpu->env, ARM_FEATURE_THUMB_DIV);
92
set_feature(&cpu->env, ARM_FEATURE_ARM_DIV);
93
set_feature(&cpu->env, ARM_FEATURE_V7MP);
94
- set_feature(&cpu->env, ARM_FEATURE_MPU);
95
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
96
cpu->midr = 0x411fc153; /* r1p3 */
97
cpu->id_pfr0 = 0x0131;
98
cpu->id_pfr1 = 0x001;
99
diff --git a/target/arm/helper.c b/target/arm/helper.c
100
index XXXXXXX..XXXXXXX 100644
101
--- a/target/arm/helper.c
102
+++ b/target/arm/helper.c
103
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
104
{
105
ARMCPU *cpu = arm_env_get_cpu(env);
106
107
- if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_MPU)
108
+ if (raw_read(env, ri) != value && !arm_feature(env, ARM_FEATURE_PMSA)
109
&& !extended_addresses_enabled(env)) {
110
/* For VMSA (when not using the LPAE long descriptor page table
111
* format) this register includes the ASID, so do a TLB flush.
112
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
113
define_arm_cp_regs(cpu, v6k_cp_reginfo);
114
}
115
if (arm_feature(env, ARM_FEATURE_V7MP) &&
116
- !arm_feature(env, ARM_FEATURE_MPU)) {
117
+ !arm_feature(env, ARM_FEATURE_PMSA)) {
118
define_arm_cp_regs(cpu, v7mp_cp_reginfo);
119
}
120
if (arm_feature(env, ARM_FEATURE_V7)) {
121
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
122
}
123
}
124
125
- if (arm_feature(env, ARM_FEATURE_MPU)) {
126
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
127
if (arm_feature(env, ARM_FEATURE_V6)) {
128
/* PMSAv6 not implemented */
129
assert(arm_feature(env, ARM_FEATURE_V7));
130
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
131
define_arm_cp_regs(cpu, id_pre_v8_midr_cp_reginfo);
132
}
133
define_arm_cp_regs(cpu, id_cp_reginfo);
134
- if (!arm_feature(env, ARM_FEATURE_MPU)) {
135
+ if (!arm_feature(env, ARM_FEATURE_PMSA)) {
136
define_one_arm_cp_reg(cpu, &id_tlbtr_reginfo);
137
} else if (arm_feature(env, ARM_FEATURE_V7)) {
138
define_one_arm_cp_reg(cpu, &id_mpuir_reginfo);
139
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
140
/* pmsav7 has special handling for when MPU is disabled so call it before
141
* the common MMU/MPU disabled check below.
142
*/
143
- if (arm_feature(env, ARM_FEATURE_MPU) &&
144
+ if (arm_feature(env, ARM_FEATURE_PMSA) &&
145
arm_feature(env, ARM_FEATURE_V7)) {
146
*page_size = TARGET_PAGE_SIZE;
147
return get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
148
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
149
return 0;
150
}
151
152
- if (arm_feature(env, ARM_FEATURE_MPU)) {
153
+ if (arm_feature(env, ARM_FEATURE_PMSA)) {
154
/* Pre-v7 MPU */
155
*page_size = TARGET_PAGE_SIZE;
156
return get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
157
diff --git a/target/arm/machine.c b/target/arm/machine.c
158
index XXXXXXX..XXXXXXX 100644
159
--- a/target/arm/machine.c
160
+++ b/target/arm/machine.c
161
@@ -XXX,XX +XXX,XX @@ static bool pmsav7_needed(void *opaque)
162
ARMCPU *cpu = opaque;
163
CPUARMState *env = &cpu->env;
164
165
- return arm_feature(env, ARM_FEATURE_MPU) &&
166
+ return arm_feature(env, ARM_FEATURE_PMSA) &&
167
arm_feature(env, ARM_FEATURE_V7);
168
}
45
}
169
46
170
--
47
--
171
2.7.4
48
2.34.1
172
49
173
50
Deleted patch
1
Fix the handling of QOM properties for PMSA CPUs with no MPU:
2
1
3
Allow no-MPU to be specified by either:
4
* has-mpu = false
5
* pmsav7_dregion = 0
6
and make setting one imply the other. Don't clear the PMSA
7
feature bit in this situation.
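
A minimal stand-alone sketch of that mutual implication; the struct and helper below are invented for illustration and are not the actual realize-time code:

    #include <stdbool.h>

    struct mpu_cfg {
        bool has_mpu;
        unsigned pmsav7_dregion;
    };

    static void normalize_mpu_cfg(struct mpu_cfg *c)
    {
        if (!c->has_mpu) {
            c->pmsav7_dregion = 0;      /* no MPU implies zero regions */
        }
        if (c->pmsav7_dregion == 0) {
            c->has_mpu = false;         /* zero regions implies no MPU */
        }
    }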
8
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
11
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
12
Message-id: 1493122030-32191-6-git-send-email-peter.maydell@linaro.org
13
---
14
target/arm/cpu.c | 8 +++++++-
15
1 file changed, 7 insertions(+), 1 deletion(-)
16
17
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.c
20
+++ b/target/arm/cpu.c
21
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
22
cpu->id_pfr1 &= ~0xf000;
23
}
24
25
+ /* MPU can be configured out of a PMSA CPU either by setting has-mpu
26
+ * to false or by setting pmsav7-dregion to 0.
27
+ */
28
if (!cpu->has_mpu) {
29
- unset_feature(env, ARM_FEATURE_PMSA);
30
+ cpu->pmsav7_dregion = 0;
31
+ }
32
+ if (cpu->pmsav7_dregion == 0) {
33
+ cpu->has_mpu = false;
34
}
35
36
if (arm_feature(env, ARM_FEATURE_PMSA) &&
37
--
38
2.7.4
39
40
diff view generated by jsdifflib
Deleted patch
1
If the CPU is a PMSA config with no MPU implemented, then the
2
SCTLR.M bit should be RAZ/WI, so that the guest can never
3
turn on the non-existent MPU.
4
1
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 1493122030-32191-7-git-send-email-peter.maydell@linaro.org
9
---
10
target/arm/helper.c | 5 +++++
11
1 file changed, 5 insertions(+)
12
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static void sctlr_write(CPUARMState *env, const ARMCPRegInfo *ri,
18
return;
19
}
20
21
+ if (arm_feature(env, ARM_FEATURE_PMSA) && !cpu->has_mpu) {
22
+ /* M bit is RAZ/WI for PMSA with no MPU implemented */
23
+ value &= ~SCTLR_M;
24
+ }
25
+
26
raw_write(env, ri, value);
27
/* ??? Lots of these bits are not implemented. */
28
/* This may enable/disable the MMU, so do a TLB flush. */
29
--
30
2.7.4
31
32
Deleted patch
1
Now that we enforce both:
2
* pmsav7_dregion == 0 implies has_mpu == false
3
* PMSA with has_mpu == false means SCTLR.M cannot be set
4
we can remove a check on pmsav7_dregion from get_phys_addr_pmsav7(),
5
because we can only reach this code path if the MPU is enabled
6
(and so region_translation_disabled() returned false).
7
1
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
10
Message-id: 1493122030-32191-8-git-send-email-peter.maydell@linaro.org
11
---
12
target/arm/helper.c | 3 +--
13
1 file changed, 1 insertion(+), 2 deletions(-)
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
20
}
21
22
if (n == -1) { /* no hits */
23
- if (cpu->pmsav7_dregion &&
24
- (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR))) {
25
+ if (is_user || !(regime_sctlr(env, mmu_idx) & SCTLR_BR)) {
26
/* background fault */
27
*fsr = 0;
28
return true;
29
--
30
2.7.4
31
32
Deleted patch
1
From: Michael Davidsaver <mdavidsaver@gmail.com>
2
1
3
Improve the "-d mmu" tracing for the PMSAv7 MPU translation
4
process as an aid in debugging guest MPU configurations:
5
* fix a missing newline for a guest-error log
6
* report the region number with guest-error or unimp
7
logs of bad region register values
8
* add a log message for the overall result of the lookup
9
* print "0x" prefix for hex values
10
11
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
12
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
13
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
14
Message-id: 1493122030-32191-9-git-send-email-peter.maydell@linaro.org
15
[PMM: a little tidyup, report region number in all messages
16
rather than just one]
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
19
target/arm/helper.c | 39 +++++++++++++++++++++++++++------------
20
1 file changed, 27 insertions(+), 12 deletions(-)
21
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
27
}
28
29
if (!rsize) {
30
- qemu_log_mask(LOG_GUEST_ERROR, "DRSR.Rsize field can not be 0");
31
+ qemu_log_mask(LOG_GUEST_ERROR,
32
+ "DRSR[%d]: Rsize field cannot be 0\n", n);
33
continue;
34
}
35
rsize++;
36
rmask = (1ull << rsize) - 1;
37
38
if (base & rmask) {
39
- qemu_log_mask(LOG_GUEST_ERROR, "DRBAR %" PRIx32 " misaligned "
40
- "to DRSR region size, mask = %" PRIx32,
41
- base, rmask);
42
+ qemu_log_mask(LOG_GUEST_ERROR,
43
+ "DRBAR[%d]: 0x%" PRIx32 " misaligned "
44
+ "to DRSR region size, mask = 0x%" PRIx32 "\n",
45
+ n, base, rmask);
46
continue;
47
}
48
49
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
50
}
51
}
52
if (rsize < TARGET_PAGE_BITS) {
53
- qemu_log_mask(LOG_UNIMP, "No support for MPU (sub)region"
54
+ qemu_log_mask(LOG_UNIMP,
55
+ "DRSR[%d]: No support for MPU (sub)region "
56
"alignment of %" PRIu32 " bits. Minimum is %d\n",
57
- rsize, TARGET_PAGE_BITS);
58
+ n, rsize, TARGET_PAGE_BITS);
59
continue;
60
}
61
if (srdis) {
62
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
63
break;
64
default:
65
qemu_log_mask(LOG_GUEST_ERROR,
66
- "Bad value for AP bits in DRACR %"
67
- PRIx32 "\n", ap);
68
+ "DRACR[%d]: Bad value for AP bits: 0x%"
69
+ PRIx32 "\n", n, ap);
70
}
71
} else { /* Priv. mode AP bits decoding */
72
switch (ap) {
73
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
74
break;
75
default:
76
qemu_log_mask(LOG_GUEST_ERROR,
77
- "Bad value for AP bits in DRACR %"
78
- PRIx32 "\n", ap);
79
+ "DRACR[%d]: Bad value for AP bits: 0x%"
80
+ PRIx32 "\n", n, ap);
81
}
82
}
83
84
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr(CPUARMState *env, target_ulong address,
85
*/
86
if (arm_feature(env, ARM_FEATURE_PMSA) &&
87
arm_feature(env, ARM_FEATURE_V7)) {
88
+ bool ret;
89
*page_size = TARGET_PAGE_SIZE;
90
- return get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
91
- phys_ptr, prot, fsr);
92
+ ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
93
+ phys_ptr, prot, fsr);
94
+ qemu_log_mask(CPU_LOG_MMU, "PMSAv7 MPU lookup for %s at 0x%08" PRIx32
95
+ " mmu_idx %u -> %s (prot %c%c%c)\n",
96
+ access_type == 1 ? "reading" :
97
+ (access_type == 2 ? "writing" : "execute"),
98
+ (uint32_t)address, mmu_idx,
99
+ ret ? "Miss" : "Hit",
100
+ *prot & PAGE_READ ? 'r' : '-',
101
+ *prot & PAGE_WRITE ? 'w' : '-',
102
+ *prot & PAGE_EXEC ? 'x' : '-');
103
+
104
+ return ret;
105
}
106
107
if (regime_translation_disabled(env, mmu_idx)) {
108
--
109
2.7.4
110
111
Deleted patch
1
From: Michael Davidsaver <mdavidsaver@gmail.com>
2
1
3
Add support for the M profile default memory map which is used
4
if the MPU is not present or disabled.
5
6
The main differences in behaviour from implementing this
7
correctly are that we set the PAGE_EXEC attribute on
8
the right regions of memory, such that device regions
9
are not executable.
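
A self-contained sketch of the execute-never split in the v7M default map described above; the helper name is invented for this example and it only models the XN property, not the memory attributes:

    #include <stdbool.h>
    #include <stdint.h>

    static bool sketch_default_map_executable(uint32_t addr)
    {
        switch (addr >> 29) {           /* one case per 512MB region */
        case 0:                         /* 0x00000000 ROM  */
        case 1:                         /* 0x20000000 SRAM */
        case 3:                         /* 0x60000000 RAM  */
        case 4:                         /* 0x80000000 RAM  */
            return true;
        default:                        /* Peripheral, Device, System: XN */
            return false;
        }
    }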
10
11
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
12
Message-id: 1493122030-32191-10-git-send-email-peter.maydell@linaro.org
13
[PMM: rephrased comment and commit message; don't mark
14
the flash memory region as not-writable; list all
15
the cases in the default map explicitly rather than
16
using a 'default' case for the non-executable regions]
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
19
target/arm/helper.c | 41 ++++++++++++++++++++++++++++++++---------
20
1 file changed, 32 insertions(+), 9 deletions(-)
21
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static inline void get_phys_addr_pmsav7_default(CPUARMState *env,
27
ARMMMUIdx mmu_idx,
28
int32_t address, int *prot)
29
{
30
- *prot = PAGE_READ | PAGE_WRITE;
31
- switch (address) {
32
- case 0xF0000000 ... 0xFFFFFFFF:
33
- if (regime_sctlr(env, mmu_idx) & SCTLR_V) { /* hivecs execing is ok */
34
+ if (!arm_feature(env, ARM_FEATURE_M)) {
35
+ *prot = PAGE_READ | PAGE_WRITE;
36
+ switch (address) {
37
+ case 0xF0000000 ... 0xFFFFFFFF:
38
+ if (regime_sctlr(env, mmu_idx) & SCTLR_V) {
39
+ /* hivecs execing is ok */
40
+ *prot |= PAGE_EXEC;
41
+ }
42
+ break;
43
+ case 0x00000000 ... 0x7FFFFFFF:
44
*prot |= PAGE_EXEC;
45
+ break;
46
+ }
47
+ } else {
48
+ /* Default system address map for M profile cores.
49
+ * The architecture specifies which regions are execute-never;
50
+ * at the MPU level no other checks are defined.
51
+ */
52
+ switch (address) {
53
+ case 0x00000000 ... 0x1fffffff: /* ROM */
54
+ case 0x20000000 ... 0x3fffffff: /* SRAM */
55
+ case 0x60000000 ... 0x7fffffff: /* RAM */
56
+ case 0x80000000 ... 0x9fffffff: /* RAM */
57
+ *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
58
+ break;
59
+ case 0x40000000 ... 0x5fffffff: /* Peripheral */
60
+ case 0xa0000000 ... 0xbfffffff: /* Device */
61
+ case 0xc0000000 ... 0xdfffffff: /* Device */
62
+ case 0xe0000000 ... 0xffffffff: /* System */
63
+ *prot = PAGE_READ | PAGE_WRITE;
64
+ break;
65
+ default:
66
+ g_assert_not_reached();
67
}
68
- break;
69
- case 0x00000000 ... 0x7FFFFFFF:
70
- *prot |= PAGE_EXEC;
71
- break;
72
}
73
-
74
}
75
76
static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
77
--
78
2.7.4
79
80
Deleted patch
1
All M profile CPUs are PMSA, so set the feature bit.
2
(We haven't actually implemented the M profile MPU register
3
interface yet, but setting this feature bit gives us closer
4
to correct behaviour for the MPU-disabled case.)
5
1
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Message-id: 1493122030-32191-11-git-send-email-peter.maydell@linaro.org
9
---
10
target/arm/cpu.c | 8 ++++++++
11
1 file changed, 8 insertions(+)
12
13
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.c
16
+++ b/target/arm/cpu.c
17
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_post_init(Object *obj)
18
{
19
ARMCPU *cpu = ARM_CPU(obj);
20
21
+ /* M profile implies PMSA. We have to do this here rather than
22
+ * in realize with the other feature-implication checks because
23
+ * we look at the PMSA bit to see if we should add some properties.
24
+ */
25
+ if (arm_feature(&cpu->env, ARM_FEATURE_M)) {
26
+ set_feature(&cpu->env, ARM_FEATURE_PMSA);
27
+ }
28
+
29
if (arm_feature(&cpu->env, ARM_FEATURE_CBAR) ||
30
arm_feature(&cpu->env, ARM_FEATURE_CBAR_RO)) {
31
qdev_property_add_static(DEVICE(obj), &arm_cpu_reset_cbar_property,
32
--
33
2.7.4
34
35
Deleted patch
1
From: Michael Davidsaver <mdavidsaver@gmail.com>
2
1
3
General logic is that operations stopped by the MPU are MemManage,
4
and those which go through the MPU and are caught by the unassigned
5
handle are BusFault. Distinguish these by looking at the
6
exception.fsr values, and set the CFSR bits and (if appropriate)
7
fill in the BFAR or MMFAR with the exception address.
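
A minimal sketch of that classification, keyed on the low short-descriptor FSR bits as the patch does; the enum and helper names here are illustrative only:

    enum sketch_fault { SKETCH_BUSFAULT, SKETCH_MEMMANAGE };

    static enum sketch_fault sketch_classify(unsigned fsr)
    {
        /* 0x8 is the external abort status; other values here are MPU faults */
        return (fsr & 0xf) == 0x8 ? SKETCH_BUSFAULT : SKETCH_MEMMANAGE;
    }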
8
9
Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com>
10
Message-id: 1493122030-32191-12-git-send-email-peter.maydell@linaro.org
11
[PMM: i-side faults do not set BFAR/MMFAR, only d-side;
12
added some CPU_LOG_INT logging]
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
15
---
16
target/arm/helper.c | 45 ++++++++++++++++++++++++++++++++++++++++++---
17
1 file changed, 42 insertions(+), 3 deletions(-)
18
19
diff --git a/target/arm/helper.c b/target/arm/helper.c
20
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/helper.c
22
+++ b/target/arm/helper.c
23
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
24
break;
25
case EXCP_PREFETCH_ABORT:
26
case EXCP_DATA_ABORT:
27
- /* TODO: if we implemented the MPU registers, this is where we
28
- * should set the MMFAR, etc from exception.fsr and exception.vaddress.
29
+ /* Note that for M profile we don't have a guest facing FSR, but
30
+ * the env->exception.fsr will be populated by the code that
31
+ * raises the fault, in the A profile short-descriptor format.
32
*/
33
- armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM);
34
+ switch (env->exception.fsr & 0xf) {
35
+ case 0x8: /* External Abort */
36
+ switch (cs->exception_index) {
37
+ case EXCP_PREFETCH_ABORT:
38
+ env->v7m.cfsr |= R_V7M_CFSR_PRECISERR_MASK;
39
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.PRECISERR\n");
40
+ break;
41
+ case EXCP_DATA_ABORT:
42
+ env->v7m.cfsr |=
43
+ (R_V7M_CFSR_IBUSERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
44
+ env->v7m.bfar = env->exception.vaddress;
45
+ qemu_log_mask(CPU_LOG_INT,
46
+ "...with CFSR.IBUSERR and BFAR 0x%x\n",
47
+ env->v7m.bfar);
48
+ break;
49
+ }
50
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS);
51
+ break;
52
+ default:
53
+ /* All other FSR values are either MPU faults or "can't happen
54
+ * for M profile" cases.
55
+ */
56
+ switch (cs->exception_index) {
57
+ case EXCP_PREFETCH_ABORT:
58
+ env->v7m.cfsr |= R_V7M_CFSR_IACCVIOL_MASK;
59
+ qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
60
+ break;
61
+ case EXCP_DATA_ABORT:
62
+ env->v7m.cfsr |=
63
+ (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
64
+ env->v7m.mmfar = env->exception.vaddress;
65
+ qemu_log_mask(CPU_LOG_INT,
66
+ "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
67
+ env->v7m.mmfar);
68
+ break;
69
+ }
70
+ armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM);
71
+ break;
72
+ }
73
break;
74
case EXCP_BKPT:
75
if (semihosting_enabled()) {
76
--
77
2.7.4
78
79
Deleted patch
1
Implement HFNMIENA support for the M profile MPU. This bit controls
2
whether the MPU is treated as enabled when executing at execution
3
priorities of less than zero (in NMI, HardFault or with the FAULTMASK
4
bit set).
5
1
6
Doing this requires us to use a different MMU index for "running
7
at execution priority < 0", because we will have different
8
access permissions for that case versus the normal case.
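
Sketching the resulting "does the MPU apply to this access?" decision as a stand-alone helper; this is a simplification, not the function the patch adds (the real code keys this off MPU_CTRL and the MMU index):

    #include <stdbool.h>

    static bool sketch_mpu_applies(bool enable, bool hfnmiena, bool neg_prio)
    {
        if (!enable) {
            return false;                   /* MPU disabled everywhere */
        }
        return hfnmiena || !neg_prio;       /* off at priority < 0 unless HFNMIENA */
    }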
9
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Message-id: 1493122030-32191-14-git-send-email-peter.maydell@linaro.org
12
---
13
target/arm/cpu.h | 24 +++++++++++++++++++++++-
14
target/arm/helper.c | 18 +++++++++++++++++-
15
target/arm/translate.c | 1 +
16
3 files changed, 41 insertions(+), 2 deletions(-)
17
18
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/target/arm/cpu.h
21
+++ b/target/arm/cpu.h
22
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
23
* for the accesses done as part of a stage 1 page table walk, rather than
24
* having to walk the stage 2 page table over and over.)
25
*
26
+ * R profile CPUs have an MPU, but can use the same set of MMU indexes
27
+ * as A profile. They only need to distinguish NS EL0 and NS EL1 (and
28
+ * NS EL2 if we ever model a Cortex-R52).
29
+ *
30
+ * M profile CPUs are rather different as they do not have a true MMU.
31
+ * They have the following different MMU indexes:
32
+ * User
33
+ * Privileged
34
+ * Execution priority negative (this is like privileged, but the
35
+ * MPU HFNMIENA bit means that it may have different access permission
36
+ * check results to normal privileged code, so can't share a TLB).
37
+ *
38
* The ARMMMUIdx and the mmu index value used by the core QEMU TLB code
39
* are not quite the same -- different CPU types (most notably M profile
40
* vs A/R profile) would like to use MMU indexes with different semantics,
41
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
42
ARMMMUIdx_S2NS = 6 | ARM_MMU_IDX_A,
43
ARMMMUIdx_MUser = 0 | ARM_MMU_IDX_M,
44
ARMMMUIdx_MPriv = 1 | ARM_MMU_IDX_M,
45
+ ARMMMUIdx_MNegPri = 2 | ARM_MMU_IDX_M,
46
/* Indexes below here don't have TLBs and are used only for AT system
47
* instructions or for the first stage of an S12 page table walk.
48
*/
49
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
50
ARMMMUIdxBit_S2NS = 1 << 6,
51
ARMMMUIdxBit_MUser = 1 << 0,
52
ARMMMUIdxBit_MPriv = 1 << 1,
53
+ ARMMMUIdxBit_MNegPri = 1 << 2,
54
} ARMMMUIdxBit;
55
56
#define MMU_USER_IDX 0
57
@@ -XXX,XX +XXX,XX @@ static inline int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
58
case ARM_MMU_IDX_A:
59
return mmu_idx & 3;
60
case ARM_MMU_IDX_M:
61
- return mmu_idx & 1;
62
+ return mmu_idx == ARMMMUIdx_MUser ? 0 : 1;
63
default:
64
g_assert_not_reached();
65
}
66
@@ -XXX,XX +XXX,XX @@ static inline int cpu_mmu_index(CPUARMState *env, bool ifetch)
67
if (arm_feature(env, ARM_FEATURE_M)) {
68
ARMMMUIdx mmu_idx = el == 0 ? ARMMMUIdx_MUser : ARMMMUIdx_MPriv;
69
70
+ /* Execution priority is negative if FAULTMASK is set or
71
+ * we're in a HardFault or NMI handler.
72
+ */
73
+ if ((env->v7m.exception > 0 && env->v7m.exception <= 3)
74
+ || env->daif & PSTATE_F) {
75
+ return arm_to_core_mmu_idx(ARMMMUIdx_MNegPri);
76
+ }
77
+
78
return arm_to_core_mmu_idx(mmu_idx);
79
}
80
81
diff --git a/target/arm/helper.c b/target/arm/helper.c
82
index XXXXXXX..XXXXXXX 100644
83
--- a/target/arm/helper.c
84
+++ b/target/arm/helper.c
85
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
86
case ARMMMUIdx_S1NSE0:
87
case ARMMMUIdx_S1NSE1:
88
case ARMMMUIdx_MPriv:
89
+ case ARMMMUIdx_MNegPri:
90
case ARMMMUIdx_MUser:
91
return 1;
92
default:
93
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
94
case ARMMMUIdx_S1E2:
95
case ARMMMUIdx_S2NS:
96
case ARMMMUIdx_MPriv:
97
+ case ARMMMUIdx_MNegPri:
98
case ARMMMUIdx_MUser:
99
return false;
100
case ARMMMUIdx_S1E3:
101
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
102
ARMMMUIdx mmu_idx)
103
{
104
if (arm_feature(env, ARM_FEATURE_M)) {
105
- return !(env->v7m.mpu_ctrl & R_V7M_MPU_CTRL_ENABLE_MASK);
106
+ switch (env->v7m.mpu_ctrl &
107
+ (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
108
+ case R_V7M_MPU_CTRL_ENABLE_MASK:
109
+ /* Enabled, but not for HardFault and NMI */
110
+ return mmu_idx == ARMMMUIdx_MNegPri;
111
+ case R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK:
112
+ /* Enabled for all cases */
113
+ return false;
114
+ case 0:
115
+ default:
116
+ /* HFNMIENA set and ENABLE clear is UNPREDICTABLE, but
117
+ * we warned about that in armv7m_nvic.c when the guest set it.
118
+ */
119
+ return true;
120
+ }
121
}
122
123
if (mmu_idx == ARMMMUIdx_S2NS) {
124
diff --git a/target/arm/translate.c b/target/arm/translate.c
125
index XXXXXXX..XXXXXXX 100644
126
--- a/target/arm/translate.c
127
+++ b/target/arm/translate.c
128
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
129
return arm_to_core_mmu_idx(ARMMMUIdx_S1SE0);
130
case ARMMMUIdx_MUser:
131
case ARMMMUIdx_MPriv:
132
+ case ARMMMUIdx_MNegPri:
133
return arm_to_core_mmu_idx(ARMMMUIdx_MUser);
134
case ARMMMUIdx_S2NS:
135
default:
136
--
137
2.7.4
138
139
Deleted patch
1
From: Cédric Le Goater <clg@kaod.org>
2
1
3
Multiple I2C commands can be fired simultaneously and the controller
4
execute the commands following these priorities:
5
6
(1) Master Start Command
7
(2) Master Transmit Command
8
(3) Slave Transmit Command or Master Receive Command
9
(4) Master Stop Command
10
11
The current code is incorrect with respect to the above sequence and
12
needs to be reworked to handle each individual command.
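
A simplified stand-alone sketch of checking each latched command bit independently, in the priority order listed above; the command bit values are invented for the example:

    enum {
        SKETCH_CMD_START = 1 << 0,
        SKETCH_CMD_TX    = 1 << 1,
        SKETCH_CMD_RX    = 1 << 2,
        SKETCH_CMD_STOP  = 1 << 3
    };

    static void sketch_handle_cmds(unsigned *cmd)
    {
        if (*cmd & SKETCH_CMD_START) { /* send START + address */ *cmd &= ~SKETCH_CMD_START; }
        if (*cmd & SKETCH_CMD_TX)    { /* transmit one byte     */ *cmd &= ~SKETCH_CMD_TX; }
        if (*cmd & SKETCH_CMD_RX)    { /* receive one byte      */ *cmd &= ~SKETCH_CMD_RX; }
        if (*cmd & SKETCH_CMD_STOP)  { /* send STOP             */ *cmd &= ~SKETCH_CMD_STOP; }
    }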
13
14
Signed-off-by: Cédric Le Goater <clg@kaod.org>
15
Message-id: 1494827476-1487-2-git-send-email-clg@kaod.org
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
18
hw/i2c/aspeed_i2c.c | 24 ++++++++++++++++++------
19
1 file changed, 18 insertions(+), 6 deletions(-)
20
21
diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/i2c/aspeed_i2c.c
24
+++ b/hw/i2c/aspeed_i2c.c
25
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_i2c_bus_read(void *opaque, hwaddr offset,
26
27
static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
28
{
29
+ bus->cmd &= ~0xFFFF;
30
bus->cmd |= value & 0xFFFF;
31
bus->intr_status = 0;
32
33
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
34
bus->intr_status |= I2CD_INTR_TX_ACK;
35
}
36
37
- } else if (bus->cmd & I2CD_M_TX_CMD) {
38
+ /* START command is also a TX command, as the slave address is
39
+ * sent on the bus */
40
+ bus->cmd &= ~(I2CD_M_START_CMD | I2CD_M_TX_CMD);
41
+
42
+ /* No slave found */
43
+ if (!i2c_bus_busy(bus->bus)) {
44
+ return;
45
+ }
46
+ }
47
+
48
+ if (bus->cmd & I2CD_M_TX_CMD) {
49
if (i2c_send(bus->bus, bus->buf)) {
50
bus->intr_status |= (I2CD_INTR_TX_NAK | I2CD_INTR_ABNORMAL);
51
i2c_end_transfer(bus->bus);
52
} else {
53
bus->intr_status |= I2CD_INTR_TX_ACK;
54
}
55
+ bus->cmd &= ~I2CD_M_TX_CMD;
56
+ }
57
58
- } else if (bus->cmd & I2CD_M_RX_CMD) {
59
+ if (bus->cmd & I2CD_M_RX_CMD) {
60
int ret = i2c_recv(bus->bus);
61
if (ret < 0) {
62
qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__);
63
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
64
bus->intr_status |= I2CD_INTR_RX_DONE;
65
}
66
bus->buf = (ret & I2CD_BYTE_BUF_RX_MASK) << I2CD_BYTE_BUF_RX_SHIFT;
67
+ bus->cmd &= ~I2CD_M_RX_CMD;
68
}
69
70
if (bus->cmd & (I2CD_M_STOP_CMD | I2CD_M_S_RX_CMD_LAST)) {
71
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
72
i2c_end_transfer(bus->bus);
73
bus->intr_status |= I2CD_INTR_NORMAL_STOP;
74
}
75
+ bus->cmd &= ~I2CD_M_STOP_CMD;
76
}
77
-
78
- /* command is handled, reset it and check for interrupts */
79
- bus->cmd &= ~0xFFFF;
80
- aspeed_i2c_bus_raise_interrupt(bus);
81
}
82
83
static void aspeed_i2c_bus_write(void *opaque, hwaddr offset,
84
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_write(void *opaque, hwaddr offset,
85
}
86
87
aspeed_i2c_bus_handle_cmd(bus, value);
88
+ aspeed_i2c_bus_raise_interrupt(bus);
89
break;
90
91
default:
92
--
93
2.7.4
94
95
Deleted patch
1
From: Cédric Le Goater <clg@kaod.org>
2
1
3
Today, the LAST command is handled with the STOP command but this is
4
incorrect. Also nack the I2C bus when a LAST is issued.
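
As a rough sketch of the intended behaviour, with stand-in helpers rather than the QEMU I2C API:

    static int  sketch_recv_byte(void) { return 0x5a; }   /* stand-in bus read */
    static void sketch_nack(void)      { /* drive NACK on the bus */ }

    static int sketch_do_rx(int is_last)
    {
        int byte = sketch_recv_byte();
        if (is_last) {
            sketch_nack();      /* LAST: receive, then NACK the final byte */
        }
        return byte;
    }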
5
6
Signed-off-by: Cédric Le Goater <clg@kaod.org>
7
Message-id: 1494827476-1487-3-git-send-email-clg@kaod.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/i2c/aspeed_i2c.c | 9 ++++++---
11
1 file changed, 6 insertions(+), 3 deletions(-)
12
13
diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/i2c/aspeed_i2c.c
16
+++ b/hw/i2c/aspeed_i2c.c
17
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
18
bus->cmd &= ~I2CD_M_TX_CMD;
19
}
20
21
- if (bus->cmd & I2CD_M_RX_CMD) {
22
+ if (bus->cmd & (I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST)) {
23
int ret = i2c_recv(bus->bus);
24
if (ret < 0) {
25
qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__);
26
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
27
bus->intr_status |= I2CD_INTR_RX_DONE;
28
}
29
bus->buf = (ret & I2CD_BYTE_BUF_RX_MASK) << I2CD_BYTE_BUF_RX_SHIFT;
30
- bus->cmd &= ~I2CD_M_RX_CMD;
31
+ if (bus->cmd & I2CD_M_S_RX_CMD_LAST) {
32
+ i2c_nack(bus->bus);
33
+ }
34
+ bus->cmd &= ~(I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST);
35
}
36
37
- if (bus->cmd & (I2CD_M_STOP_CMD | I2CD_M_S_RX_CMD_LAST)) {
38
+ if (bus->cmd & I2CD_M_STOP_CMD) {
39
if (!i2c_bus_busy(bus->bus)) {
40
bus->intr_status |= I2CD_INTR_ABNORMAL;
41
} else {
42
--
43
2.7.4
44
45
Deleted patch
1
From: Cédric Le Goater <clg@kaod.org>
2
1
3
The Aspeed I2C controller maintains a state machine in the command
4
register, which is mostly used for debug.
5
6
Let's start adding a few states to handle abnormal STOP
7
commands. Today, the model uses the busy status of the bus as a
8
condition to do so but it is not precise enough.
9
10
Also remove the ABNORMAL bit for failing TX commands. This is
11
incorrect with respect to the specs.
12
13
Signed-off-by: Cédric Le Goater <clg@kaod.org>
14
Message-id: 1494827476-1487-4-git-send-email-clg@kaod.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
hw/i2c/aspeed_i2c.c | 36 +++++++++++++++++++++++++++++++++---
18
1 file changed, 33 insertions(+), 3 deletions(-)
19
20
diff --git a/hw/i2c/aspeed_i2c.c b/hw/i2c/aspeed_i2c.c
21
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/i2c/aspeed_i2c.c
23
+++ b/hw/i2c/aspeed_i2c.c
24
@@ -XXX,XX +XXX,XX @@ static uint64_t aspeed_i2c_bus_read(void *opaque, hwaddr offset,
25
}
26
}
27
28
+static void aspeed_i2c_set_state(AspeedI2CBus *bus, uint8_t state)
29
+{
30
+ bus->cmd &= ~(I2CD_TX_STATE_MASK << I2CD_TX_STATE_SHIFT);
31
+ bus->cmd |= (state & I2CD_TX_STATE_MASK) << I2CD_TX_STATE_SHIFT;
32
+}
33
+
34
+static uint8_t aspeed_i2c_get_state(AspeedI2CBus *bus)
35
+{
36
+ return (bus->cmd >> I2CD_TX_STATE_SHIFT) & I2CD_TX_STATE_MASK;
37
+}
38
+
39
+/*
40
+ * The state machine needs some refinement. It is only used to track
41
+ * invalid STOP commands for the moment.
42
+ */
43
static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
44
{
45
bus->cmd &= ~0xFFFF;
46
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
47
bus->intr_status = 0;
48
49
if (bus->cmd & I2CD_M_START_CMD) {
50
+ uint8_t state = aspeed_i2c_get_state(bus) & I2CD_MACTIVE ?
51
+ I2CD_MSTARTR : I2CD_MSTART;
52
+
53
+ aspeed_i2c_set_state(bus, state);
54
+
55
if (i2c_start_transfer(bus->bus, extract32(bus->buf, 1, 7),
56
extract32(bus->buf, 0, 1))) {
57
bus->intr_status |= I2CD_INTR_TX_NAK;
58
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
59
if (!i2c_bus_busy(bus->bus)) {
60
return;
61
}
62
+ aspeed_i2c_set_state(bus, I2CD_MACTIVE);
63
}
64
65
if (bus->cmd & I2CD_M_TX_CMD) {
66
+ aspeed_i2c_set_state(bus, I2CD_MTXD);
67
if (i2c_send(bus->bus, bus->buf)) {
68
- bus->intr_status |= (I2CD_INTR_TX_NAK | I2CD_INTR_ABNORMAL);
69
+ bus->intr_status |= (I2CD_INTR_TX_NAK);
70
i2c_end_transfer(bus->bus);
71
} else {
72
bus->intr_status |= I2CD_INTR_TX_ACK;
73
}
74
bus->cmd &= ~I2CD_M_TX_CMD;
75
+ aspeed_i2c_set_state(bus, I2CD_MACTIVE);
76
}
77
78
if (bus->cmd & (I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST)) {
79
- int ret = i2c_recv(bus->bus);
80
+ int ret;
81
+
82
+ aspeed_i2c_set_state(bus, I2CD_MRXD);
83
+ ret = i2c_recv(bus->bus);
84
if (ret < 0) {
85
qemu_log_mask(LOG_GUEST_ERROR, "%s: read failed\n", __func__);
86
ret = 0xff;
87
@@ -XXX,XX +XXX,XX @@ static void aspeed_i2c_bus_handle_cmd(AspeedI2CBus *bus, uint64_t value)
88
i2c_nack(bus->bus);
89
}
90
bus->cmd &= ~(I2CD_M_RX_CMD | I2CD_M_S_RX_CMD_LAST);
91
+ aspeed_i2c_set_state(bus, I2CD_MACTIVE);
92
}
93
94
if (bus->cmd & I2CD_M_STOP_CMD) {
95
- if (!i2c_bus_busy(bus->bus)) {
96
+ if (!(aspeed_i2c_get_state(bus) & I2CD_MACTIVE)) {
97
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: abnormal stop\n", __func__);
98
bus->intr_status |= I2CD_INTR_ABNORMAL;
99
} else {
100
+ aspeed_i2c_set_state(bus, I2CD_MSTOP);
101
i2c_end_transfer(bus->bus);
102
bus->intr_status |= I2CD_INTR_NORMAL_STOP;
103
}
104
bus->cmd &= ~I2CD_M_STOP_CMD;
105
+ aspeed_i2c_set_state(bus, I2CD_IDLE);
106
}
107
}
108
109
--
110
2.7.4
111
112
Deleted patch
1
From: Cédric Le Goater <clg@kaod.org>
2
1
3
Let's add an RTC to the palmetto BMC and a LM75 temperature sensor to
4
the AST2500 EVB to start with.
5
6
Signed-off-by: Cédric Le Goater <clg@kaod.org>
7
Message-id: 1494827476-1487-5-git-send-email-clg@kaod.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/arm/aspeed.c | 27 +++++++++++++++++++++++++++
12
1 file changed, 27 insertions(+)
13
14
diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/aspeed.c
17
+++ b/hw/arm/aspeed.c
18
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedBoardConfig {
19
const char *fmc_model;
20
const char *spi_model;
21
uint32_t num_cs;
22
+ void (*i2c_init)(AspeedBoardState *bmc);
23
} AspeedBoardConfig;
24
25
enum {
26
@@ -XXX,XX +XXX,XX @@ enum {
27
SCU_AST2500_HW_STRAP_ACPI_ENABLE | \
28
SCU_HW_STRAP_SPI_MODE(SCU_HW_STRAP_SPI_MASTER))
29
30
+static void palmetto_bmc_i2c_init(AspeedBoardState *bmc);
31
+static void ast2500_evb_i2c_init(AspeedBoardState *bmc);
32
+
33
static const AspeedBoardConfig aspeed_boards[] = {
34
[PALMETTO_BMC] = {
35
.soc_name = "ast2400-a1",
36
@@ -XXX,XX +XXX,XX @@ static const AspeedBoardConfig aspeed_boards[] = {
37
.fmc_model = "n25q256a",
38
.spi_model = "mx25l25635e",
39
.num_cs = 1,
40
+ .i2c_init = palmetto_bmc_i2c_init,
41
},
42
[AST2500_EVB] = {
43
.soc_name = "ast2500-a1",
44
@@ -XXX,XX +XXX,XX @@ static const AspeedBoardConfig aspeed_boards[] = {
45
.fmc_model = "n25q256a",
46
.spi_model = "mx25l25635e",
47
.num_cs = 1,
48
+ .i2c_init = ast2500_evb_i2c_init,
49
},
50
[ROMULUS_BMC] = {
51
.soc_name = "ast2500-a1",
52
@@ -XXX,XX +XXX,XX @@ static void aspeed_board_init(MachineState *machine,
53
aspeed_board_binfo.ram_size = ram_size;
54
aspeed_board_binfo.loader_start = sc->info->sdram_base;
55
56
+ if (cfg->i2c_init) {
57
+ cfg->i2c_init(bmc);
58
+ }
59
+
60
arm_load_kernel(ARM_CPU(first_cpu), &aspeed_board_binfo);
61
}
62
63
+static void palmetto_bmc_i2c_init(AspeedBoardState *bmc)
64
+{
65
+ AspeedSoCState *soc = &bmc->soc;
66
+
67
+ /* The palmetto platform expects a ds3231 RTC but a ds1338 is
68
+ * enough to provide basic RTC features. Alarms will be missing */
69
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 0), "ds1338", 0x68);
70
+}
71
+
72
static void palmetto_bmc_init(MachineState *machine)
73
{
74
aspeed_board_init(machine, &aspeed_boards[PALMETTO_BMC]);
75
@@ -XXX,XX +XXX,XX @@ static const TypeInfo palmetto_bmc_type = {
76
.class_init = palmetto_bmc_class_init,
77
};
78
79
+static void ast2500_evb_i2c_init(AspeedBoardState *bmc)
80
+{
81
+ AspeedSoCState *soc = &bmc->soc;
82
+
83
+ /* The AST2500 EVB expects a LM75 but a TMP105 is compatible */
84
+ i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 7), "tmp105", 0x4d);
85
+}
86
+
87
static void ast2500_evb_init(MachineState *machine)
88
{
89
aspeed_board_init(machine, &aspeed_boards[AST2500_EVB]);
90
--
91
2.7.4
92
93
Deleted patch
1
From: Cédric Le Goater <clg@kaod.org>
2
1
3
Largely inspired by the TMP105 temperature sensor, here is a model for
4
the TMP42{1,2,3} temperature sensors.
5
6
Specs can be found here :
7
8
    http://www.ti.com/lit/gpn/tmp421
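
The model's temperature properties use an 8.8 fixed-point value internally (see tmp421_get_temperature below); as a small self-contained example of that conversion to millidegrees, with values chosen purely for illustration:

    #include <stdio.h>

    /* 8.8 fixed point (1/256 degree units) -> millidegrees C, rounded */
    static long sketch_fx88_to_millideg(int raw)
    {
        return (raw * 1000L + 128) / 256;
    }

    int main(void)
    {
        /* 25.5 C is 25 * 256 + 128 in 8.8 fixed point -> prints 25500 */
        printf("%ld\n", sketch_fx88_to_millideg(25 * 256 + 128));
        return 0;
    }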
9
10
Signed-off-by: Cédric Le Goater <clg@kaod.org>
11
Message-id: 1494827476-1487-6-git-send-email-clg@kaod.org
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
hw/misc/Makefile.objs | 1 +
16
hw/misc/tmp421.c | 401 ++++++++++++++++++++++++++++++++++++++++
17
default-configs/arm-softmmu.mak | 1 +
18
3 files changed, 403 insertions(+)
19
create mode 100644 hw/misc/tmp421.c
20
21
diff --git a/hw/misc/Makefile.objs b/hw/misc/Makefile.objs
22
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/misc/Makefile.objs
24
+++ b/hw/misc/Makefile.objs
25
@@ -XXX,XX +XXX,XX @@
26
common-obj-$(CONFIG_APPLESMC) += applesmc.o
27
common-obj-$(CONFIG_MAX111X) += max111x.o
28
common-obj-$(CONFIG_TMP105) += tmp105.o
29
+common-obj-$(CONFIG_TMP421) += tmp421.o
30
common-obj-$(CONFIG_ISA_DEBUG) += debugexit.o
31
common-obj-$(CONFIG_SGA) += sga.o
32
common-obj-$(CONFIG_ISA_TESTDEV) += pc-testdev.o
33
diff --git a/hw/misc/tmp421.c b/hw/misc/tmp421.c
34
new file mode 100644
35
index XXXXXXX..XXXXXXX
36
--- /dev/null
37
+++ b/hw/misc/tmp421.c
38
@@ -XXX,XX +XXX,XX @@
39
+/*
40
+ * Texas Instruments TMP421 temperature sensor.
41
+ *
42
+ * Copyright (c) 2016 IBM Corporation.
43
+ *
44
+ * Largely inspired by :
45
+ *
46
+ * Texas Instruments TMP105 temperature sensor.
47
+ *
48
+ * Copyright (C) 2008 Nokia Corporation
49
+ * Written by Andrzej Zaborowski <andrew@openedhand.com>
50
+ *
51
+ * This program is free software; you can redistribute it and/or
52
+ * modify it under the terms of the GNU General Public License as
53
+ * published by the Free Software Foundation; either version 2 or
54
+ * (at your option) version 3 of the License.
55
+ *
56
+ * This program is distributed in the hope that it will be useful,
57
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
58
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
59
+ * GNU General Public License for more details.
60
+ *
61
+ * You should have received a copy of the GNU General Public License along
62
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
63
+ */
64
+
65
+#include "qemu/osdep.h"
66
+#include "hw/hw.h"
67
+#include "hw/i2c/i2c.h"
68
+#include "qapi/error.h"
69
+#include "qapi/visitor.h"
70
+
71
+/* Manufacturer / Device ID's */
72
+#define TMP421_MANUFACTURER_ID 0x55
73
+#define TMP421_DEVICE_ID 0x21
74
+#define TMP422_DEVICE_ID 0x22
75
+#define TMP423_DEVICE_ID 0x23
76
+
77
+typedef struct DeviceInfo {
78
+ int model;
79
+ const char *name;
80
+} DeviceInfo;
81
+
82
+static const DeviceInfo devices[] = {
83
+ { TMP421_DEVICE_ID, "tmp421" },
84
+ { TMP422_DEVICE_ID, "tmp422" },
85
+ { TMP423_DEVICE_ID, "tmp423" },
86
+};
87
+
88
+typedef struct TMP421State {
89
+ /*< private >*/
90
+ I2CSlave i2c;
91
+ /*< public >*/
92
+
93
+ int16_t temperature[4];
94
+
95
+ uint8_t status;
96
+ uint8_t config[2];
97
+ uint8_t rate;
98
+
99
+ uint8_t len;
100
+ uint8_t buf[2];
101
+ uint8_t pointer;
102
+
103
+} TMP421State;
104
+
105
+typedef struct TMP421Class {
106
+ I2CSlaveClass parent_class;
107
+ DeviceInfo *dev;
108
+} TMP421Class;
109
+
110
+#define TYPE_TMP421 "tmp421-generic"
111
+#define TMP421(obj) OBJECT_CHECK(TMP421State, (obj), TYPE_TMP421)
112
+
113
+#define TMP421_CLASS(klass) \
114
+ OBJECT_CLASS_CHECK(TMP421Class, (klass), TYPE_TMP421)
115
+#define TMP421_GET_CLASS(obj) \
116
+ OBJECT_GET_CLASS(TMP421Class, (obj), TYPE_TMP421)
117
+
118
+/* the TMP421 registers */
119
+#define TMP421_STATUS_REG 0x08
120
+#define TMP421_STATUS_BUSY (1 << 7)
121
+#define TMP421_CONFIG_REG_1 0x09
122
+#define TMP421_CONFIG_RANGE (1 << 2)
123
+#define TMP421_CONFIG_SHUTDOWN (1 << 6)
124
+#define TMP421_CONFIG_REG_2 0x0A
125
+#define TMP421_CONFIG_RC (1 << 2)
126
+#define TMP421_CONFIG_LEN (1 << 3)
127
+#define TMP421_CONFIG_REN (1 << 4)
128
+#define TMP421_CONFIG_REN2 (1 << 5)
129
+#define TMP421_CONFIG_REN3 (1 << 6)
130
+
131
+#define TMP421_CONVERSION_RATE_REG 0x0B
132
+#define TMP421_ONE_SHOT 0x0F
133
+
134
+#define TMP421_RESET 0xFC
135
+#define TMP421_MANUFACTURER_ID_REG 0xFE
136
+#define TMP421_DEVICE_ID_REG 0xFF
137
+
138
+#define TMP421_TEMP_MSB0 0x00
139
+#define TMP421_TEMP_MSB1 0x01
140
+#define TMP421_TEMP_MSB2 0x02
141
+#define TMP421_TEMP_MSB3 0x03
142
+#define TMP421_TEMP_LSB0 0x10
143
+#define TMP421_TEMP_LSB1 0x11
144
+#define TMP421_TEMP_LSB2 0x12
145
+#define TMP421_TEMP_LSB3 0x13
146
+
147
+static const int32_t mins[2] = { -40000, -55000 };
148
+static const int32_t maxs[2] = { 127000, 150000 };
149
+
150
+static void tmp421_get_temperature(Object *obj, Visitor *v, const char *name,
151
+ void *opaque, Error **errp)
152
+{
153
+ TMP421State *s = TMP421(obj);
154
+ bool ext_range = (s->config[0] & TMP421_CONFIG_RANGE);
155
+ int offset = ext_range * 64 * 256;
156
+ int64_t value;
157
+ int tempid;
158
+
159
+ if (sscanf(name, "temperature%d", &tempid) != 1) {
160
+ error_setg(errp, "error reading %s: %m", name);
161
+ return;
162
+ }
163
+
164
+ if (tempid >= 4 || tempid < 0) {
165
+ error_setg(errp, "error reading %s", name);
166
+ return;
167
+ }
168
+
169
+ value = ((s->temperature[tempid] - offset) * 1000 + 128) / 256;
170
+
171
+ visit_type_int(v, name, &value, errp);
172
+}
173
+
174
+/* Units are 0.001 centigrades relative to 0 C. s->temperature is 8.8
175
+ * fixed point, so units are 1/256 centigrades. A simple ratio will do.
176
+ */
177
+static void tmp421_set_temperature(Object *obj, Visitor *v, const char *name,
178
+ void *opaque, Error **errp)
179
+{
180
+ TMP421State *s = TMP421(obj);
181
+ Error *local_err = NULL;
182
+ int64_t temp;
183
+ bool ext_range = (s->config[0] & TMP421_CONFIG_RANGE);
184
+ int offset = ext_range * 64 * 256;
185
+ int tempid;
186
+
187
+ visit_type_int(v, name, &temp, &local_err);
188
+ if (local_err) {
189
+ error_propagate(errp, local_err);
190
+ return;
191
+ }
192
+
193
+ if (temp >= maxs[ext_range] || temp < mins[ext_range]) {
194
+ error_setg(errp, "value %" PRId64 ".%03" PRIu64 " °C is out of range",
195
+ temp / 1000, temp % 1000);
196
+ return;
197
+ }
198
+
199
+ if (sscanf(name, "temperature%d", &tempid) != 1) {
200
+ error_setg(errp, "error reading %s: %m", name);
201
+ return;
202
+ }
203
+
204
+ if (tempid >= 4 || tempid < 0) {
205
+ error_setg(errp, "error reading %s", name);
206
+ return;
207
+ }
208
+
209
+ s->temperature[tempid] = (int16_t) ((temp * 256 - 128) / 1000) + offset;
210
+}
211
+
212
+static void tmp421_read(TMP421State *s)
213
+{
214
+ TMP421Class *sc = TMP421_GET_CLASS(s);
215
+
216
+ s->len = 0;
217
+
218
+ switch (s->pointer) {
219
+ case TMP421_MANUFACTURER_ID_REG:
220
+ s->buf[s->len++] = TMP421_MANUFACTURER_ID;
221
+ break;
222
+ case TMP421_DEVICE_ID_REG:
223
+ s->buf[s->len++] = sc->dev->model;
224
+ break;
225
+ case TMP421_CONFIG_REG_1:
226
+ s->buf[s->len++] = s->config[0];
227
+ break;
228
+ case TMP421_CONFIG_REG_2:
229
+ s->buf[s->len++] = s->config[1];
230
+ break;
231
+ case TMP421_CONVERSION_RATE_REG:
232
+ s->buf[s->len++] = s->rate;
233
+ break;
234
+ case TMP421_STATUS_REG:
235
+ s->buf[s->len++] = s->status;
236
+ break;
237
+
238
+ /* FIXME: check for channel enablement in config registers */
239
+ case TMP421_TEMP_MSB0:
240
+ s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 8);
241
+ s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 0) & 0xf0;
242
+ break;
243
+ case TMP421_TEMP_MSB1:
244
+ s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 8);
245
+ s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 0) & 0xf0;
246
+ break;
247
+ case TMP421_TEMP_MSB2:
248
+ s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 8);
249
+ s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 0) & 0xf0;
250
+ break;
251
+ case TMP421_TEMP_MSB3:
252
+ s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 8);
253
+ s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 0) & 0xf0;
254
+ break;
255
+ case TMP421_TEMP_LSB0:
256
+ s->buf[s->len++] = (((uint16_t) s->temperature[0]) >> 0) & 0xf0;
257
+ break;
258
+ case TMP421_TEMP_LSB1:
259
+ s->buf[s->len++] = (((uint16_t) s->temperature[1]) >> 0) & 0xf0;
260
+ break;
261
+ case TMP421_TEMP_LSB2:
262
+ s->buf[s->len++] = (((uint16_t) s->temperature[2]) >> 0) & 0xf0;
263
+ break;
264
+ case TMP421_TEMP_LSB3:
265
+ s->buf[s->len++] = (((uint16_t) s->temperature[3]) >> 0) & 0xf0;
266
+ break;
267
+ }
268
+}
269
+
270
+static void tmp421_reset(I2CSlave *i2c);
271
+
272
+static void tmp421_write(TMP421State *s)
273
+{
274
+ switch (s->pointer) {
275
+ case TMP421_CONVERSION_RATE_REG:
276
+ s->rate = s->buf[0];
277
+ break;
278
+ case TMP421_CONFIG_REG_1:
279
+ s->config[0] = s->buf[0];
280
+ break;
281
+ case TMP421_CONFIG_REG_2:
282
+ s->config[1] = s->buf[0];
283
+ break;
284
+ case TMP421_RESET:
285
+ tmp421_reset(I2C_SLAVE(s));
286
+ break;
287
+ }
288
+}
289
+
290
+static int tmp421_rx(I2CSlave *i2c)
291
+{
292
+ TMP421State *s = TMP421(i2c);
293
+
294
+ if (s->len < 2) {
295
+ return s->buf[s->len++];
296
+ } else {
297
+ return 0xff;
298
+ }
299
+}
300
+
301
+static int tmp421_tx(I2CSlave *i2c, uint8_t data)
302
+{
303
+ TMP421State *s = TMP421(i2c);
304
+
305
+ if (s->len == 0) {
306
+ /* first byte is the register pointer for a read or write
307
+ * operation */
308
+ s->pointer = data;
309
+ s->len++;
310
+ } else if (s->len == 1) {
311
+ /* second byte is the data to write. The device only supports
312
+ * one byte writes */
313
+ s->buf[0] = data;
314
+ tmp421_write(s);
315
+ }
316
+
317
+ return 0;
318
+}
319
+
320
+static int tmp421_event(I2CSlave *i2c, enum i2c_event event)
321
+{
322
+ TMP421State *s = TMP421(i2c);
323
+
324
+ if (event == I2C_START_RECV) {
325
+ tmp421_read(s);
326
+ }
327
+
328
+ s->len = 0;
329
+ return 0;
330
+}
331
+
332
+static const VMStateDescription vmstate_tmp421 = {
333
+ .name = "TMP421",
334
+ .version_id = 0,
335
+ .minimum_version_id = 0,
336
+ .fields = (VMStateField[]) {
337
+ VMSTATE_UINT8(len, TMP421State),
338
+ VMSTATE_UINT8_ARRAY(buf, TMP421State, 2),
339
+ VMSTATE_UINT8(pointer, TMP421State),
340
+ VMSTATE_UINT8_ARRAY(config, TMP421State, 2),
341
+ VMSTATE_UINT8(status, TMP421State),
342
+ VMSTATE_UINT8(rate, TMP421State),
343
+ VMSTATE_INT16_ARRAY(temperature, TMP421State, 4),
344
+ VMSTATE_I2C_SLAVE(i2c, TMP421State),
345
+ VMSTATE_END_OF_LIST()
346
+ }
347
+};
348
+
349
+static void tmp421_reset(I2CSlave *i2c)
350
+{
351
+ TMP421State *s = TMP421(i2c);
352
+ TMP421Class *sc = TMP421_GET_CLASS(s);
353
+
354
+ memset(s->temperature, 0, sizeof(s->temperature));
355
+ s->pointer = 0;
356
+
357
+ s->config[0] = 0; /* TMP421_CONFIG_RANGE */
358
+
359
+ /* resistance correction and channel enablement */
360
+ switch (sc->dev->model) {
361
+ case TMP421_DEVICE_ID:
362
+ s->config[1] = 0x1c;
363
+ break;
364
+ case TMP422_DEVICE_ID:
365
+ s->config[1] = 0x3c;
366
+ break;
367
+ case TMP423_DEVICE_ID:
368
+ s->config[1] = 0x7c;
369
+ break;
370
+ }
371
+
372
+ s->rate = 0x7; /* 8Hz */
373
+ s->status = 0;
374
+}
375
+
376
+static int tmp421_init(I2CSlave *i2c)
377
+{
378
+ TMP421State *s = TMP421(i2c);
379
+
380
+ tmp421_reset(&s->i2c);
381
+
382
+ return 0;
383
+}
384
+
385
+static void tmp421_initfn(Object *obj)
386
+{
387
+ object_property_add(obj, "temperature0", "int",
388
+ tmp421_get_temperature,
389
+ tmp421_set_temperature, NULL, NULL, NULL);
390
+ object_property_add(obj, "temperature1", "int",
391
+ tmp421_get_temperature,
392
+ tmp421_set_temperature, NULL, NULL, NULL);
393
+ object_property_add(obj, "temperature2", "int",
394
+ tmp421_get_temperature,
395
+ tmp421_set_temperature, NULL, NULL, NULL);
396
+ object_property_add(obj, "temperature3", "int",
397
+ tmp421_get_temperature,
398
+ tmp421_set_temperature, NULL, NULL, NULL);
399
+}
400
+
401
+static void tmp421_class_init(ObjectClass *klass, void *data)
402
+{
403
+ DeviceClass *dc = DEVICE_CLASS(klass);
404
+ I2CSlaveClass *k = I2C_SLAVE_CLASS(klass);
405
+ TMP421Class *sc = TMP421_CLASS(klass);
406
+
407
+ k->init = tmp421_init;
408
+ k->event = tmp421_event;
409
+ k->recv = tmp421_rx;
410
+ k->send = tmp421_tx;
411
+ dc->vmsd = &vmstate_tmp421;
412
+ sc->dev = (DeviceInfo *) data;
413
+}
414
+
415
+static const TypeInfo tmp421_info = {
416
+ .name = TYPE_TMP421,
417
+ .parent = TYPE_I2C_SLAVE,
418
+ .instance_size = sizeof(TMP421State),
419
+ .instance_init = tmp421_initfn,
420
+ .class_init = tmp421_class_init,
421
+};
422
+
423
+static void tmp421_register_types(void)
424
+{
425
+ int i;
426
+
427
+ type_register_static(&tmp421_info);
428
+ for (i = 0; i < ARRAY_SIZE(devices); ++i) {
429
+ TypeInfo ti = {
430
+ .name = devices[i].name,
431
+ .parent = TYPE_TMP421,
432
+ .class_init = tmp421_class_init,
433
+ .class_data = (void *) &devices[i],
434
+ };
435
+ type_register(&ti);
436
+ }
437
+}
438
+
439
+type_init(tmp421_register_types)
440
diff --git a/default-configs/arm-softmmu.mak b/default-configs/arm-softmmu.mak
441
index XXXXXXX..XXXXXXX 100644
442
--- a/default-configs/arm-softmmu.mak
443
+++ b/default-configs/arm-softmmu.mak
444
@@ -XXX,XX +XXX,XX @@ CONFIG_TWL92230=y
445
CONFIG_TSC2005=y
446
CONFIG_LM832X=y
447
CONFIG_TMP105=y
448
+CONFIG_TMP421=y
449
CONFIG_STELLARIS=y
450
CONFIG_STELLARIS_INPUT=y
451
CONFIG_STELLARIS_ENET=y
452
--
453
2.7.4
454
455
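The temperature0..temperature3 properties above take values in units of
0.001 degrees C, while the emulated registers hold 8.8 fixed-point degrees
(with a 64 C bias when the extended-range bit is set). Below is a minimal
standalone sketch of that conversion, mirroring the arithmetic in
tmp421_get_temperature() and tmp421_set_temperature(); the helper names are
illustrative and not part of the patch:

    /* Sketch of the TMP421 model's 8.8 fixed-point conversion:
     * millidegrees <-> degrees * 256 (+ 64 C bias in extended range).
     * Constants mirror the patch; helper names are illustrative only.
     */
    #include <stdint.h>
    #include <stdio.h>

    static int16_t milli_to_reg(int64_t milli, int ext_range)
    {
        int offset = ext_range ? 64 * 256 : 0;   /* extended-range bias */
        return (int16_t)((milli * 256 - 128) / 1000) + offset;
    }

    static int64_t reg_to_milli(int16_t reg, int ext_range)
    {
        int offset = ext_range ? 64 * 256 : 0;
        return ((reg - offset) * 1000 + 128) / 256;   /* round to nearest */
    }

    int main(void)
    {
        int16_t reg = milli_to_reg(31000, 0);    /* 31 C in standard range */
        printf("0x%04x -> %lld millidegrees\n",
               (uint16_t)reg, (long long)reg_to_milli(reg, 0));
        return 0;
    }

With these formulas 31000 encodes to 0x1eff and decodes back to 30996, so a
set/get round trip can lose a few millidegrees to the integer arithmetic.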
From: Cédric Le Goater <clg@kaod.org>

Temperatures can be changed from the monitor with :

    (qemu) qom-set /machine/unattached/device[2] temperature0 12000

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-id: 1494827476-1487-7-git-send-email-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/aspeed.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed.c
+++ b/hw/arm/aspeed.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_board_init(MachineState *machine,
 static void palmetto_bmc_i2c_init(AspeedBoardState *bmc)
 {
     AspeedSoCState *soc = &bmc->soc;
+    DeviceState *dev;
 
     /* The palmetto platform expects a ds3231 RTC but a ds1338 is
      * enough to provide basic RTC features. Alarms will be missing */
     i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 0), "ds1338", 0x68);
+
+    /* add a TMP423 temperature sensor */
+    dev = i2c_create_slave(aspeed_i2c_get_bus(DEVICE(&soc->i2c), 2),
+                           "tmp423", 0x4c);
+    object_property_set_int(OBJECT(dev), 31000, "temperature0", &error_abort);
+    object_property_set_int(OBJECT(dev), 28000, "temperature1", &error_abort);
+    object_property_set_int(OBJECT(dev), 20000, "temperature2", &error_abort);
+    object_property_set_int(OBJECT(dev), 110000, "temperature3", &error_abort);
 }
 
 static void palmetto_bmc_init(MachineState *machine)
--
2.7.4

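A value set this way can also be read back over QMP with qom-get (usage
sketch only; as with the qom-set example above, the
/machine/unattached/device[N] path depends on the order devices were
created):

    -> { "execute": "qom-get",
         "arguments": { "path": "/machine/unattached/device[2]",
                        "property": "temperature0" } }
    <- { "return": 30996 }

The returned integer is again in millidegrees; if the path resolves to the
TMP423 added here, the 31000 set at board init reads back as 30996 because
of the 8.8 fixed-point rounding noted after the previous patch.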
From: Andrew Jones <drjones@redhat.com>

Cc: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org>
Message-id: 20170529173751.3443-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt-acpi-build.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ void virt_acpi_build(VirtMachineState *vms, AcpiBuildTables *tables)
     if (nb_numa_nodes > 0) {
         acpi_add_table(table_offsets, tables_blob);
         build_srat(tables_blob, tables->linker, vms);
+        if (have_numa_distance) {
+            acpi_add_table(table_offsets, tables_blob);
+            build_slit(tables_blob, tables->linker);
+        }
     }
 
     if (its_class_name() && !vmc->no_its) {
--
2.7.4

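Note that have_numa_distance is only set when the user supplies explicit
distances, so the SLIT is generated for invocations along these lines
(illustrative command line, assuming the generic
-numa dist,src=...,dst=...,val=... option; a plain -numa node setup still
gets an SRAT but no SLIT):

    qemu-system-aarch64 -M virt -smp 4 -m 2G \
        -numa node,cpus=0-1,nodeid=0 -numa node,cpus=2-3,nodeid=1 \
        -numa dist,src=0,dst=1,val=20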