Arm queue; the big bit here is RTH's MTE for user-mode series.

-- PMM

The following changes since commit 83339e21d05c824ebc9131d644f25c23d0e41ecf:

  Merge remote-tracking branch 'remotes/stefanha-gitlab/tags/block-pull-request' into staging (2021-02-10 15:42:20 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210211

for you to fetch changes up to 5213c78932ecf4bae18d62baf8735724e25fb478:

  target/arm: Correctly initialize MDCR_EL2.HPMN (2021-02-11 11:50:16 +0000)

----------------------------------------------------------------
target-arm queue:
 * Correctly initialize MDCR_EL2.HPMN
 * versal: Use nr_apu_cpus in favor of hard coding 2
 * npcm7xx: Add ethernet device
 * Enable ARMv8.4-MemTag for user-mode emulation
 * accel/tcg: Add URL of clang bug to comment about our workaround
 * Add support for FEAT_DIT, Data Independent Timing
 * Remove GPIO from unimplemented NPCM7XX
 * Fix SCR RES1 handling
 * Don't migrate CPUARMState.features

----------------------------------------------------------------
Aaron Lindsay (1):
      target/arm: Don't migrate CPUARMState.features

Daniel Müller (1):
      target/arm: Correctly initialize MDCR_EL2.HPMN

Doug Evans (3):
      hw/net: Add npcm7xx emc model
      hw/arm: Add npcm7xx emc model
      tests/qtests: Add npcm7xx emc model test

Edgar E. Iglesias (1):
      hw/arm: versal: Use nr_apu_cpus in favor of hard coding 2

Hao Wu (1):
      hw/arm: Remove GPIO from unimplemented NPCM7XX

Mike Nawrocki (1):
      target/arm: Fix SCR RES1 handling

Peter Maydell (2):
      arm: Update infocenter.arm.com URLs
      accel/tcg: Add URL of clang bug to comment about our workaround

Rebecca Cran (4):
      target/arm: Add support for FEAT_DIT, Data Independent Timing
      target/arm: Support AA32 DIT by moving PSTATE_SS from cpsr into env->pstate
      target/arm: Set ID_AA64PFR0.DIT and ID_PFR0.DIT to 1 for "max" AA64 CPU
      target/arm: Set ID_PFR0.DIT to 1 for "max" 32-bit CPU

Richard Henderson (31):
      tcg: Introduce target-specific page data for user-only
      linux-user: Introduce PAGE_ANON
      exec: Use uintptr_t for guest_base
      exec: Use uintptr_t in cpu_ldst.h
      exec: Improve types for guest_addr_valid
      linux-user: Check for overflow in access_ok
      linux-user: Tidy VERIFY_READ/VERIFY_WRITE
      bsd-user: Tidy VERIFY_READ/VERIFY_WRITE
      linux-user: Do not use guest_addr_valid for h2g_valid
      linux-user: Fix guest_addr_valid vs reserved_va
      exec: Introduce cpu_untagged_addr
      exec: Use cpu_untagged_addr in g2h; split out g2h_untagged
      linux-user: Explicitly untag memory management syscalls
      linux-user: Use guest_range_valid in access_ok
      exec: Rename guest_{addr,range}_valid to *_untagged
      linux-user: Use cpu_untagged_addr in access_ok; split out *_untagged
      linux-user: Move lock_user et al out of line
      linux-user: Fix types in uaccess.c
      linux-user: Handle tags in lock_user/unlock_user
      linux-user/aarch64: Implement PR_TAGGED_ADDR_ENABLE
      target/arm: Improve gen_top_byte_ignore
      target/arm: Use the proper TBI settings for linux-user
      linux-user/aarch64: Implement PR_MTE_TCF and PR_MTE_TAG
      linux-user/aarch64: Implement PROT_MTE
      target/arm: Split out syndrome.h from internals.h
      linux-user/aarch64: Pass syndrome to EXC_*_ABORT
      linux-user/aarch64: Signal SEGV_MTESERR for sync tag check fault
      linux-user/aarch64: Signal SEGV_MTEAERR for async tag check error
      target/arm: Add allocation tag storage for user mode
      target/arm: Enable MTE for user-only
      tests/tcg/aarch64: Add mte smoke tests

 docs/system/arm/nuvoton.rst | 3 +-
 bsd-user/qemu.h | 9 +-
 include/exec/cpu-all.h | 47 +-
 include/exec/cpu_ldst.h | 39 +-
 include/exec/exec-all.h | 2 +-
 include/hw/arm/npcm7xx.h | 2 +
 include/hw/dma/pl080.h | 7 +-
 include/hw/misc/arm_integrator_debug.h | 2 +-
 include/hw/net/npcm7xx_emc.h | 286 +++++++++++
 include/hw/ssi/pl022.h | 5 +-
 linux-user/aarch64/target_signal.h | 3 +
 linux-user/aarch64/target_syscall.h | 13 +
 linux-user/qemu.h | 76 +--
 linux-user/syscall_defs.h | 1 +
 target/arm/cpu-param.h | 3 +
 target/arm/cpu.h | 49 ++
 target/arm/internals.h | 255 +---------
 target/arm/syndrome.h | 273 +++++++++++
 tests/tcg/aarch64/mte.h | 60 +++
 accel/tcg/cpu-exec.c | 25 +-
 accel/tcg/translate-all.c | 32 +-
 accel/tcg/user-exec.c | 51 +-
 bsd-user/main.c | 4 +-
 hw/arm/aspeed_ast2600.c | 2 +-
 hw/arm/musca.c | 4 +-
 hw/arm/npcm7xx.c | 58 ++-
 hw/arm/xlnx-versal.c | 4 +-
 hw/misc/arm_integrator_debug.c | 2 +-
 hw/net/npcm7xx_emc.c | 857 +++++++++++++++++++++++++++++++
 hw/timer/arm_timer.c | 7 +-
 linux-user/aarch64/cpu_loop.c | 38 +-
 linux-user/elfload.c | 18 +-
 linux-user/flatload.c | 2 +-
 linux-user/hppa/cpu_loop.c | 39 +-
 linux-user/i386/cpu_loop.c | 6 +-
 linux-user/i386/signal.c | 5 +-
 linux-user/main.c | 4 +-
 linux-user/mmap.c | 86 ++--
 linux-user/ppc/signal.c | 4 +-
 linux-user/syscall.c | 165 +++++--
 linux-user/uaccess.c | 82 +++-
 target/arm/cpu.c | 29 +-
 target/arm/cpu64.c | 5 +
 target/arm/helper-a64.c | 31 +-
 target/arm/helper.c | 71 ++-
 target/arm/machine.c | 2 +-
 target/arm/mte_helper.c | 39 +-
 target/arm/op_helper.c | 9 +-
 target/arm/tlb_helper.c | 15 +-
 target/arm/translate-a64.c | 37 +-
 target/hppa/op_helper.c | 2 +-
 target/i386/tcg/mem_helper.c | 2 +-
 target/s390x/mem_helper.c | 4 +-
 tests/qtest/npcm7xx_emc-test.c | 812 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/mte-1.c | 28 ++
 tests/tcg/aarch64/mte-2.c | 45 ++
 tests/tcg/aarch64/mte-3.c | 51 ++
 tests/tcg/aarch64/mte-4.c | 45 ++
 tests/tcg/aarch64/pauth-2.c | 1 -
 hw/net/meson.build | 1 +
 hw/net/trace-events | 17 +
 tests/qtest/meson.build | 1 +
 tests/tcg/aarch64/Makefile.target | 6 +
 tests/tcg/configure.sh | 4 +
 64 files changed, 3312 insertions(+), 575 deletions(-)
 create mode 100644 include/hw/net/npcm7xx_emc.h
 create mode 100644 target/arm/syndrome.h
 create mode 100644 tests/tcg/aarch64/mte.h
 create mode 100644 hw/net/npcm7xx_emc.c
 create mode 100644 tests/qtest/npcm7xx_emc-test.c
 create mode 100644 tests/tcg/aarch64/mte-1.c
 create mode 100644 tests/tcg/aarch64/mte-2.c
 create mode 100644 tests/tcg/aarch64/mte-3.c
 create mode 100644 tests/tcg/aarch64/mte-4.c
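The headline item above is RTH's series enabling ARMv8.4 MemTag for
user-mode emulation. As a rough sketch of the kind of guest program this
lets qemu-aarch64 run (illustrative only, not taken from the series: it
assumes Linux UAPI headers providing PR_SET_TAGGED_ADDR_ENABLE,
PR_MTE_TCF_SYNC and PROT_MTE; the series' own tests/tcg/aarch64/mte-*.c
are the authoritative examples):

    /*
     * Illustrative only: needs an MTE-capable kernel or QEMU user-mode
     * and reasonably recent UAPI headers for the PR_* names below.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    #ifndef PROT_MTE
    #define PROT_MTE 0x20   /* arm64 UAPI value, fallback for older headers */
    #endif

    int main(void)
    {
        /* Opt in to tagged addresses with synchronous tag-check faults. */
        if (prctl(PR_SET_TAGGED_ADDR_ENABLE,
                  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0)) {
            perror("prctl");
            return 1;
        }
        /* Map one page with allocation tags enabled. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(p, 0, 4096);  /* untagged pointer vs tag-0 memory: no fault */
        puts("PROT_MTE mapping works");
        return 0;
    }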
From: Aaron Lindsay <aaron@os.amperecomputing.com>

As feature flags are added or removed, the meanings of bits in the
`features` field can change between QEMU versions, causing migration
failures. Additionally, migrating the field is not useful because it is
a constant function of the CPU being used.

Fixes: LP:1914696
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Tested-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/machine.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
 VMSTATE_UINT64(env.exclusive_addr, ARMCPU),
 VMSTATE_UINT64(env.exclusive_val, ARMCPU),
 VMSTATE_UINT64(env.exclusive_high, ARMCPU),
- VMSTATE_UINT64(env.features, ARMCPU),
+ VMSTATE_UNUSED(sizeof(uint64_t)),
 VMSTATE_UINT32(env.exception.syndrome, ARMCPU),
 VMSTATE_UINT32(env.exception.fsr, ARMCPU),
 VMSTATE_UINT64(env.exception.vaddress, ARMCPU),
--
2.20.1

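For readers less familiar with the migration macros used above, here is a
minimal sketch (not from this series; the struct and field names are made
up, and it presumes QEMU's migration/vmstate.h) of how a retired field is
kept in the wire format with VMSTATE_UNUSED so that old and new streams
still agree on the byte layout:

    #include "migration/vmstate.h"

    typedef struct ExampleState {
        uint32_t kept_field;
    } ExampleState;

    static const VMStateDescription vmstate_example = {
        .name = "example",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(kept_field, ExampleState),
            /* formerly VMSTATE_UINT64(retired_field, ExampleState) */
            VMSTATE_UNUSED(sizeof(uint64_t)),
            VMSTATE_END_OF_LIST()
        }
    };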
From: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>

The FW and AW bits of SCR_EL3 are RES1 only in some contexts. Force them
to 1 only when there is no support for AArch32 at EL1 or above.

The reset value will be 0x30 only if the CPU is AArch64-only; if there
is support for AArch32 at EL1 or above, it will be reset to 0.

Also adds helper function isar_feature_aa64_aa32_el1 to check if AArch32
is supported at EL1 or above.

Signed-off-by: Mike Nawrocki <michael.nawrocki@gtri.gatech.edu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210203165552.16306-2-michael.nawrocki@gtri.gatech.edu
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 5 +++++
 target/arm/helper.c | 16 ++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_aa32(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL0) >= 2;
 }

+static inline bool isar_feature_aa64_aa32_el1(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, EL1) >= 2;
+}
+
 static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 ARMCPU *cpu = env_archcpu(env);

 if (ri->state == ARM_CP_STATE_AA64) {
- value |= SCR_FW | SCR_AW; /* these two bits are RES1. */
+ if (arm_feature(env, ARM_FEATURE_AARCH64) &&
+ !cpu_isar_feature(aa64_aa32_el1, cpu)) {
+ value |= SCR_FW | SCR_AW; /* these two bits are RES1. */
+ }
 valid_mask &= ~SCR_NET;

 if (cpu_isar_feature(aa64_lor, cpu)) {
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
 raw_write(env, ri, value);
 }

+static void scr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ /*
+ * scr_write will set the RES1 bits on an AArch64-only CPU.
+ * The reset value will be 0x30 on an AArch64-only CPU and 0 otherwise.
+ */
+ scr_write(env, ri, 0);
+}
+
 static CPAccessResult access_aa64_tid2(CPUARMState *env,
 const ARMCPRegInfo *ri,
 bool isread)
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el3_cp_reginfo[] = {
 { .name = "SCR_EL3", .state = ARM_CP_STATE_AA64,
 .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 1, .opc2 = 0,
 .access = PL3_RW, .fieldoffset = offsetof(CPUARMState, cp15.scr_el3),
- .resetvalue = 0, .writefn = scr_write },
+ .resetfn = scr_reset, .writefn = scr_write },
 { .name = "SCR", .type = ARM_CP_ALIAS | ARM_CP_NEWEL,
 .cp = 15, .opc1 = 0, .crn = 1, .crm = 1, .opc2 = 0,
 .access = PL1_RW, .accessfn = access_trap_aa32s_el1,
--
2.20.1

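As background to the new isar_feature_aa64_aa32_el1() helper:
ID_AA64PFR0_EL1.EL1 is a 4-bit field (bits [7:4]) whose value 2 means EL1
supports AArch32 as well as AArch64, which is why the helper tests
">= 2". A small standalone illustration of the same extract-and-compare
shape (FIELD_EX64() in QEMU is essentially the extract64() here; the
sample register value is made up):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t extract64(uint64_t value, int start, int length)
    {
        return (value >> start) & (~0ULL >> (64 - length));
    }

    int main(void)
    {
        /* EL0 = 2, EL1 = 2: AArch32 supported at EL0 and EL1 */
        uint64_t id_aa64pfr0 = 0x0000000000000022ULL;
        uint64_t el1 = extract64(id_aa64pfr0, 4, 4);
        printf("AArch32 supported at EL1: %s\n", el1 >= 2 ? "yes" : "no");
        return 0;
    }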
From: Hao Wu <wuhaotsh@google.com>

NPCM7XX GPIO devices have been implemented in hw/gpio/npcm7xx-gpio.c. So
we removed them from the unimplemented devices list.

Reviewed-by: Doug Evans<dje@google.com>
Reviewed-by: Tyrong Ting<kfting@nuvoton.com>
Signed-off-by: Hao Wu<wuhaotsh@google.com>
Message-id: 20210129005845.416272-2-wuhaotsh@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/npcm7xx.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/npcm7xx.c
+++ b/hw/arm/npcm7xx.c
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
 create_unimplemented_device("npcm7xx.pcierc", 0xe1000000, 64 * KiB);
 create_unimplemented_device("npcm7xx.kcs", 0xf0007000, 4 * KiB);
 create_unimplemented_device("npcm7xx.gfxi", 0xf000e000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[0]", 0xf0010000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[1]", 0xf0011000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[2]", 0xf0012000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[3]", 0xf0013000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[4]", 0xf0014000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[5]", 0xf0015000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[6]", 0xf0016000, 4 * KiB);
- create_unimplemented_device("npcm7xx.gpio[7]", 0xf0017000, 4 * KiB);
 create_unimplemented_device("npcm7xx.smbus[0]", 0xf0080000, 4 * KiB);
 create_unimplemented_device("npcm7xx.smbus[1]", 0xf0081000, 4 * KiB);
 create_unimplemented_device("npcm7xx.smbus[2]", 0xf0082000, 4 * KiB);
--
2.20.1

From: Rebecca Cran <rebecca@nuviainc.com>

Add support for FEAT_DIT. DIT (Data Independent Timing) is a required
feature for ARMv8.4. Since virtual machine execution is largely
nondeterministic and TCG is outside of the security domain, it's
implemented as a NOP.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-2-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 12 ++++++++++++
 target/arm/internals.h | 6 ++++++
 target/arm/helper.c | 22 ++++++++++++++++++++++
 target/arm/translate-a64.c | 12 ++++++++++++
 4 files changed, 52 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define CPSR_IT_2_7 (0xfc00U)
 #define CPSR_GE (0xfU << 16)
 #define CPSR_IL (1U << 20)
+#define CPSR_DIT (1U << 21)
 #define CPSR_PAN (1U << 22)
 #define CPSR_J (1U << 24)
 #define CPSR_IT_0_1 (3U << 25)
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
 #define PSTATE_SS (1U << 21)
 #define PSTATE_PAN (1U << 22)
 #define PSTATE_UAO (1U << 23)
+#define PSTATE_DIT (1U << 24)
 #define PSTATE_TCO (1U << 25)
 #define PSTATE_V (1U << 28)
 #define PSTATE_C (1U << 29)
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa32_tts2uxn(const ARMISARegisters *id)
 return FIELD_EX32(id->id_mmfr4, ID_MMFR4, XNX) != 0;
 }

+static inline bool isar_feature_aa32_dit(const ARMISARegisters *id)
+{
+ return FIELD_EX32(id->id_pfr0, ID_PFR0, DIT) != 0;
+}
+
 /*
  * 64-bit feature tests via id registers.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_tts2uxn(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, XNX) != 0;
 }

+static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch32_cpsr_valid_mask(uint64_t features,
 if (isar_feature_aa32_pan(id)) {
 valid |= CPSR_PAN;
 }
+ if (isar_feature_aa32_dit(id)) {
+ valid |= CPSR_DIT;
+ }

 return valid;
 }
@@ -XXX,XX +XXX,XX @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
 if (isar_feature_aa64_uao(id)) {
 valid |= PSTATE_UAO;
 }
+ if (isar_feature_aa64_dit(id)) {
+ valid |= PSTATE_DIT;
+ }
 if (isar_feature_aa64_mte(id)) {
 valid |= PSTATE_TCO;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo uao_reginfo = {
 .readfn = aa64_uao_read, .writefn = aa64_uao_write
 };

+static uint64_t aa64_dit_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ return env->pstate & PSTATE_DIT;
+}
+
+static void aa64_dit_write(CPUARMState *env, const ARMCPRegInfo *ri,
+ uint64_t value)
+{
+ env->pstate = (env->pstate & ~PSTATE_DIT) | (value & PSTATE_DIT);
+}
+
+static const ARMCPRegInfo dit_reginfo = {
+ .name = "DIT", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 5,
+ .type = ARM_CP_NO_RAW, .access = PL0_RW,
+ .readfn = aa64_dit_read, .writefn = aa64_dit_write
+};
+
 static CPAccessResult aa64_cacheop_poc_access(CPUARMState *env,
 const ARMCPRegInfo *ri,
 bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
 define_one_arm_cp_reg(cpu, &uao_reginfo);
 }

+ if (cpu_isar_feature(aa64_dit, cpu)) {
+ define_one_arm_cp_reg(cpu, &dit_reginfo);
+ }
+
 if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
 define_arm_cp_regs(cpu, vhe_reginfo);
 }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
 tcg_temp_free_i32(t1);
 break;

+ case 0x1a: /* DIT */
+ if (!dc_isar_feature(aa64_dit, s)) {
+ goto do_unallocated;
+ }
+ if (crm & 1) {
+ set_pstate_bits(PSTATE_DIT);
+ } else {
+ clear_pstate_bits(PSTATE_DIT);
+ }
+ /* There's no need to rebuild hflags because DIT is a nop */
+ break;
+
 case 0x1e: /* DAIFSet */
 t1 = tcg_const_i32(crm);
 gen_helper_msr_i_daifset(cpu_env, t1);
--
2.20.1

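For context, this is roughly how a guest exercises the new register; a
hedged standalone sketch, not part of the patch. It needs an assembler
that accepts the generic S3_3_C4_C2_5 encoding for the DIT register and a
CPU (or QEMU "-cpu max") that implements FEAT_DIT, and it relies on DIT
being bit 24 of the transferred value, matching PSTATE_DIT above:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    #if defined(__aarch64__)
        uint64_t dit;
        /* MSR/MRS of the DIT special register (op0=3 op1=3 CRn=4 CRm=2 op2=5). */
        __asm__ volatile("msr S3_3_C4_C2_5, %0" :: "r"(1ULL << 24));
        __asm__ volatile("mrs %0, S3_3_C4_C2_5" : "=r"(dit));
        printf("PSTATE.DIT is %s\n", (dit & (1ULL << 24)) ? "set" : "clear");
    #else
        puts("aarch64 only");
    #endif
        return 0;
    }

Under TCG the bit is simply stored and read back with no timing effect,
exactly as the commit message says.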
From: Rebecca Cran <rebecca@nuviainc.com>

cpsr has been treated as being the same as spsr, but it isn't.
Since PSTATE_SS isn't in cpsr, remove it and move it into env->pstate.

This allows us to add support for CPSR_DIT, adding helper functions
to merge SPSR_ELx to and from CPSR.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-3-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper-a64.c | 27 +++++++++++++++++++++++----
 target/arm/helper.c | 24 ++++++++++++++++++------
 target/arm/op_helper.c | 9 +--------
 3 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static int el_from_spsr(uint32_t spsr)
 }
 }

+static void cpsr_write_from_spsr_elx(CPUARMState *env,
+ uint32_t val)
+{
+ uint32_t mask;
+
+ /* Save SPSR_ELx.SS into PSTATE. */
+ env->pstate = (env->pstate & ~PSTATE_SS) | (val & PSTATE_SS);
+ val &= ~PSTATE_SS;
+
+ /* Move DIT to the correct location for CPSR */
+ if (val & PSTATE_DIT) {
+ val &= ~PSTATE_DIT;
+ val |= CPSR_DIT;
+ }
+
+ mask = aarch32_cpsr_valid_mask(env->features, \
+ &env_archcpu(env)->isar);
+ cpsr_write(env, val, mask, CPSRWriteRaw);
+}
+
 void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
 {
 int cur_el = arm_current_el(env);
 unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
- uint32_t mask, spsr = env->banked_spsr[spsr_idx];
+ uint32_t spsr = env->banked_spsr[spsr_idx];
 int new_el;
 bool return_to_aa64 = (spsr & PSTATE_nRW) == 0;

@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
 * will sort the register banks out for us, and we've already
 * caught all the bad-mode cases in el_from_spsr().
 */
- mask = aarch32_cpsr_valid_mask(env->features, &env_archcpu(env)->isar);
- cpsr_write(env, spsr, mask, CPSRWriteRaw);
+ cpsr_write_from_spsr_elx(env, spsr);
 if (!arm_singlestep_active(env)) {
- env->uncached_cpsr &= ~PSTATE_SS;
+ env->pstate &= ~PSTATE_SS;
 }
 aarch64_sync_64_to_32(env);

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void take_aarch32_exception(CPUARMState *env, int new_mode,
 * For exceptions taken to AArch32 we must clear the SS bit in both
 * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
 */
- env->uncached_cpsr &= ~PSTATE_SS;
+ env->pstate &= ~PSTATE_SS;
 env->spsr = cpsr_read(env);
 /* Clear IT bits. */
 env->condexec_bits = 0;
@@ -XXX,XX +XXX,XX @@ static int aarch64_regnum(CPUARMState *env, int aarch32_reg)
 }
 }

+static uint32_t cpsr_read_for_spsr_elx(CPUARMState *env)
+{
+ uint32_t ret = cpsr_read(env);
+
+ /* Move DIT to the correct location for SPSR_ELx */
+ if (ret & CPSR_DIT) {
+ ret &= ~CPSR_DIT;
+ ret |= PSTATE_DIT;
+ }
+ /* Merge PSTATE.SS into SPSR_ELx */
+ ret |= env->pstate & PSTATE_SS;
+
+ return ret;
+}
+
 /* Handle exception entry to a target EL which is using AArch64 */
 static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
 {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
 aarch64_save_sp(env, arm_current_el(env));
 env->elr_el[new_el] = env->pc;
 } else {
- old_mode = cpsr_read(env);
+ old_mode = cpsr_read_for_spsr_elx(env);
 env->elr_el[new_el] = env->regs[15];

 aarch64_sync_32_to_64(env);
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
 target_ulong *cs_base, uint32_t *pflags)
 {
 uint32_t flags = env->hflags;
- uint32_t pstate_for_ss;

 *cs_base = 0;
 assert_hflags_rebuild_correctly(env);
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
 if (cpu_isar_feature(aa64_bti, env_archcpu(env))) {
 flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
 }
- pstate_for_ss = env->pstate;
 } else {
 *pc = env->regs[15];

@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,

 flags = FIELD_DP32(flags, TBFLAG_AM32, THUMB, env->thumb);
 flags = FIELD_DP32(flags, TBFLAG_AM32, CONDEXEC, env->condexec_bits);
- pstate_for_ss = env->uncached_cpsr;
 }

 /*
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
 * SS_ACTIVE is set in hflags; PSTATE_SS is computed every TB.
 */
 if (FIELD_EX32(flags, TBFLAG_ANY, SS_ACTIVE) &&
- (pstate_for_ss & PSTATE_SS)) {
+ (env->pstate & PSTATE_SS)) {
 flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
 }

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_bkpt_insn)(CPUARMState *env, uint32_t syndrome)

 uint32_t HELPER(cpsr_read)(CPUARMState *env)
 {
- /*
- * We store the ARMv8 PSTATE.SS bit in env->uncached_cpsr.
- * This is convenient for populating SPSR_ELx, but must be
- * hidden from aarch32 mode, where it is not visible.
- *
- * TODO: ARMv8.4-DIT -- need to move SS somewhere else.
- */
- return cpsr_read(env) & ~(CPSR_EXEC | PSTATE_SS);
+ return cpsr_read(env) & ~CPSR_EXEC;
 }

 void HELPER(cpsr_write)(CPUARMState *env, uint32_t val, uint32_t mask)
--
2.20.1

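The key detail behind the two new helpers is that AArch32 CPSR keeps DIT
in bit 21 while SPSR_ELx/PSTATE keeps it in bit 24, so converting between
the two formats has to move the bit. A tiny standalone sketch of that
conversion, using the CPSR_DIT/PSTATE_DIT values added earlier in this
series (the helper name below is made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define CPSR_DIT   (1U << 21)
    #define PSTATE_DIT (1U << 24)

    /* Relocate the DIT bit from the CPSR layout to the SPSR_ELx layout. */
    static uint32_t cpsr_to_spsr_elx(uint32_t cpsr)
    {
        uint32_t spsr = cpsr & ~CPSR_DIT;
        if (cpsr & CPSR_DIT) {
            spsr |= PSTATE_DIT;
        }
        return spsr;
    }

    int main(void)
    {
        uint32_t cpsr = CPSR_DIT;   /* DIT set in the AArch32 form */
        printf("spsr_elx = 0x%08x\n", cpsr_to_spsr_elx(cpsr)); /* bit 24 set */
        return 0;
    }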
From: Rebecca Cran <rebecca@nuviainc.com>

Enable FEAT_DIT for the "max" AARCH64 CPU.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-4-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu64.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
 t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
 t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
+ t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
 cpu->isar.id_aa64pfr0 = t;

 t = cpu->isar.id_aa64pfr1;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
 cpu->isar.id_isar6 = u;

+ u = cpu->isar.id_pfr0;
+ u = FIELD_DP32(u, ID_PFR0, DIT, 1);
+ cpu->isar.id_pfr0 = u;
+
 u = cpu->isar.id_mmfr3;
 u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
 cpu->isar.id_mmfr3 = u;
--
2.20.1

From: Rebecca Cran <rebecca@nuviainc.com>

Enable FEAT_DIT for the "max" 32-bit CPU.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210208065700.19454-5-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
 t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
 t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
 cpu->isar.id_mmfr4 = t;
+
+ t = cpu->isar.id_pfr0;
+ t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+ cpu->isar.id_pfr0 = t;
 }
 #endif
 }
--
2.20.1

Update infocenter.arm.com URLs for various pieces of Arm
documentation to the new developer.arm.com equivalents. (There is a
redirection in place from the old URLs, but we might as well update
our comments in case the redirect ever disappears in future.)

This patch covers all the URLs which are not MPS2/SSE-200/IoTKit
related (those are dealt with in a different patch).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210205171456.19939-1-peter.maydell@linaro.org
---
 include/hw/dma/pl080.h | 7 ++++---
 include/hw/misc/arm_integrator_debug.h | 2 +-
 include/hw/ssi/pl022.h | 5 +++--
 hw/arm/aspeed_ast2600.c | 2 +-
 hw/arm/musca.c | 4 ++--
 hw/misc/arm_integrator_debug.c | 2 +-
 hw/timer/arm_timer.c | 7 ++++---
 7 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/hw/dma/pl080.h b/include/hw/dma/pl080.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/dma/pl080.h
+++ b/include/hw/dma/pl080.h
@@ -XXX,XX +XXX,XX @@
 * (at your option) any later version.
 */

-/* This is a model of the Arm PrimeCell PL080/PL081 DMA controller:
+/*
+ * This is a model of the Arm PrimeCell PL080/PL081 DMA controller:
 * The PL080 TRM is:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ddi0196g/DDI0196.pdf
+ * https://developer.arm.com/documentation/ddi0196/latest
 * and the PL081 TRM is:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ddi0218e/DDI0218.pdf
+ * https://developer.arm.com/documentation/ddi0218/latest
 *
 * QEMU interface:
 * + sysbus IRQ 0: DMACINTR combined interrupt line
diff --git a/include/hw/misc/arm_integrator_debug.h b/include/hw/misc/arm_integrator_debug.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/misc/arm_integrator_debug.h
+++ b/include/hw/misc/arm_integrator_debug.h
@@ -XXX,XX +XXX,XX @@
 *
 * Browse the data sheet:
 *
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0159b/Babbfijf.html
+ * https://developer.arm.com/documentation/dui0159/b/peripherals-and-interfaces/debug-leds-and-dip-switch-interface
 *
 * Copyright (c) 2013 Alex Bennée <alex@bennee.com>
 *
diff --git a/include/hw/ssi/pl022.h b/include/hw/ssi/pl022.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/ssi/pl022.h
+++ b/include/hw/ssi/pl022.h
@@ -XXX,XX +XXX,XX @@
 * (at your option) any later version.
 */

-/* This is a model of the Arm PrimeCell PL022 synchronous serial port.
+/*
+ * This is a model of the Arm PrimeCell PL022 synchronous serial port.
 * The PL022 TRM is:
- * http://infocenter.arm.com/help/topic/com.arm.doc.ddi0194h/DDI0194H_ssp_pl022_trm.pdf
+ * https://developer.arm.com/documentation/ddi0194/latest
 *
 * QEMU interface:
 * + sysbus IRQ: SSPINTR combined interrupt line
diff --git a/hw/arm/aspeed_ast2600.c b/hw/arm/aspeed_ast2600.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/aspeed_ast2600.c
+++ b/hw/arm/aspeed_ast2600.c
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_ast2600_init(Object *obj)
 /*
 * ASPEED ast2600 has 0xf as cluster ID
 *
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0388e/CIHEBGFG.html
+ * https://developer.arm.com/documentation/ddi0388/e/the-system-control-coprocessors/summary-of-system-control-coprocessor-registers/multiprocessor-affinity-register
 */
 static uint64_t aspeed_calc_affinity(int cpu)
 {
diff --git a/hw/arm/musca.c b/hw/arm/musca.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/musca.c
+++ b/hw/arm/musca.c
@@ -XXX,XX +XXX,XX @@
 * https://developer.arm.com/products/system-design/development-boards/iot-test-chips-and-boards/musca-a-test-chip-board
 * https://developer.arm.com/products/system-design/development-boards/iot-test-chips-and-boards/musca-b-test-chip-board
 * We model the A and B1 variants of this board, as described in the TRMs:
- * http://infocenter.arm.com/help/topic/com.arm.doc.101107_0000_00_en/index.html
- * http://infocenter.arm.com/help/topic/com.arm.doc.101312_0000_00_en/index.html
+ * https://developer.arm.com/documentation/101107/latest/
+ * https://developer.arm.com/documentation/101312/latest/
 */

 #include "qemu/osdep.h"
diff --git a/hw/misc/arm_integrator_debug.c b/hw/misc/arm_integrator_debug.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/arm_integrator_debug.c
+++ b/hw/misc/arm_integrator_debug.c
@@ -XXX,XX +XXX,XX @@
 * to this area.
 *
 * The real h/w is described at:
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0159b/Babbfijf.html
+ * https://developer.arm.com/documentation/dui0159/b/peripherals-and-interfaces/debug-leds-and-dip-switch-interface
 *
 * Copyright (c) 2013 Alex Bennée <alex@bennee.com>
 *
diff --git a/hw/timer/arm_timer.c b/hw/timer/arm_timer.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/timer/arm_timer.c
+++ b/hw/timer/arm_timer.c
@@ -XXX,XX +XXX,XX @@ static arm_timer_state *arm_timer_init(uint32_t freq)
 return s;
 }

-/* ARM PrimeCell SP804 dual timer module.
+/*
+ * ARM PrimeCell SP804 dual timer module.
 * Docs at
- * http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0271d/index.html
-*/
+ * https://developer.arm.com/documentation/ddi0271/latest/
+ */

 #define TYPE_SP804 "sp804"
 OBJECT_DECLARE_SIMPLE_TYPE(SP804State, SP804)
--
2.20.1

In cpu_exec() we have a longstanding workaround for compilers which
do not correctly implement the part of the sigsetjmp()/siglongjmp()
spec which requires that local variables which are not changed
between the setjmp and the longjmp retain their value.

I recently ran across the upstream clang bug report for this; add a
link to it to the comment describing the workaround, and generally
expand the comment, so that we have a reasonable chance in future of
understanding why it's there and determining when we can remove it,
assuming clang eventually fixes the bug.

Remove the /* buggy compiler */ comments on the #else and #endif:
they don't add anything to understanding and are somewhat misleading
since they're sandwiching the code path for *non*-buggy compilers.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210129130330.30820-1-peter.maydell@linaro.org
---
 accel/tcg/cpu-exec.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ int cpu_exec(CPUState *cpu)
 /* prepare setjmp context for exception handling */
 if (sigsetjmp(cpu->jmp_env, 0) != 0) {
 #if defined(__clang__)
- /* Some compilers wrongly smash all local variables after
- * siglongjmp. There were bug reports for gcc 4.5.0 and clang.
+ /*
+ * Some compilers wrongly smash all local variables after
+ * siglongjmp (the spec requires that only non-volatile locals
+ * which are changed between the sigsetjmp and siglongjmp are
+ * permitted to be trashed). There were bug reports for gcc
+ * 4.5.0 and clang. The bug is fixed in all versions of gcc
+ * that we support, but is still unfixed in clang:
+ * https://bugs.llvm.org/show_bug.cgi?id=21183
+ *
 * Reload essential local variables here for those compilers.
- * Newer versions of gcc would complain about this code (-Wclobbered). */
+ * Newer versions of gcc would complain about this code (-Wclobbered),
+ * so we only perform the workaround for clang.
+ */
 cpu = current_cpu;
 cc = CPU_GET_CLASS(cpu);
-#else /* buggy compiler */
- /* Assert that the compiler does not smash local variables. */
+#else
+ /*
+ * Non-buggy compilers preserve these locals; assert that
+ * they have the correct value.
+ */
 g_assert(cpu == current_cpu);
 g_assert(cc == CPU_GET_CLASS(cpu));
-#endif /* buggy compiler */
+#endif
+
 #ifndef CONFIG_SOFTMMU
 tcg_debug_assert(!have_mmap_lock());
 #endif
--
2.20.1

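A standalone illustration of the sigsetjmp() rule the expanded comment
describes: only non-volatile locals that are modified between sigsetjmp()
and siglongjmp() may legitimately lose their values, and the usual fix is
to reload (or make volatile) whatever you need afterwards. cpu_exec() has
to apply the same reload even to locals it never modifies, because of the
clang bug linked above:

    #include <setjmp.h>
    #include <stdio.h>

    static sigjmp_buf jump_env;
    static int global_state = 42;

    int main(void)
    {
        int local = global_state;   /* may live in a register */

        if (sigsetjmp(jump_env, 0) != 0) {
            /* Reload rather than trusting the clobber-prone local. */
            local = global_state;
            printf("after siglongjmp, local = %d\n", local);
            return 0;
        }
        local = 0;                  /* changed between setjmp and longjmp */
        siglongjmp(jump_env, 1);
    }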
1
M profile cores have a similar setup for cache ID registers
1
From: Richard Henderson <richard.henderson@linaro.org>
2
to A profile:
3
* Cache Level ID Register (CLIDR) is a fixed value
4
* Cache Type Register (CTR) is a fixed value
5
* Cache Size ID Registers (CCSIDR) are a bank of registers;
6
which one you see is selected by the Cache Size Selection
7
Register (CSSELR)
8
2
9
The only difference is that they're in the NVIC memory mapped
3
This data can be allocated by page_alloc_target_data() and
10
register space rather than being coprocessor registers.
4
released by page_set_flags(start, end, prot | PAGE_RESET).
11
Implement the M profile view of them.
12
5
13
Since neither Cortex-M3 nor Cortex-M4 implement caches,
6
This data will be used to hold tag memory for AArch64 MTE.
14
we don't need to update their init functions and can leave
15
the ctr/clidr/ccsidr[] fields in their ARMCPU structs at zero.
16
Newer cores (like the Cortex-M33) will want to be able to
17
set these ID registers to non-zero values, though.
18
7
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210210000223.884088-2-richard.henderson@linaro.org
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20180209165810.6668-6-peter.maydell@linaro.org
22
---
12
---
23
target/arm/cpu.h | 26 ++++++++++++++++++++++++++
13
include/exec/cpu-all.h | 42 +++++++++++++++++++++++++++++++++------
24
hw/intc/armv7m_nvic.c | 16 ++++++++++++++++
14
accel/tcg/translate-all.c | 28 ++++++++++++++++++++++++++
25
target/arm/machine.c | 36 ++++++++++++++++++++++++++++++++++++
15
linux-user/mmap.c | 4 +++-
26
3 files changed, 78 insertions(+)
16
linux-user/syscall.c | 4 ++--
17
4 files changed, 69 insertions(+), 9 deletions(-)
27
18
28
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
29
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
30
--- a/target/arm/cpu.h
21
--- a/include/exec/cpu-all.h
31
+++ b/target/arm/cpu.h
22
+++ b/include/exec/cpu-all.h
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
23
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
33
uint32_t faultmask[M_REG_NUM_BANKS];
24
#define PAGE_EXEC 0x0004
34
uint32_t aircr; /* only holds r/w state if security extn implemented */
25
#define PAGE_BITS (PAGE_READ | PAGE_WRITE | PAGE_EXEC)
35
uint32_t secure; /* Is CPU in Secure state? (not guest visible) */
26
#define PAGE_VALID 0x0008
36
+ uint32_t csselr[M_REG_NUM_BANKS];
27
-/* original state of the write flag (used when tracking self-modifying
37
} v7m;
28
- code */
38
29
+/*
39
/* Information associated with an exception about to be taken:
30
+ * Original state of the write flag (used when tracking self-modifying code)
40
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_MPU_CTRL, ENABLE, 0, 1)
31
+ */
41
FIELD(V7M_MPU_CTRL, HFNMIENA, 1, 1)
32
#define PAGE_WRITE_ORG 0x0010
42
FIELD(V7M_MPU_CTRL, PRIVDEFENA, 2, 1)
33
-/* Invalidate the TLB entry immediately, helpful for s390x
43
34
- * Low-Address-Protection. Used with PAGE_WRITE in tlb_set_page_with_attrs() */
44
+/* v7M CLIDR bits */
35
-#define PAGE_WRITE_INV 0x0040
45
+FIELD(V7M_CLIDR, CTYPE_ALL, 0, 21)
36
+/*
46
+FIELD(V7M_CLIDR, LOUIS, 21, 3)
37
+ * Invalidate the TLB entry immediately, helpful for s390x
47
+FIELD(V7M_CLIDR, LOC, 24, 3)
38
+ * Low-Address-Protection. Used with PAGE_WRITE in tlb_set_page_with_attrs()
48
+FIELD(V7M_CLIDR, LOUU, 27, 3)
39
+ */
49
+FIELD(V7M_CLIDR, ICB, 30, 2)
40
+#define PAGE_WRITE_INV 0x0020
41
+/* For use with page_set_flags: page is being replaced; target_data cleared. */
42
+#define PAGE_RESET 0x0040
50
+
43
+
51
+FIELD(V7M_CSSELR, IND, 0, 1)
44
#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
52
+FIELD(V7M_CSSELR, LEVEL, 1, 3)
45
/* FIXME: Code that sets/uses this is broken and needs to go away. */
53
+/* We use the combination of InD and Level to index into cpu->ccsidr[];
46
-#define PAGE_RESERVED 0x0020
54
+ * define a mask for this and check that it doesn't permit running off
47
+#define PAGE_RESERVED 0x0100
55
+ * the end of the array.
48
#endif
49
/* Target-specific bits that will be used via page_get_flags(). */
50
#define PAGE_TARGET_1 0x0080
51
@@ -XXX,XX +XXX,XX @@ int walk_memory_regions(void *, walk_memory_regions_fn);
52
int page_get_flags(target_ulong address);
53
void page_set_flags(target_ulong start, target_ulong end, int flags);
54
int page_check_range(target_ulong start, target_ulong len, int flags);
55
+
56
+/**
57
+ * page_alloc_target_data(address, size)
58
+ * @address: guest virtual address
59
+ * @size: size of data to allocate
60
+ *
61
+ * Allocate @size bytes of out-of-band data to associate with the
62
+ * guest page at @address. If the page is not mapped, NULL will
63
+ * be returned. If there is existing data associated with @address,
64
+ * no new memory will be allocated.
65
+ *
66
+ * The memory will be freed when the guest page is deallocated,
67
+ * e.g. with the munmap system call.
56
+ */
68
+ */
57
+FIELD(V7M_CSSELR, INDEX, 0, 4)
69
+void *page_alloc_target_data(target_ulong address, size_t size);
58
+
70
+
59
+QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);
71
+/**
60
+
72
+ * page_get_target_data(address)
61
/* If adding a feature bit which corresponds to a Linux ELF
73
+ * @address: guest virtual address
62
* HWCAP bit, remember to update the feature-bit-to-hwcap
74
+ *
63
* mapping in linux-user/elfload.c:get_elf_hwcap().
75
+ * Return any out-of-band memory associated with the guest page
64
@@ -XXX,XX +XXX,XX @@ static inline int arm_debug_target_el(CPUARMState *env)
76
+ * at @address, as per page_alloc_target_data.
77
+ */
78
+void *page_get_target_data(target_ulong address);
79
#endif
80
81
CPUArchState *cpu_copy(CPUArchState *env);
82
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
83
index XXXXXXX..XXXXXXX 100644
84
--- a/accel/tcg/translate-all.c
85
+++ b/accel/tcg/translate-all.c
86
@@ -XXX,XX +XXX,XX @@ typedef struct PageDesc {
87
unsigned int code_write_count;
88
#else
89
unsigned long flags;
90
+ void *target_data;
91
#endif
92
#ifndef CONFIG_USER_ONLY
93
QemuSpin lock;
94
@@ -XXX,XX +XXX,XX @@ int page_get_flags(target_ulong address)
95
void page_set_flags(target_ulong start, target_ulong end, int flags)
96
{
97
target_ulong addr, len;
98
+ bool reset_target_data;
99
100
/* This function should never be called with addresses outside the
101
guest address space. If this assert fires, it probably indicates
102
@@ -XXX,XX +XXX,XX @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
103
if (flags & PAGE_WRITE) {
104
flags |= PAGE_WRITE_ORG;
105
}
106
+ reset_target_data = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
107
+ flags &= ~PAGE_RESET;
108
109
for (addr = start, len = end - start;
110
len != 0;
111
@@ -XXX,XX +XXX,XX @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
112
p->first_tb) {
113
tb_invalidate_phys_page(addr, 0);
114
}
115
+ if (reset_target_data && p->target_data) {
116
+ g_free(p->target_data);
117
+ p->target_data = NULL;
118
+ }
119
p->flags = flags;
65
}
120
}
66
}
121
}
67
122
68
+static inline bool arm_v7m_csselr_razwi(ARMCPU *cpu)
123
+void *page_get_target_data(target_ulong address)
69
+{
124
+{
70
+ /* If all the CLIDR.Ctypem bits are 0 there are no caches, and
125
+ PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
71
+ * CSSELR is RAZ/WI.
126
+ return p ? p->target_data : NULL;
72
+ */
73
+ return (cpu->clidr & R_V7M_CLIDR_CTYPE_ALL_MASK) != 0;
74
+}
127
+}
75
+
128
+
76
static inline bool aa64_generate_debug_exceptions(CPUARMState *env)
129
+void *page_alloc_target_data(target_ulong address, size_t size)
77
{
130
+{
78
if (arm_is_secure(env)) {
131
+ PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
79
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
132
+ void *ret = NULL;
80
index XXXXXXX..XXXXXXX 100644
133
+
81
--- a/hw/intc/armv7m_nvic.c
134
+ if (p->flags & PAGE_VALID) {
82
+++ b/hw/intc/armv7m_nvic.c
135
+ ret = p->target_data;
83
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
136
+ if (!ret) {
84
return cpu->id_isar4;
137
+ p->target_data = ret = g_malloc0(size);
85
case 0xd74: /* ISAR5. */
138
+ }
86
return cpu->id_isar5;
87
+ case 0xd78: /* CLIDR */
88
+ return cpu->clidr;
89
+ case 0xd7c: /* CTR */
90
+ return cpu->ctr;
91
+ case 0xd80: /* CSSIDR */
92
+ {
93
+ int idx = cpu->env.v7m.csselr[attrs.secure] & R_V7M_CSSELR_INDEX_MASK;
94
+ return cpu->ccsidr[idx];
95
+ }
139
+ }
96
+ case 0xd84: /* CSSELR */
140
+ return ret;
97
+ return cpu->env.v7m.csselr[attrs.secure];
98
/* TODO: Implement debug registers. */
99
case 0xd90: /* MPU_TYPE */
100
/* Unified MPU; if the MPU is not present this value is zero */
101
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
102
qemu_log_mask(LOG_UNIMP,
103
"NVIC: Aux fault status registers unimplemented\n");
104
break;
105
+ case 0xd84: /* CSSELR */
106
+ if (!arm_v7m_csselr_razwi(cpu)) {
107
+ cpu->env.v7m.csselr[attrs.secure] = value & R_V7M_CSSELR_INDEX_MASK;
108
+ }
109
+ break;
110
case 0xd90: /* MPU_TYPE */
111
return; /* RO */
112
case 0xd94: /* MPU_CTRL */
113
diff --git a/target/arm/machine.c b/target/arm/machine.c
114
index XXXXXXX..XXXXXXX 100644
115
--- a/target/arm/machine.c
116
+++ b/target/arm/machine.c
117
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_faultmask_primask = {
118
}
119
};
120
121
+/* CSSELR is in a subsection because we didn't implement it previously.
122
+ * Migration from an old implementation will leave it at zero, which
123
+ * is OK since the only CPUs in the old implementation make the
124
+ * register RAZ/WI.
125
+ * Since there was no version of QEMU which implemented the CSSELR for
126
+ * just non-secure, we transfer both banks here rather than putting
127
+ * the secure banked version in the m-security subsection.
128
+ */
129
+static bool csselr_vmstate_validate(void *opaque, int version_id)
130
+{
131
+ ARMCPU *cpu = opaque;
132
+
133
+ return cpu->env.v7m.csselr[M_REG_NS] <= R_V7M_CSSELR_INDEX_MASK
134
+ && cpu->env.v7m.csselr[M_REG_S] <= R_V7M_CSSELR_INDEX_MASK;
135
+}
141
+}
136
+
142
+
137
+static bool m_csselr_needed(void *opaque)
143
int page_check_range(target_ulong start, target_ulong len, int flags)
138
+{
144
{
139
+ ARMCPU *cpu = opaque;
145
PageDesc *p;
140
+
146
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
141
+ return !arm_v7m_csselr_razwi(cpu);
147
index XXXXXXX..XXXXXXX 100644
142
+}
148
--- a/linux-user/mmap.c
143
+
149
+++ b/linux-user/mmap.c
144
+static const VMStateDescription vmstate_m_csselr = {
150
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
145
+ .name = "cpu/m/csselr",
151
}
146
+ .version_id = 1,
147
+ .minimum_version_id = 1,
148
+ .needed = m_csselr_needed,
149
+ .fields = (VMStateField[]) {
150
+ VMSTATE_UINT32_ARRAY(env.v7m.csselr, ARMCPU, M_REG_NUM_BANKS),
151
+ VMSTATE_VALIDATE("CSSELR is valid", csselr_vmstate_validate),
152
+ VMSTATE_END_OF_LIST()
153
+ }
154
+};
155
+
156
static const VMStateDescription vmstate_m = {
157
.name = "cpu/m",
158
.version_id = 4,
159
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
160
},
161
.subsections = (const VMStateDescription*[]) {
162
&vmstate_m_faultmask_primask,
163
+ &vmstate_m_csselr,
164
NULL
165
}
152
}
166
};
153
the_end1:
154
+ page_flags |= PAGE_RESET;
155
page_set_flags(start, start + len, page_flags);
156
the_end:
157
trace_target_mmap_complete(start);
158
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
159
new_addr = h2g(host_addr);
160
prot = page_get_flags(old_addr);
161
page_set_flags(old_addr, old_addr + old_size, 0);
162
- page_set_flags(new_addr, new_addr + new_size, prot | PAGE_VALID);
163
+ page_set_flags(new_addr, new_addr + new_size,
164
+ prot | PAGE_VALID | PAGE_RESET);
165
}
166
tb_invalidate_phys_range(new_addr, new_addr + new_size);
167
mmap_unlock();
168
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
169
index XXXXXXX..XXXXXXX 100644
170
--- a/linux-user/syscall.c
171
+++ b/linux-user/syscall.c
172
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
173
raddr=h2g((unsigned long)host_raddr);
174
175
page_set_flags(raddr, raddr + shm_info.shm_segsz,
176
- PAGE_VALID | PAGE_READ |
177
- ((shmflg & SHM_RDONLY)? 0 : PAGE_WRITE));
178
+ PAGE_VALID | PAGE_RESET | PAGE_READ |
179
+ (shmflg & SHM_RDONLY ? 0 : PAGE_WRITE));
180
181
for (i = 0; i < N_SHM_REGIONS; i++) {
182
if (!shm_regions[i].in_use) {
167
--
183
--
168
2.16.1
184
2.20.1
169
185
170
186
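The page_alloc_target_data()/page_get_target_data() interface documented above boils down to lazily allocating one out-of-band blob per guest page and freeing it when the page is replaced. A rough standalone sketch of that bookkeeping (toy code with invented names and a fixed-size page table, not the QEMU implementation):

#include <stdio.h>
#include <stdlib.h>

#define PAGE_BITS 12
#define NUM_PAGES 256                       /* toy guest address space */

static void *page_data[NUM_PAGES];          /* one out-of-band blob per page */

static void *page_data_alloc(unsigned long addr, size_t size)
{
    unsigned long idx = (addr >> PAGE_BITS) % NUM_PAGES;

    if (!page_data[idx]) {
        page_data[idx] = calloc(1, size);   /* allocate on first use */
    }
    return page_data[idx];
}

static void page_data_reset(unsigned long addr)
{
    unsigned long idx = (addr >> PAGE_BITS) % NUM_PAGES;

    free(page_data[idx]);                   /* page replaced: drop its data */
    page_data[idx] = NULL;
}

int main(void)
{
    unsigned char *tags = page_data_alloc(0x5000, 64);

    tags[0] = 0xa;                          /* stash a tag for the page */
    printf("tag: %x\n", ((unsigned char *)page_data_alloc(0x5000, 64))[0]);
    page_data_reset(0x5000);                /* analogous to PAGE_RESET */
    return 0;
}

For MTE the per-page blob holds allocation tags, which is what the later patches in this series use it for.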
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Record whether the backing page is anonymous, or if it has file
4
backing. This will allow us to get close to the Linux AArch64
5
ABI for MTE, which allows tag memory only on ram-backed VMAs.
6
7
The real ABI allows tag memory on files, when those files are
8
on ram-backed filesystems, such as tmpfs. We will not be able
9
to implement that in QEMU linux-user.
10
11
Thankfully, anonymous memory for malloc arenas is the primary
12
consumer of this feature, so this restricted version should
13
still be of use.
14
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
17
Message-id: 20210210000223.884088-3-richard.henderson@linaro.org
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
20
include/exec/cpu-all.h | 2 ++
21
linux-user/mmap.c | 3 +++
22
2 files changed, 5 insertions(+)
23
24
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/include/exec/cpu-all.h
27
+++ b/include/exec/cpu-all.h
28
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
29
#define PAGE_WRITE_INV 0x0020
30
/* For use with page_set_flags: page is being replaced; target_data cleared. */
31
#define PAGE_RESET 0x0040
32
+/* For linux-user, indicates that the page is MAP_ANON. */
33
+#define PAGE_ANON 0x0080
34
35
#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
36
/* FIXME: Code that sets/uses this is broken and needs to go away. */
37
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/linux-user/mmap.c
40
+++ b/linux-user/mmap.c
41
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
42
}
43
}
44
the_end1:
45
+ if (flags & MAP_ANONYMOUS) {
46
+ page_flags |= PAGE_ANON;
47
+ }
48
page_flags |= PAGE_RESET;
49
page_set_flags(start, start + len, page_flags);
50
the_end:
51
--
52
2.20.1
53
54
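A toy illustration of how a flag such as PAGE_ANON can then be combined with PAGE_VALID to gate tag storage; the policy check here is hypothetical, only the flag values are taken from the hunk above:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_VALID 0x0008
#define PAGE_ANON  0x0080

static bool may_have_tags(int flags)
{
    /* hypothetical policy: only valid, anonymous mappings get tag storage */
    return (flags & (PAGE_VALID | PAGE_ANON)) == (PAGE_VALID | PAGE_ANON);
}

int main(void)
{
    printf("anon+valid:  %d\n", may_have_tags(PAGE_VALID | PAGE_ANON));
    printf("file-backed: %d\n", may_have_tags(PAGE_VALID));
    return 0;
}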
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is more descriptive than 'unsigned long'.
4
No functional change, since these match on all linux+bsd hosts.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-4-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/exec/cpu-all.h | 2 +-
12
bsd-user/main.c | 4 ++--
13
linux-user/elfload.c | 4 ++--
14
linux-user/main.c | 4 ++--
15
4 files changed, 7 insertions(+), 7 deletions(-)
16
17
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/cpu-all.h
20
+++ b/include/exec/cpu-all.h
21
@@ -XXX,XX +XXX,XX @@ static inline void tswap64s(uint64_t *s)
22
/* On some host systems the guest address space is reserved on the host.
23
* This allows the guest address space to be offset to a convenient location.
24
*/
25
-extern unsigned long guest_base;
26
+extern uintptr_t guest_base;
27
extern bool have_guest_base;
28
extern unsigned long reserved_va;
29
30
diff --git a/bsd-user/main.c b/bsd-user/main.c
31
index XXXXXXX..XXXXXXX 100644
32
--- a/bsd-user/main.c
33
+++ b/bsd-user/main.c
34
@@ -XXX,XX +XXX,XX @@
35
36
int singlestep;
37
unsigned long mmap_min_addr;
38
-unsigned long guest_base;
39
+uintptr_t guest_base;
40
bool have_guest_base;
41
unsigned long reserved_va;
42
43
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
44
g_free(target_environ);
45
46
if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
47
- qemu_log("guest_base 0x%lx\n", guest_base);
48
+ qemu_log("guest_base %p\n", (void *)guest_base);
49
log_page_dump("binary load");
50
51
qemu_log("start_brk 0x" TARGET_ABI_FMT_lx "\n", info->start_brk);
52
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
53
index XXXXXXX..XXXXXXX 100644
54
--- a/linux-user/elfload.c
55
+++ b/linux-user/elfload.c
56
@@ -XXX,XX +XXX,XX @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr,
57
void *addr, *test;
58
59
if (!QEMU_IS_ALIGNED(guest_base, align)) {
60
- fprintf(stderr, "Requested guest base 0x%lx does not satisfy "
61
+ fprintf(stderr, "Requested guest base %p does not satisfy "
62
"host minimum alignment (0x%lx)\n",
63
- guest_base, align);
64
+ (void *)guest_base, align);
65
exit(EXIT_FAILURE);
66
}
67
68
diff --git a/linux-user/main.c b/linux-user/main.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/linux-user/main.c
71
+++ b/linux-user/main.c
72
@@ -XXX,XX +XXX,XX @@ static const char *cpu_model;
73
static const char *cpu_type;
74
static const char *seed_optarg;
75
unsigned long mmap_min_addr;
76
-unsigned long guest_base;
77
+uintptr_t guest_base;
78
bool have_guest_base;
79
80
/*
81
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv, char **envp)
82
g_free(target_environ);
83
84
if (qemu_loglevel_mask(CPU_LOG_PAGE)) {
85
- qemu_log("guest_base 0x%lx\n", guest_base);
86
+ qemu_log("guest_base %p\n", (void *)guest_base);
87
log_page_dump("binary load");
88
89
qemu_log("start_brk 0x" TARGET_ABI_FMT_lx "\n", info->start_brk);
90
--
91
2.20.1
92
93
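A minimal standalone example of the logging idiom adopted above: an address-sized integer is most portably printed via %p after a cast to void *, rather than guessing a length modifier for whatever width uintptr_t happens to be on the host.

#include <stdint.h>
#include <stdio.h>

uintptr_t guest_base;            /* address-sized offset, as in the patch */

int main(void)
{
    guest_base = 0x10000;
    printf("guest_base %p\n", (void *)guest_base);
    return 0;
}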
1
The PENDNMISET/CLR bits in the ICSR should be RAZ/WI from
1
From: Richard Henderson <richard.henderson@linaro.org>
2
NonSecure state if the AIRCR.BFHFNMINS bit is zero. We had
3
misimplemented this as making the bits RAZ/WI from both
4
Secure and NonSecure states. Fix this bug by checking
5
attrs.secure so that Secure code can pend and unpend NMIs.
6
2
3
This is more descriptive than 'unsigned long'.
4
No functional change, since these match on all linux+bsd hosts.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-5-richard.henderson@linaro.org
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180209165810.6668-3-peter.maydell@linaro.org
10
---
10
---
11
hw/intc/armv7m_nvic.c | 6 +++---
11
include/exec/cpu_ldst.h | 6 +++---
12
1 file changed, 3 insertions(+), 3 deletions(-)
12
1 file changed, 3 insertions(+), 3 deletions(-)
13
13
14
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
14
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/armv7m_nvic.c
16
--- a/include/exec/cpu_ldst.h
17
+++ b/hw/intc/armv7m_nvic.c
17
+++ b/include/exec/cpu_ldst.h
18
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
18
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
19
}
19
#endif
20
}
20
21
/* NMIPENDSET */
21
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
22
- if ((cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) &&
22
-#define g2h(x) ((void *)((unsigned long)(abi_ptr)(x) + guest_base))
23
- s->vectors[ARMV7M_EXCP_NMI].pending) {
23
+#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
24
+ if ((attrs.secure || (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK))
24
25
+ && s->vectors[ARMV7M_EXCP_NMI].pending) {
25
#if HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS
26
val |= (1 << 31);
26
#define guest_addr_valid(x) (1)
27
}
27
#else
28
/* ISRPREEMPT: RES0 when halting debug not implemented */
28
#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
29
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
29
#endif
30
break;
30
-#define h2g_valid(x) guest_addr_valid((unsigned long)(x) - guest_base)
31
}
31
+#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
32
case 0xd04: /* Interrupt Control State (ICSR) */
32
33
- if (cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
33
static inline int guest_range_valid(unsigned long start, unsigned long len)
34
+ if (attrs.secure || cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK) {
34
{
35
if (value & (1 << 31)) {
35
@@ -XXX,XX +XXX,XX @@ static inline int guest_range_valid(unsigned long start, unsigned long len)
36
armv7m_nvic_set_pending(s, ARMV7M_EXCP_NMI, false);
36
}
37
} else if (value & (1 << 30) &&
37
38
#define h2g_nocheck(x) ({ \
39
- unsigned long __ret = (unsigned long)(x) - guest_base; \
40
+ uintptr_t __ret = (uintptr_t)(x) - guest_base; \
41
(abi_ptr)__ret; \
42
})
43
38
--
44
--
39
2.16.1
45
2.20.1
40
46
41
47
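The guest/host conversions being switched to uintptr_t above are plain offsets by guest_base in both directions. A self-contained sketch with invented helper names (64-bit host assumed; the pointer is never dereferenced):

#include <assert.h>
#include <stdint.h>

static uintptr_t base = 0x10000;               /* stand-in for guest_base */

static void *guest_to_host(uint64_t g)
{
    return (void *)((uintptr_t)g + base);      /* g2h-style */
}

static uint64_t host_to_guest(void *h)
{
    return (uintptr_t)h - base;                /* h2g-style */
}

int main(void)
{
    uint64_t g = 0x400000;

    assert(host_to_guest(guest_to_host(g)) == g);   /* round-trips exactly */
    return 0;
}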
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Return bool not int; pass abi_ulong not 'unsigned long'.
4
All callers use abi_ulong already, so the change in type
5
has no effect.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210210000223.884088-6-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/exec/cpu_ldst.h | 2 +-
13
1 file changed, 1 insertion(+), 1 deletion(-)
14
15
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/exec/cpu_ldst.h
18
+++ b/include/exec/cpu_ldst.h
19
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
20
#endif
21
#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
22
23
-static inline int guest_range_valid(unsigned long start, unsigned long len)
24
+static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
25
{
26
return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
27
}
28
--
29
2.20.1
30
31
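The expression kept in guest_range_valid() above is an overflow-safe way of checking that [start, start + len) stays below an address-space limit, including the len == 0 and wrap-around corner cases. A standalone version against a made-up 32-bit limit:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static const uint64_t MAX = 0xffffffffu;       /* e.g. a 32-bit guest space */

static bool range_valid(uint64_t start, uint64_t len)
{
    /* len == 0 makes len - 1 huge and fails the first test; the second
     * test rejects ranges that would run past MAX, without overflowing. */
    return len - 1 <= MAX && start <= MAX - len + 1;
}

int main(void)
{
    assert(range_valid(0, 16));
    assert(!range_valid(0, 0));
    assert(!range_valid(MAX, 2));              /* would run past the top */
    return 0;
}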
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Verify that addr + size - 1 does not wrap around.
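A standalone sketch of the wrap-around test this adds, using unsigned 32-bit guest addresses and an invented helper name:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool range_wraps(uint32_t addr, uint32_t size)
{
    /* addr + size - 1 is computed in the guest address width; if the
     * result lands below addr, the range wrapped past the top. */
    return size != 0 && addr + size - 1 < addr;
}

int main(void)
{
    assert(!range_wraps(0x1000, 0x100));
    assert(range_wraps(0xfffffff0u, 0x100));   /* runs off the 32-bit top */
    return 0;
}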
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210210000223.884088-7-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
linux-user/qemu.h | 17 ++++++++++++-----
11
1 file changed, 12 insertions(+), 5 deletions(-)
12
13
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/linux-user/qemu.h
16
+++ b/linux-user/qemu.h
17
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
18
#define VERIFY_READ 0
19
#define VERIFY_WRITE 1 /* implies read access */
20
21
-static inline int access_ok(int type, abi_ulong addr, abi_ulong size)
22
+static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
23
{
24
- return guest_addr_valid(addr) &&
25
- (size == 0 || guest_addr_valid(addr + size - 1)) &&
26
- page_check_range((target_ulong)addr, size,
27
- (type == VERIFY_READ) ? PAGE_READ : (PAGE_READ | PAGE_WRITE)) == 0;
28
+ if (!guest_addr_valid(addr)) {
29
+ return false;
30
+ }
31
+ if (size != 0 &&
32
+ (addr + size - 1 < addr ||
33
+ !guest_addr_valid(addr + size - 1))) {
34
+ return false;
35
+ }
36
+ return page_check_range((target_ulong)addr, size,
37
+ (type == VERIFY_READ) ? PAGE_READ :
38
+ (PAGE_READ | PAGE_WRITE)) == 0;
39
}
40
41
/* NOTE __get_user and __put_user use host pointers and don't check access.
42
--
43
2.20.1
44
45
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
These constants are only ever used with access_ok, and friends.
4
Rather than translating them to PAGE_* bits, let them equal
5
the PAGE_* bits to begin.
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210210000223.884088-8-richard.henderson@linaro.org
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
linux-user/qemu.h | 8 +++-----
13
1 file changed, 3 insertions(+), 5 deletions(-)
14
15
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/linux-user/qemu.h
18
+++ b/linux-user/qemu.h
19
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
20
21
/* user access */
22
23
-#define VERIFY_READ 0
24
-#define VERIFY_WRITE 1 /* implies read access */
25
+#define VERIFY_READ PAGE_READ
26
+#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
27
28
static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
29
{
30
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
31
!guest_addr_valid(addr + size - 1))) {
32
return false;
33
}
34
- return page_check_range((target_ulong)addr, size,
35
- (type == VERIFY_READ) ? PAGE_READ :
36
- (PAGE_READ | PAGE_WRITE)) == 0;
37
+ return page_check_range((target_ulong)addr, size, type) == 0;
38
}
39
40
/* NOTE __get_user and __put_user use host pointers and don't check access.
41
--
42
2.20.1
43
44
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
These constants are only ever used with access_ok, and friends.
4
Rather than translating them to PAGE_* bits, let them equal
5
the PAGE_* bits to begin.
6
7
Reviewed-by: Warner Losh <imp@bsdimp.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210210000223.884088-9-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
bsd-user/qemu.h | 9 ++++-----
14
1 file changed, 4 insertions(+), 5 deletions(-)
15
16
diff --git a/bsd-user/qemu.h b/bsd-user/qemu.h
17
index XXXXXXX..XXXXXXX 100644
18
--- a/bsd-user/qemu.h
19
+++ b/bsd-user/qemu.h
20
@@ -XXX,XX +XXX,XX @@ extern unsigned long x86_stack_size;
21
22
/* user access */
23
24
-#define VERIFY_READ 0
25
-#define VERIFY_WRITE 1 /* implies read access */
26
+#define VERIFY_READ PAGE_READ
27
+#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
28
29
-static inline int access_ok(int type, abi_ulong addr, abi_ulong size)
30
+static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
31
{
32
- return page_check_range((target_ulong)addr, size,
33
- (type == VERIFY_READ) ? PAGE_READ : (PAGE_READ | PAGE_WRITE)) == 0;
34
+ return page_check_range((target_ulong)addr, size, type) == 0;
35
}
36
37
/* NOTE __get_user and __put_user use host pointers and don't check access. */
38
--
39
2.20.1
40
41
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
This is the only use of guest_addr_valid that does not begin
4
with a guest address, but a host address being transformed to
5
a guest address.
6
7
We will shortly adjust guest_addr_valid to handle guest memory
8
tags, and the host address should not be subjected to that.
9
10
Move h2g_valid adjacent to the other h2g macros.
11
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20210210000223.884088-10-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
17
include/exec/cpu_ldst.h | 5 ++++-
18
1 file changed, 4 insertions(+), 1 deletion(-)
19
20
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/include/exec/cpu_ldst.h
23
+++ b/include/exec/cpu_ldst.h
24
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
25
#else
26
#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
27
#endif
28
-#define h2g_valid(x) guest_addr_valid((uintptr_t)(x) - guest_base)
29
30
static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
31
{
32
return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
33
}
34
35
+#define h2g_valid(x) \
36
+ (HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS || \
37
+ (uintptr_t)(x) - guest_base <= GUEST_ADDR_MAX)
38
+
39
#define h2g_nocheck(x) ({ \
40
uintptr_t __ret = (uintptr_t)(x) - guest_base; \
41
(abi_ptr)__ret; \
42
--
43
2.20.1
44
45
diff view generated by jsdifflib
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
We must always use GUEST_ADDR_MAX, because even 32-bit hosts can
4
use -R <reserved_va> to restrict the memory address of the guest.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-11-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/exec/cpu_ldst.h | 9 ++++-----
12
1 file changed, 4 insertions(+), 5 deletions(-)
13
14
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/cpu_ldst.h
17
+++ b/include/exec/cpu_ldst.h
18
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
19
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
20
#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
21
22
-#if HOST_LONG_BITS <= TARGET_VIRT_ADDR_SPACE_BITS
23
-#define guest_addr_valid(x) (1)
24
-#else
25
-#define guest_addr_valid(x) ((x) <= GUEST_ADDR_MAX)
26
-#endif
27
+static inline bool guest_addr_valid(abi_ulong x)
28
+{
29
+ return x <= GUEST_ADDR_MAX;
30
+}
31
32
static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
33
{
34
--
35
2.20.1
36
37
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Provide an identity fallback for targets that do not
4
use tagged addresses.
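For a target that does use tagged addresses, the hook would strip something like the AArch64 top byte before the address is used as an offset. A hypothetical standalone sketch, not the QEMU implementation:

#include <assert.h>
#include <stdint.h>

static uint64_t untag(uint64_t addr)
{
    return addr & 0x00ffffffffffffffull;       /* drop a TBI-style top byte */
}

int main(void)
{
    assert(untag(0x0b00123400005678ull) == 0x0000123400005678ull);
    return 0;
}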
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-12-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/exec/cpu_ldst.h | 7 +++++++
12
1 file changed, 7 insertions(+)
13
14
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/include/exec/cpu_ldst.h
17
+++ b/include/exec/cpu_ldst.h
18
@@ -XXX,XX +XXX,XX @@ typedef uint64_t abi_ptr;
19
#define TARGET_ABI_FMT_ptr "%"PRIx64
20
#endif
21
22
+#ifndef TARGET_TAGGED_ADDRESSES
23
+static inline abi_ptr cpu_untagged_addr(CPUState *cs, abi_ptr x)
24
+{
25
+ return x;
26
+}
27
+#endif
28
+
29
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
30
#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
31
32
--
33
2.20.1
34
35
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Use g2h_untagged in contexts that have no cpu, e.g. the binary
4
loaders that operate before the primary cpu is created. As a
5
corollary, target_mmap and friends must use untagged addresses,
6
since they are used by the loaders.
7
8
Use g2h_untagged on values returned from target_mmap, as the
9
kernel never applies a tag itself.
10
11
Use g2h_untagged on all pc values. The only current user of
12
tags, aarch64, removes tags from code addresses upon branch,
13
so "pc" is always untagged.
14
15
Use g2h with the cpu context on hand wherever possible.
16
17
Use g2h_untagged in lock_user, which will be updated soon.
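The split between the two conversions amounts to "strip the (hypothetical) tag, then add guest_base" versus "just add guest_base". A rough standalone sketch with invented names, not the QEMU helpers:

#include <assert.h>
#include <stdint.h>

static uintptr_t base = 0x10000;               /* stand-in for guest_base */

static uint64_t strip_tag(uint64_t g)
{
    return g & 0x00ffffffffffffffull;          /* hypothetical TBI-style tag */
}

static void *to_host_untagged(uint64_t g)      /* caller passes a clean address */
{
    return (void *)((uintptr_t)g + base);
}

static void *to_host(uint64_t g)               /* strip first, then translate */
{
    return to_host_untagged(strip_tag(g));
}

int main(void)
{
    assert(to_host(0x0a00000000004000ull) == to_host_untagged(0x4000));
    return 0;
}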
18
19
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
20
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20210210000223.884088-13-richard.henderson@linaro.org
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
---
24
include/exec/cpu_ldst.h | 12 +++++-
25
include/exec/exec-all.h | 2 +-
26
linux-user/qemu.h | 6 +--
27
accel/tcg/translate-all.c | 4 +-
28
accel/tcg/user-exec.c | 48 ++++++++++++------------
29
linux-user/elfload.c | 12 +++---
30
linux-user/flatload.c | 2 +-
31
linux-user/hppa/cpu_loop.c | 31 ++++++++--------
32
linux-user/i386/cpu_loop.c | 4 +-
33
linux-user/mmap.c | 45 +++++++++++-----------
34
linux-user/ppc/signal.c | 4 +-
35
linux-user/syscall.c | 72 +++++++++++++++++++-----------------
36
target/arm/helper-a64.c | 4 +-
37
target/hppa/op_helper.c | 2 +-
38
target/i386/tcg/mem_helper.c | 2 +-
39
target/s390x/mem_helper.c | 4 +-
40
16 files changed, 135 insertions(+), 119 deletions(-)
41
42
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
43
index XXXXXXX..XXXXXXX 100644
44
--- a/include/exec/cpu_ldst.h
45
+++ b/include/exec/cpu_ldst.h
46
@@ -XXX,XX +XXX,XX @@ static inline abi_ptr cpu_untagged_addr(CPUState *cs, abi_ptr x)
47
#endif
48
49
/* All direct uses of g2h and h2g need to go away for usermode softmmu. */
50
-#define g2h(x) ((void *)((uintptr_t)(abi_ptr)(x) + guest_base))
51
+static inline void *g2h_untagged(abi_ptr x)
52
+{
53
+ return (void *)((uintptr_t)(x) + guest_base);
54
+}
55
+
56
+static inline void *g2h(CPUState *cs, abi_ptr x)
57
+{
58
+ return g2h_untagged(cpu_untagged_addr(cs, x));
59
+}
60
61
static inline bool guest_addr_valid(abi_ulong x)
62
{
63
@@ -XXX,XX +XXX,XX @@ static inline int cpu_ldsw_code(CPUArchState *env, abi_ptr addr)
64
static inline void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
65
MMUAccessType access_type, int mmu_idx)
66
{
67
- return g2h(addr);
68
+ return g2h(env_cpu(env), addr);
69
}
70
#else
71
void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
72
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
73
index XXXXXXX..XXXXXXX 100644
74
--- a/include/exec/exec-all.h
75
+++ b/include/exec/exec-all.h
76
@@ -XXX,XX +XXX,XX @@ static inline tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env,
77
void **hostp)
78
{
79
if (hostp) {
80
- *hostp = g2h(addr);
81
+ *hostp = g2h_untagged(addr);
82
}
83
return addr;
84
}
85
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
86
index XXXXXXX..XXXXXXX 100644
87
--- a/linux-user/qemu.h
88
+++ b/linux-user/qemu.h
89
@@ -XXX,XX +XXX,XX @@ static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy
90
return addr;
91
}
92
#else
93
- return g2h(guest_addr);
94
+ return g2h_untagged(guest_addr);
95
#endif
96
}
97
98
@@ -XXX,XX +XXX,XX @@ static inline void unlock_user(void *host_ptr, abi_ulong guest_addr,
99
#ifdef DEBUG_REMAP
100
if (!host_ptr)
101
return;
102
- if (host_ptr == g2h(guest_addr))
103
+ if (host_ptr == g2h_untagged(guest_addr))
104
return;
105
if (len > 0)
106
- memcpy(g2h(guest_addr), host_ptr, len);
107
+ memcpy(g2h_untagged(guest_addr), host_ptr, len);
108
g_free(host_ptr);
109
#endif
110
}
111
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
112
index XXXXXXX..XXXXXXX 100644
113
--- a/accel/tcg/translate-all.c
114
+++ b/accel/tcg/translate-all.c
115
@@ -XXX,XX +XXX,XX @@ static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
116
prot |= p2->flags;
117
p2->flags &= ~PAGE_WRITE;
118
}
119
- mprotect(g2h(page_addr), qemu_host_page_size,
120
+ mprotect(g2h_untagged(page_addr), qemu_host_page_size,
121
(prot & PAGE_BITS) & ~PAGE_WRITE);
122
if (DEBUG_TB_INVALIDATE_GATE) {
123
printf("protecting code page: 0x" TB_PAGE_ADDR_FMT "\n", page_addr);
124
@@ -XXX,XX +XXX,XX @@ int page_unprotect(target_ulong address, uintptr_t pc)
125
}
126
#endif
127
}
128
- mprotect((void *)g2h(host_start), qemu_host_page_size,
129
+ mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
130
prot & PAGE_BITS);
131
}
132
mmap_unlock();
133
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
134
index XXXXXXX..XXXXXXX 100644
135
--- a/accel/tcg/user-exec.c
136
+++ b/accel/tcg/user-exec.c
137
@@ -XXX,XX +XXX,XX @@ int probe_access_flags(CPUArchState *env, target_ulong addr,
138
int flags;
139
140
flags = probe_access_internal(env, addr, 0, access_type, nonfault, ra);
141
- *phost = flags ? NULL : g2h(addr);
142
+ *phost = flags ? NULL : g2h(env_cpu(env), addr);
143
return flags;
144
}
145
146
@@ -XXX,XX +XXX,XX @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
147
flags = probe_access_internal(env, addr, size, access_type, false, ra);
148
g_assert(flags == 0);
149
150
- return size ? g2h(addr) : NULL;
151
+ return size ? g2h(env_cpu(env), addr) : NULL;
152
}
153
154
#if defined(__i386__)
155
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldub_data(CPUArchState *env, abi_ptr ptr)
156
uint16_t meminfo = trace_mem_get_info(MO_UB, MMU_USER_IDX, false);
157
158
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
159
- ret = ldub_p(g2h(ptr));
160
+ ret = ldub_p(g2h(env_cpu(env), ptr));
161
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
162
return ret;
163
}
164
@@ -XXX,XX +XXX,XX @@ int cpu_ldsb_data(CPUArchState *env, abi_ptr ptr)
165
uint16_t meminfo = trace_mem_get_info(MO_SB, MMU_USER_IDX, false);
166
167
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
168
- ret = ldsb_p(g2h(ptr));
169
+ ret = ldsb_p(g2h(env_cpu(env), ptr));
170
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
171
return ret;
172
}
173
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_lduw_be_data(CPUArchState *env, abi_ptr ptr)
174
uint16_t meminfo = trace_mem_get_info(MO_BEUW, MMU_USER_IDX, false);
175
176
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
177
- ret = lduw_be_p(g2h(ptr));
178
+ ret = lduw_be_p(g2h(env_cpu(env), ptr));
179
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
180
return ret;
181
}
182
@@ -XXX,XX +XXX,XX @@ int cpu_ldsw_be_data(CPUArchState *env, abi_ptr ptr)
183
uint16_t meminfo = trace_mem_get_info(MO_BESW, MMU_USER_IDX, false);
184
185
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
186
- ret = ldsw_be_p(g2h(ptr));
187
+ ret = ldsw_be_p(g2h(env_cpu(env), ptr));
188
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
189
return ret;
190
}
191
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_be_data(CPUArchState *env, abi_ptr ptr)
192
uint16_t meminfo = trace_mem_get_info(MO_BEUL, MMU_USER_IDX, false);
193
194
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
195
- ret = ldl_be_p(g2h(ptr));
196
+ ret = ldl_be_p(g2h(env_cpu(env), ptr));
197
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
198
return ret;
199
}
200
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_be_data(CPUArchState *env, abi_ptr ptr)
201
uint16_t meminfo = trace_mem_get_info(MO_BEQ, MMU_USER_IDX, false);
202
203
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
204
- ret = ldq_be_p(g2h(ptr));
205
+ ret = ldq_be_p(g2h(env_cpu(env), ptr));
206
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
207
return ret;
208
}
209
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_lduw_le_data(CPUArchState *env, abi_ptr ptr)
210
uint16_t meminfo = trace_mem_get_info(MO_LEUW, MMU_USER_IDX, false);
211
212
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
213
- ret = lduw_le_p(g2h(ptr));
214
+ ret = lduw_le_p(g2h(env_cpu(env), ptr));
215
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
216
return ret;
217
}
218
@@ -XXX,XX +XXX,XX @@ int cpu_ldsw_le_data(CPUArchState *env, abi_ptr ptr)
219
uint16_t meminfo = trace_mem_get_info(MO_LESW, MMU_USER_IDX, false);
220
221
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
222
- ret = ldsw_le_p(g2h(ptr));
223
+ ret = ldsw_le_p(g2h(env_cpu(env), ptr));
224
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
225
return ret;
226
}
227
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_le_data(CPUArchState *env, abi_ptr ptr)
228
uint16_t meminfo = trace_mem_get_info(MO_LEUL, MMU_USER_IDX, false);
229
230
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
231
- ret = ldl_le_p(g2h(ptr));
232
+ ret = ldl_le_p(g2h(env_cpu(env), ptr));
233
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
234
return ret;
235
}
236
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_le_data(CPUArchState *env, abi_ptr ptr)
237
uint16_t meminfo = trace_mem_get_info(MO_LEQ, MMU_USER_IDX, false);
238
239
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
240
- ret = ldq_le_p(g2h(ptr));
241
+ ret = ldq_le_p(g2h(env_cpu(env), ptr));
242
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
243
return ret;
244
}
245
@@ -XXX,XX +XXX,XX @@ void cpu_stb_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
246
uint16_t meminfo = trace_mem_get_info(MO_UB, MMU_USER_IDX, true);
247
248
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
249
- stb_p(g2h(ptr), val);
250
+ stb_p(g2h(env_cpu(env), ptr), val);
251
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
252
}
253
254
@@ -XXX,XX +XXX,XX @@ void cpu_stw_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
255
uint16_t meminfo = trace_mem_get_info(MO_BEUW, MMU_USER_IDX, true);
256
257
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
258
- stw_be_p(g2h(ptr), val);
259
+ stw_be_p(g2h(env_cpu(env), ptr), val);
260
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
261
}
262
263
@@ -XXX,XX +XXX,XX @@ void cpu_stl_be_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
264
uint16_t meminfo = trace_mem_get_info(MO_BEUL, MMU_USER_IDX, true);
265
266
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
267
- stl_be_p(g2h(ptr), val);
268
+ stl_be_p(g2h(env_cpu(env), ptr), val);
269
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
270
}
271
272
@@ -XXX,XX +XXX,XX @@ void cpu_stq_be_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
273
uint16_t meminfo = trace_mem_get_info(MO_BEQ, MMU_USER_IDX, true);
274
275
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
276
- stq_be_p(g2h(ptr), val);
277
+ stq_be_p(g2h(env_cpu(env), ptr), val);
278
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
279
}
280
281
@@ -XXX,XX +XXX,XX @@ void cpu_stw_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
282
uint16_t meminfo = trace_mem_get_info(MO_LEUW, MMU_USER_IDX, true);
283
284
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
285
- stw_le_p(g2h(ptr), val);
286
+ stw_le_p(g2h(env_cpu(env), ptr), val);
287
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
288
}
289
290
@@ -XXX,XX +XXX,XX @@ void cpu_stl_le_data(CPUArchState *env, abi_ptr ptr, uint32_t val)
291
uint16_t meminfo = trace_mem_get_info(MO_LEUL, MMU_USER_IDX, true);
292
293
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
294
- stl_le_p(g2h(ptr), val);
295
+ stl_le_p(g2h(env_cpu(env), ptr), val);
296
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
297
}
298
299
@@ -XXX,XX +XXX,XX @@ void cpu_stq_le_data(CPUArchState *env, abi_ptr ptr, uint64_t val)
300
uint16_t meminfo = trace_mem_get_info(MO_LEQ, MMU_USER_IDX, true);
301
302
trace_guest_mem_before_exec(env_cpu(env), ptr, meminfo);
303
- stq_le_p(g2h(ptr), val);
304
+ stq_le_p(g2h(env_cpu(env), ptr), val);
305
qemu_plugin_vcpu_mem_cb(env_cpu(env), ptr, meminfo);
306
}
307
308
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
309
uint32_t ret;
310
311
set_helper_retaddr(1);
312
- ret = ldub_p(g2h(ptr));
313
+ ret = ldub_p(g2h_untagged(ptr));
314
clear_helper_retaddr();
315
return ret;
316
}
317
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr ptr)
318
uint32_t ret;
319
320
set_helper_retaddr(1);
321
- ret = lduw_p(g2h(ptr));
322
+ ret = lduw_p(g2h_untagged(ptr));
323
clear_helper_retaddr();
324
return ret;
325
}
326
@@ -XXX,XX +XXX,XX @@ uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr ptr)
327
uint32_t ret;
328
329
set_helper_retaddr(1);
330
- ret = ldl_p(g2h(ptr));
331
+ ret = ldl_p(g2h_untagged(ptr));
332
clear_helper_retaddr();
333
return ret;
334
}
335
@@ -XXX,XX +XXX,XX @@ uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr ptr)
336
uint64_t ret;
337
338
set_helper_retaddr(1);
339
- ret = ldq_p(g2h(ptr));
340
+ ret = ldq_p(g2h_untagged(ptr));
341
clear_helper_retaddr();
342
return ret;
343
}
344
@@ -XXX,XX +XXX,XX @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
345
if (unlikely(addr & (size - 1))) {
346
cpu_loop_exit_atomic(env_cpu(env), retaddr);
347
}
348
- void *ret = g2h(addr);
349
+ void *ret = g2h(env_cpu(env), addr);
350
set_helper_retaddr(retaddr);
351
return ret;
352
}
353
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
354
index XXXXXXX..XXXXXXX 100644
355
--- a/linux-user/elfload.c
356
+++ b/linux-user/elfload.c
357
@@ -XXX,XX +XXX,XX @@ enum {
358
359
static bool init_guest_commpage(void)
360
{
361
- void *want = g2h(ARM_COMMPAGE & -qemu_host_page_size);
362
+ void *want = g2h_untagged(ARM_COMMPAGE & -qemu_host_page_size);
363
void *addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE,
364
MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
365
366
@@ -XXX,XX +XXX,XX @@ static bool init_guest_commpage(void)
367
}
368
369
/* Set kernel helper versions; rest of page is 0. */
370
- __put_user(5, (uint32_t *)g2h(0xffff0ffcu));
371
+ __put_user(5, (uint32_t *)g2h_untagged(0xffff0ffcu));
372
373
if (mprotect(addr, qemu_host_page_size, PROT_READ)) {
374
perror("Protecting guest commpage");
375
@@ -XXX,XX +XXX,XX @@ static void zero_bss(abi_ulong elf_bss, abi_ulong last_bss, int prot)
376
here is still actually needed. For now, continue with it,
377
but merge it with the "normal" mmap that would allocate the bss. */
378
379
- host_start = (uintptr_t) g2h(elf_bss);
380
- host_end = (uintptr_t) g2h(last_bss);
381
+ host_start = (uintptr_t) g2h_untagged(elf_bss);
382
+ host_end = (uintptr_t) g2h_untagged(last_bss);
383
host_map_start = REAL_HOST_PAGE_ALIGN(host_start);
384
385
if (host_map_start < host_end) {
386
@@ -XXX,XX +XXX,XX @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr,
387
}
388
389
/* Reserve the address space for the binary, or reserved_va. */
390
- test = g2h(guest_loaddr);
391
+ test = g2h_untagged(guest_loaddr);
392
addr = mmap(test, guest_hiaddr - guest_loaddr, PROT_NONE, flags, -1, 0);
393
if (test != addr) {
394
pgb_fail_in_use(image_name);
395
@@ -XXX,XX +XXX,XX @@ static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr,
396
397
/* Reserve the memory on the host. */
398
assert(guest_base != 0);
399
- test = g2h(0);
400
+ test = g2h_untagged(0);
401
addr = mmap(test, reserved_va, PROT_NONE, flags, -1, 0);
402
if (addr == MAP_FAILED || addr != test) {
403
error_report("Unable to reserve 0x%lx bytes of virtual address "
404
diff --git a/linux-user/flatload.c b/linux-user/flatload.c
405
index XXXXXXX..XXXXXXX 100644
406
--- a/linux-user/flatload.c
407
+++ b/linux-user/flatload.c
408
@@ -XXX,XX +XXX,XX @@ static int load_flat_file(struct linux_binprm * bprm,
409
}
410
411
/* zero the BSS. */
412
- memset(g2h(datapos + data_len), 0, bss_len);
413
+ memset(g2h_untagged(datapos + data_len), 0, bss_len);
414
415
return 0;
416
}
417
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
418
index XXXXXXX..XXXXXXX 100644
419
--- a/linux-user/hppa/cpu_loop.c
420
+++ b/linux-user/hppa/cpu_loop.c
421
@@ -XXX,XX +XXX,XX @@
422
423
static abi_ulong hppa_lws(CPUHPPAState *env)
424
{
425
+ CPUState *cs = env_cpu(env);
426
uint32_t which = env->gr[20];
427
abi_ulong addr = env->gr[26];
428
abi_ulong old = env->gr[25];
429
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
430
}
431
old = tswap32(old);
432
new = tswap32(new);
433
- ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
434
+ ret = qatomic_cmpxchg((uint32_t *)g2h(cs, addr), old, new);
435
ret = tswap32(ret);
436
break;
437
438
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
439
can be host-endian as well. */
440
switch (size) {
441
case 0:
442
- old = *(uint8_t *)g2h(old);
443
- new = *(uint8_t *)g2h(new);
444
- ret = qatomic_cmpxchg((uint8_t *)g2h(addr), old, new);
445
+ old = *(uint8_t *)g2h(cs, old);
446
+ new = *(uint8_t *)g2h(cs, new);
447
+ ret = qatomic_cmpxchg((uint8_t *)g2h(cs, addr), old, new);
448
ret = ret != old;
449
break;
450
case 1:
451
- old = *(uint16_t *)g2h(old);
452
- new = *(uint16_t *)g2h(new);
453
- ret = qatomic_cmpxchg((uint16_t *)g2h(addr), old, new);
454
+ old = *(uint16_t *)g2h(cs, old);
455
+ new = *(uint16_t *)g2h(cs, new);
456
+ ret = qatomic_cmpxchg((uint16_t *)g2h(cs, addr), old, new);
457
ret = ret != old;
458
break;
459
case 2:
460
- old = *(uint32_t *)g2h(old);
461
- new = *(uint32_t *)g2h(new);
462
- ret = qatomic_cmpxchg((uint32_t *)g2h(addr), old, new);
463
+ old = *(uint32_t *)g2h(cs, old);
464
+ new = *(uint32_t *)g2h(cs, new);
465
+ ret = qatomic_cmpxchg((uint32_t *)g2h(cs, addr), old, new);
466
ret = ret != old;
467
break;
468
case 3:
469
{
470
uint64_t o64, n64, r64;
471
- o64 = *(uint64_t *)g2h(old);
472
- n64 = *(uint64_t *)g2h(new);
473
+ o64 = *(uint64_t *)g2h(cs, old);
474
+ n64 = *(uint64_t *)g2h(cs, new);
475
#ifdef CONFIG_ATOMIC64
476
- r64 = qatomic_cmpxchg__nocheck((uint64_t *)g2h(addr),
477
+ r64 = qatomic_cmpxchg__nocheck((uint64_t *)g2h(cs, addr),
478
o64, n64);
479
ret = r64 != o64;
480
#else
481
start_exclusive();
482
- r64 = *(uint64_t *)g2h(addr);
483
+ r64 = *(uint64_t *)g2h(cs, addr);
484
ret = 1;
485
if (r64 == o64) {
486
- *(uint64_t *)g2h(addr) = n64;
487
+ *(uint64_t *)g2h(cs, addr) = n64;
488
ret = 0;
489
}
490
end_exclusive();
491
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
492
index XXXXXXX..XXXXXXX 100644
493
--- a/linux-user/i386/cpu_loop.c
494
+++ b/linux-user/i386/cpu_loop.c
495
@@ -XXX,XX +XXX,XX @@ void target_cpu_copy_regs(CPUArchState *env, struct target_pt_regs *regs)
496
env->idt.base = target_mmap(0, sizeof(uint64_t) * (env->idt.limit + 1),
497
PROT_READ|PROT_WRITE,
498
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
499
- idt_table = g2h(env->idt.base);
500
+ idt_table = g2h_untagged(env->idt.base);
501
set_idt(0, 0);
502
set_idt(1, 0);
503
set_idt(2, 0);
504
@@ -XXX,XX +XXX,XX @@ void target_cpu_copy_regs(CPUArchState *env, struct target_pt_regs *regs)
505
PROT_READ|PROT_WRITE,
506
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
507
env->gdt.limit = sizeof(uint64_t) * TARGET_GDT_ENTRIES - 1;
508
- gdt_table = g2h(env->gdt.base);
509
+ gdt_table = g2h_untagged(env->gdt.base);
510
#ifdef TARGET_ABI32
511
write_dt(&gdt_table[__USER_CS >> 3], 0, 0xfffff,
512
DESC_G_MASK | DESC_B_MASK | DESC_P_MASK | DESC_S_MASK |
513
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
514
index XXXXXXX..XXXXXXX 100644
515
--- a/linux-user/mmap.c
516
+++ b/linux-user/mmap.c
517
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
518
}
519
end = host_end;
520
}
521
- ret = mprotect(g2h(host_start), qemu_host_page_size,
522
+ ret = mprotect(g2h_untagged(host_start), qemu_host_page_size,
523
prot1 & PAGE_BITS);
524
if (ret != 0) {
525
goto error;
526
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
527
for (addr = end; addr < host_end; addr += TARGET_PAGE_SIZE) {
528
prot1 |= page_get_flags(addr);
529
}
530
- ret = mprotect(g2h(host_end - qemu_host_page_size),
531
+ ret = mprotect(g2h_untagged(host_end - qemu_host_page_size),
532
qemu_host_page_size, prot1 & PAGE_BITS);
533
if (ret != 0) {
534
goto error;
535
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
536
537
/* handle the pages in the middle */
538
if (host_start < host_end) {
539
- ret = mprotect(g2h(host_start), host_end - host_start, host_prot);
540
+ ret = mprotect(g2h_untagged(host_start),
541
+ host_end - host_start, host_prot);
542
if (ret != 0) {
543
goto error;
544
}
545
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
546
int prot1, prot_new;
547
548
real_end = real_start + qemu_host_page_size;
549
- host_start = g2h(real_start);
550
+ host_start = g2h_untagged(real_start);
551
552
/* get the protection of the target pages outside the mapping */
553
prot1 = 0;
554
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
555
mprotect(host_start, qemu_host_page_size, prot1 | PROT_WRITE);
556
557
/* read the corresponding file data */
558
- if (pread(fd, g2h(start), end - start, offset) == -1)
559
+ if (pread(fd, g2h_untagged(start), end - start, offset) == -1)
560
return -1;
561
562
/* put final protection */
563
@@ -XXX,XX +XXX,XX @@ static int mmap_frag(abi_ulong real_start,
564
mprotect(host_start, qemu_host_page_size, prot_new);
565
}
566
if (prot_new & PROT_WRITE) {
567
- memset(g2h(start), 0, end - start);
568
+ memset(g2h_untagged(start), 0, end - start);
569
}
570
}
571
return 0;
572
@@ -XXX,XX +XXX,XX @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
573
* - mremap() with MREMAP_FIXED flag
574
* - shmat() with SHM_REMAP flag
575
*/
576
- ptr = mmap(g2h(addr), size, PROT_NONE,
577
+ ptr = mmap(g2h_untagged(addr), size, PROT_NONE,
578
MAP_ANONYMOUS|MAP_PRIVATE|MAP_NORESERVE, -1, 0);
579
580
/* ENOMEM, if host address space has no memory */
581
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
582
/* Note: we prefer to control the mapping address. It is
583
especially important if qemu_host_page_size >
584
qemu_real_host_page_size */
585
- p = mmap(g2h(start), host_len, host_prot,
586
+ p = mmap(g2h_untagged(start), host_len, host_prot,
587
flags | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
588
if (p == MAP_FAILED) {
589
goto fail;
590
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
591
/* update start so that it points to the file position at 'offset' */
592
host_start = (unsigned long)p;
593
if (!(flags & MAP_ANONYMOUS)) {
594
- p = mmap(g2h(start), len, host_prot,
595
+ p = mmap(g2h_untagged(start), len, host_prot,
596
flags | MAP_FIXED, fd, host_offset);
597
if (p == MAP_FAILED) {
598
- munmap(g2h(start), host_len);
599
+ munmap(g2h_untagged(start), host_len);
600
goto fail;
601
}
602
host_start += offset - host_offset;
603
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
604
-1, 0);
605
if (retaddr == -1)
606
goto fail;
607
- if (pread(fd, g2h(start), len, offset) == -1)
608
+ if (pread(fd, g2h_untagged(start), len, offset) == -1)
609
goto fail;
610
if (!(host_prot & PROT_WRITE)) {
611
ret = target_mprotect(start, len, target_prot);
612
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
613
offset1 = 0;
614
else
615
offset1 = offset + real_start - start;
616
- p = mmap(g2h(real_start), real_end - real_start,
617
+ p = mmap(g2h_untagged(real_start), real_end - real_start,
618
host_prot, flags, fd, offset1);
619
if (p == MAP_FAILED)
620
goto fail;
621
@@ -XXX,XX +XXX,XX @@ static void mmap_reserve(abi_ulong start, abi_ulong size)
622
real_end -= qemu_host_page_size;
623
}
624
if (real_start != real_end) {
625
- mmap(g2h(real_start), real_end - real_start, PROT_NONE,
626
+ mmap(g2h_untagged(real_start), real_end - real_start, PROT_NONE,
627
MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE,
628
-1, 0);
629
}
630
@@ -XXX,XX +XXX,XX @@ int target_munmap(abi_ulong start, abi_ulong len)
631
if (reserved_va) {
632
mmap_reserve(real_start, real_end - real_start);
633
} else {
634
- ret = munmap(g2h(real_start), real_end - real_start);
635
+ ret = munmap(g2h_untagged(real_start), real_end - real_start);
636
}
637
}
638
639
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
640
mmap_lock();
641
642
if (flags & MREMAP_FIXED) {
643
- host_addr = mremap(g2h(old_addr), old_size, new_size,
644
- flags, g2h(new_addr));
645
+ host_addr = mremap(g2h_untagged(old_addr), old_size, new_size,
646
+ flags, g2h_untagged(new_addr));
647
648
if (reserved_va && host_addr != MAP_FAILED) {
649
/* If new and old addresses overlap then the above mremap will
650
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
651
errno = ENOMEM;
652
host_addr = MAP_FAILED;
653
} else {
654
- host_addr = mremap(g2h(old_addr), old_size, new_size,
655
- flags | MREMAP_FIXED, g2h(mmap_start));
656
+ host_addr = mremap(g2h_untagged(old_addr), old_size, new_size,
657
+ flags | MREMAP_FIXED,
658
+ g2h_untagged(mmap_start));
659
if (reserved_va) {
660
mmap_reserve(old_addr, old_size);
661
}
662
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
663
}
664
}
665
if (prot == 0) {
666
- host_addr = mremap(g2h(old_addr), old_size, new_size, flags);
667
+ host_addr = mremap(g2h_untagged(old_addr),
668
+ old_size, new_size, flags);
669
670
if (host_addr != MAP_FAILED) {
671
/* Check if address fits target address space */
672
if (!guest_range_valid(h2g(host_addr), new_size)) {
673
/* Revert mremap() changes */
674
- host_addr = mremap(g2h(old_addr), new_size, old_size,
675
- flags);
676
+ host_addr = mremap(g2h_untagged(old_addr),
677
+ new_size, old_size, flags);
678
errno = ENOMEM;
679
host_addr = MAP_FAILED;
680
} else if (reserved_va && old_size > new_size) {
681
diff --git a/linux-user/ppc/signal.c b/linux-user/ppc/signal.c
682
index XXXXXXX..XXXXXXX 100644
683
--- a/linux-user/ppc/signal.c
684
+++ b/linux-user/ppc/signal.c
685
@@ -XXX,XX +XXX,XX @@ static void restore_user_regs(CPUPPCState *env,
686
uint64_t v_addr;
687
/* 64-bit needs to recover the pointer to the vectors from the frame */
688
__get_user(v_addr, &frame->v_regs);
689
- v_regs = g2h(v_addr);
690
+ v_regs = g2h(env_cpu(env), v_addr);
691
#else
692
v_regs = (ppc_avr_t *)frame->mc_vregs.altivec;
693
#endif
694
@@ -XXX,XX +XXX,XX @@ void setup_rt_frame(int sig, struct target_sigaction *ka,
695
if (get_ppc64_abi(image) < 2) {
696
/* ELFv1 PPC64 function pointers are pointers to OPD entries. */
697
struct target_func_ptr *handler =
698
- (struct target_func_ptr *)g2h(ka->_sa_handler);
699
+ (struct target_func_ptr *)g2h(env_cpu(env), ka->_sa_handler);
700
env->nip = tswapl(handler->entry);
701
env->gpr[2] = tswapl(handler->toc);
702
} else {
703
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
704
index XXXXXXX..XXXXXXX 100644
705
--- a/linux-user/syscall.c
706
+++ b/linux-user/syscall.c
707
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
708
/* Heap contents are initialized to zero, as for anonymous
709
* mapped pages. */
710
if (new_brk > target_brk) {
711
- memset(g2h(target_brk), 0, new_brk - target_brk);
712
+ memset(g2h_untagged(target_brk), 0, new_brk - target_brk);
713
}
714
    target_brk = new_brk;
715
DEBUGF_BRK(TARGET_ABI_FMT_lx " (new_brk <= brk_page)\n", target_brk);
716
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
717
* come from the remaining part of the previous page: it may
718
* contains garbage data due to a previous heap usage (grown
719
* then shrunken). */
720
- memset(g2h(target_brk), 0, brk_page - target_brk);
721
+ memset(g2h_untagged(target_brk), 0, brk_page - target_brk);
722
723
target_brk = new_brk;
724
brk_page = HOST_PAGE_ALIGN(target_brk);
725
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
726
mmap_lock();
727
728
if (shmaddr)
729
- host_raddr = shmat(shmid, (void *)g2h(shmaddr), shmflg);
730
+ host_raddr = shmat(shmid, (void *)g2h_untagged(shmaddr), shmflg);
731
else {
732
abi_ulong mmap_start;
733
734
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
735
errno = ENOMEM;
736
host_raddr = (void *)-1;
737
} else
738
- host_raddr = shmat(shmid, g2h(mmap_start), shmflg | SHM_REMAP);
739
+ host_raddr = shmat(shmid, g2h_untagged(mmap_start),
740
+ shmflg | SHM_REMAP);
741
}
742
743
if (host_raddr == (void *)-1) {
744
@@ -XXX,XX +XXX,XX @@ static inline abi_long do_shmdt(abi_ulong shmaddr)
745
break;
746
}
747
}
748
- rv = get_errno(shmdt(g2h(shmaddr)));
749
+ rv = get_errno(shmdt(g2h_untagged(shmaddr)));
750
751
mmap_unlock();
752
753
@@ -XXX,XX +XXX,XX @@ static abi_long write_ldt(CPUX86State *env,
754
MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
755
if (env->ldt.base == -1)
756
return -TARGET_ENOMEM;
757
- memset(g2h(env->ldt.base), 0,
758
+ memset(g2h_untagged(env->ldt.base), 0,
759
TARGET_LDT_ENTRIES * TARGET_LDT_ENTRY_SIZE);
760
env->ldt.limit = 0xffff;
761
- ldt_table = g2h(env->ldt.base);
762
+ ldt_table = g2h_untagged(env->ldt.base);
763
}
764
765
/* NOTE: same code as Linux kernel */
766
@@ -XXX,XX +XXX,XX @@ static abi_long do_modify_ldt(CPUX86State *env, int func, abi_ulong ptr,
767
#if defined(TARGET_ABI32)
768
abi_long do_set_thread_area(CPUX86State *env, abi_ulong ptr)
769
{
770
- uint64_t *gdt_table = g2h(env->gdt.base);
771
+ uint64_t *gdt_table = g2h_untagged(env->gdt.base);
772
struct target_modify_ldt_ldt_s ldt_info;
773
struct target_modify_ldt_ldt_s *target_ldt_info;
774
int seg_32bit, contents, read_exec_only, limit_in_pages;
775
@@ -XXX,XX +XXX,XX @@ install:
776
static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
777
{
778
struct target_modify_ldt_ldt_s *target_ldt_info;
779
- uint64_t *gdt_table = g2h(env->gdt.base);
780
+ uint64_t *gdt_table = g2h_untagged(env->gdt.base);
781
uint32_t base_addr, limit, flags;
782
int seg_32bit, contents, read_exec_only, limit_in_pages, idx;
783
int seg_not_present, useable, lm;
784
@@ -XXX,XX +XXX,XX @@ static int do_safe_futex(int *uaddr, int op, int val,
785
tricky. However they're probably useless because guest atomic
786
operations won't work either. */
787
#if defined(TARGET_NR_futex)
788
-static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
789
- target_ulong uaddr2, int val3)
790
+static int do_futex(CPUState *cpu, target_ulong uaddr, int op, int val,
791
+ target_ulong timeout, target_ulong uaddr2, int val3)
792
{
793
struct timespec ts, *pts;
794
int base_op;
795
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
796
} else {
797
pts = NULL;
798
}
799
- return do_safe_futex(g2h(uaddr), op, tswap32(val), pts, NULL, val3);
800
+ return do_safe_futex(g2h(cpu, uaddr),
801
+ op, tswap32(val), pts, NULL, val3);
802
case FUTEX_WAKE:
803
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
804
+ return do_safe_futex(g2h(cpu, uaddr),
805
+ op, val, NULL, NULL, 0);
806
case FUTEX_FD:
807
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
808
+ return do_safe_futex(g2h(cpu, uaddr),
809
+ op, val, NULL, NULL, 0);
810
case FUTEX_REQUEUE:
811
case FUTEX_CMP_REQUEUE:
812
case FUTEX_WAKE_OP:
813
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
814
to satisfy the compiler. We do not need to tswap TIMEOUT
815
since it's not compared to guest memory. */
816
pts = (struct timespec *)(uintptr_t) timeout;
817
- return do_safe_futex(g2h(uaddr), op, val, pts, g2h(uaddr2),
818
+ return do_safe_futex(g2h(cpu, uaddr), op, val, pts, g2h(cpu, uaddr2),
819
(base_op == FUTEX_CMP_REQUEUE
820
- ? tswap32(val3)
821
- : val3));
822
+ ? tswap32(val3) : val3));
823
default:
824
return -TARGET_ENOSYS;
825
}
826
@@ -XXX,XX +XXX,XX @@ static int do_futex(target_ulong uaddr, int op, int val, target_ulong timeout,
827
#endif
828
829
#if defined(TARGET_NR_futex_time64)
830
-static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong timeout,
831
+static int do_futex_time64(CPUState *cpu, target_ulong uaddr, int op,
832
+ int val, target_ulong timeout,
833
target_ulong uaddr2, int val3)
834
{
835
struct timespec ts, *pts;
836
@@ -XXX,XX +XXX,XX @@ static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong tim
837
} else {
838
pts = NULL;
839
}
840
- return do_safe_futex(g2h(uaddr), op, tswap32(val), pts, NULL, val3);
841
+ return do_safe_futex(g2h(cpu, uaddr), op,
842
+ tswap32(val), pts, NULL, val3);
843
case FUTEX_WAKE:
844
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
845
+ return do_safe_futex(g2h(cpu, uaddr), op, val, NULL, NULL, 0);
846
case FUTEX_FD:
847
- return do_safe_futex(g2h(uaddr), op, val, NULL, NULL, 0);
848
+ return do_safe_futex(g2h(cpu, uaddr), op, val, NULL, NULL, 0);
849
case FUTEX_REQUEUE:
850
case FUTEX_CMP_REQUEUE:
851
case FUTEX_WAKE_OP:
852
@@ -XXX,XX +XXX,XX @@ static int do_futex_time64(target_ulong uaddr, int op, int val, target_ulong tim
853
to satisfy the compiler. We do not need to tswap TIMEOUT
854
since it's not compared to guest memory. */
855
pts = (struct timespec *)(uintptr_t) timeout;
856
- return do_safe_futex(g2h(uaddr), op, val, pts, g2h(uaddr2),
857
+ return do_safe_futex(g2h(cpu, uaddr), op, val, pts, g2h(cpu, uaddr2),
858
(base_op == FUTEX_CMP_REQUEUE
859
- ? tswap32(val3)
860
- : val3));
861
+ ? tswap32(val3) : val3));
862
default:
863
return -TARGET_ENOSYS;
864
}
865
@@ -XXX,XX +XXX,XX @@ static int open_self_maps(void *cpu_env, int fd)
866
const char *path;
867
868
max = h2g_valid(max - 1) ?
869
- max : (uintptr_t) g2h(GUEST_ADDR_MAX) + 1;
870
+ max : (uintptr_t) g2h_untagged(GUEST_ADDR_MAX) + 1;
871
872
if (page_check_range(h2g(min), max - min, flags) == -1) {
873
continue;
874
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
875
876
if (ts->child_tidptr) {
877
put_user_u32(0, ts->child_tidptr);
878
- do_sys_futex(g2h(ts->child_tidptr), FUTEX_WAKE, INT_MAX,
879
- NULL, NULL, 0);
880
+ do_sys_futex(g2h(cpu, ts->child_tidptr),
881
+ FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
882
}
883
thread_cpu = NULL;
884
g_free(ts);
885
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
886
if (!arg5) {
887
ret = mount(p, p2, p3, (unsigned long)arg4, NULL);
888
} else {
889
- ret = mount(p, p2, p3, (unsigned long)arg4, g2h(arg5));
890
+ ret = mount(p, p2, p3, (unsigned long)arg4, g2h(cpu, arg5));
891
}
892
ret = get_errno(ret);
893
894
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
895
/* ??? msync/mlock/munlock are broken for softmmu. */
896
#ifdef TARGET_NR_msync
897
case TARGET_NR_msync:
898
- return get_errno(msync(g2h(arg1), arg2, arg3));
899
+ return get_errno(msync(g2h(cpu, arg1), arg2, arg3));
900
#endif
901
#ifdef TARGET_NR_mlock
902
case TARGET_NR_mlock:
903
- return get_errno(mlock(g2h(arg1), arg2));
904
+ return get_errno(mlock(g2h(cpu, arg1), arg2));
905
#endif
906
#ifdef TARGET_NR_munlock
907
case TARGET_NR_munlock:
908
- return get_errno(munlock(g2h(arg1), arg2));
909
+ return get_errno(munlock(g2h(cpu, arg1), arg2));
910
#endif
911
#ifdef TARGET_NR_mlockall
912
case TARGET_NR_mlockall:
913
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
914
915
#if defined(TARGET_NR_set_tid_address) && defined(__NR_set_tid_address)
916
case TARGET_NR_set_tid_address:
917
- return get_errno(set_tid_address((int *)g2h(arg1)));
918
+ return get_errno(set_tid_address((int *)g2h(cpu, arg1)));
919
#endif
920
921
case TARGET_NR_tkill:
922
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
923
#endif
924
#ifdef TARGET_NR_futex
925
case TARGET_NR_futex:
926
- return do_futex(arg1, arg2, arg3, arg4, arg5, arg6);
927
+ return do_futex(cpu, arg1, arg2, arg3, arg4, arg5, arg6);
928
#endif
929
#ifdef TARGET_NR_futex_time64
930
case TARGET_NR_futex_time64:
931
- return do_futex_time64(arg1, arg2, arg3, arg4, arg5, arg6);
932
+ return do_futex_time64(cpu, arg1, arg2, arg3, arg4, arg5, arg6);
933
#endif
934
#if defined(TARGET_NR_inotify_init) && defined(__NR_inotify_init)
935
case TARGET_NR_inotify_init:
936
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
937
index XXXXXXX..XXXXXXX 100644
938
--- a/target/arm/helper-a64.c
939
+++ b/target/arm/helper-a64.c
940
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_le)(CPUARMState *env, uint64_t addr,
941
942
#ifdef CONFIG_USER_ONLY
943
/* ??? Enforce alignment. */
944
- uint64_t *haddr = g2h(addr);
945
+ uint64_t *haddr = g2h(env_cpu(env), addr);
946
947
set_helper_retaddr(ra);
948
o0 = ldq_le_p(haddr + 0);
949
@@ -XXX,XX +XXX,XX @@ uint64_t HELPER(paired_cmpxchg64_be)(CPUARMState *env, uint64_t addr,
950
951
#ifdef CONFIG_USER_ONLY
952
/* ??? Enforce alignment. */
953
- uint64_t *haddr = g2h(addr);
954
+ uint64_t *haddr = g2h(env_cpu(env), addr);
955
956
set_helper_retaddr(ra);
957
o1 = ldq_be_p(haddr + 0);
958
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
959
index XXXXXXX..XXXXXXX 100644
960
--- a/target/hppa/op_helper.c
961
+++ b/target/hppa/op_helper.c
962
@@ -XXX,XX +XXX,XX @@ static void atomic_store_3(CPUHPPAState *env, target_ulong addr, uint32_t val,
963
#ifdef CONFIG_USER_ONLY
964
uint32_t old, new, cmp;
965
966
- uint32_t *haddr = g2h(addr - 1);
967
+ uint32_t *haddr = g2h(env_cpu(env), addr - 1);
968
old = *haddr;
969
while (1) {
970
new = (old & ~mask) | (val & mask);
971
diff --git a/target/i386/tcg/mem_helper.c b/target/i386/tcg/mem_helper.c
972
index XXXXXXX..XXXXXXX 100644
973
--- a/target/i386/tcg/mem_helper.c
974
+++ b/target/i386/tcg/mem_helper.c
975
@@ -XXX,XX +XXX,XX @@ void helper_cmpxchg8b(CPUX86State *env, target_ulong a0)
976
977
#ifdef CONFIG_USER_ONLY
978
{
979
- uint64_t *haddr = g2h(a0);
980
+ uint64_t *haddr = g2h(env_cpu(env), a0);
981
cmpv = cpu_to_le64(cmpv);
982
newv = cpu_to_le64(newv);
983
oldv = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
984
diff --git a/target/s390x/mem_helper.c b/target/s390x/mem_helper.c
985
index XXXXXXX..XXXXXXX 100644
986
--- a/target/s390x/mem_helper.c
987
+++ b/target/s390x/mem_helper.c
988
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
989
990
if (parallel) {
991
#ifdef CONFIG_USER_ONLY
992
- uint32_t *haddr = g2h(a1);
993
+ uint32_t *haddr = g2h(env_cpu(env), a1);
994
ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
995
#else
996
TCGMemOpIdx oi = make_memop_idx(MO_TEUL | MO_ALIGN, mem_idx);
997
@@ -XXX,XX +XXX,XX @@ static uint32_t do_csst(CPUS390XState *env, uint32_t r3, uint64_t a1,
998
if (parallel) {
999
#ifdef CONFIG_ATOMIC64
1000
# ifdef CONFIG_USER_ONLY
1001
- uint64_t *haddr = g2h(a1);
1002
+ uint64_t *haddr = g2h(env_cpu(env), a1);
1003
ov = qatomic_cmpxchg__nocheck(haddr, cv, nv);
1004
# else
1005
TCGMemOpIdx oi = make_memop_idx(MO_TEQ | MO_ALIGN, mem_idx);
1006
--
1007
2.20.1
1008
1009
diff view generated by jsdifflib
New patch
From: Richard Henderson <richard.henderson@linaro.org>

We define target_mmap et al as untagged, so that they can be
used from the binary loaders. Explicitly call cpu_untagged_addr
for munmap, mprotect, mremap syscall entry points.

Add a few comments for the syscalls that are exempted by the
kernel's tagged-address-abi.rst.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/syscall.c | 11 +++++++++++
1 file changed, 11 insertions(+)

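As an aside, here is a minimal standalone sketch of the untagging rule these entry points rely on. This is illustrative C only, not part of the patch: untag_guest_addr() is a made-up name for what cpu_untagged_addr() does later in this series, and it assumes the usual arithmetic right shift on the host.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t untag_guest_addr(uint64_t addr)
    {
        /* Sign-extend from bit 55, then AND with the original value:
         * this clears bits 63:56 when bit 55 is 0 and leaves addresses
         * with bit 55 set untouched, mirroring the fact that TBI is
         * enabled for userspace but not kernelspace addresses. */
        uint64_t ext = (uint64_t)(((int64_t)(addr << 8)) >> 8);
        return addr & ext;
    }

    int main(void)
    {
        uint64_t tagged = 0x5600007fdeadbeefULL;   /* tag 0x56 in bits 63:56 */
        printf("0x%" PRIx64 "\n", untag_guest_addr(tagged));
        /* prints 0x7fdeadbeef: the tag is gone before target_munmap() runs */
        return 0;
    }
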
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
19
index XXXXXXX..XXXXXXX 100644
20
--- a/linux-user/syscall.c
21
+++ b/linux-user/syscall.c
22
@@ -XXX,XX +XXX,XX @@ abi_long do_brk(abi_ulong new_brk)
23
abi_long mapped_addr;
24
abi_ulong new_alloc_size;
25
26
+ /* brk pointers are always untagged */
27
+
28
DEBUGF_BRK("do_brk(" TARGET_ABI_FMT_lx ") -> ", new_brk);
29
30
if (!new_brk) {
31
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
32
int i,ret;
33
abi_ulong shmlba;
34
35
+ /* shmat pointers are always untagged */
36
+
37
/* find out the length of the shared memory segment */
38
ret = get_errno(shmctl(shmid, IPC_STAT, &shm_info));
39
if (is_error(ret)) {
40
@@ -XXX,XX +XXX,XX @@ static inline abi_long do_shmdt(abi_ulong shmaddr)
41
int i;
42
abi_long rv;
43
44
+ /* shmdt pointers are always untagged */
45
+
46
mmap_lock();
47
48
for (i = 0; i < N_SHM_REGIONS; ++i) {
49
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
50
v5, v6));
51
}
52
#else
53
+ /* mmap pointers are always untagged */
54
ret = get_errno(target_mmap(arg1, arg2, arg3,
55
target_to_host_bitmask(arg4, mmap_flags_tbl),
56
arg5,
57
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
58
return get_errno(ret);
59
#endif
60
case TARGET_NR_munmap:
61
+ arg1 = cpu_untagged_addr(cpu, arg1);
62
return get_errno(target_munmap(arg1, arg2));
63
case TARGET_NR_mprotect:
64
+ arg1 = cpu_untagged_addr(cpu, arg1);
65
{
66
TaskState *ts = cpu->opaque;
67
/* Special hack to detect libc making the stack executable. */
68
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
69
return get_errno(target_mprotect(arg1, arg2, arg3));
70
#ifdef TARGET_NR_mremap
71
case TARGET_NR_mremap:
72
+ arg1 = cpu_untagged_addr(cpu, arg1);
73
+ /* mremap new_addr (arg5) is always untagged */
74
return get_errno(target_mremap(arg1, arg2, arg3, arg4, arg5));
75
#endif
76
/* ??? msync/mlock/munlock are broken for softmmu. */
77
--
78
2.20.1
79
80
diff view generated by jsdifflib
New patch
From: Richard Henderson <richard.henderson@linaro.org>

We're currently open-coding the range check in access_ok;
use guest_range_valid when size != 0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/qemu.h | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)

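Side note: the open-coded form needed the "addr + size - 1 < addr" clause to catch wrap-around, while the guest_range_valid() shape (shown in the next patch) phrases the same condition with a subtraction so no intermediate sum can overflow. A quick standalone illustration, with GUEST_ADDR_MAX chosen arbitrarily for the demo:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GUEST_ADDR_MAX 0x7fffffffu    /* arbitrary demo limit */

    /* Overflow-safe range check in the guest_range_valid() style.
     * Caller guarantees len != 0, as access_ok() does for size == 0. */
    static bool range_valid(uint32_t start, uint32_t len)
    {
        return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
    }

    int main(void)
    {
        /* 16 bytes ending exactly at the limit: accepted */
        printf("%d\n", range_valid(0x7ffffff0u, 0x10));   /* 1 */
        /* one byte more runs past the limit: rejected, and no
         * addition in the check can wrap */
        printf("%d\n", range_valid(0x7ffffff0u, 0x11));   /* 0 */
        return 0;
    }
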
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/qemu.h
17
+++ b/linux-user/qemu.h
18
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
19
20
static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
21
{
22
- if (!guest_addr_valid(addr)) {
23
- return false;
24
- }
25
- if (size != 0 &&
26
- (addr + size - 1 < addr ||
27
- !guest_addr_valid(addr + size - 1))) {
28
+ if (size == 0
29
+ ? !guest_addr_valid(addr)
30
+ : !guest_range_valid(addr, size)) {
31
return false;
32
}
33
return page_check_range((target_ulong)addr, size, type) == 0;
34
--
35
2.20.1
36
37
diff view generated by jsdifflib
New patch
From: Richard Henderson <richard.henderson@linaro.org>

The places that use these are better off using untagged
addresses, so do not provide tagged versions. Rename them
to make the address type clear.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/exec/cpu_ldst.h | 4 ++--
linux-user/qemu.h | 4 ++--
accel/tcg/user-exec.c | 3 ++-
linux-user/mmap.c | 12 ++++++------
linux-user/syscall.c | 2 +-
5 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
20
index XXXXXXX..XXXXXXX 100644
21
--- a/include/exec/cpu_ldst.h
22
+++ b/include/exec/cpu_ldst.h
23
@@ -XXX,XX +XXX,XX @@ static inline void *g2h(CPUState *cs, abi_ptr x)
24
return g2h_untagged(cpu_untagged_addr(cs, x));
25
}
26
27
-static inline bool guest_addr_valid(abi_ulong x)
28
+static inline bool guest_addr_valid_untagged(abi_ulong x)
29
{
30
return x <= GUEST_ADDR_MAX;
31
}
32
33
-static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
34
+static inline bool guest_range_valid_untagged(abi_ulong start, abi_ulong len)
35
{
36
return len - 1 <= GUEST_ADDR_MAX && start <= GUEST_ADDR_MAX - len + 1;
37
}
38
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
39
index XXXXXXX..XXXXXXX 100644
40
--- a/linux-user/qemu.h
41
+++ b/linux-user/qemu.h
42
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
43
static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
44
{
45
if (size == 0
46
- ? !guest_addr_valid(addr)
47
- : !guest_range_valid(addr, size)) {
48
+ ? !guest_addr_valid_untagged(addr)
49
+ : !guest_range_valid_untagged(addr, size)) {
50
return false;
51
}
52
return page_check_range((target_ulong)addr, size, type) == 0;
53
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/accel/tcg/user-exec.c
56
+++ b/accel/tcg/user-exec.c
57
@@ -XXX,XX +XXX,XX @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
58
g_assert_not_reached();
59
}
60
61
- if (!guest_addr_valid(addr) || page_check_range(addr, 1, flags) < 0) {
62
+ if (!guest_addr_valid_untagged(addr) ||
63
+ page_check_range(addr, 1, flags) < 0) {
64
if (nonfault) {
65
return TLB_INVALID_MASK;
66
} else {
67
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/linux-user/mmap.c
70
+++ b/linux-user/mmap.c
71
@@ -XXX,XX +XXX,XX @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
72
}
73
len = TARGET_PAGE_ALIGN(len);
74
end = start + len;
75
- if (!guest_range_valid(start, len)) {
76
+ if (!guest_range_valid_untagged(start, len)) {
77
return -TARGET_ENOMEM;
78
}
79
if (len == 0) {
80
@@ -XXX,XX +XXX,XX @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
81
* It can fail only on 64-bit host with 32-bit target.
82
* On any other target/host host mmap() handles this error correctly.
83
*/
84
- if (end < start || !guest_range_valid(start, len)) {
85
+ if (end < start || !guest_range_valid_untagged(start, len)) {
86
errno = ENOMEM;
87
goto fail;
88
}
89
@@ -XXX,XX +XXX,XX @@ int target_munmap(abi_ulong start, abi_ulong len)
90
if (start & ~TARGET_PAGE_MASK)
91
return -TARGET_EINVAL;
92
len = TARGET_PAGE_ALIGN(len);
93
- if (len == 0 || !guest_range_valid(start, len)) {
94
+ if (len == 0 || !guest_range_valid_untagged(start, len)) {
95
return -TARGET_EINVAL;
96
}
97
98
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
99
int prot;
100
void *host_addr;
101
102
- if (!guest_range_valid(old_addr, old_size) ||
103
+ if (!guest_range_valid_untagged(old_addr, old_size) ||
104
((flags & MREMAP_FIXED) &&
105
- !guest_range_valid(new_addr, new_size))) {
106
+ !guest_range_valid_untagged(new_addr, new_size))) {
107
errno = ENOMEM;
108
return -1;
109
}
110
@@ -XXX,XX +XXX,XX @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
111
112
if (host_addr != MAP_FAILED) {
113
/* Check if address fits target address space */
114
- if (!guest_range_valid(h2g(host_addr), new_size)) {
115
+ if (!guest_range_valid_untagged(h2g(host_addr), new_size)) {
116
/* Revert mremap() changes */
117
host_addr = mremap(g2h_untagged(old_addr),
118
new_size, old_size, flags);
119
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/linux-user/syscall.c
122
+++ b/linux-user/syscall.c
123
@@ -XXX,XX +XXX,XX @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
124
return -TARGET_EINVAL;
125
}
126
}
127
- if (!guest_range_valid(shmaddr, shm_info.shm_segsz)) {
128
+ if (!guest_range_valid_untagged(shmaddr, shm_info.shm_segsz)) {
129
return -TARGET_EINVAL;
130
}
131
132
--
133
2.20.1
134
135
diff view generated by jsdifflib
New patch
From: Richard Henderson <richard.henderson@linaro.org>

Provide both tagged and untagged versions of access_ok.
In a few places use thread_cpu, as the user is several
callees removed from do_syscall1.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
linux-user/qemu.h | 11 +++++++++--
linux-user/elfload.c | 2 +-
linux-user/hppa/cpu_loop.c | 8 ++++----
linux-user/i386/cpu_loop.c | 2 +-
linux-user/i386/signal.c | 5 +++--
linux-user/syscall.c | 9 ++++++---
6 files changed, 24 insertions(+), 13 deletions(-)

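The shape of the change is a thin tagged wrapper over an untagged core check. A rough standalone model of that layering follows; the names and the address-space limit are illustrative, not the real QEMU declarations, and the untagging is simplified to a plain top-byte strip.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t abi_ulong;
    struct cpu { bool tagged_addr_enable; };      /* stand-in for CPUState */

    static abi_ulong cpu_untagged_addr(const struct cpu *cs, abi_ulong a)
    {
        /* simplified: strip the top byte whenever tagging is enabled */
        return cs->tagged_addr_enable ? (a & 0x00ffffffffffffffULL) : a;
    }

    static bool access_ok_untagged(abi_ulong addr, abi_ulong size)
    {
        const abi_ulong max = 0xffffffffULL;      /* arbitrary demo limit */
        return size == 0 ? addr <= max
                         : size - 1 <= max && addr <= max - size + 1;
    }

    /* The tagged wrapper untags once and delegates; code that already
     * holds untagged addresses (lock_user, elfload) keeps calling the
     * _untagged form directly. */
    static bool access_ok(const struct cpu *cs, abi_ulong addr, abi_ulong size)
    {
        return access_ok_untagged(cpu_untagged_addr(cs, addr), size);
    }

    int main(void)
    {
        struct cpu cs = { .tagged_addr_enable = true };
        printf("%d\n", access_ok(&cs, 0x5600000000001000ULL, 64));   /* 1 */
        return 0;
    }
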
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/linux-user/qemu.h
23
+++ b/linux-user/qemu.h
24
@@ -XXX,XX +XXX,XX @@ extern unsigned long guest_stack_size;
25
#define VERIFY_READ PAGE_READ
26
#define VERIFY_WRITE (PAGE_READ | PAGE_WRITE)
27
28
-static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
29
+static inline bool access_ok_untagged(int type, abi_ulong addr, abi_ulong size)
30
{
31
if (size == 0
32
? !guest_addr_valid_untagged(addr)
33
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
34
return page_check_range((target_ulong)addr, size, type) == 0;
35
}
36
37
+static inline bool access_ok(CPUState *cpu, int type,
38
+ abi_ulong addr, abi_ulong size)
39
+{
40
+ return access_ok_untagged(type, cpu_untagged_addr(cpu, addr), size);
41
+}
42
+
43
/* NOTE __get_user and __put_user use host pointers and don't check access.
44
These are usually used to access struct data members once the struct has
45
been locked - usually with lock_user_struct. */
46
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
47
host area will have the same contents as the guest. */
48
static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
49
{
50
- if (!access_ok(type, guest_addr, len))
51
+ if (!access_ok_untagged(type, guest_addr, len)) {
52
return NULL;
53
+ }
54
#ifdef DEBUG_REMAP
55
{
56
void *addr;
57
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
58
index XXXXXXX..XXXXXXX 100644
59
--- a/linux-user/elfload.c
60
+++ b/linux-user/elfload.c
61
@@ -XXX,XX +XXX,XX @@ static int vma_get_mapping_count(const struct mm_struct *mm)
62
static abi_ulong vma_dump_size(const struct vm_area_struct *vma)
63
{
64
/* if we cannot even read the first page, skip it */
65
- if (!access_ok(VERIFY_READ, vma->vma_start, TARGET_PAGE_SIZE))
66
+ if (!access_ok_untagged(VERIFY_READ, vma->vma_start, TARGET_PAGE_SIZE))
67
return (0);
68
69
/*
70
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/linux-user/hppa/cpu_loop.c
73
+++ b/linux-user/hppa/cpu_loop.c
74
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
75
return -TARGET_ENOSYS;
76
77
case 0: /* elf32 atomic 32bit cmpxchg */
78
- if ((addr & 3) || !access_ok(VERIFY_WRITE, addr, 4)) {
79
+ if ((addr & 3) || !access_ok(cs, VERIFY_WRITE, addr, 4)) {
80
return -TARGET_EFAULT;
81
}
82
old = tswap32(old);
83
@@ -XXX,XX +XXX,XX @@ static abi_ulong hppa_lws(CPUHPPAState *env)
84
return -TARGET_ENOSYS;
85
}
86
if (((addr | old | new) & ((1 << size) - 1))
87
- || !access_ok(VERIFY_WRITE, addr, 1 << size)
88
- || !access_ok(VERIFY_READ, old, 1 << size)
89
- || !access_ok(VERIFY_READ, new, 1 << size)) {
90
+ || !access_ok(cs, VERIFY_WRITE, addr, 1 << size)
91
+ || !access_ok(cs, VERIFY_READ, old, 1 << size)
92
+ || !access_ok(cs, VERIFY_READ, new, 1 << size)) {
93
return -TARGET_EFAULT;
94
}
95
/* Note that below we use host-endian loads so that the cmpxchg
96
diff --git a/linux-user/i386/cpu_loop.c b/linux-user/i386/cpu_loop.c
97
index XXXXXXX..XXXXXXX 100644
98
--- a/linux-user/i386/cpu_loop.c
99
+++ b/linux-user/i386/cpu_loop.c
100
@@ -XXX,XX +XXX,XX @@ static bool write_ok_or_segv(CPUX86State *env, abi_ptr addr, size_t len)
101
* For all the vsyscalls, NULL means "don't write anything" not
102
* "write it at address 0".
103
*/
104
- if (addr == 0 || access_ok(VERIFY_WRITE, addr, len)) {
105
+ if (addr == 0 || access_ok(env_cpu(env), VERIFY_WRITE, addr, len)) {
106
return true;
107
}
108
109
diff --git a/linux-user/i386/signal.c b/linux-user/i386/signal.c
110
index XXXXXXX..XXXXXXX 100644
111
--- a/linux-user/i386/signal.c
112
+++ b/linux-user/i386/signal.c
113
@@ -XXX,XX +XXX,XX @@ restore_sigcontext(CPUX86State *env, struct target_sigcontext *sc)
114
115
fpstate_addr = tswapl(sc->fpstate);
116
if (fpstate_addr != 0) {
117
- if (!access_ok(VERIFY_READ, fpstate_addr,
118
- sizeof(struct target_fpstate)))
119
+ if (!access_ok(env_cpu(env), VERIFY_READ, fpstate_addr,
120
+ sizeof(struct target_fpstate))) {
121
goto badframe;
122
+ }
123
#ifndef TARGET_X86_64
124
cpu_x86_frstor(env, fpstate_addr, 1);
125
#else
126
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
127
index XXXXXXX..XXXXXXX 100644
128
--- a/linux-user/syscall.c
129
+++ b/linux-user/syscall.c
130
@@ -XXX,XX +XXX,XX @@ static abi_long do_accept4(int fd, abi_ulong target_addr,
131
return -TARGET_EINVAL;
132
}
133
134
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
135
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
136
return -TARGET_EFAULT;
137
+ }
138
139
addr = alloca(addrlen);
140
141
@@ -XXX,XX +XXX,XX @@ static abi_long do_getpeername(int fd, abi_ulong target_addr,
142
return -TARGET_EINVAL;
143
}
144
145
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
146
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
147
return -TARGET_EFAULT;
148
+ }
149
150
addr = alloca(addrlen);
151
152
@@ -XXX,XX +XXX,XX @@ static abi_long do_getsockname(int fd, abi_ulong target_addr,
153
return -TARGET_EINVAL;
154
}
155
156
- if (!access_ok(VERIFY_WRITE, target_addr, addrlen))
157
+ if (!access_ok(thread_cpu, VERIFY_WRITE, target_addr, addrlen)) {
158
return -TARGET_EFAULT;
159
+ }
160
161
addr = alloca(addrlen);
162
163
--
164
2.20.1
165
166
diff view generated by jsdifflib
1
From: Pekka Enberg <penberg@iki.fi>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This patch adds a "raspi3" machine type, which can now be selected as
3
These functions are not small, except for unlock_user
4
the machine to run on by users via the "-M" command line option to QEMU.
4
without debugging enabled. Move them out of line, and
5
add missing braces on the way.
5
6
6
The machine type does *not* ignore memory transaction failures so we
7
likely need to add some dummy devices later when people run something
8
more complicated than what I'm using for testing.
9
10
Signed-off-by: Pekka Enberg <penberg@iki.fi>
11
[PMM: added #ifdef TARGET_AARCH64 so we don't provide the 64-bit
12
board in the 32-bit only arm-softmmu build.]
13
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210210000223.884088-18-richard.henderson@linaro.org
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
11
---
17
hw/arm/raspi.c | 23 +++++++++++++++++++++++
12
linux-user/qemu.h | 45 ++++++-------------------------------------
18
1 file changed, 23 insertions(+)
13
linux-user/uaccess.c | 46 ++++++++++++++++++++++++++++++++++++++++++++
14
2 files changed, 52 insertions(+), 39 deletions(-)
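For readers new to these helpers, the contract they implement is: lock, use the returned host pointer, unlock with the number of bytes to copy back. The following is a purely illustrative standalone model of that contract, with guest memory reduced to a host array and the VERIFY_* flags and DEBUG_REMAP bounce buffer omitted.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint8_t guest_ram[4096];               /* toy guest address space */

    static void *lock_user(unsigned long gaddr, unsigned long len, int copy)
    {
        (void)copy;                               /* no bounce buffer here */
        if (len > sizeof(guest_ram) || gaddr > sizeof(guest_ram) - len) {
            return NULL;                          /* caller reports -TARGET_EFAULT */
        }
        return guest_ram + gaddr;
    }

    static void unlock_user(void *host_ptr, unsigned long gaddr, unsigned long len)
    {
        /* with a DEBUG_REMAP bounce buffer the first 'len' bytes would be
         * copied back to guest memory here; nothing to do in this model */
        (void)host_ptr; (void)gaddr; (void)len;
    }

    int main(void)
    {
        char buf[8];
        void *p = lock_user(16, sizeof(buf), 1);  /* 1: host view must match guest */
        if (!p) {
            return 1;
        }
        memcpy(buf, p, sizeof(buf));              /* read guest bytes */
        unlock_user(p, 16, 0);                    /* 0: nothing to write back */
        printf("read %zu guest bytes\n", sizeof(buf));
        return 0;
    }
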
19
15
20
diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
16
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
21
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/arm/raspi.c
18
--- a/linux-user/qemu.h
23
+++ b/hw/arm/raspi.c
19
+++ b/linux-user/qemu.h
24
@@ -XXX,XX +XXX,XX @@ static void raspi2_machine_init(MachineClass *mc)
20
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
25
mc->ignore_memory_transaction_failures = true;
21
26
};
22
/* Lock an area of guest memory into the host. If copy is true then the
27
DEFINE_MACHINE("raspi2", raspi2_machine_init)
23
host area will have the same contents as the guest. */
28
+
24
-static inline void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
29
+#ifdef TARGET_AARCH64
25
-{
30
+static void raspi3_init(MachineState *machine)
26
- if (!access_ok_untagged(type, guest_addr, len)) {
27
- return NULL;
28
- }
29
-#ifdef DEBUG_REMAP
30
- {
31
- void *addr;
32
- addr = g_malloc(len);
33
- if (copy)
34
- memcpy(addr, g2h(guest_addr), len);
35
- else
36
- memset(addr, 0, len);
37
- return addr;
38
- }
39
-#else
40
- return g2h_untagged(guest_addr);
41
-#endif
42
-}
43
+void *lock_user(int type, abi_ulong guest_addr, long len, int copy);
44
45
/* Unlock an area of guest memory. The first LEN bytes must be
46
flushed back to guest memory. host_ptr = NULL is explicitly
47
allowed and does nothing. */
48
-static inline void unlock_user(void *host_ptr, abi_ulong guest_addr,
49
- long len)
50
-{
51
-
52
#ifdef DEBUG_REMAP
53
- if (!host_ptr)
54
- return;
55
- if (host_ptr == g2h_untagged(guest_addr))
56
- return;
57
- if (len > 0)
58
- memcpy(g2h_untagged(guest_addr), host_ptr, len);
59
- g_free(host_ptr);
60
+static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, long len)
61
+{ }
62
+#else
63
+void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
64
#endif
65
-}
66
67
/* Return the length of a string in target memory or -TARGET_EFAULT if
68
access error. */
69
abi_long target_strlen(abi_ulong gaddr);
70
71
/* Like lock_user but for null terminated strings. */
72
-static inline void *lock_user_string(abi_ulong guest_addr)
73
-{
74
- abi_long len;
75
- len = target_strlen(guest_addr);
76
- if (len < 0)
77
- return NULL;
78
- return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
79
-}
80
+void *lock_user_string(abi_ulong guest_addr);
81
82
/* Helper macros for locking/unlocking a target struct. */
83
#define lock_user_struct(type, host_ptr, guest_addr, copy)    \
84
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/linux-user/uaccess.c
87
+++ b/linux-user/uaccess.c
88
@@ -XXX,XX +XXX,XX @@
89
90
#include "qemu.h"
91
92
+void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
31
+{
93
+{
32
+ raspi_init(machine, 3);
94
+ if (!access_ok_untagged(type, guest_addr, len)) {
95
+ return NULL;
96
+ }
97
+#ifdef DEBUG_REMAP
98
+ {
99
+ void *addr;
100
+ addr = g_malloc(len);
101
+ if (copy) {
102
+ memcpy(addr, g2h(guest_addr), len);
103
+ } else {
104
+ memset(addr, 0, len);
105
+ }
106
+ return addr;
107
+ }
108
+#else
109
+ return g2h_untagged(guest_addr);
110
+#endif
33
+}
111
+}
34
+
112
+
35
+static void raspi3_machine_init(MachineClass *mc)
113
+#ifdef DEBUG_REMAP
114
+void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
36
+{
115
+{
37
+ mc->desc = "Raspberry Pi 3";
116
+ if (!host_ptr) {
38
+ mc->init = raspi3_init;
117
+ return;
39
+ mc->block_default_type = IF_SD;
118
+ }
40
+ mc->no_parallel = 1;
119
+ if (host_ptr == g2h_untagged(guest_addr)) {
41
+ mc->no_floppy = 1;
120
+ return;
42
+ mc->no_cdrom = 1;
121
+ }
43
+ mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a53");
122
+ if (len > 0) {
44
+ mc->max_cpus = BCM2836_NCPUS;
123
+ memcpy(g2h_untagged(guest_addr), host_ptr, len);
45
+ mc->min_cpus = BCM2836_NCPUS;
124
+ }
46
+ mc->default_cpus = BCM2836_NCPUS;
125
+ g_free(host_ptr);
47
+ mc->default_ram_size = 1024 * 1024 * 1024;
48
+}
126
+}
49
+DEFINE_MACHINE("raspi3", raspi3_machine_init)
50
+#endif
127
+#endif
128
+
129
+void *lock_user_string(abi_ulong guest_addr)
130
+{
131
+ abi_long len = target_strlen(guest_addr);
132
+ if (len < 0) {
133
+ return NULL;
134
+ }
135
+ return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
136
+}
137
+
138
/* copy_from_user() and copy_to_user() are usually used to copy data
139
* buffers between the target and host. These internally perform
140
* locking/unlocking of the memory.
51
--
141
--
52
2.16.1
142
2.20.1
53
143
54
144
diff view generated by jsdifflib
1
From: Pekka Enberg <penberg@iki.fi>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This patch adds Raspberry Pi 3 support to hw/arm/raspi.c. The
3
For copy_*_user, only 0 and -TARGET_EFAULT are returned; no need
4
differences to Pi 2 are:
4
to involve abi_long. Use size_t for lengths. Use bool for the
5
lock_user copy argument. Use ssize_t for target_strlen, because
6
we can't overflow the host memory space.
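(Aside, illustrative only.) The practical effect for callers is a plain signed convention: zero or a length on success, -TARGET_EFAULT on failure, and ssize_t is wide enough to hold both. The helper below is a made-up stand-in, not target_strlen() itself.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    #define TARGET_EFAULT 14                      /* illustrative value */

    /* negative = target errno, non-negative = length */
    static ssize_t demo_strlen(const char *s)
    {
        return s ? (ssize_t)strlen(s) : -TARGET_EFAULT;
    }

    int main(void)
    {
        ssize_t len = demo_strlen("some guest string");
        if (len < 0) {
            return 1;                             /* propagate -TARGET_EFAULT */
        }
        printf("len=%zd\n", len);
        return 0;
    }
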
5
7
6
- Firmware address
7
- Board ID
8
- Board revision
9
10
The CPU is different too, but that's going to be configured as part of
11
the machine default CPU when we introduce a new machine type.
12
13
The patch was written from scratch by me but the logic is similar to
14
Zoltán Baldaszti's previous work, which I used as a reference (with
15
permission from the author):
16
17
https://github.com/bztsrc/qemu-raspi3
18
19
Signed-off-by: Pekka Enberg <penberg@iki.fi>
20
[PMM: fixed trailing whitespace on one line]
21
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210210000223.884088-19-richard.henderson@linaro.org
22
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
23
---
12
---
24
hw/arm/raspi.c | 31 +++++++++++++++++++++----------
13
linux-user/qemu.h | 14 ++++++--------
25
1 file changed, 21 insertions(+), 10 deletions(-)
14
linux-user/uaccess.c | 45 ++++++++++++++++++++++----------------------
15
2 files changed, 29 insertions(+), 30 deletions(-)
26
16
27
diff --git a/hw/arm/raspi.c b/hw/arm/raspi.c
17
diff --git a/linux-user/qemu.h b/linux-user/qemu.h
28
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/raspi.c
19
--- a/linux-user/qemu.h
30
+++ b/hw/arm/raspi.c
20
+++ b/linux-user/qemu.h
31
@@ -XXX,XX +XXX,XX @@
21
@@ -XXX,XX +XXX,XX @@
32
* Rasperry Pi 2 emulation Copyright (c) 2015, Microsoft
22
#include "exec/cpu_ldst.h"
33
* Written by Andrew Baumann
23
34
*
24
#undef DEBUG_REMAP
35
+ * Raspberry Pi 3 emulation Copyright (c) 2018 Zoltán Baldaszti
25
-#ifdef DEBUG_REMAP
36
+ * Upstream code cleanup (c) 2018 Pekka Enberg
26
-#endif /* DEBUG_REMAP */
37
+ *
27
38
* This code is licensed under the GNU GPLv2 and later.
28
#include "exec/user/abitypes.h"
29
30
@@ -XXX,XX +XXX,XX @@ static inline bool access_ok(CPUState *cpu, int type,
31
* buffers between the target and host. These internally perform
32
* locking/unlocking of the memory.
39
*/
33
*/
40
34
-abi_long copy_from_user(void *hptr, abi_ulong gaddr, size_t len);
35
-abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
36
+int copy_from_user(void *hptr, abi_ulong gaddr, size_t len);
37
+int copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
38
39
/* Functions for accessing guest memory. The tget and tput functions
40
read/write single values, byteswapping as necessary. The lock_user function
41
@@ -XXX,XX +XXX,XX @@ abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len);
42
43
/* Lock an area of guest memory into the host. If copy is true then the
44
host area will have the same contents as the guest. */
45
-void *lock_user(int type, abi_ulong guest_addr, long len, int copy);
46
+void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy);
47
48
/* Unlock an area of guest memory. The first LEN bytes must be
49
flushed back to guest memory. host_ptr = NULL is explicitly
50
allowed and does nothing. */
51
-#ifdef DEBUG_REMAP
52
-static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, long len)
53
+#ifndef DEBUG_REMAP
54
+static inline void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len)
55
{ }
56
#else
57
void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
58
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
59
60
/* Return the length of a string in target memory or -TARGET_EFAULT if
61
access error. */
62
-abi_long target_strlen(abi_ulong gaddr);
63
+ssize_t target_strlen(abi_ulong gaddr);
64
65
/* Like lock_user but for null terminated strings. */
66
void *lock_user_string(abi_ulong guest_addr);
67
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
68
index XXXXXXX..XXXXXXX 100644
69
--- a/linux-user/uaccess.c
70
+++ b/linux-user/uaccess.c
41
@@ -XXX,XX +XXX,XX @@
71
@@ -XXX,XX +XXX,XX @@
42
#define SMPBOOT_ADDR 0x300 /* this should leave enough space for ATAGS */
72
43
#define MVBAR_ADDR 0x400 /* secure vectors */
73
#include "qemu.h"
44
#define BOARDSETUP_ADDR (MVBAR_ADDR + 0x20) /* board setup code */
74
45
-#define FIRMWARE_ADDR 0x8000 /* Pi loads kernel.img here by default */
75
-void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
46
+#define FIRMWARE_ADDR_2 0x8000 /* Pi 2 loads kernel.img here by default */
76
+void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy)
47
+#define FIRMWARE_ADDR_3 0x80000 /* Pi 3 loads kernel.img here by default */
77
{
48
78
if (!access_ok_untagged(type, guest_addr, len)) {
49
/* Table of Linux board IDs for different Pi versions */
79
return NULL;
50
-static const int raspi_boardid[] = {[1] = 0xc42, [2] = 0xc43};
80
@@ -XXX,XX +XXX,XX @@ void *lock_user(int type, abi_ulong guest_addr, long len, int copy)
51
+static const int raspi_boardid[] = {[1] = 0xc42, [2] = 0xc43, [3] = 0xc44};
52
53
typedef struct RasPiState {
54
BCM2836State soc;
55
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
56
binfo.secure_board_setup = true;
57
binfo.secure_boot = true;
58
59
- /* Pi2 requires SMP setup */
60
- if (version == 2) {
61
+ /* Pi2 and Pi3 requires SMP setup */
62
+ if (version >= 2) {
63
binfo.smp_loader_start = SMPBOOT_ADDR;
64
binfo.write_secondary_boot = write_smpboot;
65
binfo.secondary_cpu_reset_hook = reset_secondary;
66
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
67
* the normal Linux boot process
68
*/
69
if (machine->firmware) {
70
+ hwaddr firmware_addr = version == 3 ? FIRMWARE_ADDR_3 : FIRMWARE_ADDR_2;
71
/* load the firmware image (typically kernel.img) */
72
- r = load_image_targphys(machine->firmware, FIRMWARE_ADDR,
73
- ram_size - FIRMWARE_ADDR);
74
+ r = load_image_targphys(machine->firmware, firmware_addr,
75
+ ram_size - firmware_addr);
76
if (r < 0) {
77
error_report("Failed to load firmware from %s", machine->firmware);
78
exit(1);
79
}
80
81
- binfo.entry = FIRMWARE_ADDR;
82
+ binfo.entry = firmware_addr;
83
binfo.firmware_loaded = true;
84
} else {
85
binfo.kernel_filename = machine->kernel_filename;
86
@@ -XXX,XX +XXX,XX @@ static void setup_boot(MachineState *machine, int version, size_t ram_size)
87
arm_load_kernel(ARM_CPU(first_cpu), &binfo);
88
}
81
}
89
82
90
-static void raspi2_init(MachineState *machine)
83
#ifdef DEBUG_REMAP
91
+static void raspi_init(MachineState *machine, int version)
84
-void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
85
+void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len);
92
{
86
{
93
RasPiState *s = g_new0(RasPiState, 1);
87
if (!host_ptr) {
94
uint32_t vcram_size;
88
return;
95
@@ -XXX,XX +XXX,XX @@ static void raspi2_init(MachineState *machine)
89
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
96
&error_abort);
90
if (host_ptr == g2h_untagged(guest_addr)) {
97
object_property_set_int(OBJECT(&s->soc), smp_cpus, "enabled-cpus",
91
return;
98
&error_abort);
92
}
99
- object_property_set_int(OBJECT(&s->soc), 0xa21041, "board-rev",
93
- if (len > 0) {
100
+ int board_rev = version == 3 ? 0xa02082 : 0xa21041;
94
+ if (len != 0) {
101
+ object_property_set_int(OBJECT(&s->soc), board_rev, "board-rev",
95
memcpy(g2h_untagged(guest_addr), host_ptr, len);
102
&error_abort);
96
}
103
object_property_set_bool(OBJECT(&s->soc), true, "realized", &error_abort);
97
g_free(host_ptr);
104
98
@@ -XXX,XX +XXX,XX @@ void unlock_user(void *host_ptr, abi_ulong guest_addr, long len);
105
@@ -XXX,XX +XXX,XX @@ static void raspi2_init(MachineState *machine)
99
106
100
void *lock_user_string(abi_ulong guest_addr)
107
vcram_size = object_property_get_uint(OBJECT(&s->soc), "vcram-size",
101
{
108
&error_abort);
102
- abi_long len = target_strlen(guest_addr);
109
- setup_boot(machine, 2, machine->ram_size - vcram_size);
103
+ ssize_t len = target_strlen(guest_addr);
110
+ setup_boot(machine, version, machine->ram_size - vcram_size);
104
if (len < 0) {
111
+}
105
return NULL;
112
+
106
}
113
+static void raspi2_init(MachineState *machine)
107
- return lock_user(VERIFY_READ, guest_addr, (long)(len + 1), 1);
114
+{
108
+ return lock_user(VERIFY_READ, guest_addr, (size_t)len + 1, 1);
115
+ raspi_init(machine, 2);
116
}
109
}
117
110
118
static void raspi2_machine_init(MachineClass *mc)
111
/* copy_from_user() and copy_to_user() are usually used to copy data
112
* buffers between the target and host. These internally perform
113
* locking/unlocking of the memory.
114
*/
115
-abi_long copy_from_user(void *hptr, abi_ulong gaddr, size_t len)
116
+int copy_from_user(void *hptr, abi_ulong gaddr, size_t len)
117
{
118
- abi_long ret = 0;
119
- void *ghptr;
120
+ int ret = 0;
121
+ void *ghptr = lock_user(VERIFY_READ, gaddr, len, 1);
122
123
- if ((ghptr = lock_user(VERIFY_READ, gaddr, len, 1))) {
124
+ if (ghptr) {
125
memcpy(hptr, ghptr, len);
126
unlock_user(ghptr, gaddr, 0);
127
- } else
128
+ } else {
129
ret = -TARGET_EFAULT;
130
-
131
+ }
132
return ret;
133
}
134
135
-
136
-abi_long copy_to_user(abi_ulong gaddr, void *hptr, size_t len)
137
+int copy_to_user(abi_ulong gaddr, void *hptr, size_t len)
138
{
139
- abi_long ret = 0;
140
- void *ghptr;
141
+ int ret = 0;
142
+ void *ghptr = lock_user(VERIFY_WRITE, gaddr, len, 0);
143
144
- if ((ghptr = lock_user(VERIFY_WRITE, gaddr, len, 0))) {
145
+ if (ghptr) {
146
memcpy(ghptr, hptr, len);
147
unlock_user(ghptr, gaddr, len);
148
- } else
149
+ } else {
150
ret = -TARGET_EFAULT;
151
+ }
152
153
return ret;
154
}
155
156
/* Return the length of a string in target memory or -TARGET_EFAULT if
157
access error */
158
-abi_long target_strlen(abi_ulong guest_addr1)
159
+ssize_t target_strlen(abi_ulong guest_addr1)
160
{
161
uint8_t *ptr;
162
abi_ulong guest_addr;
163
- int max_len, len;
164
+ size_t max_len, len;
165
166
guest_addr = guest_addr1;
167
for(;;) {
168
@@ -XXX,XX +XXX,XX @@ abi_long target_strlen(abi_ulong guest_addr1)
169
unlock_user(ptr, guest_addr, 0);
170
guest_addr += len;
171
/* we don't allow wrapping or integer overflow */
172
- if (guest_addr == 0 ||
173
- (guest_addr - guest_addr1) > 0x7fffffff)
174
+ if (guest_addr == 0 || (guest_addr - guest_addr1) > 0x7fffffff) {
175
return -TARGET_EFAULT;
176
- if (len != max_len)
177
+ }
178
+ if (len != max_len) {
179
break;
180
+ }
181
}
182
return guest_addr - guest_addr1;
183
}
119
--
184
--
120
2.16.1
185
2.20.1
121
186
122
187
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Nothing in either register affects the TB.
3
Resolve the untagged address once, using thread_cpu.
4
Tidy the DEBUG_REMAP code using glib routines.
4
5
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20180211205848.4568-4-richard.henderson@linaro.org
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20210210000223.884088-20-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
---
10
target/arm/helper.c | 4 ++--
11
linux-user/uaccess.c | 27 ++++++++++++++-------------
11
1 file changed, 2 insertions(+), 2 deletions(-)
12
1 file changed, 14 insertions(+), 13 deletions(-)
12
13
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
--- a/linux-user/uaccess.c
16
+++ b/target/arm/helper.c
17
+++ b/linux-user/uaccess.c
17
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
18
@@ -XXX,XX +XXX,XX @@
18
.writefn = aa64_daif_write, .resetfn = arm_cp_reset_ignore },
19
19
{ .name = "FPCR", .state = ARM_CP_STATE_AA64,
20
void *lock_user(int type, abi_ulong guest_addr, size_t len, bool copy)
20
.opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 4,
21
{
21
- .access = PL0_RW, .type = ARM_CP_FPU,
22
+ void *host_addr;
22
+ .access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END,
23
+
23
.readfn = aa64_fpcr_read, .writefn = aa64_fpcr_write },
24
+ guest_addr = cpu_untagged_addr(thread_cpu, guest_addr);
24
{ .name = "FPSR", .state = ARM_CP_STATE_AA64,
25
if (!access_ok_untagged(type, guest_addr, len)) {
25
.opc0 = 3, .opc1 = 3, .opc2 = 1, .crn = 4, .crm = 4,
26
return NULL;
26
- .access = PL0_RW, .type = ARM_CP_FPU,
27
}
27
+ .access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END,
28
+ host_addr = g2h_untagged(guest_addr);
28
.readfn = aa64_fpsr_read, .writefn = aa64_fpsr_write },
29
#ifdef DEBUG_REMAP
29
{ .name = "DCZID_EL0", .state = ARM_CP_STATE_AA64,
30
- {
30
.opc0 = 3, .opc1 = 3, .opc2 = 7, .crn = 0, .crm = 0,
31
- void *addr;
32
- addr = g_malloc(len);
33
- if (copy) {
34
- memcpy(addr, g2h(guest_addr), len);
35
- } else {
36
- memset(addr, 0, len);
37
- }
38
- return addr;
39
+ if (copy) {
40
+ host_addr = g_memdup(host_addr, len);
41
+ } else {
42
+ host_addr = g_malloc0(len);
43
}
44
-#else
45
- return g2h_untagged(guest_addr);
46
#endif
47
+ return host_addr;
48
}
49
50
#ifdef DEBUG_REMAP
51
void unlock_user(void *host_ptr, abi_ulong guest_addr, size_t len);
52
{
53
+ void *host_ptr_conv;
54
+
55
if (!host_ptr) {
56
return;
57
}
58
- if (host_ptr == g2h_untagged(guest_addr)) {
59
+ host_ptr_conv = g2h(thread_cpu, guest_addr);
60
+ if (host_ptr == host_ptr_conv) {
61
return;
62
}
63
if (len != 0) {
64
- memcpy(g2h_untagged(guest_addr), host_ptr, len);
65
+ memcpy(host_ptr_conv, host_ptr, len);
66
}
67
g_free(host_ptr);
68
}
31
--
69
--
32
2.16.1
70
2.20.1
33
71
34
72
diff view generated by jsdifflib
1
We were previously making the system control register (SCR)
1
From: Richard Henderson <richard.henderson@linaro.org>
2
just RAZ/WI. Although we don't implement the functionality
3
this register controls, we should at least provide the state,
4
including the banked state for v8M.
5
2
3
This is the prctl bit that controls whether syscalls accept tagged
4
addresses. See Documentation/arm64/tagged-address-abi.rst in the
5
linux kernel.
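For reference, this is how a guest program opts in from user space; the prctl numbers below match the TARGET_* values added in this patch, and the program only succeeds on an AArch64 Linux kernel with the tagged-address ABI (or under qemu-aarch64 with this series).

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_TAGGED_ADDR_CTRL
    #define PR_SET_TAGGED_ADDR_CTRL 55
    #define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
    #endif

    int main(void)
    {
        if (prctl(PR_SET_TAGGED_ADDR_CTRL, PR_TAGGED_ADDR_ENABLE, 0, 0, 0)) {
            perror("PR_SET_TAGGED_ADDR_CTRL");
            return 1;
        }
        /* Syscalls now accept pointers with a non-zero top byte: */
        char *p = malloc(64);
        if (!p) {
            return 1;
        }
        strcpy(p, "hello\n");
        char *tagged = (char *)((uintptr_t)p | (0x2aUL << 56));
        if (write(STDOUT_FILENO, tagged, 6) < 0) {  /* kernel untags this */
            perror("write");
        }
        free(p);
        return 0;
    }
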
6
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210210000223.884088-21-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180209165810.6668-7-peter.maydell@linaro.org
9
---
11
---
10
target/arm/cpu.h | 7 +++++++
12
linux-user/aarch64/target_syscall.h | 4 ++++
11
hw/intc/armv7m_nvic.c | 12 ++++++++----
13
target/arm/cpu-param.h | 3 +++
12
target/arm/machine.c | 12 ++++++++++++
14
target/arm/cpu.h | 31 +++++++++++++++++++++++++++++
13
3 files changed, 27 insertions(+), 4 deletions(-)
15
linux-user/syscall.c | 24 ++++++++++++++++++++++
16
4 files changed, 62 insertions(+)
14
17
18
diff --git a/linux-user/aarch64/target_syscall.h b/linux-user/aarch64/target_syscall.h
19
index XXXXXXX..XXXXXXX 100644
20
--- a/linux-user/aarch64/target_syscall.h
21
+++ b/linux-user/aarch64/target_syscall.h
22
@@ -XXX,XX +XXX,XX @@ struct target_pt_regs {
23
# define TARGET_PR_PAC_APDBKEY (1 << 3)
24
# define TARGET_PR_PAC_APGAKEY (1 << 4)
25
26
+#define TARGET_PR_SET_TAGGED_ADDR_CTRL 55
27
+#define TARGET_PR_GET_TAGGED_ADDR_CTRL 56
28
+# define TARGET_PR_TAGGED_ADDR_ENABLE (1UL << 0)
29
+
30
#endif /* AARCH64_TARGET_SYSCALL_H */
31
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/cpu-param.h
34
+++ b/target/arm/cpu-param.h
35
@@ -XXX,XX +XXX,XX @@
36
37
#ifdef CONFIG_USER_ONLY
38
#define TARGET_PAGE_BITS 12
39
+# ifdef TARGET_AARCH64
40
+# define TARGET_TAGGED_ADDRESSES
41
+# endif
42
#else
43
/*
44
* ARMv7 and later CPUs have 4K pages minimum, but ARMv5 and v6
15
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
45
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
index XXXXXXX..XXXXXXX 100644
46
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/cpu.h
47
--- a/target/arm/cpu.h
18
+++ b/target/arm/cpu.h
48
+++ b/target/arm/cpu.h
19
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
49
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
20
uint32_t aircr; /* only holds r/w state if security extn implemented */
50
const struct arm_boot_info *boot_info;
21
uint32_t secure; /* Is CPU in Secure state? (not guest visible) */
51
/* Store GICv3CPUState to access from this struct */
22
uint32_t csselr[M_REG_NUM_BANKS];
52
void *gicv3state;
23
+ uint32_t scr[M_REG_NUM_BANKS];
24
} v7m;
25
26
/* Information associated with an exception about to be taken:
27
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_CCR, STKALIGN, 9, 1)
28
FIELD(V7M_CCR, DC, 16, 1)
29
FIELD(V7M_CCR, IC, 17, 1)
30
31
+/* V7M SCR bits */
32
+FIELD(V7M_SCR, SLEEPONEXIT, 1, 1)
33
+FIELD(V7M_SCR, SLEEPDEEP, 2, 1)
34
+FIELD(V7M_SCR, SLEEPDEEPS, 3, 1)
35
+FIELD(V7M_SCR, SEVONPEND, 4, 1)
36
+
53
+
37
/* V7M AIRCR bits */
54
+#ifdef TARGET_TAGGED_ADDRESSES
38
FIELD(V7M_AIRCR, VECTRESET, 0, 1)
55
+ /* Linux syscall tagged address support */
39
FIELD(V7M_AIRCR, VECTCLRACTIVE, 1, 1)
56
+ bool tagged_addr_enable;
40
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
57
+#endif
58
} CPUARMState;
59
60
static inline void set_feature(CPUARMState *env, int feature)
61
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
62
*/
63
#define PAGE_BTI PAGE_TARGET_1
64
65
+#ifdef TARGET_TAGGED_ADDRESSES
66
+/**
67
+ * cpu_untagged_addr:
68
+ * @cs: CPU context
69
+ * @x: tagged address
70
+ *
71
+ * Remove any address tag from @x. This is explicitly related to the
72
+ * linux syscall TIF_TAGGED_ADDR setting, not TBI in general.
73
+ *
74
+ * There should be a better place to put this, but we need this in
75
+ * include/exec/cpu_ldst.h, and not some place linux-user specific.
76
+ */
77
+static inline target_ulong cpu_untagged_addr(CPUState *cs, target_ulong x)
78
+{
79
+ ARMCPU *cpu = ARM_CPU(cs);
80
+ if (cpu->env.tagged_addr_enable) {
81
+ /*
82
+ * TBI is enabled for userspace but not kernelspace addresses.
83
+ * Only clear the tag if bit 55 is clear.
84
+ */
85
+ x &= sextract64(x, 0, 56);
86
+ }
87
+ return x;
88
+}
89
+#endif
90
+
91
/*
92
* Naming convention for isar_feature functions:
93
* Functions which test 32-bit ID registers should have _aa32_ in
94
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
41
index XXXXXXX..XXXXXXX 100644
95
index XXXXXXX..XXXXXXX 100644
42
--- a/hw/intc/armv7m_nvic.c
96
--- a/linux-user/syscall.c
43
+++ b/hw/intc/armv7m_nvic.c
97
+++ b/linux-user/syscall.c
44
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
98
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
45
}
99
}
46
return val;
100
}
47
case 0xd10: /* System Control. */
101
return -TARGET_EINVAL;
48
- /* TODO: Implement SLEEPONEXIT. */
102
+ case TARGET_PR_SET_TAGGED_ADDR_CTRL:
49
- return 0;
103
+ {
50
+ return cpu->env.v7m.scr[attrs.secure];
104
+ abi_ulong valid_mask = TARGET_PR_TAGGED_ADDR_ENABLE;
51
case 0xd14: /* Configuration Control. */
105
+ CPUARMState *env = cpu_env;
52
/* The BFHFNMIGN bit is the only non-banked bit; we
53
* keep it in the non-secure copy of the register.
54
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
55
}
56
break;
57
case 0xd10: /* System Control. */
58
- /* TODO: Implement control registers. */
59
- qemu_log_mask(LOG_UNIMP, "NVIC: SCR unimplemented\n");
60
+ /* We don't implement deep-sleep so these bits are RAZ/WI.
61
+ * The other bits in the register are banked.
62
+ * QEMU's implementation ignores SEVONPEND and SLEEPONEXIT, which
63
+ * is architecturally permitted.
64
+ */
65
+ value &= ~(R_V7M_SCR_SLEEPDEEP_MASK | R_V7M_SCR_SLEEPDEEPS_MASK);
66
+ cpu->env.v7m.scr[attrs.secure] = value;
67
break;
68
case 0xd14: /* Configuration Control. */
69
/* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
70
diff --git a/target/arm/machine.c b/target/arm/machine.c
71
index XXXXXXX..XXXXXXX 100644
72
--- a/target/arm/machine.c
73
+++ b/target/arm/machine.c
74
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_csselr = {
75
}
76
};
77
78
+static const VMStateDescription vmstate_m_scr = {
79
+ .name = "cpu/m/scr",
80
+ .version_id = 1,
81
+ .minimum_version_id = 1,
82
+ .fields = (VMStateField[]) {
83
+ VMSTATE_UINT32(env.v7m.scr[M_REG_NS], ARMCPU),
84
+ VMSTATE_END_OF_LIST()
85
+ }
86
+};
87
+
106
+
88
static const VMStateDescription vmstate_m = {
107
+ if ((arg2 & ~valid_mask) || arg3 || arg4 || arg5) {
89
.name = "cpu/m",
108
+ return -TARGET_EINVAL;
90
.version_id = 4,
109
+ }
91
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m = {
110
+ env->tagged_addr_enable = arg2 & TARGET_PR_TAGGED_ADDR_ENABLE;
92
.subsections = (const VMStateDescription*[]) {
111
+ return 0;
93
&vmstate_m_faultmask_primask,
112
+ }
94
&vmstate_m_csselr,
113
+ case TARGET_PR_GET_TAGGED_ADDR_CTRL:
95
+ &vmstate_m_scr,
114
+ {
96
NULL
115
+ abi_long ret = 0;
97
}
116
+ CPUARMState *env = cpu_env;
98
};
117
+
99
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_m_security = {
118
+ if (arg2 || arg3 || arg4 || arg5) {
100
VMSTATE_UINT32(env.sau.rnr, ARMCPU),
119
+ return -TARGET_EINVAL;
101
VMSTATE_VALIDATE("SAU_RNR is valid", sau_rnr_vmstate_validate),
120
+ }
102
VMSTATE_UINT32(env.sau.ctrl, ARMCPU),
121
+ if (env->tagged_addr_enable) {
103
+ VMSTATE_UINT32(env.v7m.scr[M_REG_S], ARMCPU),
122
+ ret |= TARGET_PR_TAGGED_ADDR_ENABLE;
104
VMSTATE_END_OF_LIST()
123
+ }
105
}
124
+ return ret;
106
};
125
+ }
126
#endif /* AARCH64 */
127
case PR_GET_SECCOMP:
128
case PR_SET_SECCOMP:
107
--
129
--
108
2.16.1
130
2.20.1
109
131
110
132
diff view generated by jsdifflib
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
When storing to an AdvSIMD FP register, all of the high
3
Use simple arithmetic instead of a conditional
4
bits of the SVE register are zeroed. Therefore, call it
4
move when tbi0 != tbi1.
5
more often with is_q as a parameter.
6
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20180211205848.4568-6-richard.henderson@linaro.org
8
Message-id: 20210210000223.884088-22-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
10
---
12
target/arm/translate-a64.c | 162 +++++++++++++++++----------------------------
11
target/arm/translate-a64.c | 25 ++++++++++++++-----------
13
1 file changed, 62 insertions(+), 100 deletions(-)
12
1 file changed, 14 insertions(+), 11 deletions(-)
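The equivalence relied on here: after the sextract, dst differs from src only in bits 63:56 (all-zero when bit 55 of src is clear, all-one when it is set), so AND applies the cleared form only to bit-55-clear addresses and OR applies the set form only to bit-55-set addresses. A quick host-side check with plain C standing in for the TCG ops; the constants are arbitrary and an arithmetic right shift is assumed.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* dst = sextract64(src, 0, 56), as gen_top_byte_ignore() computes */
    static uint64_t sext55(uint64_t src)
    {
        return (uint64_t)(((int64_t)(src << 8)) >> 8);
    }

    int main(void)
    {
        uint64_t lo = 0x2a00007fdeadbeefULL;   /* bit 55 clear */
        uint64_t hi = 0x2aff8000deadbeefULL;   /* bit 55 set */

        /* tbi == 1 (TBI0 only): AND leaves bit-55-set addresses alone */
        printf("0x%016" PRIx64 "\n", sext55(lo) & lo);   /* top byte cleared */
        printf("0x%016" PRIx64 "\n", sext55(hi) & hi);   /* unchanged */

        /* tbi == 2 (TBI1 only): OR leaves bit-55-clear addresses alone */
        printf("0x%016" PRIx64 "\n", sext55(lo) | lo);   /* unchanged */
        printf("0x%016" PRIx64 "\n", sext55(hi) | hi);   /* top byte -> 0xff */
        return 0;
    }
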
14
13
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
16
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/translate-a64.c
16
--- a/target/arm/translate-a64.c
18
+++ b/target/arm/translate-a64.c
17
+++ b/target/arm/translate-a64.c
19
@@ -XXX,XX +XXX,XX @@ static TCGv_i32 read_fp_sreg(DisasContext *s, int reg)
18
@@ -XXX,XX +XXX,XX @@ static void gen_top_byte_ignore(DisasContext *s, TCGv_i64 dst,
20
return v;
19
/* Sign-extend from bit 55. */
21
}
20
tcg_gen_sextract_i64(dst, src, 0, 56);
22
21
23
+/* Clear the bits above an N-bit vector, for N = (is_q ? 128 : 64).
22
- if (tbi != 3) {
24
+ * If SVE is not enabled, then there are only 128 bits in the vector.
23
- TCGv_i64 tcg_zero = tcg_const_i64(0);
25
+ */
26
+static void clear_vec_high(DisasContext *s, bool is_q, int rd)
27
+{
28
+ unsigned ofs = fp_reg_offset(s, rd, MO_64);
29
+ unsigned vsz = vec_full_reg_size(s);
30
+
31
+ if (!is_q) {
32
+ TCGv_i64 tcg_zero = tcg_const_i64(0);
33
+ tcg_gen_st_i64(tcg_zero, cpu_env, ofs + 8);
34
+ tcg_temp_free_i64(tcg_zero);
35
+ }
36
+ if (vsz > 16) {
37
+ tcg_gen_gvec_dup8i(ofs + 16, vsz - 16, vsz - 16, 0);
38
+ }
39
+}
40
+
41
static void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
42
{
43
- TCGv_i64 tcg_zero = tcg_const_i64(0);
44
+ unsigned ofs = fp_reg_offset(s, reg, MO_64);
45
46
- tcg_gen_st_i64(v, cpu_env, fp_reg_offset(s, reg, MO_64));
47
- tcg_gen_st_i64(tcg_zero, cpu_env, fp_reg_hi_offset(s, reg));
48
- tcg_temp_free_i64(tcg_zero);
49
+ tcg_gen_st_i64(v, cpu_env, ofs);
50
+ clear_vec_high(s, false, reg);
51
}
52
53
static void write_fp_sreg(DisasContext *s, int reg, TCGv_i32 v)
54
@@ -XXX,XX +XXX,XX @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
55
56
tcg_temp_free_i64(tmplo);
57
tcg_temp_free_i64(tmphi);
58
+
59
+ clear_vec_high(s, true, destidx);
60
}
61
62
/*
63
@@ -XXX,XX +XXX,XX @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
64
}
65
}
66
67
-/* Clear the high 64 bits of a 128 bit vector (in general non-quad
68
- * vector ops all need to do this).
69
- */
70
-static void clear_vec_high(DisasContext *s, int rd)
71
-{
72
- TCGv_i64 tcg_zero = tcg_const_i64(0);
73
-
24
-
74
- write_vec_element(s, tcg_zero, rd, 1, MO_64);
25
- /*
75
- tcg_temp_free_i64(tcg_zero);
26
- * The two TBI bits differ.
76
-}
27
- * If tbi0, then !tbi1: only use the extension if positive.
77
-
28
- * if !tbi0, then tbi1: only use the extension if negative.
78
/* Store from vector register to memory */
29
- */
79
static void do_vec_st(DisasContext *s, int srcidx, int element,
30
- tcg_gen_movcond_i64(tbi == 1 ? TCG_COND_GE : TCG_COND_LT,
80
TCGv_i64 tcg_addr, int size)
31
- dst, dst, tcg_zero, dst, src);
81
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
32
- tcg_temp_free_i64(tcg_zero);
82
/* For non-quad operations, setting a slice of the low
33
+ switch (tbi) {
83
* 64 bits of the register clears the high 64 bits (in
34
+ case 1:
84
* the ARM ARM pseudocode this is implicit in the fact
35
+ /* tbi0 but !tbi1: only use the extension if positive */
85
- * that 'rval' is a 64 bit wide variable). We optimize
36
+ tcg_gen_and_i64(dst, dst, src);
86
- * by noticing that we only need to do this the first
37
+ break;
87
- * time we touch a register.
38
+ case 2:
88
+ * that 'rval' is a 64 bit wide variable).
39
+ /* !tbi0 but tbi1: only use the extension if negative */
89
+ * For quad operations, we might still need to zero the
40
+ tcg_gen_or_i64(dst, dst, src);
90
+ * high bits of SVE. We optimize by noticing that we only
41
+ break;
91
+ * need to do this the first time we touch a register.
42
+ case 3:
92
*/
43
+ /* tbi0 and tbi1: always use the extension */
93
- if (!is_q && e == 0 && (r == 0 || xs == selem - 1)) {
44
+ break;
94
- clear_vec_high(s, tt);
45
+ default:
95
+ if (e == 0 && (r == 0 || xs == selem - 1)) {
46
+ g_assert_not_reached();
96
+ clear_vec_high(s, is_q, tt);
97
}
98
}
99
tcg_gen_addi_i64(tcg_addr, tcg_addr, ebytes);
100
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
101
write_vec_element(s, tcg_tmp, rt, 0, MO_64);
102
if (is_q) {
103
write_vec_element(s, tcg_tmp, rt, 1, MO_64);
104
- } else {
105
- clear_vec_high(s, rt);
106
}
107
tcg_temp_free_i64(tcg_tmp);
108
+ clear_vec_high(s, is_q, rt);
109
} else {
110
/* Load/store one element per register */
111
if (is_load) {
112
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
113
}
114
115
if (!is_q) {
116
- clear_vec_high(s, rd);
117
write_vec_element(s, tcg_final, rd, 0, MO_64);
118
} else {
119
write_vec_element(s, tcg_final, rd, 1, MO_64);
120
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
121
tcg_temp_free_i64(tcg_rd);
122
tcg_temp_free_i32(tcg_rd_narrowed);
123
tcg_temp_free_i64(tcg_final);
124
- return;
125
+
126
+ clear_vec_high(s, is_q, rd);
127
}
128
129
/* SQSHLU, UQSHL, SQSHL: saturating left shifts */
130
@@ -XXX,XX +XXX,XX @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
131
tcg_temp_free_i64(tcg_op);
132
}
133
tcg_temp_free_i64(tcg_shift);
134
-
135
- if (!is_q) {
136
- clear_vec_high(s, rd);
137
- }
138
+ clear_vec_high(s, is_q, rd);
139
} else {
140
TCGv_i32 tcg_shift = tcg_const_i32(shift);
141
static NeonGenTwoOpEnvFn * const fns[2][2][3] = {
142
@@ -XXX,XX +XXX,XX @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
143
}
144
tcg_temp_free_i32(tcg_shift);
145
146
- if (!is_q && !scalar) {
147
- clear_vec_high(s, rd);
148
+ if (!scalar) {
149
+ clear_vec_high(s, is_q, rd);
150
}
47
}
151
}
48
}
152
}
49
}
153
@@ -XXX,XX +XXX,XX @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
154
}
155
}
156
157
- if (!is_double && elements == 2) {
158
- clear_vec_high(s, rd);
159
- }
160
-
161
tcg_temp_free_i64(tcg_int);
162
tcg_temp_free_ptr(tcg_fpst);
163
tcg_temp_free_i32(tcg_shift);
164
+
165
+ clear_vec_high(s, elements << size == 16, rd);
166
}
167
168
/* UCVTF/SCVTF - Integer to FP conversion */
169
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
170
write_vec_element(s, tcg_op, rd, pass, MO_64);
171
tcg_temp_free_i64(tcg_op);
172
}
173
- if (!is_q) {
174
- clear_vec_high(s, rd);
175
- }
176
+ clear_vec_high(s, is_q, rd);
177
} else {
178
int maxpass = is_scalar ? 1 : is_q ? 4 : 2;
179
for (pass = 0; pass < maxpass; pass++) {
180
@@ -XXX,XX +XXX,XX @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
181
}
182
tcg_temp_free_i32(tcg_op);
183
}
184
- if (!is_q && !is_scalar) {
185
- clear_vec_high(s, rd);
186
+ if (!is_scalar) {
187
+ clear_vec_high(s, is_q, rd);
188
}
189
}
190
191
@@ -XXX,XX +XXX,XX @@ static void handle_3same_float(DisasContext *s, int size, int elements,
192
193
tcg_temp_free_ptr(fpst);
194
195
- if ((elements << size) < 4) {
196
- /* scalar, or non-quad vector op */
197
- clear_vec_high(s, rd);
198
- }
199
+ clear_vec_high(s, elements * (size ? 8 : 4) > 8, rd);
200
}
201
202
/* AdvSIMD scalar three same
203
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
204
}
205
write_vec_element(s, tcg_res, rd, pass, MO_64);
206
}
207
- if (is_scalar) {
208
- clear_vec_high(s, rd);
209
- }
210
-
211
tcg_temp_free_i64(tcg_res);
212
tcg_temp_free_i64(tcg_zero);
213
tcg_temp_free_i64(tcg_op);
214
+
215
+ clear_vec_high(s, !is_scalar, rd);
216
} else {
217
TCGv_i32 tcg_op = tcg_temp_new_i32();
218
TCGv_i32 tcg_zero = tcg_const_i32(0);
219
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
220
tcg_temp_free_i32(tcg_res);
221
tcg_temp_free_i32(tcg_zero);
222
tcg_temp_free_i32(tcg_op);
223
- if (!is_q && !is_scalar) {
224
- clear_vec_high(s, rd);
225
+ if (!is_scalar) {
226
+ clear_vec_high(s, is_q, rd);
227
}
228
}
229
230
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
231
}
232
write_vec_element(s, tcg_res, rd, pass, MO_64);
233
}
234
- if (is_scalar) {
235
- clear_vec_high(s, rd);
236
- }
237
-
238
tcg_temp_free_i64(tcg_res);
239
tcg_temp_free_i64(tcg_op);
240
+ clear_vec_high(s, !is_scalar, rd);
241
} else {
242
TCGv_i32 tcg_op = tcg_temp_new_i32();
243
TCGv_i32 tcg_res = tcg_temp_new_i32();
244
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
245
}
246
tcg_temp_free_i32(tcg_res);
247
tcg_temp_free_i32(tcg_op);
248
- if (!is_q && !is_scalar) {
249
- clear_vec_high(s, rd);
250
+ if (!is_scalar) {
251
+ clear_vec_high(s, is_q, rd);
252
}
253
}
254
tcg_temp_free_ptr(fpst);
255
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
256
write_vec_element_i32(s, tcg_res[pass], rd, destelt + pass, MO_32);
257
tcg_temp_free_i32(tcg_res[pass]);
258
}
259
- if (!is_q) {
260
- clear_vec_high(s, rd);
261
- }
262
+ clear_vec_high(s, is_q, rd);
263
}
264
265
/* Remaining saturating accumulating ops */
266
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
267
}
268
write_vec_element(s, tcg_rd, rd, pass, MO_64);
269
}
270
- if (is_scalar) {
271
- clear_vec_high(s, rd);
272
- }
273
-
274
tcg_temp_free_i64(tcg_rd);
275
tcg_temp_free_i64(tcg_rn);
276
+ clear_vec_high(s, !is_scalar, rd);
277
} else {
278
TCGv_i32 tcg_rn = tcg_temp_new_i32();
279
TCGv_i32 tcg_rd = tcg_temp_new_i32();
280
@@ -XXX,XX +XXX,XX @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
281
}
282
write_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
283
}
284
-
285
- if (!is_q) {
286
- clear_vec_high(s, rd);
287
- }
288
-
289
tcg_temp_free_i32(tcg_rd);
290
tcg_temp_free_i32(tcg_rn);
291
+ clear_vec_high(s, is_q, rd);
292
}
293
}
294
295
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
296
tcg_temp_free_i64(tcg_round);
297
298
done:
299
- if (!is_q) {
300
- clear_vec_high(s, rd);
301
- }
302
+ clear_vec_high(s, is_q, rd);
303
}
304
305
static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
306
@@ -XXX,XX +XXX,XX @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
307
}
308
309
if (!is_q) {
310
- clear_vec_high(s, rd);
311
write_vec_element(s, tcg_final, rd, 0, MO_64);
312
} else {
313
write_vec_element(s, tcg_final, rd, 1, MO_64);
314
}
315
-
316
if (round) {
317
tcg_temp_free_i64(tcg_round);
318
}
319
tcg_temp_free_i64(tcg_rn);
320
tcg_temp_free_i64(tcg_rd);
321
tcg_temp_free_i64(tcg_final);
322
- return;
323
+
324
+ clear_vec_high(s, is_q, rd);
325
}
326
327
328
@@ -XXX,XX +XXX,XX @@ static void handle_3rd_narrowing(DisasContext *s, int is_q, int is_u, int size,
329
write_vec_element_i32(s, tcg_res[pass], rd, pass + part, MO_32);
330
tcg_temp_free_i32(tcg_res[pass]);
331
}
332
- if (!is_q) {
333
- clear_vec_high(s, rd);
334
- }
335
+ clear_vec_high(s, is_q, rd);
336
}
337
338
static void handle_pmull_64(DisasContext *s, int is_q, int rd, int rn, int rm)
339
@@ -XXX,XX +XXX,XX @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
340
write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
341
tcg_temp_free_i32(tcg_res[pass]);
342
}
343
- if (!is_q) {
344
- clear_vec_high(s, rd);
345
- }
346
+ clear_vec_high(s, is_q, rd);
347
}
348
349
if (fpst) {
350
@@ -XXX,XX +XXX,XX @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
351
tcg_temp_free_i32(tcg_op2);
352
}
353
}
354
-
355
- if (!is_q) {
356
- clear_vec_high(s, rd);
357
- }
358
+ clear_vec_high(s, is_q, rd);
359
}
360
361
/* AdvSIMD three same
362
@@ -XXX,XX +XXX,XX @@ static void handle_rev(DisasContext *s, int opcode, bool u,
363
write_vec_element(s, tcg_tmp, rd, i, grp_size);
364
tcg_temp_free_i64(tcg_tmp);
365
}
366
- if (!is_q) {
367
- clear_vec_high(s, rd);
368
- }
369
+ clear_vec_high(s, is_q, rd);
370
} else {
371
int revmask = (1 << grp_size) - 1;
372
int esize = 8 << size;
373
@@ -XXX,XX +XXX,XX @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
374
tcg_temp_free_i32(tcg_op);
375
}
376
}
377
- if (!is_q) {
378
- clear_vec_high(s, rd);
379
- }
380
+ clear_vec_high(s, is_q, rd);
381
382
if (need_rmode) {
383
gen_helper_set_rmode(tcg_rmode, tcg_rmode, cpu_env);
384
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
385
tcg_temp_free_i64(tcg_res);
386
}
387
388
- if (is_scalar) {
389
- clear_vec_high(s, rd);
390
- }
391
-
392
tcg_temp_free_i64(tcg_idx);
393
+ clear_vec_high(s, !is_scalar, rd);
394
} else if (!is_long) {
395
/* 32 bit floating point, or 16 or 32 bit integer.
396
* For the 16 bit scalar case we use the usual Neon helpers and
397
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
398
}
399
400
tcg_temp_free_i32(tcg_idx);
401
-
402
- if (!is_q) {
403
- clear_vec_high(s, rd);
404
- }
405
+ clear_vec_high(s, is_q, rd);
406
} else {
407
/* long ops: 16x16->32 or 32x32->64 */
408
TCGv_i64 tcg_res[2];
409
@@ -XXX,XX +XXX,XX @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
410
}
411
tcg_temp_free_i64(tcg_idx);
412
413
- if (is_scalar) {
414
- clear_vec_high(s, rd);
415
- }
416
+ clear_vec_high(s, !is_scalar, rd);
417
} else {
418
TCGv_i32 tcg_idx = tcg_temp_new_i32();
419
420
--
50
--
421
2.16.1
51
2.20.1
422
52
423
53
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
We were fudging TBI1 enabled to speed up the generated code.
4
Now that we've improved the code generation, remove this.
5
Also, tidy the comment to reflect the current code.
6
7
The pauth test was testing a kernel address (-1) and making
8
incorrect assumptions about TBI1; stick to userland addresses.
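
As a rough illustration (not part of the patch itself), the standalone
helper below mirrors the new "ptr &= sextract64(ptr, 0, 56)" in
useronly_clean_ptr: pointers with bit 55 clear lose their top byte
(TBI0 behaviour), while pointers with bit 55 set keep it (TBI1 off).

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Same arithmetic as "ptr &= sextract64(ptr, 0, 56)". */
static uint64_t clean_ptr(uint64_t ptr)
{
    uint64_t ext = (uint64_t)(((int64_t)(ptr << 8)) >> 8);
    return ptr & ext;
}

int main(void)
{
    /* Userland pointer: the 0x5a tag byte is stripped (TBI0 enabled). */
    printf("%016" PRIx64 "\n", clean_ptr(0x5a00123456789abcull));
    /* Kernel-half pointer: the top byte survives (TBI1 disabled). */
    printf("%016" PRIx64 "\n", clean_ptr(0xf8ffdeadbeef0123ull));
    return 0;
}
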
9
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20210210000223.884088-23-richard.henderson@linaro.org
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
15
target/arm/internals.h | 4 ++--
16
target/arm/cpu.c | 10 +++-------
17
tests/tcg/aarch64/pauth-2.c | 1 -
18
3 files changed, 5 insertions(+), 10 deletions(-)
19
20
diff --git a/target/arm/internals.h b/target/arm/internals.h
21
index XXXXXXX..XXXXXXX 100644
22
--- a/target/arm/internals.h
23
+++ b/target/arm/internals.h
24
@@ -XXX,XX +XXX,XX @@ static inline bool tcma_check(uint32_t desc, int bit55, int ptr_tag)
25
*/
26
static inline uint64_t useronly_clean_ptr(uint64_t ptr)
27
{
28
- /* TBI is known to be enabled. */
29
#ifdef CONFIG_USER_ONLY
30
- ptr = sextract64(ptr, 0, 56);
31
+ /* TBI0 is known to be enabled, while TBI1 is disabled. */
32
+ ptr &= sextract64(ptr, 0, 56);
33
#endif
34
return ptr;
35
}
36
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/cpu.c
39
+++ b/target/arm/cpu.c
40
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
41
env->vfp.zcr_el[1] = MIN(cpu->sve_max_vq - 1, 3);
42
}
43
/*
44
- * Enable TBI0 and TBI1. While the real kernel only enables TBI0,
45
- * turning on both here will produce smaller code and otherwise
46
- * make no difference to the user-level emulation.
47
- *
48
- * In sve_probe_page, we assume that this is set.
49
- * Do not modify this without other changes.
50
+ * Enable TBI0 but not TBI1.
51
+ * Note that this must match useronly_clean_ptr.
52
*/
53
- env->cp15.tcr_el[1].raw_tcr = (3ULL << 37);
54
+ env->cp15.tcr_el[1].raw_tcr = (1ULL << 37);
55
#else
56
/* Reset into the highest available EL */
57
if (arm_feature(env, ARM_FEATURE_EL3)) {
58
diff --git a/tests/tcg/aarch64/pauth-2.c b/tests/tcg/aarch64/pauth-2.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/tests/tcg/aarch64/pauth-2.c
61
+++ b/tests/tcg/aarch64/pauth-2.c
62
@@ -XXX,XX +XXX,XX @@ void do_test(uint64_t value)
63
int main()
64
{
65
do_test(0);
66
- do_test(-1);
67
do_test(0xda004acedeadbeefull);
68
return 0;
69
}
70
--
71
2.20.1
72
73
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
These prctl fields are required for the function of MTE.
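
As a userspace sketch (not from this patch), this is roughly how a guest
binary can now drive the interface; the PR_* names mirror the TARGET_PR_*
values below and the Linux uapi, with fallback defines in case the libc
headers predate MTE.

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL 55
#endif
#ifndef PR_TAGGED_ADDR_ENABLE
#define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
#define PR_MTE_TCF_SYNC         (1UL << 1)
#endif
#ifndef PR_MTE_TAG_SHIFT
#define PR_MTE_TAG_SHIFT        3
#endif

int main(void)
{
    /*
     * Enable tagged addresses and synchronous tag check faults, and
     * include tags 1..15 for IRG; the include mask is inverted into
     * the GCR_EL1 exclude field by the code in this patch.
     */
    if (prctl(PR_SET_TAGGED_ADDR_CTRL,
              PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
              (0xfffeUL << PR_MTE_TAG_SHIFT),
              0, 0, 0)) {
        perror("PR_SET_TAGGED_ADDR_CTRL");
        return 1;
    }
    return 0;
}
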
4
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210210000223.884088-24-richard.henderson@linaro.org
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
linux-user/aarch64/target_syscall.h | 9 ++++++
11
linux-user/syscall.c | 43 +++++++++++++++++++++++++++++
12
2 files changed, 52 insertions(+)
13
14
diff --git a/linux-user/aarch64/target_syscall.h b/linux-user/aarch64/target_syscall.h
15
index XXXXXXX..XXXXXXX 100644
16
--- a/linux-user/aarch64/target_syscall.h
17
+++ b/linux-user/aarch64/target_syscall.h
18
@@ -XXX,XX +XXX,XX @@ struct target_pt_regs {
19
#define TARGET_PR_SET_TAGGED_ADDR_CTRL 55
20
#define TARGET_PR_GET_TAGGED_ADDR_CTRL 56
21
# define TARGET_PR_TAGGED_ADDR_ENABLE (1UL << 0)
22
+/* MTE tag check fault modes */
23
+# define TARGET_PR_MTE_TCF_SHIFT 1
24
+# define TARGET_PR_MTE_TCF_NONE (0UL << TARGET_PR_MTE_TCF_SHIFT)
25
+# define TARGET_PR_MTE_TCF_SYNC (1UL << TARGET_PR_MTE_TCF_SHIFT)
26
+# define TARGET_PR_MTE_TCF_ASYNC (2UL << TARGET_PR_MTE_TCF_SHIFT)
27
+# define TARGET_PR_MTE_TCF_MASK (3UL << TARGET_PR_MTE_TCF_SHIFT)
28
+/* MTE tag inclusion mask */
29
+# define TARGET_PR_MTE_TAG_SHIFT 3
30
+# define TARGET_PR_MTE_TAG_MASK (0xffffUL << TARGET_PR_MTE_TAG_SHIFT)
31
32
#endif /* AARCH64_TARGET_SYSCALL_H */
33
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/linux-user/syscall.c
36
+++ b/linux-user/syscall.c
37
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
38
{
39
abi_ulong valid_mask = TARGET_PR_TAGGED_ADDR_ENABLE;
40
CPUARMState *env = cpu_env;
41
+ ARMCPU *cpu = env_archcpu(env);
42
+
43
+ if (cpu_isar_feature(aa64_mte, cpu)) {
44
+ valid_mask |= TARGET_PR_MTE_TCF_MASK;
45
+ valid_mask |= TARGET_PR_MTE_TAG_MASK;
46
+ }
47
48
if ((arg2 & ~valid_mask) || arg3 || arg4 || arg5) {
49
return -TARGET_EINVAL;
50
}
51
env->tagged_addr_enable = arg2 & TARGET_PR_TAGGED_ADDR_ENABLE;
52
+
53
+ if (cpu_isar_feature(aa64_mte, cpu)) {
54
+ switch (arg2 & TARGET_PR_MTE_TCF_MASK) {
55
+ case TARGET_PR_MTE_TCF_NONE:
56
+ case TARGET_PR_MTE_TCF_SYNC:
57
+ case TARGET_PR_MTE_TCF_ASYNC:
58
+ break;
59
+ default:
60
+ return -EINVAL;
61
+ }
62
+
63
+ /*
64
+ * Write PR_MTE_TCF to SCTLR_EL1[TCF0].
65
+ * Note that the syscall values are consistent with hw.
66
+ */
67
+ env->cp15.sctlr_el[1] =
68
+ deposit64(env->cp15.sctlr_el[1], 38, 2,
69
+ arg2 >> TARGET_PR_MTE_TCF_SHIFT);
70
+
71
+ /*
72
+ * Write PR_MTE_TAG to GCR_EL1[Exclude].
73
+ * Note that the syscall uses an include mask,
74
+ * and hardware uses an exclude mask -- invert.
75
+ */
76
+ env->cp15.gcr_el1 =
77
+ deposit64(env->cp15.gcr_el1, 0, 16,
78
+ ~arg2 >> TARGET_PR_MTE_TAG_SHIFT);
79
+ arm_rebuild_hflags(env);
80
+ }
81
return 0;
82
}
83
case TARGET_PR_GET_TAGGED_ADDR_CTRL:
84
{
85
abi_long ret = 0;
86
CPUARMState *env = cpu_env;
87
+ ARMCPU *cpu = env_archcpu(env);
88
89
if (arg2 || arg3 || arg4 || arg5) {
90
return -TARGET_EINVAL;
91
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
92
if (env->tagged_addr_enable) {
93
ret |= TARGET_PR_TAGGED_ADDR_ENABLE;
94
}
95
+ if (cpu_isar_feature(aa64_mte, cpu)) {
96
+ /* See above. */
97
+ ret |= (extract64(env->cp15.sctlr_el[1], 38, 2)
98
+ << TARGET_PR_MTE_TCF_SHIFT);
99
+ ret = deposit64(ret, TARGET_PR_MTE_TAG_SHIFT, 16,
100
+ ~env->cp15.gcr_el1);
101
+ }
102
return ret;
103
}
104
#endif /* AARCH64 */
105
--
106
2.20.1
107
108
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Remember the PROT_MTE bit as PAGE_MTE/PAGE_TARGET_2.
4
Otherwise this does not yet have effect.
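
For illustration only (not part of the patch), the guest-side request that
ends up recorded as PAGE_MTE looks like this; PROT_MTE is the same 0x20
value as TARGET_PROT_MTE below, with a fallback define for older headers.

#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20
#endif

int main(void)
{
    /* Only anonymous mappings with PROT_MTE get tag storage
     * (wired up by later patches in this series). */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(PROT_MTE)");
        return 1;
    }
    return 0;
}
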
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-25-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
include/exec/cpu-all.h | 1 +
12
linux-user/syscall_defs.h | 1 +
13
target/arm/cpu.h | 1 +
14
linux-user/mmap.c | 22 ++++++++++++++--------
15
4 files changed, 17 insertions(+), 8 deletions(-)
16
17
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/include/exec/cpu-all.h
20
+++ b/include/exec/cpu-all.h
21
@@ -XXX,XX +XXX,XX @@ extern intptr_t qemu_host_page_mask;
22
#endif
23
/* Target-specific bits that will be used via page_get_flags(). */
24
#define PAGE_TARGET_1 0x0080
25
+#define PAGE_TARGET_2 0x0200
26
27
#if defined(CONFIG_USER_ONLY)
28
void page_dump(FILE *f);
29
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
30
index XXXXXXX..XXXXXXX 100644
31
--- a/linux-user/syscall_defs.h
32
+++ b/linux-user/syscall_defs.h
33
@@ -XXX,XX +XXX,XX @@ struct target_winsize {
34
35
#ifdef TARGET_AARCH64
36
#define TARGET_PROT_BTI 0x10
37
+#define TARGET_PROT_MTE 0x20
38
#endif
39
40
/* Common */
41
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
42
index XXXXXXX..XXXXXXX 100644
43
--- a/target/arm/cpu.h
44
+++ b/target/arm/cpu.h
45
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
46
* AArch64 usage of the PAGE_TARGET_* bits for linux-user.
47
*/
48
#define PAGE_BTI PAGE_TARGET_1
49
+#define PAGE_MTE PAGE_TARGET_2
50
51
#ifdef TARGET_TAGGED_ADDRESSES
52
/**
53
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/linux-user/mmap.c
56
+++ b/linux-user/mmap.c
57
@@ -XXX,XX +XXX,XX @@ static int validate_prot_to_pageflags(int *host_prot, int prot)
58
| (prot & PROT_EXEC ? PROT_READ : 0);
59
60
#ifdef TARGET_AARCH64
61
- /*
62
- * The PROT_BTI bit is only accepted if the cpu supports the feature.
63
- * Since this is the unusual case, don't bother checking unless
64
- * the bit has been requested. If set and valid, record the bit
65
- * within QEMU's page_flags.
66
- */
67
- if (prot & TARGET_PROT_BTI) {
68
+ {
69
ARMCPU *cpu = ARM_CPU(thread_cpu);
70
- if (cpu_isar_feature(aa64_bti, cpu)) {
71
+
72
+ /*
73
+ * The PROT_BTI bit is only accepted if the cpu supports the feature.
74
+ * Since this is the unusual case, don't bother checking unless
75
+ * the bit has been requested. If set and valid, record the bit
76
+ * within QEMU's page_flags.
77
+ */
78
+ if ((prot & TARGET_PROT_BTI) && cpu_isar_feature(aa64_bti, cpu)) {
79
valid |= TARGET_PROT_BTI;
80
page_flags |= PAGE_BTI;
81
}
82
+ /* Similarly for the PROT_MTE bit. */
83
+ if ((prot & TARGET_PROT_MTE) && cpu_isar_feature(aa64_mte, cpu)) {
84
+ valid |= TARGET_PROT_MTE;
85
+ page_flags |= PAGE_MTE;
86
+ }
87
}
88
#endif
89
90
--
91
2.20.1
92
93
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
This also makes sure that we get the correct ordering of
3
Move everything related to syndromes to a new file,
4
SVE vs FP exceptions.
4
which can be shared with linux-user.
5
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180211205848.4568-5-richard.henderson@linaro.org
8
Message-id: 20210210000223.884088-26-richard.henderson@linaro.org
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
target/arm/cpu.h | 3 ++-
11
target/arm/internals.h | 245 +-----------------------------------
12
target/arm/internals.h | 6 ++++++
12
target/arm/syndrome.h | 273 +++++++++++++++++++++++++++++++++++++++++
13
target/arm/helper.c | 22 ++++------------------
13
2 files changed, 274 insertions(+), 244 deletions(-)
14
target/arm/translate-a64.c | 16 ++++++++++++++++
14
create mode 100644 target/arm/syndrome.h
15
4 files changed, 28 insertions(+), 19 deletions(-)
16
15
17
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/cpu.h
20
+++ b/target/arm/cpu.h
21
@@ -XXX,XX +XXX,XX @@ static inline uint64_t cpreg_to_kvm_id(uint32_t cpregid)
22
#define ARM_CP_DC_ZVA (ARM_CP_SPECIAL | 0x0500)
23
#define ARM_LAST_SPECIAL ARM_CP_DC_ZVA
24
#define ARM_CP_FPU 0x1000
25
+#define ARM_CP_SVE 0x2000
26
/* Used only as a terminator for ARMCPRegInfo lists */
27
#define ARM_CP_SENTINEL 0xffff
28
/* Mask of only the flag bits in a type field */
29
-#define ARM_CP_FLAG_MASK 0x10ff
30
+#define ARM_CP_FLAG_MASK 0x30ff
31
32
/* Valid values for ARMCPRegInfo state field, indicating which of
33
* the AArch32 and AArch64 execution states this register is visible in.
34
diff --git a/target/arm/internals.h b/target/arm/internals.h
16
diff --git a/target/arm/internals.h b/target/arm/internals.h
35
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/internals.h
18
--- a/target/arm/internals.h
37
+++ b/target/arm/internals.h
19
+++ b/target/arm/internals.h
38
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
20
@@ -XXX,XX +XXX,XX @@
39
EC_AA64_HVC = 0x16,
21
#define TARGET_ARM_INTERNALS_H
40
EC_AA64_SMC = 0x17,
22
41
EC_SYSTEMREGISTERTRAP = 0x18,
23
#include "hw/registerfields.h"
24
+#include "syndrome.h"
25
26
/* register banks for CPU modes */
27
#define BANK_USRSYS 0
28
@@ -XXX,XX +XXX,XX @@ static inline bool extended_addresses_enabled(CPUARMState *env)
29
(arm_feature(env, ARM_FEATURE_LPAE) && (tcr->raw_tcr & TTBCR_EAE));
30
}
31
32
-/* Valid Syndrome Register EC field values */
33
-enum arm_exception_class {
34
- EC_UNCATEGORIZED = 0x00,
35
- EC_WFX_TRAP = 0x01,
36
- EC_CP15RTTRAP = 0x03,
37
- EC_CP15RRTTRAP = 0x04,
38
- EC_CP14RTTRAP = 0x05,
39
- EC_CP14DTTRAP = 0x06,
40
- EC_ADVSIMDFPACCESSTRAP = 0x07,
41
- EC_FPIDTRAP = 0x08,
42
- EC_PACTRAP = 0x09,
43
- EC_CP14RRTTRAP = 0x0c,
44
- EC_BTITRAP = 0x0d,
45
- EC_ILLEGALSTATE = 0x0e,
46
- EC_AA32_SVC = 0x11,
47
- EC_AA32_HVC = 0x12,
48
- EC_AA32_SMC = 0x13,
49
- EC_AA64_SVC = 0x15,
50
- EC_AA64_HVC = 0x16,
51
- EC_AA64_SMC = 0x17,
52
- EC_SYSTEMREGISTERTRAP = 0x18,
53
- EC_SVEACCESSTRAP = 0x19,
54
- EC_INSNABORT = 0x20,
55
- EC_INSNABORT_SAME_EL = 0x21,
56
- EC_PCALIGNMENT = 0x22,
57
- EC_DATAABORT = 0x24,
58
- EC_DATAABORT_SAME_EL = 0x25,
59
- EC_SPALIGNMENT = 0x26,
60
- EC_AA32_FPTRAP = 0x28,
61
- EC_AA64_FPTRAP = 0x2c,
62
- EC_SERROR = 0x2f,
63
- EC_BREAKPOINT = 0x30,
64
- EC_BREAKPOINT_SAME_EL = 0x31,
65
- EC_SOFTWARESTEP = 0x32,
66
- EC_SOFTWARESTEP_SAME_EL = 0x33,
67
- EC_WATCHPOINT = 0x34,
68
- EC_WATCHPOINT_SAME_EL = 0x35,
69
- EC_AA32_BKPT = 0x38,
70
- EC_VECTORCATCH = 0x3a,
71
- EC_AA64_BKPT = 0x3c,
72
-};
73
-
74
-#define ARM_EL_EC_SHIFT 26
75
-#define ARM_EL_IL_SHIFT 25
76
-#define ARM_EL_ISV_SHIFT 24
77
-#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
78
-#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
79
-
80
-static inline uint32_t syn_get_ec(uint32_t syn)
81
-{
82
- return syn >> ARM_EL_EC_SHIFT;
83
-}
84
-
85
-/* Utility functions for constructing various kinds of syndrome value.
86
- * Note that in general we follow the AArch64 syndrome values; in a
87
- * few cases the value in HSR for exceptions taken to AArch32 Hyp
88
- * mode differs slightly, and we fix this up when populating HSR in
89
- * arm_cpu_do_interrupt_aarch32_hyp().
90
- * The exception is FP/SIMD access traps -- these report extra information
91
- * when taking an exception to AArch32. For those we include the extra coproc
92
- * and TA fields, and mask them out when taking the exception to AArch64.
93
- */
94
-static inline uint32_t syn_uncategorized(void)
95
-{
96
- return (EC_UNCATEGORIZED << ARM_EL_EC_SHIFT) | ARM_EL_IL;
97
-}
98
-
99
-static inline uint32_t syn_aa64_svc(uint32_t imm16)
100
-{
101
- return (EC_AA64_SVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
102
-}
103
-
104
-static inline uint32_t syn_aa64_hvc(uint32_t imm16)
105
-{
106
- return (EC_AA64_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
107
-}
108
-
109
-static inline uint32_t syn_aa64_smc(uint32_t imm16)
110
-{
111
- return (EC_AA64_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
112
-}
113
-
114
-static inline uint32_t syn_aa32_svc(uint32_t imm16, bool is_16bit)
115
-{
116
- return (EC_AA32_SVC << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
117
- | (is_16bit ? 0 : ARM_EL_IL);
118
-}
119
-
120
-static inline uint32_t syn_aa32_hvc(uint32_t imm16)
121
-{
122
- return (EC_AA32_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
123
-}
124
-
125
-static inline uint32_t syn_aa32_smc(void)
126
-{
127
- return (EC_AA32_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL;
128
-}
129
-
130
-static inline uint32_t syn_aa64_bkpt(uint32_t imm16)
131
-{
132
- return (EC_AA64_BKPT << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
133
-}
134
-
135
-static inline uint32_t syn_aa32_bkpt(uint32_t imm16, bool is_16bit)
136
-{
137
- return (EC_AA32_BKPT << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
138
- | (is_16bit ? 0 : ARM_EL_IL);
139
-}
140
-
141
-static inline uint32_t syn_aa64_sysregtrap(int op0, int op1, int op2,
142
- int crn, int crm, int rt,
143
- int isread)
144
-{
145
- return (EC_SYSTEMREGISTERTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL
146
- | (op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (rt << 5)
147
- | (crm << 1) | isread;
148
-}
149
-
150
-static inline uint32_t syn_cp14_rt_trap(int cv, int cond, int opc1, int opc2,
151
- int crn, int crm, int rt, int isread,
152
- bool is_16bit)
153
-{
154
- return (EC_CP14RTTRAP << ARM_EL_EC_SHIFT)
155
- | (is_16bit ? 0 : ARM_EL_IL)
156
- | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
157
- | (crn << 10) | (rt << 5) | (crm << 1) | isread;
158
-}
159
-
160
-static inline uint32_t syn_cp15_rt_trap(int cv, int cond, int opc1, int opc2,
161
- int crn, int crm, int rt, int isread,
162
- bool is_16bit)
163
-{
164
- return (EC_CP15RTTRAP << ARM_EL_EC_SHIFT)
165
- | (is_16bit ? 0 : ARM_EL_IL)
166
- | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
167
- | (crn << 10) | (rt << 5) | (crm << 1) | isread;
168
-}
169
-
170
-static inline uint32_t syn_cp14_rrt_trap(int cv, int cond, int opc1, int crm,
171
- int rt, int rt2, int isread,
172
- bool is_16bit)
173
-{
174
- return (EC_CP14RRTTRAP << ARM_EL_EC_SHIFT)
175
- | (is_16bit ? 0 : ARM_EL_IL)
176
- | (cv << 24) | (cond << 20) | (opc1 << 16)
177
- | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
178
-}
179
-
180
-static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
181
- int rt, int rt2, int isread,
182
- bool is_16bit)
183
-{
184
- return (EC_CP15RRTTRAP << ARM_EL_EC_SHIFT)
185
- | (is_16bit ? 0 : ARM_EL_IL)
186
- | (cv << 24) | (cond << 20) | (opc1 << 16)
187
- | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
188
-}
189
-
190
-static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
191
-{
192
- /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
193
- return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
194
- | (is_16bit ? 0 : ARM_EL_IL)
195
- | (cv << 24) | (cond << 20) | 0xa;
196
-}
197
-
198
-static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
199
-{
200
- /* AArch32 SIMD trap: TA == 1 coproc == 0 */
201
- return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
202
- | (is_16bit ? 0 : ARM_EL_IL)
203
- | (cv << 24) | (cond << 20) | (1 << 5);
204
-}
205
-
206
-static inline uint32_t syn_sve_access_trap(void)
207
-{
208
- return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
209
-}
210
-
211
-static inline uint32_t syn_pactrap(void)
212
-{
213
- return EC_PACTRAP << ARM_EL_EC_SHIFT;
214
-}
215
-
216
-static inline uint32_t syn_btitrap(int btype)
217
-{
218
- return (EC_BTITRAP << ARM_EL_EC_SHIFT) | btype;
219
-}
220
-
221
-static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
222
-{
223
- return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
224
- | ARM_EL_IL | (ea << 9) | (s1ptw << 7) | fsc;
225
-}
226
-
227
-static inline uint32_t syn_data_abort_no_iss(int same_el, int fnv,
228
- int ea, int cm, int s1ptw,
229
- int wnr, int fsc)
230
-{
231
- return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
232
- | ARM_EL_IL
233
- | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
234
- | (wnr << 6) | fsc;
235
-}
236
-
237
-static inline uint32_t syn_data_abort_with_iss(int same_el,
238
- int sas, int sse, int srt,
239
- int sf, int ar,
240
- int ea, int cm, int s1ptw,
241
- int wnr, int fsc,
242
- bool is_16bit)
243
-{
244
- return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
245
- | (is_16bit ? 0 : ARM_EL_IL)
246
- | ARM_EL_ISV | (sas << 22) | (sse << 21) | (srt << 16)
247
- | (sf << 15) | (ar << 14)
248
- | (ea << 9) | (cm << 8) | (s1ptw << 7) | (wnr << 6) | fsc;
249
-}
250
-
251
-static inline uint32_t syn_swstep(int same_el, int isv, int ex)
252
-{
253
- return (EC_SOFTWARESTEP << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
254
- | ARM_EL_IL | (isv << 24) | (ex << 6) | 0x22;
255
-}
256
-
257
-static inline uint32_t syn_watchpoint(int same_el, int cm, int wnr)
258
-{
259
- return (EC_WATCHPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
260
- | ARM_EL_IL | (cm << 8) | (wnr << 6) | 0x22;
261
-}
262
-
263
-static inline uint32_t syn_breakpoint(int same_el)
264
-{
265
- return (EC_BREAKPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
266
- | ARM_EL_IL | 0x22;
267
-}
268
-
269
-static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
270
-{
271
- return (EC_WFX_TRAP << ARM_EL_EC_SHIFT) |
272
- (is_16bit ? 0 : (1 << ARM_EL_IL_SHIFT)) |
273
- (cv << 24) | (cond << 20) | ti;
274
-}
275
-
276
/* Update a QEMU watchpoint based on the information the guest has set in the
277
* DBGWCR<n>_EL1 and DBGWVR<n>_EL1 registers.
278
*/
279
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
280
new file mode 100644
281
index XXXXXXX..XXXXXXX
282
--- /dev/null
283
+++ b/target/arm/syndrome.h
284
@@ -XXX,XX +XXX,XX @@
285
+/*
286
+ * QEMU ARM CPU -- syndrome functions and types
287
+ *
288
+ * Copyright (c) 2014 Linaro Ltd
289
+ *
290
+ * This program is free software; you can redistribute it and/or
291
+ * modify it under the terms of the GNU General Public License
292
+ * as published by the Free Software Foundation; either version 2
293
+ * of the License, or (at your option) any later version.
294
+ *
295
+ * This program is distributed in the hope that it will be useful,
296
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
297
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
298
+ * GNU General Public License for more details.
299
+ *
300
+ * You should have received a copy of the GNU General Public License
301
+ * along with this program; if not, see
302
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
303
+ *
304
+ * This header defines functions, types, etc which need to be shared
305
+ * between different source files within target/arm/ but which are
306
+ * private to it and not required by the rest of QEMU.
307
+ */
308
+
309
+#ifndef TARGET_ARM_SYNDROME_H
310
+#define TARGET_ARM_SYNDROME_H
311
+
312
+/* Valid Syndrome Register EC field values */
313
+enum arm_exception_class {
314
+ EC_UNCATEGORIZED = 0x00,
315
+ EC_WFX_TRAP = 0x01,
316
+ EC_CP15RTTRAP = 0x03,
317
+ EC_CP15RRTTRAP = 0x04,
318
+ EC_CP14RTTRAP = 0x05,
319
+ EC_CP14DTTRAP = 0x06,
320
+ EC_ADVSIMDFPACCESSTRAP = 0x07,
321
+ EC_FPIDTRAP = 0x08,
322
+ EC_PACTRAP = 0x09,
323
+ EC_CP14RRTTRAP = 0x0c,
324
+ EC_BTITRAP = 0x0d,
325
+ EC_ILLEGALSTATE = 0x0e,
326
+ EC_AA32_SVC = 0x11,
327
+ EC_AA32_HVC = 0x12,
328
+ EC_AA32_SMC = 0x13,
329
+ EC_AA64_SVC = 0x15,
330
+ EC_AA64_HVC = 0x16,
331
+ EC_AA64_SMC = 0x17,
332
+ EC_SYSTEMREGISTERTRAP = 0x18,
42
+ EC_SVEACCESSTRAP = 0x19,
333
+ EC_SVEACCESSTRAP = 0x19,
43
EC_INSNABORT = 0x20,
334
+ EC_INSNABORT = 0x20,
44
EC_INSNABORT_SAME_EL = 0x21,
335
+ EC_INSNABORT_SAME_EL = 0x21,
45
EC_PCALIGNMENT = 0x22,
336
+ EC_PCALIGNMENT = 0x22,
46
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
337
+ EC_DATAABORT = 0x24,
47
| (cv << 24) | (cond << 20);
338
+ EC_DATAABORT_SAME_EL = 0x25,
48
}
339
+ EC_SPALIGNMENT = 0x26,
49
340
+ EC_AA32_FPTRAP = 0x28,
341
+ EC_AA64_FPTRAP = 0x2c,
342
+ EC_SERROR = 0x2f,
343
+ EC_BREAKPOINT = 0x30,
344
+ EC_BREAKPOINT_SAME_EL = 0x31,
345
+ EC_SOFTWARESTEP = 0x32,
346
+ EC_SOFTWARESTEP_SAME_EL = 0x33,
347
+ EC_WATCHPOINT = 0x34,
348
+ EC_WATCHPOINT_SAME_EL = 0x35,
349
+ EC_AA32_BKPT = 0x38,
350
+ EC_VECTORCATCH = 0x3a,
351
+ EC_AA64_BKPT = 0x3c,
352
+};
353
+
354
+#define ARM_EL_EC_SHIFT 26
355
+#define ARM_EL_IL_SHIFT 25
356
+#define ARM_EL_ISV_SHIFT 24
357
+#define ARM_EL_IL (1 << ARM_EL_IL_SHIFT)
358
+#define ARM_EL_ISV (1 << ARM_EL_ISV_SHIFT)
359
+
360
+static inline uint32_t syn_get_ec(uint32_t syn)
361
+{
362
+ return syn >> ARM_EL_EC_SHIFT;
363
+}
364
+
365
+/*
366
+ * Utility functions for constructing various kinds of syndrome value.
367
+ * Note that in general we follow the AArch64 syndrome values; in a
368
+ * few cases the value in HSR for exceptions taken to AArch32 Hyp
369
+ * mode differs slightly, and we fix this up when populating HSR in
370
+ * arm_cpu_do_interrupt_aarch32_hyp().
371
+ * The exception is FP/SIMD access traps -- these report extra information
372
+ * when taking an exception to AArch32. For those we include the extra coproc
373
+ * and TA fields, and mask them out when taking the exception to AArch64.
374
+ */
375
+static inline uint32_t syn_uncategorized(void)
376
+{
377
+ return (EC_UNCATEGORIZED << ARM_EL_EC_SHIFT) | ARM_EL_IL;
378
+}
379
+
380
+static inline uint32_t syn_aa64_svc(uint32_t imm16)
381
+{
382
+ return (EC_AA64_SVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
383
+}
384
+
385
+static inline uint32_t syn_aa64_hvc(uint32_t imm16)
386
+{
387
+ return (EC_AA64_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
388
+}
389
+
390
+static inline uint32_t syn_aa64_smc(uint32_t imm16)
391
+{
392
+ return (EC_AA64_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
393
+}
394
+
395
+static inline uint32_t syn_aa32_svc(uint32_t imm16, bool is_16bit)
396
+{
397
+ return (EC_AA32_SVC << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
398
+ | (is_16bit ? 0 : ARM_EL_IL);
399
+}
400
+
401
+static inline uint32_t syn_aa32_hvc(uint32_t imm16)
402
+{
403
+ return (EC_AA32_HVC << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
404
+}
405
+
406
+static inline uint32_t syn_aa32_smc(void)
407
+{
408
+ return (EC_AA32_SMC << ARM_EL_EC_SHIFT) | ARM_EL_IL;
409
+}
410
+
411
+static inline uint32_t syn_aa64_bkpt(uint32_t imm16)
412
+{
413
+ return (EC_AA64_BKPT << ARM_EL_EC_SHIFT) | ARM_EL_IL | (imm16 & 0xffff);
414
+}
415
+
416
+static inline uint32_t syn_aa32_bkpt(uint32_t imm16, bool is_16bit)
417
+{
418
+ return (EC_AA32_BKPT << ARM_EL_EC_SHIFT) | (imm16 & 0xffff)
419
+ | (is_16bit ? 0 : ARM_EL_IL);
420
+}
421
+
422
+static inline uint32_t syn_aa64_sysregtrap(int op0, int op1, int op2,
423
+ int crn, int crm, int rt,
424
+ int isread)
425
+{
426
+ return (EC_SYSTEMREGISTERTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL
427
+ | (op0 << 20) | (op2 << 17) | (op1 << 14) | (crn << 10) | (rt << 5)
428
+ | (crm << 1) | isread;
429
+}
430
+
431
+static inline uint32_t syn_cp14_rt_trap(int cv, int cond, int opc1, int opc2,
432
+ int crn, int crm, int rt, int isread,
433
+ bool is_16bit)
434
+{
435
+ return (EC_CP14RTTRAP << ARM_EL_EC_SHIFT)
436
+ | (is_16bit ? 0 : ARM_EL_IL)
437
+ | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
438
+ | (crn << 10) | (rt << 5) | (crm << 1) | isread;
439
+}
440
+
441
+static inline uint32_t syn_cp15_rt_trap(int cv, int cond, int opc1, int opc2,
442
+ int crn, int crm, int rt, int isread,
443
+ bool is_16bit)
444
+{
445
+ return (EC_CP15RTTRAP << ARM_EL_EC_SHIFT)
446
+ | (is_16bit ? 0 : ARM_EL_IL)
447
+ | (cv << 24) | (cond << 20) | (opc2 << 17) | (opc1 << 14)
448
+ | (crn << 10) | (rt << 5) | (crm << 1) | isread;
449
+}
450
+
451
+static inline uint32_t syn_cp14_rrt_trap(int cv, int cond, int opc1, int crm,
452
+ int rt, int rt2, int isread,
453
+ bool is_16bit)
454
+{
455
+ return (EC_CP14RRTTRAP << ARM_EL_EC_SHIFT)
456
+ | (is_16bit ? 0 : ARM_EL_IL)
457
+ | (cv << 24) | (cond << 20) | (opc1 << 16)
458
+ | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
459
+}
460
+
461
+static inline uint32_t syn_cp15_rrt_trap(int cv, int cond, int opc1, int crm,
462
+ int rt, int rt2, int isread,
463
+ bool is_16bit)
464
+{
465
+ return (EC_CP15RRTTRAP << ARM_EL_EC_SHIFT)
466
+ | (is_16bit ? 0 : ARM_EL_IL)
467
+ | (cv << 24) | (cond << 20) | (opc1 << 16)
468
+ | (rt2 << 10) | (rt << 5) | (crm << 1) | isread;
469
+}
470
+
471
+static inline uint32_t syn_fp_access_trap(int cv, int cond, bool is_16bit)
472
+{
473
+ /* AArch32 FP trap or any AArch64 FP/SIMD trap: TA == 0 coproc == 0xa */
474
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
475
+ | (is_16bit ? 0 : ARM_EL_IL)
476
+ | (cv << 24) | (cond << 20) | 0xa;
477
+}
478
+
479
+static inline uint32_t syn_simd_access_trap(int cv, int cond, bool is_16bit)
480
+{
481
+ /* AArch32 SIMD trap: TA == 1 coproc == 0 */
482
+ return (EC_ADVSIMDFPACCESSTRAP << ARM_EL_EC_SHIFT)
483
+ | (is_16bit ? 0 : ARM_EL_IL)
484
+ | (cv << 24) | (cond << 20) | (1 << 5);
485
+}
486
+
50
+static inline uint32_t syn_sve_access_trap(void)
487
+static inline uint32_t syn_sve_access_trap(void)
51
+{
488
+{
52
+ return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
489
+ return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
53
+}
490
+}
54
+
491
+
55
static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
492
+static inline uint32_t syn_pactrap(void)
56
{
493
+{
57
return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
494
+ return EC_PACTRAP << ARM_EL_EC_SHIFT;
58
diff --git a/target/arm/helper.c b/target/arm/helper.c
495
+}
59
index XXXXXXX..XXXXXXX 100644
496
+
60
--- a/target/arm/helper.c
497
+static inline uint32_t syn_btitrap(int btype)
61
+++ b/target/arm/helper.c
498
+{
62
@@ -XXX,XX +XXX,XX @@ static int sve_exception_el(CPUARMState *env)
499
+ return (EC_BTITRAP << ARM_EL_EC_SHIFT) | btype;
63
return 0;
500
+}
64
}
501
+
65
502
+static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
66
-static CPAccessResult zcr_access(CPUARMState *env, const ARMCPRegInfo *ri,
503
+{
67
- bool isread)
504
+ return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
68
-{
505
+ | ARM_EL_IL | (ea << 9) | (s1ptw << 7) | fsc;
69
- switch (sve_exception_el(env)) {
506
+}
70
- case 3:
507
+
71
- return CP_ACCESS_TRAP_EL3;
508
+static inline uint32_t syn_data_abort_no_iss(int same_el, int fnv,
72
- case 2:
509
+ int ea, int cm, int s1ptw,
73
- return CP_ACCESS_TRAP_EL2;
510
+ int wnr, int fsc)
74
- case 1:
511
+{
75
- return CP_ACCESS_TRAP;
512
+ return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
76
- }
513
+ | ARM_EL_IL
77
- return CP_ACCESS_OK;
514
+ | (fnv << 10) | (ea << 9) | (cm << 8) | (s1ptw << 7)
78
-}
515
+ | (wnr << 6) | fsc;
79
-
516
+}
80
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
517
+
81
uint64_t value)
518
+static inline uint32_t syn_data_abort_with_iss(int same_el,
82
{
519
+ int sas, int sse, int srt,
83
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
520
+ int sf, int ar,
84
static const ARMCPRegInfo zcr_el1_reginfo = {
521
+ int ea, int cm, int s1ptw,
85
.name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
522
+ int wnr, int fsc,
86
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
523
+ bool is_16bit)
87
- .access = PL1_RW, .accessfn = zcr_access,
524
+{
88
+ .access = PL1_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
525
+ return (EC_DATAABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
89
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
526
+ | (is_16bit ? 0 : ARM_EL_IL)
90
.writefn = zcr_write, .raw_writefn = raw_write
527
+ | ARM_EL_ISV | (sas << 22) | (sse << 21) | (srt << 16)
91
};
528
+ | (sf << 15) | (ar << 14)
92
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_el1_reginfo = {
529
+ | (ea << 9) | (cm << 8) | (s1ptw << 7) | (wnr << 6) | fsc;
93
static const ARMCPRegInfo zcr_el2_reginfo = {
530
+}
94
.name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
531
+
95
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
532
+static inline uint32_t syn_swstep(int same_el, int isv, int ex)
96
- .access = PL2_RW, .accessfn = zcr_access,
533
+{
97
+ .access = PL2_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
534
+ return (EC_SOFTWARESTEP << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
98
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
535
+ | ARM_EL_IL | (isv << 24) | (ex << 6) | 0x22;
99
.writefn = zcr_write, .raw_writefn = raw_write
536
+}
100
};
537
+
101
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_el2_reginfo = {
538
+static inline uint32_t syn_watchpoint(int same_el, int cm, int wnr)
102
static const ARMCPRegInfo zcr_no_el2_reginfo = {
539
+{
103
.name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
540
+ return (EC_WATCHPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
104
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
541
+ | ARM_EL_IL | (cm << 8) | (wnr << 6) | 0x22;
105
- .access = PL2_RW,
542
+}
106
+ .access = PL2_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
543
+
107
.readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
544
+static inline uint32_t syn_breakpoint(int same_el)
108
};
545
+{
109
546
+ return (EC_BREAKPOINT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
110
static const ARMCPRegInfo zcr_el3_reginfo = {
547
+ | ARM_EL_IL | 0x22;
111
.name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
548
+}
112
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
549
+
113
- .access = PL3_RW, .accessfn = zcr_access,
550
+static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
114
+ .access = PL3_RW, .type = ARM_CP_SVE | ARM_CP_FPU,
551
+{
115
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
552
+ return (EC_WFX_TRAP << ARM_EL_EC_SHIFT) |
116
.writefn = zcr_write, .raw_writefn = raw_write
553
+ (is_16bit ? 0 : (1 << ARM_EL_IL_SHIFT)) |
117
};
554
+ (cv << 24) | (cond << 20) | ti;
118
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
555
+}
119
index XXXXXXX..XXXXXXX 100644
556
+
120
--- a/target/arm/translate-a64.c
557
+#endif /* TARGET_ARM_SYNDROME_H */
121
+++ b/target/arm/translate-a64.c
122
@@ -XXX,XX +XXX,XX @@ static inline bool fp_access_check(DisasContext *s)
123
return false;
124
}
125
126
+/* Check that SVE access is enabled. If it is, return true.
127
+ * If not, emit code to generate an appropriate exception and return false.
128
+ */
129
+static inline bool sve_access_check(DisasContext *s)
130
+{
131
+ if (s->sve_excp_el) {
132
+ gen_exception_insn(s, 4, EXCP_UDEF, syn_sve_access_trap(),
133
+ s->sve_excp_el);
134
+ return false;
135
+ }
136
+ return true;
137
+}
138
+
139
/*
140
* This utility function is for doing register extension with an
141
* optional shift. You will likely want to pass a temporary for the
142
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
143
default:
144
break;
145
}
146
+ if ((ri->type & ARM_CP_SVE) && !sve_access_check(s)) {
147
+ return;
148
+ }
149
if ((ri->type & ARM_CP_FPU) && !fp_access_check(s)) {
150
return;
151
}
152
--
558
--
153
2.16.1
559
2.20.1
154
560
155
561
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
A proper syndrome is required to fill in the proper si_code.
4
Use page_get_flags to determine permission vs translation for user-only.
5
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210210000223.884088-27-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
linux-user/aarch64/cpu_loop.c | 24 +++++++++++++++++++++---
12
target/arm/tlb_helper.c | 15 +++++++++------
13
2 files changed, 30 insertions(+), 9 deletions(-)
14
15
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/linux-user/aarch64/cpu_loop.c
18
+++ b/linux-user/aarch64/cpu_loop.c
19
@@ -XXX,XX +XXX,XX @@
20
#include "cpu_loop-common.h"
21
#include "qemu/guest-random.h"
22
#include "hw/semihosting/common-semi.h"
23
+#include "target/arm/syndrome.h"
24
25
#define get_user_code_u32(x, gaddr, env) \
26
({ abi_long __r = get_user_u32((x), (gaddr)); \
27
@@ -XXX,XX +XXX,XX @@
28
void cpu_loop(CPUARMState *env)
29
{
30
CPUState *cs = env_cpu(env);
31
- int trapnr;
32
+ int trapnr, ec, fsc;
33
abi_long ret;
34
target_siginfo_t info;
35
36
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
37
case EXCP_DATA_ABORT:
38
info.si_signo = TARGET_SIGSEGV;
39
info.si_errno = 0;
40
- /* XXX: check env->error_code */
41
- info.si_code = TARGET_SEGV_MAPERR;
42
info._sifields._sigfault._addr = env->exception.vaddress;
43
+
44
+ /* We should only arrive here with EC in {DATAABORT, INSNABORT}. */
45
+ ec = syn_get_ec(env->exception.syndrome);
46
+ assert(ec == EC_DATAABORT || ec == EC_INSNABORT);
47
+
48
+ /* Both EC have the same format for FSC, or close enough. */
49
+ fsc = extract32(env->exception.syndrome, 0, 6);
50
+ switch (fsc) {
51
+ case 0x04 ... 0x07: /* Translation fault, level {0-3} */
52
+ info.si_code = TARGET_SEGV_MAPERR;
53
+ break;
54
+ case 0x09 ... 0x0b: /* Access flag fault, level {1-3} */
55
+ case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
56
+ info.si_code = TARGET_SEGV_ACCERR;
57
+ break;
58
+ default:
59
+ g_assert_not_reached();
60
+ }
61
+
62
queue_signal(env, info.si_signo, QEMU_SI_FAULT, &info);
63
break;
64
case EXCP_DEBUG:
65
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
66
index XXXXXXX..XXXXXXX 100644
67
--- a/target/arm/tlb_helper.c
68
+++ b/target/arm/tlb_helper.c
69
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
70
bool probe, uintptr_t retaddr)
71
{
72
ARMCPU *cpu = ARM_CPU(cs);
73
+ ARMMMUFaultInfo fi = {};
74
75
#ifdef CONFIG_USER_ONLY
76
- cpu->env.exception.vaddress = address;
77
- if (access_type == MMU_INST_FETCH) {
78
- cs->exception_index = EXCP_PREFETCH_ABORT;
79
+ int flags = page_get_flags(useronly_clean_ptr(address));
80
+ if (flags & PAGE_VALID) {
81
+ fi.type = ARMFault_Permission;
82
} else {
83
- cs->exception_index = EXCP_DATA_ABORT;
84
+ fi.type = ARMFault_Translation;
85
}
86
- cpu_loop_exit_restore(cs, retaddr);
87
+
88
+ /* now we have a real cpu fault */
89
+ cpu_restore_state(cs, retaddr, true);
90
+ arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
91
#else
92
hwaddr phys_addr;
93
target_ulong page_size;
94
int prot, ret;
95
MemTxAttrs attrs = {};
96
- ARMMMUFaultInfo fi = {};
97
ARMCacheAttrs cacheattrs = {};
98
99
/*
100
--
101
2.20.1
102
103
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210210000223.884088-28-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
8
linux-user/aarch64/target_signal.h | 2 ++
9
linux-user/aarch64/cpu_loop.c | 3 +++
10
2 files changed, 5 insertions(+)
11
12
diff --git a/linux-user/aarch64/target_signal.h b/linux-user/aarch64/target_signal.h
13
index XXXXXXX..XXXXXXX 100644
14
--- a/linux-user/aarch64/target_signal.h
15
+++ b/linux-user/aarch64/target_signal.h
16
@@ -XXX,XX +XXX,XX @@ typedef struct target_sigaltstack {
17
18
#include "../generic/signal.h"
19
20
+#define TARGET_SEGV_MTESERR 9 /* Synchronous ARM MTE exception */
21
+
22
#define TARGET_ARCH_HAS_SETUP_FRAME
23
#endif /* AARCH64_TARGET_SIGNAL_H */
24
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
25
index XXXXXXX..XXXXXXX 100644
26
--- a/linux-user/aarch64/cpu_loop.c
27
+++ b/linux-user/aarch64/cpu_loop.c
28
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
29
case 0x0d ... 0x0f: /* Permission fault, level {1-3} */
30
info.si_code = TARGET_SEGV_ACCERR;
31
break;
32
+ case 0x11: /* Synchronous Tag Check Fault */
33
+ info.si_code = TARGET_SEGV_MTESERR;
34
+ break;
35
default:
36
g_assert_not_reached();
37
}
38
--
39
2.20.1
40
41
New patch
1
From: Richard Henderson <richard.henderson@linaro.org>
1
2
3
The real kernel collects _TIF_MTE_ASYNC_FAULT into the current thread's
4
state on any kernel entry (interrupt, exception etc), and then delivers
5
the signal in advance of resuming the thread.
6
7
This means that while the signal won't be delivered immediately, it will
8
not be delayed forever -- at minimum it will be delivered after the next
9
clock interrupt.
10
11
We don't have a clock interrupt in linux-user, so we issue a cpu_kick
12
to signal a return to the main loop at the end of the current TB.
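
A guest-side sketch (not part of the patch) of how the two delivery modes
can be told apart; the SEGV_MTE* values match the TARGET_SEGV_* constants
added in this series, with fallback defines for pre-MTE libc headers.

#include <signal.h>
#include <string.h>
#include <unistd.h>

#ifndef SEGV_MTEAERR
#define SEGV_MTEAERR 8    /* asynchronous tag check fault */
#endif
#ifndef SEGV_MTESERR
#define SEGV_MTESERR 9    /* synchronous tag check fault */
#endif

static void handler(int sig, siginfo_t *info, void *ucontext)
{
    const char *msg;

    if (info->si_code == SEGV_MTEAERR) {
        /* Asynchronous: no faulting address is reported (si_addr is 0). */
        msg = "async tag check fault\n";
    } else if (info->si_code == SEGV_MTESERR) {
        /* Synchronous: si_addr holds the faulting tagged address. */
        msg = "sync tag check fault\n";
    } else {
        msg = "plain SIGSEGV\n";
    }
    write(STDERR_FILENO, msg, strlen(msg));
    _exit(1);
}

int main(void)
{
    struct sigaction sa = { .sa_sigaction = handler, .sa_flags = SA_SIGINFO };

    sigaction(SIGSEGV, &sa, NULL);
    /* ... run MTE-enabled code here ... */
    return 0;
}
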
13
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
16
Message-id: 20210210000223.884088-29-richard.henderson@linaro.org
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
19
linux-user/aarch64/target_signal.h | 1 +
20
linux-user/aarch64/cpu_loop.c | 11 +++++++++++
21
target/arm/mte_helper.c | 10 ++++++++++
22
3 files changed, 22 insertions(+)
23
24
diff --git a/linux-user/aarch64/target_signal.h b/linux-user/aarch64/target_signal.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/linux-user/aarch64/target_signal.h
27
+++ b/linux-user/aarch64/target_signal.h
28
@@ -XXX,XX +XXX,XX @@ typedef struct target_sigaltstack {
29
30
#include "../generic/signal.h"
31
32
+#define TARGET_SEGV_MTEAERR 8 /* Asynchronous ARM MTE error */
33
#define TARGET_SEGV_MTESERR 9 /* Synchronous ARM MTE exception */
34
35
#define TARGET_ARCH_HAS_SETUP_FRAME
36
diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
37
index XXXXXXX..XXXXXXX 100644
38
--- a/linux-user/aarch64/cpu_loop.c
39
+++ b/linux-user/aarch64/cpu_loop.c
40
@@ -XXX,XX +XXX,XX @@ void cpu_loop(CPUARMState *env)
41
EXCP_DUMP(env, "qemu: unhandled CPU exception 0x%x - aborting\n", trapnr);
42
abort();
43
}
44
+
45
+ /* Check for MTE asynchronous faults */
46
+ if (unlikely(env->cp15.tfsr_el[0])) {
47
+ env->cp15.tfsr_el[0] = 0;
48
+ info.si_signo = TARGET_SIGSEGV;
49
+ info.si_errno = 0;
50
+ info._sifields._sigfault._addr = 0;
51
+ info.si_code = TARGET_SEGV_MTEAERR;
52
+ queue_signal(env, info.si_signo, QEMU_SI_FAULT, &info);
53
+ }
54
+
55
process_pending_signals(env);
56
/* Exception return on AArch64 always clears the exclusive monitor,
57
* so any return to running guest code implies this.
58
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/mte_helper.c
61
+++ b/target/arm/mte_helper.c
62
@@ -XXX,XX +XXX,XX @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
63
select = 0;
64
}
65
env->cp15.tfsr_el[el] |= 1 << select;
66
+#ifdef CONFIG_USER_ONLY
67
+ /*
68
+ * Stand in for a timer irq, setting _TIF_MTE_ASYNC_FAULT,
69
+ * which then sends a SIGSEGV when the thread is next scheduled.
70
+ * This cpu will return to the main loop at the end of the TB,
71
+ * which is rather sooner than "normal". But the alternative
72
+ * is waiting until the next syscall.
73
+ */
74
+ qemu_cpu_kick(env_cpu(env));
75
+#endif
76
break;
77
78
default:
79
--
80
2.20.1
81
82
From: Richard Henderson <richard.henderson@linaro.org>

Use the now-saved PAGE_ANON and PAGE_MTE bits,
and the per-page saved data.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/mte_helper.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
int tag_size, uintptr_t ra)
{
#ifdef CONFIG_USER_ONLY
- /* Tag storage not implemented. */
- return NULL;
+ uint64_t clean_ptr = useronly_clean_ptr(ptr);
+ int flags = page_get_flags(clean_ptr);
+ uint8_t *tags;
+ uintptr_t index;
+
+ if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE : PAGE_READ))) {
+ /* SIGSEGV */
+ arm_cpu_tlb_fill(env_cpu(env), ptr, ptr_size, ptr_access,
+ ptr_mmu_idx, false, ra);
+ g_assert_not_reached();
+ }
+
+ /* Require both MAP_ANON and PROT_MTE for the page. */
+ if (!(flags & PAGE_ANON) || !(flags & PAGE_MTE)) {
+ return NULL;
+ }
+
+ tags = page_get_target_data(clean_ptr);
+ if (tags == NULL) {
+ size_t alloc_size = TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1);
+ tags = page_alloc_target_data(clean_ptr, alloc_size);
+ assert(tags != NULL);
+ }
+
+ index = extract32(ptr, LOG2_TAG_GRANULE + 1,
+ TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
+ return tags + index;
#else
uintptr_t index;
CPUIOTLBEntry *iotlbentry;
--
2.20.1
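
A side note on the arithmetic above, not part of the patch: user-only tag storage packs one 4-bit tag per 16-byte granule, i.e. two tags per byte, which is where both the allocation size and the index extraction come from. A standalone sketch, assuming LOG2_TAG_GRANULE == 4 and 4K target pages:

    #include <assert.h>
    #include <stdint.h>

    enum { LOG2_TAG_GRANULE = 4, TARGET_PAGE_BITS = 12 };   /* assumed values */
    #define TARGET_PAGE_SIZE (1u << TARGET_PAGE_BITS)

    /* Tag byte used for the granule containing ptr (two tags per byte). */
    static unsigned tag_byte_index(uint64_t ptr)
    {
        return (ptr >> (LOG2_TAG_GRANULE + 1)) &
               ((TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1)) - 1);
    }

    int main(void)
    {
        /* One page of data needs TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1) tag bytes. */
        assert((TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1)) == 128);
        /* Granules 0 and 1 share tag byte 0; granules 2 and 3 share byte 1. */
        assert(tag_byte_index(0x1000) == 0);
        assert(tag_byte_index(0x1010) == 0);
        assert(tag_byte_index(0x1020) == 1);
        return 0;
    }
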
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
* Note that this must match useronly_clean_ptr.
*/
env->cp15.tcr_el[1].raw_tcr = (1ULL << 37);
+
+ /* Enable MTE */
+ if (cpu_isar_feature(aa64_mte, cpu)) {
+ /* Enable tag access, but leave TCF0 as No Effect (0). */
+ env->cp15.sctlr_el[1] |= SCTLR_ATA0;
+ /*
+ * Exclude all tags, so that tag 0 is always used.
+ * This corresponds to Linux current->thread.gcr_incl = 0.
+ *
+ * Set RRND, so that helper_irg() will generate a seed later.
+ * Here in cpu_reset(), the crypto subsystem has not yet been
+ * initialized.
+ */
+ env->cp15.gcr_el1 = 0x1ffff;
+ }
#else
/* Reset into the highest available EL */
if (arm_feature(env, ARM_FEATURE_EL3)) {
--
2.20.1
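
Not part of the patch, but useful for reading the reset value: assuming the architected GCR_EL1 layout (Exclude in bits [15:0], RRND in bit 16), 0x1ffff excludes every tag, and the architected fallback when all sixteen tags are excluded is tag 0, so IRG behaves as a no-op until the guest installs its own exclude mask. A simplified model:

    #include <stdint.h>
    #include <stdio.h>

    /* Rough model of the tag choice IRG makes from an exclude mask. */
    static unsigned choose_nonexcluded_tag(unsigned start, uint16_t exclude)
    {
        if (exclude == 0xffff) {
            return 0;                      /* all tags excluded: fall back to 0 */
        }
        while (exclude & (1u << start)) {
            start = (start + 1) & 0xf;     /* skip excluded tag values */
        }
        return start;
    }

    int main(void)
    {
        uint32_t gcr_el1 = 0x1ffff;        /* reset value set by the patch above */
        printf("IRG yields tag %u\n", choose_nonexcluded_tag(7, gcr_el1 & 0xffff));
        return 0;
    }
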
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210210000223.884088-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
tests/tcg/aarch64/mte.h | 60 +++++++++++++++++++++++++++++++
tests/tcg/aarch64/mte-1.c | 28 +++++++++++++++
tests/tcg/aarch64/mte-2.c | 45 +++++++++++++++++++++++
tests/tcg/aarch64/mte-3.c | 51 ++++++++++++++++++++++++++
tests/tcg/aarch64/mte-4.c | 45 +++++++++++++++++++++++
tests/tcg/aarch64/Makefile.target | 6 ++++
tests/tcg/configure.sh | 4 +++
7 files changed, 239 insertions(+)
create mode 100644 tests/tcg/aarch64/mte.h
create mode 100644 tests/tcg/aarch64/mte-1.c
create mode 100644 tests/tcg/aarch64/mte-2.c
create mode 100644 tests/tcg/aarch64/mte-3.c
create mode 100644 tests/tcg/aarch64/mte-4.c
21
27
22
diff --git a/tests/tcg/aarch64/mte.h b/tests/tcg/aarch64/mte.h
23
new file mode 100644
24
index XXXXXXX..XXXXXXX
25
--- /dev/null
26
+++ b/tests/tcg/aarch64/mte.h
27
@@ -XXX,XX +XXX,XX @@
28
+/*
29
+ * Linux kernel fallback API definitions for MTE and test helpers.
30
+ *
31
+ * Copyright (c) 2021 Linaro Ltd
32
+ * SPDX-License-Identifier: GPL-2.0-or-later
33
+ */
34
+
35
+#include <assert.h>
36
+#include <string.h>
37
+#include <stdlib.h>
38
+#include <stdio.h>
39
+#include <unistd.h>
40
+#include <signal.h>
41
+#include <sys/mman.h>
42
+#include <sys/prctl.h>
43
+
44
+#ifndef PR_SET_TAGGED_ADDR_CTRL
45
+# define PR_SET_TAGGED_ADDR_CTRL 55
46
+#endif
47
+#ifndef PR_TAGGED_ADDR_ENABLE
48
+# define PR_TAGGED_ADDR_ENABLE (1UL << 0)
49
+#endif
50
+#ifndef PR_MTE_TCF_SHIFT
51
+# define PR_MTE_TCF_SHIFT 1
52
+# define PR_MTE_TCF_NONE (0UL << PR_MTE_TCF_SHIFT)
53
+# define PR_MTE_TCF_SYNC (1UL << PR_MTE_TCF_SHIFT)
54
+# define PR_MTE_TCF_ASYNC (2UL << PR_MTE_TCF_SHIFT)
55
+# define PR_MTE_TAG_SHIFT 3
56
+#endif
57
+
58
+#ifndef PROT_MTE
59
+# define PROT_MTE 0x20
60
+#endif
61
+
62
+#ifndef SEGV_MTEAERR
63
+# define SEGV_MTEAERR 8
64
+# define SEGV_MTESERR 9
65
+#endif
66
+
67
+static void enable_mte(int tcf)
68
+{
69
+ int r = prctl(PR_SET_TAGGED_ADDR_CTRL,
70
+ PR_TAGGED_ADDR_ENABLE | tcf | (0xfffe << PR_MTE_TAG_SHIFT),
71
+ 0, 0, 0);
72
+ if (r < 0) {
73
+ perror("PR_SET_TAGGED_ADDR_CTRL");
74
+ exit(2);
75
+ }
76
+}
77
+
78
+static void *alloc_mte_mem(size_t size)
79
+{
80
+ void *p = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_MTE,
81
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
82
+ if (p == MAP_FAILED) {
83
+ perror("mmap PROT_MTE");
84
+ exit(2);
85
+ }
86
+ return p;
87
+}
88
diff --git a/tests/tcg/aarch64/mte-1.c b/tests/tcg/aarch64/mte-1.c
89
new file mode 100644
90
index XXXXXXX..XXXXXXX
91
--- /dev/null
92
+++ b/tests/tcg/aarch64/mte-1.c
93
@@ -XXX,XX +XXX,XX @@
94
+/*
95
+ * Memory tagging, basic pass cases.
96
+ *
97
+ * Copyright (c) 2021 Linaro Ltd
98
+ * SPDX-License-Identifier: GPL-2.0-or-later
99
+ */
100
+
101
+#include "mte.h"
102
+
103
+int main(int ac, char **av)
104
+{
105
+ int *p0, *p1, *p2;
106
+ long c;
107
+
108
+ enable_mte(PR_MTE_TCF_NONE);
109
+ p0 = alloc_mte_mem(sizeof(*p0));
110
+
111
+ asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(1));
112
+ assert(p1 != p0);
113
+ asm("subp %0,%1,%2" : "=r"(c) : "r"(p0), "r"(p1));
114
+ assert(c == 0);
115
+
116
+ asm("stg %0, [%0]" : : "r"(p1));
117
+ asm("ldg %0, [%1]" : "=r"(p2) : "r"(p0), "0"(p0));
118
+ assert(p1 == p2);
119
+
120
+ return 0;
121
+}
122
diff --git a/tests/tcg/aarch64/mte-2.c b/tests/tcg/aarch64/mte-2.c
123
new file mode 100644
124
index XXXXXXX..XXXXXXX
125
--- /dev/null
126
+++ b/tests/tcg/aarch64/mte-2.c
127
@@ -XXX,XX +XXX,XX @@
128
+/*
129
+ * Memory tagging, basic fail cases, synchronous signals.
130
+ *
131
+ * Copyright (c) 2021 Linaro Ltd
132
+ * SPDX-License-Identifier: GPL-2.0-or-later
133
+ */
134
+
135
+#include "mte.h"
136
+
137
+void pass(int sig, siginfo_t *info, void *uc)
138
+{
139
+ assert(info->si_code == SEGV_MTESERR);
140
+ exit(0);
141
+}
142
+
143
+int main(int ac, char **av)
144
+{
145
+ struct sigaction sa;
146
+ int *p0, *p1, *p2;
147
+ long excl = 1;
148
+
149
+ enable_mte(PR_MTE_TCF_SYNC);
150
+ p0 = alloc_mte_mem(sizeof(*p0));
151
+
152
+ /* Create two differently tagged pointers. */
153
+ asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
154
+ asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
155
+ assert(excl != 1);
156
+ asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
157
+ assert(p1 != p2);
158
+
159
+ /* Store the tag from the first pointer. */
160
+ asm("stg %0, [%0]" : : "r"(p1));
161
+
162
+ *p1 = 0;
163
+
164
+ memset(&sa, 0, sizeof(sa));
165
+ sa.sa_sigaction = pass;
166
+ sa.sa_flags = SA_SIGINFO;
167
+ sigaction(SIGSEGV, &sa, NULL);
168
+
169
+ *p2 = 0;
170
+
171
+ abort();
172
+}
173
diff --git a/tests/tcg/aarch64/mte-3.c b/tests/tcg/aarch64/mte-3.c
174
new file mode 100644
175
index XXXXXXX..XXXXXXX
176
--- /dev/null
177
+++ b/tests/tcg/aarch64/mte-3.c
178
@@ -XXX,XX +XXX,XX @@
179
+/*
180
+ * Memory tagging, basic fail cases, asynchronous signals.
181
+ *
182
+ * Copyright (c) 2021 Linaro Ltd
183
+ * SPDX-License-Identifier: GPL-2.0-or-later
184
+ */
185
+
186
+#include "mte.h"
187
+
188
+void pass(int sig, siginfo_t *info, void *uc)
189
+{
190
+ assert(info->si_code == SEGV_MTEAERR);
191
+ exit(0);
192
+}
193
+
194
+int main(int ac, char **av)
195
+{
196
+ struct sigaction sa;
197
+ long *p0, *p1, *p2;
198
+ long excl = 1;
199
+
200
+ enable_mte(PR_MTE_TCF_ASYNC);
201
+ p0 = alloc_mte_mem(sizeof(*p0));
202
+
203
+ /* Create two differently tagged pointers. */
204
+ asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
205
+ asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
206
+ assert(excl != 1);
207
+ asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
208
+ assert(p1 != p2);
209
+
210
+ /* Store the tag from the first pointer. */
211
+ asm("stg %0, [%0]" : : "r"(p1));
212
+
213
+ *p1 = 0;
214
+
215
+ memset(&sa, 0, sizeof(sa));
216
+ sa.sa_sigaction = pass;
217
+ sa.sa_flags = SA_SIGINFO;
218
+ sigaction(SIGSEGV, &sa, NULL);
219
+
220
+ /*
221
+ * Signal for async error will happen eventually.
222
+ * For a real kernel this should be after the next IRQ (e.g. timer).
223
+ * For qemu linux-user, we kick the cpu and exit at the next TB.
224
+ * In either case, loop until this happens (or killed by timeout).
225
+ * For extra sauce, yield, producing EXCP_YIELD to cpu_loop().
226
+ */
227
+ asm("str %0, [%0]; yield" : : "r"(p2));
228
+ while (1);
229
+}
230
diff --git a/tests/tcg/aarch64/mte-4.c b/tests/tcg/aarch64/mte-4.c
231
new file mode 100644
232
index XXXXXXX..XXXXXXX
233
--- /dev/null
234
+++ b/tests/tcg/aarch64/mte-4.c
235
@@ -XXX,XX +XXX,XX @@
236
+/*
237
+ * Memory tagging, re-reading tag checks.
238
+ *
239
+ * Copyright (c) 2021 Linaro Ltd
240
+ * SPDX-License-Identifier: GPL-2.0-or-later
241
+ */
242
+
243
+#include "mte.h"
244
+
245
+void __attribute__((noinline)) tagset(void *p, size_t size)
246
+{
247
+ size_t i;
248
+ for (i = 0; i < size; i += 16) {
249
+ asm("stg %0, [%0]" : : "r"(p + i));
250
+ }
251
+}
252
+
253
+void __attribute__((noinline)) tagcheck(void *p, size_t size)
254
+{
255
+ size_t i;
256
+ void *c;
257
+
258
+ for (i = 0; i < size; i += 16) {
259
+ asm("ldg %0, [%1]" : "=r"(c) : "r"(p + i), "0"(p));
260
+ assert(c == p);
261
+ }
262
+}
263
+
264
+int main(int ac, char **av)
265
+{
266
+ size_t size = getpagesize() * 4;
267
+ long excl = 1;
268
+ int *p0, *p1;
269
+
270
+ enable_mte(PR_MTE_TCF_ASYNC);
271
+ p0 = alloc_mte_mem(size);
272
+
273
+ /* Tag the pointer. */
274
+ asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
275
+
276
+ tagset(p1, size);
277
+ tagcheck(p1, size);
278
+
279
+ return 0;
280
+}
281
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index XXXXXXX..XXXXXXX 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -XXX,XX +XXX,XX @@ endif
# bti-2 tests PROT_BTI, so no special compiler support required.
AARCH64_TESTS += bti-2

+# MTE Tests
+ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4
+mte-%: CFLAGS += -march=armv8.5-a+memtag
+endif
+
# Semihosting smoke test for linux-user
AARCH64_TESTS += semihosting
run-semihosting: semihosting
diff --git a/tests/tcg/configure.sh b/tests/tcg/configure.sh
index XXXXXXX..XXXXXXX 100755
--- a/tests/tcg/configure.sh
+++ b/tests/tcg/configure.sh
@@ -XXX,XX +XXX,XX @@ for target in $target_list; do
-mbranch-protection=standard -o $TMPE $TMPC; then
echo "CROSS_CC_HAS_ARMV8_BTI=y" >> $config_target_mak
fi
+ if do_compiler "$target_compiler" $target_compiler_cflags \
+ -march=armv8.5-a+memtag -o $TMPE $TMPC; then
+ echo "CROSS_CC_HAS_ARMV8_MTE=y" >> $config_target_mak
+ fi
;;
esac

--
2.20.1
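
For reference, and not part of the patch: the same -march flag the Makefile rule adds can be used to build one of the tests by hand and run it under the linux-user binary. The cross-compiler name and the use of -static are assumptions about the local setup:

    $ aarch64-linux-gnu-gcc -static -march=armv8.5-a+memtag \
          -o mte-1 tests/tcg/aarch64/mte-1.c
    $ ./qemu-aarch64 mte-1; echo exit=$?
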
From: Doug Evans <dje@google.com>

This is a 10/100 ethernet device that has several features.
Only the ones needed by the Linux driver have been implemented.
See npcm7xx_emc.c for a list of unimplemented features.

Reviewed-by: Hao Wu <wuhaotsh@google.com>
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
Signed-off-by: Doug Evans <dje@google.com>
Message-id: 20210209015541.778833-2-dje@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
include/hw/net/npcm7xx_emc.h | 286 ++++++++++++
hw/net/npcm7xx_emc.c | 857 +++++++++++++++++++++++++++++++++++
hw/net/meson.build | 1 +
hw/net/trace-events | 17 +
4 files changed, 1161 insertions(+)
create mode 100644 include/hw/net/npcm7xx_emc.h
create mode 100644 hw/net/npcm7xx_emc.c
15
21
16
22
diff --git a/include/hw/net/npcm7xx_emc.h b/include/hw/net/npcm7xx_emc.h
23
new file mode 100644
24
index XXXXXXX..XXXXXXX
25
--- /dev/null
26
+++ b/include/hw/net/npcm7xx_emc.h
27
@@ -XXX,XX +XXX,XX @@
28
+/*
29
+ * Nuvoton NPCM7xx EMC Module
30
+ *
31
+ * Copyright 2020 Google LLC
32
+ *
33
+ * This program is free software; you can redistribute it and/or modify it
34
+ * under the terms of the GNU General Public License as published by the
35
+ * Free Software Foundation; either version 2 of the License, or
36
+ * (at your option) any later version.
37
+ *
38
+ * This program is distributed in the hope that it will be useful, but WITHOUT
39
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
40
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
41
+ * for more details.
42
+ */
43
+
44
+#ifndef NPCM7XX_EMC_H
45
+#define NPCM7XX_EMC_H
46
+
47
+#include "hw/irq.h"
48
+#include "hw/sysbus.h"
49
+#include "net/net.h"
50
+
51
+/* 32-bit register indices. */
52
+enum NPCM7xxPWMRegister {
53
+ /* Control registers. */
54
+ REG_CAMCMR,
55
+ REG_CAMEN,
56
+
57
+ /* There are 16 CAMn[ML] registers. */
58
+ REG_CAMM_BASE,
59
+ REG_CAML_BASE,
60
+ REG_CAMML_LAST = 0x21,
61
+
62
+ REG_TXDLSA = 0x22,
63
+ REG_RXDLSA,
64
+ REG_MCMDR,
65
+ REG_MIID,
66
+ REG_MIIDA,
67
+ REG_FFTCR,
68
+ REG_TSDR,
69
+ REG_RSDR,
70
+ REG_DMARFC,
71
+ REG_MIEN,
72
+
73
+ /* Status registers. */
74
+ REG_MISTA,
75
+ REG_MGSTA,
76
+ REG_MPCNT,
77
+ REG_MRPC,
78
+ REG_MRPCC,
79
+ REG_MREPC,
80
+ REG_DMARFS,
81
+ REG_CTXDSA,
82
+ REG_CTXBSA,
83
+ REG_CRXDSA,
84
+ REG_CRXBSA,
85
+
86
+ NPCM7XX_NUM_EMC_REGS,
87
+};
88
+
89
+/* REG_CAMCMR fields */
90
+/* Enable CAM Compare */
91
+#define REG_CAMCMR_ECMP (1 << 4)
92
+/* Complement CAM Compare */
93
+#define REG_CAMCMR_CCAM (1 << 3)
94
+/* Accept Broadcast Packet */
95
+#define REG_CAMCMR_ABP (1 << 2)
96
+/* Accept Multicast Packet */
97
+#define REG_CAMCMR_AMP (1 << 1)
98
+/* Accept Unicast Packet */
99
+#define REG_CAMCMR_AUP (1 << 0)
100
+
101
+/* REG_MCMDR fields */
102
+/* Software Reset */
103
+#define REG_MCMDR_SWR (1 << 24)
104
+/* Internal Loopback Select */
105
+#define REG_MCMDR_LBK (1 << 21)
106
+/* Operation Mode Select */
107
+#define REG_MCMDR_OPMOD (1 << 20)
108
+/* Enable MDC Clock Generation */
109
+#define REG_MCMDR_ENMDC (1 << 19)
110
+/* Full-Duplex Mode Select */
111
+#define REG_MCMDR_FDUP (1 << 18)
112
+/* Enable SQE Checking */
113
+#define REG_MCMDR_ENSEQ (1 << 17)
114
+/* Send PAUSE Frame */
115
+#define REG_MCMDR_SDPZ (1 << 16)
116
+/* No Defer */
117
+#define REG_MCMDR_NDEF (1 << 9)
118
+/* Frame Transmission On */
119
+#define REG_MCMDR_TXON (1 << 8)
120
+/* Strip CRC Checksum */
121
+#define REG_MCMDR_SPCRC (1 << 5)
122
+/* Accept CRC Error Packet */
123
+#define REG_MCMDR_AEP (1 << 4)
124
+/* Accept Control Packet */
125
+#define REG_MCMDR_ACP (1 << 3)
126
+/* Accept Runt Packet */
127
+#define REG_MCMDR_ARP (1 << 2)
128
+/* Accept Long Packet */
129
+#define REG_MCMDR_ALP (1 << 1)
130
+/* Frame Reception On */
131
+#define REG_MCMDR_RXON (1 << 0)
132
+
133
+/* REG_MIEN fields */
134
+/* Enable Transmit Descriptor Unavailable Interrupt */
135
+#define REG_MIEN_ENTDU (1 << 23)
136
+/* Enable Transmit Completion Interrupt */
137
+#define REG_MIEN_ENTXCP (1 << 18)
138
+/* Enable Transmit Interrupt */
139
+#define REG_MIEN_ENTXINTR (1 << 16)
140
+/* Enable Receive Descriptor Unavailable Interrupt */
141
+#define REG_MIEN_ENRDU (1 << 10)
142
+/* Enable Receive Good Interrupt */
143
+#define REG_MIEN_ENRXGD (1 << 4)
144
+/* Enable Receive Interrupt */
145
+#define REG_MIEN_ENRXINTR (1 << 0)
146
+
147
+/* REG_MISTA fields */
148
+/* TODO: Add error fields and support simulated errors? */
149
+/* Transmit Bus Error Interrupt */
150
+#define REG_MISTA_TXBERR (1 << 24)
151
+/* Transmit Descriptor Unavailable Interrupt */
152
+#define REG_MISTA_TDU (1 << 23)
153
+/* Transmit Completion Interrupt */
154
+#define REG_MISTA_TXCP (1 << 18)
155
+/* Transmit Interrupt */
156
+#define REG_MISTA_TXINTR (1 << 16)
157
+/* Receive Bus Error Interrupt */
158
+#define REG_MISTA_RXBERR (1 << 11)
159
+/* Receive Descriptor Unavailable Interrupt */
160
+#define REG_MISTA_RDU (1 << 10)
161
+/* DMA Early Notification Interrupt */
162
+#define REG_MISTA_DENI (1 << 9)
163
+/* Maximum Frame Length Interrupt */
164
+#define REG_MISTA_DFOI (1 << 8)
165
+/* Receive Good Interrupt */
166
+#define REG_MISTA_RXGD (1 << 4)
167
+/* Packet Too Long Interrupt */
168
+#define REG_MISTA_PTLE (1 << 3)
169
+/* Receive Interrupt */
170
+#define REG_MISTA_RXINTR (1 << 0)
171
+
172
+/* REG_MGSTA fields */
173
+/* Transmission Halted */
174
+#define REG_MGSTA_TXHA (1 << 11)
175
+/* Receive Halted */
176
+#define REG_MGSTA_RXHA (1 << 11)
177
+
178
+/* REG_DMARFC fields */
179
+/* Maximum Receive Frame Length */
180
+#define REG_DMARFC_RXMS(word) extract32((word), 0, 16)
181
+
182
+/* REG MIIDA fields */
183
+/* Busy Bit */
184
+#define REG_MIIDA_BUSY (1 << 17)
185
+
186
+/* Transmit and receive descriptors */
187
+typedef struct NPCM7xxEMCTxDesc NPCM7xxEMCTxDesc;
188
+typedef struct NPCM7xxEMCRxDesc NPCM7xxEMCRxDesc;
189
+
190
+struct NPCM7xxEMCTxDesc {
191
+ uint32_t flags;
192
+ uint32_t txbsa;
193
+ uint32_t status_and_length;
194
+ uint32_t ntxdsa;
195
+};
196
+
197
+struct NPCM7xxEMCRxDesc {
198
+ uint32_t status_and_length;
199
+ uint32_t rxbsa;
200
+ uint32_t reserved;
201
+ uint32_t nrxdsa;
202
+};
203
+
204
+/* NPCM7xxEMCTxDesc.flags values */
205
+/* Owner: 0 = cpu, 1 = emc */
206
+#define TX_DESC_FLAG_OWNER_MASK (1 << 31)
207
+/* Transmit interrupt enable */
208
+#define TX_DESC_FLAG_INTEN (1 << 2)
209
+/* CRC append */
210
+#define TX_DESC_FLAG_CRCAPP (1 << 1)
211
+/* Padding enable */
212
+#define TX_DESC_FLAG_PADEN (1 << 0)
213
+
214
+/* NPCM7xxEMCTxDesc.status_and_length values */
215
+/* Collision count */
216
+#define TX_DESC_STATUS_CCNT_SHIFT 28
217
+#define TX_DESC_STATUS_CCNT_BITSIZE 4
218
+/* SQE error */
219
+#define TX_DESC_STATUS_SQE (1 << 26)
220
+/* Transmission paused */
221
+#define TX_DESC_STATUS_PAU (1 << 25)
222
+/* P transmission halted */
223
+#define TX_DESC_STATUS_TXHA (1 << 24)
224
+/* Late collision */
225
+#define TX_DESC_STATUS_LC (1 << 23)
226
+/* Transmission abort */
227
+#define TX_DESC_STATUS_TXABT (1 << 22)
228
+/* No carrier sense */
229
+#define TX_DESC_STATUS_NCS (1 << 21)
230
+/* Defer exceed */
231
+#define TX_DESC_STATUS_EXDEF (1 << 20)
232
+/* Transmission complete */
233
+#define TX_DESC_STATUS_TXCP (1 << 19)
234
+/* Transmission deferred */
235
+#define TX_DESC_STATUS_DEF (1 << 17)
236
+/* Transmit interrupt */
237
+#define TX_DESC_STATUS_TXINTR (1 << 16)
238
+
239
+#define TX_DESC_PKT_LEN(word) extract32((word), 0, 16)
240
+
241
+/* Transmit buffer start address */
242
+#define TX_DESC_TXBSA(word) ((uint32_t) (word) & ~3u)
243
+
244
+/* Next transmit descriptor start address */
245
+#define TX_DESC_NTXDSA(word) ((uint32_t) (word) & ~3u)
246
+
247
+/* NPCM7xxEMCRxDesc.status_and_length values */
248
+/* Owner: 0b00 = cpu, 0b01 = undefined, 0b10 = emc, 0b11 = undefined */
249
+#define RX_DESC_STATUS_OWNER_SHIFT 30
250
+#define RX_DESC_STATUS_OWNER_BITSIZE 2
251
+#define RX_DESC_STATUS_OWNER_MASK (3 << RX_DESC_STATUS_OWNER_SHIFT)
252
+/* Runt packet */
253
+#define RX_DESC_STATUS_RP (1 << 22)
254
+/* Alignment error */
255
+#define RX_DESC_STATUS_ALIE (1 << 21)
256
+/* Frame reception complete */
257
+#define RX_DESC_STATUS_RXGD (1 << 20)
258
+/* Packet too long */
259
+#define RX_DESC_STATUS_PTLE (1 << 19)
260
+/* CRC error */
261
+#define RX_DESC_STATUS_CRCE (1 << 17)
262
+/* Receive interrupt */
263
+#define RX_DESC_STATUS_RXINTR (1 << 16)
264
+
265
+#define RX_DESC_PKT_LEN(word) extract32((word), 0, 16)
266
+
267
+/* Receive buffer start address */
268
+#define RX_DESC_RXBSA(word) ((uint32_t) (word) & ~3u)
269
+
270
+/* Next receive descriptor start address */
271
+#define RX_DESC_NRXDSA(word) ((uint32_t) (word) & ~3u)
272
+
273
+/* Minimum packet length, when TX_DESC_FLAG_PADEN is set. */
274
+#define MIN_PACKET_LENGTH 64
275
+
276
+struct NPCM7xxEMCState {
277
+ /*< private >*/
278
+ SysBusDevice parent;
279
+ /*< public >*/
280
+
281
+ MemoryRegion iomem;
282
+
283
+ qemu_irq tx_irq;
284
+ qemu_irq rx_irq;
285
+
286
+ NICState *nic;
287
+ NICConf conf;
288
+
289
+ /* 0 or 1, for log messages */
290
+ uint8_t emc_num;
291
+
292
+ uint32_t regs[NPCM7XX_NUM_EMC_REGS];
293
+
294
+ /*
295
+ * tx is active. Set to true by TSDR and then switches off when out of
296
+ * descriptors. If the TXON bit in REG_MCMDR is off then this is off.
297
+ */
298
+ bool tx_active;
299
+
300
+ /*
301
+ * rx is active. Set to true by RSDR and then switches off when out of
302
+ * descriptors. If the RXON bit in REG_MCMDR is off then this is off.
303
+ */
304
+ bool rx_active;
305
+};
306
+
307
+typedef struct NPCM7xxEMCState NPCM7xxEMCState;
308
+
309
+#define TYPE_NPCM7XX_EMC "npcm7xx-emc"
310
+#define NPCM7XX_EMC(obj) \
311
+ OBJECT_CHECK(NPCM7xxEMCState, (obj), TYPE_NPCM7XX_EMC)
312
+
313
+#endif /* NPCM7XX_EMC_H */
314
diff --git a/hw/net/npcm7xx_emc.c b/hw/net/npcm7xx_emc.c
315
new file mode 100644
316
index XXXXXXX..XXXXXXX
317
--- /dev/null
318
+++ b/hw/net/npcm7xx_emc.c
319
@@ -XXX,XX +XXX,XX @@
320
+/*
321
+ * Nuvoton NPCM7xx EMC Module
322
+ *
323
+ * Copyright 2020 Google LLC
324
+ *
325
+ * This program is free software; you can redistribute it and/or modify it
326
+ * under the terms of the GNU General Public License as published by the
327
+ * Free Software Foundation; either version 2 of the License, or
328
+ * (at your option) any later version.
329
+ *
330
+ * This program is distributed in the hope that it will be useful, but WITHOUT
331
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
332
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
333
+ * for more details.
334
+ *
335
+ * Unsupported/unimplemented features:
336
+ * - MCMDR.FDUP (full duplex) is ignored, half duplex is not supported
337
+ * - Only CAM0 is supported, CAM[1-15] are not
338
+ * - writes to CAMEN.[1-15] are ignored, these bits always read as zeroes
339
+ * - MII is not implemented, MIIDA.BUSY and MIID always return zero
340
+ * - MCMDR.LBK is not implemented
341
+ * - MCMDR.{OPMOD,ENSQE,AEP,ARP} are not supported
342
+ * - H/W FIFOs are not supported, MCMDR.FFTCR is ignored
343
+ * - MGSTA.SQE is not supported
344
+ * - pause and control frames are not implemented
345
+ * - MGSTA.CCNT is not supported
346
+ * - MPCNT, DMARFS are not implemented
347
+ */
348
+
349
+#include "qemu/osdep.h"
350
+
351
+/* For crc32 */
352
+#include <zlib.h>
353
+
354
+#include "qemu-common.h"
355
+#include "hw/irq.h"
356
+#include "hw/qdev-clock.h"
357
+#include "hw/qdev-properties.h"
358
+#include "hw/net/npcm7xx_emc.h"
359
+#include "net/eth.h"
360
+#include "migration/vmstate.h"
361
+#include "qemu/bitops.h"
362
+#include "qemu/error-report.h"
363
+#include "qemu/log.h"
364
+#include "qemu/module.h"
365
+#include "qemu/units.h"
366
+#include "sysemu/dma.h"
367
+#include "trace.h"
368
+
369
+#define CRC_LENGTH 4
370
+
371
+/*
372
+ * The maximum size of a (layer 2) ethernet frame as defined by 802.3.
373
+ * 1518 = 6(dest macaddr) + 6(src macaddr) + 2(proto) + 4(crc) + 1500(payload)
374
+ * This does not include an additional 4 for the vlan field (802.1q).
375
+ */
376
+#define MAX_ETH_FRAME_SIZE 1518
377
+
378
+static const char *emc_reg_name(int regno)
379
+{
380
+#define REG(name) case REG_ ## name: return #name;
381
+ switch (regno) {
382
+ REG(CAMCMR)
383
+ REG(CAMEN)
384
+ REG(TXDLSA)
385
+ REG(RXDLSA)
386
+ REG(MCMDR)
387
+ REG(MIID)
388
+ REG(MIIDA)
389
+ REG(FFTCR)
390
+ REG(TSDR)
391
+ REG(RSDR)
392
+ REG(DMARFC)
393
+ REG(MIEN)
394
+ REG(MISTA)
395
+ REG(MGSTA)
396
+ REG(MPCNT)
397
+ REG(MRPC)
398
+ REG(MRPCC)
399
+ REG(MREPC)
400
+ REG(DMARFS)
401
+ REG(CTXDSA)
402
+ REG(CTXBSA)
403
+ REG(CRXDSA)
404
+ REG(CRXBSA)
405
+ case REG_CAMM_BASE + 0: return "CAM0M";
406
+ case REG_CAML_BASE + 0: return "CAM0L";
407
+ case REG_CAMM_BASE + 2 ... REG_CAMML_LAST:
408
+ /* Only CAM0 is supported, fold the others into something simple. */
409
+ if (regno & 1) {
410
+ return "CAM<n>L";
411
+ } else {
412
+ return "CAM<n>M";
413
+ }
414
+ default: return "UNKNOWN";
415
+ }
416
+#undef REG
417
+}
418
+
419
+static void emc_reset(NPCM7xxEMCState *emc)
420
+{
421
+ trace_npcm7xx_emc_reset(emc->emc_num);
422
+
423
+ memset(&emc->regs[0], 0, sizeof(emc->regs));
424
+
425
+ /* These regs have non-zero reset values. */
426
+ emc->regs[REG_TXDLSA] = 0xfffffffc;
427
+ emc->regs[REG_RXDLSA] = 0xfffffffc;
428
+ emc->regs[REG_MIIDA] = 0x00900000;
429
+ emc->regs[REG_FFTCR] = 0x0101;
430
+ emc->regs[REG_DMARFC] = 0x0800;
431
+ emc->regs[REG_MPCNT] = 0x7fff;
432
+
433
+ emc->tx_active = false;
434
+ emc->rx_active = false;
435
+}
436
+
437
+static void npcm7xx_emc_reset(DeviceState *dev)
438
+{
439
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
440
+ emc_reset(emc);
441
+}
442
+
443
+static void emc_soft_reset(NPCM7xxEMCState *emc)
444
+{
445
+ /*
446
+ * The docs say at least MCMDR.{LBK,OPMOD} bits are not changed during a
447
+ * soft reset, but does not go into further detail. For now, KISS.
448
+ */
449
+ uint32_t mcmdr = emc->regs[REG_MCMDR];
450
+ emc_reset(emc);
451
+ emc->regs[REG_MCMDR] = mcmdr & (REG_MCMDR_LBK | REG_MCMDR_OPMOD);
452
+
453
+ qemu_set_irq(emc->tx_irq, 0);
454
+ qemu_set_irq(emc->rx_irq, 0);
455
+}
456
+
457
+static void emc_set_link(NetClientState *nc)
458
+{
459
+ /* Nothing to do yet. */
460
+}
461
+
462
+/* MISTA.TXINTR is the union of the individual bits with their enables. */
463
+static void emc_update_mista_txintr(NPCM7xxEMCState *emc)
464
+{
465
+ /* Only look at the bits we support. */
466
+ uint32_t mask = (REG_MISTA_TXBERR |
467
+ REG_MISTA_TDU |
468
+ REG_MISTA_TXCP);
469
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & mask) {
470
+ emc->regs[REG_MISTA] |= REG_MISTA_TXINTR;
471
+ } else {
472
+ emc->regs[REG_MISTA] &= ~REG_MISTA_TXINTR;
473
+ }
474
+}
475
+
476
+/* MISTA.RXINTR is the union of the individual bits with their enables. */
477
+static void emc_update_mista_rxintr(NPCM7xxEMCState *emc)
478
+{
479
+ /* Only look at the bits we support. */
480
+ uint32_t mask = (REG_MISTA_RXBERR |
481
+ REG_MISTA_RDU |
482
+ REG_MISTA_RXGD);
483
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & mask) {
484
+ emc->regs[REG_MISTA] |= REG_MISTA_RXINTR;
485
+ } else {
486
+ emc->regs[REG_MISTA] &= ~REG_MISTA_RXINTR;
487
+ }
488
+}
489
+
490
+/* N.B. emc_update_mista_txintr must have already been called. */
491
+static void emc_update_tx_irq(NPCM7xxEMCState *emc)
492
+{
493
+ int level = !!(emc->regs[REG_MISTA] &
494
+ emc->regs[REG_MIEN] &
495
+ REG_MISTA_TXINTR);
496
+ trace_npcm7xx_emc_update_tx_irq(level);
497
+ qemu_set_irq(emc->tx_irq, level);
498
+}
499
+
500
+/* N.B. emc_update_mista_rxintr must have already been called. */
501
+static void emc_update_rx_irq(NPCM7xxEMCState *emc)
502
+{
503
+ int level = !!(emc->regs[REG_MISTA] &
504
+ emc->regs[REG_MIEN] &
505
+ REG_MISTA_RXINTR);
506
+ trace_npcm7xx_emc_update_rx_irq(level);
507
+ qemu_set_irq(emc->rx_irq, level);
508
+}
509
+
510
+/* Update IRQ states due to changes in MIEN,MISTA. */
511
+static void emc_update_irq_from_reg_change(NPCM7xxEMCState *emc)
512
+{
513
+ emc_update_mista_txintr(emc);
514
+ emc_update_tx_irq(emc);
515
+
516
+ emc_update_mista_rxintr(emc);
517
+ emc_update_rx_irq(emc);
518
+}
519
+
520
+static int emc_read_tx_desc(dma_addr_t addr, NPCM7xxEMCTxDesc *desc)
521
+{
522
+ if (dma_memory_read(&address_space_memory, addr, desc, sizeof(*desc))) {
523
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read descriptor @ 0x%"
524
+ HWADDR_PRIx "\n", __func__, addr);
525
+ return -1;
526
+ }
527
+ desc->flags = le32_to_cpu(desc->flags);
528
+ desc->txbsa = le32_to_cpu(desc->txbsa);
529
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
530
+ desc->ntxdsa = le32_to_cpu(desc->ntxdsa);
531
+ return 0;
532
+}
533
+
534
+static int emc_write_tx_desc(const NPCM7xxEMCTxDesc *desc, dma_addr_t addr)
535
+{
536
+ NPCM7xxEMCTxDesc le_desc;
537
+
538
+ le_desc.flags = cpu_to_le32(desc->flags);
539
+ le_desc.txbsa = cpu_to_le32(desc->txbsa);
540
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
541
+ le_desc.ntxdsa = cpu_to_le32(desc->ntxdsa);
542
+ if (dma_memory_write(&address_space_memory, addr, &le_desc,
543
+ sizeof(le_desc))) {
544
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to write descriptor @ 0x%"
545
+ HWADDR_PRIx "\n", __func__, addr);
546
+ return -1;
547
+ }
548
+ return 0;
549
+}
550
+
551
+static int emc_read_rx_desc(dma_addr_t addr, NPCM7xxEMCRxDesc *desc)
552
+{
553
+ if (dma_memory_read(&address_space_memory, addr, desc, sizeof(*desc))) {
554
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read descriptor @ 0x%"
555
+ HWADDR_PRIx "\n", __func__, addr);
556
+ return -1;
557
+ }
558
+ desc->status_and_length = le32_to_cpu(desc->status_and_length);
559
+ desc->rxbsa = le32_to_cpu(desc->rxbsa);
560
+ desc->reserved = le32_to_cpu(desc->reserved);
561
+ desc->nrxdsa = le32_to_cpu(desc->nrxdsa);
562
+ return 0;
563
+}
564
+
565
+static int emc_write_rx_desc(const NPCM7xxEMCRxDesc *desc, dma_addr_t addr)
566
+{
567
+ NPCM7xxEMCRxDesc le_desc;
568
+
569
+ le_desc.status_and_length = cpu_to_le32(desc->status_and_length);
570
+ le_desc.rxbsa = cpu_to_le32(desc->rxbsa);
571
+ le_desc.reserved = cpu_to_le32(desc->reserved);
572
+ le_desc.nrxdsa = cpu_to_le32(desc->nrxdsa);
573
+ if (dma_memory_write(&address_space_memory, addr, &le_desc,
574
+ sizeof(le_desc))) {
575
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to write descriptor @ 0x%"
576
+ HWADDR_PRIx "\n", __func__, addr);
577
+ return -1;
578
+ }
579
+ return 0;
580
+}
581
+
582
+static void emc_set_mista(NPCM7xxEMCState *emc, uint32_t flags)
583
+{
584
+ trace_npcm7xx_emc_set_mista(flags);
585
+ emc->regs[REG_MISTA] |= flags;
586
+ if (extract32(flags, 16, 16)) {
587
+ emc_update_mista_txintr(emc);
588
+ }
589
+ if (extract32(flags, 0, 16)) {
590
+ emc_update_mista_rxintr(emc);
591
+ }
592
+}
593
+
594
+static void emc_halt_tx(NPCM7xxEMCState *emc, uint32_t mista_flag)
595
+{
596
+ emc->tx_active = false;
597
+ emc_set_mista(emc, mista_flag);
598
+}
599
+
600
+static void emc_halt_rx(NPCM7xxEMCState *emc, uint32_t mista_flag)
601
+{
602
+ emc->rx_active = false;
603
+ emc_set_mista(emc, mista_flag);
604
+}
605
+
606
+static void emc_set_next_tx_descriptor(NPCM7xxEMCState *emc,
607
+ const NPCM7xxEMCTxDesc *tx_desc,
608
+ uint32_t desc_addr)
609
+{
610
+ /* Update the current descriptor, if only to reset the owner flag. */
611
+ if (emc_write_tx_desc(tx_desc, desc_addr)) {
612
+ /*
613
+ * We just read it so this shouldn't generally happen.
614
+ * Error already reported.
615
+ */
616
+ emc_set_mista(emc, REG_MISTA_TXBERR);
617
+ }
618
+ emc->regs[REG_CTXDSA] = TX_DESC_NTXDSA(tx_desc->ntxdsa);
619
+}
620
+
621
+static void emc_set_next_rx_descriptor(NPCM7xxEMCState *emc,
622
+ const NPCM7xxEMCRxDesc *rx_desc,
623
+ uint32_t desc_addr)
624
+{
625
+ /* Update the current descriptor, if only to reset the owner flag. */
626
+ if (emc_write_rx_desc(rx_desc, desc_addr)) {
627
+ /*
628
+ * We just read it so this shouldn't generally happen.
629
+ * Error already reported.
630
+ */
631
+ emc_set_mista(emc, REG_MISTA_RXBERR);
632
+ }
633
+ emc->regs[REG_CRXDSA] = RX_DESC_NRXDSA(rx_desc->nrxdsa);
634
+}
635
+
636
+static void emc_try_send_next_packet(NPCM7xxEMCState *emc)
637
+{
638
+ /* Working buffer for sending out packets. Most packets fit in this. */
639
+#define TX_BUFFER_SIZE 2048
640
+ uint8_t tx_send_buffer[TX_BUFFER_SIZE];
641
+ uint32_t desc_addr = TX_DESC_NTXDSA(emc->regs[REG_CTXDSA]);
642
+ NPCM7xxEMCTxDesc tx_desc;
643
+ uint32_t next_buf_addr, length;
644
+ uint8_t *buf;
645
+ g_autofree uint8_t *malloced_buf = NULL;
646
+
647
+ if (emc_read_tx_desc(desc_addr, &tx_desc)) {
648
+ /* Error reading descriptor, already reported. */
649
+ emc_halt_tx(emc, REG_MISTA_TXBERR);
650
+ emc_update_tx_irq(emc);
651
+ return;
652
+ }
653
+
654
+ /* Nothing we can do if we don't own the descriptor. */
655
+ if (!(tx_desc.flags & TX_DESC_FLAG_OWNER_MASK)) {
656
+ trace_npcm7xx_emc_cpu_owned_desc(desc_addr);
657
+ emc_halt_tx(emc, REG_MISTA_TDU);
658
+ emc_update_tx_irq(emc);
659
+ return;
660
+ }
661
+
662
+ /* Give the descriptor back regardless of what happens. */
663
+ tx_desc.flags &= ~TX_DESC_FLAG_OWNER_MASK;
664
+ tx_desc.status_and_length &= 0xffff;
665
+
666
+ /*
667
+ * Despite the h/w documentation saying the tx buffer is word aligned,
668
+ * the linux driver does not word align the buffer. There is value in not
669
+ * aligning the buffer: See the description of NET_IP_ALIGN in linux
670
+ * kernel sources.
671
+ */
672
+ next_buf_addr = tx_desc.txbsa;
673
+ emc->regs[REG_CTXBSA] = next_buf_addr;
674
+ length = TX_DESC_PKT_LEN(tx_desc.status_and_length);
675
+ buf = &tx_send_buffer[0];
676
+
677
+ if (length > sizeof(tx_send_buffer)) {
678
+ malloced_buf = g_malloc(length);
679
+ buf = malloced_buf;
680
+ }
681
+
682
+ if (dma_memory_read(&address_space_memory, next_buf_addr, buf, length)) {
683
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Failed to read packet @ 0x%x\n",
684
+ __func__, next_buf_addr);
685
+ emc_set_mista(emc, REG_MISTA_TXBERR);
686
+ emc_set_next_tx_descriptor(emc, &tx_desc, desc_addr);
687
+ emc_update_tx_irq(emc);
688
+ trace_npcm7xx_emc_tx_done(emc->regs[REG_CTXDSA]);
689
+ return;
690
+ }
691
+
692
+ if ((tx_desc.flags & TX_DESC_FLAG_PADEN) && (length < MIN_PACKET_LENGTH)) {
693
+ memset(buf + length, 0, MIN_PACKET_LENGTH - length);
694
+ length = MIN_PACKET_LENGTH;
695
+ }
696
+
697
+ /* N.B. emc_receive can get called here. */
698
+ qemu_send_packet(qemu_get_queue(emc->nic), buf, length);
699
+ trace_npcm7xx_emc_sent_packet(length);
700
+
701
+ tx_desc.status_and_length |= TX_DESC_STATUS_TXCP;
702
+ if (tx_desc.flags & TX_DESC_FLAG_INTEN) {
703
+ emc_set_mista(emc, REG_MISTA_TXCP);
704
+ }
705
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & REG_MISTA_TXINTR) {
706
+ tx_desc.status_and_length |= TX_DESC_STATUS_TXINTR;
707
+ }
708
+
709
+ emc_set_next_tx_descriptor(emc, &tx_desc, desc_addr);
710
+ emc_update_tx_irq(emc);
711
+ trace_npcm7xx_emc_tx_done(emc->regs[REG_CTXDSA]);
712
+}
713
+
714
+static bool emc_can_receive(NetClientState *nc)
715
+{
716
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(qemu_get_nic_opaque(nc));
717
+
718
+ bool can_receive = emc->rx_active;
719
+ trace_npcm7xx_emc_can_receive(can_receive);
720
+ return can_receive;
721
+}
722
+
723
+/* If result is false then *fail_reason contains the reason. */
724
+static bool emc_receive_filter1(NPCM7xxEMCState *emc, const uint8_t *buf,
725
+ size_t len, const char **fail_reason)
726
+{
727
+ eth_pkt_types_e pkt_type = get_eth_packet_type(PKT_GET_ETH_HDR(buf));
728
+
729
+ switch (pkt_type) {
730
+ case ETH_PKT_BCAST:
731
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
732
+ return true;
733
+ } else {
734
+ *fail_reason = "Broadcast packet disabled";
735
+ return !!(emc->regs[REG_CAMCMR] & REG_CAMCMR_ABP);
736
+ }
737
+ case ETH_PKT_MCAST:
738
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
739
+ return true;
740
+ } else {
741
+ *fail_reason = "Multicast packet disabled";
742
+ return !!(emc->regs[REG_CAMCMR] & REG_CAMCMR_AMP);
743
+ }
744
+ case ETH_PKT_UCAST: {
745
+ bool matches;
746
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_AUP) {
747
+ return true;
748
+ }
749
+ matches = ((emc->regs[REG_CAMCMR] & REG_CAMCMR_ECMP) &&
750
+ /* We only support one CAM register, CAM0. */
751
+ (emc->regs[REG_CAMEN] & (1 << 0)) &&
752
+ memcmp(buf, emc->conf.macaddr.a, ETH_ALEN) == 0);
753
+ if (emc->regs[REG_CAMCMR] & REG_CAMCMR_CCAM) {
754
+ *fail_reason = "MACADDR matched, comparison complemented";
755
+ return !matches;
756
+ } else {
757
+ *fail_reason = "MACADDR didn't match";
758
+ return matches;
759
+ }
760
+ }
761
+ default:
762
+ g_assert_not_reached();
763
+ }
764
+}
765
+
766
+static bool emc_receive_filter(NPCM7xxEMCState *emc, const uint8_t *buf,
767
+ size_t len)
768
+{
769
+ const char *fail_reason = NULL;
770
+ bool ok = emc_receive_filter1(emc, buf, len, &fail_reason);
771
+ if (!ok) {
772
+ trace_npcm7xx_emc_packet_filtered_out(fail_reason);
773
+ }
774
+ return ok;
775
+}
776
+
777
+static ssize_t emc_receive(NetClientState *nc, const uint8_t *buf, size_t len1)
778
+{
779
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(qemu_get_nic_opaque(nc));
780
+ const uint32_t len = len1;
781
+ size_t max_frame_len;
782
+ bool long_frame;
783
+ uint32_t desc_addr;
784
+ NPCM7xxEMCRxDesc rx_desc;
785
+ uint32_t crc;
786
+ uint8_t *crc_ptr;
787
+ uint32_t buf_addr;
788
+
789
+ trace_npcm7xx_emc_receiving_packet(len);
790
+
791
+ if (!emc_can_receive(nc)) {
792
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Unexpected packet\n", __func__);
793
+ return -1;
794
+ }
795
+
796
+ if (len < ETH_HLEN ||
797
+ /* Defensive programming: drop unsupportable large packets. */
798
+ len > 0xffff - CRC_LENGTH) {
799
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Dropped frame of %u bytes\n",
800
+ __func__, len);
801
+ return len;
802
+ }
803
+
804
+ /*
805
+ * DENI is set if EMC received the Length/Type field of the incoming
806
+ * packet, so it will be set regardless of what happens next.
807
+ */
808
+ emc_set_mista(emc, REG_MISTA_DENI);
809
+
810
+ if (!emc_receive_filter(emc, buf, len)) {
811
+ emc_update_rx_irq(emc);
812
+ return len;
813
+ }
814
+
815
+ /* Huge frames (> DMARFC) are dropped. */
816
+ max_frame_len = REG_DMARFC_RXMS(emc->regs[REG_DMARFC]);
817
+ if (len + CRC_LENGTH > max_frame_len) {
818
+ trace_npcm7xx_emc_packet_dropped(len);
819
+ emc_set_mista(emc, REG_MISTA_DFOI);
820
+ emc_update_rx_irq(emc);
821
+ return len;
822
+ }
823
+
824
+ /*
825
+ * Long Frames (> MAX_ETH_FRAME_SIZE) are also dropped, unless MCMDR.ALP
826
+ * is set.
827
+ */
828
+ long_frame = false;
829
+ if (len + CRC_LENGTH > MAX_ETH_FRAME_SIZE) {
830
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_ALP) {
831
+ long_frame = true;
832
+ } else {
833
+ trace_npcm7xx_emc_packet_dropped(len);
834
+ emc_set_mista(emc, REG_MISTA_PTLE);
835
+ emc_update_rx_irq(emc);
836
+ return len;
837
+ }
838
+ }
839
+
840
+ desc_addr = RX_DESC_NRXDSA(emc->regs[REG_CRXDSA]);
841
+ if (emc_read_rx_desc(desc_addr, &rx_desc)) {
842
+ /* Error reading descriptor, already reported. */
843
+ emc_halt_rx(emc, REG_MISTA_RXBERR);
844
+ emc_update_rx_irq(emc);
845
+ return len;
846
+ }
847
+
848
+ /* Nothing we can do if we don't own the descriptor. */
849
+ if (!(rx_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK)) {
850
+ trace_npcm7xx_emc_cpu_owned_desc(desc_addr);
851
+ emc_halt_rx(emc, REG_MISTA_RDU);
852
+ emc_update_rx_irq(emc);
853
+ return len;
854
+ }
855
+
856
+ crc = 0;
857
+ crc_ptr = (uint8_t *) &crc;
858
+ if (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC)) {
859
+ crc = cpu_to_be32(crc32(~0, buf, len));
860
+ }
861
+
862
+ /* Give the descriptor back regardless of what happens. */
863
+ rx_desc.status_and_length &= ~RX_DESC_STATUS_OWNER_MASK;
864
+
865
+ buf_addr = rx_desc.rxbsa;
866
+ emc->regs[REG_CRXBSA] = buf_addr;
867
+ if (dma_memory_write(&address_space_memory, buf_addr, buf, len) ||
868
+ (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC) &&
869
+ dma_memory_write(&address_space_memory, buf_addr + len, crc_ptr,
870
+ 4))) {
871
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: Bus error writing packet\n",
872
+ __func__);
873
+ emc_set_mista(emc, REG_MISTA_RXBERR);
874
+ emc_set_next_rx_descriptor(emc, &rx_desc, desc_addr);
875
+ emc_update_rx_irq(emc);
876
+ trace_npcm7xx_emc_rx_done(emc->regs[REG_CRXDSA]);
877
+ return len;
878
+ }
879
+
880
+ trace_npcm7xx_emc_received_packet(len);
881
+
882
+ /* Note: We've already verified len+4 <= 0xffff. */
883
+ rx_desc.status_and_length = len;
884
+ if (!(emc->regs[REG_MCMDR] & REG_MCMDR_SPCRC)) {
885
+ rx_desc.status_and_length += 4;
886
+ }
887
+ rx_desc.status_and_length |= RX_DESC_STATUS_RXGD;
888
+ emc_set_mista(emc, REG_MISTA_RXGD);
889
+
890
+ if (emc->regs[REG_MISTA] & emc->regs[REG_MIEN] & REG_MISTA_RXINTR) {
891
+ rx_desc.status_and_length |= RX_DESC_STATUS_RXINTR;
892
+ }
893
+ if (long_frame) {
894
+ rx_desc.status_and_length |= RX_DESC_STATUS_PTLE;
895
+ }
896
+
897
+ emc_set_next_rx_descriptor(emc, &rx_desc, desc_addr);
898
+ emc_update_rx_irq(emc);
899
+ trace_npcm7xx_emc_rx_done(emc->regs[REG_CRXDSA]);
900
+ return len;
901
+}
902
+
903
+static void emc_try_receive_next_packet(NPCM7xxEMCState *emc)
904
+{
905
+ if (emc_can_receive(qemu_get_queue(emc->nic))) {
906
+ qemu_flush_queued_packets(qemu_get_queue(emc->nic));
907
+ }
908
+}
909
+
910
+static uint64_t npcm7xx_emc_read(void *opaque, hwaddr offset, unsigned size)
911
+{
912
+ NPCM7xxEMCState *emc = opaque;
913
+ uint32_t reg = offset / sizeof(uint32_t);
914
+ uint32_t result;
915
+
916
+ if (reg >= NPCM7XX_NUM_EMC_REGS) {
917
+ qemu_log_mask(LOG_GUEST_ERROR,
918
+ "%s: Invalid offset 0x%04" HWADDR_PRIx "\n",
919
+ __func__, offset);
920
+ return 0;
921
+ }
922
+
923
+ switch (reg) {
924
+ case REG_MIID:
925
+ /*
926
+ * We don't implement MII. For determinism, always return zero as
927
+ * writes record the last value written for debugging purposes.
928
+ */
929
+ qemu_log_mask(LOG_UNIMP, "%s: Read of MIID, returning 0\n", __func__);
930
+ result = 0;
931
+ break;
932
+ case REG_TSDR:
933
+ case REG_RSDR:
934
+ qemu_log_mask(LOG_GUEST_ERROR,
935
+ "%s: Read of write-only reg, %s/%d\n",
936
+ __func__, emc_reg_name(reg), reg);
937
+ return 0;
938
+ default:
939
+ result = emc->regs[reg];
940
+ break;
941
+ }
942
+
943
+ trace_npcm7xx_emc_reg_read(emc->emc_num, result, emc_reg_name(reg), reg);
944
+ return result;
945
+}
946
+
947
+static void npcm7xx_emc_write(void *opaque, hwaddr offset,
948
+ uint64_t v, unsigned size)
949
+{
950
+ NPCM7xxEMCState *emc = opaque;
951
+ uint32_t reg = offset / sizeof(uint32_t);
952
+ uint32_t value = v;
953
+
954
+ g_assert(size == sizeof(uint32_t));
955
+
956
+ if (reg >= NPCM7XX_NUM_EMC_REGS) {
957
+ qemu_log_mask(LOG_GUEST_ERROR,
958
+ "%s: Invalid offset 0x%04" HWADDR_PRIx "\n",
959
+ __func__, offset);
960
+ return;
961
+ }
962
+
963
+ trace_npcm7xx_emc_reg_write(emc->emc_num, emc_reg_name(reg), reg, value);
964
+
965
+ switch (reg) {
966
+ case REG_CAMCMR:
967
+ emc->regs[reg] = value;
968
+ break;
969
+ case REG_CAMEN:
970
+ /* Only CAM0 is supported, don't pretend otherwise. */
971
+ if (value & ~1) {
972
+ qemu_log_mask(LOG_GUEST_ERROR,
973
+ "%s: Only CAM0 is supported, cannot enable others"
974
+ ": 0x%x\n",
975
+ __func__, value);
976
+ }
977
+ emc->regs[reg] = value & 1;
978
+ break;
979
+ case REG_CAMM_BASE + 0:
980
+ emc->regs[reg] = value;
981
+ emc->conf.macaddr.a[0] = value >> 24;
982
+ emc->conf.macaddr.a[1] = value >> 16;
983
+ emc->conf.macaddr.a[2] = value >> 8;
984
+ emc->conf.macaddr.a[3] = value >> 0;
985
+ break;
986
+ case REG_CAML_BASE + 0:
987
+ emc->regs[reg] = value;
988
+ emc->conf.macaddr.a[4] = value >> 24;
989
+ emc->conf.macaddr.a[5] = value >> 16;
990
+ break;
991
+ case REG_MCMDR: {
992
+ uint32_t prev;
993
+ if (value & REG_MCMDR_SWR) {
994
+ emc_soft_reset(emc);
995
+ /* On h/w the reset happens over multiple cycles. For now KISS. */
996
+ break;
997
+ }
998
+ prev = emc->regs[reg];
999
+ emc->regs[reg] = value;
1000
+ /* Update tx state. */
1001
+ if (!(prev & REG_MCMDR_TXON) &&
1002
+ (value & REG_MCMDR_TXON)) {
1003
+ emc->regs[REG_CTXDSA] = emc->regs[REG_TXDLSA];
1004
+ /*
1005
+ * Linux kernel turns TX on with CPU still holding descriptor,
1006
+ * which suggests we should wait for a write to TSDR before trying
1007
+ * to send a packet: so we don't send one here.
1008
+ */
1009
+ } else if ((prev & REG_MCMDR_TXON) &&
1010
+ !(value & REG_MCMDR_TXON)) {
1011
+ emc->regs[REG_MGSTA] |= REG_MGSTA_TXHA;
1012
+ }
1013
+ if (!(value & REG_MCMDR_TXON)) {
1014
+ emc_halt_tx(emc, 0);
1015
+ }
1016
+ /* Update rx state. */
1017
+ if (!(prev & REG_MCMDR_RXON) &&
1018
+ (value & REG_MCMDR_RXON)) {
1019
+ emc->regs[REG_CRXDSA] = emc->regs[REG_RXDLSA];
1020
+ } else if ((prev & REG_MCMDR_RXON) &&
1021
+ !(value & REG_MCMDR_RXON)) {
1022
+ emc->regs[REG_MGSTA] |= REG_MGSTA_RXHA;
1023
+ }
1024
+ if (!(value & REG_MCMDR_RXON)) {
1025
+ emc_halt_rx(emc, 0);
1026
+ }
1027
+ break;
1028
+ }
1029
+ case REG_TXDLSA:
1030
+ case REG_RXDLSA:
1031
+ case REG_DMARFC:
1032
+ case REG_MIID:
1033
+ emc->regs[reg] = value;
1034
+ break;
1035
+ case REG_MIEN:
1036
+ emc->regs[reg] = value;
1037
+ emc_update_irq_from_reg_change(emc);
1038
+ break;
1039
+ case REG_MISTA:
1040
+ /* Clear the bits that have 1 in "value". */
1041
+ emc->regs[reg] &= ~value;
1042
+ emc_update_irq_from_reg_change(emc);
1043
+ break;
1044
+ case REG_MGSTA:
1045
+ /* Clear the bits that have 1 in "value". */
1046
+ emc->regs[reg] &= ~value;
1047
+ break;
1048
+ case REG_TSDR:
1049
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_TXON) {
1050
+ emc->tx_active = true;
1051
+ /* Keep trying to send packets until we run out. */
1052
+ while (emc->tx_active) {
1053
+ emc_try_send_next_packet(emc);
1054
+ }
1055
+ }
1056
+ break;
1057
+ case REG_RSDR:
1058
+ if (emc->regs[REG_MCMDR] & REG_MCMDR_RXON) {
1059
+ emc->rx_active = true;
1060
+ emc_try_receive_next_packet(emc);
1061
+ }
1062
+ break;
1063
+ case REG_MIIDA:
1064
+ emc->regs[reg] = value & ~REG_MIIDA_BUSY;
1065
+ break;
1066
+ case REG_MRPC:
1067
+ case REG_MRPCC:
1068
+ case REG_MREPC:
1069
+ case REG_CTXDSA:
1070
+ case REG_CTXBSA:
1071
+ case REG_CRXDSA:
1072
+ case REG_CRXBSA:
1073
+ qemu_log_mask(LOG_GUEST_ERROR,
1074
+ "%s: Write to read-only reg %s/%d\n",
1075
+ __func__, emc_reg_name(reg), reg);
1076
+ break;
1077
+ default:
1078
+ qemu_log_mask(LOG_UNIMP, "%s: Write to unimplemented reg %s/%d\n",
1079
+ __func__, emc_reg_name(reg), reg);
1080
+ break;
1081
+ }
1082
+}
1083
+
1084
+static const struct MemoryRegionOps npcm7xx_emc_ops = {
1085
+ .read = npcm7xx_emc_read,
1086
+ .write = npcm7xx_emc_write,
1087
+ .endianness = DEVICE_LITTLE_ENDIAN,
1088
+ .valid = {
1089
+ .min_access_size = 4,
1090
+ .max_access_size = 4,
1091
+ .unaligned = false,
1092
+ },
1093
+};
1094
+
1095
+static void emc_cleanup(NetClientState *nc)
1096
+{
1097
+ /* Nothing to do yet. */
1098
+}
1099
+
1100
+static NetClientInfo net_npcm7xx_emc_info = {
1101
+ .type = NET_CLIENT_DRIVER_NIC,
1102
+ .size = sizeof(NICState),
1103
+ .can_receive = emc_can_receive,
1104
+ .receive = emc_receive,
1105
+ .cleanup = emc_cleanup,
1106
+ .link_status_changed = emc_set_link,
1107
+};
1108
+
1109
+static void npcm7xx_emc_realize(DeviceState *dev, Error **errp)
1110
+{
1111
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
1112
+ SysBusDevice *sbd = SYS_BUS_DEVICE(emc);
1113
+
1114
+ memory_region_init_io(&emc->iomem, OBJECT(emc), &npcm7xx_emc_ops, emc,
1115
+ TYPE_NPCM7XX_EMC, 4 * KiB);
1116
+ sysbus_init_mmio(sbd, &emc->iomem);
1117
+ sysbus_init_irq(sbd, &emc->tx_irq);
1118
+ sysbus_init_irq(sbd, &emc->rx_irq);
1119
+
1120
+ qemu_macaddr_default_if_unset(&emc->conf.macaddr);
1121
+ emc->nic = qemu_new_nic(&net_npcm7xx_emc_info, &emc->conf,
1122
+ object_get_typename(OBJECT(dev)), dev->id, emc);
1123
+ qemu_format_nic_info_str(qemu_get_queue(emc->nic), emc->conf.macaddr.a);
1124
+}
1125
+
1126
+static void npcm7xx_emc_unrealize(DeviceState *dev)
1127
+{
1128
+ NPCM7xxEMCState *emc = NPCM7XX_EMC(dev);
1129
+
1130
+ qemu_del_nic(emc->nic);
1131
+}
1132
+
1133
+static const VMStateDescription vmstate_npcm7xx_emc = {
1134
+ .name = TYPE_NPCM7XX_EMC,
1135
+ .version_id = 0,
1136
+ .minimum_version_id = 0,
1137
+ .fields = (VMStateField[]) {
1138
+ VMSTATE_UINT8(emc_num, NPCM7xxEMCState),
1139
+ VMSTATE_UINT32_ARRAY(regs, NPCM7xxEMCState, NPCM7XX_NUM_EMC_REGS),
1140
+ VMSTATE_BOOL(tx_active, NPCM7xxEMCState),
1141
+ VMSTATE_BOOL(rx_active, NPCM7xxEMCState),
1142
+ VMSTATE_END_OF_LIST(),
1143
+ },
1144
+};
1145
+
1146
+static Property npcm7xx_emc_properties[] = {
1147
+ DEFINE_NIC_PROPERTIES(NPCM7xxEMCState, conf),
1148
+ DEFINE_PROP_END_OF_LIST(),
1149
+};
1150
+
1151
+static void npcm7xx_emc_class_init(ObjectClass *klass, void *data)
1152
+{
1153
+ DeviceClass *dc = DEVICE_CLASS(klass);
1154
+
1155
+ set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
1156
+ dc->desc = "NPCM7xx EMC Controller";
1157
+ dc->realize = npcm7xx_emc_realize;
1158
+ dc->unrealize = npcm7xx_emc_unrealize;
1159
+ dc->reset = npcm7xx_emc_reset;
1160
+ dc->vmsd = &vmstate_npcm7xx_emc;
1161
+ device_class_set_props(dc, npcm7xx_emc_properties);
1162
+}
1163
+
1164
+static const TypeInfo npcm7xx_emc_info = {
1165
+ .name = TYPE_NPCM7XX_EMC,
1166
+ .parent = TYPE_SYS_BUS_DEVICE,
1167
+ .instance_size = sizeof(NPCM7xxEMCState),
1168
+ .class_init = npcm7xx_emc_class_init,
1169
+};
1170
+
1171
+static void npcm7xx_emc_register_type(void)
1172
+{
1173
+ type_register_static(&npcm7xx_emc_info);
1174
+}
1175
+
1176
+type_init(npcm7xx_emc_register_type)
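
To make the TX flow above concrete, here is a guest-driver-eye sketch, not part of the patch, of queueing a single frame. The register word indices and bit positions come from npcm7xx_emc.h; the identity-mapped buffer, the bare MMIO pointer and the absence of barriers are simplifications:

    #include <stdint.h>

    struct emc_tx_desc {                /* mirrors NPCM7xxEMCTxDesc (little-endian) */
        uint32_t flags;                 /* bit 31: owner=EMC, bit 2: INTEN, bit 0: PADEN */
        uint32_t txbsa;                 /* transmit buffer start address */
        uint32_t status_and_length;     /* bits [15:0]: frame length */
        uint32_t ntxdsa;                /* next descriptor start address */
    };

    enum { R_TXDLSA = 0x22, R_MCMDR = 0x24, R_TSDR = 0x28 };  /* 32-bit register indices */
    #define MCMDR_TXON (1u << 8)

    static void emc_send_one(volatile uint32_t *regs, struct emc_tx_desc *d,
                             uint32_t desc_pa, uint32_t buf_pa, uint16_t len)
    {
        d->txbsa = buf_pa;                              /* frame already copied to buf_pa */
        d->status_and_length = len;
        d->ntxdsa = desc_pa;                            /* one-entry ring points at itself */
        d->flags = (1u << 31) | (1u << 2) | (1u << 0);  /* owner=EMC, irq on completion, pad */

        regs[R_TXDLSA] = desc_pa;       /* descriptor list start address */
        regs[R_MCMDR] |= MCMDR_TXON;    /* 0->1 transition latches TXDLSA into CTXDSA */
        regs[R_TSDR] = 1;               /* any write starts the transmit state machine */
    }

After the send the model clears the owner bit, raises MISTA.TXCP (because INTEN is set) and then halts with MISTA.TDU once it finds the descriptor back in CPU ownership.
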
1177
diff --git a/hw/net/meson.build b/hw/net/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/meson.build
+++ b/hw/net/meson.build
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_I82596_COMMON', if_true: files('i82596.c'))
softmmu_ss.add(when: 'CONFIG_SUNHME', if_true: files('sunhme.c'))
softmmu_ss.add(when: 'CONFIG_FTGMAC100', if_true: files('ftgmac100.c'))
softmmu_ss.add(when: 'CONFIG_SUNGEM', if_true: files('sungem.c'))
+softmmu_ss.add(when: 'CONFIG_NPCM7XX', if_true: files('npcm7xx_emc.c'))

softmmu_ss.add(when: 'CONFIG_ETRAXFS', if_true: files('etraxfs_eth.c'))
softmmu_ss.add(when: 'CONFIG_COLDFIRE', if_true: files('mcf_fec.c'))
diff --git a/hw/net/trace-events b/hw/net/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/trace-events
+++ b/hw/net/trace-events
@@ -XXX,XX +XXX,XX @@ imx_fec_receive_last(int last) "rx frame flags 0x%04x"
imx_enet_receive(size_t size) "len %zu"
imx_enet_receive_len(uint64_t addr, int len) "rx_bd 0x%"PRIx64" length %d"
imx_enet_receive_last(int last) "rx frame flags 0x%04x"
+
+# npcm7xx_emc.c
+npcm7xx_emc_reset(int emc_num) "Resetting emc%d"
+npcm7xx_emc_update_tx_irq(int level) "Setting tx irq to %d"
+npcm7xx_emc_update_rx_irq(int level) "Setting rx irq to %d"
+npcm7xx_emc_set_mista(uint32_t flags) "ORing 0x%x into MISTA"
+npcm7xx_emc_cpu_owned_desc(uint32_t addr) "Can't process cpu-owned descriptor @0x%x"
+npcm7xx_emc_sent_packet(uint32_t len) "Sent %u byte packet"
+npcm7xx_emc_tx_done(uint32_t ctxdsa) "TX done, CTXDSA=0x%x"
+npcm7xx_emc_can_receive(int can_receive) "Can receive: %d"
+npcm7xx_emc_packet_filtered_out(const char* fail_reason) "Packet filtered out: %s"
+npcm7xx_emc_packet_dropped(uint32_t len) "%u byte packet dropped"
+npcm7xx_emc_receiving_packet(uint32_t len) "Receiving %u byte packet"
+npcm7xx_emc_received_packet(uint32_t len) "Received %u byte packet"
+npcm7xx_emc_rx_done(uint32_t crxdsa) "RX done, CRXDSA=0x%x"
+npcm7xx_emc_reg_read(int emc_num, uint32_t result, const char *name, int regno) "emc%d: 0x%x = reg[%s/%d]"
+npcm7xx_emc_reg_write(int emc_num, const char *name, int regno, uint32_t value) "emc%d: reg[%s/%d] = 0x%x"
--
2.20.1
1
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
1
From: Doug Evans <dje@google.com>
2
2
3
(qemu) info mtree
3
This is a 10/100 Ethernet device that has several features.
4
address-space: cpu-memory-0
4
Only the ones needed by the Linux driver have been implemented.
5
0000000000000000-ffffffffffffffff (prio 0, i/o): system
5
See npcm7xx_emc.c for a list of unimplemented features.
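
As a rough orientation only (this sketch is not part of the patch: the
register indices and bit values below are taken from the qtest added later
in this series, and the bare-metal MMIO helper is purely illustrative), the
transmit kick-off sequence the model expects from a guest driver looks
roughly like this; in practice the driver first issues a software reset via
MCMDR.SWR before changing TXDLSA:

    #include <stdint.h>

    #define EMC0_BASE       0xf0825000u  /* first EMC instance */
    #define REG_TXDLSA      0x22         /* TX descriptor list start address */
    #define REG_MCMDR       0x24         /* MAC command register */
    #define REG_TSDR        0x28         /* TX software demand */
    #define REG_MIEN        0x2b         /* interrupt enable */
    #define REG_MCMDR_TXON  (1 << 8)     /* frame transmission on */
    #define REG_MIEN_ENTXCP (1 << 18)    /* enable TX completion interrupt */

    /* Illustrative 32-bit MMIO write: registers sit at base + reg * 4. */
    static void emc_write(uint32_t reg, uint32_t val)
    {
        *(volatile uint32_t *)(uintptr_t)(EMC0_BASE + reg * 4) = val;
    }

    static void emc_start_tx(uint32_t desc_list_addr)
    {
        emc_write(REG_TXDLSA, desc_list_addr); /* point at the descriptor chain */
        emc_write(REG_MIEN, REG_MIEN_ENTXCP);  /* interrupt on TX completion */
        emc_write(REG_MCMDR, REG_MCMDR_TXON);  /* turn the transmitter on */
        emc_write(REG_TSDR, 1);                /* prod the TX DMA engine */
    }

On the QEMU command line the device picks up its backend positionally
(emc[i] = nd_table[i], as the realize code below explains), so a single
-nic option gives emc0 its backend on the quanta-gsj machine.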
6
0000000000000000-0000000007ffffff (prio 0, rom): aspeed.boot_rom
7
- 000000001e600000-000000001e7fffff (prio -1, i/o): aspeed_soc.io
8
+ 000000001e600000-000000001e7fffff (prio -1000, i/o): aspeed_soc.io
9
000000001e620000-000000001e6200ff (prio 0, i/o): aspeed.smc.ast2500-fmc
10
000000001e630000-000000001e6300ff (prio 0, i/o): aspeed.smc.ast2500-spi1
11
000000001e631000-000000001e6310ff (prio 0, i/o): aspeed.smc.ast2500-spi2
12
6
13
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Hao Wu <wuhaotsh@google.com>
14
Reviewed-by: Cédric Le Goater <clg@kaod.org>
8
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
15
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Message-id: 20180209085755.30414-3-f4bug@amsat.org
10
Signed-off-by: Doug Evans <dje@google.com>
11
Message-id: 20210209015541.778833-3-dje@google.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
13
---
19
include/hw/arm/aspeed_soc.h | 1 -
14
docs/system/arm/nuvoton.rst | 3 ++-
20
hw/arm/aspeed_soc.c | 32 +++-----------------------------
15
include/hw/arm/npcm7xx.h | 2 ++
21
2 files changed, 3 insertions(+), 30 deletions(-)
16
hw/arm/npcm7xx.c | 50 +++++++++++++++++++++++++++++++++++--
17
3 files changed, 52 insertions(+), 3 deletions(-)
22
18
23
diff --git a/include/hw/arm/aspeed_soc.h b/include/hw/arm/aspeed_soc.h
19
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
24
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
25
--- a/include/hw/arm/aspeed_soc.h
21
--- a/docs/system/arm/nuvoton.rst
26
+++ b/include/hw/arm/aspeed_soc.h
22
+++ b/docs/system/arm/nuvoton.rst
27
@@ -XXX,XX +XXX,XX @@ typedef struct AspeedSoCState {
23
@@ -XXX,XX +XXX,XX @@ Supported devices
28
24
* GPIO controller
29
/*< public >*/
25
* Analog to Digital Converter (ADC)
30
ARMCPU cpu;
26
* Pulse Width Modulation (PWM)
31
- MemoryRegion iomem;
27
+ * Ethernet controller (EMC)
32
MemoryRegion sram;
28
33
AspeedVICState vic;
29
Missing devices
34
AspeedTimerCtrlState timerctrl;
30
---------------
35
diff --git a/hw/arm/aspeed_soc.c b/hw/arm/aspeed_soc.c
31
@@ -XXX,XX +XXX,XX @@ Missing devices
32
* Shared memory (SHM)
33
* eSPI slave interface
34
35
- * Ethernet controllers (GMAC and EMC)
36
+ * Ethernet controller (GMAC)
37
* USB device (USBD)
38
* SMBus controller (SMBF)
39
* Peripheral SPI controller (PSPI)
40
diff --git a/include/hw/arm/npcm7xx.h b/include/hw/arm/npcm7xx.h
36
index XXXXXXX..XXXXXXX 100644
41
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/arm/aspeed_soc.c
42
--- a/include/hw/arm/npcm7xx.h
38
+++ b/hw/arm/aspeed_soc.c
43
+++ b/include/hw/arm/npcm7xx.h
39
@@ -XXX,XX +XXX,XX @@
44
@@ -XXX,XX +XXX,XX @@
40
#include "qemu-common.h"
45
#include "hw/misc/npcm7xx_gcr.h"
41
#include "cpu.h"
46
#include "hw/misc/npcm7xx_pwm.h"
42
#include "exec/address-spaces.h"
47
#include "hw/misc/npcm7xx_rng.h"
43
+#include "hw/misc/unimp.h"
48
+#include "hw/net/npcm7xx_emc.h"
44
#include "hw/arm/aspeed_soc.h"
49
#include "hw/nvram/npcm7xx_otp.h"
45
#include "hw/char/serial.h"
50
#include "hw/timer/npcm7xx_timer.h"
46
#include "qemu/log.h"
51
#include "hw/ssi/npcm7xx_fiu.h"
47
@@ -XXX,XX +XXX,XX @@ static const AspeedSoCInfo aspeed_socs[] = {
52
@@ -XXX,XX +XXX,XX @@ typedef struct NPCM7xxState {
48
},
53
EHCISysBusState ehci;
54
OHCISysBusState ohci;
55
NPCM7xxFIUState fiu[2];
56
+ NPCM7xxEMCState emc[2];
57
} NPCM7xxState;
58
59
#define TYPE_NPCM7XX "npcm7xx"
60
diff --git a/hw/arm/npcm7xx.c b/hw/arm/npcm7xx.c
61
index XXXXXXX..XXXXXXX 100644
62
--- a/hw/arm/npcm7xx.c
63
+++ b/hw/arm/npcm7xx.c
64
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
65
NPCM7XX_UART1_IRQ,
66
NPCM7XX_UART2_IRQ,
67
NPCM7XX_UART3_IRQ,
68
+ NPCM7XX_EMC1RX_IRQ = 15,
69
+ NPCM7XX_EMC1TX_IRQ,
70
NPCM7XX_TIMER0_IRQ = 32, /* Timer Module 0 */
71
NPCM7XX_TIMER1_IRQ,
72
NPCM7XX_TIMER2_IRQ,
73
@@ -XXX,XX +XXX,XX @@ enum NPCM7xxInterrupt {
74
NPCM7XX_OHCI_IRQ = 62,
75
NPCM7XX_PWM0_IRQ = 93, /* PWM module 0 */
76
NPCM7XX_PWM1_IRQ, /* PWM module 1 */
77
+ NPCM7XX_EMC2RX_IRQ = 114,
78
+ NPCM7XX_EMC2TX_IRQ,
79
NPCM7XX_GPIO0_IRQ = 116,
80
NPCM7XX_GPIO1_IRQ,
81
NPCM7XX_GPIO2_IRQ,
82
@@ -XXX,XX +XXX,XX @@ static const hwaddr npcm7xx_pwm_addr[] = {
83
0xf0104000,
49
};
84
};
50
85
51
-/*
86
+/* Register base address for each EMC Module */
52
- * IO handlers: simply catch any reads/writes to IO addresses that aren't
87
+static const hwaddr npcm7xx_emc_addr[] = {
53
- * handled by a device mapping.
88
+ 0xf0825000,
54
- */
89
+ 0xf0826000,
55
-
90
+};
56
-static uint64_t aspeed_soc_io_read(void *p, hwaddr offset, unsigned size)
91
+
57
-{
92
static const struct {
58
- qemu_log_mask(LOG_UNIMP, "%s: 0x%" HWADDR_PRIx " [%u]\n",
93
hwaddr regs_addr;
59
- __func__, offset, size);
94
uint32_t unconnected_pins;
60
- return 0;
95
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_init(Object *obj)
61
-}
96
for (i = 0; i < ARRAY_SIZE(s->pwm); i++) {
62
-
97
object_initialize_child(obj, "pwm[*]", &s->pwm[i], TYPE_NPCM7XX_PWM);
63
-static void aspeed_soc_io_write(void *opaque, hwaddr offset, uint64_t value,
98
}
64
- unsigned size)
99
+
65
-{
100
+ for (i = 0; i < ARRAY_SIZE(s->emc); i++) {
66
- qemu_log_mask(LOG_UNIMP, "%s: 0x%" HWADDR_PRIx " <- 0x%" PRIx64 " [%u]\n",
101
+ object_initialize_child(obj, "emc[*]", &s->emc[i], TYPE_NPCM7XX_EMC);
67
- __func__, offset, value, size);
102
+ }
68
-}
103
}
69
-
104
70
-static const MemoryRegionOps aspeed_soc_io_ops = {
105
static void npcm7xx_realize(DeviceState *dev, Error **errp)
71
- .read = aspeed_soc_io_read,
106
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
72
- .write = aspeed_soc_io_write,
107
sysbus_connect_irq(sbd, i, npcm7xx_irq(s, NPCM7XX_PWM0_IRQ + i));
73
- .endianness = DEVICE_LITTLE_ENDIAN,
108
}
74
-};
109
75
-
110
+ /*
76
static void aspeed_soc_init(Object *obj)
111
+ * EMC Modules. Cannot fail.
77
{
112
+ * The mapping of the device to its netdev backend works as follows:
78
AspeedSoCState *s = ASPEED_SOC(obj);
113
+ * emc[i] = nd_table[i]
79
@@ -XXX,XX +XXX,XX @@ static void aspeed_soc_realize(DeviceState *dev, Error **errp)
114
+ * This works around the inability to specify the netdev property for the
80
Error *err = NULL, *local_err = NULL;
115
+ * emc device: it's not pluggable and thus the -device option can't be
81
116
+ * used.
82
/* IO space */
117
+ */
83
- memory_region_init_io(&s->iomem, NULL, &aspeed_soc_io_ops, NULL,
118
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(npcm7xx_emc_addr) != ARRAY_SIZE(s->emc));
84
- "aspeed_soc.io", ASPEED_SOC_IOMEM_SIZE);
119
+ QEMU_BUILD_BUG_ON(ARRAY_SIZE(s->emc) != 2);
85
- memory_region_add_subregion_overlap(get_system_memory(),
120
+ for (i = 0; i < ARRAY_SIZE(s->emc); i++) {
86
- ASPEED_SOC_IOMEM_BASE, &s->iomem, -1);
121
+ s->emc[i].emc_num = i;
87
+ create_unimplemented_device("aspeed_soc.io",
122
+ SysBusDevice *sbd = SYS_BUS_DEVICE(&s->emc[i]);
88
+ ASPEED_SOC_IOMEM_BASE, ASPEED_SOC_IOMEM_SIZE);
123
+ if (nd_table[i].used) {
89
124
+ qemu_check_nic_model(&nd_table[i], TYPE_NPCM7XX_EMC);
90
/* CPU */
125
+ qdev_set_nic_properties(DEVICE(sbd), &nd_table[i]);
91
object_property_set_bool(OBJECT(&s->cpu), true, "realized", &err);
126
+ }
127
+ /*
128
+ * The device exists regardless of whether it's connected to a QEMU
129
+ * netdev backend. So always instantiate it even if there is no
130
+ * backend.
131
+ */
132
+ sysbus_realize(sbd, &error_abort);
133
+ sysbus_mmio_map(sbd, 0, npcm7xx_emc_addr[i]);
134
+ int tx_irq = i == 0 ? NPCM7XX_EMC1TX_IRQ : NPCM7XX_EMC2TX_IRQ;
135
+ int rx_irq = i == 0 ? NPCM7XX_EMC1RX_IRQ : NPCM7XX_EMC2RX_IRQ;
136
+ /*
137
+ * N.B. The values for the second argument sysbus_connect_irq are
138
+ * chosen to match the registration order in npcm7xx_emc_realize.
139
+ */
140
+ sysbus_connect_irq(sbd, 0, npcm7xx_irq(s, tx_irq));
141
+ sysbus_connect_irq(sbd, 1, npcm7xx_irq(s, rx_irq));
142
+ }
143
+
144
/*
145
* Flash Interface Unit (FIU). Can fail if incorrect number of chip selects
146
* specified, but this is a programming error.
147
@@ -XXX,XX +XXX,XX @@ static void npcm7xx_realize(DeviceState *dev, Error **errp)
148
create_unimplemented_device("npcm7xx.vcd", 0xf0810000, 64 * KiB);
149
create_unimplemented_device("npcm7xx.ece", 0xf0820000, 8 * KiB);
150
create_unimplemented_device("npcm7xx.vdma", 0xf0822000, 8 * KiB);
151
- create_unimplemented_device("npcm7xx.emc1", 0xf0825000, 4 * KiB);
152
- create_unimplemented_device("npcm7xx.emc2", 0xf0826000, 4 * KiB);
153
create_unimplemented_device("npcm7xx.usbd[0]", 0xf0830000, 4 * KiB);
154
create_unimplemented_device("npcm7xx.usbd[1]", 0xf0831000, 4 * KiB);
155
create_unimplemented_device("npcm7xx.usbd[2]", 0xf0832000, 4 * KiB);
92
--
156
--
93
2.16.1
157
2.20.1
94
158
95
159
New patch
1
From: Doug Evans <dje@google.com>
1
2
3
Reviewed-by: Hao Wu <wuhaotsh@google.com>
4
Reviewed-by: Avi Fishman <avi.fishman@nuvoton.com>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Doug Evans <dje@google.com>
7
Message-id: 20210209015541.778833-4-dje@google.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
tests/qtest/npcm7xx_emc-test.c | 812 +++++++++++++++++++++++++++++++++
11
tests/qtest/meson.build | 1 +
12
2 files changed, 813 insertions(+)
13
create mode 100644 tests/qtest/npcm7xx_emc-test.c
14
15
diff --git a/tests/qtest/npcm7xx_emc-test.c b/tests/qtest/npcm7xx_emc-test.c
16
new file mode 100644
17
index XXXXXXX..XXXXXXX
18
--- /dev/null
19
+++ b/tests/qtest/npcm7xx_emc-test.c
20
@@ -XXX,XX +XXX,XX @@
21
+/*
22
+ * QTests for Nuvoton NPCM7xx EMC Modules.
23
+ *
24
+ * Copyright 2020 Google LLC
25
+ *
26
+ * This program is free software; you can redistribute it and/or modify it
27
+ * under the terms of the GNU General Public License as published by the
28
+ * Free Software Foundation; either version 2 of the License, or
29
+ * (at your option) any later version.
30
+ *
31
+ * This program is distributed in the hope that it will be useful, but WITHOUT
32
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
33
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
34
+ * for more details.
35
+ */
36
+
37
+#include "qemu/osdep.h"
38
+#include "qemu-common.h"
39
+#include "libqos/libqos.h"
40
+#include "qapi/qmp/qdict.h"
41
+#include "qapi/qmp/qnum.h"
42
+#include "qemu/bitops.h"
43
+#include "qemu/iov.h"
44
+
45
+/* Name of the emc device. */
46
+#define TYPE_NPCM7XX_EMC "npcm7xx-emc"
47
+
48
+/* Timeout for various operations, in seconds. */
49
+#define TIMEOUT_SECONDS 10
50
+
51
+/* Address in memory of the descriptor. */
52
+#define DESC_ADDR (1 << 20) /* 1 MiB */
53
+
54
+/* Address in memory of the data packet. */
55
+#define DATA_ADDR (DESC_ADDR + 4096)
56
+
57
+#define CRC_LENGTH 4
58
+
59
+#define NUM_TX_DESCRIPTORS 3
60
+#define NUM_RX_DESCRIPTORS 2
61
+
62
+/* Size of tx,rx test buffers. */
63
+#define TX_DATA_LEN 64
64
+#define RX_DATA_LEN 64
65
+
66
+#define TX_STEP_COUNT 10000
67
+#define RX_STEP_COUNT 10000
68
+
69
+/* 32-bit register indices. */
70
+typedef enum NPCM7xxPWMRegister {
71
+ /* Control registers. */
72
+ REG_CAMCMR,
73
+ REG_CAMEN,
74
+
75
+ /* There are 16 CAMn[ML] registers. */
76
+ REG_CAMM_BASE,
77
+ REG_CAML_BASE,
78
+
79
+ REG_TXDLSA = 0x22,
80
+ REG_RXDLSA,
81
+ REG_MCMDR,
82
+ REG_MIID,
83
+ REG_MIIDA,
84
+ REG_FFTCR,
85
+ REG_TSDR,
86
+ REG_RSDR,
87
+ REG_DMARFC,
88
+ REG_MIEN,
89
+
90
+ /* Status registers. */
91
+ REG_MISTA,
92
+ REG_MGSTA,
93
+ REG_MPCNT,
94
+ REG_MRPC,
95
+ REG_MRPCC,
96
+ REG_MREPC,
97
+ REG_DMARFS,
98
+ REG_CTXDSA,
99
+ REG_CTXBSA,
100
+ REG_CRXDSA,
101
+ REG_CRXBSA,
102
+
103
+ NPCM7XX_NUM_EMC_REGS,
104
+} NPCM7xxPWMRegister;
105
+
106
+enum { NUM_CAMML_REGS = 16 };
107
+
108
+/* REG_CAMCMR fields */
109
+/* Enable CAM Compare */
110
+#define REG_CAMCMR_ECMP (1 << 4)
111
+/* Accept Unicast Packet */
112
+#define REG_CAMCMR_AUP (1 << 0)
113
+
114
+/* REG_MCMDR fields */
115
+/* Software Reset */
116
+#define REG_MCMDR_SWR (1 << 24)
117
+/* Frame Transmission On */
118
+#define REG_MCMDR_TXON (1 << 8)
119
+/* Accept Long Packet */
120
+#define REG_MCMDR_ALP (1 << 1)
121
+/* Frame Reception On */
122
+#define REG_MCMDR_RXON (1 << 0)
123
+
124
+/* REG_MIEN fields */
125
+/* Enable Transmit Completion Interrupt */
126
+#define REG_MIEN_ENTXCP (1 << 18)
127
+/* Enable Transmit Interrupt */
128
+#define REG_MIEN_ENTXINTR (1 << 16)
129
+/* Enable Receive Good Interrupt */
130
+#define REG_MIEN_ENRXGD (1 << 4)
131
+/* Enable Receive Interrupt */
132
+#define REG_MIEN_ENRXINTR (1 << 0)
133
+
134
+/* REG_MISTA fields */
135
+/* Transmit Bus Error Interrupt */
136
+#define REG_MISTA_TXBERR (1 << 24)
137
+/* Transmit Descriptor Unavailable Interrupt */
138
+#define REG_MISTA_TDU (1 << 23)
139
+/* Transmit Completion Interrupt */
140
+#define REG_MISTA_TXCP (1 << 18)
141
+/* Transmit Interrupt */
142
+#define REG_MISTA_TXINTR (1 << 16)
143
+/* Receive Bus Error Interrupt */
144
+#define REG_MISTA_RXBERR (1 << 11)
145
+/* Receive Descriptor Unavailable Interrupt */
146
+#define REG_MISTA_RDU (1 << 10)
147
+/* DMA Early Notification Interrupt */
148
+#define REG_MISTA_DENI (1 << 9)
149
+/* Maximum Frame Length Interrupt */
150
+#define REG_MISTA_DFOI (1 << 8)
151
+/* Receive Good Interrupt */
152
+#define REG_MISTA_RXGD (1 << 4)
153
+/* Packet Too Long Interrupt */
154
+#define REG_MISTA_PTLE (1 << 3)
155
+/* Receive Interrupt */
156
+#define REG_MISTA_RXINTR (1 << 0)
157
+
158
+typedef struct NPCM7xxEMCTxDesc NPCM7xxEMCTxDesc;
159
+typedef struct NPCM7xxEMCRxDesc NPCM7xxEMCRxDesc;
160
+
161
+struct NPCM7xxEMCTxDesc {
162
+ uint32_t flags;
163
+ uint32_t txbsa;
164
+ uint32_t status_and_length;
165
+ uint32_t ntxdsa;
166
+};
167
+
168
+struct NPCM7xxEMCRxDesc {
169
+ uint32_t status_and_length;
170
+ uint32_t rxbsa;
171
+ uint32_t reserved;
172
+ uint32_t nrxdsa;
173
+};
174
+
175
+/* NPCM7xxEMCTxDesc.flags values */
176
+/* Owner: 0 = cpu, 1 = emc */
177
+#define TX_DESC_FLAG_OWNER_MASK (1 << 31)
178
+/* Transmit interrupt enable */
179
+#define TX_DESC_FLAG_INTEN (1 << 2)
180
+
181
+/* NPCM7xxEMCTxDesc.status_and_length values */
182
+/* Transmission complete */
183
+#define TX_DESC_STATUS_TXCP (1 << 19)
184
+/* Transmit interrupt */
185
+#define TX_DESC_STATUS_TXINTR (1 << 16)
186
+
187
+/* NPCM7xxEMCRxDesc.status_and_length values */
188
+/* Owner: 0b00 = cpu, 0b10 = emc */
189
+#define RX_DESC_STATUS_OWNER_SHIFT 30
190
+#define RX_DESC_STATUS_OWNER_MASK 0xc0000000
191
+/* Frame Reception Complete */
192
+#define RX_DESC_STATUS_RXGD (1 << 20)
193
+/* Packet too long */
194
+#define RX_DESC_STATUS_PTLE (1 << 19)
195
+/* Receive Interrupt */
196
+#define RX_DESC_STATUS_RXINTR (1 << 16)
197
+
198
+#define RX_DESC_PKT_LEN(word) ((uint32_t) (word) & 0xffff)
199
+
200
+typedef struct EMCModule {
201
+ int rx_irq;
202
+ int tx_irq;
203
+ uint64_t base_addr;
204
+} EMCModule;
205
+
206
+typedef struct TestData {
207
+ const EMCModule *module;
208
+} TestData;
209
+
210
+static const EMCModule emc_module_list[] = {
211
+ {
212
+ .rx_irq = 15,
213
+ .tx_irq = 16,
214
+ .base_addr = 0xf0825000
215
+ },
216
+ {
217
+ .rx_irq = 114,
218
+ .tx_irq = 115,
219
+ .base_addr = 0xf0826000
220
+ }
221
+};
222
+
223
+/* Returns the index of the EMC module. */
224
+static int emc_module_index(const EMCModule *mod)
225
+{
226
+ ptrdiff_t diff = mod - emc_module_list;
227
+
228
+ g_assert_true(diff >= 0 && diff < ARRAY_SIZE(emc_module_list));
229
+
230
+ return diff;
231
+}
232
+
233
+static void packet_test_clear(void *sockets)
234
+{
235
+ int *test_sockets = sockets;
236
+
237
+ close(test_sockets[0]);
238
+ g_free(test_sockets);
239
+}
240
+
241
+static int *packet_test_init(int module_num, GString *cmd_line)
242
+{
243
+ int *test_sockets = g_new(int, 2);
244
+ int ret = socketpair(PF_UNIX, SOCK_STREAM, 0, test_sockets);
245
+ g_assert_cmpint(ret, != , -1);
246
+
247
+ /*
248
+ * KISS and use -nic. We specify two nics (both emc{0,1}) because there's
249
+ * currently no way to specify only emc1: The driver implicitly relies on
250
+ * emc[i] == nd_table[i].
251
+ */
252
+ if (module_num == 0) {
253
+ g_string_append_printf(cmd_line,
254
+ " -nic socket,fd=%d,model=" TYPE_NPCM7XX_EMC " "
255
+ " -nic user,model=" TYPE_NPCM7XX_EMC " ",
256
+ test_sockets[1]);
257
+ } else {
258
+ g_string_append_printf(cmd_line,
259
+ " -nic user,model=" TYPE_NPCM7XX_EMC " "
260
+ " -nic socket,fd=%d,model=" TYPE_NPCM7XX_EMC " ",
261
+ test_sockets[1]);
262
+ }
263
+
264
+ g_test_queue_destroy(packet_test_clear, test_sockets);
265
+ return test_sockets;
266
+}
267
+
268
+static uint32_t emc_read(QTestState *qts, const EMCModule *mod,
269
+ NPCM7xxPWMRegister regno)
270
+{
271
+ return qtest_readl(qts, mod->base_addr + regno * sizeof(uint32_t));
272
+}
273
+
274
+static void emc_write(QTestState *qts, const EMCModule *mod,
275
+ NPCM7xxPWMRegister regno, uint32_t value)
276
+{
277
+ qtest_writel(qts, mod->base_addr + regno * sizeof(uint32_t), value);
278
+}
279
+
280
+/*
281
+ * Reset the EMC module.
282
+ * The module must be reset before, e.g., TXDLSA,RXDLSA are changed.
283
+ */
284
+static bool emc_soft_reset(QTestState *qts, const EMCModule *mod)
285
+{
286
+ uint32_t val;
287
+ uint64_t end_time;
288
+
289
+ emc_write(qts, mod, REG_MCMDR, REG_MCMDR_SWR);
290
+
291
+ /*
292
+ * Wait for the device to reset, as the Linux driver does.
293
+ * During reset the AHB reads 0 for all registers. So first wait for
294
+ * something that resets to non-zero, and then wait for SWR to become 0.
295
+ */
296
+ end_time = g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
297
+
298
+ do {
299
+ qtest_clock_step(qts, 100);
300
+ val = emc_read(qts, mod, REG_FFTCR);
301
+ } while (val == 0 && g_get_monotonic_time() < end_time);
302
+ if (val != 0) {
303
+ do {
304
+ qtest_clock_step(qts, 100);
305
+ val = emc_read(qts, mod, REG_MCMDR);
306
+ if ((val & REG_MCMDR_SWR) == 0) {
307
+ /*
308
+ * N.B. The CAMs have been reset here, so macaddr matching of
309
+ * incoming packets will not work.
310
+ */
311
+ return true;
312
+ }
313
+ } while (g_get_monotonic_time() < end_time);
314
+ }
315
+
316
+ g_message("%s: Timeout expired", __func__);
317
+ return false;
318
+}
319
+
320
+/* Check emc registers are reset to default value. */
321
+static void test_init(gconstpointer test_data)
322
+{
323
+ const TestData *td = test_data;
324
+ const EMCModule *mod = td->module;
325
+ QTestState *qts = qtest_init("-machine quanta-gsj");
326
+ int i;
327
+
328
+#define CHECK_REG(regno, value) \
329
+ do { \
330
+ g_assert_cmphex(emc_read(qts, mod, (regno)), ==, (value)); \
331
+ } while (0)
332
+
333
+ CHECK_REG(REG_CAMCMR, 0);
334
+ CHECK_REG(REG_CAMEN, 0);
335
+ CHECK_REG(REG_TXDLSA, 0xfffffffc);
336
+ CHECK_REG(REG_RXDLSA, 0xfffffffc);
337
+ CHECK_REG(REG_MCMDR, 0);
338
+ CHECK_REG(REG_MIID, 0);
339
+ CHECK_REG(REG_MIIDA, 0x00900000);
340
+ CHECK_REG(REG_FFTCR, 0x0101);
341
+ CHECK_REG(REG_DMARFC, 0x0800);
342
+ CHECK_REG(REG_MIEN, 0);
343
+ CHECK_REG(REG_MISTA, 0);
344
+ CHECK_REG(REG_MGSTA, 0);
345
+ CHECK_REG(REG_MPCNT, 0x7fff);
346
+ CHECK_REG(REG_MRPC, 0);
347
+ CHECK_REG(REG_MRPCC, 0);
348
+ CHECK_REG(REG_MREPC, 0);
349
+ CHECK_REG(REG_DMARFS, 0);
350
+ CHECK_REG(REG_CTXDSA, 0);
351
+ CHECK_REG(REG_CTXBSA, 0);
352
+ CHECK_REG(REG_CRXDSA, 0);
353
+ CHECK_REG(REG_CRXBSA, 0);
354
+
355
+#undef CHECK_REG
356
+
357
+ for (i = 0; i < NUM_CAMML_REGS; ++i) {
358
+ g_assert_cmpuint(emc_read(qts, mod, REG_CAMM_BASE + i * 2), ==,
359
+ 0);
360
+ g_assert_cmpuint(emc_read(qts, mod, REG_CAML_BASE + i * 2), ==,
361
+ 0);
362
+ }
363
+
364
+ qtest_quit(qts);
365
+}
366
+
367
+static bool emc_wait_irq(QTestState *qts, const EMCModule *mod, int step,
368
+ bool is_tx)
369
+{
370
+ uint64_t end_time =
371
+ g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
372
+
373
+ do {
374
+ if (qtest_get_irq(qts, is_tx ? mod->tx_irq : mod->rx_irq)) {
375
+ return true;
376
+ }
377
+ qtest_clock_step(qts, step);
378
+ } while (g_get_monotonic_time() < end_time);
379
+
380
+ g_message("%s: Timeout expired", __func__);
381
+ return false;
382
+}
383
+
384
+static bool emc_wait_mista(QTestState *qts, const EMCModule *mod, int step,
385
+ uint32_t flag)
386
+{
387
+ uint64_t end_time =
388
+ g_get_monotonic_time() + TIMEOUT_SECONDS * G_TIME_SPAN_SECOND;
389
+
390
+ do {
391
+ uint32_t mista = emc_read(qts, mod, REG_MISTA);
392
+ if (mista & flag) {
393
+ return true;
394
+ }
395
+ qtest_clock_step(qts, step);
396
+ } while (g_get_monotonic_time() < end_time);
397
+
398
+ g_message("%s: Timeout expired", __func__);
399
+ return false;
400
+}
401
+
402
+static bool wait_socket_readable(int fd)
403
+{
404
+ fd_set read_fds;
405
+ struct timeval tv;
406
+ int rv;
407
+
408
+ FD_ZERO(&read_fds);
409
+ FD_SET(fd, &read_fds);
410
+ tv.tv_sec = TIMEOUT_SECONDS;
411
+ tv.tv_usec = 0;
412
+ rv = select(fd + 1, &read_fds, NULL, NULL, &tv);
413
+ if (rv == -1) {
414
+ perror("select");
415
+ } else if (rv == 0) {
416
+ g_message("%s: Timeout expired", __func__);
417
+ }
418
+ return rv == 1;
419
+}
420
+
421
+static void init_tx_desc(NPCM7xxEMCTxDesc *desc, size_t count,
422
+ uint32_t desc_addr)
423
+{
424
+ g_assert(count >= 2);
425
+ memset(&desc[0], 0, sizeof(*desc) * count);
426
+ /* Leave the last one alone, owned by the cpu -> stops transmission. */
427
+ for (size_t i = 0; i < count - 1; ++i) {
428
+ desc[i].flags =
429
+ cpu_to_le32(TX_DESC_FLAG_OWNER_MASK | /* owner = 1: emc */
430
+ TX_DESC_FLAG_INTEN |
431
+ 0 | /* crc append = 0 */
432
+ 0 /* padding enable = 0 */);
433
+ desc[i].status_and_length =
434
+ cpu_to_le32(0 | /* collision count = 0 */
435
+ 0 | /* SQE = 0 */
436
+ 0 | /* PAU = 0 */
437
+ 0 | /* TXHA = 0 */
438
+ 0 | /* LC = 0 */
439
+ 0 | /* TXABT = 0 */
440
+ 0 | /* NCS = 0 */
441
+ 0 | /* EXDEF = 0 */
442
+ 0 | /* TXCP = 0 */
443
+ 0 | /* DEF = 0 */
444
+ 0 | /* TXINTR = 0 */
445
+ 0 /* length filled in later */);
446
+ desc[i].ntxdsa = cpu_to_le32(desc_addr + (i + 1) * sizeof(*desc));
447
+ }
448
+}
449
+
450
+static void enable_tx(QTestState *qts, const EMCModule *mod,
451
+ const NPCM7xxEMCTxDesc *desc, size_t count,
452
+ uint32_t desc_addr, uint32_t mien_flags)
453
+{
454
+ /* Write the descriptors to guest memory. */
455
+ qtest_memwrite(qts, desc_addr, desc, sizeof(*desc) * count);
456
+
457
+ /* Trigger sending the packet. */
458
+ /* The module must be reset before changing TXDLSA. */
459
+ g_assert(emc_soft_reset(qts, mod));
460
+ emc_write(qts, mod, REG_TXDLSA, desc_addr);
461
+ emc_write(qts, mod, REG_CTXDSA, ~0);
462
+ emc_write(qts, mod, REG_MIEN, REG_MIEN_ENTXCP | mien_flags);
463
+ {
464
+ uint32_t mcmdr = emc_read(qts, mod, REG_MCMDR);
465
+ mcmdr |= REG_MCMDR_TXON;
466
+ emc_write(qts, mod, REG_MCMDR, mcmdr);
467
+ }
468
+
469
+ /* Prod the device to send the packet. */
470
+ emc_write(qts, mod, REG_TSDR, 1);
471
+}
472
+
473
+static void emc_send_verify1(QTestState *qts, const EMCModule *mod, int fd,
474
+ bool with_irq, uint32_t desc_addr,
475
+ uint32_t next_desc_addr,
476
+ const char *test_data, int test_size)
477
+{
478
+ NPCM7xxEMCTxDesc result_desc;
479
+ uint32_t expected_mask, expected_value, recv_len;
480
+ int ret;
481
+ char buffer[TX_DATA_LEN];
482
+
483
+ g_assert(wait_socket_readable(fd));
484
+
485
+ /* Read the descriptor back. */
486
+ qtest_memread(qts, desc_addr, &result_desc, sizeof(result_desc));
487
+ /* Descriptor should be owned by cpu now. */
488
+ g_assert((result_desc.flags & TX_DESC_FLAG_OWNER_MASK) == 0);
489
+ /* Test the status bits, ignoring the length field. */
490
+ expected_mask = 0xffff << 16;
491
+ expected_value = TX_DESC_STATUS_TXCP;
492
+ if (with_irq) {
493
+ expected_value |= TX_DESC_STATUS_TXINTR;
494
+ }
495
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
496
+ expected_value);
497
+
498
+ /* Check data sent to the backend. */
499
+ recv_len = ~0;
500
+ ret = qemu_recv(fd, &recv_len, sizeof(recv_len), MSG_DONTWAIT);
501
+ g_assert_cmpint(ret, == , sizeof(recv_len));
502
+
503
+ g_assert(wait_socket_readable(fd));
504
+ memset(buffer, 0xff, sizeof(buffer));
505
+ ret = qemu_recv(fd, buffer, test_size, MSG_DONTWAIT);
506
+ g_assert_cmpmem(buffer, ret, test_data, test_size);
507
+}
508
+
509
+static void emc_send_verify(QTestState *qts, const EMCModule *mod, int fd,
510
+ bool with_irq)
511
+{
512
+ NPCM7xxEMCTxDesc desc[NUM_TX_DESCRIPTORS];
513
+ uint32_t desc_addr = DESC_ADDR;
514
+ static const char test1_data[] = "TEST1";
515
+ static const char test2_data[] = "Testing 1 2 3 ...";
516
+ uint32_t data1_addr = DATA_ADDR;
517
+ uint32_t data2_addr = data1_addr + sizeof(test1_data);
518
+ bool got_tdu;
519
+ uint32_t end_desc_addr;
520
+
521
+ /* Prepare test data buffer. */
522
+ qtest_memwrite(qts, data1_addr, test1_data, sizeof(test1_data));
523
+ qtest_memwrite(qts, data2_addr, test2_data, sizeof(test2_data));
524
+
525
+ init_tx_desc(&desc[0], NUM_TX_DESCRIPTORS, desc_addr);
526
+ desc[0].txbsa = cpu_to_le32(data1_addr);
527
+ desc[0].status_and_length |= sizeof(test1_data);
528
+ desc[1].txbsa = cpu_to_le32(data2_addr);
529
+ desc[1].status_and_length |= sizeof(test2_data);
530
+
531
+ enable_tx(qts, mod, &desc[0], NUM_TX_DESCRIPTORS, desc_addr,
532
+ with_irq ? REG_MIEN_ENTXINTR : 0);
533
+
534
+ /*
535
+ * It's problematic to observe the interrupt for each packet.
536
+ * Instead just wait until all the packets go out.
537
+ */
538
+ got_tdu = false;
539
+ while (!got_tdu) {
540
+ if (with_irq) {
541
+ g_assert_true(emc_wait_irq(qts, mod, TX_STEP_COUNT,
542
+ /*is_tx=*/true));
543
+ } else {
544
+ g_assert_true(emc_wait_mista(qts, mod, TX_STEP_COUNT,
545
+ REG_MISTA_TXINTR));
546
+ }
547
+ got_tdu = !!(emc_read(qts, mod, REG_MISTA) & REG_MISTA_TDU);
548
+ /* If we don't have TDU yet, reset the interrupt. */
549
+ if (!got_tdu) {
550
+ emc_write(qts, mod, REG_MISTA,
551
+ emc_read(qts, mod, REG_MISTA) & 0xffff0000);
552
+ }
553
+ }
554
+
555
+ end_desc_addr = desc_addr + 2 * sizeof(desc[0]);
556
+ g_assert_cmphex(emc_read(qts, mod, REG_CTXDSA), ==, end_desc_addr);
557
+ g_assert_cmphex(emc_read(qts, mod, REG_MISTA), ==,
558
+ REG_MISTA_TXCP | REG_MISTA_TXINTR | REG_MISTA_TDU);
559
+
560
+ emc_send_verify1(qts, mod, fd, with_irq,
561
+ desc_addr, end_desc_addr,
562
+ test1_data, sizeof(test1_data));
563
+ emc_send_verify1(qts, mod, fd, with_irq,
564
+ desc_addr + sizeof(desc[0]), end_desc_addr,
565
+ test2_data, sizeof(test2_data));
566
+}
567
+
568
+static void init_rx_desc(NPCM7xxEMCRxDesc *desc, size_t count,
569
+ uint32_t desc_addr, uint32_t data_addr)
570
+{
571
+ g_assert_true(count >= 2);
572
+ memset(desc, 0, sizeof(*desc) * count);
573
+ desc[0].rxbsa = cpu_to_le32(data_addr);
574
+ desc[0].status_and_length =
575
+ cpu_to_le32(0b10 << RX_DESC_STATUS_OWNER_SHIFT | /* owner = 10: emc */
576
+ 0 | /* RP = 0 */
577
+ 0 | /* ALIE = 0 */
578
+ 0 | /* RXGD = 0 */
579
+ 0 | /* PTLE = 0 */
580
+ 0 | /* CRCE = 0 */
581
+ 0 | /* RXINTR = 0 */
582
+ 0 /* length (filled in later) */);
583
+ /* Leave the last one alone, owned by the cpu -> stops reception. */
584
+ desc[0].nrxdsa = cpu_to_le32(desc_addr + sizeof(*desc));
585
+}
586
+
587
+static void enable_rx(QTestState *qts, const EMCModule *mod,
588
+ const NPCM7xxEMCRxDesc *desc, size_t count,
589
+ uint32_t desc_addr, uint32_t mien_flags,
590
+ uint32_t mcmdr_flags)
591
+{
592
+ /*
593
+ * Write the descriptor to guest memory.
594
+ * FWIW, IWBN if the docs said the buffer needs to be at least DMARFC
595
+ * bytes.
596
+ */
597
+ qtest_memwrite(qts, desc_addr, desc, sizeof(*desc) * count);
598
+
599
+ /* Trigger receiving the packet. */
600
+ /* The module must be reset before changing RXDLSA. */
601
+ g_assert(emc_soft_reset(qts, mod));
602
+ emc_write(qts, mod, REG_RXDLSA, desc_addr);
603
+ emc_write(qts, mod, REG_MIEN, REG_MIEN_ENRXGD | mien_flags);
604
+
605
+ /*
606
+ * We don't know what the device's macaddr is, so just accept all
607
+ * unicast packets (AUP).
608
+ */
609
+ emc_write(qts, mod, REG_CAMCMR, REG_CAMCMR_AUP);
610
+ emc_write(qts, mod, REG_CAMEN, 1 << 0);
611
+ {
612
+ uint32_t mcmdr = emc_read(qts, mod, REG_MCMDR);
613
+ mcmdr |= REG_MCMDR_RXON | mcmdr_flags;
614
+ emc_write(qts, mod, REG_MCMDR, mcmdr);
615
+ }
616
+
617
+ /* Prod the device to accept a packet. */
618
+ emc_write(qts, mod, REG_RSDR, 1);
619
+}
620
+
621
+static void emc_recv_verify(QTestState *qts, const EMCModule *mod, int fd,
622
+ bool with_irq)
623
+{
624
+ NPCM7xxEMCRxDesc desc[NUM_RX_DESCRIPTORS];
625
+ uint32_t desc_addr = DESC_ADDR;
626
+ uint32_t data_addr = DATA_ADDR;
627
+ int ret;
628
+ uint32_t expected_mask, expected_value;
629
+ NPCM7xxEMCRxDesc result_desc;
630
+
631
+ /* Prepare test data buffer. */
632
+ const char test[RX_DATA_LEN] = "TEST";
633
+ int len = htonl(sizeof(test));
634
+ const struct iovec iov[] = {
635
+ {
636
+ .iov_base = &len,
637
+ .iov_len = sizeof(len),
638
+ },{
639
+ .iov_base = (char *) test,
640
+ .iov_len = sizeof(test),
641
+ },
642
+ };
643
+
644
+ /*
645
+ * Reset the device BEFORE sending a test packet, otherwise the packet
646
+ * may get swallowed by an active device of an earlier test.
647
+ */
648
+ init_rx_desc(&desc[0], NUM_RX_DESCRIPTORS, desc_addr, data_addr);
649
+ enable_rx(qts, mod, &desc[0], NUM_RX_DESCRIPTORS, desc_addr,
650
+ with_irq ? REG_MIEN_ENRXINTR : 0, 0);
651
+
652
+ /* Send test packet to device's socket. */
653
+ ret = iov_send(fd, iov, 2, 0, sizeof(len) + sizeof(test));
654
+ g_assert_cmpint(ret, == , sizeof(test) + sizeof(len));
655
+
656
+ /* Wait for RX interrupt. */
657
+ if (with_irq) {
658
+ g_assert_true(emc_wait_irq(qts, mod, RX_STEP_COUNT, /*is_tx=*/false));
659
+ } else {
660
+ g_assert_true(emc_wait_mista(qts, mod, RX_STEP_COUNT, REG_MISTA_RXGD));
661
+ }
662
+
663
+ g_assert_cmphex(emc_read(qts, mod, REG_CRXDSA), ==,
664
+ desc_addr + sizeof(desc[0]));
665
+
666
+ expected_mask = 0xffff;
667
+ expected_value = (REG_MISTA_DENI |
668
+ REG_MISTA_RXGD |
669
+ REG_MISTA_RXINTR);
670
+ g_assert_cmphex((emc_read(qts, mod, REG_MISTA) & expected_mask),
671
+ ==, expected_value);
672
+
673
+ /* Read the descriptor back. */
674
+ qtest_memread(qts, desc_addr, &result_desc, sizeof(result_desc));
675
+ /* Descriptor should be owned by cpu now. */
676
+ g_assert((result_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK) == 0);
677
+ /* Test the status bits, ignoring the length field. */
678
+ expected_mask = 0xffff << 16;
679
+ expected_value = RX_DESC_STATUS_RXGD;
680
+ if (with_irq) {
681
+ expected_value |= RX_DESC_STATUS_RXINTR;
682
+ }
683
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
684
+ expected_value);
685
+ g_assert_cmpint(RX_DESC_PKT_LEN(result_desc.status_and_length), ==,
686
+ RX_DATA_LEN + CRC_LENGTH);
687
+
688
+ {
689
+ char buffer[RX_DATA_LEN];
690
+ qtest_memread(qts, data_addr, buffer, sizeof(buffer));
691
+ g_assert_cmpstr(buffer, == , "TEST");
692
+ }
693
+}
694
+
695
+static void emc_test_ptle(QTestState *qts, const EMCModule *mod, int fd)
696
+{
697
+ NPCM7xxEMCRxDesc desc[NUM_RX_DESCRIPTORS];
698
+ uint32_t desc_addr = DESC_ADDR;
699
+ uint32_t data_addr = DATA_ADDR;
700
+ int ret;
701
+ NPCM7xxEMCRxDesc result_desc;
702
+ uint32_t expected_mask, expected_value;
703
+
704
+ /* Prepare test data buffer. */
705
+#define PTLE_DATA_LEN 1600
706
+ char test_data[PTLE_DATA_LEN];
707
+ int len = htonl(sizeof(test_data));
708
+ const struct iovec iov[] = {
709
+ {
710
+ .iov_base = &len,
711
+ .iov_len = sizeof(len),
712
+ },{
713
+ .iov_base = (char *) test_data,
714
+ .iov_len = sizeof(test_data),
715
+ },
716
+ };
717
+ memset(test_data, 42, sizeof(test_data));
718
+
719
+ /*
720
+ * Reset the device BEFORE sending a test packet, otherwise the packet
721
+ * may get swallowed by an active device of an earlier test.
722
+ */
723
+ init_rx_desc(&desc[0], NUM_RX_DESCRIPTORS, desc_addr, data_addr);
724
+ enable_rx(qts, mod, &desc[0], NUM_RX_DESCRIPTORS, desc_addr,
725
+ REG_MIEN_ENRXINTR, REG_MCMDR_ALP);
726
+
727
+ /* Send test packet to device's socket. */
728
+ ret = iov_send(fd, iov, 2, 0, sizeof(len) + sizeof(test_data));
729
+ g_assert_cmpint(ret, == , sizeof(test_data) + sizeof(len));
730
+
731
+ /* Wait for RX interrupt. */
732
+ g_assert_true(emc_wait_irq(qts, mod, RX_STEP_COUNT, /*is_tx=*/false));
733
+
734
+ /* Read the descriptor back. */
735
+ qtest_memread(qts, desc_addr, &result_desc, sizeof(result_desc));
736
+ /* Descriptor should be owned by cpu now. */
737
+ g_assert((result_desc.status_and_length & RX_DESC_STATUS_OWNER_MASK) == 0);
738
+ /* Test the status bits, ignoring the length field. */
739
+ expected_mask = 0xffff << 16;
740
+ expected_value = (RX_DESC_STATUS_RXGD |
741
+ RX_DESC_STATUS_PTLE |
742
+ RX_DESC_STATUS_RXINTR);
743
+ g_assert_cmphex((result_desc.status_and_length & expected_mask), ==,
744
+ expected_value);
745
+ g_assert_cmpint(RX_DESC_PKT_LEN(result_desc.status_and_length), ==,
746
+ PTLE_DATA_LEN + CRC_LENGTH);
747
+
748
+ {
749
+ char buffer[PTLE_DATA_LEN];
750
+ qtest_memread(qts, data_addr, buffer, sizeof(buffer));
751
+ g_assert(memcmp(buffer, test_data, PTLE_DATA_LEN) == 0);
752
+ }
753
+}
754
+
755
+static void test_tx(gconstpointer test_data)
756
+{
757
+ const TestData *td = test_data;
758
+ GString *cmd_line = g_string_new("-machine quanta-gsj");
759
+ int *test_sockets = packet_test_init(emc_module_index(td->module),
760
+ cmd_line);
761
+ QTestState *qts = qtest_init(cmd_line->str);
762
+
763
+ /*
764
+ * TODO: For pedantic correctness test_sockets[0] should be closed after
765
+ * the fork and before the exec, but that will require some harness
766
+ * improvements.
767
+ */
768
+ close(test_sockets[1]);
769
+ /* Defensive programming */
770
+ test_sockets[1] = -1;
771
+
772
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
773
+
774
+ emc_send_verify(qts, td->module, test_sockets[0], /*with_irq=*/false);
775
+ emc_send_verify(qts, td->module, test_sockets[0], /*with_irq=*/true);
776
+
777
+ qtest_quit(qts);
778
+}
779
+
780
+static void test_rx(gconstpointer test_data)
781
+{
782
+ const TestData *td = test_data;
783
+ GString *cmd_line = g_string_new("-machine quanta-gsj");
784
+ int *test_sockets = packet_test_init(emc_module_index(td->module),
785
+ cmd_line);
786
+ QTestState *qts = qtest_init(cmd_line->str);
787
+
788
+ /*
789
+ * TODO: For pedantic correctness test_sockets[0] should be closed after
790
+ * the fork and before the exec, but that will require some harness
791
+ * improvements.
792
+ */
793
+ close(test_sockets[1]);
794
+ /* Defensive programming */
795
+ test_sockets[1] = -1;
796
+
797
+ qtest_irq_intercept_in(qts, "/machine/soc/a9mpcore/gic");
798
+
799
+ emc_recv_verify(qts, td->module, test_sockets[0], /*with_irq=*/false);
800
+ emc_recv_verify(qts, td->module, test_sockets[0], /*with_irq=*/true);
801
+ emc_test_ptle(qts, td->module, test_sockets[0]);
802
+
803
+ qtest_quit(qts);
804
+}
805
+
806
+static void emc_add_test(const char *name, const TestData* td,
807
+ GTestDataFunc fn)
808
+{
809
+ g_autofree char *full_name = g_strdup_printf(
810
+ "npcm7xx_emc/emc[%d]/%s", emc_module_index(td->module), name);
811
+ qtest_add_data_func(full_name, td, fn);
812
+}
813
+#define add_test(name, td) emc_add_test(#name, td, test_##name)
814
+
815
+int main(int argc, char **argv)
816
+{
817
+ TestData test_data_list[ARRAY_SIZE(emc_module_list)];
818
+
819
+ g_test_init(&argc, &argv, NULL);
820
+
821
+ for (int i = 0; i < ARRAY_SIZE(emc_module_list); ++i) {
822
+ TestData *td = &test_data_list[i];
823
+
824
+ td->module = &emc_module_list[i];
825
+
826
+ add_test(init, td);
827
+ add_test(tx, td);
828
+ add_test(rx, td);
829
+ }
830
+
831
+ return g_test_run();
832
+}
833
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
834
index XXXXXXX..XXXXXXX 100644
835
--- a/tests/qtest/meson.build
836
+++ b/tests/qtest/meson.build
837
@@ -XXX,XX +XXX,XX @@ qtests_sparc64 = \
838
839
qtests_npcm7xx = \
840
['npcm7xx_adc-test',
841
+ 'npcm7xx_emc-test',
842
'npcm7xx_gpio-test',
843
'npcm7xx_pwm-test',
844
'npcm7xx_rng-test',
845
--
846
2.20.1
847
848
diff view generated by jsdifflib
1
For M profile cores, cache maintenance operations are done by
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
writing to special registers in the system register space.
3
For QEMU, cache operations are always NOPs, since we don't
4
implement the cache. Implementing these explicitly avoids
5
a spurious LOG_GUEST_ERROR when the guest uses them.
6
2
3
Use nr_apu_cpus instead of hard-coding 2.
4
5
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
6
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
7
Reviewed-by: Luc Michel <luc@lmichel.fr>
8
Message-id: 20210210142048.3125878-2-edgar.iglesias@gmail.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20180209165810.6668-4-peter.maydell@linaro.org
10
---
10
---
11
hw/intc/armv7m_nvic.c | 12 ++++++++++++
11
hw/arm/xlnx-versal.c | 4 ++--
12
1 file changed, 12 insertions(+)
12
1 file changed, 2 insertions(+), 2 deletions(-)
13
13
14
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
14
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/armv7m_nvic.c
16
--- a/hw/arm/xlnx-versal.c
17
+++ b/hw/intc/armv7m_nvic.c
17
+++ b/hw/arm/xlnx-versal.c
18
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
18
@@ -XXX,XX +XXX,XX @@ static void versal_create_apu_gic(Versal *s, qemu_irq *pic)
19
}
19
gicbusdev = SYS_BUS_DEVICE(&s->fpd.apu.gic);
20
break;
20
gicdev = DEVICE(&s->fpd.apu.gic);
21
}
21
qdev_prop_set_uint32(gicdev, "revision", 3);
22
+ case 0xf50: /* ICIALLU */
22
- qdev_prop_set_uint32(gicdev, "num-cpu", 2);
23
+ case 0xf58: /* ICIMVAU */
23
+ qdev_prop_set_uint32(gicdev, "num-cpu", nr_apu_cpus);
24
+ case 0xf5c: /* DCIMVAC */
24
qdev_prop_set_uint32(gicdev, "num-irq", XLNX_VERSAL_NR_IRQS + 32);
25
+ case 0xf60: /* DCISW */
25
qdev_prop_set_uint32(gicdev, "len-redist-region-count", 1);
26
+ case 0xf64: /* DCCMVAU */
26
- qdev_prop_set_uint32(gicdev, "redist-region-count[0]", 2);
27
+ case 0xf68: /* DCCMVAC */
27
+ qdev_prop_set_uint32(gicdev, "redist-region-count[0]", nr_apu_cpus);
28
+ case 0xf6c: /* DCCSW */
28
qdev_prop_set_bit(gicdev, "has-security-extensions", true);
29
+ case 0xf70: /* DCCIMVAC */
29
30
+ case 0xf74: /* DCCISW */
30
sysbus_realize(SYS_BUS_DEVICE(&s->fpd.apu.gic), &error_fatal);
31
+ case 0xf78: /* BPIALL */
32
+ /* Cache and branch predictor maintenance: for QEMU these always NOP */
33
+ break;
34
default:
35
bad_offset:
36
qemu_log_mask(LOG_GUEST_ERROR,
37
--
31
--
38
2.16.1
32
2.20.1
39
33
40
34
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Daniel Müller <muellerd@fb.com>
2
2
3
Because they are ARM_CP_STATE_AA64, ARM_CP_64BIT is implied.
3
When working with performance monitoring counters, we look at
4
MDCR_EL2.HPMN when checking whether a counter is enabled. This
5
check fails, because MDCR_EL2.HPMN is reset to 0, meaning that no
6
counters are "enabled" at exception levels below EL2.
7
That's in violation of the Arm specification, which states that
4
8
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
> On a Warm reset, this field [MDCR_EL2.HPMN] resets to the value in
6
Message-id: 20180211205848.4568-2-richard.henderson@linaro.org
10
> PMCR_EL0.N
11
12
That's also what a comment in the code acknowledges, but the necessary
13
adjustment seems to have been forgotten when support for more counters
14
was added.
15
This change fixes the issue by setting the reset value to PMCR.N, which
16
is four.
17
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
20
---
10
target/arm/helper.c | 8 ++++----
21
target/arm/helper.c | 9 ++++-----
11
1 file changed, 4 insertions(+), 4 deletions(-)
22
1 file changed, 4 insertions(+), 5 deletions(-)
12
23
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
24
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
26
--- a/target/arm/helper.c
16
+++ b/target/arm/helper.c
27
+++ b/target/arm/helper.c
17
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
28
@@ -XXX,XX +XXX,XX @@
18
static const ARMCPRegInfo zcr_el1_reginfo = {
29
#endif
19
.name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
30
20
.opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
31
#define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
21
- .access = PL1_RW, .accessfn = zcr_access, .type = ARM_CP_64BIT,
32
+#define PMCR_NUM_COUNTERS 4 /* QEMU IMPDEF choice */
22
+ .access = PL1_RW, .accessfn = zcr_access,
33
23
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
34
#ifndef CONFIG_USER_ONLY
24
.writefn = zcr_write, .raw_writefn = raw_write
35
25
};
36
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
26
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_el1_reginfo = {
37
.writefn = gt_hyp_ctl_write, .raw_writefn = raw_write },
27
static const ARMCPRegInfo zcr_el2_reginfo = {
38
#endif
28
.name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
39
/* The only field of MDCR_EL2 that has a defined architectural reset value
29
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
40
- * is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N; but we
30
- .access = PL2_RW, .accessfn = zcr_access, .type = ARM_CP_64BIT,
41
- * don't implement any PMU event counters, so using zero as a reset
31
+ .access = PL2_RW, .accessfn = zcr_access,
42
- * value for MDCR_EL2 is okay
32
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
43
+ * is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N.
33
.writefn = zcr_write, .raw_writefn = raw_write
44
*/
34
};
45
{ .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
35
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_el2_reginfo = {
46
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
36
static const ARMCPRegInfo zcr_no_el2_reginfo = {
47
- .access = PL2_RW, .resetvalue = 0,
37
.name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
48
+ .access = PL2_RW, .resetvalue = PMCR_NUM_COUNTERS,
38
.opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
49
.fieldoffset = offsetof(CPUARMState, cp15.mdcr_el2), },
39
- .access = PL2_RW, .type = ARM_CP_64BIT,
50
{ .name = "HPFAR", .state = ARM_CP_STATE_AA32,
40
+ .access = PL2_RW,
51
.cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
41
.readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
52
@@ -XXX,XX +XXX,XX @@ static void define_pmu_regs(ARMCPU *cpu)
42
};
53
* field as main ID register, and we implement four counters in
43
54
* addition to the cycle count register.
44
static const ARMCPRegInfo zcr_el3_reginfo = {
55
*/
45
.name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
56
- unsigned int i, pmcrn = 4;
46
.opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
57
+ unsigned int i, pmcrn = PMCR_NUM_COUNTERS;
47
- .access = PL3_RW, .accessfn = zcr_access, .type = ARM_CP_64BIT,
58
ARMCPRegInfo pmcr = {
48
+ .access = PL3_RW, .accessfn = zcr_access,
59
.name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
49
.fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
60
.access = PL0_RW,
50
.writefn = zcr_write, .raw_writefn = raw_write
51
};
52
--
61
--
53
2.16.1
62
2.20.1
54
63
55
64