Hi; here's the first arm pullreq for the 8.2 cycle. These are
pretty much all bug fixes (mostly for the experimental FEAT_RME),
rather than any major features.

thanks
-- PMM

The following changes since commit b0dd9a7d6dd15a6898e9c585b521e6bec79b25aa:

  Open 8.2 development tree (2023-08-22 07:14:07 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230824

for you to fetch changes up to cd1e4db73646006039f25879af3bff55b2295ff3:

  target/arm: Fix 64-bit SSRA (2023-08-22 17:31:14 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/gpio/nrf51: implement DETECT signal
 * accel/kvm: Specify default IPA size for arm64
 * ptw: refactor, fix some FEAT_RME bugs
 * target/arm: Adjust PAR_EL1.SH for Device and Normal-NC memory types
 * target/arm/helper: Implement CNTHCTL_EL2.CNT[VP]MASK
 * Fix SME ST1Q
 * Fix 64-bit SSRA

----------------------------------------------------------------
Akihiko Odaki (6):
      kvm: Introduce kvm_arch_get_default_type hook
      accel/kvm: Specify default IPA size for arm64
      mips: Report an error when KVM_VM_MIPS_VZ is unavailable
      accel/kvm: Use negative KVM type for error propagation
      accel/kvm: Free as when an error occurred
      accel/kvm: Make kvm_dirty_ring_reaper_init() void

Chris Laplante (6):
      hw/gpio/nrf51: implement DETECT signal
      qtest: factor out qtest_install_gpio_out_intercept
      qtest: implement named interception of out-GPIO
      qtest: bail from irq_intercept_in if name is specified
      qtest: irq_intercept_[out/in]: return FAIL if no intercepts are installed
      qtest: microbit-test: add tests for nRF51 DETECT

Jean-Philippe Brucker (6):
      target/arm/ptw: Load stage-2 tables from realm physical space
      target/arm/helper: Fix tlbmask and tlbbits for TLBI VAE2*
      target/arm: Skip granule protection checks for AT instructions
      target/arm: Pass security space rather than flag for AT instructions
      target/arm/helper: Check SCR_EL3.{NSE, NS} encoding for AT instructions
      target/arm/helper: Implement CNTHCTL_EL2.CNT[VP]MASK

Peter Maydell (15):
      target/arm/ptw: Don't set fi->s1ptw for UnsuppAtomicUpdate fault
      target/arm/ptw: Don't report GPC faults on stage 1 ptw as stage2 faults
      target/arm/ptw: Set s1ns bit in fault info more consistently
      target/arm/ptw: Pass ptw into get_phys_addr_pmsa*() and get_phys_addr_disabled()
      target/arm/ptw: Pass ARMSecurityState to regime_translation_disabled()
      target/arm/ptw: Pass an ARMSecuritySpace to arm_hcr_el2_eff_secstate()
      target/arm: Pass an ARMSecuritySpace to arm_is_el2_enabled_secstate()
      target/arm/ptw: Only fold in NSTable bit effects in Secure state
      target/arm/ptw: Remove last uses of ptw->in_secure
      target/arm/ptw: Remove S1Translate::in_secure
      target/arm/ptw: Drop S1Translate::out_secure
      target/arm/ptw: Set attributes correctly for MMU disabled data accesses
      target/arm/ptw: Check for block descriptors at invalid levels
      target/arm/ptw: Report stage 2 fault level for stage 2 faults on stage 1 ptw
      target/arm: Adjust PAR_EL1.SH for Device and Normal-NC memory types

Richard Henderson (2):
      target/arm: Fix SME ST1Q
      target/arm: Fix 64-bit SSRA

 include/hw/gpio/nrf51_gpio.h |   1 +
 include/sysemu/kvm.h         |   2 +
 target/arm/cpu.h             |  19 ++--
 target/arm/internals.h       |  25 ++---
 target/mips/kvm_mips.h       |   9 --
 tests/qtest/libqtest.h       |  11 +++
 accel/kvm/kvm-all.c          |  19 ++--
 hw/arm/virt.c                |   2 +-
 hw/gpio/nrf51_gpio.c         |  14 ++-
 hw/mips/loongson3_virt.c     |   2 -
 hw/ppc/spapr.c               |   2 +-
 softmmu/qtest.c              |  52 +++++++---
 target/arm/cpu.c             |   6 ++
 target/arm/helper.c          | 207 ++++++++++++++++++++++++++++----------
 target/arm/kvm.c             |   7 ++
 target/arm/ptw.c             | 231 ++++++++++++++++++++++++++-----------------
 target/arm/tcg/sme_helper.c  |   2 +-
 target/arm/tcg/translate.c   |   2 +-
 target/i386/kvm/kvm.c        |   5 +
 target/mips/kvm.c            |   3 +-
 target/ppc/kvm.c             |   5 +
 target/riscv/kvm.c           |   5 +
 target/s390x/kvm/kvm.c       |   5 +
 tests/qtest/libqtest.c       |   6 ++
 tests/qtest/microbit-test.c  |  44 +++++++++
 target/arm/trace-events      |   7 +-
 26 files changed, 494 insertions(+), 199 deletions(-)

Hi; here's a target-arm pullreq for rc0; these are all bugfixes
and similar minor stuff.

thanks
-- PMM

The following changes since commit 0462a32b4f63b2448b4a196381138afd50719dc4:

  Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging (2025-03-14 09:31:13 +0800)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20250314-1

for you to fetch changes up to a019e15edfd62beae1e2f6adc0fa7415ba20b14c:

  meson.build: Set RUST_BACKTRACE for all tests (2025-03-14 12:54:33 +0000)

----------------------------------------------------------------
target-arm queue:
 * Correctly handle corner cases of guest attempting an exception
   return to AArch32 when target EL is AArch64 only
 * MAINTAINERS: Fix status for Arm boards I "maintain"
 * tests/functional: Bump up arm_replay timeout
 * Revert "hw/char/pl011: Warn when using disabled receiver"
 * util/cacheflush: Make first DSB unconditional on aarch64
 * target/arm: Fix SVE/SME access check logic
 * meson.build: Set RUST_BACKTRACE for all tests

----------------------------------------------------------------
Joe Komlodi (1):
      util/cacheflush: Make first DSB unconditional on aarch64

Paolo Bonzini (1):
      Revert "hw/char/pl011: Warn when using disabled receiver"

Peter Maydell (13):
      target/arm: Move A32_BANKED_REG_{GET,SET} macros to cpregs.h
      target/arm: Un-inline access_secure_reg()
      linux-user/aarch64: Remove unused get/put_user macros
      linux-user/arm: Remove unused get_put_user macros
      target/arm: Move arm_cpu_data_is_big_endian() etc to internals.h
      target/arm: Move arm_current_el() and arm_el_is_aa64() to internals.h
      target/arm: SCR_EL3.RW should be treated as 1 if EL2 doesn't support AArch32
      target/arm: HCR_EL2.RW should be RAO/WI if EL1 doesn't support AArch32
      target/arm: Add cpu local variable to exception_return helper
      target/arm: Forbid return to AArch32 when CPU is AArch64-only
      MAINTAINERS: Fix status for Arm boards I "maintain"
      tests/functional: Bump up arm_replay timeout
      meson.build: Set RUST_BACKTRACE for all tests

Richard Henderson (2):
      target/arm: Make DisasContext.{fp, sve}_access_checked tristate
      target/arm: Simplify pstate_sm check in sve_access_check

 MAINTAINERS                         |  14 ++--
 meson.build                         |   9 ++-
 target/arm/cpregs.h                 |  28 +++++++
 target/arm/cpu.h                    | 153 +-----------------------------
 target/arm/internals.h              | 135 +++++++++++++++++++++++++++
 target/arm/tcg/translate-a64.h      |   2 +-
 target/arm/tcg/translate.h          |  10 ++-
 hw/char/pl011.c                     |  19 ++---
 hw/intc/arm_gicv3_cpuif.c           |   1 +
 linux-user/aarch64/cpu_loop.c       |  48 -----------
 linux-user/arm/cpu_loop.c           |  43 +---------
 target/arm/arch_dump.c              |   1 +
 target/arm/helper.c                 |  16 +++-
 target/arm/tcg/helper-a64.c         |  12 ++-
 target/arm/tcg/hflags.c             |   9 +++
 target/arm/tcg/translate-a64.c      |  37 ++++-----
 util/cacheflush.c                   |   4 +-
 .gitlab-ci.d/buildtest-template.yml |   1 -
 18 files changed, 257 insertions(+), 285 deletions(-)
Deleted patch
From: Chris Laplante <chris@laplante.io>

Implement nRF51 DETECT signal in the GPIO peripheral.

The reference manual makes mention of a per-pin DETECT signal, but these
are not exposed to the user. See https://devzone.nordicsemi.com/f/nordic-q-a/39858/gpio-per-pin-detect-signal-available
for more information. Currently, I don't see a reason to model these.

Signed-off-by: Chris Laplante <chris@laplante.io>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230728160324.1159090-2-chris@laplante.io
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/gpio/nrf51_gpio.h |  1 +
 hw/gpio/nrf51_gpio.c         | 14 +++++++++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/hw/gpio/nrf51_gpio.h b/include/hw/gpio/nrf51_gpio.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/gpio/nrf51_gpio.h
+++ b/include/hw/gpio/nrf51_gpio.h
@@ -XXX,XX +XXX,XX @@ struct NRF51GPIOState {
     uint32_t old_out_connected;

     qemu_irq output[NRF51_GPIO_PINS];
+    qemu_irq detect;
 };

diff --git a/hw/gpio/nrf51_gpio.c b/hw/gpio/nrf51_gpio.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/gpio/nrf51_gpio.c
+++ b/hw/gpio/nrf51_gpio.c
@@ -XXX,XX +XXX,XX @@ static void update_state(NRF51GPIOState *s)
     int pull;
     size_t i;
     bool connected_out, dir, connected_in, out, in, input;
+    bool assert_detect = false;

     for (i = 0; i < NRF51_GPIO_PINS; i++) {
         pull = pull_value(s->cnf[i]);
@@ -XXX,XX +XXX,XX @@ static void update_state(NRF51GPIOState *s)
             qemu_log_mask(LOG_GUEST_ERROR,
                           "GPIO pin %zu short circuited\n", i);
         }
-        if (!connected_in) {
+        if (connected_in) {
+            uint32_t detect_config = extract32(s->cnf[i], 16, 2);
+            if ((detect_config == 2) && (in == 1)) {
+                assert_detect = true;
+            }
+            if ((detect_config == 3) && (in == 0)) {
+                assert_detect = true;
+            }
+        } else {
             /*
              * Floating input: the output stimulates IN if connected,
              * otherwise pull-up/pull-down resistors put a value on both
@@ -XXX,XX +XXX,XX @@ static void update_state(NRF51GPIOState *s)
         }
         update_output_irq(s, i, connected_out, out);
     }
+
+    qemu_set_irq(s->detect, assert_detect);
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void nrf51_gpio_init(Object *obj)

     qdev_init_gpio_in(DEVICE(s), nrf51_gpio_set, NRF51_GPIO_PINS);
     qdev_init_gpio_out(DEVICE(s), s->output, NRF51_GPIO_PINS);
+    qdev_init_gpio_out_named(DEVICE(s), &s->detect, "detect", 1);
 }

 static void nrf51_gpio_class_init(ObjectClass *klass, void *data)
--
2.34.1
Deleted patch
From: Chris Laplante <chris@laplante.io>

Signed-off-by: Chris Laplante <chris@laplante.io>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230728160324.1159090-3-chris@laplante.io
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 softmmu/qtest.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/softmmu/qtest.c b/softmmu/qtest.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/qtest.c
+++ b/softmmu/qtest.c
@@ -XXX,XX +XXX,XX @@ void qtest_set_command_cb(bool (*pc_cb)(CharBackend *chr, gchar **words))
     process_command_cb = pc_cb;
 }

+static void qtest_install_gpio_out_intercept(DeviceState *dev, const char *name, int n)
+{
+    qemu_irq *disconnected = g_new0(qemu_irq, 1);
+    qemu_irq icpt = qemu_allocate_irq(qtest_irq_handler,
+                                      disconnected, n);
+
+    *disconnected = qdev_intercept_gpio_out(dev, icpt, name, n);
+}
+
 static void qtest_process_command(CharBackend *chr, gchar **words)
 {
     const gchar *command;
@@ -XXX,XX +XXX,XX @@ static void qtest_process_command(CharBackend *chr, gchar **words)
             if (words[0][14] == 'o') {
                 int i;
                 for (i = 0; i < ngl->num_out; ++i) {
-                    qemu_irq *disconnected = g_new0(qemu_irq, 1);
-                    qemu_irq icpt = qemu_allocate_irq(qtest_irq_handler,
-                                                      disconnected, i);
-
-                    *disconnected = qdev_intercept_gpio_out(dev, icpt,
-                                                            ngl->name, i);
+                    qtest_install_gpio_out_intercept(dev, ngl->name, i);
                 }
             } else {
                 qemu_irq_intercept_in(ngl->in, qtest_irq_handler,
--
2.34.1
Deleted patch
From: Chris Laplante <chris@laplante.io>

Adds qtest_irq_intercept_out_named method, which utilizes a new optional
name parameter to the irq_intercept_out qtest command.

Signed-off-by: Chris Laplante <chris@laplante.io>
Message-id: 20230728160324.1159090-4-chris@laplante.io
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/libqtest.h | 11 +++++++++++
 softmmu/qtest.c        | 18 ++++++++++--------
 tests/qtest/libqtest.c |  6 ++++++
 3 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/tests/qtest/libqtest.h b/tests/qtest/libqtest.h
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/libqtest.h
+++ b/tests/qtest/libqtest.h
@@ -XXX,XX +XXX,XX @@ void qtest_irq_intercept_in(QTestState *s, const char *string);
  */
 void qtest_irq_intercept_out(QTestState *s, const char *string);

+/**
+ * qtest_irq_intercept_out_named:
+ * @s: #QTestState instance to operate on.
+ * @qom_path: QOM path of a device.
+ * @name: Name of the GPIO out pin
+ *
+ * Associate a qtest irq with the named GPIO-out pin of the device
+ * whose path is specified by @string and whose name is @name.
+ */
+void qtest_irq_intercept_out_named(QTestState *s, const char *qom_path, const char *name);
+
 /**
  * qtest_set_irq_in:
  * @s: QTestState instance to operate on.
diff --git a/softmmu/qtest.c b/softmmu/qtest.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/qtest.c
+++ b/softmmu/qtest.c
@@ -XXX,XX +XXX,XX @@ static void qtest_process_command(CharBackend *chr, gchar **words)
         || strcmp(words[0], "irq_intercept_in") == 0) {
         DeviceState *dev;
         NamedGPIOList *ngl;
+        bool is_outbound;

         g_assert(words[1]);
+        is_outbound = words[0][14] == 'o';
         dev = DEVICE(object_resolve_path(words[1], NULL));
         if (!dev) {
             qtest_send_prefix(chr);
@@ -XXX,XX +XXX,XX @@ static void qtest_process_command(CharBackend *chr, gchar **words)
         }

         QLIST_FOREACH(ngl, &dev->gpios, node) {
-            /* We don't support intercept of named GPIOs yet */
-            if (ngl->name) {
-                continue;
-            }
-            if (words[0][14] == 'o') {
-                int i;
-                for (i = 0; i < ngl->num_out; ++i) {
-                    qtest_install_gpio_out_intercept(dev, ngl->name, i);
+            /* We don't support inbound interception of named GPIOs yet */
+            if (is_outbound) {
+                /* NULL is valid and matchable, for "unnamed GPIO" */
+                if (g_strcmp0(ngl->name, words[2]) == 0) {
+                    int i;
+                    for (i = 0; i < ngl->num_out; ++i) {
+                        qtest_install_gpio_out_intercept(dev, ngl->name, i);
+                    }
+                }
             } else {
                 qemu_irq_intercept_in(ngl->in, qtest_irq_handler,
diff --git a/tests/qtest/libqtest.c b/tests/qtest/libqtest.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/libqtest.c
+++ b/tests/qtest/libqtest.c
@@ -XXX,XX +XXX,XX @@ void qtest_irq_intercept_out(QTestState *s, const char *qom_path)
     qtest_rsp(s);
 }

+void qtest_irq_intercept_out_named(QTestState *s, const char *qom_path, const char *name)
+{
+    qtest_sendf(s, "irq_intercept_out %s %s\n", qom_path, name);
+    qtest_rsp(s);
+}
+
 void qtest_irq_intercept_in(QTestState *s, const char *qom_path)
 {
     qtest_sendf(s, "irq_intercept_in %s\n", qom_path);
--
2.34.1
Deleted patch
From: Chris Laplante <chris@laplante.io>

Named interception of in-GPIOs is not supported yet.

Signed-off-by: Chris Laplante <chris@laplante.io>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230728160324.1159090-5-chris@laplante.io
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 softmmu/qtest.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/softmmu/qtest.c b/softmmu/qtest.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/qtest.c
+++ b/softmmu/qtest.c
@@ -XXX,XX +XXX,XX @@ static void qtest_process_command(CharBackend *chr, gchar **words)
         || strcmp(words[0], "irq_intercept_in") == 0) {
         DeviceState *dev;
         NamedGPIOList *ngl;
+        bool is_named;
         bool is_outbound;

         g_assert(words[1]);
+        is_named = words[2] != NULL;
         is_outbound = words[0][14] == 'o';
         dev = DEVICE(object_resolve_path(words[1], NULL));
         if (!dev) {
@@ -XXX,XX +XXX,XX @@ static void qtest_process_command(CharBackend *chr, gchar **words)
             return;
         }

+        if (is_named && !is_outbound) {
+            qtest_send_prefix(chr);
+            qtest_send(chr, "FAIL Interception of named in-GPIOs not yet supported\n");
+            return;
+        }
+
         if (irq_intercept_dev) {
             qtest_send_prefix(chr);
             if (irq_intercept_dev != dev) {
--
2.34.1
Deleted patch
From: Chris Laplante <chris@laplante.io>

This is much better than just silently failing with OK.

Signed-off-by: Chris Laplante <chris@laplante.io>
Message-id: 20230728160324.1159090-6-chris@laplante.io
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 softmmu/qtest.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/softmmu/qtest.c b/softmmu/qtest.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/qtest.c
+++ b/softmmu/qtest.c
@@ -XXX,XX +XXX,XX @@ static void qtest_process_command(CharBackend *chr, gchar **words)
         NamedGPIOList *ngl;
         bool is_named;
         bool is_outbound;
+        bool interception_succeeded = false;

         g_assert(words[1]);
         is_named = words[2] != NULL;
@@ -XXX,XX +XXX,XX @@ static void qtest_process_command(CharBackend *chr, gchar **words)
                     for (i = 0; i < ngl->num_out; ++i) {
                         qtest_install_gpio_out_intercept(dev, ngl->name, i);
                     }
+                    interception_succeeded = true;
                 }
             } else {
                 qemu_irq_intercept_in(ngl->in, qtest_irq_handler,
                                       ngl->num_in);
+                interception_succeeded = true;
             }
         }
-        irq_intercept_dev = dev;
+
         qtest_send_prefix(chr);
-        qtest_send(chr, "OK\n");
+        if (interception_succeeded) {
+            irq_intercept_dev = dev;
+            qtest_send(chr, "OK\n");
+        } else {
+            qtest_send(chr, "FAIL No intercepts installed\n");
+        }
     } else if (strcmp(words[0], "set_irq_in") == 0) {
         DeviceState *dev;
         qemu_irq irq;
--
2.34.1
Deleted patch
From: Chris Laplante <chris@laplante.io>

Exercise the DETECT mechanism of the GPIO peripheral.

Signed-off-by: Chris Laplante <chris@laplante.io>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230728160324.1159090-7-chris@laplante.io
[PMM: fixed coding style nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/microbit-test.c | 44 +++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/tests/qtest/microbit-test.c b/tests/qtest/microbit-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/microbit-test.c
+++ b/tests/qtest/microbit-test.c
@@ -XXX,XX +XXX,XX @@ static void test_nrf51_gpio(void)
     qtest_quit(qts);
 }

+static void test_nrf51_gpio_detect(void)
+{
+    QTestState *qts = qtest_init("-M microbit");
+    int i;
+
+    /* Connect input buffer on pins 1-7, configure SENSE for high level */
+    for (i = 1; i <= 7; i++) {
+        qtest_writel(qts, NRF51_GPIO_BASE + NRF51_GPIO_REG_CNF_START + i * 4,
+                     deposit32(0, 16, 2, 2));
+    }
+
+    qtest_irq_intercept_out_named(qts, "/machine/nrf51/gpio", "detect");
+
+    for (i = 1; i <= 7; i++) {
+        /* Set pin high */
+        qtest_set_irq_in(qts, "/machine/nrf51", "unnamed-gpio-in", i, 1);
+        uint32_t actual = qtest_readl(qts, NRF51_GPIO_BASE + NRF51_GPIO_REG_IN);
+        g_assert_cmpuint(actual, ==, 1 << i);
+
+        /* Check that DETECT is high */
+        g_assert_true(qtest_get_irq(qts, 0));
+
+        /* Set pin low, check that DETECT goes low. */
+        qtest_set_irq_in(qts, "/machine/nrf51", "unnamed-gpio-in", i, 0);
+        actual = qtest_readl(qts, NRF51_GPIO_BASE + NRF51_GPIO_REG_IN);
+        g_assert_cmpuint(actual, ==, 0x0);
+        g_assert_false(qtest_get_irq(qts, 0));
+    }
+
+    /* Set pin 0 high, check that DETECT doesn't fire */
+    qtest_set_irq_in(qts, "/machine/nrf51", "unnamed-gpio-in", 0, 1);
+    g_assert_false(qtest_get_irq(qts, 0));
+    qtest_set_irq_in(qts, "/machine/nrf51", "unnamed-gpio-in", 0, 0);
+
+    /* Set pins 1, 2, and 3 high, then set 3 low. Check DETECT is still high */
+    for (i = 1; i <= 3; i++) {
+        qtest_set_irq_in(qts, "/machine/nrf51", "unnamed-gpio-in", i, 1);
+    }
+    g_assert_true(qtest_get_irq(qts, 0));
+    qtest_set_irq_in(qts, "/machine/nrf51", "unnamed-gpio-in", 3, 0);
+    g_assert_true(qtest_get_irq(qts, 0));
+}
+
 static void timer_task(QTestState *qts, hwaddr task)
 {
     qtest_writel(qts, NRF51_TIMER_BASE + task, NRF51_TRIGGER_TASK);
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)

     qtest_add_func("/microbit/nrf51/uart", test_nrf51_uart);
     qtest_add_func("/microbit/nrf51/gpio", test_nrf51_gpio);
+    qtest_add_func("/microbit/nrf51/gpio_detect", test_nrf51_gpio_detect);
     qtest_add_func("/microbit/nrf51/nvmc", test_nrf51_nvmc);
     qtest_add_func("/microbit/nrf51/timer", test_nrf51_timer);
     qtest_add_func("/microbit/microbit/i2c", test_microbit_i2c);
--
2.34.1
The s1ns bit in ARMMMUFaultInfo is documented as "true if
we faulted on a non-secure IPA while in secure state". Both the
places which look at this bit only do so after having confirmed
that this is a stage 2 fault and we're dealing with Secure EL2,
which leaves the ptw.c code free to set the bit to any random
value in the other cases.

Instead of taking advantage of that freedom, consistently
make the bit be set to false for the "not a stage 2 fault
for Secure EL2" cases. This removes some cases where we
were using an 'is_secure' boolean and leaving the reader
guessing about whether that was the right thing for Realm
and Root cases.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-4-peter.maydell@linaro.org
---
 target/arm/ptw.c | 19 +++++++++++++++----
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMSecuritySpace S2_security_space(ARMSecuritySpace s1_space,
     }
 }

+static bool fault_s1ns(ARMSecuritySpace space, ARMMMUIdx s2_mmu_idx)
+{
+    /*
+     * For stage 2 faults in Secure EL2, S1NS indicates
+     * whether the faulting IPA is in the Secure or NonSecure
+     * IPA space. For all other kinds of fault, it is false.
+     */
+    return space == ARMSS_Secure && regime_is_stage2(s2_mmu_idx)
+        && s2_mmu_idx == ARMMMUIdx_Stage2_S;
+}
+
 /* Translate a S1 pagetable walk through S2 if needed. */
 static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
                              hwaddr addr, ARMMMUFaultInfo *fi)
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
             fi->s2addr = addr;
             fi->stage2 = true;
             fi->s1ptw = true;
-            fi->s1ns = !is_secure;
+            fi->s1ns = fault_s1ns(ptw->in_space, s2_mmu_idx);
             return false;
         }
     }
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         fi->s2addr = addr;
         fi->stage2 = regime_is_stage2(s2_mmu_idx);
         fi->s1ptw = fi->stage2;
-        fi->s1ns = !is_secure;
+        fi->s1ns = fault_s1ns(ptw->in_space, s2_mmu_idx);
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
         fi->s2addr = ptw->out_virt;
         fi->stage2 = true;
         fi->s1ptw = true;
-        fi->s1ns = !ptw->in_secure;
+        fi->s1ns = fault_s1ns(ptw->in_space, ptw->in_ptw_idx);
         return 0;
     }

@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
     fi->level = level;
     /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
     fi->stage2 = fi->s1ptw || regime_is_stage2(mmu_idx);
-    fi->s1ns = mmu_idx == ARMMMUIdx_Stage2;
+    fi->s1ns = fault_s1ns(ptw->in_space, mmu_idx);
     return true;
 }
--
2.34.1

The A32_BANKED_REG_{GET,SET} macros are only used inside target/arm;
move their definitions to cpregs.h. There's no need to have them
defined in all the code that includes cpu.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpregs.h | 28 ++++++++++++++++++++++++++++
 target/arm/cpu.h    | 27 ---------------------------
 2 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpreg_traps_in_nv(const ARMCPRegInfo *ri)
     return ri->opc1 == 4 || ri->opc1 == 5;
 }

+/* Macros for accessing a specified CP register bank */
+#define A32_BANKED_REG_GET(_env, _regname, _secure)    \
+    ((_secure) ? (_env)->cp15._regname##_s : (_env)->cp15._regname##_ns)
+
+#define A32_BANKED_REG_SET(_env, _regname, _secure, _val)   \
+    do {                                                \
+        if (_secure) {                                  \
+            (_env)->cp15._regname##_s = (_val);         \
+        } else {                                        \
+            (_env)->cp15._regname##_ns = (_val);        \
+        }                                               \
+    } while (0)
+
+/*
+ * Macros for automatically accessing a specific CP register bank depending on
+ * the current secure state of the system. These macros are not intended for
+ * supporting instruction translation reads/writes as these are dependent
+ * solely on the SCR.NS bit and not the mode.
+ */
+#define A32_BANKED_CURRENT_REG_GET(_env, _regname)  \
+    A32_BANKED_REG_GET((_env), _regname,            \
+                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)))
+
+#define A32_BANKED_CURRENT_REG_SET(_env, _regname, _val)    \
+    A32_BANKED_REG_SET((_env), _regname,                    \
+                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)), \
+                       (_val))
+
 #endif /* TARGET_ARM_CPREGS_H */
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool access_secure_reg(CPUARMState *env)
     return ret;
 }

-/* Macros for accessing a specified CP register bank */
-#define A32_BANKED_REG_GET(_env, _regname, _secure)    \
-    ((_secure) ? (_env)->cp15._regname##_s : (_env)->cp15._regname##_ns)
-
-#define A32_BANKED_REG_SET(_env, _regname, _secure, _val)   \
-    do {                                                \
-        if (_secure) {                                  \
-            (_env)->cp15._regname##_s = (_val);         \
-        } else {                                        \
-            (_env)->cp15._regname##_ns = (_val);        \
-        }                                               \
-    } while (0)
-
-/* Macros for automatically accessing a specific CP register bank depending on
- * the current secure state of the system. These macros are not intended for
- * supporting instruction translation reads/writes as these are dependent
- * solely on the SCR.NS bit and not the mode.
- */
-#define A32_BANKED_CURRENT_REG_GET(_env, _regname)  \
-    A32_BANKED_REG_GET((_env), _regname,            \
-                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)))
-
-#define A32_BANKED_CURRENT_REG_SET(_env, _regname, _val)    \
-    A32_BANKED_REG_SET((_env), _regname,                    \
-                       (arm_is_secure(_env) && !arm_el_is_aa64((_env), 3)), \
-                       (_val))
-
 uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                  uint32_t cur_el, bool secure);
--
2.43.0
1
Pass an ARMSecuritySpace instead of a bool secure to
1
We would like to move arm_el_is_aa64() to internals.h; however, it is
2
arm_is_el2_enabled_secstate(). This doesn't change behaviour.
2
used by access_secure_reg(). Make that function not be inline, so
3
that it can stay in cpu.h.
4
5
access_secure_reg() is used only in two places:
6
* in hflags.c
7
* in the user-mode arm emulators, to decide whether to store
8
the TLS value in the secure or non-secure banked field
9
10
The second of these is not on a super-hot path that would care about
11
the inlining (and incidentally will always use the NS banked field
12
because our user-mode CPUs never set ARM_FEATURE_EL3); put the
13
definition of access_secure_reg() in hflags.c, near its only use
14
inside target/arm.
3
15
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Message-id: 20230807141514.19075-8-peter.maydell@linaro.org
7
---
18
---
8
target/arm/cpu.h | 13 ++++++++-----
19
target/arm/cpu.h | 12 +++---------
9
target/arm/helper.c | 2 +-
20
target/arm/tcg/hflags.c | 9 +++++++++
10
2 files changed, 9 insertions(+), 6 deletions(-)
21
2 files changed, 12 insertions(+), 9 deletions(-)
11
22
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
23
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.h
25
--- a/target/arm/cpu.h
15
+++ b/target/arm/cpu.h
26
+++ b/target/arm/cpu.h
16
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
27
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
17
28
return aa64;
18
/*
29
}
19
* Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
30
20
- * This corresponds to the pseudocode EL2Enabled()
31
-/* Function for determining whether guest cp register reads and writes should
21
+ * This corresponds to the pseudocode EL2Enabled().
32
+/*
33
+ * Function for determining whether guest cp register reads and writes should
34
* access the secure or non-secure bank of a cp register. When EL3 is
35
* operating in AArch32 state, the NS-bit determines whether the secure
36
* instance of a cp register should be used. When EL3 is AArch64 (or if
37
* it doesn't exist at all) then there is no register banking, and all
38
* accesses are to the non-secure version.
22
*/
39
*/
23
-static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env,
+                                               ARMSecuritySpace space)
 {
+    assert(space != ARMSS_Root);
     return arm_feature(env, ARM_FEATURE_EL2)
-        && (!secure || (env->cp15.scr_el3 & SCR_EEL2));
+        && (space != ARMSS_Secure || (env->cp15.scr_el3 & SCR_EEL2));
 }
 
 static inline bool arm_is_el2_enabled(CPUARMState *env)
 {
-    return arm_is_el2_enabled_secstate(env, arm_is_secure_below_el3(env));
+    return arm_is_el2_enabled_secstate(env, arm_security_space_below_el3(env));
 }
 
 #else
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
     return false;
 }
 
-static inline bool arm_is_el2_enabled_secstate(CPUARMState *env, bool secure)
+static inline bool arm_is_el2_enabled_secstate(CPUARMState *env,
+                                               ARMSecuritySpace space)
 {
     return false;
 }
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, ARMSecuritySpace space)
 
     assert(space != ARMSS_Root);
 
-    if (!arm_is_el2_enabled_secstate(env, arm_space_is_secure(space))) {
+    if (!arm_is_el2_enabled_secstate(env, space)) {
         /*
          * "This register has no effect if EL2 is not enabled in the
          * current Security state". This is ARMv8.4-SecEL2 speak for
--
2.34.1

-static inline bool access_secure_reg(CPUARMState *env)
-{
-    bool ret = (arm_feature(env, ARM_FEATURE_EL3) &&
-                !arm_el_is_aa64(env, 3) &&
-                !(env->cp15.scr_el3 & SCR_NS));
-
-    return ret;
-}
+bool access_secure_reg(CPUARMState *env);
 
 uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
                                  uint32_t cur_el, bool secure);
diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -XXX,XX +XXX,XX @@ static bool aprofile_require_alignment(CPUARMState *env, int el, uint64_t sctlr)
 #endif
 }
 
+bool access_secure_reg(CPUARMState *env)
+{
+    bool ret = (arm_feature(env, ARM_FEATURE_EL3) &&
+                !arm_el_is_aa64(env, 3) &&
+                !(env->cp15.scr_el3 & SCR_NS));
+
+    return ret;
+}
+
 static CPUARMTBFlags rebuild_hflags_common(CPUARMState *env, int fp_el,
                                            ARMMMUIdx mmu_idx,
                                            CPUARMTBFlags flags)
--
2.43.0
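As an illustration outside QEMU, the predicate changed above can be sketched as a small standalone C function. This is not QEMU code: the enum, bit value, and function name are stand-ins for ARMSecuritySpace, SCR_EEL2, and arm_is_el2_enabled_secstate(), showing only the decision the patch encodes (EL2 counts as enabled in Secure state only when SCR_EL3.EEL2 is set, and asking about Root is a caller bug).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins, not QEMU's real definitions */
typedef enum { SS_Secure, SS_NonSecure, SS_Root, SS_Realm } SecuritySpace;

#define SCR_EEL2 (1u << 18)   /* SCR_EL3.EEL2 bit, as in the Arm ARM */

/*
 * EL2 is implemented, and for Secure state it is only enabled
 * when SCR_EL3.EEL2 is set; Root must not be passed here.
 */
static bool el2_enabled(bool have_el2, SecuritySpace space, unsigned scr_el3)
{
    assert(space != SS_Root);
    return have_el2 && (space != SS_Secure || (scr_el3 & SCR_EEL2));
}
```

Passing the security space itself (rather than a pre-collapsed bool) is what lets the callee make this assertion.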
When we report faults due to stage 2 faults during a stage 1
page table walk, the 'level' parameter should be the level
of the walk in stage 2 that faulted, not the level of the
walk in stage 1. Correct the reporting of these faults.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-15-peter.maydell@linaro.org
---
 target/arm/ptw.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
  do_translation_fault:
     fi->type = ARMFault_Translation;
  do_fault:
-    fi->level = level;
-    /* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
-    fi->stage2 = fi->s1ptw || regime_is_stage2(mmu_idx);
+    if (fi->s1ptw) {
+        /* Retain the existing stage 2 fi->level */
+        assert(fi->stage2);
+    } else {
+        fi->level = level;
+        fi->stage2 = regime_is_stage2(mmu_idx);
+    }
     fi->s1ns = fault_s1ns(ptw->in_space, mmu_idx);
     return true;
 }
--
2.34.1

At the top of linux-user/aarch64/cpu_loop.c we define a set of
macros for reading and writing data and code words, but we never
use these macros. Delete them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/aarch64/cpu_loop.c | 48 -----------------------------------
 1 file changed, 48 deletions(-)

diff --git a/linux-user/aarch64/cpu_loop.c b/linux-user/aarch64/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/aarch64/cpu_loop.c
+++ b/linux-user/aarch64/cpu_loop.c
@@ -XXX,XX +XXX,XX @@
 #include "target/arm/syndrome.h"
 #include "target/arm/cpu-features.h"
 
-#define get_user_code_u32(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u32((x), (gaddr));       \
-        if (!__r && bswap_code(arm_sctlr_b(env))) {     \
-            (x) = bswap32(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_code_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && bswap_code(arm_sctlr_b(env))) {     \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u32(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u32((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap32(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define put_user_data_u32(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap32(__x);                         \
-        }                                               \
-        put_user_u32(__x, (gaddr));                     \
-    })
-
-#define put_user_data_u16(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap16(__x);                         \
-        }                                               \
-        put_user_u16(__x, (gaddr));                     \
-    })
-
 /* AArch64 main loop */
 void cpu_loop(CPUARMState *env)
 {
--
2.43.0
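The fault-reporting fix above can be mirrored in a few lines of standalone C. This is an illustrative sketch, not QEMU code: FaultInfo and record_fault() are hypothetical stand-ins for ARMMMUFaultInfo and the do_fault path, showing only the bookkeeping rule the patch establishes (a fault raised by the stage 2 walk of a stage 1 descriptor fetch keeps the stage 2 level already recorded, instead of being overwritten with the stage 1 level).

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for ARMMMUFaultInfo; names are illustrative only */
typedef struct {
    int level;      /* level of the walk that faulted */
    bool stage2;    /* fault occurred in a stage 2 walk */
    bool s1ptw;     /* stage 2 fault on a stage 1 descriptor fetch */
} FaultInfo;

/*
 * Mirror of the fixed do_fault logic: if the fault came from the
 * stage 2 walk of a stage 1 page table access (s1ptw), level and
 * stage2 were already filled in when that walk failed, so keep them.
 */
static void record_fault(FaultInfo *fi, int s1_level, bool regime_is_s2)
{
    if (fi->s1ptw) {
        assert(fi->stage2);     /* set when the S2 walk failed */
    } else {
        fi->level = s1_level;
        fi->stage2 = regime_is_s2;
    }
}
```

The point of the assert is the same as in the patch: an s1ptw fault must already carry stage 2 information, so overwriting it would be a bug.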
When the MMU is disabled, data accesses should be Device nGnRnE,
Outer Shareable, Untagged. We handle the other cases from
AArch64.S1DisabledOutput() correctly but missed this one.
Device nGnRnE is memattr == 0, so the only part we were missing
was that shareability should be set to 2 for both insn fetches
and data accesses.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-13-peter.maydell@linaro.org
---
 target/arm/ptw.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env,
             }
         }
     }
 
-    if (memattr == 0 && access_type == MMU_INST_FETCH) {
-        if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
-            memattr = 0xee;  /* Normal, WT, RA, NT */
-        } else {
-            memattr = 0x44;  /* Normal, NC, No */
+    if (memattr == 0) {
+        if (access_type == MMU_INST_FETCH) {
+            if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
+                memattr = 0xee;  /* Normal, WT, RA, NT */
+            } else {
+                memattr = 0x44;  /* Normal, NC, No */
+            }
         }
         shareability = 2; /* outer shareable */
     }
--
2.34.1

In linux-user/arm/cpu_loop.c we define a full set of get/put
macros for both code and data (since the endianness handling
is different between the two). However the only one we actually
use is get_user_code_u32(). Remove the rest.

We leave a comment noting how data-side accesses should be handled
for big-endian, because that's a subtle point and we just removed the
macros that were effectively documenting it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 linux-user/arm/cpu_loop.c | 43 ++++-----------------------------------
 1 file changed, 4 insertions(+), 39 deletions(-)

diff --git a/linux-user/arm/cpu_loop.c b/linux-user/arm/cpu_loop.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/arm/cpu_loop.c
+++ b/linux-user/arm/cpu_loop.c
@@ -XXX,XX +XXX,XX @@
         __r;                                            \
     })
 
-#define get_user_code_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && bswap_code(arm_sctlr_b(env))) {     \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u32(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u32((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap32(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define get_user_data_u16(x, gaddr, env)                \
-    ({ abi_long __r = get_user_u16((x), (gaddr));       \
-        if (!__r && arm_cpu_bswap_data(env)) {          \
-            (x) = bswap16(x);                           \
-        }                                               \
-        __r;                                            \
-    })
-
-#define put_user_data_u32(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap32(__x);                         \
-        }                                               \
-        put_user_u32(__x, (gaddr));                     \
-    })
-
-#define put_user_data_u16(x, gaddr, env)                \
-    ({ typeof(x) __x = (x);                             \
-        if (arm_cpu_bswap_data(env)) {                  \
-            __x = bswap16(__x);                         \
-        }                                               \
-        put_user_u16(__x, (gaddr));                     \
-    })
-
+/*
+ * Note that if we need to do data accesses here, they should do a
+ * bswap if arm_cpu_bswap_data() returns true.
+ */
 
 /*
  * Similar to code in accel/tcg/user-exec.c, but outside the execution loop.
--
2.43.0
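The MMU-off attribute selection above can be demonstrated with a small standalone C sketch. This is not the QEMU function: s1_disabled_attrs() and its parameters are hypothetical, and only the default Device-nGnRnE entry path is modelled (the real code can also arrive here with a nonzero memattr from HCR.DC). The 0xee/0x44 encodings and the shareability value 2 are taken from the patch itself.

```c
#include <assert.h>
#include <stdbool.h>

/* Attribute encodings as used in the patch */
enum {
    ATTR_DEVICE_nGnRnE = 0x00,
    ATTR_NORMAL_WT     = 0xee,  /* Normal, WT, RA, NT */
    ATTR_NORMAL_NC     = 0x44,  /* Normal, NC, No */
};

/*
 * Sketch of the fixed behaviour: with the MMU disabled, instruction
 * fetches get Normal memory depending on SCTLR.I, data accesses stay
 * Device nGnRnE, and either way the result is Outer Shareable (2).
 */
static void s1_disabled_attrs(bool is_fetch, bool sctlr_i,
                              unsigned *memattr, unsigned *shareability)
{
    *memattr = ATTR_DEVICE_nGnRnE;
    *shareability = 0;                 /* non-shareable by default */
    if (*memattr == 0) {
        if (is_fetch) {
            *memattr = sctlr_i ? ATTR_NORMAL_WT : ATTR_NORMAL_NC;
        }
        *shareability = 2;             /* outer shareable */
    }
}
```

The bug being fixed is visible in the data-access case: before the patch, a data access fell through the fetch-only condition and kept shareability 0.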
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

When FEAT_RME is implemented, these bits override the value of
CNT[VP]_CTL_EL0.IMASK in Realm and Root state. Move the IRQ state update
into a new gt_update_irq() function and test those bits every time we
recompute the IRQ state.

Since we're removing the IRQ state from some trace events, add a new
trace event for gt_update_irq().

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230809123706.1842548-7-jean-philippe@linaro.org
[PMM: only register change hook if not USER_ONLY and if TCG]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h        |  4 +++
 target/arm/cpu.c        |  6 ++++
 target/arm/helper.c     | 65 ++++++++++++++++++++++++++++++++++-------
 target/arm/trace-events |  7 +++--
 4 files changed, 68 insertions(+), 14 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
 };
 
 unsigned int gt_cntfrq_period_ns(ARMCPU *cpu);
+void gt_rme_post_el_change(ARMCPU *cpu, void *opaque);
 
 void arm_cpu_post_init(Object *obj);
 
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
 #define HSTR_TTEE (1 << 16)
 #define HSTR_TJDBX (1 << 17)
 
+#define CNTHCTL_CNTVMASK      (1 << 18)
+#define CNTHCTL_CNTPMASK      (1 << 19)
+
 /* Return the current FPSCR value. */
 uint32_t vfp_get_fpscr(CPUARMState *env);
 void vfp_set_fpscr(CPUARMState *env, uint32_t val);
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         set_feature(env, ARM_FEATURE_VBAR);
     }
 
+#ifndef CONFIG_USER_ONLY
+    if (tcg_enabled() && cpu_isar_feature(aa64_rme, cpu)) {
+        arm_register_el_change_hook(cpu, &gt_rme_post_el_change, 0);
+    }
+#endif
+
     register_cp_regs_for_features(cpu);
     arm_cpu_register_gdb_regs_for_features(cpu);
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t gt_get_countervalue(CPUARMState *env)
     return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / gt_cntfrq_period_ns(cpu);
 }
 
+static void gt_update_irq(ARMCPU *cpu, int timeridx)
+{
+    CPUARMState *env = &cpu->env;
+    uint64_t cnthctl = env->cp15.cnthctl_el2;
+    ARMSecuritySpace ss = arm_security_space(env);
+    /* ISTATUS && !IMASK */
+    int irqstate = (env->cp15.c14_timer[timeridx].ctl & 6) == 4;
+
+    /*
+     * If bit CNTHCTL_EL2.CNT[VP]MASK is set, it overrides IMASK.
+     * It is RES0 in Secure and NonSecure state.
+     */
+    if ((ss == ARMSS_Root || ss == ARMSS_Realm) &&
+        ((timeridx == GTIMER_VIRT && (cnthctl & CNTHCTL_CNTVMASK)) ||
+         (timeridx == GTIMER_PHYS && (cnthctl & CNTHCTL_CNTPMASK)))) {
+        irqstate = 0;
+    }
+
+    qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate);
+    trace_arm_gt_update_irq(timeridx, irqstate);
+}
+
+void gt_rme_post_el_change(ARMCPU *cpu, void *ignored)
+{
+    /*
+     * Changing security state between Root and Secure/NonSecure, which may
+     * happen when switching EL, can change the effective value of CNTHCTL_EL2
+     * mask bits. Update the IRQ state accordingly.
+     */
+    gt_update_irq(cpu, GTIMER_VIRT);
+    gt_update_irq(cpu, GTIMER_PHYS);
+}
+
 static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
 {
     ARMGenericTimer *gt = &cpu->env.cp15.c14_timer[timeridx];
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
         /* Note that this must be unsigned 64 bit arithmetic: */
         int istatus = count - offset >= gt->cval;
         uint64_t nexttick;
-        int irqstate;
 
         gt->ctl = deposit32(gt->ctl, 2, 1, istatus);
 
-        irqstate = (istatus && !(gt->ctl & 2));
-        qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate);
-
         if (istatus) {
             /* Next transition is when count rolls back over to zero */
             nexttick = UINT64_MAX;
@@ -XXX,XX +XXX,XX @@ static void gt_recalc_timer(ARMCPU *cpu, int timeridx)
         } else {
             timer_mod(cpu->gt_timer[timeridx], nexttick);
         }
-        trace_arm_gt_recalc(timeridx, irqstate, nexttick);
+        trace_arm_gt_recalc(timeridx, nexttick);
     } else {
         /* Timer disabled: ISTATUS and timer output always clear */
         gt->ctl &= ~4;
-        qemu_set_irq(cpu->gt_timer_outputs[timeridx], 0);
         timer_del(cpu->gt_timer[timeridx]);
         trace_arm_gt_recalc_disabled(timeridx);
     }
+    gt_update_irq(cpu, timeridx);
 }
 
 static void gt_timer_reset(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void gt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
          * IMASK toggled: don't need to recalculate,
          * just set the interrupt line based on ISTATUS
          */
-        int irqstate = (oldval & 4) && !(value & 2);
-
-        trace_arm_gt_imask_toggle(timeridx, irqstate);
-        qemu_set_irq(cpu->gt_timer_outputs[timeridx], irqstate);
+        trace_arm_gt_imask_toggle(timeridx);
+        gt_update_irq(cpu, timeridx);
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static void gt_virt_ctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
     gt_ctl_write(env, ri, GTIMER_VIRT, value);
 }
 
+static void gt_cnthctl_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                             uint64_t value)
+{
+    ARMCPU *cpu = env_archcpu(env);
+    uint32_t oldval = env->cp15.cnthctl_el2;
+
+    raw_write(env, ri, value);
+
+    if ((oldval ^ value) & CNTHCTL_CNTVMASK) {
+        gt_update_irq(cpu, GTIMER_VIRT);
+    } else if ((oldval ^ value) & CNTHCTL_CNTPMASK) {
+        gt_update_irq(cpu, GTIMER_PHYS);
+    }
+}
+
 static void gt_cntvoff_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
      * reset values as IMPDEF. We choose to reset to 3 to comply with
      * both ARMv7 and ARMv8.
      */
-      .access = PL2_RW, .resetvalue = 3,
+      .access = PL2_RW, .type = ARM_CP_IO, .resetvalue = 3,
+      .writefn = gt_cnthctl_write, .raw_writefn = raw_write,
       .fieldoffset = offsetof(CPUARMState, cp15.cnthctl_el2) },
     { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3,
diff --git a/target/arm/trace-events b/target/arm/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/trace-events
+++ b/target/arm/trace-events
@@ -XXX,XX +XXX,XX @@
 # See docs/devel/tracing.rst for syntax documentation.
 
 # helper.c
-arm_gt_recalc(int timer, int irqstate, uint64_t nexttick) "gt recalc: timer %d irqstate %d next tick 0x%" PRIx64
-arm_gt_recalc_disabled(int timer) "gt recalc: timer %d irqstate 0 timer disabled"
+arm_gt_recalc(int timer, uint64_t nexttick) "gt recalc: timer %d next tick 0x%" PRIx64
+arm_gt_recalc_disabled(int timer) "gt recalc: timer %d timer disabled"
 arm_gt_cval_write(int timer, uint64_t value) "gt_cval_write: timer %d value 0x%" PRIx64
 arm_gt_tval_write(int timer, uint64_t value) "gt_tval_write: timer %d value 0x%" PRIx64
 arm_gt_ctl_write(int timer, uint64_t value) "gt_ctl_write: timer %d value 0x%" PRIx64
-arm_gt_imask_toggle(int timer, int irqstate) "gt_ctl_write: timer %d IMASK toggle, new irqstate %d"
+arm_gt_imask_toggle(int timer) "gt_ctl_write: timer %d IMASK toggle"
 arm_gt_cntvoff_write(uint64_t value) "gt_cntvoff_write: value 0x%" PRIx64
+arm_gt_update_irq(int timer, int irqstate) "gt_update_irq: timer %d irqstate %d"
 
 # kvm.c
 kvm_arm_fixup_msi_route(uint64_t iova, uint64_t gpa) "MSI iova = 0x%"PRIx64" is translated into 0x%"PRIx64
--
2.34.1

The arm_cpu_data_is_big_endian() and related functions are now used
only in target/arm; they can be moved to internals.h.

The motivation here is that we would like to move arm_current_el()
to internals.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h       | 48 ------------------------------------------
 target/arm/internals.h | 48 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+), 48 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_sctlr_b(CPUARMState *env)
 
 uint64_t arm_sctlr(CPUARMState *env, int el);
 
-static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
-                                                  bool sctlr_b)
-{
-#ifdef CONFIG_USER_ONLY
-    /*
-     * In system mode, BE32 is modelled in line with the
-     * architecture (as word-invariant big-endianness), where loads
-     * and stores are done little endian but from addresses which
-     * are adjusted by XORing with the appropriate constant. So the
-     * endianness to use for the raw data access is not affected by
-     * SCTLR.B.
-     * In user mode, however, we model BE32 as byte-invariant
-     * big-endianness (because user-only code cannot tell the
-     * difference), and so we need to use a data access endianness
-     * that depends on SCTLR.B.
-     */
-    if (sctlr_b) {
-        return true;
-    }
-#endif
-    /* In 32bit endianness is determined by looking at CPSR's E bit */
-    return env->uncached_cpsr & CPSR_E;
-}
-
-static inline bool arm_cpu_data_is_big_endian_a64(int el, uint64_t sctlr)
-{
-    return sctlr & (el ? SCTLR_EE : SCTLR_E0E);
-}
-
-/* Return true if the processor is in big-endian mode. */
-static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
-{
-    if (!is_a64(env)) {
-        return arm_cpu_data_is_big_endian_a32(env, arm_sctlr_b(env));
-    } else {
-        int cur_el = arm_current_el(env);
-        uint64_t sctlr = arm_sctlr(env, cur_el);
-        return arm_cpu_data_is_big_endian_a64(cur_el, sctlr);
-    }
-}
-
 #include "exec/cpu-all.h"
 
 /*
@@ -XXX,XX +XXX,XX @@ static inline bool bswap_code(bool sctlr_b)
 #endif
 }
 
-#ifdef CONFIG_USER_ONLY
-static inline bool arm_cpu_bswap_data(CPUARMState *env)
-{
-    return TARGET_BIG_ENDIAN ^ arm_cpu_data_is_big_endian(env);
-}
-#endif
-
 void cpu_get_tb_cpu_state(CPUARMState *env, vaddr *pc,
                           uint64_t *cs_base, uint32_t *flags);
 
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
     return arm_rmode_to_sf_map[rmode];
 }
 
+static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
+                                                  bool sctlr_b)
+{
+#ifdef CONFIG_USER_ONLY
+    /*
+     * In system mode, BE32 is modelled in line with the
+     * architecture (as word-invariant big-endianness), where loads
+     * and stores are done little endian but from addresses which
+     * are adjusted by XORing with the appropriate constant. So the
+     * endianness to use for the raw data access is not affected by
+     * SCTLR.B.
+     * In user mode, however, we model BE32 as byte-invariant
+     * big-endianness (because user-only code cannot tell the
+     * difference), and so we need to use a data access endianness
+     * that depends on SCTLR.B.
+     */
+    if (sctlr_b) {
+        return true;
+    }
+#endif
+    /* In 32bit endianness is determined by looking at CPSR's E bit */
+    return env->uncached_cpsr & CPSR_E;
+}
+
+static inline bool arm_cpu_data_is_big_endian_a64(int el, uint64_t sctlr)
+{
+    return sctlr & (el ? SCTLR_EE : SCTLR_E0E);
+}
+
+/* Return true if the processor is in big-endian mode. */
+static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
+{
+    if (!is_a64(env)) {
+        return arm_cpu_data_is_big_endian_a32(env, arm_sctlr_b(env));
+    } else {
+        int cur_el = arm_current_el(env);
+        uint64_t sctlr = arm_sctlr(env, cur_el);
+        return arm_cpu_data_is_big_endian_a64(cur_el, sctlr);
+    }
+}
+
+#ifdef CONFIG_USER_ONLY
+static inline bool arm_cpu_bswap_data(CPUARMState *env)
+{
+    return TARGET_BIG_ENDIAN ^ arm_cpu_data_is_big_endian(env);
+}
+#endif
+
 static inline void aarch64_save_sp(CPUARMState *env, int el)
 {
     if (env->pstate & PSTATE_SP) {
--
2.43.0
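The condition gt_update_irq() computes above can be shown in isolation. This standalone C sketch is not QEMU code: timer_irq_state() and its parameters are hypothetical, but the bit arithmetic is the patch's own ((ctl & 6) == 4 means ISTATUS set and IMASK clear, using the architectural CNT[VP]_CTL_EL0 bit assignments), with the Root/Realm CNTHCTL mask override folded into two booleans.

```c
#include <assert.h>
#include <stdbool.h>

/* CNT[VP]_CTL_EL0 bits: ENABLE = 1, IMASK = 2, ISTATUS = 4 */
enum { CTL_IMASK = 2, CTL_ISTATUS = 4 };

/*
 * The timer interrupt line is asserted when ISTATUS is set and IMASK
 * is clear, unless (in Root or Realm state) the relevant
 * CNTHCTL_EL2.CNT[VP]MASK bit overrides IMASK and forces it low.
 */
static int timer_irq_state(unsigned ctl, bool root_or_realm,
                           bool cnthctl_mask)
{
    int irqstate = (ctl & (CTL_ISTATUS | CTL_IMASK)) == CTL_ISTATUS;

    if (root_or_realm && cnthctl_mask) {
        irqstate = 0;   /* mask bit overrides IMASK */
    }
    return irqstate;
}
```

Centralising this in one function is the point of the patch: every path that could change the result (timer recalc, CTL writes, CNTHCTL writes, EL changes that switch security space) funnels through the same computation.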
arm_hcr_el2_eff_secstate() takes a bool secure, which it uses to
determine whether EL2 is enabled in the current security state.
With the advent of FEAT_RME this is no longer sufficient, because
EL2 can be enabled for Secure state but not for Root, and both
of those will pass 'secure == true' in the callsites in ptw.c.

As it happens in all of our callsites in ptw.c we either avoid making
the call or else avoid using the returned value if we're doing a
translation for Root, so this is not a behaviour change even if the
experimental FEAT_RME is enabled. But it is less confusing in the
ptw.c code if we avoid the use of a bool secure that duplicates some
of the information in the ArmSecuritySpace argument.

Make arm_hcr_el2_eff_secstate() take an ARMSecuritySpace argument
instead. Because we always want to know the HCR_EL2 for the
security state defined by the current effective value of
SCR_EL3.{NSE,NS}, it makes no sense to pass ARMSS_Root here,
and we assert that callers don't do that.

To avoid the assert(), we thus push the call to
arm_hcr_el2_eff_secstate() down into the cases in
regime_translation_disabled() that need it, rather than calling the
function and ignoring the result for the Root space translations.
All other calls to this function in ptw.c are already in places
where we have confirmed that the mmu_idx is a stage 2 translation
or that the regime EL is not 3.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-7-peter.maydell@linaro.org
---
 target/arm/cpu.h    |  2 +-
 target/arm/helper.c |  8 +++++---
 target/arm/ptw.c    | 15 +++++++--------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_el2_enabled(CPUARMState *env)
  * "for all purposes other than a direct read or write access of HCR_EL2."
  * Not included here is HCR_RW.
  */
-uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure);
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, ARMSecuritySpace space);
 uint64_t arm_hcr_el2_eff(CPUARMState *env);
 uint64_t arm_hcrx_el2_eff(CPUARMState *env);
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
  * Bits that are not included here:
  * RW (read from SCR_EL3.RW as needed)
  */
-uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, bool secure)
+uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, ARMSecuritySpace space)
 {
     uint64_t ret = env->cp15.hcr_el2;
 
-    if (!arm_is_el2_enabled_secstate(env, secure)) {
+    assert(space != ARMSS_Root);
+
+    if (!arm_is_el2_enabled_secstate(env, arm_space_is_secure(space))) {
         /*
          * "This register has no effect if EL2 is not enabled in the
          * current Security state". This is ARMv8.4-SecEL2 speak for
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
     if (arm_feature(env, ARM_FEATURE_M)) {
         return 0;
     }
-    return arm_hcr_el2_eff_secstate(env, arm_is_secure_below_el3(env));
+    return arm_hcr_el2_eff_secstate(env, arm_security_space_below_el3(env));
 }
 
 /*
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
                                         ARMSecuritySpace space)
 {
     uint64_t hcr_el2;
-    bool is_secure = arm_space_is_secure(space);
 
     if (arm_feature(env, ARM_FEATURE_M)) {
+        bool is_secure = arm_space_is_secure(space);
         switch (env->v7m.mpu_ctrl[is_secure] &
                 (R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
         case R_V7M_MPU_CTRL_ENABLE_MASK:
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
         }
     }
 
-    hcr_el2 = arm_hcr_el2_eff_secstate(env, is_secure);
 
     switch (mmu_idx) {
     case ARMMMUIdx_Stage2:
     case ARMMMUIdx_Stage2_S:
         /* HCR.DC means HCR.VM behaves as 1 */
+        hcr_el2 = arm_hcr_el2_eff_secstate(env, space);
         return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
 
     case ARMMMUIdx_E10_0:
     case ARMMMUIdx_E10_1:
     case ARMMMUIdx_E10_1_PAN:
         /* TGE means that EL0/1 act as if SCTLR_EL1.M is zero */
+        hcr_el2 = arm_hcr_el2_eff_secstate(env, space);
         if (hcr_el2 & HCR_TGE) {
             return true;
         }
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
     case ARMMMUIdx_Stage1_E1:
     case ARMMMUIdx_Stage1_E1_PAN:
         /* HCR.DC means SCTLR_EL1.M behaves as 0 */
+        hcr_el2 = arm_hcr_el2_eff_secstate(env, space);
         if (hcr_el2 & HCR_DC) {
             return true;
         }
@@ -XXX,XX +XXX,XX @@ static bool fault_s1ns(ARMSecuritySpace space, ARMMMUIdx s2_mmu_idx)
 static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
                              hwaddr addr, ARMMMUFaultInfo *fi)
 {
-    bool is_secure = ptw->in_secure;
     ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     ARMMMUIdx s2_mmu_idx = ptw->in_ptw_idx;
     uint8_t pte_attrs;
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
     }
 
     if (regime_is_stage2(s2_mmu_idx)) {
-        uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
+        uint64_t hcr = arm_hcr_el2_eff_secstate(env, ptw->in_space);
 
         if ((hcr & HCR_PTW) && S2_attrs_are_device(hcr, pte_attrs)) {
             /*
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env,
                                    ARMMMUFaultInfo *fi)
 {
     ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
-    bool is_secure = arm_space_is_secure(ptw->in_space);
     uint8_t memattr = 0x00;    /* Device nGnRnE */
     uint8_t shareability = 0;  /* non-shareable */
     int r_el;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env,
 
     /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
     if (r_el == 1) {
-        uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
+        uint64_t hcr = arm_hcr_el2_eff_secstate(env, ptw->in_space);
         if (hcr & HCR_DC) {
             if (hcr & HCR_DCT) {
                 memattr = 0xf0;  /* Tagged, Normal, WB, RWA */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
 {
     hwaddr ipa;
     int s1_prot, s1_lgpgsz;
-    bool is_secure = ptw->in_secure;
     ARMSecuritySpace in_space = ptw->in_space;
     bool ret, ipa_secure;
     ARMCacheAttrs cacheattrs1;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
     }
 
     /* Combine the S1 and S2 cache attributes. */
-    hcr = arm_hcr_el2_eff_secstate(env, is_secure);
+    hcr = arm_hcr_el2_eff_secstate(env, in_space);
     if (hcr & HCR_DC) {
         /*
          * HCR.DC forces the first stage attributes to
--
2.34.1

The functions arm_current_el() and arm_el_is_aa64() are used only in
target/arm and in hw/intc/arm_gicv3_cpuif.c. They're functions that
query internal state of the CPU. Move them out of cpu.h and into
internals.h.

This means we need to include internals.h in arm_gicv3_cpuif.c, but
this is justifiable because that file is implementing the GICv3 CPU
interface, which really is part of the CPU proper; we just ended up
implementing it in code in hw/intc/ for historical reasons.

The motivation for this move is that we'd like to change
arm_el_is_aa64() to add a condition that uses cpu_isar_feature();
but we don't want to include cpu-features.h in cpu.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/cpu.h          | 66 --------------------------------------
 target/arm/internals.h    | 67 +++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gicv3_cpuif.c |  1 +
 target/arm/arch_dump.c    |  1 +
 4 files changed, 69 insertions(+), 66 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff_secstate(CPUARMState *env, ARMSecuritySpace space);
 uint64_t arm_hcr_el2_eff(CPUARMState *env);
 uint64_t arm_hcrx_el2_eff(CPUARMState *env);
 
-/* Return true if the specified exception level is running in AArch64 state. */
-static inline bool arm_el_is_aa64(CPUARMState *env, int el)
-{
-    /* This isn't valid for EL0 (if we're in EL0, is_a64() is what you want,
-     * and if we're not in EL0 then the state of EL0 isn't well defined.)
-     */
-    assert(el >= 1 && el <= 3);
-    bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);
-
-    /* The highest exception level is always at the maximum supported
-     * register width, and then lower levels have a register width controlled
-     * by bits in the SCR or HCR registers.
-     */
-    if (el == 3) {
-        return aa64;
-    }
-
-    if (arm_feature(env, ARM_FEATURE_EL3) &&
-        ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
-        aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
-    }
-
-    if (el == 2) {
-        return aa64;
-    }
-
-    if (arm_is_el2_enabled(env)) {
-        aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
-    }
-
-    return aa64;
-}
-
 /*
  * Function for determining whether guest cp register reads and writes should
  * access the secure or non-secure bank of a cp register. When EL3 is
@@ -XXX,XX +XXX,XX @@ static inline bool arm_v7m_is_handler_mode(CPUARMState *env)
     return env->v7m.exception != 0;
 }
 
-/* Return the current Exception Level (as per ARMv8; note that this differs
- * from the ARMv7 Privilege Level).
- */
-static inline int arm_current_el(CPUARMState *env)
-{
-    if (arm_feature(env, ARM_FEATURE_M)) {
-        return arm_v7m_is_handler_mode(env) ||
-            !(env->v7m.control[env->v7m.secure] & 1);
-    }
-
-    if (is_a64(env)) {
-        return extract32(env->pstate, 2, 2);
-    }
-
-    switch (env->uncached_cpsr & 0x1f) {
-    case ARM_CPU_MODE_USR:
-        return 0;
-    case ARM_CPU_MODE_HYP:
-        return 2;
-    case ARM_CPU_MODE_MON:
-        return 3;
-    default:
-        if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
-            /* If EL3 is 32-bit then all secure privileged modes run in
-             * EL3
-             */
-            return 3;
-        }
-
-        return 1;
-    }
-}
-
 /**
  * write_list_to_cpustate
  * @cpu: ARMCPU
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
     return arm_rmode_to_sf_map[rmode];
 }
 
+/* Return true if the specified exception level is running in AArch64 state. */
+static inline bool arm_el_is_aa64(CPUARMState *env, int el)
+{
+    /*
+     * This isn't valid for EL0 (if we're in EL0, is_a64() is what you want,
+     * and if we're not in EL0 then the state of EL0 isn't well defined.)
+     */
+    assert(el >= 1 && el <= 3);
+    bool aa64 = arm_feature(env, ARM_FEATURE_AARCH64);
+
+    /*
+     * The highest exception level is always at the maximum supported
+     * register width, and then lower levels have a register width controlled
+     * by bits in the SCR or HCR registers.
+     */
+    if (el == 3) {
+        return aa64;
+    }
+
+    if (arm_feature(env, ARM_FEATURE_EL3) &&
+        ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
+        aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
+    }
+
+    if (el == 2) {
+        return aa64;
+    }
+
+    if (arm_is_el2_enabled(env)) {
+        aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
+    }
+
+    return aa64;
+}
+
+/*
+ * Return the current Exception Level (as per ARMv8; note that this differs
+ * from the ARMv7 Privilege Level).
+ */
+static inline int arm_current_el(CPUARMState *env)
+{
+    if (arm_feature(env, ARM_FEATURE_M)) {
+        return arm_v7m_is_handler_mode(env) ||
+            !(env->v7m.control[env->v7m.secure] & 1);
+    }
+
+    if (is_a64(env)) {
+        return extract32(env->pstate, 2, 2);
+    }
+
+    switch (env->uncached_cpsr & 0x1f) {
+    case ARM_CPU_MODE_USR:
+        return 0;
+    case ARM_CPU_MODE_HYP:
+        return 2;
+    case ARM_CPU_MODE_MON:
+        return 3;
+    default:
+        if (arm_is_secure(env) && !arm_el_is_aa64(env, 3)) {
+            /* If EL3 is 32-bit then all secure privileged modes run in EL3 */
+            return 3;
+        }
+
+        return 1;
+    }
+}
+
 static inline bool arm_cpu_data_is_big_endian_a32(CPUARMState *env,
                                                   bool sctlr_b)
 {
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@
 #include "cpu.h"
 #include "target/arm/cpregs.h"
 #include "target/arm/cpu-features.h"
+#include "target/arm/internals.h"
 #include "system/tcg.h"
 #include "system/qtest.h"
 
diff --git a/target/arm/arch_dump.c b/target/arm/arch_dump.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/arch_dump.c
+++ b/target/arm/arch_dump.c
@@ -XXX,XX +XXX,XX @@
 #include "elf.h"
 #include "system/dump.h"
 #include "cpu-features.h"
+#include "internals.h"
 
 /* struct user_pt_regs from arch/arm64/include/uapi/asm/ptrace.h */
 struct aarch64_user_regs {
--
2.43.0
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

At the moment we only handle Secure and Nonsecure security spaces for
the AT instructions. Add support for Realm and Root.

For AArch64, arm_security_space() gives the desired space. ARM DDI0487J
says (R_NYXTL):

  If EL3 is implemented, then when an address translation instruction
  that applies to an Exception level lower than EL3 is executed, the
  Effective value of SCR_EL3.{NSE, NS} determines the target Security
  state that the instruction applies to.

For AArch32, some instructions can access NonSecure space from Secure,
so we still need to pass the state explicitly to do_ats_write().

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-5-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 18 +++++++++---------
 target/arm/helper.c    | 27 ++++++++++++---------------
 target/arm/ptw.c       | 12 ++++++------
 3 files changed, 27 insertions(+), 30 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     __attribute__((nonnull));
 
 /**
- * get_phys_addr_with_secure_nogpc: get the physical address for a virtual
- * address
+ * get_phys_addr_with_space_nogpc: get the physical address for a virtual
+ * address
  * @env: CPUARMState
  * @address: virtual address to get physical address for
  * @access_type: 0 for read, 1 for write, 2 for execute
  * @mmu_idx: MMU index indicating required translation regime
- * @is_secure: security state for the access
+ * @space: security space for the access
  * @result: set on translation success.
  * @fi: set to fault info if the translation fails
  *
- * Similar to get_phys_addr, but use the given security regime and don't perform
+ * Similar to get_phys_addr, but use the given security space and don't perform
  * a Granule Protection Check on the resulting address.
  */
-bool get_phys_addr_with_secure_nogpc(CPUARMState *env, target_ulong address,
-                                     MMUAccessType access_type,
-                                     ARMMMUIdx mmu_idx, bool is_secure,
-                                     GetPhysAddrResult *result,
-                                     ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_space_nogpc(CPUARMState *env, target_ulong address,
+                                    MMUAccessType access_type,
+                                    ARMMMUIdx mmu_idx, ARMSecuritySpace space,
+                                    GetPhysAddrResult *result,
+                                    ARMMMUFaultInfo *fi)
     __attribute__((nonnull));
 
 bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int par_el1_shareability(GetPhysAddrResult *res)
 
 static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
                              MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                             bool is_secure)
+                             ARMSecuritySpace ss)
 {
     bool ret;
     uint64_t par64;
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
      * I_MXTJT: Granule protection checks are not performed on the final address
      * of a successful translation.
      */
-    ret = get_phys_addr_with_secure_nogpc(env, value, access_type, mmu_idx,
-                                          is_secure, &res, &fi);
+    ret = get_phys_addr_with_space_nogpc(env, value, access_type, mmu_idx, ss,
+                                         &res, &fi);
 
     /*
      * ATS operations only do S1 or S1+S2 translations, so we never
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     uint64_t par64;
     ARMMMUIdx mmu_idx;
     int el = arm_current_el(env);
-    bool secure = arm_is_secure_below_el3(env);
+    ARMSecuritySpace ss = arm_security_space(env);
 
     switch (ri->opc2 & 6) {
     case 0:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_E3;
-            secure = true;
             break;
         case 2:
-            g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
+            g_assert(ss != ARMSS_Secure); /* ARMv8.4-SecEL2 is 64-bit only */
             /* fall through */
         case 1:
             if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         switch (el) {
         case 3:
             mmu_idx = ARMMMUIdx_E10_0;
-            secure = true;
             break;
         case 2:
-            g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
+            g_assert(ss != ARMSS_Secure); /* ARMv8.4-SecEL2 is 64-bit only */
             mmu_idx = ARMMMUIdx_Stage1_E0;
             break;
         case 1:
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
     case 4:
         /* stage 1+2 NonSecure PL1: ATS12NSOPR, ATS12NSOPW */
         mmu_idx = ARMMMUIdx_E10_1;
-        secure = false;
+        ss = ARMSS_NonSecure;
         break;
     case 6:
         /* stage 1+2 NonSecure PL0: ATS12NSOUR, ATS12NSOUW */
         mmu_idx = ARMMMUIdx_E10_0;
-        secure = false;
+        ss = ARMSS_NonSecure;
         break;
     default:
         g_assert_not_reached();
     }
 
-    par64 = do_ats_write(env, value, access_type, mmu_idx, secure);
+    par64 = do_ats_write(env, value, access_type, mmu_idx, ss);
 
     A32_BANKED_CURRENT_REG_SET(env, par, par64);
 #else
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
     uint64_t par64;
 
     /* There is no SecureEL2 for AArch32. */
-    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2, false);
+    par64 = do_ats_write(env, value, access_type, ARMMMUIdx_E2,
+                         ARMSS_NonSecure);
 
     A32_BANKED_CURRENT_REG_SET(env, par, par64);
 #else
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
 #ifdef CONFIG_TCG
     MMUAccessType access_type = ri->opc2 & 1 ? MMU_DATA_STORE : MMU_DATA_LOAD;
     ARMMMUIdx mmu_idx;
-    int secure = arm_is_secure_below_el3(env);
     uint64_t hcr_el2 = arm_hcr_el2_eff(env);
     bool regime_e20 = (hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE);
 
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
         break;
     case 6: /* AT S1E3R, AT S1E3W */
         mmu_idx = ARMMMUIdx_E3;
-        secure = true;
         break;
     default:
         g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 
     env->cp15.par_el[1] = do_ats_write(env, value, access_type,
-                                       mmu_idx, secure);
+                                       mmu_idx, arm_security_space(env));
 #else
     /* Handled by hardware accelerator. */
     g_assert_not_reached();
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_gpc(CPUARMState *env, S1Translate *ptw,
     return false;
 }
 
-bool get_phys_addr_with_secure_nogpc(CPUARMState *env, target_ulong address,
-                                     MMUAccessType access_type,
-                                     ARMMMUIdx mmu_idx, bool is_secure,
-                                     GetPhysAddrResult *result,
-                                     ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_space_nogpc(CPUARMState *env, target_ulong address,
+                                    MMUAccessType access_type,
+                                    ARMMMUIdx mmu_idx, ARMSecuritySpace space,
+                                    GetPhysAddrResult *result,
+                                    ARMMMUFaultInfo *fi)
 {
     S1Translate ptw = {
         .in_mmu_idx = mmu_idx,
-        .in_space = arm_secure_to_space(is_secure),
+        .in_space = space,
     };
     return get_phys_addr_nogpc(env, &ptw, address, access_type, result, fi);
 }
--
2.34.1

The definition of SCR_EL3.RW says that its effective value is 1 if:
 - EL2 is implemented and does not support AArch32, and SCR_EL3.NS is 1
 - the effective value of SCR_EL3.{EEL2,NS} is {1,0} (i.e. we are
   Secure and Secure EL2 is disabled)

We implement the second of these in arm_el_is_aa64(), but forgot the
first.

Provide a new function arm_scr_rw_eff() to return the effective
value of SCR_EL3.RW, and use it in arm_el_is_aa64() and the other
places that currently look directly at the bit value.

(scr_write() enforces that the RW bit is RAO/WI if neither EL1 nor
EL2 have AArch32 support, but if EL1 does but EL2 does not then the
bit must still be writeable.)

This will mean that if code at EL3 attempts to perform an exception
return to AArch32 EL2 when EL2 is AArch64-only we will correctly
handle this as an illegal exception return: it will be caught by the
"return to an EL which is configured for a different register width"
check in HELPER(exception_return).

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/internals.h | 26 +++++++++++++++++++++++---
 target/arm/helper.c    |  4 ++--
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline FloatRoundMode arm_rmode_to_sf(ARMFPRounding rmode)
     return arm_rmode_to_sf_map[rmode];
 }
 
+/* Return the effective value of SCR_EL3.RW */
+static inline bool arm_scr_rw_eff(CPUARMState *env)
+{
+    /*
+     * SCR_EL3.RW has an effective value of 1 if:
+     *  - we are NS and EL2 is implemented but doesn't support AArch32
+     *  - we are S and EL2 is enabled (in which case it must be AArch64)
+     */
+    ARMCPU *cpu = env_archcpu(env);
+
+    if (env->cp15.scr_el3 & SCR_RW) {
+        return true;
+    }
+    if (env->cp15.scr_el3 & SCR_NS) {
+        return arm_feature(env, ARM_FEATURE_EL2) &&
+            !cpu_isar_feature(aa64_aa32_el2, cpu);
+    } else {
+        return env->cp15.scr_el3 & SCR_EEL2;
+    }
+}
+
 /* Return true if the specified exception level is running in AArch64 state. */
 static inline bool arm_el_is_aa64(CPUARMState *env, int el)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
         return aa64;
     }
 
-    if (arm_feature(env, ARM_FEATURE_EL3) &&
-        ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
-        aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
+    if (arm_feature(env, ARM_FEATURE_EL3)) {
+        aa64 = aa64 && arm_scr_rw_eff(env);
     }
 
     if (el == 2) {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
     uint64_t hcr_el2;
 
     if (arm_feature(env, ARM_FEATURE_EL3)) {
-        rw = ((env->cp15.scr_el3 & SCR_RW) == SCR_RW);
+        rw = arm_scr_rw_eff(env);
     } else {
         /*
          * Either EL2 is the highest EL (and so the EL2 register width
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
 
     switch (new_el) {
     case 3:
-        is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
+        is_aa64 = arm_scr_rw_eff(env);
         break;
     case 2:
         hcr = arm_hcr_el2_eff(env);
--
2.43.0
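The SCR_EL3.RW effective-value rule described in the commit message above can be restated as a small pure predicate. This is not QEMU code, just an illustrative standalone sketch: `have_el2` and `el2_has_aa32` are stand-ins for QEMU's `arm_feature(env, ARM_FEATURE_EL2)` and `cpu_isar_feature(aa64_aa32_el2, cpu)` checks, and the bit positions follow the architectural SCR_EL3 layout.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Architectural SCR_EL3 bit positions */
#define SCR_NS   (1u << 0)
#define SCR_RW   (1u << 10)
#define SCR_EEL2 (1u << 18)

/*
 * Effective value of SCR_EL3.RW: 1 if the bit is set, or if we are NS
 * and EL2 exists but is AArch64-only, or if we are Secure and Secure
 * EL2 is enabled (Secure EL2 is always AArch64).
 */
static bool scr_rw_eff(uint32_t scr, bool have_el2, bool el2_has_aa32)
{
    if (scr & SCR_RW) {
        return true;
    }
    if (scr & SCR_NS) {
        /* NonSecure: effective 1 when EL2 is implemented but AArch64-only */
        return have_el2 && !el2_has_aa32;
    }
    /* Secure: effective 1 iff Secure EL2 is enabled */
    return (scr & SCR_EEL2) != 0;
}
```

The corner the patch fixes is the second branch: an AArch64-only EL2 forces the effective value to 1 even when the guest wrote 0 to the bit.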
The PAR_EL1.SH field documents that for the cases of:
 * Device memory
 * Normal memory with both Inner and Outer Non-Cacheable
the field should be 0b10 rather than whatever was in the
translation table descriptor field. (In the pseudocode this
is handled by PAREncodeShareability().) Perform this
adjustment when assembling a PAR value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-16-peter.maydell@linaro.org
---
 target/arm/helper.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
 }
 
 #ifdef CONFIG_TCG
+static int par_el1_shareability(GetPhysAddrResult *res)
+{
+    /*
+     * The PAR_EL1.SH field must be 0b10 for Device or Normal-NC
+     * memory -- see pseudocode PAREncodeShareability().
+     */
+    if (((res->cacheattrs.attrs & 0xf0) == 0) ||
+        res->cacheattrs.attrs == 0x44 || res->cacheattrs.attrs == 0x40) {
+        return 2;
+    }
+    return res->cacheattrs.shareability;
+}
+
 static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
                              MMUAccessType access_type, ARMMMUIdx mmu_idx,
                              bool is_secure)
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
             par64 |= (1 << 9); /* NS */
         }
         par64 |= (uint64_t)res.cacheattrs.attrs << 56; /* ATTR */
-        par64 |= res.cacheattrs.shareability << 7; /* SH */
+        par64 |= par_el1_shareability(&res) << 7; /* SH */
     } else {
         uint32_t fsr = arm_fi_to_lfsc(&fi);
 
--
2.34.1

When EL1 doesn't support AArch32, the HCR_EL2.RW bit is supposed to
be RAO/WI. Enforce the RAO/WI behaviour.

Note that we handle "reset value should honour RES1 bits" in the same
way that SCR_EL3 does, via a reset function.

We do already have some CPU types which don't implement AArch32
above EL0, so this is technically a bug; it doesn't seem worth
backporting to stable because no sensible guest code will be
deliberately attempting to set the RW bit to a value corresponding
to an unimplemented execution state and then checking that we
did the right thing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
     /* Clear RES0 bits. */
     value &= valid_mask;
 
+    /* RW is RAO/WI if EL1 is AArch64 only */
+    if (!cpu_isar_feature(aa64_aa32_el1, cpu)) {
+        value |= HCR_RW;
+    }
+
     /*
      * These bits change the MMU setup:
      * HCR_VM enables stage 2 translation
@@ -XXX,XX +XXX,XX @@ static void hcr_writelow(CPUARMState *env, const ARMCPRegInfo *ri,
     do_hcr_write(env, value, MAKE_64BIT_MASK(32, 32));
 }
 
+static void hcr_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* hcr_write will set the RES1 bits on an AArch64-only CPU */
+    hcr_write(env, ri, 0);
+}
+
 /*
  * Return the effective value of HCR_EL2, at the given security state.
  * Bits that are not included here:
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
       .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
       .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.hcr_el2),
       .nv2_redirect_offset = 0x78,
+      .resetfn = hcr_reset,
       .writefn = hcr_write, .raw_writefn = raw_write },
     { .name = "HCR", .state = ARM_CP_STATE_AA32,
       .type = ARM_CP_ALIAS | ARM_CP_IO,
--
2.43.0
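The RAO/WI ("reads as one, writes ignored") behaviour enforced by the HCR_EL2 patch above can be modelled as a masking step applied to every stored value. A minimal sketch, not QEMU code: `el1_has_aa32` stands in for `cpu_isar_feature(aa64_aa32_el1, cpu)`, and the reset path shows why writing 0 through the same function also yields the RES1 bits.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HCR_RW (1ull << 31)

/*
 * Model of a conditionally-RAO/WI bit: whatever the guest writes,
 * the stored value has RW forced to 1 when EL1 cannot run AArch32.
 */
static uint64_t hcr_store(uint64_t value, bool el1_has_aa32)
{
    if (!el1_has_aa32) {
        value |= HCR_RW;   /* RAO/WI: the written value is ignored */
    }
    return value;
}

/* Reset = write 0 through the same path, so RES1 bits come out set. */
static uint64_t hcr_reset_value(bool el1_has_aa32)
{
    return hcr_store(0, el1_has_aa32);
}
```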
The architecture doesn't permit block descriptors at any arbitrary
level of the page table walk; it depends on the granule size which
levels are permitted. We implemented only a partial version of this
check which assumes that block descriptors are valid at all levels
except level 3, which meant that we wouldn't deliver the Translation
fault for all cases of this sort of guest page table error.

Implement the logic corresponding to the pseudocode
AArch64.DecodeDescriptorType() and AArch64.BlockDescSupported().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-14-peter.maydell@linaro.org
---
 target/arm/ptw.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static int check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, uint64_t tcr,
     return INT_MIN;
 }
 
+static bool lpae_block_desc_valid(ARMCPU *cpu, bool ds,
+                                  ARMGranuleSize gran, int level)
+{
+    /*
+     * See pseudocode AArch64.BlockDescSupported(): block descriptors
+     * are not valid at all levels, depending on the page size.
+     */
+    switch (gran) {
+    case Gran4K:
+        return (level == 0 && ds) || level == 1 || level == 2;
+    case Gran16K:
+        return (level == 1 && ds) || level == 2;
+    case Gran64K:
+        return (level == 1 && arm_pamax(cpu) == 52) || level == 2;
+    default:
+        g_assert_not_reached();
+    }
+}
+
 /**
  * get_phys_addr_lpae: perform one stage of page table walk, LPAE format
  *
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
     new_descriptor = descriptor;
 
 restart_atomic_update:
-    if (!(descriptor & 1) || (!(descriptor & 2) && (level == 3))) {
-        /* Invalid, or the Reserved level 3 encoding */
+    if (!(descriptor & 1) ||
+        (!(descriptor & 2) &&
+         !lpae_block_desc_valid(cpu, param.ds, param.gran, level))) {
+        /* Invalid, or a block descriptor at an invalid level */
         goto do_translation_fault;
     }
 
--
2.34.1

We already call env_archcpu() multiple times within the
exception_return helper function, and we're about to want to
add another use of the ARMCPU pointer. Add a local variable
cpu so we can call env_archcpu() just once.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/tcg/helper-a64.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static void cpsr_write_from_spsr_elx(CPUARMState *env,
 
 void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
 {
+    ARMCPU *cpu = env_archcpu(env);
     int cur_el = arm_current_el(env);
     unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
     uint32_t spsr = env->banked_spsr[spsr_idx];
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
     }
 
     bql_lock();
-    arm_call_pre_el_change_hook(env_archcpu(env));
+    arm_call_pre_el_change_hook(cpu);
     bql_unlock();
 
     if (!return_to_aa64) {
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
         int tbii;
 
         env->aarch64 = true;
-        spsr &= aarch64_pstate_valid_mask(&env_archcpu(env)->isar);
+        spsr &= aarch64_pstate_valid_mask(&cpu->isar);
         pstate_write(env, spsr);
         if (!arm_singlestep_active(env)) {
             env->pstate &= ~PSTATE_SS;
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
     aarch64_sve_change_el(env, cur_el, new_el, return_to_aa64);
 
     bql_lock();
-    arm_call_el_change_hook(env_archcpu(env));
+    arm_call_el_change_hook(cpu);
     bql_unlock();
 
     return;
--
2.43.0
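The per-granule block-descriptor level rule that the ptw.c patch above implements can be checked in isolation. A standalone restatement of the pseudocode AArch64.BlockDescSupported(), not QEMU code: `ds` is the TCR DS bit (52-bit addressing for 4K/16K granules) and `pamax52` stands in for the `arm_pamax(cpu) == 52` check used for 64K granules.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { Gran4K, Gran16K, Gran64K } GranuleSize;

/* Is a block (not page) descriptor architecturally valid at this level? */
static bool block_desc_valid(GranuleSize gran, bool ds, bool pamax52, int level)
{
    switch (gran) {
    case Gran4K:
        /* level 0 blocks only with DS; levels 1 and 2 always */
        return (level == 0 && ds) || level == 1 || level == 2;
    case Gran16K:
        /* level 1 blocks only with DS; level 2 always */
        return (level == 1 && ds) || level == 2;
    case Gran64K:
        /* level 1 blocks only with a 52-bit PA; level 2 always */
        return (level == 1 && pamax52) || level == 2;
    }
    return false;  /* unreachable for valid granule values */
}
```

A walker using this predicate delivers a Translation fault for a block descriptor at any other level, instead of only rejecting the level 3 encoding.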
We no longer look at the in_secure field of the S1Translate struct
anyway, so we can remove it and all the code which sets it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-11-peter.maydell@linaro.org
---
 target/arm/ptw.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
      * value being Stage2 vs Stage2_S distinguishes those.
      */
     ARMSecuritySpace in_space;
-    /*
-     * in_secure: whether the translation regime is a Secure one.
-     * This is always equal to arm_space_is_secure(in_space).
-     * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
-     * this field is updated accordingly.
-     */
-    bool in_secure;
     /*
      * in_debug: is this a QEMU debug access (gdbstub, etc)? Debug
      * accesses will not update the guest page table access flags
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
         S1Translate s2ptw = {
             .in_mmu_idx = s2_mmu_idx,
             .in_ptw_idx = ptw_idx_for_stage_2(env, s2_mmu_idx),
-            .in_secure = arm_space_is_secure(s2_space),
             .in_space = s2_space,
             .in_debug = true,
         };
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
         QEMU_BUILD_BUG_ON(ARMMMUIdx_Phys_S + 1 != ARMMMUIdx_Phys_NS);
         QEMU_BUILD_BUG_ON(ARMMMUIdx_Stage2_S + 1 != ARMMMUIdx_Stage2);
         ptw->in_ptw_idx += 1;
-        ptw->in_secure = false;
         ptw->in_space = ARMSS_NonSecure;
     }
 
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
 
     ptw->in_s1_is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
     ptw->in_mmu_idx = ipa_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
-    ptw->in_secure = ipa_secure;
     ptw->in_space = ipa_space;
     ptw->in_ptw_idx = ptw_idx_for_stage_2(env, ptw->in_mmu_idx);
 
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
 {
     S1Translate ptw = {
         .in_mmu_idx = mmu_idx,
-        .in_secure = is_secure,
         .in_space = arm_secure_to_space(is_secure),
     };
     return get_phys_addr_gpc(env, &ptw, address, access_type, result, fi);
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }
 
     ptw.in_space = ss;
-    ptw.in_secure = arm_space_is_secure(ss);
     return get_phys_addr_gpc(env, &ptw, address, access_type, result, fi);
 }
 
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
     S1Translate ptw = {
         .in_mmu_idx = mmu_idx,
         .in_space = ss,
-        .in_secure = arm_space_is_secure(ss),
         .in_debug = true,
     };
     GetPhysAddrResult res = {};
--
2.34.1

In the Arm ARM, rule R_TYTWB states that returning to AArch32
is an illegal exception return if:
 * AArch32 is not supported at any exception level
 * the target EL is configured for AArch64 via SCR_EL3.RW
   or HCR_EL2.RW or via CPU state at reset

We check the second of these, but not the first (which can only be
relevant for the case of a return to EL0, because if AArch32 is not
supported at one of the higher ELs then the RW bits will have an
effective value of 1 and the "configured for AArch64" condition
will hold also).

Add the missing condition. Although this is technically a bug
(because we have one AArch64-only CPU: a64fx) it isn't worth
backporting to stable because no sensible guest code will
deliberately try to return to a nonexistent execution state
to check that it gets an illegal exception return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/tcg/helper-a64.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
         goto illegal_return;
     }
 
+    if (!return_to_aa64 && !cpu_isar_feature(aa64_aa32, cpu)) {
+        /* Return to AArch32 when CPU is AArch64-only */
+        goto illegal_return;
+    }
+
     if (new_el == 1 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
         goto illegal_return;
     }
--
2.43.0
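The two R_TYTWB conditions from the commit message above combine into a simple predicate. A hedged sketch, not QEMU code: `cpu_has_aa32` stands in for `cpu_isar_feature(aa64_aa32, cpu)` (AArch32 supported at some EL), and `target_el_is_aa64` for the existing "target EL configured for AArch64" register-width check.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Is an exception return to AArch32 an illegal exception return?
 * Illegal if AArch32 is not implemented at all, or if the target EL
 * is configured for AArch64 (via SCR_EL3.RW / HCR_EL2.RW / reset state).
 */
static bool return_to_aa32_is_illegal(bool cpu_has_aa32,
                                      bool target_el_is_aa64)
{
    if (!cpu_has_aa32) {
        /* the condition the patch adds: AArch64-only CPU, e.g. a64fx */
        return true;
    }
    return target_el_is_aa64;
}
```

Note the commit message's point: for ELs above EL0 the first condition is subsumed by the second, because an unimplemented AArch32 state forces the effective RW value to 1; only a return to EL0 can reach the new check alone.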
From: Akihiko Odaki <akihiko.odaki@daynix.com>

On MIPS, kvm_arch_get_default_type() returns a negative value when an
error occurred so handle the case. Also, let other machines return
negative values when errors occur and declare returning a negative
value as the correct way to propagate an error that happened when
determining KVM type.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-5-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 accel/kvm/kvm-all.c | 5 +++++
 hw/arm/virt.c       | 2 +-
 hw/ppc/spapr.c      | 2 +-
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -XXX,XX +XXX,XX @@ static int kvm_init(MachineState *ms)
         type = kvm_arch_get_default_type(ms);
     }
 
+    if (type < 0) {
+        ret = -EINVAL;
+        goto err;
+    }
+
     do {
         ret = kvm_ioctl(s, KVM_CREATE_VM, type);
     } while (ret == -EINTR);
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static int virt_kvm_type(MachineState *ms, const char *type_str)
                      "require an IPA range (%d bits) larger than "
                      "the one supported by the host (%d bits)",
                      requested_pa_size, max_vm_pa_size);
-        exit(1);
+        return -1;
     }
     /*
      * We return the requested PA log size, unless KVM only supports
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -XXX,XX +XXX,XX @@ static int spapr_kvm_type(MachineState *machine, const char *vm_type)
     }
 
     error_report("Unknown kvm-type specified '%s'", vm_type);
-    exit(1);
+    return -1;
 }
 
 /*
--
2.34.1

I'm down as the only listed maintainer for quite a lot of Arm SoC and
board types. In some cases this is only as the "maintainer of last
resort" and I'm not in practice doing anything beyond patch review
and the odd bit of tidyup.

Move these entries in MAINTAINERS from "Maintained" to "Odd Fixes",
to better represent reality. Entries for other boards and SoCs where
I do more actively care (or where there is a listed co-maintainer)
remain as they are.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250307152838.3226398-1-peter.maydell@linaro.org
---
 MAINTAINERS | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index XXXXXXX..XXXXXXX 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/kzm.rst
 Integrator CP
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/integratorcp.c
 F: hw/misc/arm_integrator_debug.c
 F: include/hw/misc/arm_integrator_debug.h
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/mps2.rst
 Musca
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/musca.c
 F: docs/system/arm/musca.rst
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_aarch64_raspi4.py
 Real View
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/realview*
 F: hw/cpu/realview_mpcore.c
 F: hw/intc/realview_gic.c
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_arm_collie.py
 Stellaris
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/*/stellaris*
 F: hw/display/ssd03*
 F: include/hw/input/gamepad.h
@@ -XXX,XX +XXX,XX @@ F: docs/system/arm/stm32.rst
 Versatile Express
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/arm/vexpress.c
 F: hw/display/sii9022.c
 F: docs/system/arm/vexpress.rst
@@ -XXX,XX +XXX,XX @@ F: tests/functional/test_arm_vexpress.py
 Versatile PB
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/*/versatile*
 F: hw/i2c/arm_sbcon_i2c.c
 F: include/hw/i2c/arm_sbcon_i2c.h
@@ -XXX,XX +XXX,XX @@ F: include/hw/hyperv/vmbus*.h
 OMAP
 M: Peter Maydell <peter.maydell@linaro.org>
 L: qemu-arm@nongnu.org
-S: Maintained
+S: Odd Fixes
 F: hw/*/omap*
 F: include/hw/arm/omap.h
 F: docs/system/arm/sx1.rst
--
2.43.0
From: Paolo Bonzini <pbonzini@redhat.com>

The guest does not control whether characters are sent on the UART.
Sending them before the guest happens to boot will now result in a
"guest error" log entry that is only because of timing, even if the
guest _would_ later setup the receiver correctly.

This reverts the bulk of commit abf2b6a028670bd2890bb3aee7e103fe53e4b0df,
and instead adds a comment about why we don't check the enable bits.

Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20250311153717.206129-1-pbonzini@redhat.com
[PMM: expanded comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static int pl011_can_receive(void *opaque)
     unsigned fifo_depth = pl011_get_fifo_depth(s);
     unsigned fifo_available = fifo_depth - s->read_count;
 
-    if (!(s->cr & CR_UARTEN)) {
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "PL011 receiving data on disabled UART\n");
-    }
-    if (!(s->cr & CR_RXE)) {
-        qemu_log_mask(LOG_GUEST_ERROR,
-                      "PL011 receiving data on disabled RX UART\n");
-    }
-    trace_pl011_can_receive(s->lcr, s->read_count, fifo_depth, fifo_available);
+    /*
+     * In theory we should check the UART and RX enable bits here and
+     * return 0 if they are not set (so the guest can't receive data
+     * until you have enabled the UART). In practice we suspect there
+     * is at least some guest code out there which has been tested only
+     * on QEMU and which never bothers to enable the UART because we
+     * historically never enforced that. So we effectively keep the
+     * UART continuously enabled regardless of the enable bits.
+     */
 
+    trace_pl011_can_receive(s->lcr, s->read_count, fifo_depth, fifo_available);
     return fifo_available;
 }
 
--
2.43.0
From: Joe Komlodi <komlodi@google.com>

On ARM hosts with CTR_EL0.DIC and CTR_EL0.IDC set, this would only cause
an ISB to be executed during cache maintenance, which could lead to QEMU
executing TBs containing garbage instructions.

This seems to be because the ISB finishes executing instructions and
flushes the pipeline, but the ISB doesn't guarantee that writes from the
executed instructions are committed. If a small enough TB is created, it's
possible that the writes setting up the TB aren't committed by the time the
TB is executed.

This function is intended to be a port of the gcc implementation
(https://github.com/gcc-mirror/gcc/blob/85b46d0795ac76bc192cb8f88b646a647acf98c1/libgcc/config/aarch64/sync-cache.c#L67)
which makes the first DSB unconditional, so we can fix the synchronization
issue by doing that as well.

Cc: qemu-stable@nongnu.org
Fixes: 664a79735e4deb1 ("util: Specialize flush_idcache_range for aarch64")
Signed-off-by: Joe Komlodi <komlodi@google.com>
Message-id: 20250310203622.1827940-2-komlodi@google.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 util/cacheflush.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/util/cacheflush.c b/util/cacheflush.c
index XXXXXXX..XXXXXXX 100644
--- a/util/cacheflush.c
+++ b/util/cacheflush.c
@@ -XXX,XX +XXX,XX @@ void flush_idcache_range(uintptr_t rx, uintptr_t rw, size_t len)
         for (p = rw & -dcache_lsize; p < rw + len; p += dcache_lsize) {
             asm volatile("dc\tcvau, %0" : : "r" (p) : "memory");
         }
-        asm volatile("dsb\tish" : : : "memory");
     }
 
+    /* DSB unconditionally to ensure any outstanding writes are committed. */
+    asm volatile("dsb\tish" : : : "memory");
+
     /*
      * If CTR_EL0.DIC is enabled, Instruction cache cleaning to the Point
      * of Unification is not required for instruction to data coherence.
--
2.43.0
From: Richard Henderson <richard.henderson@linaro.org>

The check for fp_excp_el in assert_fp_access_checked is
incorrect. For SME, with StreamingMode enabled, the access
is really against the streaming mode vectors, and access
to the normal fp registers is allowed to be disabled.
C.f. sme_enabled_check.

Convert sve_access_checked to match, even though we don't
currently check the exception state.

Cc: qemu-stable@nongnu.org
Fixes: 3d74825f4d6 ("target/arm: Add SME enablement checks")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250307190415.982049-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.h |  2 +-
 target/arm/tcg/translate.h     | 10 +++++++---
 target/arm/tcg/translate-a64.c | 17 +++++++++--------
 3 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/target/arm/tcg/translate-a64.h b/target/arm/tcg/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.h
+++ b/target/arm/tcg/translate-a64.h
@@ -XXX,XX +XXX,XX @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
 static inline void assert_fp_access_checked(DisasContext *s)
 {
 #ifdef CONFIG_DEBUG_TCG
-    if (unlikely(!s->fp_access_checked || s->fp_excp_el)) {
+    if (unlikely(s->fp_access_checked <= 0)) {
         fprintf(stderr, "target-arm: FP access check missing for "
                 "instruction 0x%08x\n", s->insn);
         abort();
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate.h
+++ b/target/arm/tcg/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     bool aarch64;
     bool thumb;
     bool lse2;
-    /* Because unallocated encodings generate different exception syndrome
+    /*
+     * Because unallocated encodings generate different exception syndrome
      * information from traps due to FP being disabled, we can't do a single
      * "is fp access disabled" check at a high level in the decode tree.
      * To help in catching bugs where the access check was forgotten in some
      * code path, we set this flag when the access check is done, and assert
      * that it is set at the point where we actually touch the FP regs.
+     * 0: not checked,
+     * 1: checked, access ok
+     * -1: checked, access denied
      */
-    bool fp_access_checked;
-    bool sve_access_checked;
+    int8_t fp_access_checked;
+    int8_t sve_access_checked;
     /* ARMv8 single-step state (this is distinct from the QEMU gdbstub
      * single-step support).
      */
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check_only(DisasContext *s)
 {
     if (s->fp_excp_el) {
         assert(!s->fp_access_checked);
-        s->fp_access_checked = true;
+        s->fp_access_checked = -1;
 
         gen_exception_insn_el(s, 0, EXCP_UDEF,
                               syn_fp_access_trap(1, 0xe, false, 0),
                               s->fp_excp_el);
         return false;
     }
-    s->fp_access_checked = true;
+    s->fp_access_checked = 1;
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
                               syn_sve_access_trap(), s->sve_excp_el);
         goto fail_exit;
     }
-    s->sve_access_checked = true;
+    s->sve_access_checked = 1;
     return fp_access_check(s);
 
 fail_exit:
     /* Assert that we only raise one exception per instruction. */
     assert(!s->sve_access_checked);
-    s->sve_access_checked = true;
+    s->sve_access_checked = -1;
     return false;
 }
 
@@ -XXX,XX +XXX,XX @@ bool sme_enabled_check(DisasContext *s)
      * sme_excp_el by itself for cpregs access checks.
      */
     if (!s->fp_excp_el || s->sme_excp_el < s->fp_excp_el) {
-        s->fp_access_checked = true;
-        return sme_access_check(s);
+        bool ret = sme_access_check(s);
+        s->fp_access_checked = (ret ? 1 : -1);
+        return ret;
     }
     return fp_access_check_only(s);
 }
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
     s->insn = insn;
     s->base.pc_next = pc + 4;
 
-    s->fp_access_checked = false;
-    s->sve_access_checked = false;
+    s->fp_access_checked = 0;
+    s->sve_access_checked = 0;
 
     if (s->pstate_il) {
         /*
--
2.43.0
From: Richard Henderson <richard.henderson@linaro.org>

In StreamingMode, fp_access_checked is handled already.
We cannot fall through to fp_access_check lest we fall
foul of the double-check assertion.

Cc: qemu-stable@nongnu.org
Fixes: 285b1d5fcef ("target/arm: Handle SME in sve_access_check")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20250307190415.982049-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: move declaration of 'ret' to top of block]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static int fp_access_check_vector_hsd(DisasContext *s, bool is_q, MemOp esz)
 bool sve_access_check(DisasContext *s)
 {
     if (s->pstate_sm || !dc_isar_feature(aa64_sve, s)) {
+        bool ret;
+
         assert(dc_isar_feature(aa64_sme, s));
-        if (!sme_sm_enabled_check(s)) {
-            goto fail_exit;
-        }
-    } else if (s->sve_excp_el) {
+        ret = sme_sm_enabled_check(s);
+        s->sve_access_checked = (ret ? 1 : -1);
+        return ret;
+    }
+    if (s->sve_excp_el) {
+        /* Assert that we only raise one exception per instruction. */
+        assert(!s->sve_access_checked);
         gen_exception_insn_el(s, 0, EXCP_UDEF,
                               syn_sve_access_trap(), s->sve_excp_el);
-        goto fail_exit;
+        s->sve_access_checked = -1;
+        return false;
     }
     s->sve_access_checked = 1;
     return fp_access_check(s);
-
- fail_exit:
-    /* Assert that we only raise one exception per instruction. */
-    assert(!s->sve_access_checked);
-    s->sve_access_checked = -1;
-    return false;
 }
 
 /*
--
2.43.0
We want to capture potential Rust backtraces on panics in our test
logs, which isn't Rust's default behaviour. Set RUST_BACKTRACE=1 in
the add_test_setup environments, so that all our tests get run with
this environment variable set.

This makes the setting of that variable in the gitlab CI template
redundant, so we can remove it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20250310102950.3752908-1-peter.maydell@linaro.org
---
 meson.build                         | 9 ++++++---
 .gitlab-ci.d/buildtest-template.yml | 1 -
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/meson.build b/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/meson.build
+++ b/meson.build
@@ -XXX,XX +XXX,XX @@ project('qemu', ['c'], meson_version: '>=1.5.0',
 
 meson.add_devenv({ 'MESON_BUILD_ROOT' : meson.project_build_root() })
 
-add_test_setup('quick', exclude_suites: ['slow', 'thorough'], is_default: true)
-add_test_setup('slow', exclude_suites: ['thorough'], env: ['G_TEST_SLOW=1', 'SPEED=slow'])
-add_test_setup('thorough', env: ['G_TEST_SLOW=1', 'SPEED=thorough'])
+add_test_setup('quick', exclude_suites: ['slow', 'thorough'], is_default: true,
+               env: ['RUST_BACKTRACE=1'])
+add_test_setup('slow', exclude_suites: ['thorough'],
+               env: ['G_TEST_SLOW=1', 'SPEED=slow', 'RUST_BACKTRACE=1'])
+add_test_setup('thorough',
+               env: ['G_TEST_SLOW=1', 'SPEED=thorough', 'RUST_BACKTRACE=1'])
 
 meson.add_postconf_script(find_program('scripts/symlink-install-tree.py'))
 
diff --git a/.gitlab-ci.d/buildtest-template.yml b/.gitlab-ci.d/buildtest-template.yml
index XXXXXXX..XXXXXXX 100644
--- a/.gitlab-ci.d/buildtest-template.yml
+++ b/.gitlab-ci.d/buildtest-template.yml
@@ -XXX,XX +XXX,XX @@
   stage: test
   image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
   script:
-    - export RUST_BACKTRACE=1
     - source scripts/ci/gitlab-ci-section
     - section_start buildenv "Setting up to run tests"
    - scripts/git-submodule.sh update roms/SLOF
--
2.43.0
Deleted patch

From: Akihiko Odaki <akihiko.odaki@daynix.com>

An error may occur after s->as is allocated, for example if the
KVM_CREATE_VM ioctl call fails.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-6-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: tweaked commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 accel/kvm/kvm-all.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -XXX,XX +XXX,XX @@ err:
     if (s->fd != -1) {
         close(s->fd);
     }
+    g_free(s->as);
     g_free(s->memory_listener.slots);
 
     return ret;
--
2.34.1
Deleted patch

From: Akihiko Odaki <akihiko.odaki@daynix.com>

The returned value was always zero and had no meaning.

Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-7-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 accel/kvm/kvm-all.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -XXX,XX +XXX,XX @@ static void *kvm_dirty_ring_reaper_thread(void *data)
     return NULL;
 }
 
-static int kvm_dirty_ring_reaper_init(KVMState *s)
+static void kvm_dirty_ring_reaper_init(KVMState *s)
 {
     struct KVMDirtyRingReaper *r = &s->reaper;
 
     qemu_thread_create(&r->reaper_thr, "kvm-reaper",
                        kvm_dirty_ring_reaper_thread,
                        s, QEMU_THREAD_JOINABLE);
-
-    return 0;
 }
 
 static int kvm_dirty_ring_init(KVMState *s)
@@ -XXX,XX +XXX,XX @@ static int kvm_init(MachineState *ms)
     }
 
     if (s->kvm_dirty_ring_size) {
-        ret = kvm_dirty_ring_reaper_init(s);
-        if (ret) {
-            goto err;
-        }
+        kvm_dirty_ring_reaper_init(s);
     }
 
     if (kvm_check_extension(kvm_state, KVM_CAP_BINARY_STATS_FD)) {
--
2.34.1
Deleted patch

For an Unsupported Atomic Update fault where the stage 1 translation
table descriptor update can't be done because it's to an unsupported
memory type, this is a stage 1 abort (per the Arm ARM R_VSXXT). This
means we should not set fi->s1ptw, because this will cause the code
in the get_phys_addr_lpae() error-exit path to mark it as stage 2.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-2-peter.maydell@linaro.org
---
 target/arm/ptw.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
 
     if (unlikely(!host)) {
         fi->type = ARMFault_UnsuppAtomicUpdate;
-        fi->s1ptw = true;
         return 0;
     }
 
--
2.34.1
Deleted patch

In S1_ptw_translate() we set up the ARMMMUFaultInfo if the attempt to
translate the page descriptor address into a physical address fails.
This used to only be possible if we are doing a stage 2 ptw for that
descriptor address, and so the code always sets fi->stage2 and
fi->s1ptw to true. However, with FEAT_RME it is also possible for
the lookup of the page descriptor address to fail because of a
Granule Protection Check fault. These should not be reported as
stage 2, otherwise arm_deliver_fault() will incorrectly set
HPFAR_EL2. Similarly the s1ptw bit should only be set for stage 2
faults on stage 1 translation table walks, i.e. not for GPC faults.

Add a comment to the other place where we might detect a
stage2-fault-on-stage-1-ptw, in arm_casq_ptw(), noting why we know in
that case that it must really be a stage 2 fault and not a GPC fault.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-3-peter.maydell@linaro.org
---
 target/arm/ptw.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
             fi->type = ARMFault_GPCFOnWalk;
         }
         fi->s2addr = addr;
-        fi->stage2 = true;
-        fi->s1ptw = true;
+        fi->stage2 = regime_is_stage2(s2_mmu_idx);
+        fi->s1ptw = fi->stage2;
         fi->s1ns = !is_secure;
         return false;
     }
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_casq_ptw(CPUARMState *env, uint64_t old_val,
     env->tlb_fi = NULL;
 
     if (unlikely(flags & TLB_INVALID_MASK)) {
+        /*
+         * We know this must be a stage 2 fault because the granule
+         * protection table does not separately track read and write
+         * permission, so all GPC faults are caught in S1_ptw_translate():
+         * we only get here for "readable but not writeable".
+         */
         assert(fi->type != ARMFault_None);
         fi->s2addr = ptw->out_virt;
         fi->stage2 = true;
--
2.34.1
Deleted patch
In commit 6d2654ffacea813916176 we created the S1Translate struct and
used it to plumb through various arguments that we were previously
passing one-at-a-time to get_phys_addr_v5(), get_phys_addr_v6(), and
get_phys_addr_lpae(). Extend that pattern to get_phys_addr_pmsav5(),
get_phys_addr_pmsav7(), get_phys_addr_pmsav8() and
get_phys_addr_disabled(), so that all the get_phys_addr_* functions
we call from get_phys_addr_nogpc() take the S1Translate struct rather
than the mmu_idx and is_secure bool.

(This refactoring is a prelude to having the called functions look
at ptw->in_space rather than using an is_secure boolean.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-5-peter.maydell@linaro.org
---
target/arm/ptw.c | 57 ++++++++++++++++++++++++++++++------------------
1 file changed, 36 insertions(+), 21 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
return true;
}

-static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
- bool is_secure, GetPhysAddrResult *result,
+static bool get_phys_addr_pmsav5(CPUARMState *env,
+ S1Translate *ptw,
+ uint32_t address,
+ MMUAccessType access_type,
+ GetPhysAddrResult *result,
ARMMMUFaultInfo *fi)
{
int n;
uint32_t mask;
uint32_t base;
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
bool is_user = regime_is_user(env, mmu_idx);
+ bool is_secure = arm_space_is_secure(ptw->in_space);

if (regime_translation_disabled(env, mmu_idx, is_secure)) {
/* MPU disabled. */
@@ -XXX,XX +XXX,XX @@ static bool pmsav7_use_background_region(ARMCPU *cpu, ARMMMUIdx mmu_idx,
return regime_sctlr(env, mmu_idx) & SCTLR_BR;
}

-static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
- bool secure, GetPhysAddrResult *result,
+static bool get_phys_addr_pmsav7(CPUARMState *env,
+ S1Translate *ptw,
+ uint32_t address,
+ MMUAccessType access_type,
+ GetPhysAddrResult *result,
ARMMMUFaultInfo *fi)
{
ARMCPU *cpu = env_archcpu(env);
int n;
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
bool is_user = regime_is_user(env, mmu_idx);
+ bool secure = arm_space_is_secure(ptw->in_space);

result->f.phys_addr = address;
result->f.lg_page_size = TARGET_PAGE_BITS;
@@ -XXX,XX +XXX,XX @@ void v8m_security_lookup(CPUARMState *env, uint32_t address,
}
}

-static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
- bool secure, GetPhysAddrResult *result,
+static bool get_phys_addr_pmsav8(CPUARMState *env,
+ S1Translate *ptw,
+ uint32_t address,
+ MMUAccessType access_type,
+ GetPhysAddrResult *result,
ARMMMUFaultInfo *fi)
{
V8M_SAttributes sattrs = {};
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
+ bool secure = arm_space_is_secure(ptw->in_space);
bool ret;

if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
@@ -XXX,XX +XXX,XX @@ static ARMCacheAttrs combine_cacheattrs(uint64_t hcr,
* MMU disabled. S1 addresses within aa64 translation regimes are
* still checked for bounds -- see AArch64.S1DisabledOutput().
*/
-static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
+static bool get_phys_addr_disabled(CPUARMState *env,
+ S1Translate *ptw,
+ target_ulong address,
MMUAccessType access_type,
- ARMMMUIdx mmu_idx, bool is_secure,
GetPhysAddrResult *result,
ARMMMUFaultInfo *fi)
{
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
+ bool is_secure = arm_space_is_secure(ptw->in_space);
uint8_t memattr = 0x00; /* Device nGnRnE */
uint8_t shareability = 0; /* non-shareable */
int r_el;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,
case ARMMMUIdx_Phys_Root:
case ARMMMUIdx_Phys_Realm:
/* Checking Phys early avoids special casing later vs regime_el. */
- return get_phys_addr_disabled(env, address, access_type, mmu_idx,
- is_secure, result, fi);
+ return get_phys_addr_disabled(env, ptw, address, access_type,
+ result, fi);

case ARMMMUIdx_Stage1_E0:
case ARMMMUIdx_Stage1_E1:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,

if (arm_feature(env, ARM_FEATURE_V8)) {
/* PMSAv8 */
- ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
- is_secure, result, fi);
+ ret = get_phys_addr_pmsav8(env, ptw, address, access_type,
+ result, fi);
} else if (arm_feature(env, ARM_FEATURE_V7)) {
/* PMSAv7 */
- ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
- is_secure, result, fi);
+ ret = get_phys_addr_pmsav7(env, ptw, address, access_type,
+ result, fi);
} else {
/* Pre-v7 MPU */
- ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
- is_secure, result, fi);
+ ret = get_phys_addr_pmsav5(env, ptw, address, access_type,
+ result, fi);
}
qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
" mmu_idx %u -> %s (prot %c%c%c)\n",
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,
/* Definitely a real MMU, not an MPU */

if (regime_translation_disabled(env, mmu_idx, is_secure)) {
- return get_phys_addr_disabled(env, address, access_type, mmu_idx,
- is_secure, result, fi);
+ return get_phys_addr_disabled(env, ptw, address, access_type,
+ result, fi);
}

if (regime_using_lpae_format(env, mmu_idx)) {
--
2.34.1
Deleted patch
Plumb the ARMSecurityState through to regime_translation_disabled()
rather than just a bool is_secure.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-6-peter.maydell@linaro.org
---
target/arm/ptw.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx, int ttbrn)

/* Return true if the specified stage of address translation is disabled */
static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
- bool is_secure)
+ ARMSecuritySpace space)
{
uint64_t hcr_el2;
+ bool is_secure = arm_space_is_secure(space);

if (arm_feature(env, ARM_FEATURE_M)) {
switch (env->v7m.mpu_ctrl[is_secure] &
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav5(CPUARMState *env,
uint32_t base;
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
bool is_user = regime_is_user(env, mmu_idx);
- bool is_secure = arm_space_is_secure(ptw->in_space);

- if (regime_translation_disabled(env, mmu_idx, is_secure)) {
+ if (regime_translation_disabled(env, mmu_idx, ptw->in_space)) {
/* MPU disabled. */
result->f.phys_addr = address;
result->f.prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env,
result->f.lg_page_size = TARGET_PAGE_BITS;
result->f.prot = 0;

- if (regime_translation_disabled(env, mmu_idx, secure) ||
+ if (regime_translation_disabled(env, mmu_idx, ptw->in_space) ||
m_is_ppb_region(env, address)) {
/*
* MPU disabled or M profile PPB access: use default memory map.
@@ -XXX,XX +XXX,XX @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
* are done in arm_v7m_load_vector(), which always does a direct
* read using address_space_ldl(), rather than going via this function.
*/
- if (regime_translation_disabled(env, mmu_idx, secure)) { /* MPU disabled */
+ if (regime_translation_disabled(env, mmu_idx, arm_secure_to_space(secure))) {
+ /* MPU disabled */
hit = true;
} else if (m_is_ppb_region(env, address)) {
hit = true;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,
*/
ptw->in_mmu_idx = mmu_idx = s1_mmu_idx;
if (arm_feature(env, ARM_FEATURE_EL2) &&
- !regime_translation_disabled(env, ARMMMUIdx_Stage2, is_secure)) {
+ !regime_translation_disabled(env, ARMMMUIdx_Stage2, ptw->in_space)) {
return get_phys_addr_twostage(env, ptw, address, access_type,
result, fi);
}
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,

/* Definitely a real MMU, not an MPU */

- if (regime_translation_disabled(env, mmu_idx, is_secure)) {
+ if (regime_translation_disabled(env, mmu_idx, ptw->in_space)) {
return get_phys_addr_disabled(env, ptw, address, access_type,
result, fi);
}
--
2.34.1
Deleted patch
When we do a translation in Secure state, the NSTable bits in table
descriptors may downgrade us to NonSecure; we update ptw->in_secure
and ptw->in_space accordingly. We guard that check correctly with a
conditional that means it's only applied for Secure stage 1
translations. However, later on in get_phys_addr_lpae() we fold the
effects of the NSTable bits into the final descriptor attributes
bits, and there we do it unconditionally regardless of the CPU state.
That means that in Realm state (where in_secure is false) we will set
bit 5 in attrs, and later use it to decide to output to non-secure
space.

We don't in fact need to do this folding in at all any more (since
commit 2f1ff4e7b9f30c): if an NSTable bit was set then we have
already set ptw->in_space to ARMSS_NonSecure, and in that situation
we don't look at attrs bit 5. The only thing we still need to deal
with is the real NS bit in the final descriptor word, so we can just
drop the code that ORed in the NSTable bit.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-9-peter.maydell@linaro.org
---
target/arm/ptw.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
* Extract attributes from the (modified) descriptor, and apply
* table descriptors. Stage 2 table descriptors do not include
* any attribute fields. HPD disables all the table attributes
- * except NSTable.
+ * except NSTable (which we have already handled).
*/
attrs = new_descriptor & (MAKE_64BIT_MASK(2, 10) | MAKE_64BIT_MASK(50, 14));
if (!regime_is_stage2(mmu_idx)) {
- attrs |= !ptw->in_secure << 5; /* NS */
if (!param.hpd) {
attrs |= extract64(tableattrs, 0, 2) << 53; /* XN, PXN */
/*
--
2.34.1
Deleted patch
Replace the last uses of ptw->in_secure with appropriate
checks on ptw->in_space.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-10-peter.maydell@linaro.org
---
target/arm/ptw.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,
ARMMMUFaultInfo *fi)
{
ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
- bool is_secure = ptw->in_secure;
ARMMMUIdx s1_mmu_idx;

/*
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,
* cannot upgrade a NonSecure translation regime's attributes
* to Secure or Realm.
*/
- result->f.attrs.secure = is_secure;
result->f.attrs.space = ptw->in_space;
+ result->f.attrs.secure = arm_space_is_secure(ptw->in_space);

switch (mmu_idx) {
case ARMMMUIdx_Phys_S:
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_nogpc(CPUARMState *env, S1Translate *ptw,
case ARMMMUIdx_Stage1_E0:
case ARMMMUIdx_Stage1_E1:
case ARMMMUIdx_Stage1_E1_PAN:
- /* First stage lookup uses second stage for ptw. */
- ptw->in_ptw_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+ /*
+ * First stage lookup uses second stage for ptw; only
+ * Secure has both S and NS IPA and starts with Stage2_S.
+ */
+ ptw->in_ptw_idx = (ptw->in_space == ARMSS_Secure) ?
+ ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
break;

case ARMMMUIdx_Stage2:
--
2.34.1
Deleted patch
We only use S1Translate::out_secure in two places, where we are
setting up MemTxAttrs for a page table load. We can use
arm_space_is_secure(ptw->out_space) instead, which guarantees
that we're setting the MemTxAttrs secure and space fields
consistently, and allows us to drop the out_secure field in
S1Translate entirely.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-12-peter.maydell@linaro.org
---
target/arm/ptw.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
* Stage 2 is indicated by in_mmu_idx set to ARMMMUIdx_Stage2{,_S}.
*/
bool in_s1_is_el0;
- bool out_secure;
bool out_rw;
bool out_be;
ARMSecuritySpace out_space;
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
pte_attrs = s2.cacheattrs.attrs;
ptw->out_host = NULL;
ptw->out_rw = false;
- ptw->out_secure = s2.f.attrs.secure;
ptw->out_space = s2.f.attrs.space;
} else {
#ifdef CONFIG_TCG
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
ptw->out_phys = full->phys_addr | (addr & ~TARGET_PAGE_MASK);
ptw->out_rw = full->prot & PAGE_WRITE;
pte_attrs = full->pte_attrs;
- ptw->out_secure = full->attrs.secure;
ptw->out_space = full->attrs.space;
#else
g_assert_not_reached();
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw,
} else {
/* Page tables are in MMIO. */
MemTxAttrs attrs = {
- .secure = ptw->out_secure,
.space = ptw->out_space,
+ .secure = arm_space_is_secure(ptw->out_space),
};
AddressSpace *as = arm_addressspace(cs, attrs);
MemTxResult result = MEMTX_OK;
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw,
} else {
/* Page tables are in MMIO. */
MemTxAttrs attrs = {
- .secure = ptw->out_secure,
.space = ptw->out_space,
+ .secure = arm_space_is_secure(ptw->out_space),
};
AddressSpace *as = arm_addressspace(cs, attrs);
MemTxResult result = MEMTX_OK;
--
2.34.1
Deleted patch
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

In realm state, stage-2 translation tables are fetched from the realm
physical address space (R_PGRQD).

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-2-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/ptw.c | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static ARMMMUIdx ptw_idx_for_stage_2(CPUARMState *env, ARMMMUIdx stage2idx)

/*
* We're OK to check the current state of the CPU here because
- * (1) we always invalidate all TLBs when the SCR_EL3.NS bit changes
+ * (1) we always invalidate all TLBs when the SCR_EL3.NS or SCR_EL3.NSE bit
+ * changes.
* (2) there's no way to do a lookup that cares about Stage 2 for a
* different security state to the current one for AArch64, and AArch32
* never has a secure EL2. (AArch32 ATS12NSO[UP][RW] allow EL3 to do
* an NS stage 1+2 lookup while the NS bit is 0.)
*/
- if (!arm_is_secure_below_el3(env) || !arm_el_is_aa64(env, 3)) {
+ if (!arm_el_is_aa64(env, 3)) {
return ARMMMUIdx_Phys_NS;
}
- if (stage2idx == ARMMMUIdx_Stage2_S) {
- s2walk_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
- } else {
- s2walk_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
- }
- return s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;

+ switch (arm_security_space_below_el3(env)) {
+ case ARMSS_NonSecure:
+ return ARMMMUIdx_Phys_NS;
+ case ARMSS_Realm:
+ return ARMMMUIdx_Phys_Realm;
+ case ARMSS_Secure:
+ if (stage2idx == ARMMMUIdx_Stage2_S) {
+ s2walk_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
+ } else {
+ s2walk_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
+ }
+ return s2walk_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
+ default:
+ g_assert_not_reached();
+ }
}

static bool regime_translation_big_endian(CPUARMState *env, ARMMMUIdx mmu_idx)
--
2.34.1
Deleted patch
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

When HCR_EL2.E2H is enabled, TLB entries are formed using the EL2&0
translation regime, instead of the EL2 translation regime. The TLB VAE2*
instructions invalidate the regime that corresponds to the current value
of HCR_EL2.E2H.

At the moment we only invalidate the EL2 translation regime. This causes
problems with RMM, which issues TLBI VAE2IS instructions with
HCR_EL2.E2H enabled. Update vae2_tlbmask() to take HCR_EL2.E2H into
account.

Add vae2_tlbbits() as well, since the top-byte-ignore configuration is
different between the EL2&0 and EL2 regime.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-3-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 50 ++++++++++++++++++++++++++++++---------
1 file changed, 40 insertions(+), 10 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
return mask;
}

+static int vae2_tlbmask(CPUARMState *env)
+{
+ uint64_t hcr = arm_hcr_el2_eff(env);
+ uint16_t mask;
+
+ if (hcr & HCR_E2H) {
+ mask = ARMMMUIdxBit_E20_2 |
+ ARMMMUIdxBit_E20_2_PAN |
+ ARMMMUIdxBit_E20_0;
+ } else {
+ mask = ARMMMUIdxBit_E2;
+ }
+ return mask;
+}
+
/* Return 56 if TBI is enabled, 64 otherwise. */
static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
uint64_t addr)
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
return tlbbits_for_regime(env, mmu_idx, addr);
}

+static int vae2_tlbbits(CPUARMState *env, uint64_t addr)
+{
+ uint64_t hcr = arm_hcr_el2_eff(env);
+ ARMMMUIdx mmu_idx;
+
+ /*
+ * Only the regime of the mmu_idx below is significant.
+ * Regime EL2&0 has two ranges with separate TBI configuration, while EL2
+ * only has one.
+ */
+ if (hcr & HCR_E2H) {
+ mmu_idx = ARMMMUIdx_E20_2;
+ } else {
+ mmu_idx = ARMMMUIdx_E2;
+ }
+
+ return tlbbits_for_regime(env, mmu_idx, addr);
+}
+
static void tlbi_aa64_vmalle1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
{
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2_write(CPUARMState *env, const ARMCPRegInfo *ri,
* flush-last-level-only.
*/
CPUState *cs = env_cpu(env);
- int mask = e2_tlbmask(env);
+ int mask = vae2_tlbmask(env);
uint64_t pageaddr = sextract64(value << 12, 0, 56);
+ int bits = vae2_tlbbits(env, pageaddr);

- tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
+ tlb_flush_page_bits_by_mmuidx(cs, pageaddr, mask, bits);
}

static void tlbi_aa64_vae3_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
uint64_t value)
{
CPUState *cs = env_cpu(env);
+ int mask = vae2_tlbmask(env);
uint64_t pageaddr = sextract64(value << 12, 0, 56);
- int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
+ int bits = vae2_tlbbits(env, pageaddr);

- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
- ARMMMUIdxBit_E2, bits);
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
}

static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae1is_write(CPUARMState *env,
do_rvae_write(env, value, vae1_tlbmask(env), true);
}

-static int vae2_tlbmask(CPUARMState *env)
-{
- return ARMMMUIdxBit_E2;
-}
-
static void tlbi_aa64_rvae2_write(CPUARMState *env,
const ARMCPRegInfo *ri,
uint64_t value)
--
2.34.1
Deleted patch
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

GPC checks are not performed on the output address for AT instructions,
as stated by ARM DDI 0487J in D8.12.2:

When populating PAR_EL1 with the result of an address translation
instruction, granule protection checks are not performed on the final
output address of a successful translation.

Rename get_phys_addr_with_secure(), since it's only used to handle AT
instructions.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-4-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 25 ++++++++++++++-----------
target/arm/helper.c | 8 ++++++--
target/arm/ptw.c | 11 ++++++-----
3 files changed, 26 insertions(+), 18 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
} GetPhysAddrResult;

/**
- * get_phys_addr_with_secure: get the physical address for a virtual address
+ * get_phys_addr: get the physical address for a virtual address
* @env: CPUARMState
* @address: virtual address to get physical address for
* @access_type: 0 for read, 1 for write, 2 for execute
* @mmu_idx: MMU index indicating required translation regime
- * @is_secure: security state for the access
* @result: set on translation success.
* @fi: set to fault info if the translation fails
*
@@ -XXX,XX +XXX,XX @@ typedef struct GetPhysAddrResult {
* * for PSMAv5 based systems we don't bother to return a full FSR format
* value.
*/
-bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
- MMUAccessType access_type,
- ARMMMUIdx mmu_idx, bool is_secure,
- GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+bool get_phys_addr(CPUARMState *env, target_ulong address,
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
__attribute__((nonnull));

/**
- * get_phys_addr: get the physical address for a virtual address
+ * get_phys_addr_with_secure_nogpc: get the physical address for a virtual
+ * address
* @env: CPUARMState
* @address: virtual address to get physical address for
* @access_type: 0 for read, 1 for write, 2 for execute
* @mmu_idx: MMU index indicating required translation regime
+ * @is_secure: security state for the access
* @result: set on translation success.
* @fi: set to fault info if the translation fails
*
- * Similarly, but use the security regime of @mmu_idx.
+ * Similar to get_phys_addr, but use the given security regime and don't perform
+ * a Granule Protection Check on the resulting address.
*/
-bool get_phys_addr(CPUARMState *env, target_ulong address,
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
- GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_secure_nogpc(CPUARMState *env, target_ulong address,
+ MMUAccessType access_type,
+ ARMMMUIdx mmu_idx, bool is_secure,
+ GetPhysAddrResult *result,
+ ARMMMUFaultInfo *fi)
__attribute__((nonnull));

bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
ARMMMUFaultInfo fi = {};
GetPhysAddrResult res = {};

- ret = get_phys_addr_with_secure(env, value, access_type, mmu_idx,
- is_secure, &res, &fi);
+ /*
+ * I_MXTJT: Granule protection checks are not performed on the final address
+ * of a successful translation.
+ */
+ ret = get_phys_addr_with_secure_nogpc(env, value, access_type, mmu_idx,
+ is_secure, &res, &fi);

/*
* ATS operations only do S1 or S1+S2 translations, so we never
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_gpc(CPUARMState *env, S1Translate *ptw,
return false;
}

-bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
- bool is_secure, GetPhysAddrResult *result,
- ARMMMUFaultInfo *fi)
+bool get_phys_addr_with_secure_nogpc(CPUARMState *env, target_ulong address,
+ MMUAccessType access_type,
+ ARMMMUIdx mmu_idx, bool is_secure,
+ GetPhysAddrResult *result,
+ ARMMMUFaultInfo *fi)
{
S1Translate ptw = {
.in_mmu_idx = mmu_idx,
.in_space = arm_secure_to_space(is_secure),
};
- return get_phys_addr_gpc(env, &ptw, address, access_type, result, fi);
+ return get_phys_addr_nogpc(env, &ptw, address, access_type, result, fi);
}

bool get_phys_addr(CPUARMState *env, target_ulong address,
--
2.34.1
Deleted patch
From: Jean-Philippe Brucker <jean-philippe@linaro.org>

The AT instruction is UNDEFINED if the {NSE,NS} configuration is
invalid. Add a function to check this on all AT instructions that apply
to an EL lower than 3.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230809123706.1842548-6-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 38 +++++++++++++++++++++++++-----------
1 file changed, 27 insertions(+), 11 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
#endif /* CONFIG_TCG */
}

+static CPAccessResult at_e012_access(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ /*
+ * R_NYXTL: instruction is UNDEFINED if it applies to an Exception level
+ * lower than EL3 and the combination SCR_EL3.{NSE,NS} is reserved. This can
+ * only happen when executing at EL3 because that combination also causes an
+ * illegal exception return. We don't need to check FEAT_RME either, because
+ * scr_write() ensures that the NSE bit is not set otherwise.
+ */
+ if ((env->cp15.scr_el3 & (SCR_NSE | SCR_NS)) == SCR_NSE) {
+ return CP_ACCESS_TRAP;
+ }
+ return CP_ACCESS_OK;
+}
+
static CPAccessResult at_s1e2_access(CPUARMState *env, const ARMCPRegInfo *ri,
bool isread)
{
@@ -XXX,XX +XXX,XX @@ static CPAccessResult at_s1e2_access(CPUARMState *env, const ARMCPRegInfo *ri,
!(env->cp15.scr_el3 & (SCR_NS | SCR_EEL2))) {
return CP_ACCESS_TRAP;
}
- return CP_ACCESS_OK;
+ return at_e012_access(env, ri, isread);
}

static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 0,
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
.fgt = FGT_ATS1E1R,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S1E1W", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 1,
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
.fgt = FGT_ATS1E1W,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S1E0R", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 2,
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
.fgt = FGT_ATS1E0R,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S1E0W", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 8, .opc2 = 3,
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
.fgt = FGT_ATS1E0W,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S12E1R", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 4,
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S12E1W", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 5,
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S12E0R", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 6,
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S12E0W", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 4, .crn = 7, .crm = 8, .opc2 = 7,
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
/* AT S1E2* are elsewhere as they UNDEF from EL3 if EL2 is not present */
{ .name = "AT_S1E3R", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 6, .crn = 7, .crm = 8, .opc2 = 0,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo ats1e1_reginfo[] = {
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 0,
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
.fgt = FGT_ATS1E1RP,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
{ .name = "AT_S1E1WP", .state = ARM_CP_STATE_AA64,
.opc0 = 1, .opc1 = 0, .crn = 7, .crm = 9, .opc2 = 1,
.access = PL1_W, .type = ARM_CP_NO_RAW | ARM_CP_RAISES_EXC,
.fgt = FGT_ATS1E1WP,
- .writefn = ats_write64 },
+ .accessfn = at_e012_access, .writefn = ats_write64 },
};

static const ARMCPRegInfo ats1cp_reginfo[] = {
--
2.34.1