Hi; this mostly contains the first slice of A64 decodetree
patches, plus some other minor pieces. It also has the
enablement of MTE for KVM guests.

thanks
-- PMM

The following changes since commit d27e7c359330ba7020bdbed7ed2316cb4cf6ffc1:

  qapi/parser: Drop two bad type hints for now (2023-05-17 10:18:33 -0700)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230518

for you to fetch changes up to 91608e2a44f36e79cb83f863b8a7bb57d2c98061:

  docs: Convert u2f.txt to rST (2023-05-18 11:40:32 +0100)

----------------------------------------------------------------
target-arm queue:
 * Fix vd == vm overlap in sve_ldff1_z
 * Add support for MTE with KVM guests
 * Add RAZ/WI handling for DBGDTR[TX|RX]
 * Start of conversion of A64 decoder to decodetree
 * Saturate L2CTLR_EL1 core count field rather than overflowing
 * vexpress: Avoid trivial memory leak of 'flashalias'
 * sbsa-ref: switch default cpu core to Neoverse-N1
 * sbsa-ref: use Bochs graphics card instead of VGA
 * MAINTAINERS: Add Marcin Juszkiewicz to sbsa-ref reviewer list
 * docs: Convert u2f.txt to rST

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: add RAZ/WI handling for DBGDTR[TX|RX]

Cornelia Huck (1):
      arm/kvm: add support for MTE

Marcin Juszkiewicz (3):
      sbsa-ref: switch default cpu core to Neoverse-N1
      Maintainers: add myself as reviewer for sbsa-ref
      sbsa-ref: use Bochs graphics card instead of VGA

Peter Maydell (14):
      target/arm: Create decodetree skeleton for A64
      target/arm: Pull calls to disas_sve() and disas_sme() out of legacy decoder
      target/arm: Convert Extract instructions to decodetree
      target/arm: Convert unconditional branch immediate to decodetree
      target/arm: Convert CBZ, CBNZ to decodetree
      target/arm: Convert TBZ, TBNZ to decodetree
      target/arm: Convert conditional branch insns to decodetree
      target/arm: Convert BR, BLR, RET to decodetree
      target/arm: Convert BRA[AB]Z, BLR[AB]Z, RETA[AB] to decodetree
      target/arm: Convert BRAA, BRAB, BLRAA, BLRAB to decodetree
      target/arm: Convert ERET, ERETAA, ERETAB to decodetree
      target/arm: Saturate L2CTLR_EL1 core count field rather than overflowing
      hw/arm/vexpress: Avoid trivial memory leak of 'flashalias'
      docs: Convert u2f.txt to rST

Richard Henderson (10):
      target/arm: Fix vd == vm overlap in sve_ldff1_z
      target/arm: Split out disas_a64_legacy
      target/arm: Convert PC-rel addressing to decodetree
      target/arm: Split gen_add_CC and gen_sub_CC
      target/arm: Convert Add/subtract (immediate) to decodetree
      target/arm: Convert Add/subtract (immediate with tags) to decodetree
      target/arm: Replace bitmask64 with MAKE_64BIT_MASK
      target/arm: Convert Logical (immediate) to decodetree
      target/arm: Convert Move wide (immediate) to decodetree
      target/arm: Convert Bitfield to decodetree

 MAINTAINERS                      |    1 +
 docs/system/device-emulation.rst |    1 +
 docs/system/devices/usb-u2f.rst  |   93 +++
 docs/system/devices/usb.rst      |    2 +-
 docs/u2f.txt                     |  110 ----
 target/arm/cpu.h                 |    4 +
 target/arm/kvm_arm.h             |   19 +
 target/arm/tcg/translate.h       |    5 +
 target/arm/tcg/a64.decode        |  152 +++++
 hw/arm/sbsa-ref.c                |    4 +-
 hw/arm/vexpress.c                |   40 +-
 hw/arm/virt.c                    |   73 ++-
 target/arm/cortex-regs.c         |   11 +-
 target/arm/cpu.c                 |    9 +-
 target/arm/debug_helper.c        |   11 +-
 target/arm/kvm.c                 |   35 +
 target/arm/kvm64.c               |    5 +
 target/arm/tcg/sve_helper.c      |    6 +
 target/arm/tcg/translate-a64.c   | 1321 ++++++++++++++++----------------------
 target/arm/tcg/meson.build       |    1 +
 20 files changed, 979 insertions(+), 924 deletions(-)
 create mode 100644 docs/system/devices/usb-u2f.rst
 delete mode 100644 docs/u2f.txt
 create mode 100644 target/arm/tcg/a64.decode

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

The world outside moves to newer and newer cpu cores. Let's move the
SBSA Reference Platform to something newer as well.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20230506183417.1360427-1-marcin.juszkiewicz@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void sbsa_ref_class_init(ObjectClass *oc, void *data)
 
     mc->init = sbsa_ref_init;
     mc->desc = "QEMU 'SBSA Reference' ARM Virtual Machine";
-    mc->default_cpu_type = ARM_CPU_TYPE_NAME("cortex-a57");
+    mc->default_cpu_type = ARM_CPU_TYPE_NAME("neoverse-n1");
     mc->max_cpus = 512;
     mc->pci_allow_0_address = true;
     mc->minimum_page_bits = 12;
-- 
2.34.1

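As a usage sketch (not taken from the patch itself): only the board's
default changes here, so a guest that still wants the previous CPU can
ask for it explicitly, assuming the cortex-a57 model remains among the
CPUs this board accepts; the firmware image names below are placeholders:

  qemu-system-aarch64 -M sbsa-ref -cpu cortex-a57 \
      -drive if=pflash,file=SBSA_FLASH0.fd,format=raw \
      -drive if=pflash,file=SBSA_FLASH1.fd,format=raw
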
From: Richard Henderson <richard.henderson@linaro.org>

If vd == vm, copy vm to scratch, so that we can pre-zero
the output and still access the gather indices.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1612
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230504104232.1877774-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/sve_helper.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/target/arm/tcg/sve_helper.c b/target/arm/tcg/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/sve_helper.c
+++ b/target/arm/tcg/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
     intptr_t reg_off;
     SVEHostPage info;
     target_ulong addr, in_page;
+    ARMVectorReg scratch;
 
     /* Skip to the first true predicate. */
     reg_off = find_next_active(vg, 0, reg_max, esz);
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
         return;
     }
 
+    /* Protect against overlap between vd and vm. */
+    if (unlikely(vd == vm)) {
+        vm = memcpy(&scratch, vm, reg_max);
+    }
+
     /*
      * Probe the first element, allowing faults.
      */
-- 
2.34.1

1
From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
2
2
3
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
3
At Linaro I work on sbsa-ref, know direction it goes.
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
4
5
Message-id: 20210112104511.36576-18-remi.denis.courmont@huawei.com
5
May not get code details each time.
6
7
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
9
Message-id: 20230515143753.365591-1-marcin.juszkiewicz@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
11
---
8
target/arm/cpu64.c | 1 +
12
MAINTAINERS | 1 +
9
1 file changed, 1 insertion(+)
13
1 file changed, 1 insertion(+)
10
14
11
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
15
diff --git a/MAINTAINERS b/MAINTAINERS
12
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/cpu64.c
17
--- a/MAINTAINERS
14
+++ b/target/arm/cpu64.c
18
+++ b/MAINTAINERS
15
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
19
@@ -XXX,XX +XXX,XX @@ SBSA-REF
16
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
20
M: Radoslaw Biernacki <rad@semihalf.com>
17
t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
21
M: Peter Maydell <peter.maydell@linaro.org>
18
t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
22
R: Leif Lindholm <quic_llindhol@quicinc.com>
19
+ t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
23
+R: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
20
cpu->isar.id_aa64pfr0 = t;
24
L: qemu-arm@nongnu.org
21
25
S: Maintained
22
t = cpu->isar.id_aa64pfr1;
26
F: hw/arm/sbsa-ref.c
23
--
27
--
24
2.20.1
28
2.34.1
25
29
26
30
diff view generated by jsdifflib
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
From: Cornelia Huck <cohuck@redhat.com>
2
2
3
This adds handling for the SCR_EL3.EEL2 bit.
3
Extend the 'mte' property for the virt machine to cover KVM as
4
4
well. For KVM, we don't allocate tag memory, but instead enable the
5
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
5
capability.
6
Message-id: 20210112104511.36576-17-remi.denis.courmont@huawei.com
6
7
[PMM: Applied fixes for review issues noted by RTH:
7
If MTE has been enabled, we need to disable migration, as we do not
8
- check for FEATURE_AARCH64 before checking sel2 isar feature
8
yet have a way to migrate the tags as well. Therefore, MTE will stay
9
- correct the commit message subject line]
9
off with KVM unless requested explicitly.
10
11
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
13
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20230428095533.21747-2-cohuck@redhat.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
16
---
13
target/arm/cpu.h | 8 ++++++--
17
target/arm/cpu.h | 4 +++
14
target/arm/cpu.c | 2 +-
18
target/arm/kvm_arm.h | 19 ++++++++++++
15
target/arm/helper.c | 19 ++++++++++++++++---
19
hw/arm/virt.c | 73 +++++++++++++++++++++++++-------------------
16
target/arm/translate.c | 15 +++++++++++++--
20
target/arm/cpu.c | 9 +++---
17
4 files changed, 36 insertions(+), 8 deletions(-)
21
target/arm/kvm.c | 35 +++++++++++++++++++++
22
target/arm/kvm64.c | 5 +++
23
6 files changed, 109 insertions(+), 36 deletions(-)
18
24
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
20
index XXXXXXX..XXXXXXX 100644
26
index XXXXXXX..XXXXXXX 100644
21
--- a/target/arm/cpu.h
27
--- a/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
28
+++ b/target/arm/cpu.h
23
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
29
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
24
static inline bool arm_is_el2_enabled(CPUARMState *env)
30
*/
25
{
31
uint32_t psci_conduit;
26
if (arm_feature(env, ARM_FEATURE_EL2)) {
32
27
- return !arm_is_secure_below_el3(env);
33
+ /* CPU has Memory Tag Extension */
28
+ if (arm_is_secure_below_el3(env)) {
34
+ bool has_mte;
29
+ return (env->cp15.scr_el3 & SCR_EEL2) != 0;
35
+
30
+ }
36
/* For v8M, initial value of the Secure VTOR */
31
+ return true;
37
uint32_t init_svtor;
38
/* For v8M, initial value of the Non-secure VTOR */
39
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
40
bool prop_pauth;
41
bool prop_pauth_impdef;
42
bool prop_lpa2;
43
+ OnOffAuto prop_mte;
44
45
/* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
46
uint32_t dcz_blocksize;
47
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/kvm_arm.h
50
+++ b/target/arm/kvm_arm.h
51
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_pmu_supported(void);
52
*/
53
bool kvm_arm_sve_supported(void);
54
55
+/**
56
+ * kvm_arm_mte_supported:
57
+ *
58
+ * Returns: true if KVM can enable MTE, and false otherwise.
59
+ */
60
+bool kvm_arm_mte_supported(void);
61
+
62
/**
63
* kvm_arm_get_max_vm_ipa_size:
64
* @ms: Machine state handle
65
@@ -XXX,XX +XXX,XX @@ void kvm_arm_pvtime_init(CPUState *cs, uint64_t ipa);
66
67
int kvm_arm_set_irq(int cpu, int irqtype, int irq, int level);
68
69
+void kvm_arm_enable_mte(Object *cpuobj, Error **errp);
70
+
71
#else
72
73
/*
74
@@ -XXX,XX +XXX,XX @@ static inline bool kvm_arm_steal_time_supported(void)
75
return false;
76
}
77
78
+static inline bool kvm_arm_mte_supported(void)
79
+{
80
+ return false;
81
+}
82
+
83
/*
84
* These functions should never actually be called without KVM support.
85
*/
86
@@ -XXX,XX +XXX,XX @@ static inline uint32_t kvm_arm_sve_get_vls(CPUState *cs)
87
g_assert_not_reached();
88
}
89
90
+static inline void kvm_arm_enable_mte(Object *cpuobj, Error **errp)
91
+{
92
+ g_assert_not_reached();
93
+}
94
+
95
#endif
96
97
static inline const char *gic_class_name(void)
98
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
99
index XXXXXXX..XXXXXXX 100644
100
--- a/hw/arm/virt.c
101
+++ b/hw/arm/virt.c
102
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
103
exit(1);
32
}
104
}
33
return false;
105
34
}
106
- if (vms->mte && (kvm_enabled() || hvf_enabled())) {
35
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
107
+ if (vms->mte && hvf_enabled()) {
36
return aa64;
108
error_report("mach-virt: %s does not support providing "
37
}
109
"MTE to the guest CPU",
38
110
current_accel_name());
39
- if (arm_feature(env, ARM_FEATURE_EL3)) {
111
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
40
+ if (arm_feature(env, ARM_FEATURE_EL3) &&
112
}
41
+ ((env->cp15.scr_el3 & SCR_NS) || !(env->cp15.scr_el3 & SCR_EEL2))) {
113
42
aa64 = aa64 && (env->cp15.scr_el3 & SCR_RW);
114
if (vms->mte) {
43
}
115
- /* Create the memory region only once, but link to all cpus. */
116
- if (!tag_sysmem) {
117
- /*
118
- * The property exists only if MemTag is supported.
119
- * If it is, we must allocate the ram to back that up.
120
- */
121
- if (!object_property_find(cpuobj, "tag-memory")) {
122
- error_report("MTE requested, but not supported "
123
- "by the guest CPU");
124
+ if (tcg_enabled()) {
125
+ /* Create the memory region only once, but link to all cpus. */
126
+ if (!tag_sysmem) {
127
+ /*
128
+ * The property exists only if MemTag is supported.
129
+ * If it is, we must allocate the ram to back that up.
130
+ */
131
+ if (!object_property_find(cpuobj, "tag-memory")) {
132
+ error_report("MTE requested, but not supported "
133
+ "by the guest CPU");
134
+ exit(1);
135
+ }
136
+
137
+ tag_sysmem = g_new(MemoryRegion, 1);
138
+ memory_region_init(tag_sysmem, OBJECT(machine),
139
+ "tag-memory", UINT64_MAX / 32);
140
+
141
+ if (vms->secure) {
142
+ secure_tag_sysmem = g_new(MemoryRegion, 1);
143
+ memory_region_init(secure_tag_sysmem, OBJECT(machine),
144
+ "secure-tag-memory",
145
+ UINT64_MAX / 32);
146
+
147
+ /* As with ram, secure-tag takes precedence over tag. */
148
+ memory_region_add_subregion_overlap(secure_tag_sysmem,
149
+ 0, tag_sysmem, -1);
150
+ }
151
+ }
152
+
153
+ object_property_set_link(cpuobj, "tag-memory",
154
+ OBJECT(tag_sysmem), &error_abort);
155
+ if (vms->secure) {
156
+ object_property_set_link(cpuobj, "secure-tag-memory",
157
+ OBJECT(secure_tag_sysmem),
158
+ &error_abort);
159
+ }
160
+ } else if (kvm_enabled()) {
161
+ if (!kvm_arm_mte_supported()) {
162
+ error_report("MTE requested, but not supported by KVM");
163
exit(1);
164
}
165
-
166
- tag_sysmem = g_new(MemoryRegion, 1);
167
- memory_region_init(tag_sysmem, OBJECT(machine),
168
- "tag-memory", UINT64_MAX / 32);
169
-
170
- if (vms->secure) {
171
- secure_tag_sysmem = g_new(MemoryRegion, 1);
172
- memory_region_init(secure_tag_sysmem, OBJECT(machine),
173
- "secure-tag-memory", UINT64_MAX / 32);
174
-
175
- /* As with ram, secure-tag takes precedence over tag. */
176
- memory_region_add_subregion_overlap(secure_tag_sysmem, 0,
177
- tag_sysmem, -1);
178
- }
179
- }
180
-
181
- object_property_set_link(cpuobj, "tag-memory", OBJECT(tag_sysmem),
182
- &error_abort);
183
- if (vms->secure) {
184
- object_property_set_link(cpuobj, "secure-tag-memory",
185
- OBJECT(secure_tag_sysmem),
186
- &error_abort);
187
+ kvm_arm_enable_mte(cpuobj, &error_abort);
188
}
189
}
44
190
45
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
191
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
index XXXXXXX..XXXXXXX 100644
192
index XXXXXXX..XXXXXXX 100644
47
--- a/target/arm/cpu.c
193
--- a/target/arm/cpu.c
48
+++ b/target/arm/cpu.c
194
+++ b/target/arm/cpu.c
49
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
195
@@ -XXX,XX +XXX,XX @@ void arm_cpu_post_init(Object *obj)
50
* masked from Secure state. The HCR and SCR settings
196
qdev_prop_allow_set_link_before_realize,
51
* don't affect the masking logic, only the interrupt routing.
197
OBJ_PROP_LINK_STRONG);
52
*/
198
}
53
- if (target_el == 3 || !secure) {
199
+ cpu->has_mte = true;
54
+ if (target_el == 3 || !secure || (env->cp15.scr_el3 & SCR_EEL2)) {
55
unmasked = true;
56
}
57
} else {
58
diff --git a/target/arm/helper.c b/target/arm/helper.c
59
index XXXXXXX..XXXXXXX 100644
60
--- a/target/arm/helper.c
61
+++ b/target/arm/helper.c
62
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
63
return CP_ACCESS_OK;
64
}
200
}
65
if (arm_is_secure_below_el3(env)) {
201
#endif
66
+ if (env->cp15.scr_el3 & SCR_EEL2) {
202
}
67
+ return CP_ACCESS_TRAP_EL2;
203
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
204
}
205
if (cpu->tag_memory) {
206
error_setg(errp,
207
- "Cannot enable %s when guest CPUs has MTE enabled",
208
+ "Cannot enable %s when guest CPUs has tag memory enabled",
209
current_accel_name());
210
return;
211
}
212
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
213
}
214
215
#ifndef CONFIG_USER_ONLY
216
- if (cpu->tag_memory == NULL && cpu_isar_feature(aa64_mte, cpu)) {
217
+ if (!cpu->has_mte && cpu_isar_feature(aa64_mte, cpu)) {
218
/*
219
- * Disable the MTE feature bits if we do not have tag-memory
220
- * provided by the machine.
221
+ * Disable the MTE feature bits if we do not have the feature
222
+ * setup by the machine.
223
*/
224
cpu->isar.id_aa64pfr1 =
225
FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, MTE, 0);
226
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
227
index XXXXXXX..XXXXXXX 100644
228
--- a/target/arm/kvm.c
229
+++ b/target/arm/kvm.c
230
@@ -XXX,XX +XXX,XX @@
231
#include "hw/boards.h"
232
#include "hw/irq.h"
233
#include "qemu/log.h"
234
+#include "migration/blocker.h"
235
236
const KVMCapabilityInfo kvm_arch_required_capabilities[] = {
237
KVM_CAP_LAST_INFO
238
@@ -XXX,XX +XXX,XX @@ bool kvm_arch_cpu_check_are_resettable(void)
239
void kvm_arch_accel_class_init(ObjectClass *oc)
240
{
241
}
242
+
243
+void kvm_arm_enable_mte(Object *cpuobj, Error **errp)
244
+{
245
+ static bool tried_to_enable;
246
+ static bool succeeded_to_enable;
247
+ Error *mte_migration_blocker = NULL;
248
+ int ret;
249
+
250
+ if (!tried_to_enable) {
251
+ /*
252
+ * MTE on KVM is enabled on a per-VM basis (and retrying doesn't make
253
+ * sense), and we only want a single migration blocker as well.
254
+ */
255
+ tried_to_enable = true;
256
+
257
+ ret = kvm_vm_enable_cap(kvm_state, KVM_CAP_ARM_MTE, 0);
258
+ if (ret) {
259
+ error_setg_errno(errp, -ret, "Failed to enable KVM_CAP_ARM_MTE");
260
+ return;
68
+ }
261
+ }
69
return CP_ACCESS_TRAP_EL3;
262
+
70
}
263
+ /* TODO: add proper migration support with MTE enabled */
71
/* This will be EL1 NS and EL2 NS, which just UNDEF */
264
+ error_setg(&mte_migration_blocker,
72
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
265
+ "Live migration disabled due to MTE enabled");
73
if (cpu_isar_feature(aa64_pauth, cpu)) {
266
+ if (migrate_add_blocker(mte_migration_blocker, errp)) {
74
valid_mask |= SCR_API | SCR_APK;
267
+ error_free(mte_migration_blocker);
75
}
268
+ return;
76
+ if (cpu_isar_feature(aa64_sel2, cpu)) {
77
+ valid_mask |= SCR_EEL2;
78
+ }
269
+ }
79
if (cpu_isar_feature(aa64_mte, cpu)) {
270
+ succeeded_to_enable = true;
80
valid_mask |= SCR_ATA;
271
+ }
81
}
272
+ if (succeeded_to_enable) {
82
@@ -XXX,XX +XXX,XX @@ static CPAccessResult ats_access(CPUARMState *env, const ARMCPRegInfo *ri,
273
+ object_property_set_bool(cpuobj, "has_mte", true, NULL);
83
bool isread)
274
+ }
84
{
275
+}
85
if (ri->opc2 & 4) {
276
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
86
- /* The ATS12NSO* operations must trap to EL3 if executed in
277
index XXXXXXX..XXXXXXX 100644
87
+ /* The ATS12NSO* operations must trap to EL3 or EL2 if executed in
278
--- a/target/arm/kvm64.c
88
* Secure EL1 (which can only happen if EL3 is AArch64).
279
+++ b/target/arm/kvm64.c
89
* They are simply UNDEF if executed from NS EL1.
280
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_steal_time_supported(void)
90
* They function normally from EL2 or EL3.
281
return kvm_check_extension(kvm_state, KVM_CAP_STEAL_TIME);
91
*/
282
}
92
if (arm_current_el(env) == 1) {
283
93
if (arm_is_secure_below_el3(env)) {
284
+bool kvm_arm_mte_supported(void)
94
+ if (env->cp15.scr_el3 & SCR_EEL2) {
285
+{
95
+ return CP_ACCESS_TRAP_UNCATEGORIZED_EL2;
286
+ return kvm_check_extension(kvm_state, KVM_CAP_ARM_MTE);
96
+ }
287
+}
97
return CP_ACCESS_TRAP_UNCATEGORIZED_EL3;
288
+
98
}
289
QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
99
return CP_ACCESS_TRAP_UNCATEGORIZED;
290
100
@@ -XXX,XX +XXX,XX @@ static void ats1h_write(CPUARMState *env, const ARMCPRegInfo *ri,
291
uint32_t kvm_arm_sve_get_vls(CPUState *cs)
101
static CPAccessResult at_s1e2_access(CPUARMState *env, const ARMCPRegInfo *ri,
102
bool isread)
103
{
104
- if (arm_current_el(env) == 3 && !(env->cp15.scr_el3 & SCR_NS)) {
105
+ if (arm_current_el(env) == 3 &&
106
+ !(env->cp15.scr_el3 & (SCR_NS | SCR_EEL2))) {
107
return CP_ACCESS_TRAP;
108
}
109
return CP_ACCESS_OK;
110
@@ -XXX,XX +XXX,XX @@ static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
111
bool isread)
112
{
113
/* The NSACR is RW at EL3, and RO for NS EL1 and NS EL2.
114
- * At Secure EL1 it traps to EL3.
115
+ * At Secure EL1 it traps to EL3 or EL2.
116
*/
117
if (arm_current_el(env) == 3) {
118
return CP_ACCESS_OK;
119
}
120
if (arm_is_secure_below_el3(env)) {
121
+ if (env->cp15.scr_el3 & SCR_EEL2) {
122
+ return CP_ACCESS_TRAP_EL2;
123
+ }
124
return CP_ACCESS_TRAP_EL3;
125
}
126
/* Accesses from EL1 NS and EL2 NS are UNDEF for write but allow reads. */
127
diff --git a/target/arm/translate.c b/target/arm/translate.c
128
index XXXXXXX..XXXXXXX 100644
129
--- a/target/arm/translate.c
130
+++ b/target/arm/translate.c
131
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
132
}
133
if (s->current_el == 1) {
134
/* If we're in Secure EL1 (which implies that EL3 is AArch64)
135
- * then accesses to Mon registers trap to EL3
136
+ * then accesses to Mon registers trap to Secure EL2, if it exists,
137
+ * otherwise EL3.
138
*/
139
- TCGv_i32 tcg_el = tcg_const_i32(3);
140
+ TCGv_i32 tcg_el;
141
+
142
+ if (arm_dc_feature(s, ARM_FEATURE_AARCH64) &&
143
+ dc_isar_feature(aa64_sel2, s)) {
144
+ /* Target EL is EL<3 minus SCR_EL3.EEL2> */
145
+ tcg_el = load_cpu_field(cp15.scr_el3);
146
+ tcg_gen_sextract_i32(tcg_el, tcg_el, ctz32(SCR_EEL2), 1);
147
+ tcg_gen_addi_i32(tcg_el, tcg_el, 3);
148
+ } else {
149
+ tcg_el = tcg_const_i32(3);
150
+ }
151
152
gen_exception_el(s, EXCP_UDEF, syn_uncategorized(), tcg_el);
153
tcg_temp_free_i32(tcg_el);
154
--
292
--
155
2.20.1
293
2.34.1
156
157
diff view generated by jsdifflib
1
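As a usage sketch (not part of the patch itself): on a host kernel whose
KVM exposes KVM_CAP_ARM_MTE, the guest requests MTE through the existing
'mte' machine property of the virt board, for example:

  qemu-system-aarch64 -M virt,mte=on -accel kvm -cpu host ...

Live migration of such a guest is then refused until tag migration
support exists, via the blocker installed in kvm_arm_enable_mte().
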
From: Alex Bennée <alex.bennee@linaro.org>

Commit b3aa2f2128 (target/arm: provide stubs for more external
debug registers) was added to handle HyperV's unconditional usage of
the Debug Communications Channel. It turns out that Linux will similarly
break if you enable CONFIG_HVC_DCC "ARM JTAG DCC console".

Extend the set of registers we RAZ/WI to avoid this.

Cc: Anders Roxell <anders.roxell@linaro.org>
Cc: Evgeny Iakovlev <eiakovlev@linux.microsoft.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230516104420.407912-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/debug_helper.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/target/arm/debug_helper.c b/target/arm/debug_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/debug_helper.c
+++ b/target/arm/debug_helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
       .access = PL0_R, .accessfn = access_tdcc,
       .type = ARM_CP_CONST, .resetvalue = 0 },
     /*
-     * OSDTRRX_EL1/OSDTRTX_EL1 are used for save and restore of DBGDTRRX_EL0.
-     * It is a component of the Debug Communications Channel, which is not implemented.
+     * These registers belong to the Debug Communications Channel,
+     * which is not implemented. However we implement RAZ/WI behaviour
+     * with trapping to prevent spurious SIGILLs if the guest OS does
+     * access them as the support cannot be probed for.
      */
     { .name = "OSDTRRX_EL1", .state = ARM_CP_STATE_BOTH, .cp = 14,
       .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 0, .opc2 = 2,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_cp_reginfo[] = {
       .opc0 = 2, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
       .access = PL1_RW, .accessfn = access_tdcc,
       .type = ARM_CP_CONST, .resetvalue = 0 },
+    /* DBGDTRTX_EL0/DBGDTRRX_EL0 depend on direction */
+    { .name = "DBGDTR_EL0", .state = ARM_CP_STATE_BOTH, .cp = 14,
+      .opc0 = 2, .opc1 = 3, .crn = 0, .crm = 5, .opc2 = 0,
+      .access = PL0_RW, .accessfn = access_tdcc,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
     /*
      * OSECCR_EL1 provides a mechanism for an operating system
      * to access the contents of EDECCR. EDECCR is not implemented though,
-- 
2.34.1

From: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>

The Bochs display is a normal PCI Express card, so it fits better in a
system with a PCI Express bus. VGA is a simple legacy PCI card.

Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Reviewed-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20230505120936.1097060-1-marcin.juszkiewicz@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void create_pcie(SBSAMachineState *sms)
         }
     }
 
-    pci_create_simple(pci->bus, -1, "VGA");
+    pci_create_simple(pci->bus, -1, "bochs-display");
 
     create_smmu(sms, pci->bus);
 }
-- 
2.34.1

From: Richard Henderson <richard.henderson@linaro.org>

Split out all of the decode stuff from aarch64_tr_translate_insn.
Call it disas_a64_legacy to indicate it will be replaced.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-2-peter.maydell@linaro.org
[PMM: Rebased]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/tcg/translate-a64.c | 82 ++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 38 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
     return false;
 }
 
+/* C3.1 A64 instruction index by encoding */
+static void disas_a64_legacy(DisasContext *s, uint32_t insn)
+{
+    switch (extract32(insn, 25, 4)) {
+    case 0x0:
+        if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
+            unallocated_encoding(s);
+        }
+        break;
+    case 0x1: case 0x3: /* UNALLOCATED */
+        unallocated_encoding(s);
+        break;
+    case 0x2:
+        if (!disas_sve(s, insn)) {
+            unallocated_encoding(s);
+        }
+        break;
+    case 0x8: case 0x9: /* Data processing - immediate */
+        disas_data_proc_imm(s, insn);
+        break;
+    case 0xa: case 0xb: /* Branch, exception generation and system insns */
+        disas_b_exc_sys(s, insn);
+        break;
+    case 0x4:
+    case 0x6:
+    case 0xc:
+    case 0xe:      /* Loads and stores */
+        disas_ldst(s, insn);
+        break;
+    case 0x5:
+    case 0xd:      /* Data processing - register */
+        disas_data_proc_reg(s, insn);
+        break;
+    case 0x7:
+    case 0xf:      /* Data processing - SIMD and floating point */
+        disas_data_proc_simd_fp(s, insn);
+        break;
+    default:
+        assert(FALSE); /* all 15 cases should be handled above */
+        break;
+    }
+}
+
 static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
                                           CPUState *cpu)
 {
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
         disas_sme_fa64(s, insn);
     }
 
-    switch (extract32(insn, 25, 4)) {
-    case 0x0:
-        if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
-            unallocated_encoding(s);
-        }
-        break;
-    case 0x1: case 0x3: /* UNALLOCATED */
-        unallocated_encoding(s);
-        break;
-    case 0x2:
-        if (!disas_sve(s, insn)) {
-            unallocated_encoding(s);
-        }
-        break;
-    case 0x8: case 0x9: /* Data processing - immediate */
-        disas_data_proc_imm(s, insn);
-        break;
-    case 0xa: case 0xb: /* Branch, exception generation and system insns */
-        disas_b_exc_sys(s, insn);
-        break;
-    case 0x4:
-    case 0x6:
-    case 0xc:
-    case 0xe:      /* Loads and stores */
-        disas_ldst(s, insn);
-        break;
-    case 0x5:
-    case 0xd:      /* Data processing - register */
-        disas_data_proc_reg(s, insn);
-        break;
-    case 0x7:
-    case 0xf:      /* Data processing - SIMD and floating point */
-        disas_data_proc_simd_fp(s, insn);
-        break;
-    default:
-        assert(FALSE); /* all 15 cases should be handled above */
-        break;
-    }
+    disas_a64_legacy(s, insn);
 
     /*
      * After execution of most insns, btype is reset to 0.
-- 
2.34.1

From: Mihai Carabas <mihai.carabas@oracle.com>
1
The A64 translator uses a hand-written decoder for everything except
2
SVE or SME. It's fairly well structured, but it's becoming obvious
3
that it's still more painful to add instructions to than the A32
4
translator, because putting a new instruction into the right place in
5
a hand-written decoder is much harder than adding new instruction
6
patterns to a decodetree file.
2
7
3
Add a test case for pvpanic-pci device. The scenario is the same as pvpapnic
8
As the first step in conversion to decodetree, create the skeleton of
4
ISA device, but is using the PCI bus.
9
the decodetree decoder; where it does not handle instructions we will
10
fall back to the legacy decoder (which will be for everything at the
11
moment, since there are no patterns in a64.decode).
5
12
6
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
7
Acked-by: Thomas Huth <thuth@redhat.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20230512144106.3608981-3-peter.maydell@linaro.org
10
---
16
---
11
tests/qtest/pvpanic-pci-test.c | 62 ++++++++++++++++++++++++++++++++++
17
target/arm/tcg/a64.decode | 20 ++++++++++++++++++++
12
tests/qtest/meson.build | 1 +
18
target/arm/tcg/translate-a64.c | 18 +++++++++++-------
13
2 files changed, 63 insertions(+)
19
target/arm/tcg/meson.build | 1 +
14
create mode 100644 tests/qtest/pvpanic-pci-test.c
20
3 files changed, 32 insertions(+), 7 deletions(-)
21
create mode 100644 target/arm/tcg/a64.decode
15
22
16
diff --git a/tests/qtest/pvpanic-pci-test.c b/tests/qtest/pvpanic-pci-test.c
23
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
new file mode 100644
24
new file mode 100644
18
index XXXXXXX..XXXXXXX
25
index XXXXXXX..XXXXXXX
19
--- /dev/null
26
--- /dev/null
20
+++ b/tests/qtest/pvpanic-pci-test.c
27
+++ b/target/arm/tcg/a64.decode
21
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@
29
+# AArch64 A64 allowed instruction decoding
30
+#
31
+# Copyright (c) 2023 Linaro, Ltd
32
+#
33
+# This library is free software; you can redistribute it and/or
34
+# modify it under the terms of the GNU Lesser General Public
35
+# License as published by the Free Software Foundation; either
36
+# version 2.1 of the License, or (at your option) any later version.
37
+#
38
+# This library is distributed in the hope that it will be useful,
39
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
40
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
41
+# Lesser General Public License for more details.
42
+#
43
+# You should have received a copy of the GNU Lesser General Public
44
+# License along with this library; if not, see <http://www.gnu.org/licenses/>.
45
+
46
+#
47
+# This file is processed by scripts/decodetree.py
48
+#
49
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
50
index XXXXXXX..XXXXXXX 100644
51
--- a/target/arm/tcg/translate-a64.c
52
+++ b/target/arm/tcg/translate-a64.c
53
@@ -XXX,XX +XXX,XX @@ enum a64_shift_type {
54
A64_SHIFT_TYPE_ROR = 3
55
};
56
22
+/*
57
+/*
23
+ * QTest testcase for PV Panic PCI device
58
+ * Include the generated decoders.
24
+ *
25
+ * Copyright (C) 2020 Oracle
26
+ *
27
+ * Authors:
28
+ * Mihai Carabas <mihai.carabas@oracle.com>
29
+ *
30
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
31
+ * See the COPYING file in the top-level directory.
32
+ *
33
+ */
59
+ */
34
+
60
+
35
+#include "qemu/osdep.h"
61
+#include "decode-sme-fa64.c.inc"
36
+#include "libqos/libqtest.h"
62
+#include "decode-a64.c.inc"
37
+#include "qapi/qmp/qdict.h"
38
+#include "libqos/pci.h"
39
+#include "libqos/pci-pc.h"
40
+#include "hw/pci/pci_regs.h"
41
+
63
+
42
+static void test_panic(void)
64
/* Table based decoder typedefs - used when the relevant bits for decode
43
+{
65
* are too awkwardly scattered across the instruction (eg SIMD).
44
+ uint8_t val;
66
*/
45
+ QDict *response, *data;
67
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_simd_fp(DisasContext *s, uint32_t insn)
46
+ QTestState *qts;
68
}
47
+ QPCIBus *pcibus;
69
}
48
+ QPCIDevice *dev;
70
49
+ QPCIBar bar;
71
-/*
72
- * Include the generated SME FA64 decoder.
73
- */
74
-
75
-#include "decode-sme-fa64.c.inc"
76
-
77
static bool trans_OK(DisasContext *s, arg_OK *a)
78
{
79
return true;
80
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
81
disas_sme_fa64(s, insn);
82
}
83
84
- disas_a64_legacy(s, insn);
50
+
85
+
51
+ qts = qtest_init("-device pvpanic-pci");
86
+ if (!disas_a64(s, insn)) {
52
+ pcibus = qpci_new_pc(qts, NULL);
87
+ disas_a64_legacy(s, insn);
53
+ dev = qpci_device_find(pcibus, QPCI_DEVFN(0x4, 0x0));
88
+ }
54
+ qpci_device_enable(dev);
89
55
+ bar = qpci_iomap(dev, 0, NULL);
90
/*
56
+
91
* After execution of most insns, btype is reset to 0.
57
+ qpci_memread(dev, bar, 0, &val, sizeof(val));
92
diff --git a/target/arm/tcg/meson.build b/target/arm/tcg/meson.build
58
+ g_assert_cmpuint(val, ==, 3);
59
+
60
+ val = 1;
61
+ qpci_memwrite(dev, bar, 0, &val, sizeof(val));
62
+
63
+ response = qtest_qmp_eventwait_ref(qts, "GUEST_PANICKED");
64
+ g_assert(qdict_haskey(response, "data"));
65
+ data = qdict_get_qdict(response, "data");
66
+ g_assert(qdict_haskey(data, "action"));
67
+ g_assert_cmpstr(qdict_get_str(data, "action"), ==, "pause");
68
+ qobject_unref(response);
69
+
70
+ qtest_quit(qts);
71
+}
72
+
73
+int main(int argc, char **argv)
74
+{
75
+ int ret;
76
+
77
+ g_test_init(&argc, &argv, NULL);
78
+ qtest_add_func("/pvpanic-pci/panic", test_panic);
79
+
80
+ ret = g_test_run();
81
+
82
+ return ret;
83
+}
84
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
85
index XXXXXXX..XXXXXXX 100644
93
index XXXXXXX..XXXXXXX 100644
86
--- a/tests/qtest/meson.build
94
--- a/target/arm/tcg/meson.build
87
+++ b/tests/qtest/meson.build
95
+++ b/target/arm/tcg/meson.build
88
@@ -XXX,XX +XXX,XX @@ endif
96
@@ -XXX,XX +XXX,XX @@ gen = [
89
97
decodetree.process('a32-uncond.decode', extra_args: '--static-decode=disas_a32_uncond'),
90
qtests_pci = \
98
decodetree.process('t32.decode', extra_args: '--static-decode=disas_t32'),
91
(config_all_devices.has_key('CONFIG_VGA') ? ['display-vga-test'] : []) + \
99
decodetree.process('t16.decode', extra_args: ['-w', '16', '--static-decode=disas_t16']),
92
+ (config_all_devices.has_key('CONFIG_PVPANIC_PCI') ? ['pvpanic-pci-test'] : []) + \
100
+ decodetree.process('a64.decode', extra_args: ['--static-decode=disas_a64']),
93
(config_all_devices.has_key('CONFIG_IVSHMEM_DEVICE') ? ['ivshmem-test'] : [])
101
]
94
102
95
qtests_i386 = \
103
arm_ss.add(gen)
96
--
104
--
97
2.20.1
105
2.34.1
98
99
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
The SVE and SME decode is already done by decodetree. Pull the calls
2
to these decoders out of the legacy decoder. This doesn't change
3
behaviour because all the patterns in sve.decode and sme.decode
4
already require the bits that the legacy decoder is decoding to have
5
the correct values.
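
In outline, the resulting dispatch in aarch64_tr_translate_insn() is the
pattern shown in the hunk below (simplified sketch):

    if (!disas_a64(s, insn) &&
        !disas_sme(s, insn) &&
        !disas_sve(s, insn)) {
        disas_a64_legacy(s, insn);  /* nothing claimed it: hand-written decoder */
    }

so the generated decoders get first refusal and the legacy decoder only
sees encodings that none of them matched.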
2
6
3
This will simplify accessing HCR conditionally in secure state.
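
Concretely, the hunks below switch to reading the effective HCR_EL2 value
once and testing bits on that, along these lines (sketch):

    uint64_t hcr = arm_hcr_el2_eff(env);
    if (hcr & HCR_TGE) {
        /* ... */
    }

instead of dereferencing env->cp15.hcr_el2 directly, so the secure versus
non-secure handling lives in arm_hcr_el2_eff().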
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20230512144106.3608981-4-peter.maydell@linaro.org
10
---
11
target/arm/tcg/translate-a64.c | 20 ++++----------------
12
1 file changed, 4 insertions(+), 16 deletions(-)
4
13
5
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
14
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20210112104511.36576-4-remi.denis.courmont@huawei.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
target/arm/helper.c | 31 ++++++++++++++++++-------------
11
1 file changed, 18 insertions(+), 13 deletions(-)
12
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/helper.c
16
--- a/target/arm/tcg/translate-a64.c
16
+++ b/target/arm/helper.c
17
+++ b/target/arm/tcg/translate-a64.c
17
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env,
18
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
18
19
static void disas_a64_legacy(DisasContext *s, uint32_t insn)
19
static int vae1_tlbmask(CPUARMState *env)
20
{
20
{
21
- /* Since we exclude secure first, we may read HCR_EL2 directly. */
21
switch (extract32(insn, 25, 4)) {
22
- if (arm_is_secure_below_el3(env)) {
22
- case 0x0:
23
- return ARMMMUIdxBit_SE10_1 |
23
- if (!extract32(insn, 31, 1) || !disas_sme(s, insn)) {
24
- ARMMMUIdxBit_SE10_1_PAN |
24
- unallocated_encoding(s);
25
- ARMMMUIdxBit_SE10_0;
25
- }
26
- } else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
26
- break;
27
- == (HCR_E2H | HCR_TGE)) {
27
- case 0x1: case 0x3: /* UNALLOCATED */
28
+ uint64_t hcr = arm_hcr_el2_eff(env);
28
- unallocated_encoding(s);
29
+
29
- break;
30
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
30
- case 0x2:
31
return ARMMMUIdxBit_E20_2 |
31
- if (!disas_sve(s, insn)) {
32
ARMMMUIdxBit_E20_2_PAN |
32
- unallocated_encoding(s);
33
ARMMMUIdxBit_E20_0;
33
- }
34
+ } else if (arm_is_secure_below_el3(env)) {
34
- break;
35
+ return ARMMMUIdxBit_SE10_1 |
35
case 0x8: case 0x9: /* Data processing - immediate */
36
+ ARMMMUIdxBit_SE10_1_PAN |
36
disas_data_proc_imm(s, insn);
37
+ ARMMMUIdxBit_SE10_0;
37
break;
38
} else {
38
@@ -XXX,XX +XXX,XX @@ static void disas_a64_legacy(DisasContext *s, uint32_t insn)
39
return ARMMMUIdxBit_E10_1 |
39
disas_data_proc_simd_fp(s, insn);
40
ARMMMUIdxBit_E10_1_PAN |
40
break;
41
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
41
default:
42
static inline bool regime_translation_disabled(CPUARMState *env,
42
- assert(FALSE); /* all 15 cases should be handled above */
43
ARMMMUIdx mmu_idx)
43
+ unallocated_encoding(s);
44
{
44
break;
45
+ uint64_t hcr_el2;
46
+
47
if (arm_feature(env, ARM_FEATURE_M)) {
48
switch (env->v7m.mpu_ctrl[regime_is_secure(env, mmu_idx)] &
49
(R_V7M_MPU_CTRL_ENABLE_MASK | R_V7M_MPU_CTRL_HFNMIENA_MASK)) {
50
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
51
}
52
}
45
}
53
46
}
54
+ hcr_el2 = arm_hcr_el2_eff(env);
47
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
55
+
48
disas_sme_fa64(s, insn);
56
if (mmu_idx == ARMMMUIdx_Stage2) {
57
/* HCR.DC means HCR.VM behaves as 1 */
58
- return (env->cp15.hcr_el2 & (HCR_DC | HCR_VM)) == 0;
59
+ return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
60
}
49
}
61
50
62
- if (env->cp15.hcr_el2 & HCR_TGE) {
51
-
63
+ if (hcr_el2 & HCR_TGE) {
52
- if (!disas_a64(s, insn)) {
64
/* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
53
+ if (!disas_a64(s, insn) &&
65
if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
54
+ !disas_sme(s, insn) &&
66
return true;
55
+ !disas_sve(s, insn)) {
67
}
56
disas_a64_legacy(s, insn);
68
}
57
}
69
58
70
- if ((env->cp15.hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
71
+ if ((hcr_el2 & HCR_DC) && arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
72
/* HCR.DC means SCTLR_EL1.M behaves as 0 */
73
return true;
74
}
75
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
76
fi->s1ptw = true;
77
return ~0;
78
}
79
- if ((env->cp15.hcr_el2 & HCR_PTW) && (cacheattrs.attrs & 0xf0) == 0) {
80
+ if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
81
+ (cacheattrs.attrs & 0xf0) == 0) {
82
/*
83
* PTW set and S1 walk touched S2 Device memory:
84
* generate Permission fault.
85
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
86
uint8_t hihint = 0, lohint = 0;
87
88
if (hiattr != 0) { /* normal memory */
89
- if ((env->cp15.hcr_el2 & HCR_CD) != 0) { /* cache disabled */
90
+ if (arm_hcr_el2_eff(env) & HCR_CD) { /* cache disabled */
91
hiattr = loattr = 1; /* non-cacheable */
92
} else {
93
if (hiattr != 1) { /* Write-through or write-back */
94
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
95
}
96
97
/* Combine the S1 and S2 cache attributes. */
98
- if (env->cp15.hcr_el2 & HCR_DC) {
99
+ if (arm_hcr_el2_eff(env) & HCR_DC) {
100
/*
101
* HCR.DC forces the first stage attributes to
102
* Normal Non-Shareable,
103
--
59
--
104
2.20.1
60
2.34.1
105
106
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The crypto overhead of emulating pauth can be significant for
3
Convert the ADR and ADRP instructions.
4
some workloads. Add two boolean properties that allow the
5
feature to be turned off, on with the architected algorithm,
6
or on with an implementation defined algorithm.
7
4
8
We need two intermediate booleans to control the state while
9
parsing properties lest we clobber ID_AA64ISAR1 into an invalid
10
intermediate state.
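
As a usage sketch (assuming the usual -cpu property syntax, not taken from
this patch), the faster QEMU-specific algorithm can be selected with
something like:

    qemu-system-aarch64 -machine virt -cpu max,pauth-impdef=on ...

and pointer authentication can be disabled entirely with pauth=off.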
11
12
Tested-by: Mark Rutland <mark.rutland@arm.com>
13
Reviewed-by: Andrew Jones <drjones@redhat.com>
14
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
15
Message-id: 20210111235740.462469-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
[PMM: fixed docs typo, tweaked text to clarify that the impdef
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
algorithm is specific to QEMU]
8
Message-id: 20230512144106.3608981-5-peter.maydell@linaro.org
9
[PMM: Rebased]
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
---
12
---
20
docs/system/arm/cpu-features.rst | 21 +++++++++++++++++
13
target/arm/tcg/a64.decode | 13 ++++++++++++
21
target/arm/cpu.h | 10 ++++++++
14
target/arm/tcg/translate-a64.c | 38 +++++++++++++---------------------
22
target/arm/cpu.c | 13 +++++++++++
15
2 files changed, 27 insertions(+), 24 deletions(-)
23
target/arm/cpu64.c | 40 ++++++++++++++++++++++++++++----
24
target/arm/monitor.c | 1 +
25
tests/qtest/arm-cpu-features.c | 13 +++++++++++
26
6 files changed, 94 insertions(+), 4 deletions(-)
27
16
28
diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
29
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
30
--- a/docs/system/arm/cpu-features.rst
19
--- a/target/arm/tcg/a64.decode
31
+++ b/docs/system/arm/cpu-features.rst
20
+++ b/target/arm/tcg/a64.decode
32
@@ -XXX,XX +XXX,XX @@ the list of KVM VCPU features and their descriptions.
21
@@ -XXX,XX +XXX,XX @@
33
influence the guest scheduler behavior and/or be
22
#
34
exposed to the guest userspace.
23
# This file is processed by scripts/decodetree.py
35
24
#
36
+TCG VCPU Features
37
+=================
38
+
25
+
39
+TCG VCPU features are CPU features that are specific to TCG.
26
+&ri rd imm
40
+Below is the list of TCG VCPU features and their descriptions.
41
+
27
+
42
+ pauth Enable or disable `FEAT_Pauth`, pointer
43
+ authentication. By default, the feature is
44
+ enabled with `-cpu max`.
45
+
28
+
46
+ pauth-impdef When `FEAT_Pauth` is enabled, either the
29
+### Data Processing - Immediate
47
+ *impdef* (Implementation Defined) algorithm
48
+ is enabled or the *architected* QARMA algorithm
49
+ is enabled. By default the impdef algorithm
50
+ is disabled, and QARMA is enabled.
51
+
30
+
52
+ The architected QARMA algorithm has good
31
+# PC-rel addressing
53
+ cryptographic properties, but can be quite slow
54
+ to emulate. The impdef algorithm used by QEMU
55
+ is non-cryptographic but significantly faster.
56
+
32
+
57
SVE CPU Properties
33
+%imm_pcrel 5:s19 29:2
58
==================
34
+@pcrel . .. ..... ................... rd:5 &ri imm=%imm_pcrel
59
35
+
60
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
36
+ADR 0 .. 10000 ................... ..... @pcrel
37
+ADRP 1 .. 10000 ................... ..... @pcrel
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
61
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
62
--- a/target/arm/cpu.h
40
--- a/target/arm/tcg/translate-a64.c
63
+++ b/target/arm/cpu.h
41
+++ b/target/arm/tcg/translate-a64.c
64
@@ -XXX,XX +XXX,XX @@ typedef struct {
42
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
65
#ifdef TARGET_AARCH64
66
# define ARM_MAX_VQ 16
67
void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
68
+void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
69
#else
70
# define ARM_MAX_VQ 1
71
static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { }
72
+static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { }
73
#endif
74
75
typedef struct ARMVectorReg {
76
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
77
uint64_t reset_cbar;
78
uint32_t reset_auxcr;
79
bool reset_hivecs;
80
+
81
+ /*
82
+ * Intermediate values used during property parsing.
83
+ * Once finalized, the values should be read from ID_AA64ISAR1.
84
+ */
85
+ bool prop_pauth;
86
+ bool prop_pauth_impdef;
87
+
88
/* DCZ blocksize, in log_2(words), ie low 4 bits of DCZID_EL0 */
89
uint32_t dcz_blocksize;
90
uint64_t rvbar;
91
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
92
index XXXXXXX..XXXXXXX 100644
93
--- a/target/arm/cpu.c
94
+++ b/target/arm/cpu.c
95
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
96
error_propagate(errp, local_err);
97
return;
98
}
99
+
100
+ /*
101
+ * KVM does not support modifications to this feature.
102
+ * We have not registered the cpu properties when KVM
103
+ * is in use, so the user will not be able to set them.
104
+ */
105
+ if (!kvm_enabled()) {
106
+ arm_cpu_pauth_finalize(cpu, &local_err);
107
+ if (local_err != NULL) {
108
+ error_propagate(errp, local_err);
109
+ return;
110
+ }
111
+ }
112
}
113
114
if (kvm_enabled()) {
115
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
116
index XXXXXXX..XXXXXXX 100644
117
--- a/target/arm/cpu64.c
118
+++ b/target/arm/cpu64.c
119
@@ -XXX,XX +XXX,XX @@
120
#include "sysemu/kvm.h"
121
#include "kvm_arm.h"
122
#include "qapi/visitor.h"
123
+#include "hw/qdev-properties.h"
124
+
125
126
#ifndef CONFIG_USER_ONLY
127
static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
128
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
129
}
43
}
130
}
44
}
131
45
132
+void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
46
-/* PC-rel. addressing
47
- * 31 30 29 28 24 23 5 4 0
48
- * +----+-------+-----------+-------------------+------+
49
- * | op | immlo | 1 0 0 0 0 | immhi | Rd |
50
- * +----+-------+-----------+-------------------+------+
51
+/*
52
+ * PC-rel. addressing
53
*/
54
-static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
55
+
56
+static bool trans_ADR(DisasContext *s, arg_ri *a)
57
{
58
- unsigned int page, rd;
59
- int64_t offset;
60
+ gen_pc_plus_diff(s, cpu_reg(s, a->rd), a->imm);
61
+ return true;
62
+}
63
64
- page = extract32(insn, 31, 1);
65
- /* SignExtend(immhi:immlo) -> offset */
66
- offset = sextract64(insn, 5, 19);
67
- offset = offset << 2 | extract32(insn, 29, 2);
68
- rd = extract32(insn, 0, 5);
69
+static bool trans_ADRP(DisasContext *s, arg_ri *a)
133
+{
70
+{
134
+ int arch_val = 0, impdef_val = 0;
71
+ int64_t offset = (int64_t)a->imm << 12;
135
+ uint64_t t;
72
136
+
73
- if (page) {
137
+ /* TODO: Handle HaveEnhancedPAC, HaveEnhancedPAC2, HaveFPAC. */
74
- /* ADRP (page based) */
138
+ if (cpu->prop_pauth) {
75
- offset <<= 12;
139
+ if (cpu->prop_pauth_impdef) {
76
- /* The page offset is ok for CF_PCREL. */
140
+ impdef_val = 1;
77
- offset -= s->pc_curr & 0xfff;
141
+ } else {
78
- }
142
+ arch_val = 1;
79
-
143
+ }
80
- gen_pc_plus_diff(s, cpu_reg(s, rd), offset);
144
+ } else if (cpu->prop_pauth_impdef) {
81
+ /* The page offset is ok for CF_PCREL. */
145
+ error_setg(errp, "cannot enable pauth-impdef without pauth");
82
+ offset -= s->pc_curr & 0xfff;
146
+ error_append_hint(errp, "Add pauth=on to the CPU property list.\n");
83
+ gen_pc_plus_diff(s, cpu_reg(s, a->rd), offset);
147
+ }
84
+ return true;
148
+
149
+ t = cpu->isar.id_aa64isar1;
150
+ t = FIELD_DP64(t, ID_AA64ISAR1, APA, arch_val);
151
+ t = FIELD_DP64(t, ID_AA64ISAR1, GPA, arch_val);
152
+ t = FIELD_DP64(t, ID_AA64ISAR1, API, impdef_val);
153
+ t = FIELD_DP64(t, ID_AA64ISAR1, GPI, impdef_val);
154
+ cpu->isar.id_aa64isar1 = t;
155
+}
156
+
157
+static Property arm_cpu_pauth_property =
158
+ DEFINE_PROP_BOOL("pauth", ARMCPU, prop_pauth, true);
159
+static Property arm_cpu_pauth_impdef_property =
160
+ DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false);
161
+
162
/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
163
* otherwise, a CPU with as many features enabled as our emulation supports.
164
* The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
165
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
166
t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
167
t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
168
t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
169
- t = FIELD_DP64(t, ID_AA64ISAR1, APA, 1); /* PAuth, architected only */
170
- t = FIELD_DP64(t, ID_AA64ISAR1, API, 0);
171
- t = FIELD_DP64(t, ID_AA64ISAR1, GPA, 1);
172
- t = FIELD_DP64(t, ID_AA64ISAR1, GPI, 0);
173
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
174
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
175
t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
176
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
177
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
178
cpu->dcz_blocksize = 7; /* 512 bytes */
179
#endif
180
+
181
+ /* Default to PAUTH on, with the architected algorithm. */
182
+ qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_property);
183
+ qdev_property_add_static(DEVICE(obj), &arm_cpu_pauth_impdef_property);
184
}
185
186
aarch64_add_sve_properties(obj);
187
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
188
index XXXXXXX..XXXXXXX 100644
189
--- a/target/arm/monitor.c
190
+++ b/target/arm/monitor.c
191
@@ -XXX,XX +XXX,XX @@ static const char *cpu_model_advertised_features[] = {
192
"sve640", "sve768", "sve896", "sve1024", "sve1152", "sve1280",
193
"sve1408", "sve1536", "sve1664", "sve1792", "sve1920", "sve2048",
194
"kvm-no-adjvtime", "kvm-steal-time",
195
+ "pauth", "pauth-impdef",
196
NULL
197
};
198
199
diff --git a/tests/qtest/arm-cpu-features.c b/tests/qtest/arm-cpu-features.c
200
index XXXXXXX..XXXXXXX 100644
201
--- a/tests/qtest/arm-cpu-features.c
202
+++ b/tests/qtest/arm-cpu-features.c
203
@@ -XXX,XX +XXX,XX @@ static void sve_tests_sve_off_kvm(const void *data)
204
qtest_quit(qts);
205
}
85
}
206
86
207
+static void pauth_tests_default(QTestState *qts, const char *cpu_type)
87
/*
208
+{
88
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
209
+ assert_has_feature_enabled(qts, cpu_type, "pauth");
89
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
210
+ assert_has_feature_disabled(qts, cpu_type, "pauth-impdef");
211
+ assert_set_feature(qts, cpu_type, "pauth", false);
212
+ assert_set_feature(qts, cpu_type, "pauth", true);
213
+ assert_set_feature(qts, cpu_type, "pauth-impdef", true);
214
+ assert_set_feature(qts, cpu_type, "pauth-impdef", false);
215
+ assert_error(qts, cpu_type, "cannot enable pauth-impdef without pauth",
216
+ "{ 'pauth': false, 'pauth-impdef': true }");
217
+}
218
+
219
static void test_query_cpu_model_expansion(const void *data)
220
{
90
{
221
QTestState *qts;
91
switch (extract32(insn, 23, 6)) {
222
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
92
- case 0x20: case 0x21: /* PC-rel. addressing */
223
assert_has_feature_enabled(qts, "cortex-a57", "aarch64");
93
- disas_pc_rel_adr(s, insn);
224
94
- break;
225
sve_tests_default(qts, "max");
95
case 0x22: /* Add/subtract (immediate) */
226
+ pauth_tests_default(qts, "max");
96
disas_add_sub_imm(s, insn);
227
97
break;
228
/* Test that features that depend on KVM generate errors without. */
229
assert_error(qts, "max",
230
--
98
--
231
2.20.1
99
2.34.1
232
233
1
From: Mihai Carabas <mihai.carabas@oracle.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
To ease the PCI device addition in next patches, split the code as follows:
3
Split out specific 32-bit and 64-bit functions.
4
- generic code (read/write/setup) is being kept in pvpanic.c
4
These carry the same signature as tcg_gen_add_i64,
5
- ISA dependent code moved to pvpanic-isa.c
5
and so will be easier to pass as callbacks.
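
The payoff comes later in the series: because tcg_gen_add_i64, gen_add64_CC
and gen_sub64_CC now share one shape, a single helper can take the
operation as a callback, roughly:

    typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);

    /* sketch only; see the ADD/SUB (immediate) conversion below */
    static bool gen_rri(DisasContext *s, arg_rri_sf *a, ArithTwoOp *fn)
    {
        fn(cpu_reg(s, a->rd), cpu_reg(s, a->rn), tcg_constant_i64(a->imm));
        return true;
    }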
6
6
7
Also, rename:
7
Retain gen_add_CC and gen_sub_CC during conversion.
8
- ISA_PVPANIC_DEVICE -> PVPANIC_ISA_DEVICE.
9
- TYPE_PVPANIC -> TYPE_PVPANIC_ISA.
10
- MemoryRegion io -> mr.
11
- pvpanic_ioport_* to pvpanic_*.
12
8
13
Update the build system with the new files and config structure.
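
After the split, the bus-independent state is just the memory region plus
the supported-events mask, and each front end embeds it; roughly, from the
hunks below:

    /* shared, include/hw/misc/pvpanic.h */
    struct PVPanicState {
        MemoryRegion mr;
        uint8_t events;
    };

    /* hw/misc/pvpanic-isa.c */
    struct PVPanicISAState {
        ISADevice parent_obj;
        uint16_t ioport;
        PVPanicState pvpanic;
    };

with pvpanic_setup_io() initialising the shared region for whichever bus
front end registers it.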
9
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Message-id: 20230512144106.3608981-6-peter.maydell@linaro.org
13
[PMM: rebased]
16
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
14
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
16
---
19
include/hw/misc/pvpanic.h | 23 +++++++++-
17
target/arm/tcg/translate-a64.c | 149 +++++++++++++++++++--------------
20
hw/misc/pvpanic-isa.c | 94 +++++++++++++++++++++++++++++++++++++++
18
1 file changed, 84 insertions(+), 65 deletions(-)
21
hw/misc/pvpanic.c | 85 +++--------------------------------
22
hw/i386/Kconfig | 2 +-
23
hw/misc/Kconfig | 6 ++-
24
hw/misc/meson.build | 3 +-
25
tests/qtest/meson.build | 2 +-
26
7 files changed, 130 insertions(+), 85 deletions(-)
27
create mode 100644 hw/misc/pvpanic-isa.c
28
19
29
diff --git a/include/hw/misc/pvpanic.h b/include/hw/misc/pvpanic.h
20
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
21
index XXXXXXX..XXXXXXX 100644
31
--- a/include/hw/misc/pvpanic.h
22
--- a/target/arm/tcg/translate-a64.c
32
+++ b/include/hw/misc/pvpanic.h
23
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@
24
@@ -XXX,XX +XXX,XX @@ static inline void gen_logic_CC(int sf, TCGv_i64 result)
34
25
}
35
#include "qom/object.h"
26
36
27
/* dest = T0 + T1; compute C, N, V and Z flags */
37
-#define TYPE_PVPANIC "pvpanic"
28
+static void gen_add64_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
38
+#define TYPE_PVPANIC_ISA_DEVICE "pvpanic"
29
+{
39
30
+ TCGv_i64 result, flag, tmp;
40
#define PVPANIC_IOPORT_PROP "ioport"
31
+ result = tcg_temp_new_i64();
41
32
+ flag = tcg_temp_new_i64();
42
+/* The bit of supported pv event, TODO: include uapi header and remove this */
33
+ tmp = tcg_temp_new_i64();
43
+#define PVPANIC_F_PANICKED 0
44
+#define PVPANIC_F_CRASHLOADED 1
45
+
34
+
46
+/* The pv event value */
35
+ tcg_gen_movi_i64(tmp, 0);
47
+#define PVPANIC_PANICKED (1 << PVPANIC_F_PANICKED)
36
+ tcg_gen_add2_i64(result, flag, t0, tmp, t1, tmp);
48
+#define PVPANIC_CRASHLOADED (1 << PVPANIC_F_CRASHLOADED)
49
+
37
+
50
+/*
38
+ tcg_gen_extrl_i64_i32(cpu_CF, flag);
51
+ * PVPanicState for any device type
52
+ */
53
+typedef struct PVPanicState PVPanicState;
54
+struct PVPanicState {
55
+ MemoryRegion mr;
56
+ uint8_t events;
57
+};
58
+
39
+
59
+void pvpanic_setup_io(PVPanicState *s, DeviceState *dev, unsigned size);
40
+ gen_set_NZ64(result);
60
+
41
+
61
static inline uint16_t pvpanic_port(void)
42
+ tcg_gen_xor_i64(flag, result, t0);
62
{
43
+ tcg_gen_xor_i64(tmp, t0, t1);
63
- Object *o = object_resolve_path_type("", TYPE_PVPANIC, NULL);
44
+ tcg_gen_andc_i64(flag, flag, tmp);
64
+ Object *o = object_resolve_path_type("", TYPE_PVPANIC_ISA_DEVICE, NULL);
45
+ tcg_gen_extrh_i64_i32(cpu_VF, flag);
65
if (!o) {
66
return 0;
67
}
68
diff --git a/hw/misc/pvpanic-isa.c b/hw/misc/pvpanic-isa.c
69
new file mode 100644
70
index XXXXXXX..XXXXXXX
71
--- /dev/null
72
+++ b/hw/misc/pvpanic-isa.c
73
@@ -XXX,XX +XXX,XX @@
74
+/*
75
+ * QEMU simulated pvpanic device.
76
+ *
77
+ * Copyright Fujitsu, Corp. 2013
78
+ *
79
+ * Authors:
80
+ * Wen Congyang <wency@cn.fujitsu.com>
81
+ * Hu Tao <hutao@cn.fujitsu.com>
82
+ *
83
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
84
+ * See the COPYING file in the top-level directory.
85
+ *
86
+ */
87
+
46
+
88
+#include "qemu/osdep.h"
47
+ tcg_gen_mov_i64(dest, result);
89
+#include "qemu/log.h"
90
+#include "qemu/module.h"
91
+#include "sysemu/runstate.h"
92
+
93
+#include "hw/nvram/fw_cfg.h"
94
+#include "hw/qdev-properties.h"
95
+#include "hw/misc/pvpanic.h"
96
+#include "qom/object.h"
97
+#include "hw/isa/isa.h"
98
+
99
+OBJECT_DECLARE_SIMPLE_TYPE(PVPanicISAState, PVPANIC_ISA_DEVICE)
100
+
101
+/*
102
+ * PVPanicISAState for ISA device and
103
+ * use ioport.
104
+ */
105
+struct PVPanicISAState {
106
+ ISADevice parent_obj;
107
+
108
+ uint16_t ioport;
109
+ PVPanicState pvpanic;
110
+};
111
+
112
+static void pvpanic_isa_initfn(Object *obj)
113
+{
114
+ PVPanicISAState *s = PVPANIC_ISA_DEVICE(obj);
115
+
116
+ pvpanic_setup_io(&s->pvpanic, DEVICE(s), 1);
117
+}
48
+}
118
+
49
+
119
+static void pvpanic_isa_realizefn(DeviceState *dev, Error **errp)
50
+static void gen_add32_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
120
+{
51
+{
121
+ ISADevice *d = ISA_DEVICE(dev);
52
+ TCGv_i32 t0_32 = tcg_temp_new_i32();
122
+ PVPanicISAState *s = PVPANIC_ISA_DEVICE(dev);
53
+ TCGv_i32 t1_32 = tcg_temp_new_i32();
123
+ PVPanicState *ps = &s->pvpanic;
54
+ TCGv_i32 tmp = tcg_temp_new_i32();
124
+ FWCfgState *fw_cfg = fw_cfg_find();
125
+ uint16_t *pvpanic_port;
126
+
55
+
127
+ if (!fw_cfg) {
56
+ tcg_gen_movi_i32(tmp, 0);
128
+ return;
57
+ tcg_gen_extrl_i64_i32(t0_32, t0);
129
+ }
58
+ tcg_gen_extrl_i64_i32(t1_32, t1);
130
+
59
+ tcg_gen_add2_i32(cpu_NF, cpu_CF, t0_32, tmp, t1_32, tmp);
131
+ pvpanic_port = g_malloc(sizeof(*pvpanic_port));
60
+ tcg_gen_mov_i32(cpu_ZF, cpu_NF);
132
+ *pvpanic_port = cpu_to_le16(s->ioport);
61
+ tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
133
+ fw_cfg_add_file(fw_cfg, "etc/pvpanic-port", pvpanic_port,
62
+ tcg_gen_xor_i32(tmp, t0_32, t1_32);
134
+ sizeof(*pvpanic_port));
63
+ tcg_gen_andc_i32(cpu_VF, cpu_VF, tmp);
135
+
64
+ tcg_gen_extu_i32_i64(dest, cpu_NF);
136
+ isa_register_ioport(d, &ps->mr, s->ioport);
137
+}
65
+}
138
+
66
+
139
+static Property pvpanic_isa_properties[] = {
67
static void gen_add_CC(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
140
+ DEFINE_PROP_UINT16(PVPANIC_IOPORT_PROP, PVPanicISAState, ioport, 0x505),
68
{
141
+ DEFINE_PROP_UINT8("events", PVPanicISAState, pvpanic.events, PVPANIC_PANICKED | PVPANIC_CRASHLOADED),
69
if (sf) {
142
+ DEFINE_PROP_END_OF_LIST(),
70
- TCGv_i64 result, flag, tmp;
143
+};
71
- result = tcg_temp_new_i64();
72
- flag = tcg_temp_new_i64();
73
- tmp = tcg_temp_new_i64();
74
-
75
- tcg_gen_movi_i64(tmp, 0);
76
- tcg_gen_add2_i64(result, flag, t0, tmp, t1, tmp);
77
-
78
- tcg_gen_extrl_i64_i32(cpu_CF, flag);
79
-
80
- gen_set_NZ64(result);
81
-
82
- tcg_gen_xor_i64(flag, result, t0);
83
- tcg_gen_xor_i64(tmp, t0, t1);
84
- tcg_gen_andc_i64(flag, flag, tmp);
85
- tcg_gen_extrh_i64_i32(cpu_VF, flag);
86
-
87
- tcg_gen_mov_i64(dest, result);
88
+ gen_add64_CC(dest, t0, t1);
89
} else {
90
- /* 32 bit arithmetic */
91
- TCGv_i32 t0_32 = tcg_temp_new_i32();
92
- TCGv_i32 t1_32 = tcg_temp_new_i32();
93
- TCGv_i32 tmp = tcg_temp_new_i32();
94
-
95
- tcg_gen_movi_i32(tmp, 0);
96
- tcg_gen_extrl_i64_i32(t0_32, t0);
97
- tcg_gen_extrl_i64_i32(t1_32, t1);
98
- tcg_gen_add2_i32(cpu_NF, cpu_CF, t0_32, tmp, t1_32, tmp);
99
- tcg_gen_mov_i32(cpu_ZF, cpu_NF);
100
- tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
101
- tcg_gen_xor_i32(tmp, t0_32, t1_32);
102
- tcg_gen_andc_i32(cpu_VF, cpu_VF, tmp);
103
- tcg_gen_extu_i32_i64(dest, cpu_NF);
104
+ gen_add32_CC(dest, t0, t1);
105
}
106
}
107
108
/* dest = T0 - T1; compute C, N, V and Z flags */
109
+static void gen_sub64_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
110
+{
111
+ /* 64 bit arithmetic */
112
+ TCGv_i64 result, flag, tmp;
144
+
113
+
145
+static void pvpanic_isa_class_init(ObjectClass *klass, void *data)
114
+ result = tcg_temp_new_i64();
146
+{
115
+ flag = tcg_temp_new_i64();
147
+ DeviceClass *dc = DEVICE_CLASS(klass);
116
+ tcg_gen_sub_i64(result, t0, t1);
148
+
117
+
149
+ dc->realize = pvpanic_isa_realizefn;
118
+ gen_set_NZ64(result);
150
+ device_class_set_props(dc, pvpanic_isa_properties);
119
+
151
+ set_bit(DEVICE_CATEGORY_MISC, dc->categories);
120
+ tcg_gen_setcond_i64(TCG_COND_GEU, flag, t0, t1);
121
+ tcg_gen_extrl_i64_i32(cpu_CF, flag);
122
+
123
+ tcg_gen_xor_i64(flag, result, t0);
124
+ tmp = tcg_temp_new_i64();
125
+ tcg_gen_xor_i64(tmp, t0, t1);
126
+ tcg_gen_and_i64(flag, flag, tmp);
127
+ tcg_gen_extrh_i64_i32(cpu_VF, flag);
128
+ tcg_gen_mov_i64(dest, result);
152
+}
129
+}
153
+
130
+
154
+static TypeInfo pvpanic_isa_info = {
131
+static void gen_sub32_CC(TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
155
+ .name = TYPE_PVPANIC_ISA_DEVICE,
132
+{
156
+ .parent = TYPE_ISA_DEVICE,
133
+ /* 32 bit arithmetic */
157
+ .instance_size = sizeof(PVPanicISAState),
134
+ TCGv_i32 t0_32 = tcg_temp_new_i32();
158
+ .instance_init = pvpanic_isa_initfn,
135
+ TCGv_i32 t1_32 = tcg_temp_new_i32();
159
+ .class_init = pvpanic_isa_class_init,
136
+ TCGv_i32 tmp;
160
+};
161
+
137
+
162
+static void pvpanic_register_types(void)
138
+ tcg_gen_extrl_i64_i32(t0_32, t0);
163
+{
139
+ tcg_gen_extrl_i64_i32(t1_32, t1);
164
+ type_register_static(&pvpanic_isa_info);
140
+ tcg_gen_sub_i32(cpu_NF, t0_32, t1_32);
141
+ tcg_gen_mov_i32(cpu_ZF, cpu_NF);
142
+ tcg_gen_setcond_i32(TCG_COND_GEU, cpu_CF, t0_32, t1_32);
143
+ tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
144
+ tmp = tcg_temp_new_i32();
145
+ tcg_gen_xor_i32(tmp, t0_32, t1_32);
146
+ tcg_gen_and_i32(cpu_VF, cpu_VF, tmp);
147
+ tcg_gen_extu_i32_i64(dest, cpu_NF);
165
+}
148
+}
166
+
149
+
167
+type_init(pvpanic_register_types)
150
static void gen_sub_CC(int sf, TCGv_i64 dest, TCGv_i64 t0, TCGv_i64 t1)
168
diff --git a/hw/misc/pvpanic.c b/hw/misc/pvpanic.c
151
{
169
index XXXXXXX..XXXXXXX 100644
152
if (sf) {
170
--- a/hw/misc/pvpanic.c
153
- /* 64 bit arithmetic */
171
+++ b/hw/misc/pvpanic.c
154
- TCGv_i64 result, flag, tmp;
172
@@ -XXX,XX +XXX,XX @@
173
#include "hw/misc/pvpanic.h"
174
#include "qom/object.h"
175
176
-/* The bit of supported pv event, TODO: include uapi header and remove this */
177
-#define PVPANIC_F_PANICKED 0
178
-#define PVPANIC_F_CRASHLOADED 1
179
-
155
-
180
-/* The pv event value */
156
- result = tcg_temp_new_i64();
181
-#define PVPANIC_PANICKED (1 << PVPANIC_F_PANICKED)
157
- flag = tcg_temp_new_i64();
182
-#define PVPANIC_CRASHLOADED (1 << PVPANIC_F_CRASHLOADED)
158
- tcg_gen_sub_i64(result, t0, t1);
183
-
159
-
184
-typedef struct PVPanicState PVPanicState;
160
- gen_set_NZ64(result);
185
-DECLARE_INSTANCE_CHECKER(PVPanicState, ISA_PVPANIC_DEVICE,
186
- TYPE_PVPANIC)
187
-
161
-
188
static void handle_event(int event)
162
- tcg_gen_setcond_i64(TCG_COND_GEU, flag, t0, t1);
189
{
163
- tcg_gen_extrl_i64_i32(cpu_CF, flag);
190
static bool logged;
164
-
191
@@ -XXX,XX +XXX,XX @@ static void handle_event(int event)
165
- tcg_gen_xor_i64(flag, result, t0);
166
- tmp = tcg_temp_new_i64();
167
- tcg_gen_xor_i64(tmp, t0, t1);
168
- tcg_gen_and_i64(flag, flag, tmp);
169
- tcg_gen_extrh_i64_i32(cpu_VF, flag);
170
- tcg_gen_mov_i64(dest, result);
171
+ gen_sub64_CC(dest, t0, t1);
172
} else {
173
- /* 32 bit arithmetic */
174
- TCGv_i32 t0_32 = tcg_temp_new_i32();
175
- TCGv_i32 t1_32 = tcg_temp_new_i32();
176
- TCGv_i32 tmp;
177
-
178
- tcg_gen_extrl_i64_i32(t0_32, t0);
179
- tcg_gen_extrl_i64_i32(t1_32, t1);
180
- tcg_gen_sub_i32(cpu_NF, t0_32, t1_32);
181
- tcg_gen_mov_i32(cpu_ZF, cpu_NF);
182
- tcg_gen_setcond_i32(TCG_COND_GEU, cpu_CF, t0_32, t1_32);
183
- tcg_gen_xor_i32(cpu_VF, cpu_NF, t0_32);
184
- tmp = tcg_temp_new_i32();
185
- tcg_gen_xor_i32(tmp, t0_32, t1_32);
186
- tcg_gen_and_i32(cpu_VF, cpu_VF, tmp);
187
- tcg_gen_extu_i32_i64(dest, cpu_NF);
188
+ gen_sub32_CC(dest, t0, t1);
192
}
189
}
193
}
190
}
194
191
195
-#include "hw/isa/isa.h"
196
-
197
-struct PVPanicState {
198
- ISADevice parent_obj;
199
-
200
- MemoryRegion io;
201
- uint16_t ioport;
202
- uint8_t events;
203
-};
204
-
205
/* return supported events on read */
206
-static uint64_t pvpanic_ioport_read(void *opaque, hwaddr addr, unsigned size)
207
+static uint64_t pvpanic_read(void *opaque, hwaddr addr, unsigned size)
208
{
209
PVPanicState *pvp = opaque;
210
return pvp->events;
211
}
212
213
-static void pvpanic_ioport_write(void *opaque, hwaddr addr, uint64_t val,
214
+static void pvpanic_write(void *opaque, hwaddr addr, uint64_t val,
215
unsigned size)
216
{
217
handle_event(val);
218
}
219
220
static const MemoryRegionOps pvpanic_ops = {
221
- .read = pvpanic_ioport_read,
222
- .write = pvpanic_ioport_write,
223
+ .read = pvpanic_read,
224
+ .write = pvpanic_write,
225
.impl = {
226
.min_access_size = 1,
227
.max_access_size = 1,
228
},
229
};
230
231
-static void pvpanic_isa_initfn(Object *obj)
232
+void pvpanic_setup_io(PVPanicState *s, DeviceState *dev, unsigned size)
233
{
234
- PVPanicState *s = ISA_PVPANIC_DEVICE(obj);
235
-
236
- memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s, "pvpanic", 1);
237
+ memory_region_init_io(&s->mr, OBJECT(dev), &pvpanic_ops, s, "pvpanic", size);
238
}
239
-
240
-static void pvpanic_isa_realizefn(DeviceState *dev, Error **errp)
241
-{
242
- ISADevice *d = ISA_DEVICE(dev);
243
- PVPanicState *s = ISA_PVPANIC_DEVICE(dev);
244
- FWCfgState *fw_cfg = fw_cfg_find();
245
- uint16_t *pvpanic_port;
246
-
247
- if (!fw_cfg) {
248
- return;
249
- }
250
-
251
- pvpanic_port = g_malloc(sizeof(*pvpanic_port));
252
- *pvpanic_port = cpu_to_le16(s->ioport);
253
- fw_cfg_add_file(fw_cfg, "etc/pvpanic-port", pvpanic_port,
254
- sizeof(*pvpanic_port));
255
-
256
- isa_register_ioport(d, &s->io, s->ioport);
257
-}
258
-
259
-static Property pvpanic_isa_properties[] = {
260
- DEFINE_PROP_UINT16(PVPANIC_IOPORT_PROP, PVPanicState, ioport, 0x505),
261
- DEFINE_PROP_UINT8("events", PVPanicState, events, PVPANIC_PANICKED | PVPANIC_CRASHLOADED),
262
- DEFINE_PROP_END_OF_LIST(),
263
-};
264
-
265
-static void pvpanic_isa_class_init(ObjectClass *klass, void *data)
266
-{
267
- DeviceClass *dc = DEVICE_CLASS(klass);
268
-
269
- dc->realize = pvpanic_isa_realizefn;
270
- device_class_set_props(dc, pvpanic_isa_properties);
271
- set_bit(DEVICE_CATEGORY_MISC, dc->categories);
272
-}
273
-
274
-static TypeInfo pvpanic_isa_info = {
275
- .name = TYPE_PVPANIC,
276
- .parent = TYPE_ISA_DEVICE,
277
- .instance_size = sizeof(PVPanicState),
278
- .instance_init = pvpanic_isa_initfn,
279
- .class_init = pvpanic_isa_class_init,
280
-};
281
-
282
-static void pvpanic_register_types(void)
283
-{
284
- type_register_static(&pvpanic_isa_info);
285
-}
286
-
287
-type_init(pvpanic_register_types)
288
diff --git a/hw/i386/Kconfig b/hw/i386/Kconfig
289
index XXXXXXX..XXXXXXX 100644
290
--- a/hw/i386/Kconfig
291
+++ b/hw/i386/Kconfig
292
@@ -XXX,XX +XXX,XX @@ config PC
293
imply ISA_DEBUG
294
imply PARALLEL
295
imply PCI_DEVICES
296
- imply PVPANIC
297
+ imply PVPANIC_ISA
298
imply QXL
299
imply SEV
300
imply SGA
301
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
302
index XXXXXXX..XXXXXXX 100644
303
--- a/hw/misc/Kconfig
304
+++ b/hw/misc/Kconfig
305
@@ -XXX,XX +XXX,XX @@ config IOTKIT_SYSCTL
306
config IOTKIT_SYSINFO
307
bool
308
309
-config PVPANIC
310
+config PVPANIC_COMMON
311
+ bool
312
+
313
+config PVPANIC_ISA
314
bool
315
depends on ISA_BUS
316
+ select PVPANIC_COMMON
317
318
config AUX
319
bool
320
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
321
index XXXXXXX..XXXXXXX 100644
322
--- a/hw/misc/meson.build
323
+++ b/hw/misc/meson.build
324
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_EMC141X', if_true: files('emc141x.c'))
325
softmmu_ss.add(when: 'CONFIG_UNIMP', if_true: files('unimp.c'))
326
softmmu_ss.add(when: 'CONFIG_EMPTY_SLOT', if_true: files('empty_slot.c'))
327
softmmu_ss.add(when: 'CONFIG_LED', if_true: files('led.c'))
328
+softmmu_ss.add(when: 'CONFIG_PVPANIC_COMMON', if_true: files('pvpanic.c'))
329
330
# ARM devices
331
softmmu_ss.add(when: 'CONFIG_PL310', if_true: files('arm_l2x0.c'))
332
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_IOTKIT_SYSINFO', if_true: files('iotkit-sysinfo.c')
333
softmmu_ss.add(when: 'CONFIG_ARMSSE_CPUID', if_true: files('armsse-cpuid.c'))
334
softmmu_ss.add(when: 'CONFIG_ARMSSE_MHU', if_true: files('armsse-mhu.c'))
335
336
-softmmu_ss.add(when: 'CONFIG_PVPANIC', if_true: files('pvpanic.c'))
337
+softmmu_ss.add(when: 'CONFIG_PVPANIC_ISA', if_true: files('pvpanic-isa.c'))
338
softmmu_ss.add(when: 'CONFIG_AUX', if_true: files('auxbus.c'))
339
softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_scu.c', 'aspeed_sdmc.c', 'aspeed_xdma.c'))
340
softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('msf2-sysreg.c'))
341
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
342
index XXXXXXX..XXXXXXX 100644
343
--- a/tests/qtest/meson.build
344
+++ b/tests/qtest/meson.build
345
@@ -XXX,XX +XXX,XX @@ qtests_i386 = \
346
(config_host.has_key('CONFIG_LINUX') and \
347
config_all_devices.has_key('CONFIG_ISA_IPMI_BT') ? ['ipmi-bt-test'] : []) + \
348
(config_all_devices.has_key('CONFIG_WDT_IB700') ? ['wdt_ib700-test'] : []) + \
349
- (config_all_devices.has_key('CONFIG_PVPANIC') ? ['pvpanic-test'] : []) + \
350
+ (config_all_devices.has_key('CONFIG_PVPANIC_ISA') ? ['pvpanic-test'] : []) + \
351
(config_all_devices.has_key('CONFIG_HDA') ? ['intel-hda-test'] : []) + \
352
(config_all_devices.has_key('CONFIG_I82801B11') ? ['i82801b11-test'] : []) + \
353
(config_all_devices.has_key('CONFIG_IOH3420') ? ['ioh3420-test'] : []) + \
354
--
192
--
355
2.20.1
193
2.34.1
356
357
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
SVE predicate operations cannot use the "usual" simd_desc
3
Convert the ADD and SUB (immediate) instructions.
4
encoding, because the lengths are not a multiple of 8.
5
But we were abusing the SIMD_* fields to store values anyway.
6
This abuse broke when SIMD_OPRSZ_BITS was modified in e2e7168a214.
7
4
8
Introduce a new set of field definitions for exclusive use
9
of predicates, so that it is obvious what kind of predicate
10
we are manipulating. To be used in future patches.
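
The intended use is the usual FIELD_DP32/FIELD_EX32 pairing; for example,
on the translate side (mirroring the follow-up do_perm_pred3 patch):

    uint32_t desc = 0;
    desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz);
    desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
    desc = FIELD_DP32(desc, PREDDESC, DATA, high_odd);

and in the helper:

    intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);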
11
12
Cc: qemu-stable@nongnu.org
13
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
Message-id: 20210113062650.593824-2-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-7-peter.maydell@linaro.org
9
[PMM: Rebased; adjusted to use translate.h's TRANS macro]
15
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
16
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
---
12
---
18
target/arm/internals.h | 9 +++++++++
13
target/arm/tcg/translate.h | 5 +++
19
1 file changed, 9 insertions(+)
14
target/arm/tcg/a64.decode | 17 ++++++++
15
target/arm/tcg/translate-a64.c | 73 ++++++++++------------------------
16
3 files changed, 42 insertions(+), 53 deletions(-)
20
17
21
diff --git a/target/arm/internals.h b/target/arm/internals.h
18
diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h
22
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/internals.h
20
--- a/target/arm/tcg/translate.h
24
+++ b/target/arm/internals.h
21
+++ b/target/arm/tcg/translate.h
25
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(int idx);
22
@@ -XXX,XX +XXX,XX @@ static inline int rsub_8(DisasContext *s, int x)
26
#define LOG2_TAG_GRANULE 4
23
return 8 - x;
27
#define TAG_GRANULE (1 << LOG2_TAG_GRANULE)
24
}
28
25
29
+/*
26
+static inline int shl_12(DisasContext *s, int x)
30
+ * SVE predicates are 1/8 the size of SVE vectors, and cannot use
27
+{
31
+ * the same simd_desc() encoding due to restrictions on size.
28
+ return x << 12;
32
+ * Use these instead.
29
+}
33
+ */
30
+
34
+FIELD(PREDDESC, OPRSZ, 0, 6)
31
static inline int neon_3same_fp_size(DisasContext *s, int x)
35
+FIELD(PREDDESC, ESZ, 6, 2)
32
{
36
+FIELD(PREDDESC, DATA, 8, 24)
33
/* Convert 0==fp32, 1==fp16 into a MO_* value */
34
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
35
index XXXXXXX..XXXXXXX 100644
36
--- a/target/arm/tcg/a64.decode
37
+++ b/target/arm/tcg/a64.decode
38
@@ -XXX,XX +XXX,XX @@
39
#
40
41
&ri rd imm
42
+&rri_sf rd rn imm sf
43
44
45
### Data Processing - Immediate
46
@@ -XXX,XX +XXX,XX @@
47
48
ADR 0 .. 10000 ................... ..... @pcrel
49
ADRP 1 .. 10000 ................... ..... @pcrel
50
+
51
+# Add/subtract (immediate)
52
+
53
+%imm12_sh12 10:12 !function=shl_12
54
+@addsub_imm sf:1 .. ...... . imm:12 rn:5 rd:5
55
+@addsub_imm12 sf:1 .. ...... . ............ rn:5 rd:5 imm=%imm12_sh12
56
+
57
+ADD_i . 00 100010 0 ............ ..... ..... @addsub_imm
58
+ADD_i . 00 100010 1 ............ ..... ..... @addsub_imm12
59
+ADDS_i . 01 100010 0 ............ ..... ..... @addsub_imm
60
+ADDS_i . 01 100010 1 ............ ..... ..... @addsub_imm12
61
+
62
+SUB_i . 10 100010 0 ............ ..... ..... @addsub_imm
63
+SUB_i . 10 100010 1 ............ ..... ..... @addsub_imm12
64
+SUBS_i . 11 100010 0 ............ ..... ..... @addsub_imm
65
+SUBS_i . 11 100010 1 ............ ..... ..... @addsub_imm12
66
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
67
index XXXXXXX..XXXXXXX 100644
68
--- a/target/arm/tcg/translate-a64.c
69
+++ b/target/arm/tcg/translate-a64.c
70
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
71
}
72
}
73
74
+typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);
75
+
76
+static bool gen_rri(DisasContext *s, arg_rri_sf *a,
77
+ bool rd_sp, bool rn_sp, ArithTwoOp *fn)
78
+{
79
+ TCGv_i64 tcg_rn = rn_sp ? cpu_reg_sp(s, a->rn) : cpu_reg(s, a->rn);
80
+ TCGv_i64 tcg_rd = rd_sp ? cpu_reg_sp(s, a->rd) : cpu_reg(s, a->rd);
81
+ TCGv_i64 tcg_imm = tcg_constant_i64(a->imm);
82
+
83
+ fn(tcg_rd, tcg_rn, tcg_imm);
84
+ if (!a->sf) {
85
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
86
+ }
87
+ return true;
88
+}
37
+
89
+
38
/*
90
/*
39
* The SVE simd_data field, for memory ops, contains either
91
* PC-rel. addressing
40
* rd (5 bits) or a shift count (2 bits).
92
*/
93
@@ -XXX,XX +XXX,XX @@ static bool trans_ADRP(DisasContext *s, arg_ri *a)
94
95
/*
96
* Add/subtract (immediate)
97
- *
98
- * 31 30 29 28 23 22 21 10 9 5 4 0
99
- * +--+--+--+-------------+--+-------------+-----+-----+
100
- * |sf|op| S| 1 0 0 0 1 0 |sh| imm12 | Rn | Rd |
101
- * +--+--+--+-------------+--+-------------+-----+-----+
102
- *
103
- * sf: 0 -> 32bit, 1 -> 64bit
104
- * op: 0 -> add , 1 -> sub
105
- * S: 1 -> set flags
106
- * sh: 1 -> LSL imm by 12
107
*/
108
-static void disas_add_sub_imm(DisasContext *s, uint32_t insn)
109
-{
110
- int rd = extract32(insn, 0, 5);
111
- int rn = extract32(insn, 5, 5);
112
- uint64_t imm = extract32(insn, 10, 12);
113
- bool shift = extract32(insn, 22, 1);
114
- bool setflags = extract32(insn, 29, 1);
115
- bool sub_op = extract32(insn, 30, 1);
116
- bool is_64bit = extract32(insn, 31, 1);
117
-
118
- TCGv_i64 tcg_rn = cpu_reg_sp(s, rn);
119
- TCGv_i64 tcg_rd = setflags ? cpu_reg(s, rd) : cpu_reg_sp(s, rd);
120
- TCGv_i64 tcg_result;
121
-
122
- if (shift) {
123
- imm <<= 12;
124
- }
125
-
126
- tcg_result = tcg_temp_new_i64();
127
- if (!setflags) {
128
- if (sub_op) {
129
- tcg_gen_subi_i64(tcg_result, tcg_rn, imm);
130
- } else {
131
- tcg_gen_addi_i64(tcg_result, tcg_rn, imm);
132
- }
133
- } else {
134
- TCGv_i64 tcg_imm = tcg_constant_i64(imm);
135
- if (sub_op) {
136
- gen_sub_CC(is_64bit, tcg_result, tcg_rn, tcg_imm);
137
- } else {
138
- gen_add_CC(is_64bit, tcg_result, tcg_rn, tcg_imm);
139
- }
140
- }
141
-
142
- if (is_64bit) {
143
- tcg_gen_mov_i64(tcg_rd, tcg_result);
144
- } else {
145
- tcg_gen_ext32u_i64(tcg_rd, tcg_result);
146
- }
147
-}
148
+TRANS(ADD_i, gen_rri, a, 1, 1, tcg_gen_add_i64)
149
+TRANS(SUB_i, gen_rri, a, 1, 1, tcg_gen_sub_i64)
150
+TRANS(ADDS_i, gen_rri, a, 0, 1, a->sf ? gen_add64_CC : gen_add32_CC)
151
+TRANS(SUBS_i, gen_rri, a, 0, 1, a->sf ? gen_sub64_CC : gen_sub32_CC)
152
153
/*
154
* Add/subtract (immediate, with tags)
155
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
156
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
157
{
158
switch (extract32(insn, 23, 6)) {
159
- case 0x22: /* Add/subtract (immediate) */
160
- disas_add_sub_imm(s, insn);
161
- break;
162
case 0x23: /* Add/subtract (immediate, with tags) */
163
disas_add_sub_imm_with_tags(s, insn);
164
break;
41
--
165
--
42
2.20.1
166
2.34.1
43
44
1
From: Gan Qixin <ganqixin@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The adc_qom_set function didn't free "response", which caused an indirect
3
Convert the ADDG and SUBG (immediate) instructions.
4
memory leak. So use qobject_unref() to fix it.
5
4
6
ASAN shows memory leak stack:
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Indirect leak of 593280 byte(s) in 144 object(s) allocated from:
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
#0 0x7f9a5e7e8d4e in __interceptor_calloc (/lib64/libasan.so.5+0x112d4e)
8
Message-id: 20230512144106.3608981-8-peter.maydell@linaro.org
10
#1 0x7f9a5e607a50 in g_malloc0 (/lib64/libglib-2.0.so.0+0x55a50)
9
[PMM: Rebased; use TRANS_FEAT()]
11
#2 0x55b1bebf636b in qdict_new ../qobject/qdict.c:30
12
#3 0x55b1bec09699 in parse_object ../qobject/json-parser.c:318
13
#4 0x55b1bec0b2df in parse_value ../qobject/json-parser.c:546
14
#5 0x55b1bec0b6a9 in json_parser_parse ../qobject/json-parser.c:580
15
#6 0x55b1bec060d1 in json_message_process_token ../qobject/json-streamer.c:92
16
#7 0x55b1bec16a12 in json_lexer_feed_char ../qobject/json-lexer.c:313
17
#8 0x55b1bec16fbd in json_lexer_feed ../qobject/json-lexer.c:350
18
#9 0x55b1bec06453 in json_message_parser_feed ../qobject/json-streamer.c:121
19
#10 0x55b1bebc2d51 in qmp_fd_receive ../tests/qtest/libqtest.c:614
20
#11 0x55b1bebc2f5e in qtest_qmp_receive_dict ../tests/qtest/libqtest.c:636
21
#12 0x55b1bebc2e6c in qtest_qmp_receive ../tests/qtest/libqtest.c:624
22
#13 0x55b1bebc3340 in qtest_vqmp ../tests/qtest/libqtest.c:715
23
#14 0x55b1bebc3942 in qtest_qmp ../tests/qtest/libqtest.c:756
24
#15 0x55b1bebbd64a in adc_qom_set ../tests/qtest/npcm7xx_adc-test.c:127
25
#16 0x55b1bebbd793 in adc_write_input ../tests/qtest/npcm7xx_adc-test.c:140
26
#17 0x55b1bebbdf92 in test_convert_external ../tests/qtest/npcm7xx_adc-test.c:246
27
28
Reported-by: Euler Robot <euler.robot@huawei.com>
29
Signed-off-by: Gan Qixin <ganqixin@huawei.com>
30
Reviewed-by: Hao Wu <wuhaotsh@google.com>
31
Message-id: 20210118065627.79903-1-ganqixin@huawei.com
32
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
33
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
34
---
12
---
35
tests/qtest/npcm7xx_adc-test.c | 1 +
13
target/arm/tcg/a64.decode | 8 +++++++
36
1 file changed, 1 insertion(+)
14
target/arm/tcg/translate-a64.c | 38 ++++++++++------------------------
15
2 files changed, 19 insertions(+), 27 deletions(-)
37
16
38
diff --git a/tests/qtest/npcm7xx_adc-test.c b/tests/qtest/npcm7xx_adc-test.c
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
39
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
40
--- a/tests/qtest/npcm7xx_adc-test.c
19
--- a/target/arm/tcg/a64.decode
41
+++ b/tests/qtest/npcm7xx_adc-test.c
20
+++ b/target/arm/tcg/a64.decode
42
@@ -XXX,XX +XXX,XX @@ static void adc_qom_set(QTestState *qts, const ADC *adc,
21
@@ -XXX,XX +XXX,XX @@ SUB_i . 10 100010 0 ............ ..... ..... @addsub_imm
43
path, name, value);
22
SUB_i . 10 100010 1 ............ ..... ..... @addsub_imm12
44
/* The qom set message returns successfully. */
23
SUBS_i . 11 100010 0 ............ ..... ..... @addsub_imm
45
g_assert_true(qdict_haskey(response, "return"));
24
SUBS_i . 11 100010 1 ............ ..... ..... @addsub_imm12
46
+ qobject_unref(response);
25
+
26
+# Add/subtract (immediate with tags)
27
+
28
+&rri_tag rd rn uimm6 uimm4
29
+@addsub_imm_tag . .. ...... . uimm6:6 .. uimm4:4 rn:5 rd:5 &rri_tag
30
+
31
+ADDG_i 1 00 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
32
+SUBG_i 1 10 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
33
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ TRANS(SUBS_i, gen_rri, a, 0, 1, a->sf ? gen_sub64_CC : gen_sub32_CC)
38
39
/*
40
* Add/subtract (immediate, with tags)
41
- *
42
- * 31 30 29 28 23 22 21 16 14 10 9 5 4 0
43
- * +--+--+--+-------------+--+---------+--+-------+-----+-----+
44
- * |sf|op| S| 1 0 0 0 1 1 |o2| uimm6 |o3| uimm4 | Rn | Rd |
45
- * +--+--+--+-------------+--+---------+--+-------+-----+-----+
46
- *
47
- * op: 0 -> add, 1 -> sub
48
*/
49
-static void disas_add_sub_imm_with_tags(DisasContext *s, uint32_t insn)
50
+
51
+static bool gen_add_sub_imm_with_tags(DisasContext *s, arg_rri_tag *a,
52
+ bool sub_op)
53
{
54
- int rd = extract32(insn, 0, 5);
55
- int rn = extract32(insn, 5, 5);
56
- int uimm4 = extract32(insn, 10, 4);
57
- int uimm6 = extract32(insn, 16, 6);
58
- bool sub_op = extract32(insn, 30, 1);
59
TCGv_i64 tcg_rn, tcg_rd;
60
int imm;
61
62
- /* Test all of sf=1, S=0, o2=0, o3=0. */
63
- if ((insn & 0xa040c000u) != 0x80000000u ||
64
- !dc_isar_feature(aa64_mte_insn_reg, s)) {
65
- unallocated_encoding(s);
66
- return;
67
- }
68
-
69
- imm = uimm6 << LOG2_TAG_GRANULE;
70
+ imm = a->uimm6 << LOG2_TAG_GRANULE;
71
if (sub_op) {
72
imm = -imm;
73
}
74
75
- tcg_rn = cpu_reg_sp(s, rn);
76
- tcg_rd = cpu_reg_sp(s, rd);
77
+ tcg_rn = cpu_reg_sp(s, a->rn);
78
+ tcg_rd = cpu_reg_sp(s, a->rd);
79
80
if (s->ata) {
81
gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn,
82
tcg_constant_i32(imm),
83
- tcg_constant_i32(uimm4));
84
+ tcg_constant_i32(a->uimm4));
85
} else {
86
tcg_gen_addi_i64(tcg_rd, tcg_rn, imm);
87
gen_address_with_allocation_tag0(tcg_rd, tcg_rd);
88
}
89
+ return true;
47
}
90
}
48
91
49
static void adc_write_input(QTestState *qts, const ADC *adc,
92
+TRANS_FEAT(ADDG_i, aa64_mte_insn_reg, gen_add_sub_imm_with_tags, a, false)
93
+TRANS_FEAT(SUBG_i, aa64_mte_insn_reg, gen_add_sub_imm_with_tags, a, true)
94
+
95
/* The input should be a value in the bottom e bits (with higher
96
* bits zero); returns that value replicated into every element
97
* of size e in a 64 bit integer.
98
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
99
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
100
{
101
switch (extract32(insn, 23, 6)) {
102
- case 0x23: /* Add/subtract (immediate, with tags) */
103
- disas_add_sub_imm_with_tags(s, insn);
104
- break;
105
case 0x24: /* Logical (immediate) */
106
disas_logic_imm(s, insn);
107
break;
50
--
108
--
51
2.20.1
109
2.34.1
52
53
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Update all users of do_perm_pred3 for the new
3
Use the bitops.h macro rather than rolling our own here.
4
predicate descriptor field definitions.
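
On the bitops.h change: the two spellings are equivalent for 0 < len <= 64,
since

    bitmask64(len)          == ~0ULL >> (64 - len)
    MAKE_64BIT_MASK(0, len) == (~0ULL >> (64 - len)) << 0

so the local helper can simply be dropped.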
5
4
6
Cc: qemu-stable@nongnu.org
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210113062650.593824-4-richard.henderson@linaro.org
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-9-peter.maydell@linaro.org
11
---
9
---
12
target/arm/sve_helper.c | 18 +++++++++---------
10
target/arm/tcg/translate-a64.c | 11 ++---------
13
target/arm/translate-sve.c | 12 ++++--------
11
1 file changed, 2 insertions(+), 9 deletions(-)
14
2 files changed, 13 insertions(+), 17 deletions(-)
15
12
16
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
13
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/sve_helper.c
15
--- a/target/arm/tcg/translate-a64.c
19
+++ b/target/arm/sve_helper.c
16
+++ b/target/arm/tcg/translate-a64.c
20
@@ -XXX,XX +XXX,XX @@ static uint64_t compress_bits(uint64_t x, int n)
17
@@ -XXX,XX +XXX,XX @@ static uint64_t bitfield_replicate(uint64_t mask, unsigned int e)
21
18
return mask;
22
void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
19
}
23
{
20
24
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
21
-/* Return a value with the bottom len bits set (where 0 < len <= 64) */
25
- int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
22
-static inline uint64_t bitmask64(unsigned int length)
26
- intptr_t high = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
23
-{
27
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
24
- assert(length > 0 && length <= 64);
28
+ int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
25
- return ~0ULL >> (64 - length);
29
+ intptr_t high = FIELD_EX32(pred_desc, PREDDESC, DATA);
26
-}
30
uint64_t *d = vd;
27
-
31
intptr_t i;
28
/* Simplified variant of pseudocode DecodeBitMasks() for the case where we
32
29
* only require the wmask. Returns false if the imms/immr/immn are a reserved
33
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_zip_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
30
* value (ie should cause a guest UNDEF exception), and true if they are
34
31
@@ -XXX,XX +XXX,XX @@ bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
35
void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
32
/* Create the value of one element: s+1 set bits rotated
36
{
33
* by r within the element (which is e bits wide)...
37
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
34
*/
38
- int esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
35
- mask = bitmask64(s + 1);
39
- int odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1) << esz;
36
+ mask = MAKE_64BIT_MASK(0, s + 1);
40
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
37
if (r) {
41
+ int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
38
mask = (mask >> r) | (mask << (e - r));
42
+ int odd = FIELD_EX32(pred_desc, PREDDESC, DATA) << esz;
39
- mask &= bitmask64(e);
43
uint64_t *d = vd, *n = vn, *m = vm;
40
+ mask &= MAKE_64BIT_MASK(0, e);
44
uint64_t l, h;
41
}
45
intptr_t i;
42
/* ...then replicate the element over the whole 64 bit value */
46
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_uzp_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
43
mask = bitfield_replicate(mask, e);
47
48
void HELPER(sve_trn_p)(void *vd, void *vn, void *vm, uint32_t pred_desc)
49
{
50
- intptr_t oprsz = extract32(pred_desc, 0, SIMD_OPRSZ_BITS) + 2;
51
- uintptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
52
- bool odd = extract32(pred_desc, SIMD_DATA_SHIFT + 2, 1);
53
+ intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
54
+ int esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
55
+ int odd = FIELD_EX32(pred_desc, PREDDESC, DATA);
56
uint64_t *d = vd, *n = vn, *m = vm;
57
uint64_t mask;
58
int shr, shl;
59
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
60
index XXXXXXX..XXXXXXX 100644
61
--- a/target/arm/translate-sve.c
62
+++ b/target/arm/translate-sve.c
63
@@ -XXX,XX +XXX,XX @@ static bool do_perm_pred3(DisasContext *s, arg_rrr_esz *a, bool high_odd,
64
65
unsigned vsz = pred_full_reg_size(s);
66
67
- /* Predicate sizes may be smaller and cannot use simd_desc.
68
- We cannot round up, as we do elsewhere, because we need
69
- the exact size for ZIP2 and REV. We retain the style for
70
- the other helpers for consistency. */
71
TCGv_ptr t_d = tcg_temp_new_ptr();
72
TCGv_ptr t_n = tcg_temp_new_ptr();
73
TCGv_ptr t_m = tcg_temp_new_ptr();
74
TCGv_i32 t_desc;
75
- int desc;
76
+ uint32_t desc = 0;
77
78
- desc = vsz - 2;
79
- desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
80
- desc = deposit32(desc, SIMD_DATA_SHIFT + 2, 2, high_odd);
81
+ desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz);
82
+ desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
83
+ desc = FIELD_DP32(desc, PREDDESC, DATA, high_odd);
84
85
tcg_gen_addi_ptr(t_d, cpu_env, pred_full_reg_offset(s, a->rd));
86
tcg_gen_addi_ptr(t_n, cpu_env, pred_full_reg_offset(s, a->rn));
87
--
44
--
88
2.20.1
45
2.34.1
89
90
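
The conversion above replaces open-coded extract32()/deposit32() arithmetic on the
predicate descriptor with named FIELD_EX32()/FIELD_DP32() accessors. As a rough
standalone sketch of that deposit/extract pattern (the field positions and the
field_dp32()/field_ex32()/main() helpers below are invented for illustration and
are not QEMU's real PREDDESC layout):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented layout for this sketch only: OPRSZ in bits [5:0],
     * ESZ in bits [7:6], DATA in bits [15:8]. */
    static uint32_t field_dp32(uint32_t x, unsigned shift, unsigned len, uint32_t val)
    {
        uint32_t mask = ((1u << len) - 1) << shift;
        return (x & ~mask) | ((val << shift) & mask);
    }

    static uint32_t field_ex32(uint32_t x, unsigned shift, unsigned len)
    {
        return (x >> shift) & ((1u << len) - 1);
    }

    int main(void)
    {
        uint32_t desc = 0;
        desc = field_dp32(desc, 0, 6, 32);  /* OPRSZ: predicate size in bytes */
        desc = field_dp32(desc, 6, 2, 3);   /* ESZ: log2 of the element size  */
        desc = field_dp32(desc, 8, 8, 1);   /* DATA: the high_odd flag        */
        printf("oprsz=%" PRIu32 " esz=%" PRIu32 " data=%" PRIu32 "\n",
               field_ex32(desc, 0, 6), field_ex32(desc, 6, 2),
               field_ex32(desc, 8, 8));
        return 0;
    }

The point of the named-field form in the patch is simply that each shift/length
pair lives in one FIELD() definition instead of being repeated at every call site.
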
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Do not assume that EL2 is available in and only in non-secure context.
3
Convert the AND, ORR, EOR, ANDS (immediate) instructions.
4
That equivalence is broken by ARMv8.4-SEL2.
5
4
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20210112104511.36576-3-remi.denis.courmont@huawei.com
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-10-peter.maydell@linaro.org
9
[PMM: rebased]
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
target/arm/cpu.h | 4 ++--
12
target/arm/tcg/a64.decode | 15 ++++++
12
target/arm/helper-a64.c | 8 +-------
13
target/arm/tcg/translate-a64.c | 94 +++++++++++-----------------------
13
target/arm/helper.c | 33 +++++++++++++--------------------
14
2 files changed, 44 insertions(+), 65 deletions(-)
14
3 files changed, 16 insertions(+), 29 deletions(-)
15
15
16
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/cpu.h
18
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/cpu.h
19
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ static inline bool arm_el_is_aa64(CPUARMState *env, int el)
20
@@ -XXX,XX +XXX,XX @@ SUBS_i . 11 100010 1 ............ ..... ..... @addsub_imm12
21
return aa64;
21
22
ADDG_i 1 00 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
23
SUBG_i 1 10 100011 0 ...... 00 .... ..... ..... @addsub_imm_tag
24
+
25
+# Logical (immediate)
26
+
27
+&rri_log rd rn sf dbm
28
+@logic_imm_64 1 .. ...... dbm:13 rn:5 rd:5 &rri_log sf=1
29
+@logic_imm_32 0 .. ...... 0 dbm:12 rn:5 rd:5 &rri_log sf=0
30
+
31
+AND_i . 00 100100 . ...... ...... ..... ..... @logic_imm_64
32
+AND_i . 00 100100 . ...... ...... ..... ..... @logic_imm_32
33
+ORR_i . 01 100100 . ...... ...... ..... ..... @logic_imm_64
34
+ORR_i . 01 100100 . ...... ...... ..... ..... @logic_imm_32
35
+EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_64
36
+EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_32
37
+ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_64
38
+ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_32
39
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
40
index XXXXXXX..XXXXXXX 100644
41
--- a/target/arm/tcg/translate-a64.c
42
+++ b/target/arm/tcg/translate-a64.c
43
@@ -XXX,XX +XXX,XX @@ static uint64_t bitfield_replicate(uint64_t mask, unsigned int e)
44
return mask;
45
}
46
47
-/* Simplified variant of pseudocode DecodeBitMasks() for the case where we
48
+/*
49
+ * Logical (immediate)
50
+ */
51
+
52
+/*
53
+ * Simplified variant of pseudocode DecodeBitMasks() for the case where we
54
* only require the wmask. Returns false if the imms/immr/immn are a reserved
55
* value (ie should cause a guest UNDEF exception), and true if they are
56
* valid, in which case the decoded bit pattern is written to result.
57
@@ -XXX,XX +XXX,XX @@ bool logic_imm_decode_wmask(uint64_t *result, unsigned int immn,
58
return true;
59
}
60
61
-/* Logical (immediate)
62
- * 31 30 29 28 23 22 21 16 15 10 9 5 4 0
63
- * +----+-----+-------------+---+------+------+------+------+
64
- * | sf | opc | 1 0 0 1 0 0 | N | immr | imms | Rn | Rd |
65
- * +----+-----+-------------+---+------+------+------+------+
66
- */
67
-static void disas_logic_imm(DisasContext *s, uint32_t insn)
68
+static bool gen_rri_log(DisasContext *s, arg_rri_log *a, bool set_cc,
69
+ void (*fn)(TCGv_i64, TCGv_i64, int64_t))
70
{
71
- unsigned int sf, opc, is_n, immr, imms, rn, rd;
72
TCGv_i64 tcg_rd, tcg_rn;
73
- uint64_t wmask;
74
- bool is_and = false;
75
+ uint64_t imm;
76
77
- sf = extract32(insn, 31, 1);
78
- opc = extract32(insn, 29, 2);
79
- is_n = extract32(insn, 22, 1);
80
- immr = extract32(insn, 16, 6);
81
- imms = extract32(insn, 10, 6);
82
- rn = extract32(insn, 5, 5);
83
- rd = extract32(insn, 0, 5);
84
-
85
- if (!sf && is_n) {
86
- unallocated_encoding(s);
87
- return;
88
+ /* Some immediate field values are reserved. */
89
+ if (!logic_imm_decode_wmask(&imm, extract32(a->dbm, 12, 1),
90
+ extract32(a->dbm, 0, 6),
91
+ extract32(a->dbm, 6, 6))) {
92
+ return false;
93
+ }
94
+ if (!a->sf) {
95
+ imm &= 0xffffffffull;
22
}
96
}
23
97
24
- if (arm_feature(env, ARM_FEATURE_EL2) && !arm_is_secure_below_el3(env)) {
98
- if (opc == 0x3) { /* ANDS */
25
+ if (arm_is_el2_enabled(env)) {
99
- tcg_rd = cpu_reg(s, rd);
26
aa64 = aa64 && (env->cp15.hcr_el2 & HCR_RW);
100
- } else {
101
- tcg_rd = cpu_reg_sp(s, rd);
102
- }
103
- tcg_rn = cpu_reg(s, rn);
104
+ tcg_rd = set_cc ? cpu_reg(s, a->rd) : cpu_reg_sp(s, a->rd);
105
+ tcg_rn = cpu_reg(s, a->rn);
106
107
- if (!logic_imm_decode_wmask(&wmask, is_n, imms, immr)) {
108
- /* some immediate field values are reserved */
109
- unallocated_encoding(s);
110
- return;
111
+ fn(tcg_rd, tcg_rn, imm);
112
+ if (set_cc) {
113
+ gen_logic_CC(a->sf, tcg_rd);
27
}
114
}
28
115
-
29
@@ -XXX,XX +XXX,XX @@ static inline int arm_debug_target_el(CPUARMState *env)
116
- if (!sf) {
30
bool secure = arm_is_secure(env);
117
- wmask &= 0xffffffff;
31
bool route_to_el2 = false;
32
33
- if (arm_feature(env, ARM_FEATURE_EL2) && !secure) {
34
+ if (arm_is_el2_enabled(env)) {
35
route_to_el2 = env->cp15.hcr_el2 & HCR_TGE ||
36
env->cp15.mdcr_el2 & MDCR_TDE;
37
}
38
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/helper-a64.c
41
+++ b/target/arm/helper-a64.c
42
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
43
if (new_el == -1) {
44
goto illegal_return;
45
}
46
- if (new_el > cur_el
47
- || (new_el == 2 && !arm_feature(env, ARM_FEATURE_EL2))) {
48
+ if (new_el > cur_el || (new_el == 2 && !arm_is_el2_enabled(env))) {
49
/* Disallow return to an EL which is unimplemented or higher
50
* than the current one.
51
*/
52
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
53
goto illegal_return;
54
}
55
56
- if (new_el == 2 && arm_is_secure_below_el3(env)) {
57
- /* Return to the non-existent secure-EL2 */
58
- goto illegal_return;
59
- }
118
- }
60
-
119
-
61
if (new_el == 1 && (arm_hcr_el2_eff(env) & HCR_TGE)) {
120
- switch (opc) {
62
goto illegal_return;
121
- case 0x3: /* ANDS */
122
- case 0x0: /* AND */
123
- tcg_gen_andi_i64(tcg_rd, tcg_rn, wmask);
124
- is_and = true;
125
- break;
126
- case 0x1: /* ORR */
127
- tcg_gen_ori_i64(tcg_rd, tcg_rn, wmask);
128
- break;
129
- case 0x2: /* EOR */
130
- tcg_gen_xori_i64(tcg_rd, tcg_rn, wmask);
131
- break;
132
- default:
133
- assert(FALSE); /* must handle all above */
134
- break;
135
- }
136
-
137
- if (!sf && !is_and) {
138
- /* zero extend final result; we know we can skip this for AND
139
- * since the immediate had the high 32 bits clear.
140
- */
141
+ if (!a->sf) {
142
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
63
}
143
}
64
diff --git a/target/arm/helper.c b/target/arm/helper.c
144
-
65
index XXXXXXX..XXXXXXX 100644
145
- if (opc == 3) { /* ANDS */
66
--- a/target/arm/helper.c
146
- gen_logic_CC(sf, tcg_rd);
67
+++ b/target/arm/helper.c
147
- }
68
@@ -XXX,XX +XXX,XX @@ static CPAccessResult cpacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
148
+ return true;
149
}
150
151
+TRANS(AND_i, gen_rri_log, a, false, tcg_gen_andi_i64)
152
+TRANS(ORR_i, gen_rri_log, a, false, tcg_gen_ori_i64)
153
+TRANS(EOR_i, gen_rri_log, a, false, tcg_gen_xori_i64)
154
+TRANS(ANDS_i, gen_rri_log, a, true, tcg_gen_andi_i64)
155
+
156
/*
157
* Move wide (immediate)
158
*
159
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
160
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
69
{
161
{
70
if (arm_feature(env, ARM_FEATURE_V8)) {
162
switch (extract32(insn, 23, 6)) {
71
/* Check if CPACR accesses are to be trapped to EL2 */
163
- case 0x24: /* Logical (immediate) */
72
- if (arm_current_el(env) == 1 &&
164
- disas_logic_imm(s, insn);
73
- (env->cp15.cptr_el[2] & CPTR_TCPAC) && !arm_is_secure(env)) {
165
- break;
74
+ if (arm_current_el(env) == 1 && arm_is_el2_enabled(env) &&
166
case 0x25: /* Move wide (immediate) */
75
+ (env->cp15.cptr_el[2] & CPTR_TCPAC)) {
167
disas_movw_imm(s, insn);
76
return CP_ACCESS_TRAP_EL2;
168
break;
77
/* Check if CPACR accesses are to be trapped to EL3 */
78
} else if (arm_current_el(env) < 3 &&
79
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
80
bool isread)
81
{
82
unsigned int cur_el = arm_current_el(env);
83
- bool secure = arm_is_secure(env);
84
+ bool has_el2 = arm_is_el2_enabled(env);
85
uint64_t hcr = arm_hcr_el2_eff(env);
86
87
switch (cur_el) {
88
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
89
}
90
} else {
91
/* If HCR_EL2.<E2H> == 0: check CNTHCTL_EL2.EL1PCEN. */
92
- if (arm_feature(env, ARM_FEATURE_EL2) &&
93
- timeridx == GTIMER_PHYS && !secure &&
94
+ if (has_el2 && timeridx == GTIMER_PHYS &&
95
!extract32(env->cp15.cnthctl_el2, 1, 1)) {
96
return CP_ACCESS_TRAP_EL2;
97
}
98
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_counter_access(CPUARMState *env, int timeridx,
99
100
case 1:
101
/* Check CNTHCTL_EL2.EL1PCTEN, which changes location based on E2H. */
102
- if (arm_feature(env, ARM_FEATURE_EL2) &&
103
- timeridx == GTIMER_PHYS && !secure &&
104
+ if (has_el2 && timeridx == GTIMER_PHYS &&
105
(hcr & HCR_E2H
106
? !extract32(env->cp15.cnthctl_el2, 10, 1)
107
: !extract32(env->cp15.cnthctl_el2, 0, 1))) {
108
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx,
109
bool isread)
110
{
111
unsigned int cur_el = arm_current_el(env);
112
- bool secure = arm_is_secure(env);
113
+ bool has_el2 = arm_is_el2_enabled(env);
114
uint64_t hcr = arm_hcr_el2_eff(env);
115
116
switch (cur_el) {
117
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gt_timer_access(CPUARMState *env, int timeridx,
118
/* fall through */
119
120
case 1:
121
- if (arm_feature(env, ARM_FEATURE_EL2) &&
122
- timeridx == GTIMER_PHYS && !secure) {
123
+ if (has_el2 && timeridx == GTIMER_PHYS) {
124
if (hcr & HCR_E2H) {
125
/* If HCR_EL2.<E2H,TGE> == '10': check CNTHCTL_EL2.EL1PTEN. */
126
if (!extract32(env->cp15.cnthctl_el2, 11, 1)) {
127
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo strongarm_cp_reginfo[] = {
128
129
static uint64_t midr_read(CPUARMState *env, const ARMCPRegInfo *ri)
130
{
131
- ARMCPU *cpu = env_archcpu(env);
132
unsigned int cur_el = arm_current_el(env);
133
- bool secure = arm_is_secure(env);
134
135
- if (arm_feature(&cpu->env, ARM_FEATURE_EL2) && !secure && cur_el == 1) {
136
+ if (arm_is_el2_enabled(env) && cur_el == 1) {
137
return env->cp15.vpidr_el2;
138
}
139
return raw_read(env, ri);
140
@@ -XXX,XX +XXX,XX @@ static uint64_t mpidr_read_val(CPUARMState *env)
141
static uint64_t mpidr_read(CPUARMState *env, const ARMCPRegInfo *ri)
142
{
143
unsigned int cur_el = arm_current_el(env);
144
- bool secure = arm_is_secure(env);
145
146
- if (arm_feature(env, ARM_FEATURE_EL2) && !secure && cur_el == 1) {
147
+ if (arm_is_el2_enabled(env) && cur_el == 1) {
148
return env->cp15.vmpidr_el2;
149
}
150
return mpidr_read_val(env);
151
@@ -XXX,XX +XXX,XX @@ uint64_t arm_hcr_el2_eff(CPUARMState *env)
152
{
153
uint64_t ret = env->cp15.hcr_el2;
154
155
- if (arm_is_secure_below_el3(env)) {
156
+ if (!arm_is_el2_enabled(env)) {
157
/*
158
* "This register has no effect if EL2 is not enabled in the
159
* current Security state". This is ARMv8.4-SecEL2 speak for
160
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
161
/* CPTR_EL2. Since TZ and TFP are positive,
162
* they will be zero when EL2 is not present.
163
*/
164
- if (el <= 2 && !arm_is_secure_below_el3(env)) {
165
+ if (el <= 2 && arm_is_el2_enabled(env)) {
166
if (env->cp15.cptr_el[2] & CPTR_TZ) {
167
return 2;
168
}
169
@@ -XXX,XX +XXX,XX @@ static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
170
}
171
return 0;
172
case ARM_CPU_MODE_HYP:
173
- return !arm_feature(env, ARM_FEATURE_EL2)
174
- || arm_current_el(env) < 2 || arm_is_secure_below_el3(env);
175
+ return !arm_is_el2_enabled(env) || arm_current_el(env) < 2;
176
case ARM_CPU_MODE_MON:
177
return arm_current_el(env) < 3;
178
default:
179
@@ -XXX,XX +XXX,XX @@ int fp_exception_el(CPUARMState *env, int cur_el)
180
181
/* CPTR_EL2 : present in v7VE or v8 */
182
if (cur_el <= 2 && extract32(env->cp15.cptr_el[2], 10, 1)
183
- && !arm_is_secure_below_el3(env)) {
184
+ && arm_is_el2_enabled(env)) {
185
/* Trap FP ops at EL2, NS-EL1 or NS-EL0 to EL2 */
186
return 2;
187
}
188
--
169
--
189
2.20.1
170
2.34.1
190
191
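
Since gen_rri_log() above leans entirely on logic_imm_decode_wmask(), a
self-contained sketch of that "simplified DecodeBitMasks()" computation may help;
decode_wmask() and the example in main() are invented names for this illustration,
not code from the patch:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* N:immr:imms encodes "s+1 set bits, rotated right by r inside an
     * e-bit element, replicated across 64 bits".  Returns false for the
     * reserved encodings, which must UNDEF. */
    static bool decode_wmask(uint64_t *result, unsigned immn,
                             unsigned imms, unsigned immr)
    {
        unsigned top = (immn << 6) | (~imms & 0x3f);
        unsigned len, e, levels, s, r;
        uint64_t mask;

        if (top == 0) {
            return false;                /* immn == 0, imms == 0b111111 */
        }
        len = 31 - __builtin_clz(top);
        if (len == 0) {
            return false;                /* 1-bit element: reserved */
        }
        e = 1u << len;                   /* element size: 2, 4, ..., 64 */
        levels = e - 1;
        s = imms & levels;
        r = immr & levels;
        if (s == levels) {
            return false;                /* all-ones element: reserved */
        }
        mask = ~0ULL >> (63 - s);        /* s+1 set bits... */
        if (r) {                         /* ...rotated right by r within e bits... */
            mask = (mask >> r) | (mask << (e - r));
            mask &= ~0ULL >> (64 - e);
        }
        while (e < 64) {                 /* ...replicated over the 64-bit value */
            mask |= mask << e;
            e *= 2;
        }
        *result = mask;
        return true;
    }

    int main(void)
    {
        uint64_t wmask;
        /* AND x0, x1, #0x00ff00ff00ff00ff: N=0, immr=0, imms=0b100111 */
        if (decode_wmask(&wmask, 0, 0x27, 0)) {
            printf("0x%016" PRIx64 "\n", wmask);  /* 0x00ff00ff00ff00ff */
        }
        return 0;
    }

For the 32-bit (sf=0) forms, N must be zero and the result is truncated to 32 bits,
which is what the imm &= 0xffffffffull in the patch does.
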
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
These two were odd, in that do_pfirst_pnext passed the
3
Convert the MOVN, MOVZ, MOVK instructions.
4
count of 64-bit words rather than bytes. Change to pass
5
the standard pred_full_reg_size to avoid confusion.
6
4
7
Cc: qemu-stable@nongnu.org
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210113062650.593824-3-richard.henderson@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-11-peter.maydell@linaro.org
9
[PMM: Rebased]
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
12
---
13
target/arm/sve_helper.c | 7 ++++---
13
target/arm/tcg/a64.decode | 13 ++++++
14
target/arm/translate-sve.c | 6 +++---
14
target/arm/tcg/translate-a64.c | 73 ++++++++++++++--------------------
15
2 files changed, 7 insertions(+), 6 deletions(-)
15
2 files changed, 42 insertions(+), 44 deletions(-)
16
16
17
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
17
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
18
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/sve_helper.c
19
--- a/target/arm/tcg/a64.decode
20
+++ b/target/arm/sve_helper.c
20
+++ b/target/arm/tcg/a64.decode
21
@@ -XXX,XX +XXX,XX @@ static intptr_t last_active_element(uint64_t *g, intptr_t words, intptr_t esz)
21
@@ -XXX,XX +XXX,XX @@ EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_64
22
return (intptr_t)-1 << esz;
22
EOR_i . 10 100100 . ...... ...... ..... ..... @logic_imm_32
23
ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_64
24
ANDS_i . 11 100100 . ...... ...... ..... ..... @logic_imm_32
25
+
26
+# Move wide (immediate)
27
+
28
+&movw rd sf imm hw
29
+@movw_64 1 .. ...... hw:2 imm:16 rd:5 &movw sf=1
30
+@movw_32 0 .. ...... 0 hw:1 imm:16 rd:5 &movw sf=0
31
+
32
+MOVN . 00 100101 .. ................ ..... @movw_64
33
+MOVN . 00 100101 .. ................ ..... @movw_32
34
+MOVZ . 10 100101 .. ................ ..... @movw_64
35
+MOVZ . 10 100101 .. ................ ..... @movw_32
36
+MOVK . 11 100101 .. ................ ..... @movw_64
37
+MOVK . 11 100101 .. ................ ..... @movw_32
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/tcg/translate-a64.c
41
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ TRANS(ANDS_i, gen_rri_log, a, true, tcg_gen_andi_i64)
43
44
/*
45
* Move wide (immediate)
46
- *
47
- * 31 30 29 28 23 22 21 20 5 4 0
48
- * +--+-----+-------------+-----+----------------+------+
49
- * |sf| opc | 1 0 0 1 0 1 | hw | imm16 | Rd |
50
- * +--+-----+-------------+-----+----------------+------+
51
- *
52
- * sf: 0 -> 32 bit, 1 -> 64 bit
53
- * opc: 00 -> N, 10 -> Z, 11 -> K
54
- * hw: shift/16 (0,16, and sf only 32, 48)
55
*/
56
-static void disas_movw_imm(DisasContext *s, uint32_t insn)
57
+
58
+static bool trans_MOVZ(DisasContext *s, arg_movw *a)
59
{
60
- int rd = extract32(insn, 0, 5);
61
- uint64_t imm = extract32(insn, 5, 16);
62
- int sf = extract32(insn, 31, 1);
63
- int opc = extract32(insn, 29, 2);
64
- int pos = extract32(insn, 21, 2) << 4;
65
- TCGv_i64 tcg_rd = cpu_reg(s, rd);
66
+ int pos = a->hw << 4;
67
+ tcg_gen_movi_i64(cpu_reg(s, a->rd), (uint64_t)a->imm << pos);
68
+ return true;
69
+}
70
71
- if (!sf && (pos >= 32)) {
72
- unallocated_encoding(s);
73
- return;
74
- }
75
+static bool trans_MOVN(DisasContext *s, arg_movw *a)
76
+{
77
+ int pos = a->hw << 4;
78
+ uint64_t imm = a->imm;
79
80
- switch (opc) {
81
- case 0: /* MOVN */
82
- case 2: /* MOVZ */
83
- imm <<= pos;
84
- if (opc == 0) {
85
- imm = ~imm;
86
- }
87
- if (!sf) {
88
- imm &= 0xffffffffu;
89
- }
90
- tcg_gen_movi_i64(tcg_rd, imm);
91
- break;
92
- case 3: /* MOVK */
93
- tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_constant_i64(imm), pos, 16);
94
- if (!sf) {
95
- tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
96
- }
97
- break;
98
- default:
99
- unallocated_encoding(s);
100
- break;
101
+ imm = ~(imm << pos);
102
+ if (!a->sf) {
103
+ imm = (uint32_t)imm;
104
}
105
+ tcg_gen_movi_i64(cpu_reg(s, a->rd), imm);
106
+ return true;
107
+}
108
+
109
+static bool trans_MOVK(DisasContext *s, arg_movw *a)
110
+{
111
+ int pos = a->hw << 4;
112
+ TCGv_i64 tcg_rd, tcg_im;
113
+
114
+ tcg_rd = cpu_reg(s, a->rd);
115
+ tcg_im = tcg_constant_i64(a->imm);
116
+ tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_im, pos, 16);
117
+ if (!a->sf) {
118
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
119
+ }
120
+ return true;
23
}
121
}
24
122
25
-uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t words)
123
/* Bitfield
26
+uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t pred_desc)
124
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
125
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
27
{
126
{
28
+ intptr_t words = DIV_ROUND_UP(FIELD_EX32(pred_desc, PREDDESC, OPRSZ), 8);
127
switch (extract32(insn, 23, 6)) {
29
uint32_t flags = PREDTEST_INIT;
128
- case 0x25: /* Move wide (immediate) */
30
uint64_t *d = vd, *g = vg;
129
- disas_movw_imm(s, insn);
31
intptr_t i = 0;
130
- break;
32
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_pfirst)(void *vd, void *vg, uint32_t words)
131
case 0x26: /* Bitfield */
33
132
disas_bitfield(s, insn);
34
uint32_t HELPER(sve_pnext)(void *vd, void *vg, uint32_t pred_desc)
133
break;
35
{
36
- intptr_t words = extract32(pred_desc, 0, SIMD_OPRSZ_BITS);
37
- intptr_t esz = extract32(pred_desc, SIMD_DATA_SHIFT, 2);
38
+ intptr_t words = DIV_ROUND_UP(FIELD_EX32(pred_desc, PREDDESC, OPRSZ), 8);
39
+ intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
40
uint32_t flags = PREDTEST_INIT;
41
uint64_t *d = vd, *g = vg, esz_mask;
42
intptr_t i, next;
43
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
44
index XXXXXXX..XXXXXXX 100644
45
--- a/target/arm/translate-sve.c
46
+++ b/target/arm/translate-sve.c
47
@@ -XXX,XX +XXX,XX @@ static bool do_pfirst_pnext(DisasContext *s, arg_rr_esz *a,
48
TCGv_ptr t_pd = tcg_temp_new_ptr();
49
TCGv_ptr t_pg = tcg_temp_new_ptr();
50
TCGv_i32 t;
51
- unsigned desc;
52
+ unsigned desc = 0;
53
54
- desc = DIV_ROUND_UP(pred_full_reg_size(s), 8);
55
- desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
56
+ desc = FIELD_DP32(desc, PREDDESC, OPRSZ, pred_full_reg_size(s));
57
+ desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
58
59
tcg_gen_addi_ptr(t_pd, cpu_env, pred_full_reg_offset(s, a->rd));
60
tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->rn));
61
--
134
--
62
2.20.1
135
2.34.1
63
64
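
The three trans_MOV* handlers above boil down to simple arithmetic on the 16-bit
immediate and the hw shift. A plain-C model of that semantics (movz()/movn()/movk()
below are illustrative helpers for this sketch, not QEMU code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Result of MOVZ/MOVN/MOVK with a 16-bit imm at position hw*16;
     * 'old' is only used by MOVK, and sf=0 truncates to 32 bits. */
    static uint64_t movz(unsigned sf, unsigned hw, uint16_t imm)
    {
        uint64_t r = (uint64_t)imm << (hw * 16);
        return sf ? r : (uint32_t)r;
    }

    static uint64_t movn(unsigned sf, unsigned hw, uint16_t imm)
    {
        uint64_t r = ~((uint64_t)imm << (hw * 16));
        return sf ? r : (uint32_t)r;
    }

    static uint64_t movk(unsigned sf, unsigned hw, uint16_t imm, uint64_t old)
    {
        unsigned pos = hw * 16;
        uint64_t r = (old & ~(0xffffULL << pos)) | ((uint64_t)imm << pos);
        return sf ? r : (uint32_t)r;
    }

    int main(void)
    {
        printf("0x%016" PRIx64 "\n", movz(1, 2, 0x1234));  /* 0x0000123400000000 */
        printf("0x%016" PRIx64 "\n", movn(0, 0, 0x0001));  /* 0x00000000fffffffe */
        printf("0x%016" PRIx64 "\n",
               movk(1, 1, 0xbeef, 0x1111222233334444ULL)); /* 0x11112222beef4444 */
        return 0;
    }

Note that for the 32-bit forms only hw values 0 and 1 are valid; the @movw_32
pattern enforces that by fixing the top bit of hw in the decode line, replacing
the old "!sf && pos >= 32" unallocated check in the legacy decoder.
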
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
The interface for object_property_add_bool is simpler,
3
Convert the BFM, SBFM, UBFM instructions.
4
making the code easier to understand.
5
4
6
Reviewed-by: Andrew Jones <drjones@redhat.com>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210111235740.462469-4-richard.henderson@linaro.org
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20230512144106.3608981-12-peter.maydell@linaro.org
9
[PMM: Rebased]
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
target/arm/cpu64.c | 24 ++++++++++--------------
12
target/arm/tcg/a64.decode | 13 +++
12
1 file changed, 10 insertions(+), 14 deletions(-)
13
target/arm/tcg/translate-a64.c | 144 ++++++++++++++++++---------------
14
2 files changed, 94 insertions(+), 63 deletions(-)
13
15
14
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
16
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu64.c
18
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/cpu64.c
19
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
20
@@ -XXX,XX +XXX,XX @@ MOVZ . 10 100101 .. ................ ..... @movw_64
19
cpu->sve_max_vq = max_vq;
21
MOVZ . 10 100101 .. ................ ..... @movw_32
22
MOVK . 11 100101 .. ................ ..... @movw_64
23
MOVK . 11 100101 .. ................ ..... @movw_32
24
+
25
+# Bitfield
26
+
27
+&bitfield rd rn sf immr imms
28
+@bitfield_64 1 .. ...... 1 immr:6 imms:6 rn:5 rd:5 &bitfield sf=1
29
+@bitfield_32 0 .. ...... 0 0 immr:5 0 imms:5 rn:5 rd:5 &bitfield sf=0
30
+
31
+SBFM . 00 100110 . ...... ...... ..... ..... @bitfield_64
32
+SBFM . 00 100110 . ...... ...... ..... ..... @bitfield_32
33
+BFM . 01 100110 . ...... ...... ..... ..... @bitfield_64
34
+BFM . 01 100110 . ...... ...... ..... ..... @bitfield_32
35
+UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_64
36
+UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
37
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
38
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static bool trans_MOVK(DisasContext *s, arg_movw *a)
42
return true;
20
}
43
}
21
44
45
-/* Bitfield
46
- * 31 30 29 28 23 22 21 16 15 10 9 5 4 0
47
- * +----+-----+-------------+---+------+------+------+------+
48
- * | sf | opc | 1 0 0 1 1 0 | N | immr | imms | Rn | Rd |
49
- * +----+-----+-------------+---+------+------+------+------+
22
+/*
50
+/*
23
+ * Note that cpu_arm_get/set_sve_vq cannot use the simpler
51
+ * Bitfield
24
+ * object_property_add_bool interface because they make use
52
*/
25
+ * of the contents of "name" to determine which bit on which
53
-static void disas_bitfield(DisasContext *s, uint32_t insn)
26
+ * to operate.
54
+
27
+ */
55
+static bool trans_SBFM(DisasContext *s, arg_SBFM *a)
28
static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
29
void *opaque, Error **errp)
30
{
56
{
31
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
57
- unsigned int sf, n, opc, ri, si, rn, rd, bitsize, pos, len;
32
set_bit(vq - 1, cpu->sve_vq_init);
58
- TCGv_i64 tcg_rd, tcg_tmp;
33
}
59
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
34
60
+ TCGv_i64 tcg_tmp = read_cpu_reg(s, a->rn, 1);
35
-static void cpu_arm_get_sve(Object *obj, Visitor *v, const char *name,
61
+ unsigned int bitsize = a->sf ? 64 : 32;
36
- void *opaque, Error **errp)
62
+ unsigned int ri = a->immr;
37
+static bool cpu_arm_get_sve(Object *obj, Error **errp)
63
+ unsigned int si = a->imms;
38
{
64
+ unsigned int pos, len;
39
ARMCPU *cpu = ARM_CPU(obj);
65
40
- bool value = cpu_isar_feature(aa64_sve, cpu);
66
- sf = extract32(insn, 31, 1);
41
-
67
- opc = extract32(insn, 29, 2);
42
- visit_type_bool(v, name, &value, errp);
68
- n = extract32(insn, 22, 1);
43
+ return cpu_isar_feature(aa64_sve, cpu);
69
- ri = extract32(insn, 16, 6);
44
}
70
- si = extract32(insn, 10, 6);
45
71
- rn = extract32(insn, 5, 5);
46
-static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
72
- rd = extract32(insn, 0, 5);
47
- void *opaque, Error **errp)
73
- bitsize = sf ? 64 : 32;
48
+static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
74
-
49
{
75
- if (sf != n || ri >= bitsize || si >= bitsize || opc > 2) {
50
ARMCPU *cpu = ARM_CPU(obj);
76
- unallocated_encoding(s);
51
- bool value;
52
uint64_t t;
53
54
- if (!visit_type_bool(v, name, &value, errp)) {
55
- return;
77
- return;
56
- }
78
- }
57
-
79
-
58
if (value && kvm_enabled() && !kvm_arm_sve_supported()) {
80
- tcg_rd = cpu_reg(s, rd);
59
error_setg(errp, "'sve' feature not supported by KVM on this host");
81
-
60
return;
82
- /* Suppress the zero-extend for !sf. Since RI and SI are constrained
61
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
83
- to be smaller than bitsize, we'll never reference data outside the
84
- low 32-bits anyway. */
85
- tcg_tmp = read_cpu_reg(s, rn, 1);
86
-
87
- /* Recognize simple(r) extractions. */
88
if (si >= ri) {
89
/* Wd<s-r:0> = Wn<s:r> */
90
len = (si - ri) + 1;
91
- if (opc == 0) { /* SBFM: ASR, SBFX, SXTB, SXTH, SXTW */
92
- tcg_gen_sextract_i64(tcg_rd, tcg_tmp, ri, len);
93
- goto done;
94
- } else if (opc == 2) { /* UBFM: UBFX, LSR, UXTB, UXTH */
95
- tcg_gen_extract_i64(tcg_rd, tcg_tmp, ri, len);
96
- return;
97
+ tcg_gen_sextract_i64(tcg_rd, tcg_tmp, ri, len);
98
+ if (!a->sf) {
99
+ tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
100
}
101
- /* opc == 1, BFXIL fall through to deposit */
102
+ } else {
103
+ /* Wd<32+s-r,32-r> = Wn<s:0> */
104
+ len = si + 1;
105
+ pos = (bitsize - ri) & (bitsize - 1);
106
+
107
+ if (len < ri) {
108
+ /*
109
+ * Sign extend the destination field from len to fill the
110
+ * balance of the word. Let the deposit below insert all
111
+ * of those sign bits.
112
+ */
113
+ tcg_gen_sextract_i64(tcg_tmp, tcg_tmp, 0, len);
114
+ len = ri;
115
+ }
116
+
117
+ /*
118
+ * We start with zero, and we haven't modified any bits outside
119
+ * bitsize, therefore no final zero-extension is needed for !sf.
120
+ */
121
+ tcg_gen_deposit_z_i64(tcg_rd, tcg_tmp, pos, len);
122
+ }
123
+ return true;
124
+}
125
+
126
+static bool trans_UBFM(DisasContext *s, arg_UBFM *a)
127
+{
128
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
129
+ TCGv_i64 tcg_tmp = read_cpu_reg(s, a->rn, 1);
130
+ unsigned int bitsize = a->sf ? 64 : 32;
131
+ unsigned int ri = a->immr;
132
+ unsigned int si = a->imms;
133
+ unsigned int pos, len;
134
+
135
+ tcg_rd = cpu_reg(s, a->rd);
136
+ tcg_tmp = read_cpu_reg(s, a->rn, 1);
137
+
138
+ if (si >= ri) {
139
+ /* Wd<s-r:0> = Wn<s:r> */
140
+ len = (si - ri) + 1;
141
+ tcg_gen_extract_i64(tcg_rd, tcg_tmp, ri, len);
142
+ } else {
143
+ /* Wd<32+s-r,32-r> = Wn<s:0> */
144
+ len = si + 1;
145
+ pos = (bitsize - ri) & (bitsize - 1);
146
+ tcg_gen_deposit_z_i64(tcg_rd, tcg_tmp, pos, len);
147
+ }
148
+ return true;
149
+}
150
+
151
+static bool trans_BFM(DisasContext *s, arg_BFM *a)
152
+{
153
+ TCGv_i64 tcg_rd = cpu_reg(s, a->rd);
154
+ TCGv_i64 tcg_tmp = read_cpu_reg(s, a->rn, 1);
155
+ unsigned int bitsize = a->sf ? 64 : 32;
156
+ unsigned int ri = a->immr;
157
+ unsigned int si = a->imms;
158
+ unsigned int pos, len;
159
+
160
+ tcg_rd = cpu_reg(s, a->rd);
161
+ tcg_tmp = read_cpu_reg(s, a->rn, 1);
162
+
163
+ if (si >= ri) {
164
+ /* Wd<s-r:0> = Wn<s:r> */
165
tcg_gen_shri_i64(tcg_tmp, tcg_tmp, ri);
166
+ len = (si - ri) + 1;
167
pos = 0;
168
} else {
169
- /* Handle the ri > si case with a deposit
170
- * Wd<32+s-r,32-r> = Wn<s:0>
171
- */
172
+ /* Wd<32+s-r,32-r> = Wn<s:0> */
173
len = si + 1;
174
pos = (bitsize - ri) & (bitsize - 1);
175
}
176
177
- if (opc == 0 && len < ri) {
178
- /* SBFM: sign extend the destination field from len to fill
179
- the balance of the word. Let the deposit below insert all
180
- of those sign bits. */
181
- tcg_gen_sextract_i64(tcg_tmp, tcg_tmp, 0, len);
182
- len = ri;
183
- }
184
-
185
- if (opc == 1) { /* BFM, BFXIL */
186
- tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_tmp, pos, len);
187
- } else {
188
- /* SBFM or UBFM: We start with zero, and we haven't modified
189
- any bits outside bitsize, therefore the zero-extension
190
- below is unneeded. */
191
- tcg_gen_deposit_z_i64(tcg_rd, tcg_tmp, pos, len);
192
- return;
193
- }
194
-
195
- done:
196
- if (!sf) { /* zero extend final result */
197
+ tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_tmp, pos, len);
198
+ if (!a->sf) {
199
tcg_gen_ext32u_i64(tcg_rd, tcg_rd);
200
}
201
+ return true;
202
}
203
204
/* Extract
205
@@ -XXX,XX +XXX,XX @@ static void disas_extract(DisasContext *s, uint32_t insn)
206
static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
62
{
207
{
63
uint32_t vq;
208
switch (extract32(insn, 23, 6)) {
64
209
- case 0x26: /* Bitfield */
65
- object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
210
- disas_bitfield(s, insn);
66
- cpu_arm_set_sve, NULL, NULL);
211
- break;
67
+ object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve);
212
case 0x27: /* Extract */
68
213
disas_extract(s, insn);
69
for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
214
break;
70
char name[8];
71
--
215
--
72
2.20.1
216
2.34.1
73
74
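
trans_SBFM, trans_UBFM and trans_BFM above all share the same shape: an extract
when imms >= immr, otherwise a deposit at position bitsize - immr. A standalone
model of the unsigned 64-bit case (ubfm64() and main() are invented for this
sketch, not QEMU code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Plain-C model of UBFM Xd, Xn, #immr, #imms (the sf=1 form). */
    static uint64_t ubfm64(uint64_t rn, unsigned immr, unsigned imms)
    {
        if (imms >= immr) {
            /* Wd<s-r:0> = Wn<s:r> -- a right-justified extract */
            unsigned len = imms - immr + 1;
            return (rn >> immr) & (~0ULL >> (64 - len));
        } else {
            /* Wd<64+s-r:64-r> = Wn<s:0> -- an insert into zeroes */
            unsigned len = imms + 1;
            unsigned pos = (64 - immr) & 63;
            return (rn & (~0ULL >> (64 - len))) << pos;
        }
    }

    int main(void)
    {
        uint64_t x = 0x1122334455667788ULL;
        /* UBFX x0, x1, #8, #16 is UBFM with immr=8, imms=23 */
        printf("0x%016" PRIx64 "\n", ubfm64(x, 8, 23));   /* 0x0000000000006677 */
        /* LSL x0, x1, #8 is UBFM with immr=56, imms=55 */
        printf("0x%016" PRIx64 "\n", ubfm64(x, 56, 55));  /* 0x2233445566778800 */
        return 0;
    }

SBFM differs only in sign-extending the extracted field, and BFM in depositing
into the previous contents of Rd instead of into zeroes.
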
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the EXTR instruction to decodetree (this is the
2
only one in the 'Extract' class). This is the last of
3
the dp-immediate insns in the legacy decoder, so we
4
can now remove disas_data_proc_imm().
2
5
3
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210112104511.36576-19-remi.denis.courmont@huawei.com
8
Message-id: 20230512144106.3608981-13-peter.maydell@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
9
---
8
target/arm/helper.c | 25 +++++++++++--------------
10
target/arm/tcg/a64.decode | 7 +++
9
1 file changed, 11 insertions(+), 14 deletions(-)
11
target/arm/tcg/translate-a64.c | 94 +++++++++++-----------------------
12
2 files changed, 36 insertions(+), 65 deletions(-)
10
13
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
12
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/helper.c
16
--- a/target/arm/tcg/a64.decode
14
+++ b/target/arm/helper.c
17
+++ b/target/arm/tcg/a64.decode
15
@@ -XXX,XX +XXX,XX @@ static CPAccessResult aa64_cacheop_pou_access(CPUARMState *env,
18
@@ -XXX,XX +XXX,XX @@ BFM . 01 100110 . ...... ...... ..... ..... @bitfield_64
16
static int vae1_tlbmask(CPUARMState *env)
19
BFM . 01 100110 . ...... ...... ..... ..... @bitfield_32
20
UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_64
21
UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
22
+
23
+# Extract
24
+
25
+&extract rd rn rm imm sf
26
+
27
+EXTR 1 00 100111 1 0 rm:5 imm:6 rn:5 rd:5 &extract sf=1
28
+EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0
29
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
30
index XXXXXXX..XXXXXXX 100644
31
--- a/target/arm/tcg/translate-a64.c
32
+++ b/target/arm/tcg/translate-a64.c
33
@@ -XXX,XX +XXX,XX @@ static bool trans_BFM(DisasContext *s, arg_BFM *a)
34
return true;
35
}
36
37
-/* Extract
38
- * 31 30 29 28 23 22 21 20 16 15 10 9 5 4 0
39
- * +----+------+-------------+---+----+------+--------+------+------+
40
- * | sf | op21 | 1 0 0 1 1 1 | N | o0 | Rm | imms | Rn | Rd |
41
- * +----+------+-------------+---+----+------+--------+------+------+
42
- */
43
-static void disas_extract(DisasContext *s, uint32_t insn)
44
+static bool trans_EXTR(DisasContext *s, arg_extract *a)
17
{
45
{
18
uint64_t hcr = arm_hcr_el2_eff(env);
46
- unsigned int sf, n, rm, imm, rn, rd, bitsize, op21, op0;
19
+ uint16_t mask;
47
+ TCGv_i64 tcg_rd, tcg_rm, tcg_rn;
20
48
21
if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
49
- sf = extract32(insn, 31, 1);
22
- uint16_t mask = ARMMMUIdxBit_E20_2 |
50
- n = extract32(insn, 22, 1);
23
- ARMMMUIdxBit_E20_2_PAN |
51
- rm = extract32(insn, 16, 5);
24
- ARMMMUIdxBit_E20_0;
52
- imm = extract32(insn, 10, 6);
53
- rn = extract32(insn, 5, 5);
54
- rd = extract32(insn, 0, 5);
55
- op21 = extract32(insn, 29, 2);
56
- op0 = extract32(insn, 21, 1);
57
- bitsize = sf ? 64 : 32;
58
+ tcg_rd = cpu_reg(s, a->rd);
59
60
- if (sf != n || op21 || op0 || imm >= bitsize) {
61
- unallocated_encoding(s);
62
- } else {
63
- TCGv_i64 tcg_rd, tcg_rm, tcg_rn;
25
-
64
-
26
- if (arm_is_secure_below_el3(env)) {
65
- tcg_rd = cpu_reg(s, rd);
27
- mask >>= ARM_MMU_IDX_A_NS;
28
- }
29
-
66
-
30
- return mask;
67
- if (unlikely(imm == 0)) {
31
- } else if (arm_is_secure_below_el3(env)) {
68
- /* tcg shl_i32/shl_i64 is undefined for 32/64 bit shifts,
32
- return ARMMMUIdxBit_SE10_1 |
69
- * so an extract from bit 0 is a special case.
33
- ARMMMUIdxBit_SE10_1_PAN |
70
- */
34
- ARMMMUIdxBit_SE10_0;
71
- if (sf) {
35
+ mask = ARMMMUIdxBit_E20_2 |
72
- tcg_gen_mov_i64(tcg_rd, cpu_reg(s, rm));
36
+ ARMMMUIdxBit_E20_2_PAN |
73
- } else {
37
+ ARMMMUIdxBit_E20_0;
74
- tcg_gen_ext32u_i64(tcg_rd, cpu_reg(s, rm));
38
} else {
75
- }
39
- return ARMMMUIdxBit_E10_1 |
76
+ if (unlikely(a->imm == 0)) {
40
+ mask = ARMMMUIdxBit_E10_1 |
77
+ /*
41
ARMMMUIdxBit_E10_1_PAN |
78
+ * tcg shl_i32/shl_i64 is undefined for 32/64 bit shifts,
42
ARMMMUIdxBit_E10_0;
79
+ * so an extract from bit 0 is a special case.
80
+ */
81
+ if (a->sf) {
82
+ tcg_gen_mov_i64(tcg_rd, cpu_reg(s, a->rm));
83
} else {
84
- tcg_rm = cpu_reg(s, rm);
85
- tcg_rn = cpu_reg(s, rn);
86
+ tcg_gen_ext32u_i64(tcg_rd, cpu_reg(s, a->rm));
87
+ }
88
+ } else {
89
+ tcg_rm = cpu_reg(s, a->rm);
90
+ tcg_rn = cpu_reg(s, a->rn);
91
92
- if (sf) {
93
- /* Specialization to ROR happens in EXTRACT2. */
94
- tcg_gen_extract2_i64(tcg_rd, tcg_rm, tcg_rn, imm);
95
+ if (a->sf) {
96
+ /* Specialization to ROR happens in EXTRACT2. */
97
+ tcg_gen_extract2_i64(tcg_rd, tcg_rm, tcg_rn, a->imm);
98
+ } else {
99
+ TCGv_i32 t0 = tcg_temp_new_i32();
100
+
101
+ tcg_gen_extrl_i64_i32(t0, tcg_rm);
102
+ if (a->rm == a->rn) {
103
+ tcg_gen_rotri_i32(t0, t0, a->imm);
104
} else {
105
- TCGv_i32 t0 = tcg_temp_new_i32();
106
-
107
- tcg_gen_extrl_i64_i32(t0, tcg_rm);
108
- if (rm == rn) {
109
- tcg_gen_rotri_i32(t0, t0, imm);
110
- } else {
111
- TCGv_i32 t1 = tcg_temp_new_i32();
112
- tcg_gen_extrl_i64_i32(t1, tcg_rn);
113
- tcg_gen_extract2_i32(t0, t0, t1, imm);
114
- }
115
- tcg_gen_extu_i32_i64(tcg_rd, t0);
116
+ TCGv_i32 t1 = tcg_temp_new_i32();
117
+ tcg_gen_extrl_i64_i32(t1, tcg_rn);
118
+ tcg_gen_extract2_i32(t0, t0, t1, a->imm);
119
}
120
+ tcg_gen_extu_i32_i64(tcg_rd, t0);
121
}
43
}
122
}
44
+
123
-}
45
+ if (arm_is_secure_below_el3(env)) {
124
-
46
+ mask >>= ARM_MMU_IDX_A_NS;
125
-/* Data processing - immediate */
47
+ }
126
-static void disas_data_proc_imm(DisasContext *s, uint32_t insn)
48
+
127
-{
49
+ return mask;
128
- switch (extract32(insn, 23, 6)) {
129
- case 0x27: /* Extract */
130
- disas_extract(s, insn);
131
- break;
132
- default:
133
- unallocated_encoding(s);
134
- break;
135
- }
136
+ return true;
50
}
137
}
51
138
52
/* Return 56 if TBI is enabled, 64 otherwise. */
139
/* Shift a TCGv src by TCGv shift_amount, put result in dst.
140
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
141
static void disas_a64_legacy(DisasContext *s, uint32_t insn)
142
{
143
switch (extract32(insn, 25, 4)) {
144
- case 0x8: case 0x9: /* Data processing - immediate */
145
- disas_data_proc_imm(s, insn);
146
- break;
147
case 0xa: case 0xb: /* Branch, exception generation and system insns */
148
disas_b_exc_sys(s, insn);
149
break;
53
--
150
--
54
2.20.1
151
2.34.1
55
56
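
Semantically, trans_EXTR above extracts 64 (or 32) bits from the concatenation
Rn:Rm starting at bit imm; the imm == 0 special case exists because a shift by the
full operand width is undefined in TCG just as it is in C. A minimal model
(extr64() is an invented name for this sketch):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* EXTR Xd, Xn, Xm, #imm: bits [imm+63:imm] of the 128-bit value Xn:Xm. */
    static uint64_t extr64(uint64_t rn, uint64_t rm, unsigned imm)
    {
        return imm ? (rm >> imm) | (rn << (64 - imm)) : rm;
    }

    int main(void)
    {
        uint64_t x = 0x1122334455667788ULL;
        /* ROR Xd, Xs, #8 assembles as EXTR Xd, Xs, Xs, #8 */
        printf("0x%016" PRIx64 "\n", extr64(x, x, 8));  /* 0x8811223344556677 */
        return 0;
    }

Rotate-by-immediate is the rn == rm special case, which is why the 32-bit path in
the patch checks a->rm == a->rn and uses tcg_gen_rotri_i32().
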
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the unconditional branch immediate insns B and BL to
2
decodetree.
2
3
3
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210112104511.36576-9-remi.denis.courmont@huawei.com
6
Message-id: 20230512144106.3608981-14-peter.maydell@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/cpu.h | 7 +++++++
8
target/arm/tcg/a64.decode | 9 +++++++++
9
target/arm/helper.c | 24 ++++++++++++++++++++++++
9
target/arm/tcg/translate-a64.c | 31 +++++++++++--------------------
10
2 files changed, 31 insertions(+)
10
2 files changed, 20 insertions(+), 20 deletions(-)
11
11
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
13
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.h
14
--- a/target/arm/tcg/a64.decode
15
+++ b/target/arm/cpu.h
15
+++ b/target/arm/tcg/a64.decode
16
@@ -XXX,XX +XXX,XX @@ typedef struct {
16
@@ -XXX,XX +XXX,XX @@
17
uint32_t base_mask;
17
18
} TCR;
18
&ri rd imm
19
19
&rri_sf rd rn imm sf
20
+#define VTCR_NSW (1u << 29)
20
+&i imm
21
+#define VTCR_NSA (1u << 30)
21
22
+#define VSTCR_SW VTCR_NSW
22
23
+#define VSTCR_SA VTCR_NSA
23
### Data Processing - Immediate
24
@@ -XXX,XX +XXX,XX @@ UBFM . 10 100110 . ...... ...... ..... ..... @bitfield_32
25
26
EXTR 1 00 100111 1 0 rm:5 imm:6 rn:5 rd:5 &extract sf=1
27
EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0
24
+
28
+
25
/* Define a maximum sized vector register.
29
+# Branches
26
* For 32-bit, this is a 128-bit NEON/AdvSIMD register.
30
+
27
* For 64-bit, this is a 2048-bit SVE register.
31
+%imm26 0:s26 !function=times_4
28
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
32
+@branch . ..... .......................... &i imm=%imm26
29
uint64_t ttbr1_el[4];
33
+
30
};
34
+B 0 00101 .......................... @branch
31
uint64_t vttbr_el2; /* Virtualization Translation Table Base. */
35
+BL 1 00101 .......................... @branch
32
+ uint64_t vsttbr_el2; /* Secure Virtualization Translation Table. */
36
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
33
/* MMU translation table base control. */
34
TCR tcr_el[4];
35
TCR vtcr_el2; /* Virtualization Translation Control. */
36
+ TCR vstcr_el2; /* Secure Virtualization Translation Control. */
37
uint32_t c2_data; /* MPU data cacheable bits. */
38
uint32_t c2_insn; /* MPU instruction cacheable bits. */
39
union { /* MMU domain access control register
40
diff --git a/target/arm/helper.c b/target/arm/helper.c
41
index XXXXXXX..XXXXXXX 100644
37
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/helper.c
38
--- a/target/arm/tcg/translate-a64.c
43
+++ b/target/arm/helper.c
39
+++ b/target/arm/tcg/translate-a64.c
44
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_v8_cp_reginfo[] = {
40
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
45
REGINFO_SENTINEL
41
* match up with those in the manual.
46
};
42
*/
47
43
48
+static CPAccessResult sel2_access(CPUARMState *env, const ARMCPRegInfo *ri,
44
-/* Unconditional branch (immediate)
49
+ bool isread)
45
- * 31 30 26 25 0
50
+{
46
- * +----+-----------+-------------------------------------+
51
+ if (arm_current_el(env) == 3 || arm_is_secure_below_el3(env)) {
47
- * | op | 0 0 1 0 1 | imm26 |
52
+ return CP_ACCESS_OK;
48
- * +----+-----------+-------------------------------------+
53
+ }
49
- */
54
+ return CP_ACCESS_TRAP_UNCATEGORIZED;
50
-static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
51
+static bool trans_B(DisasContext *s, arg_i *a)
52
{
53
- int64_t diff = sextract32(insn, 0, 26) * 4;
54
-
55
- if (insn & (1U << 31)) {
56
- /* BL Branch with link */
57
- gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
58
- }
59
-
60
- /* B Branch / BL Branch with link */
61
reset_btype(s);
62
- gen_goto_tb(s, 0, diff);
63
+ gen_goto_tb(s, 0, a->imm);
64
+ return true;
55
+}
65
+}
56
+
66
+
57
+static const ARMCPRegInfo el2_sec_cp_reginfo[] = {
67
+static bool trans_BL(DisasContext *s, arg_i *a)
58
+ { .name = "VSTTBR_EL2", .state = ARM_CP_STATE_AA64,
68
+{
59
+ .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 0,
69
+ gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
60
+ .access = PL2_RW, .accessfn = sel2_access,
70
+ reset_btype(s);
61
+ .fieldoffset = offsetof(CPUARMState, cp15.vsttbr_el2) },
71
+ gen_goto_tb(s, 0, a->imm);
62
+ { .name = "VSTCR_EL2", .state = ARM_CP_STATE_AA64,
72
+ return true;
63
+ .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 6, .opc2 = 2,
73
}
64
+ .access = PL2_RW, .accessfn = sel2_access,
74
65
+ .fieldoffset = offsetof(CPUARMState, cp15.vstcr_el2) },
75
/* Compare and branch (immediate)
66
+ REGINFO_SENTINEL
76
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
67
+};
77
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
68
+
69
static CPAccessResult nsacr_access(CPUARMState *env, const ARMCPRegInfo *ri,
70
bool isread)
71
{
78
{
72
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
79
switch (extract32(insn, 25, 7)) {
73
if (arm_feature(env, ARM_FEATURE_V8)) {
80
- case 0x0a: case 0x0b:
74
define_arm_cp_regs(cpu, el2_v8_cp_reginfo);
81
- case 0x4a: case 0x4b: /* Unconditional branch (immediate) */
75
}
82
- disas_uncond_b_imm(s, insn);
76
+ if (cpu_isar_feature(aa64_sel2, cpu)) {
83
- break;
77
+ define_arm_cp_regs(cpu, el2_sec_cp_reginfo);
84
case 0x1a: case 0x5a: /* Compare & branch (immediate) */
78
+ }
85
disas_comp_b_imm(s, insn);
79
/* RVBAR_EL2 is only implemented if EL2 is the highest EL */
86
break;
80
if (!arm_feature(env, ARM_FEATURE_EL3)) {
81
ARMCPRegInfo rvbar = {
82
--
87
--
83
2.20.1
88
2.34.1
84
85
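
The only non-obvious part of the B/BL conversion above is the %imm26 field
definition: a signed 26-bit word offset scaled by 4 via the times_4 function. A
hand-written equivalent of that extraction (b_offset() is an invented name, not
the decodetree-generated code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend insn[25:0] and multiply by 4, as "%imm26 0:s26
     * !function=times_4" does in the decoder. */
    static int64_t b_offset(uint32_t insn)
    {
        int64_t imm26 = insn & 0x03ffffff;
        if (imm26 & (1 << 25)) {
            imm26 -= 1 << 26;
        }
        return imm26 * 4;
    }

    int main(void)
    {
        /* 0x17ffffff is "b . - 4": imm26 = 0x3ffffff = -1 */
        printf("%" PRId64 "\n", b_offset(0x17ffffff));  /* prints -4 */
        return 0;
    }
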
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the compare-and-branch-immediate insns CBZ and CBNZ
2
to decodetree.
2
3
3
This adds a common helper to compute the effective value of MDCR_EL2.
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
That is the actual value if EL2 is enabled in the current security
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
context, or 0 otherwise.
6
Message-id: 20230512144106.3608981-15-peter.maydell@linaro.org
7
---
8
target/arm/tcg/a64.decode | 5 +++++
9
target/arm/tcg/translate-a64.c | 26 ++++++--------------------
10
2 files changed, 11 insertions(+), 20 deletions(-)
6
11
7
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20210112104511.36576-5-remi.denis.courmont@huawei.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/helper.c | 38 ++++++++++++++++++++++----------------
13
1 file changed, 22 insertions(+), 16 deletions(-)
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
14
--- a/target/arm/tcg/a64.decode
18
+++ b/target/arm/helper.c
15
+++ b/target/arm/tcg/a64.decode
19
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_trap_aa32s_el1(CPUARMState *env,
16
@@ -XXX,XX +XXX,XX @@ EXTR 0 00 100111 0 0 rm:5 0 imm:5 rn:5 rd:5 &extract sf=0
20
return CP_ACCESS_TRAP_UNCATEGORIZED;
17
18
B 0 00101 .......................... @branch
19
BL 1 00101 .......................... @branch
20
+
21
+%imm19 5:s19 !function=times_4
22
+&cbz rt imm sf nz
23
+
24
+CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
25
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
26
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/tcg/translate-a64.c
28
+++ b/target/arm/tcg/translate-a64.c
29
@@ -XXX,XX +XXX,XX @@ static bool trans_BL(DisasContext *s, arg_i *a)
30
return true;
21
}
31
}
22
32
23
+static uint64_t arm_mdcr_el2_eff(CPUARMState *env)
33
-/* Compare and branch (immediate)
24
+{
34
- * 31 30 25 24 23 5 4 0
25
+ return arm_is_el2_enabled(env) ? env->cp15.mdcr_el2 : 0;
35
- * +----+-------------+----+---------------------+--------+
26
+}
36
- * | sf | 0 1 1 0 1 0 | op | imm19 | Rt |
37
- * +----+-------------+----+---------------------+--------+
38
- */
39
-static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
27
+
40
+
28
/* Check for traps to "powerdown debug" registers, which are controlled
41
+static bool trans_CBZ(DisasContext *s, arg_cbz *a)
29
* by MDCR.TDOSA
30
*/
31
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
32
bool isread)
33
{
42
{
34
int el = arm_current_el(env);
43
- unsigned int sf, op, rt;
35
- bool mdcr_el2_tdosa = (env->cp15.mdcr_el2 & MDCR_TDOSA) ||
44
- int64_t diff;
36
- (env->cp15.mdcr_el2 & MDCR_TDE) ||
45
DisasLabel match;
37
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
46
TCGv_i64 tcg_cmp;
38
+ bool mdcr_el2_tdosa = (mdcr_el2 & MDCR_TDOSA) || (mdcr_el2 & MDCR_TDE) ||
47
39
(arm_hcr_el2_eff(env) & HCR_TGE);
48
- sf = extract32(insn, 31, 1);
40
49
- op = extract32(insn, 24, 1); /* 0: CBZ; 1: CBNZ */
41
- if (el < 2 && mdcr_el2_tdosa && !arm_is_secure_below_el3(env)) {
50
- rt = extract32(insn, 0, 5);
42
+ if (el < 2 && mdcr_el2_tdosa) {
51
- diff = sextract32(insn, 5, 19) * 4;
43
return CP_ACCESS_TRAP_EL2;
52
-
44
}
53
- tcg_cmp = read_cpu_reg(s, rt, sf);
45
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
54
+ tcg_cmp = read_cpu_reg(s, a->rt, a->sf);
46
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
55
reset_btype(s);
47
bool isread)
56
57
match = gen_disas_label(s);
58
- tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
59
+ tcg_gen_brcondi_i64(a->nz ? TCG_COND_NE : TCG_COND_EQ,
60
tcg_cmp, 0, match.label);
61
gen_goto_tb(s, 0, 4);
62
set_disas_label(s, match);
63
- gen_goto_tb(s, 1, diff);
64
+ gen_goto_tb(s, 1, a->imm);
65
+ return true;
66
}
67
68
/* Test and branch (immediate)
69
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
70
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
48
{
71
{
49
int el = arm_current_el(env);
72
switch (extract32(insn, 25, 7)) {
50
- bool mdcr_el2_tdra = (env->cp15.mdcr_el2 & MDCR_TDRA) ||
73
- case 0x1a: case 0x5a: /* Compare & branch (immediate) */
51
- (env->cp15.mdcr_el2 & MDCR_TDE) ||
74
- disas_comp_b_imm(s, insn);
52
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
75
- break;
53
+ bool mdcr_el2_tdra = (mdcr_el2 & MDCR_TDRA) || (mdcr_el2 & MDCR_TDE) ||
76
case 0x1b: case 0x5b: /* Test & branch (immediate) */
54
(arm_hcr_el2_eff(env) & HCR_TGE);
77
disas_test_b_imm(s, insn);
55
78
break;
56
- if (el < 2 && mdcr_el2_tdra && !arm_is_secure_below_el3(env)) {
57
+ if (el < 2 && mdcr_el2_tdra) {
58
return CP_ACCESS_TRAP_EL2;
59
}
60
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
61
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
62
bool isread)
63
{
64
int el = arm_current_el(env);
65
- bool mdcr_el2_tda = (env->cp15.mdcr_el2 & MDCR_TDA) ||
66
- (env->cp15.mdcr_el2 & MDCR_TDE) ||
67
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
68
+ bool mdcr_el2_tda = (mdcr_el2 & MDCR_TDA) || (mdcr_el2 & MDCR_TDE) ||
69
(arm_hcr_el2_eff(env) & HCR_TGE);
70
71
- if (el < 2 && mdcr_el2_tda && !arm_is_secure_below_el3(env)) {
72
+ if (el < 2 && mdcr_el2_tda) {
73
return CP_ACCESS_TRAP_EL2;
74
}
75
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
76
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpm(CPUARMState *env, const ARMCPRegInfo *ri,
77
bool isread)
78
{
79
int el = arm_current_el(env);
80
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
81
82
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TPM)
83
- && !arm_is_secure_below_el3(env)) {
84
+ if (el < 2 && (mdcr_el2 & MDCR_TPM)) {
85
return CP_ACCESS_TRAP_EL2;
86
}
87
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) {
88
@@ -XXX,XX +XXX,XX @@ static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri,
89
* trapping to EL2 or EL3 for other accesses.
90
*/
91
int el = arm_current_el(env);
92
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
93
94
if (el == 0 && !(env->cp15.c9_pmuserenr & 1)) {
95
return CP_ACCESS_TRAP;
96
}
97
- if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TPM)
98
- && !arm_is_secure_below_el3(env)) {
99
+ if (el < 2 && (mdcr_el2 & MDCR_TPM)) {
100
return CP_ACCESS_TRAP_EL2;
101
}
102
if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TPM)) {
103
@@ -XXX,XX +XXX,XX @@ static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
104
bool enabled, prohibited, filtered;
105
bool secure = arm_is_secure(env);
106
int el = arm_current_el(env);
107
- uint8_t hpmn = env->cp15.mdcr_el2 & MDCR_HPMN;
108
+ uint64_t mdcr_el2 = arm_mdcr_el2_eff(env);
109
+ uint8_t hpmn = mdcr_el2 & MDCR_HPMN;
110
111
if (!arm_feature(env, ARM_FEATURE_PMU)) {
112
return false;
113
@@ -XXX,XX +XXX,XX @@ static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
114
(counter < hpmn || counter == 31)) {
115
e = env->cp15.c9_pmcr & PMCRE;
116
} else {
117
- e = env->cp15.mdcr_el2 & MDCR_HPME;
118
+ e = mdcr_el2 & MDCR_HPME;
119
}
120
enabled = e && (env->cp15.c9_pmcnten & (1 << counter));
121
122
if (!secure) {
123
if (el == 2 && (counter < hpmn || counter == 31)) {
124
- prohibited = env->cp15.mdcr_el2 & MDCR_HPMD;
125
+ prohibited = mdcr_el2 & MDCR_HPMD;
126
} else {
127
prohibited = false;
128
}
129
--
79
--
130
2.20.1
80
2.34.1
131
132
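
For anyone new to decodetree, the CBZ line above ("sf:1 011010 nz:1 ... rt:5" with
imm=%imm19) produces an argument-unpacking step roughly equivalent to the
following hand-written C (decode_cbz() and this struct layout are illustrative
only; the generated decoder looks different):

    #include <stdint.h>
    #include <stdio.h>

    struct arg_cbz { int rt, sf, nz; int64_t imm; };

    /* Hand-rolled equivalent of the CBZ/CBNZ field extraction. */
    static struct arg_cbz decode_cbz(uint32_t insn)
    {
        struct arg_cbz a;
        int64_t imm19 = (insn >> 5) & 0x7ffff;

        if (imm19 & (1 << 18)) {
            imm19 -= 1 << 19;          /* %imm19 is signed... */
        }
        a.imm = imm19 * 4;             /* ...and scaled by 4 */
        a.rt = insn & 0x1f;
        a.nz = (insn >> 24) & 1;       /* 0: CBZ, 1: CBNZ */
        a.sf = (insn >> 31) & 1;
        return a;
    }

    int main(void)
    {
        /* 0xb4000041 is "cbz x1, . + 8" */
        struct arg_cbz a = decode_cbz(0xb4000041);
        printf("rt=%d sf=%d nz=%d imm=%lld\n",
               a.rt, a.sf, a.nz, (long long)a.imm);  /* rt=1 sf=1 nz=0 imm=8 */
        return 0;
    }
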
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the test-and-branch-immediate insns TBZ and TBNZ
2
to decodetree.
2
3
3
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210112104511.36576-15-remi.denis.courmont@huawei.com
6
Message-id: 20230512144106.3608981-16-peter.maydell@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/cpu.h | 2 ++
8
target/arm/tcg/a64.decode | 6 ++++++
9
target/arm/internals.h | 2 ++
9
target/arm/tcg/translate-a64.c | 25 +++++--------------------
10
target/arm/helper.c | 6 ++++++
10
2 files changed, 11 insertions(+), 20 deletions(-)
11
target/arm/tlb_helper.c | 3 +++
12
4 files changed, 13 insertions(+)
13
11
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
15
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
14
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/cpu.h
15
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
16
@@ -XXX,XX +XXX,XX @@ BL 1 00101 .......................... @branch
19
#define HCR_TWEDEN (1ULL << 59)
17
&cbz rt imm sf nz
20
#define HCR_TWEDEL MAKE_64BIT_MASK(60, 4)
18
21
19
CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
22
+#define HPFAR_NS (1ULL << 63)
23
+
20
+
24
#define SCR_NS (1U << 0)
21
+%imm14 5:s14 !function=times_4
25
#define SCR_IRQ (1U << 1)
22
+%imm31_19 31:1 19:5
26
#define SCR_FIQ (1U << 2)
23
+&tbz rt imm nz bitpos
27
diff --git a/target/arm/internals.h b/target/arm/internals.h
24
+
25
+TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
28
index XXXXXXX..XXXXXXX 100644
27
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/internals.h
28
--- a/target/arm/tcg/translate-a64.c
30
+++ b/target/arm/internals.h
29
+++ b/target/arm/tcg/translate-a64.c
31
@@ -XXX,XX +XXX,XX @@ typedef enum ARMFaultType {
30
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_cbz *a)
32
* @s2addr: Address that caused a fault at stage 2
33
* @stage2: True if we faulted at stage 2
34
* @s1ptw: True if we faulted at stage 2 while doing a stage 1 page-table walk
35
+ * @s1ns: True if we faulted on a non-secure IPA while in secure state
36
* @ea: True if we should set the EA (external abort type) bit in syndrome
37
*/
38
typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
39
@@ -XXX,XX +XXX,XX @@ struct ARMMMUFaultInfo {
40
int domain;
41
bool stage2;
42
bool s1ptw;
43
+ bool s1ns;
44
bool ea;
45
};
46
47
diff --git a/target/arm/helper.c b/target/arm/helper.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/helper.c
50
+++ b/target/arm/helper.c
51
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
52
target_el = 3;
53
} else {
54
env->cp15.hpfar_el2 = extract64(fi.s2addr, 12, 47) << 4;
55
+ if (arm_is_secure_below_el3(env) && fi.s1ns) {
56
+ env->cp15.hpfar_el2 |= HPFAR_NS;
57
+ }
58
target_el = 2;
59
}
60
take_exc = true;
61
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
62
fi->s2addr = addr;
63
fi->stage2 = true;
64
fi->s1ptw = true;
65
+ fi->s1ns = !*is_secure;
66
return ~0;
67
}
68
if ((arm_hcr_el2_eff(env) & HCR_PTW) &&
69
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
70
fi->s2addr = addr;
71
fi->stage2 = true;
72
fi->s1ptw = true;
73
+ fi->s1ns = !*is_secure;
74
return ~0;
75
}
76
77
@@ -XXX,XX +XXX,XX @@ do_fault:
78
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
79
fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
80
mmu_idx == ARMMMUIdx_Stage2_S);
81
+ fi->s1ns = mmu_idx == ARMMMUIdx_Stage2;
82
return true;
31
return true;
83
}
32
}
84
33
85
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
34
-/* Test and branch (immediate)
86
index XXXXXXX..XXXXXXX 100644
35
- * 31 30 25 24 23 19 18 5 4 0
87
--- a/target/arm/tlb_helper.c
36
- * +----+-------------+----+-------+-------------+------+
88
+++ b/target/arm/tlb_helper.c
37
- * | b5 | 0 1 1 0 1 1 | op | b40 | imm14 | Rt |
89
@@ -XXX,XX +XXX,XX @@ static void QEMU_NORETURN arm_deliver_fault(ARMCPU *cpu, vaddr addr,
38
- * +----+-------------+----+-------+-------------+------+
90
if (fi->stage2) {
39
- */
91
target_el = 2;
40
-static void disas_test_b_imm(DisasContext *s, uint32_t insn)
92
env->cp15.hpfar_el2 = extract64(fi->s2addr, 12, 47) << 4;
41
+static bool trans_TBZ(DisasContext *s, arg_tbz *a)
93
+ if (arm_is_secure_below_el3(env) && fi->s1ns) {
42
{
94
+ env->cp15.hpfar_el2 |= HPFAR_NS;
43
- unsigned int bit_pos, op, rt;
95
+ }
44
- int64_t diff;
96
}
45
DisasLabel match;
97
same_el = (arm_current_el(env) == target_el);
46
TCGv_i64 tcg_cmp;
98
47
48
- bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
49
- op = extract32(insn, 24, 1); /* 0: TBZ; 1: TBNZ */
50
- diff = sextract32(insn, 5, 14) * 4;
51
- rt = extract32(insn, 0, 5);
52
-
53
tcg_cmp = tcg_temp_new_i64();
54
- tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, rt), (1ULL << bit_pos));
55
+ tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, a->rt), 1ULL << a->bitpos);
56
57
reset_btype(s);
58
59
match = gen_disas_label(s);
60
- tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
61
+ tcg_gen_brcondi_i64(a->nz ? TCG_COND_NE : TCG_COND_EQ,
62
tcg_cmp, 0, match.label);
63
gen_goto_tb(s, 0, 4);
64
set_disas_label(s, match);
65
- gen_goto_tb(s, 1, diff);
66
+ gen_goto_tb(s, 1, a->imm);
67
+ return true;
68
}
69
70
/* Conditional branch (immediate)
71
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
72
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
73
{
74
switch (extract32(insn, 25, 7)) {
75
- case 0x1b: case 0x5b: /* Test & branch (immediate) */
76
- disas_test_b_imm(s, insn);
77
- break;
78
case 0x2a: /* Conditional branch (immediate) */
79
disas_cond_b_imm(s, insn);
80
break;
99
--
81
--
100
2.20.1
82
2.34.1
101
102
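
The one subtlety in the TBZ/TBNZ pattern above is %imm31_19, which concatenates
insn[31] (the architectural b5 field) on top of insn[23:19] (b40) to form the
6-bit position of the tested bit. A hand-written equivalent (tbz_bitpos() is an
invented name for this sketch):

    #include <stdint.h>
    #include <stdio.h>

    /* bitpos = insn[31]:insn[23:19], as the "%imm31_19 31:1 19:5" field
     * spec assembles it. */
    static unsigned tbz_bitpos(uint32_t insn)
    {
        return (((insn >> 31) & 1) << 5) | ((insn >> 19) & 0x1f);
    }

    int main(void)
    {
        /* 0xb6f80041 is "tbz x1, #63, . + 8" */
        printf("bit %u\n", tbz_bitpos(0xb6f80041));  /* prints "bit 63" */
        return 0;
    }
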
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the immediate conditional branch insn B.cond to
2
decodetree.
2
3
3
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20210112104511.36576-14-remi.denis.courmont@huawei.com
6
Message-id: 20230512144106.3608981-17-peter.maydell@linaro.org
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
7
---
7
---
8
target/arm/cpu.h | 6 +++-
8
target/arm/tcg/a64.decode | 2 ++
9
target/arm/internals.h | 22 ++++++++++++
9
target/arm/tcg/translate-a64.c | 30 ++++++------------------------
10
target/arm/helper.c | 78 +++++++++++++++++++++++++++++-------------
10
2 files changed, 8 insertions(+), 24 deletions(-)
11
3 files changed, 81 insertions(+), 25 deletions(-)
12
11
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
14
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
14
--- a/target/arm/tcg/a64.decode
16
+++ b/target/arm/cpu.h
15
+++ b/target/arm/tcg/a64.decode
17
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
16
@@ -XXX,XX +XXX,XX @@ CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
18
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
17
&tbz rt imm nz bitpos
19
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
18
20
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
19
TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19
21
+ ARMMMUIdx_Stage1_SE0 = 3 | ARM_MMU_IDX_NOTLB,
20
+
22
+ ARMMMUIdx_Stage1_SE1 = 4 | ARM_MMU_IDX_NOTLB,
21
+B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
23
+ ARMMMUIdx_Stage1_SE1_PAN = 5 | ARM_MMU_IDX_NOTLB,
22
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
24
/*
25
* Not allocated a TLB: used only for second stage of an S12 page
26
* table walk, or for descriptor loads during first stage of an S1
27
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
28
* then various TLB flush insns which currently are no-ops or flush
29
* only stage 1 MMU indexes will need to change to flush stage 2.
30
*/
31
- ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
32
+ ARMMMUIdx_Stage2 = 6 | ARM_MMU_IDX_NOTLB,
33
+ ARMMMUIdx_Stage2_S = 7 | ARM_MMU_IDX_NOTLB,
34
35
/*
36
* M-profile.
37
diff --git a/target/arm/internals.h b/target/arm/internals.h
38
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
39
--- a/target/arm/internals.h
24
--- a/target/arm/tcg/translate-a64.c
40
+++ b/target/arm/internals.h
25
+++ b/target/arm/tcg/translate-a64.c
41
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
26
@@ -XXX,XX +XXX,XX @@ static bool trans_TBZ(DisasContext *s, arg_tbz *a)
42
case ARMMMUIdx_Stage1_E0:
43
case ARMMMUIdx_Stage1_E1:
44
case ARMMMUIdx_Stage1_E1_PAN:
45
+ case ARMMMUIdx_Stage1_SE0:
46
+ case ARMMMUIdx_Stage1_SE1:
47
+ case ARMMMUIdx_Stage1_SE1_PAN:
48
case ARMMMUIdx_E10_0:
49
case ARMMMUIdx_E10_1:
50
case ARMMMUIdx_E10_1_PAN:
51
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
52
case ARMMMUIdx_SE20_0:
53
case ARMMMUIdx_SE20_2:
54
case ARMMMUIdx_SE20_2_PAN:
55
+ case ARMMMUIdx_Stage1_SE0:
56
+ case ARMMMUIdx_Stage1_SE1:
57
+ case ARMMMUIdx_Stage1_SE1_PAN:
58
case ARMMMUIdx_SE2:
59
+ case ARMMMUIdx_Stage2_S:
60
case ARMMMUIdx_MSPrivNegPri:
61
case ARMMMUIdx_MSUserNegPri:
62
case ARMMMUIdx_MSPriv:
63
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
64
{
65
switch (mmu_idx) {
66
case ARMMMUIdx_Stage1_E1_PAN:
67
+ case ARMMMUIdx_Stage1_SE1_PAN:
68
case ARMMMUIdx_E10_1_PAN:
69
case ARMMMUIdx_E20_2_PAN:
70
case ARMMMUIdx_SE10_1_PAN:
71
@@ -XXX,XX +XXX,XX @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
72
case ARMMMUIdx_E20_2:
73
case ARMMMUIdx_E20_2_PAN:
74
case ARMMMUIdx_Stage2:
75
+ case ARMMMUIdx_Stage2_S:
76
case ARMMMUIdx_SE2:
77
case ARMMMUIdx_E2:
78
return 2;
79
case ARMMMUIdx_SE3:
80
return 3;
81
case ARMMMUIdx_SE10_0:
82
+ case ARMMMUIdx_Stage1_SE0:
83
return arm_el_is_aa64(env, 3) ? 1 : 3;
84
case ARMMMUIdx_SE10_1:
85
case ARMMMUIdx_SE10_1_PAN:
86
case ARMMMUIdx_Stage1_E0:
87
case ARMMMUIdx_Stage1_E1:
88
case ARMMMUIdx_Stage1_E1_PAN:
89
+ case ARMMMUIdx_Stage1_SE1:
90
+ case ARMMMUIdx_Stage1_SE1_PAN:
91
case ARMMMUIdx_E10_0:
92
case ARMMMUIdx_E10_1:
93
case ARMMMUIdx_E10_1_PAN:
94
@@ -XXX,XX +XXX,XX @@ static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
95
if (mmu_idx == ARMMMUIdx_Stage2) {
96
return &env->cp15.vtcr_el2;
97
}
98
+ if (mmu_idx == ARMMMUIdx_Stage2_S) {
99
+ /*
100
+ * Note: Secure stage 2 nominally shares fields from VTCR_EL2, but
101
+ * those are not currently used by QEMU, so just return VSTCR_EL2.
102
+ */
103
+ return &env->cp15.vstcr_el2;
104
+ }
105
return &env->cp15.tcr_el[regime_el(env, mmu_idx)];
106
}
107
108
@@ -XXX,XX +XXX,XX @@ static inline bool arm_mmu_idx_is_stage1_of_2(ARMMMUIdx mmu_idx)
109
case ARMMMUIdx_Stage1_E0:
110
case ARMMMUIdx_Stage1_E1:
111
case ARMMMUIdx_Stage1_E1_PAN:
112
+ case ARMMMUIdx_Stage1_SE0:
113
+ case ARMMMUIdx_Stage1_SE1:
114
+ case ARMMMUIdx_Stage1_SE1_PAN:
115
return true;
116
default:
117
return false;
118
diff --git a/target/arm/helper.c b/target/arm/helper.c
119
index XXXXXXX..XXXXXXX 100644
120
--- a/target/arm/helper.c
121
+++ b/target/arm/helper.c
122
@@ -XXX,XX +XXX,XX @@ static uint64_t do_ats_write(CPUARMState *env, uint64_t value,
123
uint32_t syn, fsr, fsc;
124
bool take_exc = false;
125
126
- if (fi.s1ptw && current_el == 1 && !arm_is_secure(env)
127
+ if (fi.s1ptw && current_el == 1
128
&& arm_mmu_idx_is_stage1_of_2(mmu_idx)) {
129
/*
130
* Synchronous stage 2 fault on an access made as part of the
131
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
132
/* fall through */
133
case 1:
134
if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
135
- mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
136
+ mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
137
: ARMMMUIdx_Stage1_E1_PAN);
138
} else {
139
- mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
140
+ mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
141
}
142
break;
143
default:
144
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
145
mmu_idx = ARMMMUIdx_SE10_0;
146
break;
147
case 2:
148
+ g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
149
mmu_idx = ARMMMUIdx_Stage1_E0;
150
break;
151
case 1:
152
- mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_Stage1_E0;
153
+ mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
154
break;
155
default:
156
g_assert_not_reached();
157
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
158
switch (ri->opc1) {
159
case 0: /* AT S1E1R, AT S1E1W, AT S1E1RP, AT S1E1WP */
160
if (ri->crm == 9 && (env->pstate & PSTATE_PAN)) {
161
- mmu_idx = (secure ? ARMMMUIdx_SE10_1_PAN
162
+ mmu_idx = (secure ? ARMMMUIdx_Stage1_SE1_PAN
163
: ARMMMUIdx_Stage1_E1_PAN);
164
} else {
165
- mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_Stage1_E1;
166
+ mmu_idx = secure ? ARMMMUIdx_Stage1_SE1 : ARMMMUIdx_Stage1_E1;
167
}
168
break;
169
case 4: /* AT S1E2R, AT S1E2W */
170
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
171
}
172
break;
173
case 2: /* AT S1E0R, AT S1E0W */
174
- mmu_idx = secure ? ARMMMUIdx_SE10_0 : ARMMMUIdx_Stage1_E0;
175
+ mmu_idx = secure ? ARMMMUIdx_Stage1_SE0 : ARMMMUIdx_Stage1_E0;
176
break;
177
case 4: /* AT S12E1R, AT S12E1W */
178
mmu_idx = secure ? ARMMMUIdx_SE10_1 : ARMMMUIdx_E10_1;
179
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
180
181
hcr_el2 = arm_hcr_el2_eff(env);
182
183
- if (mmu_idx == ARMMMUIdx_Stage2) {
184
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
185
/* HCR.DC means HCR.VM behaves as 1 */
186
return (hcr_el2 & (HCR_DC | HCR_VM)) == 0;
187
}
188
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
189
if (mmu_idx == ARMMMUIdx_Stage2) {
190
return env->cp15.vttbr_el2;
191
}
192
+ if (mmu_idx == ARMMMUIdx_Stage2_S) {
193
+ return env->cp15.vsttbr_el2;
194
+ }
195
if (ttbrn == 0) {
196
return env->cp15.ttbr0_el[regime_el(env, mmu_idx)];
197
} else {
198
@@ -XXX,XX +XXX,XX @@ static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
199
static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
200
{
201
switch (mmu_idx) {
202
+ case ARMMMUIdx_SE10_0:
203
+ return ARMMMUIdx_Stage1_SE0;
204
+ case ARMMMUIdx_SE10_1:
205
+ return ARMMMUIdx_Stage1_SE1;
206
+ case ARMMMUIdx_SE10_1_PAN:
207
+ return ARMMMUIdx_Stage1_SE1_PAN;
208
case ARMMMUIdx_E10_0:
209
return ARMMMUIdx_Stage1_E0;
210
case ARMMMUIdx_E10_1:
211
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
212
case ARMMMUIdx_E20_0:
213
case ARMMMUIdx_SE20_0:
214
case ARMMMUIdx_Stage1_E0:
215
+ case ARMMMUIdx_Stage1_SE0:
216
case ARMMMUIdx_MUser:
217
case ARMMMUIdx_MSUser:
218
case ARMMMUIdx_MUserNegPri:
219
@@ -XXX,XX +XXX,XX @@ static int get_S1prot(CPUARMState *env, ARMMMUIdx mmu_idx, bool is_aa64,
220
int wxn = 0;
221
222
assert(mmu_idx != ARMMMUIdx_Stage2);
223
+ assert(mmu_idx != ARMMMUIdx_Stage2_S);
224
225
user_rw = simple_ap_to_rw_prot_is_user(ap, true);
226
if (is_user) {
227
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
228
hwaddr s2pa;
229
int s2prot;
230
int ret;
231
+ ARMMMUIdx s2_mmu_idx = *is_secure ? ARMMMUIdx_Stage2_S
232
+ : ARMMMUIdx_Stage2;
233
ARMCacheAttrs cacheattrs = {};
234
MemTxAttrs txattrs = {};
235
236
- assert(!*is_secure); /* TODO: S-EL2 */
237
-
238
- ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, ARMMMUIdx_Stage2,
239
- false,
240
+ ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx, false,
241
&s2pa, &txattrs, &s2prot, &s2size, fi,
242
&cacheattrs);
243
if (ret) {
244
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
245
{
246
if (regime_has_2_ranges(mmu_idx)) {
247
return extract64(tcr, 37, 2);
248
- } else if (mmu_idx == ARMMMUIdx_Stage2) {
249
+ } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
250
return 0; /* VTCR_EL2 */
251
} else {
252
/* Replicate the single TBI bit so we always have 2 bits. */
253
@@ -XXX,XX +XXX,XX @@ static int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
254
{
255
if (regime_has_2_ranges(mmu_idx)) {
256
return extract64(tcr, 51, 2);
257
- } else if (mmu_idx == ARMMMUIdx_Stage2) {
258
+ } else if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
259
return 0; /* VTCR_EL2 */
260
} else {
261
/* Replicate the single TBID bit so we always have 2 bits. */
262
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
263
tsz = extract32(tcr, 0, 6);
264
using64k = extract32(tcr, 14, 1);
265
using16k = extract32(tcr, 15, 1);
266
- if (mmu_idx == ARMMMUIdx_Stage2) {
267
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
268
/* VTCR_EL2 */
269
hpd = false;
270
} else {
271
@@ -XXX,XX +XXX,XX @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
272
int select, tsz;
273
bool epd, hpd;
274
275
+ assert(mmu_idx != ARMMMUIdx_Stage2_S);
276
+
277
if (mmu_idx == ARMMMUIdx_Stage2) {
278
/* VTCR */
279
bool sext = extract32(tcr, 4, 1);
280
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
281
goto do_fault;
282
}
283
284
- if (mmu_idx != ARMMMUIdx_Stage2) {
285
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
286
/* The starting level depends on the virtual address size (which can
287
* be up to 48 bits) and the translation granule size. It indicates
288
* the number of strides (stride bits at a time) needed to
289
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
290
attrs = extract64(descriptor, 2, 10)
291
| (extract64(descriptor, 52, 12) << 10);
292
293
- if (mmu_idx == ARMMMUIdx_Stage2) {
294
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
295
/* Stage 2 table descriptors do not include any attribute fields */
296
break;
297
}
298
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
299
300
ap = extract32(attrs, 4, 2);
301
302
- if (mmu_idx == ARMMMUIdx_Stage2) {
303
- ns = true;
304
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
305
+ ns = mmu_idx == ARMMMUIdx_Stage2;
306
xn = extract32(attrs, 11, 2);
307
*prot = get_S2prot(env, ap, xn, s1_is_el0);
308
} else {
309
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
310
arm_tlb_bti_gp(txattrs) = true;
311
}
312
313
- if (mmu_idx == ARMMMUIdx_Stage2) {
314
+ if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
315
cacheattrs->attrs = convert_stage2_attrs(env, extract32(attrs, 0, 4));
316
} else {
317
/* Index into MAIR registers for cache attributes */
318
@@ -XXX,XX +XXX,XX @@ do_fault:
319
fi->type = fault_type;
320
fi->level = level;
321
/* Tag the error as S2 for failed S1 PTW at S2 or ordinary S2. */
322
- fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2);
323
+ fi->stage2 = fi->s1ptw || (mmu_idx == ARMMMUIdx_Stage2 ||
324
+ mmu_idx == ARMMMUIdx_Stage2_S);
325
return true;
27
return true;
326
}
28
}
327
29
328
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
30
-/* Conditional branch (immediate)
329
int s2_prot;
31
- * 31 25 24 23 5 4 3 0
330
int ret;
32
- * +---------------+----+---------------------+----+------+
331
ARMCacheAttrs cacheattrs2 = {};
33
- * | 0 1 0 1 0 1 0 | o1 | imm19 | o0 | cond |
332
+ ARMMMUIdx s2_mmu_idx;
34
- * +---------------+----+---------------------+----+------+
333
+ bool is_el0;
35
- */
334
36
-static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
335
ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
37
+static bool trans_B_cond(DisasContext *s, arg_B_cond *a)
336
attrs, prot, page_size, fi, cacheattrs);
38
{
337
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
39
- unsigned int cond;
338
return ret;
40
- int64_t diff;
339
}
41
-
340
42
- if ((insn & (1 << 4)) || (insn & (1 << 24))) {
341
+ s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
43
- unallocated_encoding(s);
342
+ is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
44
- return;
343
+
45
- }
344
/* S1 is done. Now do S2 translation. */
46
- diff = sextract32(insn, 5, 19) * 4;
345
- ret = get_phys_addr_lpae(env, ipa, access_type, ARMMMUIdx_Stage2,
47
- cond = extract32(insn, 0, 4);
346
- mmu_idx == ARMMMUIdx_E10_0,
48
-
347
+ ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0,
49
reset_btype(s);
348
phys_ptr, attrs, &s2_prot,
50
- if (cond < 0x0e) {
349
page_size, fi, &cacheattrs2);
51
+ if (a->cond < 0x0e) {
350
fi->s2addr = ipa;
52
/* genuinely conditional branches */
351
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
53
DisasLabel match = gen_disas_label(s);
352
cacheattrs->shareability = 0;
54
- arm_gen_test_cc(cond, match.label);
353
}
55
+ arm_gen_test_cc(a->cond, match.label);
354
*cacheattrs = combine_cacheattrs(*cacheattrs, cacheattrs2);
56
gen_goto_tb(s, 0, 4);
355
+
57
set_disas_label(s, match);
356
+ /* Check if IPA translates to secure or non-secure PA space. */
58
- gen_goto_tb(s, 1, diff);
357
+ if (arm_is_secure_below_el3(env)) {
59
+ gen_goto_tb(s, 1, a->imm);
358
+ if (attrs->secure) {
60
} else {
359
+ attrs->secure =
61
/* 0xe and 0xf are both "always" conditions */
360
+ !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
62
- gen_goto_tb(s, 0, diff);
361
+ } else {
63
+ gen_goto_tb(s, 0, a->imm);
362
+ attrs->secure =
64
}
363
+ !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
65
+ return true;
364
+ || (env->cp15.vstcr_el2.raw_tcr & VSTCR_SA));
66
}
365
+ }
67
366
+ }
68
/* HINT instruction group, including various allocated HINTs */
367
return 0;
69
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
368
} else {
70
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
369
/*
71
{
370
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
72
switch (extract32(insn, 25, 7)) {
371
* MMU disabled. S1 addresses within aa64 translation regimes are
73
- case 0x2a: /* Conditional branch (immediate) */
372
* still checked for bounds -- see AArch64.TranslateAddressS1Off.
74
- disas_cond_b_imm(s, insn);
373
*/
75
- break;
374
- if (mmu_idx != ARMMMUIdx_Stage2) {
76
case 0x6a: /* Exception generation / System */
375
+ if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
77
if (insn & (1 << 24)) {
376
int r_el = regime_el(env, mmu_idx);
78
if (extract32(insn, 22, 2) == 0) {
377
if (arm_el_is_aa64(env, r_el)) {
378
int pamax = arm_pamax(env_archcpu(env));
379
--
79
--
380
2.20.1
80
2.34.1
381
382
1
From: Mihai Carabas <mihai.carabas@oracle.com>
1
Convert the simple (non-pointer-auth) BR, BLR and RET insns
2
to decodetree.
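
As an illustration of the shape of these conversions, the new BR handler
(gathered from the a64.decode and translate-a64.c hunks below) is just:

    /* a64.decode:  BR 1101011 0000 11111 000000 rn:5 00000 &r */
    static bool trans_BR(DisasContext *s, arg_r *a)
    {
        gen_a64_set_pc(s, cpu_reg(s, a->rn));
        set_btype_for_br(s, a->rn);
        s->base.is_jmp = DISAS_JUMP;
        return true;
    }

BLR additionally writes the link register, and RET skips the BTYPE
update; see the corresponding trans functions below.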
2
3
3
Add PCI interface support for the PVPANIC device. Create a new file pvpanic-pci.c
4
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
where the PCI specific routines reside and update the build system with the new
5
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
5
files and config structure.
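
The core of the new model is a small realize hook which reuses the
generic pvpanic MMIO region and exposes it as BAR 0 (as added in
hw/misc/pvpanic-pci.c below):

    static void pvpanic_pci_realizefn(PCIDevice *dev, Error **errp)
    {
        PVPanicPCIState *s = PVPANIC_PCI_DEVICE(dev);
        PVPanicState *ps = &s->pvpanic;

        pvpanic_setup_io(&s->pvpanic, DEVICE(s), 2);
        pci_register_bar(dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &ps->mr);
    }

The device is registered as 1b36:0011 and can be added to a machine
with "-device pvpanic-pci".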
6
Message-id: 20230512144106.3608981-18-peter.maydell@linaro.org
7
---
8
target/arm/tcg/a64.decode | 5 ++++
9
target/arm/tcg/translate-a64.c | 55 ++++++++++++++++++++++++++++++----
10
2 files changed, 54 insertions(+), 6 deletions(-)
6
11
7
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
12
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
8
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
[PMM: wrapped one long line]
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
docs/specs/pci-ids.txt | 1 +
14
include/hw/misc/pvpanic.h | 1 +
15
include/hw/pci/pci.h | 1 +
16
hw/misc/pvpanic-pci.c | 95 +++++++++++++++++++++++++++++++++++++++
17
hw/misc/Kconfig | 6 +++
18
hw/misc/meson.build | 1 +
19
6 files changed, 105 insertions(+)
20
create mode 100644 hw/misc/pvpanic-pci.c
21
22
diff --git a/docs/specs/pci-ids.txt b/docs/specs/pci-ids.txt
23
index XXXXXXX..XXXXXXX 100644
13
index XXXXXXX..XXXXXXX 100644
24
--- a/docs/specs/pci-ids.txt
14
--- a/target/arm/tcg/a64.decode
25
+++ b/docs/specs/pci-ids.txt
15
+++ b/target/arm/tcg/a64.decode
26
@@ -XXX,XX +XXX,XX @@ PCI devices (other than virtio):
16
@@ -XXX,XX +XXX,XX @@
27
1b36:000d PCI xhci usb host adapter
17
# This file is processed by scripts/decodetree.py
28
1b36:000f mdpy (mdev sample device), linux/samples/vfio-mdev/mdpy.c
18
#
29
1b36:0010 PCIe NVMe device (-device nvme)
19
30
+1b36:0011 PCI PVPanic device (-device pvpanic-pci)
20
+&r rn
31
21
&ri rd imm
32
All these devices are documented in docs/specs.
22
&rri_sf rd rn imm sf
33
23
&i imm
34
diff --git a/include/hw/misc/pvpanic.h b/include/hw/misc/pvpanic.h
24
@@ -XXX,XX +XXX,XX @@ CBZ sf:1 011010 nz:1 ................... rt:5 &cbz imm=%imm19
25
TBZ . 011011 nz:1 ..... .............. rt:5 &tbz imm=%imm14 bitpos=%imm31_19
26
27
B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
28
+
29
+BR 1101011 0000 11111 000000 rn:5 00000 &r
30
+BLR 1101011 0001 11111 000000 rn:5 00000 &r
31
+RET 1101011 0010 11111 000000 rn:5 00000 &r
32
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
35
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
36
--- a/include/hw/misc/pvpanic.h
34
--- a/target/arm/tcg/translate-a64.c
37
+++ b/include/hw/misc/pvpanic.h
35
+++ b/target/arm/tcg/translate-a64.c
38
@@ -XXX,XX +XXX,XX @@
36
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond(DisasContext *s, arg_B_cond *a)
39
#include "qom/object.h"
37
return true;
40
38
}
41
#define TYPE_PVPANIC_ISA_DEVICE "pvpanic"
39
42
+#define TYPE_PVPANIC_PCI_DEVICE "pvpanic-pci"
40
+static void set_btype_for_br(DisasContext *s, int rn)
43
41
+{
44
#define PVPANIC_IOPORT_PROP "ioport"
42
+ if (dc_isar_feature(aa64_bti, s)) {
45
43
+ /* BR to {x16,x17} or !guard -> 1, else 3. */
46
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
44
+ set_btype(s, rn == 16 || rn == 17 || !s->guarded_page ? 1 : 3);
47
index XXXXXXX..XXXXXXX 100644
48
--- a/include/hw/pci/pci.h
49
+++ b/include/hw/pci/pci.h
50
@@ -XXX,XX +XXX,XX @@ extern bool pci_available;
51
#define PCI_DEVICE_ID_REDHAT_PCIE_BRIDGE 0x000e
52
#define PCI_DEVICE_ID_REDHAT_MDPY 0x000f
53
#define PCI_DEVICE_ID_REDHAT_NVME 0x0010
54
+#define PCI_DEVICE_ID_REDHAT_PVPANIC 0x0011
55
#define PCI_DEVICE_ID_REDHAT_QXL 0x0100
56
57
#define FMT_PCIBUS PRIx64
58
diff --git a/hw/misc/pvpanic-pci.c b/hw/misc/pvpanic-pci.c
59
new file mode 100644
60
index XXXXXXX..XXXXXXX
61
--- /dev/null
62
+++ b/hw/misc/pvpanic-pci.c
63
@@ -XXX,XX +XXX,XX @@
64
+/*
65
+ * QEMU simulated PCI pvpanic device.
66
+ *
67
+ * Copyright (C) 2020 Oracle
68
+ *
69
+ * Authors:
70
+ * Mihai Carabas <mihai.carabas@oracle.com>
71
+ *
72
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
73
+ * See the COPYING file in the top-level directory.
74
+ *
75
+ */
76
+
77
+#include "qemu/osdep.h"
78
+#include "qemu/log.h"
79
+#include "qemu/module.h"
80
+#include "sysemu/runstate.h"
81
+
82
+#include "hw/nvram/fw_cfg.h"
83
+#include "hw/qdev-properties.h"
84
+#include "migration/vmstate.h"
85
+#include "hw/misc/pvpanic.h"
86
+#include "qom/object.h"
87
+#include "hw/pci/pci.h"
88
+
89
+OBJECT_DECLARE_SIMPLE_TYPE(PVPanicPCIState, PVPANIC_PCI_DEVICE)
90
+
91
+/*
92
+ * PVPanicPCIState for PCI device
93
+ */
94
+typedef struct PVPanicPCIState {
95
+ PCIDevice dev;
96
+ PVPanicState pvpanic;
97
+} PVPanicPCIState;
98
+
99
+static const VMStateDescription vmstate_pvpanic_pci = {
100
+ .name = "pvpanic-pci",
101
+ .version_id = 1,
102
+ .minimum_version_id = 1,
103
+ .fields = (VMStateField[]) {
104
+ VMSTATE_PCI_DEVICE(dev, PVPanicPCIState),
105
+ VMSTATE_END_OF_LIST()
106
+ }
45
+ }
107
+};
108
+
109
+static void pvpanic_pci_realizefn(PCIDevice *dev, Error **errp)
110
+{
111
+ PVPanicPCIState *s = PVPANIC_PCI_DEVICE(dev);
112
+ PVPanicState *ps = &s->pvpanic;
113
+
114
+ pvpanic_setup_io(&s->pvpanic, DEVICE(s), 2);
115
+
116
+ pci_register_bar(dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &ps->mr);
117
+}
46
+}
118
+
47
+
119
+static Property pvpanic_pci_properties[] = {
48
+static void set_btype_for_blr(DisasContext *s)
120
+ DEFINE_PROP_UINT8("events", PVPanicPCIState, pvpanic.events,
121
+ PVPANIC_PANICKED | PVPANIC_CRASHLOADED),
122
+ DEFINE_PROP_END_OF_LIST(),
123
+};
124
+
125
+static void pvpanic_pci_class_init(ObjectClass *klass, void *data)
126
+{
49
+{
127
+ DeviceClass *dc = DEVICE_CLASS(klass);
50
+ if (dc_isar_feature(aa64_bti, s)) {
128
+ PCIDeviceClass *pc = PCI_DEVICE_CLASS(klass);
51
+ /* BLR sets BTYPE to 2, regardless of source guarded page. */
129
+
52
+ set_btype(s, 2);
130
+ device_class_set_props(dc, pvpanic_pci_properties);
53
+ }
131
+
132
+ pc->realize = pvpanic_pci_realizefn;
133
+ pc->vendor_id = PCI_VENDOR_ID_REDHAT;
134
+ pc->device_id = PCI_DEVICE_ID_REDHAT_PVPANIC;
135
+ pc->revision = 1;
136
+ pc->class_id = PCI_CLASS_SYSTEM_OTHER;
137
+ dc->vmsd = &vmstate_pvpanic_pci;
138
+
139
+ set_bit(DEVICE_CATEGORY_MISC, dc->categories);
140
+}
54
+}
141
+
55
+
142
+static TypeInfo pvpanic_pci_info = {
56
+static bool trans_BR(DisasContext *s, arg_r *a)
143
+ .name = TYPE_PVPANIC_PCI_DEVICE,
144
+ .parent = TYPE_PCI_DEVICE,
145
+ .instance_size = sizeof(PVPanicPCIState),
146
+ .class_init = pvpanic_pci_class_init,
147
+ .interfaces = (InterfaceInfo[]) {
148
+ { INTERFACE_CONVENTIONAL_PCI_DEVICE },
149
+ { }
150
+ }
151
+};
152
+
153
+static void pvpanic_register_types(void)
154
+{
57
+{
155
+ type_register_static(&pvpanic_pci_info);
58
+ gen_a64_set_pc(s, cpu_reg(s, a->rn));
59
+ set_btype_for_br(s, a->rn);
60
+ s->base.is_jmp = DISAS_JUMP;
61
+ return true;
156
+}
62
+}
157
+
63
+
158
+type_init(pvpanic_register_types);
64
+static bool trans_BLR(DisasContext *s, arg_r *a)
159
diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig
65
+{
160
index XXXXXXX..XXXXXXX 100644
66
+ TCGv_i64 dst = cpu_reg(s, a->rn);
161
--- a/hw/misc/Kconfig
67
+ TCGv_i64 lr = cpu_reg(s, 30);
162
+++ b/hw/misc/Kconfig
68
+ if (dst == lr) {
163
@@ -XXX,XX +XXX,XX @@ config IOTKIT_SYSINFO
69
+ TCGv_i64 tmp = tcg_temp_new_i64();
164
config PVPANIC_COMMON
70
+ tcg_gen_mov_i64(tmp, dst);
165
bool
71
+ dst = tmp;
166
72
+ }
167
+config PVPANIC_PCI
73
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
168
+ bool
74
+ gen_a64_set_pc(s, dst);
169
+ default y if PCI_DEVICES
75
+ set_btype_for_blr(s);
170
+ depends on PCI
76
+ s->base.is_jmp = DISAS_JUMP;
171
+ select PVPANIC_COMMON
77
+ return true;
78
+}
172
+
79
+
173
config PVPANIC_ISA
80
+static bool trans_RET(DisasContext *s, arg_r *a)
174
bool
81
+{
175
depends on ISA_BUS
82
+ gen_a64_set_pc(s, cpu_reg(s, a->rn));
176
diff --git a/hw/misc/meson.build b/hw/misc/meson.build
83
+ s->base.is_jmp = DISAS_JUMP;
177
index XXXXXXX..XXXXXXX 100644
84
+ return true;
178
--- a/hw/misc/meson.build
85
+}
179
+++ b/hw/misc/meson.build
86
+
180
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ARMSSE_CPUID', if_true: files('armsse-cpuid.c'))
87
/* HINT instruction group, including various allocated HINTs */
181
softmmu_ss.add(when: 'CONFIG_ARMSSE_MHU', if_true: files('armsse-mhu.c'))
88
static void handle_hint(DisasContext *s, uint32_t insn,
182
89
unsigned int op1, unsigned int op2, unsigned int crm)
183
softmmu_ss.add(when: 'CONFIG_PVPANIC_ISA', if_true: files('pvpanic-isa.c'))
90
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
184
+softmmu_ss.add(when: 'CONFIG_PVPANIC_PCI', if_true: files('pvpanic-pci.c'))
91
btype_mod = opc;
185
softmmu_ss.add(when: 'CONFIG_AUX', if_true: files('auxbus.c'))
92
switch (op3) {
186
softmmu_ss.add(when: 'CONFIG_ASPEED_SOC', if_true: files('aspeed_scu.c', 'aspeed_sdmc.c', 'aspeed_xdma.c'))
93
case 0:
187
softmmu_ss.add(when: 'CONFIG_MSF2', if_true: files('msf2-sysreg.c'))
94
- /* BR, BLR, RET */
95
- if (op4 != 0) {
96
- goto do_unallocated;
97
- }
98
- dst = cpu_reg(s, rn);
99
- break;
100
+ /* BR, BLR, RET : handled in decodetree */
101
+ goto do_unallocated;
102
103
case 2:
104
case 3:
188
--
105
--
189
2.20.1
106
2.34.1
190
191
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the single-register pointer-authentication variants of BR,
2
BLR, RET to decodetree. (BRAA/BLRAA are in a different branch of
3
the legacy decoder and will be dealt with in the next commit.)
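
These all funnel through one small helper which applies the pointer
authentication check only when PAuth is active (gathered from the
translate-a64.c hunks below):

    static TCGv_i64 auth_branch_target(DisasContext *s, TCGv_i64 dst,
                                       TCGv_i64 modifier, bool use_key_a)
    {
        TCGv_i64 truedst;

        if (!s->pauth_active) {
            return dst;
        }
        truedst = tcg_temp_new_i64();
        if (use_key_a) {
            gen_helper_autia(truedst, cpu_env, dst, modifier);
        } else {
            gen_helper_autib(truedst, cpu_env, dst, modifier);
        }
        return truedst;
    }

BRAAZ/BRABZ and BLRAAZ/BLRABZ pass a zero modifier; RETAA/RETAB use
SP (cpu_X[31]) as the modifier, as the trans functions below show.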
2
4
3
This adds the MMU indices for EL2 stage 1 in secure state.
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20230512144106.3608981-19-peter.maydell@linaro.org
8
---
9
target/arm/tcg/a64.decode | 7 ++
10
target/arm/tcg/translate-a64.c | 132 +++++++++++++++++++--------------
11
2 files changed, 84 insertions(+), 55 deletions(-)
4
12
5
To keep code contained, which is largely identical between secure and
13
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
6
non-secure modes, the MMU indices are reassigned. The new assignments
7
provide a systematic pattern with a non-secure bit.
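
Illustratively, each non-secure index becomes its secure counterpart
plus a dedicated NS flag (a sketch using values from the cpu.h hunk
below; the 0x10 value of the pre-existing ARM_MMU_IDX_A flag is an
assumption, as it is not shown in this hunk):

    #define ARM_MMU_IDX_A     0x10  /* existing A-profile flag (assumed value) */
    #define ARM_MMU_IDX_A_NS  0x8   /* new: non-secure bit */

    enum {
        ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A,
        ARMMMUIdx_E10_1  = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
    };

Because the secure and non-secure TLB index bits then sit exactly
ARM_MMU_IDX_A_NS apart, a non-secure TLB flush mask can be converted to
the matching secure one with "mask >>= ARM_MMU_IDX_A_NS", which is what
the helper.c hunks below do.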
8
9
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20210112104511.36576-8-remi.denis.courmont@huawei.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
14
target/arm/cpu-param.h | 2 +-
15
target/arm/cpu.h | 35 ++++++----
16
target/arm/internals.h | 12 ++++
17
target/arm/helper.c | 127 ++++++++++++++++++++++++-------------
18
target/arm/translate-a64.c | 4 ++
19
5 files changed, 123 insertions(+), 57 deletions(-)
20
21
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
22
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/cpu-param.h
15
--- a/target/arm/tcg/a64.decode
24
+++ b/target/arm/cpu-param.h
16
+++ b/target/arm/tcg/a64.decode
25
@@ -XXX,XX +XXX,XX @@
17
@@ -XXX,XX +XXX,XX @@ B_cond 0101010 0 ................... 0 cond:4 imm=%imm19
26
# define TARGET_PAGE_BITS_MIN 10
18
BR 1101011 0000 11111 000000 rn:5 00000 &r
27
#endif
19
BLR 1101011 0001 11111 000000 rn:5 00000 &r
28
20
RET 1101011 0010 11111 000000 rn:5 00000 &r
29
-#define NB_MMU_MODES 11
21
+
30
+#define NB_MMU_MODES 15
22
+&braz rn m
31
23
+BRAZ 1101011 0000 11111 00001 m:1 rn:5 11111 &braz # BRAAZ, BRABZ
32
#endif
24
+BLRAZ 1101011 0001 11111 00001 m:1 rn:5 11111 &braz # BLRAAZ, BLRABZ
33
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
25
+
26
+&reta m
27
+RETA 1101011 0010 11111 00001 m:1 11111 11111 &reta # RETAA, RETAB
28
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
34
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
35
--- a/target/arm/cpu.h
30
--- a/target/arm/tcg/translate-a64.c
36
+++ b/target/arm/cpu.h
31
+++ b/target/arm/tcg/translate-a64.c
37
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
32
@@ -XXX,XX +XXX,XX @@ static bool trans_RET(DisasContext *s, arg_r *a)
38
#define ARM_MMU_IDX_NOTLB 0x20 /* does not have a TLB */
33
return true;
39
#define ARM_MMU_IDX_M 0x40 /* M profile */
40
41
+/* Meanings of the bits for A profile mmu idx values */
42
+#define ARM_MMU_IDX_A_NS 0x8
43
+
44
/* Meanings of the bits for M profile mmu idx values */
45
#define ARM_MMU_IDX_M_PRIV 0x1
46
#define ARM_MMU_IDX_M_NEGPRI 0x2
47
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
48
/*
49
* A-profile.
50
*/
51
- ARMMMUIdx_E10_0 = 0 | ARM_MMU_IDX_A,
52
- ARMMMUIdx_E20_0 = 1 | ARM_MMU_IDX_A,
53
+ ARMMMUIdx_SE10_0 = 0 | ARM_MMU_IDX_A,
54
+ ARMMMUIdx_SE20_0 = 1 | ARM_MMU_IDX_A,
55
+ ARMMMUIdx_SE10_1 = 2 | ARM_MMU_IDX_A,
56
+ ARMMMUIdx_SE20_2 = 3 | ARM_MMU_IDX_A,
57
+ ARMMMUIdx_SE10_1_PAN = 4 | ARM_MMU_IDX_A,
58
+ ARMMMUIdx_SE20_2_PAN = 5 | ARM_MMU_IDX_A,
59
+ ARMMMUIdx_SE2 = 6 | ARM_MMU_IDX_A,
60
+ ARMMMUIdx_SE3 = 7 | ARM_MMU_IDX_A,
61
62
- ARMMMUIdx_E10_1 = 2 | ARM_MMU_IDX_A,
63
- ARMMMUIdx_E10_1_PAN = 3 | ARM_MMU_IDX_A,
64
-
65
- ARMMMUIdx_E2 = 4 | ARM_MMU_IDX_A,
66
- ARMMMUIdx_E20_2 = 5 | ARM_MMU_IDX_A,
67
- ARMMMUIdx_E20_2_PAN = 6 | ARM_MMU_IDX_A,
68
-
69
- ARMMMUIdx_SE10_0 = 7 | ARM_MMU_IDX_A,
70
- ARMMMUIdx_SE10_1 = 8 | ARM_MMU_IDX_A,
71
- ARMMMUIdx_SE10_1_PAN = 9 | ARM_MMU_IDX_A,
72
- ARMMMUIdx_SE3 = 10 | ARM_MMU_IDX_A,
73
+ ARMMMUIdx_E10_0 = ARMMMUIdx_SE10_0 | ARM_MMU_IDX_A_NS,
74
+ ARMMMUIdx_E20_0 = ARMMMUIdx_SE20_0 | ARM_MMU_IDX_A_NS,
75
+ ARMMMUIdx_E10_1 = ARMMMUIdx_SE10_1 | ARM_MMU_IDX_A_NS,
76
+ ARMMMUIdx_E20_2 = ARMMMUIdx_SE20_2 | ARM_MMU_IDX_A_NS,
77
+ ARMMMUIdx_E10_1_PAN = ARMMMUIdx_SE10_1_PAN | ARM_MMU_IDX_A_NS,
78
+ ARMMMUIdx_E20_2_PAN = ARMMMUIdx_SE20_2_PAN | ARM_MMU_IDX_A_NS,
79
+ ARMMMUIdx_E2 = ARMMMUIdx_SE2 | ARM_MMU_IDX_A_NS,
80
81
/*
82
* These are not allocated TLBs and are used only for AT system
83
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
84
TO_CORE_BIT(E20_2),
85
TO_CORE_BIT(E20_2_PAN),
86
TO_CORE_BIT(SE10_0),
87
+ TO_CORE_BIT(SE20_0),
88
TO_CORE_BIT(SE10_1),
89
+ TO_CORE_BIT(SE20_2),
90
TO_CORE_BIT(SE10_1_PAN),
91
+ TO_CORE_BIT(SE20_2_PAN),
92
+ TO_CORE_BIT(SE2),
93
TO_CORE_BIT(SE3),
94
95
TO_CORE_BIT(MUser),
96
diff --git a/target/arm/internals.h b/target/arm/internals.h
97
index XXXXXXX..XXXXXXX 100644
98
--- a/target/arm/internals.h
99
+++ b/target/arm/internals.h
100
@@ -XXX,XX +XXX,XX @@ static inline bool regime_has_2_ranges(ARMMMUIdx mmu_idx)
101
case ARMMMUIdx_SE10_0:
102
case ARMMMUIdx_SE10_1:
103
case ARMMMUIdx_SE10_1_PAN:
104
+ case ARMMMUIdx_SE20_0:
105
+ case ARMMMUIdx_SE20_2:
106
+ case ARMMMUIdx_SE20_2_PAN:
107
return true;
108
default:
109
return false;
110
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_secure(CPUARMState *env, ARMMMUIdx mmu_idx)
111
case ARMMMUIdx_SE10_0:
112
case ARMMMUIdx_SE10_1:
113
case ARMMMUIdx_SE10_1_PAN:
114
+ case ARMMMUIdx_SE20_0:
115
+ case ARMMMUIdx_SE20_2:
116
+ case ARMMMUIdx_SE20_2_PAN:
117
+ case ARMMMUIdx_SE2:
118
case ARMMMUIdx_MSPrivNegPri:
119
case ARMMMUIdx_MSUserNegPri:
120
case ARMMMUIdx_MSPriv:
121
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
122
case ARMMMUIdx_E10_1_PAN:
123
case ARMMMUIdx_E20_2_PAN:
124
case ARMMMUIdx_SE10_1_PAN:
125
+ case ARMMMUIdx_SE20_2_PAN:
126
return true;
127
default:
128
return false;
129
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_pan(CPUARMState *env, ARMMMUIdx mmu_idx)
130
static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
131
{
132
switch (mmu_idx) {
133
+ case ARMMMUIdx_SE20_0:
134
+ case ARMMMUIdx_SE20_2:
135
+ case ARMMMUIdx_SE20_2_PAN:
136
case ARMMMUIdx_E20_0:
137
case ARMMMUIdx_E20_2:
138
case ARMMMUIdx_E20_2_PAN:
139
case ARMMMUIdx_Stage2:
140
+ case ARMMMUIdx_SE2:
141
case ARMMMUIdx_E2:
142
return 2;
143
case ARMMMUIdx_SE3:
144
diff --git a/target/arm/helper.c b/target/arm/helper.c
145
index XXXXXXX..XXXXXXX 100644
146
--- a/target/arm/helper.c
147
+++ b/target/arm/helper.c
148
@@ -XXX,XX +XXX,XX @@ static int gt_phys_redir_timeridx(CPUARMState *env)
149
case ARMMMUIdx_E20_0:
150
case ARMMMUIdx_E20_2:
151
case ARMMMUIdx_E20_2_PAN:
152
+ case ARMMMUIdx_SE20_0:
153
+ case ARMMMUIdx_SE20_2:
154
+ case ARMMMUIdx_SE20_2_PAN:
155
return GTIMER_HYP;
156
default:
157
return GTIMER_PHYS;
158
@@ -XXX,XX +XXX,XX @@ static int gt_virt_redir_timeridx(CPUARMState *env)
159
case ARMMMUIdx_E20_0:
160
case ARMMMUIdx_E20_2:
161
case ARMMMUIdx_E20_2_PAN:
162
+ case ARMMMUIdx_SE20_0:
163
+ case ARMMMUIdx_SE20_2:
164
+ case ARMMMUIdx_SE20_2_PAN:
165
return GTIMER_HYPVIRT;
166
default:
167
return GTIMER_VIRT;
168
@@ -XXX,XX +XXX,XX @@ static void ats_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
169
mmu_idx = ARMMMUIdx_SE3;
170
break;
171
case 2:
172
- g_assert(!secure); /* TODO: ARMv8.4-SecEL2 */
173
+ g_assert(!secure); /* ARMv8.4-SecEL2 is 64-bit only */
174
/* fall through */
175
case 1:
176
if (ri->crm == 9 && (env->uncached_cpsr & CPSR_PAN)) {
177
@@ -XXX,XX +XXX,XX @@ static void ats_write64(CPUARMState *env, const ARMCPRegInfo *ri,
178
}
179
break;
180
case 4: /* AT S1E2R, AT S1E2W */
181
- mmu_idx = ARMMMUIdx_E2;
182
+ mmu_idx = secure ? ARMMMUIdx_SE2 : ARMMMUIdx_E2;
183
break;
184
case 6: /* AT S1E3R, AT S1E3W */
185
mmu_idx = ARMMMUIdx_SE3;
186
@@ -XXX,XX +XXX,XX @@ static void vmsa_tcr_ttbr_el2_write(CPUARMState *env, const ARMCPRegInfo *ri,
187
*/
188
if (extract64(raw_read(env, ri) ^ value, 48, 16) &&
189
(arm_hcr_el2_eff(env) & HCR_E2H)) {
190
- tlb_flush_by_mmuidx(env_cpu(env),
191
- ARMMMUIdxBit_E20_2 |
192
- ARMMMUIdxBit_E20_2_PAN |
193
- ARMMMUIdxBit_E20_0);
194
+ uint16_t mask = ARMMMUIdxBit_E20_2 |
195
+ ARMMMUIdxBit_E20_2_PAN |
196
+ ARMMMUIdxBit_E20_0;
197
+
198
+ if (arm_is_secure_below_el3(env)) {
199
+ mask >>= ARM_MMU_IDX_A_NS;
200
+ }
201
+
202
+ tlb_flush_by_mmuidx(env_cpu(env), mask);
203
}
204
raw_write(env, ri, value);
205
}
34
}
206
@@ -XXX,XX +XXX,XX @@ static int vae1_tlbmask(CPUARMState *env)
35
207
uint64_t hcr = arm_hcr_el2_eff(env);
36
+static TCGv_i64 auth_branch_target(DisasContext *s, TCGv_i64 dst,
208
37
+ TCGv_i64 modifier, bool use_key_a)
209
if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
38
+{
210
- return ARMMMUIdxBit_E20_2 |
39
+ TCGv_i64 truedst;
211
- ARMMMUIdxBit_E20_2_PAN |
40
+ /*
212
- ARMMMUIdxBit_E20_0;
41
+ * Return the branch target for a BRAA/RETA/etc, which is either
213
+ uint16_t mask = ARMMMUIdxBit_E20_2 |
42
+ * just the destination dst, or that value with the pauth check
214
+ ARMMMUIdxBit_E20_2_PAN |
43
+ * done and the code removed from the high bits.
215
+ ARMMMUIdxBit_E20_0;
44
+ */
216
+
45
+ if (!s->pauth_active) {
217
+ if (arm_is_secure_below_el3(env)) {
46
+ return dst;
218
+ mask >>= ARM_MMU_IDX_A_NS;
219
+ }
220
+
221
+ return mask;
222
} else if (arm_is_secure_below_el3(env)) {
223
return ARMMMUIdxBit_SE10_1 |
224
ARMMMUIdxBit_SE10_1_PAN |
225
@@ -XXX,XX +XXX,XX @@ static int tlbbits_for_regime(CPUARMState *env, ARMMMUIdx mmu_idx,
226
227
static int vae1_tlbbits(CPUARMState *env, uint64_t addr)
228
{
229
+ uint64_t hcr = arm_hcr_el2_eff(env);
230
ARMMMUIdx mmu_idx;
231
232
/* Only the regime of the mmu_idx below is significant. */
233
- if (arm_is_secure_below_el3(env)) {
234
- mmu_idx = ARMMMUIdx_SE10_0;
235
- } else if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE))
236
- == (HCR_E2H | HCR_TGE)) {
237
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
238
mmu_idx = ARMMMUIdx_E20_0;
239
} else {
240
mmu_idx = ARMMMUIdx_E10_0;
241
}
242
+
243
+ if (arm_is_secure_below_el3(env)) {
244
+ mmu_idx &= ~ARM_MMU_IDX_A_NS;
245
+ }
47
+ }
246
+
48
+
247
return tlbbits_for_regime(env, mmu_idx, addr);
49
+ truedst = tcg_temp_new_i64();
248
}
50
+ if (use_key_a) {
249
51
+ gen_helper_autia(truedst, cpu_env, dst, modifier);
250
@@ -XXX,XX +XXX,XX @@ static int alle1_tlbmask(CPUARMState *env)
251
252
static int e2_tlbmask(CPUARMState *env)
253
{
254
- /* TODO: ARMv8.4-SecEL2 */
255
- return ARMMMUIdxBit_E20_0 |
256
- ARMMMUIdxBit_E20_2 |
257
- ARMMMUIdxBit_E20_2_PAN |
258
- ARMMMUIdxBit_E2;
259
+ if (arm_is_secure_below_el3(env)) {
260
+ return ARMMMUIdxBit_SE20_0 |
261
+ ARMMMUIdxBit_SE20_2 |
262
+ ARMMMUIdxBit_SE20_2_PAN |
263
+ ARMMMUIdxBit_SE2;
264
+ } else {
52
+ } else {
265
+ return ARMMMUIdxBit_E20_0 |
53
+ gen_helper_autib(truedst, cpu_env, dst, modifier);
266
+ ARMMMUIdxBit_E20_2 |
267
+ ARMMMUIdxBit_E20_2_PAN |
268
+ ARMMMUIdxBit_E2;
269
+ }
54
+ }
270
}
55
+ return truedst;
271
56
+}
272
static void tlbi_aa64_alle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
273
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae2is_write(CPUARMState *env, const ARMCPRegInfo *ri,
274
{
275
CPUState *cs = env_cpu(env);
276
uint64_t pageaddr = sextract64(value << 12, 0, 56);
277
- int bits = tlbbits_for_regime(env, ARMMMUIdx_E2, pageaddr);
278
+ bool secure = arm_is_secure_below_el3(env);
279
+ int mask = secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2;
280
+ int bits = tlbbits_for_regime(env, secure ? ARMMMUIdx_E2 : ARMMMUIdx_SE2,
281
+ pageaddr);
282
283
- tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr,
284
- ARMMMUIdxBit_E2, bits);
285
+ tlb_flush_page_bits_by_mmuidx_all_cpus_synced(cs, pageaddr, mask, bits);
286
}
287
288
static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
289
@@ -XXX,XX +XXX,XX @@ uint64_t arm_sctlr(CPUARMState *env, int el)
290
/* Only EL0 needs to be adjusted for EL1&0 or EL2&0. */
291
if (el == 0) {
292
ARMMMUIdx mmu_idx = arm_mmu_idx_el(env, 0);
293
- el = (mmu_idx == ARMMMUIdx_E20_0 ? 2 : 1);
294
+ el = (mmu_idx == ARMMMUIdx_E20_0 || mmu_idx == ARMMMUIdx_SE20_0)
295
+ ? 2 : 1;
296
}
297
return env->cp15.sctlr_el[el];
298
}
299
@@ -XXX,XX +XXX,XX @@ static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
300
switch (mmu_idx) {
301
case ARMMMUIdx_SE10_0:
302
case ARMMMUIdx_E20_0:
303
+ case ARMMMUIdx_SE20_0:
304
case ARMMMUIdx_Stage1_E0:
305
case ARMMMUIdx_MUser:
306
case ARMMMUIdx_MSUser:
307
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
308
case ARMMMUIdx_E10_0:
309
case ARMMMUIdx_E20_0:
310
case ARMMMUIdx_SE10_0:
311
+ case ARMMMUIdx_SE20_0:
312
return 0;
313
case ARMMMUIdx_E10_1:
314
case ARMMMUIdx_E10_1_PAN:
315
@@ -XXX,XX +XXX,XX @@ int arm_mmu_idx_to_el(ARMMMUIdx mmu_idx)
316
case ARMMMUIdx_E2:
317
case ARMMMUIdx_E20_2:
318
case ARMMMUIdx_E20_2_PAN:
319
+ case ARMMMUIdx_SE2:
320
+ case ARMMMUIdx_SE20_2:
321
+ case ARMMMUIdx_SE20_2_PAN:
322
return 2;
323
case ARMMMUIdx_SE3:
324
return 3;
325
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
326
327
ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
328
{
329
+ ARMMMUIdx idx;
330
+ uint64_t hcr;
331
+
57
+
332
if (arm_feature(env, ARM_FEATURE_M)) {
58
+static bool trans_BRAZ(DisasContext *s, arg_braz *a)
333
return arm_v7m_mmu_idx_for_secstate(env, env->v7m.secure);
59
+{
334
}
60
+ TCGv_i64 dst;
335
@@ -XXX,XX +XXX,XX @@ ARMMMUIdx arm_mmu_idx_el(CPUARMState *env, int el)
336
/* See ARM pseudo-function ELIsInHost. */
337
switch (el) {
338
case 0:
339
- if (arm_is_secure_below_el3(env)) {
340
- return ARMMMUIdx_SE10_0;
341
+ hcr = arm_hcr_el2_eff(env);
342
+ if ((hcr & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)) {
343
+ idx = ARMMMUIdx_E20_0;
344
+ } else {
345
+ idx = ARMMMUIdx_E10_0;
346
}
347
- if ((env->cp15.hcr_el2 & (HCR_E2H | HCR_TGE)) == (HCR_E2H | HCR_TGE)
348
- && arm_el_is_aa64(env, 2)) {
349
- return ARMMMUIdx_E20_0;
350
- }
351
- return ARMMMUIdx_E10_0;
352
+ break;
353
case 1:
354
- if (arm_is_secure_below_el3(env)) {
355
- if (env->pstate & PSTATE_PAN) {
356
- return ARMMMUIdx_SE10_1_PAN;
357
- }
358
- return ARMMMUIdx_SE10_1;
359
- }
360
if (env->pstate & PSTATE_PAN) {
361
- return ARMMMUIdx_E10_1_PAN;
362
+ idx = ARMMMUIdx_E10_1_PAN;
363
+ } else {
364
+ idx = ARMMMUIdx_E10_1;
365
}
366
- return ARMMMUIdx_E10_1;
367
+ break;
368
case 2:
369
- /* TODO: ARMv8.4-SecEL2 */
370
/* Note that TGE does not apply at EL2. */
371
- if ((env->cp15.hcr_el2 & HCR_E2H) && arm_el_is_aa64(env, 2)) {
372
+ if (arm_hcr_el2_eff(env) & HCR_E2H) {
373
if (env->pstate & PSTATE_PAN) {
374
- return ARMMMUIdx_E20_2_PAN;
375
+ idx = ARMMMUIdx_E20_2_PAN;
376
+ } else {
377
+ idx = ARMMMUIdx_E20_2;
378
}
379
- return ARMMMUIdx_E20_2;
380
+ } else {
381
+ idx = ARMMMUIdx_E2;
382
}
383
- return ARMMMUIdx_E2;
384
+ break;
385
case 3:
386
return ARMMMUIdx_SE3;
387
default:
388
g_assert_not_reached();
389
}
390
+
61
+
391
+ if (arm_is_secure_below_el3(env)) {
62
+ if (!dc_isar_feature(aa64_pauth, s)) {
392
+ idx &= ~ARM_MMU_IDX_A_NS;
63
+ return false;
393
+ }
64
+ }
394
+
65
+
395
+ return idx;
66
+ dst = auth_branch_target(s, cpu_reg(s, a->rn), tcg_constant_i64(0), !a->m);
396
}
67
+ gen_a64_set_pc(s, dst);
397
68
+ set_btype_for_br(s, a->rn);
398
ARMMMUIdx arm_mmu_idx(CPUARMState *env)
69
+ s->base.is_jmp = DISAS_JUMP;
399
@@ -XXX,XX +XXX,XX @@ static uint32_t rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
70
+ return true;
400
break;
71
+}
401
case ARMMMUIdx_E20_2:
72
+
402
case ARMMMUIdx_E20_2_PAN:
73
+static bool trans_BLRAZ(DisasContext *s, arg_braz *a)
403
- /* TODO: ARMv8.4-SecEL2 */
74
+{
404
+ case ARMMMUIdx_SE20_2:
75
+ TCGv_i64 dst, lr;
405
+ case ARMMMUIdx_SE20_2_PAN:
76
+
406
/*
77
+ if (!dc_isar_feature(aa64_pauth, s)) {
407
* Note that EL20_2 is gated by HCR_EL2.E2H == 1, but EL20_0 is
78
+ return false;
408
* gated by HCR_EL2.<E2H,TGE> == '11', and so is LDTR.
79
+ }
409
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
80
+
410
index XXXXXXX..XXXXXXX 100644
81
+ dst = auth_branch_target(s, cpu_reg(s, a->rn), tcg_constant_i64(0), !a->m);
411
--- a/target/arm/translate-a64.c
82
+ lr = cpu_reg(s, 30);
412
+++ b/target/arm/translate-a64.c
83
+ if (dst == lr) {
413
@@ -XXX,XX +XXX,XX @@ static int get_a64_user_mem_index(DisasContext *s)
84
+ TCGv_i64 tmp = tcg_temp_new_i64();
414
case ARMMMUIdx_SE10_1_PAN:
85
+ tcg_gen_mov_i64(tmp, dst);
415
useridx = ARMMMUIdx_SE10_0;
86
+ dst = tmp;
416
break;
87
+ }
417
+ case ARMMMUIdx_SE20_2:
88
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
418
+ case ARMMMUIdx_SE20_2_PAN:
89
+ gen_a64_set_pc(s, dst);
419
+ useridx = ARMMMUIdx_SE20_0;
90
+ set_btype_for_blr(s);
420
+ break;
91
+ s->base.is_jmp = DISAS_JUMP;
421
default:
92
+ return true;
422
g_assert_not_reached();
93
+}
423
}
94
+
95
+static bool trans_RETA(DisasContext *s, arg_reta *a)
96
+{
97
+ TCGv_i64 dst;
98
+
99
+ dst = auth_branch_target(s, cpu_reg(s, 30), cpu_X[31], !a->m);
100
+ gen_a64_set_pc(s, dst);
101
+ s->base.is_jmp = DISAS_JUMP;
102
+ return true;
103
+}
104
+
105
/* HINT instruction group, including various allocated HINTs */
106
static void handle_hint(DisasContext *s, uint32_t insn,
107
unsigned int op1, unsigned int op2, unsigned int crm)
108
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
109
}
110
111
switch (opc) {
112
- case 0: /* BR */
113
- case 1: /* BLR */
114
- case 2: /* RET */
115
- btype_mod = opc;
116
- switch (op3) {
117
- case 0:
118
- /* BR, BLR, RET : handled in decodetree */
119
- goto do_unallocated;
120
-
121
- case 2:
122
- case 3:
123
- if (!dc_isar_feature(aa64_pauth, s)) {
124
- goto do_unallocated;
125
- }
126
- if (opc == 2) {
127
- /* RETAA, RETAB */
128
- if (rn != 0x1f || op4 != 0x1f) {
129
- goto do_unallocated;
130
- }
131
- rn = 30;
132
- modifier = cpu_X[31];
133
- } else {
134
- /* BRAAZ, BRABZ, BLRAAZ, BLRABZ */
135
- if (op4 != 0x1f) {
136
- goto do_unallocated;
137
- }
138
- modifier = tcg_constant_i64(0);
139
- }
140
- if (s->pauth_active) {
141
- dst = tcg_temp_new_i64();
142
- if (op3 == 2) {
143
- gen_helper_autia(dst, cpu_env, cpu_reg(s, rn), modifier);
144
- } else {
145
- gen_helper_autib(dst, cpu_env, cpu_reg(s, rn), modifier);
146
- }
147
- } else {
148
- dst = cpu_reg(s, rn);
149
- }
150
- break;
151
-
152
- default:
153
- goto do_unallocated;
154
- }
155
- /* BLR also needs to load return address */
156
- if (opc == 1) {
157
- TCGv_i64 lr = cpu_reg(s, 30);
158
- if (dst == lr) {
159
- TCGv_i64 tmp = tcg_temp_new_i64();
160
- tcg_gen_mov_i64(tmp, dst);
161
- dst = tmp;
162
- }
163
- gen_pc_plus_diff(s, lr, curr_insn_len(s));
164
- }
165
- gen_a64_set_pc(s, dst);
166
- break;
167
+ case 0:
168
+ case 1:
169
+ case 2:
170
+ /*
171
+ * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ:
172
+ * handled in decodetree
173
+ */
174
+ goto do_unallocated;
175
176
case 8: /* BRAA */
177
case 9: /* BLRAA */
424
--
178
--
425
2.20.1
179
2.34.1
426
427
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the last four BR-with-pointer-auth insns to decodetree.
2
The remaining cases in the outer switch in disas_uncond_b_reg()
3
all return early rather than leaving the case statement, so we
4
can delete the now-unused code at the end of that function.
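
The BRAA/BLRAA handlers follow the same pattern as the other pointer
authenticated branches, with register Rm supplying the modifier
(gathered from the hunks below):

    static bool trans_BRA(DisasContext *s, arg_bra *a)
    {
        TCGv_i64 dst;

        if (!dc_isar_feature(aa64_pauth, s)) {
            return false;
        }
        dst = auth_branch_target(s, cpu_reg(s, a->rn), cpu_reg_sp(s, a->rm),
                                 !a->m);
        gen_a64_set_pc(s, dst);
        set_btype_for_br(s, a->rn);
        s->base.is_jmp = DISAS_JUMP;
        return true;
    }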
2
5
3
This checks if EL2 is enabled (meaning EL2 registers take effect) in
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
4
the current security context.
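
The helper itself is small and mirrors the pseudocode EL2Enabled(); as
defined in the cpu.h hunk below, EL2 is reported as disabled whenever
the CPU is secure below EL3 (the stub in the #else branch simply
returns false):

    static inline bool arm_is_el2_enabled(CPUARMState *env)
    {
        if (arm_feature(env, ARM_FEATURE_EL2)) {
            return !arm_is_secure_below_el3(env);
        }
        return false;
    }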
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20230512144106.3608981-20-peter.maydell@linaro.org
9
---
10
target/arm/tcg/a64.decode | 4 ++
11
target/arm/tcg/translate-a64.c | 97 ++++++++++++++--------------------
12
2 files changed, 43 insertions(+), 58 deletions(-)
5
13
6
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
14
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
7
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20210112104511.36576-2-remi.denis.courmont@huawei.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
target/arm/cpu.h | 17 +++++++++++++++++
12
1 file changed, 17 insertions(+)
13
14
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/cpu.h
16
--- a/target/arm/tcg/a64.decode
17
+++ b/target/arm/cpu.h
17
+++ b/target/arm/tcg/a64.decode
18
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
18
@@ -XXX,XX +XXX,XX @@ BLRAZ 1101011 0001 11111 00001 m:1 rn:5 11111 &braz # BLRAAZ, BLRABZ
19
return arm_is_secure_below_el3(env);
19
20
&reta m
21
RETA 1101011 0010 11111 00001 m:1 11111 11111 &reta # RETAA, RETAB
22
+
23
+&bra rn rm m
24
+BRA 1101011 1000 11111 00001 m:1 rn:5 rm:5 &bra # BRAA, BRAB
25
+BLRA 1101011 1001 11111 00001 m:1 rn:5 rm:5 &bra # BLRAA, BLRAB
26
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
27
index XXXXXXX..XXXXXXX 100644
28
--- a/target/arm/tcg/translate-a64.c
29
+++ b/target/arm/tcg/translate-a64.c
30
@@ -XXX,XX +XXX,XX @@ static bool trans_RETA(DisasContext *s, arg_reta *a)
31
return true;
20
}
32
}
21
33
22
+/*
34
+static bool trans_BRA(DisasContext *s, arg_bra *a)
23
+ * Return true if the current security state has AArch64 EL2 or AArch32 Hyp.
24
+ * This corresponds to the pseudocode EL2Enabled()
25
+ */
26
+static inline bool arm_is_el2_enabled(CPUARMState *env)
27
+{
35
+{
28
+ if (arm_feature(env, ARM_FEATURE_EL2)) {
36
+ TCGv_i64 dst;
29
+ return !arm_is_secure_below_el3(env);
37
+
38
+ if (!dc_isar_feature(aa64_pauth, s)) {
39
+ return false;
30
+ }
40
+ }
31
+ return false;
41
+ dst = auth_branch_target(s, cpu_reg(s,a->rn), cpu_reg_sp(s, a->rm), !a->m);
42
+ gen_a64_set_pc(s, dst);
43
+ set_btype_for_br(s, a->rn);
44
+ s->base.is_jmp = DISAS_JUMP;
45
+ return true;
32
+}
46
+}
33
+
47
+
34
#else
48
+static bool trans_BLRA(DisasContext *s, arg_bra *a)
35
static inline bool arm_is_secure_below_el3(CPUARMState *env)
49
+{
50
+ TCGv_i64 dst, lr;
51
+
52
+ if (!dc_isar_feature(aa64_pauth, s)) {
53
+ return false;
54
+ }
55
+ dst = auth_branch_target(s, cpu_reg(s, a->rn), cpu_reg_sp(s, a->rm), !a->m);
56
+ lr = cpu_reg(s, 30);
57
+ if (dst == lr) {
58
+ TCGv_i64 tmp = tcg_temp_new_i64();
59
+ tcg_gen_mov_i64(tmp, dst);
60
+ dst = tmp;
61
+ }
62
+ gen_pc_plus_diff(s, lr, curr_insn_len(s));
63
+ gen_a64_set_pc(s, dst);
64
+ set_btype_for_blr(s);
65
+ s->base.is_jmp = DISAS_JUMP;
66
+ return true;
67
+}
68
+
69
/* HINT instruction group, including various allocated HINTs */
70
static void handle_hint(DisasContext *s, uint32_t insn,
71
unsigned int op1, unsigned int op2, unsigned int crm)
72
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
73
static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
36
{
74
{
37
@@ -XXX,XX +XXX,XX @@ static inline bool arm_is_secure(CPUARMState *env)
75
unsigned int opc, op2, op3, rn, op4;
38
{
76
- unsigned btype_mod = 2; /* 0: BR, 1: BLR, 2: other */
39
return false;
77
TCGv_i64 dst;
78
TCGv_i64 modifier;
79
80
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
81
case 0:
82
case 1:
83
case 2:
84
+ case 8:
85
+ case 9:
86
/*
87
- * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ:
88
- * handled in decodetree
89
+ * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ,
90
+ * BRAA, BLRAA: handled in decodetree
91
*/
92
goto do_unallocated;
93
94
- case 8: /* BRAA */
95
- case 9: /* BLRAA */
96
- if (!dc_isar_feature(aa64_pauth, s)) {
97
- goto do_unallocated;
98
- }
99
- if ((op3 & ~1) != 2) {
100
- goto do_unallocated;
101
- }
102
- btype_mod = opc & 1;
103
- if (s->pauth_active) {
104
- dst = tcg_temp_new_i64();
105
- modifier = cpu_reg_sp(s, op4);
106
- if (op3 == 2) {
107
- gen_helper_autia(dst, cpu_env, cpu_reg(s, rn), modifier);
108
- } else {
109
- gen_helper_autib(dst, cpu_env, cpu_reg(s, rn), modifier);
110
- }
111
- } else {
112
- dst = cpu_reg(s, rn);
113
- }
114
- /* BLRAA also needs to load return address */
115
- if (opc == 9) {
116
- TCGv_i64 lr = cpu_reg(s, 30);
117
- if (dst == lr) {
118
- TCGv_i64 tmp = tcg_temp_new_i64();
119
- tcg_gen_mov_i64(tmp, dst);
120
- dst = tmp;
121
- }
122
- gen_pc_plus_diff(s, lr, curr_insn_len(s));
123
- }
124
- gen_a64_set_pc(s, dst);
125
- break;
126
-
127
case 4: /* ERET */
128
if (s->current_el == 0) {
129
goto do_unallocated;
130
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
131
unallocated_encoding(s);
132
return;
133
}
134
-
135
- switch (btype_mod) {
136
- case 0: /* BR */
137
- if (dc_isar_feature(aa64_bti, s)) {
138
- /* BR to {x16,x17} or !guard -> 1, else 3. */
139
- set_btype(s, rn == 16 || rn == 17 || !s->guarded_page ? 1 : 3);
140
- }
141
- break;
142
-
143
- case 1: /* BLR */
144
- if (dc_isar_feature(aa64_bti, s)) {
145
- /* BLR sets BTYPE to 2, regardless of source guarded page. */
146
- set_btype(s, 2);
147
- }
148
- break;
149
-
150
- default: /* RET or none of the above. */
151
- /* BTYPE will be set to 0 by normal end-of-insn processing. */
152
- break;
153
- }
154
-
155
- s->base.is_jmp = DISAS_JUMP;
40
}
156
}
41
+
157
42
+static inline bool arm_is_el2_enabled(CPUARMState *env)
158
/* Branches, exception generating and system instructions */
43
+{
44
+ return false;
45
+}
46
#endif
47
48
/**
49
--
159
--
50
2.20.1
160
2.34.1
51
52
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
1
Convert the exception-return insns ERET, ERETA and ERETB to
2
decodetree. These were the last insns left in the legacy
3
decoder function disas_uncond_b_reg(), which allows us to
4
remove it.
2
5
3
On ARMv8-A, accesses by 32-bit secure EL1 to monitor registers trap to
6
The old decoder explicitly decoded the DRPS instruction,
4
the upper (64-bit) EL. With Secure EL2 support, we can no longer assume
7
only in order to call unallocated_encoding() on it, exactly
5
that that is always EL3, so make room for the value to be computed at
8
as would have happened if it hadn't decoded it. This is
6
run-time.
9
because this insn always UNDEFs unless the CPU is in
10
halting-debug state, which we don't emulate. So we list
11
the pattern in a comment in a64.decode, but don't actively
12
decode it.
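
After the conversion the plain ERET handler reads roughly as follows
(condensed from the translate-a64.c hunks below; the s->fgt_eret
fine-grained-trap early-out is elided here for brevity):

    static bool trans_ERET(DisasContext *s, arg_ERET *a)
    {
        TCGv_i64 dst;

        if (s->current_el == 0) {
            return false;   /* ERET is not available at EL0 */
        }
        dst = tcg_temp_new_i64();
        tcg_gen_ld_i64(dst, cpu_env,
                       offsetof(CPUARMState, elr_el[s->current_el]));
        if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
            gen_io_start();
        }
        gen_helper_exception_return(cpu_env, dst);
        /* Must exit loop to check un-masked IRQs */
        s->base.is_jmp = DISAS_EXIT;
        return true;
    }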
7
13
8
Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
10
Message-id: 20210112104511.36576-16-remi.denis.courmont@huawei.com
16
Message-id: 20230512144106.3608981-21-peter.maydell@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
17
---
13
target/arm/translate.c | 23 +++++++++++++++++++++--
18
target/arm/tcg/a64.decode | 8 ++
14
1 file changed, 21 insertions(+), 2 deletions(-)
19
target/arm/tcg/translate-a64.c | 163 +++++++++++----------------------
20
2 files changed, 63 insertions(+), 108 deletions(-)
15
21
16
diff --git a/target/arm/translate.c b/target/arm/translate.c
22
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
17
index XXXXXXX..XXXXXXX 100644
23
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate.c
24
--- a/target/arm/tcg/a64.decode
19
+++ b/target/arm/translate.c
25
+++ b/target/arm/tcg/a64.decode
20
@@ -XXX,XX +XXX,XX @@ static void unallocated_encoding(DisasContext *s)
26
@@ -XXX,XX +XXX,XX @@ RETA 1101011 0010 11111 00001 m:1 11111 11111 &reta # RETAA, RETAB
21
default_exception_el(s));
27
&bra rn rm m
28
BRA 1101011 1000 11111 00001 m:1 rn:5 rm:5 &bra # BRAA, BRAB
29
BLRA 1101011 1001 11111 00001 m:1 rn:5 rm:5 &bra # BLRAA, BLRAB
30
+
31
+ERET 1101011 0100 11111 000000 11111 00000
32
+ERETA 1101011 0100 11111 00001 m:1 11111 11111 &reta # ERETAA, ERETAB
33
+
34
+# We don't need to decode DRPS because it always UNDEFs except when
35
+# the processor is in halting debug state (which we don't implement).
36
+# The pattern is listed here as documentation.
37
+# DRPS 1101011 0101 11111 000000 11111 00000
38
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/tcg/translate-a64.c
41
+++ b/target/arm/tcg/translate-a64.c
42
@@ -XXX,XX +XXX,XX @@ static bool trans_BLRA(DisasContext *s, arg_bra *a)
43
return true;
22
}
44
}
23
45
24
+static void gen_exception_el(DisasContext *s, int excp, uint32_t syn,
46
+static bool trans_ERET(DisasContext *s, arg_ERET *a)
25
+ TCGv_i32 tcg_el)
26
+{
47
+{
27
+ TCGv_i32 tcg_excp;
48
+ TCGv_i64 dst;
28
+ TCGv_i32 tcg_syn;
49
+
29
+
50
+ if (s->current_el == 0) {
30
+ gen_set_condexec(s);
51
+ return false;
31
+ gen_set_pc_im(s, s->pc_curr);
52
+ }
32
+ tcg_excp = tcg_const_i32(excp);
53
+ if (s->fgt_eret) {
33
+ tcg_syn = tcg_const_i32(syn);
54
+ gen_exception_insn_el(s, 0, EXCP_UDEF, 0, 2);
34
+ gen_helper_exception_with_syndrome(cpu_env, tcg_excp, tcg_syn, tcg_el);
55
+ return true;
35
+ tcg_temp_free_i32(tcg_syn);
56
+ }
36
+ tcg_temp_free_i32(tcg_excp);
57
+ dst = tcg_temp_new_i64();
37
+ s->base.is_jmp = DISAS_NORETURN;
58
+ tcg_gen_ld_i64(dst, cpu_env,
59
+ offsetof(CPUARMState, elr_el[s->current_el]));
60
+
61
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
62
+ gen_io_start();
63
+ }
64
+
65
+ gen_helper_exception_return(cpu_env, dst);
66
+ /* Must exit loop to check un-masked IRQs */
67
+ s->base.is_jmp = DISAS_EXIT;
68
+ return true;
38
+}
69
+}
39
+
70
+
40
/* Force a TB lookup after an instruction that changes the CPU state. */
71
+static bool trans_ERETA(DisasContext *s, arg_reta *a)
41
static inline void gen_lookup_tb(DisasContext *s)
72
+{
73
+ TCGv_i64 dst;
74
+
75
+ if (!dc_isar_feature(aa64_pauth, s)) {
76
+ return false;
77
+ }
78
+ if (s->current_el == 0) {
79
+ return false;
80
+ }
81
+ /* The FGT trap takes precedence over an auth trap. */
82
+ if (s->fgt_eret) {
83
+ gen_exception_insn_el(s, 0, EXCP_UDEF, a->m ? 3 : 2, 2);
84
+ return true;
85
+ }
86
+ dst = tcg_temp_new_i64();
87
+ tcg_gen_ld_i64(dst, cpu_env,
88
+ offsetof(CPUARMState, elr_el[s->current_el]));
89
+
90
+ dst = auth_branch_target(s, dst, cpu_X[31], !a->m);
91
+ if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
92
+ gen_io_start();
93
+ }
94
+
95
+ gen_helper_exception_return(cpu_env, dst);
96
+ /* Must exit loop to check un-masked IRQs */
97
+ s->base.is_jmp = DISAS_EXIT;
98
+ return true;
99
+}
100
+
101
/* HINT instruction group, including various allocated HINTs */
102
static void handle_hint(DisasContext *s, uint32_t insn,
103
unsigned int op1, unsigned int op2, unsigned int crm)
104
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
105
}
106
}
107
108
-/* Unconditional branch (register)
109
- * 31 25 24 21 20 16 15 10 9 5 4 0
110
- * +---------------+-------+-------+-------+------+-------+
111
- * | 1 1 0 1 0 1 1 | opc | op2 | op3 | Rn | op4 |
112
- * +---------------+-------+-------+-------+------+-------+
113
- */
114
-static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
115
-{
116
- unsigned int opc, op2, op3, rn, op4;
117
- TCGv_i64 dst;
118
- TCGv_i64 modifier;
119
-
120
- opc = extract32(insn, 21, 4);
121
- op2 = extract32(insn, 16, 5);
122
- op3 = extract32(insn, 10, 6);
123
- rn = extract32(insn, 5, 5);
124
- op4 = extract32(insn, 0, 5);
125
-
126
- if (op2 != 0x1f) {
127
- goto do_unallocated;
128
- }
129
-
130
- switch (opc) {
131
- case 0:
132
- case 1:
133
- case 2:
134
- case 8:
135
- case 9:
136
- /*
137
- * BR, BLR, RET, RETAA, RETAB, BRAAZ, BRABZ, BLRAAZ, BLRABZ,
138
- * BRAA, BLRAA: handled in decodetree
139
- */
140
- goto do_unallocated;
141
-
142
- case 4: /* ERET */
143
- if (s->current_el == 0) {
144
- goto do_unallocated;
145
- }
146
- switch (op3) {
147
- case 0: /* ERET */
148
- if (op4 != 0) {
149
- goto do_unallocated;
150
- }
151
- if (s->fgt_eret) {
152
- gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
153
- return;
154
- }
155
- dst = tcg_temp_new_i64();
156
- tcg_gen_ld_i64(dst, cpu_env,
157
- offsetof(CPUARMState, elr_el[s->current_el]));
158
- break;
159
-
160
- case 2: /* ERETAA */
161
- case 3: /* ERETAB */
162
- if (!dc_isar_feature(aa64_pauth, s)) {
163
- goto do_unallocated;
164
- }
165
- if (rn != 0x1f || op4 != 0x1f) {
166
- goto do_unallocated;
167
- }
168
- /* The FGT trap takes precedence over an auth trap. */
169
- if (s->fgt_eret) {
170
- gen_exception_insn_el(s, 0, EXCP_UDEF, syn_erettrap(op3), 2);
171
- return;
172
- }
173
- dst = tcg_temp_new_i64();
174
- tcg_gen_ld_i64(dst, cpu_env,
175
- offsetof(CPUARMState, elr_el[s->current_el]));
176
- if (s->pauth_active) {
177
- modifier = cpu_X[31];
178
- if (op3 == 2) {
179
- gen_helper_autia(dst, cpu_env, dst, modifier);
180
- } else {
181
- gen_helper_autib(dst, cpu_env, dst, modifier);
182
- }
183
- }
184
- break;
185
-
186
- default:
187
- goto do_unallocated;
188
- }
189
- if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
190
- gen_io_start();
191
- }
192
-
193
- gen_helper_exception_return(cpu_env, dst);
194
- /* Must exit loop to check un-masked IRQs */
195
- s->base.is_jmp = DISAS_EXIT;
196
- return;
197
-
198
- case 5: /* DRPS */
199
- if (op3 != 0 || op4 != 0 || rn != 0x1f) {
200
- goto do_unallocated;
201
- } else {
202
- unallocated_encoding(s);
203
- }
204
- return;
205
-
206
- default:
207
- do_unallocated:
208
- unallocated_encoding(s);
209
- return;
210
- }
211
-}
212
-
213
/* Branches, exception generating and system instructions */
214
static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
42
{
215
{
43
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
216
@@ -XXX,XX +XXX,XX @@ static void disas_b_exc_sys(DisasContext *s, uint32_t insn)
44
/* If we're in Secure EL1 (which implies that EL3 is AArch64)
217
disas_exc(s, insn);
45
* then accesses to Mon registers trap to EL3
46
*/
47
- exc_target = 3;
48
- goto undef;
49
+ TCGv_i32 tcg_el = tcg_const_i32(3);
50
+
51
+ gen_exception_el(s, EXCP_UDEF, syn_uncategorized(), tcg_el);
52
+ tcg_temp_free_i32(tcg_el);
53
+ return false;
54
}
218
}
55
break;
219
break;
56
case ARM_CPU_MODE_HYP:
220
- case 0x6b: /* Unconditional branch (register) */
221
- disas_uncond_b_reg(s, insn);
222
- break;
223
default:
224
unallocated_encoding(s);
225
break;
57
--
226
--
58
2.20.1
227
2.34.1
59
60
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-6-remi.denis.courmont@huawei.com
[PMM: tweaked commit message to match reduced scope of patch
 following rebase]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 5 +++++
1 file changed, 5 insertions(+)

The IMPDEF sysreg L2CTLR_EL1 found on the Cortex-A35, A53, A57, A72,
and which we (arguably dubiously) also provide in '-cpu max', has a
2 bit field for the number of processors in the cluster. On real
hardware this must be sufficient because it can only be configured
with up to 4 CPUs in the cluster. However, on QEMU, if the board code
does not explicitly configure the CPUs into clusters with the right
CPU count, we default to "give the value assuming that all CPUs in
the system are in a single cluster", which might be too big to fit
in the field.

Instead of just overflowing this 2-bit field, saturate to 3 (meaning
"4 CPUs"), so at least we don't overwrite other fields in the register.
It's unlikely that any guest code really cares about the value in
this field; at least, if it does, it probably also wants the system
to match real hardware more closely, i.e. not to have more than
4 CPUs.

This issue has been present since the L2CTLR was first added in
commit 377a44ec8f2fac5b back in 2014. It was only noticed because
Coverity complains (CID 1509227) that the shift might overflow 32 bits
and inadvertently sign-extend into the top half of the 64-bit value.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512170223.3801643-2-peter.maydell@linaro.org
---
target/arm/cortex-regs.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
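The fix boils down to clamping the value before shifting it into the two-bit field; a minimal standalone sketch of that idea (illustrative only, with a made-up helper name; the real change is the l2ctlr_read() hunk below, and MIN() is the usual QEMU macro):

static uint64_t l2ctlr_core_count_field(unsigned int core_count)
{
    /*
     * Bits [25:24] hold "number of cores - 1"; saturate at 0b11
     * (i.e. report 4 CPUs) rather than letting a larger cluster
     * overflow into neighbouring fields or sign-extend.
     */
    return (uint64_t)MIN(core_count - 1, 3) << 24;
}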
12
29
13
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
30
diff --git a/target/arm/cortex-regs.c b/target/arm/cortex-regs.c
14
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
15
--- a/target/arm/cpu.h
32
--- a/target/arm/cortex-regs.c
16
+++ b/target/arm/cpu.h
33
+++ b/target/arm/cortex-regs.c
17
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_sve(const ARMISARegisters *id)
34
@@ -XXX,XX +XXX,XX @@ static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
18
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SVE) != 0;
35
{
36
ARMCPU *cpu = env_archcpu(env);
37
38
- /* Number of cores is in [25:24]; otherwise we RAZ */
39
- return (cpu->core_count - 1) << 24;
40
+ /*
41
+ * Number of cores is in [25:24]; otherwise we RAZ.
42
+ * If the board didn't configure the CPUs into clusters,
43
+ * we default to "all CPUs in one cluster", which might be
44
+ * more than the 4 that the hardware permits and which is
45
+ * all you can report in this two-bit field. Saturate to
46
+ * 0b11 (== 4 CPUs) rather than overflowing the field.
47
+ */
48
+ return MIN(cpu->core_count - 1, 3) << 24;
19
}
49
}
20
50
21
+static inline bool isar_feature_aa64_sel2(const ARMISARegisters *id)
51
static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
22
+{
23
+ return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, SEL2) != 0;
24
+}
25
+
26
static inline bool isar_feature_aa64_vh(const ARMISARegisters *id)
27
{
28
return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, VH) != 0;
29
--
52
--
30
2.20.1
53
2.34.1
31
32
Deleted patch
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

With the ARMv8.4-SEL2 extension, EL2 is a legal exception level in
secure mode, though it can only be AArch64.

This patch adds the target EL for exceptions from 64-bit S-EL2.

It also fixes the target EL to EL2 when HCR.{A,F,I}MO are set in secure
mode. Those values were never used in practice as the effective value of
HCR was always 0 in secure mode.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-7-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 10 +++++-----
target/arm/op_helper.c | 4 ++--
2 files changed, 7 insertions(+), 7 deletions(-)
20
21
diff --git a/target/arm/helper.c b/target/arm/helper.c
22
index XXXXXXX..XXXXXXX 100644
23
--- a/target/arm/helper.c
24
+++ b/target/arm/helper.c
25
@@ -XXX,XX +XXX,XX @@ static const int8_t target_el_table[2][2][2][2][2][4] = {
26
{{/* 0 1 1 0 */{ 3, 3, 3, -1 },{ 3, -1, -1, 3 },},
27
{/* 0 1 1 1 */{ 3, 3, 3, -1 },{ 3, -1, -1, 3 },},},},},
28
{{{{/* 1 0 0 0 */{ 1, 1, 2, -1 },{ 1, 1, -1, 1 },},
29
- {/* 1 0 0 1 */{ 2, 2, 2, -1 },{ 1, 1, -1, 1 },},},
30
- {{/* 1 0 1 0 */{ 1, 1, 1, -1 },{ 1, 1, -1, 1 },},
31
- {/* 1 0 1 1 */{ 2, 2, 2, -1 },{ 1, 1, -1, 1 },},},},
32
+ {/* 1 0 0 1 */{ 2, 2, 2, -1 },{ 2, 2, -1, 1 },},},
33
+ {{/* 1 0 1 0 */{ 1, 1, 1, -1 },{ 1, 1, 1, 1 },},
34
+ {/* 1 0 1 1 */{ 2, 2, 2, -1 },{ 2, 2, 2, 1 },},},},
35
{{{/* 1 1 0 0 */{ 3, 3, 3, -1 },{ 3, 3, -1, 3 },},
36
{/* 1 1 0 1 */{ 3, 3, 3, -1 },{ 3, 3, -1, 3 },},},
37
- {{/* 1 1 1 0 */{ 3, 3, 3, -1 },{ 3, 3, -1, 3 },},
38
- {/* 1 1 1 1 */{ 3, 3, 3, -1 },{ 3, 3, -1, 3 },},},},},
39
+ {{/* 1 1 1 0 */{ 3, 3, 3, -1 },{ 3, 3, 3, 3 },},
40
+ {/* 1 1 1 1 */{ 3, 3, 3, -1 },{ 3, 3, 3, 3 },},},},},
41
};
42
43
/*
44
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
45
index XXXXXXX..XXXXXXX 100644
46
--- a/target/arm/op_helper.c
47
+++ b/target/arm/op_helper.c
48
@@ -XXX,XX +XXX,XX @@ void HELPER(access_check_cp_reg)(CPUARMState *env, void *rip, uint32_t syndrome,
49
target_el = exception_target_el(env);
50
break;
51
case CP_ACCESS_TRAP_EL2:
52
- /* Requesting a trap to EL2 when we're in EL3 or S-EL0/1 is
53
+ /* Requesting a trap to EL2 when we're in EL3 is
54
* a bug in the access function.
55
*/
56
- assert(!arm_is_secure(env) && arm_current_el(env) != 3);
57
+ assert(arm_current_el(env) != 3);
58
target_el = 2;
59
break;
60
case CP_ACCESS_TRAP_EL3:
61
--
62
2.20.1
63
64
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

The VTTBR write callback so far assumes that the underlying VM lies in
non-secure state. This handles the secure state scenario.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-10-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)

In the vexpress board code, we allocate a new MemoryRegion at the top
of vexpress_common_init() but only set it up and use it inside the
"if (map[VE_NORFLASHALIAS] != -1)" conditional, so we leak it if not.
This isn't a very interesting leak as it's a tiny amount of memory
once at startup, but it's easy to fix.

We could silence Coverity simply by moving the g_new() into the
if() block, but this use of g_new(MemoryRegion, 1) is a legacy from
when this board model was originally written; we wouldn't do that
if we wrote it today. The MemoryRegions are conceptually a part of
the board and must not go away until the whole board is done with
(at the end of the simulation), so they belong in its state struct.

This machine already has a VexpressMachineState struct that extends
MachineState, so statically put the MemoryRegions in there instead of
dynamically allocating them separately at runtime.

Spotted by Coverity (CID 1509083).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230512170223.3801643-3-peter.maydell@linaro.org
---
hw/arm/vexpress.c | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)
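A condensed sketch of the pattern being applied (illustrative only: the helper name is made up for the example, the real code stays inline in vexpress_common_init(), and the struct fields match the hunk below):

struct VexpressMachineState {
    MachineState parent;
    MemoryRegion flashalias;    /* embedded: lives as long as the board */
    /* ... */
};

static void map_flash_alias(VexpressMachineState *vms, MemoryRegion *sysmem,
                            MemoryRegion *flash0mem, hwaddr base)
{
    /* No g_new(): initialise the region embedded in the state struct. */
    memory_region_init_alias(&vms->flashalias, NULL, "vexpress.flashalias",
                             flash0mem, 0, VEXPRESS_FLASH_SIZE);
    memory_region_add_subregion(sysmem, base, &vms->flashalias);
}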
13
27
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
28
diff --git a/hw/arm/vexpress.c b/hw/arm/vexpress.c
15
index XXXXXXX..XXXXXXX 100644
29
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
30
--- a/hw/arm/vexpress.c
17
+++ b/target/arm/helper.c
31
+++ b/hw/arm/vexpress.c
18
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
32
@@ -XXX,XX +XXX,XX @@ struct VexpressMachineClass {
19
* the combined stage 1&2 tlbs (EL10_1 and EL10_0).
33
20
*/
34
struct VexpressMachineState {
21
if (raw_read(env, ri) != value) {
35
MachineState parent;
22
- tlb_flush_by_mmuidx(cs,
36
+ MemoryRegion vram;
23
- ARMMMUIdxBit_E10_1 |
37
+ MemoryRegion sram;
24
- ARMMMUIdxBit_E10_1_PAN |
38
+ MemoryRegion flashalias;
25
- ARMMMUIdxBit_E10_0);
39
+ MemoryRegion lowram;
26
+ uint16_t mask = ARMMMUIdxBit_E10_1 |
40
+ MemoryRegion a15sram;
27
+ ARMMMUIdxBit_E10_1_PAN |
41
bool secure;
28
+ ARMMMUIdxBit_E10_0;
42
bool virt;
29
+
43
};
30
+ if (arm_is_secure_below_el3(env)) {
44
@@ -XXX,XX +XXX,XX @@ struct VexpressMachineState {
31
+ mask >>= ARM_MMU_IDX_A_NS;
45
#define TYPE_VEXPRESS_A15_MACHINE MACHINE_TYPE_NAME("vexpress-a15")
32
+ }
46
OBJECT_DECLARE_TYPE(VexpressMachineState, VexpressMachineClass, VEXPRESS_MACHINE)
33
+
47
34
+ tlb_flush_by_mmuidx(cs, mask);
48
-typedef void DBoardInitFn(const VexpressMachineState *machine,
35
raw_write(env, ri, value);
49
+typedef void DBoardInitFn(VexpressMachineState *machine,
50
ram_addr_t ram_size,
51
const char *cpu_type,
52
qemu_irq *pic);
53
@@ -XXX,XX +XXX,XX @@ static void init_cpus(MachineState *ms, const char *cpu_type,
36
}
54
}
37
}
55
}
56
57
-static void a9_daughterboard_init(const VexpressMachineState *vms,
58
+static void a9_daughterboard_init(VexpressMachineState *vms,
59
ram_addr_t ram_size,
60
const char *cpu_type,
61
qemu_irq *pic)
62
{
63
MachineState *machine = MACHINE(vms);
64
MemoryRegion *sysmem = get_system_memory();
65
- MemoryRegion *lowram = g_new(MemoryRegion, 1);
66
ram_addr_t low_ram_size;
67
68
if (ram_size > 0x40000000) {
69
@@ -XXX,XX +XXX,XX @@ static void a9_daughterboard_init(const VexpressMachineState *vms,
70
* address space should in theory be remappable to various
71
* things including ROM or RAM; we always map the RAM there.
72
*/
73
- memory_region_init_alias(lowram, NULL, "vexpress.lowmem", machine->ram,
74
- 0, low_ram_size);
75
- memory_region_add_subregion(sysmem, 0x0, lowram);
76
+ memory_region_init_alias(&vms->lowram, NULL, "vexpress.lowmem",
77
+ machine->ram, 0, low_ram_size);
78
+ memory_region_add_subregion(sysmem, 0x0, &vms->lowram);
79
memory_region_add_subregion(sysmem, 0x60000000, machine->ram);
80
81
/* 0x1e000000 A9MPCore (SCU) private memory region */
82
@@ -XXX,XX +XXX,XX @@ static VEDBoardInfo a9_daughterboard = {
83
.init = a9_daughterboard_init,
84
};
85
86
-static void a15_daughterboard_init(const VexpressMachineState *vms,
87
+static void a15_daughterboard_init(VexpressMachineState *vms,
88
ram_addr_t ram_size,
89
const char *cpu_type,
90
qemu_irq *pic)
91
{
92
MachineState *machine = MACHINE(vms);
93
MemoryRegion *sysmem = get_system_memory();
94
- MemoryRegion *sram = g_new(MemoryRegion, 1);
95
96
{
97
/* We have to use a separate 64 bit variable here to avoid the gcc
98
@@ -XXX,XX +XXX,XX @@ static void a15_daughterboard_init(const VexpressMachineState *vms,
99
/* 0x2b060000: SP805 watchdog: not modelled */
100
/* 0x2b0a0000: PL341 dynamic memory controller: not modelled */
101
/* 0x2e000000: system SRAM */
102
- memory_region_init_ram(sram, NULL, "vexpress.a15sram", 0x10000,
103
+ memory_region_init_ram(&vms->a15sram, NULL, "vexpress.a15sram", 0x10000,
104
&error_fatal);
105
- memory_region_add_subregion(sysmem, 0x2e000000, sram);
106
+ memory_region_add_subregion(sysmem, 0x2e000000, &vms->a15sram);
107
108
/* 0x7ffb0000: DMA330 DMA controller: not modelled */
109
/* 0x7ffd0000: PL354 static memory controller: not modelled */
110
@@ -XXX,XX +XXX,XX @@ static void vexpress_common_init(MachineState *machine)
111
I2CBus *i2c;
112
ram_addr_t vram_size, sram_size;
113
MemoryRegion *sysmem = get_system_memory();
114
- MemoryRegion *vram = g_new(MemoryRegion, 1);
115
- MemoryRegion *sram = g_new(MemoryRegion, 1);
116
- MemoryRegion *flashalias = g_new(MemoryRegion, 1);
117
- MemoryRegion *flash0mem;
118
const hwaddr *map = daughterboard->motherboard_map;
119
int i;
120
121
@@ -XXX,XX +XXX,XX @@ static void vexpress_common_init(MachineState *machine)
122
123
if (map[VE_NORFLASHALIAS] != -1) {
124
/* Map flash 0 as an alias into low memory */
125
+ MemoryRegion *flash0mem;
126
flash0mem = sysbus_mmio_get_region(SYS_BUS_DEVICE(pflash0), 0);
127
- memory_region_init_alias(flashalias, NULL, "vexpress.flashalias",
128
+ memory_region_init_alias(&vms->flashalias, NULL, "vexpress.flashalias",
129
flash0mem, 0, VEXPRESS_FLASH_SIZE);
130
- memory_region_add_subregion(sysmem, map[VE_NORFLASHALIAS], flashalias);
131
+ memory_region_add_subregion(sysmem, map[VE_NORFLASHALIAS], &vms->flashalias);
132
}
133
134
dinfo = drive_get(IF_PFLASH, 0, 1);
135
ve_pflash_cfi01_register(map[VE_NORFLASH1], "vexpress.flash1", dinfo);
136
137
sram_size = 0x2000000;
138
- memory_region_init_ram(sram, NULL, "vexpress.sram", sram_size,
139
+ memory_region_init_ram(&vms->sram, NULL, "vexpress.sram", sram_size,
140
&error_fatal);
141
- memory_region_add_subregion(sysmem, map[VE_SRAM], sram);
142
+ memory_region_add_subregion(sysmem, map[VE_SRAM], &vms->sram);
143
144
vram_size = 0x800000;
145
- memory_region_init_ram(vram, NULL, "vexpress.vram", vram_size,
146
+ memory_region_init_ram(&vms->vram, NULL, "vexpress.vram", vram_size,
147
&error_fatal);
148
- memory_region_add_subregion(sysmem, map[VE_VIDEORAM], vram);
149
+ memory_region_add_subregion(sysmem, map[VE_VIDEORAM], &vms->vram);
150
151
/* 0x4e000000 LAN9118 Ethernet */
152
if (nd_table[0].used) {
38
--
153
--
39
2.20.1
154
2.34.1
40
155
41
156
Deleted patch
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

In the secure stage 2 translation regime, the VSTCR.SW and VTCR.NSW
bits can invert the secure flag for pagetable walks. This patch
allows S1_ptw_translate() to change the non-secure bit.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-11-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
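In outline, the secure-walk handling added by this patch and the next amounts to the following selection (illustrative only: the helper name is made up, and the real code updates *is_secure in place inside S1_ptw_translate()):

static bool ptw_output_space_is_secure(CPUARMState *env, bool walk_is_secure)
{
    if (!arm_is_secure_below_el3(env)) {
        return false;   /* non-secure walks stay non-secure */
    }
    /* VSTCR_EL2.SW / VTCR_EL2.NSW can redirect the walk's PA space */
    if (walk_is_secure) {
        return !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
    } else {
        return !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
    }
}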
14
15
diff --git a/target/arm/helper.c b/target/arm/helper.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/target/arm/helper.c
18
+++ b/target/arm/helper.c
19
@@ -XXX,XX +XXX,XX @@ static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
20
21
/* Translate a S1 pagetable walk through S2 if needed. */
22
static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
23
- hwaddr addr, MemTxAttrs txattrs,
24
+ hwaddr addr, bool *is_secure,
25
ARMMMUFaultInfo *fi)
26
{
27
if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
28
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
29
int s2prot;
30
int ret;
31
ARMCacheAttrs cacheattrs = {};
32
+ MemTxAttrs txattrs = {};
33
+
34
+ assert(!*is_secure); /* TODO: S-EL2 */
35
36
ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, ARMMMUIdx_Stage2,
37
false,
38
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUState *cs, hwaddr addr, bool is_secure,
39
AddressSpace *as;
40
uint32_t data;
41
42
+ addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
43
attrs.secure = is_secure;
44
as = arm_addressspace(cs, attrs);
45
- addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
46
if (fi->s1ptw) {
47
return 0;
48
}
49
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
50
AddressSpace *as;
51
uint64_t data;
52
53
+ addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
54
attrs.secure = is_secure;
55
as = arm_addressspace(cs, attrs);
56
- addr = S1_ptw_translate(env, mmu_idx, addr, attrs, fi);
57
if (fi->s1ptw) {
58
return 0;
59
}
60
--
61
2.20.1
62
63
Deleted patch
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-12-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
10
11
diff --git a/target/arm/helper.c b/target/arm/helper.c
12
index XXXXXXX..XXXXXXX 100644
13
--- a/target/arm/helper.c
14
+++ b/target/arm/helper.c
15
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
16
fi->s1ptw = true;
17
return ~0;
18
}
19
+
20
+ if (arm_is_secure_below_el3(env)) {
21
+ /* Check if page table walk is to secure or non-secure PA space. */
22
+ if (*is_secure) {
23
+ *is_secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
24
+ } else {
25
+ *is_secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
26
+ }
27
+ } else {
28
+ assert(!*is_secure);
29
+ }
30
+
31
addr = s2pa;
32
}
33
return addr;
34
--
35
2.20.1
36
37
Deleted patch
1
From: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>

The stage_1_mmu_idx() function already effectively keeps track of which
translation regimes have two stages. Don't hard-code another test.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210112104511.36576-13-remi.denis.courmont@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
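The test being centralised here is tiny; an illustrative sketch (the helper name is invented, the real patch open-codes the comparison in get_phys_addr() as shown below):

static bool regime_uses_two_stages(ARMMMUIdx mmu_idx)
{
    /*
     * stage_1_mmu_idx() maps a two-stage regime (e.g. EL1&0 with EL2
     * enabled) onto its stage 1 equivalent; if that mapping changes
     * the index, the regime has a stage 2 as well.
     */
    return mmu_idx != stage_1_mmu_idx(mmu_idx);
}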
13
14
diff --git a/target/arm/helper.c b/target/arm/helper.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/target/arm/helper.c
17
+++ b/target/arm/helper.c
18
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
19
target_ulong *page_size,
20
ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
21
{
22
- if (mmu_idx == ARMMMUIdx_E10_0 ||
23
- mmu_idx == ARMMMUIdx_E10_1 ||
24
- mmu_idx == ARMMMUIdx_E10_1_PAN) {
25
+ ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
26
+
27
+ if (mmu_idx != s1_mmu_idx) {
28
/* Call ourselves recursively to do the stage 1 and then stage 2
29
- * translations.
30
+ * translations if mmu_idx is a two-stage regime.
31
*/
32
if (arm_feature(env, ARM_FEATURE_EL2)) {
33
hwaddr ipa;
34
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
35
int ret;
36
ARMCacheAttrs cacheattrs2 = {};
37
38
- ret = get_phys_addr(env, address, access_type,
39
- stage_1_mmu_idx(mmu_idx), &ipa, attrs,
40
- prot, page_size, fi, cacheattrs);
41
+ ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
42
+ attrs, prot, page_size, fi, cacheattrs);
43
44
/* If S1 fails or S2 is disabled, return early. */
45
if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
46
--
47
2.20.1
48
49
1
When we first converted our documentation to Sphinx, we split it into
1
Convert the u2f.txt file to rST, and place it in the right place
2
multiple manuals (system, interop, tools, etc), which are all built
2
in our manual layout. The old text didn't fit very well into our
3
separately. The primary driver for this was wanting to be able to
3
manual style, so the new version ends up looking like a rewrite,
4
avoid shipping the 'devel' manual to end-users. However, this is
4
although some of the original text is preserved:
5
working against the grain of the way Sphinx wants to be used and
5
6
causes some annoyances:
6
* the 'building' section of the old file is removed, since we
7
* Cross-references between documents become much harder or
7
generally assume that users have already built QEMU
8
possibly impossible
8
* some rather verbose text has been cut back
9
* There is no single index to the whole documentation
9
* document the passthrough device first, on the assumption
10
* Within one manual there's no links or table-of-contents info
10
that's most likely to be of interest to users
11
that lets you easily navigate to the others
11
* cut back on the duplication of text between sections
12
* The devel manual doesn't get published on the QEMU website
12
* format example command lines etc with rST
13
(it would be nice to able to refer to it there)
13
14
14
As it's a short document it seemed simplest to do this all
15
Merely hiding our developer documentation from end users seems like
15
in one go rather than try to do a minimal syntactic conversion
16
it's not enough benefit for these costs. Combine all the
16
and then clean up the wording and layout.
17
documentation into a single manual (the same way that the readthedocs
18
site builds it) and install the whole thing. The previous manual
19
divisions remain as the new top level sections in the manual.
20
21
* The per-manual conf.py files are no longer needed
22
* The man_pages[] specifications previously in each per-manual
23
conf.py move to the top level conf.py
24
* docs/meson.build logic is simplified as we now only need to run
25
Sphinx once for the HTML and then once for the manpages
26
* The old index.html.in that produced the top-level page with
27
links to each manual is no longer needed
28
29
Unfortunately this means that we now have to build the HTML
30
documentation into docs/manual in the build tree rather than directly
31
into docs/; otherwise it is too awkward to ensure we install only the
32
built manual and not also the dependency info, stamp file, etc. The
33
manual still ends up in the same place in the final installed
34
directory, but anybody who was consulting documentation from within
35
the build tree will have to adjust where they're looking.
36
17
37
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
38
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
19
Reviewed-by: Thomas Huth <thuth@redhat.com>
39
Message-id: 20210115154449.4801-1-peter.maydell@linaro.org
20
Message-id: 20230421163734.1152076-1-peter.maydell@linaro.org
40
---
21
---
41
docs/conf.py | 46 ++++++++++++++++++++++++++++++-
22
docs/system/device-emulation.rst | 1 +
42
docs/devel/conf.py | 15 -----------
23
docs/system/devices/usb-u2f.rst | 93 ++++++++++++++++++++++++++
43
docs/index.html.in | 17 ------------
24
docs/system/devices/usb.rst | 2 +-
44
docs/interop/conf.py | 28 -------------------
25
docs/u2f.txt | 110 -------------------------------
45
docs/meson.build | 64 +++++++++++++++++---------------------------
26
4 files changed, 95 insertions(+), 111 deletions(-)
46
docs/specs/conf.py | 16 -----------
27
create mode 100644 docs/system/devices/usb-u2f.rst
47
docs/system/conf.py | 28 -------------------
28
delete mode 100644 docs/u2f.txt
48
docs/tools/conf.py | 37 -------------------------
29
49
docs/user/conf.py | 15 -----------
30
diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
50
.gitlab-ci.yml | 4 +--
51
10 files changed, 72 insertions(+), 198 deletions(-)
52
delete mode 100644 docs/devel/conf.py
53
delete mode 100644 docs/index.html.in
54
delete mode 100644 docs/interop/conf.py
55
delete mode 100644 docs/specs/conf.py
56
delete mode 100644 docs/system/conf.py
57
delete mode 100644 docs/tools/conf.py
58
delete mode 100644 docs/user/conf.py
59
60
diff --git a/docs/conf.py b/docs/conf.py
61
index XXXXXXX..XXXXXXX 100644
31
index XXXXXXX..XXXXXXX 100644
62
--- a/docs/conf.py
32
--- a/docs/system/device-emulation.rst
63
+++ b/docs/conf.py
33
+++ b/docs/system/device-emulation.rst
64
@@ -XXX,XX +XXX,XX @@ latex_documents = [
34
@@ -XXX,XX +XXX,XX @@ Emulated Devices
65
35
devices/virtio-pmem.rst
66
# -- Options for manual page output ---------------------------------------
36
devices/vhost-user-rng.rst
67
# Individual manual/conf.py can override this to create man pages
37
devices/canokey.rst
68
-man_pages = []
38
+ devices/usb-u2f.rst
69
+man_pages = [
39
devices/igb.rst
70
+ ('interop/qemu-ga', 'qemu-ga',
40
diff --git a/docs/system/devices/usb-u2f.rst b/docs/system/devices/usb-u2f.rst
71
+ 'QEMU Guest Agent',
41
new file mode 100644
72
+ ['Michael Roth <mdroth@linux.vnet.ibm.com>'], 8),
42
index XXXXXXX..XXXXXXX
73
+ ('interop/qemu-ga-ref', 'qemu-ga-ref',
43
--- /dev/null
74
+ 'QEMU Guest Agent Protocol Reference',
44
+++ b/docs/system/devices/usb-u2f.rst
75
+ [], 7),
45
@@ -XXX,XX +XXX,XX @@
76
+ ('interop/qemu-qmp-ref', 'qemu-qmp-ref',
46
+Universal Second Factor (U2F) USB Key Device
77
+ 'QEMU QMP Reference Manual',
47
+============================================
78
+ [], 7),
48
+
79
+ ('interop/qemu-storage-daemon-qmp-ref', 'qemu-storage-daemon-qmp-ref',
49
+U2F is an open authentication standard that enables relying parties
80
+ 'QEMU Storage Daemon QMP Reference Manual',
50
+exposed to the internet to offer a strong second factor option for end
81
+ [], 7),
51
+user authentication.
82
+ ('system/qemu-manpage', 'qemu',
52
+
83
+ 'QEMU User Documentation',
53
+The second factor is provided by a device implementing the U2F
84
+ ['Fabrice Bellard'], 1),
54
+protocol. In case of a USB U2F security key, it is a USB HID device
85
+ ('system/qemu-block-drivers', 'qemu-block-drivers',
55
+that implements the U2F protocol.
86
+ 'QEMU block drivers reference',
56
+
87
+ ['Fabrice Bellard and the QEMU Project developers'], 7),
57
+QEMU supports both pass-through of a host U2F key device to a VM,
88
+ ('system/qemu-cpu-models', 'qemu-cpu-models',
58
+and software emulation of a U2F key.
89
+ 'QEMU CPU Models',
59
+
90
+ ['The QEMU Project developers'], 7),
60
+``u2f-passthru``
91
+ ('tools/qemu-img', 'qemu-img',
61
+----------------
92
+ 'QEMU disk image utility',
62
+
93
+ ['Fabrice Bellard'], 1),
63
+The ``u2f-passthru`` device allows you to connect a real hardware
94
+ ('tools/qemu-nbd', 'qemu-nbd',
64
+U2F key on your host to a guest VM. All requests made from the guest
95
+ 'QEMU Disk Network Block Device Server',
65
+are passed through to the physical security key connected to the
96
+ ['Anthony Liguori <anthony@codemonkey.ws>'], 8),
66
+host machine and vice versa.
97
+ ('tools/qemu-pr-helper', 'qemu-pr-helper',
67
+
98
+ 'QEMU persistent reservation helper',
68
+In addition, the dedicated pass-through allows you to share a single
99
+ [], 8),
69
+U2F security key with several guest VMs, which is not possible with a
100
+ ('tools/qemu-storage-daemon', 'qemu-storage-daemon',
70
+simple host device assignment pass-through.
101
+ 'QEMU storage daemon',
71
+
102
+ [], 1),
72
+You can specify the host U2F key to use with the ``hidraw``
103
+ ('tools/qemu-trace-stap', 'qemu-trace-stap',
73
+option, which takes the host path to a Linux ``/dev/hidrawN`` device:
104
+ 'QEMU SystemTap trace tool',
74
+
105
+ [], 1),
75
+.. parsed-literal::
106
+ ('tools/virtfs-proxy-helper', 'virtfs-proxy-helper',
76
+ |qemu_system| -usb -device u2f-passthru,hidraw=/dev/hidraw0
107
+ 'QEMU 9p virtfs proxy filesystem helper',
77
+
108
+ ['M. Mohan Kumar'], 1),
78
+If you don't specify the device, the ``u2f-passthru`` device will
109
+ ('tools/virtiofsd', 'virtiofsd',
79
+autoscan to take the first U2F device it finds on the host (this
110
+ 'QEMU virtio-fs shared file system daemon',
80
+requires a working libudev):
111
+ ['Stefan Hajnoczi <stefanha@redhat.com>',
81
+
112
+ 'Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>'], 1),
82
+.. parsed-literal::
113
+]
83
+ |qemu_system| -usb -device u2f-passthru
114
84
+
115
# -- Options for Texinfo output -------------------------------------------
85
+``u2f-emulated``
116
86
+----------------
117
diff --git a/docs/devel/conf.py b/docs/devel/conf.py
87
+
88
+``u2f-emulated`` is a completely software emulated U2F device.
89
+It uses `libu2f-emu <https://github.com/MattGorko/libu2f-emu>`__
90
+for the U2F key emulation. libu2f-emu
91
+provides a complete implementation of the U2F protocol device part for
92
+all specified transports given by the FIDO Alliance.
93
+
94
+To work, an emulated U2F device must have four elements:
95
+
96
+ * ec x509 certificate
97
+ * ec private key
98
+ * counter (four bytes value)
99
+ * 48 bytes of entropy (random bits)
100
+
101
+To use this type of device, these have to be configured, and these
102
+four elements must be passed one way or another.
103
+
104
+Assuming that you have a working libu2f-emu installed on the host,
105
+there are three possible ways to configure the ``u2f-emulated`` device:
106
+
107
+ * ephemeral
108
+ * setup directory
109
+ * manual
110
+
111
+Ephemeral is the simplest way to configure; it lets the device generate
112
+all the elements it needs for a single use of the lifetime of the device.
113
+It is the default if you do not pass any other options to the device.
114
+
115
+.. parsed-literal::
116
+ |qemu_system| -usb -device u2f-emulated
117
+
118
+You can pass the device the path of a setup directory on the host
119
+using the ``dir`` option; the directory must contain these four files:
120
+
121
+ * ``certificate.pem``: ec x509 certificate
122
+ * ``private-key.pem``: ec private key
123
+ * ``counter``: counter value
124
+ * ``entropy``: 48 bytes of entropy
125
+
126
+.. parsed-literal::
127
+ |qemu_system| -usb -device u2f-emulated,dir=$dir
128
+
129
+You can also manually pass the device the paths to each of these files,
130
+if you don't want them all to be in the same directory, using the options
131
+
132
+ * ``cert``
133
+ * ``priv``
134
+ * ``counter``
135
+ * ``entropy``
136
+
137
+.. parsed-literal::
138
+ |qemu_system| -usb -device u2f-emulated,cert=$DIR1/$FILE1,priv=$DIR2/$FILE2,counter=$DIR3/$FILE3,entropy=$DIR4/$FILE4
139
diff --git a/docs/system/devices/usb.rst b/docs/system/devices/usb.rst
140
index XXXXXXX..XXXXXXX 100644
141
--- a/docs/system/devices/usb.rst
142
+++ b/docs/system/devices/usb.rst
143
@@ -XXX,XX +XXX,XX @@ option or the ``device_add`` monitor command. Available devices are:
144
USB audio device
145
146
``u2f-{emulated,passthru}``
147
- Universal Second Factor device
148
+ :doc:`usb-u2f`
149
150
``canokey``
151
An Open-source Secure Key implementing FIDO2, OpenPGP, PIV and more.
152
diff --git a/docs/u2f.txt b/docs/u2f.txt
118
deleted file mode 100644
153
deleted file mode 100644
119
index XXXXXXX..XXXXXXX
154
index XXXXXXX..XXXXXXX
120
--- a/docs/devel/conf.py
155
--- a/docs/u2f.txt
121
+++ /dev/null
156
+++ /dev/null
122
@@ -XXX,XX +XXX,XX @@
157
@@ -XXX,XX +XXX,XX @@
123
-# -*- coding: utf-8 -*-
158
-QEMU U2F Key Device Documentation.
124
-#
159
-
125
-# QEMU documentation build configuration file for the 'devel' manual.
160
-Contents
126
-#
161
-1. USB U2F key device
127
-# This includes the top level conf file and then makes any necessary tweaks.
162
-2. Building
128
-import sys
163
-3. Using u2f-emulated
129
-import os
164
-4. Using u2f-passthru
130
-
165
-5. Libu2f-emu
131
-qemu_docdir = os.path.abspath("..")
166
-
132
-parent_config = os.path.join(qemu_docdir, "conf.py")
167
-1. USB U2F key device
133
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
168
-
134
-
169
-U2F is an open authentication standard that enables relying parties
135
-# This slightly misuses the 'description', but is the best way to get
170
-exposed to the internet to offer a strong second factor option for end
136
-# the manual title to appear in the sidebar.
171
-user authentication.
137
-html_theme_options['description'] = u'Developer''s Guide'
172
-
138
diff --git a/docs/index.html.in b/docs/index.html.in
173
-The standard brings many advantages to both parties, client and server,
139
deleted file mode 100644
174
-allowing to reduce over-reliance on passwords, it increases authentication
140
index XXXXXXX..XXXXXXX
175
-security and simplifies passwords.
141
--- a/docs/index.html.in
176
-
142
+++ /dev/null
177
-The second factor is materialized by a device implementing the U2F
143
@@ -XXX,XX +XXX,XX @@
178
-protocol. In case of a USB U2F security key, it is a USB HID device
144
-<!DOCTYPE html>
179
-that implements the U2F protocol.
145
-<html lang="en">
180
-
146
- <head>
181
-In QEMU, the USB U2F key device offers a dedicated support of U2F, allowing
147
- <meta charset="UTF-8">
182
-guest USB FIDO/U2F security keys operating in two possible modes:
148
- <title>QEMU @VERSION@ Documentation</title>
183
-pass-through and emulated.
149
- </head>
184
-
150
- <body>
185
-The pass-through mode consists of passing all requests made from the guest
151
- <h1>QEMU @VERSION@ Documentation</h1>
186
-to the physical security key connected to the host machine and vice versa.
152
- <ul>
187
-In addition, the dedicated pass-through allows to have a U2F security key
153
- <li><a href="system/index.html">System Emulation User's Guide</a></li>
188
-shared on several guests which is not possible with a simple host device
154
- <li><a href="user/index.html">User Mode Emulation User's Guide</a></li>
189
-assignment pass-through.
155
- <li><a href="tools/index.html">Tools Guide</a></li>
190
-
156
- <li><a href="interop/index.html">System Emulation Management and Interoperability Guide</a></li>
191
-The emulated mode consists of completely emulating the behavior of an
157
- <li><a href="specs/index.html">System Emulation Guest Hardware Specifications</a></li>
192
-U2F device through software part. Libu2f-emu is used for that.
158
- </ul>
193
-
159
- </body>
194
-
160
-</html>
195
-2. Building
161
diff --git a/docs/interop/conf.py b/docs/interop/conf.py
196
-
162
deleted file mode 100644
197
-To ensure the build of the u2f-emulated device variant which depends
163
index XXXXXXX..XXXXXXX
198
-on libu2f-emu: configuring and building:
164
--- a/docs/interop/conf.py
199
-
165
+++ /dev/null
200
- ./configure --enable-u2f && make
166
@@ -XXX,XX +XXX,XX @@
201
-
167
-# -*- coding: utf-8 -*-
202
-The pass-through mode is built by default on Linux. To take advantage
168
-#
203
-of the autoscan option it provides, make sure you have a working libudev
169
-# QEMU documentation build configuration file for the 'interop' manual.
204
-installed on the host.
170
-#
205
-
171
-# This includes the top level conf file and then makes any necessary tweaks.
206
-
172
-import sys
207
-3. Using u2f-emulated
173
-import os
208
-
174
-
209
-To work, an emulated U2F device must have four elements:
175
-qemu_docdir = os.path.abspath("..")
210
- * ec x509 certificate
176
-parent_config = os.path.join(qemu_docdir, "conf.py")
211
- * ec private key
177
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
212
- * counter (four bytes value)
178
-
213
- * 48 bytes of entropy (random bits)
179
-# This slightly misuses the 'description', but is the best way to get
214
-
180
-# the manual title to appear in the sidebar.
215
-To use this type of device, this one has to be configured, and these
181
-html_theme_options['description'] = u'System Emulation Management and Interoperability Guide'
216
-four elements must be passed one way or another.
182
-
217
-
183
-# One entry per manual page. List of tuples
218
-Assuming that you have a working libu2f-emu installed on the host.
184
-# (source start file, name, description, authors, manual section).
219
-There are three possible ways of configurations:
185
-man_pages = [
220
- * ephemeral
186
- ('qemu-ga', 'qemu-ga', u'QEMU Guest Agent',
221
- * setup directory
187
- ['Michael Roth <mdroth@linux.vnet.ibm.com>'], 8),
222
- * manual
188
- ('qemu-ga-ref', 'qemu-ga-ref', 'QEMU Guest Agent Protocol Reference',
223
-
189
- [], 7),
224
-Ephemeral is the simplest way to configure, it lets the device generate
190
- ('qemu-qmp-ref', 'qemu-qmp-ref', 'QEMU QMP Reference Manual',
225
-all the elements it needs for a single use of the lifetime of the device.
191
- [], 7),
226
-
192
- ('qemu-storage-daemon-qmp-ref', 'qemu-storage-daemon-qmp-ref',
227
- qemu -usb -device u2f-emulated
193
- 'QEMU Storage Daemon QMP Reference Manual', [], 7),
228
-
194
-]
229
-Setup directory allows to configure the device from a directory containing
195
diff --git a/docs/meson.build b/docs/meson.build
230
-four files:
196
index XXXXXXX..XXXXXXX 100644
231
- * certificate.pem: ec x509 certificate
197
--- a/docs/meson.build
232
- * private-key.pem: ec private key
198
+++ b/docs/meson.build
233
- * counter: counter value
199
@@ -XXX,XX +XXX,XX @@ if build_docs
234
- * entropy: 48 bytes of entropy
200
meson.source_root() / 'docs/sphinx/qmp_lexer.py',
235
-
201
qapi_gen_depends ]
236
- qemu -usb -device u2f-emulated,dir=$dir
202
237
-
203
- configure_file(output: 'index.html',
238
-Manual allows to configure the device more finely by specifying each
204
- input: files('index.html.in'),
239
-of the elements necessary for the device:
205
- configuration: {'VERSION': meson.project_version()},
240
- * cert
206
- install_dir: qemu_docdir)
241
- * priv
207
- manuals = [ 'devel', 'interop', 'tools', 'specs', 'system', 'user' ]
242
- * counter
208
man_pages = {
243
- * entropy
209
- 'interop' : {
244
-
210
'qemu-ga.8': (have_tools ? 'man8' : ''),
245
- qemu -usb -device u2f-emulated,cert=$DIR1/$FILE1,priv=$DIR2/$FILE2,counter=$DIR3/$FILE3,entropy=$DIR4/$FILE4
211
'qemu-ga-ref.7': 'man7',
246
-
212
'qemu-qmp-ref.7': 'man7',
247
-
213
'qemu-storage-daemon-qmp-ref.7': (have_tools ? 'man7' : ''),
248
-4. Using u2f-passthru
214
- },
249
-
215
- 'tools': {
250
-On the host specify the u2f-passthru device with a suitable hidraw:
216
'qemu-img.1': (have_tools ? 'man1' : ''),
251
-
217
'qemu-nbd.8': (have_tools ? 'man8' : ''),
252
- qemu -usb -device u2f-passthru,hidraw=/dev/hidraw0
218
'qemu-pr-helper.8': (have_tools ? 'man8' : ''),
253
-
219
@@ -XXX,XX +XXX,XX @@ if build_docs
254
-Alternately, the u2f-passthru device can autoscan to take the first
220
'qemu-trace-stap.1': (config_host.has_key('CONFIG_TRACE_SYSTEMTAP') ? 'man1' : ''),
255
-U2F device it finds on the host (this requires a working libudev):
221
'virtfs-proxy-helper.1': (have_virtfs_proxy_helper ? 'man1' : ''),
256
-
222
'virtiofsd.1': (have_virtiofsd ? 'man1' : ''),
257
- qemu -usb -device u2f-passthru
223
- },
258
-
224
- 'system': {
259
-
225
'qemu.1': 'man1',
260
-5. Libu2f-emu
226
'qemu-block-drivers.7': 'man7',
261
-
227
'qemu-cpu-models.7': 'man7'
262
-The u2f-emulated device uses libu2f-emu for the U2F key emulation. Libu2f-emu
228
- },
263
-implements completely the U2F protocol device part for all specified
229
}
264
-transport given by the FIDO Alliance.
230
265
-
231
sphinxdocs = []
266
-For more information about libu2f-emu see this page:
232
sphinxmans = []
267
-https://github.com/MattGorko/libu2f-emu.
233
- foreach manual : manuals
234
- private_dir = meson.current_build_dir() / (manual + '.p')
235
- output_dir = meson.current_build_dir() / manual
236
- input_dir = meson.current_source_dir() / manual
237
238
- this_manual = custom_target(manual + ' manual',
239
+ private_dir = meson.current_build_dir() / 'manual.p'
240
+ output_dir = meson.current_build_dir() / 'manual'
241
+ input_dir = meson.current_source_dir()
242
+
243
+ this_manual = custom_target('QEMU manual',
244
build_by_default: build_docs,
245
- output: [manual + '.stamp'],
246
- input: [files('conf.py'), files(manual / 'conf.py')],
247
- depfile: manual + '.d',
248
+ output: 'docs.stamp',
249
+ input: files('conf.py'),
250
+ depfile: 'docs.d',
251
depend_files: sphinx_extn_depends,
252
command: [SPHINX_ARGS, '-Ddepfile=@DEPFILE@',
253
'-Ddepfile_stamp=@OUTPUT0@',
254
'-b', 'html', '-d', private_dir,
255
input_dir, output_dir])
256
- sphinxdocs += this_manual
257
- if build_docs and manual != 'devel'
258
- install_subdir(output_dir, install_dir: qemu_docdir)
259
- endif
260
+ sphinxdocs += this_manual
261
+ install_subdir(output_dir, install_dir: qemu_docdir, strip_directory: true)
262
263
- these_man_pages = []
264
- install_dirs = []
265
- foreach page, section : man_pages.get(manual, {})
266
- these_man_pages += page
267
- install_dirs += section == '' ? false : get_option('mandir') / section
268
- endforeach
269
- if these_man_pages.length() > 0
270
- sphinxmans += custom_target(manual + ' man pages',
271
- build_by_default: build_docs,
272
- output: these_man_pages,
273
- input: this_manual,
274
- install: build_docs,
275
- install_dir: install_dirs,
276
- command: [SPHINX_ARGS, '-b', 'man', '-d', private_dir,
277
- input_dir, meson.current_build_dir()])
278
- endif
279
+ these_man_pages = []
280
+ install_dirs = []
281
+ foreach page, section : man_pages
282
+ these_man_pages += page
283
+ install_dirs += section == '' ? false : get_option('mandir') / section
284
endforeach
285
+
286
+ sphinxmans += custom_target('QEMU man pages',
287
+ build_by_default: build_docs,
288
+ output: these_man_pages,
289
+ input: this_manual,
290
+ install: build_docs,
291
+ install_dir: install_dirs,
292
+ command: [SPHINX_ARGS, '-b', 'man', '-d', private_dir,
293
+ input_dir, meson.current_build_dir()])
294
+
295
alias_target('sphinxdocs', sphinxdocs)
296
alias_target('html', sphinxdocs)
297
alias_target('man', sphinxmans)
298
diff --git a/docs/specs/conf.py b/docs/specs/conf.py
299
deleted file mode 100644
300
index XXXXXXX..XXXXXXX
301
--- a/docs/specs/conf.py
302
+++ /dev/null
303
@@ -XXX,XX +XXX,XX @@
304
-# -*- coding: utf-8 -*-
305
-#
306
-# QEMU documentation build configuration file for the 'specs' manual.
307
-#
308
-# This includes the top level conf file and then makes any necessary tweaks.
309
-import sys
310
-import os
311
-
312
-qemu_docdir = os.path.abspath("..")
313
-parent_config = os.path.join(qemu_docdir, "conf.py")
314
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
315
-
316
-# This slightly misuses the 'description', but is the best way to get
317
-# the manual title to appear in the sidebar.
318
-html_theme_options['description'] = \
319
- u'System Emulation Guest Hardware Specifications'
320
diff --git a/docs/system/conf.py b/docs/system/conf.py
321
deleted file mode 100644
322
index XXXXXXX..XXXXXXX
323
--- a/docs/system/conf.py
324
+++ /dev/null
325
@@ -XXX,XX +XXX,XX @@
326
-# -*- coding: utf-8 -*-
327
-#
328
-# QEMU documentation build configuration file for the 'system' manual.
329
-#
330
-# This includes the top level conf file and then makes any necessary tweaks.
331
-import sys
332
-import os
333
-
334
-qemu_docdir = os.path.abspath("..")
335
-parent_config = os.path.join(qemu_docdir, "conf.py")
336
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
337
-
338
-# This slightly misuses the 'description', but is the best way to get
339
-# the manual title to appear in the sidebar.
340
-html_theme_options['description'] = u'System Emulation User''s Guide'
341
-
342
-# One entry per manual page. List of tuples
343
-# (source start file, name, description, authors, manual section).
344
-man_pages = [
345
- ('qemu-manpage', 'qemu', u'QEMU User Documentation',
346
- ['Fabrice Bellard'], 1),
347
- ('qemu-block-drivers', 'qemu-block-drivers',
348
- u'QEMU block drivers reference',
349
- ['Fabrice Bellard and the QEMU Project developers'], 7),
350
- ('qemu-cpu-models', 'qemu-cpu-models',
351
- u'QEMU CPU Models',
352
- ['The QEMU Project developers'], 7)
353
-]
354
diff --git a/docs/tools/conf.py b/docs/tools/conf.py
355
deleted file mode 100644
356
index XXXXXXX..XXXXXXX
357
--- a/docs/tools/conf.py
358
+++ /dev/null
359
@@ -XXX,XX +XXX,XX @@
360
-# -*- coding: utf-8 -*-
361
-#
362
-# QEMU documentation build configuration file for the 'tools' manual.
363
-#
364
-# This includes the top level conf file and then makes any necessary tweaks.
365
-import sys
366
-import os
367
-
368
-qemu_docdir = os.path.abspath("..")
369
-parent_config = os.path.join(qemu_docdir, "conf.py")
370
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
371
-
372
-# This slightly misuses the 'description', but is the best way to get
373
-# the manual title to appear in the sidebar.
374
-html_theme_options['description'] = \
375
- u'Tools Guide'
376
-
377
-# One entry per manual page. List of tuples
378
-# (source start file, name, description, authors, manual section).
379
-man_pages = [
380
- ('qemu-img', 'qemu-img', u'QEMU disk image utility',
381
- ['Fabrice Bellard'], 1),
382
- ('qemu-storage-daemon', 'qemu-storage-daemon', u'QEMU storage daemon',
383
- [], 1),
384
- ('qemu-nbd', 'qemu-nbd', u'QEMU Disk Network Block Device Server',
385
- ['Anthony Liguori <anthony@codemonkey.ws>'], 8),
386
- ('qemu-pr-helper', 'qemu-pr-helper', 'QEMU persistent reservation helper',
387
- [], 8),
388
- ('qemu-trace-stap', 'qemu-trace-stap', u'QEMU SystemTap trace tool',
389
- [], 1),
390
- ('virtfs-proxy-helper', 'virtfs-proxy-helper',
391
- u'QEMU 9p virtfs proxy filesystem helper',
392
- ['M. Mohan Kumar'], 1),
393
- ('virtiofsd', 'virtiofsd', u'QEMU virtio-fs shared file system daemon',
394
- ['Stefan Hajnoczi <stefanha@redhat.com>',
395
- 'Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>'], 1),
396
-]
397
diff --git a/docs/user/conf.py b/docs/user/conf.py
398
deleted file mode 100644
399
index XXXXXXX..XXXXXXX
400
--- a/docs/user/conf.py
401
+++ /dev/null
402
@@ -XXX,XX +XXX,XX @@
403
-# -*- coding: utf-8 -*-
404
-#
405
-# QEMU documentation build configuration file for the 'user' manual.
406
-#
407
-# This includes the top level conf file and then makes any necessary tweaks.
408
-import sys
409
-import os
410
-
411
-qemu_docdir = os.path.abspath("..")
412
-parent_config = os.path.join(qemu_docdir, "conf.py")
413
-exec(compile(open(parent_config, "rb").read(), parent_config, 'exec'))
414
-
415
-# This slightly misuses the 'description', but is the best way to get
416
-# the manual title to appear in the sidebar.
417
-html_theme_options['description'] = u'User Mode Emulation User''s Guide'
418
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
419
index XXXXXXX..XXXXXXX 100644
420
--- a/.gitlab-ci.yml
421
+++ b/.gitlab-ci.yml
422
@@ -XXX,XX +XXX,XX @@ pages:
423
-t "Welcome to the QEMU sourcecode"
424
- mv HTML public/src
425
# Project documentation
426
- - mv build/docs/index.html public/
427
- - for i in devel interop specs system tools user ; do mv build/docs/$i public/ ; done
428
+ - make -C build install DESTDIR=$(pwd)/temp-install
429
+ - mv temp-install/usr/local/share/doc/qemu/* public/
430
artifacts:
431
paths:
432
- public
433
--
268
--
434
2.20.1
269
2.34.1
435
436