A largish pull request: the big things are Richard's PAuth work
and Aaron's PMU emulation improvements.

thanks
-- PMM

The following changes since commit 681d61362d3f766a00806b89d6581869041f73cb:

  Merge remote-tracking branch 'remotes/jnsnow/tags/bitmaps-pull-request' into staging (2019-01-17 12:48:42 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20190118

for you to fetch changes up to 2a0ed2804e2c77a1c4e255f05ab739618e05c85d:

  tests/libqtest: Introduce qtest_init_with_serial() (2019-01-18 14:17:38 +0000)

----------------------------------------------------------------
target-arm queue:
 * hw/char/stm32f2xx_usart: Do not update data register when device is disabled
 * hw/arm/virt-acpi-build: Set COHACC override flag in IORT SMMUv3 node
 * target/arm: Allow Aarch32 exception return to switch from Mon->Hyp
 * ftgmac100: implement the new MDIO interface on Aspeed SoC
 * implement the ARMv8.3-PAuth extension
 * improve emulation of the ARM PMU

----------------------------------------------------------------
Aaron Lindsay (13):
      migration: Add post_save function to VMStateDescription
      target/arm: Reorganize PMCCNTR accesses
      target/arm: Swap PMU values before/after migrations
      target/arm: Filter cycle counter based on PMCCFILTR_EL0
      target/arm: Allow AArch32 access for PMCCFILTR
      target/arm: Implement PMOVSSET
      target/arm: Define FIELDs for ID_DFR0
      target/arm: Make PMCEID[01]_EL0 64 bit registers, add PMCEID[23]
      target/arm: Add array for supported PMU events, generate PMCEID[01]_EL0
      target/arm: Finish implementation of PM[X]EVCNTR and PM[X]EVTYPER
      target/arm: PMU: Add instruction and cycle events
      target/arm: PMU: Set PMCR.N to 4
      target/arm: Implement PMSWINC

Alexander Graf (1):
      target/arm: Allow Aarch32 exception return to switch from Mon->Hyp

Cédric Le Goater (1):
      ftgmac100: implement the new MDIO interface on Aspeed SoC

Eric Auger (1):
      hw/arm/virt-acpi-build: Set COHACC override flag in IORT SMMUv3 node

Julia Suvorova (1):
      tests/libqtest: Introduce qtest_init_with_serial()

Philippe Mathieu-Daudé (1):
      hw/char/stm32f2xx_usart: Do not update data register when device is disabled

Richard Henderson (31):
      target/arm: Add state for the ARMv8.3-PAuth extension
      target/arm: Add SCTLR bits through ARMv8.5
      target/arm: Add PAuth active bit to tbflags
      target/arm: Introduce raise_exception_ra
      target/arm: Add PAuth helpers
      target/arm: Decode PAuth within system hint space
      target/arm: Rearrange decode in disas_data_proc_1src
      target/arm: Decode PAuth within disas_data_proc_1src
      target/arm: Decode PAuth within disas_data_proc_2src
      target/arm: Move helper_exception_return to helper-a64.c
      target/arm: Add new_pc argument to helper_exception_return
      target/arm: Rearrange decode in disas_uncond_b_reg
      target/arm: Decode PAuth within disas_uncond_b_reg
      target/arm: Decode Load/store register (pac)
      target/arm: Move cpu_mmu_index out of line
      target/arm: Introduce arm_mmu_idx
      target/arm: Introduce arm_stage1_mmu_idx
      target/arm: Create ARMVAParameters and helpers
      target/arm: Merge TBFLAG_AA_TB{0, 1} to TBII
      target/arm: Export aa64_va_parameters to internals.h
      target/arm: Add aa64_va_parameters_both
      target/arm: Decode TBID from TCR
      target/arm: Reuse aa64_va_parameters for setting tbflags
      target/arm: Implement pauth_strip
      target/arm: Implement pauth_auth
      target/arm: Implement pauth_addpac
      target/arm: Implement pauth_computepac
      target/arm: Add PAuth system registers
      target/arm: Enable PAuth for -cpu max
      target/arm: Enable PAuth for user-only
      target/arm: Tidy TBI handling in gen_a64_set_pc

 target/arm/Makefile.objs    |    1 +
 include/hw/acpi/acpi-defs.h |    2 +
 include/migration/vmstate.h |    1 +
 target/arm/cpu.h            |  244 +++----
 target/arm/helper-a64.h     |   14 +
 target/arm/helper.h         |    1 -
 target/arm/internals.h      |   77 +++
 target/arm/translate.h      |    5 +-
 tests/libqtest.h            |   11 +
 hw/arm/virt-acpi-build.c    |    1 +
 hw/char/stm32f2xx_usart.c   |    3 +-
 hw/net/ftgmac100.c          |   80 ++-
 migration/vmstate.c         |   13 +-
 target/arm/cpu.c            |   19 +-
 target/arm/cpu64.c          |   68 ++-
 target/arm/helper-a64.c     |  155 ++++++
 target/arm/helper.c         | 1222 +++++++++++++++++++++++++++++++++----------
 target/arm/machine.c        |   24 +
 target/arm/op_helper.c      |  174 +-----
 target/arm/pauth_helper.c   |  497 ++++++++++++++++++
 target/arm/translate-a64.c  |  537 ++++++++++++++---
 tests/libqtest.c            |   26 +
 docs/devel/migration.rst    |    9 +-
 23 files changed, 2552 insertions(+), 632 deletions(-)
 create mode 100644 target/arm/pauth_helper.c
From: Philippe Mathieu-Daudé <philmd@redhat.com>

When the device is disabled, the internal circuitry keeps the data
register loaded and doesn't update it.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20190104182057.8778-1-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/stm32f2xx_usart.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/char/stm32f2xx_usart.c b/hw/char/stm32f2xx_usart.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/stm32f2xx_usart.c
+++ b/hw/char/stm32f2xx_usart.c
@@ -XXX,XX +XXX,XX @@ static void stm32f2xx_usart_receive(void *opaque, const uint8_t *buf, int size)
 {
     STM32F2XXUsartState *s = opaque;
 
-    s->usart_dr = *buf;
-
     if (!(s->usart_cr1 & USART_CR1_UE && s->usart_cr1 & USART_CR1_RE)) {
         /* USART not enabled - drop the chars */
         DB_PRINT("Dropping the chars\n");
         return;
     }
 
+    s->usart_dr = *buf;
     s->usart_sr |= USART_SR_RXNE;
 
     if (s->usart_cr1 & USART_CR1_RXNEIE) {
--
2.20.1
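Condensed, the receive path after this change behaves as sketched below
(illustrative only; local names like cr1/dr/sr stand in for the device
state fields touched by the hunk above):

    if (!(cr1 & USART_CR1_UE) || !(cr1 & USART_CR1_RE)) {
        return;                  /* disabled: byte dropped, DR unchanged */
    }
    dr = byte;                   /* data register latched only when enabled */
    sr |= USART_SR_RXNE;         /* mark receive-data-ready */
    if (cr1 & USART_CR1_RXNEIE) {
        qemu_set_irq(irq, 1);    /* raise the data-ready interrupt */
    }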
From: Eric Auger <eric.auger@redhat.com>

Let's report that IO-coherent access is supported for translation
table walks, descriptor fetches and queues by setting the COHACC
override flag. Without that, we observe wrong command opcodes.
The DT description also advertises the DMA coherency.

Fixes: a703b4f6c1ee ("hw/arm/virt-acpi-build: Add smmuv3 node in IORT table")

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 20190107101041.765-1-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/acpi/acpi-defs.h | 2 ++
 hw/arm/virt-acpi-build.c    | 1 +
 2 files changed, 3 insertions(+)

diff --git a/include/hw/acpi/acpi-defs.h b/include/hw/acpi/acpi-defs.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/acpi/acpi-defs.h
+++ b/include/hw/acpi/acpi-defs.h
@@ -XXX,XX +XXX,XX @@ struct AcpiIortItsGroup {
 } QEMU_PACKED;
 typedef struct AcpiIortItsGroup AcpiIortItsGroup;
 
+#define ACPI_IORT_SMMU_V3_COHACC_OVERRIDE 1
+
 struct AcpiIortSmmu3 {
     ACPI_IORT_NODE_HEADER_DEF
     uint64_t base_address;
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     smmu->mapping_count = cpu_to_le32(1);
     smmu->mapping_offset = cpu_to_le32(sizeof(*smmu));
     smmu->base_address = cpu_to_le64(vms->memmap[VIRT_SMMU].base);
+    smmu->flags = cpu_to_le32(ACPI_IORT_SMMU_V3_COHACC_OVERRIDE);
     smmu->event_gsiv = cpu_to_le32(irq);
     smmu->pri_gsiv = cpu_to_le32(irq + 1);
     smmu->gerr_gsiv = cpu_to_le32(irq + 2);
--
2.20.1
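For context on the consumer side: an OS parsing the IORT checks bit 0
(the COHACC override) of the SMMUv3 node's flags field and, when set,
treats the SMMU's table walks and queue memory as cache-coherent.
A minimal sketch of that check (the smmu_coherent variable is
hypothetical, not from this patch):

    if (le32_to_cpu(smmu->flags) & ACPI_IORT_SMMU_V3_COHACC_OVERRIDE) {
        smmu_coherent = true;   /* table walks and queues snoop the cache */
    }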
From: Alexander Graf <agraf@suse.de>

In U-Boot, we switch from S-SVC -> Mon -> Hyp mode when we want to
enter Hyp mode. The change into Hyp mode is done by doing an
exception return from Mon. This doesn't work with current QEMU.

The problem is that in bad_mode_switch() we refuse to allow
the change of mode.

Note that bad_mode_switch() is used to do validation for two situations:

 (1) changes to mode by instructions writing to CPSR.M
     (ie not exception take/return) -- this corresponds to the
     Armv8 Arm ARM pseudocode AArch32.WriteModeByInstr
 (2) changes to mode by exception return

Attempting to enter or leave Hyp mode via case (1) is forbidden in
v8 and UNPREDICTABLE in v7, and QEMU is correct to disallow it
there. However, we're already doing that check at the top of the
bad_mode_switch() function, so if that passes then we should allow
the case (2) exception return mode changes to switch into Hyp mode.

We want to test whether we're trying to return to the nonexistent
"secure Hyp" mode, so we need to look at arm_is_secure_below_el3()
rather than arm_is_secure(), since the latter is always true if
we're in Mon (EL3).

Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190109152430.32359-1-agraf@suse.de
[PMM: rewrote commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static int bad_mode_switch(CPUARMState *env, int mode, CPSRWriteType write_type)
         return 0;
     case ARM_CPU_MODE_HYP:
         return !arm_feature(env, ARM_FEATURE_EL2)
-            || arm_current_el(env) < 2 || arm_is_secure(env);
+            || arm_current_el(env) < 2 || arm_is_secure_below_el3(env);
     case ARM_CPU_MODE_MON:
         return arm_current_el(env) < 3;
     default:
--
2.20.1
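To make the predicate difference concrete, this is the interesting case
the patch fixes, sketched as a comment (illustration only, not code from
the patch):

    /*
     * CPU in Mon (EL3) with SCR.NS == 1, exception-returning to Hyp:
     *
     *   arm_is_secure(env)           == true   -- EL3 itself is always
     *                                             secure
     *   arm_is_secure_below_el3(env) == false  -- SCR.NS says the lower
     *                                             ELs are non-secure
     *
     * The old check used arm_is_secure() and so rejected every return
     * from Mon to Hyp; the new check only rejects the genuinely
     * impossible "secure Hyp" combination.
     */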
From: Cédric Le Goater <clg@kaod.org>

The PHY behind the MAC of an Aspeed SoC can be controlled using two
different MDC/MDIO interfaces. The same registers PHYCR (MAC60) and
PHYDATA (MAC64) are involved but they have a different layout.

BIT31 of the Feature Register (MAC40) controls which MDC/MDIO
interface is active.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Message-id: 20190111125759.31577-1-clg@kaod.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/ftgmac100.c | 80 +++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 68 insertions(+), 12 deletions(-)

diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/ftgmac100.c
+++ b/hw/net/ftgmac100.c
@@ -XXX,XX +XXX,XX @@
 #define FTGMAC100_PHYDATA_MIIWDATA(x)   ((x) & 0xffff)
 #define FTGMAC100_PHYDATA_MIIRDATA(x)   (((x) >> 16) & 0xffff)
 
+/*
+ * PHY control register - New MDC/MDIO interface
+ */
+#define FTGMAC100_PHYCR_NEW_DATA(x)     (((x) >> 16) & 0xffff)
+#define FTGMAC100_PHYCR_NEW_FIRE        (1 << 15)
+#define FTGMAC100_PHYCR_NEW_ST_22       (1 << 12)
+#define FTGMAC100_PHYCR_NEW_OP(x)       (((x) >> 10) & 3)
+#define   FTGMAC100_PHYCR_NEW_OP_WRITE  0x1
+#define   FTGMAC100_PHYCR_NEW_OP_READ   0x2
+#define FTGMAC100_PHYCR_NEW_DEV(x)      (((x) >> 5) & 0x1f)
+#define FTGMAC100_PHYCR_NEW_REG(x)      ((x) & 0x1f)
+
 /*
  * Feature Register
  */
@@ -XXX,XX +XXX,XX @@ static void phy_reset(FTGMAC100State *s)
     s->phy_int = 0;
 }
 
-static uint32_t do_phy_read(FTGMAC100State *s, int reg)
+static uint16_t do_phy_read(FTGMAC100State *s, uint8_t reg)
 {
-    uint32_t val;
+    uint16_t val;
 
     switch (reg) {
     case MII_BMCR: /* Basic Control */
@@ -XXX,XX +XXX,XX @@ static uint32_t do_phy_read(FTGMAC100State *s, int reg)
                          MII_BMCR_FD | MII_BMCR_CTST)
 #define MII_ANAR_MASK 0x2d7f
 
-static void do_phy_write(FTGMAC100State *s, int reg, uint32_t val)
+static void do_phy_write(FTGMAC100State *s, uint8_t reg, uint16_t val)
 {
     switch (reg) {
     case MII_BMCR: /* Basic Control */
@@ -XXX,XX +XXX,XX @@ static void do_phy_write(FTGMAC100State *s, int reg, uint32_t val)
     }
 }
 
+static void do_phy_new_ctl(FTGMAC100State *s)
+{
+    uint8_t reg;
+    uint16_t data;
+
+    if (!(s->phycr & FTGMAC100_PHYCR_NEW_ST_22)) {
+        qemu_log_mask(LOG_UNIMP, "%s: unsupported ST code\n", __func__);
+        return;
+    }
+
+    /* Nothing to do */
+    if (!(s->phycr & FTGMAC100_PHYCR_NEW_FIRE)) {
+        return;
+    }
+
+    reg = FTGMAC100_PHYCR_NEW_REG(s->phycr);
+    data = FTGMAC100_PHYCR_NEW_DATA(s->phycr);
+
+    switch (FTGMAC100_PHYCR_NEW_OP(s->phycr)) {
+    case FTGMAC100_PHYCR_NEW_OP_WRITE:
+        do_phy_write(s, reg, data);
+        break;
+    case FTGMAC100_PHYCR_NEW_OP_READ:
+        s->phydata = do_phy_read(s, reg) & 0xffff;
+        break;
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid OP code %08x\n",
+                      __func__, s->phycr);
+    }
+
+    s->phycr &= ~FTGMAC100_PHYCR_NEW_FIRE;
+}
+
+static void do_phy_ctl(FTGMAC100State *s)
+{
+    uint8_t reg = FTGMAC100_PHYCR_REG(s->phycr);
+
+    if (s->phycr & FTGMAC100_PHYCR_MIIWR) {
+        do_phy_write(s, reg, s->phydata & 0xffff);
+        s->phycr &= ~FTGMAC100_PHYCR_MIIWR;
+    } else if (s->phycr & FTGMAC100_PHYCR_MIIRD) {
+        s->phydata = do_phy_read(s, reg) << 16;
+        s->phycr &= ~FTGMAC100_PHYCR_MIIRD;
+    } else {
+        qemu_log_mask(LOG_GUEST_ERROR, "%s: no OP code %08x\n",
+                      __func__, s->phycr);
+    }
+}
+
 static int ftgmac100_read_bd(FTGMAC100Desc *bd, dma_addr_t addr)
 {
     if (dma_memory_read(&address_space_memory, addr, bd, sizeof(*bd))) {
@@ -XXX,XX +XXX,XX @@ static void ftgmac100_write(void *opaque, hwaddr addr,
                             uint64_t value, unsigned size)
 {
     FTGMAC100State *s = FTGMAC100(opaque);
-    int reg;
 
     switch (addr & 0xff) {
     case FTGMAC100_ISR: /* Interrupt status */
@@ -XXX,XX +XXX,XX @@ static void ftgmac100_write(void *opaque, hwaddr addr,
         break;
 
     case FTGMAC100_PHYCR:  /* PHY Device control */
-        reg = FTGMAC100_PHYCR_REG(value);
         s->phycr = value;
-        if (value & FTGMAC100_PHYCR_MIIWR) {
-            do_phy_write(s, reg, s->phydata & 0xffff);
-            s->phycr &= ~FTGMAC100_PHYCR_MIIWR;
+        if (s->revr & FTGMAC100_REVR_NEW_MDIO_INTERFACE) {
+            do_phy_new_ctl(s);
         } else {
-            s->phydata = do_phy_read(s, reg) << 16;
-            s->phycr &= ~FTGMAC100_PHYCR_MIIRD;
+            do_phy_ctl(s);
         }
         break;
     case FTGMAC100_PHYDATA:
@@ -XXX,XX +XXX,XX @@ static void ftgmac100_write(void *opaque, hwaddr addr,
         s->dblac = value;
         break;
     case FTGMAC100_REVR:  /* Feature Register */
-        /* TODO: Only Old MDIO interface is supported */
-        s->revr = value & ~FTGMAC100_REVR_NEW_MDIO_INTERFACE;
+        s->revr = value;
         break;
     case FTGMAC100_FEAR1: /* Feature Register 1 */
         s->fear1 = value;
--
2.20.1
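For context, a guest driver drives the new interface by packing the whole
command into PHYCR and polling FIRE; read data then comes back in the low
half of PHYDATA (the old interface returned it in the high half). A rough
guest-side sketch using the macros above, where mac_readl()/mac_writel()
are hypothetical MMIO accessors, not part of this patch:

    static uint16_t new_mdio_read(uint8_t dev, uint8_t reg)
    {
        uint32_t phycr = FTGMAC100_PHYCR_NEW_FIRE
                       | FTGMAC100_PHYCR_NEW_ST_22
                       | (FTGMAC100_PHYCR_NEW_OP_READ << 10)  /* OP in bits 11:10 */
                       | ((dev & 0x1f) << 5)                  /* PHY device in 9:5 */
                       | (reg & 0x1f);                        /* register in 4:0 */

        mac_writel(FTGMAC100_PHYCR, phycr);
        /* the model clears FIRE once it has handled the transfer */
        while (mac_readl(FTGMAC100_PHYCR) & FTGMAC100_PHYCR_NEW_FIRE) {
            ;
        }
        return mac_readl(FTGMAC100_PHYDATA) & 0xffff;
    }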
From: Julia Suvorova <jusual@mail.ru>

Run qtest with a socket that connects a QEMU chardev and the test code.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190117161640.5496-2-jusual@mail.ru
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/libqtest.h | 11 +++++++++++
 tests/libqtest.c | 26 ++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/tests/libqtest.h b/tests/libqtest.h
index XXXXXXX..XXXXXXX 100644
--- a/tests/libqtest.h
+++ b/tests/libqtest.h
@@ -XXX,XX +XXX,XX @@ QTestState *qtest_init(const char *extra_args);
  */
 QTestState *qtest_init_without_qmp_handshake(const char *extra_args);
 
+/**
+ * qtest_init_with_serial:
+ * @extra_args: other arguments to pass to QEMU. CAUTION: these
+ * arguments are subject to word splitting and shell evaluation.
+ * @sock_fd: pointer to store the socket file descriptor for
+ * connection with serial.
+ *
+ * Returns: #QTestState instance.
+ */
+QTestState *qtest_init_with_serial(const char *extra_args, int *sock_fd);
+
 /**
  * qtest_quit:
  * @s: #QTestState instance to operate on.
diff --git a/tests/libqtest.c b/tests/libqtest.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/libqtest.c
+++ b/tests/libqtest.c
@@ -XXX,XX +XXX,XX @@ QTestState *qtest_initf(const char *fmt, ...)
     return s;
 }
 
+QTestState *qtest_init_with_serial(const char *extra_args, int *sock_fd)
+{
+    int sock_fd_init;
+    char *sock_path, sock_dir[] = "/tmp/qtest-serial-XXXXXX";
+    QTestState *qts;
+
+    g_assert(mkdtemp(sock_dir));
+    sock_path = g_strdup_printf("%s/sock", sock_dir);
+
+    sock_fd_init = init_socket(sock_path);
+
+    qts = qtest_initf("-chardev socket,id=s0,path=%s,nowait "
+                      "-serial chardev:s0 %s",
+                      sock_path, extra_args);
+
+    *sock_fd = socket_accept(sock_fd_init);
+
+    unlink(sock_path);
+    g_free(sock_path);
+    rmdir(sock_dir);
+
+    g_assert(*sock_fd >= 0);
+
+    return qts;
+}
+
 void qtest_quit(QTestState *s)
 {
     g_hook_destroy_link(&abrt_hooks, g_hook_find_data(&abrt_hooks, TRUE, s));
--
2.20.1
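A typical caller of the new helper looks roughly like this (an
illustrative sketch; the -machine string and test body are placeholders,
not from this patch):

    static void test_uart_output(void)
    {
        int sock_fd;
        char buf[64];
        QTestState *qts;

        qts = qtest_init_with_serial("-machine some-board", &sock_fd);

        /* ... drive the device under test with qtest commands ... */

        /* bytes the guest writes to its serial port arrive on sock_fd */
        g_assert_cmpint(read(sock_fd, buf, sizeof(buf)), >, 0);

        close(sock_fd);
        qtest_quit(qts);
    }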
From: Aaron Lindsay <aaron@os.amperecomputing.com>

Add arrays to hold the registers, the definitions themselves, access
functions, and logic to reset counters when PMCR.P is set. Update
filtering code to support counters other than PMCCNTR. Support migration
with raw read/write functions.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-11-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |   3 +
 target/arm/helper.c | 296 +++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 282 insertions(+), 17 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
      * pmccntr_op_finish.
      */
     uint64_t c15_ccnt_delta;
+    uint64_t c14_pmevcntr[31];
+    uint64_t c14_pmevcntr_delta[31];
+    uint64_t c14_pmevtyper[31];
     uint64_t pmccfiltr_el0; /* Performance Monitor Filter Register */
     uint64_t vpidr_el2; /* Virtualization Processor ID Register */
     uint64_t vmpidr_el2; /* Virtualization Multiprocessor ID Register */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
 #define PMCRDP  0x10
 #define PMCRD   0x8
 #define PMCRC   0x4
+#define PMCRP   0x2
 #define PMCRE   0x1
 
 #define PMXEVTYPER_P          0x80000000
@@ -XXX,XX +XXX,XX @@ uint64_t get_pmceid(CPUARMState *env, unsigned which)
     return pmceid;
 }
 
+/*
+ * Check at runtime whether a PMU event is supported for the current machine
+ */
+static bool event_supported(uint16_t number)
+{
+    if (number > MAX_EVENT_ID) {
+        return false;
+    }
+    return supported_event_map[number] != UNSUPPORTED_EVENT;
+}
+
 static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
 {
@@ -XXX,XX +XXX,XX @@ static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
         prohibited = env->cp15.c9_pmcr & PMCRDP;
     }
 
-    /* TODO Remove assert, set filter to correct PMEVTYPER */
-    assert(counter == 31);
-    filter = env->cp15.pmccfiltr_el0;
+    if (counter == 31) {
+        filter = env->cp15.pmccfiltr_el0;
+    } else {
+        filter = env->cp15.c14_pmevtyper[counter];
+    }
 
     p = filter & PMXEVTYPER_P;
     u = filter & PMXEVTYPER_U;
@@ -XXX,XX +XXX,XX @@ static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
         filtered = m != p;
     }
 
+    if (counter != 31) {
+        /*
+         * If not checking PMCCNTR, ensure the counter is setup to an event we
+         * support
+         */
+        uint16_t event = filter & PMXEVTYPER_EVTCOUNT;
+        if (!event_supported(event)) {
+            return false;
+        }
+    }
+
     return enabled && !prohibited && !filtered;
 }
 
@@ -XXX,XX +XXX,XX @@ void pmccntr_op_finish(CPUARMState *env)
     }
 }
 
+static void pmevcntr_op_start(CPUARMState *env, uint8_t counter)
+{
+    uint16_t event = env->cp15.c14_pmevtyper[counter] & PMXEVTYPER_EVTCOUNT;
+    uint64_t count = 0;
+    if (event_supported(event)) {
+        uint16_t event_idx = supported_event_map[event];
+        count = pm_events[event_idx].get_count(env);
+    }
+
+    if (pmu_counter_enabled(env, counter)) {
+        env->cp15.c14_pmevcntr[counter] =
+            count - env->cp15.c14_pmevcntr_delta[counter];
+    }
+    env->cp15.c14_pmevcntr_delta[counter] = count;
+}
+
+static void pmevcntr_op_finish(CPUARMState *env, uint8_t counter)
+{
+    if (pmu_counter_enabled(env, counter)) {
+        env->cp15.c14_pmevcntr_delta[counter] -=
+            env->cp15.c14_pmevcntr[counter];
+    }
+}
+
 void pmu_op_start(CPUARMState *env)
 {
+    unsigned int i;
     pmccntr_op_start(env);
+    for (i = 0; i < pmu_num_counters(env); i++) {
+        pmevcntr_op_start(env, i);
+    }
 }
 
 void pmu_op_finish(CPUARMState *env)
 {
+    unsigned int i;
     pmccntr_op_finish(env);
+    for (i = 0; i < pmu_num_counters(env); i++) {
+        pmevcntr_op_finish(env, i);
+    }
 }
 
 void pmu_pre_el_change(ARMCPU *cpu, void *ignored)
@@ -XXX,XX +XXX,XX @@ static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
         env->cp15.c15_ccnt = 0;
     }
 
+    if (value & PMCRP) {
+        unsigned int i;
+        for (i = 0; i < pmu_num_counters(env); i++) {
+            env->cp15.c14_pmevcntr[i] = 0;
+        }
+    }
+
     /* only the DP, X, D and E bits are writable */
     env->cp15.c9_pmcr &= ~0x39;
     env->cp15.c9_pmcr |= (value & 0x39);
@@ -XXX,XX +XXX,XX @@ void pmccntr_op_finish(CPUARMState *env)
 {
 }
 
+void pmevcntr_op_start(CPUARMState *env, uint8_t i)
+{
+}
+
+void pmevcntr_op_finish(CPUARMState *env, uint8_t i)
+{
+}
+
 void pmu_op_start(CPUARMState *env)
 {
 }
@@ -XXX,XX +XXX,XX @@ static void pmovsset_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->cp15.c9_pmovsr |= value;
 }
 
-static void pmxevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri,
-                             uint64_t value)
+static void pmevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                            uint64_t value, const uint8_t counter)
 {
+    if (counter == 31) {
+        pmccfiltr_write(env, ri, value);
+    } else if (counter < pmu_num_counters(env)) {
+        pmevcntr_op_start(env, counter);
+
+        /*
+         * If this counter's event type is changing, store the current
+         * underlying count for the new type in c14_pmevcntr_delta[counter] so
+         * pmevcntr_op_finish has the correct baseline when it converts back to
+         * a delta.
+         */
+        uint16_t old_event = env->cp15.c14_pmevtyper[counter] &
+            PMXEVTYPER_EVTCOUNT;
+        uint16_t new_event = value & PMXEVTYPER_EVTCOUNT;
+        if (old_event != new_event) {
+            uint64_t count = 0;
+            if (event_supported(new_event)) {
+                uint16_t event_idx = supported_event_map[new_event];
+                count = pm_events[event_idx].get_count(env);
+            }
+            env->cp15.c14_pmevcntr_delta[counter] = count;
+        }
+
+        env->cp15.c14_pmevtyper[counter] = value & PMXEVTYPER_MASK;
+        pmevcntr_op_finish(env, counter);
+    }
     /* Attempts to access PMXEVTYPER are CONSTRAINED UNPREDICTABLE when
      * PMSELR value is equal to or greater than the number of implemented
      * counters, but not equal to 0x1f. We opt to behave as a RAZ/WI.
      */
-    if (env->cp15.c9_pmselr == 0x1f) {
-        pmccfiltr_write(env, ri, value);
+}
+
+static uint64_t pmevtyper_read(CPUARMState *env, const ARMCPRegInfo *ri,
+                               const uint8_t counter)
+{
+    if (counter == 31) {
+        return env->cp15.pmccfiltr_el0;
+    } else if (counter < pmu_num_counters(env)) {
+        return env->cp15.c14_pmevtyper[counter];
+    } else {
+        /*
+         * We opt to behave as a RAZ/WI when attempts to access PMXEVTYPER
+         * are CONSTRAINED UNPREDICTABLE. See comments in pmevtyper_write().
+         */
+        return 0;
     }
 }
 
+static void pmevtyper_writefn(CPUARMState *env, const ARMCPRegInfo *ri,
+                              uint64_t value)
+{
+    uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7);
+    pmevtyper_write(env, ri, value, counter);
+}
+
+static void pmevtyper_rawwrite(CPUARMState *env, const ARMCPRegInfo *ri,
+                               uint64_t value)
+{
+    uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7);
+    env->cp15.c14_pmevtyper[counter] = value;
+
+    /*
+     * pmevtyper_rawwrite is called between a pair of pmu_op_start and
+     * pmu_op_finish calls when loading saved state for a migration. Because
+     * we're potentially updating the type of event here, the value written to
+     * c14_pmevcntr_delta by the preceeding pmu_op_start call may be for a
+     * different counter type. Therefore, we need to set this value to the
+     * current count for the counter type we're writing so that pmu_op_finish
+     * has the correct count for its calculation.
+     */
+    uint16_t event = value & PMXEVTYPER_EVTCOUNT;
+    if (event_supported(event)) {
+        uint16_t event_idx = supported_event_map[event];
+        env->cp15.c14_pmevcntr_delta[counter] =
+            pm_events[event_idx].get_count(env);
+    }
+}
+
+static uint64_t pmevtyper_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7);
+    return pmevtyper_read(env, ri, counter);
+}
+
+static void pmxevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                             uint64_t value)
+{
+    pmevtyper_write(env, ri, value, env->cp15.c9_pmselr & 31);
+}
+
 static uint64_t pmxevtyper_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
-    /* We opt to behave as a RAZ/WI when attempts to access PMXEVTYPER
-     * are CONSTRAINED UNPREDICTABLE. See comments in pmxevtyper_write().
+    return pmevtyper_read(env, ri, env->cp15.c9_pmselr & 31);
+}
+
+static void pmevcntr_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value, uint8_t counter)
+{
+    if (counter < pmu_num_counters(env)) {
+        pmevcntr_op_start(env, counter);
+        env->cp15.c14_pmevcntr[counter] = value;
+        pmevcntr_op_finish(env, counter);
+    }
+    /*
+     * We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
+     * are CONSTRAINED UNPREDICTABLE.
      */
-    if (env->cp15.c9_pmselr == 0x1f) {
-        return env->cp15.pmccfiltr_el0;
+}
+
+static uint64_t pmevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri,
+                              uint8_t counter)
+{
+    if (counter < pmu_num_counters(env)) {
+        uint64_t ret;
+        pmevcntr_op_start(env, counter);
+        ret = env->cp15.c14_pmevcntr[counter];
+        pmevcntr_op_finish(env, counter);
+        return ret;
     } else {
+        /* We opt to behave as a RAZ/WI when attempts to access PM[X]EVCNTR
+         * are CONSTRAINED UNPREDICTABLE. */
         return 0;
     }
 }
 
+static void pmevcntr_writefn(CPUARMState *env, const ARMCPRegInfo *ri,
+                             uint64_t value)
+{
+    uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7);
+    pmevcntr_write(env, ri, value, counter);
+}
+
+static uint64_t pmevcntr_readfn(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7);
+    return pmevcntr_read(env, ri, counter);
+}
+
+static void pmevcntr_rawwrite(CPUARMState *env, const ARMCPRegInfo *ri,
+                              uint64_t value)
+{
+    uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7);
+    assert(counter < pmu_num_counters(env));
+    env->cp15.c14_pmevcntr[counter] = value;
+    pmevcntr_write(env, ri, value, counter);
+}
+
+static uint64_t pmevcntr_rawread(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    uint8_t counter = ((ri->crm & 3) << 3) | (ri->opc2 & 7);
+    assert(counter < pmu_num_counters(env));
+    return env->cp15.c14_pmevcntr[counter];
+}
+
+static void pmxevcntr_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                            uint64_t value)
+{
+    pmevcntr_write(env, ri, value, env->cp15.c9_pmselr & 31);
+}
+
+static uint64_t pmxevcntr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    return pmevcntr_read(env, ri, env->cp15.c9_pmselr & 31);
+}
+
 static void pmuserenr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, cp15.pmccfiltr_el0),
       .resetvalue = 0, },
     { .name = "PMXEVTYPER", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 1,
-      .access = PL0_RW, .type = ARM_CP_NO_RAW, .accessfn = pmreg_access,
+      .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = pmreg_access,
       .writefn = pmxevtyper_write, .readfn = pmxevtyper_read },
     { .name = "PMXEVTYPER_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 1,
-      .access = PL0_RW, .type = ARM_CP_NO_RAW, .accessfn = pmreg_access,
+      .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = pmreg_access,
       .writefn = pmxevtyper_write, .readfn = pmxevtyper_read },
-    /* Unimplemented, RAZ/WI. */
     { .name = "PMXEVCNTR", .cp = 15, .crn = 9, .crm = 13, .opc1 = 0, .opc2 = 2,
-      .access = PL0_RW, .type = ARM_CP_CONST, .resetvalue = 0,
-      .accessfn = pmreg_access_xevcntr },
+      .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = pmreg_access_xevcntr,
+      .writefn = pmxevcntr_write, .readfn = pmxevcntr_read },
+    { .name = "PMXEVCNTR_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 2,
+      .access = PL0_RW, .type = ARM_CP_NO_RAW | ARM_CP_IO,
+      .accessfn = pmreg_access_xevcntr,
+      .writefn = pmxevcntr_write, .readfn = pmxevcntr_read },
     { .name = "PMUSERENR", .cp = 15, .crn = 9, .crm = 14, .opc1 = 0, .opc2 = 0,
       .access = PL0_R | PL1_RW, .accessfn = access_tpm,
       .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmuserenr),
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo el2_cp_reginfo[] = {
 #endif
     /* The only field of MDCR_EL2 that has a defined architectural reset value
      * is MDCR_EL2.HPMN which should reset to the value of PMCR_EL0.N; but we
-     * don't impelment any PMU event counters, so using zero as a reset
+     * don't implement any PMU event counters, so using zero as a reset
      * value for MDCR_EL2 is okay
      */
     { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
          * field as main ID register, and we implement only the cycle
          * count register.
          */
+        unsigned int i, pmcrn = 0;
 #ifndef CONFIG_USER_ONLY
         ARMCPRegInfo pmcr = {
             .name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         };
         define_one_arm_cp_reg(cpu, &pmcr);
         define_one_arm_cp_reg(cpu, &pmcr64);
+        for (i = 0; i < pmcrn; i++) {
+            char *pmevcntr_name = g_strdup_printf("PMEVCNTR%d", i);
+            char *pmevcntr_el0_name = g_strdup_printf("PMEVCNTR%d_EL0", i);
+            char *pmevtyper_name = g_strdup_printf("PMEVTYPER%d", i);
+            char *pmevtyper_el0_name = g_strdup_printf("PMEVTYPER%d_EL0", i);
+            ARMCPRegInfo pmev_regs[] = {
+                { .name = pmevcntr_name, .cp = 15, .crn = 15,
+                  .crm = 8 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
+                  .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
+                  .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
+                  .accessfn = pmreg_access },
+                { .name = pmevcntr_el0_name, .state = ARM_CP_STATE_AA64,
+                  .opc0 = 3, .opc1 = 3, .crn = 15, .crm = 8 | (3 & (i >> 3)),
+                  .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
+                  .type = ARM_CP_IO,
+                  .readfn = pmevcntr_readfn, .writefn = pmevcntr_writefn,
+                  .raw_readfn = pmevcntr_rawread,
+                  .raw_writefn = pmevcntr_rawwrite },
+                { .name = pmevtyper_name, .cp = 15, .crn = 15,
+                  .crm = 12 | (3 & (i >> 3)), .opc1 = 0, .opc2 = i & 7,
+                  .access = PL0_RW, .type = ARM_CP_IO | ARM_CP_ALIAS,
+                  .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
+                  .accessfn = pmreg_access },
+                { .name = pmevtyper_el0_name, .state = ARM_CP_STATE_AA64,
+                  .opc0 = 3, .opc1 = 3, .crn = 15, .crm = 12 | (3 & (i >> 3)),
+                  .opc2 = i & 7, .access = PL0_RW, .accessfn = pmreg_access,
+                  .type = ARM_CP_IO,
+                  .readfn = pmevtyper_readfn, .writefn = pmevtyper_writefn,
+                  .raw_writefn = pmevtyper_rawwrite },
+                REGINFO_SENTINEL
+            };
+            define_arm_cp_regs(cpu, pmev_regs);
+            g_free(pmevcntr_name);
+            g_free(pmevcntr_el0_name);
+            g_free(pmevtyper_name);
+            g_free(pmevtyper_el0_name);
+        }
 #endif
         ARMCPRegInfo clidr = {
             .name = "CLIDR", .state = ARM_CP_STATE_BOTH,
--
2.20.1
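Two details above reward a worked example: the crm/opc2 arithmetic that
recovers a counter index, and the _delta bookkeeping that turns a
free-running underlying event count into a writable guest-visible
counter. Illustrative numbers only, not code from the patch:

    /*
     * Counter index encoding, e.g. PMEVCNTR10:
     *   registered with crm = 8 | (3 & (10 >> 3)) = 9 and opc2 = 10 & 7 = 2;
     *   the accessor recovers n = ((crm & 3) << 3) | (opc2 & 7)
     *                            = (1 << 3) | 2 = 10.
     *
     * Delta bookkeeping: outside a pmu_op_start/pmu_op_finish window,
     * c14_pmevcntr_delta[n] == underlying_count - guest_visible_value.
     * Suppose the underlying count is 1000 and delta is 400:
     *   pmu_op_start:  c14_pmevcntr[n] = 1000 - 400 = 600; delta = 1000
     *   guest writes 50 to the counter
     *   pmu_op_finish: delta = 1000 - 50 = 950
     * When the underlying count later reaches 1200, the next op_start
     * shows 1200 - 950 = 250: the 200 events since the write, on top of
     * the written value of 50.
     */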
From: Richard Henderson <richard.henderson@linaro.org>

The arm_regime_tbi{0,1} functions are replaceable with the new function
by giving the lowest and highest address.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 35 -----------------------
 target/arm/helper.c | 70 ++++++++++++++++-----------------------------
 2 files changed, 24 insertions(+), 81 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_cpu_bswap_data(CPUARMState *env)
 }
 #endif
 
-#ifndef CONFIG_USER_ONLY
-/**
- * arm_regime_tbi0:
- * @env: CPUARMState
- * @mmu_idx: MMU index indicating required translation regime
- *
- * Extracts the TBI0 value from the appropriate TCR for the current EL
- *
- * Returns: the TBI0 value.
- */
-uint32_t arm_regime_tbi0(CPUARMState *env, ARMMMUIdx mmu_idx);
-
-/**
- * arm_regime_tbi1:
- * @env: CPUARMState
- * @mmu_idx: MMU index indicating required translation regime
- *
- * Extracts the TBI1 value from the appropriate TCR for the current EL
- *
- * Returns: the TBI1 value.
- */
-uint32_t arm_regime_tbi1(CPUARMState *env, ARMMMUIdx mmu_idx);
-#else
-/* We can't handle tagged addresses properly in user-only mode */
-static inline uint32_t arm_regime_tbi0(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    return 0;
-}
-
-static inline uint32_t arm_regime_tbi1(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    return 0;
-}
-#endif
-
 void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
                           target_ulong *cs_base, uint32_t *flags);
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
     return mmu_idx;
 }
 
-/* Returns TBI0 value for current regime el */
-uint32_t arm_regime_tbi0(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    TCR *tcr;
-    uint32_t el;
-
-    /* For EL0 and EL1, TBI is controlled by stage 1's TCR, so convert
-     * a stage 1+2 mmu index into the appropriate stage 1 mmu index.
-     */
-    mmu_idx = stage_1_mmu_idx(mmu_idx);
-
-    tcr = regime_tcr(env, mmu_idx);
-    el = regime_el(env, mmu_idx);
-
-    if (el > 1) {
-        return extract64(tcr->raw_tcr, 20, 1);
-    } else {
-        return extract64(tcr->raw_tcr, 37, 1);
-    }
-}
-
-/* Returns TBI1 value for current regime el */
-uint32_t arm_regime_tbi1(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    TCR *tcr;
-    uint32_t el;
-
-    /* For EL0 and EL1, TBI is controlled by stage 1's TCR, so convert
-     * a stage 1+2 mmu index into the appropriate stage 1 mmu index.
-     */
-    mmu_idx = stage_1_mmu_idx(mmu_idx);
-
-    tcr = regime_tcr(env, mmu_idx);
-    el = regime_el(env, mmu_idx);
-
-    if (el > 1) {
-        return 0;
-    } else {
-        return extract64(tcr->raw_tcr, 38, 1);
-    }
-}
-
 /* Return the TTBR associated with this translation regime */
 static inline uint64_t regime_ttbr(CPUARMState *env, ARMMMUIdx mmu_idx,
                                    int ttbrn)
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
 
         *pc = env->pc;
         flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
-        /* Get control bits for tagged addresses */
-        flags = FIELD_DP32(flags, TBFLAG_A64, TBII,
-                           (arm_regime_tbi1(env, mmu_idx) << 1) |
-                           arm_regime_tbi0(env, mmu_idx));
+
+#ifndef CONFIG_USER_ONLY
+        /*
+         * Get control bits for tagged addresses.  Note that the
+         * translator only uses this for instruction addresses.
+         */
+        {
+            ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
+            ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
+            int tbii, tbid;
+
+            /* FIXME: ARMv8.1-VHE S2 translation regime.  */
+            if (regime_el(env, stage1) < 2) {
+                ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
+                tbid = (p1.tbi << 1) | p0.tbi;
+                tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
+            } else {
+                tbid = p0.tbi;
+                tbii = tbid & !p0.tbid;
+            }
+
+            flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
+        }
+#endif
 
         if (cpu_isar_feature(aa64_sve, cpu)) {
             int sve_el = sve_exception_el(env, current_el);
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Drop el3_no_el2_cp_reginfo, el3_no_el2_v8_cp_reginfo, and the local
vpidr_regs definition, and rely on the squashing to ARM_CP_CONST
while registering for v8.

This is a behavior change for v7 cpus with Security Extensions and
without Virtualization Extensions, in that the virtualization cpregs
are now correctly not present.  This would be a migration compatibility
break, except that we have an existing bug in which migration of 32-bit
cpus with Security Extensions enabled does not work.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 158 ++++----------------------------------------
 1 file changed, 13 insertions(+), 145 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
       .fieldoffset = offsetoflow32(CPUARMState, cp15.mdcr_el3) },
 };
 
-/* Used to describe the behaviour of EL2 regs when EL2 does not exist.  */
-static const ARMCPRegInfo el3_no_el2_cp_reginfo[] = {
-    { .name = "VBAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 0, .opc2 = 0,
-      .access = PL2_RW,
-      .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore },
-    { .name = "HCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 0,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HACR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 7,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "ESR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 0,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPTR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "MAIR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "HMAIR1", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 10, .crm = 2, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "AMAIR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "HAMAIR1", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 10, .crm = 3, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "AFSR0_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "AFSR1_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 1, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "TCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "VTCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 2,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "VTTBR", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 6, .crm = 2,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "VTTBR_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "SCTLR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "TPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "TTBR0_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HTTBR", .cp = 15, .opc1 = 4, .crm = 2,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHCTL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 1, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTVOFF_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 0, .opc2 = 3,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTVOFF", .cp = 15, .opc1 = 4, .crm = 14,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHP_CVAL_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 2,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTHP_CVAL", .cp = 15, .opc1 = 6, .crm = 14,
-      .access = PL2_RW, .type = ARM_CP_64BIT | ARM_CP_CONST,
-      .resetvalue = 0 },
-    { .name = "CNTHP_TVAL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CNTHP_CTL_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 14, .crm = 2, .opc2 = 1,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "MDCR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 1,
-      .access = PL2_RW, .accessfn = access_tda,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HPFAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 4,
-      .access = PL2_RW, .accessfn = access_el3_aa32ns,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HSTR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 3,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "FAR_EL2", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 0,
-      .access = PL2_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "HIFAR", .state = ARM_CP_STATE_AA32,
-      .type = ARM_CP_CONST,
-      .cp = 15, .opc1 = 4, .crn = 6, .crm = 0, .opc2 = 2,
-      .access = PL2_RW, .resetvalue = 0 },
-};
-
-/* Ditto, but for registers which exist in ARMv8 but not v7 */
-static const ARMCPRegInfo el3_no_el2_v8_cp_reginfo[] = {
-    { .name = "HCR2", .state = ARM_CP_STATE_AA32,
-      .cp = 15, .opc1 = 4, .crn = 1, .crm = 1, .opc2 = 4,
-      .access = PL2_RW,
-      .type = ARM_CP_CONST, .resetvalue = 0 },
-};
-
 static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
 {
     ARMCPU *cpu = env_archcpu(env);
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_arm_cp_regs(cpu, v8_idregs);
         define_arm_cp_regs(cpu, v8_cp_reginfo);
     }
-    if (arm_feature(env, ARM_FEATURE_EL2)) {
+
+    /*
+     * Register the base EL2 cpregs.
+     * Pre v8, these registers are implemented only as part of the
+     * Virtualization Extensions (EL2 present).  Beginning with v8,
+     * if EL2 is missing but EL3 is enabled, mostly these become
+     * RES0 from EL3, with some specific exceptions.
+     */
+    if (arm_feature(env, ARM_FEATURE_EL2)
+        || (arm_feature(env, ARM_FEATURE_EL3)
+            && arm_feature(env, ARM_FEATURE_V8))) {
         uint64_t vmpidr_def = mpidr_read_val(env);
         ARMCPRegInfo vpidr_regs[] = {
             { .name = "VPIDR", .state = ARM_CP_STATE_AA32,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         };
         define_one_arm_cp_reg(cpu, &rvbar);
     }
-    } else {
-        /* If EL2 is missing but higher ELs are enabled, we need to
-         * register the no_el2 reginfos.
-         */
-        if (arm_feature(env, ARM_FEATURE_EL3)) {
-            /* When EL3 exists but not EL2, VPIDR and VMPIDR take the value
-             * of MIDR_EL1 and MPIDR_EL1.
-             */
-            ARMCPRegInfo vpidr_regs[] = {
-                { .name = "VPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-                  .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 0,
-                  .access = PL2_RW, .accessfn = access_el3_aa32ns,
-                  .type = ARM_CP_CONST, .resetvalue = cpu->midr,
-                  .fieldoffset = offsetof(CPUARMState, cp15.vpidr_el2) },
-                { .name = "VMPIDR_EL2", .state = ARM_CP_STATE_BOTH,
-                  .opc0 = 3, .opc1 = 4, .crn = 0, .crm = 0, .opc2 = 5,
-                  .access = PL2_RW, .accessfn = access_el3_aa32ns,
-                  .type = ARM_CP_NO_RAW,
-                  .writefn = arm_cp_write_ignore, .readfn = mpidr_read },
-            };
-            define_arm_cp_regs(cpu, vpidr_regs);
-            define_arm_cp_regs(cpu, el3_no_el2_cp_reginfo);
-            if (arm_feature(env, ARM_FEATURE_V8)) {
-                define_arm_cp_regs(cpu, el3_no_el2_v8_cp_reginfo);
-            }
-        }
     }
+
+    /* Register the base EL3 cpregs. */
     if (arm_feature(env, ARM_FEATURE_EL3)) {
         define_arm_cp_regs(cpu, el3_cp_reginfo);
         ARMCPRegInfo el3_regs[] = {
--
2.25.1
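For readers following the new TBII computation in cpu_get_tb_cpu_state above:
the flag packs one "ignore the tag for instruction fetch" bit per VA range.
Below is a standalone sketch of the same packing; struct params and pack_tbii
are illustrative names only, not QEMU API, with p0/p1 standing for the
parameters of the low and high halves of the address space:

#include <stdbool.h>
#include <stdio.h>

struct params { bool tbi, tbid; };

static int pack_tbii(struct params p0, struct params p1, bool two_ranges)
{
    int tbid, tbii;
    if (two_ranges) {            /* EL0/EL1: TTBR0 and TTBR1 halves */
        tbid = (p1.tbi << 1) | p0.tbi;
        tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
    } else {                     /* EL2/EL3: a single range */
        tbid = p0.tbi;
        tbii = tbid & !p0.tbid;
    }
    return tbii;
}

int main(void)
{
    struct params p0 = { .tbi = true, .tbid = false };
    struct params p1 = { .tbi = true, .tbid = true };
    /* Instruction addresses ignore the tag only where TBI && !TBID. */
    printf("TBII = %d\n", pack_tbii(p0, p1, true));   /* prints 1 */
    return 0;
}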
From: Richard Henderson <richard.henderson@linaro.org>

We need to reuse this from helper-a64.c.  Provide a stub
definition for CONFIG_USER_ONLY.  This matches the stub
definitions that we removed for arm_regime_tbi{0,1} before.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 17 +++++++++++++++++
 target/arm/helper.c    |  4 ++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
     bool using64k   : 1;
 } ARMVAParameters;
 
+#ifdef CONFIG_USER_ONLY
+static inline ARMVAParameters aa64_va_parameters(CPUARMState *env,
+                                                 uint64_t va,
+                                                 ARMMMUIdx mmu_idx, bool data)
+{
+    return (ARMVAParameters) {
+        /* 48-bit address space */
+        .tsz = 16,
+        /* We can't handle tagged addresses properly in user-only mode */
+        .tbi = false,
+    };
+}
+#else
+ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
+                                   ARMMMUIdx mmu_idx, bool data);
+#endif
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
     return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint;
 }
 
-static ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
-                                          ARMMMUIdx mmu_idx, bool data)
+ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
+                                   ARMMMUIdx mmu_idx, bool data)
 {
     uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
     uint32_t el = regime_el(env, mmu_idx);
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Drop zcr_no_el2_reginfo and merge the 3 registers into one array,
now that ZCR_EL2 can be squashed to RES0 and ZCR_EL3 dropped
while registering.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 55 ++++++++++++++-------------------------------
 1 file changed, 17 insertions(+), 38 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     }
 }
 
-static const ARMCPRegInfo zcr_el1_reginfo = {
-    .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL1_RW, .type = ARM_CP_SVE,
-    .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
-    .writefn = zcr_write, .raw_writefn = raw_write
-};
-
-static const ARMCPRegInfo zcr_el2_reginfo = {
-    .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL2_RW, .type = ARM_CP_SVE,
-    .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
-    .writefn = zcr_write, .raw_writefn = raw_write
-};
-
-static const ARMCPRegInfo zcr_no_el2_reginfo = {
-    .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL2_RW, .type = ARM_CP_SVE,
-    .readfn = arm_cp_read_zero, .writefn = arm_cp_write_ignore
-};
-
-static const ARMCPRegInfo zcr_el3_reginfo = {
-    .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
-    .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
-    .access = PL3_RW, .type = ARM_CP_SVE,
-    .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
-    .writefn = zcr_write, .raw_writefn = raw_write
+static const ARMCPRegInfo zcr_reginfo[] = {
+    { .name = "ZCR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_SVE,
+      .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[1]),
+      .writefn = zcr_write, .raw_writefn = raw_write },
+    { .name = "ZCR_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 0,
+      .access = PL2_RW, .type = ARM_CP_SVE,
+      .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[2]),
+      .writefn = zcr_write, .raw_writefn = raw_write },
+    { .name = "ZCR_EL3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 0,
+      .access = PL3_RW, .type = ARM_CP_SVE,
+      .fieldoffset = offsetof(CPUARMState, vfp.zcr_el[3]),
+      .writefn = zcr_write, .raw_writefn = raw_write },
 };
 
 void hw_watchpoint_update(ARMCPU *cpu, int n)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     }
 
     if (cpu_isar_feature(aa64_sve, cpu)) {
-        define_one_arm_cp_reg(cpu, &zcr_el1_reginfo);
-        if (arm_feature(env, ARM_FEATURE_EL2)) {
-            define_one_arm_cp_reg(cpu, &zcr_el2_reginfo);
-        } else {
-            define_one_arm_cp_reg(cpu, &zcr_no_el2_reginfo);
-        }
-        if (arm_feature(env, ARM_FEATURE_EL3)) {
-            define_one_arm_cp_reg(cpu, &zcr_el3_reginfo);
-        }
+        define_arm_cp_regs(cpu, zcr_reginfo);
     }
 
 #ifdef TARGET_AARCH64
--
2.25.1
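A quick note on the user-only stub above: it hard-codes tsz = 16, and since
the usable input address size is 64 - tsz bits (the relation used later in
this series), that corresponds to a 48-bit address space. A trivial
standalone check, purely for illustration:

#include <stdio.h>

int main(void)
{
    unsigned tsz = 16;                          /* value used by the stub */
    printf("inputsize = %u bits\n", 64 - tsz);  /* prints 48 */
    return 0;
}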
From: Aaron Lindsay <aaron@os.amperecomputing.com>

Add an array for PMOVSSET so we only define it for v7ve+ platforms

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-7-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void pmovsr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->cp15.c9_pmovsr &= ~value;
 }
 
+static void pmovsset_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                           uint64_t value)
+{
+    value &= pmu_counter_mask(env);
+    env->cp15.c9_pmovsr |= value;
+}
+
 static void pmxevtyper_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7mp_cp_reginfo[] = {
     REGINFO_SENTINEL
 };
 
+static const ARMCPRegInfo pmovsset_cp_reginfo[] = {
+    /* PMOVSSET is not implemented in v7 before v7ve */
+    { .name = "PMOVSSET", .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 3,
+      .access = PL0_RW, .accessfn = pmreg_access,
+      .type = ARM_CP_ALIAS,
+      .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmovsr),
+      .writefn = pmovsset_write,
+      .raw_writefn = raw_write },
+    { .name = "PMOVSSET_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 14, .opc2 = 3,
+      .access = PL0_RW, .accessfn = pmreg_access,
+      .type = ARM_CP_ALIAS,
+      .fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
+      .writefn = pmovsset_write,
+      .raw_writefn = raw_write },
+    REGINFO_SENTINEL
+};
+
 static void teecr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                         uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         !arm_feature(env, ARM_FEATURE_PMSA)) {
         define_arm_cp_regs(cpu, v7mp_cp_reginfo);
     }
+    if (arm_feature(env, ARM_FEATURE_V7VE)) {
+        define_arm_cp_regs(cpu, pmovsset_cp_reginfo);
+    }
     if (arm_feature(env, ARM_FEATURE_V7)) {
         /* v7 performance monitor control register: same implementor
          * field as main ID register, and we implement only the cycle
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

This register is present for either VHE or Debugv8p2.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo jazelle_regs[] = {
       .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
 };
 
+static const ARMCPRegInfo contextidr_el2 = {
+    .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
+    .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
+    .access = PL2_RW,
+    .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2])
+};
+
 static const ARMCPRegInfo vhe_reginfo[] = {
-    { .name = "CONTEXTIDR_EL2", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 1,
-      .access = PL2_RW,
-      .fieldoffset = offsetof(CPUARMState, cp15.contextidr_el[2]) },
     { .name = "TTBR1_EL2", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 4, .crn = 2, .crm = 0, .opc2 = 1,
       .access = PL2_RW, .writefn = vmsa_tcr_ttbr_el2_write,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         define_one_arm_cp_reg(cpu, &ssbs_reginfo);
     }
 
+    if (cpu_isar_feature(aa64_vh, cpu) ||
+        cpu_isar_feature(aa64_debugv8p2, cpu)) {
+        define_one_arm_cp_reg(cpu, &contextidr_el2);
+    }
     if (arm_feature(env, ARM_FEATURE_EL2) && cpu_isar_feature(aa64_vh, cpu)) {
         define_arm_cp_regs(cpu, vhe_reginfo);
     }
--
2.25.1
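PMOVSSET, added above, is the write-1-to-set companion of the existing
write-1-to-clear PMOVSR handling visible in the hunk's context. A minimal
standalone sketch of the pair follows; the mask constant is an assumption
chosen for illustration (in QEMU it comes from pmu_counter_mask()):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define COUNTER_MASK 0x8000000fu  /* assumed: cycle counter + 4 event counters */

static uint32_t pmovsr;           /* overflow status bits */

static void pmovsset_write(uint32_t value)
{
    pmovsr |= value & COUNTER_MASK;     /* write 1 to set */
}

static void pmovsclr_write(uint32_t value)
{
    pmovsr &= ~(value & COUNTER_MASK);  /* write 1 to clear */
}

int main(void)
{
    pmovsset_write(0x1);
    pmovsset_write(0x80000000);
    pmovsclr_write(0x1);
    printf("PMOVSR = 0x%08" PRIx32 "\n", pmovsr);  /* prints 0x80000000 */
    return 0;
}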
From: Richard Henderson <richard.henderson@linaro.org>

Split out functions to extract the virtual address parameters.
Let the functions choose T0 or T1 address space half, if present.
Extract (most of) the control bits that vary between EL or Tx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-19-richard.henderson@linaro.org
[PMM: fixed minor checkpatch comment nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h |  14 +++
 target/arm/helper.c    | 278 ++++++++++++++++++++++-------------------
 2 files changed, 164 insertions(+), 128 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
 #endif
 
+/*
+ * Parameters of a given virtual address, as extracted from the
+ * translation control register (TCR) for a given regime.
+ */
+typedef struct ARMVAParameters {
+    unsigned tsz    : 8;
+    unsigned select : 1;
+    bool tbi        : 1;
+    bool epd        : 1;
+    bool hpd        : 1;
+    bool using16k   : 1;
+    bool using64k   : 1;
+} ARMVAParameters;
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
     return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint;
 }
 
+static ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
+                                          ARMMMUIdx mmu_idx, bool data)
+{
+    uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
+    uint32_t el = regime_el(env, mmu_idx);
+    bool tbi, epd, hpd, using16k, using64k;
+    int select, tsz;
+
+    /*
+     * Bit 55 is always between the two regions, and is canonical for
+     * determining if address tagging is enabled.
+     */
+    select = extract64(va, 55, 1);
+
+    if (el > 1) {
+        tsz = extract32(tcr, 0, 6);
+        using64k = extract32(tcr, 14, 1);
+        using16k = extract32(tcr, 15, 1);
+        if (mmu_idx == ARMMMUIdx_S2NS) {
+            /* VTCR_EL2 */
+            tbi = hpd = false;
+        } else {
+            tbi = extract32(tcr, 20, 1);
+            hpd = extract32(tcr, 24, 1);
+        }
+        epd = false;
+    } else if (!select) {
+        tsz = extract32(tcr, 0, 6);
+        epd = extract32(tcr, 7, 1);
+        using64k = extract32(tcr, 14, 1);
+        using16k = extract32(tcr, 15, 1);
+        tbi = extract64(tcr, 37, 1);
+        hpd = extract64(tcr, 41, 1);
+    } else {
+        int tg = extract32(tcr, 30, 2);
+        using16k = tg == 1;
+        using64k = tg == 3;
+        tsz = extract32(tcr, 16, 6);
+        epd = extract32(tcr, 23, 1);
+        tbi = extract64(tcr, 38, 1);
+        hpd = extract64(tcr, 42, 1);
+    }
+    tsz = MIN(tsz, 39);  /* TODO: ARMv8.4-TTST */
+    tsz = MAX(tsz, 16);  /* TODO: ARMv8.2-LVA  */
+
+    return (ARMVAParameters) {
+        .tsz = tsz,
+        .select = select,
+        .tbi = tbi,
+        .epd = epd,
+        .hpd = hpd,
+        .using16k = using16k,
+        .using64k = using64k,
+    };
+}
+
+static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
+                                          ARMMMUIdx mmu_idx)
+{
+    uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
+    uint32_t el = regime_el(env, mmu_idx);
+    int select, tsz;
+    bool epd, hpd;
+
+    if (mmu_idx == ARMMMUIdx_S2NS) {
+        /* VTCR */
+        bool sext = extract32(tcr, 4, 1);
+        bool sign = extract32(tcr, 3, 1);
+
+        /*
+         * If the sign-extend bit is not the same as t0sz[3], the result
+         * is unpredictable. Flag this as a guest error.
+         */
+        if (sign != sext) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "AArch32: VTCR.S / VTCR.T0SZ[3] mismatch\n");
+        }
+        tsz = sextract32(tcr, 0, 4) + 8;
+        select = 0;
+        hpd = false;
+        epd = false;
+    } else if (el == 2) {
+        /* HTCR */
+        tsz = extract32(tcr, 0, 3);
+        select = 0;
+        hpd = extract64(tcr, 24, 1);
+        epd = false;
+    } else {
+        int t0sz = extract32(tcr, 0, 3);
+        int t1sz = extract32(tcr, 16, 3);
+
+        if (t1sz == 0) {
+            select = va > (0xffffffffu >> t0sz);
+        } else {
+            /* Note that we will detect errors later.  */
+            select = va >= ~(0xffffffffu >> t1sz);
+        }
+        if (!select) {
+            tsz = t0sz;
+            epd = extract32(tcr, 7, 1);
+            hpd = extract64(tcr, 41, 1);
+        } else {
+            tsz = t1sz;
+            epd = extract32(tcr, 23, 1);
+            hpd = extract64(tcr, 42, 1);
+        }
+        /* For aarch32, hpd0 is not enabled without t2e as well.  */
+        hpd &= extract32(tcr, 6, 1);
+    }
+
+    return (ARMVAParameters) {
+        .tsz = tsz,
+        .select = select,
+        .epd = epd,
+        .hpd = hpd,
+    };
+}
+
 static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
                                hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
     /* Read an LPAE long-descriptor translation table. */
     ARMFaultType fault_type = ARMFault_Translation;
     uint32_t level;
-    uint32_t epd = 0;
-    int32_t t0sz, t1sz;
-    uint32_t tg;
+    ARMVAParameters param;
     uint64_t ttbr;
-    int ttbr_select;
     hwaddr descaddr, indexmask, indexmask_grainsize;
     uint32_t tableattrs;
-    target_ulong page_size;
+    target_ulong page_size, top_bits;
     uint32_t attrs;
-    int32_t stride = 9;
-    int32_t addrsize;
-    int inputsize;
-    int32_t tbi = 0;
+    int32_t stride;
+    int addrsize, inputsize;
     TCR *tcr = regime_tcr(env, mmu_idx);
     int ap, ns, xn, pxn;
     uint32_t el = regime_el(env, mmu_idx);
-    bool ttbr1_valid = true;
+    bool ttbr1_valid;
     uint64_t descaddrmask;
     bool aarch64 = arm_el_is_aa64(env, el);
-    bool hpd = false;
 
     /* TODO:
      * This code does not handle the different format TCR for VTCR_EL2.
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
      * support for those page table walks.
      */
     if (aarch64) {
+        param = aa64_va_parameters(env, address, mmu_idx,
+                                   access_type != MMU_INST_FETCH);
         level = 0;
-        addrsize = 64;
-        if (el > 1) {
-            if (mmu_idx != ARMMMUIdx_S2NS) {
-                tbi = extract64(tcr->raw_tcr, 20, 1);
-            }
-        } else {
-            if (extract64(address, 55, 1)) {
-                tbi = extract64(tcr->raw_tcr, 38, 1);
-            } else {
-                tbi = extract64(tcr->raw_tcr, 37, 1);
-            }
-        }
-        tbi *= 8;
-
         /* If we are in 64-bit EL2 or EL3 then there is no TTBR1, so mark it
          * invalid.
          */
-        if (el > 1) {
-            ttbr1_valid = false;
-        }
+        ttbr1_valid = (el < 2);
+        addrsize = 64 - 8 * param.tbi;
+        inputsize = 64 - param.tsz;
     } else {
+        param = aa32_va_parameters(env, address, mmu_idx);
         level = 1;
-        addrsize = 32;
         /* There is no TTBR1 for EL2 */
-        if (el == 2) {
-            ttbr1_valid = false;
-        }
+        ttbr1_valid = (el != 2);
+        addrsize = (mmu_idx == ARMMMUIdx_S2NS ? 40 : 32);
+        inputsize = addrsize - param.tsz;
     }
 
-    /* Determine whether this address is in the region controlled by
-     * TTBR0 or TTBR1 (or if it is in neither region and should fault).
-     * This is a Non-secure PL0/1 stage 1 translation, so controlled by
-     * TTBCR/TTBR0/TTBR1 in accordance with ARM ARM DDI0406C table B-32:
+    /*
+     * We determined the region when collecting the parameters, but we
+     * have not yet validated that the address is valid for the region.
+     * Extract the top bits and verify that they all match select.
      */
-    if (aarch64) {
-        /* AArch64 translation. */
-        t0sz = extract32(tcr->raw_tcr, 0, 6);
-        t0sz = MIN(t0sz, 39);
-        t0sz = MAX(t0sz, 16);
-    } else if (mmu_idx != ARMMMUIdx_S2NS) {
-        /* AArch32 stage 1 translation. */
-        t0sz = extract32(tcr->raw_tcr, 0, 3);
-    } else {
-        /* AArch32 stage 2 translation. */
-        bool sext = extract32(tcr->raw_tcr, 4, 1);
-        bool sign = extract32(tcr->raw_tcr, 3, 1);
-        /* Address size is 40-bit for a stage 2 translation,
-         * and t0sz can be negative (from -8 to 7),
-         * so we need to adjust it to use the TTBR selecting logic below.
-         */
-        addrsize = 40;
-        t0sz = sextract32(tcr->raw_tcr, 0, 4) + 8;
-
-        /* If the sign-extend bit is not the same as t0sz[3], the result
-         * is unpredictable. Flag this as a guest error. */
-        if (sign != sext) {
-            qemu_log_mask(LOG_GUEST_ERROR,
-                          "AArch32: VTCR.S / VTCR.T0SZ[3] mismatch\n");
-        }
-    }
-    t1sz = extract32(tcr->raw_tcr, 16, 6);
-    if (aarch64) {
-        t1sz = MIN(t1sz, 39);
-        t1sz = MAX(t1sz, 16);
-    }
-    if (t0sz && !extract64(address, addrsize - t0sz, t0sz - tbi)) {
-        /* there is a ttbr0 region and we are in it (high bits all zero) */
-        ttbr_select = 0;
-    } else if (ttbr1_valid && t1sz &&
-               !extract64(~address, addrsize - t1sz, t1sz - tbi)) {
-        /* there is a ttbr1 region and we are in it (high bits all one) */
-        ttbr_select = 1;
-    } else if (!t0sz) {
-        /* ttbr0 region is "everything not in the ttbr1 region" */
-        ttbr_select = 0;
-    } else if (!t1sz && ttbr1_valid) {
-        /* ttbr1 region is "everything not in the ttbr0 region" */
-        ttbr_select = 1;
-    } else {
-        /* in the gap between the two regions, this is a Translation fault */
+    top_bits = sextract64(address, inputsize, addrsize - inputsize);
+    if (-top_bits != param.select || (param.select && !ttbr1_valid)) {
+        /* In the gap between the two regions, this is a Translation fault */
         fault_type = ARMFault_Translation;
         goto do_fault;
     }
 
+    if (param.using64k) {
+        stride = 13;
+    } else if (param.using16k) {
+        stride = 11;
+    } else {
+        stride = 9;
+    }
+
     /* Note that QEMU ignores shareability and cacheability attributes,
      * so we don't need to do anything with the SH, ORGN, IRGN fields
      * in the TTBCR.  Similarly, TTBCR:A1 selects whether we get the
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
      * implement any ASID-like capability so we can ignore it (instead
      * we will always flush the TLB any time the ASID is changed).
      */
-    if (ttbr_select == 0) {
-        ttbr = regime_ttbr(env, mmu_idx, 0);
-        if (el < 2) {
-            epd = extract32(tcr->raw_tcr, 7, 1);
-        }
-        inputsize = addrsize - t0sz;
-
-        tg = extract32(tcr->raw_tcr, 14, 2);
-        if (tg == 1) { /* 64KB pages */
-            stride = 13;
-        }
-        if (tg == 2) { /* 16KB pages */
-            stride = 11;
-        }
-        if (aarch64 && el > 1) {
-            hpd = extract64(tcr->raw_tcr, 24, 1);
-        } else {
-            hpd = extract64(tcr->raw_tcr, 41, 1);
-        }
-        if (!aarch64) {
-            /* For aarch32, hpd0 is not enabled without t2e as well.  */
-            hpd &= extract64(tcr->raw_tcr, 6, 1);
-        }
-    } else {
-        /* We should only be here if TTBR1 is valid */
-        assert(ttbr1_valid);
-
-        ttbr = regime_ttbr(env, mmu_idx, 1);
-        epd = extract32(tcr->raw_tcr, 23, 1);
-        inputsize = addrsize - t1sz;
-
-        tg = extract32(tcr->raw_tcr, 30, 2);
-        if (tg == 3)  { /* 64KB pages */
-            stride = 13;
-        }
-        if (tg == 1) { /* 16KB pages */
-            stride = 11;
-        }
-        hpd = extract64(tcr->raw_tcr, 42, 1);
-        if (!aarch64) {
-            /* For aarch32, hpd1 is not enabled without t2e as well.  */
-            hpd &= extract64(tcr->raw_tcr, 6, 1);
-        }
-    }
+    ttbr = regime_ttbr(env, mmu_idx, param.select);
 
     /* Here we should have set up all the parameters for the translation:
      * inputsize, ttbr, epd, stride, tbi
      */
 
-    if (epd) {
+    if (param.epd) {
         /* Translation table walk disabled => Translation fault on TLB miss
          * Note: This is always 0 on 64-bit EL2 and EL3.
          */
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, target_ulong address,
     }
     /* Merge in attributes from table descriptors */
     attrs |= nstable << 3; /* NS */
-    if (hpd) {
+    if (param.hpd) {
         /* HPD disables all the table attributes except NSTable.  */
         break;
     }
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Previously we were defining some of these in user-only mode,
but none of them are accessible from user-only, therefore
define them only in system mode.

This will shortly be used from cpu_tcg.c also.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220506180242.216785-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h |  6 ++++
 target/arm/cpu64.c     | 64 +++---------------------------------------
 target/arm/cpu_tcg.c   | 59 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 69 insertions(+), 60 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
 int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
 #endif
 
+#ifdef CONFIG_USER_ONLY
+static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
+#else
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
+#endif
+
 #endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@
 #include "hvf_arm.h"
 #include "qapi/visitor.h"
 #include "hw/qdev-properties.h"
-#include "cpregs.h"
+#include "internals.h"
 
 
-#ifndef CONFIG_USER_ONLY
-static uint64_t a57_a53_l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
-{
-    ARMCPU *cpu = env_archcpu(env);
-
-    /* Number of cores is in [25:24]; otherwise we RAZ */
-    return (cpu->core_count - 1) << 24;
-}
-#endif
-
-static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
-#ifndef CONFIG_USER_ONLY
-    { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
-      .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
-      .writefn = arm_cp_write_ignore },
-    { .name = "L2CTLR",
-      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
-      .access = PL1_RW, .readfn = a57_a53_l2ctlr_read,
-      .writefn = arm_cp_write_ignore },
-#endif
-    { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2ECTLR",
-      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUACTLR",
-      .cp = 15, .opc1 = 0, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUECTLR",
-      .cp = 15, .opc1 = 1, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "CPUMERRSR",
-      .cp = 15, .opc1 = 2, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-    { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
-      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
-      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
-    { .name = "L2MERRSR",
-      .cp = 15, .opc1 = 3, .crm = 15,
-      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
-};
-
 static void aarch64_a57_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 static void aarch64_a53_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a53_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 static void aarch64_a72_initfn(Object *obj)
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->gic_num_lrs = 4;
     cpu->gic_vpribits = 5;
     cpu->gic_vprebits = 5;
-    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
 #endif
 #include "cpregs.h"
 
+#ifndef CONFIG_USER_ONLY
+static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    ARMCPU *cpu = env_archcpu(env);
+
+    /* Number of cores is in [25:24]; otherwise we RAZ */
+    return (cpu->core_count - 1) << 24;
+}
+
+static const ARMCPRegInfo cortex_a72_a57_a53_cp_reginfo[] = {
+    { .name = "L2CTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 2,
+      .access = PL1_RW, .readfn = l2ctlr_read,
+      .writefn = arm_cp_write_ignore },
+    { .name = "L2CTLR",
+      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 2,
+      .access = PL1_RW, .readfn = l2ctlr_read,
+      .writefn = arm_cp_write_ignore },
+    { .name = "L2ECTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 11, .crm = 0, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2ECTLR",
+      .cp = 15, .opc1 = 1, .crn = 9, .crm = 0, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2ACTLR", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 0, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUACTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 0,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUACTLR",
+      .cp = 15, .opc1 = 0, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "CPUECTLR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 1,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUECTLR",
+      .cp = 15, .opc1 = 1, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "CPUMERRSR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 2,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "CPUMERRSR",
+      .cp = 15, .opc1 = 2, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+    { .name = "L2MERRSR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 15, .crm = 2, .opc2 = 3,
+      .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "L2MERRSR",
+      .cp = 15, .opc1 = 3, .crm = 15,
+      .access = PL1_RW, .type = ARM_CP_CONST | ARM_CP_64BIT, .resetvalue = 0 },
+};
+
+void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu)
+{
+    define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
+}
+#endif /* !CONFIG_USER_ONLY */
+
 /* CPU models. These are not needed for the AArch64 linux-user build. */
 #if !defined(CONFIG_USER_ONLY) || !defined(TARGET_AARCH64)
 
--
2.25.1
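The key trick in the rewritten region check of the get_phys_addr_lpae patch
above: sextract64() of the bits above inputsize yields 0 for a TTBR0-region
address, -1 for a TTBR1-region address, and any other value for the guard
gap. A standalone sketch with an assumed 48-bit input size (the sextract64
helper is reproduced here so the example compiles on its own):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Same semantics as QEMU's bitops helper: signed field extraction. */
static int64_t sextract64(uint64_t value, unsigned start, unsigned length)
{
    return ((int64_t)(value << (64 - length - start))) >> (64 - length);
}

int main(void)
{
    unsigned addrsize = 64, inputsize = 48;
    uint64_t addrs[] = { 0x0000123456789abcull,   /* TTBR0 region: 0 */
                         0xffff123456789abcull,   /* TTBR1 region: -1 */
                         0x00ff123456789abcull }; /* in the gap: fault */
    for (int i = 0; i < 3; i++) {
        int64_t top_bits = sextract64(addrs[i], inputsize,
                                      addrsize - inputsize);
        printf("%#018" PRIx64 ": top_bits = %" PRId64 "\n",
               addrs[i], top_bits);               /* 0, -1, then 255 */
    }
    return 0;
}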
From: Richard Henderson <richard.henderson@linaro.org>

We can perform this with fewer operations.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 62 +++++++++++++-------------------------
 1 file changed, 21 insertions(+), 41 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ void gen_a64_set_pc_im(uint64_t val)
 /* Load the PC from a generic TCG variable.
  *
  * If address tagging is enabled via the TCR TBI bits, then loading
- * an address into the PC will clear out any tag in the it:
+ * an address into the PC will clear out any tag in it:
  *  + for EL2 and EL3 there is only one TBI bit, and if it is set
  *    then the address is zero-extended, clearing bits [63:56]
  *  + for EL0 and EL1, TBI0 controls addresses with bit 55 == 0
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
     int tbi = s->tbii;
 
     if (s->current_el <= 1) {
-        /* Test if NEITHER or BOTH TBI values are set.  If so, no need to
-         * examine bit 55 of address, can just generate code.
-         * If mixed, then test via generated code
-         */
-        if (tbi == 3) {
-            TCGv_i64 tmp_reg = tcg_temp_new_i64();
-            /* Both bits set, sign extension from bit 55 into [63:56] will
-             * cover both cases
-             */
-            tcg_gen_shli_i64(tmp_reg, src, 8);
-            tcg_gen_sari_i64(cpu_pc, tmp_reg, 8);
-            tcg_temp_free_i64(tmp_reg);
-        } else if (tbi == 0) {
-            /* Neither bit set, just load it as-is */
-            tcg_gen_mov_i64(cpu_pc, src);
-        } else {
-            TCGv_i64 tcg_tmpval = tcg_temp_new_i64();
-            TCGv_i64 tcg_bit55  = tcg_temp_new_i64();
-            TCGv_i64 tcg_zero   = tcg_const_i64(0);
+        if (tbi != 0) {
+            /* Sign-extend from bit 55.  */
+            tcg_gen_sextract_i64(cpu_pc, src, 0, 56);
 
-            tcg_gen_andi_i64(tcg_bit55, src, (1ull << 55));
+            if (tbi != 3) {
+                TCGv_i64 tcg_zero = tcg_const_i64(0);
 
-            if (tbi == 1) {
-                /* tbi0==1, tbi1==0, so 0-fill upper byte if bit 55 = 0 */
-                tcg_gen_andi_i64(tcg_tmpval, src,
-                                 0x00FFFFFFFFFFFFFFull);
-                tcg_gen_movcond_i64(TCG_COND_EQ, cpu_pc, tcg_bit55, tcg_zero,
-                                    tcg_tmpval, src);
-            } else {
-                /* tbi0==0, tbi1==1, so 1-fill upper byte if bit 55 = 1 */
-                tcg_gen_ori_i64(tcg_tmpval, src,
-                                0xFF00000000000000ull);
-                tcg_gen_movcond_i64(TCG_COND_NE, cpu_pc, tcg_bit55, tcg_zero,
-                                    tcg_tmpval, src);
+                /*
+                 * The two TBI bits differ.
+                 * If tbi0, then !tbi1: only use the extension if positive.
+                 * if !tbi0, then tbi1: only use the extension if negative.
+                 */
+                tcg_gen_movcond_i64(tbi == 1 ? TCG_COND_GE : TCG_COND_LT,
+                                    cpu_pc, cpu_pc, tcg_zero, cpu_pc, src);
+                tcg_temp_free_i64(tcg_zero);
             }
-            tcg_temp_free_i64(tcg_zero);
-            tcg_temp_free_i64(tcg_bit55);
-            tcg_temp_free_i64(tcg_tmpval);
+            return;
         }
-    } else {  /* EL > 1 */
+    } else {
         if (tbi != 0) {
             /* Force tag byte to all zero */
-            tcg_gen_andi_i64(cpu_pc, src, 0x00FFFFFFFFFFFFFFull);
-        } else {
-            /* Load unmodified address */
-            tcg_gen_mov_i64(cpu_pc, src);
+            tcg_gen_extract_i64(cpu_pc, src, 0, 56);
+            return;
         }
     }
+
+    /* Load unmodified address */
+    tcg_gen_mov_i64(cpu_pc, src);
 }
 
 typedef struct DisasCompare64 {
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Instead of starting with cortex-a15 and adding v8 features to
a v7 cpu, begin with a v8 cpu stripped of its aarch64 features.
This fixes the long-standing to-do where we only enabled v8
features for user-only.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu_tcg.c | 151 ++++++++++++++++++++++++++-----------------
 1 file changed, 92 insertions(+), 59 deletions(-)

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
 static void arm_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
+    uint32_t t;
 
-    cortex_a15_initfn(obj);
+    /* aarch64_a57_initfn, advertising none of the aarch64 features */
+    cpu->dtb_compatible = "arm,cortex-a57";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+    cpu->midr = 0x411fd070;
+    cpu->revidr = 0x00000000;
+    cpu->reset_fpsid = 0x41034070;
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x12111111;
+    cpu->isar.mvfr2 = 0x00000043;
+    cpu->ctr = 0x8444c004;
+    cpu->reset_sctlr = 0x00c50838;
+    cpu->isar.id_pfr0 = 0x00000131;
+    cpu->isar.id_pfr1 = 0x00011011;
+    cpu->isar.id_dfr0 = 0x03010066;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_mmfr0 = 0x10101105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02102211;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00011142;
+    cpu->isar.id_isar5 = 0x00011121;
+    cpu->isar.id_isar6 = 0;
+    cpu->isar.dbgdidr = 0x3516d000;
+    cpu->clidr = 0x0a200023;
+    cpu->ccsidr[0] = 0x701fe00a; /* 32KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe012; /* 48KB L1 icache */
+    cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
+    define_cortex_a72_a57_a53_cp_reginfo(cpu);
 
-    /* old-style VFP short-vector support */
-    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+    /* Add additional features supported by QEMU */
+    t = cpu->isar.id_isar5;
+    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+    t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+    cpu->isar.id_isar5 = t;
+
+    t = cpu->isar.id_isar6;
+    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
+    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
+    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
+    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+    cpu->isar.id_isar6 = t;
+
+    t = cpu->isar.mvfr1;
+    t = FIELD_DP32(t, MVFR1, FPHP, 3);     /* v8.2-FP16 */
+    t = FIELD_DP32(t, MVFR1, SIMDHP, 2);   /* v8.2-FP16 */
+    cpu->isar.mvfr1 = t;
+
+    t = cpu->isar.mvfr2;
+    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+    t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
+    cpu->isar.mvfr2 = t;
+
+    t = cpu->isar.id_mmfr3;
+    t = FIELD_DP32(t, ID_MMFR3, PAN, 2);   /* ATS1E1 */
+    cpu->isar.id_mmfr3 = t;
+
+    t = cpu->isar.id_mmfr4;
+    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1);  /* AA32HPD */
+    t = FIELD_DP32(t, ID_MMFR4, AC2, 1);   /* ACTLR2, HACTLR2 */
+    t = FIELD_DP32(t, ID_MMFR4, CNP, 1);   /* TTCNP */
+    t = FIELD_DP32(t, ID_MMFR4, XNX, 1);   /* TTS2UXN */
+    cpu->isar.id_mmfr4 = t;
+
+    t = cpu->isar.id_pfr0;
+    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+    cpu->isar.id_pfr0 = t;
+
+    t = cpu->isar.id_pfr2;
+    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+    cpu->isar.id_pfr2 = t;
 
 #ifdef CONFIG_USER_ONLY
     /*
-     * We don't set these in system emulation mode for the moment,
-     * since we don't correctly set (all of) the ID registers to
-     * advertise them.
+     * Break with true ARMv8 and add back old-style VFP short-vector support.
+     * Only do this for user-mode, where -cpu max is the default, so that
+     * older v6 and v7 programs are more likely to work without adjustment.
      */
-    set_feature(&cpu->env, ARM_FEATURE_V8);
-    {
-        uint32_t t;
-
-        t = cpu->isar.id_isar5;
-        t = FIELD_DP32(t, ID_ISAR5, AES, 2);
-        t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
-        t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
-        t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
-        t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
-        t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
-        cpu->isar.id_isar5 = t;
-
-        t = cpu->isar.id_isar6;
-        t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
-        t = FIELD_DP32(t, ID_ISAR6, DP, 1);
-        t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
-        t = FIELD_DP32(t, ID_ISAR6, SB, 1);
-        t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
-        t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
-        t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
-        cpu->isar.id_isar6 = t;
-
-        t = cpu->isar.mvfr1;
-        t = FIELD_DP32(t, MVFR1, FPHP, 3);     /* v8.2-FP16 */
-        t = FIELD_DP32(t, MVFR1, SIMDHP, 2);   /* v8.2-FP16 */
-        cpu->isar.mvfr1 = t;
-
-        t = cpu->isar.mvfr2;
-        t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
-        t = FIELD_DP32(t, MVFR2, FPMISC, 4);   /* FP MaxNum */
-        cpu->isar.mvfr2 = t;
-
-        t = cpu->isar.id_mmfr3;
-        t = FIELD_DP32(t, ID_MMFR3, PAN, 2);   /* ATS1E1 */
-        cpu->isar.id_mmfr3 = t;
-
-        t = cpu->isar.id_mmfr4;
-        t = FIELD_DP32(t, ID_MMFR4, HPDS, 1);  /* AA32HPD */
-        t = FIELD_DP32(t, ID_MMFR4, AC2, 1);   /* ACTLR2, HACTLR2 */
-        t = FIELD_DP32(t, ID_MMFR4, CNP, 1);   /* TTCNP */
-        t = FIELD_DP32(t, ID_MMFR4, XNX, 1);   /* TTS2UXN */
-        cpu->isar.id_mmfr4 = t;
-
-        t = cpu->isar.id_pfr0;
-        t = FIELD_DP32(t, ID_PFR0, DIT, 1);
-        cpu->isar.id_pfr0 = t;
-
-        t = cpu->isar.id_pfr2;
-        t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
-        cpu->isar.id_pfr2 = t;
-    }
-#endif /* CONFIG_USER_ONLY */
+    cpu->isar.mvfr0 = FIELD_DP32(cpu->isar.mvfr0, MVFR0, FPSHVEC, 1);
+#endif
 }
 #endif /* !TARGET_AARCH64 */
 
--
2.25.1
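The tcg_gen_sextract_i64(cpu_pc, src, 0, 56) call in the first patch above
replaces the shift pair the old code built by hand. In plain C, the tbi == 3
case reduces to a sign extension from bit 55, as this small standalone sketch
(illustration only, not QEMU code) shows:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t set_pc_both_tbi(uint64_t src)
{
    /* Sign-extend bit 55 into bits [63:56], replacing any tag byte. */
    return (uint64_t)(((int64_t)(src << 8)) >> 8);
}

int main(void)
{
    /* Bit 55 of the address is 0, so the 0xab tag byte is cleared. */
    printf("%#018" PRIx64 "\n", set_pc_both_tbi(0xab00123456789abcull));
    /* prints 0x0000123456789abc */
    return 0;
}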
From: Richard Henderson <richard.henderson@linaro.org>

We set this for qemu-system-aarch64, but failed to do so
for the strictly 32-bit emulation.

Fixes: 3bec78447a9 ("target/arm: Provide ARMv8.4-PMU in '-cpu max'")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu_tcg.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
     t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
     cpu->isar.id_pfr2 = t;
 
+    t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+    cpu->isar.id_dfr0 = t;
+
 #ifdef CONFIG_USER_ONLY
     /*
      * Break with true ARMv8 and add back old-style VFP short-vector support.
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 146 +++++++++++++++++++++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_rev16(DisasContext *s, unsigned int sf,
 static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
 {
     unsigned int sf, opcode, opcode2, rn, rd;
+    TCGv_i64 tcg_rd;
 
     if (extract32(insn, 29, 1)) {
         unallocated_encoding(s);
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
     case MAP(1, 0x00, 0x05):
         handle_cls(s, sf, rn, rd);
         break;
+    case MAP(1, 0x01, 0x00): /* PACIA */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacia(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x01): /* PACIB */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacib(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x02): /* PACDA */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacda(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x03): /* PACDB */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacdb(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x04): /* AUTIA */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autia(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x05): /* AUTIB */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autib(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x06): /* AUTDA */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autda(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x07): /* AUTDB */
+        if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autdb(tcg_rd, cpu_env, tcg_rd, cpu_reg_sp(s, rn));
+        } else if (!dc_isar_feature(aa64_pauth, s)) {
+            goto do_unallocated;
+        }
+        break;
+    case MAP(1, 0x01, 0x08): /* PACIZA */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacia(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x09): /* PACIZB */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacib(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x0a): /* PACDZA */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacda(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x0b): /* PACDZB */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_pacdb(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x0c): /* AUTIZA */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autia(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x0d): /* AUTIZB */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autib(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x0e): /* AUTDZA */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autda(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x0f): /* AUTDZB */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_autdb(tcg_rd, cpu_env, tcg_rd, new_tmp_a64_zero(s));
+        }
+        break;
+    case MAP(1, 0x01, 0x10): /* XPACI */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_xpaci(tcg_rd, cpu_env, tcg_rd);
+        }
+        break;
+    case MAP(1, 0x01, 0x11): /* XPACD */
+        if (!dc_isar_feature(aa64_pauth, s) || rn != 31) {
+            goto do_unallocated;
+        } else if (s->pauth_active) {
+            tcg_rd = cpu_reg(s, rd);
+            gen_helper_xpacd(tcg_rd, cpu_env, tcg_rd);
+        }
+        break;
     default:
+    do_unallocated:
         unallocated_encoding(s);
         break;
     }
--
2.20.1
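A note on the decode above: in the A64 data-processing (1 source) group,
sf is insn[31], opcode2 is insn[20:16] and opcode is insn[15:10], and the
MAP() macro simply packs those three fields into a single switch key. A
minimal standalone sketch of the idea (the packing shown here is
illustrative only and need not match the exact MAP() definition in
translate-a64.c):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative packing of (sf, opcode2, opcode) into one key. */
    #define MAP(SF, O2, O1) (((SF) << 11) | ((O2) << 6) | (O1))

    static void decode_1src(uint32_t insn)
    {
        unsigned sf = (insn >> 31) & 1;         /* insn[31] */
        unsigned opcode2 = (insn >> 16) & 0x1f; /* insn[20:16] */
        unsigned opcode = (insn >> 10) & 0x3f;  /* insn[15:10] */

        switch (MAP(sf, opcode2, opcode)) {
        case MAP(1, 0x01, 0x00):
            printf("PACIA\n");
            break;
        case MAP(1, 0x01, 0x10):
            printf("XPACI\n");
            break;
        default:
            printf("unallocated\n");
            break;
        }
    }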
From: Richard Henderson <richard.henderson@linaro.org>

While we could expose stage_1_mmu_idx, the combination is
probably going to be more useful.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h | 15 +++++++++++++++
 target/arm/helper.c    |  7 +++++++
 2 files changed, 22 insertions(+)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu);
  */
 ARMMMUIdx arm_mmu_idx(CPUARMState *env);
 
+/**
+ * arm_stage1_mmu_idx:
+ * @env: The cpu environment
+ *
+ * Return the ARMMMUIdx for the stage1 traversal for the current regime.
+ */
+#ifdef CONFIG_USER_ONLY
+static inline ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
+{
+    return ARMMMUIdx_S1NSE0;
+}
+#else
+ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env);
+#endif
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int cpu_mmu_index(CPUARMState *env, bool ifetch)
     return arm_to_core_mmu_idx(arm_mmu_idx(env));
 }
 
+#ifndef CONFIG_USER_ONLY
+ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
+{
+    return stage_1_mmu_idx(arm_mmu_idx(env));
+}
+#endif
+
 void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
                           target_ulong *cs_base, uint32_t *pflags)
 {
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Share the code to set AArch32 max features so that we no
longer have code drift between qemu{-system,}-{arm,aarch64}.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h |   2 +
 target/arm/cpu64.c     |  50 +-----------------
 target/arm/cpu_tcg.c   | 114 ++++++++++++++++++++++-------------------
 3 files changed, 65 insertions(+), 101 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ static inline void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu) { }
 void define_cortex_a72_a57_a53_cp_reginfo(ARMCPU *cpu);
 #endif
 
+void aa32_max_features(ARMCPU *cpu);
+
 #endif
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
     uint64_t t;
-    uint32_t u;
 
     if (kvm_enabled() || hvf_enabled()) {
         /* With KVM or HVF, '-cpu max' is identical to '-cpu host' */
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
     cpu->isar.id_aa64zfr0 = t;
 
-    /* Replicate the same data to the 32-bit id registers. */
-    u = cpu->isar.id_isar5;
-    u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
-    u = FIELD_DP32(u, ID_ISAR5, SHA1, 1);
-    u = FIELD_DP32(u, ID_ISAR5, SHA2, 1);
-    u = FIELD_DP32(u, ID_ISAR5, CRC32, 1);
-    u = FIELD_DP32(u, ID_ISAR5, RDM, 1);
-    u = FIELD_DP32(u, ID_ISAR5, VCMA, 1);
-    cpu->isar.id_isar5 = u;
-
-    u = cpu->isar.id_isar6;
-    u = FIELD_DP32(u, ID_ISAR6, JSCVT, 1);
-    u = FIELD_DP32(u, ID_ISAR6, DP, 1);
-    u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
-    u = FIELD_DP32(u, ID_ISAR6, SB, 1);
-    u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
-    u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
-    u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
-    cpu->isar.id_isar6 = u;
-
-    u = cpu->isar.id_pfr0;
-    u = FIELD_DP32(u, ID_PFR0, DIT, 1);
-    cpu->isar.id_pfr0 = u;
-
-    u = cpu->isar.id_pfr2;
-    u = FIELD_DP32(u, ID_PFR2, SSBS, 1);
-    cpu->isar.id_pfr2 = u;
-
-    u = cpu->isar.id_mmfr3;
-    u = FIELD_DP32(u, ID_MMFR3, PAN, 2); /* ATS1E1 */
-    cpu->isar.id_mmfr3 = u;
-
-    u = cpu->isar.id_mmfr4;
-    u = FIELD_DP32(u, ID_MMFR4, HPDS, 1); /* AA32HPD */
-    u = FIELD_DP32(u, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-    u = FIELD_DP32(u, ID_MMFR4, CNP, 1); /* TTCNP */
-    u = FIELD_DP32(u, ID_MMFR4, XNX, 1); /* TTS2UXN */
-    cpu->isar.id_mmfr4 = u;
-
     t = cpu->isar.id_aa64dfr0;
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
     cpu->isar.id_aa64dfr0 = t;
 
-    u = cpu->isar.id_dfr0;
-    u = FIELD_DP32(u, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
-    cpu->isar.id_dfr0 = u;
-
-    u = cpu->isar.mvfr1;
-    u = FIELD_DP32(u, MVFR1, FPHP, 3); /* v8.2-FP16 */
-    u = FIELD_DP32(u, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
-    cpu->isar.mvfr1 = u;
+    /* Replicate the same data to the 32-bit id registers. */
+    aa32_max_features(cpu);
 
 #ifdef CONFIG_USER_ONLY
     /*
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@
 #endif
 #include "cpregs.h"
 
+
+/* Share AArch32 -cpu max features with AArch64. */
+void aa32_max_features(ARMCPU *cpu)
+{
+    uint32_t t;
+
+    /* Add additional features supported by QEMU */
+    t = cpu->isar.id_isar5;
+    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
+    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
+    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+    t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
+    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
+    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+    cpu->isar.id_isar5 = t;
+
+    t = cpu->isar.id_isar6;
+    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
+    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
+    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
+    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
+    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
+    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+    cpu->isar.id_isar6 = t;
+
+    t = cpu->isar.mvfr1;
+    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
+    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
+    cpu->isar.mvfr1 = t;
+
+    t = cpu->isar.mvfr2;
+    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+    cpu->isar.mvfr2 = t;
+
+    t = cpu->isar.id_mmfr3;
+    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+    cpu->isar.id_mmfr3 = t;
+
+    t = cpu->isar.id_mmfr4;
+    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
+    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
+    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
+    cpu->isar.id_mmfr4 = t;
+
+    t = cpu->isar.id_pfr0;
+    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+    cpu->isar.id_pfr0 = t;
+
+    t = cpu->isar.id_pfr2;
+    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+    cpu->isar.id_pfr2 = t;
+
+    t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+    cpu->isar.id_dfr0 = t;
+}
+
 #ifndef CONFIG_USER_ONLY
 static uint64_t l2ctlr_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
@@ -XXX,XX +XXX,XX @@ static void arm_v7m_class_init(ObjectClass *oc, void *data)
 static void arm_max_initfn(Object *obj)
 {
     ARMCPU *cpu = ARM_CPU(obj);
-    uint32_t t;
 
     /* aarch64_a57_initfn, advertising none of the aarch64 features */
     cpu->dtb_compatible = "arm,cortex-a57";
@@ -XXX,XX +XXX,XX @@ static void arm_max_initfn(Object *obj)
     cpu->ccsidr[2] = 0x70ffe07a; /* 2048KB L2 cache */
     define_cortex_a72_a57_a53_cp_reginfo(cpu);
 
-    /* Add additional features supported by QEMU */
-    t = cpu->isar.id_isar5;
-    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
-    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
-    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
-    t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
-    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
-    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
-    cpu->isar.id_isar5 = t;
-
-    t = cpu->isar.id_isar6;
-    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
-    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
-    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
-    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
-    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
-    cpu->isar.id_isar6 = t;
-
-    t = cpu->isar.mvfr1;
-    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
-    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
-    cpu->isar.mvfr1 = t;
-
-    t = cpu->isar.mvfr2;
-    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
-    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
-    cpu->isar.mvfr2 = t;
-
-    t = cpu->isar.id_mmfr3;
-    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
-    cpu->isar.id_mmfr3 = t;
-
-    t = cpu->isar.id_mmfr4;
-    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
-    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
-    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
-    cpu->isar.id_mmfr4 = t;
-
-    t = cpu->isar.id_pfr0;
-    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
-    cpu->isar.id_pfr0 = t;
-
-    t = cpu->isar.id_pfr2;
-    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
-    cpu->isar.id_pfr2 = t;
-
-    t = cpu->isar.id_dfr0;
-    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
-    cpu->isar.id_dfr0 = t;
+    aa32_max_features(cpu);
 
 #ifdef CONFIG_USER_ONLY
     /*
--
2.25.1
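For reference, the FIELD_DP32() calls used throughout aa32_max_features()
deposit a named bitfield into a register value. A hand-rolled sketch of the
underlying deposit operation (QEMU's real macros live in
include/hw/registerfields.h; this stand-alone version is for illustration
only):

    #include <assert.h>
    #include <stdint.h>

    /* Replace 'len' bits starting at bit 'start' of 'value' with 'field'. */
    static uint32_t deposit32_sketch(uint32_t value, int start, int len,
                                     uint32_t field)
    {
        uint32_t mask = (~0u >> (32 - len)) << start;
        return (value & ~mask) | ((field << start) & mask);
    }

    int main(void)
    {
        /* ID_ISAR5.AES occupies bits [7:4]; the value 2 advertises
         * AES plus PMULL, as in the patch above. */
        uint32_t id_isar5 = 0;
        id_isar5 = deposit32_sketch(id_isar5, 4, 4, 2);
        assert(id_isar5 == 0x20);
        return 0;
    }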
From: Richard Henderson <richard.henderson@linaro.org>

Update the legacy feature names to the current names.
Provide feature names for id changes that were not marked.
Sort the field updates into increasing bitfield order.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu64.c   | 100 +++++++++++++++++++++----------------------
 target/arm/cpu_tcg.c |  48 ++++++++++-----------
 2 files changed, 74 insertions(+), 74 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->midr = t;
 
     t = cpu->isar.id_aa64isar0;
-    t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* AES + PMULL */
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* SHA512 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, AES, 2); /* FEAT_PMULL */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA1, 1); /* FEAT_SHA1 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA2, 2); /* FEAT_SHA512 */
     t = FIELD_DP64(t, ID_AA64ISAR0, CRC32, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2);
-    t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* v8.5-CondM */
-    t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
-    t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1);
+    t = FIELD_DP64(t, ID_AA64ISAR0, ATOMIC, 2); /* FEAT_LSE */
+    t = FIELD_DP64(t, ID_AA64ISAR0, RDM, 1); /* FEAT_RDM */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SHA3, 1); /* FEAT_SHA3 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SM3, 1); /* FEAT_SM3 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, SM4, 1); /* FEAT_SM4 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, DP, 1); /* FEAT_DotProd */
+    t = FIELD_DP64(t, ID_AA64ISAR0, FHM, 1); /* FEAT_FHM */
+    t = FIELD_DP64(t, ID_AA64ISAR0, TS, 2); /* FEAT_FlagM2 */
+    t = FIELD_DP64(t, ID_AA64ISAR0, TLB, 2); /* FEAT_TLBIRANGE */
+    t = FIELD_DP64(t, ID_AA64ISAR0, RNDR, 1); /* FEAT_RNG */
     cpu->isar.id_aa64isar0 = t;
 
     t = cpu->isar.id_aa64isar1;
-    t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2);
-    t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
-    t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
-    t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
+    t = FIELD_DP64(t, ID_AA64ISAR1, DPB, 2); /* FEAT_DPB2 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, JSCVT, 1); /* FEAT_JSCVT */
+    t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1); /* FEAT_FCMA */
+    t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* FEAT_LRCPC2 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1); /* FEAT_FRINTTS */
+    t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
+    t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
+    t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
+    t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
     cpu->isar.id_aa64isar1 = t;
 
     t = cpu->isar.id_aa64pfr0;
+    t = FIELD_DP64(t, ID_AA64PFR0, FP, 1); /* FEAT_FP16 */
+    t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1); /* FEAT_FP16 */
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);
-    t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);
+    t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
+    t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
     cpu->isar.id_aa64pfr0 = t;
 
     t = cpu->isar.id_aa64pfr1;
-    t = FIELD_DP64(t, ID_AA64PFR1, BT, 1);
-    t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2);
+    t = FIELD_DP64(t, ID_AA64PFR1, BT, 1); /* FEAT_BTI */
+    t = FIELD_DP64(t, ID_AA64PFR1, SSBS, 2); /* FEAT_SSBS2 */
     /*
      * Begin with full support for MTE. This will be downgraded to MTE=0
      * during realize if the board provides no tag memory, much like
      * we do for EL2 with the virtualization=on property.
      */
-    t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3);
+    t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
     cpu->isar.id_aa64pfr1 = t;
 
     t = cpu->isar.id_aa64mmfr0;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64mmfr0 = t;
 
     t = cpu->isar.id_aa64mmfr1;
-    t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* HPD */
-    t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* ATS1E1 */
-    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* VMID16 */
-    t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* TTS2UXN */
+    t = FIELD_DP64(t, ID_AA64MMFR1, VMIDBITS, 2); /* FEAT_VMID16 */
+    t = FIELD_DP64(t, ID_AA64MMFR1, VH, 1); /* FEAT_VHE */
+    t = FIELD_DP64(t, ID_AA64MMFR1, HPDS, 1); /* FEAT_HPDS */
+    t = FIELD_DP64(t, ID_AA64MMFR1, LO, 1); /* FEAT_LOR */
+    t = FIELD_DP64(t, ID_AA64MMFR1, PAN, 2); /* FEAT_PAN2 */
+    t = FIELD_DP64(t, ID_AA64MMFR1, XNX, 1); /* FEAT_XNX */
     cpu->isar.id_aa64mmfr1 = t;
 
     t = cpu->isar.id_aa64mmfr2;
-    t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);
-    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* TTCNP */
-    t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
-    t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
-    t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
-    t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
+    t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1); /* FEAT_TTCNP */
+    t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1); /* FEAT_UAO */
+    t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1); /* FEAT_LVA */
+    t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* FEAT_TTST */
+    t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1); /* FEAT_TTL */
+    t = FIELD_DP64(t, ID_AA64MMFR2, BBM, 2); /* FEAT_BBM at level 2 */
     cpu->isar.id_aa64mmfr2 = t;
 
     t = cpu->isar.id_aa64zfr0;
     t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* PMULL */
-    t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
-    t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
+    t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2); /* FEAT_SVE_PMULL128 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1); /* FEAT_SVE_BitPerm */
+    t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1); /* FEAT_BF16 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1); /* FEAT_SVE_SHA3 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1); /* FEAT_SVE_SM4 */
+    t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1); /* FEAT_I8MM */
+    t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1); /* FEAT_F32MM */
+    t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1); /* FEAT_F64MM */
     cpu->isar.id_aa64zfr0 = t;
 
     t = cpu->isar.id_aa64dfr0;
-    t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* v8.4-PMU */
+    t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
     /* Replicate the same data to the 32-bit id registers. */
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
 
     /* Add additional features supported by QEMU */
     t = cpu->isar.id_isar5;
-    t = FIELD_DP32(t, ID_ISAR5, AES, 2);
-    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1);
-    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1);
+    t = FIELD_DP32(t, ID_ISAR5, AES, 2); /* FEAT_PMULL */
+    t = FIELD_DP32(t, ID_ISAR5, SHA1, 1); /* FEAT_SHA1 */
+    t = FIELD_DP32(t, ID_ISAR5, SHA2, 1); /* FEAT_SHA256 */
     t = FIELD_DP32(t, ID_ISAR5, CRC32, 1);
-    t = FIELD_DP32(t, ID_ISAR5, RDM, 1);
-    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1);
+    t = FIELD_DP32(t, ID_ISAR5, RDM, 1); /* FEAT_RDM */
+    t = FIELD_DP32(t, ID_ISAR5, VCMA, 1); /* FEAT_FCMA */
     cpu->isar.id_isar5 = t;
 
     t = cpu->isar.id_isar6;
-    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1);
-    t = FIELD_DP32(t, ID_ISAR6, DP, 1);
-    t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SB, 1);
-    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
-    t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
-    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1);
+    t = FIELD_DP32(t, ID_ISAR6, JSCVT, 1); /* FEAT_JSCVT */
+    t = FIELD_DP32(t, ID_ISAR6, DP, 1); /* FEAT_DotProd */
+    t = FIELD_DP32(t, ID_ISAR6, FHM, 1); /* FEAT_FHM */
+    t = FIELD_DP32(t, ID_ISAR6, SB, 1); /* FEAT_SB */
+    t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1); /* FEAT_SPECRES */
+    t = FIELD_DP32(t, ID_ISAR6, BF16, 1); /* FEAT_AA32BF16 */
+    t = FIELD_DP32(t, ID_ISAR6, I8MM, 1); /* FEAT_AA32I8MM */
     cpu->isar.id_isar6 = t;
 
     t = cpu->isar.mvfr1;
-    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* v8.2-FP16 */
-    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* v8.2-FP16 */
+    t = FIELD_DP32(t, MVFR1, FPHP, 3); /* FEAT_FP16 */
+    t = FIELD_DP32(t, MVFR1, SIMDHP, 2); /* FEAT_FP16 */
     cpu->isar.mvfr1 = t;
 
     t = cpu->isar.mvfr2;
-    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
-    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
+    t = FIELD_DP32(t, MVFR2, SIMDMISC, 3); /* SIMD MaxNum */
+    t = FIELD_DP32(t, MVFR2, FPMISC, 4); /* FP MaxNum */
     cpu->isar.mvfr2 = t;
 
     t = cpu->isar.id_mmfr3;
-    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* ATS1E1 */
+    t = FIELD_DP32(t, ID_MMFR3, PAN, 2); /* FEAT_PAN2 */
     cpu->isar.id_mmfr3 = t;
 
     t = cpu->isar.id_mmfr4;
-    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* AA32HPD */
-    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
-    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* TTCNP */
-    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* TTS2UXN */
+    t = FIELD_DP32(t, ID_MMFR4, HPDS, 1); /* FEAT_AA32HPD */
+    t = FIELD_DP32(t, ID_MMFR4, AC2, 1); /* ACTLR2, HACTLR2 */
+    t = FIELD_DP32(t, ID_MMFR4, CNP, 1); /* FEAT_TTCNP */
+    t = FIELD_DP32(t, ID_MMFR4, XNX, 1); /* FEAT_XNX */
     cpu->isar.id_mmfr4 = t;
 
     t = cpu->isar.id_pfr0;
-    t = FIELD_DP32(t, ID_PFR0, DIT, 1);
+    t = FIELD_DP32(t, ID_PFR0, DIT, 1); /* FEAT_DIT */
     cpu->isar.id_pfr0 = t;
 
     t = cpu->isar.id_pfr2;
-    t = FIELD_DP32(t, ID_PFR2, SSBS, 1);
+    t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
     cpu->isar.id_pfr2 = t;
 
     t = cpu->isar.id_dfr0;
-    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* v8.4-PMU */
+    t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 93 +++++++++++++++++++++++++++++++++-----
 1 file changed, 81 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
     }
 
     switch (selector) {
-    case 0: /* NOP */
-        return;
-    case 3: /* WFI */
+    case 0b00000: /* NOP */
+        break;
+    case 0b00011: /* WFI */
         s->base.is_jmp = DISAS_WFI;
-        return;
+        break;
+    case 0b00001: /* YIELD */
     /* When running in MTTCG we don't generate jumps to the yield and
      * WFE helpers as it won't affect the scheduling of other vCPUs.
      * If we wanted to more completely model WFE/SEV so we don't busy
      * spin unnecessarily we would need to do something more involved.
      */
-    case 1: /* YIELD */
         if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
             s->base.is_jmp = DISAS_YIELD;
         }
-        return;
-    case 2: /* WFE */
+        break;
+    case 0b00010: /* WFE */
         if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
             s->base.is_jmp = DISAS_WFE;
         }
-        return;
-    case 4: /* SEV */
-    case 5: /* SEVL */
+        break;
+    case 0b00100: /* SEV */
+    case 0b00101: /* SEVL */
         /* we treat all as NOP at least for now */
-        return;
+        break;
+    case 0b00111: /* XPACLRI */
+        if (s->pauth_active) {
+            gen_helper_xpaci(cpu_X[30], cpu_env, cpu_X[30]);
+        }
+        break;
+    case 0b01000: /* PACIA1716 */
+        if (s->pauth_active) {
+            gen_helper_pacia(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+        }
+        break;
+    case 0b01010: /* PACIB1716 */
+        if (s->pauth_active) {
+            gen_helper_pacib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+        }
+        break;
+    case 0b01100: /* AUTIA1716 */
+        if (s->pauth_active) {
+            gen_helper_autia(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+        }
+        break;
+    case 0b01110: /* AUTIB1716 */
+        if (s->pauth_active) {
+            gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
+        }
+        break;
+    case 0b11000: /* PACIAZ */
+        if (s->pauth_active) {
+            gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30],
+                             new_tmp_a64_zero(s));
+        }
+        break;
+    case 0b11001: /* PACIASP */
+        if (s->pauth_active) {
+            gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+        }
+        break;
+    case 0b11010: /* PACIBZ */
+        if (s->pauth_active) {
+            gen_helper_pacib(cpu_X[30], cpu_env, cpu_X[30],
+                             new_tmp_a64_zero(s));
+        }
+        break;
+    case 0b11011: /* PACIBSP */
+        if (s->pauth_active) {
+            gen_helper_pacib(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+        }
+        break;
+    case 0b11100: /* AUTIAZ */
+        if (s->pauth_active) {
+            gen_helper_autia(cpu_X[30], cpu_env, cpu_X[30],
+                             new_tmp_a64_zero(s));
+        }
+        break;
+    case 0b11101: /* AUTIASP */
+        if (s->pauth_active) {
+            gen_helper_autia(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+        }
+        break;
+    case 0b11110: /* AUTIBZ */
+        if (s->pauth_active) {
+            gen_helper_autib(cpu_X[30], cpu_env, cpu_X[30],
+                             new_tmp_a64_zero(s));
+        }
+        break;
+    case 0b11111: /* AUTIBSP */
+        if (s->pauth_active) {
+            gen_helper_autib(cpu_X[30], cpu_env, cpu_X[30], cpu_X[31]);
+        }
+        break;
     default:
         /* default specified as NOP equivalent */
-        return;
+        break;
     }
 }
--
2.20.1
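The selector being switched on above is the concatenation of the hint
instruction's CRm and op2 fields (selector = CRm:op2), which is why the
patch rewrites the case labels in binary: PACIASP, for example, is hint
number 25, i.e. 0b11001. A small sketch of that field packing (assuming
the architectural HINT encoding; this is not QEMU's actual decoder):

    #include <stdint.h>
    #include <stdio.h>

    /* HINT instructions carry the hint number in CRm (insn[11:8])
     * and op2 (insn[7:5]). */
    static unsigned hint_selector(uint32_t insn)
    {
        unsigned crm = (insn >> 8) & 0xf;
        unsigned op2 = (insn >> 5) & 0x7;
        return (crm << 3) | op2;
    }

    int main(void)
    {
        /* 0xd503233f is the PACIASP encoding: CRm=0b0011, op2=0b001. */
        printf("selector = %u\n", hint_selector(0xd503233f)); /* 25 */
        return 0;
    }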
From: Richard Henderson <richard.henderson@linaro.org>

Use FIELD_DP{32,64} to manipulate id_pfr1 and id_aa64pfr0
during arm_cpu_realizefn.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          */
         unset_feature(env, ARM_FEATURE_EL3);
 
-        /* Disable the security extension feature bits in the processor feature
-         * registers as well. These are id_pfr1[7:4] and id_aa64pfr0[15:12].
+        /*
+         * Disable the security extension feature bits in the processor
+         * feature registers as well.
          */
-        cpu->isar.id_pfr1 &= ~0xf0;
-        cpu->isar.id_aa64pfr0 &= ~0xf000;
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL3, 0);
     }
 
     if (!cpu->has_el2) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     }
 
     if (!arm_feature(env, ARM_FEATURE_EL2)) {
-        /* Disable the hypervisor feature bits in the processor feature
-         * registers if we don't have EL2. These are id_pfr1[15:12] and
-         * id_aa64pfr0_el1[11:8].
+        /*
+         * Disable the hypervisor feature bits in the processor feature
+         * registers if we don't have EL2.
          */
-        cpu->isar.id_aa64pfr0 &= ~0xf00;
-        cpu->isar.id_pfr1 &= ~0xf000;
+        cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
+                                           ID_AA64PFR0, EL2, 0);
+        cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1,
+                                       ID_PFR1, VIRTUALIZATION, 0);
     }
 
 #ifndef CONFIG_USER_ONLY
--
2.25.1

From: Aaron Lindsay <aaron@os.amperecomputing.com>

Rename arm_ccnt_enabled to pmu_counter_enabled, and add logic to only
return 'true' if the specified counter is enabled and neither prohibited
nor filtered.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-5-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 10 ++++-
 target/arm/cpu.c    |  3 ++
 target/arm/helper.c | 96 +++++++++++++++++++++++++++++++++++++++++----
 3 files changed, 101 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmccntr_op_finish(CPUARMState *env);
 void pmu_op_start(CPUARMState *env);
 void pmu_op_finish(CPUARMState *env);
 
+/**
+ * Functions to register as EL change hooks for PMU mode filtering
+ */
+void pmu_pre_el_change(ARMCPU *cpu, void *ignored);
+void pmu_post_el_change(ARMCPU *cpu, void *ignored);
+
 /* SCTLR bit meanings. Several bits have been reused in newer
  * versions of the architecture; in that case we define constants
  * for both old and new bit meanings. Code which tests against those
@@ -XXX,XX +XXX,XX @@ void pmu_op_finish(CPUARMState *env);
 
 #define MDCR_EPMAD    (1U << 21)
 #define MDCR_EDAD     (1U << 20)
-#define MDCR_SPME     (1U << 17)
+#define MDCR_SPME     (1U << 17)  /* MDCR_EL3 */
+#define MDCR_HPMD     (1U << 17)  /* MDCR_EL2 */
 #define MDCR_SDD      (1U << 16)
 #define MDCR_SPD      (3U << 14)
 #define MDCR_TDRA     (1U << 11)
@@ -XXX,XX +XXX,XX @@ void pmu_op_finish(CPUARMState *env);
 #define MDCR_HPME     (1U << 7)
 #define MDCR_TPM      (1U << 6)
 #define MDCR_TPMCR    (1U << 5)
+#define MDCR_HPMN     (0x1fU)
 
 /* Not all of the MDCR_EL3 bits are present in the 32-bit SDCR */
 #define SDCR_VALID_MASK (MDCR_EPMAD | MDCR_EDAD | MDCR_SPME | MDCR_SPD)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
     if (!cpu->has_pmu) {
         unset_feature(env, ARM_FEATURE_PMU);
         cpu->id_aa64dfr0 &= ~0xf00;
+    } else if (!kvm_enabled()) {
+        arm_register_pre_el_change_hook(cpu, &pmu_pre_el_change, 0);
+        arm_register_el_change_hook(cpu, &pmu_post_el_change, 0);
     }
 
     if (!arm_feature(env, ARM_FEATURE_EL2)) {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
 /* Definitions for the PMU registers */
 #define PMCRN_MASK  0xf800
 #define PMCRN_SHIFT 11
+#define PMCRDP  0x10
 #define PMCRD   0x8
 #define PMCRC   0x4
 #define PMCRE   0x1
 
+#define PMXEVTYPER_P          0x80000000
+#define PMXEVTYPER_U          0x40000000
+#define PMXEVTYPER_NSK        0x20000000
+#define PMXEVTYPER_NSU        0x10000000
+#define PMXEVTYPER_NSH        0x08000000
+#define PMXEVTYPER_M          0x04000000
+#define PMXEVTYPER_MT         0x02000000
+#define PMXEVTYPER_EVTCOUNT   0x0000ffff
+#define PMXEVTYPER_MASK       (PMXEVTYPER_P | PMXEVTYPER_U | PMXEVTYPER_NSK | \
+                               PMXEVTYPER_NSU | PMXEVTYPER_NSH | \
+                               PMXEVTYPER_M | PMXEVTYPER_MT | \
+                               PMXEVTYPER_EVTCOUNT)
+
 static inline uint32_t pmu_num_counters(CPUARMState *env)
 {
     return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult pmreg_access_ccntr(CPUARMState *env,
     return pmreg_access(env, ri, isread);
 }
 
-static inline bool arm_ccnt_enabled(CPUARMState *env)
+/* Returns true if the counter (pass 31 for PMCCNTR) should count events using
+ * the current EL, security state, and register configuration.
+ */
+static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
 {
-    /* This does not support checking PMCCFILTR_EL0 register */
+    uint64_t filter;
+    bool e, p, u, nsk, nsu, nsh, m;
+    bool enabled, prohibited, filtered;
+    bool secure = arm_is_secure(env);
+    int el = arm_current_el(env);
+    uint8_t hpmn = env->cp15.mdcr_el2 & MDCR_HPMN;
 
-    if (!(env->cp15.c9_pmcr & PMCRE) || !(env->cp15.c9_pmcnten & (1 << 31))) {
-        return false;
+    if (!arm_feature(env, ARM_FEATURE_EL2) ||
+            (counter < hpmn || counter == 31)) {
+        e = env->cp15.c9_pmcr & PMCRE;
+    } else {
+        e = env->cp15.mdcr_el2 & MDCR_HPME;
+    }
+    enabled = e && (env->cp15.c9_pmcnten & (1 << counter));
+
+    if (!secure) {
+        if (el == 2 && (counter < hpmn || counter == 31)) {
+            prohibited = env->cp15.mdcr_el2 & MDCR_HPMD;
+        } else {
+            prohibited = false;
+        }
+    } else {
+        prohibited = arm_feature(env, ARM_FEATURE_EL3) &&
+            (env->cp15.mdcr_el3 & MDCR_SPME);
     }
 
-    return true;
+    if (prohibited && counter == 31) {
+        prohibited = env->cp15.c9_pmcr & PMCRDP;
+    }
+
+    /* TODO Remove assert, set filter to correct PMEVTYPER */
+    assert(counter == 31);
+    filter = env->cp15.pmccfiltr_el0;
+
+    p   = filter & PMXEVTYPER_P;
+    u   = filter & PMXEVTYPER_U;
+    nsk = arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_NSK);
+    nsu = arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_NSU);
+    nsh = arm_feature(env, ARM_FEATURE_EL2) && (filter & PMXEVTYPER_NSH);
+    m   = arm_el_is_aa64(env, 1) &&
+          arm_feature(env, ARM_FEATURE_EL3) && (filter & PMXEVTYPER_M);
+
+    if (el == 0) {
+        filtered = secure ? u : u != nsu;
+    } else if (el == 1) {
+        filtered = secure ? p : p != nsk;
+    } else if (el == 2) {
+        filtered = !nsh;
+    } else { /* EL3 */
+        filtered = m != p;
+    }
+
+    return enabled && !prohibited && !filtered;
 }
+
 /*
  * Ensure c15_ccnt is the guest-visible count so that operations such as
  * enabling/disabling the counter or filtering, modifying the count itself,
@@ -XXX,XX +XXX,XX @@ void pmccntr_op_start(CPUARMState *env)
     cycles = muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                       ARM_CPU_FREQ, NANOSECONDS_PER_SECOND);
 
-    if (arm_ccnt_enabled(env)) {
+    if (pmu_counter_enabled(env, 31)) {
         uint64_t eff_cycles = cycles;
         if (env->cp15.c9_pmcr & PMCRD) {
             /* Increment once every 64 processor clock cycles */
@@ -XXX,XX +XXX,XX @@ void pmccntr_op_start(CPUARMState *env)
  */
 void pmccntr_op_finish(CPUARMState *env)
 {
-    if (arm_ccnt_enabled(env)) {
+    if (pmu_counter_enabled(env, 31)) {
         uint64_t prev_cycles = env->cp15.c15_ccnt_delta;
 
         if (env->cp15.c9_pmcr & PMCRD) {
@@ -XXX,XX +XXX,XX @@ void pmu_op_finish(CPUARMState *env)
     pmccntr_op_finish(env);
 }
 
+void pmu_pre_el_change(ARMCPU *cpu, void *ignored)
+{
+    pmu_op_start(&cpu->env);
+}
+
+void pmu_post_el_change(ARMCPU *cpu, void *ignored)
+{
+    pmu_op_finish(&cpu->env);
+}
+
 static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                        uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ void pmu_op_finish(CPUARMState *env)
 {
 }
 
+void pmu_pre_el_change(ARMCPU *cpu, void *ignored)
+{
+}
+
+void pmu_post_el_change(ARMCPU *cpu, void *ignored)
+{
+}
+
 #endif
 
 static void pmccfiltr_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1
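The filtering rules implemented above can be hard to read inline with the
rest of the helper, so here is the same decision table extracted into a
stand-alone function (a condensed restatement of the patch's logic, not
additional behaviour):

    #include <stdbool.h>

    /* 'p', 'u', 'nsk', 'nsu', 'nsh' and 'm' are the PMCCFILTR/PMEVTYPER
     * filter bits, already masked by the EL2/EL3 feature checks. */
    static bool event_filtered(int el, bool secure, bool p, bool u,
                               bool nsk, bool nsu, bool nsh, bool m)
    {
        switch (el) {
        case 0:
            return secure ? u : u != nsu;
        case 1:
            return secure ? p : p != nsk;
        case 2:
            return !nsh;
        default: /* EL3 */
            return m != p;
        }
    }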
From: Richard Henderson <richard.henderson@linaro.org>

The only portion of FEAT_Debugv8p2 that is relevant to QEMU
is CONTEXTIDR_EL2, which is also conditionally implemented
with FEAT_VHE. The rest of the debug extension concerns the
External debug interface, which is outside the scope of QEMU.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu.c              | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 2 ++
 4 files changed, 5 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BTI (Branch Target Identification)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
+- FEAT_Debugv8p2 (Debug changes for v8.2)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
          * feature registers as well.
          */
         cpu->isar.id_pfr1 = FIELD_DP32(cpu->isar.id_pfr1, ID_PFR1, SECURITY, 0);
+        cpu->isar.id_dfr0 = FIELD_DP32(cpu->isar.id_dfr0, ID_DFR0, COPSDBG, 0);
         cpu->isar.id_aa64pfr0 = FIELD_DP64(cpu->isar.id_aa64pfr0,
                                            ID_AA64PFR0, EL3, 0);
     }
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64zfr0 = t;
 
     t = cpu->isar.id_aa64dfr0;
+    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8); /* FEAT_Debugv8p2 */
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_pfr2 = t;
 
     t = cpu->isar.id_dfr0;
+    t = FIELD_DP32(t, ID_DFR0, COPDBG, 8); /* FEAT_Debugv8p2 */
+    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8); /* FEAT_Debugv8p2 */
     t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
--
2.25.1

From: Aaron Lindsay <aaron@os.amperecomputing.com>

This commit doesn't add any supported events, but provides the framework
for adding them. We store the pm_event structs in a simple array, and
provide the mapping from the event numbers to array indexes in the
supported_event_map array. Because the value of PMCEID[01] depends upon
which events are supported at runtime, generate it dynamically.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-10-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 10 ++++++++
 target/arm/cpu.c    | 19 +++++++++------
 target/arm/cpu64.c  |  4 ----
 target/arm/helper.c | 57 +++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 79 insertions(+), 11 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void pmu_op_finish(CPUARMState *env);
 void pmu_pre_el_change(ARMCPU *cpu, void *ignored);
 void pmu_post_el_change(ARMCPU *cpu, void *ignored);
 
+/*
+ * get_pmceid
+ * @env: CPUARMState
+ * @which: which PMCEID register to return (0 or 1)
+ *
+ * Return the PMCEID[01]_EL0 register values corresponding to the counters
+ * which are supported given the current configuration
+ */
+uint64_t get_pmceid(CPUARMState *env, unsigned which);
+
 /* SCTLR bit meanings. Several bits have been reused in newer
  * versions of the architecture; in that case we define constants
  * for both old and new bit meanings. Code which tests against those
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
 
     if (!cpu->has_pmu) {
         unset_feature(env, ARM_FEATURE_PMU);
+    }
+    if (arm_feature(env, ARM_FEATURE_PMU)) {
+        cpu->pmceid0 = get_pmceid(&cpu->env, 0);
+        cpu->pmceid1 = get_pmceid(&cpu->env, 1);
+
+        if (!kvm_enabled()) {
+            arm_register_pre_el_change_hook(cpu, &pmu_pre_el_change, 0);
+            arm_register_el_change_hook(cpu, &pmu_post_el_change, 0);
+        }
+    } else {
         cpu->id_aa64dfr0 &= ~0xf00;
-    } else if (!kvm_enabled()) {
-        arm_register_pre_el_change_hook(cpu, &pmu_pre_el_change, 0);
-        arm_register_el_change_hook(cpu, &pmu_post_el_change, 0);
+        cpu->pmceid0 = 0;
+        cpu->pmceid1 = 0;
     }
 
     if (!arm_feature(env, ARM_FEATURE_EL2)) {
@@ -XXX,XX +XXX,XX @@ static void cortex_a7_initfn(Object *obj)
     cpu->id_pfr0 = 0x00001131;
     cpu->id_pfr1 = 0x00011011;
     cpu->id_dfr0 = 0x02010555;
-    cpu->pmceid0 = 0x00000000;
-    cpu->pmceid1 = 0x00000000;
     cpu->id_afr0 = 0x00000000;
     cpu->id_mmfr0 = 0x10101105;
     cpu->id_mmfr1 = 0x40000000;
@@ -XXX,XX +XXX,XX @@ static void cortex_a15_initfn(Object *obj)
     cpu->id_pfr0 = 0x00001131;
     cpu->id_pfr1 = 0x00011011;
     cpu->id_dfr0 = 0x02010555;
-    cpu->pmceid0 = 0x0000000;
-    cpu->pmceid1 = 0x00000000;
     cpu->id_afr0 = 0x00000000;
     cpu->id_mmfr0 = 0x10201105;
     cpu->id_mmfr1 = 0x20000000;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a57_initfn(Object *obj)
     cpu->isar.id_isar6 = 0;
     cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
-    cpu->pmceid0 = 0x00000000;
-    cpu->pmceid1 = 0x00000000;
     cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->isar.id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     cpu->isar.id_isar5 = 0x00011121;
     cpu->isar.id_aa64pfr0 = 0x00002222;
     cpu->id_aa64dfr0 = 0x10305106;
-    cpu->pmceid0 = 0x00000000;
-    cpu->pmceid1 = 0x00000000;
     cpu->isar.id_aa64isar0 = 0x00011120;
     cpu->isar.id_aa64mmfr0 = 0x00001124;
     cpu->dbgdidr = 0x3516d000;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline uint64_t pmu_counter_mask(CPUARMState *env)
   return (1 << 31) | ((1 << pmu_num_counters(env)) - 1);
 }
 
+typedef struct pm_event {
+    uint16_t number; /* PMEVTYPER.evtCount is 16 bits wide */
+    /* If the event is supported on this CPU (used to generate PMCEID[01]) */
+    bool (*supported)(CPUARMState *);
+    /*
+     * Retrieve the current count of the underlying event. The programmed
+     * counters hold a difference from the return value from this function
+     */
+    uint64_t (*get_count)(CPUARMState *);
+} pm_event;
+
+static const pm_event pm_events[] = {
+};
+
+/*
+ * Note: Before increasing MAX_EVENT_ID beyond 0x3f into the 0x40xx range of
+ * events (i.e. the statistical profiling extension), this implementation
+ * should first be updated to something sparse instead of the current
+ * supported_event_map[] array.
+ */
+#define MAX_EVENT_ID 0x0
+#define UNSUPPORTED_EVENT UINT16_MAX
+static uint16_t supported_event_map[MAX_EVENT_ID + 1];
+
+/*
+ * Called upon initialization to build PMCEID0_EL0 or PMCEID1_EL0 (indicated by
+ * 'which'). We also use it to build a map of ARM event numbers to indices in
+ * our pm_events array.
+ *
+ * Note: Events in the 0x40XX range are not currently supported.
+ */
+uint64_t get_pmceid(CPUARMState *env, unsigned which)
+{
+    uint64_t pmceid = 0;
+    unsigned int i;
+
+    assert(which <= 1);
+
+    for (i = 0; i < ARRAY_SIZE(supported_event_map); i++) {
+        supported_event_map[i] = UNSUPPORTED_EVENT;
+    }
+
+    for (i = 0; i < ARRAY_SIZE(pm_events); i++) {
+        const pm_event *cnt = &pm_events[i];
+        assert(cnt->number <= MAX_EVENT_ID);
+        /* We do not currently support events in the 0x40xx range */
+        assert(cnt->number <= 0x3f);
+
+        if ((cnt->number & 0x20) == (which << 6) &&
+                cnt->supported(env)) {
+            pmceid |= (1 << (cnt->number & 0x1f));
+            supported_event_map[cnt->number] = i;
+        }
+    }
+    return pmceid;
+}
+
 static CPAccessResult pmreg_access(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
 {
--
2.20.1
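To see how the framework above is meant to be used, here is a hypothetical
pm_event entry for the architectural SW_INCR event (number 0x000). The
callback names are invented for illustration and are not necessarily the
ones a later patch adds; MAX_EVENT_ID would also have to be raised to
cover the new event number:

    /* Hypothetical: an event every PMUv3 implementation supports. */
    static bool event_always_supported(CPUARMState *env)
    {
        return true;
    }

    /* Hypothetical: SW_INCR advances only on PMSWINC writes, so there is
     * no free-running count for the programmed counter to track. */
    static uint64_t swinc_get_count(CPUARMState *env)
    {
        return 0;
    }

    static const pm_event pm_events[] = {
        { .number = 0x000, /* SW_INCR */
          .supported = event_always_supported,
          .get_count = swinc_get_count },
    };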
From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns changes to the External Debug interface,
with Secure and Non-secure access to the debug registers, and all
of it is outside the scope of QEMU. Indicating support for this
is mandatory with FEAT_SEL2, which we do implement.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 2 +-
 target/arm/cpu_tcg.c          | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
+- FEAT_Debugv8p4 (Debug changes for v8.4)
 - FEAT_DotProd (Advanced SIMD dot product instructions)
 - FEAT_FCMA (Floating-point complex number instructions)
 - FEAT_FHM (Floating-point half-precision multiplication instructions)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->isar.id_aa64zfr0 = t;
 
     t = cpu->isar.id_aa64dfr0;
-    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 8); /* FEAT_Debugv8p2 */
+    t = FIELD_DP64(t, ID_AA64DFR0, DEBUGVER, 9); /* FEAT_Debugv8p4 */
     t = FIELD_DP64(t, ID_AA64DFR0, PMUVER, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_aa64dfr0 = t;
 
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_pfr2 = t;
 
     t = cpu->isar.id_dfr0;
-    t = FIELD_DP32(t, ID_DFR0, COPDBG, 8); /* FEAT_Debugv8p2 */
-    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 8); /* FEAT_Debugv8p2 */
+    t = FIELD_DP32(t, ID_DFR0, COPDBG, 9); /* FEAT_Debugv8p4 */
+    t = FIELD_DP32(t, ID_DFR0, COPSDBG, 9); /* FEAT_Debugv8p4 */
     t = FIELD_DP32(t, ID_DFR0, PERFMON, 5); /* FEAT_PMUv3p4 */
     cpu->isar.id_dfr0 = t;
 }
--
2.25.1

From: Richard Henderson <richard.henderson@linaro.org>

This is not really functional yet, because the crypto is not yet
implemented. This, however, follows the AddPAC pseudo function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/pauth_helper.c | 42 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 41 insertions(+), 1 deletion(-)

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t pauth_computepac(uint64_t data, uint64_t modifier,
 static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
                              ARMPACKey *key, bool data)
 {
-    g_assert_not_reached(); /* FIXME */
+    ARMMMUIdx mmu_idx = arm_stage1_mmu_idx(env);
+    ARMVAParameters param = aa64_va_parameters(env, ptr, mmu_idx, data);
+    uint64_t pac, ext_ptr, ext, test;
+    int bot_bit, top_bit;
+
+    /* If tagged pointers are in use, use ptr<55>, otherwise ptr<63>. */
+    if (param.tbi) {
+        ext = sextract64(ptr, 55, 1);
+    } else {
+        ext = sextract64(ptr, 63, 1);
+    }
+
+    /* Build a pointer with known good extension bits. */
+    top_bit = 64 - 8 * param.tbi;
+    bot_bit = 64 - param.tsz;
+    ext_ptr = deposit64(ptr, bot_bit, top_bit - bot_bit, ext);
+
+    pac = pauth_computepac(ext_ptr, modifier, *key);
+
+    /*
+     * Check if the ptr has good extension bits and corrupt the
+     * pointer authentication code if not.
+     */
+    test = sextract64(ptr, bot_bit, top_bit - bot_bit);
+    if (test != 0 && test != -1) {
+        pac ^= MAKE_64BIT_MASK(top_bit - 1, 1);
+    }
+
+    /*
+     * Preserve the determination between upper and lower at bit 55,
+     * and insert pointer authentication code.
+     */
+    if (param.tbi) {
+        ptr &= ~MAKE_64BIT_MASK(bot_bit, 55 - bot_bit + 1);
+        pac &= MAKE_64BIT_MASK(bot_bit, 54 - bot_bit + 1);
+    } else {
+        ptr &= MAKE_64BIT_MASK(0, bot_bit);
+        pac &= ~(MAKE_64BIT_MASK(55, 1) | MAKE_64BIT_MASK(0, bot_bit));
+    }
+    ext &= MAKE_64BIT_MASK(55, 1);
+    return pac | ext | ptr;
 }
 
 static uint64_t pauth_original_ptr(uint64_t ptr, ARMVAParameters param)
--
2.20.1
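A worked example of the mask arithmetic in pauth_addpac() above, for the
common case of a 48-bit virtual address (param.tsz == 16) with TBI
enabled (values chosen for illustration):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAKE_64BIT_MASK(shift, length) \
        (((~0ULL) >> (64 - (length))) << (shift))

    int main(void)
    {
        int top_bit = 64 - 8 * 1; /* 56: bits 63:56 are the TBI tag */
        int bot_bit = 64 - 16;    /* 48: bits 47:0 hold the address */

        /* Bits top_bit-1..bot_bit (55:48) must be all-zero or all-one
         * for the extension check; with TBI the PAC lands in 54:48 and
         * bit 55 keeps the upper/lower address-space selector. */
        uint64_t test_mask = MAKE_64BIT_MASK(bot_bit, top_bit - bot_bit);
        uint64_t pac_mask = MAKE_64BIT_MASK(bot_bit, 54 - bot_bit + 1);
        printf("extension check mask: 0x%016" PRIx64 "\n", test_mask);
        printf("PAC field mask:       0x%016" PRIx64 "\n", pac_mask);
        /* 0x00ff000000000000 and 0x007f000000000000 respectively */
        return 0;
    }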
1
From: Richard Henderson <richard.henderson@linaro.org>

Add only the system registers required to implement zero error
records.  This means that all values for ERRSELR are out of range,
which means that it and all of the indexed error record registers
need not be implemented.

Add the EL2 registers required for injecting virtual SError.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  5 +++
 target/arm/helper.c | 84 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 89 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         uint64_t tfsr_el[4]; /* tfsre0_el1 is index 0. */
         uint64_t gcr_el1;
         uint64_t rgsr_el1;
+
+        /* Minimal RAS registers */
+        uint64_t disr_el1;
+        uint64_t vdisr_el2;
+        uint64_t vsesr_el2;
     } cp15;

     struct {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo debug_lpae_cp_reginfo[] = {
       .access = PL0_R, .type = ARM_CP_CONST|ARM_CP_64BIT, .resetvalue = 0 },
 };

+/*
+ * Check for traps to RAS registers, which are controlled
+ * by HCR_EL2.TERR and SCR_EL3.TERR.
+ */
+static CPAccessResult access_terr(CPUARMState *env, const ARMCPRegInfo *ri,
+                                  bool isread)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TERR)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_TERR)) {
+        return CP_ACCESS_TRAP_EL3;
+    }
+    return CP_ACCESS_OK;
+}
+
+static uint64_t disr_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+        return env->cp15.vdisr_el2;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+        return 0; /* RAZ/WI */
+    }
+    return env->cp15.disr_el1;
+}
+
+static void disr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t val)
+{
+    int el = arm_current_el(env);
+
+    if (el < 2 && (arm_hcr_el2_eff(env) & HCR_AMO)) {
+        env->cp15.vdisr_el2 = val;
+        return;
+    }
+    if (el < 3 && (env->cp15.scr_el3 & SCR_EA)) {
+        return; /* RAZ/WI */
+    }
+    env->cp15.disr_el1 = val;
+}
+
+/*
+ * Minimal RAS implementation with no Error Records.
+ * Which means that all of the Error Record registers:
+ *   ERXADDR_EL1
+ *   ERXCTLR_EL1
+ *   ERXFR_EL1
+ *   ERXMISC0_EL1
+ *   ERXMISC1_EL1
+ *   ERXMISC2_EL1
+ *   ERXMISC3_EL1
+ *   ERXPFGCDN_EL1  (RASv1p1)
+ *   ERXPFGCTL_EL1  (RASv1p1)
+ *   ERXPFGF_EL1    (RASv1p1)
+ *   ERXSTATUS_EL1
+ * and
+ *   ERRSELR_EL1
+ * may generate UNDEFINED, which is the effect we get by not
+ * listing them at all.
+ */
+static const ARMCPRegInfo minimal_ras_reginfo[] = {
+    { .name = "DISR_EL1", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 1, .opc2 = 1,
+      .access = PL1_RW, .fieldoffset = offsetof(CPUARMState, cp15.disr_el1),
+      .readfn = disr_read, .writefn = disr_write, .raw_writefn = raw_write },
+    { .name = "ERRIDR_EL1", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 0, .crn = 5, .crm = 3, .opc2 = 0,
+      .access = PL1_R, .accessfn = access_terr,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "VDISR_EL2", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 4, .crn = 12, .crm = 1, .opc2 = 1,
+      .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vdisr_el2) },
+    { .name = "VSESR_EL2", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 4, .crn = 5, .crm = 2, .opc2 = 3,
+      .access = PL2_RW, .fieldoffset = offsetof(CPUARMState, cp15.vsesr_el2) },
+};
+
 /* Return the exception level to which exceptions should be taken
  * via SVEAccessTrap.  If an exception should be routed through
  * AArch64.AdvSIMDFPAccessTrap, return 0; fp_exception_el should
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     if (cpu_isar_feature(aa64_ssbs, cpu)) {
         define_one_arm_cp_reg(cpu, &ssbs_reginfo);
     }
+    if (cpu_isar_feature(any_ras, cpu)) {
+        define_arm_cp_regs(cpu, minimal_ras_reginfo);
+    }

     if (cpu_isar_feature(aa64_vh, cpu) ||
         cpu_isar_feature(aa64_debugv8p2, cpu)) {
--
2.25.1
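As a usage sketch (not part of the patch): a bare-metal AArch64 EL1 guest can confirm the zero-error-record behaviour by reading ERRIDR_EL1, whose encoding (op0=3, op1=0, CRn=5, CRm=3, op2=0) matches the reginfo above. The S3_0_C5_C3_0 spelling avoids requiring a RAS-aware assembler:

#include <stdint.h>

/* ERRIDR_EL1.NUM reads as 0 with this minimal RAS implementation,
 * so the guest has no error records to enumerate. */
static inline uint64_t read_erridr_el1(void)
{
    uint64_t v;
    __asm__ volatile("mrs %0, S3_0_C5_C3_0" : "=r"(v));
    return v;
}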
From: Richard Henderson <richard.henderson@linaro.org>

Enable writes to the TERR and TEA bits when RAS is enabled.
These bits are otherwise RES0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         }
         valid_mask &= ~SCR_NET;

+        if (cpu_isar_feature(aa64_ras, cpu)) {
+            valid_mask |= SCR_TERR;
+        }
         if (cpu_isar_feature(aa64_lor, cpu)) {
             valid_mask |= SCR_TLOR;
         }
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
         }
     } else {
         valid_mask &= ~(SCR_RW | SCR_ST);
+        if (cpu_isar_feature(aa32_ras, cpu)) {
+            valid_mask |= SCR_TERR;
+        }
     }

     if (!arm_feature(env, ARM_FEATURE_EL2)) {
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
     if (cpu_isar_feature(aa64_vh, cpu)) {
         valid_mask |= HCR_E2H;
     }
+    if (cpu_isar_feature(aa64_ras, cpu)) {
+        valid_mask |= HCR_TERR | HCR_TEA;
+    }
    if (cpu_isar_feature(aa64_lor, cpu)) {
        valid_mask |= HCR_TLOR;
    }
--
2.25.1
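The valid_mask idiom used by scr_write() and do_hcr_write() is what keeps unimplemented bits RES0: the incoming value is ANDed with the mask before being latched, so TERR/TEA only become writable once the feature test adds them. A condensed sketch of the idea, with an illustrative bit constant (SCR_EL3.TERR is architecturally bit 15) rather than the QEMU symbols:

#include <stdint.h>

#define SCR_TERR (1ULL << 15)  /* architectural bit position */

/* Bits outside valid_mask never latch and read back as zero. */
static uint64_t masked_write(uint64_t value, uint64_t valid_mask)
{
    return value & valid_mask;
}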
From: Richard Henderson <richard.henderson@linaro.org>

Virtual SError exceptions are raised by setting HCR_EL2.VSE,
and are routed to EL1 just like other virtual exceptions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       |  2 ++
 target/arm/internals.h |  8 ++++++++
 target/arm/syndrome.h  |  5 +++++
 target/arm/cpu.c       | 38 +++++++++++++++++++++++++++++++++++++-
 target/arm/helper.c    | 40 +++++++++++++++++++++++++++++++++++++++-
 5 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@
 #define EXCP_LSERR          21   /* v8M LSERR SecureFault */
 #define EXCP_UNALIGNED      22   /* v7M UNALIGNED UsageFault */
 #define EXCP_DIVBYZERO      23   /* v7M DIVBYZERO UsageFault */
+#define EXCP_VSERR          24
 /* NB: add new EXCP_ defines to the array in arm_log_exception() too */

 #define ARMV7M_EXCP_RESET   1
@@ -XXX,XX +XXX,XX @@ enum {
 #define CPU_INTERRUPT_FIQ   CPU_INTERRUPT_TGT_EXT_1
 #define CPU_INTERRUPT_VIRQ  CPU_INTERRUPT_TGT_EXT_2
 #define CPU_INTERRUPT_VFIQ  CPU_INTERRUPT_TGT_EXT_3
+#define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0

 /* The usual mapping for an AArch64 system register to its AArch32
  * counterpart is for the 32 bit world to have access to the lower
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_virq(ARMCPU *cpu);
 */
 void arm_cpu_update_vfiq(ARMCPU *cpu);

+/**
+ * arm_cpu_update_vserr: Update CPU_INTERRUPT_VSERR bit
+ *
+ * Update the CPU_INTERRUPT_VSERR bit in cs->interrupt_request,
+ * following a change to the HCR_EL2.VSE bit.
+ */
+void arm_cpu_update_vserr(ARMCPU *cpu);
+
 /**
  * arm_mmu_idx_el:
  * @env: The cpu environment
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_pcalignment(void)
     return (EC_PCALIGNMENT << ARM_EL_EC_SHIFT) | ARM_EL_IL;
 }

+static inline uint32_t syn_serror(uint32_t extra)
+{
+    return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
+}
+
 #endif /* TARGET_ARM_SYNDROME_H */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_has_work(CPUState *cs)
     return (cpu->power_state != PSCI_OFF)
         && cs->interrupt_request &
         (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
-         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ
+         | CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
         | CPU_INTERRUPT_EXITTB);
 }

@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
             return false;
         }
         return !(env->daif & PSTATE_I);
+    case EXCP_VSERR:
+        if (!(hcr_el2 & HCR_AMO) || (hcr_el2 & HCR_TGE)) {
+            /* VIRQs are only taken when hypervized.  */
+            return false;
+        }
+        return !(env->daif & PSTATE_A);
    default:
        g_assert_not_reached();
    }
@@ -XXX,XX +XXX,XX @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
             goto found;
         }
     }
+    if (interrupt_request & CPU_INTERRUPT_VSERR) {
+        excp_idx = EXCP_VSERR;
+        target_el = 1;
+        if (arm_excp_unmasked(cs, excp_idx, target_el,
+                              cur_el, secure, hcr_el2)) {
+            /* Taking a virtual abort clears HCR_EL2.VSE */
+            env->cp15.hcr_el2 &= ~HCR_VSE;
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+            goto found;
+        }
+    }
     return false;

 found:
@@ -XXX,XX +XXX,XX @@ void arm_cpu_update_vfiq(ARMCPU *cpu)
     }
 }

+void arm_cpu_update_vserr(ARMCPU *cpu)
+{
+    /*
+     * Update the interrupt level for VSERR, which is the HCR_EL2.VSE bit.
+     */
+    CPUARMState *env = &cpu->env;
+    CPUState *cs = CPU(cpu);
+
+    bool new_state = env->cp15.hcr_el2 & HCR_VSE;
+
+    if (new_state != ((cs->interrupt_request & CPU_INTERRUPT_VSERR) != 0)) {
+        if (new_state) {
+            cpu_interrupt(cs, CPU_INTERRUPT_VSERR);
+        } else {
+            cpu_reset_interrupt(cs, CPU_INTERRUPT_VSERR);
+        }
+    }
+}
+
 #ifndef CONFIG_USER_ONLY
 static void arm_cpu_set_irq(void *opaque, int irq, int level)
 {
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         }
     }

-    /* External aborts are not possible in QEMU so A bit is always clear */
+    if (hcr_el2 & HCR_AMO) {
+        if (cs->interrupt_request & CPU_INTERRUPT_VSERR) {
+            ret |= CPSR_A;
+        }
+    }
+
     return ret;
 }

@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
     g_assert(qemu_mutex_iothread_locked());
     arm_cpu_update_virq(cpu);
     arm_cpu_update_vfiq(cpu);
+    arm_cpu_update_vserr(cpu);
 }

 static void hcr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
@@ -XXX,XX +XXX,XX @@ void arm_log_exception(CPUState *cs)
         [EXCP_LSERR] = "v8M LSERR UsageFault",
         [EXCP_UNALIGNED] = "v7M UNALIGNED UsageFault",
         [EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
+        [EXCP_VSERR] = "Virtual SERR",
     };

     if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
         mask = CPSR_A | CPSR_I | CPSR_F;
         offset = 4;
         break;
+    case EXCP_VSERR:
+        {
+            /*
+             * Note that this is reported as a data abort, but the DFAR
+             * has an UNKNOWN value.  Construct the SError syndrome from
+             * AET and ExT fields.
+             */
+            ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal, };
+
+            if (extended_addresses_enabled(env)) {
+                env->exception.fsr = arm_fi_to_lfsc(&fi);
+            } else {
+                env->exception.fsr = arm_fi_to_sfsc(&fi);
+            }
+            env->exception.fsr |= env->cp15.vsesr_el2 & 0xd000;
+            A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
+            qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x\n",
+                          env->exception.fsr);
+
+            new_mode = ARM_CPU_MODE_ABT;
+            addr = 0x10;
+            mask = CPSR_A | CPSR_I;
+            offset = 8;
+        }
+        break;
     case EXCP_SMC:
         new_mode = ARM_CPU_MODE_MON;
         addr = 0x08;
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
     case EXCP_VFIQ:
         addr += 0x100;
         break;
+    case EXCP_VSERR:
+        addr += 0x180;
+        /* Construct the SError syndrome from IDS and ISS fields. */
+        env->exception.syndrome = syn_serror(env->cp15.vsesr_el2 & 0x1ffffff);
+        env->cp15.esr_el[new_el] = env->exception.syndrome;
+        break;
     default:
         cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
     }
--
2.25.1
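To see the new plumbing end to end: a hypothetical bare-metal hypervisor running at EL2 would inject a virtual SError into its guest by programming VSESR_EL2 (encoding op0=3, op1=4, CRn=5, CRm=2, op2=3, matching the reginfo in the earlier patch) and then setting HCR_EL2.VSE (bit 8), which this patch maps onto CPU_INTERRUPT_VSERR. A sketch, not QEMU code:

#include <stdint.h>

#define HCR_VSE (1ULL << 8)  /* virtual SError pending */

static inline void inject_vserror(uint64_t syndrome)
{
    uint64_t hcr;
    /* VSESR_EL2 supplies the ISS/AET/ExT bits the guest will see. */
    __asm__ volatile("msr S3_4_C5_C2_3, %0" :: "r"(syndrome));
    __asm__ volatile("mrs %0, hcr_el2" : "=r"(hcr));
    hcr |= HCR_VSE;
    __asm__ volatile("msr hcr_el2, %0" :: "r"(hcr));
    __asm__ volatile("isb");
}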
From: Richard Henderson <richard.henderson@linaro.org>

Check for and defer any pending virtual SError.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.h        |  1 +
 target/arm/a32.decode      | 16 ++++++++------
 target/arm/t32.decode      | 18 ++++++++--------
 target/arm/op_helper.c     | 43 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-a64.c | 17 +++++++++++++++
 target/arm/translate.c     | 23 ++++++++++++++++++++
 6 files changed, 103 insertions(+), 15 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_1(wfe, void, env)
 DEF_HELPER_1(yield, void, env)
 DEF_HELPER_1(pre_hvc, void, env)
 DEF_HELPER_2(pre_smc, void, env, i32)
+DEF_HELPER_1(vesb, void, env)

 DEF_HELPER_2(get_r13_banked, i32, env, i32)
 DEF_HELPER_3(cpsr_write, void, env, i32, i32)
 DEF_HELPER_2(cpsr_write_eret, void, env, i32)
diff --git a/target/arm/a32.decode b/target/arm/a32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/a32.decode
+++ b/target/arm/a32.decode
@@ -XXX,XX +XXX,XX @@ SMULTT           .... 0001 0110 .... 0000 .... 1110 .... @rd0mn

 {
   {
-    YIELD        ---- 0011 0010 0000 1111 ---- 0000 0001
-    WFE          ---- 0011 0010 0000 1111 ---- 0000 0010
-    WFI          ---- 0011 0010 0000 1111 ---- 0000 0011
+    [
+      YIELD      ---- 0011 0010 0000 1111 ---- 0000 0001
+      WFE        ---- 0011 0010 0000 1111 ---- 0000 0010
+      WFI        ---- 0011 0010 0000 1111 ---- 0000 0011

-    # TODO: Implement SEV, SEVL; may help SMP performance.
-    # SEV        ---- 0011 0010 0000 1111 ---- 0000 0100
-    # SEVL       ---- 0011 0010 0000 1111 ---- 0000 0101
+      # TODO: Implement SEV, SEVL; may help SMP performance.
+      # SEV      ---- 0011 0010 0000 1111 ---- 0000 0100
+      # SEVL     ---- 0011 0010 0000 1111 ---- 0000 0101
+
+      ESB        ---- 0011 0010 0000 1111 ---- 0001 0000
+    ]

     # The canonical nop ends in 00000000, but the whole of the
     # rest of the space executes as nop if otherwise unsupported.
diff --git a/target/arm/t32.decode b/target/arm/t32.decode
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/t32.decode
+++ b/target/arm/t32.decode
@@ -XXX,XX +XXX,XX @@ CLZ              1111 1010 1011 ---- 1111 .... 1000 .... @rdm
 [
   # Hints, and CPS
   {
-    YIELD        1111 0011 1010 1111 1000 0000 0000 0001
-    WFE          1111 0011 1010 1111 1000 0000 0000 0010
-    WFI          1111 0011 1010 1111 1000 0000 0000 0011
+    [
+      YIELD      1111 0011 1010 1111 1000 0000 0000 0001
+      WFE        1111 0011 1010 1111 1000 0000 0000 0010
+      WFI        1111 0011 1010 1111 1000 0000 0000 0011

-    # TODO: Implement SEV, SEVL; may help SMP performance.
-    # SEV        1111 0011 1010 1111 1000 0000 0000 0100
-    # SEVL       1111 0011 1010 1111 1000 0000 0000 0101
+      # TODO: Implement SEV, SEVL; may help SMP performance.
+      # SEV      1111 0011 1010 1111 1000 0000 0000 0100
+      # SEVL     1111 0011 1010 1111 1000 0000 0000 0101

-    # For M-profile minimal-RAS ESB can be a NOP, which is the
-    # default behaviour since it is in the hint space.
-    # ESB        1111 0011 1010 1111 1000 0000 0001 0000
+      ESB        1111 0011 1010 1111 1000 0000 0001 0000
+    ]

     # The canonical nop ends in 0000 0000, but the whole rest
     # of the space is "reserved hint, behaves as nop".
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(probe_access)(CPUARMState *env, target_ulong ptr,
                         access_type, mmu_idx, ra);
     }
 }
+
+/*
+ * This function corresponds to AArch64.vESBOperation().
+ * Note that the AArch32 version is not functionally different.
+ */
+void HELPER(vesb)(CPUARMState *env)
+{
+    /*
+     * The EL2Enabled() check is done inside arm_hcr_el2_eff,
+     * and will return HCR_EL2.VSE == 0, so nothing happens.
+     */
+    uint64_t hcr = arm_hcr_el2_eff(env);
+    bool enabled = !(hcr & HCR_TGE) && (hcr & HCR_AMO);
+    bool pending = enabled && (hcr & HCR_VSE);
+    bool masked  = (env->daif & PSTATE_A);
+
+    /* If VSE pending and masked, defer the exception.  */
+    if (pending && masked) {
+        uint32_t syndrome;
+
+        if (arm_el_is_aa64(env, 1)) {
+            /* Copy across IDS and ISS from VSESR. */
+            syndrome = env->cp15.vsesr_el2 & 0x1ffffff;
+        } else {
+            ARMMMUFaultInfo fi = { .type = ARMFault_AsyncExternal };
+
+            if (extended_addresses_enabled(env)) {
+                syndrome = arm_fi_to_lfsc(&fi);
+            } else {
+                syndrome = arm_fi_to_sfsc(&fi);
+            }
+            /* Copy across AET and ExT from VSESR. */
+            syndrome |= env->cp15.vsesr_el2 & 0xd000;
+        }
+
+        /* Set VDISR_EL2.A along with the syndrome. */
+        env->cp15.vdisr_el2 = syndrome | (1u << 31);
+
+        /* Clear pending virtual SError */
+        env->cp15.hcr_el2 &= ~HCR_VSE;
+        cpu_reset_interrupt(env_cpu(env), CPU_INTERRUPT_VSERR);
+    }
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
             gen_helper_autib(cpu_X[17], cpu_env, cpu_X[17], cpu_X[16]);
         }
         break;
+    case 0b10000: /* ESB */
+        /* Without RAS, we must implement this as NOP. */
+        if (dc_isar_feature(aa64_ras, s)) {
+            /*
+             * QEMU does not have a source of physical SErrors,
+             * so we are only concerned with virtual SErrors.
+             * The pseudocode in the ARM for this case is
+             *   if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
+             *      AArch64.vESBOperation();
+             * Most of the condition can be evaluated at translation time.
+             * Test for EL2 present, and defer test for SEL2 to runtime.
+             */
+            if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
+                gen_helper_vesb(cpu_env);
+            }
+        }
+        break;
     case 0b11000: /* PACIAZ */
         if (s->pauth_active) {
             gen_helper_pacia(cpu_X[30], cpu_env, cpu_X[30],
diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static bool trans_WFI(DisasContext *s, arg_WFI *a)
     return true;
 }

+static bool trans_ESB(DisasContext *s, arg_ESB *a)
+{
+    /*
+     * For M-profile, minimal-RAS ESB can be a NOP.
+     * Without RAS, we must implement this as NOP.
+     */
+    if (!arm_dc_feature(s, ARM_FEATURE_M) && dc_isar_feature(aa32_ras, s)) {
+        /*
+         * QEMU does not have a source of physical SErrors,
+         * so we are only concerned with virtual SErrors.
+         * The pseudocode in the ARM for this case is
+         *   if PSTATE.EL IN {EL0, EL1} && EL2Enabled() then
+         *      AArch32.vESBOperation();
+         * Most of the condition can be evaluated at translation time.
+         * Test for EL2 present, and defer test for SEL2 to runtime.
+         */
+        if (s->current_el <= 1 && arm_dc_feature(s, ARM_FEATURE_EL2)) {
+            gen_helper_vesb(cpu_env);
+        }
+    }
+    return true;
+}
+
 static bool trans_NOP(DisasContext *s, arg_NOP *a)
 {
     return true;
--
2.25.1
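The deferral condition that HELPER(vesb) evaluates can be restated compactly. The following standalone sketch (illustrative names, with architectural bit positions spelled out rather than QEMU's symbols) returns true exactly when ESB must write VDISR_EL2 and clear the pending virtual SError:

#include <stdbool.h>
#include <stdint.h>

#define HCR_AMO  (1ULL << 5)   /* SError routing to EL2 */
#define HCR_VSE  (1ULL << 8)   /* virtual SError pending */
#define HCR_TGE  (1ULL << 27)  /* trap general exceptions */
#define PSTATE_A (1ULL << 8)   /* SError interrupt mask */

static bool esb_defers_vserror(uint64_t hcr, uint64_t pstate)
{
    bool enabled = !(hcr & HCR_TGE) && (hcr & HCR_AMO);
    bool pending = enabled && (hcr & HCR_VSE);
    bool masked  = (pstate & PSTATE_A) != 0;
    return pending && masked;
}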
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_PMULL (PMULL, PMULL2 instructions)
 - FEAT_PMUv3p1 (PMU Extensions v3.1)
 - FEAT_PMUv3p4 (PMU Extensions v3.4)
+- FEAT_RAS (Reliability, availability, and serviceability)
 - FEAT_RDM (Advanced SIMD rounding double multiply accumulate instructions)
 - FEAT_RNG (Random number generator)
 - FEAT_SB (Speculation Barrier)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = cpu->isar.id_aa64pfr0;
     t = FIELD_DP64(t, ID_AA64PFR0, FP, 1);       /* FEAT_FP16 */
     t = FIELD_DP64(t, ID_AA64PFR0, ADVSIMD, 1);  /* FEAT_FP16 */
+    t = FIELD_DP64(t, ID_AA64PFR0, RAS, 1);      /* FEAT_RAS */
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
     t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);     /* FEAT_SEL2 */
     t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);      /* FEAT_DIT */
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)

     t = cpu->isar.id_pfr0;
     t = FIELD_DP32(t, ID_PFR0, DIT, 1);          /* FEAT_DIT */
+    t = FIELD_DP32(t, ID_PFR0, RAS, 1);          /* FEAT_RAS */
     cpu->isar.id_pfr0 = t;

     t = cpu->isar.id_pfr2;
--
2.25.1
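Once the field is advertised, a guest can detect the feature in the standard way. A sketch (not part of the patch), using ID_AA64PFR0_EL1.RAS at bits [31:28]:

#include <stdbool.h>
#include <stdint.h>

static inline bool cpu_has_ras(void)
{
    uint64_t pfr0;
    __asm__ volatile("mrs %0, id_aa64pfr0_el1" : "=r"(pfr0));
    return ((pfr0 >> 28) & 0xf) != 0;  /* 1 = RAS present */
}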
From: Richard Henderson <richard.henderson@linaro.org>

This feature is AArch64 only, and applies to physical SErrors,
which QEMU does not implement, thus the feature is a nop.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_FlagM2 (Enhancements to flag manipulation instructions)
 - FEAT_HPDS (Hierarchical permission disables)
 - FEAT_I8MM (AArch64 Int8 matrix multiplication instructions)
+- FEAT_IESB (Implicit error synchronization event)
 - FEAT_JSCVT (JavaScript conversion instructions)
 - FEAT_LOR (Limited ordering regions)
 - FEAT_LPA (Large Physical Address space)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = cpu->isar.id_aa64mmfr2;
     t = FIELD_DP64(t, ID_AA64MMFR2, CNP, 1);      /* FEAT_TTCNP */
     t = FIELD_DP64(t, ID_AA64MMFR2, UAO, 1);      /* FEAT_UAO */
+    t = FIELD_DP64(t, ID_AA64MMFR2, IESB, 1);     /* FEAT_IESB */
     t = FIELD_DP64(t, ID_AA64MMFR2, VARANGE, 1);  /* FEAT_LVA */
     t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1);       /* FEAT_TTST */
     t = FIELD_DP64(t, ID_AA64MMFR2, TTL, 1);      /* FEAT_TTL */
--
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns branch speculation, which TCG does
not implement.  Thus we can trivially enable this feature.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/emulation.rst | 1 +
 target/arm/cpu64.c            | 1 +
 target/arm/cpu_tcg.c          | 1 +
 3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
 - FEAT_BBM at level 2 (Translation table break-before-make levels)
 - FEAT_BF16 (AArch64 BFloat16 instructions)
 - FEAT_BTI (Branch Target Identification)
+- FEAT_CSV2 (Cache speculation variant 2)
 - FEAT_DIT (Data Independent Timing instructions)
 - FEAT_DPB (DC CVAP instruction)
 - FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
     t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1);  /* FEAT_SEL2 */
     t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1);   /* FEAT_DIT */
+    t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1);  /* FEAT_CSV2 */
     cpu->isar.id_aa64pfr0 = t;

     t = cpu->isar.id_aa64pfr1;
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
     cpu->isar.id_mmfr4 = t;

     t = cpu->isar.id_pfr0;
+    t = FIELD_DP32(t, ID_PFR0, CSV2, 2);  /* FEAT_CSV2 */
     t = FIELD_DP32(t, ID_PFR0, DIT, 1);   /* FEAT_DIT */
     t = FIELD_DP32(t, ID_PFR0, RAS, 1);   /* FEAT_RAS */
     cpu->isar.id_pfr0 = t;
--
2.25.1
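A note on the FIELD_DP64()/FIELD_DP32() idiom used throughout the patch above: each ID-register field is declared once with the FIELD() macro, which generates shift/length constants, and FIELD_DP64 then deposits a value into that field while preserving the rest of the register. A minimal self-contained sketch of the same pattern (plain C; the MYREG/CSV2 names and the field position are illustrative, not QEMU's generated macros):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 4-bit field at bits [59:56], in the spirit of ID_AA64PFR0.CSV2. */
    #define MYREG_CSV2_SHIFT  56
    #define MYREG_CSV2_LENGTH 4

    /* Deposit 'val' into the field, preserving all other bits. */
    static uint64_t field_dp64(uint64_t reg, unsigned shift, unsigned len, uint64_t val)
    {
        uint64_t mask = ((UINT64_C(1) << len) - 1) << shift;
        return (reg & ~mask) | ((val << shift) & mask);
    }

    /* Extract the field again. */
    static uint64_t field_ex64(uint64_t reg, unsigned shift, unsigned len)
    {
        return (reg >> shift) & ((UINT64_C(1) << len) - 1);
    }

    int main(void)
    {
        uint64_t t = 0;
        t = field_dp64(t, MYREG_CSV2_SHIFT, MYREG_CSV2_LENGTH, 2); /* CSV2 = 2 */
        printf("CSV2 = %llu\n",
               (unsigned long long)field_ex64(t, MYREG_CSV2_SHIFT, MYREG_CSV2_LENGTH));
        return 0;
    }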
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper.c | 70 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 70 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_lor_other(CPUARMState *env,
return access_lor_ns(env);
}

+#ifdef TARGET_AARCH64
+static CPAccessResult access_pauth(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ int el = arm_current_el(env);
+
+ if (el < 2 &&
+ arm_feature(env, ARM_FEATURE_EL2) &&
+ !(arm_hcr_el2_eff(env) & HCR_APK)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ if (el < 3 &&
+ arm_feature(env, ARM_FEATURE_EL3) &&
+ !(env->cp15.scr_el3 & SCR_APK)) {
+ return CP_ACCESS_TRAP_EL3;
+ }
+ return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo pauth_reginfo[] = {
+ { .name = "APDAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 0,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apda_key.lo) },
+ { .name = "APDAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 1,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apda_key.hi) },
+ { .name = "APDBKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 2,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apdb_key.lo) },
+ { .name = "APDBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 2, .opc2 = 3,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apdb_key.hi) },
+ { .name = "APGAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 0,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apga_key.lo) },
+ { .name = "APGAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 3, .opc2 = 1,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apga_key.hi) },
+ { .name = "APIAKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 0,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apia_key.lo) },
+ { .name = "APIAKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 1,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apia_key.hi) },
+ { .name = "APIBKEYLO_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 2,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apib_key.lo) },
+ { .name = "APIBKEYHI_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 2, .crm = 1, .opc2 = 3,
+ .access = PL1_RW, .accessfn = access_pauth,
+ .fieldoffset = offsetof(CPUARMState, apib_key.hi) },
+ REGINFO_SENTINEL
+};
+#endif
+
void register_cp_regs_for_features(ARMCPU *cpu)
{
/* Register all the coprocessor registers based on feature bits */
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_one_arm_cp_reg(cpu, &zcr_el3_reginfo);
}
}
+
+#ifdef TARGET_AARCH64
+ if (cpu_isar_feature(aa64_pauth, cpu)) {
+ define_arm_cp_regs(cpu, pauth_reginfo);
+ }
+#endif
}

void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

There is no branch prediction in TCG, therefore there is no
need to actually include the context number into the predictor.
Therefore all we need to do is add the state for SCXTNUM_ELx.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/emulation.rst | 3 ++
target/arm/cpu.h | 16 +++++++++
target/arm/cpu.c | 5 +++
target/arm/cpu64.c | 3 +-
target/arm/helper.c | 61 ++++++++++++++++++++++++++++++++++-
5 files changed, 86 insertions(+), 2 deletions(-)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_BF16 (AArch64 BFloat16 instructions)
- FEAT_BTI (Branch Target Identification)
- FEAT_CSV2 (Cache speculation variant 2)
+- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
+- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
+- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
- FEAT_DIT (Data Independent Timing instructions)
- FEAT_DPB (DC CVAP instruction)
- FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
ARMPACKey apdb;
ARMPACKey apga;
} keys;
+
+ uint64_t scxtnum_el[4];
#endif

#if defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ void pmu_init(ARMCPU *cpu);
#define SCTLR_WXN (1U << 19)
#define SCTLR_ST (1U << 20) /* up to ??, RAZ in v6 */
#define SCTLR_UWXN (1U << 20) /* v7 onward, AArch32 only */
+#define SCTLR_TSCXT (1U << 20) /* FEAT_CSV2_1p2, AArch64 only */
#define SCTLR_FI (1U << 21) /* up to v7, v8 RES0 */
#define SCTLR_IESB (1U << 21) /* v8.2-IESB, AArch64 only */
#define SCTLR_U (1U << 22) /* up to v6, RAO in v7 */
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_dit(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, DIT) != 0;
}

+static inline bool isar_feature_aa64_scxtnum(const ARMISARegisters *id)
+{
+ int key = FIELD_EX64(id->id_aa64pfr0, ID_AA64PFR0, CSV2);
+ if (key >= 2) {
+ return true; /* FEAT_CSV2_2 */
+ }
+ if (key == 1) {
+ key = FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, CSV2_FRAC);
+ return key >= 2; /* FEAT_CSV2_1p2 */
+ }
+ return false;
+}
+
static inline bool isar_feature_aa64_ssbs(const ARMISARegisters *id)
{
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SSBS) != 0;
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(DeviceState *dev)
*/
env->cp15.gcr_el1 = 0x1ffff;
}
+ /*
+ * Disable access to SCXTNUM_EL0 from CSV2_1p2.
+ * This is not yet exposed from the Linux kernel in any way.
+ */
+ env->cp15.sctlr_el[1] |= SCTLR_TSCXT;
#else
/* Reset into the highest available EL */
if (arm_feature(env, ARM_FEATURE_EL3)) {
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
- t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 1); /* FEAT_CSV2 */
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2); /* FEAT_CSV2_2 */
cpu->isar.id_aa64pfr0 = t;

t = cpu->isar.id_aa64pfr1;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
* we do for EL2 with the virtualization=on property.
*/
t = FIELD_DP64(t, ID_AA64PFR1, MTE, 3); /* FEAT_MTE3 */
+ t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
cpu->isar.id_aa64pfr1 = t;

t = cpu->isar.id_aa64mmfr0;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void scr_write(CPUARMState *env, const ARMCPRegInfo *ri, uint64_t value)
if (cpu_isar_feature(aa64_mte, cpu)) {
valid_mask |= SCR_ATA;
}
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+ valid_mask |= SCR_ENSCXT;
+ }
} else {
valid_mask &= ~(SCR_RW | SCR_ST);
if (cpu_isar_feature(aa32_ras, cpu)) {
@@ -XXX,XX +XXX,XX @@ static void do_hcr_write(CPUARMState *env, uint64_t value, uint64_t valid_mask)
if (cpu_isar_feature(aa64_mte, cpu)) {
valid_mask |= HCR_ATA | HCR_DCT | HCR_TID5;
}
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+ valid_mask |= HCR_ENSCXT;
+ }
}

/* Clear RES0 bits. */
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
{ K(3, 0, 5, 6, 0), K(3, 4, 5, 6, 0), K(3, 5, 5, 6, 0),
"TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },

+ { K(3, 0, 13, 0, 7), K(3, 4, 13, 0, 7), K(3, 5, 13, 0, 7),
+ "SCXTNUM_EL1", "SCXTNUM_EL2", "SCXTNUM_EL12",
+ isar_feature_aa64_scxtnum },
+
/* TODO: ARMv8.2-SPE -- PMSCR_EL2 */
/* TODO: ARMv8.4-Trace -- TRFCR_EL2 */
};
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo mte_el0_cacheop_reginfo[] = {
},
};

-#endif
+static CPAccessResult access_scxtnum(CPUARMState *env, const ARMCPRegInfo *ri,
+ bool isread)
+{
+ uint64_t hcr = arm_hcr_el2_eff(env);
+ int el = arm_current_el(env);
+
+ if (el == 0 && !((hcr & HCR_E2H) && (hcr & HCR_TGE))) {
+ if (env->cp15.sctlr_el[1] & SCTLR_TSCXT) {
+ if (hcr & HCR_TGE) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ return CP_ACCESS_TRAP;
+ }
+ } else if (el < 2 && (env->cp15.sctlr_el[2] & SCTLR_TSCXT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ if (el < 2 && arm_is_el2_enabled(env) && !(hcr & HCR_ENSCXT)) {
+ return CP_ACCESS_TRAP_EL2;
+ }
+ if (el < 3
+ && arm_feature(env, ARM_FEATURE_EL3)
+ && !(env->cp15.scr_el3 & SCR_ENSCXT)) {
+ return CP_ACCESS_TRAP_EL3;
+ }
+ return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo scxtnum_reginfo[] = {
+ { .name = "SCXTNUM_EL0", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL0_RW, .accessfn = access_scxtnum,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[0]) },
+ { .name = "SCXTNUM_EL1", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL1_RW, .accessfn = access_scxtnum,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[1]) },
+ { .name = "SCXTNUM_EL2", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 4, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL2_RW, .accessfn = access_scxtnum,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[2]) },
+ { .name = "SCXTNUM_EL3", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 6, .crn = 13, .crm = 0, .opc2 = 7,
+ .access = PL3_RW,
+ .fieldoffset = offsetof(CPUARMState, scxtnum_el[3]) },
+};
+#endif /* TARGET_AARCH64 */

static CPAccessResult access_predinv(CPUARMState *env, const ARMCPRegInfo *ri,
bool isread)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
define_arm_cp_regs(cpu, mte_tco_ro_reginfo);
define_arm_cp_regs(cpu, mte_el0_cacheop_reginfo);
}
+
+ if (cpu_isar_feature(aa64_scxtnum, cpu)) {
+ define_arm_cp_regs(cpu, scxtnum_reginfo);
+ }
#endif

if (cpu_isar_feature(any_predinv, cpu)) {
--
2.25.1
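The isar_feature_aa64_scxtnum() predicate above encodes a two-level test: SCXTNUM registers exist when CSV2 >= 2 (FEAT_CSV2_2), or when CSV2 == 1 and the CSV2_FRAC fractional field is >= 2 (FEAT_CSV2_1p2). A standalone sketch of the same decision, with plain integers standing in for the ID-register fields:

    #include <stdbool.h>
    #include <stdio.h>

    /* Mirror of the CSV2/CSV2_FRAC test in isar_feature_aa64_scxtnum(). */
    static bool have_scxtnum(int csv2, int csv2_frac)
    {
        if (csv2 >= 2) {
            return true;            /* FEAT_CSV2_2 */
        }
        if (csv2 == 1) {
            return csv2_frac >= 2;  /* FEAT_CSV2_1p2 */
        }
        return false;
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               have_scxtnum(0, 0),   /* 0: no CSV2 at all */
               have_scxtnum(1, 0),   /* 0: plain FEAT_CSV2 */
               have_scxtnum(1, 2),   /* 1: FEAT_CSV2_1p2 */
               have_scxtnum(2, 0));  /* 1: FEAT_CSV2_2 */
        return 0;
    }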
From: Richard Henderson <richard.henderson@linaro.org>

Add storage space for the 5 encryption keys.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 30 +++++++++++++++++++++++++++++-
1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVectorReg {
uint64_t d[2 * ARM_MAX_VQ] QEMU_ALIGNED(16);
} ARMVectorReg;

-/* In AArch32 mode, predicate registers do not exist at all. */
#ifdef TARGET_AARCH64
+/* In AArch32 mode, predicate registers do not exist at all. */
typedef struct ARMPredicateReg {
uint64_t p[2 * ARM_MAX_VQ / 8] QEMU_ALIGNED(16);
} ARMPredicateReg;
+
+/* In AArch32 mode, PAC keys do not exist at all. */
+typedef struct ARMPACKey {
+ uint64_t lo, hi;
+} ARMPACKey;
#endif

@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
uint32_t cregs[16];
} iwmmxt;

+#ifdef TARGET_AARCH64
+ ARMPACKey apia_key;
+ ARMPACKey apib_key;
+ ARMPACKey apda_key;
+ ARMPACKey apdb_key;
+ ARMPACKey apga_key;
+#endif
+
#if defined(CONFIG_USER_ONLY)
/* For usermode syscall translation. */
int eabi;
@@ -XXX,XX +XXX,XX @@ static inline bool isar_feature_aa64_fcma(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, FCMA) != 0;
}

+static inline bool isar_feature_aa64_pauth(const ARMISARegisters *id)
+{
+ /*
+ * Note that while QEMU will only implement the architected algorithm
+ * QARMA, and thus APA+GPA, the host cpu for kvm may use implementation
+ * defined algorithms, and thus API+GPI, and this predicate controls
+ * migration of the 128-bit keys.
+ */
+ return (id->id_aa64isar1 &
+ (FIELD_DP64(0, ID_AA64ISAR1, APA, -1) |
+ FIELD_DP64(0, ID_AA64ISAR1, API, -1) |
+ FIELD_DP64(0, ID_AA64ISAR1, GPA, -1) |
+ FIELD_DP64(0, ID_AA64ISAR1, GPI, -1))) != 0;
+}
+
static inline bool isar_feature_aa64_fp16(const ARMISARegisters *id)
{
/* We always set the AdvSIMD and FP fields identically wrt FP16. */
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns cache speculation, which TCG does
not implement. Thus we can trivially enable this feature.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 1 +
target/arm/cpu_tcg.c | 1 +
3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_CSV2_1p1 (Cache speculation variant 2, version 1.1)
- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
+- FEAT_CSV3 (Cache speculation variant 3)
- FEAT_DIT (Data Independent Timing instructions)
- FEAT_DPB (DC CVAP instruction)
- FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64PFR0, SEL2, 1); /* FEAT_SEL2 */
t = FIELD_DP64(t, ID_AA64PFR0, DIT, 1); /* FEAT_DIT */
t = FIELD_DP64(t, ID_AA64PFR0, CSV2, 2); /* FEAT_CSV2_2 */
+ t = FIELD_DP64(t, ID_AA64PFR0, CSV3, 1); /* FEAT_CSV3 */
cpu->isar.id_aa64pfr0 = t;

t = cpu->isar.id_aa64pfr1;
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -XXX,XX +XXX,XX @@ void aa32_max_features(ARMCPU *cpu)
cpu->isar.id_pfr0 = t;

t = cpu->isar.id_pfr2;
+ t = FIELD_DP32(t, ID_PFR2, CSV3, 1); /* FEAT_CSV3 */
t = FIELD_DP32(t, ID_PFR2, SSBS, 1); /* FEAT_SSBS */
cpu->isar.id_pfr2 = t;
--
2.25.1
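The isar_feature_aa64_pauth() predicate in the first patch above builds a mask covering four ID fields by depositing -1 into each field of a zero word, then tests the register against the OR of those masks. A minimal sketch of why depositing -1 yields an all-ones field mask (the field positions here are illustrative, not the real ID_AA64ISAR1 layout):

    #include <stdint.h>
    #include <stdio.h>

    /* Deposit 'val' into a field of 'reg'; depositing -1 sets every field bit. */
    static uint64_t field_dp64(uint64_t reg, unsigned shift, unsigned len, int64_t val)
    {
        uint64_t mask = ((UINT64_C(1) << len) - 1) << shift;
        return (reg & ~mask) | (((uint64_t)val << shift) & mask);
    }

    int main(void)
    {
        /* Illustrative 4-bit fields at bits 4 and 8, in the spirit of APA/API. */
        uint64_t apa_mask = field_dp64(0, 4, 4, -1);   /* 0x00f0 */
        uint64_t api_mask = field_dp64(0, 8, 4, -1);   /* 0x0f00 */
        uint64_t id = field_dp64(0, 8, 4, 1);          /* API == 1, APA == 0 */

        /* Nonzero iff any field is nonzero -- i.e. some form of PAuth exists. */
        printf("pauth present: %d\n", (id & (apa_mask | api_mask)) != 0);
        return 0;
    }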
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This path uses cpu_loop_exit_restore to unwind current processor state.

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 7 +++++++
target/arm/op_helper.c | 19 +++++++++++++++++--
2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ FIELD(V7M_EXCRET, RES1, 7, 25) /* including the must-be-1 prefix */
void QEMU_NORETURN raise_exception(CPUARMState *env, uint32_t excp,
uint32_t syndrome, uint32_t target_el);

+/*
+ * Similarly, but also use unwinding to restore cpu state.
+ */
+void QEMU_NORETURN raise_exception_ra(CPUARMState *env, uint32_t excp,
+ uint32_t syndrome, uint32_t target_el,
+ uintptr_t ra);
+
/*
* For AArch64, map a given EL to an index in the banked_spsr array.
* Note that this mapping and the AArch32 mapping defined in bank_number()
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@
#define SIGNBIT (uint32_t)0x80000000
#define SIGNBIT64 ((uint64_t)1 << 63)

-void raise_exception(CPUARMState *env, uint32_t excp,
- uint32_t syndrome, uint32_t target_el)
+static CPUState *do_raise_exception(CPUARMState *env, uint32_t excp,
+ uint32_t syndrome, uint32_t target_el)
{
CPUState *cs = CPU(arm_env_get_cpu(env));

@@ -XXX,XX +XXX,XX @@ void raise_exception(CPUARMState *env, uint32_t excp,
cs->exception_index = excp;
env->exception.syndrome = syndrome;
env->exception.target_el = target_el;
+
+ return cs;
+}
+
+void raise_exception(CPUARMState *env, uint32_t excp,
+ uint32_t syndrome, uint32_t target_el)
+{
+ CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
cpu_loop_exit(cs);
}

+void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
+ uint32_t target_el, uintptr_t ra)
+{
+ CPUState *cs = do_raise_exception(env, excp, syndrome, target_el);
+ cpu_loop_exit_restore(cs, ra);
+}
+
static int exception_target_el(CPUARMState *env)
{
int target_el = MAX(1, arm_current_el(env));
--
2.20.1
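For context on why the unwinding variant above matters: a helper that raises an exception partway through a translation block must first restore the guest state for the exact faulting instruction, which is what the extra return-address argument enables. A toy, self-contained model of the difference between cpu_loop_exit() and cpu_loop_exit_restore(), using setjmp/longjmp purely as an analogy (all names here are invented for illustration):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf cpu_loop;
    static long guest_pc;

    /* Analogue of cpu_loop_exit(): leave the execution loop as-is. */
    static void loop_exit(void)
    {
        longjmp(cpu_loop, 1);
    }

    /* Analogue of cpu_loop_exit_restore(): first sync state to the fault point. */
    static void loop_exit_restore(long faulting_pc)
    {
        guest_pc = faulting_pc;   /* stands in for unwinding via the retaddr */
        loop_exit();
    }

    int main(void)
    {
        if (setjmp(cpu_loop) == 0) {
            guest_pc = 0x1000;           /* start of the translation block */
            loop_exit_restore(0x1008);   /* "fault" in the middle of the block */
        }
        printf("resume at pc=0x%lx\n", guest_pc);  /* 0x1008, not 0x1000 */
        return 0;
    }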
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

The cryptographic internals are stubbed out for now,
but the enable and trap bits are checked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/Makefile.objs | 1 +
target/arm/helper-a64.h | 12 +++
target/arm/internals.h | 6 ++
target/arm/pauth_helper.c | 186 ++++++++++++++++++++++++++++++++++++++
4 files changed, 205 insertions(+)
create mode 100644 target/arm/pauth_helper.c

diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -XXX,XX +XXX,XX @@ obj-y += translate.o op_helper.o helper.o cpu.o
obj-y += neon_helper.o iwmmxt_helper.o vec_helper.o
obj-y += gdbstub.o
obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
+obj-$(TARGET_AARCH64) += pauth_helper.o
obj-y += crypto_helper.o
obj-$(CONFIG_SOFTMMU) += arm-powerctl.o

diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.h
+++ b/target/arm/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(advsimd_rinth, f16, f16, ptr)
DEF_HELPER_2(advsimd_f16tosinth, i32, f16, ptr)
DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
DEF_HELPER_2(sqrt_f16, f16, f16, ptr)
+
+DEF_HELPER_FLAGS_3(pacia, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(pacib, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(pacda, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(pacdb, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(pacga, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(autia, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(autib, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(autda, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_3(autdb, TCG_CALL_NO_WG, i64, env, i64, i64)
+DEF_HELPER_FLAGS_2(xpaci, TCG_CALL_NO_RWG_SE, i64, env, i64)
+DEF_HELPER_FLAGS_2(xpacd, TCG_CALL_NO_RWG_SE, i64, env, i64)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
EC_CP14DTTRAP = 0x06,
EC_ADVSIMDFPACCESSTRAP = 0x07,
EC_FPIDTRAP = 0x08,
+ EC_PACTRAP = 0x09,
EC_CP14RRTTRAP = 0x0c,
EC_ILLEGALSTATE = 0x0e,
EC_AA32_SVC = 0x11,
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_sve_access_trap(void)
return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
}

+static inline uint32_t syn_pactrap(void)
+{
+ return EC_PACTRAP << ARM_EL_EC_SHIFT;
+}
+
static inline uint32_t syn_insn_abort(int same_el, int ea, int s1ptw, int fsc)
{
return (EC_INSNABORT << ARM_EL_EC_SHIFT) | (same_el << ARM_EL_EC_SHIFT)
diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/pauth_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * ARM v8.3-PAuth Operations
+ *
+ * Copyright (c) 2019 Linaro, Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+#include "exec/exec-all.h"
+#include "exec/cpu_ldst.h"
+#include "exec/helper-proto.h"
+#include "tcg/tcg-gvec-desc.h"
+
+
+static uint64_t pauth_computepac(uint64_t data, uint64_t modifier,
+ ARMPACKey key)
+{
+ g_assert_not_reached(); /* FIXME */
+}
+
+static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
+ ARMPACKey *key, bool data)
+{
+ g_assert_not_reached(); /* FIXME */
+}
+
+static uint64_t pauth_auth(CPUARMState *env, uint64_t ptr, uint64_t modifier,
+ ARMPACKey *key, bool data, int keynumber)
+{
+ g_assert_not_reached(); /* FIXME */
+}
+
+static uint64_t pauth_strip(CPUARMState *env, uint64_t ptr, bool data)
+{
+ g_assert_not_reached(); /* FIXME */
+}
+
+static void QEMU_NORETURN pauth_trap(CPUARMState *env, int target_el,
+ uintptr_t ra)
+{
+ raise_exception_ra(env, EXCP_UDEF, syn_pactrap(), target_el, ra);
+}
+
+static void pauth_check_trap(CPUARMState *env, int el, uintptr_t ra)
+{
+ if (el < 2 && arm_feature(env, ARM_FEATURE_EL2)) {
+ uint64_t hcr = arm_hcr_el2_eff(env);
+ bool trap = !(hcr & HCR_API);
+ /* FIXME: ARMv8.1-VHE: trap only applies to EL1&0 regime. */
+ /* FIXME: ARMv8.3-NV: HCR_NV trap takes precedence for ERETA[AB]. */
+ if (trap) {
+ pauth_trap(env, 2, ra);
+ }
+ }
+ if (el < 3 && arm_feature(env, ARM_FEATURE_EL3)) {
+ if (!(env->cp15.scr_el3 & SCR_API)) {
+ pauth_trap(env, 3, ra);
+ }
+ }
+}
+
+static bool pauth_key_enabled(CPUARMState *env, int el, uint32_t bit)
+{
+ uint32_t sctlr;
+ if (el == 0) {
+ /* FIXME: ARMv8.1-VHE S2 translation regime. */
+ sctlr = env->cp15.sctlr_el[1];
+ } else {
+ sctlr = env->cp15.sctlr_el[el];
+ }
+ return (sctlr & bit) != 0;
+}
+
+uint64_t HELPER(pacia)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnIA)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_addpac(env, x, y, &env->apia_key, false);
+}
+
+uint64_t HELPER(pacib)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnIB)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_addpac(env, x, y, &env->apib_key, false);
+}
+
+uint64_t HELPER(pacda)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnDA)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_addpac(env, x, y, &env->apda_key, true);
+}
+
+uint64_t HELPER(pacdb)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnDB)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_addpac(env, x, y, &env->apdb_key, true);
+}
+
+uint64_t HELPER(pacga)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ uint64_t pac;
+
+ pauth_check_trap(env, arm_current_el(env), GETPC());
+ pac = pauth_computepac(x, y, env->apga_key);
+
+ return pac & 0xffffffff00000000ull;
+}
+
+uint64_t HELPER(autia)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnIA)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_auth(env, x, y, &env->apia_key, false, 0);
+}
+
+uint64_t HELPER(autib)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnIB)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_auth(env, x, y, &env->apib_key, false, 1);
+}
+
+uint64_t HELPER(autda)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnDA)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_auth(env, x, y, &env->apda_key, true, 0);
+}
+
+uint64_t HELPER(autdb)(CPUARMState *env, uint64_t x, uint64_t y)
+{
+ int el = arm_current_el(env);
+ if (!pauth_key_enabled(env, el, SCTLR_EnDB)) {
+ return x;
+ }
+ pauth_check_trap(env, el, GETPC());
+ return pauth_auth(env, x, y, &env->apdb_key, true, 1);
+}
+
+uint64_t HELPER(xpaci)(CPUARMState *env, uint64_t a)
+{
+ return pauth_strip(env, a, false);
+}
+
+uint64_t HELPER(xpacd)(CPUARMState *env, uint64_t a)
+{
+ return pauth_strip(env, a, true);
+}
--
2.20.1
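As background for the stubbed pauth_addpac()/pauth_strip() above: a pointer's authentication code lives in the otherwise-unused top bits of a 64-bit virtual address, so inserting and stripping a PAC is pure bit manipulation once the VA size is known. A toy sketch (56-bit VA assumed for illustration; the real code derives the split from the translation regime, and the PAC here is fake):

    #include <stdint.h>
    #include <stdio.h>

    #define VA_BITS 56  /* illustrative; real code gets this from TCR_ELx */

    /* Insert a (fake) PAC into the top bits of a data pointer. */
    static uint64_t addpac(uint64_t ptr, uint64_t pac)
    {
        uint64_t top_mask = ~UINT64_C(0) << VA_BITS;
        return (ptr & ~top_mask) | (pac & top_mask);
    }

    /* Strip the PAC: sign-extend from the top VA bit (bit 55 here). */
    static uint64_t strip(uint64_t ptr)
    {
        return (uint64_t)((int64_t)(ptr << (64 - VA_BITS)) >> (64 - VA_BITS));
    }

    int main(void)
    {
        uint64_t p = 0x0000aabbccdd0000ull;
        uint64_t signed_p = addpac(p, 0x1200000000000000ull);
        printf("signed=%016llx stripped=%016llx\n",
               (unsigned long long)signed_p, (unsigned long long)strip(signed_p));
        return 0;
    }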
From: Richard Henderson <richard.henderson@linaro.org>

Now properly signals unallocated for REV64 with SF=0.
Allows for the opcode2 field to be decoded shortly.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 31 ++++++++++++++++++++++---------
1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_rev16(DisasContext *s, unsigned int sf,
*/
static void disas_data_proc_1src(DisasContext *s, uint32_t insn)
{
- unsigned int sf, opcode, rn, rd;
+ unsigned int sf, opcode, opcode2, rn, rd;

- if (extract32(insn, 29, 1) || extract32(insn, 16, 5)) {
+ if (extract32(insn, 29, 1)) {
unallocated_encoding(s);
return;
}

sf = extract32(insn, 31, 1);
opcode = extract32(insn, 10, 6);
+ opcode2 = extract32(insn, 16, 5);
rn = extract32(insn, 5, 5);
rd = extract32(insn, 0, 5);

- switch (opcode) {
- case 0: /* RBIT */
+#define MAP(SF, O2, O1) ((SF) | (O1 << 1) | (O2 << 7))
+
+ switch (MAP(sf, opcode2, opcode)) {
+ case MAP(0, 0x00, 0x00): /* RBIT */
+ case MAP(1, 0x00, 0x00):
handle_rbit(s, sf, rn, rd);
break;
- case 1: /* REV16 */
+ case MAP(0, 0x00, 0x01): /* REV16 */
+ case MAP(1, 0x00, 0x01):
handle_rev16(s, sf, rn, rd);
break;
- case 2: /* REV32 */
+ case MAP(0, 0x00, 0x02): /* REV/REV32 */
+ case MAP(1, 0x00, 0x02):
handle_rev32(s, sf, rn, rd);
break;
- case 3: /* REV64 */
+ case MAP(1, 0x00, 0x03): /* REV64 */
handle_rev64(s, sf, rn, rd);
break;
- case 4: /* CLZ */
+ case MAP(0, 0x00, 0x04): /* CLZ */
+ case MAP(1, 0x00, 0x04):
handle_clz(s, sf, rn, rd);
break;
- case 5: /* CLS */
+ case MAP(0, 0x00, 0x05): /* CLS */
+ case MAP(1, 0x00, 0x05):
handle_cls(s, sf, rn, rd);
break;
+ default:
+ unallocated_encoding(s);
+ break;
}
+
+#undef MAP
}

static void handle_div(DisasContext *s, bool is_signed, unsigned int sf,
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

This extension concerns not merging memory access, which TCG does
not implement. Thus we can trivially enable this feature.
Add a comment to handle_hint for the DGH instruction, but no code.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/emulation.rst | 1 +
target/arm/cpu64.c | 1 +
target/arm/translate-a64.c | 1 +
3 files changed, 3 insertions(+)

diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/emulation.rst
+++ b/docs/system/arm/emulation.rst
@@ -XXX,XX +XXX,XX @@ the following architecture extensions:
- FEAT_CSV2_1p2 (Cache speculation variant 2, version 1.2)
- FEAT_CSV2_2 (Cache speculation variant 2, version 2)
- FEAT_CSV3 (Cache speculation variant 3)
+- FEAT_DGH (Data gathering hint)
- FEAT_DIT (Data Independent Timing instructions)
- FEAT_DPB (DC CVAP instruction)
- FEAT_Debugv8p2 (Debug changes for v8.2)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1); /* FEAT_SB */
t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1); /* FEAT_SPECRES */
t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1); /* FEAT_BF16 */
+ t = FIELD_DP64(t, ID_AA64ISAR1, DGH, 1); /* FEAT_DGH */
t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1); /* FEAT_I8MM */
cpu->isar.id_aa64isar1 = t;

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_hint(DisasContext *s, uint32_t insn,
break;
case 0b00100: /* SEV */
case 0b00101: /* SEVL */
+ case 0b00110: /* DGH */
/* we treat all as NOP at least for now */
break;
case 0b00111: /* XPACLRI */
--
2.25.1
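The MAP(sf, opcode2, opcode) trick in the first patch above packs three decode fields into one switch key: bit 0 = sf, bits 6:1 = opcode, bits 11:7 = opcode2, so every (sf, opcode2, opcode) combination becomes a distinct case label and the default case catches all unallocated encodings. A standalone sketch of the packing:

    #include <stdio.h>

    /* Same packing as the translate-a64.c MAP() macro: */
    /* bit 0 = sf, bits 6:1 = opcode (6 bits), bits 11:7 = opcode2 (5 bits). */
    #define MAP(SF, O2, O1) ((SF) | ((O1) << 1) | ((O2) << 7))

    int main(void)
    {
        /* Distinct instructions get distinct keys, so one switch decodes all. */
        printf("REV64 (sf=1, op2=0, op=3): 0x%03x\n", MAP(1, 0x00, 0x03));
        printf("CLZ   (sf=0, op2=0, op=4): 0x%03x\n", MAP(0, 0x00, 0x04));
        printf("CLZ   (sf=1, op2=0, op=4): 0x%03x\n", MAP(1, 0x00, 0x04));
        return 0;
    }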
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
case 11: /* RORV */
handle_shift_reg(s, A64_SHIFT_TYPE_ROR, sf, rm, rn, rd);
break;
+ case 12: /* PACGA */
+ if (sf == 0 || !dc_isar_feature(aa64_pauth, s)) {
+ goto do_unallocated;
+ }
+ gen_helper_pacga(cpu_reg(s, rd), cpu_env,
+ cpu_reg(s, rn), cpu_reg_sp(s, rm));
+ break;
case 16:
case 17:
case 18:
@@ -XXX,XX +XXX,XX @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn)
break;
}
default:
+ do_unallocated:
unallocated_encoding(s);
break;
}
--
2.20.1
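One detail worth noting about PACGA, decoded above: unlike the other PAC* instructions it does not sign a pointer in place; it computes a code from Xn and Xm and returns only the top 32 bits in Xd, which is why the pacga helper earlier in the series masks with 0xffffffff00000000. A toy check of that masking (the PAC computation itself is a fake stand-in here):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy stand-in for pauth_computepac(): any 64-bit mix of (data, modifier). */
    static uint64_t fake_pac(uint64_t data, uint64_t modifier)
    {
        return (data * 0x9e3779b97f4a7c15ull) ^ modifier;
    }

    int main(void)
    {
        /* PACGA keeps only the top 32 bits of the computed code; */
        /* the bottom half of the destination register is zero.   */
        uint64_t pac = fake_pac(0x1234, 0x5678);
        printf("Xd = %016llx\n",
               (unsigned long long)(pac & 0xffffffff00000000ull));
        return 0;
    }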
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/helper-a64.h | 2 +-
target/arm/helper-a64.c | 10 +++++-----
target/arm/translate-a64.c | 7 ++++++-
3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.h
+++ b/target/arm/helper-a64.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_2(advsimd_f16tosinth, i32, f16, ptr)
DEF_HELPER_2(advsimd_f16touinth, i32, f16, ptr)
DEF_HELPER_2(sqrt_f16, f16, f16, ptr)

-DEF_HELPER_1(exception_return, void, env)
+DEF_HELPER_2(exception_return, void, env, i64)

DEF_HELPER_FLAGS_3(pacia, TCG_CALL_NO_WG, i64, env, i64, i64)
DEF_HELPER_FLAGS_3(pacib, TCG_CALL_NO_WG, i64, env, i64, i64)
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper-a64.c
+++ b/target/arm/helper-a64.c
@@ -XXX,XX +XXX,XX @@ static int el_from_spsr(uint32_t spsr)
}
}

-void HELPER(exception_return)(CPUARMState *env)
+void HELPER(exception_return)(CPUARMState *env, uint64_t new_pc)
{
int cur_el = arm_current_el(env);
unsigned int spsr_idx = aarch64_banked_spsr_index(cur_el);
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env)
aarch64_sync_64_to_32(env);

if (spsr & CPSR_T) {
- env->regs[15] = env->elr_el[cur_el] & ~0x1;
+ env->regs[15] = new_pc & ~0x1;
} else {
- env->regs[15] = env->elr_el[cur_el] & ~0x3;
+ env->regs[15] = new_pc & ~0x3;
}
qemu_log_mask(CPU_LOG_INT, "Exception return from AArch64 EL%d to "
"AArch32 EL%d PC 0x%" PRIx32 "\n",
@@ -XXX,XX +XXX,XX @@ void HELPER(exception_return)(CPUARMState *env)
env->pstate &= ~PSTATE_SS;
}
aarch64_restore_sp(env, new_el);
- env->pc = env->elr_el[cur_el];
+ env->pc = new_pc;
qemu_log_mask(CPU_LOG_INT, "Exception return from AArch64 EL%d to "
"AArch64 EL%d PC 0x%" PRIx64 "\n",
cur_el, new_el, env->pc);
@@ -XXX,XX +XXX,XX @@ illegal_return:
* no change to exception level, execution state or stack pointer
*/
env->pstate |= PSTATE_IL;
- env->pc = env->elr_el[cur_el];
+ env->pc = new_pc;
spsr &= PSTATE_NZCV | PSTATE_DAIF;
spsr |= pstate_read(env) & ~(PSTATE_NZCV | PSTATE_DAIF);
pstate_write(env, spsr);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
{
unsigned int opc, op2, op3, rn, op4;
+ TCGv_i64 dst;

opc = extract32(insn, 21, 4);
op2 = extract32(insn, 16, 5);
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
gen_io_start();
}
- gen_helper_exception_return(cpu_env);
+ dst = tcg_temp_new_i64();
+ tcg_gen_ld_i64(dst, cpu_env,
+ offsetof(CPUARMState, elr_el[s->current_el]));
+ gen_helper_exception_return(cpu_env, dst);
+ tcg_temp_free_i64(dst);
if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
gen_io_end();
}
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

This will enable PAuth decode in a subsequent patch.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 47 +++++++++++++++++++++++++++++---------
1 file changed, 36 insertions(+), 11 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
rn = extract32(insn, 5, 5);
op4 = extract32(insn, 0, 5);

- if (op4 != 0x0 || op3 != 0x0 || op2 != 0x1f) {
- unallocated_encoding(s);
- return;
+ if (op2 != 0x1f) {
+ goto do_unallocated;
}

switch (opc) {
case 0: /* BR */
case 1: /* BLR */
case 2: /* RET */
- gen_a64_set_pc(s, cpu_reg(s, rn));
+ switch (op3) {
+ case 0:
+ if (op4 != 0) {
+ goto do_unallocated;
+ }
+ dst = cpu_reg(s, rn);
+ break;
+
+ default:
+ goto do_unallocated;
+ }
+
+ gen_a64_set_pc(s, dst);
/* BLR also needs to load return address */
if (opc == 1) {
tcg_gen_movi_i64(cpu_reg(s, 30), s->pc);
}
break;
+
case 4: /* ERET */
if (s->current_el == 0) {
- unallocated_encoding(s);
- return;
+ goto do_unallocated;
+ }
+ switch (op3) {
+ case 0:
+ if (op4 != 0) {
+ goto do_unallocated;
+ }
+ dst = tcg_temp_new_i64();
+ tcg_gen_ld_i64(dst, cpu_env,
+ offsetof(CPUARMState, elr_el[s->current_el]));
+ break;
+
+ default:
+ goto do_unallocated;
}
if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
gen_io_start();
}
- dst = tcg_temp_new_i64();
- tcg_gen_ld_i64(dst, cpu_env,
- offsetof(CPUARMState, elr_el[s->current_el]));
+
gen_helper_exception_return(cpu_env, dst);
tcg_temp_free_i64(dst);
if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
/* Must exit loop to check un-masked IRQs */
s->base.is_jmp = DISAS_EXIT;
return;
+
case 5: /* DRPS */
- if (rn != 0x1f) {
- unallocated_encoding(s);
+ if (op3 != 0 || op4 != 0 || rn != 0x1f) {
+ goto do_unallocated;
} else {
unsupported_encoding(s, insn);
}
return;
+
default:
+ do_unallocated:
unallocated_encoding(s);
return;
}
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 82 +++++++++++++++++++++++++++++++++++++-
1 file changed, 81 insertions(+), 1 deletion(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
{
unsigned int opc, op2, op3, rn, op4;
TCGv_i64 dst;
+ TCGv_i64 modifier;

opc = extract32(insn, 21, 4);
op2 = extract32(insn, 16, 5);
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
case 2: /* RET */
switch (op3) {
case 0:
+ /* BR, BLR, RET */
if (op4 != 0) {
goto do_unallocated;
}
dst = cpu_reg(s, rn);
break;

+ case 2:
+ case 3:
+ if (!dc_isar_feature(aa64_pauth, s)) {
+ goto do_unallocated;
+ }
+ if (opc == 2) {
+ /* RETAA, RETAB */
+ if (rn != 0x1f || op4 != 0x1f) {
+ goto do_unallocated;
+ }
+ rn = 30;
+ modifier = cpu_X[31];
+ } else {
+ /* BRAAZ, BRABZ, BLRAAZ, BLRABZ */
+ if (op4 != 0x1f) {
+ goto do_unallocated;
+ }
+ modifier = new_tmp_a64_zero(s);
+ }
+ if (s->pauth_active) {
+ dst = new_tmp_a64(s);
+ if (op3 == 2) {
+ gen_helper_autia(dst, cpu_env, cpu_reg(s, rn), modifier);
+ } else {
+ gen_helper_autib(dst, cpu_env, cpu_reg(s, rn), modifier);
+ }
+ } else {
+ dst = cpu_reg(s, rn);
+ }
+ break;
+
default:
goto do_unallocated;
}
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
}
break;

+ case 8: /* BRAA */
+ case 9: /* BLRAA */
+ if (!dc_isar_feature(aa64_pauth, s)) {
+ goto do_unallocated;
+ }
+ if (op3 != 2 && op3 != 3) {
+ goto do_unallocated;
+ }
+ if (s->pauth_active) {
+ dst = new_tmp_a64(s);
+ modifier = cpu_reg_sp(s, op4);
+ if (op3 == 2) {
+ gen_helper_autia(dst, cpu_env, cpu_reg(s, rn), modifier);
+ } else {
+ gen_helper_autib(dst, cpu_env, cpu_reg(s, rn), modifier);
+ }
+ } else {
+ dst = cpu_reg(s, rn);
+ }
+ gen_a64_set_pc(s, dst);
+ /* BLRAA also needs to load return address */
+ if (opc == 9) {
+ tcg_gen_movi_i64(cpu_reg(s, 30), s->pc);
+ }
+ break;
+
case 4: /* ERET */
if (s->current_el == 0) {
goto do_unallocated;
}
switch (op3) {
- case 0:
+ case 0: /* ERET */
if (op4 != 0) {
goto do_unallocated;
}
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
offsetof(CPUARMState, elr_el[s->current_el]));
break;

+ case 2: /* ERETAA */
+ case 3: /* ERETAB */
+ if (!dc_isar_feature(aa64_pauth, s)) {
+ goto do_unallocated;
+ }
+ if (rn != 0x1f || op4 != 0x1f) {
+ goto do_unallocated;
+ }
+ dst = tcg_temp_new_i64();
+ tcg_gen_ld_i64(dst, cpu_env,
+ offsetof(CPUARMState, elr_el[s->current_el]));
+ if (s->pauth_active) {
+ modifier = cpu_X[31];
+ if (op3 == 2) {
+ gen_helper_autia(dst, cpu_env, dst, modifier);
+ } else {
+ gen_helper_autib(dst, cpu_env, dst, modifier);
+ }
+ }
+ break;
+
default:
goto do_unallocated;
}
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Not that there are any stores involved, but why argue with ARM's
naming convention.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-15-richard.henderson@linaro.org
[fixed trivial comment nit]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/translate-a64.c | 61 ++++++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
s->be_data | size | MO_ALIGN);
}

+/*
+ * PAC memory operations
+ *
+ * 31 30 27 26 24 22 21 12 11 10 5 0
+ * +------+-------+---+-----+-----+---+--------+---+---+----+-----+
+ * | size | 1 1 1 | V | 0 0 | M S | 1 | imm9 | W | 1 | Rn | Rt |
+ * +------+-------+---+-----+-----+---+--------+---+---+----+-----+
+ *
+ * Rt: the result register
+ * Rn: base address or SP
+ * V: vector flag (always 0 as of v8.3)
+ * M: clear for key DA, set for key DB
+ * W: pre-indexing flag
+ * S: sign for imm9.
+ */
+static void disas_ldst_pac(DisasContext *s, uint32_t insn,
+ int size, int rt, bool is_vector)
+{
+ int rn = extract32(insn, 5, 5);
+ bool is_wback = extract32(insn, 11, 1);
+ bool use_key_a = !extract32(insn, 23, 1);
+ int offset;
+ TCGv_i64 tcg_addr, tcg_rt;
+
+ if (size != 3 || is_vector || !dc_isar_feature(aa64_pauth, s)) {
+ unallocated_encoding(s);
+ return;
+ }
+
+ if (rn == 31) {
+ gen_check_sp_alignment(s);
+ }
+ tcg_addr = read_cpu_reg_sp(s, rn, 1);
+
+ if (s->pauth_active) {
+ if (use_key_a) {
+ gen_helper_autda(tcg_addr, cpu_env, tcg_addr, cpu_X[31]);
+ } else {
+ gen_helper_autdb(tcg_addr, cpu_env, tcg_addr, cpu_X[31]);
+ }
+ }
+
+ /* Form the 10-bit signed, scaled offset. */
+ offset = (extract32(insn, 22, 1) << 9) | extract32(insn, 12, 9);
+ offset = sextract32(offset << size, 0, 10 + size);
+ tcg_gen_addi_i64(tcg_addr, tcg_addr, offset);
+
+ tcg_rt = cpu_reg(s, rt);
+
+ do_gpr_ld(s, tcg_rt, tcg_addr, size, /* is_signed */ false,
+ /* extend */ false, /* iss_valid */ !is_wback,
+ /* iss_srt */ rt, /* iss_sf */ true, /* iss_ar */ false);
+
+ if (is_wback) {
+ tcg_gen_mov_i64(cpu_reg_sp(s, rn), tcg_addr);
+ }
+}
+
/* Load/store register (all forms) */
static void disas_ldst_reg(DisasContext *s, uint32_t insn)
{
@@ -XXX,XX +XXX,XX @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
case 2:
disas_ldst_reg_roffset(s, insn, opc, size, rt, is_vector);
return;
+ default:
+ disas_ldst_pac(s, insn, size, rt, is_vector);
+ return;
}
break;
case 1:
--
2.20.1
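The offset formation in disas_ldst_pac() above deserves a worked example: S:imm9 forms a 10-bit signed value which is scaled by the access size (8 bytes, size == 3), i.e. shifted left by 3 and sign-extended from bit 12. For instance S=1, imm9=0 gives -4096, while S=0, imm9=1 gives +8. A standalone check (sextract32 reimplemented to match QEMU's semantics):

    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extract 'len' bits starting at bit 'pos', like QEMU's sextract32. */
    static int32_t sextract32(uint32_t value, int pos, int len)
    {
        return ((int32_t)(value << (32 - len - pos))) >> (32 - len);
    }

    int main(void)
    {
        int size = 3;  /* LDRAA/LDRAB are always 8-byte loads */
        struct { int s, imm9; } cases[] = { {1, 0}, {0, 1}, {1, 511} };
        for (int i = 0; i < 3; i++) {
            /* S:imm9 as a 10-bit field, then scaled and sign-extended. */
            int offset = (cases[i].s << 9) | cases[i].imm9;
            offset = sextract32(offset << size, 0, 10 + size);
            printf("S=%d imm9=%d -> offset %d\n",
                   cases[i].s, cases[i].imm9, offset);
        }
        return 0;   /* prints -4096, 8, -8 */
    }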
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

We will shortly want to talk about TBI as it relates to data.
Passing around a pair of variables is less convenient than a
single variable.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 3 +--
target/arm/translate.h | 3 +--
target/arm/helper.c | 5 ++---
target/arm/translate-a64.c | 13 +++++++------
4 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A32, HANDLER, 21, 1)
FIELD(TBFLAG_A32, STACKCHECK, 22, 1)

/* Bit usage when in AArch64 state */
-FIELD(TBFLAG_A64, TBI0, 0, 1)
-FIELD(TBFLAG_A64, TBI1, 1, 1)
+FIELD(TBFLAG_A64, TBII, 0, 2)
FIELD(TBFLAG_A64, SVEEXC_EL, 2, 2)
FIELD(TBFLAG_A64, ZCR_LEN, 4, 4)
FIELD(TBFLAG_A64, PAUTH_ACTIVE, 8, 1)
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
int user;
#endif
ARMMMUIdx mmu_idx; /* MMU index to use for normal loads/stores */
- bool tbi0; /* TBI0 for EL0/1 or TBI for EL2/3 */
- bool tbi1; /* TBI1 for EL0/1, not used for EL2/3 */
+ uint8_t tbii; /* TBI1|TBI0 for EL0/1 or TBI for EL2/3 */
bool ns; /* Use non-secure CPREG bank on access */
int fp_excp_el; /* FP exception EL or 0 if enabled */
int sve_excp_el; /* SVE exception EL or 0 if enabled */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
*pc = env->pc;
flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
/* Get control bits for tagged addresses */
- flags = FIELD_DP32(flags, TBFLAG_A64, TBI0,
+ flags = FIELD_DP32(flags, TBFLAG_A64, TBII,
+ (arm_regime_tbi1(env, mmu_idx) << 1) |
arm_regime_tbi0(env, mmu_idx));
- flags = FIELD_DP32(flags, TBFLAG_A64, TBI1,
- arm_regime_tbi1(env, mmu_idx));

if (cpu_isar_feature(aa64_sve, cpu)) {
int sve_el = sve_exception_el(env, current_el);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ void gen_a64_set_pc_im(uint64_t val)
*/
static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
{
+ /* Note that TBII is TBI1:TBI0. */
+ int tbi = s->tbii;

if (s->current_el <= 1) {
/* Test if NEITHER or BOTH TBI values are set. If so, no need to
* examine bit 55 of address, can just generate code.
* If mixed, then test via generated code
*/
- if (s->tbi0 && s->tbi1) {
+ if (tbi == 3) {
TCGv_i64 tmp_reg = tcg_temp_new_i64();
/* Both bits set, sign extension from bit 55 into [63:56] will
* cover both cases
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
tcg_gen_shli_i64(tmp_reg, src, 8);
tcg_gen_sari_i64(cpu_pc, tmp_reg, 8);
tcg_temp_free_i64(tmp_reg);
- } else if (!s->tbi0 && !s->tbi1) {
+ } else if (tbi == 0) {
/* Neither bit set, just load it as-is */
tcg_gen_mov_i64(cpu_pc, src);
} else {
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)

tcg_gen_andi_i64(tcg_bit55, src, (1ull << 55));

- if (s->tbi0) {
+ if (tbi == 1) {
/* tbi0==1, tbi1==0, so 0-fill upper byte if bit 55 = 0 */
tcg_gen_andi_i64(tcg_tmpval, src,
0x00FFFFFFFFFFFFFFull);
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
tcg_temp_free_i64(tcg_tmpval);
}
} else { /* EL > 1 */
- if (s->tbi0) {
+ if (tbi != 0) {
/* Force tag byte to all zero */
tcg_gen_andi_i64(cpu_pc, src, 0x00FFFFFFFFFFFFFFull);
} else {
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->condexec_cond = 0;
core_mmu_idx = FIELD_EX32(tb_flags, TBFLAG_ANY, MMUIDX);
dc->mmu_idx = core_to_arm_mmu_idx(env, core_mmu_idx);
- dc->tbi0 = FIELD_EX32(tb_flags, TBFLAG_A64, TBI0);
- dc->tbi1 = FIELD_EX32(tb_flags, TBFLAG_A64, TBI1);
+ dc->tbii = FIELD_EX32(tb_flags, TBFLAG_A64, TBII);
dc->current_el = arm_mmu_idx_to_el(dc->mmu_idx);
#if !defined(CONFIG_USER_ONLY)
dc->user = (dc->current_el == 0);
--
2.20.1
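The TBII field introduced above packs TBI1:TBI0 into two bits, so the translator can test "neither" (0), "both" (3) or "mixed" (1 or 2) with integer compares instead of juggling two booleans. A small sketch of the dispatch:

    #include <stdio.h>

    /* tbi is TBI1:TBI0, as in the TBFLAG_A64 TBII field. */
    static const char *classify_tbi(int tbi)
    {
        switch (tbi) {
        case 0:  return "neither set: use address as-is";
        case 3:  return "both set: always sign-extend from bit 55";
        default: return "mixed: test bit 55 at runtime";
        }
    }

    int main(void)
    {
        for (int tbi = 0; tbi < 4; tbi++) {
            printf("tbi=%d -> %s\n", tbi, classify_tbi(tbi));
        }
        return 0;
    }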
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

We will want to check TBI for I and D simultaneously.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-22-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 15 ++++++++++++---
target/arm/helper.c | 10 ++++++++--
2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
} ARMVAParameters;

#ifdef CONFIG_USER_ONLY
-static inline ARMVAParameters aa64_va_parameters(CPUARMState *env,
- uint64_t va,
- ARMMMUIdx mmu_idx, bool data)
+static inline ARMVAParameters aa64_va_parameters_both(CPUARMState *env,
+ uint64_t va,
+ ARMMMUIdx mmu_idx)
{
return (ARMVAParameters) {
/* 48-bit address space */
@@ -XXX,XX +XXX,XX @@ static inline ARMVAParameters aa64_va_parameters(CPUARMState *env,
.tbi = false,
};
}
+
+static inline ARMVAParameters aa64_va_parameters(CPUARMState *env,
+ uint64_t va,
+ ARMMMUIdx mmu_idx, bool data)
+{
+ return aa64_va_parameters_both(env, va, mmu_idx);
+}
#else
+ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
+ ARMMMUIdx mmu_idx);
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
 ARMMMUIdx mmu_idx, bool data);
#endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t convert_stage2_attrs(CPUARMState *env, uint8_t s2attrs)
return (hiattr << 6) | (hihint << 4) | (loattr << 2) | lohint;
}

-ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
- ARMMMUIdx mmu_idx, bool data)
+ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
+ ARMMMUIdx mmu_idx)
{
uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
uint32_t el = regime_el(env, mmu_idx);
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
};
}

+ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
+ ARMMMUIdx mmu_idx, bool data)
+{
+ return aa64_va_parameters_both(env, va, mmu_idx);
+}
+
static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
 ARMMMUIdx mmu_idx)
{
--
2.20.1
Deleted patch
From: Richard Henderson <richard.henderson@linaro.org>

Use TBID in aa64_va_parameters depending on the data parameter.
This automatically updates all existing users of the function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/internals.h | 1 +
target/arm/helper.c | 14 +++++++++++---
2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMVAParameters {
unsigned tsz : 8;
unsigned select : 1;
bool tbi : 1;
+ bool tbid : 1;
bool epd : 1;
bool hpd : 1;
bool using16k : 1;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
{
uint64_t tcr = regime_tcr(env, mmu_idx)->raw_tcr;
uint32_t el = regime_el(env, mmu_idx);
- bool tbi, epd, hpd, using16k, using64k;
+ bool tbi, tbid, epd, hpd, using16k, using64k;
int select, tsz;

/*
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
using16k = extract32(tcr, 15, 1);
if (mmu_idx == ARMMMUIdx_S2NS) {
/* VTCR_EL2 */
- tbi = hpd = false;
+ tbi = tbid = hpd = false;
} else {
tbi = extract32(tcr, 20, 1);
hpd = extract32(tcr, 24, 1);
+ tbid = extract32(tcr, 29, 1);
}
epd = false;
} else if (!select) {
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
using16k = extract32(tcr, 15, 1);
tbi = extract64(tcr, 37, 1);
hpd = extract64(tcr, 41, 1);
+ tbid = extract64(tcr, 51, 1);
} else {
int tg = extract32(tcr, 30, 2);
using16k = tg == 1;
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
epd = extract32(tcr, 23, 1);
tbi = extract64(tcr, 38, 1);
hpd = extract64(tcr, 42, 1);
+ tbid = extract64(tcr, 52, 1);
}
tsz = MIN(tsz, 39); /* TODO: ARMv8.4-TTST */
tsz = MAX(tsz, 16); /* TODO: ARMv8.2-LVA */
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
.tsz = tsz,
.select = select,
.tbi = tbi,
+ .tbid = tbid,
.epd = epd,
.hpd = hpd,
.using16k = using16k,
@@ -XXX,XX +XXX,XX @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
 ARMMMUIdx mmu_idx, bool data)
{
- return aa64_va_parameters_both(env, va, mmu_idx);
+ ARMVAParameters ret = aa64_va_parameters_both(env, va, mmu_idx);
+
+ /* Present TBI as a composite with TBID. */
+ ret.tbi &= (data || !ret.tbid);
+ return ret;
}

static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
 ARMMMUIdx mmu_idx)
{
--
2.20.1
diff view generated by jsdifflib
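A note on the TBI/TBID composition above, for readers following along:
TBID restricts top-byte-ignore to data accesses, so for an instruction
fetch the effective TBI bit is reported only when TBID is clear. As a
standalone illustration (not QEMU code, just the rule restated):

    static bool effective_tbi(bool tbi, bool tbid, bool data)
    {
        /* Mirrors "ret.tbi &= (data || !ret.tbid)" from the patch. */
        return tbi && (data || !tbid);
    }
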
From: Richard Henderson <richard.henderson@linaro.org>

Stripping out the authentication data does not require any crypto,
it merely requires the virtual address parameters.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190108223129.5570-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/pauth_helper.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
     g_assert_not_reached(); /* FIXME */
 }
 
+static uint64_t pauth_original_ptr(uint64_t ptr, ARMVAParameters param)
+{
+    uint64_t extfield = -param.select;
+    int bot_pac_bit = 64 - param.tsz;
+    int top_pac_bit = 64 - 8 * param.tbi;
+
+    return deposit64(ptr, bot_pac_bit, top_pac_bit - bot_pac_bit, extfield);
+}
+
 static uint64_t pauth_auth(CPUARMState *env, uint64_t ptr, uint64_t modifier,
                            ARMPACKey *key, bool data, int keynumber)
 {
@@ -XXX,XX +XXX,XX @@ static uint64_t pauth_auth(CPUARMState *env, uint64_t ptr, uint64_t modifier,
 
 static uint64_t pauth_strip(CPUARMState *env, uint64_t ptr, bool data)
 {
-    g_assert_not_reached(); /* FIXME */
+    ARMMMUIdx mmu_idx = arm_stage1_mmu_idx(env);
+    ARMVAParameters param = aa64_va_parameters(env, ptr, mmu_idx, data);
+
+    return pauth_original_ptr(ptr, param);
 }
 
 static void QEMU_NORETURN pauth_trap(CPUARMState *env, int target_el,
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Enable the a76 for virt and sbsa board use.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst |  1 +
 hw/arm/sbsa-ref.c        |  1 +
 hw/arm/virt.c            |  1 +
 target/arm/cpu64.c       | 66 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
 - ``cortex-a53`` (64-bit)
 - ``cortex-a57`` (64-bit)
 - ``cortex-a72`` (64-bit)
+- ``cortex-a76`` (64-bit)
 - ``a64fx`` (64-bit)
 - ``host`` (with KVM only)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const int sbsa_ref_irqmap[] = {
 static const char * const valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
+    ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("max"),
 };
 
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a53"),
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
+    ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("a64fx"),
     ARM_CPU_TYPE_NAME("host"),
     ARM_CPU_TYPE_NAME("max"),
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
     define_cortex_a72_a57_a53_cp_reginfo(cpu);
 }
 
+static void aarch64_a76_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,cortex-a76";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+    /* Ordered by B2.4 AArch64 registers by functional group */
+    cpu->clidr = 0x82000023;
+    cpu->ctr = 0x8444C004;
+    cpu->dcz_blocksize = 4;
+    cpu->isar.id_aa64dfr0 = 0x0000000010305408ull;
+    cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+    cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+    cpu->isar.id_aa64mmfr0 = 0x0000000000101122ull;
+    cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+    cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+    cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
+    cpu->isar.id_aa64pfr1 = 0x0000000000000010ull;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_dfr0 = 0x04010088;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00010142;
+    cpu->isar.id_isar5 = 0x01011121;
+    cpu->isar.id_isar6 = 0x00000010;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02122211;
+    cpu->isar.id_mmfr4 = 0x00021110;
+    cpu->isar.id_pfr0 = 0x10010131;
+    cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+    cpu->isar.id_pfr2 = 0x00000011;
+    cpu->midr = 0x414fd0b1; /* r4p1 */
+    cpu->revidr = 0;
+
+    /* From B2.18 CCSIDR_EL1 */
+    cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+    cpu->ccsidr[2] = 0x707fe03a; /* 512KB L2 cache */
+
+    /* From B2.93 SCTLR_EL3 */
+    cpu->reset_sctlr = 0x30c50838;
+
+    /* From B4.23 ICH_VTR_EL2 */
+    cpu->gic_num_lrs = 4;
+    cpu->gic_vpribits = 5;
+    cpu->gic_vprebits = 5;
+
+    /* From B5.1 AdvSIMD AArch64 register summary */
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x13211111;
+    cpu->isar.mvfr2 = 0x00000043;
+}
+
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
     { .name = "cortex-a57", .initfn = aarch64_a57_initfn },
     { .name = "cortex-a53", .initfn = aarch64_a53_initfn },
     { .name = "cortex-a72", .initfn = aarch64_a72_initfn },
+    { .name = "cortex-a76", .initfn = aarch64_a76_initfn },
     { .name = "a64fx", .initfn = aarch64_a64fx_initfn },
     { .name = "max", .initfn = aarch64_max_initfn },
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
--
2.25.1

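With the hunks above applied, the new model is selectable on either
board. An illustrative invocation (the remaining machine, firmware and
disk arguments are omitted here):

    qemu-system-aarch64 -M virt -cpu cortex-a76 ...
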
From: Richard Henderson <richard.henderson@linaro.org>

This is the main crypto routine, an implementation of QARMA.
This matches, as much as possible, ARM pseudocode.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-28-richard.henderson@linaro.org
[PMM: fixed minor checkpatch nits]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/pauth_helper.c | 242 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 241 insertions(+), 1 deletion(-)

diff --git a/target/arm/pauth_helper.c b/target/arm/pauth_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/pauth_helper.c
+++ b/target/arm/pauth_helper.c
@@ -XXX,XX +XXX,XX @@
 #include "tcg/tcg-gvec-desc.h"
 
 
+static uint64_t pac_cell_shuffle(uint64_t i)
+{
+    uint64_t o = 0;
+
+    o |= extract64(i, 52, 4);
+    o |= extract64(i, 24, 4) << 4;
+    o |= extract64(i, 44, 4) << 8;
+    o |= extract64(i, 0, 4) << 12;
+
+    o |= extract64(i, 28, 4) << 16;
+    o |= extract64(i, 48, 4) << 20;
+    o |= extract64(i, 4, 4) << 24;
+    o |= extract64(i, 40, 4) << 28;
+
+    o |= extract64(i, 32, 4) << 32;
+    o |= extract64(i, 12, 4) << 36;
+    o |= extract64(i, 56, 4) << 40;
+    o |= extract64(i, 20, 4) << 44;
+
+    o |= extract64(i, 8, 4) << 48;
+    o |= extract64(i, 36, 4) << 52;
+    o |= extract64(i, 16, 4) << 56;
+    o |= extract64(i, 60, 4) << 60;
+
+    return o;
+}
+
+static uint64_t pac_cell_inv_shuffle(uint64_t i)
+{
+    uint64_t o = 0;
+
+    o |= extract64(i, 12, 4);
+    o |= extract64(i, 24, 4) << 4;
+    o |= extract64(i, 48, 4) << 8;
+    o |= extract64(i, 36, 4) << 12;
+
+    o |= extract64(i, 56, 4) << 16;
+    o |= extract64(i, 44, 4) << 20;
+    o |= extract64(i, 4, 4) << 24;
+    o |= extract64(i, 16, 4) << 28;
+
+    o |= i & MAKE_64BIT_MASK(32, 4);
+    o |= extract64(i, 52, 4) << 36;
+    o |= extract64(i, 28, 4) << 40;
+    o |= extract64(i, 8, 4) << 44;
+
+    o |= extract64(i, 20, 4) << 48;
+    o |= extract64(i, 0, 4) << 52;
+    o |= extract64(i, 40, 4) << 56;
+    o |= i & MAKE_64BIT_MASK(60, 4);
+
+    return o;
+}
+
+static uint64_t pac_sub(uint64_t i)
+{
+    static const uint8_t sub[16] = {
+        0xb, 0x6, 0x8, 0xf, 0xc, 0x0, 0x9, 0xe,
+        0x3, 0x7, 0x4, 0x5, 0xd, 0x2, 0x1, 0xa,
+    };
+    uint64_t o = 0;
+    int b;
+
+    for (b = 0; b < 64; b += 4) {
+        o |= (uint64_t)sub[(i >> b) & 0xf] << b;
+    }
+    return o;
+}
+
+static uint64_t pac_inv_sub(uint64_t i)
+{
+    static const uint8_t inv_sub[16] = {
+        0x5, 0xe, 0xd, 0x8, 0xa, 0xb, 0x1, 0x9,
+        0x2, 0x6, 0xf, 0x0, 0x4, 0xc, 0x7, 0x3,
+    };
+    uint64_t o = 0;
+    int b;
+
+    for (b = 0; b < 64; b += 4) {
+        o |= (uint64_t)inv_sub[(i >> b) & 0xf] << b;
+    }
+    return o;
+}
+
+static int rot_cell(int cell, int n)
+{
+    /* 4-bit rotate left by n. */
+    cell |= cell << 4;
+    return extract32(cell, 4 - n, 4);
+}
+
+static uint64_t pac_mult(uint64_t i)
+{
+    uint64_t o = 0;
+    int b;
+
+    for (b = 0; b < 4 * 4; b += 4) {
+        int i0, i4, i8, ic, t0, t1, t2, t3;
+
+        i0 = extract64(i, b, 4);
+        i4 = extract64(i, b + 4 * 4, 4);
+        i8 = extract64(i, b + 8 * 4, 4);
+        ic = extract64(i, b + 12 * 4, 4);
+
+        t0 = rot_cell(i8, 1) ^ rot_cell(i4, 2) ^ rot_cell(i0, 1);
+        t1 = rot_cell(ic, 1) ^ rot_cell(i4, 1) ^ rot_cell(i0, 2);
+        t2 = rot_cell(ic, 2) ^ rot_cell(i8, 1) ^ rot_cell(i0, 1);
+        t3 = rot_cell(ic, 1) ^ rot_cell(i8, 2) ^ rot_cell(i4, 1);
+
+        o |= (uint64_t)t3 << b;
+        o |= (uint64_t)t2 << (b + 4 * 4);
+        o |= (uint64_t)t1 << (b + 8 * 4);
+        o |= (uint64_t)t0 << (b + 12 * 4);
+    }
+    return o;
+}
+
+static uint64_t tweak_cell_rot(uint64_t cell)
+{
+    return (cell >> 1) | (((cell ^ (cell >> 1)) & 1) << 3);
+}
+
+static uint64_t tweak_shuffle(uint64_t i)
+{
+    uint64_t o = 0;
+
+    o |= extract64(i, 16, 4) << 0;
+    o |= extract64(i, 20, 4) << 4;
+    o |= tweak_cell_rot(extract64(i, 24, 4)) << 8;
+    o |= extract64(i, 28, 4) << 12;
+
+    o |= tweak_cell_rot(extract64(i, 44, 4)) << 16;
+    o |= extract64(i, 8, 4) << 20;
+    o |= extract64(i, 12, 4) << 24;
+    o |= tweak_cell_rot(extract64(i, 32, 4)) << 28;
+
+    o |= extract64(i, 48, 4) << 32;
+    o |= extract64(i, 52, 4) << 36;
+    o |= extract64(i, 56, 4) << 40;
+    o |= tweak_cell_rot(extract64(i, 60, 4)) << 44;
+
+    o |= tweak_cell_rot(extract64(i, 0, 4)) << 48;
+    o |= extract64(i, 4, 4) << 52;
+    o |= tweak_cell_rot(extract64(i, 40, 4)) << 56;
+    o |= tweak_cell_rot(extract64(i, 36, 4)) << 60;
+
+    return o;
+}
+
+static uint64_t tweak_cell_inv_rot(uint64_t cell)
+{
+    return ((cell << 1) & 0xf) | ((cell & 1) ^ (cell >> 3));
+}
+
+static uint64_t tweak_inv_shuffle(uint64_t i)
+{
+    uint64_t o = 0;
+
+    o |= tweak_cell_inv_rot(extract64(i, 48, 4));
+    o |= extract64(i, 52, 4) << 4;
+    o |= extract64(i, 20, 4) << 8;
+    o |= extract64(i, 24, 4) << 12;
+
+    o |= extract64(i, 0, 4) << 16;
+    o |= extract64(i, 4, 4) << 20;
+    o |= tweak_cell_inv_rot(extract64(i, 8, 4)) << 24;
+    o |= extract64(i, 12, 4) << 28;
+
+    o |= tweak_cell_inv_rot(extract64(i, 28, 4)) << 32;
+    o |= tweak_cell_inv_rot(extract64(i, 60, 4)) << 36;
+    o |= tweak_cell_inv_rot(extract64(i, 56, 4)) << 40;
+    o |= tweak_cell_inv_rot(extract64(i, 16, 4)) << 44;
+
+    o |= extract64(i, 32, 4) << 48;
+    o |= extract64(i, 36, 4) << 52;
+    o |= extract64(i, 40, 4) << 56;
+    o |= tweak_cell_inv_rot(extract64(i, 44, 4)) << 60;
+
+    return o;
+}
+
 static uint64_t pauth_computepac(uint64_t data, uint64_t modifier,
                                  ARMPACKey key)
 {
-    g_assert_not_reached(); /* FIXME */
+    static const uint64_t RC[5] = {
+        0x0000000000000000ull,
+        0x13198A2E03707344ull,
+        0xA4093822299F31D0ull,
+        0x082EFA98EC4E6C89ull,
+        0x452821E638D01377ull,
+    };
+    const uint64_t alpha = 0xC0AC29B7C97C50DDull;
+    /*
+     * Note that in the ARM pseudocode, key0 contains bits <127:64>
+     * and key1 contains bits <63:0> of the 128-bit key.
+     */
+    uint64_t key0 = key.hi, key1 = key.lo;
+    uint64_t workingval, runningmod, roundkey, modk0;
+    int i;
+
+    modk0 = (key0 << 63) | ((key0 >> 1) ^ (key0 >> 63));
+    runningmod = modifier;
+    workingval = data ^ key0;
+
+    for (i = 0; i <= 4; ++i) {
+        roundkey = key1 ^ runningmod;
+        workingval ^= roundkey;
+        workingval ^= RC[i];
+        if (i > 0) {
+            workingval = pac_cell_shuffle(workingval);
+            workingval = pac_mult(workingval);
+        }
+        workingval = pac_sub(workingval);
+        runningmod = tweak_shuffle(runningmod);
+    }
+    roundkey = modk0 ^ runningmod;
+    workingval ^= roundkey;
+    workingval = pac_cell_shuffle(workingval);
+    workingval = pac_mult(workingval);
+    workingval = pac_sub(workingval);
+    workingval = pac_cell_shuffle(workingval);
+    workingval = pac_mult(workingval);
+    workingval ^= key1;
+    workingval = pac_cell_inv_shuffle(workingval);
+    workingval = pac_inv_sub(workingval);
+    workingval = pac_mult(workingval);
+    workingval = pac_cell_inv_shuffle(workingval);
+    workingval ^= key0;
+    workingval ^= runningmod;
+    for (i = 0; i <= 4; ++i) {
+        workingval = pac_inv_sub(workingval);
+        if (i < 4) {
+            workingval = pac_mult(workingval);
+            workingval = pac_cell_inv_shuffle(workingval);
+        }
+        runningmod = tweak_inv_shuffle(runningmod);
+        roundkey = key1 ^ runningmod;
+        workingval ^= RC[4 - i];
+        workingval ^= roundkey;
+        workingval ^= alpha;
+    }
+    workingval ^= modk0;
+
+    return workingval;
 }
 
 static uint64_t pauth_addpac(CPUARMState *env, uint64_t ptr, uint64_t modifier,
--
2.20.1

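The 4-bit rotate in rot_cell() above is worth a second look: duplicating
the nibble and extracting a shifted 4-bit window is equivalent to a
rotate left. A standalone sketch with a couple of checks (illustration
only, not part of the patch):

    #include <assert.h>

    static int rot4_left(int cell, int n)
    {
        cell |= cell << 4;              /* e.g. 0b0011 -> 0b00110011 */
        return (cell >> (4 - n)) & 0xf; /* window at bit 4-n == rotl by n */
    }

    int main(void)
    {
        assert(rot4_left(0x1, 1) == 0x2);
        assert(rot4_left(0x8, 1) == 0x1); /* top bit wraps around */
        assert(rot4_left(0x9, 2) == 0x6);
        return 0;
    }
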
From: Richard Henderson <richard.henderson@linaro.org>

Add 4 attributes that control the EL1 enable bits, as we may not
always want to turn on pointer authentication with -cpu max.
However, by default they are enabled.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190108223129.5570-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c   |  3 +++
 target/arm/cpu64.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 63 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
         env->pstate = PSTATE_MODE_EL0t;
         /* Userspace expects access to DC ZVA, CTL_EL0 and the cache ops */
         env->cp15.sctlr_el[1] |= SCTLR_UCT | SCTLR_UCI | SCTLR_DZE;
+        /* Enable all PAC instructions */
+        env->cp15.hcr_el2 |= HCR_API;
+        env->cp15.scr_el3 |= SCR_API;
         /* and to the FP/Neon instructions */
         env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 20, 2, 3);
         /* and to the SVE instructions */
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_vq(Object *obj, Visitor *v, const char *name,
     error_propagate(errp, err);
 }
 
+#ifdef CONFIG_USER_ONLY
+static void cpu_max_get_packey(Object *obj, Visitor *v, const char *name,
+                               void *opaque, Error **errp)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    const uint64_t *bit = opaque;
+    bool enabled = (cpu->env.cp15.sctlr_el[1] & *bit) != 0;
+
+    visit_type_bool(v, name, &enabled, errp);
+}
+
+static void cpu_max_set_packey(Object *obj, Visitor *v, const char *name,
+                               void *opaque, Error **errp)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+    Error *err = NULL;
+    const uint64_t *bit = opaque;
+    bool enabled;
+
+    visit_type_bool(v, name, &enabled, &err);
+
+    if (!err) {
+        if (enabled) {
+            cpu->env.cp15.sctlr_el[1] |= *bit;
+        } else {
+            cpu->env.cp15.sctlr_el[1] &= ~*bit;
+        }
+    }
+    error_propagate(errp, err);
+}
+#endif
+
 /* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
  * otherwise, a CPU with as many features enabled as our emulation supports.
  * The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
          */
         cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
         cpu->dcz_blocksize = 7; /*  512 bytes */
+
+        /*
+         * Note that Linux will enable all of the keys at once.
+         * But doing it this way will allow experimentation beyond that.
+         */
+        {
+            static const uint64_t apia_bit = SCTLR_EnIA;
+            static const uint64_t apib_bit = SCTLR_EnIB;
+            static const uint64_t apda_bit = SCTLR_EnDA;
+            static const uint64_t apdb_bit = SCTLR_EnDB;
+
+            object_property_add(obj, "apia", "bool", cpu_max_get_packey,
+                                cpu_max_set_packey, NULL,
+                                (void *)&apia_bit, &error_fatal);
+            object_property_add(obj, "apib", "bool", cpu_max_get_packey,
+                                cpu_max_set_packey, NULL,
+                                (void *)&apib_bit, &error_fatal);
+            object_property_add(obj, "apda", "bool", cpu_max_get_packey,
+                                cpu_max_set_packey, NULL,
+                                (void *)&apda_bit, &error_fatal);
+            object_property_add(obj, "apdb", "bool", cpu_max_get_packey,
+                                cpu_max_set_packey, NULL,
+                                (void *)&apdb_bit, &error_fatal);
+
+            /* Enable all PAC keys by default. */
+            cpu->env.cp15.sctlr_el[1] |= SCTLR_EnIA | SCTLR_EnIB;
+            cpu->env.cp15.sctlr_el[1] |= SCTLR_EnDA | SCTLR_EnDB;
+        }
 #endif
 
         cpu->sve_max_vq = ARM_MAX_VQ;
--
2.20.1

From: Richard Henderson <richard.henderson@linaro.org>

Enable the n1 for virt and sbsa board use.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220506180242.216785-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/virt.rst |  1 +
 hw/arm/sbsa-ref.c        |  1 +
 hw/arm/virt.c            |  1 +
 target/arm/cpu64.c       | 66 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/docs/system/arm/virt.rst b/docs/system/arm/virt.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/virt.rst
+++ b/docs/system/arm/virt.rst
@@ -XXX,XX +XXX,XX @@ Supported guest CPU types:
 - ``cortex-a76`` (64-bit)
 - ``a64fx`` (64-bit)
 - ``host`` (with KVM only)
+- ``neoverse-n1`` (64-bit)
 - ``max`` (same as ``host`` for KVM; best possible emulation with TCG)
 
 Note that the default is ``cortex-a15``, so for an AArch64 guest you must
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static const char * const valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a57"),
     ARM_CPU_TYPE_NAME("cortex-a72"),
     ARM_CPU_TYPE_NAME("cortex-a76"),
+    ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("max"),
 };
 
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const char *valid_cpus[] = {
     ARM_CPU_TYPE_NAME("cortex-a72"),
     ARM_CPU_TYPE_NAME("cortex-a76"),
     ARM_CPU_TYPE_NAME("a64fx"),
+    ARM_CPU_TYPE_NAME("neoverse-n1"),
     ARM_CPU_TYPE_NAME("host"),
     ARM_CPU_TYPE_NAME("max"),
 };
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_a76_initfn(Object *obj)
     cpu->isar.mvfr2 = 0x00000043;
 }
 
+static void aarch64_neoverse_n1_initfn(Object *obj)
+{
+    ARMCPU *cpu = ARM_CPU(obj);
+
+    cpu->dtb_compatible = "arm,neoverse-n1";
+    set_feature(&cpu->env, ARM_FEATURE_V8);
+    set_feature(&cpu->env, ARM_FEATURE_NEON);
+    set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
+    set_feature(&cpu->env, ARM_FEATURE_AARCH64);
+    set_feature(&cpu->env, ARM_FEATURE_CBAR_RO);
+    set_feature(&cpu->env, ARM_FEATURE_EL2);
+    set_feature(&cpu->env, ARM_FEATURE_EL3);
+    set_feature(&cpu->env, ARM_FEATURE_PMU);
+
+    /* Ordered by B2.4 AArch64 registers by functional group */
+    cpu->clidr = 0x82000023;
+    cpu->ctr = 0x8444c004;
+    cpu->dcz_blocksize = 4;
+    cpu->isar.id_aa64dfr0 = 0x0000000110305408ull;
+    cpu->isar.id_aa64isar0 = 0x0000100010211120ull;
+    cpu->isar.id_aa64isar1 = 0x0000000000100001ull;
+    cpu->isar.id_aa64mmfr0 = 0x0000000000101125ull;
+    cpu->isar.id_aa64mmfr1 = 0x0000000010212122ull;
+    cpu->isar.id_aa64mmfr2 = 0x0000000000001011ull;
+    cpu->isar.id_aa64pfr0 = 0x1100000010111112ull; /* GIC filled in later */
+    cpu->isar.id_aa64pfr1 = 0x0000000000000020ull;
+    cpu->id_afr0 = 0x00000000;
+    cpu->isar.id_dfr0 = 0x04010088;
+    cpu->isar.id_isar0 = 0x02101110;
+    cpu->isar.id_isar1 = 0x13112111;
+    cpu->isar.id_isar2 = 0x21232042;
+    cpu->isar.id_isar3 = 0x01112131;
+    cpu->isar.id_isar4 = 0x00010142;
+    cpu->isar.id_isar5 = 0x01011121;
+    cpu->isar.id_isar6 = 0x00000010;
+    cpu->isar.id_mmfr0 = 0x10201105;
+    cpu->isar.id_mmfr1 = 0x40000000;
+    cpu->isar.id_mmfr2 = 0x01260000;
+    cpu->isar.id_mmfr3 = 0x02122211;
+    cpu->isar.id_mmfr4 = 0x00021110;
+    cpu->isar.id_pfr0 = 0x10010131;
+    cpu->isar.id_pfr1 = 0x00010000; /* GIC filled in later */
+    cpu->isar.id_pfr2 = 0x00000011;
+    cpu->midr = 0x414fd0c1; /* r4p1 */
+    cpu->revidr = 0;
+
+    /* From B2.23 CCSIDR_EL1 */
+    cpu->ccsidr[0] = 0x701fe01a; /* 64KB L1 dcache */
+    cpu->ccsidr[1] = 0x201fe01a; /* 64KB L1 icache */
+    cpu->ccsidr[2] = 0x70ffe03a; /* 1MB L2 cache */
+
+    /* From B2.98 SCTLR_EL3 */
+    cpu->reset_sctlr = 0x30c50838;
+
+    /* From B4.23 ICH_VTR_EL2 */
+    cpu->gic_num_lrs = 4;
+    cpu->gic_vpribits = 5;
+    cpu->gic_vprebits = 5;
+
+    /* From B5.1 AdvSIMD AArch64 register summary */
+    cpu->isar.mvfr0 = 0x10110222;
+    cpu->isar.mvfr1 = 0x13211111;
+    cpu->isar.mvfr2 = 0x00000043;
+}
+
 void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 {
     /*
@@ -XXX,XX +XXX,XX @@ static const ARMCPUInfo aarch64_cpus[] = {
     { .name = "cortex-a72", .initfn = aarch64_a72_initfn },
     { .name = "cortex-a76", .initfn = aarch64_a76_initfn },
     { .name = "a64fx", .initfn = aarch64_a64fx_initfn },
+    { .name = "neoverse-n1", .initfn = aarch64_neoverse_n1_initfn },
     { .name = "max", .initfn = aarch64_max_initfn },
 #if defined(CONFIG_KVM) || defined(CONFIG_HVF)
     { .name = "host", .initfn = aarch64_host_initfn },
--
2.25.1

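Since the four key properties are only registered for CONFIG_USER_ONLY
builds, they are exercised through the linux-user binaries. An
illustrative invocation (the test program name is a placeholder):

    qemu-aarch64 -cpu max,apia=on,apib=on,apda=on,apdb=off ./pauth-test
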
From: Aaron Lindsay <aaron@os.amperecomputing.com>

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-14-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 39 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool event_always_supported(CPUARMState *env)
     return true;
 }
 
+static uint64_t swinc_get_count(CPUARMState *env)
+{
+    /*
+     * SW_INCR events are written directly to the pmevcntr's by writes to
+     * PMSWINC, so there is no underlying count maintained by the PMU itself
+     */
+    return 0;
+}
+
 /*
  * Return the underlying cycle count for the PMU cycle counters. If we're in
  * usermode, simply return 0.
@@ -XXX,XX +XXX,XX @@ static uint64_t instructions_get_count(CPUARMState *env)
 #endif
 
 static const pm_event pm_events[] = {
+    { .number = 0x000, /* SW_INCR */
+      .supported = event_always_supported,
+      .get_count = swinc_get_count,
+    },
 #ifndef CONFIG_USER_ONLY
     { .number = 0x008, /* INST_RETIRED, Instruction architecturally executed */
       .supported = instructions_supported,
@@ -XXX,XX +XXX,XX @@ static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     pmu_op_finish(env);
 }
 
+static void pmswinc_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                          uint64_t value)
+{
+    unsigned int i;
+    for (i = 0; i < pmu_num_counters(env); i++) {
+        /* Increment a counter's count iff: */
+        if ((value & (1 << i)) && /* counter's bit is set */
+                /* counter is enabled and not filtered */
+                pmu_counter_enabled(env, i) &&
+                /* counter is SW_INCR */
+                (env->cp15.c14_pmevtyper[i] & PMXEVTYPER_EVTCOUNT) == 0x0) {
+            pmevcntr_op_start(env, i);
+            env->cp15.c14_pmevcntr[i]++;
+            pmevcntr_op_finish(env, i);
+        }
+    }
+}
+
 static uint64_t pmccntr_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
     uint64_t ret;
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, cp15.c9_pmovsr),
       .writefn = pmovsr_write,
       .raw_writefn = raw_write },
-    /* Unimplemented so WI. */
     { .name = "PMSWINC", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 4,
-      .access = PL0_W, .accessfn = pmreg_access_swinc, .type = ARM_CP_NOP },
+      .access = PL0_W, .accessfn = pmreg_access_swinc, .type = ARM_CP_NO_RAW,
+      .writefn = pmswinc_write },
+    { .name = "PMSWINC_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 4,
+      .access = PL0_W, .accessfn = pmreg_access_swinc, .type = ARM_CP_NO_RAW,
+      .writefn = pmswinc_write },
     { .name = "PMSELR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 5,
       .access = PL0_RW, .type = ARM_CP_ALIAS,
       .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmselr),
--
2.20.1

From: Leif Lindholm <quic_llindhol@quicinc.com>

The sbsa-ref machine is continuously evolving. Some of the changes we
want to make in the near future, to align with real components (e.g.
the GIC-700), will break compatibility for existing firmware.

Introduce two new properties in the DT generated at machine creation:
- machine-version-major
  To be incremented when a platform change makes the machine
  incompatible with existing firmware.
- machine-version-minor
  To be incremented when functionality is added to the machine
  without causing incompatibility with existing firmware.
  It is reset to 0 when machine-version-major is incremented.

This versioning scheme is *neither*:
- A QEMU versioned machine type; a given version of QEMU will emulate
  a given version of the platform.
- A reflection of level of SBSA (now SystemReady SR) support provided.

The version will increment on guest-visible functional changes only,
akin to a revision ID register found on a physical platform.

These properties are both introduced with the value 0.
(Hence, a machine where the DT is lacking these nodes is equivalent
to version 0.0.)

Signed-off-by: Leif Lindholm <quic_llindhol@quicinc.com>
Message-id: 20220505113947.75714-1-quic_llindhol@quicinc.com
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Radoslaw Biernacki <rad@semihalf.com>
Cc: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void create_fdt(SBSAMachineState *sms)
     qemu_fdt_setprop_cell(fdt, "/", "#address-cells", 0x2);
     qemu_fdt_setprop_cell(fdt, "/", "#size-cells", 0x2);
 
+    /*
+     * This versioning scheme is for informing platform fw only. It is neither:
+     * - A QEMU versioned machine type; a given version of QEMU will emulate
+     *   a given version of the platform.
+     * - A reflection of level of SBSA (now SystemReady SR) support provided.
+     *
+     * machine-version-major: updated when changes breaking fw compatibility
+     *                        are introduced.
+     * machine-version-minor: updated when features are added that don't break
+     *                        fw compatibility.
+     */
+    qemu_fdt_setprop_cell(fdt, "/", "machine-version-major", 0);
+    qemu_fdt_setprop_cell(fdt, "/", "machine-version-minor", 0);
+
     if (ms->numa_state->have_numa_distance) {
         int size = nb_numa_nodes * nb_numa_nodes * 3 * sizeof(uint32_t);
         uint32_t *matrix = g_malloc0(size);
--
2.25.1

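For context, this is how a guest exercises the register implemented in
the first patch above (guest code, illustration only): PMSWINC is
write-only, and each set bit increments the corresponding counter,
provided that counter is enabled and programmed with event 0x000
(SW_INCR).

    /* AArch64 guest: increment event counters 0 and 2. */
    mov x0, #5              /* bits 0 and 2 set */
    msr pmswinc_el0, x0
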
From: Aaron Lindsay <aaron@os.amperecomputing.com>

pmccntr_read and pmccntr_write contained duplicate code that was already
being handled by pmccntr_sync. Consolidate the duplicated code into two
functions: pmccntr_op_start and pmccntr_op_finish. Add a companion to
c15_ccnt in CPUARMState so that we can simultaneously save both the
architectural register value and the last underlying cycle count - this
ensures time isn't lost and will also allow us to access the 'old'
architectural register value in order to detect overflows in later
patches.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-3-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  37 +++++++++++---
 target/arm/helper.c | 118 ++++++++++++++++++++++++++------------------
 2 files changed, 100 insertions(+), 55 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMState {
         uint64_t oslsr_el1; /* OS Lock Status */
         uint64_t mdcr_el2;
         uint64_t mdcr_el3;
-        /* If the counter is enabled, this stores the last time the counter
-         * was reset. Otherwise it stores the counter value
+        /* Stores the architectural value of the counter *the last time it was
+         * updated* by pmccntr_op_start. Accesses should always be surrounded
+         * by pmccntr_op_start/pmccntr_op_finish to guarantee the latest
+         * architecturally-correct value is being read/set.
          */
         uint64_t c15_ccnt;
+        /* Stores the delta between the architectural value and the underlying
+         * cycle count during normal operation. It is used to update c15_ccnt
+         * to be the correct architectural value before accesses. During
+         * accesses, c15_ccnt_delta contains the underlying count being used
+         * for the access, after which it reverts to the delta value in
+         * pmccntr_op_finish.
+         */
+        uint64_t c15_ccnt_delta;
         uint64_t pmccfiltr_el0; /* Performance Monitor Filter Register */
         uint64_t vpidr_el2; /* Virtualization Processor ID Register */
         uint64_t vmpidr_el2; /* Virtualization Multiprocessor ID Register */
@@ -XXX,XX +XXX,XX @@ int cpu_arm_signal_handler(int host_signum, void *pinfo,
                            void *puc);
 
 /**
- * pmccntr_sync
+ * pmccntr_op_start/finish
  * @env: CPUARMState
  *
- * Synchronises the counter in the PMCCNTR. This must always be called twice,
- * once before any action that might affect the timer and again afterwards.
- * The function is used to swap the state of the register if required.
- * This only happens when not in user mode (!CONFIG_USER_ONLY)
+ * Convert the counter in the PMCCNTR between its delta form (the typical mode
+ * when it's enabled) and the guest-visible value. These two calls must always
+ * surround any action which might affect the counter.
  */
-void pmccntr_sync(CPUARMState *env);
+void pmccntr_op_start(CPUARMState *env);
+void pmccntr_op_finish(CPUARMState *env);
+
+/**
+ * pmu_op_start/finish
+ * @env: CPUARMState
+ *
+ * Convert all PMU counters between their delta form (the typical mode when
+ * they are enabled) and the guest-visible values. These two calls must
+ * surround any action which might affect the counters.
+ */
+void pmu_op_start(CPUARMState *env);
+void pmu_op_finish(CPUARMState *env);
 
 /* SCTLR bit meanings. Several bits have been reused in newer
  * versions of the architecture; in that case we define constants
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline bool arm_ccnt_enabled(CPUARMState *env)
 
     return true;
 }
-
-void pmccntr_sync(CPUARMState *env)
+/*
+ * Ensure c15_ccnt is the guest-visible count so that operations such as
+ * enabling/disabling the counter or filtering, modifying the count itself,
+ * etc. can be done logically. This is essentially a no-op if the counter is
+ * not enabled at the time of the call.
+ */
+void pmccntr_op_start(CPUARMState *env)
 {
-    uint64_t temp_ticks;
-
-    temp_ticks = muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
+    uint64_t cycles = 0;
+    cycles = muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
                       ARM_CPU_FREQ, NANOSECONDS_PER_SECOND);
 
-    if (env->cp15.c9_pmcr & PMCRD) {
-        /* Increment once every 64 processor clock cycles */
-        temp_ticks /= 64;
-    }
-
     if (arm_ccnt_enabled(env)) {
-        env->cp15.c15_ccnt = temp_ticks - env->cp15.c15_ccnt;
+        uint64_t eff_cycles = cycles;
+        if (env->cp15.c9_pmcr & PMCRD) {
+            /* Increment once every 64 processor clock cycles */
+            eff_cycles /= 64;
+        }
+
+        env->cp15.c15_ccnt = eff_cycles - env->cp15.c15_ccnt_delta;
     }
+    env->cp15.c15_ccnt_delta = cycles;
+}
+
+/*
+ * If PMCCNTR is enabled, recalculate the delta between the clock and the
+ * guest-visible count. A call to pmccntr_op_finish should follow every call to
+ * pmccntr_op_start.
+ */
+void pmccntr_op_finish(CPUARMState *env)
+{
+    if (arm_ccnt_enabled(env)) {
+        uint64_t prev_cycles = env->cp15.c15_ccnt_delta;
+
+        if (env->cp15.c9_pmcr & PMCRD) {
+            /* Increment once every 64 processor clock cycles */
+            prev_cycles /= 64;
+        }
+
+        env->cp15.c15_ccnt_delta = prev_cycles - env->cp15.c15_ccnt;
+    }
+}
+
+void pmu_op_start(CPUARMState *env)
+{
+    pmccntr_op_start(env);
+}
+
+void pmu_op_finish(CPUARMState *env)
+{
+    pmccntr_op_finish(env);
 }
 
 static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                        uint64_t value)
 {
-    pmccntr_sync(env);
+    pmu_op_start(env);
 
     if (value & PMCRC) {
         /* The counter has been reset */
@@ -XXX,XX +XXX,XX @@ static void pmcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->cp15.c9_pmcr &= ~0x39;
     env->cp15.c9_pmcr |= (value & 0x39);
 
-    pmccntr_sync(env);
+    pmu_op_finish(env);
 }
 
 static uint64_t pmccntr_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
-    uint64_t total_ticks;
-
-    if (!arm_ccnt_enabled(env)) {
-        /* Counter is disabled, do not change value */
-        return env->cp15.c15_ccnt;
-    }
-
-    total_ticks = muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
-                           ARM_CPU_FREQ, NANOSECONDS_PER_SECOND);
-
-    if (env->cp15.c9_pmcr & PMCRD) {
-        /* Increment once every 64 processor clock cycles */
-        total_ticks /= 64;
-    }
-    return total_ticks - env->cp15.c15_ccnt;
+    uint64_t ret;
+    pmccntr_op_start(env);
+    ret = env->cp15.c15_ccnt;
+    pmccntr_op_finish(env);
+    return ret;
 }
 
 static void pmselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void pmselr_write(CPUARMState *env, const ARMCPRegInfo *ri,
 static void pmccntr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                           uint64_t value)
 {
-    uint64_t total_ticks;
-
-    if (!arm_ccnt_enabled(env)) {
-        /* Counter is disabled, set the absolute value */
-        env->cp15.c15_ccnt = value;
-        return;
-    }
-
-    total_ticks = muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
-                           ARM_CPU_FREQ, NANOSECONDS_PER_SECOND);
-
-    if (env->cp15.c9_pmcr & PMCRD) {
-        /* Increment once every 64 processor clock cycles */
-        total_ticks /= 64;
-    }
-    env->cp15.c15_ccnt = total_ticks - value;
+    pmccntr_op_start(env);
+    env->cp15.c15_ccnt = value;
+    pmccntr_op_finish(env);
 }
 
 static void pmccntr_write32(CPUARMState *env, const ARMCPRegInfo *ri,
@@ -XXX,XX +XXX,XX @@ static void pmccntr_write32(CPUARMState *env, const ARMCPRegInfo *ri,
 
 #else /* CONFIG_USER_ONLY */
 
-void pmccntr_sync(CPUARMState *env)
+void pmccntr_op_start(CPUARMState *env)
+{
+}
+
+void pmccntr_op_finish(CPUARMState *env)
+{
+}
+
+void pmu_op_start(CPUARMState *env)
+{
+}
+
+void pmu_op_finish(CPUARMState *env)
 {
 }
 
@@ -XXX,XX +XXX,XX @@ void pmccntr_sync(CPUARMState *env)
 static void pmccfiltr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
 {
-    pmccntr_sync(env);
+    pmccntr_op_start(env);
     env->cp15.pmccfiltr_el0 = value & 0xfc000000;
-    pmccntr_sync(env);
+    pmccntr_op_finish(env);
 }
 
 static void pmcntenset_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.20.1

From: Gavin Shan <gshan@redhat.com>

This adds cluster-id to the CPU instance properties, which will be used
by the arm/virt machine. Besides, the cluster-id is also verified or
dumped in various spots:

  * hw/core/machine.c::machine_set_cpu_numa_node() to associate
    CPU with its NUMA node.

  * hw/core/machine.c::machine_numa_finish_cpu_init() to record
    CPU slots with no NUMA mapping set.

  * hw/core/machine-hmp-cmds.c::hmp_hotpluggable_cpus() to dump
    cluster-id.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-2-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 qapi/machine.json          |  6 ++++--
 hw/core/machine-hmp-cmds.c |  4 ++++
 hw/core/machine.c          | 16 ++++++++++++++++
 3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/qapi/machine.json b/qapi/machine.json
index XXXXXXX..XXXXXXX 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -XXX,XX +XXX,XX @@
 # @node-id: NUMA node ID the CPU belongs to
 # @socket-id: socket number within node/board the CPU belongs to
 # @die-id: die number within socket the CPU belongs to (since 4.1)
-# @core-id: core number within die the CPU belongs to
+# @cluster-id: cluster number within die the CPU belongs to (since 7.1)
+# @core-id: core number within cluster the CPU belongs to
 # @thread-id: thread number within core the CPU belongs to
 #
-# Note: currently there are 5 properties that could be present
+# Note: currently there are 6 properties that could be present
 #       but management should be prepared to pass through other
 #       properties with device_add command to allow for future
 #       interface extension. This also requires the filed names to be kept in
@@ -XXX,XX +XXX,XX @@
   'data': { '*node-id': 'int',
             '*socket-id': 'int',
             '*die-id': 'int',
+            '*cluster-id': 'int',
             '*core-id': 'int',
             '*thread-id': 'int'
   }
diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/machine-hmp-cmds.c
+++ b/hw/core/machine-hmp-cmds.c
@@ -XXX,XX +XXX,XX @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
         if (c->has_die_id) {
             monitor_printf(mon, "    die-id: \"%" PRIu64 "\"\n", c->die_id);
         }
+        if (c->has_cluster_id) {
+            monitor_printf(mon, "    cluster-id: \"%" PRIu64 "\"\n",
+                           c->cluster_id);
+        }
         if (c->has_core_id) {
             monitor_printf(mon, "    core-id: \"%" PRIu64 "\"\n", c->core_id);
         }
diff --git a/hw/core/machine.c b/hw/core/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
             return;
         }
 
+        if (props->has_cluster_id && !slot->props.has_cluster_id) {
+            error_setg(errp, "cluster-id is not supported");
+            return;
+        }
+
         if (props->has_socket_id && !slot->props.has_socket_id) {
             error_setg(errp, "socket-id is not supported");
             return;
@@ -XXX,XX +XXX,XX @@ void machine_set_cpu_numa_node(MachineState *machine,
                 continue;
             }
 
+            if (props->has_cluster_id &&
+                props->cluster_id != slot->props.cluster_id) {
+                continue;
+            }
+
             if (props->has_die_id && props->die_id != slot->props.die_id) {
                 continue;
             }
@@ -XXX,XX +XXX,XX @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
         }
         g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
     }
+    if (cpu->props.has_cluster_id) {
+        if (s->len) {
+            g_string_append_printf(s, ", ");
+        }
+        g_string_append_printf(s, "cluster-id: %"PRId64, cpu->props.cluster_id);
+    }
     if (cpu->props.has_core_id) {
         if (s->len) {
             g_string_append_printf(s, ", ");
--
2.25.1

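The delta representation introduced in the first patch above can be
stated compactly outside QEMU's types (a sketch of the invariant, not
the QEMU code itself): while the counter runs, the stored field holds
underlying - guest, so the guest value is recomputed on demand instead
of being ticked forward.

    /* op_start: convert delta form to the guest-visible value. */
    guest = underlying - delta;
    /* op_finish: convert back, capturing any change made to 'guest'. */
    delta = underlying - guest;
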
From: Aaron Lindsay <aaron@os.amperecomputing.com>

This both advertises that we support four counters and enables them,
because pmu_num_counters() reads this value from PMCR.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-13-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
       .access = PL1_W, .type = ARM_CP_NOP },
     /* Performance monitors are implementation defined in v7,
      * but with an ARM recommended set of registers, which we
-     * follow (although we don't actually implement any counters)
+     * follow.
      *
      * Performance registers fall into three categories:
      *  (a) always UNDEF in PL0, RW in PL1 (PMINTENSET, PMINTENCLR)
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     }
     if (arm_feature(env, ARM_FEATURE_V7)) {
         /* v7 performance monitor control register: same implementor
-         * field as main ID register, and we implement only the cycle
-         * count register.
+         * field as main ID register, and we implement four counters in
+         * addition to the cycle count register.
          */
-        unsigned int i, pmcrn = 0;
+        unsigned int i, pmcrn = 4;
         ARMCPRegInfo pmcr = {
             .name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
             .access = PL0_RW,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             .access = PL0_RW, .accessfn = pmreg_access,
             .type = ARM_CP_IO,
             .fieldoffset = offsetof(CPUARMState, cp15.c9_pmcr),
-            .resetvalue = cpu->midr & 0xff000000,
+            .resetvalue = (cpu->midr & 0xff000000) | (pmcrn << PMCRN_SHIFT),
             .writefn = pmcr_write, .raw_writefn = raw_write,
         };
         define_one_arm_cp_reg(cpu, &pmcr);
--
2.20.1

From: Gavin Shan <gshan@redhat.com>

The CPU topology isn't enabled on the arm/virt machine yet, but we're
going to do it in the next patch. After the CPU topology is enabled by
the next patch, "thread-id=1" becomes invalid because the CPU core is
preferred on the arm/virt machine. That means these two CPUs have 0/1
as their core IDs, but their thread IDs are both 0. This triggers a
test failure, as the following message indicates:

  [14/21 qemu:qtest+qtest-aarch64 / qtest-aarch64/numa-test ERROR
  1.48s killed by signal 6 SIGABRT
  >>> G_TEST_DBUS_DAEMON=/home/gavin/sandbox/qemu.main/tests/dbus-vmstate-daemon.sh \
  QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon \
  QTEST_QEMU_BINARY=./qemu-system-aarch64 \
  QTEST_QEMU_IMG=./qemu-img MALLOC_PERTURB_=83 \
  /home/gavin/sandbox/qemu.main/build/tests/qtest/numa-test --tap -k
  ――――――――――――――――――――――――――――――――――――――――――――――
  stderr:
  qemu-system-aarch64: -numa cpu,node-id=0,thread-id=1: no match found

This fixes the issue by providing comprehensive SMP configurations
in aarch64_numa_cpu(). The SMP configurations aren't used before
the CPU topology is enabled in the next patch.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Message-id: 20220503140304.855514-3-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/numa-test.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
     QTestState *qts;
     g_autofree char *cli = NULL;
 
-    cli = make_cli(data, "-machine smp.cpus=2 "
+    cli = make_cli(data, "-machine "
+                   "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
                    "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
                    "-numa cpu,node-id=1,thread-id=0 "
                    "-numa cpu,node-id=0,thread-id=1");
--
2.25.1

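pmu_num_counters() derives the counter count from the N field of PMCR,
which the architecture places in bits [15:11]; the shift and mask below
are assumed from that architectural definition rather than quoted from
QEMU's headers:

    #define PMCRN_SHIFT 11
    #define PMCRN_MASK  (0x1f << PMCRN_SHIFT)

    /* How the reset value above round-trips back to a counter count. */
    unsigned int pmcrn = (pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
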
1
From: Aaron Lindsay <aaron@os.amperecomputing.com>
1
From: Gavin Shan <gshan@redhat.com>
2
2
3
The instruction event is only enabled when icount is used, cycles are
3
Currently, the SMP configuration isn't considered when the CPU
4
always supported. Always defining get_cycle_count (but altering its
4
topology is populated. In this case, it's impossible to provide
5
behavior depending on CONFIG_USER_ONLY) allows us to remove some
5
the default CPU-to-NUMA mapping or association based on the socket
6
CONFIG_USER_ONLY #defines throughout the rest of the code.
6
ID of the given CPU.
7
7
8
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
8
This takes account of SMP configuration when the CPU topology
9
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
9
is populated. The die ID for the given CPU isn't assigned since
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
it's not supported on arm/virt machine. Besides, the used SMP
11
Message-id: 20181211151945.29137-12-aaron@os.amperecomputing.com
11
configuration in qtest/numa-test/aarch64_numa_cpu() is corrcted
12
to avoid testing failure
13
14
Signed-off-by: Gavin Shan <gshan@redhat.com>
15
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
16
Acked-by: Igor Mammedov <imammedo@redhat.com>
17
Message-id: 20220503140304.855514-4-gshan@redhat.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
19
---
14
target/arm/helper.c | 90 ++++++++++++++++++++++-----------------------
20
hw/arm/virt.c | 15 ++++++++++++++-
15
1 file changed, 44 insertions(+), 46 deletions(-)
21
1 file changed, 14 insertions(+), 1 deletion(-)
16
22
17
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
18
index XXXXXXX..XXXXXXX 100644
24
index XXXXXXX..XXXXXXX 100644
19
--- a/target/arm/helper.c
25
--- a/hw/arm/virt.c
20
+++ b/target/arm/helper.c
26
+++ b/hw/arm/virt.c
21
@@ -XXX,XX +XXX,XX @@
27
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
22
#include "arm_ldst.h"
28
int n;
23
#include <zlib.h> /* For crc32 */
29
unsigned int max_cpus = ms->smp.max_cpus;
24
#include "exec/semihost.h"
30
VirtMachineState *vms = VIRT_MACHINE(ms);
25
+#include "sysemu/cpus.h"
31
+ MachineClass *mc = MACHINE_GET_CLASS(vms);
26
#include "sysemu/kvm.h"
32
27
#include "fpu/softfloat.h"
33
if (ms->possible_cpus) {
28
#include "qemu/range.h"
34
assert(ms->possible_cpus->len == max_cpus);
29
@@ -XXX,XX +XXX,XX @@ typedef struct pm_event {
35
@@ -XXX,XX +XXX,XX @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
30
uint64_t (*get_count)(CPUARMState *);
36
ms->possible_cpus->cpus[n].type = ms->cpu_type;
31
} pm_event;
37
ms->possible_cpus->cpus[n].arch_id =
32
38
virt_cpu_mp_affinity(vms, n);
33
+static bool event_always_supported(CPUARMState *env)
34
+{
35
+ return true;
36
+}
37
+
39
+
38
+/*
40
+ assert(!mc->smp_props.dies_supported);
39
+ * Return the underlying cycle count for the PMU cycle counters. If we're in
41
+ ms->possible_cpus->cpus[n].props.has_socket_id = true;
40
+ * usermode, simply return 0.
42
+ ms->possible_cpus->cpus[n].props.socket_id =
41
+ */
43
+ n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
42
+static uint64_t cycles_get_count(CPUARMState *env)
44
+ ms->possible_cpus->cpus[n].props.has_cluster_id = true;
43
+{
45
+ ms->possible_cpus->cpus[n].props.cluster_id =
44
+#ifndef CONFIG_USER_ONLY
46
+ (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
45
+ return muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
47
+ ms->possible_cpus->cpus[n].props.has_core_id = true;
46
+ ARM_CPU_FREQ, NANOSECONDS_PER_SECOND);
48
+ ms->possible_cpus->cpus[n].props.core_id =
47
+#else
49
+ (n / ms->smp.threads) % ms->smp.cores;
48
+ return cpu_get_host_ticks();
50
ms->possible_cpus->cpus[n].props.has_thread_id = true;
49
+#endif
51
+}
+
+#ifndef CONFIG_USER_ONLY
+static bool instructions_supported(CPUARMState *env)
+{
+    return use_icount == 1 /* Precise instruction counting */;
+}
+
+static uint64_t instructions_get_count(CPUARMState *env)
+{
+    return (uint64_t)cpu_get_icount_raw();
+}
+#endif
+
 static const pm_event pm_events[] = {
+#ifndef CONFIG_USER_ONLY
+    { .number = 0x008, /* INST_RETIRED, Instruction architecturally executed */
+      .supported = instructions_supported,
+      .get_count = instructions_get_count,
+    },
+    { .number = 0x011, /* CPU_CYCLES, Cycle */
+      .supported = event_always_supported,
+      .get_count = cycles_get_count,
+    }
+#endif
 };

 /*
@@ -XXX,XX +XXX,XX @@ static const pm_event pm_events[] = {
  * should first be updated to something sparse instead of the current
  * supported_event_map[] array.
  */
-#define MAX_EVENT_ID 0x0
+#define MAX_EVENT_ID 0x11
 #define UNSUPPORTED_EVENT UINT16_MAX
 static uint16_t supported_event_map[MAX_EVENT_ID + 1];

@@ -XXX,XX +XXX,XX @@ static CPAccessResult pmreg_access_swinc(CPUARMState *env,
     return pmreg_access(env, ri, isread);
 }

-#ifndef CONFIG_USER_ONLY
-
 static CPAccessResult pmreg_access_selr(CPUARMState *env,
                                         const ARMCPRegInfo *ri,
                                         bool isread)
@@ -XXX,XX +XXX,XX @@ static bool pmu_counter_enabled(CPUARMState *env, uint8_t counter)
  */
 void pmccntr_op_start(CPUARMState *env)
 {
-    uint64_t cycles = 0;
-    cycles = muldiv64(qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL),
-                      ARM_CPU_FREQ, NANOSECONDS_PER_SECOND);
+    uint64_t cycles = cycles_get_count(env);

     if (pmu_counter_enabled(env, 31)) {
         uint64_t eff_cycles = cycles;
@@ -XXX,XX +XXX,XX @@ static void pmccntr_write32(CPUARMState *env, const ARMCPRegInfo *ri,
     pmccntr_write(env, ri, deposit64(cur_val, 0, 32, value));
 }

-#else /* CONFIG_USER_ONLY */
-
-void pmccntr_op_start(CPUARMState *env)
-{
-}
-
-void pmccntr_op_finish(CPUARMState *env)
-{
-}
-
-void pmevcntr_op_start(CPUARMState *env, uint8_t i)
-{
-}
-
-void pmevcntr_op_finish(CPUARMState *env, uint8_t i)
-{
-}
-
-void pmu_op_start(CPUARMState *env)
-{
-}
-
-void pmu_op_finish(CPUARMState *env)
-{
-}
-
-void pmu_pre_el_change(ARMCPU *cpu, void *ignored)
-{
-}
-
-void pmu_post_el_change(ARMCPU *cpu, void *ignored)
-{
-}
-
-#endif
-
 static void pmccfiltr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
     /* Unimplemented so WI. */
     { .name = "PMSWINC", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 4,
       .access = PL0_W, .accessfn = pmreg_access_swinc, .type = ARM_CP_NOP },
-#ifndef CONFIG_USER_ONLY
     { .name = "PMSELR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 5,
       .access = PL0_RW, .type = ARM_CP_ALIAS,
       .fieldoffset = offsetoflow32(CPUARMState, cp15.c9_pmselr),
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
       .fieldoffset = offsetof(CPUARMState, cp15.c15_ccnt),
       .readfn = pmccntr_read, .writefn = pmccntr_write,
       .raw_readfn = raw_read, .raw_writefn = raw_write, },
-#endif
     { .name = "PMCCFILTR", .cp = 15, .opc1 = 0, .crn = 14, .crm = 15, .opc2 = 7,
       .writefn = pmccfiltr_write_a32, .readfn = pmccfiltr_read_a32,
       .access = PL0_RW, .accessfn = pmreg_access,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
      * count register.
      */
     unsigned int i, pmcrn = 0;
-#ifndef CONFIG_USER_ONLY
     ARMCPRegInfo pmcr = {
         .name = "PMCR", .cp = 15, .crn = 9, .crm = 12, .opc1 = 0, .opc2 = 0,
         .access = PL0_RW,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
         g_free(pmevtyper_name);
         g_free(pmevtyper_el0_name);
     }
-#endif
     ARMCPRegInfo clidr = {
         .name = "CLIDR", .state = ARM_CP_STATE_BOTH,
         .opc0 = 3, .crn = 0, .crm = 0, .opc1 = 1, .opc2 = 1,
--
2.20.1

-        ms->possible_cpus->cpus[n].props.thread_id = n;
+        ms->possible_cpus->cpus[n].props.thread_id =
+            n % ms->smp.threads;
     }

     return ms->possible_cpus;
 }
--
2.25.1

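(Aside: the thread_id assignment just above is the last of the sibling ID
assignments in virt_possible_cpu_arch_ids(). As an illustration only, here is
a sketch of the full index-to-topology mapping that the series derives from
the -smp parameters; the helper name decompose_cpu_index() is made up, while
MachineState and ms->smp.* are QEMU's own. With sockets=2,cores=3,threads=1,
indexes 0..2 land in socket 0 and 3..5 in socket 1, which is what the NUMA
fix later in this queue relies on.)

    /* Sketch only: how a flat CPU index n maps onto topology IDs. */
    static void decompose_cpu_index(const MachineState *ms, int n,
                                    int64_t *socket, int64_t *cluster,
                                    int64_t *core, int64_t *thread)
    {
        *thread  = n % ms->smp.threads;
        *core    = (n / ms->smp.threads) % ms->smp.cores;
        *cluster = (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
        *socket  = n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
    }
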
From: Aaron Lindsay <aaron@os.amperecomputing.com>

Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-9-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  4 ++--
 target/arm/helper.c | 19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
     uint32_t id_pfr0;
     uint32_t id_pfr1;
     uint32_t id_dfr0;
-    uint32_t pmceid0;
-    uint32_t pmceid1;
+    uint64_t pmceid0;
+    uint64_t pmceid1;
     uint32_t id_afr0;
     uint32_t id_mmfr0;
     uint32_t id_mmfr1;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     } else {
         define_arm_cp_regs(cpu, not_v7_cp_reginfo);
     }
+    if (FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
+        FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) != 0xf) {
+        ARMCPRegInfo v81_pmu_regs[] = {
+            { .name = "PMCEID2", .state = ARM_CP_STATE_AA32,
+              .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 4,
+              .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
+              .resetvalue = extract64(cpu->pmceid0, 32, 32) },
+            { .name = "PMCEID3", .state = ARM_CP_STATE_AA32,
+              .cp = 15, .opc1 = 0, .crn = 9, .crm = 14, .opc2 = 5,
+              .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
+              .resetvalue = extract64(cpu->pmceid1, 32, 32) },
+            REGINFO_SENTINEL
+        };
+        define_arm_cp_regs(cpu, v81_pmu_regs);
+    }
     if (arm_feature(env, ARM_FEATURE_V8)) {
         /* AArch64 ID registers, which all have impdef reset values.
          * Note that within the ID register ranges the unused slots
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     { .name = "PMCEID0", .state = ARM_CP_STATE_AA32,
       .cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 6,
       .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
-      .resetvalue = cpu->pmceid0 },
+      .resetvalue = extract64(cpu->pmceid0, 0, 32) },
     { .name = "PMCEID0_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 6,
       .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     { .name = "PMCEID1", .state = ARM_CP_STATE_AA32,
       .cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 7,
       .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
-      .resetvalue = cpu->pmceid1 },
+      .resetvalue = extract64(cpu->pmceid1, 0, 32) },
     { .name = "PMCEID1_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 7,
       .access = PL0_R, .accessfn = pmreg_access, .type = ARM_CP_CONST,
--
2.20.1

From: Gavin Shan <gshan@redhat.com>

In aarch64_numa_cpu(), the CPU and NUMA association is something
like below. Two threads in the same core/cluster/socket are
associated with two individual NUMA nodes, which is unreal as
Igor Mammedov mentioned. We don't expect the association to break
NUMA-to-socket boundary, which matches with the real world.

  NUMA-node    socket   cluster   core   thread
  ----------------------------------------------
  0            0        0         0      0
  1            0        0         0      1

This corrects the topology for CPUs and their association with
NUMA nodes. After this patch is applied, the CPU and NUMA
association becomes something like below, which looks real.
Besides, socket/cluster/core/thread IDs are all checked when
the NUMA node IDs are verified. It helps to check if the CPU
topology is properly populated or not.

  NUMA-node    socket   cluster   core   thread
  ----------------------------------------------
  0            1        0         0      0
  1            0        0         0      0

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20220503140304.855514-5-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/qtest/numa-test.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index XXXXXXX..XXXXXXX 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)
     g_autofree char *cli = NULL;

     cli = make_cli(data, "-machine "
-        "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
+        "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
         "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
-        "-numa cpu,node-id=1,thread-id=0 "
-        "-numa cpu,node-id=0,thread-id=1");
+        "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
+        "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
     qts = qtest_init(cli);
     cpus = get_cpus(qts, &resp);
     g_assert(cpus);

     while ((e = qlist_pop(cpus))) {
         QDict *cpu, *props;
-        int64_t thread, node;
+        int64_t socket, cluster, core, thread, node;

         cpu = qobject_to(QDict, e);
         g_assert(qdict_haskey(cpu, "props"));
@@ -XXX,XX +XXX,XX @@ static void aarch64_numa_cpu(const void *data)

         g_assert(qdict_haskey(props, "node-id"));
         node = qdict_get_int(props, "node-id");
+        g_assert(qdict_haskey(props, "socket-id"));
+        socket = qdict_get_int(props, "socket-id");
+        g_assert(qdict_haskey(props, "cluster-id"));
+        cluster = qdict_get_int(props, "cluster-id");
+        g_assert(qdict_haskey(props, "core-id"));
+        core = qdict_get_int(props, "core-id");
         g_assert(qdict_haskey(props, "thread-id"));
         thread = qdict_get_int(props, "thread-id");

-        if (thread == 0) {
+        if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
             g_assert_cmpint(node, ==, 1);
-        } else if (thread == 1) {
+        } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
             g_assert_cmpint(node, ==, 0);
         } else {
             g_assert(false);
--
2.25.1

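(Aside: the reason pmceid0/pmceid1 grow to uint64_t is visible in the
register definitions above: AArch64 reads PMCEID0_EL0/PMCEID1_EL0 as full
64-bit values, while the AArch32 view splits each value across two 32-bit
registers, PMCEID0/2 and PMCEID1/3. A quick check in isolation, using
extract64() from qemu/bitops.h; the PMCEID value below is made up.)

    #include "qemu/bitops.h"

    uint64_t pmceid0 = 0x0000000200000001ULL;    /* made-up example value */

    uint32_t lo = extract64(pmceid0, 0, 32);     /* PMCEID0 -> 0x00000001 */
    uint32_t hi = extract64(pmceid0, 32, 32);    /* PMCEID2 -> 0x00000002 */
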
From: Aaron Lindsay <aaron@os.amperecomputing.com>

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20181211151945.29137-6-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v6_cp_reginfo[] = {
                          PMXEVTYPER_M | PMXEVTYPER_MT | \
                          PMXEVTYPER_EVTCOUNT)

+#define PMCCFILTR       0xf8000000
+#define PMCCFILTR_M     PMXEVTYPER_M
+#define PMCCFILTR_EL0   (PMCCFILTR | PMCCFILTR_M)
+
 static inline uint32_t pmu_num_counters(CPUARMState *env)
 {
     return (env->cp15.c9_pmcr & PMCRN_MASK) >> PMCRN_SHIFT;
@@ -XXX,XX +XXX,XX @@ static void pmccfiltr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
 {
     pmccntr_op_start(env);
-    env->cp15.pmccfiltr_el0 = value & 0xfc000000;
+    env->cp15.pmccfiltr_el0 = value & PMCCFILTR_EL0;
     pmccntr_op_finish(env);
 }

+static void pmccfiltr_write_a32(CPUARMState *env, const ARMCPRegInfo *ri,
+                                uint64_t value)
+{
+    pmccntr_op_start(env);
+    /* M is not accessible from AArch32 */
+    env->cp15.pmccfiltr_el0 = (env->cp15.pmccfiltr_el0 & PMCCFILTR_M) |
+        (value & PMCCFILTR);
+    pmccntr_op_finish(env);
+}
+
+static uint64_t pmccfiltr_read_a32(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    /* M is not visible in AArch32 */
+    return env->cp15.pmccfiltr_el0 & PMCCFILTR;
+}
+
 static void pmcntenset_write(CPUARMState *env, const ARMCPRegInfo *ri,
                              uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
       .readfn = pmccntr_read, .writefn = pmccntr_write,
       .raw_readfn = raw_read, .raw_writefn = raw_write, },
 #endif
+    { .name = "PMCCFILTR", .cp = 15, .opc1 = 0, .crn = 14, .crm = 15, .opc2 = 7,
+      .writefn = pmccfiltr_write_a32, .readfn = pmccfiltr_read_a32,
+      .access = PL0_RW, .accessfn = pmreg_access,
+      .type = ARM_CP_ALIAS | ARM_CP_IO,
+      .resetvalue = 0, },
     { .name = "PMCCFILTR_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 15, .opc2 = 7,
       .writefn = pmccfiltr_write, .raw_writefn = raw_write,
--
2.20.1

From: Gavin Shan <gshan@redhat.com>

When CPU-to-NUMA association isn't explicitly provided by users,
the default one is given by mc->get_default_cpu_node_id(). However,
the CPU topology isn't fully considered in the default association
and this causes CPU topology broken warnings when booting a Linux
guest.

For example, the following warning messages are observed when the
Linux guest is booted with the following command lines.

  /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
  -accel kvm -machine virt,gic-version=host \
  -cpu host \
  -smp 6,sockets=2,cores=3,threads=1 \
  -m 1024M,slots=16,maxmem=64G \
  -object memory-backend-ram,id=mem0,size=128M \
  -object memory-backend-ram,id=mem1,size=128M \
  -object memory-backend-ram,id=mem2,size=128M \
  -object memory-backend-ram,id=mem3,size=128M \
  -object memory-backend-ram,id=mem4,size=128M \
  -object memory-backend-ram,id=mem5,size=384M \
  -numa node,nodeid=0,memdev=mem0 \
  -numa node,nodeid=1,memdev=mem1 \
  -numa node,nodeid=2,memdev=mem2 \
  -numa node,nodeid=3,memdev=mem3 \
  -numa node,nodeid=4,memdev=mem4 \
  -numa node,nodeid=5,memdev=mem5
  :
  alternatives: patching kernel code
  BUG: arch topology borken
  the CLS domain not a subset of the MC domain
  <the above error log repeats>
  BUG: arch topology borken
  the DIE domain not a subset of the NODE domain

With the current implementation of mc->get_default_cpu_node_id(),
CPU#0 to CPU#5 are associated with NODE#0 to NODE#5 separately.
That's incorrect because CPU#0/1/2 should be associated with the
same NUMA node because they're seated in the same socket.

This fixes the issue by considering the socket ID when the default
CPU-to-NUMA association is provided in virt_possible_cpu_arch_ids().
With this applied, no more CPU topology broken warnings are seen
from the Linux guest. The 6 CPUs are associated with NODE#0/1, but
there are no CPUs associated with NODE#2/3/4/5.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Message-id: 20220503140304.855514-6-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)

 static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
-    return idx % ms->numa_state->num_nodes;
+    int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
+
+    return socket_id % ms->numa_state->num_nodes;
 }

 static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
--
2.25.1

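(Aside: a worked example of the AArch32 masking above, using the constants
the PMCCFILTR patch defines. PMCCFILTR is 0xf8000000 and PMCCFILTR_M is the
M bit, 0x04000000, so PMCCFILTR_EL0 equals the 0xfc000000 literal the patch
replaces. The state and write values below are made up.)

    uint64_t state = 0x84000000;    /* P and M set (made-up state)   */
    uint32_t wval  = 0xffffffff;    /* AArch32 write (made-up value) */

    /* pmccfiltr_write_a32(): M survives, only bits 31:27 come from wval */
    state = (state & PMCCFILTR_M) | (wval & PMCCFILTR);   /* 0xfc000000 */

    /* pmccfiltr_read_a32(): M is filtered back out of the read value */
    uint32_t rval = state & PMCCFILTR;                    /* 0xf8000000 */
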
From: Aaron Lindsay <aaron@os.amperecomputing.com>

In some cases it may be helpful to modify state before saving it for
migration, and then modify the state back after it has been saved. The
existing pre_save function provides half of this functionality. This
patch adds a post_save function to provide the second half.

Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 20181211151945.29137-2-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/migration/vmstate.h |  1 +
 migration/vmstate.c         | 13 ++++++++++++-
 docs/devel/migration.rst    |  9 +++++++--
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index XXXXXXX..XXXXXXX 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -XXX,XX +XXX,XX @@ struct VMStateDescription {
     int (*pre_load)(void *opaque);
     int (*post_load)(void *opaque, int version_id);
     int (*pre_save)(void *opaque);
+    int (*post_save)(void *opaque);
     bool (*needed)(void *opaque);
     const VMStateField *fields;
     const VMStateDescription **subsections;
diff --git a/migration/vmstate.c b/migration/vmstate.c
index XXXXXXX..XXXXXXX 100644
--- a/migration/vmstate.c
+++ b/migration/vmstate.c
@@ -XXX,XX +XXX,XX @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
                 if (ret) {
                     error_report("Save of field %s/%s failed",
                                  vmsd->name, field->name);
+                    if (vmsd->post_save) {
+                        vmsd->post_save(opaque);
+                    }
                     return ret;
                 }

@@ -XXX,XX +XXX,XX @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
         json_end_array(vmdesc);
     }

-    return vmstate_subsection_save(f, vmsd, opaque, vmdesc);
+    ret = vmstate_subsection_save(f, vmsd, opaque, vmdesc);
+
+    if (vmsd->post_save) {
+        int ps_ret = vmsd->post_save(opaque);
+        if (!ret) {
+            ret = ps_ret;
+        }
+    }
+    return ret;
 }

 static const VMStateDescription *
diff --git a/docs/devel/migration.rst b/docs/devel/migration.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/devel/migration.rst
+++ b/docs/devel/migration.rst
@@ -XXX,XX +XXX,XX @@ The functions to do that are inside a vmstate definition, and are called:

   This function is called before we save the state of one device.

-Example: You can look at hpet.c, that uses the three function to
-massage the state that is transferred.
+- ``int (*post_save)(void *opaque);``
+
+  This function is called after we save the state of one device
+  (even upon failure, unless the call to pre_save returned an error).
+
+Example: You can look at hpet.c, that uses the first three functions
+to massage the state that is transferred.

 The ``VMSTATE_WITH_TMP`` macro may be useful when the migration
 data doesn't match the stored device data well; it allows an
--
2.20.1

From: Gavin Shan <gshan@redhat.com>

When the PPTT table is built, the CPU topology is re-calculated, but
it's unnecessary because the CPU topology has been populated in
virt_possible_cpu_arch_ids() on the arm/virt machine.

This reworks build_pptt() to avoid that, by reusing the existing IDs
in ms->possible_cpus. Currently, the only user of build_pptt() is
the arm/virt machine.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Tested-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 20220503140304.855514-7-gshan@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/acpi/aml-build.c | 111 +++++++++++++++++++-------------------------
 1 file changed, 48 insertions(+), 63 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -XXX,XX +XXX,XX @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
                 const char *oem_id, const char *oem_table_id)
 {
     MachineClass *mc = MACHINE_GET_CLASS(ms);
-    GQueue *list = g_queue_new();
-    guint pptt_start = table_data->len;
-    guint parent_offset;
-    guint length, i;
-    int uid = 0;
-    int socket;
+    CPUArchIdList *cpus = ms->possible_cpus;
+    int64_t socket_id = -1, cluster_id = -1, core_id = -1;
+    uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
+    uint32_t pptt_start = table_data->len;
+    int n;
     AcpiTable table = { .sig = "PPTT", .rev = 2,
                         .oem_id = oem_id, .oem_table_id = oem_table_id };

     acpi_table_begin(&table, table_data);

-    for (socket = 0; socket < ms->smp.sockets; socket++) {
-        g_queue_push_tail(list,
-            GUINT_TO_POINTER(table_data->len - pptt_start));
-        build_processor_hierarchy_node(
-            table_data,
-            /*
-             * Physical package - represents the boundary
-             * of a physical package
-             */
-            (1 << 0),
-            0, socket, NULL, 0);
-    }
-
-    if (mc->smp_props.clusters_supported) {
-        length = g_queue_get_length(list);
-        for (i = 0; i < length; i++) {
-            int cluster;
-
-            parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-            for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
-                g_queue_push_tail(list,
-                    GUINT_TO_POINTER(table_data->len - pptt_start));
-                build_processor_hierarchy_node(
-                    table_data,
-                    (0 << 0), /* not a physical package */
-                    parent_offset, cluster, NULL, 0);
-            }
+    /*
+     * This works with the assumption that cpus[n].props.*_id has been
+     * sorted from top to down levels in mc->possible_cpu_arch_ids().
+     * Otherwise, the unexpected and duplicated containers will be
+     * created.
+     */
+    for (n = 0; n < cpus->len; n++) {
+        if (cpus->cpus[n].props.socket_id != socket_id) {
+            assert(cpus->cpus[n].props.socket_id > socket_id);
+            socket_id = cpus->cpus[n].props.socket_id;
+            cluster_id = -1;
+            core_id = -1;
+            socket_offset = table_data->len - pptt_start;
+            build_processor_hierarchy_node(table_data,
+                (1 << 0), /* Physical package */
+                0, socket_id, NULL, 0);
         }
-    }

-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int core;
-
-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (core = 0; core < ms->smp.cores; core++) {
-            if (ms->smp.threads > 1) {
-                g_queue_push_tail(list,
-                    GUINT_TO_POINTER(table_data->len - pptt_start));
-                build_processor_hierarchy_node(
-                    table_data,
-                    (0 << 0), /* not a physical package */
-                    parent_offset, core, NULL, 0);
-            } else {
-                build_processor_hierarchy_node(
-                    table_data,
-                    (1 << 1) | /* ACPI Processor ID valid */
-                    (1 << 3), /* Node is a Leaf */
-                    parent_offset, uid++, NULL, 0);
+        if (mc->smp_props.clusters_supported) {
+            if (cpus->cpus[n].props.cluster_id != cluster_id) {
+                assert(cpus->cpus[n].props.cluster_id > cluster_id);
+                cluster_id = cpus->cpus[n].props.cluster_id;
+                core_id = -1;
+                cluster_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    socket_offset, cluster_id, NULL, 0);
             }
+        } else {
+            cluster_offset = socket_offset;
         }
-    }

-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int thread;
+        if (ms->smp.threads == 1) {
+            build_processor_hierarchy_node(table_data,
+                (1 << 1) | /* ACPI Processor ID valid */
+                (1 << 3),  /* Node is a Leaf */
+                cluster_offset, n, NULL, 0);
+        } else {
+            if (cpus->cpus[n].props.core_id != core_id) {
+                assert(cpus->cpus[n].props.core_id > core_id);
+                core_id = cpus->cpus[n].props.core_id;
+                core_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    cluster_offset, core_id, NULL, 0);
+            }

-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (thread = 0; thread < ms->smp.threads; thread++) {
-            build_processor_hierarchy_node(
-                table_data,
+            build_processor_hierarchy_node(table_data,
                 (1 << 1) | /* ACPI Processor ID valid */
                 (1 << 2) | /* Processor is a Thread */
                 (1 << 3),  /* Node is a Leaf */
-                parent_offset, uid++, NULL, 0);
+                core_offset, n, NULL, 0);
         }
     }

-    g_queue_free(list);
     acpi_table_end(linker, &table);
 }
--
2.25.1

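(Aside: the single pass in build_pptt() above leans on the IDs arriving
pre-sorted from mc->possible_cpu_arch_ids(), as its comment notes. Purely as
an illustration, a checker for that precondition might look like the sketch
below; cpu_ids_sorted() is a hypothetical helper, while CPUArchIdList and
CpuInstanceProperties are QEMU's types, and only the socket/cluster levels
are spelled out.)

    /* Sketch: verify cpus[].props.*_id is sorted from the socket level
     * down, which is what the single pass in build_pptt() assumes. */
    static bool cpu_ids_sorted(const CPUArchIdList *cpus)
    {
        for (int n = 1; n < cpus->len; n++) {
            const CpuInstanceProperties *a = &cpus->cpus[n - 1].props;
            const CpuInstanceProperties *b = &cpus->cpus[n].props;

            if (b->socket_id < a->socket_id) {
                return false;
            }
            if (b->socket_id == a->socket_id &&
                b->cluster_id < a->cluster_id) {
                return false;   /* ...and likewise for core/thread IDs */
            }
        }
        return true;
    }
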
Deleted patch
From: Aaron Lindsay <aaron@os.amperecomputing.com>

Because of the PMU's design, many register accesses have side effects
which are inter-related, meaning that the normal method of saving CP
registers can result in inconsistent state. These side-effects are
largely handled in pmu_op_start/finish functions which can be called
before and after the state is saved/restored. By doing this and adding
raw read/write functions for the affected registers, we avoid
migration-related inconsistencies.

Signed-off-by: Aaron Lindsay <aclindsa@gmail.com>
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-4-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c  |  6 ++++--
 target/arm/machine.c | 24 ++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v7_cp_reginfo[] = {
       .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 13, .opc2 = 0,
       .access = PL0_RW, .accessfn = pmreg_access_ccntr,
       .type = ARM_CP_IO,
-      .readfn = pmccntr_read, .writefn = pmccntr_write, },
+      .fieldoffset = offsetof(CPUARMState, cp15.c15_ccnt),
+      .readfn = pmccntr_read, .writefn = pmccntr_write,
+      .raw_readfn = raw_read, .raw_writefn = raw_write, },
 #endif
     { .name = "PMCCFILTR_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 14, .crm = 15, .opc2 = 7,
-      .writefn = pmccfiltr_write,
+      .writefn = pmccfiltr_write, .raw_writefn = raw_write,
       .access = PL0_RW, .accessfn = pmreg_access,
       .type = ARM_CP_IO,
       .fieldoffset = offsetof(CPUARMState, cp15.pmccfiltr_el0),
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
 {
     ARMCPU *cpu = opaque;

+    if (!kvm_enabled()) {
+        pmu_op_start(&cpu->env);
+    }
+
     if (kvm_enabled()) {
         if (!write_kvmstate_to_list(cpu)) {
             /* This should never fail */
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_save(void *opaque)
     return 0;
 }

+static int cpu_post_save(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+
+    if (!kvm_enabled()) {
+        pmu_op_finish(&cpu->env);
+    }
+
+    return 0;
+}
+
 static int cpu_pre_load(void *opaque)
 {
     ARMCPU *cpu = opaque;
@@ -XXX,XX +XXX,XX @@ static int cpu_pre_load(void *opaque)
      */
     env->irq_line_state = UINT32_MAX;

+    if (!kvm_enabled()) {
+        pmu_op_start(&cpu->env);
+    }
+
     return 0;
 }

@@ -XXX,XX +XXX,XX @@ static int cpu_post_load(void *opaque, int version_id)
     hw_breakpoint_update_all(cpu);
     hw_watchpoint_update_all(cpu);

+    if (!kvm_enabled()) {
+        pmu_op_finish(&cpu->env);
+    }
+
     return 0;
 }

@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
     .version_id = 22,
     .minimum_version_id = 22,
     .pre_save = cpu_pre_save,
+    .post_save = cpu_post_save,
     .pre_load = cpu_pre_load,
     .post_load = cpu_post_load,
     .fields = (VMStateField[]) {
--
2.20.1

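(Aside: vmstate_arm_cpu above is the first user of the new post_save hook.
For a device outside target/arm the pairing looks like the hedged sketch
below; DemoState, its fields, and vmstate_demo are all hypothetical, while
the VMSTATE_* macros are QEMU's.)

    typedef struct DemoState {        /* hypothetical device state    */
        uint64_t counter;             /* value that goes on the wire  */
        uint64_t pending;             /* transient, not migrated      */
    } DemoState;

    static int demo_pre_save(void *opaque)
    {
        DemoState *s = opaque;
        s->counter += s->pending;     /* fold transient state in      */
        return 0;
    }

    static int demo_post_save(void *opaque)
    {
        DemoState *s = opaque;
        s->counter -= s->pending;     /* restore live representation  */
        return 0;
    }

    static const VMStateDescription vmstate_demo = {
        .name = "demo",
        .version_id = 1,
        .minimum_version_id = 1,
        .pre_save = demo_pre_save,
        .post_save = demo_post_save,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(counter, DemoState),
            VMSTATE_END_OF_LIST()
        }
    };
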
Deleted patch
From: Aaron Lindsay <aaron@os.amperecomputing.com>

This is immediately necessary for the PMUv3 implementation to check
ID_DFR0.PerfMon to enable/disable specific features, but defines the
full complement of fields for possible future use elsewhere.

Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20181211151945.29137-8-aaron@os.amperecomputing.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(ID_AA64MMFR1, PAN, 20, 4)
 FIELD(ID_AA64MMFR1, SPECSEI, 24, 4)
 FIELD(ID_AA64MMFR1, XNX, 28, 4)

+FIELD(ID_DFR0, COPDBG, 0, 4)
+FIELD(ID_DFR0, COPSDBG, 4, 4)
+FIELD(ID_DFR0, MMAPDBG, 8, 4)
+FIELD(ID_DFR0, COPTRC, 12, 4)
+FIELD(ID_DFR0, MMAPTRC, 16, 4)
+FIELD(ID_DFR0, MPROFDBG, 20, 4)
+FIELD(ID_DFR0, PERFMON, 24, 4)
+FIELD(ID_DFR0, TRACEFILT, 28, 4)
+
 QEMU_BUILD_BUG_ON(ARRAY_SIZE(((ARMCPU *)0)->ccsidr) <= R_V7M_CSSELR_INDEX_MASK);

 /* If adding a feature bit which corresponds to a Linux ELF
--
2.20.1

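(Aside: these FIELD() definitions are consumed through the registerfields.h
accessors, and the PMCEID2/PMCEID3 patch earlier in this queue already does
exactly that.)

    /* From the earlier PMCEID2/3 hunk: an ARMv8.1 PMU is present when
     * ID_DFR0.PerfMon is 4 or more, while 0xf denotes a non-architectural
     * IMPDEF PMU and is excluded. */
    if (FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) >= 4 &&
        FIELD_EX32(cpu->id_dfr0, ID_DFR0, PERFMON) != 0xf) {
        /* register the PMUv3.1-only registers, e.g. PMCEID2/PMCEID3 */
    }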