The following changes since commit eae587e8e3694b1aceab23239493fb4c7e1a80f5:

  Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2021-09-13' into staging (2021-09-13 11:00:30 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20210913

for you to fetch changes up to 9a2b2ecf4d25a3943918c95d2db4508b304161b5:

  hw/arm/mps2.c: Mark internal-only I2C buses as 'full' (2021-09-13 17:09:28 +0100)

----------------------------------------------------------------
target-arm queue:
 * mark MPS2/MPS3 board-internal i2c buses as 'full' so that command
   line user-created devices are not plugged into them
 * Take an exception if PSTATE.IL is set
 * Support an emulated ITS in the virt board
 * Add support for kudo-bmc board
 * Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM
 * cadence_uart: Fix clock handling issues that prevented
   u-boot from running

----------------------------------------------------------------
Bin Meng (6):
      hw/misc: zynq_slcr: Correctly compute output clocks in the reset exit phase
      hw/char: cadence_uart: Disable transmit when input clock is disabled
      hw/char: cadence_uart: Move clock/reset check to uart_can_receive()
      hw/char: cadence_uart: Convert to memop_with_attrs() ops
      hw/char: cadence_uart: Ignore access when unclocked or in reset for uart_{read, write}()
      hw/char: cadence_uart: Log a guest error when device is unclocked or in reset

Chris Rauer (1):
      hw/arm: Add support for kudo-bmc board.

Marc Zyngier (1):
      hw/arm/virt: KVM: Probe for KVM_CAP_ARM_VM_IPA_SIZE when creating scratch VM

Peter Maydell (5):
      target/arm: Take an exception if PSTATE.IL is set
      qdev: Support marking individual buses as 'full'
      hw/arm/mps2-tz.c: Add extra data parameter to MakeDevFn
      hw/arm/mps2-tz.c: Mark internal-only I2C buses as 'full'
      hw/arm/mps2.c: Mark internal-only I2C buses as 'full'

Richard Henderson (1):
      target/arm: Merge disas_a64_insn into aarch64_tr_translate_insn

Shashi Mallela (9):
      hw/intc: GICv3 ITS initial framework
      hw/intc: GICv3 ITS register definitions added
      hw/intc: GICv3 ITS command queue framework
      hw/intc: GICv3 ITS Command processing
      hw/intc: GICv3 ITS Feature enablement
      hw/intc: GICv3 redistributor ITS processing
      tests/data/acpi/virt: Add IORT files for ITS
      hw/arm/virt: add ITS support in virt GIC
      tests/data/acpi/virt: Update IORT files for ITS

 docs/system/arm/nuvoton.rst | 1 +
 hw/intc/gicv3_internal.h | 188 ++++-
 include/hw/arm/virt.h | 2 +
 include/hw/intc/arm_gicv3_common.h | 13 +
 include/hw/intc/arm_gicv3_its_common.h | 32 +-
 include/hw/qdev-core.h | 24 +
 target/arm/cpu.h | 1 +
 target/arm/kvm_arm.h | 4 +-
 target/arm/syndrome.h | 5 +
 target/arm/translate.h | 2 +
 hw/arm/mps2-tz.c | 92 ++-
 hw/arm/mps2.c | 12 +-
 hw/arm/npcm7xx_boards.c | 34 +
 hw/arm/virt.c | 29 +-
 hw/char/cadence_uart.c | 61 +-
 hw/intc/arm_gicv3.c | 14 +
 hw/intc/arm_gicv3_common.c | 13 +
 hw/intc/arm_gicv3_cpuif.c | 7 +-
 hw/intc/arm_gicv3_dist.c | 5 +-
 hw/intc/arm_gicv3_its.c | 1322 ++++++++++++++++++++++++++++++++
 hw/intc/arm_gicv3_its_common.c | 7 +-
 hw/intc/arm_gicv3_its_kvm.c | 2 +-
 hw/intc/arm_gicv3_redist.c | 153 +++-
 hw/misc/zynq_slcr.c | 31 +-
 softmmu/qdev-monitor.c | 7 +-
 target/arm/helper-a64.c | 1 +
 target/arm/helper.c | 8 +
 target/arm/kvm.c | 7 +-
 target/arm/translate-a64.c | 255 +++---
 target/arm/translate.c | 21 +
 hw/intc/meson.build | 1 +
 tests/data/acpi/virt/IORT | Bin 0 -> 124 bytes
 tests/data/acpi/virt/IORT.memhp | Bin 0 -> 124 bytes
 tests/data/acpi/virt/IORT.numamem | Bin 0 -> 124 bytes
 tests/data/acpi/virt/IORT.pxb | Bin 0 -> 124 bytes
 35 files changed, 2144 insertions(+), 210 deletions(-)
 create mode 100644 hw/intc/arm_gicv3_its.c
 create mode 100644 tests/data/acpi/virt/IORT
 create mode 100644 tests/data/acpi/virt/IORT.memhp
 create mode 100644 tests/data/acpi/virt/IORT.numamem
 create mode 100644 tests/data/acpi/virt/IORT.pxb


target-arm queue: two bug fixes, plus the KVM/SVE patchset,
which is a new feature but one which was in my pre-softfreeze
pullreq (it just had to be dropped due to an unexpected test failure.)

thanks
-- PMM

The following changes since commit b7c9a7f353c0e260519bf735ff0d4aa01e72784b:

  Merge remote-tracking branch 'remotes/jnsnow/tags/ide-pull-request' into staging (2019-10-31 15:57:30 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20191101-1

for you to fetch changes up to d9ae7624b659362cb2bb2b04fee53bf50829ca56:

  target/arm: Allow reading flags from FPSCR for M-profile (2019-11-01 08:49:10 +0000)

----------------------------------------------------------------
target-arm queue:
 * Support SVE in KVM guests
 * Don't UNDEF on M-profile 'vmrs apsr_nzcv, fpscr'
 * Update hflags after boot.c modifies CPU state

----------------------------------------------------------------
Andrew Jones (9):
      target/arm/monitor: Introduce qmp_query_cpu_model_expansion
      tests: arm: Introduce cpu feature tests
      target/arm: Allow SVE to be disabled via a CPU property
      target/arm/cpu64: max cpu: Introduce sve<N> properties
      target/arm/kvm64: Add kvm_arch_get/put_sve
      target/arm/kvm64: max cpu: Enable SVE when available
      target/arm/kvm: scratch vcpu: Preserve input kvm_vcpu_init features
      target/arm/cpu64: max cpu: Support sve properties with KVM
      target/arm/kvm: host cpu: Add support for sve<N> properties

Christophe Lyon (1):
      target/arm: Allow reading flags from FPSCR for M-profile

Edgar E. Iglesias (1):
      hw/arm/boot: Rebuild hflags when modifying CPUState at boot

 tests/Makefile.include | 5 +-
 qapi/machine-target.json | 6 +-
 include/qemu/bitops.h | 1 +
 target/arm/cpu.h | 21 ++
 target/arm/kvm_arm.h | 39 +++
 hw/arm/boot.c | 1 +
 target/arm/cpu.c | 25 +-
 target/arm/cpu64.c | 364 +++++++++++++++++++++++++--
 target/arm/helper.c | 10 +-
 target/arm/kvm.c | 25 +-
 target/arm/kvm32.c | 6 +-
 target/arm/kvm64.c | 325 +++++++++++++++++++++---
 target/arm/monitor.c | 158 ++++++++++++
 target/arm/translate-vfp.inc.c | 5 +-
 tests/arm-cpu-features.c | 551 +++++++++++++++++++++++++++++++++++++++++
 docs/arm-cpu-features.rst | 317 ++++++++++++++++++++++++
 16 files changed, 1795 insertions(+), 64 deletions(-)
 create mode 100644 tests/arm-cpu-features.c
 create mode 100644 docs/arm-cpu-features.rst
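Either tag above can be pulled into an existing qemu.git checkout for local review; an illustrative invocation for the newer series (remote names and the local base branch will vary):

  $ git fetch https://git.linaro.org/people/pmaydell/qemu-arm.git tag pull-target-arm-20210913
  $ git log --oneline master..pull-target-arm-20210913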
From: Bin Meng <bmeng.cn@gmail.com>

As of today, when booting upstream U-Boot for Xilinx Zynq, the UART
does not receive anything. Debugging shows that the UART input clock
frequency is zero which prevents the UART from receiving anything as
per the logic in uart_receive().

From zynq_slcr_reset_exit() comment, it intends to compute output
clocks according to ps_clk and registers. zynq_slcr_compute_clocks()
is called to accomplish the task, inside which device_is_in_reset()
is called to actually make the attempt in vain.

Rework reset_hold() and reset_exit() so that in the reset exit phase,
the logic can really compute output clocks in reset_exit().

With this change, upstream U-Boot boots properly again with:

  $ qemu-system-arm -M xilinx-zynq-a9 -m 1G -display none -serial null -serial stdio \
      -device loader,file=u-boot-dtb.bin,addr=0x4000000,cpu-num=0

Fixes: 38867cb7ec90 ("hw/misc/zynq_slcr: add clock generation for uarts")
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20210901124521.30599-2-bmeng.cn@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/misc/zynq_slcr.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

diff --git a/hw/misc/zynq_slcr.c b/hw/misc/zynq_slcr.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/misc/zynq_slcr.c
+++ b/hw/misc/zynq_slcr.c
@@ -XXX,XX +XXX,XX @@ static uint64_t zynq_slcr_compute_clock(const uint64_t periods[],
     zynq_slcr_compute_clock((plls), (state)->regs[reg], \
                             reg ## _ ## enable_field ## _SHIFT)
 
+static void zynq_slcr_compute_clocks_internal(ZynqSLCRState *s, uint64_t ps_clk)
+{
+    uint64_t io_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_IO_PLL_CTRL]);
+    uint64_t arm_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_ARM_PLL_CTRL]);
+    uint64_t ddr_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_DDR_PLL_CTRL]);
+
+    uint64_t uart_mux[4] = {io_pll, io_pll, arm_pll, ddr_pll};
+
+    /* compute uartX reference clocks */
+    clock_set(s->uart0_ref_clk,
+              ZYNQ_COMPUTE_CLK(s, uart_mux, R_UART_CLK_CTRL, CLKACT0));
+    clock_set(s->uart1_ref_clk,
+              ZYNQ_COMPUTE_CLK(s, uart_mux, R_UART_CLK_CTRL, CLKACT1));
+}
+
 /**
  * Compute and set the ouputs clocks periods.
  * But do not propagate them further. Connected clocks
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_compute_clocks(ZynqSLCRState *s)
         ps_clk = 0;
     }
 
-    uint64_t io_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_IO_PLL_CTRL]);
-    uint64_t arm_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_ARM_PLL_CTRL]);
-    uint64_t ddr_pll = zynq_slcr_compute_pll(ps_clk, s->regs[R_DDR_PLL_CTRL]);
-
-    uint64_t uart_mux[4] = {io_pll, io_pll, arm_pll, ddr_pll};
-
-    /* compute uartX reference clocks */
-    clock_set(s->uart0_ref_clk,
-              ZYNQ_COMPUTE_CLK(s, uart_mux, R_UART_CLK_CTRL, CLKACT0));
-    clock_set(s->uart1_ref_clk,
-              ZYNQ_COMPUTE_CLK(s, uart_mux, R_UART_CLK_CTRL, CLKACT1));
+    zynq_slcr_compute_clocks_internal(s, ps_clk);
 }
 
 /**
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_reset_hold(Object *obj)
     ZynqSLCRState *s = ZYNQ_SLCR(obj);
 
     /* will disable all output clocks */
-    zynq_slcr_compute_clocks(s);
+    zynq_slcr_compute_clocks_internal(s, 0);
     zynq_slcr_propagate_clocks(s);
 }
 
@@ -XXX,XX +XXX,XX @@ static void zynq_slcr_reset_exit(Object *obj)
     ZynqSLCRState *s = ZYNQ_SLCR(obj);
 
     /* will compute output clocks according to ps_clk and registers */
-    zynq_slcr_compute_clocks(s);
+    zynq_slcr_compute_clocks_internal(s, clock_get(s->ps_clk));
     zynq_slcr_propagate_clocks(s);
 }
 
--
2.20.1
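The rework relies on QEMU's three-phase reset: every output clock is forced quiet in the hold phase, and real frequencies are only recomputed in the exit phase, with ps_clk passed in explicitly because, as the commit message notes, device_is_in_reset() still reports true while the exit handlers run. A minimal sketch of that pattern, using hypothetical device names rather than the actual zynq_slcr code:

    static void mydev_reset_hold(Object *obj)
    {
        /* hold phase: outputs must look dead while the device is in reset */
        mydev_compute_clocks_internal(MYDEV(obj), 0);
        mydev_propagate_clocks(MYDEV(obj));
    }

    static void mydev_reset_exit(Object *obj)
    {
        MyDevState *s = MYDEV(obj);

        /* exit phase: derive outputs from the real input clock again */
        mydev_compute_clocks_internal(s, clock_get(s->ps_clk));
        mydev_propagate_clocks(s);
    }

    static void mydev_class_init(ObjectClass *klass, void *data)
    {
        ResettableClass *rc = RESETTABLE_CLASS(klass);

        /* register the phase handlers with the Resettable interface */
        rc->phases.hold = mydev_reset_hold;
        rc->phases.exit = mydev_reset_exit;
    }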
From: Bin Meng <bmeng.cn@gmail.com>

At present when input clock is disabled, any character transmitted
to tx fifo can still show on the serial line, which is wrong.

Fixes: b636db306e06 ("hw/char/cadence_uart: add clock support")
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20210901124521.30599-3-bmeng.cn@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/cadence_uart.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/char/cadence_uart.c b/hw/char/cadence_uart.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/cadence_uart.c
+++ b/hw/char/cadence_uart.c
@@ -XXX,XX +XXX,XX @@ static gboolean cadence_uart_xmit(void *do_not_use, GIOCondition cond,
 static void uart_write_tx_fifo(CadenceUARTState *s, const uint8_t *buf,
                                int size)
 {
+    /* ignore characters when unclocked or in reset */
+    if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
+        return;
+    }
+
     if ((s->r[R_CR] & UART_CR_TX_DIS) || !(s->r[R_CR] & UART_CR_TX_EN)) {
         return;
     }
--
2.20.1
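The guard only has a well-defined meaning when the board actually models the UART's clock input; a UART left without a clock source counts as permanently unclocked and will now stay silent. A rough sketch of how an SoC wires this up (illustrative only; the variable names are placeholders, not the exact Zynq SoC code):

    /* in the SoC/board realize function */
    DeviceState *uart = qdev_new(TYPE_CADENCE_UART);

    /* feed the UART's "refclk" input from the clock the SLCR drives */
    qdev_connect_clock_in(uart, "refclk", slcr_uart0_ref_clk);

    sysbus_realize_and_unref(SYS_BUS_DEVICE(uart), &error_fatal);

clock_is_enabled() then simply reports whether that connected clock currently has a non-zero period.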
From: Shashi Mallela <shashi.mallela@linaro.org>

Added ITS command queue handling for MAPTI,MAPI commands,handled ITS
translation which triggers an LPI via INT command as well as write
to GITS_TRANSLATER register,defined enum to differentiate between ITS
command interrupt trigger and GITS_TRANSLATER based interrupt trigger.
Each of these commands make use of other functionalities implemented to
get device table entry,collection table entry or interrupt translation
table entry required for their processing.

Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20210910143951.92242-5-shashi.mallela@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gicv3_internal.h | 12 +
 include/hw/intc/arm_gicv3_common.h | 2 +
 hw/intc/arm_gicv3_its.c | 365 ++++++++++++++++++++++++++++-
 3 files changed, 378 insertions(+), 1 deletion(-)


From: Andrew Jones <drjones@redhat.com>

Add support for the query-cpu-model-expansion QMP command to Arm. We
do this selectively, only exposing CPU properties which represent
optional CPU features which the user may want to enable/disable.
Additionally we restrict the list of queryable cpu models to 'max',
'host', or the current type when KVM is in use. And, finally, we only
implement expansion type 'full', as Arm does not yet have a "base"
CPU type. More details and example queries are described in a new
document (docs/arm-cpu-features.rst).

Note, certainly more features may be added to the list of advertised
features, e.g. 'vfp' and 'neon'. The only requirement is that we can
detect invalid configurations and emit failures at QMP query time.
For 'vfp' and 'neon' this will require some refactoring to share a
validation function between the QMP query and the CPU realize
functions.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
Message-id: 20191031142734.8590-2-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 qapi/machine-target.json | 6 +-
 target/arm/monitor.c | 146 ++++++++++++++++++++++++++++++++++++++
 docs/arm-cpu-features.rst | 137 +++++++++++++++++++++++++++++++++++
 3 files changed, 286 insertions(+), 3 deletions(-)
 create mode 100644 docs/arm-cpu-features.rst
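For orientation, the command can be driven from scripts/qmp/qmp-shell against a running QEMU; the exchange below mirrors the examples in the new document (output trimmed):

  (QEMU) query-cpu-model-expansion type=full model={"name":"max"}
  {"return": {"model": {"name": "max", "props": {"pmu": true, "aarch64": true}}}}

  (QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"pmu":false}}
  {"return": {"model": {"name": "max", "props": {"pmu": false, "aarch64": true}}}}

An invalid combination, such as disabling "aarch64" without KVM, comes back as a GenericError rather than silently succeeding, which is exactly the property the commit message asks for.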
21
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
31
32
diff --git a/qapi/machine-target.json b/qapi/machine-target.json
22
index XXXXXXX..XXXXXXX 100644
33
index XXXXXXX..XXXXXXX 100644
23
--- a/hw/intc/gicv3_internal.h
34
--- a/qapi/machine-target.json
24
+++ b/hw/intc/gicv3_internal.h
35
+++ b/qapi/machine-target.json
25
@@ -XXX,XX +XXX,XX @@ FIELD(MAPC, RDBASE, 16, 32)
36
@@ -XXX,XX +XXX,XX @@
26
#define ITTADDR_MASK MAKE_64BIT_MASK(ITTADDR_SHIFT, ITTADDR_LENGTH)
37
##
27
#define SIZE_MASK 0x1f
38
{ 'struct': 'CpuModelExpansionInfo',
28
39
'data': { 'model': 'CpuModelInfo' },
29
+/* MAPI command fields */
40
- 'if': 'defined(TARGET_S390X) || defined(TARGET_I386)' }
30
+#define EVENTID_MASK ((1ULL << 32) - 1)
41
+ 'if': 'defined(TARGET_S390X) || defined(TARGET_I386) || defined(TARGET_ARM)' }
31
+
42
32
+/* MAPTI command fields */
43
##
33
+#define pINTID_SHIFT 32
44
# @query-cpu-model-expansion:
34
+#define pINTID_MASK MAKE_64BIT_MASK(32, 32)
45
@@ -XXX,XX +XXX,XX @@
35
+
46
# query-cpu-model-expansion while using these is not advised.
36
#define DEVID_SHIFT 32
47
#
37
#define DEVID_MASK MAKE_64BIT_MASK(32, 32)
48
# Some architectures may not support all expansion types. s390x supports
38
49
-# "full" and "static".
39
@@ -XXX,XX +XXX,XX @@ FIELD(MAPC, RDBASE, 16, 32)
50
+# "full" and "static". Arm only supports "full".
40
* Values: | vPEID | ICID |
51
#
52
# Returns: a CpuModelExpansionInfo. Returns an error if expanding CPU models is
53
# not supported, if the model cannot be expanded, if the model contains
54
@@ -XXX,XX +XXX,XX @@
55
'data': { 'type': 'CpuModelExpansionType',
56
'model': 'CpuModelInfo' },
57
'returns': 'CpuModelExpansionInfo',
58
- 'if': 'defined(TARGET_S390X) || defined(TARGET_I386)' }
59
+ 'if': 'defined(TARGET_S390X) || defined(TARGET_I386) || defined(TARGET_ARM)' }
60
61
##
62
# @CpuDefinitionInfo:
63
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
64
index XXXXXXX..XXXXXXX 100644
65
--- a/target/arm/monitor.c
66
+++ b/target/arm/monitor.c
67
@@ -XXX,XX +XXX,XX @@
41
*/
68
*/
42
#define ITS_ITT_ENTRY_SIZE 0xC
69
43
+#define ITE_ENTRY_INTTYPE_SHIFT 1
70
#include "qemu/osdep.h"
44
+#define ITE_ENTRY_INTID_SHIFT 2
71
+#include "hw/boards.h"
45
+#define ITE_ENTRY_INTID_MASK MAKE_64BIT_MASK(2, 24)
72
#include "kvm_arm.h"
46
+#define ITE_ENTRY_INTSP_SHIFT 26
73
+#include "qapi/error.h"
47
+#define ITE_ENTRY_ICID_MASK MAKE_64BIT_MASK(0, 16)
74
+#include "qapi/visitor.h"
48
75
+#include "qapi/qobject-input-visitor.h"
49
/* 16 bits EventId */
76
+#include "qapi/qapi-commands-machine-target.h"
50
#define ITS_IDBITS GICD_TYPER_IDBITS
77
#include "qapi/qapi-commands-misc-target.h"
51
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
78
+#include "qapi/qmp/qerror.h"
52
index XXXXXXX..XXXXXXX 100644
79
+#include "qapi/qmp/qdict.h"
53
--- a/include/hw/intc/arm_gicv3_common.h
80
+#include "qom/qom-qobject.h"
54
+++ b/include/hw/intc/arm_gicv3_common.h
81
55
@@ -XXX,XX +XXX,XX @@
82
static GICCapability *gic_cap_new(int version)
56
#define GICV3_MAXIRQ 1020
83
{
57
#define GICV3_MAXSPI (GICV3_MAXIRQ - GIC_INTERNAL)
84
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
58
85
59
+#define GICV3_LPI_INTID_START 8192
86
return head;
60
+
87
}
61
#define GICV3_REDIST_SIZE 0x20000
88
+
62
63
/* Number of SGI target-list bits */
64
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
65
index XXXXXXX..XXXXXXX 100644
66
--- a/hw/intc/arm_gicv3_its.c
67
+++ b/hw/intc/arm_gicv3_its.c
68
@@ -XXX,XX +XXX,XX @@ struct GICv3ITSClass {
69
void (*parent_reset)(DeviceState *dev);
70
};
71
72
+/*
89
+/*
73
+ * This is an internal enum used to distinguish between LPI triggered
90
+ * These are cpu model features we want to advertise. The order here
74
+ * via command queue and LPI triggered via gits_translater write.
91
+ * matters as this is the order in which qmp_query_cpu_model_expansion
92
+ * will attempt to set them. If there are dependencies between features,
93
+ * then the order that considers those dependencies must be used.
75
+ */
94
+ */
76
+typedef enum ItsCmdType {
95
+static const char *cpu_model_advertised_features[] = {
77
+ NONE = 0, /* internal indication for GITS_TRANSLATER write */
96
+ "aarch64", "pmu",
78
+ CLEAR = 1,
97
+ NULL
79
+ DISCARD = 2,
98
+};
80
+ INT = 3,
99
+
81
+} ItsCmdType;
100
+CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
82
+
101
+ CpuModelInfo *model,
83
+typedef struct {
102
+ Error **errp)
84
+ uint32_t iteh;
85
+ uint64_t itel;
86
+} IteEntry;
87
+
88
static uint64_t baser_base_addr(uint64_t value, uint32_t page_sz)
89
{
90
uint64_t result = 0;
91
@@ -XXX,XX +XXX,XX @@ static uint64_t baser_base_addr(uint64_t value, uint32_t page_sz)
92
return result;
93
}
94
95
+static bool get_cte(GICv3ITSState *s, uint16_t icid, uint64_t *cte,
96
+ MemTxResult *res)
97
+{
103
+{
98
+ AddressSpace *as = &s->gicv3->dma_as;
104
+ CpuModelExpansionInfo *expansion_info;
99
+ uint64_t l2t_addr;
105
+ const QDict *qdict_in = NULL;
100
+ uint64_t value;
106
+ QDict *qdict_out;
101
+ bool valid_l2t;
107
+ ObjectClass *oc;
102
+ uint32_t l2t_id;
108
+ Object *obj;
103
+ uint32_t max_l2_entries;
109
+ const char *name;
104
+
110
+ int i;
105
+ if (s->ct.indirect) {
111
+
106
+ l2t_id = icid / (s->ct.page_sz / L1TABLE_ENTRY_SIZE);
112
+ if (type != CPU_MODEL_EXPANSION_TYPE_FULL) {
107
+
113
+ error_setg(errp, "The requested expansion type is not supported");
108
+ value = address_space_ldq_le(as,
114
+ return NULL;
109
+ s->ct.base_addr +
115
+ }
110
+ (l2t_id * L1TABLE_ENTRY_SIZE),
116
+
111
+ MEMTXATTRS_UNSPECIFIED, res);
117
+ if (!kvm_enabled() && !strcmp(model->name, "host")) {
112
+
118
+ error_setg(errp, "The CPU type '%s' requires KVM", model->name);
113
+ if (*res == MEMTX_OK) {
119
+ return NULL;
114
+ valid_l2t = (value & L2_TABLE_VALID_MASK) != 0;
120
+ }
115
+
121
+
116
+ if (valid_l2t) {
122
+ oc = cpu_class_by_name(TYPE_ARM_CPU, model->name);
117
+ max_l2_entries = s->ct.page_sz / s->ct.entry_sz;
123
+ if (!oc) {
118
+
124
+ error_setg(errp, "The CPU type '%s' is not a recognized ARM CPU type",
119
+ l2t_addr = value & ((1ULL << 51) - 1);
125
+ model->name);
120
+
126
+ return NULL;
121
+ *cte = address_space_ldq_le(as, l2t_addr +
127
+ }
122
+ ((icid % max_l2_entries) * GITS_CTE_SIZE),
128
+
123
+ MEMTXATTRS_UNSPECIFIED, res);
129
+ if (kvm_enabled()) {
124
+ }
130
+ const char *cpu_type = current_machine->cpu_type;
125
+ }
131
+ int len = strlen(cpu_type) - strlen(ARM_CPU_TYPE_SUFFIX);
126
+ } else {
132
+ bool supported = false;
127
+ /* Flat level table */
133
+
128
+ *cte = address_space_ldq_le(as, s->ct.base_addr +
134
+ if (!strcmp(model->name, "host") || !strcmp(model->name, "max")) {
129
+ (icid * GITS_CTE_SIZE),
135
+ /* These are kvmarm's recommended cpu types */
130
+ MEMTXATTRS_UNSPECIFIED, res);
136
+ supported = true;
131
+ }
137
+ } else if (strlen(model->name) == len &&
132
+
138
+ !strncmp(model->name, cpu_type, len)) {
133
+ return (*cte & TABLE_ENTRY_VALID_MASK) != 0;
139
+ /* KVM is enabled and we're using this type, so it works. */
134
+}
140
+ supported = true;
135
+
141
+ }
136
+static bool update_ite(GICv3ITSState *s, uint32_t eventid, uint64_t dte,
142
+ if (!supported) {
137
+ IteEntry ite)
143
+ error_setg(errp, "We cannot guarantee the CPU type '%s' works "
138
+{
144
+ "with KVM on this host", model->name);
139
+ AddressSpace *as = &s->gicv3->dma_as;
145
+ return NULL;
140
+ uint64_t itt_addr;
146
+ }
141
+ MemTxResult res = MEMTX_OK;
147
+ }
142
+
148
+
143
+ itt_addr = (dte & GITS_DTE_ITTADDR_MASK) >> GITS_DTE_ITTADDR_SHIFT;
149
+ if (model->props) {
144
+ itt_addr <<= ITTADDR_SHIFT; /* 256 byte aligned */
150
+ qdict_in = qobject_to(QDict, model->props);
145
+
151
+ if (!qdict_in) {
146
+ address_space_stq_le(as, itt_addr + (eventid * (sizeof(uint64_t) +
152
+ error_setg(errp, QERR_INVALID_PARAMETER_TYPE, "props", "dict");
147
+ sizeof(uint32_t))), ite.itel, MEMTXATTRS_UNSPECIFIED,
153
+ return NULL;
148
+ &res);
154
+ }
149
+
155
+ }
150
+ if (res == MEMTX_OK) {
156
+
151
+ address_space_stl_le(as, itt_addr + (eventid * (sizeof(uint64_t) +
157
+ obj = object_new(object_class_get_name(oc));
152
+ sizeof(uint32_t))) + sizeof(uint32_t), ite.iteh,
158
+
153
+ MEMTXATTRS_UNSPECIFIED, &res);
159
+ if (qdict_in) {
154
+ }
160
+ Visitor *visitor;
155
+ if (res != MEMTX_OK) {
161
+ Error *err = NULL;
156
+ return false;
162
+
157
+ } else {
163
+ visitor = qobject_input_visitor_new(model->props);
158
+ return true;
164
+ visit_start_struct(visitor, NULL, NULL, 0, &err);
159
+ }
165
+ if (err) {
160
+}
166
+ visit_free(visitor);
161
+
167
+ object_unref(obj);
162
+static bool get_ite(GICv3ITSState *s, uint32_t eventid, uint64_t dte,
168
+ error_propagate(errp, err);
163
+ uint16_t *icid, uint32_t *pIntid, MemTxResult *res)
169
+ return NULL;
164
+{
170
+ }
165
+ AddressSpace *as = &s->gicv3->dma_as;
171
+
166
+ uint64_t itt_addr;
172
+ i = 0;
167
+ bool status = false;
173
+ while ((name = cpu_model_advertised_features[i++]) != NULL) {
168
+ IteEntry ite = {};
174
+ if (qdict_get(qdict_in, name)) {
169
+
175
+ object_property_set(obj, visitor, name, &err);
170
+ itt_addr = (dte & GITS_DTE_ITTADDR_MASK) >> GITS_DTE_ITTADDR_SHIFT;
176
+ if (err) {
171
+ itt_addr <<= ITTADDR_SHIFT; /* 256 byte aligned */
177
+ break;
172
+
173
+ ite.itel = address_space_ldq_le(as, itt_addr +
174
+ (eventid * (sizeof(uint64_t) +
175
+ sizeof(uint32_t))), MEMTXATTRS_UNSPECIFIED,
176
+ res);
177
+
178
+ if (*res == MEMTX_OK) {
179
+ ite.iteh = address_space_ldl_le(as, itt_addr +
180
+ (eventid * (sizeof(uint64_t) +
181
+ sizeof(uint32_t))) + sizeof(uint32_t),
182
+ MEMTXATTRS_UNSPECIFIED, res);
183
+
184
+ if (*res == MEMTX_OK) {
185
+ if (ite.itel & TABLE_ENTRY_VALID_MASK) {
186
+ if ((ite.itel >> ITE_ENTRY_INTTYPE_SHIFT) &
187
+ GITS_TYPE_PHYSICAL) {
188
+ *pIntid = (ite.itel & ITE_ENTRY_INTID_MASK) >>
189
+ ITE_ENTRY_INTID_SHIFT;
190
+ *icid = ite.iteh & ITE_ENTRY_ICID_MASK;
191
+ status = true;
192
+ }
178
+ }
193
+ }
179
+ }
194
+ }
180
+ }
195
+ }
181
+
196
+ return status;
182
+ if (!err) {
183
+ visit_check_struct(visitor, &err);
184
+ }
185
+ visit_end_struct(visitor, NULL);
186
+ visit_free(visitor);
187
+ if (err) {
188
+ object_unref(obj);
189
+ error_propagate(errp, err);
190
+ return NULL;
191
+ }
192
+ }
193
+
194
+ expansion_info = g_new0(CpuModelExpansionInfo, 1);
195
+ expansion_info->model = g_malloc0(sizeof(*expansion_info->model));
196
+ expansion_info->model->name = g_strdup(model->name);
197
+
198
+ qdict_out = qdict_new();
199
+
200
+ i = 0;
201
+ while ((name = cpu_model_advertised_features[i++]) != NULL) {
202
+ ObjectProperty *prop = object_property_find(obj, name, NULL);
203
+ if (prop) {
204
+ Error *err = NULL;
205
+ QObject *value;
206
+
207
+ assert(prop->get);
208
+ value = object_property_get_qobject(obj, name, &err);
209
+ assert(!err);
210
+
211
+ qdict_put_obj(qdict_out, name, value);
212
+ }
213
+ }
214
+
215
+ if (!qdict_size(qdict_out)) {
216
+ qobject_unref(qdict_out);
217
+ } else {
218
+ expansion_info->model->props = QOBJECT(qdict_out);
219
+ expansion_info->model->has_props = true;
220
+ }
221
+
222
+ object_unref(obj);
223
+
224
+ return expansion_info;
197
+}
225
+}
198
+
226
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
199
+static uint64_t get_dte(GICv3ITSState *s, uint32_t devid, MemTxResult *res)
227
new file mode 100644
200
+{
228
index XXXXXXX..XXXXXXX
201
+ AddressSpace *as = &s->gicv3->dma_as;
229
--- /dev/null
202
+ uint64_t l2t_addr;
230
+++ b/docs/arm-cpu-features.rst
203
+ uint64_t value;
231
@@ -XXX,XX +XXX,XX @@
204
+ bool valid_l2t;
232
+================
205
+ uint32_t l2t_id;
233
+ARM CPU Features
206
+ uint32_t max_l2_entries;
234
+================
207
+
235
+
208
+ if (s->dt.indirect) {
236
+Examples of probing and using ARM CPU features
209
+ l2t_id = devid / (s->dt.page_sz / L1TABLE_ENTRY_SIZE);
237
+
210
+
238
+Introduction
211
+ value = address_space_ldq_le(as,
239
+============
212
+ s->dt.base_addr +
240
+
213
+ (l2t_id * L1TABLE_ENTRY_SIZE),
241
+CPU features are optional features that a CPU of supporting type may
214
+ MEMTXATTRS_UNSPECIFIED, res);
242
+choose to implement or not. In QEMU, optional CPU features have
215
+
243
+corresponding boolean CPU proprieties that, when enabled, indicate
216
+ if (*res == MEMTX_OK) {
244
+that the feature is implemented, and, conversely, when disabled,
217
+ valid_l2t = (value & L2_TABLE_VALID_MASK) != 0;
245
+indicate that it is not implemented. An example of an ARM CPU feature
218
+
246
+is the Performance Monitoring Unit (PMU). CPU types such as the
219
+ if (valid_l2t) {
247
+Cortex-A15 and the Cortex-A57, which respectively implement ARM
220
+ max_l2_entries = s->dt.page_sz / s->dt.entry_sz;
248
+architecture reference manuals ARMv7-A and ARMv8-A, may both optionally
221
+
249
+implement PMUs. For example, if a user wants to use a Cortex-A15 without
222
+ l2t_addr = value & ((1ULL << 51) - 1);
250
+a PMU, then the `-cpu` parameter should contain `pmu=off` on the QEMU
223
+
251
+command line, i.e. `-cpu cortex-a15,pmu=off`.
224
+ value = address_space_ldq_le(as, l2t_addr +
252
+
225
+ ((devid % max_l2_entries) * GITS_DTE_SIZE),
253
+As not all CPU types support all optional CPU features, then whether or
226
+ MEMTXATTRS_UNSPECIFIED, res);
254
+not a CPU property exists depends on the CPU type. For example, CPUs
227
+ }
255
+that implement the ARMv8-A architecture reference manual may optionally
228
+ }
256
+support the AArch32 CPU feature, which may be enabled by disabling the
229
+ } else {
257
+`aarch64` CPU property. A CPU type such as the Cortex-A15, which does
230
+ /* Flat level table */
258
+not implement ARMv8-A, will not have the `aarch64` CPU property.
231
+ value = address_space_ldq_le(as, s->dt.base_addr +
259
+
232
+ (devid * GITS_DTE_SIZE),
260
+QEMU's support may be limited for some CPU features, only partially
233
+ MEMTXATTRS_UNSPECIFIED, res);
261
+supporting the feature or only supporting the feature under certain
234
+ }
262
+configurations. For example, the `aarch64` CPU feature, which, when
235
+
263
+disabled, enables the optional AArch32 CPU feature, is only supported
236
+ return value;
264
+when using the KVM accelerator and when running on a host CPU type that
237
+}
265
+supports the feature.
238
+
266
+
239
+/*
267
+CPU Feature Probing
240
+ * This function handles the processing of following commands based on
268
+===================
241
+ * the ItsCmdType parameter passed:-
269
+
242
+ * 1. triggering of lpi interrupt translation via ITS INT command
270
+Determining which CPU features are available and functional for a given
243
+ * 2. triggering of lpi interrupt translation via gits_translater register
271
+CPU type is possible with the `query-cpu-model-expansion` QMP command.
244
+ * 3. handling of ITS CLEAR command
272
+Below are some examples where `scripts/qmp/qmp-shell` (see the top comment
245
+ * 4. handling of ITS DISCARD command
273
+block in the script for usage) is used to issue the QMP commands.
246
+ */
274
+
247
+static bool process_its_cmd(GICv3ITSState *s, uint64_t value, uint32_t offset,
275
+(1) Determine which CPU features are available for the `max` CPU type
248
+ ItsCmdType cmd)
276
+ (Note, we started QEMU with qemu-system-aarch64, so `max` is
249
+{
277
+ implementing the ARMv8-A reference manual in this case)::
250
+ AddressSpace *as = &s->gicv3->dma_as;
278
+
251
+ uint32_t devid, eventid;
279
+ (QEMU) query-cpu-model-expansion type=full model={"name":"max"}
252
+ MemTxResult res = MEMTX_OK;
280
+ { "return": {
253
+ bool dte_valid;
281
+ "model": { "name": "max", "props": {
254
+ uint64_t dte = 0;
282
+ "pmu": true, "aarch64": true
255
+ uint32_t max_eventid;
283
+ }}}}
256
+ uint16_t icid = 0;
284
+
257
+ uint32_t pIntid = 0;
285
+We see that the `max` CPU type has the `pmu` and `aarch64` CPU features.
258
+ bool ite_valid = false;
286
+We also see that the CPU features are enabled, as they are all `true`.
259
+ uint64_t cte = 0;
287
+
260
+ bool cte_valid = false;
288
+(2) Let's try to disable the PMU::
261
+ bool result = false;
289
+
262
+
290
+ (QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"pmu":false}}
263
+ if (cmd == NONE) {
291
+ { "return": {
264
+ devid = offset;
292
+ "model": { "name": "max", "props": {
265
+ } else {
293
+ "pmu": false, "aarch64": true
266
+ devid = ((value & DEVID_MASK) >> DEVID_SHIFT);
294
+ }}}}
267
+
295
+
268
+ offset += NUM_BYTES_IN_DW;
296
+We see it worked, as `pmu` is now `false`.
269
+ value = address_space_ldq_le(as, s->cq.base_addr + offset,
297
+
270
+ MEMTXATTRS_UNSPECIFIED, &res);
298
+(3) Let's try to disable `aarch64`, which enables the AArch32 CPU feature::
271
+ }
299
+
272
+
300
+ (QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"aarch64":false}}
273
+ if (res != MEMTX_OK) {
301
+ {"error": {
274
+ return result;
302
+ "class": "GenericError", "desc":
275
+ }
303
+ "'aarch64' feature cannot be disabled unless KVM is enabled and 32-bit EL1 is supported"
276
+
304
+ }}
277
+ eventid = (value & EVENTID_MASK);
305
+
278
+
306
+It looks like this feature is limited to a configuration we do not
279
+ dte = get_dte(s, devid, &res);
307
+currently have.
280
+
308
+
281
+ if (res != MEMTX_OK) {
309
+(4) Let's try probing CPU features for the Cortex-A15 CPU type::
282
+ return result;
310
+
283
+ }
311
+ (QEMU) query-cpu-model-expansion type=full model={"name":"cortex-a15"}
284
+ dte_valid = dte & TABLE_ENTRY_VALID_MASK;
312
+ {"return": {"model": {"name": "cortex-a15", "props": {"pmu": true}}}}
285
+
313
+
286
+ if (dte_valid) {
314
+Only the `pmu` CPU feature is available.
287
+ max_eventid = (1UL << (((dte >> 1U) & SIZE_MASK) + 1));
315
+
288
+
316
+A note about CPU feature dependencies
289
+ ite_valid = get_ite(s, eventid, dte, &icid, &pIntid, &res);
317
+-------------------------------------
290
+
318
+
291
+ if (res != MEMTX_OK) {
319
+It's possible for features to have dependencies on other features. I.e.
292
+ return result;
320
+it may be possible to change one feature at a time without error, but
293
+ }
321
+when attempting to change all features at once an error could occur
294
+
322
+depending on the order they are processed. It's also possible changing
295
+ if (ite_valid) {
323
+all at once doesn't generate an error, because a feature's dependencies
296
+ cte_valid = get_cte(s, icid, &cte, &res);
324
+are satisfied with other features, but the same feature cannot be changed
297
+ }
325
+independently without error. For these reasons callers should always
298
+
326
+attempt to make their desired changes all at once in order to ensure the
299
+ if (res != MEMTX_OK) {
327
+collection is valid.
300
+ return result;
328
+
301
+ }
329
+A note about CPU models and KVM
302
+ }
330
+-------------------------------
303
+
331
+
304
+ if ((devid > s->dt.maxids.max_devids) || !dte_valid || !ite_valid ||
332
+Named CPU models generally do not work with KVM. There are a few cases
305
+ !cte_valid || (eventid > max_eventid)) {
333
+that do work, e.g. using the named CPU model `cortex-a57` with KVM on a
306
+ qemu_log_mask(LOG_GUEST_ERROR,
334
+seattle host, but mostly if KVM is enabled the `host` CPU type must be
307
+ "%s: invalid command attributes "
335
+used. This means the guest is provided all the same CPU features as the
308
+ "devid %d or eventid %d or invalid dte %d or"
336
+host CPU type has. And, for this reason, the `host` CPU type should
309
+ "invalid cte %d or invalid ite %d\n",
337
+enable all CPU features that the host has by default. Indeed it's even
310
+ __func__, devid, eventid, dte_valid, cte_valid,
338
+a bit strange to allow disabling CPU features that the host has when using
311
+ ite_valid);
339
+the `host` CPU type, but in the absence of CPU models it's the best we can
312
+ /*
340
+do if we want to launch guests without all the host's CPU features enabled.
313
+ * in this implementation, in case of error
341
+
314
+ * we ignore this command and move onto the next
342
+Enabling KVM also affects the `query-cpu-model-expansion` QMP command. The
315
+ * command in the queue
343
+affect is not only limited to specific features, as pointed out in example
316
+ */
344
+(3) of "CPU Feature Probing", but also to which CPU types may be expanded.
317
+ } else {
345
+When KVM is enabled, only the `max`, `host`, and current CPU type may be
318
+ /*
346
+expanded. This restriction is necessary as it's not possible to know all
319
+ * Current implementation only supports rdbase == procnum
347
+CPU types that may work with KVM, but it does impose a small risk of users
320
+ * Hence rdbase physical address is ignored
348
+experiencing unexpected errors. For example on a seattle, as mentioned
321
+ */
349
+above, the `cortex-a57` CPU type is also valid when KVM is enabled.
322
+ if (cmd == DISCARD) {
350
+Therefore a user could use the `host` CPU type for the current type, but
323
+ IteEntry ite = {};
351
+then attempt to query `cortex-a57`, however that query will fail with our
324
+ /* remove mapping from interrupt translation table */
352
+restrictions. This shouldn't be an issue though as management layers and
325
+ result = update_ite(s, eventid, dte, ite);
353
+users have been preferring the `host` CPU type for use with KVM for quite
326
+ }
354
+some time. Additionally, if the KVM-enabled QEMU instance running on a
327
+ }
355
+seattle host is using the `cortex-a57` CPU type, then querying `cortex-a57`
328
+
356
+will work.
329
+ return result;
357
+
330
+}
358
+Using CPU Features
331
+
359
+==================
332
+static bool process_mapti(GICv3ITSState *s, uint64_t value, uint32_t offset,
360
+
333
+ bool ignore_pInt)
361
+After determining which CPU features are available and supported for a
334
+{
362
+given CPU type, then they may be selectively enabled or disabled on the
335
+ AddressSpace *as = &s->gicv3->dma_as;
363
+QEMU command line with that CPU type::
336
+ uint32_t devid, eventid;
364
+
337
+ uint32_t pIntid = 0;
365
+ $ qemu-system-aarch64 -M virt -cpu max,pmu=off
338
+ uint32_t max_eventid, max_Intid;
366
+
339
+ bool dte_valid;
367
+The example above disables the PMU for the `max` CPU type.
340
+ MemTxResult res = MEMTX_OK;
368
+
341
+ uint16_t icid = 0;
342
+ uint64_t dte = 0;
343
+ IteEntry ite;
344
+ uint32_t int_spurious = INTID_SPURIOUS;
345
+ bool result = false;
346
+
347
+ devid = ((value & DEVID_MASK) >> DEVID_SHIFT);
348
+ offset += NUM_BYTES_IN_DW;
349
+ value = address_space_ldq_le(as, s->cq.base_addr + offset,
350
+ MEMTXATTRS_UNSPECIFIED, &res);
351
+
352
+ if (res != MEMTX_OK) {
353
+ return result;
354
+ }
355
+
356
+ eventid = (value & EVENTID_MASK);
357
+
358
+ if (!ignore_pInt) {
359
+ pIntid = ((value & pINTID_MASK) >> pINTID_SHIFT);
360
+ }
361
+
362
+ offset += NUM_BYTES_IN_DW;
363
+ value = address_space_ldq_le(as, s->cq.base_addr + offset,
364
+ MEMTXATTRS_UNSPECIFIED, &res);
365
+
366
+ if (res != MEMTX_OK) {
367
+ return result;
368
+ }
369
+
370
+ icid = value & ICID_MASK;
371
+
372
+ dte = get_dte(s, devid, &res);
373
+
374
+ if (res != MEMTX_OK) {
375
+ return result;
376
+ }
377
+ dte_valid = dte & TABLE_ENTRY_VALID_MASK;
378
+
379
+ max_eventid = (1UL << (((dte >> 1U) & SIZE_MASK) + 1));
380
+
381
+ if (!ignore_pInt) {
382
+ max_Intid = (1ULL << (GICD_TYPER_IDBITS + 1)) - 1;
383
+ }
384
+
385
+ if ((devid > s->dt.maxids.max_devids) || (icid > s->ct.maxids.max_collids)
386
+ || !dte_valid || (eventid > max_eventid) ||
387
+ (!ignore_pInt && (((pIntid < GICV3_LPI_INTID_START) ||
388
+ (pIntid > max_Intid)) && (pIntid != INTID_SPURIOUS)))) {
389
+ qemu_log_mask(LOG_GUEST_ERROR,
390
+ "%s: invalid command attributes "
391
+ "devid %d or icid %d or eventid %d or pIntid %d or"
392
+ "unmapped dte %d\n", __func__, devid, icid, eventid,
393
+ pIntid, dte_valid);
394
+ /*
395
+ * in this implementation, in case of error
396
+ * we ignore this command and move onto the next
397
+ * command in the queue
398
+ */
399
+ } else {
400
+ /* add ite entry to interrupt translation table */
401
+ ite.itel = (dte_valid & TABLE_ENTRY_VALID_MASK) |
402
+ (GITS_TYPE_PHYSICAL << ITE_ENTRY_INTTYPE_SHIFT);
403
+
404
+ if (ignore_pInt) {
405
+ ite.itel |= (eventid << ITE_ENTRY_INTID_SHIFT);
406
+ } else {
407
+ ite.itel |= (pIntid << ITE_ENTRY_INTID_SHIFT);
408
+ }
409
+ ite.itel |= (int_spurious << ITE_ENTRY_INTSP_SHIFT);
410
+ ite.iteh = icid;
411
+
412
+ result = update_ite(s, eventid, dte, ite);
413
+ }
414
+
415
+ return result;
416
+}
417
+
418
static bool update_cte(GICv3ITSState *s, uint16_t icid, bool valid,
419
uint64_t rdbase)
420
{
421
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
422
423
switch (cmd) {
424
case GITS_CMD_INT:
425
+ res = process_its_cmd(s, data, cq_offset, INT);
426
break;
427
case GITS_CMD_CLEAR:
428
+ res = process_its_cmd(s, data, cq_offset, CLEAR);
429
break;
430
case GITS_CMD_SYNC:
431
/*
432
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
433
result = process_mapc(s, cq_offset);
434
break;
435
case GITS_CMD_MAPTI:
436
+ result = process_mapti(s, data, cq_offset, false);
437
break;
438
case GITS_CMD_MAPI:
439
+ result = process_mapti(s, data, cq_offset, true);
440
break;
441
case GITS_CMD_DISCARD:
442
+ result = process_its_cmd(s, data, cq_offset, DISCARD);
443
break;
444
case GITS_CMD_INV:
445
case GITS_CMD_INVALL:
446
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicv3_its_translation_write(void *opaque, hwaddr offset,
447
uint64_t data, unsigned size,
448
MemTxAttrs attrs)
449
{
450
- return MEMTX_OK;
451
+ GICv3ITSState *s = (GICv3ITSState *)opaque;
452
+ bool result = true;
453
+ uint32_t devid = 0;
454
+
455
+ switch (offset) {
456
+ case GITS_TRANSLATER:
457
+ if (s->ctlr & ITS_CTLR_ENABLED) {
458
+ devid = attrs.requester_id;
459
+ result = process_its_cmd(s, data, devid, NONE);
460
+ }
461
+ break;
462
+ default:
463
+ break;
464
+ }
465
+
466
+ if (result) {
467
+ return MEMTX_OK;
468
+ } else {
469
+ return MEMTX_ERROR;
470
+ }
471
}
472
473
static bool its_writel(GICv3ITSState *s, hwaddr offset,
474
--
369
--
475
2.20.1
370
2.20.1
476
371
477
372
From: Shashi Mallela <shashi.mallela@linaro.org>

Added register definitions relevant to ITS,implemented overall
ITS device framework with stubs for ITS control and translater
regions read/write,extended ITS common to handle mmio init between
existing kvm device and newer qemu device.

Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
Message-id: 20210910143951.92242-2-shashi.mallela@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gicv3_internal.h | 96 +++++++++-
 include/hw/intc/arm_gicv3_its_common.h | 9 +-
 hw/intc/arm_gicv3_its.c | 241 +++++++++++++++++++++++++
 hw/intc/arm_gicv3_its_common.c | 7 +-
 hw/intc/arm_gicv3_its_kvm.c | 2 +-
 hw/intc/meson.build | 1 +
 6 files changed, 342 insertions(+), 14 deletions(-)
 create mode 100644 hw/intc/arm_gicv3_its.c


From: Andrew Jones <drjones@redhat.com>

Now that Arm CPUs have advertised features lets add tests to ensure
we maintain their expected availability with and without KVM.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 20191031142734.8590-3-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 tests/Makefile.include | 5 +-
 tests/arm-cpu-features.c | 253 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 257 insertions(+), 1 deletion(-)
 create mode 100644 tests/arm-cpu-features.c

diff --git a/tests/Makefile.include b/tests/Makefile.include
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
25
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
26
--- a/hw/intc/gicv3_internal.h
18
--- a/tests/Makefile.include
27
+++ b/hw/intc/gicv3_internal.h
19
+++ b/tests/Makefile.include
28
@@ -XXX,XX +XXX,XX @@
20
@@ -XXX,XX +XXX,XX @@ check-qtest-sparc64-$(CONFIG_ISA_TESTDEV) = tests/endianness-test$(EXESUF)
29
#ifndef QEMU_ARM_GICV3_INTERNAL_H
21
check-qtest-sparc64-y += tests/prom-env-test$(EXESUF)
30
#define QEMU_ARM_GICV3_INTERNAL_H
22
check-qtest-sparc64-y += tests/boot-serial-test$(EXESUF)
31
23
32
+#include "hw/registerfields.h"
24
+check-qtest-arm-y += tests/arm-cpu-features$(EXESUF)
33
#include "hw/intc/arm_gicv3_common.h"
25
check-qtest-arm-y += tests/microbit-test$(EXESUF)
34
26
check-qtest-arm-y += tests/m25p80-test$(EXESUF)
35
/* Distributor registers, as offsets from the distributor base address */
27
check-qtest-arm-y += tests/test-arm-mptimer$(EXESUF)
36
@@ -XXX,XX +XXX,XX @@
28
@@ -XXX,XX +XXX,XX @@ check-qtest-arm-y += tests/boot-serial-test$(EXESUF)
37
#define GICD_CTLR_E1NWF (1U << 7)
29
check-qtest-arm-y += tests/hexloader-test$(EXESUF)
38
#define GICD_CTLR_RWP (1U << 31)
30
check-qtest-arm-$(CONFIG_PFLASH_CFI02) += tests/pflash-cfi02-test$(EXESUF)
39
31
40
+/* 16 bits EventId */
32
-check-qtest-aarch64-y = tests/numa-test$(EXESUF)
41
+#define GICD_TYPER_IDBITS 0xf
33
+check-qtest-aarch64-y += tests/arm-cpu-features$(EXESUF)
42
+
34
+check-qtest-aarch64-y += tests/numa-test$(EXESUF)
43
/*
35
check-qtest-aarch64-y += tests/boot-serial-test$(EXESUF)
44
* Redistributor frame offsets from RD_base
36
check-qtest-aarch64-y += tests/migration-test$(EXESUF)
45
*/
37
# TODO: once aarch64 TCG is fixed on ARM 32 bit host, make test unconditional
46
@@ -XXX,XX +XXX,XX @@
38
@@ -XXX,XX +XXX,XX @@ tests/test-qapi-util$(EXESUF): tests/test-qapi-util.o $(test-util-obj-y)
47
#define GICR_WAKER_ProcessorSleep (1U << 1)
39
tests/numa-test$(EXESUF): tests/numa-test.o
48
#define GICR_WAKER_ChildrenAsleep (1U << 2)
40
tests/vmgenid-test$(EXESUF): tests/vmgenid-test.o tests/boot-sector.o tests/acpi-utils.o
49
41
tests/cdrom-test$(EXESUF): tests/cdrom-test.o tests/boot-sector.o $(libqos-obj-y)
50
-#define GICR_PROPBASER_OUTER_CACHEABILITY_MASK (7ULL << 56)
42
+tests/arm-cpu-features$(EXESUF): tests/arm-cpu-features.o
51
-#define GICR_PROPBASER_ADDR_MASK (0xfffffffffULL << 12)
43
52
-#define GICR_PROPBASER_SHAREABILITY_MASK (3U << 10)
44
tests/migration/stress$(EXESUF): tests/migration/stress.o
53
-#define GICR_PROPBASER_CACHEABILITY_MASK (7U << 7)
45
    $(call quiet-command, $(LINKPROG) -static -O3 $(PTHREAD_LIB) -o $@ $< ,"LINK","$(TARGET_DIR)$@")
54
-#define GICR_PROPBASER_IDBITS_MASK (0x1f)
46
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
55
+FIELD(GICR_PROPBASER, IDBITS, 0, 5)
56
+FIELD(GICR_PROPBASER, INNERCACHE, 7, 3)
57
+FIELD(GICR_PROPBASER, SHAREABILITY, 10, 2)
58
+FIELD(GICR_PROPBASER, PHYADDR, 12, 40)
59
+FIELD(GICR_PROPBASER, OUTERCACHE, 56, 3)
60
61
-#define GICR_PENDBASER_PTZ (1ULL << 62)
62
-#define GICR_PENDBASER_OUTER_CACHEABILITY_MASK (7ULL << 56)
63
-#define GICR_PENDBASER_ADDR_MASK (0xffffffffULL << 16)
64
-#define GICR_PENDBASER_SHAREABILITY_MASK (3U << 10)
65
-#define GICR_PENDBASER_CACHEABILITY_MASK (7U << 7)
66
+FIELD(GICR_PENDBASER, INNERCACHE, 7, 3)
67
+FIELD(GICR_PENDBASER, SHAREABILITY, 10, 2)
68
+FIELD(GICR_PENDBASER, PHYADDR, 16, 36)
69
+FIELD(GICR_PENDBASER, OUTERCACHE, 56, 3)
70
+FIELD(GICR_PENDBASER, PTZ, 62, 1)
71
72
#define ICC_CTLR_EL1_CBPR (1U << 0)
73
#define ICC_CTLR_EL1_EOIMODE (1U << 1)
74
@@ -XXX,XX +XXX,XX @@
75
#define ICH_VTR_EL2_PREBITS_SHIFT 26
76
#define ICH_VTR_EL2_PRIBITS_SHIFT 29
77
78
+/* ITS Registers */
79
+
80
+FIELD(GITS_BASER, SIZE, 0, 8)
81
+FIELD(GITS_BASER, PAGESIZE, 8, 2)
82
+FIELD(GITS_BASER, SHAREABILITY, 10, 2)
83
+FIELD(GITS_BASER, PHYADDR, 12, 36)
84
+FIELD(GITS_BASER, PHYADDRL_64K, 16, 32)
85
+FIELD(GITS_BASER, PHYADDRH_64K, 12, 4)
86
+FIELD(GITS_BASER, ENTRYSIZE, 48, 5)
87
+FIELD(GITS_BASER, OUTERCACHE, 53, 3)
88
+FIELD(GITS_BASER, TYPE, 56, 3)
89
+FIELD(GITS_BASER, INNERCACHE, 59, 3)
90
+FIELD(GITS_BASER, INDIRECT, 62, 1)
91
+FIELD(GITS_BASER, VALID, 63, 1)
92
+
93
+FIELD(GITS_CTLR, QUIESCENT, 31, 1)
94
+
95
+FIELD(GITS_TYPER, PHYSICAL, 0, 1)
96
+FIELD(GITS_TYPER, ITT_ENTRY_SIZE, 4, 4)
97
+FIELD(GITS_TYPER, IDBITS, 8, 5)
98
+FIELD(GITS_TYPER, DEVBITS, 13, 5)
99
+FIELD(GITS_TYPER, SEIS, 18, 1)
100
+FIELD(GITS_TYPER, PTA, 19, 1)
101
+FIELD(GITS_TYPER, CIDBITS, 32, 4)
102
+FIELD(GITS_TYPER, CIL, 36, 1)
103
+
104
+#define GITS_BASER_PAGESIZE_4K 0
105
+#define GITS_BASER_PAGESIZE_16K 1
106
+#define GITS_BASER_PAGESIZE_64K 2
107
+
108
+#define GITS_BASER_TYPE_DEVICE 1ULL
109
+#define GITS_BASER_TYPE_COLLECTION 4ULL
110
+
111
+/**
112
+ * Default features advertised by this version of ITS
113
+ */
114
+/* Physical LPIs supported */
115
+#define GITS_TYPE_PHYSICAL (1U << 0)
116
+
117
+/*
118
+ * 12 bytes Interrupt translation Table Entry size
119
+ * as per Table 5.3 in GICv3 spec
120
+ * ITE Lower 8 Bytes
121
+ * Bits: | 49 ... 26 | 25 ... 2 | 1 | 0 |
122
+ * Values: | 1023 | IntNum | IntType | Valid |
123
+ * ITE Higher 4 Bytes
124
+ * Bits: | 31 ... 16 | 15 ...0 |
125
+ * Values: | vPEID | ICID |
126
+ */
127
+#define ITS_ITT_ENTRY_SIZE 0xC
128
+
129
+/* 16 bits EventId */
130
+#define ITS_IDBITS GICD_TYPER_IDBITS
131
+
132
+/* 16 bits DeviceId */
133
+#define ITS_DEVBITS 0xF
134
+
135
+/* 16 bits CollectionId */
136
+#define ITS_CIDBITS 0xF
137
+
138
+/*
139
+ * 8 bytes Device Table Entry size
140
+ * Valid = 1 bit,ITTAddr = 44 bits,Size = 5 bits
141
+ */
142
+#define GITS_DTE_SIZE (0x8ULL)
143
+
144
+/*
145
+ * 8 bytes Collection Table Entry size
146
+ * Valid = 1 bit,RDBase = 36 bits(considering max RDBASE)
147
+ */
148
+#define GITS_CTE_SIZE (0x8ULL)
149
+
150
/* Special interrupt IDs */
151
#define INTID_SECURE 1020
152
#define INTID_NONSECURE 1021
153
diff --git a/include/hw/intc/arm_gicv3_its_common.h b/include/hw/intc/arm_gicv3_its_common.h
154
index XXXXXXX..XXXXXXX 100644
155
--- a/include/hw/intc/arm_gicv3_its_common.h
156
+++ b/include/hw/intc/arm_gicv3_its_common.h
157
@@ -XXX,XX +XXX,XX @@
158
#include "hw/intc/arm_gicv3_common.h"
159
#include "qom/object.h"
160
161
+#define TYPE_ARM_GICV3_ITS "arm-gicv3-its"
162
+
163
#define ITS_CONTROL_SIZE 0x10000
164
#define ITS_TRANS_SIZE 0x10000
165
#define ITS_SIZE (ITS_CONTROL_SIZE + ITS_TRANS_SIZE)
166
167
#define GITS_CTLR 0x0
168
#define GITS_IIDR 0x4
169
+#define GITS_TYPER 0x8
170
#define GITS_CBASER 0x80
171
#define GITS_CWRITER 0x88
172
#define GITS_CREADR 0x90
173
#define GITS_BASER 0x100
174
175
+#define GITS_TRANSLATER 0x0040
176
+
177
struct GICv3ITSState {
178
SysBusDevice parent_obj;
179
180
@@ -XXX,XX +XXX,XX @@ struct GICv3ITSState {
181
/* Registers */
182
uint32_t ctlr;
183
uint32_t iidr;
184
+ uint64_t typer;
185
uint64_t cbaser;
186
uint64_t cwriter;
187
uint64_t creadr;
188
@@ -XXX,XX +XXX,XX @@ struct GICv3ITSState {
189
190
typedef struct GICv3ITSState GICv3ITSState;
191
192
-void gicv3_its_init_mmio(GICv3ITSState *s, const MemoryRegionOps *ops);
193
+void gicv3_its_init_mmio(GICv3ITSState *s, const MemoryRegionOps *ops,
194
+ const MemoryRegionOps *tops);
195
196
#define TYPE_ARM_GICV3_ITS_COMMON "arm-gicv3-its-common"
197
typedef struct GICv3ITSCommonClass GICv3ITSCommonClass;
198
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
199
new file mode 100644
47
new file mode 100644
200
index XXXXXXX..XXXXXXX
48
index XXXXXXX..XXXXXXX
201
--- /dev/null
49
--- /dev/null
202
+++ b/hw/intc/arm_gicv3_its.c
50
+++ b/tests/arm-cpu-features.c
203
@@ -XXX,XX +XXX,XX @@
51
@@ -XXX,XX +XXX,XX @@
204
+/*
52
+/*
205
+ * ITS emulation for a GICv3-based system
53
+ * Arm CPU feature test cases
206
+ *
54
+ *
207
+ * Copyright Linaro.org 2021
55
+ * Copyright (c) 2019 Red Hat Inc.
56
+ * Authors:
57
+ * Andrew Jones <drjones@redhat.com>
208
+ *
58
+ *
209
+ * Authors:
59
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
210
+ * Shashi Mallela <shashi.mallela@linaro.org>
60
+ * See the COPYING file in the top-level directory.
211
+ *
212
+ * This work is licensed under the terms of the GNU GPL, version 2 or (at your
213
+ * option) any later version. See the COPYING file in the top-level directory.
214
+ *
215
+ */
61
+ */
216
+
217
+#include "qemu/osdep.h"
62
+#include "qemu/osdep.h"
218
+#include "qemu/log.h"
63
+#include "libqtest.h"
219
+#include "hw/qdev-properties.h"
64
+#include "qapi/qmp/qdict.h"
220
+#include "hw/intc/arm_gicv3_its_common.h"
65
+#include "qapi/qmp/qjson.h"
221
+#include "gicv3_internal.h"
66
+
222
+#include "qom/object.h"
67
+#define MACHINE "-machine virt,gic-version=max,accel=tcg "
223
+#include "qapi/error.h"
68
+#define MACHINE_KVM "-machine virt,gic-version=max,accel=kvm:tcg "
224
+
69
+#define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
225
+typedef struct GICv3ITSClass GICv3ITSClass;
70
+ " 'arguments': { 'type': 'full', "
226
+/* This is reusing the GICv3ITSState typedef from ARM_GICV3_ITS_COMMON */
71
+#define QUERY_TAIL "}}"
227
+DECLARE_OBJ_CHECKERS(GICv3ITSState, GICv3ITSClass,
72
+
228
+ ARM_GICV3_ITS, TYPE_ARM_GICV3_ITS)
73
+static bool kvm_enabled(QTestState *qts)
229
+
74
+{
230
+struct GICv3ITSClass {
75
+ QDict *resp, *qdict;
231
+ GICv3ITSCommonClass parent_class;
76
+ bool enabled;
232
+ void (*parent_reset)(DeviceState *dev);
77
+
233
+};
78
+ resp = qtest_qmp(qts, "{ 'execute': 'query-kvm' }");
234
+
79
+ g_assert(qdict_haskey(resp, "return"));
235
+static MemTxResult gicv3_its_translation_write(void *opaque, hwaddr offset,
80
+ qdict = qdict_get_qdict(resp, "return");
236
+ uint64_t data, unsigned size,
81
+ g_assert(qdict_haskey(qdict, "enabled"));
237
+ MemTxAttrs attrs)
82
+ enabled = qdict_get_bool(qdict, "enabled");
238
+{
83
+ qobject_unref(resp);
239
+ return MEMTX_OK;
84
+
240
+}
85
+ return enabled;
241
+
86
+}
242
+static bool its_writel(GICv3ITSState *s, hwaddr offset,
87
+
243
+ uint64_t value, MemTxAttrs attrs)
88
+static QDict *do_query_no_props(QTestState *qts, const char *cpu_type)
244
+{
89
+{
245
+ bool result = true;
90
+ return qtest_qmp(qts, QUERY_HEAD "'model': { 'name': %s }"
246
+
91
+ QUERY_TAIL, cpu_type);
247
+ return result;
92
+}
248
+}
93
+
249
+
94
+static QDict *do_query(QTestState *qts, const char *cpu_type,
250
+static bool its_readl(GICv3ITSState *s, hwaddr offset,
95
+ const char *fmt, ...)
251
+ uint64_t *data, MemTxAttrs attrs)
96
+{
252
+{
97
+ QDict *resp;
253
+ bool result = true;
98
+
254
+
99
+ if (fmt) {
255
+ return result;
100
+ QDict *args;
256
+}
101
+ va_list ap;
257
+
102
+
258
+static bool its_writell(GICv3ITSState *s, hwaddr offset,
103
+ va_start(ap, fmt);
259
+ uint64_t value, MemTxAttrs attrs)
104
+ args = qdict_from_vjsonf_nofail(fmt, ap);
260
+{
105
+ va_end(ap);
261
+ bool result = true;
106
+
262
+
107
+ resp = qtest_qmp(qts, QUERY_HEAD "'model': { 'name': %s, "
263
+ return result;
108
+ "'props': %p }"
264
+}
109
+ QUERY_TAIL, cpu_type, args);
265
+
110
+ } else {
266
+static bool its_readll(GICv3ITSState *s, hwaddr offset,
111
+ resp = do_query_no_props(qts, cpu_type);
267
+ uint64_t *data, MemTxAttrs attrs)
112
+ }
268
+{
113
+
269
+ bool result = true;
114
+ return resp;
270
+
115
+}
271
+ return result;
116
+
272
+}
117
+static const char *resp_get_error(QDict *resp)
273
+
118
+{
274
+static MemTxResult gicv3_its_read(void *opaque, hwaddr offset, uint64_t *data,
119
+ QDict *qdict;
275
+ unsigned size, MemTxAttrs attrs)
120
+
276
+{
121
+ g_assert(resp);
277
+ GICv3ITSState *s = (GICv3ITSState *)opaque;
122
+
278
+ bool result;
123
+ qdict = qdict_get_qdict(resp, "error");
279
+
124
+ if (qdict) {
280
+ switch (size) {
125
+ return qdict_get_str(qdict, "desc");
281
+ case 4:
126
+ }
282
+ result = its_readl(s, offset, data, attrs);
127
+
283
+ break;
128
+ return NULL;
284
+ case 8:
129
+}
285
+ result = its_readll(s, offset, data, attrs);
130
+
286
+ break;
131
+#define assert_error(qts, cpu_type, expected_error, fmt, ...) \
287
+ default:
132
+({ \
288
+ result = false;
133
+ QDict *_resp; \
289
+ break;
134
+ const char *_error; \
290
+ }
135
+ \
291
+
136
+ _resp = do_query(qts, cpu_type, fmt, ##__VA_ARGS__); \
292
+ if (!result) {
137
+ g_assert(_resp); \
293
+ qemu_log_mask(LOG_GUEST_ERROR,
138
+ _error = resp_get_error(_resp); \
294
+ "%s: invalid guest read at offset " TARGET_FMT_plx
139
+ g_assert(_error); \
295
+ "size %u\n", __func__, offset, size);
140
+ g_assert(g_str_equal(_error, expected_error)); \
296
+ /*
141
+ qobject_unref(_resp); \
297
+ * The spec requires that reserved registers are RAZ/WI;
142
+})
298
+ * so use false returns from leaf functions as a way to
143
+
299
+ * trigger the guest-error logging but don't return it to
144
+static bool resp_has_props(QDict *resp)
300
+ * the caller, or we'll cause a spurious guest data abort.
145
+{
301
+ */
146
+ QDict *qdict;
302
+ *data = 0;
147
+
303
+ }
148
+ g_assert(resp);
304
+ return MEMTX_OK;
149
+
305
+}
150
+ if (!qdict_haskey(resp, "return")) {
306
+
151
+ return false;
307
+static MemTxResult gicv3_its_write(void *opaque, hwaddr offset, uint64_t data,
152
+ }
308
+ unsigned size, MemTxAttrs attrs)
153
+ qdict = qdict_get_qdict(resp, "return");
309
+{
154
+
310
+ GICv3ITSState *s = (GICv3ITSState *)opaque;
155
+ if (!qdict_haskey(qdict, "model")) {
311
+ bool result;
156
+ return false;
312
+
157
+ }
313
+ switch (size) {
158
+ qdict = qdict_get_qdict(qdict, "model");
314
+ case 4:
159
+
315
+ result = its_writel(s, offset, data, attrs);
160
+ return qdict_haskey(qdict, "props");
316
+ break;
161
+}
317
+ case 8:
162
+
318
+ result = its_writell(s, offset, data, attrs);
163
+static QDict *resp_get_props(QDict *resp)
319
+ break;
164
+{
320
+ default:
165
+ QDict *qdict;
321
+ result = false;
166
+
322
+ break;
167
+ g_assert(resp);
323
+ }
168
+ g_assert(resp_has_props(resp));
324
+
169
+
325
+ if (!result) {
170
+ qdict = qdict_get_qdict(resp, "return");
326
+ qemu_log_mask(LOG_GUEST_ERROR,
171
+ qdict = qdict_get_qdict(qdict, "model");
327
+ "%s: invalid guest write at offset " TARGET_FMT_plx
172
+ qdict = qdict_get_qdict(qdict, "props");
328
+ "size %u\n", __func__, offset, size);
173
+
329
+ /*
174
+ return qdict;
330
+ * The spec requires that reserved registers are RAZ/WI;
175
+}
331
+ * so use false returns from leaf functions as a way to
176
+
332
+ * trigger the guest-error logging but don't return it to
177
+#define assert_has_feature(qts, cpu_type, feature) \
333
+ * the caller, or we'll cause a spurious guest data abort.
178
+({ \
334
+ */
179
+ QDict *_resp = do_query_no_props(qts, cpu_type); \
335
+ }
180
+ g_assert(_resp); \
336
+ return MEMTX_OK;
181
+ g_assert(resp_has_props(_resp)); \
337
+}
182
+ g_assert(qdict_get(resp_get_props(_resp), feature)); \
338
+
183
+ qobject_unref(_resp); \
339
+static const MemoryRegionOps gicv3_its_control_ops = {
184
+})
340
+ .read_with_attrs = gicv3_its_read,
185
+
341
+ .write_with_attrs = gicv3_its_write,
186
+#define assert_has_not_feature(qts, cpu_type, feature) \
342
+ .valid.min_access_size = 4,
187
+({ \
343
+ .valid.max_access_size = 8,
188
+ QDict *_resp = do_query_no_props(qts, cpu_type); \
344
+ .impl.min_access_size = 4,
189
+ g_assert(_resp); \
345
+ .impl.max_access_size = 8,
190
+ g_assert(!resp_has_props(_resp) || \
346
+ .endianness = DEVICE_NATIVE_ENDIAN,
191
+ !qdict_get(resp_get_props(_resp), feature)); \
347
+};
192
+ qobject_unref(_resp); \
348
+
193
+})
349
+static const MemoryRegionOps gicv3_its_translation_ops = {
194
+
350
+ .write_with_attrs = gicv3_its_translation_write,
195
+static void assert_type_full(QTestState *qts)
351
+ .valid.min_access_size = 2,
196
+{
352
+ .valid.max_access_size = 4,
197
+ const char *error;
353
+ .impl.min_access_size = 2,
198
+ QDict *resp;
354
+ .impl.max_access_size = 4,
199
+
355
+ .endianness = DEVICE_NATIVE_ENDIAN,
200
+ resp = qtest_qmp(qts, "{ 'execute': 'query-cpu-model-expansion', "
356
+};
201
+ "'arguments': { 'type': 'static', "
357
+
202
+ "'model': { 'name': 'foo' }}}");
358
+static void gicv3_arm_its_realize(DeviceState *dev, Error **errp)
203
+ g_assert(resp);
359
+{
204
+ error = resp_get_error(resp);
360
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
205
+ g_assert(error);
361
+ int i;
206
+ g_assert(g_str_equal(error,
362
+
207
+ "The requested expansion type is not supported"));
363
+ for (i = 0; i < s->gicv3->num_cpu; i++) {
208
+ qobject_unref(resp);
364
+ if (!(s->gicv3->cpu[i].gicr_typer & GICR_TYPER_PLPIS)) {
209
+}
365
+ error_setg(errp, "Physical LPI not supported by CPU %d", i);
210
+
366
+ return;
211
+static void assert_bad_props(QTestState *qts, const char *cpu_type)
367
+ }
212
+{
368
+ }
213
+ const char *error;
369
+
214
+ QDict *resp;
370
+ gicv3_its_init_mmio(s, &gicv3_its_control_ops, &gicv3_its_translation_ops);
215
+
371
+
216
+ resp = qtest_qmp(qts, "{ 'execute': 'query-cpu-model-expansion', "
372
+ /* set the ITS default features supported */
217
+ "'arguments': { 'type': 'full', "
373
+ s->typer = FIELD_DP64(s->typer, GITS_TYPER, PHYSICAL,
218
+ "'model': { 'name': %s, "
374
+ GITS_TYPE_PHYSICAL);
219
+ "'props': false }}}",
375
+ s->typer = FIELD_DP64(s->typer, GITS_TYPER, ITT_ENTRY_SIZE,
220
+ cpu_type);
376
+ ITS_ITT_ENTRY_SIZE - 1);
221
+ g_assert(resp);
377
+ s->typer = FIELD_DP64(s->typer, GITS_TYPER, IDBITS, ITS_IDBITS);
222
+ error = resp_get_error(resp);
378
+ s->typer = FIELD_DP64(s->typer, GITS_TYPER, DEVBITS, ITS_DEVBITS);
223
+ g_assert(error);
379
+ s->typer = FIELD_DP64(s->typer, GITS_TYPER, CIL, 1);
224
+ g_assert(g_str_equal(error,
380
+ s->typer = FIELD_DP64(s->typer, GITS_TYPER, CIDBITS, ITS_CIDBITS);
225
+ "Invalid parameter type for 'props', expected: dict"));
381
+}
226
+ qobject_unref(resp);
382
+
227
+}
383
+static void gicv3_its_reset(DeviceState *dev)
228
+
384
+{
229
+static void test_query_cpu_model_expansion(const void *data)
385
+ GICv3ITSState *s = ARM_GICV3_ITS_COMMON(dev);
230
+{
386
+ GICv3ITSClass *c = ARM_GICV3_ITS_GET_CLASS(s);
231
+ QTestState *qts;
387
+
232
+
388
+ c->parent_reset(dev);
233
+ qts = qtest_init(MACHINE "-cpu max");
389
+
234
+
390
+ /* Quiescent bit reset to 1 */
235
+ /* Test common query-cpu-model-expansion input validation */
391
+ s->ctlr = FIELD_DP32(s->ctlr, GITS_CTLR, QUIESCENT, 1);
236
+    assert_type_full(qts);
+    assert_bad_props(qts, "max");
+    assert_error(qts, "foo", "The CPU type 'foo' is not a recognized "
+                 "ARM CPU type", NULL);
+    assert_error(qts, "max", "Parameter 'not-a-prop' is unexpected",
+                 "{ 'not-a-prop': false }");
+    assert_error(qts, "host", "The CPU type 'host' requires KVM", NULL);
+
+    /* Test expected feature presence/absence for some cpu types */
+    assert_has_feature(qts, "max", "pmu");
+    assert_has_feature(qts, "cortex-a15", "pmu");
+    assert_has_not_feature(qts, "cortex-a15", "aarch64");
+
+    if (g_str_equal(qtest_get_arch(), "aarch64")) {
+        assert_has_feature(qts, "max", "aarch64");
+        assert_has_feature(qts, "cortex-a57", "pmu");
+        assert_has_feature(qts, "cortex-a57", "aarch64");
+
+        /* Test that features that depend on KVM generate errors without. */
+        assert_error(qts, "max",
+                     "'aarch64' feature cannot be disabled "
+                     "unless KVM is enabled and 32-bit EL1 "
+                     "is supported",
+                     "{ 'aarch64': false }");
+    }
+
+    qtest_quit(qts);
+}
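During development the test can also be run directly against a built binary
in the usual qtest fashion (paths are illustrative):

    $ QTEST_QEMU_BINARY=aarch64-softmmu/qemu-system-aarch64 tests/arm-cpu-features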
264
+
265
+static void test_query_cpu_model_expansion_kvm(const void *data)
266
+{
267
+ QTestState *qts;
268
+
269
+ qts = qtest_init(MACHINE_KVM "-cpu max");
392
+
270
+
393
+ /*
271
+ /*
394
+ * setting GITS_BASER0.Type = 0b001 (Device)
272
+ * These tests target the 'host' CPU type, so KVM must be enabled.
395
+ * GITS_BASER1.Type = 0b100 (Collection Table)
396
+ * GITS_BASER<n>.Type,where n = 3 to 7 are 0b00 (Unimplemented)
397
+ * GITS_BASER<0,1>.Page_Size = 64KB
398
+ * and default translation table entry size to 16 bytes
399
+ */
273
+ */
400
+ s->baser[0] = FIELD_DP64(s->baser[0], GITS_BASER, TYPE,
274
+ if (!kvm_enabled(qts)) {
401
+ GITS_BASER_TYPE_DEVICE);
275
+ qtest_quit(qts);
402
+ s->baser[0] = FIELD_DP64(s->baser[0], GITS_BASER, PAGESIZE,
276
+ return;
403
+ GITS_BASER_PAGESIZE_64K);
277
+ }
404
+ s->baser[0] = FIELD_DP64(s->baser[0], GITS_BASER, ENTRYSIZE,
278
+
405
+ GITS_DTE_SIZE - 1);
279
+ if (g_str_equal(qtest_get_arch(), "aarch64")) {
406
+
280
+ assert_has_feature(qts, "host", "aarch64");
407
+ s->baser[1] = FIELD_DP64(s->baser[1], GITS_BASER, TYPE,
281
+ assert_has_feature(qts, "host", "pmu");
408
+ GITS_BASER_TYPE_COLLECTION);
282
+
409
+ s->baser[1] = FIELD_DP64(s->baser[1], GITS_BASER, PAGESIZE,
283
+ assert_error(qts, "cortex-a15",
410
+ GITS_BASER_PAGESIZE_64K);
284
+ "We cannot guarantee the CPU type 'cortex-a15' works "
411
+ s->baser[1] = FIELD_DP64(s->baser[1], GITS_BASER, ENTRYSIZE,
285
+ "with KVM on this host", NULL);
412
+ GITS_CTE_SIZE - 1);
286
+ } else {
413
+}
287
+ assert_has_not_feature(qts, "host", "aarch64");
414
+
288
+ assert_has_not_feature(qts, "host", "pmu");
415
+static Property gicv3_its_props[] = {
289
+ }
416
+ DEFINE_PROP_LINK("parent-gicv3", GICv3ITSState, gicv3, "arm-gicv3",
290
+
417
+ GICv3State *),
291
+ qtest_quit(qts);
418
+ DEFINE_PROP_END_OF_LIST(),
292
+}
419
+};
293
+
420
+
294
+int main(int argc, char **argv)
421
+static void gicv3_its_class_init(ObjectClass *klass, void *data)
295
+{
422
+{
296
+ g_test_init(&argc, &argv, NULL);
423
+ DeviceClass *dc = DEVICE_CLASS(klass);
297
+
424
+ GICv3ITSClass *ic = ARM_GICV3_ITS_CLASS(klass);
298
+ qtest_add_data_func("/arm/query-cpu-model-expansion",
425
+
299
+ NULL, test_query_cpu_model_expansion);
426
+ dc->realize = gicv3_arm_its_realize;
300
+ qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
427
+ device_class_set_props(dc, gicv3_its_props);
301
+ NULL, test_query_cpu_model_expansion_kvm);
428
+ device_class_set_parent_reset(dc, gicv3_its_reset, &ic->parent_reset);
302
+
429
+}
303
+ return g_test_run();
430
+
304
+}
431
+static const TypeInfo gicv3_its_info = {
432
+ .name = TYPE_ARM_GICV3_ITS,
433
+ .parent = TYPE_ARM_GICV3_ITS_COMMON,
434
+ .instance_size = sizeof(GICv3ITSState),
435
+ .class_init = gicv3_its_class_init,
436
+ .class_size = sizeof(GICv3ITSClass),
437
+};
438
+
439
+static void gicv3_its_register_types(void)
440
+{
441
+ type_register_static(&gicv3_its_info);
442
+}
443
+
444
+type_init(gicv3_its_register_types)
445
diff --git a/hw/intc/arm_gicv3_its_common.c b/hw/intc/arm_gicv3_its_common.c
446
index XXXXXXX..XXXXXXX 100644
447
--- a/hw/intc/arm_gicv3_its_common.c
448
+++ b/hw/intc/arm_gicv3_its_common.c
449
@@ -XXX,XX +XXX,XX @@ static int gicv3_its_post_load(void *opaque, int version_id)
450
451
static const VMStateDescription vmstate_its = {
452
.name = "arm_gicv3_its",
453
+ .version_id = 1,
454
+ .minimum_version_id = 1,
455
.pre_save = gicv3_its_pre_save,
456
.post_load = gicv3_its_post_load,
457
.priority = MIG_PRI_GICV3_ITS,
458
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gicv3_its_trans_ops = {
459
.endianness = DEVICE_NATIVE_ENDIAN,
460
};
461
462
-void gicv3_its_init_mmio(GICv3ITSState *s, const MemoryRegionOps *ops)
463
+void gicv3_its_init_mmio(GICv3ITSState *s, const MemoryRegionOps *ops,
464
+ const MemoryRegionOps *tops)
465
{
466
SysBusDevice *sbd = SYS_BUS_DEVICE(s);
467
468
memory_region_init_io(&s->iomem_its_cntrl, OBJECT(s), ops, s,
469
"control", ITS_CONTROL_SIZE);
470
memory_region_init_io(&s->iomem_its_translation, OBJECT(s),
471
- &gicv3_its_trans_ops, s,
472
+ tops ? tops : &gicv3_its_trans_ops, s,
473
"translation", ITS_TRANS_SIZE);
474
475
/* Our two regions are always adjacent, therefore we now combine them
476
diff --git a/hw/intc/arm_gicv3_its_kvm.c b/hw/intc/arm_gicv3_its_kvm.c
477
index XXXXXXX..XXXXXXX 100644
478
--- a/hw/intc/arm_gicv3_its_kvm.c
479
+++ b/hw/intc/arm_gicv3_its_kvm.c
480
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_its_realize(DeviceState *dev, Error **errp)
481
kvm_arm_register_device(&s->iomem_its_cntrl, -1, KVM_DEV_ARM_VGIC_GRP_ADDR,
482
KVM_VGIC_ITS_ADDR_TYPE, s->dev_fd, 0);
483
484
- gicv3_its_init_mmio(s, NULL);
485
+ gicv3_its_init_mmio(s, NULL, NULL);
486
487
if (!kvm_device_check_attr(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
488
GITS_CTLR)) {
489
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
490
index XXXXXXX..XXXXXXX 100644
491
--- a/hw/intc/meson.build
492
+++ b/hw/intc/meson.build
493
@@ -XXX,XX +XXX,XX @@ softmmu_ss.add(when: 'CONFIG_ARM_GIC', if_true: files(
494
'arm_gicv3_dist.c',
495
'arm_gicv3_its_common.c',
496
'arm_gicv3_redist.c',
497
+ 'arm_gicv3_its.c',
498
))
499
softmmu_ss.add(when: 'CONFIG_ETRAXFS', if_true: files('etraxfs_pic.c'))
500
softmmu_ss.add(when: 'CONFIG_HEATHROW_PIC', if_true: files('heathrow_pic.c'))
501
--
2.20.1

1
From: Chris Rauer <crauer@google.com>

kudo-bmc is a board supported by OpenBMC.
https://github.com/openbmc/openbmc/tree/master/meta-fii/meta-kudo

Since v1:
- hyphenated Cortex-A9

Tested: Booted kudo firmware.
Signed-off-by: Chris Rauer <crauer@google.com>
Reviewed-by: Patrick Venture <venture@google.com>
Message-id: 20210907223234.1165705-1-crauer@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/system/arm/nuvoton.rst | 1 +
 hw/arm/npcm7xx_boards.c | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

From: Andrew Jones <drjones@redhat.com>

Since 97a28b0eeac14 ("target/arm: Allow VFP and Neon to be disabled via
a CPU property") we can disable the 'max' cpu model's VFP and neon
features, but there's no way to disable SVE. Add the 'sve=on|off'
property to give it that flexibility. We also rename
cpu_max_get/set_sve_vq to cpu_max_get/set_sve_max_vq in order for them
to follow the typical *_get/set_<property-name> pattern.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
Message-id: 20191031142734.8590-4-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 3 ++-
 target/arm/cpu64.c | 52 ++++++++++++++++++++++++++++++++++------
 target/arm/monitor.c | 2 +-
 tests/arm-cpu-features.c | 1 +
 4 files changed, 49 insertions(+), 9 deletions(-)
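As a quick illustration of the new property (an example invocation, not part
of the patch itself), SVE can be switched off on the command line and the
result observed with query-cpu-model-expansion:

    $ qemu-system-aarch64 -M virt -cpu max,sve=off
    (QEMU) query-cpu-model-expansion type=full model={"name":"max"}
    { "return": { "model": { "name": "max", "props": {
        "pmu": true, "aarch64": true, "sve": false }}}}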
18
23
19
diff --git a/docs/system/arm/nuvoton.rst b/docs/system/arm/nuvoton.rst
24
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
20
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
21
--- a/docs/system/arm/nuvoton.rst
26
--- a/target/arm/cpu.c
22
+++ b/docs/system/arm/nuvoton.rst
27
+++ b/target/arm/cpu.c
23
@@ -XXX,XX +XXX,XX @@ Hyperscale applications. The following machines are based on this chip :
28
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
24
29
env->cp15.cpacr_el1 = deposit64(env->cp15.cpacr_el1, 16, 2, 3);
25
- ``quanta-gbs-bmc`` Quanta GBS server BMC
30
env->cp15.cptr_el[3] |= CPTR_EZ;
26
- ``quanta-gsj`` Quanta GSJ server BMC
31
/* with maximum vector length */
27
+- ``kudo-bmc`` Fii USA Kudo server BMC
32
- env->vfp.zcr_el[1] = cpu->sve_max_vq - 1;
28
33
+ env->vfp.zcr_el[1] = cpu_isar_feature(aa64_sve, cpu) ?
29
There are also two more SoCs, NPCM710 and NPCM705, which are single-core
34
+ cpu->sve_max_vq - 1 : 0;
30
variants of NPCM750 and NPCM730, respectively. These are currently not
35
env->vfp.zcr_el[2] = env->vfp.zcr_el[1];
31
diff --git a/hw/arm/npcm7xx_boards.c b/hw/arm/npcm7xx_boards.c
36
env->vfp.zcr_el[3] = env->vfp.zcr_el[1];
37
/*
38
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
32
index XXXXXXX..XXXXXXX 100644
39
index XXXXXXX..XXXXXXX 100644
33
--- a/hw/arm/npcm7xx_boards.c
40
--- a/target/arm/cpu64.c
34
+++ b/hw/arm/npcm7xx_boards.c
41
+++ b/target/arm/cpu64.c
35
@@ -XXX,XX +XXX,XX @@
42
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
36
#define NPCM750_EVB_POWER_ON_STRAPS 0x00001ff7
43
define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
37
#define QUANTA_GSJ_POWER_ON_STRAPS 0x00001fff
38
#define QUANTA_GBS_POWER_ON_STRAPS 0x000017ff
39
+#define KUDO_BMC_POWER_ON_STRAPS 0x00001fff
40
41
static const char npcm7xx_default_bootrom[] = "npcm7xx_bootrom.bin";
42
43
@@ -XXX,XX +XXX,XX @@ static void quanta_gbs_init(MachineState *machine)
44
npcm7xx_load_kernel(machine, soc);
45
}
44
}
46
45
47
+static void kudo_bmc_init(MachineState *machine)
46
-static void cpu_max_get_sve_vq(Object *obj, Visitor *v, const char *name,
47
- void *opaque, Error **errp)
48
+static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
49
+ void *opaque, Error **errp)
50
{
51
ARMCPU *cpu = ARM_CPU(obj);
52
- visit_type_uint32(v, name, &cpu->sve_max_vq, errp);
53
+ uint32_t value;
54
+
55
+ /* All vector lengths are disabled when SVE is off. */
56
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
57
+ value = 0;
58
+ } else {
59
+ value = cpu->sve_max_vq;
60
+ }
61
+ visit_type_uint32(v, name, &value, errp);
62
}
63
64
-static void cpu_max_set_sve_vq(Object *obj, Visitor *v, const char *name,
65
- void *opaque, Error **errp)
66
+static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
67
+ void *opaque, Error **errp)
68
{
69
ARMCPU *cpu = ARM_CPU(obj);
70
Error *err = NULL;
71
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_vq(Object *obj, Visitor *v, const char *name,
72
error_propagate(errp, err);
73
}
74
75
+static void cpu_arm_get_sve(Object *obj, Visitor *v, const char *name,
76
+ void *opaque, Error **errp)
48
+{
77
+{
49
+ NPCM7xxState *soc;
78
+ ARMCPU *cpu = ARM_CPU(obj);
79
+ bool value = cpu_isar_feature(aa64_sve, cpu);
50
+
80
+
51
+ soc = npcm7xx_create_soc(machine, KUDO_BMC_POWER_ON_STRAPS);
81
+ visit_type_bool(v, name, &value, errp);
52
+ npcm7xx_connect_dram(soc, machine->ram);
53
+ qdev_realize(DEVICE(soc), NULL, &error_fatal);
54
+
55
+ npcm7xx_load_bootrom(machine, soc);
56
+ npcm7xx_connect_flash(&soc->fiu[0], 0, "mx66u51235f",
57
+ drive_get(IF_MTD, 0, 0));
58
+ npcm7xx_connect_flash(&soc->fiu[1], 0, "mx66u51235f",
59
+ drive_get(IF_MTD, 3, 0));
60
+
61
+ npcm7xx_load_kernel(machine, soc);
62
+}
82
+}
63
+
83
+
64
static void npcm7xx_set_soc_type(NPCM7xxMachineClass *nmc, const char *type)
84
+static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
65
{
85
+ void *opaque, Error **errp)
66
NPCM7xxClass *sc = NPCM7XX_CLASS(object_class_by_name(type));
86
+{
67
@@ -XXX,XX +XXX,XX @@ static void gbs_bmc_machine_class_init(ObjectClass *oc, void *data)
87
+ ARMCPU *cpu = ARM_CPU(obj);
68
mc->default_ram_size = 1 * GiB;
88
+ Error *err = NULL;
89
+ bool value;
90
+ uint64_t t;
91
+
92
+ visit_type_bool(v, name, &value, &err);
93
+ if (err) {
94
+ error_propagate(errp, err);
95
+ return;
96
+ }
97
+
98
+ t = cpu->isar.id_aa64pfr0;
99
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, value);
100
+ cpu->isar.id_aa64pfr0 = t;
101
+}
102
+
103
/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
104
* otherwise, a CPU with as many features enabled as our emulation supports.
105
* The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
106
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
107
#endif
108
109
cpu->sve_max_vq = ARM_MAX_VQ;
110
- object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_vq,
111
- cpu_max_set_sve_vq, NULL, NULL, &error_fatal);
112
+ object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
113
+ cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
114
+ object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
115
+ cpu_arm_set_sve, NULL, NULL, &error_fatal);
116
}
69
}
117
}
70
118
71
+static void kudo_bmc_machine_class_init(ObjectClass *oc, void *data)
119
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
72
+{
120
index XXXXXXX..XXXXXXX 100644
73
+ NPCM7xxMachineClass *nmc = NPCM7XX_MACHINE_CLASS(oc);
121
--- a/target/arm/monitor.c
74
+ MachineClass *mc = MACHINE_CLASS(oc);
122
+++ b/target/arm/monitor.c
75
+
123
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
76
+ npcm7xx_set_soc_type(nmc, TYPE_NPCM730);
124
* then the order that considers those dependencies must be used.
77
+
125
*/
78
+ mc->desc = "Kudo BMC (Cortex-A9)";
126
static const char *cpu_model_advertised_features[] = {
79
+ mc->init = kudo_bmc_init;
127
- "aarch64", "pmu",
80
+ mc->default_ram_size = 1 * GiB;
128
+ "aarch64", "pmu", "sve",
81
+};
129
NULL
82
+
83
static const TypeInfo npcm7xx_machine_types[] = {
84
{
85
.name = TYPE_NPCM7XX_MACHINE,
86
@@ -XXX,XX +XXX,XX @@ static const TypeInfo npcm7xx_machine_types[] = {
87
.name = MACHINE_TYPE_NAME("quanta-gbs-bmc"),
88
.parent = TYPE_NPCM7XX_MACHINE,
89
.class_init = gbs_bmc_machine_class_init,
90
+ }, {
91
+ .name = MACHINE_TYPE_NAME("kudo-bmc"),
92
+ .parent = TYPE_NPCM7XX_MACHINE,
93
+ .class_init = kudo_bmc_machine_class_init,
94
},
95
};
130
};
131
132
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/tests/arm-cpu-features.c
135
+++ b/tests/arm-cpu-features.c
136
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
137
138
if (g_str_equal(qtest_get_arch(), "aarch64")) {
139
assert_has_feature(qts, "max", "aarch64");
140
+ assert_has_feature(qts, "max", "sve");
141
assert_has_feature(qts, "cortex-a57", "pmu");
142
assert_has_feature(qts, "cortex-a57", "aarch64");
96
143
97
--
2.20.1

1
From: Shashi Mallela <shashi.mallela@linaro.org>

Added properties to enable ITS feature and define qemu system
address space memory in gicv3 common, setup distributor and
redistributor registers to indicate LPI support.

Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
Message-id: 20210910143951.92242-6-shashi.mallela@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gicv3_internal.h | 2 ++
 include/hw/intc/arm_gicv3_common.h | 1 +
 hw/intc/arm_gicv3_common.c | 12 ++++++++++++
 hw/intc/arm_gicv3_dist.c | 5 ++++-
 hw/intc/arm_gicv3_redist.c | 12 +++++++++---
 5 files changed, 28 insertions(+), 4 deletions(-)

From: Andrew Jones <drjones@redhat.com>

Introduce cpu properties to give fine control over SVE vector lengths.
We introduce a property for each valid length up to the current
maximum supported, which is 2048-bits. The properties are named, e.g.
sve128, sve256, sve384, sve512, ..., where the number is the number of
bits. See the updates to docs/arm-cpu-features.rst for a description
of the semantics and for example uses.

Note, as sve-max-vq is still present and we'd like to be able to
support qmp_query_cpu_model_expansion with guests launched with e.g.
-cpu max,sve-max-vq=8 on their command lines, then we do allow
sve-max-vq and sve<N> properties to be provided at the same time, but
this is not recommended, and is why sve-max-vq is not mentioned in the
document. If sve-max-vq is provided then it enables all lengths smaller
than and including the max and disables all lengths larger. It also has
the side-effect that no larger lengths may be enabled and that the max
itself cannot be disabled. Smaller non-power-of-two lengths may,
however, be disabled, e.g. -cpu max,sve-max-vq=4,sve384=off provides a
guest the vector lengths 128, 256, and 512 bits.

This patch has been co-authored with Richard Henderson, who reworked
the target/arm/cpu64.c changes in order to push all the validation and
auto-enabling/disabling steps into the finalizer, resulting in a nice
LOC reduction.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
Message-id: 20191031142734.8590-5-drjones@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/qemu/bitops.h | 1 +
 target/arm/cpu.h | 19 ++++
 target/arm/cpu.c | 19 ++++
 target/arm/cpu64.c | 192 ++++++++++++++++++++++++++++++++++++-
 target/arm/helper.c | 10 +-
 target/arm/monitor.c | 12 +++
 tests/arm-cpu-features.c | 194 ++++++++++++++++++++++++++++++++++++++
 docs/arm-cpu-features.rst | 168 +++++++++++++++++++++++++++++++--
 8 files changed, 606 insertions(+), 9 deletions(-)
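The length-dependency rule that the finalizer enforces (see
arm_cpu_sve_finalize() in the target/arm/cpu64.c hunk below) is small enough
to capture in a standalone sketch. The following is only an illustration of
constraint (2) from the new documentation, not QEMU code, and
sve_vq_map_valid() is a made-up helper name:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_VQ 16   /* 16 quadwords == 2048 bits */

    /*
     * Return true if the set of enabled quadword lengths satisfies the
     * rule that every power-of-two length below the maximum enabled
     * length must itself be enabled.
     */
    static bool sve_vq_map_valid(const bool enabled[MAX_VQ])
    {
        int max_vq = 0;

        for (int vq = 1; vq <= MAX_VQ; vq++) {
            if (enabled[vq - 1]) {
                max_vq = vq;
            }
        }
        if (max_vq == 0) {
            return true;        /* no lengths enabled: SVE effectively off */
        }
        for (int vq = 1; vq < max_vq; vq <<= 1) {
            if (!enabled[vq - 1]) {
                return false;   /* a required power-of-two length is missing */
            }
        }
        return true;
    }

    int main(void)
    {
        bool sel[MAX_VQ] = { [0] = true, [1] = true, [3] = true }; /* 128, 256, 512 */

        printf("128/256/512 enabled: %s\n", sve_vq_map_valid(sel) ? "ok" : "invalid");
        sel[1] = false;         /* the sve512=on,sve256=off case */
        printf("256 disabled:        %s\n", sve_vq_map_valid(sel) ? "ok" : "invalid");
        return 0;
    }

Dropping sve256 while sve512 stays enabled is rejected, which is the same
"cannot disable sve256" condition the tests below check for.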
19
44
20
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
45
diff --git a/include/qemu/bitops.h b/include/qemu/bitops.h
21
index XXXXXXX..XXXXXXX 100644
46
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/gicv3_internal.h
47
--- a/include/qemu/bitops.h
23
+++ b/hw/intc/gicv3_internal.h
48
+++ b/include/qemu/bitops.h
24
@@ -XXX,XX +XXX,XX @@
49
@@ -XXX,XX +XXX,XX @@
25
#define GICD_CTLR_E1NWF (1U << 7)
50
#define BITS_PER_LONG (sizeof (unsigned long) * BITS_PER_BYTE)
26
#define GICD_CTLR_RWP (1U << 31)
51
27
52
#define BIT(nr) (1UL << (nr))
28
+#define GICD_TYPER_LPIS_SHIFT 17
53
+#define BIT_ULL(nr) (1ULL << (nr))
29
+
54
#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_LONG))
30
/* 16 bits EventId */
55
#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
31
#define GICD_TYPER_IDBITS 0xf
56
#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
32
57
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
33
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
34
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
35
--- a/include/hw/intc/arm_gicv3_common.h
59
--- a/target/arm/cpu.h
36
+++ b/include/hw/intc/arm_gicv3_common.h
60
+++ b/target/arm/cpu.h
37
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
61
@@ -XXX,XX +XXX,XX @@ typedef struct {
38
uint32_t num_cpu;
62
39
uint32_t num_irq;
63
#ifdef TARGET_AARCH64
40
uint32_t revision;
64
# define ARM_MAX_VQ 16
41
+ bool lpi_enable;
65
+void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
42
bool security_extn;
66
+uint32_t arm_cpu_vq_map_next_smaller(ARMCPU *cpu, uint32_t vq);
43
bool irq_reset_nonsecure;
67
#else
44
bool gicd_no_migration_shift_bug;
68
# define ARM_MAX_VQ 1
45
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
69
+static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { }
70
+static inline uint32_t arm_cpu_vq_map_next_smaller(ARMCPU *cpu, uint32_t vq)
71
+{ return 0; }
72
#endif
73
74
typedef struct ARMVectorReg {
75
@@ -XXX,XX +XXX,XX @@ struct ARMCPU {
76
77
/* Used to set the maximum vector length the cpu will support. */
78
uint32_t sve_max_vq;
79
+
80
+ /*
81
+ * In sve_vq_map each set bit is a supported vector length of
82
+ * (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
83
+ * length in quadwords.
84
+ *
85
+ * While processing properties during initialization, corresponding
86
+ * sve_vq_init bits are set for bits in sve_vq_map that have been
87
+ * set by properties.
88
+ */
89
+ DECLARE_BITMAP(sve_vq_map, ARM_MAX_VQ);
90
+ DECLARE_BITMAP(sve_vq_init, ARM_MAX_VQ);
91
};
92
93
void arm_cpu_post_init(Object *obj);
94
@@ -XXX,XX +XXX,XX @@ static inline int arm_feature(CPUARMState *env, int feature)
95
return (env->features & (1ULL << feature)) != 0;
96
}
97
98
+void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp);
99
+
100
#if !defined(CONFIG_USER_ONLY)
101
/* Return true if exception levels below EL3 are in secure state,
102
* or would be following an exception return to that level.
103
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
46
index XXXXXXX..XXXXXXX 100644
104
index XXXXXXX..XXXXXXX 100644
47
--- a/hw/intc/arm_gicv3_common.c
105
--- a/target/arm/cpu.c
48
+++ b/hw/intc/arm_gicv3_common.c
106
+++ b/target/arm/cpu.c
49
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp)
107
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_finalizefn(Object *obj)
108
#endif
109
}
110
111
+void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
112
+{
113
+ Error *local_err = NULL;
114
+
115
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
116
+ arm_cpu_sve_finalize(cpu, &local_err);
117
+ if (local_err != NULL) {
118
+ error_propagate(errp, local_err);
119
+ return;
120
+ }
121
+ }
122
+}
123
+
124
static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
125
{
126
CPUState *cs = CPU(dev);
127
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
50
return;
128
return;
51
}
129
}
52
130
53
+ if (s->lpi_enable && !s->dma) {
131
+ arm_cpu_finalize_features(cpu, &local_err);
54
+ error_setg(errp, "Redist-ITS: Guest 'sysmem' reference link not set");
132
+ if (local_err != NULL) {
133
+ error_propagate(errp, local_err);
55
+ return;
134
+ return;
56
+ }
135
+ }
57
+
136
+
58
s->cpu = g_new0(GICv3CPUState, s->num_cpu);
137
if (arm_feature(env, ARM_FEATURE_AARCH64) &&
59
138
cpu->has_vfp != cpu->has_neon) {
60
for (i = 0; i < s->num_cpu; i++) {
139
/*
61
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_realize(DeviceState *dev, Error **errp)
140
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
62
(1 << 24) |
141
index XXXXXXX..XXXXXXX 100644
63
(i << 8) |
142
--- a/target/arm/cpu64.c
64
(last << 4);
143
+++ b/target/arm/cpu64.c
65
+
144
@@ -XXX,XX +XXX,XX @@ static void aarch64_a72_initfn(Object *obj)
66
+ if (s->lpi_enable) {
145
define_arm_cp_regs(cpu, cortex_a72_a57_a53_cp_reginfo);
67
+ s->cpu[i].gicr_typer |= GICR_TYPER_PLPIS;
146
}
147
148
+void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
149
+{
150
+ /*
151
+ * If any vector lengths are explicitly enabled with sve<N> properties,
152
+ * then all other lengths are implicitly disabled. If sve-max-vq is
153
+ * specified then it is the same as explicitly enabling all lengths
154
+ * up to and including the specified maximum, which means all larger
155
+ * lengths will be implicitly disabled. If no sve<N> properties
156
+ * are enabled and sve-max-vq is not specified, then all lengths not
157
+ * explicitly disabled will be enabled. Additionally, all power-of-two
158
+ * vector lengths less than the maximum enabled length will be
159
+ * automatically enabled and all vector lengths larger than the largest
160
+ * disabled power-of-two vector length will be automatically disabled.
161
+ * Errors are generated if the user provided input that interferes with
162
+ * any of the above. Finally, if SVE is not disabled, then at least one
163
+ * vector length must be enabled.
164
+ */
165
+ DECLARE_BITMAP(tmp, ARM_MAX_VQ);
166
+ uint32_t vq, max_vq = 0;
167
+
168
+ /*
169
+ * Process explicit sve<N> properties.
170
+ * From the properties, sve_vq_map<N> implies sve_vq_init<N>.
171
+ * Check first for any sve<N> enabled.
172
+ */
173
+ if (!bitmap_empty(cpu->sve_vq_map, ARM_MAX_VQ)) {
174
+ max_vq = find_last_bit(cpu->sve_vq_map, ARM_MAX_VQ) + 1;
175
+
176
+ if (cpu->sve_max_vq && max_vq > cpu->sve_max_vq) {
177
+ error_setg(errp, "cannot enable sve%d", max_vq * 128);
178
+ error_append_hint(errp, "sve%d is larger than the maximum vector "
179
+ "length, sve-max-vq=%d (%d bits)\n",
180
+ max_vq * 128, cpu->sve_max_vq,
181
+ cpu->sve_max_vq * 128);
182
+ return;
183
+ }
184
+
185
+ /* Propagate enabled bits down through required powers-of-two. */
186
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
187
+ if (!test_bit(vq - 1, cpu->sve_vq_init)) {
188
+ set_bit(vq - 1, cpu->sve_vq_map);
189
+ }
190
+ }
191
+ } else if (cpu->sve_max_vq == 0) {
192
+ /*
193
+ * No explicit bits enabled, and no implicit bits from sve-max-vq.
194
+ */
195
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
196
+ /* SVE is disabled and so are all vector lengths. Good. */
197
+ return;
198
+ }
199
+
200
+ /* Disabling a power-of-two disables all larger lengths. */
201
+ if (test_bit(0, cpu->sve_vq_init)) {
202
+ error_setg(errp, "cannot disable sve128");
203
+ error_append_hint(errp, "Disabling sve128 results in all vector "
204
+ "lengths being disabled.\n");
205
+ error_append_hint(errp, "With SVE enabled, at least one vector "
206
+ "length must be enabled.\n");
207
+ return;
208
+ }
209
+ for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) {
210
+ if (test_bit(vq - 1, cpu->sve_vq_init)) {
211
+ break;
212
+ }
213
+ }
214
+ max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
215
+
216
+ bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq);
217
+ max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1;
218
+ }
219
+
220
+ /*
221
+ * Process the sve-max-vq property.
222
+ * Note that we know from the above that no bit above
223
+ * sve-max-vq is currently set.
224
+ */
225
+ if (cpu->sve_max_vq != 0) {
226
+ max_vq = cpu->sve_max_vq;
227
+
228
+ if (!test_bit(max_vq - 1, cpu->sve_vq_map) &&
229
+ test_bit(max_vq - 1, cpu->sve_vq_init)) {
230
+ error_setg(errp, "cannot disable sve%d", max_vq * 128);
231
+ error_append_hint(errp, "The maximum vector length must be "
232
+ "enabled, sve-max-vq=%d (%d bits)\n",
233
+ max_vq, max_vq * 128);
234
+ return;
235
+ }
236
+
237
+ /* Set all bits not explicitly set within sve-max-vq. */
238
+ bitmap_complement(tmp, cpu->sve_vq_init, max_vq);
239
+ bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq);
240
+ }
241
+
242
+ /*
243
+ * We should know what max-vq is now. Also, as we're done
244
+ * manipulating sve-vq-map, we ensure any bits above max-vq
245
+ * are clear, just in case anybody looks.
246
+ */
247
+ assert(max_vq != 0);
248
+ bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq);
249
+
250
+ /* Ensure all required powers-of-two are enabled. */
251
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
252
+ if (!test_bit(vq - 1, cpu->sve_vq_map)) {
253
+ error_setg(errp, "cannot disable sve%d", vq * 128);
254
+ error_append_hint(errp, "sve%d is required as it "
255
+ "is a power-of-two length smaller than "
256
+ "the maximum, sve%d\n",
257
+ vq * 128, max_vq * 128);
258
+ return;
259
+ }
260
+ }
261
+
262
+ /*
263
+ * Now that we validated all our vector lengths, the only question
264
+ * left to answer is if we even want SVE at all.
265
+ */
266
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
267
+ error_setg(errp, "cannot enable sve%d", max_vq * 128);
268
+ error_append_hint(errp, "SVE must be enabled to enable vector "
269
+ "lengths.\n");
270
+ error_append_hint(errp, "Add sve=on to the CPU property list.\n");
271
+ return;
272
+ }
273
+
274
+ /* From now on sve_max_vq is the actual maximum supported length. */
275
+ cpu->sve_max_vq = max_vq;
276
+}
277
+
278
+uint32_t arm_cpu_vq_map_next_smaller(ARMCPU *cpu, uint32_t vq)
279
+{
280
+ uint32_t bitnum;
281
+
282
+ /*
283
+ * We allow vq == ARM_MAX_VQ + 1 to be input because the caller may want
284
+ * to find the maximum vq enabled, which may be ARM_MAX_VQ, but this
285
+ * function always returns the next smaller than the input.
286
+ */
287
+ assert(vq && vq <= ARM_MAX_VQ + 1);
288
+
289
+ bitnum = find_last_bit(cpu->sve_vq_map, vq - 1);
290
+ return bitnum == vq - 1 ? 0 : bitnum + 1;
291
+}
292
+
293
static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
294
void *opaque, Error **errp)
295
{
296
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
297
error_propagate(errp, err);
298
}
299
300
+static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
301
+ void *opaque, Error **errp)
302
+{
303
+ ARMCPU *cpu = ARM_CPU(obj);
304
+ uint32_t vq = atoi(&name[3]) / 128;
305
+ bool value;
306
+
307
+ /* All vector lengths are disabled when SVE is off. */
308
+ if (!cpu_isar_feature(aa64_sve, cpu)) {
309
+ value = false;
310
+ } else {
311
+ value = test_bit(vq - 1, cpu->sve_vq_map);
312
+ }
313
+ visit_type_bool(v, name, &value, errp);
314
+}
315
+
316
+static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
317
+ void *opaque, Error **errp)
318
+{
319
+ ARMCPU *cpu = ARM_CPU(obj);
320
+ uint32_t vq = atoi(&name[3]) / 128;
321
+ Error *err = NULL;
322
+ bool value;
323
+
324
+ visit_type_bool(v, name, &value, &err);
325
+ if (err) {
326
+ error_propagate(errp, err);
327
+ return;
328
+ }
329
+
330
+ if (value) {
331
+ set_bit(vq - 1, cpu->sve_vq_map);
332
+ } else {
333
+ clear_bit(vq - 1, cpu->sve_vq_map);
334
+ }
335
+ set_bit(vq - 1, cpu->sve_vq_init);
336
+}
337
+
338
static void cpu_arm_get_sve(Object *obj, Visitor *v, const char *name,
339
void *opaque, Error **errp)
340
{
341
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
342
static void aarch64_max_initfn(Object *obj)
343
{
344
ARMCPU *cpu = ARM_CPU(obj);
345
+ uint32_t vq;
346
347
if (kvm_enabled()) {
348
kvm_arm_set_cpu_features_from_host(cpu);
349
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
350
cpu->dcz_blocksize = 7; /* 512 bytes */
351
#endif
352
353
- cpu->sve_max_vq = ARM_MAX_VQ;
354
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
355
cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
356
object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
357
cpu_arm_set_sve, NULL, NULL, &error_fatal);
358
+
359
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
360
+ char name[8];
361
+ sprintf(name, "sve%d", vq * 128);
362
+ object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
363
+ cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
68
+ }
364
+ }
69
}
365
}
70
}
366
}
71
367
72
@@ -XXX,XX +XXX,XX @@ static Property arm_gicv3_common_properties[] = {
368
diff --git a/target/arm/helper.c b/target/arm/helper.c
73
DEFINE_PROP_UINT32("num-cpu", GICv3State, num_cpu, 1),
369
index XXXXXXX..XXXXXXX 100644
74
DEFINE_PROP_UINT32("num-irq", GICv3State, num_irq, 32),
370
--- a/target/arm/helper.c
75
DEFINE_PROP_UINT32("revision", GICv3State, revision, 3),
371
+++ b/target/arm/helper.c
76
+ DEFINE_PROP_BOOL("has-lpi", GICv3State, lpi_enable, 0),
372
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
77
DEFINE_PROP_BOOL("has-security-extensions", GICv3State, security_extn, 0),
373
return 0;
78
DEFINE_PROP_ARRAY("redist-region-count", GICv3State, nb_redist_regions,
374
}
79
redist_region_count, qdev_prop_uint32, uint32_t),
375
80
+ DEFINE_PROP_LINK("sysmem", GICv3State, dma, TYPE_MEMORY_REGION,
376
+static uint32_t sve_zcr_get_valid_len(ARMCPU *cpu, uint32_t start_len)
81
+ MemoryRegion *),
377
+{
82
DEFINE_PROP_END_OF_LIST(),
378
+ uint32_t start_vq = (start_len & 0xf) + 1;
379
+
380
+ return arm_cpu_vq_map_next_smaller(cpu, start_vq + 1) - 1;
381
+}
382
+
383
/*
384
* Given that SVE is enabled, return the vector length for EL.
385
*/
386
@@ -XXX,XX +XXX,XX @@ uint32_t sve_zcr_len_for_el(CPUARMState *env, int el)
387
if (arm_feature(env, ARM_FEATURE_EL3)) {
388
zcr_len = MIN(zcr_len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
389
}
390
- return zcr_len;
391
+
392
+ return sve_zcr_get_valid_len(cpu, zcr_len);
393
}
394
395
static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
396
diff --git a/target/arm/monitor.c b/target/arm/monitor.c
397
index XXXXXXX..XXXXXXX 100644
398
--- a/target/arm/monitor.c
399
+++ b/target/arm/monitor.c
400
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
401
return head;
402
}
403
404
+QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16);
405
+
406
/*
407
* These are cpu model features we want to advertise. The order here
408
* matters as this is the order in which qmp_query_cpu_model_expansion
409
@@ -XXX,XX +XXX,XX @@ GICCapabilityList *qmp_query_gic_capabilities(Error **errp)
410
*/
411
static const char *cpu_model_advertised_features[] = {
412
"aarch64", "pmu", "sve",
413
+ "sve128", "sve256", "sve384", "sve512",
414
+ "sve640", "sve768", "sve896", "sve1024", "sve1152", "sve1280",
415
+ "sve1408", "sve1536", "sve1664", "sve1792", "sve1920", "sve2048",
416
NULL
83
};
417
};
84
418
85
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
419
@@ -XXX,XX +XXX,XX @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
420
if (!err) {
421
visit_check_struct(visitor, &err);
422
}
423
+ if (!err) {
424
+ arm_cpu_finalize_features(ARM_CPU(obj), &err);
425
+ }
426
visit_end_struct(visitor, NULL);
427
visit_free(visitor);
428
if (err) {
429
@@ -XXX,XX +XXX,XX @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
430
error_propagate(errp, err);
431
return NULL;
432
}
433
+ } else {
434
+ Error *err = NULL;
435
+ arm_cpu_finalize_features(ARM_CPU(obj), &err);
436
+ assert(err == NULL);
437
}
438
439
expansion_info = g_new0(CpuModelExpansionInfo, 1);
440
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
86
index XXXXXXX..XXXXXXX 100644
441
index XXXXXXX..XXXXXXX 100644
87
--- a/hw/intc/arm_gicv3_dist.c
442
--- a/tests/arm-cpu-features.c
88
+++ b/hw/intc/arm_gicv3_dist.c
443
+++ b/tests/arm-cpu-features.c
89
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
444
@@ -XXX,XX +XXX,XX @@
90
* A3V == 1 (non-zero values of Affinity level 3 supported)
445
* See the COPYING file in the top-level directory.
91
* IDbits == 0xf (we support 16-bit interrupt identifiers)
446
*/
92
* DVIS == 0 (Direct virtual LPI injection not supported)
447
#include "qemu/osdep.h"
93
- * LPIS == 0 (LPIs not supported)
448
+#include "qemu/bitops.h"
94
+ * LPIS == 1 (LPIs are supported if affinity routing is enabled)
449
#include "libqtest.h"
95
+ * num_LPIs == 0b00000 (bits [15:11],Number of LPIs as indicated
450
#include "qapi/qmp/qdict.h"
96
+ * by GICD_TYPER.IDbits)
451
#include "qapi/qmp/qjson.h"
97
* MBIS == 0 (message-based SPIs not supported)
452
98
* SecurityExtn == 1 if security extns supported
453
+/*
99
* CPUNumber == 0 since for us ARE is always 1
454
+ * We expect the SVE max-vq to be 16. Also it must be <= 64
100
@@ -XXX,XX +XXX,XX @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
455
+ * for our test code, otherwise 'vls' can't just be a uint64_t.
101
bool sec_extn = !(s->gicd_ctlr & GICD_CTLR_DS);
456
+ */
102
457
+#define SVE_MAX_VQ 16
103
*data = (1 << 25) | (1 << 24) | (sec_extn << 10) |
458
+
104
+ (s->lpi_enable << GICD_TYPER_LPIS_SHIFT) |
459
#define MACHINE "-machine virt,gic-version=max,accel=tcg "
105
(0xf << 19) | itlinesnumber;
460
#define MACHINE_KVM "-machine virt,gic-version=max,accel=kvm:tcg "
106
return true;
461
#define QUERY_HEAD "{ 'execute': 'query-cpu-model-expansion', " \
107
}
462
@@ -XXX,XX +XXX,XX @@ static void assert_bad_props(QTestState *qts, const char *cpu_type)
108
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
463
qobject_unref(resp);
464
}
465
466
+static uint64_t resp_get_sve_vls(QDict *resp)
467
+{
468
+ QDict *props;
469
+ const QDictEntry *e;
470
+ uint64_t vls = 0;
471
+ int n = 0;
472
+
473
+ g_assert(resp);
474
+ g_assert(resp_has_props(resp));
475
+
476
+ props = resp_get_props(resp);
477
+
478
+ for (e = qdict_first(props); e; e = qdict_next(props, e)) {
479
+ if (strlen(e->key) > 3 && !strncmp(e->key, "sve", 3) &&
480
+ g_ascii_isdigit(e->key[3])) {
481
+ char *endptr;
482
+ int bits;
483
+
484
+ bits = g_ascii_strtoll(&e->key[3], &endptr, 10);
485
+ if (!bits || *endptr != '\0') {
486
+ continue;
487
+ }
488
+
489
+ if (qdict_get_bool(props, e->key)) {
490
+ vls |= BIT_ULL((bits / 128) - 1);
491
+ }
492
+ ++n;
493
+ }
494
+ }
495
+
496
+ g_assert(n == SVE_MAX_VQ);
497
+
498
+ return vls;
499
+}
500
+
501
+#define assert_sve_vls(qts, cpu_type, expected_vls, fmt, ...) \
502
+({ \
503
+ QDict *_resp = do_query(qts, cpu_type, fmt, ##__VA_ARGS__); \
504
+ g_assert(_resp); \
505
+ g_assert(resp_has_props(_resp)); \
506
+ g_assert(resp_get_sve_vls(_resp) == expected_vls); \
507
+ qobject_unref(_resp); \
508
+})
509
+
510
+static void sve_tests_default(QTestState *qts, const char *cpu_type)
511
+{
512
+ /*
513
+ * With no sve-max-vq or sve<N> properties on the command line
514
+ * the default is to have all vector lengths enabled. This also
515
+ * tests that 'sve' is 'on' by default.
516
+ */
517
+ assert_sve_vls(qts, cpu_type, BIT_ULL(SVE_MAX_VQ) - 1, NULL);
518
+
519
+ /* With SVE off, all vector lengths should also be off. */
520
+ assert_sve_vls(qts, cpu_type, 0, "{ 'sve': false }");
521
+
522
+ /* With SVE on, we must have at least one vector length enabled. */
523
+ assert_error(qts, cpu_type, "cannot disable sve128", "{ 'sve128': false }");
524
+
525
+ /* Basic enable/disable tests. */
526
+ assert_sve_vls(qts, cpu_type, 0x7, "{ 'sve384': true }");
527
+ assert_sve_vls(qts, cpu_type, ((BIT_ULL(SVE_MAX_VQ) - 1) & ~BIT_ULL(2)),
528
+ "{ 'sve384': false }");
529
+
530
+ /*
531
+ * ---------------------------------------------------------------------
532
+ * power-of-two(vq) all-power- can can
533
+ * of-two(< vq) enable disable
534
+ * ---------------------------------------------------------------------
535
+ * vq < max_vq no MUST* yes yes
536
+ * vq < max_vq yes MUST* yes no
537
+ * ---------------------------------------------------------------------
538
+ * vq == max_vq n/a MUST* yes** yes**
539
+ * ---------------------------------------------------------------------
540
+ * vq > max_vq n/a no no yes
541
+ * vq > max_vq n/a yes yes yes
542
+ * ---------------------------------------------------------------------
543
+ *
544
+ * [*] "MUST" means this requirement must already be satisfied,
545
+ * otherwise 'max_vq' couldn't itself be enabled.
546
+ *
547
+ * [**] Not testable with the QMP interface, only with the command line.
548
+ */
549
+
550
+ /* max_vq := 8 */
551
+ assert_sve_vls(qts, cpu_type, 0x8b, "{ 'sve1024': true }");
552
+
553
+ /* max_vq := 8, vq < max_vq, !power-of-two(vq) */
554
+ assert_sve_vls(qts, cpu_type, 0x8f,
555
+ "{ 'sve1024': true, 'sve384': true }");
556
+ assert_sve_vls(qts, cpu_type, 0x8b,
557
+ "{ 'sve1024': true, 'sve384': false }");
558
+
559
+ /* max_vq := 8, vq < max_vq, power-of-two(vq) */
560
+ assert_sve_vls(qts, cpu_type, 0x8b,
561
+ "{ 'sve1024': true, 'sve256': true }");
562
+ assert_error(qts, cpu_type, "cannot disable sve256",
563
+ "{ 'sve1024': true, 'sve256': false }");
564
+
565
+ /* max_vq := 3, vq > max_vq, !all-power-of-two(< vq) */
566
+ assert_error(qts, cpu_type, "cannot disable sve512",
567
+ "{ 'sve384': true, 'sve512': false, 'sve640': true }");
568
+
569
+ /*
570
+ * We can disable power-of-two vector lengths when all larger lengths
571
+ * are also disabled. We only need to disable the power-of-two length,
572
+ * as all non-enabled larger lengths will then be auto-disabled.
573
+ */
574
+ assert_sve_vls(qts, cpu_type, 0x7, "{ 'sve512': false }");
575
+
576
+ /* max_vq := 3, vq > max_vq, all-power-of-two(< vq) */
577
+ assert_sve_vls(qts, cpu_type, 0x1f,
578
+ "{ 'sve384': true, 'sve512': true, 'sve640': true }");
579
+ assert_sve_vls(qts, cpu_type, 0xf,
580
+ "{ 'sve384': true, 'sve512': true, 'sve640': false }");
581
+}
582
+
583
+static void sve_tests_sve_max_vq_8(const void *data)
584
+{
585
+ QTestState *qts;
586
+
587
+ qts = qtest_init(MACHINE "-cpu max,sve-max-vq=8");
588
+
589
+ assert_sve_vls(qts, "max", BIT_ULL(8) - 1, NULL);
590
+
591
+ /*
592
+ * Disabling the max-vq set by sve-max-vq is not allowed, but
593
+ * of course enabling it is OK.
594
+ */
595
+ assert_error(qts, "max", "cannot disable sve1024", "{ 'sve1024': false }");
596
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve1024': true }");
597
+
598
+ /*
599
+ * Enabling anything larger than max-vq set by sve-max-vq is not
600
+ * allowed, but of course disabling everything larger is OK.
601
+ */
602
+ assert_error(qts, "max", "cannot enable sve1152", "{ 'sve1152': true }");
603
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve1152': false }");
604
+
605
+ /*
606
+ * We can enable/disable non power-of-two lengths smaller than the
607
+ * max-vq set by sve-max-vq, but, while we can enable power-of-two
608
+ * lengths, we can't disable them.
609
+ */
610
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve384': true }");
611
+ assert_sve_vls(qts, "max", 0xfb, "{ 'sve384': false }");
612
+ assert_sve_vls(qts, "max", 0xff, "{ 'sve256': true }");
613
+ assert_error(qts, "max", "cannot disable sve256", "{ 'sve256': false }");
614
+
615
+ qtest_quit(qts);
616
+}
617
+
618
+static void sve_tests_sve_off(const void *data)
619
+{
620
+ QTestState *qts;
621
+
622
+ qts = qtest_init(MACHINE "-cpu max,sve=off");
623
+
624
+ /* SVE is off, so the map should be empty. */
625
+ assert_sve_vls(qts, "max", 0, NULL);
626
+
627
+ /* The map stays empty even if we turn lengths off. */
628
+ assert_sve_vls(qts, "max", 0, "{ 'sve128': false }");
629
+
630
+ /* It's an error to enable lengths when SVE is off. */
631
+ assert_error(qts, "max", "cannot enable sve128", "{ 'sve128': true }");
632
+
633
+ /* With SVE re-enabled we should get all vector lengths enabled. */
634
+ assert_sve_vls(qts, "max", BIT_ULL(SVE_MAX_VQ) - 1, "{ 'sve': true }");
635
+
636
+ /* Or enable SVE with just specific vector lengths. */
637
+ assert_sve_vls(qts, "max", 0x3,
638
+ "{ 'sve': true, 'sve128': true, 'sve256': true }");
639
+
640
+ qtest_quit(qts);
641
+}
642
+
643
static void test_query_cpu_model_expansion(const void *data)
644
{
645
QTestState *qts;
646
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion(const void *data)
647
if (g_str_equal(qtest_get_arch(), "aarch64")) {
648
assert_has_feature(qts, "max", "aarch64");
649
assert_has_feature(qts, "max", "sve");
650
+ assert_has_feature(qts, "max", "sve128");
651
assert_has_feature(qts, "cortex-a57", "pmu");
652
assert_has_feature(qts, "cortex-a57", "aarch64");
653
654
+ sve_tests_default(qts, "max");
655
+
656
/* Test that features that depend on KVM generate errors without. */
657
assert_error(qts, "max",
658
"'aarch64' feature cannot be disabled "
659
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
660
qtest_add_data_func("/arm/kvm/query-cpu-model-expansion",
661
NULL, test_query_cpu_model_expansion_kvm);
662
663
+ if (g_str_equal(qtest_get_arch(), "aarch64")) {
664
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-max-vq-8",
665
+ NULL, sve_tests_sve_max_vq_8);
666
+ qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
667
+ NULL, sve_tests_sve_off);
668
+ }
669
+
670
return g_test_run();
671
}
672
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
109
index XXXXXXX..XXXXXXX 100644
673
index XXXXXXX..XXXXXXX 100644
110
--- a/hw/intc/arm_gicv3_redist.c
674
--- a/docs/arm-cpu-features.rst
111
+++ b/hw/intc/arm_gicv3_redist.c
675
+++ b/docs/arm-cpu-features.rst
112
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
676
@@ -XXX,XX +XXX,XX @@ block in the script for usage) is used to issue the QMP commands.
113
case GICR_CTLR:
677
(QEMU) query-cpu-model-expansion type=full model={"name":"max"}
114
/* For our implementation, GICR_TYPER.DPGS is 0 and so all
678
{ "return": {
115
* the DPG bits are RAZ/WI. We don't do anything asynchronously,
679
"model": { "name": "max", "props": {
116
- * so UWP and RWP are RAZ/WI. And GICR_TYPER.LPIS is 0 (we don't
680
- "pmu": true, "aarch64": true
117
- * implement LPIs) so Enable_LPIs is RES0. So there are no writable
681
+ "sve1664": true, "pmu": true, "sve1792": true, "sve1920": true,
118
- * bits for us.
682
+ "sve128": true, "aarch64": true, "sve1024": true, "sve": true,
119
+ * so UWP and RWP are RAZ/WI. GICR_TYPER.LPIS is 1 (we
683
+ "sve640": true, "sve768": true, "sve1408": true, "sve256": true,
120
+ * implement LPIs) so Enable_LPIs is programmable.
684
+ "sve1152": true, "sve512": true, "sve384": true, "sve1536": true,
121
*/
685
+ "sve896": true, "sve1280": true, "sve2048": true
122
+ if (cs->gicr_typer & GICR_TYPER_PLPIS) {
686
}}}}
123
+ if (value & GICR_CTLR_ENABLE_LPIS) {
687
124
+ cs->gicr_ctlr |= GICR_CTLR_ENABLE_LPIS;
688
-We see that the `max` CPU type has the `pmu` and `aarch64` CPU features.
125
+ } else {
689
-We also see that the CPU features are enabled, as they are all `true`.
126
+ cs->gicr_ctlr &= ~GICR_CTLR_ENABLE_LPIS;
690
+We see that the `max` CPU type has the `pmu`, `aarch64`, `sve`, and many
127
+ }
691
+`sve<N>` CPU features. We also see that all the CPU features are
128
+ }
692
+enabled, as they are all `true`. (The `sve<N>` CPU features are all
129
return MEMTX_OK;
693
+optional SVE vector lengths (see "SVE CPU Properties"). While with TCG
130
case GICR_STATUSR:
694
+all SVE vector lengths can be supported, when KVM is in use it's more
131
/* RAZ/WI for our implementation */
695
+likely that only a few lengths will be supported, if SVE is supported at
696
+all.)
697
698
(2) Let's try to disable the PMU::
699
700
(QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"pmu":false}}
701
{ "return": {
702
"model": { "name": "max", "props": {
703
- "pmu": false, "aarch64": true
704
+ "sve1664": true, "pmu": false, "sve1792": true, "sve1920": true,
705
+ "sve128": true, "aarch64": true, "sve1024": true, "sve": true,
706
+ "sve640": true, "sve768": true, "sve1408": true, "sve256": true,
707
+ "sve1152": true, "sve512": true, "sve384": true, "sve1536": true,
708
+ "sve896": true, "sve1280": true, "sve2048": true
709
}}}}
710
711
We see it worked, as `pmu` is now `false`.
712
@@ -XXX,XX +XXX,XX @@ We see it worked, as `pmu` is now `false`.
713
It looks like this feature is limited to a configuration we do not
714
currently have.
715
716
-(4) Let's try probing CPU features for the Cortex-A15 CPU type::
717
+(4) Let's disable `sve` and see what happens to all the optional SVE
718
+ vector lengths::
719
+
720
+ (QEMU) query-cpu-model-expansion type=full model={"name":"max","props":{"sve":false}}
721
+ { "return": {
722
+ "model": { "name": "max", "props": {
723
+ "sve1664": false, "pmu": true, "sve1792": false, "sve1920": false,
724
+ "sve128": false, "aarch64": true, "sve1024": false, "sve": false,
725
+ "sve640": false, "sve768": false, "sve1408": false, "sve256": false,
726
+ "sve1152": false, "sve512": false, "sve384": false, "sve1536": false,
727
+ "sve896": false, "sve1280": false, "sve2048": false
728
+ }}}}
729
+
730
+As expected they are now all `false`.
731
+
732
+(5) Let's try probing CPU features for the Cortex-A15 CPU type::
733
734
(QEMU) query-cpu-model-expansion type=full model={"name":"cortex-a15"}
735
{"return": {"model": {"name": "cortex-a15", "props": {"pmu": true}}}}
736
@@ -XXX,XX +XXX,XX @@ After determining which CPU features are available and supported for a
737
given CPU type, then they may be selectively enabled or disabled on the
738
QEMU command line with that CPU type::
739
740
- $ qemu-system-aarch64 -M virt -cpu max,pmu=off
741
+ $ qemu-system-aarch64 -M virt -cpu max,pmu=off,sve=on,sve128=on,sve256=on
742
743
-The example above disables the PMU for the `max` CPU type.
744
+The example above disables the PMU and enables the first two SVE vector
745
+lengths for the `max` CPU type. Note, the `sve=on` isn't actually
746
+necessary, because, as we observed above with our probe of the `max` CPU
747
+type, `sve` is already on by default. Also, based on our probe of
748
+defaults, it would seem we need to disable many SVE vector lengths, rather
749
+than only enabling the two we want. This isn't the case, because, as
750
+disabling many SVE vector lengths would be quite verbose, the `sve<N>` CPU
751
+properties have special semantics (see "SVE CPU Property Parsing
752
+Semantics").
753
+
754
+SVE CPU Properties
755
+==================
756
+
757
+There are two types of SVE CPU properties: `sve` and `sve<N>`. The first
758
+is used to enable or disable the entire SVE feature, just as the `pmu`
759
+CPU property completely enables or disables the PMU. The second type
760
+is used to enable or disable specific vector lengths, where `N` is the
761
+number of bits of the length. The `sve<N>` CPU properties have special
762
+dependencies and constraints, see "SVE CPU Property Dependencies and
763
+Constraints" below. Additionally, as we want all supported vector lengths
764
+to be enabled by default, then, in order to avoid overly verbose command
765
+lines (command lines full of `sve<N>=off`, for all `N` not wanted), we
766
+provide the parsing semantics listed in "SVE CPU Property Parsing
767
+Semantics".
768
+
769
+SVE CPU Property Dependencies and Constraints
770
+---------------------------------------------
771
+
772
+ 1) At least one vector length must be enabled when `sve` is enabled.
773
+
774
+ 2) If a vector length `N` is enabled, then all power-of-two vector
775
+ lengths smaller than `N` must also be enabled. E.g. if `sve512`
776
+ is enabled, then the 128-bit and 256-bit vector lengths must also
777
+ be enabled.
778
+
779
+SVE CPU Property Parsing Semantics
780
+----------------------------------
781
+
782
+ 1) If SVE is disabled (`sve=off`), then which SVE vector lengths
783
+ are enabled or disabled is irrelevant to the guest, as the entire
784
+ SVE feature is disabled and that disables all vector lengths for
785
+ the guest. However QEMU will still track any `sve<N>` CPU
786
+ properties provided by the user. If later an `sve=on` is provided,
787
+ then the guest will get only the enabled lengths. If no `sve=on`
788
+ is provided and there are explicitly enabled vector lengths, then
789
+ an error is generated.
790
+
791
+ 2) If SVE is enabled (`sve=on`), but no `sve<N>` CPU properties are
792
+ provided, then all supported vector lengths are enabled, including
793
+ the non-power-of-two lengths.
794
+
795
+ 3) If SVE is enabled, then an error is generated when attempting to
796
+ disable the last enabled vector length (see constraint (1) of "SVE
797
+ CPU Property Dependencies and Constraints").
798
+
799
+ 4) If one or more vector lengths have been explicitly enabled and
800
+ at least one of the dependency lengths of the maximum enabled length
801
+ has been explicitly disabled, then an error is generated (see
802
+ constraint (2) of "SVE CPU Property Dependencies and Constraints").
803
+
804
+ 5) If one or more `sve<N>` CPU properties are set `off`, but no `sve<N>`,
805
+ CPU properties are set `on`, then the specified vector lengths are
806
+ disabled but the default for any unspecified lengths remains enabled.
807
+ Disabling a power-of-two vector length also disables all vector
808
+ lengths larger than the power-of-two length (see constraint (2) of
809
+ "SVE CPU Property Dependencies and Constraints").
810
+
811
+ 6) If one or more `sve<N>` CPU properties are set to `on`, then they
812
+ are enabled and all unspecified lengths default to disabled, except
813
+ for the required lengths per constraint (2) of "SVE CPU Property
814
+ Dependencies and Constraints", which will even be auto-enabled if
815
+ they were not explicitly enabled.
816
+
817
+ 7) If SVE was disabled (`sve=off`), allowing all vector lengths to be
818
+ explicitly disabled (i.e. avoiding the error specified in (3) of
819
+ "SVE CPU Property Parsing Semantics"), then if later an `sve=on` is
820
+ provided an error will be generated. To avoid this error, one must
821
+ enable at least one vector length prior to enabling SVE.
822
+
823
+SVE CPU Property Examples
824
+-------------------------
825
+
826
+ 1) Disable SVE::
827
+
828
+ $ qemu-system-aarch64 -M virt -cpu max,sve=off
829
+
830
+ 2) Implicitly enable all vector lengths for the `max` CPU type::
831
+
832
+ $ qemu-system-aarch64 -M virt -cpu max
833
+
834
+ 3) Only enable the 128-bit vector length::
835
+
836
+ $ qemu-system-aarch64 -M virt -cpu max,sve128=on
837
+
838
+ 4) Disable the 512-bit vector length and all larger vector lengths,
839
+ since 512 is a power-of-two. This results in all the smaller,
840
+ uninitialized lengths (128, 256, and 384) defaulting to enabled::
841
+
842
+ $ qemu-system-aarch64 -M virt -cpu max,sve512=off
843
+
844
+ 5) Enable the 128-bit, 256-bit, and 512-bit vector lengths::
845
+
846
+ $ qemu-system-aarch64 -M virt -cpu max,sve128=on,sve256=on,sve512=on
847
+
848
+ 6) The same as (5), but since the 128-bit and 256-bit vector
849
+ lengths are required for the 512-bit vector length to be enabled,
850
+ then allow them to be auto-enabled::
851
+
852
+ $ qemu-system-aarch64 -M virt -cpu max,sve512=on
853
+
854
+ 7) Do the same as (6), but by first disabling SVE and then re-enabling it::
855
+
856
+ $ qemu-system-aarch64 -M virt -cpu max,sve=off,sve512=on,sve=on
857
+
858
+ 8) Force errors regarding the last vector length::
859
+
860
+ $ qemu-system-aarch64 -M virt -cpu max,sve128=off
861
+ $ qemu-system-aarch64 -M virt -cpu max,sve=off,sve128=off,sve=on
862
+
863
+SVE CPU Property Recommendations
864
+--------------------------------
865
+
866
+The examples in "SVE CPU Property Examples" exhibit many ways to select
867
+vector lengths which developers may find useful in order to avoid overly
868
+verbose command lines. However, the recommended way to select vector
869
+lengths is to explicitly enable each desired length. Therefore only
870
+example's (1), (3), and (5) exhibit recommended uses of the properties.
--
2.20.1

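The power-of-two dependency described in constraint (2) above is the rule
most likely to surprise users on the command line. The following is a
rough, self-contained C sketch of that propagation rule only; it is not
the QEMU implementation (the real handling is in arm_cpu_sve_finalize()),
and the array here simply indexes vq - 1, the vector length in quadwords
minus one:

    #include <stdbool.h>
    #include <stdio.h>

    /* vq is the vector length in quadwords (128-bit units): sve512 is vq == 4. */
    static void propagate_powers_of_two(bool enabled[], int max_vq)
    {
        int vq = 1;

        while (vq * 2 <= max_vq) {    /* find the largest power of two <= max_vq */
            vq *= 2;
        }
        for (; vq >= 1; vq /= 2) {    /* sve512 pulls in sve256 and sve128 */
            enabled[vq - 1] = true;
        }
    }

    int main(void)
    {
        bool enabled[16] = { [4 - 1] = true };    /* the user asked for sve512 only */
        int vq;

        propagate_powers_of_two(enabled, 4);
        for (vq = 1; vq <= 4; vq++) {
            printf("sve%d: %s\n", vq * 128, enabled[vq - 1] ? "on" : "off");
        }
        return 0;
    }

With only sve512 requested this prints sve128, sve256 and sve512 as "on"
and sve384 as "off", which is exactly the behaviour of example (6) above.
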
1
From: Shashi Mallela <shashi.mallela@linaro.org>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
Defined descriptors for the ITS device table, collection table and ITS
3
These are the SVE equivalents to kvm_arch_get/put_fpsimd. Note, the
4
command queue entities. Implemented register read/write functions to
4
swabbing is different than it is for fpsimd because the vector format
5
extract ITS table parameters and command queue parameters, extended the
5
is a little-endian stream of words.
6
GICv3 common code to capture the QEMU address space (which hosts the ITS table
6
7
platform memories required for subsequent ITS processing) and
7
Signed-off-by: Andrew Jones <drjones@redhat.com>
8
initialized it in the ITS device.
8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
10
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
11
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
12
Reviewed-by: Eric Auger <eric.auger@redhat.com>
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
13
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
10
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
14
Message-id: 20210910143951.92242-3-shashi.mallela@linaro.org
11
Message-id: 20191031142734.8590-6-drjones@redhat.com
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
16
---
13
---
17
hw/intc/gicv3_internal.h | 29 ++
14
target/arm/kvm64.c | 185 ++++++++++++++++++++++++++++++++++++++-------
18
include/hw/intc/arm_gicv3_common.h | 3 +
15
1 file changed, 156 insertions(+), 29 deletions(-)
19
include/hw/intc/arm_gicv3_its_common.h | 23 ++
16
20
hw/intc/arm_gicv3_its.c | 376 +++++++++++++++++++++++++
17
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
21
4 files changed, 431 insertions(+)
22
23
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
24
index XXXXXXX..XXXXXXX 100644
18
index XXXXXXX..XXXXXXX 100644
25
--- a/hw/intc/gicv3_internal.h
19
--- a/target/arm/kvm64.c
26
+++ b/hw/intc/gicv3_internal.h
20
+++ b/target/arm/kvm64.c
27
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_BASER, INNERCACHE, 59, 3)
21
@@ -XXX,XX +XXX,XX @@ int kvm_arch_destroy_vcpu(CPUState *cs)
28
FIELD(GITS_BASER, INDIRECT, 62, 1)
22
bool kvm_arm_reg_syncs_via_cpreg_list(uint64_t regidx)
29
FIELD(GITS_BASER, VALID, 63, 1)
23
{
30
24
/* Return true if the regidx is a register we should synchronize
31
+FIELD(GITS_CBASER, SIZE, 0, 8)
25
- * via the cpreg_tuples array (ie is not a core reg we sync by
32
+FIELD(GITS_CBASER, SHAREABILITY, 10, 2)
26
- * hand in kvm_arch_get/put_registers())
33
+FIELD(GITS_CBASER, PHYADDR, 12, 40)
27
+ * via the cpreg_tuples array (ie is not a core or sve reg that
34
+FIELD(GITS_CBASER, OUTERCACHE, 53, 3)
28
+ * we sync by hand in kvm_arch_get/put_registers())
35
+FIELD(GITS_CBASER, INNERCACHE, 59, 3)
29
*/
36
+FIELD(GITS_CBASER, VALID, 63, 1)
30
switch (regidx & KVM_REG_ARM_COPROC_MASK) {
37
+
31
case KVM_REG_ARM_CORE:
38
+FIELD(GITS_CREADR, STALLED, 0, 1)
32
+ case KVM_REG_ARM64_SVE:
39
+FIELD(GITS_CREADR, OFFSET, 5, 15)
33
return false;
40
+
34
default:
41
+FIELD(GITS_CWRITER, RETRY, 0, 1)
35
return true;
42
+FIELD(GITS_CWRITER, OFFSET, 5, 15)
36
@@ -XXX,XX +XXX,XX @@ int kvm_arm_cpreg_level(uint64_t regidx)
43
+
37
44
+FIELD(GITS_CTLR, ENABLED, 0, 1)
38
static int kvm_arch_put_fpsimd(CPUState *cs)
45
FIELD(GITS_CTLR, QUIESCENT, 31, 1)
39
{
46
40
- ARMCPU *cpu = ARM_CPU(cs);
47
FIELD(GITS_TYPER, PHYSICAL, 0, 1)
41
- CPUARMState *env = &cpu->env;
48
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, PTA, 19, 1)
42
+ CPUARMState *env = &ARM_CPU(cs)->env;
49
FIELD(GITS_TYPER, CIDBITS, 32, 4)
43
struct kvm_one_reg reg;
50
FIELD(GITS_TYPER, CIL, 36, 1)
44
- uint32_t fpr;
51
45
int i, ret;
52
+#define GITS_IDREGS 0xFFD0
46
53
+
47
for (i = 0; i < 32; i++) {
54
+#define ITS_CTLR_ENABLED (1U) /* ITS Enabled */
48
@@ -XXX,XX +XXX,XX @@ static int kvm_arch_put_fpsimd(CPUState *cs)
55
+
49
}
56
+#define GITS_BASER_RO_MASK (R_GITS_BASER_ENTRYSIZE_MASK | \
50
}
57
+ R_GITS_BASER_TYPE_MASK)
51
58
+
52
- reg.addr = (uintptr_t)(&fpr);
59
#define GITS_BASER_PAGESIZE_4K 0
53
- fpr = vfp_get_fpsr(env);
60
#define GITS_BASER_PAGESIZE_16K 1
54
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
61
#define GITS_BASER_PAGESIZE_64K 2
55
- ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
62
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1)
56
- if (ret) {
63
#define GITS_BASER_TYPE_DEVICE 1ULL
57
- return ret;
64
#define GITS_BASER_TYPE_COLLECTION 4ULL
58
+ return 0;
65
59
+}
66
+#define GITS_PAGE_SIZE_4K 0x1000
60
+
67
+#define GITS_PAGE_SIZE_16K 0x4000
61
+/*
68
+#define GITS_PAGE_SIZE_64K 0x10000
62
+ * SVE registers are encoded in KVM's memory in an endianness-invariant format.
69
+
63
+ * The byte at offset i from the start of the in-memory representation contains
70
+#define L1TABLE_ENTRY_SIZE 8
64
+ * the bits [(7 + 8 * i) : (8 * i)] of the register value. As this means the
71
+
65
+ * lowest offsets are stored in the lowest memory addresses, then that nearly
72
+#define GITS_CMDQ_ENTRY_SIZE 32
66
+ * matches QEMU's representation, which is to use an array of host-endian
73
+
67
+ * uint64_t's, where the lower offsets are at the lower indices. To complete
74
/**
68
+ * the translation we just need to byte swap the uint64_t's on big-endian hosts.
75
* Default features advertised by this version of ITS
69
+ */
76
*/
70
+static uint64_t *sve_bswap64(uint64_t *dst, uint64_t *src, int nr)
77
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
78
index XXXXXXX..XXXXXXX 100644
79
--- a/include/hw/intc/arm_gicv3_common.h
80
+++ b/include/hw/intc/arm_gicv3_common.h
81
@@ -XXX,XX +XXX,XX @@ struct GICv3State {
82
int dev_fd; /* kvm device fd if backed by kvm vgic support */
83
Error *migration_blocker;
84
85
+ MemoryRegion *dma;
86
+ AddressSpace dma_as;
87
+
88
/* Distributor */
89
90
/* for a GIC with the security extensions the NS banked version of this
91
diff --git a/include/hw/intc/arm_gicv3_its_common.h b/include/hw/intc/arm_gicv3_its_common.h
92
index XXXXXXX..XXXXXXX 100644
93
--- a/include/hw/intc/arm_gicv3_its_common.h
94
+++ b/include/hw/intc/arm_gicv3_its_common.h
95
@@ -XXX,XX +XXX,XX @@
96
97
#define GITS_TRANSLATER 0x0040
98
99
+typedef struct {
100
+ bool valid;
101
+ bool indirect;
102
+ uint16_t entry_sz;
103
+ uint32_t page_sz;
104
+ uint32_t max_entries;
105
+ union {
106
+ uint32_t max_devids;
107
+ uint32_t max_collids;
108
+ } maxids;
109
+ uint64_t base_addr;
110
+} TableDesc;
111
+
112
+typedef struct {
113
+ bool valid;
114
+ uint32_t max_entries;
115
+ uint64_t base_addr;
116
+} CmdQDesc;
117
+
118
struct GICv3ITSState {
119
SysBusDevice parent_obj;
120
121
@@ -XXX,XX +XXX,XX @@ struct GICv3ITSState {
122
uint64_t creadr;
123
uint64_t baser[8];
124
125
+ TableDesc dt;
126
+ TableDesc ct;
127
+ CmdQDesc cq;
128
+
129
Error *migration_blocker;
130
};
131
132
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
133
index XXXXXXX..XXXXXXX 100644
134
--- a/hw/intc/arm_gicv3_its.c
135
+++ b/hw/intc/arm_gicv3_its.c
136
@@ -XXX,XX +XXX,XX @@ struct GICv3ITSClass {
137
void (*parent_reset)(DeviceState *dev);
138
};
139
140
+static uint64_t baser_base_addr(uint64_t value, uint32_t page_sz)
141
+{
71
+{
142
+ uint64_t result = 0;
72
+#ifdef HOST_WORDS_BIGENDIAN
143
+
73
+ int i;
144
+ switch (page_sz) {
74
+
145
+ case GITS_PAGE_SIZE_4K:
75
+ for (i = 0; i < nr; ++i) {
146
+ case GITS_PAGE_SIZE_16K:
76
+ dst[i] = bswap64(src[i]);
147
+ result = FIELD_EX64(value, GITS_BASER, PHYADDR) << 12;
77
}
148
+ break;
78
149
+
79
- reg.addr = (uintptr_t)(&fpr);
150
+ case GITS_PAGE_SIZE_64K:
80
- fpr = vfp_get_fpcr(env);
151
+ result = FIELD_EX64(value, GITS_BASER, PHYADDRL_64K) << 16;
81
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
152
+ result |= FIELD_EX64(value, GITS_BASER, PHYADDRH_64K) << 48;
82
+ return dst;
153
+ break;
83
+#else
154
+
84
+ return src;
155
+ default:
85
+#endif
156
+ break;
157
+ }
158
+ return result;
159
+}
86
+}
160
+
87
+
161
+/*
88
+/*
162
+ * This function extracts the ITS Device and Collection table specific
89
+ * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
163
+ * parameters (like base_addr, size etc) from GITS_BASER register.
90
+ * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
164
+ * It is called during ITS enable and also during post_load migration
91
+ * code the slice index to zero for now as it's unlikely we'll need more than
92
+ * one slice for quite some time.
165
+ */
93
+ */
166
+static void extract_table_params(GICv3ITSState *s)
94
+static int kvm_arch_put_sve(CPUState *cs)
167
+{
95
+{
168
+ uint16_t num_pages = 0;
96
+ ARMCPU *cpu = ARM_CPU(cs);
169
+ uint8_t page_sz_type;
97
+ CPUARMState *env = &cpu->env;
170
+ uint8_t type;
98
+ uint64_t tmp[ARM_MAX_VQ * 2];
171
+ uint32_t page_sz = 0;
99
+ uint64_t *r;
172
+ uint64_t value;
100
+ struct kvm_one_reg reg;
173
+
101
+ int n, ret;
174
+ for (int i = 0; i < 8; i++) {
102
+
175
+ value = s->baser[i];
103
+ for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
176
+
104
+ r = sve_bswap64(tmp, &env->vfp.zregs[n].d[0], cpu->sve_max_vq * 2);
177
+ if (!value) {
105
+ reg.addr = (uintptr_t)r;
178
+ continue;
106
+ reg.id = KVM_REG_ARM64_SVE_ZREG(n, 0);
179
+ }
107
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
180
+
108
+ if (ret) {
181
+ page_sz_type = FIELD_EX64(value, GITS_BASER, PAGESIZE);
109
+ return ret;
182
+
110
+ }
183
+ switch (page_sz_type) {
111
+ }
184
+ case 0:
112
+
185
+ page_sz = GITS_PAGE_SIZE_4K;
113
+ for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
186
+ break;
114
+ r = sve_bswap64(tmp, r = &env->vfp.pregs[n].p[0],
187
+
115
+ DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
188
+ case 1:
116
+ reg.addr = (uintptr_t)r;
189
+ page_sz = GITS_PAGE_SIZE_16K;
117
+ reg.id = KVM_REG_ARM64_SVE_PREG(n, 0);
190
+ break;
118
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
191
+
119
+ if (ret) {
192
+ case 2:
120
+ return ret;
193
+ case 3:
121
+ }
194
+ page_sz = GITS_PAGE_SIZE_64K;
122
+ }
195
+ break;
123
+
196
+
124
+ r = sve_bswap64(tmp, &env->vfp.pregs[FFR_PRED_NUM].p[0],
197
+ default:
125
+ DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
198
+ g_assert_not_reached();
126
+ reg.addr = (uintptr_t)r;
199
+ }
127
+ reg.id = KVM_REG_ARM64_SVE_FFR(0);
200
+
128
ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
201
+ num_pages = FIELD_EX64(value, GITS_BASER, SIZE) + 1;
129
if (ret) {
202
+
130
return ret;
203
+ type = FIELD_EX64(value, GITS_BASER, TYPE);
131
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
204
+
132
{
205
+ switch (type) {
133
struct kvm_one_reg reg;
206
+
134
uint64_t val;
207
+ case GITS_BASER_TYPE_DEVICE:
135
+ uint32_t fpr;
208
+ memset(&s->dt, 0 , sizeof(s->dt));
136
int i, ret;
209
+ s->dt.valid = FIELD_EX64(value, GITS_BASER, VALID);
137
unsigned int el;
210
+
138
211
+ if (!s->dt.valid) {
139
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
212
+ return;
140
}
213
+ }
141
}
214
+
142
215
+ s->dt.page_sz = page_sz;
143
- ret = kvm_arch_put_fpsimd(cs);
216
+ s->dt.indirect = FIELD_EX64(value, GITS_BASER, INDIRECT);
144
+ if (cpu_isar_feature(aa64_sve, cpu)) {
217
+ s->dt.entry_sz = FIELD_EX64(value, GITS_BASER, ENTRYSIZE);
145
+ ret = kvm_arch_put_sve(cs);
218
+
146
+ } else {
219
+ if (!s->dt.indirect) {
147
+ ret = kvm_arch_put_fpsimd(cs);
220
+ s->dt.max_entries = (num_pages * page_sz) / s->dt.entry_sz;
148
+ }
221
+ } else {
149
+ if (ret) {
222
+ s->dt.max_entries = (((num_pages * page_sz) /
150
+ return ret;
223
+ L1TABLE_ENTRY_SIZE) *
151
+ }
224
+ (page_sz / s->dt.entry_sz));
152
+
225
+ }
153
+ reg.addr = (uintptr_t)(&fpr);
226
+
154
+ fpr = vfp_get_fpsr(env);
227
+ s->dt.maxids.max_devids = (1UL << (FIELD_EX64(s->typer, GITS_TYPER,
155
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
228
+ DEVBITS) + 1));
156
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
229
+
157
+ if (ret) {
230
+ s->dt.base_addr = baser_base_addr(value, page_sz);
158
+ return ret;
231
+
159
+ }
232
+ break;
160
+
233
+
161
+ reg.addr = (uintptr_t)(&fpr);
234
+ case GITS_BASER_TYPE_COLLECTION:
162
+ fpr = vfp_get_fpcr(env);
235
+ memset(&s->ct, 0 , sizeof(s->ct));
163
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
236
+ s->ct.valid = FIELD_EX64(value, GITS_BASER, VALID);
164
+ ret = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
237
+
165
if (ret) {
238
+ /*
166
return ret;
239
+ * GITS_TYPER.HCC is 0 for this implementation
167
}
240
+ * hence writes are discarded if ct.valid is 0
168
@@ -XXX,XX +XXX,XX @@ int kvm_arch_put_registers(CPUState *cs, int level)
241
+ */
169
242
+ if (!s->ct.valid) {
170
static int kvm_arch_get_fpsimd(CPUState *cs)
243
+ return;
171
{
244
+ }
172
- ARMCPU *cpu = ARM_CPU(cs);
245
+
173
- CPUARMState *env = &cpu->env;
246
+ s->ct.page_sz = page_sz;
174
+ CPUARMState *env = &ARM_CPU(cs)->env;
247
+ s->ct.indirect = FIELD_EX64(value, GITS_BASER, INDIRECT);
175
struct kvm_one_reg reg;
248
+ s->ct.entry_sz = FIELD_EX64(value, GITS_BASER, ENTRYSIZE);
176
- uint32_t fpr;
249
+
177
int i, ret;
250
+ if (!s->ct.indirect) {
178
251
+ s->ct.max_entries = (num_pages * page_sz) / s->ct.entry_sz;
179
for (i = 0; i < 32; i++) {
252
+ } else {
180
@@ -XXX,XX +XXX,XX @@ static int kvm_arch_get_fpsimd(CPUState *cs)
253
+ s->ct.max_entries = (((num_pages * page_sz) /
181
}
254
+ L1TABLE_ENTRY_SIZE) *
182
}
255
+ (page_sz / s->ct.entry_sz));
183
256
+ }
184
- reg.addr = (uintptr_t)(&fpr);
257
+
185
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
258
+ if (FIELD_EX64(s->typer, GITS_TYPER, CIL)) {
186
- ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
259
+ s->ct.maxids.max_collids = (1UL << (FIELD_EX64(s->typer,
187
- if (ret) {
260
+ GITS_TYPER, CIDBITS) + 1));
188
- return ret;
261
+ } else {
189
- }
262
+ /* 16-bit CollectionId supported when CIL == 0 */
190
- vfp_set_fpsr(env, fpr);
263
+ s->ct.maxids.max_collids = (1UL << 16);
191
+ return 0;
264
+ }
265
+
266
+ s->ct.base_addr = baser_base_addr(value, page_sz);
267
+
268
+ break;
269
+
270
+ default:
271
+ break;
272
+ }
273
+ }
274
+}
192
+}
275
+
193
276
+static void extract_cmdq_params(GICv3ITSState *s)
194
- reg.addr = (uintptr_t)(&fpr);
195
- reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
196
+/*
197
+ * KVM SVE registers come in slices where ZREGs have a slice size of 2048 bits
198
+ * and PREGS and the FFR have a slice size of 256 bits. However we simply hard
199
+ * code the slice index to zero for now as it's unlikely we'll need more than
200
+ * one slice for quite some time.
201
+ */
202
+static int kvm_arch_get_sve(CPUState *cs)
277
+{
203
+{
278
+ uint16_t num_pages = 0;
204
+ ARMCPU *cpu = ARM_CPU(cs);
279
+ uint64_t value = s->cbaser;
205
+ CPUARMState *env = &cpu->env;
280
+
206
+ struct kvm_one_reg reg;
281
+ num_pages = FIELD_EX64(value, GITS_CBASER, SIZE) + 1;
207
+ uint64_t *r;
282
+
208
+ int n, ret;
283
+ memset(&s->cq, 0 , sizeof(s->cq));
209
+
284
+ s->cq.valid = FIELD_EX64(value, GITS_CBASER, VALID);
210
+ for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; ++n) {
285
+
211
+ r = &env->vfp.zregs[n].d[0];
286
+ if (s->cq.valid) {
212
+ reg.addr = (uintptr_t)r;
287
+ s->cq.max_entries = (num_pages * GITS_PAGE_SIZE_4K) /
213
+ reg.id = KVM_REG_ARM64_SVE_ZREG(n, 0);
288
+ GITS_CMDQ_ENTRY_SIZE;
214
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
289
+ s->cq.base_addr = FIELD_EX64(value, GITS_CBASER, PHYADDR);
215
+ if (ret) {
290
+ s->cq.base_addr <<= R_GITS_CBASER_PHYADDR_SHIFT;
216
+ return ret;
291
+ }
217
+ }
292
+}
218
+ sve_bswap64(r, r, cpu->sve_max_vq * 2);
293
+
219
+ }
294
static MemTxResult gicv3_its_translation_write(void *opaque, hwaddr offset,
220
+
295
uint64_t data, unsigned size,
221
+ for (n = 0; n < KVM_ARM64_SVE_NUM_PREGS; ++n) {
296
MemTxAttrs attrs)
222
+ r = &env->vfp.pregs[n].p[0];
297
@@ -XXX,XX +XXX,XX @@ static bool its_writel(GICv3ITSState *s, hwaddr offset,
223
+ reg.addr = (uintptr_t)r;
298
uint64_t value, MemTxAttrs attrs)
224
+ reg.id = KVM_REG_ARM64_SVE_PREG(n, 0);
299
{
225
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
300
bool result = true;
226
+ if (ret) {
301
+ int index;
227
+ return ret;
302
228
+ }
303
+ switch (offset) {
229
+ sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
304
+ case GITS_CTLR:
230
+ }
305
+ s->ctlr |= (value & ~(s->ctlr));
231
+
306
+
232
+ r = &env->vfp.pregs[FFR_PRED_NUM].p[0];
307
+ if (s->ctlr & ITS_CTLR_ENABLED) {
233
+ reg.addr = (uintptr_t)r;
308
+ extract_table_params(s);
234
+ reg.id = KVM_REG_ARM64_SVE_FFR(0);
309
+ extract_cmdq_params(s);
235
ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
310
+ s->creadr = 0;
236
if (ret) {
311
+ }
237
return ret;
312
+ break;
238
}
313
+ case GITS_CBASER:
239
- vfp_set_fpcr(env, fpr);
314
+ /*
240
+ sve_bswap64(r, r, DIV_ROUND_UP(cpu->sve_max_vq * 2, 8));
315
+ * IMPDEF choice:- GITS_CBASER register becomes RO if ITS is
241
316
+ * already enabled
242
return 0;
317
+ */
318
+ if (!(s->ctlr & ITS_CTLR_ENABLED)) {
319
+ s->cbaser = deposit64(s->cbaser, 0, 32, value);
320
+ s->creadr = 0;
321
+ s->cwriter = s->creadr;
322
+ }
323
+ break;
324
+ case GITS_CBASER + 4:
325
+ /*
326
+ * IMPDEF choice:- GITS_CBASER register becomes RO if ITS is
327
+ * already enabled
328
+ */
329
+ if (!(s->ctlr & ITS_CTLR_ENABLED)) {
330
+ s->cbaser = deposit64(s->cbaser, 32, 32, value);
331
+ s->creadr = 0;
332
+ s->cwriter = s->creadr;
333
+ }
334
+ break;
335
+ case GITS_CWRITER:
336
+ s->cwriter = deposit64(s->cwriter, 0, 32,
337
+ (value & ~R_GITS_CWRITER_RETRY_MASK));
338
+ break;
339
+ case GITS_CWRITER + 4:
340
+ s->cwriter = deposit64(s->cwriter, 32, 32, value);
341
+ break;
342
+ case GITS_CREADR:
343
+ if (s->gicv3->gicd_ctlr & GICD_CTLR_DS) {
344
+ s->creadr = deposit64(s->creadr, 0, 32,
345
+ (value & ~R_GITS_CREADR_STALLED_MASK));
346
+ } else {
347
+ /* RO register, ignore the write */
348
+ qemu_log_mask(LOG_GUEST_ERROR,
349
+ "%s: invalid guest write to RO register at offset "
350
+ TARGET_FMT_plx "\n", __func__, offset);
351
+ }
352
+ break;
353
+ case GITS_CREADR + 4:
354
+ if (s->gicv3->gicd_ctlr & GICD_CTLR_DS) {
355
+ s->creadr = deposit64(s->creadr, 32, 32, value);
356
+ } else {
357
+ /* RO register, ignore the write */
358
+ qemu_log_mask(LOG_GUEST_ERROR,
359
+ "%s: invalid guest write to RO register at offset "
360
+ TARGET_FMT_plx "\n", __func__, offset);
361
+ }
362
+ break;
363
+ case GITS_BASER ... GITS_BASER + 0x3f:
364
+ /*
365
+ * IMPDEF choice:- GITS_BASERn register becomes RO if ITS is
366
+ * already enabled
367
+ */
368
+ if (!(s->ctlr & ITS_CTLR_ENABLED)) {
369
+ index = (offset - GITS_BASER) / 8;
370
+
371
+ if (offset & 7) {
372
+ value <<= 32;
373
+ value &= ~GITS_BASER_RO_MASK;
374
+ s->baser[index] &= GITS_BASER_RO_MASK | MAKE_64BIT_MASK(0, 32);
375
+ s->baser[index] |= value;
376
+ } else {
377
+ value &= ~GITS_BASER_RO_MASK;
378
+ s->baser[index] &= GITS_BASER_RO_MASK | MAKE_64BIT_MASK(32, 32);
379
+ s->baser[index] |= value;
380
+ }
381
+ }
382
+ break;
383
+ case GITS_IIDR:
384
+ case GITS_IDREGS ... GITS_IDREGS + 0x2f:
385
+ /* RO registers, ignore the write */
386
+ qemu_log_mask(LOG_GUEST_ERROR,
387
+ "%s: invalid guest write to RO register at offset "
388
+ TARGET_FMT_plx "\n", __func__, offset);
389
+ break;
390
+ default:
391
+ result = false;
392
+ break;
393
+ }
394
return result;
395
}
243
}
396
244
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
397
@@ -XXX,XX +XXX,XX @@ static bool its_readl(GICv3ITSState *s, hwaddr offset,
245
struct kvm_one_reg reg;
398
uint64_t *data, MemTxAttrs attrs)
246
uint64_t val;
399
{
247
unsigned int el;
400
bool result = true;
248
+ uint32_t fpr;
401
+ int index;
249
int i, ret;
402
250
403
+ switch (offset) {
251
ARMCPU *cpu = ARM_CPU(cs);
404
+ case GITS_CTLR:
252
@@ -XXX,XX +XXX,XX @@ int kvm_arch_get_registers(CPUState *cs)
405
+ *data = s->ctlr;
253
env->spsr = env->banked_spsr[i];
406
+ break;
254
}
407
+ case GITS_IIDR:
255
408
+ *data = gicv3_iidr();
256
- ret = kvm_arch_get_fpsimd(cs);
409
+ break;
257
+ if (cpu_isar_feature(aa64_sve, cpu)) {
410
+ case GITS_IDREGS ... GITS_IDREGS + 0x2f:
258
+ ret = kvm_arch_get_sve(cs);
411
+ /* ID registers */
259
+ } else {
412
+ *data = gicv3_idreg(offset - GITS_IDREGS);
260
+ ret = kvm_arch_get_fpsimd(cs);
413
+ break;
261
+ }
414
+ case GITS_TYPER:
262
if (ret) {
415
+ *data = extract64(s->typer, 0, 32);
263
return ret;
416
+ break;
264
}
417
+ case GITS_TYPER + 4:
265
418
+ *data = extract64(s->typer, 32, 32);
266
+ reg.addr = (uintptr_t)(&fpr);
419
+ break;
267
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpsr);
420
+ case GITS_CBASER:
268
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
421
+ *data = extract64(s->cbaser, 0, 32);
269
+ if (ret) {
422
+ break;
270
+ return ret;
423
+ case GITS_CBASER + 4:
271
+ }
424
+ *data = extract64(s->cbaser, 32, 32);
272
+ vfp_set_fpsr(env, fpr);
425
+ break;
273
+
426
+ case GITS_CREADR:
274
+ reg.addr = (uintptr_t)(&fpr);
427
+ *data = extract64(s->creadr, 0, 32);
275
+ reg.id = AARCH64_SIMD_CTRL_REG(fp_regs.fpcr);
428
+ break;
276
+ ret = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
429
+ case GITS_CREADR + 4:
277
+ if (ret) {
430
+ *data = extract64(s->creadr, 32, 32);
278
+ return ret;
431
+ break;
279
+ }
432
+ case GITS_CWRITER:
280
+ vfp_set_fpcr(env, fpr);
433
+ *data = extract64(s->cwriter, 0, 32);
281
+
434
+ break;
282
ret = kvm_get_vcpu_events(cpu);
435
+ case GITS_CWRITER + 4:
283
if (ret) {
436
+ *data = extract64(s->cwriter, 32, 32);
284
return ret;
437
+ break;
438
+ case GITS_BASER ... GITS_BASER + 0x3f:
439
+ index = (offset - GITS_BASER) / 8;
440
+ if (offset & 7) {
441
+ *data = extract64(s->baser[index], 32, 32);
442
+ } else {
443
+ *data = extract64(s->baser[index], 0, 32);
444
+ }
445
+ break;
446
+ default:
447
+ result = false;
448
+ break;
449
+ }
450
return result;
451
}
452
453
@@ -XXX,XX +XXX,XX @@ static bool its_writell(GICv3ITSState *s, hwaddr offset,
454
uint64_t value, MemTxAttrs attrs)
455
{
456
bool result = true;
457
+ int index;
458
459
+ switch (offset) {
460
+ case GITS_BASER ... GITS_BASER + 0x3f:
461
+ /*
462
+ * IMPDEF choice:- GITS_BASERn register becomes RO if ITS is
463
+ * already enabled
464
+ */
465
+ if (!(s->ctlr & ITS_CTLR_ENABLED)) {
466
+ index = (offset - GITS_BASER) / 8;
467
+ s->baser[index] &= GITS_BASER_RO_MASK;
468
+ s->baser[index] |= (value & ~GITS_BASER_RO_MASK);
469
+ }
470
+ break;
471
+ case GITS_CBASER:
472
+ /*
473
+ * IMPDEF choice:- GITS_CBASER register becomes RO if ITS is
474
+ * already enabled
475
+ */
476
+ if (!(s->ctlr & ITS_CTLR_ENABLED)) {
477
+ s->cbaser = value;
478
+ s->creadr = 0;
479
+ s->cwriter = s->creadr;
480
+ }
481
+ break;
482
+ case GITS_CWRITER:
483
+ s->cwriter = value & ~R_GITS_CWRITER_RETRY_MASK;
484
+ break;
485
+ case GITS_CREADR:
486
+ if (s->gicv3->gicd_ctlr & GICD_CTLR_DS) {
487
+ s->creadr = value & ~R_GITS_CREADR_STALLED_MASK;
488
+ } else {
489
+ /* RO register, ignore the write */
490
+ qemu_log_mask(LOG_GUEST_ERROR,
491
+ "%s: invalid guest write to RO register at offset "
492
+ TARGET_FMT_plx "\n", __func__, offset);
493
+ }
494
+ break;
495
+ case GITS_TYPER:
496
+ /* RO registers, ignore the write */
497
+ qemu_log_mask(LOG_GUEST_ERROR,
498
+ "%s: invalid guest write to RO register at offset "
499
+ TARGET_FMT_plx "\n", __func__, offset);
500
+ break;
501
+ default:
502
+ result = false;
503
+ break;
504
+ }
505
return result;
506
}
507
508
@@ -XXX,XX +XXX,XX @@ static bool its_readll(GICv3ITSState *s, hwaddr offset,
509
uint64_t *data, MemTxAttrs attrs)
510
{
511
bool result = true;
512
+ int index;
513
514
+ switch (offset) {
515
+ case GITS_TYPER:
516
+ *data = s->typer;
517
+ break;
518
+ case GITS_BASER ... GITS_BASER + 0x3f:
519
+ index = (offset - GITS_BASER) / 8;
520
+ *data = s->baser[index];
521
+ break;
522
+ case GITS_CBASER:
523
+ *data = s->cbaser;
524
+ break;
525
+ case GITS_CREADR:
526
+ *data = s->creadr;
527
+ break;
528
+ case GITS_CWRITER:
529
+ *data = s->cwriter;
530
+ break;
531
+ default:
532
+ result = false;
533
+ break;
534
+ }
535
return result;
536
}
537
538
@@ -XXX,XX +XXX,XX @@ static void gicv3_arm_its_realize(DeviceState *dev, Error **errp)
539
540
gicv3_its_init_mmio(s, &gicv3_its_control_ops, &gicv3_its_translation_ops);
541
542
+ address_space_init(&s->gicv3->dma_as, s->gicv3->dma,
543
+ "gicv3-its-sysmem");
544
+
545
/* set the ITS default features supported */
546
s->typer = FIELD_DP64(s->typer, GITS_TYPER, PHYSICAL,
547
GITS_TYPE_PHYSICAL);
548
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_reset(DeviceState *dev)
549
GITS_CTE_SIZE - 1);
550
}
551
552
+static void gicv3_its_post_load(GICv3ITSState *s)
553
+{
554
+ if (s->ctlr & ITS_CTLR_ENABLED) {
555
+ extract_table_params(s);
556
+ extract_cmdq_params(s);
557
+ }
558
+}
559
+
560
static Property gicv3_its_props[] = {
561
DEFINE_PROP_LINK("parent-gicv3", GICv3ITSState, gicv3, "arm-gicv3",
562
GICv3State *),
563
@@ -XXX,XX +XXX,XX @@ static void gicv3_its_class_init(ObjectClass *klass, void *data)
564
{
565
DeviceClass *dc = DEVICE_CLASS(klass);
566
GICv3ITSClass *ic = ARM_GICV3_ITS_CLASS(klass);
567
+ GICv3ITSCommonClass *icc = ARM_GICV3_ITS_COMMON_CLASS(klass);
568
569
dc->realize = gicv3_arm_its_realize;
570
device_class_set_props(dc, gicv3_its_props);
571
device_class_set_parent_reset(dc, gicv3_its_reset, &ic->parent_reset);
572
+ icc->post_load = gicv3_its_post_load;
573
}
574
575
static const TypeInfo gicv3_its_info = {
576
--
2.20.1

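A note on the SVE register layout handled above: byte i of KVM's
in-memory representation carries bits [8*i+7 : 8*i] of the register
value, so the stream is little-endian regardless of host byte order.
The sketch below is only an illustration of that mapping for a single
256-bit value; QEMU itself loads host uint64_t values and byte swaps
them on big-endian hosts, as sve_bswap64() above does:

    #include <stdint.h>
    #include <stdio.h>

    /* Assemble 64-bit lanes from a little-endian byte stream (256 bits here). */
    static void sve_bytes_to_lanes(uint64_t dst[4], const uint8_t src[32])
    {
        int lane, b;

        for (lane = 0; lane < 4; lane++) {
            uint64_t v = 0;

            for (b = 0; b < 8; b++) {
                v |= (uint64_t)src[lane * 8 + b] << (8 * b);
            }
            dst[lane] = v;    /* correct on both little- and big-endian hosts */
        }
    }

    int main(void)
    {
        uint8_t bytes[32] = { 0x01, 0x02, 0x03 };    /* lowest-order bytes first */
        uint64_t lanes[4];

        sve_bytes_to_lanes(lanes, bytes);
        printf("lane0 = 0x%llx\n", (unsigned long long)lanes[0]);    /* 0x30201 */
        return 0;
    }
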
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
It is confusing to have different exits from translation
3
Enable SVE in the KVM guest when the 'max' cpu type is configured
4
for various conditions in separate functions.
4
and KVM supports it. KVM SVE requires use of the new finalize
5
5
vcpu ioctl, so we add that now too. For starters SVE can only be
6
Merge disas_a64_insn into its only caller. Standardize
6
turned on or off, getting all vector lengths the host CPU supports
7
on the "s" name for the DisasContext, as the code from
7
when on. We'll add the other SVE CPU properties in later patches.
8
disas_a64_insn had more instances.
8
9
9
Signed-off-by: Andrew Jones <drjones@redhat.com>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Reviewed-by: Eric Auger <eric.auger@redhat.com>
12
Message-id: 20210821195958.41312-3-richard.henderson@linaro.org
12
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
13
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
14
Message-id: 20191031142734.8590-7-drjones@redhat.com
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
14
---
16
---
15
target/arm/translate-a64.c | 224 ++++++++++++++++++-------------------
17
target/arm/kvm_arm.h | 27 +++++++++++++++++++++++++++
16
1 file changed, 109 insertions(+), 115 deletions(-)
18
target/arm/cpu64.c | 17 ++++++++++++++---
17
19
target/arm/kvm.c | 5 +++++
18
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
20
target/arm/kvm64.c | 20 +++++++++++++++++++-
19
index XXXXXXX..XXXXXXX 100644
21
tests/arm-cpu-features.c | 4 ++++
20
--- a/target/arm/translate-a64.c
22
5 files changed, 69 insertions(+), 4 deletions(-)
21
+++ b/target/arm/translate-a64.c
23
22
@@ -XXX,XX +XXX,XX @@ static bool btype_destination_ok(uint32_t insn, bool bt, int btype)
24
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/target/arm/kvm_arm.h
27
+++ b/target/arm/kvm_arm.h
28
@@ -XXX,XX +XXX,XX @@
29
*/
30
int kvm_arm_vcpu_init(CPUState *cs);
31
32
+/**
33
+ * kvm_arm_vcpu_finalize
34
+ * @cs: CPUState
35
+ * @feature: int
36
+ *
37
+ * Finalizes the configuration of the specified VCPU feature by
38
+ * invoking the KVM_ARM_VCPU_FINALIZE ioctl. Features requiring
39
+ * this are documented in the "KVM_ARM_VCPU_FINALIZE" section of
40
+ * KVM's API documentation.
41
+ *
42
+ * Returns: 0 if success else < 0 error code
43
+ */
44
+int kvm_arm_vcpu_finalize(CPUState *cs, int feature);
45
+
46
/**
47
* kvm_arm_register_device:
48
* @mr: memory region for this device
49
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_aarch32_supported(CPUState *cs);
50
*/
51
bool kvm_arm_pmu_supported(CPUState *cs);
52
53
+/**
54
+ * bool kvm_arm_sve_supported:
55
+ * @cs: CPUState
56
+ *
57
+ * Returns true if the KVM VCPU can enable SVE and false otherwise.
58
+ */
59
+bool kvm_arm_sve_supported(CPUState *cs);
60
+
61
/**
62
* kvm_arm_get_max_vm_ipa_size - Returns the number of bits in the
63
* IPA address space supported by KVM
64
@@ -XXX,XX +XXX,XX @@ static inline bool kvm_arm_pmu_supported(CPUState *cs)
23
return false;
65
return false;
24
}
66
}
25
67
26
-/* C3.1 A64 instruction index by encoding */
68
+static inline bool kvm_arm_sve_supported(CPUState *cs)
27
-static void disas_a64_insn(CPUARMState *env, DisasContext *s)
69
+{
28
-{
70
+ return false;
29
- uint32_t insn;
71
+}
30
-
72
+
31
- s->pc_curr = s->base.pc_next;
73
static inline int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
32
- insn = arm_ldl_code(env, s->base.pc_next, s->sctlr_b);
33
- s->insn = insn;
34
- s->base.pc_next += 4;
35
-
36
- s->fp_access_checked = false;
37
- s->sve_access_checked = false;
38
-
39
- if (s->pstate_il) {
40
- /*
41
- * Illegal execution state. This has priority over BTI
42
- * exceptions, but comes after instruction abort exceptions.
43
- */
44
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
45
- syn_illegalstate(), default_exception_el(s));
46
- return;
47
- }
48
-
49
- if (dc_isar_feature(aa64_bti, s)) {
50
- if (s->base.num_insns == 1) {
51
- /*
52
- * At the first insn of the TB, compute s->guarded_page.
53
- * We delayed computing this until successfully reading
54
- * the first insn of the TB, above. This (mostly) ensures
55
- * that the softmmu tlb entry has been populated, and the
56
- * page table GP bit is available.
57
- *
58
- * Note that we need to compute this even if btype == 0,
59
- * because this value is used for BR instructions later
60
- * where ENV is not available.
61
- */
62
- s->guarded_page = is_guarded_page(env, s);
63
-
64
- /* First insn can have btype set to non-zero. */
65
- tcg_debug_assert(s->btype >= 0);
66
-
67
- /*
68
- * Note that the Branch Target Exception has fairly high
69
- * priority -- below debugging exceptions but above most
70
- * everything else. This allows us to handle this now
71
- * instead of waiting until the insn is otherwise decoded.
72
- */
73
- if (s->btype != 0
74
- && s->guarded_page
75
- && !btype_destination_ok(insn, s->bt, s->btype)) {
76
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
77
- syn_btitrap(s->btype),
78
- default_exception_el(s));
79
- return;
80
- }
81
- } else {
82
- /* Not the first insn: btype must be 0. */
83
- tcg_debug_assert(s->btype == 0);
84
- }
85
- }
86
-
87
- switch (extract32(insn, 25, 4)) {
88
- case 0x0: case 0x1: case 0x3: /* UNALLOCATED */
89
- unallocated_encoding(s);
90
- break;
91
- case 0x2:
92
- if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
93
- unallocated_encoding(s);
94
- }
95
- break;
96
- case 0x8: case 0x9: /* Data processing - immediate */
97
- disas_data_proc_imm(s, insn);
98
- break;
99
- case 0xa: case 0xb: /* Branch, exception generation and system insns */
100
- disas_b_exc_sys(s, insn);
101
- break;
102
- case 0x4:
103
- case 0x6:
104
- case 0xc:
105
- case 0xe: /* Loads and stores */
106
- disas_ldst(s, insn);
107
- break;
108
- case 0x5:
109
- case 0xd: /* Data processing - register */
110
- disas_data_proc_reg(s, insn);
111
- break;
112
- case 0x7:
113
- case 0xf: /* Data processing - SIMD and floating point */
114
- disas_data_proc_simd_fp(s, insn);
115
- break;
116
- default:
117
- assert(FALSE); /* all 15 cases should be handled above */
118
- break;
119
- }
120
-
121
- /* if we allocated any temporaries, free them here */
122
- free_tmp_a64(s);
123
-
124
- /*
125
- * After execution of most insns, btype is reset to 0.
126
- * Note that we set btype == -1 when the insn sets btype.
127
- */
128
- if (s->btype > 0 && s->base.is_jmp != DISAS_NORETURN) {
129
- reset_btype(s);
130
- }
131
-}
132
-
133
static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
134
CPUState *cpu)
135
{
74
{
136
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
75
return -ENOENT;
137
76
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
138
static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
77
index XXXXXXX..XXXXXXX 100644
139
{
78
--- a/target/arm/cpu64.c
140
- DisasContext *dc = container_of(dcbase, DisasContext, base);
79
+++ b/target/arm/cpu64.c
141
+ DisasContext *s = container_of(dcbase, DisasContext, base);
80
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
142
CPUARMState *env = cpu->env_ptr;
81
return;
143
+ uint32_t insn;
82
}
144
83
145
- if (dc->ss_active && !dc->pstate_ss) {
84
+ if (value && kvm_enabled() && !kvm_arm_sve_supported(CPU(cpu))) {
146
+ if (s->ss_active && !s->pstate_ss) {
85
+ error_setg(errp, "'sve' feature not supported by KVM on this host");
147
/* Singlestep state is Active-pending.
148
* If we're in this state at the start of a TB then either
149
* a) we just took an exception to an EL which is being debugged
150
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
151
* "did not step an insn" case, and so the syndrome ISV and EX
152
* bits should be zero.
153
*/
154
- assert(dc->base.num_insns == 1);
155
- gen_swstep_exception(dc, 0, 0);
156
- dc->base.is_jmp = DISAS_NORETURN;
157
- } else {
158
- disas_a64_insn(env, dc);
159
+ assert(s->base.num_insns == 1);
160
+ gen_swstep_exception(s, 0, 0);
161
+ s->base.is_jmp = DISAS_NORETURN;
162
+ return;
163
}
164
165
- translator_loop_temp_check(&dc->base);
166
+ s->pc_curr = s->base.pc_next;
167
+ insn = arm_ldl_code(env, s->base.pc_next, s->sctlr_b);
168
+ s->insn = insn;
169
+ s->base.pc_next += 4;
170
+
171
+ s->fp_access_checked = false;
172
+ s->sve_access_checked = false;
173
+
174
+ if (s->pstate_il) {
175
+ /*
176
+ * Illegal execution state. This has priority over BTI
177
+ * exceptions, but comes after instruction abort exceptions.
178
+ */
179
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
180
+ syn_illegalstate(), default_exception_el(s));
181
+ return;
86
+ return;
182
+ }
87
+ }
183
+
88
+
184
+ if (dc_isar_feature(aa64_bti, s)) {
89
t = cpu->isar.id_aa64pfr0;
185
+ if (s->base.num_insns == 1) {
90
t = FIELD_DP64(t, ID_AA64PFR0, SVE, value);
186
+ /*
91
cpu->isar.id_aa64pfr0 = t;
187
+ * At the first insn of the TB, compute s->guarded_page.
92
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
188
+ * We delayed computing this until successfully reading
93
{
189
+ * the first insn of the TB, above. This (mostly) ensures
94
ARMCPU *cpu = ARM_CPU(obj);
190
+ * that the softmmu tlb entry has been populated, and the
95
uint32_t vq;
191
+ * page table GP bit is available.
96
+ uint64_t t;
192
+ *
97
193
+ * Note that we need to compute this even if btype == 0,
98
if (kvm_enabled()) {
194
+ * because this value is used for BR instructions later
99
kvm_arm_set_cpu_features_from_host(cpu);
195
+ * where ENV is not available.
100
+ if (kvm_arm_sve_supported(CPU(cpu))) {
196
+ */
101
+ t = cpu->isar.id_aa64pfr0;
197
+ s->guarded_page = is_guarded_page(env, s);
102
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
198
+
103
+ cpu->isar.id_aa64pfr0 = t;
199
+ /* First insn can have btype set to non-zero. */
104
+ }
200
+ tcg_debug_assert(s->btype >= 0);
105
} else {
201
+
106
- uint64_t t;
202
+ /*
107
uint32_t u;
203
+ * Note that the Branch Target Exception has fairly high
108
aarch64_a57_initfn(obj);
204
+ * priority -- below debugging exceptions but above most
109
205
+ * everything else. This allows us to handle this now
110
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
206
+ * instead of waiting until the insn is otherwise decoded.
111
207
+ */
112
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
208
+ if (s->btype != 0
113
cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
209
+ && s->guarded_page
114
- object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
210
+ && !btype_destination_ok(insn, s->bt, s->btype)) {
115
- cpu_arm_set_sve, NULL, NULL, &error_fatal);
211
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
116
212
+ syn_btitrap(s->btype),
117
for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
213
+ default_exception_el(s));
118
char name[8];
214
+ return;
119
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
215
+ }
120
cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
216
+ } else {
121
}
217
+ /* Not the first insn: btype must be 0. */
122
}
218
+ tcg_debug_assert(s->btype == 0);
123
+
124
+ object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
125
+ cpu_arm_set_sve, NULL, NULL, &error_fatal);
126
}
127
128
struct ARMCPUInfo {
129
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
130
index XXXXXXX..XXXXXXX 100644
131
--- a/target/arm/kvm.c
132
+++ b/target/arm/kvm.c
133
@@ -XXX,XX +XXX,XX @@ int kvm_arm_vcpu_init(CPUState *cs)
134
return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_INIT, &init);
135
}
136
137
+int kvm_arm_vcpu_finalize(CPUState *cs, int feature)
138
+{
139
+ return kvm_vcpu_ioctl(cs, KVM_ARM_VCPU_FINALIZE, &feature);
140
+}
141
+
142
void kvm_arm_init_serror_injection(CPUState *cs)
143
{
144
cap_has_inject_serror_esr = kvm_check_extension(cs->kvm_state,
145
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
146
index XXXXXXX..XXXXXXX 100644
147
--- a/target/arm/kvm64.c
148
+++ b/target/arm/kvm64.c
149
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_aarch32_supported(CPUState *cpu)
150
return kvm_check_extension(s, KVM_CAP_ARM_EL1_32BIT);
151
}
152
153
+bool kvm_arm_sve_supported(CPUState *cpu)
154
+{
155
+ KVMState *s = KVM_STATE(current_machine->accelerator);
156
+
157
+ return kvm_check_extension(s, KVM_CAP_ARM_SVE);
158
+}
159
+
160
#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
161
162
int kvm_arch_init_vcpu(CPUState *cs)
163
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
164
cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_EL1_32BIT;
165
}
166
if (!kvm_check_extension(cs->kvm_state, KVM_CAP_ARM_PMU_V3)) {
167
- cpu->has_pmu = false;
168
+ cpu->has_pmu = false;
169
}
170
if (cpu->has_pmu) {
171
cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;
172
} else {
173
unset_feature(&env->features, ARM_FEATURE_PMU);
174
}
175
+ if (cpu_isar_feature(aa64_sve, cpu)) {
176
+ assert(kvm_arm_sve_supported(cs));
177
+ cpu->kvm_init_features[0] |= 1 << KVM_ARM_VCPU_SVE;
178
+ }
179
180
/* Do KVM_ARM_VCPU_INIT ioctl */
181
ret = kvm_arm_vcpu_init(cs);
182
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
183
return ret;
184
}
185
186
+ if (cpu_isar_feature(aa64_sve, cpu)) {
187
+ ret = kvm_arm_vcpu_finalize(cs, KVM_ARM_VCPU_SVE);
188
+ if (ret) {
189
+ return ret;
219
+ }
190
+ }
220
+ }
191
+ }
221
+
192
+
222
+ switch (extract32(insn, 25, 4)) {
193
/*
223
+ case 0x0: case 0x1: case 0x3: /* UNALLOCATED */
194
* When KVM is in use, PSCI is emulated in-kernel and not by qemu.
224
+ unallocated_encoding(s);
195
* Currently KVM has its own idea about MPIDR assignment, so we
225
+ break;
196
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
226
+ case 0x2:
197
index XXXXXXX..XXXXXXX 100644
227
+ if (!dc_isar_feature(aa64_sve, s) || !disas_sve(s, insn)) {
198
--- a/tests/arm-cpu-features.c
228
+ unallocated_encoding(s);
199
+++ b/tests/arm-cpu-features.c
229
+ }
200
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
230
+ break;
201
assert_has_feature(qts, "host", "aarch64");
231
+ case 0x8: case 0x9: /* Data processing - immediate */
202
assert_has_feature(qts, "host", "pmu");
232
+ disas_data_proc_imm(s, insn);
203
233
+ break;
204
+ assert_has_feature(qts, "max", "sve");
234
+ case 0xa: case 0xb: /* Branch, exception generation and system insns */
205
+
235
+ disas_b_exc_sys(s, insn);
206
assert_error(qts, "cortex-a15",
236
+ break;
207
"We cannot guarantee the CPU type 'cortex-a15' works "
237
+ case 0x4:
208
"with KVM on this host", NULL);
238
+ case 0x6:
209
} else {
239
+ case 0xc:
210
assert_has_not_feature(qts, "host", "aarch64");
240
+ case 0xe: /* Loads and stores */
211
assert_has_not_feature(qts, "host", "pmu");
241
+ disas_ldst(s, insn);
212
+
242
+ break;
213
+ assert_has_not_feature(qts, "max", "sve");
243
+ case 0x5:
214
}
244
+ case 0xd: /* Data processing - register */
215
245
+ disas_data_proc_reg(s, insn);
216
qtest_quit(qts);
246
+ break;
247
+ case 0x7:
248
+ case 0xf: /* Data processing - SIMD and floating point */
249
+ disas_data_proc_simd_fp(s, insn);
250
+ break;
251
+ default:
252
+ assert(FALSE); /* all 15 cases should be handled above */
253
+ break;
254
+ }
255
+
256
+ /* if we allocated any temporaries, free them here */
257
+ free_tmp_a64(s);
258
+
259
+ /*
260
+ * After execution of most insns, btype is reset to 0.
261
+ * Note that we set btype == -1 when the insn sets btype.
262
+ */
263
+ if (s->btype > 0 && s->base.is_jmp != DISAS_NORETURN) {
264
+ reset_btype(s);
265
+ }
266
+
267
+ translator_loop_temp_check(&s->base);
268
}
269
270
static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
271
--
2.20.1

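For readers unfamiliar with the finalize step used above: KVM expects the
SVE feature to be requested at KVM_ARM_VCPU_INIT time and then finalized
with KVM_ARM_VCPU_FINALIZE before the SVE registers are touched or the
vcpu is run. The sketch below shows only that ordering with bare ioctls,
error handling trimmed; vcpu_fd and target are assumed to come from the
usual KVM_CREATE_VCPU and preferred-target setup, and this is not QEMU's
own wrapper code:

    #include <linux/kvm.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static int init_vcpu_with_sve(int vcpu_fd, unsigned int target)
    {
        struct kvm_vcpu_init init;
        int feature = KVM_ARM_VCPU_SVE;

        memset(&init, 0, sizeof(init));
        init.target = target;
        init.features[0] = 1U << KVM_ARM_VCPU_SVE;    /* request SVE */

        if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0) {
            return -1;
        }
        /* Vector lengths could be configured here via KVM_REG_ARM64_SVE_VLS. */
        return ioctl(vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature);
    }
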
1
From: Marc Zyngier <maz@kernel.org>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
Although we probe for the IPA limits imposed by KVM (and the hardware)
3
kvm_arm_create_scratch_host_vcpu() takes a struct kvm_vcpu_init
4
when computing the memory map, we still use the old style '0' when
4
parameter. Rather than just using it as an output parameter to
5
creating a scratch VM in kvm_arm_create_scratch_host_vcpu().
5
pass back the preferred target, use it also as an input parameter,
6
allowing a caller to pass a selected target if they wish and to
7
also pass cpu features. If the caller doesn't want to select a
8
target they can pass -1 for the target which indicates they want
9
to use the preferred target and have it passed back like before.
6
10
7
On systems that are severely IPA challenged (such as the Apple M1),
11
Signed-off-by: Andrew Jones <drjones@redhat.com>
8
this results in a failure as KVM cannot use the default 40bit that
12
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
9
'0' represents.
13
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
14
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
11
Instead, probe for the extension and use the reported IPA limit
15
Reviewed-by: Beata Michalska <beata.michalska@linaro.org>
12
if available.
16
Message-id: 20191031142734.8590-8-drjones@redhat.com
13
14
Cc: Andrew Jones <drjones@redhat.com>
15
Cc: Eric Auger <eric.auger@redhat.com>
16
Cc: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Marc Zyngier <maz@kernel.org>
18
Reviewed-by: Andrew Jones <drjones@redhat.com>
19
Message-id: 20210822144441.1290891-2-maz@kernel.org
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
---
18
---
22
target/arm/kvm.c | 7 ++++++-
19
target/arm/kvm.c | 20 +++++++++++++++-----
23
1 file changed, 6 insertions(+), 1 deletion(-)
20
target/arm/kvm32.c | 6 +++++-
21
target/arm/kvm64.c | 6 +++++-
22
3 files changed, 25 insertions(+), 7 deletions(-)
24
23
25
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
24
diff --git a/target/arm/kvm.c b/target/arm/kvm.c
26
index XXXXXXX..XXXXXXX 100644
25
index XXXXXXX..XXXXXXX 100644
27
--- a/target/arm/kvm.c
26
--- a/target/arm/kvm.c
28
+++ b/target/arm/kvm.c
27
+++ b/target/arm/kvm.c
29
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
28
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
29
int *fdarray,
30
struct kvm_vcpu_init *init)
30
struct kvm_vcpu_init *init)
31
{
31
{
32
int ret = 0, kvmfd = -1, vmfd = -1, cpufd = -1;
32
- int ret, kvmfd = -1, vmfd = -1, cpufd = -1;
33
+ int max_vm_pa_size;
33
+ int ret = 0, kvmfd = -1, vmfd = -1, cpufd = -1;
34
34
35
kvmfd = qemu_open_old("/dev/kvm", O_RDWR);
35
kvmfd = qemu_open("/dev/kvm", O_RDWR);
36
if (kvmfd < 0) {
36
if (kvmfd < 0) {
37
goto err;
37
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
38
goto finish;
38
}
39
}
39
- vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);
40
40
+ max_vm_pa_size = ioctl(kvmfd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_VM_IPA_SIZE);
41
- ret = ioctl(vmfd, KVM_ARM_PREFERRED_TARGET, init);
41
+ if (max_vm_pa_size < 0) {
42
+ if (init->target == -1) {
42
+ max_vm_pa_size = 0;
43
+ struct kvm_vcpu_init preferred;
44
+
45
+ ret = ioctl(vmfd, KVM_ARM_PREFERRED_TARGET, &preferred);
46
+ if (!ret) {
47
+ init->target = preferred.target;
48
+ }
43
+ }
49
+ }
44
+ vmfd = ioctl(kvmfd, KVM_CREATE_VM, max_vm_pa_size);
50
if (ret >= 0) {
45
if (vmfd < 0) {
51
ret = ioctl(cpufd, KVM_ARM_VCPU_INIT, init);
46
goto err;
52
if (ret < 0) {
47
}
53
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
54
* creating one kind of guest CPU which is its preferred
55
* CPU type.
56
*/
57
+ struct kvm_vcpu_init try;
58
+
59
while (*cpus_to_try != QEMU_KVM_ARM_TARGET_NONE) {
60
- init->target = *cpus_to_try++;
61
- memset(init->features, 0, sizeof(init->features));
62
- ret = ioctl(cpufd, KVM_ARM_VCPU_INIT, init);
63
+ try.target = *cpus_to_try++;
64
+ memcpy(try.features, init->features, sizeof(init->features));
65
+ ret = ioctl(cpufd, KVM_ARM_VCPU_INIT, &try);
66
if (ret >= 0) {
67
break;
68
}
69
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_create_scratch_host_vcpu(const uint32_t *cpus_to_try,
70
if (ret < 0) {
71
goto err;
72
}
73
+ init->target = try.target;
74
} else {
75
/* Treat a NULL cpus_to_try argument the same as an empty
76
* list, which means we will fail the call since this must
77
diff --git a/target/arm/kvm32.c b/target/arm/kvm32.c
78
index XXXXXXX..XXXXXXX 100644
79
--- a/target/arm/kvm32.c
80
+++ b/target/arm/kvm32.c
81
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
82
QEMU_KVM_ARM_TARGET_CORTEX_A15,
83
QEMU_KVM_ARM_TARGET_NONE
84
};
85
- struct kvm_vcpu_init init;
86
+ /*
87
+ * target = -1 informs kvm_arm_create_scratch_host_vcpu()
88
+ * to use the preferred target
89
+ */
90
+ struct kvm_vcpu_init init = { .target = -1, };
91
92
if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
93
return false;
94
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
95
index XXXXXXX..XXXXXXX 100644
96
--- a/target/arm/kvm64.c
97
+++ b/target/arm/kvm64.c
98
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
99
KVM_ARM_TARGET_CORTEX_A57,
100
QEMU_KVM_ARM_TARGET_NONE
101
};
102
- struct kvm_vcpu_init init;
103
+ /*
104
+ * target = -1 informs kvm_arm_create_scratch_host_vcpu()
105
+ * to use the preferred target
106
+ */
107
+ struct kvm_vcpu_init init = { .target = -1, };
108
109
if (!kvm_arm_create_scratch_host_vcpu(cpus_to_try, fdarray, &init)) {
110
return false;
48
--
2.20.1

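The probing added above boils down to two ioctls on the /dev/kvm file
descriptor: KVM_CHECK_EXTENSION with KVM_CAP_ARM_VM_IPA_SIZE to learn the
largest supported IPA width, and KVM_CREATE_VM with that value (or 0,
which keeps the legacy 40-bit default) as the machine type. A minimal
sketch of just that step, with error handling trimmed and not of the
scratch-VM helper itself:

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int create_vm_with_max_ipa(void)
    {
        int kvm_fd = open("/dev/kvm", O_RDWR);
        int ipa_bits;

        if (kvm_fd < 0) {
            return -1;
        }
        ipa_bits = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_VM_IPA_SIZE);
        if (ipa_bits < 0) {
            ipa_bits = 0;    /* probe failed: fall back to the implicit default */
        }
        return ioctl(kvm_fd, KVM_CREATE_VM, ipa_bits);
    }
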
1
From: Shashi Mallela <shashi.mallela@linaro.org>
1
From: Andrew Jones <drjones@redhat.com>
2
2
3
Added functionality to trigger ITS command queue processing on
3
Extend the SVE vq map initialization and validation with KVM's
4
a write to the CWRITER register and process each command queue entry to
4
supported vector lengths when KVM is enabled. In order to determine
5
identify the command type and handle commands like MAPD, MAPC and SYNC.
5
and select supported lengths we add two new KVM functions for getting
6
and setting the KVM_REG_ARM64_SVE_VLS pseudo-register.
6
7
7
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
8
This patch has been co-authored with Richard Henderson, who reworked
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
the target/arm/cpu64.c changes in order to push all the validation and
10
auto-enabling/disabling steps into the finalizer, resulting in a nice
11
LOC reduction.
12
13
Signed-off-by: Andrew Jones <drjones@redhat.com>
9
Reviewed-by: Eric Auger <eric.auger@redhat.com>
14
Reviewed-by: Eric Auger <eric.auger@redhat.com>
10
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20210910143951.92242-4-shashi.mallela@linaro.org
16
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
17
Message-id: 20191031142734.8590-9-drjones@redhat.com
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
---
19
---
14
hw/intc/gicv3_internal.h | 40 +++++
20
target/arm/kvm_arm.h | 12 +++
15
hw/intc/arm_gicv3_its.c | 319 +++++++++++++++++++++++++++++++++++++++
21
target/arm/cpu64.c | 176 ++++++++++++++++++++++++++++----------
16
2 files changed, 359 insertions(+)
22
target/arm/kvm64.c | 100 +++++++++++++++++++++-
23
tests/arm-cpu-features.c | 104 +++++++++++++++++++++-
24
docs/arm-cpu-features.rst | 45 +++++++---
25
5 files changed, 379 insertions(+), 58 deletions(-)
17
26
18
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
27
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
19
index XXXXXXX..XXXXXXX 100644
28
index XXXXXXX..XXXXXXX 100644
20
--- a/hw/intc/gicv3_internal.h
29
--- a/target/arm/kvm_arm.h
21
+++ b/hw/intc/gicv3_internal.h
30
+++ b/target/arm/kvm_arm.h
22
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1)
31
@@ -XXX,XX +XXX,XX @@ typedef struct ARMHostCPUFeatures {
23
#define L1TABLE_ENTRY_SIZE 8
32
*/
24
33
bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf);
25
#define GITS_CMDQ_ENTRY_SIZE 32
34
26
+#define NUM_BYTES_IN_DW 8
35
+/**
27
+
36
+ * kvm_arm_sve_get_vls:
28
+#define CMD_MASK 0xff
37
+ * @cs: CPUState
29
+
38
+ * @map: bitmap to fill in
30
+/* ITS Commands */
39
+ *
31
+#define GITS_CMD_CLEAR 0x04
40
+ * Get all the SVE vector lengths supported by the KVM host, setting
32
+#define GITS_CMD_DISCARD 0x0F
41
+ * the bits corresponding to their length in quadwords minus one
33
+#define GITS_CMD_INT 0x03
42
+ * (vq - 1) in @map up to ARM_MAX_VQ.
34
+#define GITS_CMD_MAPC 0x09
43
+ */
35
+#define GITS_CMD_MAPD 0x08
44
+void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map);
36
+#define GITS_CMD_MAPI 0x0B
45
+
37
+#define GITS_CMD_MAPTI 0x0A
38
+#define GITS_CMD_INV 0x0C
39
+#define GITS_CMD_INVALL 0x0D
40
+#define GITS_CMD_SYNC 0x05
41
+
42
+/* MAPC command fields */
43
+#define ICID_LENGTH 16
44
+#define ICID_MASK ((1U << ICID_LENGTH) - 1)
45
+FIELD(MAPC, RDBASE, 16, 32)
46
+
47
+#define RDBASE_PROCNUM_LENGTH 16
48
+#define RDBASE_PROCNUM_MASK ((1ULL << RDBASE_PROCNUM_LENGTH) - 1)
49
+
50
+/* MAPD command fields */
51
+#define ITTADDR_LENGTH 44
52
+#define ITTADDR_SHIFT 8
53
+#define ITTADDR_MASK MAKE_64BIT_MASK(ITTADDR_SHIFT, ITTADDR_LENGTH)
54
+#define SIZE_MASK 0x1f
55
+
56
+#define DEVID_SHIFT 32
57
+#define DEVID_MASK MAKE_64BIT_MASK(32, 32)
58
+
59
+#define VALID_SHIFT 63
60
+#define CMD_FIELD_VALID_MASK (1ULL << VALID_SHIFT)
61
+#define L2_TABLE_VALID_MASK CMD_FIELD_VALID_MASK
62
+#define TABLE_ENTRY_VALID_MASK (1ULL << 0)
63
64
/**
46
/**
65
* Default features advertised by this version of ITS
47
* kvm_arm_set_cpu_features_from_host:
66
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1)
48
* @cpu: ARMCPU to set the features for
67
* Valid = 1 bit,ITTAddr = 44 bits,Size = 5 bits
49
@@ -XXX,XX +XXX,XX @@ static inline int kvm_arm_vgic_probe(void)
68
*/
50
static inline void kvm_arm_pmu_set_irq(CPUState *cs, int irq) {}
69
#define GITS_DTE_SIZE (0x8ULL)
51
static inline void kvm_arm_pmu_init(CPUState *cs) {}
70
+#define GITS_DTE_ITTADDR_SHIFT 6
52
71
+#define GITS_DTE_ITTADDR_MASK MAKE_64BIT_MASK(GITS_DTE_ITTADDR_SHIFT, \
53
+static inline void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map) {}
72
+ ITTADDR_LENGTH)
54
#endif
73
55
74
/*
56
static inline const char *gic_class_name(void)
75
* 8 bytes Collection Table Entry size
57
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
76
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
77
index XXXXXXX..XXXXXXX 100644
58
index XXXXXXX..XXXXXXX 100644
78
--- a/hw/intc/arm_gicv3_its.c
59
--- a/target/arm/cpu64.c
79
+++ b/hw/intc/arm_gicv3_its.c
60
+++ b/target/arm/cpu64.c
80
@@ -XXX,XX +XXX,XX @@ static uint64_t baser_base_addr(uint64_t value, uint32_t page_sz)
61
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
81
return result;
62
* any of the above. Finally, if SVE is not disabled, then at least one
63
* vector length must be enabled.
64
*/
65
+ DECLARE_BITMAP(kvm_supported, ARM_MAX_VQ);
66
DECLARE_BITMAP(tmp, ARM_MAX_VQ);
67
uint32_t vq, max_vq = 0;
68
69
+ /* Collect the set of vector lengths supported by KVM. */
70
+ bitmap_zero(kvm_supported, ARM_MAX_VQ);
71
+ if (kvm_enabled() && kvm_arm_sve_supported(CPU(cpu))) {
72
+ kvm_arm_sve_get_vls(CPU(cpu), kvm_supported);
73
+ } else if (kvm_enabled()) {
74
+ assert(!cpu_isar_feature(aa64_sve, cpu));
75
+ }
76
+
77
/*
78
* Process explicit sve<N> properties.
79
* From the properties, sve_vq_map<N> implies sve_vq_init<N>.
80
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
81
return;
82
}
83
84
- /* Propagate enabled bits down through required powers-of-two. */
85
- for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
86
- if (!test_bit(vq - 1, cpu->sve_vq_init)) {
87
- set_bit(vq - 1, cpu->sve_vq_map);
88
+ if (kvm_enabled()) {
89
+ /*
90
+ * For KVM we have to automatically enable all supported uninitialized
91
+ * lengths, even when the smaller lengths are not all powers-of-two.
92
+ */
93
+ bitmap_andnot(tmp, kvm_supported, cpu->sve_vq_init, max_vq);
94
+ bitmap_or(cpu->sve_vq_map, cpu->sve_vq_map, tmp, max_vq);
95
+ } else {
96
+ /* Propagate enabled bits down through required powers-of-two. */
97
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
98
+ if (!test_bit(vq - 1, cpu->sve_vq_init)) {
99
+ set_bit(vq - 1, cpu->sve_vq_map);
100
+ }
101
}
102
}
103
} else if (cpu->sve_max_vq == 0) {
104
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
105
return;
106
}
107
108
- /* Disabling a power-of-two disables all larger lengths. */
109
- if (test_bit(0, cpu->sve_vq_init)) {
110
- error_setg(errp, "cannot disable sve128");
111
- error_append_hint(errp, "Disabling sve128 results in all vector "
112
- "lengths being disabled.\n");
113
- error_append_hint(errp, "With SVE enabled, at least one vector "
114
- "length must be enabled.\n");
115
- return;
116
- }
117
- for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) {
118
- if (test_bit(vq - 1, cpu->sve_vq_init)) {
119
- break;
120
+ if (kvm_enabled()) {
121
+ /* Disabling a supported length disables all larger lengths. */
122
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
123
+ if (test_bit(vq - 1, cpu->sve_vq_init) &&
124
+ test_bit(vq - 1, kvm_supported)) {
125
+ break;
126
+ }
127
}
128
+ max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
129
+ bitmap_andnot(cpu->sve_vq_map, kvm_supported,
130
+ cpu->sve_vq_init, max_vq);
131
+ if (max_vq == 0 || bitmap_empty(cpu->sve_vq_map, max_vq)) {
132
+ error_setg(errp, "cannot disable sve%d", vq * 128);
133
+ error_append_hint(errp, "Disabling sve%d results in all "
134
+ "vector lengths being disabled.\n",
135
+ vq * 128);
136
+ error_append_hint(errp, "With SVE enabled, at least one "
137
+ "vector length must be enabled.\n");
138
+ return;
139
+ }
140
+ } else {
141
+ /* Disabling a power-of-two disables all larger lengths. */
142
+ if (test_bit(0, cpu->sve_vq_init)) {
143
+ error_setg(errp, "cannot disable sve128");
144
+ error_append_hint(errp, "Disabling sve128 results in all "
145
+ "vector lengths being disabled.\n");
146
+ error_append_hint(errp, "With SVE enabled, at least one "
147
+ "vector length must be enabled.\n");
148
+ return;
149
+ }
150
+ for (vq = 2; vq <= ARM_MAX_VQ; vq <<= 1) {
151
+ if (test_bit(vq - 1, cpu->sve_vq_init)) {
152
+ break;
153
+ }
154
+ }
155
+ max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
156
+ bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq);
157
}
158
- max_vq = vq <= ARM_MAX_VQ ? vq - 1 : ARM_MAX_VQ;
159
160
- bitmap_complement(cpu->sve_vq_map, cpu->sve_vq_init, max_vq);
161
max_vq = find_last_bit(cpu->sve_vq_map, max_vq) + 1;
162
}
163
164
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
165
assert(max_vq != 0);
166
bitmap_clear(cpu->sve_vq_map, max_vq, ARM_MAX_VQ - max_vq);
167
168
- /* Ensure all required powers-of-two are enabled. */
169
- for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
170
- if (!test_bit(vq - 1, cpu->sve_vq_map)) {
171
- error_setg(errp, "cannot disable sve%d", vq * 128);
172
- error_append_hint(errp, "sve%d is required as it "
173
- "is a power-of-two length smaller than "
174
- "the maximum, sve%d\n",
175
- vq * 128, max_vq * 128);
176
+ if (kvm_enabled()) {
177
+ /* Ensure the set of lengths matches what KVM supports. */
178
+ bitmap_xor(tmp, cpu->sve_vq_map, kvm_supported, max_vq);
179
+ if (!bitmap_empty(tmp, max_vq)) {
180
+ vq = find_last_bit(tmp, max_vq) + 1;
181
+ if (test_bit(vq - 1, cpu->sve_vq_map)) {
182
+ if (cpu->sve_max_vq) {
183
+ error_setg(errp, "cannot set sve-max-vq=%d",
184
+ cpu->sve_max_vq);
185
+ error_append_hint(errp, "This KVM host does not support "
186
+ "the vector length %d-bits.\n",
187
+ vq * 128);
188
+ error_append_hint(errp, "It may not be possible to use "
189
+ "sve-max-vq with this KVM host. Try "
190
+ "using only sve<N> properties.\n");
191
+ } else {
192
+ error_setg(errp, "cannot enable sve%d", vq * 128);
193
+ error_append_hint(errp, "This KVM host does not support "
194
+ "the vector length %d-bits.\n",
195
+ vq * 128);
196
+ }
197
+ } else {
198
+ error_setg(errp, "cannot disable sve%d", vq * 128);
199
+ error_append_hint(errp, "The KVM host requires all "
200
+ "supported vector lengths smaller "
201
+ "than %d bits to also be enabled.\n",
202
+ max_vq * 128);
203
+ }
204
return;
205
}
206
+ } else {
207
+ /* Ensure all required powers-of-two are enabled. */
208
+ for (vq = pow2floor(max_vq); vq >= 1; vq >>= 1) {
209
+ if (!test_bit(vq - 1, cpu->sve_vq_map)) {
210
+ error_setg(errp, "cannot disable sve%d", vq * 128);
211
+ error_append_hint(errp, "sve%d is required as it "
212
+ "is a power-of-two length smaller than "
213
+ "the maximum, sve%d\n",
214
+ vq * 128, max_vq * 128);
215
+ return;
216
+ }
217
+ }
218
}
219
220
/*
221
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
222
{
223
ARMCPU *cpu = ARM_CPU(obj);
224
Error *err = NULL;
225
+ uint32_t max_vq;
226
227
- visit_type_uint32(v, name, &cpu->sve_max_vq, &err);
228
-
229
- if (!err && (cpu->sve_max_vq == 0 || cpu->sve_max_vq > ARM_MAX_VQ)) {
230
- error_setg(&err, "unsupported SVE vector length");
231
- error_append_hint(&err, "Valid sve-max-vq in range [1-%d]\n",
232
- ARM_MAX_VQ);
233
+ visit_type_uint32(v, name, &max_vq, &err);
234
+ if (err) {
235
+ error_propagate(errp, err);
236
+ return;
237
}
238
- error_propagate(errp, err);
239
+
240
+ if (kvm_enabled() && !kvm_arm_sve_supported(CPU(cpu))) {
241
+ error_setg(errp, "cannot set sve-max-vq");
242
+ error_append_hint(errp, "SVE not supported by KVM on this host\n");
243
+ return;
244
+ }
245
+
246
+ if (max_vq == 0 || max_vq > ARM_MAX_VQ) {
247
+ error_setg(errp, "unsupported SVE vector length");
248
+ error_append_hint(errp, "Valid sve-max-vq in range [1-%d]\n",
249
+ ARM_MAX_VQ);
250
+ return;
251
+ }
252
+
253
+ cpu->sve_max_vq = max_vq;
82
}
254
}
83
255
84
+static bool update_cte(GICv3ITSState *s, uint16_t icid, bool valid,
256
static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
85
+ uint64_t rdbase)
257
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
258
return;
259
}
260
261
+ if (value && kvm_enabled() && !kvm_arm_sve_supported(CPU(cpu))) {
262
+ error_setg(errp, "cannot enable %s", name);
263
+ error_append_hint(errp, "SVE not supported by KVM on this host\n");
264
+ return;
265
+ }
266
+
267
if (value) {
268
set_bit(vq - 1, cpu->sve_vq_map);
269
} else {
270
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
271
cpu->ctr = 0x80038003; /* 32 byte I and D cacheline size, VIPT icache */
272
cpu->dcz_blocksize = 7; /* 512 bytes */
273
#endif
274
-
275
- object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
276
- cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
277
-
278
- for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
279
- char name[8];
280
- sprintf(name, "sve%d", vq * 128);
281
- object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
282
- cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
283
- }
284
}
285
286
object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
287
cpu_arm_set_sve, NULL, NULL, &error_fatal);
288
+ object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
289
+ cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
290
+
291
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
292
+ char name[8];
293
+ sprintf(name, "sve%d", vq * 128);
294
+ object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
295
+ cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
296
+ }
297
}
298
299
struct ARMCPUInfo {
300
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
301
index XXXXXXX..XXXXXXX 100644
302
--- a/target/arm/kvm64.c
303
+++ b/target/arm/kvm64.c
304
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_sve_supported(CPUState *cpu)
305
return kvm_check_extension(s, KVM_CAP_ARM_SVE);
306
}
307
308
+QEMU_BUILD_BUG_ON(KVM_ARM64_SVE_VQ_MIN != 1);
309
+
310
+void kvm_arm_sve_get_vls(CPUState *cs, unsigned long *map)
86
+{
311
+{
87
+ AddressSpace *as = &s->gicv3->dma_as;
312
+ /* Only call this function if kvm_arm_sve_supported() returns true. */
88
+ uint64_t value;
313
+ static uint64_t vls[KVM_ARM64_SVE_VLS_WORDS];
89
+ uint64_t l2t_addr;
314
+ static bool probed;
90
+ bool valid_l2t;
315
+ uint32_t vq = 0;
91
+ uint32_t l2t_id;
316
+ int i, j;
92
+ uint32_t max_l2_entries;
317
+
93
+ uint64_t cte = 0;
318
+ bitmap_clear(map, 0, ARM_MAX_VQ);
94
+ MemTxResult res = MEMTX_OK;
95
+
96
+ if (!s->ct.valid) {
97
+ return true;
98
+ }
99
+
100
+ if (valid) {
101
+ /* add mapping entry to collection table */
102
+ cte = (valid & TABLE_ENTRY_VALID_MASK) | (rdbase << 1ULL);
103
+ }
104
+
319
+
105
+ /*
320
+ /*
106
+ * The specification defines the format of level 1 entries of a
321
+ * KVM ensures all host CPUs support the same set of vector lengths.
107
+ * 2-level table, but the format of level 2 entries and the format
322
+ * So we only need to create the scratch VCPUs once and then cache
108
+ * of flat-mapped tables is IMPDEF.
323
+ * the results.
109
+ */
324
+ */
110
+ if (s->ct.indirect) {
325
+ if (!probed) {
111
+ l2t_id = icid / (s->ct.page_sz / L1TABLE_ENTRY_SIZE);
326
+ struct kvm_vcpu_init init = {
112
+
327
+ .target = -1,
113
+ value = address_space_ldq_le(as,
328
+ .features[0] = (1 << KVM_ARM_VCPU_SVE),
114
+ s->ct.base_addr +
329
+ };
115
+ (l2t_id * L1TABLE_ENTRY_SIZE),
330
+ struct kvm_one_reg reg = {
116
+ MEMTXATTRS_UNSPECIFIED, &res);
331
+ .id = KVM_REG_ARM64_SVE_VLS,
117
+
332
+ .addr = (uint64_t)&vls[0],
118
+ if (res != MEMTX_OK) {
333
+ };
119
+ return false;
334
+ int fdarray[3], ret;
120
+ }
335
+
121
+
336
+ probed = true;
122
+ valid_l2t = (value & L2_TABLE_VALID_MASK) != 0;
337
+
123
+
338
+ if (!kvm_arm_create_scratch_host_vcpu(NULL, fdarray, &init)) {
124
+ if (valid_l2t) {
339
+ error_report("failed to create scratch VCPU with SVE enabled");
125
+ max_l2_entries = s->ct.page_sz / s->ct.entry_sz;
340
+ abort();
126
+
341
+ }
127
+ l2t_addr = value & ((1ULL << 51) - 1);
342
+ ret = ioctl(fdarray[2], KVM_GET_ONE_REG, &reg);
128
+
343
+ kvm_arm_destroy_scratch_host_vcpu(fdarray);
129
+ address_space_stq_le(as, l2t_addr +
344
+ if (ret) {
130
+ ((icid % max_l2_entries) * GITS_CTE_SIZE),
345
+ error_report("failed to get KVM_REG_ARM64_SVE_VLS: %s",
131
+ cte, MEMTXATTRS_UNSPECIFIED, &res);
346
+ strerror(errno));
132
+ }
347
+ abort();
133
+ } else {
348
+ }
134
+ /* Flat level table */
349
+
135
+ address_space_stq_le(as, s->ct.base_addr + (icid * GITS_CTE_SIZE),
350
+ for (i = KVM_ARM64_SVE_VLS_WORDS - 1; i >= 0; --i) {
136
+ cte, MEMTXATTRS_UNSPECIFIED, &res);
351
+ if (vls[i]) {
137
+ }
352
+ vq = 64 - clz64(vls[i]) + i * 64;
138
+ if (res != MEMTX_OK) {
353
+ break;
139
+ return false;
354
+ }
140
+ } else {
355
+ }
141
+ return true;
356
+ if (vq > ARM_MAX_VQ) {
357
+ warn_report("KVM supports vector lengths larger than "
358
+ "QEMU can enable");
359
+ }
360
+ }
361
+
362
+ for (i = 0; i < KVM_ARM64_SVE_VLS_WORDS; ++i) {
363
+ if (!vls[i]) {
364
+ continue;
365
+ }
366
+ for (j = 1; j <= 64; ++j) {
367
+ vq = j + i * 64;
368
+ if (vq > ARM_MAX_VQ) {
369
+ return;
370
+ }
371
+ if (vls[i] & (1UL << (j - 1))) {
372
+ set_bit(vq - 1, map);
373
+ }
374
+ }
142
+ }
375
+ }
143
+}
376
+}
144
+
377
+
145
+static bool process_mapc(GICv3ITSState *s, uint32_t offset)
378
+static int kvm_arm_sve_set_vls(CPUState *cs)
146
+{
379
+{
147
+ AddressSpace *as = &s->gicv3->dma_as;
380
+ uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = {0};
148
+ uint16_t icid;
381
+ struct kvm_one_reg reg = {
149
+ uint64_t rdbase;
382
+ .id = KVM_REG_ARM64_SVE_VLS,
150
+ bool valid;
383
+ .addr = (uint64_t)&vls[0],
151
+ MemTxResult res = MEMTX_OK;
384
+ };
152
+ bool result = false;
385
+ ARMCPU *cpu = ARM_CPU(cs);
153
+ uint64_t value;
386
+ uint32_t vq;
154
+
387
+ int i, j;
155
+ offset += NUM_BYTES_IN_DW;
388
+
156
+ offset += NUM_BYTES_IN_DW;
389
+ assert(cpu->sve_max_vq <= KVM_ARM64_SVE_VQ_MAX);
157
+
390
+
158
+ value = address_space_ldq_le(as, s->cq.base_addr + offset,
391
+ for (vq = 1; vq <= cpu->sve_max_vq; ++vq) {
159
+ MEMTXATTRS_UNSPECIFIED, &res);
392
+ if (test_bit(vq - 1, cpu->sve_vq_map)) {
160
+
393
+ i = (vq - 1) / 64;
161
+ if (res != MEMTX_OK) {
394
+ j = (vq - 1) % 64;
162
+ return result;
395
+ vls[i] |= 1UL << j;
163
+ }
396
+ }
164
+
397
+ }
165
+ icid = value & ICID_MASK;
398
+
166
+
399
+ return kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
167
+ rdbase = (value & R_MAPC_RDBASE_MASK) >> R_MAPC_RDBASE_SHIFT;
168
+ rdbase &= RDBASE_PROCNUM_MASK;
169
+
170
+ valid = (value & CMD_FIELD_VALID_MASK);
171
+
172
+ if ((icid > s->ct.maxids.max_collids) || (rdbase > s->gicv3->num_cpu)) {
173
+ qemu_log_mask(LOG_GUEST_ERROR,
174
+ "ITS MAPC: invalid collection table attributes "
175
+ "icid %d rdbase %lu\n", icid, rdbase);
176
+ /*
177
+ * in this implementation, in case of error
178
+ * we ignore this command and move onto the next
179
+ * command in the queue
180
+ */
181
+ } else {
182
+ result = update_cte(s, icid, valid, rdbase);
183
+ }
184
+
185
+ return result;
186
+}
400
+}
187
+
401
+
188
+static bool update_dte(GICv3ITSState *s, uint32_t devid, bool valid,
402
#define ARM_CPU_ID_MPIDR 3, 0, 0, 0, 5
189
+ uint8_t size, uint64_t itt_addr)
403
404
int kvm_arch_init_vcpu(CPUState *cs)
405
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
406
407
if (cpu->kvm_target == QEMU_KVM_ARM_TARGET_NONE ||
408
!object_dynamic_cast(OBJECT(cpu), TYPE_AARCH64_CPU)) {
409
- fprintf(stderr, "KVM is not supported for this guest CPU type\n");
410
+ error_report("KVM is not supported for this guest CPU type");
411
return -EINVAL;
412
}
413
414
@@ -XXX,XX +XXX,XX @@ int kvm_arch_init_vcpu(CPUState *cs)
415
}
416
417
if (cpu_isar_feature(aa64_sve, cpu)) {
418
+ ret = kvm_arm_sve_set_vls(cs);
419
+ if (ret) {
420
+ return ret;
421
+ }
422
ret = kvm_arm_vcpu_finalize(cs, KVM_ARM_VCPU_SVE);
423
if (ret) {
424
return ret;
425
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
426
index XXXXXXX..XXXXXXX 100644
427
--- a/tests/arm-cpu-features.c
428
+++ b/tests/arm-cpu-features.c
429
@@ -XXX,XX +XXX,XX @@ static QDict *resp_get_props(QDict *resp)
430
return qdict;
431
}
432
433
+static bool resp_get_feature(QDict *resp, const char *feature)
190
+{
434
+{
191
+ AddressSpace *as = &s->gicv3->dma_as;
435
+ QDict *props;
192
+ uint64_t value;
436
+
193
+ uint64_t l2t_addr;
437
+ g_assert(resp);
194
+ bool valid_l2t;
438
+ g_assert(resp_has_props(resp));
195
+ uint32_t l2t_id;
439
+ props = resp_get_props(resp);
196
+ uint32_t max_l2_entries;
440
+ g_assert(qdict_get(props, feature));
197
+ uint64_t dte = 0;
441
+ return qdict_get_bool(props, feature);
198
+ MemTxResult res = MEMTX_OK;
442
+}
199
+
443
+
200
+ if (s->dt.valid) {
444
#define assert_has_feature(qts, cpu_type, feature) \
201
+ if (valid) {
445
({ \
202
+ /* add mapping entry to device table */
446
QDict *_resp = do_query_no_props(qts, cpu_type); \
203
+ dte = (valid & TABLE_ENTRY_VALID_MASK) |
447
@@ -XXX,XX +XXX,XX @@ static void sve_tests_sve_off(const void *data)
204
+ ((size & SIZE_MASK) << 1U) |
448
qtest_quit(qts);
205
+ (itt_addr << GITS_DTE_ITTADDR_SHIFT);
449
}
206
+ }
450
207
+ } else {
451
+static void sve_tests_sve_off_kvm(const void *data)
208
+ return true;
452
+{
209
+ }
453
+ QTestState *qts;
454
+
455
+ qts = qtest_init(MACHINE_KVM "-cpu max,sve=off");
210
+
456
+
211
+ /*
457
+ /*
212
+ * The specification defines the format of level 1 entries of a
458
+ * We don't know if this host supports SVE so we don't
213
+ * 2-level table, but the format of level 2 entries and the format
459
+ * attempt to test enabling anything. We only test that
214
+ * of flat-mapped tables is IMPDEF.
460
+ * everything is disabled (as it should be with sve=off)
461
+ * and that using sve<N>=off to explicitly disable vector
462
+ * lengths is OK too.
215
+ */
463
+ */
216
+ if (s->dt.indirect) {
464
+ assert_sve_vls(qts, "max", 0, NULL);
217
+ l2t_id = devid / (s->dt.page_sz / L1TABLE_ENTRY_SIZE);
465
+ assert_sve_vls(qts, "max", 0, "{ 'sve128': false }");
218
+
466
+
219
+ value = address_space_ldq_le(as,
467
+ qtest_quit(qts);
220
+ s->dt.base_addr +
221
+ (l2t_id * L1TABLE_ENTRY_SIZE),
222
+ MEMTXATTRS_UNSPECIFIED, &res);
223
+
224
+ if (res != MEMTX_OK) {
225
+ return false;
226
+ }
227
+
228
+ valid_l2t = (value & L2_TABLE_VALID_MASK) != 0;
229
+
230
+ if (valid_l2t) {
231
+ max_l2_entries = s->dt.page_sz / s->dt.entry_sz;
232
+
233
+ l2t_addr = value & ((1ULL << 51) - 1);
234
+
235
+ address_space_stq_le(as, l2t_addr +
236
+ ((devid % max_l2_entries) * GITS_DTE_SIZE),
237
+ dte, MEMTXATTRS_UNSPECIFIED, &res);
238
+ }
239
+ } else {
240
+ /* Flat level table */
241
+ address_space_stq_le(as, s->dt.base_addr + (devid * GITS_DTE_SIZE),
242
+ dte, MEMTXATTRS_UNSPECIFIED, &res);
243
+ }
244
+ if (res != MEMTX_OK) {
245
+ return false;
246
+ } else {
247
+ return true;
248
+ }
249
+}
468
+}
250
+
469
+
251
+static bool process_mapd(GICv3ITSState *s, uint64_t value, uint32_t offset)
470
static void test_query_cpu_model_expansion(const void *data)
252
+{
471
{
253
+ AddressSpace *as = &s->gicv3->dma_as;
472
QTestState *qts;
254
+ uint32_t devid;
473
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
255
+ uint8_t size;
474
}
256
+ uint64_t itt_addr;
475
257
+ bool valid;
476
if (g_str_equal(qtest_get_arch(), "aarch64")) {
258
+ MemTxResult res = MEMTX_OK;
477
+ bool kvm_supports_sve;
259
+ bool result = false;
478
+ char max_name[8], name[8];
260
+
479
+ uint32_t max_vq, vq;
261
+ devid = ((value & DEVID_MASK) >> DEVID_SHIFT);
480
+ uint64_t vls;
262
+
481
+ QDict *resp;
263
+ offset += NUM_BYTES_IN_DW;
482
+ char *error;
264
+ value = address_space_ldq_le(as, s->cq.base_addr + offset,
483
+
265
+ MEMTXATTRS_UNSPECIFIED, &res);
484
assert_has_feature(qts, "host", "aarch64");
266
+
485
assert_has_feature(qts, "host", "pmu");
267
+ if (res != MEMTX_OK) {
486
268
+ return result;
487
- assert_has_feature(qts, "max", "sve");
269
+ }
488
-
270
+
489
assert_error(qts, "cortex-a15",
271
+ size = (value & SIZE_MASK);
490
"We cannot guarantee the CPU type 'cortex-a15' works "
272
+
491
"with KVM on this host", NULL);
273
+ offset += NUM_BYTES_IN_DW;
492
+
274
+ value = address_space_ldq_le(as, s->cq.base_addr + offset,
493
+ assert_has_feature(qts, "max", "sve");
275
+ MEMTXATTRS_UNSPECIFIED, &res);
494
+ resp = do_query_no_props(qts, "max");
276
+
495
+ kvm_supports_sve = resp_get_feature(resp, "sve");
277
+ if (res != MEMTX_OK) {
496
+ vls = resp_get_sve_vls(resp);
278
+ return result;
497
+ qobject_unref(resp);
279
+ }
498
+
280
+
499
+ if (kvm_supports_sve) {
281
+ itt_addr = (value & ITTADDR_MASK) >> ITTADDR_SHIFT;
500
+ g_assert(vls != 0);
282
+
501
+ max_vq = 64 - __builtin_clzll(vls);
283
+ valid = (value & CMD_FIELD_VALID_MASK);
502
+ sprintf(max_name, "sve%d", max_vq * 128);
284
+
503
+
285
+ if ((devid > s->dt.maxids.max_devids) ||
504
+ /* Enabling a supported length is of course fine. */
286
+ (size > FIELD_EX64(s->typer, GITS_TYPER, IDBITS))) {
505
+ assert_sve_vls(qts, "max", vls, "{ %s: true }", max_name);
287
+ qemu_log_mask(LOG_GUEST_ERROR,
506
+
288
+ "ITS MAPD: invalid device table attributes "
507
+ /* Get the next supported length smaller than max-vq. */
289
+ "devid %d or size %d\n", devid, size);
508
+ vq = 64 - __builtin_clzll(vls & ~BIT_ULL(max_vq - 1));
290
+ /*
509
+ if (vq) {
291
+ * in this implementation, in case of error
510
+ /*
292
+ * we ignore this command and move onto the next
511
+ * We have at least one length smaller than max-vq,
293
+ * command in the queue
512
+ * so we can disable max-vq.
294
+ */
513
+ */
295
+ } else {
514
+ assert_sve_vls(qts, "max", (vls & ~BIT_ULL(max_vq - 1)),
296
+ result = update_dte(s, devid, valid, size, itt_addr);
515
+ "{ %s: false }", max_name);
297
+ }
516
+
298
+
517
+ /*
299
+ return result;
518
+ * Smaller, supported vector lengths cannot be disabled
300
+}
519
+ * unless all larger, supported vector lengths are also
301
+
520
+ * disabled.
302
+/*
521
+ */
303
+ * Current implementation blocks until all
522
+ sprintf(name, "sve%d", vq * 128);
304
+ * commands are processed
523
+ error = g_strdup_printf("cannot disable %s", name);
305
+ */
524
+ assert_error(qts, "max", error,
306
+static void process_cmdq(GICv3ITSState *s)
525
+ "{ %s: true, %s: false }",
307
+{
526
+ max_name, name);
308
+ uint32_t wr_offset = 0;
527
+ g_free(error);
309
+ uint32_t rd_offset = 0;
528
+ }
310
+ uint32_t cq_offset = 0;
529
+
311
+ uint64_t data;
312
+ AddressSpace *as = &s->gicv3->dma_as;
313
+ MemTxResult res = MEMTX_OK;
314
+ bool result = true;
315
+ uint8_t cmd;
316
+
317
+ if (!(s->ctlr & ITS_CTLR_ENABLED)) {
318
+ return;
319
+ }
320
+
321
+ wr_offset = FIELD_EX64(s->cwriter, GITS_CWRITER, OFFSET);
322
+
323
+ if (wr_offset > s->cq.max_entries) {
324
+ qemu_log_mask(LOG_GUEST_ERROR,
325
+ "%s: invalid write offset "
326
+ "%d\n", __func__, wr_offset);
327
+ return;
328
+ }
329
+
330
+ rd_offset = FIELD_EX64(s->creadr, GITS_CREADR, OFFSET);
331
+
332
+ if (rd_offset > s->cq.max_entries) {
333
+ qemu_log_mask(LOG_GUEST_ERROR,
334
+ "%s: invalid read offset "
335
+ "%d\n", __func__, rd_offset);
336
+ return;
337
+ }
338
+
339
+ while (wr_offset != rd_offset) {
340
+ cq_offset = (rd_offset * GITS_CMDQ_ENTRY_SIZE);
341
+ data = address_space_ldq_le(as, s->cq.base_addr + cq_offset,
342
+ MEMTXATTRS_UNSPECIFIED, &res);
343
+ if (res != MEMTX_OK) {
344
+ result = false;
345
+ }
346
+ cmd = (data & CMD_MASK);
347
+
348
+ switch (cmd) {
349
+ case GITS_CMD_INT:
350
+ break;
351
+ case GITS_CMD_CLEAR:
352
+ break;
353
+ case GITS_CMD_SYNC:
354
+ /*
530
+ /*
355
+ * Current implementation makes a blocking synchronous call
531
+ * The smallest, supported vector length is required, because
356
+ * for every command issued earlier, hence the internal state
532
+ * we need at least one vector length enabled.
357
+ * is already consistent by the time SYNC command is executed.
358
+ * Hence no further processing is required for SYNC command.
359
+ */
533
+ */
360
+ break;
534
+ vq = __builtin_ffsll(vls);
361
+ case GITS_CMD_MAPD:
535
+ sprintf(name, "sve%d", vq * 128);
362
+ result = process_mapd(s, data, cq_offset);
536
+ error = g_strdup_printf("cannot disable %s", name);
363
+ break;
537
+ assert_error(qts, "max", error, "{ %s: false }", name);
364
+ case GITS_CMD_MAPC:
538
+ g_free(error);
365
+ result = process_mapc(s, cq_offset);
539
+
366
+ break;
540
+ /* Get an unsupported length. */
367
+ case GITS_CMD_MAPTI:
541
+ for (vq = 1; vq <= max_vq; ++vq) {
368
+ break;
542
+ if (!(vls & BIT_ULL(vq - 1))) {
369
+ case GITS_CMD_MAPI:
543
+ break;
370
+ break;
544
+ }
371
+ case GITS_CMD_DISCARD:
545
+ }
372
+ break;
546
+ if (vq <= SVE_MAX_VQ) {
373
+ case GITS_CMD_INV:
547
+ sprintf(name, "sve%d", vq * 128);
374
+ case GITS_CMD_INVALL:
548
+ error = g_strdup_printf("cannot enable %s", name);
375
+ break;
549
+ assert_error(qts, "max", error, "{ %s: true }", name);
376
+ default:
550
+ g_free(error);
377
+ break;
551
+ }
378
+ }
379
+ if (result) {
380
+ rd_offset++;
381
+ rd_offset %= s->cq.max_entries;
382
+ s->creadr = FIELD_DP64(s->creadr, GITS_CREADR, OFFSET, rd_offset);
383
+ } else {
552
+ } else {
384
+ /*
553
+ g_assert(vls == 0);
385
+ * in this implementation, in case of dma read/write error
554
+ }
386
+ * we stall the command processing
555
} else {
387
+ */
556
assert_has_not_feature(qts, "host", "aarch64");
388
+ s->creadr = FIELD_DP64(s->creadr, GITS_CREADR, STALLED, 1);
557
assert_has_not_feature(qts, "host", "pmu");
389
+ qemu_log_mask(LOG_GUEST_ERROR,
558
@@ -XXX,XX +XXX,XX @@ int main(int argc, char **argv)
390
+ "%s: %x cmd processing failed\n", __func__, cmd);
559
NULL, sve_tests_sve_max_vq_8);
391
+ break;
560
qtest_add_data_func("/arm/max/query-cpu-model-expansion/sve-off",
392
+ }
561
NULL, sve_tests_sve_off);
393
+ }
562
+ qtest_add_data_func("/arm/kvm/query-cpu-model-expansion/sve-off",
394
+}
563
+ NULL, sve_tests_sve_off_kvm);
395
+
564
}
396
/*
565
397
* This function extracts the ITS Device and Collection table specific
566
return g_test_run();
398
* parameters (like base_addr, size etc) from GITS_BASER register.
567
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
399
@@ -XXX,XX +XXX,XX @@ static bool its_writel(GICv3ITSState *s, hwaddr offset,
568
index XXXXXXX..XXXXXXX 100644
400
extract_table_params(s);
569
--- a/docs/arm-cpu-features.rst
401
extract_cmdq_params(s);
570
+++ b/docs/arm-cpu-features.rst
402
s->creadr = 0;
571
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Dependencies and Constraints
403
+ process_cmdq(s);
572
404
}
573
1) At least one vector length must be enabled when `sve` is enabled.
405
break;
574
406
case GITS_CBASER:
575
- 2) If a vector length `N` is enabled, then all power-of-two vector
407
@@ -XXX,XX +XXX,XX @@ static bool its_writel(GICv3ITSState *s, hwaddr offset,
576
- lengths smaller than `N` must also be enabled. E.g. if `sve512`
408
case GITS_CWRITER:
577
- is enabled, then the 128-bit and 256-bit vector lengths must also
409
s->cwriter = deposit64(s->cwriter, 0, 32,
578
- be enabled.
410
(value & ~R_GITS_CWRITER_RETRY_MASK));
579
+ 2) If a vector length `N` is enabled, then, when KVM is enabled, all
411
+ if (s->cwriter != s->creadr) {
580
+ smaller, host supported vector lengths must also be enabled. If
412
+ process_cmdq(s);
581
+ KVM is not enabled, then only all the smaller, power-of-two vector
413
+ }
582
+ lengths must be enabled. E.g. with KVM if the host supports all
414
break;
583
+ vector lengths up to 512-bits (128, 256, 384, 512), then if `sve512`
415
case GITS_CWRITER + 4:
584
+ is enabled, the 128-bit vector length, 256-bit vector length, and
416
s->cwriter = deposit64(s->cwriter, 32, 32, value);
585
+ 384-bit vector length must also be enabled. Without KVM, the 384-bit
417
@@ -XXX,XX +XXX,XX @@ static bool its_writell(GICv3ITSState *s, hwaddr offset,
586
+ vector length would not be required.
418
break;
587
+
419
case GITS_CWRITER:
588
+ 3) If KVM is enabled then only vector lengths that the host CPU type
420
s->cwriter = value & ~R_GITS_CWRITER_RETRY_MASK;
589
+ supports may be enabled. If SVE is not supported by the host, then
421
+ if (s->cwriter != s->creadr) {
590
+ no `sve*` properties may be enabled.
422
+ process_cmdq(s);
591
423
+ }
592
SVE CPU Property Parsing Semantics
424
break;
593
----------------------------------
425
case GITS_CREADR:
594
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Parsing Semantics
426
if (s->gicv3->gicd_ctlr & GICD_CTLR_DS) {
595
an error is generated.
596
597
2) If SVE is enabled (`sve=on`), but no `sve<N>` CPU properties are
598
- provided, then all supported vector lengths are enabled, including
599
- the non-power-of-two lengths.
600
+ provided, then all supported vector lengths are enabled, which when
601
+ KVM is not in use means including the non-power-of-two lengths, and,
602
+ when KVM is in use, it means all vector lengths supported by the host
603
+ processor.
604
605
3) If SVE is enabled, then an error is generated when attempting to
606
disable the last enabled vector length (see constraint (1) of "SVE
607
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Parsing Semantics
608
has been explicitly disabled, then an error is generated (see
609
constraint (2) of "SVE CPU Property Dependencies and Constraints").
610
611
- 5) If one or more `sve<N>` CPU properties are set `off`, but no `sve<N>`,
612
+ 5) When KVM is enabled, if the host does not support SVE, then an error
613
+ is generated when attempting to enable any `sve*` properties (see
614
+ constraint (3) of "SVE CPU Property Dependencies and Constraints").
615
+
616
+ 6) When KVM is enabled, if the host does support SVE, then an error is
617
+ generated when attempting to enable any vector lengths not supported
618
+ by the host (see constraint (3) of "SVE CPU Property Dependencies and
619
+ Constraints").
620
+
621
+ 7) If one or more `sve<N>` CPU properties are set `off`, but no `sve<N>`,
622
CPU properties are set `on`, then the specified vector lengths are
623
disabled but the default for any unspecified lengths remains enabled.
624
- Disabling a power-of-two vector length also disables all vector
625
- lengths larger than the power-of-two length (see constraint (2) of
626
- "SVE CPU Property Dependencies and Constraints").
627
+ When KVM is not enabled, disabling a power-of-two vector length also
628
+ disables all vector lengths larger than the power-of-two length.
629
+ When KVM is enabled, then disabling any supported vector length also
630
+ disables all larger vector lengths (see constraint (2) of "SVE CPU
631
+ Property Dependencies and Constraints").
632
633
- 6) If one or more `sve<N>` CPU properties are set to `on`, then they
634
+ 8) If one or more `sve<N>` CPU properties are set to `on`, then they
635
are enabled and all unspecified lengths default to disabled, except
636
for the required lengths per constraint (2) of "SVE CPU Property
637
Dependencies and Constraints", which will even be auto-enabled if
638
they were not explicitly enabled.
639
640
- 7) If SVE was disabled (`sve=off`), allowing all vector lengths to be
641
+ 9) If SVE was disabled (`sve=off`), allowing all vector lengths to be
642
explicitly disabled (i.e. avoiding the error specified in (3) of
643
"SVE CPU Property Parsing Semantics"), then if later an `sve=on` is
644
provided an error will be generated. To avoid this error, one must
427
--
2.20.1
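As a rough illustration of the constraints above (assuming a KVM host
whose kernel supports SVE; exactly which lengths are accepted depends
on that host), invocations along these lines exercise the new
properties:

  $ qemu-system-aarch64 -M virt,accel=kvm -cpu max,sve256=on,sve512=on
  $ qemu-system-aarch64 -M virt,accel=kvm -cpu max,sve=off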
1
In v8A, the PSTATE.IL bit is set for various kinds of illegal
1
From: Andrew Jones <drjones@redhat.com>
2
exception return or mode-change attempts. We already set PSTATE.IL
2
3
(or its AArch32 equivalent CPSR.IL) in all those cases, but we
3
Allow cpu 'host' to enable SVE when it's available, unless the
4
weren't implementing the part of the behaviour where attempting to
4
user chooses to disable it with the added 'sve=off' cpu property.
5
execute an instruction with PSTATE.IL takes an immediate exception
5
Also give the user the ability to select vector lengths with the
6
with an appropriate syndrome value.
6
sve<N> properties. We don't adopt 'max' cpu's other sve property,
7
7
sve-max-vq, because that property is difficult to use with KVM.
8
Add a new TB flags bit tracking PSTATE.IL/CPSR.IL, and generate code
8
That property assumes all vector lengths in the range from 1 up
9
to take an exception instead of whatever the instruction would have
9
to and including the specified maximum length are supported, but
10
been.
10
there may be optional lengths not supported by the host in that
11
11
range. With KVM one must be more specific when enabling vector
12
PSTATE.IL and CPSR.IL change only on exception entry, attempted
12
lengths.
13
exception exit, and various AArch32 mode changes via cpsr_write().
13
14
These places generally already rebuild the hflags, so the only place
14
Signed-off-by: Andrew Jones <drjones@redhat.com>
15
we need an extra rebuild_hflags call is in the illegal-return
15
Reviewed-by: Eric Auger <eric.auger@redhat.com>
16
codepath of the AArch64 exception_return helper.
16
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
17
17
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
18
Message-id: 20191031142734.8590-10-drjones@redhat.com
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
20
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
21
Message-id: 20210821195958.41312-2-richard.henderson@linaro.org
22
Message-Id: <20210817162118.24319-1-peter.maydell@linaro.org>
23
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
24
[rth: Added missing returns; set IL bit in syndrome]
25
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
26
---
20
---
27
target/arm/cpu.h | 1 +
21
target/arm/cpu.h | 2 ++
28
target/arm/syndrome.h | 5 +++++
22
target/arm/cpu.c | 3 +++
29
target/arm/translate.h | 2 ++
23
target/arm/cpu64.c | 33 +++++++++++++++++----------------
30
target/arm/helper-a64.c | 1 +
24
target/arm/kvm64.c | 14 +++++++++++++-
31
target/arm/helper.c | 8 ++++++++
25
tests/arm-cpu-features.c | 17 ++++++++---------
32
target/arm/translate-a64.c | 11 +++++++++++
26
docs/arm-cpu-features.rst | 19 ++++++++++++-------
33
target/arm/translate.c | 21 +++++++++++++++++++++
27
6 files changed, 55 insertions(+), 33 deletions(-)
34
7 files changed, 49 insertions(+)
35
28
36
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
37
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
38
--- a/target/arm/cpu.h
31
--- a/target/arm/cpu.h
39
+++ b/target/arm/cpu.h
32
+++ b/target/arm/cpu.h
40
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_ANY, FPEXC_EL, 8, 2)
33
@@ -XXX,XX +XXX,XX @@ int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
41
FIELD(TBFLAG_ANY, DEBUG_TARGET_EL, 10, 2)
34
void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq);
42
/* Memory operations require alignment: SCTLR_ELx.A or CCR.UNALIGN_TRP */
35
void aarch64_sve_change_el(CPUARMState *env, int old_el,
43
FIELD(TBFLAG_ANY, ALIGN_MEM, 12, 1)
36
int new_el, bool el0_a64);
44
+FIELD(TBFLAG_ANY, PSTATE__IL, 13, 1)
37
+void aarch64_add_sve_properties(Object *obj);
45
38
#else
46
/*
39
static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { }
47
* Bit usage when in AArch32 state, both A- and M-profile.
40
static inline void aarch64_sve_change_el(CPUARMState *env, int o,
48
diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
41
int n, bool a)
49
index XXXXXXX..XXXXXXX 100644
42
{ }
50
--- a/target/arm/syndrome.h
43
+static inline void aarch64_add_sve_properties(Object *obj) { }
51
+++ b/target/arm/syndrome.h
44
#endif
52
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_wfx(int cv, int cond, int ti, bool is_16bit)
45
53
(cv << 24) | (cond << 20) | ti;
46
#if !defined(CONFIG_TCG)
47
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
48
index XXXXXXX..XXXXXXX 100644
49
--- a/target/arm/cpu.c
50
+++ b/target/arm/cpu.c
51
@@ -XXX,XX +XXX,XX @@ static void arm_host_initfn(Object *obj)
52
ARMCPU *cpu = ARM_CPU(obj);
53
54
kvm_arm_set_cpu_features_from_host(cpu);
55
+ if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
56
+ aarch64_add_sve_properties(obj);
57
+ }
58
arm_cpu_post_init(obj);
54
}
59
}
55
60
56
+static inline uint32_t syn_illegalstate(void)
61
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/cpu64.c
64
+++ b/target/arm/cpu64.c
65
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
66
cpu->isar.id_aa64pfr0 = t;
67
}
68
69
+void aarch64_add_sve_properties(Object *obj)
57
+{
70
+{
58
+ return (EC_ILLEGALSTATE << ARM_EL_EC_SHIFT) | ARM_EL_IL;
71
+ uint32_t vq;
72
+
73
+ object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
74
+ cpu_arm_set_sve, NULL, NULL, &error_fatal);
75
+
76
+ for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
77
+ char name[8];
78
+ sprintf(name, "sve%d", vq * 128);
79
+ object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
80
+ cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
81
+ }
59
+}
82
+}
60
+
83
+
61
#endif /* TARGET_ARM_SYNDROME_H */
84
/* -cpu max: if KVM is enabled, like -cpu host (best possible with this host);
62
diff --git a/target/arm/translate.h b/target/arm/translate.h
85
* otherwise, a CPU with as many features enabled as our emulation supports.
63
index XXXXXXX..XXXXXXX 100644
86
* The version of '-cpu max' for qemu-system-arm is defined in cpu.c;
64
--- a/target/arm/translate.h
87
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, Visitor *v, const char *name,
65
+++ b/target/arm/translate.h
88
static void aarch64_max_initfn(Object *obj)
66
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
89
{
67
bool hstr_active;
90
ARMCPU *cpu = ARM_CPU(obj);
68
/* True if memory operations require alignment */
91
- uint32_t vq;
69
bool align_mem;
92
- uint64_t t;
70
+ /* True if PSTATE.IL is set */
93
71
+ bool pstate_il;
94
if (kvm_enabled()) {
72
/*
95
kvm_arm_set_cpu_features_from_host(cpu);
73
* >= 0, a copy of PSTATE.BTYPE, which will be 0 without v8.5-BTI.
96
- if (kvm_arm_sve_supported(CPU(cpu))) {
74
* < 0, set by the current instruction.
97
- t = cpu->isar.id_aa64pfr0;
75
diff --git a/target/arm/helper-a64.c b/target/arm/helper-a64.c
98
- t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
76
index XXXXXXX..XXXXXXX 100644
99
- cpu->isar.id_aa64pfr0 = t;
77
--- a/target/arm/helper-a64.c
100
- }
78
+++ b/target/arm/helper-a64.c
101
} else {
79
@@ -XXX,XX +XXX,XX @@ illegal_return:
102
+ uint64_t t;
80
if (!arm_singlestep_active(env)) {
103
uint32_t u;
81
env->pstate &= ~PSTATE_SS;
104
aarch64_a57_initfn(obj);
82
}
105
83
+ helper_rebuild_hflags_a64(env, cur_el);
106
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
84
qemu_log_mask(LOG_GUEST_ERROR, "Illegal exception return at EL%d: "
107
#endif
85
"resuming execution at 0x%" PRIx64 "\n", cur_el, env->pc);
108
}
109
110
- object_property_add(obj, "sve", "bool", cpu_arm_get_sve,
111
- cpu_arm_set_sve, NULL, NULL, &error_fatal);
112
+ aarch64_add_sve_properties(obj);
113
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
114
cpu_max_set_sve_max_vq, NULL, NULL, &error_fatal);
115
-
116
- for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
117
- char name[8];
118
- sprintf(name, "sve%d", vq * 128);
119
- object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
120
- cpu_arm_set_sve_vq, NULL, NULL, &error_fatal);
121
- }
86
}
122
}
87
diff --git a/target/arm/helper.c b/target/arm/helper.c
123
88
index XXXXXXX..XXXXXXX 100644
124
struct ARMCPUInfo {
89
--- a/target/arm/helper.c
125
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
90
+++ b/target/arm/helper.c
126
index XXXXXXX..XXXXXXX 100644
91
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a32(CPUARMState *env, int fp_el,
127
--- a/target/arm/kvm64.c
92
DP_TBFLAG_A32(flags, HSTR_ACTIVE, 1);
128
+++ b/target/arm/kvm64.c
93
}
129
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
94
130
* and then query that CPU for the relevant ID registers.
95
+ if (env->uncached_cpsr & CPSR_IL) {
131
*/
96
+ DP_TBFLAG_ANY(flags, PSTATE__IL, 1);
132
int fdarray[3];
133
+ bool sve_supported;
134
uint64_t features = 0;
135
+ uint64_t t;
136
int err;
137
138
/* Old kernels may not know about the PREFERRED_TARGET ioctl: however
139
@@ -XXX,XX +XXX,XX @@ bool kvm_arm_get_host_cpu_features(ARMHostCPUFeatures *ahcf)
140
ARM64_SYS_REG(3, 0, 0, 3, 2));
141
}
142
143
+ sve_supported = ioctl(fdarray[0], KVM_CHECK_EXTENSION, KVM_CAP_ARM_SVE) > 0;
144
+
145
kvm_arm_destroy_scratch_host_vcpu(fdarray);
146
147
if (err < 0) {
148
return false;
149
}
150
151
- /* We can assume any KVM supporting CPU is at least a v8
152
+ /* Add feature bits that can't appear until after VCPU init. */
153
+ if (sve_supported) {
154
+ t = ahcf->isar.id_aa64pfr0;
155
+ t = FIELD_DP64(t, ID_AA64PFR0, SVE, 1);
156
+ ahcf->isar.id_aa64pfr0 = t;
97
+ }
157
+ }
98
+
158
+
99
return rebuild_hflags_common_32(env, fp_el, mmu_idx, flags);
159
+ /*
100
}
160
+ * We can assume any KVM supporting CPU is at least a v8
101
161
* with VFPv4+Neon; this in turn implies most of the other
102
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
162
* feature bits.
103
}
163
*/
104
}
164
diff --git a/tests/arm-cpu-features.c b/tests/arm-cpu-features.c
105
165
index XXXXXXX..XXXXXXX 100644
106
+ if (env->pstate & PSTATE_IL) {
166
--- a/tests/arm-cpu-features.c
107
+ DP_TBFLAG_ANY(flags, PSTATE__IL, 1);
167
+++ b/tests/arm-cpu-features.c
108
+ }
168
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
109
+
169
"We cannot guarantee the CPU type 'cortex-a15' works "
110
if (cpu_isar_feature(aa64_mte, env_archcpu(env))) {
170
"with KVM on this host", NULL);
111
/*
171
112
* Set MTE_ACTIVE if any access may be Checked, and leave clear
172
- assert_has_feature(qts, "max", "sve");
113
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
173
- resp = do_query_no_props(qts, "max");
114
index XXXXXXX..XXXXXXX 100644
174
+ assert_has_feature(qts, "host", "sve");
115
--- a/target/arm/translate-a64.c
175
+ resp = do_query_no_props(qts, "host");
116
+++ b/target/arm/translate-a64.c
176
kvm_supports_sve = resp_get_feature(resp, "sve");
117
@@ -XXX,XX +XXX,XX @@ static void disas_a64_insn(CPUARMState *env, DisasContext *s)
177
vls = resp_get_sve_vls(resp);
118
s->fp_access_checked = false;
178
qobject_unref(resp);
119
s->sve_access_checked = false;
179
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
120
180
sprintf(max_name, "sve%d", max_vq * 128);
121
+ if (s->pstate_il) {
181
122
+ /*
182
/* Enabling a supported length is of course fine. */
123
+ * Illegal execution state. This has priority over BTI
183
- assert_sve_vls(qts, "max", vls, "{ %s: true }", max_name);
124
+ * exceptions, but comes after instruction abort exceptions.
184
+ assert_sve_vls(qts, "host", vls, "{ %s: true }", max_name);
125
+ */
185
126
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
186
/* Get the next supported length smaller than max-vq. */
127
+ syn_illegalstate(), default_exception_el(s));
187
vq = 64 - __builtin_clzll(vls & ~BIT_ULL(max_vq - 1));
128
+ return;
188
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
129
+ }
189
* We have at least one length smaller than max-vq,
130
+
190
* so we can disable max-vq.
131
if (dc_isar_feature(aa64_bti, s)) {
191
*/
132
if (s->base.num_insns == 1) {
192
- assert_sve_vls(qts, "max", (vls & ~BIT_ULL(max_vq - 1)),
133
/*
193
+ assert_sve_vls(qts, "host", (vls & ~BIT_ULL(max_vq - 1)),
134
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
194
"{ %s: false }", max_name);
135
#endif
195
136
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
196
/*
137
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
197
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
138
+ dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
198
*/
139
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
199
sprintf(name, "sve%d", vq * 128);
140
dc->sve_len = (EX_TBFLAG_A64(tb_flags, ZCR_LEN) + 1) * 16;
200
error = g_strdup_printf("cannot disable %s", name);
141
dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
201
- assert_error(qts, "max", error,
142
diff --git a/target/arm/translate.c b/target/arm/translate.c
202
+ assert_error(qts, "host", error,
143
index XXXXXXX..XXXXXXX 100644
203
"{ %s: true, %s: false }",
144
--- a/target/arm/translate.c
204
max_name, name);
145
+++ b/target/arm/translate.c
205
g_free(error);
146
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
206
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
147
return;
207
vq = __builtin_ffsll(vls);
148
}
208
sprintf(name, "sve%d", vq * 128);
149
209
error = g_strdup_printf("cannot disable %s", name);
150
+ if (s->pstate_il) {
210
- assert_error(qts, "max", error, "{ %s: false }", name);
151
+ /*
211
+ assert_error(qts, "host", error, "{ %s: false }", name);
152
+ * Illegal execution state. This has priority over BTI
212
g_free(error);
153
+ * exceptions, but comes after instruction abort exceptions.
213
154
+ */
214
/* Get an unsupported length. */
155
+ gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
215
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
156
+ syn_illegalstate(), default_exception_el(s));
216
if (vq <= SVE_MAX_VQ) {
157
+ return;
217
sprintf(name, "sve%d", vq * 128);
158
+ }
218
error = g_strdup_printf("cannot enable %s", name);
159
+
219
- assert_error(qts, "max", error, "{ %s: true }", name);
160
if (cond == 0xf) {
220
+ assert_error(qts, "host", error, "{ %s: true }", name);
161
/* In ARMv3 and v4 the NV condition is UNPREDICTABLE; we
221
g_free(error);
162
* choose to UNDEF. In ARMv5 and above the space is used
222
}
163
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
223
} else {
164
#endif
224
@@ -XXX,XX +XXX,XX @@ static void test_query_cpu_model_expansion_kvm(const void *data)
165
dc->fp_excp_el = EX_TBFLAG_ANY(tb_flags, FPEXC_EL);
225
} else {
166
dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
226
assert_has_not_feature(qts, "host", "aarch64");
167
+ dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
227
assert_has_not_feature(qts, "host", "pmu");
168
228
-
169
if (arm_feature(env, ARM_FEATURE_M)) {
229
- assert_has_not_feature(qts, "max", "sve");
170
dc->vfp_enabled = 1;
230
+ assert_has_not_feature(qts, "host", "sve");
171
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
231
}
172
}
232
173
dc->insn = insn;
233
qtest_quit(qts);
174
234
diff --git a/docs/arm-cpu-features.rst b/docs/arm-cpu-features.rst
175
+ if (dc->pstate_il) {
235
index XXXXXXX..XXXXXXX 100644
176
+ /*
236
--- a/docs/arm-cpu-features.rst
177
+ * Illegal execution state. This has priority over BTI
237
+++ b/docs/arm-cpu-features.rst
178
+ * exceptions, but comes after instruction abort exceptions.
238
@@ -XXX,XX +XXX,XX @@ SVE CPU Property Examples
179
+ */
239
180
+ gen_exception_insn(dc, dc->pc_curr, EXCP_UDEF,
240
$ qemu-system-aarch64 -M virt -cpu max
181
+ syn_illegalstate(), default_exception_el(dc));
241
182
+ return;
242
- 3) Only enable the 128-bit vector length::
183
+ }
243
+ 3) When KVM is enabled, implicitly enable all host CPU supported vector
184
+
244
+ lengths with the `host` CPU type::
185
if (dc->eci) {
245
+
186
/*
246
+ $ qemu-system-aarch64 -M virt,accel=kvm -cpu host
187
* For M-profile continuable instructions, ECI/ICI handling
247
+
248
+ 4) Only enable the 128-bit vector length::
249
250
$ qemu-system-aarch64 -M virt -cpu max,sve128=on
251
252
- 4) Disable the 512-bit vector length and all larger vector lengths,
253
+ 5) Disable the 512-bit vector length and all larger vector lengths,
254
since 512 is a power-of-two. This results in all the smaller,
255
uninitialized lengths (128, 256, and 384) defaulting to enabled::
256
257
$ qemu-system-aarch64 -M virt -cpu max,sve512=off
258
259
- 5) Enable the 128-bit, 256-bit, and 512-bit vector lengths::
260
+ 6) Enable the 128-bit, 256-bit, and 512-bit vector lengths::
261
262
$ qemu-system-aarch64 -M virt -cpu max,sve128=on,sve256=on,sve512=on
263
264
- 6) The same as (5), but since the 128-bit and 256-bit vector
265
+ 7) The same as (6), but since the 128-bit and 256-bit vector
266
lengths are required for the 512-bit vector length to be enabled,
267
then allow them to be auto-enabled::
268
269
$ qemu-system-aarch64 -M virt -cpu max,sve512=on
270
271
- 7) Do the same as (6), but by first disabling SVE and then re-enabling it::
272
+ 8) Do the same as (7), but by first disabling SVE and then re-enabling it::
273
274
$ qemu-system-aarch64 -M virt -cpu max,sve=off,sve512=on,sve=on
275
276
- 8) Force errors regarding the last vector length::
277
+ 9) Force errors regarding the last vector length::
278
279
$ qemu-system-aarch64 -M virt -cpu max,sve128=off
280
$ qemu-system-aarch64 -M virt -cpu max,sve=off,sve128=off,sve=on
281
@@ -XXX,XX +XXX,XX @@ The examples in "SVE CPU Property Examples" exhibit many ways to select
282
vector lengths which developers may find useful in order to avoid overly
283
verbose command lines. However, the recommended way to select vector
284
lengths is to explicitly enable each desired length. Therefore only
285
-example's (1), (3), and (5) exhibit recommended uses of the properties.
286
+examples (1), (4), and (6) exhibit recommended uses of the properties.
--
2.20.1
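A couple of hedged examples for the new 'host' CPU SVE controls (both
assume KVM and a host kernel with SVE support):

  $ qemu-system-aarch64 -M virt,accel=kvm -cpu host
  $ qemu-system-aarch64 -M virt,accel=kvm -cpu host,sve=off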
1
From: Shashi Mallela <shashi.mallela@linaro.org>
1
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>
2
2
3
Include creation of the ITS as part of the virt platform GIC
3
Rebuild hflags when modifying CPUState at boot.
4
initialization. This emulated ITS model now co-exists with the KVM
5
ITS and is enabled in the absence of KVM in-kernel irqchip support on a
6
platform.
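As a sketch, a TCG guest started along these lines should now get the
emulated ITS by default (gic-version=3 is assumed, since the ITS is a
GICv3 feature):

  $ qemu-system-aarch64 -M virt,gic-version=3 -cpu max -smp 4 ...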
7
4
8
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
5
Fixes: e979972a6a
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
10
Message-id: 20210910143951.92242-9-shashi.mallela@linaro.org
7
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
8
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
9
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
10
Message-id: 20191031040830.18800-2-edgar.iglesias@xilinx.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
12
---
13
include/hw/arm/virt.h | 2 ++
13
hw/arm/boot.c | 1 +
14
target/arm/kvm_arm.h | 4 ++--
14
1 file changed, 1 insertion(+)
15
hw/arm/virt.c | 29 +++++++++++++++++++++++++++--
16
3 files changed, 31 insertions(+), 4 deletions(-)
17
15
18
diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
16
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
19
index XXXXXXX..XXXXXXX 100644
17
index XXXXXXX..XXXXXXX 100644
20
--- a/include/hw/arm/virt.h
18
--- a/hw/arm/boot.c
21
+++ b/include/hw/arm/virt.h
19
+++ b/hw/arm/boot.c
22
@@ -XXX,XX +XXX,XX @@ struct VirtMachineClass {
20
@@ -XXX,XX +XXX,XX @@ static void do_cpu_reset(void *opaque)
23
MachineClass parent;
21
info->secondary_cpu_reset_hook(cpu, info);
24
bool disallow_affinity_adjustment;
22
}
25
bool no_its;
23
}
26
+ bool no_tcg_its;
24
+ arm_rebuild_hflags(env);
27
bool no_pmu;
28
bool claim_edge_triggered_timers;
29
bool smbios_old_sys_ver;
30
@@ -XXX,XX +XXX,XX @@ struct VirtMachineState {
31
bool highmem;
32
bool highmem_ecam;
33
bool its;
34
+ bool tcg_its;
35
bool virt;
36
bool ras;
37
bool mte;
38
diff --git a/target/arm/kvm_arm.h b/target/arm/kvm_arm.h
39
index XXXXXXX..XXXXXXX 100644
40
--- a/target/arm/kvm_arm.h
41
+++ b/target/arm/kvm_arm.h
42
@@ -XXX,XX +XXX,XX @@ static inline const char *its_class_name(void)
43
/* KVM implementation requires this capability */
44
return kvm_direct_msi_enabled() ? "arm-its-kvm" : NULL;
45
} else {
46
- /* Software emulation is not implemented yet */
47
- return NULL;
48
+ /* Software emulation based model */
49
+ return "arm-gicv3-its";
50
}
25
}
51
}
26
}
52
53
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
54
index XXXXXXX..XXXXXXX 100644
55
--- a/hw/arm/virt.c
56
+++ b/hw/arm/virt.c
57
@@ -XXX,XX +XXX,XX @@ static void create_its(VirtMachineState *vms)
58
const char *itsclass = its_class_name();
59
DeviceState *dev;
60
61
+ if (!strcmp(itsclass, "arm-gicv3-its")) {
62
+ if (!vms->tcg_its) {
63
+ itsclass = NULL;
64
+ }
65
+ }
66
+
67
if (!itsclass) {
68
/* Do nothing if not supported */
69
return;
70
@@ -XXX,XX +XXX,XX @@ static void create_v2m(VirtMachineState *vms)
71
vms->msi_controller = VIRT_MSI_CTRL_GICV2M;
72
}
73
74
-static void create_gic(VirtMachineState *vms)
75
+static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
76
{
77
MachineState *ms = MACHINE(vms);
78
/* We create a standalone GIC */
79
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms)
80
nb_redist_regions);
81
qdev_prop_set_uint32(vms->gic, "redist-region-count[0]", redist0_count);
82
83
+ if (!kvm_irqchip_in_kernel()) {
84
+ if (vms->tcg_its) {
85
+ object_property_set_link(OBJECT(vms->gic), "sysmem",
86
+ OBJECT(mem), &error_fatal);
87
+ qdev_prop_set_bit(vms->gic, "has-lpi", true);
88
+ }
89
+ }
90
+
91
if (nb_redist_regions == 2) {
92
uint32_t redist1_capacity =
93
vms->memmap[VIRT_HIGH_GIC_REDIST2].size / GICV3_REDIST_SIZE;
94
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
95
96
virt_flash_fdt(vms, sysmem, secure_sysmem ?: sysmem);
97
98
- create_gic(vms);
99
+ create_gic(vms, sysmem);
100
101
virt_cpu_post_init(vms, sysmem);
102
103
@@ -XXX,XX +XXX,XX @@ static void virt_instance_init(Object *obj)
104
} else {
105
/* Default allows ITS instantiation */
106
vms->its = true;
107
+
108
+ if (vmc->no_tcg_its) {
109
+ vms->tcg_its = false;
110
+ } else {
111
+ vms->tcg_its = true;
112
+ }
113
}
114
115
/* Default disallows iommu instantiation */
116
@@ -XXX,XX +XXX,XX @@ DEFINE_VIRT_MACHINE_AS_LATEST(6, 2)
117
118
static void virt_machine_6_1_options(MachineClass *mc)
119
{
120
+ VirtMachineClass *vmc = VIRT_MACHINE_CLASS(OBJECT_CLASS(mc));
121
+
122
virt_machine_6_2_options(mc);
123
compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
124
+
125
+ /* qemu ITS was introduced with 6.2 */
126
+ vmc->no_tcg_its = true;
127
}
128
DEFINE_VIRT_MACHINE(6, 1)
129
27
130
--
28
--
131
2.20.1
29
2.20.1
132
30
133
31
diff view generated by jsdifflib
1
From: Bin Meng <bmeng.cn@gmail.com>
1
From: Christophe Lyon <christophe.lyon@linaro.org>
2
2
3
Currently the clock/reset check is done in uart_receive(), but we
3
rt==15 is a special case when reading the flags: it means the
4
can move the check to uart_can_receive(), which is called earlier.
4
destination is APSR. This patch avoids rejecting
5
vmrs apsr_nzcv, fpscr
6
as an illegal instruction.
5
7
6
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
8
Cc: qemu-stable@nongnu.org
7
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Signed-off-by: Christophe Lyon <christophe.lyon@linaro.org>
8
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
10
Message-id: 20191025095711.10853-1-christophe.lyon@linaro.org
9
Message-id: 20210901124521.30599-4-bmeng.cn@gmail.com
11
[PMM: updated the comment]
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
13
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
14
---
12
hw/char/cadence_uart.c | 17 ++++++++++-------
15
target/arm/translate-vfp.inc.c | 5 +++--
13
1 file changed, 10 insertions(+), 7 deletions(-)
16
1 file changed, 3 insertions(+), 2 deletions(-)
14
17
15
diff --git a/hw/char/cadence_uart.c b/hw/char/cadence_uart.c
18
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
16
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/char/cadence_uart.c
20
--- a/target/arm/translate-vfp.inc.c
18
+++ b/hw/char/cadence_uart.c
21
+++ b/target/arm/translate-vfp.inc.c
19
@@ -XXX,XX +XXX,XX @@ static void uart_parameters_setup(CadenceUARTState *s)
22
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
20
static int uart_can_receive(void *opaque)
23
if (arm_dc_feature(s, ARM_FEATURE_M)) {
21
{
24
/*
22
CadenceUARTState *s = opaque;
25
* The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
23
- int ret = MAX(CADENCE_UART_RX_FIFO_SIZE, CADENCE_UART_TX_FIFO_SIZE);
26
- * Writes to R15 are UNPREDICTABLE; we choose to undef.
24
- uint32_t ch_mode = s->r[R_MR] & UART_MR_CHMODE;
27
+ * Accesses to R15 are UNPREDICTABLE; we choose to undef.
25
+ int ret;
28
+ * (FPSCR -> r15 is a special case which writes to the PSR flags.)
26
+ uint32_t ch_mode;
29
*/
27
+
30
- if (a->rt == 15 || a->reg != ARM_VFP_FPSCR) {
28
+ /* ignore characters when unclocked or in reset */
31
+ if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
29
+ if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
32
return false;
30
+ return 0;
33
}
31
+ }
32
+
33
+ ret = MAX(CADENCE_UART_RX_FIFO_SIZE, CADENCE_UART_TX_FIFO_SIZE);
34
+ ch_mode = s->r[R_MR] & UART_MR_CHMODE;
35
36
if (ch_mode == NORMAL_MODE || ch_mode == ECHO_MODE) {
37
ret = MIN(ret, CADENCE_UART_RX_FIFO_SIZE - s->rx_count);
38
@@ -XXX,XX +XXX,XX @@ static void uart_receive(void *opaque, const uint8_t *buf, int size)
39
CadenceUARTState *s = opaque;
40
uint32_t ch_mode = s->r[R_MR] & UART_MR_CHMODE;
41
42
- /* ignore characters when unclocked or in reset */
43
- if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
44
- return;
45
- }
46
-
47
if (ch_mode == NORMAL_MODE || ch_mode == ECHO_MODE) {
48
uart_write_rx_fifo(opaque, buf, size);
49
}
34
}
50
--
35
--
51
2.20.1
36
2.20.1
52
37
53
38
diff view generated by jsdifflib
Deleted patch
1
From: Bin Meng <bmeng.cn@gmail.com>
2
1
3
This converts uart_read() and uart_write() to memop_with_attrs() ops.
4
5
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
6
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
7
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
8
Message-id: 20210901124521.30599-5-bmeng.cn@gmail.com
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
hw/char/cadence_uart.c | 26 +++++++++++++++-----------
12
1 file changed, 15 insertions(+), 11 deletions(-)
13
14
diff --git a/hw/char/cadence_uart.c b/hw/char/cadence_uart.c
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/char/cadence_uart.c
17
+++ b/hw/char/cadence_uart.c
18
@@ -XXX,XX +XXX,XX @@ static void uart_read_rx_fifo(CadenceUARTState *s, uint32_t *c)
19
uart_update_status(s);
20
}
21
22
-static void uart_write(void *opaque, hwaddr offset,
23
- uint64_t value, unsigned size)
24
+static MemTxResult uart_write(void *opaque, hwaddr offset,
25
+ uint64_t value, unsigned size, MemTxAttrs attrs)
26
{
27
CadenceUARTState *s = opaque;
28
29
DB_PRINT(" offset:%x data:%08x\n", (unsigned)offset, (unsigned)value);
30
offset >>= 2;
31
if (offset >= CADENCE_UART_R_MAX) {
32
- return;
33
+ return MEMTX_DECODE_ERROR;
34
}
35
switch (offset) {
36
case R_IER: /* ier (wts imr) */
37
@@ -XXX,XX +XXX,XX @@ static void uart_write(void *opaque, hwaddr offset,
38
break;
39
}
40
uart_update_status(s);
41
+
42
+ return MEMTX_OK;
43
}
44
45
-static uint64_t uart_read(void *opaque, hwaddr offset,
46
- unsigned size)
47
+static MemTxResult uart_read(void *opaque, hwaddr offset,
48
+ uint64_t *value, unsigned size, MemTxAttrs attrs)
49
{
50
CadenceUARTState *s = opaque;
51
uint32_t c = 0;
52
53
offset >>= 2;
54
if (offset >= CADENCE_UART_R_MAX) {
55
- c = 0;
56
- } else if (offset == R_TX_RX) {
57
+ return MEMTX_DECODE_ERROR;
58
+ }
59
+ if (offset == R_TX_RX) {
60
uart_read_rx_fifo(s, &c);
61
} else {
62
- c = s->r[offset];
63
+ c = s->r[offset];
64
}
65
66
DB_PRINT(" offset:%x data:%08x\n", (unsigned)(offset << 2), (unsigned)c);
67
- return c;
68
+ *value = c;
69
+ return MEMTX_OK;
70
}
71
72
static const MemoryRegionOps uart_ops = {
73
- .read = uart_read,
74
- .write = uart_write,
75
+ .read_with_attrs = uart_read,
76
+ .write_with_attrs = uart_write,
77
.endianness = DEVICE_NATIVE_ENDIAN,
78
};
79
80
--
81
2.20.1
82
83
diff view generated by jsdifflib
Deleted patch
1
From: Bin Meng <bmeng.cn@gmail.com>
2
1
3
Reads or writes to UART registers when unclocked or in reset should be
4
ignored. Add the check to uart_read() and uart_write(); as a result, the check in
5
uart_write_tx_fifo() is now unnecessary.
6
7
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
8
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
9
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
10
Message-id: 20210901124521.30599-6-bmeng.cn@gmail.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/char/cadence_uart.c | 15 ++++++++++-----
14
1 file changed, 10 insertions(+), 5 deletions(-)
15
16
diff --git a/hw/char/cadence_uart.c b/hw/char/cadence_uart.c
17
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/char/cadence_uart.c
19
+++ b/hw/char/cadence_uart.c
20
@@ -XXX,XX +XXX,XX @@ static gboolean cadence_uart_xmit(void *do_not_use, GIOCondition cond,
21
static void uart_write_tx_fifo(CadenceUARTState *s, const uint8_t *buf,
22
int size)
23
{
24
- /* ignore characters when unclocked or in reset */
25
- if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
26
- return;
27
- }
28
-
29
if ((s->r[R_CR] & UART_CR_TX_DIS) || !(s->r[R_CR] & UART_CR_TX_EN)) {
30
return;
31
}
32
@@ -XXX,XX +XXX,XX @@ static MemTxResult uart_write(void *opaque, hwaddr offset,
33
{
34
CadenceUARTState *s = opaque;
35
36
+ /* ignore access when unclocked or in reset */
37
+ if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
38
+ return MEMTX_ERROR;
39
+ }
40
+
41
DB_PRINT(" offset:%x data:%08x\n", (unsigned)offset, (unsigned)value);
42
offset >>= 2;
43
if (offset >= CADENCE_UART_R_MAX) {
44
@@ -XXX,XX +XXX,XX @@ static MemTxResult uart_read(void *opaque, hwaddr offset,
45
CadenceUARTState *s = opaque;
46
uint32_t c = 0;
47
48
+ /* ignore access when unclocked or in reset */
49
+ if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
50
+ return MEMTX_ERROR;
51
+ }
52
+
53
offset >>= 2;
54
if (offset >= CADENCE_UART_R_MAX) {
55
return MEMTX_DECODE_ERROR;
56
--
57
2.20.1
58
59
diff view generated by jsdifflib
Deleted patch
1
From: Bin Meng <bmeng.cn@gmail.com>
2
1
3
We've got SW that expects FSBL (Bootloader) to set up clocks and
4
resets. It's quite common that users run that SW on QEMU without
5
FSBL (FSBL typically requires the Xilinx tools installed). That's
6
fine, since users can still use -device loader to enable clocks etc.
7
8
To help folks understand what's going on, a log (guest-error) message
9
would be helpful here. In particular with the serial port since
10
things will go very quiet if they get things wrong.
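These messages only appear when guest-error logging is enabled, for
example with an invocation along these lines (the machine name here is
purely illustrative of a board that uses the Cadence UART):

    qemu-system-arm -M xilinx-zynq-a9 -serial mon:stdio -d guest_errors ...

so a user wondering why the serial port has gone silent has a quick way
to check whether the UART is still unclocked or held in reset.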
11
12
Suggested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
13
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
14
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
15
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
16
Message-id: 20210901124521.30599-7-bmeng.cn@gmail.com
17
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
18
---
19
hw/char/cadence_uart.c | 8 ++++++++
20
1 file changed, 8 insertions(+)
21
22
diff --git a/hw/char/cadence_uart.c b/hw/char/cadence_uart.c
23
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/char/cadence_uart.c
25
+++ b/hw/char/cadence_uart.c
26
@@ -XXX,XX +XXX,XX @@ static int uart_can_receive(void *opaque)
27
28
/* ignore characters when unclocked or in reset */
29
if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
30
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: uart is unclocked or in reset\n",
31
+ __func__);
32
return 0;
33
}
34
35
@@ -XXX,XX +XXX,XX @@ static void uart_event(void *opaque, QEMUChrEvent event)
36
37
/* ignore characters when unclocked or in reset */
38
if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
39
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: uart is unclocked or in reset\n",
40
+ __func__);
41
return;
42
}
43
44
@@ -XXX,XX +XXX,XX @@ static MemTxResult uart_write(void *opaque, hwaddr offset,
45
46
/* ignore access when unclocked or in reset */
47
if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
48
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: uart is unclocked or in reset\n",
49
+ __func__);
50
return MEMTX_ERROR;
51
}
52
53
@@ -XXX,XX +XXX,XX @@ static MemTxResult uart_read(void *opaque, hwaddr offset,
54
55
/* ignore access when unclocked or in reset */
56
if (!clock_is_enabled(s->refclk) || device_is_in_reset(DEVICE(s))) {
57
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: uart is unclocked or in reset\n",
58
+ __func__);
59
return MEMTX_ERROR;
60
}
61
62
--
63
2.20.1
64
65
diff view generated by jsdifflib
Deleted patch
1
From: Shashi Mallela <shashi.mallela@linaro.org>
2
1
3
Implemented LPI processing at the redistributor to get LPI config info
4
from the LPI configuration table, determine priority, set pending state in
5
the LPI pending table and forward the LPI to the cpuif. Added logic to invoke
6
redistributor LPI processing with the translated LPI, which sets/clears the LPI
7
from the ITS device as part of ITS INT, CLEAR, DISCARD command handling and
8
GITS_TRANSLATER processing.
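For reference, the table arithmetic the new code relies on can be
summarised by this small sketch (constants written out by hand for
illustration; the patch itself uses GICV3_LPI_INTID_START rather than the
literal 8192, the first LPI INTID):

    /*
     * LPI configuration (property) table: one byte per LPI, indexed
     * from the first LPI INTID; bits [7:2] of the entry hold the priority.
     */
    static uint64_t lpi_config_offset(int irq)
    {
        return (uint64_t)(irq - 8192);
    }

    static uint8_t lpi_priority(uint8_t config_entry)
    {
        return config_entry & 0xfc;
    }

    /*
     * LPI pending table: one bit per INTID from the start of the table,
     * so INTID N lives at byte N / 8, bit N % 8.
     */
    static void lpi_pending_position(int irq, uint64_t *byte, int *bit)
    {
        *byte = irq / 8;
        *bit = irq % 8;
    }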
9
10
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
11
Tested-by: Neil Armstrong <narmstrong@baylibre.com>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20210910143951.92242-7-shashi.mallela@linaro.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
16
hw/intc/gicv3_internal.h | 9 ++
17
include/hw/intc/arm_gicv3_common.h | 7 ++
18
hw/intc/arm_gicv3.c | 14 +++
19
hw/intc/arm_gicv3_common.c | 1 +
20
hw/intc/arm_gicv3_cpuif.c | 7 +-
21
hw/intc/arm_gicv3_its.c | 23 +++++
22
hw/intc/arm_gicv3_redist.c | 141 +++++++++++++++++++++++++++++
23
7 files changed, 200 insertions(+), 2 deletions(-)
24
25
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
26
index XXXXXXX..XXXXXXX 100644
27
--- a/hw/intc/gicv3_internal.h
28
+++ b/hw/intc/gicv3_internal.h
29
@@ -XXX,XX +XXX,XX @@ FIELD(GICR_PENDBASER, PHYADDR, 16, 36)
30
FIELD(GICR_PENDBASER, OUTERCACHE, 56, 3)
31
FIELD(GICR_PENDBASER, PTZ, 62, 1)
32
33
+#define GICR_PROPBASER_IDBITS_THRESHOLD 0xd
34
+
35
#define ICC_CTLR_EL1_CBPR (1U << 0)
36
#define ICC_CTLR_EL1_EOIMODE (1U << 1)
37
#define ICC_CTLR_EL1_PMHE (1U << 6)
38
@@ -XXX,XX +XXX,XX @@ FIELD(GITS_TYPER, CIL, 36, 1)
39
40
#define L1TABLE_ENTRY_SIZE 8
41
42
+#define LPI_CTE_ENABLED TABLE_ENTRY_VALID_MASK
43
+#define LPI_PRIORITY_MASK 0xfc
44
+
45
#define GITS_CMDQ_ENTRY_SIZE 32
46
#define NUM_BYTES_IN_DW 8
47
48
@@ -XXX,XX +XXX,XX @@ FIELD(MAPC, RDBASE, 16, 32)
49
* Valid = 1 bit,RDBase = 36 bits(considering max RDBASE)
50
*/
51
#define GITS_CTE_SIZE (0x8ULL)
52
+#define GITS_CTE_RDBASE_PROCNUM_MASK MAKE_64BIT_MASK(1, RDBASE_PROCNUM_LENGTH)
53
54
/* Special interrupt IDs */
55
#define INTID_SECURE 1020
56
@@ -XXX,XX +XXX,XX @@ MemTxResult gicv3_redist_write(void *opaque, hwaddr offset, uint64_t data,
57
unsigned size, MemTxAttrs attrs);
58
void gicv3_dist_set_irq(GICv3State *s, int irq, int level);
59
void gicv3_redist_set_irq(GICv3CPUState *cs, int irq, int level);
60
+void gicv3_redist_process_lpi(GICv3CPUState *cs, int irq, int level);
61
+void gicv3_redist_lpi_pending(GICv3CPUState *cs, int irq, int level);
62
+void gicv3_redist_update_lpi(GICv3CPUState *cs);
63
void gicv3_redist_send_sgi(GICv3CPUState *cs, int grp, int irq, bool ns);
64
void gicv3_init_cpuif(GICv3State *s);
65
66
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
67
index XXXXXXX..XXXXXXX 100644
68
--- a/include/hw/intc/arm_gicv3_common.h
69
+++ b/include/hw/intc/arm_gicv3_common.h
70
@@ -XXX,XX +XXX,XX @@ struct GICv3CPUState {
71
* real state above; it doesn't need to be migrated.
72
*/
73
PendingIrq hppi;
74
+
75
+ /*
76
+ * Cached information recalculated from LPI tables
77
+ * in guest memory
78
+ */
79
+ PendingIrq hpplpi;
80
+
81
/* This is temporary working state, to avoid a malloc in gicv3_update() */
82
bool seenbetter;
83
};
84
diff --git a/hw/intc/arm_gicv3.c b/hw/intc/arm_gicv3.c
85
index XXXXXXX..XXXXXXX 100644
86
--- a/hw/intc/arm_gicv3.c
87
+++ b/hw/intc/arm_gicv3.c
88
@@ -XXX,XX +XXX,XX @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
89
cs->hppi.grp = gicv3_irq_group(cs->gic, cs, cs->hppi.irq);
90
}
91
92
+ if ((cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) && cs->gic->lpi_enable &&
93
+ (cs->hpplpi.prio != 0xff)) {
94
+ if (irqbetter(cs, cs->hpplpi.irq, cs->hpplpi.prio)) {
95
+ cs->hppi.irq = cs->hpplpi.irq;
96
+ cs->hppi.prio = cs->hpplpi.prio;
97
+ cs->hppi.grp = cs->hpplpi.grp;
98
+ seenbetter = true;
99
+ }
100
+ }
101
+
102
/* If the best interrupt we just found would preempt whatever
103
* was the previous best interrupt before this update, then
104
* we know it's definitely the best one now.
105
@@ -XXX,XX +XXX,XX @@ static void gicv3_set_irq(void *opaque, int irq, int level)
106
107
static void arm_gicv3_post_load(GICv3State *s)
108
{
109
+ int i;
110
/* Recalculate our cached idea of the current highest priority
111
* pending interrupt, but don't set IRQ or FIQ lines.
112
*/
113
+ for (i = 0; i < s->num_cpu; i++) {
114
+ gicv3_redist_update_lpi(&s->cpu[i]);
115
+ }
116
gicv3_full_update_noirqset(s);
117
/* Repopulate the cache of GICv3CPUState pointers for target CPUs */
118
gicv3_cache_all_target_cpustates(s);
119
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
120
index XXXXXXX..XXXXXXX 100644
121
--- a/hw/intc/arm_gicv3_common.c
122
+++ b/hw/intc/arm_gicv3_common.c
123
@@ -XXX,XX +XXX,XX @@ static void arm_gicv3_common_reset(DeviceState *dev)
124
memset(cs->gicr_ipriorityr, 0, sizeof(cs->gicr_ipriorityr));
125
126
cs->hppi.prio = 0xff;
127
+ cs->hpplpi.prio = 0xff;
128
129
/* State in the CPU interface must *not* be reset here, because it
130
* is part of the CPU's reset domain, not the GIC device's.
131
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
132
index XXXXXXX..XXXXXXX 100644
133
--- a/hw/intc/arm_gicv3_cpuif.c
134
+++ b/hw/intc/arm_gicv3_cpuif.c
135
@@ -XXX,XX +XXX,XX @@ static void icc_activate_irq(GICv3CPUState *cs, int irq)
136
cs->gicr_iactiver0 = deposit32(cs->gicr_iactiver0, irq, 1, 1);
137
cs->gicr_ipendr0 = deposit32(cs->gicr_ipendr0, irq, 1, 0);
138
gicv3_redist_update(cs);
139
- } else {
140
+ } else if (irq < GICV3_LPI_INTID_START) {
141
gicv3_gicd_active_set(cs->gic, irq);
142
gicv3_gicd_pending_clear(cs->gic, irq);
143
gicv3_update(cs->gic, irq, 1);
144
+ } else {
145
+ gicv3_redist_lpi_pending(cs, irq, 0);
146
}
147
}
148
149
@@ -XXX,XX +XXX,XX @@ static void icc_eoir_write(CPUARMState *env, const ARMCPRegInfo *ri,
150
trace_gicv3_icc_eoir_write(is_eoir0 ? 0 : 1,
151
gicv3_redist_affid(cs), value);
152
153
- if (irq >= cs->gic->num_irq) {
154
+ if ((irq >= cs->gic->num_irq) &&
155
+ !(cs->gic->lpi_enable && (irq >= GICV3_LPI_INTID_START))) {
156
/* This handles two cases:
157
* 1. If software writes the ID of a spurious interrupt [ie 1020-1023]
158
* to the GICC_EOIR, the GIC ignores that write.
159
diff --git a/hw/intc/arm_gicv3_its.c b/hw/intc/arm_gicv3_its.c
160
index XXXXXXX..XXXXXXX 100644
161
--- a/hw/intc/arm_gicv3_its.c
162
+++ b/hw/intc/arm_gicv3_its.c
163
@@ -XXX,XX +XXX,XX @@ static bool process_its_cmd(GICv3ITSState *s, uint64_t value, uint32_t offset,
164
uint64_t cte = 0;
165
bool cte_valid = false;
166
bool result = false;
167
+ uint64_t rdbase;
168
169
if (cmd == NONE) {
170
devid = offset;
171
@@ -XXX,XX +XXX,XX @@ static bool process_its_cmd(GICv3ITSState *s, uint64_t value, uint32_t offset,
172
* Current implementation only supports rdbase == procnum
173
* Hence rdbase physical address is ignored
174
*/
175
+ rdbase = (cte & GITS_CTE_RDBASE_PROCNUM_MASK) >> 1U;
176
+
177
+ if (rdbase > s->gicv3->num_cpu) {
178
+ return result;
179
+ }
180
+
181
+ if ((cmd == CLEAR) || (cmd == DISCARD)) {
182
+ gicv3_redist_process_lpi(&s->gicv3->cpu[rdbase], pIntid, 0);
183
+ } else {
184
+ gicv3_redist_process_lpi(&s->gicv3->cpu[rdbase], pIntid, 1);
185
+ }
186
+
187
if (cmd == DISCARD) {
188
IteEntry ite = {};
189
/* remove mapping from interrupt translation table */
190
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
191
MemTxResult res = MEMTX_OK;
192
bool result = true;
193
uint8_t cmd;
194
+ int i;
195
196
if (!(s->ctlr & ITS_CTLR_ENABLED)) {
197
return;
198
@@ -XXX,XX +XXX,XX @@ static void process_cmdq(GICv3ITSState *s)
199
break;
200
case GITS_CMD_INV:
201
case GITS_CMD_INVALL:
202
+ /*
203
+ * Current implementation doesn't cache any ITS tables,
204
+ * but the calculated lpi priority information. We only
205
+ * need to trigger lpi priority re-calculation to be in
206
+ * sync with LPI config table or pending table changes.
207
+ */
208
+ for (i = 0; i < s->gicv3->num_cpu; i++) {
209
+ gicv3_redist_update_lpi(&s->gicv3->cpu[i]);
210
+ }
211
break;
212
default:
213
break;
214
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
215
index XXXXXXX..XXXXXXX 100644
216
--- a/hw/intc/arm_gicv3_redist.c
217
+++ b/hw/intc/arm_gicv3_redist.c
218
@@ -XXX,XX +XXX,XX @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
219
if (cs->gicr_typer & GICR_TYPER_PLPIS) {
220
if (value & GICR_CTLR_ENABLE_LPIS) {
221
cs->gicr_ctlr |= GICR_CTLR_ENABLE_LPIS;
222
+ /* Check for any pending interr in pending table */
223
+ gicv3_redist_update_lpi(cs);
224
+ gicv3_redist_update(cs);
225
} else {
226
cs->gicr_ctlr &= ~GICR_CTLR_ENABLE_LPIS;
227
}
228
@@ -XXX,XX +XXX,XX @@ MemTxResult gicv3_redist_write(void *opaque, hwaddr offset, uint64_t data,
229
return r;
230
}
231
232
+static void gicv3_redist_check_lpi_priority(GICv3CPUState *cs, int irq)
233
+{
234
+ AddressSpace *as = &cs->gic->dma_as;
235
+ uint64_t lpict_baddr;
236
+ uint8_t lpite;
237
+ uint8_t prio;
238
+
239
+ lpict_baddr = cs->gicr_propbaser & R_GICR_PROPBASER_PHYADDR_MASK;
240
+
241
+ address_space_read(as, lpict_baddr + ((irq - GICV3_LPI_INTID_START) *
242
+ sizeof(lpite)), MEMTXATTRS_UNSPECIFIED, &lpite,
243
+ sizeof(lpite));
244
+
245
+ if (!(lpite & LPI_CTE_ENABLED)) {
246
+ return;
247
+ }
248
+
249
+ if (cs->gic->gicd_ctlr & GICD_CTLR_DS) {
250
+ prio = lpite & LPI_PRIORITY_MASK;
251
+ } else {
252
+ prio = ((lpite & LPI_PRIORITY_MASK) >> 1) | 0x80;
253
+ }
254
+
255
+ if ((prio < cs->hpplpi.prio) ||
256
+ ((prio == cs->hpplpi.prio) && (irq <= cs->hpplpi.irq))) {
257
+ cs->hpplpi.irq = irq;
258
+ cs->hpplpi.prio = prio;
259
+ /* LPIs are always non-secure Grp1 interrupts */
260
+ cs->hpplpi.grp = GICV3_G1NS;
261
+ }
262
+}
263
+
264
+void gicv3_redist_update_lpi(GICv3CPUState *cs)
265
+{
266
+ /*
267
+ * This function scans the LPI pending table and for each pending
268
+ * LPI, reads the corresponding entry from LPI configuration table
269
+ * to extract the priority info and determine if the current LPI
270
+ * priority is lower than the last computed high priority lpi interrupt.
271
+ * If yes, replace current LPI as the new high priority lpi interrupt.
272
+ */
273
+ AddressSpace *as = &cs->gic->dma_as;
274
+ uint64_t lpipt_baddr;
275
+ uint32_t pendt_size = 0;
276
+ uint8_t pend;
277
+ int i, bit;
278
+ uint64_t idbits;
279
+
280
+ idbits = MIN(FIELD_EX64(cs->gicr_propbaser, GICR_PROPBASER, IDBITS),
281
+ GICD_TYPER_IDBITS);
282
+
283
+ if (!(cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) || !cs->gicr_propbaser ||
284
+ !cs->gicr_pendbaser) {
285
+ return;
286
+ }
287
+
288
+ cs->hpplpi.prio = 0xff;
289
+
290
+ lpipt_baddr = cs->gicr_pendbaser & R_GICR_PENDBASER_PHYADDR_MASK;
291
+
292
+ /* Determine the highest priority pending interrupt among LPIs */
293
+ pendt_size = (1ULL << (idbits + 1));
294
+
295
+ for (i = GICV3_LPI_INTID_START / 8; i < pendt_size / 8; i++) {
296
+ address_space_read(as, lpipt_baddr + i, MEMTXATTRS_UNSPECIFIED, &pend,
297
+ sizeof(pend));
298
+
299
+ while (pend) {
300
+ bit = ctz32(pend);
301
+ gicv3_redist_check_lpi_priority(cs, i * 8 + bit);
302
+ pend &= ~(1 << bit);
303
+ }
304
+ }
305
+}
306
+
307
+void gicv3_redist_lpi_pending(GICv3CPUState *cs, int irq, int level)
308
+{
309
+ /*
310
+ * This function updates the pending bit in lpi pending table for
311
+ * the irq being activated or deactivated.
312
+ */
313
+ AddressSpace *as = &cs->gic->dma_as;
314
+ uint64_t lpipt_baddr;
315
+ bool ispend = false;
316
+ uint8_t pend;
317
+
318
+ /*
319
+ * get the bit value corresponding to this irq in the
320
+ * lpi pending table
321
+ */
322
+ lpipt_baddr = cs->gicr_pendbaser & R_GICR_PENDBASER_PHYADDR_MASK;
323
+
324
+ address_space_read(as, lpipt_baddr + ((irq / 8) * sizeof(pend)),
325
+ MEMTXATTRS_UNSPECIFIED, &pend, sizeof(pend));
326
+
327
+ ispend = extract32(pend, irq % 8, 1);
328
+
329
+ /* no change in the value of pending bit, return */
330
+ if (ispend == level) {
331
+ return;
332
+ }
333
+ pend = deposit32(pend, irq % 8, 1, level ? 1 : 0);
334
+
335
+ address_space_write(as, lpipt_baddr + ((irq / 8) * sizeof(pend)),
336
+ MEMTXATTRS_UNSPECIFIED, &pend, sizeof(pend));
337
+
338
+ /*
339
+ * check if this LPI is better than the current hpplpi, if yes
340
+ * just set hpplpi.prio and .irq without doing a full rescan
341
+ */
342
+ if (level) {
343
+ gicv3_redist_check_lpi_priority(cs, irq);
344
+ } else {
345
+ if (irq == cs->hpplpi.irq) {
346
+ gicv3_redist_update_lpi(cs);
347
+ }
348
+ }
349
+}
350
+
351
+void gicv3_redist_process_lpi(GICv3CPUState *cs, int irq, int level)
352
+{
353
+ uint64_t idbits;
354
+
355
+ idbits = MIN(FIELD_EX64(cs->gicr_propbaser, GICR_PROPBASER, IDBITS),
356
+ GICD_TYPER_IDBITS);
357
+
358
+ if (!(cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) || !cs->gicr_propbaser ||
359
+ !cs->gicr_pendbaser || (irq > (1ULL << (idbits + 1)) - 1) ||
360
+ irq < GICV3_LPI_INTID_START) {
361
+ return;
362
+ }
363
+
364
+ /* set/clear the pending bit for this irq */
365
+ gicv3_redist_lpi_pending(cs, irq, level);
366
+
367
+ gicv3_redist_update(cs);
368
+}
369
+
370
void gicv3_redist_set_irq(GICv3CPUState *cs, int irq, int level)
371
{
372
/* Update redistributor state for a change in an external PPI input line */
373
--
374
2.20.1
375
376
diff view generated by jsdifflib
Deleted patch
1
From: Shashi Mallela <shashi.mallela@linaro.org>
2
1
3
Added expected IORT files applicable with the latest GICv3
4
ITS changes. Temporary differences in these files are
5
okay.
6
7
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
8
Acked-by: Igor Mammedov <imammedo@redhat.com>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20210910143951.92242-8-shashi.mallela@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
tests/qtest/bios-tables-test-allowed-diff.h | 4 ++++
14
tests/data/acpi/virt/IORT | 0
15
tests/data/acpi/virt/IORT.memhp | 0
16
tests/data/acpi/virt/IORT.numamem | 0
17
tests/data/acpi/virt/IORT.pxb | 0
18
5 files changed, 4 insertions(+)
19
create mode 100644 tests/data/acpi/virt/IORT
20
create mode 100644 tests/data/acpi/virt/IORT.memhp
21
create mode 100644 tests/data/acpi/virt/IORT.numamem
22
create mode 100644 tests/data/acpi/virt/IORT.pxb
23
24
diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
25
index XXXXXXX..XXXXXXX 100644
26
--- a/tests/qtest/bios-tables-test-allowed-diff.h
27
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
28
@@ -1 +1,5 @@
29
/* List of comma-separated changed AML files to ignore */
30
+"tests/data/acpi/virt/IORT",
31
+"tests/data/acpi/virt/IORT.memhp",
32
+"tests/data/acpi/virt/IORT.numamem",
33
+"tests/data/acpi/virt/IORT.pxb",
34
diff --git a/tests/data/acpi/virt/IORT b/tests/data/acpi/virt/IORT
35
new file mode 100644
36
index XXXXXXX..XXXXXXX
37
diff --git a/tests/data/acpi/virt/IORT.memhp b/tests/data/acpi/virt/IORT.memhp
38
new file mode 100644
39
index XXXXXXX..XXXXXXX
40
diff --git a/tests/data/acpi/virt/IORT.numamem b/tests/data/acpi/virt/IORT.numamem
41
new file mode 100644
42
index XXXXXXX..XXXXXXX
43
diff --git a/tests/data/acpi/virt/IORT.pxb b/tests/data/acpi/virt/IORT.pxb
44
new file mode 100644
45
index XXXXXXX..XXXXXXX
46
--
47
2.20.1
48
49
diff view generated by jsdifflib
Deleted patch
1
From: Shashi Mallela <shashi.mallela@linaro.org>
2
1
3
Updated expected IORT files applicable with the latest GICv3
4
ITS changes.
5
6
Full diff of new file disassembly:
7
8
/*
9
* Intel ACPI Component Architecture
10
* AML/ASL+ Disassembler version 20180629 (64-bit version)
11
* Copyright (c) 2000 - 2018 Intel Corporation
12
*
13
* Disassembly of tests/data/acpi/virt/IORT.pxb, Tue Jun 29 17:35:38 2021
14
*
15
* ACPI Data Table [IORT]
16
*
17
* Format: [HexOffset DecimalOffset ByteLength] FieldName : FieldValue
18
*/
19
20
[000h 0000 4] Signature : "IORT" [IO Remapping Table]
21
[004h 0004 4] Table Length : 0000007C
22
[008h 0008 1] Revision : 00
23
[009h 0009 1] Checksum : 07
24
[00Ah 0010 6] Oem ID : "BOCHS "
25
[010h 0016 8] Oem Table ID : "BXPC "
26
[018h 0024 4] Oem Revision : 00000001
27
[01Ch 0028 4] Asl Compiler ID : "BXPC"
28
[020h 0032 4] Asl Compiler Revision : 00000001
29
30
[024h 0036 4] Node Count : 00000002
31
[028h 0040 4] Node Offset : 00000030
32
[02Ch 0044 4] Reserved : 00000000
33
34
[030h 0048 1] Type : 00
35
[031h 0049 2] Length : 0018
36
[033h 0051 1] Revision : 00
37
[034h 0052 4] Reserved : 00000000
38
[038h 0056 4] Mapping Count : 00000000
39
[03Ch 0060 4] Mapping Offset : 00000000
40
41
[040h 0064 4] ItsCount : 00000001
42
[044h 0068 4] Identifiers : 00000000
43
44
[048h 0072 1] Type : 02
45
[049h 0073 2] Length : 0034
46
[04Bh 0075 1] Revision : 00
47
[04Ch 0076 4] Reserved : 00000000
48
[050h 0080 4] Mapping Count : 00000001
49
[054h 0084 4] Mapping Offset : 00000020
50
51
[058h 0088 8] Memory Properties : [IORT Memory Access Properties]
52
[058h 0088 4] Cache Coherency : 00000001
53
[05Ch 0092 1] Hints (decoded below) : 00
54
Transient : 0
55
Write Allocate : 0
56
Read Allocate : 0
57
Override : 0
58
[05Dh 0093 2] Reserved : 0000
59
[05Fh 0095 1] Memory Flags (decoded below) : 03
60
Coherency : 1
61
Device Attribute : 1
62
[060h 0096 4] ATS Attribute : 00000000
63
[064h 0100 4] PCI Segment Number : 00000000
64
[068h 0104 1] Memory Size Limit : 00
65
[069h 0105 3] Reserved : 000000
66
67
[068h 0104 4] Input base : 00000000
68
[06Ch 0108 4] ID Count : 0000FFFF
69
[070h 0112 4] Output Base : 00000000
70
[074h 0116 4] Output Reference : 00000030
71
[078h 0120 4] Flags (decoded below) : 00000000
72
Single Mapping : 0
73
74
Raw Table Data: Length 124 (0x7C)
75
76
0000: 49 4F 52 54 7C 00 00 00 00 07 42 4F 43 48 53 20 // IORT|.....BOCHS
77
0010: 42 58 50 43 20 20 20 20 01 00 00 00 42 58 50 43 // BXPC ....BXPC
78
0020: 01 00 00 00 02 00 00 00 30 00 00 00 00 00 00 00 // ........0.......
79
0030: 00 18 00 00 00 00 00 00 00 00 00 00 00 00 00 00 // ................
80
0040: 01 00 00 00 00 00 00 00 02 34 00 00 00 00 00 00 // .........4......
81
0050: 01 00 00 00 20 00 00 00 01 00 00 00 00 00 00 03 // .... ...........
82
0060: 00 00 00 00 00 00 00 00 00 00 00 00 FF FF 00 00 // ................
83
0070: 00 00 00 00 30 00 00 00 00 00 00 00 // ....0.......
84
85
Signed-off-by: Shashi Mallela <shashi.mallela@linaro.org>
86
Acked-by: Igor Mammedov <imammedo@redhat.com>
87
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
88
Message-id: 20210910143951.92242-10-shashi.mallela@linaro.org
89
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
90
---
91
tests/qtest/bios-tables-test-allowed-diff.h | 4 ----
92
tests/data/acpi/virt/IORT | Bin 0 -> 124 bytes
93
tests/data/acpi/virt/IORT.memhp | Bin 0 -> 124 bytes
94
tests/data/acpi/virt/IORT.numamem | Bin 0 -> 124 bytes
95
tests/data/acpi/virt/IORT.pxb | Bin 0 -> 124 bytes
96
5 files changed, 4 deletions(-)
97
98
diff --git a/tests/qtest/bios-tables-test-allowed-diff.h b/tests/qtest/bios-tables-test-allowed-diff.h
99
index XXXXXXX..XXXXXXX 100644
100
--- a/tests/qtest/bios-tables-test-allowed-diff.h
101
+++ b/tests/qtest/bios-tables-test-allowed-diff.h
102
@@ -1,5 +1 @@
103
/* List of comma-separated changed AML files to ignore */
104
-"tests/data/acpi/virt/IORT",
105
-"tests/data/acpi/virt/IORT.memhp",
106
-"tests/data/acpi/virt/IORT.numamem",
107
-"tests/data/acpi/virt/IORT.pxb",
108
diff --git a/tests/data/acpi/virt/IORT b/tests/data/acpi/virt/IORT
109
index XXXXXXX..XXXXXXX 100644
110
GIT binary patch
111
literal 124
112
zcmebD4+^Pa00MR=e`k+i1*eDrX9XZ&1PX!JAesq?4S*O7Bw!2(4Uz`|CKCt^;wu0#
113
QRGb+i3L*dhhtM#y0PN=p0RR91
114
115
literal 0
116
HcmV?d00001
117
118
diff --git a/tests/data/acpi/virt/IORT.memhp b/tests/data/acpi/virt/IORT.memhp
119
index XXXXXXX..XXXXXXX 100644
120
GIT binary patch
121
literal 124
122
zcmebD4+^Pa00MR=e`k+i1*eDrX9XZ&1PX!JAesq?4S*O7Bw!2(4Uz`|CKCt^;wu0#
123
QRGb+i3L*dhhtM#y0PN=p0RR91
124
125
literal 0
126
HcmV?d00001
127
128
diff --git a/tests/data/acpi/virt/IORT.numamem b/tests/data/acpi/virt/IORT.numamem
129
index XXXXXXX..XXXXXXX 100644
130
GIT binary patch
131
literal 124
132
zcmebD4+^Pa00MR=e`k+i1*eDrX9XZ&1PX!JAesq?4S*O7Bw!2(4Uz`|CKCt^;wu0#
133
QRGb+i3L*dhhtM#y0PN=p0RR91
134
135
literal 0
136
HcmV?d00001
137
138
diff --git a/tests/data/acpi/virt/IORT.pxb b/tests/data/acpi/virt/IORT.pxb
139
index XXXXXXX..XXXXXXX 100644
140
GIT binary patch
141
literal 124
142
zcmebD4+^Pa00MR=e`k+i1*eDrX9XZ&1PX!JAesq?4S*O7Bw!2(4Uz`|CKCt^;wu0#
143
QRGb+i3L*dhhtM#y0PN=p0RR91
144
145
literal 0
146
HcmV?d00001
147
148
--
149
2.20.1
150
151
diff view generated by jsdifflib
Deleted patch
1
By default, QEMU will allow devices to be plugged into a bus up to
2
the bus class's device count limit. If the user creates a device on
3
the command line or via the monitor and doesn't explicitly specify
4
the bus to plug it into, QEMU will plug it into the first non-full bus
5
that it finds.
6
1
7
This is fine in most cases, but some machines have multiple buses of
8
a given type, some of which are dedicated to on-board devices and
9
some of which have an externally exposed connector for user-pluggable
10
devices. One example is I2C buses.
11
12
Provide a new function qbus_mark_full() so that a machine model can
13
mark this kind of "internal only" bus as 'full' after it has created
14
all the devices that should be plugged into that bus. The "find a
15
non-full bus" algorithm will then skip the internal-only bus when
16
looking for a place to plug in user-created devices.
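The intended usage pattern for a board model is roughly the following
(controller type and address are illustrative; the real conversions are in
the later patches in this series):

    DeviceState *dev;
    BusState *i2cbus;

    dev = sysbus_create_simple(TYPE_ARM_SBCON_I2C, 0x40023000, NULL);
    i2cbus = qdev_get_child_bus(dev, "i2c");

    /* ... create any board-internal devices on i2cbus here ... */

    qbus_mark_full(i2cbus);   /* user-created devices will go elsewhere */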
17
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
20
Message-id: 20210903151435.22379-2-peter.maydell@linaro.org
21
---
22
include/hw/qdev-core.h | 24 ++++++++++++++++++++++++
23
softmmu/qdev-monitor.c | 7 ++++++-
24
2 files changed, 30 insertions(+), 1 deletion(-)
25
26
diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
27
index XXXXXXX..XXXXXXX 100644
28
--- a/include/hw/qdev-core.h
29
+++ b/include/hw/qdev-core.h
30
@@ -XXX,XX +XXX,XX @@ struct BusState {
31
HotplugHandler *hotplug_handler;
32
int max_index;
33
bool realized;
34
+ bool full;
35
int num_children;
36
37
/*
38
@@ -XXX,XX +XXX,XX @@ static inline bool qbus_is_hotpluggable(BusState *bus)
39
return bus->hotplug_handler;
40
}
41
42
+/**
43
+ * qbus_mark_full: Mark this bus as full, so no more devices can be attached
44
+ * @bus: Bus to mark as full
45
+ *
46
+ * By default, QEMU will allow devices to be plugged into a bus up
47
+ * to the bus class's device count limit. Calling this function
48
+ * marks a particular bus as full, so that no more devices can be
49
+ * plugged into it. In particular this means that the bus will not
50
+ * be considered as a candidate for plugging in devices created by
51
+ * the user on the commandline or via the monitor.
52
+ * If a machine has multiple buses of a given type, such as I2C,
53
+ * where some of those buses in the real hardware are used only for
54
+ * internal devices and some are exposed via expansion ports, you
55
+ * can use this function to mark the internal-only buses as full
56
+ * after you have created all their internal devices. Then user
57
+ * created devices will appear on the expansion-port bus where
58
+ * guest software expects them.
59
+ */
60
+static inline void qbus_mark_full(BusState *bus)
61
+{
62
+ bus->full = true;
63
+}
64
+
65
void device_listener_register(DeviceListener *listener);
66
void device_listener_unregister(DeviceListener *listener);
67
68
diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
69
index XXXXXXX..XXXXXXX 100644
70
--- a/softmmu/qdev-monitor.c
71
+++ b/softmmu/qdev-monitor.c
72
@@ -XXX,XX +XXX,XX @@ static DeviceState *qbus_find_dev(BusState *bus, char *elem)
73
74
static inline bool qbus_is_full(BusState *bus)
75
{
76
- BusClass *bus_class = BUS_GET_CLASS(bus);
77
+ BusClass *bus_class;
78
+
79
+ if (bus->full) {
80
+ return true;
81
+ }
82
+ bus_class = BUS_GET_CLASS(bus);
83
return bus_class->max_dev && bus->num_children >= bus_class->max_dev;
84
}
85
86
--
87
2.20.1
88
89
diff view generated by jsdifflib
Deleted patch
1
The mps2-tz boards use a data-driven structure to create the devices
2
that sit behind peripheral protection controllers. Currently the
3
functions which create these devices are passed an 'opaque' pointer
4
which is always the address within the machine struct of the device
5
to create, and some "all devices need this" information like irqs and
6
addresses.
7
1
8
If a specific device needs more information than this, it is
9
currently not possible to pass that through from the PPCInfo
10
data structure. Add support for passing an extra data parameter,
11
so that we can more flexibly handle the needs of specific
12
device types. To provide some type-safety we make this extra
13
parameter a pointer to a union (which initially has no members).
14
15
In particular, we would like to be able to indicate which of the
16
i2c controllers are for on-board devices only and which are
17
connected to the external 'shield' expansion port; a subsequent
18
patch will use this mechanism for that purpose.
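The general pattern, reduced to plain standalone C for illustration (the
names here are invented, not QEMU APIs): a table of port descriptors where
each entry can carry optional device-specific data in a union, so the
common fields stay uniform and only the devices that need extras use them.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef union ExtraData {
        bool i2c_internal;
    } ExtraData;

    typedef struct PortInfo {
        const char *name;
        void (*create)(const char *name, const ExtraData *extra);
        ExtraData extra;
    } PortInfo;

    static void make_i2c(const char *name, const ExtraData *extra)
    {
        printf("%s: internal=%d\n", name, extra->i2c_internal);
    }

    int main(void)
    {
        static const PortInfo ports[] = {
            { "i2c0", make_i2c, { .i2c_internal = true } },
            { "i2c2", make_i2c, { .i2c_internal = false } },
        };

        for (size_t i = 0; i < sizeof(ports) / sizeof(ports[0]); i++) {
            ports[i].create(ports[i].name, &ports[i].extra);
        }
        return 0;
    }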
19
20
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
22
Message-id: 20210903151435.22379-3-peter.maydell@linaro.org
23
---
24
hw/arm/mps2-tz.c | 35 ++++++++++++++++++++++-------------
25
1 file changed, 22 insertions(+), 13 deletions(-)
26
27
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
28
index XXXXXXX..XXXXXXX 100644
29
--- a/hw/arm/mps2-tz.c
30
+++ b/hw/arm/mps2-tz.c
31
@@ -XXX,XX +XXX,XX @@ static qemu_irq get_sse_irq_in(MPS2TZMachineState *mms, int irqno)
32
}
33
}
34
35
+/* Union describing the device-specific extra data we pass to the devfn. */
36
+typedef union PPCExtraData {
37
+} PPCExtraData;
38
+
39
/* Most of the devices in the AN505 FPGA image sit behind
40
* Peripheral Protection Controllers. These data structures
41
* define the layout of which devices sit behind which PPCs.
42
@@ -XXX,XX +XXX,XX @@ static qemu_irq get_sse_irq_in(MPS2TZMachineState *mms, int irqno)
43
*/
44
typedef MemoryRegion *MakeDevFn(MPS2TZMachineState *mms, void *opaque,
45
const char *name, hwaddr size,
46
- const int *irqs);
47
+ const int *irqs,
48
+ const PPCExtraData *extradata);
49
50
typedef struct PPCPortInfo {
51
const char *name;
52
@@ -XXX,XX +XXX,XX @@ typedef struct PPCPortInfo {
53
hwaddr addr;
54
hwaddr size;
55
int irqs[3]; /* currently no device needs more IRQ lines than this */
56
+ PPCExtraData extradata; /* to pass device-specific info to the devfn */
57
} PPCPortInfo;
58
59
typedef struct PPCInfo {
60
@@ -XXX,XX +XXX,XX @@ typedef struct PPCInfo {
61
static MemoryRegion *make_unimp_dev(MPS2TZMachineState *mms,
62
void *opaque,
63
const char *name, hwaddr size,
64
- const int *irqs)
65
+ const int *irqs,
66
+ const PPCExtraData *extradata)
67
{
68
/* Initialize, configure and realize a TYPE_UNIMPLEMENTED_DEVICE,
69
* and return a pointer to its MemoryRegion.
70
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_unimp_dev(MPS2TZMachineState *mms,
71
72
static MemoryRegion *make_uart(MPS2TZMachineState *mms, void *opaque,
73
const char *name, hwaddr size,
74
- const int *irqs)
75
+ const int *irqs, const PPCExtraData *extradata)
76
{
77
/* The irq[] array is tx, rx, combined, in that order */
78
MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
79
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_uart(MPS2TZMachineState *mms, void *opaque,
80
81
static MemoryRegion *make_scc(MPS2TZMachineState *mms, void *opaque,
82
const char *name, hwaddr size,
83
- const int *irqs)
84
+ const int *irqs, const PPCExtraData *extradata)
85
{
86
MPS2SCC *scc = opaque;
87
DeviceState *sccdev;
88
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_scc(MPS2TZMachineState *mms, void *opaque,
89
90
static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
91
const char *name, hwaddr size,
92
- const int *irqs)
93
+ const int *irqs, const PPCExtraData *extradata)
94
{
95
MPS2FPGAIO *fpgaio = opaque;
96
MPS2TZMachineClass *mmc = MPS2TZ_MACHINE_GET_CLASS(mms);
97
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_fpgaio(MPS2TZMachineState *mms, void *opaque,
98
99
static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
100
const char *name, hwaddr size,
101
- const int *irqs)
102
+ const int *irqs,
103
+ const PPCExtraData *extradata)
104
{
105
SysBusDevice *s;
106
NICInfo *nd = &nd_table[0];
107
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_eth_dev(MPS2TZMachineState *mms, void *opaque,
108
109
static MemoryRegion *make_eth_usb(MPS2TZMachineState *mms, void *opaque,
110
const char *name, hwaddr size,
111
- const int *irqs)
112
+ const int *irqs,
113
+ const PPCExtraData *extradata)
114
{
115
/*
116
* The AN524 makes the ethernet and USB share a PPC port.
117
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_eth_usb(MPS2TZMachineState *mms, void *opaque,
118
119
static MemoryRegion *make_mpc(MPS2TZMachineState *mms, void *opaque,
120
const char *name, hwaddr size,
121
- const int *irqs)
122
+ const int *irqs, const PPCExtraData *extradata)
123
{
124
TZMPC *mpc = opaque;
125
int i = mpc - &mms->mpc[0];
126
@@ -XXX,XX +XXX,XX @@ static void remap_irq_fn(void *opaque, int n, int level)
127
128
static MemoryRegion *make_dma(MPS2TZMachineState *mms, void *opaque,
129
const char *name, hwaddr size,
130
- const int *irqs)
131
+ const int *irqs, const PPCExtraData *extradata)
132
{
133
/* The irq[] array is DMACINTR, DMACINTERR, DMACINTTC, in that order */
134
PL080State *dma = opaque;
135
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_dma(MPS2TZMachineState *mms, void *opaque,
136
137
static MemoryRegion *make_spi(MPS2TZMachineState *mms, void *opaque,
138
const char *name, hwaddr size,
139
- const int *irqs)
140
+ const int *irqs, const PPCExtraData *extradata)
141
{
142
/*
143
* The AN505 has five PL022 SPI controllers.
144
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_spi(MPS2TZMachineState *mms, void *opaque,
145
146
static MemoryRegion *make_i2c(MPS2TZMachineState *mms, void *opaque,
147
const char *name, hwaddr size,
148
- const int *irqs)
149
+ const int *irqs, const PPCExtraData *extradata)
150
{
151
ArmSbconI2CState *i2c = opaque;
152
SysBusDevice *s;
153
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_i2c(MPS2TZMachineState *mms, void *opaque,
154
155
static MemoryRegion *make_rtc(MPS2TZMachineState *mms, void *opaque,
156
const char *name, hwaddr size,
157
- const int *irqs)
158
+ const int *irqs, const PPCExtraData *extradata)
159
{
160
PL031State *pl031 = opaque;
161
SysBusDevice *s;
162
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
163
}
164
165
mr = pinfo->devfn(mms, pinfo->opaque, pinfo->name, pinfo->size,
166
- pinfo->irqs);
167
+ pinfo->irqs, &pinfo->extradata);
168
portname = g_strdup_printf("port[%d]", port);
169
object_property_set_link(OBJECT(ppc), portname, OBJECT(mr),
170
&error_fatal);
171
--
172
2.20.1
173
174
diff view generated by jsdifflib
Deleted patch
1
The various MPS2 boards have multiple I2C buses: typically a bus
2
dedicated to the audio configuration, one for the LCD touchscreen
3
controller, one for a DDR4 EEPROM, and two which are connected to the
4
external Shield expansion connector. Mark the buses which are used
5
only for board-internal devices as 'full' so that if the user creates
6
i2c devices on the commandline without specifying a bus name then
7
they will be connected to the I2C controller used for the Shield
8
connector, where guest software will expect them.
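For example, after this change a command line along these lines (device
model and address purely illustrative) ends up with the sensor on one of
the Shield-connector buses rather than on the audio or touchscreen bus:

    qemu-system-arm -M mps2-an505 -device tmp105,address=0x48 ...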
9
1
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20210903151435.22379-4-peter.maydell@linaro.org
13
---
14
hw/arm/mps2-tz.c | 57 ++++++++++++++++++++++++++++++++++++------------
15
1 file changed, 43 insertions(+), 14 deletions(-)
16
17
diff --git a/hw/arm/mps2-tz.c b/hw/arm/mps2-tz.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/mps2-tz.c
20
+++ b/hw/arm/mps2-tz.c
21
@@ -XXX,XX +XXX,XX @@ static qemu_irq get_sse_irq_in(MPS2TZMachineState *mms, int irqno)
22
23
/* Union describing the device-specific extra data we pass to the devfn. */
24
typedef union PPCExtraData {
25
+ bool i2c_internal;
26
} PPCExtraData;
27
28
/* Most of the devices in the AN505 FPGA image sit behind
29
@@ -XXX,XX +XXX,XX @@ static MemoryRegion *make_i2c(MPS2TZMachineState *mms, void *opaque,
30
object_initialize_child(OBJECT(mms), name, i2c, TYPE_ARM_SBCON_I2C);
31
s = SYS_BUS_DEVICE(i2c);
32
sysbus_realize(s, &error_fatal);
33
+
34
+ /*
35
+ * If this is an internal-use-only i2c bus, mark it full
36
+ * so that user-created i2c devices are not plugged into it.
37
+ * If we implement models of any on-board i2c devices that
38
+ * plug in to one of the internal-use-only buses, then we will
39
+ * need to create and plugging those in here before we mark the
40
+ * bus as full.
41
+ */
42
+ if (extradata->i2c_internal) {
43
+ BusState *qbus = qdev_get_child_bus(DEVICE(i2c), "i2c");
44
+ qbus_mark_full(qbus);
45
+ }
46
+
47
return sysbus_mmio_get_region(s, 0);
48
}
49
50
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
51
{ "uart2", make_uart, &mms->uart[2], 0x40202000, 0x1000, { 36, 37, 44 } },
52
{ "uart3", make_uart, &mms->uart[3], 0x40203000, 0x1000, { 38, 39, 45 } },
53
{ "uart4", make_uart, &mms->uart[4], 0x40204000, 0x1000, { 40, 41, 46 } },
54
- { "i2c0", make_i2c, &mms->i2c[0], 0x40207000, 0x1000 },
55
- { "i2c1", make_i2c, &mms->i2c[1], 0x40208000, 0x1000 },
56
- { "i2c2", make_i2c, &mms->i2c[2], 0x4020c000, 0x1000 },
57
- { "i2c3", make_i2c, &mms->i2c[3], 0x4020d000, 0x1000 },
58
+ { "i2c0", make_i2c, &mms->i2c[0], 0x40207000, 0x1000, {},
59
+ { .i2c_internal = true /* touchscreen */ } },
60
+ { "i2c1", make_i2c, &mms->i2c[1], 0x40208000, 0x1000, {},
61
+ { .i2c_internal = true /* audio conf */ } },
62
+ { "i2c2", make_i2c, &mms->i2c[2], 0x4020c000, 0x1000, {},
63
+ { .i2c_internal = false /* shield 0 */ } },
64
+ { "i2c3", make_i2c, &mms->i2c[3], 0x4020d000, 0x1000, {},
65
+ { .i2c_internal = false /* shield 1 */ } },
66
},
67
}, {
68
.name = "apb_ppcexp2",
69
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
70
}, {
71
.name = "apb_ppcexp1",
72
.ports = {
73
- { "i2c0", make_i2c, &mms->i2c[0], 0x41200000, 0x1000 },
74
- { "i2c1", make_i2c, &mms->i2c[1], 0x41201000, 0x1000 },
75
+ { "i2c0", make_i2c, &mms->i2c[0], 0x41200000, 0x1000, {},
76
+ { .i2c_internal = true /* touchscreen */ } },
77
+ { "i2c1", make_i2c, &mms->i2c[1], 0x41201000, 0x1000, {},
78
+ { .i2c_internal = true /* audio conf */ } },
79
{ "spi0", make_spi, &mms->spi[0], 0x41202000, 0x1000, { 52 } },
80
{ "spi1", make_spi, &mms->spi[1], 0x41203000, 0x1000, { 53 } },
81
{ "spi2", make_spi, &mms->spi[2], 0x41204000, 0x1000, { 54 } },
82
- { "i2c2", make_i2c, &mms->i2c[2], 0x41205000, 0x1000 },
83
- { "i2c3", make_i2c, &mms->i2c[3], 0x41206000, 0x1000 },
84
+ { "i2c2", make_i2c, &mms->i2c[2], 0x41205000, 0x1000, {},
85
+ { .i2c_internal = false /* shield 0 */ } },
86
+ { "i2c3", make_i2c, &mms->i2c[3], 0x41206000, 0x1000, {},
87
+ { .i2c_internal = false /* shield 1 */ } },
88
{ /* port 7 reserved */ },
89
- { "i2c4", make_i2c, &mms->i2c[4], 0x41208000, 0x1000 },
90
+ { "i2c4", make_i2c, &mms->i2c[4], 0x41208000, 0x1000, {},
91
+ { .i2c_internal = true /* DDR4 EEPROM */ } },
92
},
93
}, {
94
.name = "apb_ppcexp2",
95
@@ -XXX,XX +XXX,XX @@ static void mps2tz_common_init(MachineState *machine)
96
}, {
97
.name = "apb_ppcexp1",
98
.ports = {
99
- { "i2c0", make_i2c, &mms->i2c[0], 0x49200000, 0x1000 },
100
- { "i2c1", make_i2c, &mms->i2c[1], 0x49201000, 0x1000 },
101
+ { "i2c0", make_i2c, &mms->i2c[0], 0x49200000, 0x1000, {},
102
+ { .i2c_internal = true /* touchscreen */ } },
103
+ { "i2c1", make_i2c, &mms->i2c[1], 0x49201000, 0x1000, {},
104
+ { .i2c_internal = true /* audio conf */ } },
105
{ "spi0", make_spi, &mms->spi[0], 0x49202000, 0x1000, { 53 } },
106
{ "spi1", make_spi, &mms->spi[1], 0x49203000, 0x1000, { 54 } },
107
{ "spi2", make_spi, &mms->spi[2], 0x49204000, 0x1000, { 55 } },
108
- { "i2c2", make_i2c, &mms->i2c[2], 0x49205000, 0x1000 },
109
- { "i2c3", make_i2c, &mms->i2c[3], 0x49206000, 0x1000 },
110
+ { "i2c2", make_i2c, &mms->i2c[2], 0x49205000, 0x1000, {},
111
+ { .i2c_internal = false /* shield 0 */ } },
112
+ { "i2c3", make_i2c, &mms->i2c[3], 0x49206000, 0x1000, {},
113
+ { .i2c_internal = false /* shield 1 */ } },
114
{ /* port 7 reserved */ },
115
- { "i2c4", make_i2c, &mms->i2c[4], 0x49208000, 0x1000 },
116
+ { "i2c4", make_i2c, &mms->i2c[4], 0x49208000, 0x1000, {},
117
+ { .i2c_internal = true /* DDR4 EEPROM */ } },
118
},
119
}, {
120
.name = "apb_ppcexp2",
121
--
122
2.20.1
123
124
diff view generated by jsdifflib
Deleted patch
1
The various MPS2 boards implemented in mps2.c have multiple I2C
2
buses: a bus dedicated to the audio configuration, one for the LCD
3
touchscreen controller, and two which are connected to the external
4
Shield expansion connector. Mark the buses which are used only for
5
board-internal devices as 'full' so that if the user creates i2c
6
devices on the commandline without specifying a bus name then they
7
will be connected to the I2C controller used for the Shield
8
connector, where guest software will expect them.
9
1
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
12
Message-id: 20210903151435.22379-5-peter.maydell@linaro.org
13
---
14
hw/arm/mps2.c | 12 +++++++++++-
15
1 file changed, 11 insertions(+), 1 deletion(-)
16
17
diff --git a/hw/arm/mps2.c b/hw/arm/mps2.c
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/arm/mps2.c
20
+++ b/hw/arm/mps2.c
21
@@ -XXX,XX +XXX,XX @@ static void mps2_common_init(MachineState *machine)
22
0x40023000, /* Audio */
23
0x40029000, /* Shield0 */
24
0x4002a000}; /* Shield1 */
25
- sysbus_create_simple(TYPE_ARM_SBCON_I2C, i2cbase[i], NULL);
26
+ DeviceState *dev;
27
+
28
+ dev = sysbus_create_simple(TYPE_ARM_SBCON_I2C, i2cbase[i], NULL);
29
+ if (i < 2) {
30
+ /*
31
+ * internal-only bus: mark it full to avoid user-created
32
+ * i2c devices being plugged into it.
33
+ */
34
+ BusState *qbus = qdev_get_child_bus(dev, "i2c");
35
+ qbus_mark_full(qbus);
36
+ }
37
}
38
create_unimplemented_device("i2s", 0x40024000, 0x400);
39
40
--
41
2.20.1
42
43
diff view generated by jsdifflib