First pullreq of the 3.1 release cycle, with lots of
Arm related patches accumulated during freeze. Most
notable here is Luc's GICv2 virtualization support and
my execute-from-MMIO patches.

I stopped looking at my to-review queue towards the
end of freeze, since 45 patches is already pushing what
I consider a reasonable sized pullreq; once this goes into
master I'll start working through it again.

thanks
-- PMM

The following changes since commit 38441756b70eec5807b5f60dad11a93a91199866:

  Update version for v3.0.0 release (2018-08-14 16:38:43 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180814

for you to fetch changes up to 054e7adf4e64e4acb3b033348ebf7cc871baa34f:

  target/arm: Fix typo in helper_sve_movz_d (2018-08-14 17:17:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement more of ARMv6-M support
 * Support direct execution from non-RAM regions;
   use this to implement execution from small (<1K) MPU regions
 * GICv2: implement the virtualization extensions
 * support a virtualization-capable GICv2 in the virt and
   xlnx-zynqmp boards
 * arm: Fix return code of arm_load_elf() so we can detect
   failure to load the file correctly
 * Implement HCR_EL2.TGE ("trap general exceptions") bit
 * Implement tailchaining for M profile cores
 * Fix bugs in SVE compare, saturating add/sub, WHILE, MOVZ

----------------------------------------------------------------
Adam Lackorzynski (1):
      arm: Fix return code of arm_load_elf

Julia Suvorova (4):
      target/arm: Forbid unprivileged mode for M Baseline
      nvic: Handle ARMv6-M SCS reserved registers
      arm: Add ARMv6-M programmer's model support
      nvic: Change NVIC to support ARMv6-M

Luc Michel (20):
      intc/arm_gic: Refactor operations on the distributor
      intc/arm_gic: Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers
      intc/arm_gic: Remove some dead code and put some functions static
      vmstate.h: Provide VMSTATE_UINT16_SUB_ARRAY
      intc/arm_gic: Add the virtualization extensions to the GIC state
      intc/arm_gic: Add virtual interface register definitions
      intc/arm_gic: Add virtualization extensions helper macros and functions
      intc/arm_gic: Refactor secure/ns access check in the CPU interface
      intc/arm_gic: Add virtualization enabled IRQ helper functions
      intc/arm_gic: Implement virtualization extensions in gic_(activate_irq|drop_prio)
      intc/arm_gic: Implement virtualization extensions in gic_acknowledge_irq
      intc/arm_gic: Implement virtualization extensions in gic_(deactivate|complete_irq)
      intc/arm_gic: Implement virtualization extensions in gic_cpu_(read|write)
      intc/arm_gic: Wire the vCPU interface
      intc/arm_gic: Implement the virtual interface registers
      intc/arm_gic: Implement gic_update_virt() function
      intc/arm_gic: Implement maintenance interrupt generation
      intc/arm_gic: Improve traces
      xlnx-zynqmp: Improve GIC wiring and MMIO mapping
      arm/virt: Add support for GICv2 virtualization extensions

Peter Maydell (16):
      accel/tcg: Pass read access type through to io_readx()
      accel/tcg: Handle get_page_addr_code() returning -1 in hashtable lookups
      accel/tcg: Handle get_page_addr_code() returning -1 in tb_check_watchpoint()
      accel/tcg: tb_gen_code(): Create single-insn TB for execution from non-RAM
      accel/tcg: Return -1 for execution from MMIO regions in get_page_addr_code()
      target/arm: Allow execution from small regions
      accel/tcg: Check whether TLB entry is RAM consistently with how we set it up
      target/arm: Mask virtual interrupts if HCR_EL2.TGE is set
      target/arm: Honour HCR_EL2.TGE and MDCR_EL2.TDE in debug register access checks
      target/arm: Honour HCR_EL2.TGE when raising synchronous exceptions
      target/arm: Provide accessor functions for HCR_EL2.{IMO, FMO, AMO}
      target/arm: Treat SCTLR_EL1.M as if it were zero when HCR_EL2.TGE is set
      target/arm: Improve exception-taken logging
      target/arm: Initialize exc_secure correctly in do_v7m_exception_exit()
      target/arm: Restore M-profile CONTROL.SPSEL before any tailchaining
      target/arm: Implement tailchaining for M profile cores

Richard Henderson (4):
      target/arm: Fix sign of sve_cmpeq_ppzw/sve_cmpne_ppzw
      target/arm: Fix typo in do_sat_addsub_64
      target/arm: Reorganize SVE WHILE
      target/arm: Fix typo in helper_sve_movz_d

 accel/tcg/softmmu_template.h     |  11 +-
 hw/intc/gic_internal.h           | 282 +++++++++--
 include/exec/exec-all.h          |   2 -
 include/hw/arm/virt.h            |   4 +-
 include/hw/arm/xlnx-zynqmp.h     |   4 +-
 include/hw/intc/arm_gic_common.h |  43 +-
 include/hw/intc/armv7m_nvic.h    |   1 +
 include/migration/vmstate.h      |   3 +
 include/qom/cpu.h                |   6 +
 target/arm/cpu.h                 |  62 ++-
 accel/tcg/cpu-exec.c             |   3 +
 accel/tcg/cputlb.c               | 111 +----
 accel/tcg/translate-all.c        |  23 +-
 exec.c                           |   6 -
 hw/arm/boot.c                    |   8 +-
 hw/arm/virt-acpi-build.c         |   6 +-
 hw/arm/virt.c                    |  52 ++-
 hw/arm/xlnx-zynqmp.c             |  92 +++-
 hw/intc/arm_gic.c                | 987 +++++++++++++++++++++++++++++++--------
 hw/intc/arm_gic_common.c         | 154 ++++--
 hw/intc/arm_gic_kvm.c            |  31 +-
 hw/intc/arm_gicv3_cpuif.c        |  19 +-
 hw/intc/armv7m_nvic.c            |  82 +++-
 memory.c                         |   3 +-
 target/arm/cpu.c                 |   4 +
 target/arm/helper.c              | 127 +++--
 target/arm/op_helper.c           |  14 +
 target/arm/sve_helper.c          |  19 +-
 target/arm/translate-sve.c       |  51 +-
 hw/intc/trace-events             |  12 +-
 30 files changed, 1724 insertions(+), 498 deletions(-)

target-arm queue, mostly SME preliminaries.

In the unlikely event we don't land the rest of SME before freeze
for 7.1 we can revert the docs/property changes included here.

-- PMM

The following changes since commit 097ccbbbaf2681df1e65542e5b7d2b2d0c66e2bc:

  Merge tag 'qemu-sparc-20220626' of https://github.com/mcayland/qemu into staging (2022-06-27 05:21:05 +0530)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20220627

for you to fetch changes up to 59e1b8a22ea9f947d038ccac784de1020f266e14:

  target/arm: Check V7VE as well as LPAE in arm_pamax (2022-06-27 11:18:17 +0100)

----------------------------------------------------------------
target-arm queue:
 * sphinx: change default language to 'en'
 * Diagnose attempts to emulate EL3 in hvf as well as kvm
 * More SME groundwork patches
 * virt: Fix calculation of physical address space size
   for v7VE CPUs (eg cortex-a15)

----------------------------------------------------------------
Alexander Graf (2):
      accel: Introduce current_accel_name()
      target/arm: Catch invalid kvm state also for hvf

Martin Liška (1):
      sphinx: change default language to 'en'

Richard Henderson (22):
      target/arm: Implement TPIDR2_EL0
      target/arm: Add SMEEXC_EL to TB flags
      target/arm: Add syn_smetrap
      target/arm: Add ARM_CP_SME
      target/arm: Add SVCR
      target/arm: Add SMCR_ELx
      target/arm: Add SMIDR_EL1, SMPRI_EL1, SMPRIMAP_EL2
      target/arm: Add PSTATE.{SM,ZA} to TB flags
      target/arm: Add the SME ZA storage to CPUARMState
      target/arm: Implement SMSTART, SMSTOP
      target/arm: Move error for sve%d property to arm_cpu_sve_finalize
      target/arm: Create ARMVQMap
      target/arm: Generalize cpu_arm_{get,set}_vq
      target/arm: Generalize cpu_arm_{get, set}_default_vec_len
      target/arm: Move arm_cpu_*_finalize to internals.h
      target/arm: Unexport aarch64_add_*_properties
      target/arm: Add cpu properties for SME
      target/arm: Introduce sve_vqm1_for_el_sm
      target/arm: Add SVL to TB flags
      target/arm: Move pred_{full, gvec}_reg_{offset, size} to translate-a64.h
      target/arm: Extend arm_pamax to more than aarch64
      target/arm: Check V7VE as well as LPAE in arm_pamax

 docs/conf.py                     |   2 +-
 docs/system/arm/cpu-features.rst |  56 ++++++++++
 include/qemu/accel.h             |   1 +
 target/arm/cpregs.h              |   5 +
 target/arm/cpu.h                 | 103 ++++++++++++++-----
 target/arm/helper-sme.h          |  21 ++++
 target/arm/helper.h              |   1 +
 target/arm/internals.h           |   4 +
 target/arm/syndrome.h            |  14 +++
 target/arm/translate-a64.h       |  38 +++++++
 target/arm/translate.h           |   6 ++
 accel/accel-common.c             |   8 ++
 hw/arm/virt.c                    |  10 +-
 softmmu/vl.c                     |   3 +-
 target/arm/cpu.c                 |  32 ++++--
 target/arm/cpu64.c               | 205 ++++++++++++++++++++++++++++---------
 target/arm/helper.c              | 213 +++++++++++++++++++++++++++++++++++++--
 target/arm/kvm64.c               |   2 +-
 target/arm/machine.c             |  34 +++++++
 target/arm/ptw.c                 |  26 +++--
 target/arm/sme_helper.c          |  61 +++++++++++
 target/arm/translate-a64.c       |  46 +++++++++
 target/arm/translate-sve.c       |  36 -------
 target/arm/meson.build           |   1 +
 24 files changed, 782 insertions(+), 146 deletions(-)
 create mode 100644 target/arm/helper-sme.h
 create mode 100644 target/arm/sme_helper.c
From: Julia Suvorova <jusual@mail.ru>

MSR handling is the only place where CONTROL.nPRIV is modified.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             write_v7m_control_spsel_for_secstate(env,
                                                  val & R_V7M_CONTROL_SPSEL_MASK,
                                                  M_REG_NS);
-            env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
-            env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+            if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+                env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
+                env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+            }
             return;
         case 0x98: /* SP_NS */
         {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             !arm_v7m_is_handler_mode(env)) {
             write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
         }
-        env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
-        env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
+            env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        }
         break;
     default:
     bad_reg:
--
2.18.0
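For readers following the M-profile changes: the reason gating the
CONTROL.nPRIV write matters is that nPRIV is what makes thread mode
unprivileged. A minimal sketch of the architectural rule (illustrative C
only, not QEMU code; the names here are invented):

    #include <stdbool.h>
    #include <stdio.h>

    struct m_state {
        bool handler_mode;   /* executing an exception handler */
        bool control_npriv;  /* CONTROL.nPRIV: 1 = unprivileged thread mode */
    };

    /* Handler mode is always privileged; thread mode privilege is
     * controlled by CONTROL.nPRIV.  On M Baseline (no Main extension)
     * the MSR write above is ignored, so nPRIV stays 0 and thread
     * mode remains privileged. */
    static bool is_privileged(const struct m_state *s)
    {
        return s->handler_mode || !s->control_npriv;
    }

    int main(void)
    {
        struct m_state s = { .handler_mode = false, .control_npriv = false };
        printf("privileged: %d\n", is_privileged(&s)); /* prints 1 */
        return 0;
    }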
From: Julia Suvorova <jusual@mail.ru>

Handle SCS reserved registers listed in ARMv6-M ARM D3.6.1.
All reserved registers are RAZ/WI. ARM_FEATURE_M_MAIN is used for the
checks, because these registers are reserved in ARMv8-M Baseline too.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/armv7m_nvic.c | 51 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return val;
     case 0xd10: /* System Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.scr[attrs.secure];
     case 0xd14: /* Configuration Control. */
         /* The BFHFNMIGN bit is the only non-banked bit; we
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return val;
     case 0xd2c: /* Hard Fault Status. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.hfsr;
     case 0xd30: /* Debug Fault Status. */
         return cpu->env.v7m.dfsr;
     case 0xd34: /* MMFAR MemManage Fault Address */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.mmfar[attrs.secure];
     case 0xd38: /* Bus Fault Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.bfar;
     case 0xd3c: /* Aux Fault Status. */
         /* TODO: Implement fault status registers. */
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         }
         break;
     case 0xd10: /* System Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         /* We don't implement deep-sleep so these bits are RAZ/WI.
          * The other bits in the register are banked.
          * QEMU's implementation ignores SEVONPEND and SLEEPONEXIT, which
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         nvic_irq_update(s);
         break;
     case 0xd2c: /* Hard Fault Status. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.hfsr &= ~value; /* W1C */
         break;
     case 0xd30: /* Debug Fault Status. */
         cpu->env.v7m.dfsr &= ~value; /* W1C */
         break;
     case 0xd34: /* Mem Manage Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.mmfar[attrs.secure] = value;
         return;
     case 0xd38: /* Bus Fault Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.bfar = value;
         return;
     case 0xd3c: /* Aux Fault Status. */
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
     case 0xf00: /* Software Triggered Interrupt Register */
     {
         int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
+
         if (excnum < s->num_irq) {
             armv7m_nvic_set_pending(s, excnum, false);
         }
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
             }
         }
         break;
-    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
+    case 0xd18: /* System Handler Priority (SHPR1) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            val = 0;
+            break;
+        }
+        /* fall through */
+    case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
         val = 0;
         for (i = 0; i < size; i++) {
             unsigned hdlidx = (offset - 0xd14) + i;
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
         }
         break;
     case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            val = 0;
+            break;
+        };
         /* The BFSR bits [15:8] are shared between security states
          * and we store them in the NS copy
          */
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
         }
         nvic_irq_update(s);
         return MEMTX_OK;
-    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
+    case 0xd18: /* System Handler Priority (SHPR1) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            return MEMTX_OK;
+        }
+        /* fall through */
+    case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
         for (i = 0; i < size; i++) {
             unsigned hdlidx = (offset - 0xd14) + i;
             int newprio = extract32(value, i * 8, 8);
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
         nvic_irq_update(s);
         return MEMTX_OK;
     case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            return MEMTX_OK;
+        }
         /* All bits are W1C, so construct 32 bit value with 0s in
          * the parts not written by the access size
          */
--
2.18.0
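The RAZ/WI behaviour used throughout the patch above can be reduced to a
very small pattern. A sketch (illustrative only, not QEMU code; in the
real patch the reserved case goes to bad_offset, which logs a guest
error and yields zero):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool have_main_extension; /* ARM_FEATURE_M_MAIN in the real code */
    static uint32_t mmfar;           /* backing storage for one register */

    /* RAZ: reads of a register reserved on this core return zero. */
    static uint32_t reg_read(void)
    {
        return have_main_extension ? mmfar : 0;
    }

    /* WI: writes to a register reserved on this core are ignored. */
    static void reg_write(uint32_t value)
    {
        if (have_main_extension) {
            mmfar = value;
        }
    }

    int main(void)
    {
        reg_write(0x1234);
        printf("0x%x\n", reg_read()); /* 0x0: register reserved on v6-M */
        return 0;
    }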
From: Adam Lackorzynski <adam@l4re.org>

Use an int64_t as a return type to restore
the negative check for arm_load_as.

Signed-off-by: Adam Lackorzynski <adam@l4re.org>
Message-id: 20180730173712.GG4987@os.inf.tu-dresden.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/boot.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -XXX,XX +XXX,XX @@ static int do_arm_linux_init(Object *obj, void *opaque)
     return 0;
 }
 
-static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
-                             uint64_t *lowaddr, uint64_t *highaddr,
-                             int elf_machine, AddressSpace *as)
+static int64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
+                            uint64_t *lowaddr, uint64_t *highaddr,
+                            int elf_machine, AddressSpace *as)
 {
     bool elf_is64;
     union {
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
     } elf_header;
     int data_swab = 0;
     bool big_endian;
-    uint64_t ret = -1;
+    int64_t ret = -1;
     Error *err = NULL;
 
--
2.18.0

From: Martin Liška <mliska@suse.cz>

Fixes the following Sphinx warning (treated as error) starting
with 5.0 release:

Warning, treated as error:
Invalid configuration value found: 'language = None'. Update your configuration to a valid language code. Falling back to 'en' (English).

Signed-off-by: Martin Liska <mliska@suse.cz>
Message-id: e91e51ee-48ac-437e-6467-98b56ee40042@suse.cz
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 docs/conf.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/conf.py b/docs/conf.py
index XXXXXXX..XXXXXXX 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -XXX,XX +XXX,XX @@
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = None
+language = 'en'
 
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
--
2.25.1
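The type change in the arm_load_elf patch above is subtle enough to be
worth a standalone demonstration: with an unsigned return type, a
caller's "ret < 0" failure check can never fire, because -1 converts to
UINT64_MAX. A self-contained illustration (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t load_unsigned(void) { return -1; } /* reports failure */
    static int64_t  load_signed(void)   { return -1; } /* reports failure */

    int main(void)
    {
        uint64_t u = load_unsigned();
        int64_t  s = load_signed();

        /* u < 0 is always false for an unsigned type, so the error is
         * silently missed; the signed variant behaves as intended. */
        printf("unsigned: failure detected? %s\n", u < 0 ? "yes" : "no");
        printf("signed:   failure detected? %s\n", s < 0 ? "yes" : "no");
        return 0;
    }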
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_acknowledge_irq()
function. This function changes the state of the highest priority IRQ
from pending to active.

When the current CPU is a vCPU, modifying the state of an IRQ modifies
the corresponding LR entry. However if we clear the pending flag before
setting the active one, we lose track of the LR entry as it becomes
invalid. The next call to gic_get_lr_entry() will fail.

To overcome this issue, we call gic_activate_irq() before
gic_clear_pending(). This does not change the general behaviour of
gic_acknowledge_irq.

We also move the SGI case into gic_clear_pending_sgi() to enhance
code readability, as the virtualization extensions support adds an
if-else level.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-12-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 52 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 19 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
     s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
 }
 
+static inline uint32_t gic_clear_pending_sgi(GICState *s, int irq, int cpu)
+{
+    int src;
+    uint32_t ret;
+
+    if (!gic_is_vcpu(cpu)) {
+        /* Lookup the source CPU for the SGI and clear this in the
+         * sgi_pending map. Return the src and clear the overall pending
+         * state on this CPU if the SGI is not pending from any CPUs.
+         */
+        assert(s->sgi_pending[irq][cpu] != 0);
+        src = ctz32(s->sgi_pending[irq][cpu]);
+        s->sgi_pending[irq][cpu] &= ~(1 << src);
+        if (s->sgi_pending[irq][cpu] == 0) {
+            gic_clear_pending(s, irq, cpu);
+        }
+        ret = irq | ((src & 0x7) << 10);
+    } else {
+        uint32_t *lr_entry = gic_get_lr_entry(s, irq, cpu);
+        src = GICH_LR_CPUID(*lr_entry);
+
+        gic_clear_pending(s, irq, cpu);
+        ret = irq | (src << 10);
+    }
+
+    return ret;
+}
+
 uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
 {
-    int ret, irq, src;
-    int cm = 1 << cpu;
+    int ret, irq;
 
     /* gic_get_current_pending_irq() will return 1022 or 1023 appropriately
      * for the case where this GIC supports grouping and the pending interrupt
      * is in the wrong group.
      */
     irq = gic_get_current_pending_irq(s, cpu, attrs);
-    trace_gic_acknowledge_irq(cpu, irq);
+    trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
 
     if (irq >= GIC_MAXIRQ) {
         DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         return 1023;
     }
 
+    gic_activate_irq(s, cpu, irq);
+
     if (s->revision == REV_11MPCORE) {
         /* Clear pending flags for both level and edge triggered interrupts.
          * Level triggered IRQs will be reasserted once they become inactive.
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         ret = irq;
     } else {
         if (irq < GIC_NR_SGIS) {
-            /* Lookup the source CPU for the SGI and clear this in the
-             * sgi_pending map. Return the src and clear the overall pending
-             * state on this CPU if the SGI is not pending from any CPUs.
-             */
-            assert(s->sgi_pending[irq][cpu] != 0);
-            src = ctz32(s->sgi_pending[irq][cpu]);
-            s->sgi_pending[irq][cpu] &= ~(1 << src);
-            if (s->sgi_pending[irq][cpu] == 0) {
-                gic_clear_pending(s, irq, cpu);
-            }
-            ret = irq | ((src & 0x7) << 10);
+            ret = gic_clear_pending_sgi(s, irq, cpu);
         } else {
-            /* Clear pending state for both level and edge triggered
-             * interrupts. (level triggered interrupts with an active line
-             * remain pending, see gic_test_pending)
-             */
             gic_clear_pending(s, irq, cpu);
             ret = irq;
         }
     }
 
-    gic_activate_irq(s, cpu, irq);
     gic_update(s);
     DPRINTF("ACK %d\n", irq);
     return ret;
--
2.18.0

From: Alexander Graf <agraf@csgraf.de>

We will need to fetch the name of the current accelerator for
error messages in more places going forward. Let's create a helper
that gives it to us without casting in the target code.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620192242.70573-1-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/qemu/accel.h | 1 +
 accel/accel-common.c | 8 ++++++++
 softmmu/vl.c         | 3 +--
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/qemu/accel.h b/include/qemu/accel.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qemu/accel.h
+++ b/include/qemu/accel.h
@@ -XXX,XX +XXX,XX @@ typedef struct AccelClass {
 
 AccelClass *accel_find(const char *opt_name);
 AccelState *current_accel(void);
+const char *current_accel_name(void);
 
 void accel_init_interfaces(AccelClass *ac);
 
diff --git a/accel/accel-common.c b/accel/accel-common.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/accel-common.c
+++ b/accel/accel-common.c
@@ -XXX,XX +XXX,XX @@ AccelClass *accel_find(const char *opt_name)
     return ac;
 }
 
+/* Return the name of the current accelerator */
+const char *current_accel_name(void)
+{
+    AccelClass *ac = ACCEL_GET_CLASS(current_accel());
+
+    return ac->name;
+}
+
 static void accel_init_cpu_int_aux(ObjectClass *klass, void *opaque)
 {
     CPUClass *cc = CPU_CLASS(klass);
diff --git a/softmmu/vl.c b/softmmu/vl.c
index XXXXXXX..XXXXXXX 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -XXX,XX +XXX,XX @@ static void configure_accelerators(const char *progname)
     }
 
     if (init_failed && !qtest_chrdev) {
-        AccelClass *ac = ACCEL_GET_CLASS(current_accel());
-        error_report("falling back to %s", ac->name);
+        error_report("falling back to %s", current_accel_name());
     }
 
     if (icount_enabled() && !tcg_enabled()) {
--
2.25.1
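A reduced model of the ordering hazard the gic_acknowledge_irq commit
message describes (illustrative C only; the helper names are invented).
An LR entry is only findable while its state field is non-invalid, so
activation must precede clearing the pending bit:

    #include <assert.h>
    #include <stdint.h>

    #define LR_STATE_INVALID 0u
    #define LR_STATE_PENDING 1u
    #define LR_STATE_ACTIVE  2u

    static uint32_t lr_state = LR_STATE_PENDING; /* one in-use list register */

    /* Models gic_get_lr_entry(): an entry only matches while its state
     * is not "invalid". */
    static uint32_t *find_lr(void)
    {
        assert(lr_state != LR_STATE_INVALID);
        return &lr_state;
    }

    int main(void)
    {
        /* Correct order: activate first, then clear pending. */
        *find_lr() |= LR_STATE_ACTIVE;    /* state = active|pending */
        *find_lr() &= ~LR_STATE_PENDING;  /* state = active */

        /* The reverse order would zero the state (invalid) after the
         * first step, and the second find_lr() would assert. */
        return 0;
    }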
From: Luc Michel <luc.michel@greensocs.com>

This commit improves the way the GIC is realized and connected in the
ZynqMP SoC. The security extensions are enabled only if requested in the
machine state. The same goes for the virtualization extensions.

All the GIC to APU CPU(s) IRQ lines are now connected, including FIQ,
vIRQ and vFIQ. The missing CPU to GIC timers IRQ connections are also
added (HYP and SEC timers).

The GIC maintenance IRQs are back-wired to the correct GIC PPIs.

Finally, the MMIO mappings are reworked to take into account the ZynqMP
specifics. The GIC (v)CPU interface is aliased 16 times:
  * for the first 0x1000 bytes from 0xf9010000 to 0xf901f000
  * for the second 0x1000 bytes from 0xf9020000 to 0xf902f000
The virtual interface and virtual CPU interface are mapped
only when virtualization extensions are requested. The
XlnxZynqMPGICRegion struct has been enhanced to be able to catch all
this information.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20180727095421.386-20-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-zynqmp.h |  4 +-
 hw/arm/xlnx-zynqmp.c         | 92 ++++++++++++++++++++++++++++----
 2 files changed, 86 insertions(+), 10 deletions(-)

diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-zynqmp.h
+++ b/include/hw/arm/xlnx-zynqmp.h
@@ -XXX,XX +XXX,XX @@
 #define XLNX_ZYNQMP_OCM_RAM_0_ADDRESS 0xFFFC0000
 #define XLNX_ZYNQMP_OCM_RAM_SIZE 0x10000
 
-#define XLNX_ZYNQMP_GIC_REGIONS 2
+#define XLNX_ZYNQMP_GIC_REGIONS 6
 
 /* ZynqMP maps the ARM GIC regions (GICC, GICD ...) at consecutive 64k offsets
  * and under-decodes the 64k region. This mirrors the 4k regions to every 4k
@@ -XXX,XX +XXX,XX @@
  */
 
 #define XLNX_ZYNQMP_GIC_REGION_SIZE 0x1000
-#define XLNX_ZYNQMP_GIC_ALIASES     (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE - 1)
+#define XLNX_ZYNQMP_GIC_ALIASES     (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE)
 
 #define XLNX_ZYNQMP_MAX_LOW_RAM_SIZE 0x80000000ull
 
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -XXX,XX +XXX,XX @@
 
 #define ARM_PHYS_TIMER_PPI      30
 #define ARM_VIRT_TIMER_PPI      27
+#define ARM_HYP_TIMER_PPI       26
+#define ARM_SEC_TIMER_PPI       29
+#define GIC_MAINTENANCE_PPI     25
 
 #define GEM_REVISION        0x40070106
 
 #define GIC_BASE_ADDR       0xf9000000
 #define GIC_DIST_ADDR       0xf9010000
 #define GIC_CPU_ADDR        0xf9020000
+#define GIC_VIFACE_ADDR     0xf9040000
+#define GIC_VCPU_ADDR       0xf9060000
 
 #define SATA_INTR           133
 #define SATA_ADDR           0xFD0C0000
@@ -XXX,XX +XXX,XX @@ static const int adma_ch_intr[XLNX_ZYNQMP_NUM_ADMA_CH] = {
 typedef struct XlnxZynqMPGICRegion {
     int region_index;
     uint32_t address;
+    uint32_t offset;
+    bool virt;
 } XlnxZynqMPGICRegion;
 
 static const XlnxZynqMPGICRegion xlnx_zynqmp_gic_regions[] = {
-    { .region_index = 0, .address = GIC_DIST_ADDR, },
-    { .region_index = 1, .address = GIC_CPU_ADDR, },
+    /* Distributor */
+    {
+        .region_index = 0,
+        .address = GIC_DIST_ADDR,
+        .offset = 0,
+        .virt = false
+    },
+
+    /* CPU interface */
+    {
+        .region_index = 1,
+        .address = GIC_CPU_ADDR,
+        .offset = 0,
+        .virt = false
+    },
+    {
+        .region_index = 1,
+        .address = GIC_CPU_ADDR + 0x10000,
+        .offset = 0x1000,
+        .virt = false
+    },
+
+    /* Virtual interface */
+    {
+        .region_index = 2,
+        .address = GIC_VIFACE_ADDR,
+        .offset = 0,
+        .virt = true
+    },
+
+    /* Virtual CPU interface */
+    {
+        .region_index = 3,
+        .address = GIC_VCPU_ADDR,
+        .offset = 0,
+        .virt = true
+    },
+    {
+        .region_index = 3,
+        .address = GIC_VCPU_ADDR + 0x10000,
+        .offset = 0x1000,
+        .virt = true
+    },
 };
 
 static inline int arm_gic_ppi_index(int cpu_nr, int ppi_index)
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
     qdev_prop_set_uint32(DEVICE(&s->gic), "num-irq", GIC_NUM_SPI_INTR + 32);
     qdev_prop_set_uint32(DEVICE(&s->gic), "revision", 2);
     qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", num_apus);
+    qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", s->secure);
+    qdev_prop_set_bit(DEVICE(&s->gic),
+                      "has-virtualization-extensions", s->virt);
 
     /* Realize APUs before realizing the GIC. KVM requires this. */
     for (i = 0; i < num_apus; i++) {
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
     for (i = 0; i < XLNX_ZYNQMP_GIC_REGIONS; i++) {
         SysBusDevice *gic = SYS_BUS_DEVICE(&s->gic);
         const XlnxZynqMPGICRegion *r = &xlnx_zynqmp_gic_regions[i];
-        MemoryRegion *mr = sysbus_mmio_get_region(gic, r->region_index);
+        MemoryRegion *mr;
         uint32_t addr = r->address;
         int j;
 
-        sysbus_mmio_map(gic, r->region_index, addr);
+        if (r->virt && !s->virt) {
+            continue;
+        }
 
+        mr = sysbus_mmio_get_region(gic, r->region_index);
         for (j = 0; j < XLNX_ZYNQMP_GIC_ALIASES; j++) {
             MemoryRegion *alias = &s->gic_mr[i][j];
 
-            addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
             memory_region_init_alias(alias, OBJECT(s), "zynqmp-gic-alias", mr,
-                                     0, XLNX_ZYNQMP_GIC_REGION_SIZE);
+                                     r->offset, XLNX_ZYNQMP_GIC_REGION_SIZE);
             memory_region_add_subregion(system_memory, addr, alias);
+
+            addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
         }
     }
 
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
         sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i,
                            qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
                                             ARM_CPU_IRQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_FIQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 2,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_VIRQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 3,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_VFIQ));
         irq = qdev_get_gpio_in(DEVICE(&s->gic),
                                arm_gic_ppi_index(i, ARM_PHYS_TIMER_PPI));
-        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 0, irq);
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_PHYS, irq);
         irq = qdev_get_gpio_in(DEVICE(&s->gic),
                                arm_gic_ppi_index(i, ARM_VIRT_TIMER_PPI));
-        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 1, irq);
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_VIRT, irq);
+        irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                               arm_gic_ppi_index(i, ARM_HYP_TIMER_PPI));
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_HYP, irq);
+        irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                               arm_gic_ppi_index(i, ARM_SEC_TIMER_PPI));
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_SEC, irq);
+
+        if (s->virt) {
+            irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                                   arm_gic_ppi_index(i, GIC_MAINTENANCE_PPI));
+            sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 4, irq);
+        }
     }
 
     if (s->has_rpu) {
--
2.18.0

From: Alexander Graf <agraf@csgraf.de>

Some features such as running in EL3 or running M profile code are
incompatible with virtualization as QEMU implements it today. To prevent
users from picking invalid configurations on other virt solutions like
Hvf, let's run the same checks there too.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1073
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620192242.70573-2-agraf@csgraf.de
[PMM: Allow qtest accelerator too; tweak comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@
 #include "hw/boards.h"
 #endif
 #include "sysemu/tcg.h"
+#include "sysemu/qtest.h"
 #include "sysemu/hw_accel.h"
 #include "kvm_arm.h"
 #include "disas/capstone.h"
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
         }
     }
 
-    if (kvm_enabled()) {
+    if (!tcg_enabled() && !qtest_enabled()) {
         /*
+         * We assume that no accelerator except TCG (and the "not really an
+         * accelerator" qtest) can handle these features, because Arm hardware
+         * virtualization can't virtualize them.
+         *
          * Catch all the cases which might cause us to create more than one
          * address space for the CPU (otherwise we will assert() later in
          * cpu_address_space_init()).
          */
         if (arm_feature(env, ARM_FEATURE_M)) {
             error_setg(errp,
-                       "Cannot enable KVM when using an M-profile guest CPU");
+                       "Cannot enable %s when using an M-profile guest CPU",
+                       current_accel_name());
             return;
         }
         if (cpu->has_el3) {
             error_setg(errp,
-                       "Cannot enable KVM when guest CPU has EL3 enabled");
+                       "Cannot enable %s when guest CPU has EL3 enabled",
+                       current_accel_name());
             return;
         }
         if (cpu->tag_memory) {
             error_setg(errp,
-                       "Cannot enable KVM when guest CPUs has MTE enabled");
+                       "Cannot enable %s when guest CPUs has MTE enabled",
+                       current_accel_name());
             return;
         }
     }
--
2.25.1
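The "aliased 16 times" arithmetic in the xlnx-zynqmp commit message is
easy to check independently. A standalone sketch of the addresses it
generates (illustrative only, derived from the figures in the commit
message):

    #include <stdio.h>

    #define REGION_SIZE 0x1000u
    #define ALIASES     (0x10000u / REGION_SIZE)   /* 16 */

    int main(void)
    {
        for (unsigned j = 0; j < ALIASES; j++) {
            /* first 0x1000 bytes of the CPU interface */
            printf("offset 0x0000 alias at 0x%08x\n",
                   0xf9010000u + j * REGION_SIZE);
            /* second 0x1000 bytes of the CPU interface */
            printf("offset 0x1000 alias at 0x%08x\n",
                   0xf9020000u + j * REGION_SIZE);
        }
        return 0;
    }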
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.

For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.

(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
---
 target/arm/helper.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }
 
+    /*
+     * Tailchaining: if there is currently a pending exception that
+     * is high enough priority to preempt execution at the level we're
+     * about to return to, then just directly take that exception now,
+     * avoiding an unstack-and-then-stack. Note that now we have
+     * deactivated the previous exception by calling armv7m_nvic_complete_irq()
+     * our current execution priority is already the execution priority we are
+     * returning to -- none of the state we would unstack or set based on
+     * the EXCRET value affects it.
+     */
+    if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
+        qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
     switch_v7m_security_state(env, return_to_secure);
 
     {
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

This register is part of SME, but isn't closely related to the
rest of the extension.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  1 +
 target/arm/helper.c | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         };
         uint64_t tpidr_el[4];
     };
+    uint64_t tpidr2_el0;
     /* The secure banks of these registers don't map anywhere */
     uint64_t tpidrurw_s;
     uint64_t tpidrprw_s;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo zcr_reginfo[] = {
       .writefn = zcr_write, .raw_writefn = raw_write },
 };
 
+#ifdef TARGET_AARCH64
+static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
+                                    bool isread)
+{
+    int el = arm_current_el(env);
+
+    if (el == 0) {
+        uint64_t sctlr = arm_sctlr(env, el);
+        if (!(sctlr & SCTLR_EnTP2)) {
+            return CP_ACCESS_TRAP;
+        }
+    }
+    /* TODO: FEAT_FGT */
+    if (el < 3
+        && arm_feature(env, ARM_FEATURE_EL3)
+        && !(env->cp15.scr_el3 & SCR_ENTP2)) {
+        return CP_ACCESS_TRAP_EL3;
+    }
+    return CP_ACCESS_OK;
+}
+
+static const ARMCPRegInfo sme_reginfo[] = {
+    { .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
+      .access = PL0_RW, .accessfn = access_tpidr2,
+      .fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
+};
+#endif /* TARGET_AARCH64 */
+
 void hw_watchpoint_update(ARMCPU *cpu, int n)
 {
     CPUARMState *env = &cpu->env;
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
     }
 
 #ifdef TARGET_AARCH64
+    if (cpu_isar_feature(aa64_sme, cpu)) {
+        define_arm_cp_regs(cpu, sme_reginfo);
+    }
     if (cpu_isar_feature(aa64_pauth, cpu)) {
         define_arm_cp_regs(cpu, pauth_reginfo);
     }
--
2.25.1
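The control flow the tailchaining patch adds can be summarised in a few
lines. A sketch with stubbed-out helpers (illustrative, not the QEMU
implementation; the stub stands in for armv7m_nvic_can_take_pending_exception()):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stub: "is there a pending exception that can preempt at the
     * priority we are returning to?" */
    static bool can_take_pending_exception(void)
    {
        return true;
    }

    static void exception_return(void)
    {
        if (can_take_pending_exception()) {
            /* Tail-chain: take the pending exception directly and skip
             * the unstack-then-restack of the register frame. */
            printf("...tailchaining to pending exception\n");
            return;
        }
        printf("unstack and return to the interrupted context\n");
    }

    int main(void)
    {
        exception_return();
        return 0;
    }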
From: Luc Michel <luc.michel@greensocs.com>

Add some helper macros and functions related to the virtualization
extensions to gic_internal.h.

The GICH_LR_* macros help extracting specific fields of a list register
value. The only tricky one is the priority field, as only the MSBs are
stored. The value must be shifted accordingly to obtain the correct
priority value.

gic_is_vcpu() and gic_get_vcpu_real_id() help with (v)CPU id manipulation
to abstract the fact that vCPU ids are in the range
[ GIC_NCPU; (GIC_NCPU + num_cpu) [.

gic_lr_* and gic_virq_is_valid() help with the list registers.
gic_get_lr_entry() returns the LR entry for a given (vCPU, irq) pair. It
is meant to be used in contexts where we know for sure that the entry
exists, so we assert that the entry is actually found, and the caller can
avoid the NULL check on the returned pointer.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-8-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 74 ++++++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gic.c      |  5 +++
 2 files changed, 79 insertions(+)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
         R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
         R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
 
+#define GICH_LR_STATE_INVALID         0
+#define GICH_LR_STATE_PENDING         1
+#define GICH_LR_STATE_ACTIVE          2
+#define GICH_LR_STATE_ACTIVE_PENDING  3
+
+#define GICH_LR_VIRT_ID(entry) (FIELD_EX32(entry, GICH_LR0, VirtualID))
+#define GICH_LR_PHYS_ID(entry) (FIELD_EX32(entry, GICH_LR0, PhysicalID))
+#define GICH_LR_CPUID(entry) (FIELD_EX32(entry, GICH_LR0, CPUID))
+#define GICH_LR_EOI(entry) (FIELD_EX32(entry, GICH_LR0, EOI))
+#define GICH_LR_PRIORITY(entry) (FIELD_EX32(entry, GICH_LR0, Priority) << 3)
+#define GICH_LR_STATE(entry) (FIELD_EX32(entry, GICH_LR0, State))
+#define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
+#define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
+
 /* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
  * GICv2 and GICv2 with security extensions:
  */
@@ -XXX,XX +XXX,XX @@ static inline bool gic_is_vcpu(int cpu)
     return cpu >= GIC_NCPU;
 }
 
+static inline int gic_get_vcpu_real_id(int cpu)
+{
+    return (cpu >= GIC_NCPU) ? (cpu - GIC_NCPU) : cpu;
+}
+
+/* Return true if the given vIRQ state exists in a LR and is either active or
+ * pending and active.
+ *
+ * This function is used to check that a guest's `end of interrupt' or
+ * `interrupts deactivation' request is valid, and matches with a LR of an
+ * already acknowledged vIRQ (i.e. has the active bit set in its state).
+ */
+static inline bool gic_virq_is_valid(GICState *s, int irq, int vcpu)
+{
+    int cpu = gic_get_vcpu_real_id(vcpu);
+    int lr_idx;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if ((GICH_LR_VIRT_ID(*entry) == irq) &&
+            (GICH_LR_STATE(*entry) & GICH_LR_STATE_ACTIVE)) {
+            return true;
+        }
+    }
+
+    return false;
+}
+
+/* Return a pointer on the LR entry matching the given vIRQ.
+ *
+ * This function is used to retrieve an LR for which we know for sure that the
+ * corresponding vIRQ exists in the current context (i.e. its current state is
+ * not `invalid'):
+ *   - Either the corresponding vIRQ has been validated with gic_virq_is_valid()
+ *     so it is `active' or `active and pending',
+ *   - Or it was pending and has been selected by gic_get_best_virq(). It is now
+ *     `pending', `active' or `active and pending', depending on what the guest
+ *     already did with this vIRQ.
+ *
+ * Having multiple LRs with the same VirtualID leads to UNPREDICTABLE
+ * behaviour in the GIC. We choose to return the first one that matches.
+ */
+static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
+{
+    int cpu = gic_get_vcpu_real_id(vcpu);
+    int lr_idx;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if ((GICH_LR_VIRT_ID(*entry) == irq) &&
+            (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID)) {
+            return entry;
+        }
+    }
+
+    g_assert_not_reached();
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline int gic_get_current_cpu(GICState *s)
     return 0;
 }
 
+static inline int gic_get_current_vcpu(GICState *s)
+{
+    return gic_get_current_cpu(s) + GIC_NCPU;
+}
+
 /* Return true if this GIC config has interrupt groups, which is
  * true if we're a GICv2, or a GICv1 with the security extensions.
  */
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

This is CheckSMEAccess, which is the basis for a set of
related tests for various SME cpregs and instructions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           |  2 ++
 target/arm/translate.h     |  1 +
 target/arm/helper.c        | 52 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-a64.c |  1 +
 4 files changed, 56 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void aarch64_sync_64_to_32(CPUARMState *env);
 
 int fp_exception_el(CPUARMState *env, int cur_el);
 int sve_exception_el(CPUARMState *env, int cur_el);
+int sme_exception_el(CPUARMState *env, int cur_el);
 
 /**
  * sve_vqm1_for_el:
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, ATA, 15, 1)
 FIELD(TBFLAG_A64, TCMA, 16, 2)
 FIELD(TBFLAG_A64, MTE_ACTIVE, 18, 1)
 FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
+FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
 
 /*
  * Helpers for using the above.
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     bool ns;            /* Use non-secure CPREG bank on access */
     int fp_excp_el;     /* FP exception EL or 0 if enabled */
     int sve_excp_el;    /* SVE exception EL or 0 if enabled */
+    int sme_excp_el;    /* SME exception EL or 0 if enabled */
     int vl;             /* current vector length in bytes */
     bool vfp_enabled;   /* FP enabled via FPSCR.EN */
     int vec_len;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int el)
     return 0;
 }
 
+/*
+ * Return the exception level to which exceptions should be taken for SME.
+ * C.f. the ARM pseudocode function CheckSMEAccess.
+ */
+int sme_exception_el(CPUARMState *env, int el)
+{
+#ifndef CONFIG_USER_ONLY
+    if (el <= 1 && !el_is_in_host(env, el)) {
+        switch (FIELD_EX64(env->cp15.cpacr_el1, CPACR_EL1, SMEN)) {
+        case 1:
+            if (el != 0) {
+                break;
+            }
+            /* fall through */
+        case 0:
+        case 2:
+            return 1;
+        }
+    }
+
+    if (el <= 2 && arm_is_el2_enabled(env)) {
+        /* CPTR_EL2 changes format with HCR_EL2.E2H (regardless of TGE). */
+        if (env->cp15.hcr_el2 & HCR_E2H) {
+            switch (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, SMEN)) {
+            case 1:
+                if (el != 0 || !(env->cp15.hcr_el2 & HCR_TGE)) {
+                    break;
+                }
+                /* fall through */
+            case 0:
+            case 2:
+                return 2;
+            }
+        } else {
+            if (FIELD_EX64(env->cp15.cptr_el[2], CPTR_EL2, TSM)) {
+                return 2;
+            }
+        }
+    }
+
+    /* CPTR_EL3.  Since ESM is negative we must check for EL3.  */
+    if (arm_feature(env, ARM_FEATURE_EL3)
+        && !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, ESM)) {
+        return 3;
+    }
+#endif
+    return 0;
+}
+
 /*
  * Given that SVE is enabled, return the vector length for EL.
  */
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
         }
         DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
     }
+    if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
+        DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
+    }
 
     sctlr = regime_sctlr(env, stage1);
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     dc->align_mem = EX_TBFLAG_ANY(tb_flags, ALIGN_MEM);
     dc->pstate_il = EX_TBFLAG_ANY(tb_flags, PSTATE__IL);
     dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
+    dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
     dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
     dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
     dc->bt = EX_TBFLAG_A64(tb_flags, BT);
--
2.25.1
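The priority trick called out in the GICH_LR commit message ("only the
MSBs are stored") is worth a concrete illustration: GICH_LR holds a
5-bit priority field, so recovering the 8-bit priority means shifting
left by three, which is exactly what GICH_LR_PRIORITY() does. A sketch
(the bit positions are the GICv2 architecture's; this is not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    /* GICv2 GICH_LR<n>: Priority lives in bits [27:23]. */
    static uint32_t lr_priority_field(uint32_t lr)
    {
        return (lr >> 23) & 0x1f;
    }

    int main(void)
    {
        uint32_t lr = 0x15u << 23;               /* stored field = 0x15 */
        unsigned prio = lr_priority_field(lr) << 3;
        printf("priority = 0x%02x\n", prio);     /* 0xa8 */
        return 0;
    }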
From: Richard Henderson <richard.henderson@linaro.org>

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_movz_d)(void *vd, void *vn, void *vg, uint32_t desc)
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;
     for (i = 0; i < opr_sz; i += 1) {
-        d[i] = n[1] & -(uint64_t)(pg[H1(i)] & 1);
+        d[i] = n[i] & -(uint64_t)(pg[H1(i)] & 1);
     }
 }
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

This will be used for raising various traps for SME.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/syndrome.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
     EC_AA64_SMC = 0x17,
     EC_SYSTEMREGISTERTRAP = 0x18,
     EC_SVEACCESSTRAP = 0x19,
+    EC_SMETRAP = 0x1d,
     EC_INSNABORT = 0x20,
     EC_INSNABORT_SAME_EL = 0x21,
     EC_PCALIGNMENT = 0x22,
@@ -XXX,XX +XXX,XX @@ enum arm_exception_class {
     EC_AA64_BKPT = 0x3c,
 };
 
+typedef enum {
+    SME_ET_AccessTrap,
+    SME_ET_Streaming,
+    SME_ET_NotStreaming,
+    SME_ET_InactiveZA,
+} SMEExceptionType;
+
 #define ARM_EL_EC_SHIFT 26
 #define ARM_EL_IL_SHIFT 25
 #define ARM_EL_ISV_SHIFT 24
@@ -XXX,XX +XXX,XX @@ static inline uint32_t syn_sve_access_trap(void)
     return EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT;
 }
 
+static inline uint32_t syn_smetrap(SMEExceptionType etype, bool is_16bit)
+{
+    return (EC_SMETRAP << ARM_EL_EC_SHIFT)
+        | (is_16bit ? 0 : ARM_EL_IL) | etype;
+}
+
 static inline uint32_t syn_pactrap(void)
 {
     return EC_PACTRAP << ARM_EL_EC_SHIFT;
--
2.25.1
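The one-character fix above (n[1] becomes n[i]) sits inside a common
SVE-helper idiom worth a note: -(uint64_t)b is all-ones when the
predicate bit b is 1 and all-zeros when it is 0, so the AND either keeps
or zeroes the element. A standalone illustration (not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    /* Keep n if the predicate bit is set, else produce zero. */
    static uint64_t movz_element(uint64_t n, unsigned pred_bit)
    {
        return n & -(uint64_t)(pred_bit & 1);
    }

    int main(void)
    {
        printf("%llx\n", (unsigned long long)movz_element(0x1234, 1)); /* 1234 */
        printf("%llx\n", (unsigned long long)movz_element(0x1234, 0)); /* 0 */
        return 0;
    }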
From: Luc Michel <luc.michel@greensocs.com>

Add the read/write functions to handle accesses to the vCPU interface.
Those accesses are forwarded to the real CPU interface, with the CPU id
being converted to the corresponding vCPU id (vCPU id = CPU id +
GIC_NCPU).

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180727095421.386-15-luc.michel@greensocs.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 37 +++++++++++++++++++++++++++++++++++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_do_cpu_write(void *opaque, hwaddr addr,
     return gic_cpu_write(s, id, addr, value, attrs);
 }
 
+static MemTxResult gic_thisvcpu_read(void *opaque, hwaddr addr, uint64_t *data,
+                                     unsigned size, MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_cpu_read(s, gic_get_current_vcpu(s), addr, data, attrs);
+}
+
+static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
+                                      uint64_t value, unsigned size,
+                                      MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
+}
+
 static const MemoryRegionOps gic_ops[2] = {
     {
         .read_with_attrs = gic_dist_read,
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
     .endianness = DEVICE_NATIVE_ENDIAN,
 };
 
+static const MemoryRegionOps gic_virt_ops[2] = {
+    {
+        .read_with_attrs = NULL,
+        .write_with_attrs = NULL,
+        .endianness = DEVICE_NATIVE_ENDIAN,
+    },
+    {
+        .read_with_attrs = gic_thisvcpu_read,
+        .write_with_attrs = gic_thisvcpu_write,
+        .endianness = DEVICE_NATIVE_ENDIAN,
+    }
+};
+
 static void arm_gic_realize(DeviceState *dev, Error **errp)
 {
     /* Device instance realize function for the GIC sysbus device */
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
-    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
+    /* This creates distributor, main CPU interface (s->cpuiomem[0]) and if
+     * enabled, virtualization extensions related interfaces (main virtual
+     * interface (s->vifaceiomem[0]) and virtual CPU interface).
+     */
+    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, gic_virt_ops);
 
     /* Extra core-specific regions for the CPU interfaces. This is
      * necessary for "franken-GIC" implementations, for example on
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

This will be used for controlling access to SME cpregs.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpregs.h        |  5 +++++
 target/arm/translate-a64.c | 18 ++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/target/arm/cpregs.h b/target/arm/cpregs.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpregs.h
+++ b/target/arm/cpregs.h
@@ -XXX,XX +XXX,XX @@ enum {
     ARM_CP_EL3_NO_EL2_UNDEF = 1 << 16,
     ARM_CP_EL3_NO_EL2_KEEP = 1 << 17,
     ARM_CP_EL3_NO_EL2_C_NZ = 1 << 18,
+    /*
+     * Flag: Access check for this sysreg is constrained by the
+     * ARM pseudocode function CheckSMEAccess().
+     */
+    ARM_CP_SME = 1 << 19,
 };
 
 /*
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
     return fp_access_check(s);
 }
 
+/*
+ * Check that SME access is enabled, raise an exception if not.
+ * Note that this function corresponds to CheckSMEAccess and is
+ * only used directly for cpregs.
+ */
+static bool sme_access_check(DisasContext *s)
+{
+    if (s->sme_excp_el) {
+        gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
+                              syn_smetrap(SME_ET_AccessTrap, false),
+                              s->sme_excp_el);
+        return false;
+    }
+    return true;
+}
+
 /*
  * This utility function is for doing register extension with an
  * optional shift. You will likely want to pass a temporary for the
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
         return;
     } else if ((ri->type & ARM_CP_SVE) && !sve_access_check(s)) {
         return;
+    } else if ((ri->type & ARM_CP_SME) && !sme_access_check(s)) {
+        return;
     }
 
     if ((tb_cflags(s->base.tb) & CF_USE_ICOUNT) && (ri->type & ARM_CP_IO)) {
--
2.25.1
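A quick sketch of how the new flag is meant to be consumed (illustrative only, not code from the series; EXAMPLE_REG and its encoding are invented): any AArch64 sysreg whose accesses should be gated by CheckSMEAccess() just adds ARM_CP_SME to its .type, and handle_sys() then routes the access through sme_access_check() as shown above.

/* Illustrative only: EXAMPLE_REG is a made-up constant register. */
static const ARMCPRegInfo example_sme_gated_reg = {
    .name = "EXAMPLE_REG", .state = ARM_CP_STATE_AA64,
    .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 7,
    .access = PL0_RW, .type = ARM_CP_SME | ARM_CP_CONST,
    .resetvalue = 0,
};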
Improve the exception-taken logging by logging in
v7m_exception_taken() the exception we're going to take
and whether it is secure/nonsecure.

This requires us to move logging at many callsites from after the
call to before it, so that the logging appears in a sensible order.

(This will make tail-chaining produce more useful logs; for the
current callers of v7m_exception_taken() we know which exception
we're going to take, so custom log messages at the callsite sufficed;
for tail-chaining only v7m_exception_taken() knows the exception
number that we're going to tail-chain to.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
---
 target/arm/helper.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
     bool push_failed = false;
 
     armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
+    qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
+                  targets_secure ? "secure" : "nonsecure", exc);
 
     if (arm_feature(env, ARM_FEATURE_V8)) {
         if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
              * we might now want to take a different exception which
              * targets a different security state, so try again from the top.
              */
+            qemu_log_mask(CPU_LOG_INT,
+                          "...derived exception on callee-saves register stacking");
             v7m_exception_taken(cpu, lr, true, true);
             return;
         }
 
         if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
             /* Vector load failed: derived exception */
+            qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
             v7m_exception_taken(cpu, lr, true, true);
             return;
         }
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
     if (sfault) {
         env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-        v7m_exception_taken(cpu, excret, true, false);
         qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
                       "stackframe: failed EXC_RETURN.ES validity check\n");
+        v7m_exception_taken(cpu, excret, true, false);
         return;
     }
 
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
      */
     env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
     armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-    v7m_exception_taken(cpu, excret, true, false);
     qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                   "stackframe: failed exception return integrity check\n");
+    v7m_exception_taken(cpu, excret, true, false);
     return;
 }
 
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         /* Take a SecureFault on the current stack */
         env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-        v7m_exception_taken(cpu, excret, true, false);
         qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
                       "stackframe: failed exception return integrity "
                       "signature check\n");
+        v7m_exception_taken(cpu, excret, true, false);
         return;
     }
 
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         /* v7m_stack_read() pended a fault, so take it (as a tail
         * chained exception on the same stack frame)
         */
+        qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
        v7m_exception_taken(cpu, excret, true, false);
        return;
     }
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
                                     env->v7m.secure);
             env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
-            v7m_exception_taken(cpu, excret, true, false);
             qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
                           "stackframe: failed exception return integrity "
                           "check\n");
+            v7m_exception_taken(cpu, excret, true, false);
             return;
         }
     }
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
         env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
         ignore_stackfaults = v7m_push_stack(cpu);
-        v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
         qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
                       "failed exception return integrity check\n");
+        v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
         return;
     }
 
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
 
     ignore_stackfaults = v7m_push_stack(cpu);
     v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
-    qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception);
 }
 
 /* Function used to synchronize QEMU's AArch64 register set with AArch32
-- 
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

This cpreg is used to access two new bits of PSTATE
that are not visible via any other mechanism.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  6 ++++++
 target/arm/helper.c | 13 +++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
      * nRW (also known as M[4]) is kept, inverted, in env->aarch64
      * DAIF (exception masks) are kept in env->daif
      * BTYPE is kept in env->btype
+     * SM and ZA are kept in env->svcr
      * all other bits are stored in their correct places in env->pstate
      */
     uint32_t pstate;
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
     uint32_t condexec_bits; /* IT bits.  cpsr[15:10,26:25].  */
     uint32_t btype;  /* BTI branch type.  spsr[11:10].  */
     uint64_t daif; /* exception masks, in the bits they are in PSTATE */
+    uint64_t svcr; /* PSTATE.{SM,ZA} in the bits they are in SVCR */
 
     uint64_t elr_el[4]; /* AArch64 exception link regs  */
     uint64_t sp_el[4]; /* AArch64 banked stack pointers */
@@ -XXX,XX +XXX,XX @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
 #define PSTATE_MODE_EL1t 4
 #define PSTATE_MODE_EL0t 0
 
+/* PSTATE bits that are accessed via SVCR and not stored in SPSR_ELx. */
+FIELD(SVCR, SM, 0, 1)
+FIELD(SVCR, ZA, 1, 1)
+
 /* Write a new value to v7m.exception, thus transitioning into or out
  * of Handler mode; this may result in a change of active stack pointer.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
     return CP_ACCESS_OK;
 }
 
+static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                       uint64_t value)
+{
+    value &= R_SVCR_SM_MASK | R_SVCR_ZA_MASK;
+    /* TODO: Side effects. */
+    env->svcr = value;
+}
+
 static const ARMCPRegInfo sme_reginfo[] = {
     { .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
       .access = PL0_RW, .accessfn = access_tpidr2,
       .fieldoffset = offsetof(CPUARMState, cp15.tpidr2_el0) },
+    { .name = "SVCR", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 3, .crn = 4, .crm = 2, .opc2 = 2,
+      .access = PL0_RW, .type = ARM_CP_SME,
+      .fieldoffset = offsetof(CPUARMState, svcr),
+      .writefn = svcr_write, .raw_writefn = raw_write },
 };
 #endif /* TARGET_AARCH64 */
 
-- 
2.25.1
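With the FIELD() definitions above, the two PSTATE bits travel through env->svcr like any other register field; a minimal sketch (not code from the series) of reading and writing them with QEMU's registerfield helpers:

static bool example_pstate_sm(CPUARMState *env)
{
    /* Extract PSTATE.SM from its SVCR bit position. */
    return FIELD_EX64(env->svcr, SVCR, SM);
}

static void example_set_pstate_za(CPUARMState *env, bool za)
{
    /* Deposit PSTATE.ZA into its SVCR bit position. */
    env->svcr = FIELD_DP64(env->svcr, SVCR, ZA, za);
}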
From: Luc Michel <luc.michel@greensocs.com>

Implement the maintenance interrupt generation that is part of the GICv2
virtualization extensions.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-18-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 97 insertions(+)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
         && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
 }
 
+static inline void gic_extract_lr_info(GICState *s, int cpu,
+        int *num_eoi, int *num_valid, int *num_pending)
+{
+    int lr_idx;
+
+    *num_eoi = 0;
+    *num_valid = 0;
+    *num_pending = 0;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if (gic_lr_entry_is_eoi(*entry)) {
+            (*num_eoi)++;
+        }
+
+        if (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID) {
+            (*num_valid)++;
+        }
+
+        if (GICH_LR_STATE(*entry) == GICH_LR_STATE_PENDING) {
+            (*num_pending)++;
+        }
+    }
+}
+
+static void gic_compute_misr(GICState *s, int cpu)
+{
+    uint32_t value = 0;
+    int vcpu = cpu + GIC_NCPU;
+
+    int num_eoi, num_valid, num_pending;
+
+    gic_extract_lr_info(s, cpu, &num_eoi, &num_valid, &num_pending);
+
+    /* EOI */
+    if (num_eoi) {
+        value |= R_GICH_MISR_EOI_MASK;
+    }
+
+    /* U: true if only 0 or 1 LR entry is valid */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_UIE_MASK) && (num_valid < 2)) {
+        value |= R_GICH_MISR_U_MASK;
+    }
+
+    /* LRENP: EOICount is not 0 */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_LRENPIE_MASK) &&
+        ((s->h_hcr[cpu] & R_GICH_HCR_EOICount_MASK) != 0)) {
+        value |= R_GICH_MISR_LRENP_MASK;
+    }
+
+    /* NP: no pending interrupts */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_NPIE_MASK) && (num_pending == 0)) {
+        value |= R_GICH_MISR_NP_MASK;
+    }
+
+    /* VGrp0E: group0 virq signaling enabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0EIE_MASK) &&
+        (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
+        value |= R_GICH_MISR_VGrp0E_MASK;
+    }
+
+    /* VGrp0D: group0 virq signaling disabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0DIE_MASK) &&
+        !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
+        value |= R_GICH_MISR_VGrp0D_MASK;
+    }
+
+    /* VGrp1E: group1 virq signaling enabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1EIE_MASK) &&
+        (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
+        value |= R_GICH_MISR_VGrp1E_MASK;
+    }
+
+    /* VGrp1D: group1 virq signaling disabled */
+    if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1DIE_MASK) &&
+        !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
+        value |= R_GICH_MISR_VGrp1D_MASK;
+    }
+
+    s->h_misr[cpu] = value;
+}
+
+static void gic_update_maintenance(GICState *s)
+{
+    int cpu = 0;
+    int maint_level;
+
+    for (cpu = 0; cpu < s->num_cpu; cpu++) {
+        gic_compute_misr(s, cpu);
+        maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
+
+        qemu_set_irq(s->maintenance_irq[cpu], maint_level);
+    }
+}
+
 static void gic_update_virt(GICState *s)
 {
     gic_update_internal(s, true);
+    gic_update_maintenance(s);
 }
 
 static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
-- 
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

These cpregs control the streaming vector length and whether the
full a64 instruction set is allowed while in streaming mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    |  8 ++++++--
 target/arm/helper.c | 41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
         float_status standard_fp_status;
         float_status standard_fp_status_f16;
 
-        /* ZCR_EL[1-3] */
-        uint64_t zcr_el[4];
+        uint64_t zcr_el[4];   /* ZCR_EL[1-3] */
+        uint64_t smcr_el[4];  /* SMCR_EL[1-3] */
     } vfp;
     uint64_t exclusive_addr;
     uint64_t exclusive_val;
@@ -XXX,XX +XXX,XX @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
 FIELD(SVCR, SM, 0, 1)
 FIELD(SVCR, ZA, 1, 1)
 
+/* Fields for SMCR_ELx. */
+FIELD(SMCR, LEN, 0, 4)
+FIELD(SMCR, FA64, 31, 1)
+
 /* Write a new value to v7m.exception, thus transitioning into or out
  * of Handler mode; this may result in a change of active stack pointer.
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void define_arm_vh_e2h_redirects_aliases(ARMCPU *cpu)
          */
         { K(3, 0,  1, 2, 0), K(3, 4,  1, 2, 0), K(3, 5, 1, 2, 0),
           "ZCR_EL1", "ZCR_EL2", "ZCR_EL12", isar_feature_aa64_sve },
+        { K(3, 0,  1, 2, 6), K(3, 4,  1, 2, 6), K(3, 5, 1, 2, 6),
+          "SMCR_EL1", "SMCR_EL2", "SMCR_EL12", isar_feature_aa64_sme },
 
         { K(3, 0,  5, 6, 0), K(3, 4,  5, 6, 0), K(3, 5, 5, 6, 0),
           "TFSR_EL1", "TFSR_EL2", "TFSR_EL12", isar_feature_aa64_mte },
@@ -XXX,XX +XXX,XX @@ static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
     env->svcr = value;
 }
 
+static void smcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
+                       uint64_t value)
+{
+    int cur_el = arm_current_el(env);
+    int old_len = sve_vqm1_for_el(env, cur_el);
+    int new_len;
+
+    QEMU_BUILD_BUG_ON(ARM_MAX_VQ > R_SMCR_LEN_MASK + 1);
+    value &= R_SMCR_LEN_MASK | R_SMCR_FA64_MASK;
+    raw_write(env, ri, value);
+
+    /*
+     * Note that it is CONSTRAINED UNPREDICTABLE what happens to ZA storage
+     * when SVL is widened (old values kept, or zeros).  Choose to keep the
+     * current values for simplicity.  But for QEMU internals, we must still
+     * apply the narrower SVL to the Zregs and Pregs -- see the comment
+     * above aarch64_sve_narrow_vq.
+     */
+    new_len = sve_vqm1_for_el(env, cur_el);
+    if (new_len < old_len) {
+        aarch64_sve_narrow_vq(env, new_len + 1);
+    }
+}
+
 static const ARMCPRegInfo sme_reginfo[] = {
     { .name = "TPIDR2_EL0", .state = ARM_CP_STATE_AA64,
       .opc0 = 3, .opc1 = 3, .crn = 13, .crm = 0, .opc2 = 5,
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
       .access = PL0_RW, .type = ARM_CP_SME,
       .fieldoffset = offsetof(CPUARMState, svcr),
       .writefn = svcr_write, .raw_writefn = raw_write },
+    { .name = "SMCR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 6,
+      .access = PL1_RW, .type = ARM_CP_SME,
+      .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[1]),
+      .writefn = smcr_write, .raw_writefn = raw_write },
+    { .name = "SMCR_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 6,
+      .access = PL2_RW, .type = ARM_CP_SME,
+      .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[2]),
+      .writefn = smcr_write, .raw_writefn = raw_write },
+    { .name = "SMCR_EL3", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 6, .crn = 1, .crm = 2, .opc2 = 6,
+      .access = PL3_RW, .type = ARM_CP_SME,
+      .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[3]),
+      .writefn = smcr_write, .raw_writefn = raw_write },
 };
 #endif /* TARGET_AARCH64 */
 
-- 
2.25.1
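The LEN field follows the same convention as ZCR: the selected vector quadword count is LEN + 1, clamped to what the implementation supports, which is what the QEMU_BUILD_BUG_ON above relies on. A sketch of that arithmetic (illustrative only, not code from the series):

/* Illustrative: effective streaming vq from an SMCR_ELx value. */
static int example_smcr_vq(uint64_t smcr, int max_vq)
{
    int vq = FIELD_EX64(smcr, SMCR, LEN) + 1;
    return vq < max_vq ? vq : max_vq;
}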
Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
---
 target/arm/helper.c | 23 -----------------------
 1 file changed, 23 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
 
     fi->type = ARMFault_Permission;
     fi->level = 1;
-    /*
-     * Core QEMU code can't handle execution from small pages yet, so
-     * don't try it. This way we'll get an MPU exception, rather than
-     * eventually causing QEMU to exit in get_page_addr_code().
-     */
-    if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
-        qemu_log_mask(LOG_UNIMP,
-                      "MPU: No support for execution from regions "
-                      "smaller than 1K\n");
-        *prot &= ~PAGE_EXEC;
-    }
     return !(*prot & (1 << access_type));
 }
 
@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
 
     fi->type = ARMFault_Permission;
     fi->level = 1;
-    /*
-     * Core QEMU code can't handle execution from small pages yet, so
-     * don't try it. This means any attempted execution will generate
-     * an MPU exception, rather than eventually causing QEMU to exit in
-     * get_page_addr_code().
-     */
-    if (*is_subpage && (*prot & PAGE_EXEC)) {
-        qemu_log_mask(LOG_UNIMP,
-                      "MPU: No support for execution from regions "
-                      "smaller than 1K\n");
-        *prot &= ~PAGE_EXEC;
-    }
     return !(*prot & (1 << access_type));
 }
 
-- 
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Implement the streaming mode identification register, and the
two streaming priority registers. For QEMU, they are all RES0.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tpidr2(CPUARMState *env, const ARMCPRegInfo *ri,
     return CP_ACCESS_OK;
 }
 
+static CPAccessResult access_esm(CPUARMState *env, const ARMCPRegInfo *ri,
+                                 bool isread)
+{
+    /* TODO: FEAT_FGT for SMPRI_EL1 but not SMPRIMAP_EL2 */
+    if (arm_current_el(env) < 3
+        && arm_feature(env, ARM_FEATURE_EL3)
+        && !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, ESM)) {
+        return CP_ACCESS_TRAP_EL3;
+    }
+    return CP_ACCESS_OK;
+}
+
 static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                        uint64_t value)
 {
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo sme_reginfo[] = {
       .access = PL3_RW, .type = ARM_CP_SME,
       .fieldoffset = offsetof(CPUARMState, vfp.smcr_el[3]),
       .writefn = smcr_write, .raw_writefn = raw_write },
+    { .name = "SMIDR_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 1, .crn = 0, .crm = 0, .opc2 = 6,
+      .access = PL1_R, .accessfn = access_aa64_tid1,
+      /*
+       * IMPLEMENTOR = 0 (software)
+       * REVISION    = 0 (implementation defined)
+       * SMPS        = 0 (no streaming execution priority in QEMU)
+       * AFFINITY    = 0 (streaming sve mode not shared with other PEs)
+       */
+      .type = ARM_CP_CONST, .resetvalue = 0, },
+    /*
+     * Because SMIDR_EL1.SMPS is 0, SMPRI_EL1 and SMPRIMAP_EL2 are RES 0.
+     */
+    { .name = "SMPRI_EL1", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 0, .crn = 1, .crm = 2, .opc2 = 4,
+      .access = PL1_RW, .accessfn = access_esm,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
+    { .name = "SMPRIMAP_EL2", .state = ARM_CP_STATE_AA64,
+      .opc0 = 3, .opc1 = 4, .crn = 1, .crm = 2, .opc2 = 5,
+      .access = PL2_RW, .accessfn = access_esm,
+      .type = ARM_CP_CONST, .resetvalue = 0 },
 };
 #endif /* TARGET_AARCH64 */
 
-- 
2.25.1
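Restated as a standalone predicate (a sketch restating access_esm() above, not new behaviour): any access from below EL3, on a CPU that implements EL3, traps to EL3 unless CPTR_EL3.ESM is set.

/* Illustrative restatement of the check access_esm() performs. */
static bool example_esm_traps_to_el3(CPUARMState *env)
{
    return arm_current_el(env) < 3
        && arm_feature(env, ARM_FEATURE_EL3)
        && !FIELD_EX64(env->cp15.cptr_el[3], CPTR_EL3, ESM);
}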
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         }
     }
 
+    /*
+     * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
+     * Handler mode (and will be until we write the new XPSR.Interrupt
+     * field) this does not switch around the current stack pointer.
+     * We must do this before we do any kind of tailchaining, including
+     * for the derived exceptions on integrity check failures, or we will
+     * give the guest an incorrect EXCRET.SPSEL value on exception entry.
+     */
+    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
+
     if (sfault) {
         env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }
 
-    /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
-     * Handler mode (and will be until we write the new XPSR.Interrupt
-     * field) this does not switch around the current stack pointer.
-     */
-    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
-
     switch_v7m_security_state(env, return_to_secure);
 
     {
-- 
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

These are required to determine if various insns
are allowed to issue.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           | 2 ++
 target/arm/translate.h     | 4 ++++
 target/arm/helper.c        | 4 ++++
 target/arm/translate-a64.c | 2 ++
 4 files changed, 12 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, TCMA, 16, 2)
 FIELD(TBFLAG_A64, MTE_ACTIVE, 18, 1)
 FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
 FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
+FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
+FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
 
 /*
  * Helpers for using the above.
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
     bool align_mem;
     /* True if PSTATE.IL is set */
     bool pstate_il;
+    /* True if PSTATE.SM is set. */
+    bool pstate_sm;
+    /* True if PSTATE.ZA is set. */
+    bool pstate_za;
     /* True if MVE insns are definitely not predicated by VPR or LTPSIZE */
     bool mve_no_pred;
     /*
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
     }
     if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
         DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
+        if (FIELD_EX64(env->svcr, SVCR, SM)) {
+            DP_TBFLAG_A64(flags, PSTATE_SM, 1);
+        }
+        DP_TBFLAG_A64(flags, PSTATE_ZA, FIELD_EX64(env->svcr, SVCR, ZA));
     }
 
     sctlr = regime_sctlr(env, stage1);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
     dc->ata = EX_TBFLAG_A64(tb_flags, ATA);
     dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE);
     dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE);
+    dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM);
+    dc->pstate_za = EX_TBFLAG_A64(tb_flags, PSTATE_ZA);
     dc->vec_len = 0;
     dc->vec_stride = 0;
     dc->cp_regs = arm_cpu->cp_regs;
-- 
2.25.1
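The point of caching PSTATE.SM/ZA in the TB flags is that the translator can test them without reaching into env. A hypothetical consumer (invented for illustration; no such insn handler is part of this patch, and real trans_* functions also take a decoded-args struct):

/* Illustrative only: a translator-time streaming-mode gate. */
static bool trans_example_streaming_insn(DisasContext *s)
{
    if (!s->pstate_sm) {
        return false;   /* outside streaming mode: treat as unallocated */
    }
    /* ... emit code for the streaming-mode insn ... */
    return true;
}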
From: Luc Michel <luc.michel@greensocs.com>

Add the necessary parts of the virtualization extensions state to the
GIC state. We choose to increase the size of the CPU interfaces state to
add space for the vCPU interfaces (the GIC_NCPU_VCPU macro). This way,
we'll be able to reuse most of the CPU interface code for the vCPUs.

The only exception is the APR value, which is stored in h_apr in the
virtual interface state for vCPUs. This is due to some complications
with the GIC VMState, for which we don't want to break backward
compatibility. APRs being stored in 2D arrays, increasing the second
dimension would lead to some ugly VMState description. To avoid
that, we keep it in h_apr for vCPUs.

The vCPUs are numbered from GIC_NCPU to (GIC_NCPU * 2) - 1. The
`gic_is_vcpu` function helps to determine whether a given CPU id
corresponds to a physical CPU or a virtual one.

For the in-kernel KVM VGIC, since the exposed VGIC does not implement
the virtualization extensions, we report an error if the corresponding
property is set to true.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-6-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h           |   5 ++
 include/hw/intc/arm_gic_common.h |  43 +++++++--
 hw/intc/arm_gic.c                |   2 +-
 hw/intc/arm_gic_common.c         | 148 ++++++++++++++++++++++++++-----
 hw/intc/arm_gic_kvm.c            |   8 +-
 5 files changed, 173 insertions(+), 33 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
     }
 }
 
+static inline bool gic_is_vcpu(int cpu)
+{
+    return cpu >= GIC_NCPU;
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/arm_gic_common.h
+++ b/include/hw/intc/arm_gic_common.h
@@ -XXX,XX +XXX,XX @@
 #define GIC_NR_SGIS 16
 /* Maximum number of possible CPU interfaces, determined by GIC architecture */
 #define GIC_NCPU 8
+/* Maximum number of possible CPU interfaces with their respective vCPU */
+#define GIC_NCPU_VCPU (GIC_NCPU * 2)
 
 #define MAX_NR_GROUP_PRIO 128
 #define GIC_NR_APRS (MAX_NR_GROUP_PRIO / 32)
@@ -XXX,XX +XXX,XX @@
 #define GIC_MIN_BPR 0
 #define GIC_MIN_ABPR (GIC_MIN_BPR + 1)
 
+/* Architectural maximum number of list registers in the virtual interface */
+#define GIC_MAX_LR 64
+
+/* Only 32 priority levels and 32 preemption levels in the vCPU interfaces */
+#define GIC_VIRT_MAX_GROUP_PRIO_BITS 5
+#define GIC_VIRT_MAX_NR_GROUP_PRIO (1 << GIC_VIRT_MAX_GROUP_PRIO_BITS)
+#define GIC_VIRT_NR_APRS (GIC_VIRT_MAX_NR_GROUP_PRIO / 32)
+
+#define GIC_VIRT_MIN_BPR 2
+#define GIC_VIRT_MIN_ABPR (GIC_VIRT_MIN_BPR + 1)
+
 typedef struct gic_irq_state {
     /* The enable bits are only banked for per-cpu interrupts. */
     uint8_t enabled;
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     qemu_irq parent_fiq[GIC_NCPU];
     qemu_irq parent_virq[GIC_NCPU];
     qemu_irq parent_vfiq[GIC_NCPU];
+    qemu_irq maintenance_irq[GIC_NCPU];
+
     /* GICD_CTLR; for a GIC with the security extensions the NS banked version
      * of this register is just an alias of bit 1 of the S banked version.
      */
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     /* GICC_CTLR; again, the NS banked version is just aliases of bits of
      * the S banked register, so our state only needs to store the S version.
      */
-    uint32_t cpu_ctlr[GIC_NCPU];
+    uint32_t cpu_ctlr[GIC_NCPU_VCPU];
 
     gic_irq_state irq_state[GIC_MAXIRQ];
     uint8_t irq_target[GIC_MAXIRQ];
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      */
     uint8_t sgi_pending[GIC_NR_SGIS][GIC_NCPU];
 
-    uint16_t priority_mask[GIC_NCPU];
-    uint16_t running_priority[GIC_NCPU];
-    uint16_t current_pending[GIC_NCPU];
+    uint16_t priority_mask[GIC_NCPU_VCPU];
+    uint16_t running_priority[GIC_NCPU_VCPU];
+    uint16_t current_pending[GIC_NCPU_VCPU];
 
     /* If we present the GICv2 without security extensions to a guest,
      * the guest can configure the GICC_CTLR to configure group 1 binary point
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      * For a GIC with Security Extensions we use use bpr for the
      * secure copy and abpr as storage for the non-secure copy of the register.
      */
-    uint8_t bpr[GIC_NCPU];
-    uint8_t abpr[GIC_NCPU];
+    uint8_t bpr[GIC_NCPU_VCPU];
+    uint8_t abpr[GIC_NCPU_VCPU];
 
     /* The APR is implementation defined, so we choose a layout identical to
      * the KVM ABI layout for QEMU's implementation of the gic:
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
     uint32_t apr[GIC_NR_APRS][GIC_NCPU];
     uint32_t nsapr[GIC_NR_APRS][GIC_NCPU];
 
+    /* Virtual interface control registers */
+    uint32_t h_hcr[GIC_NCPU];
+    uint32_t h_misr[GIC_NCPU];
+    uint32_t h_lr[GIC_MAX_LR][GIC_NCPU];
+    uint32_t h_apr[GIC_NCPU];
+
+    /* Number of LRs implemented in this GIC instance */
+    uint32_t num_lrs;
+
     uint32_t num_cpu;
 
     MemoryRegion iomem; /* Distributor */
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
      */
     struct GICState *backref[GIC_NCPU];
     MemoryRegion cpuiomem[GIC_NCPU + 1]; /* CPU interfaces */
+    MemoryRegion vifaceiomem[GIC_NCPU + 1]; /* Virtual interfaces */
+    MemoryRegion vcpuiomem; /* vCPU interface */
+
     uint32_t num_irq;
     uint32_t revision;
     bool security_extn;
+    bool virt_extn;
     bool irq_reset_nonsecure; /* configure IRQs as group 1 (NS) on reset? */
     int dev_fd; /* kvm device fd if backed by kvm vgic support */
     Error *migration_blocker;
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGICCommonClass {
 } ARMGICCommonClass;
 
 void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
-                            const MemoryRegionOps *ops);
+                            const MemoryRegionOps *ops,
+                            const MemoryRegionOps *virt_ops);
 
 #endif
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
     }
 
     /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
-    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
+    gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
 
     /* Extra core-specific regions for the CPU interfaces. This is
      * necessary for "franken-GIC" implementations, for example on
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_common.c
+++ b/hw/intc/arm_gic_common.c
@@ -XXX,XX +XXX,XX @@ static int gic_post_load(void *opaque, int version_id)
     return 0;
 }
 
+static bool gic_virt_state_needed(void *opaque)
+{
+    GICState *s = (GICState *)opaque;
+
+    return s->virt_extn;
+}
+
 static const VMStateDescription vmstate_gic_irq_state = {
     .name = "arm_gic_irq_state",
     .version_id = 1,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic_irq_state = {
     }
 };
 
+static const VMStateDescription vmstate_gic_virt_state = {
+    .name = "arm_gic_virt_state",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = gic_virt_state_needed,
+    .fields = (VMStateField[]) {
+        /* Virtual interface */
+        VMSTATE_UINT32_ARRAY(h_hcr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_ARRAY(h_misr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_2DARRAY(h_lr, GICState, GIC_MAX_LR, GIC_NCPU),
+        VMSTATE_UINT32_ARRAY(h_apr, GICState, GIC_NCPU),
+
+        /* Virtual CPU interfaces */
+        VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, GIC_NCPU, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, GIC_NCPU, GIC_NCPU),
+
+        VMSTATE_END_OF_LIST()
+    }
+};
+
 static const VMStateDescription vmstate_gic = {
     .name = "arm_gic",
     .version_id = 12,
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic = {
     .post_load = gic_post_load,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32(ctlr, GICState),
-        VMSTATE_UINT32_ARRAY(cpu_ctlr, GICState, GIC_NCPU),
+        VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, 0, GIC_NCPU),
         VMSTATE_STRUCT_ARRAY(irq_state, GICState, GIC_MAXIRQ, 1,
                              vmstate_gic_irq_state, gic_irq_state),
         VMSTATE_UINT8_ARRAY(irq_target, GICState, GIC_MAXIRQ),
         VMSTATE_UINT8_2DARRAY(priority1, GICState, GIC_INTERNAL, GIC_NCPU),
         VMSTATE_UINT8_ARRAY(priority2, GICState, GIC_MAXIRQ - GIC_INTERNAL),
         VMSTATE_UINT8_2DARRAY(sgi_pending, GICState, GIC_NR_SGIS, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(priority_mask, GICState, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(running_priority, GICState, GIC_NCPU),
-        VMSTATE_UINT16_ARRAY(current_pending, GICState, GIC_NCPU),
-        VMSTATE_UINT8_ARRAY(bpr, GICState, GIC_NCPU),
-        VMSTATE_UINT8_ARRAY(abpr, GICState, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, 0, GIC_NCPU),
+        VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, 0, GIC_NCPU),
         VMSTATE_UINT32_2DARRAY(apr, GICState, GIC_NR_APRS, GIC_NCPU),
         VMSTATE_UINT32_2DARRAY(nsapr, GICState, GIC_NR_APRS, GIC_NCPU),
         VMSTATE_END_OF_LIST()
+    },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_gic_virt_state,
+        NULL
     }
 };
 
 void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
-                            const MemoryRegionOps *ops)
+                            const MemoryRegionOps *ops,
+                            const MemoryRegionOps *virt_ops)
 {
     SysBusDevice *sbd = SYS_BUS_DEVICE(s);
     int i = s->num_irq - GIC_INTERNAL;
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
     for (i = 0; i < s->num_cpu; i++) {
         sysbus_init_irq(sbd, &s->parent_vfiq[i]);
     }
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_cpu; i++) {
+            sysbus_init_irq(sbd, &s->maintenance_irq[i]);
+        }
+    }
 
     /* Distributor */
     memory_region_init_io(&s->iomem, OBJECT(s), ops, s, "gic_dist", 0x1000);
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
     memory_region_init_io(&s->cpuiomem[0], OBJECT(s), ops ? &ops[1] : NULL,
                           s, "gic_cpu", s->revision == 2 ? 0x2000 : 0x100);
     sysbus_init_mmio(sbd, &s->cpuiomem[0]);
+
+    if (s->virt_extn) {
+        memory_region_init_io(&s->vifaceiomem[0], OBJECT(s), virt_ops,
+                              s, "gic_viface", 0x1000);
+        sysbus_init_mmio(sbd, &s->vifaceiomem[0]);
+
+        memory_region_init_io(&s->vcpuiomem, OBJECT(s),
+                              virt_ops ? &virt_ops[1] : NULL,
+                              s, "gic_vcpu", 0x2000);
+        sysbus_init_mmio(sbd, &s->vcpuiomem);
+    }
 }
 
 static void arm_gic_common_realize(DeviceState *dev, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_realize(DeviceState *dev, Error **errp)
                    "the security extensions");
         return;
     }
+
+    if (s->virt_extn) {
+        if (s->revision != 2) {
+            error_setg(errp, "GIC virtualization extensions are only "
+                       "supported by revision 2");
+            return;
+        }
+
+        /* For now, set the number of implemented LRs to 4, as found in most
+         * real GICv2. This could be promoted as a QOM property if we need to
+         * emulate a variant with another num_lrs.
+         */
+        s->num_lrs = 4;
+    }
+}
+
+static inline void arm_gic_common_reset_irq_state(GICState *s, int first_cpu,
+                                                  int resetprio)
+{
+    int i, j;
+
+    for (i = first_cpu; i < first_cpu + s->num_cpu; i++) {
+        if (s->revision == REV_11MPCORE) {
+            s->priority_mask[i] = 0xf0;
+        } else {
+            s->priority_mask[i] = resetprio;
+        }
+        s->current_pending[i] = 1023;
+        s->running_priority[i] = 0x100;
+        s->cpu_ctlr[i] = 0;
+        s->bpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
+        s->abpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_ABPR : GIC_MIN_ABPR;
+
+        if (!gic_is_vcpu(i)) {
+            for (j = 0; j < GIC_INTERNAL; j++) {
+                s->priority1[j][i] = resetprio;
+            }
+            for (j = 0; j < GIC_NR_SGIS; j++) {
+                s->sgi_pending[j][i] = 0;
+            }
+        }
+    }
 }
 
 static void arm_gic_common_reset(DeviceState *dev)
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
     }
 
     memset(s->irq_state, 0, GIC_MAXIRQ * sizeof(gic_irq_state));
-    for (i = 0 ; i < s->num_cpu; i++) {
-        if (s->revision == REV_11MPCORE) {
-            s->priority_mask[i] = 0xf0;
-        } else {
-            s->priority_mask[i] = resetprio;
-        }
-        s->current_pending[i] = 1023;
-        s->running_priority[i] = 0x100;
-        s->cpu_ctlr[i] = 0;
-        s->bpr[i] = GIC_MIN_BPR;
-        s->abpr[i] = GIC_MIN_ABPR;
-        for (j = 0; j < GIC_INTERNAL; j++) {
-            s->priority1[j][i] = resetprio;
-        }
-        for (j = 0; j < GIC_NR_SGIS; j++) {
-            s->sgi_pending[j][i] = 0;
-        }
-    }
+    arm_gic_common_reset_irq_state(s, 0, resetprio);
+
+    if (s->virt_extn) {
+        /* vCPU states are stored at indexes GIC_NCPU .. GIC_NCPU+num_cpu.
+         * The exposed vCPU interface does not have security extensions.
+         */
+        arm_gic_common_reset_irq_state(s, GIC_NCPU, 0);
+    }
+
     for (i = 0; i < GIC_NR_SGIS; i++) {
         GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
         GIC_DIST_SET_EDGE_TRIGGER(i);
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
         }
     }
 
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_lrs; i++) {
+            for (j = 0; j < s->num_cpu; j++) {
+                s->h_lr[i][j] = 0;
+            }
+        }
+
+        for (i = 0; i < s->num_cpu; i++) {
+            s->h_hcr[i] = 0;
+            s->h_misr[i] = 0;
+        }
+    }
+
     s->ctlr = 0;
 }
 
@@ -XXX,XX +XXX,XX @@ static Property arm_gic_common_properties[] = {
     DEFINE_PROP_UINT32("revision", GICState, revision, 1),
     /* True if the GIC should implement the security extensions */
     DEFINE_PROP_BOOL("has-security-extensions", GICState, security_extn, 0),
+    /* True if the GIC should implement the virtualization extensions */
+    DEFINE_PROP_BOOL("has-virtualization-extensions", GICState, virt_extn, 0),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
         return;
     }
 
+    if (s->virt_extn) {
+        error_setg(errp, "the in-kernel VGIC does not implement the "
+                   "virtualization extensions");
+        return;
+    }
+
     if (!kvm_arm_gic_can_save_restore(s)) {
         error_setg(&s->migration_blocker, "This operating system kernel does "
                                           "not support vGICv2 migration");
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
         }
     }
 
-    gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL);
+    gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL, NULL);
 
     for (i = 0; i < s->num_irq - GIC_INTERNAL; i++) {
         qemu_irq irq = qdev_get_gpio_in(dev, i);
-- 
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Place this late in the resettable section of the structure,
to keep the most common element offsets from being > 64k.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-10-richard.henderson@linaro.org
[PMM: expanded comment on zarray[] format]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h     | 22 ++++++++++++++++++++++
 target/arm/machine.c | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
     } keys;
 
     uint64_t scxtnum_el[4];
+
+    /*
+     * SME ZA storage -- 256 x 256 byte array, with bytes in host word order,
+     * as we do with vfp.zregs[].  This corresponds to the architectural ZA
+     * array, where ZA[N] is in the least-significant bytes of env->zarray[N].
+     * When SVL is less than the architectural maximum, the accessible
+     * storage is restricted, such that if the SVL is X bytes the guest can
+     * see only the bottom X elements of zarray[], and only the least
+     * significant X bytes of each element of the array. (In other words,
+     * the observable part is always square.)
+     *
+     * The ZA storage can also be considered as a set of square tiles of
+     * elements of different sizes. The mapping from tiles to the ZA array
+     * is architecturally defined, such that for tiles of elements of esz
+     * bytes, the Nth row (or "horizontal slice") of tile T is in
+     * ZA[T + N * esz]. Note that this means that each tile is not contiguous
+     * in the ZA storage, because its rows are striped through the ZA array.
+     *
+     * Because this is so large, keep this toward the end of the reset area,
+     * to keep the offsets into the rest of the structure smaller.
+     */
+    ARMVectorReg zarray[ARM_MAX_VQ * 16];
 #endif
 
 #if defined(CONFIG_USER_ONLY)
diff --git a/target/arm/machine.c b/target/arm/machine.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/machine.c
+++ b/target/arm/machine.c
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_sve = {
         VMSTATE_END_OF_LIST()
     }
 };
 
+static const VMStateDescription vmstate_vreg = {
+    .name = "vreg",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64_ARRAY(d, ARMVectorReg, ARM_MAX_VQ * 2),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool za_needed(void *opaque)
+{
+    ARMCPU *cpu = opaque;
+
+    /*
+     * When ZA storage is disabled, its contents are discarded.
+     * It will be zeroed when ZA storage is re-enabled.
+     */
+    return FIELD_EX64(cpu->env.svcr, SVCR, ZA);
+}
+
+static const VMStateDescription vmstate_za = {
+    .name = "cpu/sme",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = za_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_STRUCT_ARRAY(env.zarray, ARMCPU, ARM_MAX_VQ * 16, 0,
+                             vmstate_vreg, ARMVectorReg),
+        VMSTATE_END_OF_LIST()
+    }
+};
 #endif /* AARCH64 */
 
 static bool serror_needed(void *opaque)
@@ -XXX,XX +XXX,XX @@ const VMStateDescription vmstate_arm_cpu = {
         &vmstate_m_security,
 #ifdef TARGET_AARCH64
         &vmstate_sve,
+        &vmstate_za,
 #endif
         &vmstate_serror,
         &vmstate_irq_line_state,
-- 
2.25.1
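Given the tile mapping described in the zarray[] comment above (the Nth row of a tile T with esz-byte elements lives in ZA[T + N * esz]), locating a horizontal slice is a one-liner; a sketch, not code from the series:

/* Illustrative: row N of tile T for element size esz bytes.
 * Assumes 0 <= T < esz and N below the row count for the current SVL.
 */
static inline ARMVectorReg *example_za_tile_row(CPUARMState *env,
                                                int t, int n, int esz)
{
    return &env->zarray[t + n * esz];
}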
From: Luc Michel <luc.michel@greensocs.com>

Implement the read and write functions for the virtual interface of the
virtualization extensions in the GICv2.

One mirror region per CPU is also created, which maps to that specific
CPU id. This is required by the GIC architecture specification.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-16-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 235 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 233 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_update(GICState *s)
 }
 
+/* Return true if this LR is empty, i.e. the corresponding bit
+ * in ELRSR is set.
+ */
+static inline bool gic_lr_entry_is_free(uint32_t entry)
+{
+    return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
+        && (GICH_LR_HW(entry) || !GICH_LR_EOI(entry));
+}
+
+/* Return true if this LR should trigger an EOI maintenance interrupt, i.e. the
+ * corresponding bit in EISR is set.
+ */
+static inline bool gic_lr_entry_is_eoi(uint32_t entry)
+{
+    return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
+        && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
+}
+
 static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
     return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
 }
 
+static uint32_t gic_compute_eisr(GICState *s, int cpu, int lr_start)
+{
+    int lr_idx;
+    uint32_t ret = 0;
+
+    for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+        ret = deposit32(ret, lr_idx - lr_start, 1,
+                        gic_lr_entry_is_eoi(*entry));
+    }
+
+    return ret;
+}
+
+static uint32_t gic_compute_elrsr(GICState *s, int cpu, int lr_start)
+{
+    int lr_idx;
+    uint32_t ret = 0;
+
+    for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+        ret = deposit32(ret, lr_idx - lr_start, 1,
+                        gic_lr_entry_is_free(*entry));
+    }
+
+    return ret;
+}
+
+static void gic_vmcr_write(GICState *s, uint32_t value, MemTxAttrs attrs)
+{
+    int vcpu = gic_get_current_vcpu(s);
+    uint32_t ctlr;
+    uint32_t abpr;
+    uint32_t bpr;
+    uint32_t prio_mask;
+
+    ctlr = FIELD_EX32(value, GICH_VMCR, VMCCtlr);
+    abpr = FIELD_EX32(value, GICH_VMCR, VMABP);
+    bpr = FIELD_EX32(value, GICH_VMCR, VMBP);
+    prio_mask = FIELD_EX32(value, GICH_VMCR, VMPriMask) << 3;
+
+    gic_set_cpu_control(s, vcpu, ctlr, attrs);
+    s->abpr[vcpu] = MAX(abpr, GIC_VIRT_MIN_ABPR);
+    s->bpr[vcpu] = MAX(bpr, GIC_VIRT_MIN_BPR);
+    gic_set_priority_mask(s, vcpu, prio_mask, attrs);
+}
+
+static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
+                                uint64_t *data, MemTxAttrs attrs)
+{
+    GICState *s = ARM_GIC(opaque);
+    int vcpu = cpu + GIC_NCPU;
+
+    switch (addr) {
+    case A_GICH_HCR: /* Hypervisor Control */
+        *data = s->h_hcr[cpu];
+        break;
+
+    case A_GICH_VTR: /* VGIC Type */
+        *data = FIELD_DP32(0, GICH_VTR, ListRegs, s->num_lrs - 1);
+        *data = FIELD_DP32(*data, GICH_VTR, PREbits,
+                           GIC_VIRT_MAX_GROUP_PRIO_BITS - 1);
+        *data = FIELD_DP32(*data, GICH_VTR, PRIbits,
+                           (7 - GIC_VIRT_MIN_BPR) - 1);
+        break;
+
+    case A_GICH_VMCR: /* Virtual Machine Control */
+        *data = FIELD_DP32(0, GICH_VMCR, VMCCtlr,
+                           extract32(s->cpu_ctlr[vcpu], 0, 10));
+        *data = FIELD_DP32(*data, GICH_VMCR, VMABP, s->abpr[vcpu]);
+        *data = FIELD_DP32(*data, GICH_VMCR, VMBP, s->bpr[vcpu]);
+        *data = FIELD_DP32(*data, GICH_VMCR, VMPriMask,
+                           extract32(s->priority_mask[vcpu], 3, 5));
+        break;
+
+    case A_GICH_MISR: /* Maintenance Interrupt Status */
+        *data = s->h_misr[cpu];
+        break;
+
+    case A_GICH_EISR0: /* End of Interrupt Status 0 and 1 */
+    case A_GICH_EISR1:
+        *data = gic_compute_eisr(s, cpu, (addr - A_GICH_EISR0) * 8);
+        break;
+
+    case A_GICH_ELRSR0: /* Empty List Status 0 and 1 */
+    case A_GICH_ELRSR1:
+        *data = gic_compute_elrsr(s, cpu, (addr - A_GICH_ELRSR0) * 8);
+        break;
+
+    case A_GICH_APR: /* Active Priorities */
+        *data = s->h_apr[cpu];
+        break;
+
+    case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
+    {
+        int lr_idx = (addr - A_GICH_LR0) / 4;
+
+        if (lr_idx > s->num_lrs) {
+            *data = 0;
+        } else {
+            *data = s->h_lr[lr_idx][cpu];
+        }
+        break;
+    }
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "gic_hyp_read: Bad offset %" HWADDR_PRIx "\n", addr);
+        return MEMTX_OK;
+    }
+
+    return MEMTX_OK;
+}
+
+static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
+                                 uint64_t value, MemTxAttrs attrs)
+{
+    GICState *s = ARM_GIC(opaque);
+    int vcpu = cpu + GIC_NCPU;
+
+    switch (addr) {
+    case A_GICH_HCR: /* Hypervisor Control */
+        s->h_hcr[cpu] = value & GICH_HCR_MASK;
+        break;
+
+    case A_GICH_VMCR: /* Virtual Machine Control */
+        gic_vmcr_write(s, value, attrs);
+        break;
+
+    case A_GICH_APR: /* Active Priorities */
+        s->h_apr[cpu] = value;
+        s->running_priority[vcpu] = gic_get_prio_from_apr_bits(s, vcpu);
+        break;
+
+    case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
+    {
+        int lr_idx = (addr - A_GICH_LR0) / 4;
+
+        if (lr_idx > s->num_lrs) {
+            return MEMTX_OK;
+        }
+
+        s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
+        break;
+    }
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "gic_hyp_write: Bad offset %" HWADDR_PRIx "\n", addr);
+        return MEMTX_OK;
+    }
+
+    return MEMTX_OK;
+}
+
+static MemTxResult gic_thiscpu_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
+                                        unsigned size, MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_hyp_read(s, gic_get_current_cpu(s), addr, data, attrs);
+}
+
+static MemTxResult gic_thiscpu_hyp_write(void *opaque, hwaddr addr,
+                                         uint64_t value, unsigned size,
+                                         MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_hyp_write(s, gic_get_current_cpu(s), addr, value, attrs);
+}
+
+static MemTxResult gic_do_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
+                                   unsigned size, MemTxAttrs attrs)
+{
+    GICState **backref = (GICState **)opaque;
+    GICState *s = *backref;
+    int id = (backref - s->backref);
+
+    return gic_hyp_read(s, id, addr, data, attrs);
+}
+
+static MemTxResult gic_do_hyp_write(void *opaque, hwaddr addr,
+                                    uint64_t value, unsigned size,
+                                    MemTxAttrs attrs)
+{
+    GICState **backref = (GICState **)opaque;
+    GICState *s = *backref;
+    int id = (backref - s->backref);
+
+    return gic_hyp_write(s, id + GIC_NCPU, addr, value, attrs);
+
+}
+
 static const MemoryRegionOps gic_ops[2] = {
     {
         .read_with_attrs = gic_dist_read,
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
 
 static const MemoryRegionOps gic_virt_ops[2] = {
     {
-        .read_with_attrs = NULL,
-        .write_with_attrs = NULL,
+        .read_with_attrs = gic_thiscpu_hyp_read,
+        .write_with_attrs = gic_thiscpu_hyp_write,
         .endianness = DEVICE_NATIVE_ENDIAN,
     },
     {
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_virt_ops[2] = {
     }
 };
 
+static const MemoryRegionOps gic_viface_ops = {
+    .read_with_attrs = gic_do_hyp_read,
+    .write_with_attrs = gic_do_hyp_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+};
+
 static void arm_gic_realize(DeviceState *dev, Error **errp)

From: Richard Henderson <richard.henderson@linaro.org>

These two instructions are aliases of MSR (immediate).
Use the two helpers to properly implement svcr_write.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h           |  1 +
 target/arm/helper-sme.h    | 21 +++++++++++++
 target/arm/helper.h        |  1 +
 target/arm/helper.c        |  6 ++--
 target/arm/sme_helper.c    | 61 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-a64.c | 24 +++++++++++++++
 target/arm/meson.build     |  1 +
 7 files changed, 112 insertions(+), 3 deletions(-)
 create mode 100644 target/arm/helper-sme.h
 create mode 100644 target/arm/sme_helper.c

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ void aarch64_sve_change_el(CPUARMState *env, int old_el,
                            int new_el, bool el0_a64);
 void aarch64_add_sve_properties(Object *obj);
 void aarch64_add_pauth_properties(Object *obj);
+void arm_reset_sve_state(CPUARMState *env);
 
 /*
  * SVE registers are encoded in KVM's memory in an endianness-invariant format.
diff --git a/target/arm/helper-sme.h b/target/arm/helper-sme.h
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/helper-sme.h
@@ -XXX,XX +XXX,XX @@
+/*
+ * AArch64 SME specific helper definitions
+ *
+ * Copyright (c) 2022 Linaro, Ltd
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+DEF_HELPER_FLAGS_2(set_pstate_sm, TCG_CALL_NO_RWG, void, env, i32)
+DEF_HELPER_FLAGS_2(set_pstate_za, TCG_CALL_NO_RWG, void, env, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -XXX,XX +XXX,XX @@ DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
+#include "helper-sme.h"
 #endif
 
 #include "helper-mve.h"
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_esm(CPUARMState *env, const ARMCPRegInfo *ri,
 static void svcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
                        uint64_t value)
 {
-    value &= R_SVCR_SM_MASK | R_SVCR_ZA_MASK;
-    /* TODO: Side effects. */
-    env->svcr = value;
+    helper_set_pstate_sm(env, FIELD_EX64(value, SVCR, SM));
+    helper_set_pstate_za(env, FIELD_EX64(value, SVCR, ZA));
+    arm_rebuild_hflags(env);
 }
 
 static void smcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
diff --git a/target/arm/sme_helper.c b/target/arm/sme_helper.c
new file mode 100644
index XXXXXXX..XXXXXXX
--- /dev/null
+++ b/target/arm/sme_helper.c
@@ -XXX,XX +XXX,XX @@
+/*
+ * ARM SME Operations
+ *
+ * Copyright (c) 2022 Linaro, Ltd.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+#include "exec/helper-proto.h"
+
+/* ResetSVEState */
+void arm_reset_sve_state(CPUARMState *env)
+{
+    memset(env->vfp.zregs, 0, sizeof(env->vfp.zregs));
+    /* Recall that FFR is stored as pregs[16]. */
+    memset(env->vfp.pregs, 0, sizeof(env->vfp.pregs));
+    vfp_set_fpcr(env, 0x0800009f);
+}
+
+void helper_set_pstate_sm(CPUARMState *env, uint32_t i)
+{
+    if (i == FIELD_EX64(env->svcr, SVCR, SM)) {
+        return;
+    }
+    env->svcr ^= R_SVCR_SM_MASK;
+    arm_reset_sve_state(env);
+}
+
+void helper_set_pstate_za(CPUARMState *env, uint32_t i)
+{
+    if (i == FIELD_EX64(env->svcr, SVCR, ZA)) {
+        return;
+    }
+    env->svcr ^= R_SVCR_ZA_MASK;
+
+    /*
+     * ResetSMEState.
+     *
+     * SetPSTATE_ZA zeros on enable and disable.  We can zero this only
+     * on enable: while disabled, the storage is inaccessible and the
+     * value does not matter.  We're not saving the storage in vmstate
+     * when disabled either.
+     */
+    if (i) {
+        memset(env->zarray, 0, sizeof(env->zarray));
+    }
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void handle_msr_i(DisasContext *s, uint32_t insn,
         }
         break;
 
+    case 0x1b: /* SVCR* */
+        if (!dc_isar_feature(aa64_sme, s) || crm < 2 || crm > 7) {
+            goto do_unallocated;
+        }
+        if (sme_access_check(s)) {
+            bool i = crm & 1;
+            bool changed = false;
+
+            if ((crm & 2) && i != s->pstate_sm) {
+                gen_helper_set_pstate_sm(cpu_env, tcg_constant_i32(i));
+                changed = true;
+            }
+            if ((crm & 4) && i != s->pstate_za) {
+                gen_helper_set_pstate_za(cpu_env, tcg_constant_i32(i));
+                changed = true;
+            }
+            if (changed) {
+                gen_rebuild_hflags(s);
+            } else {
+                s->base.is_jmp = DISAS_NEXT;
+            }
+        }
+        break;
+
     default:
     do_unallocated:
         unallocated_encoding(s);
diff --git a/target/arm/meson.build b/target/arm/meson.build
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -XXX,XX +XXX,XX @@ arm_ss.add(when: 'TARGET_AARCH64', if_true: files(
   'mte_helper.c',
   'pauth_helper.c',
   'sve_helper.c',
+  'sme_helper.c',
   'translate-a64.c',
   'translate-sve.c',
))
269
{
270
/* Device instance realize function for the GIC sysbus device */
271
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
272
&s->backref[i], "gic_cpu", 0x100);
273
sysbus_init_mmio(sbd, &s->cpuiomem[i+1]);
274
}
275
+
276
+ /* Extra core-specific regions for virtual interfaces. This is required by
277
+ * the GICv2 specification.
278
+ */
279
+ if (s->virt_extn) {
280
+ for (i = 0; i < s->num_cpu; i++) {
281
+ memory_region_init_io(&s->vifaceiomem[i + 1], OBJECT(s),
282
+ &gic_viface_ops, &s->backref[i],
283
+ "gic_viface", 0x1000);
284
+ sysbus_init_mmio(sbd, &s->vifaceiomem[i + 1]);
285
+ }
286
+ }
287
+
288
}
289
290
static void arm_gic_class_init(ObjectClass *klass, void *data)
291
--
204
--
292
2.18.0
205
2.25.1
293
294
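For illustration only (not part of either patch): the SVCR MSR crm encoding
handled by the new 0x1b case above, as a tiny standalone C demo. crm bit 0 is
the value to set, bit 1 selects PSTATE.SM, bit 2 selects PSTATE.ZA; values
outside 2..7 are unallocated. The demo is a sketch, not QEMU code.

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    for (unsigned crm = 2; crm <= 7; crm++) {
        bool value = crm & 1;   /* new state for the selected PSTATE bits */
        printf("crm=%u: targets %s%s-> %d\n", crm,
               (crm & 2) ? "SM " : "", (crm & 4) ? "ZA " : "", value);
    }
    return 0;
}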
From: Richard Henderson <richard.henderson@linaro.org>

The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.

Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    |  5 ----
 target/arm/translate-sve.c | 49 +++++++++++++++++++++++++-------------
 2 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
         return flags;
     }
 
-    /* Scale from predicate element count to bits.  */
-    count <<= esz;
-    /* Bound to the bits in the predicate.  */
-    count = MIN(count, oprsz * 8);
-
     /* Set all of the requested bits.  */
     for (i = 0; i < count / 64; ++i) {
         d->p[i] = esz_mask;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
 
 static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
 {
-    if (!sve_access_check(s)) {
-        return true;
-    }
-
-    TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
-    TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 op0, op1, t0, t1, tmax;
     TCGv_i32 t2, t3;
     TCGv_ptr ptr;
     unsigned desc, vsz = vec_full_reg_size(s);
     TCGCond cond;
 
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    op0 = read_cpu_reg(s, a->rn, 1);
+    op1 = read_cpu_reg(s, a->rm, 1);
+
     if (!a->sf) {
         if (a->u) {
             tcg_gen_ext32u_i64(op0, op0);
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
 
     /* For the helper, compress the different conditions into a computation
      * of how many iterations for which the condition is true.
-     *
-     * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
-     * 2**64 iterations, overflowing to 0.  Of course, predicate registers
-     * aren't that large, so any value >= predicate size is sufficient.
      */
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     tcg_gen_sub_i64(t0, op1, op0);
 
-    /* t0 = MIN(op1 - op0, vsz).  */
-    tcg_gen_movi_i64(t1, vsz);
-    tcg_gen_umin_i64(t0, t0, t1);
+    tmax = tcg_const_i64(vsz >> a->esz);
     if (a->eq) {
         /* Equality means one more iteration.  */
         tcg_gen_addi_i64(t0, t0, 1);
+
+        /* If op1 is max (un)signed integer (and the only time the addition
+         * above could overflow), then we produce an all-true predicate by
+         * setting the count to the vector length.  This is because the
+         * pseudocode is described as an increment + compare loop, and the
+         * max integer would always compare true.
+         */
+        tcg_gen_movi_i64(t1, (a->sf
+                              ? (a->u ? UINT64_MAX : INT64_MAX)
+                              : (a->u ? UINT32_MAX : INT32_MAX)));
+        tcg_gen_movcond_i64(TCG_COND_EQ, t0, op1, t1, tmax, t0);
     }
 
-    /* t0 = (condition true ? t0 : 0). */
+    /* Bound to the maximum.  */
+    tcg_gen_umin_i64(t0, t0, tmax);
+    tcg_temp_free_i64(tmax);
+
+    /* Set the count to zero if the condition is false.  */
     cond = (a->u
             ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
             : (a->eq ? TCG_COND_LE : TCG_COND_LT));
     tcg_gen_movi_i64(t1, 0);
     tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
+    tcg_temp_free_i64(t1);
 
+    /* Since we're bounded, pass as a 32-bit type.  */
     t2 = tcg_temp_new_i32();
     tcg_gen_extrl_i64_i32(t2, t0);
     tcg_temp_free_i64(t0);
-    tcg_temp_free_i64(t1);
+
+    /* Scale elements to bits.  */
+    tcg_gen_shli_i32(t2, t2, a->esz);
 
     desc = (vsz / 8) - 2;
     desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Keep all of the error messages together. This does mean that
when setting many sve length properties we'll only generate
one error, but we only really need one.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu64.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
                       "using only sve<N> properties.\n");
         } else {
             error_setg(errp, "cannot enable sve%d", vq * 128);
-            error_append_hint(errp, "This CPU does not support "
-                              "the vector length %d-bits.\n", vq * 128);
+            if (vq_supported) {
+                error_append_hint(errp, "This CPU does not support "
+                                  "the vector length %d-bits.\n", vq * 128);
+            } else {
+                error_append_hint(errp, "SVE not supported by KVM "
+                                  "on this host\n");
+            }
         }
         return;
     } else {
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
         return;
     }
 
-    if (value && kvm_enabled() && !kvm_arm_sve_supported()) {
-        error_setg(errp, "cannot enable %s", name);
-        error_append_hint(errp, "SVE not supported by KVM on this host\n");
-        return;
-    }
-
     cpu->sve_vq_map = deposit32(cpu->sve_vq_map, vq - 1, 1, value);
     cpu->sve_vq_init |= 1 << (vq - 1);
 }
--
2.25.1
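For illustration only (not part of the patch): the element-count logic that
the trans_WHILE change above implements with TCG ops, as a standalone C
sketch for an unsigned "while <=" compare. The function name and the
vsz_bytes/esz parameters are made up for the example; this is not QEMU code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Count of true iterations, bounded and scaled to predicate bits. */
static uint64_t while_pred_bits(uint64_t op0, uint64_t op1,
                                unsigned vsz_bytes, unsigned esz)
{
    uint64_t tmax = vsz_bytes >> esz;  /* number of elements in the vector */
    uint64_t count;

    if (op0 > op1) {
        return 0;                      /* condition false from the start */
    }
    count = op1 - op0 + 1;             /* <= means one more iteration */
    if (op1 == UINT64_MAX) {
        count = tmax;                  /* increment + compare never fails */
    }
    if (count > tmax) {
        count = tmax;                  /* bound to the predicate size */
    }
    return count << esz;               /* scale elements to bits */
}

int main(void)
{
    /* 256-bit vector, 4-byte elements: 8 elements, 4 predicate bits each. */
    printf("%" PRIu64 "\n", while_pred_bits(0, 3, 32, 2));          /* 16 */
    printf("%" PRIu64 "\n", while_pred_bits(0, UINT64_MAX, 32, 2)); /* 32 */
    return 0;
}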
From: Luc Michel <luc.michel@greensocs.com>

In preparation for the virtualization extensions implementation,
refactor the name of the functions and macros that act on the GIC
distributor to make that fact explicit. It will be useful to
differentiate them from the ones that will act on the virtual
interfaces.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-2-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h   |  51 ++++++------
 hw/intc/arm_gic.c        | 163 +++++++++++++++++++++------------------
 hw/intc/arm_gic_common.c |   6 +-
 hw/intc/arm_gic_kvm.c    |  23 +++---
 4 files changed, 127 insertions(+), 116 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@
 
 #define GIC_BASE_IRQ 0
 
-#define GIC_SET_ENABLED(irq, cm) s->irq_state[irq].enabled |= (cm)
-#define GIC_CLEAR_ENABLED(irq, cm) s->irq_state[irq].enabled &= ~(cm)
-#define GIC_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
-#define GIC_SET_PENDING(irq, cm) s->irq_state[irq].pending |= (cm)
-#define GIC_CLEAR_PENDING(irq, cm) s->irq_state[irq].pending &= ~(cm)
-#define GIC_SET_ACTIVE(irq, cm) s->irq_state[irq].active |= (cm)
-#define GIC_CLEAR_ACTIVE(irq, cm) s->irq_state[irq].active &= ~(cm)
-#define GIC_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
-#define GIC_SET_MODEL(irq) s->irq_state[irq].model = true
-#define GIC_CLEAR_MODEL(irq) s->irq_state[irq].model = false
-#define GIC_TEST_MODEL(irq) s->irq_state[irq].model
-#define GIC_SET_LEVEL(irq, cm) s->irq_state[irq].level |= (cm)
-#define GIC_CLEAR_LEVEL(irq, cm) s->irq_state[irq].level &= ~(cm)
-#define GIC_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
-#define GIC_SET_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = true
-#define GIC_CLEAR_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = false
-#define GIC_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
-#define GIC_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ?            \
+#define GIC_DIST_SET_ENABLED(irq, cm) (s->irq_state[irq].enabled |= (cm))
+#define GIC_DIST_CLEAR_ENABLED(irq, cm) (s->irq_state[irq].enabled &= ~(cm))
+#define GIC_DIST_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
+#define GIC_DIST_SET_PENDING(irq, cm) (s->irq_state[irq].pending |= (cm))
+#define GIC_DIST_CLEAR_PENDING(irq, cm) (s->irq_state[irq].pending &= ~(cm))
+#define GIC_DIST_SET_ACTIVE(irq, cm) (s->irq_state[irq].active |= (cm))
+#define GIC_DIST_CLEAR_ACTIVE(irq, cm) (s->irq_state[irq].active &= ~(cm))
+#define GIC_DIST_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
+#define GIC_DIST_SET_MODEL(irq) (s->irq_state[irq].model = true)
+#define GIC_DIST_CLEAR_MODEL(irq) (s->irq_state[irq].model = false)
+#define GIC_DIST_TEST_MODEL(irq) (s->irq_state[irq].model)
+#define GIC_DIST_SET_LEVEL(irq, cm) (s->irq_state[irq].level |= (cm))
+#define GIC_DIST_CLEAR_LEVEL(irq, cm) (s->irq_state[irq].level &= ~(cm))
+#define GIC_DIST_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
+#define GIC_DIST_SET_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger = true)
+#define GIC_DIST_CLEAR_EDGE_TRIGGER(irq) \
+    (s->irq_state[irq].edge_trigger = false)
+#define GIC_DIST_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
+#define GIC_DIST_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ?       \
                                     s->priority1[irq][cpu] :            \
                                     s->priority2[(irq) - GIC_INTERNAL])
-#define GIC_TARGET(irq) s->irq_target[irq]
-#define GIC_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
-#define GIC_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
-#define GIC_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
+#define GIC_DIST_TARGET(irq) (s->irq_target[irq])
+#define GIC_DIST_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
+#define GIC_DIST_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
+#define GIC_DIST_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
 
 #define GICD_CTLR_EN_GRP0 (1U << 0)
 #define GICD_CTLR_EN_GRP1 (1U << 1)
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
 void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
 void gic_update(GICState *s);
 void gic_init_irqs_and_distributor(GICState *s);
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
-                      MemTxAttrs attrs);
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
+                           MemTxAttrs attrs);
 
 static inline bool gic_test_pending(GICState *s, int irq, int cm)
 {
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
      * GICD_ISPENDR to set the state pending.
      */
     return (s->irq_state[irq].pending & cm) ||
-        (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_LEVEL(irq, cm));
+        (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_LEVEL(irq, cm));
 }
 
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
         best_prio = 0x100;
         best_irq = 1023;
         for (irq = 0; irq < s->num_irq; irq++) {
-            if (GIC_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
-                (!GIC_TEST_ACTIVE(irq, cm)) &&
-                (irq < GIC_INTERNAL || GIC_TARGET(irq) & cm)) {
-                if (GIC_GET_PRIORITY(irq, cpu) < best_prio) {
-                    best_prio = GIC_GET_PRIORITY(irq, cpu);
+            if (GIC_DIST_TEST_ENABLED(irq, cm) &&
+                gic_test_pending(s, irq, cm) &&
+                (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
+                (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
+                if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
+                    best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
                     best_irq = irq;
                 }
             }
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
         if (best_prio < s->priority_mask[cpu]) {
             s->current_pending[cpu] = best_irq;
             if (best_prio < s->running_priority[cpu]) {
-                int group = GIC_TEST_GROUP(best_irq, cm);
+                int group = GIC_DIST_TEST_GROUP(best_irq, cm);
 
                 if (extract32(s->ctlr, group, 1) &&
                     extract32(s->cpu_ctlr[cpu], group, 1)) {
@@ -XXX,XX +XXX,XX @@ void gic_set_pending_private(GICState *s, int cpu, int irq)
     }
 
     DPRINTF("Set %d pending cpu %d\n", irq, cpu);
-    GIC_SET_PENDING(irq, cm);
+    GIC_DIST_SET_PENDING(irq, cm);
     gic_update(s);
 }
 
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
     if (level) {
-        GIC_SET_LEVEL(irq, cm);
-        if (GIC_TEST_EDGE_TRIGGER(irq) || GIC_TEST_ENABLED(irq, cm)) {
+        GIC_DIST_SET_LEVEL(irq, cm);
+        if (GIC_DIST_TEST_EDGE_TRIGGER(irq) || GIC_DIST_TEST_ENABLED(irq, cm)) {
             DPRINTF("Set %d pending mask %x\n", irq, target);
-            GIC_SET_PENDING(irq, target);
+            GIC_DIST_SET_PENDING(irq, target);
         }
     } else {
-        GIC_CLEAR_LEVEL(irq, cm);
+        GIC_DIST_CLEAR_LEVEL(irq, cm);
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_generic(GICState *s, int irq, int level,
                                 int cm, int target)
 {
     if (level) {
-        GIC_SET_LEVEL(irq, cm);
+        GIC_DIST_SET_LEVEL(irq, cm);
         DPRINTF("Set %d pending mask %x\n", irq, target);
-        if (GIC_TEST_EDGE_TRIGGER(irq)) {
-            GIC_SET_PENDING(irq, target);
+        if (GIC_DIST_TEST_EDGE_TRIGGER(irq)) {
+            GIC_DIST_SET_PENDING(irq, target);
         }
     } else {
-        GIC_CLEAR_LEVEL(irq, cm);
+        GIC_DIST_CLEAR_LEVEL(irq, cm);
     }
 }
 
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
         /* The first external input line is internal interrupt 32.  */
         cm = ALL_CPU_MASK;
         irq += GIC_INTERNAL;
-        target = GIC_TARGET(irq);
+        target = GIC_DIST_TARGET(irq);
     } else {
         int cpu;
         irq -= (s->num_irq - GIC_INTERNAL);
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
 
     assert(irq >= GIC_NR_SGIS);
 
-    if (level == GIC_TEST_LEVEL(irq, cm)) {
+    if (level == GIC_DIST_TEST_LEVEL(irq, cm)) {
         return;
     }
 
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
     uint16_t pending_irq = s->current_pending[cpu];
 
     if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
-        int group = GIC_TEST_GROUP(pending_irq, (1 << cpu));
+        int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
         /* On a GIC without the security extensions, reading this register
          * behaves in the same way as a secure access to a GIC with them.
          */
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
 
     if (gic_has_groups(s) &&
         !(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
-        GIC_TEST_GROUP(irq, (1 << cpu))) {
+        GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
         bpr = s->abpr[cpu] - 1;
         assert(bpr >= 0);
     } else {
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
      */
     mask = ~0U << ((bpr & 7) + 1);
 
-    return GIC_GET_PRIORITY(irq, cpu) & mask;
+    return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
 }
 
 static void gic_activate_irq(GICState *s, int cpu, int irq)
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
     int regno = preemption_level / 32;
     int bitno = preemption_level % 32;
 
-    if (gic_has_groups(s) && GIC_TEST_GROUP(irq, (1 << cpu))) {
+    if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
         s->nsapr[regno][cpu] |= (1 << bitno);
     } else {
         s->apr[regno][cpu] |= (1 << bitno);
     }
 
     s->running_priority[cpu] = prio;
-    GIC_SET_ACTIVE(irq, 1 << cpu);
+    GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
 }
 
 static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         return irq;
     }
 
-    if (GIC_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
+    if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
         DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
         return 1023;
     }
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         /* Clear pending flags for both level and edge triggered interrupts.
          * Level triggered IRQs will be reasserted once they become inactive.
          */
-        GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
+        GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
+                                                             : cm);
         ret = irq;
     } else {
         if (irq < GIC_NR_SGIS) {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
             src = ctz32(s->sgi_pending[irq][cpu]);
             s->sgi_pending[irq][cpu] &= ~(1 << src);
             if (s->sgi_pending[irq][cpu] == 0) {
-                GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
+                GIC_DIST_CLEAR_PENDING(irq,
+                                       GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
+                                                                : cm);
             }
             ret = irq | ((src & 0x7) << 10);
         } else {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
              * interrupts.  (level triggered interrupts with an active line
              * remain pending, see gic_test_pending)
              */
-            GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
+            GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
+                                                                 : cm);
             ret = irq;
         }
     }
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
     return ret;
 }
 
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
                       MemTxAttrs attrs)
 {
     if (s->security_extn && !attrs.secure) {
-        if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
+        if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
             return; /* Ignore Non-secure access of Group0 IRQ */
         }
         val = 0x80 | (val >> 1); /* Non-secure view */
@@ -XXX,XX +XXX,XX @@ void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
     }
 }
 
-static uint32_t gic_get_priority(GICState *s, int cpu, int irq,
+static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
                                  MemTxAttrs attrs)
 {
-    uint32_t prio = GIC_GET_PRIORITY(irq, cpu);
+    uint32_t prio = GIC_DIST_GET_PRIORITY(irq, cpu);
 
     if (s->security_extn && !attrs.secure) {
-        if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
+        if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
             return 0; /* Non-secure access cannot read priority of Group0 IRQ */
         }
         prio = (prio << 1) & 0xff; /* Non-secure view */
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         return;
     }
 
-    group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
+    group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
 
     if (!gic_eoi_split(s, cpu, attrs)) {
         /* This is UNPREDICTABLE; we choose to ignore it */
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         return;
     }
 
-    GIC_CLEAR_ACTIVE(irq, cm);
+    GIC_DIST_CLEAR_ACTIVE(irq, cm);
 }
 
 void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
     if (s->revision == REV_11MPCORE) {
         /* Mark level triggered interrupts as pending if they are still
            raised.  */
-        if (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_ENABLED(irq, cm)
-            && GIC_TEST_LEVEL(irq, cm) && (GIC_TARGET(irq) & cm) != 0) {
+        if (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_ENABLED(irq, cm)
+            && GIC_DIST_TEST_LEVEL(irq, cm)
+            && (GIC_DIST_TARGET(irq) & cm) != 0) {
             DPRINTF("Set %d pending mask %x\n", irq, cm);
-            GIC_SET_PENDING(irq, cm);
+            GIC_DIST_SET_PENDING(irq, cm);
         }
     }
 
-    group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
+    group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
 
     if (s->security_extn && !attrs.secure && !group) {
         DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
 
     /* In GICv2 the guest can choose to split priority-drop and deactivate */
     if (!gic_eoi_split(s, cpu, attrs)) {
-        GIC_CLEAR_ACTIVE(irq, cm);
+        GIC_DIST_CLEAR_ACTIVE(irq, cm);
     }
     gic_update(s);
 }
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
                 goto bad_reg;
             }
             for (i = 0; i < 8; i++) {
-                if (GIC_TEST_GROUP(irq + i, cm)) {
+                if (GIC_DIST_TEST_GROUP(irq + i, cm)) {
                     res |= (1 << i);
                 }
             }
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
         res = 0;
         for (i = 0; i < 8; i++) {
             if (s->security_extn && !attrs.secure &&
-                !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                 continue; /* Ignore Non-secure access of Group0 IRQ */
             }
 
-            if (GIC_TEST_ENABLED(irq + i, cm)) {
+            if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
                 res |= (1 << i);
             }
         }
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
         mask = (irq < GIC_INTERNAL) ?  cm : ALL_CPU_MASK;
         for (i = 0; i < 8; i++) {
             if (s->security_extn && !attrs.secure &&
-                !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                 continue; /* Ignore Non-secure access of Group0 IRQ */
             }
 
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
         mask = (irq < GIC_INTERNAL) ?  cm : ALL_CPU_MASK;
         for (i = 0; i < 8; i++) {
             if (s->security_extn && !attrs.secure &&
-                !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                 continue; /* Ignore Non-secure access of Group0 IRQ */
             }
 
-            if (GIC_TEST_ACTIVE(irq + i, mask)) {
+            if (GIC_DIST_TEST_ACTIVE(irq + i, mask)) {
                 res |= (1 << i);
             }
         }
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
         irq = (offset - 0x400) + GIC_BASE_IRQ;
         if (irq >= s->num_irq)
             goto bad_reg;
-        res = gic_get_priority(s, cpu, irq, attrs);
+        res = gic_dist_get_priority(s, cpu, irq, attrs);
     } else if (offset < 0xc00) {
         /* Interrupt CPU Target.  */
         if (s->num_cpu == 1 && s->revision != REV_11MPCORE) {
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
             } else if (irq < GIC_INTERNAL) {
                 res = cm;
             } else {
-                res = GIC_TARGET(irq);
+                res = GIC_DIST_TARGET(irq);
             }
         }
     } else if (offset < 0xf00) {
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
         res = 0;
         for (i = 0; i < 4; i++) {
             if (s->security_extn && !attrs.secure &&
-                !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                 continue; /* Ignore Non-secure access of Group0 IRQ */
             }
 
-            if (GIC_TEST_MODEL(irq + i))
+            if (GIC_DIST_TEST_MODEL(irq + i)) {
                 res |= (1 << (i * 2));
-            if (GIC_TEST_EDGE_TRIGGER(irq + i))
+            }
+            if (GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
                 res |= (2 << (i * 2));
+            }
         }
     } else if (offset < 0xf10) {
         goto bad_reg;
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
         }
 
         if (s->security_extn && !attrs.secure &&
-            !GIC_TEST_GROUP(irq, 1 << cpu)) {
+            !GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
             res = 0; /* Ignore Non-secure access of Group0 IRQ */
         } else {
             res = s->sgi_pending[irq][cpu];
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
                 int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
                 if (value & (1 << i)) {
                     /* Group1 (Non-secure) */
-                    GIC_SET_GROUP(irq + i, cm);
+                    GIC_DIST_SET_GROUP(irq + i, cm);
                 } else {
                     /* Group0 (Secure) */
-                    GIC_CLEAR_GROUP(irq + i, cm);
+                    GIC_DIST_CLEAR_GROUP(irq + i, cm);
                 }
             }
         }
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
             for (i = 0; i < 8; i++) {
                 if (value & (1 << i)) {
                     int mask =
-                        (irq < GIC_INTERNAL) ? (1 << cpu) : GIC_TARGET(irq + i);
+                        (irq < GIC_INTERNAL) ? (1 << cpu)
+                                             : GIC_DIST_TARGET(irq + i);
                     int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
 
                     if (s->security_extn && !attrs.secure &&
-                        !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                        !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                         continue; /* Ignore Non-secure access of Group0 IRQ */
                     }
 
-                    if (!GIC_TEST_ENABLED(irq + i, cm)) {
+                    if (!GIC_DIST_TEST_ENABLED(irq + i, cm)) {
                         DPRINTF("Enabled IRQ %d\n", irq + i);
                         trace_gic_enable_irq(irq + i);
                     }
-                    GIC_SET_ENABLED(irq + i, cm);
+                    GIC_DIST_SET_ENABLED(irq + i, cm);
                     /* If a raised level triggered IRQ enabled then mark
                        is as pending.  */
-                    if (GIC_TEST_LEVEL(irq + i, mask)
-                        && !GIC_TEST_EDGE_TRIGGER(irq + i)) {
+                    if (GIC_DIST_TEST_LEVEL(irq + i, mask)
+                        && !GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
                         DPRINTF("Set %d pending mask %x\n", irq + i, mask);
-                        GIC_SET_PENDING(irq + i, mask);
+                        GIC_DIST_SET_PENDING(irq + i, mask);
                     }
                 }
             }
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
                 int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
 
                 if (s->security_extn && !attrs.secure &&
-                    !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                    !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                     continue; /* Ignore Non-secure access of Group0 IRQ */
                 }
 
-                if (GIC_TEST_ENABLED(irq + i, cm)) {
+                if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
                     DPRINTF("Disabled IRQ %d\n", irq + i);
                     trace_gic_disable_irq(irq + i);
                 }
-                GIC_CLEAR_ENABLED(irq + i, cm);
+                GIC_DIST_CLEAR_ENABLED(irq + i, cm);
             }
         }
     } else if (offset < 0x280) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
             for (i = 0; i < 8; i++) {
                 if (value & (1 << i)) {
                     if (s->security_extn && !attrs.secure &&
-                        !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                        !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                         continue; /* Ignore Non-secure access of Group0 IRQ */
                     }
 
-                    GIC_SET_PENDING(irq + i, GIC_TARGET(irq + i));
+                    GIC_DIST_SET_PENDING(irq + i, GIC_DIST_TARGET(irq + i));
                 }
             }
         } else if (offset < 0x300) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
 
             for (i = 0; i < 8; i++) {
                 if (s->security_extn && !attrs.secure &&
-                    !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                    !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                     continue; /* Ignore Non-secure access of Group0 IRQ */
                 }
 
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
                    for per-CPU interrupts.  It's unclear whether this is the
                    corect behavior.  */
                 if (value & (1 << i)) {
-                    GIC_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
+                    GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
                 }
             }
         } else if (offset < 0x400) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
         irq = (offset - 0x400) + GIC_BASE_IRQ;
         if (irq >= s->num_irq)
             goto bad_reg;
-        gic_set_priority(s, cpu, irq, value, attrs);
+        gic_dist_set_priority(s, cpu, irq, value, attrs);
     } else if (offset < 0xc00) {
         /* Interrupt CPU Target. RAZ/WI on uniprocessor GICs, with the
          * annoying exception of the 11MPCore's GIC.
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
             value |= 0xaa;
         for (i = 0; i < 4; i++) {
             if (s->security_extn && !attrs.secure &&
-                !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
+                !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
                 continue; /* Ignore Non-secure access of Group0 IRQ */
             }
 
             if (s->revision == REV_11MPCORE) {
                 if (value & (1 << (i * 2))) {
-                    GIC_SET_MODEL(irq + i);
+                    GIC_DIST_SET_MODEL(irq + i);
                 } else {
-                    GIC_CLEAR_MODEL(irq + i);
+                    GIC_DIST_CLEAR_MODEL(irq + i);
                 }
             }
             if (value & (2 << (i * 2))) {
-                GIC_SET_EDGE_TRIGGER(irq + i);
+                GIC_DIST_SET_EDGE_TRIGGER(irq + i);
             } else {
-                GIC_CLEAR_EDGE_TRIGGER(irq + i);
+                GIC_DIST_CLEAR_EDGE_TRIGGER(irq + i);
             }
         }
     } else if (offset < 0xf10) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
         irq = (offset - 0xf10);
 
         if (!s->security_extn || attrs.secure ||
-            GIC_TEST_GROUP(irq, 1 << cpu)) {
+            GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
             s->sgi_pending[irq][cpu] &= ~value;
             if (s->sgi_pending[irq][cpu] == 0) {
-                GIC_CLEAR_PENDING(irq, 1 << cpu);
+                GIC_DIST_CLEAR_PENDING(irq, 1 << cpu);
             }
         }
     } else if (offset < 0xf30) {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
         irq = (offset - 0xf20);
 
         if (!s->security_extn || attrs.secure ||
-            GIC_TEST_GROUP(irq, 1 << cpu)) {
-            GIC_SET_PENDING(irq, 1 << cpu);
+            GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
+            GIC_DIST_SET_PENDING(irq, 1 << cpu);
             s->sgi_pending[irq][cpu] |= value;
         }
     } else {
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
             mask = ALL_CPU_MASK;
             break;
         }
-        GIC_SET_PENDING(irq, mask);
+        GIC_DIST_SET_PENDING(irq, mask);
         target_cpu = ctz32(mask);
         while (target_cpu < GIC_NCPU) {
             s->sgi_pending[irq][target_cpu] |= (1 << cpu);
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_common.c
+++ b/hw/intc/arm_gic_common.c
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
         }
     }
     for (i = 0; i < GIC_NR_SGIS; i++) {
-        GIC_SET_ENABLED(i, ALL_CPU_MASK);
-        GIC_SET_EDGE_TRIGGER(i);
+        GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
+        GIC_DIST_SET_EDGE_TRIGGER(i);
     }
 
     for (i = 0; i < ARRAY_SIZE(s->priority2); i++) {
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
     }
     if (s->security_extn && s->irq_reset_nonsecure) {
         for (i = 0; i < GIC_MAXIRQ; i++) {
-            GIC_SET_GROUP(i, ALL_CPU_MASK);
+            GIC_DIST_SET_GROUP(i, ALL_CPU_MASK);
         }
     }
 
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic_kvm.c
+++ b/hw/intc/arm_gic_kvm.c
@@ -XXX,XX +XXX,XX @@ static void translate_group(GICState *s, int irq, int cpu,
     int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
 
     if (to_kernel) {
-        *field = GIC_TEST_GROUP(irq, cm);
+        *field = GIC_DIST_TEST_GROUP(irq, cm);
     } else {
         if (*field & 1) {
-            GIC_SET_GROUP(irq, cm);
+            GIC_DIST_SET_GROUP(irq, cm);
         }
     }
 }
@@ -XXX,XX +XXX,XX @@ static void translate_enabled(GICState *s, int irq, int cpu,
     int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
 
     if (to_kernel) {
-        *field = GIC_TEST_ENABLED(irq, cm);
+        *field = GIC_DIST_TEST_ENABLED(irq, cm);
     } else {
         if (*field & 1) {
-            GIC_SET_ENABLED(irq, cm);
+            GIC_DIST_SET_ENABLED(irq, cm);
         }
     }
 }
@@ -XXX,XX +XXX,XX @@ static void translate_pending(GICState *s, int irq, int cpu,
         *field = gic_test_pending(s, irq, cm);
     } else {
         if (*field & 1) {
-            GIC_SET_PENDING(irq, cm);
+            GIC_DIST_SET_PENDING(irq, cm);
             /* TODO: Capture is level-line is held high in the kernel */
         }
     }
@@ -XXX,XX +XXX,XX @@ static void translate_active(GICState *s, int irq, int cpu,
     int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
 
     if (to_kernel) {
-        *field = GIC_TEST_ACTIVE(irq, cm);
+        *field = GIC_DIST_TEST_ACTIVE(irq, cm);
     } else {
         if (*field & 1) {
-            GIC_SET_ACTIVE(irq, cm);
+            GIC_DIST_SET_ACTIVE(irq, cm);
         }
     }
 }
@@ -XXX,XX +XXX,XX @@ static void translate_trigger(GICState *s, int irq, int cpu,
                               uint32_t *field, bool to_kernel)
 {
     if (to_kernel) {
-        *field = (GIC_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
+        *field = (GIC_DIST_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
     } else {
         if (*field & 0x2) {
-            GIC_SET_EDGE_TRIGGER(irq);
+            GIC_DIST_SET_EDGE_TRIGGER(irq);
         }
     }
 }
@@ -XXX,XX +XXX,XX @@ static void translate_priority(GICState *s, int irq, int cpu,
                                uint32_t *field, bool to_kernel)
 {
     if (to_kernel) {
-        *field = GIC_GET_PRIORITY(irq, cpu) & 0xff;
+        *field = GIC_DIST_GET_PRIORITY(irq, cpu) & 0xff;
     } else {
-        gic_set_priority(s, cpu, irq, *field & 0xff, MEMTXATTRS_UNSPECIFIED);
+        gic_dist_set_priority(s, cpu, irq,
+                              *field & 0xff, MEMTXATTRS_UNSPECIFIED);
     }
 }
 
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Pull the three sve_vq_* values into a structure.
This will be reused for SME.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h    | 29 ++++++++++++++---------------
 target/arm/cpu64.c  | 22 +++++++++++-----------
 target/arm/helper.c |  2 +-
 target/arm/kvm64.c  |  2 +-
 4 files changed, 27 insertions(+), 28 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef enum ARMPSCIState {
 
 typedef struct ARMISARegisters ARMISARegisters;
 
+/*
+ * In map, each set bit is a supported vector length of (bit-number + 1) * 16
+ * bytes, i.e. each bit number + 1 is the vector length in quadwords.
+ *
+ * While processing properties during initialization, corresponding init bits
+ * are set for bits in sve_vq_map that have been set by properties.
+ *
+ * Bits set in supported represent valid vector lengths for the CPU type.
+ */
+typedef struct {
+    uint32_t map, init, supported;
+} ARMVQMap;
+
 /**
  * ARMCPU:
  * @env: #CPUARMState
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
     uint32_t sve_default_vq;
 #endif
 
-    /*
-     * In sve_vq_map each set bit is a supported vector length of
-     * (bit-number + 1) * 16 bytes, i.e. each bit number + 1 is the vector
-     * length in quadwords.
-     *
-     * While processing properties during initialization, corresponding
-     * sve_vq_init bits are set for bits in sve_vq_map that have been
-     * set by properties.
-     *
-     * Bits set in sve_vq_supported represent valid vector lengths for
-     * the CPU type.
-     */
-    uint32_t sve_vq_map;
-    uint32_t sve_vq_init;
-    uint32_t sve_vq_supported;
+    ARMVQMap sve_vq;
 
     /* Generic timer counter frequency, in Hz */
     uint64_t gt_cntfrq_hz;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
      * any of the above.  Finally, if SVE is not disabled, then at least one
      * vector length must be enabled.
      */
-    uint32_t vq_map = cpu->sve_vq_map;
-    uint32_t vq_init = cpu->sve_vq_init;
+    uint32_t vq_map = cpu->sve_vq.map;
+    uint32_t vq_init = cpu->sve_vq.init;
     uint32_t vq_supported;
     uint32_t vq_mask = 0;
     uint32_t tmp, vq, max_vq = 0;
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
      */
     if (kvm_enabled()) {
         if (kvm_arm_sve_supported()) {
-            cpu->sve_vq_supported = kvm_arm_sve_get_vls(CPU(cpu));
-            vq_supported = cpu->sve_vq_supported;
+            cpu->sve_vq.supported = kvm_arm_sve_get_vls(CPU(cpu));
+            vq_supported = cpu->sve_vq.supported;
         } else {
             assert(!cpu_isar_feature(aa64_sve, cpu));
             vq_supported = 0;
         }
     } else {
-        vq_supported = cpu->sve_vq_supported;
+        vq_supported = cpu->sve_vq.supported;
     }
 
     /*
@@ -XXX,XX +XXX,XX @@ void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp)
 
     /* From now on sve_max_vq is the actual maximum supported length. */
     cpu->sve_max_vq = max_vq;
-    cpu->sve_vq_map = vq_map;
+    cpu->sve_vq.map = vq_map;
 }
 
 static void cpu_max_get_sve_max_vq(Object *obj, Visitor *v, const char *name,
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
     if (!cpu_isar_feature(aa64_sve, cpu)) {
         value = false;
     } else {
-        value = extract32(cpu->sve_vq_map, vq - 1, 1);
+        value = extract32(cpu->sve_vq.map, vq - 1, 1);
     }
     visit_type_bool(v, name, &value, errp);
 }
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
         return;
     }
 
-    cpu->sve_vq_map = deposit32(cpu->sve_vq_map, vq - 1, 1, value);
-    cpu->sve_vq_init |= 1 << (vq - 1);
+    cpu->sve_vq.map = deposit32(cpu->sve_vq.map, vq - 1, 1, value);
+    cpu->sve_vq.init |= 1 << (vq - 1);
 }
 
 static bool cpu_arm_get_sve(Object *obj, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
     cpu->dcz_blocksize = 7; /*  512 bytes */
 #endif
 
-    cpu->sve_vq_supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
+    cpu->sve_vq.supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
 
     aarch64_add_pauth_properties(obj);
     aarch64_add_sve_properties(obj);
@@ -XXX,XX +XXX,XX @@ static void aarch64_a64fx_initfn(Object *obj)
 
     /* The A64FX supports only 128, 256 and 512 bit vector lengths */
     aarch64_add_sve_properties(obj);
-    cpu->sve_vq_supported = (1 << 0)  /* 128bit */
+    cpu->sve_vq.supported = (1 << 0)  /* 128bit */
                           | (1 << 1)  /* 256bit */
                           | (1 << 3); /* 512bit */
 
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
         len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
     }
 
-    len = 31 - clz32(cpu->sve_vq_map & MAKE_64BIT_MASK(0, len + 1));
+    len = 31 - clz32(cpu->sve_vq.map & MAKE_64BIT_MASK(0, len + 1));
     return len;
 }
 
diff --git a/target/arm/kvm64.c b/target/arm/kvm64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/kvm64.c
+++ b/target/arm/kvm64.c
@@ -XXX,XX +XXX,XX @@ uint32_t kvm_arm_sve_get_vls(CPUState *cs)
 static int kvm_arm_sve_set_vls(CPUState *cs)
 {
     ARMCPU *cpu = ARM_CPU(cs);
-    uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq_map };
+    uint64_t vls[KVM_ARM64_SVE_VLS_WORDS] = { cpu->sve_vq.map };
     struct kvm_one_reg reg = {
         .id = KVM_REG_ARM64_SVE_VLS,
         .addr = (uint64_t)&vls[0],
--
2.25.1
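For illustration only (not part of the patch): the ARMVQMap bit encoding
described in the new cpu.h comment above, as a standalone C demo. Bit N set
in "map" means a supported vector length of (N + 1) quadwords, i.e.
(N + 1) * 128 bits; the sample map is the A64FX value from the diff. This
sketch is not QEMU code.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t map = (1u << 0) | (1u << 1) | (1u << 3); /* 128, 256, 512 bit */
    unsigned bit;

    for (bit = 0; bit < 32; bit++) {
        if (map & (1u << bit)) {
            printf("vq %u: %u-bit vectors\n", bit + 1, (bit + 1) * 128);
        }
    }
    return 0;
}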
From: Luc Michel <luc.michel@greensocs.com>

Add the gic_update_virt() function to update the vCPU interface states
and raise vIRQ and vFIQ as needed. This commit renames gic_update() to
gic_update_internal() and generalizes it to handle both cases, with a
`virt' parameter to track whether we are updating the CPU or vCPU
interfaces.

The main difference between CPU and vCPU is the way we select the best
IRQ. This part has been split into the gic_get_best_(v)irq functions.
For the virt case, the LRs are iterated to find the best candidate.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-17-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 175 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 136 insertions(+), 39 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
     return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
 }
 
+static inline void gic_get_best_irq(GICState *s, int cpu,
+                                    int *best_irq, int *best_prio, int *group)
+{
+    int irq;
+    int cm = 1 << cpu;
+
+    *best_irq = 1023;
+    *best_prio = 0x100;
+
+    for (irq = 0; irq < s->num_irq; irq++) {
+        if (GIC_DIST_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
+            (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
+            (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
+            if (GIC_DIST_GET_PRIORITY(irq, cpu) < *best_prio) {
+                *best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
+                *best_irq = irq;
+            }
+        }
+    }
+
+    if (*best_irq < 1023) {
+        *group = GIC_DIST_TEST_GROUP(*best_irq, cm);
+    }
+}
+
+static inline void gic_get_best_virq(GICState *s, int cpu,
+                                     int *best_irq, int *best_prio, int *group)
+{
+    int lr_idx = 0;
+
+    *best_irq = 1023;
+    *best_prio = 0x100;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t lr_entry = s->h_lr[lr_idx][cpu];
+        int state = GICH_LR_STATE(lr_entry);
+
+        if (state == GICH_LR_STATE_PENDING) {
+            int prio = GICH_LR_PRIORITY(lr_entry);
+
+            if (prio < *best_prio) {
+                *best_prio = prio;
+                *best_irq = GICH_LR_VIRT_ID(lr_entry);
+                *group = GICH_LR_GROUP(lr_entry);
+            }
+        }
+    }
+}
+
+/* Return true if IRQ signaling is enabled for the given cpu and at least one
+ * of the given groups:
+ *   - in the non-virt case, the distributor must be enabled for one of the
+ *   given groups
+ *   - in the virt case, the virtual interface must be enabled.
+ *   - in all cases, the (v)CPU interface must be enabled for one of the given
+ *   groups.
+ */
+static inline bool gic_irq_signaling_enabled(GICState *s, int cpu, bool virt,
+                                             int group_mask)
+{
+    if (!virt && !(s->ctlr & group_mask)) {
+        return false;
+    }
+
+    if (virt && !(s->h_hcr[cpu] & R_GICH_HCR_EN_MASK)) {
+        return false;
+    }
+
+    if (!(s->cpu_ctlr[cpu] & group_mask)) {
+        return false;
+    }
+
+    return true;
+}
+
 /* TODO: Many places that call this routine could be optimized.  */
 /* Update interrupt status after enabled or pending bits have been changed.  */
-static void gic_update(GICState *s)
+static inline void gic_update_internal(GICState *s, bool virt)
 {
     int best_irq;
     int best_prio;
-    int irq;
     int irq_level, fiq_level;
-    int cpu;
-    int cm;
+    int cpu, cpu_iface;
+    int group = 0;
+    qemu_irq *irq_lines = virt ? s->parent_virq : s->parent_irq;
+    qemu_irq *fiq_lines = virt ? s->parent_vfiq : s->parent_fiq;
 
     for (cpu = 0; cpu < s->num_cpu; cpu++) {
-        cm = 1 << cpu;
-        s->current_pending[cpu] = 1023;
-        if (!(s->ctlr & (GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1))
-            || !(s->cpu_ctlr[cpu] & (GICC_CTLR_EN_GRP0 | GICC_CTLR_EN_GRP1))) {
-            qemu_irq_lower(s->parent_irq[cpu]);
-            qemu_irq_lower(s->parent_fiq[cpu]);
+        cpu_iface = virt ? (cpu + GIC_NCPU) : cpu;
+
+        s->current_pending[cpu_iface] = 1023;
+        if (!gic_irq_signaling_enabled(s, cpu, virt,
+                                       GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1)) {
+            qemu_irq_lower(irq_lines[cpu]);
+            qemu_irq_lower(fiq_lines[cpu]);
             continue;
         }
-        best_prio = 0x100;
-        best_irq = 1023;
-        for (irq = 0; irq < s->num_irq; irq++) {
-            if (GIC_DIST_TEST_ENABLED(irq, cm) &&
-                gic_test_pending(s, irq, cm) &&
-                (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
-                (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
-                if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
-                    best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
-                    best_irq = irq;
-                }
-            }
+
+        if (virt) {
+            gic_get_best_virq(s, cpu, &best_irq, &best_prio, &group);
+        } else {
+            gic_get_best_irq(s, cpu, &best_irq, &best_prio, &group);
         }
 
         if (best_irq != 1023) {
             trace_gic_update_bestirq(cpu, best_irq, best_prio,
-                s->priority_mask[cpu], s->running_priority[cpu]);
+                s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
         }
 
         irq_level = fiq_level = 0;
 
-        if (best_prio < s->priority_mask[cpu]) {
-            s->current_pending[cpu] = best_irq;
-            if (best_prio < s->running_priority[cpu]) {
-                int group = GIC_DIST_TEST_GROUP(best_irq, cm);
-
-                if (extract32(s->ctlr, group, 1) &&
-                    extract32(s->cpu_ctlr[cpu], group, 1)) {
-                    if (group == 0 && s->cpu_ctlr[cpu] & GICC_CTLR_FIQ_EN) {
+        if (best_prio < s->priority_mask[cpu_iface]) {
+            s->current_pending[cpu_iface] = best_irq;
+            if (best_prio < s->running_priority[cpu_iface]) {
+                if (gic_irq_signaling_enabled(s, cpu, virt, 1 << group)) {
+                    if (group == 0 &&
+                        s->cpu_ctlr[cpu_iface] & GICC_CTLR_FIQ_EN) {
                         DPRINTF("Raised pending FIQ %d (cpu %d)\n",
-                                best_irq, cpu);
+                                best_irq, cpu_iface);
                         fiq_level = 1;
-                        trace_gic_update_set_irq(cpu, "fiq", fiq_level);
+                        trace_gic_update_set_irq(cpu, virt ? "vfiq" : "fiq",
+                                                 fiq_level);
                     } else {
                         DPRINTF("Raised pending IRQ %d (cpu %d)\n",
-                                best_irq, cpu);
+                                best_irq, cpu_iface);
                         irq_level = 1;
-                        trace_gic_update_set_irq(cpu, "irq", irq_level);
+                        trace_gic_update_set_irq(cpu, virt ? "virq" : "irq",
+                                                 irq_level);
                     }
                 }
             }
         }
 
-        qemu_set_irq(s->parent_irq[cpu], irq_level);
-        qemu_set_irq(s->parent_fiq[cpu], fiq_level);
+        qemu_set_irq(irq_lines[cpu], irq_level);
+        qemu_set_irq(fiq_lines[cpu], fiq_level);
     }
 }
 
+static void gic_update(GICState *s)
+{
+    gic_update_internal(s, false);
+}
+
 /* Return true if this LR is empty, i.e. the corresponding bit
  * in ELRSR is set.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
         && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
 }
 
+static void gic_update_virt(GICState *s)
+{
+    gic_update_internal(s, true);
+}
+
 static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         }
     }
 
-    gic_update(s);
+    if (gic_is_vcpu(cpu)) {
+        gic_update_virt(s);
+    } else {
+        gic_update(s);
+    }
     DPRINTF("ACK %d\n", irq);
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
          */
        int rcpu = gic_get_vcpu_real_id(cpu);
         s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
+
+        /* Update the virtual interface in case a maintenance interrupt should
+         * be raised.
+         */
+        gic_update_virt(s);
         return;
     }
 
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         }
     }
 
+    gic_update_virt(s);
     return;
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
                       "gic_cpu_write: Bad offset %x\n", (int)offset);
         return MEMTX_OK;
     }
-    gic_update(s);
+
+    if (gic_is_vcpu(cpu)) {
+        gic_update_virt(s);
+    } else {
+        gic_update(s);
+    }
+
     return MEMTX_OK;
 }
 
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
         return MEMTX_OK;
     }
 
+    gic_update_virt(s);
     return MEMTX_OK;
 }
 
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Rename from cpu_arm_{get,set}_sve_vq, and take the
ARMVQMap as the opaque parameter.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-14-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu64.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_max_set_sve_max_vq(Object *obj, Visitor *v, const char *name,
 }
 
 /*
- * Note that cpu_arm_get/set_sve_vq cannot use the simpler
- * object_property_add_bool interface because they make use
- * of the contents of "name" to determine which bit on which
- * to operate.
+ * Note that cpu_arm_{get,set}_vq cannot use the simpler
+ * object_property_add_bool interface because they make use of the
+ * contents of "name" to determine which bit on which to operate.
 */
-static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
-                               void *opaque, Error **errp)
+static void cpu_arm_get_vq(Object *obj, Visitor *v, const char *name,
+                           void *opaque, Error **errp)
 {
     ARMCPU *cpu = ARM_CPU(obj);
+    ARMVQMap *vq_map = opaque;
     uint32_t vq = atoi(&name[3]) / 128;
     bool value;
 
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_sve_vq(Object *obj, Visitor *v, const char *name,
     if (!cpu_isar_feature(aa64_sve, cpu)) {
         value = false;
     } else {
-        value = extract32(cpu->sve_vq.map, vq - 1, 1);
+        value = extract32(vq_map->map, vq - 1, 1);
     }
     visit_type_bool(v, name, &value, errp);
 }
 
-static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
-                               void *opaque, Error **errp)
+static void cpu_arm_set_vq(Object *obj, Visitor *v, const char *name,
+                           void *opaque, Error **errp)
 {
-    ARMCPU *cpu = ARM_CPU(obj);
+    ARMVQMap *vq_map = opaque;
     uint32_t vq = atoi(&name[3]) / 128;
     bool value;
 
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_vq(Object *obj, Visitor *v, const char *name,
         return;
     }
 
-    cpu->sve_vq.map = deposit32(cpu->sve_vq.map, vq - 1, 1, value);
-    cpu->sve_vq.init |= 1 << (vq - 1);
+    vq_map->map = deposit32(vq_map->map, vq - 1, 1, value);
+    vq_map->init |= 1 << (vq - 1);
 }
 
 static bool cpu_arm_get_sve(Object *obj, Error **errp)
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
 
 void aarch64_add_sve_properties(Object *obj)
 {
+    ARMCPU *cpu = ARM_CPU(obj);
     uint32_t vq;
 
     object_property_add_bool(obj, "sve", cpu_arm_get_sve, cpu_arm_set_sve);
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
     for (vq = 1; vq <= ARM_MAX_VQ; ++vq) {
         char name[8];
         sprintf(name, "sve%d", vq * 128);
-        object_property_add(obj, name, "bool", cpu_arm_get_sve_vq,
-                            cpu_arm_set_sve_vq, NULL, NULL);
+        object_property_add(obj, name, "bool", cpu_arm_get_vq,
+                            cpu_arm_set_vq, NULL, &cpu->sve_vq);
     }
 }
 
 #ifdef CONFIG_USER_ONLY
--
2.25.1
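For illustration only (not part of the patch): how the renamed
cpu_arm_{get,set}_vq handlers derive the map bit from the property name, per
the comment above about using the contents of "name". Standalone C demo; the
property name is just an example, and this is not QEMU code.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *name = "sve256";        /* example QOM property name */
    unsigned vq = atoi(&name[3]) / 128; /* skip "sve", bits -> quadwords */

    printf("%s -> vq %u -> map bit %u\n", name, vq, vq - 1);
    return 0;
}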
From: Luc Michel <luc.michel@greensocs.com>

Add some helper functions to gic_internal.h to get or change the state
of an IRQ. When the current CPU is not a vCPU, the call is forwarded to
the GIC distributor. Otherwise, it acts on the list register matching
the IRQ in the current CPU virtual interface.

gic_clear_active can have a side effect on the distributor, even in the
vCPU case, when the corresponding LR has the HW field set.

Use those functions in the CPU interface code path to prepare for the
vCPU interface implementation.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727095421.386-10-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 83 ++++++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gic.c      | 32 +++++++---------
 2 files changed, 97 insertions(+), 18 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
 #define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
 #define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
 
+#define GICH_LR_CLEAR_PENDING(entry) \
+        ((entry) &= ~(GICH_LR_STATE_PENDING << R_GICH_LR0_State_SHIFT))
+#define GICH_LR_SET_ACTIVE(entry) \
+        ((entry) |= (GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
+#define GICH_LR_CLEAR_ACTIVE(entry) \
+        ((entry) &= ~(GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
+
 /* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
  * GICv2 and GICv2 with security extensions:
  */
@@ -XXX,XX +XXX,XX @@ static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
     g_assert_not_reached();
 }
 
+static inline bool gic_test_group(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        return GICH_LR_GROUP(*entry);
+    } else {
+        return GIC_DIST_TEST_GROUP(irq, 1 << cpu);
+    }
+}
+
+static inline void gic_clear_pending(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_CLEAR_PENDING(*entry);
+    } else {
+        /* Clear pending state for both level and edge triggered
+         * interrupts. (level triggered interrupts with an active line
+         * remain pending, see gic_test_pending)
+         */
+        GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
+                                                             : (1 << cpu));
+    }
+}
+
+static inline void gic_set_active(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_SET_ACTIVE(*entry);
+    } else {
+        GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
+    }
+}
+
+static inline void gic_clear_active(GICState *s, int irq, int cpu)

From: Richard Henderson <richard.henderson@linaro.org>

Rename from cpu_arm_{get,set}_sve_default_vec_len,
and take the pointer to default_vq from opaque.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-15-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu64.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
 
 #ifdef CONFIG_USER_ONLY
 /* Mirror linux /proc/sys/abi/sve_default_vector_length. */
-static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
-                                            const char *name, void *opaque,
-                                            Error **errp)
+static void cpu_arm_set_default_vec_len(Object *obj, Visitor *v,
+                                        const char *name, void *opaque,
+                                        Error **errp)
80
+
81
+static inline void gic_clear_active(GICState *s, int irq, int cpu)
82
+{
83
+ if (gic_is_vcpu(cpu)) {
84
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
85
+ GICH_LR_CLEAR_ACTIVE(*entry);
86
+
87
+ if (GICH_LR_HW(*entry)) {
88
+ /* Hardware interrupt. We must forward the deactivation request to
89
+ * the distributor.
90
+ */
91
+ int phys_irq = GICH_LR_PHYS_ID(*entry);
92
+ int rcpu = gic_get_vcpu_real_id(cpu);
93
+
94
+ if (phys_irq < GIC_NR_SGIS || phys_irq >= GIC_MAXIRQ) {
95
+ /* UNPREDICTABLE behaviour, we choose to ignore the request */
96
+ return;
97
+ }
98
+
99
+ /* This is equivalent to a NS write to DIR on the physical CPU
100
+ * interface. Hence group0 interrupt deactivation is ignored if
101
+ * the GIC is secure.
102
+ */
103
+ if (!s->security_extn || GIC_DIST_TEST_GROUP(phys_irq, 1 << rcpu)) {
104
+ GIC_DIST_CLEAR_ACTIVE(phys_irq, 1 << rcpu);
105
+ }
106
+ }
107
+ } else {
108
+ GIC_DIST_CLEAR_ACTIVE(irq, 1 << cpu);
109
+ }
110
+}
111
+
112
+static inline int gic_get_priority(GICState *s, int irq, int cpu)
113
+{
114
+ if (gic_is_vcpu(cpu)) {
115
+ uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
116
+ return GICH_LR_PRIORITY(*entry);
117
+ } else {
118
+ return GIC_DIST_GET_PRIORITY(irq, cpu);
119
+ }
120
+}
121
+
122
#endif /* QEMU_ARM_GIC_INTERNAL_H */
123
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
124
index XXXXXXX..XXXXXXX 100644
125
--- a/hw/intc/arm_gic.c
126
+++ b/hw/intc/arm_gic.c
127
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
128
uint16_t pending_irq = s->current_pending[cpu];
129
130
if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
131
- int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
132
+ int group = gic_test_group(s, pending_irq, cpu);
133
+
134
/* On a GIC without the security extensions, reading this register
135
* behaves in the same way as a secure access to a GIC with them.
136
*/
137
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
138
139
if (gic_has_groups(s) &&
140
!(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
141
- GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
142
+ gic_test_group(s, irq, cpu)) {
143
bpr = s->abpr[cpu] - 1;
144
assert(bpr >= 0);
145
} else {
146
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
147
*/
148
mask = ~0U << ((bpr & 7) + 1);
149
150
- return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
151
+ return gic_get_priority(s, irq, cpu) & mask;
152
}
153
154
static void gic_activate_irq(GICState *s, int cpu, int irq)
155
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
156
int regno = preemption_level / 32;
157
int bitno = preemption_level % 32;
158
159
- if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
160
+ if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
161
s->nsapr[regno][cpu] |= (1 << bitno);
162
} else {
163
s->apr[regno][cpu] |= (1 << bitno);
164
}
165
166
s->running_priority[cpu] = prio;
167
- GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
168
+ gic_set_active(s, irq, cpu);
169
}
170
171
static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
172
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
173
return irq;
174
}
175
176
- if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
177
+ if (gic_get_priority(s, irq, cpu) >= s->running_priority[cpu]) {
178
DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
179
return 1023;
180
}
181
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
182
/* Clear pending flags for both level and edge triggered interrupts.
183
* Level triggered IRQs will be reasserted once they become inactive.
184
*/
185
- GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
186
- : cm);
187
+ gic_clear_pending(s, irq, cpu);
188
ret = irq;
189
} else {
190
if (irq < GIC_NR_SGIS) {
191
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
192
src = ctz32(s->sgi_pending[irq][cpu]);
193
s->sgi_pending[irq][cpu] &= ~(1 << src);
194
if (s->sgi_pending[irq][cpu] == 0) {
195
- GIC_DIST_CLEAR_PENDING(irq,
196
- GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
197
- : cm);
198
+ gic_clear_pending(s, irq, cpu);
199
}
200
ret = irq | ((src & 0x7) << 10);
201
} else {
202
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
203
* interrupts. (level triggered interrupts with an active line
204
* remain pending, see gic_test_pending)
205
*/
206
- GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
207
- : cm);
208
+ gic_clear_pending(s, irq, cpu);
209
ret = irq;
210
}
211
}
212
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
213
214
static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
215
{
28
{
216
- int cm = 1 << cpu;
29
- ARMCPU *cpu = ARM_CPU(obj);
217
int group;
30
+ uint32_t *ptr_default_vq = opaque;
218
31
int32_t default_len, default_vq, remainder;
219
if (irq >= s->num_irq) {
32
220
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
33
if (!visit_type_int32(v, name, &default_len, errp)) {
34
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
35
36
/* Undocumented, but the kernel allows -1 to indicate "maximum". */
37
if (default_len == -1) {
38
- cpu->sve_default_vq = ARM_MAX_VQ;
39
+ *ptr_default_vq = ARM_MAX_VQ;
221
return;
40
return;
222
}
41
}
223
42
224
- group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
43
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve_default_vec_len(Object *obj, Visitor *v,
225
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
226
227
if (!gic_eoi_split(s, cpu, attrs)) {
228
/* This is UNPREDICTABLE; we choose to ignore it */
229
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
230
return;
44
return;
231
}
45
}
232
46
233
- GIC_DIST_CLEAR_ACTIVE(irq, cm);
47
- cpu->sve_default_vq = default_vq;
234
+ gic_clear_active(s, irq, cpu);
48
+ *ptr_default_vq = default_vq;
235
}
49
}
236
50
237
static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
51
-static void cpu_arm_get_sve_default_vec_len(Object *obj, Visitor *v,
238
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
52
- const char *name, void *opaque,
239
}
53
- Error **errp)
240
}
54
+static void cpu_arm_get_default_vec_len(Object *obj, Visitor *v,
241
55
+ const char *name, void *opaque,
242
- group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
56
+ Error **errp)
243
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
57
{
244
58
- ARMCPU *cpu = ARM_CPU(obj);
245
if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
59
- int32_t value = cpu->sve_default_vq * 16;
246
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
60
+ uint32_t *ptr_default_vq = opaque;
247
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
61
+ int32_t value = *ptr_default_vq * 16;
248
62
249
/* In GICv2 the guest can choose to split priority-drop and deactivate */
63
visit_type_int32(v, name, &value, errp);
250
if (!gic_eoi_split(s, cpu, attrs)) {
251
- GIC_DIST_CLEAR_ACTIVE(irq, cm);
252
+ gic_clear_active(s, irq, cpu);
253
}
254
gic_update(s);
255
}
64
}
65
@@ -XXX,XX +XXX,XX @@ void aarch64_add_sve_properties(Object *obj)
66
#ifdef CONFIG_USER_ONLY
67
/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
68
object_property_add(obj, "sve-default-vector-length", "int32",
69
- cpu_arm_get_sve_default_vec_len,
70
- cpu_arm_set_sve_default_vec_len, NULL, NULL);
71
+ cpu_arm_get_default_vec_len,
72
+ cpu_arm_set_default_vec_len, NULL,
73
+ &cpu->sve_default_vq);
74
#endif
75
}
76
256
--
77
--
257
2.18.0
78
2.25.1
258
259
diff view generated by jsdifflib
From: Julia Suvorova <jusual@mail.ru>

Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180718095628.26442-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/armv7m_nvic.c | 10 ++++++++++
 target/arm/cpu.c      |  4 ++++
 target/arm/helper.c   | 13 +++++++++++--
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
         return val;
     case 0xd24: /* System Handler Control and State (SHCSR) */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         val = 0;
         if (attrs.secure) {
             if (s->sec_vectors[ARMV7M_EXCP_MEM].active) {
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         cpu->env.v7m.scr[attrs.secure] = value;
         break;
     case 0xd14: /* Configuration Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
+
         /* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
         value &= (R_V7M_CCR_STKALIGN_MASK |
                   R_V7M_CCR_BFHFNMIGN_MASK |
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         cpu->env.v7m.ccr[attrs.secure] = value;
         break;
     case 0xd24: /* System Handler Control and State (SHCSR) */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         if (attrs.secure) {
             s->sec_vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
             /* Secure HardFault active bit cannot be written */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
         env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_NONBASETHRDENA_MASK;
         env->v7m.ccr[M_REG_S] |= R_V7M_CCR_NONBASETHRDENA_MASK;
 
+        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_UNALIGN_TRP_MASK;
+            env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
+        }
 
         /* Unlike A/R profile, M profile defines the reset LR value */
         env->regs[14] = 0xffffffff;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         env->v7m.primask[M_REG_NS] = val & 1;
         return;
     case 0x91: /* BASEPRI_NS */
-        if (!env->v7m.secure) {
+        if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
            return;
         }
         env->v7m.basepri[M_REG_NS] = val & 0xff;
         return;
     case 0x93: /* FAULTMASK_NS */
-        if (!env->v7m.secure) {
+        if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
            return;
         }
         env->v7m.faultmask[M_REG_NS] = val & 1;
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         env->v7m.primask[env->v7m.secure] = val & 1;
         break;
     case 17: /* BASEPRI */
+        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            goto bad_reg;
+        }
         env->v7m.basepri[env->v7m.secure] = val & 0xff;
         break;
     case 18: /* BASEPRI_MAX */
+        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            goto bad_reg;
+        }
         val &= 0xff;
         if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
                          || env->v7m.basepri[env->v7m.secure] == 0)) {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
         }
         break;
     case 19: /* FAULTMASK */
+        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            goto bad_reg;
+        }
         env->v7m.faultmask[env->v7m.secure] = val & 1;
         break;
     case 20: /* CONTROL */
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Drop the aa32-only inline fallbacks,
and just use a couple of ifdefs.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-16-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h       | 6 ------
 target/arm/internals.h | 3 +++
 target/arm/cpu.c       | 2 ++
 3 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ typedef struct {
 
 #ifdef TARGET_AARCH64
 # define ARM_MAX_VQ    16
-void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
-void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
-void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
 #else
 # define ARM_MAX_VQ    1
-static inline void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp) { }
-static inline void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp) { }
-static inline void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp) { }
 #endif
 
 typedef struct ARMVectorReg {
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ int arm_gdb_get_svereg(CPUARMState *env, GByteArray *buf, int reg);
 int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg);
 int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
 int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
+void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
+void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
+void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
 #endif
 
 #ifdef CONFIG_USER_ONLY
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
 {
     Error *local_err = NULL;
 
+#ifdef TARGET_AARCH64
     if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
         arm_cpu_sve_finalize(cpu, &local_err);
         if (local_err != NULL) {
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
             return;
         }
     }
+#endif
 
     if (kvm_enabled()) {
         kvm_arm_steal_time_finalize(cpu, &local_err);
--
2.25.1
From: Julia Suvorova <jusual@mail.ru>

The differences from ARMv7-M NVIC are:
  * ARMv6-M only supports up to 32 external interrupts
    (configurable feature already). The ICTR is reserved.
  * Active Bit Register is reserved.
  * ARMv6-M supports 4 priority levels against 256 in ARMv7-M.
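As a worked illustration of that last point (a sketch, not part of the
diff below): with num_prio_bits == 2 the masking applied in set_prio()
keeps only the top two bits of the priority byte, so just four levels
are representable:

    uint8_t prio = 0x7f;                  /* guest-written priority */
    prio &= MAKE_64BIT_MASK(8 - 2, 2);    /* MAKE_64BIT_MASK(6, 2) == 0xc0 */
    /* prio is now 0x40; only 0x00/0x40/0x80/0xc0 survive the mask */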

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h |  1 +
 hw/intc/armv7m_nvic.c         | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
     VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
     /* The PRIGROUP field in AIRCR is banked */
     uint32_t prigroup[M_REG_NUM_BANKS];
+    uint8_t num_prio_bits;
 
     /* v8M NVIC_ITNS state (stored as a bool per bit) */
     bool itns[NVIC_MAX_VECTORS];
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void set_prio(NVICState *s, unsigned irq, bool secure, uint8_t prio)
     assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
     assert(irq < s->num_irq);
 
+    prio &= MAKE_64BIT_MASK(8 - s->num_prio_bits, s->num_prio_bits);
+
     if (secure) {
         assert(exc_is_banked(irq));
         s->sec_vectors[irq].prio = prio;
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
 
     switch (offset) {
     case 4: /* Interrupt Control Type. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
     case 0xc: /* CPPWR */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
                           "Setting VECTRESET when not in DEBUG mode "
                           "is UNPREDICTABLE\n");
         }
-        s->prigroup[attrs.secure] = extract32(value,
-                                              R_V7M_AIRCR_PRIGROUP_SHIFT,
-                                              R_V7M_AIRCR_PRIGROUP_LENGTH);
+        if (arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            s->prigroup[attrs.secure] =
+                extract32(value,
+                          R_V7M_AIRCR_PRIGROUP_SHIFT,
+                          R_V7M_AIRCR_PRIGROUP_LENGTH);
+        }
         if (attrs.secure) {
             /* These bits are only writable by secure */
             cpu->env.v7m.aircr = value &
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
         break;
     case 0x300 ... 0x33f: /* NVIC Active */
         val = 0;
+
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_V7)) {
+            break;
+        }
+
         startvec = 8 * (offset - 0x300) + NVIC_FIRST_IRQ; /* vector # */
 
         for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
     /* include space for internal exception vectors */
     s->num_irq += NVIC_FIRST_IRQ;
 
+    s->num_prio_bits = arm_feature(&s->cpu->env, ARM_FEATURE_V7) ? 8 : 2;
+
     object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
                              "realized", &err);
     if (err != NULL) {
--
2.18.0
The io_readx() function needs to know whether the load it is
doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it
can pass the right value to the cpu_transaction_failed()
function. Plumb this information through from the softmmu
code.

This is currently not often going to give the wrong answer,
because usually instruction fetches go via get_page_addr_code().
However once we switch over to handling execution from non-RAM by
creating single-insn TBs, the path for an insn fetch to generate
a bus error will be through cpu_ld*_code() and io_readx(),
so without this change we will generate a d-side fault when we
should generate an i-side fault.

We also have to pass the access type via a CPU struct global
down to unassigned_mem_read(), for the benefit of the targets
which still use the cpu_unassigned_access() hook (m68k, mips,
sparc, xtensa).
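For orientation, the i-side/d-side distinction that unassigned_mem_read()
recovers from the stashed access type boils down to this (a sketch only;
the names match the diff below):

    MMUAccessType at = current_cpu->mem_io_access_type;
    bool is_exec = (at == MMU_INST_FETCH);  /* true -> report an i-side fault */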

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
---
 accel/tcg/softmmu_template.h | 11 +++++++----
 include/qom/cpu.h            |  6 ++++++
 accel/tcg/cputlb.c           |  5 +++--
 memory.c                     |  3 ++-
 4 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/softmmu_template.h
+++ b/accel/tcg/softmmu_template.h
@@ -XXX,XX +XXX,XX @@ static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
                                               size_t mmu_idx, size_t index,
                                               target_ulong addr,
                                               uintptr_t retaddr,
-                                              bool recheck)
+                                              bool recheck,
+                                              MMUAccessType access_type)
 {
     CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
     return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
-                    DATA_SIZE);
+                    access_type, DATA_SIZE);
 }
 #endif
 
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
         /* ??? Note that the io helpers always read data in the target
            byte ordering.  We should push the LE/BE request down into io. */
         res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
-                                    tlb_addr & TLB_RECHECK);
+                                    tlb_addr & TLB_RECHECK,
+                                    READ_ACCESS_TYPE);
         res = TGT_LE(res);
         return res;
     }
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
         /* ??? Note that the io helpers always read data in the target
            byte ordering.  We should push the LE/BE request down into io. */
         res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
-                                    tlb_addr & TLB_RECHECK);
+                                    tlb_addr & TLB_RECHECK,
+                                    READ_ACCESS_TYPE);
         res = TGT_BE(res);
         return res;
     }
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPUState {
      */
     uintptr_t mem_io_pc;
     vaddr mem_io_vaddr;
+    /*
+     * This is only needed for the legacy cpu_unassigned_access() hook;
+     * when all targets using it have been converted to use
+     * cpu_transaction_failed() instead it can be removed.
+     */
+    MMUAccessType mem_io_access_type;
 
     int kvm_fd;
     struct KVMState *kvm_state;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx,
                          target_ulong addr, uintptr_t retaddr,
-                         bool recheck, int size)
+                         bool recheck, MMUAccessType access_type, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
     hwaddr mr_offset;
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     }
 
     cpu->mem_io_vaddr = addr;
+    cpu->mem_io_access_type = access_type;
 
     if (mr->global_locking && !qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                                section->offset_within_address_space -
                                section->offset_within_region;
 
-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
+        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static uint64_t unassigned_mem_read(void *opaque, hwaddr addr,
     printf("Unassigned mem read " TARGET_FMT_plx "\n", addr);
 #endif
     if (current_cpu != NULL) {
-        cpu_unassigned_access(current_cpu, addr, false, false, 0, size);
+        bool is_exec = current_cpu->mem_io_access_type == MMU_INST_FETCH;
+        cpu_unassigned_access(current_cpu, addr, false, is_exec, 0, size);
     }
     return 0;
 }
--
2.18.0
When we support execution from non-RAM MMIO regions, get_page_addr_code()
will return -1 to indicate that there is no RAM at the requested address.
Handle this in the cpu-exec TB hashtable lookup code, treating it as
"no match found".

Note that the call to get_page_addr_code() in tb_lookup_cmp() needs
no changes -- a return of -1 will already correctly result in the
function returning false.
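From the caller's side the NULL result simply falls through to fresh code
generation; roughly (a sketch with locking elided, assuming the usual
tb_find()-style call site):

    tb = tb_htable_lookup(cpu, pc, cs_base, flags, cf_mask);
    if (tb == NULL) {
        /* not cached (or not cacheable): translate a fresh TB for this PC */
        tb = tb_gen_code(cpu, pc, cs_base, flags, cf_mask);
    }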

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-3-peter.maydell@linaro.org
---
 accel/tcg/cpu-exec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
     desc.trace_vcpu_dstate = *cpu->trace_dstate;
     desc.pc = pc;
     phys_pc = get_page_addr_code(desc.env, pc);
+    if (phys_pc == -1) {
+        return NULL;
+    }
     desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
     h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
     return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
--
2.18.0
When we support execution from non-RAM MMIO regions, get_page_addr_code()
will return -1 to indicate that there is no RAM at the requested address.
Handle this in tb_check_watchpoint() -- if the exception happened for a
PC which doesn't correspond to RAM then there is no need to invalidate
any TBs, because the one-instruction TB will not have been cached.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-4-peter.maydell@linaro.org
---
 accel/tcg/translate-all.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ void tb_check_watchpoint(CPUState *cpu)
 
         cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
         addr = get_page_addr_code(env, pc);
-        tb_invalidate_phys_range(addr, addr + 1);
+        if (addr != -1) {
+            tb_invalidate_phys_range(addr, addr + 1);
+        }
     }
 }
 
--
2.18.0
If get_page_addr_code() returns -1, this indicates that there is no RAM
page we can read a full TB from. Instead we must create a TB which
contains a single instruction and which we do not cache, so it is
executed only once.

Since this means we can now have TBs which are not in any page list,
we also need to make tb_phys_invalidate() handle them (by not trying
to remove them from a nonexistent page list).
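For reference, the cflags encoding this relies on (a sketch mirroring the
tb_gen_code() hunk below): the low CF_COUNT_MASK bits of cflags hold the
maximum instruction count for the TB, so

    cflags &= ~CF_COUNT_MASK;   /* drop any previous insn-count request */
    cflags |= CF_NOCACHE | 1;   /* uncached TB containing exactly one insn */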

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-5-peter.maydell@linaro.org
---
 accel/tcg/translate-all.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
  */
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
-    if (page_addr == -1) {
+    if (page_addr == -1 && tb->page_addr[0] != -1) {
         page_lock_tb(tb);
         do_tb_phys_invalidate(tb, true);
         page_unlock_tb(tb);
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
 
     assert_memory_lock();
 
+    if (phys_pc == -1) {
+        /*
+         * If the TB is not associated with a physical RAM page then
+         * it must be a temporary one-insn TB, and we have nothing to do
+         * except fill in the page_addr[] fields.
+         */
+        assert(tb->cflags & CF_NOCACHE);
+        tb->page_addr[0] = tb->page_addr[1] = -1;
+        return tb;
+    }
+
     /*
      * Add the TB to the page list, acquiring first the pages's locks.
      * We keep the locks held until after inserting the TB in the hash table,
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,
 
     phys_pc = get_page_addr_code(env, pc);
 
+    if (phys_pc == -1) {
+        /* Generate a temporary TB with 1 insn in it */
+        cflags &= ~CF_COUNT_MASK;
+        cflags |= CF_NOCACHE | 1;
+    }
+
 buffer_overflow:
     tb = tb_alloc(pc);
     if (unlikely(!tb)) {
--
2.18.0
Now that all the callers can handle get_page_addr_code() returning -1,
remove all the code which tries to handle execution from MMIO regions
or small-MMU-region RAM areas. This will mean that we can correctly
execute from these areas, rather than ending up either aborting QEMU
or delivering an incorrect guest exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-6-peter.maydell@linaro.org
---
 accel/tcg/cputlb.c | 95 +++++-----------------------------------------
 1 file changed, 10 insertions(+), 85 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                             prot, mmu_idx, size);
 }
 
-static void report_bad_exec(CPUState *cpu, target_ulong addr)
-{
-    /* Accidentally executing outside RAM or ROM is quite common for
-     * several user-error situations, so report it in a way that
-     * makes it clear that this isn't a QEMU bug and provide suggestions
-     * about what a user could do to fix things.
-     */
-    error_report("Trying to execute code outside RAM or ROM at 0x"
-                 TARGET_FMT_lx, addr);
-    error_printf("This usually means one of the following happened:\n\n"
-                 "(1) You told QEMU to execute a kernel for the wrong machine "
-                 "type, and it crashed on startup (eg trying to run a "
-                 "raspberry pi kernel on a versatilepb QEMU machine)\n"
-                 "(2) You didn't give QEMU a kernel or BIOS filename at all, "
-                 "and QEMU executed a ROM full of no-op instructions until "
-                 "it fell off the end\n"
-                 "(3) Your guest kernel has a bug and crashed by jumping "
-                 "off into nowhere\n\n"
-                 "This is almost always one of the first two, so check your "
-                 "command line and that you are using the right type of kernel "
-                 "for this machine.\n"
-                 "If you think option (3) is likely then you can try debugging "
-                 "your guest with the -d debug options; in particular "
-                 "-d guest_errors will cause the log to include a dump of the "
-                 "guest register state at this point.\n\n"
-                 "Execution cannot continue; stopping here.\n\n");
-
-    /* Report also to the logs, with more detail including register dump */
-    qemu_log_mask(LOG_GUEST_ERROR, "qemu: fatal: Trying to execute code "
-                  "outside RAM or ROM at 0x" TARGET_FMT_lx "\n", addr);
-    log_cpu_state_mask(LOG_GUEST_ERROR, cpu, CPU_DUMP_FPU | CPU_DUMP_CCOP);
-}
-
 static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
 {
     ram_addr_t ram_addr;
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
     MemoryRegionSection *section;
     CPUState *cpu = ENV_GET_CPU(env);
     CPUIOTLBEntry *iotlbentry;
-    hwaddr physaddr, mr_offset;
 
     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     mmu_idx = cpu_mmu_index(env, true);
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
     if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
         /*
          * This is a TLB_RECHECK access, where the MMU protection
-         * covers a smaller range than a target page, and we must
-         * repeat the MMU check here. This tlb_fill() call might
-         * longjump out if this access should cause a guest exception.
-         */
-        int index;
-        target_ulong tlb_addr;
-
-        tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
-
-        index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-        if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
-            /* RAM access. We can't handle this, so for now just stop */
-            cpu_abort(cpu, "Unable to handle guest executing from RAM within "
-                      "a small MPU region at 0x" TARGET_FMT_lx, addr);
-        }
-        /*
-         * Fall through to handle IO accesses (which will almost certainly
-         * also result in failure)
+         * covers a smaller range than a target page. Return -1 to
+         * indicate that we cannot simply execute from RAM here;
+         * we will perform the necessary repeat of the MMU check
+         * when the "execute a single insn" code performs the
+         * load of the guest insn.
          */
+        return -1;
     }
 
     iotlbentry = &env->iotlb[mmu_idx][index];
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     if (memory_region_is_unassigned(mr)) {
-        qemu_mutex_lock_iothread();
-        if (memory_region_request_mmio_ptr(mr, addr)) {
-            qemu_mutex_unlock_iothread();
-            /* A MemoryRegion is potentially added so re-run the
-             * get_page_addr_code.
-             */
-            return get_page_addr_code(env, addr);
-        }
-        qemu_mutex_unlock_iothread();
-
-        /* Give the new-style cpu_transaction_failed() hook first chance
-         * to handle this.
-         * This is not the ideal place to detect and generate CPU
-         * exceptions for instruction fetch failure (for instance
-         * we don't know the length of the access that the CPU would
-         * use, and it would be better to go ahead and try the access
-         * and use the MemTXResult it produced). However it is the
-         * simplest place we have currently available for the check.
+        /*
+         * Not guest RAM, so there is no ram_addr_t for it. Return -1,
+         * and we will execute a single insn from this device.
          */
-        mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
-        physaddr = mr_offset +
-            section->offset_within_address_space -
-            section->offset_within_region;
-        cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
-                               iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
-
-        cpu_unassigned_access(cpu, addr, false, true, 0, 4);
-        /* The CPU's unassigned access hook might have longjumped out
-         * with an exception. If it didn't (or there was no hook) then
-         * we can't proceed further.
-         */
-        report_bad_exec(cpu, addr);
-        exit(1);
+        return -1;
     }
     p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
     return qemu_ram_addr_from_host_nofail(p);
--
2.18.0
We set up TLB entries in tlb_set_page_with_attrs(), where we have
some logic for determining whether the TLB entry is considered
to be RAM-backed, and thus has a valid addend field. When we
look at the TLB entry in get_page_addr_code(), we use different
logic for determining whether to treat the page as RAM-backed
and use the addend field. This is confusing, and in fact buggy,
because the code in tlb_set_page_with_attrs() correctly decides
that rom_device memory regions not in romd mode are not RAM-backed,
but the code in get_page_addr_code() thinks they are RAM-backed.
This typically results in "Bad ram pointer" assertion if the
guest tries to execute from such a memory region.

Fix this by making get_page_addr_code() just look at the
TLB_MMIO bit in the code_address field of the TLB, which
tlb_set_page_with_attrs() sets if and only if the addend
field is not valid for code execution.
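The resulting check (shown with full context in the diff below) reduces to:

    if (env->tlb_table[mmu_idx][index].addr_code & (TLB_RECHECK | TLB_MMIO)) {
        return -1;  /* no whole RAM page to translate from; go insn-by-insn */
    }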

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180713150945.12348-1-peter.maydell@linaro.org
---
 include/exec/exec-all.h |  2 --
 accel/tcg/cputlb.c      | 29 ++++++++---------------------
 exec.c                  |  6 ------
 3 files changed, 8 insertions(+), 29 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
                                        hwaddr paddr, hwaddr xlat,
                                        int prot,
                                        target_ulong *address);
-bool memory_region_is_unassigned(MemoryRegion *mr);
-
 #endif
 
 /* vl.c */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
 {
     int mmu_idx, index;
     void *p;
-    MemoryRegion *mr;
-    MemoryRegionSection *section;
-    CPUState *cpu = ENV_GET_CPU(env);
-    CPUIOTLBEntry *iotlbentry;
 
     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     mmu_idx = cpu_mmu_index(env, true);
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
         assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
     }
 
-    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
+    if (unlikely(env->tlb_table[mmu_idx][index].addr_code &
+                 (TLB_RECHECK | TLB_MMIO))) {
         /*
-         * This is a TLB_RECHECK access, where the MMU protection
-         * covers a smaller range than a target page. Return -1 to
-         * indicate that we cannot simply execute from RAM here;
-         * we will perform the necessary repeat of the MMU check
-         * when the "execute a single insn" code performs the
-         * load of the guest insn.
+         * Return -1 if we can't translate and execute from an entire
+         * page of RAM here, which will cause us to execute by loading
+         * and translating one insn at a time, without caching:
+         *  - TLB_RECHECK: means the MMU protection covers a smaller range
+         *    than a target page, so we must redo the MMU check every insn
+         *  - TLB_MMIO: region is not backed by RAM
          */
         return -1;
     }
 
-    iotlbentry = &env->iotlb[mmu_idx][index];
-    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
-    mr = section->mr;
-    if (memory_region_is_unassigned(mr)) {
-        /*
-         * Not guest RAM, so there is no ram_addr_t for it. Return -1,
-         * and we will execute a single insn from this device.
-         */
-        return -1;
-    }
     p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
     return qemu_ram_addr_from_host_nofail(p);
 }
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d, hwaddr addr)
     }
 }
 
-bool memory_region_is_unassigned(MemoryRegion *mr)
-{
-    return mr != &io_mem_rom && mr != &io_mem_notdirty && !mr->rom_device
-        && mr != &io_mem_watch;
-}
-
 /* Called from RCU critical section */
 static MemoryRegionSection *address_space_lookup_region(AddressSpaceDispatch *d,
                                                         hwaddr addr,
--
2.18.0
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers in the GICv2.
4
Those registers allow to set or clear the active state of an IRQ in the
5
distributor.
6
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20180727095421.386-3-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
hw/intc/arm_gic.c | 61 +++++++++++++++++++++++++++++++++++++++++++----
13
1 file changed, 57 insertions(+), 4 deletions(-)
14
15
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
16
index XXXXXXX..XXXXXXX 100644
17
--- a/hw/intc/arm_gic.c
18
+++ b/hw/intc/arm_gic.c
19
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
20
}
21
}
22
} else if (offset < 0x400) {
23
- /* Interrupt Active. */
24
- irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
25
+ /* Interrupt Set/Clear Active. */
26
+ if (offset < 0x380) {
27
+ irq = (offset - 0x300) * 8;
28
+ } else if (s->revision == 2) {
29
+ irq = (offset - 0x380) * 8;
30
+ } else {
31
+ goto bad_reg;
32
+ }
33
+
34
+ irq += GIC_BASE_IRQ;
35
if (irq >= s->num_irq)
36
goto bad_reg;
37
res = 0;
38
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
39
GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
40
}
41
}
42
+ } else if (offset < 0x380) {
43
+ /* Interrupt Set Active. */
44
+ if (s->revision != 2) {
45
+ goto bad_reg;
46
+ }
47
+
48
+ irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
49
+ if (irq >= s->num_irq) {
50
+ goto bad_reg;
51
+ }
52
+
53
+ /* This register is banked per-cpu for PPIs */
54
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
55
+
56
+ for (i = 0; i < 8; i++) {
57
+ if (s->security_extn && !attrs.secure &&
58
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
59
+ continue; /* Ignore Non-secure access of Group0 IRQ */
60
+ }
61
+
62
+ if (value & (1 << i)) {
63
+ GIC_DIST_SET_ACTIVE(irq + i, cm);
64
+ }
65
+ }
66
} else if (offset < 0x400) {
67
- /* Interrupt Active. */
68
- goto bad_reg;
69
+ /* Interrupt Clear Active. */
70
+ if (s->revision != 2) {
71
+ goto bad_reg;
72
+ }
73
+
74
+ irq = (offset - 0x380) * 8 + GIC_BASE_IRQ;
75
+ if (irq >= s->num_irq) {
76
+ goto bad_reg;
77
+ }
78
+
79
+ /* This register is banked per-cpu for PPIs */
80
+ int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
81
+
82
+ for (i = 0; i < 8; i++) {
83
+ if (s->security_extn && !attrs.secure &&
84
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
85
+ continue; /* Ignore Non-secure access of Group0 IRQ */
86
+ }
87
+
88
+ if (value & (1 << i)) {
89
+ GIC_DIST_CLEAR_ACTIVE(irq + i, cm);
90
+ }
91
+ }
92
} else if (offset < 0x800) {
93
/* Interrupt Priority. */
94
irq = (offset - 0x400) + GIC_BASE_IRQ;
95
--
96
2.18.0
97
98
diff view generated by jsdifflib
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Some functions are now only used in arm_gic.c, put them static. Some of
4
them where only used by the NVIC implementation and are not used
5
anymore, so remove them.
6
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20180727095421.386-4-luc.michel@greensocs.com
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
13
hw/intc/gic_internal.h | 4 ----
14
hw/intc/arm_gic.c | 23 ++---------------------
15
2 files changed, 2 insertions(+), 25 deletions(-)
16
17
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
18
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/gic_internal.h
20
+++ b/hw/intc/gic_internal.h
21
@@ -XXX,XX +XXX,XX @@
22
/* The special cases for the revision property: */
23
#define REV_11MPCORE 0
24
25
-void gic_set_pending_private(GICState *s, int cpu, int irq);
26
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
27
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
28
-void gic_update(GICState *s);
29
-void gic_init_irqs_and_distributor(GICState *s);
30
void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
31
MemTxAttrs attrs);
32
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
34
index XXXXXXX..XXXXXXX 100644
35
--- a/hw/intc/arm_gic.c
36
+++ b/hw/intc/arm_gic.c
37
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
38
39
/* TODO: Many places that call this routine could be optimized. */
40
/* Update interrupt status after enabled or pending bits have been changed. */
41
-void gic_update(GICState *s)
42
+static void gic_update(GICState *s)
43
{
44
int best_irq;
45
int best_prio;
46
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
47
}
48
}
49
50
-void gic_set_pending_private(GICState *s, int cpu, int irq)
51
-{
52
- int cm = 1 << cpu;
53
-
54
- if (gic_test_pending(s, irq, cm)) {
55
- return;
56
- }
57
-
58
- DPRINTF("Set %d pending cpu %d\n", irq, cpu);
59
- GIC_DIST_SET_PENDING(irq, cm);
60
- gic_update(s);
61
-}
62
-
63
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
64
int cm, int target)
65
{
66
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
67
GIC_DIST_CLEAR_ACTIVE(irq, cm);
68
}
69
70
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
71
+static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
72
{
73
int cm = 1 << cpu;
74
int group;
75
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
76
.endianness = DEVICE_NATIVE_ENDIAN,
77
};
78
79
-/* This function is used by nvic model */
80
-void gic_init_irqs_and_distributor(GICState *s)
81
-{
82
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
83
-}
84
-
85
static void arm_gic_realize(DeviceState *dev, Error **errp)
86
{
87
/* Device instance realize function for the GIC sysbus device */
88
--
89
2.18.0
90
91
diff view generated by jsdifflib
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Provide a VMSTATE_UINT16_SUB_ARRAY macro to save a uint16_t sub-array in
4
a VMState.
5
6
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20180727095421.386-5-luc.michel@greensocs.com
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
include/migration/vmstate.h | 3 +++
13
1 file changed, 3 insertions(+)
14
15
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
16
index XXXXXXX..XXXXXXX 100644
17
--- a/include/migration/vmstate.h
18
+++ b/include/migration/vmstate.h
19
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
20
#define VMSTATE_UINT16_ARRAY(_f, _s, _n) \
21
VMSTATE_UINT16_ARRAY_V(_f, _s, _n, 0)
22
23
+#define VMSTATE_UINT16_SUB_ARRAY(_f, _s, _start, _num) \
24
+ VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_uint16, uint16_t)
25
+
26
#define VMSTATE_UINT16_2DARRAY(_f, _s, _n1, _n2) \
27
VMSTATE_UINT16_2DARRAY_V(_f, _s, _n1, _n2, 0)
28
29
--
30
2.18.0
31
32
diff view generated by jsdifflib
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
Add the register definitions for the virtual interface of the GICv2.
4
5
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180727095421.386-7-luc.michel@greensocs.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/intc/gic_internal.h | 65 ++++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 65 insertions(+)
12
13
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/gic_internal.h
16
+++ b/hw/intc/gic_internal.h
17
@@ -XXX,XX +XXX,XX @@
18
#ifndef QEMU_ARM_GIC_INTERNAL_H
19
#define QEMU_ARM_GIC_INTERNAL_H
20
21
+#include "hw/registerfields.h"
22
#include "hw/intc/arm_gic.h"
23
24
#define ALL_CPU_MASK ((unsigned)(((1 << GIC_NCPU) - 1)))
25
@@ -XXX,XX +XXX,XX @@
26
#define GICC_CTLR_EOIMODE (1U << 9)
27
#define GICC_CTLR_EOIMODE_NS (1U << 10)
28
29
+REG32(GICH_HCR, 0x0)
30
+ FIELD(GICH_HCR, EN, 0, 1)
31
+ FIELD(GICH_HCR, UIE, 1, 1)
32
+ FIELD(GICH_HCR, LRENPIE, 2, 1)
33
+ FIELD(GICH_HCR, NPIE, 3, 1)
34
+ FIELD(GICH_HCR, VGRP0EIE, 4, 1)
35
+ FIELD(GICH_HCR, VGRP0DIE, 5, 1)
36
+ FIELD(GICH_HCR, VGRP1EIE, 6, 1)
37
+ FIELD(GICH_HCR, VGRP1DIE, 7, 1)
38
+ FIELD(GICH_HCR, EOICount, 27, 5)
39
+
40
+#define GICH_HCR_MASK \
41
+ (R_GICH_HCR_EN_MASK | R_GICH_HCR_UIE_MASK | \
42
+ R_GICH_HCR_LRENPIE_MASK | R_GICH_HCR_NPIE_MASK | \
43
+ R_GICH_HCR_VGRP0EIE_MASK | R_GICH_HCR_VGRP0DIE_MASK | \
44
+ R_GICH_HCR_VGRP1EIE_MASK | R_GICH_HCR_VGRP1DIE_MASK | \
45
+ R_GICH_HCR_EOICount_MASK)
46
+
47
+REG32(GICH_VTR, 0x4)
48
+ FIELD(GICH_VTR, ListRegs, 0, 6)
49
+ FIELD(GICH_VTR, PREbits, 26, 3)
50
+ FIELD(GICH_VTR, PRIbits, 29, 3)
51
+
52
+REG32(GICH_VMCR, 0x8)
53
+ FIELD(GICH_VMCR, VMCCtlr, 0, 10)
54
+ FIELD(GICH_VMCR, VMABP, 18, 3)
55
+ FIELD(GICH_VMCR, VMBP, 21, 3)
56
+ FIELD(GICH_VMCR, VMPriMask, 27, 5)
57
+
58
+REG32(GICH_MISR, 0x10)
59
+ FIELD(GICH_MISR, EOI, 0, 1)
60
+ FIELD(GICH_MISR, U, 1, 1)
61
+ FIELD(GICH_MISR, LRENP, 2, 1)
62
+ FIELD(GICH_MISR, NP, 3, 1)
63
+ FIELD(GICH_MISR, VGrp0E, 4, 1)
64
+ FIELD(GICH_MISR, VGrp0D, 5, 1)
65
+ FIELD(GICH_MISR, VGrp1E, 6, 1)
66
+ FIELD(GICH_MISR, VGrp1D, 7, 1)
67
+
68
+REG32(GICH_EISR0, 0x20)
69
+REG32(GICH_EISR1, 0x24)
70
+REG32(GICH_ELRSR0, 0x30)
71
+REG32(GICH_ELRSR1, 0x34)
72
+REG32(GICH_APR, 0xf0)
73
+
74
+REG32(GICH_LR0, 0x100)
75
+ FIELD(GICH_LR0, VirtualID, 0, 10)
76
+ FIELD(GICH_LR0, PhysicalID, 10, 10)
77
+ FIELD(GICH_LR0, CPUID, 10, 3)
78
+ FIELD(GICH_LR0, EOI, 19, 1)
79
+ FIELD(GICH_LR0, Priority, 23, 5)
80
+ FIELD(GICH_LR0, State, 28, 2)
81
+ FIELD(GICH_LR0, Grp1, 30, 1)
82
+ FIELD(GICH_LR0, HW, 31, 1)
83
+
84
+/* Last LR register */
85
+REG32(GICH_LR63, 0x1fc)
86
+
87
+#define GICH_LR_MASK \
88
+ (R_GICH_LR0_VirtualID_MASK | R_GICH_LR0_PhysicalID_MASK | \
89
+ R_GICH_LR0_CPUID_MASK | R_GICH_LR0_EOI_MASK | \
90
+ R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
91
+ R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
92
+
93
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
94
* GICv2 and GICv2 with security extensions:
95
*/
96
--
97
2.18.0
98
99
diff view generated by jsdifflib
Deleted patch
1
From: Luc Michel <luc.michel@greensocs.com>
2
1
3
An access to the CPU interface is non-secure if the current GIC instance
4
implements the security extensions, and the memory access is actually
5
non-secure. Until then, it was checked with tests such as
6
if (s->security_extn && !attrs.secure) { ... }
7
in various places of the CPU interface code.
8
9
With the implementation of the virtualization extensions, those tests
10
must be updated to take into account whether we are in a vCPU interface
11
or not. This is because the exposed vCPU interface does not implement
12
security extensions.
13
14
This commits replaces all those tests with a call to the
15
gic_cpu_ns_access() function to check if the current access to the CPU
16
interface is non-secure. This function takes into account whether the
17
current CPU is a vCPU or not.
18
19
Note that this function is used only in the (v)CPU interface code path.
20
The distributor code path is left unchanged, as the distributor is not
21
exposed to vCPUs at all.
22
23
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
26
Message-id: 20180727095421.386-9-luc.michel@greensocs.com
27
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
28
---
29
hw/intc/arm_gic.c | 39 ++++++++++++++++++++++-----------------
30
1 file changed, 22 insertions(+), 17 deletions(-)
31
32
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
33
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/intc/arm_gic.c
35
+++ b/hw/intc/arm_gic.c
36
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
37
return s->revision == 2 || s->security_extn;
38
}
39
40
+static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
41
+{
42
+ return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
43
+}
44
+
45
/* TODO: Many places that call this routine could be optimized. */
46
/* Update interrupt status after enabled or pending bits have been changed. */
47
static void gic_update(GICState *s)
48
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
49
/* On a GIC without the security extensions, reading this register
50
* behaves in the same way as a secure access to a GIC with them.
51
*/
52
- bool secure = !s->security_extn || attrs.secure;
53
+ bool secure = !gic_cpu_ns_access(s, cpu, attrs);
54
55
if (group == 0 && !secure) {
56
/* Group0 interrupts hidden from Non-secure access */
57
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
58
static void gic_set_priority_mask(GICState *s, int cpu, uint8_t pmask,
59
MemTxAttrs attrs)
60
{
61
- if (s->security_extn && !attrs.secure) {
62
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
63
if (s->priority_mask[cpu] & 0x80) {
64
/* Priority Mask in upper half */
65
pmask = 0x80 | (pmask >> 1);
66
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_priority_mask(GICState *s, int cpu, MemTxAttrs attrs)
67
{
68
uint32_t pmask = s->priority_mask[cpu];
69
70
- if (s->security_extn && !attrs.secure) {
71
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
72
if (pmask & 0x80) {
73
/* Priority Mask in upper half, return Non-secure view */
74
pmask = (pmask << 1) & 0xff;
75
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_cpu_control(GICState *s, int cpu, MemTxAttrs attrs)
76
{
77
uint32_t ret = s->cpu_ctlr[cpu];
78
79
- if (s->security_extn && !attrs.secure) {
80
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
81
/* Construct the NS banked view of GICC_CTLR from the correct
82
* bits of the S banked view. We don't need to move the bypass
83
* control bits because we don't implement that (IMPDEF) part
84
@@ -XXX,XX +XXX,XX @@ static void gic_set_cpu_control(GICState *s, int cpu, uint32_t value,
85
{
86
uint32_t mask;
87
88
- if (s->security_extn && !attrs.secure) {
89
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
90
/* The NS view can only write certain bits in the register;
91
* the rest are unchanged
92
*/
93
@@ -XXX,XX +XXX,XX @@ static uint8_t gic_get_running_priority(GICState *s, int cpu, MemTxAttrs attrs)
94
return 0xff;
95
}
96
97
- if (s->security_extn && !attrs.secure) {
98
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
99
if (s->running_priority[cpu] & 0x80) {
100
/* Running priority in upper half of range: return the Non-secure
101
* view of the priority.
102
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
103
/* Before GICv2 prio-drop and deactivate are not separable */
104
return false;
105
}
106
- if (s->security_extn && !attrs.secure) {
107
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
108
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE_NS;
109
}
110
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE;
111
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
112
return;
113
}
114
115
- if (s->security_extn && !attrs.secure && !group) {
116
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
117
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
118
return;
119
}
120
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
121
122
group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
123
124
- if (s->security_extn && !attrs.secure && !group) {
125
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
126
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
127
return;
128
}
129
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
130
*data = gic_get_priority_mask(s, cpu, attrs);
131
break;
132
case 0x08: /* Binary Point */
133
- if (s->security_extn && !attrs.secure) {
134
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
135
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
136
/* NS view of BPR when CBPR is 1 */
137
*data = MIN(s->bpr[cpu] + 1, 7);
138
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
139
* With security extensions, secure access: ABPR (alias of NS BPR)
140
* With security extensions, nonsecure access: RAZ/WI
141
*/
142
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
143
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
144
*data = 0;
145
} else {
146
*data = s->abpr[cpu];
147
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
148
149
if (regno >= GIC_NR_APRS || s->revision != 2) {
150
*data = 0;
151
- } else if (s->security_extn && !attrs.secure) {
152
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
153
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
154
*data = gic_apr_ns_view(s, regno, cpu);
155
} else {
156
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
157
int regno = (offset - 0xe0) / 4;
158
159
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
160
- (s->security_extn && !attrs.secure)) {
161
+ gic_cpu_ns_access(s, cpu, attrs)) {
162
*data = 0;
163
} else {
164
*data = s->nsapr[regno][cpu];
165
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
166
gic_set_priority_mask(s, cpu, value, attrs);
167
break;
168
case 0x08: /* Binary Point */
169
- if (s->security_extn && !attrs.secure) {
170
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
171
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
172
/* WI when CBPR is 1 */
173
return MEMTX_OK;
174
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
175
gic_complete_irq(s, cpu, value & 0x3ff, attrs);
176
return MEMTX_OK;
177
case 0x1c: /* Aliased Binary Point */
178
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
179
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
180
/* unimplemented, or NS access: RAZ/WI */
181
return MEMTX_OK;
182
} else {
183
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
184
if (regno >= GIC_NR_APRS || s->revision != 2) {
185
return MEMTX_OK;
186
}
187
- if (s->security_extn && !attrs.secure) {
188
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
189
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
190
gic_apr_write_ns_view(s, regno, cpu, value);
191
} else {
192
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
193
if (regno >= GIC_NR_APRS || s->revision != 2) {
194
return MEMTX_OK;
195
}
196
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
197
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
198
return MEMTX_OK;
199
}
200
s->nsapr[regno][cpu] = value;
201
--
202
2.18.0
203
204
diff view generated by jsdifflib
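The banked-register behaviour that gic_cpu_ns_access() gates can be seen in
miniature in the priority-mask handling above. The following standalone
sketch (illustrative only, not code from the patch; the function names are
made up) restates the two views of the single banked GICC_PMR value:

    #include <stdint.h>

    /* Non-secure read: only priorities in the upper half of the 0..0xff
     * range are visible, shifted up to look like a full range (else 0).
     */
    static uint8_t pmr_read_ns_view(uint8_t pmask)
    {
        return (pmask & 0x80) ? (uint8_t)((pmask << 1) & 0xff) : 0;
    }

    /* Non-secure write: the value is compressed into the upper half. */
    static uint8_t pmr_write_ns_view(uint8_t pmask)
    {
        return 0x80 | (pmask >> 1);
    }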
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in gic_activate_irq() and
gic_drop_prio() and in gic_get_prio_from_apr_bits() called by
gic_drop_prio().

When the current CPU is a vCPU:
- Use GIC_VIRT_MIN_BPR and GIC_VIRT_NR_APRS instead of their non-virt
counterparts,
- the vCPU APR is stored in the virtual interface, in h_apr.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-11-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gic.c | 50 +++++++++++++++++++++++++++++++++++------------
1 file changed, 38 insertions(+), 12 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
* and update the running priority.
*/
int prio = gic_get_group_priority(s, cpu, irq);
- int preemption_level = prio >> (GIC_MIN_BPR + 1);
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
+ int preemption_level = prio >> (min_bpr + 1);
int regno = preemption_level / 32;
int bitno = preemption_level % 32;
+ uint32_t *papr = NULL;

- if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
- s->nsapr[regno][cpu] |= (1 << bitno);
+ if (gic_is_vcpu(cpu)) {
+ assert(regno == 0);
+ papr = &s->h_apr[gic_get_vcpu_real_id(cpu)];
+ } else if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
+ papr = &s->nsapr[regno][cpu];
} else {
- s->apr[regno][cpu] |= (1 << bitno);
+ papr = &s->apr[regno][cpu];
}

+ *papr |= (1 << bitno);
+
s->running_priority[cpu] = prio;
gic_set_active(s, irq, cpu);
}
@@ -XXX,XX +XXX,XX @@ static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
* on the set bits in the Active Priority Registers.
*/
int i;
+
+ if (gic_is_vcpu(cpu)) {
+ uint32_t apr = s->h_apr[gic_get_vcpu_real_id(cpu)];
+ if (apr) {
+ return ctz32(apr) << (GIC_VIRT_MIN_BPR + 1);
+ } else {
+ return 0x100;
+ }
+ }
+
for (i = 0; i < GIC_NR_APRS; i++) {
uint32_t apr = s->apr[i][cpu] | s->nsapr[i][cpu];
if (!apr) {
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
* running priority will be wrong, so interrupts that should preempt
* might not do so, and interrupts that should not preempt might do so.
*/
- int i;
+ if (gic_is_vcpu(cpu)) {
+ int rcpu = gic_get_vcpu_real_id(cpu);

- for (i = 0; i < GIC_NR_APRS; i++) {
- uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
- if (!*papr) {
- continue;
+ if (s->h_apr[rcpu]) {
+ /* Clear lowest set bit */
+ s->h_apr[rcpu] &= s->h_apr[rcpu] - 1;
+ }
+ } else {
+ int i;
+
+ for (i = 0; i < GIC_NR_APRS; i++) {
+ uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
+ if (!*papr) {
+ continue;
+ }
+ /* Clear lowest set bit */
+ *papr &= *papr - 1;
+ break;
}
- /* Clear lowest set bit */
- *papr &= *papr - 1;
- break;
}

s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
--
2.18.0
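The priority-drop logic above leans on a classic bit trick: x &= x - 1
clears the lowest set bit of x, which here drops the most recently gained
(numerically lowest) active priority bit. A minimal standalone check, not
part of the patch:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t apr = 0x28;  /* priority bits 3 and 5 active */
        apr &= apr - 1;       /* clears bit 3, the lowest set bit */
        assert(apr == 0x20);
        return 0;
    }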
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
Implement this in arm_excp_unmasked().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
---
target/arm/cpu.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
break;

case EXCP_VFIQ:
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)) {
+ if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
/* VFIQs are only taken when hypervized and non-secure. */
return false;
}
return !(env->daif & PSTATE_F);
case EXCP_VIRQ:
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)) {
+ if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
/* VIRQs are only taken when hypervized and non-secure. */
return false;
}
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

These functions are not used outside cpu64.c,
so make them static.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-17-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 3 ---
target/arm/cpu64.c | 4 ++--
2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ int aarch64_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg);
void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq);
void aarch64_sve_change_el(CPUARMState *env, int old_el,
int new_el, bool el0_a64);
-void aarch64_add_sve_properties(Object *obj);
-void aarch64_add_pauth_properties(Object *obj);
void arm_reset_sve_state(CPUARMState *env);

/*
@@ -XXX,XX +XXX,XX @@ static inline void aarch64_sve_narrow_vq(CPUARMState *env, unsigned vq) { }
static inline void aarch64_sve_change_el(CPUARMState *env, int o,
int n, bool a)
{ }
-static inline void aarch64_add_sve_properties(Object *obj) { }
#endif

void aarch64_sync_32_to_64(CPUARMState *env);
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_default_vec_len(Object *obj, Visitor *v,
}
#endif

-void aarch64_add_sve_properties(Object *obj)
+static void aarch64_add_sve_properties(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);
uint32_t vq;
@@ -XXX,XX +XXX,XX @@ static Property arm_cpu_pauth_property =
static Property arm_cpu_pauth_impdef_property =
DEFINE_PROP_BOOL("pauth-impdef", ARMCPU, prop_pauth_impdef, false);

-void aarch64_add_pauth_properties(Object *obj)
+static void aarch64_add_pauth_properties(Object *obj)
{
ARMCPU *cpu = ARM_CPU(obj);

--
2.25.1
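For readers tracking the interrupt-masking rules, the VIRQ case that the
TGE patch above modifies boils down to a three-way condition. This sketch
(a hypothetical helper, not QEMU code) restates it:

    #include <stdbool.h>

    /* A VIRQ can only be taken when non-secure, with HCR_EL2.IMO set,
     * and with HCR_EL2.TGE clear (TGE forces all virtual IRQs masked).
     */
    static bool virq_takeable(bool secure, bool imo, bool tge)
    {
        return !secure && imo && !tge;
    }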
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_deactivate_irq() and
gic_complete_irq() functions.

When the guest writes an invalid vIRQ to V_EOIR or V_DIR, since the
GICv2 specification is not entirely clear here, we adopt the behaviour
observed on real hardware:
* When V_CTRL.EOIMode is false (EOI split is disabled):
- In case of an invalid vIRQ write to V_EOIR:
-> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
triggers a priority drop, and increments V_HCR.EOICount.
-> If V_APR is already cleared, nothing happens.

- An invalid vIRQ write to V_DIR is ignored.

* When V_CTRL.EOIMode is true:
- In case of an invalid vIRQ write to V_EOIR:
-> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
triggers a priority drop.
-> If V_APR is already cleared, nothing happens.

- An invalid vIRQ write to V_DIR increments V_HCR.EOICount.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180727095421.386-13-luc.michel@greensocs.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gic.c | 51 +++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 47 insertions(+), 4 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
{
int group;

- if (irq >= s->num_irq) {
+ if (irq >= GIC_MAXIRQ || (!gic_is_vcpu(cpu) && irq >= s->num_irq)) {
/*
* This handles two cases:
* 1. If software writes the ID of a spurious interrupt [ie 1023]
* to the GICC_DIR, the GIC ignores that write.
* 2. If software writes the number of a non-existent interrupt
* this must be a subcase of "value written is not an active interrupt"
- * and so this is UNPREDICTABLE. We choose to ignore it.
+ * and so this is UNPREDICTABLE. We choose to ignore it. For vCPUs,
+ * all IRQs potentially exist, so this limit does not apply.
*/
return;
}

- group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
-
if (!gic_eoi_split(s, cpu, attrs)) {
/* This is UNPREDICTABLE; we choose to ignore it */
qemu_log_mask(LOG_GUEST_ERROR,
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
return;
}

+ if (gic_is_vcpu(cpu) && !gic_virq_is_valid(s, irq, cpu)) {
+ /* This vIRQ does not have an LR entry which is either active or
+ * pending and active. Increment EOICount and ignore the write.
+ */
+ int rcpu = gic_get_vcpu_real_id(cpu);
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
+ return;
+ }
+
+ group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
+
if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
return;
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
int group;

DPRINTF("EOI %d\n", irq);
+ if (gic_is_vcpu(cpu)) {
+ /* The call to gic_prio_drop() will clear a bit in GICH_APR iff the
+ * running prio is < 0x100.
+ */
+ bool prio_drop = s->running_priority[cpu] < 0x100;
+
+ if (irq >= GIC_MAXIRQ) {
+ /* Ignore spurious interrupt */
+ return;
+ }
+
+ gic_drop_prio(s, cpu, 0);
+
+ if (!gic_eoi_split(s, cpu, attrs)) {
+ bool valid = gic_virq_is_valid(s, irq, cpu);
+ if (prio_drop && !valid) {
+ /* We are in a situation where:
+ * - V_CTRL.EOIMode is false (no EOI split),
+ * - The call to gic_drop_prio() cleared a bit in GICH_APR,
+ * - This vIRQ does not have an LR entry which is either
+ * active or pending and active.
+ * In that case, we must increment EOICount.
+ */
+ int rcpu = gic_get_vcpu_real_id(cpu);
+ s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
+ } else if (valid) {
+ gic_clear_active(s, irq, cpu);
+ }
+ }
+
+ return;
+ }
+
if (irq >= s->num_irq) {
/* This handles two cases:
* 1. If software writes the ID of a spurious interrupt [ie 1023]
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Mirror the properties for SVE. The main difference is
that any arbitrary set of powers of 2 may be supported,
and not the stricter constraints that apply to SVE.

Include a property to control FEAT_SME_FA64, as failing
to restrict the runtime to the proper subset of insns
could be a major point for bugs.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20220620175235.60881-18-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
docs/system/arm/cpu-features.rst | 56 +++++++++++++++
target/arm/cpu.h | 2 +
target/arm/internals.h | 1 +
target/arm/cpu.c | 14 +++-
target/arm/cpu64.c | 114 +++++++++++++++++++++++++++++--
5 files changed, 180 insertions(+), 7 deletions(-)

diff --git a/docs/system/arm/cpu-features.rst b/docs/system/arm/cpu-features.rst
index XXXXXXX..XXXXXXX 100644
--- a/docs/system/arm/cpu-features.rst
+++ b/docs/system/arm/cpu-features.rst
@@ -XXX,XX +XXX,XX @@ verbose command lines. However, the recommended way to select vector
lengths is to explicitly enable each desired length. Therefore only
example's (1), (4), and (6) exhibit recommended uses of the properties.

+SME CPU Property Examples
+-------------------------
+
+ 1) Disable SME::
+
+ $ qemu-system-aarch64 -M virt -cpu max,sme=off
+
+ 2) Implicitly enable all vector lengths for the ``max`` CPU type::
+
+ $ qemu-system-aarch64 -M virt -cpu max
+
+ 3) Only enable the 256-bit vector length::
+
+ $ qemu-system-aarch64 -M virt -cpu max,sme256=on
+
+ 3) Enable the 256-bit and 1024-bit vector lengths::
+
+ $ qemu-system-aarch64 -M virt -cpu max,sme256=on,sme1024=on
+
+ 4) Disable the 512-bit vector length. This results in all the other
+ lengths supported by ``max`` defaulting to enabled
+ (128, 256, 1024 and 2048)::
+
+ $ qemu-system-aarch64 -M virt -cpu max,sve512=off
+
SVE User-mode Default Vector Length Property
--------------------------------------------

@@ -XXX,XX +XXX,XX @@ length supported by QEMU is 256.

If this property is set to ``-1`` then the default vector length
is set to the maximum possible length.

+SME CPU Properties
+==================
+
+The SME CPU properties are much like the SVE properties: ``sme`` is
+used to enable or disable the entire SME feature, and ``sme<N>`` is
+used to enable or disable specific vector lengths. Finally,
+``sme_fa64`` is used to enable or disable ``FEAT_SME_FA64``, which
+allows execution of the "full a64" instruction set while Streaming
+SVE mode is enabled.
+
+SME is not supported by KVM at this time.
+
+At least one vector length must be enabled when ``sme`` is enabled,
+and all vector lengths must be powers of 2. The maximum vector
+length supported by qemu is 2048 bits. Otherwise, there are no
+additional constraints on the set of vector lengths supported by SME.
+
+SME User-mode Default Vector Length Property
+--------------------------------------------
+
+For qemu-aarch64, the cpu propery ``sme-default-vector-length=N`` is
+defined to mirror the Linux kernel parameter file
+``/proc/sys/abi/sme_default_vector_length``. The default length, ``N``,
+is in units of bytes and must be between 16 and 8192.
+If not specified, the default vector length is 32.
+
+As with ``sve-default-vector-length``, if the default length is larger
+than the maximum vector length enabled, the actual vector length will
+be reduced. If this property is set to ``-1`` then the default vector
+length is set to the maximum possible length.
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ struct ArchCPU {
#ifdef CONFIG_USER_ONLY
/* Used to set the default vector length at process start. */
uint32_t sve_default_vq;
+ uint32_t sme_default_vq;
#endif

ARMVQMap sve_vq;
+ ARMVQMap sme_vq;

/* Generic timer counter frequency, in Hz */
uint64_t gt_cntfrq_hz;
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ int arm_gdb_set_svereg(CPUARMState *env, uint8_t *buf, int reg);
int aarch64_fpu_gdb_get_reg(CPUARMState *env, GByteArray *buf, int reg);
int aarch64_fpu_gdb_set_reg(CPUARMState *env, uint8_t *buf, int reg);
void arm_cpu_sve_finalize(ARMCPU *cpu, Error **errp);
+void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp);
void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp);
void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp);
#endif
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_initfn(Object *obj)
#ifdef CONFIG_USER_ONLY
# ifdef TARGET_AARCH64
/*
- * The linux kernel defaults to 512-bit vectors, when sve is supported.
- * See documentation for /proc/sys/abi/sve_default_vector_length, and
- * our corresponding sve-default-vector-length cpu property.
+ * The linux kernel defaults to 512-bit for SVE, and 256-bit for SME.
+ * These values were chosen to fit within the default signal frame.
+ * See documentation for /proc/sys/abi/{sve,sme}_default_vector_length,
+ * and our corresponding cpu property.
*/
cpu->sve_default_vq = 4;
+ cpu->sme_default_vq = 2;
# endif
#else
/* Our inbound IRQ and FIQ lines */
@@ -XXX,XX +XXX,XX @@ void arm_cpu_finalize_features(ARMCPU *cpu, Error **errp)
return;
}

+ arm_cpu_sme_finalize(cpu, &local_err);
+ if (local_err != NULL) {
+ error_propagate(errp, local_err);
+ return;
+ }
+
arm_cpu_pauth_finalize(cpu, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_get_vq(Object *obj, Visitor *v, const char *name,
ARMCPU *cpu = ARM_CPU(obj);
ARMVQMap *vq_map = opaque;
uint32_t vq = atoi(&name[3]) / 128;
+ bool sve = vq_map == &cpu->sve_vq;
bool value;

- /* All vector lengths are disabled when SVE is off. */
- if (!cpu_isar_feature(aa64_sve, cpu)) {
+ /* All vector lengths are disabled when feature is off. */
+ if (sve
+ ? !cpu_isar_feature(aa64_sve, cpu)
+ : !cpu_isar_feature(aa64_sme, cpu)) {
value = false;
} else {
value = extract32(vq_map->map, vq - 1, 1);
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
cpu->isar.id_aa64pfr0 = t;
}

+void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp)
+{
+ uint32_t vq_map = cpu->sme_vq.map;
+ uint32_t vq_init = cpu->sme_vq.init;
+ uint32_t vq_supported = cpu->sme_vq.supported;
+ uint32_t vq;
+
+ if (vq_map == 0) {
+ if (!cpu_isar_feature(aa64_sme, cpu)) {
+ cpu->isar.id_aa64smfr0 = 0;
+ return;
+ }
+
+ /* TODO: KVM will require limitations via SMCR_EL2. */
+ vq_map = vq_supported & ~vq_init;
+
+ if (vq_map == 0) {
+ vq = ctz32(vq_supported) + 1;
+ error_setg(errp, "cannot disable sme%d", vq * 128);
+ error_append_hint(errp, "All SME vector lengths are disabled.\n");
+ error_append_hint(errp, "With SME enabled, at least one "
+ "vector length must be enabled.\n");
+ return;
+ }
+ } else {
+ if (!cpu_isar_feature(aa64_sme, cpu)) {
+ vq = 32 - clz32(vq_map);
+ error_setg(errp, "cannot enable sme%d", vq * 128);
+ error_append_hint(errp, "SME must be enabled to enable "
+ "vector lengths.\n");
+ error_append_hint(errp, "Add sme=on to the CPU property list.\n");
+ return;
+ }
+ /* TODO: KVM will require limitations via SMCR_EL2. */
+ }
+
+ cpu->sme_vq.map = vq_map;
+}
+
+static bool cpu_arm_get_sme(Object *obj, Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ return cpu_isar_feature(aa64_sme, cpu);
+}
+
+static void cpu_arm_set_sme(Object *obj, bool value, Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ uint64_t t;
+
+ t = cpu->isar.id_aa64pfr1;
+ t = FIELD_DP64(t, ID_AA64PFR1, SME, value);
+ cpu->isar.id_aa64pfr1 = t;
+}
+
+static bool cpu_arm_get_sme_fa64(Object *obj, Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ return cpu_isar_feature(aa64_sme, cpu) &&
+ cpu_isar_feature(aa64_sme_fa64, cpu);
+}
+
+static void cpu_arm_set_sme_fa64(Object *obj, bool value, Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ uint64_t t;
+
+ t = cpu->isar.id_aa64smfr0;
+ t = FIELD_DP64(t, ID_AA64SMFR0, FA64, value);
+ cpu->isar.id_aa64smfr0 = t;
+}
+
#ifdef CONFIG_USER_ONLY
-/* Mirror linux /proc/sys/abi/sve_default_vector_length. */
+/* Mirror linux /proc/sys/abi/{sve,sme}_default_vector_length. */
static void cpu_arm_set_default_vec_len(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
@@ -XXX,XX +XXX,XX @@ static void cpu_arm_set_default_vec_len(Object *obj, Visitor *v,
* and is the maximum architectural width of ZCR_ELx.LEN.
*/
if (remainder || default_vq < 1 || default_vq > 512) {
- error_setg(errp, "cannot set sve-default-vector-length");
+ ARMCPU *cpu = ARM_CPU(obj);
+ const char *which =
+ (ptr_default_vq == &cpu->sve_default_vq ? "sve" : "sme");
+
+ error_setg(errp, "cannot set %s-default-vector-length", which);
if (remainder) {
error_append_hint(errp, "Vector length not a multiple of 16\n");
} else if (default_vq < 1) {
@@ -XXX,XX +XXX,XX @@ static void aarch64_add_sve_properties(Object *obj)
#endif
}

+static void aarch64_add_sme_properties(Object *obj)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ uint32_t vq;
+
+ object_property_add_bool(obj, "sme", cpu_arm_get_sme, cpu_arm_set_sme);
+ object_property_add_bool(obj, "sme_fa64", cpu_arm_get_sme_fa64,
+ cpu_arm_set_sme_fa64);
+
+ for (vq = 1; vq <= ARM_MAX_VQ; vq <<= 1) {
+ char name[8];
+ sprintf(name, "sme%d", vq * 128);
+ object_property_add(obj, name, "bool", cpu_arm_get_vq,
+ cpu_arm_set_vq, NULL, &cpu->sme_vq);
+ }
+
+#ifdef CONFIG_USER_ONLY
+ /* Mirror linux /proc/sys/abi/sme_default_vector_length. */
+ object_property_add(obj, "sme-default-vector-length", "int32",
+ cpu_arm_get_default_vec_len,
+ cpu_arm_set_default_vec_len, NULL,
+ &cpu->sme_default_vq);
+#endif
+}
+
void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
{
int arch_val = 0, impdef_val = 0;
@@ -XXX,XX +XXX,XX @@ static void aarch64_max_initfn(Object *obj)
#endif

cpu->sve_vq.supported = MAKE_64BIT_MASK(0, ARM_MAX_VQ);
+ cpu->sme_vq.supported = SVE_VQ_POW2_MAP;

aarch64_add_pauth_properties(obj);
aarch64_add_sve_properties(obj);
+ aarch64_add_sme_properties(obj);
object_property_add(obj, "sve-max-vq", "uint32", cpu_max_get_sve_max_vq,
cpu_max_set_sve_max_vq, NULL, NULL);
qdev_property_add_static(DEVICE(obj), &arm_cpu_lpa2_property);
--
2.25.1
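A small aside on the ARMVQMap encoding used throughout the SME patch above:
a vector length of vq quadwords is recorded as bit (vq - 1) of the map. The
sketch below (illustrative only, not patch code) builds the power-of-2-only
map that SVE_VQ_POW2_MAP denotes, covering 128 to 2048 bits:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t map = 0;
        uint32_t vq;

        for (vq = 1; vq <= 16; vq <<= 1) {  /* 128, 256, 512, 1024, 2048 bits */
            map |= 1u << (vq - 1);
        }
        assert(map == 0x808b);  /* bits 0, 1, 3, 7 and 15 */
        return 0;
    }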
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_cpu_read() and
gic_cpu_write() functions. Those are the last bits missing to fully
support virtualization extensions in the CPU interface path.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-14-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gic.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
{
int regno = (offset - 0xd0) / 4;
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;

- if (regno >= GIC_NR_APRS || s->revision != 2) {
+ if (regno >= nr_aprs || s->revision != 2) {
*data = 0;
+ } else if (gic_is_vcpu(cpu)) {
+ *data = s->h_apr[gic_get_vcpu_real_id(cpu)];
} else if (gic_cpu_ns_access(s, cpu, attrs)) {
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
*data = gic_apr_ns_view(s, regno, cpu);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
int regno = (offset - 0xe0) / 4;

if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
- gic_cpu_ns_access(s, cpu, attrs)) {
+ gic_cpu_ns_access(s, cpu, attrs) || gic_is_vcpu(cpu)) {
*data = 0;
} else {
*data = s->nsapr[regno][cpu];
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
s->abpr[cpu] = MAX(value & 0x7, GIC_MIN_ABPR);
}
} else {
- s->bpr[cpu] = MAX(value & 0x7, GIC_MIN_BPR);
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
+ s->bpr[cpu] = MAX(value & 0x7, min_bpr);
}
break;
case 0x10: /* End Of Interrupt */
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
{
int regno = (offset - 0xd0) / 4;
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;

- if (regno >= GIC_NR_APRS || s->revision != 2) {
+ if (regno >= nr_aprs || s->revision != 2) {
return MEMTX_OK;
}
- if (gic_cpu_ns_access(s, cpu, attrs)) {
+ if (gic_is_vcpu(cpu)) {
+ s->h_apr[gic_get_vcpu_real_id(cpu)] = value;
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
gic_apr_write_ns_view(s, regno, cpu, value);
} else {
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
if (regno >= GIC_NR_APRS || s->revision != 2) {
return MEMTX_OK;
}
+ if (gic_is_vcpu(cpu)) {
+ return MEMTX_OK;
+ }
if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
return MEMTX_OK;
}
--
2.18.0
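The GIC_VIRT_* constants used above and in the earlier APR patch follow
from the virtual interface's smaller priority space. A back-of-the-envelope
sketch (the #define values mirror what QEMU's arm_gic_internal.h uses, but
treat them here as illustrative assumptions):

    #include <assert.h>

    #define GIC_MIN_BPR       0   /* physical CPU interface */
    #define GIC_VIRT_MIN_BPR  2   /* virtual CPU interface */

    int main(void)
    {
        /* Preemption levels = 256 >> (min_bpr + 1); one 32-bit APR word
         * covers 32 levels, so the vCPU case always fits in register 0,
         * which is why gic_activate_irq() can assert(regno == 0) there.
         */
        assert((256 >> (GIC_MIN_BPR + 1)) / 32 == 4);       /* GIC_NR_APRS */
        assert((256 >> (GIC_VIRT_MIN_BPR + 1)) / 32 == 1);  /* GIC_VIRT_NR_APRS */
        return 0;
    }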
From: Luc Michel <luc.michel@greensocs.com>

Add some traces to the ARM GIC to catch register accesses (distributor,
(v)cpu interface and virtual interface), and to take into account
virtualization extensions (print `vcpu` instead of `cpu` when needed).

Also add some virtualization extensions specific traces: LR updating
and maintenance IRQ generation.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-19-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
hw/intc/arm_gic.c | 31 +++++++++++++++++++++++++------
hw/intc/trace-events | 12 ++++++++++--
2 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline void gic_update_internal(GICState *s, bool virt)
}

if (best_irq != 1023) {
- trace_gic_update_bestirq(cpu, best_irq, best_prio,
- s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
+ trace_gic_update_bestirq(virt ? "vcpu" : "cpu", cpu,
+ best_irq, best_prio,
+ s->priority_mask[cpu_iface],
+ s->running_priority[cpu_iface]);
}

irq_level = fiq_level = 0;
@@ -XXX,XX +XXX,XX @@ static void gic_update_maintenance(GICState *s)
gic_compute_misr(s, cpu);
maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];

+ trace_gic_update_maintenance_irq(cpu, maint_level);
qemu_set_irq(s->maintenance_irq[cpu], maint_level);
}
}
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
* is in the wrong group.
*/
irq = gic_get_current_pending_irq(s, cpu, attrs);
- trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
+ trace_gic_acknowledge_irq(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
+ gic_get_vcpu_real_id(cpu), irq);

if (irq >= GIC_MAXIRQ) {
DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_dist_read(void *opaque, hwaddr offset, uint64_t *data,
switch (size) {
case 1:
*data = gic_dist_readb(opaque, offset, attrs);
- return MEMTX_OK;
+ break;
case 2:
*data = gic_dist_readb(opaque, offset, attrs);
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
- return MEMTX_OK;
+ break;
case 4:
*data = gic_dist_readb(opaque, offset, attrs);
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
*data |= gic_dist_readb(opaque, offset + 2, attrs) << 16;
*data |= gic_dist_readb(opaque, offset + 3, attrs) << 24;
- return MEMTX_OK;
+ break;
default:
return MEMTX_ERROR;
}
+
+ trace_gic_dist_read(offset, size, *data);
+ return MEMTX_OK;
}

static void gic_dist_writeb(void *opaque, hwaddr offset,
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
static MemTxResult gic_dist_write(void *opaque, hwaddr offset, uint64_t data,
unsigned size, MemTxAttrs attrs)
{
+ trace_gic_dist_write(offset, size, data);
+
switch (size) {
case 1:
gic_dist_writeb(opaque, offset, data, attrs);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
*data = 0;
break;
}
+
+ trace_gic_cpu_read(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
+ gic_get_vcpu_real_id(cpu), offset, *data);
return MEMTX_OK;
}

static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
uint32_t value, MemTxAttrs attrs)
{
+ trace_gic_cpu_write(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
+ gic_get_vcpu_real_id(cpu), offset, value);
+
switch (offset) {
case 0x00: /* Control */
gic_set_cpu_control(s, cpu, value, attrs);
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
return MEMTX_OK;
}

+ trace_gic_hyp_read(addr, *data);
return MEMTX_OK;
}

@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
GICState *s = ARM_GIC(opaque);
int vcpu = cpu + GIC_NCPU;

+ trace_gic_hyp_write(addr, value);
+
switch (addr) {
case A_GICH_HCR: /* Hypervisor Control */
s->h_hcr[cpu] = value & GICH_HCR_MASK;
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
}

s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
+ trace_gic_lr_entry(cpu, lr_idx, s->h_lr[lr_idx][cpu]);
break;
}

diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -XXX,XX +XXX,XX @@ aspeed_vic_write(uint64_t offset, unsigned size, uint32_t data) "To 0x%" PRIx64
gic_enable_irq(int irq) "irq %d enabled"
gic_disable_irq(int irq) "irq %d disabled"
gic_set_irq(int irq, int level, int cpumask, int target) "irq %d level %d cpumask 0x%x target 0x%x"
-gic_update_bestirq(int cpu, int irq, int prio, int priority_mask, int running_priority) "cpu %d irq %d priority %d cpu priority mask %d cpu running priority %d"
+gic_update_bestirq(const char *s, int cpu, int irq, int prio, int priority_mask, int running_priority) "%s %d irq %d priority %d cpu priority mask %d cpu running priority %d"
gic_update_set_irq(int cpu, const char *name, int level) "cpu[%d]: %s = %d"
-gic_acknowledge_irq(int cpu, int irq) "cpu %d acknowledged irq %d"
+gic_acknowledge_irq(const char *s, int cpu, int irq) "%s %d acknowledged irq %d"
+gic_cpu_write(const char *s, int cpu, int addr, uint32_t val) "%s %d iface write at 0x%08x 0x%08" PRIx32
+gic_cpu_read(const char *s, int cpu, int addr, uint32_t val) "%s %d iface read at 0x%08x: 0x%08" PRIx32
+gic_hyp_read(int addr, uint32_t val) "hyp read at 0x%08x: 0x%08" PRIx32
+gic_hyp_write(int addr, uint32_t val) "hyp write at 0x%08x: 0x%08" PRIx32
+gic_dist_read(int addr, unsigned int size, uint32_t val) "dist read at 0x%08x size %u: 0x%08" PRIx32
+gic_dist_write(int addr, unsigned int size, uint32_t val) "dist write at 0x%08x size %u: 0x%08" PRIx32
+gic_lr_entry(int cpu, int entry, uint32_t val) "cpu %d: new lr entry %d: 0x%08" PRIx32
+gic_update_maintenance_irq(int cpu, int val) "cpu %d: maintenance = %d"

# hw/intc/arm_gicv3_cpuif.c
gicv3_icc_pmr_read(uint32_t cpu, uint64_t val) "GICv3 ICC_PMR read cpu 0x%x value 0x%" PRIx64
--
2.18.0
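To actually see these trace points at run time, one option (exact syntax
per QEMU's tracing documentation, and subject to which trace backend your
build was configured with) is the pattern-based -trace option, for example:

    $ qemu-system-arm -M virt,virtualization=on -cpu cortex-a15 \
          -trace 'gic_*' ...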
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
---
target/arm/helper.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
if (mmu_idx == ARMMMUIdx_S2NS) {
return (env->cp15.hcr_el2 & HCR_VM) == 0;
}
+
+ if (env->cp15.hcr_el2 & HCR_TGE) {
+ /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
+ if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+ return true;
+ }
+ }
+
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
}

--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

When Streaming SVE mode is enabled, the size is taken from
SMCR_ELx instead of ZCR_ELx. The format is shared, but the
set of vector lengths is not. Further, Streaming SVE does
not require any particular length to be supported.

Adjust sve_vqm1_for_el to pass the current value of PSTATE.SM
to the new function.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-19-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 9 +++++++--
target/arm/helper.c | 32 +++++++++++++++++++++++++-------
2 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ int sve_exception_el(CPUARMState *env, int cur_el);
int sme_exception_el(CPUARMState *env, int cur_el);

/**
- * sve_vqm1_for_el:
+ * sve_vqm1_for_el_sm:
* @env: CPUARMState
* @el: exception level
+ * @sm: streaming mode
*
- * Compute the current SVE vector length for @el, in units of
+ * Compute the current vector length for @el & @sm, in units of
* Quadwords Minus 1 -- the same scale used for ZCR_ELx.LEN.
+ * If @sm, compute for SVL, otherwise NVL.
*/
+uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm);
+
+/* Likewise, but using @sm = PSTATE.SM. */
uint32_t sve_vqm1_for_el(CPUARMState *env, int el);

static inline bool is_a64(CPUARMState *env)
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ int sme_exception_el(CPUARMState *env, int el)
/*
* Given that SVE is enabled, return the vector length for EL.
*/
-uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
+uint32_t sve_vqm1_for_el_sm(CPUARMState *env, int el, bool sm)
{
ARMCPU *cpu = env_archcpu(env);
- uint32_t len = cpu->sve_max_vq - 1;
+ uint64_t *cr = env->vfp.zcr_el;
+ uint32_t map = cpu->sve_vq.map;
+ uint32_t len = ARM_MAX_VQ - 1;
+
+ if (sm) {
+ cr = env->vfp.smcr_el;
+ map = cpu->sme_vq.map;
+ }

if (el <= 1 && !el_is_in_host(env, el)) {
- len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[1]);
+ len = MIN(len, 0xf & (uint32_t)cr[1]);
}
if (el <= 2 && arm_feature(env, ARM_FEATURE_EL2)) {
- len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[2]);
+ len = MIN(len, 0xf & (uint32_t)cr[2]);
}
if (arm_feature(env, ARM_FEATURE_EL3)) {
- len = MIN(len, 0xf & (uint32_t)env->vfp.zcr_el[3]);
+ len = MIN(len, 0xf & (uint32_t)cr[3]);
}

- len = 31 - clz32(cpu->sve_vq.map & MAKE_64BIT_MASK(0, len + 1));
- return len;
+ map &= MAKE_64BIT_MASK(0, len + 1);
+ if (map != 0) {
+ return 31 - clz32(map);
+ }
+
+ /* Bit 0 is always set for Normal SVE -- not so for Streaming SVE. */
+ assert(sm);
+ return ctz32(cpu->sme_vq.map);
+}
+
+uint32_t sve_vqm1_for_el(CPUARMState *env, int el)
+{
+ return sve_vqm1_for_el_sm(env, el, FIELD_EX64(env->svcr, SVCR, SM));
}

static void zcr_write(CPUARMState *env, const ARMCPRegInfo *ri,
--
2.25.1
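The map intersection at the end of sve_vqm1_for_el_sm() rewards a worked
example. This standalone sketch (not patch code; clz32 is approximated
with a GCC/Clang builtin) picks the largest enabled length not exceeding
the register-imposed cap:

    #include <assert.h>
    #include <stdint.h>

    static int clz32(uint32_t x)
    {
        return x ? __builtin_clz(x) : 32;
    }

    int main(void)
    {
        uint32_t map = 0x808b; /* power-of-2 lengths: vq = 1, 2, 4, 8, 16 */
        uint32_t len = 5;      /* ZCR/SMCR constraint: at most vq = 6 */
        uint32_t m = map & ((1u << (len + 1)) - 1);

        assert(m != 0);
        assert(31 - clz32(m) == 3); /* vq - 1 == 3, so vq = 4: 512 bits */
        return 0;
    }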
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.

To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.

(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
---
target/arm/cpu.h | 64 +++++++++++++++++++++++++++++++++++----
hw/intc/arm_gicv3_cpuif.c | 19 ++++++------
target/arm/helper.c | 6 ++--
3 files changed, 71 insertions(+), 18 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
#define HCR_RW (1ULL << 31)
#define HCR_CD (1ULL << 32)
#define HCR_ID (1ULL << 33)
+#define HCR_E2H (1ULL << 34)
+/*
+ * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
+ * HCR_MASK and then clear it again if the feature bit is not set in
+ * hcr_write().
+ */
#define HCR_MASK ((1ULL << 34) - 1)

#define SCR_NS (1U << 0)
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu);
# define TARGET_VIRT_ADDR_SPACE_BITS 32
#endif

+/**
+ * arm_hcr_el2_imo(): Return the effective value of HCR_EL2.IMO.
+ * Depending on the values of HCR_EL2.E2H and TGE, this may be
+ * "behaves as 1 for all purposes other than direct read/write" or
+ * "behaves as 0 for all purposes other than direct read/write"
+ */
+static inline bool arm_hcr_el2_imo(CPUARMState *env)
+{
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
+ case HCR_TGE:
+ return true;
+ case HCR_TGE | HCR_E2H:
+ return false;
+ default:
+ return env->cp15.hcr_el2 & HCR_IMO;
+ }
+}
+
+/**
+ * arm_hcr_el2_fmo(): Return the effective value of HCR_EL2.FMO.
+ */
+static inline bool arm_hcr_el2_fmo(CPUARMState *env)
+{
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
+ case HCR_TGE:
+ return true;
+ case HCR_TGE | HCR_E2H:
+ return false;
+ default:
+ return env->cp15.hcr_el2 & HCR_FMO;
+ }
+}
+
+/**
+ * arm_hcr_el2_amo(): Return the effective value of HCR_EL2.AMO.
+ */
+static inline bool arm_hcr_el2_amo(CPUARMState *env)
+{
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
+ case HCR_TGE:
+ return true;
+ case HCR_TGE | HCR_E2H:
+ return false;
+ default:
+ return env->cp15.hcr_el2 & HCR_AMO;
+ }
+}
+
static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
unsigned int target_el)
{
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
break;

case EXCP_VFIQ:
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
- || (env->cp15.hcr_el2 & HCR_TGE)) {
+ if (secure || !arm_hcr_el2_fmo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
/* VFIQs are only taken when hypervized and non-secure. */
return false;
}
return !(env->daif & PSTATE_F);
case EXCP_VIRQ:
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
- || (env->cp15.hcr_el2 & HCR_TGE)) {
+ if (secure || !arm_hcr_el2_imo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
/* VIRQs are only taken when hypervized and non-secure. */
return false;
}
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
* to the CPSR.F setting otherwise we further assess the state
* below.
*/
- hcr = (env->cp15.hcr_el2 & HCR_FMO);
+ hcr = arm_hcr_el2_fmo(env);
scr = (env->cp15.scr_el3 & SCR_FIQ);

/* When EL3 is 32-bit, the SCR.FW bit controls whether the
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
* when setting the target EL, so it does not have a further
* affect here.
*/
- hcr = (env->cp15.hcr_el2 & HCR_IMO);
+ hcr = arm_hcr_el2_imo(env);
scr = false;
break;
default:
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -XXX,XX +XXX,XX @@ static bool icv_access(CPUARMState *env, int hcr_flags)
* * access if NS EL1 and either IMO or FMO == 1:
* CTLR, DIR, PMR, RPR
*/
- return (env->cp15.hcr_el2 & hcr_flags) && arm_current_el(env) == 1
+ bool flagmatch = ((hcr_flags & HCR_IMO) && arm_hcr_el2_imo(env)) ||
+ ((hcr_flags & HCR_FMO) && arm_hcr_el2_fmo(env));
+
+ return flagmatch && arm_current_el(env) == 1
&& !arm_is_secure_below_el3(env);
}

@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
/* No need to include !IsSecure in route_*_to_el2 as it's only
* tested in cases where we know !IsSecure is true.
*/
- route_fiq_to_el2 = env->cp15.hcr_el2 & HCR_FMO;
- route_irq_to_el2 = env->cp15.hcr_el2 & HCR_IMO;
+ route_fiq_to_el2 = arm_hcr_el2_fmo(env);
+ route_irq_to_el2 = arm_hcr_el2_imo(env);

switch (arm_current_el(env)) {
case 3:
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irqfiq_access(CPUARMState *env,
switch (el) {
case 1:
if (arm_is_secure_below_el3(env) ||
- ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) == 0)) {
+ (arm_hcr_el2_imo(env) == 0 && arm_hcr_el2_fmo(env) == 0)) {
r = CP_ACCESS_TRAP_EL3;
}
break;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_dir_access(CPUARMState *env,
static CPAccessResult gicv3_sgi_access(CPUARMState *env,
const ARMCPRegInfo *ri, bool isread)
{
- if ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) &&
+ if ((arm_hcr_el2_imo(env) || arm_hcr_el2_fmo(env)) &&
arm_current_el(env) == 1 && !arm_is_secure_below_el3(env)) {
/* Takes priority over a possible EL3 trap */
return CP_ACCESS_TRAP_EL2;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_fiq_access(CPUARMState *env,
if (env->cp15.scr_el3 & SCR_FIQ) {
switch (el) {
case 1:
- if (arm_is_secure_below_el3(env) ||
- ((env->cp15.hcr_el2 & HCR_FMO) == 0)) {
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_fmo(env)) {
r = CP_ACCESS_TRAP_EL3;
}
break;
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irq_access(CPUARMState *env,
if (env->cp15.scr_el3 & SCR_IRQ) {
switch (el) {
case 1:
- if (arm_is_secure_below_el3(env) ||
- ((env->cp15.hcr_el2 & HCR_IMO) == 0)) {
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_imo(env)) {
r = CP_ACCESS_TRAP_EL3;
}
break;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
switch (excp_idx) {
case EXCP_IRQ:
scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
- hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
+ hcr = arm_hcr_el2_imo(env);
break;
case EXCP_FIQ:
scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
- hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
+ hcr = arm_hcr_el2_fmo(env);
break;
default:
scr = ((env->cp15.scr_el3 & SCR_EA) == SCR_EA);
- hcr = ((env->cp15.hcr_el2 & HCR_AMO) == HCR_AMO);
+ hcr = arm_hcr_el2_amo(env);
break;
};

--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

We need SVL separate from VL for RDSVL et al, as well as
ZA storage loads and stores, which do not require PSTATE.SM.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-20-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.h | 12 ++++++++++++
target/arm/translate.h | 1 +
target/arm/helper.c | 8 +++++++-
target/arm/translate-a64.c | 1 +
4 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ FIELD(TBFLAG_A64, MTE0_ACTIVE, 19, 1)
FIELD(TBFLAG_A64, SMEEXC_EL, 20, 2)
FIELD(TBFLAG_A64, PSTATE_SM, 22, 1)
FIELD(TBFLAG_A64, PSTATE_ZA, 23, 1)
+FIELD(TBFLAG_A64, SVL, 24, 4)

/*
* Helpers for using the above.
@@ -XXX,XX +XXX,XX @@ static inline int sve_vq(CPUARMState *env)
return EX_TBFLAG_A64(env->hflags, VL) + 1;
}

+/**
+ * sme_vq
+ * @env: the cpu context
+ *
+ * Return the SVL cached within env->hflags, in units of quadwords.
+ */
+static inline int sme_vq(CPUARMState *env)
+{
+ return EX_TBFLAG_A64(env->hflags, SVL) + 1;
+}
+
static inline bool bswap_code(bool sctlr_b)
{
#ifdef CONFIG_USER_ONLY
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
int sve_excp_el; /* SVE exception EL or 0 if enabled */
int sme_excp_el; /* SME exception EL or 0 if enabled */
int vl; /* current vector length in bytes */
+ int svl; /* current streaming vector length in bytes */
bool vfp_enabled; /* FP enabled via FPSCR.EN */
int vec_len;
int vec_stride;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
DP_TBFLAG_A64(flags, SVEEXC_EL, sve_el);
}
if (cpu_isar_feature(aa64_sme, env_archcpu(env))) {
- DP_TBFLAG_A64(flags, SMEEXC_EL, sme_exception_el(env, el));
+ int sme_el = sme_exception_el(env, el);
+
+ DP_TBFLAG_A64(flags, SMEEXC_EL, sme_el);
+ if (sme_el == 0) {
+ /* Similarly, do not compute SVL if SME is disabled. */
+ DP_TBFLAG_A64(flags, SVL, sve_vqm1_for_el_sm(env, el, true));
+ }
if (FIELD_EX64(env->svcr, SVCR, SM)) {
DP_TBFLAG_A64(flags, PSTATE_SM, 1);
}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->sve_excp_el = EX_TBFLAG_A64(tb_flags, SVEEXC_EL);
dc->sme_excp_el = EX_TBFLAG_A64(tb_flags, SMEEXC_EL);
dc->vl = (EX_TBFLAG_A64(tb_flags, VL) + 1) * 16;
+ dc->svl = (EX_TBFLAG_A64(tb_flags, SVL) + 1) * 16;
dc->pauth_active = EX_TBFLAG_A64(tb_flags, PAUTH_ACTIVE);
dc->bt = EX_TBFLAG_A64(tb_flags, BT);
dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE);
--
2.25.1
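The three accessors added above share a single decision table. Restated as
a standalone sketch (a hypothetical helper, not QEMU code):

    #include <stdbool.h>

    /* Effective IMO/FMO/AMO: TGE=1,E2H=0 forces 1; TGE=1,E2H=1 forces 0;
     * otherwise the architectural bit is used as written.
     */
    static bool hcr_effective_bit(bool tge, bool e2h, bool bit)
    {
        if (tge) {
            return !e2h;
        }
        return bit;
    }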
From: Richard Henderson <richard.henderson@linaro.org>

Used the wrong temporary in the computation of subtractive overflow.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-sve.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_64(TCGv_i64 reg, TCGv_i64 val, bool u, bool d)
     /* Detect signed overflow for subtraction. */
     tcg_gen_xor_i64(t0, reg, val);
     tcg_gen_sub_i64(t1, reg, val);
-    tcg_gen_xor_i64(reg, reg, t0);
+    tcg_gen_xor_i64(reg, reg, t1);
     tcg_gen_and_i64(t0, t0, reg);

     /* Bound the result. */
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

We will need these functions in translate-sme.c.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220620175235.60881-21-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.h | 38 ++++++++++++++++++++++++++++++++++++++
 target/arm/translate-sve.c | 36 ------------------------------------
 2 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -XXX,XX +XXX,XX @@ static inline int vec_full_reg_size(DisasContext *s)
     return s->vl;
 }

+/*
+ * Return the offset into CPUARMState of the predicate vector register Pn.
+ * Note for this purpose, FFR is P16.
+ */
+static inline int pred_full_reg_offset(DisasContext *s, int regno)
+{
+    return offsetof(CPUARMState, vfp.pregs[regno]);
+}
+
+/* Return the byte size of the whole predicate register, VL / 64. */
+static inline int pred_full_reg_size(DisasContext *s)
+{
+    return s->vl >> 3;
+}
+
+/*
+ * Round up the size of a register to a size allowed by
+ * the tcg vector infrastructure. Any operation which uses this
+ * size may assume that the bits above pred_full_reg_size are zero,
+ * and must leave them the same way.
+ *
+ * Note that this is not needed for the vector registers as they
+ * are always properly sized for tcg vectors.
+ */
+static inline int size_for_gvec(int size)
+{
+    if (size <= 8) {
+        return 8;
+    } else {
+        return QEMU_ALIGN_UP(size, 16);
+    }
+}
+
+static inline int pred_gvec_reg_size(DisasContext *s)
+{
+    return size_for_gvec(pred_full_reg_size(s));
+}
+
 bool disas_sve(DisasContext *, uint32_t);

 void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static inline int msz_dtype(DisasContext *s, int msz)
  * Implement all of the translator functions referenced by the decoder.
  */

-/* Return the offset into CPUARMState of the predicate vector register Pn.
- * Note for this purpose, FFR is P16.
- */
-static inline int pred_full_reg_offset(DisasContext *s, int regno)
-{
-    return offsetof(CPUARMState, vfp.pregs[regno]);
-}
-
-/* Return the byte size of the whole predicate register, VL / 64. */
-static inline int pred_full_reg_size(DisasContext *s)
-{
-    return s->vl >> 3;
-}
-
-/* Round up the size of a register to a size allowed by
- * the tcg vector infrastructure. Any operation which uses this
- * size may assume that the bits above pred_full_reg_size are zero,
- * and must leave them the same way.
- *
- * Note that this is not needed for the vector registers as they
- * are always properly sized for tcg vectors.
- */
-static int size_for_gvec(int size)
-{
-    if (size <= 8) {
-        return 8;
-    } else {
-        return QEMU_ALIGN_UP(size, 16);
-    }
-}
-
-static int pred_gvec_reg_size(DisasContext *s)
-{
-    return size_for_gvec(pred_full_reg_size(s));
-}
-
 /* Invoke an out-of-line helper on 2 Zregs. */
 static bool gen_gvec_ool_zz(DisasContext *s, gen_helper_gvec_2 *fn,
                             int rd, int rn, int data)
--
2.25.1
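For readers following the sizing rules being moved above -- one predicate
bit per vector byte, rounded up to a size the tcg vector infrastructure
accepts -- here is a minimal standalone sketch (not QEMU code; the helper
names are made up):

    #include <assert.h>

    /* Mirror of size_for_gvec()'s rounding rule, for illustration only. */
    static int size_for_gvec_sketch(int size)
    {
        return size <= 8 ? 8 : (size + 15) / 16 * 16;
    }

    /* One predicate bit per vector byte: VL/8 bytes of predicate. */
    static int pred_bytes(int vl_bytes)
    {
        return vl_bytes >> 3;
    }

    int main(void)
    {
        assert(pred_bytes(16) == 2);            /* VL = 128 bits */
        assert(size_for_gvec_sketch(2) == 8);   /* padded up to 8 bytes */
        assert(pred_bytes(256) == 32);          /* VL = 2048 bits */
        assert(size_for_gvec_sketch(32) == 32); /* already 16-aligned */
        return 0;
    }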
From: Luc Michel <luc.michel@greensocs.com>

Add support for GICv2 virtualization extensions by mapping the necessary
I/O regions and connecting the maintenance IRQ lines.

Declare those additions in the device tree and in the ACPI tables.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-21-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/virt.h | 4 +++-
 hw/arm/virt-acpi-build.c | 6 +++--
 hw/arm/virt.c | 52 +++++++++++++++++++++++++++++++++-------
 3 files changed, 50 insertions(+), 12 deletions(-)

diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@
 #define NUM_VIRTIO_TRANSPORTS 32
 #define NUM_SMMU_IRQS 4

-#define ARCH_GICV3_MAINT_IRQ 9
+#define ARCH_GIC_MAINT_IRQ 9

 #define ARCH_TIMER_VIRT_IRQ 11
 #define ARCH_TIMER_S_EL1_IRQ 13
@@ -XXX,XX +XXX,XX @@ enum {
     VIRT_GIC_DIST,
     VIRT_GIC_CPU,
     VIRT_GIC_V2M,
+    VIRT_GIC_HYP,
+    VIRT_GIC_VCPU,
     VIRT_GIC_ITS,
     VIRT_GIC_REDIST,
     VIRT_GIC_REDIST2,
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
         gicc->length = sizeof(*gicc);
         if (vms->gic_version == 2) {
             gicc->base_address = cpu_to_le64(memmap[VIRT_GIC_CPU].base);
+            gicc->gich_base_address = cpu_to_le64(memmap[VIRT_GIC_HYP].base);
+            gicc->gicv_base_address = cpu_to_le64(memmap[VIRT_GIC_VCPU].base);
         }
         gicc->cpu_interface_number = cpu_to_le32(i);
         gicc->arm_mpidr = cpu_to_le64(armcpu->mp_affinity);
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
         if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
             gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
         }
-        if (vms->virt && vms->gic_version == 3) {
-            gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GICV3_MAINT_IRQ));
+        if (vms->virt) {
+            gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GIC_MAINT_IRQ));
         }
     }

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
     [VIRT_GIC_DIST] = { 0x08000000, 0x00010000 },
     [VIRT_GIC_CPU] = { 0x08010000, 0x00010000 },
     [VIRT_GIC_V2M] = { 0x08020000, 0x00001000 },
+    [VIRT_GIC_HYP] = { 0x08030000, 0x00010000 },
+    [VIRT_GIC_VCPU] = { 0x08040000, 0x00010000 },
     /* The space in between here is reserved for GICv3 CPU/vCPU/HYP */
     [VIRT_GIC_ITS] = { 0x08080000, 0x00020000 },
     /* This redistributor space allows up to 2*64kB*123 CPUs */
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)

         if (vms->virt) {
             qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
-                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GICV3_MAINT_IRQ,
+                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
                                    GIC_FDT_IRQ_FLAGS_LEVEL_HI);
         }
     } else {
         /* 'cortex-a15-gic' means 'GIC v2' */
         qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
                                 "arm,cortex-a15-gic");
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
-                                     2, vms->memmap[VIRT_GIC_DIST].base,
-                                     2, vms->memmap[VIRT_GIC_DIST].size,
-                                     2, vms->memmap[VIRT_GIC_CPU].base,
-                                     2, vms->memmap[VIRT_GIC_CPU].size);
+        if (!vms->virt) {
+            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+                                         2, vms->memmap[VIRT_GIC_DIST].base,
+                                         2, vms->memmap[VIRT_GIC_DIST].size,
+                                         2, vms->memmap[VIRT_GIC_CPU].base,
+                                         2, vms->memmap[VIRT_GIC_CPU].size);
+        } else {
+            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+                                         2, vms->memmap[VIRT_GIC_DIST].base,
+                                         2, vms->memmap[VIRT_GIC_DIST].size,
+                                         2, vms->memmap[VIRT_GIC_CPU].base,
+                                         2, vms->memmap[VIRT_GIC_CPU].size,
+                                         2, vms->memmap[VIRT_GIC_HYP].base,
+                                         2, vms->memmap[VIRT_GIC_HYP].size,
+                                         2, vms->memmap[VIRT_GIC_VCPU].base,
+                                         2, vms->memmap[VIRT_GIC_VCPU].size);
+            qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
+                                   GIC_FDT_IRQ_FLAGS_LEVEL_HI);
+        }
     }

     qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->gic_phandle);
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
             qdev_prop_set_uint32(gicdev, "redist-region-count[1]",
                                  MIN(smp_cpus - redist0_count, redist1_capacity));
         }
+    } else {
+        if (!kvm_irqchip_in_kernel()) {
+            qdev_prop_set_bit(gicdev, "has-virtualization-extensions",
+                              vms->virt);
+        }
     }
     qdev_init_nofail(gicdev);
     gicbusdev = SYS_BUS_DEVICE(gicdev);
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
         }
     } else {
         sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_CPU].base);
+        if (vms->virt) {
+            sysbus_mmio_map(gicbusdev, 2, vms->memmap[VIRT_GIC_HYP].base);
+            sysbus_mmio_map(gicbusdev, 3, vms->memmap[VIRT_GIC_VCPU].base);
+        }
     }

     /* Wire the outputs from each CPU's generic timer and the GICv3
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
                                                    ppibase + timer_irq[irq]));
         }

-        qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt", 0,
-                                    qdev_get_gpio_in(gicdev, ppibase
-                                                     + ARCH_GICV3_MAINT_IRQ));
+        if (type == 3) {
+            qemu_irq irq = qdev_get_gpio_in(gicdev,
+                                            ppibase + ARCH_GIC_MAINT_IRQ);
+            qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt",
+                                        0, irq);
+        } else if (vms->virt) {
+            qemu_irq irq = qdev_get_gpio_in(gicdev,
+                                            ppibase + ARCH_GIC_MAINT_IRQ);
+            sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus, irq);
+        }
+
         qdev_connect_gpio_out_named(cpudev, "pmu-interrupt", 0,
                                     qdev_get_gpio_in(gicdev, ppibase
                                                      + VIRTUAL_PMU_IRQ));
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Move the code from hw/arm/virt.c that is supposed
to handle v7 into the one function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: He Zhe <zhe.he@windriver.com>
Message-id: 20220619001541.131672-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/virt.c | 10 +---------
 target/arm/ptw.c | 24 ++++++++++++++++--------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static void machvirt_init(MachineState *machine)
     cpuobj = object_new(possible_cpus->cpus[0].type);
     armcpu = ARM_CPU(cpuobj);

-    if (object_property_get_bool(cpuobj, "aarch64", NULL)) {
-        pa_bits = arm_pamax(armcpu);
-    } else if (arm_feature(&armcpu->env, ARM_FEATURE_LPAE)) {
-        /* v7 with LPAE */
-        pa_bits = 40;
-    } else {
-        /* Anything else */
-        pa_bits = 32;
-    }
+    pa_bits = arm_pamax(armcpu);

     object_unref(cpuobj);

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static const uint8_t pamax_map[] = {
 /* The cpu-specific constant value of PAMax; also used by hw/arm/virt. */
 unsigned int arm_pamax(ARMCPU *cpu)
 {
-    unsigned int parange =
-        FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);
+    if (arm_feature(&cpu->env, ARM_FEATURE_AARCH64)) {
+        unsigned int parange =
+            FIELD_EX64(cpu->isar.id_aa64mmfr0, ID_AA64MMFR0, PARANGE);

-    /*
-     * id_aa64mmfr0 is a read-only register so values outside of the
-     * supported mappings can be considered an implementation error.
-     */
-    assert(parange < ARRAY_SIZE(pamax_map));
-    return pamax_map[parange];
+        /*
+         * id_aa64mmfr0 is a read-only register so values outside of the
+         * supported mappings can be considered an implementation error.
+         */
+        assert(parange < ARRAY_SIZE(pamax_map));
+        return pamax_map[parange];
+    }
+    if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
+        /* v7 with LPAE */
+        return 40;
+    }
+    /* Anything else */
+    return 32;
 }

 /*
--
2.25.1
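To make the arm_pamax() decision in the patch above concrete: AArch64 CPUs
map ID_AA64MMFR0.PARANGE to a physical address width, v7 CPUs with LPAE get
40 bits, and everything else 32. A standalone sketch (not QEMU code; the
function name is made up, and the table follows the architectural PARANGE
encoding):

    #include <assert.h>
    #include <stdbool.h>

    static const unsigned pamax_map[] = { 32, 36, 40, 42, 44, 48, 52 };

    static unsigned pamax_sketch(bool aarch64, bool lpae, unsigned parange)
    {
        if (aarch64) {
            assert(parange < sizeof(pamax_map) / sizeof(pamax_map[0]));
            return pamax_map[parange];
        }
        return lpae ? 40 : 32;
    }

    int main(void)
    {
        assert(pamax_sketch(true, false, 5) == 48); /* typical v8 CPU */
        assert(pamax_sketch(false, true, 0) == 40); /* v7 with LPAE */
        assert(pamax_sketch(false, false, 0) == 32);
        return 0;
    }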
Deleted patch
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
 {
     int el = arm_current_el(env);
+    bool mdcr_el2_tdosa = (env->cp15.mdcr_el2 & MDCR_TDOSA) ||
+                          (env->cp15.mdcr_el2 & MDCR_TDE) ||
+                          (env->cp15.hcr_el2 & HCR_TGE);

-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDOSA)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tdosa && !arm_is_secure_below_el3(env)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
                                   bool isread)
 {
     int el = arm_current_el(env);
+    bool mdcr_el2_tdra = (env->cp15.mdcr_el2 & MDCR_TDRA) ||
+                         (env->cp15.mdcr_el2 & MDCR_TDE) ||
+                         (env->cp15.hcr_el2 & HCR_TGE);

-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDRA)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tdra && !arm_is_secure_below_el3(env)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
                                  bool isread)
 {
     int el = arm_current_el(env);
+    bool mdcr_el2_tda = (env->cp15.mdcr_el2 & MDCR_TDA) ||
+                        (env->cp15.mdcr_el2 & MDCR_TDE) ||
+                        (env->cp15.hcr_el2 & HCR_TGE);

-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDA)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tda && !arm_is_secure_below_el3(env)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
--
2.18.0
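The rule implemented by that patch -- a per-register trap bit behaves as set
whenever MDCR_EL2.TDE or HCR_EL2.TGE is set -- reduces to a simple predicate.
A standalone sketch (not QEMU code; the function name is made up):

    #include <assert.h>
    #include <stdbool.h>

    /* Effective value of a debug trap bit under TDE/TGE, per the patch. */
    static bool effective_trap(bool trap_bit, bool tde, bool tge)
    {
        return trap_bit || tde || tge;
    }

    int main(void)
    {
        assert(!effective_trap(false, false, false)); /* nothing configured */
        assert(effective_trap(false, true, false));   /* TDE forces the trap */
        assert(effective_trap(false, false, true));   /* so does TGE */
        return 0;
    }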
Deleted patch
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.

(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
---
 target/arm/op_helper.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ static void raise_exception(CPUARMState *env, uint32_t excp,
 {
     CPUState *cs = CPU(arm_env_get_cpu(env));

+    if ((env->cp15.hcr_el2 & HCR_TGE) &&
+        target_el == 1 && !arm_is_secure(env)) {
+        /*
+         * Redirect NS EL1 exceptions to NS EL2. These are reported with
+         * their original syndrome register value, with the exception of
+         * SIMD/FP access traps, which are reported as uncategorized
+         * (see DDI0478C.a D1.10.4)
+         */
+        target_el = 2;
+        if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
+            syndrome = syn_uncategorized();
+        }
+    }
+
     assert(!excp_is_internal(excp));
     cs->exception_index = excp;
     env->exception.syndrome = syndrome;
--
2.18.0
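The retargeting logic above is small enough to state as a pure function:
with HCR_EL2.TGE set, a synchronous exception headed for NS EL1 goes to EL2
instead. A standalone sketch (not QEMU code; the function name is made up):

    #include <assert.h>
    #include <stdbool.h>

    static int redirect_target_el(int target_el, bool tge, bool secure)
    {
        if (tge && target_el == 1 && !secure) {
            return 2;   /* NS EL1 exception redirected to EL2 */
        }
        return target_el;
    }

    int main(void)
    {
        assert(redirect_target_el(1, true, false) == 2);  /* redirected */
        assert(redirect_target_el(1, false, false) == 1); /* TGE clear */
        assert(redirect_target_el(3, true, false) == 3);  /* only EL1 affected */
        return 0;
    }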
Deleted patch
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             /* For all other purposes, treat ES as 0 (R_HXSR) */
             excret &= ~R_V7M_EXCRET_ES_MASK;
         }
+        exc_secure = excret & R_V7M_EXCRET_ES_MASK;
     }

     if (env->v7m.exception != ARMV7M_EXCP_NMI) {
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
          * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
          */
         if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-            exc_secure = excret & R_V7M_EXCRET_ES_MASK;
             if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
                 env->v7m.faultmask[exc_secure] = 0;
             }
--
2.18.0
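The bug class fixed above is worth a minimal illustration: a flag computed
only inside an unrelated conditional is stale or wrong on the other path, so
it must be hoisted ahead of that conditional. A standalone sketch (not QEMU
code; names and the bit layout are made up):

    #include <assert.h>

    static int faultmask_index(unsigned excret, int is_nmi)
    {
        int exc_secure = excret & 1;   /* hoisted: valid on the NMI path too */

        if (!is_nmi) {
            /* ... non-NMI bookkeeping that also reads exc_secure ... */
        }
        return exc_secure;
    }

    int main(void)
    {
        assert(faultmask_index(1, 1) == 1); /* NMI path sees the real flag */
        assert(faultmask_index(0, 0) == 0);
        return 0;
    }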
From: Richard Henderson <richard.henderson@linaro.org>

The normal vector element is sign-extended before
comparing with the wide vector element.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
 #define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
     DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)

-DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t, uint64_t, ==)
-DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
-DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, int8_t, uint64_t, ==)
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, int16_t, uint64_t, ==)
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, int32_t, uint64_t, ==)

-DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t, uint64_t, !=)
-DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
-DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, int8_t, uint64_t, !=)
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, int16_t, uint64_t, !=)
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, int32_t, uint64_t, !=)

 DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t, int64_t, >)
 DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

In machvirt_init we create a cpu but do not fully initialize it.
Thus the propagation of V7VE to LPAE has not been done, and we
compute the wrong value for some v7 cpus, e.g. cortex-a15.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1078
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reported-by: He Zhe <zhe.he@windriver.com>
Message-id: 20220619001541.131672-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ unsigned int arm_pamax(ARMCPU *cpu)
         assert(parange < ARRAY_SIZE(pamax_map));
         return pamax_map[parange];
     }
-    if (arm_feature(&cpu->env, ARM_FEATURE_LPAE)) {
+
+    /*
+     * In machvirt_init, we call arm_pamax on a cpu that is not fully
+     * initialized, so we can't rely on the propagation done in realize.
+     */
+    if (arm_feature(&cpu->env, ARM_FEATURE_LPAE) ||
+        arm_feature(&cpu->env, ARM_FEATURE_V7VE)) {
         /* v7 with LPAE */
         return 40;
     }
--
2.25.1
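The widening bug fixed in the first patch above is the classic C pitfall:
when a narrow element is compared against a wider one, the element type
decides whether it is sign- or zero-extended. A standalone sketch (not QEMU
code):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t wide = UINT64_MAX;  /* -1 viewed as a 64-bit value */
        int8_t  s = -1;
        uint8_t u = 0xff;

        /* Signed element: sign-extended, so the compare matches. */
        assert((uint64_t)(int64_t)s == wide);
        /* Unsigned element: zero-extended to 0xff, so it does not. */
        assert((uint64_t)u != wide);
        return 0;
    }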