First pullreq of the 3.1 release cycle, with lots of
Arm related patches accumulated during freeze. Most
notable here is Luc's GICv2 virtualization support and
my execute-from-MMIO patches.

I stopped looking at my to-review queue towards the
end of freeze, since 45 patches is already pushing what
I consider a reasonable sized pullreq; once this goes into
master I'll start working through it again.

thanks
-- PMM

The following changes since commit 38441756b70eec5807b5f60dad11a93a91199866:

  Update version for v3.0.0 release (2018-08-14 16:38:43 +0100)

are available in the Git repository at:

  git://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20180814

for you to fetch changes up to 054e7adf4e64e4acb3b033348ebf7cc871baa34f:

  target/arm: Fix typo in helper_sve_movz_d (2018-08-14 17:17:22 +0100)

----------------------------------------------------------------
target-arm queue:
 * Implement more of ARMv6-M support
 * Support direct execution from non-RAM regions;
   use this to implement execution from small (<1K) MPU regions
 * GICv2: implement the virtualization extensions
 * support a virtualization-capable GICv2 in the virt and
   xlnx-zynqmp boards
 * arm: Fix return code of arm_load_elf() so we can detect
   failure to load the file correctly
 * Implement HCR_EL2.TGE ("trap general exceptions") bit
 * Implement tailchaining for M profile cores
 * Fix bugs in SVE compare, saturating add/sub, WHILE, MOVZ

----------------------------------------------------------------
Adam Lackorzynski (1):
      arm: Fix return code of arm_load_elf

Julia Suvorova (4):
      target/arm: Forbid unprivileged mode for M Baseline
      nvic: Handle ARMv6-M SCS reserved registers
      arm: Add ARMv6-M programmer's model support
      nvic: Change NVIC to support ARMv6-M

Luc Michel (20):
      intc/arm_gic: Refactor operations on the distributor
      intc/arm_gic: Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers
      intc/arm_gic: Remove some dead code and put some functions static
      vmstate.h: Provide VMSTATE_UINT16_SUB_ARRAY
      intc/arm_gic: Add the virtualization extensions to the GIC state
      intc/arm_gic: Add virtual interface register definitions
      intc/arm_gic: Add virtualization extensions helper macros and functions
      intc/arm_gic: Refactor secure/ns access check in the CPU interface
      intc/arm_gic: Add virtualization enabled IRQ helper functions
      intc/arm_gic: Implement virtualization extensions in gic_(activate_irq|drop_prio)
      intc/arm_gic: Implement virtualization extensions in gic_acknowledge_irq
      intc/arm_gic: Implement virtualization extensions in gic_(deactivate|complete_irq)
      intc/arm_gic: Implement virtualization extensions in gic_cpu_(read|write)
      intc/arm_gic: Wire the vCPU interface
      intc/arm_gic: Implement the virtual interface registers
      intc/arm_gic: Implement gic_update_virt() function
      intc/arm_gic: Implement maintenance interrupt generation
      intc/arm_gic: Improve traces
      xlnx-zynqmp: Improve GIC wiring and MMIO mapping
      arm/virt: Add support for GICv2 virtualization extensions

Peter Maydell (16):
      accel/tcg: Pass read access type through to io_readx()
      accel/tcg: Handle get_page_addr_code() returning -1 in hashtable lookups
      accel/tcg: Handle get_page_addr_code() returning -1 in tb_check_watchpoint()
      accel/tcg: tb_gen_code(): Create single-insn TB for execution from non-RAM
      accel/tcg: Return -1 for execution from MMIO regions in get_page_addr_code()
      target/arm: Allow execution from small regions
      accel/tcg: Check whether TLB entry is RAM consistently with how we set it up
      target/arm: Mask virtual interrupts if HCR_EL2.TGE is set
      target/arm: Honour HCR_EL2.TGE and MDCR_EL2.TDE in debug register access checks
      target/arm: Honour HCR_EL2.TGE when raising synchronous exceptions
      target/arm: Provide accessor functions for HCR_EL2.{IMO, FMO, AMO}
      target/arm: Treat SCTLR_EL1.M as if it were zero when HCR_EL2.TGE is set
      target/arm: Improve exception-taken logging
      target/arm: Initialize exc_secure correctly in do_v7m_exception_exit()
      target/arm: Restore M-profile CONTROL.SPSEL before any tailchaining
      target/arm: Implement tailchaining for M profile cores

Richard Henderson (4):
      target/arm: Fix sign of sve_cmpeq_ppzw/sve_cmpne_ppzw
      target/arm: Fix typo in do_sat_addsub_64
      target/arm: Reorganize SVE WHILE
      target/arm: Fix typo in helper_sve_movz_d

 accel/tcg/softmmu_template.h     |  11 +-
 hw/intc/gic_internal.h           | 282 +++++++++--
 include/exec/exec-all.h          |   2 -
 include/hw/arm/virt.h            |   4 +-
 include/hw/arm/xlnx-zynqmp.h     |   4 +-
 include/hw/intc/arm_gic_common.h |  43 +-
 include/hw/intc/armv7m_nvic.h    |   1 +
 include/migration/vmstate.h      |   3 +
 include/qom/cpu.h                |   6 +
 target/arm/cpu.h                 |  62 ++-
 accel/tcg/cpu-exec.c             |   3 +
 accel/tcg/cputlb.c               | 111 +----
 accel/tcg/translate-all.c        |  23 +-
 exec.c                           |   6 -
 hw/arm/boot.c                    |   8 +-
 hw/arm/virt-acpi-build.c         |   6 +-
 hw/arm/virt.c                    |  52 ++-
 hw/arm/xlnx-zynqmp.c             |  92 +++-
 hw/intc/arm_gic.c                | 987 +++++++++++++++++++++++++++++++--------
 hw/intc/arm_gic_common.c         | 154 ++++--
 hw/intc/arm_gic_kvm.c            |  31 +-
 hw/intc/arm_gicv3_cpuif.c        |  19 +-
 hw/intc/armv7m_nvic.c            |  82 +++-
 memory.c                         |   3 +-
 target/arm/cpu.c                 |   4 +
 target/arm/helper.c              | 127 +++--
 target/arm/op_helper.c           |  14 +
 target/arm/sve_helper.c          |  19 +-
 target/arm/translate-sve.c       |  51 +-
 hw/intc/trace-events             |  12 +-
 30 files changed, 1724 insertions(+), 498 deletions(-)


Hi; here's the latest arm pullreq. This is mostly patches from
RTH, plus a couple of other more minor things. Switching to
PCREL is the big one, which should hopefully improve performance.

thanks
-- PMM

The following changes since commit 214a8da23651f2472b296b3293e619fd58d9e212:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2022-10-18 11:14:31 -0400)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20221020

for you to fetch changes up to 5db899303799e49209016a93289b8694afa1449e:

  hw/ide/microdrive: Use device_cold_reset() for self-resets (2022-10-20 12:11:53 +0100)

----------------------------------------------------------------
target-arm queue:
 * Switch to TARGET_TB_PCREL
 * More pagetable-walk refactoring preparatory to HAFDBS
 * update the cortex-a15 MIDR to latest rev
 * hw/char/pl011: fix baud rate calculation
 * hw/ide/microdrive: Use device_cold_reset() for self-resets

----------------------------------------------------------------
Alex Bennée (1):
      target/arm: update the cortex-a15 MIDR to latest rev

Baruch Siach (1):
      hw/char/pl011: fix baud rate calculation

Peter Maydell (1):
      hw/ide/microdrive: Use device_cold_reset() for self-resets

Richard Henderson (21):
      target/arm: Enable TARGET_PAGE_ENTRY_EXTRA
      target/arm: Use probe_access_full for MTE
      target/arm: Use probe_access_full for BTI
      target/arm: Add ARMMMUIdx_Phys_{S,NS}
      target/arm: Move ARMMMUIdx_Stage2 to a real tlb mmu_idx
      target/arm: Restrict tlb flush from vttbr_write to vmid change
      target/arm: Split out S1Translate type
      target/arm: Plumb debug into S1Translate
      target/arm: Move be test for regime into S1TranslateResult
      target/arm: Use softmmu tlbs for page table walking
      target/arm: Split out get_phys_addr_twostage
      target/arm: Use bool consistently for get_phys_addr subroutines
      target/arm: Introduce curr_insn_len
      target/arm: Change gen_goto_tb to work on displacements
      target/arm: Change gen_*set_pc_im to gen_*update_pc
      target/arm: Change gen_exception_insn* to work on displacements
      target/arm: Remove gen_exception_internal_insn pc argument
      target/arm: Change gen_jmp* to work on displacements
      target/arm: Introduce gen_pc_plus_diff for aarch64
      target/arm: Introduce gen_pc_plus_diff for aarch32
      target/arm: Enable TARGET_TB_PCREL

 target/arm/cpu-param.h         |  17 +-
 target/arm/cpu.h               |  47 ++--
 target/arm/internals.h         |   1 +
 target/arm/sve_ldst_internal.h |   1 +
 target/arm/translate-a32.h     |   2 +-
 target/arm/translate.h         |  66 ++++-
 hw/char/pl011.c                |   2 +-
 hw/ide/microdrive.c            |   8 +-
 target/arm/cpu.c               |  23 +-
 target/arm/cpu_tcg.c           |   4 +-
 target/arm/helper.c            | 155 +++++++++---
 target/arm/mte_helper.c        |  62 ++---
 target/arm/ptw.c               | 535 +++++++++++++++++++++++++----------------
 target/arm/sve_helper.c        |  54 ++---
 target/arm/tlb_helper.c        |  24 +-
 target/arm/translate-a64.c     | 220 ++++++++++-------
 target/arm/translate-m-nocp.c  |   8 +-
 target/arm/translate-mve.c     |   2 +-
 target/arm/translate-vfp.c     |  10 +-
 target/arm/translate.c         | 284 +++++++++++++---------
 20 files changed, 918 insertions(+), 607 deletions(-)
From: Julia Suvorova <jusual@mail.ru>

MSR handling is the only place where CONTROL.nPRIV is modified.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             write_v7m_control_spsel_for_secstate(env,
                                                  val & R_V7M_CONTROL_SPSEL_MASK,
                                                  M_REG_NS);
-            env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
-            env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+            if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+                env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
+                env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
+            }
             return;
         case 0x98: /* SP_NS */
         {
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
             !arm_v7m_is_handler_mode(env)) {
             write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
         }
-        env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
-        env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
+            env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
+            env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
+        }
         break;
     default:
     bad_reg:
--
2.18.0
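
To illustrate the guest-visible effect of the change (informal sketch, not part of the patch):

/*
 * "MSR CONTROL, r0" with r0.nPRIV = 1, executed in privileged thread mode:
 *
 *   with ARM_FEATURE_M_MAIN (v7-M, v8-M Mainline):
 *       CONTROL.nPRIV becomes 1 and thread mode drops to unprivileged.
 *   without it (v6-M, v8-M Baseline):
 *       the nPRIV update is skipped, so thread mode stays privileged,
 *       i.e. unprivileged execution cannot be entered at all.
 */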
From: Julia Suvorova <jusual@mail.ru>

Handle SCS reserved registers listed in ARMv6-M ARM D3.6.1.
All reserved registers are RAZ/WI. ARM_FEATURE_M_MAIN is used for the
checks, because these registers are reserved in ARMv8-M Baseline too.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/armv7m_nvic.c | 51 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return val;
     case 0xd10: /* System Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.scr[attrs.secure];
     case 0xd14: /* Configuration Control. */
         /* The BFHFNMIGN bit is the only non-banked bit; we
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         }
         return val;
     case 0xd2c: /* Hard Fault Status. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.hfsr;
     case 0xd30: /* Debug Fault Status. */
         return cpu->env.v7m.dfsr;
     case 0xd34: /* MMFAR MemManage Fault Address */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.mmfar[attrs.secure];
     case 0xd38: /* Bus Fault Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         return cpu->env.v7m.bfar;
     case 0xd3c: /* Aux Fault Status. */
         /* TODO: Implement fault status registers. */
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         }
         break;
     case 0xd10: /* System Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         /* We don't implement deep-sleep so these bits are RAZ/WI.
          * The other bits in the register are banked.
          * QEMU's implementation ignores SEVONPEND and SLEEPONEXIT, which
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         nvic_irq_update(s);
         break;
     case 0xd2c: /* Hard Fault Status. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.hfsr &= ~value; /* W1C */
         break;
     case 0xd30: /* Debug Fault Status. */
         cpu->env.v7m.dfsr &= ~value; /* W1C */
         break;
     case 0xd34: /* Mem Manage Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.mmfar[attrs.secure] = value;
         return;
     case 0xd38: /* Bus Fault Address. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
         cpu->env.v7m.bfar = value;
         return;
     case 0xd3c: /* Aux Fault Status. */
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
     case 0xf00: /* Software Triggered Interrupt Register */
     {
         int excnum = (value & 0x1ff) + NVIC_FIRST_IRQ;
+
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
+
         if (excnum < s->num_irq) {
             armv7m_nvic_set_pending(s, excnum, false);
         }
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
             }
         }
         break;
-    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
+    case 0xd18: /* System Handler Priority (SHPR1) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            val = 0;
+            break;
+        }
+        /* fall through */
+    case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
         val = 0;
         for (i = 0; i < size; i++) {
             unsigned hdlidx = (offset - 0xd14) + i;
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
         }
         break;
     case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            val = 0;
+            break;
+        };
         /* The BFSR bits [15:8] are shared between security states
          * and we store them in the NS copy
          */
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
         }
         nvic_irq_update(s);
         return MEMTX_OK;
-    case 0xd18 ... 0xd23: /* System Handler Priority (SHPR1, SHPR2, SHPR3) */
+    case 0xd18: /* System Handler Priority (SHPR1) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            return MEMTX_OK;
+        }
+        /* fall through */
+    case 0xd1c ... 0xd23: /* System Handler Priority (SHPR2, SHPR3) */
         for (i = 0; i < size; i++) {
             unsigned hdlidx = (offset - 0xd14) + i;
             int newprio = extract32(value, i * 8, 8);
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_write(void *opaque, hwaddr addr,
         nvic_irq_update(s);
         return MEMTX_OK;
     case 0xd28 ... 0xd2b: /* Configurable Fault Status (CFSR) */
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_M_MAIN)) {
+            return MEMTX_OK;
+        }
         /* All bits are W1C, so construct 32 bit value with 0s in
          * the parts not written by the access size
          */
--
2.18.0
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_deactivate_irq() and
gic_complete_irq() functions.

When the guest writes an invalid vIRQ to V_EOIR or V_DIR, since the
GICv2 specification is not entirely clear here, we adopt the behaviour
observed on real hardware:
  * When V_CTRL.EOIMode is false (EOI split is disabled):
    - In case of an invalid vIRQ write to V_EOIR:
      -> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
         triggers a priority drop, and increments V_HCR.EOICount.
      -> If V_APR is already cleared, nothing happens.

    - An invalid vIRQ write to V_DIR is ignored.

  * When V_CTRL.EOIMode is true:
    - In case of an invalid vIRQ write to V_EOIR:
      -> If some bits are set in H_APR, an invalid vIRQ write to V_EOIR
         triggers a priority drop.
      -> If V_APR is already cleared, nothing happens.

    - An invalid vIRQ write to V_DIR increments V_HCR.EOICount.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20180727095421.386-13-luc.michel@greensocs.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 51 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 47 insertions(+), 4 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
 {
     int group;
 
-    if (irq >= s->num_irq) {
+    if (irq >= GIC_MAXIRQ || (!gic_is_vcpu(cpu) && irq >= s->num_irq)) {
         /*
          * This handles two cases:
          * 1. If software writes the ID of a spurious interrupt [ie 1023]
          * to the GICC_DIR, the GIC ignores that write.
          * 2. If software writes the number of a non-existent interrupt
          * this must be a subcase of "value written is not an active interrupt"
-         * and so this is UNPREDICTABLE. We choose to ignore it.
+         * and so this is UNPREDICTABLE. We choose to ignore it. For vCPUs,
+         * all IRQs potentially exist, so this limit does not apply.
          */
         return;
     }
 
-    group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
-
     if (!gic_eoi_split(s, cpu, attrs)) {
         /* This is UNPREDICTABLE; we choose to ignore it */
         qemu_log_mask(LOG_GUEST_ERROR,
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         return;
     }
 
+    if (gic_is_vcpu(cpu) && !gic_virq_is_valid(s, irq, cpu)) {
+        /* This vIRQ does not have an LR entry which is either active or
+         * pending and active. Increment EOICount and ignore the write.
+         */
+        int rcpu = gic_get_vcpu_real_id(cpu);
+        s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
+        return;
+    }
+
+    group = gic_has_groups(s) && gic_test_group(s, irq, cpu);
+
     if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
         DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
         return;
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
     int group;
 
     DPRINTF("EOI %d\n", irq);
+    if (gic_is_vcpu(cpu)) {
+        /* The call to gic_prio_drop() will clear a bit in GICH_APR iff the
+         * running prio is < 0x100.
+         */
+        bool prio_drop = s->running_priority[cpu] < 0x100;
+
+        if (irq >= GIC_MAXIRQ) {
+            /* Ignore spurious interrupt */
+            return;
+        }
+
+        gic_drop_prio(s, cpu, 0);
+
+        if (!gic_eoi_split(s, cpu, attrs)) {
+            bool valid = gic_virq_is_valid(s, irq, cpu);
+            if (prio_drop && !valid) {
+                /* We are in a situation where:
+                 * - V_CTRL.EOIMode is false (no EOI split),
+                 * - The call to gic_drop_prio() cleared a bit in GICH_APR,
+                 * - This vIRQ does not have an LR entry which is either
+                 *   active or pending and active.
+                 * In that case, we must increment EOICount.
+                 */
+                int rcpu = gic_get_vcpu_real_id(cpu);
+                s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
+            } else if (valid) {
+                gic_clear_active(s, irq, cpu);
+            }
+        }
+
+        return;
+    }
+
     if (irq >= s->num_irq) {
         /* This handles two cases:
          * 1. If software writes the ID of a spurious interrupt [ie 1023]
--
2.18.0
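
The behaviour described in the commit message condenses to the following decision table (informal summary using the commit's register names, not text from the GICv2 spec):

/*
 * Guest writes an invalid vIRQ:
 *
 *                      V_CTRL.EOIMode == 0        V_CTRL.EOIMode == 1
 *   write to V_EOIR:   if H_APR has bits set:     if H_APR has bits set:
 *                          priority drop and          priority drop only
 *                          V_HCR.EOICount++
 *                      if H_APR already clear:    if H_APR already clear:
 *                          nothing happens            nothing happens
 *   write to V_DIR:    ignored                    V_HCR.EOICount++
 */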
From: Baruch Siach <baruch@tkos.co.il>

The PL011 TRM says that "UARTIBRD = 0 is invalid and UARTFBRD is ignored
when this is the case". But the code looks at FBRD for the invalid case.
Fix this.

Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Message-id: 1408f62a2e45665816527d4845ffde650957d5ab.1665051588.git.baruchs-c@neureality.ai
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/char/pl011.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/char/pl011.c b/hw/char/pl011.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/char/pl011.c
+++ b/hw/char/pl011.c
@@ -XXX,XX +XXX,XX @@ static unsigned int pl011_get_baudrate(const PL011State *s)
 {
     uint64_t clk;
 
-    if (s->fbrd == 0) {
+    if (s->ibrd == 0) {
         return 0;
     }
--
2.25.1
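
For reference, the PL011 divides its reference clock by 16 times the divisor IBRD + FBRD/64, which is why IBRD == 0 (not FBRD == 0) is the invalid case. A standalone sketch of the computation, with the PL011 TRM's worked example in the comment (illustrative only; pl011_baudrate() is a made-up name, not a QEMU function):

/*
 * baud = UARTCLK / (16 * (IBRD + FBRD/64))
 *      = (UARTCLK / (64 * IBRD + FBRD)) * 4
 *
 * e.g. UARTCLK = 4 MHz and a target of 230400 baud wants a divisor of
 * about 1.085, i.e. IBRD = 1, FBRD = integer(0.085 * 64 + 0.5) = 5:
 * 4000000 * 4 / 69 = 231884, within 1% of the target rate.
 */
static unsigned int pl011_baudrate(uint64_t clk, uint32_t ibrd, uint32_t fbrd)
{
    if (ibrd == 0) {
        /* "UARTIBRD = 0 is invalid and UARTFBRD is ignored" */
        return 0;
    }
    return (clk / ((ibrd << 6) + fbrd)) << 2;
}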
From: Richard Henderson <richard.henderson@linaro.org>

The normal vector element is sign-extended before
comparing with the wide vector element.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
 #define DO_CMP_PPZW_S(NAME, TYPE, TYPEW, OP) \
     DO_CMP_PPZW(NAME, TYPE, TYPEW, OP, H1_4, 0x1111111111111111ull)
 
-DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, uint8_t, uint64_t, ==)
-DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, uint16_t, uint64_t, ==)
-DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, uint32_t, uint64_t, ==)
+DO_CMP_PPZW_B(sve_cmpeq_ppzw_b, int8_t, uint64_t, ==)
+DO_CMP_PPZW_H(sve_cmpeq_ppzw_h, int16_t, uint64_t, ==)
+DO_CMP_PPZW_S(sve_cmpeq_ppzw_s, int32_t, uint64_t, ==)
 
-DO_CMP_PPZW_B(sve_cmpne_ppzw_b, uint8_t, uint64_t, !=)
-DO_CMP_PPZW_H(sve_cmpne_ppzw_h, uint16_t, uint64_t, !=)
-DO_CMP_PPZW_S(sve_cmpne_ppzw_s, uint32_t, uint64_t, !=)
+DO_CMP_PPZW_B(sve_cmpne_ppzw_b, int8_t, uint64_t, !=)
+DO_CMP_PPZW_H(sve_cmpne_ppzw_h, int16_t, uint64_t, !=)
+DO_CMP_PPZW_S(sve_cmpne_ppzw_s, int32_t, uint64_t, !=)
 
 DO_CMP_PPZW_B(sve_cmpgt_ppzw_b, int8_t, int64_t, >)
 DO_CMP_PPZW_H(sve_cmpgt_ppzw_h, int16_t, int64_t, >)
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

The CPUTLBEntryFull structure now stores the original pte attributes, as
well as the physical address. Therefore, we no longer need a separate
bit in MemTxAttrs, nor do we need to walk the tree of memory regions.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu.h               |  1 -
 target/arm/sve_ldst_internal.h |  1 +
 target/arm/mte_helper.c        | 62 ++++++++++------------------------
 target/arm/sve_helper.c        | 54 ++++++++++-------------------
 target/arm/tlb_helper.c        |  4 ---
 5 files changed, 36 insertions(+), 86 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
  * generic target bits directly.
  */
 #define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
-#define arm_tlb_mte_tagged(x) (typecheck_memtxattrs(x)->target_tlb_bit1)
 
 /*
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
diff --git a/target/arm/sve_ldst_internal.h b/target/arm/sve_ldst_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_ldst_internal.h
+++ b/target/arm/sve_ldst_internal.h
@@ -XXX,XX +XXX,XX @@ typedef struct {
     void *host;
     int flags;
     MemTxAttrs attrs;
+    bool tagged;
 } SVEHostPage;
 
 bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
                           TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
     return tags + index;
 #else
-    uintptr_t index;
     CPUTLBEntryFull *full;
+    MemTxAttrs attrs;
     int in_page, flags;
-    ram_addr_t ptr_ra;
     hwaddr ptr_paddr, tag_paddr, xlat;
     MemoryRegion *mr;
     ARMASIdx tag_asi;
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      * valid. Indicate to probe_access_flags no-fault, then assert that
      * we received a valid page.
      */
-    flags = probe_access_flags(env, ptr, ptr_access, ptr_mmu_idx,
-                               ra == 0, &host, ra);
+    flags = probe_access_full(env, ptr, ptr_access, ptr_mmu_idx,
+                              ra == 0, &host, &full, ra);
     assert(!(flags & TLB_INVALID_MASK));
 
-    /*
-     * Find the CPUTLBEntryFull for ptr. This *must* be present in the TLB
-     * because we just found the mapping.
-     * TODO: Perhaps there should be a cputlb helper that returns a
-     * matching tlb entry + iotlb entry.
-     */
-    index = tlb_index(env, ptr_mmu_idx, ptr);
-# ifdef CONFIG_DEBUG_TCG
-    {
-        CPUTLBEntry *entry = tlb_entry(env, ptr_mmu_idx, ptr);
-        target_ulong comparator = (ptr_access == MMU_DATA_LOAD
-                                   ? entry->addr_read
-                                   : tlb_addr_write(entry));
-        g_assert(tlb_hit(comparator, ptr));
-    }
-# endif
-    full = &env_tlb(env)->d[ptr_mmu_idx].fulltlb[index];
-
     /* If the virtual page MemAttr != Tagged, access unchecked. */
-    if (!arm_tlb_mte_tagged(&full->attrs)) {
+    if (full->pte_attrs != 0xf0) {
         return NULL;
     }
 
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
         return NULL;
     }
 
+    /*
+     * Remember these values across the second lookup below,
+     * which may invalidate this pointer via tlb resize.
+     */
+    ptr_paddr = full->phys_addr;
+    attrs = full->attrs;
+    full = NULL;
+
     /*
      * The Normal memory access can extend to the next page. E.g. a single
      * 8-byte access to the last byte of a page will check only the last
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      */
     in_page = -(ptr | TARGET_PAGE_MASK);
     if (unlikely(ptr_size > in_page)) {
-        void *ignore;
-        flags |= probe_access_flags(env, ptr + in_page, ptr_access,
-                                    ptr_mmu_idx, ra == 0, &ignore, ra);
+        flags |= probe_access_full(env, ptr + in_page, ptr_access,
+                                   ptr_mmu_idx, ra == 0, &host, &full, ra);
         assert(!(flags & TLB_INVALID_MASK));
     }
 
@@ -XXX,XX +XXX,XX @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     if (unlikely(flags & TLB_WATCHPOINT)) {
         int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
         assert(ra != 0);
-        cpu_check_watchpoint(env_cpu(env), ptr, ptr_size,
-                             full->attrs, wp, ra);
+        cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, attrs, wp, ra);
     }
 
-    /*
-     * Find the physical address within the normal mem space.
-     * The memory region lookup must succeed because TLB_MMIO was
-     * not set in the cputlb lookup above.
-     */
-    mr = memory_region_from_host(host, &ptr_ra);
-    tcg_debug_assert(mr != NULL);
-    tcg_debug_assert(memory_region_is_ram(mr));
-    ptr_paddr = ptr_ra;
-    do {
-        ptr_paddr += mr->addr;
-        mr = mr->container;
-    } while (mr);
-
     /* Convert to the physical address in tag space. */
     tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1);
 
     /* Look up the address in tag space. */
-    tag_asi = full->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
+    tag_asi = attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
     tag_as = cpu_get_address_space(env_cpu(env), tag_asi);
     mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL,
-                                 tag_access == MMU_DATA_STORE,
-                                 full->attrs);
+                                 tag_access == MMU_DATA_STORE, attrs);
 
     /*
      * Note that @mr will never be NULL. If there is nothing in the address
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
      */
     addr = useronly_clean_ptr(addr);
 
+#ifdef CONFIG_USER_ONLY
     flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
                                &info->host, retaddr);
+    memset(&info->attrs, 0, sizeof(info->attrs));
+    /* Require both ANON and MTE; see allocation_tag_mem(). */
+    info->tagged = (flags & PAGE_ANON) && (flags & PAGE_MTE);
+#else
+    CPUTLBEntryFull *full;
+    flags = probe_access_full(env, addr, access_type, mmu_idx, nofault,
+                              &info->host, &full, retaddr);
+    info->attrs = full->attrs;
+    info->tagged = full->pte_attrs == 0xf0;
+#endif
     info->flags = flags;
 
     if (flags & TLB_INVALID_MASK) {
@@ -XXX,XX +XXX,XX @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
 
     /* Ensure that info->host[] is relative to addr, not addr + mem_off. */
     info->host -= mem_off;
-
-#ifdef CONFIG_USER_ONLY
-    memset(&info->attrs, 0, sizeof(info->attrs));
-    /* Require both MAP_ANON and PROT_MTE -- see allocation_tag_mem. */
-    arm_tlb_mte_tagged(&info->attrs) =
-        (flags & PAGE_ANON) && (flags & PAGE_MTE);
-#else
-    /*
-     * Find the iotlbentry for addr and return the transaction attributes.
-     * This *must* be present in the TLB because we just found the mapping.
-     */
-    {
-        uintptr_t index = tlb_index(env, mmu_idx, addr);
-
-# ifdef CONFIG_DEBUG_TCG
-        CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
-        target_ulong comparator = (access_type == MMU_DATA_LOAD
-                                   ? entry->addr_read
-                                   : tlb_addr_write(entry));
-        g_assert(tlb_hit(comparator, addr));
-# endif
-
-        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
-        info->attrs = full->attrs;
-    }
-#endif
-
     return true;
 }
 
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
     intptr_t mem_off, reg_off, reg_last;
 
     /* Process the page only if MemAttr == Tagged. */
-    if (arm_tlb_mte_tagged(&info->page[0].attrs)) {
+    if (info->page[0].tagged) {
         mem_off = info->mem_off_first[0];
         reg_off = info->reg_off_first[0];
         reg_last = info->reg_off_split;
@@ -XXX,XX +XXX,XX @@ void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
     }
 
     mem_off = info->mem_off_first[1];
-    if (mem_off >= 0 && arm_tlb_mte_tagged(&info->page[1].attrs)) {
+    if (mem_off >= 0 && info->page[1].tagged) {
         reg_off = info->reg_off_first[1];
         reg_last = info->reg_off_last[1];
 
@@ -XXX,XX +XXX,XX @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
      * Disable MTE checking if the Tagged bit is not set. Since TBI must
      * be set within MTEDESC for MTE, !mtedesc => !mte_active.
      */
-    if (!arm_tlb_mte_tagged(&info.page[0].attrs)) {
+    if (!info.page[0].tagged) {
         mtedesc = 0;
     }
 
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                 cpu_check_watchpoint(env_cpu(env), addr, msize,
                                      info.attrs, BP_MEM_READ, retaddr);
             }
-            if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
+            if (mtedesc && info.tagged) {
                 mte_check(env, mtedesc, addr, retaddr);
             }
             if (unlikely(info.flags & TLB_MMIO)) {
@@ -XXX,XX +XXX,XX @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                      msize, info.attrs,
                                      BP_MEM_READ, retaddr);
             }
-            if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
+            if (mtedesc && info.tagged) {
                 mte_check(env, mtedesc, addr, retaddr);
             }
             tlb_fn(env, &scratch, reg_off, addr, retaddr);
@@ -XXX,XX +XXX,XX @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                 (env_cpu(env), addr, msize) & BP_MEM_READ)) {
                 goto fault;
             }
-            if (mtedesc &&
-                arm_tlb_mte_tagged(&info.attrs) &&
-                !mte_probe(env, mtedesc, addr)) {
+            if (mtedesc && info.tagged && !mte_probe(env, mtedesc, addr)) {
                 goto fault;
             }
 
@@ -XXX,XX +XXX,XX @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                      info.attrs, BP_MEM_WRITE, retaddr);
             }
 
-            if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
+            if (mtedesc && info.tagged) {
                 mte_check(env, mtedesc, addr, retaddr);
             }
         }
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/tlb_helper.c
+++ b/target/arm/tlb_helper.c
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
             res.f.phys_addr &= TARGET_PAGE_MASK;
             address &= TARGET_PAGE_MASK;
         }
-        /* Notice and record tagged memory. */
-        if (cpu_isar_feature(aa64_mte, cpu) && res.cacheattrs.attrs == 0xf0) {
-            arm_tlb_mte_tagged(&res.f.attrs) = true;
-        }
 
         res.f.pte_attrs = res.cacheattrs.attrs;
         res.f.shareability = res.cacheattrs.shareability;
--
2.25.1
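
To see why the element signedness in the cmpeq/cmpne fix above matters: a byte element of 0xff must compare equal to a 64-bit wide element holding -1, which only happens if the narrow element is sign-extended. A minimal standalone illustration (not QEMU code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t wide = UINT64_MAX;   /* wide vector element: -1 */
    uint8_t  u8 = 0xff;           /* normal element, unsigned */
    int8_t   s8 = -1;             /* same bit pattern, signed */

    printf("%d\n", (uint64_t)u8 == wide);  /* 0: zero-extends to 0xff */
    printf("%d\n", (uint64_t)s8 == wide);  /* 1: sign-extends to ~0   */
    return 0;
}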
From: Richard Henderson <richard.henderson@linaro.org>

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ void HELPER(sve_movz_d)(void *vd, void *vn, void *vg, uint32_t desc)
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;
     for (i = 0; i < opr_sz; i += 1) {
-        d[i] = n[1] & -(uint64_t)(pg[H1(i)] & 1);
+        d[i] = n[i] & -(uint64_t)(pg[H1(i)] & 1);
     }
 }
 
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Add a field to TARGET_PAGE_ENTRY_EXTRA to hold the guarded bit.
In is_guarded_page, use probe_access_full instead of just guessing
that the tlb entry is still present. Also handles the FIXME about
executing from device memory.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h     |  9 +++++----
 target/arm/cpu.h           | 13 -------------
 target/arm/internals.h     |  1 +
 target/arm/ptw.c           |  7 ++++---
 target/arm/translate-a64.c | 21 ++++++++++-----------
 5 files changed, 20 insertions(+), 31 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
  *
  * For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
  * Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
- * For shareability, as in the SH field of the VMSAv8-64 PTEs.
+ * For shareability and guarded, as in the SH and GP fields respectively
+ * of the VMSAv8-64 PTEs.
  */
 # define TARGET_PAGE_ENTRY_EXTRA  \
-     uint8_t pte_attrs;           \
-     uint8_t shareability;
-
+     uint8_t pte_attrs;           \
+     uint8_t shareability;        \
+     bool guarded;
 #endif
 
 #define NB_MMU_MODES 8
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ static inline uint64_t *aa64_vfp_qreg(CPUARMState *env, unsigned regno)
 /* Shared between translate-sve.c and sve_helper.c. */
 extern const uint64_t pred_esz_masks[5];
 
-/* Helper for the macros below, validating the argument type. */
-static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
-{
-    return x;
-}
-
-/*
- * Lvalue macros for ARM TLB bits that we must cache in the TCG TLB.
- * Using these should be a bit more self-documenting than using the
- * generic target bits directly.
- */
-#define arm_tlb_bti_gp(x) (typecheck_memtxattrs(x)->target_tlb_bit0)
-
 /*
  * AArch64 usage of the PAGE_TARGET_* bits for linux-user.
  * Note that with the Linux kernel, PROT_MTE may not be cleared by mprotect
diff --git a/target/arm/internals.h b/target/arm/internals.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -XXX,XX +XXX,XX @@ typedef struct ARMCacheAttrs {
     unsigned int attrs:8;
     unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
     bool is_s2_format:1;
+    bool guarded:1;              /* guarded bit of the v8-64 PTE */
 } ARMCacheAttrs;
 
 /* Fields that are valid upon success. */
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
          */
         result->f.attrs.secure = false;
     }
-    /* When in aarch64 mode, and BTI is enabled, remember GP in the IOTLB. */
-    if (aarch64 && guarded && cpu_isar_feature(aa64_bti, cpu)) {
-        arm_tlb_bti_gp(&result->f.attrs) = true;
+
+    /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
+    if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
+        result->f.guarded = guarded;
     }
 
     if (mmu_idx == ARMMMUIdx_Stage2 || mmu_idx == ARMMMUIdx_Stage2_S) {
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
 #ifdef CONFIG_USER_ONLY
     return page_get_flags(addr) & PAGE_BTI;
 #else
+    CPUTLBEntryFull *full;
+    void *host;
     int mmu_idx = arm_to_core_mmu_idx(s->mmu_idx);
-    unsigned int index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    int flags;
 
     /*
      * We test this immediately after reading an insn, which means
-     * that any normal page must be in the TLB. The only exception
-     * would be for executing from flash or device memory, which
-     * does not retain the TLB entry.
-     *
-     * FIXME: Assume false for those, for now. We could use
-     * arm_cpu_get_phys_page_attrs_debug to re-read the page
-     * table entry even for that case.
+     * that the TLB entry must be present and valid, and thus this
+     * access will never raise an exception.
      */
-    return (tlb_hit(entry->addr_code, addr) &&
-            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
+    flags = probe_access_full(env, addr, MMU_INST_FETCH, mmu_idx,
+                              false, &host, &full, 0);
+    assert(!(flags & TLB_INVALID_MASK));
+
+    return full->guarded;
 #endif
 }
 
--
2.25.1
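
The predicate masking idiom in the sve_movz_d fix above relies on unary minus turning a 0/1 predicate bit into an all-zeroes/all-ones mask; the bug was that n[1] fixed the source index, so every active lane copied element 1 instead of its own. A standalone illustration of the mask trick (not QEMU code):

#include <stdint.h>
#include <assert.h>

int main(void)
{
    uint64_t elem = 0x1234567890abcdefull;

    /* -(uint64_t)1 == all ones, -(uint64_t)0 == all zeroes */
    assert((elem & -(uint64_t)1) == elem);  /* predicate bit set: keep   */
    assert((elem & -(uint64_t)0) == 0);     /* predicate bit clear: zero */
    return 0;
}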
From: Richard Henderson <richard.henderson@linaro.org>

The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.

Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/sve_helper.c    |  5 ----
 target/arm/translate-sve.c | 49 +++++++++++++++++++++++++-------------
 2 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -XXX,XX +XXX,XX @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
         return flags;
     }
 
-    /* Scale from predicate element count to bits. */
-    count <<= esz;
-    /* Bound to the bits in the predicate. */
-    count = MIN(count, oprsz * 8);
-
     /* Set all of the requested bits. */
     for (i = 0; i < count / 64; ++i) {
         d->p[i] = esz_mask;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -XXX,XX +XXX,XX @@ static bool trans_CTERM(DisasContext *s, arg_CTERM *a, uint32_t insn)
 
 static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
 {
-    if (!sve_access_check(s)) {
-        return true;
-    }
-
-    TCGv_i64 op0 = read_cpu_reg(s, a->rn, 1);
-    TCGv_i64 op1 = read_cpu_reg(s, a->rm, 1);
-    TCGv_i64 t0 = tcg_temp_new_i64();
-    TCGv_i64 t1 = tcg_temp_new_i64();
+    TCGv_i64 op0, op1, t0, t1, tmax;
     TCGv_i32 t2, t3;
     TCGv_ptr ptr;
     unsigned desc, vsz = vec_full_reg_size(s);
     TCGCond cond;
 
+    if (!sve_access_check(s)) {
+        return true;
+    }
+
+    op0 = read_cpu_reg(s, a->rn, 1);
+    op1 = read_cpu_reg(s, a->rm, 1);
+
     if (!a->sf) {
         if (a->u) {
             tcg_gen_ext32u_i64(op0, op0);
@@ -XXX,XX +XXX,XX @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a, uint32_t insn)
 
     /* For the helper, compress the different conditions into a computation
      * of how many iterations for which the condition is true.
-     *
-     * This is slightly complicated by 0 <= UINT64_MAX, which is nominally
-     * 2**64 iterations, overflowing to 0. Of course, predicate registers
-     * aren't that large, so any value >= predicate size is sufficient.
      */
+    t0 = tcg_temp_new_i64();
+    t1 = tcg_temp_new_i64();
     tcg_gen_sub_i64(t0, op1, op0);
 
-    /* t0 = MIN(op1 - op0, vsz). */
-    tcg_gen_movi_i64(t1, vsz);
-    tcg_gen_umin_i64(t0, t0, t1);
+    tmax = tcg_const_i64(vsz >> a->esz);
     if (a->eq) {
         /* Equality means one more iteration. */
         tcg_gen_addi_i64(t0, t0, 1);
+
+        /* If op1 is max (un)signed integer (and the only time the addition
+         * above could overflow), then we produce an all-true predicate by
+         * setting the count to the vector length. This is because the
+         * pseudocode is described as an increment + compare loop, and the
+         * max integer would always compare true.
+         */
+        tcg_gen_movi_i64(t1, (a->sf
+                              ? (a->u ? UINT64_MAX : INT64_MAX)
+                              : (a->u ? UINT32_MAX : INT32_MAX)));
+        tcg_gen_movcond_i64(TCG_COND_EQ, t0, op1, t1, tmax, t0);
     }
 
-    /* t0 = (condition true ? t0 : 0). */
+    /* Bound to the maximum. */
+    tcg_gen_umin_i64(t0, t0, tmax);
+    tcg_temp_free_i64(tmax);
+
+    /* Set the count to zero if the condition is false. */
     cond = (a->u
             ? (a->eq ? TCG_COND_LEU : TCG_COND_LTU)
             : (a->eq ? TCG_COND_LE : TCG_COND_LT));
     tcg_gen_movi_i64(t1, 0);
     tcg_gen_movcond_i64(cond, t0, op0, op1, t0, t1);
+    tcg_temp_free_i64(t1);
 
+    /* Since we're bounded, pass as a 32-bit type. */
     t2 = tcg_temp_new_i32();
     tcg_gen_extrl_i64_i32(t2, t0);
     tcg_temp_free_i64(t0);
-    tcg_temp_free_i64(t1);
+
+    /* Scale elements to bits. */
+    tcg_gen_shli_i32(t2, t2, a->esz);
 
     desc = (vsz / 8) - 2;
     desc = deposit32(desc, SIMD_DATA_SHIFT, 2, a->esz);
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

Not yet used, but add mmu indexes for 1-1 mapping
to physical addresses.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221011031911.2408754-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h |  2 +-
 target/arm/cpu.h       |  7 ++++++-
 target/arm/ptw.c       | 19 +++++++++++++++++--
 3 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
     bool guarded;
 #endif
 
-#define NB_MMU_MODES 8
+#define NB_MMU_MODES 10
 
 #endif
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
  *     EL2 EL2&0 +PAN
  *     EL2 (aka NS PL2)
  *     EL3 (aka S PL1)
+ *     Physical (NS & S)
  *
- * for a total of 8 different mmu_idx.
+ * for a total of 10 different mmu_idx.
  *
  * R profile CPUs have an MPU, but can use the same set of MMU indexes
  * as A profile. They only need to distinguish EL0 and EL1 (and
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
     ARMMMUIdx_E2 = 6 | ARM_MMU_IDX_A,
     ARMMMUIdx_E3 = 7 | ARM_MMU_IDX_A,
 
+    /* TLBs with 1-1 mapping to the physical address spaces. */
+    ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
+    ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
+
     /*
      * These are not allocated TLBs and are used only for AT system
      * instructions or for the first stage of an S12 page table walk.
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
     case ARMMMUIdx_E3:
         break;
 
+    case ARMMMUIdx_Phys_NS:
+    case ARMMMUIdx_Phys_S:
+        /* No translation for physical address spaces. */
+        return true;
+
     default:
         g_assert_not_reached();
     }
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
 {
     uint8_t memattr = 0x00;    /* Device nGnRnE */
     uint8_t shareability = 0;  /* non-sharable */
+    int r_el;
 
-    if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
-        int r_el = regime_el(env, mmu_idx);
+    switch (mmu_idx) {
+    case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_Stage2_S:
+    case ARMMMUIdx_Phys_NS:
+    case ARMMMUIdx_Phys_S:
+        break;
 
+    default:
+        r_el = regime_el(env, mmu_idx);
         if (arm_el_is_aa64(env, r_el)) {
             int pamax = arm_pamax(env_archcpu(env));
             uint64_t tcr = env->cp15.tcr_el[r_el];
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
             shareability = 2; /* outer sharable */
         }
         result->cacheattrs.is_s2_format = false;
+        break;
     }
 
     result->f.phys_addr = address;
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         is_secure = arm_is_secure_below_el3(env);
         break;
     case ARMMMUIdx_Stage2:
+    case ARMMMUIdx_Phys_NS:
     case ARMMMUIdx_MPrivNegPri:
     case ARMMMUIdx_MUserNegPri:
     case ARMMMUIdx_MPriv:
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
         break;
     case ARMMMUIdx_E3:
     case ARMMMUIdx_Stage2_S:
+    case ARMMMUIdx_Phys_S:
     case ARMMMUIdx_MSPrivNegPri:
     case ARMMMUIdx_MSUserNegPri:
     case ARMMMUIdx_MSPriv:
--
2.25.1
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1
From: Richard Henderson <richard.henderson@linaro.org>
2
1 for all purposes other than direct reads" if HCR_EL2.TGE
3
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
4
purposes other than direct reads" if HCR_EL2.TGE is set
5
and HRC_EL2.E2H is 1.
6
2
7
To avoid having to check E2H and TGE everywhere where we test IMO and
3
We had been marking this ARM_MMU_IDX_NOTLB, move it to a real tlb.
8
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo()and
4
Flush the tlb when invalidating stage 1+2 translations. Re-use
9
arm_hcr_el2_amo(). We don't implement ARMv8.1-VHE yet, so the E2H
5
alle1_tlbmask() for other instances of EL1&0 + Stage2.
10
case will never be true, but we include the logic to save effort when
11
we eventually do get to that.
12
6
13
(Note that in several of these callsites the change doesn't
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
14
actually make a difference as either the callsite is handling
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
15
TGE specially anyway, or the CPU can't get into that situation
9
Message-id: 20221011031911.2408754-6-richard.henderson@linaro.org
16
with TGE set; we change everywhere for consistency.)
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
11
---
12
target/arm/cpu-param.h | 2 +-
13
target/arm/cpu.h | 23 ++++---
14
target/arm/helper.c | 151 ++++++++++++++++++++++++++++++-----------
15
3 files changed, 127 insertions(+), 49 deletions(-)
17
16
18
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
17
diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
19
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
18
index XXXXXXX..XXXXXXX 100644
20
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
19
--- a/target/arm/cpu-param.h
21
---
20
+++ b/target/arm/cpu-param.h
22
target/arm/cpu.h | 64 +++++++++++++++++++++++++++++++++++----
21
@@ -XXX,XX +XXX,XX @@
23
hw/intc/arm_gicv3_cpuif.c | 19 ++++++------
22
bool guarded;
24
target/arm/helper.c | 6 ++--
23
#endif
25
3 files changed, 71 insertions(+), 18 deletions(-)
24
26
25
-#define NB_MMU_MODES 10
26
+#define NB_MMU_MODES 12
27
28
#endif
27
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
29
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
28
index XXXXXXX..XXXXXXX 100644
30
index XXXXXXX..XXXXXXX 100644
29
--- a/target/arm/cpu.h
31
--- a/target/arm/cpu.h
30
+++ b/target/arm/cpu.h
32
+++ b/target/arm/cpu.h
31
@@ -XXX,XX +XXX,XX @@ static inline void xpsr_write(CPUARMState *env, uint32_t val, uint32_t mask)
33
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu, bool kvm_sync);
32
#define HCR_RW (1ULL << 31)
34
* EL2 (aka NS PL2)
33
#define HCR_CD (1ULL << 32)
35
* EL3 (aka S PL1)
34
#define HCR_ID (1ULL << 33)
36
* Physical (NS & S)
35
+#define HCR_E2H (1ULL << 34)
37
+ * Stage2 (NS & S)
36
+/*
38
*
37
+ * When we actually implement ARMv8.1-VHE we should add HCR_E2H to
39
- * for a total of 10 different mmu_idx.
38
+ * HCR_MASK and then clear it again if the feature bit is not set in
40
+ * for a total of 12 different mmu_idx.
39
+ * hcr_write().
41
*
40
+ */
42
* R profile CPUs have an MPU, but can use the same set of MMU indexes
41
#define HCR_MASK ((1ULL << 34) - 1)
43
* as A profile. They only need to distinguish EL0 and EL1 (and
42
44
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
43
#define SCR_NS (1U << 0)
45
ARMMMUIdx_Phys_NS = 8 | ARM_MMU_IDX_A,
44
@@ -XXX,XX +XXX,XX @@ bool write_cpustate_to_list(ARMCPU *cpu);
46
ARMMMUIdx_Phys_S = 9 | ARM_MMU_IDX_A,
45
# define TARGET_VIRT_ADDR_SPACE_BITS 32
47
46
#endif
48
+ /*
47
49
+ * Used for second stage of an S12 page table walk, or for descriptor
48
+/**
50
+ * loads during first stage of an S1 page table walk. Note that both
49
+ * arm_hcr_el2_imo(): Return the effective value of HCR_EL2.IMO.
51
+ * are in use simultaneously for SecureEL2: the security state for
50
+ * Depending on the values of HCR_EL2.E2H and TGE, this may be
52
+ * the S2 ptw is selected by the NS bit from the S1 ptw.
51
+ * "behaves as 1 for all purposes other than direct read/write" or
53
+ */
52
+ * "behaves as 0 for all purposes other than direct read/write"
54
+ ARMMMUIdx_Stage2 = 10 | ARM_MMU_IDX_A,
53
+ */
55
+ ARMMMUIdx_Stage2_S = 11 | ARM_MMU_IDX_A,
54
+static inline bool arm_hcr_el2_imo(CPUARMState *env)
56
+
55
+{
57
/*
56
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
58
* These are not allocated TLBs and are used only for AT system
57
+ case HCR_TGE:
59
* instructions or for the first stage of an S12 page table walk.
58
+ return true;
60
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdx {
59
+ case HCR_TGE | HCR_E2H:
61
ARMMMUIdx_Stage1_E0 = 0 | ARM_MMU_IDX_NOTLB,
60
+ return false;
62
ARMMMUIdx_Stage1_E1 = 1 | ARM_MMU_IDX_NOTLB,
61
+ default:
63
ARMMMUIdx_Stage1_E1_PAN = 2 | ARM_MMU_IDX_NOTLB,
62
+ return env->cp15.hcr_el2 & HCR_IMO;
64
- /*
63
+ }
65
- * Not allocated a TLB: used only for second stage of an S12 page
64
+}
66
- * table walk, or for descriptor loads during first stage of an S1
65
+
67
- * page table walk. Note that if we ever want to have a TLB for this
66
+/**
68
- * then various TLB flush insns which currently are no-ops or flush
67
+ * arm_hcr_el2_fmo(): Return the effective value of HCR_EL2.FMO.
69
- * only stage 1 MMU indexes will need to change to flush stage 2.
68
+ */
70
- */
69
+static inline bool arm_hcr_el2_fmo(CPUARMState *env)
71
- ARMMMUIdx_Stage2 = 3 | ARM_MMU_IDX_NOTLB,
70
+{
72
- ARMMMUIdx_Stage2_S = 4 | ARM_MMU_IDX_NOTLB,
71
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
73
72
+ case HCR_TGE:
74
/*
73
+ return true;
75
* M-profile.
74
+ case HCR_TGE | HCR_E2H:
76
@@ -XXX,XX +XXX,XX @@ typedef enum ARMMMUIdxBit {
75
+ return false;
77
TO_CORE_BIT(E20_2),
76
+ default:
78
TO_CORE_BIT(E20_2_PAN),
77
+ return env->cp15.hcr_el2 & HCR_FMO;
79
TO_CORE_BIT(E3),
78
+ }
80
+ TO_CORE_BIT(Stage2),
79
+}
81
+ TO_CORE_BIT(Stage2_S),
80
+
82
81
+/**
83
TO_CORE_BIT(MUser),
82
+ * arm_hcr_el2_amo(): Return the effective value of HCR_EL2.AMO.
84
TO_CORE_BIT(MPriv),
83
+ */
84
+static inline bool arm_hcr_el2_amo(CPUARMState *env)
85
+{
86
+ switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
87
+ case HCR_TGE:
88
+ return true;
89
+ case HCR_TGE | HCR_E2H:
90
+ return false;
91
+ default:
92
+ return env->cp15.hcr_el2 & HCR_AMO;
93
+ }
94
+}
95
+
96
static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
97
unsigned int target_el)
98
{
99
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
100
break;
101
102
case EXCP_VFIQ:
103
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
104
- || (env->cp15.hcr_el2 & HCR_TGE)) {
105
+ if (secure || !arm_hcr_el2_fmo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
106
/* VFIQs are only taken when hypervized and non-secure. */
107
return false;
108
}
109
return !(env->daif & PSTATE_F);
110
case EXCP_VIRQ:
111
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
112
- || (env->cp15.hcr_el2 & HCR_TGE)) {
113
+ if (secure || !arm_hcr_el2_imo(env) || (env->cp15.hcr_el2 & HCR_TGE)) {
114
/* VIRQs are only taken when hypervized and non-secure. */
115
return false;
116
}
117
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
118
* to the CPSR.F setting otherwise we further assess the state
119
* below.
120
*/
121
- hcr = (env->cp15.hcr_el2 & HCR_FMO);
122
+ hcr = arm_hcr_el2_fmo(env);
123
scr = (env->cp15.scr_el3 & SCR_FIQ);
124
125
/* When EL3 is 32-bit, the SCR.FW bit controls whether the
126
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
127
* when setting the target EL, so it does not have a further
128
* affect here.
129
*/
130
- hcr = (env->cp15.hcr_el2 & HCR_IMO);
131
+ hcr = arm_hcr_el2_imo(env);
132
scr = false;
133
break;
134
default:
135
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/intc/arm_gicv3_cpuif.c
138
+++ b/hw/intc/arm_gicv3_cpuif.c
139
@@ -XXX,XX +XXX,XX @@ static bool icv_access(CPUARMState *env, int hcr_flags)
140
* * access if NS EL1 and either IMO or FMO == 1:
141
* CTLR, DIR, PMR, RPR
142
*/
143
- return (env->cp15.hcr_el2 & hcr_flags) && arm_current_el(env) == 1
144
+ bool flagmatch = ((hcr_flags & HCR_IMO) && arm_hcr_el2_imo(env)) ||
145
+ ((hcr_flags & HCR_FMO) && arm_hcr_el2_fmo(env));
146
+
147
+ return flagmatch && arm_current_el(env) == 1
148
&& !arm_is_secure_below_el3(env);
149
}
150
151
@@ -XXX,XX +XXX,XX @@ static void icc_dir_write(CPUARMState *env, const ARMCPRegInfo *ri,
152
/* No need to include !IsSecure in route_*_to_el2 as it's only
153
* tested in cases where we know !IsSecure is true.
154
*/
155
- route_fiq_to_el2 = env->cp15.hcr_el2 & HCR_FMO;
156
- route_irq_to_el2 = env->cp15.hcr_el2 & HCR_IMO;
157
+ route_fiq_to_el2 = arm_hcr_el2_fmo(env);
158
+ route_irq_to_el2 = arm_hcr_el2_imo(env);
159
160
switch (arm_current_el(env)) {
161
case 3:
162
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irqfiq_access(CPUARMState *env,
163
switch (el) {
164
case 1:
165
if (arm_is_secure_below_el3(env) ||
166
- ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) == 0)) {
167
+ (arm_hcr_el2_imo(env) == 0 && arm_hcr_el2_fmo(env) == 0)) {
168
r = CP_ACCESS_TRAP_EL3;
169
}
170
break;
171
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_dir_access(CPUARMState *env,
172
static CPAccessResult gicv3_sgi_access(CPUARMState *env,
173
const ARMCPRegInfo *ri, bool isread)
174
{
175
- if ((env->cp15.hcr_el2 & (HCR_IMO | HCR_FMO)) &&
176
+ if ((arm_hcr_el2_imo(env) || arm_hcr_el2_fmo(env)) &&
177
arm_current_el(env) == 1 && !arm_is_secure_below_el3(env)) {
178
/* Takes priority over a possible EL3 trap */
179
return CP_ACCESS_TRAP_EL2;
180
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_fiq_access(CPUARMState *env,
181
if (env->cp15.scr_el3 & SCR_FIQ) {
182
switch (el) {
183
case 1:
184
- if (arm_is_secure_below_el3(env) ||
185
- ((env->cp15.hcr_el2 & HCR_FMO) == 0)) {
186
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_fmo(env)) {
187
r = CP_ACCESS_TRAP_EL3;
188
}
189
break;
190
@@ -XXX,XX +XXX,XX @@ static CPAccessResult gicv3_irq_access(CPUARMState *env,
191
if (env->cp15.scr_el3 & SCR_IRQ) {
192
switch (el) {
193
case 1:
194
- if (arm_is_secure_below_el3(env) ||
195
- ((env->cp15.hcr_el2 & HCR_IMO) == 0)) {
196
+ if (arm_is_secure_below_el3(env) || !arm_hcr_el2_imo(env)) {
197
r = CP_ACCESS_TRAP_EL3;
198
}
199
break;
200
diff --git a/target/arm/helper.c b/target/arm/helper.c
85
diff --git a/target/arm/helper.c b/target/arm/helper.c
201
index XXXXXXX..XXXXXXX 100644
86
index XXXXXXX..XXXXXXX 100644
202
--- a/target/arm/helper.c
87
--- a/target/arm/helper.c
203
+++ b/target/arm/helper.c
88
+++ b/target/arm/helper.c
204
@@ -XXX,XX +XXX,XX @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint32_t excp_idx,
89
@@ -XXX,XX +XXX,XX @@ static void contextidr_write(CPUARMState *env, const ARMCPRegInfo *ri,
205
switch (excp_idx) {
90
raw_write(env, ri, value);
206
case EXCP_IRQ:
91
}
207
scr = ((env->cp15.scr_el3 & SCR_IRQ) == SCR_IRQ);
92
208
- hcr = ((env->cp15.hcr_el2 & HCR_IMO) == HCR_IMO);
93
+static int alle1_tlbmask(CPUARMState *env)
209
+ hcr = arm_hcr_el2_imo(env);
94
+{
210
break;
95
+ /*
211
case EXCP_FIQ:
96
+ * Note that the 'ALL' scope must invalidate both stage 1 and
212
scr = ((env->cp15.scr_el3 & SCR_FIQ) == SCR_FIQ);
97
+ * stage 2 translations, whereas most other scopes only invalidate
213
- hcr = ((env->cp15.hcr_el2 & HCR_FMO) == HCR_FMO);
98
+ * stage 1 translations.
214
+ hcr = arm_hcr_el2_fmo(env);
99
+ */
215
break;
100
+ return (ARMMMUIdxBit_E10_1 |
216
default:
101
+ ARMMMUIdxBit_E10_1_PAN |
217
scr = ((env->cp15.scr_el3 & SCR_EA) == SCR_EA);
102
+ ARMMMUIdxBit_E10_0 |
218
- hcr = ((env->cp15.hcr_el2 & HCR_AMO) == HCR_AMO);
103
+ ARMMMUIdxBit_Stage2 |
219
+ hcr = arm_hcr_el2_amo(env);
104
+ ARMMMUIdxBit_Stage2_S);
220
break;
105
+}
221
};
106
+
222
107
+
108
/* IS variants of TLB operations must affect all cores */
109
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
110
uint64_t value)
111
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_write(CPUARMState *env, const ARMCPRegInfo *ri,
112
{
113
CPUState *cs = env_cpu(env);
114
115
- tlb_flush_by_mmuidx(cs,
116
- ARMMMUIdxBit_E10_1 |
117
- ARMMMUIdxBit_E10_1_PAN |
118
- ARMMMUIdxBit_E10_0);
119
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
120
}
121
122
static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
123
@@ -XXX,XX +XXX,XX @@ static void tlbiall_nsnh_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
124
{
125
CPUState *cs = env_cpu(env);
126
127
- tlb_flush_by_mmuidx_all_cpus_synced(cs,
128
- ARMMMUIdxBit_E10_1 |
129
- ARMMMUIdxBit_E10_1_PAN |
130
- ARMMMUIdxBit_E10_0);
131
+ tlb_flush_by_mmuidx_all_cpus_synced(cs, alle1_tlbmask(env));
132
}
133
134
135
@@ -XXX,XX +XXX,XX @@ static void tlbimva_hyp_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
136
ARMMMUIdxBit_E2);
137
}
138
139
+static void tlbiipas2_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
140
+ uint64_t value)
141
+{
142
+ CPUState *cs = env_cpu(env);
143
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
144
+
145
+ tlb_flush_page_by_mmuidx(cs, pageaddr, ARMMMUIdxBit_Stage2);
146
+}
147
+
148
+static void tlbiipas2is_hyp_write(CPUARMState *env, const ARMCPRegInfo *ri,
149
+ uint64_t value)
150
+{
151
+ CPUState *cs = env_cpu(env);
152
+ uint64_t pageaddr = (value & MAKE_64BIT_MASK(0, 28)) << 12;
153
+
154
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, ARMMMUIdxBit_Stage2);
155
+}
156
+
157
static const ARMCPRegInfo cp_reginfo[] = {
158
/* Define the secure and non-secure FCSE identifier CP registers
159
* separately because there is no secure bank in V8 (no _EL3). This allows
160
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
161
162
/*
163
* A change in VMID to the stage2 page table (Stage2) invalidates
164
- * the combined stage 1&2 tlbs (EL10_1 and EL10_0).
165
+ * the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
166
*/
167
if (raw_read(env, ri) != value) {
168
- uint16_t mask = ARMMMUIdxBit_E10_1 |
169
- ARMMMUIdxBit_E10_1_PAN |
170
- ARMMMUIdxBit_E10_0;
171
- tlb_flush_by_mmuidx(cs, mask);
172
+ tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
173
raw_write(env, ri, value);
174
}
175
}
176
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vmalle1_write(CPUARMState *env, const ARMCPRegInfo *ri,
177
}
178
}
179
180
-static int alle1_tlbmask(CPUARMState *env)
181
-{
182
- /*
183
- * Note that the 'ALL' scope must invalidate both stage 1 and
184
- * stage 2 translations, whereas most other scopes only invalidate
185
- * stage 1 translations.
186
- */
187
- return (ARMMMUIdxBit_E10_1 |
188
- ARMMMUIdxBit_E10_1_PAN |
189
- ARMMMUIdxBit_E10_0);
190
-}
191
-
192
static int e2_tlbmask(CPUARMState *env)
193
{
194
return (ARMMMUIdxBit_E20_0 |
195
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
196
ARMMMUIdxBit_E3, bits);
197
}
198
199
+static int ipas2e1_tlbmask(CPUARMState *env, int64_t value)
200
+{
201
+ /*
202
+ * The MSB of value is the NS field, which only applies if SEL2
203
+ * is implemented and SCR_EL3.NS is not set (i.e. in secure mode).
204
+ */
205
+ return (value >= 0
206
+ && cpu_isar_feature(aa64_sel2, env_archcpu(env))
207
+ && arm_is_secure_below_el3(env)
208
+ ? ARMMMUIdxBit_Stage2_S
209
+ : ARMMMUIdxBit_Stage2);
210
+}
211
+
212
+static void tlbi_aa64_ipas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
213
+ uint64_t value)
214
+{
215
+ CPUState *cs = env_cpu(env);
216
+ int mask = ipas2e1_tlbmask(env, value);
217
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
218
+
219
+ if (tlb_force_broadcast(env)) {
220
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
221
+ } else {
222
+ tlb_flush_page_by_mmuidx(cs, pageaddr, mask);
223
+ }
224
+}
225
+
226
+static void tlbi_aa64_ipas2e1is_write(CPUARMState *env, const ARMCPRegInfo *ri,
227
+ uint64_t value)
228
+{
229
+ CPUState *cs = env_cpu(env);
230
+ int mask = ipas2e1_tlbmask(env, value);
231
+ uint64_t pageaddr = sextract64(value << 12, 0, 56);
232
+
233
+ tlb_flush_page_by_mmuidx_all_cpus_synced(cs, pageaddr, mask);
234
+}
235
+
236
#ifdef TARGET_AARCH64
237
typedef struct {
238
uint64_t base;
239
@@ -XXX,XX +XXX,XX @@ static void tlbi_aa64_rvae3is_write(CPUARMState *env,
240
241
do_rvae_write(env, value, ARMMMUIdxBit_E3, true);
242
}
243
+
244
+static void tlbi_aa64_ripas2e1_write(CPUARMState *env, const ARMCPRegInfo *ri,
245
+ uint64_t value)
246
+{
247
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value),
248
+ tlb_force_broadcast(env));
249
+}
250
+
251
+static void tlbi_aa64_ripas2e1is_write(CPUARMState *env,
252
+ const ARMCPRegInfo *ri,
253
+ uint64_t value)
254
+{
255
+ do_rvae_write(env, value, ipas2e1_tlbmask(env, value), true);
256
+}
257
#endif
258
259
static CPAccessResult aa64_zva_access(CPUARMState *env, const ARMCPRegInfo *ri,
260
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
261
.writefn = tlbi_aa64_vae1_write },
262
{ .name = "TLBI_IPAS2E1IS", .state = ARM_CP_STATE_AA64,
263
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
264
- .access = PL2_W, .type = ARM_CP_NOP },
265
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
266
+ .writefn = tlbi_aa64_ipas2e1is_write },
267
{ .name = "TLBI_IPAS2LE1IS", .state = ARM_CP_STATE_AA64,
268
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
269
- .access = PL2_W, .type = ARM_CP_NOP },
270
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
271
+ .writefn = tlbi_aa64_ipas2e1is_write },
272
{ .name = "TLBI_ALLE1IS", .state = ARM_CP_STATE_AA64,
273
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 3, .opc2 = 4,
274
.access = PL2_W, .type = ARM_CP_NO_RAW,
275
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
276
.writefn = tlbi_aa64_alle1is_write },
277
{ .name = "TLBI_IPAS2E1", .state = ARM_CP_STATE_AA64,
278
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
279
- .access = PL2_W, .type = ARM_CP_NOP },
280
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
281
+ .writefn = tlbi_aa64_ipas2e1_write },
282
{ .name = "TLBI_IPAS2LE1", .state = ARM_CP_STATE_AA64,
283
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
284
- .access = PL2_W, .type = ARM_CP_NOP },
285
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
286
+ .writefn = tlbi_aa64_ipas2e1_write },
287
{ .name = "TLBI_ALLE1", .state = ARM_CP_STATE_AA64,
288
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 7, .opc2 = 4,
289
.access = PL2_W, .type = ARM_CP_NO_RAW,
290
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
291
.writefn = tlbimva_hyp_is_write },
292
{ .name = "TLBIIPAS2",
293
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 1,
294
- .type = ARM_CP_NOP, .access = PL2_W },
295
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
296
+ .writefn = tlbiipas2_hyp_write },
297
{ .name = "TLBIIPAS2IS",
298
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 1,
299
- .type = ARM_CP_NOP, .access = PL2_W },
300
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
301
+ .writefn = tlbiipas2is_hyp_write },
302
{ .name = "TLBIIPAS2L",
303
.cp = 15, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 5,
304
- .type = ARM_CP_NOP, .access = PL2_W },
305
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
306
+ .writefn = tlbiipas2_hyp_write },
307
{ .name = "TLBIIPAS2LIS",
308
.cp = 15, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 5,
309
- .type = ARM_CP_NOP, .access = PL2_W },
310
+ .type = ARM_CP_NO_RAW, .access = PL2_W,
311
+ .writefn = tlbiipas2is_hyp_write },
312
/* 32 bit cache operations */
313
{ .name = "ICIALLUIS", .cp = 15, .opc1 = 0, .crn = 7, .crm = 1, .opc2 = 0,
314
.type = ARM_CP_NOP, .access = PL1_W, .accessfn = aa64_cacheop_pou_access },
315
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
316
.writefn = tlbi_aa64_rvae1_write },
317
{ .name = "TLBI_RIPAS2E1IS", .state = ARM_CP_STATE_AA64,
318
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 2,
319
- .access = PL2_W, .type = ARM_CP_NOP },
320
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
321
+ .writefn = tlbi_aa64_ripas2e1is_write },
322
{ .name = "TLBI_RIPAS2LE1IS", .state = ARM_CP_STATE_AA64,
323
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 0, .opc2 = 6,
324
- .access = PL2_W, .type = ARM_CP_NOP },
325
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
326
+ .writefn = tlbi_aa64_ripas2e1is_write },
327
{ .name = "TLBI_RVAE2IS", .state = ARM_CP_STATE_AA64,
328
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 2, .opc2 = 1,
329
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
330
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo tlbirange_reginfo[] = {
331
.writefn = tlbi_aa64_rvae2is_write },
332
{ .name = "TLBI_RIPAS2E1", .state = ARM_CP_STATE_AA64,
333
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 2,
334
- .access = PL2_W, .type = ARM_CP_NOP },
335
- { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
336
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
337
+ .writefn = tlbi_aa64_ripas2e1_write },
338
+ { .name = "TLBI_RIPAS2LE1", .state = ARM_CP_STATE_AA64,
339
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 4, .opc2 = 6,
340
- .access = PL2_W, .type = ARM_CP_NOP },
341
+ .access = PL2_W, .type = ARM_CP_NO_RAW,
342
+ .writefn = tlbi_aa64_ripas2e1_write },
343
{ .name = "TLBI_RVAE2OS", .state = ARM_CP_STATE_AA64,
344
.opc0 = 1, .opc1 = 4, .crn = 8, .crm = 5, .opc2 = 1,
345
.access = PL2_W, .type = ARM_CP_NO_RAW | ARM_CP_EL3_NO_EL2_UNDEF,
223
--
346
--
224
2.18.0
347
2.25.1
225
226
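
For reference, the TGE/E2H decision table that the arm_hcr_el2_imo/fmo/amo
helpers above encode can be exercised standalone. This is only a sketch, not
QEMU code: the HCR_* bit positions follow the architecture, but the function
name and the main() driver are illustrative.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Architectural HCR_EL2 bit positions (FMO=3, IMO=4, AMO=5, TGE=27, E2H=34). */
#define HCR_FMO (1ULL << 3)
#define HCR_IMO (1ULL << 4)
#define HCR_AMO (1ULL << 5)
#define HCR_TGE (1ULL << 27)
#define HCR_E2H (1ULL << 34)

/*
 * Effective value of one routing bit, mirroring the switch in the patch:
 * TGE alone forces the bit to 1, TGE+E2H forces it to 0, and otherwise
 * the raw register value applies.
 */
static bool effective_routing_bit(uint64_t hcr_el2, uint64_t bit)
{
    switch (hcr_el2 & (HCR_TGE | HCR_E2H)) {
    case HCR_TGE:
        return true;
    case HCR_TGE | HCR_E2H:
        return false;
    default:
        return (hcr_el2 & bit) != 0;
    }
}

int main(void)
{
    /* TGE set, IMO clear: IMO still reads as effectively 1. */
    printf("TGE only: %d\n", effective_routing_bit(HCR_TGE, HCR_IMO));
    /* TGE and E2H set, FMO set: FMO reads as effectively 0. */
    printf("TGE|E2H:  %d\n",
           effective_routing_bit(HCR_TGE | HCR_E2H | HCR_FMO, HCR_FMO));
    /* Neither set: the raw AMO bit applies. */
    printf("raw AMO:  %d\n", effective_routing_bit(HCR_AMO, HCR_AMO));
    return 0;
}
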
1
Improve the exception-taken logging by logging in
1
From: Richard Henderson <richard.henderson@linaro.org>
2
v7m_exception_taken() the exception we're going to take
3
and whether it is secure/nonsecure.
4
2
5
This requires us to move logging at many callsites from after the
3
Compare only the VMID field when considering whether we need to flush.
6
call to before it, so that the logging appears in a sensible order.
7
4
8
(This will make tail-chaining produce more useful logs; for the
5
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
current callers of v7m_exception_taken() we know which exception
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
we're going to take, so custom log messages at the callsite sufficed;
7
Message-id: 20221011031911.2408754-7-richard.henderson@linaro.org
11
for tail-chaining only v7m_exception_taken() knows the exception
12
number that we're going to tail-chain to.)
13
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
16
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
17
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
18
---
9
---
19
target/arm/helper.c | 17 +++++++++++------
10
target/arm/helper.c | 4 ++--
20
1 file changed, 11 insertions(+), 6 deletions(-)
11
1 file changed, 2 insertions(+), 2 deletions(-)
21
12
22
diff --git a/target/arm/helper.c b/target/arm/helper.c
13
diff --git a/target/arm/helper.c b/target/arm/helper.c
23
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
24
--- a/target/arm/helper.c
15
--- a/target/arm/helper.c
25
+++ b/target/arm/helper.c
16
+++ b/target/arm/helper.c
26
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
17
@@ -XXX,XX +XXX,XX @@ static void vttbr_write(CPUARMState *env, const ARMCPRegInfo *ri,
27
bool push_failed = false;
18
* A change in VMID to the stage2 page table (Stage2) invalidates
28
19
* the stage2 and combined stage 1&2 tlbs (EL10_1 and EL10_0).
29
armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
20
*/
30
+ qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
21
- if (raw_read(env, ri) != value) {
31
+ targets_secure ? "secure" : "nonsecure", exc);
22
+ if (extract64(raw_read(env, ri) ^ value, 48, 16) != 0) {
32
23
tlb_flush_by_mmuidx(cs, alle1_tlbmask(env));
33
if (arm_feature(env, ARM_FEATURE_V8)) {
24
- raw_write(env, ri, value);
34
if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
35
@@ -XXX,XX +XXX,XX @@ static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
36
* we might now want to take a different exception which
37
* targets a different security state, so try again from the top.
38
*/
39
+ qemu_log_mask(CPU_LOG_INT,
40
+ "...derived exception on callee-saves register stacking");
41
v7m_exception_taken(cpu, lr, true, true);
42
return;
43
}
25
}
44
26
+ raw_write(env, ri, value);
45
if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
46
/* Vector load failed: derived exception */
47
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
48
v7m_exception_taken(cpu, lr, true, true);
49
return;
50
}
51
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
52
if (sfault) {
53
env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
54
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
55
- v7m_exception_taken(cpu, excret, true, false);
56
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
57
"stackframe: failed EXC_RETURN.ES validity check\n");
58
+ v7m_exception_taken(cpu, excret, true, false);
59
return;
60
}
61
62
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
63
*/
64
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
65
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
66
- v7m_exception_taken(cpu, excret, true, false);
67
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
68
"stackframe: failed exception return integrity check\n");
69
+ v7m_exception_taken(cpu, excret, true, false);
70
return;
71
}
72
73
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
74
/* Take a SecureFault on the current stack */
75
env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
76
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
77
- v7m_exception_taken(cpu, excret, true, false);
78
qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
79
"stackframe: failed exception return integrity "
80
"signature check\n");
81
+ v7m_exception_taken(cpu, excret, true, false);
82
return;
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
86
/* v7m_stack_read() pended a fault, so take it (as a tail
87
* chained exception on the same stack frame)
88
*/
89
+ qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
90
v7m_exception_taken(cpu, excret, true, false);
91
return;
92
}
93
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
94
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
95
env->v7m.secure);
96
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
97
- v7m_exception_taken(cpu, excret, true, false);
98
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
99
"stackframe: failed exception return integrity "
100
"check\n");
101
+ v7m_exception_taken(cpu, excret, true, false);
102
return;
103
}
104
}
105
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
106
armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
107
env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
108
ignore_stackfaults = v7m_push_stack(cpu);
109
- v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
110
qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
111
"failed exception return integrity check\n");
112
+ v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
113
return;
114
}
115
116
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
117
118
ignore_stackfaults = v7m_push_stack(cpu);
119
v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
120
- qemu_log_mask(CPU_LOG_INT, "... as %d\n", env->v7m.exception);
121
}
27
}
122
28
123
/* Function used to synchronize QEMU's AArch64 register set with AArch32
29
static const ARMCPRegInfo vmsa_pmsa_cp_reginfo[] = {
124
--
30
--
125
2.18.0
31
2.25.1
126
127
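
The VMID-only comparison in the vttbr_write change above is easy to check in
isolation: VTTBR_EL2[63:48] holds the VMID, so a flush is needed only when
those 16 bits change. In this sketch extract64 is reimplemented locally (QEMU
provides it in its bitops header); the test values are made up.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Local stand-in for QEMU's extract64(): bits [start, start+length). */
static uint64_t extract64(uint64_t value, int start, int length)
{
    return (value >> start) & (~0ULL >> (64 - length));
}

/*
 * XOR old and new so any bit that changed survives, then look only
 * at the VMID field in bits [63:48].
 */
static bool vmid_changed(uint64_t old_vttbr, uint64_t new_vttbr)
{
    return extract64(old_vttbr ^ new_vttbr, 48, 16) != 0;
}

int main(void)
{
    uint64_t old_vttbr = (0x12ULL << 48) | 0x4000; /* VMID 0x12 */
    uint64_t new_baddr = (0x12ULL << 48) | 0x8000; /* only BADDR differs */
    uint64_t new_vmid  = (0x13ULL << 48) | 0x4000; /* VMID differs */

    printf("BADDR-only write needs flush? %d\n",
           vmid_changed(old_vttbr, new_baddr)); /* 0 */
    printf("VMID write needs flush?       %d\n",
           vmid_changed(old_vttbr, new_vmid));  /* 1 */
    return 0;
}
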
1
From: Adam Lackorzynski <adam@l4re.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Use an int64_t as a return type to restore
3
Consolidate most of the inputs and outputs of S1_ptw_translate
4
the negative check for arm_load_elf.
4
into a single structure. Plumb this through arm_ld*_ptw from
5
the controlling get_phys_addr_* routine.
5
6
6
Signed-off-by: Adam Lackorzynski <adam@l4re.org>
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180730173712.GG4987@os.inf.tu-dresden.de
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
9
Message-id: 20221011031911.2408754-8-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
11
---
11
hw/arm/boot.c | 8 ++++----
12
target/arm/ptw.c | 140 ++++++++++++++++++++++++++---------------------
12
1 file changed, 4 insertions(+), 4 deletions(-)
13
1 file changed, 79 insertions(+), 61 deletions(-)
13
14
14
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/arm/boot.c
17
--- a/target/arm/ptw.c
17
+++ b/hw/arm/boot.c
18
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ static int do_arm_linux_init(Object *obj, void *opaque)
19
@@ -XXX,XX +XXX,XX @@
20
#include "idau.h"
21
22
23
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
24
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
25
- bool is_secure, bool s1_is_el0,
26
+typedef struct S1Translate {
27
+ ARMMMUIdx in_mmu_idx;
28
+ bool in_secure;
29
+ bool out_secure;
30
+ hwaddr out_phys;
31
+} S1Translate;
32
+
33
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
34
+ uint64_t address,
35
+ MMUAccessType access_type, bool s1_is_el0,
36
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
37
__attribute__((nonnull));
38
39
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
40
}
41
42
/* Translate a S1 pagetable walk through S2 if needed. */
43
-static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
44
- hwaddr addr, bool *is_secure_ptr,
45
- ARMMMUFaultInfo *fi)
46
+static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
47
+ hwaddr addr, ARMMMUFaultInfo *fi)
48
{
49
- bool is_secure = *is_secure_ptr;
50
+ bool is_secure = ptw->in_secure;
51
ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
52
53
- if (arm_mmu_idx_is_stage1_of_2(mmu_idx) &&
54
+ if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
55
!regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
56
GetPhysAddrResult s2 = {};
57
+ S1Translate s2ptw = {
58
+ .in_mmu_idx = s2_mmu_idx,
59
+ .in_secure = is_secure,
60
+ };
61
uint64_t hcr;
62
int ret;
63
64
- ret = get_phys_addr_lpae(env, addr, MMU_DATA_LOAD, s2_mmu_idx,
65
- is_secure, false, &s2, fi);
66
+ ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
67
+ false, &s2, fi);
68
if (ret) {
69
assert(fi->type != ARMFault_None);
70
fi->s2addr = addr;
71
fi->stage2 = true;
72
fi->s1ptw = true;
73
fi->s1ns = !is_secure;
74
- return ~0;
75
+ return false;
76
}
77
78
hcr = arm_hcr_el2_eff_secstate(env, is_secure);
79
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
80
fi->stage2 = true;
81
fi->s1ptw = true;
82
fi->s1ns = !is_secure;
83
- return ~0;
84
+ return false;
85
}
86
87
if (arm_is_secure_below_el3(env)) {
88
@@ -XXX,XX +XXX,XX @@ static hwaddr S1_ptw_translate(CPUARMState *env, ARMMMUIdx mmu_idx,
89
} else {
90
is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
91
}
92
- *is_secure_ptr = is_secure;
93
} else {
94
assert(!is_secure);
95
}
96
97
addr = s2.f.phys_addr;
98
}
99
- return addr;
100
+
101
+ ptw->out_secure = is_secure;
102
+ ptw->out_phys = addr;
103
+ return true;
104
}
105
106
/* All loads done in the course of a page table walk go through here. */
107
-static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
108
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
109
+static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
110
+ ARMMMUFaultInfo *fi)
111
{
112
CPUState *cs = env_cpu(env);
113
MemTxAttrs attrs = {};
114
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
115
AddressSpace *as;
116
uint32_t data;
117
118
- addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
119
- attrs.secure = is_secure;
120
- as = arm_addressspace(cs, attrs);
121
- if (fi->s1ptw) {
122
+ if (!S1_ptw_translate(env, ptw, addr, fi)) {
123
return 0;
124
}
125
- if (regime_translation_big_endian(env, mmu_idx)) {
126
+ addr = ptw->out_phys;
127
+ attrs.secure = ptw->out_secure;
128
+ as = arm_addressspace(cs, attrs);
129
+ if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
130
data = address_space_ldl_be(as, addr, attrs, &result);
131
} else {
132
data = address_space_ldl_le(as, addr, attrs, &result);
133
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
19
return 0;
134
return 0;
20
}
135
}
21
136
22
-static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
137
-static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
23
- uint64_t *lowaddr, uint64_t *highaddr,
138
- ARMMMUIdx mmu_idx, ARMMMUFaultInfo *fi)
24
- int elf_machine, AddressSpace *as)
139
+static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
25
+static int64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
140
+ ARMMMUFaultInfo *fi)
26
+ uint64_t *lowaddr, uint64_t *highaddr,
141
{
27
+ int elf_machine, AddressSpace *as)
142
CPUState *cs = env_cpu(env);
28
{
143
MemTxAttrs attrs = {};
29
bool elf_is64;
144
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, hwaddr addr, bool is_secure,
30
union {
145
AddressSpace *as;
31
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_load_elf(struct arm_boot_info *info, uint64_t *pentry,
146
uint64_t data;
32
} elf_header;
147
33
int data_swab = 0;
148
- addr = S1_ptw_translate(env, mmu_idx, addr, &is_secure, fi);
34
bool big_endian;
149
- attrs.secure = is_secure;
35
- uint64_t ret = -1;
150
- as = arm_addressspace(cs, attrs);
36
+ int64_t ret = -1;
151
- if (fi->s1ptw) {
37
Error *err = NULL;
152
+ if (!S1_ptw_translate(env, ptw, addr, fi)) {
38
153
return 0;
154
}
155
- if (regime_translation_big_endian(env, mmu_idx)) {
156
+ addr = ptw->out_phys;
157
+ attrs.secure = ptw->out_secure;
158
+ as = arm_addressspace(cs, attrs);
159
+ if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
160
data = address_space_ldq_be(as, addr, attrs, &result);
161
} else {
162
data = address_space_ldq_le(as, addr, attrs, &result);
163
@@ -XXX,XX +XXX,XX @@ static int simple_ap_to_rw_prot(CPUARMState *env, ARMMMUIdx mmu_idx, int ap)
164
return simple_ap_to_rw_prot_is_user(ap, regime_is_user(env, mmu_idx));
165
}
166
167
-static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
168
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
169
- bool is_secure, GetPhysAddrResult *result,
170
- ARMMMUFaultInfo *fi)
171
+static bool get_phys_addr_v5(CPUARMState *env, S1Translate *ptw,
172
+ uint32_t address, MMUAccessType access_type,
173
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
174
{
175
int level = 1;
176
uint32_t table;
177
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
178
179
/* Pagetable walk. */
180
/* Lookup l1 descriptor. */
181
- if (!get_level1_table_address(env, mmu_idx, &table, address)) {
182
+ if (!get_level1_table_address(env, ptw->in_mmu_idx, &table, address)) {
183
/* Section translation fault if page walk is disabled by PD0 or PD1 */
184
fi->type = ARMFault_Translation;
185
goto do_fault;
186
}
187
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
188
+ desc = arm_ldl_ptw(env, ptw, table, fi);
189
if (fi->type != ARMFault_None) {
190
goto do_fault;
191
}
192
type = (desc & 3);
193
domain = (desc >> 5) & 0x0f;
194
- if (regime_el(env, mmu_idx) == 1) {
195
+ if (regime_el(env, ptw->in_mmu_idx) == 1) {
196
dacr = env->cp15.dacr_ns;
197
} else {
198
dacr = env->cp15.dacr_s;
199
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
200
/* Fine pagetable. */
201
table = (desc & 0xfffff000) | ((address >> 8) & 0xffc);
202
}
203
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
204
+ desc = arm_ldl_ptw(env, ptw, table, fi);
205
if (fi->type != ARMFault_None) {
206
goto do_fault;
207
}
208
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
209
g_assert_not_reached();
210
}
211
}
212
- result->f.prot = ap_to_rw_prot(env, mmu_idx, ap, domain_prot);
213
+ result->f.prot = ap_to_rw_prot(env, ptw->in_mmu_idx, ap, domain_prot);
214
result->f.prot |= result->f.prot ? PAGE_EXEC : 0;
215
if (!(result->f.prot & (1 << access_type))) {
216
/* Access permission fault. */
217
@@ -XXX,XX +XXX,XX @@ do_fault:
218
return true;
219
}
220
221
-static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
222
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
223
- bool is_secure, GetPhysAddrResult *result,
224
- ARMMMUFaultInfo *fi)
225
+static bool get_phys_addr_v6(CPUARMState *env, S1Translate *ptw,
226
+ uint32_t address, MMUAccessType access_type,
227
+ GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
228
{
229
ARMCPU *cpu = env_archcpu(env);
230
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
231
int level = 1;
232
uint32_t table;
233
uint32_t desc;
234
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
235
fi->type = ARMFault_Translation;
236
goto do_fault;
237
}
238
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
239
+ desc = arm_ldl_ptw(env, ptw, table, fi);
240
if (fi->type != ARMFault_None) {
241
goto do_fault;
242
}
243
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
244
ns = extract32(desc, 3, 1);
245
/* Lookup l2 entry. */
246
table = (desc & 0xfffffc00) | ((address >> 10) & 0x3fc);
247
- desc = arm_ldl_ptw(env, table, is_secure, mmu_idx, fi);
248
+ desc = arm_ldl_ptw(env, ptw, table, fi);
249
if (fi->type != ARMFault_None) {
250
goto do_fault;
251
}
252
@@ -XXX,XX +XXX,XX @@ static bool check_s2_mmu_setup(ARMCPU *cpu, bool is_aa64, int level,
253
* the WnR bit is never set (the caller must do this).
254
*
255
* @env: CPUARMState
256
+ * @ptw: Current and next stage parameters for the walk.
257
* @address: virtual address to get physical address for
258
* @access_type: MMU_DATA_LOAD, MMU_DATA_STORE or MMU_INST_FETCH
259
- * @mmu_idx: MMU index indicating required translation regime
260
- * @s1_is_el0: if @mmu_idx is ARMMMUIdx_Stage2 (so this is a stage 2 page
261
- * table walk), must be true if this is stage 2 of a stage 1+2
262
+ * @s1_is_el0: if @ptw->in_mmu_idx is ARMMMUIdx_Stage2
263
+ * (so this is a stage 2 page table walk),
264
+ * must be true if this is stage 2 of a stage 1+2
265
* walk for an EL0 access. If @mmu_idx is anything else,
266
* @s1_is_el0 is ignored.
267
* @result: set on translation success,
268
* @fi: set to fault info if the translation fails
269
*/
270
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
271
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
272
- bool is_secure, bool s1_is_el0,
273
+static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
274
+ uint64_t address,
275
+ MMUAccessType access_type, bool s1_is_el0,
276
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
277
{
278
ARMCPU *cpu = env_archcpu(env);
279
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
280
+ bool is_secure = ptw->in_secure;
281
/* Read an LPAE long-descriptor translation table. */
282
ARMFaultType fault_type = ARMFault_Translation;
283
uint32_t level;
284
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
285
descaddr |= (address >> (stride * (4 - level))) & indexmask;
286
descaddr &= ~7ULL;
287
nstable = extract32(tableattrs, 4, 1);
288
- descriptor = arm_ldq_ptw(env, descaddr, !nstable, mmu_idx, fi);
289
+ ptw->in_secure = !nstable;
290
+ descriptor = arm_ldq_ptw(env, ptw, descaddr, fi);
291
if (fi->type != ARMFault_None) {
292
goto do_fault;
293
}
294
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
295
ARMMMUFaultInfo *fi)
296
{
297
ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
298
+ S1Translate ptw;
299
300
if (mmu_idx != s1_mmu_idx) {
301
/*
302
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
303
int ret;
304
bool ipa_secure, s2walk_secure;
305
ARMCacheAttrs cacheattrs1;
306
- ARMMMUIdx s2_mmu_idx;
307
bool is_el0;
308
uint64_t hcr;
309
310
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
311
s2walk_secure = false;
312
}
313
314
- s2_mmu_idx = (s2walk_secure
315
- ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2);
316
+ ptw.in_mmu_idx =
317
+ s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
318
+ ptw.in_secure = s2walk_secure;
319
is_el0 = mmu_idx == ARMMMUIdx_E10_0;
320
321
/*
322
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
323
cacheattrs1 = result->cacheattrs;
324
memset(result, 0, sizeof(*result));
325
326
- ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx,
327
- s2walk_secure, is_el0, result, fi);
328
+ ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
329
+ is_el0, result, fi);
330
fi->s2addr = ipa;
331
332
/* Combine the S1 and S2 perms. */
333
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
334
return get_phys_addr_disabled(env, address, access_type, mmu_idx,
335
is_secure, result, fi);
336
}
337
+
338
+ ptw.in_mmu_idx = mmu_idx;
339
+ ptw.in_secure = is_secure;
340
+
341
if (regime_using_lpae_format(env, mmu_idx)) {
342
- return get_phys_addr_lpae(env, address, access_type, mmu_idx,
343
- is_secure, false, result, fi);
344
+ return get_phys_addr_lpae(env, &ptw, address, access_type, false,
345
+ result, fi);
346
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
347
- return get_phys_addr_v6(env, address, access_type, mmu_idx,
348
- is_secure, result, fi);
349
+ return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
350
} else {
351
- return get_phys_addr_v5(env, address, access_type, mmu_idx,
352
- is_secure, result, fi);
353
+ return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
354
}
355
}
39
356
40
--
357
--
41
2.18.0
358
2.25.1
42
43
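
The failure mode behind the arm_load_elf return-type fix above is worth
seeing concretely: with a uint64_t return, the caller's "ret < 0" test is
always false (most compilers warn about exactly this), so an error value of
-1 is silently treated as a huge size. A minimal sketch with stub loaders
standing in for the real function:

#include <stdint.h>
#include <stdio.h>

/* The buggy shape: an error return of -1 becomes UINT64_MAX. */
static uint64_t load_unsigned(void) { return -1; }

/* The fixed shape: -1 stays negative and relational tests work. */
static int64_t load_signed(void) { return -1; }

int main(void)
{
    uint64_t u = load_unsigned();
    int64_t s = load_signed();

    /* Always false for an unsigned type; this branch can never fire. */
    if (u < 0) {
        printf("unsigned: failure detected\n");
    } else {
        printf("unsigned: failure missed, size looks like %llu\n",
               (unsigned long long)u);
    }

    /* With a signed return the negative check behaves as intended. */
    if (s < 0) {
        printf("signed: failure detected\n");
    }
    return 0;
}
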
1
From: Luc Michel <luc.michel@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Some functions are now only used in arm_gic.c; make them static. Some of
3
Before using softmmu page tables for the ptw, plumb down
4
them were only used by the NVIC implementation and are not used
4
a debug parameter so that we can query page table entries
5
anymore, so remove them.
5
from gdbstub without modifying cpu state.
6
6
7
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
8
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Message-id: 20180727095421.386-4-luc.michel@greensocs.com
8
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
9
Message-id: 20221011031911.2408754-9-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
11
---
13
hw/intc/gic_internal.h | 4 ----
12
target/arm/ptw.c | 55 ++++++++++++++++++++++++++++++++----------------
14
hw/intc/arm_gic.c | 23 ++---------------------
13
1 file changed, 37 insertions(+), 18 deletions(-)
15
2 files changed, 2 insertions(+), 25 deletions(-)
16
14
17
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
15
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
18
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
19
--- a/hw/intc/gic_internal.h
17
--- a/target/arm/ptw.c
20
+++ b/hw/intc/gic_internal.h
18
+++ b/target/arm/ptw.c
21
@@ -XXX,XX +XXX,XX @@
19
@@ -XXX,XX +XXX,XX @@
22
/* The special cases for the revision property: */
20
typedef struct S1Translate {
23
#define REV_11MPCORE 0
21
ARMMMUIdx in_mmu_idx;
24
22
bool in_secure;
25
-void gic_set_pending_private(GICState *s, int cpu, int irq);
23
+ bool in_debug;
26
uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
24
bool out_secure;
27
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
25
hwaddr out_phys;
28
-void gic_update(GICState *s);
26
} S1Translate;
29
-void gic_init_irqs_and_distributor(GICState *s);
27
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
30
void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
28
S1Translate s2ptw = {
31
MemTxAttrs attrs);
29
.in_mmu_idx = s2_mmu_idx,
32
30
.in_secure = is_secure,
33
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
31
+ .in_debug = ptw->in_debug,
34
index XXXXXXX..XXXXXXX 100644
32
};
35
--- a/hw/intc/arm_gic.c
33
uint64_t hcr;
36
+++ b/hw/intc/arm_gic.c
34
int ret;
37
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
35
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
38
36
return 0;
39
/* TODO: Many places that call this routine could be optimized. */
37
}
40
/* Update interrupt status after enabled or pending bits have been changed. */
38
41
-void gic_update(GICState *s)
39
-bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
42
+static void gic_update(GICState *s)
40
- MMUAccessType access_type, ARMMMUIdx mmu_idx,
41
- bool is_secure, GetPhysAddrResult *result,
42
- ARMMMUFaultInfo *fi)
43
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
44
+ target_ulong address,
45
+ MMUAccessType access_type,
46
+ GetPhysAddrResult *result,
47
+ ARMMMUFaultInfo *fi)
43
{
48
{
44
int best_irq;
49
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
45
int best_prio;
50
ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
46
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
51
- S1Translate ptw;
52
+ bool is_secure = ptw->in_secure;
53
54
if (mmu_idx != s1_mmu_idx) {
55
/*
56
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
57
bool is_el0;
58
uint64_t hcr;
59
60
- ret = get_phys_addr_with_secure(env, address, access_type,
61
- s1_mmu_idx, is_secure, result, fi);
62
+ ptw->in_mmu_idx = s1_mmu_idx;
63
+ ret = get_phys_addr_with_struct(env, ptw, address, access_type,
64
+ result, fi);
65
66
/* If S1 fails or S2 is disabled, return early. */
67
if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
68
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
69
s2walk_secure = false;
70
}
71
72
- ptw.in_mmu_idx =
73
+ ptw->in_mmu_idx =
74
s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
75
- ptw.in_secure = s2walk_secure;
76
+ ptw->in_secure = s2walk_secure;
77
is_el0 = mmu_idx == ARMMMUIdx_E10_0;
78
79
/*
80
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
81
cacheattrs1 = result->cacheattrs;
82
memset(result, 0, sizeof(*result));
83
84
- ret = get_phys_addr_lpae(env, &ptw, ipa, access_type,
85
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
86
is_el0, result, fi);
87
fi->s2addr = ipa;
88
89
@@ -XXX,XX +XXX,XX @@ bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
90
is_secure, result, fi);
91
}
92
93
- ptw.in_mmu_idx = mmu_idx;
94
- ptw.in_secure = is_secure;
95
-
96
if (regime_using_lpae_format(env, mmu_idx)) {
97
- return get_phys_addr_lpae(env, &ptw, address, access_type, false,
98
+ return get_phys_addr_lpae(env, ptw, address, access_type, false,
99
result, fi);
100
} else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
101
- return get_phys_addr_v6(env, &ptw, address, access_type, result, fi);
102
+ return get_phys_addr_v6(env, ptw, address, access_type, result, fi);
103
} else {
104
- return get_phys_addr_v5(env, &ptw, address, access_type, result, fi);
105
+ return get_phys_addr_v5(env, ptw, address, access_type, result, fi);
47
}
106
}
48
}
107
}
49
108
50
-void gic_set_pending_private(GICState *s, int cpu, int irq)
109
+bool get_phys_addr_with_secure(CPUARMState *env, target_ulong address,
51
-{
110
+ MMUAccessType access_type, ARMMMUIdx mmu_idx,
52
- int cm = 1 << cpu;
111
+ bool is_secure, GetPhysAddrResult *result,
53
-
112
+ ARMMMUFaultInfo *fi)
54
- if (gic_test_pending(s, irq, cm)) {
113
+{
55
- return;
114
+ S1Translate ptw = {
56
- }
115
+ .in_mmu_idx = mmu_idx,
57
-
116
+ .in_secure = is_secure,
58
- DPRINTF("Set %d pending cpu %d\n", irq, cpu);
117
+ };
59
- GIC_DIST_SET_PENDING(irq, cm);
118
+ return get_phys_addr_with_struct(env, &ptw, address, access_type,
60
- gic_update(s);
119
+ result, fi);
61
-}
120
+}
62
-
121
+
63
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
122
bool get_phys_addr(CPUARMState *env, target_ulong address,
64
int cm, int target)
123
MMUAccessType access_type, ARMMMUIdx mmu_idx,
124
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
125
@@ -XXX,XX +XXX,XX @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
65
{
126
{
66
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
127
ARMCPU *cpu = ARM_CPU(cs);
67
GIC_DIST_CLEAR_ACTIVE(irq, cm);
128
CPUARMState *env = &cpu->env;
68
}
129
+ S1Translate ptw = {
69
130
+ .in_mmu_idx = arm_mmu_idx(env),
70
-void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
131
+ .in_secure = arm_is_secure(env),
71
+static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
132
+ .in_debug = true,
72
{
133
+ };
73
int cm = 1 << cpu;
134
GetPhysAddrResult res = {};
74
int group;
135
ARMMMUFaultInfo fi = {};
75
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
136
- ARMMMUIdx mmu_idx = arm_mmu_idx(env);
76
.endianness = DEVICE_NATIVE_ENDIAN,
137
bool ret;
77
};
138
78
139
- ret = get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &res, &fi);
79
-/* This function is used by nvic model */
140
+ ret = get_phys_addr_with_struct(env, &ptw, addr, MMU_DATA_LOAD, &res, &fi);
80
-void gic_init_irqs_and_distributor(GICState *s)
141
*attrs = res.f.attrs;
81
-{
142
82
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
143
if (ret) {
83
-}
84
-
85
static void arm_gic_realize(DeviceState *dev, Error **errp)
86
{
87
/* Device instance realize function for the GIC sysbus device */
88
--
144
--
89
2.18.0
145
2.25.1
90
91
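
The in_debug plumbing above follows a common pattern: bundle the walk inputs
in one struct and add a flag so a debugger-driven lookup takes a pure,
side-effect-free path instead of filling the TLB. A sketch with stubbed
types and a trivial fake walk; none of the function bodies here are QEMU's.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Shape of the walk-input struct from the patch, trimmed for this sketch. */
typedef struct S1Translate {
    int in_mmu_idx;
    bool in_secure;
    bool in_debug;      /* set for gdbstub-driven lookups */
    uint64_t out_phys;
} S1Translate;

/* Stand-in for a real page table walk: any pure function of the input. */
static uint64_t walk_tables(uint64_t vaddr)
{
    return vaddr + 0x80000000ULL;
}

static void fill_tlb(uint64_t vaddr, uint64_t paddr)
{
    printf("TLB fill: %#llx -> %#llx\n",
           (unsigned long long)vaddr, (unsigned long long)paddr);
}

static bool translate(S1Translate *ptw, uint64_t vaddr)
{
    ptw->out_phys = walk_tables(vaddr);
    if (!ptw->in_debug) {
        /* Normal path: cache the result as a side effect. */
        fill_tlb(vaddr, ptw->out_phys);
    }
    /* Debug path: answer the query without touching cached state. */
    return true;
}

int main(void)
{
    S1Translate dbg = { .in_debug = true };
    translate(&dbg, 0x1000);
    printf("debug lookup: %#llx, no TLB fill printed above\n",
           (unsigned long long)dbg.out_phys);
    return 0;
}
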
1
From: Richard Henderson <richard.henderson@linaro.org>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Used the wrong temporary in the computation of subtractive overflow.
3
Hoist this test out of arm_ld[lq]_ptw into S1_ptw_translate.
4
4
5
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
7
Message-id: 20221011031911.2408754-10-richard.henderson@linaro.org
8
Tested-by: Alex Bennée <alex.bennee@linaro.org>
9
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
10
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
9
---
13
target/arm/translate-sve.c | 2 +-
10
target/arm/ptw.c | 6 ++++--
14
1 file changed, 1 insertion(+), 1 deletion(-)
11
1 file changed, 4 insertions(+), 2 deletions(-)
15
12
16
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
13
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
17
index XXXXXXX..XXXXXXX 100644
14
index XXXXXXX..XXXXXXX 100644
18
--- a/target/arm/translate-sve.c
15
--- a/target/arm/ptw.c
19
+++ b/target/arm/translate-sve.c
16
+++ b/target/arm/ptw.c
20
@@ -XXX,XX +XXX,XX @@ static void do_sat_addsub_64(TCGv_i64 reg, TCGv_i64 val, bool u, bool d)
17
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
21
/* Detect signed overflow for subtraction. */
18
bool in_secure;
22
tcg_gen_xor_i64(t0, reg, val);
19
bool in_debug;
23
tcg_gen_sub_i64(t1, reg, val);
20
bool out_secure;
24
- tcg_gen_xor_i64(reg, reg, t0);
21
+ bool out_be;
25
+ tcg_gen_xor_i64(reg, reg, t1);
22
hwaddr out_phys;
26
tcg_gen_and_i64(t0, t0, reg);
23
} S1Translate;
27
24
28
/* Bound the result. */
25
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
26
27
ptw->out_secure = is_secure;
28
ptw->out_phys = addr;
29
+ ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
30
return true;
31
}
32
33
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
34
addr = ptw->out_phys;
35
attrs.secure = ptw->out_secure;
36
as = arm_addressspace(cs, attrs);
37
- if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
38
+ if (ptw->out_be) {
39
data = address_space_ldl_be(as, addr, attrs, &result);
40
} else {
41
data = address_space_ldl_le(as, addr, attrs, &result);
42
@@ -XXX,XX +XXX,XX @@ static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
43
addr = ptw->out_phys;
44
attrs.secure = ptw->out_secure;
45
as = arm_addressspace(cs, attrs);
46
- if (regime_translation_big_endian(env, ptw->in_mmu_idx)) {
47
+ if (ptw->out_be) {
48
data = address_space_ldq_be(as, addr, attrs, &result);
49
} else {
50
data = address_space_ldq_le(as, addr, attrs, &result);
29
--
51
--
30
2.18.0
52
2.25.1
31
32
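
The one-line translate-sve fix above is the classic signed-overflow test for
r = a - b: overflow occurred iff a and b have different signs and a and r
have different signs, i.e. the sign bit of (a ^ b) & (a ^ r) is set. The bug
was XORing with the operand instead of the subtraction result. A minimal
sketch of the same test in plain C:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Overflow test for r = a - b, done in unsigned arithmetic so the
 * subtraction itself cannot be undefined behaviour: bit 63 of
 * (a ^ b) & (a ^ r) is set exactly when signed overflow occurred.
 */
static bool sub_overflows(int64_t a, int64_t b)
{
    uint64_t ua = (uint64_t)a;
    uint64_t ub = (uint64_t)b;
    uint64_t r = ua - ub;

    return (((ua ^ ub) & (ua ^ r)) >> 63) != 0;
}

int main(void)
{
    printf("INT64_MIN - 1:  %d\n", sub_overflows(INT64_MIN, 1));  /* 1 */
    printf("0 - 1:          %d\n", sub_overflows(0, 1));          /* 0 */
    printf("INT64_MAX - -1: %d\n", sub_overflows(INT64_MAX, -1)); /* 1 */
    return 0;
}
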
1
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
1
From: Richard Henderson <richard.henderson@linaro.org>
2
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
3
Implement this in arm_excp_unmasked().
4
2
3
So far, limit the change to S1_ptw_translate, arm_ldl_ptw, and
4
arm_ldq_ptw. Use probe_access_full to find the host address,
5
and, if one is found, use a host load. If the probe fails, we've got our
6
fault info already. On the off chance that page tables are not
7
in RAM, continue to use the address_space_ld* functions.
8
9
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
10
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
11
Message-id: 20221011031911.2408754-11-richard.henderson@linaro.org
5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
8
---
13
---
9
target/arm/cpu.h | 6 ++++--
14
target/arm/cpu.h | 5 +
10
1 file changed, 4 insertions(+), 2 deletions(-)
15
target/arm/ptw.c | 196 +++++++++++++++++++++++++---------------
16
target/arm/tlb_helper.c | 17 +++-
17
3 files changed, 144 insertions(+), 74 deletions(-)
11
18
12
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
19
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
13
index XXXXXXX..XXXXXXX 100644
20
index XXXXXXX..XXXXXXX 100644
14
--- a/target/arm/cpu.h
21
--- a/target/arm/cpu.h
15
+++ b/target/arm/cpu.h
22
+++ b/target/arm/cpu.h
16
@@ -XXX,XX +XXX,XX @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
23
@@ -XXX,XX +XXX,XX @@ typedef struct CPUARMTBFlags {
17
break;
24
target_ulong flags2;
18
25
} CPUARMTBFlags;
19
case EXCP_VFIQ:
26
20
- if (secure || !(env->cp15.hcr_el2 & HCR_FMO)) {
27
+typedef struct ARMMMUFaultInfo ARMMMUFaultInfo;
21
+ if (secure || !(env->cp15.hcr_el2 & HCR_FMO)
28
+
22
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
29
typedef struct CPUArchState {
23
/* VFIQs are only taken when hypervized and non-secure. */
30
/* Regs for current mode. */
31
uint32_t regs[16];
32
@@ -XXX,XX +XXX,XX @@ typedef struct CPUArchState {
33
struct CPUBreakpoint *cpu_breakpoint[16];
34
struct CPUWatchpoint *cpu_watchpoint[16];
35
36
+ /* Optional fault info across tlb lookup. */
37
+ ARMMMUFaultInfo *tlb_fi;
38
+
39
/* Fields up to this point are cleared by a CPU reset */
40
struct {} end_reset_fields;
41
42
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
43
index XXXXXXX..XXXXXXX 100644
44
--- a/target/arm/ptw.c
45
+++ b/target/arm/ptw.c
46
@@ -XXX,XX +XXX,XX @@
47
#include "qemu/osdep.h"
48
#include "qemu/log.h"
49
#include "qemu/range.h"
50
+#include "exec/exec-all.h"
51
#include "cpu.h"
52
#include "internals.h"
53
#include "idau.h"
54
@@ -XXX,XX +XXX,XX @@ typedef struct S1Translate {
55
bool out_secure;
56
bool out_be;
57
hwaddr out_phys;
58
+ void *out_host;
59
} S1Translate;
60
61
static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
62
@@ -XXX,XX +XXX,XX @@ static bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx,
63
return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
64
}
65
66
-static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
67
+static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
68
{
69
/*
70
* For an S1 page table walk, the stage 1 attributes are always
71
@@ -XXX,XX +XXX,XX @@ static bool ptw_attrs_are_device(uint64_t hcr, ARMCacheAttrs cacheattrs)
72
* With HCR_EL2.FWB == 1 this is when descriptor bit [4] is 0, ie
73
* when cacheattrs.attrs bit [2] is 0.
74
*/
75
- assert(cacheattrs.is_s2_format);
76
if (hcr & HCR_FWB) {
77
- return (cacheattrs.attrs & 0x4) == 0;
78
+ return (attrs & 0x4) == 0;
79
} else {
80
- return (cacheattrs.attrs & 0xc) == 0;
81
+ return (attrs & 0xc) == 0;
82
}
83
}
84
85
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
86
hwaddr addr, ARMMMUFaultInfo *fi)
87
{
88
bool is_secure = ptw->in_secure;
89
+ ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
90
ARMMMUIdx s2_mmu_idx = is_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
91
+ bool s2_phys = false;
92
+ uint8_t pte_attrs;
93
+ bool pte_secure;
94
95
- if (arm_mmu_idx_is_stage1_of_2(ptw->in_mmu_idx) &&
96
- !regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
97
- GetPhysAddrResult s2 = {};
98
- S1Translate s2ptw = {
99
- .in_mmu_idx = s2_mmu_idx,
100
- .in_secure = is_secure,
101
- .in_debug = ptw->in_debug,
102
- };
103
- uint64_t hcr;
104
- int ret;
105
+ if (!arm_mmu_idx_is_stage1_of_2(mmu_idx)
106
+ || regime_translation_disabled(env, s2_mmu_idx, is_secure)) {
107
+ s2_mmu_idx = is_secure ? ARMMMUIdx_Phys_S : ARMMMUIdx_Phys_NS;
108
+ s2_phys = true;
109
+ }
110
111
- ret = get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
112
- false, &s2, fi);
113
- if (ret) {
114
- assert(fi->type != ARMFault_None);
115
- fi->s2addr = addr;
116
- fi->stage2 = true;
117
- fi->s1ptw = true;
118
- fi->s1ns = !is_secure;
119
- return false;
120
+ if (unlikely(ptw->in_debug)) {
121
+ /*
122
+ * From gdbstub, do not use softmmu so that we don't modify the
123
+ * state of the cpu at all, including softmmu tlb contents.
124
+ */
125
+ if (s2_phys) {
126
+ ptw->out_phys = addr;
127
+ pte_attrs = 0;
128
+ pte_secure = is_secure;
129
+ } else {
130
+ S1Translate s2ptw = {
131
+ .in_mmu_idx = s2_mmu_idx,
132
+ .in_secure = is_secure,
133
+ .in_debug = true,
134
+ };
135
+ GetPhysAddrResult s2 = { };
136
+ if (!get_phys_addr_lpae(env, &s2ptw, addr, MMU_DATA_LOAD,
137
+ false, &s2, fi)) {
138
+ goto fail;
139
+ }
140
+ ptw->out_phys = s2.f.phys_addr;
141
+ pte_attrs = s2.cacheattrs.attrs;
142
+ pte_secure = s2.f.attrs.secure;
143
}
144
+ ptw->out_host = NULL;
145
+ } else {
146
+ CPUTLBEntryFull *full;
147
+ int flags;
148
149
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
150
- if ((hcr & HCR_PTW) && ptw_attrs_are_device(hcr, s2.cacheattrs)) {
151
+ env->tlb_fi = fi;
152
+ flags = probe_access_full(env, addr, MMU_DATA_LOAD,
153
+ arm_to_core_mmu_idx(s2_mmu_idx),
154
+ true, &ptw->out_host, &full, 0);
155
+ env->tlb_fi = NULL;
156
+
157
+ if (unlikely(flags & TLB_INVALID_MASK)) {
158
+ goto fail;
159
+ }
160
+ ptw->out_phys = full->phys_addr;
161
+ pte_attrs = full->pte_attrs;
162
+ pte_secure = full->attrs.secure;
163
+ }
164
+
165
+ if (!s2_phys) {
166
+ uint64_t hcr = arm_hcr_el2_eff_secstate(env, is_secure);
167
+
168
+ if ((hcr & HCR_PTW) && S2_attrs_are_device(hcr, pte_attrs)) {
169
/*
170
* PTW set and S1 walk touched S2 Device memory:
171
* generate Permission fault.
172
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
173
fi->s1ns = !is_secure;
24
return false;
174
return false;
25
}
175
}
26
return !(env->daif & PSTATE_F);
176
-
27
case EXCP_VIRQ:
177
- if (arm_is_secure_below_el3(env)) {
28
- if (secure || !(env->cp15.hcr_el2 & HCR_IMO)) {
178
- /* Check if page table walk is to secure or non-secure PA space. */
29
+ if (secure || !(env->cp15.hcr_el2 & HCR_IMO)
179
- if (is_secure) {
30
+ || (env->cp15.hcr_el2 & HCR_TGE)) {
180
- is_secure = !(env->cp15.vstcr_el2 & VSTCR_SW);
31
/* VIRQs are only taken when hypervized and non-secure. */
181
- } else {
32
return false;
182
- is_secure = !(env->cp15.vtcr_el2 & VTCR_NSW);
33
}
183
- }
184
- } else {
185
- assert(!is_secure);
186
- }
187
-
188
- addr = s2.f.phys_addr;
189
}
190
191
- ptw->out_secure = is_secure;
192
- ptw->out_phys = addr;
193
- ptw->out_be = regime_translation_big_endian(env, ptw->in_mmu_idx);
194
+ /* Check if page table walk is to secure or non-secure PA space. */
195
+ ptw->out_secure = (is_secure
196
+ && !(pte_secure
197
+ ? env->cp15.vstcr_el2 & VSTCR_SW
198
+ : env->cp15.vtcr_el2 & VTCR_NSW));
199
+ ptw->out_be = regime_translation_big_endian(env, mmu_idx);
200
return true;
201
+
202
+ fail:
203
+ assert(fi->type != ARMFault_None);
204
+ fi->s2addr = addr;
205
+ fi->stage2 = true;
206
+ fi->s1ptw = true;
207
+ fi->s1ns = !is_secure;
208
+ return false;
209
}
210
211
/* All loads done in the course of a page table walk go through here. */
212
@@ -XXX,XX +XXX,XX @@ static uint32_t arm_ldl_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
213
ARMMMUFaultInfo *fi)
214
{
215
CPUState *cs = env_cpu(env);
216
- MemTxAttrs attrs = {};
217
- MemTxResult result = MEMTX_OK;
218
- AddressSpace *as;
219
uint32_t data;
220
221
if (!S1_ptw_translate(env, ptw, addr, fi)) {
222
+ /* Failure. */
223
+ assert(fi->s1ptw);
224
return 0;
225
}
226
- addr = ptw->out_phys;
227
- attrs.secure = ptw->out_secure;
228
- as = arm_addressspace(cs, attrs);
229
- if (ptw->out_be) {
230
- data = address_space_ldl_be(as, addr, attrs, &result);
231
+
232
+ if (likely(ptw->out_host)) {
233
+ /* Page tables are in RAM, and we have the host address. */
234
+ if (ptw->out_be) {
235
+ data = ldl_be_p(ptw->out_host);
236
+ } else {
237
+ data = ldl_le_p(ptw->out_host);
238
+ }
239
} else {
240
- data = address_space_ldl_le(as, addr, attrs, &result);
241
+ /* Page tables are in MMIO. */
242
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
243
+ AddressSpace *as = arm_addressspace(cs, attrs);
244
+ MemTxResult result = MEMTX_OK;
245
+
246
+ if (ptw->out_be) {
247
+ data = address_space_ldl_be(as, ptw->out_phys, attrs, &result);
248
+ } else {
249
+ data = address_space_ldl_le(as, ptw->out_phys, attrs, &result);
250
+ }
251
+ if (unlikely(result != MEMTX_OK)) {
252
+ fi->type = ARMFault_SyncExternalOnWalk;
253
+ fi->ea = arm_extabort_type(result);
254
+ return 0;
255
+ }
256
}
257
- if (result == MEMTX_OK) {
258
- return data;
259
- }
260
- fi->type = ARMFault_SyncExternalOnWalk;
261
- fi->ea = arm_extabort_type(result);
262
- return 0;
263
+ return data;
264
}
265
266
static uint64_t arm_ldq_ptw(CPUARMState *env, S1Translate *ptw, hwaddr addr,
267
ARMMMUFaultInfo *fi)
268
{
269
CPUState *cs = env_cpu(env);
270
- MemTxAttrs attrs = {};
271
- MemTxResult result = MEMTX_OK;
272
- AddressSpace *as;
273
uint64_t data;
274
275
if (!S1_ptw_translate(env, ptw, addr, fi)) {
276
+ /* Failure. */
277
+ assert(fi->s1ptw);
278
return 0;
279
}
280
- addr = ptw->out_phys;
281
- attrs.secure = ptw->out_secure;
282
- as = arm_addressspace(cs, attrs);
283
- if (ptw->out_be) {
284
- data = address_space_ldq_be(as, addr, attrs, &result);
285
+
286
+ if (likely(ptw->out_host)) {
287
+ /* Page tables are in RAM, and we have the host address. */
288
+ if (ptw->out_be) {
289
+ data = ldq_be_p(ptw->out_host);
290
+ } else {
291
+ data = ldq_le_p(ptw->out_host);
292
+ }
293
} else {
294
- data = address_space_ldq_le(as, addr, attrs, &result);
295
+ /* Page tables are in MMIO. */
296
+ MemTxAttrs attrs = { .secure = ptw->out_secure };
297
+ AddressSpace *as = arm_addressspace(cs, attrs);
298
+ MemTxResult result = MEMTX_OK;
299
+
300
+ if (ptw->out_be) {
301
+ data = address_space_ldq_be(as, ptw->out_phys, attrs, &result);
302
+ } else {
303
+ data = address_space_ldq_le(as, ptw->out_phys, attrs, &result);
304
+ }
305
+ if (unlikely(result != MEMTX_OK)) {
306
+ fi->type = ARMFault_SyncExternalOnWalk;
307
+ fi->ea = arm_extabort_type(result);
308
+ return 0;
309
+ }
310
}
311
- if (result == MEMTX_OK) {
312
- return data;
313
- }
314
- fi->type = ARMFault_SyncExternalOnWalk;
315
- fi->ea = arm_extabort_type(result);
316
- return 0;
317
+ return data;
318
}
319
320
static bool get_level1_table_address(CPUARMState *env, ARMMMUIdx mmu_idx,
321
diff --git a/target/arm/tlb_helper.c b/target/arm/tlb_helper.c
322
index XXXXXXX..XXXXXXX 100644
323
--- a/target/arm/tlb_helper.c
324
+++ b/target/arm/tlb_helper.c
325
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
326
bool probe, uintptr_t retaddr)
327
{
328
ARMCPU *cpu = ARM_CPU(cs);
329
- ARMMMUFaultInfo fi = {};
330
GetPhysAddrResult res = {};
331
+ ARMMMUFaultInfo local_fi, *fi;
332
int ret;
333
334
+ /*
335
+ * Allow S1_ptw_translate to see any fault generated here.
336
+ * Since this may recurse, read and clear.
337
+ */
338
+ fi = cpu->env.tlb_fi;
339
+ if (fi) {
340
+ cpu->env.tlb_fi = NULL;
341
+ } else {
342
+ fi = memset(&local_fi, 0, sizeof(local_fi));
343
+ }
344
+
345
/*
346
* Walk the page table and (if the mapping exists) add the page
347
* to the TLB. On success, return true. Otherwise, if probing,
348
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
349
*/
350
ret = get_phys_addr(&cpu->env, address, access_type,
351
core_to_arm_mmu_idx(&cpu->env, mmu_idx),
352
- &res, &fi);
353
+ &res, fi);
354
if (likely(!ret)) {
355
/*
356
* Map a single [sub]page. Regions smaller than our declared
357
@@ -XXX,XX +XXX,XX @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
358
} else {
359
/* now we have a real cpu fault */
360
cpu_restore_state(cs, retaddr, true);
361
- arm_deliver_fault(cpu, address, access_type, mmu_idx, &fi);
362
+ arm_deliver_fault(cpu, address, access_type, mmu_idx, fi);
363
}
364
}
365
#else
34
--
2.18.0
--
2.25.1

1
From: Luc Michel <luc.michel@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add the necessary parts of the virtualization extensions state to the
4
GIC state. We choose to increase the size of the CPU interfaces state to
5
add space for the vCPU interfaces (the GIC_NCPU_VCPU macro). This way,
6
we'll be able to reuse most of the CPU interface code for the vCPUs.
7
8
The only exception is the APR value, which is stored in h_apr in the
9
virtual interface state for vCPUs. This is due to some complications
10
with the GIC VMState, for which we don't want to break backward
11
compatibility. Since the APRs are stored in 2D arrays, increasing the second
12
dimension would lead to an ugly VMState description. To avoid
13
that, we keep it in h_apr for vCPUs.
14
15
The vCPUs are numbered from GIC_NCPU to (GIC_NCPU * 2) - 1. The
16
`gic_is_vcpu` function helps to determine whether a given CPU id corresponds to
17
a physical CPU or a virtual one.
18
19
For the in-kernel KVM VGIC, since the exposed VGIC does not implement
20
the virtualization extensions, we report an error if the corresponding
21
property is set to true.
22
23
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
24
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
3
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
25
Message-id: 20180727095421.386-6-luc.michel@greensocs.com
4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
5
Message-id: 20221011031911.2408754-12-richard.henderson@linaro.org
26
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
27
---
7
---
28
hw/intc/gic_internal.h | 5 ++
8
target/arm/ptw.c | 191 +++++++++++++++++++++++++----------------------
29
include/hw/intc/arm_gic_common.h | 43 +++++++--
9
1 file changed, 100 insertions(+), 91 deletions(-)
30
hw/intc/arm_gic.c | 2 +-
31
hw/intc/arm_gic_common.c | 148 ++++++++++++++++++++++++++-----
32
hw/intc/arm_gic_kvm.c | 8 +-
33
5 files changed, 173 insertions(+), 33 deletions(-)
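
As a reading aid for the layout described above, here is a minimal
standalone sketch of the id convention, under the assumption that the
constants below match the patch. gic_is_vcpu mirrors the helper added
in gic_internal.h; gic_vcpu_id is only an illustrative name for the
cpu-to-vcpu offset the series applies at its call sites.

#include <stdbool.h>

#define GIC_NCPU       8                /* physical CPU interfaces */
#define GIC_NCPU_VCPU  (GIC_NCPU * 2)   /* physical + virtual interfaces */

/* vCPU ids run from GIC_NCPU to GIC_NCPU_VCPU - 1, so a virtual
 * interface shares the per-CPU state arrays with its physical CPU,
 * offset by GIC_NCPU.  (gic_vcpu_id is an illustrative helper.)
 */
static inline int gic_vcpu_id(int cpu)
{
    return cpu + GIC_NCPU;
}

static inline bool gic_is_vcpu(int cpu)
{
    return cpu >= GIC_NCPU;
}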
34
10
35
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
11
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
36
index XXXXXXX..XXXXXXX 100644
12
index XXXXXXX..XXXXXXX 100644
37
--- a/hw/intc/gic_internal.h
13
--- a/target/arm/ptw.c
38
+++ b/hw/intc/gic_internal.h
14
+++ b/target/arm/ptw.c
39
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
15
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,
40
}
16
GetPhysAddrResult *result, ARMMMUFaultInfo *fi)
41
}
17
__attribute__((nonnull));
42
18
43
+static inline bool gic_is_vcpu(int cpu)
19
+static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
44
+{
20
+ target_ulong address,
45
+ return cpu >= GIC_NCPU;
21
+ MMUAccessType access_type,
46
+}
22
+ GetPhysAddrResult *result,
47
+
23
+ ARMMMUFaultInfo *fi)
48
#endif /* QEMU_ARM_GIC_INTERNAL_H */
24
+ __attribute__((nonnull));
49
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
25
+
50
index XXXXXXX..XXXXXXX 100644
26
/* This mapping is common between ID_AA64MMFR0.PARANGE and TCR_ELx.{I}PS. */
51
--- a/include/hw/intc/arm_gic_common.h
27
static const uint8_t pamax_map[] = {
52
+++ b/include/hw/intc/arm_gic_common.h
28
[0] = 32,
53
@@ -XXX,XX +XXX,XX @@
29
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
54
#define GIC_NR_SGIS 16
55
/* Maximum number of possible CPU interfaces, determined by GIC architecture */
56
#define GIC_NCPU 8
57
+/* Maximum number of possible CPU interfaces with their respective vCPU */
58
+#define GIC_NCPU_VCPU (GIC_NCPU * 2)
59
60
#define MAX_NR_GROUP_PRIO 128
61
#define GIC_NR_APRS (MAX_NR_GROUP_PRIO / 32)
62
@@ -XXX,XX +XXX,XX @@
63
#define GIC_MIN_BPR 0
64
#define GIC_MIN_ABPR (GIC_MIN_BPR + 1)
65
66
+/* Architectural maximum number of list registers in the virtual interface */
67
+#define GIC_MAX_LR 64
68
+
69
+/* Only 32 priority levels and 32 preemption levels in the vCPU interfaces */
70
+#define GIC_VIRT_MAX_GROUP_PRIO_BITS 5
71
+#define GIC_VIRT_MAX_NR_GROUP_PRIO (1 << GIC_VIRT_MAX_GROUP_PRIO_BITS)
72
+#define GIC_VIRT_NR_APRS (GIC_VIRT_MAX_NR_GROUP_PRIO / 32)
73
+
74
+#define GIC_VIRT_MIN_BPR 2
75
+#define GIC_VIRT_MIN_ABPR (GIC_VIRT_MIN_BPR + 1)
76
+
77
typedef struct gic_irq_state {
78
/* The enable bits are only banked for per-cpu interrupts. */
79
uint8_t enabled;
80
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
81
qemu_irq parent_fiq[GIC_NCPU];
82
qemu_irq parent_virq[GIC_NCPU];
83
qemu_irq parent_vfiq[GIC_NCPU];
84
+ qemu_irq maintenance_irq[GIC_NCPU];
85
+
86
/* GICD_CTLR; for a GIC with the security extensions the NS banked version
87
* of this register is just an alias of bit 1 of the S banked version.
88
*/
89
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
90
/* GICC_CTLR; again, the NS banked version is just aliases of bits of
91
* the S banked register, so our state only needs to store the S version.
92
*/
93
- uint32_t cpu_ctlr[GIC_NCPU];
94
+ uint32_t cpu_ctlr[GIC_NCPU_VCPU];
95
96
gic_irq_state irq_state[GIC_MAXIRQ];
97
uint8_t irq_target[GIC_MAXIRQ];
98
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
99
*/
100
uint8_t sgi_pending[GIC_NR_SGIS][GIC_NCPU];
101
102
- uint16_t priority_mask[GIC_NCPU];
103
- uint16_t running_priority[GIC_NCPU];
104
- uint16_t current_pending[GIC_NCPU];
105
+ uint16_t priority_mask[GIC_NCPU_VCPU];
106
+ uint16_t running_priority[GIC_NCPU_VCPU];
107
+ uint16_t current_pending[GIC_NCPU_VCPU];
108
109
/* If we present the GICv2 without security extensions to a guest,
110
* the guest can configure the GICC_CTLR to configure group 1 binary point
111
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
112
* For a GIC with Security Extensions we use bpr for the
113
* secure copy and abpr as storage for the non-secure copy of the register.
114
*/
115
- uint8_t bpr[GIC_NCPU];
116
- uint8_t abpr[GIC_NCPU];
117
+ uint8_t bpr[GIC_NCPU_VCPU];
118
+ uint8_t abpr[GIC_NCPU_VCPU];
119
120
/* The APR is implementation defined, so we choose a layout identical to
121
* the KVM ABI layout for QEMU's implementation of the gic:
122
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
123
uint32_t apr[GIC_NR_APRS][GIC_NCPU];
124
uint32_t nsapr[GIC_NR_APRS][GIC_NCPU];
125
126
+ /* Virtual interface control registers */
127
+ uint32_t h_hcr[GIC_NCPU];
128
+ uint32_t h_misr[GIC_NCPU];
129
+ uint32_t h_lr[GIC_MAX_LR][GIC_NCPU];
130
+ uint32_t h_apr[GIC_NCPU];
131
+
132
+ /* Number of LRs implemented in this GIC instance */
133
+ uint32_t num_lrs;
134
+
135
uint32_t num_cpu;
136
137
MemoryRegion iomem; /* Distributor */
138
@@ -XXX,XX +XXX,XX @@ typedef struct GICState {
139
*/
140
struct GICState *backref[GIC_NCPU];
141
MemoryRegion cpuiomem[GIC_NCPU + 1]; /* CPU interfaces */
142
+ MemoryRegion vifaceiomem[GIC_NCPU + 1]; /* Virtual interfaces */
143
+ MemoryRegion vcpuiomem; /* vCPU interface */
144
+
145
uint32_t num_irq;
146
uint32_t revision;
147
bool security_extn;
148
+ bool virt_extn;
149
bool irq_reset_nonsecure; /* configure IRQs as group 1 (NS) on reset? */
150
int dev_fd; /* kvm device fd if backed by kvm vgic support */
151
Error *migration_blocker;
152
@@ -XXX,XX +XXX,XX @@ typedef struct ARMGICCommonClass {
153
} ARMGICCommonClass;
154
155
void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
156
- const MemoryRegionOps *ops);
157
+ const MemoryRegionOps *ops,
158
+ const MemoryRegionOps *virt_ops);
159
160
#endif
161
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
162
index XXXXXXX..XXXXXXX 100644
163
--- a/hw/intc/arm_gic.c
164
+++ b/hw/intc/arm_gic.c
165
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
166
}
167
168
/* This creates distributor and main CPU interface (s->cpuiomem[0]) */
169
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops);
170
+ gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
171
172
/* Extra core-specific regions for the CPU interfaces. This is
173
* necessary for "franken-GIC" implementations, for example on
174
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
175
index XXXXXXX..XXXXXXX 100644
176
--- a/hw/intc/arm_gic_common.c
177
+++ b/hw/intc/arm_gic_common.c
178
@@ -XXX,XX +XXX,XX @@ static int gic_post_load(void *opaque, int version_id)
179
return 0;
30
return 0;
180
}
31
}
181
32
182
+static bool gic_virt_state_needed(void *opaque)
33
+static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
34
+ target_ulong address,
35
+ MMUAccessType access_type,
36
+ GetPhysAddrResult *result,
37
+ ARMMMUFaultInfo *fi)
183
+{
38
+{
184
+ GICState *s = (GICState *)opaque;
39
+ hwaddr ipa;
185
+
40
+ int s1_prot;
186
+ return s->virt_extn;
41
+ int ret;
42
+ bool is_secure = ptw->in_secure;
43
+ bool ipa_secure, s2walk_secure;
44
+ ARMCacheAttrs cacheattrs1;
45
+ bool is_el0;
46
+ uint64_t hcr;
47
+
48
+ ret = get_phys_addr_with_struct(env, ptw, address, access_type, result, fi);
49
+
50
+ /* If S1 fails or S2 is disabled, return early. */
51
+ if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2, is_secure)) {
52
+ return ret;
53
+ }
54
+
55
+ ipa = result->f.phys_addr;
56
+ ipa_secure = result->f.attrs.secure;
57
+ if (is_secure) {
58
+ /* Select TCR based on the NS bit from the S1 walk. */
59
+ s2walk_secure = !(ipa_secure
60
+ ? env->cp15.vstcr_el2 & VSTCR_SW
61
+ : env->cp15.vtcr_el2 & VTCR_NSW);
62
+ } else {
63
+ assert(!ipa_secure);
64
+ s2walk_secure = false;
65
+ }
66
+
67
+ is_el0 = ptw->in_mmu_idx == ARMMMUIdx_Stage1_E0;
68
+ ptw->in_mmu_idx = s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
69
+ ptw->in_secure = s2walk_secure;
70
+
71
+ /*
72
+ * S1 is done, now do S2 translation.
73
+ * Save the stage1 results so that we may merge prot and cacheattrs later.
74
+ */
75
+ s1_prot = result->f.prot;
76
+ cacheattrs1 = result->cacheattrs;
77
+ memset(result, 0, sizeof(*result));
78
+
79
+ ret = get_phys_addr_lpae(env, ptw, ipa, access_type, is_el0, result, fi);
80
+ fi->s2addr = ipa;
81
+
82
+ /* Combine the S1 and S2 perms. */
83
+ result->f.prot &= s1_prot;
84
+
85
+ /* If S2 fails, return early. */
86
+ if (ret) {
87
+ return ret;
88
+ }
89
+
90
+ /* Combine the S1 and S2 cache attributes. */
91
+ hcr = arm_hcr_el2_eff_secstate(env, is_secure);
92
+ if (hcr & HCR_DC) {
93
+ /*
94
+ * HCR.DC forces the first stage attributes to
95
+ * Normal Non-Shareable,
96
+ * Inner Write-Back Read-Allocate Write-Allocate,
97
+ * Outer Write-Back Read-Allocate Write-Allocate.
98
+ * Do not overwrite Tagged within attrs.
99
+ */
100
+ if (cacheattrs1.attrs != 0xf0) {
101
+ cacheattrs1.attrs = 0xff;
102
+ }
103
+ cacheattrs1.shareability = 0;
104
+ }
105
+ result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
106
+ result->cacheattrs);
107
+
108
+ /*
109
+ * Check if IPA translates to secure or non-secure PA space.
110
+ * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
111
+ */
112
+ result->f.attrs.secure =
113
+ (is_secure
114
+ && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
115
+ && (ipa_secure
116
+ || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
117
+
118
+ return 0;
187
+}
119
+}
188
+
120
+
189
static const VMStateDescription vmstate_gic_irq_state = {
121
static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
190
.name = "arm_gic_irq_state",
122
target_ulong address,
191
.version_id = 1,
123
MMUAccessType access_type,
192
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic_irq_state = {
124
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
193
}
125
if (mmu_idx != s1_mmu_idx) {
194
};
126
/*
195
127
* Call ourselves recursively to do the stage 1 and then stage 2
196
+static const VMStateDescription vmstate_gic_virt_state = {
128
- * translations if mmu_idx is a two-stage regime.
197
+ .name = "arm_gic_virt_state",
129
+ * translations if mmu_idx is a two-stage regime, and EL2 present.
198
+ .version_id = 1,
130
+ * Otherwise, a stage1+stage2 translation is just stage 1.
199
+ .minimum_version_id = 1,
131
*/
200
+ .needed = gic_virt_state_needed,
132
+ ptw->in_mmu_idx = mmu_idx = s1_mmu_idx;
201
+ .fields = (VMStateField[]) {
133
if (arm_feature(env, ARM_FEATURE_EL2)) {
202
+ /* Virtual interface */
134
- hwaddr ipa;
203
+ VMSTATE_UINT32_ARRAY(h_hcr, GICState, GIC_NCPU),
135
- int s1_prot;
204
+ VMSTATE_UINT32_ARRAY(h_misr, GICState, GIC_NCPU),
136
- int ret;
205
+ VMSTATE_UINT32_2DARRAY(h_lr, GICState, GIC_MAX_LR, GIC_NCPU),
137
- bool ipa_secure, s2walk_secure;
206
+ VMSTATE_UINT32_ARRAY(h_apr, GICState, GIC_NCPU),
138
- ARMCacheAttrs cacheattrs1;
207
+
139
- bool is_el0;
208
+ /* Virtual CPU interfaces */
140
- uint64_t hcr;
209
+ VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, GIC_NCPU, GIC_NCPU),
141
-
210
+ VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, GIC_NCPU, GIC_NCPU),
142
- ptw->in_mmu_idx = s1_mmu_idx;
211
+ VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, GIC_NCPU, GIC_NCPU),
143
- ret = get_phys_addr_with_struct(env, ptw, address, access_type,
212
+ VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, GIC_NCPU, GIC_NCPU),
144
- result, fi);
213
+ VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, GIC_NCPU, GIC_NCPU),
145
-
214
+ VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, GIC_NCPU, GIC_NCPU),
146
- /* If S1 fails or S2 is disabled, return early. */
215
+
147
- if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2,
216
+ VMSTATE_END_OF_LIST()
148
- is_secure)) {
217
+ }
149
- return ret;
218
+};
150
- }
219
+
151
-
220
static const VMStateDescription vmstate_gic = {
152
- ipa = result->f.phys_addr;
221
.name = "arm_gic",
153
- ipa_secure = result->f.attrs.secure;
222
.version_id = 12,
154
- if (is_secure) {
223
@@ -XXX,XX +XXX,XX @@ static const VMStateDescription vmstate_gic = {
155
- /* Select TCR based on the NS bit from the S1 walk. */
224
.post_load = gic_post_load,
156
- s2walk_secure = !(ipa_secure
225
.fields = (VMStateField[]) {
157
- ? env->cp15.vstcr_el2 & VSTCR_SW
226
VMSTATE_UINT32(ctlr, GICState),
158
- : env->cp15.vtcr_el2 & VTCR_NSW);
227
- VMSTATE_UINT32_ARRAY(cpu_ctlr, GICState, GIC_NCPU),
159
- } else {
228
+ VMSTATE_UINT32_SUB_ARRAY(cpu_ctlr, GICState, 0, GIC_NCPU),
160
- assert(!ipa_secure);
229
VMSTATE_STRUCT_ARRAY(irq_state, GICState, GIC_MAXIRQ, 1,
161
- s2walk_secure = false;
230
vmstate_gic_irq_state, gic_irq_state),
162
- }
231
VMSTATE_UINT8_ARRAY(irq_target, GICState, GIC_MAXIRQ),
163
-
232
VMSTATE_UINT8_2DARRAY(priority1, GICState, GIC_INTERNAL, GIC_NCPU),
164
- ptw->in_mmu_idx =
233
VMSTATE_UINT8_ARRAY(priority2, GICState, GIC_MAXIRQ - GIC_INTERNAL),
165
- s2walk_secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
234
VMSTATE_UINT8_2DARRAY(sgi_pending, GICState, GIC_NR_SGIS, GIC_NCPU),
166
- ptw->in_secure = s2walk_secure;
235
- VMSTATE_UINT16_ARRAY(priority_mask, GICState, GIC_NCPU),
167
- is_el0 = mmu_idx == ARMMMUIdx_E10_0;
236
- VMSTATE_UINT16_ARRAY(running_priority, GICState, GIC_NCPU),
168
-
237
- VMSTATE_UINT16_ARRAY(current_pending, GICState, GIC_NCPU),
169
- /*
238
- VMSTATE_UINT8_ARRAY(bpr, GICState, GIC_NCPU),
170
- * S1 is done, now do S2 translation.
239
- VMSTATE_UINT8_ARRAY(abpr, GICState, GIC_NCPU),
171
- * Save the stage1 results so that we may merge
240
+ VMSTATE_UINT16_SUB_ARRAY(priority_mask, GICState, 0, GIC_NCPU),
172
- * prot and cacheattrs later.
241
+ VMSTATE_UINT16_SUB_ARRAY(running_priority, GICState, 0, GIC_NCPU),
173
- */
242
+ VMSTATE_UINT16_SUB_ARRAY(current_pending, GICState, 0, GIC_NCPU),
174
- s1_prot = result->f.prot;
243
+ VMSTATE_UINT8_SUB_ARRAY(bpr, GICState, 0, GIC_NCPU),
175
- cacheattrs1 = result->cacheattrs;
244
+ VMSTATE_UINT8_SUB_ARRAY(abpr, GICState, 0, GIC_NCPU),
176
- memset(result, 0, sizeof(*result));
245
VMSTATE_UINT32_2DARRAY(apr, GICState, GIC_NR_APRS, GIC_NCPU),
177
-
246
VMSTATE_UINT32_2DARRAY(nsapr, GICState, GIC_NR_APRS, GIC_NCPU),
178
- ret = get_phys_addr_lpae(env, ptw, ipa, access_type,
247
VMSTATE_END_OF_LIST()
179
- is_el0, result, fi);
248
+ },
180
- fi->s2addr = ipa;
249
+ .subsections = (const VMStateDescription * []) {
181
-
250
+ &vmstate_gic_virt_state,
182
- /* Combine the S1 and S2 perms. */
251
+ NULL
183
- result->f.prot &= s1_prot;
252
}
184
-
253
};
185
- /* If S2 fails, return early. */
254
186
- if (ret) {
255
void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
187
- return ret;
256
- const MemoryRegionOps *ops)
188
- }
257
+ const MemoryRegionOps *ops,
189
-
258
+ const MemoryRegionOps *virt_ops)
190
- /* Combine the S1 and S2 cache attributes. */
259
{
191
- hcr = arm_hcr_el2_eff_secstate(env, is_secure);
260
SysBusDevice *sbd = SYS_BUS_DEVICE(s);
192
- if (hcr & HCR_DC) {
261
int i = s->num_irq - GIC_INTERNAL;
193
- /*
262
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
194
- * HCR.DC forces the first stage attributes to
263
for (i = 0; i < s->num_cpu; i++) {
195
- * Normal Non-Shareable,
264
sysbus_init_irq(sbd, &s->parent_vfiq[i]);
196
- * Inner Write-Back Read-Allocate Write-Allocate,
265
}
197
- * Outer Write-Back Read-Allocate Write-Allocate.
266
+ if (s->virt_extn) {
198
- * Do not overwrite Tagged within attrs.
267
+ for (i = 0; i < s->num_cpu; i++) {
199
- */
268
+ sysbus_init_irq(sbd, &s->maintenance_irq[i]);
200
- if (cacheattrs1.attrs != 0xf0) {
269
+ }
201
- cacheattrs1.attrs = 0xff;
270
+ }
202
- }
271
203
- cacheattrs1.shareability = 0;
272
/* Distributor */
204
- }
273
memory_region_init_io(&s->iomem, OBJECT(s), ops, s, "gic_dist", 0x1000);
205
- result->cacheattrs = combine_cacheattrs(hcr, cacheattrs1,
274
@@ -XXX,XX +XXX,XX @@ void gic_init_irqs_and_mmio(GICState *s, qemu_irq_handler handler,
206
- result->cacheattrs);
275
memory_region_init_io(&s->cpuiomem[0], OBJECT(s), ops ? &ops[1] : NULL,
207
-
276
s, "gic_cpu", s->revision == 2 ? 0x2000 : 0x100);
208
- /*
277
sysbus_init_mmio(sbd, &s->cpuiomem[0]);
209
- * Check if IPA translates to secure or non-secure PA space.
278
+
210
- * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
279
+ if (s->virt_extn) {
211
- */
280
+ memory_region_init_io(&s->vifaceiomem[0], OBJECT(s), virt_ops,
212
- result->f.attrs.secure =
281
+ s, "gic_viface", 0x1000);
213
- (is_secure
282
+ sysbus_init_mmio(sbd, &s->vifaceiomem[0]);
214
- && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
283
+
215
- && (ipa_secure
284
+ memory_region_init_io(&s->vcpuiomem, OBJECT(s),
216
- || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
285
+ virt_ops ? &virt_ops[1] : NULL,
217
-
286
+ s, "gic_vcpu", 0x2000);
218
- return 0;
287
+ sysbus_init_mmio(sbd, &s->vcpuiomem);
288
+ }
289
}
290
291
static void arm_gic_common_realize(DeviceState *dev, Error **errp)
292
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_realize(DeviceState *dev, Error **errp)
293
"the security extensions");
294
return;
295
}
296
+
297
+ if (s->virt_extn) {
298
+ if (s->revision != 2) {
299
+ error_setg(errp, "GIC virtualization extensions are only "
300
+ "supported by revision 2");
301
+ return;
302
+ }
303
+
304
+ /* For now, set the number of implemented LRs to 4, as found in most
305
+ * real GICv2. This could be promoted to a QOM property if we need to
306
+ * emulate a variant with another num_lrs.
307
+ */
308
+ s->num_lrs = 4;
309
+ }
310
+}
311
+
312
+static inline void arm_gic_common_reset_irq_state(GICState *s, int first_cpu,
313
+ int resetprio)
314
+{
315
+ int i, j;
316
+
317
+ for (i = first_cpu; i < first_cpu + s->num_cpu; i++) {
318
+ if (s->revision == REV_11MPCORE) {
319
+ s->priority_mask[i] = 0xf0;
320
+ } else {
321
+ s->priority_mask[i] = resetprio;
322
+ }
323
+ s->current_pending[i] = 1023;
324
+ s->running_priority[i] = 0x100;
325
+ s->cpu_ctlr[i] = 0;
326
+ s->bpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
327
+ s->abpr[i] = gic_is_vcpu(i) ? GIC_VIRT_MIN_ABPR : GIC_MIN_ABPR;
328
+
329
+ if (!gic_is_vcpu(i)) {
330
+ for (j = 0; j < GIC_INTERNAL; j++) {
331
+ s->priority1[j][i] = resetprio;
332
+ }
333
+ for (j = 0; j < GIC_NR_SGIS; j++) {
334
+ s->sgi_pending[j][i] = 0;
335
+ }
336
+ }
337
+ }
338
}
339
340
static void arm_gic_common_reset(DeviceState *dev)
341
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
342
}
343
344
memset(s->irq_state, 0, GIC_MAXIRQ * sizeof(gic_irq_state));
345
- for (i = 0 ; i < s->num_cpu; i++) {
346
- if (s->revision == REV_11MPCORE) {
347
- s->priority_mask[i] = 0xf0;
348
- } else {
219
- } else {
349
- s->priority_mask[i] = resetprio;
220
- /*
350
- }
221
- * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
351
- s->current_pending[i] = 1023;
222
- */
352
- s->running_priority[i] = 0x100;
223
- mmu_idx = stage_1_mmu_idx(mmu_idx);
353
- s->cpu_ctlr[i] = 0;
224
+ return get_phys_addr_twostage(env, ptw, address, access_type,
354
- s->bpr[i] = GIC_MIN_BPR;
225
+ result, fi);
355
- s->abpr[i] = GIC_MIN_ABPR;
356
- for (j = 0; j < GIC_INTERNAL; j++) {
357
- s->priority1[j][i] = resetprio;
358
- }
359
- for (j = 0; j < GIC_NR_SGIS; j++) {
360
- s->sgi_pending[j][i] = 0;
361
- }
362
+ arm_gic_common_reset_irq_state(s, 0, resetprio);
363
+
364
+ if (s->virt_extn) {
365
+ /* vCPU states are stored at indexes GIC_NCPU .. GIC_NCPU+num_cpu.
366
+ * The exposed vCPU interface does not have security extensions.
367
+ */
368
+ arm_gic_common_reset_irq_state(s, GIC_NCPU, 0);
369
}
370
+
371
for (i = 0; i < GIC_NR_SGIS; i++) {
372
GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
373
GIC_DIST_SET_EDGE_TRIGGER(i);
374
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
375
}
226
}
376
}
227
}
377
228
378
+ if (s->virt_extn) {
379
+ for (i = 0; i < s->num_lrs; i++) {
380
+ for (j = 0; j < s->num_cpu; j++) {
381
+ s->h_lr[i][j] = 0;
382
+ }
383
+ }
384
+
385
+ for (i = 0; i < s->num_cpu; i++) {
386
+ s->h_hcr[i] = 0;
387
+ s->h_misr[i] = 0;
388
+ }
389
+ }
390
+
391
s->ctlr = 0;
392
}
393
394
@@ -XXX,XX +XXX,XX @@ static Property arm_gic_common_properties[] = {
395
DEFINE_PROP_UINT32("revision", GICState, revision, 1),
396
/* True if the GIC should implement the security extensions */
397
DEFINE_PROP_BOOL("has-security-extensions", GICState, security_extn, 0),
398
+ /* True if the GIC should implement the virtualization extensions */
399
+ DEFINE_PROP_BOOL("has-virtualization-extensions", GICState, virt_extn, 0),
400
DEFINE_PROP_END_OF_LIST(),
401
};
402
403
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
404
index XXXXXXX..XXXXXXX 100644
405
--- a/hw/intc/arm_gic_kvm.c
406
+++ b/hw/intc/arm_gic_kvm.c
407
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
408
return;
409
}
410
411
+ if (s->virt_extn) {
412
+ error_setg(errp, "the in-kernel VGIC does not implement the "
413
+ "virtualization extensions");
414
+ return;
415
+ }
416
+
417
if (!kvm_arm_gic_can_save_restore(s)) {
418
error_setg(&s->migration_blocker, "This operating system kernel does "
419
"not support vGICv2 migration");
420
@@ -XXX,XX +XXX,XX @@ static void kvm_arm_gic_realize(DeviceState *dev, Error **errp)
421
}
422
}
423
424
- gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL);
425
+ gic_init_irqs_and_mmio(s, kvm_arm_gicv2_set_irq, NULL, NULL);
426
427
for (i = 0; i < s->num_irq - GIC_INTERNAL; i++) {
428
qemu_irq irq = qdev_get_gpio_in(dev, i);
429
--
2.18.0
--
2.25.1

1
From: Luc Michel <luc.michel@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Implement the maintenance interrupt generation that is part of the GICv2
3
The return type of the functions is already bool, but in a few
4
virtualization extensions.
4
instances we used an integer type with the return statement.
5
5
6
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
7
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
8
Message-id: 20180727095421.386-18-luc.michel@greensocs.com
7
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
8
Message-id: 20221011031911.2408754-13-richard.henderson@linaro.org
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
10
---
10
---
11
hw/intc/arm_gic.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++
11
target/arm/ptw.c | 7 +++----
12
1 file changed, 97 insertions(+)
12
1 file changed, 3 insertions(+), 4 deletions(-)
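
Since the arm_gic.c hunks below only show the emulation side, a sketch
of the consumer's view may help: roughly what a guest hypervisor does
when the maintenance interrupt modelled here fires. gich_read is a
hypothetical accessor, and the register offset and bit positions are
assumptions taken from the GICv2 architecture specification, not from
this patch.

#include <stdint.h>

#define GICH_MISR 0x10   /* maintenance interrupt status (GICv2 spec) */

extern uint32_t gich_read(uint32_t offset);   /* hypothetical accessor */

void handle_maintenance_irq(void)
{
    uint32_t misr = gich_read(GICH_MISR);

    if (misr & (1u << 0)) {
        /* EOI: a list register flagged for EOI maintenance completed */
    }
    if (misr & (1u << 1)) {
        /* U: underflow, at most one list register is still valid */
    }
    if (misr & (1u << 3)) {
        /* NP: no pending interrupt remains in the list registers */
    }
    /* bits 4-7 report the VGrp0/VGrp1 enable and disable causes */
}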
13
13
14
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
14
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
15
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
16
--- a/hw/intc/arm_gic.c
16
--- a/target/arm/ptw.c
17
+++ b/hw/intc/arm_gic.c
17
+++ b/target/arm/ptw.c
18
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
18
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_disabled(CPUARMState *env, target_ulong address,
19
&& !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
19
result->f.lg_page_size = TARGET_PAGE_BITS;
20
result->cacheattrs.shareability = shareability;
21
result->cacheattrs.attrs = memattr;
22
- return 0;
23
+ return false;
20
}
24
}
21
25
22
+static inline void gic_extract_lr_info(GICState *s, int cpu,
26
static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
23
+ int *num_eoi, int *num_valid, int *num_pending)
27
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
24
+{
25
+ int lr_idx;
26
+
27
+ *num_eoi = 0;
28
+ *num_valid = 0;
29
+ *num_pending = 0;
30
+
31
+ for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
32
+ uint32_t *entry = &s->h_lr[lr_idx][cpu];
33
+
34
+ if (gic_lr_entry_is_eoi(*entry)) {
35
+ (*num_eoi)++;
36
+ }
37
+
38
+ if (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID) {
39
+ (*num_valid)++;
40
+ }
41
+
42
+ if (GICH_LR_STATE(*entry) == GICH_LR_STATE_PENDING) {
43
+ (*num_pending)++;
44
+ }
45
+ }
46
+}
47
+
48
+static void gic_compute_misr(GICState *s, int cpu)
49
+{
50
+ uint32_t value = 0;
51
+ int vcpu = cpu + GIC_NCPU;
52
+
53
+ int num_eoi, num_valid, num_pending;
54
+
55
+ gic_extract_lr_info(s, cpu, &num_eoi, &num_valid, &num_pending);
56
+
57
+ /* EOI */
58
+ if (num_eoi) {
59
+ value |= R_GICH_MISR_EOI_MASK;
60
+ }
61
+
62
+ /* U: true if only 0 or 1 LR entry is valid */
63
+ if ((s->h_hcr[cpu] & R_GICH_HCR_UIE_MASK) && (num_valid < 2)) {
64
+ value |= R_GICH_MISR_U_MASK;
65
+ }
66
+
67
+ /* LRENP: EOICount is not 0 */
68
+ if ((s->h_hcr[cpu] & R_GICH_HCR_LRENPIE_MASK) &&
69
+ ((s->h_hcr[cpu] & R_GICH_HCR_EOICount_MASK) != 0)) {
70
+ value |= R_GICH_MISR_LRENP_MASK;
71
+ }
72
+
73
+ /* NP: no pending interrupts */
74
+ if ((s->h_hcr[cpu] & R_GICH_HCR_NPIE_MASK) && (num_pending == 0)) {
75
+ value |= R_GICH_MISR_NP_MASK;
76
+ }
77
+
78
+ /* VGrp0E: group0 virq signaling enabled */
79
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0EIE_MASK) &&
80
+ (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
81
+ value |= R_GICH_MISR_VGrp0E_MASK;
82
+ }
83
+
84
+ /* VGrp0D: group0 virq signaling disabled */
85
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP0DIE_MASK) &&
86
+ !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP0)) {
87
+ value |= R_GICH_MISR_VGrp0D_MASK;
88
+ }
89
+
90
+ /* VGrp1E: group1 virq signaling enabled */
91
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1EIE_MASK) &&
92
+ (s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
93
+ value |= R_GICH_MISR_VGrp1E_MASK;
94
+ }
95
+
96
+ /* VGrp1D: group1 virq signaling disabled */
97
+ if ((s->h_hcr[cpu] & R_GICH_HCR_VGRP1DIE_MASK) &&
98
+ !(s->cpu_ctlr[vcpu] & GICC_CTLR_EN_GRP1)) {
99
+ value |= R_GICH_MISR_VGrp1D_MASK;
100
+ }
101
+
102
+ s->h_misr[cpu] = value;
103
+}
104
+
105
+static void gic_update_maintenance(GICState *s)
106
+{
107
+ int cpu = 0;
108
+ int maint_level;
109
+
110
+ for (cpu = 0; cpu < s->num_cpu; cpu++) {
111
+ gic_compute_misr(s, cpu);
112
+ maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
113
+
114
+ qemu_set_irq(s->maintenance_irq[cpu], maint_level);
115
+ }
116
+}
117
+
118
static void gic_update_virt(GICState *s)
119
{
28
{
120
gic_update_internal(s, true);
29
hwaddr ipa;
121
+ gic_update_maintenance(s);
30
int s1_prot;
31
- int ret;
32
bool is_secure = ptw->in_secure;
33
- bool ipa_secure, s2walk_secure;
34
+ bool ret, ipa_secure, s2walk_secure;
35
ARMCacheAttrs cacheattrs1;
36
bool is_el0;
37
uint64_t hcr;
38
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
39
&& (ipa_secure
40
|| !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
41
42
- return 0;
43
+ return false;
122
}
44
}
123
45
124
static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
46
static bool get_phys_addr_with_struct(CPUARMState *env, S1Translate *ptw,
125
--
2.18.0
--
2.25.1

1
From: Luc Michel <luc.michel@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
Add the read/write functions to handle accesses to the vCPU interface.
3
A simple helper to retrieve the length of the current insn.
4
Those accesses are forwarded to the real CPU interface, with the CPU id
5
being converted to the corresponding vCPU id (vCPU id = CPU id +
6
GIC_NCPU).
7
4
8
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
5
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
9
Message-id: 20180727095421.386-15-luc.michel@greensocs.com
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
10
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20221020030641.2066807-2-richard.henderson@linaro.org
11
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
12
---
9
---
13
hw/intc/arm_gic.c | 37 +++++++++++++++++++++++++++++++++++--
10
target/arm/translate.h | 5 +++++
14
1 file changed, 35 insertions(+), 2 deletions(-)
11
target/arm/translate-vfp.c | 2 +-
12
target/arm/translate.c | 5 ++---
13
3 files changed, 8 insertions(+), 4 deletions(-)
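
For the translate.h change, a standalone analogue shows the arithmetic
the new helper wraps; the InsnCursor struct here is illustrative, but
the subtraction is exactly what curr_insn_len does with pc_curr and
base.pc_next in the hunk below.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t pc_curr;   /* start of the insn being translated */
    uint64_t pc_next;   /* first byte after it */
} InsnCursor;

/* On Arm an insn is 2 bytes (16-bit Thumb) or 4 bytes, so the length
 * falls out as the difference of the two PCs.
 */
static inline int curr_insn_len(const InsnCursor *c)
{
    return (int)(c->pc_next - c->pc_curr);
}

static inline bool is_16bit_insn(const InsnCursor *c)
{
    return curr_insn_len(c) == 2;   /* e.g. for the syndrome IL bit */
}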
15
14
16
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
15
diff --git a/target/arm/translate.h b/target/arm/translate.h
17
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
18
--- a/hw/intc/arm_gic.c
17
--- a/target/arm/translate.h
19
+++ b/hw/intc/arm_gic.c
18
+++ b/target/arm/translate.h
20
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_do_cpu_write(void *opaque, hwaddr addr,
19
@@ -XXX,XX +XXX,XX @@ static inline void disas_set_insn_syndrome(DisasContext *s, uint32_t syn)
21
return gic_cpu_write(s, id, addr, value, attrs);
20
s->insn_start = NULL;
22
}
21
}
23
22
24
+static MemTxResult gic_thisvcpu_read(void *opaque, hwaddr addr, uint64_t *data,
23
+static inline int curr_insn_len(DisasContext *s)
25
+ unsigned size, MemTxAttrs attrs)
26
+{
24
+{
27
+ GICState *s = (GICState *)opaque;
25
+ return s->base.pc_next - s->pc_curr;
28
+
29
+ return gic_cpu_read(s, gic_get_current_vcpu(s), addr, data, attrs);
30
+}
26
+}
31
+
27
+
32
+static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
28
/* is_jmp field values */
33
+ uint64_t value, unsigned size,
29
#define DISAS_JUMP DISAS_TARGET_0 /* only pc was modified dynamically */
34
+ MemTxAttrs attrs)
30
/* CPU state was modified dynamically; exit to main loop for interrupts. */
35
+{
31
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
36
+ GICState *s = (GICState *)opaque;
32
index XXXXXXX..XXXXXXX 100644
37
+
33
--- a/target/arm/translate-vfp.c
38
+ return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
34
+++ b/target/arm/translate-vfp.c
39
+}
35
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
40
+
36
if (s->sme_trap_nonstreaming) {
41
static const MemoryRegionOps gic_ops[2] = {
37
gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
42
{
38
syn_smetrap(SME_ET_Streaming,
43
.read_with_attrs = gic_dist_read,
39
- s->base.pc_next - s->pc_curr == 2));
44
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {
40
+ curr_insn_len(s) == 2));
45
.endianness = DEVICE_NATIVE_ENDIAN,
41
return false;
46
};
47
48
+static const MemoryRegionOps gic_virt_ops[2] = {
49
+ {
50
+ .read_with_attrs = NULL,
51
+ .write_with_attrs = NULL,
52
+ .endianness = DEVICE_NATIVE_ENDIAN,
53
+ },
54
+ {
55
+ .read_with_attrs = gic_thisvcpu_read,
56
+ .write_with_attrs = gic_thisvcpu_write,
57
+ .endianness = DEVICE_NATIVE_ENDIAN,
58
+ }
59
+};
60
+
61
static void arm_gic_realize(DeviceState *dev, Error **errp)
62
{
63
/* Device instance realize function for the GIC sysbus device */
64
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
65
return;
66
}
42
}
67
43
68
- /* This creates distributor and main CPU interface (s->cpuiomem[0]) */
44
diff --git a/target/arm/translate.c b/target/arm/translate.c
69
- gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, NULL);
45
index XXXXXXX..XXXXXXX 100644
70
+ /* This creates distributor, main CPU interface (s->cpuiomem[0]) and if
46
--- a/target/arm/translate.c
71
+ * enabled, virtualization extensions related interfaces (main virtual
47
+++ b/target/arm/translate.c
72
+ * interface (s->vifaceiomem[0]) and virtual CPU interface).
48
@@ -XXX,XX +XXX,XX @@ static ISSInfo make_issinfo(DisasContext *s, int rd, bool p, bool w)
73
+ */
49
/* ISS not valid if writeback */
74
+ gic_init_irqs_and_mmio(s, gic_set_irq, gic_ops, gic_virt_ops);
50
if (p && !w) {
75
51
ret = rd;
76
/* Extra core-specific regions for the CPU interfaces. This is
52
- if (s->base.pc_next - s->pc_curr == 2) {
77
* necessary for "franken-GIC" implementations, for example on
53
+ if (curr_insn_len(s) == 2) {
54
ret |= ISSIs16Bit;
55
}
56
} else {
57
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
58
/* nothing more to generate */
59
break;
60
case DISAS_WFI:
61
- gen_helper_wfi(cpu_env,
62
- tcg_constant_i32(dc->base.pc_next - dc->pc_curr));
63
+ gen_helper_wfi(cpu_env, tcg_constant_i32(curr_insn_len(dc)));
64
/*
65
* The helper doesn't necessarily throw an exception, but we
66
* must go back to the main loop to check for interrupts anyway.
78
--
67
--
79
2.18.0
68
2.25.1
80
69
81
70
1
From: Luc Michel <luc.michel@greensocs.com>
1
From: Richard Henderson <richard.henderson@linaro.org>
2
2
3
In preparation for the virtualization extensions implementation,
3
In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
4
rename the functions and macros that act on the GIC
5
distributor to make that fact explicit. It will be useful to
6
differentiate them from the ones that will act on the virtual
7
interfaces.
8
4
9
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
10
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
11
Reviewed-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
12
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
5
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
13
Message-id: 20180727095421.386-2-luc.michel@greensocs.com
6
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
7
Message-id: 20221020030641.2066807-3-richard.henderson@linaro.org
14
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
15
---
9
---
16
hw/intc/gic_internal.h | 51 ++++++------
10
target/arm/translate-a64.c | 40 ++++++++++++++++++++------------------
17
hw/intc/arm_gic.c | 163 +++++++++++++++++++++------------------
11
target/arm/translate.c | 10 ++++++----
18
hw/intc/arm_gic_common.c | 6 +-
12
2 files changed, 27 insertions(+), 23 deletions(-)
19
hw/intc/arm_gic_kvm.c | 23 +++---
20
4 files changed, 127 insertions(+), 116 deletions(-)
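
For the translate-a64.c side, the idea is easy to see in isolation:
decode branch targets as signed offsets from the current insn and only
form the absolute address at the last moment, so the offset stays
valid when a TB is entered at a different virtual address under
TARGET_TB_PCREL. A standalone sketch, with a local copy of sextract32
mirroring QEMU's bitops helper:

#include <stdint.h>

static inline int32_t sextract32(uint32_t value, int start, int length)
{
    /* shift the field to the top, then sign-extend it back down */
    return (int32_t)(value << (32 - length - start)) >> (32 - length);
}

/* A64 "B imm26": the offset is imm26 * 4, relative to the insn itself */
static int64_t b_imm26_diff(uint32_t insn)
{
    return (int64_t)sextract32(insn, 0, 26) * 4;
}

/* the absolute destination is only materialised where it is needed */
static uint64_t branch_dest(uint64_t pc_curr, int64_t diff)
{
    return pc_curr + diff;
}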
21
13
22
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
23
index XXXXXXX..XXXXXXX 100644
15
index XXXXXXX..XXXXXXX 100644
24
--- a/hw/intc/gic_internal.h
16
--- a/target/arm/translate-a64.c
25
+++ b/hw/intc/gic_internal.h
17
+++ b/target/arm/translate-a64.c
26
@@ -XXX,XX +XXX,XX @@
18
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)
27
19
return translator_use_goto_tb(&s->base, dest);
28
#define GIC_BASE_IRQ 0
20
}
29
21
30
-#define GIC_SET_ENABLED(irq, cm) s->irq_state[irq].enabled |= (cm)
22
-static inline void gen_goto_tb(DisasContext *s, int n, uint64_t dest)
31
-#define GIC_CLEAR_ENABLED(irq, cm) s->irq_state[irq].enabled &= ~(cm)
23
+static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
32
-#define GIC_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
24
{
33
-#define GIC_SET_PENDING(irq, cm) s->irq_state[irq].pending |= (cm)
25
+ uint64_t dest = s->pc_curr + diff;
34
-#define GIC_CLEAR_PENDING(irq, cm) s->irq_state[irq].pending &= ~(cm)
26
+
35
-#define GIC_SET_ACTIVE(irq, cm) s->irq_state[irq].active |= (cm)
27
if (use_goto_tb(s, dest)) {
36
-#define GIC_CLEAR_ACTIVE(irq, cm) s->irq_state[irq].active &= ~(cm)
28
tcg_gen_goto_tb(n);
37
-#define GIC_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
29
gen_a64_set_pc_im(dest);
38
-#define GIC_SET_MODEL(irq) s->irq_state[irq].model = true
30
@@ -XXX,XX +XXX,XX @@ static inline AArch64DecodeFn *lookup_disas_fn(const AArch64DecodeTable *table,
39
-#define GIC_CLEAR_MODEL(irq) s->irq_state[irq].model = false
31
*/
40
-#define GIC_TEST_MODEL(irq) s->irq_state[irq].model
32
static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
41
-#define GIC_SET_LEVEL(irq, cm) s->irq_state[irq].level |= (cm)
33
{
42
-#define GIC_CLEAR_LEVEL(irq, cm) s->irq_state[irq].level &= ~(cm)
34
- uint64_t addr = s->pc_curr + sextract32(insn, 0, 26) * 4;
43
-#define GIC_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
35
+ int64_t diff = sextract32(insn, 0, 26) * 4;
44
-#define GIC_SET_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = true
36
45
-#define GIC_CLEAR_EDGE_TRIGGER(irq) s->irq_state[irq].edge_trigger = false
37
if (insn & (1U << 31)) {
46
-#define GIC_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
38
/* BL Branch with link */
47
-#define GIC_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ? \
39
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
48
+#define GIC_DIST_SET_ENABLED(irq, cm) (s->irq_state[irq].enabled |= (cm))
40
49
+#define GIC_DIST_CLEAR_ENABLED(irq, cm) (s->irq_state[irq].enabled &= ~(cm))
41
/* B Branch / BL Branch with link */
50
+#define GIC_DIST_TEST_ENABLED(irq, cm) ((s->irq_state[irq].enabled & (cm)) != 0)
42
reset_btype(s);
51
+#define GIC_DIST_SET_PENDING(irq, cm) (s->irq_state[irq].pending |= (cm))
43
- gen_goto_tb(s, 0, addr);
52
+#define GIC_DIST_CLEAR_PENDING(irq, cm) (s->irq_state[irq].pending &= ~(cm))
44
+ gen_goto_tb(s, 0, diff);
53
+#define GIC_DIST_SET_ACTIVE(irq, cm) (s->irq_state[irq].active |= (cm))
45
}
54
+#define GIC_DIST_CLEAR_ACTIVE(irq, cm) (s->irq_state[irq].active &= ~(cm))
46
55
+#define GIC_DIST_TEST_ACTIVE(irq, cm) ((s->irq_state[irq].active & (cm)) != 0)
47
/* Compare and branch (immediate)
56
+#define GIC_DIST_SET_MODEL(irq) (s->irq_state[irq].model = true)
48
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)
57
+#define GIC_DIST_CLEAR_MODEL(irq) (s->irq_state[irq].model = false)
49
static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
58
+#define GIC_DIST_TEST_MODEL(irq) (s->irq_state[irq].model)
50
{
59
+#define GIC_DIST_SET_LEVEL(irq, cm) (s->irq_state[irq].level |= (cm))
51
unsigned int sf, op, rt;
60
+#define GIC_DIST_CLEAR_LEVEL(irq, cm) (s->irq_state[irq].level &= ~(cm))
52
- uint64_t addr;
61
+#define GIC_DIST_TEST_LEVEL(irq, cm) ((s->irq_state[irq].level & (cm)) != 0)
53
+ int64_t diff;
62
+#define GIC_DIST_SET_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger = true)
54
TCGLabel *label_match;
63
+#define GIC_DIST_CLEAR_EDGE_TRIGGER(irq) \
55
TCGv_i64 tcg_cmp;
64
+ (s->irq_state[irq].edge_trigger = false)
56
65
+#define GIC_DIST_TEST_EDGE_TRIGGER(irq) (s->irq_state[irq].edge_trigger)
57
sf = extract32(insn, 31, 1);
66
+#define GIC_DIST_GET_PRIORITY(irq, cpu) (((irq) < GIC_INTERNAL) ? \
58
op = extract32(insn, 24, 1); /* 0: CBZ; 1: CBNZ */
67
s->priority1[irq][cpu] : \
59
rt = extract32(insn, 0, 5);
68
s->priority2[(irq) - GIC_INTERNAL])
60
- addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
69
-#define GIC_TARGET(irq) s->irq_target[irq]
61
+ diff = sextract32(insn, 5, 19) * 4;
70
-#define GIC_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
62
71
-#define GIC_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
63
tcg_cmp = read_cpu_reg(s, rt, sf);
72
-#define GIC_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
64
label_match = gen_new_label();
73
+#define GIC_DIST_TARGET(irq) (s->irq_target[irq])
65
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
74
+#define GIC_DIST_CLEAR_GROUP(irq, cm) (s->irq_state[irq].group &= ~(cm))
66
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
75
+#define GIC_DIST_SET_GROUP(irq, cm) (s->irq_state[irq].group |= (cm))
67
tcg_cmp, 0, label_match);
76
+#define GIC_DIST_TEST_GROUP(irq, cm) ((s->irq_state[irq].group & (cm)) != 0)
68
77
69
- gen_goto_tb(s, 0, s->base.pc_next);
78
#define GICD_CTLR_EN_GRP0 (1U << 0)
70
+ gen_goto_tb(s, 0, 4);
79
#define GICD_CTLR_EN_GRP1 (1U << 1)
71
gen_set_label(label_match);
80
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs);
72
- gen_goto_tb(s, 1, addr);
81
void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs);
73
+ gen_goto_tb(s, 1, diff);
82
void gic_update(GICState *s);
74
}
83
void gic_init_irqs_and_distributor(GICState *s);
75
84
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
76
/* Test and branch (immediate)
85
- MemTxAttrs attrs);
77
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
86
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
78
static void disas_test_b_imm(DisasContext *s, uint32_t insn)
87
+ MemTxAttrs attrs);
79
{
88
80
unsigned int bit_pos, op, rt;
89
static inline bool gic_test_pending(GICState *s, int irq, int cm)
81
- uint64_t addr;
90
{
82
+ int64_t diff;
91
@@ -XXX,XX +XXX,XX @@ static inline bool gic_test_pending(GICState *s, int irq, int cm)
83
TCGLabel *label_match;
92
* GICD_ISPENDR to set the state pending.
84
TCGv_i64 tcg_cmp;
93
*/
85
94
return (s->irq_state[irq].pending & cm) ||
86
bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
95
- (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_LEVEL(irq, cm));
87
op = extract32(insn, 24, 1); /* 0: TBZ; 1: TBNZ */
96
+ (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_LEVEL(irq, cm));
88
- addr = s->pc_curr + sextract32(insn, 5, 14) * 4;
97
}
89
+ diff = sextract32(insn, 5, 14) * 4;
98
}
90
rt = extract32(insn, 0, 5);
99
91
100
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
92
tcg_cmp = tcg_temp_new_i64();
101
index XXXXXXX..XXXXXXX 100644
93
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
102
--- a/hw/intc/arm_gic.c
94
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
103
+++ b/hw/intc/arm_gic.c
95
tcg_cmp, 0, label_match);
104
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
96
tcg_temp_free_i64(tcg_cmp);
105
best_prio = 0x100;
97
- gen_goto_tb(s, 0, s->base.pc_next);
106
best_irq = 1023;
98
+ gen_goto_tb(s, 0, 4);
107
for (irq = 0; irq < s->num_irq; irq++) {
99
gen_set_label(label_match);
108
- if (GIC_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
100
- gen_goto_tb(s, 1, addr);
109
- (!GIC_TEST_ACTIVE(irq, cm)) &&
101
+ gen_goto_tb(s, 1, diff);
110
- (irq < GIC_INTERNAL || GIC_TARGET(irq) & cm)) {
102
}
111
- if (GIC_GET_PRIORITY(irq, cpu) < best_prio) {
103
112
- best_prio = GIC_GET_PRIORITY(irq, cpu);
104
/* Conditional branch (immediate)
113
+ if (GIC_DIST_TEST_ENABLED(irq, cm) &&
105
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
114
+ gic_test_pending(s, irq, cm) &&
106
static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
115
+ (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
107
{
116
+ (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
108
unsigned int cond;
117
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
109
- uint64_t addr;
118
+ best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
110
+ int64_t diff;
119
best_irq = irq;
111
120
}
112
if ((insn & (1 << 4)) || (insn & (1 << 24))) {
121
}
113
unallocated_encoding(s);
122
@@ -XXX,XX +XXX,XX @@ void gic_update(GICState *s)
123
if (best_prio < s->priority_mask[cpu]) {
124
s->current_pending[cpu] = best_irq;
125
if (best_prio < s->running_priority[cpu]) {
126
- int group = GIC_TEST_GROUP(best_irq, cm);
127
+ int group = GIC_DIST_TEST_GROUP(best_irq, cm);
128
129
if (extract32(s->ctlr, group, 1) &&
130
extract32(s->cpu_ctlr[cpu], group, 1)) {
131
@@ -XXX,XX +XXX,XX @@ void gic_set_pending_private(GICState *s, int cpu, int irq)
132
}
133
134
DPRINTF("Set %d pending cpu %d\n", irq, cpu);
135
- GIC_SET_PENDING(irq, cm);
136
+ GIC_DIST_SET_PENDING(irq, cm);
137
gic_update(s);
138
}
139
140
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
141
int cm, int target)
142
{
143
if (level) {
144
- GIC_SET_LEVEL(irq, cm);
145
- if (GIC_TEST_EDGE_TRIGGER(irq) || GIC_TEST_ENABLED(irq, cm)) {
146
+ GIC_DIST_SET_LEVEL(irq, cm);
147
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq) || GIC_DIST_TEST_ENABLED(irq, cm)) {
148
DPRINTF("Set %d pending mask %x\n", irq, target);
149
- GIC_SET_PENDING(irq, target);
150
+ GIC_DIST_SET_PENDING(irq, target);
151
}
152
} else {
153
- GIC_CLEAR_LEVEL(irq, cm);
154
+ GIC_DIST_CLEAR_LEVEL(irq, cm);
155
}
156
}
157
158
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq_generic(GICState *s, int irq, int level,
159
int cm, int target)
160
{
161
if (level) {
162
- GIC_SET_LEVEL(irq, cm);
163
+ GIC_DIST_SET_LEVEL(irq, cm);
164
DPRINTF("Set %d pending mask %x\n", irq, target);
165
- if (GIC_TEST_EDGE_TRIGGER(irq)) {
166
- GIC_SET_PENDING(irq, target);
167
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq)) {
168
+ GIC_DIST_SET_PENDING(irq, target);
169
}
170
} else {
171
- GIC_CLEAR_LEVEL(irq, cm);
172
+ GIC_DIST_CLEAR_LEVEL(irq, cm);
173
}
174
}
175
176
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
177
/* The first external input line is internal interrupt 32. */
178
cm = ALL_CPU_MASK;
179
irq += GIC_INTERNAL;
180
- target = GIC_TARGET(irq);
181
+ target = GIC_DIST_TARGET(irq);
182
} else {
183
int cpu;
184
irq -= (s->num_irq - GIC_INTERNAL);
185
@@ -XXX,XX +XXX,XX @@ static void gic_set_irq(void *opaque, int irq, int level)
186
187
assert(irq >= GIC_NR_SGIS);
188
189
- if (level == GIC_TEST_LEVEL(irq, cm)) {
190
+ if (level == GIC_DIST_TEST_LEVEL(irq, cm)) {
191
return;
114
return;
192
}
115
}
193
116
- addr = s->pc_curr + sextract32(insn, 5, 19) * 4;
194
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
117
+ diff = sextract32(insn, 5, 19) * 4;
195
uint16_t pending_irq = s->current_pending[cpu];
118
cond = extract32(insn, 0, 4);
196
119
197
if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
120
reset_btype(s);
198
- int group = GIC_TEST_GROUP(pending_irq, (1 << cpu));
121
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
199
+ int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
122
/* genuinely conditional branches */
200
/* On a GIC without the security extensions, reading this register
123
TCGLabel *label_match = gen_new_label();
201
* behaves in the same way as a secure access to a GIC with them.
124
arm_gen_test_cc(cond, label_match);
125
- gen_goto_tb(s, 0, s->base.pc_next);
126
+ gen_goto_tb(s, 0, 4);
127
gen_set_label(label_match);
128
- gen_goto_tb(s, 1, addr);
129
+ gen_goto_tb(s, 1, diff);
130
} else {
131
/* 0xe and 0xf are both "always" conditions */
132
- gen_goto_tb(s, 0, addr);
133
+ gen_goto_tb(s, 0, diff);
134
}
135
}
136
137
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
138
* any pending interrupts immediately.
202
*/
139
*/
203
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
140
reset_btype(s);
204
141
- gen_goto_tb(s, 0, s->base.pc_next);
205
if (gic_has_groups(s) &&
142
+ gen_goto_tb(s, 0, 4);
206
!(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
143
return;
207
- GIC_TEST_GROUP(irq, (1 << cpu))) {
144
208
+ GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
145
case 7: /* SB */
209
bpr = s->abpr[cpu] - 1;
146
@@ -XXX,XX +XXX,XX @@ static void handle_sync(DisasContext *s, uint32_t insn,
210
assert(bpr >= 0);
147
* MB and end the TB instead.
211
} else {
212
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
213
*/
214
mask = ~0U << ((bpr & 7) + 1);
215
216
- return GIC_GET_PRIORITY(irq, cpu) & mask;
217
+ return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
218
}
219
220
static void gic_activate_irq(GICState *s, int cpu, int irq)
221
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
222
int regno = preemption_level / 32;
223
int bitno = preemption_level % 32;
224
225
- if (gic_has_groups(s) && GIC_TEST_GROUP(irq, (1 << cpu))) {
226
+ if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
227
s->nsapr[regno][cpu] |= (1 << bitno);
228
} else {
229
s->apr[regno][cpu] |= (1 << bitno);
230
}
231
232
s->running_priority[cpu] = prio;
233
- GIC_SET_ACTIVE(irq, 1 << cpu);
234
+ GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
235
}
236
237
static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
238
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
239
return irq;
240
}
241
242
- if (GIC_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
243
+ if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
244
DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
245
return 1023;
246
}
247
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
248
/* Clear pending flags for both level and edge triggered interrupts.
249
* Level triggered IRQs will be reasserted once they become inactive.
250
*/
148
*/
251
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
149
tcg_gen_mb(TCG_MO_ALL | TCG_BAR_SC);
252
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
150
- gen_goto_tb(s, 0, s->base.pc_next);
253
+ : cm);
151
+ gen_goto_tb(s, 0, 4);
254
ret = irq;
152
return;
255
} else {
153
256
if (irq < GIC_NR_SGIS) {
154
default:
257
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
155
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
258
src = ctz32(s->sgi_pending[irq][cpu]);
156
switch (dc->base.is_jmp) {
259
s->sgi_pending[irq][cpu] &= ~(1 << src);
157
case DISAS_NEXT:
260
if (s->sgi_pending[irq][cpu] == 0) {
158
case DISAS_TOO_MANY:
261
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
159
- gen_goto_tb(dc, 1, dc->base.pc_next);
262
+ GIC_DIST_CLEAR_PENDING(irq,
160
+ gen_goto_tb(dc, 1, 4);
263
+ GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
161
break;
264
+ : cm);
162
default:
265
}
163
case DISAS_UPDATE_EXIT:
266
ret = irq | ((src & 0x7) << 10);
164
diff --git a/target/arm/translate.c b/target/arm/translate.c
165
index XXXXXXX..XXXXXXX 100644
166
--- a/target/arm/translate.c
167
+++ b/target/arm/translate.c
168
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
169
* cpu_loop_exec. Any live exit_requests will be processed as we
170
* enter the next TB.
171
*/
172
-static void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
173
+static void gen_goto_tb(DisasContext *s, int n, int diff)
174
{
175
+ target_ulong dest = s->pc_curr + diff;
176
+
177
if (translator_use_goto_tb(&s->base, dest)) {
178
tcg_gen_goto_tb(n);
179
gen_set_pc_im(s, dest);
180
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
181
* gen_jmp();
182
* on the second call to gen_jmp().
183
*/
184
- gen_goto_tb(s, tbno, dest);
185
+ gen_goto_tb(s, tbno, dest - s->pc_curr);
186
break;
187
case DISAS_UPDATE_NOCHAIN:
188
case DISAS_UPDATE_EXIT:
189
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
190
switch (dc->base.is_jmp) {
191
case DISAS_NEXT:
192
case DISAS_TOO_MANY:
193
- gen_goto_tb(dc, 1, dc->base.pc_next);
194
+ gen_goto_tb(dc, 1, curr_insn_len(dc));
195
break;
196
case DISAS_UPDATE_NOCHAIN:
197
gen_set_pc_im(dc, dc->base.pc_next);
198
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
199
gen_set_pc_im(dc, dc->base.pc_next);
200
gen_singlestep_exception(dc);
267
} else {
201
} else {
268
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
202
- gen_goto_tb(dc, 1, dc->base.pc_next);
269
* interrupts. (level triggered interrupts with an active line
203
+ gen_goto_tb(dc, 1, curr_insn_len(dc));
270
* remain pending, see gic_test_pending)
271
*/
272
- GIC_CLEAR_PENDING(irq, GIC_TEST_MODEL(irq) ? ALL_CPU_MASK : cm);
273
+ GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
274
+ : cm);
275
ret = irq;
276
}
204
}
277
}
205
}
278
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
206
}
279
return ret;
280
}
281
282
-void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
283
+void gic_dist_set_priority(GICState *s, int cpu, int irq, uint8_t val,
284
MemTxAttrs attrs)
285
{
286
if (s->security_extn && !attrs.secure) {
287
- if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
288
+ if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
289
return; /* Ignore Non-secure access of Group0 IRQ */
290
}
291
val = 0x80 | (val >> 1); /* Non-secure view */
292
@@ -XXX,XX +XXX,XX @@ void gic_set_priority(GICState *s, int cpu, int irq, uint8_t val,
293
}
294
}
295
296
-static uint32_t gic_get_priority(GICState *s, int cpu, int irq,
297
+static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
298
MemTxAttrs attrs)
299
{
300
- uint32_t prio = GIC_GET_PRIORITY(irq, cpu);
301
+ uint32_t prio = GIC_DIST_GET_PRIORITY(irq, cpu);
302
303
if (s->security_extn && !attrs.secure) {
304
- if (!GIC_TEST_GROUP(irq, (1 << cpu))) {
305
+ if (!GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
306
return 0; /* Non-secure access cannot read priority of Group0 IRQ */
307
}
308
prio = (prio << 1) & 0xff; /* Non-secure view */
309
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
310
return;
311
}
312
313
- group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
314
+ group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
315
316
if (!gic_eoi_split(s, cpu, attrs)) {
317
/* This is UNPREDICTABLE; we choose to ignore it */
318
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
319
return;
320
}
321
322
- GIC_CLEAR_ACTIVE(irq, cm);
323
+ GIC_DIST_CLEAR_ACTIVE(irq, cm);
324
}
325
326
void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
327
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
328
if (s->revision == REV_11MPCORE) {
329
/* Mark level triggered interrupts as pending if they are still
330
raised. */
331
- if (!GIC_TEST_EDGE_TRIGGER(irq) && GIC_TEST_ENABLED(irq, cm)
332
- && GIC_TEST_LEVEL(irq, cm) && (GIC_TARGET(irq) & cm) != 0) {
333
+ if (!GIC_DIST_TEST_EDGE_TRIGGER(irq) && GIC_DIST_TEST_ENABLED(irq, cm)
334
+ && GIC_DIST_TEST_LEVEL(irq, cm)
335
+ && (GIC_DIST_TARGET(irq) & cm) != 0) {
336
DPRINTF("Set %d pending mask %x\n", irq, cm);
337
- GIC_SET_PENDING(irq, cm);
338
+ GIC_DIST_SET_PENDING(irq, cm);
339
}
340
}
341
342
- group = gic_has_groups(s) && GIC_TEST_GROUP(irq, cm);
343
+ group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
344
345
if (s->security_extn && !attrs.secure && !group) {
346
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
347
@@ -XXX,XX +XXX,XX @@ void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
348
349
/* In GICv2 the guest can choose to split priority-drop and deactivate */
350
if (!gic_eoi_split(s, cpu, attrs)) {
351
- GIC_CLEAR_ACTIVE(irq, cm);
352
+ GIC_DIST_CLEAR_ACTIVE(irq, cm);
353
}
354
gic_update(s);
355
}
356
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
357
goto bad_reg;
358
}
359
for (i = 0; i < 8; i++) {
360
- if (GIC_TEST_GROUP(irq + i, cm)) {
361
+ if (GIC_DIST_TEST_GROUP(irq + i, cm)) {
362
res |= (1 << i);
363
}
364
}
365
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
366
res = 0;
367
for (i = 0; i < 8; i++) {
368
if (s->security_extn && !attrs.secure &&
369
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
370
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
371
continue; /* Ignore Non-secure access of Group0 IRQ */
372
}
373
374
- if (GIC_TEST_ENABLED(irq + i, cm)) {
375
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
376
res |= (1 << i);
377
}
378
}
379
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
380
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
381
for (i = 0; i < 8; i++) {
382
if (s->security_extn && !attrs.secure &&
383
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
384
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
385
continue; /* Ignore Non-secure access of Group0 IRQ */
386
}
387
388
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
389
mask = (irq < GIC_INTERNAL) ? cm : ALL_CPU_MASK;
390
for (i = 0; i < 8; i++) {
391
if (s->security_extn && !attrs.secure &&
392
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
393
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
394
continue; /* Ignore Non-secure access of Group0 IRQ */
395
}
396
397
- if (GIC_TEST_ACTIVE(irq + i, mask)) {
398
+ if (GIC_DIST_TEST_ACTIVE(irq + i, mask)) {
399
res |= (1 << i);
400
}
401
}
402
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
403
irq = (offset - 0x400) + GIC_BASE_IRQ;
404
if (irq >= s->num_irq)
405
goto bad_reg;
406
- res = gic_get_priority(s, cpu, irq, attrs);
407
+ res = gic_dist_get_priority(s, cpu, irq, attrs);
408
} else if (offset < 0xc00) {
409
/* Interrupt CPU Target. */
410
if (s->num_cpu == 1 && s->revision != REV_11MPCORE) {
411
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
412
} else if (irq < GIC_INTERNAL) {
413
res = cm;
414
} else {
415
- res = GIC_TARGET(irq);
416
+ res = GIC_DIST_TARGET(irq);
417
}
418
}
419
} else if (offset < 0xf00) {
420
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
421
res = 0;
422
for (i = 0; i < 4; i++) {
423
if (s->security_extn && !attrs.secure &&
424
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
425
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
426
continue; /* Ignore Non-secure access of Group0 IRQ */
427
}
428
429
- if (GIC_TEST_MODEL(irq + i))
430
+ if (GIC_DIST_TEST_MODEL(irq + i)) {
431
res |= (1 << (i * 2));
432
- if (GIC_TEST_EDGE_TRIGGER(irq + i))
433
+ }
434
+ if (GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
435
res |= (2 << (i * 2));
436
+ }
437
}
438
} else if (offset < 0xf10) {
439
goto bad_reg;
440
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
441
}
442
443
if (s->security_extn && !attrs.secure &&
444
- !GIC_TEST_GROUP(irq, 1 << cpu)) {
445
+ !GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
446
res = 0; /* Ignore Non-secure access of Group0 IRQ */
447
} else {
448
res = s->sgi_pending[irq][cpu];
449
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
450
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
451
if (value & (1 << i)) {
452
/* Group1 (Non-secure) */
453
- GIC_SET_GROUP(irq + i, cm);
454
+ GIC_DIST_SET_GROUP(irq + i, cm);
455
} else {
456
/* Group0 (Secure) */
457
- GIC_CLEAR_GROUP(irq + i, cm);
458
+ GIC_DIST_CLEAR_GROUP(irq + i, cm);
459
}
460
}
461
}
462
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
463
for (i = 0; i < 8; i++) {
464
if (value & (1 << i)) {
465
int mask =
466
- (irq < GIC_INTERNAL) ? (1 << cpu) : GIC_TARGET(irq + i);
467
+ (irq < GIC_INTERNAL) ? (1 << cpu)
468
+ : GIC_DIST_TARGET(irq + i);
469
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
470
471
if (s->security_extn && !attrs.secure &&
472
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
473
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
474
continue; /* Ignore Non-secure access of Group0 IRQ */
475
}
476
477
- if (!GIC_TEST_ENABLED(irq + i, cm)) {
478
+ if (!GIC_DIST_TEST_ENABLED(irq + i, cm)) {
479
DPRINTF("Enabled IRQ %d\n", irq + i);
480
trace_gic_enable_irq(irq + i);
481
}
482
- GIC_SET_ENABLED(irq + i, cm);
483
+ GIC_DIST_SET_ENABLED(irq + i, cm);
484
        /* If a raised level-triggered IRQ is enabled then mark
           it as pending. */
486
- if (GIC_TEST_LEVEL(irq + i, mask)
487
- && !GIC_TEST_EDGE_TRIGGER(irq + i)) {
488
+ if (GIC_DIST_TEST_LEVEL(irq + i, mask)
489
+ && !GIC_DIST_TEST_EDGE_TRIGGER(irq + i)) {
490
DPRINTF("Set %d pending mask %x\n", irq + i, mask);
491
- GIC_SET_PENDING(irq + i, mask);
492
+ GIC_DIST_SET_PENDING(irq + i, mask);
493
}
494
}
495
}
496
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
497
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
498
499
if (s->security_extn && !attrs.secure &&
500
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
501
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
502
continue; /* Ignore Non-secure access of Group0 IRQ */
503
}
504
505
- if (GIC_TEST_ENABLED(irq + i, cm)) {
506
+ if (GIC_DIST_TEST_ENABLED(irq + i, cm)) {
507
DPRINTF("Disabled IRQ %d\n", irq + i);
508
trace_gic_disable_irq(irq + i);
509
}
510
- GIC_CLEAR_ENABLED(irq + i, cm);
511
+ GIC_DIST_CLEAR_ENABLED(irq + i, cm);
512
}
513
}
514
} else if (offset < 0x280) {
515
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
516
for (i = 0; i < 8; i++) {
517
if (value & (1 << i)) {
518
if (s->security_extn && !attrs.secure &&
519
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
520
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
521
continue; /* Ignore Non-secure access of Group0 IRQ */
522
}
523
524
- GIC_SET_PENDING(irq + i, GIC_TARGET(irq + i));
525
+ GIC_DIST_SET_PENDING(irq + i, GIC_DIST_TARGET(irq + i));
526
}
527
}
528
} else if (offset < 0x300) {
529
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
530
531
for (i = 0; i < 8; i++) {
532
if (s->security_extn && !attrs.secure &&
533
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
534
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
535
continue; /* Ignore Non-secure access of Group0 IRQ */
536
}
537
538
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
539
           for per-CPU interrupts. It's unclear whether this is the
           correct behavior. */
541
if (value & (1 << i)) {
542
- GIC_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
543
+ GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
544
}
545
}
546
} else if (offset < 0x400) {
547
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
548
irq = (offset - 0x400) + GIC_BASE_IRQ;
549
if (irq >= s->num_irq)
550
goto bad_reg;
551
- gic_set_priority(s, cpu, irq, value, attrs);
552
+ gic_dist_set_priority(s, cpu, irq, value, attrs);
553
} else if (offset < 0xc00) {
554
/* Interrupt CPU Target. RAZ/WI on uniprocessor GICs, with the
555
* annoying exception of the 11MPCore's GIC.
556
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
557
value |= 0xaa;
558
for (i = 0; i < 4; i++) {
559
if (s->security_extn && !attrs.secure &&
560
- !GIC_TEST_GROUP(irq + i, 1 << cpu)) {
561
+ !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
562
continue; /* Ignore Non-secure access of Group0 IRQ */
563
}
564
565
if (s->revision == REV_11MPCORE) {
566
if (value & (1 << (i * 2))) {
567
- GIC_SET_MODEL(irq + i);
568
+ GIC_DIST_SET_MODEL(irq + i);
569
} else {
570
- GIC_CLEAR_MODEL(irq + i);
571
+ GIC_DIST_CLEAR_MODEL(irq + i);
572
}
573
}
574
if (value & (2 << (i * 2))) {
575
- GIC_SET_EDGE_TRIGGER(irq + i);
576
+ GIC_DIST_SET_EDGE_TRIGGER(irq + i);
577
} else {
578
- GIC_CLEAR_EDGE_TRIGGER(irq + i);
579
+ GIC_DIST_CLEAR_EDGE_TRIGGER(irq + i);
580
}
581
}
582
} else if (offset < 0xf10) {
583
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
584
irq = (offset - 0xf10);
585
586
if (!s->security_extn || attrs.secure ||
587
- GIC_TEST_GROUP(irq, 1 << cpu)) {
588
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
589
s->sgi_pending[irq][cpu] &= ~value;
590
if (s->sgi_pending[irq][cpu] == 0) {
591
- GIC_CLEAR_PENDING(irq, 1 << cpu);
592
+ GIC_DIST_CLEAR_PENDING(irq, 1 << cpu);
593
}
594
}
595
} else if (offset < 0xf30) {
596
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
597
irq = (offset - 0xf20);
598
599
if (!s->security_extn || attrs.secure ||
600
- GIC_TEST_GROUP(irq, 1 << cpu)) {
601
- GIC_SET_PENDING(irq, 1 << cpu);
602
+ GIC_DIST_TEST_GROUP(irq, 1 << cpu)) {
603
+ GIC_DIST_SET_PENDING(irq, 1 << cpu);
604
s->sgi_pending[irq][cpu] |= value;
605
}
606
} else {
607
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
608
mask = ALL_CPU_MASK;
609
break;
610
}
611
- GIC_SET_PENDING(irq, mask);
612
+ GIC_DIST_SET_PENDING(irq, mask);
613
target_cpu = ctz32(mask);
614
while (target_cpu < GIC_NCPU) {
615
s->sgi_pending[irq][target_cpu] |= (1 << cpu);
616
diff --git a/hw/intc/arm_gic_common.c b/hw/intc/arm_gic_common.c
617
index XXXXXXX..XXXXXXX 100644
618
--- a/hw/intc/arm_gic_common.c
619
+++ b/hw/intc/arm_gic_common.c
620
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
621
}
622
}
623
for (i = 0; i < GIC_NR_SGIS; i++) {
624
- GIC_SET_ENABLED(i, ALL_CPU_MASK);
625
- GIC_SET_EDGE_TRIGGER(i);
626
+ GIC_DIST_SET_ENABLED(i, ALL_CPU_MASK);
627
+ GIC_DIST_SET_EDGE_TRIGGER(i);
628
}
629
630
for (i = 0; i < ARRAY_SIZE(s->priority2); i++) {
631
@@ -XXX,XX +XXX,XX @@ static void arm_gic_common_reset(DeviceState *dev)
632
}
633
if (s->security_extn && s->irq_reset_nonsecure) {
634
for (i = 0; i < GIC_MAXIRQ; i++) {
635
- GIC_SET_GROUP(i, ALL_CPU_MASK);
636
+ GIC_DIST_SET_GROUP(i, ALL_CPU_MASK);
637
}
638
}
639
640
diff --git a/hw/intc/arm_gic_kvm.c b/hw/intc/arm_gic_kvm.c
641
index XXXXXXX..XXXXXXX 100644
642
--- a/hw/intc/arm_gic_kvm.c
643
+++ b/hw/intc/arm_gic_kvm.c
644
@@ -XXX,XX +XXX,XX @@ static void translate_group(GICState *s, int irq, int cpu,
645
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
646
647
if (to_kernel) {
648
- *field = GIC_TEST_GROUP(irq, cm);
649
+ *field = GIC_DIST_TEST_GROUP(irq, cm);
650
} else {
651
if (*field & 1) {
652
- GIC_SET_GROUP(irq, cm);
653
+ GIC_DIST_SET_GROUP(irq, cm);
654
}
655
}
656
}
657
@@ -XXX,XX +XXX,XX @@ static void translate_enabled(GICState *s, int irq, int cpu,
658
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
659
660
if (to_kernel) {
661
- *field = GIC_TEST_ENABLED(irq, cm);
662
+ *field = GIC_DIST_TEST_ENABLED(irq, cm);
663
} else {
664
if (*field & 1) {
665
- GIC_SET_ENABLED(irq, cm);
666
+ GIC_DIST_SET_ENABLED(irq, cm);
667
}
668
}
669
}
670
@@ -XXX,XX +XXX,XX @@ static void translate_pending(GICState *s, int irq, int cpu,
671
*field = gic_test_pending(s, irq, cm);
672
} else {
673
if (*field & 1) {
674
- GIC_SET_PENDING(irq, cm);
675
+ GIC_DIST_SET_PENDING(irq, cm);
676
            /* TODO: Capture if level-line is held high in the kernel */
677
}
678
}
679
@@ -XXX,XX +XXX,XX @@ static void translate_active(GICState *s, int irq, int cpu,
680
int cm = (irq < GIC_INTERNAL) ? (1 << cpu) : ALL_CPU_MASK;
681
682
if (to_kernel) {
683
- *field = GIC_TEST_ACTIVE(irq, cm);
684
+ *field = GIC_DIST_TEST_ACTIVE(irq, cm);
685
} else {
686
if (*field & 1) {
687
- GIC_SET_ACTIVE(irq, cm);
688
+ GIC_DIST_SET_ACTIVE(irq, cm);
689
}
690
}
691
}
692
@@ -XXX,XX +XXX,XX @@ static void translate_trigger(GICState *s, int irq, int cpu,
693
uint32_t *field, bool to_kernel)
694
{
695
if (to_kernel) {
696
- *field = (GIC_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
697
+ *field = (GIC_DIST_TEST_EDGE_TRIGGER(irq)) ? 0x2 : 0x0;
698
} else {
699
if (*field & 0x2) {
700
- GIC_SET_EDGE_TRIGGER(irq);
701
+ GIC_DIST_SET_EDGE_TRIGGER(irq);
702
}
703
}
704
}
705
@@ -XXX,XX +XXX,XX @@ static void translate_priority(GICState *s, int irq, int cpu,
706
uint32_t *field, bool to_kernel)
707
{
708
if (to_kernel) {
709
- *field = GIC_GET_PRIORITY(irq, cpu) & 0xff;
710
+ *field = GIC_DIST_GET_PRIORITY(irq, cpu) & 0xff;
711
} else {
712
- gic_set_priority(s, cpu, irq, *field & 0xff, MEMTXATTRS_UNSPECIFIED);
713
+ gic_dist_set_priority(s, cpu, irq,
714
+ *field & 0xff, MEMTXATTRS_UNSPECIFIED);
715
}
716
}
717
718
--
2.18.0

--
2.25.1
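A note on the gen_goto_tb() change above: the new contract is that callers
pass a byte offset relative to s->pc_curr rather than an absolute
destination, so absolute PCs are only materialized at translation time.
A minimal consolidated sketch of the resulting shape (assembled from the
hunks above; the @@-elided context is assumed):

    static void gen_goto_tb(DisasContext *s, int n, int diff)
    {
        target_ulong dest = s->pc_curr + diff;   /* translation-time only */

        if (translator_use_goto_tb(&s->base, dest)) {
            tcg_gen_goto_tb(n);
            gen_set_pc_im(s, dest);
            tcg_gen_exit_tb(s->base.tb, n);
        } else {
            gen_set_pc_im(s, dest);
            gen_goto_ptr();
        }
        s->base.is_jmp = DISAS_NORETURN;
    }

    /* Falling through to the next instruction then becomes: */
    gen_goto_tb(dc, 1, curr_insn_len(dc));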
From: Luc Michel <luc.michel@greensocs.com>

Add some traces to the ARM GIC to catch register accesses (distributor,
(v)cpu interface and virtual interface), and to take into account
virtualization extensions (print `vcpu` instead of `cpu` when needed).

Also add some virtualization extensions specific traces: LR updating
and maintenance IRQ generation.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-19-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c    | 31 +++++++++++++++++++++++++------
 hw/intc/trace-events | 12 ++++++++++--
 2 files changed, 35 insertions(+), 8 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on
absolute values by passing in pc difference.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a32.h |  2 +-
 target/arm/translate.h     |  6 ++--
 target/arm/translate-a64.c | 32 +++++++++---------
 target/arm/translate-vfp.c |  2 +-
 target/arm/translate.c     | 68 ++++++++++++++++++++------------------
 5 files changed, 56 insertions(+), 54 deletions(-)
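As a reading aid for the hunks below: the change is mostly mechanical.
gen_set_pc_im(s, absolute_pc) becomes gen_update_pc(s, diff) (and
gen_a64_set_pc_im() becomes gen_a64_update_pc()), where diff is relative
to s->pc_curr. A minimal sketch of the A32 helper and the two common call
patterns, taken from the hunks that follow:

    void gen_update_pc(DisasContext *s, target_long diff)
    {
        /* The absolute PC is computed once, at translation time */
        tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
    }

    gen_update_pc(s, 0);                 /* sync PC to the current insn */
    gen_update_pc(s, curr_insn_len(s));  /* point PC at the next insn   */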
19
17
20
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
18
diff --git a/target/arm/translate-a32.h b/target/arm/translate-a32.h
21
index XXXXXXX..XXXXXXX 100644
19
index XXXXXXX..XXXXXXX 100644
22
--- a/hw/intc/arm_gic.c
20
--- a/target/arm/translate-a32.h
23
+++ b/hw/intc/arm_gic.c
21
+++ b/target/arm/translate-a32.h
24
@@ -XXX,XX +XXX,XX @@ static inline void gic_update_internal(GICState *s, bool virt)
22
@@ -XXX,XX +XXX,XX @@ void write_neon_element64(TCGv_i64 src, int reg, int ele, MemOp memop);
23
TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs);
24
void gen_set_cpsr(TCGv_i32 var, uint32_t mask);
25
void gen_set_condexec(DisasContext *s);
26
-void gen_set_pc_im(DisasContext *s, target_ulong val);
27
+void gen_update_pc(DisasContext *s, target_long diff);
28
void gen_lookup_tb(DisasContext *s);
29
long vfp_reg_offset(bool dp, unsigned reg);
30
long neon_full_reg_offset(unsigned reg);
31
diff --git a/target/arm/translate.h b/target/arm/translate.h
32
index XXXXXXX..XXXXXXX 100644
33
--- a/target/arm/translate.h
34
+++ b/target/arm/translate.h
35
@@ -XXX,XX +XXX,XX @@ static inline int curr_insn_len(DisasContext *s)
36
* For instructions which want an immediate exit to the main loop, as opposed
37
* to attempting to use lookup_and_goto_ptr. Unlike DISAS_UPDATE_EXIT, this
38
* doesn't write the PC on exiting the translation loop so you need to ensure
39
- * something (gen_a64_set_pc_im or runtime helper) has done so before we reach
40
+ * something (gen_a64_update_pc or runtime helper) has done so before we reach
41
* return from cpu_tb_exec.
42
*/
43
#define DISAS_EXIT DISAS_TARGET_9
44
@@ -XXX,XX +XXX,XX @@ static inline int curr_insn_len(DisasContext *s)
45
46
#ifdef TARGET_AARCH64
47
void a64_translate_init(void);
48
-void gen_a64_set_pc_im(uint64_t val);
49
+void gen_a64_update_pc(DisasContext *s, target_long diff);
50
extern const TranslatorOps aarch64_translator_ops;
51
#else
52
static inline void a64_translate_init(void)
53
{
54
}
55
56
-static inline void gen_a64_set_pc_im(uint64_t val)
57
+static inline void gen_a64_update_pc(DisasContext *s, target_long diff)
58
{
59
}
60
#endif
61
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
62
index XXXXXXX..XXXXXXX 100644
63
--- a/target/arm/translate-a64.c
64
+++ b/target/arm/translate-a64.c
65
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
66
}
67
}
68
69
-void gen_a64_set_pc_im(uint64_t val)
70
+void gen_a64_update_pc(DisasContext *s, target_long diff)
71
{
72
- tcg_gen_movi_i64(cpu_pc, val);
73
+ tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
74
}
75
76
/*
77
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
78
79
static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
80
{
81
- gen_a64_set_pc_im(pc);
82
+ gen_a64_update_pc(s, pc - s->pc_curr);
83
gen_exception_internal(excp);
84
s->base.is_jmp = DISAS_NORETURN;
85
}
86
87
static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syndrome)
88
{
89
- gen_a64_set_pc_im(s->pc_curr);
90
+ gen_a64_update_pc(s, 0);
91
gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syndrome));
92
s->base.is_jmp = DISAS_NORETURN;
93
}
94
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
95
96
if (use_goto_tb(s, dest)) {
97
tcg_gen_goto_tb(n);
98
- gen_a64_set_pc_im(dest);
99
+ gen_a64_update_pc(s, diff);
100
tcg_gen_exit_tb(s->base.tb, n);
101
s->base.is_jmp = DISAS_NORETURN;
102
} else {
103
- gen_a64_set_pc_im(dest);
104
+ gen_a64_update_pc(s, diff);
105
if (s->ss_active) {
106
gen_step_complete_exception(s);
107
} else {
108
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
109
uint32_t syndrome;
110
111
syndrome = syn_aa64_sysregtrap(op0, op1, op2, crn, crm, rt, isread);
112
- gen_a64_set_pc_im(s->pc_curr);
113
+ gen_a64_update_pc(s, 0);
114
gen_helper_access_check_cp_reg(cpu_env,
115
tcg_constant_ptr(ri),
116
tcg_constant_i32(syndrome),
117
@@ -XXX,XX +XXX,XX @@ static void handle_sys(DisasContext *s, uint32_t insn, bool isread,
118
* The readfn or writefn might raise an exception;
119
* synchronize the CPU state in case it does.
120
*/
121
- gen_a64_set_pc_im(s->pc_curr);
122
+ gen_a64_update_pc(s, 0);
123
}
124
125
/* Handle special cases first */
126
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
127
/* The pre HVC helper handles cases when HVC gets trapped
128
* as an undefined insn by runtime configuration.
129
*/
130
- gen_a64_set_pc_im(s->pc_curr);
131
+ gen_a64_update_pc(s, 0);
132
gen_helper_pre_hvc(cpu_env);
133
gen_ss_advance(s);
134
gen_exception_insn_el(s, s->base.pc_next, EXCP_HVC,
135
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
136
unallocated_encoding(s);
137
break;
138
}
139
- gen_a64_set_pc_im(s->pc_curr);
140
+ gen_a64_update_pc(s, 0);
141
gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
142
gen_ss_advance(s);
143
gen_exception_insn_el(s, s->base.pc_next, EXCP_SMC,
144
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
145
*/
146
switch (dc->base.is_jmp) {
147
default:
148
- gen_a64_set_pc_im(dc->base.pc_next);
149
+ gen_a64_update_pc(dc, 4);
150
/* fall through */
151
case DISAS_EXIT:
152
case DISAS_JUMP:
153
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
154
break;
155
default:
156
case DISAS_UPDATE_EXIT:
157
- gen_a64_set_pc_im(dc->base.pc_next);
158
+ gen_a64_update_pc(dc, 4);
159
/* fall through */
160
case DISAS_EXIT:
161
tcg_gen_exit_tb(NULL, 0);
162
break;
163
case DISAS_UPDATE_NOCHAIN:
164
- gen_a64_set_pc_im(dc->base.pc_next);
165
+ gen_a64_update_pc(dc, 4);
166
/* fall through */
167
case DISAS_JUMP:
168
tcg_gen_lookup_and_goto_ptr();
169
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
170
case DISAS_SWI:
171
break;
172
case DISAS_WFE:
173
- gen_a64_set_pc_im(dc->base.pc_next);
174
+ gen_a64_update_pc(dc, 4);
175
gen_helper_wfe(cpu_env);
176
break;
177
case DISAS_YIELD:
178
- gen_a64_set_pc_im(dc->base.pc_next);
179
+ gen_a64_update_pc(dc, 4);
180
gen_helper_yield(cpu_env);
181
break;
182
case DISAS_WFI:
183
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
184
* This is a special case because we don't want to just halt
185
* the CPU if trying to debug across a WFI.
186
*/
187
- gen_a64_set_pc_im(dc->base.pc_next);
188
+ gen_a64_update_pc(dc, 4);
189
gen_helper_wfi(cpu_env, tcg_constant_i32(4));
190
/*
191
* The helper doesn't necessarily throw an exception, but we
192
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
193
index XXXXXXX..XXXXXXX 100644
194
--- a/target/arm/translate-vfp.c
195
+++ b/target/arm/translate-vfp.c
196
@@ -XXX,XX +XXX,XX @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
197
case ARM_VFP_FPSID:
198
if (s->current_el == 1) {
199
gen_set_condexec(s);
200
- gen_set_pc_im(s, s->pc_curr);
201
+ gen_update_pc(s, 0);
202
gen_helper_check_hcr_el2_trap(cpu_env,
203
tcg_constant_i32(a->rt),
204
tcg_constant_i32(a->reg));
205
diff --git a/target/arm/translate.c b/target/arm/translate.c
206
index XXXXXXX..XXXXXXX 100644
207
--- a/target/arm/translate.c
208
+++ b/target/arm/translate.c
209
@@ -XXX,XX +XXX,XX @@ void gen_set_condexec(DisasContext *s)
210
}
211
}
212
213
-void gen_set_pc_im(DisasContext *s, target_ulong val)
214
+void gen_update_pc(DisasContext *s, target_long diff)
215
{
216
- tcg_gen_movi_i32(cpu_R[15], val);
217
+ tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
218
}
219
220
/* Set PC and Thumb state from var. var is marked as dead. */
221
@@ -XXX,XX +XXX,XX @@ static inline void gen_bxns(DisasContext *s, int rm)
222
223
/* The bxns helper may raise an EXCEPTION_EXIT exception, so in theory
224
* we need to sync state before calling it, but:
225
- * - we don't need to do gen_set_pc_im() because the bxns helper will
226
+ * - we don't need to do gen_update_pc() because the bxns helper will
227
* always set the PC itself
228
* - we don't need to do gen_set_condexec() because BXNS is UNPREDICTABLE
229
* unless it's outside an IT block or the last insn in an IT block,
230
@@ -XXX,XX +XXX,XX @@ static inline void gen_blxns(DisasContext *s, int rm)
231
* We do however need to set the PC, because the blxns helper reads it.
232
* The blxns helper may throw an exception.
233
*/
234
- gen_set_pc_im(s, s->base.pc_next);
235
+ gen_update_pc(s, curr_insn_len(s));
236
gen_helper_v7m_blxns(cpu_env, var);
237
tcg_temp_free_i32(var);
238
s->base.is_jmp = DISAS_EXIT;
239
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
240
* as an undefined insn by runtime configuration (ie before
241
* the insn really executes).
242
*/
243
- gen_set_pc_im(s, s->pc_curr);
244
+ gen_update_pc(s, 0);
245
gen_helper_pre_hvc(cpu_env);
246
/* Otherwise we will treat this as a real exception which
247
* happens after execution of the insn. (The distinction matters
248
@@ -XXX,XX +XXX,XX @@ static inline void gen_hvc(DisasContext *s, int imm16)
249
* for single stepping.)
250
*/
251
s->svc_imm = imm16;
252
- gen_set_pc_im(s, s->base.pc_next);
253
+ gen_update_pc(s, curr_insn_len(s));
254
s->base.is_jmp = DISAS_HVC;
255
}
256
257
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
258
/* As with HVC, we may take an exception either before or after
259
* the insn executes.
260
*/
261
- gen_set_pc_im(s, s->pc_curr);
262
+ gen_update_pc(s, 0);
263
gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa32_smc()));
264
- gen_set_pc_im(s, s->base.pc_next);
265
+ gen_update_pc(s, curr_insn_len(s));
266
s->base.is_jmp = DISAS_SMC;
267
}
268
269
static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
270
{
271
gen_set_condexec(s);
272
- gen_set_pc_im(s, pc);
273
+ gen_update_pc(s, pc - s->pc_curr);
274
gen_exception_internal(excp);
275
s->base.is_jmp = DISAS_NORETURN;
276
}
277
@@ -XXX,XX +XXX,XX @@ static void gen_exception_insn_el_v(DisasContext *s, uint64_t pc, int excp,
278
uint32_t syn, TCGv_i32 tcg_el)
279
{
280
if (s->aarch64) {
281
- gen_a64_set_pc_im(pc);
282
+ gen_a64_update_pc(s, pc - s->pc_curr);
283
} else {
284
gen_set_condexec(s);
285
- gen_set_pc_im(s, pc);
286
+ gen_update_pc(s, pc - s->pc_curr);
287
}
288
gen_exception_el_v(excp, syn, tcg_el);
289
s->base.is_jmp = DISAS_NORETURN;
290
@@ -XXX,XX +XXX,XX @@ void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
291
void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
292
{
293
if (s->aarch64) {
294
- gen_a64_set_pc_im(pc);
295
+ gen_a64_update_pc(s, pc - s->pc_curr);
296
} else {
297
gen_set_condexec(s);
298
- gen_set_pc_im(s, pc);
299
+ gen_update_pc(s, pc - s->pc_curr);
300
}
301
gen_exception(excp, syn);
302
s->base.is_jmp = DISAS_NORETURN;
303
@@ -XXX,XX +XXX,XX @@ void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
304
static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
305
{
306
gen_set_condexec(s);
307
- gen_set_pc_im(s, s->pc_curr);
308
+ gen_update_pc(s, 0);
309
gen_helper_exception_bkpt_insn(cpu_env, tcg_constant_i32(syn));
310
s->base.is_jmp = DISAS_NORETURN;
311
}
312
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
313
314
if (translator_use_goto_tb(&s->base, dest)) {
315
tcg_gen_goto_tb(n);
316
- gen_set_pc_im(s, dest);
317
+ gen_update_pc(s, diff);
318
tcg_gen_exit_tb(s->base.tb, n);
319
} else {
320
- gen_set_pc_im(s, dest);
321
+ gen_update_pc(s, diff);
322
gen_goto_ptr();
323
}
324
s->base.is_jmp = DISAS_NORETURN;
325
@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
326
/* Jump, specifying which TB number to use if we gen_goto_tb() */
327
static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
328
{
329
+ int diff = dest - s->pc_curr;
330
+
331
if (unlikely(s->ss_active)) {
332
/* An indirect jump so that we still trigger the debug exception. */
333
- gen_set_pc_im(s, dest);
334
+ gen_update_pc(s, diff);
335
s->base.is_jmp = DISAS_JUMP;
336
return;
337
}
338
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
339
* gen_jmp();
340
* on the second call to gen_jmp().
341
*/
342
- gen_goto_tb(s, tbno, dest - s->pc_curr);
343
+ gen_goto_tb(s, tbno, diff);
344
break;
345
case DISAS_UPDATE_NOCHAIN:
346
case DISAS_UPDATE_EXIT:
347
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
348
* Avoid using goto_tb so we really do exit back to the main loop
349
* and don't chain to another TB.
350
*/
351
- gen_set_pc_im(s, dest);
352
+ gen_update_pc(s, diff);
353
gen_goto_ptr();
354
s->base.is_jmp = DISAS_NORETURN;
355
break;
356
@@ -XXX,XX +XXX,XX @@ static void gen_msr_banked(DisasContext *s, int r, int sysm, int rn)
357
358
/* Sync state because msr_banked() can raise exceptions */
359
gen_set_condexec(s);
360
- gen_set_pc_im(s, s->pc_curr);
361
+ gen_update_pc(s, 0);
362
tcg_reg = load_reg(s, rn);
363
gen_helper_msr_banked(cpu_env, tcg_reg,
364
tcg_constant_i32(tgtmode),
365
@@ -XXX,XX +XXX,XX @@ static void gen_mrs_banked(DisasContext *s, int r, int sysm, int rn)
366
367
/* Sync state because mrs_banked() can raise exceptions */
368
gen_set_condexec(s);
369
- gen_set_pc_im(s, s->pc_curr);
370
+ gen_update_pc(s, 0);
371
tcg_reg = tcg_temp_new_i32();
372
gen_helper_mrs_banked(tcg_reg, cpu_env,
373
tcg_constant_i32(tgtmode),
374
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
375
}
376
377
gen_set_condexec(s);
378
- gen_set_pc_im(s, s->pc_curr);
379
+ gen_update_pc(s, 0);
380
gen_helper_access_check_cp_reg(cpu_env,
381
tcg_constant_ptr(ri),
382
tcg_constant_i32(syndrome),
383
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
384
* synchronize the CPU state in case it does.
385
*/
386
gen_set_condexec(s);
387
- gen_set_pc_im(s, s->pc_curr);
388
+ gen_update_pc(s, 0);
25
}
389
}
26
390
27
if (best_irq != 1023) {
391
/* Handle special cases first */
28
- trace_gic_update_bestirq(cpu, best_irq, best_prio,
392
@@ -XXX,XX +XXX,XX @@ static void do_coproc_insn(DisasContext *s, int cpnum, int is64,
29
- s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
393
unallocated_encoding(s);
30
+ trace_gic_update_bestirq(virt ? "vcpu" : "cpu", cpu,
394
return;
31
+ best_irq, best_prio,
395
}
32
+ s->priority_mask[cpu_iface],
396
- gen_set_pc_im(s, s->base.pc_next);
33
+ s->running_priority[cpu_iface]);
397
+ gen_update_pc(s, curr_insn_len(s));
34
}
398
s->base.is_jmp = DISAS_WFI;
35
399
return;
36
irq_level = fiq_level = 0;
400
default:
37
@@ -XXX,XX +XXX,XX @@ static void gic_update_maintenance(GICState *s)
401
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
38
gic_compute_misr(s, cpu);
402
addr = tcg_temp_new_i32();
39
maint_level = (s->h_hcr[cpu] & R_GICH_HCR_EN_MASK) && s->h_misr[cpu];
403
/* get_r13_banked() will raise an exception if called from System mode */
40
404
gen_set_condexec(s);
41
+ trace_gic_update_maintenance_irq(cpu, maint_level);
405
- gen_set_pc_im(s, s->pc_curr);
42
qemu_set_irq(s->maintenance_irq[cpu], maint_level);
406
+ gen_update_pc(s, 0);
43
}
407
gen_helper_get_r13_banked(addr, cpu_env, tcg_constant_i32(mode));
44
}
408
switch (amode) {
45
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
409
case 0: /* DA */
46
* is in the wrong group.
410
@@ -XXX,XX +XXX,XX @@ static bool trans_YIELD(DisasContext *s, arg_YIELD *a)
47
*/
411
* scheduling of other vCPUs.
48
irq = gic_get_current_pending_irq(s, cpu, attrs);
412
*/
49
- trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
413
if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
50
+ trace_gic_acknowledge_irq(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
414
- gen_set_pc_im(s, s->base.pc_next);
51
+ gic_get_vcpu_real_id(cpu), irq);
415
+ gen_update_pc(s, curr_insn_len(s));
52
416
s->base.is_jmp = DISAS_YIELD;
53
if (irq >= GIC_MAXIRQ) {
417
}
54
DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
418
return true;
55
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_dist_read(void *opaque, hwaddr offset, uint64_t *data,
419
@@ -XXX,XX +XXX,XX @@ static bool trans_WFE(DisasContext *s, arg_WFE *a)
56
switch (size) {
420
* implemented so we can't sleep like WFI does.
57
case 1:
421
*/
58
*data = gic_dist_readb(opaque, offset, attrs);
422
if (!(tb_cflags(s->base.tb) & CF_PARALLEL)) {
59
- return MEMTX_OK;
423
- gen_set_pc_im(s, s->base.pc_next);
60
+ break;
424
+ gen_update_pc(s, curr_insn_len(s));
61
case 2:
425
s->base.is_jmp = DISAS_WFE;
62
*data = gic_dist_readb(opaque, offset, attrs);
426
}
63
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
427
return true;
64
- return MEMTX_OK;
428
@@ -XXX,XX +XXX,XX @@ static bool trans_WFE(DisasContext *s, arg_WFE *a)
65
+ break;
429
static bool trans_WFI(DisasContext *s, arg_WFI *a)
66
case 4:
430
{
67
*data = gic_dist_readb(opaque, offset, attrs);
431
/* For WFI, halt the vCPU until an IRQ. */
68
*data |= gic_dist_readb(opaque, offset + 1, attrs) << 8;
432
- gen_set_pc_im(s, s->base.pc_next);
69
*data |= gic_dist_readb(opaque, offset + 2, attrs) << 16;
433
+ gen_update_pc(s, curr_insn_len(s));
70
*data |= gic_dist_readb(opaque, offset + 3, attrs) << 24;
434
s->base.is_jmp = DISAS_WFI;
71
- return MEMTX_OK;
435
return true;
72
+ break;
436
}
73
default:
437
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
74
return MEMTX_ERROR;
438
(a->imm == semihost_imm)) {
75
}
439
gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
76
+
440
} else {
77
+ trace_gic_dist_read(offset, size, *data);
441
- gen_set_pc_im(s, s->base.pc_next);
78
+ return MEMTX_OK;
442
+ gen_update_pc(s, curr_insn_len(s));
79
}
443
s->svc_imm = a->imm;
80
444
s->base.is_jmp = DISAS_SWI;
81
static void gic_dist_writeb(void *opaque, hwaddr offset,
445
}
82
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writel(void *opaque, hwaddr offset,
446
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
83
static MemTxResult gic_dist_write(void *opaque, hwaddr offset, uint64_t data,
447
case DISAS_TOO_MANY:
84
unsigned size, MemTxAttrs attrs)
448
case DISAS_UPDATE_EXIT:
85
{
449
case DISAS_UPDATE_NOCHAIN:
86
+ trace_gic_dist_write(offset, size, data);
450
- gen_set_pc_im(dc, dc->base.pc_next);
87
+
451
+ gen_update_pc(dc, curr_insn_len(dc));
88
switch (size) {
452
/* fall through */
89
case 1:
453
default:
90
gic_dist_writeb(opaque, offset, data, attrs);
454
/* FIXME: Single stepping a WFI insn will not halt the CPU. */
91
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
455
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
92
*data = 0;
456
gen_goto_tb(dc, 1, curr_insn_len(dc));
93
break;
457
break;
94
}
458
case DISAS_UPDATE_NOCHAIN:
95
+
459
- gen_set_pc_im(dc, dc->base.pc_next);
96
+ trace_gic_cpu_read(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
460
+ gen_update_pc(dc, curr_insn_len(dc));
97
+ gic_get_vcpu_real_id(cpu), offset, *data);
461
/* fall through */
98
return MEMTX_OK;
462
case DISAS_JUMP:
99
}
463
gen_goto_ptr();
100
464
break;
101
static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
465
case DISAS_UPDATE_EXIT:
102
uint32_t value, MemTxAttrs attrs)
466
- gen_set_pc_im(dc, dc->base.pc_next);
103
{
467
+ gen_update_pc(dc, curr_insn_len(dc));
104
+ trace_gic_cpu_write(gic_is_vcpu(cpu) ? "vcpu" : "cpu",
468
/* fall through */
105
+ gic_get_vcpu_real_id(cpu), offset, value);
469
default:
106
+
470
/* indicate that the hash table must be used to find the next TB */
107
switch (offset) {
471
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
108
case 0x00: /* Control */
472
gen_set_label(dc->condlabel);
109
gic_set_cpu_control(s, cpu, value, attrs);
473
gen_set_condexec(dc);
110
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
474
if (unlikely(dc->ss_active)) {
111
return MEMTX_OK;
475
- gen_set_pc_im(dc, dc->base.pc_next);
112
}
476
+ gen_update_pc(dc, curr_insn_len(dc));
113
477
gen_singlestep_exception(dc);
114
+ trace_gic_hyp_read(addr, *data);
478
} else {
115
return MEMTX_OK;
479
gen_goto_tb(dc, 1, curr_insn_len(dc));
116
}
117
118
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
119
GICState *s = ARM_GIC(opaque);
120
int vcpu = cpu + GIC_NCPU;
121
122
+ trace_gic_hyp_write(addr, value);
123
+
124
switch (addr) {
125
case A_GICH_HCR: /* Hypervisor Control */
126
s->h_hcr[cpu] = value & GICH_HCR_MASK;
127
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
128
}
129
130
s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
131
+ trace_gic_lr_entry(cpu, lr_idx, s->h_lr[lr_idx][cpu]);
132
break;
133
}
134
135
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
136
index XXXXXXX..XXXXXXX 100644
137
--- a/hw/intc/trace-events
138
+++ b/hw/intc/trace-events
139
@@ -XXX,XX +XXX,XX @@ aspeed_vic_write(uint64_t offset, unsigned size, uint32_t data) "To 0x%" PRIx64
140
gic_enable_irq(int irq) "irq %d enabled"
141
gic_disable_irq(int irq) "irq %d disabled"
142
gic_set_irq(int irq, int level, int cpumask, int target) "irq %d level %d cpumask 0x%x target 0x%x"
143
-gic_update_bestirq(int cpu, int irq, int prio, int priority_mask, int running_priority) "cpu %d irq %d priority %d cpu priority mask %d cpu running priority %d"
144
+gic_update_bestirq(const char *s, int cpu, int irq, int prio, int priority_mask, int running_priority) "%s %d irq %d priority %d cpu priority mask %d cpu running priority %d"
145
gic_update_set_irq(int cpu, const char *name, int level) "cpu[%d]: %s = %d"
146
-gic_acknowledge_irq(int cpu, int irq) "cpu %d acknowledged irq %d"
147
+gic_acknowledge_irq(const char *s, int cpu, int irq) "%s %d acknowledged irq %d"
148
+gic_cpu_write(const char *s, int cpu, int addr, uint32_t val) "%s %d iface write at 0x%08x 0x%08" PRIx32
149
+gic_cpu_read(const char *s, int cpu, int addr, uint32_t val) "%s %d iface read at 0x%08x: 0x%08" PRIx32
150
+gic_hyp_read(int addr, uint32_t val) "hyp read at 0x%08x: 0x%08" PRIx32
151
+gic_hyp_write(int addr, uint32_t val) "hyp write at 0x%08x: 0x%08" PRIx32
152
+gic_dist_read(int addr, unsigned int size, uint32_t val) "dist read at 0x%08x size %u: 0x%08" PRIx32
153
+gic_dist_write(int addr, unsigned int size, uint32_t val) "dist write at 0x%08x size %u: 0x%08" PRIx32
154
+gic_lr_entry(int cpu, int entry, uint32_t val) "cpu %d: new lr entry %d: 0x%08" PRIx32
155
+gic_update_maintenance_irq(int cpu, int val) "cpu %d: maintenance = %d"
156
157
# hw/intc/arm_gicv3_cpuif.c
158
gicv3_icc_pmr_read(uint32_t cpu, uint64_t val) "GICv3 ICC_PMR read cpu 0x%x value 0x%" PRIx64
159
--
2.18.0

--
2.25.1
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_cpu_read() and
gic_cpu_write() functions. Those are the last bits missing to fully
support virtualization extensions in the CPU interface path.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-14-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.h        |  5 +++--
 target/arm/translate-a64.c    | 28 ++++++++++-------------
 target/arm/translate-m-nocp.c |  6 ++---
 target/arm/translate-mve.c    |  2 +-
 target/arm/translate-vfp.c    |  6 ++---
 target/arm/translate.c        | 42 +++++++++++++++++------------------
 6 files changed, 43 insertions(+), 46 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
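Before the hunks below, a condensed sketch of the new GICC_APR<n> read
path they build up (the final secure-view branch is elided in the diff
and assumed here):

    int regno = (offset - 0xd0) / 4;
    int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;

    if (regno >= nr_aprs || s->revision != 2) {
        *data = 0;                                     /* RAZ */
    } else if (gic_is_vcpu(cpu)) {
        /* vCPUs read the virtual active-priority register */
        *data = s->h_apr[gic_get_vcpu_real_id(cpu)];
    } else if (gic_cpu_ns_access(s, cpu, attrs)) {
        /* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
        *data = gic_apr_ns_view(s, regno, cpu);
    } else {
        *data = s->apr[regno][cpu];   /* assumed: the secure/Group0 view */
    }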
19
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
17
20
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
18
diff --git a/target/arm/translate.h b/target/arm/translate.h
21
{
19
index XXXXXXX..XXXXXXX 100644
22
int regno = (offset - 0xd0) / 4;
20
--- a/target/arm/translate.h
23
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;
21
+++ b/target/arm/translate.h
24
22
@@ -XXX,XX +XXX,XX @@ void arm_jump_cc(DisasCompare *cmp, TCGLabel *label);
25
- if (regno >= GIC_NR_APRS || s->revision != 2) {
23
void arm_gen_test_cc(int cc, TCGLabel *label);
26
+ if (regno >= nr_aprs || s->revision != 2) {
24
MemOp pow2_align(unsigned i);
27
*data = 0;
25
void unallocated_encoding(DisasContext *s);
28
+ } else if (gic_is_vcpu(cpu)) {
26
-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
29
+ *data = s->h_apr[gic_get_vcpu_real_id(cpu)];
27
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
30
} else if (gic_cpu_ns_access(s, cpu, attrs)) {
28
uint32_t syn, uint32_t target_el);
31
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
29
-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn);
32
*data = gic_apr_ns_view(s, regno, cpu);
30
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
33
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
31
+ int excp, uint32_t syn);
34
int regno = (offset - 0xe0) / 4;
32
35
33
/* Return state of Alternate Half-precision flag, caller frees result */
36
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
34
static inline TCGv_i32 get_ahp_flag(void)
37
- gic_cpu_ns_access(s, cpu, attrs)) {
35
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
38
+ gic_cpu_ns_access(s, cpu, attrs) || gic_is_vcpu(cpu)) {
36
index XXXXXXX..XXXXXXX 100644
39
*data = 0;
37
--- a/target/arm/translate-a64.c
40
} else {
38
+++ b/target/arm/translate-a64.c
41
*data = s->nsapr[regno][cpu];
39
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check_only(DisasContext *s)
42
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
40
assert(!s->fp_access_checked);
43
s->abpr[cpu] = MAX(value & 0x7, GIC_MIN_ABPR);
41
s->fp_access_checked = true;
42
43
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
44
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
45
syn_fp_access_trap(1, 0xe, false, 0),
46
s->fp_excp_el);
47
return false;
48
@@ -XXX,XX +XXX,XX @@ static bool fp_access_check(DisasContext *s)
49
return false;
50
}
51
if (s->sme_trap_nonstreaming && s->is_nonstreaming) {
52
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
53
+ gen_exception_insn(s, 0, EXCP_UDEF,
54
syn_smetrap(SME_ET_Streaming, false));
55
return false;
56
}
57
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
58
goto fail_exit;
59
}
60
} else if (s->sve_excp_el) {
61
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
62
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
63
syn_sve_access_trap(), s->sve_excp_el);
64
goto fail_exit;
65
}
66
@@ -XXX,XX +XXX,XX @@ bool sve_access_check(DisasContext *s)
67
static bool sme_access_check(DisasContext *s)
68
{
69
if (s->sme_excp_el) {
70
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
71
+ gen_exception_insn_el(s, 0, EXCP_UDEF,
72
syn_smetrap(SME_ET_AccessTrap, false),
73
s->sme_excp_el);
74
return false;
75
@@ -XXX,XX +XXX,XX @@ bool sme_enabled_check_with_svcr(DisasContext *s, unsigned req)
76
return false;
77
}
78
if (FIELD_EX64(req, SVCR, SM) && !s->pstate_sm) {
79
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
80
+ gen_exception_insn(s, 0, EXCP_UDEF,
81
syn_smetrap(SME_ET_NotStreaming, false));
82
return false;
83
}
84
if (FIELD_EX64(req, SVCR, ZA) && !s->pstate_za) {
85
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
86
+ gen_exception_insn(s, 0, EXCP_UDEF,
87
syn_smetrap(SME_ET_InactiveZA, false));
88
return false;
89
}
90
@@ -XXX,XX +XXX,XX @@ static void gen_sysreg_undef(DisasContext *s, bool isread,
91
} else {
92
syndrome = syn_uncategorized();
93
}
94
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syndrome);
95
+ gen_exception_insn(s, 0, EXCP_UDEF, syndrome);
96
}
97
98
/* MRS - move from system register
99
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
100
switch (op2_ll) {
101
case 1: /* SVC */
102
gen_ss_advance(s);
103
- gen_exception_insn(s, s->base.pc_next, EXCP_SWI,
104
- syn_aa64_svc(imm16));
105
+ gen_exception_insn(s, 4, EXCP_SWI, syn_aa64_svc(imm16));
106
break;
107
case 2: /* HVC */
108
if (s->current_el == 0) {
109
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
110
gen_a64_update_pc(s, 0);
111
gen_helper_pre_hvc(cpu_env);
112
gen_ss_advance(s);
113
- gen_exception_insn_el(s, s->base.pc_next, EXCP_HVC,
114
- syn_aa64_hvc(imm16), 2);
115
+ gen_exception_insn_el(s, 4, EXCP_HVC, syn_aa64_hvc(imm16), 2);
116
break;
117
case 3: /* SMC */
118
if (s->current_el == 0) {
119
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
120
gen_a64_update_pc(s, 0);
121
gen_helper_pre_smc(cpu_env, tcg_constant_i32(syn_aa64_smc(imm16)));
122
gen_ss_advance(s);
123
- gen_exception_insn_el(s, s->base.pc_next, EXCP_SMC,
124
- syn_aa64_smc(imm16), 3);
125
+ gen_exception_insn_el(s, 4, EXCP_SMC, syn_aa64_smc(imm16), 3);
126
break;
127
default:
128
unallocated_encoding(s);
129
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
130
* Illegal execution state. This has priority over BTI
131
* exceptions, but comes after instruction abort exceptions.
132
*/
133
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
134
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
135
return;
136
}
137
138
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
139
if (s->btype != 0
140
&& s->guarded_page
141
&& !btype_destination_ok(insn, s->bt, s->btype)) {
142
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
143
- syn_btitrap(s->btype));
144
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_btitrap(s->btype));
145
return;
44
}
146
}
45
} else {
147
} else {
46
- s->bpr[cpu] = MAX(value & 0x7, GIC_MIN_BPR);
148
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
47
+ int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
149
index XXXXXXX..XXXXXXX 100644
48
+ s->bpr[cpu] = MAX(value & 0x7, min_bpr);
150
--- a/target/arm/translate-m-nocp.c
151
+++ b/target/arm/translate-m-nocp.c
152
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
153
tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
154
155
if (s->fp_excp_el != 0) {
156
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
157
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
158
syn_uncategorized(), s->fp_excp_el);
159
return true;
160
}
161
@@ -XXX,XX +XXX,XX @@ static bool trans_NOCP(DisasContext *s, arg_nocp *a)
162
}
163
164
if (a->cp != 10) {
165
- gen_exception_insn(s, s->pc_curr, EXCP_NOCP, syn_uncategorized());
166
+ gen_exception_insn(s, 0, EXCP_NOCP, syn_uncategorized());
167
return true;
168
}
169
170
if (s->fp_excp_el != 0) {
171
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
172
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
173
syn_uncategorized(), s->fp_excp_el);
174
return true;
175
}
176
diff --git a/target/arm/translate-mve.c b/target/arm/translate-mve.c
177
index XXXXXXX..XXXXXXX 100644
178
--- a/target/arm/translate-mve.c
179
+++ b/target/arm/translate-mve.c
180
@@ -XXX,XX +XXX,XX @@ bool mve_eci_check(DisasContext *s)
181
return true;
182
default:
183
/* Reserved value: INVSTATE UsageFault */
184
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
185
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
186
return false;
187
}
188
}
189
diff --git a/target/arm/translate-vfp.c b/target/arm/translate-vfp.c
190
index XXXXXXX..XXXXXXX 100644
191
--- a/target/arm/translate-vfp.c
192
+++ b/target/arm/translate-vfp.c
193
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
194
int coproc = arm_dc_feature(s, ARM_FEATURE_V8) ? 0 : 0xa;
195
uint32_t syn = syn_fp_access_trap(1, 0xe, false, coproc);
196
197
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF, syn, s->fp_excp_el);
198
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn, s->fp_excp_el);
199
return false;
200
}
201
202
@@ -XXX,XX +XXX,XX @@ static bool vfp_access_check_a(DisasContext *s, bool ignore_vfp_enabled)
203
* appear to be any insns which touch VFP which are allowed.
204
*/
205
if (s->sme_trap_nonstreaming) {
206
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF,
207
+ gen_exception_insn(s, 0, EXCP_UDEF,
208
syn_smetrap(SME_ET_Streaming,
209
curr_insn_len(s) == 2));
210
return false;
211
@@ -XXX,XX +XXX,XX @@ bool vfp_access_check_m(DisasContext *s, bool skip_context_update)
212
* the encoding space handled by the patterns in m-nocp.decode,
213
* and for them we may need to raise NOCP here.
214
*/
215
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
216
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
217
syn_uncategorized(), s->fp_excp_el);
218
return false;
219
}
220
diff --git a/target/arm/translate.c b/target/arm/translate.c
221
index XXXXXXX..XXXXXXX 100644
222
--- a/target/arm/translate.c
223
+++ b/target/arm/translate.c
224
@@ -XXX,XX +XXX,XX @@ static void gen_exception(int excp, uint32_t syndrome)
225
tcg_constant_i32(syndrome));
226
}
227
228
-static void gen_exception_insn_el_v(DisasContext *s, uint64_t pc, int excp,
229
- uint32_t syn, TCGv_i32 tcg_el)
230
+static void gen_exception_insn_el_v(DisasContext *s, target_long pc_diff,
231
+ int excp, uint32_t syn, TCGv_i32 tcg_el)
232
{
233
if (s->aarch64) {
234
- gen_a64_update_pc(s, pc - s->pc_curr);
235
+ gen_a64_update_pc(s, pc_diff);
236
} else {
237
gen_set_condexec(s);
238
- gen_update_pc(s, pc - s->pc_curr);
239
+ gen_update_pc(s, pc_diff);
240
}
241
gen_exception_el_v(excp, syn, tcg_el);
242
s->base.is_jmp = DISAS_NORETURN;
243
}
244
245
-void gen_exception_insn_el(DisasContext *s, uint64_t pc, int excp,
246
+void gen_exception_insn_el(DisasContext *s, target_long pc_diff, int excp,
247
uint32_t syn, uint32_t target_el)
248
{
249
- gen_exception_insn_el_v(s, pc, excp, syn, tcg_constant_i32(target_el));
250
+ gen_exception_insn_el_v(s, pc_diff, excp, syn,
251
+ tcg_constant_i32(target_el));
252
}
253
254
-void gen_exception_insn(DisasContext *s, uint64_t pc, int excp, uint32_t syn)
255
+void gen_exception_insn(DisasContext *s, target_long pc_diff,
256
+ int excp, uint32_t syn)
257
{
258
if (s->aarch64) {
259
- gen_a64_update_pc(s, pc - s->pc_curr);
260
+ gen_a64_update_pc(s, pc_diff);
261
} else {
262
gen_set_condexec(s);
263
- gen_update_pc(s, pc - s->pc_curr);
264
+ gen_update_pc(s, pc_diff);
265
}
266
gen_exception(excp, syn);
267
s->base.is_jmp = DISAS_NORETURN;
268
@@ -XXX,XX +XXX,XX @@ static void gen_exception_bkpt_insn(DisasContext *s, uint32_t syn)
269
void unallocated_encoding(DisasContext *s)
270
{
271
/* Unallocated and reserved encodings are uncategorized */
272
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
273
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
274
}
275
276
/* Force a TB lookup after an instruction that changes the CPU state. */
277
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
278
tcg_el = tcg_constant_i32(3);
279
}
280
281
- gen_exception_insn_el_v(s, s->pc_curr, EXCP_UDEF,
282
+ gen_exception_insn_el_v(s, 0, EXCP_UDEF,
283
syn_uncategorized(), tcg_el);
284
tcg_temp_free_i32(tcg_el);
285
return false;
286
@@ -XXX,XX +XXX,XX @@ static bool msr_banked_access_decode(DisasContext *s, int r, int sysm, int rn,
287
288
undef:
289
/* If we get here then some access check did not pass */
290
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_uncategorized());
291
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_uncategorized());
292
return false;
293
}
294
295
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
296
* For the UNPREDICTABLE cases we choose to UNDEF.
297
*/
298
if (s->current_el == 1 && !s->ns && mode == ARM_CPU_MODE_MON) {
299
- gen_exception_insn_el(s, s->pc_curr, EXCP_UDEF,
300
- syn_uncategorized(), 3);
301
+ gen_exception_insn_el(s, 0, EXCP_UDEF, syn_uncategorized(), 3);
302
return;
303
}
304
305
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
306
* Do the check-and-raise-exception by hand.
307
*/
308
if (s->fp_excp_el) {
309
- gen_exception_insn_el(s, s->pc_curr, EXCP_NOCP,
310
+ gen_exception_insn_el(s, 0, EXCP_NOCP,
311
syn_uncategorized(), s->fp_excp_el);
312
return true;
49
}
313
}
50
break;
314
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
51
case 0x10: /* End Of Interrupt */
315
tmp = load_cpu_field(v7m.ltpsize);
52
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
316
tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
53
case 0xd0: case 0xd4: case 0xd8: case 0xdc:
317
tcg_temp_free_i32(tmp);
54
{
318
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
55
int regno = (offset - 0xd0) / 4;
319
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
56
+ int nr_aprs = gic_is_vcpu(cpu) ? GIC_VIRT_NR_APRS : GIC_NR_APRS;
320
gen_set_label(skipexc);
57
321
}
58
- if (regno >= GIC_NR_APRS || s->revision != 2) {
322
59
+ if (regno >= nr_aprs || s->revision != 2) {
323
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
60
return MEMTX_OK;
324
* UsageFault exception.
61
}
325
*/
62
- if (gic_cpu_ns_access(s, cpu, attrs)) {
326
if (arm_dc_feature(s, ARM_FEATURE_M)) {
63
+ if (gic_is_vcpu(cpu)) {
327
- gen_exception_insn(s, s->pc_curr, EXCP_INVSTATE, syn_uncategorized());
64
+ s->h_apr[gic_get_vcpu_real_id(cpu)] = value;
328
+ gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
65
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
329
return;
66
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
330
}
67
gic_apr_write_ns_view(s, regno, cpu, value);
331
68
} else {
332
@@ -XXX,XX +XXX,XX @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
69
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
333
* Illegal execution state. This has priority over BTI
70
if (regno >= GIC_NR_APRS || s->revision != 2) {
334
* exceptions, but comes after instruction abort exceptions.
71
return MEMTX_OK;
335
*/
72
}
336
- gen_exception_insn(s, s->pc_curr, EXCP_UDEF, syn_illegalstate());
73
+ if (gic_is_vcpu(cpu)) {
337
+ gen_exception_insn(s, 0, EXCP_UDEF, syn_illegalstate());
74
+ return MEMTX_OK;
338
return;
75
+ }
339
}
76
if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
340
77
return MEMTX_OK;
341
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
78
}
342
* Illegal execution state. This has priority over BTI
343
* exceptions, but comes after instruction abort exceptions.
344
*/
345
- gen_exception_insn(dc, dc->pc_curr, EXCP_UDEF, syn_illegalstate());
346
+ gen_exception_insn(dc, 0, EXCP_UDEF, syn_illegalstate());
347
return;
348
}
349
350
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
351
*/
352
tcg_remove_ops_after(dc->insn_eci_rewind);
353
dc->condjmp = 0;
354
- gen_exception_insn(dc, dc->pc_curr, EXCP_INVSTATE,
355
- syn_uncategorized());
356
+ gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
357
}
358
359
arm_post_translate_insn(dc);
79
--
360
--
80
2.18.0
361
2.25.1
81
362
82
363
From: Luc Michel <luc.michel@greensocs.com>

An access to the CPU interface is non-secure if the current GIC instance
implements the security extensions, and the memory access is actually
non-secure. Until now, this was checked with tests such as
    if (s->security_extn && !attrs.secure) { ... }
in various places of the CPU interface code.

With the implementation of the virtualization extensions, those tests
must be updated to take into account whether we are in a vCPU interface
or not. This is because the exposed vCPU interface does not implement
security extensions.

This commit replaces all those tests with a call to the
gic_cpu_ns_access() function to check if the current access to the CPU
interface is non-secure. This function takes into account whether the
current CPU is a vCPU or not.

Note that this function is used only in the (v)CPU interface code path.
The distributor code path is left unchanged, as the distributor is not
exposed to vCPUs at all.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727095421.386-9-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 39 ++++++++++++++++++++++-----------------
 1 file changed, 22 insertions(+), 17 deletions(-)

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.
Since we always pass dc->pc_curr, fold the arithmetic to zero displacement.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c |  6 +++---
 target/arm/translate.c     | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)
31
14
32
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
15
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
33
index XXXXXXX..XXXXXXX 100644
16
index XXXXXXX..XXXXXXX 100644
34
--- a/hw/intc/arm_gic.c
17
--- a/target/arm/translate-a64.c
35
+++ b/hw/intc/arm_gic.c
18
+++ b/target/arm/translate-a64.c
36
@@ -XXX,XX +XXX,XX @@ static inline bool gic_has_groups(GICState *s)
19
@@ -XXX,XX +XXX,XX @@ static void gen_exception_internal(int excp)
37
return s->revision == 2 || s->security_extn;
20
gen_helper_exception_internal(cpu_env, tcg_constant_i32(excp));
38
}
21
}
39
22
40
+static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
23
-static void gen_exception_internal_insn(DisasContext *s, uint64_t pc, int excp)
41
+{
24
+static void gen_exception_internal_insn(DisasContext *s, int excp)
42
+ return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
25
{
43
+}
26
- gen_a64_update_pc(s, pc - s->pc_curr);
44
+
27
+ gen_a64_update_pc(s, 0);
45
/* TODO: Many places that call this routine could be optimized. */
28
gen_exception_internal(excp);
46
/* Update interrupt status after enabled or pending bits have been changed. */
29
s->base.is_jmp = DISAS_NORETURN;
47
static void gic_update(GICState *s)
30
}
48
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
31
@@ -XXX,XX +XXX,XX @@ static void disas_exc(DisasContext *s, uint32_t insn)
49
/* On a GIC without the security extensions, reading this register
32
* Secondly, "HLT 0xf000" is the A64 semihosting syscall instruction.
50
* behaves in the same way as a secure access to a GIC with them.
51
*/
33
*/
52
- bool secure = !s->security_extn || attrs.secure;
34
if (semihosting_enabled(s->current_el == 0) && imm16 == 0xf000) {
53
+ bool secure = !gic_cpu_ns_access(s, cpu, attrs);
35
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
54
36
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
55
if (group == 0 && !secure) {
37
} else {
56
/* Group0 interrupts hidden from Non-secure access */
38
unallocated_encoding(s);
57
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_get_priority(GICState *s, int cpu, int irq,
39
}
58
static void gic_set_priority_mask(GICState *s, int cpu, uint8_t pmask,
40
diff --git a/target/arm/translate.c b/target/arm/translate.c
59
MemTxAttrs attrs)
41
index XXXXXXX..XXXXXXX 100644
42
--- a/target/arm/translate.c
43
+++ b/target/arm/translate.c
44
@@ -XXX,XX +XXX,XX @@ static inline void gen_smc(DisasContext *s)
45
s->base.is_jmp = DISAS_SMC;
46
}
47
48
-static void gen_exception_internal_insn(DisasContext *s, uint32_t pc, int excp)
49
+static void gen_exception_internal_insn(DisasContext *s, int excp)
60
{
50
{
61
- if (s->security_extn && !attrs.secure) {
51
gen_set_condexec(s);
62
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
52
- gen_update_pc(s, pc - s->pc_curr);
63
if (s->priority_mask[cpu] & 0x80) {
53
+ gen_update_pc(s, 0);
64
/* Priority Mask in upper half */
54
gen_exception_internal(excp);
65
pmask = 0x80 | (pmask >> 1);
55
s->base.is_jmp = DISAS_NORETURN;
66
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_priority_mask(GICState *s, int cpu, MemTxAttrs attrs)
56
}
67
{
57
@@ -XXX,XX +XXX,XX @@ static inline void gen_hlt(DisasContext *s, int imm)
68
uint32_t pmask = s->priority_mask[cpu];
58
*/
69
59
if (semihosting_enabled(s->current_el != 0) &&
70
- if (s->security_extn && !attrs.secure) {
60
(imm == (s->thumb ? 0x3c : 0xf000))) {
71
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
61
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
72
if (pmask & 0x80) {
62
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
73
/* Priority Mask in upper half, return Non-secure view */
74
pmask = (pmask << 1) & 0xff;
75
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_get_cpu_control(GICState *s, int cpu, MemTxAttrs attrs)
76
{
77
uint32_t ret = s->cpu_ctlr[cpu];
78
79
- if (s->security_extn && !attrs.secure) {
80
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
81
/* Construct the NS banked view of GICC_CTLR from the correct
82
* bits of the S banked view. We don't need to move the bypass
83
* control bits because we don't implement that (IMPDEF) part
84
@@ -XXX,XX +XXX,XX @@ static void gic_set_cpu_control(GICState *s, int cpu, uint32_t value,
85
{
86
uint32_t mask;
87
88
- if (s->security_extn && !attrs.secure) {
89
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
90
/* The NS view can only write certain bits in the register;
91
* the rest are unchanged
92
*/
93
@@ -XXX,XX +XXX,XX @@ static uint8_t gic_get_running_priority(GICState *s, int cpu, MemTxAttrs attrs)
94
return 0xff;
95
}
96
97
- if (s->security_extn && !attrs.secure) {
98
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
99
if (s->running_priority[cpu] & 0x80) {
100
/* Running priority in upper half of range: return the Non-secure
101
* view of the priority.
102
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)
103
/* Before GICv2 prio-drop and deactivate are not separable */
104
return false;
105
}
106
- if (s->security_extn && !attrs.secure) {
107
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
108
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE_NS;
109
}
110
return s->cpu_ctlr[cpu] & GICC_CTLR_EOIMODE;
111
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
112
return;
63
return;
113
}
64
}
114
65
115
- if (s->security_extn && !attrs.secure && !group) {
66
@@ -XXX,XX +XXX,XX @@ static bool trans_BKPT(DisasContext *s, arg_BKPT *a)
116
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
67
if (arm_dc_feature(s, ARM_FEATURE_M) &&
117
DPRINTF("Non-secure DI for Group0 interrupt %d ignored\n", irq);
68
semihosting_enabled(s->current_el == 0) &&
118
return;
69
(a->imm == 0xab)) {
70
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
71
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
72
} else {
73
gen_exception_bkpt_insn(s, syn_aa32_bkpt(a->imm, false));
119
}
74
}
120
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
75
@@ -XXX,XX +XXX,XX @@ static bool trans_SVC(DisasContext *s, arg_SVC *a)
121
76
if (!arm_dc_feature(s, ARM_FEATURE_M) &&
122
group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
77
semihosting_enabled(s->current_el == 0) &&
123
78
(a->imm == semihost_imm)) {
124
- if (s->security_extn && !attrs.secure && !group) {
79
- gen_exception_internal_insn(s, s->pc_curr, EXCP_SEMIHOST);
125
+ if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
80
+ gen_exception_internal_insn(s, EXCP_SEMIHOST);
126
DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
81
} else {
127
return;
82
gen_update_pc(s, curr_insn_len(s));
128
}
83
s->svc_imm = a->imm;
129
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
130
*data = gic_get_priority_mask(s, cpu, attrs);
131
break;
132
case 0x08: /* Binary Point */
133
- if (s->security_extn && !attrs.secure) {
134
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
135
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
136
/* NS view of BPR when CBPR is 1 */
137
*data = MIN(s->bpr[cpu] + 1, 7);
138
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
139
* With security extensions, secure access: ABPR (alias of NS BPR)
140
* With security extensions, nonsecure access: RAZ/WI
141
*/
142
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
143
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
144
*data = 0;
145
} else {
146
*data = s->abpr[cpu];
147
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
148
149
if (regno >= GIC_NR_APRS || s->revision != 2) {
150
*data = 0;
151
- } else if (s->security_extn && !attrs.secure) {
152
+ } else if (gic_cpu_ns_access(s, cpu, attrs)) {
153
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
154
*data = gic_apr_ns_view(s, regno, cpu);
155
} else {
156
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_read(GICState *s, int cpu, int offset,
157
int regno = (offset - 0xe0) / 4;
158
159
if (regno >= GIC_NR_APRS || s->revision != 2 || !gic_has_groups(s) ||
160
- (s->security_extn && !attrs.secure)) {
161
+ gic_cpu_ns_access(s, cpu, attrs)) {
162
*data = 0;
163
} else {
164
*data = s->nsapr[regno][cpu];
165
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
166
gic_set_priority_mask(s, cpu, value, attrs);
167
break;
168
case 0x08: /* Binary Point */
169
- if (s->security_extn && !attrs.secure) {
170
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
171
if (s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) {
172
/* WI when CBPR is 1 */
173
return MEMTX_OK;
174
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
175
gic_complete_irq(s, cpu, value & 0x3ff, attrs);
176
return MEMTX_OK;
177
case 0x1c: /* Aliased Binary Point */
178
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
179
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
180
/* unimplemented, or NS access: RAZ/WI */
181
return MEMTX_OK;
182
} else {
183
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
184
if (regno >= GIC_NR_APRS || s->revision != 2) {
185
return MEMTX_OK;
186
}
187
- if (s->security_extn && !attrs.secure) {
188
+ if (gic_cpu_ns_access(s, cpu, attrs)) {
189
/* NS view of GICC_APR<n> is the top half of GIC_NSAPR<n> */
190
gic_apr_write_ns_view(s, regno, cpu, value);
191
} else {
192
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
193
if (regno >= GIC_NR_APRS || s->revision != 2) {
194
return MEMTX_OK;
195
}
196
- if (!gic_has_groups(s) || (s->security_extn && !attrs.secure)) {
197
+ if (!gic_has_groups(s) || (gic_cpu_ns_access(s, cpu, attrs))) {
198
return MEMTX_OK;
199
}
200
s->nsapr[regno][cpu] = value;
201
--
84
--
202
2.18.0
85
2.25.1
203
86
204
87
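The new predicate makes the three access classes explicit: secure access,
non-secure access, and vCPU access (which is never non-secure, since the
virtual interface implements no security extensions). A standalone sketch of
that behaviour, with simplified stand-in types rather than the real QEMU
structures (illustrative only):

    #include <assert.h>
    #include <stdbool.h>

    /* Simplified stand-ins for GICState and MemTxAttrs. */
    typedef struct { bool security_extn; } GICState;
    typedef struct { bool secure; } MemTxAttrs;

    #define GIC_NCPU 8
    static bool gic_is_vcpu(int cpu) { return cpu >= GIC_NCPU; }

    static bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
    {
        return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
    }

    int main(void)
    {
        GICState s = { .security_extn = true };
        /* Physical CPU, non-secure transaction: treated as a NS access. */
        assert(gic_cpu_ns_access(&s, 0, (MemTxAttrs){ .secure = false }));
        /* Physical CPU, secure transaction: not NS. */
        assert(!gic_cpu_ns_access(&s, 0, (MemTxAttrs){ .secure = true }));
        /* vCPU interface: never NS, regardless of the transaction attrs. */
        assert(!gic_cpu_ns_access(&s, GIC_NCPU, (MemTxAttrs){ .secure = false }));
        return 0;
    }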
From: Luc Michel <luc.michel@greensocs.com>

Add the gic_update_virt() function to update the vCPU interface states
and raise vIRQ and vFIQ as needed. This commit renames gic_update() to
gic_update_internal() and generalizes it to handle both cases, with a
`virt' parameter to track whether we are updating the CPU or vCPU
interfaces.

The main difference between CPU and vCPU is the way we select the best
IRQ. This part has been split into the gic_get_best_(v)irq functions.
For the virt case, the LRs are iterated to find the best candidate.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-17-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 175 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 136 insertions(+), 39 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline bool gic_cpu_ns_access(GICState *s, int cpu, MemTxAttrs attrs)
     return !gic_is_vcpu(cpu) && s->security_extn && !attrs.secure;
 }

+static inline void gic_get_best_irq(GICState *s, int cpu,
+                                    int *best_irq, int *best_prio, int *group)
+{
+    int irq;
+    int cm = 1 << cpu;
+
+    *best_irq = 1023;
+    *best_prio = 0x100;
+
+    for (irq = 0; irq < s->num_irq; irq++) {
+        if (GIC_DIST_TEST_ENABLED(irq, cm) && gic_test_pending(s, irq, cm) &&
+            (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
+            (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
+            if (GIC_DIST_GET_PRIORITY(irq, cpu) < *best_prio) {
+                *best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
+                *best_irq = irq;
+            }
+        }
+    }
+
+    if (*best_irq < 1023) {
+        *group = GIC_DIST_TEST_GROUP(*best_irq, cm);
+    }
+}
+
+static inline void gic_get_best_virq(GICState *s, int cpu,
+                                     int *best_irq, int *best_prio, int *group)
+{
+    int lr_idx = 0;
+
+    *best_irq = 1023;
+    *best_prio = 0x100;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t lr_entry = s->h_lr[lr_idx][cpu];
+        int state = GICH_LR_STATE(lr_entry);
+
+        if (state == GICH_LR_STATE_PENDING) {
+            int prio = GICH_LR_PRIORITY(lr_entry);
+
+            if (prio < *best_prio) {
+                *best_prio = prio;
+                *best_irq = GICH_LR_VIRT_ID(lr_entry);
+                *group = GICH_LR_GROUP(lr_entry);
+            }
+        }
+    }
+}
+
+/* Return true if IRQ signaling is enabled for the given cpu and at least one
+ * of the given groups:
+ *   - in the non-virt case, the distributor must be enabled for one of the
+ *   given groups
+ *   - in the virt case, the virtual interface must be enabled.
+ *   - in all cases, the (v)CPU interface must be enabled for one of the given
+ *   groups.
+ */
+static inline bool gic_irq_signaling_enabled(GICState *s, int cpu, bool virt,
+                                             int group_mask)
+{
+    if (!virt && !(s->ctlr & group_mask)) {
+        return false;
+    }
+
+    if (virt && !(s->h_hcr[cpu] & R_GICH_HCR_EN_MASK)) {
+        return false;
+    }
+
+    if (!(s->cpu_ctlr[cpu] & group_mask)) {
+        return false;
+    }
+
+    return true;
+}
+
 /* TODO: Many places that call this routine could be optimized. */
 /* Update interrupt status after enabled or pending bits have been changed. */
-static void gic_update(GICState *s)
+static inline void gic_update_internal(GICState *s, bool virt)
 {
     int best_irq;
     int best_prio;
-    int irq;
     int irq_level, fiq_level;
-    int cpu;
-    int cm;
+    int cpu, cpu_iface;
+    int group = 0;
+    qemu_irq *irq_lines = virt ? s->parent_virq : s->parent_irq;
+    qemu_irq *fiq_lines = virt ? s->parent_vfiq : s->parent_fiq;

     for (cpu = 0; cpu < s->num_cpu; cpu++) {
-        cm = 1 << cpu;
-        s->current_pending[cpu] = 1023;
-        if (!(s->ctlr & (GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1))
-            || !(s->cpu_ctlr[cpu] & (GICC_CTLR_EN_GRP0 | GICC_CTLR_EN_GRP1))) {
-            qemu_irq_lower(s->parent_irq[cpu]);
-            qemu_irq_lower(s->parent_fiq[cpu]);
+        cpu_iface = virt ? (cpu + GIC_NCPU) : cpu;
+
+        s->current_pending[cpu_iface] = 1023;
+        if (!gic_irq_signaling_enabled(s, cpu, virt,
+                                       GICD_CTLR_EN_GRP0 | GICD_CTLR_EN_GRP1)) {
+            qemu_irq_lower(irq_lines[cpu]);
+            qemu_irq_lower(fiq_lines[cpu]);
             continue;
         }
-        best_prio = 0x100;
-        best_irq = 1023;
-        for (irq = 0; irq < s->num_irq; irq++) {
-            if (GIC_DIST_TEST_ENABLED(irq, cm) &&
-                gic_test_pending(s, irq, cm) &&
-                (!GIC_DIST_TEST_ACTIVE(irq, cm)) &&
-                (irq < GIC_INTERNAL || GIC_DIST_TARGET(irq) & cm)) {
-                if (GIC_DIST_GET_PRIORITY(irq, cpu) < best_prio) {
-                    best_prio = GIC_DIST_GET_PRIORITY(irq, cpu);
-                    best_irq = irq;
-                }
-            }
+
+        if (virt) {
+            gic_get_best_virq(s, cpu, &best_irq, &best_prio, &group);
+        } else {
+            gic_get_best_irq(s, cpu, &best_irq, &best_prio, &group);
         }

         if (best_irq != 1023) {
             trace_gic_update_bestirq(cpu, best_irq, best_prio,
-                s->priority_mask[cpu], s->running_priority[cpu]);
+                s->priority_mask[cpu_iface], s->running_priority[cpu_iface]);
         }

         irq_level = fiq_level = 0;

-        if (best_prio < s->priority_mask[cpu]) {
-            s->current_pending[cpu] = best_irq;
-            if (best_prio < s->running_priority[cpu]) {
-                int group = GIC_DIST_TEST_GROUP(best_irq, cm);
-
-                if (extract32(s->ctlr, group, 1) &&
-                    extract32(s->cpu_ctlr[cpu], group, 1)) {
-                    if (group == 0 && s->cpu_ctlr[cpu] & GICC_CTLR_FIQ_EN) {
+        if (best_prio < s->priority_mask[cpu_iface]) {
+            s->current_pending[cpu_iface] = best_irq;
+            if (best_prio < s->running_priority[cpu_iface]) {
+                if (gic_irq_signaling_enabled(s, cpu, virt, 1 << group)) {
+                    if (group == 0 &&
+                        s->cpu_ctlr[cpu_iface] & GICC_CTLR_FIQ_EN) {
                         DPRINTF("Raised pending FIQ %d (cpu %d)\n",
-                                best_irq, cpu);
+                                best_irq, cpu_iface);
                         fiq_level = 1;
-                        trace_gic_update_set_irq(cpu, "fiq", fiq_level);
+                        trace_gic_update_set_irq(cpu, virt ? "vfiq" : "fiq",
+                                                 fiq_level);
                     } else {
                         DPRINTF("Raised pending IRQ %d (cpu %d)\n",
-                                best_irq, cpu);
+                                best_irq, cpu_iface);
                         irq_level = 1;
-                        trace_gic_update_set_irq(cpu, "irq", irq_level);
+                        trace_gic_update_set_irq(cpu, virt ? "virq" : "irq",
+                                                 irq_level);
                     }
                 }
             }
         }

-        qemu_set_irq(s->parent_irq[cpu], irq_level);
-        qemu_set_irq(s->parent_fiq[cpu], fiq_level);
+        qemu_set_irq(irq_lines[cpu], irq_level);
+        qemu_set_irq(fiq_lines[cpu], fiq_level);
     }
 }

+static void gic_update(GICState *s)
+{
+    gic_update_internal(s, false);
+}
+
 /* Return true if this LR is empty, i.e. the corresponding bit
  * in ELRSR is set.
  */
@@ -XXX,XX +XXX,XX @@ static inline bool gic_lr_entry_is_eoi(uint32_t entry)
         && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
 }

+static void gic_update_virt(GICState *s)
+{
+    gic_update_internal(s, true);
+}
+
 static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
     }

-    gic_update(s);
+    if (gic_is_vcpu(cpu)) {
+        gic_update_virt(s);
+    } else {
+        gic_update(s);
+    }
     DPRINTF("ACK %d\n", irq);
     return ret;
 }
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
          */
         int rcpu = gic_get_vcpu_real_id(cpu);
         s->h_hcr[rcpu] += 1 << R_GICH_HCR_EOICount_SHIFT;
+
+        /* Update the virtual interface in case a maintenance interrupt should
+         * be raised.
+         */
+        gic_update_virt(s);
         return;
     }
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         }

+        gic_update_virt(s);
         return;
     }

@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_cpu_write(GICState *s, int cpu, int offset,
                       "gic_cpu_write: Bad offset %x\n", (int)offset);
         return MEMTX_OK;
     }
-    gic_update(s);
+
+    if (gic_is_vcpu(cpu)) {
+        gic_update_virt(s);
+    } else {
+        gic_update(s);
+    }
+
     return MEMTX_OK;
 }

@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
         return MEMTX_OK;
     }

+    gic_update_virt(s);
     return MEMTX_OK;
 }
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static uint32_t read_pc(DisasContext *s)
     return s->pc_curr + (s->thumb ? 4 : 8);
 }

+/* The pc_curr difference for an architectural jump. */
+static target_long jmp_diff(DisasContext *s, target_long diff)
+{
+    return diff + (s->thumb ? 4 : 8);
+}
+
 /* Set a variable to the value of a CPU register. */
 void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
  * cpu_loop_exec. Any live exit_requests will be processed as we
  * enter the next TB.
  */
-static void gen_goto_tb(DisasContext *s, int n, int diff)
+static void gen_goto_tb(DisasContext *s, int n, target_long diff)
 {
     target_ulong dest = s->pc_curr + diff;

@@ -XXX,XX +XXX,XX @@ static void gen_goto_tb(DisasContext *s, int n, int diff)
 }

 /* Jump, specifying which TB number to use if we gen_goto_tb() */
-static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
+static void gen_jmp_tb(DisasContext *s, target_long diff, int tbno)
 {
-    int diff = dest - s->pc_curr;
-
     if (unlikely(s->ss_active)) {
         /* An indirect jump so that we still trigger the debug exception. */
         gen_update_pc(s, diff);
@@ -XXX,XX +XXX,XX @@ static inline void gen_jmp_tb(DisasContext *s, uint32_t dest, int tbno)
     }
 }

-static inline void gen_jmp(DisasContext *s, uint32_t dest)
+static inline void gen_jmp(DisasContext *s, target_long diff)
 {
-    gen_jmp_tb(s, dest, 0);
+    gen_jmp_tb(s, diff, 0);
 }

 static inline void gen_mulxy(TCGv_i32 t0, TCGv_i32 t1, int x, int y)
@@ -XXX,XX +XXX,XX @@ static bool trans_CLRM(DisasContext *s, arg_CLRM *a)

 static bool trans_B(DisasContext *s, arg_i *a)
 {
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)
         return true;
     }
     arm_skip_unless(s, a->cond);
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }

 static bool trans_BL(DisasContext *s, arg_i *a)
 {
     tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
     }
     tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
     store_cpu_field_constant(!s->thumb, thumb);
-    gen_jmp(s, (read_pc(s) & ~3) + a->imm);
+    /* This jump is computed from an aligned PC: subtract off the low bits. */
+    gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
      * when we take this upcoming exit from this TB, so gen_jmp_tb() is OK.
      */
-    gen_jmp_tb(s, s->base.pc_next, 1);
+    gen_jmp_tb(s, curr_insn_len(s), 1);

     gen_set_label(nextlabel);
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)

     if (a->f) {
         /* Loop-forever: just jump back to the loop start */
-        gen_jmp(s, read_pc(s) - a->imm);
+        gen_jmp(s, jmp_diff(s, -a->imm));
         return true;
     }

@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
         tcg_temp_free_i32(decr);
     }
     /* Jump back to the loop start */
-    gen_jmp(s, read_pc(s) - a->imm);
+    gen_jmp(s, jmp_diff(s, -a->imm));

     gen_set_label(loopend);
     if (a->tp) {
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
         store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
     }
     /* End TB, continuing to following insn */
-    gen_jmp_tb(s, s->base.pc_next, 1);
+    gen_jmp_tb(s, curr_insn_len(s), 1);
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
     tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
                         tmp, 0, s->condlabel);
     tcg_temp_free_i32(tmp);
-    gen_jmp(s, read_pc(s) + a->imm);
+    gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }
--
2.25.1
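The jmp_diff() rewrite only moves where the architectural PC offset (+4 for
Thumb, +8 for ARM) is added; old and new computations of the branch target
agree. A quick standalone check with hypothetical sample values (illustrative
only, not QEMU code):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t read_pc(uint32_t pc_curr, bool thumb)
    {
        return pc_curr + (thumb ? 4 : 8);   /* old helper: absolute PC */
    }

    static int32_t jmp_diff(int32_t diff, bool thumb)
    {
        return diff + (thumb ? 4 : 8);      /* new helper: relative displacement */
    }

    int main(void)
    {
        uint32_t pc_curr = 0x8000;          /* assumed example address */
        int32_t imm = 0x40;
        /* old: dest = read_pc(s) + a->imm;  new: dest = pc_curr + jmp_diff(imm) */
        assert(read_pc(pc_curr, true) + imm == pc_curr + jmp_diff(imm, true));
        /* backward branches (trans_LE) use a negated immediate */
        assert(read_pc(pc_curr, false) - imm == pc_curr + jmp_diff(-imm, false));
        return 0;
    }

Expressing targets as displacements from pc_curr is what later allows the
generated code to avoid absolute addresses entirely under TARGET_TB_PCREL.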
From: Luc Michel <luc.michel@greensocs.com>

Implement the read and write functions for the virtual interface of the
virtualization extensions in the GICv2.

One mirror region per CPU is also created, which maps to that specific
CPU id. This is required by the GIC architecture specification.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-16-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 235 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 233 insertions(+), 2 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_update(GICState *s)
     }
 }

+/* Return true if this LR is empty, i.e. the corresponding bit
+ * in ELRSR is set.
+ */
+static inline bool gic_lr_entry_is_free(uint32_t entry)
+{
+    return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
+        && (GICH_LR_HW(entry) || !GICH_LR_EOI(entry));
+}
+
+/* Return true if this LR should trigger an EOI maintenance interrupt, i.e. the
+ * corresponding bit in EISR is set.
+ */
+static inline bool gic_lr_entry_is_eoi(uint32_t entry)
+{
+    return (GICH_LR_STATE(entry) == GICH_LR_STATE_INVALID)
+        && !GICH_LR_HW(entry) && GICH_LR_EOI(entry);
+}
+
 static void gic_set_irq_11mpcore(GICState *s, int irq, int level,
                                  int cm, int target)
 {
@@ -XXX,XX +XXX,XX @@ static MemTxResult gic_thisvcpu_write(void *opaque, hwaddr addr,
     return gic_cpu_write(s, gic_get_current_vcpu(s), addr, value, attrs);
 }

+static uint32_t gic_compute_eisr(GICState *s, int cpu, int lr_start)
+{
+    int lr_idx;
+    uint32_t ret = 0;
+
+    for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+        ret = deposit32(ret, lr_idx - lr_start, 1,
+                        gic_lr_entry_is_eoi(*entry));
+    }
+
+    return ret;
+}
+
+static uint32_t gic_compute_elrsr(GICState *s, int cpu, int lr_start)
+{
+    int lr_idx;
+    uint32_t ret = 0;
+
+    for (lr_idx = lr_start; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+        ret = deposit32(ret, lr_idx - lr_start, 1,
+                        gic_lr_entry_is_free(*entry));
+    }
+
+    return ret;
+}
+
+static void gic_vmcr_write(GICState *s, uint32_t value, MemTxAttrs attrs)
+{
+    int vcpu = gic_get_current_vcpu(s);
+    uint32_t ctlr;
+    uint32_t abpr;
+    uint32_t bpr;
+    uint32_t prio_mask;
+
+    ctlr = FIELD_EX32(value, GICH_VMCR, VMCCtlr);
+    abpr = FIELD_EX32(value, GICH_VMCR, VMABP);
+    bpr = FIELD_EX32(value, GICH_VMCR, VMBP);
+    prio_mask = FIELD_EX32(value, GICH_VMCR, VMPriMask) << 3;
+
+    gic_set_cpu_control(s, vcpu, ctlr, attrs);
+    s->abpr[vcpu] = MAX(abpr, GIC_VIRT_MIN_ABPR);
+    s->bpr[vcpu] = MAX(bpr, GIC_VIRT_MIN_BPR);
+    gic_set_priority_mask(s, vcpu, prio_mask, attrs);
+}
+
+static MemTxResult gic_hyp_read(void *opaque, int cpu, hwaddr addr,
+                                uint64_t *data, MemTxAttrs attrs)
+{
+    GICState *s = ARM_GIC(opaque);
+    int vcpu = cpu + GIC_NCPU;
+
+    switch (addr) {
+    case A_GICH_HCR: /* Hypervisor Control */
+        *data = s->h_hcr[cpu];
+        break;
+
+    case A_GICH_VTR: /* VGIC Type */
+        *data = FIELD_DP32(0, GICH_VTR, ListRegs, s->num_lrs - 1);
+        *data = FIELD_DP32(*data, GICH_VTR, PREbits,
+                           GIC_VIRT_MAX_GROUP_PRIO_BITS - 1);
+        *data = FIELD_DP32(*data, GICH_VTR, PRIbits,
+                           (7 - GIC_VIRT_MIN_BPR) - 1);
+        break;
+
+    case A_GICH_VMCR: /* Virtual Machine Control */
+        *data = FIELD_DP32(0, GICH_VMCR, VMCCtlr,
+                           extract32(s->cpu_ctlr[vcpu], 0, 10));
+        *data = FIELD_DP32(*data, GICH_VMCR, VMABP, s->abpr[vcpu]);
+        *data = FIELD_DP32(*data, GICH_VMCR, VMBP, s->bpr[vcpu]);
+        *data = FIELD_DP32(*data, GICH_VMCR, VMPriMask,
+                           extract32(s->priority_mask[vcpu], 3, 5));
+        break;
+
+    case A_GICH_MISR: /* Maintenance Interrupt Status */
+        *data = s->h_misr[cpu];
+        break;
+
+    case A_GICH_EISR0: /* End of Interrupt Status 0 and 1 */
+    case A_GICH_EISR1:
+        *data = gic_compute_eisr(s, cpu, (addr - A_GICH_EISR0) * 8);
+        break;
+
+    case A_GICH_ELRSR0: /* Empty List Status 0 and 1 */
+    case A_GICH_ELRSR1:
+        *data = gic_compute_elrsr(s, cpu, (addr - A_GICH_ELRSR0) * 8);
+        break;
+
+    case A_GICH_APR: /* Active Priorities */
+        *data = s->h_apr[cpu];
+        break;
+
+    case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
+    {
+        int lr_idx = (addr - A_GICH_LR0) / 4;
+
+        if (lr_idx > s->num_lrs) {
+            *data = 0;
+        } else {
+            *data = s->h_lr[lr_idx][cpu];
+        }
+        break;
+    }
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "gic_hyp_read: Bad offset %" HWADDR_PRIx "\n", addr);
+        return MEMTX_OK;
+    }
+
+    return MEMTX_OK;
+}
+
+static MemTxResult gic_hyp_write(void *opaque, int cpu, hwaddr addr,
+                                 uint64_t value, MemTxAttrs attrs)
+{
+    GICState *s = ARM_GIC(opaque);
+    int vcpu = cpu + GIC_NCPU;
+
+    switch (addr) {
+    case A_GICH_HCR: /* Hypervisor Control */
+        s->h_hcr[cpu] = value & GICH_HCR_MASK;
+        break;
+
+    case A_GICH_VMCR: /* Virtual Machine Control */
+        gic_vmcr_write(s, value, attrs);
+        break;
+
+    case A_GICH_APR: /* Active Priorities */
+        s->h_apr[cpu] = value;
+        s->running_priority[vcpu] = gic_get_prio_from_apr_bits(s, vcpu);
+        break;
+
+    case A_GICH_LR0 ... A_GICH_LR63: /* List Registers */
+    {
+        int lr_idx = (addr - A_GICH_LR0) / 4;
+
+        if (lr_idx > s->num_lrs) {
+            return MEMTX_OK;
+        }
+
+        s->h_lr[lr_idx][cpu] = value & GICH_LR_MASK;
+        break;
+    }
+
+    default:
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "gic_hyp_write: Bad offset %" HWADDR_PRIx "\n", addr);
+        return MEMTX_OK;
+    }
+
+    return MEMTX_OK;
+}
+
+static MemTxResult gic_thiscpu_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
+                                        unsigned size, MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_hyp_read(s, gic_get_current_cpu(s), addr, data, attrs);
+}
+
+static MemTxResult gic_thiscpu_hyp_write(void *opaque, hwaddr addr,
+                                         uint64_t value, unsigned size,
+                                         MemTxAttrs attrs)
+{
+    GICState *s = (GICState *)opaque;
+
+    return gic_hyp_write(s, gic_get_current_cpu(s), addr, value, attrs);
+}
+
+static MemTxResult gic_do_hyp_read(void *opaque, hwaddr addr, uint64_t *data,
+                                   unsigned size, MemTxAttrs attrs)
+{
+    GICState **backref = (GICState **)opaque;
+    GICState *s = *backref;
+    int id = (backref - s->backref);
+
+    return gic_hyp_read(s, id, addr, data, attrs);
+}
+
+static MemTxResult gic_do_hyp_write(void *opaque, hwaddr addr,
+                                    uint64_t value, unsigned size,
+                                    MemTxAttrs attrs)
+{
+    GICState **backref = (GICState **)opaque;
+    GICState *s = *backref;
+    int id = (backref - s->backref);
+
+    return gic_hyp_write(s, id + GIC_NCPU, addr, value, attrs);
+
+}
+
 static const MemoryRegionOps gic_ops[2] = {
     {
         .read_with_attrs = gic_dist_read,
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_cpu_ops = {

 static const MemoryRegionOps gic_virt_ops[2] = {
     {
-        .read_with_attrs = NULL,
-        .write_with_attrs = NULL,
+        .read_with_attrs = gic_thiscpu_hyp_read,
+        .write_with_attrs = gic_thiscpu_hyp_write,
         .endianness = DEVICE_NATIVE_ENDIAN,
     },
     {
@@ -XXX,XX +XXX,XX @@ static const MemoryRegionOps gic_virt_ops[2] = {
     }
 };

+static const MemoryRegionOps gic_viface_ops = {
+    .read_with_attrs = gic_do_hyp_read,
+    .write_with_attrs = gic_do_hyp_write,
+    .endianness = DEVICE_NATIVE_ENDIAN,
+};
+
 static void arm_gic_realize(DeviceState *dev, Error **errp)
 {
     /* Device instance realize function for the GIC sysbus device */
@@ -XXX,XX +XXX,XX @@ static void arm_gic_realize(DeviceState *dev, Error **errp)
                               &s->backref[i], "gic_cpu", 0x100);
         sysbus_init_mmio(sbd, &s->cpuiomem[i+1]);
     }
+
+    /* Extra core-specific regions for virtual interfaces. This is required by
+     * the GICv2 specification.
+     */
+    if (s->virt_extn) {
+        for (i = 0; i < s->num_cpu; i++) {
+            memory_region_init_io(&s->vifaceiomem[i + 1], OBJECT(s),
+                                  &gic_viface_ops, &s->backref[i],
+                                  "gic_viface", 0x1000);
+            sysbus_init_mmio(sbd, &s->vifaceiomem[i + 1]);
+        }
+    }
+
 }

 static void arm_gic_class_init(ObjectClass *klass, void *data)
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate-a64.c | 41 +++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)
     }
 }

+static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
+{
+    tcg_gen_movi_i64(dest, s->pc_curr + diff);
+}
+
 void gen_a64_update_pc(DisasContext *s, target_long diff)
 {
-    tcg_gen_movi_i64(cpu_pc, s->pc_curr + diff);
+    gen_pc_plus_diff(s, cpu_pc, diff);
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_imm(DisasContext *s, uint32_t insn)

     if (insn & (1U << 31)) {
         /* BL Branch with link */
-        tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+        gen_pc_plus_diff(s, cpu_reg(s, 30), curr_insn_len(s));
     }

     /* B Branch / BL Branch with link */
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
         default:
             goto do_unallocated;
         }
-        gen_a64_set_pc(s, dst);
         /* BLR also needs to load return address */
         if (opc == 1) {
-            tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+            TCGv_i64 lr = cpu_reg(s, 30);
+            if (dst == lr) {
+                TCGv_i64 tmp = new_tmp_a64(s);
+                tcg_gen_mov_i64(tmp, dst);
+                dst = tmp;
+            }
+            gen_pc_plus_diff(s, lr, curr_insn_len(s));
         }
+        gen_a64_set_pc(s, dst);
         break;

     case 8: /* BRAA */
@@ -XXX,XX +XXX,XX @@ static void disas_uncond_b_reg(DisasContext *s, uint32_t insn)
         } else {
             dst = cpu_reg(s, rn);
         }
-        gen_a64_set_pc(s, dst);
         /* BLRAA also needs to load return address */
         if (opc == 9) {
-            tcg_gen_movi_i64(cpu_reg(s, 30), s->base.pc_next);
+            TCGv_i64 lr = cpu_reg(s, 30);
+            if (dst == lr) {
+                TCGv_i64 tmp = new_tmp_a64(s);
+                tcg_gen_mov_i64(tmp, dst);
+                dst = tmp;
+            }
+            gen_pc_plus_diff(s, lr, curr_insn_len(s));
         }
+        gen_a64_set_pc(s, dst);
         break;

     case 4: /* ERET */
@@ -XXX,XX +XXX,XX @@ static void disas_ld_lit(DisasContext *s, uint32_t insn)

     tcg_rt = cpu_reg(s, rt);

-    clean_addr = tcg_constant_i64(s->pc_curr + imm);
+    clean_addr = new_tmp_a64(s);
+    gen_pc_plus_diff(s, clean_addr, imm);
     if (is_vector) {
         do_fp_ld(s, rt, clean_addr, size);
     } else {
@@ -XXX,XX +XXX,XX @@ static void disas_ldst(DisasContext *s, uint32_t insn)
 static void disas_pc_rel_adr(DisasContext *s, uint32_t insn)
 {
     unsigned int page, rd;
-    uint64_t base;
-    uint64_t offset;
+    int64_t offset;

     page = extract32(insn, 31, 1);
     /* SignExtend(immhi:immlo) -> offset */
     offset = sextract64(insn, 5, 19);
     offset = offset << 2 | extract32(insn, 29, 2);
     rd = extract32(insn, 0, 5);
-    base = s->pc_curr;

     if (page) {
         /* ADRP (page based) */
-        base &= ~0xfff;
         offset <<= 12;
+        /* The page offset is ok for TARGET_TB_PCREL. */
+        offset -= s->pc_curr & 0xfff;
     }

-    tcg_gen_movi_i64(cpu_reg(s, rd), base + offset);
+    gen_pc_plus_diff(s, cpu_reg(s, rd), offset);
 }

 /*
--
2.25.1
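gic_compute_elrsr() and gic_compute_eisr() above both pack one status bit per
list register with deposit32(). A self-contained sketch of the same loop, with
a simplified one-flag LR encoding assumed in place of the real GICH_LR layout
(illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    /* Same semantics as QEMU's deposit32() for len < 32. */
    static uint32_t deposit32(uint32_t val, int start, int len, uint32_t field)
    {
        uint32_t mask = ((1u << len) - 1) << start;
        return (val & ~mask) | ((field << start) & mask);
    }

    int main(void)
    {
        /* Hypothetical per-LR 'free' flags standing in for gic_lr_entry_is_free(). */
        uint32_t lr_free[4] = { 1, 0, 1, 1 };
        uint32_t elrsr = 0;
        for (int i = 0; i < 4; i++) {
            elrsr = deposit32(elrsr, i, 1, lr_free[i]);
        }
        printf("ELRSR = 0x%x\n", elrsr);    /* prints 0xd (bits 0, 2, 3 set) */
        return 0;
    }

The lr_start parameter in the real code simply shifts the window so that
GICH_ELRSR1/GICH_EISR1 report list registers 32 and up.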
From: Luc Michel <luc.michel@greensocs.com>

Add some helper functions to gic_internal.h to get or change the state
of an IRQ. When the current CPU is not a vCPU, the call is forwarded to
the GIC distributor. Otherwise, it acts on the list register matching
the IRQ in the current CPU virtual interface.

gic_clear_active can have a side effect on the distributor, even in the
vCPU case, when the corresponding LR has the HW field set.

Use those functions in the CPU interface code path to prepare for the
vCPU interface implementation.

Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727095421.386-10-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 83 ++++++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gic.c      | 32 +++++++---------
 2 files changed, 97 insertions(+), 18 deletions(-)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
 #define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
 #define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))

+#define GICH_LR_CLEAR_PENDING(entry) \
+        ((entry) &= ~(GICH_LR_STATE_PENDING << R_GICH_LR0_State_SHIFT))
+#define GICH_LR_SET_ACTIVE(entry) \
+        ((entry) |= (GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
+#define GICH_LR_CLEAR_ACTIVE(entry) \
+        ((entry) &= ~(GICH_LR_STATE_ACTIVE << R_GICH_LR0_State_SHIFT))
+
 /* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
  * GICv2 and GICv2 with security extensions:
  */
@@ -XXX,XX +XXX,XX @@ static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
     g_assert_not_reached();
 }

+static inline bool gic_test_group(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        return GICH_LR_GROUP(*entry);
+    } else {
+        return GIC_DIST_TEST_GROUP(irq, 1 << cpu);
+    }
+}
+
+static inline void gic_clear_pending(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_CLEAR_PENDING(*entry);
+    } else {
+        /* Clear pending state for both level and edge triggered
+         * interrupts. (level triggered interrupts with an active line
+         * remain pending, see gic_test_pending)
+         */
+        GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
+                                                             : (1 << cpu));
+    }
+}
+
+static inline void gic_set_active(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_SET_ACTIVE(*entry);
+    } else {
+        GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
+    }
+}
+
+static inline void gic_clear_active(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        GICH_LR_CLEAR_ACTIVE(*entry);
+
+        if (GICH_LR_HW(*entry)) {
+            /* Hardware interrupt. We must forward the deactivation request to
+             * the distributor.
+             */
+            int phys_irq = GICH_LR_PHYS_ID(*entry);
+            int rcpu = gic_get_vcpu_real_id(cpu);
+
+            if (phys_irq < GIC_NR_SGIS || phys_irq >= GIC_MAXIRQ) {
+                /* UNPREDICTABLE behaviour, we choose to ignore the request */
+                return;
+            }
+
+            /* This is equivalent to a NS write to DIR on the physical CPU
+             * interface. Hence group0 interrupt deactivation is ignored if
+             * the GIC is secure.
+             */
+            if (!s->security_extn || GIC_DIST_TEST_GROUP(phys_irq, 1 << rcpu)) {
+                GIC_DIST_CLEAR_ACTIVE(phys_irq, 1 << rcpu);
+            }
+        }
+    } else {
+        GIC_DIST_CLEAR_ACTIVE(irq, 1 << cpu);
+    }
+}
+
+static inline int gic_get_priority(GICState *s, int irq, int cpu)
+{
+    if (gic_is_vcpu(cpu)) {
+        uint32_t *entry = gic_get_lr_entry(s, irq, cpu);
+        return GICH_LR_PRIORITY(*entry);
+    } else {
+        return GIC_DIST_GET_PRIORITY(irq, cpu);
+    }
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static uint16_t gic_get_current_pending_irq(GICState *s, int cpu,
     uint16_t pending_irq = s->current_pending[cpu];

     if (pending_irq < GIC_MAXIRQ && gic_has_groups(s)) {
-        int group = GIC_DIST_TEST_GROUP(pending_irq, (1 << cpu));
+        int group = gic_test_group(s, pending_irq, cpu);
+
         /* On a GIC without the security extensions, reading this register
          * behaves in the same way as a secure access to a GIC with them.
          */
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)

     if (gic_has_groups(s) &&
         !(s->cpu_ctlr[cpu] & GICC_CTLR_CBPR) &&
-        GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
+        gic_test_group(s, irq, cpu)) {
         bpr = s->abpr[cpu] - 1;
         assert(bpr >= 0);
     } else {
@@ -XXX,XX +XXX,XX @@ static int gic_get_group_priority(GICState *s, int cpu, int irq)
      */
     mask = ~0U << ((bpr & 7) + 1);

-    return GIC_DIST_GET_PRIORITY(irq, cpu) & mask;
+    return gic_get_priority(s, irq, cpu) & mask;
 }

 static void gic_activate_irq(GICState *s, int cpu, int irq)
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
     int regno = preemption_level / 32;
     int bitno = preemption_level % 32;

-    if (gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, (1 << cpu))) {
+    if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
         s->nsapr[regno][cpu] |= (1 << bitno);
     } else {
         s->apr[regno][cpu] |= (1 << bitno);
     }

     s->running_priority[cpu] = prio;
-    GIC_DIST_SET_ACTIVE(irq, 1 << cpu);
+    gic_set_active(s, irq, cpu);
 }

 static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         return irq;
     }

-    if (GIC_DIST_GET_PRIORITY(irq, cpu) >= s->running_priority[cpu]) {
+    if (gic_get_priority(s, irq, cpu) >= s->running_priority[cpu]) {
         DPRINTF("ACK, pending interrupt (%d) has insufficient priority\n", irq);
         return 1023;
     }
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         /* Clear pending flags for both level and edge triggered interrupts.
          * Level triggered IRQs will be reasserted once they become inactive.
          */
-        GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
-                                                             : cm);
+        gic_clear_pending(s, irq, cpu);
         ret = irq;
     } else {
         if (irq < GIC_NR_SGIS) {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
             src = ctz32(s->sgi_pending[irq][cpu]);
             s->sgi_pending[irq][cpu] &= ~(1 << src);
             if (s->sgi_pending[irq][cpu] == 0) {
-                GIC_DIST_CLEAR_PENDING(irq,
-                                       GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
-                                                                : cm);
+                gic_clear_pending(s, irq, cpu);
             }
             ret = irq | ((src & 0x7) << 10);
         } else {
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
              * interrupts. (level triggered interrupts with an active line
              * remain pending, see gic_test_pending)
              */
-            GIC_DIST_CLEAR_PENDING(irq, GIC_DIST_TEST_MODEL(irq) ? ALL_CPU_MASK
-                                                                 : cm);
+            gic_clear_pending(s, irq, cpu);
             ret = irq;
         }
     }
@@ -XXX,XX +XXX,XX @@ static bool gic_eoi_split(GICState *s, int cpu, MemTxAttrs attrs)

 static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
 {
-    int cm = 1 << cpu;
     int group;

     if (irq >= s->num_irq) {
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         return;
     }

-    group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
+    group = gic_has_groups(s) && gic_test_group(s, irq, cpu);

     if (!gic_eoi_split(s, cpu, attrs)) {
         /* This is UNPREDICTABLE; we choose to ignore it */
@@ -XXX,XX +XXX,XX @@ static void gic_deactivate_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         return;
     }

-    GIC_DIST_CLEAR_ACTIVE(irq, cm);
+    gic_clear_active(s, irq, cpu);
 }

 static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)
         }
     }

-    group = gic_has_groups(s) && GIC_DIST_TEST_GROUP(irq, cm);
+    group = gic_has_groups(s) && gic_test_group(s, irq, cpu);

     if (gic_cpu_ns_access(s, cpu, attrs) && !group) {
         DPRINTF("Non-secure EOI for Group0 interrupt %d ignored\n", irq);
@@ -XXX,XX +XXX,XX @@ static void gic_complete_irq(GICState *s, int cpu, int irq, MemTxAttrs attrs)

     /* In GICv2 the guest can choose to split priority-drop and deactivate */
     if (!gic_eoi_split(s, cpu, attrs)) {
-        GIC_DIST_CLEAR_ACTIVE(irq, cm);
+        gic_clear_active(s, irq, cpu);
     }
     gic_update(s);
 }
--
2.18.0

From: Richard Henderson <richard.henderson@linaro.org>

In preparation for TARGET_TB_PCREL, reduce reliance on absolute values.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/translate.c | 38 +++++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 17 deletions(-)

diff --git a/target/arm/translate.c b/target/arm/translate.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -XXX,XX +XXX,XX @@ static inline int get_a32_user_mem_index(DisasContext *s)
 }

-/* The architectural value of PC. */
-static uint32_t read_pc(DisasContext *s)
-{
-    return s->pc_curr + (s->thumb ? 4 : 8);
-}
-
 /* The pc_curr difference for an architectural jump. */
 static target_long jmp_diff(DisasContext *s, target_long diff)
 {
     return diff + (s->thumb ? 4 : 8);
 }

+static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
+{
+    tcg_gen_movi_i32(var, s->pc_curr + diff);
+}
+
 /* Set a variable to the value of a CPU register. */
 void load_reg_var(DisasContext *s, TCGv_i32 var, int reg)
 {
     if (reg == 15) {
-        tcg_gen_movi_i32(var, read_pc(s));
+        gen_pc_plus_diff(s, var, jmp_diff(s, 0));
     } else {
         tcg_gen_mov_i32(var, cpu_R[reg]);
     }
@@ -XXX,XX +XXX,XX @@ TCGv_i32 add_reg_for_lit(DisasContext *s, int reg, int ofs)
     TCGv_i32 tmp = tcg_temp_new_i32();

     if (reg == 15) {
-        tcg_gen_movi_i32(tmp, (read_pc(s) & ~3) + ofs);
+        /*
+         * This address is computed from an aligned PC:
+         * subtract off the low bits.
+         */
+        gen_pc_plus_diff(s, tmp, jmp_diff(s, ofs - (s->pc_curr & 3)));
     } else {
         tcg_gen_addi_i32(tmp, cpu_R[reg], ofs);
     }
@@ -XXX,XX +XXX,XX @@ void unallocated_encoding(DisasContext *s)
 /* Force a TB lookup after an instruction that changes the CPU state. */
 void gen_lookup_tb(DisasContext *s)
 {
-    tcg_gen_movi_i32(cpu_R[15], s->base.pc_next);
+    gen_pc_plus_diff(s, cpu_R[15], curr_insn_len(s));
     s->base.is_jmp = DISAS_EXIT;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_r(DisasContext *s, arg_BLX_r *a)
         return false;
     }
     tmp = load_reg(s, a->rm);
-    tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+    gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
     gen_bx(s, tmp);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_B_cond_thumb(DisasContext *s, arg_ci *a)

 static bool trans_BL(DisasContext *s, arg_i *a)
 {
-    tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+    gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
     gen_jmp(s, jmp_diff(s, a->imm));
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
     if (s->thumb && (a->imm & 2)) {
         return false;
     }
-    tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | s->thumb);
+    gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | s->thumb);
     store_cpu_field_constant(!s->thumb, thumb);
     /* This jump is computed from an aligned PC: subtract off the low bits. */
     gen_jmp(s, jmp_diff(s, a->imm - (s->pc_curr & 3)));
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_i(DisasContext *s, arg_BLX_i *a)
 static bool trans_BL_BLX_prefix(DisasContext *s, arg_BL_BLX_prefix *a)
 {
     assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
-    tcg_gen_movi_i32(cpu_R[14], read_pc(s) + (a->imm << 12));
+    gen_pc_plus_diff(s, cpu_R[14], jmp_diff(s, a->imm << 12));
     return true;
 }

@@ -XXX,XX +XXX,XX @@ static bool trans_BL_suffix(DisasContext *s, arg_BL_suffix *a)

     assert(!arm_dc_feature(s, ARM_FEATURE_THUMB2));
     tcg_gen_addi_i32(tmp, cpu_R[14], (a->imm << 1) | 1);
-    tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
+    gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
     gen_bx(s, tmp);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool trans_BLX_suffix(DisasContext *s, arg_BLX_suffix *a)
     tmp = tcg_temp_new_i32();
     tcg_gen_addi_i32(tmp, cpu_R[14], a->imm << 1);
     tcg_gen_andi_i32(tmp, tmp, 0xfffffffc);
-    tcg_gen_movi_i32(cpu_R[14], s->base.pc_next | 1);
+    gen_pc_plus_diff(s, cpu_R[14], curr_insn_len(s) | 1);
     gen_bx(s, tmp);
     return true;
 }
@@ -XXX,XX +XXX,XX @@ static bool op_tbranch(DisasContext *s, arg_tbranch *a, bool half)
     tcg_gen_add_i32(addr, addr, tmp);

     gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), half ? MO_UW : MO_UB);
-    tcg_temp_free_i32(addr);

     tcg_gen_add_i32(tmp, tmp, tmp);
-    tcg_gen_addi_i32(tmp, tmp, read_pc(s));
+    gen_pc_plus_diff(s, addr, jmp_diff(s, 0));
+    tcg_gen_add_i32(tmp, tmp, addr);
+    tcg_temp_free_i32(addr);
     store_reg(s, 15, tmp);
     return true;
 }
--
2.25.1
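The add_reg_for_lit() change relies on an identity between the old aligned-PC
form and the new relative form: (read_pc(s) & ~3) + ofs equals
pc_curr + jmp_diff(s, ofs - (pc_curr & 3)) for the PC alignments that can
occur (Thumb PCs are 2-aligned, ARM PCs 4-aligned). A standalone check over a
few hypothetical sample PCs (illustrative only):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t pcs[] = { 0x1000, 0x1002, 0x2004, 0x2006 };  /* Thumb-aligned */
        int32_t ofs = 0x30;
        uint32_t pclen = 4;                 /* Thumb pipeline offset; 8 for ARM */
        for (int i = 0; i < 4; i++) {
            uint32_t pc_curr = pcs[i];
            /* old form: (read_pc(s) & ~3) + ofs */
            uint32_t via_align = ((pc_curr + pclen) & ~3u) + ofs;
            /* new form: pc_curr + jmp_diff(s, ofs - (pc_curr & 3)) */
            uint32_t via_diff = pc_curr + (ofs - (pc_curr & 3)) + pclen;
            assert(via_align == via_diff);
        }
        return 0;
    }

Subtracting (pc_curr & 3) from the displacement reproduces the alignment
without ever materializing the absolute PC, which is the point of the series.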
From: Julia Suvorova <jusual@mail.ru>

Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180718095628.26442-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/armv7m_nvic.c | 10 ++++++++++
 target/arm/cpu.c      |  4 ++++
 target/arm/helper.c   | 13 +++++++++++--
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)
         val |= cpu->env.v7m.ccr[M_REG_NS] & R_V7M_CCR_BFHFNMIGN_MASK;
         return val;
     case 0xd24: /* System Handler Control and State (SHCSR) */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         val = 0;
         if (attrs.secure) {
             if (s->sec_vectors[ARMV7M_EXCP_MEM].active) {
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         cpu->env.v7m.scr[attrs.secure] = value;
         break;
     case 0xd14: /* Configuration Control. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            goto bad_offset;
+        }
+
         /* Enforce RAZ/WI on reserved and must-RAZ/WI bits */
         value &= (R_V7M_CCR_STKALIGN_MASK |
                   R_V7M_CCR_BFHFNMIGN_MASK |
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
         cpu->env.v7m.ccr[attrs.secure] = value;
         break;
     case 0xd24: /* System Handler Control and State (SHCSR) */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         if (attrs.secure) {
             s->sec_vectors[ARMV7M_EXCP_MEM].active = (value & (1 << 0)) != 0;
             /* Secure HardFault active bit cannot be written */
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static void arm_cpu_reset(CPUState *s)
         env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_NONBASETHRDENA_MASK;
         env->v7m.ccr[M_REG_S] |= R_V7M_CCR_NONBASETHRDENA_MASK;

[the remainder of this patch is not present in this part of the archive]

From: Richard Henderson <richard.henderson@linaro.org>

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20221020030641.2066807-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/cpu-param.h        |   2 +
 target/arm/translate.h        |  50 +++++++++++++++-
 target/arm/cpu.c              |  23 ++++----
 target/arm/translate-a64.c    |  64 +++++++++++++-------
 target/arm/translate-m-nocp.c |   2 +-
 target/arm/translate.c        | 108 +++++++++++++++++++++++-----------
 6 files changed, 178 insertions(+), 71 deletions(-)

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -XXX,XX +XXX,XX @@
 # define TARGET_PAGE_BITS_VARY
 # define TARGET_PAGE_BITS_MIN  10

+# define TARGET_TB_PCREL 1
+
 /*
  * Cache the attrs and shareability fields from the page table entry.
  *
diff --git a/target/arm/translate.h b/target/arm/translate.h
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -XXX,XX +XXX,XX @@


 /* internal defines */
+
+/*
+ * Save pc_save across a branch, so that we may restore the value from
+ * before the branch at the point the label is emitted.
+ */
+typedef struct DisasLabel {
+    TCGLabel *label;
+    target_ulong pc_save;
+} DisasLabel;
+
 typedef struct DisasContext {
     DisasContextBase base;
     const ARMISARegisters *isar;

     /* The address of the current instruction being translated. */
     target_ulong pc_curr;
+    /*
+     * For TARGET_TB_PCREL, the full value of cpu_pc is not known
+     * (although the page offset is known).  For convenience, the
+     * translation loop uses the full virtual address that triggered
+     * the translation, from base.pc_start through pc_curr.
+     * For efficiency, we do not update cpu_pc for every instruction.
+     * Instead, pc_save has the value of pc_curr at the time of the
+     * last update to cpu_pc, which allows us to compute the addend
+     * needed to bring cpu_pc current: pc_curr - pc_save.
+     * If cpu_pc now contains the destination of an indirect branch,
+     * pc_save contains -1 to indicate that relative updates are no
+     * longer possible.
+     */
+    target_ulong pc_save;
     target_ulong page_start;
     uint32_t insn;
     /* Nonzero if this instruction has been conditionally skipped. */
     int condjmp;
     /* The label that will be jumped to when the instruction is skipped. */
-    TCGLabel *condlabel;
+    DisasLabel condlabel;
     /* Thumb-2 conditional execution bits. */
     int condexec_mask;
     int condexec_cond;
@@ -XXX,XX +XXX,XX @@ typedef struct DisasContext {
      * after decode (ie after any UNDEF checks)
      */
     bool eci_handled;
-    /* TCG op to rewind to if this turns out to be an invalid ECI state */
-    TCGOp *insn_eci_rewind;
     int sctlr_b;
     MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
@@ -XXX,XX +XXX,XX @@ static inline MemOp finalize_memop(DisasContext *s, MemOp opc)
  */
 uint64_t asimd_imm_const(uint32_t imm, int cmode, int op);

+/*
+ * gen_disas_label:
+ * Create a label and cache a copy of pc_save.
+ */
+static inline DisasLabel gen_disas_label(DisasContext *s)
+{
+    return (DisasLabel){
+        .label = gen_new_label(),
+        .pc_save = s->pc_save,
+    };
+}
+
+/*
+ * set_disas_label:
+ * Emit a label and restore the cached copy of pc_save.
+ */
+static inline void set_disas_label(DisasContext *s, DisasLabel l)
+{
+    gen_set_label(l.label);
+    s->pc_save = l.pc_save;
+}
+
 /*
  * Helpers for implementing sets of trans_* functions.
  * Defer the implementation of NAME to FUNC, with optional extra arguments.
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -XXX,XX +XXX,XX @@ static vaddr arm_cpu_get_pc(CPUState *cs)
 void arm_cpu_synchronize_from_tb(CPUState *cs,
                                  const TranslationBlock *tb)
 {
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-
-    /*
-     * It's OK to look at env for the current mode here, because it's
-     * never possible for an AArch64 TB to chain to an AArch32 TB.
-     */
-    if (is_a64(env)) {
-        env->pc = tb_pc(tb);
-    } else {
-        env->regs[15] = tb_pc(tb);
+    /* The program counter is always up to date with TARGET_TB_PCREL. */
+    if (!TARGET_TB_PCREL) {
+        CPUARMState *env = cs->env_ptr;
+        /*
+         * It's OK to look at env for the current mode here, because it's
+         * never possible for an AArch64 TB to chain to an AArch32 TB.
+         */
+        if (is_a64(env)) {
+            env->pc = tb_pc(tb);
+        } else {
+            env->regs[15] = tb_pc(tb);
+        }
     }
 }
 #endif /* CONFIG_TCG */
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -XXX,XX +XXX,XX @@ static void reset_btype(DisasContext *s)

 static void gen_pc_plus_diff(DisasContext *s, TCGv_i64 dest, target_long diff)
 {
-    tcg_gen_movi_i64(dest, s->pc_curr + diff);
+    assert(s->pc_save != -1);
+    if (TARGET_TB_PCREL) {
+        tcg_gen_addi_i64(dest, cpu_pc, (s->pc_curr - s->pc_save) + diff);
+    } else {
+        tcg_gen_movi_i64(dest, s->pc_curr + diff);
+    }
 }

 void gen_a64_update_pc(DisasContext *s, target_long diff)
 {
     gen_pc_plus_diff(s, cpu_pc, diff);
+    s->pc_save = s->pc_curr + diff;
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void gen_a64_set_pc(DisasContext *s, TCGv_i64 src)
      * then loading an address into the PC will clear out any tag.
      */
     gen_top_byte_ignore(s, cpu_pc, src, s->tbii);
+    s->pc_save = -1;
 }

 /*
@@ -XXX,XX +XXX,XX @@ static inline bool use_goto_tb(DisasContext *s, uint64_t dest)

 static void gen_goto_tb(DisasContext *s, int n, int64_t diff)
 {
-    uint64_t dest = s->pc_curr + diff;
-
-    if (use_goto_tb(s, dest)) {
-        tcg_gen_goto_tb(n);
-        gen_a64_update_pc(s, diff);
+    if (use_goto_tb(s, s->pc_curr + diff)) {
+        /*
+         * For pcrel, the pc must always be up-to-date on entry to
+         * the linked TB, so that it can use simple additions for all
+         * further adjustments.  For !pcrel, the linked TB is compiled
+         * to know its full virtual address, so we can delay the
+         * update to pc to the unlinked path.  A long chain of links
+         * can thus avoid many updates to the PC.
+         */
+        if (TARGET_TB_PCREL) {
+            gen_a64_update_pc(s, diff);
+            tcg_gen_goto_tb(n);
+        } else {
+            tcg_gen_goto_tb(n);
+            gen_a64_update_pc(s, diff);
+        }
tcg_gen_exit_tb(s->base.tb, n);
207
s->base.is_jmp = DISAS_NORETURN;
208
} else {
209
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
210
{
211
unsigned int sf, op, rt;
212
int64_t diff;
213
- TCGLabel *label_match;
214
+ DisasLabel match;
215
TCGv_i64 tcg_cmp;
216
217
sf = extract32(insn, 31, 1);
218
@@ -XXX,XX +XXX,XX @@ static void disas_comp_b_imm(DisasContext *s, uint32_t insn)
219
diff = sextract32(insn, 5, 19) * 4;
220
221
tcg_cmp = read_cpu_reg(s, rt, sf);
222
- label_match = gen_new_label();
223
-
224
reset_btype(s);
225
- tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
226
- tcg_cmp, 0, label_match);
227
228
+ match = gen_disas_label(s);
229
+ tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
230
+ tcg_cmp, 0, match.label);
231
gen_goto_tb(s, 0, 4);
232
- gen_set_label(label_match);
233
+ set_disas_label(s, match);
234
gen_goto_tb(s, 1, diff);
235
}
236
237
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
238
{
239
unsigned int bit_pos, op, rt;
240
int64_t diff;
241
- TCGLabel *label_match;
242
+ DisasLabel match;
243
TCGv_i64 tcg_cmp;
244
245
bit_pos = (extract32(insn, 31, 1) << 5) | extract32(insn, 19, 5);
246
@@ -XXX,XX +XXX,XX @@ static void disas_test_b_imm(DisasContext *s, uint32_t insn)
247
248
tcg_cmp = tcg_temp_new_i64();
249
tcg_gen_andi_i64(tcg_cmp, cpu_reg(s, rt), (1ULL << bit_pos));
250
- label_match = gen_new_label();
251
252
reset_btype(s);
253
+
254
+ match = gen_disas_label(s);
255
tcg_gen_brcondi_i64(op ? TCG_COND_NE : TCG_COND_EQ,
256
- tcg_cmp, 0, label_match);
257
+ tcg_cmp, 0, match.label);
258
tcg_temp_free_i64(tcg_cmp);
259
gen_goto_tb(s, 0, 4);
260
- gen_set_label(label_match);
261
+ set_disas_label(s, match);
262
gen_goto_tb(s, 1, diff);
263
}
264
265
@@ -XXX,XX +XXX,XX @@ static void disas_cond_b_imm(DisasContext *s, uint32_t insn)
266
reset_btype(s);
267
if (cond < 0x0e) {
268
/* genuinely conditional branches */
269
- TCGLabel *label_match = gen_new_label();
270
- arm_gen_test_cc(cond, label_match);
271
+ DisasLabel match = gen_disas_label(s);
272
+ arm_gen_test_cc(cond, match.label);
273
gen_goto_tb(s, 0, 4);
274
- gen_set_label(label_match);
275
+ set_disas_label(s, match);
276
gen_goto_tb(s, 1, diff);
277
} else {
278
/* 0xe and 0xf are both "always" conditions */
279
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
280
281
dc->isar = &arm_cpu->isar;
282
dc->condjmp = 0;
283
-
284
+ dc->pc_save = dc->base.pc_first;
285
dc->aarch64 = true;
286
dc->thumb = false;
287
dc->sctlr_b = 0;
288
@@ -XXX,XX +XXX,XX @@ static void aarch64_tr_tb_start(DisasContextBase *db, CPUState *cpu)
289
static void aarch64_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
290
{
291
DisasContext *dc = container_of(dcbase, DisasContext, base);
292
+ target_ulong pc_arg = dc->base.pc_next;
293
294
- tcg_gen_insn_start(dc->base.pc_next, 0, 0);
295
+ if (TARGET_TB_PCREL) {
296
+ pc_arg &= ~TARGET_PAGE_MASK;
297
+ }
298
+ tcg_gen_insn_start(pc_arg, 0, 0);
299
dc->insn_start = tcg_last_op();
300
}
301
302
diff --git a/target/arm/translate-m-nocp.c b/target/arm/translate-m-nocp.c
303
index XXXXXXX..XXXXXXX 100644
304
--- a/target/arm/translate-m-nocp.c
305
+++ b/target/arm/translate-m-nocp.c
306
@@ -XXX,XX +XXX,XX @@ static bool trans_VSCCLRM(DisasContext *s, arg_VSCCLRM *a)
307
tcg_gen_andi_i32(sfpa, sfpa, R_V7M_CONTROL_SFPA_MASK);
308
tcg_gen_or_i32(sfpa, sfpa, aspen);
309
arm_gen_condlabel(s);
310
- tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel);
311
+ tcg_gen_brcondi_i32(TCG_COND_EQ, sfpa, 0, s->condlabel.label);
312
313
if (s->fp_excp_el != 0) {
314
gen_exception_insn_el(s, 0, EXCP_NOCP,
315
diff --git a/target/arm/translate.c b/target/arm/translate.c
316
index XXXXXXX..XXXXXXX 100644
317
--- a/target/arm/translate.c
318
+++ b/target/arm/translate.c
319
@@ -XXX,XX +XXX,XX @@ uint64_t asimd_imm_const(uint32_t imm, int cmode, int op)
320
void arm_gen_condlabel(DisasContext *s)
321
{
322
if (!s->condjmp) {
323
- s->condlabel = gen_new_label();
324
+ s->condlabel = gen_disas_label(s);
325
s->condjmp = 1;
326
}
327
}
328
@@ -XXX,XX +XXX,XX @@ static target_long jmp_diff(DisasContext *s, target_long diff)
329
330
static void gen_pc_plus_diff(DisasContext *s, TCGv_i32 var, target_long diff)
331
{
332
- tcg_gen_movi_i32(var, s->pc_curr + diff);
333
+ assert(s->pc_save != -1);
334
+ if (TARGET_TB_PCREL) {
335
+ tcg_gen_addi_i32(var, cpu_R[15], (s->pc_curr - s->pc_save) + diff);
336
+ } else {
337
+ tcg_gen_movi_i32(var, s->pc_curr + diff);
338
+ }
339
}
340
341
/* Set a variable to the value of a CPU register. */
342
@@ -XXX,XX +XXX,XX @@ void store_reg(DisasContext *s, int reg, TCGv_i32 var)
343
*/
344
tcg_gen_andi_i32(var, var, s->thumb ? ~1 : ~3);
345
s->base.is_jmp = DISAS_JUMP;
346
+ s->pc_save = -1;
347
} else if (reg == 13 && arm_dc_feature(s, ARM_FEATURE_M)) {
348
/* For M-profile SP bits [1:0] are always zero */
349
tcg_gen_andi_i32(var, var, ~3);
350
@@ -XXX,XX +XXX,XX @@ void gen_set_condexec(DisasContext *s)
351
352
void gen_update_pc(DisasContext *s, target_long diff)
353
{
354
- tcg_gen_movi_i32(cpu_R[15], s->pc_curr + diff);
355
+ gen_pc_plus_diff(s, cpu_R[15], diff);
356
+ s->pc_save = s->pc_curr + diff;
357
}
358
359
/* Set PC and Thumb state from var. var is marked as dead. */
360
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx(DisasContext *s, TCGv_i32 var)
361
tcg_gen_andi_i32(cpu_R[15], var, ~1);
362
tcg_gen_andi_i32(var, var, 1);
363
store_cpu_field(var, thumb);
364
+ s->pc_save = -1;
365
}
366
367
/*
368
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
369
static inline void gen_bx_excret_final_code(DisasContext *s)
370
{
371
/* Generate the code to finish possible exception return and end the TB */
372
- TCGLabel *excret_label = gen_new_label();
373
+ DisasLabel excret_label = gen_disas_label(s);
374
uint32_t min_magic;
375
376
if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY)) {
377
@@ -XXX,XX +XXX,XX @@ static inline void gen_bx_excret_final_code(DisasContext *s)
378
}
379
380
/* Is the new PC value in the magic range indicating exception return? */
381
- tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label);
382
+ tcg_gen_brcondi_i32(TCG_COND_GEU, cpu_R[15], min_magic, excret_label.label);
383
/* No: end the TB as we would for a DISAS_JMP */
384
if (s->ss_active) {
385
gen_singlestep_exception(s);
386
} else {
387
tcg_gen_exit_tb(NULL, 0);
388
}
389
- gen_set_label(excret_label);
390
+ set_disas_label(s, excret_label);
391
/* Yes: this is an exception return.
392
* At this point in runtime env->regs[15] and env->thumb will hold
393
* the exception-return magic number, which do_v7m_exception_exit()
394
@@ -XXX,XX +XXX,XX @@ static void gen_goto_ptr(void)
395
*/
396
static void gen_goto_tb(DisasContext *s, int n, target_long diff)
397
{
398
- target_ulong dest = s->pc_curr + diff;
399
-
400
- if (translator_use_goto_tb(&s->base, dest)) {
401
- tcg_gen_goto_tb(n);
402
- gen_update_pc(s, diff);
403
+ if (translator_use_goto_tb(&s->base, s->pc_curr + diff)) {
404
+ /*
405
+ * For pcrel, the pc must always be up-to-date on entry to
406
+ * the linked TB, so that it can use simple additions for all
407
+ * further adjustments. For !pcrel, the linked TB is compiled
408
+ * to know its full virtual address, so we can delay the
409
+ * update to pc to the unlinked path. A long chain of links
410
+ * can thus avoid many updates to the PC.
411
+ */
412
+ if (TARGET_TB_PCREL) {
413
+ gen_update_pc(s, diff);
414
+ tcg_gen_goto_tb(n);
415
+ } else {
416
+ tcg_gen_goto_tb(n);
417
+ gen_update_pc(s, diff);
418
+ }
419
tcg_gen_exit_tb(s->base.tb, n);
420
} else {
421
gen_update_pc(s, diff);
422
@@ -XXX,XX +XXX,XX @@ static void gen_srs(DisasContext *s,
423
static void arm_skip_unless(DisasContext *s, uint32_t cond)
424
{
425
arm_gen_condlabel(s);
426
- arm_gen_test_cc(cond ^ 1, s->condlabel);
427
+ arm_gen_test_cc(cond ^ 1, s->condlabel.label);
428
}
429
430
431
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
432
{
433
/* M-profile low-overhead while-loop start */
434
TCGv_i32 tmp;
435
- TCGLabel *nextlabel;
436
+ DisasLabel nextlabel;
437
438
if (!dc_isar_feature(aa32_lob, s)) {
439
return false;
440
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
61
}
441
}
62
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
442
}
63
+ env->v7m.ccr[M_REG_NS] |= R_V7M_CCR_UNALIGN_TRP_MASK;
443
64
+ env->v7m.ccr[M_REG_S] |= R_V7M_CCR_UNALIGN_TRP_MASK;
444
- nextlabel = gen_new_label();
445
- tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel);
446
+ nextlabel = gen_disas_label(s);
447
+ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_R[a->rn], 0, nextlabel.label);
448
tmp = load_reg(s, a->rn);
449
store_reg(s, 14, tmp);
450
if (a->size != 4) {
451
@@ -XXX,XX +XXX,XX @@ static bool trans_WLS(DisasContext *s, arg_WLS *a)
452
}
453
gen_jmp_tb(s, curr_insn_len(s), 1);
454
455
- gen_set_label(nextlabel);
456
+ set_disas_label(s, nextlabel);
457
gen_jmp(s, jmp_diff(s, a->imm));
458
return true;
459
}
460
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
461
* any faster.
462
*/
463
TCGv_i32 tmp;
464
- TCGLabel *loopend;
465
+ DisasLabel loopend;
466
bool fpu_active;
467
468
if (!dc_isar_feature(aa32_lob, s)) {
469
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
470
471
if (!a->tp && dc_isar_feature(aa32_mve, s) && fpu_active) {
472
/* Need to do a runtime check for LTPSIZE != 4 */
473
- TCGLabel *skipexc = gen_new_label();
474
+ DisasLabel skipexc = gen_disas_label(s);
475
tmp = load_cpu_field(v7m.ltpsize);
476
- tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc);
477
+ tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 4, skipexc.label);
478
tcg_temp_free_i32(tmp);
479
gen_exception_insn(s, 0, EXCP_INVSTATE, syn_uncategorized());
480
- gen_set_label(skipexc);
481
+ set_disas_label(s, skipexc);
482
}
483
484
if (a->f) {
485
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
486
* loop decrement value is 1. For LETP we need to calculate the decrement
487
* value from LTPSIZE.
488
*/
489
- loopend = gen_new_label();
490
+ loopend = gen_disas_label(s);
491
if (!a->tp) {
492
- tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend);
493
+ tcg_gen_brcondi_i32(TCG_COND_LEU, cpu_R[14], 1, loopend.label);
494
tcg_gen_addi_i32(cpu_R[14], cpu_R[14], -1);
495
} else {
496
/*
497
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
498
tcg_gen_shl_i32(decr, tcg_constant_i32(1), decr);
499
tcg_temp_free_i32(ltpsize);
500
501
- tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend);
502
+ tcg_gen_brcond_i32(TCG_COND_LEU, cpu_R[14], decr, loopend.label);
503
504
tcg_gen_sub_i32(cpu_R[14], cpu_R[14], decr);
505
tcg_temp_free_i32(decr);
506
@@ -XXX,XX +XXX,XX @@ static bool trans_LE(DisasContext *s, arg_LE *a)
507
/* Jump back to the loop start */
508
gen_jmp(s, jmp_diff(s, -a->imm));
509
510
- gen_set_label(loopend);
511
+ set_disas_label(s, loopend);
512
if (a->tp) {
513
/* Exits from tail-pred loops must reset LTPSIZE to 4 */
514
store_cpu_field(tcg_constant_i32(4), v7m.ltpsize);
515
@@ -XXX,XX +XXX,XX @@ static bool trans_CBZ(DisasContext *s, arg_CBZ *a)
516
517
arm_gen_condlabel(s);
518
tcg_gen_brcondi_i32(a->nz ? TCG_COND_EQ : TCG_COND_NE,
519
- tmp, 0, s->condlabel);
520
+ tmp, 0, s->condlabel.label);
521
tcg_temp_free_i32(tmp);
522
gen_jmp(s, jmp_diff(s, a->imm));
523
return true;
524
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
525
526
dc->isar = &cpu->isar;
527
dc->condjmp = 0;
528
-
529
+ dc->pc_save = dc->base.pc_first;
530
dc->aarch64 = false;
531
dc->thumb = EX_TBFLAG_AM32(tb_flags, THUMB);
532
dc->be_data = EX_TBFLAG_ANY(tb_flags, BE_DATA) ? MO_BE : MO_LE;
533
@@ -XXX,XX +XXX,XX @@ static void arm_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
534
*/
535
dc->eci = dc->condexec_mask = dc->condexec_cond = 0;
536
dc->eci_handled = false;
537
- dc->insn_eci_rewind = NULL;
538
if (condexec & 0xf) {
539
dc->condexec_mask = (condexec & 0xf) << 1;
540
dc->condexec_cond = condexec >> 4;
541
@@ -XXX,XX +XXX,XX @@ static void arm_tr_insn_start(DisasContextBase *dcbase, CPUState *cpu)
542
* fields here.
543
*/
544
uint32_t condexec_bits;
545
+ target_ulong pc_arg = dc->base.pc_next;
546
547
+ if (TARGET_TB_PCREL) {
548
+ pc_arg &= ~TARGET_PAGE_MASK;
549
+ }
550
if (dc->eci) {
551
condexec_bits = dc->eci << 4;
552
} else {
553
condexec_bits = (dc->condexec_cond << 4) | (dc->condexec_mask >> 1);
554
}
555
- tcg_gen_insn_start(dc->base.pc_next, condexec_bits, 0);
556
+ tcg_gen_insn_start(pc_arg, condexec_bits, 0);
557
dc->insn_start = tcg_last_op();
558
}
559
560
@@ -XXX,XX +XXX,XX @@ static bool arm_check_ss_active(DisasContext *dc)
561
562
static void arm_post_translate_insn(DisasContext *dc)
563
{
564
- if (dc->condjmp && !dc->base.is_jmp) {
565
- gen_set_label(dc->condlabel);
566
+ if (dc->condjmp && dc->base.is_jmp == DISAS_NEXT) {
567
+ if (dc->pc_save != dc->condlabel.pc_save) {
568
+ gen_update_pc(dc, dc->condlabel.pc_save - dc->pc_save);
65
+ }
569
+ }
66
570
+ gen_set_label(dc->condlabel.label);
67
/* Unlike A/R profile, M profile defines the reset LR value */
571
dc->condjmp = 0;
68
env->regs[14] = 0xffffffff;
572
}
69
diff --git a/target/arm/helper.c b/target/arm/helper.c
573
translator_loop_temp_check(&dc->base);
70
index XXXXXXX..XXXXXXX 100644
574
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
71
--- a/target/arm/helper.c
575
uint32_t pc = dc->base.pc_next;
72
+++ b/target/arm/helper.c
576
uint32_t insn;
73
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
577
bool is_16bit;
74
env->v7m.primask[M_REG_NS] = val & 1;
578
+ /* TCG op to rewind to if this turns out to be an invalid ECI state */
75
return;
579
+ TCGOp *insn_eci_rewind = NULL;
76
case 0x91: /* BASEPRI_NS */
580
+ target_ulong insn_eci_pc_save = -1;
77
- if (!env->v7m.secure) {
581
78
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
582
/* Misaligned thumb PC is architecturally impossible. */
79
return;
583
assert((dc->base.pc_next & 1) == 0);
80
}
584
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
81
env->v7m.basepri[M_REG_NS] = val & 0xff;
585
* insn" case. We will rewind to the marker (ie throwing away
82
return;
586
* all the generated code) and instead emit "take exception".
83
case 0x93: /* FAULTMASK_NS */
587
*/
84
- if (!env->v7m.secure) {
588
- dc->insn_eci_rewind = tcg_last_op();
85
+ if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
589
+ insn_eci_rewind = tcg_last_op();
86
return;
590
+ insn_eci_pc_save = dc->pc_save;
87
}
591
}
88
env->v7m.faultmask[M_REG_NS] = val & 1;
592
89
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
593
if (dc->condexec_mask && !thumb_insn_is_unconditional(dc, insn)) {
90
env->v7m.primask[env->v7m.secure] = val & 1;
594
@@ -XXX,XX +XXX,XX @@ static void thumb_tr_translate_insn(DisasContextBase *dcbase, CPUState *cpu)
91
break;
595
* Insn wasn't valid for ECI/ICI at all: undo what we
92
case 17: /* BASEPRI */
596
* just generated and instead emit an exception
93
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
597
*/
94
+ goto bad_reg;
598
- tcg_remove_ops_after(dc->insn_eci_rewind);
599
+ tcg_remove_ops_after(insn_eci_rewind);
600
+ dc->pc_save = insn_eci_pc_save;
601
dc->condjmp = 0;
602
gen_exception_insn(dc, 0, EXCP_INVSTATE, syn_uncategorized());
603
}
604
@@ -XXX,XX +XXX,XX @@ static void arm_tr_tb_stop(DisasContextBase *dcbase, CPUState *cpu)
605
606
if (dc->condjmp) {
607
/* "Condition failed" instruction codepath for the branch/trap insn */
608
- gen_set_label(dc->condlabel);
609
+ set_disas_label(dc, dc->condlabel);
610
gen_set_condexec(dc);
611
if (unlikely(dc->ss_active)) {
612
gen_update_pc(dc, curr_insn_len(dc));
613
@@ -XXX,XX +XXX,XX @@ void restore_state_to_opc(CPUARMState *env, TranslationBlock *tb,
614
target_ulong *data)
615
{
616
if (is_a64(env)) {
617
- env->pc = data[0];
618
+ if (TARGET_TB_PCREL) {
619
+ env->pc = (env->pc & TARGET_PAGE_MASK) | data[0];
620
+ } else {
621
+ env->pc = data[0];
95
+ }
622
+ }
96
env->v7m.basepri[env->v7m.secure] = val & 0xff;
623
env->condexec_bits = 0;
97
break;
624
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
98
case 18: /* BASEPRI_MAX */
625
} else {
99
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
626
- env->regs[15] = data[0];
100
+ goto bad_reg;
627
+ if (TARGET_TB_PCREL) {
628
+ env->regs[15] = (env->regs[15] & TARGET_PAGE_MASK) | data[0];
629
+ } else {
630
+ env->regs[15] = data[0];
101
+ }
631
+ }
102
val &= 0xff;
632
env->condexec_bits = data[1];
103
if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
633
env->exception.syndrome = data[2] << ARM_INSN_START_WORD2_SHIFT;
104
|| env->v7m.basepri[env->v7m.secure] == 0)) {
634
}
105
@@ -XXX,XX +XXX,XX @@ void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
106
}
107
break;
108
case 19: /* FAULTMASK */
109
+ if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
110
+ goto bad_reg;
111
+ }
112
env->v7m.faultmask[env->v7m.secure] = val & 1;
113
break;
114
case 20: /* CONTROL */
115
--
635
--
116
2.18.0
636
2.25.1
117
118
diff view generated by jsdifflib
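A minimal sketch of the pc_save bookkeeping described in the pcrel patch
above; the names (Ctx, pc_addend) are invented stand-ins for the real
DisasContext machinery, not QEMU API:

    #include <assert.h>
    #include <stdint.h>

    typedef struct {
        uint64_t pc_curr;  /* address of the insn being translated */
        uint64_t pc_save;  /* pc_curr at the last cpu_pc update, or -1 */
    } Ctx;

    /* Addend that brings cpu_pc from its last-written value up to
     * pc_curr + diff, so a relative add can replace an absolute move. */
    static uint64_t pc_addend(const Ctx *c, int64_t diff)
    {
        assert(c->pc_save != (uint64_t)-1); /* -1: after indirect branch */
        return (c->pc_curr - c->pc_save) + diff;
    }

The relative form never encodes the absolute PC into the generated code,
which is what allows a translated block to be reused at another virtual
address.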
Deleted patch

From: Julia Suvorova <jusual@mail.ru>

The differences from ARMv7-M NVIC are:
  * ARMv6-M only supports up to 32 external interrupts
    (configurable feature already). The ICTR is reserved.
  * Active Bit Register is reserved.
  * ARMv6-M supports 4 priority levels against 256 in ARMv7-M.
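To illustrate the last point (a sketch, not part of the original patch;
the MASK macro mirrors QEMU's MAKE_64BIT_MASK):

    #include <assert.h>
    #include <stdint.h>

    #define MASK(shift, len) ((((uint64_t)1 << (len)) - 1) << (shift))

    int main(void)
    {
        unsigned num_prio_bits = 2;     /* ARMv6-M implements 2 bits */
        uint8_t prio = 0xff;
        /* Keep only the implemented top bits of the 8-bit priority. */
        prio &= MASK(8 - num_prio_bits, num_prio_bits);
        assert(prio == 0xc0);           /* only 4 distinct levels remain */
        return 0;
    }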
Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/intc/armv7m_nvic.h |  1 +
 hw/intc/armv7m_nvic.c         | 21 ++++++++++++++++++---
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/include/hw/intc/armv7m_nvic.h b/include/hw/intc/armv7m_nvic.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/intc/armv7m_nvic.h
+++ b/include/hw/intc/armv7m_nvic.h
@@ -XXX,XX +XXX,XX @@ typedef struct NVICState {
     VecInfo sec_vectors[NVIC_INTERNAL_VECTORS];
     /* The PRIGROUP field in AIRCR is banked */
     uint32_t prigroup[M_REG_NUM_BANKS];
+    uint8_t num_prio_bits;

     /* v8M NVIC_ITNS state (stored as a bool per bit) */
     bool itns[NVIC_MAX_VECTORS];
diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -XXX,XX +XXX,XX @@ static void set_prio(NVICState *s, unsigned irq, bool secure, uint8_t prio)
     assert(irq > ARMV7M_EXCP_NMI); /* only use for configurable prios */
     assert(irq < s->num_irq);

+    prio &= MAKE_64BIT_MASK(8 - s->num_prio_bits, s->num_prio_bits);
+
     if (secure) {
         assert(exc_is_banked(irq));
         s->sec_vectors[irq].prio = prio;
@@ -XXX,XX +XXX,XX @@ static uint32_t nvic_readl(NVICState *s, uint32_t offset, MemTxAttrs attrs)

     switch (offset) {
     case 4: /* Interrupt Control Type. */
+        if (!arm_feature(&cpu->env, ARM_FEATURE_V7)) {
+            goto bad_offset;
+        }
         return ((s->num_irq - NVIC_FIRST_IRQ) / 32) - 1;
     case 0xc: /* CPPWR */
         if (!arm_feature(&cpu->env, ARM_FEATURE_V8)) {
@@ -XXX,XX +XXX,XX @@ static void nvic_writel(NVICState *s, uint32_t offset, uint32_t value,
                           "Setting VECTRESET when not in DEBUG mode "
                           "is UNPREDICTABLE\n");
         }
-        s->prigroup[attrs.secure] = extract32(value,
-                                              R_V7M_AIRCR_PRIGROUP_SHIFT,
-                                              R_V7M_AIRCR_PRIGROUP_LENGTH);
+        if (arm_feature(&cpu->env, ARM_FEATURE_M_MAIN)) {
+            s->prigroup[attrs.secure] =
+                extract32(value,
+                          R_V7M_AIRCR_PRIGROUP_SHIFT,
+                          R_V7M_AIRCR_PRIGROUP_LENGTH);
+        }
         if (attrs.secure) {
             /* These bits are only writable by secure */
             cpu->env.v7m.aircr = value &
@@ -XXX,XX +XXX,XX @@ static MemTxResult nvic_sysreg_read(void *opaque, hwaddr addr,
         break;
     case 0x300 ... 0x33f: /* NVIC Active */
         val = 0;
+
+        if (!arm_feature(&s->cpu->env, ARM_FEATURE_V7)) {
+            break;
+        }
+
         startvec = 8 * (offset - 0x300) + NVIC_FIRST_IRQ; /* vector # */

         for (i = 0, end = size * 8; i < end && startvec + i < s->num_irq; i++) {
@@ -XXX,XX +XXX,XX @@ static void armv7m_nvic_realize(DeviceState *dev, Error **errp)
     /* include space for internal exception vectors */
     s->num_irq += NVIC_FIRST_IRQ;

+    s->num_prio_bits = arm_feature(&s->cpu->env, ARM_FEATURE_V7) ? 8 : 2;
+
     object_property_set_bool(OBJECT(&s->systick[M_REG_NS]), true,
                              "realized", &err);
     if (err != NULL) {
--
2.18.0
Deleted patch

The io_readx() function needs to know whether the load it is
doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it
can pass the right value to the cpu_transaction_failed()
function. Plumb this information through from the softmmu
code.

This is currently not often going to give the wrong answer,
because usually instruction fetches go via get_page_addr_code().
However once we switch over to handling execution from non-RAM by
creating single-insn TBs, the path for an insn fetch to generate
a bus error will be through cpu_ld*_code() and io_readx(),
so without this change we will generate a d-side fault when we
should generate an i-side fault.

We also have to pass the access type via a CPU struct global
down to unassigned_mem_read(), for the benefit of the targets
which still use the cpu_unassigned_access() hook (m68k, mips,
sparc, xtensa).
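A rough sketch of the plumbing idea, with simplified names rather than
the real softmmu signatures:

    #include <stdint.h>

    typedef enum { MMU_DATA_LOAD, MMU_DATA_STORE, MMU_INST_FETCH } MMUAccessType;

    static void transaction_failed(uint64_t addr, MMUAccessType access_type)
    {
        /* raise an i-side or d-side fault depending on access_type */
        (void)addr; (void)access_type;
    }

    static uint64_t io_read(uint64_t addr, MMUAccessType access_type)
    {
        /* ... attempt the device access; on failure, report with the
         * caller-supplied type instead of hardcoding MMU_DATA_LOAD: */
        transaction_failed(addr, access_type);
        return 0;
    }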
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
---
 accel/tcg/softmmu_template.h | 11 +++++++----
 include/qom/cpu.h            |  6 ++++++
 accel/tcg/cputlb.c           |  5 +++--
 memory.c                     |  3 ++-
 4 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/softmmu_template.h
+++ b/accel/tcg/softmmu_template.h
@@ -XXX,XX +XXX,XX @@ static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
                                               size_t mmu_idx, size_t index,
                                               target_ulong addr,
                                               uintptr_t retaddr,
-                                              bool recheck)
+                                              bool recheck,
+                                              MMUAccessType access_type)
 {
     CPUIOTLBEntry *iotlbentry = &env->iotlb[mmu_idx][index];
     return io_readx(env, iotlbentry, mmu_idx, addr, retaddr, recheck,
-                    DATA_SIZE);
+                    access_type, DATA_SIZE);
 }
 #endif

@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
         /* ??? Note that the io helpers always read data in the target
            byte ordering.  We should push the LE/BE request down into io. */
         res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
-                                    tlb_addr & TLB_RECHECK);
+                                    tlb_addr & TLB_RECHECK,
+                                    READ_ACCESS_TYPE);
         res = TGT_LE(res);
         return res;
     }
@@ -XXX,XX +XXX,XX @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
         /* ??? Note that the io helpers always read data in the target
            byte ordering.  We should push the LE/BE request down into io. */
         res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr,
-                                    tlb_addr & TLB_RECHECK);
+                                    tlb_addr & TLB_RECHECK,
+                                    READ_ACCESS_TYPE);
         res = TGT_BE(res);
         return res;
     }
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index XXXXXXX..XXXXXXX 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -XXX,XX +XXX,XX @@ struct CPUState {
      */
     uintptr_t mem_io_pc;
     vaddr mem_io_vaddr;
+    /*
+     * This is only needed for the legacy cpu_unassigned_access() hook;
+     * when all targets using it have been converted to use
+     * cpu_transaction_failed() instead it can be removed.
+     */
+    MMUAccessType mem_io_access_type;

     int kvm_fd;
     struct KVMState *kvm_state;
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx,
                          target_ulong addr, uintptr_t retaddr,
-                         bool recheck, int size)
+                         bool recheck, MMUAccessType access_type, int size)
 {
     CPUState *cpu = ENV_GET_CPU(env);
     hwaddr mr_offset;
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     }

     cpu->mem_io_vaddr = addr;
+    cpu->mem_io_access_type = access_type;

     if (mr->global_locking && !qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
@@ -XXX,XX +XXX,XX @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                    section->offset_within_address_space -
                    section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_LOAD,
+        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
diff --git a/memory.c b/memory.c
index XXXXXXX..XXXXXXX 100644
--- a/memory.c
+++ b/memory.c
@@ -XXX,XX +XXX,XX @@ static uint64_t unassigned_mem_read(void *opaque, hwaddr addr,
     printf("Unassigned mem read " TARGET_FMT_plx "\n", addr);
 #endif
     if (current_cpu != NULL) {
-        cpu_unassigned_access(current_cpu, addr, false, false, 0, size);
+        bool is_exec = current_cpu->mem_io_access_type == MMU_INST_FETCH;
+        cpu_unassigned_access(current_cpu, addr, false, is_exec, 0, size);
     }
     return 0;
 }
--
2.18.0
Deleted patch

When we support execution from non-RAM MMIO regions, get_page_addr_code()
will return -1 to indicate that there is no RAM at the requested address.
Handle this in the cpu-exec TB hashtable lookup code, treating it as
"no match found".

Note that the call to get_page_addr_code() in tb_lookup_cmp() needs
no changes -- a return of -1 will already correctly result in the
function returning false.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-3-peter.maydell@linaro.org
---
 accel/tcg/cpu-exec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_htable_lookup(CPUState *cpu, target_ulong pc,
     desc.trace_vcpu_dstate = *cpu->trace_dstate;
     desc.pc = pc;
     phys_pc = get_page_addr_code(desc.env, pc);
+    if (phys_pc == -1) {
+        return NULL;
+    }
     desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;
     h = tb_hash_func(phys_pc, pc, flags, cf_mask, *cpu->trace_dstate);
     return qht_lookup_custom(&tb_ctx.htable, &desc, h, tb_lookup_cmp);
--
2.18.0
Deleted patch

When we support execution from non-RAM MMIO regions, get_page_addr_code()
will return -1 to indicate that there is no RAM at the requested address.
Handle this in tb_check_watchpoint() -- if the exception happened for a
PC which doesn't correspond to RAM then there is no need to invalidate
any TBs, because the one-instruction TB will not have been cached.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-4-peter.maydell@linaro.org
---
 accel/tcg/translate-all.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ void tb_check_watchpoint(CPUState *cpu)

         cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
         addr = get_page_addr_code(env, pc);
-        tb_invalidate_phys_range(addr, addr + 1);
+        if (addr != -1) {
+            tb_invalidate_phys_range(addr, addr + 1);
+        }
     }
 }
--
2.18.0
Deleted patch

If get_page_addr_code() returns -1, this indicates that there is no RAM
page we can read a full TB from. Instead we must create a TB which
contains a single instruction and which we do not cache, so it is
executed only once.

Since this means we can now have TBs which are not in any page list,
we also need to make tb_phys_invalidate() handle them (by not trying
to remove them from a nonexistent page list).
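For illustration, the cflags manipulation amounts to the following
(constant values as in QEMU of this era; treat them as assumptions):

    #include <stdint.h>

    #define CF_COUNT_MASK 0x00007fff
    #define CF_NOCACHE    0x00010000

    /* Force a temporary, uncached TB containing exactly one insn. */
    static uint32_t one_insn_cflags(uint32_t cflags)
    {
        cflags &= ~CF_COUNT_MASK;   /* drop any requested insn count */
        cflags |= CF_NOCACHE | 1;   /* one insn, never cached */
        return cflags;
    }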
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Message-id: 20180710160013.26559-5-peter.maydell@linaro.org
---
 accel/tcg/translate-all.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
  */
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
-    if (page_addr == -1) {
+    if (page_addr == -1 && tb->page_addr[0] != -1) {
         page_lock_tb(tb);
         do_tb_phys_invalidate(tb, true);
         page_unlock_tb(tb);
@@ -XXX,XX +XXX,XX @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,

     assert_memory_lock();

+    if (phys_pc == -1) {
+        /*
+         * If the TB is not associated with a physical RAM page then
+         * it must be a temporary one-insn TB, and we have nothing to do
+         * except fill in the page_addr[] fields.
+         */
+        assert(tb->cflags & CF_NOCACHE);
+        tb->page_addr[0] = tb->page_addr[1] = -1;
+        return tb;
+    }
+
     /*
      * Add the TB to the page list, acquiring first the pages's locks.
      * We keep the locks held until after inserting the TB in the hash table,
@@ -XXX,XX +XXX,XX @@ TranslationBlock *tb_gen_code(CPUState *cpu,

     phys_pc = get_page_addr_code(env, pc);

+    if (phys_pc == -1) {
+        /* Generate a temporary TB with 1 insn in it */
+        cflags &= ~CF_COUNT_MASK;
+        cflags |= CF_NOCACHE | 1;
+    }
+
 buffer_overflow:
     tb = tb_alloc(pc);
     if (unlikely(!tb)) {
--
2.18.0
Deleted patch

Now that all the callers can handle get_page_addr_code() returning -1,
remove all the code which tries to handle execution from MMIO regions
or small-MMU-region RAM areas. This will mean that we can correctly
execute from these areas, rather than ending up either aborting QEMU
or delivering an incorrect guest exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-6-peter.maydell@linaro.org
---
 accel/tcg/cputlb.c | 95 +++++-----------------------------------------
 1 file changed, 10 insertions(+), 85 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ void tlb_set_page(CPUState *cpu, target_ulong vaddr,
                             prot, mmu_idx, size);
 }

-static void report_bad_exec(CPUState *cpu, target_ulong addr)
-{
-    /* Accidentally executing outside RAM or ROM is quite common for
-     * several user-error situations, so report it in a way that
-     * makes it clear that this isn't a QEMU bug and provide suggestions
-     * about what a user could do to fix things.
-     */
-    error_report("Trying to execute code outside RAM or ROM at 0x"
-                 TARGET_FMT_lx, addr);
-    error_printf("This usually means one of the following happened:\n\n"
-                 "(1) You told QEMU to execute a kernel for the wrong machine "
-                 "type, and it crashed on startup (eg trying to run a "
-                 "raspberry pi kernel on a versatilepb QEMU machine)\n"
-                 "(2) You didn't give QEMU a kernel or BIOS filename at all, "
-                 "and QEMU executed a ROM full of no-op instructions until "
-                 "it fell off the end\n"
-                 "(3) Your guest kernel has a bug and crashed by jumping "
-                 "off into nowhere\n\n"
-                 "This is almost always one of the first two, so check your "
-                 "command line and that you are using the right type of kernel "
-                 "for this machine.\n"
-                 "If you think option (3) is likely then you can try debugging "
-                 "your guest with the -d debug options; in particular "
-                 "-d guest_errors will cause the log to include a dump of the "
-                 "guest register state at this point.\n\n"
-                 "Execution cannot continue; stopping here.\n\n");
-
-    /* Report also to the logs, with more detail including register dump */
-    qemu_log_mask(LOG_GUEST_ERROR, "qemu: fatal: Trying to execute code "
-                  "outside RAM or ROM at 0x" TARGET_FMT_lx "\n", addr);
-    log_cpu_state_mask(LOG_GUEST_ERROR, cpu, CPU_DUMP_FPU | CPU_DUMP_CCOP);
-}
-
 static inline ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr)
 {
     ram_addr_t ram_addr;
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
     MemoryRegionSection *section;
     CPUState *cpu = ENV_GET_CPU(env);
     CPUIOTLBEntry *iotlbentry;
-    hwaddr physaddr, mr_offset;

     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     mmu_idx = cpu_mmu_index(env, true);
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
     if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
         /*
          * This is a TLB_RECHECK access, where the MMU protection
-         * covers a smaller range than a target page, and we must
-         * repeat the MMU check here. This tlb_fill() call might
-         * longjump out if this access should cause a guest exception.
-         */
-        int index;
-        target_ulong tlb_addr;
-
-        tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
-
-        index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-        tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
-        if (!(tlb_addr & ~(TARGET_PAGE_MASK | TLB_RECHECK))) {
-            /* RAM access. We can't handle this, so for now just stop */
-            cpu_abort(cpu, "Unable to handle guest executing from RAM within "
-                      "a small MPU region at 0x" TARGET_FMT_lx, addr);
-        }
-        /*
-         * Fall through to handle IO accesses (which will almost certainly
-         * also result in failure)
+         * covers a smaller range than a target page. Return -1 to
+         * indicate that we cannot simply execute from RAM here;
+         * we will perform the necessary repeat of the MMU check
+         * when the "execute a single insn" code performs the
+         * load of the guest insn.
          */
+        return -1;
     }

     iotlbentry = &env->iotlb[mmu_idx][index];
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     if (memory_region_is_unassigned(mr)) {
-        qemu_mutex_lock_iothread();
-        if (memory_region_request_mmio_ptr(mr, addr)) {
-            qemu_mutex_unlock_iothread();
-            /* A MemoryRegion is potentially added so re-run the
-             * get_page_addr_code.
-             */
-            return get_page_addr_code(env, addr);
-        }
-        qemu_mutex_unlock_iothread();
-
-        /* Give the new-style cpu_transaction_failed() hook first chance
-         * to handle this.
-         * This is not the ideal place to detect and generate CPU
-         * exceptions for instruction fetch failure (for instance
-         * we don't know the length of the access that the CPU would
-         * use, and it would be better to go ahead and try the access
-         * and use the MemTXResult it produced). However it is the
-         * simplest place we have currently available for the check.
+        /*
+         * Not guest RAM, so there is no ram_addr_t for it. Return -1,
+         * and we will execute a single insn from this device.
          */
-        mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
-        physaddr = mr_offset +
-            section->offset_within_address_space -
-            section->offset_within_region;
-        cpu_transaction_failed(cpu, physaddr, addr, 0, MMU_INST_FETCH, mmu_idx,
-                               iotlbentry->attrs, MEMTX_DECODE_ERROR, 0);
-
-        cpu_unassigned_access(cpu, addr, false, true, 0, 4);
-        /* The CPU's unassigned access hook might have longjumped out
-         * with an exception. If it didn't (or there was no hook) then
-         * we can't proceed further.
-         */
-        report_bad_exec(cpu, addr);
-        exit(1);
+        return -1;
     }
     p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
     return qemu_ram_addr_from_host_nofail(p);
--
2.18.0
Deleted patch

Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
---
 target/arm/helper.c | 23 -----------------------
 1 file changed, 23 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,

     fi->type = ARMFault_Permission;
     fi->level = 1;
-    /*
-     * Core QEMU code can't handle execution from small pages yet, so
-     * don't try it. This way we'll get an MPU exception, rather than
-     * eventually causing QEMU to exit in get_page_addr_code().
-     */
-    if (*page_size < TARGET_PAGE_SIZE && (*prot & PAGE_EXEC)) {
-        qemu_log_mask(LOG_UNIMP,
-                      "MPU: No support for execution from regions "
-                      "smaller than 1K\n");
-        *prot &= ~PAGE_EXEC;
-    }
     return !(*prot & (1 << access_type));
 }

@@ -XXX,XX +XXX,XX @@ static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,

     fi->type = ARMFault_Permission;
     fi->level = 1;
-    /*
-     * Core QEMU code can't handle execution from small pages yet, so
-     * don't try it. This means any attempted execution will generate
-     * an MPU exception, rather than eventually causing QEMU to exit in
-     * get_page_addr_code().
-     */
-    if (*is_subpage && (*prot & PAGE_EXEC)) {
-        qemu_log_mask(LOG_UNIMP,
-                      "MPU: No support for execution from regions "
-                      "smaller than 1K\n");
-        *prot &= ~PAGE_EXEC;
-    }
     return !(*prot & (1 << access_type));
 }
--
2.18.0
Deleted patch

We set up TLB entries in tlb_set_page_with_attrs(), where we have
some logic for determining whether the TLB entry is considered
to be RAM-backed, and thus has a valid addend field. When we
look at the TLB entry in get_page_addr_code(), we use different
logic for determining whether to treat the page as RAM-backed
and use the addend field. This is confusing, and in fact buggy,
because the code in tlb_set_page_with_attrs() correctly decides
that rom_device memory regions not in romd mode are not RAM-backed,
but the code in get_page_addr_code() thinks they are RAM-backed.
This typically results in "Bad ram pointer" assertion if the
guest tries to execute from such a memory region.

Fix this by making get_page_addr_code() just look at the
TLB_MMIO bit in the code_address field of the TLB, which
tlb_set_page_with_attrs() sets if and only if the addend
field is not valid for code execution.
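A sketch of the general technique, flag bits stored in the low bits of a
page-aligned TLB comparator (the bit values here are illustrative, not
the real cpu-all.h definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_RECHECK (1u << 4)
    #define TLB_MMIO    (1u << 5)

    /* The page setter is the single place that decides whether the
     * addend is valid; readers only test the flag bits it left. */
    static bool executable_from_ram(uint64_t addr_code)
    {
        return (addr_code & (TLB_RECHECK | TLB_MMIO)) == 0;
    }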
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180713150945.12348-1-peter.maydell@linaro.org
---
 include/exec/exec-all.h |  2 --
 accel/tcg/cputlb.c      | 29 ++++++++---------------------
 exec.c                  |  6 ------
 3 files changed, 8 insertions(+), 29 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index XXXXXXX..XXXXXXX 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -XXX,XX +XXX,XX @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
                                        hwaddr paddr, hwaddr xlat,
                                        int prot,
                                        target_ulong *address);
-bool memory_region_is_unassigned(MemoryRegion *mr);
-
 #endif

 /* vl.c */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
 {
     int mmu_idx, index;
     void *p;
-    MemoryRegion *mr;
-    MemoryRegionSection *section;
-    CPUState *cpu = ENV_GET_CPU(env);
-    CPUIOTLBEntry *iotlbentry;

     index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     mmu_idx = cpu_mmu_index(env, true);
@@ -XXX,XX +XXX,XX @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
         assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
     }

-    if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
+    if (unlikely(env->tlb_table[mmu_idx][index].addr_code &
+                 (TLB_RECHECK | TLB_MMIO))) {
         /*
-         * This is a TLB_RECHECK access, where the MMU protection
-         * covers a smaller range than a target page. Return -1 to
-         * indicate that we cannot simply execute from RAM here;
-         * we will perform the necessary repeat of the MMU check
-         * when the "execute a single insn" code performs the
-         * load of the guest insn.
+         * Return -1 if we can't translate and execute from an entire
+         * page of RAM here, which will cause us to execute by loading
+         * and translating one insn at a time, without caching:
+         *  - TLB_RECHECK: means the MMU protection covers a smaller range
+         *    than a target page, so we must redo the MMU check every insn
+         *  - TLB_MMIO: region is not backed by RAM
          */
         return -1;
     }

-    iotlbentry = &env->iotlb[mmu_idx][index];
-    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
-    mr = section->mr;
-    if (memory_region_is_unassigned(mr)) {
-        /*
-         * Not guest RAM, so there is no ram_addr_t for it. Return -1,
-         * and we will execute a single insn from this device.
-         */
-        return -1;
-    }
     p = (void *)((uintptr_t)addr + env->tlb_table[mmu_idx][index].addend);
     return qemu_ram_addr_from_host_nofail(p);
 }
diff --git a/exec.c b/exec.c
index XXXXXXX..XXXXXXX 100644
--- a/exec.c
+++ b/exec.c
@@ -XXX,XX +XXX,XX @@ static MemoryRegionSection *phys_page_find(AddressSpaceDispatch *d, hwaddr addr)
         }
     }

-bool memory_region_is_unassigned(MemoryRegion *mr)
-{
-    return mr != &io_mem_rom && mr != &io_mem_notdirty && !mr->rom_device
-        && mr != &io_mem_watch;
-}
-
 /* Called from RCU critical section */
 static MemoryRegionSection *address_space_lookup_region(AddressSpaceDispatch *d,
                                                         hwaddr addr,
--
2.18.0
Deleted patch

From: Luc Michel <luc.michel@greensocs.com>

Implement GICD_ISACTIVERn and GICD_ICACTIVERn registers in the GICv2.
These registers allow setting or clearing the active state of an IRQ in
the distributor.
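The registers follow the usual write-1-to-set / write-1-to-clear pattern;
a minimal sketch (not the actual GIC code):

    #include <stdint.h>

    /* Writing 1 to a bit sets (or clears) that IRQ's active flag;
     * writing 0 leaves it alone. */
    static void isactiver_write(uint32_t *active, uint32_t value)
    {
        *active |= value;    /* GICD_ISACTIVERn: set-active */
    }

    static void icactiver_write(uint32_t *active, uint32_t value)
    {
        *active &= ~value;   /* GICD_ICACTIVERn: clear-active */
    }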
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-3-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 61 +++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 57 insertions(+), 4 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static uint32_t gic_dist_readb(void *opaque, hwaddr offset, MemTxAttrs attrs)
             }
         }
     } else if (offset < 0x400) {
-        /* Interrupt Active. */
-        irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
+        /* Interrupt Set/Clear Active. */
+        if (offset < 0x380) {
+            irq = (offset - 0x300) * 8;
+        } else if (s->revision == 2) {
+            irq = (offset - 0x380) * 8;
+        } else {
+            goto bad_reg;
+        }
+
+        irq += GIC_BASE_IRQ;
         if (irq >= s->num_irq)
             goto bad_reg;
         res = 0;
@@ -XXX,XX +XXX,XX @@ static void gic_dist_writeb(void *opaque, hwaddr offset,
                 GIC_DIST_CLEAR_PENDING(irq + i, ALL_CPU_MASK);
             }
         }
+    } else if (offset < 0x380) {
+        /* Interrupt Set Active. */
+        if (s->revision != 2) {
+            goto bad_reg;
+        }
+
+        irq = (offset - 0x300) * 8 + GIC_BASE_IRQ;
+        if (irq >= s->num_irq) {
+            goto bad_reg;
+        }
+
+        /* This register is banked per-cpu for PPIs */
+        int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
+
+        for (i = 0; i < 8; i++) {
+            if (s->security_extn && !attrs.secure &&
+                !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
+                continue; /* Ignore Non-secure access of Group0 IRQ */
+            }
+
+            if (value & (1 << i)) {
+                GIC_DIST_SET_ACTIVE(irq + i, cm);
+            }
+        }
     } else if (offset < 0x400) {
-        /* Interrupt Active. */
-        goto bad_reg;
+        /* Interrupt Clear Active. */
+        if (s->revision != 2) {
+            goto bad_reg;
+        }
+
+        irq = (offset - 0x380) * 8 + GIC_BASE_IRQ;
+        if (irq >= s->num_irq) {
+            goto bad_reg;
+        }
+
+        /* This register is banked per-cpu for PPIs */
+        int cm = irq < GIC_INTERNAL ? (1 << cpu) : ALL_CPU_MASK;
+
+        for (i = 0; i < 8; i++) {
+            if (s->security_extn && !attrs.secure &&
+                !GIC_DIST_TEST_GROUP(irq + i, 1 << cpu)) {
+                continue; /* Ignore Non-secure access of Group0 IRQ */
+            }
+
+            if (value & (1 << i)) {
+                GIC_DIST_CLEAR_ACTIVE(irq + i, cm);
+            }
+        }
     } else if (offset < 0x800) {
         /* Interrupt Priority. */
         irq = (offset - 0x400) + GIC_BASE_IRQ;
--
2.18.0
Deleted patch

From: Luc Michel <luc.michel@greensocs.com>

Provide a VMSTATE_UINT16_SUB_ARRAY macro to save a uint16_t sub-array in
a VMState.
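A hypothetical usage sketch, assuming migration/vmstate.h is included
("MyDevState" and "regs" are invented for illustration):

    typedef struct MyDevState {
        uint16_t regs[16];
    } MyDevState;

    static const VMStateDescription vmstate_mydev = {
        .name = "mydev",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            /* migrate only elements [4, 12) of regs */
            VMSTATE_UINT16_SUB_ARRAY(regs, MyDevState, 4, 8),
            VMSTATE_END_OF_LIST()
        }
    };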
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727095421.386-5-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/migration/vmstate.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index XXXXXXX..XXXXXXX 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -XXX,XX +XXX,XX @@ extern const VMStateInfo vmstate_info_qtailq;
 #define VMSTATE_UINT16_ARRAY(_f, _s, _n) \
     VMSTATE_UINT16_ARRAY_V(_f, _s, _n, 0)

+#define VMSTATE_UINT16_SUB_ARRAY(_f, _s, _start, _num) \
+    VMSTATE_SUB_ARRAY(_f, _s, _start, _num, 0, vmstate_info_uint16, uint16_t)
+
 #define VMSTATE_UINT16_2DARRAY(_f, _s, _n1, _n2) \
     VMSTATE_UINT16_2DARRAY_V(_f, _s, _n1, _n2, 0)

--
2.18.0
Deleted patch

From: Luc Michel <luc.michel@greensocs.com>

Add the register definitions for the virtual interface of the GICv2.
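A minimal sketch of how such definitions are consumed, assuming
hw/registerfields.h (each FIELD() generates shift/length/mask constants
plus extract/deposit helpers):

    static uint32_t gich_vmcr_primask_roundtrip(void)
    {
        uint32_t vmcr = 0;
        /* deposit a value into the VMPriMask field of GICH_VMCR */
        vmcr = FIELD_DP32(vmcr, GICH_VMCR, VMPriMask, 0x1f);
        /* extract it back out again -> 0x1f */
        return FIELD_EX32(vmcr, GICH_VMCR, VMPriMask);
    }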
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
6
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
7
Message-id: 20180727095421.386-7-luc.michel@greensocs.com
8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
9
---
10
hw/intc/gic_internal.h | 65 ++++++++++++++++++++++++++++++++++++++++++
11
1 file changed, 65 insertions(+)
12
13
diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
14
index XXXXXXX..XXXXXXX 100644
15
--- a/hw/intc/gic_internal.h
16
+++ b/hw/intc/gic_internal.h
17
@@ -XXX,XX +XXX,XX @@
18
#ifndef QEMU_ARM_GIC_INTERNAL_H
19
#define QEMU_ARM_GIC_INTERNAL_H
20
21
+#include "hw/registerfields.h"
22
#include "hw/intc/arm_gic.h"
23
24
#define ALL_CPU_MASK ((unsigned)(((1 << GIC_NCPU) - 1)))
25
@@ -XXX,XX +XXX,XX @@
26
#define GICC_CTLR_EOIMODE (1U << 9)
27
#define GICC_CTLR_EOIMODE_NS (1U << 10)
28
29
+REG32(GICH_HCR, 0x0)
30
+ FIELD(GICH_HCR, EN, 0, 1)
31
+ FIELD(GICH_HCR, UIE, 1, 1)
32
+ FIELD(GICH_HCR, LRENPIE, 2, 1)
33
+ FIELD(GICH_HCR, NPIE, 3, 1)
34
+ FIELD(GICH_HCR, VGRP0EIE, 4, 1)
35
+ FIELD(GICH_HCR, VGRP0DIE, 5, 1)
36
+ FIELD(GICH_HCR, VGRP1EIE, 6, 1)
37
+ FIELD(GICH_HCR, VGRP1DIE, 7, 1)
38
+ FIELD(GICH_HCR, EOICount, 27, 5)
39
+
40
+#define GICH_HCR_MASK \
41
+ (R_GICH_HCR_EN_MASK | R_GICH_HCR_UIE_MASK | \
42
+ R_GICH_HCR_LRENPIE_MASK | R_GICH_HCR_NPIE_MASK | \
43
+ R_GICH_HCR_VGRP0EIE_MASK | R_GICH_HCR_VGRP0DIE_MASK | \
44
+ R_GICH_HCR_VGRP1EIE_MASK | R_GICH_HCR_VGRP1DIE_MASK | \
45
+ R_GICH_HCR_EOICount_MASK)
46
+
47
+REG32(GICH_VTR, 0x4)
48
+ FIELD(GICH_VTR, ListRegs, 0, 6)
49
+ FIELD(GICH_VTR, PREbits, 26, 3)
50
+ FIELD(GICH_VTR, PRIbits, 29, 3)
51
+
52
+REG32(GICH_VMCR, 0x8)
53
+ FIELD(GICH_VMCR, VMCCtlr, 0, 10)
54
+ FIELD(GICH_VMCR, VMABP, 18, 3)
55
+ FIELD(GICH_VMCR, VMBP, 21, 3)
56
+ FIELD(GICH_VMCR, VMPriMask, 27, 5)
57
+
58
+REG32(GICH_MISR, 0x10)
59
+ FIELD(GICH_MISR, EOI, 0, 1)
60
+ FIELD(GICH_MISR, U, 1, 1)
61
+ FIELD(GICH_MISR, LRENP, 2, 1)
62
+ FIELD(GICH_MISR, NP, 3, 1)
63
+ FIELD(GICH_MISR, VGrp0E, 4, 1)
64
+ FIELD(GICH_MISR, VGrp0D, 5, 1)
65
+ FIELD(GICH_MISR, VGrp1E, 6, 1)
66
+ FIELD(GICH_MISR, VGrp1D, 7, 1)
67
+
68
+REG32(GICH_EISR0, 0x20)
69
+REG32(GICH_EISR1, 0x24)
70
+REG32(GICH_ELRSR0, 0x30)
71
+REG32(GICH_ELRSR1, 0x34)
72
+REG32(GICH_APR, 0xf0)
73
+
74
+REG32(GICH_LR0, 0x100)
75
+ FIELD(GICH_LR0, VirtualID, 0, 10)
76
+ FIELD(GICH_LR0, PhysicalID, 10, 10)
77
+ FIELD(GICH_LR0, CPUID, 10, 3)
78
+ FIELD(GICH_LR0, EOI, 19, 1)
79
+ FIELD(GICH_LR0, Priority, 23, 5)
80
+ FIELD(GICH_LR0, State, 28, 2)
81
+ FIELD(GICH_LR0, Grp1, 30, 1)
82
+ FIELD(GICH_LR0, HW, 31, 1)
83
+
84
+/* Last LR register */
85
+REG32(GICH_LR63, 0x1fc)
86
+
87
+#define GICH_LR_MASK \
88
+ (R_GICH_LR0_VirtualID_MASK | R_GICH_LR0_PhysicalID_MASK | \
89
+ R_GICH_LR0_CPUID_MASK | R_GICH_LR0_EOI_MASK | \
90
+ R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
91
+ R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)
92
+
93
/* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
94
* GICv2 and GICv2 with security extensions:
95
*/
96
--
97
2.18.0
98
99
diff view generated by jsdifflib
Deleted patch

From: Luc Michel <luc.michel@greensocs.com>

Add some helper macros and functions related to the virtualization
extensions to gic_internal.h.

The GICH_LR_* macros help extracting specific fields of a list register
value. The only tricky one is the priority field, as only the MSBs are
stored. The value must be shifted accordingly to obtain the correct
priority value.

gic_is_vcpu() and gic_get_vcpu_real_id() help with (v)CPU id manipulation
to abstract the fact that vCPU ids are in the range
[GIC_NCPU, GIC_NCPU + num_cpu).

gic_lr_* and gic_virq_is_valid() help with the list registers.
gic_get_lr_entry() returns the LR entry for a given (vCPU, irq) pair. It
is meant to be used in contexts where we know for sure that the entry
exists, so we assert that the entry is actually found, and the caller can
avoid the NULL check on the returned pointer.
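A sketch of the priority reconstruction described above (field position
per GICH_LR0; this is an illustration, not the actual implementation):

    #include <stdint.h>

    /* The LR stores only the 5 most-significant bits of the 8-bit
     * priority, so reading it back requires a << 3, which is exactly
     * what GICH_LR_PRIORITY does. */
    static uint8_t lr_priority(uint32_t lr)
    {
        uint32_t field = (lr >> 23) & 0x1f;   /* bits [27:23] */
        return (uint8_t)(field << 3);         /* restore 8-bit priority */
    }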
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-8-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/gic_internal.h | 74 ++++++++++++++++++++++++++++++++++++++++++
 hw/intc/arm_gic.c      |  5 +++
 2 files changed, 79 insertions(+)

diff --git a/hw/intc/gic_internal.h b/hw/intc/gic_internal.h
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/gic_internal.h
+++ b/hw/intc/gic_internal.h
@@ -XXX,XX +XXX,XX @@ REG32(GICH_LR63, 0x1fc)
      R_GICH_LR0_Priority_MASK | R_GICH_LR0_State_MASK | \
      R_GICH_LR0_Grp1_MASK | R_GICH_LR0_HW_MASK)

+#define GICH_LR_STATE_INVALID         0
+#define GICH_LR_STATE_PENDING         1
+#define GICH_LR_STATE_ACTIVE          2
+#define GICH_LR_STATE_ACTIVE_PENDING  3
+
+#define GICH_LR_VIRT_ID(entry) (FIELD_EX32(entry, GICH_LR0, VirtualID))
+#define GICH_LR_PHYS_ID(entry) (FIELD_EX32(entry, GICH_LR0, PhysicalID))
+#define GICH_LR_CPUID(entry) (FIELD_EX32(entry, GICH_LR0, CPUID))
+#define GICH_LR_EOI(entry) (FIELD_EX32(entry, GICH_LR0, EOI))
+#define GICH_LR_PRIORITY(entry) (FIELD_EX32(entry, GICH_LR0, Priority) << 3)
+#define GICH_LR_STATE(entry) (FIELD_EX32(entry, GICH_LR0, State))
+#define GICH_LR_GROUP(entry) (FIELD_EX32(entry, GICH_LR0, Grp1))
+#define GICH_LR_HW(entry) (FIELD_EX32(entry, GICH_LR0, HW))
+
 /* Valid bits for GICC_CTLR for GICv1, v1 with security extensions,
  * GICv2 and GICv2 with security extensions:
  */
@@ -XXX,XX +XXX,XX @@ static inline bool gic_is_vcpu(int cpu)
     return cpu >= GIC_NCPU;
 }

+static inline int gic_get_vcpu_real_id(int cpu)
+{
+    return (cpu >= GIC_NCPU) ? (cpu - GIC_NCPU) : cpu;
+}
+
+/* Return true if the given vIRQ state exists in a LR and is either active or
+ * pending and active.
+ *
+ * This function is used to check that a guest's `end of interrupt' or
+ * `interrupts deactivation' request is valid, and matches with a LR of an
+ * already acknowledged vIRQ (i.e. has the active bit set in its state).
+ */
+static inline bool gic_virq_is_valid(GICState *s, int irq, int vcpu)
+{
+    int cpu = gic_get_vcpu_real_id(vcpu);
+    int lr_idx;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if ((GICH_LR_VIRT_ID(*entry) == irq) &&
+            (GICH_LR_STATE(*entry) & GICH_LR_STATE_ACTIVE)) {
+            return true;
+        }
+    }
+
+    return false;
+}
+
+/* Return a pointer on the LR entry matching the given vIRQ.
+ *
+ * This function is used to retrieve an LR for which we know for sure that the
+ * corresponding vIRQ exists in the current context (i.e. its current state is
+ * not `invalid'):
+ *   - Either the corresponding vIRQ has been validated with gic_virq_is_valid()
+ *     so it is `active' or `active and pending',
+ *   - Or it was pending and has been selected by gic_get_best_virq(). It is now
+ *     `pending', `active' or `active and pending', depending on what the guest
+ *     already did with this vIRQ.
+ *
+ * Having multiple LRs with the same VirtualID leads to UNPREDICTABLE
+ * behaviour in the GIC. We choose to return the first one that matches.
+ */
+static inline uint32_t *gic_get_lr_entry(GICState *s, int irq, int vcpu)
+{
+    int cpu = gic_get_vcpu_real_id(vcpu);
+    int lr_idx;
+
+    for (lr_idx = 0; lr_idx < s->num_lrs; lr_idx++) {
+        uint32_t *entry = &s->h_lr[lr_idx][cpu];
+
+        if ((GICH_LR_VIRT_ID(*entry) == irq) &&
+            (GICH_LR_STATE(*entry) != GICH_LR_STATE_INVALID)) {
+            return entry;
+        }
+    }
+
+    g_assert_not_reached();
+}
+
 #endif /* QEMU_ARM_GIC_INTERNAL_H */
diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static inline int gic_get_current_cpu(GICState *s)
     return 0;
 }

+static inline int gic_get_current_vcpu(GICState *s)
+{
+    return gic_get_current_cpu(s) + GIC_NCPU;
+}
+
 /* Return true if this GIC config has interrupt groups, which is
  * true if we're a GICv2, or a GICv1 with the security extensions.
  */
--
2.18.0
From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in gic_activate_irq() and
gic_drop_prio(), and in gic_get_prio_from_apr_bits(), which is called
by gic_drop_prio().

When the current CPU is a vCPU:
- use GIC_VIRT_MIN_BPR and GIC_VIRT_NR_APRS instead of their non-virt
  counterparts,
- the vCPU APR is stored in the virtual interface, in h_apr.
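(Background sketch, not part of the patch: the APR is a bitmask of
active preemption levels; the running priority is derived from the
lowest set bit, and dropping a priority clears that bit. Plain C,
using __builtin_ctz() where QEMU uses ctz32():)

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t apr = 0;

        apr |= 1u << 3;    /* IRQ at preemption level 3 activated */
        apr |= 1u << 1;    /* then preempted by one at level 1 */
        printf("running level: %d\n", __builtin_ctz(apr));  /* -> 1 */

        apr &= apr - 1;    /* drop priority: clear lowest set bit */
        printf("running level: %d\n", __builtin_ctz(apr));  /* -> 3 */
        return 0;
    }
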
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-11-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 50 +++++++++++++++++++++++++++++++++------------
 1 file changed, 38 insertions(+), 12 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_activate_irq(GICState *s, int cpu, int irq)
      * and update the running priority.
      */
     int prio = gic_get_group_priority(s, cpu, irq);
-    int preemption_level = prio >> (GIC_MIN_BPR + 1);
+    int min_bpr = gic_is_vcpu(cpu) ? GIC_VIRT_MIN_BPR : GIC_MIN_BPR;
+    int preemption_level = prio >> (min_bpr + 1);
     int regno = preemption_level / 32;
     int bitno = preemption_level % 32;
+    uint32_t *papr = NULL;
 
-    if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
-        s->nsapr[regno][cpu] |= (1 << bitno);
+    if (gic_is_vcpu(cpu)) {
+        assert(regno == 0);
+        papr = &s->h_apr[gic_get_vcpu_real_id(cpu)];
+    } else if (gic_has_groups(s) && gic_test_group(s, irq, cpu)) {
+        papr = &s->nsapr[regno][cpu];
     } else {
-        s->apr[regno][cpu] |= (1 << bitno);
+        papr = &s->apr[regno][cpu];
     }
 
+    *papr |= (1 << bitno);
+
     s->running_priority[cpu] = prio;
     gic_set_active(s, irq, cpu);
 }
@@ -XXX,XX +XXX,XX @@ static int gic_get_prio_from_apr_bits(GICState *s, int cpu)
      * on the set bits in the Active Priority Registers.
      */
     int i;
+
+    if (gic_is_vcpu(cpu)) {
+        uint32_t apr = s->h_apr[gic_get_vcpu_real_id(cpu)];
+        if (apr) {
+            return ctz32(apr) << (GIC_VIRT_MIN_BPR + 1);
+        } else {
+            return 0x100;
+        }
+    }
+
     for (i = 0; i < GIC_NR_APRS; i++) {
         uint32_t apr = s->apr[i][cpu] | s->nsapr[i][cpu];
         if (!apr) {
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
      * running priority will be wrong, so interrupts that should preempt
      * might not do so, and interrupts that should not preempt might do so.
      */
-    int i;
+    if (gic_is_vcpu(cpu)) {
+        int rcpu = gic_get_vcpu_real_id(cpu);
 
-    for (i = 0; i < GIC_NR_APRS; i++) {
-        uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
-        if (!*papr) {
-            continue;
+        if (s->h_apr[rcpu]) {
+            /* Clear lowest set bit */
+            s->h_apr[rcpu] &= s->h_apr[rcpu] - 1;
+        }
+    } else {
+        int i;
+
+        for (i = 0; i < GIC_NR_APRS; i++) {
+            uint32_t *papr = group ? &s->nsapr[i][cpu] : &s->apr[i][cpu];
+            if (!*papr) {
+                continue;
+            }
+            /* Clear lowest set bit */
+            *papr &= *papr - 1;
+            break;
         }
-        /* Clear lowest set bit */
-        *papr &= *papr - 1;
-        break;
     }
 
     s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
--
2.18.0

From: Luc Michel <luc.michel@greensocs.com>

Implement virtualization extensions in the gic_acknowledge_irq()
function. This function changes the state of the highest priority IRQ
from pending to active.

When the current CPU is a vCPU, modifying the state of an IRQ modifies
the corresponding LR entry. However, if we clear the pending flag before
setting the active one, we lose track of the LR entry as it becomes
invalid. The next call to gic_get_lr_entry() will fail.

To overcome this issue, we call gic_activate_irq() before
gic_clear_pending(). This does not change the general behaviour of
gic_acknowledge_irq().

We also move the SGI case into gic_clear_pending_sgi() to enhance
code readability, as the virtualization extensions support adds an
if-else level.
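(Standalone sketch, not part of the patch: the value returned by
gic_acknowledge_irq() packs the source CPU of an SGI into bits [12:10]
above the 10-bit interrupt ID, as in the `irq | ((src & 0x7) << 10)'
expression below.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t iar = 5u | ((3u & 0x7) << 10); /* SGI 5, requested by CPU 3 */

        printf("intid=%u src_cpu=%u\n", iar & 0x3ff, (iar >> 10) & 0x7);
        return 0;
    }
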
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-12-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/intc/arm_gic.c | 52 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 19 deletions(-)

diff --git a/hw/intc/arm_gic.c b/hw/intc/arm_gic.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/intc/arm_gic.c
+++ b/hw/intc/arm_gic.c
@@ -XXX,XX +XXX,XX @@ static void gic_drop_prio(GICState *s, int cpu, int group)
     s->running_priority[cpu] = gic_get_prio_from_apr_bits(s, cpu);
 }
 
+static inline uint32_t gic_clear_pending_sgi(GICState *s, int irq, int cpu)
+{
+    int src;
+    uint32_t ret;
+
+    if (!gic_is_vcpu(cpu)) {
+        /* Lookup the source CPU for the SGI and clear this in the
+         * sgi_pending map. Return the src and clear the overall pending
+         * state on this CPU if the SGI is not pending from any CPUs.
+         */
+        assert(s->sgi_pending[irq][cpu] != 0);
+        src = ctz32(s->sgi_pending[irq][cpu]);
+        s->sgi_pending[irq][cpu] &= ~(1 << src);
+        if (s->sgi_pending[irq][cpu] == 0) {
+            gic_clear_pending(s, irq, cpu);
+        }
+        ret = irq | ((src & 0x7) << 10);
+    } else {
+        uint32_t *lr_entry = gic_get_lr_entry(s, irq, cpu);
+        src = GICH_LR_CPUID(*lr_entry);
+
+        gic_clear_pending(s, irq, cpu);
+        ret = irq | (src << 10);
+    }
+
+    return ret;
+}
+
 uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
 {
-    int ret, irq, src;
-    int cm = 1 << cpu;
+    int ret, irq;
 
     /* gic_get_current_pending_irq() will return 1022 or 1023 appropriately
      * for the case where this GIC supports grouping and the pending interrupt
      * is in the wrong group.
      */
     irq = gic_get_current_pending_irq(s, cpu, attrs);
-    trace_gic_acknowledge_irq(cpu, irq);
+    trace_gic_acknowledge_irq(gic_get_vcpu_real_id(cpu), irq);
 
     if (irq >= GIC_MAXIRQ) {
         DPRINTF("ACK, no pending interrupt or it is hidden: %d\n", irq);
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         return 1023;
     }
 
+    gic_activate_irq(s, cpu, irq);
+
     if (s->revision == REV_11MPCORE) {
         /* Clear pending flags for both level and edge triggered interrupts.
          * Level triggered IRQs will be reasserted once they become inactive.
@@ -XXX,XX +XXX,XX @@ uint32_t gic_acknowledge_irq(GICState *s, int cpu, MemTxAttrs attrs)
         ret = irq;
     } else {
         if (irq < GIC_NR_SGIS) {
-            /* Lookup the source CPU for the SGI and clear this in the
-             * sgi_pending map. Return the src and clear the overall pending
-             * state on this CPU if the SGI is not pending from any CPUs.
-             */
-            assert(s->sgi_pending[irq][cpu] != 0);
-            src = ctz32(s->sgi_pending[irq][cpu]);
-            s->sgi_pending[irq][cpu] &= ~(1 << src);
-            if (s->sgi_pending[irq][cpu] == 0) {
-                gic_clear_pending(s, irq, cpu);
-            }
-            ret = irq | ((src & 0x7) << 10);
+            ret = gic_clear_pending_sgi(s, irq, cpu);
         } else {
-            /* Clear pending state for both level and edge triggered
-             * interrupts. (level triggered interrupts with an active line
-             * remain pending, see gic_test_pending)
-             */
             gic_clear_pending(s, irq, cpu);
             ret = irq;
         }
     }
 
-    gic_activate_irq(s, cpu, irq);
     gic_update(s);
     DPRINTF("ACK %d\n", irq);
     return ret;
--
2.18.0

From: Luc Michel <luc.michel@greensocs.com>

This commit improves the way the GIC is realized and connected in the
ZynqMP SoC. The security extensions are enabled only if requested in the
machine state. The same goes for the virtualization extensions.

All the GIC to APU CPU(s) IRQ lines are now connected, including FIQ,
vIRQ and vFIQ. The missing CPU to GIC timer IRQ connections are also
added (HYP and SEC timers).

The GIC maintenance IRQs are back-wired to the correct GIC PPIs.

Finally, the MMIO mappings are reworked to take into account the ZynqMP
specifics. The GIC (v)CPU interface is aliased 16 times:
  * for the first 0x1000 bytes from 0xf9010000 to 0xf901f000
  * for the second 0x1000 bytes from 0xf9020000 to 0xf902f000
The virtual interface and virtual CPU interface are mapped only when
the virtualization extensions are requested. The XlnxZynqMPGICRegion
struct has been enhanced to carry all this information.
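(Standalone sketch, not part of the patch: the under-decoded 64k window
is covered by mapping sixteen 4k aliases at consecutive 0x1000 offsets.
A toy loop printing the alias addresses, using GIC_CPU_ADDR from the
patch as the example base:)

    #include <stdio.h>

    int main(void)
    {
        const unsigned base = 0xf9020000;  /* GIC_CPU_ADDR in the patch */
        const unsigned size = 0x1000, naliases = 16;

        for (unsigned j = 0; j < naliases; j++) {
            /* each 4k page maps back to the same underlying region */
            printf("alias %2u: 0x%08x..0x%08x\n",
                   j, base + j * size, base + (j + 1) * size - 1);
        }
        return 0;
    }
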
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 20180727095421.386-20-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-zynqmp.h |  4 +-
 hw/arm/xlnx-zynqmp.c         | 92 ++++++++++++++++++++++++++++++++----
 2 files changed, 86 insertions(+), 10 deletions(-)

diff --git a/include/hw/arm/xlnx-zynqmp.h b/include/hw/arm/xlnx-zynqmp.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-zynqmp.h
+++ b/include/hw/arm/xlnx-zynqmp.h
@@ -XXX,XX +XXX,XX @@
 #define XLNX_ZYNQMP_OCM_RAM_0_ADDRESS 0xFFFC0000
 #define XLNX_ZYNQMP_OCM_RAM_SIZE 0x10000
 
-#define XLNX_ZYNQMP_GIC_REGIONS 2
+#define XLNX_ZYNQMP_GIC_REGIONS 6
 
 /* ZynqMP maps the ARM GIC regions (GICC, GICD ...) at consecutive 64k offsets
  * and under-decodes the 64k region. This mirrors the 4k regions to every 4k
@@ -XXX,XX +XXX,XX @@
  */
 
 #define XLNX_ZYNQMP_GIC_REGION_SIZE 0x1000
-#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE - 1)
+#define XLNX_ZYNQMP_GIC_ALIASES (0x10000 / XLNX_ZYNQMP_GIC_REGION_SIZE)
 
 #define XLNX_ZYNQMP_MAX_LOW_RAM_SIZE 0x80000000ull
 
diff --git a/hw/arm/xlnx-zynqmp.c b/hw/arm/xlnx-zynqmp.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-zynqmp.c
+++ b/hw/arm/xlnx-zynqmp.c
@@ -XXX,XX +XXX,XX @@
 
 #define ARM_PHYS_TIMER_PPI 30
 #define ARM_VIRT_TIMER_PPI 27
+#define ARM_HYP_TIMER_PPI 26
+#define ARM_SEC_TIMER_PPI 29
+#define GIC_MAINTENANCE_PPI 25
 
 #define GEM_REVISION 0x40070106
 
 #define GIC_BASE_ADDR 0xf9000000
 #define GIC_DIST_ADDR 0xf9010000
 #define GIC_CPU_ADDR 0xf9020000
+#define GIC_VIFACE_ADDR 0xf9040000
+#define GIC_VCPU_ADDR 0xf9060000
 
 #define SATA_INTR 133
 #define SATA_ADDR 0xFD0C0000
@@ -XXX,XX +XXX,XX @@ static const int adma_ch_intr[XLNX_ZYNQMP_NUM_ADMA_CH] = {
 typedef struct XlnxZynqMPGICRegion {
     int region_index;
     uint32_t address;
+    uint32_t offset;
+    bool virt;
 } XlnxZynqMPGICRegion;
 
 static const XlnxZynqMPGICRegion xlnx_zynqmp_gic_regions[] = {
-    { .region_index = 0, .address = GIC_DIST_ADDR, },
-    { .region_index = 1, .address = GIC_CPU_ADDR, },
+    /* Distributor */
+    {
+        .region_index = 0,
+        .address = GIC_DIST_ADDR,
+        .offset = 0,
+        .virt = false
+    },
+
+    /* CPU interface */
+    {
+        .region_index = 1,
+        .address = GIC_CPU_ADDR,
+        .offset = 0,
+        .virt = false
+    },
+    {
+        .region_index = 1,
+        .address = GIC_CPU_ADDR + 0x10000,
+        .offset = 0x1000,
+        .virt = false
+    },
+
+    /* Virtual interface */
+    {
+        .region_index = 2,
+        .address = GIC_VIFACE_ADDR,
+        .offset = 0,
+        .virt = true
+    },
+
+    /* Virtual CPU interface */
+    {
+        .region_index = 3,
+        .address = GIC_VCPU_ADDR,
+        .offset = 0,
+        .virt = true
+    },
+    {
+        .region_index = 3,
+        .address = GIC_VCPU_ADDR + 0x10000,
+        .offset = 0x1000,
+        .virt = true
+    },
 };
 
 static inline int arm_gic_ppi_index(int cpu_nr, int ppi_index)
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
     qdev_prop_set_uint32(DEVICE(&s->gic), "num-irq", GIC_NUM_SPI_INTR + 32);
     qdev_prop_set_uint32(DEVICE(&s->gic), "revision", 2);
     qdev_prop_set_uint32(DEVICE(&s->gic), "num-cpu", num_apus);
+    qdev_prop_set_bit(DEVICE(&s->gic), "has-security-extensions", s->secure);
+    qdev_prop_set_bit(DEVICE(&s->gic),
+                      "has-virtualization-extensions", s->virt);
 
     /* Realize APUs before realizing the GIC. KVM requires this. */
     for (i = 0; i < num_apus; i++) {
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
     for (i = 0; i < XLNX_ZYNQMP_GIC_REGIONS; i++) {
         SysBusDevice *gic = SYS_BUS_DEVICE(&s->gic);
         const XlnxZynqMPGICRegion *r = &xlnx_zynqmp_gic_regions[i];
-        MemoryRegion *mr = sysbus_mmio_get_region(gic, r->region_index);
+        MemoryRegion *mr;
         uint32_t addr = r->address;
         int j;
 
-        sysbus_mmio_map(gic, r->region_index, addr);
+        if (r->virt && !s->virt) {
+            continue;
+        }
 
+        mr = sysbus_mmio_get_region(gic, r->region_index);
         for (j = 0; j < XLNX_ZYNQMP_GIC_ALIASES; j++) {
             MemoryRegion *alias = &s->gic_mr[i][j];
 
-            addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
             memory_region_init_alias(alias, OBJECT(s), "zynqmp-gic-alias", mr,
-                                     0, XLNX_ZYNQMP_GIC_REGION_SIZE);
+                                     r->offset, XLNX_ZYNQMP_GIC_REGION_SIZE);
             memory_region_add_subregion(system_memory, addr, alias);
+
+            addr += XLNX_ZYNQMP_GIC_REGION_SIZE;
         }
     }
 
@@ -XXX,XX +XXX,XX @@ static void xlnx_zynqmp_realize(DeviceState *dev, Error **errp)
         sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i,
                            qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
                                             ARM_CPU_IRQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_FIQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 2,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_VIRQ));
+        sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 3,
+                           qdev_get_gpio_in(DEVICE(&s->apu_cpu[i]),
+                                            ARM_CPU_VFIQ));
         irq = qdev_get_gpio_in(DEVICE(&s->gic),
                                arm_gic_ppi_index(i, ARM_PHYS_TIMER_PPI));
-        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 0, irq);
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_PHYS, irq);
         irq = qdev_get_gpio_in(DEVICE(&s->gic),
                                arm_gic_ppi_index(i, ARM_VIRT_TIMER_PPI));
-        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), 1, irq);
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_VIRT, irq);
+        irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                               arm_gic_ppi_index(i, ARM_HYP_TIMER_PPI));
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_HYP, irq);
+        irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                               arm_gic_ppi_index(i, ARM_SEC_TIMER_PPI));
+        qdev_connect_gpio_out(DEVICE(&s->apu_cpu[i]), GTIMER_SEC, irq);
+
+        if (s->virt) {
+            irq = qdev_get_gpio_in(DEVICE(&s->gic),
+                                   arm_gic_ppi_index(i, GIC_MAINTENANCE_PPI));
+            sysbus_connect_irq(SYS_BUS_DEVICE(&s->gic), i + num_apus * 4, irq);
+        }
     }
 
     if (s->has_rpu) {
--
2.18.0

From: Luc Michel <luc.michel@greensocs.com>

Add support for GICv2 virtualization extensions by mapping the necessary
I/O regions and connecting the maintenance IRQ lines.

Declare those additions in the device tree and in the ACPI tables.
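(Standalone sketch, not part of the patch: with the two new regions
below, a GICv2 virt machine exposes four GIC windows; the values are
the a15memmap entries from the patch.)

    #include <stdio.h>

    int main(void)
    {
        static const struct { const char *name; unsigned base, size; } m[] = {
            { "GICD (dist)", 0x08000000, 0x10000 },
            { "GICC (cpu)",  0x08010000, 0x10000 },
            { "GICH (hyp)",  0x08030000, 0x10000 },
            { "GICV (vcpu)", 0x08040000, 0x10000 },
        };

        for (unsigned i = 0; i < 4; i++) {
            printf("%-12s 0x%08x +0x%05x\n", m[i].name, m[i].base, m[i].size);
        }
        return 0;
    }
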
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180727095421.386-21-luc.michel@greensocs.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/virt.h    |  4 +++-
 hw/arm/virt-acpi-build.c |  6 +++--
 hw/arm/virt.c            | 52 +++++++++++++++++++++++++++++++++-------
 3 files changed, 50 insertions(+), 12 deletions(-)

diff --git a/include/hw/arm/virt.h b/include/hw/arm/virt.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/virt.h
+++ b/include/hw/arm/virt.h
@@ -XXX,XX +XXX,XX @@
 #define NUM_VIRTIO_TRANSPORTS 32
 #define NUM_SMMU_IRQS 4
 
-#define ARCH_GICV3_MAINT_IRQ 9
+#define ARCH_GIC_MAINT_IRQ 9
 
 #define ARCH_TIMER_VIRT_IRQ 11
 #define ARCH_TIMER_S_EL1_IRQ 13
@@ -XXX,XX +XXX,XX @@ enum {
     VIRT_GIC_DIST,
     VIRT_GIC_CPU,
     VIRT_GIC_V2M,
+    VIRT_GIC_HYP,
+    VIRT_GIC_VCPU,
     VIRT_GIC_ITS,
     VIRT_GIC_REDIST,
     VIRT_GIC_REDIST2,
diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt-acpi-build.c
+++ b/hw/arm/virt-acpi-build.c
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
         gicc->length = sizeof(*gicc);
         if (vms->gic_version == 2) {
             gicc->base_address = cpu_to_le64(memmap[VIRT_GIC_CPU].base);
+            gicc->gich_base_address = cpu_to_le64(memmap[VIRT_GIC_HYP].base);
+            gicc->gicv_base_address = cpu_to_le64(memmap[VIRT_GIC_VCPU].base);
         }
         gicc->cpu_interface_number = cpu_to_le32(i);
         gicc->arm_mpidr = cpu_to_le64(armcpu->mp_affinity);
@@ -XXX,XX +XXX,XX @@ build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
         if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
             gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
         }
-        if (vms->virt && vms->gic_version == 3) {
-            gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GICV3_MAINT_IRQ));
+        if (vms->virt) {
+            gicc->vgic_interrupt = cpu_to_le32(PPI(ARCH_GIC_MAINT_IRQ));
         }
     }
 
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -XXX,XX +XXX,XX @@ static const MemMapEntry a15memmap[] = {
     [VIRT_GIC_DIST] = { 0x08000000, 0x00010000 },
     [VIRT_GIC_CPU] = { 0x08010000, 0x00010000 },
     [VIRT_GIC_V2M] = { 0x08020000, 0x00001000 },
+    [VIRT_GIC_HYP] = { 0x08030000, 0x00010000 },
+    [VIRT_GIC_VCPU] = { 0x08040000, 0x00010000 },
     /* The space in between here is reserved for GICv3 CPU/vCPU/HYP */
     [VIRT_GIC_ITS] = { 0x08080000, 0x00020000 },
     /* This redistributor space allows up to 2*64kB*123 CPUs */
@@ -XXX,XX +XXX,XX @@ static void fdt_add_gic_node(VirtMachineState *vms)
 
         if (vms->virt) {
             qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
-                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GICV3_MAINT_IRQ,
+                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
                                    GIC_FDT_IRQ_FLAGS_LEVEL_HI);
         }
     } else {
         /* 'cortex-a15-gic' means 'GIC v2' */
         qemu_fdt_setprop_string(vms->fdt, nodename, "compatible",
                                 "arm,cortex-a15-gic");
-        qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
-                                     2, vms->memmap[VIRT_GIC_DIST].base,
-                                     2, vms->memmap[VIRT_GIC_DIST].size,
-                                     2, vms->memmap[VIRT_GIC_CPU].base,
-                                     2, vms->memmap[VIRT_GIC_CPU].size);
+        if (!vms->virt) {
+            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+                                         2, vms->memmap[VIRT_GIC_DIST].base,
+                                         2, vms->memmap[VIRT_GIC_DIST].size,
+                                         2, vms->memmap[VIRT_GIC_CPU].base,
+                                         2, vms->memmap[VIRT_GIC_CPU].size);
+        } else {
+            qemu_fdt_setprop_sized_cells(vms->fdt, nodename, "reg",
+                                         2, vms->memmap[VIRT_GIC_DIST].base,
+                                         2, vms->memmap[VIRT_GIC_DIST].size,
+                                         2, vms->memmap[VIRT_GIC_CPU].base,
+                                         2, vms->memmap[VIRT_GIC_CPU].size,
+                                         2, vms->memmap[VIRT_GIC_HYP].base,
+                                         2, vms->memmap[VIRT_GIC_HYP].size,
+                                         2, vms->memmap[VIRT_GIC_VCPU].base,
+                                         2, vms->memmap[VIRT_GIC_VCPU].size);
+            qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupts",
+                                   GIC_FDT_IRQ_TYPE_PPI, ARCH_GIC_MAINT_IRQ,
+                                   GIC_FDT_IRQ_FLAGS_LEVEL_HI);
+        }
     }
 
     qemu_fdt_setprop_cell(vms->fdt, nodename, "phandle", vms->gic_phandle);
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
             qdev_prop_set_uint32(gicdev, "redist-region-count[1]",
                 MIN(smp_cpus - redist0_count, redist1_capacity));
         }
+    } else {
+        if (!kvm_irqchip_in_kernel()) {
+            qdev_prop_set_bit(gicdev, "has-virtualization-extensions",
+                              vms->virt);
+        }
     }
     qdev_init_nofail(gicdev);
     gicbusdev = SYS_BUS_DEVICE(gicdev);
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
         }
     } else {
         sysbus_mmio_map(gicbusdev, 1, vms->memmap[VIRT_GIC_CPU].base);
+        if (vms->virt) {
+            sysbus_mmio_map(gicbusdev, 2, vms->memmap[VIRT_GIC_HYP].base);
+            sysbus_mmio_map(gicbusdev, 3, vms->memmap[VIRT_GIC_VCPU].base);
+        }
     }
 
     /* Wire the outputs from each CPU's generic timer and the GICv3
@@ -XXX,XX +XXX,XX @@ static void create_gic(VirtMachineState *vms, qemu_irq *pic)
                                                    ppibase + timer_irq[irq]));
         }
 
-        qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt", 0,
-                                    qdev_get_gpio_in(gicdev, ppibase
-                                                     + ARCH_GICV3_MAINT_IRQ));
+        if (type == 3) {
+            qemu_irq irq = qdev_get_gpio_in(gicdev,
+                                            ppibase + ARCH_GIC_MAINT_IRQ);
+            qdev_connect_gpio_out_named(cpudev, "gicv3-maintenance-interrupt",
+                                        0, irq);
+        } else if (vms->virt) {
+            qemu_irq irq = qdev_get_gpio_in(gicdev,
+                                            ppibase + ARCH_GIC_MAINT_IRQ);
+            sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus, irq);
+        }
+
         qdev_connect_gpio_out_named(cpudev, "pmu-interrupt", 0,
                                     qdev_get_gpio_in(gicdev, ppibase
                                                      + VIRTUAL_PMU_IRQ));
--
2.18.0

Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdosa(CPUARMState *env, const ARMCPRegInfo *ri,
                                    bool isread)
 {
     int el = arm_current_el(env);
+    bool mdcr_el2_tdosa = (env->cp15.mdcr_el2 & MDCR_TDOSA) ||
+        (env->cp15.mdcr_el2 & MDCR_TDE) ||
+        (env->cp15.hcr_el2 & HCR_TGE);
 
-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDOSA)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tdosa && !arm_is_secure_below_el3(env)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDOSA)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tdra(CPUARMState *env, const ARMCPRegInfo *ri,
                                   bool isread)
 {
     int el = arm_current_el(env);
+    bool mdcr_el2_tdra = (env->cp15.mdcr_el2 & MDCR_TDRA) ||
+        (env->cp15.mdcr_el2 & MDCR_TDE) ||
+        (env->cp15.hcr_el2 & HCR_TGE);
 
-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDRA)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tdra && !arm_is_secure_below_el3(env)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
@@ -XXX,XX +XXX,XX @@ static CPAccessResult access_tda(CPUARMState *env, const ARMCPRegInfo *ri,
                                  bool isread)
 {
     int el = arm_current_el(env);
+    bool mdcr_el2_tda = (env->cp15.mdcr_el2 & MDCR_TDA) ||
+        (env->cp15.mdcr_el2 & MDCR_TDE) ||
+        (env->cp15.hcr_el2 & HCR_TGE);
 
-    if (el < 2 && (env->cp15.mdcr_el2 & MDCR_TDA)
-        && !arm_is_secure_below_el3(env)) {
+    if (el < 2 && mdcr_el2_tda && !arm_is_secure_below_el3(env)) {
         return CP_ACCESS_TRAP_EL2;
     }
     if (el < 3 && (env->cp15.mdcr_el3 & MDCR_TDA)) {
--
2.18.0

When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2. Implement
this in raise_exception() -- all synchronous exceptions go through
this function.

(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)
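(Standalone sketch, not part of the patch: the `syndrome >>
ARM_EL_EC_SHIFT' comparison below extracts the exception class from
the top six bits of the syndrome register; 0x07 is the SIMD/FP access
trap EC that gets rewritten as uncategorized.)

    #include <stdint.h>
    #include <stdio.h>

    #define EC_SHIFT 26                    /* ESR_ELx.EC is bits [31:26] */
    #define EC_ADVSIMDFPACCESSTRAP 0x07

    int main(void)
    {
        uint32_t syndrome = (uint32_t)EC_ADVSIMDFPACCESSTRAP << EC_SHIFT;

        printf("is fp access trap: %d\n",
               (syndrome >> EC_SHIFT) == EC_ADVSIMDFPACCESSTRAP);
        return 0;
    }
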
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
---
 target/arm/op_helper.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -XXX,XX +XXX,XX @@ static void raise_exception(CPUARMState *env, uint32_t excp,
 {
     CPUState *cs = CPU(arm_env_get_cpu(env));
 
+    if ((env->cp15.hcr_el2 & HCR_TGE) &&
+        target_el == 1 && !arm_is_secure(env)) {
+        /*
+         * Redirect NS EL1 exceptions to NS EL2. These are reported with
+         * their original syndrome register value, with the exception of
+         * SIMD/FP access traps, which are reported as uncategorized
+         * (see DDI0478C.a D1.10.4)
+         */
+        target_el = 2;
+        if (syndrome >> ARM_EL_EC_SHIFT == EC_ADVSIMDFPACCESSTRAP) {
+            syndrome = syn_uncategorized();
+        }
+    }
+
     assert(!excp_is_internal(excp));
     cs->exception_index = excp;
     env->exception.syndrome = syndrome;
--
2.18.0

One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
---
 target/arm/helper.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static inline bool regime_translation_disabled(CPUARMState *env,
     if (mmu_idx == ARMMMUIdx_S2NS) {
         return (env->cp15.hcr_el2 & HCR_VM) == 0;
     }
+
+    if (env->cp15.hcr_el2 & HCR_TGE) {
+        /* TGE means that NS EL0/1 act as if SCTLR_EL1.M is zero */
+        if (!regime_is_secure(env, mmu_idx) && regime_el(env, mmu_idx) == 1) {
+            return true;
+        }
+    }
+
     return (regime_sctlr(env, mmu_idx) & SCTLR_M) == 0;
 }
 
--
2.18.0

Currently the microdrive code uses device_legacy_reset() to reset
itself, and has its reset method call reset on the IDE bus as the
last thing it does. Switch to using device_cold_reset().

The only concrete microdrive device is the TYPE_DSCM1XXXX; it is not
command-line pluggable, so it is used only by the old pxa2xx Arm
boards 'akita', 'borzoi', 'spitz', 'terrier' and 'tosa'.

You might think that this would result in the IDE bus being
reset automatically, but it does not, because the IDEBus type
does not set the BusClass::reset method. Instead the controller
must explicitly call ide_bus_reset(). We therefore leave that
call in md_reset().

Note also that because the PCMCIA card device is a direct subclass of
TYPE_DEVICE and we don't model the PCMCIA controller-to-card
interface as a qbus, PCMCIA cards are not on any qbus and so they
don't get reset when the system is reset. The reset only happens via
the dscm1xxxx_attach() and dscm1xxxx_detach() functions during
machine creation.

Because our aim here is merely to try to get rid of calls to the
device_legacy_reset() function, we leave these other dubious
reset-related issues alone. (They all stem from this code being
absolutely ancient.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20221013174042.1602926-1-peter.maydell@linaro.org
---
 hw/ide/microdrive.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/ide/microdrive.c b/hw/ide/microdrive.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ide/microdrive.c
+++ b/hw/ide/microdrive.c
@@ -XXX,XX +XXX,XX @@ static void md_attr_write(PCMCIACardState *card, uint32_t at, uint8_t value)
     case 0x00:    /* Configuration Option Register */
         s->opt = value & 0xcf;
         if (value & OPT_SRESET) {
-            device_legacy_reset(DEVICE(s));
+            device_cold_reset(DEVICE(s));
         }
         md_interrupt_update(s);
         break;
@@ -XXX,XX +XXX,XX @@ static void md_common_write(PCMCIACardState *card, uint32_t at, uint16_t value)
     case 0xe:    /* Device Control */
         s->ctrl = value;
         if (value & CTRL_SRST) {
-            device_legacy_reset(DEVICE(s));
+            device_cold_reset(DEVICE(s));
         }
         md_interrupt_update(s);
         break;
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_attach(PCMCIACardState *card)
     md->attr_base = pcc->cis[0x74] | (pcc->cis[0x76] << 8);
     md->io_base = 0x0;
 
-    device_legacy_reset(DEVICE(md));
+    device_cold_reset(DEVICE(md));
     md_interrupt_update(md);
 
     return 0;
@@ -XXX,XX +XXX,XX @@ static int dscm1xxxx_detach(PCMCIACardState *card)
 {
     MicroDriveState *md = MICRODRIVE(card);
 
-    device_legacy_reset(DEVICE(md));
+    device_cold_reset(DEVICE(md));
     return 0;
 }
 
--
2.25.1

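(Standalone sketch, not part of the patch: a toy model of the
distinction. device_cold_reset() resets the device and recurses into
any child bus that implements a reset method, while the legacy call
resets the device alone; here IDEBus has no reset method either way,
which is why the explicit ide_bus_reset() call stays.)

    #include <stdio.h>

    struct bus { void (*reset)(struct bus *); };
    struct dev { const char *name; struct bus *child_bus; };

    /* legacy-style: the device only */
    static void legacy_reset(struct dev *d)
    {
        printf("reset %s\n", d->name);
    }

    /* cold-reset-style: device, then child bus if it implements reset */
    static void cold_reset(struct dev *d)
    {
        printf("reset %s\n", d->name);
        if (d->child_bus && d->child_bus->reset) {
            d->child_bus->reset(d->child_bus);
        }
    }

    int main(void)
    {
        struct bus ide = { .reset = NULL };  /* like IDEBus: no reset method */
        struct dev md = { "microdrive", &ide };

        legacy_reset(&md);   /* device only */
        cold_reset(&md);     /* same visible effect, since the bus opts out */
        return 0;
    }
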
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
---
 target/arm/helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
             /* For all other purposes, treat ES as 0 (R_HXSR) */
             excret &= ~R_V7M_EXCRET_ES_MASK;
         }
+        exc_secure = excret & R_V7M_EXCRET_ES_MASK;
     }
 
     if (env->v7m.exception != ARMV7M_EXCP_NMI) {
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
          * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
          */
         if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-            exc_secure = excret & R_V7M_EXCRET_ES_MASK;
             if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
                 env->v7m.faultmask[exc_secure] = 0;
             }
--
2.18.0

On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
---
 target/arm/helper.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         }
     }
 
+    /*
+     * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
+     * Handler mode (and will be until we write the new XPSR.Interrupt
+     * field) this does not switch around the current stack pointer.
+     * We must do this before we do any kind of tailchaining, including
+     * for the derived exceptions on integrity check failures, or we will
+     * give the guest an incorrect EXCRET.SPSEL value on exception entry.
+     */
+    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
+
     if (sfault) {
         env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
         armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }
 
-    /* Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
-     * Handler mode (and will be until we write the new XPSR.Interrupt
-     * field) this does not switch around the current stack pointer.
-     */
-    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
-
     switch_v7m_security_state(env, return_to_secure);
 
     {
--
2.18.0

Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.

For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required. Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.

(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)
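(Standalone sketch, not part of the patch: a toy model of the decision.
Lower numbers are higher priority, as in the NVIC; if a pending
exception preempts the level we would return to, we take it directly
instead of unstacking and restacking.)

    #include <stdbool.h>
    #include <stdio.h>

    static bool can_take_pending(int pending_prio, int returning_to_prio)
    {
        return pending_prio < returning_to_prio;  /* lower == more urgent */
    }

    int main(void)
    {
        /* Returning to priority 0x40 with an exception at 0x20 pending:
         * tailchain rather than unstack-and-then-stack.
         */
        printf(can_take_pending(0x20, 0x40) ? "tailchain\n" : "unstack\n");
        return 0;
    }
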
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
---
 target/arm/helper.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static void do_v7m_exception_exit(ARMCPU *cpu)
         return;
     }
 
+    /*
+     * Tailchaining: if there is currently a pending exception that
+     * is high enough priority to preempt execution at the level we're
+     * about to return to, then just directly take that exception now,
+     * avoiding an unstack-and-then-stack. Note that now we have
+     * deactivated the previous exception by calling armv7m_nvic_complete_irq()
+     * our current execution priority is already the execution priority we are
+     * returning to -- none of the state we would unstack or set based on
+     * the EXCRET value affects it.
+     */
+    if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
+        qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
     switch_v7m_security_state(env, return_to_secure);
 
     {
--
2.18.0