Arm patches for rc3: just a handful of bug fixes.

thanks
-- PMM

The following changes since commit 4ecc984210ca1bf508a96a550ec8a93a5f833f6c:

  Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.2-rc3' into staging (2019-11-26 12:36:40 +0000)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20191126

for you to fetch changes up to 6a4ef4e5d1084ce41fafa7d470a644b0fd3d9317:

  target/arm: Honor HCR_EL2.TID3 trapping requirements (2019-11-26 13:55:37 +0000)

----------------------------------------------------------------
target-arm queue:
 * handle FTYPE flag correctly in v7M exception return
   for v7M CPUs with an FPU (v8M CPUs were already correct)
 * versal: Add the CRP as unimplemented
 * Fix ISR_EL1 tracking when executing at EL2
 * Honor HCR_EL2.TID3 trapping requirements

----------------------------------------------------------------
Edgar E. Iglesias (1):
      hw/arm: versal: Add the CRP as unimplemented

Jean-Hugues Deschênes (1):
      target/arm: Fix handling of cortex-m FTYPE flag in EXCRET

Marc Zyngier (2):
      target/arm: Fix ISR_EL1 tracking when executing at EL2
      target/arm: Honor HCR_EL2.TID3 trapping requirements

 include/hw/arm/xlnx-versal.h |  3 ++
 hw/arm/xlnx-versal.c         |  2 ++
 target/arm/helper.c          | 83 ++++++++++++++++++++++++++++++++++++++++++--
 target/arm/m_helper.c        |  7 ++--
 4 files changed, 89 insertions(+), 6 deletions(-)
Deleted patch
From: Alexander Graf <graf@amazon.com>

The current PL031 RTCICR register implementation always clears the
IRQ pending status on a register write, regardless of the value the
guest writes.

To justify that behavior, it references the ARM926EJ-S Development
Chip Reference Manual (DDI0287B) and indicates that said document
states that any write clears the internal IRQ state. It is indeed
true that in section 11.1 this document says:

  "The interrupt is cleared by writing any data value to the
   interrupt clear register RTCICR".

However, later in section 11.2.2 it contradicts itself by saying:

  "Writing 1 to bit 0 of RTCICR clears the RTCINTR flag."

The latter statement matches the PL031 TRM (DDI0224C), which says:

  "Writing 1 to bit position 0 clears the corresponding interrupt.
   Writing 0 has no effect."

Let's assume that the self-contradictory DDI0287B is in error, and
follow the reference manual for the device itself, by making the
register write-one-to-clear.

Reported-by: Hendrik Borghorst <hborghor@amazon.de>
Signed-off-by: Alexander Graf <graf@amazon.com>
Message-id: 20191104115228.30745-1-graf@amazon.com
[PMM: updated commit message to note that DDI0287B says two
 conflicting things]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/rtc/pl031.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/hw/rtc/pl031.c b/hw/rtc/pl031.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/rtc/pl031.c
+++ b/hw/rtc/pl031.c
@@ -XXX,XX +XXX,XX @@ static void pl031_write(void * opaque, hwaddr offset,
         pl031_update(s);
         break;
     case RTC_ICR:
-        /* The PL031 documentation (DDI0224B) states that the interrupt is
-           cleared when bit 0 of the written value is set. However the
-           arm926e documentation (DDI0287B) states that the interrupt is
-           cleared when any value is written. */
-        s->is = 0;
+        s->is &= ~value;
         pl031_update(s);
         break;
     case RTC_CR:
--
2.20.1
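As an aside, the behavioural difference this dropped patch describes is easy to see in a minimal standalone C model (illustrative names only, not the QEMU device code): clear-on-any-write loses an interrupt that write-one-to-clear keeps pending.

#include <assert.h>
#include <stdint.h>

/* Illustrative model of the interrupt status bit, not the QEMU device. */
struct rtc { uint32_t is; };

/* Old behaviour: any write to RTCICR cleared the pending interrupt. */
static void icr_write_any(struct rtc *s, uint32_t value)
{
    (void)value;
    s->is = 0;
}

/* New behaviour: only bits written as 1 are cleared (write-one-to-clear). */
static void icr_write_w1c(struct rtc *s, uint32_t value)
{
    s->is &= ~value;
}

int main(void)
{
    struct rtc a = { .is = 1 }, b = { .is = 1 };

    icr_write_any(&a, 0);   /* guest writes 0: interrupt wrongly cleared */
    icr_write_w1c(&b, 0);   /* guest writes 0: interrupt stays pending   */
    assert(a.is == 0);
    assert(b.is == 1);

    icr_write_w1c(&b, 1);   /* writing 1 to bit 0 clears it */
    assert(b.is == 0);
    return 0;
}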
From: Jean-Hugues Deschênes <Jean-Hugues.Deschenes@ossiaco.com>

According to the PushStack() pseudocode in the armv7m RM,
bit 4 of the LR should be set to NOT(CONTROL.FPCA) when
an FPU is present. Current implementation is doing it for
armv8, but not for armv7. This patch makes the existing
logic applicable to both code paths.

Signed-off-by: Jean-Hugues Deschenes <jean-hugues.deschenes@ossiaco.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/m_helper.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/m_helper.c
+++ b/target/arm/m_helper.c
@@ -XXX,XX +XXX,XX @@ void arm_v7m_cpu_do_interrupt(CPUState *cs)
         if (env->v7m.secure) {
             lr |= R_V7M_EXCRET_S_MASK;
         }
-        if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
-            lr |= R_V7M_EXCRET_FTYPE_MASK;
-        }
     } else {
         lr = R_V7M_EXCRET_RES1_MASK |
                 R_V7M_EXCRET_S_MASK |
                 R_V7M_EXCRET_DCRS_MASK |
-                R_V7M_EXCRET_FTYPE_MASK |
                 R_V7M_EXCRET_ES_MASK;
         if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
             lr |= R_V7M_EXCRET_SPSEL_MASK;
         }
     }
+    if (!(env->v7m.control[M_REG_S] & R_V7M_CONTROL_FPCA_MASK)) {
+        lr |= R_V7M_EXCRET_FTYPE_MASK;
+    }
     if (!arm_v7m_is_handler_mode(env)) {
         lr |= R_V7M_EXCRET_MODE_MASK;
--
2.20.1
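For readers less familiar with the M-profile EXCRET encoding, here is a minimal standalone sketch of the rule the patch enforces (the helper and constants below are illustrative, not the QEMU implementation): EXCRET bit 4 (FTYPE) ends up as NOT(CONTROL.FPCA) when an FPU is present, so it is clear exactly when a floating-point context is active.

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit positions; see target/arm for the real definitions. */
#define CONTROL_FPCA   (1u << 2)   /* FP context active */
#define EXCRET_FTYPE   (1u << 4)   /* no FP state to preserve */

/* PushStack() rule: EXCRET.FTYPE = NOT(CONTROL.FPCA) when an FPU is present. */
static uint32_t excret_ftype(uint32_t control, bool have_fpu)
{
    uint32_t lr = 0;

    if (have_fpu && !(control & CONTROL_FPCA)) {
        lr |= EXCRET_FTYPE;
    }
    return lr;
}

int main(void)
{
    assert(excret_ftype(0, true) == EXCRET_FTYPE);   /* no FP context active */
    assert(excret_ftype(CONTROL_FPCA, true) == 0);   /* FP context active    */
    return 0;
}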
From: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>

Add the CRP as unimplemented thus avoiding bus errors when
guests access these registers.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Message-id: 20191115154734.26449-2-edgar.iglesias@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/hw/arm/xlnx-versal.h | 3 +++
 hw/arm/xlnx-versal.c         | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/include/hw/arm/xlnx-versal.h b/include/hw/arm/xlnx-versal.h
index XXXXXXX..XXXXXXX 100644
--- a/include/hw/arm/xlnx-versal.h
+++ b/include/hw/arm/xlnx-versal.h
@@ -XXX,XX +XXX,XX @@ typedef struct Versal {
 #define MM_IOU_SCNTRS_SIZE 0x10000
 #define MM_FPD_CRF 0xfd1a0000U
 #define MM_FPD_CRF_SIZE 0x140000
+
+#define MM_PMC_CRP 0xf1260000U
+#define MM_PMC_CRP_SIZE 0x10000
 #endif
diff --git a/hw/arm/xlnx-versal.c b/hw/arm/xlnx-versal.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/xlnx-versal.c
+++ b/hw/arm/xlnx-versal.c
@@ -XXX,XX +XXX,XX @@ static void versal_unimp(Versal *s)
                        MM_CRL, MM_CRL_SIZE);
     versal_unimp_area(s, "crf", &s->mr_ps,
                        MM_FPD_CRF, MM_FPD_CRF_SIZE);
+    versal_unimp_area(s, "crp", &s->mr_ps,
+                       MM_PMC_CRP, MM_PMC_CRP_SIZE);
     versal_unimp_area(s, "iou-scntr", &s->mr_ps,
                        MM_IOU_SCNTR, MM_IOU_SCNTR_SIZE);
     versal_unimp_area(s, "iou-scntr-seucre", &s->mr_ps,
--
2.20.1
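As background, the point of mapping a block as "unimplemented" is that guest accesses are accepted and (typically) logged rather than turning into bus errors. A minimal standalone C model of that idea (illustrative only, not the QEMU unimp device) looks like this:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal model: accesses to an unimplemented region are logged and
 * completed (reads as zero) rather than reported as bus errors. */
static uint64_t unimp_read(const char *name, uint64_t offset)
{
    fprintf(stderr, "%s: unimplemented read at offset 0x%" PRIx64 "\n",
            name, offset);
    return 0;
}

static void unimp_write(const char *name, uint64_t offset, uint64_t value)
{
    fprintf(stderr, "%s: unimplemented write of 0x%" PRIx64
            " at offset 0x%" PRIx64 "\n", name, value, offset);
}

int main(void)
{
    /* A guest poking the (hypothetical) CRP block just gets logged. */
    unimp_write("crp", 0x0, 0x1);
    return (int)unimp_read("crp", 0x0);
}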
From: Marc Zyngier <maz@kernel.org>

The ARMv8 ARM states when executing at EL2, EL3 or Secure EL1,
ISR_EL1 shows the pending status of the physical IRQ, FIQ, or
SError interrupts.

Unfortunately, QEMU's implementation only considers the HCR_EL2
bits, and ignores the current exception level. This means a hypervisor
trying to look at its own interrupt state actually sees the guest
state, which is unexpected and breaks KVM as of Linux 5.3.

Instead, check for the running EL and return the physical bits
if not running in a virtualized context.

Fixes: 636540e9c40b
Cc: qemu-stable@nongnu.org
Reported-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Message-id: 20191122135833.28953-1-maz@kernel.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
     CPUState *cs = env_cpu(env);
     uint64_t hcr_el2 = arm_hcr_el2_eff(env);
     uint64_t ret = 0;
+    bool allow_virt = (arm_current_el(env) == 1 &&
+                       (!arm_is_secure_below_el3(env) ||
+                        (env->cp15.scr_el3 & SCR_EEL2)));

-    if (hcr_el2 & HCR_IMO) {
+    if (allow_virt && (hcr_el2 & HCR_IMO)) {
         if (cs->interrupt_request & CPU_INTERRUPT_VIRQ) {
             ret |= CPSR_I;
         }
@@ -XXX,XX +XXX,XX @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
         }
     }

-    if (hcr_el2 & HCR_FMO) {
+    if (allow_virt && (hcr_el2 & HCR_FMO)) {
         if (cs->interrupt_request & CPU_INTERRUPT_VFIQ) {
             ret |= CPSR_F;
         }
--
2.20.1
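The core of the fix is the condition under which the virtual interrupt bits may be reported at all. A standalone sketch of that predicate follows (simplified: it ignores the Secure EL2 case that the real code also allows, and the names are illustrative):

#include <assert.h>
#include <stdbool.h>

/*
 * Simplified model of the check added by the patch: ISR_EL1 may only
 * reflect the *virtual* IRQ/FIQ state when we are running at EL1 in a
 * non-secure (virtualizable) context; at EL2/EL3 the physical state
 * must be reported instead.
 */
static bool allow_virt(int current_el, bool secure_below_el3)
{
    return current_el == 1 && !secure_below_el3;
}

int main(void)
{
    assert(allow_virt(1, false));    /* non-secure EL1 guest: virtual bits */
    assert(!allow_virt(2, false));   /* hypervisor at EL2: physical bits   */
    assert(!allow_virt(3, false));   /* EL3: physical bits                 */
    return 0;
}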
Deleted patch
From: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>

A few configuration register writes need not update the spi bus state, so just
return after the register write.

Signed-off-by: Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Tested-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1573830705-14579-1-git-send-email-sai.pavan.boddu@xilinx.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/ssi/xilinx_spips.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/hw/ssi/xilinx_spips.c b/hw/ssi/xilinx_spips.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/ssi/xilinx_spips.c
+++ b/hw/ssi/xilinx_spips.c
@@ -XXX,XX +XXX,XX @@
 #define R_GPIO (0x30 / 4)
 #define R_LPBK_DLY_ADJ (0x38 / 4)
 #define R_LPBK_DLY_ADJ_RESET (0x33)
+#define R_IOU_TAPDLY_BYPASS (0x3C / 4)
 #define R_TXD1 (0x80 / 4)
 #define R_TXD2 (0x84 / 4)
 #define R_TXD3 (0x88 / 4)
@@ -XXX,XX +XXX,XX @@
 #define R_LQSPI_STS (0xA4 / 4)
 #define LQSPI_STS_WR_RECVD (1 << 1)

+#define R_DUMMY_CYCLE_EN (0xC8 / 4)
+#define R_ECO (0xF8 / 4)
 #define R_MOD_ID (0xFC / 4)

 #define R_GQSPI_SELECT (0x144 / 4)
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
 {
     int mask = ~0;
     XilinxSPIPS *s = opaque;
+    bool try_flush = true;

     DB_PRINT_L(0, "addr=" TARGET_FMT_plx " = %x\n", addr, (unsigned)value);
     addr >>= 2;
@@ -XXX,XX +XXX,XX @@ static void xilinx_spips_write(void *opaque, hwaddr addr,
         tx_data_bytes(&s->tx_fifo, (uint32_t)value, 3,
                       s->regs[R_CONFIG] & R_CONFIG_ENDIAN);
         goto no_reg_update;
+    /* Skip SPI bus update for below registers writes */
+    case R_GPIO:
+    case R_LPBK_DLY_ADJ:
+    case R_IOU_TAPDLY_BYPASS:
+    case R_DUMMY_CYCLE_EN:
+    case R_ECO:
+        try_flush = false;
+        break;
     }
     s->regs[addr] = (s->regs[addr] & ~mask) | (value & mask);
 no_reg_update:
-    xilinx_spips_update_cs_lines(s);
-    xilinx_spips_check_flush(s);
-    xilinx_spips_update_cs_lines(s);
-    xilinx_spips_update_ixr(s);
+    if (try_flush) {
+        xilinx_spips_update_cs_lines(s);
+        xilinx_spips_check_flush(s);
+        xilinx_spips_update_cs_lines(s);
+        xilinx_spips_update_ixr(s);
+    }
 }

 static const MemoryRegionOps spips_ops = {
--
2.20.1
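The idea behind this dropped change is simply to classify register writes: pure configuration registers update the register file but skip the SPI bus/FIFO machinery. A minimal standalone sketch of that dispatch pattern, with made-up register offsets, is:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { REG_CONFIG, REG_GPIO, REG_LPBK_DLY_ADJ, NUM_REGS }; /* illustrative */

static uint32_t regs[NUM_REGS];

static void flush_spi_bus(void) { puts("bus updated"); }

static void reg_write(unsigned addr, uint32_t value)
{
    bool try_flush = true;

    switch (addr) {
    case REG_GPIO:            /* config-only registers: store the value, */
    case REG_LPBK_DLY_ADJ:    /* but do not touch the SPI bus state      */
        try_flush = false;
        break;
    }
    regs[addr] = value;
    if (try_flush) {
        flush_spi_bus();
    }
}

int main(void)
{
    reg_write(REG_CONFIG, 1);   /* prints "bus updated" */
    reg_write(REG_GPIO, 1);     /* stores the value, no bus update */
    return 0;
}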
Deleted patch
From: Linus Ziegert <linus.ziegert+qemu@holoplot.com>

The Linux kernel PHY driver sets AN_RESTART in the BMCR of the
PHY when autonegotiation is started.
Recently the kernel started to read back the PHY's AN_RESTART
bit and now checks whether the autonegotiation is complete and
the bit was cleared [1]. Otherwise the link status is down.

The emulated PHY needs to clear AN_RESTART immediately to inform
the kernel driver about the completion of autonegotiation phase.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c36757eb9dee

Signed-off-by: Linus Ziegert <linus.ziegert+qemu@holoplot.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20191104181604.21943-1-linus.ziegert+qemu@holoplot.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/net/cadence_gem.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/hw/net/cadence_gem.c b/hw/net/cadence_gem.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/net/cadence_gem.c
+++ b/hw/net/cadence_gem.c
@@ -XXX,XX +XXX,XX @@
 #define PHY_REG_EXT_PHYSPCFC_ST 27
 #define PHY_REG_CABLE_DIAG 28

-#define PHY_REG_CONTROL_RST 0x8000
-#define PHY_REG_CONTROL_LOOP 0x4000
-#define PHY_REG_CONTROL_ANEG 0x1000
+#define PHY_REG_CONTROL_RST       0x8000
+#define PHY_REG_CONTROL_LOOP      0x4000
+#define PHY_REG_CONTROL_ANEG      0x1000
+#define PHY_REG_CONTROL_ANRESTART 0x0200

 #define PHY_REG_STATUS_LINK 0x0004
 #define PHY_REG_STATUS_ANEGCMPL 0x0020
@@ -XXX,XX +XXX,XX @@ static void gem_phy_write(CadenceGEMState *s, unsigned reg_num, uint16_t val)
     }
     if (val & PHY_REG_CONTROL_ANEG) {
         /* Complete autonegotiation immediately */
-        val &= ~PHY_REG_CONTROL_ANEG;
+        val &= ~(PHY_REG_CONTROL_ANEG | PHY_REG_CONTROL_ANRESTART);
         s->phy_regs[PHY_REG_STATUS] |= PHY_REG_STATUS_ANEGCMPL;
     }
     if (val & PHY_REG_CONTROL_LOOP) {
--
2.20.1
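The contract this dropped patch implements is visible from the guest side: the driver sets BMCR.ANEG and BMCR.ANRESTART, and once autonegotiation completes it expects ANRESTART to read back as zero. A minimal standalone model, using the usual 802.3 BMCR/BMSR bit positions (names illustrative, not the QEMU code), is sketched below.

#include <assert.h>
#include <stdint.h>

#define BMCR_ANEG       0x1000  /* autonegotiation enable   */
#define BMCR_ANRESTART  0x0200  /* restart autonegotiation  */
#define BMSR_ANEGCMPL   0x0020  /* autonegotiation complete */

static uint16_t bmcr, bmsr;

/* Emulated PHY completing autonegotiation instantly on a BMCR write. */
static void phy_bmcr_write(uint16_t val)
{
    if (val & BMCR_ANEG) {
        /* mirror the patch: clear ANEG and ANRESTART, flag completion */
        val &= ~(BMCR_ANEG | BMCR_ANRESTART);
        bmsr |= BMSR_ANEGCMPL;
    }
    bmcr = val;
}

int main(void)
{
    phy_bmcr_write(BMCR_ANEG | BMCR_ANRESTART);
    assert(!(bmcr & BMCR_ANRESTART));   /* driver sees autoneg finished */
    assert(bmsr & BMSR_ANEGCMPL);
    return 0;
}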
From: Marc Zyngier <maz@kernel.org>

HCR_EL2.TID3 mandates that access from EL1 to a long list of id
registers traps to EL2, and QEMU has so far ignored this requirement.

This breaks (among other things) KVM guests that have PtrAuth enabled,
while the hypervisor doesn't want to expose the feature to its guest.
To achieve this, KVM traps the ID registers (ID_AA64ISAR1_EL1 in this
case), and masks out the unsupported feature.

QEMU not honoring the trap request means that the guest observes
that the feature is present in the HW, starts using it, and dies
a horrible death when KVM injects an UNDEF, because the feature
*really* isn't supported.

Do the right thing by trapping to EL2 if HCR_EL2.TID3 is set.

Note that this change does not include trapping of the MVFR
registers from AArch32 (they are accessed via the VMRS
instruction and need to be handled in a different way).

Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Tested-by: Will Deacon <will@kernel.org>
Message-id: 20191123115618.29230-1-maz@kernel.org
[PMM: added missing accessfn line for ID_AA64PFR2_EL1_RESERVED;
 changed names of access functions to include _tid3]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/helper.c | 76 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 76 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -XXX,XX +XXX,XX @@ static const ARMCPRegInfo predinv_reginfo[] = {
     REGINFO_SENTINEL
 };

+static CPAccessResult access_aa64_tid3(CPUARMState *env, const ARMCPRegInfo *ri,
+                                       bool isread)
+{
+    if ((arm_current_el(env) < 2) && (arm_hcr_el2_eff(env) & HCR_TID3)) {
+        return CP_ACCESS_TRAP_EL2;
+    }
+
+    return CP_ACCESS_OK;
+}
+
+static CPAccessResult access_aa32_tid3(CPUARMState *env, const ARMCPRegInfo *ri,
+                                       bool isread)
+{
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        return access_aa64_tid3(env, ri, isread);
+    }
+
+    return CP_ACCESS_OK;
+}
+
 void register_cp_regs_for_features(ARMCPU *cpu)
 {
     /* Register all the coprocessor registers based on feature bits */
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_PFR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_pfr0 },
             /* ID_PFR1 is not a plain ARM_CP_CONST because we don't know
              * the value of the GIC field until after we define these regs.
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_PFR1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_NO_RAW,
+              .accessfn = access_aa32_tid3,
               .readfn = id_pfr1_read,
               .writefn = arm_cp_write_ignore },
             { .name = "ID_DFR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_dfr0 },
             { .name = "ID_AFR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_afr0 },
             { .name = "ID_MMFR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_mmfr0 },
             { .name = "ID_MMFR1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_mmfr1 },
             { .name = "ID_MMFR2", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_mmfr2 },
             { .name = "ID_MMFR3", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 1, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_mmfr3 },
             { .name = "ID_ISAR0", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar0 },
             { .name = "ID_ISAR1", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar1 },
             { .name = "ID_ISAR2", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar2 },
             { .name = "ID_ISAR3", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar3 },
             { .name = "ID_ISAR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar4 },
             { .name = "ID_ISAR5", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar5 },
             { .name = "ID_MMFR4", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->id_mmfr4 },
             { .name = "ID_ISAR6", .state = ARM_CP_STATE_BOTH,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 2, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa32_tid3,
               .resetvalue = cpu->isar.id_isar6 },
             REGINFO_SENTINEL
         };
@@ -XXX,XX +XXX,XX @@ void register_cp_regs_for_features(ARMCPU *cpu)
             { .name = "ID_AA64PFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_NO_RAW,
+              .accessfn = access_aa64_tid3,
               .readfn = id_aa64pfr0_read,
               .writefn = arm_cp_write_ignore },
             { .name = "ID_AA64PFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.id_aa64pfr1},
             { .name = "ID_AA64PFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64PFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64ZFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               /* At present, only SVEver == 0 is defined anyway. */
               .resetvalue = 0 },
             { .name = "ID_AA64PFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64PFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64PFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 4, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64DFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->id_aa64dfr0 },
             { .name = "ID_AA64DFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->id_aa64dfr1 },
             { .name = "ID_AA64DFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64DFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64AFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->id_aa64afr0 },
             { .name = "ID_AA64AFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->id_aa64afr1 },
             { .name = "ID_AA64AFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64AFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 5, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64ISAR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.id_aa64isar0 },
             { .name = "ID_AA64ISAR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.id_aa64isar1 },
             { .name = "ID_AA64ISAR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64ISAR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64ISAR4_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64ISAR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64ISAR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64ISAR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 6, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64MMFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.id_aa64mmfr0 },
             { .name = "ID_AA64MMFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.id_aa64mmfr1 },
             { .name = "ID_AA64MMFR2_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64MMFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64MMFR4_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64MMFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64MMFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "ID_AA64MMFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 7, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "MVFR0_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 0,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.mvfr0 },
             { .name = "MVFR1_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 1,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.mvfr1 },
             { .name = "MVFR2_EL1", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 2,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = cpu->isar.mvfr2 },
             { .name = "MVFR3_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 3,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "MVFR4_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 4,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "MVFR5_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 5,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "MVFR6_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 6,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "MVFR7_EL1_RESERVED", .state = ARM_CP_STATE_AA64,
               .opc0 = 3, .opc1 = 0, .crn = 0, .crm = 3, .opc2 = 7,
               .access = PL1_R, .type = ARM_CP_CONST,
+              .accessfn = access_aa64_tid3,
               .resetvalue = 0 },
             { .name = "PMCEID0", .state = ARM_CP_STATE_AA32,
               .cp = 15, .opc1 = 0, .crn = 9, .crm = 12, .opc2 = 6,
--
2.20.1
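The access-check logic the patch adds boils down to a small predicate: an EL1 access to one of the ID-group-3 registers must trap to EL2 when HCR_EL2.TID3 is set. A standalone sketch of that decision (simplified; the real code uses the effective HCR_EL2 value via arm_hcr_el2_eff()) follows.

#include <assert.h>
#include <stdint.h>

#define HCR_TID3 (1ULL << 18)   /* HCR_EL2.TID3: trap ID group 3 */

enum access_result { ACCESS_OK, TRAP_TO_EL2 };

/* Simplified model of the access_aa64_tid3 check added by the patch. */
static enum access_result id_reg_access(int current_el, uint64_t hcr_el2)
{
    if (current_el < 2 && (hcr_el2 & HCR_TID3)) {
        return TRAP_TO_EL2;
    }
    return ACCESS_OK;
}

int main(void)
{
    assert(id_reg_access(1, HCR_TID3) == TRAP_TO_EL2); /* guest EL1: trapped */
    assert(id_reg_access(2, HCR_TID3) == ACCESS_OK);   /* hypervisor itself  */
    assert(id_reg_access(1, 0) == ACCESS_OK);          /* TID3 clear         */
    return 0;
}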