A last small set of bug fixes before rc1.

thanks
-- PMM
The following changes since commit ed8ad9728a9c0eec34db9dff61dfa2f1dd625637:

  Merge tag 'pull-tpm-2023-07-14-1' of https://github.com/stefanberger/qemu-tpm into staging (2023-07-15 14:54:04 +0100)

are available in the Git repository at:

  https://git.linaro.org/people/pmaydell/qemu-arm.git tags/pull-target-arm-20230717

for you to fetch changes up to c2c1c4a35c7c2b1a4140b0942b9797c857e476a4:

  hw/nvram: Avoid unnecessary Xilinx eFuse backstore write (2023-07-17 11:05:52 +0100)

----------------------------------------------------------------
target-arm queue:
 * hw/arm/sbsa-ref: set 'slots' property of xhci
 * linux-user: Remove pointless NULL check in clock_adjtime handling
 * ptw: Fix S1_ptw_translate() debug path
 * ptw: Account for FEAT_RME when applying {N}SW, SA bits
 * accel/tcg: Zero-pad PC in TCG CPU exec trace lines
 * hw/nvram: Avoid unnecessary Xilinx eFuse backstore write

----------------------------------------------------------------
Peter Maydell (5):
      linux-user: Remove pointless NULL check in clock_adjtime handling
      target/arm/ptw.c: Add comments to S1Translate struct fields
      target/arm: Fix S1_ptw_translate() debug path
      target/arm/ptw.c: Account for FEAT_RME when applying {N}SW, SA bits
      accel/tcg: Zero-pad PC in TCG CPU exec trace lines

Tong Ho (1):
      hw/nvram: Avoid unnecessary Xilinx eFuse backstore write

Yuquan Wang (1):
      hw/arm/sbsa-ref: set 'slots' property of xhci

 accel/tcg/cpu-exec.c      |  4 +--
 accel/tcg/translate-all.c |  2 +-
 hw/arm/sbsa-ref.c         |  1 +
 hw/nvram/xlnx-efuse.c     | 11 ++++--
 linux-user/syscall.c      | 12 +++----
 target/arm/ptw.c          | 90 +++++++++++++++++++++++++++++++++++++++++------
 6 files changed, 98 insertions(+), 22 deletions(-)

From: Yuquan Wang <wangyuquan1236@phytium.com.cn>

This extends the slots of xhci to 64, since the default xhci_sysbus
just supports one slot.

Signed-off-by: Wang Yuquan <wangyuquan1236@phytium.com.cn>
Signed-off-by: Chen Baozi <chenbaozi@phytium.com.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Tested-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Message-id: 20230710063750.473510-2-wangyuquan1236@phytium.com.cn
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/arm/sbsa-ref.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -XXX,XX +XXX,XX @@ static void create_xhci(const SBSAMachineState *sms)
     hwaddr base = sbsa_ref_memmap[SBSA_XHCI].base;
     int irq = sbsa_ref_irqmap[SBSA_XHCI];
     DeviceState *dev = qdev_new(TYPE_XHCI_SYSBUS);
+    qdev_prop_set_uint32(dev, "slots", XHCI_MAXSLOTS);

     sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal);
     sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base);
--
2.34.1

In the code for TARGET_NR_clock_adjtime, we set the pointer phtx to
the address of the local variable htx. This means it can never be
NULL, but later in the code we check it for NULL anyway. Coverity
complains about this (CID 1507683) because the NULL check comes after
a call to clock_adjtime() that assumes it is non-NULL.

Since phtx is always &htx, and is used only in three places, it's not
really necessary. Remove it, bringing the code structure into line
with that for TARGET_NR_clock_adjtime64, which already uses a simple
'&htx' when it wants a pointer to 'htx'.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230623144410.1837261-1-peter.maydell@linaro.org
---
 linux-user/syscall.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index XXXXXXX..XXXXXXX 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -XXX,XX +XXX,XX @@ static abi_long do_syscall1(CPUArchState *cpu_env, int num, abi_long arg1,
 #if defined(TARGET_NR_clock_adjtime) && defined(CONFIG_CLOCK_ADJTIME)
     case TARGET_NR_clock_adjtime:
         {
-            struct timex htx, *phtx = &htx;
+            struct timex htx;

-            if (target_to_host_timex(phtx, arg2) != 0) {
+            if (target_to_host_timex(&htx, arg2) != 0) {
                 return -TARGET_EFAULT;
             }
-            ret = get_errno(clock_adjtime(arg1, phtx));
-            if (!is_error(ret) && phtx) {
-                if (host_to_target_timex(arg2, phtx) != 0) {
-                    return -TARGET_EFAULT;
-                }
+            ret = get_errno(clock_adjtime(arg1, &htx));
+            if (!is_error(ret) && host_to_target_timex(arg2, &htx)) {
+                return -TARGET_EFAULT;
             }
         }
         return ret;
--
2.34.1
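
For reference, the anti-pattern this patch removes boils down to the
following minimal standalone C sketch (simplified stand-ins, not the
QEMU code itself): a pointer initialized to the address of a local
variable can never be NULL, so any later NULL check on it is dead code.

    /* phtx is assigned &htx at initialization, so it is always non-NULL
     * and the 'if (phtx)' test below can never be false. */
    #include <stdio.h>

    struct timex_like { int value; };   /* simplified stand-in for struct timex */

    int main(void)
    {
        struct timex_like htx, *phtx = &htx;

        htx.value = 42;
        if (phtx) {                     /* always true: dead check */
            printf("value = %d\n", phtx->value);
        }
        return 0;
    }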

Add comments to the in_* fields in the S1Translate struct
that explain what they're doing.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-2-peter.maydell@linaro.org
---
 target/arm/ptw.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@
 #endif

 typedef struct S1Translate {
+    /*
+     * in_mmu_idx: specifies which TTBR, TCR, etc to use for the walk.
+     * Together with in_space, specifies the architectural translation regime.
+     */
     ARMMMUIdx in_mmu_idx;
+    /*
+     * in_ptw_idx: specifies which mmuidx to use for the actual
+     * page table descriptor load operations. This will be one of the
+     * ARMMMUIdx_Stage2* or one of the ARMMMUIdx_Phys_* indexes.
+     * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
+     * this field is updated accordingly.
+     */
     ARMMMUIdx in_ptw_idx;
+    /*
+     * in_space: the security space for this walk. This plus
+     * the in_mmu_idx specify the architectural translation regime.
+     * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
+     * this field is updated accordingly.
+     *
+     * Note that the security space for the in_ptw_idx may be different
+     * from that for the in_mmu_idx. We do not need to explicitly track
+     * the in_ptw_idx security space because:
+     *  - if the in_ptw_idx is an ARMMMUIdx_Phys_* then the mmuidx
+     *    itself specifies the security space
+     *  - if the in_ptw_idx is an ARMMMUIdx_Stage2* then the security
+     *    space used for ptw reads is the same as that of the security
+     *    space of the stage 1 translation for all cases except where
+     *    stage 1 is Secure; in that case the only possibilities for
+     *    the ptw read are Secure and NonSecure, and the in_ptw_idx
+     *    value being Stage2 vs Stage2_S distinguishes those.
+     */
     ARMSecuritySpace in_space;
+    /*
+     * in_secure: whether the translation regime is a Secure one.
+     * This is always equal to arm_space_is_secure(in_space).
+     * If a Secure ptw is "downgraded" to NonSecure by an NSTable bit,
+     * this field is updated accordingly.
+     */
     bool in_secure;
+    /*
+     * in_debug: is this a QEMU debug access (gdbstub, etc)? Debug
+     * accesses will not update the guest page table access flags
+     * and will not change the state of the softmmu TLBs.
+     */
     bool in_debug;
     /*
      * If this is stage 2 of a stage 1+2 page table walk, then this must
--
2.34.1

In commit fe4a5472ccd6 we rearranged the logic in S1_ptw_translate()
so that the debug-access "call get_phys_addr_*" codepath is used both
when S1 is doing ptw reads from stage 2 and when it is doing ptw
reads from physical memory. However, we didn't update the
calculation of s2ptw->in_space and s2ptw->in_secure to account for
the "ptw reads from physical memory" case. This meant that debug
accesses when in Secure state broke.

Create a new function S2_security_space() which returns the
correct security space to use for the ptw load, and use it to
determine the correct .in_secure and .in_space fields for the
stage 2 lookup for the ptw load.

Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-3-peter.maydell@linaro.org
Fixes: fe4a5472ccd6 ("target/arm: Use get_phys_addr_with_struct in S1_ptw_translate")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/ptw.c | 37 ++++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool S2_attrs_are_device(uint64_t hcr, uint8_t attrs)
     }
 }

+static ARMSecuritySpace S2_security_space(ARMSecuritySpace s1_space,
+                                          ARMMMUIdx s2_mmu_idx)
+{
+    /*
+     * Return the security space to use for stage 2 when doing
+     * the S1 page table descriptor load.
+     */
+    if (regime_is_stage2(s2_mmu_idx)) {
+        /*
+         * The security space for ptw reads is almost always the same
+         * as that of the security space of the stage 1 translation.
+         * The only exception is when stage 1 is Secure; in that case
+         * the ptw read might be to the Secure or the NonSecure space
+         * (but never Realm or Root), and the s2_mmu_idx tells us which.
+         * Root translations are always single-stage.
+         */
+        if (s1_space == ARMSS_Secure) {
+            return arm_secure_to_space(s2_mmu_idx == ARMMMUIdx_Stage2_S);
+        } else {
+            assert(s2_mmu_idx != ARMMMUIdx_Stage2_S);
+            assert(s1_space != ARMSS_Root);
+            return s1_space;
+        }
+    } else {
+        /* ptw loads are from phys: the mmu idx itself says which space */
+        return arm_phys_to_space(s2_mmu_idx);
+    }
+}
+
 /* Translate a S1 pagetable walk through S2 if needed. */
 static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
                              hwaddr addr, ARMMMUFaultInfo *fi)
 {
-    ARMSecuritySpace space = ptw->in_space;
     bool is_secure = ptw->in_secure;
     ARMMMUIdx mmu_idx = ptw->in_mmu_idx;
     ARMMMUIdx s2_mmu_idx = ptw->in_ptw_idx;
@@ -XXX,XX +XXX,XX @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
          * From gdbstub, do not use softmmu so that we don't modify the
          * state of the cpu at all, including softmmu tlb contents.
          */
+        ARMSecuritySpace s2_space = S2_security_space(ptw->in_space, s2_mmu_idx);
         S1Translate s2ptw = {
             .in_mmu_idx = s2_mmu_idx,
             .in_ptw_idx = ptw_idx_for_stage_2(env, s2_mmu_idx),
-            .in_secure = s2_mmu_idx == ARMMMUIdx_Stage2_S,
-            .in_space = (s2_mmu_idx == ARMMMUIdx_Stage2_S ? ARMSS_Secure
-                         : space == ARMSS_Realm ? ARMSS_Realm
-                         : ARMSS_NonSecure),
+            .in_secure = arm_space_is_secure(s2_space),
+            .in_space = s2_space,
             .in_debug = true,
         };
         GetPhysAddrResult s2 = { };
--
2.34.1

In get_phys_addr_twostage() the code that applies the effects of
VSTCR.{SA,SW} and VTCR.{NSA,NSW} only updates result->f.attrs.secure.
Now that we also have f.attrs.space for FEAT_RME, we need to keep the
two in sync.

These bits only have an effect for Secure space translations, not
for Root, so use the input in_space field to determine whether to
apply them rather than the input is_secure. This doesn't actually
make a difference because Root translations are never two-stage,
but it's a little clearer.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-4-peter.maydell@linaro.org
---
 target/arm/ptw.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index XXXXXXX..XXXXXXX 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
     hwaddr ipa;
     int s1_prot, s1_lgpgsz;
     bool is_secure = ptw->in_secure;
+    ARMSecuritySpace in_space = ptw->in_space;
     bool ret, ipa_secure;
     ARMCacheAttrs cacheattrs1;
     ARMSecuritySpace ipa_space;
@@ -XXX,XX +XXX,XX @@ static bool get_phys_addr_twostage(CPUARMState *env, S1Translate *ptw,
      * Check if IPA translates to secure or non-secure PA space.
      * Note that VSTCR overrides VTCR and {N}SW overrides {N}SA.
      */
-    result->f.attrs.secure =
-        (is_secure
-         && !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
-         && (ipa_secure
-             || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW))));
+    if (in_space == ARMSS_Secure) {
+        result->f.attrs.secure =
+            !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
+            && (ipa_secure
+                || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW)));
+        result->f.attrs.space = arm_secure_to_space(result->f.attrs.secure);
+    }

     return false;
 }
--
2.34.1

In commit f0a08b0913befbd we changed the type of the PC from
target_ulong to vaddr. In doing so we inadvertently dropped the
zero-padding on the PC in trace lines (the second item inside the []
in these lines). They used to look like this on AArch64, for
instance:

Trace 0: 0x7f2260000100 [00000000/0000000040000000/00000061/ff200000]

and now they look like this:
Trace 0: 0x7f4f50000100 [00000000/40000000/00000061/ff200000]

and if the PC happens to be somewhere low like 0x5000
then the field is shown as /5000/.

This is because TARGET_FMT_lx is a "%08x" or "%016x" specifier,
depending on TARGET_LONG_SIZE, whereas VADDR_PRIx is just PRIx64
with no width specifier.

Restore the zero-padding by adding an 016 width specifier to
this tracing and a couple of others that were similarly recently
changed to use VADDR_PRIx without a width specifier.

We can't unfortunately restore the "32-bit guests are padded to
8 hex digits and 64-bit guests to 16 hex digits" behaviour so
easily.

Fixes: f0a08b0913befbd ("accel/tcg/cpu-exec.c: Widen pc to vaddr")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-id: 20230711165434.4123674-1-peter.maydell@linaro.org
---
 accel/tcg/cpu-exec.c      | 4 ++--
 accel/tcg/translate-all.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -XXX,XX +XXX,XX @@ static void log_cpu_exec(vaddr pc, CPUState *cpu,
     if (qemu_log_in_addr_range(pc)) {
         qemu_log_mask(CPU_LOG_EXEC,
                       "Trace %d: %p [%08" PRIx64
-                      "/%" VADDR_PRIx "/%08x/%08x] %s\n",
+                      "/%016" VADDR_PRIx "/%08x/%08x] %s\n",
                       cpu->cpu_index, tb->tc.ptr, tb->cs_base, pc,
                       tb->flags, tb->cflags, lookup_symbol(pc));

@@ -XXX,XX +XXX,XX @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
     if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
         vaddr pc = log_pc(cpu, last_tb);
         if (qemu_log_in_addr_range(pc)) {
-            qemu_log("Stopped execution of TB chain before %p [%"
+            qemu_log("Stopped execution of TB chain before %p [%016"
                      VADDR_PRIx "] %s\n",
                      last_tb->tc.ptr, pc, lookup_symbol(pc));
         }
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index XXXXXXX..XXXXXXX 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -XXX,XX +XXX,XX @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
     if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
         vaddr pc = log_pc(cpu, tb);
         if (qemu_log_in_addr_range(pc)) {
-            qemu_log("cpu_io_recompile: rewound execution of TB to %"
+            qemu_log("cpu_io_recompile: rewound execution of TB to %016"
                      VADDR_PRIx "\n", pc);
        }
    }
--
2.34.1
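
The formatting difference is easy to reproduce outside QEMU with a
minimal standalone C sketch (plain C, not QEMU code):

    /* "%" PRIx64 prints the value with no padding, while "%016" PRIx64
     * zero-pads to the fixed 16-hex-digit width the trace lines had. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t pc = 0x5000;

        printf("/%" PRIx64 "/\n", pc);    /* prints /5000/             */
        printf("/%016" PRIx64 "/\n", pc); /* prints /0000000000005000/ */
        return 0;
    }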

From: Tong Ho <tong.ho@amd.com>

Add a check in the bit-set operation to write the backstore
only if the affected bit is 0 before.

With this in place, there will be no need for callers to
do the checking in order to avoid unnecessary writes.

Signed-off-by: Tong Ho <tong.ho@amd.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 hw/nvram/xlnx-efuse.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hw/nvram/xlnx-efuse.c b/hw/nvram/xlnx-efuse.c
index XXXXXXX..XXXXXXX 100644
--- a/hw/nvram/xlnx-efuse.c
+++ b/hw/nvram/xlnx-efuse.c
@@ -XXX,XX +XXX,XX @@ static bool efuse_ro_bits_find(XlnxEFuse *s, uint32_t k)

 bool xlnx_efuse_set_bit(XlnxEFuse *s, unsigned int bit)
 {
+    uint32_t set, *row;
+
     if (efuse_ro_bits_find(s, bit)) {
         g_autofree char *path = object_get_canonical_path(OBJECT(s));

@@ -XXX,XX +XXX,XX @@ bool xlnx_efuse_set_bit(XlnxEFuse *s, unsigned int bit)
         return false;
     }

-    s->fuse32[bit / 32] |= 1 << (bit % 32);
-    efuse_bdrv_sync(s, bit);
+    /* Avoid back-end write unless there is a real update */
+    row = &s->fuse32[bit / 32];
+    set = 1 << (bit % 32);
+    if (!(set & *row)) {
+        *row |= set;
+        efuse_bdrv_sync(s, bit);
+    }
     return true;
 }
--
2.34.1
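
The approach generalizes to any bit array backed by slow persistent
storage: test the bit before setting it, and sync only on a real
change. A minimal standalone C sketch with hypothetical names
(set_bit_once and backstore_sync are illustrative stand-ins, not the
QEMU API):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t fuse32[4];   /* in-memory copy of the fuse words */

    /* Illustrative stand-in for the expensive back-end write. */
    static void backstore_sync(unsigned int bit)
    {
        printf("syncing word %u\n", bit / 32);
    }

    static bool set_bit_once(unsigned int bit)
    {
        uint32_t *row = &fuse32[bit / 32];
        uint32_t set = 1u << (bit % 32);

        if (!(set & *row)) {     /* write back-end only on a real update */
            *row |= set;
            backstore_sync(bit);
        }
        return true;
    }

    int main(void)
    {
        set_bit_once(5);    /* triggers a sync */
        set_bit_once(5);    /* bit already set: no sync */
        return 0;
    }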