From: Alex Bennée <alex.bennee@linaro.org>
PC alignment faults have priority over instruction aborts, and we have
code to deal with this in the translation front-ends. However, during
tb_lookup we can see a potentially faulting probe which doesn't get a
MemOp set. If the page isn't available, this results in
EC_INSNABORT (0x20) instead of EC_PCALIGNMENT (0x22).
As there is no easy way to set the appropriate MemOp in the
instruction fetch probe path, let's just detect it in
arm_cpu_tlb_fill_align() ahead of the main alignment check. We also
teach arm_deliver_fault() to deliver the right syndrome for
MMU_INST_FETCH alignment issues.
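For the A32 case the fetch address must be 4-byte aligned, so the probe-time detection reduces to testing the low two bits of the address when not in Thumb state. A minimal standalone sketch of that predicate (names and types here are illustrative, not QEMU's actual API):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical simplification of the probe-time check added to
 * arm_cpu_tlb_fill_align(): in A32 state (not Thumb) the PC must be
 * 4-byte aligned, so a fetch address with either of its low two bits
 * set is a PC alignment fault rather than an instruction abort.
 */
static bool a32_pc_misaligned(uint64_t address, bool thumb)
{
    return !thumb && (address & 3) != 0;
}
```

Thumb state allows 2-byte-aligned fetches, which is why the check above deliberately passes Thumb addresses through to the normal MemOp-based alignment handling.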
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3233
Tested-by: Jessica Clarke <jrtc27@jrtc27.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20251209092459.1058313-5-alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
(cherry picked from commit dd77ef99aa0280c467fe8442b4238122899ae6cf)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
diff --git a/target/arm/tcg/tlb_helper.c b/target/arm/tcg/tlb_helper.c
index 23c72a99f5..91f5735b2b 100644
--- a/target/arm/tcg/tlb_helper.c
+++ b/target/arm/tcg/tlb_helper.c
@@ -243,7 +243,11 @@ void arm_deliver_fault(ARMCPU *cpu, vaddr addr,
fsr = compute_fsr_fsc(env, fi, target_el, mmu_idx, &fsc);
if (access_type == MMU_INST_FETCH) {
- syn = syn_insn_abort(same_el, fi->ea, fi->s1ptw, fsc);
+ if (fi->type == ARMFault_Alignment) {
+ syn = syn_pcalignment();
+ } else {
+ syn = syn_insn_abort(same_el, fi->ea, fi->s1ptw, fsc);
+ }
exc = EXCP_PREFETCH_ABORT;
} else {
syn = merge_syn_data_abort(env->exception.syndrome, fi, target_el,
@@ -338,11 +342,18 @@ bool arm_cpu_tlb_fill_align(CPUState *cs, CPUTLBEntryFull *out, vaddr address,
}
/*
- * Per R_XCHFJ, alignment fault not due to memory type has
- * highest precedence. Otherwise, walk the page table and
- * and collect the page description.
+ * PC alignment faults should be dealt with at translation time
+ * but we also need to catch them while being probed.
+ *
+ * Then per R_XCHFJ, an alignment fault not due to memory type takes
+ * precedence. Otherwise, walk the page table and collect the
+ * page description.
*/
- if (address & ((1 << memop_alignment_bits(memop)) - 1)) {
+ if (access_type == MMU_INST_FETCH && !cpu->env.thumb &&
+ (address & 3)) {
+ fi->type = ARMFault_Alignment;
+ } else if (address & ((1 << memop_alignment_bits(memop)) - 1)) {
fi->type = ARMFault_Alignment;
} else if (!get_phys_addr(&cpu->env, address, access_type, memop,
core_to_arm_mmu_idx(&cpu->env, mmu_idx),
--
2.47.3