From: Samuel Ortiz
To: qemu-devel@nongnu.org
Date: Tue,
 13 Nov 2018 17:52:38 +0100
Message-Id: <20181113165247.4806-5-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 04/13] target: arm: Move all interrupt and exception handlers into their own file
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org

Most of these handlers are TCG dependent, so we want to be able to
leave them out of the build in order to support disabling TCG for ARM.

Signed-off-by: Samuel Ortiz
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
---
 target/arm/excp_helper.c | 550 +++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c      | 531 -------------------------------------
 target/arm/Makefile.objs |   2 +-
 3 files changed, 551 insertions(+), 532 deletions(-)
 create mode 100644 target/arm/excp_helper.c

diff --git a/target/arm/excp_helper.c b/target/arm/excp_helper.c
new file mode 100644
index 0000000000..38fe9703de
--- /dev/null
+++ b/target/arm/excp_helper.c
@@ -0,0 +1,550 @@
+/*
+ * Exception and interrupt helpers.
+ *
+ * This code is licensed under the GNU GPL v2 and later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include "qemu/osdep.h"
+#include "trace.h"
+#include "cpu.h"
+#include "internals.h"
+#include "sysemu/sysemu.h"
+#include "exec/exec-all.h"
+#include "exec/cpu_ldst.h"
+#include "arm_ldst.h"
+#include "exec/semihost.h"
+#include "sysemu/kvm.h"
+
+static void take_aarch32_exception(CPUARMState *env, int new_mode,
+                                   uint32_t mask, uint32_t offset,
+                                   uint32_t newpc)
+{
+    /* Change the CPU state so as to actually take the exception. */
+    switch_mode(env, new_mode);
+    /*
+     * For exceptions taken to AArch32 we must clear the SS bit in both
+     * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
+     */
+    env->uncached_cpsr &= ~PSTATE_SS;
+    env->spsr = cpsr_read(env);
+    /* Clear IT bits.  */
+    env->condexec_bits = 0;
+    /* Switch to the new mode, and to the correct instruction set.  */
+    env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode;
+    /* Set new mode endianness */
+    env->uncached_cpsr &= ~CPSR_E;
+    if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) {
+        env->uncached_cpsr |= CPSR_E;
+    }
+    /* J and IL must always be cleared for exception entry */
+    env->uncached_cpsr &= ~(CPSR_IL | CPSR_J);
+    env->daif |= mask;
+
+    if (new_mode == ARM_CPU_MODE_HYP) {
+        env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0;
+        env->elr_el[2] = env->regs[15];
+    } else {
+        /*
+         * this is a lie, as there was no c1_sys on V4T/V5, but who cares
+         * and we should just guard the thumb mode on V4
+         */
+        if (arm_feature(env, ARM_FEATURE_V4T)) {
+            env->thumb =
+                (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) != 0;
+        }
+        env->regs[14] = env->regs[15] + offset;
+    }
+    env->regs[15] = newpc;
+}
+
+static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
+{
+    /*
+     * Handle exception entry to Hyp mode; this is sufficiently
+     * different to entry to other AArch32 modes that we handle it
+     * separately here.
+     *
+     * The vector table entry used is always the 0x14 Hyp mode entry point,
+     * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp.
+     * The offset applied to the preferred return address is always zero
+     * (see DDI0487C.a section G1.12.3).
+     * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values.
+     */
+    uint32_t addr, mask;
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+
+    switch (cs->exception_index) {
+    case EXCP_UDEF:
+        addr = 0x04;
+        break;
+    case EXCP_SWI:
+        addr = 0x14;
+        break;
+    case EXCP_BKPT:
+        /* Fall through to prefetch abort.  */
+    case EXCP_PREFETCH_ABORT:
+        env->cp15.ifar_s = env->exception.vaddress;
+        qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n",
+                      (uint32_t)env->exception.vaddress);
+        addr = 0x0c;
+        break;
+    case EXCP_DATA_ABORT:
+        env->cp15.dfar_s = env->exception.vaddress;
+        qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n",
+                      (uint32_t)env->exception.vaddress);
+        addr = 0x10;
+        break;
+    case EXCP_IRQ:
+        addr = 0x18;
+        break;
+    case EXCP_FIQ:
+        addr = 0x1c;
+        break;
+    case EXCP_HVC:
+        addr = 0x08;
+        break;
+    case EXCP_HYP_TRAP:
+        addr = 0x14;
+    default:
+        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
+    }
+
+    if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            /*
+             * QEMU syndrome values are v8-style. v7 has the IL bit
+             * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
+             * If this is a v7 CPU, squash the IL bit in those cases.
+             */
+            if (cs->exception_index == EXCP_PREFETCH_ABORT ||
+                (cs->exception_index == EXCP_DATA_ABORT &&
+                 !(env->exception.syndrome & ARM_EL_ISV)) ||
+                syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
+                env->exception.syndrome &= ~ARM_EL_IL;
+            }
+        }
+        env->cp15.esr_el[2] = env->exception.syndrome;
+    }
+
+    if (arm_current_el(env) != 2 && addr < 0x14) {
+        addr = 0x14;
+    }
+
+    mask = 0;
+    if (!(env->cp15.scr_el3 & SCR_EA)) {
+        mask |= CPSR_A;
+    }
+    if (!(env->cp15.scr_el3 & SCR_IRQ)) {
+        mask |= CPSR_I;
+    }
+    if (!(env->cp15.scr_el3 & SCR_FIQ)) {
+        mask |= CPSR_F;
+    }
+
+    addr += env->cp15.hvbar;
+
+    take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr);
+}
+
+static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
+{
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+    uint32_t addr;
+    uint32_t mask;
+    int new_mode;
+    uint32_t offset;
+    uint32_t moe;
+
+    /* If this is a debug exception we must update the DBGDSCR.MOE bits */
+    switch (syn_get_ec(env->exception.syndrome)) {
+    case EC_BREAKPOINT:
+    case EC_BREAKPOINT_SAME_EL:
+        moe = 1;
+        break;
+    case EC_WATCHPOINT:
+    case EC_WATCHPOINT_SAME_EL:
+        moe = 10;
+        break;
+    case EC_AA32_BKPT:
+        moe = 3;
+        break;
+    case EC_VECTORCATCH:
+        moe = 5;
+        break;
+    default:
+        moe = 0;
+        break;
+    }
+
+    if (moe) {
+        env->cp15.mdscr_el1 = deposit64(env->cp15.mdscr_el1, 2, 4, moe);
+    }
+
+    if (env->exception.target_el == 2) {
+        arm_cpu_do_interrupt_aarch32_hyp(cs);
+        return;
+    }
+
+    /* TODO: Vectored interrupt controller.  */
+    switch (cs->exception_index) {
+    case EXCP_UDEF:
+        new_mode = ARM_CPU_MODE_UND;
+        addr = 0x04;
+        mask = CPSR_I;
+        if (env->thumb) {
+            offset = 2;
+        } else {
+            offset = 4;
+        }
+        break;
+    case EXCP_SWI:
+        new_mode = ARM_CPU_MODE_SVC;
+        addr = 0x08;
+        mask = CPSR_I;
+        /* The PC already points to the next instruction.  */
+        offset = 0;
+        break;
+    case EXCP_BKPT:
+        /* Fall through to prefetch abort.
+         */
+    case EXCP_PREFETCH_ABORT:
+        A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr);
+        A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress);
+        qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n",
+                      env->exception.fsr, (uint32_t)env->exception.vaddress);
+        new_mode = ARM_CPU_MODE_ABT;
+        addr = 0x0c;
+        mask = CPSR_A | CPSR_I;
+        offset = 4;
+        break;
+    case EXCP_DATA_ABORT:
+        A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
+        A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress);
+        qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n",
+                      env->exception.fsr,
+                      (uint32_t)env->exception.vaddress);
+        new_mode = ARM_CPU_MODE_ABT;
+        addr = 0x10;
+        mask = CPSR_A | CPSR_I;
+        offset = 8;
+        break;
+    case EXCP_IRQ:
+        new_mode = ARM_CPU_MODE_IRQ;
+        addr = 0x18;
+        /* Disable IRQ and imprecise data aborts.  */
+        mask = CPSR_A | CPSR_I;
+        offset = 4;
+        if (env->cp15.scr_el3 & SCR_IRQ) {
+            /* IRQ routed to monitor mode */
+            new_mode = ARM_CPU_MODE_MON;
+            mask |= CPSR_F;
+        }
+        break;
+    case EXCP_FIQ:
+        new_mode = ARM_CPU_MODE_FIQ;
+        addr = 0x1c;
+        /* Disable FIQ, IRQ and imprecise data aborts.  */
+        mask = CPSR_A | CPSR_I | CPSR_F;
+        if (env->cp15.scr_el3 & SCR_FIQ) {
+            /* FIQ routed to monitor mode */
+            new_mode = ARM_CPU_MODE_MON;
+        }
+        offset = 4;
+        break;
+    case EXCP_VIRQ:
+        new_mode = ARM_CPU_MODE_IRQ;
+        addr = 0x18;
+        /* Disable IRQ and imprecise data aborts.  */
+        mask = CPSR_A | CPSR_I;
+        offset = 4;
+        break;
+    case EXCP_VFIQ:
+        new_mode = ARM_CPU_MODE_FIQ;
+        addr = 0x1c;
+        /* Disable FIQ, IRQ and imprecise data aborts.  */
+        mask = CPSR_A | CPSR_I | CPSR_F;
+        offset = 4;
+        break;
+    case EXCP_SMC:
+        new_mode = ARM_CPU_MODE_MON;
+        addr = 0x08;
+        mask = CPSR_A | CPSR_I | CPSR_F;
+        offset = 0;
+        break;
+    default:
+        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
+        return; /* Never happens.  Keep compiler happy.  */
+    }
+
+    if (new_mode == ARM_CPU_MODE_MON) {
+        addr += env->cp15.mvbar;
+    } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
+        /* High vectors. When enabled, base address cannot be remapped. */
+        addr += 0xffff0000;
+    } else {
+        /* ARM v7 architectures provide a vector base address register to remap
+         * the interrupt vector table.
+         * This register is only followed in non-monitor mode, and is banked.
+         * Note: only bits 31:5 are valid.
+         */
+        addr += A32_BANKED_CURRENT_REG_GET(env, vbar);
+    }
+
+    if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON) {
+        env->cp15.scr_el3 &= ~SCR_NS;
+    }
+
+    take_aarch32_exception(env, new_mode, mask, offset, addr);
+}
+
+/* Handle exception entry to a target EL which is using AArch64 */
+static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
+{
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+    unsigned int new_el = env->exception.target_el;
+    target_ulong addr = env->cp15.vbar_el[new_el];
+    unsigned int new_mode = aarch64_pstate_mode(new_el, true);
+    unsigned int cur_el = arm_current_el(env);
+
+    /*
+     * Note that new_el can never be 0. If cur_el is 0, then
+     * el0_a64 is is_a64(), else el0_a64 is ignored.
+     */
+    aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
+
+    if (cur_el < new_el) {
+        /* Entry vector offset depends on whether the implemented EL
+         * immediately lower than the target level is using AArch32 or AArch64
+         */
+        bool is_aa64;
+
+        switch (new_el) {
+        case 3:
+            is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
+            break;
+        case 2:
+            is_aa64 = (env->cp15.hcr_el2 & HCR_RW) != 0;
+            break;
+        case 1:
+            is_aa64 = is_a64(env);
+            break;
+        default:
+            g_assert_not_reached();
+        }
+
+        if (is_aa64) {
+            addr += 0x400;
+        } else {
+            addr += 0x600;
+        }
+    } else if (pstate_read(env) & PSTATE_SP) {
+        addr += 0x200;
+    }
+
+    switch (cs->exception_index) {
+    case EXCP_PREFETCH_ABORT:
+    case EXCP_DATA_ABORT:
+        env->cp15.far_el[new_el] = env->exception.vaddress;
+        qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n",
+                      env->cp15.far_el[new_el]);
+        /* fall through */
+    case EXCP_BKPT:
+    case EXCP_UDEF:
+    case EXCP_SWI:
+    case EXCP_HVC:
+    case EXCP_HYP_TRAP:
+    case EXCP_SMC:
+        if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
+            /*
+             * QEMU internal FP/SIMD syndromes from AArch32 include the
+             * TA and coproc fields which are only exposed if the exception
+             * is taken to AArch32 Hyp mode. Mask them out to get a valid
+             * AArch64 format syndrome.
+             */
+            env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
+        }
+        env->cp15.esr_el[new_el] = env->exception.syndrome;
+        break;
+    case EXCP_IRQ:
+    case EXCP_VIRQ:
+        addr += 0x80;
+        break;
+    case EXCP_FIQ:
+    case EXCP_VFIQ:
+        addr += 0x100;
+        break;
+    case EXCP_SEMIHOST:
+        qemu_log_mask(CPU_LOG_INT,
+                      "...handling as semihosting call 0x%" PRIx64 "\n",
+                      env->xregs[0]);
+        env->xregs[0] = do_arm_semihosting(env);
+        return;
+    default:
+        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
+    }
+
+    if (is_a64(env)) {
+        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = pstate_read(env);
+        aarch64_save_sp(env, arm_current_el(env));
+        env->elr_el[new_el] = env->pc;
+    } else {
+        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = cpsr_read(env);
+        env->elr_el[new_el] = env->regs[15];
+
+        aarch64_sync_32_to_64(env);
+
+        env->condexec_bits = 0;
+    }
+    qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n",
+                  env->elr_el[new_el]);
+
+    pstate_write(env, PSTATE_DAIF | new_mode);
+    env->aarch64 = 1;
+    aarch64_restore_sp(env, new_el);
+
+    env->pc = addr;
+
+    qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n",
+                  new_el, env->pc, pstate_read(env));
+}
+
+static inline bool check_for_semihosting(CPUState *cs)
+{
+    /* Check whether this exception is a semihosting call; if so
+     * then handle it and return true; otherwise return false.
+     */
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+
+    if (is_a64(env)) {
+        if (cs->exception_index == EXCP_SEMIHOST) {
+            /* This is always the 64-bit semihosting exception.
+             * The "is this usermode" and "is semihosting enabled"
+             * checks have been done at translate time.
+             */
+            qemu_log_mask(CPU_LOG_INT,
+                          "...handling as semihosting call 0x%" PRIx64 "\n",
+                          env->xregs[0]);
+            env->xregs[0] = do_arm_semihosting(env);
+            return true;
+        }
+        return false;
+    } else {
+        uint32_t imm;
+
+        /* Only intercept calls from privileged modes, to provide some
+         * semblance of security.
+         */
+        if (cs->exception_index != EXCP_SEMIHOST &&
+            (!semihosting_enabled() ||
+             ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR))) {
+            return false;
+        }
+
+        switch (cs->exception_index) {
+        case EXCP_SEMIHOST:
+            /* This is always a semihosting call; the "is this usermode"
+             * and "is semihosting enabled" checks have been done at
+             * translate time.
+             */
+            break;
+        case EXCP_SWI:
+            /* Check for semihosting interrupt.  */
+            if (env->thumb) {
+                imm = arm_lduw_code(env, env->regs[15] - 2, arm_sctlr_b(env))
+                    & 0xff;
+                if (imm == 0xab) {
+                    break;
+                }
+            } else {
+                imm = arm_ldl_code(env, env->regs[15] - 4, arm_sctlr_b(env))
+                    & 0xffffff;
+                if (imm == 0x123456) {
+                    break;
+                }
+            }
+            return false;
+        case EXCP_BKPT:
+            /* See if this is a semihosting syscall.  */
+            if (env->thumb) {
+                imm = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env))
+                    & 0xff;
+                if (imm == 0xab) {
+                    env->regs[15] += 2;
+                    break;
+                }
+            }
+            return false;
+        default:
+            return false;
+        }
+
+        qemu_log_mask(CPU_LOG_INT,
+                      "...handling as semihosting call 0x%x\n",
+                      env->regs[0]);
+        env->regs[0] = do_arm_semihosting(env);
+        return true;
+    }
+}
+
+/* Handle a CPU exception for A and R profile CPUs.
+ * Do any appropriate logging, handle PSCI calls, and then hand off
+ * to the AArch64-entry or AArch32-entry function depending on the
+ * target exception level's register width.
+ */
+void arm_cpu_do_interrupt(CPUState *cs)
+{
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+    unsigned int new_el = env->exception.target_el;
+
+    assert(!arm_feature(env, ARM_FEATURE_M));
+
+    arm_log_exception(cs->exception_index);
+    qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(env),
+                  new_el);
+    if (qemu_loglevel_mask(CPU_LOG_INT)
+        && !excp_is_internal(cs->exception_index)) {
+        qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
+                      syn_get_ec(env->exception.syndrome),
+                      env->exception.syndrome);
+    }
+
+    if (arm_is_psci_call(cpu, cs->exception_index)) {
+        arm_handle_psci_call(cpu);
+        qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n");
+        return;
+    }
+
+    /* Semihosting semantics depend on the register width of the
+     * code that caused the exception, not the target exception level,
+     * so must be handled here.
+     */
+    if (check_for_semihosting(cs)) {
+        return;
+    }
+
+    /* Hooks may change global state so BQL should be held, also the
+     * BQL needs to be held for any modification of
+     * cs->interrupt_request.
+     */
+    g_assert(qemu_mutex_iothread_locked());
+
+    arm_call_pre_el_change_hook(cpu);
+
+    assert(!excp_is_internal(cs->exception_index));
+    if (arm_el_is_aa64(env, new_el)) {
+        arm_cpu_do_interrupt_aarch64(cs);
+    } else {
+        arm_cpu_do_interrupt_aarch32(cs);
+    }
+
+    arm_call_el_change_hook(cpu);
+
+    if (!kvm_enabled()) {
+        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
+    }
+}
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 8aa5a9e41d..7b30a4cb49 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -6700,537 +6700,6 @@ void aarch64_sync_64_to_32(CPUARMState *env)
     env->regs[15] = env->pc;
 }
 
-static void take_aarch32_exception(CPUARMState *env, int new_mode,
-                                   uint32_t mask, uint32_t offset,
-                                   uint32_t newpc)
-{
-    /* Change the CPU state so as to actually take the exception.
-     */
-    switch_mode(env, new_mode);
-    /*
-     * For exceptions taken to AArch32 we must clear the SS bit in both
-     * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
-     */
-    env->uncached_cpsr &= ~PSTATE_SS;
-    env->spsr = cpsr_read(env);
-    /* Clear IT bits.  */
-    env->condexec_bits = 0;
-    /* Switch to the new mode, and to the correct instruction set.  */
-    env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode;
-    /* Set new mode endianness */
-    env->uncached_cpsr &= ~CPSR_E;
-    if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) {
-        env->uncached_cpsr |= CPSR_E;
-    }
-    /* J and IL must always be cleared for exception entry */
-    env->uncached_cpsr &= ~(CPSR_IL | CPSR_J);
-    env->daif |= mask;
-
-    if (new_mode == ARM_CPU_MODE_HYP) {
-        env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0;
-        env->elr_el[2] = env->regs[15];
-    } else {
-        /*
-         * this is a lie, as there was no c1_sys on V4T/V5, but who cares
-         * and we should just guard the thumb mode on V4
-         */
-        if (arm_feature(env, ARM_FEATURE_V4T)) {
-            env->thumb =
-                (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) != 0;
-        }
-        env->regs[14] = env->regs[15] + offset;
-    }
-    env->regs[15] = newpc;
-}
-
-static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
-{
-    /*
-     * Handle exception entry to Hyp mode; this is sufficiently
-     * different to entry to other AArch32 modes that we handle it
-     * separately here.
-     *
-     * The vector table entry used is always the 0x14 Hyp mode entry point,
-     * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp.
-     * The offset applied to the preferred return address is always zero
-     * (see DDI0487C.a section G1.12.3).
-     * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values.
-     */
-    uint32_t addr, mask;
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-
-    switch (cs->exception_index) {
-    case EXCP_UDEF:
-        addr = 0x04;
-        break;
-    case EXCP_SWI:
-        addr = 0x14;
-        break;
-    case EXCP_BKPT:
-        /* Fall through to prefetch abort.  */
-    case EXCP_PREFETCH_ABORT:
-        env->cp15.ifar_s = env->exception.vaddress;
-        qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n",
-                      (uint32_t)env->exception.vaddress);
-        addr = 0x0c;
-        break;
-    case EXCP_DATA_ABORT:
-        env->cp15.dfar_s = env->exception.vaddress;
-        qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n",
-                      (uint32_t)env->exception.vaddress);
-        addr = 0x10;
-        break;
-    case EXCP_IRQ:
-        addr = 0x18;
-        break;
-    case EXCP_FIQ:
-        addr = 0x1c;
-        break;
-    case EXCP_HVC:
-        addr = 0x08;
-        break;
-    case EXCP_HYP_TRAP:
-        addr = 0x14;
-    default:
-        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
-    }
-
-    if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
-        if (!arm_feature(env, ARM_FEATURE_V8)) {
-            /*
-             * QEMU syndrome values are v8-style. v7 has the IL bit
-             * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
-             * If this is a v7 CPU, squash the IL bit in those cases.
-             */
-            if (cs->exception_index == EXCP_PREFETCH_ABORT ||
-                (cs->exception_index == EXCP_DATA_ABORT &&
-                 !(env->exception.syndrome & ARM_EL_ISV)) ||
-                syn_get_ec(env->exception.syndrome) == EC_UNCATEGORIZED) {
-                env->exception.syndrome &= ~ARM_EL_IL;
-            }
-        }
-        env->cp15.esr_el[2] = env->exception.syndrome;
-    }
-
-    if (arm_current_el(env) != 2 && addr < 0x14) {
-        addr = 0x14;
-    }
-
-    mask = 0;
-    if (!(env->cp15.scr_el3 & SCR_EA)) {
-        mask |= CPSR_A;
-    }
-    if (!(env->cp15.scr_el3 & SCR_IRQ)) {
-        mask |= CPSR_I;
-    }
-    if (!(env->cp15.scr_el3 & SCR_FIQ)) {
-        mask |= CPSR_F;
-    }
-
-    addr += env->cp15.hvbar;
-
-    take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr);
-}
-
-static void arm_cpu_do_interrupt_aarch32(CPUState *cs)
-{
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-    uint32_t addr;
-    uint32_t mask;
-    int new_mode;
-    uint32_t offset;
-    uint32_t moe;
-
-    /* If this is a debug exception we must update the DBGDSCR.MOE bits */
-    switch (syn_get_ec(env->exception.syndrome)) {
-    case EC_BREAKPOINT:
-    case EC_BREAKPOINT_SAME_EL:
-        moe = 1;
-        break;
-    case EC_WATCHPOINT:
-    case EC_WATCHPOINT_SAME_EL:
-        moe = 10;
-        break;
-    case EC_AA32_BKPT:
-        moe = 3;
-        break;
-    case EC_VECTORCATCH:
-        moe = 5;
-        break;
-    default:
-        moe = 0;
-        break;
-    }
-
-    if (moe) {
-        env->cp15.mdscr_el1 = deposit64(env->cp15.mdscr_el1, 2, 4, moe);
-    }
-
-    if (env->exception.target_el == 2) {
-        arm_cpu_do_interrupt_aarch32_hyp(cs);
-        return;
-    }
-
-    switch (cs->exception_index) {
-    case EXCP_UDEF:
-        new_mode = ARM_CPU_MODE_UND;
-        addr = 0x04;
-        mask = CPSR_I;
-        if (env->thumb)
-            offset = 2;
-        else
-            offset = 4;
-        break;
-    case EXCP_SWI:
-        new_mode = ARM_CPU_MODE_SVC;
-        addr = 0x08;
-        mask = CPSR_I;
-        /* The PC already points to the next instruction.  */
-        offset = 0;
-        break;
-    case EXCP_BKPT:
-        /* Fall through to prefetch abort.
-         */
-    case EXCP_PREFETCH_ABORT:
-        A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr);
-        A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress);
-        qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n",
-                      env->exception.fsr, (uint32_t)env->exception.vaddress);
-        new_mode = ARM_CPU_MODE_ABT;
-        addr = 0x0c;
-        mask = CPSR_A | CPSR_I;
-        offset = 4;
-        break;
-    case EXCP_DATA_ABORT:
-        A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr);
-        A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress);
-        qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n",
-                      env->exception.fsr,
-                      (uint32_t)env->exception.vaddress);
-        new_mode = ARM_CPU_MODE_ABT;
-        addr = 0x10;
-        mask = CPSR_A | CPSR_I;
-        offset = 8;
-        break;
-    case EXCP_IRQ:
-        new_mode = ARM_CPU_MODE_IRQ;
-        addr = 0x18;
-        /* Disable IRQ and imprecise data aborts.  */
-        mask = CPSR_A | CPSR_I;
-        offset = 4;
-        if (env->cp15.scr_el3 & SCR_IRQ) {
-            /* IRQ routed to monitor mode */
-            new_mode = ARM_CPU_MODE_MON;
-            mask |= CPSR_F;
-        }
-        break;
-    case EXCP_FIQ:
-        new_mode = ARM_CPU_MODE_FIQ;
-        addr = 0x1c;
-        /* Disable FIQ, IRQ and imprecise data aborts.  */
-        mask = CPSR_A | CPSR_I | CPSR_F;
-        if (env->cp15.scr_el3 & SCR_FIQ) {
-            /* FIQ routed to monitor mode */
-            new_mode = ARM_CPU_MODE_MON;
-        }
-        offset = 4;
-        break;
-    case EXCP_VIRQ:
-        new_mode = ARM_CPU_MODE_IRQ;
-        addr = 0x18;
-        /* Disable IRQ and imprecise data aborts.  */
-        mask = CPSR_A | CPSR_I;
-        offset = 4;
-        break;
-    case EXCP_VFIQ:
-        new_mode = ARM_CPU_MODE_FIQ;
-        addr = 0x1c;
-        /* Disable FIQ, IRQ and imprecise data aborts.  */
-        mask = CPSR_A | CPSR_I | CPSR_F;
-        offset = 4;
-        break;
-    case EXCP_SMC:
-        new_mode = ARM_CPU_MODE_MON;
-        addr = 0x08;
-        mask = CPSR_A | CPSR_I | CPSR_F;
-        offset = 0;
-        break;
-    default:
-        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
-        return; /* Never happens.  Keep compiler happy.  */
-    }
-
-    if (new_mode == ARM_CPU_MODE_MON) {
-        addr += env->cp15.mvbar;
-    } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) {
-        /* High vectors. When enabled, base address cannot be remapped. */
-        addr += 0xffff0000;
-    } else {
-        /* ARM v7 architectures provide a vector base address register to remap
-         * the interrupt vector table.
-         * This register is only followed in non-monitor mode, and is banked.
-         * Note: only bits 31:5 are valid.
-         */
-        addr += A32_BANKED_CURRENT_REG_GET(env, vbar);
-    }
-
-    if ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_MON) {
-        env->cp15.scr_el3 &= ~SCR_NS;
-    }
-
-    take_aarch32_exception(env, new_mode, mask, offset, addr);
-}
-
-/* Handle exception entry to a target EL which is using AArch64 */
-static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
-{
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-    unsigned int new_el = env->exception.target_el;
-    target_ulong addr = env->cp15.vbar_el[new_el];
-    unsigned int new_mode = aarch64_pstate_mode(new_el, true);
-    unsigned int cur_el = arm_current_el(env);
-
-    /*
-     * Note that new_el can never be 0. If cur_el is 0, then
-     * el0_a64 is is_a64(), else el0_a64 is ignored.
-     */
-    aarch64_sve_change_el(env, cur_el, new_el, is_a64(env));
-
-    if (cur_el < new_el) {
-        /* Entry vector offset depends on whether the implemented EL
-         * immediately lower than the target level is using AArch32 or AArch64
-         */
-        bool is_aa64;
-
-        switch (new_el) {
-        case 3:
-            is_aa64 = (env->cp15.scr_el3 & SCR_RW) != 0;
-            break;
-        case 2:
-            is_aa64 = (env->cp15.hcr_el2 & HCR_RW) != 0;
-            break;
-        case 1:
-            is_aa64 = is_a64(env);
-            break;
-        default:
-            g_assert_not_reached();
-        }
-
-        if (is_aa64) {
-            addr += 0x400;
-        } else {
-            addr += 0x600;
-        }
-    } else if (pstate_read(env) & PSTATE_SP) {
-        addr += 0x200;
-    }
-
-    switch (cs->exception_index) {
-    case EXCP_PREFETCH_ABORT:
-    case EXCP_DATA_ABORT:
-        env->cp15.far_el[new_el] = env->exception.vaddress;
-        qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n",
-                      env->cp15.far_el[new_el]);
-        /* fall through */
-    case EXCP_BKPT:
-    case EXCP_UDEF:
-    case EXCP_SWI:
-    case EXCP_HVC:
-    case EXCP_HYP_TRAP:
-    case EXCP_SMC:
-        if (syn_get_ec(env->exception.syndrome) == EC_ADVSIMDFPACCESSTRAP) {
-            /*
-             * QEMU internal FP/SIMD syndromes from AArch32 include the
-             * TA and coproc fields which are only exposed if the exception
-             * is taken to AArch32 Hyp mode. Mask them out to get a valid
-             * AArch64 format syndrome.
-             */
-            env->exception.syndrome &= ~MAKE_64BIT_MASK(0, 20);
-        }
-        env->cp15.esr_el[new_el] = env->exception.syndrome;
-        break;
-    case EXCP_IRQ:
-    case EXCP_VIRQ:
-        addr += 0x80;
-        break;
-    case EXCP_FIQ:
-    case EXCP_VFIQ:
-        addr += 0x100;
-        break;
-    case EXCP_SEMIHOST:
-        qemu_log_mask(CPU_LOG_INT,
-                      "...handling as semihosting call 0x%" PRIx64 "\n",
-                      env->xregs[0]);
-        env->xregs[0] = do_arm_semihosting(env);
-        return;
-    default:
-        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
-    }
-
-    if (is_a64(env)) {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = pstate_read(env);
-        aarch64_save_sp(env, arm_current_el(env));
-        env->elr_el[new_el] = env->pc;
-    } else {
-        env->banked_spsr[aarch64_banked_spsr_index(new_el)] = cpsr_read(env);
-        env->elr_el[new_el] = env->regs[15];
-
-        aarch64_sync_32_to_64(env);
-
-        env->condexec_bits = 0;
-    }
-    qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n",
-                  env->elr_el[new_el]);
-
-    pstate_write(env, PSTATE_DAIF | new_mode);
-    env->aarch64 = 1;
-    aarch64_restore_sp(env, new_el);
-
-    env->pc = addr;
-
-    qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n",
-                  new_el, env->pc, pstate_read(env));
-}
-
-static inline bool check_for_semihosting(CPUState *cs)
-{
-    /* Check whether this exception is a semihosting call; if so
-     * then handle it and return true; otherwise return false.
-     */
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-
-    if (is_a64(env)) {
-        if (cs->exception_index == EXCP_SEMIHOST) {
-            /* This is always the 64-bit semihosting exception.
-             * The "is this usermode" and "is semihosting enabled"
-             * checks have been done at translate time.
-             */
-            qemu_log_mask(CPU_LOG_INT,
-                          "...handling as semihosting call 0x%" PRIx64 "\n",
-                          env->xregs[0]);
-            env->xregs[0] = do_arm_semihosting(env);
-            return true;
-        }
-        return false;
-    } else {
-        uint32_t imm;
-
-        /* Only intercept calls from privileged modes, to provide some
-         * semblance of security.
-         */
-        if (cs->exception_index != EXCP_SEMIHOST &&
-            (!semihosting_enabled() ||
-             ((env->uncached_cpsr & CPSR_M) == ARM_CPU_MODE_USR))) {
-            return false;
-        }
-
-        switch (cs->exception_index) {
-        case EXCP_SEMIHOST:
-            /* This is always a semihosting call; the "is this usermode"
-             * and "is semihosting enabled" checks have been done at
-             * translate time.
-             */
-            break;
-        case EXCP_SWI:
-            /* Check for semihosting interrupt.  */
-            if (env->thumb) {
-                imm = arm_lduw_code(env, env->regs[15] - 2, arm_sctlr_b(env))
-                    & 0xff;
-                if (imm == 0xab) {
-                    break;
-                }
-            } else {
-                imm = arm_ldl_code(env, env->regs[15] - 4, arm_sctlr_b(env))
-                    & 0xffffff;
-                if (imm == 0x123456) {
-                    break;
-                }
-            }
-            return false;
-        case EXCP_BKPT:
-            /* See if this is a semihosting syscall.  */
-            if (env->thumb) {
-                imm = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env))
-                    & 0xff;
-                if (imm == 0xab) {
-                    env->regs[15] += 2;
-                    break;
-                }
-            }
-            return false;
-        default:
-            return false;
-        }
-
-        qemu_log_mask(CPU_LOG_INT,
-                      "...handling as semihosting call 0x%x\n",
-                      env->regs[0]);
-        env->regs[0] = do_arm_semihosting(env);
-        return true;
-    }
-}
-
-/* Handle a CPU exception for A and R profile CPUs.
- * Do any appropriate logging, handle PSCI calls, and then hand off
- * to the AArch64-entry or AArch32-entry function depending on the
- * target exception level's register width.
- */
-void arm_cpu_do_interrupt(CPUState *cs)
-{
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-    unsigned int new_el = env->exception.target_el;
-
-    assert(!arm_feature(env, ARM_FEATURE_M));
-
-    arm_log_exception(cs->exception_index);
-    qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(env),
-                  new_el);
-    if (qemu_loglevel_mask(CPU_LOG_INT)
-        && !excp_is_internal(cs->exception_index)) {
-        qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n",
-                      syn_get_ec(env->exception.syndrome),
-                      env->exception.syndrome);
-    }
-
-    if (arm_is_psci_call(cpu, cs->exception_index)) {
-        arm_handle_psci_call(cpu);
-        qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n");
-        return;
-    }
-
-    /* Semihosting semantics depend on the register width of the
-     * code that caused the exception, not the target exception level,
-     * so must be handled here.
-     */
-    if (check_for_semihosting(cs)) {
-        return;
-    }
-
-    /* Hooks may change global state so BQL should be held, also the
-     * BQL needs to be held for any modification of
-     * cs->interrupt_request.
-     */
-    g_assert(qemu_mutex_iothread_locked());
-
-    arm_call_pre_el_change_hook(cpu);
-
-    assert(!excp_is_internal(cs->exception_index));
-    if (arm_el_is_aa64(env, new_el)) {
-        arm_cpu_do_interrupt_aarch64(cs);
-    } else {
-        arm_cpu_do_interrupt_aarch32(cs);
-    }
-
-    arm_call_el_change_hook(cpu);
-
-    if (!kvm_enabled()) {
-        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
-    }
-}
-
 /* Return the exception level which controls this address translation regime */
 static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index 76634b2437..85e35ee668 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -6,7 +6,7 @@ obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) += kvm64.o
 obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
 obj-y += translate.o op_helper.o helper.o cpu.o
 obj-y += neon_helper.o iwmmxt_helper.o vec_helper.o
-obj-y += gdbstub.o m_helper.o
+obj-y += gdbstub.o m_helper.o excp_helper.o
 obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
 obj-y += crypto_helper.o
 obj-$(CONFIG_SOFTMMU) += arm-powerctl.o
-- 
2.19.1