From nobody Thu Nov  6 14:16:22 2025
From: Samuel Ortiz
To: qemu-devel@nongnu.org
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org,
    Philippe Mathieu-Daudé
Date: Tue, 13 Nov 2018 17:52:35 +0100
Message-Id: <20181113165247.4806-2-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 01/13] target: arm: Add copyright boilerplate

From: Philippe Mathieu-Daudé

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
Reviewed-by: Samuel Ortiz
---
 target/arm/helper.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 0da1424f72..3d4e9c5c8a 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -1,3 +1,10 @@
+/*
+ * ARM generic helpers.
+ *
+ * This code is licensed under the GNU GPL v2.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
 #include "qemu/osdep.h"
 #include "target/arm/idau.h"
 #include "trace.h"
-- 
2.19.1
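For readers skimming the archive: the patch above pairs a human-readable license statement with a machine-readable SPDX tag. A minimal sketch of what a complete file header in this style looks like; the file name and copyright holder below are hypothetical, not taken from the series:

```c
/*
 * example_helper.c - hypothetical file illustrating the header layout
 *
 * Copyright (c) 2018 Example Copyright Holder
 *
 * This code is licensed under the GNU GPL v2 or later.
 *
 * SPDX-License-Identifier: GPL-2.0-or-later
 */
#include <stdio.h>

int main(void)
{
    /* The body is irrelevant here; the point is the comment block above:
     * prose for human readers, the SPDX tag for license-scanning tools.
     */
    puts("SPDX-tagged file");
    return 0;
}
```

The `SPDX-License-Identifier:` spelling is what tooling greps for, so it must be kept exact; the prose line above it is free-form.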
From nobody Thu Nov  6 14:16:22 2025
From: Samuel Ortiz
To: qemu-devel@nongnu.org
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org,
    Philippe Mathieu-Daudé
Date: Tue, 13 Nov 2018 17:52:36 +0100
Message-Id: <20181113165247.4806-3-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 02/13] target: arm: Remove unused headers

From: Philippe Mathieu-Daudé

Signed-off-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
Reviewed-by: Samuel Ortiz
---
 target/arm/helper.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 3d4e9c5c8a..27d9285e1e 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -12,13 +12,10 @@
 #include "internals.h"
 #include "exec/gdbstub.h"
 #include "exec/helper-proto.h"
-#include "qemu/host-utils.h"
 #include "sysemu/arch_init.h"
 #include "sysemu/sysemu.h"
-#include "qemu/bitops.h"
 #include "qemu/crc32c.h"
 #include "exec/exec-all.h"
-#include "exec/cpu_ldst.h"
 #include "arm_ldst.h"
 #include <zlib.h> /* For crc32 */
 #include "exec/semihost.h"
-- 
2.19.1
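A note on why include removals like the ones above are cheap to verify: deleting an #include that is still needed breaks the build immediately, while deleting a genuinely unused one changes nothing. A self-contained sketch of that property (hypothetical file, not QEMU code):

```c
#include <stdio.h>
#include <string.h>   /* only needed for the strlen() call below */

int main(void)
{
    const char *path = "target/arm/helper.c";

    /* Delete this strlen() use and <string.h> above becomes dead weight,
     * the same situation qemu/host-utils.h, qemu/bitops.h and
     * exec/cpu_ldst.h were in once helper.c stopped using them. Delete
     * the include while the call remains, and the compiler flags the
     * now-undeclared function instead.
     */
    printf("%s is %zu characters long\n", path, strlen(path));
    return 0;
}
```

Finding dead includes in the first place is the manual part; tools such as include-what-you-use can help, but a clean rebuild is the final check.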
From nobody Thu Nov  6 14:16:22 2025
From: Samuel Ortiz
To: qemu-devel@nongnu.org
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org,
    Philippe Mathieu-Daudé
Date: Tue, 13 Nov 2018 17:52:37 +0100
Message-Id: <20181113165247.4806-4-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 03/13] target: arm: Move all v7m helpers into their own file

In preparation for supporting TCG disablement on ARM, we move all
TCG-related v7M helpers and APIs into their own file (m_helper.c, which
holds all v*-M helpers). arm_v7m_cpu_do_interrupt() pulls a large number
of static functions out of helper.c and into m_helper.c because it is
TCG-dependent.

Signed-off-by: Samuel Ortiz
Signed-off-by: Philippe Mathieu-Daudé
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
---
 target/arm/internals.h   |   37 +
 target/arm/helper.c      | 2209 +++-----------------------------------
 target/arm/m_helper.c    | 1892 ++++++++++++++++++++++++++++++++
 target/arm/Makefile.objs |    2 +-
 4 files changed, 2086 insertions(+), 2054 deletions(-)
 create mode 100644 target/arm/m_helper.c

diff --git a/target/arm/internals.h b/target/arm/internals.h
index d208b70a64..ffb5091b1f 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -905,4 +905,41 @@ void arm_cpu_update_virq(ARMCPU *cpu);
  */
 void arm_cpu_update_vfiq(ARMCPU *cpu);
 
+void arm_log_exception(int idx);
+
+/* Cacheability and shareability attributes for a memory access */
+typedef struct ARMCacheAttrs {
+    unsigned int attrs:8; /* as in the MAIR register encoding */
+    unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
+} ARMCacheAttrs;
+
+bool get_phys_addr(CPUARMState *env, target_ulong address,
+                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                   hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
+                   target_ulong *page_size,
+                   ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);
+
+/* Security attributes for an address, as returned by v8m_security_lookup. */
+typedef struct V8M_SAttributes {
+    bool subpage; /* true if these attrs don't cover the whole TARGET_PAGE */
+    bool ns;
+    bool nsc;
+    uint8_t sregion;
+    bool srvalid;
+    uint8_t iregion;
+    bool irvalid;
+} V8M_SAttributes;
+
+void v8m_security_lookup(CPUARMState *env, uint32_t address,
+                         MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                         V8M_SAttributes *sattrs);
+
+void switch_mode(CPUARMState *, int);
+
+bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
+                       MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                       hwaddr *phys_ptr, MemTxAttrs *txattrs,
+                       int *prot, bool *is_subpage,
+                       ARMMMUFaultInfo *fi, uint32_t *mregion);
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 27d9285e1e..8aa5a9e41d 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -26,17 +26,6 @@
 #define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
 
 #ifndef CONFIG_USER_ONLY
-/* Cacheability and shareability attributes for a memory access */
-typedef struct ARMCacheAttrs {
-    unsigned int attrs:8; /* as in the MAIR register encoding */
-    unsigned int shareability:2; /* as in the SH field of the VMSAv8-64 PTEs */
-} ARMCacheAttrs;
-
-static bool get_phys_addr(CPUARMState *env, target_ulong address,
-                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                          hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
-                          target_ulong *page_size,
-                          ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs);
 
 static bool
get_phys_addr_lpae(CPUARMState *env, target_ulong address, MMUAccessType access_type, ARMMMUIdx mmu_id= x, @@ -44,24 +33,8 @@ static bool get_phys_addr_lpae(CPUARMState *env, target_= ulong address, target_ulong *page_size_ptr, ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheat= trs); =20 -/* Security attributes for an address, as returned by v8m_security_lookup.= */ -typedef struct V8M_SAttributes { - bool subpage; /* true if these attrs don't cover the whole TARGET_PAGE= */ - bool ns; - bool nsc; - uint8_t sregion; - bool srvalid; - uint8_t iregion; - bool irvalid; -} V8M_SAttributes; - -static void v8m_security_lookup(CPUARMState *env, uint32_t address, - MMUAccessType access_type, ARMMMUIdx mmu_i= dx, - V8M_SAttributes *sattrs); #endif =20 -static void switch_mode(CPUARMState *env, int mode); - static int vfp_gdb_get_reg(CPUARMState *env, uint8_t *buf, int reg) { int nregs; @@ -6345,57 +6318,7 @@ uint32_t HELPER(rbit)(uint32_t x) =20 #if defined(CONFIG_USER_ONLY) =20 -/* These should probably raise undefined insn exceptions. */ -void HELPER(v7m_msr)(CPUARMState *env, uint32_t reg, uint32_t val) -{ - ARMCPU *cpu =3D arm_env_get_cpu(env); - - cpu_abort(CPU(cpu), "v7m_msr %d\n", reg); -} - -uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg) -{ - ARMCPU *cpu =3D arm_env_get_cpu(env); - - cpu_abort(CPU(cpu), "v7m_mrs %d\n", reg); - return 0; -} - -void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) -{ - /* translate.c should never generate calls here in user-only mode */ - g_assert_not_reached(); -} - -void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) -{ - /* translate.c should never generate calls here in user-only mode */ - g_assert_not_reached(); -} - -uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) -{ - /* The TT instructions can be used by unprivileged code, but in - * user-only emulation we don't have the MPU. 
- * Luckily since we know we are NonSecure unprivileged (and that in - * turn means that the A flag wasn't specified), all the bits in the - * register must be zero: - * IREGION: 0 because IRVALID is 0 - * IRVALID: 0 because NS - * S: 0 because NS - * NSRW: 0 because NS - * NSR: 0 because NS - * RW: 0 because unpriv and A flag not set - * R: 0 because unpriv and A flag not set - * SRVALID: 0 because NS - * MRVALID: 0 because unpriv and A flag not set - * SREGION: 0 becaus SRVALID is 0 - * MREGION: 0 because MRVALID is 0 - */ - return 0; -} - -static void switch_mode(CPUARMState *env, int mode) +void switch_mode(CPUARMState *env, int mode) { ARMCPU *cpu =3D arm_env_get_cpu(env); =20 @@ -6417,7 +6340,7 @@ void aarch64_sync_64_to_32(CPUARMState *env) =20 #else =20 -static void switch_mode(CPUARMState *env, int mode) +void switch_mode(CPUARMState *env, int mode) { int old_mode; int i; @@ -6552,165 +6475,6 @@ uint32_t arm_phys_excp_target_el(CPUState *cs, uint= 32_t excp_idx, return target_el; } =20 -static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value, - ARMMMUIdx mmu_idx, bool ignfault) -{ - CPUState *cs =3D CPU(cpu); - CPUARMState *env =3D &cpu->env; - MemTxAttrs attrs =3D {}; - MemTxResult txres; - target_ulong page_size; - hwaddr physaddr; - int prot; - ARMMMUFaultInfo fi =3D {}; - bool secure =3D mmu_idx & ARM_MMU_IDX_M_S; - int exc; - bool exc_secure; - - if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr, - &attrs, &prot, &page_size, &fi, NULL)) { - /* MPU/SAU lookup failed */ - if (fi.type =3D=3D ARMFault_QEMU_SFault) { - qemu_log_mask(CPU_LOG_INT, - "...SecureFault with SFSR.AUVIOL during stacking= \n"); - env->v7m.sfsr |=3D R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVAL= ID_MASK; - env->v7m.sfar =3D addr; - exc =3D ARMV7M_EXCP_SECURE; - exc_secure =3D false; - } else { - qemu_log_mask(CPU_LOG_INT, "...MemManageFault with CFSR.MSTKER= R\n"); - env->v7m.cfsr[secure] |=3D R_V7M_CFSR_MSTKERR_MASK; - exc =3D ARMV7M_EXCP_MEM; - exc_secure =3D secure; - } - goto pend_fault; - } - address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value, - attrs, &txres); - if (txres !=3D MEMTX_OK) { - /* BusFault trying to write the data */ - qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n"); - env->v7m.cfsr[M_REG_NS] |=3D R_V7M_CFSR_STKERR_MASK; - exc =3D ARMV7M_EXCP_BUS; - exc_secure =3D false; - goto pend_fault; - } - return true; - -pend_fault: - /* By pending the exception at this point we are making - * the IMPDEF choice "overridden exceptions pended" (see the - * MergeExcInfo() pseudocode). The other choice would be to not - * pend them now and then make a choice about which to throw away - * later if we have two derived exceptions. - * The only case when we must not pend the exception but instead - * throw it away is if we are doing the push of the callee registers - * and we've already generated a derived exception. Even in this - * case we will still update the fault status registers. 
- */ - if (!ignfault) { - armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure); - } - return false; -} - -static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr, - ARMMMUIdx mmu_idx) -{ - CPUState *cs =3D CPU(cpu); - CPUARMState *env =3D &cpu->env; - MemTxAttrs attrs =3D {}; - MemTxResult txres; - target_ulong page_size; - hwaddr physaddr; - int prot; - ARMMMUFaultInfo fi =3D {}; - bool secure =3D mmu_idx & ARM_MMU_IDX_M_S; - int exc; - bool exc_secure; - uint32_t value; - - if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr, - &attrs, &prot, &page_size, &fi, NULL)) { - /* MPU/SAU lookup failed */ - if (fi.type =3D=3D ARMFault_QEMU_SFault) { - qemu_log_mask(CPU_LOG_INT, - "...SecureFault with SFSR.AUVIOL during unstack\= n"); - env->v7m.sfsr |=3D R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVAL= ID_MASK; - env->v7m.sfar =3D addr; - exc =3D ARMV7M_EXCP_SECURE; - exc_secure =3D false; - } else { - qemu_log_mask(CPU_LOG_INT, - "...MemManageFault with CFSR.MUNSTKERR\n"); - env->v7m.cfsr[secure] |=3D R_V7M_CFSR_MUNSTKERR_MASK; - exc =3D ARMV7M_EXCP_MEM; - exc_secure =3D secure; - } - goto pend_fault; - } - - value =3D address_space_ldl(arm_addressspace(cs, attrs), physaddr, - attrs, &txres); - if (txres !=3D MEMTX_OK) { - /* BusFault trying to read the data */ - qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n"); - env->v7m.cfsr[M_REG_NS] |=3D R_V7M_CFSR_UNSTKERR_MASK; - exc =3D ARMV7M_EXCP_BUS; - exc_secure =3D false; - goto pend_fault; - } - - *dest =3D value; - return true; - -pend_fault: - /* By pending the exception at this point we are making - * the IMPDEF choice "overridden exceptions pended" (see the - * MergeExcInfo() pseudocode). The other choice would be to not - * pend them now and then make a choice about which to throw away - * later if we have two derived exceptions. - */ - armv7m_nvic_set_pending(env->nvic, exc, exc_secure); - return false; -} - -/* Write to v7M CONTROL.SPSEL bit for the specified security bank. - * This may change the current stack pointer between Main and Process - * stack pointers if it is done for the CONTROL register for the current - * security state. - */ -static void write_v7m_control_spsel_for_secstate(CPUARMState *env, - bool new_spsel, - bool secstate) -{ - bool old_is_psp =3D v7m_using_psp(env); - - env->v7m.control[secstate] =3D - deposit32(env->v7m.control[secstate], - R_V7M_CONTROL_SPSEL_SHIFT, - R_V7M_CONTROL_SPSEL_LENGTH, new_spsel); - - if (secstate =3D=3D env->v7m.secure) { - bool new_is_psp =3D v7m_using_psp(env); - uint32_t tmp; - - if (old_is_psp !=3D new_is_psp) { - tmp =3D env->v7m.other_sp; - env->v7m.other_sp =3D env->regs[13]; - env->regs[13] =3D tmp; - } - } -} - -/* Write to v7M CONTROL.SPSEL bit. This may change the current - * stack pointer between Main and Process stack pointers. - */ -static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel) -{ - write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure); -} - void write_v7m_exception(CPUARMState *env, uint32_t new_exc) { /* Write a new value to v7m.exception, thus transitioning into or out @@ -6730,1444 +6494,192 @@ void write_v7m_exception(CPUARMState *env, uint3= 2_t new_exc) } } =20 -/* Switch M profile security state between NS and S */ -static void switch_v7m_security_state(CPUARMState *env, bool new_secstate) +/* Function used to synchronize QEMU's AArch64 register set with AArch32 + * register set. This is necessary when switching between AArch32 and AAr= ch64 + * execution state. 
+ */ +void aarch64_sync_32_to_64(CPUARMState *env) { - uint32_t new_ss_msp, new_ss_psp; + int i; + uint32_t mode =3D env->uncached_cpsr & CPSR_M; =20 - if (env->v7m.secure =3D=3D new_secstate) { - return; + /* We can blanket copy R[0:7] to X[0:7] */ + for (i =3D 0; i < 8; i++) { + env->xregs[i] =3D env->regs[i]; } =20 - /* All the banked state is accessed by looking at env->v7m.secure - * except for the stack pointer; rearrange the SP appropriately. + /* Unless we are in FIQ mode, x8-x12 come from the user registers r8-r= 12. + * Otherwise, they come from the banked user regs. */ - new_ss_msp =3D env->v7m.other_ss_msp; - new_ss_psp =3D env->v7m.other_ss_psp; - - if (v7m_using_psp(env)) { - env->v7m.other_ss_psp =3D env->regs[13]; - env->v7m.other_ss_msp =3D env->v7m.other_sp; - } else { - env->v7m.other_ss_msp =3D env->regs[13]; - env->v7m.other_ss_psp =3D env->v7m.other_sp; - } - - env->v7m.secure =3D new_secstate; - - if (v7m_using_psp(env)) { - env->regs[13] =3D new_ss_psp; - env->v7m.other_sp =3D new_ss_msp; + if (mode =3D=3D ARM_CPU_MODE_FIQ) { + for (i =3D 8; i < 13; i++) { + env->xregs[i] =3D env->usr_regs[i - 8]; + } } else { - env->regs[13] =3D new_ss_msp; - env->v7m.other_sp =3D new_ss_psp; + for (i =3D 8; i < 13; i++) { + env->xregs[i] =3D env->regs[i]; + } } -} =20 -void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest) -{ - /* Handle v7M BXNS: - * - if the return value is a magic value, do exception return (like = BX) - * - otherwise bit 0 of the return value is the target security state + /* Registers x13-x23 are the various mode SP and FP registers. Registe= rs + * r13 and r14 are only copied if we are in that mode, otherwise we co= py + * from the mode banked register. */ - uint32_t min_magic; - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - /* Covers FNC_RETURN and EXC_RETURN magic */ - min_magic =3D FNC_RETURN_MIN_MAGIC; + if (mode =3D=3D ARM_CPU_MODE_USR || mode =3D=3D ARM_CPU_MODE_SYS) { + env->xregs[13] =3D env->regs[13]; + env->xregs[14] =3D env->regs[14]; } else { - /* EXC_RETURN magic only */ - min_magic =3D EXC_RETURN_MIN_MAGIC; + env->xregs[13] =3D env->banked_r13[bank_number(ARM_CPU_MODE_USR)]; + /* HYP is an exception in that it is copied from r14 */ + if (mode =3D=3D ARM_CPU_MODE_HYP) { + env->xregs[14] =3D env->regs[14]; + } else { + env->xregs[14] =3D env->banked_r14[bank_number(ARM_CPU_MODE_US= R)]; + } } =20 - if (dest >=3D min_magic) { - /* This is an exception return magic value; put it where - * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT. - * Note that if we ever add gen_ss_advance() singlestep support to - * M profile this should count as an "instruction execution comple= te" - * event (compare gen_bx_excret_final_code()). 
- */ - env->regs[15] =3D dest & ~1; - env->thumb =3D dest & 1; - HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT); - /* notreached */ + if (mode =3D=3D ARM_CPU_MODE_HYP) { + env->xregs[15] =3D env->regs[13]; + } else { + env->xregs[15] =3D env->banked_r13[bank_number(ARM_CPU_MODE_HYP)]; } =20 - /* translate.c should have made BXNS UNDEF unless we're secure */ - assert(env->v7m.secure); - - switch_v7m_security_state(env, dest & 1); - env->thumb =3D 1; - env->regs[15] =3D dest & ~1; -} - -void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest) -{ - /* Handle v7M BLXNS: - * - bit 0 of the destination address is the target security state - */ - - /* At this point regs[15] is the address just after the BLXNS */ - uint32_t nextinst =3D env->regs[15] | 1; - uint32_t sp =3D env->regs[13] - 8; - uint32_t saved_psr; - - /* translate.c will have made BLXNS UNDEF unless we're secure */ - assert(env->v7m.secure); - - if (dest & 1) { - /* target is Secure, so this is just a normal BLX, - * except that the low bit doesn't indicate Thumb/not. - */ - env->regs[14] =3D nextinst; - env->thumb =3D 1; - env->regs[15] =3D dest & ~1; - return; + if (mode =3D=3D ARM_CPU_MODE_IRQ) { + env->xregs[16] =3D env->regs[14]; + env->xregs[17] =3D env->regs[13]; + } else { + env->xregs[16] =3D env->banked_r14[bank_number(ARM_CPU_MODE_IRQ)]; + env->xregs[17] =3D env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)]; } =20 - /* Target is non-secure: first push a stack frame */ - if (!QEMU_IS_ALIGNED(sp, 8)) { - qemu_log_mask(LOG_GUEST_ERROR, - "BLXNS with misaligned SP is UNPREDICTABLE\n"); + if (mode =3D=3D ARM_CPU_MODE_SVC) { + env->xregs[18] =3D env->regs[14]; + env->xregs[19] =3D env->regs[13]; + } else { + env->xregs[18] =3D env->banked_r14[bank_number(ARM_CPU_MODE_SVC)]; + env->xregs[19] =3D env->banked_r13[bank_number(ARM_CPU_MODE_SVC)]; } =20 - if (sp < v7m_sp_limit(env)) { - raise_exception(env, EXCP_STKOF, 0, 1); + if (mode =3D=3D ARM_CPU_MODE_ABT) { + env->xregs[20] =3D env->regs[14]; + env->xregs[21] =3D env->regs[13]; + } else { + env->xregs[20] =3D env->banked_r14[bank_number(ARM_CPU_MODE_ABT)]; + env->xregs[21] =3D env->banked_r13[bank_number(ARM_CPU_MODE_ABT)]; } =20 - saved_psr =3D env->v7m.exception; - if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) { - saved_psr |=3D XPSR_SFPA; + if (mode =3D=3D ARM_CPU_MODE_UND) { + env->xregs[22] =3D env->regs[14]; + env->xregs[23] =3D env->regs[13]; + } else { + env->xregs[22] =3D env->banked_r14[bank_number(ARM_CPU_MODE_UND)]; + env->xregs[23] =3D env->banked_r13[bank_number(ARM_CPU_MODE_UND)]; } =20 - /* Note that these stores can throw exceptions on MPU faults */ - cpu_stl_data(env, sp, nextinst); - cpu_stl_data(env, sp + 4, saved_psr); - - env->regs[13] =3D sp; - env->regs[14] =3D 0xfeffffff; - if (arm_v7m_is_handler_mode(env)) { - /* Write a dummy value to IPSR, to avoid leaking the current secure - * exception number to non-secure code. This is guaranteed not - * to cause write_v7m_exception() to actually change stacks. - */ - write_v7m_exception(env, 1); - } - switch_v7m_security_state(env, 0); - env->thumb =3D 1; - env->regs[15] =3D dest; -} - -static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool thread= mode, - bool spsel) -{ - /* Return a pointer to the location where we currently store the - * stack pointer for the requested security state and thread mode. - * This pointer will become invalid if the CPU state is updated - * such that the stack pointers are switched around (eg changing - * the SPSEL control bit). 
- * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode(). - * Unlike that pseudocode, we require the caller to pass us in the - * SPSEL control bit value; this is because we also use this - * function in handling of pushing of the callee-saves registers - * part of the v8M stack frame (pseudocode PushCalleeStack()), - * and in the tailchain codepath the SPSEL bit comes from the exception - * return magic LR value from the previous exception. The pseudocode - * opencodes the stack-selection in PushCalleeStack(), but we prefer - * to make this utility function generic enough to do the job. - */ - bool want_psp =3D threadmode && spsel; - - if (secure =3D=3D env->v7m.secure) { - if (want_psp =3D=3D v7m_using_psp(env)) { - return &env->regs[13]; - } else { - return &env->v7m.other_sp; + /* Registers x24-x30 are mapped to r8-r14 in FIQ mode. If we are in F= IQ + * mode, then we can copy from r8-r14. Otherwise, we copy from the + * FIQ bank for r8-r14. + */ + if (mode =3D=3D ARM_CPU_MODE_FIQ) { + for (i =3D 24; i < 31; i++) { + env->xregs[i] =3D env->regs[i - 16]; /* X[24:30] <- R[8:14] = */ } } else { - if (want_psp) { - return &env->v7m.other_ss_psp; - } else { - return &env->v7m.other_ss_msp; + for (i =3D 24; i < 29; i++) { + env->xregs[i] =3D env->fiq_regs[i - 24]; } + env->xregs[29] =3D env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)]; + env->xregs[30] =3D env->banked_r14[bank_number(ARM_CPU_MODE_FIQ)]; } + + env->pc =3D env->regs[15]; } =20 -static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure, - uint32_t *pvec) +/* Function used to synchronize QEMU's AArch32 register set with AArch64 + * register set. This is necessary when switching between AArch32 and AAr= ch64 + * execution state. + */ +void aarch64_sync_64_to_32(CPUARMState *env) { - CPUState *cs =3D CPU(cpu); - CPUARMState *env =3D &cpu->env; - MemTxResult result; - uint32_t addr =3D env->v7m.vecbase[targets_secure] + exc * 4; - uint32_t vector_entry; - MemTxAttrs attrs =3D {}; - ARMMMUIdx mmu_idx; - bool exc_secure; + int i; + uint32_t mode =3D env->uncached_cpsr & CPSR_M; =20 - mmu_idx =3D arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure,= true); + /* We can blanket copy X[0:7] to R[0:7] */ + for (i =3D 0; i < 8; i++) { + env->regs[i] =3D env->xregs[i]; + } =20 - /* We don't do a get_phys_addr() here because the rules for vector - * loads are special: they always use the default memory map, and - * the default memory map permits reads from all addresses. - * Since there's no easy way to pass through to pmsav8_mpu_lookup() - * that we want this special case which would always say "yes", - * we just do the SAU lookup here followed by a direct physical load. + /* Unless we are in FIQ mode, r8-r12 come from the user registers x8-x= 12. + * Otherwise, we copy x8-x12 into the banked user regs. */ - attrs.secure =3D targets_secure; - attrs.user =3D false; + if (mode =3D=3D ARM_CPU_MODE_FIQ) { + for (i =3D 8; i < 13; i++) { + env->usr_regs[i - 8] =3D env->xregs[i]; + } + } else { + for (i =3D 8; i < 13; i++) { + env->regs[i] =3D env->xregs[i]; + } + } =20 - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - V8M_SAttributes sattrs =3D {}; + /* Registers r13 & r14 depend on the current mode. + * If we are in a given mode, we copy the corresponding x registers to= r13 + * and r14. Otherwise, we copy the x register to the banked r13 and r= 14 + * for the mode. 
+ */ + if (mode =3D=3D ARM_CPU_MODE_USR || mode =3D=3D ARM_CPU_MODE_SYS) { + env->regs[13] =3D env->xregs[13]; + env->regs[14] =3D env->xregs[14]; + } else { + env->banked_r13[bank_number(ARM_CPU_MODE_USR)] =3D env->xregs[13]; =20 - v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs); - if (sattrs.ns) { - attrs.secure =3D false; - } else if (!targets_secure) { - /* NS access to S memory */ - goto load_fail; + /* HYP is an exception in that it does not have its own banked r14= but + * shares the USR r14 + */ + if (mode =3D=3D ARM_CPU_MODE_HYP) { + env->regs[14] =3D env->xregs[14]; + } else { + env->banked_r14[bank_number(ARM_CPU_MODE_USR)] =3D env->xregs[= 14]; } } =20 - vector_entry =3D address_space_ldl(arm_addressspace(cs, attrs), addr, - attrs, &result); - if (result !=3D MEMTX_OK) { - goto load_fail; + if (mode =3D=3D ARM_CPU_MODE_HYP) { + env->regs[13] =3D env->xregs[15]; + } else { + env->banked_r13[bank_number(ARM_CPU_MODE_HYP)] =3D env->xregs[15]; } - *pvec =3D vector_entry; - return true; - -load_fail: - /* All vector table fetch fails are reported as HardFault, with - * HFSR.VECTTBL and .FORCED set. (FORCED is set because - * technically the underlying exception is a MemManage or BusFault - * that is escalated to HardFault.) This is a terminal exception, - * so we will either take the HardFault immediately or else enter - * lockup (the latter case is handled in armv7m_nvic_set_pending_deriv= ed()). - */ - exc_secure =3D targets_secure || - !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK); - env->v7m.hfsr |=3D R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK; - armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secur= e); - return false; -} =20 -static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailcha= in, - bool ignore_faults) -{ - /* For v8M, push the callee-saves register part of the stack frame. - * Compare the v8M pseudocode PushCalleeStack(). - * In the tailchaining case this may not be the current stack. - */ - CPUARMState *env =3D &cpu->env; - uint32_t *frame_sp_p; - uint32_t frameptr; - ARMMMUIdx mmu_idx; - bool stacked_ok; - uint32_t limit; - bool want_psp; - - if (dotailchain) { - bool mode =3D lr & R_V7M_EXCRET_MODE_MASK; - bool priv =3D !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MA= SK) || - !mode; - - mmu_idx =3D arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, pr= iv); - frame_sp_p =3D get_v7m_sp_ptr(env, M_REG_S, mode, - lr & R_V7M_EXCRET_SPSEL_MASK); - want_psp =3D mode && (lr & R_V7M_EXCRET_SPSEL_MASK); - if (want_psp) { - limit =3D env->v7m.psplim[M_REG_S]; - } else { - limit =3D env->v7m.msplim[M_REG_S]; - } + if (mode =3D=3D ARM_CPU_MODE_IRQ) { + env->regs[14] =3D env->xregs[16]; + env->regs[13] =3D env->xregs[17]; } else { - mmu_idx =3D core_to_arm_mmu_idx(env, cpu_mmu_index(env, false)); - frame_sp_p =3D &env->regs[13]; - limit =3D v7m_sp_limit(env); + env->banked_r14[bank_number(ARM_CPU_MODE_IRQ)] =3D env->xregs[16]; + env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)] =3D env->xregs[17]; } =20 - frameptr =3D *frame_sp_p - 0x28; - if (frameptr < limit) { - /* - * Stack limit failure: set SP to the limit value, and generate - * STKOF UsageFault. Stack pushes below the limit must not be - * performed. It is IMPDEF whether pushes above the limit are - * performed; we choose not to. 
- */ - qemu_log_mask(CPU_LOG_INT, - "...STKOF during callee-saves register stacking\n"); - env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_STKOF_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - *frame_sp_p =3D limit; - return true; + if (mode =3D=3D ARM_CPU_MODE_SVC) { + env->regs[14] =3D env->xregs[18]; + env->regs[13] =3D env->xregs[19]; + } else { + env->banked_r14[bank_number(ARM_CPU_MODE_SVC)] =3D env->xregs[18]; + env->banked_r13[bank_number(ARM_CPU_MODE_SVC)] =3D env->xregs[19]; } =20 - /* Write as much of the stack frame as we can. A write failure may - * cause us to pend a derived exception. - */ - stacked_ok =3D - v7m_stack_write(cpu, frameptr, 0xfefa125b, mmu_idx, ignore_faults)= && - v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx, - ignore_faults) && - v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx, - ignore_faults) && - v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx, - ignore_faults) && - v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx, - ignore_faults) && - v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx, - ignore_faults) && - v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx, - ignore_faults) && - v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx, - ignore_faults) && - v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx, - ignore_faults); + if (mode =3D=3D ARM_CPU_MODE_ABT) { + env->regs[14] =3D env->xregs[20]; + env->regs[13] =3D env->xregs[21]; + } else { + env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)] =3D env->xregs[= 20]; + env->banked_r13[bank_number(ARM_CPU_MODE_ABT)] =3D env->xregs[21]; + } =20 - /* Update SP regardless of whether any of the stack accesses failed. */ - *frame_sp_p =3D frameptr; - - return !stacked_ok; -} - -static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain, - bool ignore_stackfaults) -{ - /* Do the "take the exception" parts of exception entry, - * but not the pushing of state to the stack. This is - * similar to the pseudocode ExceptionTaken() function. - */ - CPUARMState *env =3D &cpu->env; - uint32_t addr; - bool targets_secure; - int exc; - bool push_failed =3D false; - - armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure); - qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n", - targets_secure ? "secure" : "nonsecure", exc); - - if (arm_feature(env, ARM_FEATURE_V8)) { - if (arm_feature(env, ARM_FEATURE_M_SECURITY) && - (lr & R_V7M_EXCRET_S_MASK)) { - /* The background code (the owner of the registers in the - * exception frame) is Secure. This means it may either already - * have or now needs to push callee-saves registers. - */ - if (targets_secure) { - if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) { - /* We took an exception from Secure to NonSecure - * (which means the callee-saved registers got stacked) - * and are now tailchaining to a Secure exception. - * Clear DCRS so eventual return from this Secure - * exception unstacks the callee-saved registers. - */ - lr &=3D ~R_V7M_EXCRET_DCRS_MASK; - } - } else { - /* We're going to a non-secure exception; push the - * callee-saves registers to the stack now, if they're - * not already saved. 
- */ - if (lr & R_V7M_EXCRET_DCRS_MASK && - !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) { - push_failed =3D v7m_push_callee_stack(cpu, lr, dotailc= hain, - ignore_stackfaults= ); - } - lr |=3D R_V7M_EXCRET_DCRS_MASK; - } - } - - lr &=3D ~R_V7M_EXCRET_ES_MASK; - if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) { - lr |=3D R_V7M_EXCRET_ES_MASK; - } - lr &=3D ~R_V7M_EXCRET_SPSEL_MASK; - if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) { - lr |=3D R_V7M_EXCRET_SPSEL_MASK; - } - - /* Clear registers if necessary to prevent non-secure exception - * code being able to see register values from secure code. - * Where register values become architecturally UNKNOWN we leave - * them with their previous values. - */ - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - if (!targets_secure) { - /* Always clear the caller-saved registers (they have been - * pushed to the stack earlier in v7m_push_stack()). - * Clear callee-saved registers if the background code is - * Secure (in which case these regs were saved in - * v7m_push_callee_stack()). - */ - int i; - - for (i =3D 0; i < 13; i++) { - /* r4..r11 are callee-saves, zero only if EXCRET.S =3D= =3D 1 */ - if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) { - env->regs[i] =3D 0; - } - } - /* Clear EAPSR */ - xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT); - } - } - } - - if (push_failed && !ignore_stackfaults) { - /* Derived exception on callee-saves register stacking: - * we might now want to take a different exception which - * targets a different security state, so try again from the top. - */ - qemu_log_mask(CPU_LOG_INT, - "...derived exception on callee-saves register stack= ing"); - v7m_exception_taken(cpu, lr, true, true); - return; - } - - if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) { - /* Vector load failed: derived exception */ - qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table l= oad"); - v7m_exception_taken(cpu, lr, true, true); - return; - } - - /* Now we've done everything that might cause a derived exception - * we can go ahead and activate whichever exception we're going to - * take (which might now be the derived exception). - */ - armv7m_nvic_acknowledge_irq(env->nvic); - - /* Switch to target security state -- must do this before writing SPSE= L */ - switch_v7m_security_state(env, targets_secure); - write_v7m_control_spsel(env, 0); - arm_clear_exclusive(env); - /* Clear IT bits */ - env->condexec_bits =3D 0; - env->regs[14] =3D lr; - env->regs[15] =3D addr & 0xfffffffe; - env->thumb =3D addr & 1; -} - -static bool v7m_push_stack(ARMCPU *cpu) -{ - /* Do the "set up stack frame" part of exception entry, - * similar to pseudocode PushStack(). - * Return true if we generate a derived exception (and so - * should ignore further stack faults trying to process - * that derived exception.) - */ - bool stacked_ok; - CPUARMState *env =3D &cpu->env; - uint32_t xpsr =3D xpsr_read(env); - uint32_t frameptr =3D env->regs[13]; - ARMMMUIdx mmu_idx =3D core_to_arm_mmu_idx(env, cpu_mmu_index(env, fals= e)); - - /* Align stack pointer if the guest wants that */ - if ((frameptr & 4) && - (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) { - frameptr -=3D 4; - xpsr |=3D XPSR_SPREALIGN; - } - - frameptr -=3D 0x20; - - if (arm_feature(env, ARM_FEATURE_V8)) { - uint32_t limit =3D v7m_sp_limit(env); - - if (frameptr < limit) { - /* - * Stack limit failure: set SP to the limit value, and generate - * STKOF UsageFault. Stack pushes below the limit must not be - * performed. 
It is IMPDEF whether pushes above the limit are - * performed; we choose not to. - */ - qemu_log_mask(CPU_LOG_INT, - "...STKOF during stacking\n"); - env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_STKOF_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - env->regs[13] =3D limit; - return true; - } - } - - /* Write as much of the stack frame as we can. If we fail a stack - * write this will result in a derived exception being pended - * (which may be taken in preference to the one we started with - * if it has higher priority). - */ - stacked_ok =3D - v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) && - v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) && - v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) && - v7m_stack_write(cpu, frameptr + 12, env->regs[3], mmu_idx, false) = && - v7m_stack_write(cpu, frameptr + 16, env->regs[12], mmu_idx, false)= && - v7m_stack_write(cpu, frameptr + 20, env->regs[14], mmu_idx, false)= && - v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false)= && - v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false); - - /* Update SP regardless of whether any of the stack accesses failed. */ - env->regs[13] =3D frameptr; - - return !stacked_ok; -} - -static void do_v7m_exception_exit(ARMCPU *cpu) -{ - CPUARMState *env =3D &cpu->env; - uint32_t excret; - uint32_t xpsr; - bool ufault =3D false; - bool sfault =3D false; - bool return_to_sp_process; - bool return_to_handler; - bool rettobase =3D false; - bool exc_secure =3D false; - bool return_to_secure; - - /* If we're not in Handler mode then jumps to magic exception-exit - * addresses don't have magic behaviour. However for the v8M - * security extensions the magic secure-function-return has to - * work in thread mode too, so to avoid doing an extra check in - * the generated code we allow exception-exit magic to also cause the - * internal exception and bring us here in thread mode. Correct code - * will never try to do this (the following insn fetch will always - * fault) so we the overhead of having taken an unnecessary exception - * doesn't matter. - */ - if (!arm_v7m_is_handler_mode(env)) { - return; - } - - /* In the spec pseudocode ExceptionReturn() is called directly - * from BXWritePC() and gets the full target PC value including - * bit zero. In QEMU's implementation we treat it as a normal - * jump-to-register (which is then caught later on), and so split - * the target value up between env->regs[15] and env->thumb in - * gen_bx(). Reconstitute it. - */ - excret =3D env->regs[15]; - if (env->thumb) { - excret |=3D 1; - } - - qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32 - " previous exception %d\n", - excret, env->v7m.exception); - - if ((excret & R_V7M_EXCRET_RES1_MASK) !=3D R_V7M_EXCRET_RES1_MASK) { - qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in excep= tion " - "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n", - excret); - } - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - /* EXC_RETURN.ES validation check (R_SMFL). We must do this before - * we pick which FAULTMASK to clear. - */ - if (!env->v7m.secure && - ((excret & R_V7M_EXCRET_ES_MASK) || - !(excret & R_V7M_EXCRET_DCRS_MASK))) { - sfault =3D 1; - /* For all other purposes, treat ES as 0 (R_HXSR) */ - excret &=3D ~R_V7M_EXCRET_ES_MASK; - } - exc_secure =3D excret & R_V7M_EXCRET_ES_MASK; - } - - if (env->v7m.exception !=3D ARMV7M_EXCP_NMI) { - /* Auto-clear FAULTMASK on return from other than NMI. 
- * If the security extension is implemented then this only - * happens if the raw execution priority is >=3D 0; the - * value of the ES bit in the exception return value indicates - * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.) - */ - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - if (armv7m_nvic_raw_execution_priority(env->nvic) >=3D 0) { - env->v7m.faultmask[exc_secure] =3D 0; - } - } else { - env->v7m.faultmask[M_REG_NS] =3D 0; - } - } - - switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception, - exc_secure)) { - case -1: - /* attempt to exit an exception that isn't active */ - ufault =3D true; - break; - case 0: - /* still an irq active now */ - break; - case 1: - /* we returned to base exception level, no nesting. - * (In the pseudocode this is written using "NestedActivation !=3D= 1" - * where we have 'rettobase =3D=3D false'.) - */ - rettobase =3D true; - break; - default: - g_assert_not_reached(); - } - - return_to_handler =3D !(excret & R_V7M_EXCRET_MODE_MASK); - return_to_sp_process =3D excret & R_V7M_EXCRET_SPSEL_MASK; - return_to_secure =3D arm_feature(env, ARM_FEATURE_M_SECURITY) && - (excret & R_V7M_EXCRET_S_MASK); - - if (arm_feature(env, ARM_FEATURE_V8)) { - if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) { - /* UNPREDICTABLE if S =3D=3D 1 or DCRS =3D=3D 0 or ES =3D=3D 1= (R_XLCP); - * we choose to take the UsageFault. - */ - if ((excret & R_V7M_EXCRET_S_MASK) || - (excret & R_V7M_EXCRET_ES_MASK) || - !(excret & R_V7M_EXCRET_DCRS_MASK)) { - ufault =3D true; - } - } - if (excret & R_V7M_EXCRET_RES0_MASK) { - ufault =3D true; - } - } else { - /* For v7M we only recognize certain combinations of the low bits = */ - switch (excret & 0xf) { - case 1: /* Return to Handler */ - break; - case 13: /* Return to Thread using Process stack */ - case 9: /* Return to Thread using Main stack */ - /* We only need to check NONBASETHRDENA for v7M, because in - * v8M this bit does not exist (it is RES1). - */ - if (!rettobase && - !(env->v7m.ccr[env->v7m.secure] & - R_V7M_CCR_NONBASETHRDENA_MASK)) { - ufault =3D true; - } - break; - default: - ufault =3D true; - } - } - - /* - * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in - * Handler mode (and will be until we write the new XPSR.Interrupt - * field) this does not switch around the current stack pointer. - * We must do this before we do any kind of tailchaining, including - * for the derived exceptions on integrity check failures, or we will - * give the guest an incorrect EXCRET.SPSEL value on exception entry. - */ - write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_se= cure); - - if (sfault) { - env->v7m.sfsr |=3D R_V7M_SFSR_INVER_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); - qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing " - "stackframe: failed EXC_RETURN.ES validity check\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - if (ufault) { - /* Bad exception return: instead of popping the exception - * stack, directly take a usage fault on the current stack. 
- */ - env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_INVPC_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.sec= ure); - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing " - "stackframe: failed exception return integrity check= \n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - /* - * Tailchaining: if there is currently a pending exception that - * is high enough priority to preempt execution at the level we're - * about to return to, then just directly take that exception now, - * avoiding an unstack-and-then-stack. Note that now we have - * deactivated the previous exception by calling armv7m_nvic_complete_= irq() - * our current execution priority is already the execution priority we= are - * returning to -- none of the state we would unstack or set based on - * the EXCRET value affects it. - */ - if (armv7m_nvic_can_take_pending_exception(env->nvic)) { - qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n= "); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - switch_v7m_security_state(env, return_to_secure); - - { - /* The stack pointer we should be reading the exception frame from - * depends on bits in the magic exception return type value (and - * for v8M isn't necessarily the stack pointer we will eventually - * end up resuming execution with). Get a pointer to the location - * in the CPU state struct where the SP we need is currently being - * stored; we will use and modify it in place. - * We use this limited C variable scope so we don't accidentally - * use 'frame_sp_p' after we do something that makes it invalid. - */ - uint32_t *frame_sp_p =3D get_v7m_sp_ptr(env, - return_to_secure, - !return_to_handler, - return_to_sp_process); - uint32_t frameptr =3D *frame_sp_p; - bool pop_ok =3D true; - ARMMMUIdx mmu_idx; - bool return_to_priv =3D return_to_handler || - !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MAS= K); - - mmu_idx =3D arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_s= ecure, - return_to_priv); - - if (!QEMU_IS_ALIGNED(frameptr, 8) && - arm_feature(env, ARM_FEATURE_V8)) { - qemu_log_mask(LOG_GUEST_ERROR, - "M profile exception return with non-8-aligned S= P " - "for destination state is UNPREDICTABLE\n"); - } - - /* Do we need to pop callee-saved registers? 
*/ - if (return_to_secure && - ((excret & R_V7M_EXCRET_ES_MASK) =3D=3D 0 || - (excret & R_V7M_EXCRET_DCRS_MASK) =3D=3D 0)) { - uint32_t expected_sig =3D 0xfefa125b; - uint32_t actual_sig; - - pop_ok =3D v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx); - - if (pop_ok && expected_sig !=3D actual_sig) { - /* Take a SecureFault on the current stack */ - env->v7m.sfsr |=3D R_V7M_SFSR_INVIS_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, fal= se); - qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on exist= ing " - "stackframe: failed exception return integri= ty " - "signature check\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - pop_ok =3D pop_ok && - v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx= ) && - v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx= ) && - v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_id= x) && - v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_id= x) && - v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_id= x) && - v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_id= x) && - v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_i= dx) && - v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_i= dx); - - frameptr +=3D 0x28; - } - - /* Pop registers */ - pop_ok =3D pop_ok && - v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) && - v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) && - v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) && - v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) && - v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) = && - v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) = && - v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) = && - v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx); - - if (!pop_ok) { - /* v7m_stack_read() pended a fault, so take it (as a tail - * chained exception on the same stack frame) - */ - qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking= \n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - - /* Returning from an exception with a PC with bit 0 set is defined - * behaviour on v8M (bit 0 is ignored), but for v7M it was specifi= ed - * to be UNPREDICTABLE. In practice actual v7M hardware seems to i= gnore - * the lsbit, and there are several RTOSes out there which incorre= ctly - * assume the r15 in the stack frame should be a Thumb-style "lsbit - * indicates ARM/Thumb" value, so ignore the bit on v7M as well, b= ut - * complain about the badly behaved guest. - */ - if (env->regs[15] & 1) { - env->regs[15] &=3D ~1U; - if (!arm_feature(env, ARM_FEATURE_V8)) { - qemu_log_mask(LOG_GUEST_ERROR, - "M profile return from interrupt with misali= gned " - "PC is UNPREDICTABLE on v7M\n"); - } - } - - if (arm_feature(env, ARM_FEATURE_V8)) { - /* For v8M we have to check whether the xPSR exception field - * matches the EXCRET value for return to handler/thread - * before we commit to changing the SP and xPSR. - */ - bool will_be_handler =3D (xpsr & XPSR_EXCP) !=3D 0; - if (return_to_handler !=3D will_be_handler) { - /* Take an INVPC UsageFault on the current stack. - * By this point we will have switched to the security sta= te - * for the background state, so this UsageFault will target - * that state. 
- */ - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_INVPC_MASK; - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existi= ng " - "stackframe: failed exception return integri= ty " - "check\n"); - v7m_exception_taken(cpu, excret, true, false); - return; - } - } - - /* Commit to consuming the stack frame */ - frameptr +=3D 0x20; - /* Undo stack alignment (the SPREALIGN bit indicates that the orig= inal - * pre-exception SP was not 8-aligned and we added a padding word = to - * align it, so we undo this by ORing in the bit that increases it - * from the current 8-aligned value to the 8-unaligned value. (Add= ing 4 - * would work too but a logical OR is how the pseudocode specifies= it.) - */ - if (xpsr & XPSR_SPREALIGN) { - frameptr |=3D 4; - } - *frame_sp_p =3D frameptr; - } - /* This xpsr_write() will invalidate frame_sp_p as it may switch stack= */ - xpsr_write(env, xpsr, ~XPSR_SPREALIGN); - - /* The restored xPSR exception field will be zero if we're - * resuming in Thread mode. If that doesn't match what the - * exception return excret specified then this is a UsageFault. - * v7M requires we make this check here; v8M did it earlier. - */ - if (return_to_handler !=3D arm_v7m_is_handler_mode(env)) { - /* Take an INVPC UsageFault by pushing the stack again; - * we know we're v7M so this is never a Secure UsageFault. - */ - bool ignore_stackfaults; - - assert(!arm_feature(env, ARM_FEATURE_V8)); - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false); - env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_INVPC_MASK; - ignore_stackfaults =3D v7m_push_stack(cpu); - qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe= : " - "failed exception return integrity check\n"); - v7m_exception_taken(cpu, excret, false, ignore_stackfaults); - return; - } - - /* Otherwise, we have a successful exception exit. */ - arm_clear_exclusive(env); - qemu_log_mask(CPU_LOG_INT, "...successful exception return\n"); -} - -static bool do_v7m_function_return(ARMCPU *cpu) -{ - /* v8M security extensions magic function return. - * We may either: - * (1) throw an exception (longjump) - * (2) return true if we successfully handled the function return - * (3) return false if we failed a consistency check and have - * pended a UsageFault that needs to be taken now - * - * At this point the magic return value is split between env->regs[15] - * and env->thumb. We don't bother to reconstitute it because we don't - * need it (all values are handled the same way). - */ - CPUARMState *env =3D &cpu->env; - uint32_t newpc, newpsr, newpsr_exc; - - qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n"); - - { - bool threadmode, spsel; - TCGMemOpIdx oi; - ARMMMUIdx mmu_idx; - uint32_t *frame_sp_p; - uint32_t frameptr; - - /* Pull the return address and IPSR from the Secure stack */ - threadmode =3D !arm_v7m_is_handler_mode(env); - spsel =3D env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK; - - frame_sp_p =3D get_v7m_sp_ptr(env, true, threadmode, spsel); - frameptr =3D *frame_sp_p; - - /* These loads may throw an exception (for MPU faults). We want to - * do them as secure, so work out what MMU index that is. 
- */ - mmu_idx =3D arm_v7m_mmu_idx_for_secstate(env, true); - oi =3D make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx)); - newpc =3D helper_le_ldul_mmu(env, frameptr, oi, 0); - newpsr =3D helper_le_ldul_mmu(env, frameptr + 4, oi, 0); - - /* Consistency checks on new IPSR */ - newpsr_exc =3D newpsr & XPSR_EXCP; - if (!((env->v7m.exception =3D=3D 0 && newpsr_exc =3D=3D 0) || - (env->v7m.exception =3D=3D 1 && newpsr_exc !=3D 0))) { - /* Pend the fault and tell our caller to take it */ - env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_INVPC_MASK; - armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, - env->v7m.secure); - qemu_log_mask(CPU_LOG_INT, - "...taking INVPC UsageFault: " - "IPSR consistency check failed\n"); - return false; - } - - *frame_sp_p =3D frameptr + 8; - } - - /* This invalidates frame_sp_p */ - switch_v7m_security_state(env, true); - env->v7m.exception =3D newpsr_exc; - env->v7m.control[M_REG_S] &=3D ~R_V7M_CONTROL_SFPA_MASK; - if (newpsr & XPSR_SFPA) { - env->v7m.control[M_REG_S] |=3D R_V7M_CONTROL_SFPA_MASK; - } - xpsr_write(env, 0, XPSR_IT); - env->thumb =3D newpc & 1; - env->regs[15] =3D newpc & ~1; - - qemu_log_mask(CPU_LOG_INT, "...function return successful\n"); - return true; -} - -static void arm_log_exception(int idx) -{ - if (qemu_loglevel_mask(CPU_LOG_INT)) { - const char *exc =3D NULL; - static const char * const excnames[] =3D { - [EXCP_UDEF] =3D "Undefined Instruction", - [EXCP_SWI] =3D "SVC", - [EXCP_PREFETCH_ABORT] =3D "Prefetch Abort", - [EXCP_DATA_ABORT] =3D "Data Abort", - [EXCP_IRQ] =3D "IRQ", - [EXCP_FIQ] =3D "FIQ", - [EXCP_BKPT] =3D "Breakpoint", - [EXCP_EXCEPTION_EXIT] =3D "QEMU v7M exception exit", - [EXCP_KERNEL_TRAP] =3D "QEMU intercept of kernel commpage", - [EXCP_HVC] =3D "Hypervisor Call", - [EXCP_HYP_TRAP] =3D "Hypervisor Trap", - [EXCP_SMC] =3D "Secure Monitor Call", - [EXCP_VIRQ] =3D "Virtual IRQ", - [EXCP_VFIQ] =3D "Virtual FIQ", - [EXCP_SEMIHOST] =3D "Semihosting call", - [EXCP_NOCP] =3D "v7M NOCP UsageFault", - [EXCP_INVSTATE] =3D "v7M INVSTATE UsageFault", - [EXCP_STKOF] =3D "v8M STKOF UsageFault", - }; - - if (idx >=3D 0 && idx < ARRAY_SIZE(excnames)) { - exc =3D excnames[idx]; - } - if (!exc) { - exc =3D "unknown"; - } - qemu_log_mask(CPU_LOG_INT, "Taking exception %d [%s]\n", idx, exc); - } -} - -static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, - uint32_t addr, uint16_t *insn) -{ - /* Load a 16-bit portion of a v7M instruction, returning true on succe= ss, - * or false on failure (in which case we will have pended the appropri= ate - * exception). - * We need to do the instruction fetch's MPU and SAU checks - * like this because there is no MMU index that would allow - * doing the load with a single function call. Instead we must - * first check that the security attributes permit the load - * and that they don't mismatch on the two halves of the instruction, - * and then we do the load as a secure load (ie using the security - * attributes of the address, not the CPU, as architecturally required= ). - */ - CPUState *cs =3D CPU(cpu); - CPUARMState *env =3D &cpu->env; - V8M_SAttributes sattrs =3D {}; - MemTxAttrs attrs =3D {}; - ARMMMUFaultInfo fi =3D {}; - MemTxResult txres; - target_ulong page_size; - hwaddr physaddr; - int prot; - - v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs); - if (!sattrs.nsc || sattrs.ns) { - /* This must be the second half of the insn, and it straddles a - * region boundary with the second half not being S&NSC. 
-static void arm_log_exception(int idx)
-{
-    if (qemu_loglevel_mask(CPU_LOG_INT)) {
-        const char *exc = NULL;
-        static const char * const excnames[] = {
-            [EXCP_UDEF] = "Undefined Instruction",
-            [EXCP_SWI] = "SVC",
-            [EXCP_PREFETCH_ABORT] = "Prefetch Abort",
-            [EXCP_DATA_ABORT] = "Data Abort",
-            [EXCP_IRQ] = "IRQ",
-            [EXCP_FIQ] = "FIQ",
-            [EXCP_BKPT] = "Breakpoint",
-            [EXCP_EXCEPTION_EXIT] = "QEMU v7M exception exit",
-            [EXCP_KERNEL_TRAP] = "QEMU intercept of kernel commpage",
-            [EXCP_HVC] = "Hypervisor Call",
-            [EXCP_HYP_TRAP] = "Hypervisor Trap",
-            [EXCP_SMC] = "Secure Monitor Call",
-            [EXCP_VIRQ] = "Virtual IRQ",
-            [EXCP_VFIQ] = "Virtual FIQ",
-            [EXCP_SEMIHOST] = "Semihosting call",
-            [EXCP_NOCP] = "v7M NOCP UsageFault",
-            [EXCP_INVSTATE] = "v7M INVSTATE UsageFault",
-            [EXCP_STKOF] = "v8M STKOF UsageFault",
-        };
-
-        if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
-            exc = excnames[idx];
-        }
-        if (!exc) {
-            exc = "unknown";
-        }
-        qemu_log_mask(CPU_LOG_INT, "Taking exception %d [%s]\n", idx, exc);
-    }
-}
-
-static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx,
-                               uint32_t addr, uint16_t *insn)
-{
-    /* Load a 16-bit portion of a v7M instruction, returning true on success,
-     * or false on failure (in which case we will have pended the appropriate
-     * exception).
-     * We need to do the instruction fetch's MPU and SAU checks
-     * like this because there is no MMU index that would allow
-     * doing the load with a single function call. Instead we must
-     * first check that the security attributes permit the load
-     * and that they don't mismatch on the two halves of the instruction,
-     * and then we do the load as a secure load (ie using the security
-     * attributes of the address, not the CPU, as architecturally required).
-     */
-    CPUState *cs = CPU(cpu);
-    CPUARMState *env = &cpu->env;
-    V8M_SAttributes sattrs = {};
-    MemTxAttrs attrs = {};
-    ARMMMUFaultInfo fi = {};
-    MemTxResult txres;
-    target_ulong page_size;
-    hwaddr physaddr;
-    int prot;
-
-    v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs);
-    if (!sattrs.nsc || sattrs.ns) {
-        /* This must be the second half of the insn, and it straddles a
-         * region boundary with the second half not being S&NSC.
-         */
-        env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-        qemu_log_mask(CPU_LOG_INT,
-                      "...really SecureFault with SFSR.INVEP\n");
-        return false;
-    }
-    if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx,
-                      &physaddr, &attrs, &prot, &page_size, &fi, NULL)) {
-        /* the MPU lookup failed */
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secure);
-        qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL\n");
-        return false;
-    }
-    *insn = address_space_lduw_le(arm_addressspace(cs, attrs), physaddr,
-                                  attrs, &txres);
-    if (txres != MEMTX_OK) {
-        env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
-        qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n");
-        return false;
-    }
-    return true;
-}
-
-static bool v7m_handle_execute_nsc(ARMCPU *cpu)
-{
-    /* Check whether this attempt to execute code in a Secure & NS-Callable
-     * memory region is for an SG instruction; if so, then emulate the
-     * effect of the SG instruction and return true. Otherwise pend
-     * the correct kind of exception and return false.
-     */
-    CPUARMState *env = &cpu->env;
-    ARMMMUIdx mmu_idx;
-    uint16_t insn;
-
-    /* We should never get here unless get_phys_addr_pmsav8() caused
-     * an exception for NS executing in S&NSC memory.
-     */
-    assert(!env->v7m.secure);
-    assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
-
-    /* We want to do the MPU lookup as secure; work out what mmu_idx that is */
-    mmu_idx = arm_v7m_mmu_idx_for_secstate(env, true);
-
-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) {
-        return false;
-    }
-
-    if (!env->thumb) {
-        goto gen_invep;
-    }
-
-    if (insn != 0xe97f) {
-        /* Not an SG instruction first half (we choose the IMPDEF
-         * early-SG-check option).
-         */
-        goto gen_invep;
-    }
-
-    if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) {
-        return false;
-    }
-
-    if (insn != 0xe97f) {
-        /* Not an SG instruction second half (yes, both halves of the SG
-         * insn have the same hex value)
-         */
-        goto gen_invep;
-    }
-
-    /* OK, we have confirmed that we really have an SG instruction.
-     * We know we're NS in S memory so don't need to repeat those checks.
-     */
-    qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx32
-                  ", executing it\n", env->regs[15]);
-    env->regs[14] &= ~1;
-    switch_v7m_security_state(env, true);
-    xpsr_write(env, 0, XPSR_IT);
-    env->regs[15] += 4;
-    return true;
-
-gen_invep:
-    env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
-    armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-    qemu_log_mask(CPU_LOG_INT,
-                  "...really SecureFault with SFSR.INVEP\n");
-    return false;
-}
-
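As the comments above note, both halfwords of the 32-bit SG encoding carry the same value, which is why the same 0xe97f comparison runs on each half. A standalone restatement of the predicate (illustrative only, not part of the patch):

    #include <stdint.h>
    #include <stdbool.h>

    /* SG is encoded as 0xE97F E97F, so each halfword is checked against
     * the same constant; the halves are read separately because they may
     * straddle an MPU/SAU region boundary.
     */
    static bool is_sg_insn(uint16_t first_half, uint16_t second_half)
    {
        return first_half == 0xe97f && second_half == 0xe97f;
    }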
-void arm_v7m_cpu_do_interrupt(CPUState *cs)
-{
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-    uint32_t lr;
-    bool ignore_stackfaults;
-
-    arm_log_exception(cs->exception_index);
-
-    /* For exceptions we just mark as pending on the NVIC, and let that
-       handle it. */
-    switch (cs->exception_index) {
-    case EXCP_UDEF:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_UNDEFINSTR_MASK;
-        break;
-    case EXCP_NOCP:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_NOCP_MASK;
-        break;
-    case EXCP_INVSTATE:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVSTATE_MASK;
-        break;
-    case EXCP_STKOF:
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
-        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
-        break;
-    case EXCP_SWI:
-        /* The PC already points to the next instruction. */
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secure);
-        break;
-    case EXCP_PREFETCH_ABORT:
-    case EXCP_DATA_ABORT:
-        /* Note that for M profile we don't have a guest facing FSR, but
-         * the env->exception.fsr will be populated by the code that
-         * raises the fault, in the A profile short-descriptor format.
-         */
-        switch (env->exception.fsr & 0xf) {
-        case M_FAKE_FSR_NSC_EXEC:
-            /* Exception generated when we try to execute code at an address
-             * which is marked as Secure & Non-Secure Callable and the CPU
-             * is in the Non-Secure state. The only instruction which can
-             * be executed like this is SG (and that only if both halves of
-             * the SG instruction have the same security attributes.)
-             * Everything else must generate an INVEP SecureFault, so we
-             * emulate the SG instruction here.
-             */
-            if (v7m_handle_execute_nsc(cpu)) {
-                return;
-            }
-            break;
-        case M_FAKE_FSR_SFAULT:
-            /* Various flavours of SecureFault for attempts to execute or
-             * access data in the wrong security state.
-             */
-            switch (cs->exception_index) {
-            case EXCP_PREFETCH_ABORT:
-                if (env->v7m.secure) {
-                    env->v7m.sfsr |= R_V7M_SFSR_INVTRAN_MASK;
-                    qemu_log_mask(CPU_LOG_INT,
-                                  "...really SecureFault with SFSR.INVTRAN\n");
-                } else {
-                    env->v7m.sfsr |= R_V7M_SFSR_INVEP_MASK;
-                    qemu_log_mask(CPU_LOG_INT,
-                                  "...really SecureFault with SFSR.INVEP\n");
-                }
-                break;
-            case EXCP_DATA_ABORT:
-                /* This must be an NS access to S memory */
-                env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK;
-                qemu_log_mask(CPU_LOG_INT,
-                              "...really SecureFault with SFSR.AUVIOL\n");
-                break;
-            }
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
-            break;
-        case 0x8: /* External Abort */
-            switch (cs->exception_index) {
-            case EXCP_PREFETCH_ABORT:
-                env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_IBUSERR_MASK;
-                qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n");
-                break;
-            case EXCP_DATA_ABORT:
-                env->v7m.cfsr[M_REG_NS] |=
-                    (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK);
-                env->v7m.bfar = env->exception.vaddress;
-                qemu_log_mask(CPU_LOG_INT,
-                              "...with CFSR.PRECISERR and BFAR 0x%x\n",
-                              env->v7m.bfar);
-                break;
-            }
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false);
-            break;
-        default:
-            /* All other FSR values are either MPU faults or "can't happen
-             * for M profile" cases.
-             */
-            switch (cs->exception_index) {
-            case EXCP_PREFETCH_ABORT:
-                env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_IACCVIOL_MASK;
-                qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n");
-                break;
-            case EXCP_DATA_ABORT:
-                env->v7m.cfsr[env->v7m.secure] |=
-                    (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK);
-                env->v7m.mmfar[env->v7m.secure] = env->exception.vaddress;
-                qemu_log_mask(CPU_LOG_INT,
-                              "...with CFSR.DACCVIOL and MMFAR 0x%x\n",
-                              env->v7m.mmfar[env->v7m.secure]);
-                break;
-            }
-            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM,
-                                    env->v7m.secure);
-            break;
-        }
-        break;
-    case EXCP_BKPT:
-        if (semihosting_enabled()) {
-            int nr;
-            nr = arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0xff;
-            if (nr == 0xab) {
-                env->regs[15] += 2;
-                qemu_log_mask(CPU_LOG_INT,
-                              "...handling as semihosting call 0x%x\n",
-                              env->regs[0]);
-                env->regs[0] = do_arm_semihosting(env);
-                return;
-            }
-        }
-        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false);
-        break;
-    case EXCP_IRQ:
-        break;
-    case EXCP_EXCEPTION_EXIT:
-        if (env->regs[15] < EXC_RETURN_MIN_MAGIC) {
-            /* Must be v8M security extension function return */
-            assert(env->regs[15] >= FNC_RETURN_MIN_MAGIC);
-            assert(arm_feature(env, ARM_FEATURE_M_SECURITY));
-            if (do_v7m_function_return(cpu)) {
-                return;
-            }
-        } else {
-            do_v7m_exception_exit(cpu);
-            return;
-        }
-        break;
-    default:
-        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
-        return; /* Never happens. Keep compiler happy. */
-    }
-
-    if (arm_feature(env, ARM_FEATURE_V8)) {
-        lr = R_V7M_EXCRET_RES1_MASK |
-            R_V7M_EXCRET_DCRS_MASK |
-            R_V7M_EXCRET_FTYPE_MASK;
-        /* The S bit indicates whether we should return to Secure
-         * or NonSecure (ie our current state).
-         * The ES bit indicates whether we're taking this exception
-         * to Secure or NonSecure (ie our target state). We set it
-         * later, in v7m_exception_taken().
-         * The SPSEL bit is also set in v7m_exception_taken() for v8M.
-         * This corresponds to the ARM ARM pseudocode for v8M setting
-         * some LR bits in PushStack() and some in ExceptionTaken();
-         * the distinction matters for the tailchain cases where we
-         * can take an exception without pushing the stack.
-         */
-        if (env->v7m.secure) {
-            lr |= R_V7M_EXCRET_S_MASK;
-        }
-    } else {
-        lr = R_V7M_EXCRET_RES1_MASK |
-            R_V7M_EXCRET_S_MASK |
-            R_V7M_EXCRET_DCRS_MASK |
-            R_V7M_EXCRET_FTYPE_MASK |
-            R_V7M_EXCRET_ES_MASK;
-        if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) {
-            lr |= R_V7M_EXCRET_SPSEL_MASK;
-        }
-    }
-    if (!arm_v7m_is_handler_mode(env)) {
-        lr |= R_V7M_EXCRET_MODE_MASK;
-    }
-
-    ignore_stackfaults = v7m_push_stack(cpu);
-    v7m_exception_taken(cpu, lr, false, ignore_stackfaults);
-}
-
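The 'lr' assembled at the end of arm_v7m_cpu_do_interrupt() is the EXC_RETURN magic value. For orientation, a standalone sketch of the low-bit fields it manipulates, with bit positions as given by the Arm v8-M architecture (QEMU spells these R_V7M_EXCRET_*; the macro names below are illustrative, not QEMU's):

    #include <stdint.h>
    #include <stdbool.h>

    #define EXCRET_ES    (1u << 0) /* exception is taken to Secure state */
    #define EXCRET_SPSEL (1u << 2) /* frame is on the Process stack */
    #define EXCRET_MODE  (1u << 3) /* 1: return to Thread, 0: to Handler */
    #define EXCRET_FTYPE (1u << 4) /* 1: no FP state in the frame */
    #define EXCRET_DCRS  (1u << 5) /* default callee-saves stacking rules */
    #define EXCRET_S     (1u << 6) /* background state is Secure */

    /* e.g. the MODE bit set above when not in Handler mode means the
     * eventual exception return goes back to Thread mode.
     */
    static bool excret_returns_to_thread(uint32_t excret)
    {
        return excret & EXCRET_MODE;
    }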
-/* Function used to synchronize QEMU's AArch64 register set with AArch32
- * register set.  This is necessary when switching between AArch32 and AArch64
- * execution state.
- */
-void aarch64_sync_32_to_64(CPUARMState *env)
-{
-    int i;
-    uint32_t mode = env->uncached_cpsr & CPSR_M;
-
-    /* We can blanket copy R[0:7] to X[0:7] */
-    for (i = 0; i < 8; i++) {
-        env->xregs[i] = env->regs[i];
-    }
-
-    /* Unless we are in FIQ mode, x8-x12 come from the user registers r8-r12.
-     * Otherwise, they come from the banked user regs.
-     */
-    if (mode == ARM_CPU_MODE_FIQ) {
-        for (i = 8; i < 13; i++) {
-            env->xregs[i] = env->usr_regs[i - 8];
-        }
-    } else {
-        for (i = 8; i < 13; i++) {
-            env->xregs[i] = env->regs[i];
-        }
-    }
-
-    /* Registers x13-x23 are the various mode SP and FP registers. Registers
-     * r13 and r14 are only copied if we are in that mode, otherwise we copy
-     * from the mode banked register.
-     */
-    if (mode == ARM_CPU_MODE_USR || mode == ARM_CPU_MODE_SYS) {
-        env->xregs[13] = env->regs[13];
-        env->xregs[14] = env->regs[14];
-    } else {
-        env->xregs[13] = env->banked_r13[bank_number(ARM_CPU_MODE_USR)];
-        /* HYP is an exception in that it is copied from r14 */
-        if (mode == ARM_CPU_MODE_HYP) {
-            env->xregs[14] = env->regs[14];
-        } else {
-            env->xregs[14] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)];
-        }
-    }
-
-    if (mode == ARM_CPU_MODE_HYP) {
-        env->xregs[15] = env->regs[13];
-    } else {
-        env->xregs[15] = env->banked_r13[bank_number(ARM_CPU_MODE_HYP)];
-    }
-
-    if (mode == ARM_CPU_MODE_IRQ) {
-        env->xregs[16] = env->regs[14];
-        env->xregs[17] = env->regs[13];
-    } else {
-        env->xregs[16] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)];
-        env->xregs[17] = env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)];
-    }
-
-    if (mode == ARM_CPU_MODE_SVC) {
-        env->xregs[18] = env->regs[14];
-        env->xregs[19] = env->regs[13];
-    } else {
-        env->xregs[18] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)];
-        env->xregs[19] = env->banked_r13[bank_number(ARM_CPU_MODE_SVC)];
-    }
-
-    if (mode == ARM_CPU_MODE_ABT) {
-        env->xregs[20] = env->regs[14];
-        env->xregs[21] = env->regs[13];
-    } else {
-        env->xregs[20] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)];
-        env->xregs[21] = env->banked_r13[bank_number(ARM_CPU_MODE_ABT)];
-    }
-
-    if (mode == ARM_CPU_MODE_UND) {
-        env->xregs[22] = env->regs[14];
-        env->xregs[23] = env->regs[13];
-    } else {
-        env->xregs[22] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)];
-        env->xregs[23] = env->banked_r13[bank_number(ARM_CPU_MODE_UND)];
-    }
-
-    /* Registers x24-x30 are mapped to r8-r14 in FIQ mode.  If we are in FIQ
-     * mode, then we can copy from r8-r14.  Otherwise, we copy from the
-     * FIQ bank for r8-r14.
-     */
-    if (mode == ARM_CPU_MODE_FIQ) {
-        for (i = 24; i < 31; i++) {
-            env->xregs[i] = env->regs[i - 16];   /* X[24:30] <- R[8:14] */
-        }
-    } else {
-        for (i = 24; i < 29; i++) {
-            env->xregs[i] = env->fiq_regs[i - 24];
-        }
-        env->xregs[29] = env->banked_r13[bank_number(ARM_CPU_MODE_FIQ)];
-        env->xregs[30] = env->banked_r14[r14_bank_number(ARM_CPU_MODE_FIQ)];
-    }
-
-    env->pc = env->regs[15];
-}
-
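For orientation, the xregs-to-banked-register map implemented by the two sync functions, summarised in one place (derived directly from the code; comment only, not part of the patch):

    /* Illustrative summary of the AArch64 <-> AArch32 register map:
     *
     *   x0-x7    <-> r0-r7                     (always)
     *   x8-x12   <-> r8-r12 or usr_regs        (FIQ banks r8-r12)
     *   x13/x14  <-> USR/SYS SP and LR
     *   x15      <-> HYP SP
     *   x16/x17  <-> IRQ LR/SP
     *   x18/x19  <-> SVC LR/SP
     *   x20/x21  <-> ABT LR/SP
     *   x22/x23  <-> UND LR/SP
     *   x24-x30  <-> FIQ-banked r8-r14
     */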
-/* Function used to synchronize QEMU's AArch32 register set with AArch64
- * register set.  This is necessary when switching between AArch32 and AArch64
- * execution state.
- */
-void aarch64_sync_64_to_32(CPUARMState *env)
-{
-    int i;
-    uint32_t mode = env->uncached_cpsr & CPSR_M;
-
-    /* We can blanket copy X[0:7] to R[0:7] */
-    for (i = 0; i < 8; i++) {
-        env->regs[i] = env->xregs[i];
-    }
-
-    /* Unless we are in FIQ mode, r8-r12 come from the user registers x8-x12.
-     * Otherwise, we copy x8-x12 into the banked user regs.
-     */
-    if (mode == ARM_CPU_MODE_FIQ) {
-        for (i = 8; i < 13; i++) {
-            env->usr_regs[i - 8] = env->xregs[i];
-        }
-    } else {
-        for (i = 8; i < 13; i++) {
-            env->regs[i] = env->xregs[i];
-        }
-    }
-
-    /* Registers r13 & r14 depend on the current mode.
-     * If we are in a given mode, we copy the corresponding x registers to r13
-     * and r14.  Otherwise, we copy the x register to the banked r13 and r14
-     * for the mode.
-     */
-    if (mode == ARM_CPU_MODE_USR || mode == ARM_CPU_MODE_SYS) {
-        env->regs[13] = env->xregs[13];
-        env->regs[14] = env->xregs[14];
-    } else {
-        env->banked_r13[bank_number(ARM_CPU_MODE_USR)] = env->xregs[13];
-
-        /* HYP is an exception in that it does not have its own banked r14 but
-         * shares the USR r14
-         */
-        if (mode == ARM_CPU_MODE_HYP) {
-            env->regs[14] = env->xregs[14];
-        } else {
-            env->banked_r14[r14_bank_number(ARM_CPU_MODE_USR)] = env->xregs[14];
-        }
-    }
-
-    if (mode == ARM_CPU_MODE_HYP) {
-        env->regs[13] = env->xregs[15];
-    } else {
-        env->banked_r13[bank_number(ARM_CPU_MODE_HYP)] = env->xregs[15];
-    }
-
-    if (mode == ARM_CPU_MODE_IRQ) {
-        env->regs[14] = env->xregs[16];
-        env->regs[13] = env->xregs[17];
-    } else {
-        env->banked_r14[r14_bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[16];
-        env->banked_r13[bank_number(ARM_CPU_MODE_IRQ)] = env->xregs[17];
-    }
-
-    if (mode == ARM_CPU_MODE_SVC) {
-        env->regs[14] = env->xregs[18];
-        env->regs[13] = env->xregs[19];
-    } else {
-        env->banked_r14[r14_bank_number(ARM_CPU_MODE_SVC)] = env->xregs[18];
-        env->banked_r13[bank_number(ARM_CPU_MODE_SVC)] = env->xregs[19];
-    }
-
-    if (mode == ARM_CPU_MODE_ABT) {
-        env->regs[14] = env->xregs[20];
-        env->regs[13] = env->xregs[21];
-    } else {
-        env->banked_r14[r14_bank_number(ARM_CPU_MODE_ABT)] = env->xregs[20];
-        env->banked_r13[bank_number(ARM_CPU_MODE_ABT)] = env->xregs[21];
-    }
-
-    if (mode == ARM_CPU_MODE_UND) {
-        env->regs[14] = env->xregs[22];
-        env->regs[13] = env->xregs[23];
-    } else {
-        env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)] = env->xregs[22];
-        env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23];
-    }
+    if (mode == ARM_CPU_MODE_UND) {
+        env->regs[14] = env->xregs[22];
+        env->regs[13] = env->xregs[23];
+    } else {
+        env->banked_r14[r14_bank_number(ARM_CPU_MODE_UND)] = env->xregs[22];
+        env->banked_r13[bank_number(ARM_CPU_MODE_UND)] = env->xregs[23];
+    }
 
     /* Registers x24-x30 are mapped to r8-r14 in FIQ mode.  If we are in FIQ
      * mode, then we can copy to r8-r14.  Otherwise, we copy to the
@@ -10235,9 +8747,9 @@ static bool v8m_is_sau_exempt(CPUARMState *env,
         (address >= 0xe00ff000 && address <= 0xe00fffff);
 }
 
-static void v8m_security_lookup(CPUARMState *env, uint32_t address,
-                                MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                                V8M_SAttributes *sattrs)
+void v8m_security_lookup(CPUARMState *env, uint32_t address,
+                         MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                         V8M_SAttributes *sattrs)
 {
     /* Look up the security attributes for this address. Compare the
      * pseudocode SecurityCheck() function.
@@ -10341,11 +8853,11 @@ static void v8m_security_lookup(CPUARMState *env, uint32_t address,
     }
 }
 
-static bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
-                              MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                              hwaddr *phys_ptr, MemTxAttrs *txattrs,
-                              int *prot, bool *is_subpage,
-                              ARMMMUFaultInfo *fi, uint32_t *mregion)
+bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
+                       MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                       hwaddr *phys_ptr, MemTxAttrs *txattrs,
+                       int *prot, bool *is_subpage,
+                       ARMMMUFaultInfo *fi, uint32_t *mregion)
 {
     /* Perform a PMSAv8 MPU lookup (without also doing the SAU check
      * that a full phys-to-virt translation does).
@@ -10743,11 +9255,11 @@ static ARMCacheAttrs combine_cacheattrs(ARMCacheAttrs s1, ARMCacheAttrs s2)
  * @fi: set to fault info if the translation fails
  * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
  */
-static bool get_phys_addr(CPUARMState *env, target_ulong address,
-                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                          hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
-                          target_ulong *page_size,
-                          ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
+bool get_phys_addr(CPUARMState *env, target_ulong address,
+                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                   hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
+                   target_ulong *page_size,
+                   ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
 {
     if (mmu_idx == ARMMMUIdx_S12NSE0 || mmu_idx == ARMMMUIdx_S12NSE1) {
         /* Call ourselves recursively to do the stage 1 and then stage 2
@@ -10934,415 +9446,6 @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
     return phys_addr;
 }
 
-uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
-{
-    uint32_t mask;
-    unsigned el = arm_current_el(env);
-
-    /* First handle registers which unprivileged can read */
-
-    switch (reg) {
-    case 0 ... 7: /* xPSR sub-fields */
-        mask = 0;
-        if ((reg & 1) && el) {
-            mask |= XPSR_EXCP; /* IPSR (unpriv. reads as zero) */
-        }
-        if (!(reg & 4)) {
-            mask |= XPSR_NZCV | XPSR_Q; /* APSR */
-        }
-        /* EPSR reads as zero */
-        return xpsr_read(env) & mask;
-        break;
-    case 20: /* CONTROL */
-        return env->v7m.control[env->v7m.secure];
-    case 0x94: /* CONTROL_NS */
-        /* We have to handle this here because unprivileged Secure code
-         * can read the NS CONTROL register.
-         */
-        if (!env->v7m.secure) {
-            return 0;
-        }
-        return env->v7m.control[M_REG_NS];
-    }
-
-    if (el == 0) {
-        return 0; /* unprivileged reads others as zero */
-    }
-
-    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        switch (reg) {
-        case 0x88: /* MSP_NS */
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            return env->v7m.other_ss_msp;
-        case 0x89: /* PSP_NS */
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            return env->v7m.other_ss_psp;
-        case 0x8a: /* MSPLIM_NS */
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            return env->v7m.msplim[M_REG_NS];
-        case 0x8b: /* PSPLIM_NS */
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            return env->v7m.psplim[M_REG_NS];
-        case 0x90: /* PRIMASK_NS */
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            return env->v7m.primask[M_REG_NS];
-        case 0x91: /* BASEPRI_NS */
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            return env->v7m.basepri[M_REG_NS];
-        case 0x93: /* FAULTMASK_NS */
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            return env->v7m.faultmask[M_REG_NS];
-        case 0x98: /* SP_NS */
-        {
-            /* This gives the non-secure SP selected based on whether we're
-             * currently in handler mode or not, using the NS CONTROL.SPSEL.
-             */
-            bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
-
-            if (!env->v7m.secure) {
-                return 0;
-            }
-            if (!arm_v7m_is_handler_mode(env) && spsel) {
-                return env->v7m.other_ss_psp;
-            } else {
-                return env->v7m.other_ss_msp;
-            }
-        }
-        default:
-            break;
-        }
-    }
-
-    switch (reg) {
-    case 8: /* MSP */
-        return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13];
-    case 9: /* PSP */
-        return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp;
-    case 10: /* MSPLIM */
-        if (!arm_feature(env, ARM_FEATURE_V8)) {
-            goto bad_reg;
-        }
-        return env->v7m.msplim[env->v7m.secure];
-    case 11: /* PSPLIM */
-        if (!arm_feature(env, ARM_FEATURE_V8)) {
-            goto bad_reg;
-        }
-        return env->v7m.psplim[env->v7m.secure];
-    case 16: /* PRIMASK */
-        return env->v7m.primask[env->v7m.secure];
-    case 17: /* BASEPRI */
-    case 18: /* BASEPRI_MAX */
-        return env->v7m.basepri[env->v7m.secure];
-    case 19: /* FAULTMASK */
-        return env->v7m.faultmask[env->v7m.secure];
-    default:
-    bad_reg:
-        qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special"
-                      " register %d\n", reg);
-        return 0;
-    }
-}
-
-void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val)
-{
-    /* We're passed bits [11..0] of the instruction; extract
-     * SYSm and the mask bits.
-     * Invalid combinations of SYSm and mask are UNPREDICTABLE;
-     * we choose to treat them as if the mask bits were valid.
-     * NB that the pseudocode 'mask' variable is bits [11..10],
-     * whereas ours is [11..8].
-     */
-    uint32_t mask = extract32(maskreg, 8, 4);
-    uint32_t reg = extract32(maskreg, 0, 8);
-
-    if (arm_current_el(env) == 0 && reg > 7) {
-        /* only xPSR sub-fields may be written by unprivileged */
-        return;
-    }
-
-    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
-        switch (reg) {
-        case 0x88: /* MSP_NS */
-            if (!env->v7m.secure) {
-                return;
-            }
-            env->v7m.other_ss_msp = val;
-            return;
-        case 0x89: /* PSP_NS */
-            if (!env->v7m.secure) {
-                return;
-            }
-            env->v7m.other_ss_psp = val;
-            return;
-        case 0x8a: /* MSPLIM_NS */
-            if (!env->v7m.secure) {
-                return;
-            }
-            env->v7m.msplim[M_REG_NS] = val & ~7;
-            return;
-        case 0x8b: /* PSPLIM_NS */
-            if (!env->v7m.secure) {
-                return;
-            }
-            env->v7m.psplim[M_REG_NS] = val & ~7;
-            return;
-        case 0x90: /* PRIMASK_NS */
-            if (!env->v7m.secure) {
-                return;
-            }
-            env->v7m.primask[M_REG_NS] = val & 1;
-            return;
-        case 0x91: /* BASEPRI_NS */
-            if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
-                return;
-            }
-            env->v7m.basepri[M_REG_NS] = val & 0xff;
-            return;
-        case 0x93: /* FAULTMASK_NS */
-            if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN)) {
-                return;
-            }
-            env->v7m.faultmask[M_REG_NS] = val & 1;
-            return;
-        case 0x94: /* CONTROL_NS */
-            if (!env->v7m.secure) {
-                return;
-            }
-            write_v7m_control_spsel_for_secstate(env,
-                                                 val & R_V7M_CONTROL_SPSEL_MASK,
-                                                 M_REG_NS);
-            if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
-                env->v7m.control[M_REG_NS] &= ~R_V7M_CONTROL_NPRIV_MASK;
-                env->v7m.control[M_REG_NS] |= val & R_V7M_CONTROL_NPRIV_MASK;
-            }
-            return;
-        case 0x98: /* SP_NS */
-        {
-            /* This gives the non-secure SP selected based on whether we're
-             * currently in handler mode or not, using the NS CONTROL.SPSEL.
-             */
-            bool spsel = env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK;
-            bool is_psp = !arm_v7m_is_handler_mode(env) && spsel;
-            uint32_t limit;
-
-            if (!env->v7m.secure) {
-                return;
-            }
-
-            limit = is_psp ? env->v7m.psplim[false] : env->v7m.msplim[false];
-
-            if (val < limit) {
-                CPUState *cs = CPU(arm_env_get_cpu(env));
-
-                cpu_restore_state(cs, GETPC(), true);
-                raise_exception(env, EXCP_STKOF, 0, 1);
-            }
-
-            if (is_psp) {
-                env->v7m.other_ss_psp = val;
-            } else {
-                env->v7m.other_ss_msp = val;
-            }
-            return;
-        }
-        default:
-            break;
-        }
-    }
-
-    switch (reg) {
-    case 0 ... 7: /* xPSR sub-fields */
-        /* only APSR is actually writable */
-        if (!(reg & 4)) {
-            uint32_t apsrmask = 0;
-
-            if (mask & 8) {
-                apsrmask |= XPSR_NZCV | XPSR_Q;
-            }
-            if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) {
-                apsrmask |= XPSR_GE;
-            }
-            xpsr_write(env, val, apsrmask);
-        }
-        break;
-    case 8: /* MSP */
-        if (v7m_using_psp(env)) {
-            env->v7m.other_sp = val;
-        } else {
-            env->regs[13] = val;
-        }
-        break;
-    case 9: /* PSP */
-        if (v7m_using_psp(env)) {
-            env->regs[13] = val;
-        } else {
-            env->v7m.other_sp = val;
-        }
-        break;
-    case 10: /* MSPLIM */
-        if (!arm_feature(env, ARM_FEATURE_V8)) {
-            goto bad_reg;
-        }
-        env->v7m.msplim[env->v7m.secure] = val & ~7;
-        break;
-    case 11: /* PSPLIM */
-        if (!arm_feature(env, ARM_FEATURE_V8)) {
-            goto bad_reg;
-        }
-        env->v7m.psplim[env->v7m.secure] = val & ~7;
-        break;
-    case 16: /* PRIMASK */
-        env->v7m.primask[env->v7m.secure] = val & 1;
-        break;
-    case 17: /* BASEPRI */
-        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
-            goto bad_reg;
-        }
-        env->v7m.basepri[env->v7m.secure] = val & 0xff;
-        break;
-    case 18: /* BASEPRI_MAX */
-        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
-            goto bad_reg;
-        }
-        val &= 0xff;
-        if (val != 0 && (val < env->v7m.basepri[env->v7m.secure]
-                         || env->v7m.basepri[env->v7m.secure] == 0)) {
-            env->v7m.basepri[env->v7m.secure] = val;
-        }
-        break;
-    case 19: /* FAULTMASK */
-        if (!arm_feature(env, ARM_FEATURE_M_MAIN)) {
-            goto bad_reg;
-        }
-        env->v7m.faultmask[env->v7m.secure] = val & 1;
-        break;
-    case 20: /* CONTROL */
-        /* Writing to the SPSEL bit only has an effect if we are in
-         * thread mode; other bits can be updated by any privileged code.
-         * write_v7m_control_spsel() deals with updating the SPSEL bit in
-         * env->v7m.control, so we only need update the others.
-         * For v7M, we must just ignore explicit writes to SPSEL in handler
-         * mode; for v8M the write is permitted but will have no effect.
-         */
-        if (arm_feature(env, ARM_FEATURE_V8) ||
-            !arm_v7m_is_handler_mode(env)) {
-            write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) != 0);
-        }
-        if (arm_feature(env, ARM_FEATURE_M_MAIN)) {
-            env->v7m.control[env->v7m.secure] &= ~R_V7M_CONTROL_NPRIV_MASK;
-            env->v7m.control[env->v7m.secure] |= val & R_V7M_CONTROL_NPRIV_MASK;
-        }
-        break;
-    default:
-    bad_reg:
-        qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special"
-                      " register %d\n", reg);
-        return;
-    }
-}
-
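For orientation before the next removed function: the TT response word that HELPER(v7m_tt) assembles has the layout below, with field placement taken directly from the shifts in the code (comment only, not part of the patch):

    /* TT response word, as composed at the end of HELPER(v7m_tt):
     *
     *   [31:24] IREGION   IDAU region number
     *   [23]    IRVALID   IREGION field is valid
     *   [22]    S         address is Secure
     *   [21]    NSRW      Non-secure read/write permitted
     *   [20]    NSR       Non-secure read permitted
     *   [19]    RW        read/write permitted
     *   [18]    R         read permitted
     *   [17]    SRVALID   SREGION field is valid
     *   [16]    MRVALID   MREGION field is valid
     *   [15:8]  SREGION   SAU region number
     *   [7:0]   MREGION   MPU region number
     */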
-uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
-{
-    /* Implement the TT instruction. op is bits [7:6] of the insn. */
-    bool forceunpriv = op & 1;
-    bool alt = op & 2;
-    V8M_SAttributes sattrs = {};
-    uint32_t tt_resp;
-    bool r, rw, nsr, nsrw, mrvalid;
-    int prot;
-    ARMMMUFaultInfo fi = {};
-    MemTxAttrs attrs = {};
-    hwaddr phys_addr;
-    ARMMMUIdx mmu_idx;
-    uint32_t mregion;
-    bool targetpriv;
-    bool targetsec = env->v7m.secure;
-    bool is_subpage;
-
-    /* Work out what the security state and privilege level we're
-     * interested in is...
-     */
-    if (alt) {
-        targetsec = !targetsec;
-    }
-
-    if (forceunpriv) {
-        targetpriv = false;
-    } else {
-        targetpriv = arm_v7m_is_handler_mode(env) ||
-            !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
-    }
-
-    /* ...and then figure out which MMU index this is */
-    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
-
-    /* We know that the MPU and SAU don't care about the access type
-     * for our purposes beyond that we don't want to claim to be
-     * an insn fetch, so we arbitrarily call this a read.
-     */
-
-    /* MPU region info only available for privileged or if
-     * inspecting the other MPU state.
-     */
-    if (arm_current_el(env) != 0 || alt) {
-        /* We can ignore the return value as prot is always set */
-        pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
-                          &phys_addr, &attrs, &prot, &is_subpage,
-                          &fi, &mregion);
-        if (mregion == -1) {
-            mrvalid = false;
-            mregion = 0;
-        } else {
-            mrvalid = true;
-        }
-        r = prot & PAGE_READ;
-        rw = prot & PAGE_WRITE;
-    } else {
-        r = false;
-        rw = false;
-        mrvalid = false;
-        mregion = 0;
-    }
-
-    if (env->v7m.secure) {
-        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
-        nsr = sattrs.ns && r;
-        nsrw = sattrs.ns && rw;
-    } else {
-        sattrs.ns = true;
-        nsr = false;
-        nsrw = false;
-    }
-
-    tt_resp = (sattrs.iregion << 24) |
-        (sattrs.irvalid << 23) |
-        ((!sattrs.ns) << 22) |
-        (nsrw << 21) |
-        (nsr << 20) |
-        (rw << 19) |
-        (r << 18) |
-        (sattrs.srvalid << 17) |
-        (mrvalid << 16) |
-        (sattrs.sregion << 8) |
-        mregion;
-
-    return tt_resp;
-}
-
 #endif
 
 void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in)
diff --git a/target/arm/m_helper.c b/target/arm/m_helper.c
new file mode 100644
index 0000000000..a8e608dc0e
--- /dev/null
+++ b/target/arm/m_helper.c
@@ -0,0 +1,1892 @@
+/*
+ * ARM M profile helpers.
+ *
+ * This code is licensed under the GNU GPL v2 and later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include "qemu/osdep.h"
+#include "cpu.h"
+#include "internals.h"
+#include "exec/helper-proto.h"
+#include "exec/exec-all.h"
+#include "arm_ldst.h"
+#include "exec/semihost.h"
+#include "fpu/softfloat.h"
+
+#if defined(CONFIG_USER_ONLY)
+
+/* These should probably raise undefined insn exceptions. */
+void HELPER(v7m_msr)(CPUARMState *env, uint32_t reg, uint32_t val)
+{
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    cpu_abort(CPU(cpu), "v7m_msr %d\n", reg);
+}
+
+uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg)
+{
+    ARMCPU *cpu = arm_env_get_cpu(env);
+
+    cpu_abort(CPU(cpu), "v7m_mrs %d\n", reg);
+    return 0;
+}
+
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
+{
+    /* translate.c should never generate calls here in user-only mode */
+    g_assert_not_reached();
+}
+
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+    /* translate.c should never generate calls here in user-only mode */
+    g_assert_not_reached();
+}
+
+uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op)
+{
+    /* The TT instructions can be used by unprivileged code, but in
+     * user-only emulation we don't have the MPU.
+     * Luckily since we know we are NonSecure unprivileged (and that in
+     * turn means that the A flag wasn't specified), all the bits in the
+     * register must be zero:
+     *  IREGION: 0 because IRVALID is 0
+     *  IRVALID: 0 because NS
+     *  S: 0 because NS
+     *  NSRW: 0 because NS
+     *  NSR: 0 because NS
+     *  RW: 0 because unpriv and A flag not set
+     *  R: 0 because unpriv and A flag not set
+     *  SRVALID: 0 because NS
+     *  MRVALID: 0 because unpriv and A flag not set
+     *  SREGION: 0 because SRVALID is 0
+     *  MREGION: 0 because MRVALID is 0
+     */
+    return 0;
+}
+
+#else
+
+static bool v7m_stack_write(ARMCPU *cpu, uint32_t addr, uint32_t value,
+                            ARMMMUIdx mmu_idx, bool ignfault)
+{
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    MemTxAttrs attrs = {};
+    MemTxResult txres;
+    target_ulong page_size;
+    hwaddr physaddr;
+    int prot;
+    ARMMMUFaultInfo fi = {};
+    bool secure = mmu_idx & ARM_MMU_IDX_M_S;
+    int exc;
+    bool exc_secure;
+
+    if (get_phys_addr(env, addr, MMU_DATA_STORE, mmu_idx, &physaddr,
+                      &attrs, &prot, &page_size, &fi, NULL)) {
+        /* MPU/SAU lookup failed */
+        if (fi.type == ARMFault_QEMU_SFault) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...SecureFault with SFSR.AUVIOL during stacking\n");
+            env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
+            env->v7m.sfar = addr;
+            exc = ARMV7M_EXCP_SECURE;
+            exc_secure = false;
+        } else {
+            qemu_log_mask(CPU_LOG_INT, "...MemManageFault with CFSR.MSTKERR\n");
+            env->v7m.cfsr[secure] |= R_V7M_CFSR_MSTKERR_MASK;
+            exc = ARMV7M_EXCP_MEM;
+            exc_secure = secure;
+        }
+        goto pend_fault;
+    }
+    address_space_stl_le(arm_addressspace(cs, attrs), physaddr, value,
+                         attrs, &txres);
+    if (txres != MEMTX_OK) {
+        /* BusFault trying to write the data */
+        qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.STKERR\n");
+        env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_STKERR_MASK;
+        exc = ARMV7M_EXCP_BUS;
+        exc_secure = false;
+        goto pend_fault;
+    }
+    return true;
+
+pend_fault:
+    /* By pending the exception at this point we are making
+     * the IMPDEF choice "overridden exceptions pended" (see the
+     * MergeExcInfo() pseudocode). The other choice would be to not
+     * pend them now and then make a choice about which to throw away
+     * later if we have two derived exceptions.
+     * The only case when we must not pend the exception but instead
+     * throw it away is if we are doing the push of the callee registers
+     * and we've already generated a derived exception. Even in this
+     * case we will still update the fault status registers.
+     */
+    if (!ignfault) {
+        armv7m_nvic_set_pending_derived(env->nvic, exc, exc_secure);
+    }
+    return false;
+}
+
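Both stack accessors follow one convention: return true on success, otherwise pend the appropriate fault and return false. That convention is what lets callers chain an entire frame's accesses with && and test only the final result. A hedged sketch of the caller-side pattern (mirroring what v7m_push_stack() does later in this file; 'cpu', 'env', 'frameptr' and 'mmu_idx' are assumed from the surrounding code):

    /* The chain stops at the first failed write, which has already
     * pended its own derived fault by the time it returns false.
     */
    bool stacked_ok =
        v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
        v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false);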
+static bool v7m_stack_read(ARMCPU *cpu, uint32_t *dest, uint32_t addr,
+                           ARMMMUIdx mmu_idx)
+{
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    MemTxAttrs attrs = {};
+    MemTxResult txres;
+    target_ulong page_size;
+    hwaddr physaddr;
+    int prot;
+    ARMMMUFaultInfo fi = {};
+    bool secure = mmu_idx & ARM_MMU_IDX_M_S;
+    int exc;
+    bool exc_secure;
+    uint32_t value;
+
+    if (get_phys_addr(env, addr, MMU_DATA_LOAD, mmu_idx, &physaddr,
+                      &attrs, &prot, &page_size, &fi, NULL)) {
+        /* MPU/SAU lookup failed */
+        if (fi.type == ARMFault_QEMU_SFault) {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...SecureFault with SFSR.AUVIOL during unstack\n");
+            env->v7m.sfsr |= R_V7M_SFSR_AUVIOL_MASK | R_V7M_SFSR_SFARVALID_MASK;
+            env->v7m.sfar = addr;
+            exc = ARMV7M_EXCP_SECURE;
+            exc_secure = false;
+        } else {
+            qemu_log_mask(CPU_LOG_INT,
+                          "...MemManageFault with CFSR.MUNSTKERR\n");
+            env->v7m.cfsr[secure] |= R_V7M_CFSR_MUNSTKERR_MASK;
+            exc = ARMV7M_EXCP_MEM;
+            exc_secure = secure;
+        }
+        goto pend_fault;
+    }
+
+    value = address_space_ldl(arm_addressspace(cs, attrs), physaddr,
+                              attrs, &txres);
+    if (txres != MEMTX_OK) {
+        /* BusFault trying to read the data */
+        qemu_log_mask(CPU_LOG_INT, "...BusFault with BFSR.UNSTKERR\n");
+        env->v7m.cfsr[M_REG_NS] |= R_V7M_CFSR_UNSTKERR_MASK;
+        exc = ARMV7M_EXCP_BUS;
+        exc_secure = false;
+        goto pend_fault;
+    }
+
+    *dest = value;
+    return true;
+
+pend_fault:
+    /* By pending the exception at this point we are making
+     * the IMPDEF choice "overridden exceptions pended" (see the
+     * MergeExcInfo() pseudocode). The other choice would be to not
+     * pend them now and then make a choice about which to throw away
+     * later if we have two derived exceptions.
+     */
+    armv7m_nvic_set_pending(env->nvic, exc, exc_secure);
+    return false;
+}
+
+/* Write to v7M CONTROL.SPSEL bit for the specified security bank.
+ * This may change the current stack pointer between Main and Process
+ * stack pointers if it is done for the CONTROL register for the current
+ * security state.
+ */
+static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
+                                                 bool new_spsel,
+                                                 bool secstate)
+{
+    bool old_is_psp = v7m_using_psp(env);
+
+    env->v7m.control[secstate] =
+        deposit32(env->v7m.control[secstate],
+                  R_V7M_CONTROL_SPSEL_SHIFT,
+                  R_V7M_CONTROL_SPSEL_LENGTH, new_spsel);
+
+    if (secstate == env->v7m.secure) {
+        bool new_is_psp = v7m_using_psp(env);
+        uint32_t tmp;
+
+        if (old_is_psp != new_is_psp) {
+            tmp = env->v7m.other_sp;
+            env->v7m.other_sp = env->regs[13];
+            env->regs[13] = tmp;
+        }
+    }
+}
+
+/* Write to v7M CONTROL.SPSEL bit. This may change the current
+ * stack pointer between Main and Process stack pointers.
+ */
+static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
+{
+    write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
+}
+
+/* Switch M profile security state between NS and S */
+static void switch_v7m_security_state(CPUARMState *env, bool new_secstate)
+{
+    uint32_t new_ss_msp, new_ss_psp;
+
+    if (env->v7m.secure == new_secstate) {
+        return;
+    }
+
+    /* All the banked state is accessed by looking at env->v7m.secure
+     * except for the stack pointer; rearrange the SP appropriately.
+     */
+    new_ss_msp = env->v7m.other_ss_msp;
+    new_ss_psp = env->v7m.other_ss_psp;
+
+    if (v7m_using_psp(env)) {
+        env->v7m.other_ss_psp = env->regs[13];
+        env->v7m.other_ss_msp = env->v7m.other_sp;
+    } else {
+        env->v7m.other_ss_msp = env->regs[13];
+        env->v7m.other_ss_psp = env->v7m.other_sp;
+    }
+
+    env->v7m.secure = new_secstate;
+
+    if (v7m_using_psp(env)) {
+        env->regs[13] = new_ss_psp;
+        env->v7m.other_sp = new_ss_msp;
+    } else {
+        env->regs[13] = new_ss_msp;
+        env->v7m.other_sp = new_ss_psp;
+    }
+}
+
+void HELPER(v7m_bxns)(CPUARMState *env, uint32_t dest)
+{
+    /* Handle v7M BXNS:
+     *  - if the return value is a magic value, do exception return (like BX)
+     *  - otherwise bit 0 of the return value is the target security state
+     */
+    uint32_t min_magic;
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /* Covers FNC_RETURN and EXC_RETURN magic */
+        min_magic = FNC_RETURN_MIN_MAGIC;
+    } else {
+        /* EXC_RETURN magic only */
+        min_magic = EXC_RETURN_MIN_MAGIC;
+    }
+
+    if (dest >= min_magic) {
+        /* This is an exception return magic value; put it where
+         * do_v7m_exception_exit() expects and raise EXCEPTION_EXIT.
+         * Note that if we ever add gen_ss_advance() singlestep support to
+         * M profile this should count as an "instruction execution complete"
+         * event (compare gen_bx_excret_final_code()).
+         */
+        env->regs[15] = dest & ~1;
+        env->thumb = dest & 1;
+        HELPER(exception_internal)(env, EXCP_EXCEPTION_EXIT);
+        /* notreached */
+    }
+
+    /* translate.c should have made BXNS UNDEF unless we're secure */
+    assert(env->v7m.secure);
+
+    switch_v7m_security_state(env, dest & 1);
+    env->thumb = 1;
+    env->regs[15] = dest & ~1;
+}
+
+void HELPER(v7m_blxns)(CPUARMState *env, uint32_t dest)
+{
+    /* Handle v7M BLXNS:
+     *  - bit 0 of the destination address is the target security state
+     */
+
+    /* At this point regs[15] is the address just after the BLXNS */
+    uint32_t nextinst = env->regs[15] | 1;
+    uint32_t sp = env->regs[13] - 8;
+    uint32_t saved_psr;
+
+    /* translate.c will have made BLXNS UNDEF unless we're secure */
+    assert(env->v7m.secure);
+
+    if (dest & 1) {
+        /* target is Secure, so this is just a normal BLX,
+         * except that the low bit doesn't indicate Thumb/not.
+         */
+        env->regs[14] = nextinst;
+        env->thumb = 1;
+        env->regs[15] = dest & ~1;
+        return;
+    }
+
+    /* Target is non-secure: first push a stack frame */
+    if (!QEMU_IS_ALIGNED(sp, 8)) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "BLXNS with misaligned SP is UNPREDICTABLE\n");
+    }
+
+    if (sp < v7m_sp_limit(env)) {
+        raise_exception(env, EXCP_STKOF, 0, 1);
+    }
+
+    saved_psr = env->v7m.exception;
+    if (env->v7m.control[M_REG_S] & R_V7M_CONTROL_SFPA_MASK) {
+        saved_psr |= XPSR_SFPA;
+    }
+
+    /* Note that these stores can throw exceptions on MPU faults */
+    cpu_stl_data(env, sp, nextinst);
+    cpu_stl_data(env, sp + 4, saved_psr);
+
+    env->regs[13] = sp;
+    env->regs[14] = 0xfeffffff;
+    if (arm_v7m_is_handler_mode(env)) {
+        /* Write a dummy value to IPSR, to avoid leaking the current secure
+         * exception number to non-secure code. This is guaranteed not
+         * to cause write_v7m_exception() to actually change stacks.
+         */
+        write_v7m_exception(env, 1);
+    }
+    switch_v7m_security_state(env, 0);
+    env->thumb = 1;
+    env->regs[15] = dest;
+}
+
+static uint32_t *get_v7m_sp_ptr(CPUARMState *env, bool secure, bool threadmode,
+                                bool spsel)
+{
+    /* Return a pointer to the location where we currently store the
+     * stack pointer for the requested security state and thread mode.
+     * This pointer will become invalid if the CPU state is updated
+     * such that the stack pointers are switched around (eg changing
+     * the SPSEL control bit).
+     * Compare the v8M ARM ARM pseudocode LookUpSP_with_security_mode().
+     * Unlike that pseudocode, we require the caller to pass us in the
+     * SPSEL control bit value; this is because we also use this
+     * function in handling of pushing of the callee-saves registers
+     * part of the v8M stack frame (pseudocode PushCalleeStack()),
+     * and in the tailchain codepath the SPSEL bit comes from the exception
+     * return magic LR value from the previous exception. The pseudocode
+     * opencodes the stack-selection in PushCalleeStack(), but we prefer
+     * to make this utility function generic enough to do the job.
+     */
+    bool want_psp = threadmode && spsel;
+
+    if (secure == env->v7m.secure) {
+        if (want_psp == v7m_using_psp(env)) {
+            return &env->regs[13];
+        } else {
+            return &env->v7m.other_sp;
+        }
+    } else {
+        if (want_psp) {
+            return &env->v7m.other_ss_psp;
+        } else {
+            return &env->v7m.other_ss_msp;
+        }
+    }
+}
+
+static bool arm_v7m_load_vector(ARMCPU *cpu, int exc, bool targets_secure,
+                                uint32_t *pvec)
+{
+    CPUState *cs = CPU(cpu);
+    CPUARMState *env = &cpu->env;
+    MemTxResult result;
+    uint32_t addr = env->v7m.vecbase[targets_secure] + exc * 4;
+    uint32_t vector_entry;
+    MemTxAttrs attrs = {};
+    ARMMMUIdx mmu_idx;
+    bool exc_secure;
+
+    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targets_secure, true);
+
+    /* We don't do a get_phys_addr() here because the rules for vector
+     * loads are special: they always use the default memory map, and
+     * the default memory map permits reads from all addresses.
+     * Since there's no easy way to pass through to pmsav8_mpu_lookup()
+     * that we want this special case which would always say "yes",
+     * we just do the SAU lookup here followed by a direct physical load.
+     */
+    attrs.secure = targets_secure;
+    attrs.user = false;
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        V8M_SAttributes sattrs = {};
+
+        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
+        if (sattrs.ns) {
+            attrs.secure = false;
+        } else if (!targets_secure) {
+            /* NS access to S memory */
+            goto load_fail;
+        }
+    }
+
+    vector_entry = address_space_ldl(arm_addressspace(cs, attrs), addr,
+                                     attrs, &result);
+    if (result != MEMTX_OK) {
+        goto load_fail;
+    }
+    *pvec = vector_entry;
+    return true;
+
+load_fail:
+    /* All vector table fetch fails are reported as HardFault, with
+     * HFSR.VECTTBL and .FORCED set. (FORCED is set because
+     * technically the underlying exception is a MemManage or BusFault
+     * that is escalated to HardFault.) This is a terminal exception,
+     * so we will either take the HardFault immediately or else enter
+     * lockup (the latter case is handled in armv7m_nvic_set_pending_derived()).
+     */
+    exc_secure = targets_secure ||
+        !(cpu->env.v7m.aircr & R_V7M_AIRCR_BFHFNMINS_MASK);
+    env->v7m.hfsr |= R_V7M_HFSR_VECTTBL_MASK | R_V7M_HFSR_FORCED_MASK;
+    armv7m_nvic_set_pending_derived(env->nvic, ARMV7M_EXCP_HARD, exc_secure);
+    return false;
+}
+
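The vector fetch above forms its address as VECBASE plus four bytes per exception number, since each vector table entry is one 32-bit word. A standalone restatement (illustrative only):

    #include <stdint.h>

    /* e.g. SVCall is exception number 11, so its handler address is
     * fetched from VECBASE + 0x2c.
     */
    static uint32_t vector_entry_addr(uint32_t vecbase, int exc)
    {
        return vecbase + 4u * (uint32_t)exc;
    }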
+static bool v7m_push_callee_stack(ARMCPU *cpu, uint32_t lr, bool dotailchain,
+                                  bool ignore_faults)
+{
+    /* For v8M, push the callee-saves register part of the stack frame.
+     * Compare the v8M pseudocode PushCalleeStack().
+     * In the tailchaining case this may not be the current stack.
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t *frame_sp_p;
+    uint32_t frameptr;
+    ARMMMUIdx mmu_idx;
+    bool stacked_ok;
+    uint32_t limit;
+    bool want_psp;
+
+    if (dotailchain) {
+        bool mode = lr & R_V7M_EXCRET_MODE_MASK;
+        bool priv = !(env->v7m.control[M_REG_S] & R_V7M_CONTROL_NPRIV_MASK) ||
+                    !mode;
+
+        mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, M_REG_S, priv);
+        frame_sp_p = get_v7m_sp_ptr(env, M_REG_S, mode,
+                                    lr & R_V7M_EXCRET_SPSEL_MASK);
+        want_psp = mode && (lr & R_V7M_EXCRET_SPSEL_MASK);
+        if (want_psp) {
+            limit = env->v7m.psplim[M_REG_S];
+        } else {
+            limit = env->v7m.msplim[M_REG_S];
+        }
+    } else {
+        mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
+        frame_sp_p = &env->regs[13];
+        limit = v7m_sp_limit(env);
+    }
+
+    frameptr = *frame_sp_p - 0x28;
+    if (frameptr < limit) {
+        /*
+         * Stack limit failure: set SP to the limit value, and generate
+         * STKOF UsageFault. Stack pushes below the limit must not be
+         * performed. It is IMPDEF whether pushes above the limit are
+         * performed; we choose not to.
+         */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...STKOF during callee-saves register stacking\n");
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                env->v7m.secure);
+        *frame_sp_p = limit;
+        return true;
+    }
+
+    /* Write as much of the stack frame as we can. A write failure may
+     * cause us to pend a derived exception.
+     */
+    stacked_ok =
+        v7m_stack_write(cpu, frameptr, 0xfefa125b, mmu_idx, ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0x8, env->regs[4], mmu_idx,
+                        ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0xc, env->regs[5], mmu_idx,
+                        ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0x10, env->regs[6], mmu_idx,
+                        ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0x14, env->regs[7], mmu_idx,
+                        ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0x18, env->regs[8], mmu_idx,
+                        ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0x1c, env->regs[9], mmu_idx,
+                        ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0x20, env->regs[10], mmu_idx,
+                        ignore_faults) &&
+        v7m_stack_write(cpu, frameptr + 0x24, env->regs[11], mmu_idx,
+                        ignore_faults);
+
+    /* Update SP regardless of whether any of the stack accesses failed. */
+    *frame_sp_p = frameptr;
+
+    return !stacked_ok;
+}
+
+static void v7m_exception_taken(ARMCPU *cpu, uint32_t lr, bool dotailchain,
+                                bool ignore_stackfaults)
+{
+    /* Do the "take the exception" parts of exception entry,
+     * but not the pushing of state to the stack. This is
+     * similar to the pseudocode ExceptionTaken() function.
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t addr;
+    bool targets_secure;
+    int exc;
+    bool push_failed = false;
+
+    armv7m_nvic_get_pending_irq_info(env->nvic, &exc, &targets_secure);
+    qemu_log_mask(CPU_LOG_INT, "...taking pending %s exception %d\n",
+                  targets_secure ? "secure" : "nonsecure", exc);
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+            (lr & R_V7M_EXCRET_S_MASK)) {
+            /* The background code (the owner of the registers in the
+             * exception frame) is Secure. This means it may either already
+             * have or now needs to push callee-saves registers.
+             */
+            if (targets_secure) {
+                if (dotailchain && !(lr & R_V7M_EXCRET_ES_MASK)) {
+                    /* We took an exception from Secure to NonSecure
+                     * (which means the callee-saved registers got stacked)
+                     * and are now tailchaining to a Secure exception.
+                     * Clear DCRS so eventual return from this Secure
+                     * exception unstacks the callee-saved registers.
+                     */
+                    lr &= ~R_V7M_EXCRET_DCRS_MASK;
+                }
+            } else {
+                /* We're going to a non-secure exception; push the
+                 * callee-saves registers to the stack now, if they're
+                 * not already saved.
+                 */
+                if (lr & R_V7M_EXCRET_DCRS_MASK &&
+                    !(dotailchain && !(lr & R_V7M_EXCRET_ES_MASK))) {
+                    push_failed = v7m_push_callee_stack(cpu, lr, dotailchain,
+                                                        ignore_stackfaults);
+                }
+                lr |= R_V7M_EXCRET_DCRS_MASK;
+            }
+        }
+
+        lr &= ~R_V7M_EXCRET_ES_MASK;
+        if (targets_secure || !arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            lr |= R_V7M_EXCRET_ES_MASK;
+        }
+        lr &= ~R_V7M_EXCRET_SPSEL_MASK;
+        if (env->v7m.control[targets_secure] & R_V7M_CONTROL_SPSEL_MASK) {
+            lr |= R_V7M_EXCRET_SPSEL_MASK;
+        }
+
+        /* Clear registers if necessary to prevent non-secure exception
+         * code being able to see register values from secure code.
+         * Where register values become architecturally UNKNOWN we leave
+         * them with their previous values.
+         */
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (!targets_secure) {
+                /* Always clear the caller-saved registers (they have been
+                 * pushed to the stack earlier in v7m_push_stack()).
+                 * Clear callee-saved registers if the background code is
+                 * Secure (in which case these regs were saved in
+                 * v7m_push_callee_stack()).
+                 */
+                int i;
+
+                for (i = 0; i < 13; i++) {
+                    /* r4..r11 are callee-saves, zero only if EXCRET.S == 1 */
+                    if (i < 4 || i > 11 || (lr & R_V7M_EXCRET_S_MASK)) {
+                        env->regs[i] = 0;
+                    }
+                }
+                /* Clear EAPSR */
+                xpsr_write(env, 0, XPSR_NZCV | XPSR_Q | XPSR_GE | XPSR_IT);
+            }
+        }
+    }
+
+    if (push_failed && !ignore_stackfaults) {
+        /* Derived exception on callee-saves register stacking:
+         * we might now want to take a different exception which
+         * targets a different security state, so try again from the top.
+         */
+        qemu_log_mask(CPU_LOG_INT,
+                      "...derived exception on callee-saves register stacking");
+        v7m_exception_taken(cpu, lr, true, true);
+        return;
+    }
+
+    if (!arm_v7m_load_vector(cpu, exc, targets_secure, &addr)) {
+        /* Vector load failed: derived exception */
+        qemu_log_mask(CPU_LOG_INT, "...derived exception on vector table load");
+        v7m_exception_taken(cpu, lr, true, true);
+        return;
+    }
+
+    /* Now we've done everything that might cause a derived exception
+     * we can go ahead and activate whichever exception we're going to
+     * take (which might now be the derived exception).
+     */
+    armv7m_nvic_acknowledge_irq(env->nvic);
+
+    /* Switch to target security state -- must do this before writing SPSEL */
+    switch_v7m_security_state(env, targets_secure);
+    write_v7m_control_spsel(env, 0);
+    arm_clear_exclusive(env);
+    /* Clear IT bits */
+    env->condexec_bits = 0;
+    env->regs[14] = lr;
+    env->regs[15] = addr & 0xfffffffe;
+    env->thumb = addr & 1;
+}
+
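For orientation: v7m_push_stack() below writes the architecture's basic (non-FP) exception frame. Its layout, matching the offsets used in the code (comment only, not part of the patch):

    /* Basic hardware-stacked exception frame:
     *
     *   SP+0x00  r0        SP+0x10  r12
     *   SP+0x04  r1        SP+0x14  lr  (r14)
     *   SP+0x08  r2        SP+0x18  return address (r15)
     *   SP+0x0c  r3        SP+0x1c  xPSR (SPREALIGN flag in bit 9)
     */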
+static bool v7m_push_stack(ARMCPU *cpu)
+{
+    /* Do the "set up stack frame" part of exception entry,
+     * similar to pseudocode PushStack().
+     * Return true if we generate a derived exception (and so
+     * should ignore further stack faults trying to process
+     * that derived exception.)
+     */
+    bool stacked_ok;
+    CPUARMState *env = &cpu->env;
+    uint32_t xpsr = xpsr_read(env);
+    uint32_t frameptr = env->regs[13];
+    ARMMMUIdx mmu_idx = core_to_arm_mmu_idx(env, cpu_mmu_index(env, false));
+
+    /* Align stack pointer if the guest wants that */
+    if ((frameptr & 4) &&
+        (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKALIGN_MASK)) {
+        frameptr -= 4;
+        xpsr |= XPSR_SPREALIGN;
+    }
+
+    frameptr -= 0x20;
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        uint32_t limit = v7m_sp_limit(env);
+
+        if (frameptr < limit) {
+            /*
+             * Stack limit failure: set SP to the limit value, and generate
+             * STKOF UsageFault. Stack pushes below the limit must not be
+             * performed. It is IMPDEF whether pushes above the limit are
+             * performed; we choose not to.
+             */
+            qemu_log_mask(CPU_LOG_INT,
+                          "...STKOF during stacking\n");
+            env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_STKOF_MASK;
+            armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                    env->v7m.secure);
+            env->regs[13] = limit;
+            return true;
+        }
+    }
+
+    /* Write as much of the stack frame as we can. If we fail a stack
+     * write this will result in a derived exception being pended
+     * (which may be taken in preference to the one we started with
+     * if it has higher priority).
+     */
+    stacked_ok =
+        v7m_stack_write(cpu, frameptr, env->regs[0], mmu_idx, false) &&
+        v7m_stack_write(cpu, frameptr + 4, env->regs[1], mmu_idx, false) &&
+        v7m_stack_write(cpu, frameptr + 8, env->regs[2], mmu_idx, false) &&
+        v7m_stack_write(cpu, frameptr + 12, env->regs[3], mmu_idx, false) &&
+        v7m_stack_write(cpu, frameptr + 16, env->regs[12], mmu_idx, false) &&
+        v7m_stack_write(cpu, frameptr + 20, env->regs[14], mmu_idx, false) &&
+        v7m_stack_write(cpu, frameptr + 24, env->regs[15], mmu_idx, false) &&
+        v7m_stack_write(cpu, frameptr + 28, xpsr, mmu_idx, false);
+
+    /* Update SP regardless of whether any of the stack accesses failed. */
+    env->regs[13] = frameptr;
+
+    return !stacked_ok;
+}
+
+static void do_v7m_exception_exit(ARMCPU *cpu)
+{
+    CPUARMState *env = &cpu->env;
+    uint32_t excret;
+    uint32_t xpsr;
+    bool ufault = false;
+    bool sfault = false;
+    bool return_to_sp_process;
+    bool return_to_handler;
+    bool rettobase = false;
+    bool exc_secure = false;
+    bool return_to_secure;
+
+    /* If we're not in Handler mode then jumps to magic exception-exit
+     * addresses don't have magic behaviour. However for the v8M
+     * security extensions the magic secure-function-return has to
+     * work in thread mode too, so to avoid doing an extra check in
+     * the generated code we allow exception-exit magic to also cause the
+     * internal exception and bring us here in thread mode. Correct code
+     * will never try to do this (the following insn fetch will always
+     * fault) so the overhead of having taken an unnecessary exception
+     * doesn't matter.
+     */
+    if (!arm_v7m_is_handler_mode(env)) {
+        return;
+    }
+
+    /* In the spec pseudocode ExceptionReturn() is called directly
+     * from BXWritePC() and gets the full target PC value including
+     * bit zero. In QEMU's implementation we treat it as a normal
+     * jump-to-register (which is then caught later on), and so split
+     * the target value up between env->regs[15] and env->thumb in
+     * gen_bx(). Reconstitute it.
+     */
+    excret = env->regs[15];
+    if (env->thumb) {
+        excret |= 1;
+    }
+
+    qemu_log_mask(CPU_LOG_INT, "Exception return: magic PC %" PRIx32
+                  " previous exception %d\n",
+                  excret, env->v7m.exception);
+
+    if ((excret & R_V7M_EXCRET_RES1_MASK) != R_V7M_EXCRET_RES1_MASK) {
+        qemu_log_mask(LOG_GUEST_ERROR, "M profile: zero high bits in exception "
+                      "exit PC value 0x%" PRIx32 " are UNPREDICTABLE\n",
+                      excret);
+    }
+
+    if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+        /* EXC_RETURN.ES validation check (R_SMFL). We must do this before
+         * we pick which FAULTMASK to clear.
+         */
+        if (!env->v7m.secure &&
+            ((excret & R_V7M_EXCRET_ES_MASK) ||
+             !(excret & R_V7M_EXCRET_DCRS_MASK))) {
+            sfault = 1;
+            /* For all other purposes, treat ES as 0 (R_HXSR) */
+            excret &= ~R_V7M_EXCRET_ES_MASK;
+        }
+        exc_secure = excret & R_V7M_EXCRET_ES_MASK;
+    }
+
+    if (env->v7m.exception != ARMV7M_EXCP_NMI) {
+        /* Auto-clear FAULTMASK on return from other than NMI.
+         * If the security extension is implemented then this only
+         * happens if the raw execution priority is >= 0; the
+         * value of the ES bit in the exception return value indicates
+         * which security state's faultmask to clear. (v8M ARM ARM R_KBNF.)
+         */
+        if (arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            if (armv7m_nvic_raw_execution_priority(env->nvic) >= 0) {
+                env->v7m.faultmask[exc_secure] = 0;
+            }
+        } else {
+            env->v7m.faultmask[M_REG_NS] = 0;
+        }
+    }
+
+    switch (armv7m_nvic_complete_irq(env->nvic, env->v7m.exception,
+                                     exc_secure)) {
+    case -1:
+        /* attempt to exit an exception that isn't active */
+        ufault = true;
+        break;
+    case 0:
+        /* still an irq active now */
+        break;
+    case 1:
+        /* we returned to base exception level, no nesting.
+         * (In the pseudocode this is written using "NestedActivation != 1"
+         * where we have 'rettobase == false'.)
+         */
+        rettobase = true;
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    return_to_handler = !(excret & R_V7M_EXCRET_MODE_MASK);
+    return_to_sp_process = excret & R_V7M_EXCRET_SPSEL_MASK;
+    return_to_secure = arm_feature(env, ARM_FEATURE_M_SECURITY) &&
+        (excret & R_V7M_EXCRET_S_MASK);
+
+    if (arm_feature(env, ARM_FEATURE_V8)) {
+        if (!arm_feature(env, ARM_FEATURE_M_SECURITY)) {
+            /* UNPREDICTABLE if S == 1 or DCRS == 0 or ES == 1 (R_XLCP);
+             * we choose to take the UsageFault.
+             */
+            if ((excret & R_V7M_EXCRET_S_MASK) ||
+                (excret & R_V7M_EXCRET_ES_MASK) ||
+                !(excret & R_V7M_EXCRET_DCRS_MASK)) {
+                ufault = true;
+            }
+        }
+        if (excret & R_V7M_EXCRET_RES0_MASK) {
+            ufault = true;
+        }
+    } else {
+        /* For v7M we only recognize certain combinations of the low bits */
+        switch (excret & 0xf) {
+        case 1: /* Return to Handler */
+            break;
+        case 13: /* Return to Thread using Process stack */
+        case 9: /* Return to Thread using Main stack */
+            /* We only need to check NONBASETHRDENA for v7M, because in
+             * v8M this bit does not exist (it is RES1).
+             */
+            if (!rettobase &&
+                !(env->v7m.ccr[env->v7m.secure] &
+                  R_V7M_CCR_NONBASETHRDENA_MASK)) {
+                ufault = true;
+            }
+            break;
+        default:
+            ufault = true;
+        }
+    }
+
+    /*
+     * Set CONTROL.SPSEL from excret.SPSEL. Since we're still in
+     * Handler mode (and will be until we write the new XPSR.Interrupt
+     * field) this does not switch around the current stack pointer.
+     * We must do this before we do any kind of tailchaining, including
+     * for the derived exceptions on integrity check failures, or we will
+     * give the guest an incorrect EXCRET.SPSEL value on exception entry.
+     */
+    write_v7m_control_spsel_for_secstate(env, return_to_sp_process, exc_secure);
+
+    if (sfault) {
+        env->v7m.sfsr |= R_V7M_SFSR_INVER_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+        qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                      "stackframe: failed EXC_RETURN.ES validity check\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
+    if (ufault) {
+        /* Bad exception return: instead of popping the exception
+         * stack, directly take a usage fault on the current stack.
+         */
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.secure);
+        qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+                      "stackframe: failed exception return integrity check\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
+    /*
+     * Tailchaining: if there is currently a pending exception that
+     * is high enough priority to preempt execution at the level we're
+     * about to return to, then just directly take that exception now,
+     * avoiding an unstack-and-then-stack. Note that now we have
+     * deactivated the previous exception by calling armv7m_nvic_complete_irq()
+     * our current execution priority is already the execution priority we are
+     * returning to -- none of the state we would unstack or set based on
+     * the EXCRET value affects it.
+     */
+    if (armv7m_nvic_can_take_pending_exception(env->nvic)) {
+        qemu_log_mask(CPU_LOG_INT, "...tailchaining to pending exception\n");
+        v7m_exception_taken(cpu, excret, true, false);
+        return;
+    }
+
+    switch_v7m_security_state(env, return_to_secure);
+
+    {
+        /* The stack pointer we should be reading the exception frame from
+         * depends on bits in the magic exception return type value (and
+         * for v8M isn't necessarily the stack pointer we will eventually
+         * end up resuming execution with). Get a pointer to the location
+         * in the CPU state struct where the SP we need is currently being
+         * stored; we will use and modify it in place.
+         * We use this limited C variable scope so we don't accidentally
+         * use 'frame_sp_p' after we do something that makes it invalid.
+         */
+        uint32_t *frame_sp_p = get_v7m_sp_ptr(env,
+                                              return_to_secure,
+                                              !return_to_handler,
+                                              return_to_sp_process);
+        uint32_t frameptr = *frame_sp_p;
+        bool pop_ok = true;
+        ARMMMUIdx mmu_idx;
+        bool return_to_priv = return_to_handler ||
+            !(env->v7m.control[return_to_secure] & R_V7M_CONTROL_NPRIV_MASK);
+
+        mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, return_to_secure,
+                                                        return_to_priv);
+
+        if (!QEMU_IS_ALIGNED(frameptr, 8) &&
+            arm_feature(env, ARM_FEATURE_V8)) {
+            qemu_log_mask(LOG_GUEST_ERROR,
+                          "M profile exception return with non-8-aligned SP "
+                          "for destination state is UNPREDICTABLE\n");
+        }
+
+        /* Do we need to pop callee-saved registers? */
+        if (return_to_secure &&
+            ((excret & R_V7M_EXCRET_ES_MASK) == 0 ||
+             (excret & R_V7M_EXCRET_DCRS_MASK) == 0)) {
+            uint32_t expected_sig = 0xfefa125b;
+            uint32_t actual_sig;
+
+            pop_ok = v7m_stack_read(cpu, &actual_sig, frameptr, mmu_idx);
+
+            if (pop_ok && expected_sig != actual_sig) {
+                /* Take a SecureFault on the current stack */
+                env->v7m.sfsr |= R_V7M_SFSR_INVIS_MASK;
+                armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false);
+                qemu_log_mask(CPU_LOG_INT, "...taking SecureFault on existing "
+                              "stackframe: failed exception return integrity "
+                              "signature check\n");
+                v7m_exception_taken(cpu, excret, true, false);
+                return;
+            }
+
+            pop_ok = pop_ok &&
+                v7m_stack_read(cpu, &env->regs[4], frameptr + 0x8, mmu_idx) &&
+                v7m_stack_read(cpu, &env->regs[5], frameptr + 0xc, mmu_idx) &&
+                v7m_stack_read(cpu, &env->regs[6], frameptr + 0x10, mmu_idx) &&
+                v7m_stack_read(cpu, &env->regs[7], frameptr + 0x14, mmu_idx) &&
+                v7m_stack_read(cpu, &env->regs[8], frameptr + 0x18, mmu_idx) &&
+                v7m_stack_read(cpu, &env->regs[9], frameptr + 0x1c, mmu_idx) &&
+                v7m_stack_read(cpu, &env->regs[10], frameptr + 0x20, mmu_idx) &&
+                v7m_stack_read(cpu, &env->regs[11], frameptr + 0x24, mmu_idx);
+
+            frameptr += 0x28;
+        }
+
+        /* Pop registers */
+        pop_ok = pop_ok &&
+            v7m_stack_read(cpu, &env->regs[0], frameptr, mmu_idx) &&
+            v7m_stack_read(cpu, &env->regs[1], frameptr + 0x4, mmu_idx) &&
+            v7m_stack_read(cpu, &env->regs[2], frameptr + 0x8, mmu_idx) &&
+            v7m_stack_read(cpu, &env->regs[3], frameptr + 0xc, mmu_idx) &&
+            v7m_stack_read(cpu, &env->regs[12], frameptr + 0x10, mmu_idx) &&
+            v7m_stack_read(cpu, &env->regs[14], frameptr + 0x14, mmu_idx) &&
+            v7m_stack_read(cpu, &env->regs[15], frameptr + 0x18, mmu_idx) &&
+            v7m_stack_read(cpu, &xpsr, frameptr + 0x1c, mmu_idx);
+
+        if (!pop_ok) {
+            /* v7m_stack_read() pended a fault, so take it (as a tail
+             * chained exception on the same stack frame)
+             */
+            qemu_log_mask(CPU_LOG_INT, "...derived exception on unstacking\n");
+            v7m_exception_taken(cpu, excret, true, false);
+            return;
+        }
+
+        /* Returning from an exception with a PC with bit 0 set is defined
+         * behaviour on v8M (bit 0 is ignored), but for v7M it was specified
+         * to be UNPREDICTABLE. In practice actual v7M hardware seems to ignore
+         * the lsbit, and there are several RTOSes out there which incorrectly
+         * assume the r15 in the stack frame should be a Thumb-style "lsbit
+         * indicates ARM/Thumb" value, so ignore the bit on v7M as well, but
+         * complain about the badly behaved guest.
+         */
+        if (env->regs[15] & 1) {
+            env->regs[15] &= ~1U;
+            if (!arm_feature(env, ARM_FEATURE_V8)) {
+                qemu_log_mask(LOG_GUEST_ERROR,
+                              "M profile return from interrupt with misaligned "
+                              "PC is UNPREDICTABLE on v7M\n");
+            }
+        }
+
+        if (arm_feature(env, ARM_FEATURE_V8)) {
+            /* For v8M we have to check whether the xPSR exception field
+             * matches the EXCRET value for return to handler/thread
+             * before we commit to changing the SP and xPSR.
+             */
+            bool will_be_handler = (xpsr & XPSR_EXCP) != 0;
+            if (return_to_handler != will_be_handler) {
+                /* Take an INVPC UsageFault on the current stack.
+                 * By this point we will have switched to the security state
+                 * for the background state, so this UsageFault will target
+                 * that state.
+                 */
+                armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE,
+                                        env->v7m.secure);
+                env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+                qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on existing "
+                              "stackframe: failed exception return integrity "
+                              "check\n");
+                v7m_exception_taken(cpu, excret, true, false);
+                return;
+            }
+        }
+
+        /* Commit to consuming the stack frame */
+        frameptr += 0x20;
+        /* Undo stack alignment (the SPREALIGN bit indicates that the original
+         * pre-exception SP was not 8-aligned and we added a padding word to
+         * align it, so we undo this by ORing in the bit that increases it
+         * from the current 8-aligned value to the 8-unaligned value. (Adding 4
+         * would work too but a logical OR is how the pseudocode specifies it.)
+         */
+        if (xpsr & XPSR_SPREALIGN) {
+            frameptr |= 4;
+        }
+        *frame_sp_p = frameptr;
+    }
+    /* This xpsr_write() will invalidate frame_sp_p as it may switch stack */
+    xpsr_write(env, xpsr, ~XPSR_SPREALIGN);
+
+    /* The restored xPSR exception field will be zero if we're
+     * resuming in Thread mode. If that doesn't match what the
+     * exception return excret specified then this is a UsageFault.
+     * v7M requires we make this check here; v8M did it earlier.
+     */
+    if (return_to_handler != arm_v7m_is_handler_mode(env)) {
+        /* Take an INVPC UsageFault by pushing the stack again;
+         * we know we're v7M so this is never a Secure UsageFault.
+         */
+        bool ignore_stackfaults;
+
+        assert(!arm_feature(env, ARM_FEATURE_V8));
+        armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, false);
+        env->v7m.cfsr[env->v7m.secure] |= R_V7M_CFSR_INVPC_MASK;
+        ignore_stackfaults = v7m_push_stack(cpu);
+        qemu_log_mask(CPU_LOG_INT, "...taking UsageFault on new stackframe: "
+                      "failed exception return integrity check\n");
+        v7m_exception_taken(cpu, excret, false, ignore_stackfaults);
+        return;
+    }
+
+    /* Otherwise, we have a successful exception exit. */
+    arm_clear_exclusive(env);
+    qemu_log_mask(CPU_LOG_INT, "...successful exception return\n");
+}
+
+static bool do_v7m_function_return(ARMCPU *cpu)
+{
+    /* v8M security extensions magic function return.
+     * We may either:
+     *  (1) throw an exception (longjump)
+     *  (2) return true if we successfully handled the function return
+     *  (3) return false if we failed a consistency check and have
+     *      pended a UsageFault that needs to be taken now
+     *
+     * At this point the magic return value is split between env->regs[15]
+     * and env->thumb. We don't bother to reconstitute it because we don't
+     * need it (all values are handled the same way).
+     */
+    CPUARMState *env = &cpu->env;
+    uint32_t newpc, newpsr, newpsr_exc;
+
+    qemu_log_mask(CPU_LOG_INT, "...really v7M secure function return\n");
+
+    {
+        bool threadmode, spsel;
+        TCGMemOpIdx oi;
+        ARMMMUIdx mmu_idx;
+        uint32_t *frame_sp_p;
+        uint32_t frameptr;
+
+        /* Pull the return address and IPSR from the Secure stack */
+        threadmode = !arm_v7m_is_handler_mode(env);
+        spsel = env->v7m.control[M_REG_S] & R_V7M_CONTROL_SPSEL_MASK;
+
+        frame_sp_p = get_v7m_sp_ptr(env, true, threadmode, spsel);
+        frameptr = *frame_sp_p;
+
+        /* These loads may throw an exception (for MPU faults). We want to
+         * do them as secure, so work out what MMU index that is.
+ */ + mmu_idx =3D arm_v7m_mmu_idx_for_secstate(env, true); + oi =3D make_memop_idx(MO_LE, arm_to_core_mmu_idx(mmu_idx)); + newpc =3D helper_le_ldul_mmu(env, frameptr, oi, 0); + newpsr =3D helper_le_ldul_mmu(env, frameptr + 4, oi, 0); + + /* Consistency checks on new IPSR */ + newpsr_exc =3D newpsr & XPSR_EXCP; + if (!((env->v7m.exception =3D=3D 0 && newpsr_exc =3D=3D 0) || + (env->v7m.exception =3D=3D 1 && newpsr_exc !=3D 0))) { + /* Pend the fault and tell our caller to take it */ + env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_INVPC_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, + env->v7m.secure); + qemu_log_mask(CPU_LOG_INT, + "...taking INVPC UsageFault: " + "IPSR consistency check failed\n"); + return false; + } + + *frame_sp_p =3D frameptr + 8; + } + + /* This invalidates frame_sp_p */ + switch_v7m_security_state(env, true); + env->v7m.exception =3D newpsr_exc; + env->v7m.control[M_REG_S] &=3D ~R_V7M_CONTROL_SFPA_MASK; + if (newpsr & XPSR_SFPA) { + env->v7m.control[M_REG_S] |=3D R_V7M_CONTROL_SFPA_MASK; + } + xpsr_write(env, 0, XPSR_IT); + env->thumb =3D newpc & 1; + env->regs[15] =3D newpc & ~1; + + qemu_log_mask(CPU_LOG_INT, "...function return successful\n"); + return true; +} + +void arm_log_exception(int idx) +{ + if (qemu_loglevel_mask(CPU_LOG_INT)) { + const char *exc =3D NULL; + static const char * const excnames[] =3D { + [EXCP_UDEF] =3D "Undefined Instruction", + [EXCP_SWI] =3D "SVC", + [EXCP_PREFETCH_ABORT] =3D "Prefetch Abort", + [EXCP_DATA_ABORT] =3D "Data Abort", + [EXCP_IRQ] =3D "IRQ", + [EXCP_FIQ] =3D "FIQ", + [EXCP_BKPT] =3D "Breakpoint", + [EXCP_EXCEPTION_EXIT] =3D "QEMU v7M exception exit", + [EXCP_KERNEL_TRAP] =3D "QEMU intercept of kernel commpage", + [EXCP_HVC] =3D "Hypervisor Call", + [EXCP_HYP_TRAP] =3D "Hypervisor Trap", + [EXCP_SMC] =3D "Secure Monitor Call", + [EXCP_VIRQ] =3D "Virtual IRQ", + [EXCP_VFIQ] =3D "Virtual FIQ", + [EXCP_SEMIHOST] =3D "Semihosting call", + [EXCP_NOCP] =3D "v7M NOCP UsageFault", + [EXCP_INVSTATE] =3D "v7M INVSTATE UsageFault", + [EXCP_STKOF] =3D "v8M STKOF UsageFault", + }; + + if (idx >=3D 0 && idx < ARRAY_SIZE(excnames)) { + exc =3D excnames[idx]; + } + if (!exc) { + exc =3D "unknown"; + } + qemu_log_mask(CPU_LOG_INT, "Taking exception %d [%s]\n", idx, exc); + } +} + +static bool v7m_read_half_insn(ARMCPU *cpu, ARMMMUIdx mmu_idx, + uint32_t addr, uint16_t *insn) +{ + /* Load a 16-bit portion of a v7M instruction, returning true on succe= ss, + * or false on failure (in which case we will have pended the appropri= ate + * exception). + * We need to do the instruction fetch's MPU and SAU checks + * like this because there is no MMU index that would allow + * doing the load with a single function call. Instead we must + * first check that the security attributes permit the load + * and that they don't mismatch on the two halves of the instruction, + * and then we do the load as a secure load (ie using the security + * attributes of the address, not the CPU, as architecturally required= ). + */ + CPUState *cs =3D CPU(cpu); + CPUARMState *env =3D &cpu->env; + V8M_SAttributes sattrs =3D {}; + MemTxAttrs attrs =3D {}; + ARMMMUFaultInfo fi =3D {}; + MemTxResult txres; + target_ulong page_size; + hwaddr physaddr; + int prot; + + v8m_security_lookup(env, addr, MMU_INST_FETCH, mmu_idx, &sattrs); + if (!sattrs.nsc || sattrs.ns) { + /* This must be the second half of the insn, and it straddles a + * region boundary with the second half not being S&NSC. 
+ */ + env->v7m.sfsr |=3D R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; + } + if (get_phys_addr(env, addr, MMU_INST_FETCH, mmu_idx, + &physaddr, &attrs, &prot, &page_size, &fi, NULL)) { + /* the MPU lookup failed */ + env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_IACCVIOL_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, env->v7m.secur= e); + qemu_log_mask(CPU_LOG_INT, "...really MemManage with CFSR.IACCVIOL= \n"); + return false; + } + *insn =3D address_space_lduw_le(arm_addressspace(cs, attrs), physaddr, + attrs, &txres); + if (txres !=3D MEMTX_OK) { + env->v7m.cfsr[M_REG_NS] |=3D R_V7M_CFSR_IBUSERR_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); + qemu_log_mask(CPU_LOG_INT, "...really BusFault with CFSR.IBUSERR\n= "); + return false; + } + return true; +} + + +static bool v7m_handle_execute_nsc(ARMCPU *cpu) +{ + /* Check whether this attempt to execute code in a Secure & NS-Callable + * memory region is for an SG instruction; if so, then emulate the + * effect of the SG instruction and return true. Otherwise pend + * the correct kind of exception and return false. + */ + CPUARMState *env =3D &cpu->env; + ARMMMUIdx mmu_idx; + uint16_t insn; + + /* We should never get here unless get_phys_addr_pmsav8() caused + * an exception for NS executing in S&NSC memory. + */ + assert(!env->v7m.secure); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + + /* We want to do the MPU lookup as secure; work out what mmu_idx that = is */ + mmu_idx =3D arm_v7m_mmu_idx_for_secstate(env, true); + + if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15], &insn)) { + return false; + } + + if (!env->thumb) { + goto gen_invep; + } + + if (insn !=3D 0xe97f) { + /* Not an SG instruction first half (we choose the IMPDEF + * early-SG-check option). + */ + goto gen_invep; + } + + if (!v7m_read_half_insn(cpu, mmu_idx, env->regs[15] + 2, &insn)) { + return false; + } + + if (insn !=3D 0xe97f) { + /* Not an SG instruction second half (yes, both halves of the SG + * insn have the same hex value) + */ + goto gen_invep; + } + + /* OK, we have confirmed that we really have an SG instruction. + * We know we're NS in S memory so don't need to repeat those checks. + */ + qemu_log_mask(CPU_LOG_INT, "...really an SG instruction at 0x%08" PRIx= 32 + ", executing it\n", env->regs[15]); + env->regs[14] &=3D ~1; + switch_v7m_security_state(env, true); + xpsr_write(env, 0, XPSR_IT); + env->regs[15] +=3D 4; + return true; + +gen_invep: + env->v7m.sfsr |=3D R_V7M_SFSR_INVEP_MASK; + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n"); + return false; +} + +void arm_v7m_cpu_do_interrupt(CPUState *cs) +{ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + uint32_t lr; + bool ignore_stackfaults; + + arm_log_exception(cs->exception_index); + + /* For exceptions we just mark as pending on the NVIC, and let that + handle it. 
*/ + switch (cs->exception_index) { + case EXCP_UDEF: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.sec= ure); + env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_UNDEFINSTR_MASK; + break; + case EXCP_NOCP: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.sec= ure); + env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_NOCP_MASK; + break; + case EXCP_INVSTATE: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.sec= ure); + env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_INVSTATE_MASK; + break; + case EXCP_STKOF: + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_USAGE, env->v7m.sec= ure); + env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_STKOF_MASK; + break; + case EXCP_SWI: + /* The PC already points to the next instruction. */ + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SVC, env->v7m.secur= e); + break; + case EXCP_PREFETCH_ABORT: + case EXCP_DATA_ABORT: + /* Note that for M profile we don't have a guest facing FSR, but + * the env->exception.fsr will be populated by the code that + * raises the fault, in the A profile short-descriptor format. + */ + switch (env->exception.fsr & 0xf) { + case M_FAKE_FSR_NSC_EXEC: + /* Exception generated when we try to execute code at an addre= ss + * which is marked as Secure & Non-Secure Callable and the CPU + * is in the Non-Secure state. The only instruction which can + * be executed like this is SG (and that only if both halves of + * the SG instruction have the same security attributes.) + * Everything else must generate an INVEP SecureFault, so we + * emulate the SG instruction here. + */ + if (v7m_handle_execute_nsc(cpu)) { + return; + } + break; + case M_FAKE_FSR_SFAULT: + /* Various flavours of SecureFault for attempts to execute or + * access data in the wrong security state. + */ + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + if (env->v7m.secure) { + env->v7m.sfsr |=3D R_V7M_SFSR_INVTRAN_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVTRAN= \n"); + } else { + env->v7m.sfsr |=3D R_V7M_SFSR_INVEP_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.INVEP\n= "); + } + break; + case EXCP_DATA_ABORT: + /* This must be an NS access to S memory */ + env->v7m.sfsr |=3D R_V7M_SFSR_AUVIOL_MASK; + qemu_log_mask(CPU_LOG_INT, + "...really SecureFault with SFSR.AUVIOL\n"); + break; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_SECURE, false); + break; + case 0x8: /* External Abort */ + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + env->v7m.cfsr[M_REG_NS] |=3D R_V7M_CFSR_IBUSERR_MASK; + qemu_log_mask(CPU_LOG_INT, "...with CFSR.IBUSERR\n"); + break; + case EXCP_DATA_ABORT: + env->v7m.cfsr[M_REG_NS] |=3D + (R_V7M_CFSR_PRECISERR_MASK | R_V7M_CFSR_BFARVALID_MASK= ); + env->v7m.bfar =3D env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, + "...with CFSR.PRECISERR and BFAR 0x%x\n", + env->v7m.bfar); + break; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_BUS, false); + break; + default: + /* All other FSR values are either MPU faults or "can't happen + * for M profile" cases. 
+ */ + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + env->v7m.cfsr[env->v7m.secure] |=3D R_V7M_CFSR_IACCVIOL_MA= SK; + qemu_log_mask(CPU_LOG_INT, "...with CFSR.IACCVIOL\n"); + break; + case EXCP_DATA_ABORT: + env->v7m.cfsr[env->v7m.secure] |=3D + (R_V7M_CFSR_DACCVIOL_MASK | R_V7M_CFSR_MMARVALID_MASK); + env->v7m.mmfar[env->v7m.secure] =3D env->exception.vaddres= s; + qemu_log_mask(CPU_LOG_INT, + "...with CFSR.DACCVIOL and MMFAR 0x%x\n", + env->v7m.mmfar[env->v7m.secure]); + break; + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_MEM, + env->v7m.secure); + break; + } + break; + case EXCP_BKPT: + if (semihosting_enabled()) { + int nr; + nr =3D arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) & 0= xff; + if (nr =3D=3D 0xab) { + env->regs[15] +=3D 2; + qemu_log_mask(CPU_LOG_INT, + "...handling as semihosting call 0x%x\n", + env->regs[0]); + env->regs[0] =3D do_arm_semihosting(env); + return; + } + } + armv7m_nvic_set_pending(env->nvic, ARMV7M_EXCP_DEBUG, false); + break; + case EXCP_IRQ: + break; + case EXCP_EXCEPTION_EXIT: + if (env->regs[15] < EXC_RETURN_MIN_MAGIC) { + /* Must be v8M security extension function return */ + assert(env->regs[15] >=3D FNC_RETURN_MIN_MAGIC); + assert(arm_feature(env, ARM_FEATURE_M_SECURITY)); + if (do_v7m_function_return(cpu)) { + return; + } + } else { + do_v7m_exception_exit(cpu); + return; + } + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + return; /* Never happens. Keep compiler happy. */ + } + + if (arm_feature(env, ARM_FEATURE_V8)) { + lr =3D R_V7M_EXCRET_RES1_MASK | + R_V7M_EXCRET_DCRS_MASK | + R_V7M_EXCRET_FTYPE_MASK; + /* The S bit indicates whether we should return to Secure + * or NonSecure (ie our current state). + * The ES bit indicates whether we're taking this exception + * to Secure or NonSecure (ie our target state). We set it + * later, in v7m_exception_taken(). + * The SPSEL bit is also set in v7m_exception_taken() for v8M. + * This corresponds to the ARM ARM pseudocode for v8M setting + * some LR bits in PushStack() and some in ExceptionTaken(); + * the distinction matters for the tailchain cases where we + * can take an exception without pushing the stack. + */ + if (env->v7m.secure) { + lr |=3D R_V7M_EXCRET_S_MASK; + } + } else { + lr =3D R_V7M_EXCRET_RES1_MASK | + R_V7M_EXCRET_S_MASK | + R_V7M_EXCRET_DCRS_MASK | + R_V7M_EXCRET_FTYPE_MASK | + R_V7M_EXCRET_ES_MASK; + if (env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSEL_MASK) { + lr |=3D R_V7M_EXCRET_SPSEL_MASK; + } + } + if (!arm_v7m_is_handler_mode(env)) { + lr |=3D R_V7M_EXCRET_MODE_MASK; + } + + ignore_stackfaults =3D v7m_push_stack(cpu); + v7m_exception_taken(cpu, lr, false, ignore_stackfaults); +} + +uint32_t HELPER(v7m_mrs)(CPUARMState *env, uint32_t reg) +{ + uint32_t mask; + unsigned el =3D arm_current_el(env); + + /* First handle registers which unprivileged can read */ + + switch (reg) { + case 0 ... 7: /* xPSR sub-fields */ + mask =3D 0; + if ((reg & 1) && el) { + mask |=3D XPSR_EXCP; /* IPSR (unpriv. reads as zero) */ + } + if (!(reg & 4)) { + mask |=3D XPSR_NZCV | XPSR_Q; /* APSR */ + } + /* EPSR reads as zero */ + return xpsr_read(env) & mask; + break; + case 20: /* CONTROL */ + return env->v7m.control[env->v7m.secure]; + case 0x94: /* CONTROL_NS */ + /* We have to handle this here because unprivileged Secure code + * can read the NS CONTROL register. 
+ */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.control[M_REG_NS]; + } + + if (el =3D=3D 0) { + return 0; /* unprivileged reads others as zero */ + } + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + switch (reg) { + case 0x88: /* MSP_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.other_ss_msp; + case 0x89: /* PSP_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.other_ss_psp; + case 0x8a: /* MSPLIM_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.msplim[M_REG_NS]; + case 0x8b: /* PSPLIM_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.psplim[M_REG_NS]; + case 0x90: /* PRIMASK_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.primask[M_REG_NS]; + case 0x91: /* BASEPRI_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.basepri[M_REG_NS]; + case 0x93: /* FAULTMASK_NS */ + if (!env->v7m.secure) { + return 0; + } + return env->v7m.faultmask[M_REG_NS]; + case 0x98: /* SP_NS */ + { + /* This gives the non-secure SP selected based on whether we're + * currently in handler mode or not, using the NS CONTROL.SPSE= L. + */ + bool spsel =3D env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSE= L_MASK; + + if (!env->v7m.secure) { + return 0; + } + if (!arm_v7m_is_handler_mode(env) && spsel) { + return env->v7m.other_ss_psp; + } else { + return env->v7m.other_ss_msp; + } + } + default: + break; + } + } + + switch (reg) { + case 8: /* MSP */ + return v7m_using_psp(env) ? env->v7m.other_sp : env->regs[13]; + case 9: /* PSP */ + return v7m_using_psp(env) ? env->regs[13] : env->v7m.other_sp; + case 10: /* MSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + return env->v7m.msplim[env->v7m.secure]; + case 11: /* PSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + return env->v7m.psplim[env->v7m.secure]; + case 16: /* PRIMASK */ + return env->v7m.primask[env->v7m.secure]; + case 17: /* BASEPRI */ + case 18: /* BASEPRI_MAX */ + return env->v7m.basepri[env->v7m.secure]; + case 19: /* FAULTMASK */ + return env->v7m.faultmask[env->v7m.secure]; + default: + bad_reg: + qemu_log_mask(LOG_GUEST_ERROR, "Attempt to read unknown special" + " register %d\n", reg); + return 0; + } +} + +void HELPER(v7m_msr)(CPUARMState *env, uint32_t maskreg, uint32_t val) +{ + /* We're passed bits [11..0] of the instruction; extract + * SYSm and the mask bits. + * Invalid combinations of SYSm and mask are UNPREDICTABLE; + * we choose to treat them as if the mask bits were valid. + * NB that the pseudocode 'mask' variable is bits [11..10], + * whereas ours is [11..8]. 
+ */ + uint32_t mask =3D extract32(maskreg, 8, 4); + uint32_t reg =3D extract32(maskreg, 0, 8); + + if (arm_current_el(env) =3D=3D 0 && reg > 7) { + /* only xPSR sub-fields may be written by unprivileged */ + return; + } + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + switch (reg) { + case 0x88: /* MSP_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.other_ss_msp =3D val; + return; + case 0x89: /* PSP_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.other_ss_psp =3D val; + return; + case 0x8a: /* MSPLIM_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.msplim[M_REG_NS] =3D val & ~7; + return; + case 0x8b: /* PSPLIM_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.psplim[M_REG_NS] =3D val & ~7; + return; + case 0x90: /* PRIMASK_NS */ + if (!env->v7m.secure) { + return; + } + env->v7m.primask[M_REG_NS] =3D val & 1; + return; + case 0x91: /* BASEPRI_NS */ + if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN))= { + return; + } + env->v7m.basepri[M_REG_NS] =3D val & 0xff; + return; + case 0x93: /* FAULTMASK_NS */ + if (!env->v7m.secure || !arm_feature(env, ARM_FEATURE_M_MAIN))= { + return; + } + env->v7m.faultmask[M_REG_NS] =3D val & 1; + return; + case 0x94: /* CONTROL_NS */ + if (!env->v7m.secure) { + return; + } + write_v7m_control_spsel_for_secstate(env, + val & R_V7M_CONTROL_SPSEL= _MASK, + M_REG_NS); + if (arm_feature(env, ARM_FEATURE_M_MAIN)) { + env->v7m.control[M_REG_NS] &=3D ~R_V7M_CONTROL_NPRIV_MASK; + env->v7m.control[M_REG_NS] |=3D val & R_V7M_CONTROL_NPRIV_= MASK; + } + return; + case 0x98: /* SP_NS */ + { + /* This gives the non-secure SP selected based on whether we're + * currently in handler mode or not, using the NS CONTROL.SPSE= L. + */ + bool spsel =3D env->v7m.control[M_REG_NS] & R_V7M_CONTROL_SPSE= L_MASK; + bool is_psp =3D !arm_v7m_is_handler_mode(env) && spsel; + uint32_t limit; + + if (!env->v7m.secure) { + return; + } + + limit =3D is_psp ? env->v7m.psplim[false] : env->v7m.msplim[fa= lse]; + + if (val < limit) { + CPUState *cs =3D CPU(arm_env_get_cpu(env)); + + cpu_restore_state(cs, GETPC(), true); + raise_exception(env, EXCP_STKOF, 0, 1); + } + + if (is_psp) { + env->v7m.other_ss_psp =3D val; + } else { + env->v7m.other_ss_msp =3D val; + } + return; + } + default: + break; + } + } + + switch (reg) { + case 0 ... 
7: /* xPSR sub-fields */ + /* only APSR is actually writable */ + if (!(reg & 4)) { + uint32_t apsrmask =3D 0; + + if (mask & 8) { + apsrmask |=3D XPSR_NZCV | XPSR_Q; + } + if ((mask & 4) && arm_feature(env, ARM_FEATURE_THUMB_DSP)) { + apsrmask |=3D XPSR_GE; + } + xpsr_write(env, val, apsrmask); + } + break; + case 8: /* MSP */ + if (v7m_using_psp(env)) { + env->v7m.other_sp =3D val; + } else { + env->regs[13] =3D val; + } + break; + case 9: /* PSP */ + if (v7m_using_psp(env)) { + env->regs[13] =3D val; + } else { + env->v7m.other_sp =3D val; + } + break; + case 10: /* MSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + env->v7m.msplim[env->v7m.secure] =3D val & ~7; + break; + case 11: /* PSPLIM */ + if (!arm_feature(env, ARM_FEATURE_V8)) { + goto bad_reg; + } + env->v7m.psplim[env->v7m.secure] =3D val & ~7; + break; + case 16: /* PRIMASK */ + env->v7m.primask[env->v7m.secure] =3D val & 1; + break; + case 17: /* BASEPRI */ + if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { + goto bad_reg; + } + env->v7m.basepri[env->v7m.secure] =3D val & 0xff; + break; + case 18: /* BASEPRI_MAX */ + if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { + goto bad_reg; + } + val &=3D 0xff; + if (val !=3D 0 && (val < env->v7m.basepri[env->v7m.secure] + || env->v7m.basepri[env->v7m.secure] =3D=3D 0)) { + env->v7m.basepri[env->v7m.secure] =3D val; + } + break; + case 19: /* FAULTMASK */ + if (!arm_feature(env, ARM_FEATURE_M_MAIN)) { + goto bad_reg; + } + env->v7m.faultmask[env->v7m.secure] =3D val & 1; + break; + case 20: /* CONTROL */ + /* Writing to the SPSEL bit only has an effect if we are in + * thread mode; other bits can be updated by any privileged code. + * write_v7m_control_spsel() deals with updating the SPSEL bit in + * env->v7m.control, so we only need update the others. + * For v7M, we must just ignore explicit writes to SPSEL in handler + * mode; for v8M the write is permitted but will have no effect. + */ + if (arm_feature(env, ARM_FEATURE_V8) || + !arm_v7m_is_handler_mode(env)) { + write_v7m_control_spsel(env, (val & R_V7M_CONTROL_SPSEL_MASK) = !=3D 0); + } + if (arm_feature(env, ARM_FEATURE_M_MAIN)) { + env->v7m.control[env->v7m.secure] &=3D ~R_V7M_CONTROL_NPRIV_MA= SK; + env->v7m.control[env->v7m.secure] |=3D val & R_V7M_CONTROL_NPR= IV_MASK; + } + break; + default: + bad_reg: + qemu_log_mask(LOG_GUEST_ERROR, "Attempt to write unknown special" + " register %d\n", reg); + return; + } +} + +uint32_t HELPER(v7m_tt)(CPUARMState *env, uint32_t addr, uint32_t op) +{ + /* Implement the TT instruction. op is bits [7:6] of the insn. */ + bool forceunpriv =3D op & 1; + bool alt =3D op & 2; + V8M_SAttributes sattrs =3D {}; + uint32_t tt_resp; + bool r, rw, nsr, nsrw, mrvalid; + int prot; + ARMMMUFaultInfo fi =3D {}; + MemTxAttrs attrs =3D {}; + hwaddr phys_addr; + ARMMMUIdx mmu_idx; + uint32_t mregion; + bool targetpriv; + bool targetsec =3D env->v7m.secure; + bool is_subpage; + + /* Work out what the security state and privilege level we're + * interested in is... 
+     */
+    if (alt) {
+        targetsec = !targetsec;
+    }
+
+    if (forceunpriv) {
+        targetpriv = false;
+    } else {
+        targetpriv = arm_v7m_is_handler_mode(env) ||
+            !(env->v7m.control[targetsec] & R_V7M_CONTROL_NPRIV_MASK);
+    }
+
+    /* ...and then figure out which MMU index this is */
+    mmu_idx = arm_v7m_mmu_idx_for_secstate_and_priv(env, targetsec, targetpriv);
+
+    /* We know that the MPU and SAU don't care about the access type
+     * for our purposes beyond that we don't want to claim to be
+     * an insn fetch, so we arbitrarily call this a read.
+     */
+
+    /* MPU region info only available for privileged or if
+     * inspecting the other MPU state.
+     */
+    if (arm_current_el(env) != 0 || alt) {
+        /* We can ignore the return value as prot is always set */
+        pmsav8_mpu_lookup(env, addr, MMU_DATA_LOAD, mmu_idx,
+                          &phys_addr, &attrs, &prot, &is_subpage,
+                          &fi, &mregion);
+        if (mregion == -1) {
+            mrvalid = false;
+            mregion = 0;
+        } else {
+            mrvalid = true;
+        }
+        r = prot & PAGE_READ;
+        rw = prot & PAGE_WRITE;
+    } else {
+        r = false;
+        rw = false;
+        mrvalid = false;
+        mregion = 0;
+    }
+
+    if (env->v7m.secure) {
+        v8m_security_lookup(env, addr, MMU_DATA_LOAD, mmu_idx, &sattrs);
+        nsr = sattrs.ns && r;
+        nsrw = sattrs.ns && rw;
+    } else {
+        sattrs.ns = true;
+        nsr = false;
+        nsrw = false;
+    }
+
+    tt_resp = (sattrs.iregion << 24) |
+              (sattrs.irvalid << 23) |
+              ((!sattrs.ns) << 22) |
+              (nsrw << 21) |
+              (nsr << 20) |
+              (rw << 19) |
+              (r << 18) |
+              (sattrs.srvalid << 17) |
+              (mrvalid << 16) |
+              (sattrs.sregion << 8) |
+              mregion;
+
+    return tt_resp;
+}
+
+#endif /* CONFIG_USER_ONLY */
diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index 11c7baf8a3..76634b2437 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -6,7 +6,7 @@ obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) += kvm64.o
 obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
 obj-y += translate.o op_helper.o helper.o cpu.o
 obj-y += neon_helper.o iwmmxt_helper.o vec_helper.o
-obj-y += gdbstub.o
+obj-y += gdbstub.o m_helper.o
 obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
 obj-y += crypto_helper.o
 obj-$(CONFIG_SOFTMMU) += arm-powerctl.o
--
2.19.1
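
The TT helper above packs its result into a single response word. As a
reader's sketch (not part of this series; decode_tt_resp is a hypothetical
helper), the fields can be unpacked using the same bit positions the
tt_resp assembly in v7m_tt() uses:

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack a TT response word using the layout assembled in v7m_tt():
     * MREGION[7:0], SREGION[15:8], MRVALID[16], SRVALID[17], R[18],
     * RW[19], NSR[20], NSRW[21], S[22], IRVALID[23], IREGION[31:24].
     */
    static void decode_tt_resp(uint32_t tt_resp)
    {
        printf("MREGION=%u MRVALID=%u R=%u RW=%u S=%u\n",
               tt_resp & 0xff,        /* MPU region number */
               (tt_resp >> 16) & 1,   /* that region number is valid */
               (tt_resp >> 18) & 1,   /* readable in the queried state */
               (tt_resp >> 19) & 1,   /* read/writable in the queried state */
               (tt_resp >> 22) & 1);  /* address is Secure */
    }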
From nobody Thu Nov 6 14:16:22 2025
From: Samuel Ortiz <sameo@linux.intel.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org
Date: Tue, 13 Nov 2018 17:52:38 +0100
Message-Id: <20181113165247.4806-5-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 04/13] target: arm: Move all interrupt and exception handlers into their own file

Most of these handlers are TCG-dependent, so we want to be able to
leave them out of the build in order to support disabling TCG for ARM.

Signed-off-by: Samuel Ortiz
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
---
 target/arm/excp_helper.c | 550 +++++++++++++++++++++++++++++++++++++++
 target/arm/helper.c      | 531 -------------------------------------
 target/arm/Makefile.objs |   2 +-
 3 files changed, 551 insertions(+), 532 deletions(-)
 create mode 100644 target/arm/excp_helper.c
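
Since entry points such as arm_cpu_do_interrupt() are referenced from
generic CPU code, a future TCG-less build would still need the symbols to
link. The following is only an illustrative sketch of where this split is
headed, assuming a hypothetical excp_stub.c compiled when TCG is disabled;
it is not part of this patch:

    #include "qemu/osdep.h"
    #include "cpu.h"

    /* Hypothetical !TCG stub: the handlers moved into excp_helper.c can
     * only be reached while guest code is being emulated, so a TCG-less
     * binary merely needs a linkable, never-reached definition.
     */
    void arm_cpu_do_interrupt(CPUState *cs)
    {
        g_assert_not_reached();
    }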
diff --git a/target/arm/excp_helper.c b/target/arm/excp_helper.c
new file mode 100644
index 0000000000..38fe9703de
--- /dev/null
+++ b/target/arm/excp_helper.c
@@ -0,0 +1,550 @@
+/*
+ * Exception and interrupt helpers.
+ *
+ * This code is licensed under the GNU GPL v2 and later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+#include "qemu/osdep.h"
+#include "trace.h"
+#include "cpu.h"
+#include "internals.h"
+#include "sysemu/sysemu.h"
+#include "exec/exec-all.h"
+#include "exec/cpu_ldst.h"
+#include "arm_ldst.h"
+#include "exec/semihost.h"
+#include "sysemu/kvm.h"
+
+static void take_aarch32_exception(CPUARMState *env, int new_mode,
+                                   uint32_t mask, uint32_t offset,
+                                   uint32_t newpc)
+{
+    /* Change the CPU state so as to actually take the exception. */
+    switch_mode(env, new_mode);
+    /*
+     * For exceptions taken to AArch32 we must clear the SS bit in both
+     * PSTATE and in the old-state value we save to SPSR_<mode>, so zero it now.
+     */
+    env->uncached_cpsr &= ~PSTATE_SS;
+    env->spsr = cpsr_read(env);
+    /* Clear IT bits.  */
+    env->condexec_bits = 0;
+    /* Switch to the new mode, and to the correct instruction set.  */
+    env->uncached_cpsr = (env->uncached_cpsr & ~CPSR_M) | new_mode;
+    /* Set new mode endianness */
+    env->uncached_cpsr &= ~CPSR_E;
+    if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) {
+        env->uncached_cpsr |= CPSR_E;
+    }
+    /* J and IL must always be cleared for exception entry */
+    env->uncached_cpsr &= ~(CPSR_IL | CPSR_J);
+    env->daif |= mask;
+
+    if (new_mode == ARM_CPU_MODE_HYP) {
+        env->thumb = (env->cp15.sctlr_el[2] & SCTLR_TE) != 0;
+        env->elr_el[2] = env->regs[15];
+    } else {
+        /*
+         * this is a lie, as there was no c1_sys on V4T/V5, but who cares
+         * and we should just guard the thumb mode on V4
+         */
+        if (arm_feature(env, ARM_FEATURE_V4T)) {
+            env->thumb =
+                (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) != 0;
+        }
+        env->regs[14] = env->regs[15] + offset;
+    }
+    env->regs[15] = newpc;
+}
+
+static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs)
+{
+    /*
+     * Handle exception entry to Hyp mode; this is sufficiently
+     * different to entry to other AArch32 modes that we handle it
+     * separately here.
+     *
+     * The vector table entry used is always the 0x14 Hyp mode entry point,
+     * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp.
+     * The offset applied to the preferred return address is always zero
+     * (see DDI0487C.a section G1.12.3).
+     * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values.
+     */
+    uint32_t addr, mask;
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+
+    switch (cs->exception_index) {
+    case EXCP_UDEF:
+        addr = 0x04;
+        break;
+    case EXCP_SWI:
+        addr = 0x14;
+        break;
+    case EXCP_BKPT:
+        /* Fall through to prefetch abort.  */
+    case EXCP_PREFETCH_ABORT:
+        env->cp15.ifar_s = env->exception.vaddress;
+        qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n",
+                      (uint32_t)env->exception.vaddress);
+        addr = 0x0c;
+        break;
+    case EXCP_DATA_ABORT:
+        env->cp15.dfar_s = env->exception.vaddress;
+        qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n",
+                      (uint32_t)env->exception.vaddress);
+        addr = 0x10;
+        break;
+    case EXCP_IRQ:
+        addr = 0x18;
+        break;
+    case EXCP_FIQ:
+        addr = 0x1c;
+        break;
+    case EXCP_HVC:
+        addr = 0x08;
+        break;
+    case EXCP_HYP_TRAP:
+        addr = 0x14;
+        break;
+    default:
+        cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index);
+    }
+
+    if (cs->exception_index != EXCP_IRQ && cs->exception_index != EXCP_FIQ) {
+        if (!arm_feature(env, ARM_FEATURE_V8)) {
+            /*
+             * QEMU syndrome values are v8-style. v7 has the IL bit
+             * UNK/SBZP for "field not valid" cases, where v8 uses RES1.
+             * If this is a v7 CPU, squash the IL bit in those cases.
+ */ + if (cs->exception_index =3D=3D EXCP_PREFETCH_ABORT || + (cs->exception_index =3D=3D EXCP_DATA_ABORT && + !(env->exception.syndrome & ARM_EL_ISV)) || + syn_get_ec(env->exception.syndrome) =3D=3D EC_UNCATEGORIZE= D) { + env->exception.syndrome &=3D ~ARM_EL_IL; + } + } + env->cp15.esr_el[2] =3D env->exception.syndrome; + } + + if (arm_current_el(env) !=3D 2 && addr < 0x14) { + addr =3D 0x14; + } + + mask =3D 0; + if (!(env->cp15.scr_el3 & SCR_EA)) { + mask |=3D CPSR_A; + } + if (!(env->cp15.scr_el3 & SCR_IRQ)) { + mask |=3D CPSR_I; + } + if (!(env->cp15.scr_el3 & SCR_FIQ)) { + mask |=3D CPSR_F; + } + + addr +=3D env->cp15.hvbar; + + take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr); +} + +static void arm_cpu_do_interrupt_aarch32(CPUState *cs) +{ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + uint32_t addr; + uint32_t mask; + int new_mode; + uint32_t offset; + uint32_t moe; + + /* If this is a debug exception we must update the DBGDSCR.MOE bits */ + switch (syn_get_ec(env->exception.syndrome)) { + case EC_BREAKPOINT: + case EC_BREAKPOINT_SAME_EL: + moe =3D 1; + break; + case EC_WATCHPOINT: + case EC_WATCHPOINT_SAME_EL: + moe =3D 10; + break; + case EC_AA32_BKPT: + moe =3D 3; + break; + case EC_VECTORCATCH: + moe =3D 5; + break; + default: + moe =3D 0; + break; + } + + if (moe) { + env->cp15.mdscr_el1 =3D deposit64(env->cp15.mdscr_el1, 2, 4, moe); + } + + if (env->exception.target_el =3D=3D 2) { + arm_cpu_do_interrupt_aarch32_hyp(cs); + return; + } + + /* TODO: Vectored interrupt controller. */ + switch (cs->exception_index) { + case EXCP_UDEF: + new_mode =3D ARM_CPU_MODE_UND; + addr =3D 0x04; + mask =3D CPSR_I; + if (env->thumb) { + offset =3D 2; + } else { + offset =3D 4; + } + break; + case EXCP_SWI: + new_mode =3D ARM_CPU_MODE_SVC; + addr =3D 0x08; + mask =3D CPSR_I; + /* The PC already points to the next instruction. */ + offset =3D 0; + break; + case EXCP_BKPT: + /* Fall through to prefetch abort. */ + case EXCP_PREFETCH_ABORT: + A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr); + A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress); + qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n", + env->exception.fsr, (uint32_t)env->exception.vaddres= s); + new_mode =3D ARM_CPU_MODE_ABT; + addr =3D 0x0c; + mask =3D CPSR_A | CPSR_I; + offset =3D 4; + break; + case EXCP_DATA_ABORT: + A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr); + A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress); + qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n", + env->exception.fsr, + (uint32_t)env->exception.vaddress); + new_mode =3D ARM_CPU_MODE_ABT; + addr =3D 0x10; + mask =3D CPSR_A | CPSR_I; + offset =3D 8; + break; + case EXCP_IRQ: + new_mode =3D ARM_CPU_MODE_IRQ; + addr =3D 0x18; + /* Disable IRQ and imprecise data aborts. */ + mask =3D CPSR_A | CPSR_I; + offset =3D 4; + if (env->cp15.scr_el3 & SCR_IRQ) { + /* IRQ routed to monitor mode */ + new_mode =3D ARM_CPU_MODE_MON; + mask |=3D CPSR_F; + } + break; + case EXCP_FIQ: + new_mode =3D ARM_CPU_MODE_FIQ; + addr =3D 0x1c; + /* Disable FIQ, IRQ and imprecise data aborts. */ + mask =3D CPSR_A | CPSR_I | CPSR_F; + if (env->cp15.scr_el3 & SCR_FIQ) { + /* FIQ routed to monitor mode */ + new_mode =3D ARM_CPU_MODE_MON; + } + offset =3D 4; + break; + case EXCP_VIRQ: + new_mode =3D ARM_CPU_MODE_IRQ; + addr =3D 0x18; + /* Disable IRQ and imprecise data aborts. 
*/ + mask =3D CPSR_A | CPSR_I; + offset =3D 4; + break; + case EXCP_VFIQ: + new_mode =3D ARM_CPU_MODE_FIQ; + addr =3D 0x1c; + /* Disable FIQ, IRQ and imprecise data aborts. */ + mask =3D CPSR_A | CPSR_I | CPSR_F; + offset =3D 4; + break; + case EXCP_SMC: + new_mode =3D ARM_CPU_MODE_MON; + addr =3D 0x08; + mask =3D CPSR_A | CPSR_I | CPSR_F; + offset =3D 0; + break; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + return; /* Never happens. Keep compiler happy. */ + } + + if (new_mode =3D=3D ARM_CPU_MODE_MON) { + addr +=3D env->cp15.mvbar; + } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) { + /* High vectors. When enabled, base address cannot be remapped. */ + addr +=3D 0xffff0000; + } else { + /* ARM v7 architectures provide a vector base address register to = remap + * the interrupt vector table. + * This register is only followed in non-monitor mode, and is bank= ed. + * Note: only bits 31:5 are valid. + */ + addr +=3D A32_BANKED_CURRENT_REG_GET(env, vbar); + } + + if ((env->uncached_cpsr & CPSR_M) =3D=3D ARM_CPU_MODE_MON) { + env->cp15.scr_el3 &=3D ~SCR_NS; + } + + take_aarch32_exception(env, new_mode, mask, offset, addr); +} + +/* Handle exception entry to a target EL which is using AArch64 */ +static void arm_cpu_do_interrupt_aarch64(CPUState *cs) +{ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + unsigned int new_el =3D env->exception.target_el; + target_ulong addr =3D env->cp15.vbar_el[new_el]; + unsigned int new_mode =3D aarch64_pstate_mode(new_el, true); + unsigned int cur_el =3D arm_current_el(env); + + /* + * Note that new_el can never be 0. If cur_el is 0, then + * el0_a64 is is_a64(), else el0_a64 is ignored. + */ + aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); + + if (cur_el < new_el) { + /* Entry vector offset depends on whether the implemented EL + * immediately lower than the target level is using AArch32 or AAr= ch64 + */ + bool is_aa64; + + switch (new_el) { + case 3: + is_aa64 =3D (env->cp15.scr_el3 & SCR_RW) !=3D 0; + break; + case 2: + is_aa64 =3D (env->cp15.hcr_el2 & HCR_RW) !=3D 0; + break; + case 1: + is_aa64 =3D is_a64(env); + break; + default: + g_assert_not_reached(); + } + + if (is_aa64) { + addr +=3D 0x400; + } else { + addr +=3D 0x600; + } + } else if (pstate_read(env) & PSTATE_SP) { + addr +=3D 0x200; + } + + switch (cs->exception_index) { + case EXCP_PREFETCH_ABORT: + case EXCP_DATA_ABORT: + env->cp15.far_el[new_el] =3D env->exception.vaddress; + qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n", + env->cp15.far_el[new_el]); + /* fall through */ + case EXCP_BKPT: + case EXCP_UDEF: + case EXCP_SWI: + case EXCP_HVC: + case EXCP_HYP_TRAP: + case EXCP_SMC: + if (syn_get_ec(env->exception.syndrome) =3D=3D EC_ADVSIMDFPACCESST= RAP) { + /* + * QEMU internal FP/SIMD syndromes from AArch32 include the + * TA and coproc fields which are only exposed if the exception + * is taken to AArch32 Hyp mode. Mask them out to get a valid + * AArch64 format syndrome. 
+ */ + env->exception.syndrome &=3D ~MAKE_64BIT_MASK(0, 20); + } + env->cp15.esr_el[new_el] =3D env->exception.syndrome; + break; + case EXCP_IRQ: + case EXCP_VIRQ: + addr +=3D 0x80; + break; + case EXCP_FIQ: + case EXCP_VFIQ: + addr +=3D 0x100; + break; + case EXCP_SEMIHOST: + qemu_log_mask(CPU_LOG_INT, + "...handling as semihosting call 0x%" PRIx64 "\n", + env->xregs[0]); + env->xregs[0] =3D do_arm_semihosting(env); + return; + default: + cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); + } + + if (is_a64(env)) { + env->banked_spsr[aarch64_banked_spsr_index(new_el)] =3D pstate_rea= d(env); + aarch64_save_sp(env, arm_current_el(env)); + env->elr_el[new_el] =3D env->pc; + } else { + env->banked_spsr[aarch64_banked_spsr_index(new_el)] =3D cpsr_read(= env); + env->elr_el[new_el] =3D env->regs[15]; + + aarch64_sync_32_to_64(env); + + env->condexec_bits =3D 0; + } + qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n", + env->elr_el[new_el]); + + pstate_write(env, PSTATE_DAIF | new_mode); + env->aarch64 =3D 1; + aarch64_restore_sp(env, new_el); + + env->pc =3D addr; + + qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n", + new_el, env->pc, pstate_read(env)); +} + +static inline bool check_for_semihosting(CPUState *cs) +{ + /* Check whether this exception is a semihosting call; if so + * then handle it and return true; otherwise return false. + */ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + + if (is_a64(env)) { + if (cs->exception_index =3D=3D EXCP_SEMIHOST) { + /* This is always the 64-bit semihosting exception. + * The "is this usermode" and "is semihosting enabled" + * checks have been done at translate time. + */ + qemu_log_mask(CPU_LOG_INT, + "...handling as semihosting call 0x%" PRIx64 "\n= ", + env->xregs[0]); + env->xregs[0] =3D do_arm_semihosting(env); + return true; + } + return false; + } else { + uint32_t imm; + + /* Only intercept calls from privileged modes, to provide some + * semblance of security. + */ + if (cs->exception_index !=3D EXCP_SEMIHOST && + (!semihosting_enabled() || + ((env->uncached_cpsr & CPSR_M) =3D=3D ARM_CPU_MODE_USR))) { + return false; + } + + switch (cs->exception_index) { + case EXCP_SEMIHOST: + /* This is always a semihosting call; the "is this usermode" + * and "is semihosting enabled" checks have been done at + * translate time. + */ + break; + case EXCP_SWI: + /* Check for semihosting interrupt. */ + if (env->thumb) { + imm =3D arm_lduw_code(env, env->regs[15] - 2, arm_sctlr_b(= env)) + & 0xff; + if (imm =3D=3D 0xab) { + break; + } + } else { + imm =3D arm_ldl_code(env, env->regs[15] - 4, arm_sctlr_b(e= nv)) + & 0xffffff; + if (imm =3D=3D 0x123456) { + break; + } + } + return false; + case EXCP_BKPT: + /* See if this is a semihosting syscall. */ + if (env->thumb) { + imm =3D arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) + & 0xff; + if (imm =3D=3D 0xab) { + env->regs[15] +=3D 2; + break; + } + } + return false; + default: + return false; + } + + qemu_log_mask(CPU_LOG_INT, + "...handling as semihosting call 0x%x\n", + env->regs[0]); + env->regs[0] =3D do_arm_semihosting(env); + return true; + } +} + +/* Handle a CPU exception for A and R profile CPUs. + * Do any appropriate logging, handle PSCI calls, and then hand off + * to the AArch64-entry or AArch32-entry function depending on the + * target exception level's register width. 
+ */ +void arm_cpu_do_interrupt(CPUState *cs) +{ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + unsigned int new_el =3D env->exception.target_el; + + assert(!arm_feature(env, ARM_FEATURE_M)); + + arm_log_exception(cs->exception_index); + qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(en= v), + new_el); + if (qemu_loglevel_mask(CPU_LOG_INT) + && !excp_is_internal(cs->exception_index)) { + qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n", + syn_get_ec(env->exception.syndrome), + env->exception.syndrome); + } + + if (arm_is_psci_call(cpu, cs->exception_index)) { + arm_handle_psci_call(cpu); + qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); + return; + } + + /* Semihosting semantics depend on the register width of the + * code that caused the exception, not the target exception level, + * so must be handled here. + */ + if (check_for_semihosting(cs)) { + return; + } + + /* Hooks may change global state so BQL should be held, also the + * BQL needs to be held for any modification of + * cs->interrupt_request. + */ + g_assert(qemu_mutex_iothread_locked()); + + arm_call_pre_el_change_hook(cpu); + + assert(!excp_is_internal(cs->exception_index)); + if (arm_el_is_aa64(env, new_el)) { + arm_cpu_do_interrupt_aarch64(cs); + } else { + arm_cpu_do_interrupt_aarch32(cs); + } + + arm_call_el_change_hook(cpu); + + if (!kvm_enabled()) { + cs->interrupt_request |=3D CPU_INTERRUPT_EXITTB; + } +} diff --git a/target/arm/helper.c b/target/arm/helper.c index 8aa5a9e41d..7b30a4cb49 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -6700,537 +6700,6 @@ void aarch64_sync_64_to_32(CPUARMState *env) env->regs[15] =3D env->pc; } =20 -static void take_aarch32_exception(CPUARMState *env, int new_mode, - uint32_t mask, uint32_t offset, - uint32_t newpc) -{ - /* Change the CPU state so as to actually take the exception. */ - switch_mode(env, new_mode); - /* - * For exceptions taken to AArch32 we must clear the SS bit in both - * PSTATE and in the old-state value we save to SPSR_, so zero i= t now. - */ - env->uncached_cpsr &=3D ~PSTATE_SS; - env->spsr =3D cpsr_read(env); - /* Clear IT bits. */ - env->condexec_bits =3D 0; - /* Switch to the new mode, and to the correct instruction set. */ - env->uncached_cpsr =3D (env->uncached_cpsr & ~CPSR_M) | new_mode; - /* Set new mode endianness */ - env->uncached_cpsr &=3D ~CPSR_E; - if (env->cp15.sctlr_el[arm_current_el(env)] & SCTLR_EE) { - env->uncached_cpsr |=3D CPSR_E; - } - /* J and IL must always be cleared for exception entry */ - env->uncached_cpsr &=3D ~(CPSR_IL | CPSR_J); - env->daif |=3D mask; - - if (new_mode =3D=3D ARM_CPU_MODE_HYP) { - env->thumb =3D (env->cp15.sctlr_el[2] & SCTLR_TE) !=3D 0; - env->elr_el[2] =3D env->regs[15]; - } else { - /* - * this is a lie, as there was no c1_sys on V4T/V5, but who cares - * and we should just guard the thumb mode on V4 - */ - if (arm_feature(env, ARM_FEATURE_V4T)) { - env->thumb =3D - (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_TE) !=3D 0; - } - env->regs[14] =3D env->regs[15] + offset; - } - env->regs[15] =3D newpc; -} - -static void arm_cpu_do_interrupt_aarch32_hyp(CPUState *cs) -{ - /* - * Handle exception entry to Hyp mode; this is sufficiently - * different to entry to other AArch32 modes that we handle it - * separately here. - * - * The vector table entry used is always the 0x14 Hyp mode entry point, - * unless this is an UNDEF/HVC/abort taken from Hyp to Hyp. 
- * The offset applied to the preferred return address is always zero - * (see DDI0487C.a section G1.12.3). - * PSTATE A/I/F masks are set based only on the SCR.EA/IRQ/FIQ values. - */ - uint32_t addr, mask; - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - - switch (cs->exception_index) { - case EXCP_UDEF: - addr =3D 0x04; - break; - case EXCP_SWI: - addr =3D 0x14; - break; - case EXCP_BKPT: - /* Fall through to prefetch abort. */ - case EXCP_PREFETCH_ABORT: - env->cp15.ifar_s =3D env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with HIFAR 0x%x\n", - (uint32_t)env->exception.vaddress); - addr =3D 0x0c; - break; - case EXCP_DATA_ABORT: - env->cp15.dfar_s =3D env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with HDFAR 0x%x\n", - (uint32_t)env->exception.vaddress); - addr =3D 0x10; - break; - case EXCP_IRQ: - addr =3D 0x18; - break; - case EXCP_FIQ: - addr =3D 0x1c; - break; - case EXCP_HVC: - addr =3D 0x08; - break; - case EXCP_HYP_TRAP: - addr =3D 0x14; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - } - - if (cs->exception_index !=3D EXCP_IRQ && cs->exception_index !=3D EXCP= _FIQ) { - if (!arm_feature(env, ARM_FEATURE_V8)) { - /* - * QEMU syndrome values are v8-style. v7 has the IL bit - * UNK/SBZP for "field not valid" cases, where v8 uses RES1. - * If this is a v7 CPU, squash the IL bit in those cases. - */ - if (cs->exception_index =3D=3D EXCP_PREFETCH_ABORT || - (cs->exception_index =3D=3D EXCP_DATA_ABORT && - !(env->exception.syndrome & ARM_EL_ISV)) || - syn_get_ec(env->exception.syndrome) =3D=3D EC_UNCATEGORIZE= D) { - env->exception.syndrome &=3D ~ARM_EL_IL; - } - } - env->cp15.esr_el[2] =3D env->exception.syndrome; - } - - if (arm_current_el(env) !=3D 2 && addr < 0x14) { - addr =3D 0x14; - } - - mask =3D 0; - if (!(env->cp15.scr_el3 & SCR_EA)) { - mask |=3D CPSR_A; - } - if (!(env->cp15.scr_el3 & SCR_IRQ)) { - mask |=3D CPSR_I; - } - if (!(env->cp15.scr_el3 & SCR_FIQ)) { - mask |=3D CPSR_F; - } - - addr +=3D env->cp15.hvbar; - - take_aarch32_exception(env, ARM_CPU_MODE_HYP, mask, 0, addr); -} - -static void arm_cpu_do_interrupt_aarch32(CPUState *cs) -{ - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - uint32_t addr; - uint32_t mask; - int new_mode; - uint32_t offset; - uint32_t moe; - - /* If this is a debug exception we must update the DBGDSCR.MOE bits */ - switch (syn_get_ec(env->exception.syndrome)) { - case EC_BREAKPOINT: - case EC_BREAKPOINT_SAME_EL: - moe =3D 1; - break; - case EC_WATCHPOINT: - case EC_WATCHPOINT_SAME_EL: - moe =3D 10; - break; - case EC_AA32_BKPT: - moe =3D 3; - break; - case EC_VECTORCATCH: - moe =3D 5; - break; - default: - moe =3D 0; - break; - } - - if (moe) { - env->cp15.mdscr_el1 =3D deposit64(env->cp15.mdscr_el1, 2, 4, moe); - } - - if (env->exception.target_el =3D=3D 2) { - arm_cpu_do_interrupt_aarch32_hyp(cs); - return; - } - - switch (cs->exception_index) { - case EXCP_UDEF: - new_mode =3D ARM_CPU_MODE_UND; - addr =3D 0x04; - mask =3D CPSR_I; - if (env->thumb) - offset =3D 2; - else - offset =3D 4; - break; - case EXCP_SWI: - new_mode =3D ARM_CPU_MODE_SVC; - addr =3D 0x08; - mask =3D CPSR_I; - /* The PC already points to the next instruction. */ - offset =3D 0; - break; - case EXCP_BKPT: - /* Fall through to prefetch abort. 
*/ - case EXCP_PREFETCH_ABORT: - A32_BANKED_CURRENT_REG_SET(env, ifsr, env->exception.fsr); - A32_BANKED_CURRENT_REG_SET(env, ifar, env->exception.vaddress); - qemu_log_mask(CPU_LOG_INT, "...with IFSR 0x%x IFAR 0x%x\n", - env->exception.fsr, (uint32_t)env->exception.vaddres= s); - new_mode =3D ARM_CPU_MODE_ABT; - addr =3D 0x0c; - mask =3D CPSR_A | CPSR_I; - offset =3D 4; - break; - case EXCP_DATA_ABORT: - A32_BANKED_CURRENT_REG_SET(env, dfsr, env->exception.fsr); - A32_BANKED_CURRENT_REG_SET(env, dfar, env->exception.vaddress); - qemu_log_mask(CPU_LOG_INT, "...with DFSR 0x%x DFAR 0x%x\n", - env->exception.fsr, - (uint32_t)env->exception.vaddress); - new_mode =3D ARM_CPU_MODE_ABT; - addr =3D 0x10; - mask =3D CPSR_A | CPSR_I; - offset =3D 8; - break; - case EXCP_IRQ: - new_mode =3D ARM_CPU_MODE_IRQ; - addr =3D 0x18; - /* Disable IRQ and imprecise data aborts. */ - mask =3D CPSR_A | CPSR_I; - offset =3D 4; - if (env->cp15.scr_el3 & SCR_IRQ) { - /* IRQ routed to monitor mode */ - new_mode =3D ARM_CPU_MODE_MON; - mask |=3D CPSR_F; - } - break; - case EXCP_FIQ: - new_mode =3D ARM_CPU_MODE_FIQ; - addr =3D 0x1c; - /* Disable FIQ, IRQ and imprecise data aborts. */ - mask =3D CPSR_A | CPSR_I | CPSR_F; - if (env->cp15.scr_el3 & SCR_FIQ) { - /* FIQ routed to monitor mode */ - new_mode =3D ARM_CPU_MODE_MON; - } - offset =3D 4; - break; - case EXCP_VIRQ: - new_mode =3D ARM_CPU_MODE_IRQ; - addr =3D 0x18; - /* Disable IRQ and imprecise data aborts. */ - mask =3D CPSR_A | CPSR_I; - offset =3D 4; - break; - case EXCP_VFIQ: - new_mode =3D ARM_CPU_MODE_FIQ; - addr =3D 0x1c; - /* Disable FIQ, IRQ and imprecise data aborts. */ - mask =3D CPSR_A | CPSR_I | CPSR_F; - offset =3D 4; - break; - case EXCP_SMC: - new_mode =3D ARM_CPU_MODE_MON; - addr =3D 0x08; - mask =3D CPSR_A | CPSR_I | CPSR_F; - offset =3D 0; - break; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - return; /* Never happens. Keep compiler happy. */ - } - - if (new_mode =3D=3D ARM_CPU_MODE_MON) { - addr +=3D env->cp15.mvbar; - } else if (A32_BANKED_CURRENT_REG_GET(env, sctlr) & SCTLR_V) { - /* High vectors. When enabled, base address cannot be remapped. */ - addr +=3D 0xffff0000; - } else { - /* ARM v7 architectures provide a vector base address register to = remap - * the interrupt vector table. - * This register is only followed in non-monitor mode, and is bank= ed. - * Note: only bits 31:5 are valid. - */ - addr +=3D A32_BANKED_CURRENT_REG_GET(env, vbar); - } - - if ((env->uncached_cpsr & CPSR_M) =3D=3D ARM_CPU_MODE_MON) { - env->cp15.scr_el3 &=3D ~SCR_NS; - } - - take_aarch32_exception(env, new_mode, mask, offset, addr); -} - -/* Handle exception entry to a target EL which is using AArch64 */ -static void arm_cpu_do_interrupt_aarch64(CPUState *cs) -{ - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - unsigned int new_el =3D env->exception.target_el; - target_ulong addr =3D env->cp15.vbar_el[new_el]; - unsigned int new_mode =3D aarch64_pstate_mode(new_el, true); - unsigned int cur_el =3D arm_current_el(env); - - /* - * Note that new_el can never be 0. If cur_el is 0, then - * el0_a64 is is_a64(), else el0_a64 is ignored. 
- */ - aarch64_sve_change_el(env, cur_el, new_el, is_a64(env)); - - if (cur_el < new_el) { - /* Entry vector offset depends on whether the implemented EL - * immediately lower than the target level is using AArch32 or AAr= ch64 - */ - bool is_aa64; - - switch (new_el) { - case 3: - is_aa64 =3D (env->cp15.scr_el3 & SCR_RW) !=3D 0; - break; - case 2: - is_aa64 =3D (env->cp15.hcr_el2 & HCR_RW) !=3D 0; - break; - case 1: - is_aa64 =3D is_a64(env); - break; - default: - g_assert_not_reached(); - } - - if (is_aa64) { - addr +=3D 0x400; - } else { - addr +=3D 0x600; - } - } else if (pstate_read(env) & PSTATE_SP) { - addr +=3D 0x200; - } - - switch (cs->exception_index) { - case EXCP_PREFETCH_ABORT: - case EXCP_DATA_ABORT: - env->cp15.far_el[new_el] =3D env->exception.vaddress; - qemu_log_mask(CPU_LOG_INT, "...with FAR 0x%" PRIx64 "\n", - env->cp15.far_el[new_el]); - /* fall through */ - case EXCP_BKPT: - case EXCP_UDEF: - case EXCP_SWI: - case EXCP_HVC: - case EXCP_HYP_TRAP: - case EXCP_SMC: - if (syn_get_ec(env->exception.syndrome) =3D=3D EC_ADVSIMDFPACCESST= RAP) { - /* - * QEMU internal FP/SIMD syndromes from AArch32 include the - * TA and coproc fields which are only exposed if the exception - * is taken to AArch32 Hyp mode. Mask them out to get a valid - * AArch64 format syndrome. - */ - env->exception.syndrome &=3D ~MAKE_64BIT_MASK(0, 20); - } - env->cp15.esr_el[new_el] =3D env->exception.syndrome; - break; - case EXCP_IRQ: - case EXCP_VIRQ: - addr +=3D 0x80; - break; - case EXCP_FIQ: - case EXCP_VFIQ: - addr +=3D 0x100; - break; - case EXCP_SEMIHOST: - qemu_log_mask(CPU_LOG_INT, - "...handling as semihosting call 0x%" PRIx64 "\n", - env->xregs[0]); - env->xregs[0] =3D do_arm_semihosting(env); - return; - default: - cpu_abort(cs, "Unhandled exception 0x%x\n", cs->exception_index); - } - - if (is_a64(env)) { - env->banked_spsr[aarch64_banked_spsr_index(new_el)] =3D pstate_rea= d(env); - aarch64_save_sp(env, arm_current_el(env)); - env->elr_el[new_el] =3D env->pc; - } else { - env->banked_spsr[aarch64_banked_spsr_index(new_el)] =3D cpsr_read(= env); - env->elr_el[new_el] =3D env->regs[15]; - - aarch64_sync_32_to_64(env); - - env->condexec_bits =3D 0; - } - qemu_log_mask(CPU_LOG_INT, "...with ELR 0x%" PRIx64 "\n", - env->elr_el[new_el]); - - pstate_write(env, PSTATE_DAIF | new_mode); - env->aarch64 =3D 1; - aarch64_restore_sp(env, new_el); - - env->pc =3D addr; - - qemu_log_mask(CPU_LOG_INT, "...to EL%d PC 0x%" PRIx64 " PSTATE 0x%x\n", - new_el, env->pc, pstate_read(env)); -} - -static inline bool check_for_semihosting(CPUState *cs) -{ - /* Check whether this exception is a semihosting call; if so - * then handle it and return true; otherwise return false. - */ - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - - if (is_a64(env)) { - if (cs->exception_index =3D=3D EXCP_SEMIHOST) { - /* This is always the 64-bit semihosting exception. - * The "is this usermode" and "is semihosting enabled" - * checks have been done at translate time. - */ - qemu_log_mask(CPU_LOG_INT, - "...handling as semihosting call 0x%" PRIx64 "\n= ", - env->xregs[0]); - env->xregs[0] =3D do_arm_semihosting(env); - return true; - } - return false; - } else { - uint32_t imm; - - /* Only intercept calls from privileged modes, to provide some - * semblance of security. 
- */ - if (cs->exception_index !=3D EXCP_SEMIHOST && - (!semihosting_enabled() || - ((env->uncached_cpsr & CPSR_M) =3D=3D ARM_CPU_MODE_USR))) { - return false; - } - - switch (cs->exception_index) { - case EXCP_SEMIHOST: - /* This is always a semihosting call; the "is this usermode" - * and "is semihosting enabled" checks have been done at - * translate time. - */ - break; - case EXCP_SWI: - /* Check for semihosting interrupt. */ - if (env->thumb) { - imm =3D arm_lduw_code(env, env->regs[15] - 2, arm_sctlr_b(= env)) - & 0xff; - if (imm =3D=3D 0xab) { - break; - } - } else { - imm =3D arm_ldl_code(env, env->regs[15] - 4, arm_sctlr_b(e= nv)) - & 0xffffff; - if (imm =3D=3D 0x123456) { - break; - } - } - return false; - case EXCP_BKPT: - /* See if this is a semihosting syscall. */ - if (env->thumb) { - imm =3D arm_lduw_code(env, env->regs[15], arm_sctlr_b(env)) - & 0xff; - if (imm =3D=3D 0xab) { - env->regs[15] +=3D 2; - break; - } - } - return false; - default: - return false; - } - - qemu_log_mask(CPU_LOG_INT, - "...handling as semihosting call 0x%x\n", - env->regs[0]); - env->regs[0] =3D do_arm_semihosting(env); - return true; - } -} - -/* Handle a CPU exception for A and R profile CPUs. - * Do any appropriate logging, handle PSCI calls, and then hand off - * to the AArch64-entry or AArch32-entry function depending on the - * target exception level's register width. - */ -void arm_cpu_do_interrupt(CPUState *cs) -{ - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - unsigned int new_el =3D env->exception.target_el; - - assert(!arm_feature(env, ARM_FEATURE_M)); - - arm_log_exception(cs->exception_index); - qemu_log_mask(CPU_LOG_INT, "...from EL%d to EL%d\n", arm_current_el(en= v), - new_el); - if (qemu_loglevel_mask(CPU_LOG_INT) - && !excp_is_internal(cs->exception_index)) { - qemu_log_mask(CPU_LOG_INT, "...with ESR 0x%x/0x%" PRIx32 "\n", - syn_get_ec(env->exception.syndrome), - env->exception.syndrome); - } - - if (arm_is_psci_call(cpu, cs->exception_index)) { - arm_handle_psci_call(cpu); - qemu_log_mask(CPU_LOG_INT, "...handled as PSCI call\n"); - return; - } - - /* Semihosting semantics depend on the register width of the - * code that caused the exception, not the target exception level, - * so must be handled here. - */ - if (check_for_semihosting(cs)) { - return; - } - - /* Hooks may change global state so BQL should be held, also the - * BQL needs to be held for any modification of - * cs->interrupt_request. 
- */ - g_assert(qemu_mutex_iothread_locked()); - - arm_call_pre_el_change_hook(cpu); - - assert(!excp_is_internal(cs->exception_index)); - if (arm_el_is_aa64(env, new_el)) { - arm_cpu_do_interrupt_aarch64(cs); - } else { - arm_cpu_do_interrupt_aarch32(cs); - } - - arm_call_el_change_hook(cpu); - - if (!kvm_enabled()) { - cs->interrupt_request |=3D CPU_INTERRUPT_EXITTB; - } -} - /* Return the exception level which controls this address translation regi= me */ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx) { diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs index 76634b2437..85e35ee668 100644 --- a/target/arm/Makefile.objs +++ b/target/arm/Makefile.objs @@ -6,7 +6,7 @@ obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) +=3D kvm64= .o obj-$(call lnot,$(CONFIG_KVM)) +=3D kvm-stub.o obj-y +=3D translate.o op_helper.o helper.o cpu.o obj-y +=3D neon_helper.o iwmmxt_helper.o vec_helper.o -obj-y +=3D gdbstub.o m_helper.o +obj-y +=3D gdbstub.o m_helper.o excp_helper.o obj-$(TARGET_AARCH64) +=3D cpu64.o translate-a64.o helper-a64.o gdbstub64.o obj-y +=3D crypto_helper.o obj-$(CONFIG_SOFTMMU) +=3D arm-powerctl.o --=20 2.19.1 From nobody Thu Nov 6 14:16:22 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=linux.intel.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1542129682583774.3927687347052; Tue, 13 Nov 2018 09:21:22 -0800 (PST) Received: from localhost ([::1]:55253 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcNN-0002sR-6S for importer@patchew.org; Tue, 13 Nov 2018 12:21:21 -0500 Received: from eggs.gnu.org ([2001:4830:134:3::10]:36712) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcAl-0007vY-Jo for qemu-devel@nongnu.org; Tue, 13 Nov 2018 12:08:21 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1gMbwh-0003x1-Aj for qemu-devel@nongnu.org; Tue, 13 Nov 2018 11:53:51 -0500 Received: from mga06.intel.com ([134.134.136.31]:62852) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1gMbwf-0003pH-PW; Tue, 13 Nov 2018 11:53:46 -0500 Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga104.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 13 Nov 2018 08:53:36 -0800 Received: from qingbai-mobl.ger.corp.intel.com (HELO caravaggio.ger.corp.intel.com) ([10.249.41.239]) by orsmga001.jf.intel.com with ESMTP; 13 Nov 2018 08:53:34 -0800 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,228,1539673200"; d="scan'208";a="107922105" From: Samuel Ortiz To: qemu-devel@nongnu.org Date: Tue, 13 Nov 2018 17:52:39 +0100 Message-Id: <20181113165247.4806-6-sameo@linux.intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com> References: <20181113165247.4806-1-sameo@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: 
Genre and OS details not recognized. X-Received-From: 134.134.136.31 Subject: [Qemu-devel] [PATCH 05/13] target: arm: Move the DC ZVA helper into op_helper X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, richard.henderson@linaro.org Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Signed-off-by: Samuel Ortiz Reviewed-by: Philippe Mathieu-Daud=C3=A9 Tested-by: Philippe Mathieu-Daud=C3=A9 Reviewed-by: Robert Bradford --- target/arm/helper.c | 84 ------------------------------------------ target/arm/op_helper.c | 84 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 84 insertions(+), 84 deletions(-) diff --git a/target/arm/helper.c b/target/arm/helper.c index 7b30a4cb49..bc2c8cdb67 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -8917,90 +8917,6 @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *c= s, vaddr addr, =20 #endif =20 -void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in) -{ - /* Implement DC ZVA, which zeroes a fixed-length block of memory. - * Note that we do not implement the (architecturally mandated) - * alignment fault for attempts to use this on Device memory - * (which matches the usual QEMU behaviour of not implementing either - * alignment faults or any memory attribute handling). - */ - - ARMCPU *cpu =3D arm_env_get_cpu(env); - uint64_t blocklen =3D 4 << cpu->dcz_blocksize; - uint64_t vaddr =3D vaddr_in & ~(blocklen - 1); - -#ifndef CONFIG_USER_ONLY - { - /* Slightly awkwardly, QEMU's TARGET_PAGE_SIZE may be less than - * the block size so we might have to do more than one TLB lookup. - * We know that in fact for any v8 CPU the page size is at least 4K - * and the block size must be 2K or less, but TARGET_PAGE_SIZE is = only - * 1K as an artefact of legacy v5 subpage support being present in= the - * same QEMU executable. - */ - int maxidx =3D DIV_ROUND_UP(blocklen, TARGET_PAGE_SIZE); - void *hostaddr[maxidx]; - int try, i; - unsigned mmu_idx =3D cpu_mmu_index(env, false); - TCGMemOpIdx oi =3D make_memop_idx(MO_UB, mmu_idx); - - for (try =3D 0; try < 2; try++) { - - for (i =3D 0; i < maxidx; i++) { - hostaddr[i] =3D tlb_vaddr_to_host(env, - vaddr + TARGET_PAGE_SIZE *= i, - 1, mmu_idx); - if (!hostaddr[i]) { - break; - } - } - if (i =3D=3D maxidx) { - /* If it's all in the TLB it's fair game for just writing = to; - * we know we don't need to update dirty status, etc. - */ - for (i =3D 0; i < maxidx - 1; i++) { - memset(hostaddr[i], 0, TARGET_PAGE_SIZE); - } - memset(hostaddr[i], 0, blocklen - (i * TARGET_PAGE_SIZE)); - return; - } - /* OK, try a store and see if we can populate the tlb. This - * might cause an exception if the memory isn't writable, - * in which case we will longjmp out of here. We must for - * this purpose use the actual register value passed to us - * so that we get the fault address right. - */ - helper_ret_stb_mmu(env, vaddr_in, 0, oi, GETPC()); - /* Now we can populate the other TLB entries, if any */ - for (i =3D 0; i < maxidx; i++) { - uint64_t va =3D vaddr + TARGET_PAGE_SIZE * i; - if (va !=3D (vaddr_in & TARGET_PAGE_MASK)) { - helper_ret_stb_mmu(env, va, 0, oi, GETPC()); - } - } - } - - /* Slow path (probably attempt to do this to an I/O device or - * similar, or clearing of a block of code we have translations - * cached for). Just do a series of byte writes as the architecture - * demands. 
It's not worth trying to use a cpu_physical_memory_map= (), - * memset(), unmap() sequence here because: - * + we'd need to account for the blocksize being larger than a p= age - * + the direct-RAM access case is almost always going to be dealt - * with in the fastpath code above, so there's no speed benefit - * + we would have to deal with the map returning NULL because the - * bounce buffer was in use - */ - for (i =3D 0; i < blocklen; i++) { - helper_ret_stb_mmu(env, vaddr + i, 0, oi, GETPC()); - } - } -#else - memset(g2h(vaddr), 0, blocklen); -#endif -} - /* Note that signed overflow is undefined in C. The following routines are careful to use unsigned types where modulo arithmetic is required. Failure to do so _will_ break on newer gcc. */ diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c index eb6fb82fb8..44a74cb296 100644 --- a/target/arm/op_helper.c +++ b/target/arm/op_helper.c @@ -1480,3 +1480,87 @@ uint32_t HELPER(ror_cc)(CPUARMState *env, uint32_t x= , uint32_t i) return ((uint32_t)x >> shift) | (x << (32 - shift)); } } + +void HELPER(dc_zva)(CPUARMState *env, uint64_t vaddr_in) +{ + /* Implement DC ZVA, which zeroes a fixed-length block of memory. + * Note that we do not implement the (architecturally mandated) + * alignment fault for attempts to use this on Device memory + * (which matches the usual QEMU behaviour of not implementing either + * alignment faults or any memory attribute handling). + */ + + ARMCPU *cpu =3D arm_env_get_cpu(env); + uint64_t blocklen =3D 4 << cpu->dcz_blocksize; + uint64_t vaddr =3D vaddr_in & ~(blocklen - 1); + +#ifndef CONFIG_USER_ONLY + { + /* Slightly awkwardly, QEMU's TARGET_PAGE_SIZE may be less than + * the block size so we might have to do more than one TLB lookup. + * We know that in fact for any v8 CPU the page size is at least 4K + * and the block size must be 2K or less, but TARGET_PAGE_SIZE is = only + * 1K as an artefact of legacy v5 subpage support being present in= the + * same QEMU executable. + */ + int maxidx =3D DIV_ROUND_UP(blocklen, TARGET_PAGE_SIZE); + void *hostaddr[maxidx]; + int try, i; + unsigned mmu_idx =3D cpu_mmu_index(env, false); + TCGMemOpIdx oi =3D make_memop_idx(MO_UB, mmu_idx); + + for (try =3D 0; try < 2; try++) { + + for (i =3D 0; i < maxidx; i++) { + hostaddr[i] =3D tlb_vaddr_to_host(env, + vaddr + TARGET_PAGE_SIZE *= i, + 1, mmu_idx); + if (!hostaddr[i]) { + break; + } + } + if (i =3D=3D maxidx) { + /* If it's all in the TLB it's fair game for just writing = to; + * we know we don't need to update dirty status, etc. + */ + for (i =3D 0; i < maxidx - 1; i++) { + memset(hostaddr[i], 0, TARGET_PAGE_SIZE); + } + memset(hostaddr[i], 0, blocklen - (i * TARGET_PAGE_SIZE)); + return; + } + /* OK, try a store and see if we can populate the tlb. This + * might cause an exception if the memory isn't writable, + * in which case we will longjmp out of here. We must for + * this purpose use the actual register value passed to us + * so that we get the fault address right. + */ + helper_ret_stb_mmu(env, vaddr_in, 0, oi, GETPC()); + /* Now we can populate the other TLB entries, if any */ + for (i =3D 0; i < maxidx; i++) { + uint64_t va =3D vaddr + TARGET_PAGE_SIZE * i; + if (va !=3D (vaddr_in & TARGET_PAGE_MASK)) { + helper_ret_stb_mmu(env, va, 0, oi, GETPC()); + } + } + } + + /* Slow path (probably attempt to do this to an I/O device or + * similar, or clearing of a block of code we have translations + * cached for). Just do a series of byte writes as the architecture + * demands. 
It's not worth trying to use a cpu_physical_memory_map(),
+         * memset(), unmap() sequence here because:
+         *  + we'd need to account for the blocksize being larger than a page
+         *  + the direct-RAM access case is almost always going to be dealt
+         *    with in the fastpath code above, so there's no speed benefit
+         *  + we would have to deal with the map returning NULL because the
+         *    bounce buffer was in use
+         */
+        for (i = 0; i < blocklen; i++) {
+            helper_ret_stb_mmu(env, vaddr + i, 0, oi, GETPC());
+        }
+    }
+#else
+    memset(g2h(vaddr), 0, blocklen);
+#endif
+}
-- 
2.19.1

From nobody Thu Nov 6 14:16:22 2025
From: Samuel Ortiz
To: qemu-devel@nongnu.org
Date: Tue, 13 Nov 2018 17:52:40 +0100
Message-Id: <20181113165247.4806-7-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 06/13] target: arm: Make ARM TLB filling routine static
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org

It is only used in op_helper.c, so it does not need to be exported;
moreover, it should only be built when TCG is enabled.
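Giving the routine internal linkage also moves its prototype out of
internals.h, so the compiler can verify that op_helper.c really is the
only caller. A minimal stand-alone sketch of the same pattern, with
hypothetical names rather than QEMU code:

    #include <stdbool.h>
    #include <stdio.h>

    /* Before: declared in a shared header with external linkage, callable
     * from every file that includes it.  After: static, private to this
     * translation unit, so -Wunused-function can flag it if the last
     * caller ever disappears. */
    static bool fill_entry(int idx)
    {
        return idx >= 0; /* stand-in for the real page-table walk */
    }

    int main(void)
    {
        printf("fill_entry(3) -> %d\n", fill_entry(3));
        return 0;
    }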
Signed-off-by: Samuel Ortiz
Reviewed-by: Philippe Mathieu-Daudé
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
---
 target/arm/internals.h |  5 -----
 target/arm/helper.c    | 37 -------------------------------------
 target/arm/op_helper.c | 38 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 38 insertions(+), 42 deletions(-)

diff --git a/target/arm/internals.h b/target/arm/internals.h
index ffb5091b1f..06439467d2 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -741,11 +741,6 @@ static inline bool arm_extabort_type(MemTxResult result)
     return result != MEMTX_DECODE_ERROR;
 }
 
-/* Do a page table walk and add page to TLB if possible */
-bool arm_tlb_fill(CPUState *cpu, vaddr address,
-                  MMUAccessType access_type, int mmu_idx,
-                  ARMMMUFaultInfo *fi);
-
 /* Return true if the stage 1 translation regime is using LPAE format page
  * tables */
 bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index bc2c8cdb67..689879c23a 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -8855,43 +8855,6 @@ bool get_phys_addr(CPUARMState *env, target_ulong address,
     }
 }
 
-/* Walk the page table and (if the mapping exists) add the page
- * to the TLB. Return false on success, or true on failure. Populate
- * fsr with ARM DFSR/IFSR fault register format value on failure.
- */
-bool arm_tlb_fill(CPUState *cs, vaddr address,
-                  MMUAccessType access_type, int mmu_idx,
-                  ARMMMUFaultInfo *fi)
-{
-    ARMCPU *cpu = ARM_CPU(cs);
-    CPUARMState *env = &cpu->env;
-    hwaddr phys_addr;
-    target_ulong page_size;
-    int prot;
-    int ret;
-    MemTxAttrs attrs = {};
-
-    ret = get_phys_addr(env, address, access_type,
-                        core_to_arm_mmu_idx(env, mmu_idx), &phys_addr,
-                        &attrs, &prot, &page_size, fi, NULL);
-    if (!ret) {
-        /*
-         * Map a single [sub]page. Regions smaller than our declared
-         * target page size are handled specially, so for those we
-         * pass in the exact addresses.
-         */
-        if (page_size >= TARGET_PAGE_SIZE) {
-            phys_addr &= TARGET_PAGE_MASK;
-            address &= TARGET_PAGE_MASK;
-        }
-        tlb_set_page_with_attrs(cs, address, phys_addr, attrs,
-                                prot, mmu_idx, page_size);
-        return 0;
-    }
-
-    return ret;
-}
-
 hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
                                          MemTxAttrs *attrs)
 {
diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c
index 44a74cb296..3b0459db50 100644
--- a/target/arm/op_helper.c
+++ b/target/arm/op_helper.c
@@ -179,6 +179,44 @@ static void deliver_fault(ARMCPU *cpu, vaddr addr, MMUAccessType access_type,
     raise_exception(env, exc, syn, target_el);
 }
 
+/* Walk the page table and (if the mapping exists) add the page
+ * to the TLB. Return false on success, or true on failure. Populate
+ * fsr with ARM DFSR/IFSR fault register format value on failure.
+ */
+static bool arm_tlb_fill(CPUState *cs, vaddr address,
+                         MMUAccessType access_type, int mmu_idx,
+                         ARMMMUFaultInfo *fi)
+{
+    ARMCPU *cpu = ARM_CPU(cs);
+    CPUARMState *env = &cpu->env;
+    hwaddr phys_addr;
+    target_ulong page_size;
+    int prot;
+    int ret;
+    MemTxAttrs attrs = {};
+
+    ret = get_phys_addr(env, address, access_type,
+                        core_to_arm_mmu_idx(env, mmu_idx), &phys_addr,
+                        &attrs, &prot, &page_size, fi, NULL);
+    if (!ret) {
+        /*
+         * Map a single [sub]page. Regions smaller than our declared
+         * target page size are handled specially, so for those we
+         * pass in the exact addresses.
+         */
+        if (page_size >= TARGET_PAGE_SIZE) {
+            phys_addr &= TARGET_PAGE_MASK;
+            address &= TARGET_PAGE_MASK;
+        }
+        tlb_set_page_with_attrs(cs, address, phys_addr, attrs,
+                                prot, mmu_idx, page_size);
+        return 0;
+    }
+
+    return ret;
+}
+
 /* try to fill the TLB and return an exception if error. If retaddr is
  * NULL, it means that the function was called in C code (i.e. not
  * from generated code or from helper.c)
-- 
2.19.1

From nobody Thu Nov 6 14:16:22 2025
From: Samuel Ortiz
To: qemu-devel@nongnu.org
Date: Tue, 13 Nov 2018 17:52:41 +0100
Message-Id: <20181113165247.4806-8-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 07/13] target: arm: Remove the LDST headers
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org

They are no longer needed.
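For context: the last users of arm_ldst.h in helper.c were the
semihosting immediate checks (arm_lduw_code() and arm_ldl_code()),
which earlier patches in this series moved into excp_helper.c. A
stand-alone miniature of the same dead-include situation, with a
hypothetical header name rather than QEMU code:

    #include <stdio.h>

    /* The include below is commented out to mirror the removal: nothing
     * in this file references it any more, so dropping it cannot change
     * behaviour.  (Hypothetical header.) */
    /* #include "arm_ldst_mini.h" */

    int main(void)
    {
        puts("still compiles without the stale include");
        return 0;
    }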
Signed-off-by: Samuel Ortiz
Reviewed-by: Philippe Mathieu-Daudé
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
---
 target/arm/helper.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index 689879c23a..dcb689bfbb 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -16,7 +16,6 @@
 #include "sysemu/sysemu.h"
 #include "qemu/crc32c.h"
 #include "exec/exec-all.h"
-#include "arm_ldst.h"
 #include <zlib.h> /* For crc32 */
 #include "exec/semihost.h"
 #include "sysemu/kvm.h"
-- 
2.19.1

From nobody Thu Nov 6 14:16:22 2025
From: Samuel Ortiz
To: qemu-devel@nongnu.org
Date: Tue, 13 Nov 2018 17:52:42 +0100
Message-Id: <20181113165247.4806-9-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
References: <20181113165247.4806-1-sameo@linux.intel.com>
X-Received-From: 134.134.136.31 Subject: [Qemu-devel] [PATCH 08/13] target: arm: Move all VFP helpers into their own file X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, richard.henderson@linaro.org, =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Since softfloat is only relevant with TCG, we move all ARM VFP helpers into their own file (vfp_helper.c), in order to support TCG disablement on ARM. Signed-off-by: Samuel Ortiz Signed-off-by: Philippe Mathieu-Daud=C3=A9 Reviewed-by: Philippe Mathieu-Daud=C3=A9 Tested-by: Philippe Mathieu-Daud=C3=A9 Reviewed-by: Robert Bradford --- target/arm/helper.c | 871 -------------------------------------- target/arm/vfp_helper.c | 893 +++++++++++++++++++++++++++++++++++++++ target/arm/Makefile.objs | 2 +- 3 files changed, 894 insertions(+), 872 deletions(-) create mode 100644 target/arm/vfp_helper.c diff --git a/target/arm/helper.c b/target/arm/helper.c index dcb689bfbb..996dfbbda2 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -9257,877 +9257,6 @@ void vfp_set_fpscr(CPUARMState *env, uint32_t val) HELPER(vfp_set_fpscr)(env, val); } =20 -#define VFP_HELPER(name, p) HELPER(glue(glue(vfp_,name),p)) - -#define VFP_BINOP(name) \ -float32 VFP_HELPER(name, s)(float32 a, float32 b, void *fpstp) \ -{ \ - float_status *fpst =3D fpstp; \ - return float32_ ## name(a, b, fpst); \ -} \ -float64 VFP_HELPER(name, d)(float64 a, float64 b, void *fpstp) \ -{ \ - float_status *fpst =3D fpstp; \ - return float64_ ## name(a, b, fpst); \ -} -VFP_BINOP(add) -VFP_BINOP(sub) -VFP_BINOP(mul) -VFP_BINOP(div) -VFP_BINOP(min) -VFP_BINOP(max) -VFP_BINOP(minnum) -VFP_BINOP(maxnum) -#undef VFP_BINOP - -float32 VFP_HELPER(neg, s)(float32 a) -{ - return float32_chs(a); -} - -float64 VFP_HELPER(neg, d)(float64 a) -{ - return float64_chs(a); -} - -float32 VFP_HELPER(abs, s)(float32 a) -{ - return float32_abs(a); -} - -float64 VFP_HELPER(abs, d)(float64 a) -{ - return float64_abs(a); -} - -float32 VFP_HELPER(sqrt, s)(float32 a, CPUARMState *env) -{ - return float32_sqrt(a, &env->vfp.fp_status); -} - -float64 VFP_HELPER(sqrt, d)(float64 a, CPUARMState *env) -{ - return float64_sqrt(a, &env->vfp.fp_status); -} - -/* XXX: check quiet/signaling case */ -#define DO_VFP_cmp(p, type) \ -void VFP_HELPER(cmp, p)(type a, type b, CPUARMState *env) \ -{ \ - uint32_t flags; \ - switch(type ## _compare_quiet(a, b, &env->vfp.fp_status)) { \ - case 0: flags =3D 0x6; break; \ - case -1: flags =3D 0x8; break; \ - case 1: flags =3D 0x2; break; \ - default: case 2: flags =3D 0x3; break; \ - } \ - env->vfp.xregs[ARM_VFP_FPSCR] =3D (flags << 28) \ - | (env->vfp.xregs[ARM_VFP_FPSCR] & 0x0fffffff); \ -} \ -void VFP_HELPER(cmpe, p)(type a, type b, CPUARMState *env) \ -{ \ - uint32_t flags; \ - switch(type ## _compare(a, b, &env->vfp.fp_status)) { \ - case 0: flags =3D 0x6; break; \ - case -1: flags =3D 0x8; break; \ - case 1: flags =3D 0x2; break; \ - default: case 2: flags =3D 0x3; break; \ - } \ - env->vfp.xregs[ARM_VFP_FPSCR] =3D (flags << 28) \ - | (env->vfp.xregs[ARM_VFP_FPSCR] & 0x0fffffff); \ -} -DO_VFP_cmp(s, float32) -DO_VFP_cmp(d, float64) -#undef DO_VFP_cmp - -/* Integer to float and float to integer conversions */ - -#define CONV_ITOF(name, ftype, fsz, sign) \ -ftype HELPER(name)(uint32_t x, void *fpstp) \ -{ \ - float_status *fpst =3D fpstp; \ - return 
sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \ -} - -#define CONV_FTOI(name, ftype, fsz, sign, round) \ -sign##int32_t HELPER(name)(ftype x, void *fpstp) \ -{ \ - float_status *fpst =3D fpstp; \ - if (float##fsz##_is_any_nan(x)) { \ - float_raise(float_flag_invalid, fpst); \ - return 0; \ - } \ - return float##fsz##_to_##sign##int32##round(x, fpst); \ -} - -#define FLOAT_CONVS(name, p, ftype, fsz, sign) \ - CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \ - CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \ - CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero) - -FLOAT_CONVS(si, h, uint32_t, 16, ) -FLOAT_CONVS(si, s, float32, 32, ) -FLOAT_CONVS(si, d, float64, 64, ) -FLOAT_CONVS(ui, h, uint32_t, 16, u) -FLOAT_CONVS(ui, s, float32, 32, u) -FLOAT_CONVS(ui, d, float64, 64, u) - -#undef CONV_ITOF -#undef CONV_FTOI -#undef FLOAT_CONVS - -/* floating point conversion */ -float64 VFP_HELPER(fcvtd, s)(float32 x, CPUARMState *env) -{ - return float32_to_float64(x, &env->vfp.fp_status); -} - -float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env) -{ - return float64_to_float32(x, &env->vfp.fp_status); -} - -/* VFP3 fixed point conversion. */ -#define VFP_CONV_FIX_FLOAT(name, p, fsz, isz, itype) \ -float##fsz HELPER(vfp_##name##to##p)(uint##isz##_t x, uint32_t shift, \ - void *fpstp) \ -{ return itype##_to_##float##fsz##_scalbn(x, -shift, fpstp); } - -#define VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, ROUND, suff) \ -uint##isz##_t HELPER(vfp_to##name##p##suff)(float##fsz x, uint32_t shift, \ - void *fpst) \ -{ \ - if (unlikely(float##fsz##_is_any_nan(x))) { \ - float_raise(float_flag_invalid, fpst); \ - return 0; \ - } \ - return float##fsz##_to_##itype##_scalbn(x, ROUND, shift, fpst); \ -} - -#define VFP_CONV_FIX(name, p, fsz, isz, itype) \ -VFP_CONV_FIX_FLOAT(name, p, fsz, isz, itype) \ -VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, \ - float_round_to_zero, _round_to_zero) \ -VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, \ - get_float_rounding_mode(fpst), ) - -#define VFP_CONV_FIX_A64(name, p, fsz, isz, itype) \ -VFP_CONV_FIX_FLOAT(name, p, fsz, isz, itype) \ -VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, \ - get_float_rounding_mode(fpst), ) - -VFP_CONV_FIX(sh, d, 64, 64, int16) -VFP_CONV_FIX(sl, d, 64, 64, int32) -VFP_CONV_FIX_A64(sq, d, 64, 64, int64) -VFP_CONV_FIX(uh, d, 64, 64, uint16) -VFP_CONV_FIX(ul, d, 64, 64, uint32) -VFP_CONV_FIX_A64(uq, d, 64, 64, uint64) -VFP_CONV_FIX(sh, s, 32, 32, int16) -VFP_CONV_FIX(sl, s, 32, 32, int32) -VFP_CONV_FIX_A64(sq, s, 32, 64, int64) -VFP_CONV_FIX(uh, s, 32, 32, uint16) -VFP_CONV_FIX(ul, s, 32, 32, uint32) -VFP_CONV_FIX_A64(uq, s, 32, 64, uint64) - -#undef VFP_CONV_FIX -#undef VFP_CONV_FIX_FLOAT -#undef VFP_CONV_FLOAT_FIX_ROUND -#undef VFP_CONV_FIX_A64 - -uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst) -{ - return int32_to_float16_scalbn(x, -shift, fpst); -} - -uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst) -{ - return uint32_to_float16_scalbn(x, -shift, fpst); -} - -uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst) -{ - return int64_to_float16_scalbn(x, -shift, fpst); -} - -uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst) -{ - return uint64_to_float16_scalbn(x, -shift, fpst); -} - -uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst) -{ - if (unlikely(float16_is_any_nan(x))) { - float_raise(float_flag_invalid, fpst); - return 0; - } - return float16_to_int16_scalbn(x, get_float_rounding_mode(fpst), - shift, fpst); -} - 
-uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst) -{ - if (unlikely(float16_is_any_nan(x))) { - float_raise(float_flag_invalid, fpst); - return 0; - } - return float16_to_uint16_scalbn(x, get_float_rounding_mode(fpst), - shift, fpst); -} - -uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst) -{ - if (unlikely(float16_is_any_nan(x))) { - float_raise(float_flag_invalid, fpst); - return 0; - } - return float16_to_int32_scalbn(x, get_float_rounding_mode(fpst), - shift, fpst); -} - -uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst) -{ - if (unlikely(float16_is_any_nan(x))) { - float_raise(float_flag_invalid, fpst); - return 0; - } - return float16_to_uint32_scalbn(x, get_float_rounding_mode(fpst), - shift, fpst); -} - -uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst) -{ - if (unlikely(float16_is_any_nan(x))) { - float_raise(float_flag_invalid, fpst); - return 0; - } - return float16_to_int64_scalbn(x, get_float_rounding_mode(fpst), - shift, fpst); -} - -uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst) -{ - if (unlikely(float16_is_any_nan(x))) { - float_raise(float_flag_invalid, fpst); - return 0; - } - return float16_to_uint64_scalbn(x, get_float_rounding_mode(fpst), - shift, fpst); -} - -/* Set the current fp rounding mode and return the old one. - * The argument is a softfloat float_round_ value. - */ -uint32_t HELPER(set_rmode)(uint32_t rmode, void *fpstp) -{ - float_status *fp_status =3D fpstp; - - uint32_t prev_rmode =3D get_float_rounding_mode(fp_status); - set_float_rounding_mode(rmode, fp_status); - - return prev_rmode; -} - -/* Set the current fp rounding mode in the standard fp status and return - * the old one. This is for NEON instructions that need to change the - * rounding mode but wish to use the standard FPSCR values for everything - * else. Always set the rounding mode back to the correct value after - * modifying it. - * The argument is a softfloat float_round_ value. - */ -uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env) -{ - float_status *fp_status =3D &env->vfp.standard_fp_status; - - uint32_t prev_rmode =3D get_float_rounding_mode(fp_status); - set_float_rounding_mode(rmode, fp_status); - - return prev_rmode; -} - -/* Half precision conversions. */ -float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_= mode) -{ - /* Squash FZ16 to 0 for the duration of conversion. In this case, - * it would affect flushing input denormals. - */ - float_status *fpst =3D fpstp; - flag save =3D get_flush_inputs_to_zero(fpst); - set_flush_inputs_to_zero(false, fpst); - float32 r =3D float16_to_float32(a, !ahp_mode, fpst); - set_flush_inputs_to_zero(save, fpst); - return r; -} - -uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_= mode) -{ - /* Squash FZ16 to 0 for the duration of conversion. In this case, - * it would affect flushing output denormals. - */ - float_status *fpst =3D fpstp; - flag save =3D get_flush_to_zero(fpst); - set_flush_to_zero(false, fpst); - float16 r =3D float32_to_float16(a, !ahp_mode, fpst); - set_flush_to_zero(save, fpst); - return r; -} - -float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_= mode) -{ - /* Squash FZ16 to 0 for the duration of conversion. In this case, - * it would affect flushing input denormals. 
- */ - float_status *fpst =3D fpstp; - flag save =3D get_flush_inputs_to_zero(fpst); - set_flush_inputs_to_zero(false, fpst); - float64 r =3D float16_to_float64(a, !ahp_mode, fpst); - set_flush_inputs_to_zero(save, fpst); - return r; -} - -uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_= mode) -{ - /* Squash FZ16 to 0 for the duration of conversion. In this case, - * it would affect flushing output denormals. - */ - float_status *fpst =3D fpstp; - flag save =3D get_flush_to_zero(fpst); - set_flush_to_zero(false, fpst); - float16 r =3D float64_to_float16(a, !ahp_mode, fpst); - set_flush_to_zero(save, fpst); - return r; -} - -#define float32_two make_float32(0x40000000) -#define float32_three make_float32(0x40400000) -#define float32_one_point_five make_float32(0x3fc00000) - -float32 HELPER(recps_f32)(float32 a, float32 b, CPUARMState *env) -{ - float_status *s =3D &env->vfp.standard_fp_status; - if ((float32_is_infinity(a) && float32_is_zero_or_denormal(b)) || - (float32_is_infinity(b) && float32_is_zero_or_denormal(a))) { - if (!(float32_is_zero(a) || float32_is_zero(b))) { - float_raise(float_flag_input_denormal, s); - } - return float32_two; - } - return float32_sub(float32_two, float32_mul(a, b, s), s); -} - -float32 HELPER(rsqrts_f32)(float32 a, float32 b, CPUARMState *env) -{ - float_status *s =3D &env->vfp.standard_fp_status; - float32 product; - if ((float32_is_infinity(a) && float32_is_zero_or_denormal(b)) || - (float32_is_infinity(b) && float32_is_zero_or_denormal(a))) { - if (!(float32_is_zero(a) || float32_is_zero(b))) { - float_raise(float_flag_input_denormal, s); - } - return float32_one_point_five; - } - product =3D float32_mul(a, b, s); - return float32_div(float32_sub(float32_three, product, s), float32_two= , s); -} - -/* NEON helpers. */ - -/* Constants 256 and 512 are used in some helpers; we avoid relying on - * int->float conversions at run-time. */ -#define float64_256 make_float64(0x4070000000000000LL) -#define float64_512 make_float64(0x4080000000000000LL) -#define float16_maxnorm make_float16(0x7bff) -#define float32_maxnorm make_float32(0x7f7fffff) -#define float64_maxnorm make_float64(0x7fefffffffffffffLL) - -/* Reciprocal functions - * - * The algorithm that must be used to calculate the estimate - * is specified by the ARM ARM, see FPRecipEstimate()/RecipEstimate - */ - -/* See RecipEstimate() - * - * input is a 9 bit fixed point number - * input range 256 .. 511 for a number from 0.5 <=3D x < 1.0. - * result range 256 .. 511 for a number from 1.0 to 511/256. - */ - -static int recip_estimate(int input) -{ - int a, b, r; - assert(256 <=3D input && input < 512); - a =3D (input * 2) + 1; - b =3D (1 << 19) / a; - r =3D (b + 1) >> 1; - assert(256 <=3D r && r < 512); - return r; -} - -/* - * Common wrapper to call recip_estimate - * - * The parameters are exponent and 64 bit fraction (without implicit - * bit) where the binary point is nominally at bit 52. Returns a - * float64 which can then be rounded to the appropriate size by the - * callee. 
- */ - -static uint64_t call_recip_estimate(int *exp, int exp_off, uint64_t frac) -{ - uint32_t scaled, estimate; - uint64_t result_frac; - int result_exp; - - /* Handle sub-normals */ - if (*exp =3D=3D 0) { - if (extract64(frac, 51, 1) =3D=3D 0) { - *exp =3D -1; - frac <<=3D 2; - } else { - frac <<=3D 1; - } - } - - /* scaled =3D UInt('1':fraction<51:44>) */ - scaled =3D deposit32(1 << 8, 0, 8, extract64(frac, 44, 8)); - estimate =3D recip_estimate(scaled); - - result_exp =3D exp_off - *exp; - result_frac =3D deposit64(0, 44, 8, estimate); - if (result_exp =3D=3D 0) { - result_frac =3D deposit64(result_frac >> 1, 51, 1, 1); - } else if (result_exp =3D=3D -1) { - result_frac =3D deposit64(result_frac >> 2, 50, 2, 1); - result_exp =3D 0; - } - - *exp =3D result_exp; - - return result_frac; -} - -static bool round_to_inf(float_status *fpst, bool sign_bit) -{ - switch (fpst->float_rounding_mode) { - case float_round_nearest_even: /* Round to Nearest */ - return true; - case float_round_up: /* Round to +Inf */ - return !sign_bit; - case float_round_down: /* Round to -Inf */ - return sign_bit; - case float_round_to_zero: /* Round to Zero */ - return false; - } - - g_assert_not_reached(); -} - -uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp) -{ - float_status *fpst =3D fpstp; - float16 f16 =3D float16_squash_input_denormal(input, fpst); - uint32_t f16_val =3D float16_val(f16); - uint32_t f16_sign =3D float16_is_neg(f16); - int f16_exp =3D extract32(f16_val, 10, 5); - uint32_t f16_frac =3D extract32(f16_val, 0, 10); - uint64_t f64_frac; - - if (float16_is_any_nan(f16)) { - float16 nan =3D f16; - if (float16_is_signaling_nan(f16, fpst)) { - float_raise(float_flag_invalid, fpst); - nan =3D float16_silence_nan(f16, fpst); - } - if (fpst->default_nan_mode) { - nan =3D float16_default_nan(fpst); - } - return nan; - } else if (float16_is_infinity(f16)) { - return float16_set_sign(float16_zero, float16_is_neg(f16)); - } else if (float16_is_zero(f16)) { - float_raise(float_flag_divbyzero, fpst); - return float16_set_sign(float16_infinity, float16_is_neg(f16)); - } else if (float16_abs(f16) < (1 << 8)) { - /* Abs(value) < 2.0^-16 */ - float_raise(float_flag_overflow | float_flag_inexact, fpst); - if (round_to_inf(fpst, f16_sign)) { - return float16_set_sign(float16_infinity, f16_sign); - } else { - return float16_set_sign(float16_maxnorm, f16_sign); - } - } else if (f16_exp >=3D 29 && fpst->flush_to_zero) { - float_raise(float_flag_underflow, fpst); - return float16_set_sign(float16_zero, float16_is_neg(f16)); - } - - f64_frac =3D call_recip_estimate(&f16_exp, 29, - ((uint64_t) f16_frac) << (52 - 10)); - - /* result =3D sign : result_exp<4:0> : fraction<51:42> */ - f16_val =3D deposit32(0, 15, 1, f16_sign); - f16_val =3D deposit32(f16_val, 10, 5, f16_exp); - f16_val =3D deposit32(f16_val, 0, 10, extract64(f64_frac, 52 - 10, 10)= ); - return make_float16(f16_val); -} - -float32 HELPER(recpe_f32)(float32 input, void *fpstp) -{ - float_status *fpst =3D fpstp; - float32 f32 =3D float32_squash_input_denormal(input, fpst); - uint32_t f32_val =3D float32_val(f32); - bool f32_sign =3D float32_is_neg(f32); - int f32_exp =3D extract32(f32_val, 23, 8); - uint32_t f32_frac =3D extract32(f32_val, 0, 23); - uint64_t f64_frac; - - if (float32_is_any_nan(f32)) { - float32 nan =3D f32; - if (float32_is_signaling_nan(f32, fpst)) { - float_raise(float_flag_invalid, fpst); - nan =3D float32_silence_nan(f32, fpst); - } - if (fpst->default_nan_mode) { - nan =3D float32_default_nan(fpst); - } - return nan; - } else if 
(float32_is_infinity(f32)) { - return float32_set_sign(float32_zero, float32_is_neg(f32)); - } else if (float32_is_zero(f32)) { - float_raise(float_flag_divbyzero, fpst); - return float32_set_sign(float32_infinity, float32_is_neg(f32)); - } else if (float32_abs(f32) < (1ULL << 21)) { - /* Abs(value) < 2.0^-128 */ - float_raise(float_flag_overflow | float_flag_inexact, fpst); - if (round_to_inf(fpst, f32_sign)) { - return float32_set_sign(float32_infinity, f32_sign); - } else { - return float32_set_sign(float32_maxnorm, f32_sign); - } - } else if (f32_exp >=3D 253 && fpst->flush_to_zero) { - float_raise(float_flag_underflow, fpst); - return float32_set_sign(float32_zero, float32_is_neg(f32)); - } - - f64_frac =3D call_recip_estimate(&f32_exp, 253, - ((uint64_t) f32_frac) << (52 - 23)); - - /* result =3D sign : result_exp<7:0> : fraction<51:29> */ - f32_val =3D deposit32(0, 31, 1, f32_sign); - f32_val =3D deposit32(f32_val, 23, 8, f32_exp); - f32_val =3D deposit32(f32_val, 0, 23, extract64(f64_frac, 52 - 23, 23)= ); - return make_float32(f32_val); -} - -float64 HELPER(recpe_f64)(float64 input, void *fpstp) -{ - float_status *fpst =3D fpstp; - float64 f64 =3D float64_squash_input_denormal(input, fpst); - uint64_t f64_val =3D float64_val(f64); - bool f64_sign =3D float64_is_neg(f64); - int f64_exp =3D extract64(f64_val, 52, 11); - uint64_t f64_frac =3D extract64(f64_val, 0, 52); - - /* Deal with any special cases */ - if (float64_is_any_nan(f64)) { - float64 nan =3D f64; - if (float64_is_signaling_nan(f64, fpst)) { - float_raise(float_flag_invalid, fpst); - nan =3D float64_silence_nan(f64, fpst); - } - if (fpst->default_nan_mode) { - nan =3D float64_default_nan(fpst); - } - return nan; - } else if (float64_is_infinity(f64)) { - return float64_set_sign(float64_zero, float64_is_neg(f64)); - } else if (float64_is_zero(f64)) { - float_raise(float_flag_divbyzero, fpst); - return float64_set_sign(float64_infinity, float64_is_neg(f64)); - } else if ((f64_val & ~(1ULL << 63)) < (1ULL << 50)) { - /* Abs(value) < 2.0^-1024 */ - float_raise(float_flag_overflow | float_flag_inexact, fpst); - if (round_to_inf(fpst, f64_sign)) { - return float64_set_sign(float64_infinity, f64_sign); - } else { - return float64_set_sign(float64_maxnorm, f64_sign); - } - } else if (f64_exp >=3D 2045 && fpst->flush_to_zero) { - float_raise(float_flag_underflow, fpst); - return float64_set_sign(float64_zero, float64_is_neg(f64)); - } - - f64_frac =3D call_recip_estimate(&f64_exp, 2045, f64_frac); - - /* result =3D sign : result_exp<10:0> : fraction<51:0>; */ - f64_val =3D deposit64(0, 63, 1, f64_sign); - f64_val =3D deposit64(f64_val, 52, 11, f64_exp); - f64_val =3D deposit64(f64_val, 0, 52, f64_frac); - return make_float64(f64_val); -} - -/* The algorithm that must be used to calculate the estimate - * is specified by the ARM ARM. 
- */ - -static int do_recip_sqrt_estimate(int a) -{ - int b, estimate; - - assert(128 <=3D a && a < 512); - if (a < 256) { - a =3D a * 2 + 1; - } else { - a =3D (a >> 1) << 1; - a =3D (a + 1) * 2; - } - b =3D 512; - while (a * (b + 1) * (b + 1) < (1 << 28)) { - b +=3D 1; - } - estimate =3D (b + 1) / 2; - assert(256 <=3D estimate && estimate < 512); - - return estimate; -} - - -static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac) -{ - int estimate; - uint32_t scaled; - - if (*exp =3D=3D 0) { - while (extract64(frac, 51, 1) =3D=3D 0) { - frac =3D frac << 1; - *exp -=3D 1; - } - frac =3D extract64(frac, 0, 51) << 1; - } - - if (*exp & 1) { - /* scaled =3D UInt('01':fraction<51:45>) */ - scaled =3D deposit32(1 << 7, 0, 7, extract64(frac, 45, 7)); - } else { - /* scaled =3D UInt('1':fraction<51:44>) */ - scaled =3D deposit32(1 << 8, 0, 8, extract64(frac, 44, 8)); - } - estimate =3D do_recip_sqrt_estimate(scaled); - - *exp =3D (exp_off - *exp) / 2; - return extract64(estimate, 0, 8) << 44; -} - -uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp) -{ - float_status *s =3D fpstp; - float16 f16 =3D float16_squash_input_denormal(input, s); - uint16_t val =3D float16_val(f16); - bool f16_sign =3D float16_is_neg(f16); - int f16_exp =3D extract32(val, 10, 5); - uint16_t f16_frac =3D extract32(val, 0, 10); - uint64_t f64_frac; - - if (float16_is_any_nan(f16)) { - float16 nan =3D f16; - if (float16_is_signaling_nan(f16, s)) { - float_raise(float_flag_invalid, s); - nan =3D float16_silence_nan(f16, s); - } - if (s->default_nan_mode) { - nan =3D float16_default_nan(s); - } - return nan; - } else if (float16_is_zero(f16)) { - float_raise(float_flag_divbyzero, s); - return float16_set_sign(float16_infinity, f16_sign); - } else if (f16_sign) { - float_raise(float_flag_invalid, s); - return float16_default_nan(s); - } else if (float16_is_infinity(f16)) { - return float16_zero; - } - - /* Scale and normalize to a double-precision value between 0.25 and 1.= 0, - * preserving the parity of the exponent. */ - - f64_frac =3D ((uint64_t) f16_frac) << (52 - 10); - - f64_frac =3D recip_sqrt_estimate(&f16_exp, 44, f64_frac); - - /* result =3D sign : result_exp<4:0> : estimate<7:0> : Zeros(2) */ - val =3D deposit32(0, 15, 1, f16_sign); - val =3D deposit32(val, 10, 5, f16_exp); - val =3D deposit32(val, 2, 8, extract64(f64_frac, 52 - 8, 8)); - return make_float16(val); -} - -float32 HELPER(rsqrte_f32)(float32 input, void *fpstp) -{ - float_status *s =3D fpstp; - float32 f32 =3D float32_squash_input_denormal(input, s); - uint32_t val =3D float32_val(f32); - uint32_t f32_sign =3D float32_is_neg(f32); - int f32_exp =3D extract32(val, 23, 8); - uint32_t f32_frac =3D extract32(val, 0, 23); - uint64_t f64_frac; - - if (float32_is_any_nan(f32)) { - float32 nan =3D f32; - if (float32_is_signaling_nan(f32, s)) { - float_raise(float_flag_invalid, s); - nan =3D float32_silence_nan(f32, s); - } - if (s->default_nan_mode) { - nan =3D float32_default_nan(s); - } - return nan; - } else if (float32_is_zero(f32)) { - float_raise(float_flag_divbyzero, s); - return float32_set_sign(float32_infinity, float32_is_neg(f32)); - } else if (float32_is_neg(f32)) { - float_raise(float_flag_invalid, s); - return float32_default_nan(s); - } else if (float32_is_infinity(f32)) { - return float32_zero; - } - - /* Scale and normalize to a double-precision value between 0.25 and 1.= 0, - * preserving the parity of the exponent. 
*/ - - f64_frac =3D ((uint64_t) f32_frac) << 29; - - f64_frac =3D recip_sqrt_estimate(&f32_exp, 380, f64_frac); - - /* result =3D sign : result_exp<4:0> : estimate<7:0> : Zeros(15) */ - val =3D deposit32(0, 31, 1, f32_sign); - val =3D deposit32(val, 23, 8, f32_exp); - val =3D deposit32(val, 15, 8, extract64(f64_frac, 52 - 8, 8)); - return make_float32(val); -} - -float64 HELPER(rsqrte_f64)(float64 input, void *fpstp) -{ - float_status *s =3D fpstp; - float64 f64 =3D float64_squash_input_denormal(input, s); - uint64_t val =3D float64_val(f64); - bool f64_sign =3D float64_is_neg(f64); - int f64_exp =3D extract64(val, 52, 11); - uint64_t f64_frac =3D extract64(val, 0, 52); - - if (float64_is_any_nan(f64)) { - float64 nan =3D f64; - if (float64_is_signaling_nan(f64, s)) { - float_raise(float_flag_invalid, s); - nan =3D float64_silence_nan(f64, s); - } - if (s->default_nan_mode) { - nan =3D float64_default_nan(s); - } - return nan; - } else if (float64_is_zero(f64)) { - float_raise(float_flag_divbyzero, s); - return float64_set_sign(float64_infinity, float64_is_neg(f64)); - } else if (float64_is_neg(f64)) { - float_raise(float_flag_invalid, s); - return float64_default_nan(s); - } else if (float64_is_infinity(f64)) { - return float64_zero; - } - - f64_frac =3D recip_sqrt_estimate(&f64_exp, 3068, f64_frac); - - /* result =3D sign : result_exp<4:0> : estimate<7:0> : Zeros(44) */ - val =3D deposit64(0, 61, 1, f64_sign); - val =3D deposit64(val, 52, 11, f64_exp); - val =3D deposit64(val, 44, 8, extract64(f64_frac, 52 - 8, 8)); - return make_float64(val); -} - -uint32_t HELPER(recpe_u32)(uint32_t a, void *fpstp) -{ - /* float_status *s =3D fpstp; */ - int input, estimate; - - if ((a & 0x80000000) =3D=3D 0) { - return 0xffffffff; - } - - input =3D extract32(a, 23, 9); - estimate =3D recip_estimate(input); - - return deposit32(0, (32 - 9), 9, estimate); -} - -uint32_t HELPER(rsqrte_u32)(uint32_t a, void *fpstp) -{ - int estimate; - - if ((a & 0xc0000000) =3D=3D 0) { - return 0xffffffff; - } - - estimate =3D do_recip_sqrt_estimate(extract32(a, 23, 9)); - - return deposit32(0, 23, 9, estimate); -} - -/* VFPv4 fused multiply-accumulate */ -float32 VFP_HELPER(muladd, s)(float32 a, float32 b, float32 c, void *fpstp) -{ - float_status *fpst =3D fpstp; - return float32_muladd(a, b, c, 0, fpst); -} - -float64 VFP_HELPER(muladd, d)(float64 a, float64 b, float64 c, void *fpstp) -{ - float_status *fpst =3D fpstp; - return float64_muladd(a, b, c, 0, fpst); -} - -/* ARMv8 round to integral */ -float32 HELPER(rints_exact)(float32 x, void *fp_status) -{ - return float32_round_to_int(x, fp_status); -} - -float64 HELPER(rintd_exact)(float64 x, void *fp_status) -{ - return float64_round_to_int(x, fp_status); -} - -float32 HELPER(rints)(float32 x, void *fp_status) -{ - int old_flags =3D get_float_exception_flags(fp_status), new_flags; - float32 ret; - - ret =3D float32_round_to_int(x, fp_status); - - /* Suppress any inexact exceptions the conversion produced */ - if (!(old_flags & float_flag_inexact)) { - new_flags =3D get_float_exception_flags(fp_status); - set_float_exception_flags(new_flags & ~float_flag_inexact, fp_stat= us); - } - - return ret; -} - -float64 HELPER(rintd)(float64 x, void *fp_status) -{ - int old_flags =3D get_float_exception_flags(fp_status), new_flags; - float64 ret; - - ret =3D float64_round_to_int(x, fp_status); - - new_flags =3D get_float_exception_flags(fp_status); - - /* Suppress any inexact exceptions the conversion produced */ - if (!(old_flags & float_flag_inexact)) { - new_flags =3D 
get_float_exception_flags(fp_status); - set_float_exception_flags(new_flags & ~float_flag_inexact, fp_stat= us); - } - - return ret; -} =20 /* Convert ARM rounding mode to softfloat */ int arm_rmode_to_sf(int rmode) diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c new file mode 100644 index 0000000000..a1b093710e --- /dev/null +++ b/target/arm/vfp_helper.c @@ -0,0 +1,893 @@ +/* + * ARM floating point operations. + * + * This code is licensed under the GNU GPL v2 and later. + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ +#include "qemu/osdep.h" +#include "cpu.h" +#include "exec/helper-proto.h" +#include "fpu/softfloat.h" + +#define VFP_HELPER(name, p) HELPER(glue(glue(vfp_, name), p)) + +#define VFP_BINOP(name) \ +float32 VFP_HELPER(name, s)(float32 a, float32 b, void *fpstp) \ +{ \ + float_status *fpst =3D fpstp; \ + return float32_ ## name(a, b, fpst); \ +} \ +float64 VFP_HELPER(name, d)(float64 a, float64 b, void *fpstp) \ +{ \ + float_status *fpst =3D fpstp; \ + return float64_ ## name(a, b, fpst); \ +} +VFP_BINOP(add) +VFP_BINOP(sub) +VFP_BINOP(mul) +VFP_BINOP(div) +VFP_BINOP(min) +VFP_BINOP(max) +VFP_BINOP(minnum) +VFP_BINOP(maxnum) +#undef VFP_BINOP + +float32 VFP_HELPER(neg, s)(float32 a) +{ + return float32_chs(a); +} + +float64 VFP_HELPER(neg, d)(float64 a) +{ + return float64_chs(a); +} + +float32 VFP_HELPER(abs, s)(float32 a) +{ + return float32_abs(a); +} + +float64 VFP_HELPER(abs, d)(float64 a) +{ + return float64_abs(a); +} + +float32 VFP_HELPER(sqrt, s)(float32 a, CPUARMState *env) +{ + return float32_sqrt(a, &env->vfp.fp_status); +} + +float64 VFP_HELPER(sqrt, d)(float64 a, CPUARMState *env) +{ + return float64_sqrt(a, &env->vfp.fp_status); +} + +/* XXX: check quiet/signaling case */ +#define DO_VFP_cmp(p, type) \ +void VFP_HELPER(cmp, p)(type a, type b, CPUARMState *env) \ +{ \ + uint32_t flags; \ + switch (type ## _compare_quiet(a, b, &env->vfp.fp_status)) { \ + case 0: \ + flags =3D 0x6; break; \ + case -1: \ + flags =3D 0x8; break; \ + case 1: \ + flags =3D 0x2; break; \ + case 2: \ + default: \ + flags =3D 0x3; break; \ + } \ + env->vfp.xregs[ARM_VFP_FPSCR] =3D (flags << 28) \ + | (env->vfp.xregs[ARM_VFP_FPSCR] & 0x0fffffff); \ +} \ +void VFP_HELPER(cmpe, p)(type a, type b, CPUARMState *env) \ +{ \ + uint32_t flags; \ + switch (type ## _compare(a, b, &env->vfp.fp_status)) { \ + case 0: \ + flags =3D 0x6; break; \ + case -1: \ + flags =3D 0x8; break; \ + case 1: \ + flags =3D 0x2; break; \ + case 2: \ + default: \ + flags =3D 0x3; break; \ + } \ + env->vfp.xregs[ARM_VFP_FPSCR] =3D (flags << 28) \ + | (env->vfp.xregs[ARM_VFP_FPSCR] & 0x0fffffff); \ +} +DO_VFP_cmp(s, float32) +DO_VFP_cmp(d, float64) +#undef DO_VFP_cmp + +/* Integer to float and float to integer conversions */ + +#define CONV_ITOF(name, ftype, fsz, sign) \ +ftype HELPER(name)(uint32_t x, void *fpstp) \ +{ \ + float_status *fpst =3D fpstp; \ + return sign##int32_to_##float##fsz((sign##int32_t)x, fpst); \ +} + +#define CONV_FTOI(name, ftype, fsz, sign, round) \ +sign##int32_t HELPER(name)(ftype x, void *fpstp) \ +{ \ + float_status *fpst =3D fpstp; \ + if (float##fsz##_is_any_nan(x)) { \ + float_raise(float_flag_invalid, fpst); \ + return 0; \ + } \ + return float##fsz##_to_##sign##int32##round(x, fpst); \ +} + +#define FLOAT_CONVS(name, p, ftype, fsz, sign) \ + CONV_ITOF(vfp_##name##to##p, ftype, fsz, sign) \ + CONV_FTOI(vfp_to##name##p, ftype, fsz, sign, ) \ + CONV_FTOI(vfp_to##name##z##p, ftype, fsz, sign, _round_to_zero) + +FLOAT_CONVS(si, h, uint32_t, 16, ) +FLOAT_CONVS(si, s, 
float32, 32, ) +FLOAT_CONVS(si, d, float64, 64, ) +FLOAT_CONVS(ui, h, uint32_t, 16, u) +FLOAT_CONVS(ui, s, float32, 32, u) +FLOAT_CONVS(ui, d, float64, 64, u) + +#undef CONV_ITOF +#undef CONV_FTOI +#undef FLOAT_CONVS + +/* floating point conversion */ +float64 VFP_HELPER(fcvtd, s)(float32 x, CPUARMState *env) +{ + return float32_to_float64(x, &env->vfp.fp_status); +} + +float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env) +{ + return float64_to_float32(x, &env->vfp.fp_status); +} + +/* VFP3 fixed point conversion. */ +#define VFP_CONV_FIX_FLOAT(name, p, fsz, isz, itype) \ +float##fsz HELPER(vfp_##name##to##p)(uint##isz##_t x, uint32_t shift, \ + void *fpstp) \ +{ return itype##_to_##float##fsz##_scalbn(x, -shift, fpstp); } + +#define VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, ROUND, suff) \ +uint##isz##_t HELPER(vfp_to##name##p##suff)(float##fsz x, uint32_t shift, \ + void *fpst) \ +{ \ + if (unlikely(float##fsz##_is_any_nan(x))) { \ + float_raise(float_flag_invalid, fpst); \ + return 0; \ + } \ + return float##fsz##_to_##itype##_scalbn(x, ROUND, shift, fpst); \ +} + +#define VFP_CONV_FIX(name, p, fsz, isz, itype) \ +VFP_CONV_FIX_FLOAT(name, p, fsz, isz, itype) \ +VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, \ + float_round_to_zero, _round_to_zero) \ +VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, \ + get_float_rounding_mode(fpst), ) + +#define VFP_CONV_FIX_A64(name, p, fsz, isz, itype) \ +VFP_CONV_FIX_FLOAT(name, p, fsz, isz, itype) \ +VFP_CONV_FLOAT_FIX_ROUND(name, p, fsz, isz, itype, \ + get_float_rounding_mode(fpst), ) + +VFP_CONV_FIX(sh, d, 64, 64, int16) +VFP_CONV_FIX(sl, d, 64, 64, int32) +VFP_CONV_FIX_A64(sq, d, 64, 64, int64) +VFP_CONV_FIX(uh, d, 64, 64, uint16) +VFP_CONV_FIX(ul, d, 64, 64, uint32) +VFP_CONV_FIX_A64(uq, d, 64, 64, uint64) +VFP_CONV_FIX(sh, s, 32, 32, int16) +VFP_CONV_FIX(sl, s, 32, 32, int32) +VFP_CONV_FIX_A64(sq, s, 32, 64, int64) +VFP_CONV_FIX(uh, s, 32, 32, uint16) +VFP_CONV_FIX(ul, s, 32, 32, uint32) +VFP_CONV_FIX_A64(uq, s, 32, 64, uint64) + +#undef VFP_CONV_FIX +#undef VFP_CONV_FIX_FLOAT +#undef VFP_CONV_FLOAT_FIX_ROUND +#undef VFP_CONV_FIX_A64 + +uint32_t HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst) +{ + return int32_to_float16_scalbn(x, -shift, fpst); +} + +uint32_t HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst) +{ + return uint32_to_float16_scalbn(x, -shift, fpst); +} + +uint32_t HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst) +{ + return int64_to_float16_scalbn(x, -shift, fpst); +} + +uint32_t HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst) +{ + return uint64_to_float16_scalbn(x, -shift, fpst); +} + +uint32_t HELPER(vfp_toshh)(uint32_t x, uint32_t shift, void *fpst) +{ + if (unlikely(float16_is_any_nan(x))) { + float_raise(float_flag_invalid, fpst); + return 0; + } + return float16_to_int16_scalbn(x, get_float_rounding_mode(fpst), + shift, fpst); +} + +uint32_t HELPER(vfp_touhh)(uint32_t x, uint32_t shift, void *fpst) +{ + if (unlikely(float16_is_any_nan(x))) { + float_raise(float_flag_invalid, fpst); + return 0; + } + return float16_to_uint16_scalbn(x, get_float_rounding_mode(fpst), + shift, fpst); +} + +uint32_t HELPER(vfp_toslh)(uint32_t x, uint32_t shift, void *fpst) +{ + if (unlikely(float16_is_any_nan(x))) { + float_raise(float_flag_invalid, fpst); + return 0; + } + return float16_to_int32_scalbn(x, get_float_rounding_mode(fpst), + shift, fpst); +} + +uint32_t HELPER(vfp_toulh)(uint32_t x, uint32_t shift, void *fpst) +{ + if (unlikely(float16_is_any_nan(x))) { + 
float_raise(float_flag_invalid, fpst); + return 0; + } + return float16_to_uint32_scalbn(x, get_float_rounding_mode(fpst), + shift, fpst); +} + +uint64_t HELPER(vfp_tosqh)(uint32_t x, uint32_t shift, void *fpst) +{ + if (unlikely(float16_is_any_nan(x))) { + float_raise(float_flag_invalid, fpst); + return 0; + } + return float16_to_int64_scalbn(x, get_float_rounding_mode(fpst), + shift, fpst); +} + +uint64_t HELPER(vfp_touqh)(uint32_t x, uint32_t shift, void *fpst) +{ + if (unlikely(float16_is_any_nan(x))) { + float_raise(float_flag_invalid, fpst); + return 0; + } + return float16_to_uint64_scalbn(x, get_float_rounding_mode(fpst), + shift, fpst); +} + +/* Set the current fp rounding mode and return the old one. + * The argument is a softfloat float_round_ value. + */ +uint32_t HELPER(set_rmode)(uint32_t rmode, void *fpstp) +{ + float_status *fp_status =3D fpstp; + + uint32_t prev_rmode =3D get_float_rounding_mode(fp_status); + set_float_rounding_mode(rmode, fp_status); + + return prev_rmode; +} + +/* Set the current fp rounding mode in the standard fp status and return + * the old one. This is for NEON instructions that need to change the + * rounding mode but wish to use the standard FPSCR values for everything + * else. Always set the rounding mode back to the correct value after + * modifying it. + * The argument is a softfloat float_round_ value. + */ +uint32_t HELPER(set_neon_rmode)(uint32_t rmode, CPUARMState *env) +{ + float_status *fp_status =3D &env->vfp.standard_fp_status; + + uint32_t prev_rmode =3D get_float_rounding_mode(fp_status); + set_float_rounding_mode(rmode, fp_status); + + return prev_rmode; +} + +/* Half precision conversions. */ +float32 HELPER(vfp_fcvt_f16_to_f32)(uint32_t a, void *fpstp, uint32_t ahp_= mode) +{ + /* Squash FZ16 to 0 for the duration of conversion. In this case, + * it would affect flushing input denormals. + */ + float_status *fpst =3D fpstp; + flag save =3D get_flush_inputs_to_zero(fpst); + set_flush_inputs_to_zero(false, fpst); + float32 r =3D float16_to_float32(a, !ahp_mode, fpst); + set_flush_inputs_to_zero(save, fpst); + return r; +} + +uint32_t HELPER(vfp_fcvt_f32_to_f16)(float32 a, void *fpstp, uint32_t ahp_= mode) +{ + /* Squash FZ16 to 0 for the duration of conversion. In this case, + * it would affect flushing output denormals. + */ + float_status *fpst =3D fpstp; + flag save =3D get_flush_to_zero(fpst); + set_flush_to_zero(false, fpst); + float16 r =3D float32_to_float16(a, !ahp_mode, fpst); + set_flush_to_zero(save, fpst); + return r; +} + +float64 HELPER(vfp_fcvt_f16_to_f64)(uint32_t a, void *fpstp, uint32_t ahp_= mode) +{ + /* Squash FZ16 to 0 for the duration of conversion. In this case, + * it would affect flushing input denormals. + */ + float_status *fpst =3D fpstp; + flag save =3D get_flush_inputs_to_zero(fpst); + set_flush_inputs_to_zero(false, fpst); + float64 r =3D float16_to_float64(a, !ahp_mode, fpst); + set_flush_inputs_to_zero(save, fpst); + return r; +} + +uint32_t HELPER(vfp_fcvt_f64_to_f16)(float64 a, void *fpstp, uint32_t ahp_= mode) +{ + /* Squash FZ16 to 0 for the duration of conversion. In this case, + * it would affect flushing output denormals. 
+ */ + float_status *fpst =3D fpstp; + flag save =3D get_flush_to_zero(fpst); + set_flush_to_zero(false, fpst); + float16 r =3D float64_to_float16(a, !ahp_mode, fpst); + set_flush_to_zero(save, fpst); + return r; +} + +#define float32_two make_float32(0x40000000) +#define float32_three make_float32(0x40400000) +#define float32_one_point_five make_float32(0x3fc00000) + +float32 HELPER(recps_f32)(float32 a, float32 b, CPUARMState *env) +{ + float_status *s =3D &env->vfp.standard_fp_status; + if ((float32_is_infinity(a) && float32_is_zero_or_denormal(b)) || + (float32_is_infinity(b) && float32_is_zero_or_denormal(a))) { + if (!(float32_is_zero(a) || float32_is_zero(b))) { + float_raise(float_flag_input_denormal, s); + } + return float32_two; + } + return float32_sub(float32_two, float32_mul(a, b, s), s); +} + +float32 HELPER(rsqrts_f32)(float32 a, float32 b, CPUARMState *env) +{ + float_status *s =3D &env->vfp.standard_fp_status; + float32 product; + if ((float32_is_infinity(a) && float32_is_zero_or_denormal(b)) || + (float32_is_infinity(b) && float32_is_zero_or_denormal(a))) { + if (!(float32_is_zero(a) || float32_is_zero(b))) { + float_raise(float_flag_input_denormal, s); + } + return float32_one_point_five; + } + product =3D float32_mul(a, b, s); + return float32_div(float32_sub(float32_three, product, s), float32_two= , s); +} + +/* NEON helpers. */ + +/* Constants 256 and 512 are used in some helpers; we avoid relying on + * int->float conversions at run-time. */ +#define float64_256 make_float64(0x4070000000000000LL) +#define float64_512 make_float64(0x4080000000000000LL) +#define float16_maxnorm make_float16(0x7bff) +#define float32_maxnorm make_float32(0x7f7fffff) +#define float64_maxnorm make_float64(0x7fefffffffffffffLL) + +/* Reciprocal functions + * + * The algorithm that must be used to calculate the estimate + * is specified by the ARM ARM, see FPRecipEstimate()/RecipEstimate + */ + +/* See RecipEstimate() + * + * input is a 9 bit fixed point number + * input range 256 .. 511 for a number from 0.5 <=3D x < 1.0. + * result range 256 .. 511 for a number from 1.0 to 511/256. + */ + +static int recip_estimate(int input) +{ + int a, b, r; + assert(256 <=3D input && input < 512); + a =3D (input * 2) + 1; + b =3D (1 << 19) / a; + r =3D (b + 1) >> 1; + assert(256 <=3D r && r < 512); + return r; +} + +/* + * Common wrapper to call recip_estimate + * + * The parameters are exponent and 64 bit fraction (without implicit + * bit) where the binary point is nominally at bit 52. Returns a + * float64 which can then be rounded to the appropriate size by the + * callee. 
+ */ + +static uint64_t call_recip_estimate(int *exp, int exp_off, uint64_t frac) +{ + uint32_t scaled, estimate; + uint64_t result_frac; + int result_exp; + + /* Handle sub-normals */ + if (*exp =3D=3D 0) { + if (extract64(frac, 51, 1) =3D=3D 0) { + *exp =3D -1; + frac <<=3D 2; + } else { + frac <<=3D 1; + } + } + + /* scaled =3D UInt('1':fraction<51:44>) */ + scaled =3D deposit32(1 << 8, 0, 8, extract64(frac, 44, 8)); + estimate =3D recip_estimate(scaled); + + result_exp =3D exp_off - *exp; + result_frac =3D deposit64(0, 44, 8, estimate); + if (result_exp =3D=3D 0) { + result_frac =3D deposit64(result_frac >> 1, 51, 1, 1); + } else if (result_exp =3D=3D -1) { + result_frac =3D deposit64(result_frac >> 2, 50, 2, 1); + result_exp =3D 0; + } + + *exp =3D result_exp; + + return result_frac; +} + +static bool round_to_inf(float_status *fpst, bool sign_bit) +{ + switch (fpst->float_rounding_mode) { + case float_round_nearest_even: /* Round to Nearest */ + return true; + case float_round_up: /* Round to +Inf */ + return !sign_bit; + case float_round_down: /* Round to -Inf */ + return sign_bit; + case float_round_to_zero: /* Round to Zero */ + return false; + } + + g_assert_not_reached(); +} + +uint32_t HELPER(recpe_f16)(uint32_t input, void *fpstp) +{ + float_status *fpst =3D fpstp; + float16 f16 =3D float16_squash_input_denormal(input, fpst); + uint32_t f16_val =3D float16_val(f16); + uint32_t f16_sign =3D float16_is_neg(f16); + int f16_exp =3D extract32(f16_val, 10, 5); + uint32_t f16_frac =3D extract32(f16_val, 0, 10); + uint64_t f64_frac; + + if (float16_is_any_nan(f16)) { + float16 nan =3D f16; + if (float16_is_signaling_nan(f16, fpst)) { + float_raise(float_flag_invalid, fpst); + nan =3D float16_silence_nan(f16, fpst); + } + if (fpst->default_nan_mode) { + nan =3D float16_default_nan(fpst); + } + return nan; + } else if (float16_is_infinity(f16)) { + return float16_set_sign(float16_zero, float16_is_neg(f16)); + } else if (float16_is_zero(f16)) { + float_raise(float_flag_divbyzero, fpst); + return float16_set_sign(float16_infinity, float16_is_neg(f16)); + } else if (float16_abs(f16) < (1 << 8)) { + /* Abs(value) < 2.0^-16 */ + float_raise(float_flag_overflow | float_flag_inexact, fpst); + if (round_to_inf(fpst, f16_sign)) { + return float16_set_sign(float16_infinity, f16_sign); + } else { + return float16_set_sign(float16_maxnorm, f16_sign); + } + } else if (f16_exp >=3D 29 && fpst->flush_to_zero) { + float_raise(float_flag_underflow, fpst); + return float16_set_sign(float16_zero, float16_is_neg(f16)); + } + + f64_frac =3D call_recip_estimate(&f16_exp, 29, + ((uint64_t) f16_frac) << (52 - 10)); + + /* result =3D sign : result_exp<4:0> : fraction<51:42> */ + f16_val =3D deposit32(0, 15, 1, f16_sign); + f16_val =3D deposit32(f16_val, 10, 5, f16_exp); + f16_val =3D deposit32(f16_val, 0, 10, extract64(f64_frac, 52 - 10, 10)= ); + return make_float16(f16_val); +} + +float32 HELPER(recpe_f32)(float32 input, void *fpstp) +{ + float_status *fpst =3D fpstp; + float32 f32 =3D float32_squash_input_denormal(input, fpst); + uint32_t f32_val =3D float32_val(f32); + bool f32_sign =3D float32_is_neg(f32); + int f32_exp =3D extract32(f32_val, 23, 8); + uint32_t f32_frac =3D extract32(f32_val, 0, 23); + uint64_t f64_frac; + + if (float32_is_any_nan(f32)) { + float32 nan =3D f32; + if (float32_is_signaling_nan(f32, fpst)) { + float_raise(float_flag_invalid, fpst); + nan =3D float32_silence_nan(f32, fpst); + } + if (fpst->default_nan_mode) { + nan =3D float32_default_nan(fpst); + } + return nan; + } else if 
(float32_is_infinity(f32)) { + return float32_set_sign(float32_zero, float32_is_neg(f32)); + } else if (float32_is_zero(f32)) { + float_raise(float_flag_divbyzero, fpst); + return float32_set_sign(float32_infinity, float32_is_neg(f32)); + } else if (float32_abs(f32) < (1ULL << 21)) { + /* Abs(value) < 2.0^-128 */ + float_raise(float_flag_overflow | float_flag_inexact, fpst); + if (round_to_inf(fpst, f32_sign)) { + return float32_set_sign(float32_infinity, f32_sign); + } else { + return float32_set_sign(float32_maxnorm, f32_sign); + } + } else if (f32_exp >=3D 253 && fpst->flush_to_zero) { + float_raise(float_flag_underflow, fpst); + return float32_set_sign(float32_zero, float32_is_neg(f32)); + } + + f64_frac =3D call_recip_estimate(&f32_exp, 253, + ((uint64_t) f32_frac) << (52 - 23)); + + /* result =3D sign : result_exp<7:0> : fraction<51:29> */ + f32_val =3D deposit32(0, 31, 1, f32_sign); + f32_val =3D deposit32(f32_val, 23, 8, f32_exp); + f32_val =3D deposit32(f32_val, 0, 23, extract64(f64_frac, 52 - 23, 23)= ); + return make_float32(f32_val); +} + +float64 HELPER(recpe_f64)(float64 input, void *fpstp) +{ + float_status *fpst =3D fpstp; + float64 f64 =3D float64_squash_input_denormal(input, fpst); + uint64_t f64_val =3D float64_val(f64); + bool f64_sign =3D float64_is_neg(f64); + int f64_exp =3D extract64(f64_val, 52, 11); + uint64_t f64_frac =3D extract64(f64_val, 0, 52); + + /* Deal with any special cases */ + if (float64_is_any_nan(f64)) { + float64 nan =3D f64; + if (float64_is_signaling_nan(f64, fpst)) { + float_raise(float_flag_invalid, fpst); + nan =3D float64_silence_nan(f64, fpst); + } + if (fpst->default_nan_mode) { + nan =3D float64_default_nan(fpst); + } + return nan; + } else if (float64_is_infinity(f64)) { + return float64_set_sign(float64_zero, float64_is_neg(f64)); + } else if (float64_is_zero(f64)) { + float_raise(float_flag_divbyzero, fpst); + return float64_set_sign(float64_infinity, float64_is_neg(f64)); + } else if ((f64_val & ~(1ULL << 63)) < (1ULL << 50)) { + /* Abs(value) < 2.0^-1024 */ + float_raise(float_flag_overflow | float_flag_inexact, fpst); + if (round_to_inf(fpst, f64_sign)) { + return float64_set_sign(float64_infinity, f64_sign); + } else { + return float64_set_sign(float64_maxnorm, f64_sign); + } + } else if (f64_exp >=3D 2045 && fpst->flush_to_zero) { + float_raise(float_flag_underflow, fpst); + return float64_set_sign(float64_zero, float64_is_neg(f64)); + } + + f64_frac =3D call_recip_estimate(&f64_exp, 2045, f64_frac); + + /* result =3D sign : result_exp<10:0> : fraction<51:0>; */ + f64_val =3D deposit64(0, 63, 1, f64_sign); + f64_val =3D deposit64(f64_val, 52, 11, f64_exp); + f64_val =3D deposit64(f64_val, 0, 52, f64_frac); + return make_float64(f64_val); +} + +/* The algorithm that must be used to calculate the estimate + * is specified by the ARM ARM. 
+ */ + +static int do_recip_sqrt_estimate(int a) +{ + int b, estimate; + + assert(128 <=3D a && a < 512); + if (a < 256) { + a =3D a * 2 + 1; + } else { + a =3D (a >> 1) << 1; + a =3D (a + 1) * 2; + } + b =3D 512; + while (a * (b + 1) * (b + 1) < (1 << 28)) { + b +=3D 1; + } + estimate =3D (b + 1) / 2; + assert(256 <=3D estimate && estimate < 512); + + return estimate; +} + + +static uint64_t recip_sqrt_estimate(int *exp , int exp_off, uint64_t frac) +{ + int estimate; + uint32_t scaled; + + if (*exp =3D=3D 0) { + while (extract64(frac, 51, 1) =3D=3D 0) { + frac =3D frac << 1; + *exp -=3D 1; + } + frac =3D extract64(frac, 0, 51) << 1; + } + + if (*exp & 1) { + /* scaled =3D UInt('01':fraction<51:45>) */ + scaled =3D deposit32(1 << 7, 0, 7, extract64(frac, 45, 7)); + } else { + /* scaled =3D UInt('1':fraction<51:44>) */ + scaled =3D deposit32(1 << 8, 0, 8, extract64(frac, 44, 8)); + } + estimate =3D do_recip_sqrt_estimate(scaled); + + *exp =3D (exp_off - *exp) / 2; + return extract64(estimate, 0, 8) << 44; +} + +uint32_t HELPER(rsqrte_f16)(uint32_t input, void *fpstp) +{ + float_status *s =3D fpstp; + float16 f16 =3D float16_squash_input_denormal(input, s); + uint16_t val =3D float16_val(f16); + bool f16_sign =3D float16_is_neg(f16); + int f16_exp =3D extract32(val, 10, 5); + uint16_t f16_frac =3D extract32(val, 0, 10); + uint64_t f64_frac; + + if (float16_is_any_nan(f16)) { + float16 nan =3D f16; + if (float16_is_signaling_nan(f16, s)) { + float_raise(float_flag_invalid, s); + nan =3D float16_silence_nan(f16, s); + } + if (s->default_nan_mode) { + nan =3D float16_default_nan(s); + } + return nan; + } else if (float16_is_zero(f16)) { + float_raise(float_flag_divbyzero, s); + return float16_set_sign(float16_infinity, f16_sign); + } else if (f16_sign) { + float_raise(float_flag_invalid, s); + return float16_default_nan(s); + } else if (float16_is_infinity(f16)) { + return float16_zero; + } + + /* Scale and normalize to a double-precision value between 0.25 and 1.= 0, + * preserving the parity of the exponent. */ + + f64_frac =3D ((uint64_t) f16_frac) << (52 - 10); + + f64_frac =3D recip_sqrt_estimate(&f16_exp, 44, f64_frac); + + /* result =3D sign : result_exp<4:0> : estimate<7:0> : Zeros(2) */ + val =3D deposit32(0, 15, 1, f16_sign); + val =3D deposit32(val, 10, 5, f16_exp); + val =3D deposit32(val, 2, 8, extract64(f64_frac, 52 - 8, 8)); + return make_float16(val); +} + +float32 HELPER(rsqrte_f32)(float32 input, void *fpstp) +{ + float_status *s =3D fpstp; + float32 f32 =3D float32_squash_input_denormal(input, s); + uint32_t val =3D float32_val(f32); + uint32_t f32_sign =3D float32_is_neg(f32); + int f32_exp =3D extract32(val, 23, 8); + uint32_t f32_frac =3D extract32(val, 0, 23); + uint64_t f64_frac; + + if (float32_is_any_nan(f32)) { + float32 nan =3D f32; + if (float32_is_signaling_nan(f32, s)) { + float_raise(float_flag_invalid, s); + nan =3D float32_silence_nan(f32, s); + } + if (s->default_nan_mode) { + nan =3D float32_default_nan(s); + } + return nan; + } else if (float32_is_zero(f32)) { + float_raise(float_flag_divbyzero, s); + return float32_set_sign(float32_infinity, float32_is_neg(f32)); + } else if (float32_is_neg(f32)) { + float_raise(float_flag_invalid, s); + return float32_default_nan(s); + } else if (float32_is_infinity(f32)) { + return float32_zero; + } + + /* Scale and normalize to a double-precision value between 0.25 and 1.= 0, + * preserving the parity of the exponent. 
*/ + + f64_frac =3D ((uint64_t) f32_frac) << 29; + + f64_frac =3D recip_sqrt_estimate(&f32_exp, 380, f64_frac); + + /* result =3D sign : result_exp<4:0> : estimate<7:0> : Zeros(15) */ + val =3D deposit32(0, 31, 1, f32_sign); + val =3D deposit32(val, 23, 8, f32_exp); + val =3D deposit32(val, 15, 8, extract64(f64_frac, 52 - 8, 8)); + return make_float32(val); +} + +float64 HELPER(rsqrte_f64)(float64 input, void *fpstp) +{ + float_status *s =3D fpstp; + float64 f64 =3D float64_squash_input_denormal(input, s); + uint64_t val =3D float64_val(f64); + bool f64_sign =3D float64_is_neg(f64); + int f64_exp =3D extract64(val, 52, 11); + uint64_t f64_frac =3D extract64(val, 0, 52); + + if (float64_is_any_nan(f64)) { + float64 nan =3D f64; + if (float64_is_signaling_nan(f64, s)) { + float_raise(float_flag_invalid, s); + nan =3D float64_silence_nan(f64, s); + } + if (s->default_nan_mode) { + nan =3D float64_default_nan(s); + } + return nan; + } else if (float64_is_zero(f64)) { + float_raise(float_flag_divbyzero, s); + return float64_set_sign(float64_infinity, float64_is_neg(f64)); + } else if (float64_is_neg(f64)) { + float_raise(float_flag_invalid, s); + return float64_default_nan(s); + } else if (float64_is_infinity(f64)) { + return float64_zero; + } + + f64_frac =3D recip_sqrt_estimate(&f64_exp, 3068, f64_frac); + + /* result =3D sign : result_exp<4:0> : estimate<7:0> : Zeros(44) */ + val =3D deposit64(0, 61, 1, f64_sign); + val =3D deposit64(val, 52, 11, f64_exp); + val =3D deposit64(val, 44, 8, extract64(f64_frac, 52 - 8, 8)); + return make_float64(val); +} + +uint32_t HELPER(recpe_u32)(uint32_t a, void *fpstp) +{ + /* float_status *s =3D fpstp; */ + int input, estimate; + + if ((a & 0x80000000) =3D=3D 0) { + return 0xffffffff; + } + + input =3D extract32(a, 23, 9); + estimate =3D recip_estimate(input); + + return deposit32(0, (32 - 9), 9, estimate); +} + +uint32_t HELPER(rsqrte_u32)(uint32_t a, void *fpstp) +{ + int estimate; + + if ((a & 0xc0000000) =3D=3D 0) { + return 0xffffffff; + } + + estimate =3D do_recip_sqrt_estimate(extract32(a, 23, 9)); + + return deposit32(0, 23, 9, estimate); +} + +/* VFPv4 fused multiply-accumulate */ +float32 VFP_HELPER(muladd, s)(float32 a, float32 b, float32 c, void *fpstp) +{ + float_status *fpst =3D fpstp; + return float32_muladd(a, b, c, 0, fpst); +} + +float64 VFP_HELPER(muladd, d)(float64 a, float64 b, float64 c, void *fpstp) +{ + float_status *fpst =3D fpstp; + return float64_muladd(a, b, c, 0, fpst); +} + +/* ARMv8 round to integral */ +float32 HELPER(rints_exact)(float32 x, void *fp_status) +{ + return float32_round_to_int(x, fp_status); +} + +float64 HELPER(rintd_exact)(float64 x, void *fp_status) +{ + return float64_round_to_int(x, fp_status); +} + +float32 HELPER(rints)(float32 x, void *fp_status) +{ + int old_flags =3D get_float_exception_flags(fp_status), new_flags; + float32 ret; + + ret =3D float32_round_to_int(x, fp_status); + + /* Suppress any inexact exceptions the conversion produced */ + if (!(old_flags & float_flag_inexact)) { + new_flags =3D get_float_exception_flags(fp_status); + set_float_exception_flags(new_flags & ~float_flag_inexact, fp_stat= us); + } + + return ret; +} + +float64 HELPER(rintd)(float64 x, void *fp_status) +{ + int old_flags =3D get_float_exception_flags(fp_status), new_flags; + float64 ret; + + ret =3D float64_round_to_int(x, fp_status); + + new_flags =3D get_float_exception_flags(fp_status); + + /* Suppress any inexact exceptions the conversion produced */ + if (!(old_flags & float_flag_inexact)) { + new_flags =3D 
get_float_exception_flags(fp_status); + set_float_exception_flags(new_flags & ~float_flag_inexact, fp_stat= us); + } + + return ret; +} diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs index 85e35ee668..6e880df727 100644 --- a/target/arm/Makefile.objs +++ b/target/arm/Makefile.objs @@ -6,7 +6,7 @@ obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) +=3D kvm64= .o obj-$(call lnot,$(CONFIG_KVM)) +=3D kvm-stub.o obj-y +=3D translate.o op_helper.o helper.o cpu.o obj-y +=3D neon_helper.o iwmmxt_helper.o vec_helper.o -obj-y +=3D gdbstub.o m_helper.o excp_helper.o +obj-y +=3D gdbstub.o m_helper.o excp_helper.o vfp_helper.o obj-$(TARGET_AARCH64) +=3D cpu64.o translate-a64.o helper-a64.o gdbstub64.o obj-y +=3D crypto_helper.o obj-$(CONFIG_SOFTMMU) +=3D arm-powerctl.o --=20 2.19.1 From nobody Thu Nov 6 14:16:22 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=linux.intel.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1542129053268740.128572252524; Tue, 13 Nov 2018 09:10:53 -0800 (PST) Received: from localhost ([::1]:55182 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcDD-0002PS-ME for importer@patchew.org; Tue, 13 Nov 2018 12:10:51 -0500 Received: from eggs.gnu.org ([2001:4830:134:3::10]:36371) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcAa-0007Wl-2W for qemu-devel@nongnu.org; Tue, 13 Nov 2018 12:08:14 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1gMbwl-0003yu-G3 for qemu-devel@nongnu.org; Tue, 13 Nov 2018 11:53:53 -0500 Received: from mga06.intel.com ([134.134.136.31]:62850) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1gMbwi-0003nD-E2; Tue, 13 Nov 2018 11:53:51 -0500 Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga104.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 13 Nov 2018 08:53:44 -0800 Received: from qingbai-mobl.ger.corp.intel.com (HELO caravaggio.ger.corp.intel.com) ([10.249.41.239]) by orsmga001.jf.intel.com with ESMTP; 13 Nov 2018 08:53:42 -0800 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,228,1539673200"; d="scan'208";a="107922117" From: Samuel Ortiz To: qemu-devel@nongnu.org Date: Tue, 13 Nov 2018 17:52:43 +0100 Message-Id: <20181113165247.4806-10-sameo@linux.intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com> References: <20181113165247.4806-1-sameo@linux.intel.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. 
X-Received-From: 134.134.136.31 Subject: [Qemu-devel] [PATCH 09/13] target: arm: Move CPU state dumping routines to helper.c X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, richard.henderson@linaro.org Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" They're not TCG-specific and should live in the generic helper file instead. Signed-off-by: Samuel Ortiz Reviewed-by: Robert Bradford --- target/arm/internals.h | 12 +++ target/arm/translate.h | 7 -- target/arm/helper.c | 214 +++++++++++++++++++++++++++++++++++++ target/arm/translate-a64.c | 125 ---------------------- target/arm/translate.c | 87 --------------- 5 files changed, 226 insertions(+), 219 deletions(-) diff --git a/target/arm/internals.h b/target/arm/internals.h index 06439467d2..ddb594d58d 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -937,4 +937,16 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t addr= ess, int *prot, bool *is_subpage, ARMMMUFaultInfo *fi, uint32_t *mregion); =20 +#ifdef TARGET_AARCH64 +void aarch64_cpu_dump_state(CPUState *cs, FILE *f, + fprintf_function cpu_fprintf, int flags); + +#else +static inline void aarch64_cpu_dump_state(CPUState *cs, FILE *f, + fprintf_function cpu_fprintf, + int flags) +{ +} +#endif + #endif diff --git a/target/arm/translate.h b/target/arm/translate.h index 1550aa8bc7..059645c23c 100644 --- a/target/arm/translate.h +++ b/target/arm/translate.h @@ -155,8 +155,6 @@ static inline void disas_set_insn_syndrome(DisasContext= *s, uint32_t syn) #ifdef TARGET_AARCH64 void a64_translate_init(void); void gen_a64_set_pc_im(uint64_t val); -void aarch64_cpu_dump_state(CPUState *cs, FILE *f, - fprintf_function cpu_fprintf, int flags); extern const TranslatorOps aarch64_translator_ops; #else static inline void a64_translate_init(void) @@ -167,11 +165,6 @@ static inline void gen_a64_set_pc_im(uint64_t val) { } =20 -static inline void aarch64_cpu_dump_state(CPUState *cs, FILE *f, - fprintf_function cpu_fprintf, - int flags) -{ -} #endif =20 void arm_test_cc(DisasCompare *cmp, int cc); diff --git a/target/arm/helper.c b/target/arm/helper.c index 996dfbbda2..ff3011fcb6 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -9555,4 +9555,218 @@ void aarch64_sve_change_el(CPUARMState *env, int ol= d_el, aarch64_sve_narrow_vq(env, new_len + 1); } } + +void aarch64_cpu_dump_state(CPUState *cs, FILE *f, + fprintf_function cpu_fprintf, int flags) +{ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + uint32_t psr =3D pstate_read(env); + int i; + int el =3D arm_current_el(env); + const char *ns_status; + + cpu_fprintf(f, " PC=3D%016" PRIx64 " ", env->pc); + for (i =3D 0; i < 32; i++) { + if (i =3D=3D 31) { + cpu_fprintf(f, " SP=3D%016" PRIx64 "\n", env->xregs[i]); + } else { + cpu_fprintf(f, "X%02d=3D%016" PRIx64 "%s", i, env->xregs[i], + (i + 2) % 3 ? " " : "\n"); + } + } + + if (arm_feature(env, ARM_FEATURE_EL3) && el !=3D 3) { + ns_status =3D env->cp15.scr_el3 & SCR_NS ? "NS " : "S "; + } else { + ns_status =3D ""; + } + cpu_fprintf(f, "PSTATE=3D%08x %c%c%c%c %sEL%d%c", + psr, + psr & PSTATE_N ? 'N' : '-', + psr & PSTATE_Z ? 'Z' : '-', + psr & PSTATE_C ? 'C' : '-', + psr & PSTATE_V ? 'V' : '-', + ns_status, + el, + psr & PSTATE_SP ? 
'h' : 't'); + + if (!(flags & CPU_DUMP_FPU)) { + cpu_fprintf(f, "\n"); + return; + } + if (fp_exception_el(env, el) !=3D 0) { + cpu_fprintf(f, " FPU disabled\n"); + return; + } + cpu_fprintf(f, " FPCR=3D%08x FPSR=3D%08x\n", + vfp_get_fpcr(env), vfp_get_fpsr(env)); + + if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) =3D= =3D 0) { + int j, zcr_len =3D sve_zcr_len_for_el(env, el); + + for (i =3D 0; i <=3D FFR_PRED_NUM; i++) { + bool eol; + if (i =3D=3D FFR_PRED_NUM) { + cpu_fprintf(f, "FFR=3D"); + /* It's last, so end the line. */ + eol =3D true; + } else { + cpu_fprintf(f, "P%02d=3D", i); + switch (zcr_len) { + case 0: + eol =3D i % 8 =3D=3D 7; + break; + case 1: + eol =3D i % 6 =3D=3D 5; + break; + case 2: + case 3: + eol =3D i % 3 =3D=3D 2; + break; + default: + /* More than one quadword per predicate. */ + eol =3D true; + break; + } + } + for (j =3D zcr_len / 4; j >=3D 0; j--) { + int digits; + if (j * 4 + 4 <=3D zcr_len + 1) { + digits =3D 16; + } else { + digits =3D (zcr_len % 4 + 1) * 4; + } + cpu_fprintf(f, "%0*" PRIx64 "%s", digits, + env->vfp.pregs[i].p[j], + j ? ":" : eol ? "\n" : " "); + } + } + + for (i =3D 0; i < 32; i++) { + if (zcr_len =3D=3D 0) { + cpu_fprintf(f, "Z%02d=3D%016" PRIx64 ":%016" PRIx64 "%s", + i, env->vfp.zregs[i].d[1], + env->vfp.zregs[i].d[0], i & 1 ? "\n" : " "); + } else if (zcr_len =3D=3D 1) { + cpu_fprintf(f, "Z%02d=3D%016" PRIx64 ":%016" PRIx64 + ":%016" PRIx64 ":%016" PRIx64 "\n", + i, env->vfp.zregs[i].d[3], env->vfp.zregs[i].d= [2], + env->vfp.zregs[i].d[1], env->vfp.zregs[i].d[0]= ); + } else { + for (j =3D zcr_len; j >=3D 0; j--) { + bool odd =3D (zcr_len - j) % 2 !=3D 0; + if (j =3D=3D zcr_len) { + cpu_fprintf(f, "Z%02d[%x-%x]=3D", i, j, j - 1); + } else if (!odd) { + if (j > 0) { + cpu_fprintf(f, " [%x-%x]=3D", j, j - 1); + } else { + cpu_fprintf(f, " [%x]=3D", j); + } + } + cpu_fprintf(f, "%016" PRIx64 ":%016" PRIx64 "%s", + env->vfp.zregs[i].d[j * 2 + 1], + env->vfp.zregs[i].d[j * 2], + odd || j =3D=3D 0 ? "\n" : ":"); + } + } + } + } else { + for (i =3D 0; i < 32; i++) { + uint64_t *q =3D aa64_vfp_qreg(env, i); + cpu_fprintf(f, "Q%02d=3D%016" PRIx64 ":%016" PRIx64 "%s", + i, q[1], q[0], (i & 1 ? "\n" : " ")); + } + } +} + #endif + +void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprint= f, + int flags) +{ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + int i; + + if (is_a64(env)) { + aarch64_cpu_dump_state(cs, f, cpu_fprintf, flags); + return; + } + + for (i =3D 0; i < 16; i++) { + cpu_fprintf(f, "R%02d=3D%08x", i, env->regs[i]); + if ((i % 4) =3D=3D 3) { + cpu_fprintf(f, "\n"); + } else { + cpu_fprintf(f, " "); + } + } + + if (arm_feature(env, ARM_FEATURE_M)) { + uint32_t xpsr =3D xpsr_read(env); + const char *mode; + const char *ns_status =3D ""; + + if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { + ns_status =3D env->v7m.secure ? "S " : "NS "; + } + + if (xpsr & XPSR_EXCP) { + mode =3D "handler"; + } else { + if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_NPRIV_MA= SK) { + mode =3D "unpriv-thread"; + } else { + mode =3D "priv-thread"; + } + } + + cpu_fprintf(f, "XPSR=3D%08x %c%c%c%c %c %s%s\n", + xpsr, + xpsr & XPSR_N ? 'N' : '-', + xpsr & XPSR_Z ? 'Z' : '-', + xpsr & XPSR_C ? 'C' : '-', + xpsr & XPSR_V ? 'V' : '-', + xpsr & XPSR_T ? 'T' : 'A', + ns_status, + mode); + } else { + uint32_t psr =3D cpsr_read(env); + const char *ns_status =3D ""; + + if (arm_feature(env, ARM_FEATURE_EL3) && + (psr & CPSR_M) !=3D ARM_CPU_MODE_MON) { + ns_status =3D env->cp15.scr_el3 & SCR_NS ? 
"NS " : "S "; + } + + cpu_fprintf(f, "PSR=3D%08x %c%c%c%c %c %s%s%d\n", + psr, + psr & CPSR_N ? 'N' : '-', + psr & CPSR_Z ? 'Z' : '-', + psr & CPSR_C ? 'C' : '-', + psr & CPSR_V ? 'V' : '-', + psr & CPSR_T ? 'T' : 'A', + ns_status, + aarch32_mode_name(psr), (psr & 0x10) ? 32 : 26); + } + + if (flags & CPU_DUMP_FPU) { + int numvfpregs =3D 0; + if (arm_feature(env, ARM_FEATURE_VFP)) { + numvfpregs +=3D 16; + } + if (arm_feature(env, ARM_FEATURE_VFP3)) { + numvfpregs +=3D 16; + } + for (i =3D 0; i < numvfpregs; i++) { + uint64_t v =3D *aa32_vfp_dreg(env, i); + cpu_fprintf(f, "s%02d=3D%08x s%02d=3D%08x d%02d=3D%016" PRIx64= "\n", + i * 2, (uint32_t)v, + i * 2 + 1, (uint32_t)(v >> 32), + i, v); + } + cpu_fprintf(f, "FPSCR: %08x\n", (int)env->vfp.xregs[ARM_VFP_FPSCR]= ); + } +} diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c index fd36425f1a..642df2f821 100644 --- a/target/arm/translate-a64.c +++ b/target/arm/translate-a64.c @@ -128,131 +128,6 @@ static inline int get_a64_user_mem_index(DisasContext= *s) return arm_to_core_mmu_idx(useridx); } =20 -void aarch64_cpu_dump_state(CPUState *cs, FILE *f, - fprintf_function cpu_fprintf, int flags) -{ - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - uint32_t psr =3D pstate_read(env); - int i; - int el =3D arm_current_el(env); - const char *ns_status; - - cpu_fprintf(f, " PC=3D%016" PRIx64 " ", env->pc); - for (i =3D 0; i < 32; i++) { - if (i =3D=3D 31) { - cpu_fprintf(f, " SP=3D%016" PRIx64 "\n", env->xregs[i]); - } else { - cpu_fprintf(f, "X%02d=3D%016" PRIx64 "%s", i, env->xregs[i], - (i + 2) % 3 ? " " : "\n"); - } - } - - if (arm_feature(env, ARM_FEATURE_EL3) && el !=3D 3) { - ns_status =3D env->cp15.scr_el3 & SCR_NS ? "NS " : "S "; - } else { - ns_status =3D ""; - } - cpu_fprintf(f, "PSTATE=3D%08x %c%c%c%c %sEL%d%c", - psr, - psr & PSTATE_N ? 'N' : '-', - psr & PSTATE_Z ? 'Z' : '-', - psr & PSTATE_C ? 'C' : '-', - psr & PSTATE_V ? 'V' : '-', - ns_status, - el, - psr & PSTATE_SP ? 'h' : 't'); - - if (!(flags & CPU_DUMP_FPU)) { - cpu_fprintf(f, "\n"); - return; - } - if (fp_exception_el(env, el) !=3D 0) { - cpu_fprintf(f, " FPU disabled\n"); - return; - } - cpu_fprintf(f, " FPCR=3D%08x FPSR=3D%08x\n", - vfp_get_fpcr(env), vfp_get_fpsr(env)); - - if (cpu_isar_feature(aa64_sve, cpu) && sve_exception_el(env, el) =3D= =3D 0) { - int j, zcr_len =3D sve_zcr_len_for_el(env, el); - - for (i =3D 0; i <=3D FFR_PRED_NUM; i++) { - bool eol; - if (i =3D=3D FFR_PRED_NUM) { - cpu_fprintf(f, "FFR=3D"); - /* It's last, so end the line. */ - eol =3D true; - } else { - cpu_fprintf(f, "P%02d=3D", i); - switch (zcr_len) { - case 0: - eol =3D i % 8 =3D=3D 7; - break; - case 1: - eol =3D i % 6 =3D=3D 5; - break; - case 2: - case 3: - eol =3D i % 3 =3D=3D 2; - break; - default: - /* More than one quadword per predicate. */ - eol =3D true; - break; - } - } - for (j =3D zcr_len / 4; j >=3D 0; j--) { - int digits; - if (j * 4 + 4 <=3D zcr_len + 1) { - digits =3D 16; - } else { - digits =3D (zcr_len % 4 + 1) * 4; - } - cpu_fprintf(f, "%0*" PRIx64 "%s", digits, - env->vfp.pregs[i].p[j], - j ? ":" : eol ? "\n" : " "); - } - } - - for (i =3D 0; i < 32; i++) { - if (zcr_len =3D=3D 0) { - cpu_fprintf(f, "Z%02d=3D%016" PRIx64 ":%016" PRIx64 "%s", - i, env->vfp.zregs[i].d[1], - env->vfp.zregs[i].d[0], i & 1 ? 
"\n" : " "); - } else if (zcr_len =3D=3D 1) { - cpu_fprintf(f, "Z%02d=3D%016" PRIx64 ":%016" PRIx64 - ":%016" PRIx64 ":%016" PRIx64 "\n", - i, env->vfp.zregs[i].d[3], env->vfp.zregs[i].d= [2], - env->vfp.zregs[i].d[1], env->vfp.zregs[i].d[0]= ); - } else { - for (j =3D zcr_len; j >=3D 0; j--) { - bool odd =3D (zcr_len - j) % 2 !=3D 0; - if (j =3D=3D zcr_len) { - cpu_fprintf(f, "Z%02d[%x-%x]=3D", i, j, j - 1); - } else if (!odd) { - if (j > 0) { - cpu_fprintf(f, " [%x-%x]=3D", j, j - 1); - } else { - cpu_fprintf(f, " [%x]=3D", j); - } - } - cpu_fprintf(f, "%016" PRIx64 ":%016" PRIx64 "%s", - env->vfp.zregs[i].d[j * 2 + 1], - env->vfp.zregs[i].d[j * 2], - odd || j =3D=3D 0 ? "\n" : ":"); - } - } - } - } else { - for (i =3D 0; i < 32; i++) { - uint64_t *q =3D aa64_vfp_qreg(env, i); - cpu_fprintf(f, "Q%02d=3D%016" PRIx64 ":%016" PRIx64 "%s", - i, q[1], q[0], (i & 1 ? "\n" : " ")); - } - } -} - void gen_a64_set_pc_im(uint64_t val) { tcg_gen_movi_i64(cpu_pc, val); diff --git a/target/arm/translate.c b/target/arm/translate.c index 7c4675ffd8..839ca85cd4 100644 --- a/target/arm/translate.c +++ b/target/arm/translate.c @@ -13528,93 +13528,6 @@ void gen_intermediate_code(CPUState *cpu, Translat= ionBlock *tb) translator_loop(ops, &dc.base, cpu, tb); } =20 -void arm_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprint= f, - int flags) -{ - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - int i; - - if (is_a64(env)) { - aarch64_cpu_dump_state(cs, f, cpu_fprintf, flags); - return; - } - - for(i=3D0;i<16;i++) { - cpu_fprintf(f, "R%02d=3D%08x", i, env->regs[i]); - if ((i % 4) =3D=3D 3) - cpu_fprintf(f, "\n"); - else - cpu_fprintf(f, " "); - } - - if (arm_feature(env, ARM_FEATURE_M)) { - uint32_t xpsr =3D xpsr_read(env); - const char *mode; - const char *ns_status =3D ""; - - if (arm_feature(env, ARM_FEATURE_M_SECURITY)) { - ns_status =3D env->v7m.secure ? "S " : "NS "; - } - - if (xpsr & XPSR_EXCP) { - mode =3D "handler"; - } else { - if (env->v7m.control[env->v7m.secure] & R_V7M_CONTROL_NPRIV_MA= SK) { - mode =3D "unpriv-thread"; - } else { - mode =3D "priv-thread"; - } - } - - cpu_fprintf(f, "XPSR=3D%08x %c%c%c%c %c %s%s\n", - xpsr, - xpsr & XPSR_N ? 'N' : '-', - xpsr & XPSR_Z ? 'Z' : '-', - xpsr & XPSR_C ? 'C' : '-', - xpsr & XPSR_V ? 'V' : '-', - xpsr & XPSR_T ? 'T' : 'A', - ns_status, - mode); - } else { - uint32_t psr =3D cpsr_read(env); - const char *ns_status =3D ""; - - if (arm_feature(env, ARM_FEATURE_EL3) && - (psr & CPSR_M) !=3D ARM_CPU_MODE_MON) { - ns_status =3D env->cp15.scr_el3 & SCR_NS ? "NS " : "S "; - } - - cpu_fprintf(f, "PSR=3D%08x %c%c%c%c %c %s%s%d\n", - psr, - psr & CPSR_N ? 'N' : '-', - psr & CPSR_Z ? 'Z' : '-', - psr & CPSR_C ? 'C' : '-', - psr & CPSR_V ? 'V' : '-', - psr & CPSR_T ? 'T' : 'A', - ns_status, - aarch32_mode_name(psr), (psr & 0x10) ? 
32 : 26); - } - - if (flags & CPU_DUMP_FPU) { - int numvfpregs =3D 0; - if (arm_feature(env, ARM_FEATURE_VFP)) { - numvfpregs +=3D 16; - } - if (arm_feature(env, ARM_FEATURE_VFP3)) { - numvfpregs +=3D 16; - } - for (i =3D 0; i < numvfpregs; i++) { - uint64_t v =3D *aa32_vfp_dreg(env, i); - cpu_fprintf(f, "s%02d=3D%08x s%02d=3D%08x d%02d=3D%016" PRIx64= "\n", - i * 2, (uint32_t)v, - i * 2 + 1, (uint32_t)(v >> 32), - i, v); - } - cpu_fprintf(f, "FPSCR: %08x\n", (int)env->vfp.xregs[ARM_VFP_FPSCR]= ); - } -} - void restore_state_to_opc(CPUARMState *env, TranslationBlock *tb, target_ulong *data) { --=20 2.19.1 From nobody Thu Nov 6 14:16:22 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=linux.intel.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1542129206420375.79959517547866; Tue, 13 Nov 2018 09:13:26 -0800 (PST) Received: from localhost ([::1]:55200 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcFg-0004rq-PY for importer@patchew.org; Tue, 13 Nov 2018 12:13:24 -0500 Received: from eggs.gnu.org ([2001:4830:134:3::10]:35456) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcAW-0006qZ-JR for qemu-devel@nongnu.org; Tue, 13 Nov 2018 12:08:07 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1gMbwm-0003zv-9T for qemu-devel@nongnu.org; Tue, 13 Nov 2018 11:53:54 -0500 Received: from mga06.intel.com ([134.134.136.31]:62858) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1gMbwl-0003pu-TL; Tue, 13 Nov 2018 11:53:52 -0500 Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga104.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 13 Nov 2018 08:53:46 -0800 Received: from qingbai-mobl.ger.corp.intel.com (HELO caravaggio.ger.corp.intel.com) ([10.249.41.239]) by orsmga001.jf.intel.com with ESMTP; 13 Nov 2018 08:53:44 -0800 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,228,1539673200"; d="scan'208";a="107922122" From: Samuel Ortiz To: qemu-devel@nongnu.org Date: Tue, 13 Nov 2018 17:52:44 +0100 Message-Id: <20181113165247.4806-11-sameo@linux.intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com> References: <20181113165247.4806-1-sameo@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 134.134.136.31 Subject: [Qemu-devel] [PATCH 10/13] target: arm: Move watchpoints APIs to helper.c X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, richard.henderson@linaro.org Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Here again, those APIs are not TCG dependent and can move to the always built helper.c. 
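As an illustration of the new entry point (a sketch only, not part of this patch: any_architectural_wp_hit() is a hypothetical caller, while bp_wp_matches(), ARMCPU and env->cpu_watchpoint are the existing definitions from the hunks below), an always-built user of the exported API could look like:

    /* Sketch: scan all watchpoints with the helper exported via
     * internals.h below; the third argument selects watchpoints (true)
     * rather than breakpoints, exactly as check_watchpoints() does. */
    #include "qemu/osdep.h"
    #include "cpu.h"
    #include "internals.h"

    static bool any_architectural_wp_hit(ARMCPU *cpu)
    {
        CPUARMState *env = &cpu->env;
        int n;

        for (n = 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) {
            if (bp_wp_matches(cpu, n, true)) {
                return true;
            }
        }
        return false;
    }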
Signed-off-by: Samuel Ortiz Reviewed-by: Philippe Mathieu-Daud=C3=A9 Tested-by: Philippe Mathieu-Daud=C3=A9 Reviewed-by: Robert Bradford --- target/arm/internals.h | 6 ++ target/arm/helper.c | 204 +++++++++++++++++++++++++++++++++++++++++ target/arm/op_helper.c | 204 ----------------------------------------- 3 files changed, 210 insertions(+), 204 deletions(-) diff --git a/target/arm/internals.h b/target/arm/internals.h index ddb594d58d..7ab22208a3 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -937,6 +937,12 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t addr= ess, int *prot, bool *is_subpage, ARMMMUFaultInfo *fi, uint32_t *mregion); =20 +/* + * Returns true when the current CPU execution context matches + * the watchpoint or the breakpoint pointed at by n. + */ +bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp); + #ifdef TARGET_AARCH64 void aarch64_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf, int flags); diff --git a/target/arm/helper.c b/target/arm/helper.c index ff3011fcb6..c4e7d23023 100644 --- a/target/arm/helper.c +++ b/target/arm/helper.c @@ -9770,3 +9770,207 @@ void arm_cpu_dump_state(CPUState *cs, FILE *f, fpri= ntf_function cpu_fprintf, cpu_fprintf(f, "FPSCR: %08x\n", (int)env->vfp.xregs[ARM_VFP_FPSCR]= ); } } + +/* Return true if the linked breakpoint entry lbn passes its checks */ +static bool linked_bp_matches(ARMCPU *cpu, int lbn) +{ + CPUARMState *env =3D &cpu->env; + uint64_t bcr =3D env->cp15.dbgbcr[lbn]; + int brps =3D extract32(cpu->dbgdidr, 24, 4); + int ctx_cmps =3D extract32(cpu->dbgdidr, 20, 4); + int bt; + uint32_t contextidr; + + /* Links to unimplemented or non-context aware breakpoints are + * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or + * as if linked to an UNKNOWN context-aware breakpoint (in which + * case DBGWCR_EL1.LBN must indicate that breakpoint). + * We choose the former. + */ + if (lbn > brps || lbn < (brps - ctx_cmps)) { + return false; + } + + bcr =3D env->cp15.dbgbcr[lbn]; + + if (extract64(bcr, 0, 1) =3D=3D 0) { + /* Linked breakpoint disabled : generate no events */ + return false; + } + + bt =3D extract64(bcr, 20, 4); + + /* We match the whole register even if this is AArch32 using the + * short descriptor format (in which case it holds both PROCID and ASI= D), + * since we don't implement the optional v7 context ID masking. + */ + contextidr =3D extract64(env->cp15.contextidr_el[1], 0, 32); + + switch (bt) { + case 3: /* linked context ID match */ + if (arm_current_el(env) > 1) { + /* Context matches never fire in EL2 or (AArch64) EL3 */ + return false; + } + return (contextidr =3D=3D extract64(env->cp15.dbgbvr[lbn], 0, 32)); + case 5: /* linked address mismatch (reserved in AArch64) */ + case 9: /* linked VMID match (reserved if no EL2) */ + case 11: /* linked context ID and VMID match (reserved if no EL2) */ + default: + /* Links to Unlinked context breakpoints must generate no + * events; we choose to do the same for reserved values too. + */ + return false; + } + + return false; +} + +bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp) +{ + CPUARMState *env =3D &cpu->env; + uint64_t cr; + int pac, hmc, ssc, wt, lbn; + /* Note that for watchpoints the check is against the CPU security + * state, not the S/NS attribute on the offending data access. 
+ */ + bool is_secure =3D arm_is_secure(env); + int access_el =3D arm_current_el(env); + + if (is_wp) { + CPUWatchpoint *wp =3D env->cpu_watchpoint[n]; + + if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) { + return false; + } + cr =3D env->cp15.dbgwcr[n]; + if (wp->hitattrs.user) { + /* The LDRT/STRT/LDT/STT "unprivileged access" instructions sh= ould + * match watchpoints as if they were accesses done at EL0, eve= n if + * the CPU is at EL1 or higher. + */ + access_el =3D 0; + } + } else { + uint64_t pc =3D is_a64(env) ? env->pc : env->regs[15]; + + if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc !=3D pc)= { + return false; + } + cr =3D env->cp15.dbgbcr[n]; + } + /* The WATCHPOINT_HIT flag guarantees us that the watchpoint is + * enabled and that the address and access type match; for breakpoints + * we know the address matched; check the remaining fields, including + * linked breakpoints. We rely on WCR and BCR having the same layout + * for the LBN, SSC, HMC, PAC/PMC and is-linked fields. + * Note that some combinations of {PAC, HMC, SSC} are reserved and + * must act either like some valid combination or as if the watchpoint + * were disabled. We choose the former, and use this together with + * the fact that EL3 must always be Secure and EL2 must always be + * Non-Secure to simplify the code slightly compared to the full + * table in the ARM ARM. + */ + pac =3D extract64(cr, 1, 2); + hmc =3D extract64(cr, 13, 1); + ssc =3D extract64(cr, 14, 2); + + switch (ssc) { + case 0: + break; + case 1: + case 3: + if (is_secure) { + return false; + } + break; + case 2: + if (!is_secure) { + return false; + } + break; + } + + switch (access_el) { + case 3: + case 2: + if (!hmc) { + return false; + } + break; + case 1: + if (extract32(pac, 0, 1) =3D=3D 0) { + return false; + } + break; + case 0: + if (extract32(pac, 1, 1) =3D=3D 0) { + return false; + } + break; + default: + g_assert_not_reached(); + } + + wt =3D extract64(cr, 20, 1); + lbn =3D extract64(cr, 16, 4); + + if (wt && !linked_bp_matches(cpu, lbn)) { + return false; + } + + return true; +} + +static bool check_watchpoints(ARMCPU *cpu) +{ + CPUARMState *env =3D &cpu->env; + int n; + + /* If watchpoints are disabled globally or we can't take debug + * exceptions here then watchpoint firings are ignored. + */ + if (extract32(env->cp15.mdscr_el1, 15, 1) =3D=3D 0 + || !arm_generate_debug_exceptions(env)) { + return false; + } + + for (n =3D 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) { + if (bp_wp_matches(cpu, n, true)) { + return true; + } + } + return false; +} + +bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp) +{ + /* Called by core code when a CPU watchpoint fires; need to check if t= his + * is also an architectural watchpoint match. + */ + ARMCPU *cpu =3D ARM_CPU(cs); + + return check_watchpoints(cpu); +} + +vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len) +{ + ARMCPU *cpu =3D ARM_CPU(cs); + CPUARMState *env =3D &cpu->env; + + /* In BE32 system mode, target memory is stored byteswapped (on a + * little-endian host system), and by the time we reach here (via an + * opcode helper) the addresses of subword accesses have been adjusted + * to account for that, which means that watchpoints will not match. + * Undo the adjustment here. 
+ */ + if (arm_sctlr_b(env)) { + if (len =3D=3D 1) { + addr ^=3D 3; + } else if (len =3D=3D 2) { + addr ^=3D 2; + } + } + + return addr; +} diff --git a/target/arm/op_helper.c b/target/arm/op_helper.c index 3b0459db50..c836d05aab 100644 --- a/target/arm/op_helper.c +++ b/target/arm/op_helper.c @@ -1171,178 +1171,6 @@ illegal_return: "resuming execution at 0x%" PRIx64 "\n", cur_el, env->pc= ); } =20 -/* Return true if the linked breakpoint entry lbn passes its checks */ -static bool linked_bp_matches(ARMCPU *cpu, int lbn) -{ - CPUARMState *env =3D &cpu->env; - uint64_t bcr =3D env->cp15.dbgbcr[lbn]; - int brps =3D extract32(cpu->dbgdidr, 24, 4); - int ctx_cmps =3D extract32(cpu->dbgdidr, 20, 4); - int bt; - uint32_t contextidr; - - /* Links to unimplemented or non-context aware breakpoints are - * CONSTRAINED UNPREDICTABLE: either behave as if disabled, or - * as if linked to an UNKNOWN context-aware breakpoint (in which - * case DBGWCR_EL1.LBN must indicate that breakpoint). - * We choose the former. - */ - if (lbn > brps || lbn < (brps - ctx_cmps)) { - return false; - } - - bcr =3D env->cp15.dbgbcr[lbn]; - - if (extract64(bcr, 0, 1) =3D=3D 0) { - /* Linked breakpoint disabled : generate no events */ - return false; - } - - bt =3D extract64(bcr, 20, 4); - - /* We match the whole register even if this is AArch32 using the - * short descriptor format (in which case it holds both PROCID and ASI= D), - * since we don't implement the optional v7 context ID masking. - */ - contextidr =3D extract64(env->cp15.contextidr_el[1], 0, 32); - - switch (bt) { - case 3: /* linked context ID match */ - if (arm_current_el(env) > 1) { - /* Context matches never fire in EL2 or (AArch64) EL3 */ - return false; - } - return (contextidr =3D=3D extract64(env->cp15.dbgbvr[lbn], 0, 32)); - case 5: /* linked address mismatch (reserved in AArch64) */ - case 9: /* linked VMID match (reserved if no EL2) */ - case 11: /* linked context ID and VMID match (reserved if no EL2) */ - default: - /* Links to Unlinked context breakpoints must generate no - * events; we choose to do the same for reserved values too. - */ - return false; - } - - return false; -} - -static bool bp_wp_matches(ARMCPU *cpu, int n, bool is_wp) -{ - CPUARMState *env =3D &cpu->env; - uint64_t cr; - int pac, hmc, ssc, wt, lbn; - /* Note that for watchpoints the check is against the CPU security - * state, not the S/NS attribute on the offending data access. - */ - bool is_secure =3D arm_is_secure(env); - int access_el =3D arm_current_el(env); - - if (is_wp) { - CPUWatchpoint *wp =3D env->cpu_watchpoint[n]; - - if (!wp || !(wp->flags & BP_WATCHPOINT_HIT)) { - return false; - } - cr =3D env->cp15.dbgwcr[n]; - if (wp->hitattrs.user) { - /* The LDRT/STRT/LDT/STT "unprivileged access" instructions sh= ould - * match watchpoints as if they were accesses done at EL0, eve= n if - * the CPU is at EL1 or higher. - */ - access_el =3D 0; - } - } else { - uint64_t pc =3D is_a64(env) ? env->pc : env->regs[15]; - - if (!env->cpu_breakpoint[n] || env->cpu_breakpoint[n]->pc !=3D pc)= { - return false; - } - cr =3D env->cp15.dbgbcr[n]; - } - /* The WATCHPOINT_HIT flag guarantees us that the watchpoint is - * enabled and that the address and access type match; for breakpoints - * we know the address matched; check the remaining fields, including - * linked breakpoints. We rely on WCR and BCR having the same layout - * for the LBN, SSC, HMC, PAC/PMC and is-linked fields. 
- * Note that some combinations of {PAC, HMC, SSC} are reserved and - * must act either like some valid combination or as if the watchpoint - * were disabled. We choose the former, and use this together with - * the fact that EL3 must always be Secure and EL2 must always be - * Non-Secure to simplify the code slightly compared to the full - * table in the ARM ARM. - */ - pac =3D extract64(cr, 1, 2); - hmc =3D extract64(cr, 13, 1); - ssc =3D extract64(cr, 14, 2); - - switch (ssc) { - case 0: - break; - case 1: - case 3: - if (is_secure) { - return false; - } - break; - case 2: - if (!is_secure) { - return false; - } - break; - } - - switch (access_el) { - case 3: - case 2: - if (!hmc) { - return false; - } - break; - case 1: - if (extract32(pac, 0, 1) =3D=3D 0) { - return false; - } - break; - case 0: - if (extract32(pac, 1, 1) =3D=3D 0) { - return false; - } - break; - default: - g_assert_not_reached(); - } - - wt =3D extract64(cr, 20, 1); - lbn =3D extract64(cr, 16, 4); - - if (wt && !linked_bp_matches(cpu, lbn)) { - return false; - } - - return true; -} - -static bool check_watchpoints(ARMCPU *cpu) -{ - CPUARMState *env =3D &cpu->env; - int n; - - /* If watchpoints are disabled globally or we can't take debug - * exceptions here then watchpoint firings are ignored. - */ - if (extract32(env->cp15.mdscr_el1, 15, 1) =3D=3D 0 - || !arm_generate_debug_exceptions(env)) { - return false; - } - - for (n =3D 0; n < ARRAY_SIZE(env->cpu_watchpoint); n++) { - if (bp_wp_matches(cpu, n, true)) { - return true; - } - } - return false; -} - static bool check_breakpoints(ARMCPU *cpu) { CPUARMState *env =3D &cpu->env; @@ -1373,38 +1201,6 @@ void HELPER(check_breakpoints)(CPUARMState *env) } } =20 -bool arm_debug_check_watchpoint(CPUState *cs, CPUWatchpoint *wp) -{ - /* Called by core code when a CPU watchpoint fires; need to check if t= his - * is also an architectural watchpoint match. - */ - ARMCPU *cpu =3D ARM_CPU(cs); - - return check_watchpoints(cpu); -} - -vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len) -{ - ARMCPU *cpu =3D ARM_CPU(cs); - CPUARMState *env =3D &cpu->env; - - /* In BE32 system mode, target memory is stored byteswapped (on a - * little-endian host system), and by the time we reach here (via an - * opcode helper) the addresses of subword accesses have been adjusted - * to account for that, which means that watchpoints will not match. - * Undo the adjustment here. 
- */ - if (arm_sctlr_b(env)) { - if (len =3D=3D 1) { - addr ^=3D 3; - } else if (len =3D=3D 2) { - addr ^=3D 2; - } - } - - return addr; -} - void arm_debug_excp_handler(CPUState *cs) { /* Called by core code when a watchpoint or breakpoint fires; --=20 2.19.1 From nobody Thu Nov 6 14:16:22 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=linux.intel.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1542129614173528.9147057576423; Tue, 13 Nov 2018 09:20:14 -0800 (PST) Received: from localhost ([::1]:55245 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcMG-00021h-Ce for importer@patchew.org; Tue, 13 Nov 2018 12:20:12 -0500 Received: from eggs.gnu.org ([2001:4830:134:3::10]:35778) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcAc-000762-BX for qemu-devel@nongnu.org; Tue, 13 Nov 2018 12:08:12 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1gMbwm-000402-Dv for qemu-devel@nongnu.org; Tue, 13 Nov 2018 11:53:53 -0500 Received: from mga06.intel.com ([134.134.136.31]:62850) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1gMbwl-0003nD-WB; Tue, 13 Nov 2018 11:53:52 -0500 Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga104.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 13 Nov 2018 08:53:47 -0800 Received: from qingbai-mobl.ger.corp.intel.com (HELO caravaggio.ger.corp.intel.com) ([10.249.41.239]) by orsmga001.jf.intel.com with ESMTP; 13 Nov 2018 08:53:46 -0800 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,228,1539673200"; d="scan'208";a="107922124" From: Samuel Ortiz To: qemu-devel@nongnu.org Date: Tue, 13 Nov 2018 17:52:45 +0100 Message-Id: <20181113165247.4806-12-sameo@linux.intel.com> X-Mailer: git-send-email 2.19.1 In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com> References: <20181113165247.4806-1-sameo@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 134.134.136.31 Subject: [Qemu-devel] [PATCH 11/13] target: arm: Define TCG dependent functions when TCG is enabled X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Peter Maydell , qemu-arm@nongnu.org, richard.henderson@linaro.org Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" do_interrupt, do_unaligned_access, do_transaction_failed and debug_excp are only relevant in the TCG context, so we should not define them when TCG is disabled. 
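In condensed form, the guard pattern this patch applies looks like the following sketch (arm_cpu_class_init_tcg_hooks() is a hypothetical name used here for illustration; the diff below patches arm_cpu_class_init() and arm_v7m_class_init() directly):

    /* TCG-only CPUClass hooks are assigned under CONFIG_TCG; hooks that
     * are TCG-independent, like the watchpoint check moved to helper.c
     * in the previous patch, stay unconditional. */
    #include "qemu/osdep.h"
    #include "cpu.h"

    static void arm_cpu_class_init_tcg_hooks(CPUClass *cc)
    {
    #ifdef CONFIG_TCG
        cc->do_interrupt = arm_cpu_do_interrupt;
        cc->do_unaligned_access = arm_cpu_do_unaligned_access;
        cc->do_transaction_failed = arm_cpu_do_transaction_failed;
        cc->debug_excp_handler = arm_debug_excp_handler;
    #endif
        cc->debug_check_watchpoint = arm_debug_check_watchpoint;
    }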
Signed-off-by: Samuel Ortiz Reviewed-by: Philippe Mathieu-Daud=C3=A9 Tested-by: Philippe Mathieu-Daud=C3=A9 Reviewed-by: Robert Bradford --- target/arm/cpu.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/target/arm/cpu.c b/target/arm/cpu.c index 60411f6bfe..fb2e5d430e 100644 --- a/target/arm/cpu.c +++ b/target/arm/cpu.c @@ -1444,7 +1444,7 @@ static void arm_v7m_class_init(ObjectClass *oc, void = *data) { CPUClass *cc =3D CPU_CLASS(oc); =20 -#ifndef CONFIG_USER_ONLY +#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_TCG) cc->do_interrupt =3D arm_v7m_cpu_do_interrupt; #endif =20 @@ -2061,9 +2061,14 @@ static void arm_cpu_class_init(ObjectClass *oc, void= *data) #ifdef CONFIG_USER_ONLY cc->handle_mmu_fault =3D arm_cpu_handle_mmu_fault; #else + +#ifdef CONFIG_TCG cc->do_interrupt =3D arm_cpu_do_interrupt; cc->do_unaligned_access =3D arm_cpu_do_unaligned_access; cc->do_transaction_failed =3D arm_cpu_do_transaction_failed; + cc->debug_excp_handler =3D arm_debug_excp_handler; +#endif + cc->get_phys_page_attrs_debug =3D arm_cpu_get_phys_page_attrs_debug; cc->asidx_from_attrs =3D arm_asidx_from_attrs; cc->vmsd =3D &vmstate_arm_cpu; @@ -2076,7 +2081,6 @@ static void arm_cpu_class_init(ObjectClass *oc, void = *data) cc->gdb_arch_name =3D arm_gdb_arch_name; cc->gdb_get_dynamic_xml =3D arm_gdb_get_dynamic_xml; cc->gdb_stop_before_watchpoint =3D true; - cc->debug_excp_handler =3D arm_debug_excp_handler; cc->debug_check_watchpoint =3D arm_debug_check_watchpoint; #if !defined(CONFIG_USER_ONLY) cc->adjust_watchpoint_address =3D arm_adjust_watchpoint_address; --=20 2.19.1 From nobody Thu Nov 6 14:16:22 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zohomail.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=linux.intel.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1542129670068983.7343717601717; Tue, 13 Nov 2018 09:21:10 -0800 (PST) Received: from localhost ([::1]:55251 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcN7-0002eN-LW for importer@patchew.org; Tue, 13 Nov 2018 12:21:05 -0500 Received: from eggs.gnu.org ([2001:4830:134:3::10]:36371) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1gMcAN-0007Wl-Gr for qemu-devel@nongnu.org; Tue, 13 Nov 2018 12:07:56 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1gMbxB-00049x-Ml for qemu-devel@nongnu.org; Tue, 13 Nov 2018 11:54:25 -0500 Received: from mga06.intel.com ([134.134.136.31]:62852) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1gMbx5-0003pH-Tz; Tue, 13 Nov 2018 11:54:16 -0500 Received: from orsmga001.jf.intel.com ([10.7.209.18]) by orsmga104.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384; 13 Nov 2018 08:53:49 -0800 Received: from qingbai-mobl.ger.corp.intel.com (HELO caravaggio.ger.corp.intel.com) ([10.249.41.239]) by orsmga001.jf.intel.com with ESMTP; 13 Nov 2018 08:53:48 -0800 X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.56,228,1539673200"; d="scan'208";a="107922127" 
From nobody Thu Nov 6 14:16:22 2025
From: Samuel Ortiz <sameo@linux.intel.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org
Date: Tue, 13 Nov 2018 17:52:46 +0100
Message-Id: <20181113165247.4806-13-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 12/13] target: arm: Makefile cleanup

Group objects with the same build dependencies together.

Signed-off-by: Samuel Ortiz
Reviewed-by: Philippe Mathieu-Daudé
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
---
 target/arm/Makefile.objs | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index 6e880df727..8e513748e3 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -1,15 +1,13 @@
-obj-y += arm-semi.o
-obj-$(CONFIG_SOFTMMU) += machine.o psci.o arch_dump.o monitor.o
+obj-y += arm-semi.o crypto_helper.o gdbstub.o helper.o cpu.o
+obj-y += translate.o op_helper.o neon_helper.o
+obj-y += iwmmxt_helper.o vec_helper.o m_helper.o
+obj-y += excp_helper.o vfp_helper.o
+
+obj-$(CONFIG_SOFTMMU) += machine.o psci.o arch_dump.o monitor.o arm-powerctl.o
 obj-$(CONFIG_KVM) += kvm.o
 obj-$(call land,$(CONFIG_KVM),$(call lnot,$(TARGET_AARCH64))) += kvm32.o
 obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) += kvm64.o
 obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
-obj-y += translate.o op_helper.o helper.o cpu.o
-obj-y += neon_helper.o iwmmxt_helper.o vec_helper.o
-obj-y += gdbstub.o m_helper.o excp_helper.o vfp_helper.o
-obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
-obj-y += crypto_helper.o
-obj-$(CONFIG_SOFTMMU) += arm-powerctl.o
 
 DECODETREE = $(SRC_PATH)/scripts/decodetree.py
 
@@ -20,3 +18,4 @@ target/arm/decode-sve.inc.c: $(SRC_PATH)/target/arm/sve.decode $(DECODETREE)
 
 target/arm/translate-sve.o: target/arm/decode-sve.inc.c
 obj-$(TARGET_AARCH64) += translate-sve.o sve_helper.o
+obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
-- 
2.19.1
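The conditional rules above lean on QEMU's boolean make helpers:
$(call land,A,B) selects an object only when both flags are y, and
$(call lnot,A) inverts a y/n flag. The helper names come straight from
the Makefile rules; their real definitions live in QEMU's rules.mak,
so the bodies below are only a sketch of the assumed semantics, not
the actual source:

    # Sketch of assumed semantics; the real definitions are in
    # QEMU's rules.mak and may differ in detail.
    land = $(if $(findstring yy,$1$2),y,n)   # y only if both args are y
    lnot = $(if $(findstring y,$1),n,y)      # invert a y/n flag

    # Example from the rules above: kvm64.o joins obj-y only when both
    # CONFIG_KVM and TARGET_AARCH64 are y; any other combination puts
    # it in obj-n, a bucket the build system never compiles or links.
    obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) += kvm64.o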
From nobody Thu Nov 6 14:16:22 2025
From: Samuel Ortiz <sameo@linux.intel.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, qemu-arm@nongnu.org, richard.henderson@linaro.org
Date: Tue, 13 Nov 2018 17:52:47 +0100
Message-Id: <20181113165247.4806-14-sameo@linux.intel.com>
In-Reply-To: <20181113165247.4806-1-sameo@linux.intel.com>
Subject: [Qemu-devel] [PATCH 13/13] target: arm: Do not build TCG objects when TCG is off

We can now safely turn the entire TCG-dependent build off when
CONFIG_TCG is not set. This allows building ARM binaries with
--disable-tcg.
Signed-off-by: Samuel Ortiz
Reviewed-by: Philippe Mathieu-Daudé
Tested-by: Philippe Mathieu-Daudé
Reviewed-by: Robert Bradford
---
 target/arm/Makefile.objs | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/target/arm/Makefile.objs b/target/arm/Makefile.objs
index 8e513748e3..3f59bf1685 100644
--- a/target/arm/Makefile.objs
+++ b/target/arm/Makefile.objs
@@ -1,9 +1,10 @@
 obj-y += arm-semi.o crypto_helper.o gdbstub.o helper.o cpu.o
-obj-y += translate.o op_helper.o neon_helper.o
-obj-y += iwmmxt_helper.o vec_helper.o m_helper.o
-obj-y += excp_helper.o vfp_helper.o
+obj-$(CONFIG_TCG) += translate.o op_helper.o neon_helper.o
+obj-$(CONFIG_TCG) += iwmmxt_helper.o vec_helper.o m_helper.o
+obj-$(CONFIG_TCG) += excp_helper.o vfp_helper.o
 
-obj-$(CONFIG_SOFTMMU) += machine.o psci.o arch_dump.o monitor.o arm-powerctl.o
+obj-$(CONFIG_SOFTMMU) += machine.o arch_dump.o monitor.o arm-powerctl.o
+obj-$(call land,$(CONFIG_TCG),$(CONFIG_SOFTMMU)) += psci.o
 obj-$(CONFIG_KVM) += kvm.o
 obj-$(call land,$(CONFIG_KVM),$(call lnot,$(TARGET_AARCH64))) += kvm32.o
 obj-$(call land,$(CONFIG_KVM),$(TARGET_AARCH64)) += kvm64.o
@@ -17,5 +18,6 @@ target/arm/decode-sve.inc.c: $(SRC_PATH)/target/arm/sve.decode $(DECODETREE)
 	       "GEN", $(TARGET_DIR)$@)
 
 target/arm/translate-sve.o: target/arm/decode-sve.inc.c
-obj-$(TARGET_AARCH64) += translate-sve.o sve_helper.o
-obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
+obj-$(call land,$(CONFIG_TCG),$(TARGET_AARCH64)) += translate-sve.o sve_helper.o
+obj-$(call land,$(CONFIG_TCG),$(TARGET_AARCH64)) += translate-a64.o helper-a64.o
+obj-$(TARGET_AARCH64) += cpu64.o gdbstub64.o
-- 
2.19.1
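To see what these guards buy us, consider a softmmu AArch64 tree
configured with --disable-tcg, i.e. CONFIG_TCG=n. The flag values
below are assumed for illustration; this is a minimal sketch of how
the rules in the patch then expand, not output from an actual build:

    # With CONFIG_TCG=n and TARGET_AARCH64=y:
    #
    #   obj-$(CONFIG_TCG) += translate.o op_helper.o neon_helper.o
    #     expands to "obj-n += ...": never compiled or linked.
    #
    #   obj-$(call land,$(CONFIG_TCG),$(TARGET_AARCH64)) += translate-a64.o helper-a64.o
    #     also expands to "obj-n += ...": never built.
    #
    #   obj-$(TARGET_AARCH64) += cpu64.o gdbstub64.o
    #     still expands to "obj-y += ...": kept, since KVM-only
    #     binaries still need the CPU models and gdbstub support.

An invocation along the lines of
./configure --disable-tcg --enable-kvm --target-list=aarch64-softmmu
would then link only the obj-y set, which is what lets this series
produce ARM binaries without any TCG code.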