From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Date: Thu, 21 Feb 2019 18:41:04 -0800
Message-Id: <20190222024106.9167-2-richard.henderson@linaro.org>
In-Reply-To: <20190222024106.9167-1-richard.henderson@linaro.org>
References: <20190222024106.9167-1-richard.henderson@linaro.org>
Subject: [Qemu-devel] [PATCH v3 1/3] target/arm: Split out recompute_hflags et al
Cc: peter.maydell@linaro.org, cota@braap.org, alex.bennee@linaro.org

We will use these to minimize the computation for every call to
cpu_get_tb_cpu_state.  For now, the env->hflags variable is not used.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
v3: Do not cache VECLEN, VECSTRIDE, VFPEN.
    Move HANDLER and STACKCHECK to rebuild_hflags_a32.
---
 target/arm/cpu.h       |  28 +++--
 target/arm/helper.h    |   3 +
 target/arm/internals.h |   3 +
 target/arm/helper.c    | 254 ++++++++++++++++++++++++-----------------
 4 files changed, 175 insertions(+), 113 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 84ae6849c2..30532bf53e 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -240,6 +240,9 @@ typedef struct CPUARMState {
     uint32_t pstate;
     uint32_t aarch64; /* 1 if CPU is in aarch64 state; inverse of PSTATE.nRW */
 
+    /* Cached TBFLAGS state.  See below for which bits are included.  */
+    uint32_t hflags;
+
     /* Frequently accessed CPSR bits are stored separately for efficiency.
        This contains all the other bits.  Use cpsr_{read,write} to access
        the whole CPSR.  */
@@ -3065,25 +3068,28 @@ static inline bool arm_cpu_data_is_big_endian(CPUARMState *env)
 
 #include "exec/cpu-all.h"
 
-/* Bit usage in the TB flags field: bit 31 indicates whether we are
+/*
+ * Bit usage in the TB flags field: bit 31 indicates whether we are
  * in 32 or 64 bit mode.  The meaning of the other bits depends on that.
  * We put flags which are shared between 32 and 64 bit mode at the top
  * of the word, and flags which apply to only one mode at the bottom.
+ *
+ * Unless otherwise noted, these bits are cached in env->hflags.
  */
 FIELD(TBFLAG_ANY, AARCH64_STATE, 31, 1)
 FIELD(TBFLAG_ANY, MMUIDX, 28, 3)
 FIELD(TBFLAG_ANY, SS_ACTIVE, 27, 1)
-FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1)
+FIELD(TBFLAG_ANY, PSTATE_SS, 26, 1)     /* Not cached. */
 /* Target EL if we take a floating-point-disabled exception */
 FIELD(TBFLAG_ANY, FPEXC_EL, 24, 2)
 FIELD(TBFLAG_ANY, BE_DATA, 23, 1)
 
 /* Bit usage when in AArch32 state: */
-FIELD(TBFLAG_A32, THUMB, 0, 1)
-FIELD(TBFLAG_A32, VECLEN, 1, 3)
-FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)
-FIELD(TBFLAG_A32, VFPEN, 7, 1)
-FIELD(TBFLAG_A32, CONDEXEC, 8, 8)
+FIELD(TBFLAG_A32, THUMB, 0, 1)          /* Not cached. */
+FIELD(TBFLAG_A32, VECLEN, 1, 3)         /* Not cached. */
+FIELD(TBFLAG_A32, VECSTRIDE, 4, 2)      /* Not cached. */
+FIELD(TBFLAG_A32, VFPEN, 7, 1)          /* Not cached. */
+FIELD(TBFLAG_A32, CONDEXEC, 8, 8)       /* Not cached. */
 FIELD(TBFLAG_A32, SCTLR_B, 16, 1)
 /* We store the bottom two bits of the CPAR as TB flags and handle
  * checks on the other bits at runtime
@@ -3105,7 +3111,7 @@ FIELD(TBFLAG_A64, SVEEXC_EL, 2, 2)
 FIELD(TBFLAG_A64, ZCR_LEN, 4, 4)
 FIELD(TBFLAG_A64, PAUTH_ACTIVE, 8, 1)
 FIELD(TBFLAG_A64, BT, 9, 1)
-FIELD(TBFLAG_A64, BTYPE, 10, 2)
+FIELD(TBFLAG_A64, BTYPE, 10, 2)         /* Not cached. */
 FIELD(TBFLAG_A64, TBID, 12, 2)
 
 static inline bool bswap_code(bool sctlr_b)
@@ -3190,6 +3196,12 @@ void arm_register_pre_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
 void arm_register_el_change_hook(ARMCPU *cpu, ARMELChangeHookFn *hook,
                                  void *opaque);
 
+/**
+ * arm_rebuild_hflags:
+ * Rebuild the cached TBFLAGS for arbitrary changed processor state.
+ */
+void arm_rebuild_hflags(CPUARMState *env);
+
 /**
  * aa32_vfp_dreg:
  * Return a pointer to the Dn register within env in 32-bit mode.
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 923e8e1525..bbc1a48089 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -89,6 +89,9 @@ DEF_HELPER_4(msr_banked, void, env, i32, i32, i32)
 DEF_HELPER_2(get_user_reg, i32, env, i32)
 DEF_HELPER_3(set_user_reg, void, env, i32, i32)
 
+DEF_HELPER_FLAGS_2(rebuild_hflags_a32, TCG_CALL_NO_RWG, void, env, i32)
+DEF_HELPER_FLAGS_2(rebuild_hflags_a64, TCG_CALL_NO_RWG, void, env, i32)
+
 DEF_HELPER_1(vfp_get_fpscr, i32, env)
 DEF_HELPER_2(vfp_set_fpscr, void, env, i32)
 
diff --git a/target/arm/internals.h b/target/arm/internals.h
index a4bd1becb7..8c1b813364 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -968,4 +968,7 @@ ARMVAParameters aa64_va_parameters_both(CPUARMState *env, uint64_t va,
 ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
                                    ARMMMUIdx mmu_idx, bool data);
 
+uint32_t rebuild_hflags_a32(CPUARMState *env, int el);
+uint32_t rebuild_hflags_a64(CPUARMState *env, int el);
+
 #endif
diff --git a/target/arm/helper.c b/target/arm/helper.c
index a018eb23fe..29486a09f6 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -13886,139 +13886,183 @@ ARMMMUIdx arm_stage1_mmu_idx(CPUARMState *env)
 }
 #endif
 
+static uint32_t common_hflags(CPUARMState *env, int el, ARMMMUIdx mmu_idx,
+                              int fp_el, uint32_t flags)
+{
+    flags = FIELD_DP32(flags, TBFLAG_ANY, FPEXC_EL, fp_el);
+    flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX,
+                       arm_to_core_mmu_idx(mmu_idx));
+    if (arm_cpu_data_is_big_endian(env)) {
+        flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
+    }
+    if (arm_singlestep_active(env)) {
+        flags = FIELD_DP32(flags, TBFLAG_ANY, SS_ACTIVE, 1);
+    }
+    return flags;
+}
+
+uint32_t rebuild_hflags_a32(CPUARMState *env, int el)
+{
+    uint32_t flags = 0;
+    ARMMMUIdx mmu_idx;
+    int fp_el;
+
+    flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
+    flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
+    flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
+
+    if (arm_v7m_is_handler_mode(env)) {
+        flags = FIELD_DP32(flags, TBFLAG_A32, HANDLER, 1);
+    }
+
+    mmu_idx = arm_mmu_idx(env);
+
+    /* v8M always applies stack limit checks unless CCR.STKOFHFNMIGN is
+     * suppressing them because the requested execution priority is less than 0.
+     */
+    if (arm_feature(env, ARM_FEATURE_V8) &&
+        arm_feature(env, ARM_FEATURE_M) &&
+        !((mmu_idx & ARM_MMU_IDX_M_NEGPRI) &&
+          (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
+        flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
+    }
+
+    fp_el = fp_exception_el(env, el);
+    return common_hflags(env, el, mmu_idx, fp_el, flags);
+}
+
+uint32_t rebuild_hflags_a64(CPUARMState *env, int el)
+{
+    ARMCPU *cpu = arm_env_get_cpu(env);
+    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
+    ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
+    ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
+    int fp_el = fp_exception_el(env, el);
+    uint32_t flags = 0;
+    uint64_t sctlr;
+    int tbii, tbid;
+
+    flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
+
+    /* Get control bits for tagged addresses.  */
+    /* FIXME: ARMv8.1-VHE S2 translation regime.  */
+    if (regime_el(env, stage1) < 2) {
+        ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
+        tbid = (p1.tbi << 1) | p0.tbi;
+        tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
+    } else {
+        tbid = p0.tbi;
+        tbii = tbid & !p0.tbid;
+    }
+
+    flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
+    flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
+
+    if (cpu_isar_feature(aa64_sve, cpu)) {
+        int sve_el = sve_exception_el(env, el);
+        uint32_t zcr_len;
+
+        /* If SVE is disabled, but FP is enabled,
+         * then the effective len is 0.
+         */
+        if (sve_el != 0 && fp_el == 0) {
+            zcr_len = 0;
+        } else {
+            zcr_len = sve_zcr_len_for_el(env, el);
+        }
+        flags = FIELD_DP32(flags, TBFLAG_A64, SVEEXC_EL, sve_el);
+        flags = FIELD_DP32(flags, TBFLAG_A64, ZCR_LEN, zcr_len);
+    }
+
+    if (el == 0) {
+        /* FIXME: ARMv8.1-VHE S2 translation regime.  */
+        sctlr = env->cp15.sctlr_el[1];
+    } else {
+        sctlr = env->cp15.sctlr_el[el];
+    }
+    if (cpu_isar_feature(aa64_pauth, cpu)) {
+        /*
+         * In order to save space in flags, we record only whether
+         * pauth is "inactive", meaning all insns are implemented as
+         * a nop, or "active" when some action must be performed.
+         * The decision of which action to take is left to a helper.
+         */
+        if (sctlr & (SCTLR_EnIA | SCTLR_EnIB | SCTLR_EnDA | SCTLR_EnDB)) {
+            flags = FIELD_DP32(flags, TBFLAG_A64, PAUTH_ACTIVE, 1);
+        }
+    }
+
+    if (cpu_isar_feature(aa64_bti, cpu)) {
+        /* Note that SCTLR_EL[23].BT == SCTLR_BT1.  */
+        if (sctlr & (el == 0 ? SCTLR_BT0 : SCTLR_BT1)) {
+            flags = FIELD_DP32(flags, TBFLAG_A64, BT, 1);
+        }
+    }
+
+    return common_hflags(env, el, mmu_idx, fp_el, flags);
+}
+
+void arm_rebuild_hflags(CPUARMState *env)
+{
+    int el = arm_current_el(env);
+    env->hflags = (is_a64(env)
+                   ? rebuild_hflags_a64(env, el)
+                   : rebuild_hflags_a32(env, el));
+}
+
+void HELPER(rebuild_hflags_a32)(CPUARMState *env, uint32_t el)
+{
+    tcg_debug_assert(!is_a64(env));
+    env->hflags = rebuild_hflags_a32(env, el);
+}
+
+void HELPER(rebuild_hflags_a64)(CPUARMState *env, uint32_t el)
+{
+    tcg_debug_assert(is_a64(env));
+    env->hflags = rebuild_hflags_a64(env, el);
+}
+
 void cpu_get_tb_cpu_state(CPUARMState *env, target_ulong *pc,
                           target_ulong *cs_base, uint32_t *pflags)
 {
-    ARMMMUIdx mmu_idx = arm_mmu_idx(env);
     int current_el = arm_current_el(env);
-    int fp_el = fp_exception_el(env, current_el);
-    uint32_t flags = 0;
+    uint32_t flags;
+    uint32_t pstate_for_ss;
 
+    *cs_base = 0;
     if (is_a64(env)) {
-        ARMCPU *cpu = arm_env_get_cpu(env);
-        uint64_t sctlr;
-
         *pc = env->pc;
-        flags = FIELD_DP32(flags, TBFLAG_ANY, AARCH64_STATE, 1);
-
-        /* Get control bits for tagged addresses.  */
-        {
-            ARMMMUIdx stage1 = stage_1_mmu_idx(mmu_idx);
-            ARMVAParameters p0 = aa64_va_parameters_both(env, 0, stage1);
-            int tbii, tbid;
-
-            /* FIXME: ARMv8.1-VHE S2 translation regime.  */
-            if (regime_el(env, stage1) < 2) {
-                ARMVAParameters p1 = aa64_va_parameters_both(env, -1, stage1);
-                tbid = (p1.tbi << 1) | p0.tbi;
-                tbii = tbid & ~((p1.tbid << 1) | p0.tbid);
-            } else {
-                tbid = p0.tbi;
-                tbii = tbid & !p0.tbid;
-            }
-
-            flags = FIELD_DP32(flags, TBFLAG_A64, TBII, tbii);
-            flags = FIELD_DP32(flags, TBFLAG_A64, TBID, tbid);
-        }
-
-        if (cpu_isar_feature(aa64_sve, cpu)) {
-            int sve_el = sve_exception_el(env, current_el);
-            uint32_t zcr_len;
-
-            /* If SVE is disabled, but FP is enabled,
-             * then the effective len is 0.
-             */
-            if (sve_el != 0 && fp_el == 0) {
-                zcr_len = 0;
-            } else {
-                zcr_len = sve_zcr_len_for_el(env, current_el);
-            }
-            flags = FIELD_DP32(flags, TBFLAG_A64, SVEEXC_EL, sve_el);
-            flags = FIELD_DP32(flags, TBFLAG_A64, ZCR_LEN, zcr_len);
-        }
-
-        if (current_el == 0) {
-            /* FIXME: ARMv8.1-VHE S2 translation regime.  */
-            sctlr = env->cp15.sctlr_el[1];
-        } else {
-            sctlr = env->cp15.sctlr_el[current_el];
-        }
-        if (cpu_isar_feature(aa64_pauth, cpu)) {
-            /*
-             * In order to save space in flags, we record only whether
-             * pauth is "inactive", meaning all insns are implemented as
-             * a nop, or "active" when some action must be performed.
-             * The decision of which action to take is left to a helper.
-             */
-            if (sctlr & (SCTLR_EnIA | SCTLR_EnIB | SCTLR_EnDA | SCTLR_EnDB)) {
-                flags = FIELD_DP32(flags, TBFLAG_A64, PAUTH_ACTIVE, 1);
-            }
-        }
-
-        if (cpu_isar_feature(aa64_bti, cpu)) {
-            /* Note that SCTLR_EL[23].BT == SCTLR_BT1.  */
-            if (sctlr & (current_el == 0 ? SCTLR_BT0 : SCTLR_BT1)) {
-                flags = FIELD_DP32(flags, TBFLAG_A64, BT, 1);
-            }
-            flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
-        }
+        flags = rebuild_hflags_a64(env, current_el);
+        flags = FIELD_DP32(flags, TBFLAG_A64, BTYPE, env->btype);
+        pstate_for_ss = env->pstate;
     } else {
         *pc = env->regs[15];
+        flags = rebuild_hflags_a32(env, current_el);
         flags = FIELD_DP32(flags, TBFLAG_A32, THUMB, env->thumb);
+        flags = FIELD_DP32(flags, TBFLAG_A32, CONDEXEC, env->condexec_bits);
         flags = FIELD_DP32(flags, TBFLAG_A32, VECLEN, env->vfp.vec_len);
         flags = FIELD_DP32(flags, TBFLAG_A32, VECSTRIDE, env->vfp.vec_stride);
-        flags = FIELD_DP32(flags, TBFLAG_A32, CONDEXEC, env->condexec_bits);
-        flags = FIELD_DP32(flags, TBFLAG_A32, SCTLR_B, arm_sctlr_b(env));
-        flags = FIELD_DP32(flags, TBFLAG_A32, NS, !access_secure_reg(env));
         if (env->vfp.xregs[ARM_VFP_FPEXC] & (1 << 30)
             || arm_el_is_aa64(env, 1)) {
             flags = FIELD_DP32(flags, TBFLAG_A32, VFPEN, 1);
         }
-        flags = FIELD_DP32(flags, TBFLAG_A32, XSCALE_CPAR, env->cp15.c15_cpar);
+        pstate_for_ss = env->uncached_cpsr;
     }
 
-    flags = FIELD_DP32(flags, TBFLAG_ANY, MMUIDX, arm_to_core_mmu_idx(mmu_idx));
-
     /* The SS_ACTIVE and PSTATE_SS bits correspond to the state machine
      * states defined in the ARM ARM for software singlestep:
      *  SS_ACTIVE   PSTATE.SS   State
      *     0            x       Inactive (the TB flag for SS is always 0)
      *     1            0       Active-pending
      *     1            1       Active-not-pending
+     * SS_ACTIVE is set in hflags; PSTATE_SS is computed every TB.
      */
-    if (arm_singlestep_active(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, SS_ACTIVE, 1);
-        if (is_a64(env)) {
-            if (env->pstate & PSTATE_SS) {
-                flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
-            }
-        } else {
-            if (env->uncached_cpsr & PSTATE_SS) {
-                flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
-            }
-        }
-    }
-    if (arm_cpu_data_is_big_endian(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_ANY, BE_DATA, 1);
-    }
-    flags = FIELD_DP32(flags, TBFLAG_ANY, FPEXC_EL, fp_el);
-
-    if (arm_v7m_is_handler_mode(env)) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, HANDLER, 1);
-    }
-
-    /* v8M always applies stack limit checks unless CCR.STKOFHFNMIGN is
-     * suppressing them because the requested execution priority is less than 0.
-     */
-    if (arm_feature(env, ARM_FEATURE_V8) &&
-        arm_feature(env, ARM_FEATURE_M) &&
-        !((mmu_idx & ARM_MMU_IDX_M_NEGPRI) &&
-          (env->v7m.ccr[env->v7m.secure] & R_V7M_CCR_STKOFHFNMIGN_MASK))) {
-        flags = FIELD_DP32(flags, TBFLAG_A32, STACKCHECK, 1);
+    if (FIELD_EX32(flags, TBFLAG_ANY, SS_ACTIVE)
+        && (pstate_for_ss & PSTATE_SS)) {
+        flags = FIELD_DP32(flags, TBFLAG_ANY, PSTATE_SS, 1);
     }
 
     *pflags = flags;
-    *cs_base = 0;
 }
 
 #ifdef TARGET_AARCH64
-- 
2.17.2
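
For readers less familiar with the TB-flags machinery, the shape of the change is: the expensive-to-derive flag bits move out of cpu_get_tb_cpu_state() into rebuild_hflags_a32()/rebuild_hflags_a64(), whose result can be cached in env->hflags and refreshed only when processor state actually changes, while the per-TB bits (PSTATE_SS, BTYPE, and the AArch32 fields marked "Not cached") are still folded in on every call. Below is a minimal, self-contained sketch of that caching pattern; it is not QEMU code, and every toy_* name is invented purely for illustration.

/*
 * Toy model of the hflags caching scheme described above (NOT QEMU code).
 * Expensive state is folded into a cached word when it changes; the cheap
 * per-TB hook only ORs in the bits that vary between translation blocks.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t hflags;     /* cached flags, analogous to env->hflags */
    int mmu_idx;         /* changes rarely, e.g. on exception entry */
    int pstate_ss;       /* changes constantly, analogous to PSTATE.SS */
} ToyCPUState;

/* Expensive part: recomputed only when the relevant state changes. */
static uint32_t toy_rebuild_hflags(const ToyCPUState *env)
{
    return (uint32_t)env->mmu_idx << 28;    /* like TBFLAG_ANY, MMUIDX */
}

/* Call sites that change mmu_idx refresh the cache once, here. */
static void toy_set_mmu_idx(ToyCPUState *env, int mmu_idx)
{
    env->mmu_idx = mmu_idx;
    env->hflags = toy_rebuild_hflags(env);  /* like arm_rebuild_hflags() */
}

/* Cheap per-TB hook: start from the cache, add only the per-TB bits. */
static uint32_t toy_get_tb_flags(const ToyCPUState *env)
{
    uint32_t flags = env->hflags;
    if (env->pstate_ss) {
        flags |= 1u << 26;                  /* like TBFLAG_ANY, PSTATE_SS */
    }
    return flags;
}

int main(void)
{
    ToyCPUState env = { 0, 0, 0 };
    toy_set_mmu_idx(&env, 3);
    env.pstate_ss = 1;
    printf("tb flags = 0x%08x\n", (unsigned)toy_get_tb_flags(&env));
    return 0;
}

The split into separate a32/a64 rebuild functions mirrors the patch's design choice: each helper touches only the system registers relevant to its execution state, so callers that know which state they are in (the TCG helpers above) avoid the is_a64() dispatch entirely.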