From: Max Filippov <jcmvbkbc@gmail.com>
To: qemu-devel@nongnu.org
Cc: Max Filippov, Philippe Mathieu-Daudé
Subject: [PATCH v3] target/xtensa: clean up unaligned access
Date: Mon, 17 May 2021 13:52:59 -0700
Message-Id: <20210517205259.13241-1-jcmvbkbc@gmail.com>
X-Mailer: git-send-email 2.20.1

Xtensa cores may or may not have hardware support for unaligned memory
access. On cores with such support, pass MO_UNALN in the memory access
flags for all operations that would not raise an exception.

Drop the condition from xtensa_cpu_do_unaligned_access and replace it
with an assertion. Add a test.

Suggested-by: Philippe Mathieu-Daudé
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
---
Changes v2->v3:
- drop assertion for !XTENSA_OPTION_HW_ALIGNMENT from
  xtensa_cpu_do_unaligned_access to correctly handle acquire/release
  instructions;
- add tests for acquire/release instructions.

Changes v1->v2:
- correctly handle the case of !XCHAL_UNALIGNED_*_EXCEPTION in the test
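As a quick standalone illustration (not part of the patch, and using
stand-in values rather than QEMU's real MemOp definitions), this is the
flag decision the reworked gen_load_store_alignment() makes: plain
loads/stores on a core with the hardware-alignment option get MO_UNALN,
while exclusive/atomic and acquire/release accesses (which pass
no_hw_alignment=true) and all accesses on other cores keep MO_ALIGN so
that the unaligned-access hook can still raise the exception.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for QEMU's MemOp alignment flags. */
typedef enum { MO_ALIGN, MO_UNALN } MemOp;

/*
 * Mirrors the return value of the new gen_load_store_alignment():
 * hw_alignment    - core implements XTENSA_OPTION_HW_ALIGNMENT
 * no_hw_alignment - caller is an exclusive/atomic or acquire/release
 *                   access (l32ex, s32ex, s32c1i, l32ai, s32ri) that
 *                   must stay aligned even on such cores
 */
static MemOp alignment_memop(bool hw_alignment, bool no_hw_alignment)
{
    if (!no_hw_alignment && hw_alignment) {
        return MO_UNALN;
    } else {
        return MO_ALIGN;
    }
}

int main(void)
{
    printf("l32i on HW-alignment core:  %s\n",
           alignment_memop(true, false) == MO_UNALN ? "MO_UNALN" : "MO_ALIGN");
    printf("l32ex on HW-alignment core: %s\n",
           alignment_memop(true, true) == MO_UNALN ? "MO_UNALN" : "MO_ALIGN");
    printf("l32i without HW alignment:  %s\n",
           alignment_memop(false, false) == MO_UNALN ? "MO_UNALN" : "MO_ALIGN");
    return 0;
}

The new test should be picked up by the usual TCG test run (something
like "make check-tcg" from the build tree with an xtensa cross
toolchain available); the exact invocation depends on the local setup.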
 target/xtensa/helper.c             |  13 +-
 target/xtensa/translate.c          | 108 +++++++-------
 tests/tcg/xtensa/test_load_store.S | 221 +++++++++++++++++++++++++++++
 3 files changed, 281 insertions(+), 61 deletions(-)
 create mode 100644 tests/tcg/xtensa/test_load_store.S

diff --git a/target/xtensa/helper.c b/target/xtensa/helper.c
index eeffee297d15..f18ab383fd89 100644
--- a/target/xtensa/helper.c
+++ b/target/xtensa/helper.c
@@ -270,13 +270,12 @@ void xtensa_cpu_do_unaligned_access(CPUState *cs,
     XtensaCPU *cpu = XTENSA_CPU(cs);
     CPUXtensaState *env = &cpu->env;

-    if (xtensa_option_enabled(env->config, XTENSA_OPTION_UNALIGNED_EXCEPTION) &&
-        !xtensa_option_enabled(env->config, XTENSA_OPTION_HW_ALIGNMENT)) {
-        cpu_restore_state(CPU(cpu), retaddr, true);
-        HELPER(exception_cause_vaddr)(env,
-                                      env->pc, LOAD_STORE_ALIGNMENT_CAUSE,
-                                      addr);
-    }
+    assert(xtensa_option_enabled(env->config,
+                                 XTENSA_OPTION_UNALIGNED_EXCEPTION));
+    cpu_restore_state(CPU(cpu), retaddr, true);
+    HELPER(exception_cause_vaddr)(env,
+                                  env->pc, LOAD_STORE_ALIGNMENT_CAUSE,
+                                  addr);
 }

 bool xtensa_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
diff --git a/target/xtensa/translate.c b/target/xtensa/translate.c
index 0ae4efc48a17..8759bea7ff85 100644
--- a/target/xtensa/translate.c
+++ b/target/xtensa/translate.c
@@ -339,16 +339,6 @@ static void gen_exception_cause(DisasContext *dc, uint32_t cause)
     }
 }

-static void gen_exception_cause_vaddr(DisasContext *dc, uint32_t cause,
-                                      TCGv_i32 vaddr)
-{
-    TCGv_i32 tpc = tcg_const_i32(dc->pc);
-    TCGv_i32 tcause = tcg_const_i32(cause);
-    gen_helper_exception_cause_vaddr(cpu_env, tpc, tcause, vaddr);
-    tcg_temp_free(tpc);
-    tcg_temp_free(tcause);
-}
-
 static void gen_debug_exception(DisasContext *dc, uint32_t cause)
 {
     TCGv_i32 tpc = tcg_const_i32(dc->pc);
@@ -554,20 +544,16 @@ static uint32_t test_exceptions_hpi(DisasContext *dc, const OpcodeArg arg[],
     return test_exceptions_sr(dc, arg, par);
 }

-static void gen_load_store_alignment(DisasContext *dc, int shift,
-                                     TCGv_i32 addr, bool no_hw_alignment)
+static MemOp gen_load_store_alignment(DisasContext *dc, int shift,
+                                      TCGv_i32 addr, bool no_hw_alignment)
 {
     if (!option_enabled(dc, XTENSA_OPTION_UNALIGNED_EXCEPTION)) {
         tcg_gen_andi_i32(addr, addr, ~0 << shift);
-    } else if (option_enabled(dc, XTENSA_OPTION_HW_ALIGNMENT) &&
-               no_hw_alignment) {
-        TCGLabel *label = gen_new_label();
-        TCGv_i32 tmp = tcg_temp_new_i32();
-        tcg_gen_andi_i32(tmp, addr, ~(~0 << shift));
-        tcg_gen_brcondi_i32(TCG_COND_EQ, tmp, 0, label);
-        gen_exception_cause_vaddr(dc, LOAD_STORE_ALIGNMENT_CAUSE, addr);
-        gen_set_label(label);
-        tcg_temp_free(tmp);
+    }
+    if (!no_hw_alignment && option_enabled(dc, XTENSA_OPTION_HW_ALIGNMENT)) {
+        return MO_UNALN;
+    } else {
+        return MO_ALIGN;
     }
 }

@@ -1784,10 +1770,11 @@ static void translate_l32e(DisasContext *dc, const OpcodeArg arg[],
                            const uint32_t par[])
 {
     TCGv_i32 addr = tcg_temp_new_i32();
+    MemOp al;

     tcg_gen_addi_i32(addr, arg[1].in, arg[2].imm);
-    gen_load_store_alignment(dc, 2, addr, false);
-    tcg_gen_qemu_ld_tl(arg[0].out, addr, dc->ring, MO_TEUL);
+    al = gen_load_store_alignment(dc, 2, addr, false);
+    tcg_gen_qemu_ld_tl(arg[0].out, addr, dc->ring, MO_TEUL | al);
     tcg_temp_free(addr);
 }

@@ -1813,11 +1800,12 @@ static void translate_l32ex(DisasContext *dc, const OpcodeArg arg[],
                             const uint32_t par[])
 {
     TCGv_i32 addr = tcg_temp_new_i32();
+    MemOp al;

     tcg_gen_mov_i32(addr, arg[1].in);
-    gen_load_store_alignment(dc, 2, addr, true);
+    al = gen_load_store_alignment(dc, 2, addr, true);
     gen_check_exclusive(dc, addr, false);
-    tcg_gen_qemu_ld_i32(arg[0].out, addr, dc->ring, MO_TEUL);
+    tcg_gen_qemu_ld_i32(arg[0].out, addr, dc->ring, MO_TEUL | al);
     tcg_gen_mov_i32(cpu_exclusive_addr, addr);
     tcg_gen_mov_i32(cpu_exclusive_val, arg[0].out);
     tcg_temp_free(addr);
@@ -1827,18 +1815,19 @@ static void translate_ldst(DisasContext *dc, const OpcodeArg arg[],
                            const uint32_t par[])
 {
     TCGv_i32 addr = tcg_temp_new_i32();
+    MemOp al = MO_UNALN;

     tcg_gen_addi_i32(addr, arg[1].in, arg[2].imm);
     if (par[0] & MO_SIZE) {
-        gen_load_store_alignment(dc, par[0] & MO_SIZE, addr, par[1]);
+        al = gen_load_store_alignment(dc, par[0] & MO_SIZE, addr, par[1]);
     }
     if (par[2]) {
         if (par[1]) {
             tcg_gen_mb(TCG_BAR_STRL | TCG_MO_ALL);
         }
-        tcg_gen_qemu_st_tl(arg[0].in, addr, dc->cring, par[0]);
+        tcg_gen_qemu_st_tl(arg[0].in, addr, dc->cring, par[0] | al);
     } else {
-        tcg_gen_qemu_ld_tl(arg[0].out, addr, dc->cring, par[0]);
+        tcg_gen_qemu_ld_tl(arg[0].out, addr, dc->cring, par[0] | al);
         if (par[1]) {
             tcg_gen_mb(TCG_BAR_LDAQ | TCG_MO_ALL);
         }
@@ -1909,9 +1898,11 @@ static void translate_mac16(DisasContext *dc, const OpcodeArg arg[],
     TCGv_i32 mem32 = tcg_temp_new_i32();

     if (ld_offset) {
+        MemOp al;
+
         tcg_gen_addi_i32(vaddr, arg[1].in, ld_offset);
-        gen_load_store_alignment(dc, 2, vaddr, false);
-        tcg_gen_qemu_ld32u(mem32, vaddr, dc->cring);
+        al = gen_load_store_alignment(dc, 2, vaddr, false);
+        tcg_gen_qemu_ld_tl(mem32, vaddr, dc->cring, MO_TEUL | al);
     }
     if (op != MAC16_NONE) {
         TCGv_i32 m1 = gen_mac16_m(arg[off].in,
@@ -2357,13 +2348,14 @@ static void translate_s32c1i(DisasContext *dc, const OpcodeArg arg[],
 {
     TCGv_i32 tmp = tcg_temp_local_new_i32();
     TCGv_i32 addr = tcg_temp_local_new_i32();
+    MemOp al;

     tcg_gen_mov_i32(tmp, arg[0].in);
     tcg_gen_addi_i32(addr, arg[1].in, arg[2].imm);
-    gen_load_store_alignment(dc, 2, addr, true);
+    al = gen_load_store_alignment(dc, 2, addr, true);
     gen_check_atomctl(dc, addr);
     tcg_gen_atomic_cmpxchg_i32(arg[0].out, addr, cpu_SR[SCOMPARE1],
-                               tmp, dc->cring, MO_TEUL);
+                               tmp, dc->cring, MO_TEUL | al);
     tcg_temp_free(addr);
     tcg_temp_free(tmp);
 }
@@ -2372,10 +2364,11 @@ static void translate_s32e(DisasContext *dc, const OpcodeArg arg[],
                            const uint32_t par[])
 {
     TCGv_i32 addr = tcg_temp_new_i32();
+    MemOp al;

     tcg_gen_addi_i32(addr, arg[1].in, arg[2].imm);
-    gen_load_store_alignment(dc, 2, addr, false);
-    tcg_gen_qemu_st_tl(arg[0].in, addr, dc->ring, MO_TEUL);
+    al = gen_load_store_alignment(dc, 2, addr, false);
+    tcg_gen_qemu_st_tl(arg[0].in, addr, dc->ring, MO_TEUL | al);
     tcg_temp_free(addr);
 }

@@ -2386,14 +2379,15 @@ static void translate_s32ex(DisasContext *dc, const OpcodeArg arg[],
     TCGv_i32 addr = tcg_temp_local_new_i32();
     TCGv_i32 res = tcg_temp_local_new_i32();
     TCGLabel *label = gen_new_label();
+    MemOp al;

     tcg_gen_movi_i32(res, 0);
     tcg_gen_mov_i32(addr, arg[1].in);
-    gen_load_store_alignment(dc, 2, addr, true);
+    al = gen_load_store_alignment(dc, 2, addr, true);
     tcg_gen_brcond_i32(TCG_COND_NE, addr, cpu_exclusive_addr, label);
     gen_check_exclusive(dc, addr, true);
     tcg_gen_atomic_cmpxchg_i32(prev, cpu_exclusive_addr, cpu_exclusive_val,
-                               arg[0].in, dc->cring, MO_TEUL);
+                               arg[0].in, dc->cring, MO_TEUL | al);
     tcg_gen_setcond_i32(TCG_COND_EQ, res, prev, cpu_exclusive_val);
     tcg_gen_movcond_i32(TCG_COND_EQ, cpu_exclusive_val, prev,
                         cpu_exclusive_val, prev, cpu_exclusive_val);
@@ -6642,13 +6636,14 @@ static void translate_ldsti(DisasContext *dc, const OpcodeArg arg[],
                             const uint32_t par[])
 {
     TCGv_i32 addr = tcg_temp_new_i32();
+    MemOp al;

     tcg_gen_addi_i32(addr, arg[1].in, arg[2].imm);
-    gen_load_store_alignment(dc, 2, addr, false);
+    al = gen_load_store_alignment(dc, 2, addr, false);
     if (par[0]) {
-        tcg_gen_qemu_st32(arg[0].in, addr, dc->cring);
+        tcg_gen_qemu_st_tl(arg[0].in, addr, dc->cring, MO_TEUL | al);
     } else {
-        tcg_gen_qemu_ld32u(arg[0].out, addr, dc->cring);
+        tcg_gen_qemu_ld_tl(arg[0].out, addr, dc->cring, MO_TEUL | al);
     }
     if (par[1]) {
         tcg_gen_mov_i32(arg[1].out, addr);
@@ -6660,13 +6655,14 @@ static void translate_ldstx(DisasContext *dc, const OpcodeArg arg[],
                             const uint32_t par[])
 {
     TCGv_i32 addr = tcg_temp_new_i32();
+    MemOp al;

     tcg_gen_add_i32(addr, arg[1].in, arg[2].in);
-    gen_load_store_alignment(dc, 2, addr, false);
+    al = gen_load_store_alignment(dc, 2, addr, false);
     if (par[0]) {
-        tcg_gen_qemu_st32(arg[0].in, addr, dc->cring);
+        tcg_gen_qemu_st_tl(arg[0].in, addr, dc->cring, MO_TEUL | al);
     } else {
-        tcg_gen_qemu_ld32u(arg[0].out, addr, dc->cring);
+        tcg_gen_qemu_ld_tl(arg[0].out, addr, dc->cring, MO_TEUL | al);
     }
     if (par[1]) {
         tcg_gen_mov_i32(arg[1].out, addr);
@@ -7104,6 +7100,7 @@ static void translate_ldsti_d(DisasContext *dc, const OpcodeArg arg[],
                               const uint32_t par[])
 {
     TCGv_i32 addr;
+    MemOp al;

     if (par[1]) {
         addr = tcg_temp_new_i32();
@@ -7111,11 +7108,11 @@ static void translate_ldsti_d(DisasContext *dc, const OpcodeArg arg[],
     } else {
         addr = arg[1].in;
     }
-    gen_load_store_alignment(dc, 3, addr, false);
+    al = gen_load_store_alignment(dc, 3, addr, false);
     if (par[0]) {
-        tcg_gen_qemu_st64(arg[0].in, addr, dc->cring);
+        tcg_gen_qemu_st_i64(arg[0].in, addr, dc->cring, MO_TEQ | al);
     } else {
-        tcg_gen_qemu_ld64(arg[0].out, addr, dc->cring);
+        tcg_gen_qemu_ld_i64(arg[0].out, addr, dc->cring, MO_TEQ | al);
     }
     if (par[2]) {
         if (par[1]) {
@@ -7134,6 +7131,7 @@ static void translate_ldsti_s(DisasContext *dc, const OpcodeArg arg[],
 {
     TCGv_i32 addr;
     OpcodeArg arg32[1];
+    MemOp al;

     if (par[1]) {
         addr = tcg_temp_new_i32();
@@ -7141,14 +7139,14 @@ static void translate_ldsti_s(DisasContext *dc, const OpcodeArg arg[],
     } else {
         addr = arg[1].in;
     }
-    gen_load_store_alignment(dc, 2, addr, false);
+    al = gen_load_store_alignment(dc, 2, addr, false);
     if (par[0]) {
         get_f32_i1(arg, arg32, 0);
-        tcg_gen_qemu_st32(arg32[0].in, addr, dc->cring);
+        tcg_gen_qemu_st_tl(arg32[0].in, addr, dc->cring, MO_TEUL | al);
         put_f32_i1(arg, arg32, 0);
     } else {
         get_f32_o1(arg, arg32, 0);
-        tcg_gen_qemu_ld32u(arg32[0].out, addr, dc->cring);
+        tcg_gen_qemu_ld_tl(arg32[0].out, addr, dc->cring, MO_TEUL | al);
         put_f32_o1(arg, arg32, 0);
     }
     if (par[2]) {
@@ -7167,6 +7165,7 @@ static void translate_ldstx_d(DisasContext *dc, const OpcodeArg arg[],
                               const uint32_t par[])
 {
     TCGv_i32 addr;
+    MemOp al;

     if (par[1]) {
         addr = tcg_temp_new_i32();
@@ -7174,11 +7173,11 @@ static void translate_ldstx_d(DisasContext *dc, const OpcodeArg arg[],
     } else {
         addr = arg[1].in;
     }
-    gen_load_store_alignment(dc, 3, addr, false);
+    al = gen_load_store_alignment(dc, 3, addr, false);
     if (par[0]) {
-        tcg_gen_qemu_st64(arg[0].in, addr, dc->cring);
+        tcg_gen_qemu_st_i64(arg[0].in, addr, dc->cring, MO_TEQ | al);
     } else {
-        tcg_gen_qemu_ld64(arg[0].out, addr, dc->cring);
+        tcg_gen_qemu_ld_i64(arg[0].out, addr, dc->cring, MO_TEQ | al);
     }
     if (par[2]) {
         if (par[1]) {
@@ -7197,6 +7196,7 @@ static void translate_ldstx_s(DisasContext *dc, const OpcodeArg arg[],
 {
     TCGv_i32 addr;
     OpcodeArg arg32[1];
+    MemOp al;

     if (par[1]) {
         addr = tcg_temp_new_i32();
@@ -7204,14 +7204,14 @@ static void translate_ldstx_s(DisasContext *dc, const OpcodeArg arg[],
     } else {
         addr = arg[1].in;
     }
-    gen_load_store_alignment(dc, 2, addr, false);
+    al = gen_load_store_alignment(dc, 2, addr, false);
     if (par[0]) {
         get_f32_i1(arg, arg32, 0);
-        tcg_gen_qemu_st32(arg32[0].in, addr, dc->cring);
+        tcg_gen_qemu_st_tl(arg32[0].in, addr, dc->cring, MO_TEUL | al);
         put_f32_i1(arg, arg32, 0);
     } else {
         get_f32_o1(arg, arg32, 0);
-        tcg_gen_qemu_ld32u(arg32[0].out, addr, dc->cring);
+        tcg_gen_qemu_ld_tl(arg32[0].out, addr, dc->cring, MO_TEUL | al);
         put_f32_o1(arg, arg32, 0);
     }
     if (par[2]) {
diff --git a/tests/tcg/xtensa/test_load_store.S b/tests/tcg/xtensa/test_load_store.S
new file mode 100644
index 000000000000..b339f40f1280
--- /dev/null
+++ b/tests/tcg/xtensa/test_load_store.S
@@ -0,0 +1,221 @@
+#include "macros.inc"
+
+test_suite load_store
+
+.macro load_ok_test op, type, data, value
+    .data
+    .align 4
+1:
+    \type \data
+    .previous
+
+    reset_ps
+    set_vector kernel, 0
+    movi a3, 1b
+    addi a4, a4, 1
+    mov a5, a4
+    \op a5, a3, 0
+    movi a6, \value
+    assert eq, a5, a6
+.endm
+
+#if XCHAL_UNALIGNED_LOAD_EXCEPTION
+.macro load_unaligned_test will_trap, op, type, data, value
+    .data
+    .align 4
+    .byte 0
+1:
+    \type \data
+    .previous
+
+    reset_ps
+    .ifeq \will_trap
+    set_vector kernel, 0
+    .else
+    set_vector kernel, 2f
+    .endif
+    movi a3, 1b
+    addi a4, a4, 1
+    mov a5, a4
+1:
+    \op a5, a3, 0
+    .ifeq \will_trap
+    movi a6, \value
+    assert eq, a5, a6
+    .else
+    test_fail
+2:
+    rsr a6, exccause
+    movi a7, 9
+    assert eq, a6, a7
+    rsr a6, epc1
+    movi a7, 1b
+    assert eq, a6, a7
+    rsr a6, excvaddr
+    assert eq, a6, a3
+    assert eq, a5, a4
+    .endif
+    reset_ps
+.endm
+#else
+.macro load_unaligned_test will_trap, op, type, data, value
+    .data
+    .align 4
+1:
+    \type \data
+    .previous
+
+    reset_ps
+    set_vector kernel, 0
+    movi a3, 1b + 1
+    addi a4, a4, 1
+    mov a5, a4
+    \op a5, a3, 0
+    movi a6, \value
+    assert eq, a5, a6
+.endm
+#endif
+
+.macro store_ok_test op, type, value
+    .data
+    .align 4
+    .byte 0, 0, 0, 0x55
+1:
+    \type 0
+2:
+    .byte 0xaa
+    .previous
+
+    reset_ps
+    set_vector kernel, 0
+    movi a3, 1b
+    movi a5, \value
+    \op a5, a3, 0
+    movi a3, 2b
+    l8ui a5, a3, 0
+    movi a6, 0xaa
+    assert eq, a5, a6
+    movi a3, 1b - 1
+    l8ui a5, a3, 0
+    movi a6, 0x55
+    assert eq, a5, a6
+.endm
+
+#if XCHAL_UNALIGNED_STORE_EXCEPTION
+.macro store_unaligned_test will_trap, op, nop, type, value
+    .data
+    .align 4
+    .byte 0x55
+1:
+    \type 0
+2:
+    .byte 0xaa
+    .previous
+
+    reset_ps
+    .ifeq \will_trap
+    set_vector kernel, 0
+    .else
+    set_vector kernel, 4f
+    .endif
+    movi a3, 1b
+    movi a5, \value
+3:
+    \op a5, a3, 0
+    .ifne \will_trap
+    test_fail
+4:
+    rsr a6, exccause
+    movi a7, 9
+    assert eq, a6, a7
+    rsr a6, epc1
+    movi a7, 3b
+    assert eq, a6, a7
+    rsr a6, excvaddr
+    assert eq, a6, a3
+    l8ui a5, a3, 0
+    assert eqi, a5, 0
+    .endif
+    reset_ps
+    movi a3, 2b
+    l8ui a5, a3, 0
+    movi a6, 0xaa
+    assert eq, a5, a6
+    movi a3, 1b - 1
+    l8ui a5, a3, 0
+    movi a6, 0x55
+    assert eq, a5, a6
+.endm
+#else
+.macro store_unaligned_test will_trap, sop, lop, type, value
+    .data
+    .align 4
+    .byte 0x55
+1:
+    \type 0
+    .previous
+
+    reset_ps
+    set_vector kernel, 0
+    movi a3, 1b
+    movi a5, \value
+    \sop a5, a3, 0
+    movi a3, 1b - 1
+    \lop a6, a3, 0
+    assert eq, a5, a6
+.endm
+#endif
+
+test load_ok
+    load_ok_test l16si, .short, 0x00001234, 0x00001234
+    load_ok_test l16si, .short, 0x000089ab, 0xffff89ab
+    load_ok_test l16ui, .short, 0x00001234, 0x00001234
+    load_ok_test l16ui, .short, 0x000089ab, 0x000089ab
+    load_ok_test l32i, .word, 0x12345678, 0x12345678
+#if XCHAL_HAVE_RELEASE_SYNC
+    load_ok_test l32ai, .word, 0x12345678, 0x12345678
+#endif
+test_end
+
+#undef WILL_TRAP
+#if XCHAL_UNALIGNED_LOAD_HW
+#define WILL_TRAP 0
+#else
+#define WILL_TRAP 1
+#endif
+
+test load_unaligned
+    load_unaligned_test WILL_TRAP, l16si, .short, 0x00001234, 0x00001234
+    load_unaligned_test WILL_TRAP, l16si, .short, 0x000089ab, 0xffff89ab
+    load_unaligned_test WILL_TRAP, l16ui, .short, 0x00001234, 0x00001234
+    load_unaligned_test WILL_TRAP, l16ui, .short, 0x000089ab, 0x000089ab
+    load_unaligned_test WILL_TRAP, l32i, .word, 0x12345678, 0x12345678
+#if XCHAL_HAVE_RELEASE_SYNC
+    load_unaligned_test 1, l32ai, .word, 0x12345678, 0x12345678
+#endif
+test_end
+
+test store_ok
+    store_ok_test s16i, .short, 0x00001234
+    store_ok_test s32i, .word, 0x12345678
+#if XCHAL_HAVE_RELEASE_SYNC
+    store_ok_test s32ri, .word, 0x12345678
+#endif
+test_end
+
+#undef WILL_TRAP
+#if XCHAL_UNALIGNED_STORE_HW
+#define WILL_TRAP 0
+#else
+#define WILL_TRAP 1
+#endif
+
+test store_unaligned
+    store_unaligned_test WILL_TRAP, s16i, l16ui, .short, 0x00001234
+    store_unaligned_test WILL_TRAP, s32i, l32i, .word, 0x12345678
+#if XCHAL_HAVE_RELEASE_SYNC
+    store_unaligned_test 1, s32ri, l32i, .word, 0x12345678
+#endif
+test_end
+
+test_suite_end
--
2.20.1