From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Subject: [Qemu-devel] [PATCH 6/9] target/arm: Introduce ARM_FEATURE_V8_ATOMICS and initial decode
Date: Thu, 26 Apr 2018 14:26:48 -1000
Message-Id: <20180427002651.28356-7-richard.henderson@linaro.org>
In-Reply-To: <20180427002651.28356-1-richard.henderson@linaro.org>
References: <20180427002651.28356-1-richard.henderson@linaro.org>

The insns in the ARMv8.1-Atomics extension are added to the existing
load/store exclusive and load/store register opcode spaces.  Rearrange
the top-level decoders for these to accommodate them.  The Atomics
insns themselves still generate Unallocated.

Signed-off-by: Richard Henderson
Reviewed-by: Peter Maydell
---
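As a side note on the decode rearrangement (not part of the patch): the
new switch in disas_ldst_excl() keys on a 4-bit value built from the o2
(bit 23), L (bit 22), o1 (bit 21) and o0/LASR (bit 15) fields of the insn,
i.e. extract32(insn, 21, 3) * 2 | is_lasr.  Below is a minimal standalone
sketch of that packing; extract32() is re-implemented locally so it builds
outside QEMU, and the example encodings are hand-assembled for
illustration only.

  #include <stdint.h>
  #include <stdio.h>

  /* Local stand-in for QEMU's extract32(). */
  static uint32_t extract32(uint32_t value, int start, int length)
  {
      return (value >> start) & (~0u >> (32 - length));
  }

  int main(void)
  {
      /* Hand-assembled A64 load/store exclusive encodings (illustrative). */
      static const struct { const char *name; uint32_t insn; } tests[] = {
          { "STXR  w2, w0, [x1]", 0x88027c20 },  /* expect key 0x0 */
          { "LDAXR w0, [x1]",     0x885ffc20 },  /* expect key 0x5 */
          { "STLR  w0, [x1]",     0x889ffc20 },  /* expect key 0x9 */
          { "LDAR  w0, [x1]",     0x88dffc20 },  /* expect key 0xd */
      };

      for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
          uint32_t insn = tests[i].insn;
          unsigned is_lasr = extract32(insn, 15, 1);            /* o0 */
          unsigned key = extract32(insn, 21, 3) * 2 | is_lasr;  /* o2:L:o1:o0 */
          printf("%-20s -> o2_L_o1_o0 = 0x%x\n", tests[i].name, key);
      }
      return 0;
  }
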
 target/arm/cpu.h           |   1 +
 linux-user/elfload.c       |   1 +
 target/arm/translate-a64.c | 182 +++++++++++++++++++++++++++++++++------------
 3 files changed, 138 insertions(+), 46 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 44e6b77151..013f785306 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1449,6 +1449,7 @@ enum arm_features {
     ARM_FEATURE_V8_SHA3, /* implements SHA3 part of v8 Crypto Extensions */
     ARM_FEATURE_V8_SM3, /* implements SM3 part of v8 Crypto Extensions */
     ARM_FEATURE_V8_SM4, /* implements SM4 part of v8 Crypto Extensions */
+    ARM_FEATURE_V8_ATOMICS, /* implements v8.1 Atomic Memory Extensions */
     ARM_FEATURE_V8_RDM, /* implements v8.1 simd round multiply */
     ARM_FEATURE_V8_FP16, /* implements v8.2 half-precision float */
     ARM_FEATURE_V8_FCMA, /* has complex number part of v8.3 extensions. */
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index c77ed1bb01..a12b7b9d8c 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -557,6 +557,7 @@ static uint32_t get_elf_hwcap(void)
     GET_FEATURE(ARM_FEATURE_V8_SHA512, ARM_HWCAP_A64_SHA512);
     GET_FEATURE(ARM_FEATURE_V8_FP16,
                 ARM_HWCAP_A64_FPHP | ARM_HWCAP_A64_ASIMDHP);
+    GET_FEATURE(ARM_FEATURE_V8_ATOMICS, ARM_HWCAP_A64_ATOMICS);
     GET_FEATURE(ARM_FEATURE_V8_RDM, ARM_HWCAP_A64_ASIMDRDM);
     GET_FEATURE(ARM_FEATURE_V8_FCMA, ARM_HWCAP_A64_FCMA);
 #undef GET_FEATURE
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d916fea3a3..0706c8c394 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -2147,62 +2147,98 @@ static void disas_ldst_excl(DisasContext *s, uint32_t insn)
     int rt = extract32(insn, 0, 5);
     int rn = extract32(insn, 5, 5);
     int rt2 = extract32(insn, 10, 5);
-    int is_lasr = extract32(insn, 15, 1);
     int rs = extract32(insn, 16, 5);
-    int is_pair = extract32(insn, 21, 1);
-    int is_store = !extract32(insn, 22, 1);
-    int is_excl = !extract32(insn, 23, 1);
+    int is_lasr = extract32(insn, 15, 1);
+    int o2_L_o1_o0 = extract32(insn, 21, 3) * 2 | is_lasr;
     int size = extract32(insn, 30, 2);
     TCGv_i64 tcg_addr;
 
-    if ((!is_excl && !is_pair && !is_lasr) ||
-        (!is_excl && is_pair) ||
-        (is_pair && size < 2)) {
-        unallocated_encoding(s);
+    switch (o2_L_o1_o0) {
+    case 0x0: /* STXR */
+    case 0x1: /* STLXR */
+        if (rn == 31) {
+            gen_check_sp_alignment(s);
+        }
+        if (is_lasr) {
+            tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
+        }
+        tcg_addr = read_cpu_reg_sp(s, rn, 1);
+        gen_store_exclusive(s, rs, rt, rt2, tcg_addr, size, false);
         return;
-    }
 
-    if (rn == 31) {
-        gen_check_sp_alignment(s);
-    }
-    tcg_addr = read_cpu_reg_sp(s, rn, 1);
-
-    /* Note that since TCG is single threaded load-acquire/store-release
-     * semantics require no extra if (is_lasr) { ... } handling.
-     */
-
-    if (is_excl) {
-        if (!is_store) {
-            s->is_ldex = true;
-            gen_load_exclusive(s, rt, rt2, tcg_addr, size, is_pair);
-            if (is_lasr) {
-                tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
-            }
-        } else {
-            if (is_lasr) {
-                tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
-            }
-            gen_store_exclusive(s, rs, rt, rt2, tcg_addr, size, is_pair);
+    case 0x4: /* LDXR */
+    case 0x5: /* LDAXR */
+        if (rn == 31) {
+            gen_check_sp_alignment(s);
         }
-    } else {
-        TCGv_i64 tcg_rt = cpu_reg(s, rt);
-        bool iss_sf = disas_ldst_compute_iss_sf(size, false, 0);
+        tcg_addr = read_cpu_reg_sp(s, rn, 1);
+        s->is_ldex = true;
+        gen_load_exclusive(s, rt, rt2, tcg_addr, size, false);
+        if (is_lasr) {
+            tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+        }
+        return;
 
+    case 0x9: /* STLR */
         /* Generate ISS for non-exclusive accesses including LASR. */
-        if (is_store) {
+        if (rn == 31) {
+            gen_check_sp_alignment(s);
+        }
+        tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
+        tcg_addr = read_cpu_reg_sp(s, rn, 1);
+        do_gpr_st(s, cpu_reg(s, rt), tcg_addr, size, true, rt,
+                  disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
+        return;
+
+    case 0xd: /* LDARB */
+        /* Generate ISS for non-exclusive accesses including LASR. */
+        if (rn == 31) {
+            gen_check_sp_alignment(s);
+        }
+        tcg_addr = read_cpu_reg_sp(s, rn, 1);
+        do_gpr_ld(s, cpu_reg(s, rt), tcg_addr, size, false, false, true, rt,
+                  disas_ldst_compute_iss_sf(size, false, 0), is_lasr);
+        tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
+        return;
+
+    case 0x2: case 0x3: /* CASP / STXP */
+        if (size & 2) { /* STXP / STLXP */
+            if (rn == 31) {
+                gen_check_sp_alignment(s);
+            }
             if (is_lasr) {
                 tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
             }
-            do_gpr_st(s, tcg_rt, tcg_addr, size,
-                      true, rt, iss_sf, is_lasr);
-        } else {
-            do_gpr_ld(s, tcg_rt, tcg_addr, size, false, false,
-                      true, rt, iss_sf, is_lasr);
+            tcg_addr = read_cpu_reg_sp(s, rn, 1);
+            gen_store_exclusive(s, rs, rt, rt2, tcg_addr, size, true);
+            return;
+        }
+        /* CASP / CASPL */
+        break;
+
+    case 0x6: case 0x7: /* CASP / LDXP */
+        if (size & 2) { /* LDXP / LDAXP */
+            if (rn == 31) {
+                gen_check_sp_alignment(s);
+            }
+            tcg_addr = read_cpu_reg_sp(s, rn, 1);
+            s->is_ldex = true;
+            gen_load_exclusive(s, rt, rt2, tcg_addr, size, true);
             if (is_lasr) {
                 tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
             }
+            return;
         }
+        /* CASPA / CASPAL */
+        break;
+
+    case 0xa: /* CAS */
+    case 0xb: /* CASL */
+    case 0xe: /* CASA */
+    case 0xf: /* CASAL */
+        break;
     }
+    unallocated_encoding(s);
 }
 
 /*
@@ -2715,6 +2751,55 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
     }
 }
 
+/* Atomic memory operations
+ *
+ *  31  30      27  26    24    22  21   16   15    12    10    5     0
+ * +------+-------+---+-----+-----+---+----+----+-----+-----+----+-----+
+ * | size | 1 1 1 | V | 0 0 | A R | 1 | Rs | o3 | opc | 0 0 | Rn | Rt |
+ * +------+-------+---+-----+-----+--------+----+-----+-----+----+-----+
+ *
+ * Rt: the result register
+ * Rn: base address or SP
+ * Rs: the source register for the operation
+ * V: vector flag (always 0 as of v8.3)
+ * A: acquire flag
+ * R: release flag
+ */
+static void disas_ldst_atomic(DisasContext *s, uint32_t insn,
+                              int size, int rt, bool is_vector)
+{
+    int rs = extract32(insn, 16, 5);
+    int rn = extract32(insn, 5, 5);
+    int o3_opc = extract32(insn, 12, 4);
+    int feature = ARM_FEATURE_V8_ATOMICS;
+
+    if (is_vector) {
+        unallocated_encoding(s);
+        return;
+    }
+    switch (o3_opc) {
+    case 000: /* LDADD */
+    case 001: /* LDCLR */
+    case 002: /* LDEOR */
+    case 003: /* LDSET */
+    case 004: /* LDSMAX */
+    case 005: /* LDSMIN */
+    case 006: /* LDUMAX */
+    case 007: /* LDUMIN */
+    case 010: /* SWP */
+    default:
+        unallocated_encoding(s);
+        return;
+    }
+    if (!arm_dc_feature(s, feature)) {
+        unallocated_encoding(s);
+        return;
+    }
+
+    (void)rs;
+    (void)rn;
+}
+
 /* Load/store register (all forms) */
 static void disas_ldst_reg(DisasContext *s, uint32_t insn)
 {
@@ -2725,23 +2810,28 @@ static void disas_ldst_reg(DisasContext *s, uint32_t insn)
 
     switch (extract32(insn, 24, 2)) {
     case 0:
-        if (extract32(insn, 21, 1) == 1 && extract32(insn, 10, 2) == 2) {
-            disas_ldst_reg_roffset(s, insn, opc, size, rt, is_vector);
-        } else {
+        if (extract32(insn, 21, 1) == 0) {
             /* Load/store register (unscaled immediate)
              * Load/store immediate pre/post-indexed
              * Load/store register unprivileged
              */
             disas_ldst_reg_imm9(s, insn, opc, size, rt, is_vector);
+            return;
+        }
+        switch (extract32(insn, 10, 2)) {
+        case 0:
+            disas_ldst_atomic(s, insn, size, rt, is_vector);
+            return;
+        case 2:
+            disas_ldst_reg_roffset(s, insn, opc, size, rt, is_vector);
+            return;
         }
         break;
     case 1:
         disas_ldst_reg_unsigned_imm(s, insn, opc, size, rt, is_vector);
-        break;
-    default:
-        unallocated_encoding(s);
-        break;
+        return;
     }
+    unallocated_encoding(s);
 }
 
 /* AdvSIMD load/store multiple structures
-- 
2.14.3