From: Thomas Huth <thuth@redhat.com>
To: qemu-devel@nongnu.org, Richard Henderson, Peter Maydell
Cc: qemu-arm@nongnu.org, Alex Bennée
Subject: [PATCH] disas: Remove libvixl disassembler
Date: Fri, 3 Jun 2022 18:42:49 +0200
Message-Id: <20220603164249.112459-1-thuth@redhat.com>
The disassembly via capstone should be superior to our old vixl sources
nowadays, so let's finally cut this old disassembler out of the QEMU
source tree.
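As background on the approach (an editorial illustration, not part of the patch): both the removed vixl decoder and the capstone library classify an A64 instruction word by masking it against bit patterns and then formatting its operand fields. A minimal sketch of that idea in Python follows; the three patterns shown (NOP, RET, ADD-immediate) and the function names are illustrative only and are not QEMU, vixl, or capstone code:

```python
# Table-driven decoding sketch: each entry is (mask, value, formatter).
# An instruction word matches an entry when word & mask == value, which is
# the same classification scheme a real A64 disassembler table uses.
PATTERNS = [
    (0xffffffff, 0xd503201f, lambda i: "nop"),
    # RET: bits 9..5 hold the return-address register (usually x30).
    (0xfffffc1f, 0xd65f0000, lambda i: "ret x%d" % ((i >> 5) & 0x1f)),
    # ADD (immediate), 32- or 64-bit; operands not decoded in this sketch.
    (0x7f800000, 0x11000000, lambda i: "add (immediate)"),
]

def disas(word):
    """Return a mnemonic for a 32-bit A64 instruction word, if known."""
    for mask, value, fmt in PATTERNS:
        if word & mask == value:
            return fmt(word)
    # Unknown encodings fall back to raw data, like objdump's .inst.
    return ".inst 0x%08x" % word
```

A production disassembler differs only in scale: a far larger pattern table (often generated), plus per-pattern operand extraction and formatting.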
Signed-off-by: Thomas Huth
Reviewed-by: Richard Henderson
Tested-by: Richard Henderson
---
See also the discussions here:
- https://lists.gnu.org/archive/html/qemu-devel/2022-06/msg00428.html
- https://lists.nongnu.org/archive/html/qemu-devel/2018-05/msg03416.html

 meson.build                                |    4 -
 disas/libvixl/vixl/a64/assembler-a64.h     | 4624 --------------------
 disas/libvixl/vixl/a64/constants-a64.h     | 2116 ---------
 disas/libvixl/vixl/a64/cpu-a64.h           |   83 -
 disas/libvixl/vixl/a64/decoder-a64.h       |  275 --
 disas/libvixl/vixl/a64/disasm-a64.h        |  177 -
 disas/libvixl/vixl/a64/instructions-a64.h  |  757 ----
 disas/libvixl/vixl/code-buffer.h           |  113 -
 disas/libvixl/vixl/compiler-intrinsics.h   |  155 -
 disas/libvixl/vixl/globals.h               |  155 -
 disas/libvixl/vixl/invalset.h              |  775 ----
 disas/libvixl/vixl/platform.h              |   39 -
 disas/libvixl/vixl/utils.h                 |  286 --
 include/exec/poison.h                      |    2 -
 disas.c                                    |    3 -
 target/arm/cpu.c                           |    7 -
 MAINTAINERS                                |    4 -
 disas/arm-a64.cc                           |  101 -
 disas/libvixl/LICENCE                      |   30 -
 disas/libvixl/README                       |   11 -
 disas/libvixl/meson.build                  |    7 -
 disas/libvixl/vixl/a64/decoder-a64.cc      |  877 ----
 disas/libvixl/vixl/a64/disasm-a64.cc       | 3495 ---------
 disas/libvixl/vixl/a64/instructions-a64.cc |  622 ---
 disas/libvixl/vixl/compiler-intrinsics.cc  |  144 -
 disas/libvixl/vixl/utils.cc                |  142 -
 disas/meson.build                          |    5 -
 scripts/clean-header-guards.pl             |    4 +-
 scripts/clean-includes                     |    2 +-
 scripts/coverity-scan/COMPONENTS.md        |    3 -
 30 files changed, 3 insertions(+), 15015 deletions(-)
 delete mode 100644 disas/libvixl/vixl/a64/assembler-a64.h
 delete mode 100644 disas/libvixl/vixl/a64/constants-a64.h
 delete mode 100644 disas/libvixl/vixl/a64/cpu-a64.h
 delete mode 100644 disas/libvixl/vixl/a64/decoder-a64.h
 delete mode 100644 disas/libvixl/vixl/a64/disasm-a64.h
 delete mode 100644 disas/libvixl/vixl/a64/instructions-a64.h
 delete mode 100644 disas/libvixl/vixl/code-buffer.h
 delete mode 100644 disas/libvixl/vixl/compiler-intrinsics.h
 delete mode 100644 disas/libvixl/vixl/globals.h
 delete mode 100644 disas/libvixl/vixl/invalset.h
 delete mode 100644 disas/libvixl/vixl/platform.h
 delete mode 100644 disas/libvixl/vixl/utils.h
 delete mode 100644 disas/arm-a64.cc
 delete mode 100644 disas/libvixl/LICENCE
 delete mode 100644 disas/libvixl/README
 delete mode 100644 disas/libvixl/meson.build
 delete mode 100644 disas/libvixl/vixl/a64/decoder-a64.cc
 delete mode 100644 disas/libvixl/vixl/a64/disasm-a64.cc
 delete mode 100644 disas/libvixl/vixl/a64/instructions-a64.cc
 delete mode 100644 disas/libvixl/vixl/compiler-intrinsics.cc
 delete mode 100644 disas/libvixl/vixl/utils.cc

diff --git a/meson.build b/meson.build
index bc6234c85e..3daebae207 100644
--- a/meson.build
+++ b/meson.build
@@ -246,7 +246,6 @@ endif
 add_project_arguments('-iquote', '.',
                       '-iquote', meson.current_source_dir(),
                       '-iquote', meson.current_source_dir() / 'include',
-                      '-iquote', meson.current_source_dir() / 'disas/libvixl',
                       language: ['c', 'cpp', 'objc'])
 
 link_language = meson.get_external_property('link_language', 'cpp')
@@ -2330,7 +2329,6 @@ config_target_mak = {}
 
 disassemblers = {
   'alpha' : ['CONFIG_ALPHA_DIS'],
-  'arm' : ['CONFIG_ARM_DIS'],
   'avr' : ['CONFIG_AVR_DIS'],
   'cris' : ['CONFIG_CRIS_DIS'],
   'hexagon' : ['CONFIG_HEXAGON_DIS'],
@@ -2352,8 +2350,6 @@ disassemblers = {
 }
 if link_language == 'cpp'
   disassemblers += {
-    'aarch64' : [ 'CONFIG_ARM_A64_DIS'],
-    'arm' : [ 'CONFIG_ARM_DIS', 'CONFIG_ARM_A64_DIS'],
     'mips' : [ 'CONFIG_MIPS_DIS', 'CONFIG_NANOMIPS_DIS'],
   }
 endif
diff --git a/disas/libvixl/vixl/a64/assembler-a64.h b/disas/libvixl/vixl/a64/assembler-a64.h
deleted file mode 100644
index fda5ccc6c7..0000000000
--- a/disas/libvixl/vixl/a64/assembler-a64.h
+++ /dev/null
@@ -1,4624 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_A64_ASSEMBLER_A64_H_
-#define VIXL_A64_ASSEMBLER_A64_H_
-
-
-#include "vixl/globals.h"
-#include "vixl/invalset.h"
-#include "vixl/utils.h"
-#include "vixl/code-buffer.h"
-#include "vixl/a64/instructions-a64.h"
-
-namespace vixl {
-
-typedef uint64_t RegList;
-static const int kRegListSizeInBits = sizeof(RegList) * 8;
-
-
-// Registers.
-
-// Some CPURegister methods can return Register or VRegister types, so we need
-// to declare them in advance.
-class Register;
-class VRegister;
-
-class CPURegister {
- public:
-  enum RegisterType {
-    // The kInvalid value is used to detect uninitialized static instances,
-    // which are always zero-initialized before any constructors are called.
-    kInvalid = 0,
-    kRegister,
-    kVRegister,
-    kFPRegister = kVRegister,
-    kNoRegister
-  };
-
-  CPURegister() : code_(0), size_(0), type_(kNoRegister) {
-    VIXL_ASSERT(!IsValid());
-    VIXL_ASSERT(IsNone());
-  }
-
-  CPURegister(unsigned code, unsigned size, RegisterType type)
-      : code_(code), size_(size), type_(type) {
-    VIXL_ASSERT(IsValidOrNone());
-  }
-
-  unsigned code() const {
-    VIXL_ASSERT(IsValid());
-    return code_;
-  }
-
-  RegisterType type() const {
-    VIXL_ASSERT(IsValidOrNone());
-    return type_;
-  }
-
-  RegList Bit() const {
-    VIXL_ASSERT(code_ < (sizeof(RegList) * 8));
-    return IsValid() ? (static_cast<RegList>(1) << code_) : 0;
-  }
-
-  unsigned size() const {
-    VIXL_ASSERT(IsValid());
-    return size_;
-  }
-
-  int SizeInBytes() const {
-    VIXL_ASSERT(IsValid());
-    VIXL_ASSERT(size() % 8 == 0);
-    return size_ / 8;
-  }
-
-  int SizeInBits() const {
-    VIXL_ASSERT(IsValid());
-    return size_;
-  }
-
-  bool Is8Bits() const {
-    VIXL_ASSERT(IsValid());
-    return size_ == 8;
-  }
-
-  bool Is16Bits() const {
-    VIXL_ASSERT(IsValid());
-    return size_ == 16;
-  }
-
-  bool Is32Bits() const {
-    VIXL_ASSERT(IsValid());
-    return size_ == 32;
-  }
-
-  bool Is64Bits() const {
-    VIXL_ASSERT(IsValid());
-    return size_ == 64;
-  }
-
-  bool Is128Bits() const {
-    VIXL_ASSERT(IsValid());
-    return size_ == 128;
-  }
-
-  bool IsValid() const {
-    if (IsValidRegister() || IsValidVRegister()) {
-      VIXL_ASSERT(!IsNone());
-      return true;
-    } else {
-      // This assert is hit when the register has not been properly initialized.
-      // One cause for this can be an initialisation order fiasco. See
-      // https://isocpp.org/wiki/faq/ctors#static-init-order for some details.
-      VIXL_ASSERT(IsNone());
-      return false;
-    }
-  }
-
-  bool IsValidRegister() const {
-    return IsRegister() &&
-           ((size_ == kWRegSize) || (size_ == kXRegSize)) &&
-           ((code_ < kNumberOfRegisters) || (code_ == kSPRegInternalCode));
-  }
-
-  bool IsValidVRegister() const {
-    return IsVRegister() &&
-           ((size_ == kBRegSize) || (size_ == kHRegSize) ||
-            (size_ == kSRegSize) || (size_ == kDRegSize) ||
-            (size_ == kQRegSize)) &&
-           (code_ < kNumberOfVRegisters);
-  }
-
-  bool IsValidFPRegister() const {
-    return IsFPRegister() && (code_ < kNumberOfVRegisters);
-  }
-
-  bool IsNone() const {
-    // kNoRegister types should always have size 0 and code 0.
-    VIXL_ASSERT((type_ != kNoRegister) || (code_ == 0));
-    VIXL_ASSERT((type_ != kNoRegister) || (size_ == 0));
-
-    return type_ == kNoRegister;
-  }
-
-  bool Aliases(const CPURegister& other) const {
-    VIXL_ASSERT(IsValidOrNone() && other.IsValidOrNone());
-    return (code_ == other.code_) && (type_ == other.type_);
-  }
-
-  bool Is(const CPURegister& other) const {
-    VIXL_ASSERT(IsValidOrNone() && other.IsValidOrNone());
-    return Aliases(other) && (size_ == other.size_);
-  }
-
-  bool IsZero() const {
-    VIXL_ASSERT(IsValid());
-    return IsRegister() && (code_ == kZeroRegCode);
-  }
-
-  bool IsSP() const {
-    VIXL_ASSERT(IsValid());
-    return IsRegister() && (code_ == kSPRegInternalCode);
-  }
-
-  bool IsRegister() const {
-    return type_ == kRegister;
-  }
-
-  bool IsVRegister() const {
-    return type_ == kVRegister;
-  }
-
-  bool IsFPRegister() const {
-    return IsS() || IsD();
-  }
-
-  bool IsW() const { return IsValidRegister() && Is32Bits(); }
-  bool IsX() const { return IsValidRegister() && Is64Bits(); }
-
-  // These assertions ensure that the size and type of the register are as
-  // described. They do not consider the number of lanes that make up a vector.
-  // So, for example, Is8B() implies IsD(), and Is1D() implies IsD(), but IsD()
-  // does not imply Is1D() or Is8B().
-  // Check the number of lanes, ie. the format of the vector, using methods such
-  // as Is8B(), Is1D(), etc. in the VRegister class.
-  bool IsV() const { return IsVRegister(); }
-  bool IsB() const { return IsV() && Is8Bits(); }
-  bool IsH() const { return IsV() && Is16Bits(); }
-  bool IsS() const { return IsV() && Is32Bits(); }
-  bool IsD() const { return IsV() && Is64Bits(); }
-  bool IsQ() const { return IsV() && Is128Bits(); }
-
-  const Register& W() const;
-  const Register& X() const;
-  const VRegister& V() const;
-  const VRegister& B() const;
-  const VRegister& H() const;
-  const VRegister& S() const;
-  const VRegister& D() const;
-  const VRegister& Q() const;
-
-  bool IsSameSizeAndType(const CPURegister& other) const {
-    return (size_ == other.size_) && (type_ == other.type_);
-  }
-
- protected:
-  unsigned code_;
-  unsigned size_;
-  RegisterType type_;
-
- private:
-  bool IsValidOrNone() const {
-    return IsValid() || IsNone();
-  }
-};
-
-
-class Register : public CPURegister {
- public:
-  Register() : CPURegister() {}
-  explicit Register(const CPURegister& other)
-      : CPURegister(other.code(), other.size(), other.type()) {
-    VIXL_ASSERT(IsValidRegister());
-  }
-  Register(unsigned code, unsigned size)
-      : CPURegister(code, size, kRegister) {}
-
-  bool IsValid() const {
-    VIXL_ASSERT(IsRegister() || IsNone());
-    return IsValidRegister();
-  }
-
-  static const Register& WRegFromCode(unsigned code);
-  static const Register& XRegFromCode(unsigned code);
-
- private:
-  static const Register wregisters[];
-  static const Register xregisters[];
-};
-
-
-class VRegister : public CPURegister {
- public:
-  VRegister() : CPURegister(), lanes_(1) {}
-  explicit VRegister(const CPURegister& other)
-      : CPURegister(other.code(), other.size(), other.type()), lanes_(1) {
-    VIXL_ASSERT(IsValidVRegister());
-    VIXL_ASSERT(IsPowerOf2(lanes_) && (lanes_ <= 16));
-  }
-  VRegister(unsigned code, unsigned size, unsigned lanes = 1)
-      : CPURegister(code, size, kVRegister), lanes_(lanes) {
-    VIXL_ASSERT(IsPowerOf2(lanes_) && (lanes_ <= 16));
-  }
-  VRegister(unsigned code, VectorFormat format)
-      : CPURegister(code, RegisterSizeInBitsFromFormat(format), kVRegister),
-        lanes_(IsVectorFormat(format) ? LaneCountFromFormat(format) : 1) {
-    VIXL_ASSERT(IsPowerOf2(lanes_) && (lanes_ <= 16));
-  }
-
-  bool IsValid() const {
-    VIXL_ASSERT(IsVRegister() || IsNone());
-    return IsValidVRegister();
-  }
-
-  static const VRegister& BRegFromCode(unsigned code);
-  static const VRegister& HRegFromCode(unsigned code);
-  static const VRegister& SRegFromCode(unsigned code);
-  static const VRegister& DRegFromCode(unsigned code);
-  static const VRegister& QRegFromCode(unsigned code);
-  static const VRegister& VRegFromCode(unsigned code);
-
-  VRegister V8B() const { return VRegister(code_, kDRegSize, 8); }
-  VRegister V16B() const { return VRegister(code_, kQRegSize, 16); }
-  VRegister V4H() const { return VRegister(code_, kDRegSize, 4); }
-  VRegister V8H() const { return VRegister(code_, kQRegSize, 8); }
-  VRegister V2S() const { return VRegister(code_, kDRegSize, 2); }
-  VRegister V4S() const { return VRegister(code_, kQRegSize, 4); }
-  VRegister V2D() const { return VRegister(code_, kQRegSize, 2); }
-  VRegister V1D() const { return VRegister(code_, kDRegSize, 1); }
-
-  bool Is8B() const { return (Is64Bits() && (lanes_ == 8)); }
-  bool Is16B() const { return (Is128Bits() && (lanes_ == 16)); }
-  bool Is4H() const { return (Is64Bits() && (lanes_ == 4)); }
-  bool Is8H() const { return (Is128Bits() && (lanes_ == 8)); }
-  bool Is2S() const { return (Is64Bits() && (lanes_ == 2)); }
-  bool Is4S() const { return (Is128Bits() && (lanes_ == 4)); }
-  bool Is1D() const { return (Is64Bits() && (lanes_ == 1)); }
-  bool Is2D() const { return (Is128Bits() && (lanes_ == 2)); }
-
-  // For consistency, we assert the number of
-  // lanes of these scalar registers,
-  // even though there are no vectors of equivalent total size with which they
-  // could alias.
-  bool Is1B() const {
-    VIXL_ASSERT(!(Is8Bits() && IsVector()));
-    return Is8Bits();
-  }
-  bool Is1H() const {
-    VIXL_ASSERT(!(Is16Bits() && IsVector()));
-    return Is16Bits();
-  }
-  bool Is1S() const {
-    VIXL_ASSERT(!(Is32Bits() && IsVector()));
-    return Is32Bits();
-  }
-
-  bool IsLaneSizeB() const { return LaneSizeInBits() == kBRegSize; }
-  bool IsLaneSizeH() const { return LaneSizeInBits() == kHRegSize; }
-  bool IsLaneSizeS() const { return LaneSizeInBits() == kSRegSize; }
-  bool IsLaneSizeD() const { return LaneSizeInBits() == kDRegSize; }
-
-  int lanes() const {
-    return lanes_;
-  }
-
-  bool IsScalar() const {
-    return lanes_ == 1;
-  }
-
-  bool IsVector() const {
-    return lanes_ > 1;
-  }
-
-  bool IsSameFormat(const VRegister& other) const {
-    return (size_ == other.size_) && (lanes_ == other.lanes_);
-  }
-
-  unsigned LaneSizeInBytes() const {
-    return SizeInBytes() / lanes_;
-  }
-
-  unsigned LaneSizeInBits() const {
-    return LaneSizeInBytes() * 8;
-  }
-
- private:
-  static const VRegister bregisters[];
-  static const VRegister hregisters[];
-  static const VRegister sregisters[];
-  static const VRegister dregisters[];
-  static const VRegister qregisters[];
-  static const VRegister vregisters[];
-  int lanes_;
-};
-
-
-// Backward compatibility for FPRegisters.
-typedef VRegister FPRegister;
-
-// No*Reg is used to indicate an unused argument, or an error case. Note that
-// these all compare equal (using the Is() method). The Register and VRegister
-// variants are provided for convenience.
-const Register NoReg;
-const VRegister NoVReg;
-const FPRegister NoFPReg;  // For backward compatibility.
-const CPURegister NoCPUReg;
-
-
-#define DEFINE_REGISTERS(N)  \
-const Register w##N(N, kWRegSize);  \
-const Register x##N(N, kXRegSize);
-REGISTER_CODE_LIST(DEFINE_REGISTERS)
-#undef DEFINE_REGISTERS
-const Register wsp(kSPRegInternalCode, kWRegSize);
-const Register sp(kSPRegInternalCode, kXRegSize);
-
-
-#define DEFINE_VREGISTERS(N)  \
-const VRegister b##N(N, kBRegSize);  \
-const VRegister h##N(N, kHRegSize);  \
-const VRegister s##N(N, kSRegSize);  \
-const VRegister d##N(N, kDRegSize);  \
-const VRegister q##N(N, kQRegSize);  \
-const VRegister v##N(N, kQRegSize);
-REGISTER_CODE_LIST(DEFINE_VREGISTERS)
-#undef DEFINE_VREGISTERS
-
-
-// Registers aliases.
-const Register ip0 = x16;
-const Register ip1 = x17;
-const Register lr = x30;
-const Register xzr = x31;
-const Register wzr = w31;
-
-
-// AreAliased returns true if any of the named registers overlap. Arguments
-// set to NoReg are ignored. The system stack pointer may be specified.
-bool AreAliased(const CPURegister& reg1,
-                const CPURegister& reg2,
-                const CPURegister& reg3 = NoReg,
-                const CPURegister& reg4 = NoReg,
-                const CPURegister& reg5 = NoReg,
-                const CPURegister& reg6 = NoReg,
-                const CPURegister& reg7 = NoReg,
-                const CPURegister& reg8 = NoReg);
-
-
-// AreSameSizeAndType returns true if all of the specified registers have the
-// same size, and are of the same type. The system stack pointer may be
-// specified. Arguments set to NoReg are ignored, as are any subsequent
-// arguments. At least one argument (reg1) must be valid (not NoCPUReg).
-bool AreSameSizeAndType(const CPURegister& reg1,
-                        const CPURegister& reg2,
-                        const CPURegister& reg3 = NoCPUReg,
-                        const CPURegister& reg4 = NoCPUReg,
-                        const CPURegister& reg5 = NoCPUReg,
-                        const CPURegister& reg6 = NoCPUReg,
-                        const CPURegister& reg7 = NoCPUReg,
-                        const CPURegister& reg8 = NoCPUReg);
-
-
-// AreSameFormat returns true if all of the specified VRegisters have the same
-// vector format.
-// Arguments set to NoReg are ignored, as are any subsequent
-// arguments. At least one argument (reg1) must be valid (not NoVReg).
-bool AreSameFormat(const VRegister& reg1,
-                   const VRegister& reg2,
-                   const VRegister& reg3 = NoVReg,
-                   const VRegister& reg4 = NoVReg);
-
-
-// AreConsecutive returns true if all of the specified VRegisters are
-// consecutive in the register file. Arguments set to NoReg are ignored, as are
-// any subsequent arguments. At least one argument (reg1) must be valid
-// (not NoVReg).
-bool AreConsecutive(const VRegister& reg1,
-                    const VRegister& reg2,
-                    const VRegister& reg3 = NoVReg,
-                    const VRegister& reg4 = NoVReg);
-
-
-// Lists of registers.
-class CPURegList {
- public:
-  explicit CPURegList(CPURegister reg1,
-                      CPURegister reg2 = NoCPUReg,
-                      CPURegister reg3 = NoCPUReg,
-                      CPURegister reg4 = NoCPUReg)
-      : list_(reg1.Bit() | reg2.Bit() | reg3.Bit() | reg4.Bit()),
-        size_(reg1.size()), type_(reg1.type()) {
-    VIXL_ASSERT(AreSameSizeAndType(reg1, reg2, reg3, reg4));
-    VIXL_ASSERT(IsValid());
-  }
-
-  CPURegList(CPURegister::RegisterType type, unsigned size, RegList list)
-      : list_(list), size_(size), type_(type) {
-    VIXL_ASSERT(IsValid());
-  }
-
-  CPURegList(CPURegister::RegisterType type, unsigned size,
-             unsigned first_reg, unsigned last_reg)
-      : size_(size), type_(type) {
-    VIXL_ASSERT(((type == CPURegister::kRegister) &&
-                 (last_reg < kNumberOfRegisters)) ||
-                ((type == CPURegister::kVRegister) &&
-                 (last_reg < kNumberOfVRegisters)));
-    VIXL_ASSERT(last_reg >= first_reg);
-    list_ = (UINT64_C(1) << (last_reg + 1)) - 1;
-    list_ &= ~((UINT64_C(1) << first_reg) - 1);
-    VIXL_ASSERT(IsValid());
-  }
-
-  CPURegister::RegisterType type() const {
-    VIXL_ASSERT(IsValid());
-    return type_;
-  }
-
-  // Combine another CPURegList into this one. Registers that already exist in
-  // this list are left unchanged. The type and size of the registers in the
-  // 'other' list must match those in this list.
-  void Combine(const CPURegList& other) {
-    VIXL_ASSERT(IsValid());
-    VIXL_ASSERT(other.type() == type_);
-    VIXL_ASSERT(other.RegisterSizeInBits() == size_);
-    list_ |= other.list();
-  }
-
-  // Remove every register in the other CPURegList from this one. Registers that
-  // do not exist in this list are ignored. The type and size of the registers
-  // in the 'other' list must match those in this list.
-  void Remove(const CPURegList& other) {
-    VIXL_ASSERT(IsValid());
-    VIXL_ASSERT(other.type() == type_);
-    VIXL_ASSERT(other.RegisterSizeInBits() == size_);
-    list_ &= ~other.list();
-  }
-
-  // Variants of Combine and Remove which take a single register.
-  void Combine(const CPURegister& other) {
-    VIXL_ASSERT(other.type() == type_);
-    VIXL_ASSERT(other.size() == size_);
-    Combine(other.code());
-  }
-
-  void Remove(const CPURegister& other) {
-    VIXL_ASSERT(other.type() == type_);
-    VIXL_ASSERT(other.size() == size_);
-    Remove(other.code());
-  }
-
-  // Variants of Combine and Remove which take a single register by its code;
-  // the type and size of the register is inferred from this list.
-  void Combine(int code) {
-    VIXL_ASSERT(IsValid());
-    VIXL_ASSERT(CPURegister(code, size_, type_).IsValid());
-    list_ |= (UINT64_C(1) << code);
-  }
-
-  void Remove(int code) {
-    VIXL_ASSERT(IsValid());
-    VIXL_ASSERT(CPURegister(code, size_, type_).IsValid());
-    list_ &= ~(UINT64_C(1) << code);
-  }
-
-  static CPURegList Union(const CPURegList& list_1, const CPURegList& list_2) {
-    VIXL_ASSERT(list_1.type_ == list_2.type_);
-    VIXL_ASSERT(list_1.size_ == list_2.size_);
-    return CPURegList(list_1.type_, list_1.size_, list_1.list_ | list_2.list_);
-  }
-  static CPURegList Union(const CPURegList& list_1,
-                          const CPURegList& list_2,
-                          const CPURegList& list_3);
-  static CPURegList Union(const CPURegList& list_1,
-                          const CPURegList& list_2,
-                          const CPURegList& list_3,
-                          const CPURegList& list_4);
-
-  static CPURegList Intersection(const CPURegList& list_1,
-                                 const CPURegList& list_2) {
-    VIXL_ASSERT(list_1.type_ == list_2.type_);
-    VIXL_ASSERT(list_1.size_ == list_2.size_);
-    return CPURegList(list_1.type_, list_1.size_, list_1.list_ & list_2.list_);
-  }
-  static CPURegList Intersection(const CPURegList& list_1,
-                                 const CPURegList& list_2,
-                                 const CPURegList& list_3);
-  static CPURegList Intersection(const CPURegList& list_1,
-                                 const CPURegList& list_2,
-                                 const CPURegList& list_3,
-                                 const CPURegList& list_4);
-
-  bool Overlaps(const CPURegList& other) const {
-    return (type_ == other.type_) && ((list_ & other.list_) != 0);
-  }
-
-  RegList list() const {
-    VIXL_ASSERT(IsValid());
-    return list_;
-  }
-
-  void set_list(RegList new_list) {
-    VIXL_ASSERT(IsValid());
-    list_ = new_list;
-  }
-
-  // Remove all callee-saved registers from the list. This can be useful when
-  // preparing registers for an AAPCS64 function call, for example.
-  void RemoveCalleeSaved();
-
-  CPURegister PopLowestIndex();
-  CPURegister PopHighestIndex();
-
-  // AAPCS64 callee-saved registers.
-  static CPURegList GetCalleeSaved(unsigned size = kXRegSize);
-  static CPURegList GetCalleeSavedV(unsigned size = kDRegSize);
-
-  // AAPCS64 caller-saved registers. Note that this includes lr.
-  // TODO(all): Determine how we handle d8-d15 being callee-saved, but the top
-  // 64-bits being caller-saved.
-  static CPURegList GetCallerSaved(unsigned size = kXRegSize);
-  static CPURegList GetCallerSavedV(unsigned size = kDRegSize);
-
-  bool IsEmpty() const {
-    VIXL_ASSERT(IsValid());
-    return list_ == 0;
-  }
-
-  bool IncludesAliasOf(const CPURegister& other) const {
-    VIXL_ASSERT(IsValid());
-    return (type_ == other.type()) && ((other.Bit() & list_) != 0);
-  }
-
-  bool IncludesAliasOf(int code) const {
-    VIXL_ASSERT(IsValid());
-    return ((code & list_) != 0);
-  }
-
-  int Count() const {
-    VIXL_ASSERT(IsValid());
-    return CountSetBits(list_);
-  }
-
-  unsigned RegisterSizeInBits() const {
-    VIXL_ASSERT(IsValid());
-    return size_;
-  }
-
-  unsigned RegisterSizeInBytes() const {
-    int size_in_bits = RegisterSizeInBits();
-    VIXL_ASSERT((size_in_bits % 8) == 0);
-    return size_in_bits / 8;
-  }
-
-  unsigned TotalSizeInBytes() const {
-    VIXL_ASSERT(IsValid());
-    return RegisterSizeInBytes() * Count();
-  }
-
- private:
-  RegList list_;
-  unsigned size_;
-  CPURegister::RegisterType type_;
-
-  bool IsValid() const;
-};
-
-
-// AAPCS64 callee-saved registers.
-extern const CPURegList kCalleeSaved;
-extern const CPURegList kCalleeSavedV;
-
-
-// AAPCS64 caller-saved registers. Note that this includes lr.
-extern const CPURegList kCallerSaved;
-extern const CPURegList kCallerSavedV;
-
-
-// Operand.
-class Operand {
- public:
-  // #<immediate>
-  // where <immediate> is int64_t.
-  // This is allowed to be an implicit constructor because Operand is
-  // a wrapper class that doesn't normally perform any type conversion.
-  Operand(int64_t immediate = 0);  // NOLINT(runtime/explicit)
-
-  // rm, {<shift> {#<shift_amount>}}
-  // where <shift> is one of {LSL, LSR, ASR, ROR}.
-  //       <shift_amount> is uint6_t.
-  // This is allowed to be an implicit constructor because Operand is
-  // a wrapper class that doesn't normally perform any type conversion.
-  Operand(Register reg,
-          Shift shift = LSL,
-          unsigned shift_amount = 0);  // NOLINT(runtime/explicit)
-
-  // rm, {<extend> {#<shift_amount>}}
-  // where <extend> is one of {UXTB, UXTH, UXTW, UXTX, SXTB, SXTH, SXTW, SXTX}.
-  //       <shift_amount> is uint2_t.
-  explicit Operand(Register reg, Extend extend, unsigned shift_amount = 0);
-
-  bool IsImmediate() const;
-  bool IsShiftedRegister() const;
-  bool IsExtendedRegister() const;
-  bool IsZero() const;
-
-  // This returns an LSL shift (<= 4) operand as an equivalent extend operand,
-  // which helps in the encoding of instructions that use the stack pointer.
-  Operand ToExtendedRegister() const;
-
-  int64_t immediate() const {
-    VIXL_ASSERT(IsImmediate());
-    return immediate_;
-  }
-
-  Register reg() const {
-    VIXL_ASSERT(IsShiftedRegister() || IsExtendedRegister());
-    return reg_;
-  }
-
-  Shift shift() const {
-    VIXL_ASSERT(IsShiftedRegister());
-    return shift_;
-  }
-
-  Extend extend() const {
-    VIXL_ASSERT(IsExtendedRegister());
-    return extend_;
-  }
-
-  unsigned shift_amount() const {
-    VIXL_ASSERT(IsShiftedRegister() || IsExtendedRegister());
-    return shift_amount_;
-  }
-
- private:
-  int64_t immediate_;
-  Register reg_;
-  Shift shift_;
-  Extend extend_;
-  unsigned shift_amount_;
-};
-
-
-// MemOperand represents the addressing mode of a load or store instruction.
-class MemOperand {
- public:
-  explicit MemOperand(Register base,
-                      int64_t offset = 0,
-                      AddrMode addrmode = Offset);
-  MemOperand(Register base,
-             Register regoffset,
-             Shift shift = LSL,
-             unsigned shift_amount = 0);
-  MemOperand(Register base,
-             Register regoffset,
-             Extend extend,
-             unsigned shift_amount = 0);
-  MemOperand(Register base,
-             const Operand& offset,
-             AddrMode addrmode = Offset);
-
-  const Register& base() const { return base_; }
-  const Register& regoffset() const { return regoffset_; }
-  int64_t offset() const { return offset_; }
-  AddrMode addrmode() const { return addrmode_; }
-  Shift shift() const { return shift_; }
-  Extend extend() const { return extend_; }
-  unsigned shift_amount() const { return shift_amount_; }
-  bool IsImmediateOffset() const;
-  bool IsRegisterOffset() const;
-  bool IsPreIndex() const;
-  bool IsPostIndex() const;
-
-  void AddOffset(int64_t offset);
-
- private:
-  Register base_;
-  Register regoffset_;
-  int64_t offset_;
-  AddrMode addrmode_;
-  Shift shift_;
-  Extend extend_;
-  unsigned shift_amount_;
-};
-
-
-class LabelTestHelper;  // Forward declaration.
-
-
-class Label {
- public:
-  Label() : location_(kLocationUnbound) {}
-  ~Label() {
-    // If the label has been linked to, it needs to be bound to a target.
-    VIXL_ASSERT(!IsLinked() || IsBound());
-  }
-
-  bool IsBound() const { return location_ >= 0; }
-  bool IsLinked() const { return !links_.empty(); }
-
-  ptrdiff_t location() const { return location_; }
-
-  static const int kNPreallocatedLinks = 4;
-  static const ptrdiff_t kInvalidLinkKey = PTRDIFF_MAX;
-  static const size_t kReclaimFrom = 512;
-  static const size_t kReclaimFactor = 2;
-
-  typedef InvalSet<ptrdiff_t, kNPreallocatedLinks, ptrdiff_t,
-                   kInvalidLinkKey, kReclaimFrom, kReclaimFactor> LinksSetBase;
-  typedef InvalSetIterator<LinksSetBase> LabelLinksIteratorBase;
-
- private:
-  class LinksSet : public LinksSetBase {
-   public:
-    LinksSet() : LinksSetBase() {}
-  };
-
-  // Allows iterating over the links of a label.
The behaviour is undefine= d if - // the list of links is modified in any way while iterating. - class LabelLinksIterator : public LabelLinksIteratorBase { - public: - explicit LabelLinksIterator(Label* label) - : LabelLinksIteratorBase(&label->links_) {} - }; - - void Bind(ptrdiff_t location) { - // Labels can only be bound once. - VIXL_ASSERT(!IsBound()); - location_ =3D location; - } - - void AddLink(ptrdiff_t instruction) { - // If a label is bound, the assembler already has the information it n= eeds - // to write the instruction, so there is no need to add it to links_. - VIXL_ASSERT(!IsBound()); - links_.insert(instruction); - } - - void DeleteLink(ptrdiff_t instruction) { - links_.erase(instruction); - } - - void ClearAllLinks() { - links_.clear(); - } - - // TODO: The comment below considers average case complexity for our - // usual use-cases. The elements of interest are: - // - Branches to a label are emitted in order: branch instructions to a = label - // are generated at an offset in the code generation buffer greater than= any - // other branch to that same label already generated. As an example, thi= s can - // be broken when an instruction is patched to become a branch. Note tha= t the - // code will still work, but the complexity considerations below may loc= ally - // not apply any more. - // - Veneers are generated in order: for multiple branches of the same t= ype - // branching to the same unbound label going out of range, veneers are - // generated in growing order of the branch instruction offset from the = start - // of the buffer. - // - // When creating a veneer for a branch going out of range, the link for = this - // branch needs to be removed from this `links_`. Since all branches are - // tracked in one underlying InvalSet, the complexity for this deletion = is the - // same as for finding the element, ie. O(n), where n is the number of l= inks - // in the set. 
-  // This could be reduced to O(1) by using the same trick as used when tracking
-  // branch information for veneers: split the container to use one set per type
-  // of branch. With that setup, when a veneer is created and the link needs to
-  // be deleted, if the two points above hold, it must be the minimum element of
-  // the set for its type of branch, and that minimum element will be accessible
-  // in O(1).
-
-  // The offsets of the instructions that have linked to this label.
-  LinksSet links_;
-  // The label location.
-  ptrdiff_t location_;
-
-  static const ptrdiff_t kLocationUnbound = -1;
-
-  // It is not safe to copy labels, so disable the copy constructor and operator
-  // by declaring them private (without an implementation).
-  Label(const Label&);
-  void operator=(const Label&);
-
-  // The Assembler class is responsible for binding and linking labels, since
-  // the stored offsets need to be consistent with the Assembler's buffer.
-  friend class Assembler;
-  // The MacroAssembler and VeneerPool handle resolution of branches to distant
-  // targets.
-  friend class MacroAssembler;
-  friend class VeneerPool;
-};
-
-
-// Required InvalSet template specialisations.
-#define INVAL_SET_TEMPLATE_PARAMETERS \
-    ptrdiff_t, \
-    Label::kNPreallocatedLinks, \
-    ptrdiff_t, \
-    Label::kInvalidLinkKey, \
-    Label::kReclaimFrom, \
-    Label::kReclaimFactor
-template<>
-inline ptrdiff_t InvalSet<INVAL_SET_TEMPLATE_PARAMETERS>::Key(
-    const ptrdiff_t& element) {
-  return element;
-}
-template<>
-inline void InvalSet<INVAL_SET_TEMPLATE_PARAMETERS>::SetKey(
-    ptrdiff_t* element, ptrdiff_t key) {
-  *element = key;
-}
-#undef INVAL_SET_TEMPLATE_PARAMETERS
-
-
-class Assembler;
-class LiteralPool;
-
-// A literal is a 32-bit or 64-bit piece of data stored in the instruction
-// stream and loaded through a pc relative load. The same literal can be
-// referred to by multiple instructions but a literal can only reside at one
-// place in memory. A literal can be used by a load before or after being
-// placed in memory.
-//
-// Internally an offset of 0 is associated with a literal which has been
-// neither used nor placed. Then two possibilities arise:
-//  1) the label is placed, the offset (stored as offset + 1) is used to
-//     resolve any subsequent load using the label.
-//  2) the label is not placed and offset is the offset of the last load using
-//     the literal (stored as -offset - 1). If multiple loads refer to this
-//     literal then the last load holds the offset of the preceding load and
-//     all loads form a chain. Once the offset is placed all the loads in the
-//     chain are resolved and future loads fall back to possibility 1.
-class RawLiteral {
- public:
-  enum DeletionPolicy {
-    kDeletedOnPlacementByPool,
-    kDeletedOnPoolDestruction,
-    kManuallyDeleted
-  };
-
-  RawLiteral(size_t size,
-             LiteralPool* literal_pool,
-             DeletionPolicy deletion_policy = kManuallyDeleted);
-
-  // The literal pool only sees and deletes `RawLiteral*` pointers, but they are
-  // actually pointing to `Literal<T>` objects.
-  virtual ~RawLiteral() {}
-
-  size_t size() {
-    VIXL_STATIC_ASSERT(kDRegSizeInBytes == kXRegSizeInBytes);
-    VIXL_STATIC_ASSERT(kSRegSizeInBytes == kWRegSizeInBytes);
-    VIXL_ASSERT((size_ == kXRegSizeInBytes) ||
-                (size_ == kWRegSizeInBytes) ||
-                (size_ == kQRegSizeInBytes));
-    return size_;
-  }
-  uint64_t raw_value128_low64() {
-    VIXL_ASSERT(size_ == kQRegSizeInBytes);
-    return low64_;
-  }
-  uint64_t raw_value128_high64() {
-    VIXL_ASSERT(size_ == kQRegSizeInBytes);
-    return high64_;
-  }
-  uint64_t raw_value64() {
-    VIXL_ASSERT(size_ == kXRegSizeInBytes);
-    VIXL_ASSERT(high64_ == 0);
-    return low64_;
-  }
-  uint32_t raw_value32() {
-    VIXL_ASSERT(size_ == kWRegSizeInBytes);
-    VIXL_ASSERT(high64_ == 0);
-    VIXL_ASSERT(is_uint32(low64_) || is_int32(low64_));
-    return static_cast<uint32_t>(low64_);
-  }
-  bool IsUsed() { return offset_ < 0; }
-  bool IsPlaced() { return offset_ > 0; }
-
-  LiteralPool* GetLiteralPool() const {
-    return literal_pool_;
-  }
-
-  ptrdiff_t offset() {
-    VIXL_ASSERT(IsPlaced());
-    return offset_ - 1;
-  }
-
- protected:
-  void set_offset(ptrdiff_t offset) {
-    VIXL_ASSERT(offset >= 0);
-    VIXL_ASSERT(IsWordAligned(offset));
-    VIXL_ASSERT(!IsPlaced());
-    offset_ = offset + 1;
-  }
-  ptrdiff_t last_use() {
-    VIXL_ASSERT(IsUsed());
-    return -offset_ - 1;
-  }
-  void set_last_use(ptrdiff_t offset) {
-    VIXL_ASSERT(offset >= 0);
-    VIXL_ASSERT(IsWordAligned(offset));
-    VIXL_ASSERT(!IsPlaced());
-    offset_ = -offset - 1;
-  }
-
-  size_t size_;
-  ptrdiff_t offset_;
-  uint64_t low64_;
-  uint64_t high64_;
-
- private:
-  LiteralPool* literal_pool_;
-  DeletionPolicy deletion_policy_;
-
-  friend class Assembler;
-  friend class LiteralPool;
-};
-
-
-template <typename T>
-class Literal : public RawLiteral {
- public:
-  explicit Literal(T value,
-                   LiteralPool* literal_pool = NULL,
-                   RawLiteral::DeletionPolicy ownership = kManuallyDeleted)
-      : RawLiteral(sizeof(value), literal_pool, ownership) {
-    VIXL_STATIC_ASSERT(sizeof(value) <= kXRegSizeInBytes);
-    UpdateValue(value);
-  }
-
-  Literal(T high64, T low64,
-          LiteralPool* literal_pool = NULL,
-          RawLiteral::DeletionPolicy ownership = kManuallyDeleted)
-      : RawLiteral(kQRegSizeInBytes, literal_pool, ownership) {
-    VIXL_STATIC_ASSERT(sizeof(low64) == (kQRegSizeInBytes / 2));
-    UpdateValue(high64, low64);
-  }
-
-  virtual ~Literal() {}
-
-  // Update the value of this literal, if necessary by rewriting the value in
-  // the pool.
-  // If the literal has already been placed in a literal pool, the address of
-  // the start of the code buffer must be provided, as the literal only knows
-  // its offset from there. This also allows patching the value after the code
-  // has been moved in memory.
-  void UpdateValue(T new_value, uint8_t* code_buffer = NULL) {
-    VIXL_ASSERT(sizeof(new_value) == size_);
-    memcpy(&low64_, &new_value, sizeof(new_value));
-    if (IsPlaced()) {
-      VIXL_ASSERT(code_buffer != NULL);
-      RewriteValueInCode(code_buffer);
-    }
-  }
-
-  void UpdateValue(T high64, T low64, uint8_t* code_buffer = NULL) {
-    VIXL_ASSERT(sizeof(low64) == size_ / 2);
-    memcpy(&low64_, &low64, sizeof(low64));
-    memcpy(&high64_, &high64, sizeof(high64));
-    if (IsPlaced()) {
-      VIXL_ASSERT(code_buffer != NULL);
-      RewriteValueInCode(code_buffer);
-    }
-  }
-
-  void UpdateValue(T new_value, const Assembler* assembler);
-  void UpdateValue(T high64, T low64, const Assembler* assembler);
-
- private:
-  void RewriteValueInCode(uint8_t* code_buffer) {
-    VIXL_ASSERT(IsPlaced());
-    VIXL_STATIC_ASSERT(sizeof(T) <= kXRegSizeInBytes);
-    switch (size()) {
-      case kSRegSizeInBytes:
-        *reinterpret_cast<uint32_t*>(code_buffer + offset()) = raw_value32();
-        break;
-      case kDRegSizeInBytes:
-        *reinterpret_cast<uint64_t*>(code_buffer + offset()) = raw_value64();
-        break;
-      default:
-        VIXL_ASSERT(size() == kQRegSizeInBytes);
-        uint64_t* base_address =
-            reinterpret_cast<uint64_t*>(code_buffer + offset());
-        *base_address = raw_value128_low64();
-        *(base_address + 1) = raw_value128_high64();
-    }
-  }
-};
-
-
-// Control whether or not position-independent code should be emitted.
-enum PositionIndependentCodeOption {
-  // All code generated will be position-independent; all branches and
-  // references to labels generated with the Label class will use PC-relative
-  // addressing.
-  PositionIndependentCode,
-
-  // Allow VIXL to generate code that refers to absolute addresses. With this
-  // option, it will not be possible to copy the code buffer and run it from a
-  // different address; code must be generated in its final location.
-  PositionDependentCode,
-
-  // Allow VIXL to assume that the bottom 12 bits of the address will be
-  // constant, but that the top 48 bits may change. This allows `adrp` to
-  // function in systems which copy code between pages, but otherwise maintain
-  // 4KB page alignment.
-  PageOffsetDependentCode
-};
-
-
-// Control how scaled- and unscaled-offset loads and stores are generated.
-enum LoadStoreScalingOption {
-  // Prefer scaled-immediate-offset instructions, but emit unscaled-offset,
-  // register-offset, pre-index or post-index instructions if necessary.
-  PreferScaledOffset,
-
-  // Prefer unscaled-immediate-offset instructions, but emit scaled-offset,
-  // register-offset, pre-index or post-index instructions if necessary.
-  PreferUnscaledOffset,
-
-  // Require scaled-immediate-offset instructions.
-  RequireScaledOffset,
-
-  // Require unscaled-immediate-offset instructions.
-  RequireUnscaledOffset
-};
-
-
-// Assembler.
-class Assembler {
- public:
-  Assembler(size_t capacity,
-            PositionIndependentCodeOption pic = PositionIndependentCode);
-  Assembler(byte* buffer, size_t capacity,
-            PositionIndependentCodeOption pic = PositionIndependentCode);
-
-  // The destructor asserts that one of the following is true:
-  //  * The Assembler object has not been used.
-  //  * Nothing has been emitted since the last Reset() call.
-  //  * Nothing has been emitted since the last FinalizeCode() call.
-  ~Assembler();
-
-  // System functions.
-
-  // Start generating code from the beginning of the buffer, discarding any code
-  // and data that has already been emitted into the buffer.
-  void Reset();
-
-  // Finalize a code buffer of generated instructions. This function must be
-  // called before executing or copying code from the buffer.
-  void FinalizeCode();
-
-  // Label.
-  // Bind a label to the current PC.
-  void bind(Label* label);
-
-  // Bind a label to a specified offset from the start of the buffer.
-  void BindToOffset(Label* label, ptrdiff_t offset);
-
-  // Place a literal at the current PC.
-  void place(RawLiteral* literal);
-
-  ptrdiff_t CursorOffset() const {
-    return buffer_->CursorOffset();
-  }
-
-  ptrdiff_t BufferEndOffset() const {
-    return static_cast<ptrdiff_t>(buffer_->capacity());
-  }
-
-  // Return the address of an offset in the buffer.
-  template <typename T>
-  T GetOffsetAddress(ptrdiff_t offset) const {
-    VIXL_STATIC_ASSERT(sizeof(T) >= sizeof(uintptr_t));
-    return buffer_->GetOffsetAddress<T>(offset);
-  }
-
-  // Return the address of a bound label.
-  template <typename T>
-  T GetLabelAddress(const Label* label) const {
-    VIXL_ASSERT(label->IsBound());
-    VIXL_STATIC_ASSERT(sizeof(T) >= sizeof(uintptr_t));
-    return GetOffsetAddress<T>(label->location());
-  }
-
-  // Return the address of the cursor.
-  template <typename T>
-  T GetCursorAddress() const {
-    VIXL_STATIC_ASSERT(sizeof(T) >= sizeof(uintptr_t));
-    return GetOffsetAddress<T>(CursorOffset());
-  }
-
-  // Return the address of the start of the buffer.
-  template <typename T>
-  T GetStartAddress() const {
-    VIXL_STATIC_ASSERT(sizeof(T) >= sizeof(uintptr_t));
-    return GetOffsetAddress<T>(0);
-  }
-
-  Instruction* InstructionAt(ptrdiff_t instruction_offset) {
-    return GetOffsetAddress<Instruction*>(instruction_offset);
-  }
-
-  ptrdiff_t InstructionOffset(Instruction* instruction) {
-    VIXL_STATIC_ASSERT(sizeof(*instruction) == 1);
-    ptrdiff_t offset = instruction - GetStartAddress<Instruction*>();
-    VIXL_ASSERT((0 <= offset) &&
-                (offset < static_cast<ptrdiff_t>(BufferCapacity())));
-    return offset;
-  }
-
-  // Instruction set functions.
-
-  // Branch / Jump instructions.
-  // Branch to register.
-  void br(const Register& xn);
-
-  // Branch with link to register.
-  void blr(const Register& xn);
-
-  // Branch to register with return hint.
-  void ret(const Register& xn = lr);
-
-  // Unconditional branch to label.
-  void b(Label* label);
-
-  // Conditional branch to label.
-  void b(Label* label, Condition cond);
-
-  // Unconditional branch to PC offset.
-  void b(int imm26);
-
-  // Conditional branch to PC offset.
-  void b(int imm19, Condition cond);
-
-  // Branch with link to label.
-  void bl(Label* label);
-
-  // Branch with link to PC offset.
-  void bl(int imm26);
-
-  // Compare and branch to label if zero.
-  void cbz(const Register& rt, Label* label);
-
-  // Compare and branch to PC offset if zero.
-  void cbz(const Register& rt, int imm19);
-
-  // Compare and branch to label if not zero.
-  void cbnz(const Register& rt, Label* label);
-
-  // Compare and branch to PC offset if not zero.
-  void cbnz(const Register& rt, int imm19);
-
-  // Table lookup from one register.
-  void tbl(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vm);
-
-  // Table lookup from two registers.
-  void tbl(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vn2,
-           const VRegister& vm);
-
-  // Table lookup from three registers.
-  void tbl(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vn2,
-           const VRegister& vn3,
-           const VRegister& vm);
-
-  // Table lookup from four registers.
-  void tbl(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vn2,
-           const VRegister& vn3,
-           const VRegister& vn4,
-           const VRegister& vm);
-
-  // Table lookup extension from one register.
-  void tbx(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vm);
-
-  // Table lookup extension from two registers.
-  void tbx(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vn2,
-           const VRegister& vm);
-
-  // Table lookup extension from three registers.
-  void tbx(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vn2,
-           const VRegister& vn3,
-           const VRegister& vm);
-
-  // Table lookup extension from four registers.
-  void tbx(const VRegister& vd,
-           const VRegister& vn,
-           const VRegister& vn2,
-           const VRegister& vn3,
-           const VRegister& vn4,
-           const VRegister& vm);
-
-  // Test bit and branch to label if zero.
-  void tbz(const Register& rt, unsigned bit_pos, Label* label);
-
-  // Test bit and branch to PC offset if zero.
-  void tbz(const Register& rt, unsigned bit_pos, int imm14);
-
-  // Test bit and branch to label if not zero.
-  void tbnz(const Register& rt, unsigned bit_pos, Label* label);
-
-  // Test bit and branch to PC offset if not zero.
-  void tbnz(const Register& rt, unsigned bit_pos, int imm14);
-
-  // Address calculation instructions.
-  // Calculate a PC-relative address. Unlike for branches the offset in adr is
-  // unscaled (i.e. the result can be unaligned).
-
-  // Calculate the address of a label.
-  void adr(const Register& rd, Label* label);
-
-  // Calculate the address of a PC offset.
-  void adr(const Register& rd, int imm21);
-
-  // Calculate the page address of a label.
-  void adrp(const Register& rd, Label* label);
-
-  // Calculate the page address of a PC offset.
-  void adrp(const Register& rd, int imm21);
-
-  // Data Processing instructions.
-  // Add.
-  void add(const Register& rd,
-           const Register& rn,
-           const Operand& operand);
-
-  // Add and update status flags.
-  void adds(const Register& rd,
-            const Register& rn,
-            const Operand& operand);
-
-  // Compare negative.
-  void cmn(const Register& rn, const Operand& operand);
-
-  // Subtract.
-  void sub(const Register& rd,
-           const Register& rn,
-           const Operand& operand);
-
-  // Subtract and update status flags.
-  void subs(const Register& rd,
-            const Register& rn,
-            const Operand& operand);
-
-  // Compare.
-  void cmp(const Register& rn, const Operand& operand);
-
-  // Negate.
-  void neg(const Register& rd,
-           const Operand& operand);
-
-  // Negate and update status flags.
-  void negs(const Register& rd,
-            const Operand& operand);
-
-  // Add with carry bit.
-  void adc(const Register& rd,
-           const Register& rn,
-           const Operand& operand);
-
-  // Add with carry bit and update status flags.
-  void adcs(const Register& rd,
-            const Register& rn,
-            const Operand& operand);
-
-  // Subtract with carry bit.
-  void sbc(const Register& rd,
-           const Register& rn,
-           const Operand& operand);
-
-  // Subtract with carry bit and update status flags.
-  void sbcs(const Register& rd,
-            const Register& rn,
-            const Operand& operand);
-
-  // Negate with carry bit.
-  void ngc(const Register& rd,
-           const Operand& operand);
-
-  // Negate with carry bit and update status flags.
-  void ngcs(const Register& rd,
-            const Operand& operand);
-
-  // Logical instructions.
-  // Bitwise and (A & B).
-  void and_(const Register& rd,
-            const Register& rn,
-            const Operand& operand);
-
-  // Bitwise and (A & B) and update status flags.
-  void ands(const Register& rd,
-            const Register& rn,
-            const Operand& operand);
-
-  // Bit test and set flags.
-  void tst(const Register& rn, const Operand& operand);
-
-  // Bit clear (A & ~B).
-  void bic(const Register& rd,
-           const Register& rn,
-           const Operand& operand);
-
-  // Bit clear (A & ~B) and update status flags.
-  void bics(const Register& rd,
-            const Register& rn,
-            const Operand& operand);
-
-  // Bitwise or (A | B).
-  void orr(const Register& rd, const Register& rn, const Operand& operand);
-
-  // Bitwise nor (A | ~B).
-  void orn(const Register& rd, const Register& rn, const Operand& operand);
-
-  // Bitwise eor/xor (A ^ B).
-  void eor(const Register& rd, const Register& rn, const Operand& operand);
-
-  // Bitwise enor/xnor (A ^ ~B).
-  void eon(const Register& rd, const Register& rn, const Operand& operand);
-
-  // Logical shift left by variable.
-  void lslv(const Register& rd, const Register& rn, const Register& rm);
-
-  // Logical shift right by variable.
-  void lsrv(const Register& rd, const Register& rn, const Register& rm);
-
-  // Arithmetic shift right by variable.
-  void asrv(const Register& rd, const Register& rn, const Register& rm);
-
-  // Rotate right by variable.
-  void rorv(const Register& rd, const Register& rn, const Register& rm);
-
-  // Bitfield instructions.
-  // Bitfield move.
-  void bfm(const Register& rd,
-           const Register& rn,
-           unsigned immr,
-           unsigned imms);
-
-  // Signed bitfield move.
-  void sbfm(const Register& rd,
-            const Register& rn,
-            unsigned immr,
-            unsigned imms);
-
-  // Unsigned bitfield move.
-  void ubfm(const Register& rd,
-            const Register& rn,
-            unsigned immr,
-            unsigned imms);
-
-  // Bfm aliases.
-  // Bitfield insert.
-  void bfi(const Register& rd,
-           const Register& rn,
-           unsigned lsb,
-           unsigned width) {
-    VIXL_ASSERT(width >= 1);
-    VIXL_ASSERT(lsb + width <= rn.size());
-    bfm(rd, rn, (rd.size() - lsb) & (rd.size() - 1), width - 1);
-  }
-
-  // Bitfield extract and insert low.
-  void bfxil(const Register& rd,
-             const Register& rn,
-             unsigned lsb,
-             unsigned width) {
-    VIXL_ASSERT(width >= 1);
-    VIXL_ASSERT(lsb + width <= rn.size());
-    bfm(rd, rn, lsb, lsb + width - 1);
-  }
-
-  // Sbfm aliases.
-  // Arithmetic shift right.
-  void asr(const Register& rd, const Register& rn, unsigned shift) {
-    VIXL_ASSERT(shift < rd.size());
-    sbfm(rd, rn, shift, rd.size() - 1);
-  }
-
-  // Signed bitfield insert with zero at right.
-  void sbfiz(const Register& rd,
-             const Register& rn,
-             unsigned lsb,
-             unsigned width) {
-    VIXL_ASSERT(width >= 1);
-    VIXL_ASSERT(lsb + width <= rn.size());
-    sbfm(rd, rn, (rd.size() - lsb) & (rd.size() - 1), width - 1);
-  }
-
-  // Signed bitfield extract.
-  void sbfx(const Register& rd,
-            const Register& rn,
-            unsigned lsb,
-            unsigned width) {
-    VIXL_ASSERT(width >= 1);
-    VIXL_ASSERT(lsb + width <= rn.size());
-    sbfm(rd, rn, lsb, lsb + width - 1);
-  }
-
-  // Signed extend byte.
-  void sxtb(const Register& rd, const Register& rn) {
-    sbfm(rd, rn, 0, 7);
-  }
-
-  // Signed extend halfword.
-  void sxth(const Register& rd, const Register& rn) {
-    sbfm(rd, rn, 0, 15);
-  }
-
-  // Signed extend word.
-  void sxtw(const Register& rd, const Register& rn) {
-    sbfm(rd, rn, 0, 31);
-  }
-
-  // Ubfm aliases.
-  // Logical shift left.
-  void lsl(const Register& rd, const Register& rn, unsigned shift) {
-    unsigned reg_size = rd.size();
-    VIXL_ASSERT(shift < reg_size);
-    ubfm(rd, rn, (reg_size - shift) % reg_size, reg_size - shift - 1);
-  }
-
-  // Logical shift right.
-  void lsr(const Register& rd, const Register& rn, unsigned shift) {
-    VIXL_ASSERT(shift < rd.size());
-    ubfm(rd, rn, shift, rd.size() - 1);
-  }
-
-  // Unsigned bitfield insert with zero at right.
-  void ubfiz(const Register& rd,
-             const Register& rn,
-             unsigned lsb,
-             unsigned width) {
-    VIXL_ASSERT(width >= 1);
-    VIXL_ASSERT(lsb + width <= rn.size());
-    ubfm(rd, rn, (rd.size() - lsb) & (rd.size() - 1), width - 1);
-  }
-
-  // Unsigned bitfield extract.
-  void ubfx(const Register& rd,
-            const Register& rn,
-            unsigned lsb,
-            unsigned width) {
-    VIXL_ASSERT(width >= 1);
-    VIXL_ASSERT(lsb + width <= rn.size());
-    ubfm(rd, rn, lsb, lsb + width - 1);
-  }
-
-  // Unsigned extend byte.
-  void uxtb(const Register& rd, const Register& rn) {
-    ubfm(rd, rn, 0, 7);
-  }
-
-  // Unsigned extend halfword.
-  void uxth(const Register& rd, const Register& rn) {
-    ubfm(rd, rn, 0, 15);
-  }
-
-  // Unsigned extend word.
-  void uxtw(const Register& rd, const Register& rn) {
-    ubfm(rd, rn, 0, 31);
-  }
-
-  // Extract.
-  void extr(const Register& rd,
-            const Register& rn,
-            const Register& rm,
-            unsigned lsb);
-
-  // Conditional select: rd = cond ? rn : rm.
-  void csel(const Register& rd,
-            const Register& rn,
-            const Register& rm,
-            Condition cond);
-
-  // Conditional select increment: rd = cond ? rn : rm + 1.
-  void csinc(const Register& rd,
-             const Register& rn,
-             const Register& rm,
-             Condition cond);
-
-  // Conditional select inversion: rd = cond ? rn : ~rm.
-  void csinv(const Register& rd,
-             const Register& rn,
-             const Register& rm,
-             Condition cond);
-
-  // Conditional select negation: rd = cond ? rn : -rm.
-  void csneg(const Register& rd,
-             const Register& rn,
-             const Register& rm,
-             Condition cond);
-
-  // Conditional set: rd = cond ? 1 : 0.
-  void cset(const Register& rd, Condition cond);
-
-  // Conditional set mask: rd = cond ? -1 : 0.
-  void csetm(const Register& rd, Condition cond);
-
-  // Conditional increment: rd = cond ? rn + 1 : rn.
-  void cinc(const Register& rd, const Register& rn, Condition cond);
-
-  // Conditional invert: rd = cond ? ~rn : rn.
-  void cinv(const Register& rd, const Register& rn, Condition cond);
-
-  // Conditional negate: rd = cond ? -rn : rn.
-  void cneg(const Register& rd, const Register& rn, Condition cond);
-
-  // Rotate right.
-  void ror(const Register& rd, const Register& rs, unsigned shift) {
-    extr(rd, rs, rs, shift);
-  }
-
-  // Conditional comparison.
-  // Conditional compare negative.
-  void ccmn(const Register& rn,
-            const Operand& operand,
-            StatusFlags nzcv,
-            Condition cond);
-
-  // Conditional compare.
-  void ccmp(const Register& rn,
-            const Operand& operand,
-            StatusFlags nzcv,
-            Condition cond);
-
-  // CRC-32 checksum from byte.
-  void crc32b(const Register& rd,
-              const Register& rn,
-              const Register& rm);
-
-  // CRC-32 checksum from half-word.
-  void crc32h(const Register& rd,
-              const Register& rn,
-              const Register& rm);
-
-  // CRC-32 checksum from word.
-  void crc32w(const Register& rd,
-              const Register& rn,
-              const Register& rm);
-
-  // CRC-32 checksum from double word.
-  void crc32x(const Register& rd,
-              const Register& rn,
-              const Register& rm);
-
-  // CRC-32 C checksum from byte.
-  void crc32cb(const Register& rd,
-               const Register& rn,
-               const Register& rm);
-
-  // CRC-32 C checksum from half-word.
-  void crc32ch(const Register& rd,
-               const Register& rn,
-               const Register& rm);
-
-  // CRC-32 C checksum from word.
-  void crc32cw(const Register& rd,
-               const Register& rn,
-               const Register& rm);
-
-  // CRC-32C checksum from double word.
-  void crc32cx(const Register& rd,
-               const Register& rn,
-               const Register& rm);
-
-  // Multiply.
-  void mul(const Register& rd, const Register& rn, const Register& rm);
-
-  // Negated multiply.
-  void mneg(const Register& rd, const Register& rn, const Register& rm);
-
-  // Signed long multiply: 32 x 32 -> 64-bit.
-  void smull(const Register& rd, const Register& rn, const Register& rm);
-
-  // Signed multiply high: 64 x 64 -> 64-bit <127:64>.
-  void smulh(const Register& xd, const Register& xn, const Register& xm);
-
-  // Multiply and accumulate.
-  void madd(const Register& rd,
-            const Register& rn,
-            const Register& rm,
-            const Register& ra);
-
-  // Multiply and subtract.
-  void msub(const Register& rd,
-            const Register& rn,
-            const Register& rm,
-            const Register& ra);
-
-  // Signed long multiply and accumulate: 32 x 32 + 64 -> 64-bit.
-  void smaddl(const Register& rd,
-              const Register& rn,
-              const Register& rm,
-              const Register& ra);
-
-  // Unsigned long multiply and accumulate: 32 x 32 + 64 -> 64-bit.
-  void umaddl(const Register& rd,
-              const Register& rn,
-              const Register& rm,
-              const Register& ra);
-
-  // Unsigned long multiply: 32 x 32 -> 64-bit.
-  void umull(const Register& rd,
-             const Register& rn,
-             const Register& rm) {
-    umaddl(rd, rn, rm, xzr);
-  }
-
-  // Unsigned multiply high: 64 x 64 -> 64-bit <127:64>.
-  void umulh(const Register& xd,
-             const Register& xn,
-             const Register& xm);
-
-  // Signed long multiply and subtract: 64 - (32 x 32) -> 64-bit.
-  void smsubl(const Register& rd,
-              const Register& rn,
-              const Register& rm,
-              const Register& ra);
-
-  // Unsigned long multiply and subtract: 64 - (32 x 32) -> 64-bit.
-  void umsubl(const Register& rd,
-              const Register& rn,
-              const Register& rm,
-              const Register& ra);
-
-  // Signed integer divide.
-  void sdiv(const Register& rd, const Register& rn, const Register& rm);
-
-  // Unsigned integer divide.
-  void udiv(const Register& rd, const Register& rn, const Register& rm);
-
-  // Bit reverse.
-  void rbit(const Register& rd, const Register& rn);
-
-  // Reverse bytes in 16-bit half words.
-  void rev16(const Register& rd, const Register& rn);
-
-  // Reverse bytes in 32-bit words.
-  void rev32(const Register& rd, const Register& rn);
-
-  // Reverse bytes.
-  void rev(const Register& rd, const Register& rn);
-
-  // Count leading zeroes.
-  void clz(const Register& rd, const Register& rn);
-
-  // Count leading sign bits.
-  void cls(const Register& rd, const Register& rn);
-
-  // Memory instructions.
-  // Load integer or FP register.
-  void ldr(const CPURegister& rt, const MemOperand& src,
-           LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Store integer or FP register.
-  void str(const CPURegister& rt, const MemOperand& dst,
-           LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Load word with sign extension.
-  void ldrsw(const Register& rt, const MemOperand& src,
-             LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Load byte.
-  void ldrb(const Register& rt, const MemOperand& src,
-            LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Store byte.
-  void strb(const Register& rt, const MemOperand& dst,
-            LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Load byte with sign extension.
-  void ldrsb(const Register& rt, const MemOperand& src,
-             LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Load half-word.
-  void ldrh(const Register& rt, const MemOperand& src,
-            LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Store half-word.
-  void strh(const Register& rt, const MemOperand& dst,
-            LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Load half-word with sign extension.
-  void ldrsh(const Register& rt, const MemOperand& src,
-             LoadStoreScalingOption option = PreferScaledOffset);
-
-  // Load integer or FP register (with unscaled offset).
-  void ldur(const CPURegister& rt, const MemOperand& src,
-            LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Store integer or FP register (with unscaled offset).
-  void stur(const CPURegister& rt, const MemOperand& src,
-            LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Load word with sign extension.
-  void ldursw(const Register& rt, const MemOperand& src,
-              LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Load byte (with unscaled offset).
-  void ldurb(const Register& rt, const MemOperand& src,
-             LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Store byte (with unscaled offset).
-  void sturb(const Register& rt, const MemOperand& dst,
-             LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Load byte with sign extension (and unscaled offset).
-  void ldursb(const Register& rt, const MemOperand& src,
-              LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Load half-word (with unscaled offset).
-  void ldurh(const Register& rt, const MemOperand& src,
-             LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Store half-word (with unscaled offset).
-  void sturh(const Register& rt, const MemOperand& dst,
-             LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Load half-word with sign extension (and unscaled offset).
-  void ldursh(const Register& rt, const MemOperand& src,
-              LoadStoreScalingOption option = PreferUnscaledOffset);
-
-  // Load integer or FP register pair.
-  void ldp(const CPURegister& rt, const CPURegister& rt2,
-           const MemOperand& src);
-
-  // Store integer or FP register pair.
-  void stp(const CPURegister& rt, const CPURegister& rt2,
-           const MemOperand& dst);
-
-  // Load word pair with sign extension.
-  void ldpsw(const Register& rt, const Register& rt2, const MemOperand& src);
-
-  // Load integer or FP register pair, non-temporal.
-  void ldnp(const CPURegister& rt, const CPURegister& rt2,
-            const MemOperand& src);
-
-  // Store integer or FP register pair, non-temporal.
-  void stnp(const CPURegister& rt, const CPURegister& rt2,
-            const MemOperand& dst);
-
-  // Load integer or FP register from literal pool.
-  void ldr(const CPURegister& rt, RawLiteral* literal);
-
-  // Load word with sign extension from literal pool.
-  void ldrsw(const Register& rt, RawLiteral* literal);
-
-  // Load integer or FP register from pc + imm19 << 2.
-  void ldr(const CPURegister& rt, int imm19);
-
-  // Load word with sign extension from pc + imm19 << 2.
-  void ldrsw(const Register& rt, int imm19);
-
-  // Store exclusive byte.
-  void stxrb(const Register& rs, const Register& rt, const MemOperand& dst);
-
-  // Store exclusive half-word.
-  void stxrh(const Register& rs, const Register& rt, const MemOperand& dst);
-
-  // Store exclusive register.
-  void stxr(const Register& rs, const Register& rt, const MemOperand& dst);
-
-  // Load exclusive byte.
-  void ldxrb(const Register& rt, const MemOperand& src);
-
-  // Load exclusive half-word.
-  void ldxrh(const Register& rt, const MemOperand& src);
-
-  // Load exclusive register.
-  void ldxr(const Register& rt, const MemOperand& src);
-
-  // Store exclusive register pair.
-  void stxp(const Register& rs,
-            const Register& rt,
-            const Register& rt2,
-            const MemOperand& dst);
-
-  // Load exclusive register pair.
-  void ldxp(const Register& rt, const Register& rt2, const MemOperand& src);
-
-  // Store-release exclusive byte.
-  void stlxrb(const Register& rs, const Register& rt, const MemOperand& dst);
-
-  // Store-release exclusive half-word.
-  void stlxrh(const Register& rs, const Register& rt, const MemOperand& dst);
-
-  // Store-release exclusive register.
-  void stlxr(const Register& rs, const Register& rt, const MemOperand& dst);
-
-  // Load-acquire exclusive byte.
-  void ldaxrb(const Register& rt, const MemOperand& src);
-
-  // Load-acquire exclusive half-word.
-  void ldaxrh(const Register& rt, const MemOperand& src);
-
-  // Load-acquire exclusive register.
-  void ldaxr(const Register& rt, const MemOperand& src);
-
-  // Store-release exclusive register pair.
-  void stlxp(const Register& rs,
-             const Register& rt,
-             const Register& rt2,
-             const MemOperand& dst);
-
-  // Load-acquire exclusive register pair.
-  void ldaxp(const Register& rt, const Register& rt2, const MemOperand& src);
-
-  // Store-release byte.
-  void stlrb(const Register& rt, const MemOperand& dst);
-
-  // Store-release half-word.
-  void stlrh(const Register& rt, const MemOperand& dst);
-
-  // Store-release register.
-  void stlr(const Register& rt, const MemOperand& dst);
-
-  // Load-acquire byte.
-  void ldarb(const Register& rt, const MemOperand& src);
-
-  // Load-acquire half-word.
-  void ldarh(const Register& rt, const MemOperand& src);
-
-  // Load-acquire register.
-  void ldar(const Register& rt, const MemOperand& src);
-
-  // Prefetch memory.
- void prfm(PrefetchOperation op, const MemOperand& addr, - LoadStoreScalingOption option = PreferScaledOffset); - - // Prefetch memory (with unscaled offset). - void prfum(PrefetchOperation op, const MemOperand& addr, - LoadStoreScalingOption option = PreferUnscaledOffset); - - // Prefetch memory in the literal pool. - void prfm(PrefetchOperation op, RawLiteral* literal); - - // Prefetch from pc + imm19 << 2. - void prfm(PrefetchOperation op, int imm19); - - // Move instructions. The default shift of -1 indicates that the move - // instruction will calculate an appropriate 16-bit immediate and left shift - // that is equal to the 64-bit immediate argument. If an explicit left shift - // is specified (0, 16, 32 or 48), the immediate must be a 16-bit value. - // - // For movk, an explicit shift can be used to indicate which half word should - // be overwritten, eg. movk(x0, 0, 0) will overwrite the least-significant - // half word with zero, whereas movk(x0, 0, 48) will overwrite the - // most-significant. - - // Move immediate and keep. - void movk(const Register& rd, uint64_t imm, int shift = -1) { - MoveWide(rd, imm, shift, MOVK); - } - - // Move inverted immediate. - void movn(const Register& rd, uint64_t imm, int shift = -1) { - MoveWide(rd, imm, shift, MOVN); - } - - // Move immediate. - void movz(const Register& rd, uint64_t imm, int shift = -1) { - MoveWide(rd, imm, shift, MOVZ); - } - - // Misc instructions. - // Monitor debug-mode breakpoint. - void brk(int code); - - // Halting debug-mode breakpoint. - void hlt(int code); - - // Generate exception targeting EL1. - void svc(int code); - - // Move register to register. - void mov(const Register& rd, const Register& rn); - - // Move inverted operand to register. - void mvn(const Register& rd, const Operand& operand); - - // System instructions. - // Move to register from system register. - void mrs(const Register& rt, SystemRegister sysreg); - - // Move from register to system register. 
- void msr(SystemRegister sysreg, const Register& rt); - - // System instruction. - void sys(int op1, int crn, int crm, int op2, const Register& rt = xzr); - - // System instruction with pre-encoded op (op1:crn:crm:op2). - void sys(int op, const Register& rt = xzr); - - // System data cache operation. - void dc(DataCacheOp op, const Register& rt); - - // System instruction cache operation. - void ic(InstructionCacheOp op, const Register& rt); - - // System hint. - void hint(SystemHint code); - - // Clear exclusive monitor. - void clrex(int imm4 = 0xf); - - // Data memory barrier. - void dmb(BarrierDomain domain, BarrierType type); - - // Data synchronization barrier. - void dsb(BarrierDomain domain, BarrierType type); - - // Instruction synchronization barrier. - void isb(); - - // Alias for system instructions. - // No-op. - void nop() { - hint(NOP); - } - - // FP and NEON instructions. - // Move double precision immediate to FP register. - void fmov(const VRegister& vd, double imm); - - // Move single precision immediate to FP register. - void fmov(const VRegister& vd, float imm); - - // Move FP register to register. - void fmov(const Register& rd, const VRegister& fn); - - // Move register to FP register. - void fmov(const VRegister& vd, const Register& rn); - - // Move FP register to FP register. - void fmov(const VRegister& vd, const VRegister& fn); - - // Move 64-bit register to top half of 128-bit FP register. - void fmov(const VRegister& vd, int index, const Register& rn); - - // Move top half of 128-bit FP register to 64-bit register. - void fmov(const Register& rd, const VRegister& vn, int index); - - // FP add. - void fadd(const VRegister& vd, const VRegister& vn, const VRegister& vm); - - // FP subtract. - void fsub(const VRegister& vd, const VRegister& vn, const VRegister& vm); - - // FP multiply. - void fmul(const VRegister& vd, const VRegister& vn, const VRegister& vm); - - // FP fused multiply-add. 
- void fmadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - const VRegister& va); - - // FP fused multiply-subtract. - void fmsub(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - const VRegister& va); - - // FP fused multiply-add and negate. - void fnmadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - const VRegister& va); - - // FP fused multiply-subtract and negate. - void fnmsub(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - const VRegister& va); - - // FP multiply-negate scalar. - void fnmul(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP reciprocal exponent scalar. - void frecpx(const VRegister& vd, - const VRegister& vn); - - // FP divide. - void fdiv(const VRegister& vd, const VRegister& fn, const VRegister& vm); - - // FP maximum. - void fmax(const VRegister& vd, const VRegister& fn, const VRegister& vm); - - // FP minimum. - void fmin(const VRegister& vd, const VRegister& fn, const VRegister& vm); - - // FP maximum number. - void fmaxnm(const VRegister& vd, const VRegister& fn, const VRegister& vm); - - // FP minimum number. - void fminnm(const VRegister& vd, const VRegister& fn, const VRegister& vm); - - // FP absolute. - void fabs(const VRegister& vd, const VRegister& vn); - - // FP negate. - void fneg(const VRegister& vd, const VRegister& vn); - - // FP square root. - void fsqrt(const VRegister& vd, const VRegister& vn); - - // FP round to integer, nearest with ties to away. - void frinta(const VRegister& vd, const VRegister& vn); - - // FP round to integer, implicit rounding. - void frinti(const VRegister& vd, const VRegister& vn); - - // FP round to integer, toward minus infinity. - void frintm(const VRegister& vd, const VRegister& vn); - - // FP round to integer, nearest with ties to even. - void frintn(const VRegister& vd, const VRegister& vn); - - // FP round to integer, toward plus infinity. 
- void frintp(const VRegister& vd, const VRegister& vn); - - // FP round to integer, exact, implicit rounding. - void frintx(const VRegister& vd, const VRegister& vn); - - // FP round to integer, towards zero. - void frintz(const VRegister& vd, const VRegister& vn); - - void FPCompareMacro(const VRegister& vn, - double value, - FPTrapFlags trap); - - void FPCompareMacro(const VRegister& vn, - const VRegister& vm, - FPTrapFlags trap); - - // FP compare registers. - void fcmp(const VRegister& vn, const VRegister& vm); - - // FP compare immediate. - void fcmp(const VRegister& vn, double value); - - void FPCCompareMacro(const VRegister& vn, - const VRegister& vm, - StatusFlags nzcv, - Condition cond, - FPTrapFlags trap); - - // FP conditional compare. - void fccmp(const VRegister& vn, - const VRegister& vm, - StatusFlags nzcv, - Condition cond); - - // FP signaling compare registers. - void fcmpe(const VRegister& vn, const VRegister& vm); - - // FP signaling compare immediate. - void fcmpe(const VRegister& vn, double value); - - // FP conditional signaling compare. - void fccmpe(const VRegister& vn, - const VRegister& vm, - StatusFlags nzcv, - Condition cond); - - // FP conditional select. - void fcsel(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - Condition cond); - - // Common FP Convert functions. - void NEONFPConvertToInt(const Register& rd, - const VRegister& vn, - Instr op); - void NEONFPConvertToInt(const VRegister& vd, - const VRegister& vn, - Instr op); - - // FP convert between precisions. - void fcvt(const VRegister& vd, const VRegister& vn); - - // FP convert to higher precision. - void fcvtl(const VRegister& vd, const VRegister& vn); - - // FP convert to higher precision (second part). - void fcvtl2(const VRegister& vd, const VRegister& vn); - - // FP convert to lower precision. - void fcvtn(const VRegister& vd, const VRegister& vn); - - // FP convert to lower prevision (second part). 
- void fcvtn2(const VRegister& vd, const VRegister& vn); - - // FP convert to lower precision, rounding to odd. - void fcvtxn(const VRegister& vd, const VRegister& vn); - - // FP convert to lower precision, rounding to odd (second part). - void fcvtxn2(const VRegister& vd, const VRegister& vn); - - // FP convert to signed integer, nearest with ties to away. - void fcvtas(const Register& rd, const VRegister& vn); - - // FP convert to unsigned integer, nearest with ties to away. - void fcvtau(const Register& rd, const VRegister& vn); - - // FP convert to signed integer, nearest with ties to away. - void fcvtas(const VRegister& vd, const VRegister& vn); - - // FP convert to unsigned integer, nearest with ties to away. - void fcvtau(const VRegister& vd, const VRegister& vn); - - // FP convert to signed integer, round towards -infinity. - void fcvtms(const Register& rd, const VRegister& vn); - - // FP convert to unsigned integer, round towards -infinity. - void fcvtmu(const Register& rd, const VRegister& vn); - - // FP convert to signed integer, round towards -infinity. - void fcvtms(const VRegister& vd, const VRegister& vn); - - // FP convert to unsigned integer, round towards -infinity. - void fcvtmu(const VRegister& vd, const VRegister& vn); - - // FP convert to signed integer, nearest with ties to even. - void fcvtns(const Register& rd, const VRegister& vn); - - // FP convert to unsigned integer, nearest with ties to even. - void fcvtnu(const Register& rd, const VRegister& vn); - - // FP convert to signed integer, nearest with ties to even. - void fcvtns(const VRegister& rd, const VRegister& vn); - - // FP convert to unsigned integer, nearest with ties to even. - void fcvtnu(const VRegister& rd, const VRegister& vn); - - // FP convert to signed integer or fixed-point, round towards zero. - void fcvtzs(const Register& rd, const VRegister& vn, int fbits = 0); - - // FP convert to unsigned integer or fixed-point, round towards zero. 
- void fcvtzu(const Register& rd, const VRegister& vn, int fbits = 0); - - // FP convert to signed integer or fixed-point, round towards zero. - void fcvtzs(const VRegister& vd, const VRegister& vn, int fbits = 0); - - // FP convert to unsigned integer or fixed-point, round towards zero. - void fcvtzu(const VRegister& vd, const VRegister& vn, int fbits = 0); - - // FP convert to signed integer, round towards +infinity. - void fcvtps(const Register& rd, const VRegister& vn); - - // FP convert to unsigned integer, round towards +infinity. - void fcvtpu(const Register& rd, const VRegister& vn); - - // FP convert to signed integer, round towards +infinity. - void fcvtps(const VRegister& vd, const VRegister& vn); - - // FP convert to unsigned integer, round towards +infinity. - void fcvtpu(const VRegister& vd, const VRegister& vn); - - // Convert signed integer or fixed point to FP. - void scvtf(const VRegister& fd, const Register& rn, int fbits = 0); - - // Convert unsigned integer or fixed point to FP. - void ucvtf(const VRegister& fd, const Register& rn, int fbits = 0); - - // Convert signed integer or fixed-point to FP. - void scvtf(const VRegister& fd, const VRegister& vn, int fbits = 0); - - // Convert unsigned integer or fixed-point to FP. - void ucvtf(const VRegister& fd, const VRegister& vn, int fbits = 0); - - // Unsigned absolute difference. - void uabd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed absolute difference. - void sabd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned absolute difference and accumulate. - void uaba(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed absolute difference and accumulate. - void saba(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Add. - void add(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Subtract. 
- void sub(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned halving add. - void uhadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed halving add. - void shadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned rounding halving add. - void urhadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed rounding halving add. - void srhadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned halving sub. - void uhsub(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed halving sub. - void shsub(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned saturating add. - void uqadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating add. - void sqadd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned saturating subtract. - void uqsub(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating subtract. - void sqsub(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Add pairwise. - void addp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Add pair of elements scalar. - void addp(const VRegister& vd, - const VRegister& vn); - - // Multiply-add to accumulator. - void mla(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Multiply-subtract to accumulator. - void mls(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Multiply. - void mul(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Multiply by scalar element. - void mul(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Multiply-add by scalar element. 
- void mla(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Multiply-subtract by scalar element. - void mls(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed long multiply-add by scalar element. - void smlal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed long multiply-add by scalar element (second part). - void smlal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Unsigned long multiply-add by scalar element. - void umlal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Unsigned long multiply-add by scalar element (second part). - void umlal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed long multiply-sub by scalar element. - void smlsl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed long multiply-sub by scalar element (second part). - void smlsl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Unsigned long multiply-sub by scalar element. - void umlsl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Unsigned long multiply-sub by scalar element (second part). - void umlsl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed long multiply by scalar element. - void smull(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed long multiply by scalar element (second part). - void smull2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Unsigned long multiply by scalar element. - void umull(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Unsigned long multiply by scalar element (second part). 
- void umull2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed saturating double long multiply by element. - void sqdmull(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed saturating double long multiply by element (second part). - void sqdmull2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed saturating doubling long multiply-add by element. - void sqdmlal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed saturating doubling long multiply-add by element (second part). - void sqdmlal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed saturating doubling long multiply-sub by element. - void sqdmlsl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed saturating doubling long multiply-sub by element (second part). - void sqdmlsl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Compare equal. - void cmeq(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Compare signed greater than or equal. - void cmge(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Compare signed greater than. - void cmgt(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Compare unsigned higher. - void cmhi(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Compare unsigned higher or same. - void cmhs(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Compare bitwise test bits nonzero. - void cmtst(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Compare bitwise to zero. - void cmeq(const VRegister& vd, - const VRegister& vn, - int value); - - // Compare signed greater than or equal to zero. 
- void cmge(const VRegister& vd, - const VRegister& vn, - int value); - - // Compare signed greater than zero. - void cmgt(const VRegister& vd, - const VRegister& vn, - int value); - - // Compare signed less than or equal to zero. - void cmle(const VRegister& vd, - const VRegister& vn, - int value); - - // Compare signed less than zero. - void cmlt(const VRegister& vd, - const VRegister& vn, - int value); - - // Signed shift left by register. - void sshl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned shift left by register. - void ushl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating shift left by register. - void sqshl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned saturating shift left by register. - void uqshl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed rounding shift left by register. - void srshl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned rounding shift left by register. - void urshl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating rounding shift left by register. - void sqrshl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned saturating rounding shift left by register. - void uqrshl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bitwise and. - void and_(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bitwise or. - void orr(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bitwise or immediate. - void orr(const VRegister& vd, - const int imm8, - const int left_shift = 0); - - // Move register to register. - void mov(const VRegister& vd, - const VRegister& vn); - - // Bitwise orn. - void orn(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bitwise eor. 
- void eor(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bit clear immediate. - void bic(const VRegister& vd, - const int imm8, - const int left_shift = 0); - - // Bit clear. - void bic(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bitwise insert if false. - void bif(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bitwise insert if true. - void bit(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Bitwise select. - void bsl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Polynomial multiply. - void pmul(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Vector move immediate. - void movi(const VRegister& vd, - const uint64_t imm, - Shift shift = LSL, - const int shift_amount = 0); - - // Bitwise not. - void mvn(const VRegister& vd, - const VRegister& vn); - - // Vector move inverted immediate. - void mvni(const VRegister& vd, - const int imm8, - Shift shift = LSL, - const int shift_amount = 0); - - // Signed saturating accumulate of unsigned value. - void suqadd(const VRegister& vd, - const VRegister& vn); - - // Unsigned saturating accumulate of signed value. - void usqadd(const VRegister& vd, - const VRegister& vn); - - // Absolute value. - void abs(const VRegister& vd, - const VRegister& vn); - - // Signed saturating absolute value. - void sqabs(const VRegister& vd, - const VRegister& vn); - - // Negate. - void neg(const VRegister& vd, - const VRegister& vn); - - // Signed saturating negate. - void sqneg(const VRegister& vd, - const VRegister& vn); - - // Bitwise not. - void not_(const VRegister& vd, - const VRegister& vn); - - // Extract narrow. - void xtn(const VRegister& vd, - const VRegister& vn); - - // Extract narrow (second part). - void xtn2(const VRegister& vd, - const VRegister& vn); - - // Signed saturating extract narrow. 
- void sqxtn(const VRegister& vd, - const VRegister& vn); - - // Signed saturating extract narrow (second part). - void sqxtn2(const VRegister& vd, - const VRegister& vn); - - // Unsigned saturating extract narrow. - void uqxtn(const VRegister& vd, - const VRegister& vn); - - // Unsigned saturating extract narrow (second part). - void uqxtn2(const VRegister& vd, - const VRegister& vn); - - // Signed saturating extract unsigned narrow. - void sqxtun(const VRegister& vd, - const VRegister& vn); - - // Signed saturating extract unsigned narrow (second part). - void sqxtun2(const VRegister& vd, - const VRegister& vn); - - // Extract vector from pair of vectors. - void ext(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int index); - - // Duplicate vector element to vector or scalar. - void dup(const VRegister& vd, - const VRegister& vn, - int vn_index); - - // Move vector element to scalar. - void mov(const VRegister& vd, - const VRegister& vn, - int vn_index); - - // Duplicate general-purpose register to vector. - void dup(const VRegister& vd, - const Register& rn); - - // Insert vector element from another vector element. - void ins(const VRegister& vd, - int vd_index, - const VRegister& vn, - int vn_index); - - // Move vector element to another vector element. - void mov(const VRegister& vd, - int vd_index, - const VRegister& vn, - int vn_index); - - // Insert vector element from general-purpose register. - void ins(const VRegister& vd, - int vd_index, - const Register& rn); - - // Move general-purpose register to a vector element. - void mov(const VRegister& vd, - int vd_index, - const Register& rn); - - // Unsigned move vector element to general-purpose register. - void umov(const Register& rd, - const VRegister& vn, - int vn_index); - - // Move vector element to general-purpose register. - void mov(const Register& rd, - const VRegister& vn, - int vn_index); - - // Signed move vector element to general-purpose register. 
- void smov(const Register& rd, - const VRegister& vn, - int vn_index); - - // One-element structure load to one register. - void ld1(const VRegister& vt, - const MemOperand& src); - - // One-element structure load to two registers. - void ld1(const VRegister& vt, - const VRegister& vt2, - const MemOperand& src); - - // One-element structure load to three registers. - void ld1(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const MemOperand& src); - - // One-element structure load to four registers. - void ld1(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const VRegister& vt4, - const MemOperand& src); - - // One-element single structure load to one lane. - void ld1(const VRegister& vt, - int lane, - const MemOperand& src); - - // One-element single structure load to all lanes. - void ld1r(const VRegister& vt, - const MemOperand& src); - - // Two-element structure load. - void ld2(const VRegister& vt, - const VRegister& vt2, - const MemOperand& src); - - // Two-element single structure load to one lane. - void ld2(const VRegister& vt, - const VRegister& vt2, - int lane, - const MemOperand& src); - - // Two-element single structure load to all lanes. - void ld2r(const VRegister& vt, - const VRegister& vt2, - const MemOperand& src); - - // Three-element structure load. - void ld3(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const MemOperand& src); - - // Three-element single structure load to one lane. - void ld3(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - int lane, - const MemOperand& src); - - // Three-element single structure load to all lanes. - void ld3r(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const MemOperand& src); - - // Four-element structure load. 
- void ld4(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const VRegister& vt4, - const MemOperand& src); - - // Four-element single structure load to one lane. - void ld4(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const VRegister& vt4, - int lane, - const MemOperand& src); - - // Four-element single structure load to all lanes. - void ld4r(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const VRegister& vt4, - const MemOperand& src); - - // Count leading sign bits. - void cls(const VRegister& vd, - const VRegister& vn); - - // Count leading zero bits (vector). - void clz(const VRegister& vd, - const VRegister& vn); - - // Population count per byte. - void cnt(const VRegister& vd, - const VRegister& vn); - - // Reverse bit order. - void rbit(const VRegister& vd, - const VRegister& vn); - - // Reverse elements in 16-bit halfwords. - void rev16(const VRegister& vd, - const VRegister& vn); - - // Reverse elements in 32-bit words. - void rev32(const VRegister& vd, - const VRegister& vn); - - // Reverse elements in 64-bit doublewords. - void rev64(const VRegister& vd, - const VRegister& vn); - - // Unsigned reciprocal square root estimate. - void ursqrte(const VRegister& vd, - const VRegister& vn); - - // Unsigned reciprocal estimate. - void urecpe(const VRegister& vd, - const VRegister& vn); - - // Signed pairwise long add. - void saddlp(const VRegister& vd, - const VRegister& vn); - - // Unsigned pairwise long add. - void uaddlp(const VRegister& vd, - const VRegister& vn); - - // Signed pairwise long add and accumulate. - void sadalp(const VRegister& vd, - const VRegister& vn); - - // Unsigned pairwise long add and accumulate. - void uadalp(const VRegister& vd, - const VRegister& vn); - - // Shift left by immediate. - void shl(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating shift left by immediate. 
- void sqshl(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating shift left unsigned by immediate. - void sqshlu(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned saturating shift left by immediate. - void uqshl(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed shift left long by immediate. - void sshll(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed shift left long by immediate (second part). - void sshll2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed extend long. - void sxtl(const VRegister& vd, - const VRegister& vn); - - // Signed extend long (second part). - void sxtl2(const VRegister& vd, - const VRegister& vn); - - // Unsigned shift left long by immediate. - void ushll(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned shift left long by immediate (second part). - void ushll2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Shift left long by element size. - void shll(const VRegister& vd, - const VRegister& vn, - int shift); - - // Shift left long by element size (second part). - void shll2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned extend long. - void uxtl(const VRegister& vd, - const VRegister& vn); - - // Unsigned extend long (second part). - void uxtl2(const VRegister& vd, - const VRegister& vn); - - // Shift left by immediate and insert. - void sli(const VRegister& vd, - const VRegister& vn, - int shift); - - // Shift right by immediate and insert. - void sri(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed maximum. - void smax(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed pairwise maximum. - void smaxp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Add across vector. - void addv(const VRegister& vd, - const VRegister& vn); - - // Signed add long across vector. 
- void saddlv(const VRegister& vd, - const VRegister& vn); - - // Unsigned add long across vector. - void uaddlv(const VRegister& vd, - const VRegister& vn); - - // FP maximum number across vector. - void fmaxnmv(const VRegister& vd, - const VRegister& vn); - - // FP maximum across vector. - void fmaxv(const VRegister& vd, - const VRegister& vn); - - // FP minimum number across vector. - void fminnmv(const VRegister& vd, - const VRegister& vn); - - // FP minimum across vector. - void fminv(const VRegister& vd, - const VRegister& vn); - - // Signed maximum across vector. - void smaxv(const VRegister& vd, - const VRegister& vn); - - // Signed minimum. - void smin(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed minimum pairwise. - void sminp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed minimum across vector. - void sminv(const VRegister& vd, - const VRegister& vn); - - // One-element structure store from one register. - void st1(const VRegister& vt, - const MemOperand& src); - - // One-element structure store from two registers. - void st1(const VRegister& vt, - const VRegister& vt2, - const MemOperand& src); - - // One-element structure store from three registers. - void st1(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const MemOperand& src); - - // One-element structure store from four registers. - void st1(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const VRegister& vt4, - const MemOperand& src); - - // One-element single structure store from one lane. - void st1(const VRegister& vt, - int lane, - const MemOperand& src); - - // Two-element structure store from two registers. - void st2(const VRegister& vt, - const VRegister& vt2, - const MemOperand& src); - - // Two-element single structure store from two lanes. 
- void st2(const VRegister& vt, - const VRegister& vt2, - int lane, - const MemOperand& src); - - // Three-element structure store from three registers. - void st3(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const MemOperand& src); - - // Three-element single structure store from three lanes. - void st3(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - int lane, - const MemOperand& src); - - // Four-element structure store from four registers. - void st4(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const VRegister& vt4, - const MemOperand& src); - - // Four-element single structure store from four lanes. - void st4(const VRegister& vt, - const VRegister& vt2, - const VRegister& vt3, - const VRegister& vt4, - int lane, - const MemOperand& src); - - // Unsigned add long. - void uaddl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned add long (second part). - void uaddl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned add wide. - void uaddw(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned add wide (second part). - void uaddw2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed add long. - void saddl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed add long (second part). - void saddl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed add wide. - void saddw(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed add wide (second part). - void saddw2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned subtract long. - void usubl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned subtract long (second part). 
- void usubl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned subtract wide. - void usubw(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned subtract wide (second part). - void usubw2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed subtract long. - void ssubl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed subtract long (second part). - void ssubl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed integer subtract wide. - void ssubw(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed integer subtract wide (second part). - void ssubw2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned maximum. - void umax(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned pairwise maximum. - void umaxp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned maximum across vector. - void umaxv(const VRegister& vd, - const VRegister& vn); - - // Unsigned minimum. - void umin(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned pairwise minimum. - void uminp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned minimum across vector. - void uminv(const VRegister& vd, - const VRegister& vn); - - // Transpose vectors (primary). - void trn1(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Transpose vectors (secondary). - void trn2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unzip vectors (primary). - void uzp1(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unzip vectors (secondary). - void uzp2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Zip vectors (primary). 
- void zip1(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Zip vectors (secondary). - void zip2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed shift right by immediate. - void sshr(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned shift right by immediate. - void ushr(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed rounding shift right by immediate. - void srshr(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned rounding shift right by immediate. - void urshr(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed shift right by immediate and accumulate. - void ssra(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned shift right by immediate and accumulate. - void usra(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed rounding shift right by immediate and accumulate. - void srsra(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned rounding shift right by immediate and accumulate. - void ursra(const VRegister& vd, - const VRegister& vn, - int shift); - - // Shift right narrow by immediate. - void shrn(const VRegister& vd, - const VRegister& vn, - int shift); - - // Shift right narrow by immediate (second part). - void shrn2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Rounding shift right narrow by immediate. - void rshrn(const VRegister& vd, - const VRegister& vn, - int shift); - - // Rounding shift right narrow by immediate (second part). - void rshrn2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned saturating shift right narrow by immediate. - void uqshrn(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned saturating shift right narrow by immediate (second part). 
- void uqshrn2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned saturating rounding shift right narrow by immediate. - void uqrshrn(const VRegister& vd, - const VRegister& vn, - int shift); - - // Unsigned saturating rounding shift right narrow by immediate (second part). - void uqrshrn2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating shift right narrow by immediate. - void sqshrn(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating shift right narrow by immediate (second part). - void sqshrn2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating rounded shift right narrow by immediate. - void sqrshrn(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating rounded shift right narrow by immediate (second part). - void sqrshrn2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating shift right unsigned narrow by immediate. - void sqshrun(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed saturating shift right unsigned narrow by immediate (second part). - void sqshrun2(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed sat rounded shift right unsigned narrow by immediate. - void sqrshrun(const VRegister& vd, - const VRegister& vn, - int shift); - - // Signed sat rounded shift right unsigned narrow by immediate (second part). - void sqrshrun2(const VRegister& vd, - const VRegister& vn, - int shift); - - // FP reciprocal step. - void frecps(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP reciprocal estimate. - void frecpe(const VRegister& vd, - const VRegister& vn); - - // FP reciprocal square root estimate. - void frsqrte(const VRegister& vd, - const VRegister& vn); - - // FP reciprocal square root step. 
- void frsqrts(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed absolute difference and accumulate long. - void sabal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed absolute difference and accumulate long (second part). - void sabal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned absolute difference and accumulate long. - void uabal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned absolute difference and accumulate long (second part). - void uabal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed absolute difference long. - void sabdl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed absolute difference long (second part). - void sabdl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned absolute difference long. - void uabdl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned absolute difference long (second part). - void uabdl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Polynomial multiply long. - void pmull(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Polynomial multiply long (second part). - void pmull2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed long multiply-add. - void smlal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed long multiply-add (second part). - void smlal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned long multiply-add. - void umlal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned long multiply-add (second part). - void umlal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed long multiply-sub. 
- void smlsl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed long multiply-sub (second part). - void smlsl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned long multiply-sub. - void umlsl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned long multiply-sub (second part). - void umlsl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed long multiply. - void smull(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed long multiply (second part). - void smull2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling long multiply-add. - void sqdmlal(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling long multiply-add (second part). - void sqdmlal2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling long multiply-subtract. - void sqdmlsl(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling long multiply-subtract (second part). - void sqdmlsl2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling long multiply. - void sqdmull(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling long multiply (second part). - void sqdmull2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling multiply returning high half. - void sqdmulh(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating rounding doubling multiply returning high half. - void sqrdmulh(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Signed saturating doubling multiply element returning high half. 
- void sqdmulh(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Signed saturating rounding doubling multiply element returning high half. - void sqrdmulh(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // Unsigned long multiply long. - void umull(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Unsigned long multiply (second part). - void umull2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Add narrow returning high half. - void addhn(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Add narrow returning high half (second part). - void addhn2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Rounding add narrow returning high half. - void raddhn(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Rounding add narrow returning high half (second part). - void raddhn2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Subtract narrow returning high half. - void subhn(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Subtract narrow returning high half (second part). - void subhn2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Rounding subtract narrow returning high half. - void rsubhn(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // Rounding subtract narrow returning high half (second part). - void rsubhn2(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP vector multiply accumulate. - void fmla(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP vector multiply subtract. - void fmls(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP vector multiply extended. - void fmulx(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP absolute greater than or equal. 
- void facge(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP absolute greater than. - void facgt(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP multiply by element. - void fmul(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // FP fused multiply-add to accumulator by element. - void fmla(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // FP fused multiply-sub from accumulator by element. - void fmls(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // FP multiply extended by element. - void fmulx(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - int vm_index); - - // FP compare equal. - void fcmeq(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP greater than. - void fcmgt(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP greater than or equal. - void fcmge(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP compare equal to zero. - void fcmeq(const VRegister& vd, - const VRegister& vn, - double imm); - - // FP greater than zero. - void fcmgt(const VRegister& vd, - const VRegister& vn, - double imm); - - // FP greater than or equal to zero. - void fcmge(const VRegister& vd, - const VRegister& vn, - double imm); - - // FP less than or equal to zero. - void fcmle(const VRegister& vd, - const VRegister& vn, - double imm); - - // FP less than to zero. - void fcmlt(const VRegister& vd, - const VRegister& vn, - double imm); - - // FP absolute difference. - void fabd(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP pairwise add vector. - void faddp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP pairwise add scalar. - void faddp(const VRegister& vd, - const VRegister& vn); - - // FP pairwise maximum vector. 
- void fmaxp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP pairwise maximum scalar. - void fmaxp(const VRegister& vd, - const VRegister& vn); - - // FP pairwise minimum vector. - void fminp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP pairwise minimum scalar. - void fminp(const VRegister& vd, - const VRegister& vn); - - // FP pairwise maximum number vector. - void fmaxnmp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP pairwise maximum number scalar. - void fmaxnmp(const VRegister& vd, - const VRegister& vn); - - // FP pairwise minimum number vector. - void fminnmp(const VRegister& vd, - const VRegister& vn, - const VRegister& vm); - - // FP pairwise minimum number scalar. - void fminnmp(const VRegister& vd, - const VRegister& vn); - - // Emit generic instructions. - // Emit raw instructions into the instruction stream. - void dci(Instr raw_inst) { Emit(raw_inst); } - - // Emit 32 bits of data into the instruction stream. - void dc32(uint32_t data) { - VIXL_ASSERT(buffer_monitor_ > 0); - buffer_->Emit32(data); - } - - // Emit 64 bits of data into the instruction stream. - void dc64(uint64_t data) { - VIXL_ASSERT(buffer_monitor_ > 0); - buffer_->Emit64(data); - } - - // Copy a string into the instruction stream, including the terminating NULL - // character. The instruction pointer is then aligned correctly for - // subsequent instructions. - void EmitString(const char * string) { - VIXL_ASSERT(string != NULL); - VIXL_ASSERT(buffer_monitor_ > 0); - - buffer_->EmitString(string); - buffer_->Align(); - } - - // Code generation helpers. - - // Register encoding. 
- static Instr Rd(CPURegister rd) { - VIXL_ASSERT(rd.code() != kSPRegInternalCode); - return rd.code() << Rd_offset; - } - - static Instr Rn(CPURegister rn) { - VIXL_ASSERT(rn.code() != kSPRegInternalCode); - return rn.code() << Rn_offset; - } - - static Instr Rm(CPURegister rm) { - VIXL_ASSERT(rm.code() != kSPRegInternalCode); - return rm.code() << Rm_offset; - } - - static Instr RmNot31(CPURegister rm) { - VIXL_ASSERT(rm.code() != kSPRegInternalCode); - VIXL_ASSERT(!rm.IsZero()); - return Rm(rm); - } - - static Instr Ra(CPURegister ra) { - VIXL_ASSERT(ra.code() != kSPRegInternalCode); - return ra.code() << Ra_offset; - } - - static Instr Rt(CPURegister rt) { - VIXL_ASSERT(rt.code() != kSPRegInternalCode); - return rt.code() << Rt_offset; - } - - static Instr Rt2(CPURegister rt2) { - VIXL_ASSERT(rt2.code() != kSPRegInternalCode); - return rt2.code() << Rt2_offset; - } - - static Instr Rs(CPURegister rs) { - VIXL_ASSERT(rs.code() != kSPRegInternalCode); - return rs.code() << Rs_offset; - } - - // These encoding functions allow the stack pointer to be encoded, and - // disallow the zero register. - static Instr RdSP(Register rd) { - VIXL_ASSERT(!rd.IsZero()); - return (rd.code() & kRegCodeMask) << Rd_offset; - } - - static Instr RnSP(Register rn) { - VIXL_ASSERT(!rn.IsZero()); - return (rn.code() & kRegCodeMask) << Rn_offset; - } - - // Flags encoding. - static Instr Flags(FlagsUpdate S) { - if (S == SetFlags) { - return 1 << FlagsUpdate_offset; - } else if (S == LeaveFlags) { - return 0 << FlagsUpdate_offset; - } - VIXL_UNREACHABLE(); - return 0; - } - - static Instr Cond(Condition cond) { - return cond << Condition_offset; - } - - // PC-relative address encoding. 
- static Instr ImmPCRelAddress(int imm21) { - VIXL_ASSERT(is_int21(imm21)); - Instr imm = static_cast<Instr>(truncate_to_int21(imm21)); - Instr immhi = (imm >> ImmPCRelLo_width) << ImmPCRelHi_offset; - Instr immlo = imm << ImmPCRelLo_offset; - return (immhi & ImmPCRelHi_mask) | (immlo & ImmPCRelLo_mask); - } - - // Branch encoding. - static Instr ImmUncondBranch(int imm26) { - VIXL_ASSERT(is_int26(imm26)); - return truncate_to_int26(imm26) << ImmUncondBranch_offset; - } - - static Instr ImmCondBranch(int imm19) { - VIXL_ASSERT(is_int19(imm19)); - return truncate_to_int19(imm19) << ImmCondBranch_offset; - } - - static Instr ImmCmpBranch(int imm19) { - VIXL_ASSERT(is_int19(imm19)); - return truncate_to_int19(imm19) << ImmCmpBranch_offset; - } - - static Instr ImmTestBranch(int imm14) { - VIXL_ASSERT(is_int14(imm14)); - return truncate_to_int14(imm14) << ImmTestBranch_offset; - } - - static Instr ImmTestBranchBit(unsigned bit_pos) { - VIXL_ASSERT(is_uint6(bit_pos)); - // Subtract five from the shift offset, as we need bit 5 from bit_pos. - unsigned b5 = bit_pos << (ImmTestBranchBit5_offset - 5); - unsigned b40 = bit_pos << ImmTestBranchBit40_offset; - b5 &= ImmTestBranchBit5_mask; - b40 &= ImmTestBranchBit40_mask; - return b5 | b40; - } - - // Data Processing encoding. - static Instr SF(Register rd) { - return rd.Is64Bits() ? SixtyFourBits : ThirtyTwoBits; - } - - static Instr ImmAddSub(int imm) { - VIXL_ASSERT(IsImmAddSub(imm)); - if (is_uint12(imm)) { // No shift required. 
- imm <<= ImmAddSub_offset; - } else { - imm = ((imm >> 12) << ImmAddSub_offset) | (1 << ShiftAddSub_offset); - } - return imm; - } - - static Instr ImmS(unsigned imms, unsigned reg_size) { - VIXL_ASSERT(((reg_size == kXRegSize) && is_uint6(imms)) || - ((reg_size == kWRegSize) && is_uint5(imms))); - USE(reg_size); - return imms << ImmS_offset; - } - - static Instr ImmR(unsigned immr, unsigned reg_size) { - VIXL_ASSERT(((reg_size == kXRegSize) && is_uint6(immr)) || - ((reg_size == kWRegSize) && is_uint5(immr))); - USE(reg_size); - VIXL_ASSERT(is_uint6(immr)); - return immr << ImmR_offset; - } - - static Instr ImmSetBits(unsigned imms, unsigned reg_size) { - VIXL_ASSERT((reg_size == kWRegSize) || (reg_size == kXRegSize)); - VIXL_ASSERT(is_uint6(imms)); - VIXL_ASSERT((reg_size == kXRegSize) || is_uint6(imms + 3)); - USE(reg_size); - return imms << ImmSetBits_offset; - } - - static Instr ImmRotate(unsigned immr, unsigned reg_size) { - VIXL_ASSERT((reg_size == kWRegSize) || (reg_size == kXRegSize)); - VIXL_ASSERT(((reg_size == kXRegSize) && is_uint6(immr)) || - ((reg_size == kWRegSize) && is_uint5(immr))); - USE(reg_size); - return immr << ImmRotate_offset; - } - - static Instr ImmLLiteral(int imm19) { - VIXL_ASSERT(is_int19(imm19)); - return truncate_to_int19(imm19) << ImmLLiteral_offset; - } - - static Instr BitN(unsigned bitn, unsigned reg_size) { - VIXL_ASSERT((reg_size == kWRegSize) || (reg_size == kXRegSize)); - VIXL_ASSERT((reg_size == kXRegSize) || (bitn == 0)); - USE(reg_size); - return bitn << BitN_offset; - } - - static Instr ShiftDP(Shift shift) { - VIXL_ASSERT(shift == LSL || shift == LSR || shift == ASR || shift == ROR); - return shift << ShiftDP_offset; - } - - static Instr ImmDPShift(unsigned amount) { - VIXL_ASSERT(is_uint6(amount)); - return amount << ImmDPShift_offset; - } - - static Instr ExtendMode(Extend extend) { - return extend << ExtendMode_offset; - } - - 
static Instr ImmExtendShift(unsigned left_shift) { - VIXL_ASSERT(left_shift <= 4); - return left_shift << ImmExtendShift_offset; - } - - static Instr ImmCondCmp(unsigned imm) { - VIXL_ASSERT(is_uint5(imm)); - return imm << ImmCondCmp_offset; - } - - static Instr Nzcv(StatusFlags nzcv) { - return ((nzcv >> Flags_offset) & 0xf) << Nzcv_offset; - } - - // MemOperand offset encoding. - static Instr ImmLSUnsigned(int imm12) { - VIXL_ASSERT(is_uint12(imm12)); - return imm12 << ImmLSUnsigned_offset; - } - - static Instr ImmLS(int imm9) { - VIXL_ASSERT(is_int9(imm9)); - return truncate_to_int9(imm9) << ImmLS_offset; - } - - static Instr ImmLSPair(int imm7, unsigned access_size) { - VIXL_ASSERT(((imm7 >> access_size) << access_size) == imm7); - int scaled_imm7 = imm7 >> access_size; - VIXL_ASSERT(is_int7(scaled_imm7)); - return truncate_to_int7(scaled_imm7) << ImmLSPair_offset; - } - - static Instr ImmShiftLS(unsigned shift_amount) { - VIXL_ASSERT(is_uint1(shift_amount)); - return shift_amount << ImmShiftLS_offset; - } - - static Instr ImmPrefetchOperation(int imm5) { - VIXL_ASSERT(is_uint5(imm5)); - return imm5 << ImmPrefetchOperation_offset; - } - - static Instr ImmException(int imm16) { - VIXL_ASSERT(is_uint16(imm16)); - return imm16 << ImmException_offset; - } - - static Instr ImmSystemRegister(int imm15) { - VIXL_ASSERT(is_uint15(imm15)); - return imm15 << ImmSystemRegister_offset; - } - - static Instr ImmHint(int imm7) { - VIXL_ASSERT(is_uint7(imm7)); - return imm7 << ImmHint_offset; - } - - static Instr CRm(int imm4) { - VIXL_ASSERT(is_uint4(imm4)); - return imm4 << CRm_offset; - } - - static Instr CRn(int imm4) { - VIXL_ASSERT(is_uint4(imm4)); - return imm4 << CRn_offset; - } - - static Instr SysOp(int imm14) { - VIXL_ASSERT(is_uint14(imm14)); - return imm14 << SysOp_offset; - } - - static Instr ImmSysOp1(int imm3) { - VIXL_ASSERT(is_uint3(imm3)); - return imm3 << SysOp1_offset; - } - - static Instr ImmSysOp2(int imm3) { - VIXL_ASSERT(is_uint3(imm3)); - 
return imm3 << SysOp2_offset; - } - - static Instr ImmBarrierDomain(int imm2) { - VIXL_ASSERT(is_uint2(imm2)); - return imm2 << ImmBarrierDomain_offset; - } - - static Instr ImmBarrierType(int imm2) { - VIXL_ASSERT(is_uint2(imm2)); - return imm2 << ImmBarrierType_offset; - } - - // Move immediates encoding. - static Instr ImmMoveWide(uint64_t imm) { - VIXL_ASSERT(is_uint16(imm)); - return static_cast<Instr>(imm << ImmMoveWide_offset); - } - - static Instr ShiftMoveWide(int64_t shift) { - VIXL_ASSERT(is_uint2(shift)); - return static_cast<Instr>(shift << ShiftMoveWide_offset); - } - - // FP Immediates. - static Instr ImmFP32(float imm); - static Instr ImmFP64(double imm); - - // FP register type. - static Instr FPType(FPRegister fd) { - return fd.Is64Bits() ? FP64 : FP32; - } - - static Instr FPScale(unsigned scale) { - VIXL_ASSERT(is_uint6(scale)); - return scale << FPScale_offset; - } - - // Immediate field checking helpers. - static bool IsImmAddSub(int64_t immediate); - static bool IsImmConditionalCompare(int64_t immediate); - static bool IsImmFP32(float imm); - static bool IsImmFP64(double imm); - static bool IsImmLogical(uint64_t value, - unsigned width, - unsigned* n = NULL, - unsigned* imm_s = NULL, - unsigned* imm_r = NULL); - static bool IsImmLSPair(int64_t offset, unsigned access_size); - static bool IsImmLSScaled(int64_t offset, unsigned access_size); - static bool IsImmLSUnscaled(int64_t offset); - static bool IsImmMovn(uint64_t imm, unsigned reg_size); - static bool IsImmMovz(uint64_t imm, unsigned reg_size); - - // Instruction bits for vector format in data processing operations. 
- static Instr VFormat(VRegister vd) { - if (vd.Is64Bits()) { - switch (vd.lanes()) { - case 2: return NEON_2S; - case 4: return NEON_4H; - case 8: return NEON_8B; - default: return 0xffffffff; - } - } else { - VIXL_ASSERT(vd.Is128Bits()); - switch (vd.lanes()) { - case 2: return NEON_2D; - case 4: return NEON_4S; - case 8: return NEON_8H; - case 16: return NEON_16B; - default: return 0xffffffff; - } - } - } - - // Instruction bits for vector format in floating point data processing - // operations. - static Instr FPFormat(VRegister vd) { - if (vd.lanes() == 1) { - // Floating point scalar formats. - VIXL_ASSERT(vd.Is32Bits() || vd.Is64Bits()); - return vd.Is64Bits() ? FP64 : FP32; - } - - // Two lane floating point vector formats. - if (vd.lanes() == 2) { - VIXL_ASSERT(vd.Is64Bits() || vd.Is128Bits()); - return vd.Is128Bits() ? NEON_FP_2D : NEON_FP_2S; - } - - // Four lane floating point vector format. - VIXL_ASSERT((vd.lanes() == 4) && vd.Is128Bits()); - return NEON_FP_4S; - } - - // Instruction bits for vector format in load and store operations. - static Instr LSVFormat(VRegister vd) { - if (vd.Is64Bits()) { - switch (vd.lanes()) { - case 1: return LS_NEON_1D; - case 2: return LS_NEON_2S; - case 4: return LS_NEON_4H; - case 8: return LS_NEON_8B; - default: return 0xffffffff; - } - } else { - VIXL_ASSERT(vd.Is128Bits()); - switch (vd.lanes()) { - case 2: return LS_NEON_2D; - case 4: return LS_NEON_4S; - case 8: return LS_NEON_8H; - case 16: return LS_NEON_16B; - default: return 0xffffffff; - } - } - } - - // Instruction bits for scalar format in data processing operations. 
- static Instr SFormat(VRegister vd) { - VIXL_ASSERT(vd.lanes() == 1); - switch (vd.SizeInBytes()) { - case 1: return NEON_B; - case 2: return NEON_H; - case 4: return NEON_S; - case 8: return NEON_D; - default: return 0xffffffff; - } - } - - static Instr ImmNEONHLM(int index, int num_bits) { - int h, l, m; - if (num_bits == 3) { - VIXL_ASSERT(is_uint3(index)); - h = (index >> 2) & 1; - l = (index >> 1) & 1; - m = (index >> 0) & 1; - } else if (num_bits == 2) { - VIXL_ASSERT(is_uint2(index)); - h = (index >> 1) & 1; - l = (index >> 0) & 1; - m = 0; - } else { - VIXL_ASSERT(is_uint1(index) && (num_bits == 1)); - h = (index >> 0) & 1; - l = 0; - m = 0; - } - return (h << NEONH_offset) | (l << NEONL_offset) | (m << NEONM_offset); - } - - static Instr ImmNEONExt(int imm4) { - VIXL_ASSERT(is_uint4(imm4)); - return imm4 << ImmNEONExt_offset; - } - - static Instr ImmNEON5(Instr format, int index) { - VIXL_ASSERT(is_uint4(index)); - int s = LaneSizeInBytesLog2FromFormat(static_cast<VectorFormat>(format)); - int imm5 = (index << (s + 1)) | (1 << s); - return imm5 << ImmNEON5_offset; - } - - static Instr ImmNEON4(Instr format, int index) { - VIXL_ASSERT(is_uint4(index)); - int s = LaneSizeInBytesLog2FromFormat(static_cast<VectorFormat>(format)); - int imm4 = index << s; - return imm4 << ImmNEON4_offset; - } - - static Instr ImmNEONabcdefgh(int imm8) { - VIXL_ASSERT(is_uint8(imm8)); - Instr instr; - instr = ((imm8 >> 5) & 7) << ImmNEONabc_offset; - instr |= (imm8 & 0x1f) << ImmNEONdefgh_offset; - return instr; - } - - static Instr NEONCmode(int cmode) { - VIXL_ASSERT(is_uint4(cmode)); - return cmode << NEONCmode_offset; - } - - static Instr NEONModImmOp(int op) { - VIXL_ASSERT(is_uint1(op)); - return op << NEONModImmOp_offset; - } - - // Size of the code generated since label to the current position. 
- size_t SizeOfCodeGeneratedSince(Label* label) const { - VIXL_ASSERT(label->IsBound()); - return buffer_->OffsetFrom(label->location()); - } - - size_t SizeOfCodeGenerated() const { - return buffer_->CursorOffset(); - } - - size_t BufferCapacity() const { return buffer_->capacity(); } - - size_t RemainingBufferSpace() const { return buffer_->RemainingBytes(); } - - void EnsureSpaceFor(size_t amount) { - if (buffer_->RemainingBytes() < amount) { - size_t capacity = buffer_->capacity(); - size_t size = buffer_->CursorOffset(); - do { - // TODO(all): refine. - capacity *= 2; - } while ((capacity - size) < amount); - buffer_->Grow(capacity); - } - } - -#ifdef VIXL_DEBUG - void AcquireBuffer() { - VIXL_ASSERT(buffer_monitor_ >= 0); - buffer_monitor_++; - } - - void ReleaseBuffer() { - buffer_monitor_--; - VIXL_ASSERT(buffer_monitor_ >= 0); - } -#endif - - PositionIndependentCodeOption pic() const { - return pic_; - } - - bool AllowPageOffsetDependentCode() const { - return (pic() == PageOffsetDependentCode) || - (pic() == PositionDependentCode); - } - - static const Register& AppropriateZeroRegFor(const CPURegister& reg) { - return reg.Is64Bits() ? 
xzr : wzr; - } - - - protected: - void LoadStore(const CPURegister& rt, - const MemOperand& addr, - LoadStoreOp op, - LoadStoreScalingOption option = PreferScaledOffset); - - void LoadStorePair(const CPURegister& rt, - const CPURegister& rt2, - const MemOperand& addr, - LoadStorePairOp op); - void LoadStoreStruct(const VRegister& vt, - const MemOperand& addr, - NEONLoadStoreMultiStructOp op); - void LoadStoreStruct1(const VRegister& vt, - int reg_count, - const MemOperand& addr); - void LoadStoreStructSingle(const VRegister& vt, - uint32_t lane, - const MemOperand& addr, - NEONLoadStoreSingleStructOp op); - void LoadStoreStructSingleAllLanes(const VRegister& vt, - const MemOperand& addr, - NEONLoadStoreSingleStructOp op); - void LoadStoreStructVerify(const VRegister& vt, - const MemOperand& addr, - Instr op); - - void Prefetch(PrefetchOperation op, - const MemOperand& addr, - LoadStoreScalingOption option = PreferScaledOffset); - - // TODO(all): The third parameter should be passed by reference but gcc 4.8.2 - // reports a bogus uninitialised warning then. - void Logical(const Register& rd, - const Register& rn, - const Operand operand, - LogicalOp op); - void LogicalImmediate(const Register& rd, - const Register& rn, - unsigned n, - unsigned imm_s, - unsigned imm_r, - LogicalOp op); - - void ConditionalCompare(const Register& rn, - const Operand& operand, - StatusFlags nzcv, - Condition cond, - ConditionalCompareOp op); - - void AddSubWithCarry(const Register& rd, - const Register& rn, - const Operand& operand, - FlagsUpdate S, - AddSubWithCarryOp op); - - - // Functions for emulating operands not directly supported by the instruction - // set. 
- void EmitShift(const Register& rd, - const Register& rn, - Shift shift, - unsigned amount); - void EmitExtendShift(const Register& rd, - const Register& rn, - Extend extend, - unsigned left_shift); - - void AddSub(const Register& rd, - const Register& rn, - const Operand& operand, - FlagsUpdate S, - AddSubOp op); - - void NEONTable(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - NEONTableOp op); - - // Find an appropriate LoadStoreOp or LoadStorePairOp for the specified - // registers. Only simple loads are supported; sign- and zero-extension (such - // as in LDPSW_x or LDRB_w) are not supported. - static LoadStoreOp LoadOpFor(const CPURegister& rt); - static LoadStorePairOp LoadPairOpFor(const CPURegister& rt, - const CPURegister& rt2); - static LoadStoreOp StoreOpFor(const CPURegister& rt); - static LoadStorePairOp StorePairOpFor(const CPURegister& rt, - const CPURegister& rt2); - static LoadStorePairNonTemporalOp LoadPairNonTemporalOpFor( - const CPURegister& rt, const CPURegister& rt2); - static LoadStorePairNonTemporalOp StorePairNonTemporalOpFor( - const CPURegister& rt, const CPURegister& rt2); - static LoadLiteralOp LoadLiteralOpFor(const CPURegister& rt); - - - private: - static uint32_t FP32ToImm8(float imm); - static uint32_t FP64ToImm8(double imm); - - // Instruction helpers. 
- void MoveWide(const Register& rd, - uint64_t imm, - int shift, - MoveWideImmediateOp mov_op); - void DataProcShiftedRegister(const Register& rd, - const Register& rn, - const Operand& operand, - FlagsUpdate S, - Instr op); - void DataProcExtendedRegister(const Register& rd, - const Register& rn, - const Operand& operand, - FlagsUpdate S, - Instr op); - void LoadStorePairNonTemporal(const CPURegister& rt, - const CPURegister& rt2, - const MemOperand& addr, - LoadStorePairNonTemporalOp op); - void LoadLiteral(const CPURegister& rt, uint64_t imm, LoadLiteralOp op); - void ConditionalSelect(const Register& rd, - const Register& rn, - const Register& rm, - Condition cond, - ConditionalSelectOp op); - void DataProcessing1Source(const Register& rd, - const Register& rn, - DataProcessing1SourceOp op); - void DataProcessing3Source(const Register& rd, - const Register& rn, - const Register& rm, - const Register& ra, - DataProcessing3SourceOp op); - void FPDataProcessing1Source(const VRegister& fd, - const VRegister& fn, - FPDataProcessing1SourceOp op); - void FPDataProcessing3Source(const VRegister& fd, - const VRegister& fn, - const VRegister& fm, - const VRegister& fa, - FPDataProcessing3SourceOp op); - void NEONAcrossLanesL(const VRegister& vd, - const VRegister& vn, - NEONAcrossLanesOp op); - void NEONAcrossLanes(const VRegister& vd, - const VRegister& vn, - NEONAcrossLanesOp op); - void NEONModifiedImmShiftLsl(const VRegister& vd, - const int imm8, - const int left_shift, - NEONModifiedImmediateOp op); - void NEONModifiedImmShiftMsl(const VRegister& vd, - const int imm8, - const int shift_amount, - NEONModifiedImmediateOp op); - void NEONFP2Same(const VRegister& vd, - const VRegister& vn, - Instr vop); - void NEON3Same(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - NEON3SameOp vop); - void NEONFP3Same(const VRegister& vd, - const VRegister& vn, - const VRegister& vm, - Instr op); - void NEON3DifferentL(const VRegister& vd, - const VRegister& vn, 
-                       const VRegister& vm,
-                       NEON3DifferentOp vop);
-  void NEON3DifferentW(const VRegister& vd,
-                       const VRegister& vn,
-                       const VRegister& vm,
-                       NEON3DifferentOp vop);
-  void NEON3DifferentHN(const VRegister& vd,
-                        const VRegister& vn,
-                        const VRegister& vm,
-                        NEON3DifferentOp vop);
-  void NEONFP2RegMisc(const VRegister& vd,
-                      const VRegister& vn,
-                      NEON2RegMiscOp vop,
-                      double value = 0.0);
-  void NEON2RegMisc(const VRegister& vd,
-                    const VRegister& vn,
-                    NEON2RegMiscOp vop,
-                    int value = 0);
-  void NEONFP2RegMisc(const VRegister& vd,
-                      const VRegister& vn,
-                      Instr op);
-  void NEONAddlp(const VRegister& vd,
-                 const VRegister& vn,
-                 NEON2RegMiscOp op);
-  void NEONPerm(const VRegister& vd,
-                const VRegister& vn,
-                const VRegister& vm,
-                NEONPermOp op);
-  void NEONFPByElement(const VRegister& vd,
-                       const VRegister& vn,
-                       const VRegister& vm,
-                       int vm_index,
-                       NEONByIndexedElementOp op);
-  void NEONByElement(const VRegister& vd,
-                     const VRegister& vn,
-                     const VRegister& vm,
-                     int vm_index,
-                     NEONByIndexedElementOp op);
-  void NEONByElementL(const VRegister& vd,
-                      const VRegister& vn,
-                      const VRegister& vm,
-                      int vm_index,
-                      NEONByIndexedElementOp op);
-  void NEONShiftImmediate(const VRegister& vd,
-                          const VRegister& vn,
-                          NEONShiftImmediateOp op,
-                          int immh_immb);
-  void NEONShiftLeftImmediate(const VRegister& vd,
-                              const VRegister& vn,
-                              int shift,
-                              NEONShiftImmediateOp op);
-  void NEONShiftRightImmediate(const VRegister& vd,
-                               const VRegister& vn,
-                               int shift,
-                               NEONShiftImmediateOp op);
-  void NEONShiftImmediateL(const VRegister& vd,
-                           const VRegister& vn,
-                           int shift,
-                           NEONShiftImmediateOp op);
-  void NEONShiftImmediateN(const VRegister& vd,
-                           const VRegister& vn,
-                           int shift,
-                           NEONShiftImmediateOp op);
-  void NEONXtn(const VRegister& vd,
-               const VRegister& vn,
-               NEON2RegMiscOp vop);
-
-  Instr LoadStoreStructAddrModeField(const MemOperand& addr);
-
-  // Encode the specified MemOperand for the specified access size and scaling
-  // preference.
-  Instr LoadStoreMemOperand(const MemOperand& addr,
-                            unsigned access_size,
-                            LoadStoreScalingOption option);
-
-  // Link the current (not-yet-emitted) instruction to the specified label, then
-  // return an offset to be encoded in the instruction. If the label is not yet
-  // bound, an offset of 0 is returned.
-  ptrdiff_t LinkAndGetByteOffsetTo(Label * label);
-  ptrdiff_t LinkAndGetInstructionOffsetTo(Label * label);
-  ptrdiff_t LinkAndGetPageOffsetTo(Label * label);
-
-  // A common implementation for the LinkAndGetOffsetTo helpers.
-  template <int element_shift>
-  ptrdiff_t LinkAndGetOffsetTo(Label* label);
-
-  // Literal load offset are in words (32-bit).
-  ptrdiff_t LinkAndGetWordOffsetTo(RawLiteral* literal);
-
-  // Emit the instruction in buffer_.
-  void Emit(Instr instruction) {
-    VIXL_STATIC_ASSERT(sizeof(instruction) == kInstructionSize);
-    VIXL_ASSERT(buffer_monitor_ > 0);
-    buffer_->Emit32(instruction);
-  }
-
-  // Buffer where the code is emitted.
-  CodeBuffer* buffer_;
-  PositionIndependentCodeOption pic_;
-
-#ifdef VIXL_DEBUG
-  int64_t buffer_monitor_;
-#endif
-};
-
-
-// All Assembler emits MUST acquire/release the underlying code buffer. The
-// helper scope below will do so and optionally ensure the buffer is big enough
-// to receive the emit. It is possible to request the scope not to perform any
-// checks (kNoCheck) if for example it is known in advance the buffer size is
-// adequate or there is some other size checking mechanism in place.
-class CodeBufferCheckScope {
- public:
-  // Tell whether or not the scope needs to ensure the associated CodeBuffer
-  // has enough space for the requested size.
-  enum CheckPolicy {
-    kNoCheck,
-    kCheck
-  };
-
-  // Tell whether or not the scope should assert the amount of code emitted
-  // within the scope is consistent with the requested amount.
-  enum AssertPolicy {
-    kNoAssert,    // No assert required.
-    kExactSize,   // The code emitted must be exactly size bytes.
-    kMaximumSize  // The code emitted must be at most size bytes.
-  };
-
-  CodeBufferCheckScope(Assembler* assm,
-                       size_t size,
-                       CheckPolicy check_policy = kCheck,
-                       AssertPolicy assert_policy = kMaximumSize)
-      : assm_(assm) {
-    if (check_policy == kCheck) assm->EnsureSpaceFor(size);
-#ifdef VIXL_DEBUG
-    assm->bind(&start_);
-    size_ = size;
-    assert_policy_ = assert_policy;
-    assm->AcquireBuffer();
-#else
-    USE(assert_policy);
-#endif
-  }
-
-  // This is a shortcut for CodeBufferCheckScope(assm, 0, kNoCheck, kNoAssert).
-  explicit CodeBufferCheckScope(Assembler* assm) : assm_(assm) {
-#ifdef VIXL_DEBUG
-    size_ = 0;
-    assert_policy_ = kNoAssert;
-    assm->AcquireBuffer();
-#endif
-  }
-
-  ~CodeBufferCheckScope() {
-#ifdef VIXL_DEBUG
-    assm_->ReleaseBuffer();
-    switch (assert_policy_) {
-      case kNoAssert: break;
-      case kExactSize:
-        VIXL_ASSERT(assm_->SizeOfCodeGeneratedSince(&start_) == size_);
-        break;
-      case kMaximumSize:
-        VIXL_ASSERT(assm_->SizeOfCodeGeneratedSince(&start_) <= size_);
-        break;
-      default:
-        VIXL_UNREACHABLE();
-    }
-#endif
-  }
-
- protected:
-  Assembler* assm_;
-#ifdef VIXL_DEBUG
-  Label start_;
-  size_t size_;
-  AssertPolicy assert_policy_;
-#endif
-};
-
-
-template <typename T>
-void Literal<T>::UpdateValue(T new_value, const Assembler* assembler) {
-  return UpdateValue(new_value, assembler->GetStartAddress<uint8_t*>());
-}
-
-
-template <typename T>
-void Literal<T>::UpdateValue(T high64, T low64, const Assembler* assembler) {
-  return UpdateValue(high64, low64, assembler->GetStartAddress<uint8_t*>());
-}
-
-
-}  // namespace vixl
-
-#endif  // VIXL_A64_ASSEMBLER_A64_H_
diff --git a/disas/libvixl/vixl/a64/constants-a64.h b/disas/libvixl/vixl/a64/constants-a64.h
deleted file mode 100644
index 2caa73af87..0000000000
--- a/disas/libvixl/vixl/a64/constants-a64.h
+++ /dev/null
@@ -1,2116 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_A64_CONSTANTS_A64_H_
-#define VIXL_A64_CONSTANTS_A64_H_
-
-namespace vixl {
-
-const unsigned kNumberOfRegisters = 32;
-const unsigned kNumberOfVRegisters = 32;
-const unsigned kNumberOfFPRegisters = kNumberOfVRegisters;
-// Callee saved registers are x21-x30(lr).
-const int kNumberOfCalleeSavedRegisters = 10;
-const int kFirstCalleeSavedRegisterIndex = 21;
-// Callee saved FP registers are d8-d15.
-const int kNumberOfCalleeSavedFPRegisters = 8;
-const int kFirstCalleeSavedFPRegisterIndex = 8;
-
-#define REGISTER_CODE_LIST(R) \
-R(0) R(1) R(2) R(3) R(4) R(5) R(6) R(7) \
-R(8) R(9) R(10) R(11) R(12) R(13) R(14) R(15) \
-R(16) R(17) R(18) R(19) R(20) R(21) R(22) R(23) \
-R(24) R(25) R(26) R(27) R(28) R(29) R(30) R(31)
-
-#define INSTRUCTION_FIELDS_LIST(V_) \
-/* Register fields */ \
-V_(Rd, 4, 0, Bits)     /* Destination register. */ \
-V_(Rn, 9, 5, Bits)     /* First source register. */ \
-V_(Rm, 20, 16, Bits)   /* Second source register. */ \
-V_(Ra, 14, 10, Bits)   /* Third source register. */ \
-V_(Rt, 4, 0, Bits)     /* Load/store register. */ \
-V_(Rt2, 14, 10, Bits)  /* Load/store second register. */ \
-V_(Rs, 20, 16, Bits)   /* Exclusive access status. */ \
- \
-/* Common bits */ \
-V_(SixtyFourBits, 31, 31, Bits) \
-V_(FlagsUpdate, 29, 29, Bits) \
- \
-/* PC relative addressing */ \
-V_(ImmPCRelHi, 23, 5, SignedBits) \
-V_(ImmPCRelLo, 30, 29, Bits) \
- \
-/* Add/subtract/logical shift register */ \
-V_(ShiftDP, 23, 22, Bits) \
-V_(ImmDPShift, 15, 10, Bits) \
- \
-/* Add/subtract immediate */ \
-V_(ImmAddSub, 21, 10, Bits) \
-V_(ShiftAddSub, 23, 22, Bits) \
- \
-/* Add/substract extend */ \
-V_(ImmExtendShift, 12, 10, Bits) \
-V_(ExtendMode, 15, 13, Bits) \
- \
-/* Move wide */ \
-V_(ImmMoveWide, 20, 5, Bits) \
-V_(ShiftMoveWide, 22, 21, Bits) \
- \
-/* Logical immediate, bitfield and extract */ \
-V_(BitN, 22, 22, Bits) \
-V_(ImmRotate, 21, 16, Bits) \
-V_(ImmSetBits, 15, 10, Bits) \
-V_(ImmR, 21, 16, Bits) \
-V_(ImmS, 15, 10, Bits) \
- \
-/* Test and branch immediate */ \
-V_(ImmTestBranch, 18, 5, SignedBits) \
-V_(ImmTestBranchBit40, 23, 19, Bits) \
-V_(ImmTestBranchBit5, 31, 31, Bits) \
- \
-/* Conditionals */ \
-V_(Condition, 15, 12, Bits) \
-V_(ConditionBranch, 3, 0, Bits) \
-V_(Nzcv, 3, 0, Bits) \
-V_(ImmCondCmp, 20, 16, Bits) \
-V_(ImmCondBranch, 23, 5, SignedBits) \
- \
-/* Floating point */ \
-V_(FPType, 23, 22, Bits) \
-V_(ImmFP, 20, 13, Bits) \
-V_(FPScale, 15, 10, Bits) \
- \
-/* Load Store */ \
-V_(ImmLS, 20, 12, SignedBits) \
-V_(ImmLSUnsigned, 21, 10, Bits) \
-V_(ImmLSPair, 21, 15, SignedBits) \
-V_(ImmShiftLS, 12, 12, Bits) \
-V_(LSOpc, 23, 22, Bits) \
-V_(LSVector, 26, 26, Bits) \
-V_(LSSize, 31, 30, Bits) \
-V_(ImmPrefetchOperation, 4, 0, Bits) \
-V_(PrefetchHint, 4, 3, Bits) \
-V_(PrefetchTarget, 2, 1, Bits) \
-V_(PrefetchStream, 0, 0, Bits) \
- \
-/* Other immediates */ \
-V_(ImmUncondBranch, 25, 0, SignedBits) \
-V_(ImmCmpBranch, 23, 5, SignedBits) \
-V_(ImmLLiteral, 23, 5, SignedBits) \
-V_(ImmException, 20, 5, Bits) \
-V_(ImmHint, 11, 5, Bits) \
-V_(ImmBarrierDomain, 11, 10, Bits) \
-V_(ImmBarrierType, 9, 8, Bits) \
- \
-/* System (MRS, MSR, SYS) */ \
-V_(ImmSystemRegister, 19, 5, Bits) \
-V_(SysO0, 19, 19, Bits) \
-V_(SysOp, 18, 5, Bits) \
-V_(SysOp1, 18, 16, Bits) \
-V_(SysOp2, 7, 5, Bits) \
-V_(CRn, 15, 12, Bits) \
-V_(CRm, 11, 8, Bits) \
- \
-/* Load-/store-exclusive */ \
-V_(LdStXLoad, 22, 22, Bits) \
-V_(LdStXNotExclusive, 23, 23, Bits) \
-V_(LdStXAcquireRelease, 15, 15, Bits) \
-V_(LdStXSizeLog2, 31, 30, Bits) \
-V_(LdStXPair, 21, 21, Bits) \
- \
-/* NEON generic fields */ \
-V_(NEONQ, 30, 30, Bits) \
-V_(NEONSize, 23, 22, Bits) \
-V_(NEONLSSize, 11, 10, Bits) \
-V_(NEONS, 12, 12, Bits) \
-V_(NEONL, 21, 21, Bits) \
-V_(NEONM, 20, 20, Bits) \
-V_(NEONH, 11, 11, Bits) \
-V_(ImmNEONExt, 14, 11, Bits) \
-V_(ImmNEON5, 20, 16, Bits) \
-V_(ImmNEON4, 14, 11, Bits) \
- \
-/* NEON Modified Immediate fields */ \
-V_(ImmNEONabc, 18, 16, Bits) \
-V_(ImmNEONdefgh, 9, 5, Bits) \
-V_(NEONModImmOp, 29, 29, Bits) \
-V_(NEONCmode, 15, 12, Bits) \
- \
-/* NEON Shift Immediate fields */ \
-V_(ImmNEONImmhImmb, 22, 16, Bits) \
-V_(ImmNEONImmh, 22, 19, Bits) \
-V_(ImmNEONImmb, 18, 16, Bits)
-
-#define SYSTEM_REGISTER_FIELDS_LIST(V_, M_) \
-/* NZCV */ \
-V_(Flags, 31, 28, Bits) \
-V_(N, 31, 31, Bits) \
-V_(Z, 30, 30, Bits) \
-V_(C, 29, 29, Bits) \
-V_(V, 28, 28, Bits) \
-M_(NZCV, Flags_mask) \
-/* FPCR */ \
-V_(AHP, 26, 26, Bits) \
-V_(DN, 25, 25, Bits) \
-V_(FZ, 24, 24, Bits) \
-V_(RMode, 23, 22, Bits) \
-M_(FPCR, AHP_mask | DN_mask | FZ_mask | RMode_mask)
-
-// Fields offsets.
-#define DECLARE_FIELDS_OFFSETS(Name, HighBit, LowBit, X) \
-const int Name##_offset = LowBit; \
-const int Name##_width = HighBit - LowBit + 1; \
-const uint32_t Name##_mask = ((1 << Name##_width) - 1) << LowBit;
-#define NOTHING(A, B)
-INSTRUCTION_FIELDS_LIST(DECLARE_FIELDS_OFFSETS)
-SYSTEM_REGISTER_FIELDS_LIST(DECLARE_FIELDS_OFFSETS, NOTHING)
-#undef NOTHING
-#undef DECLARE_FIELDS_BITS
-
-// ImmPCRel is a compound field (not present in INSTRUCTION_FIELDS_LIST), formed
-// from ImmPCRelLo and ImmPCRelHi.
-const int ImmPCRel_mask = ImmPCRelLo_mask | ImmPCRelHi_mask;
-
-// Condition codes.
-enum Condition {
-  eq = 0,   // Z set            Equal.
-  ne = 1,   // Z clear          Not equal.
-  cs = 2,   // C set            Carry set.
-  cc = 3,   // C clear          Carry clear.
-  mi = 4,   // N set            Negative.
-  pl = 5,   // N clear          Positive or zero.
-  vs = 6,   // V set            Overflow.
-  vc = 7,   // V clear          No overflow.
-  hi = 8,   // C set, Z clear   Unsigned higher.
-  ls = 9,   // C clear or Z set Unsigned lower or same.
-  ge = 10,  // N == V           Greater or equal.
-  lt = 11,  // N != V           Less than.
-  gt = 12,  // Z clear, N == V  Greater than.
-  le = 13,  // Z set or N != V  Less then or equal
-  al = 14,  // Always.
-  nv = 15,  // Behaves as always/al.
-
-  // Aliases.
-  hs = cs,  // C set            Unsigned higher or same.
-  lo = cc   // C clear          Unsigned lower.
-};
-
-inline Condition InvertCondition(Condition cond) {
-  // Conditions al and nv behave identically, as "always true". They can't be
-  // inverted, because there is no "always false" condition.
-  VIXL_ASSERT((cond != al) && (cond != nv));
-  return static_cast<Condition>(cond ^ 1);
-}
-
-enum FPTrapFlags {
-  EnableTrap = 1,
-  DisableTrap = 0
-};
-
-enum FlagsUpdate {
-  SetFlags = 1,
-  LeaveFlags = 0
-};
-
-enum StatusFlags {
-  NoFlag = 0,
-
-  // Derive the flag combinations from the system register bit descriptions.
-  NFlag = N_mask,
-  ZFlag = Z_mask,
-  CFlag = C_mask,
-  VFlag = V_mask,
-  NZFlag = NFlag | ZFlag,
-  NCFlag = NFlag | CFlag,
-  NVFlag = NFlag | VFlag,
-  ZCFlag = ZFlag | CFlag,
-  ZVFlag = ZFlag | VFlag,
-  CVFlag = CFlag | VFlag,
-  NZCFlag = NFlag | ZFlag | CFlag,
-  NZVFlag = NFlag | ZFlag | VFlag,
-  NCVFlag = NFlag | CFlag | VFlag,
-  ZCVFlag = ZFlag | CFlag | VFlag,
-  NZCVFlag = NFlag | ZFlag | CFlag | VFlag,
-
-  // Floating-point comparison results.
-  FPEqualFlag = ZCFlag,
-  FPLessThanFlag = NFlag,
-  FPGreaterThanFlag = CFlag,
-  FPUnorderedFlag = CVFlag
-};
-
-enum Shift {
-  NO_SHIFT = -1,
-  LSL = 0x0,
-  LSR = 0x1,
-  ASR = 0x2,
-  ROR = 0x3,
-  MSL = 0x4
-};
-
-enum Extend {
-  NO_EXTEND = -1,
-  UXTB = 0,
-  UXTH = 1,
-  UXTW = 2,
-  UXTX = 3,
-  SXTB = 4,
-  SXTH = 5,
-  SXTW = 6,
-  SXTX = 7
-};
-
-enum SystemHint {
-  NOP = 0,
-  YIELD = 1,
-  WFE = 2,
-  WFI = 3,
-  SEV = 4,
-  SEVL = 5
-};
-
-enum BarrierDomain {
-  OuterShareable = 0,
-  NonShareable = 1,
-  InnerShareable = 2,
-  FullSystem = 3
-};
-
-enum BarrierType {
-  BarrierOther = 0,
-  BarrierReads = 1,
-  BarrierWrites = 2,
-  BarrierAll = 3
-};
-
-enum PrefetchOperation {
-  PLDL1KEEP = 0x00,
-  PLDL1STRM = 0x01,
-  PLDL2KEEP = 0x02,
-  PLDL2STRM = 0x03,
-  PLDL3KEEP = 0x04,
-  PLDL3STRM = 0x05,
-
-  PLIL1KEEP = 0x08,
-  PLIL1STRM = 0x09,
-  PLIL2KEEP = 0x0a,
-  PLIL2STRM = 0x0b,
-  PLIL3KEEP = 0x0c,
-  PLIL3STRM = 0x0d,
-
-  PSTL1KEEP = 0x10,
-  PSTL1STRM = 0x11,
-  PSTL2KEEP = 0x12,
-  PSTL2STRM = 0x13,
-  PSTL3KEEP = 0x14,
-  PSTL3STRM = 0x15
-};
-
-// System/special register names.
-// This information is not encoded as one field but as the concatenation of
-// multiple fields (Op0<0>, Op1, Crn, Crm, Op2).
-enum SystemRegister {
-  NZCV = ((0x1 << SysO0_offset) |
-          (0x3 << SysOp1_offset) |
-          (0x4 << CRn_offset) |
-          (0x2 << CRm_offset) |
-          (0x0 << SysOp2_offset)) >> ImmSystemRegister_offset,
-  FPCR = ((0x1 << SysO0_offset) |
-          (0x3 << SysOp1_offset) |
-          (0x4 << CRn_offset) |
-          (0x4 << CRm_offset) |
-          (0x0 << SysOp2_offset)) >> ImmSystemRegister_offset
-};
-
-enum InstructionCacheOp {
-  IVAU = ((0x3 << SysOp1_offset) |
-          (0x7 << CRn_offset) |
-          (0x5 << CRm_offset) |
-          (0x1 << SysOp2_offset)) >> SysOp_offset
-};
-
-enum DataCacheOp {
-  CVAC = ((0x3 << SysOp1_offset) |
-          (0x7 << CRn_offset) |
-          (0xa << CRm_offset) |
-          (0x1 << SysOp2_offset)) >> SysOp_offset,
-  CVAU = ((0x3 << SysOp1_offset) |
-          (0x7 << CRn_offset) |
-          (0xb << CRm_offset) |
-          (0x1 << SysOp2_offset)) >> SysOp_offset,
-  CIVAC = ((0x3 << SysOp1_offset) |
-           (0x7 << CRn_offset) |
-           (0xe << CRm_offset) |
-           (0x1 << SysOp2_offset)) >> SysOp_offset,
-  ZVA = ((0x3 << SysOp1_offset) |
-         (0x7 << CRn_offset) |
-         (0x4 << CRm_offset) |
-         (0x1 << SysOp2_offset)) >> SysOp_offset
-};
-
-// Instruction enumerations.
-//
-// These are the masks that define a class of instructions, and the list of
-// instructions within each class. Each enumeration has a Fixed, FMask and
-// Mask value.
-//
-//   Fixed: The fixed bits in this instruction class.
-//   FMask: The mask used to extract the fixed bits in the class.
-//   Mask:  The mask used to identify the instructions within a class.
-//
-// The enumerations can be used like this:
-//
-// VIXL_ASSERT(instr->Mask(PCRelAddressingFMask) == PCRelAddressingFixed);
-// switch(instr->Mask(PCRelAddressingMask)) {
-//   case ADR:  Format("adr 'Xd, 'AddrPCRelByte"); break;
-//   case ADRP: Format("adrp 'Xd, 'AddrPCRelPage"); break;
-//   default:   printf("Unknown instruction\n");
-// }
-
-
-// Generic fields.
-enum GenericInstrField {
-  SixtyFourBits = 0x80000000,
-  ThirtyTwoBits = 0x00000000,
-  FP32 = 0x00000000,
-  FP64 = 0x00400000
-};
-
-enum NEONFormatField {
-  NEONFormatFieldMask = 0x40C00000,
-  NEON_Q = 0x40000000,
-  NEON_8B = 0x00000000,
-  NEON_16B = NEON_8B | NEON_Q,
-  NEON_4H = 0x00400000,
-  NEON_8H = NEON_4H | NEON_Q,
-  NEON_2S = 0x00800000,
-  NEON_4S = NEON_2S | NEON_Q,
-  NEON_1D = 0x00C00000,
-  NEON_2D = 0x00C00000 | NEON_Q
-};
-
-enum NEONFPFormatField {
-  NEONFPFormatFieldMask = 0x40400000,
-  NEON_FP_2S = FP32,
-  NEON_FP_4S = FP32 | NEON_Q,
-  NEON_FP_2D = FP64 | NEON_Q
-};
-
-enum NEONLSFormatField {
-  NEONLSFormatFieldMask = 0x40000C00,
-  LS_NEON_8B = 0x00000000,
-  LS_NEON_16B = LS_NEON_8B | NEON_Q,
-  LS_NEON_4H = 0x00000400,
-  LS_NEON_8H = LS_NEON_4H | NEON_Q,
-  LS_NEON_2S = 0x00000800,
-  LS_NEON_4S = LS_NEON_2S | NEON_Q,
-  LS_NEON_1D = 0x00000C00,
-  LS_NEON_2D = LS_NEON_1D | NEON_Q
-};
-
-enum NEONScalarFormatField {
-  NEONScalarFormatFieldMask = 0x00C00000,
-  NEONScalar = 0x10000000,
-  NEON_B = 0x00000000,
-  NEON_H = 0x00400000,
-  NEON_S = 0x00800000,
-  NEON_D = 0x00C00000
-};
-
-// PC relative addressing.
-enum PCRelAddressingOp {
-  PCRelAddressingFixed = 0x10000000,
-  PCRelAddressingFMask = 0x1F000000,
-  PCRelAddressingMask = 0x9F000000,
-  ADR = PCRelAddressingFixed | 0x00000000,
-  ADRP = PCRelAddressingFixed | 0x80000000
-};
-
-// Add/sub (immediate, shifted and extended.)
-const int kSFOffset = 31;
-enum AddSubOp {
-  AddSubOpMask = 0x60000000,
-  AddSubSetFlagsBit = 0x20000000,
-  ADD = 0x00000000,
-  ADDS = ADD | AddSubSetFlagsBit,
-  SUB = 0x40000000,
-  SUBS = SUB | AddSubSetFlagsBit
-};
-
-#define ADD_SUB_OP_LIST(V) \
-  V(ADD), \
-  V(ADDS), \
-  V(SUB), \
-  V(SUBS)
-
-enum AddSubImmediateOp {
-  AddSubImmediateFixed = 0x11000000,
-  AddSubImmediateFMask = 0x1F000000,
-  AddSubImmediateMask = 0xFF000000,
-  #define ADD_SUB_IMMEDIATE(A) \
-  A##_w_imm = AddSubImmediateFixed | A, \
-  A##_x_imm = AddSubImmediateFixed | A | SixtyFourBits
-  ADD_SUB_OP_LIST(ADD_SUB_IMMEDIATE)
-  #undef ADD_SUB_IMMEDIATE
-};
-
-enum AddSubShiftedOp {
-  AddSubShiftedFixed = 0x0B000000,
-  AddSubShiftedFMask = 0x1F200000,
-  AddSubShiftedMask = 0xFF200000,
-  #define ADD_SUB_SHIFTED(A) \
-  A##_w_shift = AddSubShiftedFixed | A, \
-  A##_x_shift = AddSubShiftedFixed | A | SixtyFourBits
-  ADD_SUB_OP_LIST(ADD_SUB_SHIFTED)
-  #undef ADD_SUB_SHIFTED
-};
-
-enum AddSubExtendedOp {
-  AddSubExtendedFixed = 0x0B200000,
-  AddSubExtendedFMask = 0x1F200000,
-  AddSubExtendedMask = 0xFFE00000,
-  #define ADD_SUB_EXTENDED(A) \
-  A##_w_ext = AddSubExtendedFixed | A, \
-  A##_x_ext = AddSubExtendedFixed | A | SixtyFourBits
-  ADD_SUB_OP_LIST(ADD_SUB_EXTENDED)
-  #undef ADD_SUB_EXTENDED
-};
-
-// Add/sub with carry.
-enum AddSubWithCarryOp {
-  AddSubWithCarryFixed = 0x1A000000,
-  AddSubWithCarryFMask = 0x1FE00000,
-  AddSubWithCarryMask = 0xFFE0FC00,
-  ADC_w = AddSubWithCarryFixed | ADD,
-  ADC_x = AddSubWithCarryFixed | ADD | SixtyFourBits,
-  ADC = ADC_w,
-  ADCS_w = AddSubWithCarryFixed | ADDS,
-  ADCS_x = AddSubWithCarryFixed | ADDS | SixtyFourBits,
-  SBC_w = AddSubWithCarryFixed | SUB,
-  SBC_x = AddSubWithCarryFixed | SUB | SixtyFourBits,
-  SBC = SBC_w,
-  SBCS_w = AddSubWithCarryFixed | SUBS,
-  SBCS_x = AddSubWithCarryFixed | SUBS | SixtyFourBits
-};
-
-
-// Logical (immediate and shifted register).
-enum LogicalOp {
-  LogicalOpMask = 0x60200000,
-  NOT = 0x00200000,
-  AND = 0x00000000,
-  BIC = AND | NOT,
-  ORR = 0x20000000,
-  ORN = ORR | NOT,
-  EOR = 0x40000000,
-  EON = EOR | NOT,
-  ANDS = 0x60000000,
-  BICS = ANDS | NOT
-};
-
-// Logical immediate.
-enum LogicalImmediateOp {
-  LogicalImmediateFixed = 0x12000000,
-  LogicalImmediateFMask = 0x1F800000,
-  LogicalImmediateMask = 0xFF800000,
-  AND_w_imm = LogicalImmediateFixed | AND,
-  AND_x_imm = LogicalImmediateFixed | AND | SixtyFourBits,
-  ORR_w_imm = LogicalImmediateFixed | ORR,
-  ORR_x_imm = LogicalImmediateFixed | ORR | SixtyFourBits,
-  EOR_w_imm = LogicalImmediateFixed | EOR,
-  EOR_x_imm = LogicalImmediateFixed | EOR | SixtyFourBits,
-  ANDS_w_imm = LogicalImmediateFixed | ANDS,
-  ANDS_x_imm = LogicalImmediateFixed | ANDS | SixtyFourBits
-};
-
-// Logical shifted register.
-enum LogicalShiftedOp {
-  LogicalShiftedFixed = 0x0A000000,
-  LogicalShiftedFMask = 0x1F000000,
-  LogicalShiftedMask = 0xFF200000,
-  AND_w = LogicalShiftedFixed | AND,
-  AND_x = LogicalShiftedFixed | AND | SixtyFourBits,
-  AND_shift = AND_w,
-  BIC_w = LogicalShiftedFixed | BIC,
-  BIC_x = LogicalShiftedFixed | BIC | SixtyFourBits,
-  BIC_shift = BIC_w,
-  ORR_w = LogicalShiftedFixed | ORR,
-  ORR_x = LogicalShiftedFixed | ORR | SixtyFourBits,
-  ORR_shift = ORR_w,
-  ORN_w = LogicalShiftedFixed | ORN,
-  ORN_x = LogicalShiftedFixed | ORN | SixtyFourBits,
-  ORN_shift = ORN_w,
-  EOR_w = LogicalShiftedFixed | EOR,
-  EOR_x = LogicalShiftedFixed | EOR | SixtyFourBits,
-  EOR_shift = EOR_w,
-  EON_w = LogicalShiftedFixed | EON,
-  EON_x = LogicalShiftedFixed | EON | SixtyFourBits,
-  EON_shift = EON_w,
-  ANDS_w = LogicalShiftedFixed | ANDS,
-  ANDS_x = LogicalShiftedFixed | ANDS | SixtyFourBits,
-  ANDS_shift = ANDS_w,
-  BICS_w = LogicalShiftedFixed | BICS,
-  BICS_x = LogicalShiftedFixed | BICS | SixtyFourBits,
-  BICS_shift = BICS_w
-};
-
-// Move wide immediate.
-enum MoveWideImmediateOp {
-  MoveWideImmediateFixed = 0x12800000,
-  MoveWideImmediateFMask = 0x1F800000,
-  MoveWideImmediateMask = 0xFF800000,
-  MOVN = 0x00000000,
-  MOVZ = 0x40000000,
-  MOVK = 0x60000000,
-  MOVN_w = MoveWideImmediateFixed | MOVN,
-  MOVN_x = MoveWideImmediateFixed | MOVN | SixtyFourBits,
-  MOVZ_w = MoveWideImmediateFixed | MOVZ,
-  MOVZ_x = MoveWideImmediateFixed | MOVZ | SixtyFourBits,
-  MOVK_w = MoveWideImmediateFixed | MOVK,
-  MOVK_x = MoveWideImmediateFixed | MOVK | SixtyFourBits
-};
-
-// Bitfield.
-const int kBitfieldNOffset = 22;
-enum BitfieldOp {
-  BitfieldFixed = 0x13000000,
-  BitfieldFMask = 0x1F800000,
-  BitfieldMask = 0xFF800000,
-  SBFM_w = BitfieldFixed | 0x00000000,
-  SBFM_x = BitfieldFixed | 0x80000000,
-  SBFM = SBFM_w,
-  BFM_w = BitfieldFixed | 0x20000000,
-  BFM_x = BitfieldFixed | 0xA0000000,
-  BFM = BFM_w,
-  UBFM_w = BitfieldFixed | 0x40000000,
-  UBFM_x = BitfieldFixed | 0xC0000000,
-  UBFM = UBFM_w
-  // Bitfield N field.
-};
-
-// Extract.
-enum ExtractOp {
-  ExtractFixed = 0x13800000,
-  ExtractFMask = 0x1F800000,
-  ExtractMask = 0xFFA00000,
-  EXTR_w = ExtractFixed | 0x00000000,
-  EXTR_x = ExtractFixed | 0x80000000,
-  EXTR = EXTR_w
-};
-
-// Unconditional branch.
-enum UnconditionalBranchOp {
-  UnconditionalBranchFixed = 0x14000000,
-  UnconditionalBranchFMask = 0x7C000000,
-  UnconditionalBranchMask = 0xFC000000,
-  B = UnconditionalBranchFixed | 0x00000000,
-  BL = UnconditionalBranchFixed | 0x80000000
-};
-
-// Unconditional branch to register.
-enum UnconditionalBranchToRegisterOp {
-  UnconditionalBranchToRegisterFixed = 0xD6000000,
-  UnconditionalBranchToRegisterFMask = 0xFE000000,
-  UnconditionalBranchToRegisterMask = 0xFFFFFC1F,
-  BR = UnconditionalBranchToRegisterFixed | 0x001F0000,
-  BLR = UnconditionalBranchToRegisterFixed | 0x003F0000,
-  RET = UnconditionalBranchToRegisterFixed | 0x005F0000
-};
-
-// Compare and branch.
-enum CompareBranchOp {
-  CompareBranchFixed = 0x34000000,
-  CompareBranchFMask = 0x7E000000,
-  CompareBranchMask = 0xFF000000,
-  CBZ_w = CompareBranchFixed | 0x00000000,
-  CBZ_x = CompareBranchFixed | 0x80000000,
-  CBZ = CBZ_w,
-  CBNZ_w = CompareBranchFixed | 0x01000000,
-  CBNZ_x = CompareBranchFixed | 0x81000000,
-  CBNZ = CBNZ_w
-};
-
-// Test and branch.
-enum TestBranchOp {
-  TestBranchFixed = 0x36000000,
-  TestBranchFMask = 0x7E000000,
-  TestBranchMask = 0x7F000000,
-  TBZ = TestBranchFixed | 0x00000000,
-  TBNZ = TestBranchFixed | 0x01000000
-};
-
-// Conditional branch.
-enum ConditionalBranchOp {
-  ConditionalBranchFixed = 0x54000000,
-  ConditionalBranchFMask = 0xFE000000,
-  ConditionalBranchMask = 0xFF000010,
-  B_cond = ConditionalBranchFixed | 0x00000000
-};
-
-// System.
-// System instruction encoding is complicated because some instructions use op
-// and CR fields to encode parameters. To handle this cleanly, the system
-// instructions are split into more than one enum.
-
-enum SystemOp {
-  SystemFixed = 0xD5000000,
-  SystemFMask = 0xFFC00000
-};
-
-enum SystemSysRegOp {
-  SystemSysRegFixed = 0xD5100000,
-  SystemSysRegFMask = 0xFFD00000,
-  SystemSysRegMask = 0xFFF00000,
-  MRS = SystemSysRegFixed | 0x00200000,
-  MSR = SystemSysRegFixed | 0x00000000
-};
-
-enum SystemHintOp {
-  SystemHintFixed = 0xD503201F,
-  SystemHintFMask = 0xFFFFF01F,
-  SystemHintMask = 0xFFFFF01F,
-  HINT = SystemHintFixed | 0x00000000
-};
-
-enum SystemSysOp {
-  SystemSysFixed = 0xD5080000,
-  SystemSysFMask = 0xFFF80000,
-  SystemSysMask = 0xFFF80000,
-  SYS = SystemSysFixed | 0x00000000
-};
-
-// Exception.
-enum ExceptionOp {
-  ExceptionFixed = 0xD4000000,
-  ExceptionFMask = 0xFF000000,
-  ExceptionMask = 0xFFE0001F,
-  HLT = ExceptionFixed | 0x00400000,
-  BRK = ExceptionFixed | 0x00200000,
-  SVC = ExceptionFixed | 0x00000001,
-  HVC = ExceptionFixed | 0x00000002,
-  SMC = ExceptionFixed | 0x00000003,
-  DCPS1 = ExceptionFixed | 0x00A00001,
-  DCPS2 = ExceptionFixed | 0x00A00002,
-  DCPS3 = ExceptionFixed | 0x00A00003
-};
-
-enum MemBarrierOp {
-  MemBarrierFixed = 0xD503309F,
-  MemBarrierFMask = 0xFFFFF09F,
-  MemBarrierMask = 0xFFFFF0FF,
-  DSB = MemBarrierFixed | 0x00000000,
-  DMB = MemBarrierFixed | 0x00000020,
-  ISB = MemBarrierFixed | 0x00000040
-};
-
-enum SystemExclusiveMonitorOp {
-  SystemExclusiveMonitorFixed = 0xD503305F,
-  SystemExclusiveMonitorFMask = 0xFFFFF0FF,
-  SystemExclusiveMonitorMask = 0xFFFFF0FF,
-  CLREX = SystemExclusiveMonitorFixed
-};
-
-// Any load or store.
-enum LoadStoreAnyOp {
-  LoadStoreAnyFMask = 0x0a000000,
-  LoadStoreAnyFixed = 0x08000000
-};
-
-// Any load pair or store pair.
-enum LoadStorePairAnyOp {
-  LoadStorePairAnyFMask = 0x3a000000,
-  LoadStorePairAnyFixed = 0x28000000
-};
-
-#define LOAD_STORE_PAIR_OP_LIST(V) \
-  V(STP, w, 0x00000000), \
-  V(LDP, w, 0x00400000), \
-  V(LDPSW, x, 0x40400000), \
-  V(STP, x, 0x80000000), \
-  V(LDP, x, 0x80400000), \
-  V(STP, s, 0x04000000), \
-  V(LDP, s, 0x04400000), \
-  V(STP, d, 0x44000000), \
-  V(LDP, d, 0x44400000), \
-  V(STP, q, 0x84000000), \
-  V(LDP, q, 0x84400000)
-
-// Load/store pair (post, pre and offset.)
-enum LoadStorePairOp {
-  LoadStorePairMask = 0xC4400000,
-  LoadStorePairLBit = 1 << 22,
-  #define LOAD_STORE_PAIR(A, B, C) \
-  A##_##B = C
-  LOAD_STORE_PAIR_OP_LIST(LOAD_STORE_PAIR)
-  #undef LOAD_STORE_PAIR
-};
-
-enum LoadStorePairPostIndexOp {
-  LoadStorePairPostIndexFixed = 0x28800000,
-  LoadStorePairPostIndexFMask = 0x3B800000,
-  LoadStorePairPostIndexMask = 0xFFC00000,
-  #define LOAD_STORE_PAIR_POST_INDEX(A, B, C) \
-  A##_##B##_post = LoadStorePairPostIndexFixed | A##_##B
-  LOAD_STORE_PAIR_OP_LIST(LOAD_STORE_PAIR_POST_INDEX)
-  #undef LOAD_STORE_PAIR_POST_INDEX
-};
-
-enum LoadStorePairPreIndexOp {
-  LoadStorePairPreIndexFixed = 0x29800000,
-  LoadStorePairPreIndexFMask = 0x3B800000,
-  LoadStorePairPreIndexMask = 0xFFC00000,
-  #define LOAD_STORE_PAIR_PRE_INDEX(A, B, C) \
-  A##_##B##_pre = LoadStorePairPreIndexFixed | A##_##B
-  LOAD_STORE_PAIR_OP_LIST(LOAD_STORE_PAIR_PRE_INDEX)
-  #undef LOAD_STORE_PAIR_PRE_INDEX
-};
-
-enum LoadStorePairOffsetOp {
-  LoadStorePairOffsetFixed = 0x29000000,
-  LoadStorePairOffsetFMask = 0x3B800000,
-  LoadStorePairOffsetMask = 0xFFC00000,
-  #define LOAD_STORE_PAIR_OFFSET(A, B, C) \
-  A##_##B##_off = LoadStorePairOffsetFixed | A##_##B
-  LOAD_STORE_PAIR_OP_LIST(LOAD_STORE_PAIR_OFFSET)
-  #undef LOAD_STORE_PAIR_OFFSET
-};
-
-enum LoadStorePairNonTemporalOp {
-  LoadStorePairNonTemporalFixed = 0x28000000,
-  LoadStorePairNonTemporalFMask = 0x3B800000,
-  LoadStorePairNonTemporalMask = 0xFFC00000,
-  LoadStorePairNonTemporalLBit = 1 << 22,
-  STNP_w = LoadStorePairNonTemporalFixed | STP_w,
-  LDNP_w = LoadStorePairNonTemporalFixed | LDP_w,
-  STNP_x = LoadStorePairNonTemporalFixed | STP_x,
-  LDNP_x = LoadStorePairNonTemporalFixed | LDP_x,
-  STNP_s = LoadStorePairNonTemporalFixed | STP_s,
-  LDNP_s = LoadStorePairNonTemporalFixed | LDP_s,
-  STNP_d = LoadStorePairNonTemporalFixed | STP_d,
-  LDNP_d = LoadStorePairNonTemporalFixed | LDP_d,
-  STNP_q = LoadStorePairNonTemporalFixed | STP_q,
-  LDNP_q = LoadStorePairNonTemporalFixed | LDP_q
-};
-
-// Load literal.
-enum LoadLiteralOp {
-  LoadLiteralFixed = 0x18000000,
-  LoadLiteralFMask = 0x3B000000,
-  LoadLiteralMask = 0xFF000000,
-  LDR_w_lit = LoadLiteralFixed | 0x00000000,
-  LDR_x_lit = LoadLiteralFixed | 0x40000000,
-  LDRSW_x_lit = LoadLiteralFixed | 0x80000000,
-  PRFM_lit = LoadLiteralFixed | 0xC0000000,
-  LDR_s_lit = LoadLiteralFixed | 0x04000000,
-  LDR_d_lit = LoadLiteralFixed | 0x44000000,
-  LDR_q_lit = LoadLiteralFixed | 0x84000000
-};
-
-#define LOAD_STORE_OP_LIST(V) \
-  V(ST, RB, w, 0x00000000), \
-  V(ST, RH, w, 0x40000000), \
-  V(ST, R, w, 0x80000000), \
-  V(ST, R, x, 0xC0000000), \
-  V(LD, RB, w, 0x00400000), \
-  V(LD, RH, w, 0x40400000), \
-  V(LD, R, w, 0x80400000), \
-  V(LD, R, x, 0xC0400000), \
-  V(LD, RSB, x, 0x00800000), \
-  V(LD, RSH, x, 0x40800000), \
-  V(LD, RSW, x, 0x80800000), \
-  V(LD, RSB, w, 0x00C00000), \
-  V(LD, RSH, w, 0x40C00000), \
-  V(ST, R, b, 0x04000000), \
-  V(ST, R, h, 0x44000000), \
-  V(ST, R, s, 0x84000000), \
-  V(ST, R, d, 0xC4000000), \
-  V(ST, R, q, 0x04800000), \
-  V(LD, R, b, 0x04400000), \
-  V(LD, R, h, 0x44400000), \
-  V(LD, R, s, 0x84400000), \
-  V(LD, R, d, 0xC4400000), \
-  V(LD, R, q, 0x04C00000)
-
-// Load/store (post, pre, offset and unsigned.)
-enum LoadStoreOp {
-  LoadStoreMask = 0xC4C00000,
-  LoadStoreVMask = 0x04000000,
-  #define LOAD_STORE(A, B, C, D) \
-  A##B##_##C = D
-  LOAD_STORE_OP_LIST(LOAD_STORE),
-  #undef LOAD_STORE
-  PRFM = 0xC0800000
-};
-
-// Load/store unscaled offset.
-enum LoadStoreUnscaledOffsetOp {
-  LoadStoreUnscaledOffsetFixed = 0x38000000,
-  LoadStoreUnscaledOffsetFMask = 0x3B200C00,
-  LoadStoreUnscaledOffsetMask = 0xFFE00C00,
-  PRFUM = LoadStoreUnscaledOffsetFixed | PRFM,
-  #define LOAD_STORE_UNSCALED(A, B, C, D) \
-  A##U##B##_##C = LoadStoreUnscaledOffsetFixed | D
-  LOAD_STORE_OP_LIST(LOAD_STORE_UNSCALED)
-  #undef LOAD_STORE_UNSCALED
-};
-
-// Load/store post index.
-enum LoadStorePostIndex { - LoadStorePostIndexFixed =3D 0x38000400, - LoadStorePostIndexFMask =3D 0x3B200C00, - LoadStorePostIndexMask =3D 0xFFE00C00, - #define LOAD_STORE_POST_INDEX(A, B, C, D) \ - A##B##_##C##_post =3D LoadStorePostIndexFixed | D - LOAD_STORE_OP_LIST(LOAD_STORE_POST_INDEX) - #undef LOAD_STORE_POST_INDEX -}; - -// Load/store pre index. -enum LoadStorePreIndex { - LoadStorePreIndexFixed =3D 0x38000C00, - LoadStorePreIndexFMask =3D 0x3B200C00, - LoadStorePreIndexMask =3D 0xFFE00C00, - #define LOAD_STORE_PRE_INDEX(A, B, C, D) \ - A##B##_##C##_pre =3D LoadStorePreIndexFixed | D - LOAD_STORE_OP_LIST(LOAD_STORE_PRE_INDEX) - #undef LOAD_STORE_PRE_INDEX -}; - -// Load/store unsigned offset. -enum LoadStoreUnsignedOffset { - LoadStoreUnsignedOffsetFixed =3D 0x39000000, - LoadStoreUnsignedOffsetFMask =3D 0x3B000000, - LoadStoreUnsignedOffsetMask =3D 0xFFC00000, - PRFM_unsigned =3D LoadStoreUnsignedOffsetFixed | PRFM, - #define LOAD_STORE_UNSIGNED_OFFSET(A, B, C, D) \ - A##B##_##C##_unsigned =3D LoadStoreUnsignedOffsetFixed | D - LOAD_STORE_OP_LIST(LOAD_STORE_UNSIGNED_OFFSET) - #undef LOAD_STORE_UNSIGNED_OFFSET -}; - -// Load/store register offset. 
-enum LoadStoreRegisterOffset { - LoadStoreRegisterOffsetFixed =3D 0x38200800, - LoadStoreRegisterOffsetFMask =3D 0x3B200C00, - LoadStoreRegisterOffsetMask =3D 0xFFE00C00, - PRFM_reg =3D LoadStoreRegisterOffsetFixed | PRFM, - #define LOAD_STORE_REGISTER_OFFSET(A, B, C, D) \ - A##B##_##C##_reg =3D LoadStoreRegisterOffsetFixed | D - LOAD_STORE_OP_LIST(LOAD_STORE_REGISTER_OFFSET) - #undef LOAD_STORE_REGISTER_OFFSET -}; - -enum LoadStoreExclusive { - LoadStoreExclusiveFixed =3D 0x08000000, - LoadStoreExclusiveFMask =3D 0x3F000000, - LoadStoreExclusiveMask =3D 0xFFE08000, - STXRB_w =3D LoadStoreExclusiveFixed | 0x00000000, - STXRH_w =3D LoadStoreExclusiveFixed | 0x40000000, - STXR_w =3D LoadStoreExclusiveFixed | 0x80000000, - STXR_x =3D LoadStoreExclusiveFixed | 0xC0000000, - LDXRB_w =3D LoadStoreExclusiveFixed | 0x00400000, - LDXRH_w =3D LoadStoreExclusiveFixed | 0x40400000, - LDXR_w =3D LoadStoreExclusiveFixed | 0x80400000, - LDXR_x =3D LoadStoreExclusiveFixed | 0xC0400000, - STXP_w =3D LoadStoreExclusiveFixed | 0x80200000, - STXP_x =3D LoadStoreExclusiveFixed | 0xC0200000, - LDXP_w =3D LoadStoreExclusiveFixed | 0x80600000, - LDXP_x =3D LoadStoreExclusiveFixed | 0xC0600000, - STLXRB_w =3D LoadStoreExclusiveFixed | 0x00008000, - STLXRH_w =3D LoadStoreExclusiveFixed | 0x40008000, - STLXR_w =3D LoadStoreExclusiveFixed | 0x80008000, - STLXR_x =3D LoadStoreExclusiveFixed | 0xC0008000, - LDAXRB_w =3D LoadStoreExclusiveFixed | 0x00408000, - LDAXRH_w =3D LoadStoreExclusiveFixed | 0x40408000, - LDAXR_w =3D LoadStoreExclusiveFixed | 0x80408000, - LDAXR_x =3D LoadStoreExclusiveFixed | 0xC0408000, - STLXP_w =3D LoadStoreExclusiveFixed | 0x80208000, - STLXP_x =3D LoadStoreExclusiveFixed | 0xC0208000, - LDAXP_w =3D LoadStoreExclusiveFixed | 0x80608000, - LDAXP_x =3D LoadStoreExclusiveFixed | 0xC0608000, - STLRB_w =3D LoadStoreExclusiveFixed | 0x00808000, - STLRH_w =3D LoadStoreExclusiveFixed | 0x40808000, - STLR_w =3D LoadStoreExclusiveFixed | 0x80808000, - STLR_x =3D 
LoadStoreExclusiveFixed | 0xC0808000, - LDARB_w =3D LoadStoreExclusiveFixed | 0x00C08000, - LDARH_w =3D LoadStoreExclusiveFixed | 0x40C08000, - LDAR_w =3D LoadStoreExclusiveFixed | 0x80C08000, - LDAR_x =3D LoadStoreExclusiveFixed | 0xC0C08000 -}; - -// Conditional compare. -enum ConditionalCompareOp { - ConditionalCompareMask =3D 0x60000000, - CCMN =3D 0x20000000, - CCMP =3D 0x60000000 -}; - -// Conditional compare register. -enum ConditionalCompareRegisterOp { - ConditionalCompareRegisterFixed =3D 0x1A400000, - ConditionalCompareRegisterFMask =3D 0x1FE00800, - ConditionalCompareRegisterMask =3D 0xFFE00C10, - CCMN_w =3D ConditionalCompareRegisterFixed | CCMN, - CCMN_x =3D ConditionalCompareRegisterFixed | SixtyFourBits | CCMN, - CCMP_w =3D ConditionalCompareRegisterFixed | CCMP, - CCMP_x =3D ConditionalCompareRegisterFixed | SixtyFourBits | CCMP -}; - -// Conditional compare immediate. -enum ConditionalCompareImmediateOp { - ConditionalCompareImmediateFixed =3D 0x1A400800, - ConditionalCompareImmediateFMask =3D 0x1FE00800, - ConditionalCompareImmediateMask =3D 0xFFE00C10, - CCMN_w_imm =3D ConditionalCompareImmediateFixed | CCMN, - CCMN_x_imm =3D ConditionalCompareImmediateFixed | SixtyFourBits | CCMN, - CCMP_w_imm =3D ConditionalCompareImmediateFixed | CCMP, - CCMP_x_imm =3D ConditionalCompareImmediateFixed | SixtyFourBits | CCMP -}; - -// Conditional select. 
-enum ConditionalSelectOp { - ConditionalSelectFixed =3D 0x1A800000, - ConditionalSelectFMask =3D 0x1FE00000, - ConditionalSelectMask =3D 0xFFE00C00, - CSEL_w =3D ConditionalSelectFixed | 0x00000000, - CSEL_x =3D ConditionalSelectFixed | 0x80000000, - CSEL =3D CSEL_w, - CSINC_w =3D ConditionalSelectFixed | 0x00000400, - CSINC_x =3D ConditionalSelectFixed | 0x80000400, - CSINC =3D CSINC_w, - CSINV_w =3D ConditionalSelectFixed | 0x40000000, - CSINV_x =3D ConditionalSelectFixed | 0xC0000000, - CSINV =3D CSINV_w, - CSNEG_w =3D ConditionalSelectFixed | 0x40000400, - CSNEG_x =3D ConditionalSelectFixed | 0xC0000400, - CSNEG =3D CSNEG_w -}; - -// Data processing 1 source. -enum DataProcessing1SourceOp { - DataProcessing1SourceFixed =3D 0x5AC00000, - DataProcessing1SourceFMask =3D 0x5FE00000, - DataProcessing1SourceMask =3D 0xFFFFFC00, - RBIT =3D DataProcessing1SourceFixed | 0x00000000, - RBIT_w =3D RBIT, - RBIT_x =3D RBIT | SixtyFourBits, - REV16 =3D DataProcessing1SourceFixed | 0x00000400, - REV16_w =3D REV16, - REV16_x =3D REV16 | SixtyFourBits, - REV =3D DataProcessing1SourceFixed | 0x00000800, - REV_w =3D REV, - REV32_x =3D REV | SixtyFourBits, - REV_x =3D DataProcessing1SourceFixed | SixtyFourBits | 0x00000C00, - CLZ =3D DataProcessing1SourceFixed | 0x00001000, - CLZ_w =3D CLZ, - CLZ_x =3D CLZ | SixtyFourBits, - CLS =3D DataProcessing1SourceFixed | 0x00001400, - CLS_w =3D CLS, - CLS_x =3D CLS | SixtyFourBits -}; - -// Data processing 2 source. 
-enum DataProcessing2SourceOp { - DataProcessing2SourceFixed =3D 0x1AC00000, - DataProcessing2SourceFMask =3D 0x5FE00000, - DataProcessing2SourceMask =3D 0xFFE0FC00, - UDIV_w =3D DataProcessing2SourceFixed | 0x00000800, - UDIV_x =3D DataProcessing2SourceFixed | 0x80000800, - UDIV =3D UDIV_w, - SDIV_w =3D DataProcessing2SourceFixed | 0x00000C00, - SDIV_x =3D DataProcessing2SourceFixed | 0x80000C00, - SDIV =3D SDIV_w, - LSLV_w =3D DataProcessing2SourceFixed | 0x00002000, - LSLV_x =3D DataProcessing2SourceFixed | 0x80002000, - LSLV =3D LSLV_w, - LSRV_w =3D DataProcessing2SourceFixed | 0x00002400, - LSRV_x =3D DataProcessing2SourceFixed | 0x80002400, - LSRV =3D LSRV_w, - ASRV_w =3D DataProcessing2SourceFixed | 0x00002800, - ASRV_x =3D DataProcessing2SourceFixed | 0x80002800, - ASRV =3D ASRV_w, - RORV_w =3D DataProcessing2SourceFixed | 0x00002C00, - RORV_x =3D DataProcessing2SourceFixed | 0x80002C00, - RORV =3D RORV_w, - CRC32B =3D DataProcessing2SourceFixed | 0x00004000, - CRC32H =3D DataProcessing2SourceFixed | 0x00004400, - CRC32W =3D DataProcessing2SourceFixed | 0x00004800, - CRC32X =3D DataProcessing2SourceFixed | SixtyFourBits | 0x00004C00, - CRC32CB =3D DataProcessing2SourceFixed | 0x00005000, - CRC32CH =3D DataProcessing2SourceFixed | 0x00005400, - CRC32CW =3D DataProcessing2SourceFixed | 0x00005800, - CRC32CX =3D DataProcessing2SourceFixed | SixtyFourBits | 0x00005C00 -}; - -// Data processing 3 source. 
-enum DataProcessing3SourceOp { - DataProcessing3SourceFixed =3D 0x1B000000, - DataProcessing3SourceFMask =3D 0x1F000000, - DataProcessing3SourceMask =3D 0xFFE08000, - MADD_w =3D DataProcessing3SourceFixed | 0x00000000, - MADD_x =3D DataProcessing3SourceFixed | 0x80000000, - MADD =3D MADD_w, - MSUB_w =3D DataProcessing3SourceFixed | 0x00008000, - MSUB_x =3D DataProcessing3SourceFixed | 0x80008000, - MSUB =3D MSUB_w, - SMADDL_x =3D DataProcessing3SourceFixed | 0x80200000, - SMSUBL_x =3D DataProcessing3SourceFixed | 0x80208000, - SMULH_x =3D DataProcessing3SourceFixed | 0x80400000, - UMADDL_x =3D DataProcessing3SourceFixed | 0x80A00000, - UMSUBL_x =3D DataProcessing3SourceFixed | 0x80A08000, - UMULH_x =3D DataProcessing3SourceFixed | 0x80C00000 -}; - -// Floating point compare. -enum FPCompareOp { - FPCompareFixed =3D 0x1E202000, - FPCompareFMask =3D 0x5F203C00, - FPCompareMask =3D 0xFFE0FC1F, - FCMP_s =3D FPCompareFixed | 0x00000000, - FCMP_d =3D FPCompareFixed | FP64 | 0x00000000, - FCMP =3D FCMP_s, - FCMP_s_zero =3D FPCompareFixed | 0x00000008, - FCMP_d_zero =3D FPCompareFixed | FP64 | 0x00000008, - FCMP_zero =3D FCMP_s_zero, - FCMPE_s =3D FPCompareFixed | 0x00000010, - FCMPE_d =3D FPCompareFixed | FP64 | 0x00000010, - FCMPE =3D FCMPE_s, - FCMPE_s_zero =3D FPCompareFixed | 0x00000018, - FCMPE_d_zero =3D FPCompareFixed | FP64 | 0x00000018, - FCMPE_zero =3D FCMPE_s_zero -}; - -// Floating point conditional compare. -enum FPConditionalCompareOp { - FPConditionalCompareFixed =3D 0x1E200400, - FPConditionalCompareFMask =3D 0x5F200C00, - FPConditionalCompareMask =3D 0xFFE00C10, - FCCMP_s =3D FPConditionalCompareFixed | 0x00000000, - FCCMP_d =3D FPConditionalCompareFixed | FP64 | 0x00000= 000, - FCCMP =3D FCCMP_s, - FCCMPE_s =3D FPConditionalCompareFixed | 0x00000010, - FCCMPE_d =3D FPConditionalCompareFixed | FP64 | 0x00000= 010, - FCCMPE =3D FCCMPE_s -}; - -// Floating point conditional select. 
-enum FPConditionalSelectOp { - FPConditionalSelectFixed =3D 0x1E200C00, - FPConditionalSelectFMask =3D 0x5F200C00, - FPConditionalSelectMask =3D 0xFFE00C00, - FCSEL_s =3D FPConditionalSelectFixed | 0x00000000, - FCSEL_d =3D FPConditionalSelectFixed | FP64 | 0x0000000= 0, - FCSEL =3D FCSEL_s -}; - -// Floating point immediate. -enum FPImmediateOp { - FPImmediateFixed =3D 0x1E201000, - FPImmediateFMask =3D 0x5F201C00, - FPImmediateMask =3D 0xFFE01C00, - FMOV_s_imm =3D FPImmediateFixed | 0x00000000, - FMOV_d_imm =3D FPImmediateFixed | FP64 | 0x00000000 -}; - -// Floating point data processing 1 source. -enum FPDataProcessing1SourceOp { - FPDataProcessing1SourceFixed =3D 0x1E204000, - FPDataProcessing1SourceFMask =3D 0x5F207C00, - FPDataProcessing1SourceMask =3D 0xFFFFFC00, - FMOV_s =3D FPDataProcessing1SourceFixed | 0x00000000, - FMOV_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00000000, - FMOV =3D FMOV_s, - FABS_s =3D FPDataProcessing1SourceFixed | 0x00008000, - FABS_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00008000, - FABS =3D FABS_s, - FNEG_s =3D FPDataProcessing1SourceFixed | 0x00010000, - FNEG_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00010000, - FNEG =3D FNEG_s, - FSQRT_s =3D FPDataProcessing1SourceFixed | 0x00018000, - FSQRT_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00018000, - FSQRT =3D FSQRT_s, - FCVT_ds =3D FPDataProcessing1SourceFixed | 0x00028000, - FCVT_sd =3D FPDataProcessing1SourceFixed | FP64 | 0x00020000, - FCVT_hs =3D FPDataProcessing1SourceFixed | 0x00038000, - FCVT_hd =3D FPDataProcessing1SourceFixed | FP64 | 0x00038000, - FCVT_sh =3D FPDataProcessing1SourceFixed | 0x00C20000, - FCVT_dh =3D FPDataProcessing1SourceFixed | 0x00C28000, - FRINTN_s =3D FPDataProcessing1SourceFixed | 0x00040000, - FRINTN_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00040000, - FRINTN =3D FRINTN_s, - FRINTP_s =3D FPDataProcessing1SourceFixed | 0x00048000, - FRINTP_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00048000, - FRINTP =3D FRINTP_s, - FRINTM_s 
=3D FPDataProcessing1SourceFixed | 0x00050000, - FRINTM_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00050000, - FRINTM =3D FRINTM_s, - FRINTZ_s =3D FPDataProcessing1SourceFixed | 0x00058000, - FRINTZ_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00058000, - FRINTZ =3D FRINTZ_s, - FRINTA_s =3D FPDataProcessing1SourceFixed | 0x00060000, - FRINTA_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00060000, - FRINTA =3D FRINTA_s, - FRINTX_s =3D FPDataProcessing1SourceFixed | 0x00070000, - FRINTX_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00070000, - FRINTX =3D FRINTX_s, - FRINTI_s =3D FPDataProcessing1SourceFixed | 0x00078000, - FRINTI_d =3D FPDataProcessing1SourceFixed | FP64 | 0x00078000, - FRINTI =3D FRINTI_s -}; - -// Floating point data processing 2 source. -enum FPDataProcessing2SourceOp { - FPDataProcessing2SourceFixed =3D 0x1E200800, - FPDataProcessing2SourceFMask =3D 0x5F200C00, - FPDataProcessing2SourceMask =3D 0xFFE0FC00, - FMUL =3D FPDataProcessing2SourceFixed | 0x00000000, - FMUL_s =3D FMUL, - FMUL_d =3D FMUL | FP64, - FDIV =3D FPDataProcessing2SourceFixed | 0x00001000, - FDIV_s =3D FDIV, - FDIV_d =3D FDIV | FP64, - FADD =3D FPDataProcessing2SourceFixed | 0x00002000, - FADD_s =3D FADD, - FADD_d =3D FADD | FP64, - FSUB =3D FPDataProcessing2SourceFixed | 0x00003000, - FSUB_s =3D FSUB, - FSUB_d =3D FSUB | FP64, - FMAX =3D FPDataProcessing2SourceFixed | 0x00004000, - FMAX_s =3D FMAX, - FMAX_d =3D FMAX | FP64, - FMIN =3D FPDataProcessing2SourceFixed | 0x00005000, - FMIN_s =3D FMIN, - FMIN_d =3D FMIN | FP64, - FMAXNM =3D FPDataProcessing2SourceFixed | 0x00006000, - FMAXNM_s =3D FMAXNM, - FMAXNM_d =3D FMAXNM | FP64, - FMINNM =3D FPDataProcessing2SourceFixed | 0x00007000, - FMINNM_s =3D FMINNM, - FMINNM_d =3D FMINNM | FP64, - FNMUL =3D FPDataProcessing2SourceFixed | 0x00008000, - FNMUL_s =3D FNMUL, - FNMUL_d =3D FNMUL | FP64 -}; - -// Floating point data processing 3 source. 
-enum FPDataProcessing3SourceOp { - FPDataProcessing3SourceFixed =3D 0x1F000000, - FPDataProcessing3SourceFMask =3D 0x5F000000, - FPDataProcessing3SourceMask =3D 0xFFE08000, - FMADD_s =3D FPDataProcessing3SourceFixed | 0x000000= 00, - FMSUB_s =3D FPDataProcessing3SourceFixed | 0x000080= 00, - FNMADD_s =3D FPDataProcessing3SourceFixed | 0x002000= 00, - FNMSUB_s =3D FPDataProcessing3SourceFixed | 0x002080= 00, - FMADD_d =3D FPDataProcessing3SourceFixed | 0x004000= 00, - FMSUB_d =3D FPDataProcessing3SourceFixed | 0x004080= 00, - FNMADD_d =3D FPDataProcessing3SourceFixed | 0x006000= 00, - FNMSUB_d =3D FPDataProcessing3SourceFixed | 0x006080= 00 -}; - -// Conversion between floating point and integer. -enum FPIntegerConvertOp { - FPIntegerConvertFixed =3D 0x1E200000, - FPIntegerConvertFMask =3D 0x5F20FC00, - FPIntegerConvertMask =3D 0xFFFFFC00, - FCVTNS =3D FPIntegerConvertFixed | 0x00000000, - FCVTNS_ws =3D FCVTNS, - FCVTNS_xs =3D FCVTNS | SixtyFourBits, - FCVTNS_wd =3D FCVTNS | FP64, - FCVTNS_xd =3D FCVTNS | SixtyFourBits | FP64, - FCVTNU =3D FPIntegerConvertFixed | 0x00010000, - FCVTNU_ws =3D FCVTNU, - FCVTNU_xs =3D FCVTNU | SixtyFourBits, - FCVTNU_wd =3D FCVTNU | FP64, - FCVTNU_xd =3D FCVTNU | SixtyFourBits | FP64, - FCVTPS =3D FPIntegerConvertFixed | 0x00080000, - FCVTPS_ws =3D FCVTPS, - FCVTPS_xs =3D FCVTPS | SixtyFourBits, - FCVTPS_wd =3D FCVTPS | FP64, - FCVTPS_xd =3D FCVTPS | SixtyFourBits | FP64, - FCVTPU =3D FPIntegerConvertFixed | 0x00090000, - FCVTPU_ws =3D FCVTPU, - FCVTPU_xs =3D FCVTPU | SixtyFourBits, - FCVTPU_wd =3D FCVTPU | FP64, - FCVTPU_xd =3D FCVTPU | SixtyFourBits | FP64, - FCVTMS =3D FPIntegerConvertFixed | 0x00100000, - FCVTMS_ws =3D FCVTMS, - FCVTMS_xs =3D FCVTMS | SixtyFourBits, - FCVTMS_wd =3D FCVTMS | FP64, - FCVTMS_xd =3D FCVTMS | SixtyFourBits | FP64, - FCVTMU =3D FPIntegerConvertFixed | 0x00110000, - FCVTMU_ws =3D FCVTMU, - FCVTMU_xs =3D FCVTMU | SixtyFourBits, - FCVTMU_wd =3D FCVTMU | FP64, - FCVTMU_xd =3D FCVTMU | SixtyFourBits | FP64, - 
FCVTZS =3D FPIntegerConvertFixed | 0x00180000, - FCVTZS_ws =3D FCVTZS, - FCVTZS_xs =3D FCVTZS | SixtyFourBits, - FCVTZS_wd =3D FCVTZS | FP64, - FCVTZS_xd =3D FCVTZS | SixtyFourBits | FP64, - FCVTZU =3D FPIntegerConvertFixed | 0x00190000, - FCVTZU_ws =3D FCVTZU, - FCVTZU_xs =3D FCVTZU | SixtyFourBits, - FCVTZU_wd =3D FCVTZU | FP64, - FCVTZU_xd =3D FCVTZU | SixtyFourBits | FP64, - SCVTF =3D FPIntegerConvertFixed | 0x00020000, - SCVTF_sw =3D SCVTF, - SCVTF_sx =3D SCVTF | SixtyFourBits, - SCVTF_dw =3D SCVTF | FP64, - SCVTF_dx =3D SCVTF | SixtyFourBits | FP64, - UCVTF =3D FPIntegerConvertFixed | 0x00030000, - UCVTF_sw =3D UCVTF, - UCVTF_sx =3D UCVTF | SixtyFourBits, - UCVTF_dw =3D UCVTF | FP64, - UCVTF_dx =3D UCVTF | SixtyFourBits | FP64, - FCVTAS =3D FPIntegerConvertFixed | 0x00040000, - FCVTAS_ws =3D FCVTAS, - FCVTAS_xs =3D FCVTAS | SixtyFourBits, - FCVTAS_wd =3D FCVTAS | FP64, - FCVTAS_xd =3D FCVTAS | SixtyFourBits | FP64, - FCVTAU =3D FPIntegerConvertFixed | 0x00050000, - FCVTAU_ws =3D FCVTAU, - FCVTAU_xs =3D FCVTAU | SixtyFourBits, - FCVTAU_wd =3D FCVTAU | FP64, - FCVTAU_xd =3D FCVTAU | SixtyFourBits | FP64, - FMOV_ws =3D FPIntegerConvertFixed | 0x00060000, - FMOV_sw =3D FPIntegerConvertFixed | 0x00070000, - FMOV_xd =3D FMOV_ws | SixtyFourBits | FP64, - FMOV_dx =3D FMOV_sw | SixtyFourBits | FP64, - FMOV_d1_x =3D FPIntegerConvertFixed | SixtyFourBits | 0x008F0000, - FMOV_x_d1 =3D FPIntegerConvertFixed | SixtyFourBits | 0x008E0000 -}; - -// Conversion between fixed point and floating point. 
-enum FPFixedPointConvertOp { - FPFixedPointConvertFixed =3D 0x1E000000, - FPFixedPointConvertFMask =3D 0x5F200000, - FPFixedPointConvertMask =3D 0xFFFF0000, - FCVTZS_fixed =3D FPFixedPointConvertFixed | 0x00180000, - FCVTZS_ws_fixed =3D FCVTZS_fixed, - FCVTZS_xs_fixed =3D FCVTZS_fixed | SixtyFourBits, - FCVTZS_wd_fixed =3D FCVTZS_fixed | FP64, - FCVTZS_xd_fixed =3D FCVTZS_fixed | SixtyFourBits | FP64, - FCVTZU_fixed =3D FPFixedPointConvertFixed | 0x00190000, - FCVTZU_ws_fixed =3D FCVTZU_fixed, - FCVTZU_xs_fixed =3D FCVTZU_fixed | SixtyFourBits, - FCVTZU_wd_fixed =3D FCVTZU_fixed | FP64, - FCVTZU_xd_fixed =3D FCVTZU_fixed | SixtyFourBits | FP64, - SCVTF_fixed =3D FPFixedPointConvertFixed | 0x00020000, - SCVTF_sw_fixed =3D SCVTF_fixed, - SCVTF_sx_fixed =3D SCVTF_fixed | SixtyFourBits, - SCVTF_dw_fixed =3D SCVTF_fixed | FP64, - SCVTF_dx_fixed =3D SCVTF_fixed | SixtyFourBits | FP64, - UCVTF_fixed =3D FPFixedPointConvertFixed | 0x00030000, - UCVTF_sw_fixed =3D UCVTF_fixed, - UCVTF_sx_fixed =3D UCVTF_fixed | SixtyFourBits, - UCVTF_dw_fixed =3D UCVTF_fixed | FP64, - UCVTF_dx_fixed =3D UCVTF_fixed | SixtyFourBits | FP64 -}; - -// Crypto - two register SHA. -enum Crypto2RegSHAOp { - Crypto2RegSHAFixed =3D 0x5E280800, - Crypto2RegSHAFMask =3D 0xFF3E0C00 -}; - -// Crypto - three register SHA. -enum Crypto3RegSHAOp { - Crypto3RegSHAFixed =3D 0x5E000000, - Crypto3RegSHAFMask =3D 0xFF208C00 -}; - -// Crypto - AES. -enum CryptoAESOp { - CryptoAESFixed =3D 0x4E280800, - CryptoAESFMask =3D 0xFF3E0C00 -}; - -// NEON instructions with two register operands. 
-enum NEON2RegMiscOp { - NEON2RegMiscFixed =3D 0x0E200800, - NEON2RegMiscFMask =3D 0x9F3E0C00, - NEON2RegMiscMask =3D 0xBF3FFC00, - NEON2RegMiscUBit =3D 0x20000000, - NEON_REV64 =3D NEON2RegMiscFixed | 0x00000000, - NEON_REV32 =3D NEON2RegMiscFixed | 0x20000000, - NEON_REV16 =3D NEON2RegMiscFixed | 0x00001000, - NEON_SADDLP =3D NEON2RegMiscFixed | 0x00002000, - NEON_UADDLP =3D NEON_SADDLP | NEON2RegMiscUBit, - NEON_SUQADD =3D NEON2RegMiscFixed | 0x00003000, - NEON_USQADD =3D NEON_SUQADD | NEON2RegMiscUBit, - NEON_CLS =3D NEON2RegMiscFixed | 0x00004000, - NEON_CLZ =3D NEON2RegMiscFixed | 0x20004000, - NEON_CNT =3D NEON2RegMiscFixed | 0x00005000, - NEON_RBIT_NOT =3D NEON2RegMiscFixed | 0x20005000, - NEON_SADALP =3D NEON2RegMiscFixed | 0x00006000, - NEON_UADALP =3D NEON_SADALP | NEON2RegMiscUBit, - NEON_SQABS =3D NEON2RegMiscFixed | 0x00007000, - NEON_SQNEG =3D NEON2RegMiscFixed | 0x20007000, - NEON_CMGT_zero =3D NEON2RegMiscFixed | 0x00008000, - NEON_CMGE_zero =3D NEON2RegMiscFixed | 0x20008000, - NEON_CMEQ_zero =3D NEON2RegMiscFixed | 0x00009000, - NEON_CMLE_zero =3D NEON2RegMiscFixed | 0x20009000, - NEON_CMLT_zero =3D NEON2RegMiscFixed | 0x0000A000, - NEON_ABS =3D NEON2RegMiscFixed | 0x0000B000, - NEON_NEG =3D NEON2RegMiscFixed | 0x2000B000, - NEON_XTN =3D NEON2RegMiscFixed | 0x00012000, - NEON_SQXTUN =3D NEON2RegMiscFixed | 0x20012000, - NEON_SHLL =3D NEON2RegMiscFixed | 0x20013000, - NEON_SQXTN =3D NEON2RegMiscFixed | 0x00014000, - NEON_UQXTN =3D NEON_SQXTN | NEON2RegMiscUBit, - - NEON2RegMiscOpcode =3D 0x0001F000, - NEON_RBIT_NOT_opcode =3D NEON_RBIT_NOT & NEON2RegMiscOpcode, - NEON_NEG_opcode =3D NEON_NEG & NEON2RegMiscOpcode, - NEON_XTN_opcode =3D NEON_XTN & NEON2RegMiscOpcode, - NEON_UQXTN_opcode =3D NEON_UQXTN & NEON2RegMiscOpcode, - - // These instructions use only one bit of the size field. The other bit = is - // used to distinguish between instructions. 
- NEON2RegMiscFPMask =3D NEON2RegMiscMask | 0x00800000, - NEON_FABS =3D NEON2RegMiscFixed | 0x0080F000, - NEON_FNEG =3D NEON2RegMiscFixed | 0x2080F000, - NEON_FCVTN =3D NEON2RegMiscFixed | 0x00016000, - NEON_FCVTXN =3D NEON2RegMiscFixed | 0x20016000, - NEON_FCVTL =3D NEON2RegMiscFixed | 0x00017000, - NEON_FRINTN =3D NEON2RegMiscFixed | 0x00018000, - NEON_FRINTA =3D NEON2RegMiscFixed | 0x20018000, - NEON_FRINTP =3D NEON2RegMiscFixed | 0x00818000, - NEON_FRINTM =3D NEON2RegMiscFixed | 0x00019000, - NEON_FRINTX =3D NEON2RegMiscFixed | 0x20019000, - NEON_FRINTZ =3D NEON2RegMiscFixed | 0x00819000, - NEON_FRINTI =3D NEON2RegMiscFixed | 0x20819000, - NEON_FCVTNS =3D NEON2RegMiscFixed | 0x0001A000, - NEON_FCVTNU =3D NEON_FCVTNS | NEON2RegMiscUBit, - NEON_FCVTPS =3D NEON2RegMiscFixed | 0x0081A000, - NEON_FCVTPU =3D NEON_FCVTPS | NEON2RegMiscUBit, - NEON_FCVTMS =3D NEON2RegMiscFixed | 0x0001B000, - NEON_FCVTMU =3D NEON_FCVTMS | NEON2RegMiscUBit, - NEON_FCVTZS =3D NEON2RegMiscFixed | 0x0081B000, - NEON_FCVTZU =3D NEON_FCVTZS | NEON2RegMiscUBit, - NEON_FCVTAS =3D NEON2RegMiscFixed | 0x0001C000, - NEON_FCVTAU =3D NEON_FCVTAS | NEON2RegMiscUBit, - NEON_FSQRT =3D NEON2RegMiscFixed | 0x2081F000, - NEON_SCVTF =3D NEON2RegMiscFixed | 0x0001D000, - NEON_UCVTF =3D NEON_SCVTF | NEON2RegMiscUBit, - NEON_URSQRTE =3D NEON2RegMiscFixed | 0x2081C000, - NEON_URECPE =3D NEON2RegMiscFixed | 0x0081C000, - NEON_FRSQRTE =3D NEON2RegMiscFixed | 0x2081D000, - NEON_FRECPE =3D NEON2RegMiscFixed | 0x0081D000, - NEON_FCMGT_zero =3D NEON2RegMiscFixed | 0x0080C000, - NEON_FCMGE_zero =3D NEON2RegMiscFixed | 0x2080C000, - NEON_FCMEQ_zero =3D NEON2RegMiscFixed | 0x0080D000, - NEON_FCMLE_zero =3D NEON2RegMiscFixed | 0x2080D000, - NEON_FCMLT_zero =3D NEON2RegMiscFixed | 0x0080E000, - - NEON_FCVTL_opcode =3D NEON_FCVTL & NEON2RegMiscOpcode, - NEON_FCVTN_opcode =3D NEON_FCVTN & NEON2RegMiscOpcode -}; - -// NEON instructions with three same-type operands. 
-enum NEON3SameOp { - NEON3SameFixed =3D 0x0E200400, - NEON3SameFMask =3D 0x9F200400, - NEON3SameMask =3D 0xBF20FC00, - NEON3SameUBit =3D 0x20000000, - NEON_ADD =3D NEON3SameFixed | 0x00008000, - NEON_ADDP =3D NEON3SameFixed | 0x0000B800, - NEON_SHADD =3D NEON3SameFixed | 0x00000000, - NEON_SHSUB =3D NEON3SameFixed | 0x00002000, - NEON_SRHADD =3D NEON3SameFixed | 0x00001000, - NEON_CMEQ =3D NEON3SameFixed | NEON3SameUBit | 0x00008800, - NEON_CMGE =3D NEON3SameFixed | 0x00003800, - NEON_CMGT =3D NEON3SameFixed | 0x00003000, - NEON_CMHI =3D NEON3SameFixed | NEON3SameUBit | NEON_CMGT, - NEON_CMHS =3D NEON3SameFixed | NEON3SameUBit | NEON_CMGE, - NEON_CMTST =3D NEON3SameFixed | 0x00008800, - NEON_MLA =3D NEON3SameFixed | 0x00009000, - NEON_MLS =3D NEON3SameFixed | 0x20009000, - NEON_MUL =3D NEON3SameFixed | 0x00009800, - NEON_PMUL =3D NEON3SameFixed | 0x20009800, - NEON_SRSHL =3D NEON3SameFixed | 0x00005000, - NEON_SQSHL =3D NEON3SameFixed | 0x00004800, - NEON_SQRSHL =3D NEON3SameFixed | 0x00005800, - NEON_SSHL =3D NEON3SameFixed | 0x00004000, - NEON_SMAX =3D NEON3SameFixed | 0x00006000, - NEON_SMAXP =3D NEON3SameFixed | 0x0000A000, - NEON_SMIN =3D NEON3SameFixed | 0x00006800, - NEON_SMINP =3D NEON3SameFixed | 0x0000A800, - NEON_SABD =3D NEON3SameFixed | 0x00007000, - NEON_SABA =3D NEON3SameFixed | 0x00007800, - NEON_UABD =3D NEON3SameFixed | NEON3SameUBit | NEON_SABD, - NEON_UABA =3D NEON3SameFixed | NEON3SameUBit | NEON_SABA, - NEON_SQADD =3D NEON3SameFixed | 0x00000800, - NEON_SQSUB =3D NEON3SameFixed | 0x00002800, - NEON_SUB =3D NEON3SameFixed | NEON3SameUBit | 0x00008000, - NEON_UHADD =3D NEON3SameFixed | NEON3SameUBit | NEON_SHADD, - NEON_UHSUB =3D NEON3SameFixed | NEON3SameUBit | NEON_SHSUB, - NEON_URHADD =3D NEON3SameFixed | NEON3SameUBit | NEON_SRHADD, - NEON_UMAX =3D NEON3SameFixed | NEON3SameUBit | NEON_SMAX, - NEON_UMAXP =3D NEON3SameFixed | NEON3SameUBit | NEON_SMAXP, - NEON_UMIN =3D NEON3SameFixed | NEON3SameUBit | NEON_SMIN, - NEON_UMINP =3D 
NEON3SameFixed | NEON3SameUBit | NEON_SMINP, - NEON_URSHL =3D NEON3SameFixed | NEON3SameUBit | NEON_SRSHL, - NEON_UQADD =3D NEON3SameFixed | NEON3SameUBit | NEON_SQADD, - NEON_UQRSHL =3D NEON3SameFixed | NEON3SameUBit | NEON_SQRSHL, - NEON_UQSHL =3D NEON3SameFixed | NEON3SameUBit | NEON_SQSHL, - NEON_UQSUB =3D NEON3SameFixed | NEON3SameUBit | NEON_SQSUB, - NEON_USHL =3D NEON3SameFixed | NEON3SameUBit | NEON_SSHL, - NEON_SQDMULH =3D NEON3SameFixed | 0x0000B000, - NEON_SQRDMULH =3D NEON3SameFixed | 0x2000B000, - - // NEON floating point instructions with three same-type operands. - NEON3SameFPFixed =3D NEON3SameFixed | 0x0000C000, - NEON3SameFPFMask =3D NEON3SameFMask | 0x0000C000, - NEON3SameFPMask =3D NEON3SameMask | 0x00800000, - NEON_FADD =3D NEON3SameFixed | 0x0000D000, - NEON_FSUB =3D NEON3SameFixed | 0x0080D000, - NEON_FMUL =3D NEON3SameFixed | 0x2000D800, - NEON_FDIV =3D NEON3SameFixed | 0x2000F800, - NEON_FMAX =3D NEON3SameFixed | 0x0000F000, - NEON_FMAXNM =3D NEON3SameFixed | 0x0000C000, - NEON_FMAXP =3D NEON3SameFixed | 0x2000F000, - NEON_FMAXNMP =3D NEON3SameFixed | 0x2000C000, - NEON_FMIN =3D NEON3SameFixed | 0x0080F000, - NEON_FMINNM =3D NEON3SameFixed | 0x0080C000, - NEON_FMINP =3D NEON3SameFixed | 0x2080F000, - NEON_FMINNMP =3D NEON3SameFixed | 0x2080C000, - NEON_FMLA =3D NEON3SameFixed | 0x0000C800, - NEON_FMLS =3D NEON3SameFixed | 0x0080C800, - NEON_FMULX =3D NEON3SameFixed | 0x0000D800, - NEON_FRECPS =3D NEON3SameFixed | 0x0000F800, - NEON_FRSQRTS =3D NEON3SameFixed | 0x0080F800, - NEON_FABD =3D NEON3SameFixed | 0x2080D000, - NEON_FADDP =3D NEON3SameFixed | 0x2000D000, - NEON_FCMEQ =3D NEON3SameFixed | 0x0000E000, - NEON_FCMGE =3D NEON3SameFixed | 0x2000E000, - NEON_FCMGT =3D NEON3SameFixed | 0x2080E000, - NEON_FACGE =3D NEON3SameFixed | 0x2000E800, - NEON_FACGT =3D NEON3SameFixed | 0x2080E800, - - // NEON logical instructions with three same-type operands. 
- NEON3SameLogicalFixed =3D NEON3SameFixed | 0x00001800, - NEON3SameLogicalFMask =3D NEON3SameFMask | 0x0000F800, - NEON3SameLogicalMask =3D 0xBFE0FC00, - NEON3SameLogicalFormatMask =3D NEON_Q, - NEON_AND =3D NEON3SameLogicalFixed | 0x00000000, - NEON_ORR =3D NEON3SameLogicalFixed | 0x00A00000, - NEON_ORN =3D NEON3SameLogicalFixed | 0x00C00000, - NEON_EOR =3D NEON3SameLogicalFixed | 0x20000000, - NEON_BIC =3D NEON3SameLogicalFixed | 0x00400000, - NEON_BIF =3D NEON3SameLogicalFixed | 0x20C00000, - NEON_BIT =3D NEON3SameLogicalFixed | 0x20800000, - NEON_BSL =3D NEON3SameLogicalFixed | 0x20400000 -}; - -// NEON instructions with three different-type operands. -enum NEON3DifferentOp { - NEON3DifferentFixed =3D 0x0E200000, - NEON3DifferentFMask =3D 0x9F200C00, - NEON3DifferentMask =3D 0xFF20FC00, - NEON_ADDHN =3D NEON3DifferentFixed | 0x00004000, - NEON_ADDHN2 =3D NEON_ADDHN | NEON_Q, - NEON_PMULL =3D NEON3DifferentFixed | 0x0000E000, - NEON_PMULL2 =3D NEON_PMULL | NEON_Q, - NEON_RADDHN =3D NEON3DifferentFixed | 0x20004000, - NEON_RADDHN2 =3D NEON_RADDHN | NEON_Q, - NEON_RSUBHN =3D NEON3DifferentFixed | 0x20006000, - NEON_RSUBHN2 =3D NEON_RSUBHN | NEON_Q, - NEON_SABAL =3D NEON3DifferentFixed | 0x00005000, - NEON_SABAL2 =3D NEON_SABAL | NEON_Q, - NEON_SABDL =3D NEON3DifferentFixed | 0x00007000, - NEON_SABDL2 =3D NEON_SABDL | NEON_Q, - NEON_SADDL =3D NEON3DifferentFixed | 0x00000000, - NEON_SADDL2 =3D NEON_SADDL | NEON_Q, - NEON_SADDW =3D NEON3DifferentFixed | 0x00001000, - NEON_SADDW2 =3D NEON_SADDW | NEON_Q, - NEON_SMLAL =3D NEON3DifferentFixed | 0x00008000, - NEON_SMLAL2 =3D NEON_SMLAL | NEON_Q, - NEON_SMLSL =3D NEON3DifferentFixed | 0x0000A000, - NEON_SMLSL2 =3D NEON_SMLSL | NEON_Q, - NEON_SMULL =3D NEON3DifferentFixed | 0x0000C000, - NEON_SMULL2 =3D NEON_SMULL | NEON_Q, - NEON_SSUBL =3D NEON3DifferentFixed | 0x00002000, - NEON_SSUBL2 =3D NEON_SSUBL | NEON_Q, - NEON_SSUBW =3D NEON3DifferentFixed | 0x00003000, - NEON_SSUBW2 =3D NEON_SSUBW | NEON_Q, - NEON_SQDMLAL =3D 
NEON3DifferentFixed | 0x00009000, - NEON_SQDMLAL2 =3D NEON_SQDMLAL | NEON_Q, - NEON_SQDMLSL =3D NEON3DifferentFixed | 0x0000B000, - NEON_SQDMLSL2 =3D NEON_SQDMLSL | NEON_Q, - NEON_SQDMULL =3D NEON3DifferentFixed | 0x0000D000, - NEON_SQDMULL2 =3D NEON_SQDMULL | NEON_Q, - NEON_SUBHN =3D NEON3DifferentFixed | 0x00006000, - NEON_SUBHN2 =3D NEON_SUBHN | NEON_Q, - NEON_UABAL =3D NEON_SABAL | NEON3SameUBit, - NEON_UABAL2 =3D NEON_UABAL | NEON_Q, - NEON_UABDL =3D NEON_SABDL | NEON3SameUBit, - NEON_UABDL2 =3D NEON_UABDL | NEON_Q, - NEON_UADDL =3D NEON_SADDL | NEON3SameUBit, - NEON_UADDL2 =3D NEON_UADDL | NEON_Q, - NEON_UADDW =3D NEON_SADDW | NEON3SameUBit, - NEON_UADDW2 =3D NEON_UADDW | NEON_Q, - NEON_UMLAL =3D NEON_SMLAL | NEON3SameUBit, - NEON_UMLAL2 =3D NEON_UMLAL | NEON_Q, - NEON_UMLSL =3D NEON_SMLSL | NEON3SameUBit, - NEON_UMLSL2 =3D NEON_UMLSL | NEON_Q, - NEON_UMULL =3D NEON_SMULL | NEON3SameUBit, - NEON_UMULL2 =3D NEON_UMULL | NEON_Q, - NEON_USUBL =3D NEON_SSUBL | NEON3SameUBit, - NEON_USUBL2 =3D NEON_USUBL | NEON_Q, - NEON_USUBW =3D NEON_SSUBW | NEON3SameUBit, - NEON_USUBW2 =3D NEON_USUBW | NEON_Q -}; - -// NEON instructions operating across vectors. -enum NEONAcrossLanesOp { - NEONAcrossLanesFixed =3D 0x0E300800, - NEONAcrossLanesFMask =3D 0x9F3E0C00, - NEONAcrossLanesMask =3D 0xBF3FFC00, - NEON_ADDV =3D NEONAcrossLanesFixed | 0x0001B000, - NEON_SADDLV =3D NEONAcrossLanesFixed | 0x00003000, - NEON_UADDLV =3D NEONAcrossLanesFixed | 0x20003000, - NEON_SMAXV =3D NEONAcrossLanesFixed | 0x0000A000, - NEON_SMINV =3D NEONAcrossLanesFixed | 0x0001A000, - NEON_UMAXV =3D NEONAcrossLanesFixed | 0x2000A000, - NEON_UMINV =3D NEONAcrossLanesFixed | 0x2001A000, - - // NEON floating point across instructions. 
- NEONAcrossLanesFPFixed =3D NEONAcrossLanesFixed | 0x0000C000, - NEONAcrossLanesFPFMask =3D NEONAcrossLanesFMask | 0x0000C000, - NEONAcrossLanesFPMask =3D NEONAcrossLanesMask | 0x00800000, - - NEON_FMAXV =3D NEONAcrossLanesFPFixed | 0x2000F000, - NEON_FMINV =3D NEONAcrossLanesFPFixed | 0x2080F000, - NEON_FMAXNMV =3D NEONAcrossLanesFPFixed | 0x2000C000, - NEON_FMINNMV =3D NEONAcrossLanesFPFixed | 0x2080C000 -}; - -// NEON instructions with indexed element operand. -enum NEONByIndexedElementOp { - NEONByIndexedElementFixed =3D 0x0F000000, - NEONByIndexedElementFMask =3D 0x9F000400, - NEONByIndexedElementMask =3D 0xBF00F400, - NEON_MUL_byelement =3D NEONByIndexedElementFixed | 0x00008000, - NEON_MLA_byelement =3D NEONByIndexedElementFixed | 0x20000000, - NEON_MLS_byelement =3D NEONByIndexedElementFixed | 0x20004000, - NEON_SMULL_byelement =3D NEONByIndexedElementFixed | 0x0000A000, - NEON_SMLAL_byelement =3D NEONByIndexedElementFixed | 0x00002000, - NEON_SMLSL_byelement =3D NEONByIndexedElementFixed | 0x00006000, - NEON_UMULL_byelement =3D NEONByIndexedElementFixed | 0x2000A000, - NEON_UMLAL_byelement =3D NEONByIndexedElementFixed | 0x20002000, - NEON_UMLSL_byelement =3D NEONByIndexedElementFixed | 0x20006000, - NEON_SQDMULL_byelement =3D NEONByIndexedElementFixed | 0x0000B000, - NEON_SQDMLAL_byelement =3D NEONByIndexedElementFixed | 0x00003000, - NEON_SQDMLSL_byelement =3D NEONByIndexedElementFixed | 0x00007000, - NEON_SQDMULH_byelement =3D NEONByIndexedElementFixed | 0x0000C000, - NEON_SQRDMULH_byelement =3D NEONByIndexedElementFixed | 0x0000D000, - - // Floating point instructions. 
- NEONByIndexedElementFPFixed =3D NEONByIndexedElementFixed | 0x00800000, - NEONByIndexedElementFPMask =3D NEONByIndexedElementMask | 0x00800000, - NEON_FMLA_byelement =3D NEONByIndexedElementFPFixed | 0x00001000, - NEON_FMLS_byelement =3D NEONByIndexedElementFPFixed | 0x00005000, - NEON_FMUL_byelement =3D NEONByIndexedElementFPFixed | 0x00009000, - NEON_FMULX_byelement =3D NEONByIndexedElementFPFixed | 0x20009000 -}; - -// NEON register copy. -enum NEONCopyOp { - NEONCopyFixed =3D 0x0E000400, - NEONCopyFMask =3D 0x9FE08400, - NEONCopyMask =3D 0x3FE08400, - NEONCopyInsElementMask =3D NEONCopyMask | 0x40000000, - NEONCopyInsGeneralMask =3D NEONCopyMask | 0x40007800, - NEONCopyDupElementMask =3D NEONCopyMask | 0x20007800, - NEONCopyDupGeneralMask =3D NEONCopyDupElementMask, - NEONCopyUmovMask =3D NEONCopyMask | 0x20007800, - NEONCopySmovMask =3D NEONCopyMask | 0x20007800, - NEON_INS_ELEMENT =3D NEONCopyFixed | 0x60000000, - NEON_INS_GENERAL =3D NEONCopyFixed | 0x40001800, - NEON_DUP_ELEMENT =3D NEONCopyFixed | 0x00000000, - NEON_DUP_GENERAL =3D NEONCopyFixed | 0x00000800, - NEON_SMOV =3D NEONCopyFixed | 0x00002800, - NEON_UMOV =3D NEONCopyFixed | 0x00003800 -}; - -// NEON extract. -enum NEONExtractOp { - NEONExtractFixed =3D 0x2E000000, - NEONExtractFMask =3D 0xBF208400, - NEONExtractMask =3D 0xBFE08400, - NEON_EXT =3D NEONExtractFixed | 0x00000000 -}; - -enum NEONLoadStoreMultiOp { - NEONLoadStoreMultiL =3D 0x00400000, - NEONLoadStoreMulti1_1v =3D 0x00007000, - NEONLoadStoreMulti1_2v =3D 0x0000A000, - NEONLoadStoreMulti1_3v =3D 0x00006000, - NEONLoadStoreMulti1_4v =3D 0x00002000, - NEONLoadStoreMulti2 =3D 0x00008000, - NEONLoadStoreMulti3 =3D 0x00004000, - NEONLoadStoreMulti4 =3D 0x00000000 -}; - -// NEON load/store multiple structures. 
-enum NEONLoadStoreMultiStructOp {
- NEONLoadStoreMultiStructFixed = 0x0C000000,
- NEONLoadStoreMultiStructFMask = 0xBFBF0000,
- NEONLoadStoreMultiStructMask = 0xBFFFF000,
- NEONLoadStoreMultiStructStore = NEONLoadStoreMultiStructFixed,
- NEONLoadStoreMultiStructLoad = NEONLoadStoreMultiStructFixed |
- NEONLoadStoreMultiL,
- NEON_LD1_1v = NEONLoadStoreMultiStructLoad | NEONLoadStoreMulti1_1v,
- NEON_LD1_2v = NEONLoadStoreMultiStructLoad | NEONLoadStoreMulti1_2v,
- NEON_LD1_3v = NEONLoadStoreMultiStructLoad | NEONLoadStoreMulti1_3v,
- NEON_LD1_4v = NEONLoadStoreMultiStructLoad | NEONLoadStoreMulti1_4v,
- NEON_LD2 = NEONLoadStoreMultiStructLoad | NEONLoadStoreMulti2,
- NEON_LD3 = NEONLoadStoreMultiStructLoad | NEONLoadStoreMulti3,
- NEON_LD4 = NEONLoadStoreMultiStructLoad | NEONLoadStoreMulti4,
- NEON_ST1_1v = NEONLoadStoreMultiStructStore | NEONLoadStoreMulti1_1v,
- NEON_ST1_2v = NEONLoadStoreMultiStructStore | NEONLoadStoreMulti1_2v,
- NEON_ST1_3v = NEONLoadStoreMultiStructStore | NEONLoadStoreMulti1_3v,
- NEON_ST1_4v = NEONLoadStoreMultiStructStore | NEONLoadStoreMulti1_4v,
- NEON_ST2 = NEONLoadStoreMultiStructStore | NEONLoadStoreMulti2,
- NEON_ST3 = NEONLoadStoreMultiStructStore | NEONLoadStoreMulti3,
- NEON_ST4 = NEONLoadStoreMultiStructStore | NEONLoadStoreMulti4
-};
-
-// NEON load/store multiple structures with post-index addressing.
-enum NEONLoadStoreMultiStructPostIndexOp {
- NEONLoadStoreMultiStructPostIndexFixed = 0x0C800000,
- NEONLoadStoreMultiStructPostIndexFMask = 0xBFA00000,
- NEONLoadStoreMultiStructPostIndexMask = 0xBFE0F000,
- NEONLoadStoreMultiStructPostIndex = 0x00800000,
- NEON_LD1_1v_post = NEON_LD1_1v | NEONLoadStoreMultiStructPostIndex,
- NEON_LD1_2v_post = NEON_LD1_2v | NEONLoadStoreMultiStructPostIndex,
- NEON_LD1_3v_post = NEON_LD1_3v | NEONLoadStoreMultiStructPostIndex,
- NEON_LD1_4v_post = NEON_LD1_4v | NEONLoadStoreMultiStructPostIndex,
- NEON_LD2_post = NEON_LD2 | NEONLoadStoreMultiStructPostIndex,
- NEON_LD3_post = NEON_LD3 | NEONLoadStoreMultiStructPostIndex,
- NEON_LD4_post = NEON_LD4 | NEONLoadStoreMultiStructPostIndex,
- NEON_ST1_1v_post = NEON_ST1_1v | NEONLoadStoreMultiStructPostIndex,
- NEON_ST1_2v_post = NEON_ST1_2v | NEONLoadStoreMultiStructPostIndex,
- NEON_ST1_3v_post = NEON_ST1_3v | NEONLoadStoreMultiStructPostIndex,
- NEON_ST1_4v_post = NEON_ST1_4v | NEONLoadStoreMultiStructPostIndex,
- NEON_ST2_post = NEON_ST2 | NEONLoadStoreMultiStructPostIndex,
- NEON_ST3_post = NEON_ST3 | NEONLoadStoreMultiStructPostIndex,
- NEON_ST4_post = NEON_ST4 | NEONLoadStoreMultiStructPostIndex
-};
-
-enum NEONLoadStoreSingleOp {
- NEONLoadStoreSingle1 = 0x00000000,
- NEONLoadStoreSingle2 = 0x00200000,
- NEONLoadStoreSingle3 = 0x00002000,
- NEONLoadStoreSingle4 = 0x00202000,
- NEONLoadStoreSingleL = 0x00400000,
- NEONLoadStoreSingle_b = 0x00000000,
- NEONLoadStoreSingle_h = 0x00004000,
- NEONLoadStoreSingle_s = 0x00008000,
- NEONLoadStoreSingle_d = 0x00008400,
- NEONLoadStoreSingleAllLanes = 0x0000C000,
- NEONLoadStoreSingleLenMask = 0x00202000
-};
-
-// NEON load/store single structure.
-enum NEONLoadStoreSingleStructOp {
- NEONLoadStoreSingleStructFixed = 0x0D000000,
- NEONLoadStoreSingleStructFMask = 0xBF9F0000,
- NEONLoadStoreSingleStructMask = 0xBFFFE000,
- NEONLoadStoreSingleStructStore = NEONLoadStoreSingleStructFixed,
- NEONLoadStoreSingleStructLoad = NEONLoadStoreSingleStructFixed |
- NEONLoadStoreSingleL,
- NEONLoadStoreSingleStructLoad1 = NEONLoadStoreSingle1 |
- NEONLoadStoreSingleStructLoad,
- NEONLoadStoreSingleStructLoad2 = NEONLoadStoreSingle2 |
- NEONLoadStoreSingleStructLoad,
- NEONLoadStoreSingleStructLoad3 = NEONLoadStoreSingle3 |
- NEONLoadStoreSingleStructLoad,
- NEONLoadStoreSingleStructLoad4 = NEONLoadStoreSingle4 |
- NEONLoadStoreSingleStructLoad,
- NEONLoadStoreSingleStructStore1 = NEONLoadStoreSingle1 |
- NEONLoadStoreSingleStructFixed,
- NEONLoadStoreSingleStructStore2 = NEONLoadStoreSingle2 |
- NEONLoadStoreSingleStructFixed,
- NEONLoadStoreSingleStructStore3 = NEONLoadStoreSingle3 |
- NEONLoadStoreSingleStructFixed,
- NEONLoadStoreSingleStructStore4 = NEONLoadStoreSingle4 |
- NEONLoadStoreSingleStructFixed,
- NEON_LD1_b = NEONLoadStoreSingleStructLoad1 | NEONLoadStoreSingle_b,
- NEON_LD1_h = NEONLoadStoreSingleStructLoad1 | NEONLoadStoreSingle_h,
- NEON_LD1_s = NEONLoadStoreSingleStructLoad1 | NEONLoadStoreSingle_s,
- NEON_LD1_d = NEONLoadStoreSingleStructLoad1 | NEONLoadStoreSingle_d,
- NEON_LD1R = NEONLoadStoreSingleStructLoad1 | NEONLoadStoreSingleAllLanes,
- NEON_ST1_b = NEONLoadStoreSingleStructStore1 | NEONLoadStoreSingle_b,
- NEON_ST1_h = NEONLoadStoreSingleStructStore1 | NEONLoadStoreSingle_h,
- NEON_ST1_s = NEONLoadStoreSingleStructStore1 | NEONLoadStoreSingle_s,
- NEON_ST1_d = NEONLoadStoreSingleStructStore1 | NEONLoadStoreSingle_d,
-
- NEON_LD2_b = NEONLoadStoreSingleStructLoad2 | NEONLoadStoreSingle_b,
- NEON_LD2_h = NEONLoadStoreSingleStructLoad2 | NEONLoadStoreSingle_h,
- NEON_LD2_s = NEONLoadStoreSingleStructLoad2 | NEONLoadStoreSingle_s,
- NEON_LD2_d = NEONLoadStoreSingleStructLoad2 | NEONLoadStoreSingle_d,
- NEON_LD2R = NEONLoadStoreSingleStructLoad2 | NEONLoadStoreSingleAllLanes,
- NEON_ST2_b = NEONLoadStoreSingleStructStore2 | NEONLoadStoreSingle_b,
- NEON_ST2_h = NEONLoadStoreSingleStructStore2 | NEONLoadStoreSingle_h,
- NEON_ST2_s = NEONLoadStoreSingleStructStore2 | NEONLoadStoreSingle_s,
- NEON_ST2_d = NEONLoadStoreSingleStructStore2 | NEONLoadStoreSingle_d,
-
- NEON_LD3_b = NEONLoadStoreSingleStructLoad3 | NEONLoadStoreSingle_b,
- NEON_LD3_h = NEONLoadStoreSingleStructLoad3 | NEONLoadStoreSingle_h,
- NEON_LD3_s = NEONLoadStoreSingleStructLoad3 | NEONLoadStoreSingle_s,
- NEON_LD3_d = NEONLoadStoreSingleStructLoad3 | NEONLoadStoreSingle_d,
- NEON_LD3R = NEONLoadStoreSingleStructLoad3 | NEONLoadStoreSingleAllLanes,
- NEON_ST3_b = NEONLoadStoreSingleStructStore3 | NEONLoadStoreSingle_b,
- NEON_ST3_h = NEONLoadStoreSingleStructStore3 | NEONLoadStoreSingle_h,
- NEON_ST3_s = NEONLoadStoreSingleStructStore3 | NEONLoadStoreSingle_s,
- NEON_ST3_d = NEONLoadStoreSingleStructStore3 | NEONLoadStoreSingle_d,
-
- NEON_LD4_b = NEONLoadStoreSingleStructLoad4 | NEONLoadStoreSingle_b,
- NEON_LD4_h = NEONLoadStoreSingleStructLoad4 | NEONLoadStoreSingle_h,
- NEON_LD4_s = NEONLoadStoreSingleStructLoad4 | NEONLoadStoreSingle_s,
- NEON_LD4_d = NEONLoadStoreSingleStructLoad4 | NEONLoadStoreSingle_d,
- NEON_LD4R = NEONLoadStoreSingleStructLoad4 | NEONLoadStoreSingleAllLanes,
- NEON_ST4_b = NEONLoadStoreSingleStructStore4 | NEONLoadStoreSingle_b,
- NEON_ST4_h = NEONLoadStoreSingleStructStore4 | NEONLoadStoreSingle_h,
- NEON_ST4_s = NEONLoadStoreSingleStructStore4 | NEONLoadStoreSingle_s,
- NEON_ST4_d = NEONLoadStoreSingleStructStore4 | NEONLoadStoreSingle_d
-};
-
-// NEON load/store single structure with post-index addressing.
-enum NEONLoadStoreSingleStructPostIndexOp {
- NEONLoadStoreSingleStructPostIndexFixed = 0x0D800000,
- NEONLoadStoreSingleStructPostIndexFMask = 0xBF800000,
- NEONLoadStoreSingleStructPostIndexMask = 0xBFE0E000,
- NEONLoadStoreSingleStructPostIndex = 0x00800000,
- NEON_LD1_b_post = NEON_LD1_b | NEONLoadStoreSingleStructPostIndex,
- NEON_LD1_h_post = NEON_LD1_h | NEONLoadStoreSingleStructPostIndex,
- NEON_LD1_s_post = NEON_LD1_s | NEONLoadStoreSingleStructPostIndex,
- NEON_LD1_d_post = NEON_LD1_d | NEONLoadStoreSingleStructPostIndex,
- NEON_LD1R_post = NEON_LD1R | NEONLoadStoreSingleStructPostIndex,
- NEON_ST1_b_post = NEON_ST1_b | NEONLoadStoreSingleStructPostIndex,
- NEON_ST1_h_post = NEON_ST1_h | NEONLoadStoreSingleStructPostIndex,
- NEON_ST1_s_post = NEON_ST1_s | NEONLoadStoreSingleStructPostIndex,
- NEON_ST1_d_post = NEON_ST1_d | NEONLoadStoreSingleStructPostIndex,
-
- NEON_LD2_b_post = NEON_LD2_b | NEONLoadStoreSingleStructPostIndex,
- NEON_LD2_h_post = NEON_LD2_h | NEONLoadStoreSingleStructPostIndex,
- NEON_LD2_s_post = NEON_LD2_s | NEONLoadStoreSingleStructPostIndex,
- NEON_LD2_d_post = NEON_LD2_d | NEONLoadStoreSingleStructPostIndex,
- NEON_LD2R_post = NEON_LD2R | NEONLoadStoreSingleStructPostIndex,
- NEON_ST2_b_post = NEON_ST2_b | NEONLoadStoreSingleStructPostIndex,
- NEON_ST2_h_post = NEON_ST2_h | NEONLoadStoreSingleStructPostIndex,
- NEON_ST2_s_post = NEON_ST2_s | NEONLoadStoreSingleStructPostIndex,
- NEON_ST2_d_post = NEON_ST2_d | NEONLoadStoreSingleStructPostIndex,
-
- NEON_LD3_b_post = NEON_LD3_b | NEONLoadStoreSingleStructPostIndex,
- NEON_LD3_h_post = NEON_LD3_h | NEONLoadStoreSingleStructPostIndex,
- NEON_LD3_s_post = NEON_LD3_s | NEONLoadStoreSingleStructPostIndex,
- NEON_LD3_d_post = NEON_LD3_d | NEONLoadStoreSingleStructPostIndex,
- NEON_LD3R_post = NEON_LD3R | NEONLoadStoreSingleStructPostIndex,
- NEON_ST3_b_post = NEON_ST3_b | NEONLoadStoreSingleStructPostIndex,
- NEON_ST3_h_post = NEON_ST3_h | NEONLoadStoreSingleStructPostIndex,
- NEON_ST3_s_post = NEON_ST3_s | NEONLoadStoreSingleStructPostIndex,
- NEON_ST3_d_post = NEON_ST3_d | NEONLoadStoreSingleStructPostIndex,
-
- NEON_LD4_b_post = NEON_LD4_b | NEONLoadStoreSingleStructPostIndex,
- NEON_LD4_h_post = NEON_LD4_h | NEONLoadStoreSingleStructPostIndex,
- NEON_LD4_s_post = NEON_LD4_s | NEONLoadStoreSingleStructPostIndex,
- NEON_LD4_d_post = NEON_LD4_d | NEONLoadStoreSingleStructPostIndex,
- NEON_LD4R_post = NEON_LD4R | NEONLoadStoreSingleStructPostIndex,
- NEON_ST4_b_post = NEON_ST4_b | NEONLoadStoreSingleStructPostIndex,
- NEON_ST4_h_post = NEON_ST4_h | NEONLoadStoreSingleStructPostIndex,
- NEON_ST4_s_post = NEON_ST4_s | NEONLoadStoreSingleStructPostIndex,
- NEON_ST4_d_post = NEON_ST4_d | NEONLoadStoreSingleStructPostIndex
-};
-
-// NEON modified immediate.
-enum NEONModifiedImmediateOp {
- NEONModifiedImmediateFixed = 0x0F000400,
- NEONModifiedImmediateFMask = 0x9FF80400,
- NEONModifiedImmediateOpBit = 0x20000000,
- NEONModifiedImmediate_MOVI = NEONModifiedImmediateFixed | 0x00000000,
- NEONModifiedImmediate_MVNI = NEONModifiedImmediateFixed | 0x20000000,
- NEONModifiedImmediate_ORR = NEONModifiedImmediateFixed | 0x00001000,
- NEONModifiedImmediate_BIC = NEONModifiedImmediateFixed | 0x20001000
-};
-
-// NEON shift immediate.
-enum NEONShiftImmediateOp {
- NEONShiftImmediateFixed = 0x0F000400,
- NEONShiftImmediateFMask = 0x9F800400,
- NEONShiftImmediateMask = 0xBF80FC00,
- NEONShiftImmediateUBit = 0x20000000,
- NEON_SHL = NEONShiftImmediateFixed | 0x00005000,
- NEON_SSHLL = NEONShiftImmediateFixed | 0x0000A000,
- NEON_USHLL = NEONShiftImmediateFixed | 0x2000A000,
- NEON_SLI = NEONShiftImmediateFixed | 0x20005000,
- NEON_SRI = NEONShiftImmediateFixed | 0x20004000,
- NEON_SHRN = NEONShiftImmediateFixed | 0x00008000,
- NEON_RSHRN = NEONShiftImmediateFixed | 0x00008800,
- NEON_UQSHRN = NEONShiftImmediateFixed | 0x20009000,
- NEON_UQRSHRN = NEONShiftImmediateFixed | 0x20009800,
- NEON_SQSHRN = NEONShiftImmediateFixed | 0x00009000,
- NEON_SQRSHRN = NEONShiftImmediateFixed | 0x00009800,
- NEON_SQSHRUN = NEONShiftImmediateFixed | 0x20008000,
- NEON_SQRSHRUN = NEONShiftImmediateFixed | 0x20008800,
- NEON_SSHR = NEONShiftImmediateFixed | 0x00000000,
- NEON_SRSHR = NEONShiftImmediateFixed | 0x00002000,
- NEON_USHR = NEONShiftImmediateFixed | 0x20000000,
- NEON_URSHR = NEONShiftImmediateFixed | 0x20002000,
- NEON_SSRA = NEONShiftImmediateFixed | 0x00001000,
- NEON_SRSRA = NEONShiftImmediateFixed | 0x00003000,
- NEON_USRA = NEONShiftImmediateFixed | 0x20001000,
- NEON_URSRA = NEONShiftImmediateFixed | 0x20003000,
- NEON_SQSHLU = NEONShiftImmediateFixed | 0x20006000,
- NEON_SCVTF_imm = NEONShiftImmediateFixed | 0x0000E000,
- NEON_UCVTF_imm = NEONShiftImmediateFixed | 0x2000E000,
- NEON_FCVTZS_imm = NEONShiftImmediateFixed | 0x0000F800,
- NEON_FCVTZU_imm = NEONShiftImmediateFixed | 0x2000F800,
- NEON_SQSHL_imm = NEONShiftImmediateFixed | 0x00007000,
- NEON_UQSHL_imm = NEONShiftImmediateFixed | 0x20007000
-};
-
-// NEON table.
-enum NEONTableOp {
- NEONTableFixed = 0x0E000000,
- NEONTableFMask = 0xBF208C00,
- NEONTableExt = 0x00001000,
- NEONTableMask = 0xBF20FC00,
- NEON_TBL_1v = NEONTableFixed | 0x00000000,
- NEON_TBL_2v = NEONTableFixed | 0x00002000,
- NEON_TBL_3v = NEONTableFixed | 0x00004000,
- NEON_TBL_4v = NEONTableFixed | 0x00006000,
- NEON_TBX_1v = NEON_TBL_1v | NEONTableExt,
- NEON_TBX_2v = NEON_TBL_2v | NEONTableExt,
- NEON_TBX_3v = NEON_TBL_3v | NEONTableExt,
- NEON_TBX_4v = NEON_TBL_4v | NEONTableExt
-};
-
-// NEON perm.
-enum NEONPermOp {
- NEONPermFixed = 0x0E000800,
- NEONPermFMask = 0xBF208C00,
- NEONPermMask = 0x3F20FC00,
- NEON_UZP1 = NEONPermFixed | 0x00001000,
- NEON_TRN1 = NEONPermFixed | 0x00002000,
- NEON_ZIP1 = NEONPermFixed | 0x00003000,
- NEON_UZP2 = NEONPermFixed | 0x00005000,
- NEON_TRN2 = NEONPermFixed | 0x00006000,
- NEON_ZIP2 = NEONPermFixed | 0x00007000
-};
-
-// NEON scalar instructions with two register operands.
-enum NEONScalar2RegMiscOp {
- NEONScalar2RegMiscFixed = 0x5E200800,
- NEONScalar2RegMiscFMask = 0xDF3E0C00,
- NEONScalar2RegMiscMask = NEON_Q | NEONScalar | NEON2RegMiscMask,
- NEON_CMGT_zero_scalar = NEON_Q | NEONScalar | NEON_CMGT_zero,
- NEON_CMEQ_zero_scalar = NEON_Q | NEONScalar | NEON_CMEQ_zero,
- NEON_CMLT_zero_scalar = NEON_Q | NEONScalar | NEON_CMLT_zero,
- NEON_CMGE_zero_scalar = NEON_Q | NEONScalar | NEON_CMGE_zero,
- NEON_CMLE_zero_scalar = NEON_Q | NEONScalar | NEON_CMLE_zero,
- NEON_ABS_scalar = NEON_Q | NEONScalar | NEON_ABS,
- NEON_SQABS_scalar = NEON_Q | NEONScalar | NEON_SQABS,
- NEON_NEG_scalar = NEON_Q | NEONScalar | NEON_NEG,
- NEON_SQNEG_scalar = NEON_Q | NEONScalar | NEON_SQNEG,
- NEON_SQXTN_scalar = NEON_Q | NEONScalar | NEON_SQXTN,
- NEON_UQXTN_scalar = NEON_Q | NEONScalar | NEON_UQXTN,
- NEON_SQXTUN_scalar = NEON_Q | NEONScalar | NEON_SQXTUN,
- NEON_SUQADD_scalar = NEON_Q | NEONScalar | NEON_SUQADD,
- NEON_USQADD_scalar = NEON_Q | NEONScalar | NEON_USQADD,
-
- NEONScalar2RegMiscOpcode = NEON2RegMiscOpcode,
- NEON_NEG_scalar_opcode = NEON_NEG_scalar & NEONScalar2RegMiscOpcode,
-
- NEONScalar2RegMiscFPMask = NEONScalar2RegMiscMask | 0x00800000,
- NEON_FRSQRTE_scalar = NEON_Q | NEONScalar | NEON_FRSQRTE,
- NEON_FRECPE_scalar = NEON_Q | NEONScalar | NEON_FRECPE,
- NEON_SCVTF_scalar = NEON_Q | NEONScalar | NEON_SCVTF,
- NEON_UCVTF_scalar = NEON_Q | NEONScalar | NEON_UCVTF,
- NEON_FCMGT_zero_scalar = NEON_Q | NEONScalar | NEON_FCMGT_zero,
- NEON_FCMEQ_zero_scalar = NEON_Q | NEONScalar | NEON_FCMEQ_zero,
- NEON_FCMLT_zero_scalar = NEON_Q | NEONScalar | NEON_FCMLT_zero,
- NEON_FCMGE_zero_scalar = NEON_Q | NEONScalar | NEON_FCMGE_zero,
- NEON_FCMLE_zero_scalar = NEON_Q | NEONScalar | NEON_FCMLE_zero,
- NEON_FRECPX_scalar = NEONScalar2RegMiscFixed | 0x0081F000,
- NEON_FCVTNS_scalar = NEON_Q | NEONScalar | NEON_FCVTNS,
- NEON_FCVTNU_scalar = NEON_Q | NEONScalar | NEON_FCVTNU,
- NEON_FCVTPS_scalar = NEON_Q | NEONScalar | NEON_FCVTPS,
- NEON_FCVTPU_scalar = NEON_Q | NEONScalar | NEON_FCVTPU,
- NEON_FCVTMS_scalar = NEON_Q | NEONScalar | NEON_FCVTMS,
- NEON_FCVTMU_scalar = NEON_Q | NEONScalar | NEON_FCVTMU,
- NEON_FCVTZS_scalar = NEON_Q | NEONScalar | NEON_FCVTZS,
- NEON_FCVTZU_scalar = NEON_Q | NEONScalar | NEON_FCVTZU,
- NEON_FCVTAS_scalar = NEON_Q | NEONScalar | NEON_FCVTAS,
- NEON_FCVTAU_scalar = NEON_Q | NEONScalar | NEON_FCVTAU,
- NEON_FCVTXN_scalar = NEON_Q | NEONScalar | NEON_FCVTXN
-};
-
-// NEON scalar instructions with three same-type operands.
-enum NEONScalar3SameOp {
- NEONScalar3SameFixed = 0x5E200400,
- NEONScalar3SameFMask = 0xDF200400,
- NEONScalar3SameMask = 0xFF20FC00,
- NEON_ADD_scalar = NEON_Q | NEONScalar | NEON_ADD,
- NEON_CMEQ_scalar = NEON_Q | NEONScalar | NEON_CMEQ,
- NEON_CMGE_scalar = NEON_Q | NEONScalar | NEON_CMGE,
- NEON_CMGT_scalar = NEON_Q | NEONScalar | NEON_CMGT,
- NEON_CMHI_scalar = NEON_Q | NEONScalar | NEON_CMHI,
- NEON_CMHS_scalar = NEON_Q | NEONScalar | NEON_CMHS,
- NEON_CMTST_scalar = NEON_Q | NEONScalar | NEON_CMTST,
- NEON_SUB_scalar = NEON_Q | NEONScalar | NEON_SUB,
- NEON_UQADD_scalar = NEON_Q | NEONScalar | NEON_UQADD,
- NEON_SQADD_scalar = NEON_Q | NEONScalar | NEON_SQADD,
- NEON_UQSUB_scalar = NEON_Q | NEONScalar | NEON_UQSUB,
- NEON_SQSUB_scalar = NEON_Q | NEONScalar | NEON_SQSUB,
- NEON_USHL_scalar = NEON_Q | NEONScalar | NEON_USHL,
- NEON_SSHL_scalar = NEON_Q | NEONScalar | NEON_SSHL,
- NEON_UQSHL_scalar = NEON_Q | NEONScalar | NEON_UQSHL,
- NEON_SQSHL_scalar = NEON_Q | NEONScalar | NEON_SQSHL,
- NEON_URSHL_scalar = NEON_Q | NEONScalar | NEON_URSHL,
- NEON_SRSHL_scalar = NEON_Q | NEONScalar | NEON_SRSHL,
- NEON_UQRSHL_scalar = NEON_Q | NEONScalar | NEON_UQRSHL,
- NEON_SQRSHL_scalar = NEON_Q | NEONScalar | NEON_SQRSHL,
- NEON_SQDMULH_scalar = NEON_Q | NEONScalar | NEON_SQDMULH,
- NEON_SQRDMULH_scalar = NEON_Q | NEONScalar | NEON_SQRDMULH,
-
- // NEON floating point scalar instructions with three same-type operands.
- NEONScalar3SameFPFixed = NEONScalar3SameFixed | 0x0000C000,
- NEONScalar3SameFPFMask = NEONScalar3SameFMask | 0x0000C000,
- NEONScalar3SameFPMask = NEONScalar3SameMask | 0x00800000,
- NEON_FACGE_scalar = NEON_Q | NEONScalar | NEON_FACGE,
- NEON_FACGT_scalar = NEON_Q | NEONScalar | NEON_FACGT,
- NEON_FCMEQ_scalar = NEON_Q | NEONScalar | NEON_FCMEQ,
- NEON_FCMGE_scalar = NEON_Q | NEONScalar | NEON_FCMGE,
- NEON_FCMGT_scalar = NEON_Q | NEONScalar | NEON_FCMGT,
- NEON_FMULX_scalar = NEON_Q | NEONScalar | NEON_FMULX,
- NEON_FRECPS_scalar = NEON_Q | NEONScalar | NEON_FRECPS,
- NEON_FRSQRTS_scalar = NEON_Q | NEONScalar | NEON_FRSQRTS,
- NEON_FABD_scalar = NEON_Q | NEONScalar | NEON_FABD
-};
-
-// NEON scalar instructions with three different-type operands.
-enum NEONScalar3DiffOp {
- NEONScalar3DiffFixed = 0x5E200000,
- NEONScalar3DiffFMask = 0xDF200C00,
- NEONScalar3DiffMask = NEON_Q | NEONScalar | NEON3DifferentMask,
- NEON_SQDMLAL_scalar = NEON_Q | NEONScalar | NEON_SQDMLAL,
- NEON_SQDMLSL_scalar = NEON_Q | NEONScalar | NEON_SQDMLSL,
- NEON_SQDMULL_scalar = NEON_Q | NEONScalar | NEON_SQDMULL
-};
-
-// NEON scalar instructions with indexed element operand.
-enum NEONScalarByIndexedElementOp {
- NEONScalarByIndexedElementFixed = 0x5F000000,
- NEONScalarByIndexedElementFMask = 0xDF000400,
- NEONScalarByIndexedElementMask = 0xFF00F400,
- NEON_SQDMLAL_byelement_scalar = NEON_Q | NEONScalar | NEON_SQDMLAL_byelement,
- NEON_SQDMLSL_byelement_scalar = NEON_Q | NEONScalar | NEON_SQDMLSL_byelement,
- NEON_SQDMULL_byelement_scalar = NEON_Q | NEONScalar | NEON_SQDMULL_byelement,
- NEON_SQDMULH_byelement_scalar = NEON_Q | NEONScalar | NEON_SQDMULH_byelement,
- NEON_SQRDMULH_byelement_scalar
- = NEON_Q | NEONScalar | NEON_SQRDMULH_byelement,
-
- // Floating point instructions.
- NEONScalarByIndexedElementFPFixed
- = NEONScalarByIndexedElementFixed | 0x00800000,
- NEONScalarByIndexedElementFPMask
- = NEONScalarByIndexedElementMask | 0x00800000,
- NEON_FMLA_byelement_scalar = NEON_Q | NEONScalar | NEON_FMLA_byelement,
- NEON_FMLS_byelement_scalar = NEON_Q | NEONScalar | NEON_FMLS_byelement,
- NEON_FMUL_byelement_scalar = NEON_Q | NEONScalar | NEON_FMUL_byelement,
- NEON_FMULX_byelement_scalar = NEON_Q | NEONScalar | NEON_FMULX_byelement
-};
-
-// NEON scalar register copy.
-enum NEONScalarCopyOp {
- NEONScalarCopyFixed = 0x5E000400,
- NEONScalarCopyFMask = 0xDFE08400,
- NEONScalarCopyMask = 0xFFE0FC00,
- NEON_DUP_ELEMENT_scalar = NEON_Q | NEONScalar | NEON_DUP_ELEMENT
-};
-
-// NEON scalar pairwise instructions.
-enum NEONScalarPairwiseOp {
- NEONScalarPairwiseFixed = 0x5E300800,
- NEONScalarPairwiseFMask = 0xDF3E0C00,
- NEONScalarPairwiseMask = 0xFFB1F800,
- NEON_ADDP_scalar = NEONScalarPairwiseFixed | 0x0081B000,
- NEON_FMAXNMP_scalar = NEONScalarPairwiseFixed | 0x2000C000,
- NEON_FMINNMP_scalar = NEONScalarPairwiseFixed | 0x2080C000,
- NEON_FADDP_scalar = NEONScalarPairwiseFixed | 0x2000D000,
- NEON_FMAXP_scalar = NEONScalarPairwiseFixed | 0x2000F000,
- NEON_FMINP_scalar = NEONScalarPairwiseFixed | 0x2080F000
-};
-
-// NEON scalar shift immediate.
-enum NEONScalarShiftImmediateOp {
- NEONScalarShiftImmediateFixed = 0x5F000400,
- NEONScalarShiftImmediateFMask = 0xDF800400,
- NEONScalarShiftImmediateMask = 0xFF80FC00,
- NEON_SHL_scalar = NEON_Q | NEONScalar | NEON_SHL,
- NEON_SLI_scalar = NEON_Q | NEONScalar | NEON_SLI,
- NEON_SRI_scalar = NEON_Q | NEONScalar | NEON_SRI,
- NEON_SSHR_scalar = NEON_Q | NEONScalar | NEON_SSHR,
- NEON_USHR_scalar = NEON_Q | NEONScalar | NEON_USHR,
- NEON_SRSHR_scalar = NEON_Q | NEONScalar | NEON_SRSHR,
- NEON_URSHR_scalar = NEON_Q | NEONScalar | NEON_URSHR,
- NEON_SSRA_scalar = NEON_Q | NEONScalar | NEON_SSRA,
- NEON_USRA_scalar = NEON_Q | NEONScalar | NEON_USRA,
- NEON_SRSRA_scalar = NEON_Q | NEONScalar | NEON_SRSRA,
- NEON_URSRA_scalar = NEON_Q | NEONScalar | NEON_URSRA,
- NEON_UQSHRN_scalar = NEON_Q | NEONScalar | NEON_UQSHRN,
- NEON_UQRSHRN_scalar = NEON_Q | NEONScalar | NEON_UQRSHRN,
- NEON_SQSHRN_scalar = NEON_Q | NEONScalar | NEON_SQSHRN,
- NEON_SQRSHRN_scalar = NEON_Q | NEONScalar | NEON_SQRSHRN,
- NEON_SQSHRUN_scalar = NEON_Q | NEONScalar | NEON_SQSHRUN,
- NEON_SQRSHRUN_scalar = NEON_Q | NEONScalar | NEON_SQRSHRUN,
- NEON_SQSHLU_scalar = NEON_Q | NEONScalar | NEON_SQSHLU,
- NEON_SQSHL_imm_scalar = NEON_Q | NEONScalar | NEON_SQSHL_imm,
- NEON_UQSHL_imm_scalar = NEON_Q | NEONScalar | NEON_UQSHL_imm,
- NEON_SCVTF_imm_scalar = NEON_Q | NEONScalar | NEON_SCVTF_imm,
- NEON_UCVTF_imm_scalar = NEON_Q | NEONScalar | NEON_UCVTF_imm,
- NEON_FCVTZS_imm_scalar = NEON_Q | NEONScalar | NEON_FCVTZS_imm,
- NEON_FCVTZU_imm_scalar = NEON_Q | NEONScalar | NEON_FCVTZU_imm
-};
-
-// Unimplemented and unallocated instructions. These are defined to make fixed
-// bit assertion easier.
-enum UnimplementedOp {
- UnimplementedFixed = 0x00000000,
- UnimplementedFMask = 0x00000000
-};
-
-enum UnallocatedOp {
- UnallocatedFixed = 0x00000000,
- UnallocatedFMask = 0x00000000
-};
-
-} // namespace vixl
-
-#endif // VIXL_A64_CONSTANTS_A64_H_
diff --git a/disas/libvixl/vixl/a64/cpu-a64.h b/disas/libvixl/vixl/a64/cpu-a64.h
deleted file mode 100644
index cdf09a6af1..0000000000
--- a/disas/libvixl/vixl/a64/cpu-a64.h
+++ /dev/null
@@ -1,83 +0,0 @@
-// Copyright 2014, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of ARM Limited nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_CPU_A64_H
-#define VIXL_CPU_A64_H
-
-#include "vixl/globals.h"
-#include "vixl/a64/instructions-a64.h"
-
-namespace vixl {
-
-class CPU {
- public:
- // Initialise CPU support.
- static void SetUp();
-
- // Ensures the data at a given address and with a given size is the same for
- // the I and D caches. I and D caches are not automatically coherent on ARM
- // so this operation is required before any dynamically generated code can
- // safely run.
- static void EnsureIAndDCacheCoherency(void *address, size_t length);
-
- // Handle tagged pointers.
- template <typename T>
- static T SetPointerTag(T pointer, uint64_t tag) {
- VIXL_ASSERT(is_uintn(kAddressTagWidth, tag));
-
- // Use C-style casts to get static_cast behaviour for integral types (T),
- // and reinterpret_cast behaviour for other types.
-
- uint64_t raw = (uint64_t)pointer;
- VIXL_STATIC_ASSERT(sizeof(pointer) == sizeof(raw));
-
- raw = (raw & ~kAddressTagMask) | (tag << kAddressTagOffset);
- return (T)raw;
- }
-
- template <typename T>
- static uint64_t GetPointerTag(T pointer) {
- // Use C-style casts to get static_cast behaviour for integral types (T),
- // and reinterpret_cast behaviour for other types.
-
- uint64_t raw = (uint64_t)pointer;
- VIXL_STATIC_ASSERT(sizeof(pointer) == sizeof(raw));
-
- return (raw & kAddressTagMask) >> kAddressTagOffset;
- }
-
- private:
- // Return the content of the cache type register.
- static uint32_t GetCacheType();
-
- // I and D cache line size in bytes.
- static unsigned icache_line_size_;
- static unsigned dcache_line_size_;
-};
-
-} // namespace vixl
-
-#endif // VIXL_CPU_A64_H
diff --git a/disas/libvixl/vixl/a64/decoder-a64.h b/disas/libvixl/vixl/a64/decoder-a64.h
deleted file mode 100644
index b3f04f68fc..0000000000
--- a/disas/libvixl/vixl/a64/decoder-a64.h
+++ /dev/null
@@ -1,275 +0,0 @@
-// Copyright 2014, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-// this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-// this list of conditions and the following disclaimer in the documentation
-// and/or other materials provided with the distribution.
-// * Neither the name of ARM Limited nor the names of its contributors may be
-// used to endorse or promote products derived from this software without
-// specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_A64_DECODER_A64_H_
-#define VIXL_A64_DECODER_A64_H_
-
-#include <list>
-
-#include "vixl/globals.h"
-#include "vixl/a64/instructions-a64.h"
-
-
-// List macro containing all visitors needed by the decoder class.
-
-#define VISITOR_LIST_THAT_RETURN(V) \
- V(PCRelAddressing) \
- V(AddSubImmediate) \
- V(LogicalImmediate) \
- V(MoveWideImmediate) \
- V(Bitfield) \
- V(Extract) \
- V(UnconditionalBranch) \
- V(UnconditionalBranchToRegister) \
- V(CompareBranch) \
- V(TestBranch) \
- V(ConditionalBranch) \
- V(System) \
- V(Exception) \
- V(LoadStorePairPostIndex) \
- V(LoadStorePairOffset) \
- V(LoadStorePairPreIndex) \
- V(LoadStorePairNonTemporal) \
- V(LoadLiteral) \
- V(LoadStoreUnscaledOffset) \
- V(LoadStorePostIndex) \
- V(LoadStorePreIndex) \
- V(LoadStoreRegisterOffset) \
- V(LoadStoreUnsignedOffset) \
- V(LoadStoreExclusive) \
- V(LogicalShifted) \
- V(AddSubShifted) \
- V(AddSubExtended) \
- V(AddSubWithCarry) \
- V(ConditionalCompareRegister) \
- V(ConditionalCompareImmediate) \
- V(ConditionalSelect) \
- V(DataProcessing1Source) \
- V(DataProcessing2Source) \
- V(DataProcessing3Source) \
- V(FPCompare) \
- V(FPConditionalCompare) \
- V(FPConditionalSelect) \
- V(FPImmediate) \
- V(FPDataProcessing1Source) \
- V(FPDataProcessing2Source) \
- V(FPDataProcessing3Source) \
- V(FPIntegerConvert) \
- V(FPFixedPointConvert) \
- V(Crypto2RegSHA) \
- V(Crypto3RegSHA) \
- V(CryptoAES) \
- V(NEON2RegMisc) \
- V(NEON3Different) \
- V(NEON3Same) \
- V(NEONAcrossLanes) \
- V(NEONByIndexedElement) \
- V(NEONCopy) \
- V(NEONExtract) \
- V(NEONLoadStoreMultiStruct) \
- V(NEONLoadStoreMultiStructPostIndex) \
- V(NEONLoadStoreSingleStruct) \
- V(NEONLoadStoreSingleStructPostIndex) \
- V(NEONModifiedImmediate) \
- V(NEONScalar2RegMisc) \
- V(NEONScalar3Diff) \
- V(NEONScalar3Same) \
- V(NEONScalarByIndexedElement) \
- V(NEONScalarCopy) \
- V(NEONScalarPairwise) \
- V(NEONScalarShiftImmediate) \
- V(NEONShiftImmediate) \
- V(NEONTable) \
- V(NEONPerm) \
-
-#define VISITOR_LIST_THAT_DONT_RETURN(V) \
- V(Unallocated) \
- V(Unimplemented) \
-
-#define VISITOR_LIST(V) \
- VISITOR_LIST_THAT_RETURN(V) \
- VISITOR_LIST_THAT_DONT_RETURN(V) \
-
-namespace vixl {
-
-// The Visitor interface. Disassembler and simulator (and other tools)
-// must provide implementations for all of these functions.
-class DecoderVisitor {
- public:
- enum VisitorConstness {
- kConstVisitor,
- kNonConstVisitor
- };
- explicit DecoderVisitor(VisitorConstness constness = kConstVisitor)
- : constness_(constness) {}
-
- virtual ~DecoderVisitor() {}
-
- #define DECLARE(A) virtual void Visit##A(const Instruction* instr) = 0;
- VISITOR_LIST(DECLARE)
- #undef DECLARE
-
- bool IsConstVisitor() const { return constness_ == kConstVisitor; }
- Instruction* MutableInstruction(const Instruction* instr) {
- VIXL_ASSERT(!IsConstVisitor());
- return const_cast<Instruction*>(instr);
- }
-
- private:
- const VisitorConstness constness_;
-};
-
-
-class Decoder {
- public:
- Decoder() {}
-
- // Top-level wrappers around the actual decoding function.
- void Decode(const Instruction* instr) {
- std::list<DecoderVisitor*>::iterator it;
- for (it = visitors_.begin(); it != visitors_.end(); it++) {
- VIXL_ASSERT((*it)->IsConstVisitor());
- }
- DecodeInstruction(instr);
- }
- void Decode(Instruction* instr) {
- DecodeInstruction(const_cast<const Instruction*>(instr));
- }
-
- // Register a new visitor class with the decoder.
- // Decode() will call the corresponding visitor method from all registered
- // visitor classes when decoding reaches the leaf node of the instruction
- // decode tree.
- // Visitors are called in order.
- // A visitor can be registered multiple times.
- //
- // d.AppendVisitor(V1);
- // d.AppendVisitor(V2);
- // d.PrependVisitor(V2);
- // d.AppendVisitor(V3);
- //
- // d.Decode(i);
- //
- // will call in order visitor methods in V2, V1, V2, V3.
- void AppendVisitor(DecoderVisitor* visitor);
- void PrependVisitor(DecoderVisitor* visitor);
- // These helpers register `new_visitor` before or after the first instance of
- // `registered_visiter` in the list.
- // So if
- // V1, V2, V1, V2
- // are registered in this order in the decoder, calls to
- // d.InsertVisitorAfter(V3, V1);
- // d.InsertVisitorBefore(V4, V2);
- // will yield the order
- // V1, V3, V4, V2, V1, V2
- //
- // For more complex modifications of the order of registered visitors, one can
- // directly access and modify the list of visitors via the `visitors()'
- // accessor.
- void InsertVisitorBefore(DecoderVisitor* new_visitor,
- DecoderVisitor* registered_visitor);
- void InsertVisitorAfter(DecoderVisitor* new_visitor,
- DecoderVisitor* registered_visitor);
-
- // Remove all instances of a previously registered visitor class from the list
- // of visitors stored by the decoder.
- void RemoveVisitor(DecoderVisitor* visitor);
-
- #define DECLARE(A) void Visit##A(const Instruction* instr);
- VISITOR_LIST(DECLARE)
- #undef DECLARE
-
-
- std::list<DecoderVisitor*>* visitors() { return &visitors_; }
-
- private:
- // Decodes an instruction and calls the visitor functions registered with the
- // Decoder class.
- void DecodeInstruction(const Instruction* instr);
-
- // Decode the PC relative addressing instruction, and call the corresponding
- // visitors.
- // On entry, instruction bits 27:24 = 0x0.
- void DecodePCRelAddressing(const Instruction* instr); - - // Decode the add/subtract immediate instruction, and call the correspod= ing - // visitors. - // On entry, instruction bits 27:24 =3D 0x1. - void DecodeAddSubImmediate(const Instruction* instr); - - // Decode the branch, system command, and exception generation parts of - // the instruction tree, and call the corresponding visitors. - // On entry, instruction bits 27:24 =3D {0x4, 0x5, 0x6, 0x7}. - void DecodeBranchSystemException(const Instruction* instr); - - // Decode the load and store parts of the instruction tree, and call - // the corresponding visitors. - // On entry, instruction bits 27:24 =3D {0x8, 0x9, 0xC, 0xD}. - void DecodeLoadStore(const Instruction* instr); - - // Decode the logical immediate and move wide immediate parts of the - // instruction tree, and call the corresponding visitors. - // On entry, instruction bits 27:24 =3D 0x2. - void DecodeLogical(const Instruction* instr); - - // Decode the bitfield and extraction parts of the instruction tree, - // and call the corresponding visitors. - // On entry, instruction bits 27:24 =3D 0x3. - void DecodeBitfieldExtract(const Instruction* instr); - - // Decode the data processing parts of the instruction tree, and call the - // corresponding visitors. - // On entry, instruction bits 27:24 =3D {0x1, 0xA, 0xB}. - void DecodeDataProcessing(const Instruction* instr); - - // Decode the floating point parts of the instruction tree, and call the - // corresponding visitors. - // On entry, instruction bits 27:24 =3D {0xE, 0xF}. - void DecodeFP(const Instruction* instr); - - // Decode the Advanced SIMD (NEON) load/store part of the instruction tr= ee, - // and call the corresponding visitors. - // On entry, instruction bits 29:25 =3D 0x6. - void DecodeNEONLoadStore(const Instruction* instr); - - // Decode the Advanced SIMD (NEON) vector data processing part of the - // instruction tree, and call the corresponding visitors. 
-  // On entry, instruction bits 28:25 = 0x7.
-  void DecodeNEONVectorDataProcessing(const Instruction* instr);
-
-  // Decode the Advanced SIMD (NEON) scalar data processing part of the
-  // instruction tree, and call the corresponding visitors.
-  // On entry, instruction bits 28:25 = 0xF.
-  void DecodeNEONScalarDataProcessing(const Instruction* instr);
-
- private:
-  // Visitors are registered in a list.
-  std::list<DecoderVisitor*> visitors_;
-};
-
-}  // namespace vixl
-
-#endif  // VIXL_A64_DECODER_A64_H_
diff --git a/disas/libvixl/vixl/a64/disasm-a64.h b/disas/libvixl/vixl/a64/disasm-a64.h
deleted file mode 100644
index 930df6ea6a..0000000000
--- a/disas/libvixl/vixl/a64/disasm-a64.h
+++ /dev/null
@@ -1,177 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_A64_DISASM_A64_H
-#define VIXL_A64_DISASM_A64_H
-
-#include "vixl/globals.h"
-#include "vixl/utils.h"
-#include "vixl/a64/instructions-a64.h"
-#include "vixl/a64/decoder-a64.h"
-#include "vixl/a64/assembler-a64.h"
-
-namespace vixl {
-
-class Disassembler: public DecoderVisitor {
- public:
-  Disassembler();
-  Disassembler(char* text_buffer, int buffer_size);
-  virtual ~Disassembler();
-  char* GetOutput();
-
-  // Declare all Visitor functions.
-  #define DECLARE(A) virtual void Visit##A(const Instruction* instr);
-  VISITOR_LIST(DECLARE)
-  #undef DECLARE
-
- protected:
-  virtual void ProcessOutput(const Instruction* instr);
-
-  // Default output functions. The functions below implement a default way of
-  // printing elements in the disassembly. A sub-class can override these to
-  // customize the disassembly output.
-
-  // Prints the name of a register.
-  // TODO: This currently doesn't allow renaming of V registers.
-  virtual void AppendRegisterNameToOutput(const Instruction* instr,
-                                          const CPURegister& reg);
-
-  // Prints a PC-relative offset. This is used for example when disassembling
-  // branches to immediate offsets.
-  virtual void AppendPCRelativeOffsetToOutput(const Instruction* instr,
-                                              int64_t offset);
-
-  // Prints an address, in the general case. It can be code or data. This is
-  // used for example to print the target address of an ADR instruction.
-  virtual void AppendCodeRelativeAddressToOutput(const Instruction* instr,
-                                                 const void* addr);
-
-  // Prints the address of some code.
-  // This is used for example to print the target address of a branch to an
-  // immediate offset.
-  // A sub-class can for example override this method to lookup the address and
-  // print an appropriate name.
-  virtual void AppendCodeRelativeCodeAddressToOutput(const Instruction* instr,
-                                                     const void* addr);
-
-  // Prints the address of some data.
-  // This is used for example to print the source address of a load literal
-  // instruction.
-  virtual void AppendCodeRelativeDataAddressToOutput(const Instruction* instr,
-                                                     const void* addr);
-
-  // Same as the above, but for addresses that are not relative to the code
-  // buffer. They are currently not used by VIXL.
-  virtual void AppendAddressToOutput(const Instruction* instr,
-                                     const void* addr);
-  virtual void AppendCodeAddressToOutput(const Instruction* instr,
-                                         const void* addr);
-  virtual void AppendDataAddressToOutput(const Instruction* instr,
-                                         const void* addr);
-
- public:
-  // Get/Set the offset that should be added to code addresses when printing
-  // code-relative addresses in the AppendCodeRelative<Type>AddressToOutput()
-  // helpers.
-  // Below is an example of how a branch immediate instruction in memory at
-  // address 0xb010200 would disassemble with different offsets.
-  //   Base address | Disassembly
-  //   0x0          | 0xb010200:  b #+0xcc (addr 0xb0102cc)
-  //   0x10000      | 0xb000200:  b #+0xcc (addr 0xb0002cc)
-  //   0xb010200    | 0x0:        b #+0xcc (addr 0xcc)
-  void MapCodeAddress(int64_t base_address, const Instruction* instr_address);
-  int64_t CodeRelativeAddress(const void* instr);
-
- private:
-  void Format(
-      const Instruction* instr, const char* mnemonic, const char* format);
-  void Substitute(const Instruction* instr, const char* string);
-  int SubstituteField(const Instruction* instr, const char* format);
-  int SubstituteRegisterField(const Instruction* instr, const char* format);
-  int SubstituteImmediateField(const Instruction* instr, const char* format);
-  int SubstituteLiteralField(const Instruction* instr, const char* format);
-  int SubstituteBitfieldImmediateField(
-      const Instruction* instr, const char* format);
-  int SubstituteShiftField(const Instruction* instr, const char* format);
-  int SubstituteExtendField(const Instruction* instr, const char* format);
-  int SubstituteConditionField(const Instruction* instr, const char* format);
-  int SubstitutePCRelAddressField(const Instruction* instr, const char* format);
-  int SubstituteBranchTargetField(const Instruction* instr, const char* format);
-  int SubstituteLSRegOffsetField(const Instruction* instr, const char* format);
-  int SubstitutePrefetchField(const Instruction* instr, const char* format);
-  int SubstituteBarrierField(const Instruction* instr, const char* format);
-  int SubstituteSysOpField(const Instruction* instr, const char* format);
-  int SubstituteCrField(const Instruction* instr, const char* format);
-  bool RdIsZROrSP(const Instruction* instr) const {
-    return (instr->Rd() == kZeroRegCode);
-  }
-
-  bool RnIsZROrSP(const Instruction* instr) const {
-    return (instr->Rn() == kZeroRegCode);
-  }
-
-  bool RmIsZROrSP(const Instruction* instr) const {
-    return (instr->Rm() == kZeroRegCode);
-  }
-
-  bool RaIsZROrSP(const Instruction* instr) const {
-    return (instr->Ra() == kZeroRegCode);
-  }
-
-  bool IsMovzMovnImm(unsigned reg_size, uint64_t value);
-
-  int64_t code_address_offset() const { return code_address_offset_; }
-
- protected:
-  void ResetOutput();
-  void AppendToOutput(const char* string, ...) PRINTF_CHECK(2, 3);
-
-  void set_code_address_offset(int64_t code_address_offset) {
-    code_address_offset_ = code_address_offset;
-  }
-
-  char* buffer_;
-  uint32_t buffer_pos_;
-  uint32_t buffer_size_;
-  bool own_buffer_;
-
-  int64_t code_address_offset_;
-};
-
-
-class PrintDisassembler: public Disassembler {
- public:
-  explicit PrintDisassembler(FILE* stream) : stream_(stream) { }
-
- protected:
-  virtual void ProcessOutput(const Instruction* instr);
-
- private:
-  FILE *stream_;
-};
-}  // namespace vixl
-
-#endif  // VIXL_A64_DISASM_A64_H
diff --git a/disas/libvixl/vixl/a64/instructions-a64.h b/disas/libvixl/vixl/a64/instructions-a64.h
deleted file mode 100644
index 7e0dbae36a..0000000000
--- a/disas/libvixl/vixl/a64/instructions-a64.h
+++ /dev/null
@@ -1,757 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_A64_INSTRUCTIONS_A64_H_
-#define VIXL_A64_INSTRUCTIONS_A64_H_
-
-#include "vixl/globals.h"
-#include "vixl/utils.h"
-#include "vixl/a64/constants-a64.h"
-
-namespace vixl {
-// ISA constants. --------------------------------------------------------------
-
-typedef uint32_t Instr;
-const unsigned kInstructionSize = 4;
-const unsigned kInstructionSizeLog2 = 2;
-const unsigned kLiteralEntrySize = 4;
-const unsigned kLiteralEntrySizeLog2 = 2;
-const unsigned kMaxLoadLiteralRange = 1 * MBytes;
-
-// This is the nominal page size (as used by the adrp instruction); the actual
-// size of the memory pages allocated by the kernel is likely to differ.
-const unsigned kPageSize = 4 * KBytes;
-const unsigned kPageSizeLog2 = 12;
-
-const unsigned kBRegSize = 8;
-const unsigned kBRegSizeLog2 = 3;
-const unsigned kBRegSizeInBytes = kBRegSize / 8;
-const unsigned kBRegSizeInBytesLog2 = kBRegSizeLog2 - 3;
-const unsigned kHRegSize = 16;
-const unsigned kHRegSizeLog2 = 4;
-const unsigned kHRegSizeInBytes = kHRegSize / 8;
-const unsigned kHRegSizeInBytesLog2 = kHRegSizeLog2 - 3;
-const unsigned kWRegSize = 32;
-const unsigned kWRegSizeLog2 = 5;
-const unsigned kWRegSizeInBytes = kWRegSize / 8;
-const unsigned kWRegSizeInBytesLog2 = kWRegSizeLog2 - 3;
-const unsigned kXRegSize = 64;
-const unsigned kXRegSizeLog2 = 6;
-const unsigned kXRegSizeInBytes = kXRegSize / 8;
-const unsigned kXRegSizeInBytesLog2 = kXRegSizeLog2 - 3;
-const unsigned kSRegSize = 32;
-const unsigned kSRegSizeLog2 = 5;
-const unsigned kSRegSizeInBytes = kSRegSize / 8;
-const unsigned kSRegSizeInBytesLog2 = kSRegSizeLog2 - 3;
-const unsigned kDRegSize = 64;
-const unsigned kDRegSizeLog2 = 6;
-const unsigned kDRegSizeInBytes = kDRegSize / 8;
-const unsigned kDRegSizeInBytesLog2 = kDRegSizeLog2 - 3;
-const unsigned kQRegSize = 128;
-const unsigned kQRegSizeLog2 = 7;
-const unsigned kQRegSizeInBytes = kQRegSize / 8;
-const unsigned kQRegSizeInBytesLog2 = kQRegSizeLog2 - 3;
-const uint64_t kWRegMask = UINT64_C(0xffffffff);
-const uint64_t kXRegMask = UINT64_C(0xffffffffffffffff);
-const uint64_t kSRegMask = UINT64_C(0xffffffff);
-const uint64_t kDRegMask = UINT64_C(0xffffffffffffffff);
-const uint64_t kSSignMask = UINT64_C(0x80000000);
-const uint64_t kDSignMask = UINT64_C(0x8000000000000000);
-const uint64_t kWSignMask = UINT64_C(0x80000000);
-const uint64_t kXSignMask = UINT64_C(0x8000000000000000);
-const uint64_t kByteMask = UINT64_C(0xff);
-const uint64_t kHalfWordMask = UINT64_C(0xffff);
-const uint64_t kWordMask = UINT64_C(0xffffffff);
-const uint64_t kXMaxUInt = UINT64_C(0xffffffffffffffff);
-const uint64_t kWMaxUInt = UINT64_C(0xffffffff);
-const int64_t kXMaxInt = INT64_C(0x7fffffffffffffff);
-const int64_t kXMinInt = INT64_C(0x8000000000000000);
-const int32_t kWMaxInt = INT32_C(0x7fffffff);
-const int32_t kWMinInt = INT32_C(0x80000000);
-const unsigned kLinkRegCode = 30;
-const unsigned kZeroRegCode = 31;
-const unsigned kSPRegInternalCode = 63;
-const unsigned kRegCodeMask = 0x1f;
-
-const unsigned kAddressTagOffset = 56;
-const unsigned kAddressTagWidth = 8;
-const uint64_t kAddressTagMask =
-    ((UINT64_C(1) << kAddressTagWidth) - 1) << kAddressTagOffset;
-VIXL_STATIC_ASSERT(kAddressTagMask == UINT64_C(0xff00000000000000));
-
-// AArch64 floating-point specifics. These match IEEE-754.
-const unsigned kDoubleMantissaBits = 52;
-const unsigned kDoubleExponentBits = 11;
-const unsigned kFloatMantissaBits = 23;
-const unsigned kFloatExponentBits = 8;
-const unsigned kFloat16MantissaBits = 10;
-const unsigned kFloat16ExponentBits = 5;
-
-// Floating-point infinity values.
-extern const float16 kFP16PositiveInfinity;
-extern const float16 kFP16NegativeInfinity;
-extern const float kFP32PositiveInfinity;
-extern const float kFP32NegativeInfinity;
-extern const double kFP64PositiveInfinity;
-extern const double kFP64NegativeInfinity;
-
-// The default NaN values (for FPCR.DN=1).
-extern const float16 kFP16DefaultNaN;
-extern const float kFP32DefaultNaN;
-extern const double kFP64DefaultNaN;
-
-unsigned CalcLSDataSize(LoadStoreOp op);
-unsigned CalcLSPairDataSize(LoadStorePairOp op);
-
-enum ImmBranchType {
-  UnknownBranchType = 0,
-  CondBranchType    = 1,
-  UncondBranchType  = 2,
-  CompareBranchType = 3,
-  TestBranchType    = 4
-};
-
-enum AddrMode {
-  Offset,
-  PreIndex,
-  PostIndex
-};
-
-enum FPRounding {
-  // The first four values are encodable directly by FPCR<RMode>.
-  FPTieEven = 0x0,
-  FPPositiveInfinity = 0x1,
-  FPNegativeInfinity = 0x2,
-  FPZero = 0x3,
-
-  // The final rounding modes are only available when explicitly specified by
-  // the instruction (such as with fcvta). It cannot be set in FPCR.
-  FPTieAway,
-  FPRoundOdd
-};
-
-enum Reg31Mode {
-  Reg31IsStackPointer,
-  Reg31IsZeroRegister
-};
-
-// Instructions. ---------------------------------------------------------------
-
-class Instruction {
- public:
-  Instr InstructionBits() const {
-    return *(reinterpret_cast<const Instr*>(this));
-  }
-
-  void SetInstructionBits(Instr new_instr) {
-    *(reinterpret_cast<Instr*>(this)) = new_instr;
-  }
-
-  int Bit(int pos) const {
-    return (InstructionBits() >> pos) & 1;
-  }
-
-  uint32_t Bits(int msb, int lsb) const {
-    return unsigned_bitextract_32(msb, lsb, InstructionBits());
-  }
-
-  int32_t SignedBits(int msb, int lsb) const {
-    int32_t bits = *(reinterpret_cast<const int32_t*>(this));
-    return signed_bitextract_32(msb, lsb, bits);
-  }
-
-  Instr Mask(uint32_t mask) const {
-    return InstructionBits() & mask;
-  }
-
-  #define DEFINE_GETTER(Name, HighBit, LowBit, Func) \
-  int32_t Name() const { return Func(HighBit, LowBit); }
-  INSTRUCTION_FIELDS_LIST(DEFINE_GETTER)
-  #undef DEFINE_GETTER
-
-  // ImmPCRel is a compound field (not present in INSTRUCTION_FIELDS_LIST),
-  // formed from ImmPCRelLo and ImmPCRelHi.
-  int ImmPCRel() const {
-    int offset =
-        static_cast<int>((ImmPCRelHi() << ImmPCRelLo_width) | ImmPCRelLo());
-    int width = ImmPCRelLo_width + ImmPCRelHi_width;
-    return signed_bitextract_32(width - 1, 0, offset);
-  }
-
-  uint64_t ImmLogical() const;
-  unsigned ImmNEONabcdefgh() const;
-  float ImmFP32() const;
-  double ImmFP64() const;
-  float ImmNEONFP32() const;
-  double ImmNEONFP64() const;
-
-  unsigned SizeLS() const {
-    return CalcLSDataSize(static_cast<LoadStoreOp>(Mask(LoadStoreMask)));
-  }
-
-  unsigned SizeLSPair() const {
-    return CalcLSPairDataSize(
-        static_cast<LoadStorePairOp>(Mask(LoadStorePairMask)));
-  }
-
-  int NEONLSIndex(int access_size_shift) const {
-    int64_t q = NEONQ();
-    int64_t s = NEONS();
-    int64_t size = NEONLSSize();
-    int64_t index = (q << 3) | (s << 2) | size;
-    return static_cast<int>(index >> access_size_shift);
-  }
-
-  // Helpers.
-  bool IsCondBranchImm() const {
-    return Mask(ConditionalBranchFMask) == ConditionalBranchFixed;
-  }
-
-  bool IsUncondBranchImm() const {
-    return Mask(UnconditionalBranchFMask) == UnconditionalBranchFixed;
-  }
-
-  bool IsCompareBranch() const {
-    return Mask(CompareBranchFMask) == CompareBranchFixed;
-  }
-
-  bool IsTestBranch() const {
-    return Mask(TestBranchFMask) == TestBranchFixed;
-  }
-
-  bool IsImmBranch() const {
-    return BranchType() != UnknownBranchType;
-  }
-
-  bool IsPCRelAddressing() const {
-    return Mask(PCRelAddressingFMask) == PCRelAddressingFixed;
-  }
-
-  bool IsLogicalImmediate() const {
-    return Mask(LogicalImmediateFMask) == LogicalImmediateFixed;
-  }
-
-  bool IsAddSubImmediate() const {
-    return Mask(AddSubImmediateFMask) == AddSubImmediateFixed;
-  }
-
-  bool IsAddSubExtended() const {
-    return Mask(AddSubExtendedFMask) == AddSubExtendedFixed;
-  }
-
-  bool IsLoadOrStore() const {
-    return Mask(LoadStoreAnyFMask) == LoadStoreAnyFixed;
-  }
-
-  bool IsLoad() const;
-  bool IsStore() const;
-
-  bool IsLoadLiteral() const {
-    // This includes PRFM_lit.
-    return Mask(LoadLiteralFMask) == LoadLiteralFixed;
-  }
-
-  bool IsMovn() const {
-    return (Mask(MoveWideImmediateMask) == MOVN_x) ||
-           (Mask(MoveWideImmediateMask) == MOVN_w);
-  }
-
-  static int ImmBranchRangeBitwidth(ImmBranchType branch_type);
-  static int32_t ImmBranchForwardRange(ImmBranchType branch_type);
-  static bool IsValidImmPCOffset(ImmBranchType branch_type, int64_t offset);
-
-  // Indicate whether Rd can be the stack pointer or the zero register. This
-  // does not check that the instruction actually has an Rd field.
-  Reg31Mode RdMode() const {
-    // The following instructions use sp or wsp as Rd:
-    //  Add/sub (immediate) when not setting the flags.
-    //  Add/sub (extended) when not setting the flags.
-    //  Logical (immediate) when not setting the flags.
-    // Otherwise, r31 is the zero register.
-    if (IsAddSubImmediate() || IsAddSubExtended()) {
-      if (Mask(AddSubSetFlagsBit)) {
-        return Reg31IsZeroRegister;
-      } else {
-        return Reg31IsStackPointer;
-      }
-    }
-    if (IsLogicalImmediate()) {
-      // Of the logical (immediate) instructions, only ANDS (and its aliases)
-      // can set the flags. The others can all write into sp.
-      // Note that some logical operations are not available to
-      // immediate-operand instructions, so we have to combine two masks here.
-      if (Mask(LogicalImmediateMask & LogicalOpMask) == ANDS) {
-        return Reg31IsZeroRegister;
-      } else {
-        return Reg31IsStackPointer;
-      }
-    }
-    return Reg31IsZeroRegister;
-  }
-
-  // Indicate whether Rn can be the stack pointer or the zero register. This
-  // does not check that the instruction actually has an Rn field.
-  Reg31Mode RnMode() const {
-    // The following instructions use sp or wsp as Rn:
-    //  All loads and stores.
-    //  Add/sub (immediate).
-    //  Add/sub (extended).
-    // Otherwise, r31 is the zero register.
-    if (IsLoadOrStore() || IsAddSubImmediate() || IsAddSubExtended()) {
-      return Reg31IsStackPointer;
-    }
-    return Reg31IsZeroRegister;
-  }
-
-  ImmBranchType BranchType() const {
-    if (IsCondBranchImm()) {
-      return CondBranchType;
-    } else if (IsUncondBranchImm()) {
-      return UncondBranchType;
-    } else if (IsCompareBranch()) {
-      return CompareBranchType;
-    } else if (IsTestBranch()) {
-      return TestBranchType;
-    } else {
-      return UnknownBranchType;
-    }
-  }
-
-  // Find the target of this instruction. 'this' may be a branch or a
-  // PC-relative addressing instruction.
-  const Instruction* ImmPCOffsetTarget() const;
-
-  // Patch a PC-relative offset to refer to 'target'. 'this' may be a branch or
-  // a PC-relative addressing instruction.
-  void SetImmPCOffsetTarget(const Instruction* target);
-  // Patch a literal load instruction to load from 'source'.
-  void SetImmLLiteral(const Instruction* source);
-
-  // The range of a load literal instruction, expressed as 'instr +- range'.
-  // The range is actually the 'positive' range; the branch instruction can
-  // target [instr - range - kInstructionSize, instr + range].
-  static const int kLoadLiteralImmBitwidth = 19;
-  static const int kLoadLiteralRange =
-      (1 << kLoadLiteralImmBitwidth) / 2 - kInstructionSize;
-
-  // Calculate the address of a literal referred to by a load-literal
-  // instruction, and return it as the specified type.
-  //
-  // The literal itself is safely mutable only if the backing buffer is safely
-  // mutable.
-  template <typename T>
-  T LiteralAddress() const {
-    uint64_t base_raw = reinterpret_cast<uint64_t>(this);
-    int64_t offset = ImmLLiteral() << kLiteralEntrySizeLog2;
-    uint64_t address_raw = base_raw + offset;
-
-    // Cast the address using a C-style cast. A reinterpret_cast would be
-    // appropriate, but it can't cast one integral type to another.
-    T address = (T)(address_raw);
-
-    // Assert that the address can be represented by the specified type.
-    VIXL_ASSERT((uint64_t)(address) == address_raw);
-
-    return address;
-  }
-
-  uint32_t Literal32() const {
-    uint32_t literal;
-    memcpy(&literal, LiteralAddress<uint32_t*>(), sizeof(literal));
-    return literal;
-  }
-
-  uint64_t Literal64() const {
-    uint64_t literal;
-    memcpy(&literal, LiteralAddress<uint64_t*>(), sizeof(literal));
-    return literal;
-  }
-
-  float LiteralFP32() const {
-    return rawbits_to_float(Literal32());
-  }
-
-  double LiteralFP64() const {
-    return rawbits_to_double(Literal64());
-  }
-
-  const Instruction* NextInstruction() const {
-    return this + kInstructionSize;
-  }
-
-  const Instruction* InstructionAtOffset(int64_t offset) const {
-    VIXL_ASSERT(IsWordAligned(this + offset));
-    return this + offset;
-  }
-
-  template <typename T> static Instruction* Cast(T src) {
-    return reinterpret_cast<Instruction*>(src);
-  }
-
-  template <typename T> static const Instruction* CastConst(T src) {
-    return reinterpret_cast<const Instruction*>(src);
-  }
-
- private:
-  int ImmBranch() const;
-
-  static float Imm8ToFP32(uint32_t imm8);
-  static double Imm8ToFP64(uint32_t imm8);
-
-  void SetPCRelImmTarget(const Instruction* target);
-  void SetBranchImmTarget(const Instruction* target);
-};
-
-
-// Functions for handling NEON vector format information.
-enum VectorFormat {
-  kFormatUndefined = 0xffffffff,
-  kFormat8B = NEON_8B,
-  kFormat16B = NEON_16B,
-  kFormat4H = NEON_4H,
-  kFormat8H = NEON_8H,
-  kFormat2S = NEON_2S,
-  kFormat4S = NEON_4S,
-  kFormat1D = NEON_1D,
-  kFormat2D = NEON_2D,
-
-  // Scalar formats. We add the scalar bit to distinguish between scalar and
-  // vector enumerations; the bit is always set in the encoding of scalar ops
-  // and always clear for vector ops. Although kFormatD and kFormat1D appear
-  // to be the same, their meaning is subtly different. The first is a scalar
-  // operation, the second a vector operation that only affects one lane.
- kFormatB =3D NEON_B | NEONScalar, - kFormatH =3D NEON_H | NEONScalar, - kFormatS =3D NEON_S | NEONScalar, - kFormatD =3D NEON_D | NEONScalar -}; - -VectorFormat VectorFormatHalfWidth(const VectorFormat vform); -VectorFormat VectorFormatDoubleWidth(const VectorFormat vform); -VectorFormat VectorFormatDoubleLanes(const VectorFormat vform); -VectorFormat VectorFormatHalfLanes(const VectorFormat vform); -VectorFormat ScalarFormatFromLaneSize(int lanesize); -VectorFormat VectorFormatHalfWidthDoubleLanes(const VectorFormat vform); -VectorFormat VectorFormatFillQ(const VectorFormat vform); -unsigned RegisterSizeInBitsFromFormat(VectorFormat vform); -unsigned RegisterSizeInBytesFromFormat(VectorFormat vform); -// TODO: Make the return types of these functions consistent. -unsigned LaneSizeInBitsFromFormat(VectorFormat vform); -int LaneSizeInBytesFromFormat(VectorFormat vform); -int LaneSizeInBytesLog2FromFormat(VectorFormat vform); -int LaneCountFromFormat(VectorFormat vform); -int MaxLaneCountFromFormat(VectorFormat vform); -bool IsVectorFormat(VectorFormat vform); -int64_t MaxIntFromFormat(VectorFormat vform); -int64_t MinIntFromFormat(VectorFormat vform); -uint64_t MaxUintFromFormat(VectorFormat vform); - - -enum NEONFormat { - NF_UNDEF =3D 0, - NF_8B =3D 1, - NF_16B =3D 2, - NF_4H =3D 3, - NF_8H =3D 4, - NF_2S =3D 5, - NF_4S =3D 6, - NF_1D =3D 7, - NF_2D =3D 8, - NF_B =3D 9, - NF_H =3D 10, - NF_S =3D 11, - NF_D =3D 12 -}; - -static const unsigned kNEONFormatMaxBits =3D 6; - -struct NEONFormatMap { - // The bit positions in the instruction to consider. - uint8_t bits[kNEONFormatMaxBits]; - - // Mapping from concatenated bits to format. - NEONFormat map[1 << kNEONFormatMaxBits]; -}; - -class NEONFormatDecoder { - public: - enum SubstitutionMode { - kPlaceholder, - kFormat - }; - - // Construct a format decoder with increasingly specific format maps for= each - // subsitution. If no format map is specified, the default is the integer - // format map. 
- explicit NEONFormatDecoder(const Instruction* instr) { - instrbits_ =3D instr->InstructionBits(); - SetFormatMaps(IntegerFormatMap()); - } - NEONFormatDecoder(const Instruction* instr, - const NEONFormatMap* format) { - instrbits_ =3D instr->InstructionBits(); - SetFormatMaps(format); - } - NEONFormatDecoder(const Instruction* instr, - const NEONFormatMap* format0, - const NEONFormatMap* format1) { - instrbits_ =3D instr->InstructionBits(); - SetFormatMaps(format0, format1); - } - NEONFormatDecoder(const Instruction* instr, - const NEONFormatMap* format0, - const NEONFormatMap* format1, - const NEONFormatMap* format2) { - instrbits_ =3D instr->InstructionBits(); - SetFormatMaps(format0, format1, format2); - } - - // Set the format mapping for all or individual substitutions. - void SetFormatMaps(const NEONFormatMap* format0, - const NEONFormatMap* format1 =3D NULL, - const NEONFormatMap* format2 =3D NULL) { - VIXL_ASSERT(format0 !=3D NULL); - formats_[0] =3D format0; - formats_[1] =3D (format1 =3D=3D NULL) ? formats_[0] : format1; - formats_[2] =3D (format2 =3D=3D NULL) ? formats_[1] : format2; - } - void SetFormatMap(unsigned index, const NEONFormatMap* format) { - VIXL_ASSERT(index <=3D (sizeof(formats_) / sizeof(formats_[0]))); - VIXL_ASSERT(format !=3D NULL); - formats_[index] =3D format; - } - - // Substitute %s in the input string with the placeholder string for each - // register, ie. "'B", "'H", etc. - const char* SubstitutePlaceholders(const char* string) { - return Substitute(string, kPlaceholder, kPlaceholder, kPlaceholder); - } - - // Substitute %s in the input string with a new string based on the - // substitution mode. 
- const char* Substitute(const char* string, - SubstitutionMode mode0 =3D kFormat, - SubstitutionMode mode1 =3D kFormat, - SubstitutionMode mode2 =3D kFormat) { - snprintf(form_buffer_, sizeof(form_buffer_), string, - GetSubstitute(0, mode0), - GetSubstitute(1, mode1), - GetSubstitute(2, mode2)); - return form_buffer_; - } - - // Append a "2" to a mnemonic string based of the state of the Q bit. - const char* Mnemonic(const char* mnemonic) { - if ((instrbits_ & NEON_Q) !=3D 0) { - snprintf(mne_buffer_, sizeof(mne_buffer_), "%s2", mnemonic); - return mne_buffer_; - } - return mnemonic; - } - - VectorFormat GetVectorFormat(int format_index =3D 0) { - return GetVectorFormat(formats_[format_index]); - } - - VectorFormat GetVectorFormat(const NEONFormatMap* format_map) { - static const VectorFormat vform[] =3D { - kFormatUndefined, - kFormat8B, kFormat16B, kFormat4H, kFormat8H, - kFormat2S, kFormat4S, kFormat1D, kFormat2D, - kFormatB, kFormatH, kFormatS, kFormatD - }; - VIXL_ASSERT(GetNEONFormat(format_map) < (sizeof(vform) / sizeof(vform[= 0]))); - return vform[GetNEONFormat(format_map)]; - } - - // Built in mappings for common cases. - - // The integer format map uses three bits (Q, size<1:0>) to encode the - // "standard" set of NEON integer vector formats. - static const NEONFormatMap* IntegerFormatMap() { - static const NEONFormatMap map =3D { - {23, 22, 30}, - {NF_8B, NF_16B, NF_4H, NF_8H, NF_2S, NF_4S, NF_UNDEF, NF_2D} - }; - return ↦ - } - - // The long integer format map uses two bits (size<1:0>) to encode the - // long set of NEON integer vector formats. These are used in narrow, wi= de - // and long operations. - static const NEONFormatMap* LongIntegerFormatMap() { - static const NEONFormatMap map =3D { - {23, 22}, {NF_8H, NF_4S, NF_2D} - }; - return ↦ - } - - // The FP format map uses two bits (Q, size<0>) to encode the NEON FP ve= ctor - // formats: NF_2S, NF_4S, NF_2D. 
- static const NEONFormatMap* FPFormatMap() { - // The FP format map assumes two bits (Q, size<0>) are used to encode = the - // NEON FP vector formats: NF_2S, NF_4S, NF_2D. - static const NEONFormatMap map =3D { - {22, 30}, {NF_2S, NF_4S, NF_UNDEF, NF_2D} - }; - return ↦ - } - - // The load/store format map uses three bits (Q, 11, 10) to encode the - // set of NEON vector formats. - static const NEONFormatMap* LoadStoreFormatMap() { - static const NEONFormatMap map =3D { - {11, 10, 30}, - {NF_8B, NF_16B, NF_4H, NF_8H, NF_2S, NF_4S, NF_1D, NF_2D} - }; - return ↦ - } - - // The logical format map uses one bit (Q) to encode the NEON vector for= mat: - // NF_8B, NF_16B. - static const NEONFormatMap* LogicalFormatMap() { - static const NEONFormatMap map =3D { - {30}, {NF_8B, NF_16B} - }; - return ↦ - } - - // The triangular format map uses between two and five bits to encode th= e NEON - // vector format: - // xxx10->8B, xxx11->16B, xx100->4H, xx101->8H - // x1000->2S, x1001->4S, 10001->2D, all others undefined. - static const NEONFormatMap* TriangularFormatMap() { - static const NEONFormatMap map =3D { - {19, 18, 17, 16, 30}, - {NF_UNDEF, NF_UNDEF, NF_8B, NF_16B, NF_4H, NF_8H, NF_8B, NF_16B, NF_= 2S, - NF_4S, NF_8B, NF_16B, NF_4H, NF_8H, NF_8B, NF_16B, NF_UNDEF, NF_2D, - NF_8B, NF_16B, NF_4H, NF_8H, NF_8B, NF_16B, NF_2S, NF_4S, NF_8B, NF= _16B, - NF_4H, NF_8H, NF_8B, NF_16B} - }; - return ↦ - } - - // The scalar format map uses two bits (size<1:0>) to encode the NEON sc= alar - // formats: NF_B, NF_H, NF_S, NF_D. - static const NEONFormatMap* ScalarFormatMap() { - static const NEONFormatMap map =3D { - {23, 22}, {NF_B, NF_H, NF_S, NF_D} - }; - return ↦ - } - - // The long scalar format map uses two bits (size<1:0>) to encode the lo= nger - // NEON scalar formats: NF_H, NF_S, NF_D. 
-  static const NEONFormatMap* LongScalarFormatMap() {
-    static const NEONFormatMap map = {
-      {23, 22}, {NF_H, NF_S, NF_D}
-    };
-    return &map;
-  }
-
-  // The FP scalar format map assumes one bit (size<0>) is used to encode the
-  // NEON FP scalar formats: NF_S, NF_D.
-  static const NEONFormatMap* FPScalarFormatMap() {
-    static const NEONFormatMap map = {
-      {22}, {NF_S, NF_D}
-    };
-    return &map;
-  }
-
-  // The triangular scalar format map uses between one and four bits to encode
-  // the NEON FP scalar formats:
-  // xxx1->B, xx10->H, x100->S, 1000->D, all others undefined.
-  static const NEONFormatMap* TriangularScalarFormatMap() {
-    static const NEONFormatMap map = {
-      {19, 18, 17, 16},
-      {NF_UNDEF, NF_B, NF_H, NF_B, NF_S, NF_B, NF_H, NF_B,
-       NF_D, NF_B, NF_H, NF_B, NF_S, NF_B, NF_H, NF_B}
-    };
-    return &map;
-  }
-
- private:
-  // Get a pointer to a string that represents the format or placeholder for
-  // the specified substitution index, based on the format map and instruction.
-  const char* GetSubstitute(int index, SubstitutionMode mode) {
-    if (mode == kFormat) {
-      return NEONFormatAsString(GetNEONFormat(formats_[index]));
-    }
-    VIXL_ASSERT(mode == kPlaceholder);
-    return NEONFormatAsPlaceholder(GetNEONFormat(formats_[index]));
-  }
-
-  // Get the NEONFormat enumerated value for bits obtained from the
-  // instruction based on the specified format mapping.
-  NEONFormat GetNEONFormat(const NEONFormatMap* format_map) {
-    return format_map->map[PickBits(format_map->bits)];
-  }
-
-  // Convert a NEONFormat into a string.
-  static const char* NEONFormatAsString(NEONFormat format) {
-    static const char* formats[] = {
-      "undefined",
-      "8b", "16b", "4h", "8h", "2s", "4s", "1d", "2d",
-      "b", "h", "s", "d"
-    };
-    VIXL_ASSERT(format < (sizeof(formats) / sizeof(formats[0])));
-    return formats[format];
-  }
-
-  // Convert a NEONFormat into a register placeholder string.
-  static const char* NEONFormatAsPlaceholder(NEONFormat format) {
-    VIXL_ASSERT((format == NF_B) || (format == NF_H) ||
-                (format == NF_S) || (format == NF_D) ||
-                (format == NF_UNDEF));
-    static const char* formats[] = {
-      "undefined",
-      "undefined", "undefined", "undefined", "undefined",
-      "undefined", "undefined", "undefined", "undefined",
-      "'B", "'H", "'S", "'D"
-    };
-    return formats[format];
-  }
-
-  // Select bits from instrbits_ defined by the bits array, concatenate them,
-  // and return the value.
-  uint8_t PickBits(const uint8_t bits[]) {
-    uint8_t result = 0;
-    for (unsigned b = 0; b < kNEONFormatMaxBits; b++) {
-      if (bits[b] == 0) break;
-      result <<= 1;
-      result |= ((instrbits_ & (1 << bits[b])) == 0) ? 0 : 1;
-    }
-    return result;
-  }
-
-  Instr instrbits_;
-  const NEONFormatMap* formats_[3];
-  char form_buffer_[64];
-  char mne_buffer_[16];
-};
-}  // namespace vixl
-
-#endif  // VIXL_A64_INSTRUCTIONS_A64_H_
diff --git a/disas/libvixl/vixl/code-buffer.h b/disas/libvixl/vixl/code-buffer.h
deleted file mode 100644
index b95babbdee..0000000000
--- a/disas/libvixl/vixl/code-buffer.h
+++ /dev/null
@@ -1,113 +0,0 @@
-// Copyright 2014, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_CODE_BUFFER_H
-#define VIXL_CODE_BUFFER_H
-
-#include <string.h>
-#include "vixl/globals.h"
-
-namespace vixl {
-
-class CodeBuffer {
- public:
-  explicit CodeBuffer(size_t capacity = 4 * KBytes);
-  CodeBuffer(void* buffer, size_t capacity);
-  ~CodeBuffer();
-
-  void Reset();
-
-  ptrdiff_t OffsetFrom(ptrdiff_t offset) const {
-    ptrdiff_t cursor_offset = cursor_ - buffer_;
-    VIXL_ASSERT((offset >= 0) && (offset <= cursor_offset));
-    return cursor_offset - offset;
-  }
-
-  ptrdiff_t CursorOffset() const {
-    return OffsetFrom(0);
-  }
-
-  template <typename T>
-  T GetOffsetAddress(ptrdiff_t offset) const {
-    VIXL_ASSERT((offset >= 0) && (offset <= (cursor_ - buffer_)));
-    return reinterpret_cast<T>(buffer_ + offset);
-  }
-
-  size_t RemainingBytes() const {
-    VIXL_ASSERT((cursor_ >= buffer_) && (cursor_ <= (buffer_ + capacity_)));
-    return (buffer_ + capacity_) - cursor_;
-  }
-
-  // A code buffer can emit:
-  //  * 32-bit data: instruction and constant.
-  //  * 64-bit data: constant.
-  //  * string: debug info.
-  void Emit32(uint32_t data) { Emit(data); }
-
-  void Emit64(uint64_t data) { Emit(data); }
-
-  void EmitString(const char* string);
-
-  // Align to kInstructionSize.
-  void Align();
-
-  size_t capacity() const { return capacity_; }
-
-  bool IsManaged() const { return managed_; }
-
-  void Grow(size_t new_capacity);
-
-  bool IsDirty() const { return dirty_; }
-
-  void SetClean() { dirty_ = false; }
-
- private:
-  template <typename T>
-  void Emit(T value) {
-    VIXL_ASSERT(RemainingBytes() >= sizeof(value));
-    dirty_ = true;
-    memcpy(cursor_, &value, sizeof(value));
-    cursor_ += sizeof(value);
-  }
-
-  // Backing store of the buffer.
-  byte* buffer_;
-  // If true the backing store is allocated and deallocated by the buffer. The
-  // backing store can then grow on demand. If false the backing store is
-  // provided by the user and cannot be resized internally.
-  bool managed_;
-  // Pointer to the next location to be written.
-  byte* cursor_;
-  // True if there has been any write since the buffer was created or cleaned.
-  bool dirty_;
-  // Capacity in bytes of the backing store.
-  size_t capacity_;
-};
-
-}  // namespace vixl
-
-#endif  // VIXL_CODE_BUFFER_H
-
diff --git a/disas/libvixl/vixl/compiler-intrinsics.h b/disas/libvixl/vixl/compiler-intrinsics.h
deleted file mode 100644
index 9431beddb9..0000000000
--- a/disas/libvixl/vixl/compiler-intrinsics.h
+++ /dev/null
@@ -1,155 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-
-#ifndef VIXL_COMPILER_INTRINSICS_H
-#define VIXL_COMPILER_INTRINSICS_H
-
-#include "globals.h"
-
-namespace vixl {
-
-// Helper to check whether the version of GCC used is greater than the specified
-// requirement.
-#define MAJOR 1000000
-#define MINOR 1000
-#if defined(__GNUC__) && defined(__GNUC_MINOR__) && defined(__GNUC_PATCHLEVEL__)
-#define GCC_VERSION_OR_NEWER(major, minor, patchlevel)                         \
-    ((__GNUC__ * MAJOR + __GNUC_MINOR__ * MINOR + __GNUC_PATCHLEVEL__) >=      \
-     ((major) * MAJOR + (minor) * MINOR + (patchlevel)))
-#elif defined(__GNUC__) && defined(__GNUC_MINOR__)
-#define GCC_VERSION_OR_NEWER(major, minor, patchlevel)                         \
-    ((__GNUC__ * MAJOR + __GNUC_MINOR__ * MINOR) >=                            \
-     ((major) * MAJOR + (minor) * MINOR + (patchlevel)))
-#else
-#define GCC_VERSION_OR_NEWER(major, minor, patchlevel) 0
-#endif
-
-
-#if defined(__clang__) && !defined(VIXL_NO_COMPILER_BUILTINS)
-
-#define COMPILER_HAS_BUILTIN_CLRSB    (__has_builtin(__builtin_clrsb))
-#define COMPILER_HAS_BUILTIN_CLZ      (__has_builtin(__builtin_clz))
-#define COMPILER_HAS_BUILTIN_CTZ      (__has_builtin(__builtin_ctz))
-#define COMPILER_HAS_BUILTIN_FFS      (__has_builtin(__builtin_ffs))
-#define COMPILER_HAS_BUILTIN_POPCOUNT (__has_builtin(__builtin_popcount))
-
-#elif defined(__GNUC__) && !defined(VIXL_NO_COMPILER_BUILTINS)
-// The documentation for these builtins is available at:
-// https://gcc.gnu.org/onlinedocs/gcc-$MAJOR.$MINOR.$PATCHLEVEL/gcc//Other-Builtins.html
-
-# define COMPILER_HAS_BUILTIN_CLRSB    (GCC_VERSION_OR_NEWER(4, 7, 0))
-# define COMPILER_HAS_BUILTIN_CLZ      (GCC_VERSION_OR_NEWER(3, 4, 0))
-# define COMPILER_HAS_BUILTIN_CTZ      (GCC_VERSION_OR_NEWER(3, 4, 0))
-# define COMPILER_HAS_BUILTIN_FFS      (GCC_VERSION_OR_NEWER(3, 4, 0))
-# define COMPILER_HAS_BUILTIN_POPCOUNT (GCC_VERSION_OR_NEWER(3, 4, 0))
-
-#else
-// One can define VIXL_NO_COMPILER_BUILTINS to force using the manually
-// implemented C++ methods.
-
-#define COMPILER_HAS_BUILTIN_BSWAP    false
-#define COMPILER_HAS_BUILTIN_CLRSB    false
-#define COMPILER_HAS_BUILTIN_CLZ      false
-#define COMPILER_HAS_BUILTIN_CTZ      false
-#define COMPILER_HAS_BUILTIN_FFS      false
-#define COMPILER_HAS_BUILTIN_POPCOUNT false
-
-#endif
-
-
-template<typename V>
-inline bool IsPowerOf2(V value) {
-  return (value != 0) && ((value & (value - 1)) == 0);
-}
-
-
-// Declaration of fallback functions.
-int CountLeadingSignBitsFallBack(int64_t value, int width);
-int CountLeadingZerosFallBack(uint64_t value, int width);
-int CountSetBitsFallBack(uint64_t value, int width);
-int CountTrailingZerosFallBack(uint64_t value, int width);
-
-
-// Implementation of intrinsics functions.
-// TODO: The implementations could be improved for sizes different from 32bit
-// and 64bit: we could mask the values and call the appropriate builtin.
-
-template<typename V>
-inline int CountLeadingSignBits(V value, int width = (sizeof(V) * 8)) {
-#if COMPILER_HAS_BUILTIN_CLRSB
-  if (width == 32) {
-    return __builtin_clrsb(value);
-  } else if (width == 64) {
-    return __builtin_clrsbll(value);
-  }
-#endif
-  return CountLeadingSignBitsFallBack(value, width);
-}
-
-
-template<typename V>
-inline int CountLeadingZeros(V value, int width = (sizeof(V) * 8)) {
-#if COMPILER_HAS_BUILTIN_CLZ
-  if (width == 32) {
-    return (value == 0) ? 32 : __builtin_clz(static_cast<uint32_t>(value));
-  } else if (width == 64) {
-    return (value == 0) ? 64 : __builtin_clzll(value);
-  }
-#endif
-  return CountLeadingZerosFallBack(value, width);
-}
-
-
-template<typename V>
-inline int CountSetBits(V value, int width = (sizeof(V) * 8)) {
-#if COMPILER_HAS_BUILTIN_POPCOUNT
-  if (width == 32) {
-    return __builtin_popcount(static_cast<uint32_t>(value));
-  } else if (width == 64) {
-    return __builtin_popcountll(value);
-  }
-#endif
-  return CountSetBitsFallBack(value, width);
-}
-
-
-template<typename V>
-inline int CountTrailingZeros(V value, int width = (sizeof(V) * 8)) {
-#if COMPILER_HAS_BUILTIN_CTZ
-  if (width == 32) {
-    return (value == 0) ? 32 : __builtin_ctz(static_cast<uint32_t>(value));
-  } else if (width == 64) {
-    return (value == 0) ? 64 : __builtin_ctzll(value);
-  }
-#endif
-  return CountTrailingZerosFallBack(value, width);
-}
-
-}  // namespace vixl
-
-#endif  // VIXL_COMPILER_INTRINSICS_H
-
diff --git a/disas/libvixl/vixl/globals.h b/disas/libvixl/vixl/globals.h
deleted file mode 100644
index 3a71942f1e..0000000000
--- a/disas/libvixl/vixl/globals.h
+++ /dev/null
@@ -1,155 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_GLOBALS_H
-#define VIXL_GLOBALS_H
-
-// Get standard C99 macros for integer types.
-#ifndef __STDC_CONSTANT_MACROS
-#define __STDC_CONSTANT_MACROS
-#endif
-
-#ifndef __STDC_LIMIT_MACROS
-#define __STDC_LIMIT_MACROS
-#endif
-
-#ifndef __STDC_FORMAT_MACROS
-#define __STDC_FORMAT_MACROS
-#endif
-
-extern "C" {
-#include <inttypes.h>
-#include <stdint.h>
-}
-
-#include <assert.h>
-#include <stdarg.h>
-#include <stdio.h>
-#include <stdint.h>
-#include <stdlib.h>
-
-#include "vixl/platform.h"
-
-
-typedef uint8_t byte;
-
-// Type for half-precision (16 bit) floating point numbers.
-typedef uint16_t float16;
-
-const int KBytes = 1024;
-const int MBytes = 1024 * KBytes;
-
-#define VIXL_ABORT() \
-    do { printf("in %s, line %i", __FILE__, __LINE__); abort(); } while (false)
-#ifdef VIXL_DEBUG
-  #define VIXL_ASSERT(condition) assert(condition)
-  #define VIXL_CHECK(condition) VIXL_ASSERT(condition)
-  #define VIXL_UNIMPLEMENTED() \
-    do { fprintf(stderr, "UNIMPLEMENTED\t"); VIXL_ABORT(); } while (false)
-  #define VIXL_UNREACHABLE() \
-    do { fprintf(stderr, "UNREACHABLE\t"); VIXL_ABORT(); } while (false)
-#else
-  #define VIXL_ASSERT(condition) ((void) 0)
-  #define VIXL_CHECK(condition) assert(condition)
-  #define VIXL_UNIMPLEMENTED() ((void) 0)
-  #define VIXL_UNREACHABLE() ((void) 0)
-#endif
-// This is not as powerful as template based assertions, but it is simple.
-// It assumes that the descriptions are unique. If this starts being a problem,
-// we can switch to a different implemention.
-#define VIXL_CONCAT(a, b) a##b
-#define VIXL_STATIC_ASSERT_LINE(line, condition) \
-  typedef char VIXL_CONCAT(STATIC_ASSERT_LINE_, line)[(condition) ? 1 : -1] \
-  __attribute__((unused))
-#define VIXL_STATIC_ASSERT(condition) \
-    VIXL_STATIC_ASSERT_LINE(__LINE__, condition)
-
-template <typename T1>
-inline void USE(T1) {}
-
-template <typename T1, typename T2>
-inline void USE(T1, T2) {}
-
-template <typename T1, typename T2, typename T3>
-inline void USE(T1, T2, T3) {}
-
-template <typename T1, typename T2, typename T3, typename T4>
-inline void USE(T1, T2, T3, T4) {}
-
-#define VIXL_ALIGNMENT_EXCEPTION() \
-    do { fprintf(stderr, "ALIGNMENT EXCEPTION\t"); VIXL_ABORT(); } while (0)
-
-// The clang::fallthrough attribute is used along with the Wimplicit-fallthrough
-// argument to annotate intentional fall-through between switch labels.
-// For more information please refer to:
-// http://clang.llvm.org/docs/AttributeReference.html#fallthrough-clang-fallthrough
-#ifndef __has_warning
-  #define __has_warning(x) 0
-#endif
-
-// Fallthrough annotation for Clang and C++11(201103L).
-#if __has_warning("-Wimplicit-fallthrough") && __cplusplus >= 201103L
-  #define VIXL_FALLTHROUGH() [[clang::fallthrough]] //NOLINT
-// Fallthrough annotation for GCC >= 7.
-#elif __GNUC__ >= 7
-  #define VIXL_FALLTHROUGH() __attribute__((fallthrough))
-#else
-  #define VIXL_FALLTHROUGH() do {} while (0)
-#endif
-
-#if __cplusplus >= 201103L
-  #define VIXL_NO_RETURN [[noreturn]] //NOLINT
-#else
-  #define VIXL_NO_RETURN __attribute__((noreturn))
-#endif
-
-// Some functions might only be marked as "noreturn" for the DEBUG build. This
-// macro should be used for such cases (for more details see what
-// VIXL_UNREACHABLE expands to).
-#ifdef VIXL_DEBUG
-  #define VIXL_DEBUG_NO_RETURN VIXL_NO_RETURN
-#else
-  #define VIXL_DEBUG_NO_RETURN
-#endif
-
-#ifdef VIXL_INCLUDE_SIMULATOR
-#ifndef VIXL_GENERATE_SIMULATOR_INSTRUCTIONS_VALUE
-  #define VIXL_GENERATE_SIMULATOR_INSTRUCTIONS_VALUE 1
-#endif
-#else
-#ifndef VIXL_GENERATE_SIMULATOR_INSTRUCTIONS_VALUE
-  #define VIXL_GENERATE_SIMULATOR_INSTRUCTIONS_VALUE 0
-#endif
-#if VIXL_GENERATE_SIMULATOR_INSTRUCTIONS_VALUE
-  #warning "Generating Simulator instructions without Simulator support."
-#endif
-#endif
-
-#ifdef USE_SIMULATOR
-  #error "Please see the release notes for USE_SIMULATOR."
-#endif
-
-#endif  // VIXL_GLOBALS_H
diff --git a/disas/libvixl/vixl/invalset.h b/disas/libvixl/vixl/invalset.h
deleted file mode 100644
index 2e0871f8c3..0000000000
--- a/disas/libvixl/vixl/invalset.h
+++ /dev/null
@@ -1,775 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_INVALSET_H_
-#define VIXL_INVALSET_H_
-
-#include <cstring>
-
-#include <algorithm>
-#include <vector>
-
-#include "vixl/globals.h"
-
-namespace vixl {
-
-// We define a custom data structure template and its iterator as `std`
-// containers do not fit the performance requirements for some of our use cases.
-//
-// The structure behaves like an iterable unordered set with special properties
-// and restrictions. "InvalSet" stands for "Invalidatable Set".
-//
-// Restrictions and requirements:
-// - Adding an element already present in the set is illegal. In debug mode,
-//   this is checked at insertion time.
-// - The templated class `ElementType` must provide comparison operators so that
-//   `std::sort()` can be used.
-// - A key must be available to represent invalid elements.
-// - Elements with an invalid key must compare higher or equal to any other
-//   element.
-//
-// Use cases and performance considerations:
-// Our use cases present two specificities that allow us to design this
-// structure to provide fast insertion *and* fast search and deletion
-// operations:
-// - Elements are (generally) inserted in order (sorted according to their key).
-// - A key is available to mark elements as invalid (deleted).
-// The backing `std::vector` allows for fast insertions. When
-// searching for an element we ensure the elements are sorted (this is generally
-// the case) and perform a binary search. When deleting an element we do not
-// free the associated memory immediately. Instead, an element to be deleted is
-// marked with the 'invalid' key. Other methods of the container take care of
-// ignoring entries marked as invalid.
-// To avoid the overhead of the `std::vector` container when only few entries
-// are used, a number of elements are preallocated.
-
-// 'ElementType' and 'KeyType' are respectively the types of the elements and
-// their key.  The structure only reclaims memory when safe to do so, if the
-// number of elements that can be reclaimed is greater than `RECLAIM_FROM` and
-// greater than `<total number of elements>` / RECLAIM_FACTOR.
-#define TEMPLATE_INVALSET_P_DECL                                               \
-  class ElementType,                                                           \
-  unsigned N_PREALLOCATED_ELEMENTS,                                            \
-  class KeyType,                                                               \
-  KeyType INVALID_KEY,                                                         \
-  size_t RECLAIM_FROM,                                                         \
-  unsigned RECLAIM_FACTOR
-
-#define TEMPLATE_INVALSET_P_DEF                                                \
-ElementType, N_PREALLOCATED_ELEMENTS,                                          \
-KeyType, INVALID_KEY, RECLAIM_FROM, RECLAIM_FACTOR
-
-template<class S> class InvalSetIterator;  // Forward declaration.
-
-template<TEMPLATE_INVALSET_P_DECL> class InvalSet {
- public:
-  InvalSet();
-  ~InvalSet();
-
-  static const size_t kNPreallocatedElements = N_PREALLOCATED_ELEMENTS;
-  static const KeyType kInvalidKey = INVALID_KEY;
-
-  // It is illegal to insert an element already present in the set.
-  void insert(const ElementType& element);
-
-  // Looks for the specified element in the set and - if found - deletes it.
-  void erase(const ElementType& element);
-
-  // This indicates the number of (valid) elements stored in this set.
-  size_t size() const;
-
-  // Returns true if no elements are stored in the set.
-  // Note that this does not mean the the backing storage is empty: it can still
-  // contain invalid elements.
-  bool empty() const;
-
-  void clear();
-
-  const ElementType min_element();
-
-  // This returns the key of the minimum element in the set.
-  KeyType min_element_key();
-
-  static bool IsValid(const ElementType& element);
-  static KeyType Key(const ElementType& element);
-  static void SetKey(ElementType* element, KeyType key);
-
- protected:
-  // Returns a pointer to the element in vector_ if it was found, or NULL
-  // otherwise.
-  ElementType* Search(const ElementType& element);
-
-  // The argument *must* point to an element stored in *this* set.
-  // This function is not allowed to move elements in the backing vector
-  // storage.
-  void EraseInternal(ElementType* element);
-
-  // The elements in the range searched must be sorted.
-  ElementType* BinarySearch(const ElementType& element,
-                            ElementType* start,
-                            ElementType* end) const;
-
-  // Sort the elements.
-  enum SortType {
-    // The 'hard' version guarantees that invalid elements are moved to the end
-    // of the container.
-    kHardSort,
-    // The 'soft' version only guarantees that the elements will be sorted.
-    // Invalid elements may still be present anywhere in the set.
-    kSoftSort
-  };
-  void Sort(SortType sort_type);
-
-  // Delete the elements that have an invalid key. The complexity is linear
-  // with the size of the vector.
-  void Clean();
-
-  const ElementType Front() const;
-  const ElementType Back() const;
-
-  // Delete invalid trailing elements and return the last valid element in the
-  // set.
-  const ElementType CleanBack();
-
-  // Returns a pointer to the start or end of the backing storage.
-  const ElementType* StorageBegin() const;
-  const ElementType* StorageEnd() const;
-  ElementType* StorageBegin();
-  ElementType* StorageEnd();
-
-  // Returns the index of the element within the backing storage. The element
-  // must belong to the backing storage.
-  size_t ElementIndex(const ElementType* element) const;
-
-  // Returns the element at the specified index in the backing storage.
-  const ElementType* ElementAt(size_t index) const;
-  ElementType* ElementAt(size_t index);
-
-  static const ElementType* FirstValidElement(const ElementType* from,
-                                              const ElementType* end);
-
-  void CacheMinElement();
-  const ElementType CachedMinElement() const;
-
-  bool ShouldReclaimMemory() const;
-  void ReclaimMemory();
-
-  bool IsUsingVector() const { return vector_ != NULL; }
-  void set_sorted(bool sorted) { sorted_ = sorted; }
-
-  // We cache some data commonly required by users to improve performance.
-  // We cannot cache pointers to elements as we do not control the backing
-  // storage.
-  bool valid_cached_min_;
-  size_t cached_min_index_;  // Valid iff `valid_cached_min_` is true.
-  KeyType cached_min_key_;   // Valid iff `valid_cached_min_` is true.
-
-  // Indicates whether the elements are sorted.
-  bool sorted_;
-
-  // This represents the number of (valid) elements in this set.
-  size_t size_;
-
-  // The backing storage is either the array of preallocated elements or the
-  // vector. The structure starts by using the preallocated elements, and
-  // transitions (permanently) to using the vector once more than
-  // kNPreallocatedElements are used.
-  // Elements are only invalidated when using the vector. The preallocated
-  // storage always only contains valid elements.
-  ElementType preallocated_[kNPreallocatedElements];
-  std::vector<ElementType>* vector_;
-
-#ifdef VIXL_DEBUG
-  // Iterators acquire and release this monitor. While a set is acquired,
-  // certain operations are illegal to ensure that the iterator will
-  // correctly iterate over the elements in the set.
-  int monitor_;
-  int monitor() const { return monitor_; }
-  void Acquire() { monitor_++; }
-  void Release() {
-    monitor_--;
-    VIXL_ASSERT(monitor_ >= 0);
-  }
-#endif
-
-  friend class InvalSetIterator<InvalSet<TEMPLATE_INVALSET_P_DEF> >;
-  typedef ElementType _ElementType;
-  typedef KeyType _KeyType;
-};
-
-
-template<class S> class InvalSetIterator {
- private:
-  // Redefine types to mirror the associated set types.
-  typedef typename S::_ElementType ElementType;
-  typedef typename S::_KeyType KeyType;
-
- public:
-  explicit InvalSetIterator(S* inval_set);
-  ~InvalSetIterator();
-
-  ElementType* Current() const;
-  void Advance();
-  bool Done() const;
-
-  // Mark this iterator as 'done'.
-  void Finish();
-
-  // Delete the current element and advance the iterator to point to the next
-  // element.
-  void DeleteCurrentAndAdvance();
-
-  static bool IsValid(const ElementType& element);
-  static KeyType Key(const ElementType& element);
-
- protected:
-  void MoveToValidElement();
-
-  // Indicates if the iterator is looking at the vector or at the preallocated
-  // elements.
-  const bool using_vector_;
-  // Used when looking at the preallocated elements, or in debug mode when using
-  // the vector to track how many times the iterator has advanced.
-  size_t index_;
-  typename std::vector<ElementType>::iterator iterator_;
-  S* inval_set_;
-};
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-InvalSet<TEMPLATE_INVALSET_P_DEF>::InvalSet()
-  : valid_cached_min_(false),
-    sorted_(true), size_(0), vector_(NULL) {
-#ifdef VIXL_DEBUG
-  monitor_ = 0;
-#endif
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-InvalSet<TEMPLATE_INVALSET_P_DEF>::~InvalSet() {
-  VIXL_ASSERT(monitor_ == 0);
-  delete vector_;
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-void InvalSet<TEMPLATE_INVALSET_P_DEF>::insert(const ElementType& element) {
-  VIXL_ASSERT(monitor() == 0);
-  VIXL_ASSERT(IsValid(element));
-  VIXL_ASSERT(Search(element) == NULL);
-  set_sorted(empty() || (sorted_ && (element > CleanBack())));
-  if (IsUsingVector()) {
-    vector_->push_back(element);
-  } else {
-    if (size_ < kNPreallocatedElements) {
-      preallocated_[size_] = element;
-    } else {
-      // Transition to using the vector.
-      vector_ = new std::vector<ElementType>(preallocated_,
-                                             preallocated_ + size_);
-      vector_->push_back(element);
-    }
-  }
-  size_++;
-
-  if (valid_cached_min_ && (element < min_element())) {
-    cached_min_index_ = IsUsingVector() ? vector_->size() - 1 : size_ - 1;
-    cached_min_key_ = Key(element);
-    valid_cached_min_ = true;
-  }
-
-  if (ShouldReclaimMemory()) {
-    ReclaimMemory();
-  }
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-void InvalSet<TEMPLATE_INVALSET_P_DEF>::erase(const ElementType& element) {
-  VIXL_ASSERT(monitor() == 0);
-  VIXL_ASSERT(IsValid(element));
-  ElementType* local_element = Search(element);
-  if (local_element != NULL) {
-    EraseInternal(local_element);
-  }
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-ElementType* InvalSet<TEMPLATE_INVALSET_P_DEF>::Search(
-    const ElementType& element) {
-  VIXL_ASSERT(monitor() == 0);
-  if (empty()) {
-    return NULL;
-  }
-  if (ShouldReclaimMemory()) {
-    ReclaimMemory();
-  }
-  if (!sorted_) {
-    Sort(kHardSort);
-  }
-  if (!valid_cached_min_) {
-    CacheMinElement();
-  }
-  return BinarySearch(element, ElementAt(cached_min_index_), StorageEnd());
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-size_t InvalSet<TEMPLATE_INVALSET_P_DEF>::size() const {
-  return size_;
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-bool InvalSet<TEMPLATE_INVALSET_P_DEF>::empty() const {
-  return size_ == 0;
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-void InvalSet<TEMPLATE_INVALSET_P_DEF>::clear() {
-  VIXL_ASSERT(monitor() == 0);
-  size_ = 0;
-  if (IsUsingVector()) {
-    vector_->clear();
-  }
-  set_sorted(true);
-  valid_cached_min_ = false;
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-const ElementType InvalSet<TEMPLATE_INVALSET_P_DEF>::min_element() {
-  VIXL_ASSERT(monitor() == 0);
-  VIXL_ASSERT(!empty());
-  CacheMinElement();
-  return *ElementAt(cached_min_index_);
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-KeyType InvalSet<TEMPLATE_INVALSET_P_DEF>::min_element_key() {
-  VIXL_ASSERT(monitor() == 0);
-  if (valid_cached_min_) {
-    return cached_min_key_;
-  } else {
-    return Key(min_element());
-  }
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-bool InvalSet<TEMPLATE_INVALSET_P_DEF>::IsValid(const ElementType& element) {
-  return Key(element) != kInvalidKey;
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-void InvalSet<TEMPLATE_INVALSET_P_DEF>::EraseInternal(ElementType* element) {
-  // Note that this function must be safe even while an iterator has acquired
-  // this set.
-  VIXL_ASSERT(element != NULL);
-  size_t deleted_index = ElementIndex(element);
-  if (IsUsingVector()) {
-    VIXL_ASSERT((&(vector_->front()) <= element) &&
-                (element <= &(vector_->back())));
-    SetKey(element, kInvalidKey);
-  } else {
-    VIXL_ASSERT((preallocated_ <= element) &&
-                (element < (preallocated_ + kNPreallocatedElements)));
-    ElementType* end = preallocated_ + kNPreallocatedElements;
-    size_t copy_size = sizeof(*element) * (end - element - 1);
-    memmove(element, element + 1, copy_size);
-  }
-  size_--;
-
-  if (valid_cached_min_ &&
-      (deleted_index == cached_min_index_)) {
-    if (sorted_ && !empty()) {
-      const ElementType* min = FirstValidElement(element, StorageEnd());
-      cached_min_index_ = ElementIndex(min);
-      cached_min_key_ = Key(*min);
-      valid_cached_min_ = true;
-    } else {
-      valid_cached_min_ = false;
-    }
-  }
-}
-
-
-template<TEMPLATE_INVALSET_P_DECL>
-ElementType* InvalSet<TEMPLATE_INVALSET_P_DEF>::BinarySearch(
-    const ElementType& element, ElementType* start, ElementType* end) const {
-  if (start == end) {
-    return NULL;
-  }
-  VIXL_ASSERT(sorted_);
-  VIXL_ASSERT(start < end);
-  VIXL_ASSERT(!empty());
-
-  // Perform a binary search through the elements while ignoring invalid
-  // elements.
-  ElementType* elements = start;
-  size_t low = 0;
-  size_t high = (end - start) - 1;
-  while (low < high) {
-    // Find valid bounds.
-    while (!IsValid(elements[low]) && (low < high)) ++low;
-    while (!IsValid(elements[high]) && (low < high)) --high;
-    VIXL_ASSERT(low <= high);
-    // Avoid overflow when computing the middle index.
-    size_t middle = low / 2 + high / 2 + (low & high & 1);
-    if ((middle == low) || (middle == high)) {
-      break;
-    }
-    while (!IsValid(elements[middle]) && (middle < high - 1)) ++middle;
-    while (!IsValid(elements[middle]) && (low + 1 < middle)) --middle;
-    if (!IsValid(elements[middle])) {
-      break;
-    }
-    if (elements[middle] < element) {
-      low = middle;
-    } else {
-      high = middle;
-    }
-  }
-
-  if (elements[low] == element) return &elements[low];
-  if (elements[high] == element) return &elements[high];
-  return NULL;
-}
-
-
-template
-void InvalSet::Sort(SortType sort_type) {
-  VIXL_ASSERT(monitor() == 0);
-  if (sort_type == kSoftSort) {
-    if (sorted_) {
-      return;
-    }
-  }
-  if (empty()) {
-    return;
-  }
-
-  Clean();
-  std::sort(StorageBegin(), StorageEnd());
-
-  set_sorted(true);
-  cached_min_index_ = 0;
-  cached_min_key_ = Key(Front());
-  valid_cached_min_ = true;
-}
-
-
-template
-void InvalSet::Clean() {
-  VIXL_ASSERT(monitor() == 0);
-  if (empty() || !IsUsingVector()) {
-    return;
-  }
-  // Manually iterate through the vector storage to discard invalid elements.
-  ElementType* start = &(vector_->front());
-  ElementType* end = start + vector_->size();
-  ElementType* c = start;
-  ElementType* first_invalid;
-  ElementType* first_valid;
-  ElementType* next_invalid;
-
-  while (c < end && IsValid(*c)) { c++; }
-  first_invalid = c;
-
-  while (c < end) {
-    while (c < end && !IsValid(*c)) { c++; }
-    first_valid = c;
-    while (c < end && IsValid(*c)) { c++; }
-    next_invalid = c;
-
-    ptrdiff_t n_moved_elements = (next_invalid - first_valid);
-    memmove(first_invalid, first_valid, n_moved_elements * sizeof(*c));
-    first_invalid = first_invalid + n_moved_elements;
-    c = next_invalid;
-  }
-
-  // Delete the trailing invalid elements.
-  vector_->erase(vector_->begin() + (first_invalid - start), vector_->end());
-  VIXL_ASSERT(vector_->size() == size_);
-
-  if (sorted_) {
-    valid_cached_min_ = true;
-    cached_min_index_ = 0;
-    cached_min_key_ = Key(*ElementAt(0));
-  } else {
-    valid_cached_min_ = false;
-  }
-}
-
-
-template
-const ElementType InvalSet::Front() const {
-  VIXL_ASSERT(!empty());
-  return IsUsingVector() ? vector_->front() : preallocated_[0];
-}
-
-
-template
-const ElementType InvalSet::Back() const {
-  VIXL_ASSERT(!empty());
-  return IsUsingVector() ? vector_->back() : preallocated_[size_ - 1];
-}
-
-
-template
-const ElementType InvalSet::CleanBack() {
-  VIXL_ASSERT(monitor() == 0);
-  if (IsUsingVector()) {
-    // Delete the invalid trailing elements.
-    typename std::vector::reverse_iterator it = vector_->rbegin();
-    while (!IsValid(*it)) {
-      it++;
-    }
-    vector_->erase(it.base(), vector_->end());
-  }
-  return Back();
-}
-
-
-template
-const ElementType* InvalSet::StorageBegin() const {
-  return IsUsingVector() ? &(vector_->front()) : preallocated_;
-}
-
-
-template
-const ElementType* InvalSet::StorageEnd() const {
-  return IsUsingVector() ? &(vector_->back()) + 1 : preallocated_ + size_;
-}
-
-
-template
-ElementType* InvalSet::StorageBegin() {
-  return IsUsingVector() ?
-      &(vector_->front()) : preallocated_;
-}
-
-
-template
-ElementType* InvalSet::StorageEnd() {
-  return IsUsingVector() ? &(vector_->back()) + 1 : preallocated_ + size_;
-}
-
-
-template
-size_t InvalSet::ElementIndex(
-    const ElementType* element) const {
-  VIXL_ASSERT((StorageBegin() <= element) && (element < StorageEnd()));
-  return element - StorageBegin();
-}
-
-
-template
-const ElementType* InvalSet::ElementAt(
-    size_t index) const {
-  VIXL_ASSERT(
-      (IsUsingVector() && (index < vector_->size())) || (index < size_));
-  return StorageBegin() + index;
-}
-
-template
-ElementType* InvalSet::ElementAt(size_t index) {
-  VIXL_ASSERT(
-      (IsUsingVector() && (index < vector_->size())) || (index < size_));
-  return StorageBegin() + index;
-}
-
-template
-const ElementType* InvalSet::FirstValidElement(
-    const ElementType* from, const ElementType* end) {
-  while ((from < end) && !IsValid(*from)) {
-    from++;
-  }
-  return from;
-}
-
-
-template
-void InvalSet::CacheMinElement() {
-  VIXL_ASSERT(monitor() == 0);
-  VIXL_ASSERT(!empty());
-
-  if (valid_cached_min_) {
-    return;
-  }
-
-  if (sorted_) {
-    const ElementType* min = FirstValidElement(StorageBegin(), StorageEnd());
-    cached_min_index_ = ElementIndex(min);
-    cached_min_key_ = Key(*min);
-    valid_cached_min_ = true;
-  } else {
-    Sort(kHardSort);
-  }
-  VIXL_ASSERT(valid_cached_min_);
-}
-
-
-template
-bool InvalSet::ShouldReclaimMemory() const {
-  if (!IsUsingVector()) {
-    return false;
-  }
-  size_t n_invalid_elements = vector_->size() - size_;
-  return (n_invalid_elements > RECLAIM_FROM) &&
-         (n_invalid_elements > vector_->size() / RECLAIM_FACTOR);
-}
-
-
-template
-void InvalSet::ReclaimMemory() {
-  VIXL_ASSERT(monitor() == 0);
-  Clean();
-}
-
-
-template
-InvalSetIterator::InvalSetIterator(S* inval_set)
-    : using_vector_((inval_set != NULL) && inval_set->IsUsingVector()),
-      index_(0),
-      inval_set_(inval_set) {
-  if (inval_set != NULL) {
-    inval_set->Sort(S::kSoftSort);
-#ifdef VIXL_DEBUG
-    inval_set->Acquire();
-#endif
-    if (using_vector_) {
-      iterator_ = typename std::vector::iterator(
-          inval_set_->vector_->begin());
-    }
-    MoveToValidElement();
-  }
-}
-
-
-template
-InvalSetIterator::~InvalSetIterator() {
-#ifdef VIXL_DEBUG
-  if (inval_set_ != NULL) {
-    inval_set_->Release();
-  }
-#endif
-}
-
-
-template
-typename S::_ElementType* InvalSetIterator::Current() const {
-  VIXL_ASSERT(!Done());
-  if (using_vector_) {
-    return &(*iterator_);
-  } else {
-    return &(inval_set_->preallocated_[index_]);
-  }
-}
-
-
-template
-void InvalSetIterator::Advance() {
-  VIXL_ASSERT(!Done());
-  if (using_vector_) {
-    iterator_++;
-#ifdef VIXL_DEBUG
-    index_++;
-#endif
-    MoveToValidElement();
-  } else {
-    index_++;
-  }
-}
-
-
-template
-bool InvalSetIterator::Done() const {
-  if (using_vector_) {
-    bool done = (iterator_ == inval_set_->vector_->end());
-    VIXL_ASSERT(done == (index_ == inval_set_->size()));
-    return done;
-  } else {
-    return index_ == inval_set_->size();
-  }
-}
-
-
-template
-void InvalSetIterator::Finish() {
-  VIXL_ASSERT(inval_set_->sorted_);
-  if (using_vector_) {
-    iterator_ = inval_set_->vector_->end();
-  }
-  index_ = inval_set_->size();
-}
-
-
-template
-void InvalSetIterator::DeleteCurrentAndAdvance() {
-  if (using_vector_) {
-    inval_set_->EraseInternal(&(*iterator_));
-    MoveToValidElement();
-  } else {
-    inval_set_->EraseInternal(inval_set_->preallocated_ + index_);
-  }
-}
-
-
-template
-bool InvalSetIterator::IsValid(const ElementType& element) {
-  return S::IsValid(element);
-}
-
-
-template
-typename S::_KeyType InvalSetIterator::Key(const ElementType& element) {
-  return S::Key(element);
-}
-
-
-template
-void InvalSetIterator::MoveToValidElement() {
-  if (using_vector_) {
-    while ((iterator_ != inval_set_->vector_->end()) && !IsValid(*iterator_)) {
-      iterator_++;
-    }
-  } else {
-    VIXL_ASSERT(inval_set_->empty() || IsValid(inval_set_->preallocated_[0]));
-    // Nothing to do.
-  }
-}
-
-#undef TEMPLATE_INVALSET_P_DECL
-#undef TEMPLATE_INVALSET_P_DEF
-
-}  // namespace vixl
-
-#endif  // VIXL_INVALSET_H_
diff --git a/disas/libvixl/vixl/platform.h b/disas/libvixl/vixl/platform.h
deleted file mode 100644
index 26a74de81b..0000000000
--- a/disas/libvixl/vixl/platform.h
+++ /dev/null
@@ -1,39 +0,0 @@
-// Copyright 2014, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef PLATFORM_H
-#define PLATFORM_H
-
-// Define platform specific functionalities.
-extern "C" {
-#include
-}
-
-namespace vixl {
-inline void HostBreakpoint() { raise(SIGINT); }
-}  // namespace vixl
-
-#endif
diff --git a/disas/libvixl/vixl/utils.h b/disas/libvixl/vixl/utils.h
deleted file mode 100644
index ecb0f1014a..0000000000
--- a/disas/libvixl/vixl/utils.h
+++ /dev/null
@@ -1,286 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#ifndef VIXL_UTILS_H
-#define VIXL_UTILS_H
-
-#include
-#include
-#include "vixl/globals.h"
-#include "vixl/compiler-intrinsics.h"
-
-namespace vixl {
-
-// Macros for compile-time format checking.
-#if GCC_VERSION_OR_NEWER(4, 4, 0)
-#define PRINTF_CHECK(format_index, varargs_index) \
-  __attribute__((format(gnu_printf, format_index, varargs_index)))
-#else
-#define PRINTF_CHECK(format_index, varargs_index)
-#endif
-
-// Check number width.
-inline bool is_intn(unsigned n, int64_t x) {
-  VIXL_ASSERT((0 < n) && (n < 64));
-  int64_t limit = INT64_C(1) << (n - 1);
-  return (-limit <= x) && (x < limit);
-}
-
-inline bool is_uintn(unsigned n, int64_t x) {
-  VIXL_ASSERT((0 < n) && (n < 64));
-  return !(x >> n);
-}
-
-inline uint32_t truncate_to_intn(unsigned n, int64_t x) {
-  VIXL_ASSERT((0 < n) && (n < 64));
-  return static_cast(x & ((INT64_C(1) << n) - 1));
-}
-
-#define INT_1_TO_63_LIST(V)                                       \
-V(1)  V(2)  V(3)  V(4)  V(5)  V(6)  V(7)  V(8)                    \
-V(9)  V(10) V(11) V(12) V(13) V(14) V(15) V(16)                   \
-V(17) V(18) V(19) V(20) V(21) V(22) V(23) V(24)                   \
-V(25) V(26) V(27) V(28) V(29) V(30) V(31) V(32)                   \
-V(33) V(34) V(35) V(36) V(37) V(38) V(39) V(40)                   \
-V(41) V(42) V(43) V(44) V(45) V(46) V(47) V(48)                   \
-V(49) V(50) V(51) V(52) V(53) V(54) V(55) V(56)                   \
-V(57) V(58) V(59) V(60) V(61) V(62) V(63)
-
-#define DECLARE_IS_INT_N(N)                                       \
-inline bool is_int##N(int64_t x) { return is_intn(N, x); }
-#define DECLARE_IS_UINT_N(N)                                      \
-inline bool is_uint##N(int64_t x) { return is_uintn(N, x); }
-#define DECLARE_TRUNCATE_TO_INT_N(N)                              \
-inline uint32_t truncate_to_int##N(int x) { return truncate_to_intn(N, x); }
-INT_1_TO_63_LIST(DECLARE_IS_INT_N)
-INT_1_TO_63_LIST(DECLARE_IS_UINT_N)
-INT_1_TO_63_LIST(DECLARE_TRUNCATE_TO_INT_N)
-#undef DECLARE_IS_INT_N
-#undef DECLARE_IS_UINT_N
-#undef DECLARE_TRUNCATE_TO_INT_N
-
-// Bit field extraction.
-inline uint32_t unsigned_bitextract_32(int msb, int lsb, uint32_t x) {
-  return (x >> lsb) & ((1 << (1 + msb - lsb)) - 1);
-}
-
-inline uint64_t unsigned_bitextract_64(int msb, int lsb, uint64_t x) {
-  return (x >> lsb) & ((static_cast(1) << (1 + msb - lsb)) - 1);
-}
-
-inline int32_t signed_bitextract_32(int msb, int lsb, int32_t x) {
-  return (x << (31 - msb)) >> (lsb + 31 - msb);
-}
-
-inline int64_t signed_bitextract_64(int msb, int lsb, int64_t x) {
-  return (x << (63 - msb)) >> (lsb + 63 - msb);
-}
-
-// Floating point representation.
-uint32_t float_to_rawbits(float value);
-uint64_t double_to_rawbits(double value);
-float rawbits_to_float(uint32_t bits);
-double rawbits_to_double(uint64_t bits);
-
-uint32_t float_sign(float val);
-uint32_t float_exp(float val);
-uint32_t float_mantissa(float val);
-uint32_t double_sign(double val);
-uint32_t double_exp(double val);
-uint64_t double_mantissa(double val);
-
-float float_pack(uint32_t sign, uint32_t exp, uint32_t mantissa);
-double double_pack(uint64_t sign, uint64_t exp, uint64_t mantissa);
-
-// An fpclassify() function for 16-bit half-precision floats.
-int float16classify(float16 value);
-
-// NaN tests.
-inline bool IsSignallingNaN(double num) {
-  const uint64_t kFP64QuietNaNMask = UINT64_C(0x0008000000000000);
-  uint64_t raw = double_to_rawbits(num);
-  if (std::isnan(num) && ((raw & kFP64QuietNaNMask) == 0)) {
-    return true;
-  }
-  return false;
-}
-
-
-inline bool IsSignallingNaN(float num) {
-  const uint32_t kFP32QuietNaNMask = 0x00400000;
-  uint32_t raw = float_to_rawbits(num);
-  if (std::isnan(num) && ((raw & kFP32QuietNaNMask) == 0)) {
-    return true;
-  }
-  return false;
-}
-
-
-inline bool IsSignallingNaN(float16 num) {
-  const uint16_t kFP16QuietNaNMask = 0x0200;
-  return (float16classify(num) == FP_NAN) &&
-         ((num & kFP16QuietNaNMask) == 0);
-}
-
-
-template
-inline bool IsQuietNaN(T num) {
-  return std::isnan(num) && !IsSignallingNaN(num);
-}
-
-
-// Convert the NaN in 'num' to a quiet NaN.
-inline double ToQuietNaN(double num) {
-  const uint64_t kFP64QuietNaNMask = UINT64_C(0x0008000000000000);
-  VIXL_ASSERT(std::isnan(num));
-  return rawbits_to_double(double_to_rawbits(num) | kFP64QuietNaNMask);
-}
-
-
-inline float ToQuietNaN(float num) {
-  const uint32_t kFP32QuietNaNMask = 0x00400000;
-  VIXL_ASSERT(std::isnan(num));
-  return rawbits_to_float(float_to_rawbits(num) | kFP32QuietNaNMask);
-}
-
-
-// Fused multiply-add.
-inline double FusedMultiplyAdd(double op1, double op2, double a) {
-  return fma(op1, op2, a);
-}
-
-
-inline float FusedMultiplyAdd(float op1, float op2, float a) {
-  return fmaf(op1, op2, a);
-}
-
-
-inline uint64_t LowestSetBit(uint64_t value) {
-  return value & -value;
-}
-
-
-template
-inline int HighestSetBitPosition(T value) {
-  VIXL_ASSERT(value != 0);
-  return (sizeof(value) * 8 - 1) - CountLeadingZeros(value);
-}
-
-
-template
-inline int WhichPowerOf2(V value) {
-  VIXL_ASSERT(IsPowerOf2(value));
-  return CountTrailingZeros(value);
-}
-
-
-unsigned CountClearHalfWords(uint64_t imm, unsigned reg_size);
-
-
-template
-T ReverseBits(T value) {
-  VIXL_ASSERT((sizeof(value) == 1) || (sizeof(value) == 2) ||
-              (sizeof(value) == 4) || (sizeof(value) == 8));
-  T result = 0;
-  for (unsigned i = 0; i < (sizeof(value) * 8); i++) {
-    result = (result << 1) | (value & 1);
-    value >>= 1;
-  }
-  return result;
-}
-
-
-template
-T ReverseBytes(T value, int block_bytes_log2) {
-  VIXL_ASSERT((sizeof(value) == 4) || (sizeof(value) == 8));
-  VIXL_ASSERT((1U << block_bytes_log2) <= sizeof(value));
-  // Split the 64-bit value into an 8-bit array, where b[0] is the least
-  // significant byte, and b[7] is the most significant.
-  uint8_t bytes[8];
-  uint64_t mask = UINT64_C(0xff00000000000000);
-  for (int i = 7; i >= 0; i--) {
-    bytes[i] = (static_cast(value) & mask) >> (i * 8);
-    mask >>= 8;
-  }
-
-  // Permutation tables for REV instructions.
-  //  permute_table[0] is used by REV16_x, REV16_w
-  //  permute_table[1] is used by REV32_x, REV_w
-  //  permute_table[2] is used by REV_x
-  VIXL_ASSERT((0 < block_bytes_log2) && (block_bytes_log2 < 4));
-  static const uint8_t permute_table[3][8] = { {6, 7, 4, 5, 2, 3, 0, 1},
-                                               {4, 5, 6, 7, 0, 1, 2, 3},
-                                               {0, 1, 2, 3, 4, 5, 6, 7} };
-  T result = 0;
-  for (int i = 0; i < 8; i++) {
-    result <<= 8;
-    result |= bytes[permute_table[block_bytes_log2 - 1][i]];
-  }
-  return result;
-}
-
-
-// Pointer alignment
-// TODO: rename/refactor to make it specific to instructions.
-template
-bool IsWordAligned(T pointer) {
-  VIXL_ASSERT(sizeof(pointer) == sizeof(intptr_t));  // NOLINT(runtime/sizeof)
-  return ((intptr_t)(pointer) & 3) == 0;
-}
-
-// Increment a pointer (up to 64 bits) until it has the specified alignment.
-template
-T AlignUp(T pointer, size_t alignment) {
-  // Use C-style casts to get static_cast behaviour for integral types (T), and
-  // reinterpret_cast behaviour for other types.
-
-  uint64_t pointer_raw = (uint64_t)pointer;
-  VIXL_STATIC_ASSERT(sizeof(pointer) <= sizeof(pointer_raw));
-
-  size_t align_step = (alignment - pointer_raw) % alignment;
-  VIXL_ASSERT((pointer_raw + align_step) % alignment == 0);
-
-  return (T)(pointer_raw + align_step);
-}
-
-// Decrement a pointer (up to 64 bits) until it has the specified alignment.
-template
-T AlignDown(T pointer, size_t alignment) {
-  // Use C-style casts to get static_cast behaviour for integral types (T), and
-  // reinterpret_cast behaviour for other types.
-
-  uint64_t pointer_raw = (uint64_t)pointer;
-  VIXL_STATIC_ASSERT(sizeof(pointer) <= sizeof(pointer_raw));
-
-  size_t align_step = pointer_raw % alignment;
-  VIXL_ASSERT((pointer_raw - align_step) % alignment == 0);
-
-  return (T)(pointer_raw - align_step);
-}
-
-}  // namespace vixl
-
-#endif  // VIXL_UTILS_H
diff --git a/include/exec/poison.h b/include/exec/poison.h
index 9f1ca3409c..ec7fbc67e1 100644
--- a/include/exec/poison.h
+++ b/include/exec/poison.h
@@ -65,8 +65,6 @@
 #pragma GCC poison CPU_INTERRUPT_TGT_INT_2
 
 #pragma GCC poison CONFIG_ALPHA_DIS
-#pragma GCC poison CONFIG_ARM_A64_DIS
-#pragma GCC poison CONFIG_ARM_DIS
 #pragma GCC poison CONFIG_CRIS_DIS
 #pragma GCC poison CONFIG_HPPA_DIS
 #pragma GCC poison CONFIG_I386_DIS
diff --git a/disas.c b/disas.c
index b2753e1902..e31438f349 100644
--- a/disas.c
+++ b/disas.c
@@ -178,9 +178,6 @@ static void initialize_debug_host(CPUDebug *s)
 #endif
 #elif defined(__aarch64__)
     s->info.cap_arch = CS_ARCH_ARM64;
-# ifdef CONFIG_ARM_A64_DIS
-    s->info.print_insn = print_insn_arm_a64;
-# endif
 #elif defined(__alpha__)
     s->info.print_insn = print_insn_alpha;
 #elif defined(__sparc__)
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index d2bd74c2ed..b57beb1536 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -828,13 +828,6 @@ static void arm_disas_set_info(CPUState *cpu, disassemble_info *info)
     bool sctlr_b;
 
     if (is_a64(env)) {
-        /* We might not be compiled with the A64 disassembler
-         * because it needs a C++ compiler. Leave print_insn
-         * unset in this case to use the caller default behaviour.
-         */
-#if defined(CONFIG_ARM_A64_DIS)
-        info->print_insn = print_insn_arm_a64;
-#endif
         info->cap_arch = CS_ARCH_ARM64;
         info->cap_insn_unit = 4;
         info->cap_insn_split = 4;
diff --git a/MAINTAINERS b/MAINTAINERS
index 5fe8f7eca2..fcf398e695 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -165,8 +165,6 @@ F: tests/qtest/arm-cpu-features.c
 F: hw/arm/
 F: hw/cpu/a*mpcore.c
 F: include/hw/cpu/a*mpcore.h
-F: disas/arm-a64.cc
-F: disas/libvixl/
 F: docs/system/target-arm.rst
 F: docs/system/arm/cpu-features.rst
 
@@ -3271,8 +3269,6 @@ M: Richard Henderson
 S: Maintained
 L: qemu-arm@nongnu.org
 F: tcg/aarch64/
-F: disas/arm-a64.cc
-F: disas/libvixl/
 
 ARM TCG target
 M: Richard Henderson
diff --git a/disas/arm-a64.cc b/disas/arm-a64.cc
deleted file mode 100644
index a1402a2e07..0000000000
--- a/disas/arm-a64.cc
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- * ARM A64 disassembly output wrapper to libvixl
- * Copyright (c) 2013 Linaro Limited
- * Written by Claudio Fontana
- *
- * This program is free software: you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see .
- */
-
-#include "qemu/osdep.h"
-#include "disas/dis-asm.h"
-
-#include "vixl/a64/disasm-a64.h"
-
-using namespace vixl;
-
-static Decoder *vixl_decoder = NULL;
-static Disassembler *vixl_disasm = NULL;
-
-/* We don't use libvixl's PrintDisassembler because its output
- * is a little unhelpful (trailing newlines, for example).
- * Instead we use our own very similar variant so we have
- * control over the format.
- */
-class QEMUDisassembler : public Disassembler {
-public:
-    QEMUDisassembler() : printf_(NULL), stream_(NULL) { }
-    ~QEMUDisassembler() { }
-
-    void SetStream(FILE *stream) {
-        stream_ = stream;
-    }
-
-    void SetPrintf(fprintf_function printf_fn) {
-        printf_ = printf_fn;
-    }
-
-protected:
-    virtual void ProcessOutput(const Instruction *instr) {
-        printf_(stream_, "%08" PRIx32 " %s",
-                instr->InstructionBits(), GetOutput());
-    }
-
-private:
-    fprintf_function printf_;
-    FILE *stream_;
-};
-
-static int vixl_is_initialized(void)
-{
-    return vixl_decoder != NULL;
-}
-
-static void vixl_init() {
-    vixl_decoder = new Decoder();
-    vixl_disasm = new QEMUDisassembler();
-    vixl_decoder->AppendVisitor(vixl_disasm);
-}
-
-#define INSN_SIZE 4
-
-/* Disassemble ARM A64 instruction. This is our only entry
- * point from QEMU's C code.
- */
-int print_insn_arm_a64(uint64_t addr, disassemble_info *info)
-{
-    uint8_t bytes[INSN_SIZE];
-    uint32_t instrval;
-    const Instruction *instr;
-    int status;
-
-    status = info->read_memory_func(addr, bytes, INSN_SIZE, info);
-    if (status != 0) {
-        info->memory_error_func(status, addr, info);
-        return -1;
-    }
-
-    if (!vixl_is_initialized()) {
-        vixl_init();
-    }
-
-    ((QEMUDisassembler *)vixl_disasm)->SetPrintf(info->fprintf_func);
-    ((QEMUDisassembler *)vixl_disasm)->SetStream(info->stream);
-
-    instrval = bytes[0] | bytes[1] << 8 | bytes[2] << 16 | bytes[3] << 24;
-    instr = reinterpret_cast(&instrval);
-    vixl_disasm->MapCodeAddress(addr, instr);
-    vixl_decoder->Decode(instr);
-
-    return INSN_SIZE;
-}
diff --git a/disas/libvixl/LICENCE b/disas/libvixl/LICENCE
deleted file mode 100644
index b7e160a3f5..0000000000
--- a/disas/libvixl/LICENCE
+++ /dev/null
@@ -1,30 +0,0 @@
-LICENCE
-=======
-
-The software in this repository is covered by the following licence.
-
-// Copyright 2013, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/disas/libvixl/README b/disas/libvixl/README
deleted file mode 100644
index 932a41adf7..0000000000
--- a/disas/libvixl/README
+++ /dev/null
@@ -1,11 +0,0 @@
-
-The code in this directory is a subset of libvixl:
- https://github.com/armvixl/vixl
-(specifically, it is the set of files needed for disassembly only,
-taken from libvixl 1.12).
-Bugfixes should preferably be sent upstream initially.
-
-The disassembler does not currently support the entire A64 instruction
-set. Notably:
- * Limited support for system instructions.
- * A few miscellaneous integer and floating point instructions are missing.
diff --git a/disas/libvixl/meson.build b/disas/libvixl/meson.build
deleted file mode 100644
index 5e2eb33e8e..0000000000
--- a/disas/libvixl/meson.build
+++ /dev/null
@@ -1,7 +0,0 @@
-libvixl_ss.add(files(
-  'vixl/a64/decoder-a64.cc',
-  'vixl/a64/disasm-a64.cc',
-  'vixl/a64/instructions-a64.cc',
-  'vixl/compiler-intrinsics.cc',
-  'vixl/utils.cc',
-))
diff --git a/disas/libvixl/vixl/a64/decoder-a64.cc b/disas/libvixl/vixl/a64/decoder-a64.cc
deleted file mode 100644
index 5ba2d3ce04..0000000000
--- a/disas/libvixl/vixl/a64/decoder-a64.cc
+++ /dev/null
@@ -1,877 +0,0 @@
-// Copyright 2014, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED.
   IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#include "vixl/globals.h"
-#include "vixl/utils.h"
-#include "vixl/a64/decoder-a64.h"
-
-namespace vixl {
-
-void Decoder::DecodeInstruction(const Instruction *instr) {
-  if (instr->Bits(28, 27) == 0) {
-    VisitUnallocated(instr);
-  } else {
-    switch (instr->Bits(27, 24)) {
-      // 0:   PC relative addressing.
-      case 0x0: DecodePCRelAddressing(instr); break;
-
-      // 1:   Add/sub immediate.
-      case 0x1: DecodeAddSubImmediate(instr); break;
-
-      // A:   Logical shifted register.
-      //      Add/sub with carry.
-      //      Conditional compare register.
-      //      Conditional compare immediate.
-      //      Conditional select.
-      //      Data processing 1 source.
-      //      Data processing 2 source.
-      // B:   Add/sub shifted register.
-      //      Add/sub extended register.
-      //      Data processing 3 source.
-      case 0xA:
-      case 0xB: DecodeDataProcessing(instr); break;
-
-      // 2:   Logical immediate.
-      //      Move wide immediate.
-      case 0x2: DecodeLogical(instr); break;
-
-      // 3:   Bitfield.
-      //      Extract.
-      case 0x3: DecodeBitfieldExtract(instr); break;
-
-      // 4:   Unconditional branch immediate.
-      //      Exception generation.
-      //      Compare and branch immediate.
-      // 5:   Compare and branch immediate.
-      //      Conditional branch.
-      //      System.
-      // 6,7: Unconditional branch.
-      //      Test and branch immediate.
-      case 0x4:
-      case 0x5:
-      case 0x6:
-      case 0x7: DecodeBranchSystemException(instr); break;
-
-      // 8,9: Load/store register pair post-index.
-      //      Load register literal.
-      //      Load/store register unscaled immediate.
-      //      Load/store register immediate post-index.
-      //      Load/store register immediate pre-index.
-      //      Load/store register offset.
-      //      Load/store exclusive.
-      // C,D: Load/store register pair offset.
-      //      Load/store register pair pre-index.
-      //      Load/store register unsigned immediate.
-      //      Advanced SIMD.
-      case 0x8:
-      case 0x9:
-      case 0xC:
-      case 0xD: DecodeLoadStore(instr); break;
-
-      // E:   FP fixed point conversion.
-      //      FP integer conversion.
-      //      FP data processing 1 source.
-      //      FP compare.
-      //      FP immediate.
-      //      FP data processing 2 source.
-      //      FP conditional compare.
-      //      FP conditional select.
-      //      Advanced SIMD.
-      // F:   FP data processing 3 source.
-      //      Advanced SIMD.
-      case 0xE:
-      case 0xF: DecodeFP(instr); break;
-    }
-  }
-}
-
-void Decoder::AppendVisitor(DecoderVisitor* new_visitor) {
-  visitors_.push_back(new_visitor);
-}
-
-
-void Decoder::PrependVisitor(DecoderVisitor* new_visitor) {
-  visitors_.push_front(new_visitor);
-}
-
-
-void Decoder::InsertVisitorBefore(DecoderVisitor* new_visitor,
-                                  DecoderVisitor* registered_visitor) {
-  std::list::iterator it;
-  for (it = visitors_.begin(); it != visitors_.end(); it++) {
-    if (*it == registered_visitor) {
-      visitors_.insert(it, new_visitor);
-      return;
-    }
-  }
-  // We reached the end of the list. The last element must be
-  // registered_visitor.
-  VIXL_ASSERT(*it == registered_visitor);
-  visitors_.insert(it, new_visitor);
-}
-
-
-void Decoder::InsertVisitorAfter(DecoderVisitor* new_visitor,
-                                 DecoderVisitor* registered_visitor) {
-  std::list::iterator it;
-  for (it = visitors_.begin(); it != visitors_.end(); it++) {
-    if (*it == registered_visitor) {
-      it++;
-      visitors_.insert(it, new_visitor);
-      return;
-    }
-  }
-  // We reached the end of the list. The last element must be
-  // registered_visitor.
-  VIXL_ASSERT(*it == registered_visitor);
-  visitors_.push_back(new_visitor);
-}
-
-
-void Decoder::RemoveVisitor(DecoderVisitor* visitor) {
-  visitors_.remove(visitor);
-}
-
-
-void Decoder::DecodePCRelAddressing(const Instruction* instr) {
-  VIXL_ASSERT(instr->Bits(27, 24) == 0x0);
-  // We know bit 28 is set, as  = 0 is filtered out at the top level
-  // decode.
-  VIXL_ASSERT(instr->Bit(28) == 0x1);
-  VisitPCRelAddressing(instr);
-}
-
-
-void Decoder::DecodeBranchSystemException(const Instruction* instr) {
-  VIXL_ASSERT((instr->Bits(27, 24) == 0x4) ||
-              (instr->Bits(27, 24) == 0x5) ||
-              (instr->Bits(27, 24) == 0x6) ||
-              (instr->Bits(27, 24) == 0x7) );
-
-  switch (instr->Bits(31, 29)) {
-    case 0:
-    case 4: {
-      VisitUnconditionalBranch(instr);
-      break;
-    }
-    case 1:
-    case 5: {
-      if (instr->Bit(25) == 0) {
-        VisitCompareBranch(instr);
-      } else {
-        VisitTestBranch(instr);
-      }
-      break;
-    }
-    case 2: {
-      if (instr->Bit(25) == 0) {
-        if ((instr->Bit(24) == 0x1) ||
-            (instr->Mask(0x01000010) == 0x00000010)) {
-          VisitUnallocated(instr);
-        } else {
-          VisitConditionalBranch(instr);
-        }
-      } else {
-        VisitUnallocated(instr);
-      }
-      break;
-    }
-    case 6: {
-      if (instr->Bit(25) == 0) {
-        if (instr->Bit(24) == 0) {
-          if ((instr->Bits(4, 2) != 0) ||
-              (instr->Mask(0x00E0001D) == 0x00200001) ||
-              (instr->Mask(0x00E0001D) == 0x00400001) ||
-              (instr->Mask(0x00E0001E) == 0x00200002) ||
-              (instr->Mask(0x00E0001E) == 0x00400002) ||
-              (instr->Mask(0x00E0001C) == 0x00600000) ||
-              (instr->Mask(0x00E0001C) == 0x00800000) ||
-              (instr->Mask(0x00E0001F) == 0x00A00000) ||
-              (instr->Mask(0x00C0001C) == 0x00C00000)) {
-            VisitUnallocated(instr);
-          } else {
-            VisitException(instr);
-          }
-        } else {
-          if (instr->Bits(23, 22) == 0) {
-            const Instr masked_003FF0E0 = instr->Mask(0x003FF0E0);
-            if ((instr->Bits(21, 19) == 0x4) ||
-                (masked_003FF0E0 == 0x00033000) ||
-                (masked_003FF0E0 == 0x003FF020)
|| - (masked_003FF0E0 =3D=3D 0x003FF060) || - (masked_003FF0E0 =3D=3D 0x003FF0E0) || - (instr->Mask(0x00388000) =3D=3D 0x00008000) || - (instr->Mask(0x0038E000) =3D=3D 0x00000000) || - (instr->Mask(0x0039E000) =3D=3D 0x00002000) || - (instr->Mask(0x003AE000) =3D=3D 0x00002000) || - (instr->Mask(0x003CE000) =3D=3D 0x00042000) || - (instr->Mask(0x003FFFC0) =3D=3D 0x000320C0) || - (instr->Mask(0x003FF100) =3D=3D 0x00032100) || - (instr->Mask(0x003FF200) =3D=3D 0x00032200) || - (instr->Mask(0x003FF400) =3D=3D 0x00032400) || - (instr->Mask(0x003FF800) =3D=3D 0x00032800) || - (instr->Mask(0x0038F000) =3D=3D 0x00005000) || - (instr->Mask(0x0038E000) =3D=3D 0x00006000)) { - VisitUnallocated(instr); - } else { - VisitSystem(instr); - } - } else { - VisitUnallocated(instr); - } - } - } else { - if ((instr->Bit(24) =3D=3D 0x1) || - (instr->Bits(20, 16) !=3D 0x1F) || - (instr->Bits(15, 10) !=3D 0) || - (instr->Bits(4, 0) !=3D 0) || - (instr->Bits(24, 21) =3D=3D 0x3) || - (instr->Bits(24, 22) =3D=3D 0x3)) { - VisitUnallocated(instr); - } else { - VisitUnconditionalBranchToRegister(instr); - } - } - break; - } - case 3: - case 7: { - VisitUnallocated(instr); - break; - } - } -} - - -void Decoder::DecodeLoadStore(const Instruction* instr) { - VIXL_ASSERT((instr->Bits(27, 24) =3D=3D 0x8) || - (instr->Bits(27, 24) =3D=3D 0x9) || - (instr->Bits(27, 24) =3D=3D 0xC) || - (instr->Bits(27, 24) =3D=3D 0xD) ); - // TODO(all): rearrange the tree to integrate this branch. 
- if ((instr->Bit(28) =3D=3D 0) && (instr->Bit(29) =3D=3D 0) && (instr->Bi= t(26) =3D=3D 1)) { - DecodeNEONLoadStore(instr); - return; - } - - if (instr->Bit(24) =3D=3D 0) { - if (instr->Bit(28) =3D=3D 0) { - if (instr->Bit(29) =3D=3D 0) { - if (instr->Bit(26) =3D=3D 0) { - VisitLoadStoreExclusive(instr); - } else { - VIXL_UNREACHABLE(); - } - } else { - if ((instr->Bits(31, 30) =3D=3D 0x3) || - (instr->Mask(0xC4400000) =3D=3D 0x40000000)) { - VisitUnallocated(instr); - } else { - if (instr->Bit(23) =3D=3D 0) { - if (instr->Mask(0xC4400000) =3D=3D 0xC0400000) { - VisitUnallocated(instr); - } else { - VisitLoadStorePairNonTemporal(instr); - } - } else { - VisitLoadStorePairPostIndex(instr); - } - } - } - } else { - if (instr->Bit(29) =3D=3D 0) { - if (instr->Mask(0xC4000000) =3D=3D 0xC4000000) { - VisitUnallocated(instr); - } else { - VisitLoadLiteral(instr); - } - } else { - if ((instr->Mask(0x84C00000) =3D=3D 0x80C00000) || - (instr->Mask(0x44800000) =3D=3D 0x44800000) || - (instr->Mask(0x84800000) =3D=3D 0x84800000)) { - VisitUnallocated(instr); - } else { - if (instr->Bit(21) =3D=3D 0) { - switch (instr->Bits(11, 10)) { - case 0: { - VisitLoadStoreUnscaledOffset(instr); - break; - } - case 1: { - if (instr->Mask(0xC4C00000) =3D=3D 0xC0800000) { - VisitUnallocated(instr); - } else { - VisitLoadStorePostIndex(instr); - } - break; - } - case 2: { - // TODO: VisitLoadStoreRegisterOffsetUnpriv. 
- VisitUnimplemented(instr); - break; - } - case 3: { - if (instr->Mask(0xC4C00000) =3D=3D 0xC0800000) { - VisitUnallocated(instr); - } else { - VisitLoadStorePreIndex(instr); - } - break; - } - } - } else { - if (instr->Bits(11, 10) =3D=3D 0x2) { - if (instr->Bit(14) =3D=3D 0) { - VisitUnallocated(instr); - } else { - VisitLoadStoreRegisterOffset(instr); - } - } else { - VisitUnallocated(instr); - } - } - } - } - } - } else { - if (instr->Bit(28) =3D=3D 0) { - if (instr->Bit(29) =3D=3D 0) { - VisitUnallocated(instr); - } else { - if ((instr->Bits(31, 30) =3D=3D 0x3) || - (instr->Mask(0xC4400000) =3D=3D 0x40000000)) { - VisitUnallocated(instr); - } else { - if (instr->Bit(23) =3D=3D 0) { - VisitLoadStorePairOffset(instr); - } else { - VisitLoadStorePairPreIndex(instr); - } - } - } - } else { - if (instr->Bit(29) =3D=3D 0) { - VisitUnallocated(instr); - } else { - if ((instr->Mask(0x84C00000) =3D=3D 0x80C00000) || - (instr->Mask(0x44800000) =3D=3D 0x44800000) || - (instr->Mask(0x84800000) =3D=3D 0x84800000)) { - VisitUnallocated(instr); - } else { - VisitLoadStoreUnsignedOffset(instr); - } - } - } - } -} - - -void Decoder::DecodeLogical(const Instruction* instr) { - VIXL_ASSERT(instr->Bits(27, 24) =3D=3D 0x2); - - if (instr->Mask(0x80400000) =3D=3D 0x00400000) { - VisitUnallocated(instr); - } else { - if (instr->Bit(23) =3D=3D 0) { - VisitLogicalImmediate(instr); - } else { - if (instr->Bits(30, 29) =3D=3D 0x1) { - VisitUnallocated(instr); - } else { - VisitMoveWideImmediate(instr); - } - } - } -} - - -void Decoder::DecodeBitfieldExtract(const Instruction* instr) { - VIXL_ASSERT(instr->Bits(27, 24) =3D=3D 0x3); - - if ((instr->Mask(0x80400000) =3D=3D 0x80000000) || - (instr->Mask(0x80400000) =3D=3D 0x00400000) || - (instr->Mask(0x80008000) =3D=3D 0x00008000)) { - VisitUnallocated(instr); - } else if (instr->Bit(23) =3D=3D 0) { - if ((instr->Mask(0x80200000) =3D=3D 0x00200000) || - (instr->Mask(0x60000000) =3D=3D 0x60000000)) { - VisitUnallocated(instr); - } else { - 
VisitBitfield(instr); - } - } else { - if ((instr->Mask(0x60200000) =3D=3D 0x00200000) || - (instr->Mask(0x60000000) !=3D 0x00000000)) { - VisitUnallocated(instr); - } else { - VisitExtract(instr); - } - } -} - - -void Decoder::DecodeAddSubImmediate(const Instruction* instr) { - VIXL_ASSERT(instr->Bits(27, 24) =3D=3D 0x1); - if (instr->Bit(23) =3D=3D 1) { - VisitUnallocated(instr); - } else { - VisitAddSubImmediate(instr); - } -} - - -void Decoder::DecodeDataProcessing(const Instruction* instr) { - VIXL_ASSERT((instr->Bits(27, 24) =3D=3D 0xA) || - (instr->Bits(27, 24) =3D=3D 0xB)); - - if (instr->Bit(24) =3D=3D 0) { - if (instr->Bit(28) =3D=3D 0) { - if (instr->Mask(0x80008000) =3D=3D 0x00008000) { - VisitUnallocated(instr); - } else { - VisitLogicalShifted(instr); - } - } else { - switch (instr->Bits(23, 21)) { - case 0: { - if (instr->Mask(0x0000FC00) !=3D 0) { - VisitUnallocated(instr); - } else { - VisitAddSubWithCarry(instr); - } - break; - } - case 2: { - if ((instr->Bit(29) =3D=3D 0) || - (instr->Mask(0x00000410) !=3D 0)) { - VisitUnallocated(instr); - } else { - if (instr->Bit(11) =3D=3D 0) { - VisitConditionalCompareRegister(instr); - } else { - VisitConditionalCompareImmediate(instr); - } - } - break; - } - case 4: { - if (instr->Mask(0x20000800) !=3D 0x00000000) { - VisitUnallocated(instr); - } else { - VisitConditionalSelect(instr); - } - break; - } - case 6: { - if (instr->Bit(29) =3D=3D 0x1) { - VisitUnallocated(instr); - VIXL_FALLTHROUGH(); - } else { - if (instr->Bit(30) =3D=3D 0) { - if ((instr->Bit(15) =3D=3D 0x1) || - (instr->Bits(15, 11) =3D=3D 0) || - (instr->Bits(15, 12) =3D=3D 0x1) || - (instr->Bits(15, 12) =3D=3D 0x3) || - (instr->Bits(15, 13) =3D=3D 0x3) || - (instr->Mask(0x8000EC00) =3D=3D 0x00004C00) || - (instr->Mask(0x8000E800) =3D=3D 0x80004000) || - (instr->Mask(0x8000E400) =3D=3D 0x80004000)) { - VisitUnallocated(instr); - } else { - VisitDataProcessing2Source(instr); - } - } else { - if ((instr->Bit(13) =3D=3D 1) || - 
(instr->Bits(20, 16) !=3D 0) || - (instr->Bits(15, 14) !=3D 0) || - (instr->Mask(0xA01FFC00) =3D=3D 0x00000C00) || - (instr->Mask(0x201FF800) =3D=3D 0x00001800)) { - VisitUnallocated(instr); - } else { - VisitDataProcessing1Source(instr); - } - } - break; - } - } - case 1: - case 3: - case 5: - case 7: VisitUnallocated(instr); break; - } - } - } else { - if (instr->Bit(28) =3D=3D 0) { - if (instr->Bit(21) =3D=3D 0) { - if ((instr->Bits(23, 22) =3D=3D 0x3) || - (instr->Mask(0x80008000) =3D=3D 0x00008000)) { - VisitUnallocated(instr); - } else { - VisitAddSubShifted(instr); - } - } else { - if ((instr->Mask(0x00C00000) !=3D 0x00000000) || - (instr->Mask(0x00001400) =3D=3D 0x00001400) || - (instr->Mask(0x00001800) =3D=3D 0x00001800)) { - VisitUnallocated(instr); - } else { - VisitAddSubExtended(instr); - } - } - } else { - if ((instr->Bit(30) =3D=3D 0x1) || - (instr->Bits(30, 29) =3D=3D 0x1) || - (instr->Mask(0xE0600000) =3D=3D 0x00200000) || - (instr->Mask(0xE0608000) =3D=3D 0x00400000) || - (instr->Mask(0x60608000) =3D=3D 0x00408000) || - (instr->Mask(0x60E00000) =3D=3D 0x00E00000) || - (instr->Mask(0x60E00000) =3D=3D 0x00800000) || - (instr->Mask(0x60E00000) =3D=3D 0x00600000)) { - VisitUnallocated(instr); - } else { - VisitDataProcessing3Source(instr); - } - } - } -} - - -void Decoder::DecodeFP(const Instruction* instr) { - VIXL_ASSERT((instr->Bits(27, 24) =3D=3D 0xE) || - (instr->Bits(27, 24) =3D=3D 0xF)); - if (instr->Bit(28) =3D=3D 0) { - DecodeNEONVectorDataProcessing(instr); - } else { - if (instr->Bits(31, 30) =3D=3D 0x3) { - VisitUnallocated(instr); - } else if (instr->Bits(31, 30) =3D=3D 0x1) { - DecodeNEONScalarDataProcessing(instr); - } else { - if (instr->Bit(29) =3D=3D 0) { - if (instr->Bit(24) =3D=3D 0) { - if (instr->Bit(21) =3D=3D 0) { - if ((instr->Bit(23) =3D=3D 1) || - (instr->Bit(18) =3D=3D 1) || - (instr->Mask(0x80008000) =3D=3D 0x00000000) || - (instr->Mask(0x000E0000) =3D=3D 0x00000000) || - (instr->Mask(0x000E0000) =3D=3D 0x000A0000) || - 
(instr->Mask(0x00160000) =3D=3D 0x00000000) || - (instr->Mask(0x00160000) =3D=3D 0x00120000)) { - VisitUnallocated(instr); - } else { - VisitFPFixedPointConvert(instr); - } - } else { - if (instr->Bits(15, 10) =3D=3D 32) { - VisitUnallocated(instr); - } else if (instr->Bits(15, 10) =3D=3D 0) { - if ((instr->Bits(23, 22) =3D=3D 0x3) || - (instr->Mask(0x000E0000) =3D=3D 0x000A0000) || - (instr->Mask(0x000E0000) =3D=3D 0x000C0000) || - (instr->Mask(0x00160000) =3D=3D 0x00120000) || - (instr->Mask(0x00160000) =3D=3D 0x00140000) || - (instr->Mask(0x20C40000) =3D=3D 0x00800000) || - (instr->Mask(0x20C60000) =3D=3D 0x00840000) || - (instr->Mask(0xA0C60000) =3D=3D 0x80060000) || - (instr->Mask(0xA0C60000) =3D=3D 0x00860000) || - (instr->Mask(0xA0C60000) =3D=3D 0x00460000) || - (instr->Mask(0xA0CE0000) =3D=3D 0x80860000) || - (instr->Mask(0xA0CE0000) =3D=3D 0x804E0000) || - (instr->Mask(0xA0CE0000) =3D=3D 0x000E0000) || - (instr->Mask(0xA0D60000) =3D=3D 0x00160000) || - (instr->Mask(0xA0D60000) =3D=3D 0x80560000) || - (instr->Mask(0xA0D60000) =3D=3D 0x80960000)) { - VisitUnallocated(instr); - } else { - VisitFPIntegerConvert(instr); - } - } else if (instr->Bits(14, 10) =3D=3D 16) { - const Instr masked_A0DF8000 =3D instr->Mask(0xA0DF8000); - if ((instr->Mask(0x80180000) !=3D 0) || - (masked_A0DF8000 =3D=3D 0x00020000) || - (masked_A0DF8000 =3D=3D 0x00030000) || - (masked_A0DF8000 =3D=3D 0x00068000) || - (masked_A0DF8000 =3D=3D 0x00428000) || - (masked_A0DF8000 =3D=3D 0x00430000) || - (masked_A0DF8000 =3D=3D 0x00468000) || - (instr->Mask(0xA0D80000) =3D=3D 0x00800000) || - (instr->Mask(0xA0DE0000) =3D=3D 0x00C00000) || - (instr->Mask(0xA0DF0000) =3D=3D 0x00C30000) || - (instr->Mask(0xA0DC0000) =3D=3D 0x00C40000)) { - VisitUnallocated(instr); - } else { - VisitFPDataProcessing1Source(instr); - } - } else if (instr->Bits(13, 10) =3D=3D 8) { - if ((instr->Bits(15, 14) !=3D 0) || - (instr->Bits(2, 0) !=3D 0) || - (instr->Mask(0x80800000) !=3D 0x00000000)) { - 
VisitUnallocated(instr); - } else { - VisitFPCompare(instr); - } - } else if (instr->Bits(12, 10) =3D=3D 4) { - if ((instr->Bits(9, 5) !=3D 0) || - (instr->Mask(0x80800000) !=3D 0x00000000)) { - VisitUnallocated(instr); - } else { - VisitFPImmediate(instr); - } - } else { - if (instr->Mask(0x80800000) !=3D 0x00000000) { - VisitUnallocated(instr); - } else { - switch (instr->Bits(11, 10)) { - case 1: { - VisitFPConditionalCompare(instr); - break; - } - case 2: { - if ((instr->Bits(15, 14) =3D=3D 0x3) || - (instr->Mask(0x00009000) =3D=3D 0x00009000) || - (instr->Mask(0x0000A000) =3D=3D 0x0000A000)) { - VisitUnallocated(instr); - } else { - VisitFPDataProcessing2Source(instr); - } - break; - } - case 3: { - VisitFPConditionalSelect(instr); - break; - } - default: VIXL_UNREACHABLE(); - } - } - } - } - } else { - // Bit 30 =3D=3D 1 has been handled earlier. - VIXL_ASSERT(instr->Bit(30) =3D=3D 0); - if (instr->Mask(0xA0800000) !=3D 0) { - VisitUnallocated(instr); - } else { - VisitFPDataProcessing3Source(instr); - } - } - } else { - VisitUnallocated(instr); - } - } - } -} - - -void Decoder::DecodeNEONLoadStore(const Instruction* instr) { - VIXL_ASSERT(instr->Bits(29, 25) =3D=3D 0x6); - if (instr->Bit(31) =3D=3D 0) { - if ((instr->Bit(24) =3D=3D 0) && (instr->Bit(21) =3D=3D 1)) { - VisitUnallocated(instr); - return; - } - - if (instr->Bit(23) =3D=3D 0) { - if (instr->Bits(20, 16) =3D=3D 0) { - if (instr->Bit(24) =3D=3D 0) { - VisitNEONLoadStoreMultiStruct(instr); - } else { - VisitNEONLoadStoreSingleStruct(instr); - } - } else { - VisitUnallocated(instr); - } - } else { - if (instr->Bit(24) =3D=3D 0) { - VisitNEONLoadStoreMultiStructPostIndex(instr); - } else { - VisitNEONLoadStoreSingleStructPostIndex(instr); - } - } - } else { - VisitUnallocated(instr); - } -} - - -void Decoder::DecodeNEONVectorDataProcessing(const Instruction* instr) { - VIXL_ASSERT(instr->Bits(28, 25) =3D=3D 0x7); - if (instr->Bit(31) =3D=3D 0) { - if (instr->Bit(24) =3D=3D 0) { - if (instr->Bit(21) 
=3D=3D 0) { - if (instr->Bit(15) =3D=3D 0) { - if (instr->Bit(10) =3D=3D 0) { - if (instr->Bit(29) =3D=3D 0) { - if (instr->Bit(11) =3D=3D 0) { - VisitNEONTable(instr); - } else { - VisitNEONPerm(instr); - } - } else { - VisitNEONExtract(instr); - } - } else { - if (instr->Bits(23, 22) =3D=3D 0) { - VisitNEONCopy(instr); - } else { - VisitUnallocated(instr); - } - } - } else { - VisitUnallocated(instr); - } - } else { - if (instr->Bit(10) =3D=3D 0) { - if (instr->Bit(11) =3D=3D 0) { - VisitNEON3Different(instr); - } else { - if (instr->Bits(18, 17) =3D=3D 0) { - if (instr->Bit(20) =3D=3D 0) { - if (instr->Bit(19) =3D=3D 0) { - VisitNEON2RegMisc(instr); - } else { - if (instr->Bits(30, 29) =3D=3D 0x2) { - VisitCryptoAES(instr); - } else { - VisitUnallocated(instr); - } - } - } else { - if (instr->Bit(19) =3D=3D 0) { - VisitNEONAcrossLanes(instr); - } else { - VisitUnallocated(instr); - } - } - } else { - VisitUnallocated(instr); - } - } - } else { - VisitNEON3Same(instr); - } - } - } else { - if (instr->Bit(10) =3D=3D 0) { - VisitNEONByIndexedElement(instr); - } else { - if (instr->Bit(23) =3D=3D 0) { - if (instr->Bits(22, 19) =3D=3D 0) { - VisitNEONModifiedImmediate(instr); - } else { - VisitNEONShiftImmediate(instr); - } - } else { - VisitUnallocated(instr); - } - } - } - } else { - VisitUnallocated(instr); - } -} - - -void Decoder::DecodeNEONScalarDataProcessing(const Instruction* instr) { - VIXL_ASSERT(instr->Bits(28, 25) =3D=3D 0xF); - if (instr->Bit(24) =3D=3D 0) { - if (instr->Bit(21) =3D=3D 0) { - if (instr->Bit(15) =3D=3D 0) { - if (instr->Bit(10) =3D=3D 0) { - if (instr->Bit(29) =3D=3D 0) { - if (instr->Bit(11) =3D=3D 0) { - VisitCrypto3RegSHA(instr); - } else { - VisitUnallocated(instr); - } - } else { - VisitUnallocated(instr); - } - } else { - if (instr->Bits(23, 22) =3D=3D 0) { - VisitNEONScalarCopy(instr); - } else { - VisitUnallocated(instr); - } - } - } else { - VisitUnallocated(instr); - } - } else { - if (instr->Bit(10) =3D=3D 0) { - if 
(instr->Bit(11) =3D=3D 0) { - VisitNEONScalar3Diff(instr); - } else { - if (instr->Bits(18, 17) =3D=3D 0) { - if (instr->Bit(20) =3D=3D 0) { - if (instr->Bit(19) =3D=3D 0) { - VisitNEONScalar2RegMisc(instr); - } else { - if (instr->Bit(29) =3D=3D 0) { - VisitCrypto2RegSHA(instr); - } else { - VisitUnallocated(instr); - } - } - } else { - if (instr->Bit(19) =3D=3D 0) { - VisitNEONScalarPairwise(instr); - } else { - VisitUnallocated(instr); - } - } - } else { - VisitUnallocated(instr); - } - } - } else { - VisitNEONScalar3Same(instr); - } - } - } else { - if (instr->Bit(10) =3D=3D 0) { - VisitNEONScalarByIndexedElement(instr); - } else { - if (instr->Bit(23) =3D=3D 0) { - VisitNEONScalarShiftImmediate(instr); - } else { - VisitUnallocated(instr); - } - } - } -} - - -#define DEFINE_VISITOR_CALLERS(A) = \ - void Decoder::Visit##A(const Instruction *instr) { = \ - VIXL_ASSERT(instr->Mask(A##FMask) =3D=3D A##Fixed); = \ - std::list::iterator it; = \ - for (it =3D visitors_.begin(); it !=3D visitors_.end(); it++) { = \ - (*it)->Visit##A(instr); = \ - } = \ - } -VISITOR_LIST(DEFINE_VISITOR_CALLERS) -#undef DEFINE_VISITOR_CALLERS -} // namespace vixl diff --git a/disas/libvixl/vixl/a64/disasm-a64.cc b/disas/libvixl/vixl/a64/= disasm-a64.cc deleted file mode 100644 index f34d1d68da..0000000000 --- a/disas/libvixl/vixl/a64/disasm-a64.cc +++ /dev/null @@ -1,3495 +0,0 @@ -// Copyright 2015, ARM Limited -// All rights reserved. -// -// Redistribution and use in source and binary forms, with or without -// modification, are permitted provided that the following conditions are = met: -// -// * Redistributions of source code must retain the above copyright noti= ce, -// this list of conditions and the following disclaimer. -// * Redistributions in binary form must reproduce the above copyright n= otice, -// this list of conditions and the following disclaimer in the documen= tation -// and/or other materials provided with the distribution. 
-// * Neither the name of ARM Limited nor the names of its contributors m= ay be -// used to endorse or promote products derived from this software with= out -// specific prior written permission. -// -// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS"= AND -// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE I= MPLIED -// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LI= ABLE -// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENT= IAL -// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS= OR -// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWE= VER -// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIAB= ILITY, -// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF T= HE USE -// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -#include -#include "vixl/a64/disasm-a64.h" - -namespace vixl { - -Disassembler::Disassembler() { - buffer_size_ =3D 256; - buffer_ =3D reinterpret_cast(malloc(buffer_size_)); - buffer_pos_ =3D 0; - own_buffer_ =3D true; - code_address_offset_ =3D 0; -} - - -Disassembler::Disassembler(char* text_buffer, int buffer_size) { - buffer_size_ =3D buffer_size; - buffer_ =3D text_buffer; - buffer_pos_ =3D 0; - own_buffer_ =3D false; - code_address_offset_ =3D 0; -} - - -Disassembler::~Disassembler() { - if (own_buffer_) { - free(buffer_); - } -} - - -char* Disassembler::GetOutput() { - return buffer_; -} - - -void Disassembler::VisitAddSubImmediate(const Instruction* instr) { - bool rd_is_zr =3D RdIsZROrSP(instr); - bool stack_op =3D (rd_is_zr || RnIsZROrSP(instr)) && - (instr->ImmAddSub() =3D=3D 0) ? 
true : false; - const char *mnemonic =3D ""; - const char *form =3D "'Rds, 'Rns, 'IAddSub"; - const char *form_cmp =3D "'Rns, 'IAddSub"; - const char *form_mov =3D "'Rds, 'Rns"; - - switch (instr->Mask(AddSubImmediateMask)) { - case ADD_w_imm: - case ADD_x_imm: { - mnemonic =3D "add"; - if (stack_op) { - mnemonic =3D "mov"; - form =3D form_mov; - } - break; - } - case ADDS_w_imm: - case ADDS_x_imm: { - mnemonic =3D "adds"; - if (rd_is_zr) { - mnemonic =3D "cmn"; - form =3D form_cmp; - } - break; - } - case SUB_w_imm: - case SUB_x_imm: mnemonic =3D "sub"; break; - case SUBS_w_imm: - case SUBS_x_imm: { - mnemonic =3D "subs"; - if (rd_is_zr) { - mnemonic =3D "cmp"; - form =3D form_cmp; - } - break; - } - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitAddSubShifted(const Instruction* instr) { - bool rd_is_zr =3D RdIsZROrSP(instr); - bool rn_is_zr =3D RnIsZROrSP(instr); - const char *mnemonic =3D ""; - const char *form =3D "'Rd, 'Rn, 'Rm'NDP"; - const char *form_cmp =3D "'Rn, 'Rm'NDP"; - const char *form_neg =3D "'Rd, 'Rm'NDP"; - - switch (instr->Mask(AddSubShiftedMask)) { - case ADD_w_shift: - case ADD_x_shift: mnemonic =3D "add"; break; - case ADDS_w_shift: - case ADDS_x_shift: { - mnemonic =3D "adds"; - if (rd_is_zr) { - mnemonic =3D "cmn"; - form =3D form_cmp; - } - break; - } - case SUB_w_shift: - case SUB_x_shift: { - mnemonic =3D "sub"; - if (rn_is_zr) { - mnemonic =3D "neg"; - form =3D form_neg; - } - break; - } - case SUBS_w_shift: - case SUBS_x_shift: { - mnemonic =3D "subs"; - if (rd_is_zr) { - mnemonic =3D "cmp"; - form =3D form_cmp; - } else if (rn_is_zr) { - mnemonic =3D "negs"; - form =3D form_neg; - } - break; - } - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitAddSubExtended(const Instruction* instr) { - bool rd_is_zr =3D RdIsZROrSP(instr); - const char *mnemonic =3D ""; - Extend mode =3D static_cast(instr->ExtendMode()); - const char *form =3D ((mode 
=3D=3D UXTX) || (mode =3D=3D SXTX)) ? - "'Rds, 'Rns, 'Xm'Ext" : "'Rds, 'Rns, 'Wm'Ext"; - const char *form_cmp =3D ((mode =3D=3D UXTX) || (mode =3D=3D SXTX)) ? - "'Rns, 'Xm'Ext" : "'Rns, 'Wm'Ext"; - - switch (instr->Mask(AddSubExtendedMask)) { - case ADD_w_ext: - case ADD_x_ext: mnemonic =3D "add"; break; - case ADDS_w_ext: - case ADDS_x_ext: { - mnemonic =3D "adds"; - if (rd_is_zr) { - mnemonic =3D "cmn"; - form =3D form_cmp; - } - break; - } - case SUB_w_ext: - case SUB_x_ext: mnemonic =3D "sub"; break; - case SUBS_w_ext: - case SUBS_x_ext: { - mnemonic =3D "subs"; - if (rd_is_zr) { - mnemonic =3D "cmp"; - form =3D form_cmp; - } - break; - } - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitAddSubWithCarry(const Instruction* instr) { - bool rn_is_zr =3D RnIsZROrSP(instr); - const char *mnemonic =3D ""; - const char *form =3D "'Rd, 'Rn, 'Rm"; - const char *form_neg =3D "'Rd, 'Rm"; - - switch (instr->Mask(AddSubWithCarryMask)) { - case ADC_w: - case ADC_x: mnemonic =3D "adc"; break; - case ADCS_w: - case ADCS_x: mnemonic =3D "adcs"; break; - case SBC_w: - case SBC_x: { - mnemonic =3D "sbc"; - if (rn_is_zr) { - mnemonic =3D "ngc"; - form =3D form_neg; - } - break; - } - case SBCS_w: - case SBCS_x: { - mnemonic =3D "sbcs"; - if (rn_is_zr) { - mnemonic =3D "ngcs"; - form =3D form_neg; - } - break; - } - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitLogicalImmediate(const Instruction* instr) { - bool rd_is_zr =3D RdIsZROrSP(instr); - bool rn_is_zr =3D RnIsZROrSP(instr); - const char *mnemonic =3D ""; - const char *form =3D "'Rds, 'Rn, 'ITri"; - - if (instr->ImmLogical() =3D=3D 0) { - // The immediate encoded in the instruction is not in the expected for= mat. 
- Format(instr, "unallocated", "(LogicalImmediate)"); - return; - } - - switch (instr->Mask(LogicalImmediateMask)) { - case AND_w_imm: - case AND_x_imm: mnemonic =3D "and"; break; - case ORR_w_imm: - case ORR_x_imm: { - mnemonic =3D "orr"; - unsigned reg_size =3D (instr->SixtyFourBits() =3D=3D 1) ? kXRegSize - : kWRegSize; - if (rn_is_zr && !IsMovzMovnImm(reg_size, instr->ImmLogical())) { - mnemonic =3D "mov"; - form =3D "'Rds, 'ITri"; - } - break; - } - case EOR_w_imm: - case EOR_x_imm: mnemonic =3D "eor"; break; - case ANDS_w_imm: - case ANDS_x_imm: { - mnemonic =3D "ands"; - if (rd_is_zr) { - mnemonic =3D "tst"; - form =3D "'Rn, 'ITri"; - } - break; - } - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -bool Disassembler::IsMovzMovnImm(unsigned reg_size, uint64_t value) { - VIXL_ASSERT((reg_size =3D=3D kXRegSize) || - ((reg_size =3D=3D kWRegSize) && (value <=3D 0xffffffff))); - - // Test for movz: 16 bits set at positions 0, 16, 32 or 48. - if (((value & UINT64_C(0xffffffffffff0000)) =3D=3D 0) || - ((value & UINT64_C(0xffffffff0000ffff)) =3D=3D 0) || - ((value & UINT64_C(0xffff0000ffffffff)) =3D=3D 0) || - ((value & UINT64_C(0x0000ffffffffffff)) =3D=3D 0)) { - return true; - } - - // Test for movn: NOT(16 bits set at positions 0, 16, 32 or 48). 
- if ((reg_size =3D=3D kXRegSize) && - (((~value & UINT64_C(0xffffffffffff0000)) =3D=3D 0) || - ((~value & UINT64_C(0xffffffff0000ffff)) =3D=3D 0) || - ((~value & UINT64_C(0xffff0000ffffffff)) =3D=3D 0) || - ((~value & UINT64_C(0x0000ffffffffffff)) =3D=3D 0))) { - return true; - } - if ((reg_size =3D=3D kWRegSize) && - (((value & 0xffff0000) =3D=3D 0xffff0000) || - ((value & 0x0000ffff) =3D=3D 0x0000ffff))) { - return true; - } - return false; -} - - -void Disassembler::VisitLogicalShifted(const Instruction* instr) { - bool rd_is_zr =3D RdIsZROrSP(instr); - bool rn_is_zr =3D RnIsZROrSP(instr); - const char *mnemonic =3D ""; - const char *form =3D "'Rd, 'Rn, 'Rm'NLo"; - - switch (instr->Mask(LogicalShiftedMask)) { - case AND_w: - case AND_x: mnemonic =3D "and"; break; - case BIC_w: - case BIC_x: mnemonic =3D "bic"; break; - case EOR_w: - case EOR_x: mnemonic =3D "eor"; break; - case EON_w: - case EON_x: mnemonic =3D "eon"; break; - case BICS_w: - case BICS_x: mnemonic =3D "bics"; break; - case ANDS_w: - case ANDS_x: { - mnemonic =3D "ands"; - if (rd_is_zr) { - mnemonic =3D "tst"; - form =3D "'Rn, 'Rm'NLo"; - } - break; - } - case ORR_w: - case ORR_x: { - mnemonic =3D "orr"; - if (rn_is_zr && (instr->ImmDPShift() =3D=3D 0) && (instr->ShiftDP() = =3D=3D LSL)) { - mnemonic =3D "mov"; - form =3D "'Rd, 'Rm"; - } - break; - } - case ORN_w: - case ORN_x: { - mnemonic =3D "orn"; - if (rn_is_zr) { - mnemonic =3D "mvn"; - form =3D "'Rd, 'Rm'NLo"; - } - break; - } - default: VIXL_UNREACHABLE(); - } - - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitConditionalCompareRegister(const Instruction* inst= r) { - const char *mnemonic =3D ""; - const char *form =3D "'Rn, 'Rm, 'INzcv, 'Cond"; - - switch (instr->Mask(ConditionalCompareRegisterMask)) { - case CCMN_w: - case CCMN_x: mnemonic =3D "ccmn"; break; - case CCMP_w: - case CCMP_x: mnemonic =3D "ccmp"; break; - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -void 
Disassembler::VisitConditionalCompareImmediate(const Instruction* ins= tr) { - const char *mnemonic =3D ""; - const char *form =3D "'Rn, 'IP, 'INzcv, 'Cond"; - - switch (instr->Mask(ConditionalCompareImmediateMask)) { - case CCMN_w_imm: - case CCMN_x_imm: mnemonic =3D "ccmn"; break; - case CCMP_w_imm: - case CCMP_x_imm: mnemonic =3D "ccmp"; break; - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitConditionalSelect(const Instruction* instr) { - bool rnm_is_zr =3D (RnIsZROrSP(instr) && RmIsZROrSP(instr)); - bool rn_is_rm =3D (instr->Rn() =3D=3D instr->Rm()); - const char *mnemonic =3D ""; - const char *form =3D "'Rd, 'Rn, 'Rm, 'Cond"; - const char *form_test =3D "'Rd, 'CInv"; - const char *form_update =3D "'Rd, 'Rn, 'CInv"; - - Condition cond =3D static_cast(instr->Condition()); - bool invertible_cond =3D (cond !=3D al) && (cond !=3D nv); - - switch (instr->Mask(ConditionalSelectMask)) { - case CSEL_w: - case CSEL_x: mnemonic =3D "csel"; break; - case CSINC_w: - case CSINC_x: { - mnemonic =3D "csinc"; - if (rnm_is_zr && invertible_cond) { - mnemonic =3D "cset"; - form =3D form_test; - } else if (rn_is_rm && invertible_cond) { - mnemonic =3D "cinc"; - form =3D form_update; - } - break; - } - case CSINV_w: - case CSINV_x: { - mnemonic =3D "csinv"; - if (rnm_is_zr && invertible_cond) { - mnemonic =3D "csetm"; - form =3D form_test; - } else if (rn_is_rm && invertible_cond) { - mnemonic =3D "cinv"; - form =3D form_update; - } - break; - } - case CSNEG_w: - case CSNEG_x: { - mnemonic =3D "csneg"; - if (rn_is_rm && invertible_cond) { - mnemonic =3D "cneg"; - form =3D form_update; - } - break; - } - default: VIXL_UNREACHABLE(); - } - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitBitfield(const Instruction* instr) { - unsigned s =3D instr->ImmS(); - unsigned r =3D instr->ImmR(); - unsigned rd_size_minus_1 =3D - ((instr->SixtyFourBits() =3D=3D 1) ? 
kXRegSize : kWRegSize) - 1; - const char *mnemonic =3D ""; - const char *form =3D ""; - const char *form_shift_right =3D "'Rd, 'Rn, 'IBr"; - const char *form_extend =3D "'Rd, 'Wn"; - const char *form_bfiz =3D "'Rd, 'Rn, 'IBZ-r, 'IBs+1"; - const char *form_bfx =3D "'Rd, 'Rn, 'IBr, 'IBs-r+1"; - const char *form_lsl =3D "'Rd, 'Rn, 'IBZ-r"; - - switch (instr->Mask(BitfieldMask)) { - case SBFM_w: - case SBFM_x: { - mnemonic =3D "sbfx"; - form =3D form_bfx; - if (r =3D=3D 0) { - form =3D form_extend; - if (s =3D=3D 7) { - mnemonic =3D "sxtb"; - } else if (s =3D=3D 15) { - mnemonic =3D "sxth"; - } else if ((s =3D=3D 31) && (instr->SixtyFourBits() =3D=3D 1)) { - mnemonic =3D "sxtw"; - } else { - form =3D form_bfx; - } - } else if (s =3D=3D rd_size_minus_1) { - mnemonic =3D "asr"; - form =3D form_shift_right; - } else if (s < r) { - mnemonic =3D "sbfiz"; - form =3D form_bfiz; - } - break; - } - case UBFM_w: - case UBFM_x: { - mnemonic =3D "ubfx"; - form =3D form_bfx; - if (r =3D=3D 0) { - form =3D form_extend; - if (s =3D=3D 7) { - mnemonic =3D "uxtb"; - } else if (s =3D=3D 15) { - mnemonic =3D "uxth"; - } else { - form =3D form_bfx; - } - } - if (s =3D=3D rd_size_minus_1) { - mnemonic =3D "lsr"; - form =3D form_shift_right; - } else if (r =3D=3D s + 1) { - mnemonic =3D "lsl"; - form =3D form_lsl; - } else if (s < r) { - mnemonic =3D "ubfiz"; - form =3D form_bfiz; - } - break; - } - case BFM_w: - case BFM_x: { - mnemonic =3D "bfxil"; - form =3D form_bfx; - if (s < r) { - mnemonic =3D "bfi"; - form =3D form_bfiz; - } - } - } - Format(instr, mnemonic, form); -} - - -void Disassembler::VisitExtract(const Instruction* instr) { - const char *mnemonic =3D ""; - const char *form =3D "'Rd, 'Rn, 'Rm, 'IExtract"; - - switch (instr->Mask(ExtractMask)) { - case EXTR_w: - case EXTR_x: { - if (instr->Rn() =3D=3D instr->Rm()) { - mnemonic =3D "ror"; - form =3D "'Rd, 'Rn, 'IExtract"; - } else { - mnemonic =3D "extr"; - } - break; - } - default: VIXL_UNREACHABLE(); - } - Format(instr, 
mnemonic, form);
-}
-
-
-void Disassembler::VisitPCRelAddressing(const Instruction* instr) {
-  switch (instr->Mask(PCRelAddressingMask)) {
-    case ADR: Format(instr, "adr", "'Xd, 'AddrPCRelByte"); break;
-    case ADRP: Format(instr, "adrp", "'Xd, 'AddrPCRelPage"); break;
-    default: Format(instr, "unimplemented", "(PCRelAddressing)");
-  }
-}
-
-
-void Disassembler::VisitConditionalBranch(const Instruction* instr) {
-  switch (instr->Mask(ConditionalBranchMask)) {
-    case B_cond: Format(instr, "b.'CBrn", "'TImmCond"); break;
-    default: VIXL_UNREACHABLE();
-  }
-}
-
-
-void Disassembler::VisitUnconditionalBranchToRegister(
-    const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Xn";
-
-  switch (instr->Mask(UnconditionalBranchToRegisterMask)) {
-    case BR: mnemonic = "br"; break;
-    case BLR: mnemonic = "blr"; break;
-    case RET: {
-      mnemonic = "ret";
-      if (instr->Rn() == kLinkRegCode) {
-        form = NULL;
-      }
-      break;
-    }
-    default: form = "(UnconditionalBranchToRegister)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitUnconditionalBranch(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'TImmUncn";
-
-  switch (instr->Mask(UnconditionalBranchMask)) {
-    case B: mnemonic = "b"; break;
-    case BL: mnemonic = "bl"; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitDataProcessing1Source(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'Rd, 'Rn";
-
-  switch (instr->Mask(DataProcessing1SourceMask)) {
-    #define FORMAT(A, B) \
-    case A##_w: \
-    case A##_x: mnemonic = B; break;
-    FORMAT(RBIT, "rbit");
-    FORMAT(REV16, "rev16");
-    FORMAT(REV, "rev");
-    FORMAT(CLZ, "clz");
-    FORMAT(CLS, "cls");
-    #undef FORMAT
-    case REV32_x: mnemonic = "rev32"; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitDataProcessing2Source(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Rd, 'Rn, 'Rm";
-  const char *form_wwx = "'Wd, 'Wn, 'Xm";
-
-  switch (instr->Mask(DataProcessing2SourceMask)) {
-    #define FORMAT(A, B) \
-    case A##_w: \
-    case A##_x: mnemonic = B; break;
-    FORMAT(UDIV, "udiv");
-    FORMAT(SDIV, "sdiv");
-    FORMAT(LSLV, "lsl");
-    FORMAT(LSRV, "lsr");
-    FORMAT(ASRV, "asr");
-    FORMAT(RORV, "ror");
-    #undef FORMAT
-    case CRC32B: mnemonic = "crc32b"; break;
-    case CRC32H: mnemonic = "crc32h"; break;
-    case CRC32W: mnemonic = "crc32w"; break;
-    case CRC32X: mnemonic = "crc32x"; form = form_wwx; break;
-    case CRC32CB: mnemonic = "crc32cb"; break;
-    case CRC32CH: mnemonic = "crc32ch"; break;
-    case CRC32CW: mnemonic = "crc32cw"; break;
-    case CRC32CX: mnemonic = "crc32cx"; form = form_wwx; break;
-    default: form = "(DataProcessing2Source)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitDataProcessing3Source(const Instruction* instr) {
-  bool ra_is_zr = RaIsZROrSP(instr);
-  const char *mnemonic = "";
-  const char *form = "'Xd, 'Wn, 'Wm, 'Xa";
-  const char *form_rrr = "'Rd, 'Rn, 'Rm";
-  const char *form_rrrr = "'Rd, 'Rn, 'Rm, 'Ra";
-  const char *form_xww = "'Xd, 'Wn, 'Wm";
-  const char *form_xxx = "'Xd, 'Xn, 'Xm";
-
-  switch (instr->Mask(DataProcessing3SourceMask)) {
-    case MADD_w:
-    case MADD_x: {
-      mnemonic = "madd";
-      form = form_rrrr;
-      if (ra_is_zr) {
-        mnemonic = "mul";
-        form = form_rrr;
-      }
-      break;
-    }
-    case MSUB_w:
-    case MSUB_x: {
-      mnemonic = "msub";
-      form = form_rrrr;
-      if (ra_is_zr) {
-        mnemonic = "mneg";
-        form = form_rrr;
-      }
-      break;
-    }
-    case SMADDL_x: {
-      mnemonic = "smaddl";
-      if (ra_is_zr) {
-        mnemonic = "smull";
-        form = form_xww;
-      }
-      break;
-    }
-    case SMSUBL_x: {
-      mnemonic = "smsubl";
-      if (ra_is_zr) {
-        mnemonic = "smnegl";
-        form = form_xww;
-      }
-      break;
-    }
-    case UMADDL_x: {
-      mnemonic = "umaddl";
-      if (ra_is_zr) {
-        mnemonic = "umull";
-        form = form_xww;
-      }
-      break;
-    }
-    case UMSUBL_x: {
-      mnemonic = "umsubl";
-      if (ra_is_zr) {
-        mnemonic = "umnegl";
-        form = form_xww;
-      }
-      break;
-    }
-    case SMULH_x: {
-      mnemonic = "smulh";
-      form = form_xxx;
-      break;
-    }
-    case UMULH_x: {
-      mnemonic = "umulh";
-      form = form_xxx;
-      break;
-    }
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitCompareBranch(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'Rt, 'TImmCmpa";
-
-  switch (instr->Mask(CompareBranchMask)) {
-    case CBZ_w:
-    case CBZ_x: mnemonic = "cbz"; break;
-    case CBNZ_w:
-    case CBNZ_x: mnemonic = "cbnz"; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitTestBranch(const Instruction* instr) {
-  const char *mnemonic = "";
-  // If the top bit of the immediate is clear, the tested register is
-  // disassembled as Wt, otherwise Xt. As the top bit of the immediate is
-  // encoded in bit 31 of the instruction, we can reuse the Rt form, which
-  // uses bit 31 (normally "sf") to choose the register size.
-  const char *form = "'Rt, 'IS, 'TImmTest";
-
-  switch (instr->Mask(TestBranchMask)) {
-    case TBZ: mnemonic = "tbz"; break;
-    case TBNZ: mnemonic = "tbnz"; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitMoveWideImmediate(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'Rd, 'IMoveImm";
-
-  // Print the shift separately for movk, to make it clear which half word will
-  // be overwritten. Movn and movz print the computed immediate, which includes
-  // shift calculation.
-  switch (instr->Mask(MoveWideImmediateMask)) {
-    case MOVN_w:
-    case MOVN_x:
-      if ((instr->ImmMoveWide()) || (instr->ShiftMoveWide() == 0)) {
-        if ((instr->SixtyFourBits() == 0) && (instr->ImmMoveWide() == 0xffff)) {
-          mnemonic = "movn";
-        } else {
-          mnemonic = "mov";
-          form = "'Rd, 'IMoveNeg";
-        }
-      } else {
-        mnemonic = "movn";
-      }
-      break;
-    case MOVZ_w:
-    case MOVZ_x:
-      if ((instr->ImmMoveWide()) || (instr->ShiftMoveWide() == 0))
-        mnemonic = "mov";
-      else
-        mnemonic = "movz";
-      break;
-    case MOVK_w:
-    case MOVK_x: mnemonic = "movk"; form = "'Rd, 'IMoveLSL"; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-#define LOAD_STORE_LIST(V) \
-  V(STRB_w, "strb", "'Wt") \
-  V(STRH_w, "strh", "'Wt") \
-  V(STR_w, "str", "'Wt") \
-  V(STR_x, "str", "'Xt") \
-  V(LDRB_w, "ldrb", "'Wt") \
-  V(LDRH_w, "ldrh", "'Wt") \
-  V(LDR_w, "ldr", "'Wt") \
-  V(LDR_x, "ldr", "'Xt") \
-  V(LDRSB_x, "ldrsb", "'Xt") \
-  V(LDRSH_x, "ldrsh", "'Xt") \
-  V(LDRSW_x, "ldrsw", "'Xt") \
-  V(LDRSB_w, "ldrsb", "'Wt") \
-  V(LDRSH_w, "ldrsh", "'Wt") \
-  V(STR_b, "str", "'Bt") \
-  V(STR_h, "str", "'Ht") \
-  V(STR_s, "str", "'St") \
-  V(STR_d, "str", "'Dt") \
-  V(LDR_b, "ldr", "'Bt") \
-  V(LDR_h, "ldr", "'Ht") \
-  V(LDR_s, "ldr", "'St") \
-  V(LDR_d, "ldr", "'Dt") \
-  V(STR_q, "str", "'Qt") \
-  V(LDR_q, "ldr", "'Qt")
-
-void Disassembler::VisitLoadStorePreIndex(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(LoadStorePreIndex)";
-
-  switch (instr->Mask(LoadStorePreIndexMask)) {
-    #define LS_PREINDEX(A, B, C) \
-    case A##_pre: mnemonic = B; form = C ", ['Xns'ILS]!"; break;
-    LOAD_STORE_LIST(LS_PREINDEX)
-    #undef LS_PREINDEX
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStorePostIndex(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(LoadStorePostIndex)";
-
-  switch (instr->Mask(LoadStorePostIndexMask)) {
-    #define LS_POSTINDEX(A, B, C) \
-    case A##_post: mnemonic = B; form = C ", ['Xns]'ILS"; break;
-    LOAD_STORE_LIST(LS_POSTINDEX)
-    #undef LS_POSTINDEX
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStoreUnsignedOffset(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(LoadStoreUnsignedOffset)";
-
-  switch (instr->Mask(LoadStoreUnsignedOffsetMask)) {
-    #define LS_UNSIGNEDOFFSET(A, B, C) \
-    case A##_unsigned: mnemonic = B; form = C ", ['Xns'ILU]"; break;
-    LOAD_STORE_LIST(LS_UNSIGNEDOFFSET)
-    #undef LS_UNSIGNEDOFFSET
-    case PRFM_unsigned: mnemonic = "prfm"; form = "'PrefOp, ['Xns'ILU]";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStoreRegisterOffset(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(LoadStoreRegisterOffset)";
-
-  switch (instr->Mask(LoadStoreRegisterOffsetMask)) {
-    #define LS_REGISTEROFFSET(A, B, C) \
-    case A##_reg: mnemonic = B; form = C ", ['Xns, 'Offsetreg]"; break;
-    LOAD_STORE_LIST(LS_REGISTEROFFSET)
-    #undef LS_REGISTEROFFSET
-    case PRFM_reg: mnemonic = "prfm"; form = "'PrefOp, ['Xns, 'Offsetreg]";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStoreUnscaledOffset(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Wt, ['Xns'ILS]";
-  const char *form_x = "'Xt, ['Xns'ILS]";
-  const char *form_b = "'Bt, ['Xns'ILS]";
-  const char *form_h = "'Ht, ['Xns'ILS]";
-  const char *form_s = "'St, ['Xns'ILS]";
-  const char *form_d = "'Dt, ['Xns'ILS]";
-  const char *form_q = "'Qt, ['Xns'ILS]";
-  const char *form_prefetch = "'PrefOp, ['Xns'ILS]";
-
-  switch (instr->Mask(LoadStoreUnscaledOffsetMask)) {
-    case STURB_w: mnemonic = "sturb"; break;
-    case STURH_w: mnemonic = "sturh"; break;
-    case STUR_w: mnemonic = "stur"; break;
-    case STUR_x: mnemonic = "stur"; form = form_x; break;
-    case STUR_b: mnemonic = "stur"; form = form_b; break;
-    case STUR_h: mnemonic = "stur"; form = form_h; break;
-    case STUR_s: mnemonic = "stur"; form = form_s; break;
-    case STUR_d: mnemonic = "stur"; form = form_d; break;
-    case STUR_q: mnemonic = "stur"; form = form_q; break;
-    case LDURB_w: mnemonic = "ldurb"; break;
-    case LDURH_w: mnemonic = "ldurh"; break;
-    case LDUR_w: mnemonic = "ldur"; break;
-    case LDUR_x: mnemonic = "ldur"; form = form_x; break;
-    case LDUR_b: mnemonic = "ldur"; form = form_b; break;
-    case LDUR_h: mnemonic = "ldur"; form = form_h; break;
-    case LDUR_s: mnemonic = "ldur"; form = form_s; break;
-    case LDUR_d: mnemonic = "ldur"; form = form_d; break;
-    case LDUR_q: mnemonic = "ldur"; form = form_q; break;
-    case LDURSB_x: form = form_x; VIXL_FALLTHROUGH();
-    case LDURSB_w: mnemonic = "ldursb"; break;
-    case LDURSH_x: form = form_x; VIXL_FALLTHROUGH();
-    case LDURSH_w: mnemonic = "ldursh"; break;
-    case LDURSW_x: mnemonic = "ldursw"; form = form_x; break;
-    case PRFUM: mnemonic = "prfum"; form = form_prefetch; break;
-    default: form = "(LoadStoreUnscaledOffset)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadLiteral(const Instruction* instr) {
-  const char *mnemonic = "ldr";
-  const char *form = "(LoadLiteral)";
-
-  switch (instr->Mask(LoadLiteralMask)) {
-    case LDR_w_lit: form = "'Wt, 'ILLiteral 'LValue"; break;
-    case LDR_x_lit: form = "'Xt, 'ILLiteral 'LValue"; break;
-    case LDR_s_lit: form = "'St, 'ILLiteral 'LValue"; break;
-    case LDR_d_lit: form = "'Dt, 'ILLiteral 'LValue"; break;
-    case LDR_q_lit: form = "'Qt, 'ILLiteral 'LValue"; break;
-    case LDRSW_x_lit: {
-      mnemonic = "ldrsw";
-      form = "'Xt, 'ILLiteral 'LValue";
-      break;
-    }
-    case PRFM_lit: {
-      mnemonic = "prfm";
-      form = "'PrefOp, 'ILLiteral 'LValue";
-      break;
-    }
-    default: mnemonic = "unimplemented";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-#define LOAD_STORE_PAIR_LIST(V) \
-  V(STP_w, "stp", "'Wt, 'Wt2", "2") \
-  V(LDP_w, "ldp", "'Wt, 'Wt2", "2") \
-  V(LDPSW_x, "ldpsw", "'Xt, 'Xt2", "2") \
-  V(STP_x, "stp", "'Xt, 'Xt2", "3") \
-  V(LDP_x, "ldp", "'Xt, 'Xt2", "3") \
-  V(STP_s, "stp", "'St, 'St2", "2") \
-  V(LDP_s, "ldp", "'St, 'St2", "2") \
-  V(STP_d, "stp", "'Dt, 'Dt2", "3") \
-  V(LDP_d, "ldp", "'Dt, 'Dt2", "3") \
-  V(LDP_q, "ldp", "'Qt, 'Qt2", "4") \
-  V(STP_q, "stp", "'Qt, 'Qt2", "4")
-
-void Disassembler::VisitLoadStorePairPostIndex(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(LoadStorePairPostIndex)";
-
-  switch (instr->Mask(LoadStorePairPostIndexMask)) {
-    #define LSP_POSTINDEX(A, B, C, D) \
-    case A##_post: mnemonic = B; form = C ", ['Xns]'ILP" D; break;
-    LOAD_STORE_PAIR_LIST(LSP_POSTINDEX)
-    #undef LSP_POSTINDEX
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStorePairPreIndex(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(LoadStorePairPreIndex)";
-
-  switch (instr->Mask(LoadStorePairPreIndexMask)) {
-    #define LSP_PREINDEX(A, B, C, D) \
-    case A##_pre: mnemonic = B; form = C ", ['Xns'ILP" D "]!"; break;
-    LOAD_STORE_PAIR_LIST(LSP_PREINDEX)
-    #undef LSP_PREINDEX
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStorePairOffset(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(LoadStorePairOffset)";
-
-  switch (instr->Mask(LoadStorePairOffsetMask)) {
-    #define LSP_OFFSET(A, B, C, D) \
-    case A##_off: mnemonic = B; form = C ", ['Xns'ILP" D "]"; break;
-    LOAD_STORE_PAIR_LIST(LSP_OFFSET)
-    #undef LSP_OFFSET
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStorePairNonTemporal(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form;
-
-  switch (instr->Mask(LoadStorePairNonTemporalMask)) {
-    case STNP_w: mnemonic = "stnp"; form = "'Wt, 'Wt2, ['Xns'ILP2]"; break;
-    case LDNP_w: mnemonic = "ldnp"; form = "'Wt, 'Wt2, ['Xns'ILP2]"; break;
-    case STNP_x: mnemonic = "stnp"; form = "'Xt, 'Xt2, ['Xns'ILP3]"; break;
-    case LDNP_x: mnemonic = "ldnp"; form = "'Xt, 'Xt2, ['Xns'ILP3]"; break;
-    case STNP_s: mnemonic = "stnp"; form = "'St, 'St2, ['Xns'ILP2]"; break;
-    case LDNP_s: mnemonic = "ldnp"; form = "'St, 'St2, ['Xns'ILP2]"; break;
-    case STNP_d: mnemonic = "stnp"; form = "'Dt, 'Dt2, ['Xns'ILP3]"; break;
-    case LDNP_d: mnemonic = "ldnp"; form = "'Dt, 'Dt2, ['Xns'ILP3]"; break;
-    case STNP_q: mnemonic = "stnp"; form = "'Qt, 'Qt2, ['Xns'ILP4]"; break;
-    case LDNP_q: mnemonic = "ldnp"; form = "'Qt, 'Qt2, ['Xns'ILP4]"; break;
-    default: form = "(LoadStorePairNonTemporal)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitLoadStoreExclusive(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form;
-
-  switch (instr->Mask(LoadStoreExclusiveMask)) {
-    case STXRB_w: mnemonic = "stxrb"; form = "'Ws, 'Wt, ['Xns]"; break;
-    case STXRH_w: mnemonic = "stxrh"; form = "'Ws, 'Wt, ['Xns]"; break;
-    case STXR_w: mnemonic = "stxr"; form = "'Ws, 'Wt, ['Xns]"; break;
-    case STXR_x: mnemonic = "stxr"; form = "'Ws, 'Xt, ['Xns]"; break;
-    case LDXRB_w: mnemonic = "ldxrb"; form = "'Wt, ['Xns]"; break;
-    case LDXRH_w: mnemonic = "ldxrh"; form = "'Wt, ['Xns]"; break;
-    case LDXR_w: mnemonic = "ldxr"; form = "'Wt, ['Xns]"; break;
-    case LDXR_x: mnemonic = "ldxr"; form = "'Xt, ['Xns]"; break;
-    case STXP_w: mnemonic = "stxp"; form = "'Ws, 'Wt, 'Wt2, ['Xns]"; break;
-    case STXP_x: mnemonic = "stxp"; form = "'Ws, 'Xt, 'Xt2, ['Xns]"; break;
-    case LDXP_w: mnemonic = "ldxp"; form = "'Wt, 'Wt2, ['Xns]"; break;
-    case LDXP_x: mnemonic = "ldxp"; form = "'Xt, 'Xt2, ['Xns]"; break;
-    case STLXRB_w: mnemonic = "stlxrb"; form = "'Ws, 'Wt, ['Xns]"; break;
-    case STLXRH_w: mnemonic = "stlxrh"; form = "'Ws, 'Wt, ['Xns]"; break;
-    case STLXR_w: mnemonic = "stlxr"; form = "'Ws, 'Wt, ['Xns]"; break;
-    case STLXR_x: mnemonic = "stlxr"; form = "'Ws, 'Xt, ['Xns]"; break;
-    case LDAXRB_w: mnemonic = "ldaxrb"; form = "'Wt, ['Xns]"; break;
-    case LDAXRH_w: mnemonic = "ldaxrh"; form = "'Wt, ['Xns]"; break;
-    case LDAXR_w: mnemonic = "ldaxr"; form = "'Wt, ['Xns]"; break;
-    case LDAXR_x: mnemonic = "ldaxr"; form = "'Xt, ['Xns]"; break;
-    case STLXP_w: mnemonic = "stlxp"; form = "'Ws, 'Wt, 'Wt2, ['Xns]"; break;
-    case STLXP_x: mnemonic = "stlxp"; form = "'Ws, 'Xt, 'Xt2, ['Xns]"; break;
-    case LDAXP_w: mnemonic = "ldaxp"; form = "'Wt, 'Wt2, ['Xns]"; break;
-    case LDAXP_x: mnemonic = "ldaxp"; form = "'Xt, 'Xt2, ['Xns]"; break;
-    case STLRB_w: mnemonic = "stlrb"; form = "'Wt, ['Xns]"; break;
-    case STLRH_w: mnemonic = "stlrh"; form = "'Wt, ['Xns]"; break;
-    case STLR_w: mnemonic = "stlr"; form = "'Wt, ['Xns]"; break;
-    case STLR_x: mnemonic = "stlr"; form = "'Xt, ['Xns]"; break;
-    case LDARB_w: mnemonic = "ldarb"; form = "'Wt, ['Xns]"; break;
-    case LDARH_w: mnemonic = "ldarh"; form = "'Wt, ['Xns]"; break;
-    case LDAR_w: mnemonic = "ldar"; form = "'Wt, ['Xns]"; break;
-    case LDAR_x: mnemonic = "ldar"; form = "'Xt, ['Xns]"; break;
-    default: form = "(LoadStoreExclusive)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPCompare(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Fn, 'Fm";
-  const char *form_zero = "'Fn, #0.0";
-
-  switch (instr->Mask(FPCompareMask)) {
-    case FCMP_s_zero:
-    case FCMP_d_zero: form = form_zero; VIXL_FALLTHROUGH();
-    case FCMP_s:
-    case FCMP_d: mnemonic = "fcmp"; break;
-    case FCMPE_s_zero:
-    case FCMPE_d_zero: form = form_zero; VIXL_FALLTHROUGH();
-    case FCMPE_s:
-    case FCMPE_d: mnemonic = "fcmpe"; break;
-    default: form = "(FPCompare)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPConditionalCompare(const Instruction* instr) {
-  const char *mnemonic = "unmplemented";
-  const char *form = "'Fn, 'Fm, 'INzcv, 'Cond";
-
-  switch (instr->Mask(FPConditionalCompareMask)) {
-    case FCCMP_s:
-    case FCCMP_d: mnemonic = "fccmp"; break;
-    case FCCMPE_s:
-    case FCCMPE_d: mnemonic = "fccmpe"; break;
-    default: form = "(FPConditionalCompare)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPConditionalSelect(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'Fd, 'Fn, 'Fm, 'Cond";
-
-  switch (instr->Mask(FPConditionalSelectMask)) {
-    case FCSEL_s:
-    case FCSEL_d: mnemonic = "fcsel"; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPDataProcessing1Source(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Fd, 'Fn";
-
-  switch (instr->Mask(FPDataProcessing1SourceMask)) {
-    #define FORMAT(A, B) \
-    case A##_s: \
-    case A##_d: mnemonic = B; break;
-    FORMAT(FMOV, "fmov");
-    FORMAT(FABS, "fabs");
-    FORMAT(FNEG, "fneg");
-    FORMAT(FSQRT, "fsqrt");
-    FORMAT(FRINTN, "frintn");
-    FORMAT(FRINTP, "frintp");
-    FORMAT(FRINTM, "frintm");
-    FORMAT(FRINTZ, "frintz");
-    FORMAT(FRINTA, "frinta");
-    FORMAT(FRINTX, "frintx");
-    FORMAT(FRINTI, "frinti");
-    #undef FORMAT
-    case FCVT_ds: mnemonic = "fcvt"; form = "'Dd, 'Sn"; break;
-    case FCVT_sd: mnemonic = "fcvt"; form = "'Sd, 'Dn"; break;
-    case FCVT_hs: mnemonic = "fcvt"; form = "'Hd, 'Sn"; break;
-    case FCVT_sh: mnemonic = "fcvt"; form = "'Sd, 'Hn"; break;
-    case FCVT_dh: mnemonic = "fcvt"; form = "'Dd, 'Hn"; break;
-    case FCVT_hd: mnemonic = "fcvt"; form = "'Hd, 'Dn"; break;
-    default: form = "(FPDataProcessing1Source)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPDataProcessing2Source(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'Fd, 'Fn, 'Fm";
-
-  switch (instr->Mask(FPDataProcessing2SourceMask)) {
-    #define FORMAT(A, B) \
-    case A##_s: \
-    case A##_d: mnemonic = B; break;
-    FORMAT(FMUL, "fmul");
-    FORMAT(FDIV, "fdiv");
-    FORMAT(FADD, "fadd");
-    FORMAT(FSUB, "fsub");
-    FORMAT(FMAX, "fmax");
-    FORMAT(FMIN, "fmin");
-    FORMAT(FMAXNM, "fmaxnm");
-    FORMAT(FMINNM, "fminnm");
-    FORMAT(FNMUL, "fnmul");
-    #undef FORMAT
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPDataProcessing3Source(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'Fd, 'Fn, 'Fm, 'Fa";
-
-  switch (instr->Mask(FPDataProcessing3SourceMask)) {
-    #define FORMAT(A, B) \
-    case A##_s: \
-    case A##_d: mnemonic = B; break;
-    FORMAT(FMADD, "fmadd");
-    FORMAT(FMSUB, "fmsub");
-    FORMAT(FNMADD, "fnmadd");
-    FORMAT(FNMSUB, "fnmsub");
-    #undef FORMAT
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPImmediate(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "(FPImmediate)";
-
-  switch (instr->Mask(FPImmediateMask)) {
-    case FMOV_s_imm: mnemonic = "fmov"; form = "'Sd, 'IFPSingle"; break;
-    case FMOV_d_imm: mnemonic = "fmov"; form = "'Dd, 'IFPDouble"; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPIntegerConvert(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(FPIntegerConvert)";
-  const char *form_rf = "'Rd, 'Fn";
-  const char *form_fr = "'Fd, 'Rn";
-
-  switch (instr->Mask(FPIntegerConvertMask)) {
-    case FMOV_ws:
-    case FMOV_xd: mnemonic = "fmov"; form = form_rf; break;
-    case FMOV_sw:
-    case FMOV_dx: mnemonic = "fmov"; form = form_fr; break;
-    case FMOV_d1_x: mnemonic = "fmov"; form = "'Vd.D[1], 'Rn"; break;
-    case FMOV_x_d1: mnemonic = "fmov"; form = "'Rd, 'Vn.D[1]"; break;
-    case FCVTAS_ws:
-    case FCVTAS_xs:
-    case FCVTAS_wd:
-    case FCVTAS_xd: mnemonic = "fcvtas"; form = form_rf; break;
-    case FCVTAU_ws:
-    case FCVTAU_xs:
-    case FCVTAU_wd:
-    case FCVTAU_xd: mnemonic = "fcvtau"; form = form_rf; break;
-    case FCVTMS_ws:
-    case FCVTMS_xs:
-    case FCVTMS_wd:
-    case FCVTMS_xd: mnemonic = "fcvtms"; form = form_rf; break;
-    case FCVTMU_ws:
-    case FCVTMU_xs:
-    case FCVTMU_wd:
-    case FCVTMU_xd: mnemonic = "fcvtmu"; form = form_rf; break;
-    case FCVTNS_ws:
-    case FCVTNS_xs:
-    case FCVTNS_wd:
-    case FCVTNS_xd: mnemonic = "fcvtns"; form = form_rf; break;
-    case FCVTNU_ws:
-    case FCVTNU_xs:
-    case FCVTNU_wd:
-    case FCVTNU_xd: mnemonic = "fcvtnu"; form = form_rf; break;
-    case FCVTZU_xd:
-    case FCVTZU_ws:
-    case FCVTZU_wd:
-    case FCVTZU_xs: mnemonic = "fcvtzu"; form = form_rf; break;
-    case FCVTZS_xd:
-    case FCVTZS_wd:
-    case FCVTZS_xs:
-    case FCVTZS_ws: mnemonic = "fcvtzs"; form = form_rf; break;
-    case FCVTPU_xd:
-    case FCVTPU_ws:
-    case FCVTPU_wd:
-    case FCVTPU_xs: mnemonic = "fcvtpu"; form = form_rf; break;
-    case FCVTPS_xd:
-    case FCVTPS_wd:
-    case FCVTPS_xs:
-    case FCVTPS_ws: mnemonic = "fcvtps"; form = form_rf; break;
-    case SCVTF_sw:
-    case SCVTF_sx:
-    case SCVTF_dw:
-    case SCVTF_dx: mnemonic = "scvtf"; form = form_fr; break;
-    case UCVTF_sw:
-    case UCVTF_sx:
-    case UCVTF_dw:
-    case UCVTF_dx: mnemonic = "ucvtf"; form = form_fr; break;
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitFPFixedPointConvert(const Instruction* instr) {
-  const char *mnemonic = "";
-  const char *form = "'Rd, 'Fn, 'IFPFBits";
-  const char *form_fr = "'Fd, 'Rn, 'IFPFBits";
-
-  switch (instr->Mask(FPFixedPointConvertMask)) {
-    case FCVTZS_ws_fixed:
-    case FCVTZS_xs_fixed:
-    case FCVTZS_wd_fixed:
-    case FCVTZS_xd_fixed: mnemonic = "fcvtzs"; break;
-    case FCVTZU_ws_fixed:
-    case FCVTZU_xs_fixed:
-    case FCVTZU_wd_fixed:
-    case FCVTZU_xd_fixed: mnemonic = "fcvtzu"; break;
-    case SCVTF_sw_fixed:
-    case SCVTF_sx_fixed:
-    case SCVTF_dw_fixed:
-    case SCVTF_dx_fixed: mnemonic = "scvtf"; form = form_fr; break;
-    case UCVTF_sw_fixed:
-    case UCVTF_sx_fixed:
-    case UCVTF_dw_fixed:
-    case UCVTF_dx_fixed: mnemonic = "ucvtf"; form = form_fr; break;
-    default: VIXL_UNREACHABLE();
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitSystem(const Instruction* instr) {
-  // Some system instructions hijack their Op and Cp fields to represent a
-  // range of immediates instead of indicating a different instruction. This
-  // makes the decoding tricky.
-  const char *mnemonic = "unimplemented";
-  const char *form = "(System)";
-
-  if (instr->Mask(SystemExclusiveMonitorFMask) == SystemExclusiveMonitorFixed) {
-    switch (instr->Mask(SystemExclusiveMonitorMask)) {
-      case CLREX: {
-        mnemonic = "clrex";
-        form = (instr->CRm() == 0xf) ? NULL : "'IX";
-        break;
-      }
-    }
-  } else if (instr->Mask(SystemSysRegFMask) == SystemSysRegFixed) {
-    switch (instr->Mask(SystemSysRegMask)) {
-      case MRS: {
-        mnemonic = "mrs";
-        switch (instr->ImmSystemRegister()) {
-          case NZCV: form = "'Xt, nzcv"; break;
-          case FPCR: form = "'Xt, fpcr"; break;
-          default: form = "'Xt, (unknown)"; break;
-        }
-        break;
-      }
-      case MSR: {
-        mnemonic = "msr";
-        switch (instr->ImmSystemRegister()) {
-          case NZCV: form = "nzcv, 'Xt"; break;
-          case FPCR: form = "fpcr, 'Xt"; break;
-          default: form = "(unknown), 'Xt"; break;
-        }
-        break;
-      }
-    }
-  } else if (instr->Mask(SystemHintFMask) == SystemHintFixed) {
-    switch (instr->ImmHint()) {
-      case NOP: {
-        mnemonic = "nop";
-        form = NULL;
-        break;
-      }
-    }
-  } else if (instr->Mask(MemBarrierFMask) == MemBarrierFixed) {
-    switch (instr->Mask(MemBarrierMask)) {
-      case DMB: {
-        mnemonic = "dmb";
-        form = "'M";
-        break;
-      }
-      case DSB: {
-        mnemonic = "dsb";
-        form = "'M";
-        break;
-      }
-      case ISB: {
-        mnemonic = "isb";
-        form = NULL;
-        break;
-      }
-    }
-  } else if (instr->Mask(SystemSysFMask) == SystemSysFixed) {
-    switch (instr->SysOp()) {
-      case IVAU:
-        mnemonic = "ic";
-        form = "ivau, 'Xt";
-        break;
-      case CVAC:
-        mnemonic = "dc";
-        form = "cvac, 'Xt";
-        break;
-      case CVAU:
-        mnemonic = "dc";
-        form = "cvau, 'Xt";
-        break;
-      case CIVAC:
-        mnemonic = "dc";
-        form = "civac, 'Xt";
-        break;
-      case ZVA:
-        mnemonic = "dc";
-        form = "zva, 'Xt";
-        break;
-      default:
-        mnemonic = "sys";
-        if (instr->Rt() == 31) {
-          form = "'G1, 'Kn, 'Km, 'G2";
-        } else {
-          form = "'G1, 'Kn, 'Km, 'G2, 'Xt";
-        }
-        break;
-    }
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitException(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'IDebug";
-
-  switch (instr->Mask(ExceptionMask)) {
-    case HLT: mnemonic = "hlt"; break;
-    case BRK: mnemonic = "brk"; break;
-    case SVC: mnemonic = "svc"; break;
-    case HVC: mnemonic = "hvc"; break;
-    case SMC: mnemonic = "smc"; break;
-    case DCPS1: mnemonic = "dcps1"; form = "{'IDebug}"; break;
-    case DCPS2: mnemonic = "dcps2"; form = "{'IDebug}"; break;
-    case DCPS3: mnemonic = "dcps3"; form = "{'IDebug}"; break;
-    default: form = "(Exception)";
-  }
-  Format(instr, mnemonic, form);
-}
-
-
-void Disassembler::VisitCrypto2RegSHA(const Instruction* instr) {
-  VisitUnimplemented(instr);
-}
-
-
-void Disassembler::VisitCrypto3RegSHA(const Instruction* instr) {
-  VisitUnimplemented(instr);
-}
-
-
-void Disassembler::VisitCryptoAES(const Instruction* instr) {
-  VisitUnimplemented(instr);
-}
-
-
-void Disassembler::VisitNEON2RegMisc(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Vd.%s, 'Vn.%s";
-  const char *form_cmp_zero = "'Vd.%s, 'Vn.%s, #0";
-  const char *form_fcmp_zero = "'Vd.%s, 'Vn.%s, #0.0";
-  NEONFormatDecoder nfd(instr);
-
-  static const NEONFormatMap map_lp_ta = {
-    {23, 22, 30}, {NF_4H, NF_8H, NF_2S, NF_4S, NF_1D, NF_2D}
-  };
-
-  static const NEONFormatMap map_cvt_ta = {
-    {22}, {NF_4S, NF_2D}
-  };
-
-  static const NEONFormatMap map_cvt_tb = {
-    {22, 30}, {NF_4H, NF_8H, NF_2S, NF_4S}
-  };
-
-  if (instr->Mask(NEON2RegMiscOpcode) <= NEON_NEG_opcode) {
-    // These instructions all use a two bit size field, except NOT and RBIT,
-    // which use the field to encode the operation.
-    switch (instr->Mask(NEON2RegMiscMask)) {
-      case NEON_REV64: mnemonic = "rev64"; break;
-      case NEON_REV32: mnemonic = "rev32"; break;
-      case NEON_REV16: mnemonic = "rev16"; break;
-      case NEON_SADDLP:
-        mnemonic = "saddlp";
-        nfd.SetFormatMap(0, &map_lp_ta);
-        break;
-      case NEON_UADDLP:
-        mnemonic = "uaddlp";
-        nfd.SetFormatMap(0, &map_lp_ta);
-        break;
-      case NEON_SUQADD: mnemonic = "suqadd"; break;
-      case NEON_USQADD: mnemonic = "usqadd"; break;
-      case NEON_CLS: mnemonic = "cls"; break;
-      case NEON_CLZ: mnemonic = "clz"; break;
-      case NEON_CNT: mnemonic = "cnt"; break;
-      case NEON_SADALP:
-        mnemonic = "sadalp";
-        nfd.SetFormatMap(0, &map_lp_ta);
-        break;
-      case NEON_UADALP:
-        mnemonic = "uadalp";
-        nfd.SetFormatMap(0, &map_lp_ta);
-        break;
-      case NEON_SQABS: mnemonic = "sqabs"; break;
-      case NEON_SQNEG: mnemonic = "sqneg"; break;
-      case NEON_CMGT_zero: mnemonic = "cmgt"; form = form_cmp_zero; break;
-      case NEON_CMGE_zero: mnemonic = "cmge"; form = form_cmp_zero; break;
-      case NEON_CMEQ_zero: mnemonic = "cmeq"; form = form_cmp_zero; break;
-      case NEON_CMLE_zero: mnemonic = "cmle"; form = form_cmp_zero; break;
-      case NEON_CMLT_zero: mnemonic = "cmlt"; form = form_cmp_zero; break;
-      case NEON_ABS: mnemonic = "abs"; break;
-      case NEON_NEG: mnemonic = "neg"; break;
-      case NEON_RBIT_NOT:
-        switch (instr->FPType()) {
-          case 0: mnemonic = "mvn"; break;
-          case 1: mnemonic = "rbit"; break;
-          default: form = "(NEON2RegMisc)";
-        }
-        nfd.SetFormatMaps(nfd.LogicalFormatMap());
-        break;
-    }
-  } else {
-    // These instructions all use a one bit size field, except XTN, SQXTUN,
-    // SHLL, SQXTN and UQXTN, which use a two bit size field.
-    nfd.SetFormatMaps(nfd.FPFormatMap());
-    switch (instr->Mask(NEON2RegMiscFPMask)) {
-      case NEON_FABS: mnemonic = "fabs"; break;
-      case NEON_FNEG: mnemonic = "fneg"; break;
-      case NEON_FCVTN:
-        mnemonic = instr->Mask(NEON_Q) ? "fcvtn2" : "fcvtn";
-        nfd.SetFormatMap(0, &map_cvt_tb);
-        nfd.SetFormatMap(1, &map_cvt_ta);
-        break;
-      case NEON_FCVTXN:
-        mnemonic = instr->Mask(NEON_Q) ? "fcvtxn2" : "fcvtxn";
-        nfd.SetFormatMap(0, &map_cvt_tb);
-        nfd.SetFormatMap(1, &map_cvt_ta);
-        break;
-      case NEON_FCVTL:
-        mnemonic = instr->Mask(NEON_Q) ? "fcvtl2" : "fcvtl";
-        nfd.SetFormatMap(0, &map_cvt_ta);
-        nfd.SetFormatMap(1, &map_cvt_tb);
-        break;
-      case NEON_FRINTN: mnemonic = "frintn"; break;
-      case NEON_FRINTA: mnemonic = "frinta"; break;
-      case NEON_FRINTP: mnemonic = "frintp"; break;
-      case NEON_FRINTM: mnemonic = "frintm"; break;
-      case NEON_FRINTX: mnemonic = "frintx"; break;
-      case NEON_FRINTZ: mnemonic = "frintz"; break;
-      case NEON_FRINTI: mnemonic = "frinti"; break;
-      case NEON_FCVTNS: mnemonic = "fcvtns"; break;
-      case NEON_FCVTNU: mnemonic = "fcvtnu"; break;
-      case NEON_FCVTPS: mnemonic = "fcvtps"; break;
-      case NEON_FCVTPU: mnemonic = "fcvtpu"; break;
-      case NEON_FCVTMS: mnemonic = "fcvtms"; break;
-      case NEON_FCVTMU: mnemonic = "fcvtmu"; break;
-      case NEON_FCVTZS: mnemonic = "fcvtzs"; break;
-      case NEON_FCVTZU: mnemonic = "fcvtzu"; break;
-      case NEON_FCVTAS: mnemonic = "fcvtas"; break;
-      case NEON_FCVTAU: mnemonic = "fcvtau"; break;
-      case NEON_FSQRT: mnemonic = "fsqrt"; break;
-      case NEON_SCVTF: mnemonic = "scvtf"; break;
-      case NEON_UCVTF: mnemonic = "ucvtf"; break;
-      case NEON_URSQRTE: mnemonic = "ursqrte"; break;
-      case NEON_URECPE: mnemonic = "urecpe"; break;
-      case NEON_FRSQRTE: mnemonic = "frsqrte"; break;
-      case NEON_FRECPE: mnemonic = "frecpe"; break;
-      case NEON_FCMGT_zero: mnemonic = "fcmgt"; form = form_fcmp_zero; break;
-      case NEON_FCMGE_zero: mnemonic = "fcmge"; form = form_fcmp_zero; break;
-      case NEON_FCMEQ_zero: mnemonic = "fcmeq"; form = form_fcmp_zero; break;
-      case NEON_FCMLE_zero: mnemonic = "fcmle"; form = form_fcmp_zero; break;
-      case NEON_FCMLT_zero: mnemonic = "fcmlt"; form = form_fcmp_zero; break;
-      default:
-        if ((NEON_XTN_opcode <= instr->Mask(NEON2RegMiscOpcode)) &&
-            (instr->Mask(NEON2RegMiscOpcode) <= NEON_UQXTN_opcode)) {
-          nfd.SetFormatMap(0, nfd.IntegerFormatMap());
-          nfd.SetFormatMap(1, nfd.LongIntegerFormatMap());
-
-          switch (instr->Mask(NEON2RegMiscMask)) {
-            case NEON_XTN: mnemonic = "xtn"; break;
-            case NEON_SQXTN: mnemonic = "sqxtn"; break;
-            case NEON_UQXTN: mnemonic = "uqxtn"; break;
-            case NEON_SQXTUN: mnemonic = "sqxtun"; break;
-            case NEON_SHLL:
-              mnemonic = "shll";
-              nfd.SetFormatMap(0, nfd.LongIntegerFormatMap());
-              nfd.SetFormatMap(1, nfd.IntegerFormatMap());
-              switch (instr->NEONSize()) {
-                case 0: form = "'Vd.%s, 'Vn.%s, #8"; break;
-                case 1: form = "'Vd.%s, 'Vn.%s, #16"; break;
-                case 2: form = "'Vd.%s, 'Vn.%s, #32"; break;
-                default: form = "(NEON2RegMisc)";
-              }
-          }
-          Format(instr, nfd.Mnemonic(mnemonic), nfd.Substitute(form));
-          return;
-        } else {
-          form = "(NEON2RegMisc)";
-        }
-    }
-  }
-  Format(instr, mnemonic, nfd.Substitute(form));
-}
-
-
-void Disassembler::VisitNEON3Same(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Vd.%s, 'Vn.%s, 'Vm.%s";
-  NEONFormatDecoder nfd(instr);
-
-  if (instr->Mask(NEON3SameLogicalFMask) == NEON3SameLogicalFixed) {
-    switch (instr->Mask(NEON3SameLogicalMask)) {
-      case NEON_AND: mnemonic = "and"; break;
-      case NEON_ORR:
-        mnemonic = "orr";
-        if (instr->Rm() == instr->Rn()) {
-          mnemonic = "mov";
-          form = "'Vd.%s, 'Vn.%s";
-        }
-        break;
-      case NEON_ORN: mnemonic = "orn"; break;
-      case NEON_EOR: mnemonic = "eor"; break;
-      case NEON_BIC: mnemonic = "bic"; break;
-      case NEON_BIF: mnemonic = "bif"; break;
-      case NEON_BIT: mnemonic = "bit"; break;
-      case NEON_BSL: mnemonic = "bsl"; break;
-      default: form = "(NEON3Same)";
-    }
-    nfd.SetFormatMaps(nfd.LogicalFormatMap());
-  } else {
-    static const char *mnemonics[] = {
-        "shadd", "uhadd", "shadd", "uhadd",
-        "sqadd", "uqadd", "sqadd", "uqadd",
-        "srhadd", "urhadd", "srhadd", "urhadd",
-        NULL, NULL, NULL, NULL,  // Handled by logical cases above.
-        "shsub", "uhsub", "shsub", "uhsub",
-        "sqsub", "uqsub", "sqsub", "uqsub",
-        "cmgt", "cmhi", "cmgt", "cmhi",
-        "cmge", "cmhs", "cmge", "cmhs",
-        "sshl", "ushl", "sshl", "ushl",
-        "sqshl", "uqshl", "sqshl", "uqshl",
-        "srshl", "urshl", "srshl", "urshl",
-        "sqrshl", "uqrshl", "sqrshl", "uqrshl",
-        "smax", "umax", "smax", "umax",
-        "smin", "umin", "smin", "umin",
-        "sabd", "uabd", "sabd", "uabd",
-        "saba", "uaba", "saba", "uaba",
-        "add", "sub", "add", "sub",
-        "cmtst", "cmeq", "cmtst", "cmeq",
-        "mla", "mls", "mla", "mls",
-        "mul", "pmul", "mul", "pmul",
-        "smaxp", "umaxp", "smaxp", "umaxp",
-        "sminp", "uminp", "sminp", "uminp",
-        "sqdmulh", "sqrdmulh", "sqdmulh", "sqrdmulh",
-        "addp", "unallocated", "addp", "unallocated",
-        "fmaxnm", "fmaxnmp", "fminnm", "fminnmp",
-        "fmla", "unallocated", "fmls", "unallocated",
-        "fadd", "faddp", "fsub", "fabd",
-        "fmulx", "fmul", "unallocated", "unallocated",
-        "fcmeq", "fcmge", "unallocated", "fcmgt",
-        "unallocated", "facge", "unallocated", "facgt",
-        "fmax", "fmaxp", "fmin", "fminp",
-        "frecps", "fdiv", "frsqrts", "unallocated"};
-
-    // Operation is determined by the opcode bits (15-11), the top bit of
-    // size (23) and the U bit (29).
-    unsigned index = (instr->Bits(15, 11) << 2) | (instr->Bit(23) << 1) |
-                     instr->Bit(29);
-    VIXL_ASSERT(index < (sizeof(mnemonics) / sizeof(mnemonics[0])));
-    mnemonic = mnemonics[index];
-    // Assert that index is not one of the previously handled logical
-    // instructions.
-    VIXL_ASSERT(mnemonic != NULL);
-
-    if (instr->Mask(NEON3SameFPFMask) == NEON3SameFPFixed) {
-      nfd.SetFormatMaps(nfd.FPFormatMap());
-    }
-  }
-  Format(instr, mnemonic, nfd.Substitute(form));
-}
-
-
-void Disassembler::VisitNEON3Different(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Vd.%s, 'Vn.%s, 'Vm.%s";
-
-  NEONFormatDecoder nfd(instr);
-  nfd.SetFormatMap(0, nfd.LongIntegerFormatMap());
-
-  // Ignore the Q bit. Appending a "2" suffix is handled later.
-  switch (instr->Mask(NEON3DifferentMask) & ~NEON_Q) {
-    case NEON_PMULL: mnemonic = "pmull"; break;
-    case NEON_SABAL: mnemonic = "sabal"; break;
-    case NEON_SABDL: mnemonic = "sabdl"; break;
-    case NEON_SADDL: mnemonic = "saddl"; break;
-    case NEON_SMLAL: mnemonic = "smlal"; break;
-    case NEON_SMLSL: mnemonic = "smlsl"; break;
-    case NEON_SMULL: mnemonic = "smull"; break;
-    case NEON_SSUBL: mnemonic = "ssubl"; break;
-    case NEON_SQDMLAL: mnemonic = "sqdmlal"; break;
-    case NEON_SQDMLSL: mnemonic = "sqdmlsl"; break;
-    case NEON_SQDMULL: mnemonic = "sqdmull"; break;
-    case NEON_UABAL: mnemonic = "uabal"; break;
-    case NEON_UABDL: mnemonic = "uabdl"; break;
-    case NEON_UADDL: mnemonic = "uaddl"; break;
-    case NEON_UMLAL: mnemonic = "umlal"; break;
-    case NEON_UMLSL: mnemonic = "umlsl"; break;
-    case NEON_UMULL: mnemonic = "umull"; break;
-    case NEON_USUBL: mnemonic = "usubl"; break;
-    case NEON_SADDW:
-      mnemonic = "saddw";
-      nfd.SetFormatMap(1, nfd.LongIntegerFormatMap());
-      break;
-    case NEON_SSUBW:
-      mnemonic = "ssubw";
-      nfd.SetFormatMap(1, nfd.LongIntegerFormatMap());
-      break;
-    case NEON_UADDW:
-      mnemonic = "uaddw";
-      nfd.SetFormatMap(1, nfd.LongIntegerFormatMap());
-      break;
-    case NEON_USUBW:
-      mnemonic = "usubw";
-      nfd.SetFormatMap(1, nfd.LongIntegerFormatMap());
-      break;
-    case NEON_ADDHN:
-      mnemonic = "addhn";
-      nfd.SetFormatMaps(nfd.LongIntegerFormatMap());
-      nfd.SetFormatMap(0, nfd.IntegerFormatMap());
-      break;
-    case NEON_RADDHN:
-      mnemonic = "raddhn";
-      nfd.SetFormatMaps(nfd.LongIntegerFormatMap());
-      nfd.SetFormatMap(0, nfd.IntegerFormatMap());
-      break;
-    case NEON_RSUBHN:
-      mnemonic = "rsubhn";
-      nfd.SetFormatMaps(nfd.LongIntegerFormatMap());
-      nfd.SetFormatMap(0, nfd.IntegerFormatMap());
-      break;
-    case NEON_SUBHN:
-      mnemonic = "subhn";
-      nfd.SetFormatMaps(nfd.LongIntegerFormatMap());
-      nfd.SetFormatMap(0, nfd.IntegerFormatMap());
-      break;
-    default: form = "(NEON3Different)";
-  }
-  Format(instr, nfd.Mnemonic(mnemonic), nfd.Substitute(form));
-}
-
-
-void Disassembler::VisitNEONAcrossLanes(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "%sd, 'Vn.%s";
-
-  NEONFormatDecoder nfd(instr, NEONFormatDecoder::ScalarFormatMap(),
-                        NEONFormatDecoder::IntegerFormatMap());
-
-  if (instr->Mask(NEONAcrossLanesFPFMask) == NEONAcrossLanesFPFixed) {
-    nfd.SetFormatMap(0, nfd.FPScalarFormatMap());
-    nfd.SetFormatMap(1, nfd.FPFormatMap());
-    switch (instr->Mask(NEONAcrossLanesFPMask)) {
-      case NEON_FMAXV: mnemonic = "fmaxv"; break;
-      case NEON_FMINV: mnemonic = "fminv"; break;
-      case NEON_FMAXNMV: mnemonic = "fmaxnmv"; break;
-      case NEON_FMINNMV: mnemonic = "fminnmv"; break;
-      default: form = "(NEONAcrossLanes)"; break;
-    }
-  } else if (instr->Mask(NEONAcrossLanesFMask) == NEONAcrossLanesFixed) {
-    switch (instr->Mask(NEONAcrossLanesMask)) {
-      case NEON_ADDV: mnemonic = "addv"; break;
-      case NEON_SMAXV: mnemonic = "smaxv"; break;
-      case NEON_SMINV: mnemonic = "sminv"; break;
-      case NEON_UMAXV: mnemonic = "umaxv"; break;
-      case NEON_UMINV: mnemonic = "uminv"; break;
-      case NEON_SADDLV:
-        mnemonic = "saddlv";
-        nfd.SetFormatMap(0, nfd.LongScalarFormatMap());
-        break;
-      case NEON_UADDLV:
-        mnemonic = "uaddlv";
-        nfd.SetFormatMap(0, nfd.LongScalarFormatMap());
-        break;
-      default: form = "(NEONAcrossLanes)"; break;
-    }
-  }
-  Format(instr, mnemonic, nfd.Substitute(form,
-      NEONFormatDecoder::kPlaceholder, NEONFormatDecoder::kFormat));
-}
-
-
-void Disassembler::VisitNEONByIndexedElement(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  bool l_instr = false;
-  bool fp_instr = false;
-
-  const char *form = "'Vd.%s, 'Vn.%s, 'Ve.%s['IVByElemIndex]";
-
-  static const NEONFormatMap map_ta = {
-    {23, 22}, {NF_UNDEF, NF_4S, NF_2D}
-  };
-  NEONFormatDecoder nfd(instr, &map_ta,
-                        NEONFormatDecoder::IntegerFormatMap(),
-                        NEONFormatDecoder::ScalarFormatMap());
-
-  switch (instr->Mask(NEONByIndexedElementMask)) {
-    case NEON_SMULL_byelement: mnemonic = "smull"; l_instr = true; break;
-    case NEON_UMULL_byelement: mnemonic = "umull"; l_instr = true; break;
-    case NEON_SMLAL_byelement: mnemonic = "smlal"; l_instr = true; break;
-    case NEON_UMLAL_byelement: mnemonic = "umlal"; l_instr = true; break;
-    case NEON_SMLSL_byelement: mnemonic = "smlsl"; l_instr = true; break;
-    case NEON_UMLSL_byelement: mnemonic = "umlsl"; l_instr = true; break;
-    case NEON_SQDMULL_byelement: mnemonic = "sqdmull"; l_instr = true; break;
-    case NEON_SQDMLAL_byelement: mnemonic = "sqdmlal"; l_instr = true; break;
-    case NEON_SQDMLSL_byelement: mnemonic = "sqdmlsl"; l_instr = true; break;
-    case NEON_MUL_byelement: mnemonic = "mul"; break;
-    case NEON_MLA_byelement: mnemonic = "mla"; break;
-    case NEON_MLS_byelement: mnemonic = "mls"; break;
-    case NEON_SQDMULH_byelement: mnemonic = "sqdmulh"; break;
-    case NEON_SQRDMULH_byelement: mnemonic = "sqrdmulh"; break;
-    default:
-      switch (instr->Mask(NEONByIndexedElementFPMask)) {
-        case NEON_FMUL_byelement: mnemonic = "fmul"; fp_instr = true; break;
-        case NEON_FMLA_byelement: mnemonic = "fmla"; fp_instr = true; break;
-        case NEON_FMLS_byelement: mnemonic = "fmls"; fp_instr = true; break;
-        case NEON_FMULX_byelement: mnemonic = "fmulx"; fp_instr = true; break;
-      }
-  }
-
-  if (l_instr) {
-    Format(instr,
nfd.Mnemonic(mnemonic), nfd.Substitute(form)); - } else if (fp_instr) { - nfd.SetFormatMap(0, nfd.FPFormatMap()); - Format(instr, mnemonic, nfd.Substitute(form)); - } else { - nfd.SetFormatMap(0, nfd.IntegerFormatMap()); - Format(instr, mnemonic, nfd.Substitute(form)); - } -} - - -void Disassembler::VisitNEONCopy(const Instruction* instr) { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "(NEONCopy)"; - - NEONFormatDecoder nfd(instr, NEONFormatDecoder::TriangularFormatMap(), - NEONFormatDecoder::TriangularScalarFormatMa= p()); - - if (instr->Mask(NEONCopyInsElementMask) =3D=3D NEON_INS_ELEMENT) { - mnemonic =3D "mov"; - nfd.SetFormatMap(0, nfd.TriangularScalarFormatMap()); - form =3D "'Vd.%s['IVInsIndex1], 'Vn.%s['IVInsIndex2]"; - } else if (instr->Mask(NEONCopyInsGeneralMask) =3D=3D NEON_INS_GENERAL) { - mnemonic =3D "mov"; - nfd.SetFormatMap(0, nfd.TriangularScalarFormatMap()); - if (nfd.GetVectorFormat() =3D=3D kFormatD) { - form =3D "'Vd.%s['IVInsIndex1], 'Xn"; - } else { - form =3D "'Vd.%s['IVInsIndex1], 'Wn"; - } - } else if (instr->Mask(NEONCopyUmovMask) =3D=3D NEON_UMOV) { - if (instr->Mask(NEON_Q) || ((instr->ImmNEON5() & 7) =3D=3D 4)) { - mnemonic =3D "mov"; - } else { - mnemonic =3D "umov"; - } - nfd.SetFormatMap(0, nfd.TriangularScalarFormatMap()); - if (nfd.GetVectorFormat() =3D=3D kFormatD) { - form =3D "'Xd, 'Vn.%s['IVInsIndex1]"; - } else { - form =3D "'Wd, 'Vn.%s['IVInsIndex1]"; - } - } else if (instr->Mask(NEONCopySmovMask) =3D=3D NEON_SMOV) { - mnemonic =3D "smov"; - nfd.SetFormatMap(0, nfd.TriangularScalarFormatMap()); - form =3D "'Rdq, 'Vn.%s['IVInsIndex1]"; - } else if (instr->Mask(NEONCopyDupElementMask) =3D=3D NEON_DUP_ELEMENT) { - mnemonic =3D "dup"; - form =3D "'Vd.%s, 'Vn.%s['IVInsIndex1]"; - } else if (instr->Mask(NEONCopyDupGeneralMask) =3D=3D NEON_DUP_GENERAL) { - mnemonic =3D "dup"; - if (nfd.GetVectorFormat() =3D=3D kFormat2D) { - form =3D "'Vd.%s, 'Xn"; - } else { - form =3D "'Vd.%s, 'Wn"; - } - } - Format(instr, 
mnemonic, nfd.Substitute(form)); -} - - -void Disassembler::VisitNEONExtract(const Instruction* instr) { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "(NEONExtract)"; - NEONFormatDecoder nfd(instr, NEONFormatDecoder::LogicalFormatMap()); - if (instr->Mask(NEONExtractMask) =3D=3D NEON_EXT) { - mnemonic =3D "ext"; - form =3D "'Vd.%s, 'Vn.%s, 'Vm.%s, 'IVExtract"; - } - Format(instr, mnemonic, nfd.Substitute(form)); -} - - -void Disassembler::VisitNEONLoadStoreMultiStruct(const Instruction* instr)= { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "(NEONLoadStoreMultiStruct)"; - const char *form_1v =3D "{'Vt.%1$s}, ['Xns]"; - const char *form_2v =3D "{'Vt.%1$s, 'Vt2.%1$s}, ['Xns]"; - const char *form_3v =3D "{'Vt.%1$s, 'Vt2.%1$s, 'Vt3.%1$s}, ['Xns]"; - const char *form_4v =3D "{'Vt.%1$s, 'Vt2.%1$s, 'Vt3.%1$s, 'Vt4.%1$s}, ['= Xns]"; - NEONFormatDecoder nfd(instr, NEONFormatDecoder::LoadStoreFormatMap()); - - switch (instr->Mask(NEONLoadStoreMultiStructMask)) { - case NEON_LD1_1v: mnemonic =3D "ld1"; form =3D form_1v; break; - case NEON_LD1_2v: mnemonic =3D "ld1"; form =3D form_2v; break; - case NEON_LD1_3v: mnemonic =3D "ld1"; form =3D form_3v; break; - case NEON_LD1_4v: mnemonic =3D "ld1"; form =3D form_4v; break; - case NEON_LD2: mnemonic =3D "ld2"; form =3D form_2v; break; - case NEON_LD3: mnemonic =3D "ld3"; form =3D form_3v; break; - case NEON_LD4: mnemonic =3D "ld4"; form =3D form_4v; break; - case NEON_ST1_1v: mnemonic =3D "st1"; form =3D form_1v; break; - case NEON_ST1_2v: mnemonic =3D "st1"; form =3D form_2v; break; - case NEON_ST1_3v: mnemonic =3D "st1"; form =3D form_3v; break; - case NEON_ST1_4v: mnemonic =3D "st1"; form =3D form_4v; break; - case NEON_ST2: mnemonic =3D "st2"; form =3D form_2v; break; - case NEON_ST3: mnemonic =3D "st3"; form =3D form_3v; break; - case NEON_ST4: mnemonic =3D "st4"; form =3D form_4v; break; - default: break; - } - - Format(instr, mnemonic, nfd.Substitute(form)); -} - - -void 
Disassembler::VisitNEONLoadStoreMultiStructPostIndex( - const Instruction* instr) { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "(NEONLoadStoreMultiStructPostIndex)"; - const char *form_1v =3D "{'Vt.%1$s}, ['Xns], 'Xmr1"; - const char *form_2v =3D "{'Vt.%1$s, 'Vt2.%1$s}, ['Xns], 'Xmr2"; - const char *form_3v =3D "{'Vt.%1$s, 'Vt2.%1$s, 'Vt3.%1$s}, ['Xns], 'Xmr3= "; - const char *form_4v =3D - "{'Vt.%1$s, 'Vt2.%1$s, 'Vt3.%1$s, 'Vt4.%1$s}, ['Xns], 'Xmr4"; - NEONFormatDecoder nfd(instr, NEONFormatDecoder::LoadStoreFormatMap()); - - switch (instr->Mask(NEONLoadStoreMultiStructPostIndexMask)) { - case NEON_LD1_1v_post: mnemonic =3D "ld1"; form =3D form_1v; break; - case NEON_LD1_2v_post: mnemonic =3D "ld1"; form =3D form_2v; break; - case NEON_LD1_3v_post: mnemonic =3D "ld1"; form =3D form_3v; break; - case NEON_LD1_4v_post: mnemonic =3D "ld1"; form =3D form_4v; break; - case NEON_LD2_post: mnemonic =3D "ld2"; form =3D form_2v; break; - case NEON_LD3_post: mnemonic =3D "ld3"; form =3D form_3v; break; - case NEON_LD4_post: mnemonic =3D "ld4"; form =3D form_4v; break; - case NEON_ST1_1v_post: mnemonic =3D "st1"; form =3D form_1v; break; - case NEON_ST1_2v_post: mnemonic =3D "st1"; form =3D form_2v; break; - case NEON_ST1_3v_post: mnemonic =3D "st1"; form =3D form_3v; break; - case NEON_ST1_4v_post: mnemonic =3D "st1"; form =3D form_4v; break; - case NEON_ST2_post: mnemonic =3D "st2"; form =3D form_2v; break; - case NEON_ST3_post: mnemonic =3D "st3"; form =3D form_3v; break; - case NEON_ST4_post: mnemonic =3D "st4"; form =3D form_4v; break; - default: break; - } - - Format(instr, mnemonic, nfd.Substitute(form)); -} - - -void Disassembler::VisitNEONLoadStoreSingleStruct(const Instruction* instr= ) { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "(NEONLoadStoreSingleStruct)"; - - const char *form_1b =3D "{'Vt.b}['IVLSLane0], ['Xns]"; - const char *form_1h =3D "{'Vt.h}['IVLSLane1], ['Xns]"; - const char *form_1s =3D 
"{'Vt.s}['IVLSLane2], ['Xns]"; - const char *form_1d =3D "{'Vt.d}['IVLSLane3], ['Xns]"; - NEONFormatDecoder nfd(instr, NEONFormatDecoder::LoadStoreFormatMap()); - - switch (instr->Mask(NEONLoadStoreSingleStructMask)) { - case NEON_LD1_b: mnemonic =3D "ld1"; form =3D form_1b; break; - case NEON_LD1_h: mnemonic =3D "ld1"; form =3D form_1h; break; - case NEON_LD1_s: - mnemonic =3D "ld1"; - VIXL_STATIC_ASSERT((NEON_LD1_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_LD1_d); - form =3D ((instr->NEONLSSize() & 1) =3D=3D 0) ? form_1s : form_1d; - break; - case NEON_ST1_b: mnemonic =3D "st1"; form =3D form_1b; break; - case NEON_ST1_h: mnemonic =3D "st1"; form =3D form_1h; break; - case NEON_ST1_s: - mnemonic =3D "st1"; - VIXL_STATIC_ASSERT((NEON_ST1_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_ST1_d); - form =3D ((instr->NEONLSSize() & 1) =3D=3D 0) ? form_1s : form_1d; - break; - case NEON_LD1R: - mnemonic =3D "ld1r"; - form =3D "{'Vt.%s}, ['Xns]"; - break; - case NEON_LD2_b: - case NEON_ST2_b: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld2" : "st2"; - form =3D "{'Vt.b, 'Vt2.b}['IVLSLane0], ['Xns]"; - break; - case NEON_LD2_h: - case NEON_ST2_h: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld2" : "st2"; - form =3D "{'Vt.h, 'Vt2.h}['IVLSLane1], ['Xns]"; - break; - case NEON_LD2_s: - case NEON_ST2_s: - VIXL_STATIC_ASSERT((NEON_ST2_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_ST2_d); - VIXL_STATIC_ASSERT((NEON_LD2_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_LD2_d); - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld2" : "st2"; - if ((instr->NEONLSSize() & 1) =3D=3D 0) - form =3D "{'Vt.s, 'Vt2.s}['IVLSLane2], ['Xns]"; - else - form =3D "{'Vt.d, 'Vt2.d}['IVLSLane3], ['Xns]"; - break; - case NEON_LD2R: - mnemonic =3D "ld2r"; - form =3D "{'Vt.%s, 'Vt2.%s}, ['Xns]"; - break; - case NEON_LD3_b: - case NEON_ST3_b: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? 
"ld3" : "st3"; - form =3D "{'Vt.b, 'Vt2.b, 'Vt3.b}['IVLSLane0], ['Xns]"; - break; - case NEON_LD3_h: - case NEON_ST3_h: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld3" : "st3"; - form =3D "{'Vt.h, 'Vt2.h, 'Vt3.h}['IVLSLane1], ['Xns]"; - break; - case NEON_LD3_s: - case NEON_ST3_s: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld3" : "st3"; - if ((instr->NEONLSSize() & 1) =3D=3D 0) - form =3D "{'Vt.s, 'Vt2.s, 'Vt3.s}['IVLSLane2], ['Xns]"; - else - form =3D "{'Vt.d, 'Vt2.d, 'Vt3.d}['IVLSLane3], ['Xns]"; - break; - case NEON_LD3R: - mnemonic =3D "ld3r"; - form =3D "{'Vt.%s, 'Vt2.%s, 'Vt3.%s}, ['Xns]"; - break; - case NEON_LD4_b: - case NEON_ST4_b: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld4" : "st4"; - form =3D "{'Vt.b, 'Vt2.b, 'Vt3.b, 'Vt4.b}['IVLSLane0], ['Xns]"; - break; - case NEON_LD4_h: - case NEON_ST4_h: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld4" : "st4"; - form =3D "{'Vt.h, 'Vt2.h, 'Vt3.h, 'Vt4.h}['IVLSLane1], ['Xns]"; - break; - case NEON_LD4_s: - case NEON_ST4_s: - VIXL_STATIC_ASSERT((NEON_LD4_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_LD4_d); - VIXL_STATIC_ASSERT((NEON_ST4_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_ST4_d); - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? 
"ld4" : "st4"; - if ((instr->NEONLSSize() & 1) =3D=3D 0) - form =3D "{'Vt.s, 'Vt2.s, 'Vt3.s, 'Vt4.s}['IVLSLane2], ['Xns]"; - else - form =3D "{'Vt.d, 'Vt2.d, 'Vt3.d, 'Vt4.d}['IVLSLane3], ['Xns]"; - break; - case NEON_LD4R: - mnemonic =3D "ld4r"; - form =3D "{'Vt.%1$s, 'Vt2.%1$s, 'Vt3.%1$s, 'Vt4.%1$s}, ['Xns]"; - break; - default: break; - } - - Format(instr, mnemonic, nfd.Substitute(form)); -} - - -void Disassembler::VisitNEONLoadStoreSingleStructPostIndex( - const Instruction* instr) { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "(NEONLoadStoreSingleStructPostIndex)"; - - const char *form_1b =3D "{'Vt.b}['IVLSLane0], ['Xns], 'Xmb1"; - const char *form_1h =3D "{'Vt.h}['IVLSLane1], ['Xns], 'Xmb2"; - const char *form_1s =3D "{'Vt.s}['IVLSLane2], ['Xns], 'Xmb4"; - const char *form_1d =3D "{'Vt.d}['IVLSLane3], ['Xns], 'Xmb8"; - NEONFormatDecoder nfd(instr, NEONFormatDecoder::LoadStoreFormatMap()); - - switch (instr->Mask(NEONLoadStoreSingleStructPostIndexMask)) { - case NEON_LD1_b_post: mnemonic =3D "ld1"; form =3D form_1b; break; - case NEON_LD1_h_post: mnemonic =3D "ld1"; form =3D form_1h; break; - case NEON_LD1_s_post: - mnemonic =3D "ld1"; - VIXL_STATIC_ASSERT((NEON_LD1_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_LD1_d); - form =3D ((instr->NEONLSSize() & 1) =3D=3D 0) ? form_1s : form_1d; - break; - case NEON_ST1_b_post: mnemonic =3D "st1"; form =3D form_1b; break; - case NEON_ST1_h_post: mnemonic =3D "st1"; form =3D form_1h; break; - case NEON_ST1_s_post: - mnemonic =3D "st1"; - VIXL_STATIC_ASSERT((NEON_ST1_s | (1 << NEONLSSize_offset)) =3D=3D NE= ON_ST1_d); - form =3D ((instr->NEONLSSize() & 1) =3D=3D 0) ? form_1s : form_1d; - break; - case NEON_LD1R_post: - mnemonic =3D "ld1r"; - form =3D "{'Vt.%s}, ['Xns], 'Xmz1"; - break; - case NEON_LD2_b_post: - case NEON_ST2_b_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? 
"ld2" : "st2"; - form =3D "{'Vt.b, 'Vt2.b}['IVLSLane0], ['Xns], 'Xmb2"; - break; - case NEON_ST2_h_post: - case NEON_LD2_h_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld2" : "st2"; - form =3D "{'Vt.h, 'Vt2.h}['IVLSLane1], ['Xns], 'Xmb4"; - break; - case NEON_LD2_s_post: - case NEON_ST2_s_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld2" : "st2"; - if ((instr->NEONLSSize() & 1) =3D=3D 0) - form =3D "{'Vt.s, 'Vt2.s}['IVLSLane2], ['Xns], 'Xmb8"; - else - form =3D "{'Vt.d, 'Vt2.d}['IVLSLane3], ['Xns], 'Xmb16"; - break; - case NEON_LD2R_post: - mnemonic =3D "ld2r"; - form =3D "{'Vt.%s, 'Vt2.%s}, ['Xns], 'Xmz2"; - break; - case NEON_LD3_b_post: - case NEON_ST3_b_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld3" : "st3"; - form =3D "{'Vt.b, 'Vt2.b, 'Vt3.b}['IVLSLane0], ['Xns], 'Xmb3"; - break; - case NEON_LD3_h_post: - case NEON_ST3_h_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld3" : "st3"; - form =3D "{'Vt.h, 'Vt2.h, 'Vt3.h}['IVLSLane1], ['Xns], 'Xmb6"; - break; - case NEON_LD3_s_post: - case NEON_ST3_s_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld3" : "st3"; - if ((instr->NEONLSSize() & 1) =3D=3D 0) - form =3D "{'Vt.s, 'Vt2.s, 'Vt3.s}['IVLSLane2], ['Xns], 'Xmb12"; - else - form =3D "{'Vt.d, 'Vt2.d, 'Vt3.d}['IVLSLane3], ['Xns], 'Xmr3"; - break; - case NEON_LD3R_post: - mnemonic =3D "ld3r"; - form =3D "{'Vt.%s, 'Vt2.%s, 'Vt3.%s}, ['Xns], 'Xmz3"; - break; - case NEON_LD4_b_post: - case NEON_ST4_b_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? "ld4" : "st4"; - form =3D "{'Vt.b, 'Vt2.b, 'Vt3.b, 'Vt4.b}['IVLSLane0], ['Xns], 'Xmb4= "; - break; - case NEON_LD4_h_post: - case NEON_ST4_h_post: - mnemonic =3D (instr->LdStXLoad()) =3D=3D 1 ? "ld4" : "st4"; - form =3D "{'Vt.h, 'Vt2.h, 'Vt3.h, 'Vt4.h}['IVLSLane1], ['Xns], 'Xmb8= "; - break; - case NEON_LD4_s_post: - case NEON_ST4_s_post: - mnemonic =3D (instr->LdStXLoad() =3D=3D 1) ? 
"ld4" : "st4"; - if ((instr->NEONLSSize() & 1) =3D=3D 0) - form =3D "{'Vt.s, 'Vt2.s, 'Vt3.s, 'Vt4.s}['IVLSLane2], ['Xns], 'Xm= b16"; - else - form =3D "{'Vt.d, 'Vt2.d, 'Vt3.d, 'Vt4.d}['IVLSLane3], ['Xns], 'Xm= b32"; - break; - case NEON_LD4R_post: - mnemonic =3D "ld4r"; - form =3D "{'Vt.%1$s, 'Vt2.%1$s, 'Vt3.%1$s, 'Vt4.%1$s}, ['Xns], 'Xmz4= "; - break; - default: break; - } - - Format(instr, mnemonic, nfd.Substitute(form)); -} - - -void Disassembler::VisitNEONModifiedImmediate(const Instruction* instr) { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "'Vt.%s, 'IVMIImm8, lsl 'IVMIShiftAmt1"; - - int cmode =3D instr->NEONCmode(); - int cmode_3 =3D (cmode >> 3) & 1; - int cmode_2 =3D (cmode >> 2) & 1; - int cmode_1 =3D (cmode >> 1) & 1; - int cmode_0 =3D cmode & 1; - int q =3D instr->NEONQ(); - int op =3D instr->NEONModImmOp(); - - static const NEONFormatMap map_b =3D { {30}, {NF_8B, NF_16B} }; - static const NEONFormatMap map_h =3D { {30}, {NF_4H, NF_8H} }; - static const NEONFormatMap map_s =3D { {30}, {NF_2S, NF_4S} }; - NEONFormatDecoder nfd(instr, &map_b); - - if (cmode_3 =3D=3D 0) { - if (cmode_0 =3D=3D 0) { - mnemonic =3D (op =3D=3D 1) ? "mvni" : "movi"; - } else { // cmode<0> =3D=3D '1'. - mnemonic =3D (op =3D=3D 1) ? "bic" : "orr"; - } - nfd.SetFormatMap(0, &map_s); - } else { // cmode<3> =3D=3D '1'. - if (cmode_2 =3D=3D 0) { - if (cmode_0 =3D=3D 0) { - mnemonic =3D (op =3D=3D 1) ? "mvni" : "movi"; - } else { // cmode<0> =3D=3D '1'. - mnemonic =3D (op =3D=3D 1) ? "bic" : "orr"; - } - nfd.SetFormatMap(0, &map_h); - } else { // cmode<2> =3D=3D '1'. - if (cmode_1 =3D=3D 0) { - mnemonic =3D (op =3D=3D 1) ? "mvni" : "movi"; - form =3D "'Vt.%s, 'IVMIImm8, msl 'IVMIShiftAmt2"; - nfd.SetFormatMap(0, &map_s); - } else { // cmode<1> =3D=3D '1'. - if (cmode_0 =3D=3D 0) { - mnemonic =3D "movi"; - if (op =3D=3D 0) { - form =3D "'Vt.%s, 'IVMIImm8"; - } else { - form =3D (q =3D=3D 0) ? 
"'Dd, 'IVMIImm" : "'Vt.2d, 'IVMIImm"; - } - } else { // cmode<0> =3D=3D '1' - mnemonic =3D "fmov"; - if (op =3D=3D 0) { - form =3D "'Vt.%s, 'IVMIImmFPSingle"; - nfd.SetFormatMap(0, &map_s); - } else { - if (q =3D=3D 1) { - form =3D "'Vt.2d, 'IVMIImmFPDouble"; - } - } - } - } - } - } - Format(instr, mnemonic, nfd.Substitute(form)); -} - - -void Disassembler::VisitNEONScalar2RegMisc(const Instruction* instr) { - const char *mnemonic =3D "unimplemented"; - const char *form =3D "%sd, %sn"; - const char *form_0 =3D "%sd, %sn, #0"; - const char *form_fp0 =3D "%sd, %sn, #0.0"; - - NEONFormatDecoder nfd(instr, NEONFormatDecoder::ScalarFormatMap()); - - if (instr->Mask(NEON2RegMiscOpcode) <=3D NEON_NEG_scalar_opcode) { - // These instructions all use a two bit size field, except NOT and RBI= T, - // which use the field to encode the operation. - switch (instr->Mask(NEONScalar2RegMiscMask)) { - case NEON_CMGT_zero_scalar: mnemonic =3D "cmgt"; form =3D form_0; br= eak; - case NEON_CMGE_zero_scalar: mnemonic =3D "cmge"; form =3D form_0; br= eak; - case NEON_CMLE_zero_scalar: mnemonic =3D "cmle"; form =3D form_0; br= eak; - case NEON_CMLT_zero_scalar: mnemonic =3D "cmlt"; form =3D form_0; br= eak; - case NEON_CMEQ_zero_scalar: mnemonic =3D "cmeq"; form =3D form_0; br= eak; - case NEON_NEG_scalar: mnemonic =3D "neg"; break; - case NEON_SQNEG_scalar: mnemonic =3D "sqneg"; break; - case NEON_ABS_scalar: mnemonic =3D "abs"; break; - case NEON_SQABS_scalar: mnemonic =3D "sqabs"; break; - case NEON_SUQADD_scalar: mnemonic =3D "suqadd"; break; - case NEON_USQADD_scalar: mnemonic =3D "usqadd"; break; - default: form =3D "(NEONScalar2RegMisc)"; - } - } else { - // These instructions all use a one bit size field, except SQXTUN, SQX= TN - // and UQXTN, which use a two bit size field. 
-    nfd.SetFormatMaps(nfd.FPScalarFormatMap());
-    switch (instr->Mask(NEONScalar2RegMiscFPMask)) {
-      case NEON_FRSQRTE_scalar: mnemonic = "frsqrte"; break;
-      case NEON_FRECPE_scalar: mnemonic = "frecpe"; break;
-      case NEON_SCVTF_scalar: mnemonic = "scvtf"; break;
-      case NEON_UCVTF_scalar: mnemonic = "ucvtf"; break;
-      case NEON_FCMGT_zero_scalar: mnemonic = "fcmgt"; form = form_fp0; break;
-      case NEON_FCMGE_zero_scalar: mnemonic = "fcmge"; form = form_fp0; break;
-      case NEON_FCMLE_zero_scalar: mnemonic = "fcmle"; form = form_fp0; break;
-      case NEON_FCMLT_zero_scalar: mnemonic = "fcmlt"; form = form_fp0; break;
-      case NEON_FCMEQ_zero_scalar: mnemonic = "fcmeq"; form = form_fp0; break;
-      case NEON_FRECPX_scalar: mnemonic = "frecpx"; break;
-      case NEON_FCVTNS_scalar: mnemonic = "fcvtns"; break;
-      case NEON_FCVTNU_scalar: mnemonic = "fcvtnu"; break;
-      case NEON_FCVTPS_scalar: mnemonic = "fcvtps"; break;
-      case NEON_FCVTPU_scalar: mnemonic = "fcvtpu"; break;
-      case NEON_FCVTMS_scalar: mnemonic = "fcvtms"; break;
-      case NEON_FCVTMU_scalar: mnemonic = "fcvtmu"; break;
-      case NEON_FCVTZS_scalar: mnemonic = "fcvtzs"; break;
-      case NEON_FCVTZU_scalar: mnemonic = "fcvtzu"; break;
-      case NEON_FCVTAS_scalar: mnemonic = "fcvtas"; break;
-      case NEON_FCVTAU_scalar: mnemonic = "fcvtau"; break;
-      case NEON_FCVTXN_scalar:
-        nfd.SetFormatMap(0, nfd.LongScalarFormatMap());
-        mnemonic = "fcvtxn";
-        break;
-      default:
-        nfd.SetFormatMap(0, nfd.ScalarFormatMap());
-        nfd.SetFormatMap(1, nfd.LongScalarFormatMap());
-        switch (instr->Mask(NEONScalar2RegMiscMask)) {
-          case NEON_SQXTN_scalar: mnemonic = "sqxtn"; break;
-          case NEON_UQXTN_scalar: mnemonic = "uqxtn"; break;
-          case NEON_SQXTUN_scalar: mnemonic = "sqxtun"; break;
-          default: form = "(NEONScalar2RegMisc)";
-        }
-    }
-  }
-  Format(instr, mnemonic, nfd.SubstitutePlaceholders(form));
-}
-
-
-void Disassembler::VisitNEONScalar3Diff(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "%sd, %sn, %sm";
-  NEONFormatDecoder nfd(instr, NEONFormatDecoder::LongScalarFormatMap(),
-                        NEONFormatDecoder::ScalarFormatMap());
-
-  switch (instr->Mask(NEONScalar3DiffMask)) {
-    case NEON_SQDMLAL_scalar : mnemonic = "sqdmlal"; break;
-    case NEON_SQDMLSL_scalar : mnemonic = "sqdmlsl"; break;
-    case NEON_SQDMULL_scalar : mnemonic = "sqdmull"; break;
-    default: form = "(NEONScalar3Diff)";
-  }
-  Format(instr, mnemonic, nfd.SubstitutePlaceholders(form));
-}
-
-
-void Disassembler::VisitNEONScalar3Same(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "%sd, %sn, %sm";
-  NEONFormatDecoder nfd(instr, NEONFormatDecoder::ScalarFormatMap());
-
-  if (instr->Mask(NEONScalar3SameFPFMask) == NEONScalar3SameFPFixed) {
-    nfd.SetFormatMaps(nfd.FPScalarFormatMap());
-    switch (instr->Mask(NEONScalar3SameFPMask)) {
-      case NEON_FACGE_scalar: mnemonic = "facge"; break;
-      case NEON_FACGT_scalar: mnemonic = "facgt"; break;
-      case NEON_FCMEQ_scalar: mnemonic = "fcmeq"; break;
-      case NEON_FCMGE_scalar: mnemonic = "fcmge"; break;
-      case NEON_FCMGT_scalar: mnemonic = "fcmgt"; break;
-      case NEON_FMULX_scalar: mnemonic = "fmulx"; break;
-      case NEON_FRECPS_scalar: mnemonic = "frecps"; break;
-      case NEON_FRSQRTS_scalar: mnemonic = "frsqrts"; break;
-      case NEON_FABD_scalar: mnemonic = "fabd"; break;
-      default: form = "(NEONScalar3Same)";
-    }
-  } else {
-    switch (instr->Mask(NEONScalar3SameMask)) {
-      case NEON_ADD_scalar: mnemonic = "add"; break;
-      case NEON_SUB_scalar: mnemonic = "sub"; break;
-      case NEON_CMEQ_scalar: mnemonic = "cmeq"; break;
-      case NEON_CMGE_scalar: mnemonic = "cmge"; break;
-      case NEON_CMGT_scalar: mnemonic = "cmgt"; break;
-      case NEON_CMHI_scalar: mnemonic = "cmhi"; break;
-      case NEON_CMHS_scalar: mnemonic = "cmhs"; break;
-      case NEON_CMTST_scalar: mnemonic = "cmtst"; break;
-      case NEON_UQADD_scalar: mnemonic = "uqadd"; break;
-      case NEON_SQADD_scalar: mnemonic = "sqadd"; break;
-      case NEON_UQSUB_scalar: mnemonic = "uqsub"; break;
-      case NEON_SQSUB_scalar: mnemonic = "sqsub"; break;
-      case NEON_USHL_scalar: mnemonic = "ushl"; break;
-      case NEON_SSHL_scalar: mnemonic = "sshl"; break;
-      case NEON_UQSHL_scalar: mnemonic = "uqshl"; break;
-      case NEON_SQSHL_scalar: mnemonic = "sqshl"; break;
-      case NEON_URSHL_scalar: mnemonic = "urshl"; break;
-      case NEON_SRSHL_scalar: mnemonic = "srshl"; break;
-      case NEON_UQRSHL_scalar: mnemonic = "uqrshl"; break;
-      case NEON_SQRSHL_scalar: mnemonic = "sqrshl"; break;
-      case NEON_SQDMULH_scalar: mnemonic = "sqdmulh"; break;
-      case NEON_SQRDMULH_scalar: mnemonic = "sqrdmulh"; break;
-      default: form = "(NEONScalar3Same)";
-    }
-  }
-  Format(instr, mnemonic, nfd.SubstitutePlaceholders(form));
-}
-
-
-void Disassembler::VisitNEONScalarByIndexedElement(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "%sd, %sn, 'Ve.%s['IVByElemIndex]";
-  NEONFormatDecoder nfd(instr, NEONFormatDecoder::ScalarFormatMap());
-  bool long_instr = false;
-
-  switch (instr->Mask(NEONScalarByIndexedElementMask)) {
-    case NEON_SQDMULL_byelement_scalar:
-      mnemonic = "sqdmull";
-      long_instr = true;
-      break;
-    case NEON_SQDMLAL_byelement_scalar:
-      mnemonic = "sqdmlal";
-      long_instr = true;
-      break;
-    case NEON_SQDMLSL_byelement_scalar:
-      mnemonic = "sqdmlsl";
-      long_instr = true;
-      break;
-    case NEON_SQDMULH_byelement_scalar:
-      mnemonic = "sqdmulh";
-      break;
-    case NEON_SQRDMULH_byelement_scalar:
-      mnemonic = "sqrdmulh";
-      break;
-    default:
-      nfd.SetFormatMap(0, nfd.FPScalarFormatMap());
-      switch (instr->Mask(NEONScalarByIndexedElementFPMask)) {
-        case NEON_FMUL_byelement_scalar: mnemonic = "fmul"; break;
-        case NEON_FMLA_byelement_scalar: mnemonic = "fmla"; break;
-        case NEON_FMLS_byelement_scalar: mnemonic = "fmls"; break;
-        case NEON_FMULX_byelement_scalar: mnemonic = "fmulx"; break;
-        default: form = "(NEONScalarByIndexedElement)";
-      }
-  }
-
-  if (long_instr) {
-    nfd.SetFormatMap(0, nfd.LongScalarFormatMap());
-  }
-
-  Format(instr, mnemonic, nfd.Substitute(
-      form, nfd.kPlaceholder, nfd.kPlaceholder, nfd.kFormat));
-}
-
-
-void Disassembler::VisitNEONScalarCopy(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(NEONScalarCopy)";
-
-  NEONFormatDecoder nfd(instr, NEONFormatDecoder::TriangularScalarFormatMap());
-
-  if (instr->Mask(NEONScalarCopyMask) == NEON_DUP_ELEMENT_scalar) {
-    mnemonic = "mov";
-    form = "%sd, 'Vn.%s['IVInsIndex1]";
-  }
-
-  Format(instr, mnemonic, nfd.Substitute(form, nfd.kPlaceholder, nfd.kFormat));
-}
-
-
-void Disassembler::VisitNEONScalarPairwise(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "%sd, 'Vn.%s";
-  NEONFormatMap map = { {22}, {NF_2S, NF_2D} };
-  NEONFormatDecoder nfd(instr, NEONFormatDecoder::FPScalarFormatMap(), &map);
-
-  switch (instr->Mask(NEONScalarPairwiseMask)) {
-    case NEON_ADDP_scalar: mnemonic = "addp"; break;
-    case NEON_FADDP_scalar: mnemonic = "faddp"; break;
-    case NEON_FMAXP_scalar: mnemonic = "fmaxp"; break;
-    case NEON_FMAXNMP_scalar: mnemonic = "fmaxnmp"; break;
-    case NEON_FMINP_scalar: mnemonic = "fminp"; break;
-    case NEON_FMINNMP_scalar: mnemonic = "fminnmp"; break;
-    default: form = "(NEONScalarPairwise)";
-  }
-  Format(instr, mnemonic, nfd.Substitute(form,
-      NEONFormatDecoder::kPlaceholder, NEONFormatDecoder::kFormat));
-}
-
-
-void Disassembler::VisitNEONScalarShiftImmediate(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "%sd, %sn, 'Is1";
-  const char *form_2 = "%sd, %sn, 'Is2";
-
-  static const NEONFormatMap map_shift = {
-    {22, 21, 20, 19},
-    {NF_UNDEF, NF_B, NF_H, NF_H, NF_S, NF_S, NF_S, NF_S,
-     NF_D, NF_D, NF_D, NF_D, NF_D, NF_D, NF_D, NF_D}
-  };
-  static const NEONFormatMap map_shift_narrow = {
-    {21, 20, 19},
-    {NF_UNDEF, NF_H, NF_S, NF_S, NF_D, NF_D, NF_D, NF_D}
-  };
-  NEONFormatDecoder nfd(instr, &map_shift);
-
-  if (instr->ImmNEONImmh()) {  // immh has to be non-zero.
-    switch (instr->Mask(NEONScalarShiftImmediateMask)) {
-      case NEON_FCVTZU_imm_scalar: mnemonic = "fcvtzu"; break;
-      case NEON_FCVTZS_imm_scalar: mnemonic = "fcvtzs"; break;
-      case NEON_SCVTF_imm_scalar: mnemonic = "scvtf"; break;
-      case NEON_UCVTF_imm_scalar: mnemonic = "ucvtf"; break;
-      case NEON_SRI_scalar: mnemonic = "sri"; break;
-      case NEON_SSHR_scalar: mnemonic = "sshr"; break;
-      case NEON_USHR_scalar: mnemonic = "ushr"; break;
-      case NEON_SRSHR_scalar: mnemonic = "srshr"; break;
-      case NEON_URSHR_scalar: mnemonic = "urshr"; break;
-      case NEON_SSRA_scalar: mnemonic = "ssra"; break;
-      case NEON_USRA_scalar: mnemonic = "usra"; break;
-      case NEON_SRSRA_scalar: mnemonic = "srsra"; break;
-      case NEON_URSRA_scalar: mnemonic = "ursra"; break;
-      case NEON_SHL_scalar: mnemonic = "shl"; form = form_2; break;
-      case NEON_SLI_scalar: mnemonic = "sli"; form = form_2; break;
-      case NEON_SQSHLU_scalar: mnemonic = "sqshlu"; form = form_2; break;
-      case NEON_SQSHL_imm_scalar: mnemonic = "sqshl"; form = form_2; break;
-      case NEON_UQSHL_imm_scalar: mnemonic = "uqshl"; form = form_2; break;
-      case NEON_UQSHRN_scalar:
-        mnemonic = "uqshrn";
-        nfd.SetFormatMap(1, &map_shift_narrow);
-        break;
-      case NEON_UQRSHRN_scalar:
-        mnemonic = "uqrshrn";
-        nfd.SetFormatMap(1, &map_shift_narrow);
-        break;
-      case NEON_SQSHRN_scalar:
-        mnemonic = "sqshrn";
-        nfd.SetFormatMap(1, &map_shift_narrow);
-        break;
-      case NEON_SQRSHRN_scalar:
-        mnemonic = "sqrshrn";
-        nfd.SetFormatMap(1, &map_shift_narrow);
-        break;
-      case NEON_SQSHRUN_scalar:
-        mnemonic = "sqshrun";
-        nfd.SetFormatMap(1, &map_shift_narrow);
-        break;
-      case NEON_SQRSHRUN_scalar:
-        mnemonic = "sqrshrun";
-        nfd.SetFormatMap(1, &map_shift_narrow);
-        break;
-      default:
-        form = "(NEONScalarShiftImmediate)";
-    }
-  } else {
-    form = "(NEONScalarShiftImmediate)";
-  }
-  Format(instr, mnemonic, nfd.SubstitutePlaceholders(form));
-}
-
-
-void Disassembler::VisitNEONShiftImmediate(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Vd.%s, 'Vn.%s, 'Is1";
-  const char *form_shift_2 = "'Vd.%s, 'Vn.%s, 'Is2";
-  const char *form_xtl = "'Vd.%s, 'Vn.%s";
-
-  // 0001->8H, 001x->4S, 01xx->2D, all others undefined.
-  static const NEONFormatMap map_shift_ta = {
-    {22, 21, 20, 19},
-    {NF_UNDEF, NF_8H, NF_4S, NF_4S, NF_2D, NF_2D, NF_2D, NF_2D}
-  };
-
-  // 00010->8B, 00011->16B, 001x0->4H, 001x1->8H,
-  // 01xx0->2S, 01xx1->4S, 1xxx1->2D, all others undefined.
-  static const NEONFormatMap map_shift_tb = {
-    {22, 21, 20, 19, 30},
-    {NF_UNDEF, NF_UNDEF, NF_8B, NF_16B, NF_4H, NF_8H, NF_4H, NF_8H,
-     NF_2S, NF_4S, NF_2S, NF_4S, NF_2S, NF_4S, NF_2S, NF_4S,
-     NF_UNDEF, NF_2D, NF_UNDEF, NF_2D, NF_UNDEF, NF_2D, NF_UNDEF, NF_2D,
-     NF_UNDEF, NF_2D, NF_UNDEF, NF_2D, NF_UNDEF, NF_2D, NF_UNDEF, NF_2D}
-  };
-
-  NEONFormatDecoder nfd(instr, &map_shift_tb);
-
-  if (instr->ImmNEONImmh()) {  // immh has to be non-zero.
-    switch (instr->Mask(NEONShiftImmediateMask)) {
-      case NEON_SQSHLU:    mnemonic = "sqshlu"; form = form_shift_2; break;
-      case NEON_SQSHL_imm: mnemonic = "sqshl";  form = form_shift_2; break;
-      case NEON_UQSHL_imm: mnemonic = "uqshl";  form = form_shift_2; break;
-      case NEON_SHL:       mnemonic = "shl";    form = form_shift_2; break;
-      case NEON_SLI:       mnemonic = "sli";    form = form_shift_2; break;
-      case NEON_SCVTF_imm:  mnemonic = "scvtf";  break;
-      case NEON_UCVTF_imm:  mnemonic = "ucvtf";  break;
-      case NEON_FCVTZU_imm: mnemonic = "fcvtzu"; break;
-      case NEON_FCVTZS_imm: mnemonic = "fcvtzs"; break;
-      case NEON_SRI:        mnemonic = "sri";    break;
-      case NEON_SSHR:       mnemonic = "sshr";   break;
-      case NEON_USHR:       mnemonic = "ushr";   break;
-      case NEON_SRSHR:      mnemonic = "srshr";  break;
-      case NEON_URSHR:      mnemonic = "urshr";  break;
-      case NEON_SSRA:       mnemonic = "ssra";   break;
-      case NEON_USRA:       mnemonic = "usra";   break;
-      case NEON_SRSRA:      mnemonic = "srsra";  break;
-      case NEON_URSRA:      mnemonic = "ursra";  break;
-      case NEON_SHRN:
-        mnemonic = instr->Mask(NEON_Q) ? "shrn2" : "shrn";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_RSHRN:
-        mnemonic = instr->Mask(NEON_Q) ? "rshrn2" : "rshrn";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_UQSHRN:
-        mnemonic = instr->Mask(NEON_Q) ? "uqshrn2" : "uqshrn";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_UQRSHRN:
-        mnemonic = instr->Mask(NEON_Q) ? "uqrshrn2" : "uqrshrn";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_SQSHRN:
-        mnemonic = instr->Mask(NEON_Q) ? "sqshrn2" : "sqshrn";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_SQRSHRN:
-        mnemonic = instr->Mask(NEON_Q) ? "sqrshrn2" : "sqrshrn";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_SQSHRUN:
-        mnemonic = instr->Mask(NEON_Q) ? "sqshrun2" : "sqshrun";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_SQRSHRUN:
-        mnemonic = instr->Mask(NEON_Q) ? "sqrshrun2" : "sqrshrun";
-        nfd.SetFormatMap(1, &map_shift_ta);
-        break;
-      case NEON_SSHLL:
-        nfd.SetFormatMap(0, &map_shift_ta);
-        if (instr->ImmNEONImmb() == 0 &&
-            CountSetBits(instr->ImmNEONImmh(), 32) == 1) {  // sxtl variant.
-          form = form_xtl;
-          mnemonic = instr->Mask(NEON_Q) ? "sxtl2" : "sxtl";
-        } else {  // sshll variant.
-          form = form_shift_2;
-          mnemonic = instr->Mask(NEON_Q) ? "sshll2" : "sshll";
-        }
-        break;
-      case NEON_USHLL:
-        nfd.SetFormatMap(0, &map_shift_ta);
-        if (instr->ImmNEONImmb() == 0 &&
-            CountSetBits(instr->ImmNEONImmh(), 32) == 1) {  // uxtl variant.
-          form = form_xtl;
-          mnemonic = instr->Mask(NEON_Q) ? "uxtl2" : "uxtl";
-        } else {  // ushll variant.
-          form = form_shift_2;
-          mnemonic = instr->Mask(NEON_Q) ? "ushll2" : "ushll";
-        }
-        break;
-      default: form = "(NEONShiftImmediate)";
-    }
-  } else {
-    form = "(NEONShiftImmediate)";
-  }
-  Format(instr, mnemonic, nfd.Substitute(form));
-}
-
-
-void Disassembler::VisitNEONTable(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "(NEONTable)";
-  const char form_1v[] = "'Vd.%%s, {'Vn.16b}, 'Vm.%%s";
-  const char form_2v[] = "'Vd.%%s, {'Vn.16b, v%d.16b}, 'Vm.%%s";
-  const char form_3v[] = "'Vd.%%s, {'Vn.16b, v%d.16b, v%d.16b}, 'Vm.%%s";
-  const char form_4v[] =
-      "'Vd.%%s, {'Vn.16b, v%d.16b, v%d.16b, v%d.16b}, 'Vm.%%s";
-  static const NEONFormatMap map_b = { {30}, {NF_8B, NF_16B} };
-  NEONFormatDecoder nfd(instr, &map_b);
-
-  switch (instr->Mask(NEONTableMask)) {
-    case NEON_TBL_1v: mnemonic = "tbl"; form = form_1v; break;
-    case NEON_TBL_2v: mnemonic = "tbl"; form = form_2v; break;
-    case NEON_TBL_3v: mnemonic = "tbl"; form = form_3v; break;
-    case NEON_TBL_4v: mnemonic = "tbl"; form = form_4v; break;
-    case NEON_TBX_1v: mnemonic = "tbx"; form = form_1v; break;
-    case NEON_TBX_2v: mnemonic = "tbx"; form = form_2v; break;
-    case NEON_TBX_3v: mnemonic = "tbx"; form = form_3v; break;
-    case NEON_TBX_4v: mnemonic = "tbx"; form = form_4v; break;
-    default: break;
-  }
-
-  char re_form[sizeof(form_4v) + 6];
-  int reg_num = instr->Rn();
-  snprintf(re_form, sizeof(re_form), form,
-           (reg_num + 1) % kNumberOfVRegisters,
-           (reg_num + 2) % kNumberOfVRegisters,
-           (reg_num + 3) % kNumberOfVRegisters);
-
-  Format(instr, mnemonic, nfd.Substitute(re_form));
-}
-
-
-void Disassembler::VisitNEONPerm(const Instruction* instr) {
-  const char *mnemonic = "unimplemented";
-  const char *form = "'Vd.%s, 'Vn.%s, 'Vm.%s";
-  NEONFormatDecoder nfd(instr);
-
-  switch (instr->Mask(NEONPermMask)) {
-    case NEON_TRN1: mnemonic = "trn1"; break;
-    case NEON_TRN2: mnemonic = "trn2"; break;
-    case NEON_UZP1: mnemonic = "uzp1"; break;
-    case NEON_UZP2: mnemonic = "uzp2"; break;
-    case NEON_ZIP1: mnemonic = "zip1"; break;
-    case NEON_ZIP2: mnemonic = "zip2"; break;
-    default: form = "(NEONPerm)";
-  }
-  Format(instr, mnemonic, nfd.Substitute(form));
-}
-
-
-void Disassembler::VisitUnimplemented(const Instruction* instr) {
-  Format(instr, "unimplemented", "(Unimplemented)");
-}
-
-
-void Disassembler::VisitUnallocated(const Instruction* instr) {
-  Format(instr, "unallocated", "(Unallocated)");
-}
-
-
-void Disassembler::ProcessOutput(const Instruction* /*instr*/) {
-  // The base disasm does nothing more than disassembling into a buffer.
-}
-
-
-void Disassembler::AppendRegisterNameToOutput(const Instruction* instr,
-                                              const CPURegister& reg) {
-  USE(instr);
-  VIXL_ASSERT(reg.IsValid());
-  char reg_char;
-
-  if (reg.IsRegister()) {
-    reg_char = reg.Is64Bits() ? 'x' : 'w';
-  } else {
-    VIXL_ASSERT(reg.IsVRegister());
-    switch (reg.SizeInBits()) {
-      case kBRegSize: reg_char = 'b'; break;
-      case kHRegSize: reg_char = 'h'; break;
-      case kSRegSize: reg_char = 's'; break;
-      case kDRegSize: reg_char = 'd'; break;
-      default:
-        VIXL_ASSERT(reg.Is128Bits());
-        reg_char = 'q';
-    }
-  }
-
-  if (reg.IsVRegister() || !(reg.Aliases(sp) || reg.Aliases(xzr))) {
-    // A core or scalar/vector register: [wx]0 - 30, [bhsdq]0 - 31.
-    AppendToOutput("%c%d", reg_char, reg.code());
-  } else if (reg.Aliases(sp)) {
-    // Disassemble w31/x31 as stack pointer wsp/sp.
-    AppendToOutput("%s", reg.Is64Bits() ? "sp" : "wsp");
-  } else {
-    // Disassemble w31/x31 as zero register wzr/xzr.
-    AppendToOutput("%czr", reg_char);
-  }
-}
-
-
-void Disassembler::AppendPCRelativeOffsetToOutput(const Instruction* instr,
-                                                  int64_t offset) {
-  USE(instr);
-  uint64_t abs_offset = offset;
-  char sign = (offset < 0) ? '-' : '+';
-  if (offset < 0) {
-    abs_offset = -abs_offset;
-  }
-  AppendToOutput("#%c0x%" PRIx64, sign, abs_offset);
-}
-
-
-void Disassembler::AppendAddressToOutput(const Instruction* instr,
-                                         const void* addr) {
-  USE(instr);
-  AppendToOutput("(addr 0x%" PRIxPTR ")", reinterpret_cast<uintptr_t>(addr));
-}
-
-
-void Disassembler::AppendCodeAddressToOutput(const Instruction* instr,
-                                             const void* addr) {
-  AppendAddressToOutput(instr, addr);
-}
-
-
-void Disassembler::AppendDataAddressToOutput(const Instruction* instr,
-                                             const void* addr) {
-  AppendAddressToOutput(instr, addr);
-}
-
-
-void Disassembler::AppendCodeRelativeAddressToOutput(const Instruction* instr,
-                                                     const void* addr) {
-  USE(instr);
-  int64_t rel_addr = CodeRelativeAddress(addr);
-  if (rel_addr >= 0) {
-    AppendToOutput("(addr 0x%" PRIx64 ")", rel_addr);
-  } else {
-    AppendToOutput("(addr -0x%" PRIx64 ")", -rel_addr);
-  }
-}
-
-
-void Disassembler::AppendCodeRelativeCodeAddressToOutput(
-    const Instruction* instr, const void* addr) {
-  AppendCodeRelativeAddressToOutput(instr, addr);
-}
-
-
-void Disassembler::AppendCodeRelativeDataAddressToOutput(
-    const Instruction* instr, const void* addr) {
-  AppendCodeRelativeAddressToOutput(instr, addr);
-}
-
-
-void Disassembler::MapCodeAddress(int64_t base_address,
-                                  const Instruction* instr_address) {
-  set_code_address_offset(
-      base_address - reinterpret_cast<intptr_t>(instr_address));
-}
-int64_t Disassembler::CodeRelativeAddress(const void* addr) {
-  return reinterpret_cast<intptr_t>(addr) + code_address_offset();
-}
-
-
-void Disassembler::Format(const Instruction* instr, const char* mnemonic,
-                          const char* format) {
-  VIXL_ASSERT(mnemonic != NULL);
-  ResetOutput();
-  Substitute(instr, mnemonic);
-  if (format != NULL) {
-    VIXL_ASSERT(buffer_pos_ < buffer_size_);
-    buffer_[buffer_pos_++] = ' ';
-    Substitute(instr, format);
-  }
-  VIXL_ASSERT(buffer_pos_ < buffer_size_);
-  buffer_[buffer_pos_] = 0;
-  ProcessOutput(instr);
-}
-
-
-void Disassembler::Substitute(const Instruction* instr, const char* string) {
-  char chr = *string++;
-  while (chr != '\0') {
-    if (chr == '\'') {
-      string += SubstituteField(instr, string);
-    } else {
-      VIXL_ASSERT(buffer_pos_ < buffer_size_);
-      buffer_[buffer_pos_++] = chr;
-    }
-    chr = *string++;
-  }
-}
-
-
-int Disassembler::SubstituteField(const Instruction* instr,
-                                  const char* format) {
-  switch (format[0]) {
-    // NB. The remaining substitution prefix characters are: GJKUZ.
-    case 'R':  // Register. X or W, selected by sf bit.
-    case 'F':  // FP register. S or D, selected by type field.
-    case 'V':  // Vector register, V, vector format.
-    case 'W':
-    case 'X':
-    case 'B':
-    case 'H':
-    case 'S':
-    case 'D':
-    case 'Q': return SubstituteRegisterField(instr, format);
-    case 'I': return SubstituteImmediateField(instr, format);
-    case 'L': return SubstituteLiteralField(instr, format);
-    case 'N': return SubstituteShiftField(instr, format);
-    case 'P': return SubstitutePrefetchField(instr, format);
-    case 'C': return SubstituteConditionField(instr, format);
-    case 'E': return SubstituteExtendField(instr, format);
-    case 'A': return SubstitutePCRelAddressField(instr, format);
-    case 'T': return SubstituteBranchTargetField(instr, format);
-    case 'O': return SubstituteLSRegOffsetField(instr, format);
-    case 'M': return SubstituteBarrierField(instr, format);
-    case 'K': return SubstituteCrField(instr, format);
-    case 'G': return SubstituteSysOpField(instr, format);
-    default: {
-      VIXL_UNREACHABLE();
-      return 1;
-    }
-  }
-}
-
-
-int Disassembler::SubstituteRegisterField(const Instruction* instr,
-                                          const char* format) {
-  char reg_prefix = format[0];
-  unsigned reg_num = 0;
-  unsigned field_len = 2;
-
-  switch (format[1]) {
-    case 'd':
-      reg_num = instr->Rd();
-      if (format[2] == 'q') {
-        reg_prefix = instr->NEONQ() ? 'X' : 'W';
-        field_len = 3;
-      }
-      break;
-    case 'n': reg_num = instr->Rn(); break;
-    case 'm':
-      reg_num = instr->Rm();
-      switch (format[2]) {
-        // Handle registers tagged with b (bytes), z (instruction), or
-        // r (registers), used for address updates in
-        // NEON load/store instructions.
-        case 'r':
-        case 'b':
-        case 'z': {
-          field_len = 3;
-          char* eimm;
-          int imm = static_cast<int>(strtol(&format[3], &eimm, 10));
-          field_len += eimm - &format[3];
-          if (reg_num == 31) {
-            switch (format[2]) {
-              case 'z':
-                imm *= (1 << instr->NEONLSSize());
-                break;
-              case 'r':
-                imm *= (instr->NEONQ() == 0) ? kDRegSizeInBytes
-                                             : kQRegSizeInBytes;
-                break;
-              case 'b':
-                break;
-            }
-            AppendToOutput("#%d", imm);
-            return field_len;
-          }
-          break;
-        }
-      }
-      break;
-    case 'e':
-      // This is register Rm, but using a 4-bit specifier. Used in NEON
-      // by-element instructions.
-      reg_num = (instr->Rm() & 0xf);
-      break;
-    case 'a': reg_num = instr->Ra(); break;
-    case 's': reg_num = instr->Rs(); break;
-    case 't':
-      reg_num = instr->Rt();
-      if (format[0] == 'V') {
-        if ((format[2] >= '2') && (format[2] <= '4')) {
-          // Handle consecutive vector register specifiers Vt2, Vt3 and Vt4.
-          reg_num = (reg_num + format[2] - '1') % 32;
-          field_len = 3;
-        }
-      } else {
-        if (format[2] == '2') {
-          // Handle register specifier Rt2.
-          reg_num = instr->Rt2();
-          field_len = 3;
-        }
-      }
-      break;
-    default: VIXL_UNREACHABLE();
-  }
-
-  // Increase field length for registers tagged as stack.
-  if (format[2] == 's') {
-    field_len = 3;
-  }
-
-  CPURegister::RegisterType reg_type = CPURegister::kRegister;
-  unsigned reg_size = kXRegSize;
-
-  if (reg_prefix == 'R') {
-    reg_prefix = instr->SixtyFourBits() ? 'X' : 'W';
-  } else if (reg_prefix == 'F') {
-    reg_prefix = ((instr->FPType() & 1) == 0) ? 'S' : 'D';
-  }
-
-  switch (reg_prefix) {
-    case 'W':
-      reg_type = CPURegister::kRegister; reg_size = kWRegSize; break;
-    case 'X':
-      reg_type = CPURegister::kRegister; reg_size = kXRegSize; break;
-    case 'B':
-      reg_type = CPURegister::kVRegister; reg_size = kBRegSize; break;
-    case 'H':
-      reg_type = CPURegister::kVRegister; reg_size = kHRegSize; break;
-    case 'S':
-      reg_type = CPURegister::kVRegister; reg_size = kSRegSize; break;
-    case 'D':
-      reg_type = CPURegister::kVRegister; reg_size = kDRegSize; break;
-    case 'Q':
-      reg_type = CPURegister::kVRegister; reg_size = kQRegSize; break;
-    case 'V':
-      AppendToOutput("v%d", reg_num);
-      return field_len;
-    default:
-      VIXL_UNREACHABLE();
-  }
-
-  if ((reg_type == CPURegister::kRegister) &&
-      (reg_num == kZeroRegCode) && (format[2] == 's')) {
-    reg_num = kSPRegInternalCode;
-  }
-
-  AppendRegisterNameToOutput(instr, CPURegister(reg_num, reg_size, reg_type));
-
-  return field_len;
-}
-
-
-int Disassembler::SubstituteImmediateField(const Instruction* instr,
-                                           const char* format) {
-  VIXL_ASSERT(format[0] == 'I');
-
-  switch (format[1]) {
-    case 'M': {  // IMoveImm, IMoveNeg or IMoveLSL.
-      if (format[5] == 'L') {
-        AppendToOutput("#0x%" PRIx32, instr->ImmMoveWide());
-        if (instr->ShiftMoveWide() > 0) {
-          AppendToOutput(", lsl #%" PRId32, 16 * instr->ShiftMoveWide());
-        }
-      } else {
-        VIXL_ASSERT((format[5] == 'I') || (format[5] == 'N'));
-        uint64_t imm = static_cast<uint64_t>(instr->ImmMoveWide()) <<
-            (16 * instr->ShiftMoveWide());
-        if (format[5] == 'N')
-          imm = ~imm;
-        if (!instr->SixtyFourBits())
-          imm &= UINT64_C(0xffffffff);
-        AppendToOutput("#0x%" PRIx64, imm);
-      }
-      return 8;
-    }
-    case 'L': {
-      switch (format[2]) {
-        case 'L': {  // ILLiteral - Immediate Load Literal.
-          AppendToOutput("pc%+" PRId32,
-                         instr->ImmLLiteral() << kLiteralEntrySizeLog2);
-          return 9;
-        }
-        case 'S': {  // ILS - Immediate Load/Store.
-          if (instr->ImmLS() != 0) {
-            AppendToOutput(", #%" PRId32, instr->ImmLS());
-          }
-          return 3;
-        }
-        case 'P': {  // ILPx - Immediate Load/Store Pair, x = access size.
-          if (instr->ImmLSPair() != 0) {
-            // format[3] is the scale value. Convert to a number.
-            int scale = 1 << (format[3] - '0');
-            AppendToOutput(", #%" PRId32, instr->ImmLSPair() * scale);
-          }
-          return 4;
-        }
-        case 'U': {  // ILU - Immediate Load/Store Unsigned.
-          if (instr->ImmLSUnsigned() != 0) {
-            int shift = instr->SizeLS();
-            AppendToOutput(", #%" PRId32, instr->ImmLSUnsigned() << shift);
-          }
-          return 3;
-        }
-        default: {
-          VIXL_UNIMPLEMENTED();
-          return 0;
-        }
-      }
-    }
-    case 'C': {  // ICondB - Immediate Conditional Branch.
-      int64_t offset = instr->ImmCondBranch() << 2;
-      AppendPCRelativeOffsetToOutput(instr, offset);
-      return 6;
-    }
-    case 'A': {  // IAddSub.
-      VIXL_ASSERT(instr->ShiftAddSub() <= 1);
-      int64_t imm = instr->ImmAddSub() << (12 * instr->ShiftAddSub());
-      AppendToOutput("#0x%" PRIx64 " (%" PRId64 ")", imm, imm);
-      return 7;
-    }
-    case 'F': {  // IFPSingle, IFPDouble or IFPFBits.
-      if (format[3] == 'F') {  // IFPFbits.
-        AppendToOutput("#%" PRId32, 64 - instr->FPScale());
-        return 8;
-      } else {
-        AppendToOutput("#0x%" PRIx32 " (%.4f)", instr->ImmFP(),
-                       format[3] == 'S' ? instr->ImmFP32() : instr->ImmFP64());
-        return 9;
-      }
-    }
-    case 'T': {  // ITri - Immediate Triangular Encoded.
-      AppendToOutput("#0x%" PRIx64, instr->ImmLogical());
-      return 4;
-    }
-    case 'N': {  // INzcv.
-      int nzcv = (instr->Nzcv() << Flags_offset);
-      AppendToOutput("#%c%c%c%c", ((nzcv & NFlag) == 0) ? 'n' : 'N',
-                                  ((nzcv & ZFlag) == 0) ? 'z' : 'Z',
-                                  ((nzcv & CFlag) == 0) ? 'c' : 'C',
-                                  ((nzcv & VFlag) == 0) ? 'v' : 'V');
-      return 5;
-    }
-    case 'P': {  // IP - Conditional compare.
-      AppendToOutput("#%" PRId32, instr->ImmCondCmp());
-      return 2;
-    }
-    case 'B': {  // Bitfields.
-      return SubstituteBitfieldImmediateField(instr, format);
-    }
-    case 'E': {  // IExtract.
-      AppendToOutput("#%" PRId32, instr->ImmS());
-      return 8;
-    }
-    case 'S': {  // IS - Test and branch bit.
-      AppendToOutput("#%" PRId32, (instr->ImmTestBranchBit5() << 5) |
-                                  instr->ImmTestBranchBit40());
-      return 2;
-    }
-    case 's': {  // Is - Shift (immediate).
-      switch (format[2]) {
-        case '1': {  // Is1 - SSHR.
-          int shift = 16 << HighestSetBitPosition(instr->ImmNEONImmh());
-          shift -= instr->ImmNEONImmhImmb();
-          AppendToOutput("#%d", shift);
-          return 3;
-        }
-        case '2': {  // Is2 - SLI.
-          int shift = instr->ImmNEONImmhImmb();
-          shift -= 8 << HighestSetBitPosition(instr->ImmNEONImmh());
-          AppendToOutput("#%d", shift);
-          return 3;
-        }
-        default: {
-          VIXL_UNIMPLEMENTED();
-          return 0;
-        }
-      }
-    }
-    case 'D': {  // IDebug - HLT and BRK instructions.
-      AppendToOutput("#0x%" PRIx32, instr->ImmException());
-      return 6;
-    }
-    case 'V': {  // Immediate Vector.
-      switch (format[2]) {
-        case 'E': {  // IVExtract.
-          AppendToOutput("#%" PRId32, instr->ImmNEONExt());
-          return 9;
-        }
-        case 'B': {  // IVByElemIndex.
-          int vm_index = (instr->NEONH() << 1) | instr->NEONL();
-          if (instr->NEONSize() == 1) {
-            vm_index = (vm_index << 1) | instr->NEONM();
-          }
-          AppendToOutput("%d", vm_index);
-          return strlen("IVByElemIndex");
-        }
-        case 'I': {  // INS element.
-          if (strncmp(format, "IVInsIndex", strlen("IVInsIndex")) == 0) {
-            int rd_index, rn_index;
-            int imm5 = instr->ImmNEON5();
-            int imm4 = instr->ImmNEON4();
-            int tz = CountTrailingZeros(imm5, 32);
-            rd_index = imm5 >> (tz + 1);
-            rn_index = imm4 >> tz;
-            if (strncmp(format, "IVInsIndex1", strlen("IVInsIndex1")) == 0) {
-              AppendToOutput("%d", rd_index);
-              return strlen("IVInsIndex1");
-            } else if (strncmp(format, "IVInsIndex2",
-                               strlen("IVInsIndex2")) == 0) {
-              AppendToOutput("%d", rn_index);
-              return strlen("IVInsIndex2");
-            } else {
-              VIXL_UNIMPLEMENTED();
-              return 0;
-            }
-          }
-          VIXL_FALLTHROUGH();
-        }
-        case 'L': {  // IVLSLane[0123] - suffix indicates access size shift.
-          AppendToOutput("%d", instr->NEONLSIndex(format[8] - '0'));
-          return 9;
-        }
-        case 'M': {  // Modified Immediate cases.
-          if (strncmp(format,
-                      "IVMIImmFPSingle",
-                      strlen("IVMIImmFPSingle")) == 0) {
-            AppendToOutput("#0x%" PRIx32 " (%.4f)", instr->ImmNEONabcdefgh(),
-                           instr->ImmNEONFP32());
-            return strlen("IVMIImmFPSingle");
-          } else if (strncmp(format,
-                             "IVMIImmFPDouble",
-                             strlen("IVMIImmFPDouble")) == 0) {
-            AppendToOutput("#0x%" PRIx32 " (%.4f)", instr->ImmNEONabcdefgh(),
-                           instr->ImmNEONFP64());
-            return strlen("IVMIImmFPDouble");
-          } else if (strncmp(format, "IVMIImm8", strlen("IVMIImm8")) == 0) {
-            uint64_t imm8 = instr->ImmNEONabcdefgh();
-            AppendToOutput("#0x%" PRIx64, imm8);
-            return strlen("IVMIImm8");
-          } else if (strncmp(format, "IVMIImm", strlen("IVMIImm")) == 0) {
-            uint64_t imm8 = instr->ImmNEONabcdefgh();
-            uint64_t imm = 0;
-            for (int i = 0; i < 8; ++i) {
-              if (imm8 & (1 << i)) {
-                imm |= (UINT64_C(0xff) << (8 * i));
-              }
-            }
-            AppendToOutput("#0x%" PRIx64, imm);
-            return strlen("IVMIImm");
-          } else if (strncmp(format, "IVMIShiftAmt1",
-                             strlen("IVMIShiftAmt1")) == 0) {
-            int cmode = instr->NEONCmode();
-            int shift_amount = 8 * ((cmode >> 1) & 3);
-            AppendToOutput("#%d", shift_amount);
-            return strlen("IVMIShiftAmt1");
-          } else if (strncmp(format, "IVMIShiftAmt2",
-                             strlen("IVMIShiftAmt2")) == 0) {
-            int cmode = instr->NEONCmode();
-            int shift_amount = 8 << (cmode & 1);
-            AppendToOutput("#%d", shift_amount);
-            return strlen("IVMIShiftAmt2");
-          } else {
-            VIXL_UNIMPLEMENTED();
-            return 0;
-          }
-        }
-        default: {
-          VIXL_UNIMPLEMENTED();
-          return 0;
-        }
-      }
-    }
-    case 'X': {  // IX - CLREX instruction.
-      AppendToOutput("#0x%" PRIx32, instr->CRm());
-      return 2;
-    }
-    default: {
-      VIXL_UNIMPLEMENTED();
-      return 0;
-    }
-  }
-}
-
-
-int Disassembler::SubstituteBitfieldImmediateField(const Instruction* instr,
-                                                   const char* format) {
-  VIXL_ASSERT((format[0] == 'I') && (format[1] == 'B'));
-  unsigned r = instr->ImmR();
-  unsigned s = instr->ImmS();
-
-  switch (format[2]) {
-    case 'r': {  // IBr.
-      AppendToOutput("#%d", r);
-      return 3;
-    }
-    case 's': {  // IBs+1 or IBs-r+1.
-      if (format[3] == '+') {
-        AppendToOutput("#%d", s + 1);
-        return 5;
-      } else {
-        VIXL_ASSERT(format[3] == '-');
-        AppendToOutput("#%d", s - r + 1);
-        return 7;
-      }
-    }
-    case 'Z': {  // IBZ-r.
-      VIXL_ASSERT((format[3] == '-') && (format[4] == 'r'));
-      unsigned reg_size = (instr->SixtyFourBits() == 1) ? kXRegSize : kWRegSize;
-      AppendToOutput("#%d", reg_size - r);
-      return 5;
-    }
-    default: {
-      VIXL_UNREACHABLE();
-      return 0;
-    }
-  }
-}
-
-
-int Disassembler::SubstituteLiteralField(const Instruction* instr,
-                                         const char* format) {
-  VIXL_ASSERT(strncmp(format, "LValue", 6) == 0);
-  USE(format);
-
-  const void * address = instr->LiteralAddress();
-  switch (instr->Mask(LoadLiteralMask)) {
-    case LDR_w_lit:
-    case LDR_x_lit:
-    case LDRSW_x_lit:
-    case LDR_s_lit:
-    case LDR_d_lit:
-    case LDR_q_lit:
-      AppendCodeRelativeDataAddressToOutput(instr, address);
-      break;
-    case PRFM_lit: {
-      // Use the prefetch hint to decide how to print the address.
-      switch (instr->PrefetchHint()) {
-        case 0x0:  // PLD: prefetch for load.
-        case 0x2:  // PST: prepare for store.
-          AppendCodeRelativeDataAddressToOutput(instr, address);
-          break;
-        case 0x1:  // PLI: preload instructions.
-          AppendCodeRelativeCodeAddressToOutput(instr, address);
-          break;
-        case 0x3:  // Unallocated hint.
-          AppendCodeRelativeAddressToOutput(instr, address);
-          break;
-      }
-      break;
-    }
-    default:
-      VIXL_UNREACHABLE();
-  }
-
-  return 6;
-}
-
-
-int Disassembler::SubstituteShiftField(const Instruction* instr,
-                                       const char* format) {
-  VIXL_ASSERT(format[0] == 'N');
-  VIXL_ASSERT(instr->ShiftDP() <= 0x3);
-
-  switch (format[1]) {
-    case 'D': {  // HDP.
-      VIXL_ASSERT(instr->ShiftDP() != ROR);
-      VIXL_FALLTHROUGH();
-    }
-    case 'L': {  // HLo.
-      if (instr->ImmDPShift() != 0) {
-        const char* shift_type[] = {"lsl", "lsr", "asr", "ror"};
-        AppendToOutput(", %s #%" PRId32, shift_type[instr->ShiftDP()],
-                       instr->ImmDPShift());
-      }
-      return 3;
-    }
-    default:
-      VIXL_UNIMPLEMENTED();
-      return 0;
-  }
-}
-
-
-int Disassembler::SubstituteConditionField(const Instruction* instr,
-                                           const char* format) {
-  VIXL_ASSERT(format[0] == 'C');
-  const char* condition_code[] = { "eq", "ne", "hs", "lo",
-                                   "mi", "pl", "vs", "vc",
-                                   "hi", "ls", "ge", "lt",
-                                   "gt", "le", "al", "nv" };
-  int cond;
-  switch (format[1]) {
-    case 'B': cond = instr->ConditionBranch(); break;
-    case 'I': {
-      cond = InvertCondition(static_cast<Condition>(instr->Condition()));
-      break;
-    }
-    default: cond = instr->Condition();
-  }
-  AppendToOutput("%s", condition_code[cond]);
-  return 4;
-}
-
-
-int Disassembler::SubstitutePCRelAddressField(const Instruction* instr,
-                                              const char* format) {
-  VIXL_ASSERT((strcmp(format, "AddrPCRelByte") == 0) ||  // Used by `adr`.
-              (strcmp(format, "AddrPCRelPage") == 0));   // Used by `adrp`.
-
-  int64_t offset = instr->ImmPCRel();
-
-  // Compute the target address based on the effective address (after applying
-  // code_address_offset). This is required for correct behaviour of adrp.
-  const Instruction* base = instr + code_address_offset();
-  if (format[9] == 'P') {
-    offset *= kPageSize;
-    base = AlignDown(base, kPageSize);
-  }
-  // Strip code_address_offset before printing, so we can use the
-  // semantically-correct AppendCodeRelativeAddressToOutput.
-  const void* target =
-      reinterpret_cast<const void*>(base + offset - code_address_offset());
-
-  AppendPCRelativeOffsetToOutput(instr, offset);
-  AppendToOutput(" ");
-  AppendCodeRelativeAddressToOutput(instr, target);
-  return 13;
-}
-
-
-int Disassembler::SubstituteBranchTargetField(const Instruction* instr,
-                                              const char* format) {
-  VIXL_ASSERT(strncmp(format, "TImm", 4) == 0);
-
-  int64_t offset = 0;
-  switch (format[5]) {
-    // BImmUncn - unconditional branch immediate.
-    case 'n': offset = instr->ImmUncondBranch(); break;
-    // BImmCond - conditional branch immediate.
-    case 'o': offset = instr->ImmCondBranch(); break;
-    // BImmCmpa - compare and branch immediate.
-    case 'm': offset = instr->ImmCmpBranch(); break;
-    // BImmTest - test and branch immediate.
-    case 'e': offset = instr->ImmTestBranch(); break;
-    default: VIXL_UNIMPLEMENTED();
-  }
-  offset <<= kInstructionSizeLog2;
-  const void* target_address = reinterpret_cast<const void*>(instr + offset);
-  VIXL_STATIC_ASSERT(sizeof(*instr) == 1);
-
-  AppendPCRelativeOffsetToOutput(instr, offset);
-  AppendToOutput(" ");
-  AppendCodeRelativeCodeAddressToOutput(instr, target_address);
-
-  return 8;
-}
-
-
-int Disassembler::SubstituteExtendField(const Instruction* instr,
-                                        const char* format) {
-  VIXL_ASSERT(strncmp(format, "Ext", 3) == 0);
-  VIXL_ASSERT(instr->ExtendMode() <= 7);
-  USE(format);
-
-  const char* extend_mode[] = { "uxtb", "uxth", "uxtw", "uxtx",
-                                "sxtb", "sxth", "sxtw", "sxtx" };
-
-  // If rd or rn is SP, uxtw on 32-bit registers and uxtx on 64-bit
-  // registers becomes lsl.
-  if (((instr->Rd() == kZeroRegCode) || (instr->Rn() == kZeroRegCode)) &&
-      (((instr->ExtendMode() == UXTW) && (instr->SixtyFourBits() == 0)) ||
-       (instr->ExtendMode() == UXTX))) {
-    if (instr->ImmExtendShift() > 0) {
-      AppendToOutput(", lsl #%" PRId32, instr->ImmExtendShift());
-    }
-  } else {
-    AppendToOutput(", %s", extend_mode[instr->ExtendMode()]);
-    if (instr->ImmExtendShift() > 0) {
-      AppendToOutput(" #%" PRId32, instr->ImmExtendShift());
-    }
-  }
-  return 3;
-}
-
-
-int Disassembler::SubstituteLSRegOffsetField(const Instruction* instr,
-                                             const char* format) {
-  VIXL_ASSERT(strncmp(format, "Offsetreg", 9) == 0);
-  const char* extend_mode[] = { "undefined", "undefined", "uxtw", "lsl",
-                                "undefined", "undefined", "sxtw", "sxtx" };
-  USE(format);
-
-  unsigned shift = instr->ImmShiftLS();
-  Extend ext = static_cast<Extend>(instr->ExtendMode());
-  char reg_type = ((ext == UXTW) || (ext == SXTW)) ? 'w' : 'x';
-
-  unsigned rm = instr->Rm();
-  if (rm == kZeroRegCode) {
-    AppendToOutput("%czr", reg_type);
-  } else {
-    AppendToOutput("%c%d", reg_type, rm);
-  }
-
-  // Extend mode UXTX is an alias for shift mode LSL here.
-  if (!((ext == UXTX) && (shift == 0))) {
-    AppendToOutput(", %s", extend_mode[ext]);
-    if (shift != 0) {
-      AppendToOutput(" #%d", instr->SizeLS());
-    }
-  }
-  return 9;
-}
-
-
-int Disassembler::SubstitutePrefetchField(const Instruction* instr,
-                                          const char* format) {
-  VIXL_ASSERT(format[0] == 'P');
-  USE(format);
-
-  static const char* hints[] = {"ld", "li", "st"};
-  static const char* stream_options[] = {"keep", "strm"};
-
-  unsigned hint = instr->PrefetchHint();
-  unsigned target = instr->PrefetchTarget() + 1;
-  unsigned stream = instr->PrefetchStream();
-
-  if ((hint >= (sizeof(hints) / sizeof(hints[0]))) || (target > 3)) {
-    // Unallocated prefetch operations.
-    int prefetch_mode = instr->ImmPrefetchOperation();
-    AppendToOutput("#0b%c%c%c%c%c",
-                   (prefetch_mode & (1 << 4)) ? '1' : '0',
-                   (prefetch_mode & (1 << 3)) ? '1' : '0',
-                   (prefetch_mode & (1 << 2)) ? '1' : '0',
-                   (prefetch_mode & (1 << 1)) ? '1' : '0',
-                   (prefetch_mode & (1 << 0)) ? '1' : '0');
-  } else {
-    VIXL_ASSERT(stream < (sizeof(stream_options) / sizeof(stream_options[0])));
-    AppendToOutput("p%sl%d%s", hints[hint], target, stream_options[stream]);
-  }
-  return 6;
-}
-
-int Disassembler::SubstituteBarrierField(const Instruction* instr,
-                                         const char* format) {
-  VIXL_ASSERT(format[0] == 'M');
-  USE(format);
-
-  static const char* options[4][4] = {
-    { "sy (0b0000)", "oshld", "oshst", "osh" },
-    { "sy (0b0100)", "nshld", "nshst", "nsh" },
-    { "sy (0b1000)", "ishld", "ishst", "ish" },
-    { "sy (0b1100)", "ld", "st", "sy" }
-  };
-  int domain = instr->ImmBarrierDomain();
-  int type = instr->ImmBarrierType();
-
-  AppendToOutput("%s", options[domain][type]);
-  return 1;
-}
-
-int Disassembler::SubstituteSysOpField(const Instruction* instr,
-                                       const char* format) {
-  VIXL_ASSERT(format[0] == 'G');
-  int op = -1;
-  switch (format[1]) {
-    case '1': op = instr->SysOp1(); break;
-    case '2': op = instr->SysOp2(); break;
-    default:
-      VIXL_UNREACHABLE();
-  }
-  AppendToOutput("#%d", op);
-  return 2;
-}
-
-int Disassembler::SubstituteCrField(const Instruction* instr,
-                                    const char* format) {
-  VIXL_ASSERT(format[0] == 'K');
-  int cr = -1;
-  switch (format[1]) {
-    case 'n': cr = instr->CRn(); break;
-    case 'm': cr = instr->CRm(); break;
-    default:
-      VIXL_UNREACHABLE();
-  }
-  AppendToOutput("C%d", cr);
-  return 2;
-}
-
-void Disassembler::ResetOutput() {
-  buffer_pos_ = 0;
-  buffer_[buffer_pos_] = 0;
-}
-
-
-void Disassembler::AppendToOutput(const char* format, ...) {
-  va_list args;
-  va_start(args, format);
-  buffer_pos_ += vsnprintf(&buffer_[buffer_pos_], buffer_size_ - buffer_pos_,
-                           format, args);
-  va_end(args);
-}
-
-
-void PrintDisassembler::ProcessOutput(const Instruction* instr) {
-  fprintf(stream_, "0x%016" PRIx64 "  %08" PRIx32 "\t\t%s\n",
-          reinterpret_cast<uint64_t>(instr),
-          instr->InstructionBits(),
-          GetOutput());
-}
-
-}  // namespace vixl
diff --git a/disas/libvixl/vixl/a64/instructions-a64.cc b/disas/libvixl/vixl/a64/instructions-a64.cc
deleted file mode 100644
index 33992f88a4..0000000000
--- a/disas/libvixl/vixl/a64/instructions-a64.cc
+++ /dev/null
@@ -1,622 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-// * Redistributions of source code must retain the above copyright notice,
-//   this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above copyright notice,
-//   this list of conditions and the following disclaimer in the documentation
-//   and/or other materials provided with the distribution.
-// * Neither the name of ARM Limited nor the names of its contributors may be
-//   used to endorse or promote products derived from this software without
-//   specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#include "vixl/a64/instructions-a64.h"
-#include "vixl/a64/assembler-a64.h"
-
-namespace vixl {
-
-
-// Floating-point infinity values.
-const float16 kFP16PositiveInfinity = 0x7c00;
-const float16 kFP16NegativeInfinity = 0xfc00;
-const float kFP32PositiveInfinity = rawbits_to_float(0x7f800000);
-const float kFP32NegativeInfinity = rawbits_to_float(0xff800000);
-const double kFP64PositiveInfinity =
-    rawbits_to_double(UINT64_C(0x7ff0000000000000));
-const double kFP64NegativeInfinity =
-    rawbits_to_double(UINT64_C(0xfff0000000000000));
-
-
-// The default NaN values (for FPCR.DN=1).
-const double kFP64DefaultNaN =3D rawbits_to_double(UINT64_C(0x7ff800000000= 0000)); -const float kFP32DefaultNaN =3D rawbits_to_float(0x7fc00000); -const float16 kFP16DefaultNaN =3D 0x7e00; - - -static uint64_t RotateRight(uint64_t value, - unsigned int rotate, - unsigned int width) { - VIXL_ASSERT(width <=3D 64); - rotate &=3D 63; - return ((value & ((UINT64_C(1) << rotate) - 1)) << - (width - rotate)) | (value >> rotate); -} - - -static uint64_t RepeatBitsAcrossReg(unsigned reg_size, - uint64_t value, - unsigned width) { - VIXL_ASSERT((width =3D=3D 2) || (width =3D=3D 4) || (width =3D=3D 8) || = (width =3D=3D 16) || - (width =3D=3D 32)); - VIXL_ASSERT((reg_size =3D=3D kWRegSize) || (reg_size =3D=3D kXRegSize)); - uint64_t result =3D value & ((UINT64_C(1) << width) - 1); - for (unsigned i =3D width; i < reg_size; i *=3D 2) { - result |=3D (result << i); - } - return result; -} - - -bool Instruction::IsLoad() const { - if (Mask(LoadStoreAnyFMask) !=3D LoadStoreAnyFixed) { - return false; - } - - if (Mask(LoadStorePairAnyFMask) =3D=3D LoadStorePairAnyFixed) { - return Mask(LoadStorePairLBit) !=3D 0; - } else { - LoadStoreOp op =3D static_cast(Mask(LoadStoreMask)); - switch (op) { - case LDRB_w: - case LDRH_w: - case LDR_w: - case LDR_x: - case LDRSB_w: - case LDRSB_x: - case LDRSH_w: - case LDRSH_x: - case LDRSW_x: - case LDR_b: - case LDR_h: - case LDR_s: - case LDR_d: - case LDR_q: return true; - default: return false; - } - } -} - - -bool Instruction::IsStore() const { - if (Mask(LoadStoreAnyFMask) !=3D LoadStoreAnyFixed) { - return false; - } - - if (Mask(LoadStorePairAnyFMask) =3D=3D LoadStorePairAnyFixed) { - return Mask(LoadStorePairLBit) =3D=3D 0; - } else { - LoadStoreOp op =3D static_cast(Mask(LoadStoreMask)); - switch (op) { - case STRB_w: - case STRH_w: - case STR_w: - case STR_x: - case STR_b: - case STR_h: - case STR_s: - case STR_d: - case STR_q: return true; - default: return false; - } - } -} - - -// Logical immediates can't encode zero, so a 
return value of zero is used= to -// indicate a failure case. Specifically, where the constraints on imm_s a= re -// not met. -uint64_t Instruction::ImmLogical() const { - unsigned reg_size =3D SixtyFourBits() ? kXRegSize : kWRegSize; - int32_t n =3D BitN(); - int32_t imm_s =3D ImmSetBits(); - int32_t imm_r =3D ImmRotate(); - - // An integer is constructed from the n, imm_s and imm_r bits according = to - // the following table: - // - // N imms immr size S R - // 1 ssssss rrrrrr 64 UInt(ssssss) UInt(rrrrrr) - // 0 0sssss xrrrrr 32 UInt(sssss) UInt(rrrrr) - // 0 10ssss xxrrrr 16 UInt(ssss) UInt(rrrr) - // 0 110sss xxxrrr 8 UInt(sss) UInt(rrr) - // 0 1110ss xxxxrr 4 UInt(ss) UInt(rr) - // 0 11110s xxxxxr 2 UInt(s) UInt(r) - // (s bits must not be all set) - // - // A pattern is constructed of size bits, where the least significant S+1 - // bits are set. The pattern is rotated right by R, and repeated across a - // 32 or 64-bit value, depending on destination register width. - // - - if (n =3D=3D 1) { - if (imm_s =3D=3D 0x3f) { - return 0; - } - uint64_t bits =3D (UINT64_C(1) << (imm_s + 1)) - 1; - return RotateRight(bits, imm_r, 64); - } else { - if ((imm_s >> 1) =3D=3D 0x1f) { - return 0; - } - for (int width =3D 0x20; width >=3D 0x2; width >>=3D 1) { - if ((imm_s & width) =3D=3D 0) { - int mask =3D width - 1; - if ((imm_s & mask) =3D=3D mask) { - return 0; - } - uint64_t bits =3D (UINT64_C(1) << ((imm_s & mask) + 1)) - 1; - return RepeatBitsAcrossReg(reg_size, - RotateRight(bits, imm_r & mask, width), - width); - } - } - } - VIXL_UNREACHABLE(); - return 0; -} - - -uint32_t Instruction::ImmNEONabcdefgh() const { - return ImmNEONabc() << 5 | ImmNEONdefgh(); -} - - -float Instruction::Imm8ToFP32(uint32_t imm8) { - // Imm8: abcdefgh (8 bits) - // Single: aBbb.bbbc.defg.h000.0000.0000.0000.0000 (32 bits) - // where B is b ^ 1 - uint32_t bits =3D imm8; - uint32_t bit7 =3D (bits >> 7) & 0x1; - uint32_t bit6 =3D (bits >> 6) & 0x1; - uint32_t bit5_to_0 =3D bits & 0x3f; - 
-  uint32_t result = (bit7 << 31) | ((32 - bit6) << 25) | (bit5_to_0 << 19);
-
-  return rawbits_to_float(result);
-}
-
-
-float Instruction::ImmFP32() const {
-  return Imm8ToFP32(ImmFP());
-}
-
-
-double Instruction::Imm8ToFP64(uint32_t imm8) {
-  // Imm8: abcdefgh (8 bits)
-  // Double: aBbb.bbbb.bbcd.efgh.0000.0000.0000.0000
-  //         0000.0000.0000.0000.0000.0000.0000.0000 (64 bits)
-  // where B is b ^ 1
-  uint32_t bits = imm8;
-  uint64_t bit7 = (bits >> 7) & 0x1;
-  uint64_t bit6 = (bits >> 6) & 0x1;
-  uint64_t bit5_to_0 = bits & 0x3f;
-  uint64_t result = (bit7 << 63) | ((256 - bit6) << 54) | (bit5_to_0 << 48);
-
-  return rawbits_to_double(result);
-}
-
-
-double Instruction::ImmFP64() const {
-  return Imm8ToFP64(ImmFP());
-}
-
-
-float Instruction::ImmNEONFP32() const {
-  return Imm8ToFP32(ImmNEONabcdefgh());
-}
-
-
-double Instruction::ImmNEONFP64() const {
-  return Imm8ToFP64(ImmNEONabcdefgh());
-}
-
-
-unsigned CalcLSDataSize(LoadStoreOp op) {
-  VIXL_ASSERT((LSSize_offset + LSSize_width) == (kInstructionSize * 8));
-  unsigned size = static_cast<Instr>(op) >> LSSize_offset;
-  if ((op & LSVector_mask) != 0) {
-    // Vector register memory operations encode the access size in the "size"
-    // and "opc" fields.
-    if ((size == 0) && ((op & LSOpc_mask) >> LSOpc_offset) >= 2) {
-      size = kQRegSizeInBytesLog2;
-    }
-  }
-  return size;
-}
-
-
-unsigned CalcLSPairDataSize(LoadStorePairOp op) {
-  VIXL_STATIC_ASSERT(kXRegSizeInBytes == kDRegSizeInBytes);
-  VIXL_STATIC_ASSERT(kWRegSizeInBytes == kSRegSizeInBytes);
-  switch (op) {
-    case STP_q:
-    case LDP_q: return kQRegSizeInBytesLog2;
-    case STP_x:
-    case LDP_x:
-    case STP_d:
-    case LDP_d: return kXRegSizeInBytesLog2;
-    default: return kWRegSizeInBytesLog2;
-  }
-}
-
-
-int Instruction::ImmBranchRangeBitwidth(ImmBranchType branch_type) {
-  switch (branch_type) {
-    case UncondBranchType:
-      return ImmUncondBranch_width;
-    case CondBranchType:
-      return ImmCondBranch_width;
-    case CompareBranchType:
-      return ImmCmpBranch_width;
-    case TestBranchType:
-      return ImmTestBranch_width;
-    default:
-      VIXL_UNREACHABLE();
-      return 0;
-  }
-}
-
-
-int32_t Instruction::ImmBranchForwardRange(ImmBranchType branch_type) {
-  int32_t encoded_max = 1 << (ImmBranchRangeBitwidth(branch_type) - 1);
-  return encoded_max * kInstructionSize;
-}
-
-
-bool Instruction::IsValidImmPCOffset(ImmBranchType branch_type,
-                                     int64_t offset) {
-  return is_intn(ImmBranchRangeBitwidth(branch_type), offset);
-}
-
-
-const Instruction* Instruction::ImmPCOffsetTarget() const {
-  const Instruction * base = this;
-  ptrdiff_t offset;
-  if (IsPCRelAddressing()) {
-    // ADR and ADRP.
-    offset = ImmPCRel();
-    if (Mask(PCRelAddressingMask) == ADRP) {
-      base = AlignDown(base, kPageSize);
-      offset *= kPageSize;
-    } else {
-      VIXL_ASSERT(Mask(PCRelAddressingMask) == ADR);
-    }
-  } else {
-    // All PC-relative branches.
-    VIXL_ASSERT(BranchType() != UnknownBranchType);
-    // Relative branch offsets are instruction-size-aligned.
-    offset = ImmBranch() << kInstructionSizeLog2;
-  }
-  return base + offset;
-}
-
-
-int Instruction::ImmBranch() const {
-  switch (BranchType()) {
-    case CondBranchType: return ImmCondBranch();
-    case UncondBranchType: return ImmUncondBranch();
-    case CompareBranchType: return ImmCmpBranch();
-    case TestBranchType: return ImmTestBranch();
-    default: VIXL_UNREACHABLE();
-  }
-  return 0;
-}
-
-
-void Instruction::SetImmPCOffsetTarget(const Instruction* target) {
-  if (IsPCRelAddressing()) {
-    SetPCRelImmTarget(target);
-  } else {
-    SetBranchImmTarget(target);
-  }
-}
-
-
-void Instruction::SetPCRelImmTarget(const Instruction* target) {
-  ptrdiff_t imm21;
-  if ((Mask(PCRelAddressingMask) == ADR)) {
-    imm21 = target - this;
-  } else {
-    VIXL_ASSERT(Mask(PCRelAddressingMask) == ADRP);
-    uintptr_t this_page = reinterpret_cast<uintptr_t>(this) / kPageSize;
-    uintptr_t target_page = reinterpret_cast<uintptr_t>(target) / kPageSize;
-    imm21 = target_page - this_page;
-  }
-  Instr imm = Assembler::ImmPCRelAddress(static_cast<int32_t>(imm21));
-
-  SetInstructionBits(Mask(~ImmPCRel_mask) | imm);
-}
-
-
-void Instruction::SetBranchImmTarget(const Instruction* target) {
-  VIXL_ASSERT(((target - this) & 3) == 0);
-  Instr branch_imm = 0;
-  uint32_t imm_mask = 0;
-  int offset = static_cast<int>((target - this) >> kInstructionSizeLog2);
-  switch (BranchType()) {
-    case CondBranchType: {
-      branch_imm = Assembler::ImmCondBranch(offset);
-      imm_mask = ImmCondBranch_mask;
-      break;
-    }
-    case UncondBranchType: {
-      branch_imm = Assembler::ImmUncondBranch(offset);
-      imm_mask = ImmUncondBranch_mask;
-      break;
-    }
-    case CompareBranchType: {
-      branch_imm = Assembler::ImmCmpBranch(offset);
-      imm_mask = ImmCmpBranch_mask;
-      break;
-    }
-    case TestBranchType: {
-      branch_imm = Assembler::ImmTestBranch(offset);
-      imm_mask = ImmTestBranch_mask;
-      break;
-    }
-    default: VIXL_UNREACHABLE();
-  }
-  SetInstructionBits(Mask(~imm_mask) | branch_imm);
-}
-
-
-void Instruction::SetImmLLiteral(const Instruction* source) {
-  VIXL_ASSERT(IsWordAligned(source));
-  ptrdiff_t offset = (source - this) >> kLiteralEntrySizeLog2;
-  Instr imm = Assembler::ImmLLiteral(static_cast<int>(offset));
-  Instr mask = ImmLLiteral_mask;
-
-  SetInstructionBits(Mask(~mask) | imm);
-}
-
-
-VectorFormat VectorFormatHalfWidth(const VectorFormat vform) {
-  VIXL_ASSERT(vform == kFormat8H || vform == kFormat4S || vform == kFormat2D ||
-              vform == kFormatH || vform == kFormatS || vform == kFormatD);
-  switch (vform) {
-    case kFormat8H: return kFormat8B;
-    case kFormat4S: return kFormat4H;
-    case kFormat2D: return kFormat2S;
-    case kFormatH: return kFormatB;
-    case kFormatS: return kFormatH;
-    case kFormatD: return kFormatS;
-    default: VIXL_UNREACHABLE(); return kFormatUndefined;
-  }
-}
-
-
-VectorFormat VectorFormatDoubleWidth(const VectorFormat vform) {
-  VIXL_ASSERT(vform == kFormat8B || vform == kFormat4H || vform == kFormat2S ||
-              vform == kFormatB || vform == kFormatH || vform == kFormatS);
-  switch (vform) {
-    case kFormat8B: return kFormat8H;
-    case kFormat4H: return kFormat4S;
-    case kFormat2S: return kFormat2D;
-    case kFormatB: return kFormatH;
-    case kFormatH: return kFormatS;
-    case kFormatS: return kFormatD;
-    default: VIXL_UNREACHABLE(); return kFormatUndefined;
-  }
-}
-
-
-VectorFormat VectorFormatFillQ(const VectorFormat vform) {
-  switch (vform) {
-    case kFormatB:
-    case kFormat8B:
-    case kFormat16B: return kFormat16B;
-    case kFormatH:
-    case kFormat4H:
-    case kFormat8H: return kFormat8H;
-    case kFormatS:
-    case kFormat2S:
-    case kFormat4S: return kFormat4S;
-    case kFormatD:
-    case kFormat1D:
-    case kFormat2D: return kFormat2D;
-    default: VIXL_UNREACHABLE(); return kFormatUndefined;
-  }
-}
-
-VectorFormat VectorFormatHalfWidthDoubleLanes(const VectorFormat vform) {
-  switch (vform) {
-    case kFormat4H: return kFormat8B;
-    case kFormat8H: return kFormat16B;
-    case kFormat2S: return kFormat4H;
-    case kFormat4S: return kFormat8H;
-    case kFormat1D: return kFormat2S;
-    case kFormat2D: return kFormat4S;
-    default: VIXL_UNREACHABLE(); return kFormatUndefined;
-  }
-}
-
-VectorFormat VectorFormatDoubleLanes(const VectorFormat vform) {
-  VIXL_ASSERT(vform == kFormat8B || vform == kFormat4H || vform == kFormat2S);
-  switch (vform) {
-    case kFormat8B: return kFormat16B;
-    case kFormat4H: return kFormat8H;
-    case kFormat2S: return kFormat4S;
-    default: VIXL_UNREACHABLE(); return kFormatUndefined;
-  }
-}
-
-
-VectorFormat VectorFormatHalfLanes(const VectorFormat vform) {
-  VIXL_ASSERT(vform == kFormat16B || vform == kFormat8H || vform == kFormat4S);
-  switch (vform) {
-    case kFormat16B: return kFormat8B;
-    case kFormat8H: return kFormat4H;
-    case kFormat4S: return kFormat2S;
-    default: VIXL_UNREACHABLE(); return kFormatUndefined;
-  }
-}
-
-
-VectorFormat ScalarFormatFromLaneSize(int laneSize) {
-  switch (laneSize) {
-    case 8: return kFormatB;
-    case 16: return kFormatH;
-    case 32: return kFormatS;
-    case 64: return kFormatD;
-    default: VIXL_UNREACHABLE(); return kFormatUndefined;
-  }
-}
-
-
-unsigned RegisterSizeInBitsFromFormat(VectorFormat vform) {
-  VIXL_ASSERT(vform != kFormatUndefined);
-  switch (vform) {
-    case kFormatB: return kBRegSize;
-    case kFormatH: return kHRegSize;
-    case kFormatS: return kSRegSize;
-    case kFormatD: return kDRegSize;
-    case kFormat8B:
-    case kFormat4H:
-    case kFormat2S:
-    case kFormat1D: return kDRegSize;
-    default: return kQRegSize;
-  }
-}
-
-
-unsigned RegisterSizeInBytesFromFormat(VectorFormat vform) {
-  return RegisterSizeInBitsFromFormat(vform) / 8;
-}
-
-
-unsigned LaneSizeInBitsFromFormat(VectorFormat vform) {
-  VIXL_ASSERT(vform != kFormatUndefined);
-  switch (vform) {
-    case kFormatB:
-    case kFormat8B:
-    case kFormat16B: return 8;
-    case kFormatH:
-    case kFormat4H:
-    case kFormat8H: return 16;
-    case kFormatS:
-    case kFormat2S:
-    case kFormat4S: return 32;
-    case kFormatD:
-    case kFormat1D:
-    case kFormat2D: return 64;
-    default: VIXL_UNREACHABLE(); return 0;
-  }
-}
-
-
-int LaneSizeInBytesFromFormat(VectorFormat vform) {
-  return LaneSizeInBitsFromFormat(vform) / 8;
-}
-
-
-int LaneSizeInBytesLog2FromFormat(VectorFormat vform) {
-  VIXL_ASSERT(vform != kFormatUndefined);
-  switch (vform) {
-    case kFormatB:
-    case kFormat8B:
-    case kFormat16B: return 0;
-    case kFormatH:
-    case kFormat4H:
-    case kFormat8H: return 1;
-    case kFormatS:
-    case kFormat2S:
-    case kFormat4S: return 2;
-    case kFormatD:
-    case kFormat1D:
-    case kFormat2D: return 3;
-    default: VIXL_UNREACHABLE(); return 0;
-  }
-}
-
-
-int LaneCountFromFormat(VectorFormat vform) {
-  VIXL_ASSERT(vform != kFormatUndefined);
-  switch (vform) {
-    case kFormat16B: return 16;
-    case kFormat8B:
-    case kFormat8H: return 8;
-    case kFormat4H:
-    case kFormat4S: return 4;
-    case kFormat2S:
-    case kFormat2D: return 2;
-    case kFormat1D:
-    case kFormatB:
-    case kFormatH:
-    case kFormatS:
-    case kFormatD: return 1;
-    default: VIXL_UNREACHABLE(); return 0;
-  }
-}
-
-
-int MaxLaneCountFromFormat(VectorFormat vform) {
-  VIXL_ASSERT(vform != kFormatUndefined);
-  switch (vform) {
-    case kFormatB:
-    case kFormat8B:
-    case kFormat16B: return 16;
-    case kFormatH:
-    case kFormat4H:
-    case kFormat8H: return 8;
-    case kFormatS:
-    case kFormat2S:
-    case kFormat4S: return 4;
-    case kFormatD:
-    case kFormat1D:
-    case kFormat2D: return 2;
-    default: VIXL_UNREACHABLE(); return 0;
-  }
-}
-
-
-// Does 'vform' indicate a vector format or a scalar format?
-bool IsVectorFormat(VectorFormat vform) {
-  VIXL_ASSERT(vform != kFormatUndefined);
-  switch (vform) {
-    case kFormatB:
-    case kFormatH:
-    case kFormatS:
-    case kFormatD: return false;
-    default: return true;
-  }
-}
-
-
-int64_t MaxIntFromFormat(VectorFormat vform) {
-  return INT64_MAX >> (64 - LaneSizeInBitsFromFormat(vform));
-}
-
-
-int64_t MinIntFromFormat(VectorFormat vform) {
-  return INT64_MIN >> (64 - LaneSizeInBitsFromFormat(vform));
-}
-
-
-uint64_t MaxUintFromFormat(VectorFormat vform) {
-  return UINT64_MAX >> (64 - LaneSizeInBitsFromFormat(vform));
-}
-}  // namespace vixl
-
diff --git a/disas/libvixl/vixl/compiler-intrinsics.cc b/disas/libvixl/vixl/compiler-intrinsics.cc
deleted file mode 100644
index fd551faeb1..0000000000
--- a/disas/libvixl/vixl/compiler-intrinsics.cc
+++ /dev/null
@@ -1,144 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#include "compiler-intrinsics.h"
-
-namespace vixl {
-
-
-int CountLeadingSignBitsFallBack(int64_t value, int width) {
-  VIXL_ASSERT(IsPowerOf2(width) && (width <= 64));
-  if (value >= 0) {
-    return CountLeadingZeros(value, width) - 1;
-  } else {
-    return CountLeadingZeros(~value, width) - 1;
-  }
-}
-
-
-int CountLeadingZerosFallBack(uint64_t value, int width) {
-  VIXL_ASSERT(IsPowerOf2(width) && (width <= 64));
-  if (value == 0) {
-    return width;
-  }
-  int count = 0;
-  value = value << (64 - width);
-  if ((value & UINT64_C(0xffffffff00000000)) == 0) {
-    count += 32;
-    value = value << 32;
-  }
-  if ((value & UINT64_C(0xffff000000000000)) == 0) {
-    count += 16;
-    value = value << 16;
-  }
-  if ((value & UINT64_C(0xff00000000000000)) == 0) {
-    count += 8;
-    value = value << 8;
-  }
-  if ((value & UINT64_C(0xf000000000000000)) == 0) {
-    count += 4;
-    value = value << 4;
-  }
-  if ((value & UINT64_C(0xc000000000000000)) == 0) {
-    count += 2;
-    value = value << 2;
-  }
-  if ((value & UINT64_C(0x8000000000000000)) == 0) {
-    count += 1;
-  }
-  count += (value == 0);
-  return count;
-}
-
-
-int CountSetBitsFallBack(uint64_t value, int width) {
-  VIXL_ASSERT(IsPowerOf2(width) && (width <= 64));
-
-  // Mask out unused bits to ensure that they are not counted.
-  value &= (UINT64_C(0xffffffffffffffff) >> (64 - width));
-
-  // Add up the set bits.
-  // The algorithm works by adding pairs of bit fields together iteratively,
-  // where the size of each bit field doubles each time.
-  // An example for an 8-bit value:
-  // Bits:  h  g  f  e  d  c  b  a
-  //         \ |   \ |   \ |   \ |
-  // value = h+g   f+e   d+c   b+a
-  //          \    |      \    |
-  // value = h+g+f+e     d+c+b+a
-  //              \          |
-  // value = h+g+f+e+d+c+b+a
-  const uint64_t kMasks[] = {
-    UINT64_C(0x5555555555555555),
-    UINT64_C(0x3333333333333333),
-    UINT64_C(0x0f0f0f0f0f0f0f0f),
-    UINT64_C(0x00ff00ff00ff00ff),
-    UINT64_C(0x0000ffff0000ffff),
-    UINT64_C(0x00000000ffffffff),
-  };
-
-  for (unsigned i = 0; i < (sizeof(kMasks) / sizeof(kMasks[0])); i++) {
-    int shift = 1 << i;
-    value = ((value >> shift) & kMasks[i]) + (value & kMasks[i]);
-  }
-
-  return static_cast<int>(value);
-}
-
-
-int CountTrailingZerosFallBack(uint64_t value, int width) {
-  VIXL_ASSERT(IsPowerOf2(width) && (width <= 64));
-  int count = 0;
-  value = value << (64 - width);
-  if ((value & UINT64_C(0xffffffff)) == 0) {
-    count += 32;
-    value = value >> 32;
-  }
-  if ((value & 0xffff) == 0) {
-    count += 16;
-    value = value >> 16;
-  }
-  if ((value & 0xff) == 0) {
-    count += 8;
-    value = value >> 8;
-  }
-  if ((value & 0xf) == 0) {
-    count += 4;
-    value = value >> 4;
-  }
-  if ((value & 0x3) == 0) {
-    count += 2;
-    value = value >> 2;
-  }
-  if ((value & 0x1) == 0) {
-    count += 1;
-  }
-  count += (value == 0);
-  return count - (64 - width);
-}
-
-
-}  // namespace vixl
diff --git a/disas/libvixl/vixl/utils.cc b/disas/libvixl/vixl/utils.cc
deleted file mode 100644
index 69304d266d..0000000000
--- a/disas/libvixl/vixl/utils.cc
+++ /dev/null
@@ -1,142 +0,0 @@
-// Copyright 2015, ARM Limited
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are met:
-//
-//   * Redistributions of source code must retain the above copyright notice,
-//     this list of conditions and the following disclaimer.
-//   * Redistributions in binary form must reproduce the above copyright notice,
-//     this list of conditions and the following disclaimer in the documentation
-//     and/or other materials provided with the distribution.
-//   * Neither the name of ARM Limited nor the names of its contributors may be
-//     used to endorse or promote products derived from this software without
-//     specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
-// FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-// OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-#include "vixl/utils.h"
-#include <stdio.h>
-
-namespace vixl {
-
-uint32_t float_to_rawbits(float value) {
-  uint32_t bits = 0;
-  memcpy(&bits, &value, 4);
-  return bits;
-}
-
-
-uint64_t double_to_rawbits(double value) {
-  uint64_t bits = 0;
-  memcpy(&bits, &value, 8);
-  return bits;
-}
-
-
-float rawbits_to_float(uint32_t bits) {
-  float value = 0.0;
-  memcpy(&value, &bits, 4);
-  return value;
-}
-
-
-double rawbits_to_double(uint64_t bits) {
-  double value = 0.0;
-  memcpy(&value, &bits, 8);
-  return value;
-}
-
-
-uint32_t float_sign(float val) {
-  uint32_t rawbits = float_to_rawbits(val);
-  return unsigned_bitextract_32(31, 31, rawbits);
-}
-
-
-uint32_t float_exp(float val) {
-  uint32_t rawbits = float_to_rawbits(val);
-  return unsigned_bitextract_32(30, 23, rawbits);
-}
-
-
-uint32_t float_mantissa(float val) {
-  uint32_t rawbits = float_to_rawbits(val);
-  return unsigned_bitextract_32(22, 0, rawbits);
-}
-
-
-uint32_t double_sign(double val) {
-  uint64_t rawbits = double_to_rawbits(val);
-  return static_cast<uint32_t>(unsigned_bitextract_64(63, 63, rawbits));
-}
-
-
-uint32_t double_exp(double val) {
-  uint64_t rawbits = double_to_rawbits(val);
-  return static_cast<uint32_t>(unsigned_bitextract_64(62, 52, rawbits));
-}
-
-
-uint64_t double_mantissa(double val) {
-  uint64_t rawbits = double_to_rawbits(val);
-  return unsigned_bitextract_64(51, 0, rawbits);
-}
-
-
-float float_pack(uint32_t sign, uint32_t exp, uint32_t mantissa) {
-  uint32_t bits = (sign << 31) | (exp << 23) | mantissa;
-  return rawbits_to_float(bits);
-}
-
-
-double double_pack(uint64_t sign, uint64_t exp, uint64_t mantissa) {
-  uint64_t bits = (sign << 63) | (exp << 52) | mantissa;
-  return rawbits_to_double(bits);
-}
-
-
-int float16classify(float16 value) {
-  uint16_t exponent_max = (1 << 5) - 1;
-  uint16_t exponent_mask = exponent_max << 10;
-  uint16_t mantissa_mask = (1 << 10) - 1;
-
-  uint16_t exponent = (value & exponent_mask) >> 10;
-  uint16_t mantissa = value & mantissa_mask;
-  if (exponent == 0) {
-    if (mantissa == 0) {
-      return FP_ZERO;
-    }
-    return FP_SUBNORMAL;
-  } else if (exponent == exponent_max) {
-    if (mantissa == 0) {
-      return FP_INFINITE;
-    }
-    return FP_NAN;
-  }
-  return FP_NORMAL;
-}
-
-
-unsigned CountClearHalfWords(uint64_t imm, unsigned reg_size) {
-  VIXL_ASSERT((reg_size % 8) == 0);
-  int count = 0;
-  for (unsigned i = 0; i < (reg_size / 16); i++) {
-    if ((imm & 0xffff) == 0) {
-      count++;
-    }
-    imm >>= 16;
-  }
-  return count;
-}
-
-}  // namespace vixl
diff --git a/disas/meson.build b/disas/meson.build
index 7da48ea74a..ba22f7cbcd 100644
--- a/disas/meson.build
+++ b/disas/meson.build
@@ -1,9 +1,4 @@
-libvixl_ss = ss.source_set()
-subdir('libvixl')
-
 common_ss.add(when: 'CONFIG_ALPHA_DIS', if_true: files('alpha.c'))
-common_ss.add(when: 'CONFIG_ARM_A64_DIS', if_true: files('arm-a64.cc'))
-common_ss.add_all(when: 'CONFIG_ARM_A64_DIS', if_true: libvixl_ss)
 common_ss.add(when: 'CONFIG_CRIS_DIS', if_true: files('cris.c'))
 common_ss.add(when: 'CONFIG_HEXAGON_DIS', if_true: files('hexagon.c'))
 common_ss.add(when: 'CONFIG_HPPA_DIS', if_true: files('hppa.c'))
diff --git a/scripts/clean-header-guards.pl b/scripts/clean-header-guards.pl
index a6680253b1..a7fd8dc99f 100755
--- a/scripts/clean-header-guards.pl
+++ b/scripts/clean-header-guards.pl
@@ -32,8 +32,8 @@ use warnings;
 use Getopt::Std;
 
 # Stuff we don't want to clean because we import it into our tree:
-my $exclude = qr,^(disas/libvixl/|include/standard-headers/
-                |linux-headers/|pc-bios/|tests/tcg/|tests/multiboot/),x;
+my $exclude = qr,^(include/standard-headers/|linux-headers/
+                |pc-bios/|tests/tcg/|tests/multiboot/),x;
 # Stuff that is expected to fail the preprocessing test:
 my $exclude_cpp = qr,^include/libdecnumber/decNumberLocal.h,;
 
diff --git a/scripts/clean-includes b/scripts/clean-includes
index aaa7d4ceb3..d37bd4f692 100755
--- a/scripts/clean-includes
+++ b/scripts/clean-includes
@@ -51,7 +51,7 @@ GIT=no
 DUPHEAD=no
 
 # Extended regular expression defining files to ignore when using --all
-XDIRREGEX='^(tests/tcg|tests/multiboot|pc-bios|disas/libvixl)'
+XDIRREGEX='^(tests/tcg|tests/multiboot|pc-bios)'
 
 while true
 do
diff --git a/scripts/coverity-scan/COMPONENTS.md b/scripts/coverity-scan/COMPONENTS.md
index 183f26a32c..de2eb96241 100644
--- a/scripts/coverity-scan/COMPONENTS.md
+++ b/scripts/coverity-scan/COMPONENTS.md
@@ -87,9 +87,6 @@ io
 ipmi
   ~ (/qemu)?((/include)?/hw/ipmi/.*)
 
-libvixl
-  ~ (/qemu)?(/disas/libvixl/.*)
-
 migration
   ~ (/qemu)?((/include)?/migration/.*)
 
-- 
2.31.1