From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, ltaylorsimpson@gmail.com, bcain@quicinc.com,
 richard.henderson@linaro.org, philmd@linaro.org, alex.bennee@linaro.org
Subject: [RFC PATCH v1 26/43] helper-to-tcg: PrepareForTcgPass, canonicalize IR
Date: Thu, 21 Nov 2024 02:49:30 +0100
Message-ID: <20241121014947.18666-27-anjo@rev.ng>
In-Reply-To: <20241121014947.18666-1-anjo@rev.ng>
References: <20241121014947.18666-1-anjo@rev.ng>
Iterates over the IR with the goal of converting it to a form closer to
TCG, taking care of IR disparities between LLVM and TCG. This also
simplifies the backend by concentrating the bulk of the custom IR
transformations in one place, so the backend itself can stay as simple
as possible.

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
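Notes (editorial illustration, not part of the patch itself):

The icmp+select specialization, for instance, rewrites IR of the form

    %c = icmp sgt <4 x i32> %a, %b
    %r = select <4 x i1> %c, <4 x i32> %a, <4 x i32> %b

into the equivalent

    %r = call <4 x i32> @llvm.smax.v4i32(<4 x i32> %a, <4 x i32> %b)

which maps directly onto a TCG gvec smax operation.

The new canonicalizeIR() entry point can also be exercised on its own.
The following is a minimal sketch of a hypothetical standalone driver
(not part of this series; it assumes an LLVM development setup and only
uses the public parseIRFile() API plus the header added by this patch):

    // canonicalize-driver.cpp: parse an IR module, canonicalize it,
    // and dump the result to stdout.
    #include <memory>
    #include <llvm/IR/LLVMContext.h>
    #include <llvm/IR/Module.h>
    #include <llvm/IRReader/IRReader.h>
    #include <llvm/Support/SourceMgr.h>
    #include <llvm/Support/raw_ostream.h>

    #include "CanonicalizeIR.h"

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            return 1;
        }
        llvm::LLVMContext Ctx;
        llvm::SMDiagnostic Err;
        // e.g. IR generated with: clang -S -emit-llvm helper.c
        std::unique_ptr<llvm::Module> M =
            llvm::parseIRFile(argv[1], Err, Ctx);
        if (!M) {
            Err.print(argv[0], llvm::errs());
            return 1;
        }
        canonicalizeIR(*M); // rewrites the module in place
        M->print(llvm::outs(), nullptr);
        return 0;
    }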
 subprojects/helper-to-tcg/meson.build         |    1 +
 .../PrepareForTcgPass/CanonicalizeIR.cpp      | 1000 +++++++++++++++++
 .../passes/PrepareForTcgPass/CanonicalizeIR.h |   25 +
 .../PrepareForTcgPass/PrepareForTcgPass.cpp   |    2 +
 4 files changed, 1028 insertions(+)
 create mode 100644 subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.cpp
 create mode 100644 subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.h

diff --git a/subprojects/helper-to-tcg/meson.build b/subprojects/helper-to-tcg/meson.build
index 6b18734bad..50bb926f49 100644
--- a/subprojects/helper-to-tcg/meson.build
+++ b/subprojects/helper-to-tcg/meson.build
@@ -48,6 +48,7 @@ sources = [
     'passes/PseudoInst.cpp',
     'passes/PrepareForTcgPass/PrepareForTcgPass.cpp',
     'passes/PrepareForTcgPass/TransformGEPs.cpp',
+    'passes/PrepareForTcgPass/CanonicalizeIR.cpp',
 ]
 
 clang = bindir / 'clang'
diff --git a/subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.cpp b/subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.cpp
new file mode 100644
index 0000000000..d53b7b8580
--- /dev/null
+++ b/subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.cpp
@@ -0,0 +1,1000 @@
+//
+// Copyright(c) 2024 rev.ng Labs Srl. All Rights Reserved.
+//
+// This program is free software; you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation; either version 2 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program; if not, see <http://www.gnu.org/licenses/>.
+//
+
+#include "CanonicalizeIR.h"
+#include <PseudoInst.h>
+#include <llvm-compat.h>
+
+#include <llvm/ADT/DenseMap.h>
+#include <llvm/ADT/PostOrderIterator.h>
+#include <llvm/ADT/SmallVector.h>
+#include <llvm/IR/BasicBlock.h>
+#include <llvm/IR/Constants.h>
+#include <llvm/IR/DerivedTypes.h>
+#include <llvm/IR/Function.h>
+#include <llvm/IR/IRBuilder.h>
+#include <llvm/IR/InstIterator.h>
+#include <llvm/IR/Instructions.h>
+#include <llvm/IR/Intrinsics.h>
+#include <llvm/IR/LLVMContext.h>
+#include <llvm/IR/Metadata.h>
+#include <llvm/IR/Module.h>
+#include <llvm/IR/PatternMatch.h>
+#include <llvm/Support/Debug.h>
+
+using namespace llvm;
+using namespace PatternMatch;
+
+// Needed to track and remove instructions not handled by a subsequent
+// dead-code elimination; this applies to calls to pseudo instructions in
+// particular.
+//
+// TODO: Can we instead make pseudo instructions side effect free via
+// attributes?
+using EraseInstVec = SmallVector<Instruction *, 8>;
+using UsageCountMap = DenseMap<Value *, unsigned>;
+
+// Helper function to remove an instruction only if all uses have been
+// removed. This way we can keep track of instruction uses without having to
+// modify the IR, or without having to iterate over all uses every time we
+// wish to remove an instruction.
+static void addToEraseVectorIfUnused(EraseInstVec &InstToErase,
+                                     UsageCountMap &UsageMap, Value *V)
+{
+    auto *I = dyn_cast<Instruction>(V);
+    if (!I) {
+        return;
+    }
+
+    // Add V to map if not there
+    if (UsageMap.count(V) == 0) {
+        UsageMap[V] = V->getNumUses();
+    }
+
+    // Erase if count reaches zero
+    if (--UsageMap[V] == 0) {
+        InstToErase.push_back(I);
+        UsageMap.erase(V);
+    }
+}
+
+// Forward declarations of IR transformations used in canonicalizing the IR
+static void upcastAshr(Instruction *I);
+static void convertInsertShuffleToSplat(Module &M, Instruction *I);
+
+static void simplifyVecBinOpWithSplat(EraseInstVec &InstToErase,
+                                      UsageCountMap &UsageMap, Module &M,
+                                      BinaryOperator *BinOp);
+
+static void convertSelectICmp(Module &M, SelectInst *Select, ICmpInst *ICmp);
+
+static void convertQemuLoadStoreToPseudoInst(Module &M, CallInst *Call);
+static void convertExceptionCallsToPseudoInst(Module &M, CallInst *Call);
+static void convertVecStoreBitcastToPseudoInst(EraseInstVec &InstToErase,
+                                               Module &M, StoreInst *Store);
+static void convertICmpBrToPseudoInst(LLVMContext &Context,
+                                      EraseInstVec &InstToErase, Module &M,
+                                      Instruction *I, BasicBlock *NextBb);
+
+void canonicalizeIR(Module &M)
+{
+    for (Function &F : M) {
+        if (F.isDeclaration()) {
+            continue;
+        }
+
+        EraseInstVec InstToErase;
+        UsageCountMap UsageMap;
+        LLVMContext &Context = F.getContext();
+
+        // Perform a first pass over all instructions in the function and apply
+        // IR transformations sequentially. NOTE: order matters here.
+        for (Instruction &I : instructions(F)) {
+            if (I.isArithmeticShift()) {
+                upcastAshr(&I);
+            }
+
+            convertInsertShuffleToSplat(M, &I);
+
+            // Depends on convertInsertShuffleToSplat for @VecSplat
+            // instructions
+            if (auto *BinOp = dyn_cast<BinaryOperator>(&I)) {
+                simplifyVecBinOpWithSplat(InstToErase, UsageMap, M, BinOp);
+            }
+
+            // Independent of the above
+            if (auto *ICmp = dyn_cast<ICmpInst>(&I)) {
+                for (auto *U : ICmp->users()) {
+                    auto *Select = dyn_cast<SelectInst>(U);
+                    if (Select and Select->getCondition() == ICmp) {
+                        convertSelectICmp(M, Select, ICmp);
+                    }
+                }
+            }
+
+            // Independent of the above, can run at any point
+            if (auto *Call = dyn_cast<CallInst>(&I)) {
+                convertQemuLoadStoreToPseudoInst(M, Call);
+                convertExceptionCallsToPseudoInst(M, Call);
+            }
+
+            // Depends on the other vector conversions performed above, needs
+            // to run last
+            if (auto *Store = dyn_cast<StoreInst>(&I)) {
+                convertVecStoreBitcastToPseudoInst(InstToErase, M, Store);
+            }
+        }
+
+        // Perform a second pass over the instructions. Can be combined with
+        // the above by using a worklist and making sure we have access to the
+        // BasicBlock.
+        //
+        // Depends on icmp,select -> @movcond
+        ReversePostOrderTraversal<Function *> RPOT(&F);
+        for (auto BbIt = RPOT.begin(); BbIt != RPOT.end(); ++BbIt) {
+            BasicBlock &BB = **BbIt;
+
+            auto NextIt = BbIt;
+            ++NextIt;
+            // The final block of the traversal has no block to fall
+            // through to.
+            BasicBlock *NextBb = (NextIt != RPOT.end()) ? &**NextIt : nullptr;
+
+            for (Instruction &I : BB) {
+                convertICmpBrToPseudoInst(Context, InstToErase, M, &I, NextBb);
+            }
+        }
+
+        // Finally clean up instructions we need to remove manually
+        for (Instruction *I : InstToErase) {
+            I->eraseFromParent();
+        }
+    }
+}
+
+static Value *upcastInt(IRBuilder<> &Builder, IntegerType *FinalIntTy, Value *V)
+{
+    if (auto *ConstInt = dyn_cast<ConstantInt>(V)) {
+        // Sign-extend constants, matching the CreateSExt used for
+        // non-constant values below.
+        return ConstantInt::get(FinalIntTy, ConstInt->getSExtValue(), true);
+    } else {
+        return Builder.CreateSExt(V, FinalIntTy);
+    }
+}
+
+// Convert
+//
+//   %2 = ashr i[8|16] %1, %0
+//
+// to
+//
+//   %3 = sext i[8|16] %1 to i32
+//   %4 = sext i[8|16] %0 to i32
+//   %5 = ashr i32 %3, %4
+//   %6 = trunc i32 %5 to i[8|16]
+//
+static void upcastAshr(Instruction *I)
+{
+    // Only care about scalar shifts on less than 32-bit integers
+    auto *IntTy = dyn_cast<IntegerType>(I->getType());
+    if (!IntTy or IntTy->getBitWidth() >= 32) {
+        return;
+    }
+
+    IRBuilder<> Builder(I);
+
+    Value *Op1 = I->getOperand(0);
+    Value *Op2 = I->getOperand(1);
+    auto *UpcastIntTy = Builder.getInt32Ty();
+    Op1 = upcastInt(Builder, UpcastIntTy, Op1);
+    Op2 = upcastInt(Builder, UpcastIntTy, Op2);
+
+    auto *AShr = Builder.CreateAShr(Op1, Op2);
+    auto *Trunc = Builder.CreateTrunc(AShr, I->getType());
+    I->replaceAllUsesWith(Trunc);
+}
+
+// Convert the vector splat idiom
+//
+//   %0 = insertelement ...
+//   %1 = shufflevector ...
+//
+// to
+//
+//   %0 = call @VecSplat.*
+//
+static void convertInsertShuffleToSplat(Module &M, Instruction *I)
+{
+    Value *SplatV;
+    if (match(I, compat_m_Shuffle(compat_m_InsertElt(m_Value(), m_Value(SplatV),
+                                                     m_ZeroInt()),
+                                  m_Value(), compat_m_ZeroMask()))) {
+
+        auto *VecTy = cast<VectorType>(I->getType());
+
+        IRBuilder<> Builder(I);
+        FunctionCallee Fn =
+            pseudoInstFunction(M, VecSplat, VecTy, {SplatV->getType()});
+        CallInst *Call = Builder.CreateCall(Fn, {SplatV});
+        I->replaceAllUsesWith(Call);
+    }
+}
+
+// Convert
+//
+//   %1 = @VecSplat(%0)
+//   %2 = ... op %1
+//
+// to
+//
+//   %2 = call @Vec[op]Scalar(..., %0)
+//
+// which more closely matches TCG gvec operations.
+static void simplifyVecBinOpWithSplat(EraseInstVec &InstToErase,
+                                      UsageCountMap &UsageMap, Module &M,
+                                      BinaryOperator *BinOp)
+{
+    Value *Lhs = BinOp->getOperand(0);
+    Value *Rhs = BinOp->getOperand(1);
+    if (!Lhs->getType()->isVectorTy() or !Rhs->getType()->isVectorTy()) {
+        return;
+    }
+
+    // Get splat value from constant or @VecSplat call
+    Value *SplatValue = nullptr;
+    if (auto *Const = dyn_cast<Constant>(Rhs)) {
+        SplatValue = Const->getSplatValue();
+    } else if (auto *Call = dyn_cast<CallInst>(Rhs)) {
+        if (getPseudoInstFromCall(Call) == VecSplat) {
+            SplatValue = Call->getOperand(0);
+        }
+    }
+
+    if (SplatValue == nullptr) {
+        return;
+    }
+
+    auto *VecTy = cast<VectorType>(Lhs->getType());
+    auto *ConstInt = dyn_cast<ConstantInt>(SplatValue);
+    bool ConstIsNegOne = ConstInt and ConstInt->getSExtValue() == -1;
+    bool IsNot = BinOp->getOpcode() == Instruction::Xor and ConstIsNegOne;
+    if (IsNot) {
+        FunctionCallee Fn = pseudoInstFunction(M, VecNot, VecTy, {VecTy});
+        IRBuilder<> Builder(BinOp);
+        CallInst *Call = Builder.CreateCall(Fn, {Lhs});
+        BinOp->replaceAllUsesWith(Call);
+    } else {
+        PseudoInst Inst;
+        switch (BinOp->getOpcode()) {
+        case Instruction::Add:
+            Inst = VecAddScalar;
+            break;
+        case Instruction::Sub:
+            Inst = VecSubScalar;
+            break;
+        case Instruction::Mul:
+            Inst = VecMulScalar;
+            break;
+        case Instruction::Xor:
+            Inst = VecXorScalar;
+            break;
+        case Instruction::Or:
+            Inst = VecOrScalar;
+            break;
+        case Instruction::And:
+            Inst = VecAndScalar;
+            break;
+        case Instruction::Shl:
+            Inst = VecShlScalar;
+            break;
+        case Instruction::LShr:
+            Inst = VecLShrScalar;
+            break;
+        case Instruction::AShr:
+            Inst = VecAShrScalar;
+            break;
+        default:
+            abort();
+        }
+
+        IRBuilder<> Builder(BinOp);
+        // Scalar gvec shift operations use 32-bit scalars, whereas arithmetic
+        // operations use 64-bit scalars.
+        uint32_t SplatSize = SplatValue->getType()->getIntegerBitWidth();
+        if (BinOp->isShift()) {
+            if (SplatSize > 32) {
+                SplatValue =
+                    Builder.CreateTrunc(SplatValue, Builder.getInt32Ty());
+            }
+        } else {
+            if (SplatSize < 64) {
+                SplatValue =
+                    Builder.CreateZExt(SplatValue, Builder.getInt64Ty());
+            }
+        }
+        FunctionCallee Fn =
+            pseudoInstFunction(M, Inst, VecTy, {VecTy, SplatValue->getType()});
+        CallInst *Call = Builder.CreateCall(Fn, {Lhs, SplatValue});
+        BinOp->replaceAllUsesWith(Call);
+    }
+
+    InstToErase.push_back(BinOp);
+    addToEraseVectorIfUnused(InstToErase, UsageMap, Rhs);
+}
+
+// Convert
+//
+//   %2 = icmp [sgt|ugt|slt|ult] %0, %1
+//   %5 = select %2, %3, %4
+//
+// to
+//
+//   %5 = [s|u][max|min] %0, %1
+//
+// if possible. Results in cleaner IR, particularly useful for vector
+// instructions.
+static bool convertSelectICmpToMinMax(Module &M, SelectInst *Select,
+                                      ICmpInst *ICmp, ICmpInst::Predicate &Pred,
+                                      Value *ICmpOp0, Value *ICmpOp1,
+                                      Value *SelectOp0, Value *SelectOp1)
+{
+#if LLVM_VERSION_MAJOR > 11
+    if (ICmpOp0 != SelectOp0 or ICmpOp1 != SelectOp1) {
+        return false;
+    }
+
+    Intrinsic::ID Intrin;
+    switch (Pred) {
+    case ICmpInst::ICMP_SGT:
+        Intrin = Intrinsic::smax;
+        break;
+    case ICmpInst::ICMP_UGT:
+        Intrin = Intrinsic::umax;
+        break;
+    case ICmpInst::ICMP_SLT:
+        Intrin = Intrinsic::smin;
+        break;
+    case ICmpInst::ICMP_ULT:
+        Intrin = Intrinsic::umin;
+        break;
+    default:
+        return false;
+    }
+
+    auto Ty = Select->getType();
+    auto MaxMinF = Intrinsic::getDeclaration(&M, Intrin, {Ty});
+
+    IRBuilder<> Builder(Select);
+    auto Call = Builder.CreateCall(MaxMinF, {ICmpOp0, ICmpOp1});
+    Select->replaceAllUsesWith(Call);
+
+    return true;
+#else
+    return false;
+#endif
+}
+
+// In LLVM, icmp on vectors returns a vector of i1s, whereas TCG's gvec_cmp
+// returns a vector of the element type of its operands. This can result in
+// some subtle bugs. Convert
+//
+//   icmp   -> call @VecCompare
+//   select -> call @VecWideCondBitsel
+//
+static bool convertSelectICmpToVecBitsel(Module &M, SelectInst *Select,
+                                         ICmpInst *ICmp,
+                                         ICmpInst::Predicate &Pred,
+                                         Value *ICmpOp0, Value *ICmpOp1,
+                                         Value *SelectOp0, Value *SelectOp1)
+{
+    auto *ICmpVecTy = dyn_cast<VectorType>(ICmpOp0->getType());
+    auto *SelectVecTy = dyn_cast<VectorType>(Select->getType());
+    if (!ICmpVecTy or !SelectVecTy) {
+        return false;
+    }
+
+    Instruction *Cmp = ICmp;
+    {
+        IRBuilder<> Builder(Cmp);
+        FunctionCallee Fn =
+            pseudoInstFunction(M, VecCompare, ICmpVecTy,
+                               {Builder.getInt32Ty(), ICmpVecTy, ICmpVecTy});
+        CallInst *Call = Builder.CreateCall(
+            Fn,
+            {ConstantInt::get(Builder.getInt32Ty(), Pred), ICmpOp0, ICmpOp1});
+        Cmp = Call;
+    }
+
+    unsigned SrcWidth = ICmpVecTy->getElementType()->getIntegerBitWidth();
+    unsigned DstWidth = SelectVecTy->getElementType()->getIntegerBitWidth();
+
+    IRBuilder<> Builder(Select);
+    // Widen or narrow the compare result to the element width of the select
+    // before feeding it to @VecWideCondBitsel.
+    Value *Cond = Cmp;
+    if (SrcWidth < DstWidth) {
+        Cond = Builder.CreateSExt(Cmp, SelectVecTy);
+    } else if (SrcWidth > DstWidth) {
+        Cond = Builder.CreateTrunc(Cmp, SelectVecTy);
+    }
+    FunctionCallee Fn =
+        pseudoInstFunction(M, VecWideCondBitsel, SelectVecTy,
+                           {SelectVecTy, SelectVecTy, SelectVecTy});
+    CallInst *Call = Builder.CreateCall(Fn, {Cond, SelectOp0, SelectOp1});
+    Select->replaceAllUsesWith(Call);
+
+    return true;
+}
+
+// Convert
+//
+//   %2 = icmp [sgt|ugt|slt|ult] %0, %1
+//   %5 = select %2, %3, %4
+//
+// to
+//
+//   %5 = call @Movcond.[cond].*(%0, %1, %3, %4)
+//
+// to more closely match TCG semantics.
+static bool convertSelectICmpToMovcond(Module &M, SelectInst *Select,
+                                       ICmpInst *ICmp,
+                                       ICmpInst::Predicate &Pred,
+                                       Value *ICmpOp0, Value *ICmpOp1,
+                                       Value *SelectOp0, Value *SelectOp1)
+{
+    // We only handle integers, we have no movcond equivalent in gvec
+    auto *IntTy = dyn_cast<IntegerType>(Select->getType());
+    if (!IntTy) {
+        return false;
+    }
+
+    // If the type of the comparison does not match the return type of the
+    // select statement, we cannot do anything, so skip
+    if (ICmpOp0->getType() != IntTy) {
+        return false;
+    }
+
+    IRBuilder<> Builder(Select);
+    if (cast<IntegerType>(ICmpOp0->getType())->getBitWidth() <
+        IntTy->getBitWidth()) {
+        if (ICmp->isSigned(Pred)) {
+            ICmpOp0 = Builder.CreateSExt(ICmpOp0, IntTy);
+            ICmpOp1 = Builder.CreateSExt(ICmpOp1, IntTy);
+        } else {
+            ICmpOp0 = Builder.CreateZExt(ICmpOp0, IntTy);
+            ICmpOp1 = Builder.CreateZExt(ICmpOp1, IntTy);
+        }
+    }
+
+    // Create the @Movcond.[slt|...].* function
+    FunctionCallee Fn = pseudoInstFunction(M, Movcond, IntTy,
+                                           {IntTy, IntTy, IntTy, IntTy, IntTy});
+    CallInst *Call =
+        Builder.CreateCall(Fn, {ConstantInt::get(IntTy, Pred), ICmpOp0, ICmpOp1,
+                                SelectOp0, SelectOp1});
+    Select->replaceAllUsesWith(Call);
+
+    return true;
+}
+
+// Specialize
+//
+//   %2 = icmp [sgt|ugt|slt|ult] %0, %1
+//   %5 = select %2, %3, %4
+//
+// to either maximum/minimum, vector operations matching TCG, or a conditional
+// move that also matches TCG in semantics.
+static void convertSelectICmp(Module &M, SelectInst *Select, ICmpInst *ICmp)
+{
+    // Given
+    //   %2 = icmp [sgt|ugt|slt|ult] %0, %1
+    //   %5 = select %2, %3, %4
+    assert(Select->getCondition() == ICmp);
+    Value *ICmpOp0 = ICmp->getOperand(0);
+    Value *ICmpOp1 = ICmp->getOperand(1);
+    Value *SelectOp0 = Select->getTrueValue();
+    Value *SelectOp1 = Select->getFalseValue();
+    ICmpInst::Predicate Pred = ICmp->getPredicate();
+
+    // First, try to convert to min/max
+    //   %5 = [s|u][max|min] %0, %1
+    if (convertSelectICmpToMinMax(M, Select, ICmp, Pred, ICmpOp0, ICmpOp1,
+                                  SelectOp0, SelectOp1)) {
+        return;
+    }
+
+    // Second, try converting icmp -> @VecCompare, select -> @VecWideCondBitsel
+    if (convertSelectICmpToVecBitsel(M, Select, ICmp, Pred, ICmpOp0, ICmpOp1,
+                                     SelectOp0, SelectOp1)) {
+        return;
+    }
+
+    // If the min/max and vector conversions failed, we fall back to a movcond
+    //   %5 = call @Movcond.[cond].*(%0, %1, %3, %4)
+    convertSelectICmpToMovcond(M, Select, ICmp, Pred, ICmpOp0, ICmpOp1,
+                               SelectOp0, SelectOp1);
+}
+
+// Convert QEMU guest loads/stores represented by calls such as
+//
+//   call cpu_ldub*(),
+//   call cpu_stb*(),
+//
+// and friends, to pseudo instructions
+//
+//   %5 = call @GuestLoad.*(%addr, %sign, %size, %endian);
+//        call @GuestStore.*(%addr, %value, %size, %endian);
+//
+// Makes the backend agnostic to what instructions or calls are used to
+// represent loads and stores.
+static void convertQemuLoadStoreToPseudoInst(Module &M, CallInst *Call)
+{
+    Function *F = Call->getCalledFunction();
+    StringRef Name = F->getName();
+    if (Name.consume_front("cpu_")) {
+        bool IsLoad = Name.consume_front("ld");
+        bool IsStore = !IsLoad and Name.consume_front("st");
+        if (IsLoad or IsStore) {
+            bool Signed = !Name.consume_front("u");
+            uint8_t Size = 0;
+            switch (Name[0]) {
+            case 'b':
+                Size = 1;
+                break;
+            case 'w':
+                Size = 2;
+                break;
+            case 'l':
+                Size = 4;
+                break;
+            case 'q':
+                Size = 8;
+                break;
+            default:
+                abort();
+            }
+
+            uint8_t Endianness = 0; // unknown
+            if (Size > 1) {
+                Name = Name.drop_front(2);
+                switch (Name[0]) {
+                case 'l':
+                    Endianness = 1;
+                    break;
+                case 'b':
+                    Endianness = 2;
+                    break;
+                default:
+                    abort();
+                }
+            }
+
+            IRBuilder<> Builder(Call);
+            Value *AddrOp = Call->getArgOperand(1);
+            IntegerType *AddrTy = cast<IntegerType>(AddrOp->getType());
+            IntegerType *FlagTy = Builder.getInt8Ty();
+            Value *SizeOp = ConstantInt::get(FlagTy, Size);
+            Value *EndianOp = ConstantInt::get(FlagTy, Endianness);
+            CallInst *NewCall;
+            if (IsLoad) {
+                Value *SignOp = ConstantInt::get(FlagTy, Signed);
+                IntegerType *RetTy = cast<IntegerType>(Call->getType());
+                FunctionCallee Fn = pseudoInstFunction(
+                    M, GuestLoad, RetTy, {AddrTy, FlagTy, FlagTy, FlagTy});
+                NewCall =
+                    Builder.CreateCall(Fn, {AddrOp, SignOp, SizeOp, EndianOp});
+            } else {
+                Value *ValueOp = Call->getArgOperand(2);
+                IntegerType *ValueTy = cast<IntegerType>(ValueOp->getType());
+                FunctionCallee Fn =
+                    pseudoInstFunction(M, GuestStore, Builder.getVoidTy(),
+                                       {AddrTy, ValueTy, FlagTy, FlagTy});
+                NewCall =
+                    Builder.CreateCall(Fn, {AddrOp, ValueOp, SizeOp, EndianOp});
+            }
+            Call->replaceAllUsesWith(NewCall);
+        }
+    }
+}
+
+// Convert QEMU exception calls
+//
+//   call raise_exception_ra(...),
+//   ...
+//
+// to a pseudo instruction
+//
+//   %5 = call @Exception.*(...);
+//
+// Makes the backend agnostic to what instructions or calls are used to
+// represent exceptions, and the list of sources can be expanded here.
+static void convertExceptionCallsToPseudoInst(Module &M, CallInst *Call)
+{
+    Function *F = Call->getCalledFunction();
+    StringRef Name = F->getName();
+    // NOTE: expand as needed
+    if (Name == "raise_exception_ra") {
+        IRBuilder<> Builder(Call);
+        Value *Op0 = Call->getArgOperand(0);
+        Value *Op1 = Call->getArgOperand(1);
+        FunctionCallee Fn =
+            pseudoInstFunction(M, Exception, Builder.getVoidTy(),
+                               {Op0->getType(), Op1->getType()});
+        CallInst *NewCall = Builder.CreateCall(Fn, {Op0, Op1});
+        Call->replaceAllUsesWith(NewCall);
+    }
+}
+
+//
+// The following functions help convert different kinds of instructions to
+// pseudo instructions, particularly ones that write to a pointer, i.e. the
+// Vec*Store pseudo instructions.
+//
+
+static PseudoInst instructionToStorePseudoInst(unsigned Opcode)
+{
+    switch (Opcode) {
+    case Instruction::Trunc:
+        return VecTruncStore;
+    case Instruction::ZExt:
+        return VecZExtStore;
+    case Instruction::SExt:
+        return VecSExtStore;
+    case Instruction::Select:
+        return VecSelectStore;
+    case Instruction::Add:
+        return VecAddStore;
+    case Instruction::Sub:
+        return VecSubStore;
+    case Instruction::Mul:
+        return VecMulStore;
+    case Instruction::Xor:
+        return VecXorStore;
+    case Instruction::Or:
+        return VecOrStore;
+    case Instruction::And:
+        return VecAndStore;
+    case Instruction::Shl:
+        return VecShlStore;
+    case Instruction::LShr:
+        return VecLShrStore;
+    case Instruction::AShr:
+        return VecAShrStore;
+    default:
+        abort();
+    }
+}
+
+static PseudoInst pseudoInstToStorePseudoInst(PseudoInst Inst)
+{
+    switch (Inst) {
+    case VecNot:
+        return VecNotStore;
+    case VecAddScalar:
+        return VecAddScalarStore;
+    case VecSubScalar:
+        return VecSubScalarStore;
+    case VecMulScalar:
+        return VecMulScalarStore;
+    case VecXorScalar:
+        return VecXorScalarStore;
+    case VecOrScalar:
+        return VecOrScalarStore;
+    case VecAndScalar:
+        return VecAndScalarStore;
+    case VecShlScalar:
+        return VecShlScalarStore;
+    case VecLShrScalar:
+        return VecLShrScalarStore;
+    case VecAShrScalar:
+        return VecAShrScalarStore;
+    case VecWideCondBitsel:
+        return VecWideCondBitselStore;
+    default:
+        abort();
+    }
+}
+
+static PseudoInst intrinsicToStorePseudoInst(unsigned IntrinsicID)
+{
+    switch (IntrinsicID) {
+    case Intrinsic::sadd_sat:
+        return VecSignedSatAddStore;
+    case Intrinsic::ssub_sat:
+        return VecSignedSatSubStore;
+    case Intrinsic::fshr:
+        return VecFunnelShrStore;
+#if LLVM_VERSION_MAJOR > 11
+    case Intrinsic::abs:
+        return VecAbsStore;
+    case Intrinsic::smax:
+        return VecSignedMaxStore;
+    case Intrinsic::umax:
+        return VecUnsignedMaxStore;
+    case Intrinsic::smin:
+        return VecSignedMinStore;
+    case Intrinsic::umin:
+        return VecUnsignedMinStore;
+#endif
+    case Intrinsic::ctlz:
+        return VecCtlzStore;
+    case Intrinsic::cttz:
+        return VecCttzStore;
+    case Intrinsic::ctpop:
+        return VecCtpopStore;
+    default:
+        abort();
+    }
+}
+
+// For binary/unary ops on vectors where the result is stored to a pointer,
+// convert
+//
+//   %3 = %1 [op] %2
+//   %4 = bitcast i8* %0 to <N x T>*
+//   store <N x T> %3, <N x T>* %4
+//
+// to
+//
+//   call @Vec[Op]Store.*(%0, %1, %2)
+//
+// This deals with the duality of pointers and vectors, and simplifies the
+// backend. We previously kept a map on the side to propagate "vector"-ness
+// from %3 to %4 via the store, no longer!
+static void convertVecStoreBitcastToPseudoInst(EraseInstVec &InstToErase,
+                                               Module &M, StoreInst *Store)
+{
+    Value *ValueOp = Store->getValueOperand();
+    Type *ValueTy = ValueOp->getType();
+    if (!ValueTy->isVectorTy()) {
+        return;
+    }
+    auto *Bitcast = cast<BitCastInst>(Store->getPointerOperand());
+    Type *PtrTy = Bitcast->getType();
+    // Ensure the store and the op. are in the same basic
+    // block, since the op. is moved to the store.
+    bool InSameBB =
+        cast<Instruction>(ValueOp)->getParent() == Store->getParent();
+    if (!InSameBB) {
+        return;
+    }
+
+    SmallVector<Type *, 4> Types;
+    SmallVector<Value *, 4> Args;
+    Value *PtrOp = Store->getPointerOperand();
+    if (auto *BinOp = dyn_cast<BinaryOperator>(ValueOp)) {
+        Instruction *Inst = cast<Instruction>(ValueOp);
+        PseudoInst NewInst = instructionToStorePseudoInst(BinOp->getOpcode());
+        IRBuilder<> Builder(Store);
+        const unsigned ArgCount = pseudoInstArgCount(NewInst);
+        // Add one to account for the extra store pointer
+        // argument of Vec*Store pseudo instructions.
+        assert(ArgCount > 0 and ArgCount - 1 <= Inst->getNumOperands());
+        Types.push_back(PtrTy);
+        Args.push_back(PtrOp);
+        for (unsigned I = 0; I < ArgCount - 1; ++I) {
+            Value *Op = Inst->getOperand(I);
+            Types.push_back(Op->getType());
+            Args.push_back(Op);
+        }
+        FunctionCallee Fn =
+            pseudoInstFunction(M, NewInst, Builder.getVoidTy(), Types);
+        Builder.CreateCall(Fn, Args);
+    } else if (auto *Call = dyn_cast<CallInst>(ValueOp)) {
+        Function *F = Call->getCalledFunction();
+        PseudoInst OldInst = getPseudoInstFromCall(Call);
+        if (OldInst != InvalidPseudoInst) {
+            // Map scalar vector pseudo instructions to
+            // store variants
+            PseudoInst NewInst = pseudoInstToStorePseudoInst(OldInst);
+            IRBuilder<> Builder(Store);
+            Types.push_back(PtrTy);
+            Args.push_back(PtrOp);
+            for (Value *Op : Call->args()) {
+                Types.push_back(Op->getType());
+                Args.push_back(Op);
+            }
+            FunctionCallee Fn =
+                pseudoInstFunction(M, NewInst, Builder.getVoidTy(), Types);
+            Builder.CreateCall(Fn, Args);
+        } else if (F->isIntrinsic()) {
+            Instruction *Inst = cast<Instruction>(ValueOp);
+            PseudoInst NewInst =
+                intrinsicToStorePseudoInst(F->getIntrinsicID());
+            const unsigned ArgCount = pseudoInstArgCount(NewInst);
+            // Add one to account for the extra store pointer
+            // argument of Vec*Store pseudo instructions.
+            assert(ArgCount > 0 and ArgCount - 1 <= Inst->getNumOperands());
+            IRBuilder<> Builder(Store);
+            SmallVector<Type *, 4> ArgTys;
+            SmallVector<Value *, 4> IntrinArgs;
+            ArgTys.push_back(PtrTy);
+            IntrinArgs.push_back(PtrOp);
+            for (unsigned I = 0; I < ArgCount - 1; ++I) {
+                Value *Op = Inst->getOperand(I);
+                ArgTys.push_back(Op->getType());
+                IntrinArgs.push_back(Op);
+            }
+            FunctionCallee Fn =
+                pseudoInstFunction(M, NewInst, Builder.getVoidTy(), ArgTys);
+            Builder.CreateCall(Fn, IntrinArgs);
+        } else {
+            dbgs() << "Unhandled vector + bitcast + store op. " << *ValueOp
+                   << "\n";
+            abort();
+        }
+    } else {
+        Instruction *Inst = cast<Instruction>(ValueOp);
+        PseudoInst NewInst = instructionToStorePseudoInst(Inst->getOpcode());
+        const unsigned ArgCount = pseudoInstArgCount(NewInst);
+        // Add one to account for the extra store pointer
+        // argument of Vec*Store pseudo instructions.
+        assert(ArgCount > 0 and ArgCount - 1 <= Inst->getNumOperands());
+        IRBuilder<> Builder(Store);
+        SmallVector<Type *, 4> ArgTys;
+        SmallVector<Value *, 4> InstArgs;
+        ArgTys.push_back(PtrTy);
+        InstArgs.push_back(PtrOp);
+        for (unsigned I = 0; I < ArgCount - 1; ++I) {
+            Value *Op = Inst->getOperand(I);
+            ArgTys.push_back(Op->getType());
+            InstArgs.push_back(Op);
+        }
+        FunctionCallee Fn =
+            pseudoInstFunction(M, NewInst, Builder.getVoidTy(), ArgTys);
+        Builder.CreateCall(Fn, InstArgs);
+    }
+
+    // Remove the store instruction; this ensures DCE
+    // can clean up the rest. We also remove ValueOp
+    // here since it may be a call, which won't get
+    // cleaned up by DCE.
+    InstToErase.push_back(cast<Instruction>(ValueOp));
+    InstToErase.push_back(Store);
+}
+
+//
+// Convert
+//
+//   %cond = icmp [cond] i32 %0, i32 %1
+//   br i1 %cond, label %true, label %false
+//
+// to
+//
+//   call void @brcond.[cond].i32(i32 %0, i32 %1, label %true, label %false)
+//   br i1 false, label %true, label %false, !dead-branch
+//
+// note the old branch still remains, as @brcond.* is not an actual
+// branch instruction. Removing the old branch would result in broken
+// IR.
+//
+// Additionally, if the %false basic block immediately succeeds the
+// current one, we can ignore the false branch and fall through; this is
+// indicated via !fallthrough metadata on the call.
+//
+// TODO: Consider using ConstantInt i1 arguments instead. Metadata is
+// fragile and does not survive optimization. We do not run any more
+// optimization passes, but this could be a source of future headache.
+static void convertICmpBrToPseudoInst(LLVMContext &Context,
+                                      EraseInstVec &InstToErase, Module &M,
+                                      Instruction *I, BasicBlock *NextBb)
+{
+    auto *ICmp = dyn_cast<ICmpInst>(I);
+    if (!ICmp) {
+        return;
+    }
+
+    // Since we want to remove the icmp instruction we ensure that
+    // all uses are branch instructions that can be converted into
+    // @brcond.* calls.
+    for (User *U : ICmp->users()) {
+        if (!isa<BranchInst>(U)) {
+            return;
+        }
+    }
+
+    Value *Op0 = ICmp->getOperand(0);
+    Value *Op1 = ICmp->getOperand(1);
+    auto *CmpIntTy = dyn_cast<IntegerType>(Op0->getType());
+    if (!CmpIntTy) {
+        return;
+    }
+    for (User *U : ICmp->users()) {
+        auto *Br = cast<BranchInst>(U);
+
+        BasicBlock *True = Br->getSuccessor(0);
+        BasicBlock *False = Br->getSuccessor(1);
+
+        bool TrueUnreachable =
+            True->getTerminator()->getOpcode() == Instruction::Unreachable and
+            False->getTerminator()->getOpcode() != Instruction::Unreachable;
+
+        // If the next basic block is either of our true/false
+        // branches, we can fall through instead of branching.
+        bool Fallthrough = (NextBb == True or NextBb == False);
+
+        // If the succeeding basic block is the true branch we
+        // invert the condition so we can fall through instead.
+        ICmpInst::Predicate Predicate;
+        if (NextBb == True or (TrueUnreachable and NextBb == False)) {
+            std::swap(True, False);
+            Predicate = ICmp->getInversePredicate();
+        } else {
+            Predicate = ICmp->getPredicate();
+        }
+
+        IRBuilder<> Builder(Br);
+        FunctionCallee Fn = pseudoInstFunction(
+            M, Brcond, Builder.getVoidTy(),
+            {CmpIntTy, CmpIntTy, CmpIntTy, True->getType(), False->getType()});
+        CallInst *Call = Builder.CreateCall(
+            Fn, {ConstantInt::get(CmpIntTy, Predicate), Op0, Op1, True, False});
+
+        if (Fallthrough) {
+            MDTuple *N = MDNode::get(Context, MDString::get(Context, ""));
+            Call->setMetadata("fallthrough", N);
+        }
+
+        //
+        // We need to keep the BB of the true branch alive
+        // so that we can iterate over the CFG as usual
+        // using LLVM.
+        // Our custom "opcode" @brcond is not an
+        // actual branch, so LLVM does not understand that
+        // we can branch to the true branch.
+        //
+        // For this reason we emit an extra dead branch
+        // to the true branch, and tag it as dead using
+        // metadata. The backend can later check whether
+        // this metadata is present and ignore the branch.
+        //
+        // Another idea:
+        // What we could do instead is to
+        // linearize the CFG before this point, i.e.
+        // establish the order we want to emit all BBs
+        // in, in say an array. We can then iterate
+        // over this array instead; note this can only
+        // happen in the later stages of the pipeline
+        // where we don't rely on LLVM for any extra work.
+        //
+        // Keeping our own linear array would also allow
+        // us to optimize brconds for fallthroughs, e.g.
+        // check if any of the basic blocks we branch to
+        // is the next basic block, and if so we can adjust
+        // the condition accordingly.
+        // (We do this currently, but this assumes the
+        // iteration order here is the same as in the
+        // backend.)
+        //
+        // Note also: LLVM expects the BB to end in a single
+        // branch.
+        //
+        BranchInst *DeadBranch =
+            Builder.CreateCondBr(ConstantInt::getFalse(Context), True, False);
+        {
+            MDTuple *N = MDNode::get(Context, MDString::get(Context, ""));
+            DeadBranch->setMetadata("dead-branch", N);
+        }
+
+        InstToErase.push_back(Br);
+    }
+    InstToErase.push_back(ICmp);
+}
diff --git a/subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.h b/subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.h
new file mode 100644
index 0000000000..441200606d
--- /dev/null
+++ b/subprojects/helper-to-tcg/passes/PrepareForTcgPass/CanonicalizeIR.h
@@ -0,0 +1,25 @@
+//
+// Copyright(c) 2024 rev.ng Labs Srl. All Rights Reserved.
+//
+// This program is free software; you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation; either version 2 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program; if not, see <http://www.gnu.org/licenses/>.
+//
+
+#pragma once
+
+namespace llvm
+{
+class Module;
+}
+
+void canonicalizeIR(llvm::Module &M);
diff --git a/subprojects/helper-to-tcg/passes/PrepareForTcgPass/PrepareForTcgPass.cpp b/subprojects/helper-to-tcg/passes/PrepareForTcgPass/PrepareForTcgPass.cpp
index b1e2932750..7fdbc2a0c9 100644
--- a/subprojects/helper-to-tcg/passes/PrepareForTcgPass/PrepareForTcgPass.cpp
+++ b/subprojects/helper-to-tcg/passes/PrepareForTcgPass/PrepareForTcgPass.cpp
@@ -15,6 +15,7 @@
 // along with this program; if not, see <http://www.gnu.org/licenses/>.
 //
 
+#include "CanonicalizeIR.h"
 #include
 #include
 #include "TransformGEPs.h"
@@ -124,5 +125,6 @@ PreservedAnalyses PrepareForTcgPass::run(Module &M, ModuleAnalysisManager &MAM)
     for (Function &F : M) {
         transformGEPs(M, F, ResultTcgGlobalMap);
     }
+    canonicalizeIR(M);
     return PreservedAnalyses::none();
 }
-- 
2.45.2