From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 60/66] tcg/aarch64: Support raising sigbus for user-only
Date: Wed, 18 Aug 2021 09:19:14 -1000
Message-Id: <20210818191920.390759-61-richard.henderson@linaro.org>
In-Reply-To: <20210818191920.390759-1-richard.henderson@linaro.org>
References: <20210818191920.390759-1-richard.henderson@linaro.org>

Use load-acquire / store-release for the normal case of alignment
matching the access size.  Otherwise, emit a test + branch sequence
invoking helper_unaligned_{ld,st}.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/aarch64/tcg-target.h     |   2 -
 tcg/aarch64/tcg-target.c.inc | 174 +++++++++++++++++++++++++++++++----
 2 files changed, 157 insertions(+), 19 deletions(-)

diff --git a/tcg/aarch64/tcg-target.h b/tcg/aarch64/tcg-target.h
index 7a93ac8023..876af589ce 100644
--- a/tcg/aarch64/tcg-target.h
+++ b/tcg/aarch64/tcg-target.h
@@ -151,9 +151,7 @@ typedef enum {
 
 void tb_target_set_jmp_target(uintptr_t, uintptr_t, uintptr_t, uintptr_t);
 
-#ifdef CONFIG_SOFTMMU
 #define TCG_TARGET_NEED_LDST_LABELS
-#endif
 #define TCG_TARGET_NEED_POOL_LABELS
 
 #endif /* AARCH64_TCG_TARGET_H */
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 5edca8d44d..f5664636cf 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -10,6 +10,7 @@
  * See the COPYING file in the top-level directory for details.
  */
 
+#include "../tcg-ldst.c.inc"
 #include "../tcg-pool.c.inc"
 #include "qemu/bitops.h"
 
@@ -390,6 +391,10 @@ typedef enum {
     I3305_LDR_v64   = 0x5c000000,
     I3305_LDR_v128  = 0x9c000000,
 
+    /* Load/store exclusive */
+    I3306_LDAR      = 0x08808000 | LDST_LD << 22, /* plus MO << 30 */
+    I3306_STLR      = 0x08808000 | LDST_ST << 22, /* plus MO << 30 */
+
     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_8 << 30,
@@ -443,6 +448,7 @@ typedef enum {
     I3404_ANDI      = 0x12000000,
     I3404_ORRI      = 0x32000000,
     I3404_EORI      = 0x52000000,
+    I3404_ANDSI     = 0x72000000,
 
     /* Move wide immediate instructions.  */
     I3405_MOVN      = 0x12800000,
@@ -453,6 +459,9 @@ typedef enum {
     I3406_ADR       = 0x10000000,
     I3406_ADRP      = 0x90000000,
 
+    /* Add/subtract extended register. */
+    I3501_ADDEXT    = 0x0b200000,
+
     /* Add/subtract shifted register instructions (without a shift).  */
     I3502_ADD       = 0x0b000000,
     I3502_ADDS      = 0x2b000000,
@@ -623,6 +632,14 @@ static void tcg_out_insn_3305(TCGContext *s, AArch64Insn insn,
     tcg_out32(s, insn | (imm19 & 0x7ffff) << 5 | rt);
 }
 
+static void G_GNUC_UNUSED
+tcg_out_insn_3306(TCGContext *s, AArch64Insn insn, MemOp sz,
+                  TCGReg rs, TCGReg rt, TCGReg rt2, TCGReg rn)
+{
+    tcg_out32(s, insn | (sz << 30) | (rs << 16) |
+              (rt2 << 10) | (rn << 5) | rt);
+}
+
 static void tcg_out_insn_3201(TCGContext *s, AArch64Insn insn, TCGType ext,
                               TCGReg rt, int imm19)
 {
@@ -705,6 +722,13 @@ static void tcg_out_insn_3406(TCGContext *s, AArch64Insn insn,
     tcg_out32(s, insn | (disp & 3) << 29 | (disp & 0x1ffffc) << (5 - 2) | rd);
 }
 
+static inline void tcg_out_insn_3501(TCGContext *s, AArch64Insn insn,
+                                     TCGReg rd, TCGReg rn,
+                                     TCGReg rm, MemOp ext)
+{
+    tcg_out32(s, insn | 1 << 31 | rm << 16 | ext << 13 | rn << 5 | rd);
+}
+
 /* This function is for both 3.5.2 (Add/Subtract shifted register),
    for the rare occasion when we actually want to supply a shift amount.  */
 static inline void tcg_out_insn_3502S(TCGContext *s, AArch64Insn insn,
@@ -1328,8 +1352,9 @@ static void tcg_out_goto_long(TCGContext *s, const tcg_insn_unit *target)
     if (offset == sextract64(offset, 0, 26)) {
         tcg_out_insn(s, 3206, B, offset);
     } else {
-        tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, (intptr_t)target);
-        tcg_out_insn(s, 3207, BR, TCG_REG_TMP);
+        /* Choose X9 as a call-clobbered non-LR temporary.  */
+        tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_X9, (intptr_t)target);
+        tcg_out_insn(s, 3207, BR, TCG_REG_X9);
     }
 }
 
@@ -1541,9 +1566,14 @@ static void tcg_out_cltz(TCGContext *s, TCGType ext, TCGReg d,
     }
 }
 
-#ifdef CONFIG_SOFTMMU
-#include "../tcg-ldst.c.inc"
+static void tcg_out_adr(TCGContext *s, TCGReg rd, const void *target)
+{
+    ptrdiff_t offset = tcg_pcrel_diff(s, target);
+    tcg_debug_assert(offset == sextract64(offset, 0, 21));
+    tcg_out_insn(s, 3406, ADR, rd, offset);
+}
 
+#ifdef CONFIG_SOFTMMU
 /* helper signature: helper_ret_ld_mmu(CPUState *env, target_ulong addr,
  *                                     MemOpIdx oi, uintptr_t ra)
  */
@@ -1577,13 +1607,6 @@ static void * const qemu_st_helpers[MO_SIZE + 1] = {
 #endif
 };
 
-static inline void tcg_out_adr(TCGContext *s, TCGReg rd, const void *target)
-{
-    ptrdiff_t offset = tcg_pcrel_diff(s, target);
-    tcg_debug_assert(offset == sextract64(offset, 0, 21));
-    tcg_out_insn(s, 3406, ADR, rd, offset);
-}
-
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     MemOpIdx oi = lb->oi;
@@ -1714,15 +1737,85 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
     tcg_out_insn(s, 3202, B_C, TCG_COND_NE, 0);
 }
 
+#else
+
+static void tcg_out_test_alignment(TCGContext *s, bool is_ld, TCGReg addr_reg,
+                                   unsigned a_bits)
+{
+    unsigned a_mask = (1 << a_bits) - 1;
+    TCGLabelQemuLdst *label = new_ldst_label(s);
+
+    label->is_ld = is_ld;
+    label->addrlo_reg = addr_reg;
+
+    /* tst addr, #mask */
+    tcg_out_logicali(s, I3404_ANDSI, 0, TCG_REG_XZR, addr_reg, a_mask);
+
+    label->label_ptr[0] = s->code_ptr;
+
+    /* b.ne slow_path */
+    tcg_out_insn(s, 3202, B_C, TCG_COND_NE, 0);
+
+    label->raddr = tcg_splitwx_to_rx(s->code_ptr);
+}
+
+static bool tcg_out_fail_alignment(TCGContext *s, TCGLabelQemuLdst *l)
+{
+    if (!reloc_pc19(l->label_ptr[0], tcg_splitwx_to_rx(s->code_ptr))) {
+        return false;
+    }
+
+    tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_X1, l->addrlo_reg);
+    tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_X0, TCG_AREG0);
+
+    /*
+     * "Tail call" to the helper, with the return address back inline,
+     * just for the clarity of the debugging traceback -- the helper
+     * cannot return.
+     */
+    tcg_out_adr(s, TCG_REG_LR, l->raddr);
+    tcg_out_goto_long(s, (const void *)(l->is_ld ? helper_unaligned_ld
+                                        : helper_unaligned_st));
+    return true;
+}
+
+static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
+{
+    return tcg_out_fail_alignment(s, l);
+}
+
+static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
+{
+    return tcg_out_fail_alignment(s, l);
+}
+
+static void tcg_out_qemu_ld_acquire(TCGContext *s, MemOp memop, TCGType ext,
+                                    TCGReg data_r, TCGReg addr_r)
+{
+    MemOp size = memop & MO_SIZE;
+
+    tcg_out_insn(s, 3306, LDAR, size,
+                 TCG_REG_XZR, data_r, TCG_REG_XZR, addr_r);
+    if (memop & MO_SIGN) {
+        tcg_out_sxt(s, ext, size, data_r, data_r);
+    }
+}
+
+static void tcg_out_qemu_st_release(TCGContext *s, MemOp memop,
+                                    TCGReg data_r, TCGReg addr_r)
+{
+    MemOp size = memop & MO_SIZE;
+
+    tcg_out_insn(s, 3306, STLR, size,
+                 TCG_REG_XZR, data_r, TCG_REG_XZR, addr_r);
+}
+
 #endif /* CONFIG_SOFTMMU */
 
 static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    /* Byte swapping is left to middle-end expansion. */
-    tcg_debug_assert((memop & MO_BSWAP) == 0);
-
     switch (memop & MO_SSIZE) {
     case MO_UB:
         tcg_out_ldst_r(s, I3312_LDRB, data_r, addr_r, otype, off_r);
@@ -1756,9 +1849,6 @@ static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    /* Byte swapping is left to middle-end expansion. */
-    tcg_debug_assert((memop & MO_BSWAP) == 0);
-
     switch (memop & MO_SIZE) {
     case MO_8:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
@@ -1782,6 +1872,10 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 {
     MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
+
+    /* Byte swapping is left to middle-end expansion. */
+    tcg_debug_assert((memop & MO_BSWAP) == 0);
+
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1792,6 +1886,28 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
     add_qemu_ldst_label(s, true, oi, ext, data_reg, addr_reg,
                         s->code_ptr, label_ptr);
 #else /* !CONFIG_SOFTMMU */
+    unsigned a_bits = get_alignment_bits(memop);
+
+    if (a_bits) {
+        /*
+         * If alignment is required, and equals the access size, then
+         * use load-acquire for the side effect of alignment checking.
+         * Despite the extra memory barrier, for a ThunderX2 host,
+         * this is about 40% faster.  It is always smaller.
+         */
+        if (a_bits == (memop & MO_SIZE)) {
+            if (USE_GUEST_BASE) {
+                tcg_out_insn(s, 3501, ADDEXT, TCG_REG_TMP, TCG_REG_GUEST_BASE,
+                             addr_reg, TARGET_LONG_BITS == 64 ? MO_64 : MO_32);
+                addr_reg = TCG_REG_TMP;
+            }
+            tcg_out_qemu_ld_acquire(s, memop, ext, data_reg, addr_reg);
+            return;
+        }
+
+        tcg_out_test_alignment(s, true, addr_reg, a_bits);
+    }
+
     if (USE_GUEST_BASE) {
         tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
                                TCG_REG_GUEST_BASE, otype, addr_reg);
@@ -1807,6 +1923,10 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 {
     MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
+
+    /* Byte swapping is left to middle-end expansion. */
+    tcg_debug_assert((memop & MO_BSWAP) == 0);
+
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1817,6 +1937,26 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
     add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE) == MO_64,
                         data_reg, addr_reg, s->code_ptr, label_ptr);
 #else /* !CONFIG_SOFTMMU */
+    unsigned a_bits = get_alignment_bits(memop);
+
+    if (a_bits) {
+        /*
+         * If alignment is required, and equals the access size, then
+         * use store-release for the side effect of alignment checking.
+         */
+        if (a_bits == (memop & MO_SIZE)) {
+            if (USE_GUEST_BASE) {
+                tcg_out_insn(s, 3501, ADDEXT, TCG_REG_TMP, TCG_REG_GUEST_BASE,
+                             addr_reg, TARGET_LONG_BITS == 64 ? MO_64 : MO_32);
+                addr_reg = TCG_REG_TMP;
+            }
+            tcg_out_qemu_st_release(s, memop, data_reg, addr_reg);
+            return;
+        }
+
+        tcg_out_test_alignment(s, false, addr_reg, a_bits);
+    }
+
     if (USE_GUEST_BASE) {
         tcg_out_qemu_st_direct(s, memop, data_reg,
                                TCG_REG_GUEST_BASE, otype, addr_reg);
-- 
2.25.1