From nobody Sun May 5 11:24:39 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Subject: [PATCH 1/2] target/arm: Check PAGE_WRITE_ORG for MTE writeability
Date: Thu, 1 Apr 2021 22:37:27 -0700
Message-Id: <20210402053728.265173-2-richard.henderson@linaro.org>
In-Reply-To: <20210402053728.265173-1-richard.henderson@linaro.org>
References: <20210402053728.265173-1-richard.henderson@linaro.org>

We can remove PAGE_WRITE when (internally) marking a page read-only
because it contains translated code.  This can be triggered by
tests/tcg/aarch64/bti-2, after having serviced SIGILL trampolines
on the stack.
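For context, a minimal sketch of the distinction this patch relies on (the
helper name is hypothetical, not from QEMU; the flag values are those of
include/exec/cpu-all.h):

#include <stdbool.h>

#define PAGE_WRITE      0x0002  /* page is currently writable */
#define PAGE_WRITE_ORG  0x0010  /* page was originally mapped writable */

/*
 * When QEMU write-protects a guest page because it contains translated
 * code, PAGE_WRITE is cleared but PAGE_WRITE_ORG still records that the
 * mapping is writable; the write fault handler restores PAGE_WRITE after
 * invalidating the translations.  So an MTE store check must ask "was
 * this page mapped writable?", not "is it writable right now?".
 */
static bool mte_page_writable(int flags)    /* hypothetical helper */
{
    return (flags & PAGE_WRITE_ORG) != 0;   /* rather than PAGE_WRITE */
}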
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mte_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index 0bbb9ec346..8be17e1b70 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -83,7 +83,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     uint8_t *tags;
     uintptr_t index;
 
-    if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE : PAGE_READ))) {
+    if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE_ORG : PAGE_READ))) {
         /* SIGSEGV */
         arm_cpu_tlb_fill(env_cpu(env), ptr, ptr_size, ptr_access,
                          ptr_mmu_idx, false, ra);
-- 
2.25.1

From nobody Sun May 5 11:24:39 2024
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Subject: [PATCH 2/2] target/arm: Fix unaligned mte checks
Date: Thu, 1 Apr 2021 22:37:28 -0700
Message-Id: <20210402053728.265173-3-richard.henderson@linaro.org>
In-Reply-To: <20210402053728.265173-1-richard.henderson@linaro.org>
References: <20210402053728.265173-1-richard.henderson@linaro.org>

We were incorrectly assuming that only the first byte of an MTE access
is checked against the tags.  But per the ARM, unaligned accesses are
pre-decomposed into single-byte accesses.  So by the time we reach the
actual MTE check in the ARM pseudocode, all accesses are aligned.

Therefore, drop mte_check1, since we cannot know a priori that an
access is aligned.  Rename mte_checkN to mte_check, which now handles
all accesses.  Rename mte_probe1 to mte_probe, and use a common helper.

Drop the computation of the faulting nth element, since all accesses
can be considered to devolve to bytes, and simply compute the faulting
address.
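To make the failure concrete, here is a small standalone sketch (the
function is hypothetical, not code from this series) of how many 16-byte
tag granules an access touches; an unaligned access can straddle a granule
boundary even when it is no larger than one granule:

#include <stdint.h>

#define TAG_GRANULE 16  /* MTE allocation-tag granule, as in QEMU */

/* Hypothetical helper: count the tag granules covered by an access. */
static int granules_touched(uint64_t ptr, uint32_t sizem1 /* size - 1 */)
{
    uint64_t first = ptr & ~(uint64_t)(TAG_GRANULE - 1);
    uint64_t last = (ptr + sizem1) & ~(uint64_t)(TAG_GRANULE - 1);
    return (int)((last - first) / TAG_GRANULE) + 1;
}

An aligned 8-byte load at offset 8 stays within one granule,
granules_touched(8, 7) == 1, but the same load at offset 12 crosses into
a second granule, granules_touched(12, 7) == 2, and every granule touched
must match the pointer's logical tag -- exactly the case exercised by the
new tests/tcg/aarch64/mte-5.c below.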
Buglink: https://bugs.launchpad.net/bugs/1921948
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-a64.h           |   3 +-
 target/arm/internals.h            |  13 +--
 target/arm/translate-a64.h        |   2 +-
 target/arm/mte_helper.c           | 169 ++++++++++++------------------
 target/arm/sve_helper.c           |  96 ++++++-----------
 target/arm/translate-a64.c        |  52 ++++-----
 target/arm/translate-sve.c        |   9 +-
 tests/tcg/aarch64/mte-5.c         |  44 ++++++++
 tests/tcg/aarch64/Makefile.target |   2 +-
 9 files changed, 178 insertions(+), 212 deletions(-)
 create mode 100644 tests/tcg/aarch64/mte-5.c

diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
index c139fa81f9..7b706571bb 100644
--- a/target/arm/helper-a64.h
+++ b/target/arm/helper-a64.h
@@ -104,8 +104,7 @@ DEF_HELPER_FLAGS_3(autdb, TCG_CALL_NO_WG, i64, env, i64, i64)
 DEF_HELPER_FLAGS_2(xpaci, TCG_CALL_NO_RWG_SE, i64, env, i64)
 DEF_HELPER_FLAGS_2(xpacd, TCG_CALL_NO_RWG_SE, i64, env, i64)
 
-DEF_HELPER_FLAGS_3(mte_check1, TCG_CALL_NO_WG, i64, env, i32, i64)
-DEF_HELPER_FLAGS_3(mte_checkN, TCG_CALL_NO_WG, i64, env, i32, i64)
+DEF_HELPER_FLAGS_3(mte_check, TCG_CALL_NO_WG, i64, env, i32, i64)
 DEF_HELPER_FLAGS_3(mte_check_zva, TCG_CALL_NO_WG, i64, env, i32, i64)
 DEF_HELPER_FLAGS_3(irg, TCG_CALL_NO_RWG, i64, env, i64, i64)
 DEF_HELPER_FLAGS_4(addsubg, TCG_CALL_NO_RWG_SE, i64, env, i64, s32, i32)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index f11bd32696..817d3aa51b 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1137,19 +1137,16 @@ FIELD(PREDDESC, DATA, 8, 24)
  */
 #define SVE_MTEDESC_SHIFT 5
 
-/* Bits within a descriptor passed to the helper_mte_check* functions. */
+/* Bits within a descriptor passed to the helper_mte_check function. */
 FIELD(MTEDESC, MIDX,  0, 4)
 FIELD(MTEDESC, TBI,   4, 2)
 FIELD(MTEDESC, TCMA,  6, 2)
 FIELD(MTEDESC, WRITE, 8, 1)
-FIELD(MTEDESC, ESIZE, 9, 5)
-FIELD(MTEDESC, TSIZE, 14, 10)  /* mte_checkN only */
+FIELD(MTEDESC, SIZEM1, 12, 10)  /* size - 1 */
 
-bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr);
-uint64_t mte_check1(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra);
-uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra);
+bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
+uint64_t mte_check(CPUARMState *env, uint32_t desc,
+                   uint64_t ptr, uintptr_t ra);
 
 static inline int allocation_tag_from_addr(uint64_t ptr)
 {
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 3668b671dd..6c4bbf9096 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -44,7 +44,7 @@ TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
 TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
                         bool tag_checked, int log2_size);
 TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
-                        bool tag_checked, int count, int log2_esize);
+                        bool tag_checked, int total_size);
 
 /* We should have at some point before trying to access an FP register
  * done the necessary access check, so assert that
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index 8be17e1b70..62bea7ad4a 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -121,7 +121,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      * exception for inaccessible pages, and resolves the virtual address
      * into the softmmu tlb.
      *
-     * When RA == 0, this is for mte_probe1.  The page is expected to be
+     * When RA == 0, this is for mte_probe.  The page is expected to be
      * valid.  Indicate to probe_access_flags no-fault, then assert that
      * we received a valid page.
      */
@@ -617,80 +617,6 @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
     }
 }
 
-/*
- * Perform an MTE checked access for a single logical or atomic access.
- */
-static bool mte_probe1_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
-                           uintptr_t ra, int bit55)
-{
-    int mem_tag, mmu_idx, ptr_tag, size;
-    MMUAccessType type;
-    uint8_t *mem;
-
-    ptr_tag = allocation_tag_from_addr(ptr);
-
-    if (tcma_check(desc, bit55, ptr_tag)) {
-        return true;
-    }
-
-    mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
-    type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
-    size = FIELD_EX32(desc, MTEDESC, ESIZE);
-
-    mem = allocation_tag_mem(env, mmu_idx, ptr, type, size,
-                             MMU_DATA_LOAD, 1, ra);
-    if (!mem) {
-        return true;
-    }
-
-    mem_tag = load_tag1(ptr, mem);
-    return ptr_tag == mem_tag;
-}
-
-/*
- * No-fault version of mte_check1, to be used by SVE for MemSingleNF.
- * Returns false if the access is Checked and the check failed.  This
- * is only intended to probe the tag -- the validity of the page must
- * be checked beforehand.
- */
-bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr)
-{
-    int bit55 = extract64(ptr, 55, 1);
-
-    /* If TBI is disabled, the access is unchecked. */
-    if (unlikely(!tbi_check(desc, bit55))) {
-        return true;
-    }
-
-    return mte_probe1_int(env, desc, ptr, 0, bit55);
-}
-
-uint64_t mte_check1(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra)
-{
-    int bit55 = extract64(ptr, 55, 1);
-
-    /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
-    if (unlikely(!tbi_check(desc, bit55))) {
-        return ptr;
-    }
-
-    if (unlikely(!mte_probe1_int(env, desc, ptr, ra, bit55))) {
-        mte_check_fail(env, desc, ptr, ra);
-    }
-
-    return useronly_clean_ptr(ptr);
-}
-
-uint64_t HELPER(mte_check1)(CPUARMState *env, uint32_t desc, uint64_t ptr)
-{
-    return mte_check1(env, desc, ptr, GETPC());
-}
-
-/*
- * Perform an MTE checked access for multiple logical accesses.
- */
-
 /**
  * checkN:
  * @tag: tag memory to test
@@ -753,38 +679,49 @@ static int checkN(uint8_t *mem, int odd, int cmp, int count)
     return n;
 }
 
-uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra)
+/*
+ * mte_check_int:
+ * @env: CPU environment
+ * @desc: MTEDESC descriptor
+ * @ptr: virtual address of the base of the access
+ * @fault: return virtual address of the first check failure
+ *
+ * Internal routine for both mte_probe and mte_check.
+ * Return zero on failure, filling in *fault.
+ * Return negative on trivial success for tbi disabled.
+ * Return positive on success with tbi enabled.
+ */
+static int mte_check_int(CPUARMState *env, uint32_t desc,
+                         uint64_t ptr, uintptr_t ra, uint64_t *fault)
 {
     int mmu_idx, ptr_tag, bit55;
-    uint64_t ptr_last, ptr_end, prev_page, next_page;
+    uint64_t ptr_last, prev_page, next_page;
     uint64_t tag_first, tag_end;
     uint64_t tag_byte_first, tag_byte_end;
-    uint32_t esize, total, tag_count, tag_size, n, c;
+    uint32_t sizem1, tag_count, tag_size, n, c;
     uint8_t *mem1, *mem2;
     MMUAccessType type;
 
     bit55 = extract64(ptr, 55, 1);
+    *fault = ptr;
 
     /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
     if (unlikely(!tbi_check(desc, bit55))) {
-        return ptr;
+        return -1;
     }
 
     ptr_tag = allocation_tag_from_addr(ptr);
 
     if (tcma_check(desc, bit55, ptr_tag)) {
-        goto done;
+        return 1;
     }
 
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
-    esize = FIELD_EX32(desc, MTEDESC, ESIZE);
-    total = FIELD_EX32(desc, MTEDESC, TSIZE);
+    sizem1 = FIELD_EX32(desc, MTEDESC, SIZEM1);
 
-    /* Find the addr of the end of the access, and of the last element. */
-    ptr_end = ptr + total;
-    ptr_last = ptr_end - esize;
+    /* Find the addr of the last byte of the access. */
+    ptr_last = ptr + sizem1;
 
     /* Round the bounds to the tag granule, and compute the number of tags. */
     tag_first = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
@@ -802,12 +739,19 @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
     if (likely(tag_end - prev_page <= TARGET_PAGE_SIZE)) {
         /* Memory access stays on one page. */
         tag_size = (tag_byte_end - tag_byte_first) / (2 * TAG_GRANULE);
-        mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, total,
+        mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
                                   MMU_DATA_LOAD, tag_size, ra);
         if (!mem1) {
-            goto done;
+            return 1;
+        }
+
+        /*
+         * Perform all of the comparisons, recognizing that most are
+         * aligned operations that do not cross granule boundaries.
+         */
+        if (likely(tag_count == 1)) {
+            return ptr_tag == load_tag1(ptr, mem1);
         }
-        /* Perform all of the comparisons. */
         n = checkN(mem1, ptr & TAG_GRANULE, ptr_tag, tag_count);
     } else {
         /* Memory access crosses to next page. */
@@ -817,7 +761,7 @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
 
         tag_size = (tag_byte_end - next_page) / (2 * TAG_GRANULE);
         mem2 = allocation_tag_mem(env, mmu_idx, next_page, type,
-                                  ptr_end - next_page,
+                                  ptr_last - next_page + 1,
                                   MMU_DATA_LOAD, tag_size, ra);
 
         /*
@@ -831,31 +775,54 @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
         }
         if (n == c) {
             if (!mem2) {
-                goto done;
+                return 1;
             }
             n += checkN(mem2, 0, ptr_tag, tag_count - c);
         }
     }
 
     /*
-     * If we failed, we know which granule.  Compute the element that
-     * is first in that granule, and signal failure on that element.
+     * If we failed, we know which granule -- signal failure at that address.
      */
     if (unlikely(n < tag_count)) {
-        uint64_t fail_ofs;
-
-        fail_ofs = tag_first + n * TAG_GRANULE - ptr;
-        fail_ofs = ROUND_UP(fail_ofs, esize);
-        mte_check_fail(env, desc, ptr + fail_ofs, ra);
+        if (n > 0) {
+            *fault = tag_first + n * TAG_GRANULE;
+        }
+        return 0;
     }
 
-  done:
+    return 1;
+}
+
+/*
+ * No-fault probe, to be used by SVE for MemSingleNF.
+ * Returns false if the access is Checked and the check failed.  This
+ * is only intended to probe the tag -- the validity of the page must
+ * be checked beforehand.
+ */
+bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr)
+{
+    uint64_t discard;
+    return mte_check_int(env, desc, ptr, 0, &discard) != 0;
+}
+
+uint64_t mte_check(CPUARMState *env, uint32_t desc,
+                   uint64_t ptr, uintptr_t ra)
+{
+    uint64_t fault;
+    int ret = mte_check_int(env, desc, ptr, ra, &fault);
+
+    if (unlikely(ret == 0)) {
+        mte_check_fail(env, desc, fault, ra);
+    } else if (ret < 0) {
+        return ptr;
+    }
     return useronly_clean_ptr(ptr);
 }
 
-uint64_t HELPER(mte_checkN)(CPUARMState *env, uint32_t desc, uint64_t ptr)
+uint64_t HELPER(mte_check)(CPUARMState *env, uint32_t desc, uint64_t ptr)
 {
-    return mte_checkN(env, desc, ptr, GETPC());
+    return mte_check(env, desc, ptr, GETPC());
 }
 
 /*
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fd6c58f96a..9382ece660 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -4382,13 +4382,9 @@ static void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
 #endif
 }
 
-typedef uint64_t mte_check_fn(CPUARMState *, uint32_t, uint64_t, uintptr_t);
-
-static inline QEMU_ALWAYS_INLINE
-void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
-                                 uint64_t *vg, target_ulong addr, int esize,
-                                 int msize, uint32_t mtedesc, uintptr_t ra,
-                                 mte_check_fn *check)
+static void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
+                                    uint64_t *vg, target_ulong addr, int esize,
+                                    int msize, uint32_t mtedesc, uintptr_t ra)
 {
     intptr_t mem_off, reg_off, reg_last;
 
@@ -4405,7 +4401,7 @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
         uint64_t pg = vg[reg_off >> 6];
         do {
             if ((pg >> (reg_off & 63)) & 1) {
-                check(env, mtedesc, addr, ra);
+                mte_check(env, mtedesc, addr, ra);
             }
             reg_off += esize;
             mem_off += msize;
@@ -4422,7 +4418,7 @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
         uint64_t pg = vg[reg_off >> 6];
         do {
             if ((pg >> (reg_off & 63)) & 1) {
-                check(env, mtedesc, addr, ra);
+                mte_check(env, mtedesc, addr, ra);
             }
             reg_off += esize;
             mem_off += msize;
@@ -4431,30 +4427,6 @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
     }
 }
 
-typedef void sve_cont_ldst_mte_check_fn(SVEContLdSt *info, CPUARMState *env,
-                                        uint64_t *vg, target_ulong addr,
-                                        int esize, int msize, uint32_t mtedesc,
-                                        uintptr_t ra);
-
-static void sve_cont_ldst_mte_check1(SVEContLdSt *info, CPUARMState *env,
-                                     uint64_t *vg, target_ulong addr,
-                                     int esize, int msize, uint32_t mtedesc,
-                                     uintptr_t ra)
-{
-    sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
-                                mtedesc, ra, mte_check1);
-}
-
-static void sve_cont_ldst_mte_checkN(SVEContLdSt *info, CPUARMState *env,
-                                     uint64_t *vg, target_ulong addr,
-                                     int esize, int msize, uint32_t mtedesc,
-                                     uintptr_t ra)
-{
-    sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
-                                mtedesc, ra, mte_checkN);
-}
-
-
 /*
  * Common helper for all contiguous 1,2,3,4-register predicated stores.
  */
@@ -4463,8 +4435,7 @@ void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
                uint32_t desc, const uintptr_t retaddr,
                const int esz, const int msz, const int N, uint32_t mtedesc,
                sve_ldst1_host_fn *host_fn,
-               sve_ldst1_tlb_fn *tlb_fn,
-               sve_cont_ldst_mte_check_fn *mte_check_fn)
+               sve_ldst1_tlb_fn *tlb_fn)
 {
     const unsigned rd = simd_data(desc);
     const intptr_t reg_max = simd_oprsz(desc);
@@ -4493,9 +4464,9 @@ void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
      * Handle mte checks for all active elements.
      * Since TBI must be set for MTE, !mtedesc => !mte_active.
      */
-    if (mte_check_fn && mtedesc) {
-        mte_check_fn(&info, env, vg, addr, 1 << esz, N << msz,
-                     mtedesc, retaddr);
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, 1 << esz, N << msz,
+                                mtedesc, retaddr);
     }
 
     flags = info.page[0].flags | info.page[1].flags;
@@ -4621,8 +4592,7 @@ void sve_ldN_r_mte(CPUARMState *env, uint64_t *vg, target_ulong addr,
         mtedesc = 0;
     }
 
-    sve_ldN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn,
-              N == 1 ? sve_cont_ldst_mte_check1 : sve_cont_ldst_mte_checkN);
+    sve_ldN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn);
 }
 
 #define DO_LD1_1(NAME, ESZ) \
@@ -4630,7 +4600,7 @@ void HELPER(sve_##NAME##_r)(CPUARMState *env, void *vg,        \
                             target_ulong addr, uint32_t desc)  \
 {                                                              \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, 1, 0,   \
-              sve_##NAME##_host, sve_##NAME##_tlb, NULL);      \
+              sve_##NAME##_host, sve_##NAME##_tlb);            \
 }                                                              \
 void HELPER(sve_##NAME##_r_mte)(CPUARMState *env, void *vg,    \
                                 target_ulong addr, uint32_t desc) \
@@ -4644,13 +4614,13 @@ void HELPER(sve_##NAME##_le_r)(CPUARMState *env, void *vg,     \
                                target_ulong addr, uint32_t desc) \
 {                                                              \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, 0,    \
-              sve_##NAME##_le_host, sve_##NAME##_le_tlb, NULL); \
+              sve_##NAME##_le_host, sve_##NAME##_le_tlb);      \
 }                                                              \
 void HELPER(sve_##NAME##_be_r)(CPUARMState *env, void *vg,     \
                                target_ulong addr, uint32_t desc) \
 {                                                              \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, 0,    \
-              sve_##NAME##_be_host, sve_##NAME##_be_tlb, NULL); \
+              sve_##NAME##_be_host, sve_##NAME##_be_tlb);      \
 }                                                              \
 void HELPER(sve_##NAME##_le_r_mte)(CPUARMState *env, void *vg, \
                                    target_ulong addr, uint32_t desc) \
@@ -4693,7 +4663,7 @@ void HELPER(sve_ld##N##bb_r)(CPUARMState *env, void *vg,       \
                              target_ulong addr, uint32_t desc) \
 {                                                              \
     sve_ldN_r(env, vg, addr, desc, GETPC(), MO_8, MO_8, N, 0,  \
-              sve_ld1bb_host, sve_ld1bb_tlb, NULL);            \
+              sve_ld1bb_host, sve_ld1bb_tlb);                  \
 }                                                              \
 void HELPER(sve_ld##N##bb_r_mte)(CPUARMState *env, void *vg,   \
                                  target_ulong addr, uint32_t desc) \
@@ -4707,13 +4677,13 @@ void HELPER(sve_ld##N##SUFF##_le_r)(CPUARMState *env, void *vg,        \
                                     target_ulong addr, uint32_t desc)  \
 {                                                                      \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, 0,            \
-              sve_ld1##SUFF##_le_host, sve_ld1##SUFF##_le_tlb, NULL);  \
+              sve_ld1##SUFF##_le_host, sve_ld1##SUFF##_le_tlb);        \
 }                                                                      \
 void HELPER(sve_ld##N##SUFF##_be_r)(CPUARMState *env, void *vg,        \
                                     target_ulong addr, uint32_t desc)  \
 {                                                                      \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, 0,            \
-              sve_ld1##SUFF##_be_host, sve_ld1##SUFF##_be_tlb, NULL);  \
+              sve_ld1##SUFF##_be_host, sve_ld1##SUFF##_be_tlb);        \
 }                                                                      \
 void HELPER(sve_ld##N##SUFF##_le_r_mte)(CPUARMState *env, void *vg,    \
                                         target_ulong addr, uint32_t desc) \
@@ -4826,7 +4796,7 @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
     if (fault == FAULT_FIRST) {
         /* Trapping mte check for the first-fault element. */
         if (mtedesc) {
-            mte_check1(env, mtedesc, addr + mem_off, retaddr);
+            mte_check(env, mtedesc, addr + mem_off, retaddr);
         }
 
         /*
@@ -4869,7 +4839,7 @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
                 /* Watchpoint hit, see below. */
                 goto do_fault;
             }
-            if (mtedesc && !mte_probe1(env, mtedesc, addr + mem_off)) {
+            if (mtedesc && !mte_probe(env, mtedesc, addr + mem_off)) {
                 goto do_fault;
             }
             /*
@@ -4919,7 +4889,7 @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
                                & BP_MEM_READ)) {
                 goto do_fault;
             }
-            if (mtedesc && !mte_probe1(env, mtedesc, addr + mem_off)) {
+            if (mtedesc && !mte_probe(env, mtedesc, addr + mem_off)) {
                 goto do_fault;
             }
             host_fn(vd, reg_off, host + mem_off);
@@ -5090,8 +5060,7 @@ void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr,
                uint32_t desc, const uintptr_t retaddr,
                const int esz, const int msz, const int N, uint32_t mtedesc,
                sve_ldst1_host_fn *host_fn,
-               sve_ldst1_tlb_fn *tlb_fn,
-               sve_cont_ldst_mte_check_fn *mte_check_fn)
+               sve_ldst1_tlb_fn *tlb_fn)
 {
     const unsigned rd = simd_data(desc);
     const intptr_t reg_max = simd_oprsz(desc);
@@ -5117,9 +5086,9 @@ void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr,
      * Handle mte checks for all active elements.
      * Since TBI must be set for MTE, !mtedesc => !mte_active.
      */
-    if (mte_check_fn && mtedesc) {
-        mte_check_fn(&info, env, vg, addr, 1 << esz, N << msz,
-                     mtedesc, retaddr);
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, 1 << esz, N << msz,
+                                mtedesc, retaddr);
     }
 
     flags = info.page[0].flags | info.page[1].flags;
@@ -5233,8 +5202,7 @@ void sve_stN_r_mte(CPUARMState *env, uint64_t *vg, target_ulong addr,
         mtedesc = 0;
     }
 
-    sve_stN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn,
-              N == 1 ? sve_cont_ldst_mte_check1 : sve_cont_ldst_mte_checkN);
+    sve_stN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn);
 }
 
 #define DO_STN_1(N, NAME, ESZ) \
@@ -5242,7 +5210,7 @@ void HELPER(sve_st##N##NAME##_r)(CPUARMState *env, void *vg,       \
                                  target_ulong addr, uint32_t desc) \
 {                                                                  \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, N, 0,       \
-              sve_st1##NAME##_host, sve_st1##NAME##_tlb, NULL);    \
+              sve_st1##NAME##_host, sve_st1##NAME##_tlb);          \
 }                                                                  \
 void HELPER(sve_st##N##NAME##_r_mte)(CPUARMState *env, void *vg,   \
                                      target_ulong addr, uint32_t desc) \
@@ -5256,13 +5224,13 @@ void HELPER(sve_st##N##NAME##_le_r)(CPUARMState *env, void *vg,        \
                                     target_ulong addr, uint32_t desc) \
 {                                                                  \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, 0,        \
-              sve_st1##NAME##_le_host, sve_st1##NAME##_le_tlb, NULL); \
+              sve_st1##NAME##_le_host, sve_st1##NAME##_le_tlb);    \
 }                                                                  \
 void HELPER(sve_st##N##NAME##_be_r)(CPUARMState *env, void *vg,    \
                                     target_ulong addr, uint32_t desc) \
 {                                                                  \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, 0,        \
-              sve_st1##NAME##_be_host, sve_st1##NAME##_be_tlb, NULL); \
+              sve_st1##NAME##_be_host, sve_st1##NAME##_be_tlb);    \
 }                                                                  \
 void HELPER(sve_st##N##NAME##_le_r_mte)(CPUARMState *env, void *vg, \
                                         target_ulong addr, uint32_t desc) \
@@ -5373,7 +5341,7 @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                      info.attrs, BP_MEM_READ, retaddr);
             }
             if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
-                mte_check1(env, mtedesc, addr, retaddr);
+                mte_check(env, mtedesc, addr, retaddr);
             }
             host_fn(&scratch, reg_off, info.host);
         } else {
@@ -5386,7 +5354,7 @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                      BP_MEM_READ, retaddr);
             }
             if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
-                mte_check1(env, mtedesc, addr, retaddr);
+                mte_check(env, mtedesc, addr, retaddr);
             }
             tlb_fn(env, &scratch, reg_off, addr, retaddr);
         }
@@ -5552,7 +5520,7 @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
      */
     addr = base + (off_fn(vm, reg_off) << scale);
     if (mtedesc) {
-        mte_check1(env, mtedesc, addr, retaddr);
+        mte_check(env, mtedesc, addr, retaddr);
     }
     tlb_fn(env, vd, reg_off, addr, retaddr);
 
@@ -5588,7 +5556,7 @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
             }
             if (mtedesc &&
                 arm_tlb_mte_tagged(&info.attrs) &&
-                !mte_probe1(env, mtedesc, addr)) {
+                !mte_probe(env, mtedesc, addr)) {
                 goto fault;
             }
 
@@ -5773,7 +5741,7 @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
             }
 
             if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
-                mte_check1(env, mtedesc, addr, retaddr);
+                mte_check(env, mtedesc, addr, retaddr);
             }
         }
         i += 1;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 0b42e53500..d8cf284a15 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -253,10 +253,7 @@ static void gen_probe_access(DisasContext *s, TCGv_i64 ptr,
 }
 
 /*
- * For MTE, check a single logical or atomic access.  This probes a single
- * address, the exact one specified.  The size and alignment of the access
- * is not relevant to MTE, per se, but watchpoints do require the size,
- * and we want to recognize those before making any other changes to state.
+ * For MTE, check a single logical or atomic access.
  */
 static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
                                       bool is_write, bool tag_checked,
@@ -272,11 +269,11 @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << log2_size);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << log2_size) - 1);
         tcg_desc = tcg_const_i32(desc);
 
         ret = new_tmp_a64(s);
-        gen_helper_mte_check1(ret, cpu_env, tcg_desc, addr);
+        gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
         tcg_temp_free_i32(tcg_desc);
 
         return ret;
@@ -295,28 +292,24 @@ TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
  * For MTE, check multiple logical sequential accesses.
  */
 TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
-                        bool tag_checked, int log2_esize, int total_size)
+                        bool tag_checked, int total_size)
 {
-    if (tag_checked && s->mte_active[0] && total_size != (1 << log2_esize)) {
-        TCGv_i32 tcg_desc;
-        TCGv_i64 ret;
-        int desc = 0;
+    TCGv_i32 tcg_desc;
+    TCGv_i64 ret;
+    int desc = 0;
 
-        desc = FIELD_DP32(desc, MTEDESC, MIDX, get_mem_index(s));
-        desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
-        desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
-        desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << log2_esize);
-        desc = FIELD_DP32(desc, MTEDESC, TSIZE, total_size);
-        tcg_desc = tcg_const_i32(desc);
+    desc = FIELD_DP32(desc, MTEDESC, MIDX, get_mem_index(s));
+    desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
+    desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
+    desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
+    desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
+    tcg_desc = tcg_const_i32(desc);
 
-        ret = new_tmp_a64(s);
-        gen_helper_mte_checkN(ret, cpu_env, tcg_desc, addr);
-        tcg_temp_free_i32(tcg_desc);
+    ret = new_tmp_a64(s);
+    gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
+    tcg_temp_free_i32(tcg_desc);
 
-        return ret;
-    }
-    return gen_mte_check1(s, addr, is_write, tag_checked, log2_esize);
+    return ret;
 }
 
 typedef struct DisasCompare64 {
@@ -2966,8 +2959,7 @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
     }
 
     clean_addr = gen_mte_checkN(s, dirty_addr, !is_load,
-                                (wback || rn != 31) && !set_tag,
-                                size, 2 << size);
+                                (wback || rn != 31) && !set_tag, 2 << size);
 
     if (is_vector) {
         if (is_load) {
@@ -3713,8 +3705,8 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
      * Issue the MTE check vs the logical repeat count, before we
      * promote consecutive little-endian elements below.
      */
-    clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
-                                size, total);
+    clean_addr = gen_mte_checkN(s, tcg_rn, is_store,
                                is_postidx || rn != 31, total);
 
     /*
      * Consecutive little-endian elements from a single register
@@ -3866,8 +3858,8 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     total = selem << scale;
     tcg_rn = cpu_reg_sp(s, rn);
 
-    clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
-                                scale, total);
+    clean_addr = gen_mte_checkN(s, tcg_rn, !is_load,
+                                is_postidx || rn != 31, total);
 
     tcg_ebytes = tcg_const_i64(1 << scale);
     for (xs = 0; xs < selem; xs++) {
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 0eefb61214..584c4d047c 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4264,7 +4264,7 @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
     dirty_addr = tcg_temp_new_i64();
     tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
-    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
+    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
     tcg_temp_free_i64(dirty_addr);
 
     /*
@@ -4352,7 +4352,7 @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
     dirty_addr = tcg_temp_new_i64();
     tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
-    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
+    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
     tcg_temp_free_i64(dirty_addr);
 
     /* Note that unpredicated load/store of vector/predicate registers
@@ -4509,8 +4509,7 @@ static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << msz);
-        desc = FIELD_DP32(desc, MTEDESC, TSIZE, mte_n << msz);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (mte_n << msz) - 1);
         desc <<= SVE_MTEDESC_SHIFT;
     } else {
         addr = clean_data_tbi(s, addr);
@@ -5189,7 +5188,7 @@ static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << msz);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << msz) - 1);
         desc <<= SVE_MTEDESC_SHIFT;
     }
     desc = simd_desc(vsz, vsz, desc | scale);
diff --git a/tests/tcg/aarch64/mte-5.c b/tests/tcg/aarch64/mte-5.c
new file mode 100644
index 0000000000..6dbd6ab3ea
--- /dev/null
+++ b/tests/tcg/aarch64/mte-5.c
@@ -0,0 +1,44 @@
+/*
+ * Memory tagging, faulting unaligned access.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTESERR);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    void *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_SYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers. */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    /* Store two different tags in sequential granules. */
+    asm("stg %0, [%0]" : : "r"(p1));
+    asm("stg %0, [%0]" : : "r"(p2 + 16));
+
+    /* Perform an unaligned load crossing the granules. */
+    asm volatile("ldr %0, [%1]" : "=r"(p0) : "r"(p1 + 12));
+    abort();
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index 56e48f4b34..6c95dd8b9e 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -37,7 +37,7 @@ AARCH64_TESTS += bti-2
 
 # MTE Tests
 ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
-AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5
 mte-%: CFLAGS += -march=armv8.5-a+memtag
 endif
 
-- 
2.25.1