From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Subject: [PATCH v2 1/3] target/arm: Check PAGE_WRITE_ORG for MTE writeability
Date: Fri, 2 Apr 2021 09:18:33 -0700
Message-Id: <20210402161835.286665-2-richard.henderson@linaro.org>
In-Reply-To: <20210402161835.286665-1-richard.henderson@linaro.org>

We can remove PAGE_WRITE when (internally) marking a page read-only
because it contains translated code.  This can be triggered by
tests/tcg/aarch64/bti-2, after having serviced SIGILL trampolines
on the stack.
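As background for the one-line change below, here is a minimal standalone
sketch of the flag behaviour being relied on.  This is not QEMU code: the
flag values and helper names are invented for illustration, but they model
how PAGE_WRITE is cleared while a page holds translated code, whereas
PAGE_WRITE_ORG still records the permission the guest originally asked for,
so a writeability test for MTE must consult the latter.

#include <stdbool.h>
#include <stdio.h>

enum {
    PAGE_READ      = 1 << 0,
    PAGE_WRITE     = 1 << 1,   /* current, possibly downgraded, permission */
    PAGE_WRITE_ORG = 1 << 2,   /* permission originally requested by the guest */
};

/* Model of marking a page read-only because it now holds translated code. */
static int protect_for_translation(int flags)
{
    return flags & ~PAGE_WRITE;
}

/* The MTE writeability test must look at the original permission. */
static bool guest_may_write(int flags)
{
    return (flags & PAGE_WRITE_ORG) != 0;
}

int main(void)
{
    int flags = PAGE_READ | PAGE_WRITE | PAGE_WRITE_ORG;

    flags = protect_for_translation(flags);
    printf("PAGE_WRITE=%d PAGE_WRITE_ORG=%d -> store allowed: %d\n",
           !!(flags & PAGE_WRITE), !!(flags & PAGE_WRITE_ORG),
           guest_may_write(flags));
    return 0;
}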
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/mte_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index 0bbb9ec346..8be17e1b70 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -83,7 +83,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     uint8_t *tags;
     uintptr_t index;
 
-    if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE : PAGE_READ))) {
+    if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE_ORG : PAGE_READ))) {
         /* SIGSEGV */
         arm_cpu_tlb_fill(env_cpu(env), ptr, ptr_size, ptr_access,
                          ptr_mmu_idx, false, ra);
-- 
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org
Subject: [PATCH v2 2/3] target/arm: Fix unaligned mte checks
Date: Fri, 2 Apr 2021 09:18:34 -0700
Message-Id: <20210402161835.286665-3-richard.henderson@linaro.org>
In-Reply-To: <20210402161835.286665-1-richard.henderson@linaro.org>

We were incorrectly assuming that only the first byte of an MTE access
is checked against the tags.  But per the ARM, unaligned accesses are
pre-decomposed into single-byte accesses.  So by the time we reach the
actual MTE check in the ARM pseudocode, all accesses are aligned.

Therefore, drop mte_check1, since we cannot know a priori that an access
is aligned.  Rename mte_checkN to mte_check, which now handles all
accesses.  Rename mte_probe1 to mte_probe, and use a common helper.

Drop the computation of the faulting nth element, since all accesses can
be considered to devolve to bytes, and simply compute the faulting
address.
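As an aside, the arithmetic that the new SIZEM1 encoding enables can be
shown with a small standalone program.  This is an illustration of the idea
only, not code from the patch: once an access is viewed as a run of bytes,
the check needs nothing more than the inclusive range [ptr, ptr + size - 1]
and the 16-byte tag granules it overlaps, which is also why the unaligned
load in the new mte-5.c test below ends up touching two granules.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TAG_GRANULE 16u

/* How many 16-byte tag granules does the range [ptr, ptr + size - 1] touch? */
static uint64_t granules_touched(uint64_t ptr, uint32_t size)
{
    uint64_t ptr_last  = ptr + size - 1;                  /* the SIZEM1 idea */
    uint64_t tag_first = ptr & ~(uint64_t)(TAG_GRANULE - 1);
    uint64_t tag_last  = ptr_last & ~(uint64_t)(TAG_GRANULE - 1);

    return (tag_last - tag_first) / TAG_GRANULE + 1;
}

int main(void)
{
    /* An aligned 8-byte access stays within one granule... */
    printf("aligned:   %" PRIu64 " granule(s)\n", granules_touched(0x1000, 8));
    /* ...while an unaligned one starting at offset 12 spills into a second. */
    printf("unaligned: %" PRIu64 " granule(s)\n", granules_touched(0x100c, 8));
    return 0;
}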
Buglink: https://bugs.launchpad.net/bugs/1921948
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/arm/helper-a64.h           |   3 +-
 target/arm/internals.h            |  13 +--
 target/arm/translate-a64.h        |   2 +-
 target/arm/mte_helper.c           | 169 ++++++++++++------------------
 target/arm/sve_helper.c           |  96 ++++++-----------
 target/arm/translate-a64.c        |  31 +++---
 target/arm/translate-sve.c        |   9 +-
 tests/tcg/aarch64/mte-5.c         |  44 ++++++++
 tests/tcg/aarch64/Makefile.target |   2 +-
 9 files changed, 169 insertions(+), 200 deletions(-)
 create mode 100644 tests/tcg/aarch64/mte-5.c

diff --git a/target/arm/helper-a64.h b/target/arm/helper-a64.h
index c139fa81f9..7b706571bb 100644
--- a/target/arm/helper-a64.h
+++ b/target/arm/helper-a64.h
@@ -104,8 +104,7 @@ DEF_HELPER_FLAGS_3(autdb, TCG_CALL_NO_WG, i64, env, i64, i64)
 DEF_HELPER_FLAGS_2(xpaci, TCG_CALL_NO_RWG_SE, i64, env, i64)
 DEF_HELPER_FLAGS_2(xpacd, TCG_CALL_NO_RWG_SE, i64, env, i64)
 
-DEF_HELPER_FLAGS_3(mte_check1, TCG_CALL_NO_WG, i64, env, i32, i64)
-DEF_HELPER_FLAGS_3(mte_checkN, TCG_CALL_NO_WG, i64, env, i32, i64)
+DEF_HELPER_FLAGS_3(mte_check, TCG_CALL_NO_WG, i64, env, i32, i64)
 DEF_HELPER_FLAGS_3(mte_check_zva, TCG_CALL_NO_WG, i64, env, i32, i64)
 DEF_HELPER_FLAGS_3(irg, TCG_CALL_NO_RWG, i64, env, i64, i64)
 DEF_HELPER_FLAGS_4(addsubg, TCG_CALL_NO_RWG_SE, i64, env, i64, s32, i32)
diff --git a/target/arm/internals.h b/target/arm/internals.h
index f11bd32696..817d3aa51b 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1137,19 +1137,16 @@ FIELD(PREDDESC, DATA, 8, 24)
  */
 #define SVE_MTEDESC_SHIFT 5
 
-/* Bits within a descriptor passed to the helper_mte_check* functions. */
+/* Bits within a descriptor passed to the helper_mte_check function. */
 FIELD(MTEDESC, MIDX, 0, 4)
 FIELD(MTEDESC, TBI, 4, 2)
 FIELD(MTEDESC, TCMA, 6, 2)
 FIELD(MTEDESC, WRITE, 8, 1)
-FIELD(MTEDESC, ESIZE, 9, 5)
-FIELD(MTEDESC, TSIZE, 14, 10)  /* mte_checkN only */
+FIELD(MTEDESC, SIZEM1, 12, 10)  /* size - 1 */
 
-bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr);
-uint64_t mte_check1(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra);
-uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra);
+bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr);
+uint64_t mte_check(CPUARMState *env, uint32_t desc,
+                   uint64_t ptr, uintptr_t ra);
 
 static inline int allocation_tag_from_addr(uint64_t ptr)
 {
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 3668b671dd..6c4bbf9096 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -44,7 +44,7 @@ TCGv_i64 clean_data_tbi(DisasContext *s, TCGv_i64 addr);
 TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
                         bool tag_checked, int log2_size);
 TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
-                        bool tag_checked, int count, int log2_esize);
+                        bool tag_checked, int total_size);
 
 /* We should have at some point before trying to access an FP register
  * done the necessary access check, so assert that
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index 8be17e1b70..62bea7ad4a 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -121,7 +121,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
      * exception for inaccessible pages, and resolves the virtual address
      * into the softmmu tlb.
      *
-     * When RA == 0, this is for mte_probe1.  The page is expected to be
+     * When RA == 0, this is for mte_probe.  The page is expected to be
      * valid.  Indicate to probe_access_flags no-fault, then assert that
      * we received a valid page.
      */
@@ -617,80 +617,6 @@ static void mte_check_fail(CPUARMState *env, uint32_t desc,
     }
 }
 
-/*
- * Perform an MTE checked access for a single logical or atomic access.
- */
-static bool mte_probe1_int(CPUARMState *env, uint32_t desc, uint64_t ptr,
-                           uintptr_t ra, int bit55)
-{
-    int mem_tag, mmu_idx, ptr_tag, size;
-    MMUAccessType type;
-    uint8_t *mem;
-
-    ptr_tag = allocation_tag_from_addr(ptr);
-
-    if (tcma_check(desc, bit55, ptr_tag)) {
-        return true;
-    }
-
-    mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
-    type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
-    size = FIELD_EX32(desc, MTEDESC, ESIZE);
-
-    mem = allocation_tag_mem(env, mmu_idx, ptr, type, size,
-                             MMU_DATA_LOAD, 1, ra);
-    if (!mem) {
-        return true;
-    }
-
-    mem_tag = load_tag1(ptr, mem);
-    return ptr_tag == mem_tag;
-}
-
-/*
- * No-fault version of mte_check1, to be used by SVE for MemSingleNF.
- * Returns false if the access is Checked and the check failed.  This
- * is only intended to probe the tag -- the validity of the page must
- * be checked beforehand.
- */
-bool mte_probe1(CPUARMState *env, uint32_t desc, uint64_t ptr)
-{
-    int bit55 = extract64(ptr, 55, 1);
-
-    /* If TBI is disabled, the access is unchecked. */
-    if (unlikely(!tbi_check(desc, bit55))) {
-        return true;
-    }
-
-    return mte_probe1_int(env, desc, ptr, 0, bit55);
-}
-
-uint64_t mte_check1(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra)
-{
-    int bit55 = extract64(ptr, 55, 1);
-
-    /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
-    if (unlikely(!tbi_check(desc, bit55))) {
-        return ptr;
-    }
-
-    if (unlikely(!mte_probe1_int(env, desc, ptr, ra, bit55))) {
-        mte_check_fail(env, desc, ptr, ra);
-    }
-
-    return useronly_clean_ptr(ptr);
-}
-
-uint64_t HELPER(mte_check1)(CPUARMState *env, uint32_t desc, uint64_t ptr)
-{
-    return mte_check1(env, desc, ptr, GETPC());
-}
-
-/*
- * Perform an MTE checked access for multiple logical accesses.
- */
-
 /**
  * checkN:
  * @tag: tag memory to test
@@ -753,38 +679,49 @@ static int checkN(uint8_t *mem, int odd, int cmp, int count)
     return n;
 }
 
-uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
-                    uint64_t ptr, uintptr_t ra)
+/*
+ * mte_check_int:
+ * @env: CPU environment
+ * @desc: MTEDESC descriptor
+ * @ptr: virtual address of the base of the access
+ * @fault: return virtual address of the first check failure
+ *
+ * Internal routine for both mte_probe and mte_check.
+ * Return zero on failure, filling in *fault.
+ * Return negative on trivial success for tbi disabled.
+ * Return positive on success with tbi enabled.
+ */
+static int mte_check_int(CPUARMState *env, uint32_t desc,
+                         uint64_t ptr, uintptr_t ra, uint64_t *fault)
 {
     int mmu_idx, ptr_tag, bit55;
-    uint64_t ptr_last, ptr_end, prev_page, next_page;
+    uint64_t ptr_last, prev_page, next_page;
     uint64_t tag_first, tag_end;
     uint64_t tag_byte_first, tag_byte_end;
-    uint32_t esize, total, tag_count, tag_size, n, c;
+    uint32_t sizem1, tag_count, tag_size, n, c;
     uint8_t *mem1, *mem2;
     MMUAccessType type;
 
     bit55 = extract64(ptr, 55, 1);
+    *fault = ptr;
 
     /* If TBI is disabled, the access is unchecked, and ptr is not dirty. */
     if (unlikely(!tbi_check(desc, bit55))) {
-        return ptr;
+        return -1;
     }
 
     ptr_tag = allocation_tag_from_addr(ptr);
 
     if (tcma_check(desc, bit55, ptr_tag)) {
-        goto done;
+        return 1;
     }
 
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
     type = FIELD_EX32(desc, MTEDESC, WRITE) ? MMU_DATA_STORE : MMU_DATA_LOAD;
-    esize = FIELD_EX32(desc, MTEDESC, ESIZE);
-    total = FIELD_EX32(desc, MTEDESC, TSIZE);
+    sizem1 = FIELD_EX32(desc, MTEDESC, SIZEM1);
 
-    /* Find the addr of the end of the access, and of the last element. */
-    ptr_end = ptr + total;
-    ptr_last = ptr_end - esize;
+    /* Find the addr of the last byte of the access. */
+    ptr_last = ptr + sizem1;
 
     /* Round the bounds to the tag granule, and compute the number of tags. */
     tag_first = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE);
@@ -802,12 +739,19 @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
     if (likely(tag_end - prev_page <= TARGET_PAGE_SIZE)) {
         /* Memory access stays on one page. */
         tag_size = (tag_byte_end - tag_byte_first) / (2 * TAG_GRANULE);
-        mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, total,
+        mem1 = allocation_tag_mem(env, mmu_idx, ptr, type, sizem1 + 1,
                                   MMU_DATA_LOAD, tag_size, ra);
         if (!mem1) {
-            goto done;
+            return 1;
+        }
+
+        /*
+         * Perform all of the comparisons, recognizing that most are
+         * aligned operations that do not cross granule boundaries.
+         */
+        if (likely(tag_count == 1)) {
+            return ptr_tag == load_tag1(ptr, mem1);
         }
-        /* Perform all of the comparisons. */
         n = checkN(mem1, ptr & TAG_GRANULE, ptr_tag, tag_count);
     } else {
         /* Memory access crosses to next page. */
@@ -817,7 +761,7 @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
 
         tag_size = (tag_byte_end - next_page) / (2 * TAG_GRANULE);
         mem2 = allocation_tag_mem(env, mmu_idx, next_page, type,
-                                  ptr_end - next_page,
+                                  ptr_last - next_page + 1,
                                   MMU_DATA_LOAD, tag_size, ra);
 
         /*
@@ -831,31 +775,54 @@ uint64_t mte_checkN(CPUARMState *env, uint32_t desc,
         }
         if (n == c) {
             if (!mem2) {
-                goto done;
+                return 1;
             }
             n += checkN(mem2, 0, ptr_tag, tag_count - c);
         }
     }
 
     /*
-     * If we failed, we know which granule.  Compute the element that
-     * is first in that granule, and signal failure on that element.
+     * If we failed, we know which granule -- signal failure at that address.
      */
     if (unlikely(n < tag_count)) {
-        uint64_t fail_ofs;
-
-        fail_ofs = tag_first + n * TAG_GRANULE - ptr;
-        fail_ofs = ROUND_UP(fail_ofs, esize);
-        mte_check_fail(env, desc, ptr + fail_ofs, ra);
+        if (n > 0) {
+            *fault = tag_first + n * TAG_GRANULE;
+        }
+        return 0;
     }
 
- done:
+    return 1;
+}
+
+/*
+ * No-fault probe, to be used by SVE for MemSingleNF.
+ * Returns false if the access is Checked and the check failed.  This
+ * is only intended to probe the tag -- the validity of the page must
+ * be checked beforehand.
+ */
+bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr)
+{
+    uint64_t discard;
+    return mte_check_int(env, desc, ptr, 0, &discard) != 0;
+}
+
+uint64_t mte_check(CPUARMState *env, uint32_t desc,
+                   uint64_t ptr, uintptr_t ra)
+{
+    uint64_t fault;
+    int ret = mte_check_int(env, desc, ptr, ra, &fault);
+
+    if (unlikely(ret == 0)) {
+        mte_check_fail(env, desc, fault, ra);
+    } else if (ret < 0) {
+        return ptr;
+    }
     return useronly_clean_ptr(ptr);
 }
 
-uint64_t HELPER(mte_checkN)(CPUARMState *env, uint32_t desc, uint64_t ptr)
+uint64_t HELPER(mte_check)(CPUARMState *env, uint32_t desc, uint64_t ptr)
 {
-    return mte_checkN(env, desc, ptr, GETPC());
+    return mte_check(env, desc, ptr, GETPC());
 }
 
 /*
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fd6c58f96a..9382ece660 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -4382,13 +4382,9 @@ static void sve_cont_ldst_watchpoints(SVEContLdSt *info, CPUARMState *env,
 #endif
 }
 
-typedef uint64_t mte_check_fn(CPUARMState *, uint32_t, uint64_t, uintptr_t);
-
-static inline QEMU_ALWAYS_INLINE
-void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
-                                 uint64_t *vg, target_ulong addr, int esize,
-                                 int msize, uint32_t mtedesc, uintptr_t ra,
-                                 mte_check_fn *check)
+static void sve_cont_ldst_mte_check(SVEContLdSt *info, CPUARMState *env,
+                                    uint64_t *vg, target_ulong addr, int esize,
+                                    int msize, uint32_t mtedesc, uintptr_t ra)
 {
     intptr_t mem_off, reg_off, reg_last;
 
@@ -4405,7 +4401,7 @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
         uint64_t pg = vg[reg_off >> 6];
         do {
             if ((pg >> (reg_off & 63)) & 1) {
-                check(env, mtedesc, addr, ra);
+                mte_check(env, mtedesc, addr, ra);
             }
             reg_off += esize;
             mem_off += msize;
@@ -4422,7 +4418,7 @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
         uint64_t pg = vg[reg_off >> 6];
         do {
             if ((pg >> (reg_off & 63)) & 1) {
-                check(env, mtedesc, addr, ra);
+                mte_check(env, mtedesc, addr, ra);
             }
             reg_off += esize;
             mem_off += msize;
@@ -4431,30 +4427,6 @@ void sve_cont_ldst_mte_check_int(SVEContLdSt *info, CPUARMState *env,
     }
 }
 
-typedef void sve_cont_ldst_mte_check_fn(SVEContLdSt *info, CPUARMState *env,
-                                        uint64_t *vg, target_ulong addr,
-                                        int esize, int msize, uint32_t mtedesc,
-                                        uintptr_t ra);
-
-static void sve_cont_ldst_mte_check1(SVEContLdSt *info, CPUARMState *env,
-                                     uint64_t *vg, target_ulong addr,
-                                     int esize, int msize, uint32_t mtedesc,
-                                     uintptr_t ra)
-{
-    sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
-                                mtedesc, ra, mte_check1);
-}
-
-static void sve_cont_ldst_mte_checkN(SVEContLdSt *info, CPUARMState *env,
-                                     uint64_t *vg, target_ulong addr,
-                                     int esize, int msize, uint32_t mtedesc,
-                                     uintptr_t ra)
-{
-    sve_cont_ldst_mte_check_int(info, env, vg, addr, esize, msize,
-                                mtedesc, ra, mte_checkN);
-}
-
-
 /*
  * Common helper for all contiguous 1,2,3,4-register predicated stores.
  */
@@ -4463,8 +4435,7 @@ void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
                uint32_t desc, const uintptr_t retaddr,
                const int esz, const int msz, const int N, uint32_t mtedesc,
                sve_ldst1_host_fn *host_fn,
-               sve_ldst1_tlb_fn *tlb_fn,
-               sve_cont_ldst_mte_check_fn *mte_check_fn)
+               sve_ldst1_tlb_fn *tlb_fn)
 {
     const unsigned rd = simd_data(desc);
     const intptr_t reg_max = simd_oprsz(desc);
@@ -4493,9 +4464,9 @@ void sve_ldN_r(CPUARMState *env, uint64_t *vg, const target_ulong addr,
      * Handle mte checks for all active elements.
      * Since TBI must be set for MTE, !mtedesc => !mte_active.
      */
-    if (mte_check_fn && mtedesc) {
-        mte_check_fn(&info, env, vg, addr, 1 << esz, N << msz,
-                     mtedesc, retaddr);
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, 1 << esz, N << msz,
+                                mtedesc, retaddr);
     }
 
     flags = info.page[0].flags | info.page[1].flags;
@@ -4621,8 +4592,7 @@ void sve_ldN_r_mte(CPUARMState *env, uint64_t *vg, target_ulong addr,
         mtedesc = 0;
     }
 
-    sve_ldN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn,
-              N == 1 ? sve_cont_ldst_mte_check1 : sve_cont_ldst_mte_checkN);
+    sve_ldN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn);
 }
 
 #define DO_LD1_1(NAME, ESZ) \
@@ -4630,7 +4600,7 @@ void HELPER(sve_##NAME##_r)(CPUARMState *env, void *vg, \
                             target_ulong addr, uint32_t desc) \
 { \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, 1, 0, \
-              sve_##NAME##_host, sve_##NAME##_tlb, NULL); \
+              sve_##NAME##_host, sve_##NAME##_tlb); \
 } \
 void HELPER(sve_##NAME##_r_mte)(CPUARMState *env, void *vg, \
                                 target_ulong addr, uint32_t desc) \
@@ -4644,13 +4614,13 @@ void HELPER(sve_##NAME##_le_r)(CPUARMState *env, void *vg, \
                                target_ulong addr, uint32_t desc) \
 { \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, 0, \
-              sve_##NAME##_le_host, sve_##NAME##_le_tlb, NULL); \
+              sve_##NAME##_le_host, sve_##NAME##_le_tlb); \
 } \
 void HELPER(sve_##NAME##_be_r)(CPUARMState *env, void *vg, \
                                target_ulong addr, uint32_t desc) \
 { \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, 1, 0, \
-              sve_##NAME##_be_host, sve_##NAME##_be_tlb, NULL); \
+              sve_##NAME##_be_host, sve_##NAME##_be_tlb); \
 } \
 void HELPER(sve_##NAME##_le_r_mte)(CPUARMState *env, void *vg, \
                                    target_ulong addr, uint32_t desc) \
@@ -4693,7 +4663,7 @@ void HELPER(sve_ld##N##bb_r)(CPUARMState *env, void *vg, \
                              target_ulong addr, uint32_t desc) \
 { \
     sve_ldN_r(env, vg, addr, desc, GETPC(), MO_8, MO_8, N, 0, \
-              sve_ld1bb_host, sve_ld1bb_tlb, NULL); \
+              sve_ld1bb_host, sve_ld1bb_tlb); \
 } \
 void HELPER(sve_ld##N##bb_r_mte)(CPUARMState *env, void *vg, \
                                  target_ulong addr, uint32_t desc) \
@@ -4707,13 +4677,13 @@ void HELPER(sve_ld##N##SUFF##_le_r)(CPUARMState *env, void *vg, \
                                     target_ulong addr, uint32_t desc) \
 { \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, 0, \
-              sve_ld1##SUFF##_le_host, sve_ld1##SUFF##_le_tlb, NULL); \
+              sve_ld1##SUFF##_le_host, sve_ld1##SUFF##_le_tlb); \
 } \
 void HELPER(sve_ld##N##SUFF##_be_r)(CPUARMState *env, void *vg, \
                                     target_ulong addr, uint32_t desc) \
 { \
     sve_ldN_r(env, vg, addr, desc, GETPC(), ESZ, ESZ, N, 0, \
-              sve_ld1##SUFF##_be_host, sve_ld1##SUFF##_be_tlb, NULL); \
+              sve_ld1##SUFF##_be_host, sve_ld1##SUFF##_be_tlb); \
 } \
 void HELPER(sve_ld##N##SUFF##_le_r_mte)(CPUARMState *env, void *vg, \
                                         target_ulong addr, uint32_t desc) \
@@ -4826,7 +4796,7 @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
     if (fault == FAULT_FIRST) {
         /* Trapping mte check for the first-fault element.  */
         if (mtedesc) {
-            mte_check1(env, mtedesc, addr + mem_off, retaddr);
+            mte_check(env, mtedesc, addr + mem_off, retaddr);
         }
 
         /*
@@ -4869,7 +4839,7 @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
                 /* Watchpoint hit, see below. */
                 goto do_fault;
             }
-            if (mtedesc && !mte_probe1(env, mtedesc, addr + mem_off)) {
+            if (mtedesc && !mte_probe(env, mtedesc, addr + mem_off)) {
                 goto do_fault;
             }
             /*
@@ -4919,7 +4889,7 @@ void sve_ldnfff1_r(CPUARMState *env, void *vg, const target_ulong addr,
                                & BP_MEM_READ)) {
                 goto do_fault;
             }
-            if (mtedesc && !mte_probe1(env, mtedesc, addr + mem_off)) {
+            if (mtedesc && !mte_probe(env, mtedesc, addr + mem_off)) {
                 goto do_fault;
             }
             host_fn(vd, reg_off, host + mem_off);
@@ -5090,8 +5060,7 @@ void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr,
                uint32_t desc, const uintptr_t retaddr,
                const int esz, const int msz, const int N, uint32_t mtedesc,
                sve_ldst1_host_fn *host_fn,
-               sve_ldst1_tlb_fn *tlb_fn,
-               sve_cont_ldst_mte_check_fn *mte_check_fn)
+               sve_ldst1_tlb_fn *tlb_fn)
 {
     const unsigned rd = simd_data(desc);
     const intptr_t reg_max = simd_oprsz(desc);
@@ -5117,9 +5086,9 @@ void sve_stN_r(CPUARMState *env, uint64_t *vg, target_ulong addr,
      * Handle mte checks for all active elements.
      * Since TBI must be set for MTE, !mtedesc => !mte_active.
      */
-    if (mte_check_fn && mtedesc) {
-        mte_check_fn(&info, env, vg, addr, 1 << esz, N << msz,
-                     mtedesc, retaddr);
+    if (mtedesc) {
+        sve_cont_ldst_mte_check(&info, env, vg, addr, 1 << esz, N << msz,
+                                mtedesc, retaddr);
     }
 
     flags = info.page[0].flags | info.page[1].flags;
@@ -5233,8 +5202,7 @@ void sve_stN_r_mte(CPUARMState *env, uint64_t *vg, target_ulong addr,
         mtedesc = 0;
     }
 
-    sve_stN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn,
-              N == 1 ? sve_cont_ldst_mte_check1 : sve_cont_ldst_mte_checkN);
+    sve_stN_r(env, vg, addr, desc, ra, esz, msz, N, mtedesc, host_fn, tlb_fn);
 }
 
 #define DO_STN_1(N, NAME, ESZ) \
@@ -5242,7 +5210,7 @@ void HELPER(sve_st##N##NAME##_r)(CPUARMState *env, void *vg, \
                                  target_ulong addr, uint32_t desc) \
 { \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MO_8, N, 0, \
-              sve_st1##NAME##_host, sve_st1##NAME##_tlb, NULL); \
+              sve_st1##NAME##_host, sve_st1##NAME##_tlb); \
 } \
 void HELPER(sve_st##N##NAME##_r_mte)(CPUARMState *env, void *vg, \
                                      target_ulong addr, uint32_t desc) \
@@ -5256,13 +5224,13 @@ void HELPER(sve_st##N##NAME##_le_r)(CPUARMState *env, void *vg, \
                                     target_ulong addr, uint32_t desc) \
 { \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, 0, \
-              sve_st1##NAME##_le_host, sve_st1##NAME##_le_tlb, NULL); \
+              sve_st1##NAME##_le_host, sve_st1##NAME##_le_tlb); \
 } \
 void HELPER(sve_st##N##NAME##_be_r)(CPUARMState *env, void *vg, \
                                     target_ulong addr, uint32_t desc) \
 { \
     sve_stN_r(env, vg, addr, desc, GETPC(), ESZ, MSZ, N, 0, \
-              sve_st1##NAME##_be_host, sve_st1##NAME##_be_tlb, NULL); \
+              sve_st1##NAME##_be_host, sve_st1##NAME##_be_tlb); \
 } \
 void HELPER(sve_st##N##NAME##_le_r_mte)(CPUARMState *env, void *vg, \
                                         target_ulong addr, uint32_t desc) \
@@ -5373,7 +5341,7 @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                           info.attrs, BP_MEM_READ, retaddr);
             }
             if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
-                mte_check1(env, mtedesc, addr, retaddr);
+                mte_check(env, mtedesc, addr, retaddr);
             }
             host_fn(&scratch, reg_off, info.host);
         } else {
@@ -5386,7 +5354,7 @@ void sve_ld1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
                                           BP_MEM_READ, retaddr);
             }
             if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
-                mte_check1(env, mtedesc, addr, retaddr);
+                mte_check(env, mtedesc, addr, retaddr);
             }
             tlb_fn(env, &scratch, reg_off, addr, retaddr);
         }
@@ -5552,7 +5520,7 @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
      */
     addr = base + (off_fn(vm, reg_off) << scale);
     if (mtedesc) {
-        mte_check1(env, mtedesc, addr, retaddr);
+        mte_check(env, mtedesc, addr, retaddr);
     }
     tlb_fn(env, vd, reg_off, addr, retaddr);
 
@@ -5588,7 +5556,7 @@ void sve_ldff1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
             }
             if (mtedesc &&
                 arm_tlb_mte_tagged(&info.attrs) &&
-                !mte_probe1(env, mtedesc, addr)) {
+                !mte_probe(env, mtedesc, addr)) {
                 goto fault;
             }
 
@@ -5773,7 +5741,7 @@ void sve_st1_z(CPUARMState *env, void *vd, uint64_t *vg, void *vm,
             }
 
             if (mtedesc && arm_tlb_mte_tagged(&info.attrs)) {
-                mte_check1(env, mtedesc, addr, retaddr);
+                mte_check(env, mtedesc, addr, retaddr);
             }
         }
         i += 1;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 0b42e53500..44f32fc0a8 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -253,10 +253,7 @@ static void gen_probe_access(DisasContext *s, TCGv_i64 ptr,
 }
 
 /*
- * For MTE, check a single logical or atomic access.  This probes a single
- * address, the exact one specified.  The size and alignment of the access
- * is not relevant to MTE, per se, but watchpoints do require the size,
- * and we want to recognize those before making any other changes to state.
+ * For MTE, check a single logical or atomic access.
  */
 static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
                                       bool is_write, bool tag_checked,
@@ -272,11 +269,11 @@ static TCGv_i64 gen_mte_check1_mmuidx(DisasContext *s, TCGv_i64 addr,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << log2_size);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << log2_size) - 1);
         tcg_desc = tcg_const_i32(desc);
 
         ret = new_tmp_a64(s);
-        gen_helper_mte_check1(ret, cpu_env, tcg_desc, addr);
+        gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
         tcg_temp_free_i32(tcg_desc);
 
         return ret;
@@ -295,9 +292,9 @@ TCGv_i64 gen_mte_check1(DisasContext *s, TCGv_i64 addr, bool is_write,
  * For MTE, check multiple logical sequential accesses.
  */
 TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
-                        bool tag_checked, int log2_esize, int total_size)
+                        bool tag_checked, int total_size)
 {
-    if (tag_checked && s->mte_active[0] && total_size != (1 << log2_esize)) {
+    if (tag_checked && s->mte_active[0]) {
         TCGv_i32 tcg_desc;
         TCGv_i64 ret;
         int desc = 0;
@@ -306,17 +303,16 @@ TCGv_i64 gen_mte_checkN(DisasContext *s, TCGv_i64 addr, bool is_write,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << log2_esize);
-        desc = FIELD_DP32(desc, MTEDESC, TSIZE, total_size);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, total_size - 1);
         tcg_desc = tcg_const_i32(desc);
 
         ret = new_tmp_a64(s);
-        gen_helper_mte_checkN(ret, cpu_env, tcg_desc, addr);
+        gen_helper_mte_check(ret, cpu_env, tcg_desc, addr);
         tcg_temp_free_i32(tcg_desc);
 
         return ret;
     }
-    return gen_mte_check1(s, addr, is_write, tag_checked, log2_esize);
+    return clean_data_tbi(s, addr);
 }
 
 typedef struct DisasCompare64 {
@@ -2966,8 +2962,7 @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
     }
 
     clean_addr = gen_mte_checkN(s, dirty_addr, !is_load,
-                                (wback || rn != 31) && !set_tag,
-                                size, 2 << size);
+                                (wback || rn != 31) && !set_tag, 2 << size);
 
     if (is_vector) {
         if (is_load) {
@@ -3713,8 +3708,8 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
      * Issue the MTE check vs the logical repeat count, before we
      * promote consecutive little-endian elements below.
      */
-    clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
-                                size, total);
+    clean_addr = gen_mte_checkN(s, tcg_rn, is_store,
+                                is_postidx || rn != 31, total);
 
     /*
      * Consecutive little-endian elements from a single register
@@ -3866,8 +3861,8 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
     total = selem << scale;
     tcg_rn = cpu_reg_sp(s, rn);
 
-    clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
-                                scale, total);
+    clean_addr = gen_mte_checkN(s, tcg_rn, !is_load,
+                                is_postidx || rn != 31, total);
 
     tcg_ebytes = tcg_const_i64(1 << scale);
     for (xs = 0; xs < selem; xs++) {
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 0eefb61214..584c4d047c 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4264,7 +4264,7 @@ static void do_ldr(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
     dirty_addr = tcg_temp_new_i64();
     tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
-    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
+    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
     tcg_temp_free_i64(dirty_addr);
 
     /*
@@ -4352,7 +4352,7 @@ static void do_str(DisasContext *s, uint32_t vofs, int len, int rn, int imm)
 
     dirty_addr = tcg_temp_new_i64();
     tcg_gen_addi_i64(dirty_addr, cpu_reg_sp(s, rn), imm);
-    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len, MO_8);
+    clean_addr = gen_mte_checkN(s, dirty_addr, false, rn != 31, len);
     tcg_temp_free_i64(dirty_addr);
 
     /* Note that unpredicated load/store of vector/predicate registers
@@ -4509,8 +4509,7 @@ static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << msz);
-        desc = FIELD_DP32(desc, MTEDESC, TSIZE, mte_n << msz);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (mte_n << msz) - 1);
         desc <<= SVE_MTEDESC_SHIFT;
     } else {
         addr = clean_data_tbi(s, addr);
@@ -5189,7 +5188,7 @@ static void do_mem_zpz(DisasContext *s, int zt, int pg, int zm,
         desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid);
         desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma);
         desc = FIELD_DP32(desc, MTEDESC, WRITE, is_write);
-        desc = FIELD_DP32(desc, MTEDESC, ESIZE, 1 << msz);
+        desc = FIELD_DP32(desc, MTEDESC, SIZEM1, (1 << msz) - 1);
         desc <<= SVE_MTEDESC_SHIFT;
     }
     desc = simd_desc(vsz, vsz, desc | scale);
diff --git a/tests/tcg/aarch64/mte-5.c b/tests/tcg/aarch64/mte-5.c
new file mode 100644
index 0000000000..6dbd6ab3ea
--- /dev/null
+++ b/tests/tcg/aarch64/mte-5.c
@@ -0,0 +1,44 @@
+/*
+ * Memory tagging, faulting unaligned access.
+ *
+ * Copyright (c) 2021 Linaro Ltd
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTESERR);
+    exit(0);
+}
+
+int main(int ac, char **av)
+{
+    struct sigaction sa;
+    void *p0, *p1, *p2;
+    long excl = 1;
+
+    enable_mte(PR_MTE_TCF_SYNC);
+    p0 = alloc_mte_mem(sizeof(*p0));
+
+    /* Create two differently tagged pointers.  */
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(p0), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r" (p1));
+    assert(excl != 1);
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(p0), "r"(excl));
+    assert(p1 != p2);
+
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    /* Store store two different tags in sequential granules. */
+    asm("stg %0, [%0]" : : "r"(p1));
+    asm("stg %0, [%0]" : : "r"(p2 + 16));
+
+    /* Perform an unaligned load crossing the granules. */
+    asm volatile("ldr %0, [%1]" : "=r"(p0) : "r"(p1 + 12));
+    abort();
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index 56e48f4b34..6c95dd8b9e 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -37,7 +37,7 @@ AARCH64_TESTS += bti-2
 
 # MTE Tests
 ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
-AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5
 mte-%: CFLAGS += -march=armv8.5-a+memtag
 endif
 
-- 
2.25.1
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: qemu-arm@nongnu.org, Stephen Long
Subject: [PATCH v2 3/3] accel/tcg: Preserve PAGE_ANON when changing page permissions
Date: Fri, 2 Apr 2021 09:18:35 -0700
Message-Id: <20210402161835.286665-4-richard.henderson@linaro.org>
In-Reply-To: <20210402161835.286665-1-richard.henderson@linaro.org>

Using mprotect() to change PROT_* does not change the MAP_ANON
previously set with mmap().  Our linux-user version of MTE only
works with MAP_ANON pages, so losing PAGE_ANON caused MTE to
stop working.
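The rule the patch enforces can be reduced to a small standalone sketch.
This is an illustration only, not the QEMU page_set_flags() code, and the
flag values are invented: a permission-only update in the style of
mprotect() must carry PAGE_ANON over from the existing flags, while a
genuinely new mapping (marked here with PAGE_RESET) takes its flags
verbatim.

#include <stdio.h>

enum {
    PAGE_WRITE = 1 << 0,
    PAGE_ANON  = 1 << 1,   /* the mapping came from mmap(MAP_ANONYMOUS) */
    PAGE_RESET = 1 << 2,   /* a brand-new mapping replaces whatever was there */
};

/* Permission updates keep PAGE_ANON unless the mapping itself is replaced. */
static int update_page_flags(int old_flags, int new_flags)
{
    if (new_flags & PAGE_RESET) {
        return new_flags;                        /* fresh mmap(): take flags as given */
    }
    return (old_flags & PAGE_ANON) | new_flags;  /* mprotect(): keep MAP_ANON-ness */
}

int main(void)
{
    int flags = PAGE_WRITE | PAGE_ANON;           /* mmap(MAP_ANONYMOUS, PROT_WRITE) */

    flags = update_page_flags(flags, PAGE_WRITE); /* mprotect(PROT_WRITE | PROT_MTE) */
    printf("PAGE_ANON preserved across mprotect: %s\n",
           (flags & PAGE_ANON) ? "yes" : "no");
    return 0;
}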
Reported-by: Stephen Long
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tests/tcg/aarch64/mte.h           |  3 ++-
 accel/tcg/translate-all.c         |  9 +++++--
 tests/tcg/aarch64/mte-6.c         | 43 +++++++++++++++++++++++++++++++
 tests/tcg/aarch64/Makefile.target |  2 +-
 4 files changed, 53 insertions(+), 4 deletions(-)
 create mode 100644 tests/tcg/aarch64/mte-6.c

diff --git a/tests/tcg/aarch64/mte.h b/tests/tcg/aarch64/mte.h
index 141cef522c..0805676b11 100644
--- a/tests/tcg/aarch64/mte.h
+++ b/tests/tcg/aarch64/mte.h
@@ -48,7 +48,8 @@ static void enable_mte(int tcf)
     }
 }
 
-static void *alloc_mte_mem(size_t size)
+static void * alloc_mte_mem(size_t size) __attribute__((unused));
+static void * alloc_mte_mem(size_t size)
 {
     void *p = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_MTE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index f32df8b240..ba6ab09790 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -2714,6 +2714,8 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
        a missing call to h2g_valid.  */
     assert(end - 1 <= GUEST_ADDR_MAX);
     assert(start < end);
+    /* Only set PAGE_ANON with new mappings. */
+    assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET));
     assert_memory_lock();
 
     start = start & TARGET_PAGE_MASK;
@@ -2737,11 +2739,14 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
             p->first_tb) {
             tb_invalidate_phys_page(addr, 0);
         }
-        if (reset_target_data && p->target_data) {
+        if (reset_target_data) {
             g_free(p->target_data);
             p->target_data = NULL;
+            p->flags = flags;
+        } else {
+            /* Using mprotect on a page does not change MAP_ANON. */
+            p->flags = (p->flags & PAGE_ANON) | flags;
         }
-        p->flags = flags;
     }
 }
 
diff --git a/tests/tcg/aarch64/mte-6.c b/tests/tcg/aarch64/mte-6.c
new file mode 100644
index 0000000000..60d51d18be
--- /dev/null
+++ b/tests/tcg/aarch64/mte-6.c
@@ -0,0 +1,43 @@
+#include "mte.h"
+
+void pass(int sig, siginfo_t *info, void *uc)
+{
+    assert(info->si_code == SEGV_MTESERR);
+    exit(0);
+}
+
+int main(void)
+{
+    enable_mte(PR_MTE_TCF_SYNC);
+
+    void *brk = sbrk(16);
+    if (brk == (void *)-1) {
+        perror("sbrk");
+        return 2;
+    }
+
+    if (mprotect(brk, 16, PROT_READ | PROT_WRITE | PROT_MTE)) {
+        perror("mprotect");
+        return 2;
+    }
+
+    int *p1, *p2;
+    long excl = 1;
+
+    asm("irg %0,%1,%2" : "=r"(p1) : "r"(brk), "r"(excl));
+    asm("gmi %0,%1,%0" : "+r"(excl) : "r"(p1));
+    asm("irg %0,%1,%2" : "=r"(p2) : "r"(brk), "r"(excl));
+    asm("stg %0,[%0]" : : "r"(p1));
+
+    *p1 = 0;
+
+    struct sigaction sa;
+    memset(&sa, 0, sizeof(sa));
+    sa.sa_sigaction = pass;
+    sa.sa_flags = SA_SIGINFO;
+    sigaction(SIGSEGV, &sa, NULL);
+
+    *p2 = 0;
+
+    abort();
+}
diff --git a/tests/tcg/aarch64/Makefile.target b/tests/tcg/aarch64/Makefile.target
index 6c95dd8b9e..928357b10a 100644
--- a/tests/tcg/aarch64/Makefile.target
+++ b/tests/tcg/aarch64/Makefile.target
@@ -37,7 +37,7 @@ AARCH64_TESTS += bti-2
 
 # MTE Tests
 ifneq ($(DOCKER_IMAGE)$(CROSS_CC_HAS_ARMV8_MTE),)
-AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5
+AARCH64_TESTS += mte-1 mte-2 mte-3 mte-4 mte-5 mte-6
 mte-%: CFLAGS += -march=armv8.5-a+memtag
 endif
 
-- 
2.25.1