From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com,
    philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 01/11] target/arm: Replace TARGET_PAGE_ENTRY_EXTRA
Date: Tue, 12 Sep 2023 17:34:18 +0200
Message-ID: <20230912153428.17816-2-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>
TARGET_PAGE_ENTRY_EXTRA is a macro that allows guests to specify
additional fields to be cached with the full TLB entry.  This macro is
replaced with a union in CPUTLBEntryFull, thus making CPUTLB
target-agnostic at the cost of a slightly larger CPUTLBEntryFull for
non-arm guests.

Note: this is needed to ensure that fields in CPUTLB don't vary in
offset between targets.  (arm is the only guest actually making use of
this feature.)

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
 include/exec/cpu-defs.h        | 18 +++++++++++++++---
 target/arm/cpu-param.h         | 12 ------------
 target/arm/ptw.c               |  4 ++--
 target/arm/tcg/mte_helper.c    |  2 +-
 target/arm/tcg/sve_helper.c    |  2 +-
 target/arm/tcg/tlb_helper.c    |  4 ++--
 target/arm/tcg/translate-a64.c |  2 +-
 7 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index fb4c8d480f..0a600a312b 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -135,9 +135,21 @@ typedef struct CPUTLBEntryFull {
      * This may be used to cache items from the guest cpu
      * page tables for later use by the implementation.
      */
-#ifdef TARGET_PAGE_ENTRY_EXTRA
-    TARGET_PAGE_ENTRY_EXTRA
-#endif
+    union {
+        /*
+         * Cache the attrs and shareability fields from the page table entry.
+         *
+         * For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
+         * Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
+         * For shareability and guarded, as in the SH and GP fields respectively
+         * of the VMSAv8-64 PTEs.
+         */
+        struct {
+            uint8_t pte_attrs;
+            uint8_t shareability;
+            bool guarded;
+        } arm;
+    } extra;
 } CPUTLBEntryFull;

 #endif /* CONFIG_SOFTMMU */

diff --git a/target/arm/cpu-param.h b/target/arm/cpu-param.h
index b3b35f7aa1..f9b462a98f 100644
--- a/target/arm/cpu-param.h
+++ b/target/arm/cpu-param.h
@@ -31,18 +31,6 @@
 # define TARGET_PAGE_BITS_VARY
 # define TARGET_PAGE_BITS_MIN  10

-/*
- * Cache the attrs and shareability fields from the page table entry.
- *
- * For ARMMMUIdx_Stage2*, pte_attrs is the S2 descriptor bits [5:2].
- * Otherwise, pte_attrs is the same as the MAIR_EL1 8-bit format.
- * For shareability and guarded, as in the SH and GP fields respectively
- * of the VMSAv8-64 PTEs.
- */
-# define TARGET_PAGE_ENTRY_EXTRA  \
-    uint8_t pte_attrs;            \
-    uint8_t shareability;         \
-    bool guarded;
 #endif

 #endif
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
index bfbab26b9b..95db9ec4c3 100644
--- a/target/arm/ptw.c
+++ b/target/arm/ptw.c
@@ -579,7 +579,7 @@ static bool S1_ptw_translate(CPUARMState *env, S1Translate *ptw,
     }
     ptw->out_phys = full->phys_addr | (addr & ~TARGET_PAGE_MASK);
     ptw->out_rw = full->prot & PAGE_WRITE;
-    pte_attrs = full->pte_attrs;
+    pte_attrs = full->extra.arm.pte_attrs;
     ptw->out_space = full->attrs.space;
 #else
     g_assert_not_reached();
@@ -2036,7 +2036,7 @@ static bool get_phys_addr_lpae(CPUARMState *env, S1Translate *ptw,

         /* When in aarch64 mode, and BTI is enabled, remember GP in the TLB. */
         if (aarch64 && cpu_isar_feature(aa64_bti, cpu)) {
-            result->f.guarded = extract64(attrs, 50, 1); /* GP */
+            result->f.extra.arm.guarded = extract64(attrs, 50, 1); /* GP */
         }
     }

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index b23d11563a..dba21cc4d6 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -124,7 +124,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     assert(!(flags & TLB_INVALID_MASK));

     /* If the virtual page MemAttr != Tagged, access unchecked. */
-    if (full->pte_attrs != 0xf0) {
+    if (full->extra.arm.pte_attrs != 0xf0) {
         return NULL;
     }

diff --git a/target/arm/tcg/sve_helper.c b/target/arm/tcg/sve_helper.c
index 7c103fc9f7..f006d152cc 100644
--- a/target/arm/tcg/sve_helper.c
+++ b/target/arm/tcg/sve_helper.c
@@ -5373,7 +5373,7 @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
     info->tagged = (flags & PAGE_ANON) && (flags & PAGE_MTE);
 #else
     info->attrs = full->attrs;
-    info->tagged = full->pte_attrs == 0xf0;
+    info->tagged = full->extra.arm.pte_attrs == 0xf0;
 #endif

     /* Ensure that info->host[] is relative to addr, not addr + mem_off. */
diff --git a/target/arm/tcg/tlb_helper.c b/target/arm/tcg/tlb_helper.c
index b22b2a4c6e..59bff8b452 100644
--- a/target/arm/tcg/tlb_helper.c
+++ b/target/arm/tcg/tlb_helper.c
@@ -334,8 +334,8 @@ bool arm_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
         address &= TARGET_PAGE_MASK;
     }

-    res.f.pte_attrs = res.cacheattrs.attrs;
-    res.f.shareability = res.cacheattrs.shareability;
+    res.f.extra.arm.pte_attrs = res.cacheattrs.attrs;
+    res.f.extra.arm.shareability = res.cacheattrs.shareability;

     tlb_set_page_full(cs, mmu_idx, address, &res.f);
     return true;
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1b6fbb61e2..07c8f5b53b 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -13775,7 +13775,7 @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
                               false, &host, &full, 0);
     assert(!(flags & TLB_INVALID_MASK));

-    return full->guarded;
+    return full->extra.arm.guarded;
 #endif
 }

-- 
2.41.0
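For readers following along, the net effect of the change above, as a
minimal standalone sketch: the struct below is trimmed to the fields in
the diff and is not code from the tree.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Cut-down model of CPUTLBEntryFull after this patch: the arm-specific
 * fields live in a named union member instead of being spliced in by
 * the TARGET_PAGE_ENTRY_EXTRA macro, so the struct layout no longer
 * depends on which target is being built.
 */
typedef struct CPUTLBEntryFull {
    union {
        struct {
            uint8_t pte_attrs;    /* MAIR_EL1 format, or S2 bits [5:2] */
            uint8_t shareability; /* SH field of the VMSAv8-64 PTE */
            bool guarded;         /* GP field of the VMSAv8-64 PTE */
        } arm;
    } extra;
} CPUTLBEntryFull;

int main(void)
{
    CPUTLBEntryFull full = {0};

    /* what arm_cpu_tlb_fill() stores on a TLB refill ... */
    full.extra.arm.pte_attrs = 0xf0;    /* MemAttr == Tagged */
    full.extra.arm.guarded = true;

    /* ... and what consumers such as allocation_tag_mem() check */
    printf("tagged=%d guarded=%d\n",
           full.extra.arm.pte_attrs == 0xf0, full.extra.arm.guarded);
    return 0;
}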
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com,
    philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 02/11] include: Introduce tlb_ptr field to CPUState
Date: Tue, 12 Sep 2023 17:34:19 +0200
Message-ID: <20230912153428.17816-3-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

Adds a CPUTLB * field to CPUState in order to access the TLB without
knowledge of the ArchCPU struct layout.  A cpu_tlb(CPUState *) function
is added for ease of access.

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
 include/exec/cpu-all.h | 12 ++++++++++++
 include/hw/core/cpu.h  |  6 ++++++
 2 files changed, 18 insertions(+)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index c2c62160c6..b03218d7d3 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -435,6 +435,7 @@ static inline void cpu_set_cpustate_pointers(ArchCPU *cpu)
 {
     cpu->parent_obj.env_ptr = &cpu->env;
     cpu->parent_obj.icount_decr_ptr = &cpu->neg.icount_decr;
+    cpu->parent_obj.tlb_ptr = &cpu->neg.tlb;
 }

 /**
@@ -494,4 +495,15 @@ static inline CPUTLB *env_tlb(CPUArchState *env)
     return &env_neg(env)->tlb;
 }

+/**
+ * cpu_tlb(cpu)
+ * @cpu: The generic CPUState
+ *
+ * Return the CPUTLB state associated with the cpu.
+ */
+static inline CPUTLB *cpu_tlb(CPUState *cpu)
+{
+    return cpu->tlb_ptr;
+}
+
 #endif /* CPU_ALL_H */
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 92a4234439..dd31303480 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -238,7 +238,12 @@ typedef struct SavedIOTLB {
 } SavedIOTLB;
 #endif

+/* see include/exec/cpu-defs.h */
+struct CPUTLB;
+
+/* see include/sysemu/kvm_int.h */
 struct KVMState;
+/* see linux-headers/linux/kvm.h */
 struct kvm_run;

 /* work queue */
@@ -372,6 +377,7 @@ struct CPUState {

     CPUArchState *env_ptr;
     IcountDecr *icount_decr_ptr;
+    struct CPUTLB *tlb_ptr;

     CPUJumpCache *tb_jmp_cache;

-- 
2.41.0
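To make the indirection concrete, a minimal standalone sketch of the
accessor pattern this patch enables (stubbed types, not the real QEMU
definitions):

#include <stdio.h>

typedef struct CPUTLB { int n_entries; } CPUTLB;

/* Cut-down CPUState: only the pointer added by this patch. */
typedef struct CPUState {
    CPUTLB *tlb_ptr;   /* set once, as in cpu_set_cpustate_pointers() */
} CPUState;

static inline CPUTLB *cpu_tlb(CPUState *cpu)
{
    return cpu->tlb_ptr;
}

int main(void)
{
    CPUTLB tlb = { .n_entries = 256 };
    CPUState cpu = { .tlb_ptr = &tlb };

    /* Target-agnostic code reaches the TLB without knowing ArchCPU. */
    printf("%d\n", cpu_tlb(&cpu)->n_entries);
    return 0;
}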
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com,
    philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 03/11] accel/tcg: Modify tlb_*() to use CPUState
Date: Tue, 12 Sep 2023 17:34:20 +0200
Message-ID: <20230912153428.17816-4-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

Changes tlb_*() functions to take CPUState instead of CPUArchState, as
they don't require the full CPUArchState.  This makes it easier to
decouple target-dependent and target-independent code.

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
 include/exec/cpu_ldst.h |   8 +-
 accel/tcg/cputlb.c      | 218 +++++++++++++++++++---------------------
 2 files changed, 107 insertions(+), 119 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index da10ba1433..8d168f76ce 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -361,19 +361,19 @@ static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry)
 }

 /* Find the TLB index corresponding to the mmu_idx + address pair. */
-static inline uintptr_t tlb_index(CPUArchState *env, uintptr_t mmu_idx,
+static inline uintptr_t tlb_index(CPUState *cpu, uintptr_t mmu_idx,
                                   vaddr addr)
 {
-    uintptr_t size_mask = env_tlb(env)->f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
+    uintptr_t size_mask = cpu_tlb(cpu)->f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;

     return (addr >> TARGET_PAGE_BITS) & size_mask;
 }

 /* Find the TLB entry corresponding to the mmu_idx + address pair. */
-static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
+static inline CPUTLBEntry *tlb_entry(CPUState *cpu, uintptr_t mmu_idx,
                                      vaddr addr)
 {
-    return &env_tlb(env)->f[mmu_idx].table[tlb_index(env, mmu_idx, addr)];
+    return &cpu_tlb(cpu)->f[mmu_idx].table[tlb_index(cpu, mmu_idx, addr)];
 }

 #endif /* defined(CONFIG_USER_ONLY) */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index c643d66190..213b3236bb 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -240,11 +240,11 @@ static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
     memset(desc->vtable, -1, sizeof(desc->vtable));
 }

-static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx,
+static void tlb_flush_one_mmuidx_locked(CPUState *cpu, int mmu_idx,
                                         int64_t now)
 {
-    CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
-    CPUTLBDescFast *fast = &env_tlb(env)->f[mmu_idx];
+    CPUTLBDesc *desc = &cpu_tlb(cpu)->d[mmu_idx];
+    CPUTLBDescFast *fast = &cpu_tlb(cpu)->f[mmu_idx];

     tlb_mmu_resize_locked(desc, fast, now);
     tlb_mmu_flush_locked(desc, fast);
@@ -262,41 +262,39 @@ static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast, int64_t now)
     tlb_mmu_flush_locked(desc, fast);
 }

-static inline void tlb_n_used_entries_inc(CPUArchState *env, uintptr_t mmu_idx)
+static inline void tlb_n_used_entries_inc(CPUState *cpu, uintptr_t mmu_idx)
 {
-    env_tlb(env)->d[mmu_idx].n_used_entries++;
+    cpu_tlb(cpu)->d[mmu_idx].n_used_entries++;
 }

-static inline void tlb_n_used_entries_dec(CPUArchState *env, uintptr_t mmu_idx)
+static inline void tlb_n_used_entries_dec(CPUState *cpu, uintptr_t mmu_idx)
 {
-    env_tlb(env)->d[mmu_idx].n_used_entries--;
+    cpu_tlb(cpu)->d[mmu_idx].n_used_entries--;
 }

 void tlb_init(CPUState *cpu)
 {
-    CPUArchState *env = cpu->env_ptr;
     int64_t now = get_clock_realtime();
     int i;

-    qemu_spin_init(&env_tlb(env)->c.lock);
+    qemu_spin_init(&cpu_tlb(cpu)->c.lock);

     /* All tlbs are initialized flushed. */
-    env_tlb(env)->c.dirty = 0;
+    cpu_tlb(cpu)->c.dirty = 0;

     for (i = 0; i < NB_MMU_MODES; i++) {
-        tlb_mmu_init(&env_tlb(env)->d[i], &env_tlb(env)->f[i], now);
+        tlb_mmu_init(&cpu_tlb(cpu)->d[i], &cpu_tlb(cpu)->f[i], now);
     }
 }

 void tlb_destroy(CPUState *cpu)
 {
-    CPUArchState *env = cpu->env_ptr;
     int i;

-    qemu_spin_destroy(&env_tlb(env)->c.lock);
+    qemu_spin_destroy(&cpu_tlb(cpu)->c.lock);
     for (i = 0; i < NB_MMU_MODES; i++) {
-        CPUTLBDesc *desc = &env_tlb(env)->d[i];
-        CPUTLBDescFast *fast = &env_tlb(env)->f[i];
+        CPUTLBDesc *desc = &cpu_tlb(cpu)->d[i];
+        CPUTLBDescFast *fast = &cpu_tlb(cpu)->f[i];

         g_free(fast->table);
         g_free(desc->fulltlb);
@@ -328,11 +326,9 @@ void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
     size_t full = 0, part = 0, elide = 0;

     CPU_FOREACH(cpu) {
-        CPUArchState *env = cpu->env_ptr;
-
-        full += qatomic_read(&env_tlb(env)->c.full_flush_count);
-        part += qatomic_read(&env_tlb(env)->c.part_flush_count);
-        elide += qatomic_read(&env_tlb(env)->c.elide_flush_count);
+        full += qatomic_read(&cpu_tlb(cpu)->c.full_flush_count);
+        part += qatomic_read(&cpu_tlb(cpu)->c.part_flush_count);
+        elide += qatomic_read(&cpu_tlb(cpu)->c.elide_flush_count);
     }
     *pfull = full;
     *ppart = part;
@@ -341,7 +337,6 @@ void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)

 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
-    CPUArchState *env = cpu->env_ptr;
     uint16_t asked = data.host_int;
     uint16_t all_dirty, work, to_clean;
     int64_t now = get_clock_realtime();

     assert_cpu_is_self(cpu);

     tlb_debug("mmu_idx:0x%04" PRIx16 "\n", asked);

-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu_tlb(cpu)->c.lock);

-    all_dirty = env_tlb(env)->c.dirty;
+    all_dirty = cpu_tlb(cpu)->c.dirty;
     to_clean = asked & all_dirty;
     all_dirty &= ~to_clean;
-    env_tlb(env)->c.dirty = all_dirty;
+    cpu_tlb(cpu)->c.dirty = all_dirty;

     for (work = to_clean; work != 0; work &= work - 1) {
         int mmu_idx = ctz32(work);
-        tlb_flush_one_mmuidx_locked(env, mmu_idx, now);
+        tlb_flush_one_mmuidx_locked(cpu, mmu_idx, now);
     }

-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu_tlb(cpu)->c.lock);

     tcg_flush_jmp_cache(cpu);

     if (to_clean == ALL_MMUIDX_BITS) {
-        qatomic_set(&env_tlb(env)->c.full_flush_count,
-                    env_tlb(env)->c.full_flush_count + 1);
+        qatomic_set(&cpu_tlb(cpu)->c.full_flush_count,
+                    cpu_tlb(cpu)->c.full_flush_count + 1);
     } else {
-        qatomic_set(&env_tlb(env)->c.part_flush_count,
-                    env_tlb(env)->c.part_flush_count + ctpop16(to_clean));
+        qatomic_set(&cpu_tlb(cpu)->c.part_flush_count,
+                    cpu_tlb(cpu)->c.part_flush_count + ctpop16(to_clean));
         if (to_clean != asked) {
-            qatomic_set(&env_tlb(env)->c.elide_flush_count,
-                        env_tlb(env)->c.elide_flush_count +
+            qatomic_set(&cpu_tlb(cpu)->c.elide_flush_count,
+                        cpu_tlb(cpu)->c.elide_flush_count +
                         ctpop16(asked & ~to_clean));
         }
     }
@@ -470,43 +465,43 @@ static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry, vaddr page)
 }

 /* Called with tlb_c.lock held */
-static void tlb_flush_vtlb_page_mask_locked(CPUArchState *env, int mmu_idx,
+static void tlb_flush_vtlb_page_mask_locked(CPUState *cpu, int mmu_idx,
                                             vaddr page, vaddr mask)
 {
-    CPUTLBDesc *d = &env_tlb(env)->d[mmu_idx];
+    CPUTLBDesc *d = &cpu_tlb(cpu)->d[mmu_idx];
     int k;

-    assert_cpu_is_self(env_cpu(env));
+    assert_cpu_is_self(cpu);
     for (k = 0; k < CPU_VTLB_SIZE; k++) {
         if (tlb_flush_entry_mask_locked(&d->vtable[k], page, mask)) {
-            tlb_n_used_entries_dec(env, mmu_idx);
+            tlb_n_used_entries_dec(cpu, mmu_idx);
         }
     }
 }

-static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
+static inline void tlb_flush_vtlb_page_locked(CPUState *cpu, int mmu_idx,
                                               vaddr page)
 {
-    tlb_flush_vtlb_page_mask_locked(env, mmu_idx, page, -1);
+    tlb_flush_vtlb_page_mask_locked(cpu, mmu_idx, page, -1);
 }

-static void tlb_flush_page_locked(CPUArchState *env, int midx, vaddr page)
+static void tlb_flush_page_locked(CPUState *cpu, int midx, vaddr page)
 {
-    vaddr lp_addr = env_tlb(env)->d[midx].large_page_addr;
-    vaddr lp_mask = env_tlb(env)->d[midx].large_page_mask;
+    vaddr lp_addr = cpu_tlb(cpu)->d[midx].large_page_addr;
+    vaddr lp_mask = cpu_tlb(cpu)->d[midx].large_page_mask;

     /* Check if we need to flush due to large pages.  */
     if ((page & lp_mask) == lp_addr) {
         tlb_debug("forcing full flush midx %d (%016" VADDR_PRIx "/%016" VADDR_PRIx ")\n",
                   midx, lp_addr, lp_mask);
-        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
     } else {
-        if (tlb_flush_entry_locked(tlb_entry(env, midx, page), page)) {
-            tlb_n_used_entries_dec(env, midx);
+        if (tlb_flush_entry_locked(tlb_entry(cpu, midx, page), page)) {
+            tlb_n_used_entries_dec(cpu, midx);
         }
-        tlb_flush_vtlb_page_locked(env, midx, page);
+        tlb_flush_vtlb_page_locked(cpu, midx, page);
     }
 }

@@ -523,20 +518,19 @@ static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
                                              vaddr addr,
                                              uint16_t idxmap)
 {
-    CPUArchState *env = cpu->env_ptr;
     int mmu_idx;

     assert_cpu_is_self(cpu);

     tlb_debug("page addr: %016" VADDR_PRIx " mmu_map:0x%x\n", addr, idxmap);

-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu_tlb(cpu)->c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if ((idxmap >> mmu_idx) & 1) {
-            tlb_flush_page_locked(env, mmu_idx, addr);
+            tlb_flush_page_locked(cpu, mmu_idx, addr);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu_tlb(cpu)->c.lock);

     /*
      * Discard jump cache entries for any tb which might potentially
@@ -709,12 +703,12 @@ void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr)
     tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
 }

-static void tlb_flush_range_locked(CPUArchState *env, int midx,
+static void tlb_flush_range_locked(CPUState *cpu, int midx,
                                    vaddr addr, vaddr len,
                                    unsigned bits)
 {
-    CPUTLBDesc *d = &env_tlb(env)->d[midx];
-    CPUTLBDescFast *f = &env_tlb(env)->f[midx];
+    CPUTLBDesc *d = &cpu_tlb(cpu)->d[midx];
+    CPUTLBDescFast *f = &cpu_tlb(cpu)->f[midx];
     vaddr mask = MAKE_64BIT_MASK(0, bits);

     /*
      * If @bits is smaller than the tlb size, there may be multiple entries
      * within the TLB; otherwise all addresses that match under @mask hit
      * the same TLB entry.
      * TODO: Perhaps allow bits to be a few bits less than the size.
      * For now, just flush the entire TLB.
      *
      * If @len is larger than the tlb size, then it will take longer to
      * flush all of the entries individually, than it will to flush all.
      */
     if (mask < f->mask || len > f->mask) {
         tlb_debug("forcing full flush midx %d ("
                   "%016" VADDR_PRIx "/%016" VADDR_PRIx "+%016" VADDR_PRIx ")\n",
                   midx, addr, mask, len);
-        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
         return;
     }

     /*
      * Check if we need to flush due to large pages.
      * Because large_page_mask contains all 1's from the msb,
      * we only need to test the end of the range.
      */
     if (((addr + len - 1) & d->large_page_mask) == d->large_page_addr) {
         tlb_debug("forcing full flush midx %d ("
                   "%016" VADDR_PRIx "/%016" VADDR_PRIx ")\n",
                   midx, d->large_page_addr, d->large_page_mask);
-        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
         return;
     }

     for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
         vaddr page = addr + i;
-        CPUTLBEntry *entry = tlb_entry(env, midx, page);
+        CPUTLBEntry *entry = tlb_entry(cpu, midx, page);

         if (tlb_flush_entry_mask_locked(entry, page, mask)) {
-            tlb_n_used_entries_dec(env, midx);
+            tlb_n_used_entries_dec(cpu, midx);
         }
-        tlb_flush_vtlb_page_mask_locked(env, midx, page, mask);
+        tlb_flush_vtlb_page_mask_locked(cpu, midx, page, mask);
     }
 }

@@ -769,7 +763,6 @@ typedef struct {
 static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
                                               TLBFlushRangeData d)
 {
-    CPUArchState *env = cpu->env_ptr;
     int mmu_idx;

     assert_cpu_is_self(cpu);
@@ -777,13 +770,13 @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
     tlb_debug("range: %016" VADDR_PRIx "/%u+%016" VADDR_PRIx " mmu_map:0x%x\n",
               d.addr, d.bits, d.len, d.idxmap);

-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu_tlb(cpu)->c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if ((d.idxmap >> mmu_idx) & 1) {
-            tlb_flush_range_locked(env, mmu_idx, d.addr, d.len, d.bits);
+            tlb_flush_range_locked(cpu, mmu_idx, d.addr, d.len, d.bits);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu_tlb(cpu)->c.lock);

     /*
      * If the length is larger than the jump cache size, then it will take
@@ -1028,27 +1021,24 @@ static inline void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntry *s)
  */
 void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
 {
-    CPUArchState *env;
-
     int mmu_idx;

-    env = cpu->env_ptr;
-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu_tlb(cpu)->c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         unsigned int i;
-        unsigned int n = tlb_n_entries(&env_tlb(env)->f[mmu_idx]);
+        unsigned int n = tlb_n_entries(&cpu_tlb(cpu)->f[mmu_idx]);

         for (i = 0; i < n; i++) {
-            tlb_reset_dirty_range_locked(&env_tlb(env)->f[mmu_idx].table[i],
+            tlb_reset_dirty_range_locked(&cpu_tlb(cpu)->f[mmu_idx].table[i],
                                          start1, length);
         }

         for (i = 0; i < CPU_VTLB_SIZE; i++) {
-            tlb_reset_dirty_range_locked(&env_tlb(env)->d[mmu_idx].vtable[i],
+            tlb_reset_dirty_range_locked(&cpu_tlb(cpu)->d[mmu_idx].vtable[i],
                                          start1, length);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu_tlb(cpu)->c.lock);
 }

 /* Called with tlb_c.lock held */
@@ -1064,32 +1054,31 @@ static inline void tlb_set_dirty1_locked(CPUTLBEntry *tlb_entry,
    so that it is no longer dirty */
 void tlb_set_dirty(CPUState *cpu, vaddr addr)
 {
-    CPUArchState *env = cpu->env_ptr;
     int mmu_idx;

     assert_cpu_is_self(cpu);

     addr &= TARGET_PAGE_MASK;
-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu_tlb(cpu)->c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        tlb_set_dirty1_locked(tlb_entry(env, mmu_idx, addr), addr);
+        tlb_set_dirty1_locked(tlb_entry(cpu, mmu_idx, addr), addr);
     }

     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         int k;
         for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_set_dirty1_locked(&env_tlb(env)->d[mmu_idx].vtable[k], addr);
+            tlb_set_dirty1_locked(&cpu_tlb(cpu)->d[mmu_idx].vtable[k], addr);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu_tlb(cpu)->c.lock);
 }

 /* Our TLB does not support large pages, so remember the area covered by
    large pages and trigger a full TLB flush if these are invalidated.  */
-static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
+static void tlb_add_large_page(CPUState *cpu, int mmu_idx,
                                vaddr addr, uint64_t size)
 {
-    vaddr lp_addr = env_tlb(env)->d[mmu_idx].large_page_addr;
+    vaddr lp_addr = cpu_tlb(cpu)->d[mmu_idx].large_page_addr;
     vaddr lp_mask = ~(size - 1);

     if (lp_addr == (vaddr)-1) {
@@ -1099,13 +1088,13 @@ static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
         /* Extend the existing region to include the new page.
            This is a compromise between unnecessary flushes and
            the cost of maintaining a full variable size TLB. */
-        lp_mask &= env_tlb(env)->d[mmu_idx].large_page_mask;
+        lp_mask &= cpu_tlb(cpu)->d[mmu_idx].large_page_mask;
         while (((lp_addr ^ addr) & lp_mask) != 0) {
             lp_mask <<= 1;
         }
     }
-    env_tlb(env)->d[mmu_idx].large_page_addr = lp_addr & lp_mask;
-    env_tlb(env)->d[mmu_idx].large_page_mask = lp_mask;
+    cpu_tlb(cpu)->d[mmu_idx].large_page_addr = lp_addr & lp_mask;
+    cpu_tlb(cpu)->d[mmu_idx].large_page_mask = lp_mask;
 }

 static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent,
@@ -1137,8 +1126,7 @@ static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent,
 void tlb_set_page_full(CPUState *cpu, int mmu_idx,
                        vaddr addr, CPUTLBEntryFull *full)
 {
-    CPUArchState *env = cpu->env_ptr;
-    CPUTLB *tlb = env_tlb(env);
+    CPUTLB *tlb = cpu_tlb(cpu);
     CPUTLBDesc *desc = &tlb->d[mmu_idx];
     MemoryRegionSection *section;
     unsigned int index, read_flags, write_flags;
@@ -1155,7 +1143,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
         sz = TARGET_PAGE_SIZE;
     } else {
         sz = (hwaddr)1 << full->lg_page_size;
-        tlb_add_large_page(env, mmu_idx, addr, sz);
+        tlb_add_large_page(cpu, mmu_idx, addr, sz);
     }
     addr_page = addr & TARGET_PAGE_MASK;
     paddr_page = full->phys_addr & TARGET_PAGE_MASK;
@@ -1221,8 +1209,8 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     wp_flags = cpu_watchpoint_address_matches(cpu, addr_page,
                                               TARGET_PAGE_SIZE);

-    index = tlb_index(env, mmu_idx, addr_page);
-    te = tlb_entry(env, mmu_idx, addr_page);
+    index = tlb_index(cpu, mmu_idx, addr_page);
+    te = tlb_entry(cpu, mmu_idx, addr_page);

     /*
      * Hold the TLB lock for the rest of the function. We could acquire/release
@@ -1237,7 +1225,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     tlb->c.dirty |= 1 << mmu_idx;

     /* Make sure there's no cached translation for the new page.  */
-    tlb_flush_vtlb_page_locked(env, mmu_idx, addr_page);
+    tlb_flush_vtlb_page_locked(cpu, mmu_idx, addr_page);

     /*
      * Only evict the old entry to the victim tlb if it's for a
@@ -1250,7 +1238,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
         /* Evict the old entry into the victim tlb.  */
         copy_tlb_helper_locked(tv, te);
         desc->vfulltlb[vidx] = desc->fulltlb[index];
-        tlb_n_used_entries_dec(env, mmu_idx);
+        tlb_n_used_entries_dec(cpu, mmu_idx);
     }

     /* refill the tlb */
@@ -1293,7 +1281,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
                     MMU_DATA_STORE, prot & PAGE_WRITE);

     copy_tlb_helper_locked(te, &tn);
-    tlb_n_used_entries_inc(env, mmu_idx);
+    tlb_n_used_entries_inc(cpu, mmu_idx);
     qemu_spin_unlock(&tlb->c.lock);
 }

@@ -1462,28 +1450,28 @@ static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,

 /* Return true if ADDR is present in the victim tlb, and has been
    copied back to the main tlb.  */
-static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
+static bool victim_tlb_hit(CPUState *cpu, size_t mmu_idx, size_t index,
                            MMUAccessType access_type, vaddr page)
 {
     size_t vidx;

-    assert_cpu_is_self(env_cpu(env));
+    assert_cpu_is_self(cpu);
     for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) {
-        CPUTLBEntry *vtlb = &env_tlb(env)->d[mmu_idx].vtable[vidx];
+        CPUTLBEntry *vtlb = &cpu_tlb(cpu)->d[mmu_idx].vtable[vidx];
         uint64_t cmp = tlb_read_idx(vtlb, access_type);

         if (cmp == page) {
             /* Found entry in victim tlb, swap tlb and iotlb.  */
-            CPUTLBEntry tmptlb, *tlb = &env_tlb(env)->f[mmu_idx].table[index];
+            CPUTLBEntry tmptlb, *tlb = &cpu_tlb(cpu)->f[mmu_idx].table[index];

-            qemu_spin_lock(&env_tlb(env)->c.lock);
+            qemu_spin_lock(&cpu_tlb(cpu)->c.lock);
             copy_tlb_helper_locked(&tmptlb, tlb);
             copy_tlb_helper_locked(tlb, vtlb);
             copy_tlb_helper_locked(vtlb, &tmptlb);
-            qemu_spin_unlock(&env_tlb(env)->c.lock);
+            qemu_spin_unlock(&cpu_tlb(cpu)->c.lock);

-            CPUTLBEntryFull *f1 = &env_tlb(env)->d[mmu_idx].fulltlb[index];
-            CPUTLBEntryFull *f2 = &env_tlb(env)->d[mmu_idx].vfulltlb[vidx];
+            CPUTLBEntryFull *f1 = &cpu_tlb(cpu)->d[mmu_idx].fulltlb[index];
+            CPUTLBEntryFull *f2 = &cpu_tlb(cpu)->d[mmu_idx].vfulltlb[vidx];
             CPUTLBEntryFull tmpf;
             tmpf = *f1; *f1 = *f2; *f2 = tmpf;
             return true;
@@ -1522,8 +1510,8 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
                                  void **phost, CPUTLBEntryFull **pfull,
                                  uintptr_t retaddr, bool check_mem_cbs)
 {
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    uintptr_t index = tlb_index(env_cpu(env), mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env_cpu(env), mmu_idx, addr);
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     vaddr page_addr = addr & TARGET_PAGE_MASK;
     int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
@@ -1531,7 +1519,8 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
     CPUTLBEntryFull *full;

     if (!tlb_hit_page(tlb_addr, page_addr)) {
-        if (!victim_tlb_hit(env, mmu_idx, index, access_type, page_addr)) {
+        if (!victim_tlb_hit(env_cpu(env), mmu_idx, index,
+                            access_type, page_addr)) {
             CPUState *cs = env_cpu(env);

             if (!cs->cc->tcg_ops->tlb_fill(cs, addr, fault_size, access_type,
@@ -1543,8 +1532,8 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
         }

         /* TLB resize via tlb_fill may have moved the entry.  */
-        index = tlb_index(env, mmu_idx, addr);
-        entry = tlb_entry(env, mmu_idx, addr);
+        index = tlb_index(env_cpu(env), mmu_idx, addr);
+        entry = tlb_entry(env_cpu(env), mmu_idx, addr);

         /*
          * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
@@ -1737,16 +1726,15 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
 bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx,
                        bool is_store, struct qemu_plugin_hwaddr *data)
 {
-    CPUArchState *env = cpu->env_ptr;
-    CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
+    CPUTLBEntry *tlbe = tlb_entry(cpu, mmu_idx, addr);
+    uintptr_t index = tlb_index(cpu, mmu_idx, addr);
     uint64_t tlb_addr = is_store ? tlb_addr_write(tlbe) : tlbe->addr_read;

     if (likely(tlb_hit(tlb_addr, addr))) {
         /* We must have an iotlb entry for MMIO */
         if (tlb_addr & TLB_MMIO) {
             CPUTLBEntryFull *full;
-            full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+            full = &cpu_tlb(cpu)->d[mmu_idx].fulltlb[index];
             data->is_io = true;
             data->v.io.section =
                 iotlb_to_section(cpu, full->xlat_section, full->attrs);
@@ -1803,8 +1791,8 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
                         int mmu_idx, MMUAccessType access_type, uintptr_t ra)
 {
     vaddr addr = data->addr;
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    uintptr_t index = tlb_index(env_cpu(env), mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env_cpu(env), mmu_idx, addr);
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     bool maybe_resized = false;
     CPUTLBEntryFull *full;
@@ -1812,12 +1800,12 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,

     /* If the TLB entry is for a different page, reload and try again.  */
     if (!tlb_hit(tlb_addr, addr)) {
-        if (!victim_tlb_hit(env, mmu_idx, index, access_type,
+        if (!victim_tlb_hit(env_cpu(env), mmu_idx, index, access_type,
                             addr & TARGET_PAGE_MASK)) {
             tlb_fill(env_cpu(env), addr, data->size, access_type, mmu_idx, ra);
             maybe_resized = true;
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
+            index = tlb_index(env_cpu(env), mmu_idx, addr);
+            entry = tlb_entry(env_cpu(env), mmu_idx, addr);
         }
         tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK;
     }
@@ -1925,7 +1913,7 @@ static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
          */
         mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra);
         if (mmu_lookup1(env, &l->page[1], l->mmu_idx, type, ra)) {
-            uintptr_t index = tlb_index(env, l->mmu_idx, addr);
+            uintptr_t index = tlb_index(env_cpu(env), l->mmu_idx, addr);
             l->page[0].full = &env_tlb(env)->d[l->mmu_idx].fulltlb[index];
         }

@@ -1983,18 +1971,18 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
         goto stop_the_world;
     }

-    index = tlb_index(env, mmu_idx, addr);
-    tlbe = tlb_entry(env, mmu_idx, addr);
+    index = tlb_index(env_cpu(env), mmu_idx, addr);
+    tlbe = tlb_entry(env_cpu(env), mmu_idx, addr);

     /* Check TLB entry and enforce page permissions.  */
     tlb_addr = tlb_addr_write(tlbe);
     if (!tlb_hit(tlb_addr, addr)) {
-        if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE,
+        if (!victim_tlb_hit(env_cpu(env), mmu_idx, index, MMU_DATA_STORE,
                             addr & TARGET_PAGE_MASK)) {
             tlb_fill(env_cpu(env), addr, size,
                      MMU_DATA_STORE, mmu_idx, retaddr);
-            index = tlb_index(env, mmu_idx, addr);
-            tlbe = tlb_entry(env, mmu_idx, addr);
+            index = tlb_index(env_cpu(env), mmu_idx, addr);
+            tlbe = tlb_entry(env_cpu(env), mmu_idx, addr);
         }
         tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK;
     }
-- 
2.41.0
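The conversion pattern used throughout this patch, as a minimal
standalone sketch (stub types and a stub env_cpu(); QEMU's real
definitions differ): core helpers now take CPUState, while
target-facing entry points still take CPUArchState and convert once
via env_cpu().

#include <stddef.h>
#include <stdio.h>

typedef struct CPUState { int flush_count; } CPUState;
typedef int CPUArchState;

typedef struct ArchCPU {
    CPUState parent_obj;    /* CPUState is the first field, as in QEMU */
    CPUArchState env;
} ArchCPU;

/* env_cpu()-style upcast: recover the CPUState from the env pointer */
static CPUState *env_cpu(CPUArchState *env)
{
    ArchCPU *cpu = (ArchCPU *)((char *)env - offsetof(ArchCPU, env));
    return &cpu->parent_obj;
}

/* core helper after the patch: target-agnostic, takes CPUState */
static void tlb_flush_one(CPUState *cpu)
{
    cpu->flush_count++;
}

/* target-facing wrapper: still takes CPUArchState, converts once */
static void target_tlb_flush(CPUArchState *env)
{
    tlb_flush_one(env_cpu(env));
}

int main(void)
{
    ArchCPU cpu = {0};

    target_tlb_flush(&cpu.env);
    printf("%d\n", cpu.parent_obj.flush_count);   /* prints 1 */
    return 0;
}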
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com,
    philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 04/11] accel/tcg: Modify probe_access_internal() to use CPUState
Date: Tue, 12 Sep 2023 17:34:21 +0200
Message-ID: <20230912153428.17816-5-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

probe_access_internal() is changed to take the generic CPUState instead
of CPUArchState, in order to lessen the target-specific coupling of
cputlb.c.

Note: the probe_access*() functions also don't need the full
CPUArchState, but aren't touched in this patch as they are
target-facing.

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
 accel/tcg/cputlb.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 213b3236bb..20ea2e2395 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1504,27 +1504,24 @@ static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
     }
 }

-static int probe_access_internal(CPUArchState *env, vaddr addr,
+static int probe_access_internal(CPUState *cpu, vaddr addr,
                                  int fault_size, MMUAccessType access_type,
                                  int mmu_idx, bool nonfault,
                                  void **phost, CPUTLBEntryFull **pfull,
                                  uintptr_t retaddr, bool check_mem_cbs)
 {
-    uintptr_t index = tlb_index(env_cpu(env), mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env_cpu(env), mmu_idx, addr);
+    uintptr_t index = tlb_index(cpu, mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr);
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     vaddr page_addr = addr & TARGET_PAGE_MASK;
     int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
-    bool force_mmio = check_mem_cbs && cpu_plugin_mem_cbs_enabled(env_cpu(env));
+    bool force_mmio = check_mem_cbs && cpu_plugin_mem_cbs_enabled(cpu);
     CPUTLBEntryFull *full;

     if (!tlb_hit_page(tlb_addr, page_addr)) {
-        if (!victim_tlb_hit(env_cpu(env), mmu_idx, index,
-                            access_type, page_addr)) {
-            CPUState *cs = env_cpu(env);
-
-            if (!cs->cc->tcg_ops->tlb_fill(cs, addr, fault_size, access_type,
-                                           mmu_idx, nonfault, retaddr)) {
+        if (!victim_tlb_hit(cpu, mmu_idx, index, access_type, page_addr)) {
+            if (!cpu->cc->tcg_ops->tlb_fill(cpu, addr, fault_size, access_type,
+                                            mmu_idx, nonfault, retaddr)) {
                 /* Non-faulting page table read failed.  */
                 *phost = NULL;
                 *pfull = NULL;
@@ -1532,8 +1529,8 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
         }

         /* TLB resize via tlb_fill may have moved the entry.  */
-        index = tlb_index(env_cpu(env), mmu_idx, addr);
-        entry = tlb_entry(env_cpu(env), mmu_idx, addr);
+        index = tlb_index(cpu, mmu_idx, addr);
+        entry = tlb_entry(cpu, mmu_idx, addr);

         /*
          * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
@@ -1546,7 +1543,7 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
     }
     flags &= tlb_addr;

-    *pfull = full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+    *pfull = full = &cpu_tlb(cpu)->d[mmu_idx].fulltlb[index];
     flags |= full->slow_flags[access_type];

     /* Fold all "mmio-like" bits into TLB_MMIO.  This is not RAM.  */
@@ -1567,8 +1564,9 @@ int probe_access_full(CPUArchState *env, vaddr addr, int size,
                       bool nonfault, void **phost, CPUTLBEntryFull **pfull,
                       uintptr_t retaddr)
 {
-    int flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
-                                      nonfault, phost, pfull, retaddr, true);
+    int flags = probe_access_internal(env_cpu(env), addr, size, access_type,
+                                      mmu_idx, nonfault, phost, pfull, retaddr,
+                                      true);

     /* Handle clean RAM pages.  */
     if (unlikely(flags & TLB_NOTDIRTY)) {
@@ -1590,8 +1588,8 @@ int probe_access_full_mmu(CPUArchState *env, vaddr addr, int size,
     phost = phost ? phost : &discard_phost;
     pfull = pfull ? pfull : &discard_tlb;

-    int flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
-                                      true, phost, pfull, 0, false);
+    int flags = probe_access_internal(env_cpu(env), addr, size, access_type,
+                                      mmu_idx, true, phost, pfull, 0, false);

     /* Handle clean RAM pages.  */
     if (unlikely(flags & TLB_NOTDIRTY)) {
@@ -1611,8 +1609,9 @@ int probe_access_flags(CPUArchState *env, vaddr addr, int size,

     g_assert(-(addr | TARGET_PAGE_MASK) >= size);

-    flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
-                                  nonfault, phost, &full, retaddr, true);
+    flags = probe_access_internal(env_cpu(env), addr, size, access_type,
+                                  mmu_idx, nonfault, phost, &full, retaddr,
+                                  true);

     /* Handle clean RAM pages.  */
     if (unlikely(flags & TLB_NOTDIRTY)) {
@@ -1632,8 +1631,9 @@ void *probe_access(CPUArchState *env, vaddr addr, int size,

     g_assert(-(addr | TARGET_PAGE_MASK) >= size);

-    flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
-                                  false, &host, &full, retaddr, true);
+    flags = probe_access_internal(env_cpu(env), addr, size, access_type,
+                                  mmu_idx, false, &host, &full, retaddr,
+                                  true);

     /* Per the interface, size == 0 merely faults the access.  */
     if (size == 0) {
@@ -1665,7 +1665,7 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
     void *host;
     int flags;

-    flags = probe_access_internal(env, addr, 0, access_type,
+    flags = probe_access_internal(env_cpu(env), addr, 0, access_type,
                                   mmu_idx, true, &host, &full, 0, false);

     /* No combination of flags are expected by the caller.  */
@@ -1688,7 +1688,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
     CPUTLBEntryFull *full;
     void *p;

-    (void)probe_access_internal(env, addr, 1, MMU_INST_FETCH,
+    (void)probe_access_internal(env_cpu(env), addr, 1, MMU_INST_FETCH,
                                 cpu_mmu_index(env, true), false,
                                 &p, &full, 0, false);
     if (p == NULL) {
-- 
2.41.0
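A minimal standalone sketch of the hook dispatch the patch above
switches to (struct and signature heavily trimmed; the real tlb_fill
also takes the access type, mmu index, probe flag, and return address):
with a CPUState in hand, probe_access_internal() reaches the tlb_fill
hook directly through cpu->cc->tcg_ops, with no env_cpu() detour.

#include <stdbool.h>
#include <stdio.h>

typedef struct CPUState CPUState;

typedef struct TCGCPUOps {
    bool (*tlb_fill)(CPUState *cpu, unsigned long addr, int size);
} TCGCPUOps;

typedef struct CPUClass {
    const TCGCPUOps *tcg_ops;
} CPUClass;

struct CPUState {
    CPUClass *cc;   /* per-CPU class, cached on the CPUState */
};

static bool my_tlb_fill(CPUState *cpu, unsigned long addr, int size)
{
    (void)cpu;
    printf("tlb_fill addr=0x%lx size=%d\n", addr, size);
    return true;    /* pretend the page table walk succeeded */
}

int main(void)
{
    const TCGCPUOps ops = { .tlb_fill = my_tlb_fill };
    CPUClass cc = { .tcg_ops = &ops };
    CPUState cpu = { .cc = &cc };

    /* the shape of the call after this patch */
    if (!cpu.cc->tcg_ops->tlb_fill(&cpu, 0x1000, 4)) {
        return 1;
    }
    return 0;
}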
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com,
    philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 05/11] accel/tcg: Modify memory access functions to use CPUState
Date: Tue, 12 Sep 2023 17:34:22 +0200
Message-ID: <20230912153428.17816-6-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

do_[ld|st]*() and mmu_lookup*() are changed to use CPUState over
CPUArchState, moving the target dependence to the target-facing
cpu_[ld|st] functions.

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
 accel/tcg/cputlb.c | 324 ++++++++++++++++++++++-----------------------
 1 file changed, 161 insertions(+), 163 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 20ea2e2395..ebd174e23d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1367,11 +1367,10 @@ static void save_iotlb_data(CPUState *cs, MemoryRegionSection *section,
 #endif
 }

-static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
+static uint64_t io_readx(CPUState *cpu, CPUTLBEntryFull *full,
                          int mmu_idx, vaddr addr, uintptr_t retaddr,
                          MMUAccessType access_type, MemOp op)
 {
-    CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
     MemoryRegionSection *section;
     MemoryRegion *mr;
@@ -1408,11 +1407,10 @@ static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
     return val;
 }

-static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
+static void io_writex(CPUState *cpu, CPUTLBEntryFull *full,
                       int mmu_idx, uint64_t val, vaddr addr,
                       uintptr_t retaddr, MemOp op)
 {
-    CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
     MemoryRegionSection *section;
     MemoryRegion *mr;
@@ -1776,7 +1774,7 @@ typedef struct MMULookupLocals {

 /**
  * mmu_lookup1: translate one page
- * @env: cpu context
+ * @cpu: generic cpu state
  * @data: lookup parameters
  * @mmu_idx: virtual address context
  * @access_type: load/store/code
@@ -1787,12 +1785,12 @@ typedef struct MMULookupLocals {
  * tlb_fill will longjmp out.  Return true if the softmmu tlb for
  * @mmu_idx may have resized.
  */
-static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
+static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data,
                         int mmu_idx, MMUAccessType access_type, uintptr_t ra)
 {
     vaddr addr = data->addr;
-    uintptr_t index = tlb_index(env_cpu(env), mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env_cpu(env), mmu_idx, addr);
+    uintptr_t index = tlb_index(cpu, mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr);
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     bool maybe_resized = false;
     CPUTLBEntryFull *full;
@@ -1800,17 +1798,17 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,

     /* If the TLB entry is for a different page, reload and try again.  */
*/ if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env_cpu(env), mmu_idx, index, access_type, + if (!victim_tlb_hit(cpu, mmu_idx, index, access_type, addr & TARGET_PAGE_MASK)) { - tlb_fill(env_cpu(env), addr, data->size, access_type, mmu_idx, ra); + tlb_fill(cpu, addr, data->size, access_type, mmu_idx, ra); maybe_resized = true; - index = tlb_index(env_cpu(env), mmu_idx, addr); - entry = tlb_entry(env_cpu(env), mmu_idx, addr); + index = tlb_index(cpu, mmu_idx, addr); + entry = tlb_entry(cpu, mmu_idx, addr); } tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK; } - full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; + full = &cpu_tlb(cpu)->d[mmu_idx].fulltlb[index]; flags = tlb_addr & (TLB_FLAGS_MASK & ~TLB_FORCE_SLOW); flags |= full->slow_flags[access_type]; @@ -1824,7 +1822,7 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data, /** * mmu_watch_or_dirty - * @env: cpu context + * @cpu: generic cpu state * @data: lookup parameters * @access_type: load/store/code * @ra: return address into tcg generated code, or 0 @@ -1832,7 +1830,7 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data, * Trigger watchpoints for @data.addr:@data.size; * record writes to protected clean pages. */ -static void mmu_watch_or_dirty(CPUArchState *env, MMULookupPageData *data, +static void mmu_watch_or_dirty(CPUState *cpu, MMULookupPageData *data, MMUAccessType access_type, uintptr_t ra) { CPUTLBEntryFull *full = data->full; /* On watchpoint hit, this will longjmp out. */ if (flags & TLB_WATCHPOINT) { int wp = access_type == MMU_DATA_STORE ? BP_MEM_WRITE : BP_MEM_READ; - cpu_check_watchpoint(env_cpu(env), addr, size, full->attrs, wp, ra); + cpu_check_watchpoint(cpu, addr, size, full->attrs, wp, ra); flags &= ~TLB_WATCHPOINT; } /* Note that notdirty is only set for writes. */ if (flags & TLB_NOTDIRTY) { - notdirty_write(env_cpu(env), addr, size, full, ra); + notdirty_write(cpu, addr, size, full, ra); flags &= ~TLB_NOTDIRTY; } data->flags = flags; @@ -1857,7 +1855,7 @@ static void mmu_watch_or_dirty(CPUArchState *env, MMULookupPageData *data, /** * mmu_lookup: translate page(s) - * @env: cpu context + * @cpu: generic cpu state * @addr: virtual address * @oi: combined mmu_idx and MemOp * @ra: return address into tcg generated code, or 0 @@ -1867,7 +1865,7 @@ static void mmu_watch_or_dirty(CPUArchState *env, MMULookupPageData *data, * Resolve the translation for the page(s) beginning at @addr, for MemOp.size * bytes. Return true if the lookup crosses a page boundary.
*/ -static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, +static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType type, MMULookupLocals *l) { unsigned a_bits; @@ -1882,7 +1880,7 @@ static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, /* Handle CPU specific unaligned behaviour */ a_bits = get_alignment_bits(l->memop); if (addr & ((1 << a_bits) - 1)) { - cpu_unaligned_access(env_cpu(env), addr, type, l->mmu_idx, ra); + cpu_unaligned_access(cpu, addr, type, l->mmu_idx, ra); } l->page[0].addr = addr; @@ -1892,11 +1890,11 @@ static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, crosspage = (addr ^ l->page[1].addr) & TARGET_PAGE_MASK; if (likely(!crosspage)) { - mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra); + mmu_lookup1(cpu, &l->page[0], l->mmu_idx, type, ra); flags = l->page[0].flags; if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) { - mmu_watch_or_dirty(env, &l->page[0], type, ra); + mmu_watch_or_dirty(cpu, &l->page[0], type, ra); } if (unlikely(flags & TLB_BSWAP)) { l->memop ^= MO_BSWAP; @@ -1911,16 +1909,16 @@ static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, * Lookup both pages, recognizing exceptions from either. If the * second lookup potentially resized, refresh first CPUTLBEntryFull. */ - mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra); - if (mmu_lookup1(env, &l->page[1], l->mmu_idx, type, ra)) { - uintptr_t index = tlb_index(env_cpu(env), l->mmu_idx, addr); - l->page[0].full = &env_tlb(env)->d[l->mmu_idx].fulltlb[index]; + mmu_lookup1(cpu, &l->page[0], l->mmu_idx, type, ra); + if (mmu_lookup1(cpu, &l->page[1], l->mmu_idx, type, ra)) { + uintptr_t index = tlb_index(cpu, l->mmu_idx, addr); + l->page[0].full = &cpu_tlb(cpu)->d[l->mmu_idx].fulltlb[index]; } flags = l->page[0].flags | l->page[1].flags; if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) { - mmu_watch_or_dirty(env, &l->page[0], type, ra); - mmu_watch_or_dirty(env, &l->page[1], type, ra); + mmu_watch_or_dirty(cpu, &l->page[0], type, ra); + mmu_watch_or_dirty(cpu, &l->page[1], type, ra); } /* @@ -2060,7 +2058,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, /** * do_ld_mmio_beN: - * @env: cpu context + * @cpu: generic cpu state * @full: page parameters * @ret_be: accumulated data * @addr: virtual address @@ -2072,7 +2070,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, * Load @size bytes from @addr, which is memory-mapped i/o. * The bytes are concatenated in big-endian order with @ret_be.
*/ -static uint64_t do_ld_mmio_beN(CPUArchState *env, CPUTLBEntryFull *full, +static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full, uint64_t ret_be, vaddr addr, int size, int mmu_idx, MMUAccessType type, uintptr_t ra) { @@ -2086,26 +2084,26 @@ static uint64_t do_ld_mmio_beN(CPUArchState *env, CPUTLBEntryFull *full, case 3: case 5: case 7: - t = io_readx(env, full, mmu_idx, addr, ra, type, MO_UB); + t = io_readx(cpu, full, mmu_idx, addr, ra, type, MO_UB); ret_be = (ret_be << 8) | t; size -= 1; addr += 1; break; case 2: case 6: - t = io_readx(env, full, mmu_idx, addr, ra, type, MO_BEUW); + t = io_readx(cpu, full, mmu_idx, addr, ra, type, MO_BEUW); ret_be = (ret_be << 16) | t; size -= 2; addr += 2; break; case 4: - t = io_readx(env, full, mmu_idx, addr, ra, type, MO_BEUL); + t = io_readx(cpu, full, mmu_idx, addr, ra, type, MO_BEUL); ret_be = (ret_be << 32) | t; size -= 4; addr += 4; break; case 0: - return io_readx(env, full, mmu_idx, addr, ra, type, MO_BEUQ); + return io_readx(cpu, full, mmu_idx, addr, ra, type, MO_BEUQ); default: qemu_build_not_reached(); } @@ -2207,11 +2205,11 @@ static uint64_t do_ld_whole_be4(MMULookupPageData *p, uint64_t ret_be) * As do_ld_bytes_beN, but with one atomic load. * Eight aligned bytes are guaranteed to cover the load. */ -static uint64_t do_ld_whole_be8(CPUArchState *env, uintptr_t ra, +static uint64_t do_ld_whole_be8(CPUState *cpu, uintptr_t ra, MMULookupPageData *p, uint64_t ret_be) { int o = p->addr & 7; - uint64_t x = load_atomic8_or_exit(env, ra, p->haddr - o); + uint64_t x = load_atomic8_or_exit(cpu->env_ptr, ra, p->haddr - o); x = cpu_to_be64(x); x <<= o * 8; @@ -2227,11 +2225,11 @@ static uint64_t do_ld_whole_be8(CPUArchState *env, uintptr_t ra, * As do_ld_bytes_beN, but with one atomic load. * 16 aligned bytes are guaranteed to cover the load. */ -static Int128 do_ld_whole_be16(CPUArchState *env, uintptr_t ra, +static Int128 do_ld_whole_be16(CPUState *cpu, uintptr_t ra, MMULookupPageData *p, uint64_t ret_be) { int o = p->addr & 15; - Int128 x, y = load_atomic16_or_exit(env, ra, p->haddr - o); + Int128 x, y = load_atomic16_or_exit(cpu->env_ptr, ra, p->haddr - o); int size = p->size; if (!HOST_BIG_ENDIAN) { @@ -2247,7 +2245,7 @@ static Int128 do_ld_whole_be16(CPUArchState *env, uintptr_t ra, /* * Wrapper for the above. */ -static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, +static uint64_t do_ld_beN(CPUState *cpu, MMULookupPageData *p, uint64_t ret_be, int mmu_idx, MMUAccessType type, MemOp mop, uintptr_t ra) { @@ -2256,7 +2254,7 @@ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, if (unlikely(p->flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - return do_ld_mmio_beN(env, p->full, ret_be, p->addr, p->size, + return do_ld_mmio_beN(cpu, p->full, ret_be, p->addr, p->size, mmu_idx, type, ra); } @@ -2280,7 +2278,7 @@ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, if (!HAVE_al8_fast && p->size < 4) { return do_ld_whole_be4(p, ret_be); } else { - return do_ld_whole_be8(env, ra, p, ret_be); + return do_ld_whole_be8(cpu, ra, p, ret_be); } } /* fall through */ @@ -2298,7 +2296,7 @@ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p, /* * Wrapper for the above, for 8 < size < 16.
*/ -static Int128 do_ld16_beN(CPUArchState *env, MMULookupPageData *p, +static Int128 do_ld16_beN(CPUState *cpu, MMULookupPageData *p, uint64_t a, int mmu_idx, MemOp mop, uintptr_t ra) { int size = p->size; @@ -2307,9 +2305,9 @@ static Int128 do_ld16_beN(CPUArchState *env, MMULookupPageData *p, if (unlikely(p->flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - a = do_ld_mmio_beN(env, p->full, a, p->addr, size - 8, + a = do_ld_mmio_beN(cpu, p->full, a, p->addr, size - 8, mmu_idx, MMU_DATA_LOAD, ra); - b = do_ld_mmio_beN(env, p->full, 0, p->addr + 8, 8, + b = do_ld_mmio_beN(cpu, p->full, 0, p->addr + 8, 8, mmu_idx, MMU_DATA_LOAD, ra); return int128_make128(b, a); } @@ -2330,7 +2328,7 @@ static Int128 do_ld16_beN(CPUArchState *env, MMULookupPageData *p, case MO_ATOM_WITHIN16_PAIR: /* Since size > 8, this is the half that must be atomic. */ - return do_ld_whole_be16(env, ra, p, a); + return do_ld_whole_be16(cpu, ra, p, a); case MO_ATOM_IFALIGN_PAIR: /* @@ -2352,30 +2350,30 @@ static Int128 do_ld16_beN(CPUArchState *env, MMULookupPageData *p, return int128_make128(b, a); } -static uint8_t do_ld_1(CPUArchState *env, MMULookupPageData *p, int mmu_idx, +static uint8_t do_ld_1(CPUState *cpu, MMULookupPageData *p, int mmu_idx, MMUAccessType type, uintptr_t ra) { if (unlikely(p->flags & TLB_MMIO)) { - return io_readx(env, p->full, mmu_idx, p->addr, ra, type, MO_UB); + return io_readx(cpu, p->full, mmu_idx, p->addr, ra, type, MO_UB); } else { return *(uint8_t *)p->haddr; } } -static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx, +static uint16_t do_ld_2(CPUState *cpu, MMULookupPageData *p, int mmu_idx, MMUAccessType type, MemOp memop, uintptr_t ra) { uint16_t ret; if (unlikely(p->flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 2, mmu_idx, type, ra); + ret = do_ld_mmio_beN(cpu, p->full, 0, p->addr, 2, mmu_idx, type, ra); if ((memop & MO_BSWAP) == MO_LE) { ret = bswap16(ret); } } else { /* Perform the load host endian, then swap if necessary. */ - ret = load_atom_2(env, ra, p->haddr, memop); + ret = load_atom_2(cpu->env_ptr, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap16(ret); } @@ -2383,20 +2381,20 @@ static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx, return ret; } -static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx, +static uint32_t do_ld_4(CPUState *cpu, MMULookupPageData *p, int mmu_idx, MMUAccessType type, MemOp memop, uintptr_t ra) { uint32_t ret; if (unlikely(p->flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 4, mmu_idx, type, ra); + ret = do_ld_mmio_beN(cpu, p->full, 0, p->addr, 4, mmu_idx, type, ra); if ((memop & MO_BSWAP) == MO_LE) { ret = bswap32(ret); } } else { /* Perform the load host endian.
*/ - ret = load_atom_4(env, ra, p->haddr, memop); + ret = load_atom_4(cpu->env_ptr, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap32(ret); } @@ -2404,20 +2402,20 @@ static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx, return ret; } -static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx, +static uint64_t do_ld_8(CPUState *cpu, MMULookupPageData *p, int mmu_idx, MMUAccessType type, MemOp memop, uintptr_t ra) { uint64_t ret; if (unlikely(p->flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 8, mmu_idx, type, ra); + ret = do_ld_mmio_beN(cpu, p->full, 0, p->addr, 8, mmu_idx, type, ra); if ((memop & MO_BSWAP) == MO_LE) { ret = bswap64(ret); } } else { /* Perform the load host endian. */ - ret = load_atom_8(env, ra, p->haddr, memop); + ret = load_atom_8(cpu->env_ptr, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap64(ret); } @@ -2425,27 +2423,27 @@ static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx, return ret; } -static uint8_t do_ld1_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi, +static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType access_type) { MMULookupLocals l; bool crosspage; cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); - crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l); tcg_debug_assert(!crosspage); - return do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); + return do_ld_1(cpu, &l.page[0], l.mmu_idx, access_type, ra); } tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8); - return do_ld1_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); + return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); } -static uint16_t do_ld2_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi, +static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType access_type) { MMULookupLocals l; @@ -2454,13 +2452,13 @@ static uint16_t do_ld2_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi, uint8_t a, b; cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); - crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l); if (likely(!crosspage)) { - return do_ld_2(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + return do_ld_2(cpu, &l.page[0], l.mmu_idx, access_type, l.memop, ra); } - a = do_ld_1(env, &l.page[0], l.mmu_idx, access_type, ra); - b = do_ld_1(env, &l.page[1], l.mmu_idx, access_type, ra); + a = do_ld_1(cpu, &l.page[0], l.mmu_idx, access_type, ra); + b = do_ld_1(cpu, &l.page[1], l.mmu_idx, access_type, ra); if ((l.memop & MO_BSWAP) == MO_LE) { ret = a | (b << 8); @@ -2474,10 +2472,10 @@ tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); - return do_ld2_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); + return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); } -static uint32_t do_ld4_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi, +static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType access_type) { MMULookupLocals l; @@ -2485,13 +2483,13 @@ static uint32_t do_ld4_mmu(CPUArchState *env, vaddr addr,
uint32_t ret; cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); - crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l); if (likely(!crosspage)) { - return do_ld_4(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + return do_ld_4(cpu, &l.page[0], l.mmu_idx, access_type, l.memop, ra); } - ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); - ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(cpu, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(cpu, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); if ((l.memop & MO_BSWAP) == MO_LE) { ret = bswap32(ret); } @@ -2502,10 +2500,10 @@ tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); - return do_ld4_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); + return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); } -static uint64_t do_ld8_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi, +static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType access_type) { MMULookupLocals l; @@ -2513,13 +2511,13 @@ static uint64_t do_ld8_mmu(CPUArchState *env, vaddr addr, MemOpIdx oi, uint64_t ret; cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); - crosspage = mmu_lookup(env, addr, oi, ra, access_type, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, access_type, &l); if (likely(!crosspage)) { - return do_ld_8(env, &l.page[0], l.mmu_idx, access_type, l.memop, ra); + return do_ld_8(cpu, &l.page[0], l.mmu_idx, access_type, l.memop, ra); } - ret = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); - ret = do_ld_beN(env, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(cpu, &l.page[0], 0, l.mmu_idx, access_type, l.memop, ra); + ret = do_ld_beN(cpu, &l.page[1], ret, l.mmu_idx, access_type, l.memop, ra); if ((l.memop & MO_BSWAP) == MO_LE) { ret = bswap64(ret); } @@ -2530,7 +2528,7 @@ uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); - return do_ld8_mmu(env, addr, oi, retaddr, MMU_DATA_LOAD); + return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); } /* @@ -2556,7 +2554,7 @@ tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr, return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr); } -static Int128 do_ld16_mmu(CPUArchState *env, vaddr addr, +static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra) { MMULookupLocals l; @@ -2566,13 +2564,13 @@ static Int128 do_ld16_mmu(CPUArchState *env, vaddr addr, int first; cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); - crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_LOAD, &l); if (likely(!crosspage)) { if (unlikely(l.page[0].flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - a = do_ld_mmio_beN(env, l.page[0].full, 0, addr, 8, + a = do_ld_mmio_beN(cpu, l.page[0].full, 0, addr, 8, l.mmu_idx, MMU_DATA_LOAD, ra); - b = do_ld_mmio_beN(env, l.page[0].full, 0, addr + 8, 8, + b = do_ld_mmio_beN(cpu, l.page[0].full, 0, addr + 8, 8, l.mmu_idx, MMU_DATA_LOAD, ra); ret = int128_make128(b, a); if ((l.memop & MO_BSWAP) == MO_LE) { @@ -2580,7 +2578,7 @@ static Int128 do_ld16_mmu(CPUArchState *env,
vaddr addr, } } else { /* Perform the load host endian. */ - ret = load_atom_16(env, ra, l.page[0].haddr, l.memop); + ret = load_atom_16(cpu->env_ptr, ra, l.page[0].haddr, l.memop); if (l.memop & MO_BSWAP) { ret = bswap128(ret); } @@ -2592,8 +2590,8 @@ static Int128 do_ld16_mmu(CPUArchState *env, vaddr addr, if (first == 8) { MemOp mop8 = (l.memop & ~MO_SIZE) | MO_64; - a = do_ld_8(env, &l.page[0], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); - b = do_ld_8(env, &l.page[1], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); + a = do_ld_8(cpu, &l.page[0], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); + b = do_ld_8(cpu, &l.page[1], l.mmu_idx, MMU_DATA_LOAD, mop8, ra); if ((mop8 & MO_BSWAP) == MO_LE) { ret = int128_make128(a, b); } else { @@ -2603,15 +2601,15 @@ static Int128 do_ld16_mmu(CPUArchState *env, vaddr addr, } if (first < 8) { - a = do_ld_beN(env, &l.page[0], 0, l.mmu_idx, MMU_DATA_LOAD, l.memop, ra); - ret = do_ld16_beN(env, &l.page[1], a, l.mmu_idx, l.memop, ra); + a = do_ld_beN(cpu, &l.page[0], 0, l.mmu_idx, MMU_DATA_LOAD, l.memop, ra); + ret = do_ld16_beN(cpu, &l.page[1], a, l.mmu_idx, l.memop, ra); } else { - ret = do_ld16_beN(env, &l.page[0], 0, l.mmu_idx, l.memop, ra); + ret = do_ld16_beN(cpu, &l.page[0], 0, l.mmu_idx, l.memop, ra); b = int128_getlo(ret); ret = int128_lshift(ret, l.page[1].size * 8); a = int128_gethi(ret); - b = do_ld_beN(env, &l.page[1], b, l.mmu_idx, MMU_DATA_LOAD, l.memop, ra); + b = do_ld_beN(cpu, &l.page[1], b, l.mmu_idx, MMU_DATA_LOAD, l.memop, ra); ret = int128_make128(b, a); } @@ -2625,7 +2623,7 @@ Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr, uint32_t oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); - return do_ld16_mmu(env, addr, oi, retaddr); + return do_ld16_mmu(env_cpu(env), addr, oi, retaddr); } Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi) @@ -2647,7 +2645,7 @@ uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra) uint8_t ret; tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB); - ret = do_ld1_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; } @@ -2658,7 +2656,7 @@ uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr, uint16_t ret; tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); - ret = do_ld2_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; } @@ -2669,7 +2667,7 @@ uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr, uint32_t ret; tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); - ret = do_ld4_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; } @@ -2680,7 +2678,7 @@ uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr, uint64_t ret; tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); - ret = do_ld8_mmu(env, addr, oi, ra, MMU_DATA_LOAD); + ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD); plugin_load_cb(env, addr, oi); return ret; } @@ -2691,7 +2689,7 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr, Int128 ret; tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); - ret = do_ld16_mmu(env, addr, oi, ra); + ret = do_ld16_mmu(env_cpu(env), addr, oi, ra); plugin_load_cb(env, addr, oi); return ret; } @@ -2702,7 +2700,7 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr, /** * do_st_mmio_leN:
- * @env: cpu context + * @cpu: generic cpu state * @full: page parameters * @val_le: data to store * @addr: virtual address @@ -2715,7 +2713,7 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr, * The bytes to store are extracted in little-endian order from @val_le; * return the bytes of @val_le beyond @p->size that have not been stored. */ -static uint64_t do_st_mmio_leN(CPUArchState *env, CPUTLBEntryFull *full, +static uint64_t do_st_mmio_leN(CPUState *cpu, CPUTLBEntryFull *full, uint64_t val_le, vaddr addr, int size, int mmu_idx, uintptr_t ra) { @@ -2728,26 +2726,26 @@ static uint64_t do_st_mmio_leN(CPUArchState *env, CPUTLBEntryFull *full, case 3: case 5: case 7: - io_writex(env, full, mmu_idx, val_le, addr, ra, MO_UB); + io_writex(cpu, full, mmu_idx, val_le, addr, ra, MO_UB); val_le >>= 8; size -= 1; addr += 1; break; case 2: case 6: - io_writex(env, full, mmu_idx, val_le, addr, ra, MO_LEUW); + io_writex(cpu, full, mmu_idx, val_le, addr, ra, MO_LEUW); val_le >>= 16; size -= 2; addr += 2; break; case 4: - io_writex(env, full, mmu_idx, val_le, addr, ra, MO_LEUL); + io_writex(cpu, full, mmu_idx, val_le, addr, ra, MO_LEUL); val_le >>= 32; size -= 4; addr += 4; break; case 0: - io_writex(env, full, mmu_idx, val_le, addr, ra, MO_LEUQ); + io_writex(cpu, full, mmu_idx, val_le, addr, ra, MO_LEUQ); return 0; default: qemu_build_not_reached(); } @@ -2760,7 +2758,7 @@ static uint64_t do_st_mmio_leN(CPUArchState *env, CPUTLBEntryFull *full, /* * Wrapper for the above. */ -static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, +static uint64_t do_st_leN(CPUState *cpu, MMULookupPageData *p, uint64_t val_le, int mmu_idx, MemOp mop, uintptr_t ra) { @@ -2769,7 +2767,7 @@ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, if (unlikely(p->flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - return do_st_mmio_leN(env, p->full, val_le, p->addr, + return do_st_mmio_leN(cpu, p->full, val_le, p->addr, p->size, mmu_idx, ra); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { return val_le >> (p->size * 8); @@ -2797,7 +2795,7 @@ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, } else if (HAVE_al8) { return store_whole_le8(p->haddr, p->size, val_le); } else { - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } } /* fall through */ @@ -2815,7 +2813,7 @@ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p, /* * Wrapper for the above, for 8 < size < 16. */ -static uint64_t do_st16_leN(CPUArchState *env, MMULookupPageData *p, +static uint64_t do_st16_leN(CPUState *cpu, MMULookupPageData *p, Int128 val_le, int mmu_idx, MemOp mop, uintptr_t ra) { @@ -2824,9 +2822,9 @@ static uint64_t do_st16_leN(CPUArchState *env, MMULookupPageData *p, if (unlikely(p->flags & TLB_MMIO)) { QEMU_IOTHREAD_LOCK_GUARD(); - do_st_mmio_leN(env, p->full, int128_getlo(val_le), + do_st_mmio_leN(cpu, p->full, int128_getlo(val_le), p->addr, 8, mmu_idx, ra); - return do_st_mmio_leN(env, p->full, int128_gethi(val_le), + return do_st_mmio_leN(cpu, p->full, int128_gethi(val_le), p->addr + 8, size - 8, mmu_idx, ra); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { return int128_gethi(val_le) >> ((size - 8) * 8); @@ -2846,7 +2844,7 @@ static uint64_t do_st16_leN(CPUArchState *env, MMULookupPageData *p, case MO_ATOM_WITHIN16_PAIR: /* Since size > 8, this is the half that must be atomic.
*/ if (!HAVE_ATOMIC128_RW) { - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } return store_whole_le16(p->haddr, p->size, val_le); @@ -2867,11 +2865,11 @@ static uint64_t do_st16_leN(CPUArchState *env, MMULookupPageData *p, } } -static void do_st_1(CPUArchState *env, MMULookupPageData *p, uint8_t val, +static void do_st_1(CPUState *cpu, MMULookupPageData *p, uint8_t val, int mmu_idx, uintptr_t ra) { if (unlikely(p->flags & TLB_MMIO)) { - io_writex(env, p->full, mmu_idx, val, p->addr, ra, MO_UB); + io_writex(cpu, p->full, mmu_idx, val, p->addr, ra, MO_UB); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { /* nothing */ } else { @@ -2879,7 +2877,7 @@ static void do_st_1(CPUArchState *env, MMULookupPageData *p, uint8_t val, } } -static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val, +static void do_st_2(CPUState *cpu, MMULookupPageData *p, uint16_t val, int mmu_idx, MemOp memop, uintptr_t ra) { if (unlikely(p->flags & TLB_MMIO)) { @@ -2887,7 +2885,7 @@ static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val, val = bswap16(val); } QEMU_IOTHREAD_LOCK_GUARD(); - do_st_mmio_leN(env, p->full, val, p->addr, 2, mmu_idx, ra); + do_st_mmio_leN(cpu, p->full, val, p->addr, 2, mmu_idx, ra); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { /* nothing */ } else { @@ -2895,11 +2893,11 @@ static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val, if (memop & MO_BSWAP) { val = bswap16(val); } - store_atom_2(env, ra, p->haddr, memop, val); + store_atom_2(cpu->env_ptr, ra, p->haddr, memop, val); } } -static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val, +static void do_st_4(CPUState *cpu, MMULookupPageData *p, uint32_t val, int mmu_idx, MemOp memop, uintptr_t ra) { if (unlikely(p->flags & TLB_MMIO)) { @@ -2907,7 +2905,7 @@ static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val, val = bswap32(val); } QEMU_IOTHREAD_LOCK_GUARD(); - do_st_mmio_leN(env, p->full, val, p->addr, 4, mmu_idx, ra); + do_st_mmio_leN(cpu, p->full, val, p->addr, 4, mmu_idx, ra); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { /* nothing */ } else { @@ -2915,11 +2913,11 @@ static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val, if (memop & MO_BSWAP) { val = bswap32(val); } - store_atom_4(env, ra, p->haddr, memop, val); + store_atom_4(cpu->env_ptr, ra, p->haddr, memop, val); } } -static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, +static void do_st_8(CPUState *cpu, MMULookupPageData *p, uint64_t val, int mmu_idx, MemOp memop, uintptr_t ra) { if (unlikely(p->flags & TLB_MMIO)) { @@ -2927,7 +2925,7 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, val = bswap64(val); } QEMU_IOTHREAD_LOCK_GUARD(); - do_st_mmio_leN(env, p->full, val, p->addr, 8, mmu_idx, ra); + do_st_mmio_leN(cpu, p->full, val, p->addr, 8, mmu_idx, ra); } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) { /* nothing */ } else { @@ -2935,7 +2933,7 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val, if (memop & MO_BSWAP) { val = bswap64(val); } - store_atom_8(env, ra, p->haddr, memop, val); + store_atom_8(cpu->env_ptr, ra, p->haddr, memop, val); } } @@ -2947,13 +2945,13 @@ void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val, tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8); cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); - crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE,
&l); + crosspage = mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_STORE, &l); tcg_debug_assert(!crosspage); - do_st_1(env, &l.page[0], val, l.mmu_idx, ra); + do_st_1(env_cpu(env), &l.page[0], val, l.mmu_idx, ra); } -static void do_st2_mmu(CPUArchState *env, vaddr addr, uint16_t val, +static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val, MemOpIdx oi, uintptr_t ra) { MMULookupLocals l; @@ -2961,9 +2959,9 @@ static void do_st2_mmu(CPUArchState *env, vaddr addr, uint16_t val, uint8_t a, b; cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); - crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l); if (likely(!crosspage)) { - do_st_2(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + do_st_2(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra); return; } @@ -2972,27 +2970,27 @@ static void do_st2_mmu(CPUArchState *env, vaddr addr, uint16_t val, } else { b = val, a = val >> 8; } - do_st_1(env, &l.page[0], a, l.mmu_idx, ra); - do_st_1(env, &l.page[1], b, l.mmu_idx, ra); + do_st_1(cpu, &l.page[0], a, l.mmu_idx, ra); + do_st_1(cpu, &l.page[1], b, l.mmu_idx, ra); } void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); - do_st2_mmu(env, addr, val, oi, retaddr); + do_st2_mmu(env_cpu(env), addr, val, oi, retaddr); } -static void do_st4_mmu(CPUArchState *env, vaddr addr, uint32_t val, +static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val, MemOpIdx oi, uintptr_t ra) { MMULookupLocals l; bool crosspage; cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); - crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l); if (likely(!crosspage)) { - do_st_4(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + do_st_4(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra); return; } @@ -3000,27 +2998,27 @@ static void do_st4_mmu(CPUArchState *env, vaddr addr, uint32_t val, if ((l.memop & MO_BSWAP) != MO_LE) { val = bswap32(val); } - val = do_st_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); - (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); + val = do_st_leN(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra); + (void) do_st_leN(cpu, &l.page[1], val, l.mmu_idx, l.memop, ra); } void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); - do_st4_mmu(env, addr, val, oi, retaddr); + do_st4_mmu(env_cpu(env), addr, val, oi, retaddr); } -static void do_st8_mmu(CPUArchState *env, vaddr addr, uint64_t val, +static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val, MemOpIdx oi, uintptr_t ra) { MMULookupLocals l; bool crosspage; cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); - crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l); if (likely(!crosspage)) { - do_st_8(env, &l.page[0], val, l.mmu_idx, l.memop, ra); + do_st_8(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra); return; } @@ -3028,18 +3026,18 @@ static void do_st8_mmu(CPUArchState *env, vaddr addr, uint64_t val, if ((l.memop & MO_BSWAP) != MO_LE) { val = bswap64(val); } - val = do_st_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); - (void) do_st_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); + val = do_st_leN(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra); + (void)
do_st_leN(cpu, &l.page[1], val, l.mmu_idx, l.memop, ra); } void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); - do_st8_mmu(env, addr, val, oi, retaddr); + do_st8_mmu(env_cpu(env), addr, val, oi, retaddr); } -static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val, +static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val, MemOpIdx oi, uintptr_t ra) { MMULookupLocals l; @@ -3048,7 +3046,7 @@ static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val, int first; cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST); - crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l); + crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l); if (likely(!crosspage)) { if (unlikely(l.page[0].flags & TLB_MMIO)) { if ((l.memop & MO_BSWAP) != MO_LE) { @@ -3057,8 +3055,8 @@ static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val, a = int128_getlo(val); b = int128_gethi(val); QEMU_IOTHREAD_LOCK_GUARD(); - do_st_mmio_leN(env, l.page[0].full, a, addr, 8, l.mmu_idx, ra); - do_st_mmio_leN(env, l.page[0].full, b, addr + 8, 8, l.mmu_idx, ra); + do_st_mmio_leN(cpu, l.page[0].full, a, addr, 8, l.mmu_idx, ra); + do_st_mmio_leN(cpu, l.page[0].full, b, addr + 8, 8, l.mmu_idx, ra); } else if (unlikely(l.page[0].flags & TLB_DISCARD_WRITE)) { /* nothing */ } else { @@ -3066,7 +3064,7 @@ static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val, if (l.memop & MO_BSWAP) { val = bswap128(val); } - store_atom_16(env, ra, l.page[0].haddr, l.memop, val); + store_atom_16(cpu->env_ptr, ra, l.page[0].haddr, l.memop, val); } return; } @@ -3083,8 +3081,8 @@ static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val, } else { a = int128_getlo(val), b = int128_gethi(val); } - do_st_8(env, &l.page[0], a, l.mmu_idx, mop8, ra); - do_st_8(env, &l.page[1], b, l.mmu_idx, mop8, ra); + do_st_8(cpu, &l.page[0], a, l.mmu_idx, mop8, ra); + do_st_8(cpu, &l.page[1], b, l.mmu_idx, mop8, ra); return; } @@ -3092,12 +3090,12 @@ static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val, val = bswap128(val); } if (first < 8) { - do_st_leN(env, &l.page[0], int128_getlo(val), l.mmu_idx, l.memop, ra); + do_st_leN(cpu, &l.page[0], int128_getlo(val), l.mmu_idx, l.memop, ra); val = int128_urshift(val, first * 8); - do_st16_leN(env, &l.page[1], val, l.mmu_idx, l.memop, ra); + do_st16_leN(cpu, &l.page[1], val, l.mmu_idx, l.memop, ra); } else { - b = do_st16_leN(env, &l.page[0], val, l.mmu_idx, l.memop, ra); - do_st_leN(env, &l.page[1], b, l.mmu_idx, l.memop, ra); + b = do_st16_leN(cpu, &l.page[0], val, l.mmu_idx, l.memop, ra); + do_st_leN(cpu, &l.page[1], b, l.mmu_idx, l.memop, ra); } } @@ -3105,7 +3103,7 @@ void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); - do_st16_mmu(env, addr, val, oi, retaddr); + do_st16_mmu(env_cpu(env), addr, val, oi, retaddr); } void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi) @@ -3133,7 +3131,7 @@ void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16); - do_st2_mmu(env, addr, val, oi, retaddr); + do_st2_mmu(env_cpu(env), addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } @@ -3141,7 +3139,7 @@ void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32); - do_st4_mmu(env, addr, val, oi, retaddr); + do_st4_mmu(env_cpu(env), addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } @@ -3149,7 +3147,7 @@ void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64); - do_st8_mmu(env, addr, val, oi, retaddr); + do_st8_mmu(env_cpu(env), addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } @@ -3157,7 +3155,7 @@ void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val, MemOpIdx oi, uintptr_t retaddr) { tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128); - do_st16_mmu(env, addr, val, oi, retaddr); + do_st16_mmu(env_cpu(env), addr, val, oi, retaddr); plugin_store_cb(env, addr, oi); } @@ -3199,47 +3197,47 @@ void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val, uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_UB, cpu_mmu_index(env, true)); - return do_ld1_mmu(env, addr, oi, 0, MMU_INST_FETCH); + return do_ld1_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH); } uint32_t cpu_lduw_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUW, cpu_mmu_index(env, true)); - return do_ld2_mmu(env, addr, oi, 0, MMU_INST_FETCH); + return do_ld2_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH); } uint32_t cpu_ldl_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUL, cpu_mmu_index(env, true)); - return do_ld4_mmu(env, addr, oi, 0, MMU_INST_FETCH); + return do_ld4_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH); } uint64_t cpu_ldq_code(CPUArchState *env, abi_ptr addr) { MemOpIdx oi = make_memop_idx(MO_TEUQ, cpu_mmu_index(env, true)); - return do_ld8_mmu(env, addr, oi, 0, MMU_INST_FETCH); + return do_ld8_mmu(env_cpu(env), addr, oi, 0, MMU_INST_FETCH); } uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - return do_ld1_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); + return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_INST_FETCH); } uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - return do_ld2_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); + return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_INST_FETCH); } uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - return do_ld4_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); + return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_INST_FETCH); } uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t retaddr) { - return do_ld8_mmu(env, addr, oi, retaddr, MMU_INST_FETCH); + return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_INST_FETCH); } -- 2.41.0
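The shape of the conversion above can be summarized in a minimal sketch, assuming QEMU's internal TCG headers; my_do_ld_mmu() and my_helper_ldub_mmu() are hypothetical stand-ins for the real do_ld*_mmu()/helper_*_mmu() pairs, not code from the patch:

/* Internal slow path: now target-agnostic, operating on CPUState only. */
static uint64_t my_do_ld_mmu(CPUState *cpu, vaddr addr,
                             MemOpIdx oi, uintptr_t ra)
{
    int mmu_idx = get_mmuidx(oi);
    /* TLB access goes through the CPUState-based helpers. */
    uintptr_t index = tlb_index(cpu, mmu_idx, addr);
    CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr);
    (void)index;
    (void)entry;
    /* ... actual lookup and load elided in this sketch ... */
    return 0;
}

/* Target-facing entry point: keeps CPUArchState and performs the
 * env -> cpu conversion exactly once, at the boundary, via env_cpu(). */
tcg_target_ulong my_helper_ldub_mmu(CPUArchState *env, uint64_t addr,
                                    MemOpIdx oi, uintptr_t ra)
{
    return my_do_ld_mmu(env_cpu(env), addr, oi, ra);
}

Note that the cpu->env_ptr uses introduced here for the atomicity helpers are temporary; patch 07 of this series removes them.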
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com, philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 06/11] accel/tcg: Modify atomic_mmu_lookup() to use CPUState
Date: Tue, 12 Sep 2023 17:34:23 +0200
Message-ID: <20230912153428.17816-7-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>
The goal is to eventually allow per-target compilation of the functions in atomic_template.h, while atomic_mmu_lookup() and cputlb.c are compiled once per mode (user or system).

Signed-off-by: Anton Johansson
---
accel/tcg/atomic_template.h | 20 ++++++++++++-------- accel/tcg/cputlb.c | 26 +++++++++++++------------- accel/tcg/user-exec.c | 8 ++++---- 3 files changed, 29 insertions(+), 25 deletions(-)

diff --git a/accel/tcg/atomic_template.h b/accel/tcg/atomic_template.h index 84c08b1425..1dc2151daf 100644 --- a/accel/tcg/atomic_template.h +++ b/accel/tcg/atomic_template.h @@ -73,7 +73,8 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE cmpv, ABI_TYPE newv, MemOpIdx oi, uintptr_t retaddr) { - DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); + DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, + DATA_SIZE, retaddr); DATA_TYPE ret; #if DATA_SIZE == 16 @@ -90,7 +91,8 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) { - DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); + DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, + DATA_SIZE, retaddr); DATA_TYPE ret; ret = qatomic_xchg__nocheck(haddr, val); @@ -104,7 +106,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \ ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \ { \ DATA_TYPE *haddr, ret; \ - haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \ + haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ ret = qatomic_##X(haddr, val); \ ATOMIC_MMU_CLEANUP; \ atomic_trace_rmw_post(env, addr, oi); \ @@ -135,7 +137,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \ ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \ { \ XDATA_TYPE *haddr, cmp, old, new, val = xval; \ - haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \ + haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ smp_mb(); \ cmp = qatomic_read__nocheck(haddr); \ do { \ @@ -176,7 +178,8 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE cmpv, ABI_TYPE newv, MemOpIdx oi, uintptr_t retaddr) { - DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); + DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, + DATA_SIZE, retaddr); DATA_TYPE ret; #if DATA_SIZE == 16 @@ -193,7 +196,8 @@ ABI_TYPE ATOMIC_NAME(cmpxchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE ATOMIC_NAME(xchg)(CPUArchState *env, abi_ptr addr, ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) { - DATA_TYPE *haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); + DATA_TYPE *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, + DATA_SIZE, retaddr); ABI_TYPE ret; ret = qatomic_xchg__nocheck(haddr, BSWAP(val)); @@ -207,7 +211,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \ ABI_TYPE val, MemOpIdx oi, uintptr_t retaddr) \ { \ DATA_TYPE *haddr, ret; \ - haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \ + haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ ret = qatomic_##X(haddr, BSWAP(val)); \ ATOMIC_MMU_CLEANUP; \ atomic_trace_rmw_post(env, addr, oi); \ @@ -235,7 +239,7 @@ ABI_TYPE ATOMIC_NAME(X)(CPUArchState *env, abi_ptr addr, \ ABI_TYPE xval, MemOpIdx oi, uintptr_t retaddr) \ { \ XDATA_TYPE *haddr, ldo, ldn, old, new, val =
xval; \ - haddr = atomic_mmu_lookup(env, addr, oi, DATA_SIZE, retaddr); \ + haddr = atomic_mmu_lookup(env_cpu(env), addr, oi, DATA_SIZE, retaddr); \ smp_mb(); \ ldn = qatomic_read__nocheck(haddr); \ do { \ diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index ebd174e23d..cbefea0dd6 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1936,7 +1936,7 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, * Probe for an atomic operation. Do not allow unaligned operations, * or io operations to proceed. Return the host address. */ -static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, +static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, int size, uintptr_t retaddr) { uintptr_t mmu_idx = get_mmuidx(oi); @@ -1956,7 +1956,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, /* Enforce guest required alignment. */ if (unlikely(a_bits > 0 && (addr & ((1 << a_bits) - 1)))) { /* ??? Maybe indicate atomic op to cpu_unaligned_access */ - cpu_unaligned_access(env_cpu(env), addr, MMU_DATA_STORE, + cpu_unaligned_access(cpu, addr, MMU_DATA_STORE, mmu_idx, retaddr); } @@ -1969,18 +1969,18 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, goto stop_the_world; } - index = tlb_index(env_cpu(env), mmu_idx, addr); - tlbe = tlb_entry(env_cpu(env), mmu_idx, addr); + index = tlb_index(cpu, mmu_idx, addr); + tlbe = tlb_entry(cpu, mmu_idx, addr); /* Check TLB entry and enforce page permissions. */ tlb_addr = tlb_addr_write(tlbe); if (!tlb_hit(tlb_addr, addr)) { - if (!victim_tlb_hit(env_cpu(env), mmu_idx, index, MMU_DATA_STORE, + if (!victim_tlb_hit(cpu, mmu_idx, index, MMU_DATA_STORE, addr & TARGET_PAGE_MASK)) { - tlb_fill(env_cpu(env), addr, size, + tlb_fill(cpu, addr, size, MMU_DATA_STORE, mmu_idx, retaddr); - index = tlb_index(env_cpu(env), mmu_idx, addr); - tlbe = tlb_entry(env_cpu(env), mmu_idx, addr); + index = tlb_index(cpu, mmu_idx, addr); + tlbe = tlb_entry(cpu, mmu_idx, addr); } tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK; } @@ -1992,7 +1992,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, * but addr_read will only be -1 if PAGE_READ was unset.
*/ if (unlikely(tlbe->addr_read == -1)) { - tlb_fill(env_cpu(env), addr, size, MMU_DATA_LOAD, mmu_idx, retaddr); + tlb_fill(cpu, addr, size, MMU_DATA_LOAD, mmu_idx, retaddr); /* * Since we don't support reads and writes to different * addresses, and we do have the proper page loaded for @@ -2012,10 +2012,10 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, } hostaddr = (void *)((uintptr_t)addr + tlbe->addend); - full = &env_tlb(env)->d[mmu_idx].fulltlb[index]; + full = &cpu_tlb(cpu)->d[mmu_idx].fulltlb[index]; if (unlikely(tlb_addr & TLB_NOTDIRTY)) { - notdirty_write(env_cpu(env), addr, size, full, retaddr); + notdirty_write(cpu, addr, size, full, retaddr); } if (unlikely(tlb_addr & TLB_FORCE_SLOW)) { @@ -2028,7 +2028,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, wp_flags |= BP_MEM_READ; } if (wp_flags) { - cpu_check_watchpoint(env_cpu(env), addr, size, + cpu_check_watchpoint(cpu, addr, size, full->attrs, wp_flags, retaddr); } } @@ -2036,7 +2036,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, return hostaddr; stop_the_world: - cpu_loop_exit_atomic(env_cpu(env), retaddr); + cpu_loop_exit_atomic(cpu, retaddr); } /* diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index ab48cb41e4..d2daeafbab 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -1386,7 +1386,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr, /* * Do not allow unaligned operations to proceed. Return the host address. */ -static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, +static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi, int size, uintptr_t retaddr) { MemOp mop = get_memop(oi); @@ -1395,15 +1395,15 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi, /* Enforce guest required alignment. */ if (unlikely(addr & ((1 << a_bits) - 1))) { - cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, retaddr); + cpu_loop_exit_sigbus(cpu, addr, MMU_DATA_STORE, retaddr); } /* Enforce qemu required alignment.
*/ if (unlikely(addr & (size - 1))) { - cpu_loop_exit_atomic(env_cpu(env), retaddr); + cpu_loop_exit_atomic(cpu, retaddr); } - ret = g2h(env_cpu(env), addr); + ret = g2h(cpu, addr); set_helper_retaddr(retaddr); return ret; } -- 2.41.0
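As a rough illustration of the effect on the template, this is approximately what one instantiation of atomic_template.h reduces to after this patch; the sketch below is for the 4-byte little-endian cmpxchg (the exact name follows the ATOMIC_NAME() glue) and is an approximation, not literal preprocessor output:

uint32_t cpu_atomic_cmpxchgl_le_mmu(CPUArchState *env, abi_ptr addr,
                                    uint32_t cmpv, uint32_t newv,
                                    MemOpIdx oi, uintptr_t retaddr)
{
    /* The only target-dependent step left: recover CPUState from env. */
    uint32_t *haddr = atomic_mmu_lookup(env_cpu(env), addr, oi,
                                        4, retaddr);
    uint32_t ret = qatomic_cmpxchg__nocheck(haddr, cmpv, newv);
    ATOMIC_MMU_CLEANUP;
    atomic_trace_rmw_post(env, addr, oi);
    return ret;
}

CPUArchState survives only in the helper signature and the trace call, while the shared atomic_mmu_lookup() sees CPUState alone, which is what allows it to be compiled once per mode.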
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com, philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 07/11] accel/tcg: Use CPUState in atomicity helpers
Date: Tue, 12 Sep 2023 17:34:24 +0200
Message-ID: <20230912153428.17816-8-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

Makes ldst_atomicity.c.inc almost target-independent, with the exception of TARGET_PAGE_MASK, which will be addressed in a future patch.

Signed-off-by: Anton Johansson
---
accel/tcg/cputlb.c | 20 ++++---- accel/tcg/user-exec.c | 16 +++---- accel/tcg/ldst_atomicity.c.inc | 88 +++++++++++++++++----------------- 3 files changed, 62 insertions(+), 62 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index cbefea0dd6..5439ca1cf7 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2209,7 +2209,7 @@ static uint64_t do_ld_whole_be8(CPUState *cpu, uintptr_t ra, MMULookupPageData *p, uint64_t ret_be) { int o = p->addr & 7; - uint64_t x = load_atomic8_or_exit(cpu->env_ptr, ra, p->haddr - o); + uint64_t x = load_atomic8_or_exit(cpu, ra, p->haddr - o); x = cpu_to_be64(x); x <<= o * 8; @@ -2229,7 +2229,7 @@ static Int128 do_ld_whole_be16(CPUState *cpu, uintptr_t ra, MMULookupPageData *p, uint64_t ret_be) { int o = p->addr & 15; - Int128 x, y = load_atomic16_or_exit(cpu->env_ptr, ra, p->haddr - o); + Int128 x, y = load_atomic16_or_exit(cpu, ra, p->haddr - o); int size = p->size; if (!HOST_BIG_ENDIAN) { @@ -2373,7 +2373,7 @@ static uint16_t do_ld_2(CPUState *cpu, MMULookupPageData *p, int mmu_idx, } } else { /* Perform the load host endian, then swap if necessary. */ - ret = load_atom_2(cpu->env_ptr, ra, p->haddr, memop); + ret = load_atom_2(cpu, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap16(ret); } @@ -2394,7 +2394,7 @@ static uint32_t do_ld_4(CPUState *cpu, MMULookupPageData *p, int mmu_idx, } } else { /* Perform the load host endian. */ - ret = load_atom_4(cpu->env_ptr, ra, p->haddr, memop); + ret = load_atom_4(cpu, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap32(ret); } @@ -2415,7 +2415,7 @@ static uint64_t do_ld_8(CPUState *cpu, MMULookupPageData *p, int mmu_idx, } } else { /* Perform the load host endian. */ - ret = load_atom_8(cpu->env_ptr, ra, p->haddr, memop); + ret = load_atom_8(cpu, ra, p->haddr, memop); if (memop & MO_BSWAP) { ret = bswap64(ret); } @@ -2578,7 +2578,7 @@ static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr, } } else { /* Perform the load host endian.
*/ - ret =3D load_atom_16(cpu->env_ptr, ra, l.page[0].haddr, l.memo= p); + ret =3D load_atom_16(cpu, ra, l.page[0].haddr, l.memop); if (l.memop & MO_BSWAP) { ret =3D bswap128(ret); } @@ -2893,7 +2893,7 @@ static void do_st_2(CPUState *cpu, MMULookupPageData = *p, uint16_t val, if (memop & MO_BSWAP) { val =3D bswap16(val); } - store_atom_2(cpu->env_ptr, ra, p->haddr, memop, val); + store_atom_2(cpu, ra, p->haddr, memop, val); } } @@ -2913,7 +2913,7 @@ static void do_st_4(CPUState *cpu, MMULookupPageData = *p, uint32_t val, if (memop & MO_BSWAP) { val =3D bswap32(val); } - store_atom_4(cpu->env_ptr, ra, p->haddr, memop, val); + store_atom_4(cpu, ra, p->haddr, memop, val); } } @@ -2933,7 +2933,7 @@ static void do_st_8(CPUState *cpu, MMULookupPageData = *p, uint64_t val, if (memop & MO_BSWAP) { val =3D bswap64(val); } - store_atom_8(cpu->env_ptr, ra, p->haddr, memop, val); + store_atom_8(cpu, ra, p->haddr, memop, val); } } @@ -3064,7 +3064,7 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, In= t128 val, if (l.memop & MO_BSWAP) { val =3D bswap128(val); } - store_atom_16(cpu->env_ptr, ra, l.page[0].haddr, l.memop, val); + store_atom_16(cpu, ra, l.page[0].haddr, l.memop, val); } return; } diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index d2daeafbab..f9f5cd1770 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -1002,7 +1002,7 @@ static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr= addr, tcg_debug_assert((mop & MO_SIZE) =3D=3D MO_16); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); haddr =3D cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); - ret =3D load_atom_2(env, ra, haddr, mop); + ret =3D load_atom_2(env_cpu(env), ra, haddr, mop); clear_helper_retaddr(); if (mop & MO_BSWAP) { @@ -1040,7 +1040,7 @@ static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr= addr, tcg_debug_assert((mop & MO_SIZE) =3D=3D MO_32); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); haddr =3D cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); - ret =3D load_atom_4(env, ra, haddr, mop); + ret =3D load_atom_4(env_cpu(env), ra, haddr, mop); clear_helper_retaddr(); if (mop & MO_BSWAP) { @@ -1078,7 +1078,7 @@ static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr= addr, tcg_debug_assert((mop & MO_SIZE) =3D=3D MO_64); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); haddr =3D cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); - ret =3D load_atom_8(env, ra, haddr, mop); + ret =3D load_atom_8(env_cpu(env), ra, haddr, mop); clear_helper_retaddr(); if (mop & MO_BSWAP) { @@ -1110,7 +1110,7 @@ static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr = addr, tcg_debug_assert((mop & MO_SIZE) =3D=3D MO_128); cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD); haddr =3D cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD); - ret =3D load_atom_16(env, ra, haddr, mop); + ret =3D load_atom_16(env_cpu(env), ra, haddr, mop); clear_helper_retaddr(); if (mop & MO_BSWAP) { @@ -1175,7 +1175,7 @@ static void do_st2_mmu(CPUArchState *env, abi_ptr add= r, uint16_t val, if (mop & MO_BSWAP) { val =3D bswap16(val); } - store_atom_2(env, ra, haddr, mop, val); + store_atom_2(env_cpu(env), ra, haddr, mop, val); clear_helper_retaddr(); } @@ -1204,7 +1204,7 @@ static void do_st4_mmu(CPUArchState *env, abi_ptr add= r, uint32_t val, if (mop & MO_BSWAP) { val =3D bswap32(val); } - store_atom_4(env, ra, haddr, mop, val); + store_atom_4(env_cpu(env), ra, haddr, mop, val); clear_helper_retaddr(); } @@ -1233,7 +1233,7 @@ static void do_st8_mmu(CPUArchState *env, abi_ptr add= r, uint64_t val, if (mop & MO_BSWAP) { val =3D bswap64(val); } - store_atom_8(env, ra, haddr, 
mop, val); + store_atom_8(env_cpu(env), ra, haddr, mop, val); clear_helper_retaddr(); } @@ -1262,7 +1262,7 @@ static void do_st16_mmu(CPUArchState *env, abi_ptr ad= dr, Int128 val, if (mop & MO_BSWAP) { val =3D bswap128(val); } - store_atom_16(env, ra, haddr, mop, val); + store_atom_16(env_cpu(env), ra, haddr, mop, val); clear_helper_retaddr(); } diff --git a/accel/tcg/ldst_atomicity.c.inc b/accel/tcg/ldst_atomicity.c.inc index 1b793e6935..1cf5b92166 100644 --- a/accel/tcg/ldst_atomicity.c.inc +++ b/accel/tcg/ldst_atomicity.c.inc @@ -26,7 +26,7 @@ * If the operation must be split into two operations to be * examined separately for atomicity, return -lg2. */ -static int required_atomicity(CPUArchState *env, uintptr_t p, MemOp memop) +static int required_atomicity(CPUState *cpu, uintptr_t p, MemOp memop) { MemOp atom =3D memop & MO_ATOM_MASK; MemOp size =3D memop & MO_SIZE; @@ -93,7 +93,7 @@ static int required_atomicity(CPUArchState *env, uintptr_= t p, MemOp memop) * host atomicity in order to avoid racing. This reduction * avoids looping with cpu_loop_exit_atomic. */ - if (cpu_in_serial_context(env_cpu(env))) { + if (cpu_in_serial_context(cpu)) { return MO_8; } return atmax; @@ -139,14 +139,14 @@ static inline uint64_t load_atomic8(void *pv) /** * load_atomic8_or_exit: - * @env: cpu context + * @cpu: generic cpu state * @ra: host unwind address * @pv: host address * * Atomically load 8 aligned bytes from @pv. * If this is not possible, longjmp out to restart serially. */ -static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void= *pv) +static uint64_t load_atomic8_or_exit(CPUState *cpu, uintptr_t ra, void *pv) { if (HAVE_al8) { return load_atomic8(pv); @@ -168,19 +168,19 @@ static uint64_t load_atomic8_or_exit(CPUArchState *en= v, uintptr_t ra, void *pv) #endif /* Ultimate fallback: re-execute in serial context. */ - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } /** * load_atomic16_or_exit: - * @env: cpu context + * @cpu: generic cpu state * @ra: host unwind address * @pv: host address * * Atomically load 16 aligned bytes from @pv. * If this is not possible, longjmp out to restart serially. */ -static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void = *pv) +static Int128 load_atomic16_or_exit(CPUState *cpu, uintptr_t ra, void *pv) { Int128 *p =3D __builtin_assume_aligned(pv, 16); @@ -212,7 +212,7 @@ static Int128 load_atomic16_or_exit(CPUArchState *env, = uintptr_t ra, void *pv) } /* Ultimate fallback: re-execute in serial context. */ - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } /** @@ -263,7 +263,7 @@ static uint64_t load_atom_extract_al8x2(void *pv) /** * load_atom_extract_al8_or_exit: - * @env: cpu context + * @cpu: generic cpu state * @ra: host unwind address * @pv: host address * @s: object size in bytes, @s <=3D 4. @@ -273,7 +273,7 @@ static uint64_t load_atom_extract_al8x2(void *pv) * 8-byte load and extract. * The value is returned in the low bits of a uint32_t. */ -static uint32_t load_atom_extract_al8_or_exit(CPUArchState *env, uintptr_t= ra, +static uint32_t load_atom_extract_al8_or_exit(CPUState *cpu, uintptr_t ra, void *pv, int s) { uintptr_t pi =3D (uintptr_t)pv; @@ -281,12 +281,12 @@ static uint32_t load_atom_extract_al8_or_exit(CPUArch= State *env, uintptr_t ra, int shr =3D (HOST_BIG_ENDIAN ? 
8 - s - o : o) * 8; pv =3D (void *)(pi & ~7); - return load_atomic8_or_exit(env, ra, pv) >> shr; + return load_atomic8_or_exit(cpu, ra, pv) >> shr; } /** * load_atom_extract_al16_or_exit: - * @env: cpu context + * @cpu: generic cpu state * @ra: host unwind address * @p: host address * @s: object size in bytes, @s <=3D 8. @@ -299,7 +299,7 @@ static uint32_t load_atom_extract_al8_or_exit(CPUArchSt= ate *env, uintptr_t ra, * * If this is not possible, longjmp out to restart serially. */ -static uint64_t load_atom_extract_al16_or_exit(CPUArchState *env, uintptr_= t ra, +static uint64_t load_atom_extract_al16_or_exit(CPUState *cpu, uintptr_t ra, void *pv, int s) { uintptr_t pi =3D (uintptr_t)pv; @@ -312,7 +312,7 @@ static uint64_t load_atom_extract_al16_or_exit(CPUArchS= tate *env, uintptr_t ra, * Provoke SIGBUS if possible otherwise. */ pv =3D (void *)(pi & ~7); - r =3D load_atomic16_or_exit(env, ra, pv); + r =3D load_atomic16_or_exit(cpu, ra, pv); r =3D int128_urshift(r, shr); return int128_getlo(r); @@ -394,7 +394,7 @@ static inline uint64_t load_atom_8_by_8_or_4(void *pv) * * Load 2 bytes from @p, honoring the atomicity of @memop. */ -static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra, +static uint16_t load_atom_2(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop) { uintptr_t pi =3D (uintptr_t)pv; @@ -410,7 +410,7 @@ static uint16_t load_atom_2(CPUArchState *env, uintptr_= t ra, } } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); switch (atmax) { case MO_8: return lduw_he_p(pv); @@ -421,9 +421,9 @@ static uint16_t load_atom_2(CPUArchState *env, uintptr_= t ra, return load_atomic4(pv - 1) >> 8; } if ((pi & 15) !=3D 7) { - return load_atom_extract_al8_or_exit(env, ra, pv, 2); + return load_atom_extract_al8_or_exit(cpu, ra, pv, 2); } - return load_atom_extract_al16_or_exit(env, ra, pv, 2); + return load_atom_extract_al16_or_exit(cpu, ra, pv, 2); default: g_assert_not_reached(); } @@ -436,7 +436,7 @@ static uint16_t load_atom_2(CPUArchState *env, uintptr_= t ra, * * Load 4 bytes from @p, honoring the atomicity of @memop. */ -static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra, +static uint32_t load_atom_4(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop) { uintptr_t pi =3D (uintptr_t)pv; @@ -452,7 +452,7 @@ static uint32_t load_atom_4(CPUArchState *env, uintptr_= t ra, } } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); switch (atmax) { case MO_8: case MO_16: @@ -466,9 +466,9 @@ static uint32_t load_atom_4(CPUArchState *env, uintptr_= t ra, return load_atom_extract_al4x2(pv); case MO_32: if (!(pi & 4)) { - return load_atom_extract_al8_or_exit(env, ra, pv, 4); + return load_atom_extract_al8_or_exit(cpu, ra, pv, 4); } - return load_atom_extract_al16_or_exit(env, ra, pv, 4); + return load_atom_extract_al16_or_exit(cpu, ra, pv, 4); default: g_assert_not_reached(); } @@ -481,7 +481,7 @@ static uint32_t load_atom_4(CPUArchState *env, uintptr_= t ra, * * Load 8 bytes from @p, honoring the atomicity of @memop. 
*/ -static uint64_t load_atom_8(CPUArchState *env, uintptr_t ra, +static uint64_t load_atom_8(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop) { uintptr_t pi =3D (uintptr_t)pv; @@ -498,12 +498,12 @@ static uint64_t load_atom_8(CPUArchState *env, uintpt= r_t ra, return load_atom_extract_al16_or_al8(pv, 8); } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); if (atmax =3D=3D MO_64) { if (!HAVE_al8 && (pi & 7) =3D=3D 0) { - load_atomic8_or_exit(env, ra, pv); + load_atomic8_or_exit(cpu, ra, pv); } - return load_atom_extract_al16_or_exit(env, ra, pv, 8); + return load_atom_extract_al16_or_exit(cpu, ra, pv, 8); } if (HAVE_al8_fast) { return load_atom_extract_al8x2(pv); @@ -519,7 +519,7 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_= t ra, if (HAVE_al8) { return load_atom_extract_al8x2(pv); } - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); default: g_assert_not_reached(); } @@ -532,7 +532,7 @@ static uint64_t load_atom_8(CPUArchState *env, uintptr_= t ra, * * Load 16 bytes from @p, honoring the atomicity of @memop. */ -static Int128 load_atom_16(CPUArchState *env, uintptr_t ra, +static Int128 load_atom_16(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop) { uintptr_t pi =3D (uintptr_t)pv; @@ -548,7 +548,7 @@ static Int128 load_atom_16(CPUArchState *env, uintptr_t= ra, return atomic16_read_ro(pv); } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); switch (atmax) { case MO_8: memcpy(&r, pv, 16); @@ -563,20 +563,20 @@ static Int128 load_atom_16(CPUArchState *env, uintptr= _t ra, break; case MO_64: if (!HAVE_al8) { - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } a =3D load_atomic8(pv); b =3D load_atomic8(pv + 8); break; case -MO_64: if (!HAVE_al8) { - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } a =3D load_atom_extract_al8x2(pv); b =3D load_atom_extract_al8x2(pv + 8); break; case MO_128: - return load_atomic16_or_exit(env, ra, pv); + return load_atomic16_or_exit(cpu, ra, pv); default: g_assert_not_reached(); } @@ -857,7 +857,7 @@ static uint64_t store_whole_le16(void *pv, int size, In= t128 val_le) * * Store 2 bytes to @p, honoring the atomicity of @memop. */ -static void store_atom_2(CPUArchState *env, uintptr_t ra, +static void store_atom_2(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop, uint16_t val) { uintptr_t pi =3D (uintptr_t)pv; @@ -868,7 +868,7 @@ static void store_atom_2(CPUArchState *env, uintptr_t r= a, return; } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); if (atmax =3D=3D MO_8) { stw_he_p(pv, val); return; @@ -897,7 +897,7 @@ static void store_atom_2(CPUArchState *env, uintptr_t r= a, g_assert_not_reached(); } - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } /** @@ -908,7 +908,7 @@ static void store_atom_2(CPUArchState *env, uintptr_t r= a, * * Store 4 bytes to @p, honoring the atomicity of @memop. 
*/ -static void store_atom_4(CPUArchState *env, uintptr_t ra, +static void store_atom_4(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop, uint32_t val) { uintptr_t pi =3D (uintptr_t)pv; @@ -919,7 +919,7 @@ static void store_atom_4(CPUArchState *env, uintptr_t r= a, return; } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); switch (atmax) { case MO_8: stl_he_p(pv, val); @@ -961,7 +961,7 @@ static void store_atom_4(CPUArchState *env, uintptr_t r= a, return; } } - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); default: g_assert_not_reached(); } @@ -975,7 +975,7 @@ static void store_atom_4(CPUArchState *env, uintptr_t r= a, * * Store 8 bytes to @p, honoring the atomicity of @memop. */ -static void store_atom_8(CPUArchState *env, uintptr_t ra, +static void store_atom_8(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop, uint64_t val) { uintptr_t pi =3D (uintptr_t)pv; @@ -986,7 +986,7 @@ static void store_atom_8(CPUArchState *env, uintptr_t r= a, return; } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); switch (atmax) { case MO_8: stq_he_p(pv, val); @@ -1029,7 +1029,7 @@ static void store_atom_8(CPUArchState *env, uintptr_t= ra, default: g_assert_not_reached(); } - cpu_loop_exit_atomic(env_cpu(env), ra); + cpu_loop_exit_atomic(cpu, ra); } /** @@ -1040,7 +1040,7 @@ static void store_atom_8(CPUArchState *env, uintptr_t= ra, * * Store 16 bytes to @p, honoring the atomicity of @memop. */ -static void store_atom_16(CPUArchState *env, uintptr_t ra, +static void store_atom_16(CPUState *cpu, uintptr_t ra, void *pv, MemOp memop, Int128 val) { uintptr_t pi =3D (uintptr_t)pv; @@ -1052,7 +1052,7 @@ static void store_atom_16(CPUArchState *env, uintptr_= t ra, return; } - atmax =3D required_atomicity(env, pi, memop); + atmax =3D required_atomicity(cpu, pi, memop); a =3D HOST_BIG_ENDIAN ? int128_gethi(val) : int128_getlo(val); b =3D HOST_BIG_ENDIAN ? 
@@ -1111,5 +1111,5 @@ static void store_atom_16(CPUArchState *env, uintptr_t ra,
     default:
         g_assert_not_reached();
     }
-    cpu_loop_exit_atomic(env_cpu(env), ra);
+    cpu_loop_exit_atomic(cpu, ra);
 }
-- 
2.41.0

From nobody Fri May 10 10:46:34 2024
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com, philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 08/11] include/exec: Remove env_tlb()
Date: Tue, 12 Sep 2023 17:34:25 +0200
Message-ID: <20230912153428.17816-9-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

The function is no longer used to access the TLB, having been replaced
by cpu_tlb().

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
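For reference, call sites migrate as follows. The cpu_tlb() body shown
here is an assumption, written by analogy with cpu_neg() in the context
below; it is not part of this diff:

    /* Before: needed the per-target CPUArchState. */
    CPUTLB *tlb = env_tlb(env);

    /* After: only the generic CPUState is required. */
    CPUTLB *tlb = cpu_tlb(cpu);

    /* Presumed shape of cpu_tlb(), mirroring cpu_neg(): */
    static inline CPUTLB *cpu_tlb(CPUState *cpu)
    {
        return &cpu_neg(cpu)->tlb;
    }
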
 include/exec/cpu-all.h | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index b03218d7d3..ee21f94969 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -484,17 +484,6 @@ static inline CPUNegativeOffsetState *cpu_neg(CPUState *cpu)
     return &arch_cpu->neg;
 }
 
-/**
- * env_tlb(env)
- * @env: The architecture environment
- *
- * Return the CPUTLB state associated with the environment.
- */
-static inline CPUTLB *env_tlb(CPUArchState *env)
-{
-    return &env_neg(env)->tlb;
-}
-
 /**
  * cpu_tlb(cpu)
  * @cpu: The generic CPUState
-- 
2.41.0

From nobody Fri May 10 10:46:34 2024
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com, philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 09/11] tcg: Update env_tlb() comments
Date: Tue, 12 Sep 2023 17:34:26 +0200
Message-ID: <20230912153428.17816-10-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
 tcg/aarch64/tcg-target.c.inc | 2 +-
 tcg/arm/tcg-target.c.inc     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 0931a69448..d2f5ca5946 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -1676,7 +1676,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
     mask_type = (s->page_bits + s->tlb_dyn_max_bits > 32
                  ? TCG_TYPE_I64 : TCG_TYPE_I32);
 
-    /* Load env_tlb(env)->f[mmu_idx].{mask,table} into {tmp0,tmp1}. */
+    /* Load cpu_tlb(cpu)->f[mmu_idx].{mask,table} into {tmp0,tmp1}. */
     QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0);
     QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
     tcg_out_insn(s, 3314, LDP, TCG_REG_TMP0, TCG_REG_TMP1, TCG_AREG0,
diff --git a/tcg/arm/tcg-target.c.inc b/tcg/arm/tcg-target.c.inc
index acb5f23b54..9b96f5e7b5 100644
--- a/tcg/arm/tcg-target.c.inc
+++ b/tcg/arm/tcg-target.c.inc
@@ -1420,7 +1420,7 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
     ldst->addrlo_reg = addrlo;
     ldst->addrhi_reg = addrhi;
 
-    /* Load env_tlb(env)->f[mmu_idx].{mask,table} into {r0,r1}. */
+    /* Load cpu_tlb(cpu)->f[mmu_idx].{mask,table} into {r0,r1}. */
     QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, mask) != 0);
     QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
     tcg_out_ldrd_8(s, COND_AL, TCG_REG_R0, TCG_AREG0, fast_off);
-- 
2.41.0

From nobody Fri May 10 10:46:34 2024
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com, philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 10/11] accel/tcg: Unify user and softmmu do_[st|ld]*_mmu()
Date: Tue, 12 Sep 2023 17:34:27 +0200
Message-ID: <20230912153428.17816-11-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

The prototype of do_[st|ld]*_mmu() is unified between system mode and
user mode, allowing a large chunk of the helper_[st|ld]*() and
cpu_[st|ld]*() functions to be expressed in the same manner in both
modes. These functions will be moved to ldst_common.c.inc in a
following commit.

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
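After this patch the wrappers read the same in both modes. For example,
the 16-bit load path, taken from the user-mode half of the diff below:

    static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
                               uintptr_t ra, MMUAccessType access_type);

    tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
                                     MemOpIdx oi, uintptr_t ra)
    {
        /* The wrapper asserts the size and converts env exactly once. */
        tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
        return do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
    }
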
 accel/tcg/cputlb.c    |  16 ++--
 accel/tcg/user-exec.c | 183 ++++++++++++++++++++++++------------------
 2 files changed, 117 insertions(+), 82 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5439ca1cf7..ca69c369cc 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2937,18 +2937,24 @@ static void do_st_8(CPUState *cpu, MMULookupPageData *p, uint64_t val,
     }
 }
 
-void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t ra)
+static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     MMULookupLocals l;
     bool crosspage;
 
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    crosspage = mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_STORE, &l);
+    crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l);
     tcg_debug_assert(!crosspage);
 
-    do_st_1(env_cpu(env), &l.page[0], val, l.mmu_idx, ra);
+    do_st_1(cpu, &l.page[0], val, l.mmu_idx, ra);
+}
+
+void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index f9f5cd1770..a6593d0e0f 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -941,7 +941,7 @@ void page_reset_target_data(target_ulong start, target_ulong last) { }
 
 /* The softmmu versions of these helpers are in cputlb.c. */
 
-static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
+static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
                             MemOp mop, uintptr_t ra, MMUAccessType type)
 {
     int a_bits = get_alignment_bits(mop);
     void *ret;
 
     /* Enforce guest required alignment. */
     if (unlikely(addr & ((1 << a_bits) - 1))) {
-        cpu_loop_exit_sigbus(env_cpu(env), addr, type, ra);
+        cpu_loop_exit_sigbus(cpu, addr, type, ra);
     }
 
-    ret = g2h(env_cpu(env), addr);
+    ret = g2h(cpu, addr);
     set_helper_retaddr(ra);
     return ret;
 }
 
 #include "ldst_atomicity.c.inc"
 
-static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
-                          MemOp mop, uintptr_t ra)
+static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                          uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint8_t ret;
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_8);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
+    haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, access_type);
     ret = ldub_p(haddr);
     clear_helper_retaddr();
     return ret;
@@ -976,33 +975,38 @@ static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
 tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld1_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    return do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    return (int8_t)do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
                     MemOpIdx oi, uintptr_t ra)
 {
-    uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra);
+    uint8_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
-                           MemOp mop, uintptr_t ra)
+static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                           uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint16_t ret;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_16);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_2(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
+    ret = load_atom_2(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1014,33 +1018,38 @@ static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
 tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld2_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    return do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return (int16_t)do_ld2_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    return (int16_t)do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
                      MemOpIdx oi, uintptr_t ra)
 {
-    uint16_t ret = do_ld2_mmu(env, addr, get_memop(oi), ra);
+    uint16_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
-                           MemOp mop, uintptr_t ra)
+static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                           uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint32_t ret;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_32);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_4(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
+    ret = load_atom_4(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1052,33 +1061,38 @@ static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
 tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld4_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    return do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return (int32_t)do_ld4_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    return (int32_t)do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
                      MemOpIdx oi, uintptr_t ra)
 {
-    uint32_t ret = do_ld4_mmu(env, addr, get_memop(oi), ra);
+    uint32_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
-                           MemOp mop, uintptr_t ra)
+static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                           uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint64_t ret;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_64);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_8(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
+    ret = load_atom_8(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1090,27 +1104,32 @@ static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
 uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
                         MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld8_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    return do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
                      MemOpIdx oi, uintptr_t ra)
 {
-    uint64_t ret = do_ld8_mmu(env, addr, get_memop(oi), ra);
+    uint64_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
-                          MemOp mop, uintptr_t ra)
+static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
+                          MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
     Int128 ret;
+    MemOp mop = get_memop(oi);
 
     tcg_debug_assert((mop & MO_SIZE) == MO_128);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_16(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_LOAD);
+    ret = load_atom_16(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1122,7 +1141,7 @@ static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
 Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
                        MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld16_mmu(env, addr, get_memop(oi), ra);
+    return do_ld16_mmu(env_cpu(env), addr, get_memop(oi), ra);
 }
 
 Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
@@ -1133,19 +1152,18 @@ Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
 Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
                     MemOpIdx oi, uintptr_t ra)
 {
-    Int128 ret = do_ld16_mmu(env, addr, get_memop(oi), ra);
+    Int128 ret = do_ld16_mmu(env_cpu(env), addr, get_memop(oi), ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_8);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, MMU_DATA_STORE);
     stb_p(haddr, val);
     clear_helper_retaddr();
 }
@@ -1153,134 +1171,145 @@ static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
 void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st1_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st1_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st2_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_16);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap16(val);
     }
-    store_atom_2(env_cpu(env), ra, haddr, mop, val);
+    store_atom_2(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st2_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    do_st2_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st2_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    do_st2_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st4_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_32);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap32(val);
     }
-    store_atom_4(env_cpu(env), ra, haddr, mop, val);
+    store_atom_4(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st4_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    do_st4_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st4_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    do_st4_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st8_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_64);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap64(val);
     }
-    store_atom_8(env_cpu(env), ra, haddr, mop, val);
+    store_atom_8(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st8_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    do_st8_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st8_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    do_st8_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
-                        MemOp mop, uintptr_t ra)
+static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
+                        MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOpIdx mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_128);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap128(val);
     }
-    store_atom_16(env_cpu(env), ra, haddr, mop, val);
+    store_atom_16(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
                      MemOpIdx oi, uintptr_t ra)
 {
-    do_st16_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    do_st16_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
 {
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
     helper_st16_mmu(env, addr, val, oi, GETPC());
 }
 
 void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
                   MemOpIdx oi, uintptr_t ra)
 {
-    do_st16_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    do_st16_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
@@ -1330,7 +1359,7 @@ uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint8_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
     ret = ldub_p(haddr);
     clear_helper_retaddr();
     return ret;
@@ -1342,7 +1371,7 @@ uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint16_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
     ret = lduw_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
@@ -1357,7 +1386,7 @@ uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint32_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
     ret = ldl_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
@@ -1372,7 +1401,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint64_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     ret = ldq_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
-- 
2.41.0
From nobody Fri May 10 10:46:34 2024
From: Anton Johansson <anjo@rev.ng>
To: qemu-devel@nongnu.org
Cc: ale@rev.ng, richard.henderson@linaro.org, pbonzini@redhat.com, philmd@linaro.org, peter.maydell@linaro.org
Subject: [PATCH 11/11] accel/tcg: move ld/st helpers to ldst_common.c.inc
Date: Tue, 12 Sep 2023 17:34:28 +0200
Message-ID: <20230912153428.17816-12-anjo@rev.ng>
In-Reply-To: <20230912153428.17816-1-anjo@rev.ng>
References: <20230912153428.17816-1-anjo@rev.ng>

A large chunk of the ld/st functions is moved from cputlb.c and
user-exec.c to ldst_common.c.inc, as their implementation is the same
in both modes. Eventually, ldst_common.c.inc could be compiled into a
separate target-specific compilation unit and be linked in with the
targets. This keeps CPUArchState usage out of cputlb.c (CPUArchState
is primarily used to access the MMU index in these functions).

Signed-off-by: Anton Johansson <anjo@rev.ng>
---
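The wrappers being moved all follow one shape: assert the access size,
convert CPUArchState to CPUState exactly once, and call the mode-specific
do_*_mmu() implementation. For example, the 64-bit load wrapper (taken
from the code this patch relocates):

    uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
                         MemOpIdx oi, uintptr_t ra)
    {
        uint64_t ret;

        tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
        ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
        qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
        return ret;
    }
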
Signed-off-by: Anton Johansson --- accel/tcg/cputlb.c | 214 ---------------------------------- accel/tcg/user-exec.c | 193 ------------------------------- accel/tcg/ldst_common.c.inc | 225 ++++++++++++++++++++++++++++++++++++ 3 files changed, 225 insertions(+), 407 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index ca69c369cc..cf4f50b188 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -2436,13 +2436,6 @@ static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr,= MemOpIdx oi, return do_ld_1(cpu, &l.page[0], l.mmu_idx, access_type, ra); } =20 -tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr, - MemOpIdx oi, uintptr_t retaddr) -{ - tcg_debug_assert((get_memop(oi) & MO_SIZE) =3D=3D MO_8); - return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); -} - static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType access_type) { @@ -2468,13 +2461,6 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr= , MemOpIdx oi, return ret; } =20 -tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr, - MemOpIdx oi, uintptr_t retaddr) -{ - tcg_debug_assert((get_memop(oi) & MO_SIZE) =3D=3D MO_16); - return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); -} - static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType access_type) { @@ -2496,13 +2482,6 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr= , MemOpIdx oi, return ret; } =20 -tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr, - MemOpIdx oi, uintptr_t retaddr) -{ - tcg_debug_assert((get_memop(oi) & MO_SIZE) =3D=3D MO_32); - return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); -} - static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra, MMUAccessType access_type) { @@ -2524,36 +2503,6 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr= , MemOpIdx oi, return ret; } =20 -uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr, - MemOpIdx oi, uintptr_t retaddr) -{ - tcg_debug_assert((get_memop(oi) & MO_SIZE) =3D=3D MO_64); - return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD); -} - -/* - * Provide signed versions of the load routines as well. We can of course - * avoid this for 64-bit data, or for 32-bit data on 32-bit host. - */ - -tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr); -} - -tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr); -} - -tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr, - MemOpIdx oi, uintptr_t retaddr) -{ - return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr); -} - static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi, uintptr_t ra) { @@ -2619,81 +2568,6 @@ static Int128 do_ld16_mmu(CPUState *cpu, vaddr addr, return ret; } =20 -Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr, - uint32_t oi, uintptr_t retaddr) -{ - tcg_debug_assert((get_memop(oi) & MO_SIZE) =3D=3D MO_128); - return do_ld16_mmu(env_cpu(env), addr, oi, retaddr); -} - -Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi) -{ - return helper_ld16_mmu(env, addr, oi, GETPC()); -} - -/* - * Load helpers for cpu_ldst.h. 
- */
-
-static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
-{
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-}
-
-uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
-{
-    uint8_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
-    ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    plugin_load_cb(env, addr, oi);
-    return ret;
-}
-
-uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
-                     MemOpIdx oi, uintptr_t ra)
-{
-    uint16_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    plugin_load_cb(env, addr, oi);
-    return ret;
-}
-
-uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
-                     MemOpIdx oi, uintptr_t ra)
-{
-    uint32_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    plugin_load_cb(env, addr, oi);
-    return ret;
-}
-
-uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
-                     MemOpIdx oi, uintptr_t ra)
-{
-    uint64_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
-    ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    plugin_load_cb(env, addr, oi);
-    return ret;
-}
-
-Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    Int128 ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
-    ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
-    plugin_load_cb(env, addr, oi);
-    return ret;
-}
-
 /*
  * Store Helpers
  */
@@ -2950,13 +2824,6 @@ static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
     do_st_1(cpu, &l.page[0], val, l.mmu_idx, ra);
 }
 
-void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
-    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
-}
-
 static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
                        MemOpIdx oi, uintptr_t ra)
 {
@@ -2980,13 +2847,6 @@ static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
     do_st_1(cpu, &l.page[1], b, l.mmu_idx, ra);
 }
 
-void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
-}
-
 static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
                        MemOpIdx oi, uintptr_t ra)
 {
@@ -3008,13 +2868,6 @@ static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
     (void) do_st_leN(cpu, &l.page[1], val, l.mmu_idx, l.memop, ra);
 }
 
-void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
-}
-
 static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
                        MemOpIdx oi, uintptr_t ra)
 {
@@ -3036,13 +2889,6 @@ static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
     (void) do_st_leN(cpu, &l.page[1], val, l.mmu_idx, l.memop, ra);
 }
 
-void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
-                    MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
-    do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
-}
-
 static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
                         MemOpIdx oi, uintptr_t ra)
 {
@@ -3105,66 +2951,6 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
     }
 }
 
-void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
-                     MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
-    do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
-}
-
-void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
-{
-    helper_st16_mmu(env, addr, val, oi, GETPC());
-}
-
-/*
- * Store Helpers for cpu_ldst.h
- */
-
-static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
-{
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
-void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
-                 MemOpIdx oi, uintptr_t retaddr)
-{
-    helper_stb_mmu(env, addr, val, oi, retaddr);
-    plugin_store_cb(env, addr, oi);
-}
-
-void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
-                 MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
-    plugin_store_cb(env, addr, oi);
-}
-
-void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
-                 MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
-    plugin_store_cb(env, addr, oi);
-}
-
-void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
-                 MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
-    do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
-    plugin_store_cb(env, addr, oi);
-}
-
-void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
-                  MemOpIdx oi, uintptr_t retaddr)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
-    do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
-    plugin_store_cb(env, addr, oi);
-}
-
 #include "ldst_common.c.inc"
 
 /*
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index a6593d0e0f..17f9aff0cf 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -972,31 +972,6 @@ static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
     return ret;
 }
 
-tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
-                                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
-    return do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-}
-
-tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
-                                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
-    return (int8_t)do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-}
-
-uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    uint8_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
-    ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    return ret;
-}
-
 static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
                            uintptr_t ra, MMUAccessType access_type)
 {
@@ -1015,31 +990,6 @@ static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
     return ret;
 }
 
-tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
-                                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    return do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-}
-
-tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
-                                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    return (int16_t)do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-}
-
-uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
-                     MemOpIdx oi, uintptr_t ra)
-{
-    uint16_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    return ret;
-}
-
 static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
                            uintptr_t ra, MMUAccessType access_type)
 {
@@ -1058,31 +1008,6 @@ static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
     return ret;
 }
 
-tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
-                                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    return do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-}
-
-tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
-                                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    return (int32_t)do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-}
-
-uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
-                     MemOpIdx oi, uintptr_t ra)
-{
-    uint32_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    return ret;
-}
-
 static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
                            uintptr_t ra, MMUAccessType access_type)
 {
@@ -1101,24 +1026,6 @@ static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
     return ret;
 }
 
-uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
-                        MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
-    return do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-}
-
-uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
-                     MemOpIdx oi, uintptr_t ra)
-{
-    uint64_t ret;
-
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
-    ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    return ret;
-}
-
 static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
                           MemOpIdx oi, uintptr_t ra)
 {
@@ -1138,25 +1045,6 @@ static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
     return ret;
 }
 
-Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
-                       MemOpIdx oi, uintptr_t ra)
-{
-    return do_ld16_mmu(env_cpu(env), addr, get_memop(oi), ra);
-}
-
-Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
-{
-    return helper_ld16_mmu(env, addr, oi, GETPC());
-}
-
-Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    Int128 ret = do_ld16_mmu(env_cpu(env), addr, get_memop(oi), ra);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    return ret;
-}
-
 static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
                        MemOpIdx oi, uintptr_t ra)
 {
@@ -1168,21 +1056,6 @@ static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
     clear_helper_retaddr();
 }
 
-void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
-    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
-}
-
-void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
-                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
-    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
 static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
                        MemOpIdx oi, uintptr_t ra)
 {
@@ -1199,21 +1072,6 @@ static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
     clear_helper_retaddr();
 }
 
-void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    do_st2_mmu(env_cpu(env), addr, val, oi, ra);
-}
-
-void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
-                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
-    do_st2_mmu(env_cpu(env), addr, val, oi, ra);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
 static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
                        MemOpIdx oi, uintptr_t ra)
 {
@@ -1230,21 +1088,6 @@ static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
     clear_helper_retaddr();
 }
 
-void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    do_st4_mmu(env_cpu(env), addr, val, oi, ra);
-}
-
-void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
-                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
-    do_st4_mmu(env_cpu(env), addr, val, oi, ra);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
 static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
                        MemOpIdx oi, uintptr_t ra)
 {
@@ -1261,21 +1104,6 @@ static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
     clear_helper_retaddr();
 }
 
-void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
-    do_st8_mmu(env_cpu(env), addr, val, oi, ra);
-}
-
-void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
-                 MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
-    do_st8_mmu(env_cpu(env), addr, val, oi, ra);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
 static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
                         MemOpIdx oi, uintptr_t ra)
 {
@@ -1292,27 +1120,6 @@ static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
     clear_helper_retaddr();
 }
 
-void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
-                     MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
-    do_st16_mmu(env_cpu(env), addr, val, oi, ra);
-}
-
-void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
-    helper_st16_mmu(env, addr, val, oi, GETPC());
-}
-
-void cpu_st16_mmu(CPUArchState *env, abi_ptr addr,
-                  Int128 val, MemOpIdx oi, uintptr_t ra)
-{
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
-    do_st16_mmu(env_cpu(env), addr, val, oi, ra);
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
 uint32_t cpu_ldub_code(CPUArchState *env, abi_ptr ptr)
 {
     uint32_t ret;
diff --git a/accel/tcg/ldst_common.c.inc b/accel/tcg/ldst_common.c.inc
index 5f8144b33a..44833513fb 100644
--- a/accel/tcg/ldst_common.c.inc
+++ b/accel/tcg/ldst_common.c.inc
@@ -8,6 +8,231 @@
  * This work is licensed under the terms of the GNU GPL, version 2 or later.
  * See the COPYING file in the top-level directory.
  */
+/*
+ * Load helpers for tcg-ldst.h
+ */
+
+tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
+                                 MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    return do_ld1_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
+}
+
+tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
+                                 MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    return do_ld2_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
+}
+
+tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
+                                 MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    return do_ld4_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
+}
+
+uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
+                        MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    return do_ld8_mmu(env_cpu(env), addr, oi, retaddr, MMU_DATA_LOAD);
+}
+
+/*
+ * Provide signed versions of the load routines as well. We can of course
+ * avoid this for 64-bit data, or for 32-bit data on 32-bit host.
+ */
+
+tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
+                                 MemOpIdx oi, uintptr_t retaddr)
+{
+    return (int8_t)helper_ldub_mmu(env, addr, oi, retaddr);
+}
+
+tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
+                                 MemOpIdx oi, uintptr_t retaddr)
+{
+    return (int16_t)helper_lduw_mmu(env, addr, oi, retaddr);
+}
+
+tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
+                                 MemOpIdx oi, uintptr_t retaddr)
+{
+    return (int32_t)helper_ldul_mmu(env, addr, oi, retaddr);
+}
+
+Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
+                       MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    return do_ld16_mmu(env_cpu(env), addr, oi, retaddr);
+}
+
+Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, uint32_t oi)
+{
+    return helper_ld16_mmu(env, addr, oi, GETPC());
+}
+
+/*
+ * Store helpers for tcg-ldst.h
+ */
+
+void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
+}
+
+void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
+}
+
+void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
+}
+
+void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
+                    MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
+}
+
+void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
+                     MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
+}
+
+void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
+{
+    helper_st16_mmu(env, addr, val, oi, GETPC());
+}
+
+/*
+ * Load helpers for cpu_ldst.h
+ */
+
+static void plugin_load_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
+{
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
+}
+
+uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr, MemOpIdx oi, uintptr_t ra)
+{
+    uint8_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_UB);
+    ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
+    plugin_load_cb(env, addr, oi);
+    return ret;
+}
+
+uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
+                     MemOpIdx oi, uintptr_t ra)
+{
+    uint16_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
+    plugin_load_cb(env, addr, oi);
+    return ret;
+}
+
+uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
+                     MemOpIdx oi, uintptr_t ra)
+{
+    uint32_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
+    plugin_load_cb(env, addr, oi);
+    return ret;
+}
+
+uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
+                     MemOpIdx oi, uintptr_t ra)
+{
+    uint64_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
+    plugin_load_cb(env, addr, oi);
+    return ret;
+}
+
+Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    Int128 ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    ret = do_ld16_mmu(env_cpu(env), addr, oi, ra);
+    plugin_load_cb(env, addr, oi);
+    return ret;
+}
+
+/*
+ * Store helpers for cpu_ldst.h
+ */
+
+static void plugin_store_cb(CPUArchState *env, abi_ptr addr, MemOpIdx oi)
+{
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
+}
+
+void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
+                 MemOpIdx oi, uintptr_t retaddr)
+{
+    helper_stb_mmu(env, addr, val, oi, retaddr);
+    plugin_store_cb(env, addr, oi);
+}
+
+void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
+                 MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    do_st2_mmu(env_cpu(env), addr, val, oi, retaddr);
+    plugin_store_cb(env, addr, oi);
+}
+
+void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
+                 MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    do_st4_mmu(env_cpu(env), addr, val, oi, retaddr);
+    plugin_store_cb(env, addr, oi);
+}
+
+void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
+                 MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    do_st8_mmu(env_cpu(env), addr, val, oi, retaddr);
+    plugin_store_cb(env, addr, oi);
+}
+
+void cpu_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
+                  MemOpIdx oi, uintptr_t retaddr)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    do_st16_mmu(env_cpu(env), addr, val, oi, retaddr);
+    plugin_store_cb(env, addr, oi);
+}
+
+/*
+ * Wrappers of the above
+ */
 
 uint32_t cpu_ldub_mmuidx_ra(CPUArchState *env, abi_ptr addr,
                             int mmu_idx, uintptr_t ra)
-- 
2.41.0
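
The consolidation works because ldst_common.c.inc is never compiled as a
standalone translation unit: cputlb.c (softmmu) and user-exec.c (user mode)
each define their own static do_ld*_mmu()/do_st*_mmu() routines and then
textually #include the shared file, so the common helper bodies bind to
whichever backend included them. A minimal, self-contained sketch of this
inclusion pattern follows; the names used here (backend.c, common.c.inc,
do_ld1, ld1_public) are illustrative stand-ins, not the real QEMU sources.

/* ---- common.c.inc: shared code, only ever included textually ---- */

/* Binds against whatever static do_ld1() the including file defines. */
unsigned ld1_public(void *cpu, unsigned long addr)
{
    return do_ld1(cpu, addr);           /* provided by the includer */
}

/* ---- backend.c: one of several files including the shared code ---- */

#include <stdio.h>

/* Backend-specific load path (TLB walk, direct host access, ...). */
static unsigned do_ld1(void *cpu, unsigned long addr)
{
    return 0x11;
}

#include "common.c.inc"                 /* instantiates ld1_public() here */

int main(void)
{
    printf("0x%x\n", ld1_public(NULL, 0));   /* prints 0x11 */
    return 0;
}

Each includer compiles its own copy of the shared helpers against its own
static internals, which is why the duplicated softmmu and user-mode
definitions can be deleted in favour of the single copy in
ldst_common.c.inc.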