From nobody Wed May  7 10:32:54 2025
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com,
 Alex Bennée <alex.bennee@linaro.org>,
 Peter Maydell <peter.maydell@linaro.org>,
 Philippe Mathieu-Daudé <f4bug@amsat.org>
Subject: [PULL 04/20] accel/tcg: Rename CPUIOTLBEntry to CPUTLBEntryFull
Date: Tue,  4 Oct 2022 12:52:25 -0700
Message-Id: <20221004195241.46491-5-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221004195241.46491-1-richard.henderson@linaro.org>
References: <20221004195241.46491-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

This structure will shortly contain more than just
data for accessing MMIO.  Rename the 'addr' member
to 'xlat_section' to more clearly indicate its purpose.
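
As an illustration of what the renamed field packs together (a
hypothetical pair of helpers, not part of this patch; they assume
the CPUTLBEntryFull definition from cpu-defs.h below and mirror the
masking already done in io_readx()/io_writex()):

    /* The low TARGET_PAGE_BITS hold a physical section number. */
    static inline unsigned full_section_idx(const CPUTLBEntryFull *full)
    {
        return full->xlat_section & ~TARGET_PAGE_MASK;
    }

    /*
     * The remaining bits hold an offset which, added to the virtual
     * address, yields the offset used to dispatch to the MemoryRegion.
     */
    static inline hwaddr full_mr_offset(const CPUTLBEntryFull *full,
                                        target_ulong vaddr)
    {
        return (full->xlat_section & TARGET_PAGE_MASK) + vaddr;
    }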

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-defs.h    |  22 ++++----
 accel/tcg/cputlb.c         | 102 +++++++++++++++++++------------------
 target/arm/mte_helper.c    |  14 ++---
 target/arm/sve_helper.c    |   4 +-
 target/arm/translate-a64.c |   2 +-
 5 files changed, 73 insertions(+), 71 deletions(-)

diff --git a/include/exec/cpu-defs.h b/include/exec/cpu-defs.h
index ba3cd32a1e..f70f54d850 100644
--- a/include/exec/cpu-defs.h
+++ b/include/exec/cpu-defs.h
@@ -108,6 +108,7 @@ typedef uint64_t target_ulong;
 #  endif
 # endif
 
+/* Minimalized TLB entry for use by TCG fast path. */
 typedef struct CPUTLBEntry {
     /* bit TARGET_LONG_BITS to TARGET_PAGE_BITS : virtual address
        bit TARGET_PAGE_BITS-1..4  : Nonzero for accesses that should not
@@ -131,14 +132,14 @@ typedef struct CPUTLBEntry {
 
 QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));
 
-/* The IOTLB is not accessed directly inline by generated TCG code,
- * so the CPUIOTLBEntry layout is not as critical as that of the
- * CPUTLBEntry. (This is also why we don't want to combine the two
- * structs into one.)
+/*
+ * The full TLB entry, which is not accessed by generated TCG code,
+ * so the layout is not as critical as that of CPUTLBEntry. This is
+ * also why we don't want to combine the two structs.
  */
-typedef struct CPUIOTLBEntry {
+typedef struct CPUTLBEntryFull {
     /*
-     * @addr contains:
+     * @xlat_section contains:
      *  - in the lower TARGET_PAGE_BITS, a physical section number
      *  - with the lower TARGET_PAGE_BITS masked off, an offset which
      *    must be added to the virtual address to obtain:
@@ -146,9 +147,9 @@ typedef struct CPUIOTLBEntry {
      *       number is PHYS_SECTION_NOTDIRTY or PHYS_SECTION_ROM)
      *     + the offset within the target MemoryRegion (otherwise)
      */
-    hwaddr addr;
+    hwaddr xlat_section;
     MemTxAttrs attrs;
-} CPUIOTLBEntry;
+} CPUTLBEntryFull;
 
 /*
  * Data elements that are per MMU mode, minus the bits accessed by
@@ -172,9 +173,8 @@ typedef struct CPUTLBDesc {
     size_t vindex;
     /* The tlb victim table, in two parts.  */
     CPUTLBEntry vtable[CPU_VTLB_SIZE];
-    CPUIOTLBEntry viotlb[CPU_VTLB_SIZE];
-    /* The iotlb.  */
-    CPUIOTLBEntry *iotlb;
+    CPUTLBEntryFull vfulltlb[CPU_VTLB_SIZE];
+    CPUTLBEntryFull *fulltlb;
 } CPUTLBDesc;
 
 /*
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 193bfc1cfc..aa22f578cb 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -200,13 +200,13 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
     }
 
     g_free(fast->table);
-    g_free(desc->iotlb);
+    g_free(desc->fulltlb);
 
     tlb_window_reset(desc, now, 0);
     /* desc->n_used_entries is cleared by the caller */
     fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS;
     fast->table = g_try_new(CPUTLBEntry, new_size);
-    desc->iotlb = g_try_new(CPUIOTLBEntry, new_size);
+    desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size);
 
     /*
      * If the allocations fail, try smaller sizes. We just freed some
@@ -215,7 +215,7 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
      * allocations to fail though, so we progressively reduce the allocation
      * size, aborting if we cannot even allocate the smallest TLB we support.
      */
-    while (fast->table == NULL || desc->iotlb == NULL) {
+    while (fast->table == NULL || desc->fulltlb == NULL) {
         if (new_size == (1 << CPU_TLB_DYN_MIN_BITS)) {
             error_report("%s: %s", __func__, strerror(errno));
             abort();
@@ -224,9 +224,9 @@ static void tlb_mmu_resize_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast,
         fast->mask = (new_size - 1) << CPU_TLB_ENTRY_BITS;
 
         g_free(fast->table);
-        g_free(desc->iotlb);
+        g_free(desc->fulltlb);
         fast->table = g_try_new(CPUTLBEntry, new_size);
-        desc->iotlb = g_try_new(CPUIOTLBEntry, new_size);
+        desc->fulltlb = g_try_new(CPUTLBEntryFull, new_size);
     }
 }
 
@@ -258,7 +258,7 @@ static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast, int64_t now)
     desc->n_used_entries = 0;
     fast->mask = (n_entries - 1) << CPU_TLB_ENTRY_BITS;
     fast->table = g_new(CPUTLBEntry, n_entries);
-    desc->iotlb = g_new(CPUIOTLBEntry, n_entries);
+    desc->fulltlb = g_new(CPUTLBEntryFull, n_entries);
     tlb_mmu_flush_locked(desc, fast);
 }
 
@@ -299,7 +299,7 @@ void tlb_destroy(CPUState *cpu)
         CPUTLBDescFast *fast = &env_tlb(env)->f[i];
 
         g_free(fast->table);
-        g_free(desc->iotlb);
+        g_free(desc->fulltlb);
     }
 }
 
@@ -1219,7 +1219,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
 
         /* Evict the old entry into the victim tlb.  */
         copy_tlb_helper_locked(tv, te);
-        desc->viotlb[vidx] = desc->iotlb[index];
+        desc->vfulltlb[vidx] = desc->fulltlb[index];
         tlb_n_used_entries_dec(env, mmu_idx);
     }
 
@@ -1236,8 +1236,8 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
      * subtract here is that of the page base, and not the same as the
      * vaddr we add back in io_readx()/io_writex()/get_page_addr_code().
      */
-    desc->iotlb[index].addr = iotlb - vaddr_page;
-    desc->iotlb[index].attrs = attrs;
+    desc->fulltlb[index].xlat_section = iotlb - vaddr_page;
+    desc->fulltlb[index].attrs = attrs;
 
     /* Now calculate the new entry */
     tn.addend = addend - vaddr_page;
@@ -1327,7 +1327,7 @@ static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
     }
 }
 
-static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
+static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
                          MMUAccessType access_type, MemOp op)
 {
@@ -1339,9 +1339,9 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    section = iotlb_to_section(cpu, full->xlat_section, full->attrs);
     mr = section->mr;
-    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    mr_offset = (full->xlat_section & TARGET_PAGE_MASK) + addr;
     cpu->mem_io_pc = retaddr;
     if (!cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
@@ -1351,14 +1351,14 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, full->attrs);
    if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;
 
        cpu_transaction_failed(cpu, physaddr, addr, memop_size(op), access_type,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+                               mmu_idx, full->attrs, r, retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -1368,8 +1368,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
 }
 
 /*
- * Save a potentially trashed IOTLB entry for later lookup by plugin.
- * This is read by tlb_plugin_lookup if the iotlb entry doesn't match
+ * Save a potentially trashed CPUTLBEntryFull for later lookup by plugin.
+ * This is read by tlb_plugin_lookup if the fulltlb entry doesn't match
  * because of the side effect of io_writex changing memory layout.
  */
 static void save_iotlb_data(CPUState *cs, hwaddr addr,
@@ -1383,7 +1383,7 @@ static void save_iotlb_data(CPUState *cs, hwaddr addr,
 #endif
 }
 
-static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
+static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
                       int mmu_idx, uint64_t val, target_ulong addr,
                       uintptr_t retaddr, MemOp op)
 {
@@ -1394,9 +1394,9 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
+    section = iotlb_to_section(cpu, full->xlat_section, full->attrs);
     mr = section->mr;
-    mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+    mr_offset = (full->xlat_section & TARGET_PAGE_MASK) + addr;
     if (!cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
@@ -1406,20 +1406,20 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
      * The memory_region_dispatch may trigger a flush/resize
      * so for plugins we save the iotlb_data just in case.
      */
-    save_iotlb_data(cpu, iotlbentry->addr, section, mr_offset);
+    save_iotlb_data(cpu, full->xlat_section, section, mr_offset);
 
     if (!qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, full->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;
 
         cpu_transaction_failed(cpu, physaddr, addr, memop_size(op),
-                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               MMU_DATA_STORE, mmu_idx, full->attrs, r,
                                retaddr);
     }
     if (locked) {
@@ -1466,9 +1466,10 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
             copy_tlb_helper_locked(vtlb, &tmptlb);
             qemu_spin_unlock(&env_tlb(env)->c.lock);
 
-            CPUIOTLBEntry tmpio, *io = &env_tlb(env)->d[mmu_idx].iotlb[index];
-            CPUIOTLBEntry *vio = &env_tlb(env)->d[mmu_idx].viotlb[vidx];
-            tmpio = *io; *io = *vio; *vio = tmpio;
+            CPUTLBEntryFull *f1 = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+            CPUTLBEntryFull *f2 = &env_tlb(env)->d[mmu_idx].vfulltlb[vidx];
+            CPUTLBEntryFull tmpf;
+            tmpf = *f1; *f1 = *f2; *f2 = tmpf;
             return true;
         }
     }
@@ -1481,9 +1482,9 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
                  (ADDR) & TARGET_PAGE_MASK)
 
 static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
-                           CPUIOTLBEntry *iotlbentry, uintptr_t retaddr)
+                           CPUTLBEntryFull *full, uintptr_t retaddr)
 {
-    ram_addr_t ram_addr = mem_vaddr + iotlbentry->addr;
+    ram_addr_t ram_addr = mem_vaddr + full->xlat_section;
 
     trace_memory_notdirty_write_access(mem_vaddr, ram_addr, size);
 
@@ -1575,9 +1576,9 @@ int probe_access_flags(CPUArchState *env, target_ulong addr,
     /* Handle clean RAM pages.  */
     if (unlikely(flags & TLB_NOTDIRTY)) {
         uintptr_t index = tlb_index(env, mmu_idx, addr);
-        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
-        notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr);
+        notdirty_write(env_cpu(env), addr, 1, full, retaddr);
         flags &= ~TLB_NOTDIRTY;
     }
 
@@ -1602,19 +1603,19 @@ void *probe_access(CPUArchState *env, target_ulong addr, int size,
 
     if (unlikely(flags & (TLB_NOTDIRTY | TLB_WATCHPOINT))) {
         uintptr_t index = tlb_index(env, mmu_idx, addr);
-        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
         /* Handle watchpoints.  */
         if (flags & TLB_WATCHPOINT) {
             int wp_access = (access_type == MMU_DATA_STORE
                              ? BP_MEM_WRITE : BP_MEM_READ);
             cpu_check_watchpoint(env_cpu(env), addr, size,
-                                 iotlbentry->attrs, wp_access, retaddr);
+                                 full->attrs, wp_access, retaddr);
         }
 
         /* Handle clean RAM pages.  */
         if (flags & TLB_NOTDIRTY) {
-            notdirty_write(env_cpu(env), addr, 1, iotlbentry, retaddr);
+            notdirty_write(env_cpu(env), addr, 1, full, retaddr);
         }
     }
 
@@ -1671,7 +1672,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
  * should have just filled the TLB. The one corner case is io_writex
  * which can cause TLB flushes and potential resizing of the TLBs
  * losing the information we need. In those cases we need to recover
- * data from a copy of the iotlbentry. As long as this always occurs
+ * data from a copy of the CPUTLBEntryFull. As long as this always occurs
  * from the same thread (which a mem callback will be) this is safe.
  */
 
@@ -1686,11 +1687,12 @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
     if (likely(tlb_hit(tlb_addr, addr))) {
         /* We must have an iotlb entry for MMIO */
         if (tlb_addr & TLB_MMIO) {
-            CPUIOTLBEntry *iotlbentry;
-            iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+            CPUTLBEntryFull *full;
+            full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
             data->is_io = true;
-            data->v.io.section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
-            data->v.io.offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+            data->v.io.section =
+                iotlb_to_section(cpu, full->xlat_section, full->attrs);
+            data->v.io.offset = (full->xlat_section & TARGET_PAGE_MASK) + addr;
         } else {
             data->is_io = false;
             data->v.ram.hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
@@ -1798,7 +1800,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 
     if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
         notdirty_write(env_cpu(env), addr, size,
-                       &env_tlb(env)->d[mmu_idx].iotlb[index], retaddr);
+                       &env_tlb(env)->d[mmu_idx].fulltlb[index], retaddr);
     }
 
     return hostaddr;
@@ -1906,7 +1908,7 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi,
 
     /* Handle anything that isn't just a straight memory access.  */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        CPUIOTLBEntry *iotlbentry;
+        CPUTLBEntryFull *full;
         bool need_swap;
 
         /* For anything that is unaligned, recurse through full_load.  */
@@ -1914,20 +1916,20 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi,
             goto do_unaligned_access;
         }
 
-        iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
         /* Handle watchpoints.  */
         if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
             /* On watchpoint hit, this will longjmp out.  */
             cpu_check_watchpoint(env_cpu(env), addr, size,
-                                 iotlbentry->attrs, BP_MEM_READ, retaddr);
+                                 full->attrs, BP_MEM_READ, retaddr);
         }
 
         need_swap = size > 1 && (tlb_addr & TLB_BSWAP);
 
         /* Handle I/O access.  */
         if (likely(tlb_addr & TLB_MMIO)) {
-            return io_readx(env, iotlbentry, mmu_idx, addr, retaddr,
+            return io_readx(env, full, mmu_idx, addr, retaddr,
                             access_type, op ^ (need_swap * MO_BSWAP));
         }
 
@@ -2242,12 +2244,12 @@ store_helper_unaligned(CPUArchState *env, target_ulong addr, uint64_t val,
      */
     if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
         cpu_check_watchpoint(env_cpu(env), addr, size - size2,
-                             env_tlb(env)->d[mmu_idx].iotlb[index].attrs,
+                             env_tlb(env)->d[mmu_idx].fulltlb[index].attrs,
                              BP_MEM_WRITE, retaddr);
     }
     if (unlikely(tlb_addr2 & TLB_WATCHPOINT)) {
         cpu_check_watchpoint(env_cpu(env), page2, size2,
-                             env_tlb(env)->d[mmu_idx].iotlb[index2].attrs,
+                             env_tlb(env)->d[mmu_idx].fulltlb[index2].attrs,
                              BP_MEM_WRITE, retaddr);
     }
 
@@ -2311,7 +2313,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
 
     /* Handle anything that isn't just a straight memory access.  */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        CPUIOTLBEntry *iotlbentry;
+        CPUTLBEntryFull *full;
         bool need_swap;
 
         /* For anything that is unaligned, recurse through byte stores.  */
@@ -2319,20 +2321,20 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             goto do_unaligned_access;
         }
 
-        iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
+        full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
 
         /* Handle watchpoints.  */
        if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
             /* On watchpoint hit, this will longjmp out.  */
             cpu_check_watchpoint(env_cpu(env), addr, size,
-                                 iotlbentry->attrs, BP_MEM_WRITE, retaddr);
+                                 full->attrs, BP_MEM_WRITE, retaddr);
         }
 
         need_swap = size > 1 && (tlb_addr & TLB_BSWAP);
 
         /* Handle I/O access.  */
         if (tlb_addr & TLB_MMIO) {
-            io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
+            io_writex(env, full, mmu_idx, val, addr, retaddr,
                       op ^ (need_swap * MO_BSWAP));
             return;
         }
@@ -2344,7 +2346,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
 
         /* Handle clean RAM pages.  */
         if (tlb_addr & TLB_NOTDIRTY) {
-            notdirty_write(env_cpu(env), addr, size, iotlbentry, retaddr);
+            notdirty_write(env_cpu(env), addr, size, full, retaddr);
         }
 
         haddr = (void *)((uintptr_t)addr + entry->addend);
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index d11a8c70d0..fdd23ab3f8 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -106,7 +106,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     return tags + index;
 #else
     uintptr_t index;
-    CPUIOTLBEntry *iotlbentry;
+    CPUTLBEntryFull *full;
     int in_page, flags;
     ram_addr_t ptr_ra;
     hwaddr ptr_paddr, tag_paddr, xlat;
@@ -129,7 +129,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     assert(!(flags & TLB_INVALID_MASK));
 
     /*
-     * Find the iotlbentry for ptr.  This *must* be present in the TLB
+     * Find the CPUTLBEntryFull for ptr.  This *must* be present in the TLB
      * because we just found the mapping.
     * TODO: Perhaps there should be a cputlb helper that returns a
      * matching tlb entry + iotlb entry.
@@ -144,10 +144,10 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
         g_assert(tlb_hit(comparator, ptr));
     }
 # endif
-    iotlbentry = &env_tlb(env)->d[ptr_mmu_idx].iotlb[index];
+    full = &env_tlb(env)->d[ptr_mmu_idx].fulltlb[index];
 
     /* If the virtual page MemAttr != Tagged, access unchecked. */
-    if (!arm_tlb_mte_tagged(&iotlbentry->attrs)) {
+    if (!arm_tlb_mte_tagged(&full->attrs)) {
         return NULL;
     }
 
@@ -181,7 +181,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
         int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
         assert(ra != 0);
         cpu_check_watchpoint(env_cpu(env), ptr, ptr_size,
-                             iotlbentry->attrs, wp, ra);
+                             full->attrs, wp, ra);
     }
 
     /*
@@ -202,11 +202,11 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     tag_paddr = ptr_paddr >> (LOG2_TAG_GRANULE + 1);
 
     /* Look up the address in tag space. */
-    tag_asi = iotlbentry->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
+    tag_asi = full->attrs.secure ? ARMASIdx_TagS : ARMASIdx_TagNS;
     tag_as = cpu_get_address_space(env_cpu(env), tag_asi);
     mr = address_space_translate(tag_as, tag_paddr, &xlat, NULL,
                                  tag_access == MMU_DATA_STORE,
-                                 iotlbentry->attrs);
+                                 full->attrs);
 
     /*
      * Note that @mr will never be NULL.  If there is nothing in the address
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index d6f7ef94fe..9cae8fd352 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -5384,8 +5384,8 @@ bool sve_probe_page(SVEHostPage *info, bool nofault, CPUARMState *env,
         g_assert(tlb_hit(comparator, addr));
 # endif
 
-        CPUIOTLBEntry *iotlbentry = &env_tlb(env)->d[mmu_idx].iotlb[index];
-        info->attrs = iotlbentry->attrs;
+        CPUTLBEntryFull *full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+        info->attrs = full->attrs;
     }
 #endif
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 9bed336b47..78b2d91ed4 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14624,7 +14624,7 @@ static bool is_guarded_page(CPUARMState *env, DisasContext *s)
      * table entry even for that case.
      */
     return (tlb_hit(entry->addr_code, addr) &&
-            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].iotlb[index].attrs));
+            arm_tlb_bti_gp(&env_tlb(env)->d[mmu_idx].fulltlb[index].attrs));
 #endif
 }
 
-- 
2.34.1