From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: Anton Johansson <anjo@rev.ng>,
 Philippe Mathieu-Daudé <philmd@linaro.org>
Subject: [PULL 23/47] accel/tcg: Modify tlb_*() to use CPUState
Date: Tue,  3 Oct 2023 10:30:28 -0700
Message-Id: <20231003173052.1601813-24-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231003173052.1601813-1-richard.henderson@linaro.org>
References: <20231003173052.1601813-1-richard.henderson@linaro.org>

From: Anton Johansson <anjo@rev.ng>

Changes the tlb_*() functions to take a CPUState instead of a CPUArchState,
as they do not require the full CPUArchState. This makes it easier to
decouple target-dependent code from target-independent code.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-4-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[rth: Use cpu->neg.tlb instead of cpu_tlb()]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
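For readers following the API change, a minimal sketch (not part of the
patch; the helper name example_plugin_style_lookup is made up for
illustration) of how a lookup looks once tlb_index()/tlb_entry() take a
CPUState and read cpu->neg.tlb directly, mirroring the pattern used in
tlb_plugin_lookup() in the diff below:

/* Illustrative only: hypothetical helper showing the CPUState-based API. */
static CPUTLBEntryFull *example_plugin_style_lookup(CPUState *cpu,
                                                    int mmu_idx, vaddr addr)
{
    uintptr_t index = tlb_index(cpu, mmu_idx, addr);    /* fast TLB slot  */
    CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr); /* comparators    */

    (void)entry; /* a real caller would check tlb_hit() on the entry */
    return &cpu->neg.tlb.d[mmu_idx].fulltlb[index];     /* full TLB entry */
}
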
 include/exec/cpu_ldst.h |   8 +-
 accel/tcg/cputlb.c      | 220 +++++++++++++++++++---------------------
 2 files changed, 108 insertions(+), 120 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index da10ba1433..6061e33ac9 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -361,19 +361,19 @@ static inline uint64_t tlb_addr_write(const CPUTLBEntry *entry)
 }
 
 /* Find the TLB index corresponding to the mmu_idx + address pair.  */
-static inline uintptr_t tlb_index(CPUArchState *env, uintptr_t mmu_idx,
+static inline uintptr_t tlb_index(CPUState *cpu, uintptr_t mmu_idx,
                                   vaddr addr)
 {
-    uintptr_t size_mask = env_tlb(env)->f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
+    uintptr_t size_mask = cpu->neg.tlb.f[mmu_idx].mask >> CPU_TLB_ENTRY_BITS;
 
     return (addr >> TARGET_PAGE_BITS) & size_mask;
 }
 
 /* Find the TLB entry corresponding to the mmu_idx + address pair.  */
-static inline CPUTLBEntry *tlb_entry(CPUArchState *env, uintptr_t mmu_idx,
+static inline CPUTLBEntry *tlb_entry(CPUState *cpu, uintptr_t mmu_idx,
                                      vaddr addr)
 {
-    return &env_tlb(env)->f[mmu_idx].table[tlb_index(env, mmu_idx, addr)];
+    return &cpu->neg.tlb.f[mmu_idx].table[tlb_index(cpu, mmu_idx, addr)];
 }
 
 #endif /* defined(CONFIG_USER_ONLY) */
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index f790be5b6e..f88c394594 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -240,11 +240,11 @@ static void tlb_mmu_flush_locked(CPUTLBDesc *desc, CPUTLBDescFast *fast)
     memset(desc->vtable, -1, sizeof(desc->vtable));
 }
 
-static void tlb_flush_one_mmuidx_locked(CPUArchState *env, int mmu_idx,
+static void tlb_flush_one_mmuidx_locked(CPUState *cpu, int mmu_idx,
                                         int64_t now)
 {
-    CPUTLBDesc *desc = &env_tlb(env)->d[mmu_idx];
-    CPUTLBDescFast *fast = &env_tlb(env)->f[mmu_idx];
+    CPUTLBDesc *desc = &cpu->neg.tlb.d[mmu_idx];
+    CPUTLBDescFast *fast = &cpu->neg.tlb.f[mmu_idx];
 
     tlb_mmu_resize_locked(desc, fast, now);
     tlb_mmu_flush_locked(desc, fast);
@@ -262,41 +262,39 @@ static void tlb_mmu_init(CPUTLBDesc *desc, CPUTLBDescFast *fast, int64_t now)
     tlb_mmu_flush_locked(desc, fast);
 }
 
-static inline void tlb_n_used_entries_inc(CPUArchState *env, uintptr_t mmu_idx)
+static inline void tlb_n_used_entries_inc(CPUState *cpu, uintptr_t mmu_idx)
 {
-    env_tlb(env)->d[mmu_idx].n_used_entries++;
+    cpu->neg.tlb.d[mmu_idx].n_used_entries++;
 }
 
-static inline void tlb_n_used_entries_dec(CPUArchState *env, uintptr_t mmu_idx)
+static inline void tlb_n_used_entries_dec(CPUState *cpu, uintptr_t mmu_idx)
 {
-    env_tlb(env)->d[mmu_idx].n_used_entries--;
+    cpu->neg.tlb.d[mmu_idx].n_used_entries--;
 }
 
 void tlb_init(CPUState *cpu)
 {
-    CPUArchState *env = cpu_env(cpu);
     int64_t now = get_clock_realtime();
     int i;
 
-    qemu_spin_init(&env_tlb(env)->c.lock);
+    qemu_spin_init(&cpu->neg.tlb.c.lock);
 
     /* All tlbs are initialized flushed. */
-    env_tlb(env)->c.dirty = 0;
+    cpu->neg.tlb.c.dirty = 0;
 
     for (i = 0; i < NB_MMU_MODES; i++) {
-        tlb_mmu_init(&env_tlb(env)->d[i], &env_tlb(env)->f[i], now);
+        tlb_mmu_init(&cpu->neg.tlb.d[i], &cpu->neg.tlb.f[i], now);
     }
 }
 
 void tlb_destroy(CPUState *cpu)
 {
-    CPUArchState *env = cpu_env(cpu);
     int i;
 
-    qemu_spin_destroy(&env_tlb(env)->c.lock);
+    qemu_spin_destroy(&cpu->neg.tlb.c.lock);
     for (i = 0; i < NB_MMU_MODES; i++) {
-        CPUTLBDesc *desc = &env_tlb(env)->d[i];
-        CPUTLBDescFast *fast = &env_tlb(env)->f[i];
+        CPUTLBDesc *desc = &cpu->neg.tlb.d[i];
+        CPUTLBDescFast *fast = &cpu->neg.tlb.f[i];
 
         g_free(fast->table);
         g_free(desc->fulltlb);
@@ -328,11 +326,9 @@ void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
     size_t full = 0, part = 0, elide = 0;
 
     CPU_FOREACH(cpu) {
-        CPUArchState *env = cpu_env(cpu);
-
-        full += qatomic_read(&env_tlb(env)->c.full_flush_count);
-        part += qatomic_read(&env_tlb(env)->c.part_flush_count);
-        elide += qatomic_read(&env_tlb(env)->c.elide_flush_count);
+        full += qatomic_read(&cpu->neg.tlb.c.full_flush_count);
+        part += qatomic_read(&cpu->neg.tlb.c.part_flush_count);
+        elide += qatomic_read(&cpu->neg.tlb.c.elide_flush_count);
     }
     *pfull = full;
     *ppart = part;
@@ -341,7 +337,6 @@ void tlb_flush_counts(size_t *pfull, size_t *ppart, size_t *pelide)
 
 static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 {
-    CPUArchState *env = cpu_env(cpu);
     uint16_t asked = data.host_int;
     uint16_t all_dirty, work, to_clean;
     int64_t now = get_clock_realtime();
@@ -350,32 +345,32 @@ static void tlb_flush_by_mmuidx_async_work(CPUState *cpu, run_on_cpu_data data)
 
     tlb_debug("mmu_idx:0x%04" PRIx16 "\n", asked);
 
-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu->neg.tlb.c.lock);
 
-    all_dirty = env_tlb(env)->c.dirty;
+    all_dirty = cpu->neg.tlb.c.dirty;
     to_clean = asked & all_dirty;
     all_dirty &= ~to_clean;
-    env_tlb(env)->c.dirty = all_dirty;
+    cpu->neg.tlb.c.dirty = all_dirty;
 
     for (work = to_clean; work != 0; work &= work - 1) {
         int mmu_idx = ctz32(work);
-        tlb_flush_one_mmuidx_locked(env, mmu_idx, now);
+        tlb_flush_one_mmuidx_locked(cpu, mmu_idx, now);
     }
 
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu->neg.tlb.c.lock);
 
     tcg_flush_jmp_cache(cpu);
 
     if (to_clean == ALL_MMUIDX_BITS) {
-        qatomic_set(&env_tlb(env)->c.full_flush_count,
-                   env_tlb(env)->c.full_flush_count + 1);
+        qatomic_set(&cpu->neg.tlb.c.full_flush_count,
+                    cpu->neg.tlb.c.full_flush_count + 1);
     } else {
-        qatomic_set(&env_tlb(env)->c.part_flush_count,
-                   env_tlb(env)->c.part_flush_count + ctpop16(to_clean));
+        qatomic_set(&cpu->neg.tlb.c.part_flush_count,
+                    cpu->neg.tlb.c.part_flush_count + ctpop16(to_clean));
         if (to_clean != asked) {
-            qatomic_set(&env_tlb(env)->c.elide_flush_count,
-                       env_tlb(env)->c.elide_flush_count +
-                       ctpop16(asked & ~to_clean));
+            qatomic_set(&cpu->neg.tlb.c.elide_flush_count,
+                        cpu->neg.tlb.c.elide_flush_count +
+                        ctpop16(asked & ~to_clean));
         }
     }
 }
@@ -470,43 +465,43 @@ static inline bool tlb_flush_entry_locked(CPUTLBEntry *tlb_entry, vaddr page)
 }
 
 /* Called with tlb_c.lock held */
-static void tlb_flush_vtlb_page_mask_locked(CPUArchState *env, int mmu_idx,
+static void tlb_flush_vtlb_page_mask_locked(CPUState *cpu, int mmu_idx,
                                             vaddr page,
                                             vaddr mask)
 {
-    CPUTLBDesc *d = &env_tlb(env)->d[mmu_idx];
+    CPUTLBDesc *d = &cpu->neg.tlb.d[mmu_idx];
     int k;
 
-    assert_cpu_is_self(env_cpu(env));
+    assert_cpu_is_self(cpu);
     for (k = 0; k < CPU_VTLB_SIZE; k++) {
         if (tlb_flush_entry_mask_locked(&d->vtable[k], page, mask)) {
-            tlb_n_used_entries_dec(env, mmu_idx);
+            tlb_n_used_entries_dec(cpu, mmu_idx);
         }
     }
 }
 
-static inline void tlb_flush_vtlb_page_locked(CPUArchState *env, int mmu_idx,
+static inline void tlb_flush_vtlb_page_locked(CPUState *cpu, int mmu_idx,
                                               vaddr page)
 {
-    tlb_flush_vtlb_page_mask_locked(env, mmu_idx, page, -1);
+    tlb_flush_vtlb_page_mask_locked(cpu, mmu_idx, page, -1);
 }
 
-static void tlb_flush_page_locked(CPUArchState *env, int midx, vaddr page)
+static void tlb_flush_page_locked(CPUState *cpu, int midx, vaddr page)
 {
-    vaddr lp_addr = env_tlb(env)->d[midx].large_page_addr;
-    vaddr lp_mask = env_tlb(env)->d[midx].large_page_mask;
+    vaddr lp_addr = cpu->neg.tlb.d[midx].large_page_addr;
+    vaddr lp_mask = cpu->neg.tlb.d[midx].large_page_mask;
 
     /* Check if we need to flush due to large pages.  */
     if ((page & lp_mask) == lp_addr) {
         tlb_debug("forcing full flush midx %d (%016"
                   VADDR_PRIx "/%016" VADDR_PRIx ")\n",
                   midx, lp_addr, lp_mask);
-        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
     } else {
-        if (tlb_flush_entry_locked(tlb_entry(env, midx, page), page)) {
-            tlb_n_used_entries_dec(env, midx);
+        if (tlb_flush_entry_locked(tlb_entry(cpu, midx, page), page)) {
+            tlb_n_used_entries_dec(cpu, midx);
         }
-        tlb_flush_vtlb_page_locked(env, midx, page);
+        tlb_flush_vtlb_page_locked(cpu, midx, page);
     }
 }
 
@@ -523,20 +518,19 @@ static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
                                              vaddr addr,
                                              uint16_t idxmap)
 {
-    CPUArchState *env = cpu_env(cpu);
     int mmu_idx;
 
     assert_cpu_is_self(cpu);
 
     tlb_debug("page addr: %016" VADDR_PRIx " mmu_map:0x%x\n", addr, idxmap);
 
-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu->neg.tlb.c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if ((idxmap >> mmu_idx) & 1) {
-            tlb_flush_page_locked(env, mmu_idx, addr);
+            tlb_flush_page_locked(cpu, mmu_idx, addr);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu->neg.tlb.c.lock);
 
     /*
      * Discard jump cache entries for any tb which might potentially
@@ -709,12 +703,12 @@ void tlb_flush_page_all_cpus_synced(CPUState *src, vaddr addr)
     tlb_flush_page_by_mmuidx_all_cpus_synced(src, addr, ALL_MMUIDX_BITS);
 }
 
-static void tlb_flush_range_locked(CPUArchState *env, int midx,
+static void tlb_flush_range_locked(CPUState *cpu, int midx,
                                    vaddr addr, vaddr len,
                                    unsigned bits)
 {
-    CPUTLBDesc *d = &env_tlb(env)->d[midx];
-    CPUTLBDescFast *f = &env_tlb(env)->f[midx];
+    CPUTLBDesc *d = &cpu->neg.tlb.d[midx];
+    CPUTLBDescFast *f = &cpu->neg.tlb.f[midx];
     vaddr mask = MAKE_64BIT_MASK(0, bits);
 
     /*
@@ -731,7 +725,7 @@ static void tlb_flush_range_locked(CPUArchState *env, int midx,
         tlb_debug("forcing full flush midx %d ("
                   "%016" VADDR_PRIx "/%016" VADDR_PRIx "+%016" VADDR_PRIx ")\n",
                   midx, addr, mask, len);
-        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
         return;
     }
 
@@ -744,18 +738,18 @@ static void tlb_flush_range_locked(CPUArchState *env, int midx,
         tlb_debug("forcing full flush midx %d ("
                   "%016" VADDR_PRIx "/%016" VADDR_PRIx ")\n",
                   midx, d->large_page_addr, d->large_page_mask);
-        tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
+        tlb_flush_one_mmuidx_locked(cpu, midx, get_clock_realtime());
         return;
     }
 
     for (vaddr i = 0; i < len; i += TARGET_PAGE_SIZE) {
         vaddr page = addr + i;
-        CPUTLBEntry *entry = tlb_entry(env, midx, page);
+        CPUTLBEntry *entry = tlb_entry(cpu, midx, page);
 
         if (tlb_flush_entry_mask_locked(entry, page, mask)) {
-            tlb_n_used_entries_dec(env, midx);
+            tlb_n_used_entries_dec(cpu, midx);
         }
-        tlb_flush_vtlb_page_mask_locked(env, midx, page, mask);
+        tlb_flush_vtlb_page_mask_locked(cpu, midx, page, mask);
     }
 }
 
@@ -769,7 +763,6 @@ typedef struct {
 static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
                                               TLBFlushRangeData d)
 {
-    CPUArchState *env = cpu_env(cpu);
    int mmu_idx;
 
     assert_cpu_is_self(cpu);
@@ -777,13 +770,13 @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
     tlb_debug("range: %016" VADDR_PRIx "/%u+%016" VADDR_PRIx " mmu_map:0x%x\n",
               d.addr, d.bits, d.len, d.idxmap);
 
-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu->neg.tlb.c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         if ((d.idxmap >> mmu_idx) & 1) {
-            tlb_flush_range_locked(env, mmu_idx, d.addr, d.len, d.bits);
+            tlb_flush_range_locked(cpu, mmu_idx, d.addr, d.len, d.bits);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu->neg.tlb.c.lock);
 
     /*
      * If the length is larger than the jump cache size, then it will take
@@ -1028,27 +1021,24 @@ static inline void copy_tlb_helper_locked(CPUTLBEntry *d, const CPUTLBEntry *s)
  */
 void tlb_reset_dirty(CPUState *cpu, ram_addr_t start1, ram_addr_t length)
 {
-    CPUArchState *env;
-
     int mmu_idx;
 
-    env = cpu_env(cpu);
-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu->neg.tlb.c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         unsigned int i;
-        unsigned int n = tlb_n_entries(&env_tlb(env)->f[mmu_idx]);
+        unsigned int n = tlb_n_entries(&cpu->neg.tlb.f[mmu_idx]);
 
         for (i = 0; i < n; i++) {
-            tlb_reset_dirty_range_locked(&env_tlb(env)->f[mmu_idx].table[i],
+            tlb_reset_dirty_range_locked(&cpu->neg.tlb.f[mmu_idx].table[i],
                                          start1, length);
         }
 
         for (i = 0; i < CPU_VTLB_SIZE; i++) {
-            tlb_reset_dirty_range_locked(&env_tlb(env)->d[mmu_idx].vtable[i],
+            tlb_reset_dirty_range_locked(&cpu->neg.tlb.d[mmu_idx].vtable[i],
                                          start1, length);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu->neg.tlb.c.lock);
 }
 
 /* Called with tlb_c.lock held */
@@ -1064,32 +1054,31 @@ static inline void tlb_set_dirty1_locked(CPUTLBEntry *tlb_entry,
    so that it is no longer dirty */
 void tlb_set_dirty(CPUState *cpu, vaddr addr)
 {
-    CPUArchState *env = cpu_env(cpu);
     int mmu_idx;
 
     assert_cpu_is_self(cpu);
 
     addr &= TARGET_PAGE_MASK;
-    qemu_spin_lock(&env_tlb(env)->c.lock);
+    qemu_spin_lock(&cpu->neg.tlb.c.lock);
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
-        tlb_set_dirty1_locked(tlb_entry(env, mmu_idx, addr), addr);
+        tlb_set_dirty1_locked(tlb_entry(cpu, mmu_idx, addr), addr);
     }
 
     for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
         int k;
         for (k = 0; k < CPU_VTLB_SIZE; k++) {
-            tlb_set_dirty1_locked(&env_tlb(env)->d[mmu_idx].vtable[k], addr);
+            tlb_set_dirty1_locked(&cpu->neg.tlb.d[mmu_idx].vtable[k], addr);
         }
     }
-    qemu_spin_unlock(&env_tlb(env)->c.lock);
+    qemu_spin_unlock(&cpu->neg.tlb.c.lock);
 }
 
 /* Our TLB does not support large pages, so remember the area covered by
    large pages and trigger a full TLB flush if these are invalidated.  */
-static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
+static void tlb_add_large_page(CPUState *cpu, int mmu_idx,
                                vaddr addr, uint64_t size)
 {
-    vaddr lp_addr = env_tlb(env)->d[mmu_idx].large_page_addr;
+    vaddr lp_addr = cpu->neg.tlb.d[mmu_idx].large_page_addr;
     vaddr lp_mask = ~(size - 1);
 
     if (lp_addr == (vaddr)-1) {
@@ -1099,13 +1088,13 @@ static void tlb_add_large_page(CPUArchState *env, int mmu_idx,
         /* Extend the existing region to include the new page.
            This is a compromise between unnecessary flushes and
            the cost of maintaining a full variable size TLB.  */
-        lp_mask &= env_tlb(env)->d[mmu_idx].large_page_mask;
+        lp_mask &= cpu->neg.tlb.d[mmu_idx].large_page_mask;
         while (((lp_addr ^ addr) & lp_mask) != 0) {
             lp_mask <<= 1;
         }
     }
-    env_tlb(env)->d[mmu_idx].large_page_addr = lp_addr & lp_mask;
-    env_tlb(env)->d[mmu_idx].large_page_mask = lp_mask;
+    cpu->neg.tlb.d[mmu_idx].large_page_addr = lp_addr & lp_mask;
+    cpu->neg.tlb.d[mmu_idx].large_page_mask = lp_mask;
 }
 
 static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent,
@@ -1137,8 +1126,7 @@ static inline void tlb_set_compare(CPUTLBEntryFull *full, CPUTLBEntry *ent,
 void tlb_set_page_full(CPUState *cpu, int mmu_idx,
                        vaddr addr, CPUTLBEntryFull *full)
 {
-    CPUArchState *env = cpu_env(cpu);
-    CPUTLB *tlb = env_tlb(env);
+    CPUTLB *tlb = &cpu->neg.tlb;
     CPUTLBDesc *desc = &tlb->d[mmu_idx];
     MemoryRegionSection *section;
     unsigned int index, read_flags, write_flags;
@@ -1155,7 +1143,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
         sz = TARGET_PAGE_SIZE;
     } else {
         sz = (hwaddr)1 << full->lg_page_size;
-        tlb_add_large_page(env, mmu_idx, addr, sz);
+        tlb_add_large_page(cpu, mmu_idx, addr, sz);
     }
     addr_page = addr & TARGET_PAGE_MASK;
     paddr_page = full->phys_addr & TARGET_PAGE_MASK;
@@ -1222,8 +1210,8 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     wp_flags = cpu_watchpoint_address_matches(cpu, addr_page,
                                               TARGET_PAGE_SIZE);
 
-    index = tlb_index(env, mmu_idx, addr_page);
-    te = tlb_entry(env, mmu_idx, addr_page);
+    index = tlb_index(cpu, mmu_idx, addr_page);
+    te = tlb_entry(cpu, mmu_idx, addr_page);
 
     /*
      * Hold the TLB lock for the rest of the function. We could acquire/release
@@ -1238,7 +1226,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
     tlb->c.dirty |= 1 << mmu_idx;
 
     /* Make sure there's no cached translation for the new page.  */
-    tlb_flush_vtlb_page_locked(env, mmu_idx, addr_page);
+    tlb_flush_vtlb_page_locked(cpu, mmu_idx, addr_page);
 
     /*
      * Only evict the old entry to the victim tlb if it's for a
@@ -1251,7 +1239,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
         /* Evict the old entry into the victim tlb.  */
         copy_tlb_helper_locked(tv, te);
         desc->vfulltlb[vidx] = desc->fulltlb[index];
-        tlb_n_used_entries_dec(env, mmu_idx);
+        tlb_n_used_entries_dec(cpu, mmu_idx);
     }
 
     /* refill the tlb */
@@ -1296,7 +1284,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
                     MMU_DATA_STORE, prot & PAGE_WRITE);
 
     copy_tlb_helper_locked(te, &tn);
-    tlb_n_used_entries_inc(env, mmu_idx);
+    tlb_n_used_entries_inc(cpu, mmu_idx);
     qemu_spin_unlock(&tlb->c.lock);
 }
 
@@ -1390,28 +1378,28 @@ static void io_failed(CPUArchState *env, CPUTLBEntryFull *full, vaddr addr,
 
 /* Return true if ADDR is present in the victim tlb, and has been copied
    back to the main tlb.  */
-static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
+static bool victim_tlb_hit(CPUState *cpu, size_t mmu_idx, size_t index,
                            MMUAccessType access_type, vaddr page)
 {
     size_t vidx;
 
-    assert_cpu_is_self(env_cpu(env));
+    assert_cpu_is_self(cpu);
     for (vidx = 0; vidx < CPU_VTLB_SIZE; ++vidx) {
-        CPUTLBEntry *vtlb = &env_tlb(env)->d[mmu_idx].vtable[vidx];
+        CPUTLBEntry *vtlb = &cpu->neg.tlb.d[mmu_idx].vtable[vidx];
         uint64_t cmp = tlb_read_idx(vtlb, access_type);
 
         if (cmp == page) {
             /* Found entry in victim tlb, swap tlb and iotlb.  */
-            CPUTLBEntry tmptlb, *tlb = &env_tlb(env)->f[mmu_idx].table[index];
+            CPUTLBEntry tmptlb, *tlb = &cpu->neg.tlb.f[mmu_idx].table[index];
 
-            qemu_spin_lock(&env_tlb(env)->c.lock);
+            qemu_spin_lock(&cpu->neg.tlb.c.lock);
             copy_tlb_helper_locked(&tmptlb, tlb);
             copy_tlb_helper_locked(tlb, vtlb);
             copy_tlb_helper_locked(vtlb, &tmptlb);
-            qemu_spin_unlock(&env_tlb(env)->c.lock);
+            qemu_spin_unlock(&cpu->neg.tlb.c.lock);
 
-            CPUTLBEntryFull *f1 = &env_tlb(env)->d[mmu_idx].fulltlb[index];
-            CPUTLBEntryFull *f2 = &env_tlb(env)->d[mmu_idx].vfulltlb[vidx];
+            CPUTLBEntryFull *f1 = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
+            CPUTLBEntryFull *f2 = &cpu->neg.tlb.d[mmu_idx].vfulltlb[vidx];
             CPUTLBEntryFull tmpf;
             tmpf = *f1; *f1 = *f2; *f2 = tmpf;
             return true;
@@ -1450,8 +1438,8 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
                                  void **phost, CPUTLBEntryFull **pfull,
                                  uintptr_t retaddr, bool check_mem_cbs)
 {
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    uintptr_t index = tlb_index(env_cpu(env), mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env_cpu(env), mmu_idx, addr);
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     vaddr page_addr = addr & TARGET_PAGE_MASK;
     int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
@@ -1459,7 +1447,8 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
     CPUTLBEntryFull *full;
 
     if (!tlb_hit_page(tlb_addr, page_addr)) {
-        if (!victim_tlb_hit(env, mmu_idx, index, access_type, page_addr)) {
+        if (!victim_tlb_hit(env_cpu(env), mmu_idx, index,
+                            access_type, page_addr)) {
             CPUState *cs = env_cpu(env);
 
             if (!cs->cc->tcg_ops->tlb_fill(cs, addr, fault_size, access_type,
@@ -1471,8 +1460,8 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
             }
 
             /* TLB resize via tlb_fill may have moved the entry.  */
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
+            index = tlb_index(env_cpu(env), mmu_idx, addr);
+            entry = tlb_entry(env_cpu(env), mmu_idx, addr);
 
             /*
              * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
@@ -1662,9 +1651,8 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
 bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx,
                        bool is_store, struct qemu_plugin_hwaddr *data)
 {
-    CPUArchState *env = cpu_env(cpu);
-    CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
+    CPUTLBEntry *tlbe = tlb_entry(cpu, mmu_idx, addr);
+    uintptr_t index = tlb_index(cpu, mmu_idx, addr);
     MMUAccessType access_type = is_store ? MMU_DATA_STORE : MMU_DATA_LOAD;
     uint64_t tlb_addr = tlb_read_idx(tlbe, access_type);
     CPUTLBEntryFull *full;
@@ -1673,7 +1661,7 @@ bool tlb_plugin_lookup(CPUState *cpu, vaddr addr, int mmu_idx,
         return false;
     }
 
-    full = &env_tlb(env)->d[mmu_idx].fulltlb[index];
+    full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
     data->phys_addr = full->phys_addr | (addr & ~TARGET_PAGE_MASK);
 
     /* We must have an iotlb entry for MMIO */
@@ -1727,8 +1715,8 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
                         int mmu_idx, MMUAccessType access_type, uintptr_t ra)
 {
     vaddr addr = data->addr;
-    uintptr_t index = tlb_index(env, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
+    uintptr_t index = tlb_index(env_cpu(env), mmu_idx, addr);
+    CPUTLBEntry *entry = tlb_entry(env_cpu(env), mmu_idx, addr);
     uint64_t tlb_addr = tlb_read_idx(entry, access_type);
     bool maybe_resized = false;
     CPUTLBEntryFull *full;
@@ -1736,12 +1724,12 @@ static bool mmu_lookup1(CPUArchState *env, MMULookupPageData *data,
 
     /* If the TLB entry is for a different page, reload and try again.  */
     if (!tlb_hit(tlb_addr, addr)) {
-        if (!victim_tlb_hit(env, mmu_idx, index, access_type,
+        if (!victim_tlb_hit(env_cpu(env), mmu_idx, index, access_type,
                             addr & TARGET_PAGE_MASK)) {
             tlb_fill(env_cpu(env), addr, data->size, access_type, mmu_idx, ra);
             maybe_resized = true;
-            index = tlb_index(env, mmu_idx, addr);
-            entry = tlb_entry(env, mmu_idx, addr);
+            index = tlb_index(env_cpu(env), mmu_idx, addr);
+            entry = tlb_entry(env_cpu(env), mmu_idx, addr);
         }
         tlb_addr = tlb_read_idx(entry, access_type) & ~TLB_INVALID_MASK;
     }
@@ -1849,7 +1837,7 @@ static bool mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
          */
         mmu_lookup1(env, &l->page[0], l->mmu_idx, type, ra);
         if (mmu_lookup1(env, &l->page[1], l->mmu_idx, type, ra)) {
-            uintptr_t index = tlb_index(env, l->mmu_idx, addr);
+            uintptr_t index = tlb_index(env_cpu(env), l->mmu_idx, addr);
             l->page[0].full = &env_tlb(env)->d[l->mmu_idx].fulltlb[index];
         }
 
@@ -1907,18 +1895,18 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
         goto stop_the_world;
     }
 
-    index = tlb_index(env, mmu_idx, addr);
-    tlbe = tlb_entry(env, mmu_idx, addr);
+    index = tlb_index(env_cpu(env), mmu_idx, addr);
+    tlbe = tlb_entry(env_cpu(env), mmu_idx, addr);
 
     /* Check TLB entry and enforce page permissions.  */
     tlb_addr = tlb_addr_write(tlbe);
     if (!tlb_hit(tlb_addr, addr)) {
-        if (!victim_tlb_hit(env, mmu_idx, index, MMU_DATA_STORE,
+        if (!victim_tlb_hit(env_cpu(env), mmu_idx, index, MMU_DATA_STORE,
                             addr & TARGET_PAGE_MASK)) {
             tlb_fill(env_cpu(env), addr, size,
                      MMU_DATA_STORE, mmu_idx, retaddr);
-            index = tlb_index(env, mmu_idx, addr);
-            tlbe = tlb_entry(env, mmu_idx, addr);
+            index = tlb_index(env_cpu(env), mmu_idx, addr);
+            tlbe = tlb_entry(env_cpu(env), mmu_idx, addr);
         }
         tlb_addr = tlb_addr_write(tlbe) & ~TLB_INVALID_MASK;
     }
-- 
2.34.1