From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 28/54] accel/tcg: Introduce tlb_lookup
Date: Thu, 14 Nov 2024 08:01:04 -0800
Message-ID: <20241114160131.48616-29-richard.henderson@linaro.org>
In-Reply-To: <20241114160131.48616-1-richard.henderson@linaro.org>
References: <20241114160131.48616-1-richard.henderson@linaro.org>

Unify three instances of tlb lookup (via tlb_hit, tlbtree_hit, and
tlb_fill_align) into a single tlb_lookup helper.

Use structures to avoid passing too many arguments.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier
---
 accel/tcg/cputlb.c | 369 ++++++++++++++++++++++-----------------------
 1 file changed, 178 insertions(+), 191 deletions(-)
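For context, a minimal sketch of the intended call pattern for the new
helper. The wrapper function and its argument choices below are
hypothetical (editor's illustration, not part of the patch); it mirrors
what mmu_lookup1 becomes after this change.

  /* Hypothetical caller, for illustration only. */
  static void example_tlb_lookup(CPUState *cpu, vaddr addr, MemOp memop,
                                 int mmu_idx, uintptr_t ra)
  {
      TLBLookupInput in = {
          .addr = addr,
          .ra = ra,
          .access_type = MMU_DATA_LOAD,
          /* >= 0: MemOp of a real access; negative: non-faulting probe. */
          .memop_probe = memop,
          .size = memop_size(memop),
          .mmu_idx = mmu_idx,
      };
      TLBLookupOutput out;

      /* A real access faults via tlb_fill_align, so the lookup cannot fail. */
      tlb_lookup_nofail(cpu, &out, &in);

      /*
       * out.haddr, out.flags and out.full now describe the translation;
       * out.did_tlb_fill records whether tlb_fill_align was invoked.
       */
  }

The same pattern with .memop_probe = -1 gives the non-faulting probe
used by probe_access_internal below.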
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 41b2f76cc9..a33bebf55a 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1271,6 +1271,118 @@ static inline void cpu_unaligned_access(CPUState *cpu, vaddr addr,
                              mmu_idx, retaddr);
 }
 
+typedef struct TLBLookupInput {
+    vaddr addr;
+    uintptr_t ra;
+    int memop_probe : 16;
+    unsigned int size : 8;
+    MMUAccessType access_type : 4;
+    unsigned int mmu_idx : 4;
+} TLBLookupInput;
+
+typedef struct TLBLookupOutput {
+    CPUTLBEntryFull full;
+    void *haddr;
+    int flags;
+    bool did_tlb_fill;
+} TLBLookupOutput;
+
+static bool tlb_lookup(CPUState *cpu, TLBLookupOutput *o,
+                       const TLBLookupInput *i)
+{
+    CPUTLBDesc *desc = &cpu->neg.tlb.d[i->mmu_idx];
+    CPUTLBDescFast *fast = &cpu->neg.tlb.f[i->mmu_idx];
+    vaddr addr = i->addr;
+    MMUAccessType access_type = i->access_type;
+    CPUTLBEntryFull *full;
+    CPUTLBEntryTree *node;
+    CPUTLBEntry *entry;
+    uint64_t cmp;
+    bool probe = i->memop_probe < 0;
+    MemOp memop = probe ? 0 : i->memop_probe;
+    int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
+
+    assert_cpu_is_self(cpu);
+    o->did_tlb_fill = false;
+
+    /* Primary lookup in the fast tlb. */
+    entry = tlbfast_entry(fast, addr);
+    full = &desc->fulltlb[tlbfast_index(fast, addr)];
+    cmp = tlb_read_idx(entry, access_type);
+    if (tlb_hit(cmp, addr)) {
+        goto found;
+    }
+
+    /* Secondary lookup in the IntervalTree. */
+    node = tlbtree_lookup_addr(desc, addr);
+    if (node) {
+        cmp = tlb_read_idx(&node->copy, access_type);
+        if (tlb_hit(cmp, addr)) {
+            /* Install the cached entry. */
+            qemu_spin_lock(&cpu->neg.tlb.c.lock);
+            copy_tlb_helper_locked(entry, &node->copy);
+            qemu_spin_unlock(&cpu->neg.tlb.c.lock);
+            *full = node->full;
+            goto found;
+        }
+    }
+
+    /* Finally, query the target hook. */
+    if (!tlb_fill_align(cpu, addr, access_type, i->mmu_idx,
+                        memop, i->size, probe, i->ra)) {
+        tcg_debug_assert(probe);
+        return false;
+    }
+
+    o->did_tlb_fill = true;
+
+    entry = tlbfast_entry(fast, addr);
+    full = &desc->fulltlb[tlbfast_index(fast, addr)];
+    cmp = tlb_read_idx(entry, access_type);
+    /*
+     * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
+     * to force the next access through tlb_fill_align.  We've just
+     * called tlb_fill_align, so we know that this entry *is* valid.
+     */
+    flags &= ~TLB_INVALID_MASK;
+    goto done;
+
+ found:
+    /* Alignment has not been checked by tlb_fill_align. */
+    {
+        int a_bits = memop_alignment_bits(memop);
+
+        /*
+         * The TLB_CHECK_ALIGNED check differs from the normal alignment
+         * check, in that this is based on the atomicity of the operation.
+         * The intended use case is the ARM memory type field of each PTE,
+         * where access to pages with Device memory type require alignment.
+         */
+        if (unlikely(flags & TLB_CHECK_ALIGNED)) {
+            int at_bits = memop_atomicity_bits(memop);
+            a_bits = MAX(a_bits, at_bits);
+        }
+        if (unlikely(addr & ((1 << a_bits) - 1))) {
+            cpu_unaligned_access(cpu, addr, access_type, i->mmu_idx, i->ra);
+        }
+    }
+
+ done:
+    flags &= cmp;
+    flags |= full->slow_flags[access_type];
+    o->flags = flags;
+    o->full = *full;
+    o->haddr = (void *)((uintptr_t)addr + entry->addend);
+    return true;
+}
+
+static void tlb_lookup_nofail(CPUState *cpu, TLBLookupOutput *o,
+                              const TLBLookupInput *i)
+{
+    bool ok = tlb_lookup(cpu, o, i);
+    tcg_debug_assert(ok);
+}
+
 static MemoryRegionSection *
 io_prepare(hwaddr *out_offset, CPUState *cpu, hwaddr xlat,
            MemTxAttrs attrs, vaddr addr, uintptr_t retaddr)
@@ -1303,40 +1415,6 @@ static void io_failed(CPUState *cpu, CPUTLBEntryFull *full, vaddr addr,
     }
 }
 
-/*
- * Return true if ADDR is present in the interval tree,
- * and has been copied back to the main tlb.
- */
-static bool tlbtree_hit(CPUState *cpu, int mmu_idx,
-                        MMUAccessType access_type, vaddr addr)
-{
-    CPUTLBDesc *desc = &cpu->neg.tlb.d[mmu_idx];
-    CPUTLBDescFast *fast = &cpu->neg.tlb.f[mmu_idx];
-    CPUTLBEntryTree *node;
-    size_t index;
-
-    assert_cpu_is_self(cpu);
-    node = tlbtree_lookup_addr(desc, addr);
-    if (!node) {
-        /* There is no cached mapping for this page. */
-        return false;
-    }
-
-    if (!tlb_hit(tlb_read_idx(&node->copy, access_type), addr)) {
-        /* This access is not permitted. */
-        return false;
-    }
-
-    /* Install the cached entry. */
-    index = tlbfast_index(fast, addr);
-    qemu_spin_lock(&cpu->neg.tlb.c.lock);
-    copy_tlb_helper_locked(&fast->table[index], &node->copy);
-    qemu_spin_unlock(&cpu->neg.tlb.c.lock);
-
-    desc->fulltlb[index] = node->full;
-    return true;
-}
-
 static void notdirty_write(CPUState *cpu, vaddr mem_vaddr, unsigned size,
                            CPUTLBEntryFull *full, uintptr_t retaddr)
 {
@@ -1367,40 +1445,26 @@ static int probe_access_internal(CPUState *cpu, vaddr addr,
                                  void **phost, CPUTLBEntryFull *pfull,
                                  uintptr_t retaddr, bool check_mem_cbs)
 {
-    uintptr_t index = tlb_index(cpu, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr);
-    uint64_t tlb_addr = tlb_read_idx(entry, access_type);
-    int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
-    CPUTLBEntryFull *full;
+    TLBLookupInput i = {
+        .addr = addr,
+        .ra = retaddr,
+        .access_type = access_type,
+        .size = fault_size,
+        .memop_probe = nonfault ? -1 : 0,
+        .mmu_idx = mmu_idx,
+    };
+    TLBLookupOutput o;
+    int flags;
 
-    if (!tlb_hit(tlb_addr, addr)) {
-        if (!tlbtree_hit(cpu, mmu_idx, access_type, addr)) {
-            if (!tlb_fill_align(cpu, addr, access_type, mmu_idx,
-                                0, fault_size, nonfault, retaddr)) {
-                /* Non-faulting page table read failed. */
-                *phost = NULL;
-                memset(pfull, 0, sizeof(*pfull));
-                return TLB_INVALID_MASK;
-            }
-
-            /* TLB resize via tlb_fill_align may have moved the entry. */
-            index = tlb_index(cpu, mmu_idx, addr);
-            entry = tlb_entry(cpu, mmu_idx, addr);
-
-            /*
-             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
-             * to force the next access through tlb_fill_align.  We've just
-             * called tlb_fill_align, so we know that this entry *is* valid.
-             */
-            flags &= ~TLB_INVALID_MASK;
-        }
-        tlb_addr = tlb_read_idx(entry, access_type);
+    if (!tlb_lookup(cpu, &o, &i)) {
+        /* Non-faulting page table read failed. */
+        *phost = NULL;
+        memset(pfull, 0, sizeof(*pfull));
+        return TLB_INVALID_MASK;
     }
-    flags &= tlb_addr;
 
-    full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
-    flags |= full->slow_flags[access_type];
-    *pfull = *full;
+    *pfull = o.full;
+    flags = o.flags;
 
     /*
      * Fold all "mmio-like" bits, and required plugin callbacks, to TLB_MMIO.
@@ -1415,7 +1479,7 @@ static int probe_access_internal(CPUState *cpu, vaddr addr,
     }
 
     /* Everything else is RAM. */
-    *phost = (void *)((uintptr_t)addr + entry->addend);
+    *phost = o.haddr;
     return flags;
 }
 
@@ -1625,6 +1689,7 @@ typedef struct MMULookupPageData {
     vaddr addr;
     int flags;
     int size;
+    TLBLookupOutput o;
 } MMULookupPageData;
 
 typedef struct MMULookupLocals {
@@ -1644,67 +1709,25 @@ typedef struct MMULookupLocals {
  *
  * Resolve the translation for the one page at @data.addr, filling in
  * the rest of @data with the results.  If the translation fails,
- * tlb_fill_align will longjmp out.  Return true if the softmmu tlb for
- * @mmu_idx may have resized.
+ * tlb_fill_align will longjmp out.
  */
-static bool mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop,
+static void mmu_lookup1(CPUState *cpu, MMULookupPageData *data, MemOp memop,
                         int mmu_idx, MMUAccessType access_type, uintptr_t ra)
 {
-    vaddr addr = data->addr;
-    uintptr_t index = tlb_index(cpu, mmu_idx, addr);
-    CPUTLBEntry *entry = tlb_entry(cpu, mmu_idx, addr);
-    uint64_t tlb_addr = tlb_read_idx(entry, access_type);
-    bool maybe_resized = false;
-    CPUTLBEntryFull *full;
-    int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
+    TLBLookupInput i = {
+        .addr = data->addr,
+        .ra = ra,
+        .access_type = access_type,
+        .memop_probe = memop,
+        .size = data->size,
+        .mmu_idx = mmu_idx,
+    };
 
-    /* If the TLB entry is for a different page, reload and try again. */
-    if (!tlb_hit(tlb_addr, addr)) {
-        if (!tlbtree_hit(cpu, mmu_idx, access_type, addr)) {
-            tlb_fill_align(cpu, addr, access_type, mmu_idx,
-                           memop, data->size, false, ra);
-            maybe_resized = true;
-            index = tlb_index(cpu, mmu_idx, addr);
-            entry = tlb_entry(cpu, mmu_idx, addr);
-            /*
-             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
-             * to force the next access through tlb_fill.  We've just
-             * called tlb_fill, so we know that this entry *is* valid.
-             */
-            flags &= ~TLB_INVALID_MASK;
-        }
-        tlb_addr = tlb_read_idx(entry, access_type);
-    }
+    tlb_lookup_nofail(cpu, &data->o, &i);
 
-    full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
-    flags = tlb_addr & (TLB_FLAGS_MASK & ~TLB_FORCE_SLOW);
-    flags |= full->slow_flags[access_type];
-
-    if (likely(!maybe_resized)) {
-        /* Alignment has not been checked by tlb_fill_align. */
-        int a_bits = memop_alignment_bits(memop);
-
-        /*
-         * This alignment check differs from the one above, in that this is
-         * based on the atomicity of the operation.  The intended use case is
-         * the ARM memory type field of each PTE, where access to pages with
-         * Device memory type require alignment.
-         */
-        if (unlikely(flags & TLB_CHECK_ALIGNED)) {
-            int at_bits = memop_atomicity_bits(memop);
-            a_bits = MAX(a_bits, at_bits);
-        }
-        if (unlikely(addr & ((1 << a_bits) - 1))) {
-            cpu_unaligned_access(cpu, addr, access_type, mmu_idx, ra);
-        }
-    }
-
-    data->full = full;
-    data->flags = flags;
-    /* Compute haddr speculatively; depending on flags it might be invalid. */
-    data->haddr = (void *)((uintptr_t)addr + entry->addend);
-
-    return maybe_resized;
+    data->full = &data->o.full;
+    data->flags = data->o.flags;
+    data->haddr = data->o.haddr;
 }
 
 /**
@@ -1785,15 +1808,9 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         l->page[1].size = l->page[0].size - size0;
         l->page[0].size = size0;
 
-        /*
-         * Lookup both pages, recognizing exceptions from either.  If the
-         * second lookup potentially resized, refresh first CPUTLBEntryFull.
-         */
+        /* Lookup both pages, recognizing exceptions from either. */
         mmu_lookup1(cpu, &l->page[0], l->memop, l->mmu_idx, type, ra);
-        if (mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra)) {
-            uintptr_t index = tlb_index(cpu, l->mmu_idx, addr);
-            l->page[0].full = &cpu->neg.tlb.d[l->mmu_idx].fulltlb[index];
-        }
+        mmu_lookup1(cpu, &l->page[1], 0, l->mmu_idx, type, ra);
 
         flags = l->page[0].flags | l->page[1].flags;
         if (unlikely(flags & (TLB_WATCHPOINT | TLB_NOTDIRTY))) {
@@ -1819,49 +1836,26 @@ static bool mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
 static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
                                int size, uintptr_t retaddr)
 {
-    uintptr_t mmu_idx = get_mmuidx(oi);
-    MemOp mop = get_memop(oi);
-    uintptr_t index;
-    CPUTLBEntry *tlbe;
-    void *hostaddr;
-    CPUTLBEntryFull *full;
-    bool did_tlb_fill = false;
-    int flags;
+    TLBLookupInput i = {
+        .addr = addr,
+        .ra = retaddr - GETPC_ADJ,
+        .access_type = MMU_DATA_STORE,
+        .memop_probe = get_memop(oi),
+        .mmu_idx = get_mmuidx(oi),
+    };
+    TLBLookupOutput o;
+    int flags, wp_flags;
 
-    tcg_debug_assert(mmu_idx < NB_MMU_MODES);
-
-    /* Adjust the given return address. */
-    retaddr -= GETPC_ADJ;
-
-    index = tlb_index(cpu, mmu_idx, addr);
-    tlbe = tlb_entry(cpu, mmu_idx, addr);
-
-    /* Check TLB entry and enforce page permissions. */
-    flags = TLB_FLAGS_MASK;
-    if (!tlb_hit(tlb_addr_write(tlbe), addr)) {
-        if (!tlbtree_hit(cpu, mmu_idx, MMU_DATA_STORE, addr)) {
-            tlb_fill_align(cpu, addr, MMU_DATA_STORE, mmu_idx,
-                           mop, size, false, retaddr);
-            did_tlb_fill = true;
-            index = tlb_index(cpu, mmu_idx, addr);
-            tlbe = tlb_entry(cpu, mmu_idx, addr);
-            /*
-             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
-             * to force the next access through tlb_fill.  We've just
-             * called tlb_fill, so we know that this entry *is* valid.
-             */
-            flags &= ~TLB_INVALID_MASK;
-        }
-    }
-    full = &cpu->neg.tlb.d[mmu_idx].fulltlb[index];
+    i.size = memop_size(i.memop_probe);
+    tlb_lookup_nofail(cpu, &o, &i);
 
     /*
      * Let the guest notice RMW on a write-only page.
     * We have just verified that the page is writable.
     */
-    if (unlikely(!(full->prot & PAGE_READ))) {
-        tlb_fill_align(cpu, addr, MMU_DATA_LOAD, mmu_idx,
-                       0, size, false, retaddr);
+    if (unlikely(!(o.full.prot & PAGE_READ))) {
+        tlb_fill_align(cpu, addr, MMU_DATA_LOAD, i.mmu_idx,
+                       0, i.size, false, i.ra);
         /*
          * Since we don't support reads and writes to different
          * addresses, and we do have the proper page loaded for
@@ -1871,12 +1865,13 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
     }
 
     /* Enforce guest required alignment, if not handled by tlb_fill_align. */
-    if (!did_tlb_fill && (addr & ((1 << memop_alignment_bits(mop)) - 1))) {
-        cpu_unaligned_access(cpu, addr, MMU_DATA_STORE, mmu_idx, retaddr);
+    if (!o.did_tlb_fill
+        && (addr & ((1 << memop_alignment_bits(i.memop_probe)) - 1))) {
+        cpu_unaligned_access(cpu, addr, MMU_DATA_STORE, i.mmu_idx, i.ra);
     }
 
     /* Enforce qemu required alignment. */
-    if (unlikely(addr & (size - 1))) {
+    if (unlikely(addr & (i.size - 1))) {
         /*
          * We get here if guest alignment was not requested, or was not
          * enforced by cpu_unaligned_access or tlb_fill_align above.
@@ -1886,41 +1881,33 @@ static void *atomic_mmu_lookup(CPUState *cpu, vaddr addr, MemOpIdx oi,
         goto stop_the_world;
     }
 
-    /* Collect tlb flags for read and write. */
-    flags &= tlbe->addr_read | tlb_addr_write(tlbe);
-
     /* Notice an IO access or a needs-MMU-lookup access */
+    flags = o.flags;
     if (unlikely(flags & (TLB_MMIO | TLB_DISCARD_WRITE))) {
         /* There's really nothing that can be done to support this
            apart from stop-the-world.  */
         goto stop_the_world;
     }
 
-    hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
-
     if (unlikely(flags & TLB_NOTDIRTY)) {
-        notdirty_write(cpu, addr, size, full, retaddr);
+        notdirty_write(cpu, addr, i.size, &o.full, i.ra);
     }
 
-    if (unlikely(flags & TLB_FORCE_SLOW)) {
-        int wp_flags = 0;
-
-        if (full->slow_flags[MMU_DATA_STORE] & TLB_WATCHPOINT) {
-            wp_flags |= BP_MEM_WRITE;
-        }
-        if (full->slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
-            wp_flags |= BP_MEM_READ;
-        }
-        if (wp_flags) {
-            cpu_check_watchpoint(cpu, addr, size,
-                                 full->attrs, wp_flags, retaddr);
-        }
+    wp_flags = 0;
+    if (flags & TLB_WATCHPOINT) {
+        wp_flags |= BP_MEM_WRITE;
+    }
+    if (o.full.slow_flags[MMU_DATA_LOAD] & TLB_WATCHPOINT) {
+        wp_flags |= BP_MEM_READ;
+    }
+    if (unlikely(wp_flags)) {
+        cpu_check_watchpoint(cpu, addr, i.size, o.full.attrs, wp_flags, i.ra);
     }
 
-    return hostaddr;
+    return o.haddr;
 
  stop_the_world:
-    cpu_loop_exit_atomic(cpu, retaddr);
+    cpu_loop_exit_atomic(cpu, i.ra);
 }
 
 /*
-- 
2.43.0
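
Editor's addendum, outside the patch: the single memop_probe bit-field
folds two inputs into one value, a non-negative MemOp for a real access,
or a negative value to request a non-faulting probe (compare
"bool probe = i->memop_probe < 0" in tlb_lookup above). A small
standalone sketch of that encoding; the struct and names here are
invented for illustration and are not QEMU code.

  #include <assert.h>
  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-in for TLBLookupInput.memop_probe. */
  typedef struct ExampleInput {
      signed int memop_probe : 16;    /* the patch declares this as plain 'int' */
  } ExampleInput;

  int main(void)
  {
      ExampleInput real  = { .memop_probe = 2 };    /* some non-negative MemOp */
      ExampleInput probe = { .memop_probe = -1 };   /* non-faulting probe request */

      /* Mirrors tlb_lookup: probe = memop_probe < 0; memop = probe ? 0 : memop_probe. */
      bool real_is_probe  = real.memop_probe < 0;
      bool probe_is_probe = probe.memop_probe < 0;
      int memop = real_is_probe ? 0 : real.memop_probe;

      assert(!real_is_probe && probe_is_probe && memop == 2);
      printf("real access: memop=%d; probe request detected: %d\n",
             memop, probe_is_probe ? 1 : 0);
      return 0;
  }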