From nobody Tue Feb 10 20:31:01 2026
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: anjo@rev.ng, ale@rev.ng, philmd@linaro.org
Subject: [PATCH v2 23/24] accel/tcg: Unify user and softmmu do_[st|ld]*_mmu()
Date: Wed, 13 Sep 2023 19:44:34 -0700
Message-Id: <20230914024435.1381329-24-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230914024435.1381329-1-richard.henderson@linaro.org>
References: <20230914024435.1381329-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Anton Johansson <anjo@rev.ng>

The prototype of do_[st|ld]*_mmu() is unified between system- and
user-mode, allowing a large chunk of the helper_[st|ld]*() and
cpu_[st|ld]*() functions to be expressed in the same manner between
both modes. These functions will be moved to ldst_common.c.inc in a
following commit.

Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230912153428.17816-11-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c    |  16 ++--
 accel/tcg/user-exec.c | 183 ++++++++++++++++++++++++------------------
 2 files changed, 117 insertions(+), 82 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a7f2c848ad..cbab7e2648 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2937,18 +2937,24 @@ static void do_st_8(CPUState *cpu, MMULookupPageData *p, uint64_t val,
     }
 }
 
-void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t ra)
+static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     MMULookupLocals l;
     bool crosspage;
 
-    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    crosspage = mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_STORE, &l);
+    crosspage = mmu_lookup(cpu, addr, oi, ra, MMU_DATA_STORE, &l);
     tcg_debug_assert(!crosspage);
 
-    do_st_1(env_cpu(env), &l.page[0], val, l.mmu_idx, ra);
+    do_st_1(cpu, &l.page[0], val, l.mmu_idx, ra);
+}
+
+void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index f9f5cd1770..a6593d0e0f 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -941,7 +941,7 @@ void page_reset_target_data(target_ulong start, target_ulong last) { }
 
 /* The softmmu versions of these helpers are in cputlb.c. */
 
-static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
+static void *cpu_mmu_lookup(CPUState *cpu, vaddr addr,
                             MemOp mop, uintptr_t ra, MMUAccessType type)
 {
     int a_bits = get_alignment_bits(mop);
@@ -949,25 +949,24 @@ static void *cpu_mmu_lookup(CPUArchState *env, vaddr addr,
 
     /* Enforce guest required alignment. */
     if (unlikely(addr & ((1 << a_bits) - 1))) {
-        cpu_loop_exit_sigbus(env_cpu(env), addr, type, ra);
+        cpu_loop_exit_sigbus(cpu, addr, type, ra);
     }
 
-    ret = g2h(env_cpu(env), addr);
+    ret = g2h(cpu, addr);
     set_helper_retaddr(ra);
     return ret;
 }
 
 #include "ldst_atomicity.c.inc"
 
-static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
-                          MemOp mop, uintptr_t ra)
+static uint8_t do_ld1_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                          uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint8_t ret;
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_8);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
+    haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, access_type);
     ret = ldub_p(haddr);
     clear_helper_retaddr();
     return ret;
@@ -976,33 +975,38 @@ static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
 tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld1_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    return do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    return (int8_t)do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
                     MemOpIdx oi, uintptr_t ra)
 {
-    uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra);
+    uint8_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    ret = do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
-                           MemOp mop, uintptr_t ra)
+static uint16_t do_ld2_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                           uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint16_t ret;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_16);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_2(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
+    ret = load_atom_2(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1014,33 +1018,38 @@ static uint16_t do_ld2_mmu(CPUArchState *env, abi_ptr addr,
 tcg_target_ulong helper_lduw_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld2_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    return do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return (int16_t)do_ld2_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    return (int16_t)do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint16_t cpu_ldw_mmu(CPUArchState *env, abi_ptr addr,
                      MemOpIdx oi, uintptr_t ra)
 {
-    uint16_t ret = do_ld2_mmu(env, addr, get_memop(oi), ra);
+    uint16_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    ret = do_ld2_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
-                           MemOp mop, uintptr_t ra)
+static uint32_t do_ld4_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                           uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint32_t ret;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_32);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_4(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
+    ret = load_atom_4(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1052,33 +1061,38 @@ static uint32_t do_ld4_mmu(CPUArchState *env, abi_ptr addr,
 tcg_target_ulong helper_ldul_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld4_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    return do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, uint64_t addr,
                                  MemOpIdx oi, uintptr_t ra)
 {
-    return (int32_t)do_ld4_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    return (int32_t)do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint32_t cpu_ldl_mmu(CPUArchState *env, abi_ptr addr,
                      MemOpIdx oi, uintptr_t ra)
 {
-    uint32_t ret = do_ld4_mmu(env, addr, get_memop(oi), ra);
+    uint32_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    ret = do_ld4_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
-                           MemOp mop, uintptr_t ra)
+static uint64_t do_ld8_mmu(CPUState *cpu, vaddr addr, MemOpIdx oi,
+                           uintptr_t ra, MMUAccessType access_type)
 {
     void *haddr;
     uint64_t ret;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_64);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_8(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, access_type);
+    ret = load_atom_8(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1090,27 +1104,32 @@ static uint64_t do_ld8_mmu(CPUArchState *env, abi_ptr addr,
 uint64_t helper_ldq_mmu(CPUArchState *env, uint64_t addr,
                         MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld8_mmu(env, addr, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    return do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
 }
 
 uint64_t cpu_ldq_mmu(CPUArchState *env, abi_ptr addr,
                      MemOpIdx oi, uintptr_t ra)
 {
-    uint64_t ret = do_ld8_mmu(env, addr, get_memop(oi), ra);
+    uint64_t ret;
+
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    ret = do_ld8_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
-                          MemOp mop, uintptr_t ra)
+static Int128 do_ld16_mmu(CPUState *cpu, abi_ptr addr,
+                          MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
     Int128 ret;
+    MemOp mop = get_memop(oi);
 
     tcg_debug_assert((mop & MO_SIZE) == MO_128);
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
-    ret = load_atom_16(env_cpu(env), ra, haddr, mop);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_LOAD);
+    ret = load_atom_16(cpu, ra, haddr, mop);
     clear_helper_retaddr();
 
     if (mop & MO_BSWAP) {
@@ -1122,7 +1141,7 @@ static Int128 do_ld16_mmu(CPUArchState *env, abi_ptr addr,
 Int128 helper_ld16_mmu(CPUArchState *env, uint64_t addr,
                        MemOpIdx oi, uintptr_t ra)
 {
-    return do_ld16_mmu(env, addr, get_memop(oi), ra);
+    return do_ld16_mmu(env_cpu(env), addr, get_memop(oi), ra);
 }
 
 Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
@@ -1133,19 +1152,18 @@ Int128 helper_ld_i128(CPUArchState *env, uint64_t addr, MemOpIdx oi)
 Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
                     MemOpIdx oi, uintptr_t ra)
 {
-    Int128 ret = do_ld16_mmu(env, addr, get_memop(oi), ra);
+    Int128 ret = do_ld16_mmu(env_cpu(env), addr, get_memop(oi), ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
-static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st1_mmu(CPUState *cpu, vaddr addr, uint8_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_8);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, get_memop(oi), ra, MMU_DATA_STORE);
     stb_p(haddr, val);
     clear_helper_retaddr();
 }
@@ -1153,134 +1171,145 @@ static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
 void helper_stb_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st1_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st1_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
+    do_st1_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st2_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st2_mmu(CPUState *cpu, vaddr addr, uint16_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_16);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap16(val);
     }
-    store_atom_2(env_cpu(env), ra, haddr, mop, val);
+    store_atom_2(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_stw_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st2_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    do_st2_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stw_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st2_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_16);
+    do_st2_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st4_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st4_mmu(CPUState *cpu, vaddr addr, uint32_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_32);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap32(val);
     }
-    store_atom_4(env_cpu(env), ra, haddr, mop, val);
+    store_atom_4(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_stl_mmu(CPUArchState *env, uint64_t addr, uint32_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st4_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    do_st4_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stl_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st4_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_32);
+    do_st4_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st8_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
-                       MemOp mop, uintptr_t ra)
+static void do_st8_mmu(CPUState *cpu, vaddr addr, uint64_t val,
+                       MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOp mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_64);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap64(val);
     }
-    store_atom_8(env_cpu(env), ra, haddr, mop, val);
+    store_atom_8(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_stq_mmu(CPUArchState *env, uint64_t addr, uint64_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    do_st8_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    do_st8_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void cpu_stq_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
                  MemOpIdx oi, uintptr_t ra)
 {
-    do_st8_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_64);
+    do_st8_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
-static void do_st16_mmu(CPUArchState *env, abi_ptr addr, Int128 val,
-                        MemOp mop, uintptr_t ra)
+static void do_st16_mmu(CPUState *cpu, vaddr addr, Int128 val,
+                        MemOpIdx oi, uintptr_t ra)
 {
     void *haddr;
+    MemOpIdx mop = get_memop(oi);
 
-    tcg_debug_assert((mop & MO_SIZE) == MO_128);
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
-    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    haddr = cpu_mmu_lookup(cpu, addr, mop, ra, MMU_DATA_STORE);
 
     if (mop & MO_BSWAP) {
         val = bswap128(val);
     }
-    store_atom_16(env_cpu(env), ra, haddr, mop, val);
+    store_atom_16(cpu, ra, haddr, mop, val);
     clear_helper_retaddr();
 }
 
 void helper_st16_mmu(CPUArchState *env, uint64_t addr, Int128 val,
                      MemOpIdx oi, uintptr_t ra)
 {
-    do_st16_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    do_st16_mmu(env_cpu(env), addr, val, oi, ra);
 }
 
 void helper_st_i128(CPUArchState *env, uint64_t addr, Int128 val, MemOpIdx oi)
 {
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
     helper_st16_mmu(env, addr, val, oi, GETPC());
 }
 
 void cpu_st16_mmu(CPUArchState *env, abi_ptr addr,
                   Int128 val, MemOpIdx oi, uintptr_t ra)
 {
-    do_st16_mmu(env, addr, val, get_memop(oi), ra);
+    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_128);
+    do_st16_mmu(env_cpu(env), addr, val, oi, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
@@ -1330,7 +1359,7 @@ uint8_t cpu_ldb_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint8_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
     ret = ldub_p(haddr);
     clear_helper_retaddr();
     return ret;
@@ -1342,7 +1371,7 @@ uint16_t cpu_ldw_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint16_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
     ret = lduw_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
@@ -1357,7 +1386,7 @@ uint32_t cpu_ldl_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint32_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_INST_FETCH);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_INST_FETCH);
     ret = ldl_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
@@ -1372,7 +1401,7 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint64_t ret;
 
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
+    haddr = cpu_mmu_lookup(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
    ret = ldq_p(haddr);
     clear_helper_retaddr();
     if (get_memop(oi) & MO_BSWAP) {
-- 
2.34.1
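
For readers skimming the diff, the shape that this unification gives every
wrapper is easiest to see in one example. The sketch below simply restates
the user-mode helper_ldub_mmu() wrapper from the patch above; it is
illustrative only, not the shared ldst_common.c.inc code, which only lands
in the following commit:

/*
 * Sketch of the unified wrapper pattern after this patch (copied from
 * the user-mode helper_ldub_mmu() in the diff above; the shared version
 * is introduced later in ldst_common.c.inc).
 */
tcg_target_ulong helper_ldub_mmu(CPUArchState *env, uint64_t addr,
                                 MemOpIdx oi, uintptr_t ra)
{
    /* The size check moves out of do_ld1_mmu() into the wrapper. */
    tcg_debug_assert((get_memop(oi) & MO_SIZE) == MO_8);
    /*
     * do_ld1_mmu() now takes a CPUState and a MemOpIdx in both the
     * user-mode and system-mode builds, so the same wrapper body can
     * eventually be shared between user-exec.c and cputlb.c.
     */
    return do_ld1_mmu(env_cpu(env), addr, oi, ra, MMU_DATA_LOAD);
}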