From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: Peter Maydell
Subject: [PULL 06/80] accel/tcg: Implement helper_{ld,st}*_mmu for user-only
Date: Tue, 16 May 2023 12:40:31 -0700
Message-Id: <20230516194145.1749305-7-richard.henderson@linaro.org>
In-Reply-To: <20230516194145.1749305-1-richard.henderson@linaro.org>
References: <20230516194145.1749305-1-richard.henderson@linaro.org>

TCG backends may need to defer to a helper to implement the atomicity
required by a given operation. Mirror the interface used in system mode.

Reviewed-by: Peter Maydell
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/tcg/tcg-ldst.h |   6 +-
 accel/tcg/user-exec.c  | 393 ++++++++++++++++++++++++++++-------------
 tcg/tcg.c              |   6 +-
 3 files changed, 278 insertions(+), 127 deletions(-)

diff --git a/include/tcg/tcg-ldst.h b/include/tcg/tcg-ldst.h
index 3d897ca942..57fafa14b1 100644
--- a/include/tcg/tcg-ldst.h
+++ b/include/tcg/tcg-ldst.h
@@ -25,8 +25,6 @@
 #ifndef TCG_LDST_H
 #define TCG_LDST_H
 
-#ifdef CONFIG_SOFTMMU
-
 /* Value zero-extended to tcg register size. */
 tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr,
                                  MemOpIdx oi, uintptr_t retaddr);
@@ -58,10 +56,10 @@ void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
 void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
                     MemOpIdx oi, uintptr_t retaddr);
 
-#else
+#ifdef CONFIG_USER_ONLY
 
 G_NORETURN void helper_unaligned_ld(CPUArchState *env, target_ulong addr);
 G_NORETURN void helper_unaligned_st(CPUArchState *env, target_ulong addr);
 
-#endif /* CONFIG_SOFTMMU */
+#endif /* CONFIG_USER_ONLY */
 #endif /* TCG_LDST_H */
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index b89fa35a83..d9f9766b7f 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -889,21 +889,6 @@ void page_reset_target_data(target_ulong start, target_ulong last) { }
 
 /* The softmmu versions of these helpers are in cputlb.c. */
 
-/*
- * Verify that we have passed the correct MemOp to the correct function.
- *
- * We could present one function to target code, and dispatch based on
- * the MemOp, but so far we have worked hard to avoid an indirect function
- * call along the memory path.
- */
-static void validate_memop(MemOpIdx oi, MemOp expected)
-{
-#ifdef CONFIG_DEBUG_TCG
-    MemOp have = get_memop(oi) & (MO_SIZE | MO_BSWAP);
-    assert(have == expected);
-#endif
-}
-
 void helper_unaligned_ld(CPUArchState *env, target_ulong addr)
 {
     cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_LOAD, GETPC());
@@ -914,10 +899,9 @@ void helper_unaligned_st(CPUArchState *env, target_ulong addr)
     cpu_loop_exit_sigbus(env_cpu(env), addr, MMU_DATA_STORE, GETPC());
 }
 
-static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr,
-                            MemOpIdx oi, uintptr_t ra, MMUAccessType type)
+static void *cpu_mmu_lookup(CPUArchState *env, abi_ptr addr,
+                            MemOp mop, uintptr_t ra, MMUAccessType type)
 {
-    MemOp mop = get_memop(oi);
     int a_bits = get_alignment_bits(mop);
     void *ret;
 
@@ -933,100 +917,206 @@ static void *cpu_mmu_lookup(CPUArchState *env, target_ulong addr,
 
 #include "ldst_atomicity.c.inc"
 
-uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
-                    MemOpIdx oi, uintptr_t ra)
+static uint8_t do_ld1_mmu(CPUArchState *env, abi_ptr addr,
+                          MemOp mop, uintptr_t ra)
 {
     void *haddr;
     uint8_t ret;
 
-    validate_memop(oi, MO_UB);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
+    tcg_debug_assert((mop & MO_SIZE) == MO_8);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
     ret = ldub_p(haddr);
     clear_helper_retaddr();
+    return ret;
+}
+
+tcg_target_ulong helper_ldub_mmu(CPUArchState *env, target_ulong addr,
+                                 MemOpIdx oi, uintptr_t ra)
+{
+    return do_ld1_mmu(env, addr, get_memop(oi), ra);
+}
+
+tcg_target_ulong helper_ldsb_mmu(CPUArchState *env, target_ulong addr,
+                                 MemOpIdx oi, uintptr_t ra)
+{
+    return (int8_t)do_ld1_mmu(env, addr, get_memop(oi), ra);
+}
+
+uint8_t cpu_ldb_mmu(CPUArchState *env, abi_ptr addr,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    uint8_t ret = do_ld1_mmu(env, addr, get_memop(oi), ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return ret;
 }
 
+static uint16_t do_ld2_he_mmu(CPUArchState *env, abi_ptr addr,
+                              MemOp mop, uintptr_t ra)
+{
+    void *haddr;
+    uint16_t ret;
+
+    tcg_debug_assert((mop & MO_SIZE) == MO_16);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
+    ret = load_atom_2(env, ra, haddr, mop);
+    clear_helper_retaddr();
+    return ret;
+}
+
+tcg_target_ulong helper_lduw_mmu(CPUArchState *env, target_ulong addr,
+                                 MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+    uint16_t ret = do_ld2_he_mmu(env, addr, mop, ra);
+
+    if (mop & MO_BSWAP) {
+        ret = bswap16(ret);
+    }
+    return ret;
+}
+
+tcg_target_ulong helper_ldsw_mmu(CPUArchState *env, target_ulong addr,
+                                 MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+    int16_t ret = do_ld2_he_mmu(env, addr, mop, ra);
+
+    if (mop & MO_BSWAP) {
+        ret = bswap16(ret);
+    }
+    return ret;
+}
+
 uint16_t cpu_ldw_be_mmu(CPUArchState *env, abi_ptr addr,
                         MemOpIdx oi, uintptr_t ra)
 {
-    void *haddr;
+    MemOp mop = get_memop(oi);
     uint16_t ret;
 
-    validate_memop(oi, MO_BEUW);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
-    ret = load_atom_2(env, ra, haddr, get_memop(oi));
-    clear_helper_retaddr();
+    tcg_debug_assert((mop & MO_BSWAP) == MO_BE);
+    ret = do_ld2_he_mmu(env, addr, mop, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return cpu_to_be16(ret);
 }
 
-uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr,
-                        MemOpIdx oi, uintptr_t ra)
-{
-    void *haddr;
-    uint32_t ret;
-
-    validate_memop(oi, MO_BEUL);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
-    ret = load_atom_4(env, ra, haddr, get_memop(oi));
-    clear_helper_retaddr();
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    return cpu_to_be32(ret);
-}
-
-uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr,
-                        MemOpIdx oi, uintptr_t ra)
-{
-    void *haddr;
-    uint64_t ret;
-
-    validate_memop(oi, MO_BEUQ);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
-    ret = load_atom_8(env, ra, haddr, get_memop(oi));
-    clear_helper_retaddr();
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
-    return cpu_to_be64(ret);
-}
-
 uint16_t cpu_ldw_le_mmu(CPUArchState *env, abi_ptr addr,
                         MemOpIdx oi, uintptr_t ra)
 {
-    void *haddr;
+    MemOp mop = get_memop(oi);
     uint16_t ret;
 
-    validate_memop(oi, MO_LEUW);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
-    ret = load_atom_2(env, ra, haddr, get_memop(oi));
-    clear_helper_retaddr();
+    tcg_debug_assert((mop & MO_BSWAP) == MO_LE);
+    ret = do_ld2_he_mmu(env, addr, mop, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return cpu_to_le16(ret);
 }
 
+static uint32_t do_ld4_he_mmu(CPUArchState *env, abi_ptr addr,
+                              MemOp mop, uintptr_t ra)
+{
+    void *haddr;
+    uint32_t ret;
+
+    tcg_debug_assert((mop & MO_SIZE) == MO_32);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
+    ret = load_atom_4(env, ra, haddr, mop);
+    clear_helper_retaddr();
+    return ret;
+}
+
+tcg_target_ulong helper_ldul_mmu(CPUArchState *env, target_ulong addr,
+                                 MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+    uint32_t ret = do_ld4_he_mmu(env, addr, mop, ra);
+
+    if (mop & MO_BSWAP) {
+        ret = bswap32(ret);
+    }
+    return ret;
+}
+
+tcg_target_ulong helper_ldsl_mmu(CPUArchState *env, target_ulong addr,
+                                 MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+    int32_t ret = do_ld4_he_mmu(env, addr, mop, ra);
+
+    if (mop & MO_BSWAP) {
+        ret = bswap32(ret);
+    }
+    return ret;
+}
+
+uint32_t cpu_ldl_be_mmu(CPUArchState *env, abi_ptr addr,
+                        MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+    uint32_t ret;
+
+    tcg_debug_assert((mop & MO_BSWAP) == MO_BE);
+    ret = do_ld4_he_mmu(env, addr, mop, ra);
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
+    return cpu_to_be32(ret);
+}
+
 uint32_t cpu_ldl_le_mmu(CPUArchState *env, abi_ptr addr,
                         MemOpIdx oi, uintptr_t ra)
 {
-    void *haddr;
+    MemOp mop = get_memop(oi);
     uint32_t ret;
 
-    validate_memop(oi, MO_LEUL);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
-    ret = load_atom_4(env, ra, haddr, get_memop(oi));
-    clear_helper_retaddr();
+    tcg_debug_assert((mop & MO_BSWAP) == MO_LE);
+    ret = do_ld4_he_mmu(env, addr, mop, ra);
    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return cpu_to_le32(ret);
 }
 
+static uint64_t do_ld8_he_mmu(CPUArchState *env, abi_ptr addr,
+                              MemOp mop, uintptr_t ra)
+{
+    void *haddr;
+    uint64_t ret;
+
+    tcg_debug_assert((mop & MO_SIZE) == MO_64);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_LOAD);
+    ret = load_atom_8(env, ra, haddr, mop);
+    clear_helper_retaddr();
+    return ret;
+}
+
+uint64_t helper_ldq_mmu(CPUArchState *env, target_ulong addr,
+                        MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+    uint64_t ret = do_ld8_he_mmu(env, addr, mop, ra);
+
+    if (mop & MO_BSWAP) {
+        ret = bswap64(ret);
+    }
+    return ret;
+}
+
+uint64_t cpu_ldq_be_mmu(CPUArchState *env, abi_ptr addr,
+                        MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+    uint64_t ret;
+
+    tcg_debug_assert((mop & MO_BSWAP) == MO_BE);
+    ret = do_ld8_he_mmu(env, addr, mop, ra);
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
+    return cpu_to_be64(ret);
+}
+
 uint64_t cpu_ldq_le_mmu(CPUArchState *env, abi_ptr addr,
                         MemOpIdx oi, uintptr_t ra)
 {
-    void *haddr;
+    MemOp mop = get_memop(oi);
     uint64_t ret;
 
-    validate_memop(oi, MO_LEUQ);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
-    ret = load_atom_8(env, ra, haddr, get_memop(oi));
-    clear_helper_retaddr();
+    tcg_debug_assert((mop & MO_BSWAP) == MO_LE);
+    ret = do_ld8_he_mmu(env, addr, mop, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_R);
     return cpu_to_le64(ret);
 }
@@ -1037,7 +1127,7 @@ Int128 cpu_ld16_be_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     Int128 ret;
 
-    validate_memop(oi, MO_128 | MO_BE);
+    tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE));
     haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
     memcpy(&ret, haddr, 16);
     clear_helper_retaddr();
@@ -1055,7 +1145,7 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     Int128 ret;
 
-    validate_memop(oi, MO_128 | MO_LE);
+    tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE));
     haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
     memcpy(&ret, haddr, 16);
     clear_helper_retaddr();
@@ -1067,87 +1157,153 @@ Int128 cpu_ld16_le_mmu(CPUArchState *env, abi_ptr addr,
     return ret;
 }
 
-void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
-                 MemOpIdx oi, uintptr_t ra)
+static void do_st1_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
+                       MemOp mop, uintptr_t ra)
 {
     void *haddr;
 
-    validate_memop(oi, MO_UB);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
+    tcg_debug_assert((mop & MO_SIZE) == MO_8);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
     stb_p(haddr, val);
     clear_helper_retaddr();
+}
+
+void helper_stb_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    do_st1_mmu(env, addr, val, get_memop(oi), ra);
+}
+
+void cpu_stb_mmu(CPUArchState *env, abi_ptr addr, uint8_t val,
+                 MemOpIdx oi, uintptr_t ra)
+{
+    do_st1_mmu(env, addr, val, get_memop(oi), ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
+static void do_st2_he_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
+                          MemOp mop, uintptr_t ra)
+{
+    void *haddr;
+
+    tcg_debug_assert((mop & MO_SIZE) == MO_16);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    store_atom_2(env, ra, haddr, mop, val);
+    clear_helper_retaddr();
+}
+
+void helper_stw_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+
+    if (mop & MO_BSWAP) {
+        val = bswap16(val);
+    }
+    do_st2_he_mmu(env, addr, val, mop, ra);
+}
+
 void cpu_stw_be_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    void *haddr;
+    MemOp mop = get_memop(oi);
 
-    validate_memop(oi, MO_BEUW);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
-    store_atom_2(env, ra, haddr, get_memop(oi), be16_to_cpu(val));
-    clear_helper_retaddr();
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
-void cpu_stl_be_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    void *haddr;
-
-    validate_memop(oi, MO_BEUL);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
-    store_atom_4(env, ra, haddr, get_memop(oi), be32_to_cpu(val));
-    clear_helper_retaddr();
-    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
-}
-
-void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
-                    MemOpIdx oi, uintptr_t ra)
-{
-    void *haddr;
-
-    validate_memop(oi, MO_BEUQ);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
-    store_atom_8(env, ra, haddr, get_memop(oi), be64_to_cpu(val));
-    clear_helper_retaddr();
+    tcg_debug_assert((mop & MO_BSWAP) == MO_BE);
+    do_st2_he_mmu(env, addr, be16_to_cpu(val), mop, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
 void cpu_stw_le_mmu(CPUArchState *env, abi_ptr addr, uint16_t val,
                     MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+
+    tcg_debug_assert((mop & MO_BSWAP) == MO_LE);
+    do_st2_he_mmu(env, addr, le16_to_cpu(val), mop, ra);
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
+}
+
+static void do_st4_he_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
+                          MemOp mop, uintptr_t ra)
 {
     void *haddr;
 
-    validate_memop(oi, MO_LEUW);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
-    store_atom_2(env, ra, haddr, get_memop(oi), le16_to_cpu(val));
+    tcg_debug_assert((mop & MO_SIZE) == MO_32);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    store_atom_4(env, ra, haddr, mop, val);
     clear_helper_retaddr();
+}
+
+void helper_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+
+    if (mop & MO_BSWAP) {
+        val = bswap32(val);
+    }
+    do_st4_he_mmu(env, addr, val, mop, ra);
+}
+
+void cpu_stl_be_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+
+    tcg_debug_assert((mop & MO_BSWAP) == MO_BE);
+    do_st4_he_mmu(env, addr, be32_to_cpu(val), mop, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
 void cpu_stl_le_mmu(CPUArchState *env, abi_ptr addr, uint32_t val,
                     MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+
+    tcg_debug_assert((mop & MO_BSWAP) == MO_LE);
+    do_st4_he_mmu(env, addr, le32_to_cpu(val), mop, ra);
+    qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
+}
+
+static void do_st8_he_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
+                          MemOp mop, uintptr_t ra)
 {
     void *haddr;
 
-    validate_memop(oi, MO_LEUL);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
-    store_atom_4(env, ra, haddr, get_memop(oi), le32_to_cpu(val));
+    tcg_debug_assert((mop & MO_SIZE) == MO_64);
+    haddr = cpu_mmu_lookup(env, addr, mop, ra, MMU_DATA_STORE);
+    store_atom_8(env, ra, haddr, mop, val);
     clear_helper_retaddr();
+}
+
+void helper_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+
+    if (mop & MO_BSWAP) {
+        val = bswap64(val);
+    }
+    do_st8_he_mmu(env, addr, val, mop, ra);
+}
+
+void cpu_stq_be_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
+                    MemOpIdx oi, uintptr_t ra)
+{
+    MemOp mop = get_memop(oi);
+
+    tcg_debug_assert((mop & MO_BSWAP) == MO_BE);
+    do_st8_he_mmu(env, addr, cpu_to_be64(val), mop, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
 void cpu_stq_le_mmu(CPUArchState *env, abi_ptr addr, uint64_t val,
                     MemOpIdx oi, uintptr_t ra)
 {
-    void *haddr;
+    MemOp mop = get_memop(oi);
 
-    validate_memop(oi, MO_LEUQ);
-    haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
-    store_atom_8(env, ra, haddr, get_memop(oi), le64_to_cpu(val));
-    clear_helper_retaddr();
+    tcg_debug_assert((mop & MO_BSWAP) == MO_LE);
+    do_st8_he_mmu(env, addr, cpu_to_le64(val), mop, ra);
     qemu_plugin_vcpu_mem_cb(env_cpu(env), addr, oi, QEMU_PLUGIN_MEM_W);
 }
 
@@ -1156,7 +1312,7 @@ void cpu_st16_be_mmu(CPUArchState *env, abi_ptr addr,
 {
     void *haddr;
 
-    validate_memop(oi, MO_128 | MO_BE);
+    tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_BE));
     haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
     if (!HOST_BIG_ENDIAN) {
         val = bswap128(val);
@@ -1171,7 +1327,7 @@ void cpu_st16_le_mmu(CPUArchState *env, abi_ptr addr,
 {
     void *haddr;
 
-    validate_memop(oi, MO_128 | MO_LE);
+    tcg_debug_assert((get_memop(oi) & (MO_BSWAP | MO_SIZE)) == (MO_128 | MO_LE));
     haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE);
     if (HOST_BIG_ENDIAN) {
         val = bswap128(val);
@@ -1269,7 +1425,6 @@ uint64_t cpu_ldq_code_mmu(CPUArchState *env, abi_ptr addr,
     void *haddr;
     uint64_t ret;
 
-    validate_memop(oi, MO_BEUQ);
     haddr = cpu_mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD);
     ret = ldq_p(haddr);
     clear_helper_retaddr();
diff --git a/tcg/tcg.c b/tcg/tcg.c
index f5f9d8f7e8..a864ff1f4b 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -197,8 +197,7 @@ static void tcg_out_st_helper_args(TCGContext *s, const TCGLabelQemuLdst *l,
                                    const TCGLdstHelperParam *p)
     __attribute__((unused));
 
-#ifdef CONFIG_SOFTMMU
-static void * const qemu_ld_helpers[MO_SSIZE + 1] = {
+static void * const qemu_ld_helpers[MO_SSIZE + 1] __attribute__((unused)) = {
     [MO_UB] = helper_ldub_mmu,
     [MO_SB] = helper_ldsb_mmu,
     [MO_UW] = helper_lduw_mmu,
@@ -210,13 +209,12 @@ static void * const qemu_ld_helpers[MO_SSIZE + 1] = {
 #endif
 };
 
-static void * const qemu_st_helpers[MO_SIZE + 1] = {
+static void * const qemu_st_helpers[MO_SIZE + 1] __attribute__((unused)) = {
     [MO_8] = helper_stb_mmu,
     [MO_16] = helper_stw_mmu,
     [MO_32] = helper_stl_mmu,
     [MO_64] = helper_stq_mmu,
 };
-#endif
 
 TCGContext tcg_init_ctx;
 __thread TCGContext *tcg_ctx;
-- 
2.34.1
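
A note on the shape of the new helpers: each helper_*_mmu entry point is a
thin wrapper that byte-swaps when the MemOp requests the non-host byte
order, while a host-endian do_ld*/do_st* core performs the access with the
required atomicity. Below is a rough standalone sketch of that dispatch;
the *_example names and the plain memcpy load are stand-ins for the QEMU
types and for load_atom_8(), not code from this patch, and
__builtin_bswap64() assumes a GCC/Clang compiler.

#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for QEMU's MemOp byte-swap flag. */
typedef unsigned MemOpExample;
#define MO_BSWAP_EXAMPLE 0x8u

/* Core load in host byte order; the patch uses load_atom_8() here so
 * the access also honors the atomicity encoded in the MemOp. */
static uint64_t do_ld8_he_example(const void *haddr)
{
    uint64_t ret;
    memcpy(&ret, haddr, sizeof(ret));
    return ret;
}

/* Helper entry point: swap only when the MemOp asks for the opposite
 * of host byte order, the same shape as helper_ldq_mmu() above. */
static uint64_t helper_ldq_example(const void *haddr, MemOpExample mop)
{
    uint64_t ret = do_ld8_he_example(haddr);

    if (mop & MO_BSWAP_EXAMPLE) {
        ret = __builtin_bswap64(ret);
    }
    return ret;
}

Splitting the helpers this way keeps the byte-swap decision in one place
per access size, while the cpu_ld*_be/le_mmu wrappers only assert the
expected endianness and convert with cpu_to_be*/cpu_to_le* as before.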