From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Subject: [PULL 09/55] target/arm: Move get_phys_addr to ptw.c
Date: Thu, 9 Jun 2022 10:04:51 +0100
Message-Id: <20220609090537.1971756-10-peter.maydell@linaro.org>
In-Reply-To: <20220609090537.1971756-1-peter.maydell@linaro.org>
References: <20220609090537.1971756-1-peter.maydell@linaro.org>

From: Richard Henderson <richard.henderson@linaro.org>

Begin moving all of the page table walking functions out of
helper.c, starting with get_phys_addr().  Create a temporary
header file, "ptw.h", in which to share declarations between
the two C files while we are moving functions.

Move a few declarations to "internals.h", which will remain
used by multiple C files.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220604040607.269301-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 target/arm/internals.h |  18 ++-
 target/arm/ptw.h       |  51 ++++++
 target/arm/helper.c    | 344 +++++------------------------------------
 target/arm/ptw.c       | 267 ++++++++++++++++++++++++++++++++
 target/arm/meson.build |   1 +
 5 files changed, 372 insertions(+), 309 deletions(-)
 create mode 100644 target/arm/ptw.h
 create mode 100644 target/arm/ptw.c

diff --git a/target/arm/internals.h b/target/arm/internals.h
index 049edce946c..1d83146d565 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -613,8 +613,13 @@ ARMMMUIdx arm_v7m_mmu_idx_for_secstate_and_priv(CPUARMState *env,
 /* Return the MMU index for a v7M CPU in the specified security state */
 ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate);
 
-/* Return true if the stage 1 translation regime is using LPAE format page
- * tables */
+/* Return true if the translation regime is using LPAE format page tables */
+bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx);
+
+/*
+ * Return true if the stage 1 translation regime is using LPAE
+ * format page tables
+ */
 bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx);
 
 /* Raise a data fault alignment exception for the specified virtual address */
@@ -777,6 +782,12 @@ static inline uint32_t regime_el(CPUARMState *env, ARMMMUIdx mmu_idx)
     }
 }
 
+/* Return the SCTLR value which controls this address translation regime */
+static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
+{
+    return env->cp15.sctlr_el[regime_el(env, mmu_idx)];
+}
+
 /* Return the TCR controlling this translation regime */
 static inline TCR *regime_tcr(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
@@ -1095,6 +1106,9 @@ typedef struct ARMVAParameters {
 ARMVAParameters aa64_va_parameters(CPUARMState *env, uint64_t va,
                                    ARMMMUIdx mmu_idx, bool data);
 
+int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx);
+int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx);
+
 static inline int exception_target_el(CPUARMState *env)
 {
     int target_el = MAX(1, arm_current_el(env));
diff --git a/target/arm/ptw.h b/target/arm/ptw.h
new file mode 100644
index 00000000000..e2023ae7508
--- /dev/null
+++ b/target/arm/ptw.h
@@ -0,0 +1,51 @@
+/*
+ * ARM page table walking.
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef TARGET_ARM_PTW_H
+#define TARGET_ARM_PTW_H
+
+#ifndef CONFIG_USER_ONLY
+
+bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx);
+bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx);
+ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
+                                 ARMCacheAttrs s1, ARMCacheAttrs s2);
+
+bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
+                      MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                      hwaddr *phys_ptr, int *prot,
+                      target_ulong *page_size,
+                      ARMMMUFaultInfo *fi);
+bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
+                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                          hwaddr *phys_ptr, int *prot,
+                          ARMMMUFaultInfo *fi);
+bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
+                      MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                      hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
+                      target_ulong *page_size, ARMMMUFaultInfo *fi);
+bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
+                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                          hwaddr *phys_ptr, int *prot,
+                          target_ulong *page_size,
+                          ARMMMUFaultInfo *fi);
+bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
+                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                          hwaddr *phys_ptr, MemTxAttrs *txattrs,
+                          int *prot, target_ulong *page_size,
+                          ARMMMUFaultInfo *fi);
+bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
+                        MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                        bool s1_is_el0,
+                        hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
+                        target_ulong *page_size_ptr,
+                        ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
+    __attribute__((nonnull));
+
+#endif /* !CONFIG_USER_ONLY */
+#endif /* TARGET_ARM_PTW_H */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 829b660db92..3ffd122178d 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -37,22 +37,11 @@
 #include "semihosting/common-semi.h"
 #endif
 #include "cpregs.h"
+#include "ptw.h"
 
 #define ARM_CPU_FREQ 1000000000 /* FIXME: 1 GHz, should be configurable */
 
-#ifndef CONFIG_USER_ONLY
-
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
-                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0,
-                               hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
-                               target_ulong *page_size_ptr,
-                               ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
-    __attribute__((nonnull));
-#endif
-
 static void switch_mode(CPUARMState *env, int mode);
-static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx);
 
 static uint64_t raw_read(CPUARMState *env, const ARMCPRegInfo *ri)
 {
@@ -10440,17 +10429,10 @@ uint64_t arm_sctlr(CPUARMState *env, int el)
     return env->cp15.sctlr_el[el];
 }
 
-/* Return the SCTLR value which controls this address translation regime */
-static inline uint64_t regime_sctlr(CPUARMState *env, ARMMMUIdx mmu_idx)
-{
-    return env->cp15.sctlr_el[regime_el(env, mmu_idx)];
-}
-
 #ifndef CONFIG_USER_ONLY
 
 /* Return true if the specified stage of address translation is disabled */
-static inline bool regime_translation_disabled(CPUARMState *env,
-                                               ARMMMUIdx mmu_idx)
+bool regime_translation_disabled(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     uint64_t hcr_el2;
 
@@ -10542,8 +10524,7 @@ ARMMMUIdx stage_1_mmu_idx(ARMMMUIdx mmu_idx)
 #endif /* !CONFIG_USER_ONLY */
 
 /* Return true if the translation regime is using LPAE format page tables */
-static inline bool regime_using_lpae_format(CPUARMState *env,
-                                            ARMMMUIdx mmu_idx)
+bool regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     int el = regime_el(env, mmu_idx);
     if (el == 2 || arm_el_is_aa64(env, el)) {
@@ -10567,7 +10548,7 @@ bool arm_s1_regime_using_lpae_format(CPUARMState *env, ARMMMUIdx mmu_idx)
 }
 
 #ifndef CONFIG_USER_ONLY
-static inline bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
+bool regime_is_user(CPUARMState *env, ARMMMUIdx mmu_idx)
 {
     switch (mmu_idx) {
     case ARMMMUIdx_SE10_0:
@@ -10959,11 +10940,11 @@ static uint64_t arm_ldq_ptw(CPUState *cs, hwaddr addr, bool is_secure,
     return 0;
 }
 
-static bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                             hwaddr *phys_ptr, int *prot,
-                             target_ulong *page_size,
-                             ARMMMUFaultInfo *fi)
+bool get_phys_addr_v5(CPUARMState *env, uint32_t address,
+                      MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                      hwaddr *phys_ptr, int *prot,
+                      target_ulong *page_size,
+                      ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
     int level = 1;
@@ -11081,10 +11062,10 @@ do_fault:
     return true;
 }
 
-static bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
-                             MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                             hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
-                             target_ulong *page_size, ARMMMUFaultInfo *fi)
+bool get_phys_addr_v6(CPUARMState *env, uint32_t address,
+                      MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                      hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
+                      target_ulong *page_size, ARMMMUFaultInfo *fi)
 {
     CPUState *cs = env_cpu(env);
     ARMCPU *cpu = env_archcpu(env);
@@ -11360,7 +11341,7 @@ unsigned int arm_pamax(ARMCPU *cpu)
     return pamax_map[parange];
 }
 
-static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
+int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
 {
     if (regime_has_2_ranges(mmu_idx)) {
         return extract64(tcr, 37, 2);
@@ -11372,7 +11353,7 @@ static int aa64_va_parameter_tbi(uint64_t tcr, ARMMMUIdx mmu_idx)
     }
 }
 
-static int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
+int aa64_va_parameter_tbid(uint64_t tcr, ARMMMUIdx mmu_idx)
 {
     if (regime_has_2_ranges(mmu_idx)) {
         return extract64(tcr, 51, 2);
@@ -11602,12 +11583,12 @@ static ARMVAParameters aa32_va_parameters(CPUARMState *env, uint32_t va,
  * @fi: set to fault info if the translation fails
  * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
  */
-static bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
-                               MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                               bool s1_is_el0,
-                               hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
-                               target_ulong *page_size_ptr,
-                               ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
+bool get_phys_addr_lpae(CPUARMState *env, uint64_t address,
+                        MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                        bool s1_is_el0,
+                        hwaddr *phys_ptr, MemTxAttrs *txattrs, int *prot,
+                        target_ulong *page_size_ptr,
+                        ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
 {
     ARMCPU *cpu = env_archcpu(env);
     CPUState *cs = CPU(cpu);
@@ -12055,11 +12036,11 @@ static inline bool m_is_system_region(CPUARMState *env, uint32_t address)
     return arm_feature(env, ARM_FEATURE_M) && extract32(address, 29, 3) == 0x7;
 }
 
-static bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
-                                 MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                                 hwaddr *phys_ptr, int *prot,
-                                 target_ulong *page_size,
-                                 ARMMMUFaultInfo *fi)
+bool get_phys_addr_pmsav7(CPUARMState *env, uint32_t address,
+                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                          hwaddr *phys_ptr, int *prot,
+                          target_ulong *page_size,
+                          ARMMMUFaultInfo *fi)
 {
     ARMCPU *cpu = env_archcpu(env);
     int n;
@@ -12501,11 +12482,11 @@ bool pmsav8_mpu_lookup(CPUARMState *env, uint32_t address,
 }
 
 
-static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
-                                 MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                                 hwaddr *phys_ptr, MemTxAttrs *txattrs,
-                                 int *prot, target_ulong *page_size,
-                                 ARMMMUFaultInfo *fi)
+bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
+                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                          hwaddr *phys_ptr, MemTxAttrs *txattrs,
+                          int *prot, target_ulong *page_size,
+                          ARMMMUFaultInfo *fi)
 {
     uint32_t secure = regime_is_secure(env, mmu_idx);
     V8M_SAttributes sattrs = {};
@@ -12575,10 +12556,10 @@ static bool get_phys_addr_pmsav8(CPUARMState *env, uint32_t address,
     return ret;
 }
 
-static bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
-                                 MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                                 hwaddr *phys_ptr, int *prot,
-                                 ARMMMUFaultInfo *fi)
+bool get_phys_addr_pmsav5(CPUARMState *env, uint32_t address,
+                          MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                          hwaddr *phys_ptr, int *prot,
+                          ARMMMUFaultInfo *fi)
 {
     int n;
     uint32_t mask;
@@ -12795,8 +12776,8 @@ static uint8_t combined_attrs_fwb(CPUARMState *env,
  * @s1: Attributes from stage 1 walk
  * @s2: Attributes from stage 2 walk
  */
-static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
-                                        ARMCacheAttrs s1, ARMCacheAttrs s2)
+ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
+                                 ARMCacheAttrs s1, ARMCacheAttrs s2)
 {
     ARMCacheAttrs ret;
     bool tagged = false;
@@ -12848,256 +12829,6 @@ static ARMCacheAttrs combine_cacheattrs(CPUARMState *env,
     return ret;
 }
 
-
-/* get_phys_addr - get the physical address for this virtual address
- *
- * Find the physical address corresponding to the given virtual address,
- * by doing a translation table walk on MMU based systems or using the
- * MPU state on MPU based systems.
- *
- * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
- * prot and page_size may not be filled in, and the populated fsr value provides
- * information on why the translation aborted, in the format of a
- * DFSR/IFSR fault register, with the following caveats:
- *  * we honour the short vs long DFSR format differences.
- *  * the WnR bit is never set (the caller must do this).
- *  * for PSMAv5 based systems we don't bother to return a full FSR format
- *    value.
- *
- * @env: CPUARMState
- * @address: virtual address to get physical address for
- * @access_type: 0 for read, 1 for write, 2 for execute
- * @mmu_idx: MMU index indicating required translation regime
- * @phys_ptr: set to the physical address corresponding to the virtual address
- * @attrs: set to the memory transaction attributes to use
- * @prot: set to the permissions for the page containing phys_ptr
- * @page_size: set to the size of the page containing phys_ptr
- * @fi: set to fault info if the translation fails
- * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
- */
-bool get_phys_addr(CPUARMState *env, target_ulong address,
-                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
-                   hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
-                   target_ulong *page_size,
-                   ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
-{
-    ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
-
-    if (mmu_idx != s1_mmu_idx) {
-        /* Call ourselves recursively to do the stage 1 and then stage 2
-         * translations if mmu_idx is a two-stage regime.
-         */
-        if (arm_feature(env, ARM_FEATURE_EL2)) {
-            hwaddr ipa;
-            int s2_prot;
-            int ret;
-            bool ipa_secure;
-            ARMCacheAttrs cacheattrs2 = {};
-            ARMMMUIdx s2_mmu_idx;
-            bool is_el0;
-
-            ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
-                                attrs, prot, page_size, fi, cacheattrs);
-
-            /* If S1 fails or S2 is disabled, return early. */
-            if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
-                *phys_ptr = ipa;
-                return ret;
-            }
-
-            ipa_secure = attrs->secure;
-            if (arm_is_secure_below_el3(env)) {
-                if (ipa_secure) {
-                    attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
-                } else {
-                    attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
-                }
-            } else {
-                assert(!ipa_secure);
-            }
-
-            s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
-            is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
-
-            /* S1 is done. Now do S2 translation. */
-            ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0,
-                                     phys_ptr, attrs, &s2_prot,
-                                     page_size, fi, &cacheattrs2);
-            fi->s2addr = ipa;
-            /* Combine the S1 and S2 perms. */
-            *prot &= s2_prot;
-
-            /* If S2 fails, return early. */
-            if (ret) {
-                return ret;
-            }
-
-            /* Combine the S1 and S2 cache attributes. */
-            if (arm_hcr_el2_eff(env) & HCR_DC) {
-                /*
-                 * HCR.DC forces the first stage attributes to
-                 *  Normal Non-Shareable,
-                 *  Inner Write-Back Read-Allocate Write-Allocate,
-                 *  Outer Write-Back Read-Allocate Write-Allocate.
-                 * Do not overwrite Tagged within attrs.
-                 */
-                if (cacheattrs->attrs != 0xf0) {
-                    cacheattrs->attrs = 0xff;
-                }
-                cacheattrs->shareability = 0;
-            }
-            *cacheattrs = combine_cacheattrs(env, *cacheattrs, cacheattrs2);
-
-            /* Check if IPA translates to secure or non-secure PA space. */
-            if (arm_is_secure_below_el3(env)) {
-                if (ipa_secure) {
-                    attrs->secure =
-                        !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
-                } else {
-                    attrs->secure =
-                        !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
-                        || (env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)));
-                }
-            }
-            return 0;
-        } else {
-            /*
-             * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
-             */
-            mmu_idx = stage_1_mmu_idx(mmu_idx);
-        }
-    }
-
-    /* The page table entries may downgrade secure to non-secure, but
-     * cannot upgrade an non-secure translation regime's attributes
-     * to secure.
-     */
-    attrs->secure = regime_is_secure(env, mmu_idx);
-    attrs->user = regime_is_user(env, mmu_idx);
-
-    /* Fast Context Switch Extension. This doesn't exist at all in v8.
-     * In v7 and earlier it affects all stage 1 translations.
-     */
-    if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2
-        && !arm_feature(env, ARM_FEATURE_V8)) {
-        if (regime_el(env, mmu_idx) == 3) {
-            address += env->cp15.fcseidr_s;
-        } else {
-            address += env->cp15.fcseidr_ns;
-        }
-    }
-
-    if (arm_feature(env, ARM_FEATURE_PMSA)) {
-        bool ret;
-        *page_size = TARGET_PAGE_SIZE;
-
-        if (arm_feature(env, ARM_FEATURE_V8)) {
-            /* PMSAv8 */
-            ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
-                                       phys_ptr, attrs, prot, page_size, fi);
-        } else if (arm_feature(env, ARM_FEATURE_V7)) {
-            /* PMSAv7 */
-            ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
-                                       phys_ptr, prot, page_size, fi);
-        } else {
-            /* Pre-v7 MPU */
-            ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
-                                       phys_ptr, prot, fi);
-        }
-        qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
-                      " mmu_idx %u -> %s (prot %c%c%c)\n",
-                      access_type == MMU_DATA_LOAD ? "reading" :
-                      (access_type == MMU_DATA_STORE ? "writing" : "execute"),
-                      (uint32_t)address, mmu_idx,
-                      ret ? "Miss" : "Hit",
-                      *prot & PAGE_READ ? 'r' : '-',
-                      *prot & PAGE_WRITE ? 'w' : '-',
-                      *prot & PAGE_EXEC ? 'x' : '-');
-
-        return ret;
-    }
-
-    /* Definitely a real MMU, not an MPU */
-
-    if (regime_translation_disabled(env, mmu_idx)) {
-        uint64_t hcr;
-        uint8_t memattr;
-
-        /*
-         * MMU disabled. S1 addresses within aa64 translation regimes are
-         * still checked for bounds -- see AArch64.TranslateAddressS1Off.
-         */
-        if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
-            int r_el = regime_el(env, mmu_idx);
-            if (arm_el_is_aa64(env, r_el)) {
-                int pamax = arm_pamax(env_archcpu(env));
-                uint64_t tcr = env->cp15.tcr_el[r_el].raw_tcr;
-                int addrtop, tbi;
-
-                tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
-                if (access_type == MMU_INST_FETCH) {
-                    tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
-                }
-                tbi = (tbi >> extract64(address, 55, 1)) & 1;
-                addrtop = (tbi ? 55 : 63);
-
-                if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
-                    fi->type = ARMFault_AddressSize;
-                    fi->level = 0;
-                    fi->stage2 = false;
-                    return 1;
-                }
-
-                /*
-                 * When TBI is disabled, we've just validated that all of the
-                 * bits above PAMax are zero, so logically we only need to
-                 * clear the top byte for TBI. But it's clearer to follow
-                 * the pseudocode set of addrdesc.paddress.
-                 */
-                address = extract64(address, 0, 52);
-            }
-        }
-        *phys_ptr = address;
-        *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
-        *page_size = TARGET_PAGE_SIZE;
-
-        /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
-        hcr = arm_hcr_el2_eff(env);
-        cacheattrs->shareability = 0;
-        cacheattrs->is_s2_format = false;
-        if (hcr & HCR_DC) {
-            if (hcr & HCR_DCT) {
-                memattr = 0xf0;  /* Tagged, Normal, WB, RWA */
-            } else {
-                memattr = 0xff;  /* Normal, WB, RWA */
-            }
-        } else if (access_type == MMU_INST_FETCH) {
-            if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
-                memattr = 0xee;  /* Normal, WT, RA, NT */
-            } else {
-                memattr = 0x44;  /* Normal, NC, No */
-            }
-            cacheattrs->shareability = 2; /* outer sharable */
-        } else {
-            memattr = 0x00; /* Device, nGnRnE */
-        }
-        cacheattrs->attrs = memattr;
-        return 0;
-    }
-
-    if (regime_using_lpae_format(env, mmu_idx)) {
-        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
-                                  phys_ptr, attrs, prot, page_size,
-                                  fi, cacheattrs);
-    } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
-        return get_phys_addr_v6(env, address, access_type, mmu_idx,
-                                phys_ptr, attrs, prot, page_size, fi);
-    } else {
-        return get_phys_addr_v5(env, address, access_type, mmu_idx,
-                                phys_ptr, prot, page_size, fi);
-    }
-}
-
 hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
                                          MemTxAttrs *attrs)
 {
@@ -13121,7 +12852,6 @@ hwaddr arm_cpu_get_phys_page_attrs_debug(CPUState *cs, vaddr addr,
     }
     return phys_addr;
 }
-
 #endif
 
 /* Note that signed overflow is undefined in C. The following routines are
diff --git a/target/arm/ptw.c b/target/arm/ptw.c
new file mode 100644
index 00000000000..318000f6d94
--- /dev/null
+++ b/target/arm/ptw.c
@@ -0,0 +1,267 @@
+/*
+ * ARM page table walking.
+ *
+ * This code is licensed under the GNU GPL v2 or later.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/log.h"
+#include "cpu.h"
+#include "internals.h"
+#include "ptw.h"
+
+
+/**
+ * get_phys_addr - get the physical address for this virtual address
+ *
+ * Find the physical address corresponding to the given virtual address,
+ * by doing a translation table walk on MMU based systems or using the
+ * MPU state on MPU based systems.
+ *
+ * Returns false if the translation was successful. Otherwise, phys_ptr, attrs,
+ * prot and page_size may not be filled in, and the populated fsr value provides
+ * information on why the translation aborted, in the format of a
+ * DFSR/IFSR fault register, with the following caveats:
+ *  * we honour the short vs long DFSR format differences.
+ *  * the WnR bit is never set (the caller must do this).
+ *  * for PSMAv5 based systems we don't bother to return a full FSR format
+ *    value.
+ *
+ * @env: CPUARMState
+ * @address: virtual address to get physical address for
+ * @access_type: 0 for read, 1 for write, 2 for execute
+ * @mmu_idx: MMU index indicating required translation regime
+ * @phys_ptr: set to the physical address corresponding to the virtual address
+ * @attrs: set to the memory transaction attributes to use
+ * @prot: set to the permissions for the page containing phys_ptr
+ * @page_size: set to the size of the page containing phys_ptr
+ * @fi: set to fault info if the translation fails
+ * @cacheattrs: (if non-NULL) set to the cacheability/shareability attributes
+ */
+bool get_phys_addr(CPUARMState *env, target_ulong address,
+                   MMUAccessType access_type, ARMMMUIdx mmu_idx,
+                   hwaddr *phys_ptr, MemTxAttrs *attrs, int *prot,
+                   target_ulong *page_size,
+                   ARMMMUFaultInfo *fi, ARMCacheAttrs *cacheattrs)
+{
+    ARMMMUIdx s1_mmu_idx = stage_1_mmu_idx(mmu_idx);
+
+    if (mmu_idx != s1_mmu_idx) {
+        /*
+         * Call ourselves recursively to do the stage 1 and then stage 2
+         * translations if mmu_idx is a two-stage regime.
+         */
+        if (arm_feature(env, ARM_FEATURE_EL2)) {
+            hwaddr ipa;
+            int s2_prot;
+            int ret;
+            bool ipa_secure;
+            ARMCacheAttrs cacheattrs2 = {};
+            ARMMMUIdx s2_mmu_idx;
+            bool is_el0;
+
+            ret = get_phys_addr(env, address, access_type, s1_mmu_idx, &ipa,
+                                attrs, prot, page_size, fi, cacheattrs);
+
+            /* If S1 fails or S2 is disabled, return early. */
+            if (ret || regime_translation_disabled(env, ARMMMUIdx_Stage2)) {
+                *phys_ptr = ipa;
+                return ret;
+            }
+
+            ipa_secure = attrs->secure;
+            if (arm_is_secure_below_el3(env)) {
+                if (ipa_secure) {
+                    attrs->secure = !(env->cp15.vstcr_el2.raw_tcr & VSTCR_SW);
+                } else {
+                    attrs->secure = !(env->cp15.vtcr_el2.raw_tcr & VTCR_NSW);
+                }
+            } else {
+                assert(!ipa_secure);
+            }
+
+            s2_mmu_idx = attrs->secure ? ARMMMUIdx_Stage2_S : ARMMMUIdx_Stage2;
+            is_el0 = mmu_idx == ARMMMUIdx_E10_0 || mmu_idx == ARMMMUIdx_SE10_0;
+
+            /* S1 is done. Now do S2 translation. */
+            ret = get_phys_addr_lpae(env, ipa, access_type, s2_mmu_idx, is_el0,
+                                     phys_ptr, attrs, &s2_prot,
+                                     page_size, fi, &cacheattrs2);
+            fi->s2addr = ipa;
+            /* Combine the S1 and S2 perms. */
+            *prot &= s2_prot;
+
+            /* If S2 fails, return early. */
+            if (ret) {
+                return ret;
+            }
+
+            /* Combine the S1 and S2 cache attributes. */
+            if (arm_hcr_el2_eff(env) & HCR_DC) {
+                /*
+                 * HCR.DC forces the first stage attributes to
+                 *  Normal Non-Shareable,
+                 *  Inner Write-Back Read-Allocate Write-Allocate,
+                 *  Outer Write-Back Read-Allocate Write-Allocate.
+                 * Do not overwrite Tagged within attrs.
+                 */
+                if (cacheattrs->attrs != 0xf0) {
+                    cacheattrs->attrs = 0xff;
+                }
+                cacheattrs->shareability = 0;
+            }
+            *cacheattrs = combine_cacheattrs(env, *cacheattrs, cacheattrs2);
+
+            /* Check if IPA translates to secure or non-secure PA space. */
+            if (arm_is_secure_below_el3(env)) {
+                if (ipa_secure) {
+                    attrs->secure =
+                        !(env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW));
+                } else {
+                    attrs->secure =
+                        !((env->cp15.vtcr_el2.raw_tcr & (VTCR_NSA | VTCR_NSW))
+                        || (env->cp15.vstcr_el2.raw_tcr & (VSTCR_SA | VSTCR_SW)));
+                }
+            }
+            return 0;
+        } else {
+            /*
+             * For non-EL2 CPUs a stage1+stage2 translation is just stage 1.
+             */
+            mmu_idx = stage_1_mmu_idx(mmu_idx);
+        }
+    }
+
+    /*
+     * The page table entries may downgrade secure to non-secure, but
+     * cannot upgrade an non-secure translation regime's attributes
+     * to secure.
+     */
+    attrs->secure = regime_is_secure(env, mmu_idx);
+    attrs->user = regime_is_user(env, mmu_idx);
+
+    /*
+     * Fast Context Switch Extension. This doesn't exist at all in v8.
+     * In v7 and earlier it affects all stage 1 translations.
+     */
+    if (address < 0x02000000 && mmu_idx != ARMMMUIdx_Stage2
+        && !arm_feature(env, ARM_FEATURE_V8)) {
+        if (regime_el(env, mmu_idx) == 3) {
+            address += env->cp15.fcseidr_s;
+        } else {
+            address += env->cp15.fcseidr_ns;
+        }
+    }
+
+    if (arm_feature(env, ARM_FEATURE_PMSA)) {
+        bool ret;
+        *page_size = TARGET_PAGE_SIZE;
+
+        if (arm_feature(env, ARM_FEATURE_V8)) {
+            /* PMSAv8 */
+            ret = get_phys_addr_pmsav8(env, address, access_type, mmu_idx,
+                                       phys_ptr, attrs, prot, page_size, fi);
+        } else if (arm_feature(env, ARM_FEATURE_V7)) {
+            /* PMSAv7 */
+            ret = get_phys_addr_pmsav7(env, address, access_type, mmu_idx,
+                                       phys_ptr, prot, page_size, fi);
+        } else {
+            /* Pre-v7 MPU */
+            ret = get_phys_addr_pmsav5(env, address, access_type, mmu_idx,
+                                       phys_ptr, prot, fi);
+        }
+        qemu_log_mask(CPU_LOG_MMU, "PMSA MPU lookup for %s at 0x%08" PRIx32
+                      " mmu_idx %u -> %s (prot %c%c%c)\n",
+                      access_type == MMU_DATA_LOAD ? "reading" :
+                      (access_type == MMU_DATA_STORE ? "writing" : "execute"),
+                      (uint32_t)address, mmu_idx,
+                      ret ? "Miss" : "Hit",
+                      *prot & PAGE_READ ? 'r' : '-',
+                      *prot & PAGE_WRITE ? 'w' : '-',
+                      *prot & PAGE_EXEC ? 'x' : '-');
+
+        return ret;
+    }
+
+    /* Definitely a real MMU, not an MPU */
+
+    if (regime_translation_disabled(env, mmu_idx)) {
+        uint64_t hcr;
+        uint8_t memattr;
+
+        /*
+         * MMU disabled. S1 addresses within aa64 translation regimes are
+         * still checked for bounds -- see AArch64.TranslateAddressS1Off.
+         */
+        if (mmu_idx != ARMMMUIdx_Stage2 && mmu_idx != ARMMMUIdx_Stage2_S) {
+            int r_el = regime_el(env, mmu_idx);
+            if (arm_el_is_aa64(env, r_el)) {
+                int pamax = arm_pamax(env_archcpu(env));
+                uint64_t tcr = env->cp15.tcr_el[r_el].raw_tcr;
+                int addrtop, tbi;
+
+                tbi = aa64_va_parameter_tbi(tcr, mmu_idx);
+                if (access_type == MMU_INST_FETCH) {
+                    tbi &= ~aa64_va_parameter_tbid(tcr, mmu_idx);
+                }
+                tbi = (tbi >> extract64(address, 55, 1)) & 1;
+                addrtop = (tbi ? 55 : 63);
+
+                if (extract64(address, pamax, addrtop - pamax + 1) != 0) {
+                    fi->type = ARMFault_AddressSize;
+                    fi->level = 0;
+                    fi->stage2 = false;
+                    return 1;
+                }
+
+                /*
+                 * When TBI is disabled, we've just validated that all of the
+                 * bits above PAMax are zero, so logically we only need to
+                 * clear the top byte for TBI. But it's clearer to follow
+                 * the pseudocode set of addrdesc.paddress.
+                 */
+                address = extract64(address, 0, 52);
+            }
+        }
+        *phys_ptr = address;
+        *prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+        *page_size = TARGET_PAGE_SIZE;
+
+        /* Fill in cacheattr a-la AArch64.TranslateAddressS1Off. */
+        hcr = arm_hcr_el2_eff(env);
+        cacheattrs->shareability = 0;
+        cacheattrs->is_s2_format = false;
+        if (hcr & HCR_DC) {
+            if (hcr & HCR_DCT) {
+                memattr = 0xf0;  /* Tagged, Normal, WB, RWA */
+            } else {
+                memattr = 0xff;  /* Normal, WB, RWA */
+            }
+        } else if (access_type == MMU_INST_FETCH) {
+            if (regime_sctlr(env, mmu_idx) & SCTLR_I) {
+                memattr = 0xee;  /* Normal, WT, RA, NT */
+            } else {
+                memattr = 0x44;  /* Normal, NC, No */
+            }
+            cacheattrs->shareability = 2; /* outer sharable */
+        } else {
+            memattr = 0x00; /* Device, nGnRnE */
+        }
+        cacheattrs->attrs = memattr;
+        return 0;
+    }
+
+    if (regime_using_lpae_format(env, mmu_idx)) {
+        return get_phys_addr_lpae(env, address, access_type, mmu_idx, false,
+                                  phys_ptr, attrs, prot, page_size,
+                                  fi, cacheattrs);
+    } else if (regime_sctlr(env, mmu_idx) & SCTLR_XP) {
+        return get_phys_addr_v6(env, address, access_type, mmu_idx,
+                                phys_ptr, attrs, prot, page_size, fi);
+    } else {
+        return get_phys_addr_v5(env, address, access_type, mmu_idx,
+                                phys_ptr, prot, page_size, fi);
+    }
+}
diff --git a/target/arm/meson.build b/target/arm/meson.build
index 50f152214af..ac571fc45db 100644
--- a/target/arm/meson.build
+++ b/target/arm/meson.build
@@ -58,6 +58,7 @@ arm_softmmu_ss.add(files(
   'machine.c',
   'monitor.c',
   'psci.c',
+  'ptw.c',
 ))
 
 subdir('hvf')
-- 
2.25.1