From nobody Wed Nov 27 11:54:04 2024
From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis, Wei Chen, Volodymyr Babchuk, Penny Zheng, Julien Grall
Subject: [PATCH v8 1/8] xen/arm: Split page table related code to mmu/pt.c
Date: Mon, 23 Oct 2023 10:13:38 +0800
Message-Id: <20231023021345.1731436-2-Henry.Wang@arm.com>
In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com>
References: <20231023021345.1731436-1-Henry.Wang@arm.com>

The extraction of MMU-related code is the basis of MPU support. This commit starts that work by first splitting the page table related code into mmu/pt.c, so that we do not end up with another massive mm.c. Introduce an mmu-specific directory and set up its Makefile. Move the page table related functions and macros from arch/arm/mm.c to arch/arm/mmu/pt.c.
Take the opportunity to fix the in-code comment coding styles when possible, and drop the unnecessary #include headers in the original arch/arm/mm.c. Signed-off-by: Henry Wang Signed-off-by: Penny Zheng Reviewed-by: Julien Grall --- v8: - Add Julien's Reviewed-by tag. v7: - Do not move pte_of_xenaddr() to mmu/pt.c. - Do not expose global variable phys_offset. v6: - Rework the original patch "[v5,07/13] xen/arm: Extract MMU-specific code", only split the page table related code out in this patch. --- xen/arch/arm/Makefile | 1 + xen/arch/arm/mm.c | 717 ------------------------------------- xen/arch/arm/mmu/Makefile | 1 + xen/arch/arm/mmu/pt.c | 736 ++++++++++++++++++++++++++++++++++++++ 4 files changed, 738 insertions(+), 717 deletions(-) create mode 100644 xen/arch/arm/mmu/Makefile create mode 100644 xen/arch/arm/mmu/pt.c diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile index 7bf07e9920..c45b08b31e 100644 --- a/xen/arch/arm/Makefile +++ b/xen/arch/arm/Makefile @@ -1,5 +1,6 @@ obj-$(CONFIG_ARM_32) +=3D arm32/ obj-$(CONFIG_ARM_64) +=3D arm64/ +obj-$(CONFIG_MMU) +=3D mmu/ obj-$(CONFIG_ACPI) +=3D acpi/ obj-$(CONFIG_HAS_PCI) +=3D pci/ ifneq ($(CONFIG_NO_PLAT),y) diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index c34cc94c90..fd02493564 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -9,22 +9,14 @@ */ =20 #include -#include #include #include -#include #include #include -#include -#include -#include #include -#include -#include =20 #include =20 -#include #include =20 #include @@ -35,19 +27,6 @@ #undef mfn_to_virt #define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn)) =20 -#ifdef NDEBUG -static inline void -__attribute__ ((__format__ (__printf__, 1, 2))) -mm_printk(const char *fmt, ...) {} -#else -#define mm_printk(fmt, args...) \ - do \ - { \ - dprintk(XENLOG_ERR, fmt, ## args); \ - WARN(); \ - } while (0) -#endif - /* Static start-of-day pagetables that we use before the allocators * are up. These are used by all CPUs during bringup before switching * to the CPUs own pagetables. @@ -92,12 +71,10 @@ DEFINE_BOOT_PAGE_TABLES(boot_third, XEN_NR_ENTRIES(2)); */ =20 #ifdef CONFIG_ARM_64 -#define HYP_PT_ROOT_LEVEL 0 DEFINE_PAGE_TABLE(xen_pgtable); static DEFINE_PAGE_TABLE(xen_first); #define THIS_CPU_PGTABLE xen_pgtable #else -#define HYP_PT_ROOT_LEVEL 1 /* Per-CPU pagetable pages */ /* xen_pgtable =3D=3D root of the trie (zeroeth level on 64-bit, first on = 32-bit) */ DEFINE_PER_CPU(lpae_t *, xen_pgtable); @@ -200,179 +177,6 @@ static void __init __maybe_unused build_assertions(vo= id) #undef CHECK_DIFFERENT_SLOT } =20 -static lpae_t *xen_map_table(mfn_t mfn) -{ - /* - * During early boot, map_domain_page() may be unusable. Use the - * PMAP to map temporarily a page-table. - */ - if ( system_state =3D=3D SYS_STATE_early_boot ) - return pmap_map(mfn); - - return map_domain_page(mfn); -} - -static void xen_unmap_table(const lpae_t *table) -{ - /* - * During early boot, xen_map_table() will not use map_domain_page() - * but the PMAP. 
- */ - if ( system_state =3D=3D SYS_STATE_early_boot ) - pmap_unmap(table); - else - unmap_domain_page(table); -} - -void dump_pt_walk(paddr_t ttbr, paddr_t addr, - unsigned int root_level, - unsigned int nr_root_tables) -{ - static const char *level_strs[4] =3D { "0TH", "1ST", "2ND", "3RD" }; - const mfn_t root_mfn =3D maddr_to_mfn(ttbr); - DECLARE_OFFSETS(offsets, addr); - lpae_t pte, *mapping; - unsigned int level, root_table; - -#ifdef CONFIG_ARM_32 - BUG_ON(root_level < 1); -#endif - BUG_ON(root_level > 3); - - if ( nr_root_tables > 1 ) - { - /* - * Concatenated root-level tables. The table number will be - * the offset at the previous level. It is not possible to - * concatenate a level-0 root. - */ - BUG_ON(root_level =3D=3D 0); - root_table =3D offsets[root_level - 1]; - printk("Using concatenated root table %u\n", root_table); - if ( root_table >=3D nr_root_tables ) - { - printk("Invalid root table offset\n"); - return; - } - } - else - root_table =3D 0; - - mapping =3D xen_map_table(mfn_add(root_mfn, root_table)); - - for ( level =3D root_level; ; level++ ) - { - if ( offsets[level] > XEN_PT_LPAE_ENTRIES ) - break; - - pte =3D mapping[offsets[level]]; - - printk("%s[0x%03x] =3D 0x%"PRIx64"\n", - level_strs[level], offsets[level], pte.bits); - - if ( level =3D=3D 3 || !pte.walk.valid || !pte.walk.table ) - break; - - /* For next iteration */ - xen_unmap_table(mapping); - mapping =3D xen_map_table(lpae_get_mfn(pte)); - } - - xen_unmap_table(mapping); -} - -void dump_hyp_walk(vaddr_t addr) -{ - uint64_t ttbr =3D READ_SYSREG64(TTBR0_EL2); - - printk("Walking Hypervisor VA 0x%"PRIvaddr" " - "on CPU%d via TTBR 0x%016"PRIx64"\n", - addr, smp_processor_id(), ttbr); - - dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1); -} - -lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr) -{ - lpae_t e =3D (lpae_t) { - .pt =3D { - .valid =3D 1, /* Mappings are present */ - .table =3D 0, /* Set to 1 for links and 4k maps */ - .ai =3D attr, - .ns =3D 1, /* Hyp mode is in the non-secure world= */ - .up =3D 1, /* See below */ - .ro =3D 0, /* Assume read-write */ - .af =3D 1, /* No need for access tracking */ - .ng =3D 1, /* Makes TLB flushes easier */ - .contig =3D 0, /* Assume non-contiguous */ - .xn =3D 1, /* No need to execute outside .text */ - .avail =3D 0, /* Reference count for domheap mapping= */ - }}; - /* - * For EL2 stage-1 page table, up (aka AP[1]) is RES1 as the translati= on - * regime applies to only one exception level (see D4.4.4 and G4.6.1 - * in ARM DDI 0487B.a). If this changes, remember to update the - * hard-coded values in head.S too. - */ - - switch ( attr ) - { - case MT_NORMAL_NC: - /* - * ARM ARM: Overlaying the shareability attribute (DDI - * 0406C.b B3-1376 to 1377) - * - * A memory region with a resultant memory type attribute of Norma= l, - * and a resultant cacheability attribute of Inner Non-cacheable, - * Outer Non-cacheable, must have a resultant shareability attribu= te - * of Outer Shareable, otherwise shareability is UNPREDICTABLE. - * - * On ARMv8 sharability is ignored and explicitly treated as Outer - * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable. - */ - e.pt.sh =3D LPAE_SH_OUTER; - break; - case MT_DEVICE_nGnRnE: - case MT_DEVICE_nGnRE: - /* - * Shareability is ignored for non-Normal memory, Outer is as - * good as anything. - * - * On ARMv8 sharability is ignored and explicitly treated as Outer - * Shareable for any device memory type. 
- */ - e.pt.sh =3D LPAE_SH_OUTER; - break; - default: - e.pt.sh =3D LPAE_SH_INNER; /* Xen mappings are SMP coherent */ - break; - } - - ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK)); - - lpae_set_mfn(e, mfn); - - return e; -} - -/* Map a 4k page in a fixmap entry */ -void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags) -{ - int res; - - res =3D map_pages_to_xen(FIXMAP_ADDR(map), mfn, 1, flags); - BUG_ON(res !=3D 0); -} - -/* Remove a mapping from a fixmap entry */ -void clear_fixmap(unsigned int map) -{ - int res; - - res =3D destroy_xen_mappings(FIXMAP_ADDR(map), FIXMAP_ADDR(map) + PAGE= _SIZE); - BUG_ON(res !=3D 0); -} - void flush_page_to_ram(unsigned long mfn, bool sync_icache) { void *v =3D map_domain_page(_mfn(mfn)); @@ -733,527 +537,6 @@ void *__init arch_vmap_virt_end(void) return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE); } =20 -/* - * This function should only be used to remap device address ranges - * TODO: add a check to verify this assumption - */ -void *ioremap_attr(paddr_t start, size_t len, unsigned int attributes) -{ - mfn_t mfn =3D _mfn(PFN_DOWN(start)); - unsigned int offs =3D start & (PAGE_SIZE - 1); - unsigned int nr =3D PFN_UP(offs + len); - void *ptr =3D __vmap(&mfn, nr, 1, 1, attributes, VMAP_DEFAULT); - - if ( ptr =3D=3D NULL ) - return NULL; - - return ptr + offs; -} - -void *ioremap(paddr_t pa, size_t len) -{ - return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE); -} - -static int create_xen_table(lpae_t *entry) -{ - mfn_t mfn; - void *p; - lpae_t pte; - - if ( system_state !=3D SYS_STATE_early_boot ) - { - struct page_info *pg =3D alloc_domheap_page(NULL, 0); - - if ( pg =3D=3D NULL ) - return -ENOMEM; - - mfn =3D page_to_mfn(pg); - } - else - mfn =3D alloc_boot_pages(1, 1); - - p =3D xen_map_table(mfn); - clear_page(p); - xen_unmap_table(p); - - pte =3D mfn_to_xen_entry(mfn, MT_NORMAL); - pte.pt.table =3D 1; - write_pte(entry, pte); - /* - * No ISB here. It is deferred to xen_pt_update() as the new table - * will not be used for hardware translation table access as part of - * the mapping update. - */ - - return 0; -} - -#define XEN_TABLE_MAP_FAILED 0 -#define XEN_TABLE_SUPER_PAGE 1 -#define XEN_TABLE_NORMAL_PAGE 2 - -/* - * Take the currently mapped table, find the corresponding entry, - * and map the next table, if available. - * - * The read_only parameters indicates whether intermediate tables should - * be allocated when not present. - * - * Return values: - * XEN_TABLE_MAP_FAILED: Either read_only was set and the entry - * was empty, or allocating a new page failed. - * XEN_TABLE_NORMAL_PAGE: next level mapped normally - * XEN_TABLE_SUPER_PAGE: The next entry points to a superpage. - */ -static int xen_pt_next_level(bool read_only, unsigned int level, - lpae_t **table, unsigned int offset) -{ - lpae_t *entry; - int ret; - mfn_t mfn; - - entry =3D *table + offset; - - if ( !lpae_is_valid(*entry) ) - { - if ( read_only ) - return XEN_TABLE_MAP_FAILED; - - ret =3D create_xen_table(entry); - if ( ret ) - return XEN_TABLE_MAP_FAILED; - } - - /* The function xen_pt_next_level is never called at the 3rd level */ - if ( lpae_is_mapping(*entry, level) ) - return XEN_TABLE_SUPER_PAGE; - - mfn =3D lpae_get_mfn(*entry); - - xen_unmap_table(*table); - *table =3D xen_map_table(mfn); - - return XEN_TABLE_NORMAL_PAGE; -} - -/* Sanity check of the entry */ -static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level, - unsigned int flags) -{ - /* Sanity check when modifying an entry. 
*/ - if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) ) - { - /* We don't allow modifying an invalid entry. */ - if ( !lpae_is_valid(entry) ) - { - mm_printk("Modifying invalid entry is not allowed.\n"); - return false; - } - - /* We don't allow modifying a table entry */ - if ( !lpae_is_mapping(entry, level) ) - { - mm_printk("Modifying a table entry is not allowed.\n"); - return false; - } - - /* We don't allow changing memory attributes. */ - if ( entry.pt.ai !=3D PAGE_AI_MASK(flags) ) - { - mm_printk("Modifying memory attributes is not allowed (0x%x ->= 0x%x).\n", - entry.pt.ai, PAGE_AI_MASK(flags)); - return false; - } - - /* We don't allow modifying entry with contiguous bit set. */ - if ( entry.pt.contig ) - { - mm_printk("Modifying entry with contiguous bit set is not allo= wed.\n"); - return false; - } - } - /* Sanity check when inserting a mapping */ - else if ( flags & _PAGE_PRESENT ) - { - /* We should be here with a valid MFN. */ - ASSERT(!mfn_eq(mfn, INVALID_MFN)); - - /* - * We don't allow replacing any valid entry. - * - * Note that the function xen_pt_update() relies on this - * assumption and will skip the TLB flush. The function will need - * to be updated if the check is relaxed. - */ - if ( lpae_is_valid(entry) ) - { - if ( lpae_is_mapping(entry, level) ) - mm_printk("Changing MFN for a valid entry is not allowed (= %#"PRI_mfn" -> %#"PRI_mfn").\n", - mfn_x(lpae_get_mfn(entry)), mfn_x(mfn)); - else - mm_printk("Trying to replace a table with a mapping.\n"); - return false; - } - } - /* Sanity check when removing a mapping. */ - else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) =3D=3D 0 ) - { - /* We should be here with an invalid MFN. */ - ASSERT(mfn_eq(mfn, INVALID_MFN)); - - /* We don't allow removing a table */ - if ( lpae_is_table(entry, level) ) - { - mm_printk("Removing a table is not allowed.\n"); - return false; - } - - /* We don't allow removing a mapping with contiguous bit set. */ - if ( entry.pt.contig ) - { - mm_printk("Removing entry with contiguous bit set is not allow= ed.\n"); - return false; - } - } - /* Sanity check when populating the page-table. No check so far. */ - else - { - ASSERT(flags & _PAGE_POPULATE); - /* We should be here with an invalid MFN */ - ASSERT(mfn_eq(mfn, INVALID_MFN)); - } - - return true; -} - -/* Update an entry at the level @target. */ -static int xen_pt_update_entry(mfn_t root, unsigned long virt, - mfn_t mfn, unsigned int target, - unsigned int flags) -{ - int rc; - unsigned int level; - lpae_t *table; - /* - * The intermediate page tables are read-only when the MFN is not valid - * and we are not populating page table. - * This means we either modify permissions or remove an entry. - */ - bool read_only =3D mfn_eq(mfn, INVALID_MFN) && !(flags & _PAGE_POPULAT= E); - lpae_t pte, *entry; - - /* convenience aliases */ - DECLARE_OFFSETS(offsets, (paddr_t)virt); - - /* _PAGE_POPULATE and _PAGE_PRESENT should never be set together. */ - ASSERT((flags & (_PAGE_POPULATE|_PAGE_PRESENT)) !=3D (_PAGE_POPULATE|_= PAGE_PRESENT)); - - table =3D xen_map_table(root); - for ( level =3D HYP_PT_ROOT_LEVEL; level < target; level++ ) - { - rc =3D xen_pt_next_level(read_only, level, &table, offsets[level]); - if ( rc =3D=3D XEN_TABLE_MAP_FAILED ) - { - /* - * We are here because xen_pt_next_level has failed to map - * the intermediate page table (e.g the table does not exist - * and the pt is read-only). It is a valid case when - * removing a mapping as it may not exist in the page table. - * In this case, just ignore it. 
- */ - if ( flags & (_PAGE_PRESENT|_PAGE_POPULATE) ) - { - mm_printk("%s: Unable to map level %u\n", __func__, level); - rc =3D -ENOENT; - goto out; - } - else - { - rc =3D 0; - goto out; - } - } - else if ( rc !=3D XEN_TABLE_NORMAL_PAGE ) - break; - } - - if ( level !=3D target ) - { - mm_printk("%s: Shattering superpage is not supported\n", __func__); - rc =3D -EOPNOTSUPP; - goto out; - } - - entry =3D table + offsets[level]; - - rc =3D -EINVAL; - if ( !xen_pt_check_entry(*entry, mfn, level, flags) ) - goto out; - - /* If we are only populating page-table, then we are done. */ - rc =3D 0; - if ( flags & _PAGE_POPULATE ) - goto out; - - /* We are removing the page */ - if ( !(flags & _PAGE_PRESENT) ) - memset(&pte, 0x00, sizeof(pte)); - else - { - /* We are inserting a mapping =3D> Create new pte. */ - if ( !mfn_eq(mfn, INVALID_MFN) ) - { - pte =3D mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags)); - - /* - * First and second level pages set pte.pt.table =3D 0, but - * third level entries set pte.pt.table =3D 1. - */ - pte.pt.table =3D (level =3D=3D 3); - } - else /* We are updating the permission =3D> Copy the current pte. = */ - pte =3D *entry; - - /* Set permission */ - pte.pt.ro =3D PAGE_RO_MASK(flags); - pte.pt.xn =3D PAGE_XN_MASK(flags); - /* Set contiguous bit */ - pte.pt.contig =3D !!(flags & _PAGE_CONTIG); - } - - write_pte(entry, pte); - /* - * No ISB or TLB flush here. They are deferred to xen_pt_update() - * as the entry will not be used as part of the mapping update. - */ - - rc =3D 0; - -out: - xen_unmap_table(table); - - return rc; -} - -/* Return the level where mapping should be done */ -static int xen_pt_mapping_level(unsigned long vfn, mfn_t mfn, unsigned lon= g nr, - unsigned int flags) -{ - unsigned int level; - unsigned long mask; - - /* - * Don't take into account the MFN when removing mapping (i.e - * MFN_INVALID) to calculate the correct target order. - * - * Per the Arm Arm, `vfn` and `mfn` must be both superpage aligned. - * They are or-ed together and then checked against the size of - * each level. - * - * `left` is not included and checked separately to allow - * superpage mapping even if it is not properly aligned (the - * user may have asked to map 2MB + 4k). - */ - mask =3D !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0; - mask |=3D vfn; - - /* - * Always use level 3 mapping unless the caller request block - * mapping. - */ - if ( likely(!(flags & _PAGE_BLOCK)) ) - level =3D 3; - else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) && - (nr >=3D BIT(FIRST_ORDER, UL)) ) - level =3D 1; - else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) && - (nr >=3D BIT(SECOND_ORDER, UL)) ) - level =3D 2; - else - level =3D 3; - - return level; -} - -#define XEN_PT_4K_NR_CONTIG 16 - -/* - * Check whether the contiguous bit can be set. Return the number of - * contiguous entry allowed. If not allowed, return 1. - */ -static unsigned int xen_pt_check_contig(unsigned long vfn, mfn_t mfn, - unsigned int level, unsigned long = left, - unsigned int flags) -{ - unsigned long nr_contig; - - /* - * Allow the contiguous bit to set when the caller requests block - * mapping. - */ - if ( !(flags & _PAGE_BLOCK) ) - return 1; - - /* - * We don't allow to remove mapping with the contiguous bit set. - * So shortcut the logic and directly return 1. - */ - if ( mfn_eq(mfn, INVALID_MFN) ) - return 1; - - /* - * The number of contiguous entries varies depending on the page - * granularity used. The logic below assumes 4KB. 
- */ - BUILD_BUG_ON(PAGE_SIZE !=3D SZ_4K); - - /* - * In order to enable the contiguous bit, we should have enough entries - * to map left and both the virtual and physical address should be - * aligned to the size of 16 translation tables entries. - */ - nr_contig =3D BIT(XEN_PT_LEVEL_ORDER(level), UL) * XEN_PT_4K_NR_CONTIG; - - if ( (left < nr_contig) || ((mfn_x(mfn) | vfn) & (nr_contig - 1)) ) - return 1; - - return XEN_PT_4K_NR_CONTIG; -} - -static DEFINE_SPINLOCK(xen_pt_lock); - -static int xen_pt_update(unsigned long virt, - mfn_t mfn, - /* const on purpose as it is used for TLB flush */ - const unsigned long nr_mfns, - unsigned int flags) -{ - int rc =3D 0; - unsigned long vfn =3D virt >> PAGE_SHIFT; - unsigned long left =3D nr_mfns; - - /* - * For arm32, page-tables are different on each CPUs. Yet, they share - * some common mappings. It is assumed that only common mappings - * will be modified with this function. - * - * XXX: Add a check. - */ - const mfn_t root =3D maddr_to_mfn(READ_SYSREG64(TTBR0_EL2)); - - /* - * The hardware was configured to forbid mapping both writeable and - * executable. - * When modifying/creating mapping (i.e _PAGE_PRESENT is set), - * prevent any update if this happen. - */ - if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) && - !PAGE_XN_MASK(flags) ) - { - mm_printk("Mappings should not be both Writeable and Executable.\n= "); - return -EINVAL; - } - - if ( flags & _PAGE_CONTIG ) - { - mm_printk("_PAGE_CONTIG is an internal only flag.\n"); - return -EINVAL; - } - - if ( !IS_ALIGNED(virt, PAGE_SIZE) ) - { - mm_printk("The virtual address is not aligned to the page-size.\n"= ); - return -EINVAL; - } - - spin_lock(&xen_pt_lock); - - while ( left ) - { - unsigned int order, level, nr_contig, new_flags; - - level =3D xen_pt_mapping_level(vfn, mfn, left, flags); - order =3D XEN_PT_LEVEL_ORDER(level); - - ASSERT(left >=3D BIT(order, UL)); - - /* - * Check if we can set the contiguous mapping and update the - * flags accordingly. - */ - nr_contig =3D xen_pt_check_contig(vfn, mfn, level, left, flags); - new_flags =3D flags | ((nr_contig > 1) ? _PAGE_CONTIG : 0); - - for ( ; nr_contig > 0; nr_contig-- ) - { - rc =3D xen_pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level, - new_flags); - if ( rc ) - break; - - vfn +=3D 1U << order; - if ( !mfn_eq(mfn, INVALID_MFN) ) - mfn =3D mfn_add(mfn, 1U << order); - - left -=3D (1U << order); - } - - if ( rc ) - break; - } - - /* - * The TLBs flush can be safely skipped when a mapping is inserted - * as we don't allow mapping replacement (see xen_pt_check_entry()). - * Although we still need an ISB to ensure any DSB in - * write_pte() will complete because the mapping may be used soon - * after. - * - * For all the other cases, the TLBs will be flushed unconditionally - * even if the mapping has failed. This is because we may have - * partially modified the PT. This will prevent any unexpected - * behavior afterwards. 
- */ - if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) ) - flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns); - else - isb(); - - spin_unlock(&xen_pt_lock); - - return rc; -} - -int map_pages_to_xen(unsigned long virt, - mfn_t mfn, - unsigned long nr_mfns, - unsigned int flags) -{ - return xen_pt_update(virt, mfn, nr_mfns, flags); -} - -int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns) -{ - return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE); -} - -int destroy_xen_mappings(unsigned long s, unsigned long e) -{ - ASSERT(IS_ALIGNED(s, PAGE_SIZE)); - ASSERT(IS_ALIGNED(e, PAGE_SIZE)); - ASSERT(s <=3D e); - return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, 0); -} - -int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int fla= gs) -{ - ASSERT(IS_ALIGNED(s, PAGE_SIZE)); - ASSERT(IS_ALIGNED(e, PAGE_SIZE)); - ASSERT(s <=3D e); - return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, flags); -} - /* Release all __init and __initdata ranges to be reused */ void free_init_memory(void) { diff --git a/xen/arch/arm/mmu/Makefile b/xen/arch/arm/mmu/Makefile new file mode 100644 index 0000000000..bdfc2e077d --- /dev/null +++ b/xen/arch/arm/mmu/Makefile @@ -0,0 +1 @@ +obj-y +=3D pt.o diff --git a/xen/arch/arm/mmu/pt.c b/xen/arch/arm/mmu/pt.c new file mode 100644 index 0000000000..e6fc5ed45a --- /dev/null +++ b/xen/arch/arm/mmu/pt.c @@ -0,0 +1,736 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * xen/arch/arm/mmu/pt.c + * + * MMU system page table related functions. + */ + +#include +#include +#include +#include +#include + +#include + +#ifdef NDEBUG +static inline void +__attribute__ ((__format__ (__printf__, 1, 2))) +mm_printk(const char *fmt, ...) {} +#else +#define mm_printk(fmt, args...) \ + do \ + { \ + dprintk(XENLOG_ERR, fmt, ## args); \ + WARN(); \ + } while (0) +#endif + +#ifdef CONFIG_ARM_64 +#define HYP_PT_ROOT_LEVEL 0 +#else +#define HYP_PT_ROOT_LEVEL 1 +#endif + +static lpae_t *xen_map_table(mfn_t mfn) +{ + /* + * During early boot, map_domain_page() may be unusable. Use the + * PMAP to map temporarily a page-table. + */ + if ( system_state =3D=3D SYS_STATE_early_boot ) + return pmap_map(mfn); + + return map_domain_page(mfn); +} + +static void xen_unmap_table(const lpae_t *table) +{ + /* + * During early boot, xen_map_table() will not use map_domain_page() + * but the PMAP. + */ + if ( system_state =3D=3D SYS_STATE_early_boot ) + pmap_unmap(table); + else + unmap_domain_page(table); +} + +void dump_pt_walk(paddr_t ttbr, paddr_t addr, + unsigned int root_level, + unsigned int nr_root_tables) +{ + static const char *level_strs[4] =3D { "0TH", "1ST", "2ND", "3RD" }; + const mfn_t root_mfn =3D maddr_to_mfn(ttbr); + DECLARE_OFFSETS(offsets, addr); + lpae_t pte, *mapping; + unsigned int level, root_table; + +#ifdef CONFIG_ARM_32 + BUG_ON(root_level < 1); +#endif + BUG_ON(root_level > 3); + + if ( nr_root_tables > 1 ) + { + /* + * Concatenated root-level tables. The table number will be + * the offset at the previous level. It is not possible to + * concatenate a level-0 root. 
+ */ + BUG_ON(root_level =3D=3D 0); + root_table =3D offsets[root_level - 1]; + printk("Using concatenated root table %u\n", root_table); + if ( root_table >=3D nr_root_tables ) + { + printk("Invalid root table offset\n"); + return; + } + } + else + root_table =3D 0; + + mapping =3D xen_map_table(mfn_add(root_mfn, root_table)); + + for ( level =3D root_level; ; level++ ) + { + if ( offsets[level] > XEN_PT_LPAE_ENTRIES ) + break; + + pte =3D mapping[offsets[level]]; + + printk("%s[0x%03x] =3D 0x%"PRIx64"\n", + level_strs[level], offsets[level], pte.bits); + + if ( level =3D=3D 3 || !pte.walk.valid || !pte.walk.table ) + break; + + /* For next iteration */ + xen_unmap_table(mapping); + mapping =3D xen_map_table(lpae_get_mfn(pte)); + } + + xen_unmap_table(mapping); +} + +void dump_hyp_walk(vaddr_t addr) +{ + uint64_t ttbr =3D READ_SYSREG64(TTBR0_EL2); + + printk("Walking Hypervisor VA 0x%"PRIvaddr" " + "on CPU%d via TTBR 0x%016"PRIx64"\n", + addr, smp_processor_id(), ttbr); + + dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1); +} + +lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr) +{ + lpae_t e =3D (lpae_t) { + .pt =3D { + .valid =3D 1, /* Mappings are present */ + .table =3D 0, /* Set to 1 for links and 4k maps */ + .ai =3D attr, + .ns =3D 1, /* Hyp mode is in the non-secure world= */ + .up =3D 1, /* See below */ + .ro =3D 0, /* Assume read-write */ + .af =3D 1, /* No need for access tracking */ + .ng =3D 1, /* Makes TLB flushes easier */ + .contig =3D 0, /* Assume non-contiguous */ + .xn =3D 1, /* No need to execute outside .text */ + .avail =3D 0, /* Reference count for domheap mapping= */ + }}; + /* + * For EL2 stage-1 page table, up (aka AP[1]) is RES1 as the translati= on + * regime applies to only one exception level (see D4.4.4 and G4.6.1 + * in ARM DDI 0487B.a). If this changes, remember to update the + * hard-coded values in head.S too. + */ + + switch ( attr ) + { + case MT_NORMAL_NC: + /* + * ARM ARM: Overlaying the shareability attribute (DDI + * 0406C.b B3-1376 to 1377) + * + * A memory region with a resultant memory type attribute of Norma= l, + * and a resultant cacheability attribute of Inner Non-cacheable, + * Outer Non-cacheable, must have a resultant shareability attribu= te + * of Outer Shareable, otherwise shareability is UNPREDICTABLE. + * + * On ARMv8 sharability is ignored and explicitly treated as Outer + * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable. + */ + e.pt.sh =3D LPAE_SH_OUTER; + break; + case MT_DEVICE_nGnRnE: + case MT_DEVICE_nGnRE: + /* + * Shareability is ignored for non-Normal memory, Outer is as + * good as anything. + * + * On ARMv8 sharability is ignored and explicitly treated as Outer + * Shareable for any device memory type. 
+ */ + e.pt.sh =3D LPAE_SH_OUTER; + break; + default: + e.pt.sh =3D LPAE_SH_INNER; /* Xen mappings are SMP coherent */ + break; + } + + ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK)); + + lpae_set_mfn(e, mfn); + + return e; +} + +/* Map a 4k page in a fixmap entry */ +void set_fixmap(unsigned int map, mfn_t mfn, unsigned int flags) +{ + int res; + + res =3D map_pages_to_xen(FIXMAP_ADDR(map), mfn, 1, flags); + BUG_ON(res !=3D 0); +} + +/* Remove a mapping from a fixmap entry */ +void clear_fixmap(unsigned int map) +{ + int res; + + res =3D destroy_xen_mappings(FIXMAP_ADDR(map), FIXMAP_ADDR(map) + PAGE= _SIZE); + BUG_ON(res !=3D 0); +} + +/* + * This function should only be used to remap device address ranges + * TODO: add a check to verify this assumption + */ +void *ioremap_attr(paddr_t start, size_t len, unsigned int attributes) +{ + mfn_t mfn =3D _mfn(PFN_DOWN(start)); + unsigned int offs =3D start & (PAGE_SIZE - 1); + unsigned int nr =3D PFN_UP(offs + len); + void *ptr =3D __vmap(&mfn, nr, 1, 1, attributes, VMAP_DEFAULT); + + if ( ptr =3D=3D NULL ) + return NULL; + + return ptr + offs; +} + +void *ioremap(paddr_t pa, size_t len) +{ + return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE); +} + +static int create_xen_table(lpae_t *entry) +{ + mfn_t mfn; + void *p; + lpae_t pte; + + if ( system_state !=3D SYS_STATE_early_boot ) + { + struct page_info *pg =3D alloc_domheap_page(NULL, 0); + + if ( pg =3D=3D NULL ) + return -ENOMEM; + + mfn =3D page_to_mfn(pg); + } + else + mfn =3D alloc_boot_pages(1, 1); + + p =3D xen_map_table(mfn); + clear_page(p); + xen_unmap_table(p); + + pte =3D mfn_to_xen_entry(mfn, MT_NORMAL); + pte.pt.table =3D 1; + write_pte(entry, pte); + /* + * No ISB here. It is deferred to xen_pt_update() as the new table + * will not be used for hardware translation table access as part of + * the mapping update. + */ + + return 0; +} + +#define XEN_TABLE_MAP_FAILED 0 +#define XEN_TABLE_SUPER_PAGE 1 +#define XEN_TABLE_NORMAL_PAGE 2 + +/* + * Take the currently mapped table, find the corresponding entry, + * and map the next table, if available. + * + * The read_only parameters indicates whether intermediate tables should + * be allocated when not present. + * + * Return values: + * XEN_TABLE_MAP_FAILED: Either read_only was set and the entry + * was empty, or allocating a new page failed. + * XEN_TABLE_NORMAL_PAGE: next level mapped normally + * XEN_TABLE_SUPER_PAGE: The next entry points to a superpage. + */ +static int xen_pt_next_level(bool read_only, unsigned int level, + lpae_t **table, unsigned int offset) +{ + lpae_t *entry; + int ret; + mfn_t mfn; + + entry =3D *table + offset; + + if ( !lpae_is_valid(*entry) ) + { + if ( read_only ) + return XEN_TABLE_MAP_FAILED; + + ret =3D create_xen_table(entry); + if ( ret ) + return XEN_TABLE_MAP_FAILED; + } + + /* The function xen_pt_next_level is never called at the 3rd level */ + if ( lpae_is_mapping(*entry, level) ) + return XEN_TABLE_SUPER_PAGE; + + mfn =3D lpae_get_mfn(*entry); + + xen_unmap_table(*table); + *table =3D xen_map_table(mfn); + + return XEN_TABLE_NORMAL_PAGE; +} + +/* Sanity check of the entry */ +static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level, + unsigned int flags) +{ + /* Sanity check when modifying an entry. */ + if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) ) + { + /* We don't allow modifying an invalid entry. 
*/ + if ( !lpae_is_valid(entry) ) + { + mm_printk("Modifying invalid entry is not allowed.\n"); + return false; + } + + /* We don't allow modifying a table entry */ + if ( !lpae_is_mapping(entry, level) ) + { + mm_printk("Modifying a table entry is not allowed.\n"); + return false; + } + + /* We don't allow changing memory attributes. */ + if ( entry.pt.ai !=3D PAGE_AI_MASK(flags) ) + { + mm_printk("Modifying memory attributes is not allowed (0x%x ->= 0x%x).\n", + entry.pt.ai, PAGE_AI_MASK(flags)); + return false; + } + + /* We don't allow modifying entry with contiguous bit set. */ + if ( entry.pt.contig ) + { + mm_printk("Modifying entry with contiguous bit set is not allo= wed.\n"); + return false; + } + } + /* Sanity check when inserting a mapping */ + else if ( flags & _PAGE_PRESENT ) + { + /* We should be here with a valid MFN. */ + ASSERT(!mfn_eq(mfn, INVALID_MFN)); + + /* + * We don't allow replacing any valid entry. + * + * Note that the function xen_pt_update() relies on this + * assumption and will skip the TLB flush. The function will need + * to be updated if the check is relaxed. + */ + if ( lpae_is_valid(entry) ) + { + if ( lpae_is_mapping(entry, level) ) + mm_printk("Changing MFN for a valid entry is not allowed (= %#"PRI_mfn" -> %#"PRI_mfn").\n", + mfn_x(lpae_get_mfn(entry)), mfn_x(mfn)); + else + mm_printk("Trying to replace a table with a mapping.\n"); + return false; + } + } + /* Sanity check when removing a mapping. */ + else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) =3D=3D 0 ) + { + /* We should be here with an invalid MFN. */ + ASSERT(mfn_eq(mfn, INVALID_MFN)); + + /* We don't allow removing a table */ + if ( lpae_is_table(entry, level) ) + { + mm_printk("Removing a table is not allowed.\n"); + return false; + } + + /* We don't allow removing a mapping with contiguous bit set. */ + if ( entry.pt.contig ) + { + mm_printk("Removing entry with contiguous bit set is not allow= ed.\n"); + return false; + } + } + /* Sanity check when populating the page-table. No check so far. */ + else + { + ASSERT(flags & _PAGE_POPULATE); + /* We should be here with an invalid MFN */ + ASSERT(mfn_eq(mfn, INVALID_MFN)); + } + + return true; +} + +/* Update an entry at the level @target. */ +static int xen_pt_update_entry(mfn_t root, unsigned long virt, + mfn_t mfn, unsigned int target, + unsigned int flags) +{ + int rc; + unsigned int level; + lpae_t *table; + /* + * The intermediate page tables are read-only when the MFN is not valid + * and we are not populating page table. + * This means we either modify permissions or remove an entry. + */ + bool read_only =3D mfn_eq(mfn, INVALID_MFN) && !(flags & _PAGE_POPULAT= E); + lpae_t pte, *entry; + + /* convenience aliases */ + DECLARE_OFFSETS(offsets, (paddr_t)virt); + + /* _PAGE_POPULATE and _PAGE_PRESENT should never be set together. */ + ASSERT((flags & (_PAGE_POPULATE|_PAGE_PRESENT)) !=3D (_PAGE_POPULATE|_= PAGE_PRESENT)); + + table =3D xen_map_table(root); + for ( level =3D HYP_PT_ROOT_LEVEL; level < target; level++ ) + { + rc =3D xen_pt_next_level(read_only, level, &table, offsets[level]); + if ( rc =3D=3D XEN_TABLE_MAP_FAILED ) + { + /* + * We are here because xen_pt_next_level has failed to map + * the intermediate page table (e.g the table does not exist + * and the pt is read-only). It is a valid case when + * removing a mapping as it may not exist in the page table. + * In this case, just ignore it. 
+ */ + if ( flags & (_PAGE_PRESENT|_PAGE_POPULATE) ) + { + mm_printk("%s: Unable to map level %u\n", __func__, level); + rc =3D -ENOENT; + goto out; + } + else + { + rc =3D 0; + goto out; + } + } + else if ( rc !=3D XEN_TABLE_NORMAL_PAGE ) + break; + } + + if ( level !=3D target ) + { + mm_printk("%s: Shattering superpage is not supported\n", __func__); + rc =3D -EOPNOTSUPP; + goto out; + } + + entry =3D table + offsets[level]; + + rc =3D -EINVAL; + if ( !xen_pt_check_entry(*entry, mfn, level, flags) ) + goto out; + + /* If we are only populating page-table, then we are done. */ + rc =3D 0; + if ( flags & _PAGE_POPULATE ) + goto out; + + /* We are removing the page */ + if ( !(flags & _PAGE_PRESENT) ) + memset(&pte, 0x00, sizeof(pte)); + else + { + /* We are inserting a mapping =3D> Create new pte. */ + if ( !mfn_eq(mfn, INVALID_MFN) ) + { + pte =3D mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags)); + + /* + * First and second level pages set pte.pt.table =3D 0, but + * third level entries set pte.pt.table =3D 1. + */ + pte.pt.table =3D (level =3D=3D 3); + } + else /* We are updating the permission =3D> Copy the current pte. = */ + pte =3D *entry; + + /* Set permission */ + pte.pt.ro =3D PAGE_RO_MASK(flags); + pte.pt.xn =3D PAGE_XN_MASK(flags); + /* Set contiguous bit */ + pte.pt.contig =3D !!(flags & _PAGE_CONTIG); + } + + write_pte(entry, pte); + /* + * No ISB or TLB flush here. They are deferred to xen_pt_update() + * as the entry will not be used as part of the mapping update. + */ + + rc =3D 0; + +out: + xen_unmap_table(table); + + return rc; +} + +/* Return the level where mapping should be done */ +static int xen_pt_mapping_level(unsigned long vfn, mfn_t mfn, unsigned lon= g nr, + unsigned int flags) +{ + unsigned int level; + unsigned long mask; + + /* + * Don't take into account the MFN when removing mapping (i.e + * MFN_INVALID) to calculate the correct target order. + * + * Per the Arm Arm, `vfn` and `mfn` must be both superpage aligned. + * They are or-ed together and then checked against the size of + * each level. + * + * `left` is not included and checked separately to allow + * superpage mapping even if it is not properly aligned (the + * user may have asked to map 2MB + 4k). + */ + mask =3D !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0; + mask |=3D vfn; + + /* + * Always use level 3 mapping unless the caller request block + * mapping. + */ + if ( likely(!(flags & _PAGE_BLOCK)) ) + level =3D 3; + else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) && + (nr >=3D BIT(FIRST_ORDER, UL)) ) + level =3D 1; + else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) && + (nr >=3D BIT(SECOND_ORDER, UL)) ) + level =3D 2; + else + level =3D 3; + + return level; +} + +#define XEN_PT_4K_NR_CONTIG 16 + +/* + * Check whether the contiguous bit can be set. Return the number of + * contiguous entry allowed. If not allowed, return 1. + */ +static unsigned int xen_pt_check_contig(unsigned long vfn, mfn_t mfn, + unsigned int level, unsigned long = left, + unsigned int flags) +{ + unsigned long nr_contig; + + /* + * Allow the contiguous bit to set when the caller requests block + * mapping. + */ + if ( !(flags & _PAGE_BLOCK) ) + return 1; + + /* + * We don't allow to remove mapping with the contiguous bit set. + * So shortcut the logic and directly return 1. + */ + if ( mfn_eq(mfn, INVALID_MFN) ) + return 1; + + /* + * The number of contiguous entries varies depending on the page + * granularity used. The logic below assumes 4KB. 
+ */ + BUILD_BUG_ON(PAGE_SIZE !=3D SZ_4K); + + /* + * In order to enable the contiguous bit, we should have enough entries + * to map left and both the virtual and physical address should be + * aligned to the size of 16 translation tables entries. + */ + nr_contig =3D BIT(XEN_PT_LEVEL_ORDER(level), UL) * XEN_PT_4K_NR_CONTIG; + + if ( (left < nr_contig) || ((mfn_x(mfn) | vfn) & (nr_contig - 1)) ) + return 1; + + return XEN_PT_4K_NR_CONTIG; +} + +static DEFINE_SPINLOCK(xen_pt_lock); + +static int xen_pt_update(unsigned long virt, + mfn_t mfn, + /* const on purpose as it is used for TLB flush */ + const unsigned long nr_mfns, + unsigned int flags) +{ + int rc =3D 0; + unsigned long vfn =3D virt >> PAGE_SHIFT; + unsigned long left =3D nr_mfns; + + /* + * For arm32, page-tables are different on each CPUs. Yet, they share + * some common mappings. It is assumed that only common mappings + * will be modified with this function. + * + * XXX: Add a check. + */ + const mfn_t root =3D maddr_to_mfn(READ_SYSREG64(TTBR0_EL2)); + + /* + * The hardware was configured to forbid mapping both writeable and + * executable. + * When modifying/creating mapping (i.e _PAGE_PRESENT is set), + * prevent any update if this happen. + */ + if ( (flags & _PAGE_PRESENT) && !PAGE_RO_MASK(flags) && + !PAGE_XN_MASK(flags) ) + { + mm_printk("Mappings should not be both Writeable and Executable.\n= "); + return -EINVAL; + } + + if ( flags & _PAGE_CONTIG ) + { + mm_printk("_PAGE_CONTIG is an internal only flag.\n"); + return -EINVAL; + } + + if ( !IS_ALIGNED(virt, PAGE_SIZE) ) + { + mm_printk("The virtual address is not aligned to the page-size.\n"= ); + return -EINVAL; + } + + spin_lock(&xen_pt_lock); + + while ( left ) + { + unsigned int order, level, nr_contig, new_flags; + + level =3D xen_pt_mapping_level(vfn, mfn, left, flags); + order =3D XEN_PT_LEVEL_ORDER(level); + + ASSERT(left >=3D BIT(order, UL)); + + /* + * Check if we can set the contiguous mapping and update the + * flags accordingly. + */ + nr_contig =3D xen_pt_check_contig(vfn, mfn, level, left, flags); + new_flags =3D flags | ((nr_contig > 1) ? _PAGE_CONTIG : 0); + + for ( ; nr_contig > 0; nr_contig-- ) + { + rc =3D xen_pt_update_entry(root, vfn << PAGE_SHIFT, mfn, level, + new_flags); + if ( rc ) + break; + + vfn +=3D 1U << order; + if ( !mfn_eq(mfn, INVALID_MFN) ) + mfn =3D mfn_add(mfn, 1U << order); + + left -=3D (1U << order); + } + + if ( rc ) + break; + } + + /* + * The TLBs flush can be safely skipped when a mapping is inserted + * as we don't allow mapping replacement (see xen_pt_check_entry()). + * Although we still need an ISB to ensure any DSB in + * write_pte() will complete because the mapping may be used soon + * after. + * + * For all the other cases, the TLBs will be flushed unconditionally + * even if the mapping has failed. This is because we may have + * partially modified the PT. This will prevent any unexpected + * behavior afterwards. 
+ */ + if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) ) + flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns); + else + isb(); + + spin_unlock(&xen_pt_lock); + + return rc; +} + +int map_pages_to_xen(unsigned long virt, + mfn_t mfn, + unsigned long nr_mfns, + unsigned int flags) +{ + return xen_pt_update(virt, mfn, nr_mfns, flags); +} + +int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns) +{ + return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE); +} + +int destroy_xen_mappings(unsigned long s, unsigned long e) +{ + ASSERT(IS_ALIGNED(s, PAGE_SIZE)); + ASSERT(IS_ALIGNED(e, PAGE_SIZE)); + ASSERT(s <=3D e); + return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, 0); +} + +int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int fla= gs) +{ + ASSERT(IS_ALIGNED(s, PAGE_SIZE)); + ASSERT(IS_ALIGNED(e, PAGE_SIZE)); + ASSERT(s <=3D e); + return xen_pt_update(s, INVALID_MFN, (e - s) >> PAGE_SHIFT, flags); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.25.1 From nobody Wed Nov 27 11:54:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 169802728381691.07452131474349; Sun, 22 Oct 2023 19:14:43 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.620921.966778 (Exim 4.92) (envelope-from ) id 1qukSG-0001Uo-1v; Mon, 23 Oct 2023 02:14:08 +0000 Received: by outflank-mailman (output) from mailman id 620921.966778; Mon, 23 Oct 2023 02:14:08 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSF-0001Uh-VR; Mon, 23 Oct 2023 02:14:07 +0000 Received: by outflank-mailman (input) for mailman id 620921; Mon, 23 Oct 2023 02:14:06 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSE-0001F1-Qy for xen-devel@lists.xenproject.org; Mon, 23 Oct 2023 02:14:06 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id d777c8e2-7149-11ee-98d5-6d05b1d4d9a1; Mon, 23 Oct 2023 04:14:05 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EE643FEC; Sun, 22 Oct 2023 19:14:45 -0700 (PDT) Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com [10.169.190.5]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 1D1363F738; Sun, 22 Oct 2023 19:14:01 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: d777c8e2-7149-11ee-98d5-6d05b1d4d9a1 From: Henry 
Wang To: xen-devel@lists.xenproject.org Cc: Henry Wang , Stefano Stabellini , Julien Grall , Bertrand Marquis , Wei Chen , Volodymyr Babchuk , Penny Zheng , Julien Grall Subject: [PATCH v8 2/8] xen/arm: Split MMU system SMP MM bringup code to mmu/smpboot.c Date: Mon, 23 Oct 2023 10:13:39 +0800 Message-Id: <20231023021345.1731436-3-Henry.Wang@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com> References: <20231023021345.1731436-1-Henry.Wang@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1698027284406100005 Content-Type: text/plain; charset="utf-8" Move the code related to secondary page table initialization, clear boot page tables and the global variable definitions of these boot page tables from arch/arm/mm.c to arch/arm/mmu/smpboot.c Since arm32 global variable cpu0_pgtable will be used by both arch/arm/mm.c and arch/arm/mmu/smpboot.c, to avoid exporting this variable, change the variable usage in arch/arm/mmu/smpboot.c to per_cpu(xen_pgtable, 0). To avoid exposing global variable phys_offset, use virt_to_maddr() to calculate init_ttbr for arm64. Take the opportunity to fix the in-code comment coding styles when possible. Signed-off-by: Henry Wang Signed-off-by: Penny Zheng Reviewed-by: Julien Grall --- v8: - Drop the unnecessary cast in virt_to_maddr((uintptr_t) xen_pgtable); - Add Julien's Reviewed-by tag. v7: - Do not export cpu0_pgtable, replace the variable usage in arch/arm/mmu/smpboot.c to per_cpu(xen_pgtable, 0). - Also move global variable init_ttbr to arch/arm/mmu/smpboot.c. - Use virt_to_maddr() instead of phys_offset to calculate init_ttbr in arm64 implementation of init_secondary_pagetables(). v6: - Rework the original patch "[v5,07/13] xen/arm: Extract MMU-specific code", only split the smpboot related code out in this patch. --- xen/arch/arm/mm.c | 104 ------------------------------- xen/arch/arm/mmu/Makefile | 1 + xen/arch/arm/mmu/smpboot.c | 124 +++++++++++++++++++++++++++++++++++++ 3 files changed, 125 insertions(+), 104 deletions(-) create mode 100644 xen/arch/arm/mmu/smpboot.c diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index fd02493564..b7eb3a6e08 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -27,39 +27,6 @@ #undef mfn_to_virt #define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn)) =20 -/* Static start-of-day pagetables that we use before the allocators - * are up. These are used by all CPUs during bringup before switching - * to the CPUs own pagetables. - * - * These pagetables have a very simple structure. They include: - * - XEN_VIRT_SIZE worth of L3 mappings of xen at XEN_VIRT_START, boot_fi= rst - * and boot_second are used to populate the tables down to boot_third - * which contains the actual mapping. - * - a 1:1 mapping of xen at its current physical address. This uses a - * section mapping at whichever of boot_{pgtable,first,second} - * covers that physical address. - * - * For the boot CPU these mappings point to the address where Xen was - * loaded by the bootloader. For secondary CPUs they point to the - * relocated copy of Xen for the benefit of secondary CPUs. - * - * In addition to the above for the boot CPU the device-tree is - * initially mapped in the boot misc slot. This mapping is not present - * for secondary CPUs. - * - * Finally, if EARLY_PRINTK is enabled then xen_fixmap will be mapped - * by the CPU once it has moved off the 1:1 mapping. 
- */ -DEFINE_BOOT_PAGE_TABLE(boot_pgtable); -#ifdef CONFIG_ARM_64 -DEFINE_BOOT_PAGE_TABLE(boot_first); -DEFINE_BOOT_PAGE_TABLE(boot_first_id); -#endif -DEFINE_BOOT_PAGE_TABLE(boot_second_id); -DEFINE_BOOT_PAGE_TABLE(boot_third_id); -DEFINE_BOOT_PAGE_TABLE(boot_second); -DEFINE_BOOT_PAGE_TABLES(boot_third, XEN_NR_ENTRIES(2)); - /* Main runtime page tables */ =20 /* @@ -94,9 +61,6 @@ DEFINE_BOOT_PAGE_TABLE(xen_fixmap); */ static DEFINE_PAGE_TABLES(xen_xenmap, XEN_NR_ENTRIES(2)); =20 -/* Non-boot CPUs use this to find the correct pagetables. */ -uint64_t init_ttbr; - static paddr_t phys_offset; =20 /* Limits of the Xen heap */ @@ -284,13 +248,6 @@ static void xen_pt_enforce_wnx(void) flush_xen_tlb_local(); } =20 -/* Clear a translation table and clean & invalidate the cache */ -static void clear_table(void *table) -{ - clear_page(table); - clean_and_invalidate_dcache_va_range(table, PAGE_SIZE); -} - /* Boot-time pagetable setup. * Changes here may need matching changes in head.S */ void __init setup_pagetables(unsigned long boot_phys_offset) @@ -369,67 +326,6 @@ void __init setup_pagetables(unsigned long boot_phys_o= ffset) #endif } =20 -static void clear_boot_pagetables(void) -{ - /* - * Clear the copy of the boot pagetables. Each secondary CPU - * rebuilds these itself (see head.S). - */ - clear_table(boot_pgtable); -#ifdef CONFIG_ARM_64 - clear_table(boot_first); - clear_table(boot_first_id); -#endif - clear_table(boot_second); - clear_table(boot_third); -} - -#ifdef CONFIG_ARM_64 -int init_secondary_pagetables(int cpu) -{ - clear_boot_pagetables(); - - /* Set init_ttbr for this CPU coming up. All CPus share a single setof - * pagetables, but rewrite it each time for consistency with 32 bit. */ - init_ttbr =3D (uintptr_t) xen_pgtable + phys_offset; - clean_dcache(init_ttbr); - return 0; -} -#else -int init_secondary_pagetables(int cpu) -{ - lpae_t *first; - - first =3D alloc_xenheap_page(); /* root =3D=3D first level on 32-bit 3= -level trie */ - - if ( !first ) - { - printk("CPU%u: Unable to allocate the first page-table\n", cpu); - return -ENOMEM; - } - - /* Initialise root pagetable from root of boot tables */ - memcpy(first, cpu0_pgtable, PAGE_SIZE); - per_cpu(xen_pgtable, cpu) =3D first; - - if ( !init_domheap_mappings(cpu) ) - { - printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu); - per_cpu(xen_pgtable, cpu) =3D NULL; - free_xenheap_page(first); - return -ENOMEM; - } - - clear_boot_pagetables(); - - /* Set init_ttbr for this CPU coming up */ - init_ttbr =3D __pa(first); - clean_dcache(init_ttbr); - - return 0; -} -#endif - /* MMU setup for secondary CPUS (which already have paging enabled) */ void mmu_init_secondary_cpu(void) { diff --git a/xen/arch/arm/mmu/Makefile b/xen/arch/arm/mmu/Makefile index bdfc2e077d..0e82015ee1 100644 --- a/xen/arch/arm/mmu/Makefile +++ b/xen/arch/arm/mmu/Makefile @@ -1 +1,2 @@ obj-y +=3D pt.o +obj-y +=3D smpboot.o diff --git a/xen/arch/arm/mmu/smpboot.c b/xen/arch/arm/mmu/smpboot.c new file mode 100644 index 0000000000..8b6a09f843 --- /dev/null +++ b/xen/arch/arm/mmu/smpboot.c @@ -0,0 +1,124 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * xen/arch/arm/mmu/smpboot.c + * + * MMU system secondary CPUs MM bringup code. + */ + +#include + +/* + * Static start-of-day pagetables that we use before the allocators + * are up. These are used by all CPUs during bringup before switching + * to the CPUs own pagetables. + * + * These pagetables have a very simple structure. 
They include: + * - XEN_VIRT_SIZE worth of L3 mappings of xen at XEN_VIRT_START, boot_fi= rst + * and boot_second are used to populate the tables down to boot_third + * which contains the actual mapping. + * - a 1:1 mapping of xen at its current physical address. This uses a + * section mapping at whichever of boot_{pgtable,first,second} + * covers that physical address. + * + * For the boot CPU these mappings point to the address where Xen was + * loaded by the bootloader. For secondary CPUs they point to the + * relocated copy of Xen for the benefit of secondary CPUs. + * + * In addition to the above for the boot CPU the device-tree is + * initially mapped in the boot misc slot. This mapping is not present + * for secondary CPUs. + * + * Finally, if EARLY_PRINTK is enabled then xen_fixmap will be mapped + * by the CPU once it has moved off the 1:1 mapping. + */ +DEFINE_BOOT_PAGE_TABLE(boot_pgtable); +#ifdef CONFIG_ARM_64 +DEFINE_BOOT_PAGE_TABLE(boot_first); +DEFINE_BOOT_PAGE_TABLE(boot_first_id); +#endif +DEFINE_BOOT_PAGE_TABLE(boot_second_id); +DEFINE_BOOT_PAGE_TABLE(boot_third_id); +DEFINE_BOOT_PAGE_TABLE(boot_second); +DEFINE_BOOT_PAGE_TABLES(boot_third, XEN_NR_ENTRIES(2)); + +/* Non-boot CPUs use this to find the correct pagetables. */ +uint64_t init_ttbr; + +/* Clear a translation table and clean & invalidate the cache */ +static void clear_table(void *table) +{ + clear_page(table); + clean_and_invalidate_dcache_va_range(table, PAGE_SIZE); +} + +static void clear_boot_pagetables(void) +{ + /* + * Clear the copy of the boot pagetables. Each secondary CPU + * rebuilds these itself (see head.S). + */ + clear_table(boot_pgtable); +#ifdef CONFIG_ARM_64 + clear_table(boot_first); + clear_table(boot_first_id); +#endif + clear_table(boot_second); + clear_table(boot_third); +} + +#ifdef CONFIG_ARM_64 +int init_secondary_pagetables(int cpu) +{ + clear_boot_pagetables(); + + /* + * Set init_ttbr for this CPU coming up. All CPUs share a single setof + * pagetables, but rewrite it each time for consistency with 32 bit. 
+ */ + init_ttbr =3D virt_to_maddr(xen_pgtable); + clean_dcache(init_ttbr); + return 0; +} +#else +int init_secondary_pagetables(int cpu) +{ + lpae_t *first; + + first =3D alloc_xenheap_page(); /* root =3D=3D first level on 32-bit 3= -level trie */ + + if ( !first ) + { + printk("CPU%u: Unable to allocate the first page-table\n", cpu); + return -ENOMEM; + } + + /* Initialise root pagetable from root of boot tables */ + memcpy(first, per_cpu(xen_pgtable, 0), PAGE_SIZE); + per_cpu(xen_pgtable, cpu) =3D first; + + if ( !init_domheap_mappings(cpu) ) + { + printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu); + per_cpu(xen_pgtable, cpu) =3D NULL; + free_xenheap_page(first); + return -ENOMEM; + } + + clear_boot_pagetables(); + + /* Set init_ttbr for this CPU coming up */ + init_ttbr =3D __pa(first); + clean_dcache(init_ttbr); + + return 0; +} +#endif + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.25.1 From nobody Wed Nov 27 11:54:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1698027280219105.47845713894105; Sun, 22 Oct 2023 19:14:40 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.620923.966797 (Exim 4.92) (envelope-from ) id 1qukSK-00021s-Ku; Mon, 23 Oct 2023 02:14:12 +0000 Received: by outflank-mailman (output) from mailman id 620923.966797; Mon, 23 Oct 2023 02:14:12 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSK-00021j-Hd; Mon, 23 Oct 2023 02:14:12 +0000 Received: by outflank-mailman (input) for mailman id 620923; Mon, 23 Oct 2023 02:14:10 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSI-0001F1-PE for xen-devel@lists.xenproject.org; Mon, 23 Oct 2023 02:14:10 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id d9e3efe5-7149-11ee-98d5-6d05b1d4d9a1; Mon, 23 Oct 2023 04:14:10 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 161E32F4; Sun, 22 Oct 2023 19:14:50 -0700 (PDT) Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com [10.169.190.5]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 393EC3F738; Sun, 22 Oct 2023 19:14:05 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: d9e3efe5-7149-11ee-98d5-6d05b1d4d9a1 From: Henry Wang To: xen-devel@lists.xenproject.org Cc: Henry Wang , Stefano Stabellini , Julien Grall , Bertrand 
Marquis , Wei Chen , Penny Zheng , Volodymyr Babchuk , Julien Grall Subject: [PATCH v8 3/8] xen/arm: Fold mmu_init_secondary_cpu() to head.S Date: Mon, 23 Oct 2023 10:13:40 +0800 Message-Id: <20231023021345.1731436-4-Henry.Wang@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com> References: <20231023021345.1731436-1-Henry.Wang@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1698027282355100001 Content-Type: text/plain; charset="utf-8" Currently mmu_init_secondary_cpu() only enforces the page table should not contain mapping that are both Writable and eXecutables after boot. To ease the arch/arm/mm.c split work, fold this function to head.S. For arm32, introduce an assembly macro pt_enforce_wxn. The macro is called before secondary CPUs jumping into the C world. For arm64, set the SCTLR_Axx_ELx_WXN flag right when the MMU is enabled. This would avoid the extra TLB flush and SCTLR dance. Signed-off-by: Henry Wang Co-authored-by: Julien Grall Signed-off-by: Julien Grall --- v8: - Change the setting of SCTLR_Axx_ELx_WXN for arm64 to set the flag right when the MMU is enabled. v7: - No change. v6: - New patch. --- xen/arch/arm/arm32/head.S | 20 ++++++++++++++++++++ xen/arch/arm/arm64/mmu/head.S | 18 +++++++++++------- xen/arch/arm/include/asm/mm.h | 2 -- xen/arch/arm/mm.c | 6 ------ xen/arch/arm/smpboot.c | 2 -- 5 files changed, 31 insertions(+), 17 deletions(-) diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S index 33b038e7e0..39218cf15f 100644 --- a/xen/arch/arm/arm32/head.S +++ b/xen/arch/arm/arm32/head.S @@ -83,6 +83,25 @@ isb .endm =20 +/* + * Enforce Xen page-tables do not contain mapping that are both + * Writable and eXecutables. + * + * This should be called on each secondary CPU. + */ +.macro pt_enforce_wxn tmp + mrc CP32(\tmp, HSCTLR) + orr \tmp, \tmp, #SCTLR_Axx_ELx_WXN + dsb + mcr CP32(\tmp, HSCTLR) + /* + * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized + * before flushing the TLBs. + */ + isb + flush_xen_tlb_local \tmp +.endm + /* * Common register usage in this file: * r0 - @@ -254,6 +273,7 @@ secondary_switched: /* Use a virtual address to access the UART. */ mov_w r11, EARLY_UART_VIRTUAL_ADDRESS #endif + pt_enforce_wxn r0 PRINT("- Ready -\r\n") /* Jump to C world */ mov_w r2, start_secondary diff --git a/xen/arch/arm/arm64/mmu/head.S b/xen/arch/arm/arm64/mmu/head.S index 88075ef083..df06cefbbe 100644 --- a/xen/arch/arm/arm64/mmu/head.S +++ b/xen/arch/arm/arm64/mmu/head.S @@ -264,10 +264,11 @@ ENDPROC(create_page_tables) * Inputs: * x0 : Physical address of the page tables. * - * Clobbers x0 - x4 + * Clobbers x0 - x6 */ enable_mmu: mov x4, x0 + mov x5, x1 PRINT("- Turning on paging -\r\n") =20 /* @@ -283,6 +284,7 @@ enable_mmu: mrs x0, SCTLR_EL2 orr x0, x0, #SCTLR_Axx_ELx_M /* Enable MMU */ orr x0, x0, #SCTLR_Axx_ELx_C /* Enable D-cache */ + orr x0, x0, x5 /* Enable extra flags */ dsb sy /* Flush PTE writes and finish reads = */ msr SCTLR_EL2, x0 /* now paging is enabled */ isb /* Now, flush the icache */ @@ -297,16 +299,17 @@ ENDPROC(enable_mmu) * Inputs: * lr : Virtual address to return to. * - * Clobbers x0 - x5 + * Clobbers x0 - x6 */ ENTRY(enable_secondary_cpu_mm) - mov x5, lr + mov x6, lr =20 load_paddr x0, init_ttbr ldr x0, [x0] =20 + mov x1, #SCTLR_Axx_ELx_WXN /* Enable WxN from the start */ bl enable_mmu - mov lr, x5 + mov lr, x6 =20 /* Return to the virtual address requested by the caller. 
*/ ret @@ -320,14 +323,15 @@ ENDPROC(enable_secondary_cpu_mm) * Inputs: * lr : Virtual address to return to. * - * Clobbers x0 - x5 + * Clobbers x0 - x6 */ ENTRY(enable_boot_cpu_mm) - mov x5, lr + mov x6, lr =20 bl create_page_tables load_paddr x0, boot_pgtable =20 + mov x1, #0 /* No extra SCTLR flags */ bl enable_mmu =20 /* @@ -337,7 +341,7 @@ ENTRY(enable_boot_cpu_mm) ldr x0, =3D1f br x0 1: - mov lr, x5 + mov lr, x6 /* * The 1:1 map may clash with other parts of the Xen virtual memory * layout. As it is not used anymore, remove it completely to diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h index d25e59f828..163d22ecd3 100644 --- a/xen/arch/arm/include/asm/mm.h +++ b/xen/arch/arm/include/asm/mm.h @@ -214,8 +214,6 @@ extern void remove_early_mappings(void); /* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr = to the * new page table */ extern int init_secondary_pagetables(int cpu); -/* Switch secondary CPUS to its own pagetables and finalise MMU setup */ -extern void mmu_init_secondary_cpu(void); /* * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous, * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index b7eb3a6e08..923a90925c 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -326,12 +326,6 @@ void __init setup_pagetables(unsigned long boot_phys_o= ffset) #endif } =20 -/* MMU setup for secondary CPUS (which already have paging enabled) */ -void mmu_init_secondary_cpu(void) -{ - xen_pt_enforce_wnx(); -} - #ifdef CONFIG_ARM_32 /* * Set up the direct-mapped xenheap: diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c index ec76de3cac..beb137d06e 100644 --- a/xen/arch/arm/smpboot.c +++ b/xen/arch/arm/smpboot.c @@ -361,8 +361,6 @@ void start_secondary(void) */ update_system_features(¤t_cpu_data); =20 - mmu_init_secondary_cpu(); - gic_init_secondary_cpu(); =20 set_current(idle_vcpu[cpuid]); --=20 2.25.1 From nobody Wed Nov 27 11:54:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1698027290292227.24843137567473; Sun, 22 Oct 2023 19:14:50 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.620925.966808 (Exim 4.92) (envelope-from ) id 1qukSP-0002NT-Sy; Mon, 23 Oct 2023 02:14:17 +0000 Received: by outflank-mailman (output) from mailman id 620925.966808; Mon, 23 Oct 2023 02:14:17 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSP-0002NI-Pq; Mon, 23 Oct 2023 02:14:17 +0000 Received: by outflank-mailman (input) for mailman id 620925; Mon, 23 Oct 2023 02:14:16 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSO-0001F1-3V for xen-devel@lists.xenproject.org; Mon, 23 Oct 2023 02:14:16 +0000 Received: from foss.arm.com 
(foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id dc370c77-7149-11ee-98d5-6d05b1d4d9a1; Mon, 23 Oct 2023 04:14:13 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1E4622F4; Sun, 22 Oct 2023 19:14:54 -0700 (PDT) Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com [10.169.190.5]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 3FE1A3F738; Sun, 22 Oct 2023 19:14:09 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: dc370c77-7149-11ee-98d5-6d05b1d4d9a1 From: Henry Wang To: xen-devel@lists.xenproject.org Cc: Henry Wang , Stefano Stabellini , Julien Grall , Bertrand Marquis , Wei Chen , Volodymyr Babchuk , Penny Zheng , Julien Grall Subject: [PATCH v8 4/8] xen/arm: Extract MMU-specific MM code Date: Mon, 23 Oct 2023 10:13:41 +0800 Message-Id: <20231023021345.1731436-5-Henry.Wang@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com> References: <20231023021345.1731436-1-Henry.Wang@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1698027292399100003 Content-Type: text/plain; charset="utf-8" Currently, most of the code is in arm/mm.{c,h} and arm/arm64/mm.c is MMU-specific. To make the MM code extendable, this commit extracts the MMU-specific MM code. Extract the boot CPU MM bringup code from arm/mm.c to mmu/setup.c. While moving, mark pte_of_xenaddr() as __init to make clear that this helper is only intended to be used during early boot. Move arm/arm64/mm.c to arm/arm64/mmu/mm.c. Since the function setup_directmap_mappings() has different implementations between arm32 and arm64, move their arch-specific implementation to arch-specific arm{32,64}/mmu/mm.c instead using #ifdef again. For header files, move MMU-related function declarations in asm/mm.h, declaration of global variable init_ttbr and the declaration of dump_pt_walk() in asm/page.h to asm/mmu/mm.h Also modify the build system (Makefiles in this case) to pick above mentioned code changes. Take the opportunity to fix the in-code comment coding styles when possible, and drop the unnecessary #include headers in the original arm/mm.c. Signed-off-by: Henry Wang Signed-off-by: Penny Zheng Acked-by: Julien Grall --- v8: - Add Julien's Acked-by tag. v7: - Move pte_of_xenaddr() to mmu/setup.c and mark it as __init. - Also move the declaration of init_ttbr to asm/mmu/mm.h. 
v6: - Rework the original patch "[v5,07/13] xen/arm: Extract MMU-specific code" --- xen/arch/arm/arm32/Makefile | 1 + xen/arch/arm/arm32/mmu/Makefile | 1 + xen/arch/arm/arm32/mmu/mm.c | 31 +++ xen/arch/arm/arm64/Makefile | 1 - xen/arch/arm/arm64/mmu/Makefile | 1 + xen/arch/arm/arm64/{ =3D> mmu}/mm.c | 37 +++ xen/arch/arm/include/asm/mm.h | 25 +- xen/arch/arm/include/asm/mmu/mm.h | 50 ++++ xen/arch/arm/include/asm/page.h | 15 -- xen/arch/arm/mm.c | 385 ------------------------------ xen/arch/arm/mmu/Makefile | 1 + xen/arch/arm/mmu/setup.c | 349 +++++++++++++++++++++++++++ 12 files changed, 477 insertions(+), 420 deletions(-) create mode 100644 xen/arch/arm/arm32/mmu/Makefile create mode 100644 xen/arch/arm/arm32/mmu/mm.c rename xen/arch/arm/arm64/{ =3D> mmu}/mm.c (75%) create mode 100644 xen/arch/arm/include/asm/mmu/mm.h create mode 100644 xen/arch/arm/mmu/setup.c diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile index 520fb42054..40a2b4803f 100644 --- a/xen/arch/arm/arm32/Makefile +++ b/xen/arch/arm/arm32/Makefile @@ -1,4 +1,5 @@ obj-y +=3D lib/ +obj-$(CONFIG_MMU) +=3D mmu/ =20 obj-$(CONFIG_EARLY_PRINTK) +=3D debug.o obj-y +=3D domctl.o diff --git a/xen/arch/arm/arm32/mmu/Makefile b/xen/arch/arm/arm32/mmu/Makef= ile new file mode 100644 index 0000000000..b18cec4836 --- /dev/null +++ b/xen/arch/arm/arm32/mmu/Makefile @@ -0,0 +1 @@ +obj-y +=3D mm.o diff --git a/xen/arch/arm/arm32/mmu/mm.c b/xen/arch/arm/arm32/mmu/mm.c new file mode 100644 index 0000000000..647baf4a81 --- /dev/null +++ b/xen/arch/arm/arm32/mmu/mm.c @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include + +/* + * Set up the direct-mapped xenheap: + * up to 1GB of contiguous, always-mapped memory. + */ +void __init setup_directmap_mappings(unsigned long base_mfn, + unsigned long nr_mfns) +{ + int rc; + + rc =3D map_pages_to_xen(XENHEAP_VIRT_START, _mfn(base_mfn), nr_mfns, + PAGE_HYPERVISOR_RW | _PAGE_BLOCK); + if ( rc ) + panic("Unable to setup the directmap mappings.\n"); + + /* Record where the directmap is, for translation routines. */ + directmap_virt_end =3D XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile index f89d5fb4fb..72161ff22e 100644 --- a/xen/arch/arm/arm64/Makefile +++ b/xen/arch/arm/arm64/Makefile @@ -11,7 +11,6 @@ obj-y +=3D entry.o obj-y +=3D head.o obj-y +=3D insn.o obj-$(CONFIG_LIVEPATCH) +=3D livepatch.o -obj-y +=3D mm.o obj-y +=3D smc.o obj-y +=3D smpboot.o obj-$(CONFIG_ARM64_SVE) +=3D sve.o sve-asm.o diff --git a/xen/arch/arm/arm64/mmu/Makefile b/xen/arch/arm/arm64/mmu/Makef= ile index 3340058c08..a8a750a3d0 100644 --- a/xen/arch/arm/arm64/mmu/Makefile +++ b/xen/arch/arm/arm64/mmu/Makefile @@ -1 +1,2 @@ obj-y +=3D head.o +obj-y +=3D mm.o diff --git a/xen/arch/arm/arm64/mm.c b/xen/arch/arm/arm64/mmu/mm.c similarity index 75% rename from xen/arch/arm/arm64/mm.c rename to xen/arch/arm/arm64/mmu/mm.c index 78b7c7eb00..36073041ed 100644 --- a/xen/arch/arm/arm64/mm.c +++ b/xen/arch/arm/arm64/mmu/mm.c @@ -151,6 +151,43 @@ void __init switch_ttbr(uint64_t ttbr) update_identity_mapping(false); } =20 +/* Map the region in the directmap area. */ +void __init setup_directmap_mappings(unsigned long base_mfn, + unsigned long nr_mfns) +{ + int rc; + + /* First call sets the directmap physical and virtual offset. 
*/ + if ( mfn_eq(directmap_mfn_start, INVALID_MFN) ) + { + unsigned long mfn_gb =3D base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) -= 1); + + directmap_mfn_start =3D _mfn(base_mfn); + directmap_base_pdx =3D mfn_to_pdx(_mfn(base_mfn)); + /* + * The base address may not be aligned to the first level + * size (e.g. 1GB when using 4KB pages). This would prevent + * superpage mappings for all the regions because the virtual + * address and machine address should both be suitably aligned. + * + * Prevent that by offsetting the start of the directmap virtual + * address. + */ + directmap_virt_start =3D DIRECTMAP_VIRT_START + + (base_mfn - mfn_gb) * PAGE_SIZE; + } + + if ( base_mfn < mfn_x(directmap_mfn_start) ) + panic("cannot add directmap mapping at %lx below heap start %lx\n", + base_mfn, mfn_x(directmap_mfn_start)); + + rc =3D map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn), + _mfn(base_mfn), nr_mfns, + PAGE_HYPERVISOR_RW | _PAGE_BLOCK); + if ( rc ) + panic("Unable to setup the directmap mappings.\n"); +} + /* * Local variables: * mode: C diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h index 163d22ecd3..d23ebc7df6 100644 --- a/xen/arch/arm/include/asm/mm.h +++ b/xen/arch/arm/include/asm/mm.h @@ -14,6 +14,12 @@ # error "unknown ARM variant" #endif =20 +#if defined(CONFIG_MMU) +# include +#else +# error "Unknown memory management layout" +#endif + /* Align Xen to a 2 MiB boundary. */ #define XEN_PADDR_ALIGN (1 << 21) =20 @@ -165,16 +171,6 @@ struct page_info #define _PGC_need_scrub _PGC_allocated #define PGC_need_scrub PGC_allocated =20 -/* Non-boot CPUs use this to find the correct pagetables. */ -extern uint64_t init_ttbr; - -extern mfn_t directmap_mfn_start, directmap_mfn_end; -extern vaddr_t directmap_virt_end; -#ifdef CONFIG_ARM_64 -extern vaddr_t directmap_virt_start; -extern unsigned long directmap_base_pdx; -#endif - #ifdef CONFIG_ARM_32 #define is_xen_heap_page(page) is_xen_heap_mfn(page_to_mfn(page)) #define is_xen_heap_mfn(mfn) ({ \ @@ -197,7 +193,6 @@ extern unsigned long directmap_base_pdx; =20 #define maddr_get_owner(ma) (page_get_owner(maddr_to_page((ma)))) =20 -#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START) /* PDX of the first page in the frame table. */ extern unsigned long frametable_base_pdx; =20 @@ -207,19 +202,11 @@ extern unsigned long frametable_base_pdx; extern void setup_pagetables(unsigned long boot_phys_offset); /* Map FDT in boot pagetable */ extern void *early_fdt_map(paddr_t fdt_paddr); -/* Switch to a new root page-tables */ -extern void switch_ttbr(uint64_t ttbr); /* Remove early mappings */ extern void remove_early_mappings(void); /* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr = to the * new page table */ extern int init_secondary_pagetables(int cpu); -/* - * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous, - * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. - * For Arm64, map the region in the directmap area. 
- */ -extern void setup_directmap_mappings(unsigned long base_mfn, unsigned long= nr_mfns); /* Map a frame table to cover physical addresses ps through pe */ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe); /* map a physical range in virtual memory */ diff --git a/xen/arch/arm/include/asm/mmu/mm.h b/xen/arch/arm/include/asm/m= mu/mm.h new file mode 100644 index 0000000000..439ae314fd --- /dev/null +++ b/xen/arch/arm/include/asm/mmu/mm.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef __ARM_MMU_MM_H__ +#define __ARM_MMU_MM_H__ + +/* Non-boot CPUs use this to find the correct pagetables. */ +extern uint64_t init_ttbr; + +extern mfn_t directmap_mfn_start, directmap_mfn_end; +extern vaddr_t directmap_virt_end; +#ifdef CONFIG_ARM_64 +extern vaddr_t directmap_virt_start; +extern unsigned long directmap_base_pdx; +#endif + +#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START) + +/* + * Print a walk of a page table or p2m + * + * ttbr is the base address register (TTBR0_EL2 or VTTBR_EL2) + * addr is the PA or IPA to translate + * root_level is the starting level of the page table + * (e.g. TCR_EL2.SL0 or VTCR_EL2.SL0 ) + * nr_root_tables is the number of concatenated tables at the root. + * this can only be !=3D 1 for P2M walks starting at the first or + * subsequent level. + */ +void dump_pt_walk(paddr_t ttbr, paddr_t addr, + unsigned int root_level, + unsigned int nr_root_tables); + +/* Switch to a new root page-tables */ +extern void switch_ttbr(uint64_t ttbr); +/* + * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous, + * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. + * For Arm64, map the region in the directmap area. + */ +extern void setup_directmap_mappings(unsigned long base_mfn, unsigned long= nr_mfns); + +#endif /* __ARM_MMU_MM_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/page.h b/xen/arch/arm/include/asm/pag= e.h index aa0080e8d7..ebaf5964f1 100644 --- a/xen/arch/arm/include/asm/page.h +++ b/xen/arch/arm/include/asm/page.h @@ -264,21 +264,6 @@ static inline void write_pte(lpae_t *p, lpae_t pte) /* Flush the dcache for an entire page. */ void flush_page_to_ram(unsigned long mfn, bool sync_icache); =20 -/* - * Print a walk of a page table or p2m - * - * ttbr is the base address register (TTBR0_EL2 or VTTBR_EL2) - * addr is the PA or IPA to translate - * root_level is the starting level of the page table - * (e.g. TCR_EL2.SL0 or VTCR_EL2.SL0 ) - * nr_root_tables is the number of concatenated tables at the root. - * this can only be !=3D 1 for P2M walks starting at the first or - * subsequent level. - */ -void dump_pt_walk(paddr_t ttbr, paddr_t addr, - unsigned int root_level, - unsigned int nr_root_tables); - /* Print a walk of the hypervisor's page tables for a virtual addr. */ extern void dump_hyp_walk(vaddr_t addr); /* Print a walk of the p2m for a domain for a physical address. 
*/ diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c index 923a90925c..eeb65ca6bb 100644 --- a/xen/arch/arm/mm.c +++ b/xen/arch/arm/mm.c @@ -11,136 +11,19 @@ #include #include #include -#include #include -#include =20 #include =20 -#include - #include =20 /* Override macros from asm/page.h to make them work with mfn_t */ #undef virt_to_mfn #define virt_to_mfn(va) _mfn(__virt_to_mfn(va)) -#undef mfn_to_virt -#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn)) - -/* Main runtime page tables */ - -/* - * For arm32 xen_pgtable are per-PCPU and are allocated before - * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs. - * - * xen_second, xen_fixmap and xen_xenmap are always shared between all - * PCPUs. - */ - -#ifdef CONFIG_ARM_64 -DEFINE_PAGE_TABLE(xen_pgtable); -static DEFINE_PAGE_TABLE(xen_first); -#define THIS_CPU_PGTABLE xen_pgtable -#else -/* Per-CPU pagetable pages */ -/* xen_pgtable =3D=3D root of the trie (zeroeth level on 64-bit, first on = 32-bit) */ -DEFINE_PER_CPU(lpae_t *, xen_pgtable); -#define THIS_CPU_PGTABLE this_cpu(xen_pgtable) -/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */ -static DEFINE_PAGE_TABLE(cpu0_pgtable); -#endif - -/* Common pagetable leaves */ -/* Second level page table used to cover Xen virtual address space */ -static DEFINE_PAGE_TABLE(xen_second); -/* Third level page table used for fixmap */ -DEFINE_BOOT_PAGE_TABLE(xen_fixmap); -/* - * Third level page table used to map Xen itself with the XN bit set - * as appropriate. - */ -static DEFINE_PAGE_TABLES(xen_xenmap, XEN_NR_ENTRIES(2)); - -static paddr_t phys_offset; - -/* Limits of the Xen heap */ -mfn_t directmap_mfn_start __read_mostly =3D INVALID_MFN_INITIALIZER; -mfn_t directmap_mfn_end __read_mostly; -vaddr_t directmap_virt_end __read_mostly; -#ifdef CONFIG_ARM_64 -vaddr_t directmap_virt_start __read_mostly; -unsigned long directmap_base_pdx __read_mostly; -#endif =20 unsigned long frametable_base_pdx __read_mostly; unsigned long frametable_virt_end __read_mostly; =20 -extern char __init_begin[], __init_end[]; - -/* Checking VA memory layout alignment. */ -static void __init __maybe_unused build_assertions(void) -{ - /* 2MB aligned regions */ - BUILD_BUG_ON(XEN_VIRT_START & ~SECOND_MASK); - BUILD_BUG_ON(FIXMAP_ADDR(0) & ~SECOND_MASK); - /* 1GB aligned regions */ -#ifdef CONFIG_ARM_32 - BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK); -#else - BUILD_BUG_ON(DIRECTMAP_VIRT_START & ~FIRST_MASK); -#endif - /* Page table structure constraints */ -#ifdef CONFIG_ARM_64 - /* - * The first few slots of the L0 table is reserved for the identity - * mapping. Check that none of the other regions are overlapping - * with it. - */ -#define CHECK_OVERLAP_WITH_IDMAP(virt) \ - BUILD_BUG_ON(zeroeth_table_offset(virt) < IDENTITY_MAPPING_AREA_NR_L0) - - CHECK_OVERLAP_WITH_IDMAP(XEN_VIRT_START); - CHECK_OVERLAP_WITH_IDMAP(VMAP_VIRT_START); - CHECK_OVERLAP_WITH_IDMAP(FRAMETABLE_VIRT_START); - CHECK_OVERLAP_WITH_IDMAP(DIRECTMAP_VIRT_START); -#undef CHECK_OVERLAP_WITH_IDMAP -#endif - BUILD_BUG_ON(first_table_offset(XEN_VIRT_START)); -#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE - BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK); -#endif - /* - * The boot code expects the regions XEN_VIRT_START, FIXMAP_ADDR(0), - * BOOT_FDT_VIRT_START to use the same 0th (arm64 only) and 1st - * slot in the page tables. 
- */ -#define CHECK_SAME_SLOT(level, virt1, virt2) \ - BUILD_BUG_ON(level##_table_offset(virt1) !=3D level##_table_offset(vir= t2)) - -#define CHECK_DIFFERENT_SLOT(level, virt1, virt2) \ - BUILD_BUG_ON(level##_table_offset(virt1) =3D=3D level##_table_offset(v= irt2)) - -#ifdef CONFIG_ARM_64 - CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0)); - CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START); -#endif - CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0)); - CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START); - - /* - * For arm32, the temporary mapping will re-use the domheap - * first slot and the second slots will match. - */ -#ifdef CONFIG_ARM_32 - CHECK_SAME_SLOT(first, TEMPORARY_XEN_VIRT_START, DOMHEAP_VIRT_START); - CHECK_DIFFERENT_SLOT(first, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START); - CHECK_SAME_SLOT(second, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START); -#endif - -#undef CHECK_SAME_SLOT -#undef CHECK_DIFFERENT_SLOT -} - void flush_page_to_ram(unsigned long mfn, bool sync_icache) { void *v =3D map_domain_page(_mfn(mfn)); @@ -160,229 +43,6 @@ void flush_page_to_ram(unsigned long mfn, bool sync_ic= ache) invalidate_icache(); } =20 -lpae_t pte_of_xenaddr(vaddr_t va) -{ - paddr_t ma =3D va + phys_offset; - - return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL); -} - -void * __init early_fdt_map(paddr_t fdt_paddr) -{ - /* We are using 2MB superpage for mapping the FDT */ - paddr_t base_paddr =3D fdt_paddr & SECOND_MASK; - paddr_t offset; - void *fdt_virt; - uint32_t size; - int rc; - - /* - * Check whether the physical FDT address is set and meets the minimum - * alignment requirement. Since we are relying on MIN_FDT_ALIGN to be = at - * least 8 bytes so that we always access the magic and size fields - * of the FDT header after mapping the first chunk, double check if - * that is indeed the case. - */ - BUILD_BUG_ON(MIN_FDT_ALIGN < 8); - if ( !fdt_paddr || fdt_paddr % MIN_FDT_ALIGN ) - return NULL; - - /* The FDT is mapped using 2MB superpage */ - BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M); - - rc =3D map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr), - SZ_2M >> PAGE_SHIFT, - PAGE_HYPERVISOR_RO | _PAGE_BLOCK); - if ( rc ) - panic("Unable to map the device-tree.\n"); - - - offset =3D fdt_paddr % SECOND_SIZE; - fdt_virt =3D (void *)BOOT_FDT_VIRT_START + offset; - - if ( fdt_magic(fdt_virt) !=3D FDT_MAGIC ) - return NULL; - - size =3D fdt_totalsize(fdt_virt); - if ( size > MAX_FDT_SIZE ) - return NULL; - - if ( (offset + size) > SZ_2M ) - { - rc =3D map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M, - maddr_to_mfn(base_paddr + SZ_2M), - SZ_2M >> PAGE_SHIFT, - PAGE_HYPERVISOR_RO | _PAGE_BLOCK); - if ( rc ) - panic("Unable to map the device-tree\n"); - } - - return fdt_virt; -} - -void __init remove_early_mappings(void) -{ - int rc; - - /* destroy the _PAGE_BLOCK mapping */ - rc =3D modify_xen_mappings(BOOT_FDT_VIRT_START, - BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE, - _PAGE_BLOCK); - BUG_ON(rc); -} - -/* - * After boot, Xen page-tables should not contain mapping that are both - * Writable and eXecutables. - * - * This should be called on each CPU to enforce the policy. - */ -static void xen_pt_enforce_wnx(void) -{ - WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2); - /* - * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized - * before flushing the TLBs. - */ - isb(); - flush_xen_tlb_local(); -} - -/* Boot-time pagetable setup. 
- * Changes here may need matching changes in head.S */ -void __init setup_pagetables(unsigned long boot_phys_offset) -{ - uint64_t ttbr; - lpae_t pte, *p; - int i; - - phys_offset =3D boot_phys_offset; - - arch_setup_page_tables(); - -#ifdef CONFIG_ARM_64 - pte =3D pte_of_xenaddr((uintptr_t)xen_first); - pte.pt.table =3D 1; - pte.pt.xn =3D 0; - xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] =3D pte; - - p =3D (void *) xen_first; -#else - p =3D (void *) cpu0_pgtable; -#endif - - /* Map xen second level page-table */ - p[0] =3D pte_of_xenaddr((uintptr_t)(xen_second)); - p[0].pt.table =3D 1; - p[0].pt.xn =3D 0; - - /* Break up the Xen mapping into pages and protect them separately. */ - for ( i =3D 0; i < XEN_NR_ENTRIES(3); i++ ) - { - vaddr_t va =3D XEN_VIRT_START + (i << PAGE_SHIFT); - - if ( !is_kernel(va) ) - break; - pte =3D pte_of_xenaddr(va); - pte.pt.table =3D 1; /* third level mappings always have this bit s= et */ - if ( is_kernel_text(va) || is_kernel_inittext(va) ) - { - pte.pt.xn =3D 0; - pte.pt.ro =3D 1; - } - if ( is_kernel_rodata(va) ) - pte.pt.ro =3D 1; - xen_xenmap[i] =3D pte; - } - - /* Initialise xen second level entries ... */ - /* ... Xen's text etc */ - for ( i =3D 0; i < XEN_NR_ENTRIES(2); i++ ) - { - vaddr_t va =3D XEN_VIRT_START + (i << XEN_PT_LEVEL_SHIFT(2)); - - pte =3D pte_of_xenaddr((vaddr_t)(xen_xenmap + i * XEN_PT_LPAE_ENTR= IES)); - pte.pt.table =3D 1; - xen_second[second_table_offset(va)] =3D pte; - } - - /* ... Fixmap */ - pte =3D pte_of_xenaddr((vaddr_t)xen_fixmap); - pte.pt.table =3D 1; - xen_second[second_table_offset(FIXMAP_ADDR(0))] =3D pte; - -#ifdef CONFIG_ARM_64 - ttbr =3D (uintptr_t) xen_pgtable + phys_offset; -#else - ttbr =3D (uintptr_t) cpu0_pgtable + phys_offset; -#endif - - switch_ttbr(ttbr); - - xen_pt_enforce_wnx(); - -#ifdef CONFIG_ARM_32 - per_cpu(xen_pgtable, 0) =3D cpu0_pgtable; -#endif -} - -#ifdef CONFIG_ARM_32 -/* - * Set up the direct-mapped xenheap: - * up to 1GB of contiguous, always-mapped memory. - */ -void __init setup_directmap_mappings(unsigned long base_mfn, - unsigned long nr_mfns) -{ - int rc; - - rc =3D map_pages_to_xen(XENHEAP_VIRT_START, _mfn(base_mfn), nr_mfns, - PAGE_HYPERVISOR_RW | _PAGE_BLOCK); - if ( rc ) - panic("Unable to setup the directmap mappings.\n"); - - /* Record where the directmap is, for translation routines. */ - directmap_virt_end =3D XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE; -} -#else /* CONFIG_ARM_64 */ -/* Map the region in the directmap area. */ -void __init setup_directmap_mappings(unsigned long base_mfn, - unsigned long nr_mfns) -{ - int rc; - - /* First call sets the directmap physical and virtual offset. */ - if ( mfn_eq(directmap_mfn_start, INVALID_MFN) ) - { - unsigned long mfn_gb =3D base_mfn & ~((FIRST_SIZE >> PAGE_SHIFT) -= 1); - - directmap_mfn_start =3D _mfn(base_mfn); - directmap_base_pdx =3D mfn_to_pdx(_mfn(base_mfn)); - /* - * The base address may not be aligned to the first level - * size (e.g. 1GB when using 4KB pages). This would prevent - * superpage mappings for all the regions because the virtual - * address and machine address should both be suitably aligned. - * - * Prevent that by offsetting the start of the directmap virtual - * address. 
- */ - directmap_virt_start =3D DIRECTMAP_VIRT_START + - (base_mfn - mfn_gb) * PAGE_SIZE; - } - - if ( base_mfn < mfn_x(directmap_mfn_start) ) - panic("cannot add directmap mapping at %lx below heap start %lx\n", - base_mfn, mfn_x(directmap_mfn_start)); - - rc =3D map_pages_to_xen((vaddr_t)__mfn_to_virt(base_mfn), - _mfn(base_mfn), nr_mfns, - PAGE_HYPERVISOR_RW | _PAGE_BLOCK); - if ( rc ) - panic("Unable to setup the directmap mappings.\n"); -} -#endif - /* Map a frame table to cover physical addresses ps through pe */ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe) { @@ -422,51 +82,6 @@ void __init setup_frametable_mappings(paddr_t ps, paddr= _t pe) frametable_virt_end =3D FRAMETABLE_VIRT_START + (nr_pdxs * sizeof(stru= ct page_info)); } =20 -void *__init arch_vmap_virt_end(void) -{ - return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE); -} - -/* Release all __init and __initdata ranges to be reused */ -void free_init_memory(void) -{ - paddr_t pa =3D virt_to_maddr(__init_begin); - unsigned long len =3D __init_end - __init_begin; - uint32_t insn; - unsigned int i, nr =3D len / sizeof(insn); - uint32_t *p; - int rc; - - rc =3D modify_xen_mappings((unsigned long)__init_begin, - (unsigned long)__init_end, PAGE_HYPERVISOR_RW= ); - if ( rc ) - panic("Unable to map RW the init section (rc =3D %d)\n", rc); - - /* - * From now on, init will not be used for execution anymore, - * so nuke the instruction cache to remove entries related to init. - */ - invalidate_icache_local(); - -#ifdef CONFIG_ARM_32 - /* udf instruction i.e (see A8.8.247 in ARM DDI 0406C.c) */ - insn =3D 0xe7f000f0; -#else - insn =3D AARCH64_BREAK_FAULT; -#endif - p =3D (uint32_t *)__init_begin; - for ( i =3D 0; i < nr; i++ ) - *(p + i) =3D insn; - - rc =3D destroy_xen_mappings((unsigned long)__init_begin, - (unsigned long)__init_end); - if ( rc ) - panic("Unable to remove the init section (rc =3D %d)\n", rc); - - init_domheap_pages(pa, pa + len); - printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>= 10); -} - int steal_page( struct domain *d, struct page_info *page, unsigned int memflags) { diff --git a/xen/arch/arm/mmu/Makefile b/xen/arch/arm/mmu/Makefile index 0e82015ee1..98aea965df 100644 --- a/xen/arch/arm/mmu/Makefile +++ b/xen/arch/arm/mmu/Makefile @@ -1,2 +1,3 @@ obj-y +=3D pt.o +obj-y +=3D setup.o obj-y +=3D smpboot.o diff --git a/xen/arch/arm/mmu/setup.c b/xen/arch/arm/mmu/setup.c new file mode 100644 index 0000000000..c2df976ab2 --- /dev/null +++ b/xen/arch/arm/mmu/setup.c @@ -0,0 +1,349 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * xen/arch/arm/mmu/setup.c + * + * MMU system boot CPU MM bringup code. + */ + +#include +#include +#include + +#include + +/* Override macros from asm/page.h to make them work with mfn_t */ +#undef mfn_to_virt +#define mfn_to_virt(mfn) __mfn_to_virt(mfn_x(mfn)) + +/* Main runtime page tables */ + +/* + * For arm32 xen_pgtable are per-PCPU and are allocated before + * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs. + * + * xen_second, xen_fixmap and xen_xenmap are always shared between all + * PCPUs. 
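+ *
+ * (Editor's sketch of the hierarchy wired up by setup_pagetables()
+ *  below -- illustration only, not part of the original patch:
+ *
+ *    arm64: xen_pgtable (L0) -> xen_first (L1) -> xen_second (L2)
+ *    arm32: cpu0_pgtable (L1) ------------------> xen_second (L2)
+ *
+ *    xen_second (L2) -> xen_xenmap (L3, Xen text/rodata/data)
+ *    xen_second (L2) -> xen_fixmap (L3, the FIXMAP_ADDR(0) region)
+ *  )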
+ */ + +#ifdef CONFIG_ARM_64 +DEFINE_PAGE_TABLE(xen_pgtable); +static DEFINE_PAGE_TABLE(xen_first); +#define THIS_CPU_PGTABLE xen_pgtable +#else +/* Per-CPU pagetable pages */ +/* xen_pgtable =3D=3D root of the trie (zeroeth level on 64-bit, first on = 32-bit) */ +DEFINE_PER_CPU(lpae_t *, xen_pgtable); +#define THIS_CPU_PGTABLE this_cpu(xen_pgtable) +/* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */ +static DEFINE_PAGE_TABLE(cpu0_pgtable); +#endif + +/* Common pagetable leaves */ +/* Second level page table used to cover Xen virtual address space */ +static DEFINE_PAGE_TABLE(xen_second); +/* Third level page table used for fixmap */ +DEFINE_BOOT_PAGE_TABLE(xen_fixmap); +/* + * Third level page table used to map Xen itself with the XN bit set + * as appropriate. + */ +static DEFINE_PAGE_TABLES(xen_xenmap, XEN_NR_ENTRIES(2)); + +static paddr_t phys_offset; + +/* Limits of the Xen heap */ +mfn_t directmap_mfn_start __read_mostly =3D INVALID_MFN_INITIALIZER; +mfn_t directmap_mfn_end __read_mostly; +vaddr_t directmap_virt_end __read_mostly; +#ifdef CONFIG_ARM_64 +vaddr_t directmap_virt_start __read_mostly; +unsigned long directmap_base_pdx __read_mostly; +#endif + +extern char __init_begin[], __init_end[]; + +/* Checking VA memory layout alignment. */ +static void __init __maybe_unused build_assertions(void) +{ + /* 2MB aligned regions */ + BUILD_BUG_ON(XEN_VIRT_START & ~SECOND_MASK); + BUILD_BUG_ON(FIXMAP_ADDR(0) & ~SECOND_MASK); + /* 1GB aligned regions */ +#ifdef CONFIG_ARM_32 + BUILD_BUG_ON(XENHEAP_VIRT_START & ~FIRST_MASK); +#else + BUILD_BUG_ON(DIRECTMAP_VIRT_START & ~FIRST_MASK); +#endif + /* Page table structure constraints */ +#ifdef CONFIG_ARM_64 + /* + * The first few slots of the L0 table is reserved for the identity + * mapping. Check that none of the other regions are overlapping + * with it. + */ +#define CHECK_OVERLAP_WITH_IDMAP(virt) \ + BUILD_BUG_ON(zeroeth_table_offset(virt) < IDENTITY_MAPPING_AREA_NR_L0) + + CHECK_OVERLAP_WITH_IDMAP(XEN_VIRT_START); + CHECK_OVERLAP_WITH_IDMAP(VMAP_VIRT_START); + CHECK_OVERLAP_WITH_IDMAP(FRAMETABLE_VIRT_START); + CHECK_OVERLAP_WITH_IDMAP(DIRECTMAP_VIRT_START); +#undef CHECK_OVERLAP_WITH_IDMAP +#endif + BUILD_BUG_ON(first_table_offset(XEN_VIRT_START)); +#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE + BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK); +#endif + /* + * The boot code expects the regions XEN_VIRT_START, FIXMAP_ADDR(0), + * BOOT_FDT_VIRT_START to use the same 0th (arm64 only) and 1st + * slot in the page tables. + */ +#define CHECK_SAME_SLOT(level, virt1, virt2) \ + BUILD_BUG_ON(level##_table_offset(virt1) !=3D level##_table_offset(vir= t2)) + +#define CHECK_DIFFERENT_SLOT(level, virt1, virt2) \ + BUILD_BUG_ON(level##_table_offset(virt1) =3D=3D level##_table_offset(v= irt2)) + +#ifdef CONFIG_ARM_64 + CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, FIXMAP_ADDR(0)); + CHECK_SAME_SLOT(zeroeth, XEN_VIRT_START, BOOT_FDT_VIRT_START); +#endif + CHECK_SAME_SLOT(first, XEN_VIRT_START, FIXMAP_ADDR(0)); + CHECK_SAME_SLOT(first, XEN_VIRT_START, BOOT_FDT_VIRT_START); + + /* + * For arm32, the temporary mapping will re-use the domheap + * first slot and the second slots will match. 
+ */ +#ifdef CONFIG_ARM_32 + CHECK_SAME_SLOT(first, TEMPORARY_XEN_VIRT_START, DOMHEAP_VIRT_START); + CHECK_DIFFERENT_SLOT(first, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START); + CHECK_SAME_SLOT(second, XEN_VIRT_START, TEMPORARY_XEN_VIRT_START); +#endif + +#undef CHECK_SAME_SLOT +#undef CHECK_DIFFERENT_SLOT +} + +lpae_t __init pte_of_xenaddr(vaddr_t va) +{ + paddr_t ma =3D va + phys_offset; + + return mfn_to_xen_entry(maddr_to_mfn(ma), MT_NORMAL); +} + +void * __init early_fdt_map(paddr_t fdt_paddr) +{ + /* We are using 2MB superpage for mapping the FDT */ + paddr_t base_paddr =3D fdt_paddr & SECOND_MASK; + paddr_t offset; + void *fdt_virt; + uint32_t size; + int rc; + + /* + * Check whether the physical FDT address is set and meets the minimum + * alignment requirement. Since we are relying on MIN_FDT_ALIGN to be = at + * least 8 bytes so that we always access the magic and size fields + * of the FDT header after mapping the first chunk, double check if + * that is indeed the case. + */ + BUILD_BUG_ON(MIN_FDT_ALIGN < 8); + if ( !fdt_paddr || fdt_paddr % MIN_FDT_ALIGN ) + return NULL; + + /* The FDT is mapped using 2MB superpage */ + BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M); + + rc =3D map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr), + SZ_2M >> PAGE_SHIFT, + PAGE_HYPERVISOR_RO | _PAGE_BLOCK); + if ( rc ) + panic("Unable to map the device-tree.\n"); + + + offset =3D fdt_paddr % SECOND_SIZE; + fdt_virt =3D (void *)BOOT_FDT_VIRT_START + offset; + + if ( fdt_magic(fdt_virt) !=3D FDT_MAGIC ) + return NULL; + + size =3D fdt_totalsize(fdt_virt); + if ( size > MAX_FDT_SIZE ) + return NULL; + + if ( (offset + size) > SZ_2M ) + { + rc =3D map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M, + maddr_to_mfn(base_paddr + SZ_2M), + SZ_2M >> PAGE_SHIFT, + PAGE_HYPERVISOR_RO | _PAGE_BLOCK); + if ( rc ) + panic("Unable to map the device-tree\n"); + } + + return fdt_virt; +} + +void __init remove_early_mappings(void) +{ + int rc; + + /* destroy the _PAGE_BLOCK mapping */ + rc =3D modify_xen_mappings(BOOT_FDT_VIRT_START, + BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE, + _PAGE_BLOCK); + BUG_ON(rc); +} + +/* + * After boot, Xen page-tables should not contain mapping that are both + * Writable and eXecutables. + * + * This should be called on each CPU to enforce the policy. + */ +static void xen_pt_enforce_wnx(void) +{ + WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2); + /* + * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized + * before flushing the TLBs. + */ + isb(); + flush_xen_tlb_local(); +} + +/* + * Boot-time pagetable setup. + * Changes here may need matching changes in head.S + */ +void __init setup_pagetables(unsigned long boot_phys_offset) +{ + uint64_t ttbr; + lpae_t pte, *p; + int i; + + phys_offset =3D boot_phys_offset; + + arch_setup_page_tables(); + +#ifdef CONFIG_ARM_64 + pte =3D pte_of_xenaddr((uintptr_t)xen_first); + pte.pt.table =3D 1; + pte.pt.xn =3D 0; + xen_pgtable[zeroeth_table_offset(XEN_VIRT_START)] =3D pte; + + p =3D (void *) xen_first; +#else + p =3D (void *) cpu0_pgtable; +#endif + + /* Map xen second level page-table */ + p[0] =3D pte_of_xenaddr((uintptr_t)(xen_second)); + p[0].pt.table =3D 1; + p[0].pt.xn =3D 0; + + /* Break up the Xen mapping into pages and protect them separately. 
*/ + for ( i =3D 0; i < XEN_NR_ENTRIES(3); i++ ) + { + vaddr_t va =3D XEN_VIRT_START + (i << PAGE_SHIFT); + + if ( !is_kernel(va) ) + break; + pte =3D pte_of_xenaddr(va); + pte.pt.table =3D 1; /* third level mappings always have this bit s= et */ + if ( is_kernel_text(va) || is_kernel_inittext(va) ) + { + pte.pt.xn =3D 0; + pte.pt.ro =3D 1; + } + if ( is_kernel_rodata(va) ) + pte.pt.ro =3D 1; + xen_xenmap[i] =3D pte; + } + + /* Initialise xen second level entries ... */ + /* ... Xen's text etc */ + for ( i =3D 0; i < XEN_NR_ENTRIES(2); i++ ) + { + vaddr_t va =3D XEN_VIRT_START + (i << XEN_PT_LEVEL_SHIFT(2)); + + pte =3D pte_of_xenaddr((vaddr_t)(xen_xenmap + i * XEN_PT_LPAE_ENTR= IES)); + pte.pt.table =3D 1; + xen_second[second_table_offset(va)] =3D pte; + } + + /* ... Fixmap */ + pte =3D pte_of_xenaddr((vaddr_t)xen_fixmap); + pte.pt.table =3D 1; + xen_second[second_table_offset(FIXMAP_ADDR(0))] =3D pte; + +#ifdef CONFIG_ARM_64 + ttbr =3D (uintptr_t) xen_pgtable + phys_offset; +#else + ttbr =3D (uintptr_t) cpu0_pgtable + phys_offset; +#endif + + switch_ttbr(ttbr); + + xen_pt_enforce_wnx(); + +#ifdef CONFIG_ARM_32 + per_cpu(xen_pgtable, 0) =3D cpu0_pgtable; +#endif +} + +void *__init arch_vmap_virt_end(void) +{ + return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE); +} + +/* Release all __init and __initdata ranges to be reused */ +void free_init_memory(void) +{ + paddr_t pa =3D virt_to_maddr(__init_begin); + unsigned long len =3D __init_end - __init_begin; + uint32_t insn; + unsigned int i, nr =3D len / sizeof(insn); + uint32_t *p; + int rc; + + rc =3D modify_xen_mappings((unsigned long)__init_begin, + (unsigned long)__init_end, PAGE_HYPERVISOR_RW= ); + if ( rc ) + panic("Unable to map RW the init section (rc =3D %d)\n", rc); + + /* + * From now on, init will not be used for execution anymore, + * so nuke the instruction cache to remove entries related to init. 
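+ *
+ * (Editor's note, illustration only: the poison value written below
+ *  is a permanently undefined instruction on arm32 (udf, 0xe7f000f0)
+ *  and a breakpoint encoding on arm64 (AARCH64_BREAK_FAULT), so a
+ *  stray branch into the freed init region traps immediately rather
+ *  than executing whatever the pages are later reused for.)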
+ */ + invalidate_icache_local(); + +#ifdef CONFIG_ARM_32 + /* udf instruction i.e (see A8.8.247 in ARM DDI 0406C.c) */ + insn =3D 0xe7f000f0; +#else + insn =3D AARCH64_BREAK_FAULT; +#endif + p =3D (uint32_t *)__init_begin; + for ( i =3D 0; i < nr; i++ ) + *(p + i) =3D insn; + + rc =3D destroy_xen_mappings((unsigned long)__init_begin, + (unsigned long)__init_end); + if ( rc ) + panic("Unable to remove the init section (rc =3D %d)\n", rc); + + init_domheap_pages(pa, pa + len); + printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>= 10); +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ --=20 2.25.1 From nobody Wed Nov 27 11:54:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1698027289062793.0623440738648; Sun, 22 Oct 2023 19:14:49 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.620928.966818 (Exim 4.92) (envelope-from ) id 1qukSU-0002nV-B4; Mon, 23 Oct 2023 02:14:22 +0000 Received: by outflank-mailman (output) from mailman id 620928.966818; Mon, 23 Oct 2023 02:14:22 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSU-0002nI-6w; Mon, 23 Oct 2023 02:14:22 +0000 Received: by outflank-mailman (input) for mailman id 620928; Mon, 23 Oct 2023 02:14:21 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSS-0001U7-RL for xen-devel@lists.xenproject.org; Mon, 23 Oct 2023 02:14:20 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id decbb2c2-7149-11ee-9b0e-b553b5be7939; Mon, 23 Oct 2023 04:14:18 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 403112F4; Sun, 22 Oct 2023 19:14:58 -0700 (PDT) Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com [10.169.190.5]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 62DF43F738; Sun, 22 Oct 2023 19:14:14 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: decbb2c2-7149-11ee-9b0e-b553b5be7939 From: Henry Wang To: xen-devel@lists.xenproject.org Cc: Henry Wang , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen , Julien Grall Subject: [PATCH v8 5/8] xen/arm: Split MMU-specific setup_mm() and related code out Date: Mon, 23 Oct 2023 10:13:42 +0800 Message-Id: <20231023021345.1731436-6-Henry.Wang@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: 
<20231023021345.1731436-1-Henry.Wang@arm.com> References: <20231023021345.1731436-1-Henry.Wang@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1698027290470100001 Content-Type: text/plain; charset="utf-8" setup_mm() is used for Xen to setup memory management subsystem, such as boot allocator, direct-mapping, xenheap initialization, frametable and static memory pages, at boot time. We could inherit some components seamlessly for MPU support, such as the setup of boot allocator, whilst we need to implement some components differently for MPU, such as xenheap, etc. Also, there are some components that is specific to MMU only, for example the direct-mapping. Therefore in this commit, we split the MMU-specific setup_mm() and related code out. Since arm32 and arm64 have completely different setup_mm() implementation, take the opportunity to split the arch-specific setup_mm() to arch-specific files, so that we can avoid #ifdef. Also, make init_pdx(), init_staticmem_pages(), and populate_boot_allocator() public as these functions are now called from two different units, and make setup_mm() public for future MPU implementation. With above code movement, mark setup_directmap_mappings() as static because the only caller of this function is now in the same file with it. Drop the original setup_directmap_mappings() declaration and move the in-code comment on top of the declaration on top of the function implementation. Signed-off-by: Henry Wang Signed-off-by: Penny Zheng Signed-off-by: Wei Chen Acked-by: Julien Grall --- v8: - Reword the commit message about making init_pdx() & co public. - Add Julien's Acked-by tag. v7: - No change. v6: - Rework the original patch: [v5,10/13] xen/arm: mmu: move MMU-specific setup_mm to mmu/setup.c --- xen/arch/arm/arm32/mmu/mm.c | 278 ++++++++++++++++++++++++- xen/arch/arm/arm64/mmu/mm.c | 51 ++++- xen/arch/arm/include/asm/mmu/mm.h | 6 - xen/arch/arm/include/asm/setup.h | 5 + xen/arch/arm/setup.c | 324 +----------------------------- 5 files changed, 331 insertions(+), 333 deletions(-) diff --git a/xen/arch/arm/arm32/mmu/mm.c b/xen/arch/arm/arm32/mmu/mm.c index 647baf4a81..94d6cab49c 100644 --- a/xen/arch/arm/arm32/mmu/mm.c +++ b/xen/arch/arm/arm32/mmu/mm.c @@ -1,14 +1,21 @@ /* SPDX-License-Identifier: GPL-2.0 */ =20 #include +#include +#include +#include +#include #include =20 +static unsigned long opt_xenheap_megabytes __initdata; +integer_param("xenheap_megabytes", opt_xenheap_megabytes); + /* - * Set up the direct-mapped xenheap: - * up to 1GB of contiguous, always-mapped memory. + * Set up the direct-mapped xenheap: up to 1GB of contiguous, + * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. */ -void __init setup_directmap_mappings(unsigned long base_mfn, - unsigned long nr_mfns) +static void __init setup_directmap_mappings(unsigned long base_mfn, + unsigned long nr_mfns) { int rc; =20 @@ -21,6 +28,269 @@ void __init setup_directmap_mappings(unsigned long base= _mfn, directmap_virt_end =3D XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE; } =20 +/* + * Returns the end address of the highest region in the range s..e + * with required size and alignment that does not conflict with the + * modules from first_mod to nr_modules. + * + * For non-recursive callers first_mod should normally be 0 (all + * modules and Xen itself) or 1 (all modules but not Xen). 
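+ *
+ * (Editor's illustration, mirroring the call made from setup_mm()
+ *  further down in this file:
+ *
+ *      e = consider_modules(ram_start, ram_end,
+ *                           pfn_to_paddr(xenheap_pages),
+ *                           32<<20, 0);
+ *
+ *  i.e. find the highest 32MB-aligned, xenheap-sized range of RAM
+ *  that avoids Xen itself, the boot modules, the FDT memreserve
+ *  ranges and the reserved-memory banks.)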
+ */ +static paddr_t __init consider_modules(paddr_t s, paddr_t e, + uint32_t size, paddr_t align, + int first_mod) +{ + const struct bootmodules *mi =3D &bootinfo.modules; + int i; + int nr; + + s =3D (s+align-1) & ~(align-1); + e =3D e & ~(align-1); + + if ( s > e || e - s < size ) + return 0; + + /* First check the boot modules */ + for ( i =3D first_mod; i < mi->nr_mods; i++ ) + { + paddr_t mod_s =3D mi->module[i].start; + paddr_t mod_e =3D mod_s + mi->module[i].size; + + if ( s < mod_e && mod_s < e ) + { + mod_e =3D consider_modules(mod_e, e, size, align, i+1); + if ( mod_e ) + return mod_e; + + return consider_modules(s, mod_s, size, align, i+1); + } + } + + /* Now check any fdt reserved areas. */ + + nr =3D fdt_num_mem_rsv(device_tree_flattened); + + for ( ; i < mi->nr_mods + nr; i++ ) + { + paddr_t mod_s, mod_e; + + if ( fdt_get_mem_rsv_paddr(device_tree_flattened, + i - mi->nr_mods, + &mod_s, &mod_e ) < 0 ) + /* If we can't read it, pretend it doesn't exist... */ + continue; + + /* fdt_get_mem_rsv_paddr returns length */ + mod_e +=3D mod_s; + + if ( s < mod_e && mod_s < e ) + { + mod_e =3D consider_modules(mod_e, e, size, align, i+1); + if ( mod_e ) + return mod_e; + + return consider_modules(s, mod_s, size, align, i+1); + } + } + + /* + * i is the current bootmodule we are evaluating, across all + * possible kinds of bootmodules. + * + * When retrieving the corresponding reserved-memory addresses, we + * need to index the bootinfo.reserved_mem bank starting from 0, and + * only counting the reserved-memory modules. Hence, we need to use + * i - nr. + */ + nr +=3D mi->nr_mods; + for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ ) + { + paddr_t r_s =3D bootinfo.reserved_mem.bank[i - nr].start; + paddr_t r_e =3D r_s + bootinfo.reserved_mem.bank[i - nr].size; + + if ( s < r_e && r_s < e ) + { + r_e =3D consider_modules(r_e, e, size, align, i + 1); + if ( r_e ) + return r_e; + + return consider_modules(s, r_s, size, align, i + 1); + } + } + return e; +} + +/* + * Find a contiguous region that fits in the static heap region with + * required size and alignment, and return the end address of the region + * if found otherwise 0. + */ +static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t al= ign) +{ + unsigned int i; + paddr_t end =3D 0, aligned_start, aligned_end; + paddr_t bank_start, bank_size, bank_end; + + for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) + { + if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HEAP ) + continue; + + bank_start =3D bootinfo.reserved_mem.bank[i].start; + bank_size =3D bootinfo.reserved_mem.bank[i].size; + bank_end =3D bank_start + bank_size; + + if ( bank_size < size ) + continue; + + aligned_end =3D bank_end & ~(align - 1); + aligned_start =3D (aligned_end - size) & ~(align - 1); + + if ( aligned_start > bank_start ) + /* + * Allocate the xenheap as high as possible to keep low-memory + * available (assuming the admin supplied region below 4GB) + * for other use (e.g. domain memory allocation). + */ + end =3D max(end, aligned_end); + } + + return end; +} + +void __init setup_mm(void) +{ + paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_si= ze; + paddr_t static_heap_end =3D 0, static_heap_size =3D 0; + unsigned long heap_pages, xenheap_pages, domheap_pages; + unsigned int i; + const uint32_t ctr =3D READ_CP32(CTR); + + if ( !bootinfo.mem.nr_banks ) + panic("No memory bank\n"); + + /* We only supports instruction caches implementing the IVIPT extensio= n. 
*/ + if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) =3D=3D ICACHE_POLICY_AI= VIVT ) + panic("AIVIVT instruction cache not supported\n"); + + init_pdx(); + + ram_start =3D bootinfo.mem.bank[0].start; + ram_size =3D bootinfo.mem.bank[0].size; + ram_end =3D ram_start + ram_size; + + for ( i =3D 1; i < bootinfo.mem.nr_banks; i++ ) + { + bank_start =3D bootinfo.mem.bank[i].start; + bank_size =3D bootinfo.mem.bank[i].size; + bank_end =3D bank_start + bank_size; + + ram_size =3D ram_size + bank_size; + ram_start =3D min(ram_start,bank_start); + ram_end =3D max(ram_end,bank_end); + } + + total_pages =3D ram_size >> PAGE_SHIFT; + + if ( bootinfo.static_heap ) + { + for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) + { + if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HE= AP ) + continue; + + bank_start =3D bootinfo.reserved_mem.bank[i].start; + bank_size =3D bootinfo.reserved_mem.bank[i].size; + bank_end =3D bank_start + bank_size; + + static_heap_size +=3D bank_size; + static_heap_end =3D max(static_heap_end, bank_end); + } + + heap_pages =3D static_heap_size >> PAGE_SHIFT; + } + else + heap_pages =3D total_pages; + + /* + * If the user has not requested otherwise via the command line + * then locate the xenheap using these constraints: + * + * - must be contiguous + * - must be 32 MiB aligned + * - must not include Xen itself or the boot modules + * - must be at most 1GB or 1/32 the total RAM in the system (or stat= ic + heap if enabled) if less + * - must be at least 32M + * + * We try to allocate the largest xenheap possible within these + * constraints. + */ + if ( opt_xenheap_megabytes ) + xenheap_pages =3D opt_xenheap_megabytes << (20-PAGE_SHIFT); + else + { + xenheap_pages =3D (heap_pages/32 + 0x1fffUL) & ~0x1fffUL; + xenheap_pages =3D max(xenheap_pages, 32UL<<(20-PAGE_SHIFT)); + xenheap_pages =3D min(xenheap_pages, 1UL<<(30-PAGE_SHIFT)); + } + + do + { + e =3D bootinfo.static_heap ? + fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)= ) : + consider_modules(ram_start, ram_end, + pfn_to_paddr(xenheap_pages), + 32<<20, 0); + if ( e ) + break; + + xenheap_pages >>=3D 1; + } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT= ) ); + + if ( ! e ) + panic("Not enough space for xenheap\n"); + + domheap_pages =3D heap_pages - xenheap_pages; + + printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n", + e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages, + opt_xenheap_megabytes ? ", from command-line" : ""); + printk("Dom heap: %lu pages\n", domheap_pages); + + /* + * We need some memory to allocate the page-tables used for the + * directmap mappings. So populate the boot allocator first. + * + * This requires us to set directmap_mfn_{start, end} first so the + * direct-mapped Xenheap region can be avoided. + */ + directmap_mfn_start =3D _mfn((e >> PAGE_SHIFT) - xenheap_pages); + directmap_mfn_end =3D mfn_add(directmap_mfn_start, xenheap_pages); + + populate_boot_allocator(); + + setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages); + + /* Frame table covers all of RAM region, including holes */ + setup_frametable_mappings(ram_start, ram_end); + max_page =3D PFN_DOWN(ram_end); + + /* + * The allocators may need to use map_domain_page() (such as for + * scrubbing pages). So we need to prepare the domheap area first. 
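+ *
+ * (Editor's summary of the ordering in this function, for
+ *  illustration: boot allocator -> directmap/xenheap mappings ->
+ *  frametable -> domheap page-tables -> hand the xenheap range to
+ *  init_xenheap_pages() -> static memory banks. The domheap tables
+ *  must exist first because the allocators may scrub pages via
+ *  map_domain_page().)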
+ */ + if ( !init_domheap_mappings(smp_processor_id()) ) + panic("CPU%u: Unable to prepare the domheap page-tables\n", + smp_processor_id()); + + /* Add xenheap memory that was not already added to the boot allocator= . */ + init_xenheap_pages(mfn_to_maddr(directmap_mfn_start), + mfn_to_maddr(directmap_mfn_end)); + + init_staticmem_pages(); +} + /* * Local variables: * mode: C diff --git a/xen/arch/arm/arm64/mmu/mm.c b/xen/arch/arm/arm64/mmu/mm.c index 36073041ed..c0f166a437 100644 --- a/xen/arch/arm/arm64/mmu/mm.c +++ b/xen/arch/arm/arm64/mmu/mm.c @@ -2,6 +2,7 @@ =20 #include #include +#include =20 #include =20 @@ -152,8 +153,8 @@ void __init switch_ttbr(uint64_t ttbr) } =20 /* Map the region in the directmap area. */ -void __init setup_directmap_mappings(unsigned long base_mfn, - unsigned long nr_mfns) +static void __init setup_directmap_mappings(unsigned long base_mfn, + unsigned long nr_mfns) { int rc; =20 @@ -188,6 +189,52 @@ void __init setup_directmap_mappings(unsigned long bas= e_mfn, panic("Unable to setup the directmap mappings.\n"); } =20 +void __init setup_mm(void) +{ + const struct meminfo *banks =3D &bootinfo.mem; + paddr_t ram_start =3D INVALID_PADDR; + paddr_t ram_end =3D 0; + paddr_t ram_size =3D 0; + unsigned int i; + + init_pdx(); + + /* + * We need some memory to allocate the page-tables used for the direct= map + * mappings. But some regions may contain memory already allocated + * for other uses (e.g. modules, reserved-memory...). + * + * For simplicity, add all the free regions in the boot allocator. + */ + populate_boot_allocator(); + + total_pages =3D 0; + + for ( i =3D 0; i < banks->nr_banks; i++ ) + { + const struct membank *bank =3D &banks->bank[i]; + paddr_t bank_end =3D bank->start + bank->size; + + ram_size =3D ram_size + bank->size; + ram_start =3D min(ram_start, bank->start); + ram_end =3D max(ram_end, bank_end); + + setup_directmap_mappings(PFN_DOWN(bank->start), + PFN_DOWN(bank->size)); + } + + total_pages +=3D ram_size >> PAGE_SHIFT; + + directmap_virt_end =3D XENHEAP_VIRT_START + ram_end - ram_start; + directmap_mfn_start =3D maddr_to_mfn(ram_start); + directmap_mfn_end =3D maddr_to_mfn(ram_end); + + setup_frametable_mappings(ram_start, ram_end); + max_page =3D PFN_DOWN(ram_end); + + init_staticmem_pages(); +} + /* * Local variables: * mode: C diff --git a/xen/arch/arm/include/asm/mmu/mm.h b/xen/arch/arm/include/asm/m= mu/mm.h index 439ae314fd..c5e03a66bf 100644 --- a/xen/arch/arm/include/asm/mmu/mm.h +++ b/xen/arch/arm/include/asm/mmu/mm.h @@ -31,12 +31,6 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr, =20 /* Switch to a new root page-tables */ extern void switch_ttbr(uint64_t ttbr); -/* - * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous, - * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. - * For Arm64, map the region in the directmap area. 
- */ -extern void setup_directmap_mappings(unsigned long base_mfn, unsigned long= nr_mfns); =20 #endif /* __ARM_MMU_MM_H__ */ =20 diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/se= tup.h index b8866c20f4..863e9b88cd 100644 --- a/xen/arch/arm/include/asm/setup.h +++ b/xen/arch/arm/include/asm/setup.h @@ -159,6 +159,11 @@ struct bootcmdline *boot_cmdline_find_by_kind(bootmodu= le_kind kind); struct bootcmdline * boot_cmdline_find_by_name(const char *name); const char *boot_module_kind_as_string(bootmodule_kind kind); =20 +void init_pdx(void); +void init_staticmem_pages(void); +void populate_boot_allocator(void); +void setup_mm(void); + extern uint32_t hyp_traps_vector[]; void init_traps(void); =20 diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index db748839d3..5983546e64 100644 --- a/xen/arch/arm/setup.c +++ b/xen/arch/arm/setup.c @@ -58,11 +58,6 @@ struct cpuinfo_arm __read_mostly system_cpuinfo; bool __read_mostly acpi_disabled; #endif =20 -#ifdef CONFIG_ARM_32 -static unsigned long opt_xenheap_megabytes __initdata; -integer_param("xenheap_megabytes", opt_xenheap_megabytes); -#endif - domid_t __read_mostly max_init_domid; =20 static __used void init_done(void) @@ -547,138 +542,6 @@ static void * __init relocate_fdt(paddr_t dtb_paddr, = size_t dtb_size) return fdt; } =20 -#ifdef CONFIG_ARM_32 -/* - * Returns the end address of the highest region in the range s..e - * with required size and alignment that does not conflict with the - * modules from first_mod to nr_modules. - * - * For non-recursive callers first_mod should normally be 0 (all - * modules and Xen itself) or 1 (all modules but not Xen). - */ -static paddr_t __init consider_modules(paddr_t s, paddr_t e, - uint32_t size, paddr_t align, - int first_mod) -{ - const struct bootmodules *mi =3D &bootinfo.modules; - int i; - int nr; - - s =3D (s+align-1) & ~(align-1); - e =3D e & ~(align-1); - - if ( s > e || e - s < size ) - return 0; - - /* First check the boot modules */ - for ( i =3D first_mod; i < mi->nr_mods; i++ ) - { - paddr_t mod_s =3D mi->module[i].start; - paddr_t mod_e =3D mod_s + mi->module[i].size; - - if ( s < mod_e && mod_s < e ) - { - mod_e =3D consider_modules(mod_e, e, size, align, i+1); - if ( mod_e ) - return mod_e; - - return consider_modules(s, mod_s, size, align, i+1); - } - } - - /* Now check any fdt reserved areas. */ - - nr =3D fdt_num_mem_rsv(device_tree_flattened); - - for ( ; i < mi->nr_mods + nr; i++ ) - { - paddr_t mod_s, mod_e; - - if ( fdt_get_mem_rsv_paddr(device_tree_flattened, - i - mi->nr_mods, - &mod_s, &mod_e ) < 0 ) - /* If we can't read it, pretend it doesn't exist... */ - continue; - - /* fdt_get_mem_rsv_paddr returns length */ - mod_e +=3D mod_s; - - if ( s < mod_e && mod_s < e ) - { - mod_e =3D consider_modules(mod_e, e, size, align, i+1); - if ( mod_e ) - return mod_e; - - return consider_modules(s, mod_s, size, align, i+1); - } - } - - /* - * i is the current bootmodule we are evaluating, across all - * possible kinds of bootmodules. - * - * When retrieving the corresponding reserved-memory addresses, we - * need to index the bootinfo.reserved_mem bank starting from 0, and - * only counting the reserved-memory modules. Hence, we need to use - * i - nr. 
- */ - nr +=3D mi->nr_mods; - for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ ) - { - paddr_t r_s =3D bootinfo.reserved_mem.bank[i - nr].start; - paddr_t r_e =3D r_s + bootinfo.reserved_mem.bank[i - nr].size; - - if ( s < r_e && r_s < e ) - { - r_e =3D consider_modules(r_e, e, size, align, i + 1); - if ( r_e ) - return r_e; - - return consider_modules(s, r_s, size, align, i + 1); - } - } - return e; -} - -/* - * Find a contiguous region that fits in the static heap region with - * required size and alignment, and return the end address of the region - * if found otherwise 0. - */ -static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t al= ign) -{ - unsigned int i; - paddr_t end =3D 0, aligned_start, aligned_end; - paddr_t bank_start, bank_size, bank_end; - - for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) - { - if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HEAP ) - continue; - - bank_start =3D bootinfo.reserved_mem.bank[i].start; - bank_size =3D bootinfo.reserved_mem.bank[i].size; - bank_end =3D bank_start + bank_size; - - if ( bank_size < size ) - continue; - - aligned_end =3D bank_end & ~(align - 1); - aligned_start =3D (aligned_end - size) & ~(align - 1); - - if ( aligned_start > bank_start ) - /* - * Allocate the xenheap as high as possible to keep low-memory - * available (assuming the admin supplied region below 4GB) - * for other use (e.g. domain memory allocation). - */ - end =3D max(end, aligned_end); - } - - return end; -} -#endif - /* * Return the end of the non-module region starting at s. In other * words return s the start of the next modules after s. @@ -713,7 +576,7 @@ static paddr_t __init next_module(paddr_t s, paddr_t *e= nd) return lowest; } =20 -static void __init init_pdx(void) +void __init init_pdx(void) { paddr_t bank_start, bank_size, bank_end; =20 @@ -758,7 +621,7 @@ static void __init init_pdx(void) } =20 /* Static memory initialization */ -static void __init init_staticmem_pages(void) +void __init init_staticmem_pages(void) { #ifdef CONFIG_STATIC_MEMORY unsigned int bank; @@ -792,7 +655,7 @@ static void __init init_staticmem_pages(void) * allocator with the corresponding regions only, but with Xenheap excluded * on arm32. */ -static void __init populate_boot_allocator(void) +void __init populate_boot_allocator(void) { unsigned int i; const struct meminfo *banks =3D &bootinfo.mem; @@ -861,187 +724,6 @@ static void __init populate_boot_allocator(void) } } =20 -#ifdef CONFIG_ARM_32 -static void __init setup_mm(void) -{ - paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_si= ze; - paddr_t static_heap_end =3D 0, static_heap_size =3D 0; - unsigned long heap_pages, xenheap_pages, domheap_pages; - unsigned int i; - const uint32_t ctr =3D READ_CP32(CTR); - - if ( !bootinfo.mem.nr_banks ) - panic("No memory bank\n"); - - /* We only supports instruction caches implementing the IVIPT extensio= n. 
*/ - if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) =3D=3D ICACHE_POLICY_AI= VIVT ) - panic("AIVIVT instruction cache not supported\n"); - - init_pdx(); - - ram_start =3D bootinfo.mem.bank[0].start; - ram_size =3D bootinfo.mem.bank[0].size; - ram_end =3D ram_start + ram_size; - - for ( i =3D 1; i < bootinfo.mem.nr_banks; i++ ) - { - bank_start =3D bootinfo.mem.bank[i].start; - bank_size =3D bootinfo.mem.bank[i].size; - bank_end =3D bank_start + bank_size; - - ram_size =3D ram_size + bank_size; - ram_start =3D min(ram_start,bank_start); - ram_end =3D max(ram_end,bank_end); - } - - total_pages =3D ram_size >> PAGE_SHIFT; - - if ( bootinfo.static_heap ) - { - for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) - { - if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HE= AP ) - continue; - - bank_start =3D bootinfo.reserved_mem.bank[i].start; - bank_size =3D bootinfo.reserved_mem.bank[i].size; - bank_end =3D bank_start + bank_size; - - static_heap_size +=3D bank_size; - static_heap_end =3D max(static_heap_end, bank_end); - } - - heap_pages =3D static_heap_size >> PAGE_SHIFT; - } - else - heap_pages =3D total_pages; - - /* - * If the user has not requested otherwise via the command line - * then locate the xenheap using these constraints: - * - * - must be contiguous - * - must be 32 MiB aligned - * - must not include Xen itself or the boot modules - * - must be at most 1GB or 1/32 the total RAM in the system (or stat= ic - heap if enabled) if less - * - must be at least 32M - * - * We try to allocate the largest xenheap possible within these - * constraints. - */ - if ( opt_xenheap_megabytes ) - xenheap_pages =3D opt_xenheap_megabytes << (20-PAGE_SHIFT); - else - { - xenheap_pages =3D (heap_pages/32 + 0x1fffUL) & ~0x1fffUL; - xenheap_pages =3D max(xenheap_pages, 32UL<<(20-PAGE_SHIFT)); - xenheap_pages =3D min(xenheap_pages, 1UL<<(30-PAGE_SHIFT)); - } - - do - { - e =3D bootinfo.static_heap ? - fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)= ) : - consider_modules(ram_start, ram_end, - pfn_to_paddr(xenheap_pages), - 32<<20, 0); - if ( e ) - break; - - xenheap_pages >>=3D 1; - } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT= ) ); - - if ( ! e ) - panic("Not enough space for xenheap\n"); - - domheap_pages =3D heap_pages - xenheap_pages; - - printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n", - e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages, - opt_xenheap_megabytes ? ", from command-line" : ""); - printk("Dom heap: %lu pages\n", domheap_pages); - - /* - * We need some memory to allocate the page-tables used for the - * directmap mappings. So populate the boot allocator first. - * - * This requires us to set directmap_mfn_{start, end} first so the - * direct-mapped Xenheap region can be avoided. - */ - directmap_mfn_start =3D _mfn((e >> PAGE_SHIFT) - xenheap_pages); - directmap_mfn_end =3D mfn_add(directmap_mfn_start, xenheap_pages); - - populate_boot_allocator(); - - setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages); - - /* Frame table covers all of RAM region, including holes */ - setup_frametable_mappings(ram_start, ram_end); - max_page =3D PFN_DOWN(ram_end); - - /* - * The allocators may need to use map_domain_page() (such as for - * scrubbing pages). So we need to prepare the domheap area first. 
- */ - if ( !init_domheap_mappings(smp_processor_id()) ) - panic("CPU%u: Unable to prepare the domheap page-tables\n", - smp_processor_id()); - - /* Add xenheap memory that was not already added to the boot allocator= . */ - init_xenheap_pages(mfn_to_maddr(directmap_mfn_start), - mfn_to_maddr(directmap_mfn_end)); - - init_staticmem_pages(); -} -#else /* CONFIG_ARM_64 */ -static void __init setup_mm(void) -{ - const struct meminfo *banks =3D &bootinfo.mem; - paddr_t ram_start =3D INVALID_PADDR; - paddr_t ram_end =3D 0; - paddr_t ram_size =3D 0; - unsigned int i; - - init_pdx(); - - /* - * We need some memory to allocate the page-tables used for the direct= map - * mappings. But some regions may contain memory already allocated - * for other uses (e.g. modules, reserved-memory...). - * - * For simplicity, add all the free regions in the boot allocator. - */ - populate_boot_allocator(); - - total_pages =3D 0; - - for ( i =3D 0; i < banks->nr_banks; i++ ) - { - const struct membank *bank =3D &banks->bank[i]; - paddr_t bank_end =3D bank->start + bank->size; - - ram_size =3D ram_size + bank->size; - ram_start =3D min(ram_start, bank->start); - ram_end =3D max(ram_end, bank_end); - - setup_directmap_mappings(PFN_DOWN(bank->start), - PFN_DOWN(bank->size)); - } - - total_pages +=3D ram_size >> PAGE_SHIFT; - - directmap_virt_end =3D XENHEAP_VIRT_START + ram_end - ram_start; - directmap_mfn_start =3D maddr_to_mfn(ram_start); - directmap_mfn_end =3D maddr_to_mfn(ram_end); - - setup_frametable_mappings(ram_start, ram_end); - max_page =3D PFN_DOWN(ram_end); - - init_staticmem_pages(); -} -#endif - static bool __init is_dom0less_mode(void) { struct bootmodules *mods =3D &bootinfo.modules; --=20 2.25.1 From nobody Wed Nov 27 11:54:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1698027296122541.7122306316969; Sun, 22 Oct 2023 19:14:56 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.620930.966828 (Exim 4.92) (envelope-from ) id 1qukSW-00039l-Km; Mon, 23 Oct 2023 02:14:24 +0000 Received: by outflank-mailman (output) from mailman id 620930.966828; Mon, 23 Oct 2023 02:14:24 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSW-00038U-Fh; Mon, 23 Oct 2023 02:14:24 +0000 Received: by outflank-mailman (input) for mailman id 620930; Mon, 23 Oct 2023 02:14:23 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSU-0001F1-Vu for xen-devel@lists.xenproject.org; Mon, 23 Oct 2023 02:14:22 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id e1289e33-7149-11ee-98d5-6d05b1d4d9a1; Mon, 23 Oct 2023 04:14:22 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 
4188C2F4; Sun, 22 Oct 2023 19:15:02 -0700 (PDT) Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com [10.169.190.5]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 64C103F738; Sun, 22 Oct 2023 19:14:18 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: e1289e33-7149-11ee-98d5-6d05b1d4d9a1 From: Henry Wang To: xen-devel@lists.xenproject.org Cc: Henry Wang , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Penny Zheng , Wei Chen , Julien Grall Subject: [PATCH v8 6/8] xen/arm: Fold pmap and fixmap into MMU system Date: Mon, 23 Oct 2023 10:13:43 +0800 Message-Id: <20231023021345.1731436-7-Henry.Wang@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com> References: <20231023021345.1731436-1-Henry.Wang@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1698027296469100001 Content-Type: text/plain; charset="utf-8" fixmap and pmap are MMU-specific features, so fold them to the MMU system. Do the folding for pmap by moving the HAS_PMAP Kconfig selection under MMU. Since none of the definitions in asm/fixmap.h actually makes sense for the MPU, so do the folding for fixmap by limiting the inclusion of asm/fixmap.h for MPU code when necessary. To guarantee that, moving the implementation of copy_from_paddr() from kernel.c to mmu/setup.c, so that inclusion of asm/fixmap.h in the kernel.c can be dropped. Take the opportunity to add a missing space before and after '-' in "s =3D paddr & (PAGE_SIZE-1);" of copy_from_paddr(). Signed-off-by: Henry Wang Signed-off-by: Penny Zheng Signed-off-by: Wei Chen Reviewed-by: Julien Grall --- v8: - Add a missing space before/after '-' in "s =3D paddr & (PAGE_SIZE-1);" of copy_from_paddr(), mention this change in commit message. - Add Julien's Reviewed-by tag. v7: - No change. 
v6: - Rework original patch: [v5,08/13] xen/arm: Fold pmap and fixmap into MMU system and fold in the original patch: [v5,12/13] xen/arm: mmu: relocate copy_from_paddr() to setup.c --- xen/arch/arm/Kconfig | 2 +- xen/arch/arm/kernel.c | 28 ---------------------------- xen/arch/arm/mmu/setup.c | 27 +++++++++++++++++++++++++++ 3 files changed, 28 insertions(+), 29 deletions(-) diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig index 2939db429b..7b5b0c0c05 100644 --- a/xen/arch/arm/Kconfig +++ b/xen/arch/arm/Kconfig @@ -14,7 +14,6 @@ config ARM select HAS_ALTERNATIVE select HAS_DEVICE_TREE select HAS_PASSTHROUGH - select HAS_PMAP select HAS_UBSAN select IOMMU_FORCE_PT_SHARE =20 @@ -60,6 +59,7 @@ config PADDR_BITS =20 config MMU def_bool y + select HAS_PMAP =20 source "arch/Kconfig" =20 diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c index 508c54824d..bc3e5bd6f9 100644 --- a/xen/arch/arm/kernel.c +++ b/xen/arch/arm/kernel.c @@ -16,7 +16,6 @@ #include =20 #include -#include #include #include =20 @@ -41,33 +40,6 @@ struct minimal_dtb_header { =20 #define DTB_MAGIC 0xd00dfeedU =20 -/** - * copy_from_paddr - copy data from a physical address - * @dst: destination virtual address - * @paddr: source physical address - * @len: length to copy - */ -void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len) -{ - void *src =3D (void *)FIXMAP_ADDR(FIXMAP_MISC); - - while (len) { - unsigned long l, s; - - s =3D paddr & (PAGE_SIZE-1); - l =3D min(PAGE_SIZE - s, len); - - set_fixmap(FIXMAP_MISC, maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC); - memcpy(dst, src + s, l); - clean_dcache_va_range(dst, l); - clear_fixmap(FIXMAP_MISC); - - paddr +=3D l; - dst +=3D l; - len -=3D l; - } -} - static void __init place_modules(struct kernel_info *info, paddr_t kernbase, paddr_t kernend) { diff --git a/xen/arch/arm/mmu/setup.c b/xen/arch/arm/mmu/setup.c index c2df976ab2..a5a9b538ff 100644 --- a/xen/arch/arm/mmu/setup.c +++ b/xen/arch/arm/mmu/setup.c @@ -339,6 +339,33 @@ void free_init_memory(void) printk("Freed %ldkB init memory.\n", (long)(__init_end-__init_begin)>>= 10); } =20 +/** + * copy_from_paddr - copy data from a physical address + * @dst: destination virtual address + * @paddr: source physical address + * @len: length to copy + */ +void __init copy_from_paddr(void *dst, paddr_t paddr, unsigned long len) +{ + void *src =3D (void *)FIXMAP_ADDR(FIXMAP_MISC); + + while (len) { + unsigned long l, s; + + s =3D paddr & (PAGE_SIZE - 1); + l =3D min(PAGE_SIZE - s, len); + + set_fixmap(FIXMAP_MISC, maddr_to_mfn(paddr), PAGE_HYPERVISOR_WC); + memcpy(dst, src + s, l); + clean_dcache_va_range(dst, l); + clear_fixmap(FIXMAP_MISC); + + paddr +=3D l; + dst +=3D l; + len -=3D l; + } +} + /* * Local variables: * mode: C --=20 2.25.1 From nobody Wed Nov 27 11:54:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1698027293700844.8479529494919; Sun, 22 Oct 2023 19:14:53 -0700 (PDT) Received: from list by lists.xenproject.org with 
outflank-mailman.620933.966838 (Exim 4.92) (envelope-from ) id 1qukSc-0003ln-0p; Mon, 23 Oct 2023 02:14:30 +0000 Received: by outflank-mailman (output) from mailman id 620933.966838; Mon, 23 Oct 2023 02:14:29 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSb-0003le-Sf; Mon, 23 Oct 2023 02:14:29 +0000 Received: by outflank-mailman (input) for mailman id 620933; Mon, 23 Oct 2023 02:14:28 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSZ-0001U7-W4 for xen-devel@lists.xenproject.org; Mon, 23 Oct 2023 02:14:28 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id e381d4df-7149-11ee-9b0e-b553b5be7939; Mon, 23 Oct 2023 04:14:26 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4ADC92F4; Sun, 22 Oct 2023 19:15:06 -0700 (PDT) Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com [10.169.190.5]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 6DC3D3F738; Sun, 22 Oct 2023 19:14:22 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: e381d4df-7149-11ee-9b0e-b553b5be7939 From: Henry Wang To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Wei Chen , Volodymyr Babchuk , Henry Wang , Julien Grall Subject: [PATCH v8 7/8] xen/arm: Rename init_secondary_pagetables() to prepare_secondary_mm() Date: Mon, 23 Oct 2023 10:13:44 +0800 Message-Id: <20231023021345.1731436-8-Henry.Wang@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com> References: <20231023021345.1731436-1-Henry.Wang@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1698027294378100005 Content-Type: text/plain; charset="utf-8" From: Penny Zheng init_secondary_pagetables() is a function in the common code path of both MMU and future MPU support. Since "page table" is a MMU specific concept, rename init_secondary_pagetables() to a generic name prepare_secondary_mm() as the preparation for MPU support. Reword the in-code comment on top of prepare_secondary_mm() because this function is now supposed to be MMU/MPU agnostic. Take the opportunity to fix the incorrect coding style of the in-code comments. Signed-off-by: Penny Zheng Signed-off-by: Henry Wang Reviewed-by: Julien Grall --- v8: - Change the in-code comment on top of prepare_secondary_mm() because this function is now supposed to be MMU/MPU agnostic, mention this in the commit message. - Add Julien's Reviewed-by tag. v7: - No change. v6: - Only rename init_secondary_pagetables() to prepare_secondary_mm(). 
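
For illustration only (not part of the patch): a minimal sketch of how the renamed hook keeps the common CPU bring-up path agnostic of the memory-management scheme. The prototype and the caller shape follow this series; the MPU-side definition and any mpu/ file layout are purely assumptions about how a future port could satisfy the same interface.

/*
 * Generic prototype shared by all memory-management flavours
 * (this series declares it in asm/mm.h).
 */
int prepare_secondary_mm(int cpu);

/*
 * The common bring-up path only depends on the generic name, so it
 * needs no #ifdef per memory-management scheme.
 */
static int bring_up_secondary(unsigned int cpu)
{
    int rc = prepare_secondary_mm(cpu);

    if ( rc < 0 )
        return rc;

    /* ... continue with powering up and waiting for the CPU ... */
    return 0;
}

/*
 * Hypothetical MPU-side definition (no mpu/smpboot.c exists in this
 * series): it would not allocate page tables at all, e.g. it could
 * simply check that the boot-time protection regions already cover
 * the secondary CPU and return 0.
 */

The point of the rename is that the per-subsystem Makefile (mmu/ today, possibly mpu/ later) selects which definition gets built, while smpboot.c keeps calling the same symbol.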
--- xen/arch/arm/arm32/head.S | 2 +- xen/arch/arm/include/asm/mm.h | 5 ++--- xen/arch/arm/mmu/smpboot.c | 4 ++-- xen/arch/arm/smpboot.c | 2 +- 4 files changed, 6 insertions(+), 7 deletions(-) diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S index 39218cf15f..c7b2efb8f0 100644 --- a/xen/arch/arm/arm32/head.S +++ b/xen/arch/arm/arm32/head.S @@ -257,7 +257,7 @@ GLOBAL(init_secondary) secondary_switched: /* * Non-boot CPUs need to move on to the proper pagetables, which w= ere - * setup in init_secondary_pagetables. + * setup in prepare_secondary_mm. * * XXX: This is not compliant with the Arm Arm. */ diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h index d23ebc7df6..cbcf3bf147 100644 --- a/xen/arch/arm/include/asm/mm.h +++ b/xen/arch/arm/include/asm/mm.h @@ -204,9 +204,8 @@ extern void setup_pagetables(unsigned long boot_phys_of= fset); extern void *early_fdt_map(paddr_t fdt_paddr); /* Remove early mappings */ extern void remove_early_mappings(void); -/* Allocate and initialise pagetables for a secondary CPU. Sets init_ttbr = to the - * new page table */ -extern int init_secondary_pagetables(int cpu); +/* Prepare the memory subystem to bring-up the given secondary CPU */ +extern int prepare_secondary_mm(int cpu); /* Map a frame table to cover physical addresses ps through pe */ extern void setup_frametable_mappings(paddr_t ps, paddr_t pe); /* map a physical range in virtual memory */ diff --git a/xen/arch/arm/mmu/smpboot.c b/xen/arch/arm/mmu/smpboot.c index 8b6a09f843..12f1a5d761 100644 --- a/xen/arch/arm/mmu/smpboot.c +++ b/xen/arch/arm/mmu/smpboot.c @@ -67,7 +67,7 @@ static void clear_boot_pagetables(void) } =20 #ifdef CONFIG_ARM_64 -int init_secondary_pagetables(int cpu) +int prepare_secondary_mm(int cpu) { clear_boot_pagetables(); =20 @@ -80,7 +80,7 @@ int init_secondary_pagetables(int cpu) return 0; } #else -int init_secondary_pagetables(int cpu) +int prepare_secondary_mm(int cpu) { lpae_t *first; =20 diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c index beb137d06e..ac451e9b3e 100644 --- a/xen/arch/arm/smpboot.c +++ b/xen/arch/arm/smpboot.c @@ -448,7 +448,7 @@ int __cpu_up(unsigned int cpu) =20 printk("Bringing up CPU%d\n", cpu); =20 - rc =3D init_secondary_pagetables(cpu); + rc =3D prepare_secondary_mm(cpu); if ( rc < 0 ) return rc; =20 --=20 2.25.1 From nobody Wed Nov 27 11:54:04 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=fail(p=none dis=none) header.from=arm.com Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1698027831521853.9967457495688; Sun, 22 Oct 2023 19:23:51 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.620950.966847 (Exim 4.92) (envelope-from ) id 1qukbA-0006qc-Uv; Mon, 23 Oct 2023 02:23:20 +0000 Received: by outflank-mailman (output) from mailman id 620950.966847; Mon, 23 Oct 2023 02:23:20 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukbA-0006qU-Qr; Mon, 23 Oct 2023 02:23:20 +0000 Received: by 
outflank-mailman (input) for mailman id 620950; Mon, 23 Oct 2023 02:23:20 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qukSg-0001U7-M0 for xen-devel@lists.xenproject.org; Mon, 23 Oct 2023 02:14:35 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id e65529cd-7149-11ee-9b0e-b553b5be7939; Mon, 23 Oct 2023 04:14:30 +0200 (CEST) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F35EF2F4; Sun, 22 Oct 2023 19:15:10 -0700 (PDT) Received: from a015966.shanghai.arm.com (a015966.shanghai.arm.com [10.169.190.5]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id 8DFC93F738; Sun, 22 Oct 2023 19:14:26 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: e65529cd-7149-11ee-9b0e-b553b5be7939 From: Henry Wang To: xen-devel@lists.xenproject.org Cc: Penny Zheng , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Wei Chen , Henry Wang Subject: [PATCH v8 8/8] xen/arm: mmu: move MMU specific P2M code to mmu/p2m.{c,h} Date: Mon, 23 Oct 2023 10:13:45 +0800 Message-Id: <20231023021345.1731436-9-Henry.Wang@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com> References: <20231023021345.1731436-1-Henry.Wang@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZM-MESSAGEID: 1698027832422100001 Content-Type: text/plain; charset="utf-8" From: Penny Zheng Current P2M implementation is designed for MMU system only. We move the MMU-specific codes into mmu/p2m.c, and only keep generic codes in p2m.c, like VMID allocator, etc. We also move MMU-specific definitions and declarations to mmu/p2m.h, such as p2m_tlb_flush_sync(). Also expose previously static functions p2m_vmid_allocator_init(), p2m_alloc_vmid() for further MPU usage. Since with the code movement p2m_free_vmid() is now used in two files, also expose p2m_free_vmid(). With the code movement, global variable max_vmid is used in multiple files instead of a single file (and will be used in MPU P2M implementation), declare it in the header and remove the "static" of this variable. Also, since p2m_invalidate_root() should be MMU only and after the code movement the only caller of p2m_invalidate_root() outside of mmu/p2m.c is arch_domain_creation_finished(), creating a new function named p2m_domain_creation_finished() in mmu/p2m.c for the original code in arch_domain_creation_finished(), and marking p2m_invalidate_root() as static. Take the opportunity to fix the incorrect coding style when possible. When there is bit shift in macros, take the opportunity to add the missing 'U' as a compliance of MISRA. Signed-off-by: Penny Zheng Signed-off-by: Wei Chen Signed-off-by: Henry Wang --- v8: - Note: The renaming of p2m_flush_vm() is not done due to the unclarity of other maintainers' ideas. - Also move P2M_ROOT_LEVEL, P2M_ROOT_ORDER and P2M_ROOT_PAGES to mmu/p2m.h. Move the two functions using p2m->root to mmu/p2m.c. - Also move the declaration of p2m_clear_root_pages() to mmu/p2m.h. - Expose p2m_free_vmid() as it is now used by two files. 
- Take the opportunity to use 1U and add space before/after <<, update the commit message about this. - Do not export setup_virt_paging_one(), instead, move cpu_virt_paging_callback() & co to mmu/p2m.c. - Create a new function p2m_domain_creation_finished() in mmu/p2m.c for the original code in arch_domain_creation_finished(), and mark p2m_invalidate_root() as static. v7: - No change. v6: - Also move relinquish_p2m_mapping() to mmu/p2m.c, make __p2m_set_entry() static. - Also move p2m_clear_root_pages() and p2m_flush_vm() to mmu/p2m.c. - Don't add #ifdef CONFIG_MMU to the p2m_tlb_flush_sync() in p2m_write_unlock(), this need further discussion. - Correct typo in commit message. v5: - No change v4: - Rework the patch to drop the unnecessary changes. - Rework the commit msg a bit. v3: - remove MPU stubs - adapt to the introduction of new directories: mmu/ v2: - new commit --- xen/arch/arm/domain.c | 11 +- xen/arch/arm/include/asm/mmu/p2m.h | 26 + xen/arch/arm/include/asm/p2m.h | 32 +- xen/arch/arm/mmu/Makefile | 1 + xen/arch/arm/mmu/p2m.c | 1834 ++++++++++++++++++++++++++ xen/arch/arm/p2m.c | 1909 +--------------------------- 6 files changed, 1933 insertions(+), 1880 deletions(-) create mode 100644 xen/arch/arm/include/asm/mmu/p2m.h create mode 100644 xen/arch/arm/mmu/p2m.c diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 28e3aaa5e4..5e7a7f3e7e 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -870,16 +870,7 @@ int arch_domain_soft_reset(struct domain *d) =20 void arch_domain_creation_finished(struct domain *d) { - /* - * To avoid flushing the whole guest RAM on the first Set/Way, we - * invalidate the P2M to track what has been accessed. - * - * This is only turned when IOMMU is not used or the page-table are - * not shared because bit[0] (e.g valid bit) unset will result - * IOMMU fault that could be not fixed-up. - */ - if ( !iommu_use_hap_pt(d) ) - p2m_invalidate_root(p2m_get_hostp2m(d)); + p2m_domain_creation_finished(d); } =20 static int is_guest_pv32_psr(uint32_t psr) diff --git a/xen/arch/arm/include/asm/mmu/p2m.h b/xen/arch/arm/include/asm/= mmu/p2m.h new file mode 100644 index 0000000000..58496c0b09 --- /dev/null +++ b/xen/arch/arm/include/asm/mmu/p2m.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +#ifndef __ARM_MMU_P2M_H__ +#define __ARM_MMU_P2M_H__ + +extern unsigned int p2m_root_order; +extern unsigned int p2m_root_level; +#define P2M_ROOT_ORDER p2m_root_order +#define P2M_ROOT_LEVEL p2m_root_level +#define P2M_ROOT_PAGES (1U << P2M_ROOT_ORDER) + +struct p2m_domain; +void p2m_force_tlb_flush_sync(struct p2m_domain *p2m); +void p2m_tlb_flush_sync(struct p2m_domain *p2m); + +void p2m_clear_root_pages(struct p2m_domain *p2m); + +#endif /* __ARM_MMU_P2M_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h index 940495d42b..e7428fb8db 100644 --- a/xen/arch/arm/include/asm/p2m.h +++ b/xen/arch/arm/include/asm/p2m.h @@ -14,10 +14,19 @@ /* Holds the bit size of IPAs in p2m tables. 
*/ extern unsigned int p2m_ipa_bits; =20 -extern unsigned int p2m_root_order; -extern unsigned int p2m_root_level; -#define P2M_ROOT_ORDER p2m_root_order -#define P2M_ROOT_LEVEL p2m_root_level +#define MAX_VMID_8_BIT (1UL << 8) +#define MAX_VMID_16_BIT (1UL << 16) + +#define INVALID_VMID 0 /* VMID 0 is reserved */ + +#ifdef CONFIG_ARM_64 +extern unsigned int max_vmid; +/* VMID is by default 8 bit width on AArch64 */ +#define MAX_VMID max_vmid +#else +/* VMID is always 8 bit width on AArch32 */ +#define MAX_VMID MAX_VMID_8_BIT +#endif =20 struct domain; =20 @@ -156,6 +165,12 @@ typedef enum { #endif #include =20 +#if defined(CONFIG_MMU) +# include +#else +# error "Unknown memory management layout" +#endif + static inline bool arch_acquire_resource_check(struct domain *d) { /* @@ -180,6 +195,10 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx) */ void p2m_restrict_ipa_bits(unsigned int ipa_bits); =20 +void p2m_vmid_allocator_init(void); +int p2m_alloc_vmid(struct domain *d); +void p2m_free_vmid(struct domain *d); + /* Second stage paging setup, to be called on all CPUs */ void setup_virt_paging(void); =20 @@ -242,8 +261,6 @@ static inline int p2m_is_write_locked(struct p2m_domain= *p2m) return rw_is_write_locked(&p2m->lock); } =20 -void p2m_tlb_flush_sync(struct p2m_domain *p2m); - /* Look up the MFN corresponding to a domain's GFN. */ mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t); =20 @@ -271,8 +288,7 @@ int p2m_set_entry(struct p2m_domain *p2m, =20 bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn); =20 -void p2m_clear_root_pages(struct p2m_domain *p2m); -void p2m_invalidate_root(struct p2m_domain *p2m); +void p2m_domain_creation_finished(struct domain *d); =20 /* * Clean & invalidate caches corresponding to a region [start,end) of guest diff --git a/xen/arch/arm/mmu/Makefile b/xen/arch/arm/mmu/Makefile index 98aea965df..67475fcd80 100644 --- a/xen/arch/arm/mmu/Makefile +++ b/xen/arch/arm/mmu/Makefile @@ -1,3 +1,4 @@ +obj-y +=3D p2m.o obj-y +=3D pt.o obj-y +=3D setup.o obj-y +=3D smpboot.o diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c new file mode 100644 index 0000000000..6a5a080307 --- /dev/null +++ b/xen/arch/arm/mmu/p2m.c @@ -0,0 +1,1834 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include + +unsigned int __read_mostly p2m_root_order; +unsigned int __read_mostly p2m_root_level; + +static mfn_t __read_mostly empty_root_mfn; + +static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn) +{ + return (mfn_to_maddr(root_mfn) | ((uint64_t)vmid << 48)); +} + +static struct page_info *p2m_alloc_page(struct domain *d) +{ + struct page_info *pg; + + /* + * For hardware domain, there should be no limit in the number of page= s that + * can be allocated, so that the kernel may take advantage of the exte= nded + * regions. Hence, allocate p2m pages for hardware domains from heap. 
+ */ + if ( is_hardware_domain(d) ) + { + pg =3D alloc_domheap_page(NULL, 0); + if ( pg =3D=3D NULL ) + printk(XENLOG_G_ERR "Failed to allocate P2M pages for hwdom.\n= "); + } + else + { + spin_lock(&d->arch.paging.lock); + pg =3D page_list_remove_head(&d->arch.paging.p2m_freelist); + spin_unlock(&d->arch.paging.lock); + } + + return pg; +} + +static void p2m_free_page(struct domain *d, struct page_info *pg) +{ + if ( is_hardware_domain(d) ) + free_domheap_page(pg); + else + { + spin_lock(&d->arch.paging.lock); + page_list_add_tail(pg, &d->arch.paging.p2m_freelist); + spin_unlock(&d->arch.paging.lock); + } +} + +/* Return the size of the pool, in bytes. */ +int arch_get_paging_mempool_size(struct domain *d, uint64_t *size) +{ + *size =3D (uint64_t)ACCESS_ONCE(d->arch.paging.p2m_total_pages) << PAG= E_SHIFT; + return 0; +} + +/* + * Set the pool of pages to the required number of pages. + * Returns 0 for success, non-zero for failure. + * Call with d->arch.paging.lock held. + */ +int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preemp= ted) +{ + struct page_info *pg; + + ASSERT(spin_is_locked(&d->arch.paging.lock)); + + for ( ; ; ) + { + if ( d->arch.paging.p2m_total_pages < pages ) + { + /* Need to allocate more memory from domheap */ + pg =3D alloc_domheap_page(NULL, 0); + if ( pg =3D=3D NULL ) + { + printk(XENLOG_ERR "Failed to allocate P2M pages.\n"); + return -ENOMEM; + } + ACCESS_ONCE(d->arch.paging.p2m_total_pages) =3D + d->arch.paging.p2m_total_pages + 1; + page_list_add_tail(pg, &d->arch.paging.p2m_freelist); + } + else if ( d->arch.paging.p2m_total_pages > pages ) + { + /* Need to return memory to domheap */ + pg =3D page_list_remove_head(&d->arch.paging.p2m_freelist); + if( pg ) + { + ACCESS_ONCE(d->arch.paging.p2m_total_pages) =3D + d->arch.paging.p2m_total_pages - 1; + free_domheap_page(pg); + } + else + { + printk(XENLOG_ERR + "Failed to free P2M pages, P2M freelist is empty.\n= "); + return -ENOMEM; + } + } + else + break; + + /* Check to see if we need to yield and try again */ + if ( preempted && general_preempt_check() ) + { + *preempted =3D true; + return -ERESTART; + } + } + + return 0; +} + +int arch_set_paging_mempool_size(struct domain *d, uint64_t size) +{ + unsigned long pages =3D size >> PAGE_SHIFT; + bool preempted =3D false; + int rc; + + if ( (size & ~PAGE_MASK) || /* Non page-sized request? */ + pages !=3D (size >> PAGE_SHIFT) ) /* 32-bit overflow? 
*/ + return -EINVAL; + + spin_lock(&d->arch.paging.lock); + rc =3D p2m_set_allocation(d, pages, &preempted); + spin_unlock(&d->arch.paging.lock); + + ASSERT(preempted =3D=3D (rc =3D=3D -ERESTART)); + + return rc; +} + +int p2m_teardown_allocation(struct domain *d) +{ + int ret =3D 0; + bool preempted =3D false; + + spin_lock(&d->arch.paging.lock); + if ( d->arch.paging.p2m_total_pages !=3D 0 ) + { + ret =3D p2m_set_allocation(d, 0, &preempted); + if ( preempted ) + { + spin_unlock(&d->arch.paging.lock); + return -ERESTART; + } + ASSERT(d->arch.paging.p2m_total_pages =3D=3D 0); + } + spin_unlock(&d->arch.paging.lock); + + return ret; +} + +void p2m_dump_info(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + + p2m_read_lock(p2m); + printk("p2m mappings for domain %d (vmid %d):\n", + d->domain_id, p2m->vmid); + BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]); + printk(" 1G mappings: %ld (shattered %ld)\n", + p2m->stats.mappings[1], p2m->stats.shattered[1]); + printk(" 2M mappings: %ld (shattered %ld)\n", + p2m->stats.mappings[2], p2m->stats.shattered[2]); + printk(" 4K mappings: %ld\n", p2m->stats.mappings[3]); + p2m_read_unlock(p2m); +} + +void dump_p2m_lookup(struct domain *d, paddr_t addr) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + + printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr); + + printk("P2M @ %p mfn:%#"PRI_mfn"\n", + p2m->root, mfn_x(page_to_mfn(p2m->root))); + + dump_pt_walk(page_to_maddr(p2m->root), addr, + P2M_ROOT_LEVEL, P2M_ROOT_PAGES); +} + +/* + * p2m_save_state and p2m_restore_state work in pair to workaround + * ARM64_WORKAROUND_AT_SPECULATE. p2m_save_state will set-up VTTBR to + * point to the empty page-tables to stop allocating TLB entries. + */ +void p2m_save_state(struct vcpu *p) +{ + p->arch.sctlr =3D READ_SYSREG(SCTLR_EL1); + + if ( cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) ) + { + WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR= _EL2); + /* + * Ensure VTTBR_EL2 is correctly synchronized so we can restore + * the next vCPU context without worrying about AT instruction + * speculation. + */ + isb(); + } +} + +void p2m_restore_state(struct vcpu *n) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(n->domain); + uint8_t *last_vcpu_ran; + + if ( is_idle_vcpu(n) ) + return; + + WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1); + WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2); + + /* + * ARM64_WORKAROUND_AT_SPECULATE: VTTBR_EL2 should be restored after a= ll + * registers associated to EL1/EL0 translations regime have been + * synchronized. + */ + asm volatile(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_AT_SPECULATE)); + WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2); + + last_vcpu_ran =3D &p2m->last_vcpu_ran[smp_processor_id()]; + + /* + * While we are restoring an out-of-context translation regime + * we still need to ensure: + * - VTTBR_EL2 is synchronized before flushing the TLBs + * - All registers for EL1 are synchronized before executing an AT + * instructions targeting S1/S2. + */ + isb(); + + /* + * Flush local TLB for the domain to prevent wrong TLB translation + * when running multiple vCPU of the same domain on a single pCPU. + */ + if ( *last_vcpu_ran !=3D INVALID_VCPU_ID && *last_vcpu_ran !=3D n->vcp= u_id ) + flush_guest_tlb_local(); + + *last_vcpu_ran =3D n->vcpu_id; +} + +/* + * Force a synchronous P2M TLB flush. + * + * Must be called with the p2m lock held. 
+ */ +void p2m_force_tlb_flush_sync(struct p2m_domain *p2m) +{ + unsigned long flags =3D 0; + uint64_t ovttbr; + + ASSERT(p2m_is_write_locked(p2m)); + + /* + * ARM only provides an instruction to flush TLBs for the current + * VMID. So switch to the VTTBR of a given P2M if different. + */ + ovttbr =3D READ_SYSREG64(VTTBR_EL2); + if ( ovttbr !=3D p2m->vttbr ) + { + uint64_t vttbr; + + local_irq_save(flags); + + /* + * ARM64_WORKAROUND_AT_SPECULATE: We need to stop AT to allocate + * TLBs entries because the context is partially modified. We + * only need the VMID for flushing the TLBs, so we can generate + * a new VTTBR with the VMID to flush and the empty root table. + */ + if ( !cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) ) + vttbr =3D p2m->vttbr; + else + vttbr =3D generate_vttbr(p2m->vmid, empty_root_mfn); + + WRITE_SYSREG64(vttbr, VTTBR_EL2); + + /* Ensure VTTBR_EL2 is synchronized before flushing the TLBs */ + isb(); + } + + flush_guest_tlb(); + + if ( ovttbr !=3D READ_SYSREG64(VTTBR_EL2) ) + { + WRITE_SYSREG64(ovttbr, VTTBR_EL2); + /* Ensure VTTBR_EL2 is back in place before continuing. */ + isb(); + local_irq_restore(flags); + } + + p2m->need_flush =3D false; +} + +void p2m_tlb_flush_sync(struct p2m_domain *p2m) +{ + if ( p2m->need_flush ) + p2m_force_tlb_flush_sync(p2m); +} + +/* + * Find and map the root page table. The caller is responsible for + * unmapping the table. + * + * The function will return NULL if the offset of the root table is + * invalid. + */ +static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m, + gfn_t gfn) +{ + unsigned long root_table; + + /* + * While the root table index is the offset from the previous level, + * we can't use (P2M_ROOT_LEVEL - 1) because the root level might be + * 0. Yet we still want to check if all the unused bits are zeroed. + */ + root_table =3D gfn_x(gfn) >> (XEN_PT_LEVEL_ORDER(P2M_ROOT_LEVEL) + + XEN_PT_LPAE_SHIFT); + if ( root_table >=3D P2M_ROOT_PAGES ) + return NULL; + + return __map_domain_page(p2m->root + root_table); +} + +/* + * Lookup the MFN corresponding to a domain's GFN. + * Lookup mem access in the ratrix tree. + * The entries associated to the GFN is considered valid. + */ +static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m, gfn_t= gfn) +{ + void *ptr; + + if ( !p2m->mem_access_enabled ) + return p2m->default_access; + + ptr =3D radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn)); + if ( !ptr ) + return p2m_access_rwx; + else + return radix_tree_ptr_to_int(ptr); +} + +/* + * In the case of the P2M, the valid bit is used for other purpose. Use + * the type to check whether an entry is valid. + */ +static inline bool p2m_is_valid(lpae_t pte) +{ + return pte.p2m.type !=3D p2m_invalid; +} + +/* + * lpae_is_* helpers don't check whether the valid bit is set in the + * PTE. Provide our own overlay to check the valid bit. + */ +static inline bool p2m_is_mapping(lpae_t pte, unsigned int level) +{ + return p2m_is_valid(pte) && lpae_is_mapping(pte, level); +} + +static inline bool p2m_is_superpage(lpae_t pte, unsigned int level) +{ + return p2m_is_valid(pte) && lpae_is_superpage(pte, level); +} + +#define GUEST_TABLE_MAP_FAILED 0 +#define GUEST_TABLE_SUPER_PAGE 1 +#define GUEST_TABLE_NORMAL_PAGE 2 + +static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry); + +/* + * Take the currently mapped table, find the corresponding GFN entry, + * and map the next table, if available. The previous table will be + * unmapped if the next level was mapped (e.g GUEST_TABLE_NORMAL_PAGE + * returned). 
+ * + * The read_only parameters indicates whether intermediate tables should + * be allocated when not present. + * + * Return values: + * GUEST_TABLE_MAP_FAILED: Either read_only was set and the entry + * was empty, or allocating a new page failed. + * GUEST_TABLE_NORMAL_PAGE: next level mapped normally + * GUEST_TABLE_SUPER_PAGE: The next entry points to a superpage. + */ +static int p2m_next_level(struct p2m_domain *p2m, bool read_only, + unsigned int level, lpae_t **table, + unsigned int offset) +{ + lpae_t *entry; + int ret; + mfn_t mfn; + + entry =3D *table + offset; + + if ( !p2m_is_valid(*entry) ) + { + if ( read_only ) + return GUEST_TABLE_MAP_FAILED; + + ret =3D p2m_create_table(p2m, entry); + if ( ret ) + return GUEST_TABLE_MAP_FAILED; + } + + /* The function p2m_next_level is never called at the 3rd level */ + ASSERT(level < 3); + if ( p2m_is_mapping(*entry, level) ) + return GUEST_TABLE_SUPER_PAGE; + + mfn =3D lpae_get_mfn(*entry); + + unmap_domain_page(*table); + *table =3D map_domain_page(mfn); + + return GUEST_TABLE_NORMAL_PAGE; +} + +/* + * Get the details of a given gfn. + * + * If the entry is present, the associated MFN will be returned and the + * access and type filled up. The page_order will correspond to the + * order of the mapping in the page table (i.e it could be a superpage). + * + * If the entry is not present, INVALID_MFN will be returned and the + * page_order will be set according to the order of the invalid range. + * + * valid will contain the value of bit[0] (e.g valid bit) of the + * entry. + */ +mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn, + p2m_type_t *t, p2m_access_t *a, + unsigned int *page_order, + bool *valid) +{ + paddr_t addr =3D gfn_to_gaddr(gfn); + unsigned int level =3D 0; + lpae_t entry, *table; + int rc; + mfn_t mfn =3D INVALID_MFN; + p2m_type_t _t; + DECLARE_OFFSETS(offsets, addr); + + ASSERT(p2m_is_locked(p2m)); + BUILD_BUG_ON(THIRD_MASK !=3D PAGE_MASK); + + /* Allow t to be NULL */ + t =3D t ?: &_t; + + *t =3D p2m_invalid; + + if ( valid ) + *valid =3D false; + + /* XXX: Check if the mapping is lower than the mapped gfn */ + + /* This gfn is higher than the highest the p2m map currently holds */ + if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) ) + { + for ( level =3D P2M_ROOT_LEVEL; level < 3; level++ ) + if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) > + gfn_x(p2m->max_mapped_gfn) ) + break; + + goto out; + } + + table =3D p2m_get_root_pointer(p2m, gfn); + + /* + * the table should always be non-NULL because the gfn is below + * p2m->max_mapped_gfn and the root table pages are always present. + */ + if ( !table ) + { + ASSERT_UNREACHABLE(); + level =3D P2M_ROOT_LEVEL; + goto out; + } + + for ( level =3D P2M_ROOT_LEVEL; level < 3; level++ ) + { + rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]); + if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) + goto out_unmap; + else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) + break; + } + + entry =3D table[offsets[level]]; + + if ( p2m_is_valid(entry) ) + { + *t =3D entry.p2m.type; + + if ( a ) + *a =3D p2m_mem_access_radix_get(p2m, gfn); + + mfn =3D lpae_get_mfn(entry); + /* + * The entry may point to a superpage. Find the MFN associated + * to the GFN. 
+ */ + mfn =3D mfn_add(mfn, + gfn_x(gfn) & ((1UL << XEN_PT_LEVEL_ORDER(level)) - 1= )); + + if ( valid ) + *valid =3D lpae_is_valid(entry); + } + +out_unmap: + unmap_domain_page(table); + +out: + if ( page_order ) + *page_order =3D XEN_PT_LEVEL_ORDER(level); + + return mfn; +} + +static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a) +{ + /* First apply type permissions */ + switch ( t ) + { + case p2m_ram_rw: + e->p2m.xn =3D 0; + e->p2m.write =3D 1; + break; + + case p2m_ram_ro: + e->p2m.xn =3D 0; + e->p2m.write =3D 0; + break; + + case p2m_iommu_map_rw: + case p2m_map_foreign_rw: + case p2m_grant_map_rw: + case p2m_mmio_direct_dev: + case p2m_mmio_direct_nc: + case p2m_mmio_direct_c: + e->p2m.xn =3D 1; + e->p2m.write =3D 1; + break; + + case p2m_iommu_map_ro: + case p2m_map_foreign_ro: + case p2m_grant_map_ro: + case p2m_invalid: + e->p2m.xn =3D 1; + e->p2m.write =3D 0; + break; + + case p2m_max_real_type: + BUG(); + break; + } + + /* Then restrict with access permissions */ + switch ( a ) + { + case p2m_access_rwx: + break; + case p2m_access_wx: + e->p2m.read =3D 0; + break; + case p2m_access_rw: + e->p2m.xn =3D 1; + break; + case p2m_access_w: + e->p2m.read =3D 0; + e->p2m.xn =3D 1; + break; + case p2m_access_rx: + case p2m_access_rx2rw: + e->p2m.write =3D 0; + break; + case p2m_access_x: + e->p2m.write =3D 0; + e->p2m.read =3D 0; + break; + case p2m_access_r: + e->p2m.write =3D 0; + e->p2m.xn =3D 1; + break; + case p2m_access_n: + case p2m_access_n2rwx: + e->p2m.read =3D e->p2m.write =3D 0; + e->p2m.xn =3D 1; + break; + } +} + +static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a) +{ + /* + * sh, xn and write bit will be defined in the following switches + * based on mattr and t. + */ + lpae_t e =3D (lpae_t) { + .p2m.af =3D 1, + .p2m.read =3D 1, + .p2m.table =3D 1, + .p2m.valid =3D 1, + .p2m.type =3D t, + }; + + BUILD_BUG_ON(p2m_max_real_type > (1 << 4)); + + switch ( t ) + { + case p2m_mmio_direct_dev: + e.p2m.mattr =3D MATTR_DEV; + e.p2m.sh =3D LPAE_SH_OUTER; + break; + + case p2m_mmio_direct_c: + e.p2m.mattr =3D MATTR_MEM; + e.p2m.sh =3D LPAE_SH_OUTER; + break; + + /* + * ARM ARM: Overlaying the shareability attribute (DDI + * 0406C.b B3-1376 to 1377) + * + * A memory region with a resultant memory type attribute of Normal, + * and a resultant cacheability attribute of Inner Non-cacheable, + * Outer Non-cacheable, must have a resultant shareability attribute + * of Outer Shareable, otherwise shareability is UNPREDICTABLE. + * + * On ARMv8 shareability is ignored and explicitly treated as Outer + * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable. + * See the note for table D4-40, in page 1788 of the ARM DDI 0487A.j. + */ + case p2m_mmio_direct_nc: + e.p2m.mattr =3D MATTR_MEM_NC; + e.p2m.sh =3D LPAE_SH_OUTER; + break; + + default: + e.p2m.mattr =3D MATTR_MEM; + e.p2m.sh =3D LPAE_SH_INNER; + } + + p2m_set_permission(&e, t, a); + + ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK)); + + lpae_set_mfn(e, mfn); + + return e; +} + +/* Generate table entry with correct attributes. */ +static lpae_t page_to_p2m_table(struct page_info *page) +{ + /* + * The access value does not matter because the hardware will ignore + * the permission fields for table entry. + * + * We use p2m_ram_rw so the entry has a valid type. This is important + * for p2m_is_valid() to return valid on table entries. 
+ */ + return mfn_to_p2m_entry(page_to_mfn(page), p2m_ram_rw, p2m_access_rwx); +} + +static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool clean_pte) +{ + write_pte(p, pte); + if ( clean_pte ) + clean_dcache(*p); +} + +static inline void p2m_remove_pte(lpae_t *p, bool clean_pte) +{ + lpae_t pte; + + memset(&pte, 0x00, sizeof(pte)); + p2m_write_pte(p, pte, clean_pte); +} + +/* Allocate a new page table page and hook it in via the given entry. */ +static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry) +{ + struct page_info *page; + lpae_t *p; + + ASSERT(!p2m_is_valid(*entry)); + + page =3D p2m_alloc_page(p2m->domain); + if ( page =3D=3D NULL ) + return -ENOMEM; + + page_list_add(page, &p2m->pages); + + p =3D __map_domain_page(page); + clear_page(p); + + if ( p2m->clean_pte ) + clean_dcache_va_range(p, PAGE_SIZE); + + unmap_domain_page(p); + + p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte); + + return 0; +} + +static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn, + p2m_access_t a) +{ + int rc; + + if ( !p2m->mem_access_enabled ) + return 0; + + if ( p2m_access_rwx =3D=3D a ) + { + radix_tree_delete(&p2m->mem_access_settings, gfn_x(gfn)); + return 0; + } + + rc =3D radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn), + radix_tree_int_to_ptr(a)); + if ( rc =3D=3D -EEXIST ) + { + /* If a setting already exists, change it to the new one */ + radix_tree_replace_slot( + radix_tree_lookup_slot( + &p2m->mem_access_settings, gfn_x(gfn)), + radix_tree_int_to_ptr(a)); + rc =3D 0; + } + + return rc; +} + +/* + * Put any references on the single 4K page referenced by pte. + * TODO: Handle superpages, for now we only take special references for le= af + * pages (specifically foreign ones, which can't be super mapped today). + */ +static void p2m_put_l3_page(const lpae_t pte) +{ + mfn_t mfn =3D lpae_get_mfn(pte); + + ASSERT(p2m_is_valid(pte)); + + /* + * TODO: Handle other p2m types + * + * It's safe to do the put_page here because page_alloc will + * flush the TLBs if the page is reallocated before the end of + * this loop. + */ + if ( p2m_is_foreign(pte.p2m.type) ) + { + ASSERT(mfn_valid(mfn)); + put_page(mfn_to_page(mfn)); + } + /* Detect the xenheap page and mark the stored GFN as invalid. */ + else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) ) + page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN); +} + +/* Free lpae sub-tree behind an entry */ +static void p2m_free_entry(struct p2m_domain *p2m, + lpae_t entry, unsigned int level) +{ + unsigned int i; + lpae_t *table; + mfn_t mfn; + struct page_info *pg; + + /* Nothing to do if the entry is invalid. */ + if ( !p2m_is_valid(entry) ) + return; + + if ( p2m_is_superpage(entry, level) || (level =3D=3D 3) ) + { +#ifdef CONFIG_IOREQ_SERVER + /* + * If this gets called then either the entry was replaced by an en= try + * with a different base (valid case) or the shattering of a super= page + * has failed (error case). + * So, at worst, the spurious mapcache invalidation might be sent. + */ + if ( p2m_is_ram(entry.p2m.type) && + domain_has_ioreq_server(p2m->domain) ) + ioreq_request_mapcache_invalidate(p2m->domain); +#endif + + p2m->stats.mappings[level]--; + /* Nothing to do if the entry is a super-page. 
*/ + if ( level =3D=3D 3 ) + p2m_put_l3_page(entry); + return; + } + + table =3D map_domain_page(lpae_get_mfn(entry)); + for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + p2m_free_entry(p2m, *(table + i), level + 1); + + unmap_domain_page(table); + + /* + * Make sure all the references in the TLB have been removed before + * freing the intermediate page table. + * XXX: Should we defer the free of the page table to avoid the + * flush? + */ + p2m_tlb_flush_sync(p2m); + + mfn =3D lpae_get_mfn(entry); + ASSERT(mfn_valid(mfn)); + + pg =3D mfn_to_page(mfn); + + page_list_del(pg, &p2m->pages); + p2m_free_page(p2m->domain, pg); +} + +static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry, + unsigned int level, unsigned int target, + const unsigned int *offsets) +{ + struct page_info *page; + unsigned int i; + lpae_t pte, *table; + bool rv =3D true; + + /* Convenience aliases */ + mfn_t mfn =3D lpae_get_mfn(*entry); + unsigned int next_level =3D level + 1; + unsigned int level_order =3D XEN_PT_LEVEL_ORDER(next_level); + + /* + * This should only be called with target !=3D level and the entry is + * a superpage. + */ + ASSERT(level < target); + ASSERT(p2m_is_superpage(*entry, level)); + + page =3D p2m_alloc_page(p2m->domain); + if ( !page ) + return false; + + page_list_add(page, &p2m->pages); + table =3D __map_domain_page(page); + + /* + * We are either splitting a first level 1G page into 512 second level + * 2M pages, or a second level 2M page into 512 third level 4K pages. + */ + for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + { + lpae_t *new_entry =3D table + i; + + /* + * Use the content of the superpage entry and override + * the necessary fields. So the correct permission are kept. + */ + pte =3D *entry; + lpae_set_mfn(pte, mfn_add(mfn, i << level_order)); + + /* + * First and second level pages set p2m.table =3D 0, but third + * level entries set p2m.table =3D 1. + */ + pte.p2m.table =3D (next_level =3D=3D 3); + + write_pte(new_entry, pte); + } + + /* Update stats */ + p2m->stats.shattered[level]++; + p2m->stats.mappings[level]--; + p2m->stats.mappings[next_level] +=3D XEN_PT_LPAE_ENTRIES; + + /* + * Shatter superpage in the page to the level we want to make the + * changes. + * This is done outside the loop to avoid checking the offset to + * know whether the entry should be shattered for every entry. + */ + if ( next_level !=3D target ) + rv =3D p2m_split_superpage(p2m, table + offsets[next_level], + level + 1, target, offsets); + + if ( p2m->clean_pte ) + clean_dcache_va_range(table, PAGE_SIZE); + + unmap_domain_page(table); + + /* + * Even if we failed, we should install the newly allocated LPAE + * entry. The caller will be in charge to free the sub-tree. + */ + p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte); + + return rv; +} + +/* + * Insert an entry in the p2m. This should be called with a mapping + * equal to a page/superpage (4K, 2M, 1G). + */ +static int __p2m_set_entry(struct p2m_domain *p2m, + gfn_t sgfn, + unsigned int page_order, + mfn_t smfn, + p2m_type_t t, + p2m_access_t a) +{ + unsigned int level =3D 0; + unsigned int target =3D 3 - (page_order / XEN_PT_LPAE_SHIFT); + lpae_t *entry, *table, orig_pte; + int rc; + /* A mapping is removed if the MFN is invalid. */ + bool removing_mapping =3D mfn_eq(smfn, INVALID_MFN); + DECLARE_OFFSETS(offsets, gfn_to_gaddr(sgfn)); + + ASSERT(p2m_is_write_locked(p2m)); + + /* + * Check if the level target is valid: we only support + * 4K - 2M - 1G mapping. 
+ */ + ASSERT(target > 0 && target <=3D 3); + + table =3D p2m_get_root_pointer(p2m, sgfn); + if ( !table ) + return -EINVAL; + + for ( level =3D P2M_ROOT_LEVEL; level < target; level++ ) + { + /* + * Don't try to allocate intermediate page table if the mapping + * is about to be removed. + */ + rc =3D p2m_next_level(p2m, removing_mapping, + level, &table, offsets[level]); + if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) + { + /* + * We are here because p2m_next_level has failed to map + * the intermediate page table (e.g the table does not exist + * and they p2m tree is read-only). It is a valid case + * when removing a mapping as it may not exist in the + * page table. In this case, just ignore it. + */ + rc =3D removing_mapping ? 0 : -ENOENT; + goto out; + } + else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) + break; + } + + entry =3D table + offsets[level]; + + /* + * If we are here with level < target, we must be at a leaf node, + * and we need to break up the superpage. + */ + if ( level < target ) + { + /* We need to split the original page. */ + lpae_t split_pte =3D *entry; + + ASSERT(p2m_is_superpage(*entry, level)); + + if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets)= ) + { + /* + * The current super-page is still in-place, so re-increment + * the stats. + */ + p2m->stats.mappings[level]++; + + /* Free the allocated sub-tree */ + p2m_free_entry(p2m, split_pte, level); + + rc =3D -ENOMEM; + goto out; + } + + /* + * Follow the break-before-sequence to update the entry. + * For more details see (D4.7.1 in ARM DDI 0487A.j). + */ + p2m_remove_pte(entry, p2m->clean_pte); + p2m_force_tlb_flush_sync(p2m); + + p2m_write_pte(entry, split_pte, p2m->clean_pte); + + /* then move to the level we want to make real changes */ + for ( ; level < target; level++ ) + { + rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]= ); + + /* + * The entry should be found and either be a table + * or a superpage if level 3 is not targeted + */ + ASSERT(rc =3D=3D GUEST_TABLE_NORMAL_PAGE || + (rc =3D=3D GUEST_TABLE_SUPER_PAGE && target < 3)); + } + + entry =3D table + offsets[level]; + } + + /* + * We should always be there with the correct level because + * all the intermediate tables have been installed if necessary. + */ + ASSERT(level =3D=3D target); + + orig_pte =3D *entry; + + /* + * The radix-tree can only work on 4KB. This is only used when + * memaccess is enabled and during shutdown. + */ + ASSERT(!p2m->mem_access_enabled || page_order =3D=3D 0 || + p2m->domain->is_dying); + /* + * The access type should always be p2m_access_rwx when the mapping + * is removed. + */ + ASSERT(!mfn_eq(INVALID_MFN, smfn) || (a =3D=3D p2m_access_rwx)); + /* + * Update the mem access permission before update the P2M. So we + * don't have to revert the mapping if it has failed. + */ + rc =3D p2m_mem_access_radix_set(p2m, sgfn, a); + if ( rc ) + goto out; + + /* + * Always remove the entry in order to follow the break-before-make + * sequence when updating the translation table (D4.7.1 in ARM DDI + * 0487A.j). + */ + if ( lpae_is_valid(orig_pte) || removing_mapping ) + p2m_remove_pte(entry, p2m->clean_pte); + + if ( removing_mapping ) + /* Flush can be deferred if the entry is removed */ + p2m->need_flush |=3D !!lpae_is_valid(orig_pte); + else + { + lpae_t pte =3D mfn_to_p2m_entry(smfn, t, a); + + if ( level < 3 ) + pte.p2m.table =3D 0; /* Superpage entry */ + + /* + * It is necessary to flush the TLB before writing the new entry + * to keep coherency when the previous entry was valid. 
+ * + * Although, it could be defered when only the permissions are + * changed (e.g in case of memaccess). + */ + if ( lpae_is_valid(orig_pte) ) + { + if ( likely(!p2m->mem_access_enabled) || + P2M_CLEAR_PERM(pte) !=3D P2M_CLEAR_PERM(orig_pte) ) + p2m_force_tlb_flush_sync(p2m); + else + p2m->need_flush =3D true; + } + else if ( !p2m_is_valid(orig_pte) ) /* new mapping */ + p2m->stats.mappings[level]++; + + p2m_write_pte(entry, pte, p2m->clean_pte); + + p2m->max_mapped_gfn =3D gfn_max(p2m->max_mapped_gfn, + gfn_add(sgfn, (1UL << page_order) - = 1)); + p2m->lowest_mapped_gfn =3D gfn_min(p2m->lowest_mapped_gfn, sgfn); + } + + if ( is_iommu_enabled(p2m->domain) && + (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) ) + { + unsigned int flush_flags =3D 0; + + if ( lpae_is_valid(orig_pte) ) + flush_flags |=3D IOMMU_FLUSHF_modified; + if ( lpae_is_valid(*entry) ) + flush_flags |=3D IOMMU_FLUSHF_added; + + rc =3D iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)), + 1UL << page_order, flush_flags); + } + else + rc =3D 0; + + /* + * Free the entry only if the original pte was valid and the base + * is different (to avoid freeing when permission is changed). + */ + if ( p2m_is_valid(orig_pte) && + !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) ) + p2m_free_entry(p2m, orig_pte, level); + +out: + unmap_domain_page(table); + + return rc; +} + +int p2m_set_entry(struct p2m_domain *p2m, + gfn_t sgfn, + unsigned long nr, + mfn_t smfn, + p2m_type_t t, + p2m_access_t a) +{ + int rc =3D 0; + + /* + * Any reference taken by the P2M mappings (e.g. foreign mapping) will + * be dropped in relinquish_p2m_mapping(). As the P2M will still + * be accessible after, we need to prevent mapping to be added when the + * domain is dying. + */ + if ( unlikely(p2m->domain->is_dying) ) + return -ENOMEM; + + while ( nr ) + { + unsigned long mask; + unsigned long order; + + /* + * Don't take into account the MFN when removing mapping (i.e + * MFN_INVALID) to calculate the correct target order. + * + * XXX: Support superpage mappings if nr is not aligned to a + * superpage size. + */ + mask =3D !mfn_eq(smfn, INVALID_MFN) ? mfn_x(smfn) : 0; + mask |=3D gfn_x(sgfn) | nr; + + /* Always map 4k by 4k when memaccess is enabled */ + if ( unlikely(p2m->mem_access_enabled) ) + order =3D THIRD_ORDER; + else if ( !(mask & ((1UL << FIRST_ORDER) - 1)) ) + order =3D FIRST_ORDER; + else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) ) + order =3D SECOND_ORDER; + else + order =3D THIRD_ORDER; + + rc =3D __p2m_set_entry(p2m, sgfn, order, smfn, t, a); + if ( rc ) + break; + + sgfn =3D gfn_add(sgfn, (1 << order)); + if ( !mfn_eq(smfn, INVALID_MFN) ) + smfn =3D mfn_add(smfn, (1 << order)); + + nr -=3D (1 << order); + } + + return rc; +} + +/* Invalidate all entries in the table. The p2m should be write locked. */ +static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn) +{ + lpae_t *table; + unsigned int i; + + ASSERT(p2m_is_write_locked(p2m)); + + table =3D map_domain_page(mfn); + + for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) + { + lpae_t pte =3D table[i]; + + /* + * Writing an entry can be expensive because it may involve + * cleaning the cache. So avoid updating the entry if the valid + * bit is already cleared. + */ + if ( !pte.p2m.valid ) + continue; + + pte.p2m.valid =3D 0; + + p2m_write_pte(&table[i], pte, p2m->clean_pte); + } + + unmap_domain_page(table); + + p2m->need_flush =3D true; +} + +/* + * The domain will not be scheduled anymore, so in theory we should + * not need to flush the TLBs. Do it for safety purpose. 
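For reference, the mapping-order selection in p2m_set_entry() above can be exercised on its own. The sketch below is only an illustration of that alignment test, not part of the patch; it assumes the usual 4KB-granule orders (FIRST_ORDER 18, SECOND_ORDER 9, THIRD_ORDER 0), which this hunk does not spell out, and it ignores the memaccess case that forces 4KB mappings.

#include <stdbool.h>
#include <stdio.h>

#define THIRD_ORDER   0U  /* 4KB */
#define SECOND_ORDER  9U  /* 2MB */
#define FIRST_ORDER  18U  /* 1GB */

/* Largest order such that gfn, mfn (if any) and nr are all aligned to it. */
static unsigned int pick_order(unsigned long gfn, unsigned long mfn,
                               unsigned long nr, bool removing)
{
    unsigned long mask = (removing ? 0 : mfn) | gfn | nr;

    if ( !(mask & ((1UL << FIRST_ORDER) - 1)) )
        return FIRST_ORDER;
    if ( !(mask & ((1UL << SECOND_ORDER) - 1)) )
        return SECOND_ORDER;
    return THIRD_ORDER;
}

int main(void)
{
    /* 1GB-aligned gfn/mfn and 1GB worth of 4KB pages: expect 18 (1GB mapping). */
    printf("%u\n", pick_order(0x40000UL, 0x80000UL, 1UL << 18, false));
    /* An unaligned mfn forces 4KB mappings: expect 0. */
    printf("%u\n", pick_order(0x40000UL, 0x80001UL, 1UL << 18, false));
    return 0;
}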
+ * Note that all the devices have already been de-assigned. So we don't + * need to flush the IOMMU TLB here. + */ +void p2m_clear_root_pages(struct p2m_domain *p2m) +{ + unsigned int i; + + p2m_write_lock(p2m); + + for ( i =3D 0; i < P2M_ROOT_PAGES; i++ ) + clear_and_clean_page(p2m->root + i); + + p2m_force_tlb_flush_sync(p2m); + + p2m_write_unlock(p2m); +} + +/* + * Invalidate all entries in the root page-tables. This is + * useful to get fault on entry and do an action. + * + * p2m_invalid_root() should not be called when the P2M is shared with + * the IOMMU because it will cause IOMMU fault. + */ +static void p2m_invalidate_root(struct p2m_domain *p2m) +{ + unsigned int i; + + ASSERT(!iommu_use_hap_pt(p2m->domain)); + + p2m_write_lock(p2m); + + for ( i =3D 0; i < P2M_ROOT_LEVEL; i++ ) + p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i)); + + p2m_write_unlock(p2m); +} + +void p2m_domain_creation_finished(struct domain *d) +{ + /* + * To avoid flushing the whole guest RAM on the first Set/Way, we + * invalidate the P2M to track what has been accessed. + * + * This is only turned when IOMMU is not used or the page-table are + * not shared because bit[0] (e.g valid bit) unset will result + * IOMMU fault that could be not fixed-up. + */ + if ( !iommu_use_hap_pt(d) ) + p2m_invalidate_root(p2m_get_hostp2m(d)); +} + +/* + * Resolve any translation fault due to change in the p2m. This + * includes break-before-make and valid bit cleared. + */ +bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + unsigned int level =3D 0; + bool resolved =3D false; + lpae_t entry, *table; + + /* Convenience aliases */ + DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn)); + + p2m_write_lock(p2m); + + /* This gfn is higher than the highest the p2m map currently holds */ + if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) ) + goto out; + + table =3D p2m_get_root_pointer(p2m, gfn); + /* + * The table should always be non-NULL because the gfn is below + * p2m->max_mapped_gfn and the root table pages are always present. + */ + if ( !table ) + { + ASSERT_UNREACHABLE(); + goto out; + } + + /* + * Go down the page-tables until an entry has the valid bit unset or + * a block/page entry has been hit. + */ + for ( level =3D P2M_ROOT_LEVEL; level <=3D 3; level++ ) + { + int rc; + + entry =3D table[offsets[level]]; + + if ( level =3D=3D 3 ) + break; + + /* Stop as soon as we hit an entry with the valid bit unset. */ + if ( !lpae_is_valid(entry) ) + break; + + rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]); + if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) + goto out_unmap; + else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) + break; + } + + /* + * If the valid bit of the entry is set, it means someone was playing = with + * the Stage-2 page table. Nothing to do and mark the fault as resolve= d. + */ + if ( lpae_is_valid(entry) ) + { + resolved =3D true; + goto out_unmap; + } + + /* + * The valid bit is unset. If the entry is still not valid then the fa= ult + * cannot be resolved, exit and report it. + */ + if ( !p2m_is_valid(entry) ) + goto out_unmap; + + /* + * Now we have an entry with valid bit unset, but still valid from + * the P2M point of view. + * + * If an entry is pointing to a table, each entry of the table will + * have there valid bit cleared. This allows a function to clear the + * full p2m with just a couple of write. The valid bit will then be + * propagated on the fault. + * If an entry is pointing to a block/page, no work to do for now. 
+ */ + if ( lpae_is_table(entry, level) ) + p2m_invalidate_table(p2m, lpae_get_mfn(entry)); + + /* + * Now that the work on the entry is done, set the valid bit to prevent + * another fault on that entry. + */ + resolved =3D true; + entry.p2m.valid =3D 1; + + p2m_write_pte(table + offsets[level], entry, p2m->clean_pte); + + /* + * No need to flush the TLBs as the modified entry had the valid bit + * unset. + */ + +out_unmap: + unmap_domain_page(table); + +out: + p2m_write_unlock(p2m); + + return resolved; +} + +static struct page_info *p2m_allocate_root(void) +{ + struct page_info *page; + unsigned int i; + + page =3D alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0); + if ( page =3D=3D NULL ) + return NULL; + + /* Clear both first level pages */ + for ( i =3D 0; i < P2M_ROOT_PAGES; i++ ) + clear_and_clean_page(page + i); + + return page; +} + +static int p2m_alloc_table(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + + p2m->root =3D p2m_allocate_root(); + if ( !p2m->root ) + return -ENOMEM; + + p2m->vttbr =3D generate_vttbr(p2m->vmid, page_to_mfn(p2m->root)); + + /* + * Make sure that all TLBs corresponding to the new VMID are flushed + * before using it + */ + p2m_write_lock(p2m); + p2m_force_tlb_flush_sync(p2m); + p2m_write_unlock(p2m); + + return 0; +} + +int p2m_teardown(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + unsigned long count =3D 0; + struct page_info *pg; + int rc =3D 0; + + p2m_write_lock(p2m); + + while ( (pg =3D page_list_remove_head(&p2m->pages)) ) + { + p2m_free_page(p2m->domain, pg); + count++; + /* Arbitrarily preempt every 512 iterations */ + if ( !(count % 512) && hypercall_preempt_check() ) + { + rc =3D -ERESTART; + break; + } + } + + p2m_write_unlock(p2m); + + return rc; +} + +void p2m_final_teardown(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + + /* p2m not actually initialized */ + if ( !p2m->domain ) + return; + + /* + * No need to call relinquish_p2m_mapping() here because + * p2m_final_teardown() is called either after domain_relinquish_resou= rces() + * where relinquish_p2m_mapping() has been called. + */ + + ASSERT(page_list_empty(&p2m->pages)); + + while ( p2m_teardown_allocation(d) =3D=3D -ERESTART ) + continue; /* No preemption support here */ + ASSERT(page_list_empty(&d->arch.paging.p2m_freelist)); + + if ( p2m->root ) + free_domheap_pages(p2m->root, P2M_ROOT_ORDER); + + p2m->root =3D NULL; + + p2m_free_vmid(d); + + radix_tree_destroy(&p2m->mem_access_settings, NULL); + + p2m->domain =3D NULL; +} + +int p2m_init(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + int rc; + unsigned int cpu; + + rwlock_init(&p2m->lock); + spin_lock_init(&d->arch.paging.lock); + INIT_PAGE_LIST_HEAD(&p2m->pages); + INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist); + + p2m->vmid =3D INVALID_VMID; + p2m->max_mapped_gfn =3D _gfn(0); + p2m->lowest_mapped_gfn =3D _gfn(ULONG_MAX); + + p2m->default_access =3D p2m_access_rwx; + p2m->mem_access_enabled =3D false; + radix_tree_init(&p2m->mem_access_settings); + + /* + * Some IOMMUs don't support coherent PT walk. When the p2m is + * shared with the CPU, Xen has to make sure that the PT changes have + * reached the memory + */ + p2m->clean_pte =3D is_iommu_enabled(d) && + !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK); + + /* + * Make sure that the type chosen to is able to store the an vCPU ID + * between 0 and the maximum of virtual CPUS supported as long as + * the INVALID_VCPU_ID. 
+ */ + BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPU= S); + BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0])* 8)) < INVALID_VCPU_= ID); + + for_each_possible_cpu(cpu) + p2m->last_vcpu_ran[cpu] =3D INVALID_VCPU_ID; + + /* + * "Trivial" initialisation is now complete. Set the backpointer so + * p2m_teardown() and friends know to do something. + */ + p2m->domain =3D d; + + rc =3D p2m_alloc_vmid(d); + if ( rc ) + return rc; + + rc =3D p2m_alloc_table(d); + if ( rc ) + return rc; + + return 0; +} + +/* + * The function will go through the p2m and remove page reference when it + * is required. The mapping will be removed from the p2m. + * + * XXX: See whether the mapping can be left intact in the p2m. + */ +int relinquish_p2m_mapping(struct domain *d) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); + unsigned long count =3D 0; + p2m_type_t t; + int rc =3D 0; + unsigned int order; + gfn_t start, end; + + BUG_ON(!d->is_dying); + /* No mappings can be added in the P2M after the P2M lock is released.= */ + p2m_write_lock(p2m); + + start =3D p2m->lowest_mapped_gfn; + end =3D gfn_add(p2m->max_mapped_gfn, 1); + + for ( ; gfn_x(start) < gfn_x(end); + start =3D gfn_next_boundary(start, order) ) + { + mfn_t mfn =3D p2m_get_entry(p2m, start, &t, NULL, &order, NULL); + + count++; + /* + * Arbitrarily preempt every 512 iterations. + */ + if ( !(count % 512) && hypercall_preempt_check() ) + { + rc =3D -ERESTART; + break; + } + + /* + * p2m_set_entry will take care of removing reference on page + * when it is necessary and removing the mapping in the p2m. + */ + if ( !mfn_eq(mfn, INVALID_MFN) ) + { + /* + * For valid mapping, the start will always be aligned as + * entry will be removed whilst relinquishing. + */ + rc =3D __p2m_set_entry(p2m, start, order, INVALID_MFN, + p2m_invalid, p2m_access_rwx); + if ( unlikely(rc) ) + { + printk(XENLOG_G_ERR "Unable to remove mapping gfn=3D%#"PRI= _gfn" order=3D%u from the p2m of domain %d\n", gfn_x(start), order, d->doma= in_id); + break; + } + } + } + + /* + * Update lowest_mapped_gfn so on the next call we still start where + * we stopped. + */ + p2m->lowest_mapped_gfn =3D start; + + p2m_write_unlock(p2m); + + return rc; +} + +/* + * Clean & invalidate RAM associated to the guest vCPU. + * + * The function can only work with the current vCPU and should be called + * with IRQ enabled as the vCPU could get preempted. + */ +void p2m_flush_vm(struct vcpu *v) +{ + struct p2m_domain *p2m =3D p2m_get_hostp2m(v->domain); + int rc; + gfn_t start =3D _gfn(0); + + ASSERT(v =3D=3D current); + ASSERT(local_irq_is_enabled()); + ASSERT(v->arch.need_flush_to_ram); + + do + { + rc =3D p2m_cache_flush_range(v->domain, &start, _gfn(ULONG_MAX)); + if ( rc =3D=3D -ERESTART ) + do_softirq(); + } while ( rc =3D=3D -ERESTART ); + + if ( rc !=3D 0 ) + gprintk(XENLOG_WARNING, + "P2M has not been correctly cleaned (rc =3D %d)\n", + rc); + + /* + * Invalidate the p2m to track which page was modified by the guest + * between call of p2m_flush_vm(). + */ + p2m_invalidate_root(p2m); + + v->arch.need_flush_to_ram =3D false; +} + +/* VTCR value to be configured by all CPUs. Set only once by the boot CPU = */ +static register_t __read_mostly vtcr; + +static void setup_virt_paging_one(void *data) +{ + WRITE_SYSREG(vtcr, VTCR_EL2); + + /* + * ARM64_WORKAROUND_AT_SPECULATE: We want to keep the TLBs free from + * entries related to EL1/EL0 translation regime until a guest vCPU + * is running. 
For that, we need to set-up VTTBR to point to an empty + * page-table and turn on stage-2 translation. The TLB entries + * associated with EL1/EL0 translation regime will also be flushed in = case + * an AT instruction was speculated before hand. + */ + if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) ) + { + WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR= _EL2); + WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2); + isb(); + + flush_all_guests_tlb_local(); + } +} + +void __init setup_virt_paging(void) +{ + /* Setup Stage 2 address translation */ + register_t val =3D VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WB= WA; + + static const struct { + unsigned int pabits; /* Physical Address Size */ + unsigned int t0sz; /* Desired T0SZ, minimum in comment */ + unsigned int root_order; /* Page order of the root of the p2m */ + unsigned int sl0; /* Desired SL0, maximum in comment */ + } pa_range_info[] __initconst =3D { + /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */ + /* PA size, t0sz(min), root-order, sl0(max) */ +#ifdef CONFIG_ARM_64 + [0] =3D { 32, 32/*32*/, 0, 1 }, + [1] =3D { 36, 28/*28*/, 0, 1 }, + [2] =3D { 40, 24/*24*/, 1, 1 }, + [3] =3D { 42, 22/*22*/, 3, 1 }, + [4] =3D { 44, 20/*20*/, 0, 2 }, + [5] =3D { 48, 16/*16*/, 0, 2 }, + [6] =3D { 52, 12/*12*/, 4, 2 }, + [7] =3D { 0 } /* Invalid */ +#else + { 32, 0/*0*/, 0, 1 }, + { 40, 24/*24*/, 1, 1 } +#endif + }; + + unsigned int i; + unsigned int pa_range =3D 0x10; /* Larger than any possible value */ + +#ifdef CONFIG_ARM_32 + /* + * Typecast pa_range_info[].t0sz into arm32 bit variant. + * + * VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for arm322. + * Thus, pa_range_info[].t0sz is translated to its arm32 variant using + * struct bitfields. + */ + struct + { + signed int val:5; + } t0sz_32; +#else + /* + * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured + * with IPA bits =3D=3D PA bits, compare against "pabits". + */ + if ( pa_range_info[system_cpuinfo.mm64.pa_range].pabits < p2m_ipa_bits= ) + p2m_ipa_bits =3D pa_range_info[system_cpuinfo.mm64.pa_range].pabit= s; + + /* + * cpu info sanitization made sure we support 16bits VMID only if all + * cores are supporting it. + */ + if ( system_cpuinfo.mm64.vmid_bits =3D=3D MM64_VMID_16_BITS_SUPPORT ) + max_vmid =3D MAX_VMID_16_BIT; +#endif + + /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits"= . */ + for ( i =3D 0; i < ARRAY_SIZE(pa_range_info); i++ ) + { + if ( p2m_ipa_bits =3D=3D pa_range_info[i].pabits ) + { + pa_range =3D i; + break; + } + } + + /* Check if we found the associated entry in the array */ + if ( pa_range >=3D ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_rang= e].pabits ) + panic("%u-bit P2M is not supported\n", p2m_ipa_bits); + +#ifdef CONFIG_ARM_64 + val |=3D VTCR_PS(pa_range); + val |=3D VTCR_TG0_4K; + + /* Set the VS bit only if 16 bit VMID is supported. */ + if ( MAX_VMID =3D=3D MAX_VMID_16_BIT ) + val |=3D VTCR_VS; +#endif + + val |=3D VTCR_SL0(pa_range_info[pa_range].sl0); + val |=3D VTCR_T0SZ(pa_range_info[pa_range].t0sz); + + p2m_root_order =3D pa_range_info[pa_range].root_order; + p2m_root_level =3D 2 - pa_range_info[pa_range].sl0; + +#ifdef CONFIG_ARM_64 + p2m_ipa_bits =3D 64 - pa_range_info[pa_range].t0sz; +#else + t0sz_32.val =3D pa_range_info[pa_range].t0sz; + p2m_ipa_bits =3D 32 - t0sz_32.val; +#endif + + printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n", + p2m_ipa_bits, + pa_range_info[pa_range].pabits, + ( MAX_VMID =3D=3D MAX_VMID_16_BIT ) ? 
16 : 8); + + printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n", + 4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val); + + p2m_vmid_allocator_init(); + + /* It is not allowed to concatenate a level zero root */ + BUG_ON( P2M_ROOT_LEVEL =3D=3D 0 && P2M_ROOT_ORDER > 0 ); + vtcr =3D val; + + /* + * ARM64_WORKAROUND_AT_SPECULATE requires to allocate root table + * with all entries zeroed. + */ + if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) ) + { + struct page_info *root; + + root =3D p2m_allocate_root(); + if ( !root ) + panic("Unable to allocate root table for ARM64_WORKAROUND_AT_S= PECULATE\n"); + + empty_root_mfn =3D page_to_mfn(root); + } + + setup_virt_paging_one(NULL); + smp_call_function(setup_virt_paging_one, NULL, 1); +} + +static int cpu_virt_paging_callback(struct notifier_block *nfb, + unsigned long action, + void *hcpu) +{ + switch ( action ) + { + case CPU_STARTING: + ASSERT(system_state !=3D SYS_STATE_boot); + setup_virt_paging_one(NULL); + break; + default: + break; + } + + return NOTIFY_DONE; +} + +static struct notifier_block cpu_virt_paging_nfb =3D { + .notifier_call =3D cpu_virt_paging_callback, +}; + +static int __init cpu_virt_paging_init(void) +{ + register_cpu_notifier(&cpu_virt_paging_nfb); + + return 0; +} +/* + * Initialization of the notifier has to be done at init rather than presm= p_init + * phase because: the registered notifier is used to setup virtual paging = for + * non-boot CPUs after the initial virtual paging for all CPUs is already = setup, + * i.e. when a non-boot CPU is hotplugged after the system has booted. In = other + * words, the notifier should be registered after the virtual paging is + * initially setup (setup_virt_paging() is called from start_xen()). This = is + * required because vtcr config value has to be set before a notifier can = fire. 
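The arithmetic behind the pa_range_info[] table above can be checked independently: on arm64 the stage-2 input-address width is 64 - T0SZ, and root_order gives how many root pages are concatenated. A minimal standalone sketch, reusing only the arm64 values copied from the hunk and no Xen internals:

#include <stdio.h>

struct pa_range { unsigned int pabits, t0sz, root_order, sl0; };

static const struct pa_range pa_range_info[] = {
    { 32, 32, 0, 1 }, { 36, 28, 0, 1 }, { 40, 24, 1, 1 },
    { 42, 22, 3, 1 }, { 44, 20, 0, 2 }, { 48, 16, 0, 2 },
    { 52, 12, 4, 2 },
};

int main(void)
{
    unsigned int i;

    for ( i = 0; i < sizeof(pa_range_info) / sizeof(pa_range_info[0]); i++ )
    {
        /* The IPA width follows directly from T0SZ and matches the PA width. */
        unsigned int ipa = 64 - pa_range_info[i].t0sz;

        printf("PA %u bits -> IPA %u bits, %u concatenated root page(s)\n",
               pa_range_info[i].pabits, ipa,
               1U << pa_range_info[i].root_order);
    }

    return 0;
}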
+ */ +__initcall(cpu_virt_paging_init); + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c index de32a2d638..b991b76ce4 100644 --- a/xen/arch/arm/p2m.c +++ b/xen/arch/arm/p2m.c @@ -1,191 +1,25 @@ /* SPDX-License-Identifier: GPL-2.0 */ -#include -#include #include -#include #include #include #include =20 -#include #include #include #include #include #include =20 -#define MAX_VMID_8_BIT (1UL << 8) -#define MAX_VMID_16_BIT (1UL << 16) - -#define INVALID_VMID 0 /* VMID 0 is reserved */ - -unsigned int __read_mostly p2m_root_order; -unsigned int __read_mostly p2m_root_level; #ifdef CONFIG_ARM_64 -static unsigned int __read_mostly max_vmid =3D MAX_VMID_8_BIT; -/* VMID is by default 8 bit width on AArch64 */ -#define MAX_VMID max_vmid -#else -/* VMID is always 8 bit width on AArch32 */ -#define MAX_VMID MAX_VMID_8_BIT +unsigned int __read_mostly max_vmid =3D MAX_VMID_8_BIT; #endif =20 -#define P2M_ROOT_PAGES (1<arch.paging.p2m_total_pages) =3D - d->arch.paging.p2m_total_pages + 1; - page_list_add_tail(pg, &d->arch.paging.p2m_freelist); - } - else if ( d->arch.paging.p2m_total_pages > pages ) - { - /* Need to return memory to domheap */ - pg =3D page_list_remove_head(&d->arch.paging.p2m_freelist); - if( pg ) - { - ACCESS_ONCE(d->arch.paging.p2m_total_pages) =3D - d->arch.paging.p2m_total_pages - 1; - free_domheap_page(pg); - } - else - { - printk(XENLOG_ERR - "Failed to free P2M pages, P2M freelist is empty.\n= "); - return -ENOMEM; - } - } - else - break; - - /* Check to see if we need to yield and try again */ - if ( preempted && general_preempt_check() ) - { - *preempted =3D true; - return -ERESTART; - } - } - - return 0; -} - -int arch_set_paging_mempool_size(struct domain *d, uint64_t size) -{ - unsigned long pages =3D size >> PAGE_SHIFT; - bool preempted =3D false; - int rc; - - if ( (size & ~PAGE_MASK) || /* Non page-sized request? */ - pages !=3D (size >> PAGE_SHIFT) ) /* 32-bit overflow? 
*/ - return -EINVAL; - - spin_lock(&d->arch.paging.lock); - rc =3D p2m_set_allocation(d, pages, &preempted); - spin_unlock(&d->arch.paging.lock); - - ASSERT(preempted =3D=3D (rc =3D=3D -ERESTART)); - - return rc; -} - -int p2m_teardown_allocation(struct domain *d) -{ - int ret =3D 0; - bool preempted =3D false; - - spin_lock(&d->arch.paging.lock); - if ( d->arch.paging.p2m_total_pages !=3D 0 ) - { - ret =3D p2m_set_allocation(d, 0, &preempted); - if ( preempted ) - { - spin_unlock(&d->arch.paging.lock); - return -ERESTART; - } - ASSERT(d->arch.paging.p2m_total_pages =3D=3D 0); - } - spin_unlock(&d->arch.paging.lock); - - return ret; -} - /* Unlock the flush and do a P2M TLB flush if necessary */ void p2m_write_unlock(struct p2m_domain *p2m) { @@ -199,1268 +33,66 @@ void p2m_write_unlock(struct p2m_domain *p2m) write_unlock(&p2m->lock); } =20 -void p2m_dump_info(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - p2m_read_lock(p2m); - printk("p2m mappings for domain %d (vmid %d):\n", - d->domain_id, p2m->vmid); - BUG_ON(p2m->stats.mappings[0] || p2m->stats.shattered[0]); - printk(" 1G mappings: %ld (shattered %ld)\n", - p2m->stats.mappings[1], p2m->stats.shattered[1]); - printk(" 2M mappings: %ld (shattered %ld)\n", - p2m->stats.mappings[2], p2m->stats.shattered[2]); - printk(" 4K mappings: %ld\n", p2m->stats.mappings[3]); - p2m_read_unlock(p2m); -} - void memory_type_changed(struct domain *d) { } =20 -void dump_p2m_lookup(struct domain *d, paddr_t addr) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - printk("dom%d IPA 0x%"PRIpaddr"\n", d->domain_id, addr); - - printk("P2M @ %p mfn:%#"PRI_mfn"\n", - p2m->root, mfn_x(page_to_mfn(p2m->root))); - - dump_pt_walk(page_to_maddr(p2m->root), addr, - P2M_ROOT_LEVEL, P2M_ROOT_PAGES); -} - -/* - * p2m_save_state and p2m_restore_state work in pair to workaround - * ARM64_WORKAROUND_AT_SPECULATE. p2m_save_state will set-up VTTBR to - * point to the empty page-tables to stop allocating TLB entries. - */ -void p2m_save_state(struct vcpu *p) -{ - p->arch.sctlr =3D READ_SYSREG(SCTLR_EL1); - - if ( cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) ) - { - WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR= _EL2); - /* - * Ensure VTTBR_EL2 is correctly synchronized so we can restore - * the next vCPU context without worrying about AT instruction - * speculation. - */ - isb(); - } -} - -void p2m_restore_state(struct vcpu *n) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(n->domain); - uint8_t *last_vcpu_ran; - - if ( is_idle_vcpu(n) ) - return; - - WRITE_SYSREG(n->arch.sctlr, SCTLR_EL1); - WRITE_SYSREG(n->arch.hcr_el2, HCR_EL2); - - /* - * ARM64_WORKAROUND_AT_SPECULATE: VTTBR_EL2 should be restored after a= ll - * registers associated to EL1/EL0 translations regime have been - * synchronized. - */ - asm volatile(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_AT_SPECULATE)); - WRITE_SYSREG64(p2m->vttbr, VTTBR_EL2); - - last_vcpu_ran =3D &p2m->last_vcpu_ran[smp_processor_id()]; - - /* - * While we are restoring an out-of-context translation regime - * we still need to ensure: - * - VTTBR_EL2 is synchronized before flushing the TLBs - * - All registers for EL1 are synchronized before executing an AT - * instructions targeting S1/S2. - */ - isb(); - - /* - * Flush local TLB for the domain to prevent wrong TLB translation - * when running multiple vCPU of the same domain on a single pCPU. 
- */ - if ( *last_vcpu_ran !=3D INVALID_VCPU_ID && *last_vcpu_ran !=3D n->vcp= u_id ) - flush_guest_tlb_local(); - - *last_vcpu_ran =3D n->vcpu_id; -} - -/* - * Force a synchronous P2M TLB flush. - * - * Must be called with the p2m lock held. - */ -static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m) -{ - unsigned long flags =3D 0; - uint64_t ovttbr; - - ASSERT(p2m_is_write_locked(p2m)); - - /* - * ARM only provides an instruction to flush TLBs for the current - * VMID. So switch to the VTTBR of a given P2M if different. - */ - ovttbr =3D READ_SYSREG64(VTTBR_EL2); - if ( ovttbr !=3D p2m->vttbr ) - { - uint64_t vttbr; - - local_irq_save(flags); - - /* - * ARM64_WORKAROUND_AT_SPECULATE: We need to stop AT to allocate - * TLBs entries because the context is partially modified. We - * only need the VMID for flushing the TLBs, so we can generate - * a new VTTBR with the VMID to flush and the empty root table. - */ - if ( !cpus_have_const_cap(ARM64_WORKAROUND_AT_SPECULATE) ) - vttbr =3D p2m->vttbr; - else - vttbr =3D generate_vttbr(p2m->vmid, empty_root_mfn); - - WRITE_SYSREG64(vttbr, VTTBR_EL2); - - /* Ensure VTTBR_EL2 is synchronized before flushing the TLBs */ - isb(); - } - - flush_guest_tlb(); - - if ( ovttbr !=3D READ_SYSREG64(VTTBR_EL2) ) - { - WRITE_SYSREG64(ovttbr, VTTBR_EL2); - /* Ensure VTTBR_EL2 is back in place before continuing. */ - isb(); - local_irq_restore(flags); - } - - p2m->need_flush =3D false; -} - -void p2m_tlb_flush_sync(struct p2m_domain *p2m) -{ - if ( p2m->need_flush ) - p2m_force_tlb_flush_sync(p2m); -} - -/* - * Find and map the root page table. The caller is responsible for - * unmapping the table. - * - * The function will return NULL if the offset of the root table is - * invalid. - */ -static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m, - gfn_t gfn) -{ - unsigned long root_table; - - /* - * While the root table index is the offset from the previous level, - * we can't use (P2M_ROOT_LEVEL - 1) because the root level might be - * 0. Yet we still want to check if all the unused bits are zeroed. - */ - root_table =3D gfn_x(gfn) >> (XEN_PT_LEVEL_ORDER(P2M_ROOT_LEVEL) + - XEN_PT_LPAE_SHIFT); - if ( root_table >=3D P2M_ROOT_PAGES ) - return NULL; - - return __map_domain_page(p2m->root + root_table); -} - -/* - * Lookup the MFN corresponding to a domain's GFN. - * Lookup mem access in the ratrix tree. - * The entries associated to the GFN is considered valid. - */ -static p2m_access_t p2m_mem_access_radix_get(struct p2m_domain *p2m, gfn_t= gfn) -{ - void *ptr; - - if ( !p2m->mem_access_enabled ) - return p2m->default_access; - - ptr =3D radix_tree_lookup(&p2m->mem_access_settings, gfn_x(gfn)); - if ( !ptr ) - return p2m_access_rwx; - else - return radix_tree_ptr_to_int(ptr); -} - -/* - * In the case of the P2M, the valid bit is used for other purpose. Use - * the type to check whether an entry is valid. - */ -static inline bool p2m_is_valid(lpae_t pte) -{ - return pte.p2m.type !=3D p2m_invalid; -} - -/* - * lpae_is_* helpers don't check whether the valid bit is set in the - * PTE. Provide our own overlay to check the valid bit. 
- */ -static inline bool p2m_is_mapping(lpae_t pte, unsigned int level) -{ - return p2m_is_valid(pte) && lpae_is_mapping(pte, level); -} - -static inline bool p2m_is_superpage(lpae_t pte, unsigned int level) -{ - return p2m_is_valid(pte) && lpae_is_superpage(pte, level); -} - -#define GUEST_TABLE_MAP_FAILED 0 -#define GUEST_TABLE_SUPER_PAGE 1 -#define GUEST_TABLE_NORMAL_PAGE 2 - -static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry); - -/* - * Take the currently mapped table, find the corresponding GFN entry, - * and map the next table, if available. The previous table will be - * unmapped if the next level was mapped (e.g GUEST_TABLE_NORMAL_PAGE - * returned). - * - * The read_only parameters indicates whether intermediate tables should - * be allocated when not present. - * - * Return values: - * GUEST_TABLE_MAP_FAILED: Either read_only was set and the entry - * was empty, or allocating a new page failed. - * GUEST_TABLE_NORMAL_PAGE: next level mapped normally - * GUEST_TABLE_SUPER_PAGE: The next entry points to a superpage. - */ -static int p2m_next_level(struct p2m_domain *p2m, bool read_only, - unsigned int level, lpae_t **table, - unsigned int offset) -{ - lpae_t *entry; - int ret; - mfn_t mfn; - - entry =3D *table + offset; - - if ( !p2m_is_valid(*entry) ) - { - if ( read_only ) - return GUEST_TABLE_MAP_FAILED; - - ret =3D p2m_create_table(p2m, entry); - if ( ret ) - return GUEST_TABLE_MAP_FAILED; - } - - /* The function p2m_next_level is never called at the 3rd level */ - ASSERT(level < 3); - if ( p2m_is_mapping(*entry, level) ) - return GUEST_TABLE_SUPER_PAGE; - - mfn =3D lpae_get_mfn(*entry); - - unmap_domain_page(*table); - *table =3D map_domain_page(mfn); - - return GUEST_TABLE_NORMAL_PAGE; -} - -/* - * Get the details of a given gfn. - * - * If the entry is present, the associated MFN will be returned and the - * access and type filled up. The page_order will correspond to the - * order of the mapping in the page table (i.e it could be a superpage). - * - * If the entry is not present, INVALID_MFN will be returned and the - * page_order will be set according to the order of the invalid range. - * - * valid will contain the value of bit[0] (e.g valid bit) of the - * entry. - */ -mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn, - p2m_type_t *t, p2m_access_t *a, - unsigned int *page_order, - bool *valid) -{ - paddr_t addr =3D gfn_to_gaddr(gfn); - unsigned int level =3D 0; - lpae_t entry, *table; - int rc; - mfn_t mfn =3D INVALID_MFN; - p2m_type_t _t; - DECLARE_OFFSETS(offsets, addr); - - ASSERT(p2m_is_locked(p2m)); - BUILD_BUG_ON(THIRD_MASK !=3D PAGE_MASK); - - /* Allow t to be NULL */ - t =3D t ?: &_t; - - *t =3D p2m_invalid; - - if ( valid ) - *valid =3D false; - - /* XXX: Check if the mapping is lower than the mapped gfn */ - - /* This gfn is higher than the highest the p2m map currently holds */ - if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) ) - { - for ( level =3D P2M_ROOT_LEVEL; level < 3; level++ ) - if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) > - gfn_x(p2m->max_mapped_gfn) ) - break; - - goto out; - } - - table =3D p2m_get_root_pointer(p2m, gfn); - - /* - * the table should always be non-NULL because the gfn is below - * p2m->max_mapped_gfn and the root table pages are always present. 
- */ - if ( !table ) - { - ASSERT_UNREACHABLE(); - level =3D P2M_ROOT_LEVEL; - goto out; - } - - for ( level =3D P2M_ROOT_LEVEL; level < 3; level++ ) - { - rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]); - if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) - goto out_unmap; - else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) - break; - } - - entry =3D table[offsets[level]]; - - if ( p2m_is_valid(entry) ) - { - *t =3D entry.p2m.type; - - if ( a ) - *a =3D p2m_mem_access_radix_get(p2m, gfn); - - mfn =3D lpae_get_mfn(entry); - /* - * The entry may point to a superpage. Find the MFN associated - * to the GFN. - */ - mfn =3D mfn_add(mfn, - gfn_x(gfn) & ((1UL << XEN_PT_LEVEL_ORDER(level)) - 1= )); - - if ( valid ) - *valid =3D lpae_is_valid(entry); - } - -out_unmap: - unmap_domain_page(table); - -out: - if ( page_order ) - *page_order =3D XEN_PT_LEVEL_ORDER(level); - - return mfn; -} - -mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t) -{ - mfn_t mfn; - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - p2m_read_lock(p2m); - mfn =3D p2m_get_entry(p2m, gfn, t, NULL, NULL, NULL); - p2m_read_unlock(p2m); - - return mfn; -} - -struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn, - p2m_type_t *t) -{ - struct page_info *page; - p2m_type_t p2mt; - mfn_t mfn =3D p2m_lookup(d, gfn, &p2mt); - - if ( t ) - *t =3D p2mt; - - if ( !p2m_is_any_ram(p2mt) ) - return NULL; - - if ( !mfn_valid(mfn) ) - return NULL; - - page =3D mfn_to_page(mfn); - - /* - * get_page won't work on foreign mapping because the page doesn't - * belong to the current domain. - */ - if ( p2m_is_foreign(p2mt) ) - { - struct domain *fdom =3D page_get_owner_and_reference(page); - ASSERT(fdom !=3D NULL); - ASSERT(fdom !=3D d); - return page; - } - - return get_page(page, d) ? page : NULL; -} - -int guest_physmap_mark_populate_on_demand(struct domain *d, - unsigned long gfn, - unsigned int order) -{ - return -ENOSYS; -} - -unsigned long p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, - unsigned int order) -{ - return 0; -} - -static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a) -{ - /* First apply type permissions */ - switch ( t ) - { - case p2m_ram_rw: - e->p2m.xn =3D 0; - e->p2m.write =3D 1; - break; - - case p2m_ram_ro: - e->p2m.xn =3D 0; - e->p2m.write =3D 0; - break; - - case p2m_iommu_map_rw: - case p2m_map_foreign_rw: - case p2m_grant_map_rw: - case p2m_mmio_direct_dev: - case p2m_mmio_direct_nc: - case p2m_mmio_direct_c: - e->p2m.xn =3D 1; - e->p2m.write =3D 1; - break; - - case p2m_iommu_map_ro: - case p2m_map_foreign_ro: - case p2m_grant_map_ro: - case p2m_invalid: - e->p2m.xn =3D 1; - e->p2m.write =3D 0; - break; - - case p2m_max_real_type: - BUG(); - break; - } - - /* Then restrict with access permissions */ - switch ( a ) - { - case p2m_access_rwx: - break; - case p2m_access_wx: - e->p2m.read =3D 0; - break; - case p2m_access_rw: - e->p2m.xn =3D 1; - break; - case p2m_access_w: - e->p2m.read =3D 0; - e->p2m.xn =3D 1; - break; - case p2m_access_rx: - case p2m_access_rx2rw: - e->p2m.write =3D 0; - break; - case p2m_access_x: - e->p2m.write =3D 0; - e->p2m.read =3D 0; - break; - case p2m_access_r: - e->p2m.write =3D 0; - e->p2m.xn =3D 1; - break; - case p2m_access_n: - case p2m_access_n2rwx: - e->p2m.read =3D e->p2m.write =3D 0; - e->p2m.xn =3D 1; - break; - } -} - -static lpae_t mfn_to_p2m_entry(mfn_t mfn, p2m_type_t t, p2m_access_t a) -{ - /* - * sh, xn and write bit will be defined in the following switches - * based on mattr and t. 
- */ - lpae_t e =3D (lpae_t) { - .p2m.af =3D 1, - .p2m.read =3D 1, - .p2m.table =3D 1, - .p2m.valid =3D 1, - .p2m.type =3D t, - }; - - BUILD_BUG_ON(p2m_max_real_type > (1 << 4)); - - switch ( t ) - { - case p2m_mmio_direct_dev: - e.p2m.mattr =3D MATTR_DEV; - e.p2m.sh =3D LPAE_SH_OUTER; - break; - - case p2m_mmio_direct_c: - e.p2m.mattr =3D MATTR_MEM; - e.p2m.sh =3D LPAE_SH_OUTER; - break; - - /* - * ARM ARM: Overlaying the shareability attribute (DDI - * 0406C.b B3-1376 to 1377) - * - * A memory region with a resultant memory type attribute of Normal, - * and a resultant cacheability attribute of Inner Non-cacheable, - * Outer Non-cacheable, must have a resultant shareability attribute - * of Outer Shareable, otherwise shareability is UNPREDICTABLE. - * - * On ARMv8 shareability is ignored and explicitly treated as Outer - * Shareable for Normal Inner Non_cacheable, Outer Non-cacheable. - * See the note for table D4-40, in page 1788 of the ARM DDI 0487A.j. - */ - case p2m_mmio_direct_nc: - e.p2m.mattr =3D MATTR_MEM_NC; - e.p2m.sh =3D LPAE_SH_OUTER; - break; - - default: - e.p2m.mattr =3D MATTR_MEM; - e.p2m.sh =3D LPAE_SH_INNER; - } - - p2m_set_permission(&e, t, a); - - ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK)); - - lpae_set_mfn(e, mfn); - - return e; -} - -/* Generate table entry with correct attributes. */ -static lpae_t page_to_p2m_table(struct page_info *page) -{ - /* - * The access value does not matter because the hardware will ignore - * the permission fields for table entry. - * - * We use p2m_ram_rw so the entry has a valid type. This is important - * for p2m_is_valid() to return valid on table entries. - */ - return mfn_to_p2m_entry(page_to_mfn(page), p2m_ram_rw, p2m_access_rwx); -} - -static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool clean_pte) -{ - write_pte(p, pte); - if ( clean_pte ) - clean_dcache(*p); -} - -static inline void p2m_remove_pte(lpae_t *p, bool clean_pte) -{ - lpae_t pte; - - memset(&pte, 0x00, sizeof(pte)); - p2m_write_pte(p, pte, clean_pte); -} - -/* Allocate a new page table page and hook it in via the given entry. */ -static int p2m_create_table(struct p2m_domain *p2m, lpae_t *entry) -{ - struct page_info *page; - lpae_t *p; - - ASSERT(!p2m_is_valid(*entry)); - - page =3D p2m_alloc_page(p2m->domain); - if ( page =3D=3D NULL ) - return -ENOMEM; - - page_list_add(page, &p2m->pages); - - p =3D __map_domain_page(page); - clear_page(p); - - if ( p2m->clean_pte ) - clean_dcache_va_range(p, PAGE_SIZE); - - unmap_domain_page(p); - - p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte); - - return 0; -} - -static int p2m_mem_access_radix_set(struct p2m_domain *p2m, gfn_t gfn, - p2m_access_t a) -{ - int rc; - - if ( !p2m->mem_access_enabled ) - return 0; - - if ( p2m_access_rwx =3D=3D a ) - { - radix_tree_delete(&p2m->mem_access_settings, gfn_x(gfn)); - return 0; - } - - rc =3D radix_tree_insert(&p2m->mem_access_settings, gfn_x(gfn), - radix_tree_int_to_ptr(a)); - if ( rc =3D=3D -EEXIST ) - { - /* If a setting already exists, change it to the new one */ - radix_tree_replace_slot( - radix_tree_lookup_slot( - &p2m->mem_access_settings, gfn_x(gfn)), - radix_tree_int_to_ptr(a)); - rc =3D 0; - } - - return rc; -} - -/* - * Put any references on the single 4K page referenced by pte. - * TODO: Handle superpages, for now we only take special references for le= af - * pages (specifically foreign ones, which can't be super mapped today). 
- */ -static void p2m_put_l3_page(const lpae_t pte) -{ - mfn_t mfn =3D lpae_get_mfn(pte); - - ASSERT(p2m_is_valid(pte)); - - /* - * TODO: Handle other p2m types - * - * It's safe to do the put_page here because page_alloc will - * flush the TLBs if the page is reallocated before the end of - * this loop. - */ - if ( p2m_is_foreign(pte.p2m.type) ) - { - ASSERT(mfn_valid(mfn)); - put_page(mfn_to_page(mfn)); - } - /* Detect the xenheap page and mark the stored GFN as invalid. */ - else if ( p2m_is_ram(pte.p2m.type) && is_xen_heap_mfn(mfn) ) - page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN); -} - -/* Free lpae sub-tree behind an entry */ -static void p2m_free_entry(struct p2m_domain *p2m, - lpae_t entry, unsigned int level) -{ - unsigned int i; - lpae_t *table; - mfn_t mfn; - struct page_info *pg; - - /* Nothing to do if the entry is invalid. */ - if ( !p2m_is_valid(entry) ) - return; - - if ( p2m_is_superpage(entry, level) || (level =3D=3D 3) ) - { -#ifdef CONFIG_IOREQ_SERVER - /* - * If this gets called then either the entry was replaced by an en= try - * with a different base (valid case) or the shattering of a super= page - * has failed (error case). - * So, at worst, the spurious mapcache invalidation might be sent. - */ - if ( p2m_is_ram(entry.p2m.type) && - domain_has_ioreq_server(p2m->domain) ) - ioreq_request_mapcache_invalidate(p2m->domain); -#endif - - p2m->stats.mappings[level]--; - /* Nothing to do if the entry is a super-page. */ - if ( level =3D=3D 3 ) - p2m_put_l3_page(entry); - return; - } - - table =3D map_domain_page(lpae_get_mfn(entry)); - for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) - p2m_free_entry(p2m, *(table + i), level + 1); - - unmap_domain_page(table); - - /* - * Make sure all the references in the TLB have been removed before - * freing the intermediate page table. - * XXX: Should we defer the free of the page table to avoid the - * flush? - */ - p2m_tlb_flush_sync(p2m); - - mfn =3D lpae_get_mfn(entry); - ASSERT(mfn_valid(mfn)); - - pg =3D mfn_to_page(mfn); - - page_list_del(pg, &p2m->pages); - p2m_free_page(p2m->domain, pg); -} - -static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry, - unsigned int level, unsigned int target, - const unsigned int *offsets) -{ - struct page_info *page; - unsigned int i; - lpae_t pte, *table; - bool rv =3D true; - - /* Convenience aliases */ - mfn_t mfn =3D lpae_get_mfn(*entry); - unsigned int next_level =3D level + 1; - unsigned int level_order =3D XEN_PT_LEVEL_ORDER(next_level); - - /* - * This should only be called with target !=3D level and the entry is - * a superpage. - */ - ASSERT(level < target); - ASSERT(p2m_is_superpage(*entry, level)); - - page =3D p2m_alloc_page(p2m->domain); - if ( !page ) - return false; - - page_list_add(page, &p2m->pages); - table =3D __map_domain_page(page); - - /* - * We are either splitting a first level 1G page into 512 second level - * 2M pages, or a second level 2M page into 512 third level 4K pages. - */ - for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) - { - lpae_t *new_entry =3D table + i; - - /* - * Use the content of the superpage entry and override - * the necessary fields. So the correct permission are kept. - */ - pte =3D *entry; - lpae_set_mfn(pte, mfn_add(mfn, i << level_order)); - - /* - * First and second level pages set p2m.table =3D 0, but third - * level entries set p2m.table =3D 1. 
- */ - pte.p2m.table =3D (next_level =3D=3D 3); - - write_pte(new_entry, pte); - } - - /* Update stats */ - p2m->stats.shattered[level]++; - p2m->stats.mappings[level]--; - p2m->stats.mappings[next_level] +=3D XEN_PT_LPAE_ENTRIES; - - /* - * Shatter superpage in the page to the level we want to make the - * changes. - * This is done outside the loop to avoid checking the offset to - * know whether the entry should be shattered for every entry. - */ - if ( next_level !=3D target ) - rv =3D p2m_split_superpage(p2m, table + offsets[next_level], - level + 1, target, offsets); - - if ( p2m->clean_pte ) - clean_dcache_va_range(table, PAGE_SIZE); - - unmap_domain_page(table); - - /* - * Even if we failed, we should install the newly allocated LPAE - * entry. The caller will be in charge to free the sub-tree. - */ - p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_pte); - - return rv; -} - -/* - * Insert an entry in the p2m. This should be called with a mapping - * equal to a page/superpage (4K, 2M, 1G). - */ -static int __p2m_set_entry(struct p2m_domain *p2m, - gfn_t sgfn, - unsigned int page_order, - mfn_t smfn, - p2m_type_t t, - p2m_access_t a) -{ - unsigned int level =3D 0; - unsigned int target =3D 3 - (page_order / XEN_PT_LPAE_SHIFT); - lpae_t *entry, *table, orig_pte; - int rc; - /* A mapping is removed if the MFN is invalid. */ - bool removing_mapping =3D mfn_eq(smfn, INVALID_MFN); - DECLARE_OFFSETS(offsets, gfn_to_gaddr(sgfn)); - - ASSERT(p2m_is_write_locked(p2m)); - - /* - * Check if the level target is valid: we only support - * 4K - 2M - 1G mapping. - */ - ASSERT(target > 0 && target <=3D 3); - - table =3D p2m_get_root_pointer(p2m, sgfn); - if ( !table ) - return -EINVAL; - - for ( level =3D P2M_ROOT_LEVEL; level < target; level++ ) - { - /* - * Don't try to allocate intermediate page table if the mapping - * is about to be removed. - */ - rc =3D p2m_next_level(p2m, removing_mapping, - level, &table, offsets[level]); - if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) - { - /* - * We are here because p2m_next_level has failed to map - * the intermediate page table (e.g the table does not exist - * and they p2m tree is read-only). It is a valid case - * when removing a mapping as it may not exist in the - * page table. In this case, just ignore it. - */ - rc =3D removing_mapping ? 0 : -ENOENT; - goto out; - } - else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) - break; - } - - entry =3D table + offsets[level]; - - /* - * If we are here with level < target, we must be at a leaf node, - * and we need to break up the superpage. - */ - if ( level < target ) - { - /* We need to split the original page. */ - lpae_t split_pte =3D *entry; - - ASSERT(p2m_is_superpage(*entry, level)); - - if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets)= ) - { - /* - * The current super-page is still in-place, so re-increment - * the stats. - */ - p2m->stats.mappings[level]++; - - /* Free the allocated sub-tree */ - p2m_free_entry(p2m, split_pte, level); - - rc =3D -ENOMEM; - goto out; - } - - /* - * Follow the break-before-sequence to update the entry. - * For more details see (D4.7.1 in ARM DDI 0487A.j). 
- */ - p2m_remove_pte(entry, p2m->clean_pte); - p2m_force_tlb_flush_sync(p2m); - - p2m_write_pte(entry, split_pte, p2m->clean_pte); - - /* then move to the level we want to make real changes */ - for ( ; level < target; level++ ) - { - rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]= ); - - /* - * The entry should be found and either be a table - * or a superpage if level 3 is not targeted - */ - ASSERT(rc =3D=3D GUEST_TABLE_NORMAL_PAGE || - (rc =3D=3D GUEST_TABLE_SUPER_PAGE && target < 3)); - } - - entry =3D table + offsets[level]; - } - - /* - * We should always be there with the correct level because - * all the intermediate tables have been installed if necessary. - */ - ASSERT(level =3D=3D target); - - orig_pte =3D *entry; - - /* - * The radix-tree can only work on 4KB. This is only used when - * memaccess is enabled and during shutdown. - */ - ASSERT(!p2m->mem_access_enabled || page_order =3D=3D 0 || - p2m->domain->is_dying); - /* - * The access type should always be p2m_access_rwx when the mapping - * is removed. - */ - ASSERT(!mfn_eq(INVALID_MFN, smfn) || (a =3D=3D p2m_access_rwx)); - /* - * Update the mem access permission before update the P2M. So we - * don't have to revert the mapping if it has failed. - */ - rc =3D p2m_mem_access_radix_set(p2m, sgfn, a); - if ( rc ) - goto out; - - /* - * Always remove the entry in order to follow the break-before-make - * sequence when updating the translation table (D4.7.1 in ARM DDI - * 0487A.j). - */ - if ( lpae_is_valid(orig_pte) || removing_mapping ) - p2m_remove_pte(entry, p2m->clean_pte); - - if ( removing_mapping ) - /* Flush can be deferred if the entry is removed */ - p2m->need_flush |=3D !!lpae_is_valid(orig_pte); - else - { - lpae_t pte =3D mfn_to_p2m_entry(smfn, t, a); - - if ( level < 3 ) - pte.p2m.table =3D 0; /* Superpage entry */ - - /* - * It is necessary to flush the TLB before writing the new entry - * to keep coherency when the previous entry was valid. - * - * Although, it could be defered when only the permissions are - * changed (e.g in case of memaccess). - */ - if ( lpae_is_valid(orig_pte) ) - { - if ( likely(!p2m->mem_access_enabled) || - P2M_CLEAR_PERM(pte) !=3D P2M_CLEAR_PERM(orig_pte) ) - p2m_force_tlb_flush_sync(p2m); - else - p2m->need_flush =3D true; - } - else if ( !p2m_is_valid(orig_pte) ) /* new mapping */ - p2m->stats.mappings[level]++; - - p2m_write_pte(entry, pte, p2m->clean_pte); - - p2m->max_mapped_gfn =3D gfn_max(p2m->max_mapped_gfn, - gfn_add(sgfn, (1UL << page_order) - = 1)); - p2m->lowest_mapped_gfn =3D gfn_min(p2m->lowest_mapped_gfn, sgfn); - } - - if ( is_iommu_enabled(p2m->domain) && - (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) ) - { - unsigned int flush_flags =3D 0; - - if ( lpae_is_valid(orig_pte) ) - flush_flags |=3D IOMMU_FLUSHF_modified; - if ( lpae_is_valid(*entry) ) - flush_flags |=3D IOMMU_FLUSHF_added; - - rc =3D iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)), - 1UL << page_order, flush_flags); - } - else - rc =3D 0; - - /* - * Free the entry only if the original pte was valid and the base - * is different (to avoid freeing when permission is changed). - */ - if ( p2m_is_valid(orig_pte) && - !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) ) - p2m_free_entry(p2m, orig_pte, level); - -out: - unmap_domain_page(table); - - return rc; -} - -int p2m_set_entry(struct p2m_domain *p2m, - gfn_t sgfn, - unsigned long nr, - mfn_t smfn, - p2m_type_t t, - p2m_access_t a) -{ - int rc =3D 0; - - /* - * Any reference taken by the P2M mappings (e.g. 
foreign mapping) will - * be dropped in relinquish_p2m_mapping(). As the P2M will still - * be accessible after, we need to prevent mapping to be added when the - * domain is dying. - */ - if ( unlikely(p2m->domain->is_dying) ) - return -ENOMEM; - - while ( nr ) - { - unsigned long mask; - unsigned long order; - - /* - * Don't take into account the MFN when removing mapping (i.e - * MFN_INVALID) to calculate the correct target order. - * - * XXX: Support superpage mappings if nr is not aligned to a - * superpage size. - */ - mask =3D !mfn_eq(smfn, INVALID_MFN) ? mfn_x(smfn) : 0; - mask |=3D gfn_x(sgfn) | nr; - - /* Always map 4k by 4k when memaccess is enabled */ - if ( unlikely(p2m->mem_access_enabled) ) - order =3D THIRD_ORDER; - else if ( !(mask & ((1UL << FIRST_ORDER) - 1)) ) - order =3D FIRST_ORDER; - else if ( !(mask & ((1UL << SECOND_ORDER) - 1)) ) - order =3D SECOND_ORDER; - else - order =3D THIRD_ORDER; - - rc =3D __p2m_set_entry(p2m, sgfn, order, smfn, t, a); - if ( rc ) - break; - - sgfn =3D gfn_add(sgfn, (1 << order)); - if ( !mfn_eq(smfn, INVALID_MFN) ) - smfn =3D mfn_add(smfn, (1 << order)); - - nr -=3D (1 << order); - } - - return rc; -} - -/* Invalidate all entries in the table. The p2m should be write locked. */ -static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn) -{ - lpae_t *table; - unsigned int i; - - ASSERT(p2m_is_write_locked(p2m)); - - table =3D map_domain_page(mfn); - - for ( i =3D 0; i < XEN_PT_LPAE_ENTRIES; i++ ) - { - lpae_t pte =3D table[i]; - - /* - * Writing an entry can be expensive because it may involve - * cleaning the cache. So avoid updating the entry if the valid - * bit is already cleared. - */ - if ( !pte.p2m.valid ) - continue; - - pte.p2m.valid =3D 0; - - p2m_write_pte(&table[i], pte, p2m->clean_pte); - } - - unmap_domain_page(table); - - p2m->need_flush =3D true; -} - -/* - * The domain will not be scheduled anymore, so in theory we should - * not need to flush the TLBs. Do it for safety purpose. - * Note that all the devices have already been de-assigned. So we don't - * need to flush the IOMMU TLB here. - */ -void p2m_clear_root_pages(struct p2m_domain *p2m) -{ - unsigned int i; - - p2m_write_lock(p2m); - - for ( i =3D 0; i < P2M_ROOT_PAGES; i++ ) - clear_and_clean_page(p2m->root + i); - - p2m_force_tlb_flush_sync(p2m); - - p2m_write_unlock(p2m); -} - -/* - * Invalidate all entries in the root page-tables. This is - * useful to get fault on entry and do an action. - * - * p2m_invalid_root() should not be called when the P2M is shared with - * the IOMMU because it will cause IOMMU fault. - */ -void p2m_invalidate_root(struct p2m_domain *p2m) -{ - unsigned int i; - - ASSERT(!iommu_use_hap_pt(p2m->domain)); - - p2m_write_lock(p2m); - - for ( i =3D 0; i < P2M_ROOT_LEVEL; i++ ) - p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i)); - - p2m_write_unlock(p2m); -} - -/* - * Resolve any translation fault due to change in the p2m. This - * includes break-before-make and valid bit cleared. 
- */ -bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - unsigned int level =3D 0; - bool resolved =3D false; - lpae_t entry, *table; - - /* Convenience aliases */ - DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn)); - - p2m_write_lock(p2m); - - /* This gfn is higher than the highest the p2m map currently holds */ - if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) ) - goto out; - - table =3D p2m_get_root_pointer(p2m, gfn); - /* - * The table should always be non-NULL because the gfn is below - * p2m->max_mapped_gfn and the root table pages are always present. - */ - if ( !table ) - { - ASSERT_UNREACHABLE(); - goto out; - } - - /* - * Go down the page-tables until an entry has the valid bit unset or - * a block/page entry has been hit. - */ - for ( level =3D P2M_ROOT_LEVEL; level <=3D 3; level++ ) - { - int rc; - - entry =3D table[offsets[level]]; - - if ( level =3D=3D 3 ) - break; +mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t) +{ + mfn_t mfn; + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); =20 - /* Stop as soon as we hit an entry with the valid bit unset. */ - if ( !lpae_is_valid(entry) ) - break; + p2m_read_lock(p2m); + mfn =3D p2m_get_entry(p2m, gfn, t, NULL, NULL, NULL); + p2m_read_unlock(p2m); =20 - rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]); - if ( rc =3D=3D GUEST_TABLE_MAP_FAILED ) - goto out_unmap; - else if ( rc !=3D GUEST_TABLE_NORMAL_PAGE ) - break; - } + return mfn; +} =20 - /* - * If the valid bit of the entry is set, it means someone was playing = with - * the Stage-2 page table. Nothing to do and mark the fault as resolve= d. - */ - if ( lpae_is_valid(entry) ) - { - resolved =3D true; - goto out_unmap; - } +struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn, + p2m_type_t *t) +{ + struct page_info *page; + p2m_type_t p2mt; + mfn_t mfn =3D p2m_lookup(d, gfn, &p2mt); =20 - /* - * The valid bit is unset. If the entry is still not valid then the fa= ult - * cannot be resolved, exit and report it. - */ - if ( !p2m_is_valid(entry) ) - goto out_unmap; + if ( t ) + *t =3D p2mt; =20 - /* - * Now we have an entry with valid bit unset, but still valid from - * the P2M point of view. - * - * If an entry is pointing to a table, each entry of the table will - * have there valid bit cleared. This allows a function to clear the - * full p2m with just a couple of write. The valid bit will then be - * propagated on the fault. - * If an entry is pointing to a block/page, no work to do for now. - */ - if ( lpae_is_table(entry, level) ) - p2m_invalidate_table(p2m, lpae_get_mfn(entry)); + if ( !p2m_is_any_ram(p2mt) ) + return NULL; =20 - /* - * Now that the work on the entry is done, set the valid bit to prevent - * another fault on that entry. - */ - resolved =3D true; - entry.p2m.valid =3D 1; + if ( !mfn_valid(mfn) ) + return NULL; =20 - p2m_write_pte(table + offsets[level], entry, p2m->clean_pte); + page =3D mfn_to_page(mfn); =20 /* - * No need to flush the TLBs as the modified entry had the valid bit - * unset. + * get_page won't work on foreign mapping because the page doesn't + * belong to the current domain. */ + if ( p2m_is_foreign(p2mt) ) + { + struct domain *fdom =3D page_get_owner_and_reference(page); + ASSERT(fdom !=3D NULL); + ASSERT(fdom !=3D d); + return page; + } =20 -out_unmap: - unmap_domain_page(table); + return get_page(page, d) ? 
page : NULL; +} =20 -out: - p2m_write_unlock(p2m); +int guest_physmap_mark_populate_on_demand(struct domain *d, + unsigned long gfn, + unsigned int order) +{ + return -ENOSYS; +} =20 - return resolved; +unsigned long p2m_pod_decrease_reservation(struct domain *d, gfn_t gfn, + unsigned int order) +{ + return 0; } =20 int p2m_insert_mapping(struct domain *d, gfn_t start_gfn, unsigned long nr, @@ -1612,44 +244,6 @@ int set_foreign_p2m_entry(struct domain *d, const str= uct domain *fd, return rc; } =20 -static struct page_info *p2m_allocate_root(void) -{ - struct page_info *page; - unsigned int i; - - page =3D alloc_domheap_pages(NULL, P2M_ROOT_ORDER, 0); - if ( page =3D=3D NULL ) - return NULL; - - /* Clear both first level pages */ - for ( i =3D 0; i < P2M_ROOT_PAGES; i++ ) - clear_and_clean_page(page + i); - - return page; -} - -static int p2m_alloc_table(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - p2m->root =3D p2m_allocate_root(); - if ( !p2m->root ) - return -ENOMEM; - - p2m->vttbr =3D generate_vttbr(p2m->vmid, page_to_mfn(p2m->root)); - - /* - * Make sure that all TLBs corresponding to the new VMID are flushed - * before using it - */ - p2m_write_lock(p2m); - p2m_force_tlb_flush_sync(p2m); - p2m_write_unlock(p2m); - - return 0; -} - - static spinlock_t vmid_alloc_lock =3D SPIN_LOCK_UNLOCKED; =20 /* @@ -1660,7 +254,7 @@ static spinlock_t vmid_alloc_lock =3D SPIN_LOCK_UNLOCK= ED; */ static unsigned long *vmid_mask; =20 -static void p2m_vmid_allocator_init(void) +void p2m_vmid_allocator_init(void) { /* * allocate space for vmid_mask based on MAX_VMID @@ -1673,7 +267,7 @@ static void p2m_vmid_allocator_init(void) set_bit(INVALID_VMID, vmid_mask); } =20 -static int p2m_alloc_vmid(struct domain *d) +int p2m_alloc_vmid(struct domain *d) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); =20 @@ -1703,7 +297,7 @@ out: return rc; } =20 -static void p2m_free_vmid(struct domain *d) +void p2m_free_vmid(struct domain *d) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); spin_lock(&vmid_alloc_lock); @@ -1713,187 +307,6 @@ static void p2m_free_vmid(struct domain *d) spin_unlock(&vmid_alloc_lock); } =20 -int p2m_teardown(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - unsigned long count =3D 0; - struct page_info *pg; - int rc =3D 0; - - p2m_write_lock(p2m); - - while ( (pg =3D page_list_remove_head(&p2m->pages)) ) - { - p2m_free_page(p2m->domain, pg); - count++; - /* Arbitrarily preempt every 512 iterations */ - if ( !(count % 512) && hypercall_preempt_check() ) - { - rc =3D -ERESTART; - break; - } - } - - p2m_write_unlock(p2m); - - return rc; -} - -void p2m_final_teardown(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - - /* p2m not actually initialized */ - if ( !p2m->domain ) - return; - - /* - * No need to call relinquish_p2m_mapping() here because - * p2m_final_teardown() is called either after domain_relinquish_resou= rces() - * where relinquish_p2m_mapping() has been called. 
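/*
 * Editor's sketch (not part of the patch): p2m_alloc_vmid() and
 * p2m_free_vmid() above manage the stage-2 VMID space as a bitmap
 * protected by a lock, with the invalid VMID permanently reserved.
 * The user-space example below mirrors that scheme; the 8-bit VMID
 * space and the pthread mutex standing in for the spinlock are
 * assumptions made purely for the illustration.
 */
#include <limits.h>
#include <pthread.h>
#include <stdio.h>

#define MAX_VMID      256U              /* assume 8-bit VMIDs here */
#define INVALID_VMID  0U
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long vmid_mask[(MAX_VMID + BITS_PER_LONG - 1) / BITS_PER_LONG];
static pthread_mutex_t vmid_lock = PTHREAD_MUTEX_INITIALIZER;

static void vmid_allocator_init(void)
{
    /* VMID 0 is never handed out; it means "no VMID assigned". */
    vmid_mask[INVALID_VMID / BITS_PER_LONG] |= 1UL << INVALID_VMID;
}

static int vmid_alloc(unsigned int *vmid)
{
    int rc = -1;
    unsigned int i;

    pthread_mutex_lock(&vmid_lock);
    for ( i = 0; i < MAX_VMID; i++ )
    {
        unsigned long bit = 1UL << (i % BITS_PER_LONG);

        if ( !(vmid_mask[i / BITS_PER_LONG] & bit) )
        {
            vmid_mask[i / BITS_PER_LONG] |= bit;   /* claim first free VMID */
            *vmid = i;
            rc = 0;
            break;
        }
    }
    pthread_mutex_unlock(&vmid_lock);

    return rc;
}

static void vmid_free(unsigned int vmid)
{
    pthread_mutex_lock(&vmid_lock);
    if ( vmid != INVALID_VMID && vmid < MAX_VMID )
        vmid_mask[vmid / BITS_PER_LONG] &= ~(1UL << (vmid % BITS_PER_LONG));
    pthread_mutex_unlock(&vmid_lock);
}

int main(void)
{
    unsigned int vmid;

    vmid_allocator_init();
    if ( !vmid_alloc(&vmid) )
        printf("allocated VMID %u\n", vmid);    /* prints 1: bit 0 is taken */
    vmid_free(vmid);
    return 0;
}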
- */ - - ASSERT(page_list_empty(&p2m->pages)); - - while ( p2m_teardown_allocation(d) =3D=3D -ERESTART ) - continue; /* No preemption support here */ - ASSERT(page_list_empty(&d->arch.paging.p2m_freelist)); - - if ( p2m->root ) - free_domheap_pages(p2m->root, P2M_ROOT_ORDER); - - p2m->root =3D NULL; - - p2m_free_vmid(d); - - radix_tree_destroy(&p2m->mem_access_settings, NULL); - - p2m->domain =3D NULL; -} - -int p2m_init(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - int rc; - unsigned int cpu; - - rwlock_init(&p2m->lock); - spin_lock_init(&d->arch.paging.lock); - INIT_PAGE_LIST_HEAD(&p2m->pages); - INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist); - - p2m->vmid =3D INVALID_VMID; - p2m->max_mapped_gfn =3D _gfn(0); - p2m->lowest_mapped_gfn =3D _gfn(ULONG_MAX); - - p2m->default_access =3D p2m_access_rwx; - p2m->mem_access_enabled =3D false; - radix_tree_init(&p2m->mem_access_settings); - - /* - * Some IOMMUs don't support coherent PT walk. When the p2m is - * shared with the CPU, Xen has to make sure that the PT changes have - * reached the memory - */ - p2m->clean_pte =3D is_iommu_enabled(d) && - !iommu_has_feature(d, IOMMU_FEAT_COHERENT_WALK); - - /* - * Make sure that the type chosen to is able to store the an vCPU ID - * between 0 and the maximum of virtual CPUS supported as long as - * the INVALID_VCPU_ID. - */ - BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0]) * 8)) < MAX_VIRT_CPU= S); - BUILD_BUG_ON((1 << (sizeof(p2m->last_vcpu_ran[0])* 8)) < INVALID_VCPU_= ID); - - for_each_possible_cpu(cpu) - p2m->last_vcpu_ran[cpu] =3D INVALID_VCPU_ID; - - /* - * "Trivial" initialisation is now complete. Set the backpointer so - * p2m_teardown() and friends know to do something. - */ - p2m->domain =3D d; - - rc =3D p2m_alloc_vmid(d); - if ( rc ) - return rc; - - rc =3D p2m_alloc_table(d); - if ( rc ) - return rc; - - return 0; -} - -/* - * The function will go through the p2m and remove page reference when it - * is required. The mapping will be removed from the p2m. - * - * XXX: See whether the mapping can be left intact in the p2m. - */ -int relinquish_p2m_mapping(struct domain *d) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(d); - unsigned long count =3D 0; - p2m_type_t t; - int rc =3D 0; - unsigned int order; - gfn_t start, end; - - BUG_ON(!d->is_dying); - /* No mappings can be added in the P2M after the P2M lock is released.= */ - p2m_write_lock(p2m); - - start =3D p2m->lowest_mapped_gfn; - end =3D gfn_add(p2m->max_mapped_gfn, 1); - - for ( ; gfn_x(start) < gfn_x(end); - start =3D gfn_next_boundary(start, order) ) - { - mfn_t mfn =3D p2m_get_entry(p2m, start, &t, NULL, &order, NULL); - - count++; - /* - * Arbitrarily preempt every 512 iterations. - */ - if ( !(count % 512) && hypercall_preempt_check() ) - { - rc =3D -ERESTART; - break; - } - - /* - * p2m_set_entry will take care of removing reference on page - * when it is necessary and removing the mapping in the p2m. - */ - if ( !mfn_eq(mfn, INVALID_MFN) ) - { - /* - * For valid mapping, the start will always be aligned as - * entry will be removed whilst relinquishing. - */ - rc =3D __p2m_set_entry(p2m, start, order, INVALID_MFN, - p2m_invalid, p2m_access_rwx); - if ( unlikely(rc) ) - { - printk(XENLOG_G_ERR "Unable to remove mapping gfn=3D%#"PRI= _gfn" order=3D%u from the p2m of domain %d\n", gfn_x(start), order, d->doma= in_id); - break; - } - } - } - - /* - * Update lowest_mapped_gfn so on the next call we still start where - * we stopped. 
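/*
 * Editor's sketch (not part of the patch): p2m_teardown() and
 * relinquish_p2m_mapping() above share one pattern - do a chunk of work,
 * check every 512 iterations whether the hypercall should be preempted,
 * return -ERESTART and let the caller resume from the recorded position.
 * The program below models only that control flow; need_preempt(), the
 * ERESTART value and the work counter are stand-ins for
 * hypercall_preempt_check() and the real p2m state.
 */
#include <stdbool.h>
#include <stdio.h>

#define ERESTART 85

static bool need_preempt(unsigned long done_this_call)
{
    /* Stand-in: pretend other work is pending after 1024 items. */
    return done_this_call >= 1024;
}

/* Tear down 'total' items, resuming from *next on each invocation. */
static int teardown(unsigned long *next, unsigned long total)
{
    unsigned long count = 0;

    while ( *next < total )
    {
        /* ... free one page / remove one mapping here ... */
        (*next)++;

        /* Arbitrarily preempt every 512 iterations, as in the code above. */
        if ( !(++count % 512) && need_preempt(count) )
            return -ERESTART;
    }

    return 0;
}

int main(void)
{
    unsigned long next = 0;
    int rc;

    /* The hypercall continuation keeps retrying until completion. */
    while ( (rc = teardown(&next, 10000UL)) == -ERESTART )
        printf("preempted after %lu items, restarting\n", next);

    printf("done, rc=%d\n", rc);
    return 0;
}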
- */ - p2m->lowest_mapped_gfn =3D start; - - p2m_write_unlock(p2m); - - return rc; -} - int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); @@ -1987,43 +400,6 @@ int p2m_cache_flush_range(struct domain *d, gfn_t *ps= tart, gfn_t end) return rc; } =20 -/* - * Clean & invalidate RAM associated to the guest vCPU. - * - * The function can only work with the current vCPU and should be called - * with IRQ enabled as the vCPU could get preempted. - */ -void p2m_flush_vm(struct vcpu *v) -{ - struct p2m_domain *p2m =3D p2m_get_hostp2m(v->domain); - int rc; - gfn_t start =3D _gfn(0); - - ASSERT(v =3D=3D current); - ASSERT(local_irq_is_enabled()); - ASSERT(v->arch.need_flush_to_ram); - - do - { - rc =3D p2m_cache_flush_range(v->domain, &start, _gfn(ULONG_MAX)); - if ( rc =3D=3D -ERESTART ) - do_softirq(); - } while ( rc =3D=3D -ERESTART ); - - if ( rc !=3D 0 ) - gprintk(XENLOG_WARNING, - "P2M has not been correctly cleaned (rc =3D %d)\n", - rc); - - /* - * Invalidate the p2m to track which page was modified by the guest - * between call of p2m_flush_vm(). - */ - p2m_invalidate_root(p2m); - - v->arch.need_flush_to_ram =3D false; -} - /* * See note at ARMv7 ARM B1.14.4 (DDI 0406C.c) (TL;DR: S/W ops are not * easily virtualized). @@ -2217,197 +593,6 @@ void __init p2m_restrict_ipa_bits(unsigned int ipa_b= its) p2m_ipa_bits =3D ipa_bits; } =20 -/* VTCR value to be configured by all CPUs. Set only once by the boot CPU = */ -static register_t __read_mostly vtcr; - -static void setup_virt_paging_one(void *data) -{ - WRITE_SYSREG(vtcr, VTCR_EL2); - - /* - * ARM64_WORKAROUND_AT_SPECULATE: We want to keep the TLBs free from - * entries related to EL1/EL0 translation regime until a guest vCPU - * is running. For that, we need to set-up VTTBR to point to an empty - * page-table and turn on stage-2 translation. The TLB entries - * associated with EL1/EL0 translation regime will also be flushed in = case - * an AT instruction was speculated before hand. - */ - if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) ) - { - WRITE_SYSREG64(generate_vttbr(INVALID_VMID, empty_root_mfn), VTTBR= _EL2); - WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_VM, HCR_EL2); - isb(); - - flush_all_guests_tlb_local(); - } -} - -void __init setup_virt_paging(void) -{ - /* Setup Stage 2 address translation */ - register_t val =3D VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WB= WA; - - static const struct { - unsigned int pabits; /* Physical Address Size */ - unsigned int t0sz; /* Desired T0SZ, minimum in comment */ - unsigned int root_order; /* Page order of the root of the p2m */ - unsigned int sl0; /* Desired SL0, maximum in comment */ - } pa_range_info[] __initconst =3D { - /* T0SZ minimum and SL0 maximum from ARM DDI 0487H.a Table D5-6 */ - /* PA size, t0sz(min), root-order, sl0(max) */ -#ifdef CONFIG_ARM_64 - [0] =3D { 32, 32/*32*/, 0, 1 }, - [1] =3D { 36, 28/*28*/, 0, 1 }, - [2] =3D { 40, 24/*24*/, 1, 1 }, - [3] =3D { 42, 22/*22*/, 3, 1 }, - [4] =3D { 44, 20/*20*/, 0, 2 }, - [5] =3D { 48, 16/*16*/, 0, 2 }, - [6] =3D { 52, 12/*12*/, 4, 2 }, - [7] =3D { 0 } /* Invalid */ -#else - { 32, 0/*0*/, 0, 1 }, - { 40, 24/*24*/, 1, 1 } -#endif - }; - - unsigned int i; - unsigned int pa_range =3D 0x10; /* Larger than any possible value */ - -#ifdef CONFIG_ARM_32 - /* - * Typecast pa_range_info[].t0sz into arm32 bit variant. - * - * VTCR.T0SZ is bits [3:0] and S(sign extension), bit[4] for arm322. 
- * Thus, pa_range_info[].t0sz is translated to its arm32 variant using - * struct bitfields. - */ - struct - { - signed int val:5; - } t0sz_32; -#else - /* - * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured - * with IPA bits =3D=3D PA bits, compare against "pabits". - */ - if ( pa_range_info[system_cpuinfo.mm64.pa_range].pabits < p2m_ipa_bits= ) - p2m_ipa_bits =3D pa_range_info[system_cpuinfo.mm64.pa_range].pabit= s; - - /* - * cpu info sanitization made sure we support 16bits VMID only if all - * cores are supporting it. - */ - if ( system_cpuinfo.mm64.vmid_bits =3D=3D MM64_VMID_16_BITS_SUPPORT ) - max_vmid =3D MAX_VMID_16_BIT; -#endif - - /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits"= . */ - for ( i =3D 0; i < ARRAY_SIZE(pa_range_info); i++ ) - { - if ( p2m_ipa_bits =3D=3D pa_range_info[i].pabits ) - { - pa_range =3D i; - break; - } - } - - /* Check if we found the associated entry in the array */ - if ( pa_range >=3D ARRAY_SIZE(pa_range_info) || !pa_range_info[pa_rang= e].pabits ) - panic("%u-bit P2M is not supported\n", p2m_ipa_bits); - -#ifdef CONFIG_ARM_64 - val |=3D VTCR_PS(pa_range); - val |=3D VTCR_TG0_4K; - - /* Set the VS bit only if 16 bit VMID is supported. */ - if ( MAX_VMID =3D=3D MAX_VMID_16_BIT ) - val |=3D VTCR_VS; -#endif - - val |=3D VTCR_SL0(pa_range_info[pa_range].sl0); - val |=3D VTCR_T0SZ(pa_range_info[pa_range].t0sz); - - p2m_root_order =3D pa_range_info[pa_range].root_order; - p2m_root_level =3D 2 - pa_range_info[pa_range].sl0; - -#ifdef CONFIG_ARM_64 - p2m_ipa_bits =3D 64 - pa_range_info[pa_range].t0sz; -#else - t0sz_32.val =3D pa_range_info[pa_range].t0sz; - p2m_ipa_bits =3D 32 - t0sz_32.val; -#endif - - printk("P2M: %d-bit IPA with %d-bit PA and %d-bit VMID\n", - p2m_ipa_bits, - pa_range_info[pa_range].pabits, - ( MAX_VMID =3D=3D MAX_VMID_16_BIT ) ? 16 : 8); - - printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n", - 4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val); - - p2m_vmid_allocator_init(); - - /* It is not allowed to concatenate a level zero root */ - BUG_ON( P2M_ROOT_LEVEL =3D=3D 0 && P2M_ROOT_ORDER > 0 ); - vtcr =3D val; - - /* - * ARM64_WORKAROUND_AT_SPECULATE requires to allocate root table - * with all entries zeroed. - */ - if ( cpus_have_cap(ARM64_WORKAROUND_AT_SPECULATE) ) - { - struct page_info *root; - - root =3D p2m_allocate_root(); - if ( !root ) - panic("Unable to allocate root table for ARM64_WORKAROUND_AT_S= PECULATE\n"); - - empty_root_mfn =3D page_to_mfn(root); - } - - setup_virt_paging_one(NULL); - smp_call_function(setup_virt_paging_one, NULL, 1); -} - -static int cpu_virt_paging_callback(struct notifier_block *nfb, - unsigned long action, - void *hcpu) -{ - switch ( action ) - { - case CPU_STARTING: - ASSERT(system_state !=3D SYS_STATE_boot); - setup_virt_paging_one(NULL); - break; - default: - break; - } - - return NOTIFY_DONE; -} - -static struct notifier_block cpu_virt_paging_nfb =3D { - .notifier_call =3D cpu_virt_paging_callback, -}; - -static int __init cpu_virt_paging_init(void) -{ - register_cpu_notifier(&cpu_virt_paging_nfb); - - return 0; -} -/* - * Initialization of the notifier has to be done at init rather than presm= p_init - * phase because: the registered notifier is used to setup virtual paging = for - * non-boot CPUs after the initial virtual paging for all CPUs is already = setup, - * i.e. when a non-boot CPU is hotplugged after the system has booted. 
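/*
 * Editor's sketch (not part of the patch): on arm32 the code above derives
 * the IPA width as 32 - T0SZ, where VTCR.T0SZ is effectively a signed
 * 5-bit value, hence the signed bitfield used to sign-extend the encoding.
 * The example below shows only that conversion.  Note that assigning an
 * out-of-range value to a signed bitfield is implementation-defined in
 * ISO C; the GCC wrap-around behaviour is assumed here, as it is by the
 * code above.  ipa_bits_arm32() is an invented helper name.
 */
#include <stdio.h>

static unsigned int ipa_bits_arm32(unsigned int t0sz_encoding)
{
    /* 5-bit signed bitfield: bit 4 of the encoding becomes the sign. */
    struct { signed int val:5; } t0sz_32;

    t0sz_32.val = (int)t0sz_encoding;

    return 32 - t0sz_32.val;
}

int main(void)
{
    /*
     * The 40-bit entry of pa_range_info[] stores t0sz = 24, i.e. 0b11000,
     * which sign-extends to -8: 32 - (-8) = 40 IPA bits.
     */
    printf("t0sz=24 -> %u IPA bits\n", ipa_bits_arm32(24));

    /* t0sz = 0 keeps the full 32-bit IPA space. */
    printf("t0sz=0  -> %u IPA bits\n", ipa_bits_arm32(0));

    return 0;
}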
In other - * words, the notifier should be registered after the virtual paging is - * initially setup (setup_virt_paging() is called from start_xen()). This is - * required because vtcr config value has to be set before a notifier can fire. - */ -__initcall(cpu_virt_paging_init); - /* * Local variables: * mode: C --=20 2.25.1
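Editor's note: the ordering constraint documented in the removed comment above (register the CPU notifier only once setup_virt_paging() has populated vtcr) boils down to "publish the shared value before subscribing the callback". A minimal stand-alone sketch of that rule follows; the mini notifier chain, register_cpu_up_notifier(), cpu_up() and the vtcr value used here are invented stand-ins, not the Xen notifier API.

/*
 * Illustration only: the value consumed by the hotplug callback (vtcr)
 * must be set before the callback can possibly run, so the notifier is
 * registered only after the boot-time setup has filled it in.
 */
#include <assert.h>
#include <stdio.h>

typedef void (*cpu_up_cb)(unsigned int cpu);

static cpu_up_cb cpu_up_notifier;   /* at most one callback in this sketch */
static unsigned long vtcr;          /* value programmed on each CPU        */

static void register_cpu_up_notifier(cpu_up_cb cb) { cpu_up_notifier = cb; }

static void cpu_up(unsigned int cpu)  /* models a CPU being hotplugged */
{
    if ( cpu_up_notifier )
        cpu_up_notifier(cpu);
}

static void setup_virt_paging_one(unsigned int cpu)
{
    assert(vtcr != 0);              /* would write VTCR_EL2 here */
    printf("CPU%u: VTCR <- %#lx\n", cpu, vtcr);
}

int main(void)
{
    /* 1. Boot path computes the shared value (setup_virt_paging()). */
    vtcr = 0x80023558UL;            /* arbitrary example value */
    setup_virt_paging_one(0);

    /* 2. Only now is the hotplug notifier registered (the initcall). */
    register_cpu_up_notifier(setup_virt_paging_one);

    /* 3. A later hotplug reuses the already-initialised value. */
    cpu_up(1);

    return 0;
}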