From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Volodymyr Babchuk, Penny Zheng, Wei Chen, Julien Grall
Subject: [PATCH v8 5/8] xen/arm: Split MMU-specific setup_mm() and related code out
Date: Mon, 23 Oct 2023 10:13:42 +0800
Message-Id: <20231023021345.1731436-6-Henry.Wang@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231023021345.1731436-1-Henry.Wang@arm.com>
References: <20231023021345.1731436-1-Henry.Wang@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"

setup_mm() is used by Xen to set up the memory management subsystem at boot
time, covering the boot allocator, the direct-mapping, xenheap initialization,
the frametable and static memory pages. We can inherit some components
seamlessly for MPU support, such as the setup of the boot allocator, whilst we
need to implement some components differently for MPU, such as the xenheap.
There are also some components that are specific to the MMU only, for example
the direct-mapping.
Therefore in this commit, we split the MMU-specific setup_mm() and related code out. Since arm32 and arm64 have completely different setup_mm() implementation, take the opportunity to split the arch-specific setup_mm() to arch-specific files, so that we can avoid #ifdef. Also, make init_pdx(), init_staticmem_pages(), and populate_boot_allocator() public as these functions are now called from two different units, and make setup_mm() public for future MPU implementation. With above code movement, mark setup_directmap_mappings() as static because the only caller of this function is now in the same file with it. Drop the original setup_directmap_mappings() declaration and move the in-code comment on top of the declaration on top of the function implementation. Signed-off-by: Henry Wang Signed-off-by: Penny Zheng Signed-off-by: Wei Chen Acked-by: Julien Grall --- v8: - Reword the commit message about making init_pdx() & co public. - Add Julien's Acked-by tag. v7: - No change. v6: - Rework the original patch: [v5,10/13] xen/arm: mmu: move MMU-specific setup_mm to mmu/setup.c --- xen/arch/arm/arm32/mmu/mm.c | 278 ++++++++++++++++++++++++- xen/arch/arm/arm64/mmu/mm.c | 51 ++++- xen/arch/arm/include/asm/mmu/mm.h | 6 - xen/arch/arm/include/asm/setup.h | 5 + xen/arch/arm/setup.c | 324 +----------------------------- 5 files changed, 331 insertions(+), 333 deletions(-) diff --git a/xen/arch/arm/arm32/mmu/mm.c b/xen/arch/arm/arm32/mmu/mm.c index 647baf4a81..94d6cab49c 100644 --- a/xen/arch/arm/arm32/mmu/mm.c +++ b/xen/arch/arm/arm32/mmu/mm.c @@ -1,14 +1,21 @@ /* SPDX-License-Identifier: GPL-2.0 */ =20 #include +#include +#include +#include +#include #include =20 +static unsigned long opt_xenheap_megabytes __initdata; +integer_param("xenheap_megabytes", opt_xenheap_megabytes); + /* - * Set up the direct-mapped xenheap: - * up to 1GB of contiguous, always-mapped memory. + * Set up the direct-mapped xenheap: up to 1GB of contiguous, + * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. */ -void __init setup_directmap_mappings(unsigned long base_mfn, - unsigned long nr_mfns) +static void __init setup_directmap_mappings(unsigned long base_mfn, + unsigned long nr_mfns) { int rc; =20 @@ -21,6 +28,269 @@ void __init setup_directmap_mappings(unsigned long base= _mfn, directmap_virt_end =3D XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE; } =20 +/* + * Returns the end address of the highest region in the range s..e + * with required size and alignment that does not conflict with the + * modules from first_mod to nr_modules. + * + * For non-recursive callers first_mod should normally be 0 (all + * modules and Xen itself) or 1 (all modules but not Xen). + */ +static paddr_t __init consider_modules(paddr_t s, paddr_t e, + uint32_t size, paddr_t align, + int first_mod) +{ + const struct bootmodules *mi =3D &bootinfo.modules; + int i; + int nr; + + s =3D (s+align-1) & ~(align-1); + e =3D e & ~(align-1); + + if ( s > e || e - s < size ) + return 0; + + /* First check the boot modules */ + for ( i =3D first_mod; i < mi->nr_mods; i++ ) + { + paddr_t mod_s =3D mi->module[i].start; + paddr_t mod_e =3D mod_s + mi->module[i].size; + + if ( s < mod_e && mod_s < e ) + { + mod_e =3D consider_modules(mod_e, e, size, align, i+1); + if ( mod_e ) + return mod_e; + + return consider_modules(s, mod_s, size, align, i+1); + } + } + + /* Now check any fdt reserved areas. 
*/ + + nr =3D fdt_num_mem_rsv(device_tree_flattened); + + for ( ; i < mi->nr_mods + nr; i++ ) + { + paddr_t mod_s, mod_e; + + if ( fdt_get_mem_rsv_paddr(device_tree_flattened, + i - mi->nr_mods, + &mod_s, &mod_e ) < 0 ) + /* If we can't read it, pretend it doesn't exist... */ + continue; + + /* fdt_get_mem_rsv_paddr returns length */ + mod_e +=3D mod_s; + + if ( s < mod_e && mod_s < e ) + { + mod_e =3D consider_modules(mod_e, e, size, align, i+1); + if ( mod_e ) + return mod_e; + + return consider_modules(s, mod_s, size, align, i+1); + } + } + + /* + * i is the current bootmodule we are evaluating, across all + * possible kinds of bootmodules. + * + * When retrieving the corresponding reserved-memory addresses, we + * need to index the bootinfo.reserved_mem bank starting from 0, and + * only counting the reserved-memory modules. Hence, we need to use + * i - nr. + */ + nr +=3D mi->nr_mods; + for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ ) + { + paddr_t r_s =3D bootinfo.reserved_mem.bank[i - nr].start; + paddr_t r_e =3D r_s + bootinfo.reserved_mem.bank[i - nr].size; + + if ( s < r_e && r_s < e ) + { + r_e =3D consider_modules(r_e, e, size, align, i + 1); + if ( r_e ) + return r_e; + + return consider_modules(s, r_s, size, align, i + 1); + } + } + return e; +} + +/* + * Find a contiguous region that fits in the static heap region with + * required size and alignment, and return the end address of the region + * if found otherwise 0. + */ +static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t al= ign) +{ + unsigned int i; + paddr_t end =3D 0, aligned_start, aligned_end; + paddr_t bank_start, bank_size, bank_end; + + for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) + { + if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HEAP ) + continue; + + bank_start =3D bootinfo.reserved_mem.bank[i].start; + bank_size =3D bootinfo.reserved_mem.bank[i].size; + bank_end =3D bank_start + bank_size; + + if ( bank_size < size ) + continue; + + aligned_end =3D bank_end & ~(align - 1); + aligned_start =3D (aligned_end - size) & ~(align - 1); + + if ( aligned_start > bank_start ) + /* + * Allocate the xenheap as high as possible to keep low-memory + * available (assuming the admin supplied region below 4GB) + * for other use (e.g. domain memory allocation). + */ + end =3D max(end, aligned_end); + } + + return end; +} + +void __init setup_mm(void) +{ + paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_si= ze; + paddr_t static_heap_end =3D 0, static_heap_size =3D 0; + unsigned long heap_pages, xenheap_pages, domheap_pages; + unsigned int i; + const uint32_t ctr =3D READ_CP32(CTR); + + if ( !bootinfo.mem.nr_banks ) + panic("No memory bank\n"); + + /* We only supports instruction caches implementing the IVIPT extensio= n. 
*/ + if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) =3D=3D ICACHE_POLICY_AI= VIVT ) + panic("AIVIVT instruction cache not supported\n"); + + init_pdx(); + + ram_start =3D bootinfo.mem.bank[0].start; + ram_size =3D bootinfo.mem.bank[0].size; + ram_end =3D ram_start + ram_size; + + for ( i =3D 1; i < bootinfo.mem.nr_banks; i++ ) + { + bank_start =3D bootinfo.mem.bank[i].start; + bank_size =3D bootinfo.mem.bank[i].size; + bank_end =3D bank_start + bank_size; + + ram_size =3D ram_size + bank_size; + ram_start =3D min(ram_start,bank_start); + ram_end =3D max(ram_end,bank_end); + } + + total_pages =3D ram_size >> PAGE_SHIFT; + + if ( bootinfo.static_heap ) + { + for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) + { + if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HE= AP ) + continue; + + bank_start =3D bootinfo.reserved_mem.bank[i].start; + bank_size =3D bootinfo.reserved_mem.bank[i].size; + bank_end =3D bank_start + bank_size; + + static_heap_size +=3D bank_size; + static_heap_end =3D max(static_heap_end, bank_end); + } + + heap_pages =3D static_heap_size >> PAGE_SHIFT; + } + else + heap_pages =3D total_pages; + + /* + * If the user has not requested otherwise via the command line + * then locate the xenheap using these constraints: + * + * - must be contiguous + * - must be 32 MiB aligned + * - must not include Xen itself or the boot modules + * - must be at most 1GB or 1/32 the total RAM in the system (or stat= ic + heap if enabled) if less + * - must be at least 32M + * + * We try to allocate the largest xenheap possible within these + * constraints. + */ + if ( opt_xenheap_megabytes ) + xenheap_pages =3D opt_xenheap_megabytes << (20-PAGE_SHIFT); + else + { + xenheap_pages =3D (heap_pages/32 + 0x1fffUL) & ~0x1fffUL; + xenheap_pages =3D max(xenheap_pages, 32UL<<(20-PAGE_SHIFT)); + xenheap_pages =3D min(xenheap_pages, 1UL<<(30-PAGE_SHIFT)); + } + + do + { + e =3D bootinfo.static_heap ? + fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)= ) : + consider_modules(ram_start, ram_end, + pfn_to_paddr(xenheap_pages), + 32<<20, 0); + if ( e ) + break; + + xenheap_pages >>=3D 1; + } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT= ) ); + + if ( ! e ) + panic("Not enough space for xenheap\n"); + + domheap_pages =3D heap_pages - xenheap_pages; + + printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n", + e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages, + opt_xenheap_megabytes ? ", from command-line" : ""); + printk("Dom heap: %lu pages\n", domheap_pages); + + /* + * We need some memory to allocate the page-tables used for the + * directmap mappings. So populate the boot allocator first. + * + * This requires us to set directmap_mfn_{start, end} first so the + * direct-mapped Xenheap region can be avoided. + */ + directmap_mfn_start =3D _mfn((e >> PAGE_SHIFT) - xenheap_pages); + directmap_mfn_end =3D mfn_add(directmap_mfn_start, xenheap_pages); + + populate_boot_allocator(); + + setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages); + + /* Frame table covers all of RAM region, including holes */ + setup_frametable_mappings(ram_start, ram_end); + max_page =3D PFN_DOWN(ram_end); + + /* + * The allocators may need to use map_domain_page() (such as for + * scrubbing pages). So we need to prepare the domheap area first. 
+ */ + if ( !init_domheap_mappings(smp_processor_id()) ) + panic("CPU%u: Unable to prepare the domheap page-tables\n", + smp_processor_id()); + + /* Add xenheap memory that was not already added to the boot allocator= . */ + init_xenheap_pages(mfn_to_maddr(directmap_mfn_start), + mfn_to_maddr(directmap_mfn_end)); + + init_staticmem_pages(); +} + /* * Local variables: * mode: C diff --git a/xen/arch/arm/arm64/mmu/mm.c b/xen/arch/arm/arm64/mmu/mm.c index 36073041ed..c0f166a437 100644 --- a/xen/arch/arm/arm64/mmu/mm.c +++ b/xen/arch/arm/arm64/mmu/mm.c @@ -2,6 +2,7 @@ =20 #include #include +#include =20 #include =20 @@ -152,8 +153,8 @@ void __init switch_ttbr(uint64_t ttbr) } =20 /* Map the region in the directmap area. */ -void __init setup_directmap_mappings(unsigned long base_mfn, - unsigned long nr_mfns) +static void __init setup_directmap_mappings(unsigned long base_mfn, + unsigned long nr_mfns) { int rc; =20 @@ -188,6 +189,52 @@ void __init setup_directmap_mappings(unsigned long bas= e_mfn, panic("Unable to setup the directmap mappings.\n"); } =20 +void __init setup_mm(void) +{ + const struct meminfo *banks =3D &bootinfo.mem; + paddr_t ram_start =3D INVALID_PADDR; + paddr_t ram_end =3D 0; + paddr_t ram_size =3D 0; + unsigned int i; + + init_pdx(); + + /* + * We need some memory to allocate the page-tables used for the direct= map + * mappings. But some regions may contain memory already allocated + * for other uses (e.g. modules, reserved-memory...). + * + * For simplicity, add all the free regions in the boot allocator. + */ + populate_boot_allocator(); + + total_pages =3D 0; + + for ( i =3D 0; i < banks->nr_banks; i++ ) + { + const struct membank *bank =3D &banks->bank[i]; + paddr_t bank_end =3D bank->start + bank->size; + + ram_size =3D ram_size + bank->size; + ram_start =3D min(ram_start, bank->start); + ram_end =3D max(ram_end, bank_end); + + setup_directmap_mappings(PFN_DOWN(bank->start), + PFN_DOWN(bank->size)); + } + + total_pages +=3D ram_size >> PAGE_SHIFT; + + directmap_virt_end =3D XENHEAP_VIRT_START + ram_end - ram_start; + directmap_mfn_start =3D maddr_to_mfn(ram_start); + directmap_mfn_end =3D maddr_to_mfn(ram_end); + + setup_frametable_mappings(ram_start, ram_end); + max_page =3D PFN_DOWN(ram_end); + + init_staticmem_pages(); +} + /* * Local variables: * mode: C diff --git a/xen/arch/arm/include/asm/mmu/mm.h b/xen/arch/arm/include/asm/m= mu/mm.h index 439ae314fd..c5e03a66bf 100644 --- a/xen/arch/arm/include/asm/mmu/mm.h +++ b/xen/arch/arm/include/asm/mmu/mm.h @@ -31,12 +31,6 @@ void dump_pt_walk(paddr_t ttbr, paddr_t addr, =20 /* Switch to a new root page-tables */ extern void switch_ttbr(uint64_t ttbr); -/* - * For Arm32, set up the direct-mapped xenheap: up to 1GB of contiguous, - * always-mapped memory. Base must be 32MB aligned and size a multiple of = 32MB. - * For Arm64, map the region in the directmap area. 
- */ -extern void setup_directmap_mappings(unsigned long base_mfn, unsigned long= nr_mfns); =20 #endif /* __ARM_MMU_MM_H__ */ =20 diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/se= tup.h index b8866c20f4..863e9b88cd 100644 --- a/xen/arch/arm/include/asm/setup.h +++ b/xen/arch/arm/include/asm/setup.h @@ -159,6 +159,11 @@ struct bootcmdline *boot_cmdline_find_by_kind(bootmodu= le_kind kind); struct bootcmdline * boot_cmdline_find_by_name(const char *name); const char *boot_module_kind_as_string(bootmodule_kind kind); =20 +void init_pdx(void); +void init_staticmem_pages(void); +void populate_boot_allocator(void); +void setup_mm(void); + extern uint32_t hyp_traps_vector[]; void init_traps(void); =20 diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index db748839d3..5983546e64 100644 --- a/xen/arch/arm/setup.c +++ b/xen/arch/arm/setup.c @@ -58,11 +58,6 @@ struct cpuinfo_arm __read_mostly system_cpuinfo; bool __read_mostly acpi_disabled; #endif =20 -#ifdef CONFIG_ARM_32 -static unsigned long opt_xenheap_megabytes __initdata; -integer_param("xenheap_megabytes", opt_xenheap_megabytes); -#endif - domid_t __read_mostly max_init_domid; =20 static __used void init_done(void) @@ -547,138 +542,6 @@ static void * __init relocate_fdt(paddr_t dtb_paddr, = size_t dtb_size) return fdt; } =20 -#ifdef CONFIG_ARM_32 -/* - * Returns the end address of the highest region in the range s..e - * with required size and alignment that does not conflict with the - * modules from first_mod to nr_modules. - * - * For non-recursive callers first_mod should normally be 0 (all - * modules and Xen itself) or 1 (all modules but not Xen). - */ -static paddr_t __init consider_modules(paddr_t s, paddr_t e, - uint32_t size, paddr_t align, - int first_mod) -{ - const struct bootmodules *mi =3D &bootinfo.modules; - int i; - int nr; - - s =3D (s+align-1) & ~(align-1); - e =3D e & ~(align-1); - - if ( s > e || e - s < size ) - return 0; - - /* First check the boot modules */ - for ( i =3D first_mod; i < mi->nr_mods; i++ ) - { - paddr_t mod_s =3D mi->module[i].start; - paddr_t mod_e =3D mod_s + mi->module[i].size; - - if ( s < mod_e && mod_s < e ) - { - mod_e =3D consider_modules(mod_e, e, size, align, i+1); - if ( mod_e ) - return mod_e; - - return consider_modules(s, mod_s, size, align, i+1); - } - } - - /* Now check any fdt reserved areas. */ - - nr =3D fdt_num_mem_rsv(device_tree_flattened); - - for ( ; i < mi->nr_mods + nr; i++ ) - { - paddr_t mod_s, mod_e; - - if ( fdt_get_mem_rsv_paddr(device_tree_flattened, - i - mi->nr_mods, - &mod_s, &mod_e ) < 0 ) - /* If we can't read it, pretend it doesn't exist... */ - continue; - - /* fdt_get_mem_rsv_paddr returns length */ - mod_e +=3D mod_s; - - if ( s < mod_e && mod_s < e ) - { - mod_e =3D consider_modules(mod_e, e, size, align, i+1); - if ( mod_e ) - return mod_e; - - return consider_modules(s, mod_s, size, align, i+1); - } - } - - /* - * i is the current bootmodule we are evaluating, across all - * possible kinds of bootmodules. - * - * When retrieving the corresponding reserved-memory addresses, we - * need to index the bootinfo.reserved_mem bank starting from 0, and - * only counting the reserved-memory modules. Hence, we need to use - * i - nr. 
- */ - nr +=3D mi->nr_mods; - for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ ) - { - paddr_t r_s =3D bootinfo.reserved_mem.bank[i - nr].start; - paddr_t r_e =3D r_s + bootinfo.reserved_mem.bank[i - nr].size; - - if ( s < r_e && r_s < e ) - { - r_e =3D consider_modules(r_e, e, size, align, i + 1); - if ( r_e ) - return r_e; - - return consider_modules(s, r_s, size, align, i + 1); - } - } - return e; -} - -/* - * Find a contiguous region that fits in the static heap region with - * required size and alignment, and return the end address of the region - * if found otherwise 0. - */ -static paddr_t __init fit_xenheap_in_static_heap(uint32_t size, paddr_t al= ign) -{ - unsigned int i; - paddr_t end =3D 0, aligned_start, aligned_end; - paddr_t bank_start, bank_size, bank_end; - - for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) - { - if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HEAP ) - continue; - - bank_start =3D bootinfo.reserved_mem.bank[i].start; - bank_size =3D bootinfo.reserved_mem.bank[i].size; - bank_end =3D bank_start + bank_size; - - if ( bank_size < size ) - continue; - - aligned_end =3D bank_end & ~(align - 1); - aligned_start =3D (aligned_end - size) & ~(align - 1); - - if ( aligned_start > bank_start ) - /* - * Allocate the xenheap as high as possible to keep low-memory - * available (assuming the admin supplied region below 4GB) - * for other use (e.g. domain memory allocation). - */ - end =3D max(end, aligned_end); - } - - return end; -} -#endif - /* * Return the end of the non-module region starting at s. In other * words return s the start of the next modules after s. @@ -713,7 +576,7 @@ static paddr_t __init next_module(paddr_t s, paddr_t *e= nd) return lowest; } =20 -static void __init init_pdx(void) +void __init init_pdx(void) { paddr_t bank_start, bank_size, bank_end; =20 @@ -758,7 +621,7 @@ static void __init init_pdx(void) } =20 /* Static memory initialization */ -static void __init init_staticmem_pages(void) +void __init init_staticmem_pages(void) { #ifdef CONFIG_STATIC_MEMORY unsigned int bank; @@ -792,7 +655,7 @@ static void __init init_staticmem_pages(void) * allocator with the corresponding regions only, but with Xenheap excluded * on arm32. */ -static void __init populate_boot_allocator(void) +void __init populate_boot_allocator(void) { unsigned int i; const struct meminfo *banks =3D &bootinfo.mem; @@ -861,187 +724,6 @@ static void __init populate_boot_allocator(void) } } =20 -#ifdef CONFIG_ARM_32 -static void __init setup_mm(void) -{ - paddr_t ram_start, ram_end, ram_size, e, bank_start, bank_end, bank_si= ze; - paddr_t static_heap_end =3D 0, static_heap_size =3D 0; - unsigned long heap_pages, xenheap_pages, domheap_pages; - unsigned int i; - const uint32_t ctr =3D READ_CP32(CTR); - - if ( !bootinfo.mem.nr_banks ) - panic("No memory bank\n"); - - /* We only supports instruction caches implementing the IVIPT extensio= n. 
*/ - if ( ((ctr >> CTR_L1IP_SHIFT) & CTR_L1IP_MASK) =3D=3D ICACHE_POLICY_AI= VIVT ) - panic("AIVIVT instruction cache not supported\n"); - - init_pdx(); - - ram_start =3D bootinfo.mem.bank[0].start; - ram_size =3D bootinfo.mem.bank[0].size; - ram_end =3D ram_start + ram_size; - - for ( i =3D 1; i < bootinfo.mem.nr_banks; i++ ) - { - bank_start =3D bootinfo.mem.bank[i].start; - bank_size =3D bootinfo.mem.bank[i].size; - bank_end =3D bank_start + bank_size; - - ram_size =3D ram_size + bank_size; - ram_start =3D min(ram_start,bank_start); - ram_end =3D max(ram_end,bank_end); - } - - total_pages =3D ram_size >> PAGE_SHIFT; - - if ( bootinfo.static_heap ) - { - for ( i =3D 0 ; i < bootinfo.reserved_mem.nr_banks; i++ ) - { - if ( bootinfo.reserved_mem.bank[i].type !=3D MEMBANK_STATIC_HE= AP ) - continue; - - bank_start =3D bootinfo.reserved_mem.bank[i].start; - bank_size =3D bootinfo.reserved_mem.bank[i].size; - bank_end =3D bank_start + bank_size; - - static_heap_size +=3D bank_size; - static_heap_end =3D max(static_heap_end, bank_end); - } - - heap_pages =3D static_heap_size >> PAGE_SHIFT; - } - else - heap_pages =3D total_pages; - - /* - * If the user has not requested otherwise via the command line - * then locate the xenheap using these constraints: - * - * - must be contiguous - * - must be 32 MiB aligned - * - must not include Xen itself or the boot modules - * - must be at most 1GB or 1/32 the total RAM in the system (or stat= ic - heap if enabled) if less - * - must be at least 32M - * - * We try to allocate the largest xenheap possible within these - * constraints. - */ - if ( opt_xenheap_megabytes ) - xenheap_pages =3D opt_xenheap_megabytes << (20-PAGE_SHIFT); - else - { - xenheap_pages =3D (heap_pages/32 + 0x1fffUL) & ~0x1fffUL; - xenheap_pages =3D max(xenheap_pages, 32UL<<(20-PAGE_SHIFT)); - xenheap_pages =3D min(xenheap_pages, 1UL<<(30-PAGE_SHIFT)); - } - - do - { - e =3D bootinfo.static_heap ? - fit_xenheap_in_static_heap(pfn_to_paddr(xenheap_pages), MB(32)= ) : - consider_modules(ram_start, ram_end, - pfn_to_paddr(xenheap_pages), - 32<<20, 0); - if ( e ) - break; - - xenheap_pages >>=3D 1; - } while ( !opt_xenheap_megabytes && xenheap_pages > 32<<(20-PAGE_SHIFT= ) ); - - if ( ! e ) - panic("Not enough space for xenheap\n"); - - domheap_pages =3D heap_pages - xenheap_pages; - - printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages%s)\n", - e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages, - opt_xenheap_megabytes ? ", from command-line" : ""); - printk("Dom heap: %lu pages\n", domheap_pages); - - /* - * We need some memory to allocate the page-tables used for the - * directmap mappings. So populate the boot allocator first. - * - * This requires us to set directmap_mfn_{start, end} first so the - * direct-mapped Xenheap region can be avoided. - */ - directmap_mfn_start =3D _mfn((e >> PAGE_SHIFT) - xenheap_pages); - directmap_mfn_end =3D mfn_add(directmap_mfn_start, xenheap_pages); - - populate_boot_allocator(); - - setup_directmap_mappings(mfn_x(directmap_mfn_start), xenheap_pages); - - /* Frame table covers all of RAM region, including holes */ - setup_frametable_mappings(ram_start, ram_end); - max_page =3D PFN_DOWN(ram_end); - - /* - * The allocators may need to use map_domain_page() (such as for - * scrubbing pages). So we need to prepare the domheap area first. 
- */ - if ( !init_domheap_mappings(smp_processor_id()) ) - panic("CPU%u: Unable to prepare the domheap page-tables\n", - smp_processor_id()); - - /* Add xenheap memory that was not already added to the boot allocator= . */ - init_xenheap_pages(mfn_to_maddr(directmap_mfn_start), - mfn_to_maddr(directmap_mfn_end)); - - init_staticmem_pages(); -} -#else /* CONFIG_ARM_64 */ -static void __init setup_mm(void) -{ - const struct meminfo *banks =3D &bootinfo.mem; - paddr_t ram_start =3D INVALID_PADDR; - paddr_t ram_end =3D 0; - paddr_t ram_size =3D 0; - unsigned int i; - - init_pdx(); - - /* - * We need some memory to allocate the page-tables used for the direct= map - * mappings. But some regions may contain memory already allocated - * for other uses (e.g. modules, reserved-memory...). - * - * For simplicity, add all the free regions in the boot allocator. - */ - populate_boot_allocator(); - - total_pages =3D 0; - - for ( i =3D 0; i < banks->nr_banks; i++ ) - { - const struct membank *bank =3D &banks->bank[i]; - paddr_t bank_end =3D bank->start + bank->size; - - ram_size =3D ram_size + bank->size; - ram_start =3D min(ram_start, bank->start); - ram_end =3D max(ram_end, bank_end); - - setup_directmap_mappings(PFN_DOWN(bank->start), - PFN_DOWN(bank->size)); - } - - total_pages +=3D ram_size >> PAGE_SHIFT; - - directmap_virt_end =3D XENHEAP_VIRT_START + ram_end - ram_start; - directmap_mfn_start =3D maddr_to_mfn(ram_start); - directmap_mfn_end =3D maddr_to_mfn(ram_end); - - setup_frametable_mappings(ram_start, ram_end); - max_page =3D PFN_DOWN(ram_end); - - init_staticmem_pages(); -} -#endif - static bool __init is_dom0less_mode(void) { struct bootmodules *mods =3D &bootinfo.modules; --=20 2.25.1
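[Editorial illustration, not part of the patch above.] The commit message notes that setup_mm() and the init_pdx()/populate_boot_allocator()/init_staticmem_pages() helpers are being made public with a future MPU implementation in mind. The following is a minimal sketch of how a hypothetical MPU-side file (the name xen/arch/arm/mpu/mm.c, the bank walk and the omission of any heap carve-out are assumptions for illustration only) could consume the helpers this patch exports via <asm/setup.h>; it is not existing Xen code.

/*
 * Illustrative sketch only. An MPU system has no directmap, and its
 * xenheap setup (left out here) would differ from both MMU variants,
 * as the commit message anticipates.
 */
#include <xen/init.h>
#include <xen/mm.h>
#include <asm/setup.h>

void __init setup_mm(void)
{
    paddr_t ram_size = 0;
    unsigned int i;

    init_pdx();

    /* Hand all free RAM regions to the boot allocator, as on arm64. */
    populate_boot_allocator();

    /* Account for every RAM bank reported by the bootloader/DT. */
    for ( i = 0; i < bootinfo.mem.nr_banks; i++ )
        ram_size += bootinfo.mem.bank[i].size;

    total_pages = ram_size >> PAGE_SHIFT;

    /* MPU-specific heap initialisation would go here. */

    init_staticmem_pages();
}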