From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, Julien Grall, Stefano Stabellini, Bertrand Marquis, Volodymyr Babchuk, Konrad Rzeszutek Wilk, Ross Lagerwall
Subject: [PATCH v2 1/5] xen/arm: Remove most of the *_VIRT_END defines
Date: Wed, 20 Jul 2022 19:44:55 +0100
Message-Id: <20220720184459.51582-2-julien@xen.org>
In-Reply-To: <20220720184459.51582-1-julien@xen.org>
References: <20220720184459.51582-1-julien@xen.org>

From: Julien Grall

At the moment, *_VIRT_END may either point to the address after the end
or to the last address of the region. The lack of consistency makes it
quite difficult to reason about them. Furthermore, there is a risk of
overflow in the case where the address points past the end. I am not
aware of any such cases, so this is only a latent bug.

Start to solve the problem by removing all the *_VIRT_END defines that
are exclusively used by the Arm code, and add *_VIRT_SIZE where it is
not already present.

Take the opportunity to rename BOOT_FDT_SLOT_SIZE to BOOT_FDT_VIRT_SIZE
for better consistency, and to use _AT(vaddr_t, ). Also take the
opportunity to fix the coding style of the comment touched in mm.c.

Signed-off-by: Julien Grall
Reviewed-by: Bertrand Marquis
Reviewed-by: Luca Fancellu
Tested-by: Luca Fancellu
Tested-by: Bertrand Marquis

---

I noticed that a few functions in Xen expect [start, end[. This is
risky, as we may end up with end < start if the region is defined right
at the top of the address space. I haven't yet tackled this issue, but
I would at least like to get rid of *_VIRT_END.

This was originally sent separately (let's call it v0).

Changes in v2:
    - Correct the check in domain_page_map_to_mfn()

Changes in v1:
    - Mention the coding style change.
---
 xen/arch/arm/include/asm/config.h | 18 ++++++++----------
 xen/arch/arm/livepatch.c          |  2 +-
 xen/arch/arm/mm.c                 | 13 ++++++++-----
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 3e2a55a91058..66db618b34e7 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -111,12 +111,11 @@
 #define FIXMAP_ADDR(n)        (_AT(vaddr_t,0x00400000) + (n) * PAGE_SIZE)
 
 #define BOOT_FDT_VIRT_START    _AT(vaddr_t,0x00600000)
-#define BOOT_FDT_SLOT_SIZE     MB(4)
-#define BOOT_FDT_VIRT_END      (BOOT_FDT_VIRT_START + BOOT_FDT_SLOT_SIZE)
+#define BOOT_FDT_VIRT_SIZE     _AT(vaddr_t, MB(4))
 
 #ifdef CONFIG_LIVEPATCH
 #define LIVEPATCH_VMAP_START   _AT(vaddr_t,0x00a00000)
-#define LIVEPATCH_VMAP_END     (LIVEPATCH_VMAP_START + MB(2))
+#define LIVEPATCH_VMAP_SIZE    _AT(vaddr_t, MB(2))
 #endif
 
 #define HYPERVISOR_VIRT_START  XEN_VIRT_START
@@ -132,18 +131,18 @@
 #define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define VMAP_VIRT_START        _AT(vaddr_t,0x10000000)
+#define VMAP_VIRT_SIZE         _AT(vaddr_t, GB(1) - MB(256))
 
 #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
-#define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
-#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
-#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
+#define XENHEAP_VIRT_SIZE      _AT(vaddr_t, GB(1))
 
-#define VMAP_VIRT_END          XENHEAP_VIRT_START
+#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
+#define DOMHEAP_VIRT_SIZE      _AT(vaddr_t, GB(2))
 
 #define DOMHEAP_ENTRIES        1024  /* 1024 2MB mapping slots */
 
 /* Number of domheap pagetable pages required at the second level (2MB mappings) */
-#define DOMHEAP_SECOND_PAGES ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START + 1) >> FIRST_SHIFT)
+#define DOMHEAP_SECOND_PAGES (DOMHEAP_VIRT_SIZE >> FIRST_SHIFT)
 
 #else /* ARM_64 */
 
@@ -152,12 +151,11 @@
 #define SLOT0_ENTRY_SIZE  SLOT0(1)
 
 #define VMAP_VIRT_START  GB(1)
-#define VMAP_VIRT_END    (VMAP_VIRT_START + GB(1))
+#define VMAP_VIRT_SIZE   GB(1)
 
 #define FRAMETABLE_VIRT_START  GB(32)
 #define FRAMETABLE_SIZE        GB(32)
 #define FRAMETABLE_NR     (FRAMETABLE_SIZE / sizeof(*frame_table))
-#define FRAMETABLE_VIRT_END    (FRAMETABLE_VIRT_START + FRAMETABLE_SIZE - 1)
 
 #define DIRECTMAP_VIRT_START   SLOT0(256)
 #define DIRECTMAP_SIZE         (SLOT0_ENTRY_SIZE * (265-256))
diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
index 75e8adcfd6a1..57abc746e60b 100644
--- a/xen/arch/arm/livepatch.c
+++ b/xen/arch/arm/livepatch.c
@@ -175,7 +175,7 @@ void __init arch_livepatch_init(void)
     void *start, *end;
 
     start = (void *)LIVEPATCH_VMAP_START;
-    end = (void *)LIVEPATCH_VMAP_END;
+    end = start + LIVEPATCH_VMAP_SIZE;
 
     vm_init_type(VMAP_XEN, start, end);
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 56fd0845861f..0177bc6b34d2 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -128,9 +128,11 @@ static DEFINE_PAGE_TABLE(xen_first);
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
 static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
 #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
-/* xen_dommap == pages used by map_domain_page, these pages contain
+/*
+ * xen_dommap == pages used by map_domain_page, these pages contain
  * the second level pagetables which map the domheap region
- * DOMHEAP_VIRT_START...DOMHEAP_VIRT_END in 2MB chunks. */
+ * starting at DOMHEAP_VIRT_START in 2MB chunks.
+ */
 static DEFINE_PER_CPU(lpae_t *, xen_dommap);
 /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
 static DEFINE_PAGE_TABLE(cpu0_pgtable);
@@ -476,7 +478,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
     int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
     unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
 
-    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
+    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
         return virt_to_mfn(va);
 
     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
@@ -570,7 +572,8 @@ void __init remove_early_mappings(void)
     int rc;
 
     /* destroy the _PAGE_BLOCK mapping */
-    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
+    rc = modify_xen_mappings(BOOT_FDT_VIRT_START,
+                             BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE,
                              _PAGE_BLOCK);
     BUG_ON(rc);
 }
@@ -850,7 +853,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
 
 void *__init arch_vmap_virt_end(void)
 {
-    return (void *)VMAP_VIRT_END;
+    return (void *)(VMAP_VIRT_START + VMAP_VIRT_SIZE);
 }
 
 /*
-- 
2.32.0
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, Julien Grall, Stefano Stabellini, Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH v2 2/5] xen/arm32: mm: Consolidate the domheap mappings initialization
Date: Wed, 20 Jul 2022 19:44:56 +0100
Message-Id: <20220720184459.51582-3-julien@xen.org>
In-Reply-To: <20220720184459.51582-1-julien@xen.org>
References: <20220720184459.51582-1-julien@xen.org>

From: Julien Grall

At the moment, the domheap mappings initialization is done separately
for the boot CPU and secondary CPUs. The main difference is that for
the former the pages are part of the Xen binary, whilst for the latter
they are dynamically allocated.

It would be good to have a single helper so it is easier to rework how
the domheap is initialized.

For CPU0, we still need to use pre-allocated pages because the
allocators may use domain_map_page(), so we need to have the domheap
area ready first. But we can still delay the initialization to
setup_mm().

Introduce a new helper init_domheap_mappings() that will be called
from setup_mm() for the boot CPU and from init_secondary_pagetables()
for secondary CPUs.

Signed-off-by: Julien Grall
Reviewed-by: Bertrand Marquis
Reviewed-by: Luca Fancellu
Tested-by: Bertrand Marquis
Tested-by: Luca Fancellu

---

Changes in v2:
    - Fix function name in the commit message
    - Remove duplicated 'been' in the comment

---
 xen/arch/arm/include/asm/arm32/mm.h |  2 +
 xen/arch/arm/mm.c                   | 92 +++++++++++++++++++----------
 xen/arch/arm/setup.c                |  8 +++
 3 files changed, 71 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 6b039d9ceaa2..575373aeb985 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -10,6 +10,8 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
     return false;
 }
 
+bool init_domheap_mappings(unsigned int cpu);
+
 #endif /* __ARM_ARM32_MM_H__ */
 
 /*
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0177bc6b34d2..9311f3530066 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -372,6 +372,58 @@ void clear_fixmap(unsigned int map)
 }
 
 #ifdef CONFIG_DOMAIN_PAGE
+/*
+ * Prepare the area that will be used to map domheap pages. They are
+ * mapped in 2MB chunks, so we need to allocate the page-tables up to
+ * the 2nd level.
+ *
+ * The caller should make sure the root page-table for @cpu has been
+ * allocated.
+ */
+bool init_domheap_mappings(unsigned int cpu)
+{
+    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
+    lpae_t *root = per_cpu(xen_pgtable, cpu);
+    unsigned int i, first_idx;
+    lpae_t *domheap;
+    mfn_t mfn;
+
+    ASSERT(root);
+    ASSERT(!per_cpu(xen_dommap, cpu));
+
+    /*
+     * The domheap for cpu0 is initialized before the heap is
+     * initialized. So we need to use pre-allocated pages.
+     */
+    if ( !cpu )
+        domheap = cpu0_dommap;
+    else
+        domheap = alloc_xenheap_pages(order, 0);
+
+    if ( !domheap )
+        return false;
+
+    /* Ensure the domheap has no stray mappings */
+    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
+
+    /*
+     * Update the first level mapping to reference the local CPUs
+     * domheap mapping pages.
+     */
+    mfn = virt_to_mfn(domheap);
+    first_idx = first_table_offset(DOMHEAP_VIRT_START);
+    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    {
+        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
+        pte.pt.table = 1;
+        write_pte(&root[first_idx + i], pte);
+    }
+
+    per_cpu(xen_dommap, cpu) = domheap;
+
+    return true;
+}
+
 void *map_domain_page_global(mfn_t mfn)
 {
     return vmap(&mfn, 1);
@@ -633,16 +685,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
         p[i].pt.xn = 0;
     }
 
-#ifdef CONFIG_ARM_32
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
-    {
-        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)]
-            = pte_of_xenaddr((uintptr_t)(cpu0_dommap + i * XEN_PT_LPAE_ENTRIES));
-        p[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)].pt.table = 1;
-    }
-#endif
-
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
     {
@@ -686,7 +728,6 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
 
 #ifdef CONFIG_ARM_32
     per_cpu(xen_pgtable, 0) = cpu0_pgtable;
-    per_cpu(xen_dommap, 0) = cpu0_dommap;
 #endif
 }
 
@@ -719,39 +760,28 @@ int init_secondary_pagetables(int cpu)
 #else
 int init_secondary_pagetables(int cpu)
 {
-    lpae_t *first, *domheap, pte;
-    int i;
+    lpae_t *first;
 
     first = alloc_xenheap_page(); /* root == first level on 32-bit 3-level trie */
-    domheap = alloc_xenheap_pages(get_order_from_pages(DOMHEAP_SECOND_PAGES), 0);
 
-    if ( domheap == NULL || first == NULL )
+    if ( !first )
     {
-        printk("Not enough free memory for secondary CPU%d pagetables\n", cpu);
-        free_xenheap_pages(domheap, get_order_from_pages(DOMHEAP_SECOND_PAGES));
-        free_xenheap_page(first);
+        printk("CPU%u: Unable to allocate the first page-table\n", cpu);
        return -ENOMEM;
     }
 
     /* Initialise root pagetable from root of boot tables */
     memcpy(first, cpu0_pgtable, PAGE_SIZE);
+    per_cpu(xen_pgtable, cpu) = first;
 
-    /* Ensure the domheap has no stray mappings */
-    memset(domheap, 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
-
-    /* Update the first level mapping to reference the local CPUs
-     * domheap mapping pages. */
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    if ( !init_domheap_mappings(cpu) )
     {
-        pte = mfn_to_xen_entry(virt_to_mfn(domheap + i * XEN_PT_LPAE_ENTRIES),
-                               MT_NORMAL);
-        pte.pt.table = 1;
-        write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
+        printk("CPU%u: Unable to prepare the domheap page-tables\n", cpu);
+        per_cpu(xen_pgtable, cpu) = NULL;
+        free_xenheap_page(first);
+        return -ENOMEM;
     }
 
-    per_cpu(xen_pgtable, cpu) = first;
-    per_cpu(xen_dommap, cpu) = domheap;
-
     clear_boot_pagetables();
 
     /* Set init_ttbr for this CPU coming up */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 85ff956ec2e3..068e84b10335 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -783,6 +783,14 @@ static void __init setup_mm(void)
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
 
+    /*
+     * The allocators may need to use map_domain_page() (such as for
+     * scrubbing pages). So we need to prepare the domheap area first.
+     */
+    if ( !init_domheap_mappings(smp_processor_id()) )
+        panic("CPU%u: Unable to prepare the domheap page-tables\n",
+              smp_processor_id());
+
     /* Add xenheap memory that was not already added to the boot allocator. */
     init_xenheap_pages(mfn_to_maddr(xenheap_mfn_start),
                        mfn_to_maddr(xenheap_mfn_end));
-- 
2.32.0
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, Julien Grall, Stefano Stabellini, Bertrand Marquis, Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné
Subject: [PATCH v2 3/5] xen: Rename CONFIG_DOMAIN_PAGE to CONFIG_ARCH_MAP_DOMAIN_PAGE and...
Date: Wed, 20 Jul 2022 19:44:57 +0100
Message-Id: <20220720184459.51582-4-julien@xen.org>
In-Reply-To: <20220720184459.51582-1-julien@xen.org>
References: <20220720184459.51582-1-julien@xen.org>

From: Julien Grall

... move it to Kconfig.

The define CONFIG_DOMAIN_PAGE indicates whether the architecture
provides helpers to map/unmap a domain page. Rename the define to
CONFIG_ARCH_MAP_DOMAIN_PAGE so it is clearer that this will not remove
support for domain pages (this is not a concept that Xen can get away
with).

Take the opportunity to move CONFIG_ARCH_MAP_DOMAIN_PAGE to Kconfig,
as it will soon be necessary to use it in the Makefile.

Signed-off-by: Julien Grall
Reviewed-by: Bertrand Marquis # arm part
Reviewed-by: Jan Beulich
Tested-by: Bertrand Marquis

---

Changes in v2:
    - New patch

---
 xen/arch/arm/Kconfig              | 1 +
 xen/arch/arm/include/asm/config.h | 1 -
 xen/arch/arm/mm.c                 | 2 +-
 xen/arch/x86/Kconfig              | 1 +
 xen/arch/x86/include/asm/config.h | 1 -
 xen/common/Kconfig                | 3 +++
 xen/include/xen/domain_page.h     | 6 +++---
 7 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index be9eff014120..33e004d702bf 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -1,6 +1,7 @@
 config ARM_32
 	def_bool y
 	depends on "$(ARCH)" = "arm32"
+	select ARCH_MAP_DOMAIN_PAGE
 
 config ARM_64
 	def_bool y
diff --git a/xen/arch/arm/include/asm/config.h b/xen/arch/arm/include/asm/config.h
index 66db618b34e7..2fafb9f2283c 100644
--- a/xen/arch/arm/include/asm/config.h
+++ b/xen/arch/arm/include/asm/config.h
@@ -122,7 +122,6 @@
 
 #ifdef CONFIG_ARM_32
 
-#define CONFIG_DOMAIN_PAGE 1
 #define CONFIG_SEPARATE_XENHEAP 1
 
 #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9311f3530066..7a722d6c86c6 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -371,7 +371,7 @@ void clear_fixmap(unsigned int map)
     BUG_ON(res != 0);
 }
 
-#ifdef CONFIG_DOMAIN_PAGE
+#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
 /*
  * Prepare the area that will be used to map domheap pages. They are
  * mapped in 2MB chunks, so we need to allocate the page-tables up to
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 6bed72b79141..6a7825f4ba3c 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -8,6 +8,7 @@ config X86
 	select ACPI_LEGACY_TABLES_LOOKUP
 	select ACPI_NUMA
 	select ALTERNATIVE_CALL
+	select ARCH_MAP_DOMAIN_PAGE
 	select ARCH_SUPPORTS_INT128
 	select CORE_PARKING
 	select HAS_ALTERNATIVE
diff --git a/xen/arch/x86/include/asm/config.h b/xen/arch/x86/include/asm/config.h
index 07bcd158314b..fbc4bb3416bd 100644
--- a/xen/arch/x86/include/asm/config.h
+++ b/xen/arch/x86/include/asm/config.h
@@ -22,7 +22,6 @@
 #define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 1
 #define CONFIG_DISCONTIGMEM 1
 #define CONFIG_NUMA_EMU 1
-#define CONFIG_DOMAIN_PAGE 1
 
 #define CONFIG_PAGEALLOC_MAX_ORDER (2 * PAGETABLE_ORDER)
 #define CONFIG_DOMU_MAX_ORDER      PAGETABLE_ORDER
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 41a67891bcc8..f1ea3199c8eb 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -25,6 +25,9 @@ config GRANT_TABLE
 config ALTERNATIVE_CALL
 	bool
 
+config ARCH_MAP_DOMAIN_PAGE
+	bool
+
 config HAS_ALTERNATIVE
 	bool
 
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index a182d33b6701..149b217b9619 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -17,7 +17,7 @@
 void clear_domain_page(mfn_t mfn);
 void copy_domain_page(mfn_t dst, const mfn_t src);
 
-#ifdef CONFIG_DOMAIN_PAGE
+#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
 
 /*
  * Map a given page frame, returning the mapped virtual address. The page is
@@ -51,7 +51,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
     return map_domain_page_global(page_to_mfn(pg));
 }
 
-#else /* !CONFIG_DOMAIN_PAGE */
+#else /* !CONFIG_ARCH_MAP_DOMAIN_PAGE */
 
 #define map_domain_page(mfn)     __mfn_to_virt(mfn_x(mfn))
 #define __map_domain_page(pg)    page_to_virt(pg)
@@ -70,7 +70,7 @@ static inline void *__map_domain_page_global(const struct page_info *pg)
 
 static inline void unmap_domain_page_global(const void *va) {};
 
-#endif /* !CONFIG_DOMAIN_PAGE */
+#endif /* !CONFIG_ARCH_MAP_DOMAIN_PAGE */
 
 #define UNMAP_DOMAIN_PAGE(p) do {   \
     unmap_domain_page(p);           \
-- 
2.32.0
of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1658342731732620.9113377910074; Wed, 20 Jul 2022 11:45:31 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.372158.604044 (Exim 4.92) (envelope-from ) id 1oEEh5-0004Vf-9L; Wed, 20 Jul 2022 18:45:11 +0000 Received: by outflank-mailman (output) from mailman id 372158.604044; Wed, 20 Jul 2022 18:45:11 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1oEEh5-0004Ua-2L; Wed, 20 Jul 2022 18:45:11 +0000 Received: by outflank-mailman (input) for mailman id 372158; Wed, 20 Jul 2022 18:45:10 +0000 Received: from mail.xenproject.org ([104.130.215.37]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1oEEh3-0004FA-UJ for xen-devel@lists.xenproject.org; Wed, 20 Jul 2022 18:45:09 +0000 Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1oEEh3-00078B-Hq; Wed, 20 Jul 2022 18:45:09 +0000 Received: from 54-240-197-224.amazon.com ([54.240.197.224] helo=dev-dsk-jgrall-1b-035652ec.eu-west-1.amazon.com) by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92) (envelope-from ) id 1oEEh3-000309-74; Wed, 20 Jul 2022 18:45:09 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References: 
In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=5eiQZQZs5fJEBjxXTKP6HOuojy96rusV1MzRtnzOfTg=; b=EtDWPMqUAWteB5yasJHq/KXHVB SXLNrFdO5At3cb8XkIxePvaC6EYvRA5p7+aRrP0zMyO0HaQ2CUkn7IkExZJRyunsolSeGzQLEmWYR SIMgb4bKXPJZuPVtsgbQpLFgx5lqi3HMYwMJRheXbdM5ROoY9bgGwhQ4MMWsv7JxQ3fk=; From: Julien Grall To: xen-devel@lists.xenproject.org Cc: julien@xen.org, Julien Grall , Stefano Stabellini , Bertrand Marquis , Volodymyr Babchuk , Andrew Cooper , George Dunlap , Jan Beulich , Wei Liu Subject: [PATCH v2 4/5] xen/arm: mm: Move domain_{,un}map_* helpers in a separate file Date: Wed, 20 Jul 2022 19:44:58 +0100 Message-Id: <20220720184459.51582-5-julien@xen.org> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220720184459.51582-1-julien@xen.org> References: <20220720184459.51582-1-julien@xen.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @xen.org) X-ZM-MESSAGEID: 1658342732660100009 Content-Type: text/plain; charset="utf-8" From: Julien Grall The file xen/arch/mm.c has been growing quite a lot. It now contains various independent part of the MM subsytem. One of them is the helpers to map/unmap a page which is only used by arm32 and protected by CONFIG_ARCH_MAP_DOMAIN_PAGE. Move them in a new file xen/arch/arm/domain_page.c. 
Signed-off-by: Julien Grall
Reviewed-by: Bertrand Marquis
Tested-by: Bertrand Marquis

---
Changes in v2:
    - Move CONFIG_* to Kconfig is now in a separate patch
---
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/domain_page.c          | 193 +++++++++++++++++++++++++++
 xen/arch/arm/include/asm/arm32/mm.h |   6 +
 xen/arch/arm/include/asm/lpae.h     |  17 +++
 xen/arch/arm/mm.c                   | 198 +---------------------------
 xen/common/Kconfig                  |   3 +
 6 files changed, 222 insertions(+), 196 deletions(-)
 create mode 100644 xen/arch/arm/domain_page.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index bb7a6151c13c..4d076b278b10 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -17,6 +17,7 @@ obj-y += device.o
 obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
+obj-$(CONFIG_ARCH_MAP_DOMAIN_PAGE) += domain_page.o
 obj-y += domctl.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += efi/

diff --git a/xen/arch/arm/domain_page.c b/xen/arch/arm/domain_page.c
new file mode 100644
index 000000000000..63e97730cf57
--- /dev/null
+++ b/xen/arch/arm/domain_page.c
@@ -0,0 +1,193 @@
+#include
+#include
+#include
+
+/* Override macros from asm/page.h to make them work with mfn_t */
+#undef virt_to_mfn
+#define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
+
+/* cpu0's domheap page tables */
+static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
+
+/*
+ * xen_dommap == pages used by map_domain_page, these pages contain
+ * the second level pagetables which map the domheap region
+ * starting at DOMHEAP_VIRT_START in 2MB chunks.
+ */
+static DEFINE_PER_CPU(lpae_t *, xen_dommap);
+
+/*
+ * Prepare the area that will be used to map domheap pages. They are
+ * mapped in 2MB chunks, so we need to allocate the page-tables up to
+ * the 2nd level.
+ *
+ * The caller should make sure the root page-table for @cpu has been
+ * allocated.
+ */
+bool init_domheap_mappings(unsigned int cpu)
+{
+    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
+    lpae_t *root = per_cpu(xen_pgtable, cpu);
+    unsigned int i, first_idx;
+    lpae_t *domheap;
+    mfn_t mfn;
+
+    ASSERT(root);
+    ASSERT(!per_cpu(xen_dommap, cpu));
+
+    /*
+     * The domheap for cpu0 is before the heap is initialized. So we
+     * need to use pre-allocated pages.
+     */
+    if ( !cpu )
+        domheap = cpu0_dommap;
+    else
+        domheap = alloc_xenheap_pages(order, 0);
+
+    if ( !domheap )
+        return false;
+
+    /* Ensure the domheap has no stray mappings */
+    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
+
+    /*
+     * Update the first level mapping to reference the local CPUs
+     * domheap mapping pages.
+     */
+    mfn = virt_to_mfn(domheap);
+    first_idx = first_table_offset(DOMHEAP_VIRT_START);
+    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
+    {
+        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
+        pte.pt.table = 1;
+        write_pte(&root[first_idx + i], pte);
+    }
+
+    per_cpu(xen_dommap, cpu) = domheap;
+
+    return true;
+}
+
+void *map_domain_page_global(mfn_t mfn)
+{
+    return vmap(&mfn, 1);
+}
+
+void unmap_domain_page_global(const void *va)
+{
+    vunmap(va);
+}
+
+/* Map a page of domheap memory */
+void *map_domain_page(mfn_t mfn)
+{
+    unsigned long flags;
+    lpae_t *map = this_cpu(xen_dommap);
+    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
+    vaddr_t va;
+    lpae_t pte;
+    int i, slot;
+
+    local_irq_save(flags);
+
+    /* The map is laid out as an open-addressed hash table where each
+     * entry is a 2MB superpage pte.  We use the available bits of each
+     * PTE as a reference count; when the refcount is zero the slot can
+     * be reused. */
+    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
+          i < DOMHEAP_ENTRIES;
+          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
+    {
+        if ( map[slot].pt.avail < 0xf &&
+             map[slot].pt.base == slot_mfn &&
+             map[slot].pt.valid )
+        {
+            /* This slot already points to the right place; reuse it */
+            map[slot].pt.avail++;
+            break;
+        }
+        else if ( map[slot].pt.avail == 0 )
+        {
+            /* Commandeer this 2MB slot */
+            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
+            pte.pt.avail = 1;
+            write_pte(map + slot, pte);
+            break;
+        }
+
+    }
+    /* If the map fills up, the callers have misbehaved. */
+    BUG_ON(i == DOMHEAP_ENTRIES);
+
+#ifndef NDEBUG
+    /* Searching the hash could get slow if the map starts filling up.
+     * Cross that bridge when we come to it */
+    {
+        static int max_tries = 32;
+        if ( i >= max_tries )
+        {
+            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
+            max_tries *= 2;
+        }
+    }
+#endif
+
+    local_irq_restore(flags);
+
+    va = (DOMHEAP_VIRT_START
+          + (slot << SECOND_SHIFT)
+          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
+
+    /*
+     * We may not have flushed this specific subpage at map time,
+     * since we only flush the 4k page not the superpage
+     */
+    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
+
+    return (void *)va;
+}
+
+/* Release a mapping taken with map_domain_page() */
+void unmap_domain_page(const void *va)
+{
+    unsigned long flags;
+    lpae_t *map = this_cpu(xen_dommap);
+    int slot = ((unsigned long)va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
+
+    if ( !va )
+        return;
+
+    local_irq_save(flags);
+
+    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
+    ASSERT(map[slot].pt.avail != 0);
+
+    map[slot].pt.avail--;
+
+    local_irq_restore(flags);
+}
+
+mfn_t domain_page_map_to_mfn(const void *ptr)
+{
+    unsigned long va = (unsigned long)ptr;
+    lpae_t *map = this_cpu(xen_dommap);
+    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
+    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
+
+    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
+        return virt_to_mfn(va);
+
+    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
+    ASSERT(map[slot].pt.avail != 0);
+
+    return mfn_add(lpae_get_mfn(map[slot]), offset);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */

diff --git a/xen/arch/arm/include/asm/arm32/mm.h b/xen/arch/arm/include/asm/arm32/mm.h
index 575373aeb985..8bfc906e7178 100644
--- a/xen/arch/arm/include/asm/arm32/mm.h
+++ b/xen/arch/arm/include/asm/arm32/mm.h
@@ -1,6 +1,12 @@
 #ifndef __ARM_ARM32_MM_H__
 #define __ARM_ARM32_MM_H__
 
+#include
+
+#include
+
+DECLARE_PER_CPU(lpae_t *, xen_pgtable);
+
 /*
  * Only a limited amount of RAM, called xenheap, is always mapped on ARM32.
  * For convenience always return false.

diff --git a/xen/arch/arm/include/asm/lpae.h b/xen/arch/arm/include/asm/lpae.h
index fc19cbd84772..3fdd5d0de28e 100644
--- a/xen/arch/arm/include/asm/lpae.h
+++ b/xen/arch/arm/include/asm/lpae.h
@@ -261,6 +261,23 @@ lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
 #define third_table_offset(va)   TABLE_OFFSET(third_linear_offset(va))
 #define zeroeth_table_offset(va) TABLE_OFFSET(zeroeth_linear_offset(va))
 
+/*
+ * Macros to define page-tables:
+ * - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
+ *  in assembly code before BSS is zeroed.
+ * - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
+ *  page-tables to be used after BSS is zeroed (typically they are only used
+ *  in C).
+ */
+#define DEFINE_BOOT_PAGE_TABLE(name)                         \
+lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")  \
+    name[XEN_PT_LPAE_ENTRIES]
+
+#define DEFINE_PAGE_TABLES(name, nr)                         \
+lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
+
+#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
+
 #endif /* __ARM_LPAE_H__ */
 
 /*

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7a722d6c86c6..ad26ad740308 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -57,23 +57,6 @@ mm_printk(const char *fmt, ...) {}
 } while (0)
 #endif
 
-/*
- * Macros to define page-tables:
- * - DEFINE_BOOT_PAGE_TABLE is used to define page-table that are used
- *  in assembly code before BSS is zeroed.
- * - DEFINE_PAGE_TABLE{,S} are used to define one or multiple
- *  page-tables to be used after BSS is zeroed (typically they are only used
- *  in C).
- */
-#define DEFINE_BOOT_PAGE_TABLE(name)                         \
-lpae_t __aligned(PAGE_SIZE) __section(".data.page_aligned")  \
-    name[XEN_PT_LPAE_ENTRIES]
-
-#define DEFINE_PAGE_TABLES(name, nr)                         \
-lpae_t __aligned(PAGE_SIZE) name[XEN_PT_LPAE_ENTRIES * (nr)]
-
-#define DEFINE_PAGE_TABLE(name) DEFINE_PAGE_TABLES(name, 1)
-
 /* Static start-of-day pagetables that we use before the allocators
  * are up. These are used by all CPUs during bringup before switching
  * to the CPUs own pagetables.
@@ -110,7 +93,7 @@ DEFINE_BOOT_PAGE_TABLE(boot_third);
 /* Main runtime page tables */
 
 /*
- * For arm32 xen_pgtable and xen_dommap are per-PCPU and are allocated before
+ * For arm32 xen_pgtable are per-PCPU and are allocated before
 * bringing up each CPU. For arm64 xen_pgtable is common to all PCPUs.
 *
 * xen_second, xen_fixmap and xen_xenmap are always shared between all
@@ -126,18 +109,10 @@ static DEFINE_PAGE_TABLE(xen_first);
 #define HYP_PT_ROOT_LEVEL 1
 /* Per-CPU pagetable pages */
 /* xen_pgtable == root of the trie (zeroeth level on 64-bit, first on 32-bit) */
-static DEFINE_PER_CPU(lpae_t *, xen_pgtable);
+DEFINE_PER_CPU(lpae_t *, xen_pgtable);
 #define THIS_CPU_PGTABLE this_cpu(xen_pgtable)
-/*
- * xen_dommap == pages used by map_domain_page, these pages contain
- * the second level pagetables which map the domheap region
- * starting at DOMHEAP_VIRT_START in 2MB chunks.
- */
-static DEFINE_PER_CPU(lpae_t *, xen_dommap);
 /* Root of the trie for cpu0, other CPU's PTs are dynamically allocated */
 static DEFINE_PAGE_TABLE(cpu0_pgtable);
-/* cpu0's domheap page tables */
-static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
 #endif
 
 /* Common pagetable leaves */
@@ -371,175 +346,6 @@ void clear_fixmap(unsigned int map)
     BUG_ON(res != 0);
 }
 
-#ifdef CONFIG_ARCH_MAP_DOMAIN_PAGE
-/*
- * Prepare the area that will be used to map domheap pages. They are
- * mapped in 2MB chunks, so we need to allocate the page-tables up to
- * the 2nd level.
- *
- * The caller should make sure the root page-table for @cpu has been
- * allocated.
- */
-bool init_domheap_mappings(unsigned int cpu)
-{
-    unsigned int order = get_order_from_pages(DOMHEAP_SECOND_PAGES);
-    lpae_t *root = per_cpu(xen_pgtable, cpu);
-    unsigned int i, first_idx;
-    lpae_t *domheap;
-    mfn_t mfn;
-
-    ASSERT(root);
-    ASSERT(!per_cpu(xen_dommap, cpu));
-
-    /*
-     * The domheap for cpu0 is before the heap is initialized. So we
-     * need to use pre-allocated pages.
-     */
-    if ( !cpu )
-        domheap = cpu0_dommap;
-    else
-        domheap = alloc_xenheap_pages(order, 0);
-
-    if ( !domheap )
-        return false;
-
-    /* Ensure the domheap has no stray mappings */
-    memset(domheap, 0, DOMHEAP_SECOND_PAGES * PAGE_SIZE);
-
-    /*
-     * Update the first level mapping to reference the local CPUs
-     * domheap mapping pages.
-     */
-    mfn = virt_to_mfn(domheap);
-    first_idx = first_table_offset(DOMHEAP_VIRT_START);
-    for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
-    {
-        lpae_t pte = mfn_to_xen_entry(mfn_add(mfn, i), MT_NORMAL);
-        pte.pt.table = 1;
-        write_pte(&root[first_idx + i], pte);
-    }
-
-    per_cpu(xen_dommap, cpu) = domheap;
-
-    return true;
-}
-
-void *map_domain_page_global(mfn_t mfn)
-{
-    return vmap(&mfn, 1);
-}
-
-void unmap_domain_page_global(const void *va)
-{
-    vunmap(va);
-}
-
-/* Map a page of domheap memory */
-void *map_domain_page(mfn_t mfn)
-{
-    unsigned long flags;
-    lpae_t *map = this_cpu(xen_dommap);
-    unsigned long slot_mfn = mfn_x(mfn) & ~XEN_PT_LPAE_ENTRY_MASK;
-    vaddr_t va;
-    lpae_t pte;
-    int i, slot;
-
-    local_irq_save(flags);
-
-    /* The map is laid out as an open-addressed hash table where each
-     * entry is a 2MB superpage pte.  We use the available bits of each
-     * PTE as a reference count; when the refcount is zero the slot can
-     * be reused. */
-    for ( slot = (slot_mfn >> XEN_PT_LPAE_SHIFT) % DOMHEAP_ENTRIES, i = 0;
-          i < DOMHEAP_ENTRIES;
-          slot = (slot + 1) % DOMHEAP_ENTRIES, i++ )
-    {
-        if ( map[slot].pt.avail < 0xf &&
-             map[slot].pt.base == slot_mfn &&
-             map[slot].pt.valid )
-        {
-            /* This slot already points to the right place; reuse it */
-            map[slot].pt.avail++;
-            break;
-        }
-        else if ( map[slot].pt.avail == 0 )
-        {
-            /* Commandeer this 2MB slot */
-            pte = mfn_to_xen_entry(_mfn(slot_mfn), MT_NORMAL);
-            pte.pt.avail = 1;
-            write_pte(map + slot, pte);
-            break;
-        }
-
-    }
-    /* If the map fills up, the callers have misbehaved. */
-    BUG_ON(i == DOMHEAP_ENTRIES);
-
-#ifndef NDEBUG
-    /* Searching the hash could get slow if the map starts filling up.
-     * Cross that bridge when we come to it */
-    {
-        static int max_tries = 32;
-        if ( i >= max_tries )
-        {
-            dprintk(XENLOG_WARNING, "Domheap map is filling: %i tries\n", i);
-            max_tries *= 2;
-        }
-    }
-#endif
-
-    local_irq_restore(flags);
-
-    va = (DOMHEAP_VIRT_START
-          + (slot << SECOND_SHIFT)
-          + ((mfn_x(mfn) & XEN_PT_LPAE_ENTRY_MASK) << THIRD_SHIFT));
-
-    /*
-     * We may not have flushed this specific subpage at map time,
-     * since we only flush the 4k page not the superpage
-     */
-    flush_xen_tlb_range_va_local(va, PAGE_SIZE);
-
-    return (void *)va;
-}
-
-/* Release a mapping taken with map_domain_page() */
-void unmap_domain_page(const void *va)
-{
-    unsigned long flags;
-    lpae_t *map = this_cpu(xen_dommap);
-    int slot = ((unsigned long)va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
-
-    if ( !va )
-        return;
-
-    local_irq_save(flags);
-
-    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
-    ASSERT(map[slot].pt.avail != 0);
-
-    map[slot].pt.avail--;
-
-    local_irq_restore(flags);
-}
-
-mfn_t domain_page_map_to_mfn(const void *ptr)
-{
-    unsigned long va = (unsigned long)ptr;
-    lpae_t *map = this_cpu(xen_dommap);
-    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
-    unsigned long offset = (va>>THIRD_SHIFT) & XEN_PT_LPAE_ENTRY_MASK;
-
-    if ( (va >= VMAP_VIRT_START) && ((va - VMAP_VIRT_START) < VMAP_VIRT_SIZE) )
-        return virt_to_mfn(va);
-
-    ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
-    ASSERT(map[slot].pt.avail != 0);
-
-    return mfn_add(lpae_get_mfn(map[slot]), offset);
-}
-#endif
-
 void flush_page_to_ram(unsigned long mfn, bool sync_icache)
 {
     void *v = map_domain_page(_mfn(mfn));

diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index f1ea3199c8eb..f0aee2cfd9f8 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -11,6 +11,9 @@ config COMPAT
 config CORE_PARKING
 	bool
 
+config DOMAIN_PAGE
+	bool
+
 config GRANT_TABLE
 	bool "Grant table support" if EXPERT
 	default y
-- 
2.32.0

From nobody Mon Apr 29 14:39:24 2024
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, Julien Grall, Stefano Stabellini, Bertrand Marquis, Volodymyr Babchuk, Michal Orzel
Subject: [PATCH v2 5/5] xen/arm: mm: Reduce the area that xen_second covers
Date: Wed, 20 Jul 2022 19:44:59 +0100
Message-Id: <20220720184459.51582-6-julien@xen.org>
In-Reply-To: <20220720184459.51582-1-julien@xen.org>
References: <20220720184459.51582-1-julien@xen.org>

From: Julien Grall

At the moment, xen_second is used to cover the first 2GB of the
virtual address space. With the recent rework of the page-tables, only
the first 1GB region (where Xen resides) is effectively used. In
addition, I would like to reshuffle the memory layout, so Xen mappings
may no longer be in the first 2GB of the virtual address space.

Therefore, rework xen_second so it only covers the 1GB region where
Xen will reside.

With this change, xen_second no longer covers the xenheap area on
arm32. So we first need to add memory to the boot allocator before
setting up the xenheap mappings.

Take the opportunity to update the comments on top of xen_fixmap and
xen_xenmap.

Signed-off-by: Julien Grall
Reviewed-by: Michal Orzel
Reviewed-by: Bertrand Marquis
Tested-by: Bertrand Marquis

---
Changes in v2:
    - Add Michal's reviewed-by
---
 xen/arch/arm/mm.c    | 32 +++++++++++---------------------
 xen/arch/arm/setup.c | 13 +++++++++++--
 2 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index ad26ad740308..3d2c046bbb5c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -116,17 +116,14 @@ static DEFINE_PAGE_TABLE(cpu0_pgtable);
 #endif
 
 /* Common pagetable leaves */
-/* Second level page tables.
- *
- * The second-level table is 2 contiguous pages long, and covers all
- * addresses from 0 to 0x7fffffff. Offsets into it are calculated
- * with second_linear_offset(), not second_table_offset().
- */
-static DEFINE_PAGE_TABLES(xen_second, 2);
-/* First level page table used for fixmap */
+/* Second level page table used to cover Xen virtual address space */
+static DEFINE_PAGE_TABLE(xen_second);
+/* Third level page table used for fixmap */
 DEFINE_BOOT_PAGE_TABLE(xen_fixmap);
-/* First level page table used to map Xen itself with the XN bit set
- * as appropriate. */
+/*
+ * Third level page table used to map Xen itself with the XN bit set
+ * as appropriate.
+ */
 static DEFINE_PAGE_TABLE(xen_xenmap);
 
 /* Non-boot CPUs use this to find the correct pagetables. */
@@ -168,7 +165,6 @@ static void __init __maybe_unused build_assertions(void)
     BUILD_BUG_ON(zeroeth_table_offset(XEN_VIRT_START));
 #endif
     BUILD_BUG_ON(first_table_offset(XEN_VIRT_START));
-    BUILD_BUG_ON(second_linear_offset(XEN_VIRT_START) >= XEN_PT_LPAE_ENTRIES);
 #ifdef CONFIG_DOMAIN_PAGE
     BUILD_BUG_ON(DOMHEAP_VIRT_START & ~FIRST_MASK);
 #endif
@@ -482,14 +478,10 @@ void __init setup_pagetables(unsigned long boot_phys_offset)
     p = (void *) cpu0_pgtable;
 #endif
 
-    /* Initialise first level entries, to point to second level entries */
-    for ( i = 0; i < 2; i++)
-    {
-        p[i] = pte_of_xenaddr((uintptr_t)(xen_second +
-                                          i * XEN_PT_LPAE_ENTRIES));
-        p[i].pt.table = 1;
-        p[i].pt.xn = 0;
-    }
+    /* Map xen second level page-table */
+    p[0] = pte_of_xenaddr((uintptr_t)(xen_second));
+    p[0].pt.table = 1;
+    p[0].pt.xn = 0;
 
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < XEN_PT_LPAE_ENTRIES; i++ )
@@ -618,8 +610,6 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
 
     /* Record where the xenheap is, for translation routines. */
     xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
-    xenheap_mfn_start = _mfn(base_mfn);
-    xenheap_mfn_end = _mfn(base_mfn + nr_mfns);
 }
 #else /* CONFIG_ARM_64 */
 void __init setup_xenheap_mappings(unsigned long base_mfn,

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 068e84b10335..500307edc08d 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -774,11 +774,20 @@ static void __init setup_mm(void)
            opt_xenheap_megabytes ? ", from command-line" : "");
     printk("Dom heap: %lu pages\n", domheap_pages);
 
-    setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
+    /*
+     * We need some memory to allocate the page-tables used for the
+     * xenheap mappings. So populate the boot allocator first.
+     *
+     * This requires us to set xenheap_mfn_{start, end} first so the Xenheap
+     * region can be avoided.
+     */
+    xenheap_mfn_start = _mfn((e >> PAGE_SHIFT) - xenheap_pages);
+    xenheap_mfn_end = mfn_add(xenheap_mfn_start, xenheap_pages);
 
-    /* Add non-xenheap memory */
     populate_boot_allocator();
 
+    setup_xenheap_mappings(mfn_x(xenheap_mfn_start), xenheap_pages);
+
     /* Frame table covers all of RAM region, including holes */
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
-- 
2.32.0