From: Juergen Gross
To: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org
Cc: samuel.thibault@ens-lyon.org, wl@xen.org, Juergen Gross
Subject: [PATCH v2 04/10] mini-os: respect memory map when ballooning up
Date: Mon, 20 Dec 2021 17:07:10 +0100
Message-Id: <20211220160716.4159-5-jgross@suse.com>
In-Reply-To: <20211220160716.4159-1-jgross@suse.com>
References: <20211220160716.4159-1-jgross@suse.com>

Today Mini-OS doesn't look at the memory map when ballooning up. This can
result in problems for PVH domains with more than 4 GB of RAM, as ballooning
will happily run into the ACPI area. Fix that by adding only pages marked as
RAM in the memory map and by distinguishing between the current number of RAM
pages and the first unallocated page.
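For illustration only (not part of the patch): a minimal, self-contained sketch
of the idea of clamping a balloon request to contiguous RAM in an e820-style
map, so the allocation stops at reserved regions such as the ACPI area. The
struct layout, the example map and the helper name below are simplified,
hypothetical stand-ins for Mini-OS's real e820_map and
e820_get_max_contig_pages().

    /* Sketch only: starting at pfn, how many of the requested pages lie in
     * contiguous RAM according to a (hypothetical) e820-style map? */
    #include <stdint.h>
    #include <stdio.h>

    #define E820_RAM      1
    #define E820_RESERVED 2
    #define PAGE_SHIFT    12

    struct e820entry {
        uint64_t addr;   /* start of region (bytes) */
        uint64_t size;   /* length of region (bytes) */
        uint32_t type;   /* E820_RAM, E820_RESERVED, ... */
    };

    /* Example map: RAM up to 3 GiB, a reserved hole, RAM again above 4 GiB. */
    static const struct e820entry map[] = {
        { 0x000000000ULL, 0x0c0000000ULL, E820_RAM },
        { 0x0c0000000ULL, 0x040000000ULL, E820_RESERVED },
        { 0x100000000ULL, 0x040000000ULL, E820_RAM },
    };

    static unsigned long max_contig_ram_pages(unsigned long pfn,
                                              unsigned long pages)
    {
        unsigned int i;

        for ( i = 0; i < sizeof(map) / sizeof(map[0]); i++ )
        {
            unsigned long start = map[i].addr >> PAGE_SHIFT;
            unsigned long end = (map[i].addr + map[i].size) >> PAGE_SHIFT;

            /* Skip non-RAM regions and regions not containing pfn. */
            if ( map[i].type != E820_RAM || end <= pfn || start > pfn )
                continue;

            return (end - pfn > pages) ? pages : end - pfn;
        }

        return 0;   /* pfn is not backed by RAM at all */
    }

    int main(void)
    {
        /* Ask for 1024 pages starting 16 pages below the reserved hole at
         * 3 GiB: only the 16 pages up to the hole are granted. */
        printf("%lu\n", max_contig_ram_pages(0xc0000 - 16, 1024));
        return 0;
    }

The patch below implements this idea in e820_get_max_contig_pages() and uses it
in balloon_up() to clamp n_pages before allocating new memory.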
Signed-off-by: Juergen Gross
Reviewed-by: Samuel Thibault
---
V2:
- rename and fix e820_get_max_pages() (Samuel Thibault)
---
 arch/arm/mm.c      |  3 +++
 arch/x86/balloon.c |  4 ++--
 arch/x86/mm.c      |  2 ++
 balloon.c          | 33 ++++++++++++++++++++++++---------
 e820.c             | 21 ++++++++++++++++++++-
 include/balloon.h  |  5 +++--
 include/e820.h     |  1 +
 mm.c               |  7 ++-----
 8 files changed, 57 insertions(+), 19 deletions(-)

diff --git a/arch/arm/mm.c b/arch/arm/mm.c
index 9068166..11962f8 100644
--- a/arch/arm/mm.c
+++ b/arch/arm/mm.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -70,6 +71,8 @@ void arch_init_mm(unsigned long *start_pfn_p, unsigned long *max_pfn_p)
     }
     device_tree = new_device_tree;
     *max_pfn_p = to_phys(new_device_tree) >> PAGE_SHIFT;
+
+    balloon_set_nr_pages(*max_pfn_p, *max_pfn_p);
 }
 
 void arch_init_demand_mapping_area(void)
diff --git a/arch/x86/balloon.c b/arch/x86/balloon.c
index 10b440c..fe79644 100644
--- a/arch/x86/balloon.c
+++ b/arch/x86/balloon.c
@@ -61,10 +61,10 @@ void arch_remap_p2m(unsigned long max_pfn)
     p2m_invalidate(l2_list, L2_P2M_IDX(max_pfn - 1) + 1);
     p2m_invalidate(l1_list, L1_P2M_IDX(max_pfn - 1) + 1);
 
-    if ( p2m_pages(nr_max_pages) <= p2m_pages(max_pfn) )
+    if ( p2m_pages(nr_max_pfn) <= p2m_pages(max_pfn) )
         return;
 
-    new_p2m = alloc_virt_kernel(p2m_pages(nr_max_pages));
+    new_p2m = alloc_virt_kernel(p2m_pages(nr_max_pfn));
     for ( pfn = 0; pfn < max_pfn; pfn += P2M_ENTRIES )
     {
         map_frame_rw(new_p2m + PAGE_SIZE * (pfn / P2M_ENTRIES),
diff --git a/arch/x86/mm.c b/arch/x86/mm.c
index 3bf6170..c30d8bc 100644
--- a/arch/x86/mm.c
+++ b/arch/x86/mm.c
@@ -72,6 +72,7 @@ void arch_mm_preinit(void *p)
     pt_base = (pgentry_t *)si->pt_base;
     first_free_pfn = PFN_UP(to_phys(pt_base)) + si->nr_pt_frames;
     last_free_pfn = si->nr_pages;
+    balloon_set_nr_pages(last_free_pfn, last_free_pfn);
 }
 #else
 #include
@@ -118,6 +119,7 @@ void arch_mm_preinit(void *p)
     }
 
     last_free_pfn = e820_get_maxpfn(ret);
+    balloon_set_nr_pages(ret, last_free_pfn);
 }
 #endif
 
diff --git a/balloon.c b/balloon.c
index 5676d3b..9dc77c5 100644
--- a/balloon.c
+++ b/balloon.c
@@ -23,14 +23,24 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 
-unsigned long nr_max_pages;
-unsigned long nr_mem_pages;
+unsigned long nr_max_pfn;
+
+static unsigned long nr_max_pages;
+static unsigned long nr_mem_pfn;
+static unsigned long nr_mem_pages;
+
+void balloon_set_nr_pages(unsigned long pages, unsigned long pfn)
+{
+    nr_mem_pages = pages;
+    nr_mem_pfn = pfn;
+}
 
 void get_max_pages(void)
 {
@@ -46,16 +56,18 @@ void get_max_pages(void)
 
     nr_max_pages = ret;
     printk("Maximum memory size: %ld pages\n", nr_max_pages);
+
+    nr_max_pfn = e820_get_maxpfn(nr_max_pages);
 }
 
 void mm_alloc_bitmap_remap(void)
 {
     unsigned long i, new_bitmap;
 
-    if ( mm_alloc_bitmap_size >= ((nr_max_pages + 1) >> 3) )
+    if ( mm_alloc_bitmap_size >= ((nr_max_pfn + 1) >> 3) )
         return;
 
-    new_bitmap = alloc_virt_kernel(PFN_UP((nr_max_pages + 1) >> 3));
+    new_bitmap = alloc_virt_kernel(PFN_UP((nr_max_pfn + 1) >> 3));
     for ( i = 0; i < mm_alloc_bitmap_size; i += PAGE_SIZE )
     {
         map_frame_rw(new_bitmap + i,
@@ -70,7 +82,7 @@ static unsigned long balloon_frames[N_BALLOON_FRAMES];
 
 int balloon_up(unsigned long n_pages)
 {
-    unsigned long page, pfn;
+    unsigned long page, pfn, start_pfn;
     int rc;
     struct xen_memory_reservation reservation = {
         .domid = DOMID_SELF
@@ -81,8 +93,11 @@ int balloon_up(unsigned long n_pages)
     if ( n_pages > N_BALLOON_FRAMES )
         n_pages = N_BALLOON_FRAMES;
 
+    start_pfn = e820_get_maxpfn(nr_mem_pages + 1) - 1;
+    n_pages = e820_get_max_contig_pages(start_pfn, n_pages);
+
     /* Resize alloc_bitmap if necessary. */
-    while ( mm_alloc_bitmap_size * 8 < nr_mem_pages + n_pages )
+    while ( mm_alloc_bitmap_size * 8 < start_pfn + n_pages )
     {
         page = alloc_page();
         if ( !page )
@@ -99,14 +114,14 @@ int balloon_up(unsigned long n_pages)
         mm_alloc_bitmap_size += PAGE_SIZE;
     }
 
-    rc = arch_expand_p2m(nr_mem_pages + n_pages);
+    rc = arch_expand_p2m(start_pfn + n_pages);
     if ( rc )
         return rc;
 
     /* Get new memory from hypervisor. */
     for ( pfn = 0; pfn < n_pages; pfn++ )
     {
-        balloon_frames[pfn] = nr_mem_pages + pfn;
+        balloon_frames[pfn] = start_pfn + pfn;
     }
     set_xen_guest_handle(reservation.extent_start, balloon_frames);
     reservation.nr_extents = n_pages;
@@ -116,7 +131,7 @@
 
     for ( pfn = 0; pfn < rc; pfn++ )
     {
-        arch_pfn_add(nr_mem_pages + pfn, balloon_frames[pfn]);
+        arch_pfn_add(start_pfn + pfn, balloon_frames[pfn]);
         free_page(pfn_to_virt(nr_mem_pages + pfn));
     }
 
diff --git a/e820.c b/e820.c
index 6d15cdf..659f71c 100644
--- a/e820.c
+++ b/e820.c
@@ -290,7 +290,8 @@ unsigned long e820_get_maxpfn(unsigned long pages)
     int i;
     unsigned long pfns, start = 0;
 
-    e820_get_memmap();
+    if ( !e820_entries )
+        e820_get_memmap();
 
     for ( i = 0; i < e820_entries; i++ )
     {
@@ -305,3 +306,21 @@ unsigned long e820_get_maxpfn(unsigned long pages)
 
     return start + pfns;
 }
+
+unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages)
+{
+    int i;
+    unsigned long end;
+
+    for ( i = 0; i < e820_entries && e820_map[i].addr < (pfn << PAGE_SHIFT);
+          i++ )
+    {
+        end = (e820_map[i].addr + e820_map[i].size) >> PAGE_SHIFT;
+        if ( e820_map[i].type != E820_RAM || end <= pfn )
+            continue;
+
+        return ((end - pfn) > pages) ? pages : end - pfn;
+    }
+
+    return 0;
+}
diff --git a/include/balloon.h b/include/balloon.h
index 6cfec4f..8f7c8bd 100644
--- a/include/balloon.h
+++ b/include/balloon.h
@@ -32,11 +32,11 @@
  */
 #define BALLOON_EMERGENCY_PAGES 64
 
-extern unsigned long nr_max_pages;
-extern unsigned long nr_mem_pages;
+extern unsigned long nr_max_pfn;
 
 void get_max_pages(void);
 int balloon_up(unsigned long n_pages);
+void balloon_set_nr_pages(unsigned long pages, unsigned long pfn);
 
 void mm_alloc_bitmap_remap(void);
 void arch_pfn_add(unsigned long pfn, unsigned long mfn);
@@ -50,6 +50,7 @@ static inline int chk_free_pages(unsigned long needed)
 {
     return needed <= nr_free_pages;
 }
+static inline balloon_set_nr_pages(unsigned long pages, unsigned long pfn) { }
 
 #endif /* CONFIG_BALLOON */
 #endif /* _BALLOON_H_ */
diff --git a/include/e820.h b/include/e820.h
index 6a57f05..8d4d371 100644
--- a/include/e820.h
+++ b/include/e820.h
@@ -50,5 +50,6 @@ extern struct e820entry e820_map[];
 extern unsigned e820_entries;
 
 unsigned long e820_get_maxpfn(unsigned long pages);
+unsigned long e820_get_max_contig_pages(unsigned long pfn, unsigned long pages);
 
 #endif /*__E820_HEADER*/
diff --git a/mm.c b/mm.c
index 932ceeb..6493bdd 100644
--- a/mm.c
+++ b/mm.c
@@ -396,8 +396,9 @@ void init_mm(void)
 
     printk("MM: Init\n");
 
-    get_max_pages();
     arch_init_mm(&start_pfn, &max_pfn);
+    get_max_pages();
+
     /*
      * now we can initialise the page allocator
      */
@@ -407,10 +408,6 @@ void init_mm(void)
     arch_init_p2m(max_pfn);
 
     arch_init_demand_mapping_area();
-
-#ifdef CONFIG_BALLOON
-    nr_mem_pages = max_pfn;
-#endif
 }
 
 void fini_mm(void)
-- 
2.26.2