From: David Woodhouse
To: xen-devel@lists.xenproject.org
Date: Sat, 1 Feb 2020 00:32:57 +0000
Message-Id: <20200201003303.2363081-2-dwmw2@infradead.org>
In-Reply-To: <8a95f787ca93b23ee8d8c0b55fcc63d22a75c5f3.camel@infradead.org>
References: <8a95f787ca93b23ee8d8c0b55fcc63d22a75c5f3.camel@infradead.org>
Subject: [Xen-devel] [PATCH 2/8] x86/setup: Fix badpage= handling for memory above HYPERVISOR_VIRT_END
Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Varad Gautam, Ian Jackson, Hongyan Xia, Paul Durrant, Roger Pau Monné

From: David Woodhouse

Bad pages are identified by get_platform_badpages() and with badpage=
on the command line. The boot allocator currently automatically elides
these from the regions passed to it with init_boot_pages(). The xenheap
is then initialised with the pages which are still marked as free by
the boot allocator when end_boot_allocator() is called.

However, any memory above HYPERVISOR_VIRT_END is passed directly to
init_domheap_pages() later in __start_xen(), and the bad page list is
not consulted.

Fix this by marking those pages as PGC_broken in the frametable at the
time end_boot_allocator() runs, and then making init_heap_pages() skip
over any pages which are so marked.

Signed-off-by: David Woodhouse
---
 xen/common/page_alloc.c | 82 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 79 insertions(+), 3 deletions(-)
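As a side note for readers unfamiliar with the badpage= syntax: it is a
comma-separated list of PFNs and PFN ranges (base 0, so hex with 0x works),
which mark_bad_pages() below re-parses at end_boot_allocator() time. The
following is a rough, self-contained user-space sketch of that parsing loop
only, not Xen code: parse_badpage() and the sample values are made up for
illustration, strtoul() stands in for Xen's simple_strtoul(), and printf()
stands in for marking each MFN in the range PGC_broken.

#include <stdio.h>
#include <stdlib.h>

/* Parse a badpage= style list, e.g. "0x3e8,0x7d0-0x7d4". */
static void parse_badpage(const char *p)
{
    while ( *p != '\0' )
    {
        char *end;
        unsigned long bad_spfn = strtoul(p, &end, 0);   /* start of range */
        unsigned long bad_epfn = bad_spfn;              /* end defaults to start */

        p = end;
        if ( *p == '-' )                                /* optional "-end" part */
        {
            p++;
            bad_epfn = strtoul(p, &end, 0);
            p = end;
            if ( bad_epfn < bad_spfn )
                bad_epfn = bad_spfn;
        }

        if ( *p == ',' )                                /* more entries follow */
            p++;
        else if ( *p != '\0' )                          /* unexpected character */
            break;

        /* Here Xen would walk the range and mark each valid MFN broken. */
        printf("bad pfns %#lx-%#lx\n", bad_spfn, bad_epfn);
    }
}

int main(void)
{
    parse_badpage("0x3e8,0x7d0-0x7d4");
    return 0;
}
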
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 919a270587..3cf478311b 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1758,6 +1758,18 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
     return 0;
 }
 
+static unsigned long contig_avail_pages(struct page_info *pg, unsigned long max_pages)
+{
+    unsigned long i;
+
+    for ( i = 0 ; i < max_pages; i++)
+    {
+        if ( pg[i].count_info & PGC_broken )
+            break;
+    }
+    return i;
+}
+
 /*
  * Hand the specified arbitrary page range to the specified heap zone
  * checking the node_id of the previous page. If they differ and the
@@ -1799,18 +1811,23 @@ static void init_heap_pages(
     {
         unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
 
+        /* If the (first) page is already marked broken, don't add it. */
+        if ( pg[i].count_info & PGC_broken )
+            continue;
+
         if ( unlikely(!avail[nid]) )
         {
+            unsigned long contig_nr_pages = contig_avail_pages(pg + i, nr_pages);
             unsigned long s = mfn_x(page_to_mfn(pg + i));
-            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
+            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + i + contig_nr_pages - 1), 1));
             bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
                             !(s & ((1UL << MAX_ORDER) - 1)) &&
                             (find_first_set_bit(e) <= find_first_set_bit(s));
             unsigned long n;
 
-            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
+            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), contig_nr_pages,
                                &use_tail);
-            BUG_ON(i + n > nr_pages);
+            BUG_ON(n > contig_nr_pages);
             if ( n && !use_tail )
             {
                 i += n - 1;
@@ -1846,6 +1863,63 @@ static unsigned long avail_heap_pages(
     return free_pages;
 }
 
+static void mark_bad_pages(void)
+{
+    unsigned long bad_spfn, bad_epfn;
+    const char *p;
+    struct page_info *pg;
+#ifdef CONFIG_X86
+    const struct platform_bad_page *badpage;
+    unsigned int i, j, array_size;
+
+    badpage = get_platform_badpages(&array_size);
+    if ( badpage )
+    {
+        for ( i = 0; i < array_size; i++ )
+        {
+            for ( j = 0; j < 1UL << badpage->order; j++ )
+            {
+                if ( mfn_valid(_mfn(badpage->mfn + j)) )
+                {
+                    pg = mfn_to_page(_mfn(badpage->mfn + j));
+                    pg->count_info |= PGC_broken;
+                    page_list_add_tail(pg, &page_broken_list);
+                }
+            }
+        }
+    }
+#endif
+
+    /* Check new pages against the bad-page list. */
+    p = opt_badpage;
+    while ( *p != '\0' )
+    {
+        bad_spfn = simple_strtoul(p, &p, 0);
+        bad_epfn = bad_spfn;
+
+        if ( *p == '-' )
+        {
+            p++;
+            bad_epfn = simple_strtoul(p, &p, 0);
+            if ( bad_epfn < bad_spfn )
+                bad_epfn = bad_spfn;
+        }
+
+        if ( *p == ',' )
+            p++;
+        else if ( *p != '\0' )
+            break;
+
+        while ( mfn_valid(_mfn(bad_spfn)) && bad_spfn < bad_epfn )
+        {
+            pg = mfn_to_page(_mfn(bad_spfn));
+            pg->count_info |= PGC_broken;
+            page_list_add_tail(pg, &page_broken_list);
+            bad_spfn++;
+        }
+    }
+}
+
 void __init end_boot_allocator(void)
 {
     unsigned int i;
@@ -1870,6 +1944,8 @@ void __init end_boot_allocator(void)
     }
     nr_bootmem_regions = 0;
 
+    mark_bad_pages();
+
     if ( !dma_bitsize && (num_online_nodes() > 1) )
         dma_bitsize = arch_get_dma_bitsize();
 
-- 
2.21.0