From nobody Thu Apr 16 20:47:14 2026
Date: Thu, 16 Apr 2026 13:06:53 +0200
In-Reply-To: <20260416110654.247398-1-mclapinski@google.com>
References: <20260416110654.247398-1-mclapinski@google.com>
Message-ID: <20260416110654.247398-2-mclapinski@google.com>
Subject: [PATCH v8 1/2] kho: fix deferred initialization of scratch areas
From: Michal Clapinski
To: Evangelos Petrongonas, Pasha Tatashin, Mike Rapoport, Pratyush Yadav,
    Alexander Graf, Samiullah Khawaja, kexec@lists.infradead.org,
    linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Vlastimil Babka,
    Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner,
    Zi Yan, Michal Clapinski

Currently, if CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled,
kho_release_scratch() initializes the struct pages of KHO scratch memory
and sets their migratetype. Unless the whole scratch area fits below
first_deferred_pfn, some of that work is later overwritten by either
deferred_init_pages() or memmap_init_reserved_range().

To fix this, make memmap_init_range(), deferred_init_memmap_chunk() and
memmap_init_reserved_range() recognize KHO scratch regions and set the
migratetype of pageblocks in those regions to MIGRATE_CMA.

Signed-off-by: Michal Clapinski
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport (Microsoft)
---
 include/linux/memblock.h           |  7 +++--
 kernel/liveupdate/kexec_handover.c | 25 ------------------
 mm/memblock.c                      | 41 ++++++++++++++----------------
 mm/mm_init.c                       | 27 ++++++++++++++------
 4 files changed, 43 insertions(+), 57 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 6ec5e9ac0699..410f2a399691 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -614,11 +614,14 @@ static inline void memtest_report_meminfo(struct seq_file *m) { }
 #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
 void memblock_set_kho_scratch_only(void);
 void memblock_clear_kho_scratch_only(void);
-void memmap_init_kho_scratch_pages(void);
+bool memblock_is_kho_scratch_memory(phys_addr_t addr);
 #else
 static inline void memblock_set_kho_scratch_only(void) { }
 static inline void memblock_clear_kho_scratch_only(void) { }
-static inline void memmap_init_kho_scratch_pages(void) {}
+static inline bool memblock_is_kho_scratch_memory(phys_addr_t addr)
+{
+	return false;
+}
 #endif
 
 #endif /* _LINUX_MEMBLOCK_H */

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 18509d8082ea..a507366a2cf9 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -1576,35 +1576,10 @@ static __init int kho_init(void)
 }
 fs_initcall(kho_init);
 
-static void __init kho_release_scratch(void)
-{
-	phys_addr_t start, end;
-	u64 i;
-
-	memmap_init_kho_scratch_pages();
-
-	/*
-	 * Mark scratch mem as CMA before we return it. That way we
-	 * ensure that no kernel allocations happen on it. That means
-	 * we can reuse it as scratch memory again later.
-	 */
-	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
-			     MEMBLOCK_KHO_SCRATCH, &start, &end, NULL) {
-		ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
-		ulong end_pfn = pageblock_align(PFN_UP(end));
-		ulong pfn;
-
-		for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages)
-			init_pageblock_migratetype(pfn_to_page(pfn),
-						   MIGRATE_CMA, false);
-	}
-}
-
 void __init kho_memory_init(void)
 {
 	if (kho_in.scratch_phys) {
 		kho_scratch = phys_to_virt(kho_in.scratch_phys);
-		kho_release_scratch();
 
 		if (kho_mem_retrieve(kho_get_fdt()))
 			kho_in.fdt_phys = 0;

diff --git a/mm/memblock.c b/mm/memblock.c
index 4224fdaa8918..fab234f732c3 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_KEXEC_HANDOVER
 #include
@@ -959,28 +960,6 @@ __init void memblock_clear_kho_scratch_only(void)
 {
 	kho_scratch_only = false;
 }
-
-__init void memmap_init_kho_scratch_pages(void)
-{
-	phys_addr_t start, end;
-	unsigned long pfn;
-	int nid;
-	u64 i;
-
-	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
-		return;
-
-	/*
-	 * Initialize struct pages for free scratch memory.
-	 * The struct pages for reserved scratch memory will be set up in
-	 * memmap_init_reserved_pages()
-	 */
-	__for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
-			     MEMBLOCK_KHO_SCRATCH, &start, &end, &nid) {
-		for (pfn = PFN_UP(start); pfn < PFN_DOWN(end); pfn++)
-			init_deferred_page(pfn, nid);
-	}
-}
 #endif
 
 /**
@@ -1971,6 +1950,18 @@ bool __init_memblock memblock_is_map_memory(phys_addr_t addr)
 	return !memblock_is_nomap(&memblock.memory.regions[i]);
 }
 
+#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
+bool __init_memblock memblock_is_kho_scratch_memory(phys_addr_t addr)
+{
+	int i = memblock_search(&memblock.memory, addr);
+
+	if (i == -1)
+		return false;
+
+	return memblock_is_kho_scratch(&memblock.memory.regions[i]);
+}
+#endif
+
 int __init_memblock memblock_search_pfn_nid(unsigned long pfn,
 			 unsigned long *start_pfn, unsigned long *end_pfn)
 {
@@ -2262,6 +2253,12 @@ static void __init memmap_init_reserved_range(phys_addr_t start,
 		 * access it yet.
 		 */
 		__SetPageReserved(page);
+
+#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
+		if (memblock_is_kho_scratch_memory(PFN_PHYS(pfn)) &&
+		    pageblock_aligned(pfn))
+			init_pageblock_migratetype(page, MIGRATE_CMA, false);
+#endif
 	}
 }
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f9f8e1af921c..890c3ae21ba0 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -916,8 +916,15 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		 * over the place during system boot.
 		 */
 		if (pageblock_aligned(pfn)) {
-			init_pageblock_migratetype(page, migratetype,
-						   isolate_pageblock);
+			int mt = migratetype;
+
+#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
+			if (memblock_is_kho_scratch_memory(page_to_phys(page)))
+				mt = MIGRATE_CMA;
+#endif
+
+			init_pageblock_migratetype(page, mt,
+						   isolate_pageblock);
 			cond_resched();
 		}
 		pfn++;
@@ -1970,7 +1977,7 @@ unsigned long __init node_map_pfn_alignment(void)
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 static void __init deferred_free_pages(unsigned long pfn,
-		unsigned long nr_pages)
+		unsigned long nr_pages, enum migratetype mt)
 {
 	struct page *page;
 	unsigned long i;
@@ -1983,8 +1990,7 @@ static void __init deferred_free_pages(unsigned long pfn,
 	/* Free a large naturally-aligned chunk if possible */
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
-			init_pageblock_migratetype(page + i, MIGRATE_MOVABLE,
-						   false);
+			init_pageblock_migratetype(page + i, mt, false);
 		__free_pages_core(page, MAX_PAGE_ORDER, MEMINIT_EARLY);
 		return;
 	}
@@ -1994,8 +2000,7 @@ static void __init deferred_free_pages(unsigned long pfn,
 
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
-			init_pageblock_migratetype(page, MIGRATE_MOVABLE,
-						   false);
+			init_pageblock_migratetype(page, mt, false);
 		__free_pages_core(page, 0, MEMINIT_EARLY);
 	}
 }
@@ -2051,6 +2056,7 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
 	u64 i = 0;
 
 	for_each_free_mem_range(i, nid, 0, &start, &end, NULL) {
+		enum migratetype mt = MIGRATE_MOVABLE;
 		unsigned long spfn = PFN_UP(start);
 		unsigned long epfn = PFN_DOWN(end);
 
@@ -2060,12 +2066,17 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
 		spfn = max(spfn, start_pfn);
 		epfn = min(epfn, end_pfn);
 
+#ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
+		if (memblock_is_kho_scratch_memory(PFN_PHYS(spfn)))
+			mt = MIGRATE_CMA;
+#endif
+
 		while (spfn < epfn) {
 			unsigned long mo_pfn = ALIGN(spfn + 1, MAX_ORDER_NR_PAGES);
 			unsigned long chunk_end = min(mo_pfn, epfn);
 
 			nr_pages += deferred_init_pages(zone, spfn, chunk_end);
-			deferred_free_pages(spfn, chunk_end - spfn);
+			deferred_free_pages(spfn, chunk_end - spfn, mt);
 
 			spfn = chunk_end;
 
-- 
2.54.0.rc1.555.g9c883467ad-goog
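
The common thread in the three hunks above is a check-and-override step:
wherever a pageblock-aligned pfn is being initialized, ask whether it lies
in a KHO scratch region and, if so, replace the default migratetype with
MIGRATE_CMA, so only movable allocations can land there and the area can
serve as scratch again after the next kexec. The standalone sketch below
models just that control flow in plain userspace C; is_kho_scratch_pfn()
and mark_pageblock() are hypothetical stand-ins for the kernel's
memblock_is_kho_scratch_memory() and init_pageblock_migratetype(), not the
real API.

/*
 * Sketch only: the check-and-override pattern from patch 1, with
 * hypothetical stand-ins for the kernel helpers.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* 2 MiB pageblocks with 4 KiB pages */

enum migratetype { MIGRATE_MOVABLE, MIGRATE_CMA };

/* Stand-in for memblock_is_kho_scratch_memory(): one fake scratch range. */
static bool is_kho_scratch_pfn(unsigned long pfn)
{
	return pfn >= 2048 && pfn < 4096;
}

/* Stand-in for init_pageblock_migratetype(). */
static void mark_pageblock(unsigned long pfn, enum migratetype mt)
{
	printf("pageblock %5lu -> %s\n", pfn,
	       mt == MIGRATE_CMA ? "MIGRATE_CMA" : "MIGRATE_MOVABLE");
}

int main(void)
{
	for (unsigned long pfn = 0; pfn < 8192; pfn += PAGEBLOCK_NR_PAGES) {
		/* Default mirrors deferred_init_memmap_chunk()'s default. */
		enum migratetype mt = MIGRATE_MOVABLE;

		if (is_kho_scratch_pfn(pfn))
			mt = MIGRATE_CMA;
		mark_pageblock(pfn, mt);
	}
	return 0;
}

Compiled with a plain cc, this prints MIGRATE_CMA only for the four
pageblocks inside the fake scratch range and MIGRATE_MOVABLE elsewhere.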
From nobody Thu Apr 16 20:47:14 2026
Date: Thu, 16 Apr 2026 13:06:54 +0200
In-Reply-To: <20260416110654.247398-1-mclapinski@google.com>
References: <20260416110654.247398-1-mclapinski@google.com>
Message-ID: <20260416110654.247398-3-mclapinski@google.com>
Subject: [PATCH v8 2/2] kho: make preserved pages compatible with deferred struct page init
From: Michal Clapinski
To: Evangelos Petrongonas, Pasha Tatashin, Mike Rapoport, Pratyush Yadav,
    Alexander Graf, Samiullah Khawaja, kexec@lists.infradead.org,
    linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Vlastimil Babka,
    Suren Baghdasaryan, Michal Hocko, Brendan Jackman, Johannes Weiner,
    Zi Yan, Michal Clapinski

From: Evangelos Petrongonas

When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, struct page
initialization is deferred to parallel kthreads that run later in the
boot process.
During KHO restoration, kho_preserved_memory_reserve() writes metadata
for each preserved memory region. However, if the struct page has not
been initialized yet, this write targets uninitialized memory and can
trigger errors like:

  BUG: unable to handle page fault for address: ...

Fix this by introducing kho_get_preserved_page(), which ensures all
struct pages in a preserved region are initialized by calling
init_deferred_page(), itself a no-op when the struct page is already
initialized.

Signed-off-by: Evangelos Petrongonas
Co-developed-by: Michal Clapinski
Signed-off-by: Michal Clapinski
Reviewed-by: Pratyush Yadav (Google)
Reviewed-by: Pasha Tatashin
Reviewed-by: Mike Rapoport (Microsoft)
---
 kernel/liveupdate/Kconfig          |  2 --
 kernel/liveupdate/kexec_handover.c | 27 ++++++++++++++++++++++++++-
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/kernel/liveupdate/Kconfig b/kernel/liveupdate/Kconfig
index 1a8513f16ef7..c13af38ba23a 100644
--- a/kernel/liveupdate/Kconfig
+++ b/kernel/liveupdate/Kconfig
@@ -1,12 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0-only
 
 menu "Live Update and Kexec HandOver"
-	depends on !DEFERRED_STRUCT_PAGE_INIT
 
 config KEXEC_HANDOVER
 	bool "kexec handover"
 	depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE
-	depends on !DEFERRED_STRUCT_PAGE_INIT
 	select MEMBLOCK_KHO_SCRATCH
 	select KEXEC_FILE
 	select LIBFDT

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index a507366a2cf9..d5718bef6d4d 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -473,6 +473,31 @@ struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages)
 }
 EXPORT_SYMBOL_GPL(kho_restore_pages);
 
+/*
+ * With CONFIG_DEFERRED_STRUCT_PAGE_INIT, struct pages in higher memory regions
+ * may not be initialized yet at the time KHO deserializes preserved memory.
+ * KHO uses the struct page to store metadata and a later initialization would
+ * overwrite it.
+ * Ensure all the struct pages in the preservation are
+ * initialized. kho_preserved_memory_reserve() marks the reservation as noinit
+ * to make sure they don't get re-initialized later.
+ */
+static struct page *__init kho_get_preserved_page(phys_addr_t phys,
+						  unsigned int order)
+{
+	unsigned long pfn = PHYS_PFN(phys);
+	int nid;
+
+	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
+		return pfn_to_page(pfn);
+
+	nid = early_pfn_to_nid(pfn);
+	for (unsigned long i = 0; i < (1UL << order); i++)
+		init_deferred_page(pfn + i, nid);
+
+	return pfn_to_page(pfn);
+}
+
 static int __init kho_preserved_memory_reserve(phys_addr_t phys,
 					       unsigned int order)
 {
@@ -481,7 +506,7 @@ static int __init kho_preserved_memory_reserve(phys_addr_t phys,
 	u64 sz;
 
 	sz = 1 << (order + PAGE_SHIFT);
-	page = phys_to_page(phys);
+	page = kho_get_preserved_page(phys, order);
 
 	/* Reserve the memory preserved in KHO in memblock */
 	memblock_reserve(phys, sz);
-- 
2.54.0.rc1.555.g9c883467ad-goog
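
The contract kho_get_preserved_page() establishes is easy to model outside
the kernel: before metadata is written into a struct page whose
initialization may have been deferred, walk the whole order-sized block and
initialize each page exactly once; because the init step is idempotent,
already-initialized pages cost nothing. Below is a minimal sketch of that
idea under those assumptions; the fake memmap, init_page_once() and
get_preserved_page() are illustrative stand-ins, not the real
init_deferred_page() or kho_get_preserved_page() implementations.

/*
 * Sketch only: the "initialize before use" contract from patch 2,
 * modeled against a tiny fake memmap. Hypothetical names throughout.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	bool initialized;
	unsigned long metadata;	/* what KHO would store in the real page */
};

static struct fake_page memmap[64];	/* tiny fake memmap */

/* Stand-in for init_deferred_page(): a no-op when already initialized. */
static void init_page_once(unsigned long pfn)
{
	if (memmap[pfn].initialized)
		return;
	memmap[pfn].initialized = true;
	memmap[pfn].metadata = 0;
}

/* Stand-in for kho_get_preserved_page(): init the whole order-block. */
static struct fake_page *get_preserved_page(unsigned long pfn,
					    unsigned int order)
{
	for (unsigned long i = 0; i < (1UL << order); i++)
		init_page_once(pfn + i);
	return &memmap[pfn];
}

int main(void)
{
	struct fake_page *page = get_preserved_page(4, 2);

	/* Safe now, even if initialization had been deferred. */
	page->metadata = 0xdead;
	printf("pfn 4 metadata: %#lx\n", page->metadata);
	return 0;
}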