From nobody Tue Apr 7 23:40:51 2026
Date: Wed, 11 Mar 2026 13:55:39 +0100
In-Reply-To: <20260311125539.4123672-1-mclapinski@google.com>
References: <20260311125539.4123672-1-mclapinski@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
X-Mailer: git-send-email 2.53.0.473.g4a7958ca14-goog
Message-ID: <20260311125539.4123672-3-mclapinski@google.com>
Subject: [PATCH v6 2/2] kho: make preserved pages compatible with deferred struct page init
From: Michal Clapinski
To: Evangelos Petrongonas, Pasha Tatashin, Mike Rapoport, Pratyush Yadav, Alexander Graf, Samiullah Khawaja, kexec@lists.infradead.org, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Michal Clapinski
Content-Type: text/plain; charset="utf-8"

From: Evangelos Petrongonas

When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, struct page
initialization is deferred to parallel kthreads that run later in the
boot process. During KHO restoration, kho_preserved_memory_reserve()
writes metadata for each preserved memory region. However, if the
struct page has not been initialized yet, this write targets
uninitialized memory, potentially leading to errors like:

  BUG: unable to handle page fault for address: ...

Fix this by introducing kho_get_preserved_page(), which ensures all
struct pages in a preserved region are initialized by calling
init_deferred_page(), which is a no-op when the struct page is already
initialized.

Signed-off-by: Evangelos Petrongonas
Co-developed-by: Michal Clapinski
Signed-off-by: Michal Clapinski
Reviewed-by: Pratyush Yadav (Google)
Reviewed-by: Pasha Tatashin
Reviewed-by: Mike Rapoport (Microsoft)
---
I think we can't initialize those struct pages in kho_restore_page. I
encountered this stack:

  page_zone(start_page)
  __pageblock_pfn_to_page
  set_zone_contiguous
  page_alloc_init_late

So, at the end of page_alloc_init_late, struct pages are expected to be
already initialized. set_zone_contiguous() looks at the first and last
struct page of each pageblock in each populated zone to figure out
whether the zone is contiguous. If a KHO page lands on a pageblock
boundary, this will lead to an access of an uninitialized struct page.
There is also page_ext_init, which invokes pfn_to_nid, which calls
page_to_nid for each section-aligned page. There might be other places
that do something similar. Therefore, it's a good idea to initialize
all struct pages by the end of deferred struct page init. That's why
I'm resending Evangelos's patch.

I also tried to implement Pratyush's idea, i.e. iterate over zones,
then get the node from the zone. I didn't notice any performance
difference, even with 8GB of KHO.
---
 kernel/liveupdate/Kconfig          |  2 --
 kernel/liveupdate/kexec_handover.c | 27 ++++++++++++++++++++++++++-
 2 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/kernel/liveupdate/Kconfig b/kernel/liveupdate/Kconfig
index 1a8513f16ef7..c13af38ba23a 100644
--- a/kernel/liveupdate/Kconfig
+++ b/kernel/liveupdate/Kconfig
@@ -1,12 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0-only
 
 menu "Live Update and Kexec HandOver"
-	depends on !DEFERRED_STRUCT_PAGE_INIT
 
 config KEXEC_HANDOVER
 	bool "kexec handover"
 	depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE
-	depends on !DEFERRED_STRUCT_PAGE_INIT
 	select MEMBLOCK_KHO_SCRATCH
 	select KEXEC_FILE
 	select LIBFDT
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 09cb6660ade7..1f9707d11e5f 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -471,6 +471,31 @@ struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages)
 }
 EXPORT_SYMBOL_GPL(kho_restore_pages);
 
+/*
+ * With CONFIG_DEFERRED_STRUCT_PAGE_INIT, struct pages in higher memory regions
+ * may not be initialized yet at the time KHO deserializes preserved memory.
+ * KHO uses the struct page to store metadata and a later initialization would
+ * overwrite it. Ensure all the struct pages in the preservation are
+ * initialized. kho_preserved_memory_reserve() marks the reservation as noinit
+ * to make sure they don't get re-initialized later.
+ */
+static struct page *__init kho_get_preserved_page(phys_addr_t phys,
+						  unsigned int order)
+{
+	unsigned long pfn = PHYS_PFN(phys);
+	int nid;
+
+	if (!IS_ENABLED(CONFIG_DEFERRED_STRUCT_PAGE_INIT))
+		return pfn_to_page(pfn);
+
+	nid = early_pfn_to_nid(pfn);
+	for (unsigned long i = 0; i < (1UL << order); i++)
+		init_deferred_page(pfn + i, nid);
+
+	return pfn_to_page(pfn);
+}
+
 static int __init kho_preserved_memory_reserve(phys_addr_t phys,
 					       unsigned int order)
 {
@@ -479,7 +504,7 @@ static int __init kho_preserved_memory_reserve(phys_addr_t phys,
 	u64 sz;
 
 	sz = 1 << (order + PAGE_SHIFT);
-	page = phys_to_page(phys);
+	page = kho_get_preserved_page(phys, order);
 
 	/* Reserve the memory preserved in KHO in memblock */
 	memblock_reserve(phys, sz);
-- 
2.53.0.473.g4a7958ca14-goog