From: Pratyush Yadav
To: Andrew Morton, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav
Cc: kexec@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] kho: simplify page initialization in kho_restore_page()
Date: Tue, 23 Dec 2025 11:44:46 +0100
Message-ID: <20251223104448.195589-1-pratyush@kernel.org>

When restoring a page (from kho_restore_pages()) or a folio (from
kho_restore_folio()), KHO must initialize the struct page. The
initialization differs slightly depending on whether a folio or a set of
0-order pages is requested.

Conceptually, it is quite simple to understand. When restoring 0-order
pages, each page gets a refcount of 1 and that's it. When restoring a
folio, the head page gets a refcount of 1 and the tail pages get 0.

kho_restore_page() tries to combine the two separate initialization
flows into one piece of code. While it works fine, it is more
complicated to read than it needs to be. Make the code simpler by
splitting the two initialization paths into two separate functions. This
improves readability by clearly showing how each type must be
initialized.
Signed-off-by: Pratyush Yadav
---

Notes:
    This patch is a follow up from
    https://lore.kernel.org/linux-mm/86ms42mj44.fsf@kernel.org/

 kernel/liveupdate/kexec_handover.c | 41 ++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 2d9ce33c63dc..304c26fd5ee6 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -219,11 +219,33 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 	return 0;
 }
 
+/* For physically contiguous 0-order pages. */
+static void kho_init_pages(struct page *page, unsigned int nr_pages)
+{
+	for (unsigned int i = 0; i < nr_pages; i++)
+		set_page_count(page + i, 1);
+}
+
+static void kho_init_folio(struct page *page, unsigned int order)
+{
+	unsigned int nr_pages = (1 << order);
+
+	/* Head page gets refcount of 1. */
+	set_page_count(page, 1);
+
+	/* For higher order folios, tail pages get a page count of zero. */
+	for (unsigned int i = 1; i < nr_pages; i++)
+		set_page_count(page + i, 0);
+
+	if (order > 0)
+		prep_compound_page(page, order);
+}
+
 static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
-	unsigned int nr_pages, ref_cnt;
 	union kho_page_info info;
+	unsigned int nr_pages;
 
 	if (!page)
 		return NULL;
@@ -240,20 +262,11 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 
 	/* Clear private to make sure later restores on this page error out. */
 	page->private = 0;
-	/* Head page gets refcount of 1. */
-	set_page_count(page, 1);
 
-	/*
-	 * For higher order folios, tail pages get a page count of zero.
-	 * For physically contiguous order-0 pages every pages gets a page
-	 * count of 1
-	 */
-	ref_cnt = is_folio ? 0 : 1;
-	for (unsigned int i = 1; i < nr_pages; i++)
-		set_page_count(page + i, ref_cnt);
-
-	if (is_folio && info.order)
-		prep_compound_page(page, info.order);
+	if (is_folio)
+		kho_init_folio(page, info.order);
+	else
+		kho_init_pages(page, nr_pages);
 
 	adjust_managed_page_count(page, nr_pages);
 	return page;

base-commit: 9f7b37a7c250baf3092719d4ebc9a8edaa79a7b4
-- 
2.43.0