From nobody Sat Feb 7 23:23:07 2026
From: Pratyush Yadav
To: Andrew Morton, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav
Cc: kexec@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Suren Baghdasaryan
Subject: [PATCH v2 1/2] kho: use unsigned long for nr_pages
Date: Fri, 16 Jan 2026 11:22:14 +0000
Message-ID: <20260116112217.915803-2-pratyush@kernel.org>
In-Reply-To: <20260116112217.915803-1-pratyush@kernel.org>
References: <20260116112217.915803-1-pratyush@kernel.org>

With 4K pages, a 32-bit nr_pages can span at most 16 TiB. While that is
a lot, systems with terabytes of RAM already exist. gup is also moving
to using long for nr_pages. Use unsigned long and make KHO future-proof.

Suggested-by: Pasha Tatashin
Signed-off-by: Pratyush Yadav
Reviewed-by: Mike Rapoport (Microsoft)
Reviewed-by: Pasha Tatashin
---
Changes in v2:
- New in v2.
 include/linux/kexec_handover.h     |  6 +++---
 kernel/liveupdate/kexec_handover.c | 11 ++++++-----
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
index 5f7b9de97e8d..81814aa92370 100644
--- a/include/linux/kexec_handover.h
+++ b/include/linux/kexec_handover.h
@@ -45,15 +45,15 @@ bool is_kho_boot(void);
 
 int kho_preserve_folio(struct folio *folio);
 void kho_unpreserve_folio(struct folio *folio);
-int kho_preserve_pages(struct page *page, unsigned int nr_pages);
-void kho_unpreserve_pages(struct page *page, unsigned int nr_pages);
+int kho_preserve_pages(struct page *page, unsigned long nr_pages);
+void kho_unpreserve_pages(struct page *page, unsigned long nr_pages);
 int kho_preserve_vmalloc(void *ptr, struct kho_vmalloc *preservation);
 void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation);
 void *kho_alloc_preserve(size_t size);
 void kho_unpreserve_free(void *mem);
 void kho_restore_free(void *mem);
 struct folio *kho_restore_folio(phys_addr_t phys);
-struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages);
+struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages);
 void *kho_restore_vmalloc(const struct kho_vmalloc *preservation);
 int kho_add_subtree(const char *name, void *fdt);
 void kho_remove_subtree(void *fdt);
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 9dc51fab604f..709484fbf9fd 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -222,7 +222,8 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
-	unsigned int nr_pages, ref_cnt;
+	unsigned long nr_pages;
+	unsigned int ref_cnt;
 	union kho_page_info info;
 
 	if (!page)
@@ -249,7 +250,7 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 	 * count of 1
 	 */
 	ref_cnt = is_folio ? 0 : 1;
-	for (unsigned int i = 1; i < nr_pages; i++)
+	for (unsigned long i = 1; i < nr_pages; i++)
 		set_page_count(page + i, ref_cnt);
 
 	if (is_folio && info.order)
@@ -283,7 +284,7 @@ EXPORT_SYMBOL_GPL(kho_restore_folio);
  *
  * Return: 0 on success, error code on failure
  */
-struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages)
+struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages)
 {
 	const unsigned long start_pfn = PHYS_PFN(phys);
 	const unsigned long end_pfn = start_pfn + nr_pages;
@@ -829,7 +830,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_folio);
  *
  * Return: 0 on success, error code on failure
  */
-int kho_preserve_pages(struct page *page, unsigned int nr_pages)
+int kho_preserve_pages(struct page *page, unsigned long nr_pages)
 {
 	struct kho_mem_track *track = &kho_out.track;
 	const unsigned long start_pfn = page_to_pfn(page);
@@ -873,7 +874,7 @@ EXPORT_SYMBOL_GPL(kho_preserve_pages);
 * kho_preserve_pages() call. Unpreserving arbitrary sub-ranges of larger
 * preserved blocks is not supported.
 */
-void kho_unpreserve_pages(struct page *page, unsigned int nr_pages)
+void kho_unpreserve_pages(struct page *page, unsigned long nr_pages)
 {
 	struct kho_mem_track *track = &kho_out.track;
 	const unsigned long start_pfn = page_to_pfn(page);
-- 
2.52.0.457.g6b5491de43-goog

From nobody Sat Feb 7 23:23:07 2026
From: Pratyush Yadav
To: Andrew Morton, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav
Cc: kexec@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Suren Baghdasaryan
Subject: [PATCH v2 2/2] kho: simplify page initialization in kho_restore_page()
Date: Fri, 16 Jan 2026 11:22:15 +0000
Message-ID: <20260116112217.915803-3-pratyush@kernel.org>
In-Reply-To: <20260116112217.915803-1-pratyush@kernel.org>
References: <20260116112217.915803-1-pratyush@kernel.org>

When restoring a page (from kho_restore_pages()) or a folio (from
kho_restore_folio()), KHO must initialize the struct pages. The
initialization differs slightly depending on whether a folio or a set
of 0-order pages is requested. Conceptually, it is quite simple: when
restoring 0-order pages, each page gets a refcount of 1, and that's it.
When restoring a folio, the head page gets a refcount of 1 and the tail
pages get 0.

kho_restore_page() tries to combine the two separate initialization
flows into one piece of code. While it works fine, it is more
complicated to read than it needs to be.

Make the code simpler by splitting the two initialization paths into
two separate functions. This improves readability by clearly showing
how each type must be initialized.
Signed-off-by: Pratyush Yadav
Reviewed-by: Mike Rapoport (Microsoft)
Reviewed-by: Pasha Tatashin
---
Changes in v2:
- Use unsigned long for nr_pages.

 kernel/liveupdate/kexec_handover.c | 40 +++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 709484fbf9fd..92da76977684 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -219,11 +219,32 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 	return 0;
 }
 
+/* For physically contiguous 0-order pages. */
+static void kho_init_pages(struct page *page, unsigned long nr_pages)
+{
+	for (unsigned long i = 0; i < nr_pages; i++)
+		set_page_count(page + i, 1);
+}
+
+static void kho_init_folio(struct page *page, unsigned int order)
+{
+	unsigned long nr_pages = (1 << order);
+
+	/* Head page gets refcount of 1. */
+	set_page_count(page, 1);
+
+	/* For higher order folios, tail pages get a page count of zero. */
+	for (unsigned long i = 1; i < nr_pages; i++)
+		set_page_count(page + i, 0);
+
+	if (order > 0)
+		prep_compound_page(page, order);
+}
+
 static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
 	unsigned long nr_pages;
-	unsigned int ref_cnt;
 	union kho_page_info info;
 
 	if (!page)
@@ -241,20 +262,11 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 
 	/* Clear private to make sure later restores on this page error out. */
 	page->private = 0;
-	/* Head page gets refcount of 1. */
-	set_page_count(page, 1);
-
-	/*
-	 * For higher order folios, tail pages get a page count of zero.
-	 * For physically contiguous order-0 pages every pages gets a page
-	 * count of 1
-	 */
-	ref_cnt = is_folio ? 0 : 1;
-	for (unsigned long i = 1; i < nr_pages; i++)
-		set_page_count(page + i, ref_cnt);
 
-	if (is_folio && info.order)
-		prep_compound_page(page, info.order);
+	if (is_folio)
+		kho_init_folio(page, info.order);
+	else
+		kho_init_pages(page, nr_pages);
 
 	adjust_managed_page_count(page, nr_pages);
 	return page;
-- 
2.52.0.457.g6b5491de43-goog