From: Pratyush Yadav
To: Andrew Morton, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav
Cc: kexec@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Suren Baghdasaryan
Subject: [PATCH v2 1/2] kho: use unsigned long for nr_pages
Date: Fri, 16 Jan 2026 11:22:14 +0000
Message-ID: <20260116112217.915803-2-pratyush@kernel.org>
In-Reply-To: <20260116112217.915803-1-pratyush@kernel.org>
References: <20260116112217.915803-1-pratyush@kernel.org>

With 4 KiB pages, a 32-bit nr_pages can span at most 16 TiB. While that
is a lot, systems with terabytes of RAM already exist, and gup is also
moving to long for nr_pages. Use unsigned long to make KHO future-proof.

Suggested-by: Pasha Tatashin
Signed-off-by: Pratyush Yadav
Reviewed-by: Mike Rapoport (Microsoft)
Reviewed-by: Pasha Tatashin
---
Changes in v2:
- New in v2.
 include/linux/kexec_handover.h     |  6 +++---
 kernel/liveupdate/kexec_handover.c | 11 ++++++-----
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
index 5f7b9de97e8d..81814aa92370 100644
--- a/include/linux/kexec_handover.h
+++ b/include/linux/kexec_handover.h
@@ -45,15 +45,15 @@ bool is_kho_boot(void);
 
 int kho_preserve_folio(struct folio *folio);
 void kho_unpreserve_folio(struct folio *folio);
-int kho_preserve_pages(struct page *page, unsigned int nr_pages);
-void kho_unpreserve_pages(struct page *page, unsigned int nr_pages);
+int kho_preserve_pages(struct page *page, unsigned long nr_pages);
+void kho_unpreserve_pages(struct page *page, unsigned long nr_pages);
 int kho_preserve_vmalloc(void *ptr, struct kho_vmalloc *preservation);
 void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation);
 void *kho_alloc_preserve(size_t size);
 void kho_unpreserve_free(void *mem);
 void kho_restore_free(void *mem);
 struct folio *kho_restore_folio(phys_addr_t phys);
-struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages);
+struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages);
 void *kho_restore_vmalloc(const struct kho_vmalloc *preservation);
 int kho_add_subtree(const char *name, void *fdt);
 void kho_remove_subtree(void *fdt);
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 9dc51fab604f..709484fbf9fd 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -222,7 +222,8 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
-	unsigned int nr_pages, ref_cnt;
+	unsigned long nr_pages;
+	unsigned int ref_cnt;
 	union kho_page_info info;
 
 	if (!page)
@@ -249,7 +250,7 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 	 * count of 1
 	 */
 	ref_cnt = is_folio ? 0 : 1;
-	for (unsigned int i = 1; i < nr_pages; i++)
+	for (unsigned long i = 1; i < nr_pages; i++)
 		set_page_count(page + i, ref_cnt);
 
 	if (is_folio && info.order)
@@ -283,7 +284,7 @@ EXPORT_SYMBOL_GPL(kho_restore_folio);
  *
  * Return: 0 on success, error code on failure
  */
-struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages)
+struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages)
 {
 	const unsigned long start_pfn = PHYS_PFN(phys);
 	const unsigned long end_pfn = start_pfn + nr_pages;
@@ -829,7 +830,7 @@ EXPORT_SYMBOL_GPL(kho_unpreserve_folio);
  *
  * Return: 0 on success, error code on failure
  */
-int kho_preserve_pages(struct page *page, unsigned int nr_pages)
+int kho_preserve_pages(struct page *page, unsigned long nr_pages)
 {
 	struct kho_mem_track *track = &kho_out.track;
 	const unsigned long start_pfn = page_to_pfn(page);
@@ -873,7 +874,7 @@ EXPORT_SYMBOL_GPL(kho_preserve_pages);
  * kho_preserve_pages() call. Unpreserving arbitrary sub-ranges of larger
  * preserved blocks is not supported.
  */
-void kho_unpreserve_pages(struct page *page, unsigned int nr_pages)
+void kho_unpreserve_pages(struct page *page, unsigned long nr_pages)
 {
 	struct kho_mem_track *track = &kho_out.track;
 	const unsigned long start_pfn = page_to_pfn(page);
-- 
2.52.0.457.g6b5491de43-goog