From nobody Tue Dec 2 00:04:07 2025
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav,
 kexec@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] kho: kho_restore_vmalloc: fix initialization of pages array
Date: Tue, 25 Nov 2025 13:09:16 +0200
Message-ID: <20251125110917.843744-2-rppt@kernel.org>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20251125110917.843744-1-rppt@kernel.org>
References: <20251125110917.843744-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Mike Rapoport (Microsoft)"

When a preserved vmalloc allocation used huge pages, all entries in the
pages array added to vm_struct during kho_restore_vmalloc() are wrongly
set to the same page: only the first page of each contiguous chunk is
stored, repeated contig_pages times.

Fix the indexing when assigning pages to that array.
Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Pratyush Yadav
---
 kernel/liveupdate/kexec_handover.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 5809c6fe331c..e64ee87fa62a 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -1096,7 +1096,7 @@ void *kho_restore_vmalloc(const struct kho_vmalloc *preservation)
 			goto err_free_pages_array;
 
 		for (int j = 0; j < contig_pages; j++)
-			pages[idx++] = page;
+			pages[idx++] = page + j;
 
 		phys += contig_pages * PAGE_SIZE;
 	}
-- 
2.50.1

From nobody Tue Dec 2 00:04:07 2025
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav,
 kexec@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] kho: fix restoring of contiguous ranges of order-0 pages
Date: Tue, 25 Nov 2025 13:09:17 +0200
Message-ID: <20251125110917.843744-3-rppt@kernel.org>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20251125110917.843744-1-rppt@kernel.org>
References: <20251125110917.843744-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Mike Rapoport (Microsoft)"

When contiguous ranges of order-0 pages are restored, kho_restore_page()
calls prep_compound_page() with the first page in the range and the
range order as parameters, and then kho_restore_pages() calls
split_page() to make all pages in the range order-0 again. However,
split_page() is not intended to split compound pages, and with
CONFIG_DEBUG_VM enabled it triggers a VM_BUG_ON_PAGE().
Update kho_restore_page() so that it calls prep_compound_page() only
when it restores a folio, and make sure it properly sets the page count
for both large folios and ranges of order-0 pages.

Reported-by: Pratyush Yadav
Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
Signed-off-by: Mike Rapoport (Microsoft)
---
 kernel/liveupdate/kexec_handover.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index e64ee87fa62a..61d17ed1f423 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -219,11 +219,11 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 	return 0;
 }
 
-static struct page *kho_restore_page(phys_addr_t phys)
+static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
 {
 	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
+	unsigned int nr_pages, ref_cnt;
 	union kho_page_info info;
-	unsigned int nr_pages;
 
 	if (!page)
 		return NULL;
@@ -243,11 +243,16 @@ static struct page *kho_restore_page(phys_addr_t phys)
 	/* Head page gets refcount of 1. */
 	set_page_count(page, 1);
 
-	/* For higher order folios, tail pages get a page count of zero. */
+	/*
+	 * For higher order folios, tail pages get a page count of zero.
+	 * For physically contiguous order-0 pages every page gets a page
+	 * count of 1.
+	 */
+	ref_cnt = is_folio ? 0 : 1;
 	for (unsigned int i = 1; i < nr_pages; i++)
-		set_page_count(page + i, 0);
+		set_page_count(page + i, ref_cnt);
 
-	if (info.order > 0)
+	if (is_folio && info.order)
 		prep_compound_page(page, info.order);
 
 	adjust_managed_page_count(page, nr_pages);
@@ -262,7 +267,7 @@ static struct page *kho_restore_page(phys_addr_t phys)
  */
 struct folio *kho_restore_folio(phys_addr_t phys)
 {
-	struct page *page = kho_restore_page(phys);
+	struct page *page = kho_restore_page(phys, true);
 
 	return page ? page_folio(page) : NULL;
 }
@@ -287,11 +292,10 @@ struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages)
 	while (pfn < end_pfn) {
 		const unsigned int order = min(count_trailing_zeros(pfn),
 					       ilog2(end_pfn - pfn));
-		struct page *page = kho_restore_page(PFN_PHYS(pfn));
+		struct page *page = kho_restore_page(PFN_PHYS(pfn), false);
 
 		if (!page)
 			return NULL;
-		split_page(page, order);
 		pfn += 1 << order;
 	}
 
-- 
2.50.1