From nobody Fri Apr 3 22:31:11 2026
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Potapenko, Alexander Viro, Andreas Larsson, Ard Biesheuvel,
 Borislav Petkov, Brendan Jackman, "Christophe Leroy (CS GROUP)",
 Catalin Marinas, Christian Brauner, "David S. Miller", Dave Hansen,
 David Hildenbrand, Dmitry Vyukov, Ilias Apalodimas, Ingo Molnar,
 Jan Kara, Johannes Weiner, "Liam R. Howlett", Lorenzo Stoakes,
 Madhavan Srinivasan, Marco Elver, Marek Szyprowski, Masami Hiramatsu,
 Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
 "H. Peter Anvin", Rob Herring, Robin Murphy, Saravana Kannan,
 Suren Baghdasaryan, Thomas Gleixner, Vlastimil Babka, Will Deacon,
 Zi Yan, devicetree@vger.kernel.org, iommu@lists.linux.dev,
 kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
 linux-efi@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v2 1/9] memblock: reserve_mem: fix end calculation in reserve_mem_release_by_name()
Date: Mon, 23 Mar 2026 09:48:28 +0200
Message-ID: <20260323074836.3653702-2-rppt@kernel.org>
In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org>
References: <20260323074836.3653702-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

free_reserved_area() expects its end parameter to point to the first
address after the area, but reserve_mem_release_by_name() passes it the
last address inside the area.

Remove the subtraction of one in the calculation of the area end.
Fixes: 74e2498ccf7b ("mm/memblock: Add reserved memory release function")
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/memblock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b3ddfdec7a80..d4a02f1750e9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2434,7 +2434,7 @@ int reserve_mem_release_by_name(const char *name)
 		return 0;
 
 	start = phys_to_virt(map->start);
-	end = start + map->size - 1;
+	end = start + map->size;
 	snprintf(buf, sizeof(buf), "reserve_mem:%s", name);
 	free_reserved_area(start, end, 0, buf);
 	map->size = 0;
-- 
2.53.0

From nobody Fri Apr 3 22:31:11 2026
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 2/9] powerpc: fadump: pair alloc_pages_exact() with free_pages_exact()
Date: Mon, 23 Mar 2026 09:48:29 +0200
Message-ID: <20260323074836.3653702-3-rppt@kernel.org>
In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org>
References: <20260323074836.3653702-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

fadump allocates buffers with alloc_pages_exact(), but then marks them
as reserved and frees them with free_reserved_area(). This is
unnecessary: pages allocated with alloc_pages_exact() can simply be
freed with free_pages_exact().

Replace the freeing of memory in fadump_free_buffer() with
free_pages_exact() and simplify the allocation code so that it no
longer marks the allocated pages as reserved.
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/powerpc/kernel/fadump.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 4ebc333dd786..501d43bf18f3 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -775,24 +775,12 @@ void __init fadump_update_elfcore_header(char *bufp)
 
 static void *__init fadump_alloc_buffer(unsigned long size)
 {
-	unsigned long count, i;
-	struct page *page;
-	void *vaddr;
-
-	vaddr = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
-	if (!vaddr)
-		return NULL;
-
-	count = PAGE_ALIGN(size) / PAGE_SIZE;
-	page = virt_to_page(vaddr);
-	for (i = 0; i < count; i++)
-		mark_page_reserved(page + i);
-	return vaddr;
+	return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
 }
 
 static void fadump_free_buffer(unsigned long vaddr, unsigned long size)
 {
-	free_reserved_area((void *)vaddr, (void *)(vaddr + size), -1, NULL);
+	free_pages_exact((void *)vaddr, size);
 }
 
 s32 __init fadump_setup_cpu_notes_buf(u32 num_cpus)
-- 
2.53.0

From nobody Fri Apr 3 22:31:11 2026
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 3/9] powerpc: opal-core: pair alloc_pages_exact() with free_pages_exact()
Date: Mon, 23 Mar 2026 09:48:30 +0200
Message-ID: <20260323074836.3653702-4-rppt@kernel.org>
In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org>
References: <20260323074836.3653702-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

opal-core allocates buffers with alloc_pages_exact(), but then marks
them as reserved and frees them with free_reserved_area(). This is
unnecessary: pages allocated with alloc_pages_exact() can simply be
freed with free_pages_exact().

Replace the freeing of memory in opalcore_cleanup() with
free_pages_exact() and simplify the allocation code so that it no
longer marks the allocated pages as reserved.
Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/powerpc/platforms/powernv/opal-core.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/opal-core.c b/arch/powerpc/platforms/powernv/opal-core.c
index e76e462f55f6..32662d30d70f 100644
--- a/arch/powerpc/platforms/powernv/opal-core.c
+++ b/arch/powerpc/platforms/powernv/opal-core.c
@@ -303,7 +303,6 @@ static int __init create_opalcore(void)
 	struct device_node *dn;
 	struct opalcore *new;
 	loff_t opalcore_off;
-	struct page *page;
 	Elf64_Phdr *phdr;
 	Elf64_Ehdr *elf;
 	int i, ret;
@@ -328,11 +327,6 @@ static int __init create_opalcore(void)
 		oc_conf->opalcorebuf_sz = 0;
 		return -ENOMEM;
 	}
-	count = oc_conf->opalcorebuf_sz / PAGE_SIZE;
-	page = virt_to_page(oc_conf->opalcorebuf);
-	for (i = 0; i < count; i++)
-		mark_page_reserved(page + i);
-
 	pr_debug("opalcorebuf = 0x%llx\n", (u64)oc_conf->opalcorebuf);
 
 	/* Read OPAL related device-tree entries */
@@ -437,10 +431,7 @@ static void opalcore_cleanup(void)
 
 	/* free the buffer used for setting up OPAL core */
 	if (oc_conf->opalcorebuf) {
-		void *end = (void *)((u64)oc_conf->opalcorebuf +
-				     oc_conf->opalcorebuf_sz);
-
-		free_reserved_area(oc_conf->opalcorebuf, end, -1, NULL);
+		free_pages_exact(oc_conf->opalcorebuf, oc_conf->opalcorebuf_sz);
 		oc_conf->opalcorebuf = NULL;
 		oc_conf->opalcorebuf_sz = 0;
 	}
-- 
2.53.0

From nobody Fri Apr 3 22:31:11 2026
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 4/9] mm: move free_reserved_area() to mm/memblock.c
Date: Mon, 23 Mar 2026 09:48:31 +0200
Message-ID: <20260323074836.3653702-5-rppt@kernel.org>
In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org>
References: <20260323074836.3653702-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

free_reserved_area() is related to memblock as it frees reserved memory
back to the buddy allocator, similar to what memblock_free_late() does.

Move free_reserved_area() to mm/memblock.c to prepare for further
consolidation of the functions that free reserved memory.

No functional changes.
Signed-off-by: Mike Rapoport (Microsoft)
Acked-by: Vlastimil Babka (SUSE)
---
 mm/memblock.c                     | 37 ++++++++++++++++++++++++++++++-
 mm/page_alloc.c                   | 36 ------------------------------
 tools/include/linux/mm.h          |  1 +
 tools/testing/memblock/internal.h | 34 +++++++++++++++++++++++++---
 4 files changed, 68 insertions(+), 40 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index d4a02f1750e9..c0896efbee97 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -893,6 +893,42 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size)
 	return memblock_remove_range(&memblock.memory, base, size);
 }
 
+unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
+{
+	void *pos;
+	unsigned long pages = 0;
+
+	start = (void *)PAGE_ALIGN((unsigned long)start);
+	end = (void *)((unsigned long)end & PAGE_MASK);
+	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
+		struct page *page = virt_to_page(pos);
+		void *direct_map_addr;
+
+		/*
+		 * 'direct_map_addr' might be different from 'pos'
+		 * because some architectures' virt_to_page()
+		 * work with aliases. Getting the direct map
+		 * address ensures that we get a _writeable_
+		 * alias for the memset().
+		 */
+		direct_map_addr = page_address(page);
+		/*
+		 * Perform a kasan-unchecked memset() since this memory
+		 * has not been initialized.
+		 */
+		direct_map_addr = kasan_reset_tag(direct_map_addr);
+		if ((unsigned int)poison <= 0xFF)
+			memset(direct_map_addr, poison, PAGE_SIZE);
+
+		free_reserved_page(page);
+	}
+
+	if (pages && s)
+		pr_info("Freeing %s memory: %ldK\n", s, K(pages));
+
+	return pages;
+}
+
 /**
  * memblock_free - free boot memory allocation
  * @ptr: starting address of the boot memory allocation
@@ -1776,7 +1812,6 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
 		totalram_pages_inc();
 	}
 }
-
 /*
  * Remaining API functions
  */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d4b6f1a554e..df3d61253001 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6234,42 +6234,6 @@ void adjust_managed_page_count(struct page *page, long count)
 }
 EXPORT_SYMBOL(adjust_managed_page_count);
 
-unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
-{
-	void *pos;
-	unsigned long pages = 0;
-
-	start = (void *)PAGE_ALIGN((unsigned long)start);
-	end = (void *)((unsigned long)end & PAGE_MASK);
-	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
-		struct page *page = virt_to_page(pos);
-		void *direct_map_addr;
-
-		/*
-		 * 'direct_map_addr' might be different from 'pos'
-		 * because some architectures' virt_to_page()
-		 * work with aliases. Getting the direct map
-		 * address ensures that we get a _writeable_
-		 * alias for the memset().
-		 */
-		direct_map_addr = page_address(page);
-		/*
-		 * Perform a kasan-unchecked memset() since this memory
-		 * has not been initialized.
-		 */
-		direct_map_addr = kasan_reset_tag(direct_map_addr);
-		if ((unsigned int)poison <= 0xFF)
-			memset(direct_map_addr, poison, PAGE_SIZE);
-
-		free_reserved_page(page);
-	}
-
-	if (pages && s)
-		pr_info("Freeing %s memory: %ldK\n", s, K(pages));
-
-	return pages;
-}
-
 void free_reserved_page(struct page *page)
 {
 	clear_page_tag_ref(page);
diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h
index 028f3faf46e7..4407d8396108 100644
--- a/tools/include/linux/mm.h
+++ b/tools/include/linux/mm.h
@@ -17,6 +17,7 @@
 
 #define __va(x) ((void *)((unsigned long)(x)))
 #define __pa(x) ((unsigned long)(x))
+#define __pa_symbol(x) ((unsigned long)(x))
 
 #define pfn_to_page(pfn) ((void *)((pfn) * PAGE_SIZE))
 
diff --git a/tools/testing/memblock/internal.h b/tools/testing/memblock/internal.h
index 009b97bbdd22..b72be2968104 100644
--- a/tools/testing/memblock/internal.h
+++ b/tools/testing/memblock/internal.h
@@ -11,9 +11,22 @@ static int memblock_debug = 1;
 
 #define pr_warn_ratelimited(fmt, ...) \
 	printf(fmt, ##__VA_ARGS__)
 
+#define K(x) ((x) << (PAGE_SHIFT-10))
+
 bool mirrored_kernelcore = false;
 
 struct page {};
+static inline void *page_address(struct page *page)
+{
+	BUG();
+	return page;
+}
+
+static inline struct page *virt_to_page(void *virt)
+{
+	BUG();
+	return virt;
+}
 
 void memblock_free_pages(unsigned long pfn, unsigned int order)
 {
@@ -23,10 +36,25 @@ static inline void accept_memory(phys_addr_t start, unsigned long size)
 {
 }
 
-static inline unsigned long free_reserved_area(void *start, void *end,
-					       int poison, const char *s)
+unsigned long free_reserved_area(void *start, void *end, int poison, const char *s);
+void free_reserved_page(struct page *page);
+
+static inline bool deferred_pages_enabled(void)
+{
+	return false;
+}
+
+#define for_each_valid_pfn(pfn, start_pfn, end_pfn) \
+	for ((pfn) = (start_pfn); (pfn) < (end_pfn); (pfn)++)
+
+static inline void *kasan_reset_tag(const void *addr)
+{
+	return (void *)addr;
+}
+
+static inline bool __is_kernel(unsigned long addr)
 {
-	return 0;
+	return false;
 }
 
 #endif
-- 
2.53.0

From nobody Fri Apr 3 22:31:11 2026
From: Mike Rapoport
To: Andrew Morton
Subject: [PATCH v2 5/9] memblock: make free_reserved_area() more robust
Date: Mon, 23 Mar 2026 09:48:32 +0200
Message-ID: <20260323074836.3653702-6-rppt@kernel.org>
In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org>
References: <20260323074836.3653702-1-rppt@kernel.org>

From: "Mike Rapoport (Microsoft)"

There are two potential problems in free_reserved_area():

* it may free a page whose buddy page does not exist
* it may be passed a virtual address from an alias mapping that won't
  be properly translated by virt_to_page(), for example a symbol
  address on arm64

While the first issue is rather theoretical and the second one does not
manifest itself because all the callers do the right thing, it is easy
to make free_reserved_area() robust enough to avoid both potential
issues.

Replace the loop over virtual addresses with a loop over pfns that uses
for_each_valid_pfn(), and use __pa() or __pa_symbol(), depending on the
virtual mapping alias, to correctly determine the loop boundaries.
Signed-off-by: Mike Rapoport (Microsoft)
---
 mm/memblock.c | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index c0896efbee97..eb086724802a 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -895,21 +895,32 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size)
 
 unsigned long free_reserved_area(void *start, void *end, int poison, const char *s)
 {
-	void *pos;
-	unsigned long pages = 0;
+	phys_addr_t start_pa, end_pa;
+	unsigned long pages = 0, pfn;
 
-	start = (void *)PAGE_ALIGN((unsigned long)start);
-	end = (void *)((unsigned long)end & PAGE_MASK);
-	for (pos = start; pos < end; pos += PAGE_SIZE, pages++) {
-		struct page *page = virt_to_page(pos);
+	/*
+	 * end is the first address past the region and it may be beyond what
+	 * __pa() or __pa_symbol() can handle.
+	 * Use the address included in the range for the conversion and add
+	 * back 1 afterwards.
+	 */
+	if (__is_kernel((unsigned long)start)) {
+		start_pa = __pa_symbol(start);
+		end_pa = __pa_symbol(end - 1) + 1;
+	} else {
+		start_pa = __pa(start);
+		end_pa = __pa(end - 1) + 1;
+	}
+
+	for_each_valid_pfn(pfn, PFN_UP(start_pa), PFN_DOWN(end_pa)) {
+		struct page *page = pfn_to_page(pfn);
 		void *direct_map_addr;
 
 		/*
-		 * 'direct_map_addr' might be different from 'pos'
-		 * because some architectures' virt_to_page()
-		 * work with aliases. Getting the direct map
-		 * address ensures that we get a _writeable_
-		 * alias for the memset().
+		 * 'direct_map_addr' might be different from the kernel virtual
+		 * address because some architectures use aliases.
+		 * Going via physical address, pfn_to_page() and page_address()
+		 * ensures that we get a _writeable_ alias for the memset().
 		 */
 		direct_map_addr = page_address(page);
 		/*
@@ -921,6 +932,7 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
 			memset(direct_map_addr, poison, PAGE_SIZE);
 
 		free_reserved_page(page);
+		pages++;
 	}
 
 	if (pages && s)
-- 
2.53.0

From nobody Fri Apr 3 22:31:11 2026
b=B6fPKlxHQgWSYzAKTM8G/WWrhZHeaeNsETUGqkQueGTMK64wv6WfUMzzorQkeopo7 hBHZE+Gqs69UQLXKRy/u5tAt1wOavloqFcv52lzntc3bYHhMnIBwNhwWERo67xoRWs e+V1fc4IuFsAILvV17/Umo0CswXHiww2G9w5Xmd5bUZ/65HujbklLiPL0z9Ta4QPPB jXAaAUCkB5sI+cmP91/5BYP+gLzDhZrZBjNKcAP9UvluKbCVPCd5xFSjRu5OWYWMU0 Gk6PiNYwrqh+5Y8Mj243Ec5aEbQi2bt8Ki9ITja32vtDHNFNfgb/1VhVG4vtHkAh/C 1xTA2fQLh15Mg== From: Mike Rapoport To: Andrew Morton Cc: Alexander Potapenko , Alexander Viro , Andreas Larsson , Ard Biesheuvel , Borislav Petkov , Brendan Jackman , "Christophe Leroy (CS GROUP)" , Catalin Marinas , Christian Brauner , "David S. Miller" , Dave Hansen , David Hildenbrand , Dmitry Vyukov , Ilias Apalodimas , Ingo Molnar , Jan Kara , Johannes Weiner , "Liam R. Howlett" , Lorenzo Stoakes , Madhavan Srinivasan , Marco Elver , Marek Szyprowski , Masami Hiramatsu , Michael Ellerman , Michal Hocko , Mike Rapoport , Nicholas Piggin , "H. Peter Anvin" , Rob Herring , Robin Murphy , Saravana Kannan , Suren Baghdasaryan , Thomas Gleixner , Vlastimil Babka , Will Deacon , Zi Yan , devicetree@vger.kernel.org, iommu@lists.linux.dev, kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org, linux-efi@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v2 6/9] memblock: extract page freeing from free_reserved_area() into a helper Date: Mon, 23 Mar 2026 09:48:33 +0200 Message-ID: <20260323074836.3653702-7-rppt@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org> References: <20260323074836.3653702-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: "Mike Rapoport (Microsoft)" There are two functions that release pages to the buddy 
allocator late in boot: free_reserved_area() and memblock_free_late(). Currently they use different underlying machinery: free_reserved_area() runs each page being freed through free_reserved_page(), while memblock_free_late() uses memblock_free_pages() -> __free_pages_core(). In the end, both boil down to a loop that frees a range page by page. Extract the loop that frees pages from free_reserved_area() into a helper and use that helper in memblock_free_late(). Signed-off-by: Mike Rapoport (Microsoft) --- mm/memblock.c | 55 +++++++++++++++++++++++++++------------------- 1 file changed, 29 insertions(+), 26 deletions(-) diff --git a/mm/memblock.c b/mm/memblock.c index eb086724802a..ccdf3d225626 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -893,26 +893,12 @@ int __init_memblock memblock_remove(phys_addr_t base, phys_addr_t size) return memblock_remove_range(&memblock.memory, base, size); } -unsigned long free_reserved_area(void *start, void *end, int poison, const char *s) +static unsigned long __free_reserved_area(phys_addr_t start, phys_addr_t end, + int poison) { - phys_addr_t start_pa, end_pa; unsigned long pages = 0, pfn; - /* - * end is the first address past the region and it may be beyond what - * __pa() or __pa_symbol() can handle. - * Use the address included in the range for the conversion and add - * back 1 afterwards.
- */ - if (__is_kernel((unsigned long)start)) { - start_pa = __pa_symbol(start); - end_pa = __pa_symbol(end - 1) + 1; - } else { - start_pa = __pa(start); - end_pa = __pa(end - 1) + 1; - } - - for_each_valid_pfn(pfn, PFN_UP(start_pa), PFN_DOWN(end_pa)) { + for_each_valid_pfn(pfn, PFN_UP(start), PFN_DOWN(end)) { struct page *page = pfn_to_page(pfn); void *direct_map_addr; @@ -934,7 +920,29 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char free_reserved_page(page); pages++; } + return pages; +} + +unsigned long free_reserved_area(void *start, void *end, int poison, const char *s) +{ + phys_addr_t start_pa, end_pa; + unsigned long pages; + + /* + * end is the first address past the region and it may be beyond what + * __pa() or __pa_symbol() can handle. + * Use the address included in the range for the conversion and add back + * 1 afterwards. + */ + if (__is_kernel((unsigned long)start)) { + start_pa = __pa_symbol(start); + end_pa = __pa_symbol(end - 1) + 1; + } else { + start_pa = __pa(start); + end_pa = __pa(end - 1) + 1; + } + pages = __free_reserved_area(start_pa, end_pa, poison); if (pages && s) pr_info("Freeing %s memory: %ldK\n", s, K(pages)); @@ -1810,20 +1818,15 @@ void *__init __memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align, */ void __init memblock_free_late(phys_addr_t base, phys_addr_t size) { - phys_addr_t cursor, end; + phys_addr_t end = base + size - 1; - end = base + size - 1; memblock_dbg("%s: [%pa-%pa] %pS\n", __func__, &base, &end, (void *)_RET_IP_); - kmemleak_free_part_phys(base, size); - cursor = PFN_UP(base); - end = PFN_DOWN(base + size); - for (; cursor < end; cursor++) { - memblock_free_pages(cursor, 0); - totalram_pages_inc(); - } + kmemleak_free_part_phys(base, size); + __free_reserved_area(base, base + size, -1); } + /* * Remaining API functions */ -- 2.53.0 From nobody Fri Apr 3 22:31:11 2026 From: Mike Rapoport To: Andrew Morton Subject: [PATCH v2 7/9] memblock: make free_reserved_area() update memblock if ARCH_KEEP_MEMBLOCK=y Date: Mon, 23 Mar 2026 09:48:34 +0200 Message-ID: <20260323074836.3653702-8-rppt@kernel.org> In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org> From: "Mike Rapoport (Microsoft)" On architectures that keep memblock after boot, freeing of reserved memory with free_reserved_area() is paired with an update of memblock arrays, usually by a call to memblock_free(). Make free_reserved_area() directly update memblock.reserved when ARCH_KEEP_MEMBLOCK is enabled.
Remove the now-redundant explicit memblock_free() call from arm64::free_initmem() and the #ifdef CONFIG_ARCH_KEEP_MEMBLOCK block from the generic free_initrd_mem(). Signed-off-by: Mike Rapoport (Microsoft) --- arch/arm64/mm/init.c | 3 --- init/initramfs.c | 7 ------- mm/memblock.c | 6 ++++++ 3 files changed, 6 insertions(+), 10 deletions(-) diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c index 96711b8578fd..07b17c708702 100644 --- a/arch/arm64/mm/init.c +++ b/arch/arm64/mm/init.c @@ -385,9 +385,6 @@ void free_initmem(void) WARN_ON(!IS_ALIGNED((unsigned long)lm_init_begin, PAGE_SIZE)); WARN_ON(!IS_ALIGNED((unsigned long)lm_init_end, PAGE_SIZE)); - /* Delete __init region from memblock.reserved. */ - memblock_free(lm_init_begin, lm_init_end - lm_init_begin); - free_reserved_area(lm_init_begin, lm_init_end, POISON_FREE_INITMEM, "unused kernel"); /* diff --git a/init/initramfs.c b/init/initramfs.c index 139baed06589..bca0922b2850 100644 --- a/init/initramfs.c +++ b/init/initramfs.c @@ -652,13 +652,6 @@ void __init reserve_initrd_mem(void) void __weak __init free_initrd_mem(unsigned long start, unsigned long end) { -#ifdef CONFIG_ARCH_KEEP_MEMBLOCK - unsigned long aligned_start = ALIGN_DOWN(start, PAGE_SIZE); - unsigned long aligned_end = ALIGN(end, PAGE_SIZE); - - memblock_free((void *)aligned_start, aligned_end - aligned_start); -#endif - free_reserved_area((void *)start, (void *)end, POISON_FREE_INITMEM, "initrd"); } diff --git a/mm/memblock.c b/mm/memblock.c index ccdf3d225626..0ad968c2f2e8 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -942,6 +942,12 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char end_pa = __pa(end - 1) + 1; } + if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) { + if (start_pa < end_pa) + memblock_remove_range(&memblock.reserved, + start_pa, end_pa - start_pa); + } + pages = __free_reserved_area(start_pa, end_pa, poison); if (pages && s) pr_info("Freeing %s memory: %ldK\n", s,
K(pages)); -- 2.53.0 From nobody Fri Apr 3 22:31:11 2026 From: Mike Rapoport To: Andrew Morton Subject: [PATCH v2 8/9] memblock, treewide: make memblock_free() handle late freeing Date: Mon, 23 Mar 2026 09:48:35 +0200 Message-ID: <20260323074836.3653702-9-rppt@kernel.org> In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org> From: "Mike Rapoport (Microsoft)" It shouldn't be the responsibility of memblock users to detect that they are freeing memblock-allocated memory late and must therefore use memblock_free_late(). Make memblock_free() and memblock_phys_free() take care of late memory freeing and drop memblock_free_late().
Signed-off-by: Mike Rapoport (Microsoft) --- arch/sparc/kernel/mdesc.c | 4 +- arch/x86/kernel/setup.c | 2 +- arch/x86/platform/efi/memmap.c | 5 +-- arch/x86/platform/efi/quirks.c | 2 +- drivers/firmware/efi/apple-properties.c | 2 +- drivers/of/kexec.c | 2 +- include/linux/memblock.h | 2 - kernel/dma/swiotlb.c | 6 +-- lib/bootconfig.c | 2 +- mm/kfence/core.c | 4 +- mm/memblock.c | 49 ++++++++++--------------- 11 files changed, 31 insertions(+), 49 deletions(-) diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c index 30f171b7b00c..ecd6c8ae49c7 100644 --- a/arch/sparc/kernel/mdesc.c +++ b/arch/sparc/kernel/mdesc.c @@ -183,14 +183,12 @@ static struct mdesc_handle * __init mdesc_memblock_alloc(unsigned int mdesc_size static void __init mdesc_memblock_free(struct mdesc_handle *hp) { unsigned int alloc_size; - unsigned long start; BUG_ON(refcount_read(&hp->refcnt) != 0); BUG_ON(!list_empty(&hp->list)); alloc_size = PAGE_ALIGN(hp->handle_size); - start = __pa(hp); - memblock_free_late(start, alloc_size); + memblock_free(hp, alloc_size); } static struct mdesc_mem_ops memblock_mdesc_ops = { diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c index eebcc9db1a1b..46882ce79c3a 100644 --- a/arch/x86/kernel/setup.c +++ b/arch/x86/kernel/setup.c @@ -426,7 +426,7 @@ int __init ima_free_kexec_buffer(void) if (!ima_kexec_buffer_size) return -ENOENT; - memblock_free_late(ima_kexec_buffer_phys, + memblock_phys_free(ima_kexec_buffer_phys, ima_kexec_buffer_size); ima_kexec_buffer_phys = 0; diff --git a/arch/x86/platform/efi/memmap.c b/arch/x86/platform/efi/memmap.c index 023697c88910..697a9a26a005 100644 --- a/arch/x86/platform/efi/memmap.c +++ b/arch/x86/platform/efi/memmap.c @@ -34,10 +34,7 @@ static void __init __efi_memmap_free(u64 phys, unsigned long size, unsigned long flags) { if (flags & EFI_MEMMAP_MEMBLOCK) { - if (slab_is_available()) - memblock_free_late(phys, size); - else - memblock_phys_free(phys, size); + memblock_phys_free(phys, size); } else if (flags & EFI_MEMMAP_SLAB) { struct page *p = pfn_to_page(PHYS_PFN(phys)); unsigned int order = get_order(size); diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c index 35caa5746115..a560bbcaa006 100644 --- a/arch/x86/platform/efi/quirks.c +++ b/arch/x86/platform/efi/quirks.c @@ -372,7 +372,7 @@ void __init efi_reserve_boot_services(void) * doesn't make sense as far as the firmware is * concerned, but it does provide us with a way to tag * those regions that must not be paired with - * memblock_free_late(). + * memblock_phys_free(). */ md->attribute |= EFI_MEMORY_RUNTIME; } diff --git a/drivers/firmware/efi/apple-properties.c b/drivers/firmware/efi/apple-properties.c index 13ac28754c03..2e525e17fba7 100644 --- a/drivers/firmware/efi/apple-properties.c +++ b/drivers/firmware/efi/apple-properties.c @@ -226,7 +226,7 @@ static int __init map_properties(void) */ data->len = 0; memunmap(data); - memblock_free_late(pa_data + sizeof(*data), data_len); + memblock_phys_free(pa_data + sizeof(*data), data_len); return ret; } diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c index c4cf3552c018..512d9be9d513 100644 --- a/drivers/of/kexec.c +++ b/drivers/of/kexec.c @@ -175,7 +175,7 @@ int __init ima_free_kexec_buffer(void) if (ret) return ret; - memblock_free_late(addr, size); + memblock_phys_free(addr, size); return 0; } #endif diff --git a/include/linux/memblock.h b/include/linux/memblock.h index 6ec5e9ac0699..6f6c5b5c4a4b 100644 --- a/include/linux/memblock.h +++ b/include/linux/memblock.h @@ -172,8 +172,6 @@ void __next_mem_range_rev(u64 *idx, int nid, enum memblock_flags flags, struct memblock_type *type_b, phys_addr_t *out_start, phys_addr_t *out_end, int *out_nid); -void memblock_free_late(phys_addr_t base, phys_addr_t size); #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP static inline void __next_physmem_range(u64 *idx, struct memblock_type *type, phys_addr_t *out_start, diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index d8e6f1d889d5..e44e039e00d3 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -546,10 +546,10 @@ void __init swiotlb_exit(void) free_pages(tbl_vaddr, get_order(tbl_size)); free_pages((unsigned long)mem->slots, get_order(slots_size)); } else { - memblock_free_late(__pa(mem->areas), + memblock_free(mem->areas, array_size(sizeof(*mem->areas), mem->nareas)); - memblock_free_late(mem->start, tbl_size); - memblock_free_late(__pa(mem->slots), slots_size); + memblock_phys_free(mem->start, tbl_size); + memblock_free(mem->slots, slots_size); } memset(mem, 0, sizeof(*mem)); diff --git a/lib/bootconfig.c b/lib/bootconfig.c index 449369a60846..86a75bf636bc 100644 --- a/lib/bootconfig.c +++ b/lib/bootconfig.c @@ -64,7 +64,7 @@ static inline void __init xbc_free_mem(void *addr, size_t size, bool early) if (early) memblock_free(addr, size); else if (addr) - memblock_free_late(__pa(addr), size); + memblock_free(addr, size); } #else /* !__KERNEL__ */ diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 7393957f9a20..5c8268af533e 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -731,10 +731,10 @@ static bool __init kfence_init_pool_early(void) * fails for the first page, and therefore expect addr==__kfence_pool in * most failure cases. */ - memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool)); + memblock_free((void *)addr, KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool)); __kfence_pool = NULL; - memblock_free_late(__pa(kfence_metadata_init), KFENCE_METADATA_SIZE); + memblock_free(kfence_metadata_init, KFENCE_METADATA_SIZE); kfence_metadata_init = NULL; return false; } diff --git a/mm/memblock.c b/mm/memblock.c index 0ad968c2f2e8..dc8811861c11 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -384,26 +384,27 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u */ void __init memblock_discard(void) { - phys_addr_t addr, size; + phys_addr_t size; + void *addr; if (memblock.reserved.regions != memblock_reserved_init_regions) { - addr = __pa(memblock.reserved.regions); + addr = memblock.reserved.regions; size = PAGE_ALIGN(sizeof(struct memblock_region) * memblock.reserved.max); if (memblock_reserved_in_slab) - kfree(memblock.reserved.regions); + kfree(addr); else - memblock_free_late(addr, size); + memblock_free(addr, size); } if (memblock.memory.regions != memblock_memory_init_regions) { - addr = __pa(memblock.memory.regions); + addr = memblock.memory.regions; size = PAGE_ALIGN(sizeof(struct memblock_region) * memblock.memory.max); if (memblock_memory_in_slab) - kfree(memblock.memory.regions); + kfree(addr); else - memblock_free_late(addr, size); + memblock_free(addr, size); } memblock_memory = NULL; @@ -961,7 +962,8 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char * @size: size of the boot memory block in bytes * * Free boot memory block previously allocated by memblock_alloc_xx() API. - * The freeing memory will not be released to the buddy allocator. + * If called after the buddy allocator is available, the memory is released to + * the buddy allocator. */ void __init_memblock memblock_free(void *ptr, size_t size) { @@ -975,17 +977,24 @@ void __init_memblock memblock_free(void *ptr, size_t size) * @size: size of the boot memory block in bytes * * Free boot memory block previously allocated by memblock_phys_alloc_xx() API. - * The freeing memory will not be released to the buddy allocator. + * If called after the buddy allocator is available, the memory is released to + * the buddy allocator. */ int __init_memblock memblock_phys_free(phys_addr_t base, phys_addr_t size) { phys_addr_t end = base + size - 1; + int ret; memblock_dbg("%s: [%pa-%pa] %pS\n", __func__, &base, &end, (void *)_RET_IP_); kmemleak_free_part_phys(base, size); - return memblock_remove_range(&memblock.reserved, base, size); + ret = memblock_remove_range(&memblock.reserved, base, size); + + if (slab_is_available()) + __free_reserved_area(base, base + size, -1); + + return ret; } int __init_memblock __memblock_reserve(phys_addr_t base, phys_addr_t size, @@ -1813,26 +1822,6 @@ void *__init __memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align, return addr; } -/** - * memblock_free_late - free pages directly to buddy allocator - * @base: phys starting address of the boot memory block - * @size: size of the boot memory block in bytes - * - * This is only useful when the memblock allocator has already been torn - * down, but we are still initializing the system. Pages are released directly - * to the buddy allocator.
- */ -void __init memblock_free_late(phys_addr_t base, phys_addr_t size) -{ - phys_addr_t end = base + size - 1; - - memblock_dbg("%s: [%pa-%pa] %pS\n", - __func__, &base, &end, (void *)_RET_IP_); - - kmemleak_free_part_phys(base, size); - __free_reserved_area(base, base + size, -1); -} - /* * Remaining API functions */ -- 2.53.0 From nobody Fri Apr 3 22:31:11 2026 From: Mike Rapoport To: Andrew Morton Subject: [PATCH v2 9/9] memblock: warn when freeing reserved memory before memory map is initialized Date: Mon, 23 Mar 2026 09:48:36 +0200 Message-ID: <20260323074836.3653702-10-rppt@kernel.org> In-Reply-To: <20260323074836.3653702-1-rppt@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type:
text/plain; charset="utf-8" From: "Mike Rapoport (Microsoft)" When CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled, freeing reserved memory before the memory map is fully initialized in deferred_init_memmap() causes access to uninitialized struct pages and may crash when accessing spurious list pointers, as was recently discovered during a discussion about memory leaks in x86 EFI code [1]. The trace below is from an attempt to call free_reserved_page() before page_alloc_init_late(): [ 0.076840] BUG: unable to handle page fault for address: ffffce1a005a0788 [ 0.078226] #PF: supervisor read access in kernel mode [ 0.078226] #PF: error_code(0x0000) - not-present page [ 0.078226] PGD 0 P4D 0 [ 0.078226] Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI [ 0.078226] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.12.68-92.123.amzn2023.x86_64 #1 [ 0.078226] Hardware name: Amazon EC2 t3a.nano/, BIOS 1.0 10/16/2017 [ 0.078226] RIP: 0010:__list_del_entry_valid_or_report+0x32/0xb0 ... [ 0.078226] __free_one_page+0x170/0x520 [ 0.078226] free_pcppages_bulk+0x151/0x1e0 [ 0.078226] free_unref_page_commit+0x263/0x320 [ 0.078226] free_unref_page+0x2c8/0x5b0 [ 0.078226] ? srso_return_thunk+0x5/0x5f [ 0.078226] free_reserved_page+0x1c/0x30 [ 0.078226] memblock_free_late+0x6c/0xc0 Currently there are not many callers of free_reserved_area() and they all appear to run at the right time. Still, to protect against problematic code moves or additions of new callers, add a warning informing that reserved pages cannot be freed until the memory map is fully initialized.
[1] https://lore.kernel.org/all/e5d5a1105d90ee1e7fe7eafaed2ed03bbad0c46b.camel@kernel.crashing.org/ Signed-off-by: Mike Rapoport (Microsoft) Tested-By: Bert Karwatzki --- mm/internal.h | 10 ++++++++++ mm/memblock.c | 5 +++++ mm/page_alloc.c | 10 ---------- 3 files changed, 15 insertions(+), 10 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index cb0af847d7d9..f60c1edb2e02 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1233,7 +1233,17 @@ static inline void vunmap_range_noflush(unsigned long start, unsigned long end) #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT DECLARE_STATIC_KEY_TRUE(deferred_pages); +static inline bool deferred_pages_enabled(void) +{ + return static_branch_unlikely(&deferred_pages); +} + bool __init deferred_grow_zone(struct zone *zone, unsigned int order); +#else +static inline bool deferred_pages_enabled(void) +{ + return false; +} #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */ void init_deferred_page(unsigned long pfn, int nid); diff --git a/mm/memblock.c b/mm/memblock.c index dc8811861c11..ab8f35c3bd41 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -899,6 +899,11 @@ static unsigned long __free_reserved_area(phys_addr_t start, phys_addr_t end, { unsigned long pages = 0, pfn; + if (deferred_pages_enabled()) { + WARN(1, "Cannot free reserved memory because of deferred initialization of the memory map"); + return 0; + } + for_each_valid_pfn(pfn, PFN_UP(start), PFN_DOWN(end)) { struct page *page = pfn_to_page(pfn); void *direct_map_addr; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index df3d61253001..9ac47bab2ea7 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -331,11 +331,6 @@ int page_group_by_mobility_disabled __read_mostly; */ DEFINE_STATIC_KEY_TRUE(deferred_pages); -static inline bool deferred_pages_enabled(void) -{ - return static_branch_unlikely(&deferred_pages); -} - /* * deferred_grow_zone() is __init, but it is called from * get_page_from_freelist() during early boot until deferred_pages
permanently @@ -348,11 +343,6 @@ _deferred_grow_zone(struct zone *zone, unsigned int order) return deferred_grow_zone(zone, order); } #else -static inline bool deferred_pages_enabled(void) -{ - return false; -} - static inline bool _deferred_grow_zone(struct zone *zone, unsigned int order) { return false; -- 2.53.0