From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
	Brian Cain, "Christophe Leroy (CS GROUP)", Catalin Marinas,
	"David S. Miller", Dave Hansen, David Hildenbrand, Dinh Nguyen,
	Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
	Huacai Chen, Ingo Molnar, Johannes Berg, John Paul Adrian Glaubitz,
	Jonathan Corbet, "Liam R. Howlett",
Howlett" , Lorenzo Stoakes , Magnus Lindholm , Matt Turner , Max Filippov , Michael Ellerman , Michal Hocko , Michal Simek , Mike Rapoport , Muchun Song , Oscar Salvador , Palmer Dabbelt , Pratyush Yadav , Richard Weinberger , Russell King , Stafford Horne , Suren Baghdasaryan , Thomas Bogendoerfer , Thomas Gleixner , Vasily Gorbik , Vineet Gupta , Vlastimil Babka , Will Deacon , x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, linux-hexagon@vger.kernel.org, linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, sparclinux@vger.kernel.org Subject: [PATCH v2 27/28] mm/hugetlb: drop hugetlb_cma_check() Date: Fri, 2 Jan 2026 09:00:03 +0200 Message-ID: <20260102070005.65328-28-rppt@kernel.org> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20260102070005.65328-1-rppt@kernel.org> References: <20260102070005.65328-1-rppt@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: "Mike Rapoport (Microsoft)" hugetlb_cma_check() was required when the ordering of hugetlb_cma_reserve() and hugetlb_bootmem_alloc() was architecture depended. Since hugetlb_cma_reserve() is always called before hugetlb_bootmem_alloc() there is no need to check whether hugetlb_cma_reserve() was already called. Drop unneeded hugetlb_cma_check() function. Signed-off-by: Mike Rapoport (Microsoft) Acked-by: Muchun Song --- mm/hugetlb.c | 1 - mm/hugetlb_cma.c | 16 +++------------- mm/hugetlb_cma.h | 5 ----- 3 files changed, 3 insertions(+), 19 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 51273baec9e5..82b322ae3fdc 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4159,7 +4159,6 @@ static int __init hugetlb_init(void) } } =20 - hugetlb_cma_check(); hugetlb_init_hstates(); gather_bootmem_prealloc(); report_hugepages(); diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c index b1eb5998282c..f5e79103e110 100644 --- a/mm/hugetlb_cma.c +++ b/mm/hugetlb_cma.c @@ -85,9 +85,6 @@ hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid, boo= l node_exact) return m; } =20 - -static bool cma_reserve_called __initdata; - static int __init cmdline_parse_hugetlb_cma(char *p) { int nid, count =3D 0; @@ -149,8 +146,10 @@ void __init hugetlb_cma_reserve(void) return; =20 order =3D arch_hugetlb_cma_order(); - if (!order) + if (!order) { + pr_warn("hugetlb_cma: the option isn't supported by current arch\n"); return; + } =20 /* * HugeTLB CMA reservation is required for gigantic @@ -159,7 +158,6 @@ void __init hugetlb_cma_reserve(void) * breaking this assumption. 
 	 */
 	VM_WARN_ON(order <= MAX_PAGE_ORDER);
-	cma_reserve_called = true;
 
 	hugetlb_bootmem_set_nodes();
 
@@ -253,14 +251,6 @@ void __init hugetlb_cma_reserve(void)
 	hugetlb_cma_size = 0;
 }
 
-void __init hugetlb_cma_check(void)
-{
-	if (!hugetlb_cma_size || cma_reserve_called)
-		return;
-
-	pr_warn("hugetlb_cma: the option isn't supported by current arch\n");
-}
-
 bool hugetlb_cma_exclusive_alloc(void)
 {
 	return hugetlb_cma_only;
diff --git a/mm/hugetlb_cma.h b/mm/hugetlb_cma.h
index 2c2ec8a7e134..78186839df3a 100644
--- a/mm/hugetlb_cma.h
+++ b/mm/hugetlb_cma.h
@@ -8,7 +8,6 @@ struct folio *hugetlb_cma_alloc_folio(int order, gfp_t gfp_mask, int nid,
 		nodemask_t *nodemask);
 struct huge_bootmem_page *hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid,
 		bool node_exact);
-void hugetlb_cma_check(void);
 bool hugetlb_cma_exclusive_alloc(void);
 unsigned long hugetlb_cma_total_size(void);
 void hugetlb_cma_validate_params(void);
@@ -31,10 +30,6 @@ struct huge_bootmem_page *hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid,
 	return NULL;
 }
 
-static inline void hugetlb_cma_check(void)
-{
-}
-
 static inline bool hugetlb_cma_exclusive_alloc(void)
 {
 	return false;
-- 
2.51.0
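
[Editor's note, not part of the patch: a minimal sketch of the ordering guarantee the commit message relies on. The enclosing function name below is hypothetical and only stands in for whichever generic early-boot path calls these helpers after this series.]

/*
 * Illustrative only. With hugetlb_cma_reserve() guaranteed to run before
 * hugetlb_bootmem_alloc() on every architecture, a "was it called?" flag
 * such as cma_reserve_called, and the hugetlb_cma_check() that consumed it,
 * become unnecessary.
 */
void __init hypothetical_early_hugetlb_setup(void)
{
	/*
	 * Reserves the CMA areas requested with hugetlb_cma=, or warns
	 * from inside hugetlb_cma_reserve() itself when the architecture
	 * reports no supported CMA order.
	 */
	hugetlb_cma_reserve();

	/*
	 * Boot-time gigantic page allocation may now assume the CMA
	 * reservation has already happened.
	 */
	hugetlb_bootmem_alloc();
}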