Date: Mon, 27 Jan 2025 23:22:07 +0000
In-Reply-To: <20250127232207.3888640-1-fvdl@google.com>
References: <20250127232207.3888640-1-fvdl@google.com>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
Message-ID: <20250127232207.3888640-28-fvdl@google.com>
Subject: [PATCH 27/27] mm/hugetlb: enable bootmem allocation from CMA areas
From: Frank van der Linden <fvdl@google.com>
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usama.arif@bytedance.com, joao.m.martins@oracle.com,
    roman.gushchin@linux.dev, Frank van der Linden <fvdl@google.com>,
    Madhavan Srinivasan, Michael Ellerman, linuxppc-dev@lists.ozlabs.org

If hugetlb_cma_only is enabled, we know that hugetlb pages can only be
allocated from CMA. Now that there is an interface to do early
reservations from a CMA area (returning memblock memory), it can be used
to allocate hugetlb pages from CMA.

This also allows for doing pre-HVO on these pages (if enabled).

Make sure to initialize the page structures and associated data
correctly. Create a flag to signal that a hugetlb page has been allocated
from CMA to make things a little easier.

Some configurations of powerpc have a special hugetlb bootmem allocator,
so introduce arch_specific_huge_bootmem_alloc(), a boolean function that
returns true if such an allocator is present. In that case, CMA bootmem
allocations can't be used, so check that function before trying.
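
As an illustration (not part of this change): with the hugetlb_cma= and
hugetlb_cma_only command line options referenced above, a boot
configuration along the lines of

    default_hugepagesz=1G hugepagesz=1G hugepages=16 hugetlb_cma=16G hugetlb_cma_only

would now have its gigantic pages reserved from the CMA areas at bootmem
time, with pre-HVO applied if enabled. The sizes here are made up; pick
values that fit the system.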

Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 arch/powerpc/mm/hugetlbpage.c |   5 ++
 include/linux/hugetlb.h       |   7 ++
 mm/hugetlb.c                  | 135 +++++++++++++++++++++++++---------
 3 files changed, 114 insertions(+), 33 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index d3c1b749dcfc..e53e4b4c8ef6 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -121,6 +121,11 @@ bool __init hugetlb_node_alloc_supported(void)
 {
         return false;
 }
+
+bool __init arch_specific_huge_bootmem_alloc(struct hstate *h)
+{
+        return (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled());
+}
 #endif
 
 
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2512463bca49..bca3052fb175 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -591,6 +591,7 @@ enum hugetlb_page_flags {
         HPG_freed,
         HPG_vmemmap_optimized,
         HPG_raw_hwp_unreliable,
+        HPG_cma,
         __NR_HPAGEFLAGS,
 };
 
@@ -650,6 +651,7 @@ HPAGEFLAG(Temporary, temporary)
 HPAGEFLAG(Freed, freed)
 HPAGEFLAG(VmemmapOptimized, vmemmap_optimized)
 HPAGEFLAG(RawHwpUnreliable, raw_hwp_unreliable)
+HPAGEFLAG(Cma, cma)
 
 #ifdef CONFIG_HUGETLB_PAGE
 
@@ -678,14 +680,18 @@ struct hstate {
         char name[HSTATE_NAME_LEN];
 };
 
+struct cma;
+
 struct huge_bootmem_page {
         struct list_head list;
         struct hstate *hstate;
         unsigned long flags;
+        struct cma *cma;
 };
 
 #define HUGE_BOOTMEM_HVO                0x0001
 #define HUGE_BOOTMEM_ZONES_VALID        0x0002
+#define HUGE_BOOTMEM_CMA                0x0004
 
 bool hugetlb_bootmem_page_zones_valid(int nid, struct huge_bootmem_page *m);
 
@@ -711,6 +717,7 @@ bool __init hugetlb_node_alloc_supported(void);
 
 void __init hugetlb_add_hstate(unsigned order);
 bool __init arch_hugetlb_valid_size(unsigned long size);
+bool __init arch_specific_huge_bootmem_alloc(struct hstate *h);
 struct hstate *size_to_hstate(unsigned long size);
 
 #ifndef HUGE_MAX_HSTATE
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 32ebde9039e2..183e8d0c2fb4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -61,7 +61,7 @@ static struct cma *hugetlb_cma[MAX_NUMNODES];
 static unsigned long hugetlb_cma_size_in_node[MAX_NUMNODES] __initdata;
 #endif
 static bool hugetlb_cma_only;
-static unsigned long hugetlb_cma_size __initdata;
+static unsigned long hugetlb_cma_size;
 
 __initdata struct list_head huge_boot_pages[MAX_NUMNODES];
 __initdata unsigned long hstate_boot_nrinvalid[HUGE_MAX_HSTATE];
@@ -132,8 +132,10 @@ static void hugetlb_free_folio(struct folio *folio)
 #ifdef CONFIG_CMA
         int nid = folio_nid(folio);
 
-        if (cma_free_folio(hugetlb_cma[nid], folio))
+        if (folio_test_hugetlb_cma(folio)) {
+                WARN_ON(!cma_free_folio(hugetlb_cma[nid], folio));
                 return;
+        }
 #endif
         folio_put(folio);
 }
@@ -1509,6 +1511,9 @@ static struct folio *alloc_gigantic_folio(struct hstate *h, gfp_t gfp_mask,
                                         break;
                         }
                 }
+
+                if (folio)
+                        folio_set_hugetlb_cma(folio);
         }
 #endif
         if (!folio) {
@@ -3175,6 +3180,63 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
         return ERR_PTR(-ENOSPC);
 }
 
+/*
+ * Some architectures do their own bootmem allocation, so they can't use
+ * early CMA allocation. So, allow for this function to be redefined.
+ */
+bool __init __attribute((weak))
+arch_specific_huge_bootmem_alloc(struct hstate *h)
+{
+        return false;
+}
+
+static bool __init hugetlb_early_cma(struct hstate *h)
+{
+        if (arch_specific_huge_bootmem_alloc(h))
+                return false;
+
+        return (hstate_is_gigantic(h) && hugetlb_cma_size && hugetlb_cma_only);
+}
+
+static __init void *alloc_bootmem(struct hstate *h, int nid)
+{
+        struct huge_bootmem_page *m;
+        unsigned long flags;
+        struct cma *cma;
+
+#ifdef CONFIG_CMA
+        if (hugetlb_early_cma(h)) {
+                flags = HUGE_BOOTMEM_CMA;
+                cma = hugetlb_cma[nid];
+                m = cma_reserve_early(cma, huge_page_size(h));
+        } else
+#endif
+        {
+                flags = 0;
+                cma = NULL;
+                m = memblock_alloc_try_nid_raw(huge_page_size(h),
+                        huge_page_size(h), 0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+        }
+
+        if (m) {
+                /*
+                 * Use the beginning of the huge page to store the
+                 * huge_bootmem_page struct (until gather_bootmem
+                 * puts them into the mem_map).
+                 *
+                 * Put them into a private list first because mem_map
+                 * is not up yet.
+                 */
+                INIT_LIST_HEAD(&m->list);
+                list_add(&m->list, &huge_boot_pages[nid]);
+                m->hstate = h;
+                m->flags = flags;
+                m->cma = cma;
+        }
+
+        return m;
+}
+
 int alloc_bootmem_huge_page(struct hstate *h, int nid)
         __attribute__ ((weak, alias("__alloc_bootmem_huge_page")));
 int __alloc_bootmem_huge_page(struct hstate *h, int nid)
@@ -3184,17 +3246,14 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 
         /* do node specific alloc */
         if (nid != NUMA_NO_NODE) {
-                m = memblock_alloc_try_nid_raw(huge_page_size(h), huge_page_size(h),
-                                0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+                m = alloc_bootmem(h, node);
                 if (!m)
                         return 0;
                 goto found;
         }
         /* allocate from next node when distributing huge pages */
         for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, &node_states[N_ONLINE]) {
-                m = memblock_alloc_try_nid_raw(
-                                huge_page_size(h), huge_page_size(h),
-                                0, MEMBLOCK_ALLOC_ACCESSIBLE, node);
+                m = alloc_bootmem(h, node);
                 if (m)
                         break;
         }
@@ -3203,7 +3262,6 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
                 return 0;
 
 found:
-
         /*
          * Only initialize the head struct page in memmap_init_reserved_pages,
          * rest of the struct pages will be initialized by the HugeTLB
@@ -3213,18 +3271,6 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
          */
         memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE),
                 huge_page_size(h) - PAGE_SIZE);
-        /*
-         * Use the beginning of the huge page to store the
-         * huge_bootmem_page struct (until gather_bootmem
-         * puts them into the mem_map).
-         *
-         * Put them into a private list first because mem_map
-         * is not up yet.
-         */
-        INIT_LIST_HEAD(&m->list);
-        list_add(&m->list, &huge_boot_pages[node]);
-        m->hstate = h;
-        m->flags = 0;
         return 1;
 }
 
@@ -3265,13 +3311,25 @@ static void __init hugetlb_folio_init_vmemmap(struct folio *folio,
         prep_compound_head((struct page *)folio, huge_page_order(h));
 }
 
+static bool __init hugetlb_bootmem_page_prehvo(struct huge_bootmem_page *m)
+{
+        return (m->flags & HUGE_BOOTMEM_HVO);
+}
+
+static bool __init hugetlb_bootmem_page_earlycma(struct huge_bootmem_page *m)
+{
+        return (m->flags & HUGE_BOOTMEM_CMA);
+}
+
 /*
  * memblock-allocated pageblocks might not have the migrate type set
  * if marked with the 'noinit' flag. Set it to the default (MIGRATE_MOVABLE)
- * here.
+ * here, or MIGRATE_CMA if this was a page allocated through an early CMA
+ * reservation.
  *
- * Note that this will not write the page struct, it is ok (and necessary)
- * to do this on vmemmap optimized folios.
+ * In case of vmemmap optimized folios, the tail vmemmap pages are mapped
+ * read-only, but that's ok - for sparse vmemmap this does not write to
+ * the page structure.
  */
 static void __init hugetlb_bootmem_init_migratetype(struct folio *folio,
                 struct hstate *h)
@@ -3280,9 +3338,13 @@ static void __init hugetlb_bootmem_init_migratetype(struct folio *folio,
 
         WARN_ON_ONCE(!pageblock_aligned(folio_pfn(folio)));
 
-        for (i = 0; i < nr_pages; i += pageblock_nr_pages)
-                set_pageblock_migratetype(folio_page(folio, i),
+        for (i = 0; i < nr_pages; i += pageblock_nr_pages) {
+                if (folio_test_hugetlb_cma(folio))
+                        init_cma_pageblock(folio_page(folio, i));
+                else
+                        set_pageblock_migratetype(folio_page(folio, i),
                                 MIGRATE_MOVABLE);
+        }
 }
 
 static void __init prep_and_add_bootmem_folios(struct hstate *h,
@@ -3319,7 +3381,7 @@ bool __init hugetlb_bootmem_page_zones_valid(int nid,
                                         struct huge_bootmem_page *m)
 {
         unsigned long start_pfn;
-        bool valid;
+        bool valid = false;
 
         if (m->flags & HUGE_BOOTMEM_ZONES_VALID) {
                 /*
@@ -3328,10 +3390,16 @@ bool __init hugetlb_bootmem_page_zones_valid(int nid,
                 return true;
         }
 
+        if (hugetlb_bootmem_page_earlycma(m)) {
+                valid = cma_validate_zones(m->cma);
+                goto out;
+        }
+
         start_pfn = virt_to_phys(m) >> PAGE_SHIFT;
 
         valid = !pfn_range_intersects_zones(nid, start_pfn,
                         pages_per_huge_page(m->hstate));
+out:
         if (!valid)
                 hstate_boot_nrinvalid[hstate_index(m->hstate)]++;
 
@@ -3360,11 +3428,6 @@ static void __init hugetlb_bootmem_free_invalid_page(int nid, struct page *page,
         }
 }
 
-static bool __init hugetlb_bootmem_page_prehvo(struct huge_bootmem_page *m)
-{
-        return (m->flags & HUGE_BOOTMEM_HVO);
-}
-
 /*
  * Put bootmem huge pages into the standard lists after mem_map is up.
  * Note: This only applies to gigantic (order > MAX_PAGE_ORDER) pages.
@@ -3414,6 +3477,9 @@ static void __init gather_bootmem_prealloc_node(unsigned long nid)
                  */
                 folio_set_hugetlb_vmemmap_optimized(folio);
 
+                if (hugetlb_bootmem_page_earlycma(m))
+                        folio_set_hugetlb_cma(folio);
+
                 list_add(&folio->lru, &folio_list);
 
                 /*
@@ -3606,8 +3672,11 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 {
         unsigned long allocated;
 
-        /* skip gigantic hugepages allocation if hugetlb_cma enabled */
-        if (hstate_is_gigantic(h) && hugetlb_cma_size) {
+        /*
+         * Skip gigantic hugepages allocation if early CMA
+         * reservations are not available.
+         */
+        if (hstate_is_gigantic(h) && hugetlb_cma_size && !hugetlb_early_cma(h)) {
                 pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
                 return;
         }
-- 
2.48.1.262.g85cc9f2d1e-goog