From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song,
	Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	ligang.bdlg@bytedance.com, Gang Li
Subject: [RFC PATCH v2 5/5] hugetlb: parallelize 1G hugetlb initialization
Date: Fri, 8 Dec 2023 10:52:40 +0800
Message-Id: <20231208025240.4744-6-gang.li@linux.dev>
In-Reply-To: <20231208025240.4744-1-gang.li@linux.dev>
References: <20231208025240.4744-1-gang.li@linux.dev>

Optimize the initialization speed of 1G huge pages through
parallelization.

1G hugetlb pages are allocated from bootmem, a process that is already
very fast and does not currently require optimization. Therefore, this
patch parallelizes only the initialization phase in
`gather_bootmem_prealloc`: bootmem pages are tracked on per-node lists,
and each node's list is handed to its own worker thread via padata (a
sketch of the padata pattern follows below).
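For background on the mechanism used below: padata_do_multithreaded()
splits a job's [start, start + size) range into chunks and invokes
thread_fn(start, end, arg) on each chunk from a worker thread, returning
only after all chunks are done. Here is a minimal sketch of that
pattern; __init_one_node() and parallel_node_init() are illustrative
names, not code from this patch, and the .numa_aware field used by the
patch is added elsewhere in this series, so the sketch omits it:

#include <linux/padata.h>
#include <linux/nodemask.h>
#include <linux/printk.h>

/* Hypothetical worker: handles the node ids in [start, end). */
static void __init __init_one_node(unsigned long start, unsigned long end,
                                   void *arg)
{
        unsigned long nid;

        for (nid = start; nid < end; nid++)
                pr_info("initializing node %lu\n", nid);
}

static void __init parallel_node_init(void)
{
        struct padata_mt_job job = {
                .thread_fn   = __init_one_node,
                .fn_arg      = NULL,
                .start       = 0,                        /* first node id */
                .size        = num_node_state(N_MEMORY), /* number of nodes */
                .align       = 1,                        /* no alignment needed */
                .min_chunk   = 1,                        /* >= 1 node per thread */
                .max_threads = num_node_state(N_MEMORY),
        };

        /* Blocks until every chunk has been processed. */
        padata_do_multithreaded(&job);
}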
This patch achieves a 40%-50% improvement in initialization time:

test               no patch(ms)   patched(ms)   saved
-----------------  -------------  ------------  -------
256c2t(4 node) 1G  2679           1582          40.95%
128c1t(2 node) 1G  3160           1618          48.80%

Signed-off-by: Gang Li <gang.li@linux.dev>
---
 include/linux/hugetlb.h |  2 +-
 mm/hugetlb.c            | 40 +++++++++++++++++++++++++++++++++-------
 2 files changed, 34 insertions(+), 8 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d3acecc5db4b3..ca94c43a63b84 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -178,7 +178,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
 extern int sysctl_hugetlb_shm_group;
-extern struct list_head huge_boot_pages;
+extern struct list_head huge_boot_pages[MAX_NUMNODES];
 
 /* arch callbacks */
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 033e359fdb86b..eb33cb15dce61 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -69,7 +69,7 @@ static bool hugetlb_cma_folio(struct folio *folio, unsigned int order)
 #endif
 static unsigned long hugetlb_cma_size __initdata;
 
-__initdata LIST_HEAD(huge_boot_pages);
+__initdata struct list_head huge_boot_pages[MAX_NUMNODES];
 
 /* for command line parsing */
 static struct hstate * __initdata parsed_hstate;
@@ -3331,7 +3331,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 		huge_page_size(h) - PAGE_SIZE);
 	/* Put them into a private list first because mem_map is not up yet */
 	INIT_LIST_HEAD(&m->list);
-	list_add(&m->list, &huge_boot_pages);
+	list_add(&m->list, &huge_boot_pages[node]);
 	m->hstate = h;
 	return 1;
 }
@@ -3382,8 +3382,6 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 	/* Send list for bulk vmemmap optimization processing */
 	hugetlb_vmemmap_optimize_folios(h, folio_list);
 
-	/* Add all new pool pages to free lists in one lock cycle */
-	spin_lock_irqsave(&hugetlb_lock, flags);
 	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
 		if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
 			/*
@@ -3396,23 +3394,27 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 					HUGETLB_VMEMMAP_RESERVE_PAGES,
 					pages_per_huge_page(h));
 		}
+		/* Subdivide locks to achieve better parallel performance */
+		spin_lock_irqsave(&hugetlb_lock, flags);
 		__prep_account_new_huge_page(h, folio_nid(folio));
 		enqueue_hugetlb_folio(h, folio);
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
-	spin_unlock_irqrestore(&hugetlb_lock, flags);
 }
 
 /*
  * Put bootmem huge pages into the standard lists after mem_map is up.
  * Note: This only applies to gigantic (order > MAX_ORDER) pages.
 */
-static void __init gather_bootmem_prealloc(void)
+static void __init __gather_bootmem_prealloc(unsigned long start, unsigned long end, void *arg)
+
 {
+	int nid = start;
 	LIST_HEAD(folio_list);
 	struct huge_bootmem_page *m;
 	struct hstate *h = NULL, *prev_h = NULL;
 
-	list_for_each_entry(m, &huge_boot_pages, list) {
+	list_for_each_entry(m, &huge_boot_pages[nid], list) {
 		struct page *page = virt_to_page(m);
 		struct folio *folio = (void *)page;
 
@@ -3445,6 +3447,22 @@ static void __init gather_bootmem_prealloc(void)
 	prep_and_add_bootmem_folios(h, &folio_list);
 }
 
+static void __init gather_bootmem_prealloc(void)
+{
+	struct padata_mt_job job = {
+		.thread_fn	= __gather_bootmem_prealloc,
+		.fn_arg		= NULL,
+		.start		= 0,
+		.size		= num_node_state(N_MEMORY),
+		.align		= 1,
+		.min_chunk	= 1,
+		.max_threads	= num_node_state(N_MEMORY),
+		.numa_aware	= true,
+	};
+
+	padata_do_multithreaded(&job);
+}
+
 static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 {
 	unsigned long i;
@@ -3597,6 +3615,14 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 		return;
 	}
 
+	/* hugetlb_hstate_alloc_pages will be called many times, init huge_boot_pages once */
+	if (huge_boot_pages[0].next == NULL) {
+		int i = 0;
+
+		for (i = 0; i < MAX_NUMNODES; i++)
+			INIT_LIST_HEAD(&huge_boot_pages[i]);
+	}
+
 	/* do node specific alloc */
 	if (hugetlb_hstate_alloc_pages_node_specific(h))
 		return;
-- 
2.30.2
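
For context on the locking change in prep_and_add_bootmem_folios()
above: the old code held hugetlb_lock across the whole loop, which
serializes the per-node worker threads, while the patch takes the lock
per folio so that workers can interleave. A before/after sketch of that
pattern (illustrative only, not code from the patch; process_one() is a
hypothetical stand-in for the accounting and enqueue steps):

/* process_one() stands in for __prep_account_new_huge_page() +
 * enqueue_hugetlb_folio().
 */
static void process_one(struct hstate *h, struct folio *folio);

/* Before: one lock cycle covers the whole list. */
static void add_folios_coarse(struct hstate *h, struct list_head *list)
{
        struct folio *folio, *tmp;
        unsigned long flags;

        spin_lock_irqsave(&hugetlb_lock, flags);
        list_for_each_entry_safe(folio, tmp, list, lru)
                process_one(h, folio);
        spin_unlock_irqrestore(&hugetlb_lock, flags);
}

/* After: lock per item, so concurrent threads can make progress
 * between iterations, at the cost of more lock/unlock operations.
 */
static void add_folios_fine(struct hstate *h, struct list_head *list)
{
        struct folio *folio, *tmp;
        unsigned long flags;

        list_for_each_entry_safe(folio, tmp, list, lru) {
                spin_lock_irqsave(&hugetlb_lock, flags);
                process_one(h, folio);
                spin_unlock_irqrestore(&hugetlb_lock, flags);
        }
}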