From nobody Wed Dec 24 12:14:40 2025
From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v5 1/7] hugetlb: code clean for hugetlb_hstate_alloc_pages
Date: Fri, 26 Jan 2024 23:24:05 +0800
Message-Id: <20240126152411.1238072-2-gang.li@linux.dev>
In-Reply-To: <20240126152411.1238072-1-gang.li@linux.dev>
References: <20240126152411.1238072-1-gang.li@linux.dev>

The readability of hugetlb_hstate_alloc_pages() is poor. By cleaning the
code, its readability can be improved, facilitating future modifications.

This patch extracts two functions to reduce the complexity of
hugetlb_hstate_alloc_pages() and has no functional changes.

- hugetlb_hstate_alloc_pages_specific_nodes() handles iterating through
  each online node and performing allocation if necessary.
- hugetlb_hstate_alloc_pages_errcheck() reports errors during allocation,
  and the value of h->max_huge_pages is updated accordingly.
Signed-off-by: Gang Li <gang.li@linux.dev>
Tested-by: David Rientjes
Reviewed-by: Muchun Song
Reviewed-by: Tim Chen
---
 mm/hugetlb.c | 46 +++++++++++++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2cf78218dfe2e..20d0494424780 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3482,6 +3482,33 @@ static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 	h->max_huge_pages_node[nid] = i;
 }
 
+static bool __init hugetlb_hstate_alloc_pages_specific_nodes(struct hstate *h)
+{
+	int i;
+	bool node_specific_alloc = false;
+
+	for_each_online_node(i) {
+		if (h->max_huge_pages_node[i] > 0) {
+			hugetlb_hstate_alloc_pages_onenode(h, i);
+			node_specific_alloc = true;
+		}
+	}
+
+	return node_specific_alloc;
+}
+
+static void __init hugetlb_hstate_alloc_pages_errcheck(unsigned long allocated, struct hstate *h)
+{
+	if (allocated < h->max_huge_pages) {
+		char buf[32];
+
+		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
+		pr_warn("HugeTLB: allocating %lu of page size %s failed.  Only allocated %lu hugepages.\n",
+			h->max_huge_pages, buf, allocated);
+		h->max_huge_pages = allocated;
+	}
+}
+
 /*
  * NOTE: this routine is called in different contexts for gigantic and
  * non-gigantic pages.
@@ -3499,7 +3526,6 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	struct folio *folio;
 	LIST_HEAD(folio_list);
 	nodemask_t *node_alloc_noretry;
-	bool node_specific_alloc = false;
 
 	/* skip gigantic hugepages allocation if hugetlb_cma enabled */
 	if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3508,14 +3534,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	}
 
 	/* do node specific alloc */
-	for_each_online_node(i) {
-		if (h->max_huge_pages_node[i] > 0) {
-			hugetlb_hstate_alloc_pages_onenode(h, i);
-			node_specific_alloc = true;
-		}
-	}
-
-	if (node_specific_alloc)
+	if (hugetlb_hstate_alloc_pages_specific_nodes(h))
 		return;
 
 	/* below will do all node balanced alloc */
@@ -3558,14 +3577,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	/* list will be empty if hstate_is_gigantic */
 	prep_and_add_allocated_folios(h, &folio_list);
 
-	if (i < h->max_huge_pages) {
-		char buf[32];
-
-		string_get_size(huge_page_size(h), 1, STRING_UNITS_2, buf, 32);
-		pr_warn("HugeTLB: allocating %lu of page size %s failed.  Only allocated %lu hugepages.\n",
-			h->max_huge_pages, buf, i);
-		h->max_huge_pages = i;
-	}
+	hugetlb_hstate_alloc_pages_errcheck(i, h);
 	kfree(node_alloc_noretry);
 }
 
-- 
2.20.1

From nobody Wed Dec 24 12:14:40 2025
From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v5 2/7] hugetlb: split hugetlb_hstate_alloc_pages
Date: Fri, 26 Jan 2024 23:24:06 +0800
Message-Id: <20240126152411.1238072-3-gang.li@linux.dev>
In-Reply-To: <20240126152411.1238072-1-gang.li@linux.dev>
References: <20240126152411.1238072-1-gang.li@linux.dev>

1G and 2M huge pages have different allocation and initialization logic,
which leads to subtle differences in parallelization. Therefore, it is
appropriate to split hugetlb_hstate_alloc_pages() into gigantic and
non-gigantic variants.

This patch has no functional changes.
Signed-off-by: Gang Li <gang.li@linux.dev>
Tested-by: David Rientjes
Reviewed-by: Tim Chen
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 87 ++++++++++++++++++++++++++--------------------------
 1 file changed, 43 insertions(+), 44 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 20d0494424780..00bbf7442eb6c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3509,6 +3509,43 @@ static void __init hugetlb_hstate_alloc_pages_errcheck(unsigned long allocated,
 	}
 }
 
+static unsigned long __init hugetlb_gigantic_pages_alloc_boot(struct hstate *h)
+{
+	unsigned long i;
+
+	for (i = 0; i < h->max_huge_pages; ++i) {
+		if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
+			break;
+		cond_resched();
+	}
+
+	return i;
+}
+
+static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
+{
+	unsigned long i;
+	struct folio *folio;
+	LIST_HEAD(folio_list);
+	nodemask_t node_alloc_noretry;
+
+	/* Bit mask controlling how hard we retry per-node allocations.*/
+	nodes_clear(node_alloc_noretry);
+
+	for (i = 0; i < h->max_huge_pages; ++i) {
+		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
+						&node_alloc_noretry);
+		if (!folio)
+			break;
+		list_add(&folio->lru, &folio_list);
+		cond_resched();
+	}
+
+	prep_and_add_allocated_folios(h, &folio_list);
+
+	return i;
+}
+
 /*
  * NOTE: this routine is called in different contexts for gigantic and
  * non-gigantic pages.
@@ -3522,10 +3559,7 @@ static void __init hugetlb_hstate_alloc_pages_errcheck(unsigned long allocated,
  */
 static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 {
-	unsigned long i;
-	struct folio *folio;
-	LIST_HEAD(folio_list);
-	nodemask_t *node_alloc_noretry;
+	unsigned long allocated;
 
 	/* skip gigantic hugepages allocation if hugetlb_cma enabled */
 	if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3538,47 +3572,12 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 		return;
 
 	/* below will do all node balanced alloc */
-	if (!hstate_is_gigantic(h)) {
-		/*
-		 * Bit mask controlling how hard we retry per-node allocations.
-		 * Ignore errors as lower level routines can deal with
-		 * node_alloc_noretry == NULL.  If this kmalloc fails at boot
-		 * time, we are likely in bigger trouble.
-		 */
-		node_alloc_noretry = kmalloc(sizeof(*node_alloc_noretry),
-						GFP_KERNEL);
-	} else {
-		/* allocations done at boot time */
-		node_alloc_noretry = NULL;
-	}
-
-	/* bit mask controlling how hard we retry per-node allocations */
-	if (node_alloc_noretry)
-		nodes_clear(*node_alloc_noretry);
-
-	for (i = 0; i < h->max_huge_pages; ++i) {
-		if (hstate_is_gigantic(h)) {
-			/*
-			 * gigantic pages not added to list as they are not
-			 * added to pools now.
-			 */
-			if (!alloc_bootmem_huge_page(h, NUMA_NO_NODE))
-				break;
-		} else {
-			folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
-							node_alloc_noretry);
-			if (!folio)
-				break;
-			list_add(&folio->lru, &folio_list);
-		}
-		cond_resched();
-	}
-
-	/* list will be empty if hstate_is_gigantic */
-	prep_and_add_allocated_folios(h, &folio_list);
+	if (hstate_is_gigantic(h))
+		allocated = hugetlb_gigantic_pages_alloc_boot(h);
+	else
+		allocated = hugetlb_pages_alloc_boot(h);
 
-	hugetlb_hstate_alloc_pages_errcheck(i, h);
-	kfree(node_alloc_noretry);
+	hugetlb_hstate_alloc_pages_errcheck(allocated, h);
 }
 
 static void __init hugetlb_init_hstates(void)
-- 
2.20.1

From nobody Wed Dec 24 12:14:40 2025
From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v5 3/7] padata: dispatch works on different nodes
Date: Fri, 26 Jan 2024 23:24:07 +0800
Message-Id: <20240126152411.1238072-4-gang.li@linux.dev>
In-Reply-To: <20240126152411.1238072-1-gang.li@linux.dev>
References: <20240126152411.1238072-1-gang.li@linux.dev>

When a group of tasks that access different nodes are scheduled on the
same node, they may encounter bandwidth bottlenecks and access latency.

Thus, a numa_aware flag is introduced here, allowing tasks to be
distributed across different nodes to fully utilize the advantage of
multi-node systems.
Signed-off-by: Gang Li <gang.li@linux.dev>
Tested-by: David Rientjes
Reviewed-by: Muchun Song
Reviewed-by: Tim Chen
---
 include/linux/padata.h |  2 ++
 kernel/padata.c        | 14 ++++++++++++--
 mm/mm_init.c           |  1 +
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/include/linux/padata.h b/include/linux/padata.h
index 495b16b6b4d72..8f418711351bc 100644
--- a/include/linux/padata.h
+++ b/include/linux/padata.h
@@ -137,6 +137,7 @@ struct padata_shell {
  *               appropriate for one worker thread to do at once.
  * @max_threads: Max threads to use for the job, actual number may be less
  *               depending on task size and minimum chunk size.
+ * @numa_aware: Distribute jobs to different nodes with CPU in a round robin fashion.
  */
 struct padata_mt_job {
 	void (*thread_fn)(unsigned long start, unsigned long end, void *arg);
@@ -146,6 +147,7 @@ struct padata_mt_job {
 	unsigned long		align;
 	unsigned long		min_chunk;
 	int			max_threads;
+	bool			numa_aware;
 };
 
 /**
diff --git a/kernel/padata.c b/kernel/padata.c
index 179fb1518070c..e3f639ff16707 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -485,7 +485,8 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
 	struct padata_work my_work, *pw;
 	struct padata_mt_job_state ps;
 	LIST_HEAD(works);
-	int nworks;
+	int nworks, nid;
+	static atomic_t last_used_nid __initdata;
 
 	if (job->size == 0)
 		return;
@@ -517,7 +518,16 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
 	ps.chunk_size = roundup(ps.chunk_size, job->align);
 
 	list_for_each_entry(pw, &works, pw_list)
-		queue_work(system_unbound_wq, &pw->pw_work);
+		if (job->numa_aware) {
+			int old_node = atomic_read(&last_used_nid);
+
+			do {
+				nid = next_node_in(old_node, node_states[N_CPU]);
+			} while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid));
+			queue_work_node(nid, system_unbound_wq, &pw->pw_work);
+		} else {
+			queue_work(system_unbound_wq, &pw->pw_work);
+		}
 
 	/* Use the current thread, which saves starting a workqueue worker. */
 	padata_work_init(&my_work, padata_mt_helper, &ps, PADATA_WORK_ONSTACK);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2c19f5515e36c..549e76af8f82a 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2231,6 +2231,7 @@ static int __init deferred_init_memmap(void *data)
 		.align       = PAGES_PER_SECTION,
 		.min_chunk   = PAGES_PER_SECTION,
 		.max_threads = max_threads,
+		.numa_aware  = false,
 	};
 
 	padata_do_multithreaded(&job);
-- 
2.20.1

From nobody Wed Dec 24 12:14:40 2025
From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v5 4/7] hugetlb: pass *next_nid_to_alloc directly to for_each_node_mask_to_alloc
Date: Fri, 26 Jan 2024 23:24:08 +0800
Message-Id: <20240126152411.1238072-5-gang.li@linux.dev>
In-Reply-To: <20240126152411.1238072-1-gang.li@linux.dev>
References: <20240126152411.1238072-1-gang.li@linux.dev>

With parallelization of hugetlb allocation across different threads,
each thread works on a different node to allocate pages from, instead
of all allocating from a common node h->next_nid_to_alloc. To address
this, it's necessary to assign a separate next_nid_to_alloc for each
thread.

Consequently, hstate_next_node_to_alloc() and
for_each_node_mask_to_alloc() have been modified to directly accept a
*next_nid_to_alloc parameter, ensuring thread-specific allocation and
avoiding concurrent access issues.
Signed-off-by: Gang Li <gang.li@linux.dev>
Tested-by: David Rientjes
Reviewed-by: Tim Chen
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 00bbf7442eb6c..e4e8ffa1c145a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1464,15 +1464,15 @@ static int get_valid_node_allowed(int nid, nodemask_t *nodes_allowed)
  * next node from which to allocate, handling wrap at end of node
  * mask.
  */
-static int hstate_next_node_to_alloc(struct hstate *h,
+static int hstate_next_node_to_alloc(int *next_node,
 					nodemask_t *nodes_allowed)
 {
 	int nid;
 
 	VM_BUG_ON(!nodes_allowed);
 
-	nid = get_valid_node_allowed(h->next_nid_to_alloc, nodes_allowed);
-	h->next_nid_to_alloc = next_node_allowed(nid, nodes_allowed);
+	nid = get_valid_node_allowed(*next_node, nodes_allowed);
+	*next_node = next_node_allowed(nid, nodes_allowed);
 
 	return nid;
 }
@@ -1495,10 +1495,10 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 	return nid;
 }
 
-#define for_each_node_mask_to_alloc(hs, nr_nodes, node, mask)		\
+#define for_each_node_mask_to_alloc(next_node, nr_nodes, node, mask)	\
 	for (nr_nodes = nodes_weight(*mask);				\
 		nr_nodes > 0 &&						\
-		((node = hstate_next_node_to_alloc(hs, mask)) || 1);	\
+		((node = hstate_next_node_to_alloc(next_node, mask)) || 1);	\
 		nr_nodes--)
 
 #define for_each_node_mask_to_free(hs, nr_nodes, node, mask)		\
@@ -2350,12 +2350,13 @@ static void prep_and_add_allocated_folios(struct hstate *h,
  */
 static struct folio *alloc_pool_huge_folio(struct hstate *h,
 					nodemask_t *nodes_allowed,
-					nodemask_t *node_alloc_noretry)
+					nodemask_t *node_alloc_noretry,
+					int *next_node)
 {
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
 	int nr_nodes, node;
 
-	for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
+	for_each_node_mask_to_alloc(next_node, nr_nodes, node, nodes_allowed) {
 		struct folio *folio;
 
 		folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, node,
@@ -3310,7 +3311,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 		goto found;
 	}
 	/* allocate from next node when distributing huge pages */
-	for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
+	for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, &node_states[N_MEMORY]) {
 		m = memblock_alloc_try_nid_raw(
 				huge_page_size(h), huge_page_size(h),
 				0, MEMBLOCK_ALLOC_ACCESSIBLE, node);
@@ -3679,7 +3680,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
 	VM_BUG_ON(delta != -1 && delta != 1);
 
 	if (delta < 0) {
-		for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
+		for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, nodes_allowed) {
 			if (h->surplus_huge_pages_node[node])
 				goto found;
 		}
@@ -3794,7 +3795,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 		cond_resched();
 
 		folio = alloc_pool_huge_folio(h, nodes_allowed,
-						node_alloc_noretry);
+						node_alloc_noretry,
+						&h->next_nid_to_alloc);
 		if (!folio) {
 			prep_and_add_allocated_folios(h, &page_list);
 			spin_lock_irq(&hugetlb_lock);
-- 
2.20.1

From nobody Wed Dec 24 12:14:40 2025
From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v5 5/7] hugetlb: have CONFIG_HUGETLBFS select CONFIG_PADATA
Date: Fri, 26 Jan 2024 23:24:09 +0800
Message-Id: <20240126152411.1238072-6-gang.li@linux.dev>
In-Reply-To: <20240126152411.1238072-1-gang.li@linux.dev>
References: <20240126152411.1238072-1-gang.li@linux.dev>

Allow hugetlb to use padata_do_multithreaded() for parallel
initialization by selecting CONFIG_PADATA.

Signed-off-by: Gang Li <gang.li@linux.dev>
Tested-by: David Rientjes
Reviewed-by: Muchun Song
---
 fs/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/Kconfig b/fs/Kconfig
index ea2f77446080e..3abc107ab2fbd 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -261,6 +261,7 @@ menuconfig HUGETLBFS
 	depends on X86 || SPARC64 || ARCH_SUPPORTS_HUGETLBFS || BROKEN
 	depends on (SYSFS || SYSCTL)
 	select MEMFD_CREATE
+	select PADATA
 	help
 	  hugetlbfs is a filesystem backing for HugeTLB pages, based on
 	  ramfs. For architectures that support it, say Y here and read
-- 
2.20.1

From nobody Wed Dec 24 12:14:40 2025
From: Gang Li <gang.li@linux.dev>
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v5 6/7] hugetlb: parallelize 2M hugetlb allocation and initialization
Date: Fri, 26 Jan 2024 23:24:10 +0800
Message-Id: <20240126152411.1238072-7-gang.li@linux.dev>
In-Reply-To: <20240126152411.1238072-1-gang.li@linux.dev>
References: <20240126152411.1238072-1-gang.li@linux.dev>

By distributing both the allocation and the initialization tasks across
multiple threads, the initialization of 2M hugetlb becomes faster,
thereby improving the boot speed. Here are some test results:

        test case          no patch(ms)   patched(ms)   saved
     -------------------   ------------   -----------   ------
     256c2T(4 node) 2M         3336          1051       68.52%
     128c1T(2 node) 2M         1943           716       63.15%

Signed-off-by: Gang Li <gang.li@linux.dev>
Tested-by: David Rientjes
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 73 ++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 56 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e4e8ffa1c145a..385840397bce5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -35,6 +35,7 @@
 #include <linux/delayacct.h>
 #include <linux/memory.h>
 #include <linux/mm_inline.h>
+#include <linux/padata.h>
 
 #include <asm/page.h>
 #include <asm/pgalloc.h>
@@ -3510,6 +3511,30 @@ static void __init hugetlb_hstate_alloc_pages_errcheck(unsigned long allocated,
 	}
 }
 
+static void __init hugetlb_pages_alloc_boot_node(unsigned long start, unsigned long end, void *arg)
+{
+	struct hstate *h = (struct hstate *)arg;
+	int i, num = end - start;
+	nodemask_t node_alloc_noretry;
+	LIST_HEAD(folio_list);
+	int next_node = first_online_node;
+
+	/* Bit mask controlling how hard we retry per-node allocations.*/
+	nodes_clear(node_alloc_noretry);
+
+	for (i = 0; i < num; ++i) {
+		struct folio *folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
+						&node_alloc_noretry, &next_node);
+		if (!folio)
+			break;
+
+		list_move(&folio->lru, &folio_list);
+		cond_resched();
+	}
+
+	prep_and_add_allocated_folios(h, &folio_list);
+}
+
 static unsigned long __init hugetlb_gigantic_pages_alloc_boot(struct hstate *h)
 {
 	unsigned long i;
@@ -3525,26 +3550,40 @@ static unsigned long __init hugetlb_gigantic_pages_alloc_boot(struct hstate *h)
 
 static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
 {
-	unsigned long i;
-	struct folio *folio;
-	LIST_HEAD(folio_list);
-	nodemask_t node_alloc_noretry;
-
-	/* Bit mask controlling how hard we retry per-node allocations.*/
-	nodes_clear(node_alloc_noretry);
+	struct padata_mt_job job = {
+		.fn_arg		= h,
+		.align		= 1,
+		.numa_aware	= true
+	};
 
-	for (i = 0; i < h->max_huge_pages; ++i) {
-		folio = alloc_pool_huge_folio(h, &node_states[N_MEMORY],
-						&node_alloc_noretry);
-		if (!folio)
-			break;
-		list_add(&folio->lru, &folio_list);
-		cond_resched();
-	}
+	job.thread_fn	= hugetlb_pages_alloc_boot_node;
+	job.start	= 0;
+	job.size	= h->max_huge_pages;
 
-	prep_and_add_allocated_folios(h, &folio_list);
+	/*
+	 * job.max_threads is twice the num_node_state(N_MEMORY),
+	 *
+	 * Tests below indicate that a multiplier of 2 significantly improves
+	 * performance, and although larger values also provide improvements,
+	 * the gains are marginal.
+	 *
+	 * Therefore, choosing 2 as the multiplier strikes a good balance between
+	 * enhancing parallel processing capabilities and maintaining efficient
+	 * resource management.
+	 *
+	 * +------------+-------+-------+-------+-------+-------+
+	 * | multiplier | 1     | 2     | 3     | 4     | 5     |
+	 * +------------+-------+-------+-------+-------+-------+
+	 * | 256G 2node | 358ms | 215ms | 157ms | 134ms | 126ms |
+	 * | 2T   4node | 979ms | 679ms | 543ms | 489ms | 481ms |
+	 * | 50G  2node |  71ms |  44ms |  37ms |  30ms |  31ms |
+	 * +------------+-------+-------+-------+-------+-------+
+	 */
+	job.max_threads	= num_node_state(N_MEMORY) * 2;
+	job.min_chunk	= h->max_huge_pages / num_node_state(N_MEMORY) / 2;
+	padata_do_multithreaded(&job);
 
-	return i;
+	return h->nr_huge_pages;
 }
 
 /*
-- 
2.20.1
From: Gang Li
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song,
	Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v5 7/7] hugetlb: parallelize 1G hugetlb initialization
Date: Fri, 26 Jan 2024 23:24:11 +0800
Message-Id: <20240126152411.1238072-8-gang.li@linux.dev>
In-Reply-To: <20240126152411.1238072-1-gang.li@linux.dev>
References: <20240126152411.1238072-1-gang.li@linux.dev>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Optimize the initialization speed of 1G huge pages through
parallelization.

1G hugetlbs are allocated from bootmem, a process that is already
very fast and does not currently require optimization. Therefore,
we focus on parallelizing only the initialization phase in
`gather_bootmem_prealloc`.
Here are some test results:

      test case       no patch(ms)   patched(ms)   saved
 ------------------- -------------- ------------- --------
  256c2T(4 node) 1G           4745          2024   57.34%
  128c1T(2 node) 1G           3358          1712   49.02%
      12T        1G          77000         18300   76.23%

Signed-off-by: Gang Li
Tested-by: David Rientjes
Reviewed-by: Muchun Song
---
 arch/powerpc/mm/hugetlbpage.c |  2 +-
 include/linux/hugetlb.h      |  2 +-
 mm/hugetlb.c                 | 44 ++++++++++++++++++++++++++++-------
 3 files changed, 38 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 0a540b37aab62..a1651d5471862 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -226,7 +226,7 @@ static int __init pseries_alloc_bootmem_huge_page(struct hstate *hstate)
 		return 0;
 	m = phys_to_virt(gpage_freearray[--nr_gpages]);
 	gpage_freearray[nr_gpages] = 0;
-	list_add(&m->list, &huge_boot_pages);
+	list_add(&m->list, &huge_boot_pages[0]);
 	m->hstate = hstate;
 	return 1;
 }
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c1ee640d87b11..77b30a8c6076b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -178,7 +178,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
 extern int sysctl_hugetlb_shm_group;
-extern struct list_head huge_boot_pages;
+extern struct list_head huge_boot_pages[MAX_NUMNODES];
 
 /* arch callbacks */
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 385840397bce5..eee0c456f6571 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -69,7 +69,7 @@ static bool hugetlb_cma_folio(struct folio *folio, unsigned int order)
 #endif
 static unsigned long hugetlb_cma_size __initdata;
 
-__initdata LIST_HEAD(huge_boot_pages);
+__initdata struct list_head huge_boot_pages[MAX_NUMNODES];
 
 /* for command line parsing */
 static struct hstate * __initdata parsed_hstate;
@@ -3301,7 +3301,7 @@ int alloc_bootmem_huge_page(struct hstate *h, int nid)
 int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 {
 	struct huge_bootmem_page *m = NULL; /* initialize for clang */
-	int nr_nodes, node;
+	int nr_nodes, node = nid;
 
 	/* do node specific alloc */
 	if (nid != NUMA_NO_NODE) {
@@ -3339,7 +3339,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 			huge_page_size(h) - PAGE_SIZE);
 	/* Put them into a private list first because mem_map is not up yet */
 	INIT_LIST_HEAD(&m->list);
-	list_add(&m->list, &huge_boot_pages);
+	list_add(&m->list, &huge_boot_pages[node]);
 	m->hstate = h;
 	return 1;
 }
@@ -3390,8 +3390,6 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 	/* Send list for bulk vmemmap optimization processing */
 	hugetlb_vmemmap_optimize_folios(h, folio_list);
 
-	/* Add all new pool pages to free lists in one lock cycle */
-	spin_lock_irqsave(&hugetlb_lock, flags);
 	list_for_each_entry_safe(folio, tmp_f, folio_list, lru) {
 		if (!folio_test_hugetlb_vmemmap_optimized(folio)) {
 			/*
@@ -3404,23 +3402,27 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
 					HUGETLB_VMEMMAP_RESERVE_PAGES,
 					pages_per_huge_page(h));
 		}
+		/* Subdivide locks to achieve better parallel performance */
+		spin_lock_irqsave(&hugetlb_lock, flags);
 		__prep_account_new_huge_page(h, folio_nid(folio));
 		enqueue_hugetlb_folio(h, folio);
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
-	spin_unlock_irqrestore(&hugetlb_lock, flags);
 }
 
 /*
  * Put bootmem huge pages into the standard lists after mem_map is up.
  * Note: This only applies to gigantic (order > MAX_PAGE_ORDER) pages.
 */
-static void __init gather_bootmem_prealloc(void)
+static void __init gather_bootmem_prealloc_node(unsigned long start, unsigned long end, void *arg)
 {
+	int nid = start;
 	LIST_HEAD(folio_list);
 	struct huge_bootmem_page *m;
 	struct hstate *h = NULL, *prev_h = NULL;
 
-	list_for_each_entry(m, &huge_boot_pages, list) {
+	list_for_each_entry(m, &huge_boot_pages[nid], list) {
 		struct page *page = virt_to_page(m);
 		struct folio *folio = (void *)page;
 
@@ -3453,6 +3455,22 @@ static void __init gather_bootmem_prealloc(void)
 	prep_and_add_bootmem_folios(h, &folio_list);
 }
 
+static void __init gather_bootmem_prealloc(void)
+{
+	struct padata_mt_job job = {
+		.thread_fn	= gather_bootmem_prealloc_node,
+		.fn_arg		= NULL,
+		.start		= 0,
+		.size		= num_node_state(N_MEMORY),
+		.align		= 1,
+		.min_chunk	= 1,
+		.max_threads	= num_node_state(N_MEMORY),
+		.numa_aware	= true,
+	};
+
+	padata_do_multithreaded(&job);
+}
+
 static void __init hugetlb_hstate_alloc_pages_onenode(struct hstate *h, int nid)
 {
 	unsigned long i;
@@ -3600,6 +3618,7 @@ static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
 static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 {
 	unsigned long allocated;
+	static bool initialized __initdata;
 
 	/* skip gigantic hugepages allocation if hugetlb_cma enabled */
 	if (hstate_is_gigantic(h) && hugetlb_cma_size) {
@@ -3607,6 +3626,15 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 		return;
 	}
 
+	/* hugetlb_hstate_alloc_pages will be called many times, initialize huge_boot_pages once */
+	if (!initialized) {
+		int i = 0;
+
+		for (i = 0; i < MAX_NUMNODES; i++)
+			INIT_LIST_HEAD(&huge_boot_pages[i]);
+		initialized = true;
+	}
+
 	/* do node specific alloc */
 	if (hugetlb_hstate_alloc_pages_specific_nodes(h))
 		return;
-- 
2.20.1