From nobody Fri Dec 19 20:38:06 2025
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Mel Gorman,
	Vlastimil Babka, Sumit Semwal, Benjamin Gaignard, Brian Starkey,
	John Stultz,
	T. J. Mercier, Christian König, Zhaoyang Huang
Subject: [PATCH 1/2] mm: bring back alloc_pages_bulk_list since it is useful
Date: Tue, 14 Oct 2025 16:32:29 +0800
Message-ID: <20251014083230.1181072-2-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20251014083230.1181072-1-zhaoyang.huang@unisoc.com>
References: <20251014083230.1181072-1-zhaoyang.huang@unisoc.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Commit c8b979530f27 ("mm: alloc_pages_bulk_noprof: drop page_list argument")
removed alloc_pages_bulk_list. This commit brings it back, since it has
proven helpful to drivers that allocate pages in bulk (see patch 2 of this
series). I am aware of Matthew's comment about the time cost of iterating a
list. However, in our tests the allocation of the extra page_array itself
can be more expensive than the CPU iteration when direct reclaim kicks in
under low RAM [1]. IMHO, we could keep both variants and let users choose
between the array and the list according to their scenario. A minimal usage
sketch of the two calling styles follows the diff below.

[1]
android.hardwar-728 [002] ..... 334.573875: system_heap_do_allocate: Execution time: order 0 1 us
android.hardwar-728 [002] ..... 334.573879: system_heap_do_allocate: Execution time: order 0 2 us
android.hardwar-728 [002] ..... 334.574239: system_heap_do_allocate: Execution time: order 0 354 us
android.hardwar-728 [002] ..... 334.574247: system_heap_do_allocate: Execution time: order 0 4 us
android.hardwar-728 [002] ..... 334.574250: system_heap_do_allocate: Execution time: order 0 2 us

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 include/linux/gfp.h |  9 +++++++--
 mm/mempolicy.c      | 14 +++++++-------
 mm/page_alloc.c     | 39 +++++++++++++++++++++++++++------------
 3 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 5ebf26fcdcfa..f1540c9fcd87 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -231,6 +231,7 @@ struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_
 
 unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
+				struct list_head *page_list,
 				struct page **page_array);
 #define __alloc_pages_bulk(...)			alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
 
@@ -242,7 +243,11 @@ unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
 
 /* Bulk allocate order-0 pages */
 #define alloc_pages_bulk(_gfp, _nr_pages, _page_array) \
-	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _page_array)
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, NULL, _page_array)
+
+#define alloc_pages_bulk_list(_gfp, _nr_pages, _list) \
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _list, NULL)
+
 
 static inline unsigned long
 alloc_pages_bulk_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
@@ -251,7 +256,7 @@ alloc_pages_bulk_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, page_array);
+	return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, NULL, page_array);
 }
 
 #define alloc_pages_bulk_node(...)				\
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eb83cff7db8c..26274302ee01 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2537,13 +2537,13 @@ static unsigned long alloc_pages_bulk_interleave(gfp_t gfp,
 		if (delta) {
 			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
-					nr_pages_per_node + 1,
+					nr_pages_per_node + 1, NULL,
 					page_array);
 			delta--;
 		} else {
 			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
-					nr_pages_per_node, page_array);
+					nr_pages_per_node, NULL, page_array);
 		}
 
 		page_array += nr_allocated;
@@ -2593,7 +2593,7 @@ static unsigned long alloc_pages_bulk_weighted_interleave(gfp_t gfp,
 		if (weight && node_isset(node, nodes)) {
 			node_pages = min(rem_pages, weight);
 			nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
-							  page_array);
+							  NULL, page_array);
 			page_array += nr_allocated;
 			total_allocated += nr_allocated;
 			/* if that's all the pages, no need to interleave */
@@ -2658,7 +2658,7 @@ static unsigned long alloc_pages_bulk_weighted_interleave(gfp_t gfp,
 		if (!node_pages)
 			break;
 		nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
-						  page_array);
+						  NULL, page_array);
 		page_array += nr_allocated;
 		total_allocated += nr_allocated;
 		if (total_allocated == nr_pages)
@@ -2682,11 +2682,11 @@ static unsigned long alloc_pages_bulk_preferred_many(gfp_t gfp, int nid,
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
 	nr_allocated = alloc_pages_bulk_noprof(preferred_gfp, nid, &pol->nodes,
-					       nr_pages, page_array);
+					       nr_pages, NULL, page_array);
 
 	if (nr_allocated < nr_pages)
 		nr_allocated += alloc_pages_bulk_noprof(gfp, numa_node_id(), NULL,
-				nr_pages - nr_allocated,
+				nr_pages - nr_allocated, NULL,
 				page_array + nr_allocated);
 	return nr_allocated;
 }
@@ -2722,7 +2722,7 @@ unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
 		nid = numa_node_id();
 	nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
 	return alloc_pages_bulk_noprof(gfp, nid, nodemask,
-				       nr_pages, page_array);
+				       nr_pages, NULL, page_array);
 }
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1d037f97c5f..a95bdd8cbf5b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4940,23 +4940,28 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 }
 
 /*
- * __alloc_pages_bulk - Allocate a number of order-0 pages to an array
+ * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
  * @gfp: GFP flags for the allocation
  * @preferred_nid: The preferred NUMA node ID to allocate from
  * @nodemask: Set of nodes to allocate from, may be NULL
- * @nr_pages: The number of pages desired in the array
- * @page_array: Array to store the pages
+ * @nr_pages: The number of pages desired on the list or array
+ * @page_list: Optional list to store the allocated pages
+ * @page_array: Optional array to store the pages
  *
  * This is a batched version of the page allocator that attempts to
- * allocate nr_pages quickly. Pages are added to the page_array.
+ * allocate nr_pages quickly. Pages are added to page_list if page_list
+ * is not NULL, otherwise it is assumed that the page_array is valid.
  *
- * Note that only NULL elements are populated with pages and nr_pages
+ * For lists, nr_pages is the number of pages that should be allocated.
+ *
+ * For arrays, only NULL elements are populated with pages and nr_pages
  * is the maximum number of pages that will be stored in the array.
  *
- * Returns the number of pages in the array.
+ * Returns the number of pages on the list or array.
  */
 unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
+			struct list_head *page_list,
 			struct page **page_array)
 {
 	struct page *page;
@@ -4974,7 +4979,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 	 * Skip populated array elements to determine if any pages need
 	 * to be allocated before disabling IRQs.
 	 */
-	while (nr_populated < nr_pages && page_array[nr_populated])
+	while (page_array && nr_populated < nr_pages && page_array[nr_populated])
 		nr_populated++;
 
 	/* No pages requested? */
@@ -4982,7 +4987,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 		goto out;
 
 	/* Already populated array? */
-	if (unlikely(nr_pages - nr_populated == 0))
+	if (unlikely(page_array && nr_pages - nr_populated == 0))
 		goto out;
 
 	/* Bulk allocator does not support memcg accounting. */
@@ -5064,7 +5069,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 	while (nr_populated < nr_pages) {
 
 		/* Skip existing pages */
-		if (page_array[nr_populated]) {
+		if (page_array && page_array[nr_populated]) {
 			nr_populated++;
 			continue;
 		}
@@ -5083,7 +5088,11 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 
 		prep_new_page(page, 0, gfp, 0);
 		set_page_refcounted(page);
-		page_array[nr_populated++] = page;
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
 	}
 
 	pcp_spin_unlock(pcp);
@@ -5100,8 +5109,14 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 
 failed:
 	page = __alloc_pages_noprof(gfp, 0, preferred_nid, nodemask);
-	if (page)
-		page_array[nr_populated++] = page;
+	if (page) {
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
+	}
+
 	goto out;
 }
 EXPORT_SYMBOL_GPL(alloc_pages_bulk_noprof);
-- 
2.25.1
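
The usage sketch mentioned in the commit message (illustrative only, not
part of the patch; the helper names demo_bulk_to_list() and
demo_bulk_to_array() are made up, and error handling is kept to the bare
minimum). It only assumes the alloc_pages_bulk_list()/alloc_pages_bulk()
macros shown above:

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* List flavour: pages are linked through page->lru, no array is needed. */
static int demo_bulk_to_list(unsigned int nr)
{
	LIST_HEAD(pages);
	struct page *page, *tmp;
	unsigned long got;

	got = alloc_pages_bulk_list(GFP_KERNEL, nr, &pages);

	/* ... hand each page on &pages to the consumer here ... */

	/* teardown: unlink and free whatever was allocated */
	list_for_each_entry_safe(page, tmp, &pages, lru) {
		list_del(&page->lru);
		__free_pages(page, 0);
	}
	return got == nr ? 0 : -ENOMEM;
}

/* Array flavour: a struct page * array must be allocated up front. */
static int demo_bulk_to_array(unsigned int nr)
{
	struct page **pages;
	unsigned long got, i;

	pages = kvcalloc(nr, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	got = alloc_pages_bulk(GFP_KERNEL, nr, pages);

	/* ... hand pages[0..got) to the consumer here ... */

	for (i = 0; i < got; i++)
		__free_pages(pages[i], 0);
	kvfree(pages);
	return got == nr ? 0 : -ENOMEM;
}

The kvcalloc() in the array flavour is the extra allocation the commit
message refers to: under memory pressure it may itself enter direct
reclaim before any page is bulk-allocated, which the list flavour avoids.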