From nobody Fri Dec 19 17:20:09 2025
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Mel Gorman,
	Vlastimil Babka, Sumit Semwal, Benjamin Gaignard, Brian Starkey,
	John Stultz, T. J. Mercier, Christian König, Zhaoyang Huang
Mercier" , =?UTF-8?q?Christian=20K=C3=B6nig?= , , , , , , Zhaoyang Huang , Subject: [PATCH 1/2] mm: call back alloc_pages_bulk_list since it is useful Date: Tue, 14 Oct 2025 16:32:29 +0800 Message-ID: <20251014083230.1181072-2-zhaoyang.huang@unisoc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20251014083230.1181072-1-zhaoyang.huang@unisoc.com> References: <20251014083230.1181072-1-zhaoyang.huang@unisoc.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: SHCAS03.spreadtrum.com (10.0.1.207) To BJMBX01.spreadtrum.com (10.0.64.7) X-MAIL: SHSQR01.spreadtrum.com 59E8WlsX086161 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=unisoc.com; s=default; t=1760430782; bh=63LZGfJJc6amCQ8kyFsu44XHeulr67TgoXNrETLf0zo=; h=From:To:Subject:Date:In-Reply-To:References; b=A6BMxK4rAeYaYPIcBx+mHWlG7KIuigkXVQwQ+QYyf8pYMQ9eq1TaZ+9gEkWaordnG Gy3RD/QSsz1QQlfedaoYYiFTBj1kD9J2QtYt2mxt/nRSwI3bDlSHeKUB+w7Ty+QgId oQjhEME1ywmxLfeaOcLCesMNyuGmV7s2Pp6wwC1WW18l2kzI6ZWpfTWdb8H8i5DLY5 7KUOPMQrmAUX7BBglGhlSGP/fxlJUCyRDgCXYxGjBQEshMP8XJxPqu2jteIr6l+3iA ErL5jR5yKccLLRMbnNj4zuKS839zHMHgmQcGrRXMqlhX7As6s/yo07v54F2LKorGUe RhV4Z6d1cPSlw== Content-Type: text/plain; charset="utf-8" From: Zhaoyang Huang commit c8b979530f27 ("mm: alloc_pages_bulk_noprof: drop page_list argument") drops alloc_pages_bulk_list. This commit would like to call back it since it is proved to be helpful to the drivers which allocate a bulk of pages(see patch of 2 in this series ). I do notice that Matthew's comment of the time cost of iterating a list. However, I also observed in our test that the extra page_array's allocation could be more expensive than cpu iteration when direct reclaiming happens when ram is low[1]. IMHO, could we leave the API here to have the users choose between the array or list according to their scenarios. [1] android.hardwar-728 [002] ..... 334.573875: system_heap_do_allocate: = Execution time: order 0 1 us android.hardwar-728 [002] ..... 334.573879: system_heap_do_allocate: = Execution time: order 0 2 us android.hardwar-728 [002] ..... 334.574239: system_heap_do_allocate: = Execution time: order 0 354 us android.hardwar-728 [002] ..... 334.574247: system_heap_do_allocate: = Execution time: order 0 4 us android.hardwar-728 [002] ..... 334.574250: system_heap_do_allocate: = Execution time: order 0 2 us Signed-off-by: Zhaoyang Huang --- include/linux/gfp.h | 9 +++++++-- mm/mempolicy.c | 14 +++++++------- mm/page_alloc.c | 39 +++++++++++++++++++++++++++------------ 3 files changed, 41 insertions(+), 21 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 5ebf26fcdcfa..f1540c9fcd87 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -231,6 +231,7 @@ struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned = int order, int preferred_ =20 unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid, nodemask_t *nodemask, int nr_pages, + struct list_head *page_list, struct page **page_array); #define __alloc_pages_bulk(...) 
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 include/linux/gfp.h |  9 +++++++--
 mm/mempolicy.c      | 14 +++++++-------
 mm/page_alloc.c     | 39 +++++++++++++++++++++++++++------------
 3 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 5ebf26fcdcfa..f1540c9fcd87 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -231,6 +231,7 @@ struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_
 
 unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 				nodemask_t *nodemask, int nr_pages,
+				struct list_head *page_list,
 				struct page **page_array);
 #define __alloc_pages_bulk(...)			alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
 
@@ -242,7 +243,11 @@ unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
 
 /* Bulk allocate order-0 pages */
 #define alloc_pages_bulk(_gfp, _nr_pages, _page_array) \
-	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _page_array)
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, NULL, _page_array)
+
+#define alloc_pages_bulk_list(_gfp, _nr_pages, _list) \
+	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _list, NULL)
+
 
 static inline unsigned long
 alloc_pages_bulk_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
@@ -251,7 +256,7 @@ alloc_pages_bulk_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 
-	return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, page_array);
+	return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, NULL, page_array);
 }
 
 #define alloc_pages_bulk_node(...)				\
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eb83cff7db8c..26274302ee01 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2537,13 +2537,13 @@ static unsigned long alloc_pages_bulk_interleave(gfp_t gfp,
 		if (delta) {
 			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
-					nr_pages_per_node + 1,
+					nr_pages_per_node + 1, NULL,
 					page_array);
 			delta--;
 		} else {
 			nr_allocated = alloc_pages_bulk_noprof(gfp,
 					interleave_nodes(pol), NULL,
-					nr_pages_per_node, page_array);
+					nr_pages_per_node, NULL, page_array);
 		}
 
 		page_array += nr_allocated;
@@ -2593,7 +2593,7 @@ static unsigned long alloc_pages_bulk_weighted_interleave(gfp_t gfp,
 	if (weight && node_isset(node, nodes)) {
 		node_pages = min(rem_pages, weight);
 		nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
-						  page_array);
+						  NULL, page_array);
 		page_array += nr_allocated;
 		total_allocated += nr_allocated;
 		/* if that's all the pages, no need to interleave */
@@ -2658,7 +2658,7 @@ static unsigned long alloc_pages_bulk_weighted_interleave(gfp_t gfp,
 		if (!node_pages)
 			break;
 		nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
-						  page_array);
+						  NULL, page_array);
 		page_array += nr_allocated;
 		total_allocated += nr_allocated;
 		if (total_allocated == nr_pages)
@@ -2682,11 +2682,11 @@ static unsigned long alloc_pages_bulk_preferred_many(gfp_t gfp, int nid,
 	preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
 	nr_allocated = alloc_pages_bulk_noprof(preferred_gfp, nid, &pol->nodes,
-					       nr_pages, page_array);
+					       nr_pages, NULL, page_array);
 
 	if (nr_allocated < nr_pages)
 		nr_allocated += alloc_pages_bulk_noprof(gfp, numa_node_id(), NULL,
-				nr_pages - nr_allocated,
+				nr_pages - nr_allocated, NULL,
 				page_array + nr_allocated);
 	return nr_allocated;
 }
@@ -2722,7 +2722,7 @@ unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
 		nid = numa_node_id();
 	nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
 	return alloc_pages_bulk_noprof(gfp, nid, nodemask,
-				       nr_pages, page_array);
+				       nr_pages, NULL, page_array);
 }
 
 int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1d037f97c5f..a95bdd8cbf5b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4940,23 +4940,28 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 }
 
 /*
- * __alloc_pages_bulk - Allocate a number of order-0 pages to an array
+ * __alloc_pages_bulk - Allocate a number of order-0 pages to a list or array
  * @gfp: GFP flags for the allocation
  * @preferred_nid: The preferred NUMA node ID to allocate from
  * @nodemask: Set of nodes to allocate from, may be NULL
- * @nr_pages: The number of pages desired in the array
- * @page_array: Array to store the pages
+ * @nr_pages: The number of pages desired on the list or array
+ * @page_list: Optional list to store the allocated pages
+ * @page_array: Optional array to store the pages
  *
  * This is a batched version of the page allocator that attempts to
- * allocate nr_pages quickly. Pages are added to the page_array.
+ * allocate nr_pages quickly. Pages are added to page_list if page_list
+ * is not NULL, otherwise it is assumed that the page_array is valid.
  *
- * Note that only NULL elements are populated with pages and nr_pages
+ * For lists, nr_pages is the number of pages that should be allocated.
+ *
+ * For arrays, only NULL elements are populated with pages and nr_pages
  * is the maximum number of pages that will be stored in the array.
  *
- * Returns the number of pages in the array.
+ * Returns the number of pages on the list or array.
  */
 unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 			nodemask_t *nodemask, int nr_pages,
+			struct list_head *page_list,
 			struct page **page_array)
 {
 	struct page *page;
@@ -4974,7 +4979,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 	 * Skip populated array elements to determine if any pages need
 	 * to be allocated before disabling IRQs.
 	 */
-	while (nr_populated < nr_pages && page_array[nr_populated])
+	while (page_array && nr_populated < nr_pages && page_array[nr_populated])
 		nr_populated++;
 
 	/* No pages requested? */
@@ -4982,7 +4987,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 		goto out;
 
 	/* Already populated array? */
-	if (unlikely(nr_pages - nr_populated == 0))
+	if (unlikely(page_array && nr_pages - nr_populated == 0))
 		goto out;
 
 	/* Bulk allocator does not support memcg accounting. */
@@ -5064,7 +5069,7 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 	while (nr_populated < nr_pages) {
 
 		/* Skip existing pages */
-		if (page_array[nr_populated]) {
+		if (page_array && page_array[nr_populated]) {
 			nr_populated++;
 			continue;
 		}
@@ -5083,7 +5088,11 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 
 		prep_new_page(page, 0, gfp, 0);
 		set_page_refcounted(page);
-		page_array[nr_populated++] = page;
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
 	}
 
 	pcp_spin_unlock(pcp);
@@ -5100,8 +5109,14 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 
 failed:
 	page = __alloc_pages_noprof(gfp, 0, preferred_nid, nodemask);
-	if (page)
-		page_array[nr_populated++] = page;
+	if (page) {
+		if (page_list)
+			list_add(&page->lru, page_list);
+		else
+			page_array[nr_populated] = page;
+		nr_populated++;
+	}
+
 	goto out;
 }
 EXPORT_SYMBOL_GPL(alloc_pages_bulk_noprof);
-- 
2.25.1

From nobody Fri Dec 19 17:20:09 2025
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Mel Gorman,
	Vlastimil Babka, Sumit Semwal, Benjamin Gaignard, Brian Starkey,
	John Stultz, T. J. Mercier, Christian König, Zhaoyang Huang
Subject: [PATCH 2/2] driver: dma-buf: use alloc_pages_bulk_list for order-0 allocation
Date: Tue, 14 Oct 2025 16:32:30 +0800
Message-ID: <20251014083230.1181072-3-zhaoyang.huang@unisoc.com>
In-Reply-To: <20251014083230.1181072-1-zhaoyang.huang@unisoc.com>
References: <20251014083230.1181072-1-zhaoyang.huang@unisoc.com>
Content-Type: text/plain; charset="utf-8"

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

A single dma-buf allocation can be dozens of MB or more, which leads to
a loop that allocates several thousand order-0 pages. Furthermore,
concurrent allocations can push the dma-buf allocation into direct
reclaim during that loop. This commit mitigates both effects by using
alloc_pages_bulk_list in dma-buf's order-0 allocation.
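For context, the consumer in system_heap only ever walks the collected
pages once, in order, to populate the scatterlist, so a list fits
naturally and avoids allocating a page array. A simplified sketch of
that consumption step (a hypothetical helper mirroring the driver's
buffer setup, not the exact driver code):

	static int demo_fill_sgtable(struct sg_table *table,
				     struct list_head *pages,
				     unsigned int count)
	{
		struct scatterlist *sg;
		struct page *page, *tmp_page;

		if (sg_alloc_table(table, count, GFP_KERNEL))
			return -ENOMEM;

		sg = table->sgl;
		/* One ordered pass over the list is all that is needed. */
		list_for_each_entry_safe(page, tmp_page, pages, lru) {
			sg_set_page(sg, page, page_size(page), 0);
			sg = sg_next(sg);
		}
		return 0;
	}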
In our tests this patch proved conditionally helpful: for an 18MB
allocation it decreased the time from 24604us to 6555us, and it does no
harm when bulk allocation cannot be done (the code falls back to
single-page allocation).

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 drivers/dma-buf/heaps/system_heap.c | 36 +++++++++++++++++++----------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index bbe7881f1360..71b028c63bd8 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -300,8 +300,8 @@ static const struct dma_buf_ops system_heap_buf_ops = {
 	.release = system_heap_dma_buf_release,
 };
 
-static struct page *alloc_largest_available(unsigned long size,
-					    unsigned int max_order)
+static void alloc_largest_available(unsigned long size,
+		unsigned int max_order, unsigned int *num_pages, struct list_head *list)
 {
 	struct page *page;
 	int i;
@@ -312,12 +312,19 @@ static struct page *alloc_largest_available(unsigned long size,
 		if (max_order < orders[i])
 			continue;
 
-		page = alloc_pages(order_flags[i], orders[i]);
-		if (!page)
+		if (orders[i]) {
+			page = alloc_pages(order_flags[i], orders[i]);
+			if (page) {
+				list_add(&page->lru, list);
+				*num_pages = 1;
+			}
+		} else
+			*num_pages = alloc_pages_bulk_list(LOW_ORDER_GFP, size / PAGE_SIZE, list);
+
+		if (list_empty(list))
 			continue;
-		return page;
+		return;
 	}
-	return NULL;
 }
 
 static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
@@ -335,6 +342,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 	struct list_head pages;
 	struct page *page, *tmp_page;
 	int i, ret = -ENOMEM;
+	unsigned int num_pages;
+	LIST_HEAD(head);
 
 	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
 	if (!buffer)
@@ -348,6 +357,8 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 	INIT_LIST_HEAD(&pages);
 	i = 0;
 	while (size_remaining > 0) {
+		num_pages = 0;
+		INIT_LIST_HEAD(&head);
 		/*
 		 * Avoid trying to allocate memory if the process
 		 * has been killed by SIGKILL
@@ -357,14 +368,15 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
 			goto free_buffer;
 		}
 
-		page = alloc_largest_available(size_remaining, max_order);
-		if (!page)
+		alloc_largest_available(size_remaining, max_order, &num_pages, &head);
+		if (!num_pages)
 			goto free_buffer;
 
-		list_add_tail(&page->lru, &pages);
-		size_remaining -= page_size(page);
-		max_order = compound_order(page);
-		i++;
+		list_splice_tail(&head, &pages);
+		max_order = folio_order(lru_to_folio(&head));
+		size_remaining -= PAGE_SIZE * (num_pages << max_order);
+		i += num_pages;
+
 	}
 
 	table = &buffer->sg_table;
-- 
2.25.1
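A note on the fallback behaviour claimed above: the bulk path may
return fewer pages than requested, so a caller can top up the remainder
one page at a time. A minimal sketch (hypothetical helper, not part of
this series):

	static unsigned long fill_pages(gfp_t gfp, unsigned long nr,
					struct list_head *list)
	{
		unsigned long got = alloc_pages_bulk_list(gfp, nr, list);

		/* Bulk allocation is best-effort; fall back to single
		 * page allocation for whatever is still missing. */
		while (got < nr) {
			struct page *page = alloc_page(gfp);

			if (!page)
				break;
			list_add(&page->lru, list);
			got++;
		}
		return got;
	}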