From: Luiz Capitulino
To: linux-mm@kvack.org, mgorman@techsingularity.net, willy@infradead.org
Cc: david@redhat.com, linux-kernel@vger.kernel.org, lcapitulino@gmail.com
Subject: [PATCH v2 2/2] mm: alloc_pages_bulk: rename API
Date: Mon, 23 Dec 2024 17:00:38 -0500
Message-ID: <275a3bbc0be20fbe9002297d60045e67ab3d4ada.1734991165.git.luizcap@redhat.com>
In-Reply-To:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The previous commit removed the page_list argument from
alloc_pages_bulk_noprof() along with the alloc_pages_bulk_list() function.

Now that only the *_array() flavour of the API remains, we can do the
following renaming (along with the _noprof() ones):

  alloc_pages_bulk_array           -> alloc_pages_bulk
  alloc_pages_bulk_array_mempolicy -> alloc_pages_bulk_mempolicy
  alloc_pages_bulk_array_node      -> alloc_pages_bulk_node

Signed-off-by: Luiz Capitulino
Acked-by: David Hildenbrand
---
 drivers/staging/media/atomisp/pci/hmm/hmm_bo.c |  4 ++--
 drivers/vfio/pci/mlx5/cmd.c                    | 14 +++++++-------
 drivers/vfio/pci/virtio/migrate.c              |  6 +++---
 fs/btrfs/extent_io.c                           |  2 +-
 fs/erofs/zutil.c                               |  4 ++--
 fs/splice.c                                    |  2 +-
 fs/xfs/xfs_buf.c                               |  4 ++--
 include/linux/gfp.h                            | 14 +++++++-------
 kernel/bpf/arena.c                             |  2 +-
 lib/alloc_tag.c                                |  4 ++--
 lib/kunit_iov_iter.c                           |  2 +-
 lib/test_vmalloc.c                             |  2 +-
 mm/mempolicy.c                                 | 14 +++++++-------
 mm/vmalloc.c                                   |  4 ++--
 net/core/page_pool.c                           |  7 +++----
 net/sunrpc/svc.c                               |  4 ++--
 net/sunrpc/svc_xprt.c                          |  3 +--
 17 files changed, 45 insertions(+), 47 deletions(-)
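
For reference, a minimal sketch of how a caller uses the renamed entry
point (a hypothetical caller for illustration only, not part of the
patch; the array must start out NULL-filled, since only empty slots
are populated):

	struct page **pages;
	unsigned long nr_filled;

	/* kcalloc() zero-fills the array, as alloc_pages_bulk() expects */
	pages = kcalloc(nr, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* returns how many entries of pages[] are populated afterwards */
	nr_filled = alloc_pages_bulk(GFP_KERNEL, nr, pages);
	if (nr_filled != nr) {
		/* partial allocation: release what was obtained */
		while (nr_filled--)
			__free_page(pages[nr_filled]);
		kfree(pages);
		return -ENOMEM;
	}
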
diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
index 07ed33464d711..224ca8d42721a 100644
--- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
@@ -624,10 +624,10 @@ static int alloc_private_pages(struct hmm_buffer_object *bo)
 	const gfp_t gfp = __GFP_NOWARN | __GFP_RECLAIM | __GFP_FS;
 	int ret;
 
-	ret = alloc_pages_bulk_array(gfp, bo->pgnr, bo->pages);
+	ret = alloc_pages_bulk(gfp, bo->pgnr, bo->pages);
 	if (ret != bo->pgnr) {
 		free_pages_bulk_array(ret, bo->pages);
-		dev_err(atomisp_dev, "alloc_pages_bulk_array() failed\n");
+		dev_err(atomisp_dev, "alloc_pages_bulk() failed\n");
 		return -ENOMEM;
 	}
 
diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
index eb7387ee6ebd1..11eda6b207f13 100644
--- a/drivers/vfio/pci/mlx5/cmd.c
+++ b/drivers/vfio/pci/mlx5/cmd.c
@@ -408,7 +408,7 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf)
 				       buf->dma_dir, 0);
 	}
 
-	/* Undo alloc_pages_bulk_array() */
+	/* Undo alloc_pages_bulk() */
 	for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0)
 		__free_page(sg_page_iter_page(&sg_iter));
 	sg_free_append_table(&buf->table);
@@ -431,8 +431,8 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
 		return -ENOMEM;
 
 	do {
-		filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill,
-						page_list);
+		filled = alloc_pages_bulk(GFP_KERNEL_ACCOUNT, to_fill,
+					  page_list);
 		if (!filled) {
 			ret = -ENOMEM;
 			goto err;
@@ -1342,7 +1342,7 @@ static void free_recv_pages(struct mlx5_vhca_recv_buf *recv_buf)
 {
 	int i;
 
-	/* Undo alloc_pages_bulk_array() */
+	/* Undo alloc_pages_bulk() */
 	for (i = 0; i < recv_buf->npages; i++)
 		__free_page(recv_buf->page_list[i]);
 
@@ -1361,9 +1361,9 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf,
 		return -ENOMEM;
 
 	for (;;) {
-		filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT,
-						npages - done,
-						recv_buf->page_list + done);
+		filled = alloc_pages_bulk(GFP_KERNEL_ACCOUNT,
+					  npages - done,
+					  recv_buf->page_list + done);
 		if (!filled)
 			goto err;
 
diff --git a/drivers/vfio/pci/virtio/migrate.c b/drivers/vfio/pci/virtio/migrate.c
index ee54f4c178577..ba92bb4e9af94 100644
--- a/drivers/vfio/pci/virtio/migrate.c
+++ b/drivers/vfio/pci/virtio/migrate.c
@@ -77,8 +77,8 @@ static int virtiovf_add_migration_pages(struct virtiovf_data_buffer *buf,
 		return -ENOMEM;
 
 	do {
-		filled = alloc_pages_bulk_array(GFP_KERNEL_ACCOUNT, to_fill,
-						page_list);
+		filled = alloc_pages_bulk(GFP_KERNEL_ACCOUNT, to_fill,
+					  page_list);
 		if (!filled) {
 			ret = -ENOMEM;
 			goto err;
@@ -112,7 +112,7 @@ static void virtiovf_free_data_buffer(struct virtiovf_data_buffer *buf)
 {
 	struct sg_page_iter sg_iter;
 
-	/* Undo alloc_pages_bulk_array() */
+	/* Undo alloc_pages_bulk() */
 	for_each_sgtable_page(&buf->table.sgt, &sg_iter, 0)
 		__free_page(sg_page_iter_page(&sg_iter));
 	sg_free_append_table(&buf->table);
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index b923d0cec61c7..d70e9461fea86 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -632,7 +632,7 @@ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
 	for (allocated = 0; allocated < nr_pages;) {
 		unsigned int last = allocated;
 
-		allocated = alloc_pages_bulk_array(gfp, nr_pages, page_array);
+		allocated = alloc_pages_bulk(gfp, nr_pages, page_array);
 		if (unlikely(allocated == last)) {
 			/* No progress, fail and do cleanup. */
 			for (int i = 0; i < allocated; i++) {
diff --git a/fs/erofs/zutil.c b/fs/erofs/zutil.c
index 0dd65cefce33e..9c5aa9d536821 100644
--- a/fs/erofs/zutil.c
+++ b/fs/erofs/zutil.c
@@ -87,8 +87,8 @@ int z_erofs_gbuf_growsize(unsigned int nrpages)
 			tmp_pages[j] = gbuf->pages[j];
 		do {
 			last = j;
-			j = alloc_pages_bulk_array(GFP_KERNEL, nrpages,
-						   tmp_pages);
+			j = alloc_pages_bulk(GFP_KERNEL, nrpages,
+					     tmp_pages);
 			if (last == j)
 				goto out;
 		} while (j != nrpages);
diff --git a/fs/splice.c b/fs/splice.c
index 2898fa1e9e638..28cfa63aa2364 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -342,7 +342,7 @@ ssize_t copy_splice_read(struct file *in, loff_t *ppos,
 		return -ENOMEM;
 
 	pages = (struct page **)(bv + npages);
-	npages = alloc_pages_bulk_array(GFP_USER, npages, pages);
+	npages = alloc_pages_bulk(GFP_USER, npages, pages);
 	if (!npages) {
 		kfree(bv);
 		return -ENOMEM;
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index aa63b8efd7822..82db3ab0e8b40 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -395,8 +395,8 @@ xfs_buf_alloc_pages(
 	for (;;) {
 		long	last = filled;
 
-		filled = alloc_pages_bulk_array(gfp_mask, bp->b_page_count,
-						bp->b_pages);
+		filled = alloc_pages_bulk(gfp_mask, bp->b_page_count,
+					  bp->b_pages);
 		if (filled == bp->b_page_count) {
 			XFS_STATS_INC(bp->b_mount, xb_page_found);
 			break;
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index eebed36443b35..64c3f0729bdc6 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -215,18 +215,18 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
 				      struct page **page_array);
 #define __alloc_pages_bulk(...)			alloc_hooks(alloc_pages_bulk_noprof(__VA_ARGS__))
 
-unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
+unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
 				unsigned long nr_pages,
 				struct page **page_array);
-#define alloc_pages_bulk_array_mempolicy(...)				\
-	alloc_hooks(alloc_pages_bulk_array_mempolicy_noprof(__VA_ARGS__))
+#define alloc_pages_bulk_mempolicy(...)					\
+	alloc_hooks(alloc_pages_bulk_mempolicy_noprof(__VA_ARGS__))
 
 /* Bulk allocate order-0 pages */
-#define alloc_pages_bulk_array(_gfp, _nr_pages, _page_array)		\
+#define alloc_pages_bulk(_gfp, _nr_pages, _page_array)			\
 	__alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _page_array)
 
 static inline unsigned long
-alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
+alloc_pages_bulk_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
 				   struct page **page_array)
 {
 	if (nid == NUMA_NO_NODE)
@@ -235,8 +235,8 @@ alloc_pages_bulk_array_node_noprof(gfp_t gfp, int nid, unsigned long nr_pages,
 	return alloc_pages_bulk_noprof(gfp, nid, NULL, nr_pages, page_array);
 }
 
-#define alloc_pages_bulk_array_node(...)				\
-	alloc_hooks(alloc_pages_bulk_array_node_noprof(__VA_ARGS__))
+#define alloc_pages_bulk_node(...)					\
+	alloc_hooks(alloc_pages_bulk_node_noprof(__VA_ARGS__))
 
 static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask)
 {
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 945a5680f6a54..9927cd4c9e0ea 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -443,7 +443,7 @@ static long arena_alloc_pages(struct bpf_arena *arena, long uaddr, long page_cnt
 			return 0;
 	}
 
-	/* zeroing is needed, since alloc_pages_bulk_array() only fills in non-zero entries */
+	/* zeroing is needed, since alloc_pages_bulk() only fills in non-zero entries */
 	pages = kvcalloc(page_cnt, sizeof(struct page *), GFP_KERNEL);
 	if (!pages)
 		return 0;
diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 7dcebf118a3e6..4bb778be44764 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -420,8 +420,8 @@ static int vm_module_tags_populate(void)
 		unsigned long nr;
 
 		more_pages = ALIGN(new_end - phys_end, PAGE_SIZE) >> PAGE_SHIFT;
-		nr = alloc_pages_bulk_array_node(GFP_KERNEL | __GFP_NOWARN,
-						 NUMA_NO_NODE, more_pages, next_page);
+		nr = alloc_pages_bulk_node(GFP_KERNEL | __GFP_NOWARN,
+					   NUMA_NO_NODE, more_pages, next_page);
 		if (nr < more_pages ||
 		    vmap_pages_range(phys_end, phys_end + (nr << PAGE_SHIFT), PAGE_KERNEL,
 				     next_page, PAGE_SHIFT) < 0) {
diff --git a/lib/kunit_iov_iter.c b/lib/kunit_iov_iter.c
index 13e15687675a8..830bf3eca4c2e 100644
--- a/lib/kunit_iov_iter.c
+++ b/lib/kunit_iov_iter.c
@@ -57,7 +57,7 @@ static void *__init iov_kunit_create_buffer(struct kunit *test,
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, pages);
 	*ppages = pages;
 
-	got = alloc_pages_bulk_array(GFP_KERNEL, npages, pages);
+	got = alloc_pages_bulk(GFP_KERNEL, npages, pages);
 	if (got != npages) {
 		release_pages(pages, got);
 		KUNIT_ASSERT_EQ(test, got, npages);
diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index 4ddf769861ff7..f585949ff696e 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -373,7 +373,7 @@ vm_map_ram_test(void)
 	if (!pages)
 		return -1;
 
-	nr_allocated = alloc_pages_bulk_array(GFP_KERNEL, map_nr_pages, pages);
+	nr_allocated = alloc_pages_bulk(GFP_KERNEL, map_nr_pages, pages);
 	if (nr_allocated != map_nr_pages)
 		goto cleanup;
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 42a7b07ccc15a..69bc9019368e4 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2356,7 +2356,7 @@ struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order)
 }
 EXPORT_SYMBOL(folio_alloc_noprof);
 
-static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
+static unsigned long alloc_pages_bulk_interleave(gfp_t gfp,
 		struct mempolicy *pol, unsigned long nr_pages,
 		struct page **page_array)
 {
@@ -2391,7 +2391,7 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 	return total_allocated;
 }
 
-static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
+static unsigned long alloc_pages_bulk_weighted_interleave(gfp_t gfp,
 		struct mempolicy *pol, unsigned long nr_pages,
 		struct page **page_array)
 {
@@ -2506,7 +2506,7 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
 	return total_allocated;
 }
 
-static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
+static unsigned long alloc_pages_bulk_preferred_many(gfp_t gfp, int nid,
 		struct mempolicy *pol, unsigned long nr_pages,
 		struct page **page_array)
 {
@@ -2532,7 +2532,7 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid,
  * It can accelerate memory allocation especially interleaving
  * allocate memory.
  */
-unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
+unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
 		unsigned long nr_pages, struct page **page_array)
 {
 	struct mempolicy *pol = &default_policy;
@@ -2543,15 +2543,15 @@ unsigned long alloc_pages_bulk_array_mempolicy_noprof(gfp_t gfp,
 		pol = get_task_policy(current);
 
 	if (pol->mode == MPOL_INTERLEAVE)
-		return alloc_pages_bulk_array_interleave(gfp, pol,
+		return alloc_pages_bulk_interleave(gfp, pol,
 							 nr_pages, page_array);
 
 	if (pol->mode == MPOL_WEIGHTED_INTERLEAVE)
-		return alloc_pages_bulk_array_weighted_interleave(
+		return alloc_pages_bulk_weighted_interleave(
 				gfp, pol, nr_pages, page_array);
 
 	if (pol->mode == MPOL_PREFERRED_MANY)
-		return alloc_pages_bulk_array_preferred_many(gfp,
+		return alloc_pages_bulk_preferred_many(gfp,
 				numa_node_id(), pol, nr_pages, page_array);
 
 	nid = numa_node_id();
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5c88d0e90c209..a6e7acebe9adf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3562,11 +3562,11 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			 * but mempolicy wants to alloc memory by interleaving.
 			 */
 			if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE)
-				nr = alloc_pages_bulk_array_mempolicy_noprof(gfp,
+				nr = alloc_pages_bulk_mempolicy_noprof(gfp,
 					nr_pages_request,
 					pages + nr_allocated);
 			else
-				nr = alloc_pages_bulk_array_node_noprof(gfp, nid,
+				nr = alloc_pages_bulk_node_noprof(gfp, nid,
 					nr_pages_request,
 					pages + nr_allocated);
 
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index f89cf93f6eb45..8a91c1972dc50 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -532,12 +532,11 @@ static noinline netmem_ref __page_pool_alloc_pages_slow(struct page_pool *pool,
 	if (unlikely(pool->alloc.count > 0))
 		return pool->alloc.cache[--pool->alloc.count];
 
-	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
+	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk */
 	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
 
-	nr_pages = alloc_pages_bulk_array_node(gfp,
-					       pool->p.nid, bulk,
-					       (struct page **)pool->alloc.cache);
+	nr_pages = alloc_pages_bulk_node(gfp, pool->p.nid, bulk,
+					 (struct page **)pool->alloc.cache);
 	if (unlikely(!nr_pages))
 		return 0;
 
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 79879b7d39cb4..e7f9c295d13c0 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -651,8 +651,8 @@ svc_init_buffer(struct svc_rqst *rqstp, unsigned int size, int node)
 	if (pages > RPCSVC_MAXPAGES)
 		pages = RPCSVC_MAXPAGES;
 
-	ret = alloc_pages_bulk_array_node(GFP_KERNEL, node, pages,
-					  rqstp->rq_pages);
+	ret = alloc_pages_bulk_node(GFP_KERNEL, node, pages,
+				    rqstp->rq_pages);
 	return ret == pages;
 }
 
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index 43c57124de52f..aebc0d8ddff5c 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -671,8 +671,7 @@ static bool svc_alloc_arg(struct svc_rqst *rqstp)
 	}
 
 	for (filled = 0; filled < pages; filled = ret) {
-		ret = alloc_pages_bulk_array(GFP_KERNEL, pages,
-					     rqstp->rq_pages);
+		ret = alloc_pages_bulk(GFP_KERNEL, pages, rqstp->rq_pages);
 		if (ret > filled)
 			/* Made progress, don't sleep yet */
 			continue;
-- 
2.47.1
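
A note on the calling convention the btrfs, xfs, and sunrpc call sites
above rely on: alloc_pages_bulk() skips array slots that are already
populated and returns the total number of populated entries, so those
callers simply retry until the array is full and only give up when a
pass makes no progress. A minimal sketch of that idiom (hypothetical
caller for illustration only, not part of the patch):

	unsigned long filled = 0, last;

	for (;;) {
		last = filled;
		/* slots already populated in pages[] are skipped on retry */
		filled = alloc_pages_bulk(GFP_KERNEL, nr, pages);
		if (filled == nr)
			break;			/* array fully populated */
		if (filled == last)
			return -ENOMEM;		/* no progress this pass */
	}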