From: Yunsheng Lin <linyunsheng@huawei.com>
CC: Yunsheng Lin, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand,
    Eric Dumazet, Jesse Brandeburg, Tony Nguyen, Sunil Goutham,
    Geetha sowjanya, Subbaraya Sundeep, hariprasad, Felix Fietkau,
    Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
    AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
    Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni,
    "Michael S. Tsirkin", Jason Wang, Andrew Morton,
    Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
    John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman,
    Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo,
    Jiri Olsa, David Howells, Marc Dionne, Trond Myklebust,
    Anna Schumaker, Chuck Lever, Jeff Layton, Neil Brown,
    Olga Kornievskaia, Dai Ngo, Tom Talpey
Subject: [PATCH RFC 04/10] mm: page_frag: add '_va' suffix to page_frag API
Date: Thu, 28 Mar 2024 21:38:33 +0800
Message-ID: <20240328133839.13620-5-linyunsheng@huawei.com>
In-Reply-To: <20240328133839.13620-1-linyunsheng@huawei.com>

Currently most of the page_frag API returns a 'virtual address' as
output or expects a 'virtual address' as input. In order to
differentiate the APIs that handle a 'virtual address' from the ones
that handle a 'struct page', add a '_va' suffix to the corresponding
APIs, mirroring the page_pool_alloc_va() API of page_pool.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
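For reviewers, a minimal usage sketch of the renamed allocate/free pair;
the cache and helper names below are illustrative only, not part of this
patch:

#include <linux/gfp.h>
#include <linux/page_frag_cache.h>

/* Illustrative driver-private fragment cache. */
static struct page_frag_cache my_frag_cache;

static void *my_alloc_frag(unsigned int fragsz)
{
	/* The '_va' allocator hands back a virtual address. */
	return page_frag_alloc_va(&my_frag_cache, fragsz, GFP_ATOMIC);
}

static void my_free_frag(void *va)
{
	/* The matching free consumes that virtual address. */
	page_frag_free_va(va);
}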
Tsirkin" , Jason Wang , Andrew Morton , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Andrii Nakryiko , Martin KaFai Lau , Eduard Zingerman , Song Liu , Yonghong Song , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , David Howells , Marc Dionne , Trond Myklebust , Anna Schumaker , Chuck Lever , Jeff Layton , Neil Brown , Olga Kornievskaia , Dai Ngo , Tom Talpey , , , , , , , , , , Subject: [PATCH RFC 04/10] mm: page_frag: add '_va' suffix to page_frag API Date: Thu, 28 Mar 2024 21:38:33 +0800 Message-ID: <20240328133839.13620-5-linyunsheng@huawei.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20240328133839.13620-1-linyunsheng@huawei.com> References: <20240328133839.13620-1-linyunsheng@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To dggpemm500005.china.huawei.com (7.185.36.74) Content-Type: text/plain; charset="utf-8" Currently most of the API for page_frag API is returning 'virtual address' as output or expecting 'virtual address' as input, in order to differentiate the API handling between 'virtual address' and 'struct page', add '_va' suffix to the corresponding API mirroring the page_pool_alloc_va() API of the page_pool. Signed-off-by: Yunsheng Lin --- drivers/net/ethernet/google/gve/gve_rx.c | 4 ++-- drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +- drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +- drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +- .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++-- .../marvell/octeontx2/nic/otx2_common.c | 2 +- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++-- drivers/nvme/host/tcp.c | 8 +++---- drivers/nvme/target/tcp.c | 22 ++++++++--------- drivers/vhost/net.c | 6 ++--- include/linux/page_frag_cache.h | 24 ++++++++++--------- include/linux/skbuff.h | 2 +- kernel/bpf/cpumap.c | 2 +- mm/page_frag_alloc.c | 10 ++++---- net/core/skbuff.c | 15 ++++++------ net/core/xdp.c | 2 +- net/rxrpc/txbuf.c | 15 ++++++------ net/sunrpc/svcsock.c | 4 ++-- 18 files changed, 67 insertions(+), 63 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/etherne= t/google/gve/gve_rx.c index 20f5a9e7fae9..58091de93430 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -687,7 +687,7 @@ static int gve_xdp_redirect(struct net_device *dev, str= uct gve_rx_ring *rx, =20 total_len =3D headroom + SKB_DATA_ALIGN(len) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - frame =3D page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC); + frame =3D page_frag_alloc_va(&rx->page_cache, total_len, GFP_ATOMIC); if (!frame) { u64_stats_update_begin(&rx->statss); rx->xdp_alloc_fails++; @@ -700,7 +700,7 @@ static int gve_xdp_redirect(struct net_device *dev, str= uct gve_rx_ring *rx, =20 err =3D xdp_do_redirect(dev, &new, xdp_prog); if (err) - page_frag_free(frame); + page_frag_free_va(frame); =20 return err; } diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethern= et/intel/ice/ice_txrx.c index 97d41d6ebf1f..87f23995b657 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -126,7 +126,7 @@ ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, str= uct ice_tx_buf *tx_buf) dev_kfree_skb_any(tx_buf->skb); break; case ICE_TX_BUF_XDP_TX: - page_frag_free(tx_buf->raw_buf); + 
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index af955b0e5dc5..65ad1757824f 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -148,7 +148,7 @@ static inline int ice_skb_pad(void)
  * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree()
  * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA
  * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats
- * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats
+ * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free_va(), stats
 * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats
 * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats
 */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index f8f1d2bdc1be..312f351ac601 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -279,7 +279,7 @@ ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf,
 
 	switch (tx_buf->type) {
 	case ICE_TX_BUF_XDP_TX:
-		page_frag_free(tx_buf->raw_buf);
+		page_frag_free_va(tx_buf->raw_buf);
 		break;
 	case ICE_TX_BUF_XDP_XMIT:
 		xdp_return_frame_bulk(tx_buf->xdpf, bq);
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index 9c960017a6de..f781c5f202c9 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -303,7 +303,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
 
 		/* free the skb */
 		if (ring_is_xdp(tx_ring))
-			page_frag_free(tx_buffer->data);
+			page_frag_free_va(tx_buffer->data);
 		else
 			napi_consume_skb(tx_buffer->skb, napi_budget);
 
@@ -2413,7 +2413,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring)
 
 		/* Free all the Tx ring sk_buffs */
 		if (ring_is_xdp(tx_ring))
-			page_frag_free(tx_buffer->data);
+			page_frag_free_va(tx_buffer->data);
 		else
 			dev_kfree_skb_any(tx_buffer->skb);
 
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index a85ac039d779..8eb5820b8a70 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -553,7 +553,7 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
 	*dma = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
 				    DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
 	if (unlikely(dma_mapping_error(pfvf->dev, *dma))) {
-		page_frag_free(buf);
+		page_frag_free_va(buf);
 		return -ENOMEM;
 	}
 
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index 7063c78bd35f..c4228719f8a4 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -142,8 +142,8 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
 		dma_addr_t addr;
 		void *buf;
 
-		buf = page_frag_alloc(&q->cache, q->buf_size,
-				      GFP_ATOMIC | GFP_DMA32);
+		buf = page_frag_alloc_va(&q->cache, q->buf_size,
+					 GFP_ATOMIC | GFP_DMA32);
 		if (!buf)
 			break;
 
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 3692b56cb58d..ceb0d2d1497a 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -492,7 +492,7 @@ static void nvme_tcp_exit_request(struct blk_mq_tag_set *set,
 {
 	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
 
-	page_frag_free(req->pdu);
+	page_frag_free_va(req->pdu);
 }
 
 static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
@@ -506,7 +506,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
 	struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx];
 	u8 hdgst = nvme_tcp_hdgst_len(queue);
 
-	req->pdu = page_frag_alloc(&queue->pf_cache,
+	req->pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
 			GFP_KERNEL | __GFP_ZERO);
 	if (!req->pdu)
@@ -1323,7 +1323,7 @@ static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl)
 {
 	struct nvme_tcp_request *async = &ctrl->async_req;
 
-	page_frag_free(async->pdu);
+	page_frag_free_va(async->pdu);
 }
 
 static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
@@ -1332,7 +1332,7 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
 	struct nvme_tcp_request *async = &ctrl->async_req;
 	u8 hdgst = nvme_tcp_hdgst_len(queue);
 
-	async->pdu = page_frag_alloc(&queue->pf_cache,
+	async->pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
 			GFP_KERNEL | __GFP_ZERO);
 	if (!async->pdu)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 2aa5762e9f50..a236e9fe145d 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1461,24 +1461,24 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
 	c->queue = queue;
 	c->req.port = queue->port->nport;
 
-	c->cmd_pdu = page_frag_alloc(&queue->pf_cache,
+	c->cmd_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->cmd_pdu)
 		return -ENOMEM;
 	c->req.cmd = &c->cmd_pdu->cmd;
 
-	c->rsp_pdu = page_frag_alloc(&queue->pf_cache,
+	c->rsp_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->rsp_pdu)
 		goto out_free_cmd;
 	c->req.cqe = &c->rsp_pdu->cqe;
 
-	c->data_pdu = page_frag_alloc(&queue->pf_cache,
+	c->data_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->data_pdu)
 		goto out_free_rsp;
 
-	c->r2t_pdu = page_frag_alloc(&queue->pf_cache,
+	c->r2t_pdu = page_frag_alloc_va(&queue->pf_cache,
 			sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->r2t_pdu)
 		goto out_free_data;
@@ -1493,20 +1493,20 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
 
 	return 0;
 out_free_data:
-	page_frag_free(c->data_pdu);
+	page_frag_free_va(c->data_pdu);
 out_free_rsp:
-	page_frag_free(c->rsp_pdu);
+	page_frag_free_va(c->rsp_pdu);
 out_free_cmd:
-	page_frag_free(c->cmd_pdu);
+	page_frag_free_va(c->cmd_pdu);
 	return -ENOMEM;
 }
 
 static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c)
 {
-	page_frag_free(c->r2t_pdu);
-	page_frag_free(c->data_pdu);
-	page_frag_free(c->rsp_pdu);
-	page_frag_free(c->cmd_pdu);
+	page_frag_free_va(c->r2t_pdu);
+	page_frag_free_va(c->data_pdu);
+	page_frag_free_va(c->rsp_pdu);
+	page_frag_free_va(c->cmd_pdu);
 }
 
 static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index c64ded183f8d..96d5ca299552 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -682,8 +682,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
 		return -ENOSPC;
 
 	buflen += SKB_DATA_ALIGN(len + pad);
-	buf = page_frag_alloc_align(&net->pf_cache, buflen, GFP_KERNEL,
-				    SMP_CACHE_BYTES);
+	buf = page_frag_alloc_va_align(&net->pf_cache, buflen, GFP_KERNEL,
+				       SMP_CACHE_BYTES);
 	if (unlikely(!buf))
 		return -ENOMEM;
 
@@ -730,7 +730,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
 	return 0;
 
 err:
-	page_frag_free(buf);
+	page_frag_free_va(buf);
 	return ret;
 }
 
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index cc0ede0912f3..9d5d86b2d3ab 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -25,27 +25,29 @@ struct page_frag_cache {
 
 void page_frag_cache_drain(struct page_frag_cache *nc);
 void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz,
-		      gfp_t gfp_mask);
+void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
+			 gfp_t gfp_mask);
 
-static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
-					    unsigned int fragsz, gfp_t gfp_mask,
-					    unsigned int align)
+static inline void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
+					       unsigned int fragsz,
+					       gfp_t gfp_mask,
+					       unsigned int align)
 {
 	nc->offset = ALIGN(nc->offset, align);
 
-	return page_frag_alloc(nc, fragsz, gfp_mask);
+	return page_frag_alloc_va(nc, fragsz, gfp_mask);
 }
 
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
-					  unsigned int fragsz, gfp_t gfp_mask,
-					  unsigned int align)
+static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
+					     unsigned int fragsz,
+					     gfp_t gfp_mask,
+					     unsigned int align)
 {
 	WARN_ON_ONCE(!is_power_of_2(align));
 
-	return __page_frag_alloc_align(nc, fragsz, gfp_mask, align);
+	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, align);
 }
 
-void page_frag_free(void *addr);
+void page_frag_free_va(void *addr);
 
 #endif
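As the header change above shows, the aligned variant only rounds
nc->offset up before delegating to page_frag_alloc_va(), so 'align'
must be a power of two or the WARN_ON_ONCE() fires. A minimal caller
sketch in the style of the vhost/net.c change above (the function name
and the 256-byte size are illustrative only):

/* Allocate a 256-byte fragment starting on a cacheline boundary. */
static void *my_alloc_frag_aligned(struct page_frag_cache *nc)
{
	return page_frag_alloc_va_align(nc, 256, GFP_KERNEL,
					SMP_CACHE_BYTES);
}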
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 074cdd29f782..70d657a7b309 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3339,7 +3339,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
 
 static inline void skb_free_frag(void *addr)
 {
-	page_frag_free(addr);
+	page_frag_free_va(addr);
 }
 
 void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align);
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index a8e34416e960..3a6a237e7dd3 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -322,7 +322,7 @@ static int cpu_map_kthread_run(void *data)
 
 			/* Bring struct page memory area to curr CPU. Read by
 			 * build_skb_around via page_is_pfmemalloc(), and when
-			 * freed written by page_frag_free call.
+			 * freed written by page_frag_free_va call.
 			 */
 			prefetchw(page);
 		}
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index 39c744c892ed..7f639af4e518 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -63,8 +63,8 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
-void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz,
-		      gfp_t gfp_mask)
+void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz,
+			 gfp_t gfp_mask)
 {
 	unsigned int size, offset;
 	struct page *page;
@@ -130,16 +130,16 @@ void *page_frag_alloc(struct page_frag_cache *nc, unsigned int fragsz,
 
 	return nc->va + offset;
 }
-EXPORT_SYMBOL(page_frag_alloc);
+EXPORT_SYMBOL(page_frag_alloc_va);
 
 /*
 * Frees a page fragment allocated out of either a compound or order 0 page.
 */
-void page_frag_free(void *addr)
+void page_frag_free_va(void *addr)
 {
 	struct page *page = virt_to_head_page(addr);
 
 	if (unlikely(put_page_testzero(page)))
 		free_unref_page(page, compound_order(page));
 }
-EXPORT_SYMBOL(page_frag_free);
+EXPORT_SYMBOL(page_frag_free_va);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4c88d7f541e4..aa3adaa2c466 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -311,7 +311,7 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align)
 
 	fragsz = SKB_DATA_ALIGN(fragsz);
 
-	return __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, align);
+	return __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC, align);
 }
 EXPORT_SYMBOL(__napi_alloc_frag_align);
 
@@ -323,14 +323,15 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align)
 	if (in_hardirq() || irqs_disabled()) {
 		struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
 
-		data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, align);
+		data = __page_frag_alloc_va_align(nc, fragsz, GFP_ATOMIC,
+						  align);
 	} else {
 		struct napi_alloc_cache *nc;
 
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache);
-		data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC,
-					       align);
+		data = __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC,
+						  align);
 		local_bh_enable();
 	}
 	return data;
@@ -740,12 +741,12 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
 
 	if (in_hardirq() || irqs_disabled()) {
 		nc = this_cpu_ptr(&netdev_alloc_cache);
-		data = page_frag_alloc(nc, len, gfp_mask);
+		data = page_frag_alloc_va(nc, len, gfp_mask);
 		pfmemalloc = nc->pfmemalloc;
 	} else {
 		local_bh_disable();
 		nc = this_cpu_ptr(&napi_alloc_cache.page);
-		data = page_frag_alloc(nc, len, gfp_mask);
+		data = page_frag_alloc_va(nc, len, gfp_mask);
 		pfmemalloc = nc->pfmemalloc;
 		local_bh_enable();
 	}
@@ -834,7 +835,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 	} else {
 		len = SKB_HEAD_ALIGN(len);
 
-		data = page_frag_alloc(&nc->page, len, gfp_mask);
+		data = page_frag_alloc_va(&nc->page, len, gfp_mask);
 		pfmemalloc = nc->page.pfmemalloc;
 	}
 
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 41693154e426..245a2d011aeb 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -391,7 +391,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
 		page_pool_put_full_page(page->pp, page, napi_direct);
 		break;
 	case MEM_TYPE_PAGE_SHARED:
-		page_frag_free(data);
+		page_frag_free_va(data);
 		break;
 	case MEM_TYPE_PAGE_ORDER0:
 		page = virt_to_page(data); /* Assumes order0 page*/
diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c
index eb640875bf07..f2fa98360789 100644
--- a/net/rxrpc/txbuf.c
+++ b/net/rxrpc/txbuf.c
@@ -34,8 +34,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_
 
 	data_align = max_t(size_t, data_align, L1_CACHE_BYTES);
 	mutex_lock(&call->conn->tx_data_alloc_lock);
-	buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp,
-				    data_align);
+	buf = page_frag_alloc_va_align(&call->conn->tx_data_alloc, total, gfp,
+				       data_align);
 	mutex_unlock(&call->conn->tx_data_alloc_lock);
 	if (!buf) {
 		kfree(txb);
@@ -97,17 +97,18 @@ struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_s
 	if (!txb)
 		return NULL;
 
-	buf = page_frag_alloc(&call->local->tx_alloc,
-			      sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp);
+	buf = page_frag_alloc_va(&call->local->tx_alloc,
+				 sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp);
 	if (!buf) {
 		kfree(txb);
 		return NULL;
 	}
 
 	if (sack_size) {
-		buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp);
+		buf2 = page_frag_alloc_va(&call->local->tx_alloc, sack_size,
+					  gfp);
 		if (!buf2) {
-			page_frag_free(buf);
+			page_frag_free_va(buf);
 			kfree(txb);
 			return NULL;
 		}
@@ -181,7 +182,7 @@ static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb)
 			  rxrpc_txbuf_free);
 	for (i = 0; i < txb->nr_kvec; i++)
 		if (txb->kvec[i].iov_base)
-			page_frag_free(txb->kvec[i].iov_base);
+			page_frag_free_va(txb->kvec[i].iov_base);
 	kfree(txb);
 	atomic_dec(&rxrpc_nr_txbuf);
 }
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 545017a3daa4..055ed38cef97 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1231,8 +1231,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp,
 	/* The stream record marker is copied into a temporary page
 	 * fragment buffer so that it can be included in rq_bvec.
 	 */
-	buf = page_frag_alloc(&svsk->sk_frag_cache, sizeof(marker),
-			      GFP_KERNEL);
+	buf = page_frag_alloc_va(&svsk->sk_frag_cache, sizeof(marker),
+				 GFP_KERNEL);
 	if (!buf)
 		return -ENOMEM;
 	memcpy(buf, &marker, sizeof(marker));
-- 
2.33.0