From: Pavel Begunkov
To: netdev@vger.kernel.org
Cc: "David S . Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Jonathan Corbet, Michael Chan, Pavan Chebbi, Andrew Lunn,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, Ilias Apalodimas, Shuah Khan, Mina Almasry,
	Stanislav Fomichev, Pavel Begunkov, Yue Haibing, David Wei,
	Haiyue Wang, Jens Axboe, Joe Damato, Simon Horman,
	Vishwanath Seshagiri, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
	linux-kselftest@vger.kernel.org, io-uring@vger.kernel.org,
	dtatulea@nvidia.com
Subject: [PATCH net-next v7 5/9] eth: bnxt: store rx buffer size per queue
Date: Sun, 30 Nov 2025 23:35:20 +0000

Instead of using a constant buffer length, allow configuring the size
for each queue separately. There is no way to change the size yet; a
later patch will pass it in from memory providers.
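For illustration, here is roughly how the per-queue size drives buffer
allocation after this patch. This is a condensed sketch of the
__bnxt_alloc_rx_page() hunk below, not literal driver code, and the
sketch_alloc_rx_page() name is made up for the example:

	/* Sketch only: the per-queue rx_page_size selects the page_pool
	 * allocation strategy.
	 */
	static struct page *sketch_alloc_rx_page(struct bnxt_rx_ring_info *rxr,
						 unsigned int *offset)
	{
		struct page *page;

		if (rxr->rx_page_size < PAGE_SIZE) {
			/* Sub-page buffers are carved out of shared
			 * pages as page_pool frags.
			 */
			page = page_pool_dev_alloc_frag(rxr->page_pool, offset,
							rxr->rx_page_size);
		} else {
			/* Page-sized or larger buffers take whole
			 * (possibly high-order) pages; the pool is
			 * created with pp.order =
			 * get_order(rxr->rx_page_size).
			 */
			page = page_pool_dev_alloc_pages(rxr->page_pool);
			*offset = 0;
		}
		return page;
	}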
Suggested-by: Jakub Kicinski
Signed-off-by: Pavel Begunkov
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 56 +++++++++++--------
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |  1 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c |  6 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h |  2 +-
 4 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index d17d0ea89c36..f4c2ec243e9a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -905,7 +905,7 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
 
 static bool bnxt_separate_head_pool(struct bnxt_rx_ring_info *rxr)
 {
-	return rxr->need_head_pool || PAGE_SIZE > BNXT_RX_PAGE_SIZE;
+	return rxr->need_head_pool || rxr->rx_page_size < PAGE_SIZE;
 }
 
 static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
@@ -915,9 +915,9 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 {
 	struct page *page;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
+	if (rxr->rx_page_size < PAGE_SIZE) {
 		page = page_pool_dev_alloc_frag(rxr->page_pool, offset,
-						BNXT_RX_PAGE_SIZE);
+						rxr->rx_page_size);
 	} else {
 		page = page_pool_dev_alloc_pages(rxr->page_pool);
 		*offset = 0;
@@ -936,8 +936,9 @@ static netmem_ref __bnxt_alloc_rx_netmem(struct bnxt *bp, dma_addr_t *mapping,
 {
 	netmem_ref netmem;
 
-	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
-		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset, BNXT_RX_PAGE_SIZE, gfp);
+	if (rxr->rx_page_size < PAGE_SIZE) {
+		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset,
+						     rxr->rx_page_size, gfp);
 	} else {
 		netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
 		*offset = 0;
@@ -1155,9 +1156,9 @@ static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, rxr->rx_page_size,
 				bp->rx_dir);
-	skb = napi_build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
+	skb = napi_build_skb(data_ptr - bp->rx_offset, rxr->rx_page_size);
 	if (!skb) {
 		page_pool_recycle_direct(rxr->page_pool, page);
 		return NULL;
@@ -1189,7 +1190,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 		return NULL;
 	}
 	dma_addr -= bp->rx_dma_offset;
-	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
+	dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, rxr->rx_page_size,
 				bp->rx_dir);
 
 	if (unlikely(!payload))
@@ -1203,7 +1204,7 @@ static struct sk_buff *bnxt_rx_page_skb(struct bnxt *bp,
 
 	skb_mark_for_recycle(skb);
 	off = (void *)data_ptr - page_address(page);
-	skb_add_rx_frag(skb, 0, page, off, len, BNXT_RX_PAGE_SIZE);
+	skb_add_rx_frag(skb, 0, page, off, len, rxr->rx_page_size);
 	memcpy(skb->data - NET_IP_ALIGN, data_ptr - NET_IP_ALIGN,
 	       payload + NET_IP_ALIGN);
 
@@ -1288,7 +1289,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 		if (skb) {
 			skb_add_rx_frag_netmem(skb, i, cons_rx_buf->netmem,
 					       cons_rx_buf->offset,
-					       frag_len, BNXT_RX_PAGE_SIZE);
+					       frag_len, rxr->rx_page_size);
 		} else {
 			skb_frag_t *frag = &shinfo->frags[i];
 
@@ -1313,7 +1314,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 			if (skb) {
 				skb->len -= frag_len;
 				skb->data_len -= frag_len;
-				skb->truesize -= BNXT_RX_PAGE_SIZE;
+				skb->truesize -= rxr->rx_page_size;
 			}
 
 			--shinfo->nr_frags;
@@ -1328,7 +1329,7 @@ static u32 __bnxt_rx_agg_netmems(struct bnxt *bp,
 		}
 
 		page_pool_dma_sync_netmem_for_cpu(rxr->page_pool, netmem, 0,
-						  BNXT_RX_PAGE_SIZE);
+						  rxr->rx_page_size);
 
 		total_frag_len += frag_len;
 		prod = NEXT_RX_AGG(prod);
@@ -2281,8 +2282,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 			if (!skb)
 				goto oom_next_rx;
 		} else {
-			skb = bnxt_xdp_build_skb(bp, skb, agg_bufs,
-						 rxr->page_pool, &xdp);
+			skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, rxr, &xdp);
 			if (!skb) {
 				/* we should be able to free the old skb here */
 				bnxt_xdp_buff_frags_free(rxr, &xdp);
@@ -3828,11 +3828,13 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	pp.pool_size = bp->rx_agg_ring_size / agg_size_fac;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size / rx_size_fac;
+
+	pp.order = get_order(rxr->rx_page_size);
 	pp.nid = numa_node;
 	pp.netdev = bp->dev;
 	pp.dev = &bp->pdev->dev;
 	pp.dma_dir = bp->rx_dir;
-	pp.max_len = PAGE_SIZE;
+	pp.max_len = PAGE_SIZE << pp.order;
 	pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV |
 		   PP_FLAG_ALLOW_UNREADABLE_NETMEM;
 	pp.queue_idx = rxr->bnapi->index;
@@ -3843,7 +3845,10 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	rxr->page_pool = pool;
 
 	rxr->need_head_pool = page_pool_is_unreadable(pool);
+	rxr->need_head_pool |= !!pp.order;
 	if (bnxt_separate_head_pool(rxr)) {
+		pp.order = 0;
+		pp.max_len = PAGE_SIZE;
 		pp.pool_size = min(bp->rx_ring_size / rx_size_fac, 1024);
 		pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
 		pool = page_pool_create(&pp);
@@ -4319,6 +4324,8 @@ static void bnxt_init_ring_struct(struct bnxt *bp)
 		if (!rxr)
 			goto skip_rx;
 
+		rxr->rx_page_size = BNXT_RX_PAGE_SIZE;
+
 		ring = &rxr->rx_ring_struct;
 		rmem = &ring->ring_mem;
 		rmem->nr_pages = bp->rx_nr_pages;
@@ -4478,7 +4485,7 @@ static void bnxt_init_one_rx_agg_ring_rxbd(struct bnxt *bp,
 	ring = &rxr->rx_agg_ring_struct;
 	ring->fw_ring_id = INVALID_HW_RING_ID;
 	if ((bp->flags & BNXT_FLAG_AGG_RINGS)) {
-		type = ((u32)BNXT_RX_PAGE_SIZE << RX_BD_LEN_SHIFT) |
+		type = ((u32)rxr->rx_page_size << RX_BD_LEN_SHIFT) |
			RX_BD_TYPE_RX_AGG_BD;
 
 		/* On P7, setting EOP will cause the chip to disable
@@ -7056,6 +7063,7 @@ static void bnxt_hwrm_ring_grp_free(struct bnxt *bp)
 
 static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
				       struct hwrm_ring_alloc_input *req,
+				       struct bnxt_rx_ring_info *rxr,
				       struct bnxt_ring_struct *ring)
 {
 	struct bnxt_ring_grp_info *grp_info = &bp->grp_info[ring->grp_idx];
@@ -7065,7 +7073,7 @@ static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
 	if (ring_type == HWRM_RING_ALLOC_AGG) {
 		req->ring_type = RING_ALLOC_REQ_RING_TYPE_RX_AGG;
 		req->rx_ring_id = cpu_to_le16(grp_info->rx_fw_ring_id);
-		req->rx_buf_size = cpu_to_le16(BNXT_RX_PAGE_SIZE);
+		req->rx_buf_size = cpu_to_le16(rxr->rx_page_size);
 		enables |= RING_ALLOC_REQ_ENABLES_RX_RING_ID_VALID;
 	} else {
 		req->rx_buf_size = cpu_to_le16(bp->rx_buf_use_size);
@@ -7079,6 +7087,7 @@ static void bnxt_set_rx_ring_params_p5(struct bnxt *bp, u32 ring_type,
 }
 
 static int hwrm_ring_alloc_send_msg(struct bnxt *bp,
+				    struct bnxt_rx_ring_info *rxr,
				    struct bnxt_ring_struct *ring,
				    u32 ring_type, u32 map_index)
 {
@@ -7135,7 +7144,8 @@ static int hwrm_ring_alloc_send_msg(struct bnxt *bp,
			cpu_to_le32(bp->rx_ring_mask + 1) :
			cpu_to_le32(bp->rx_agg_ring_mask + 1);
 		if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS)
-			bnxt_set_rx_ring_params_p5(bp, ring_type, req, ring);
+			bnxt_set_rx_ring_params_p5(bp, ring_type, req,
+						   rxr, ring);
 		break;
 	case HWRM_RING_ALLOC_CMPL:
 		req->ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL;
@@ -7283,7 +7293,7 @@ static int bnxt_hwrm_rx_ring_alloc(struct bnxt *bp,
 	u32 map_idx = bnapi->index;
 	int rc;
 
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, rxr, ring, type, map_idx);
 	if (rc)
 		return rc;
 
@@ -7303,7 +7313,7 @@ static int bnxt_hwrm_rx_agg_ring_alloc(struct bnxt *bp,
 	int rc;
 
 	map_idx = grp_idx + bp->rx_nr_rings;
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, rxr, ring, type, map_idx);
 	if (rc)
 		return rc;
 
@@ -7327,7 +7337,7 @@ static int bnxt_hwrm_cp_ring_alloc_p5(struct bnxt *bp,
 
 	ring = &cpr->cp_ring_struct;
 	ring->handle = BNXT_SET_NQ_HDL(cpr);
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, map_idx);
 	if (rc)
 		return rc;
 	bnxt_set_db(bp, &cpr->cp_db, type, map_idx, ring->fw_ring_id);
@@ -7342,7 +7352,7 @@ static int bnxt_hwrm_tx_ring_alloc(struct bnxt *bp,
 	const u32 type = HWRM_RING_ALLOC_TX;
 	int rc;
 
-	rc = hwrm_ring_alloc_send_msg(bp, ring, type, tx_idx);
+	rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, tx_idx);
 	if (rc)
 		return rc;
 	bnxt_set_db(bp, &txr->tx_db, type, tx_idx, ring->fw_ring_id);
@@ -7368,7 +7378,7 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
 
 		vector = bp->irq_tbl[map_idx].vector;
 		disable_irq_nosync(vector);
-		rc = hwrm_ring_alloc_send_msg(bp, ring, type, map_idx);
+		rc = hwrm_ring_alloc_send_msg(bp, NULL, ring, type, map_idx);
 		if (rc) {
 			enable_irq(vector);
 			goto err_out;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index f5f07a7e6b29..4c880a9fba92 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1107,6 +1107,7 @@ struct bnxt_rx_ring_info {
 
 	unsigned long		*rx_agg_bmap;
 	u16			rx_agg_bmap_size;
+	u16			rx_page_size;
 	bool			need_head_pool;
 
 	dma_addr_t		rx_desc_mapping[MAX_RX_PAGES];
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
index 3e77a96e5a3e..619235b151a4 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
@@ -183,7 +183,7 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
			u16 cons, u8 *data_ptr, unsigned int len,
			struct xdp_buff *xdp)
 {
-	u32 buflen = BNXT_RX_PAGE_SIZE;
+	u32 buflen = rxr->rx_page_size;
 	struct bnxt_sw_rx_bd *rx_buf;
 	struct pci_dev *pdev;
 	dma_addr_t mapping;
@@ -461,7 +461,7 @@ int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 
 struct sk_buff *
 bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags,
-		   struct page_pool *pool, struct xdp_buff *xdp)
+		   struct bnxt_rx_ring_info *rxr, struct xdp_buff *xdp)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 
@@ -469,7 +469,7 @@ bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags,
 		return NULL;
 
 	xdp_update_skb_frags_info(skb, num_frags, sinfo->xdp_frags_size,
-				  BNXT_RX_PAGE_SIZE * num_frags,
+				  rxr->rx_page_size * num_frags,
				  xdp_buff_get_skb_flags(xdp));
 	return skb;
 }
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
index 220285e190fc..8933a0dec09a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
@@ -32,6 +32,6 @@ void bnxt_xdp_buff_init(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
			      struct xdp_buff *xdp);
 struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb,
-				   u8 num_frags, struct page_pool *pool,
+				   u8 num_frags, struct bnxt_rx_ring_info *rxr,
				   struct xdp_buff *xdp);
 #endif
-- 
2.52.0