From: Roger Quadros <rogerq@kernel.org>
Date: Sun, 09 Nov 2025 23:37:54 +0200
Subject: [PATCH net-next v2 4/7] net: ethernet: ti: am65-cpsw: Add AF_XDP zero copy for RX
Message-Id: <20251109-am65-cpsw-xdp-zc-v2-4-858f60a09d12@kernel.org>
References: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>
In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Sumit Semwal,
    Christian König, Stanislav Fomichev, Simon Horman
Cc: srk@ti.com, Meghana Malladi, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
    linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, Roger Quadros

Add zero copy support to the RX path.

Introduce xsk_pool and xsk_port_id to struct am65_cpsw_rx_flow so we
can quickly check whether the flow is set up as an XSK pool and for
which port. If the RX flow is set up as an XSK pool, register it as
MEM_TYPE_XSK_BUFF_POOL. At queue creation, get free frames from the
XSK pool and push them to the RX ring.
Signed-off-by: Roger Quadros <rogerq@kernel.org>
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 317 +++++++++++++++++++++++++++----
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |  12 +-
 drivers/net/ethernet/ti/am65-cpsw-xdp.c  |  24 +++
 3 files changed, 319 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 46523be93df27710be77b288c36c1a0f66d8ca8d..afc0c8836fe242d8bf47ce9bcd3e6b725ca37bf9 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -429,6 +429,55 @@ static void am65_cpsw_nuss_ndo_host_tx_timeout(struct net_device *ndev,
 	}
 }
 
+static int am65_cpsw_nuss_rx_push_zc(struct am65_cpsw_rx_flow *flow,
+				     struct xdp_buff *xdp)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &flow->common->rx_chns;
+	struct cppi5_host_desc_t *desc_rx;
+	struct am65_cpsw_swdata *swdata;
+	u32 flow_id = flow->id;
+	dma_addr_t desc_dma;
+	dma_addr_t buf_dma;
+	int buf_len;
+
+	desc_rx = k3_cppi_desc_pool_alloc(rx_chn->desc_pool);
+	if (!desc_rx)
+		return -ENOMEM;
+
+	desc_dma = k3_cppi_desc_pool_virt2dma(rx_chn->desc_pool, desc_rx);
+	buf_dma = xsk_buff_xdp_get_dma(xdp);
+	cppi5_hdesc_init(desc_rx, CPPI5_INFO0_HDESC_EPIB_PRESENT,
+			 AM65_CPSW_NAV_PS_DATA_SIZE);
+	k3_udma_glue_rx_dma_to_cppi5_addr(rx_chn->rx_chn, &buf_dma);
+	buf_len = xsk_pool_get_rx_frame_size(flow->xsk_pool);
+	cppi5_hdesc_attach_buf(desc_rx, buf_dma, buf_len, buf_dma, buf_len);
+	swdata = cppi5_hdesc_get_swdata(desc_rx);
+	swdata->xdp = xdp;
+	swdata->flow_id = flow_id;
+
+	return k3_udma_glue_push_rx_chn(rx_chn->rx_chn, flow_id,
+					desc_rx, desc_dma);
+}
+
+static int am65_cpsw_nuss_rx_alloc_zc(struct am65_cpsw_rx_flow *flow,
+				      int budget)
+{
+	struct xdp_buff *xdp;
+	int i, ret;
+
+	for (i = 0; i < budget; i++) {
+		xdp = xsk_buff_alloc(flow->xsk_pool);
+		if (!xdp)
+			break;
+
+		ret = am65_cpsw_nuss_rx_push_zc(flow, xdp);
+		if (ret < 0)
+			break;
+	}
+
+	return i;
+}
+
 static int am65_cpsw_nuss_rx_push(struct am65_cpsw_common *common,
 				  struct page *page, u32 flow_idx)
 {
@@ -529,6 +578,9 @@ void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id, bool retain_
 		page_pool_destroy(flow->page_pool);
 		flow->page_pool = NULL;
 	}
+
+	flow->xsk_pool = NULL;
+	flow->xsk_port_id = -EINVAL;
 }
 
 static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common, bool retain_page_pool)
@@ -568,6 +620,7 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 	struct page_pool *pool;
 	struct page *page;
 	int port, ret, i;
+	int port_id;
 
 	flow = &rx_chn->flows[id];
 	pp_params.napi = &flow->napi_rx;
@@ -587,9 +640,30 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 	/* using same page pool is allowed as no running rx handlers
 	 * simultaneously for both ndevs
 	 */
+
+	/* get first port with XSK pool & XDP program set */
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			continue;
+
+		flow->xsk_pool = am65_cpsw_xsk_get_pool(&common->ports[port],
+							id);
+		if (flow->xsk_pool)
+			break;
+	}
+
+	port_id = common->ports[port].port_id;
+	flow->xsk_port_id = flow->xsk_pool ? port_id : -EINVAL;
 	for (port = 0; port < common->port_num; port++) {
 		if (!common->ports[port].ndev)
-			/* FIXME should we BUG here? */
+			continue;
+
+		port_id = common->ports[port].port_id;
+
+		/* NOTE: if queue is XSK then only register it
+		 * for the relevant port it was assigned to
+		 */
+		if (flow->xsk_pool && port_id != flow->xsk_port_id)
 			continue;
 
 		rxq = &common->ports[port].xdp_rxq[id];
@@ -598,29 +672,44 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 		if (ret)
 			goto err;
 
-		ret = xdp_rxq_info_reg_mem_model(rxq,
-						 MEM_TYPE_PAGE_POOL,
-						 pool);
+		if (flow->xsk_pool) {
+			ret = xdp_rxq_info_reg_mem_model(rxq,
+							 MEM_TYPE_XSK_BUFF_POOL,
+							 NULL);
+			xsk_pool_set_rxq_info(flow->xsk_pool, rxq);
+		} else {
+			ret = xdp_rxq_info_reg_mem_model(rxq,
+							 MEM_TYPE_PAGE_POOL,
+							 pool);
+		}
+
 		if (ret)
 			goto err;
 	}
 
-	for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
-		page = page_pool_dev_alloc_pages(flow->page_pool);
-		if (!page) {
-			dev_err(common->dev, "cannot allocate page in flow %d\n",
-				id);
-			ret = -ENOMEM;
-			goto err;
-		}
+	if (flow->xsk_pool) {
+		/* get pages from xsk_pool and push to RX ring
+		 * queue as much as possible
+		 */
+		am65_cpsw_nuss_rx_alloc_zc(flow, AM65_CPSW_MAX_RX_DESC);
+	} else {
+		for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
+			page = page_pool_dev_alloc_pages(flow->page_pool);
+			if (!page) {
+				dev_err(common->dev, "cannot allocate page in flow %d\n",
+					id);
+				ret = -ENOMEM;
+				goto err;
+			}
 
-		ret = am65_cpsw_nuss_rx_push(common, page, id);
-		if (ret < 0) {
-			dev_err(common->dev,
-				"cannot submit page to rx channel flow %d, error %d\n",
-				id, ret);
-			am65_cpsw_put_page(flow, page, false);
-			goto err;
+			ret = am65_cpsw_nuss_rx_push(common, page, id);
+			if (ret < 0) {
+				dev_err(common->dev,
+					"cannot submit page to rx channel flow %d, error %d\n",
+					id, ret);
+				am65_cpsw_put_page(flow, page, false);
+				goto err;
+			}
 		}
 	}
 
@@ -777,6 +866,8 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 	struct am65_cpsw_rx_chn *rx_chn = data;
 	struct cppi5_host_desc_t *desc_rx;
 	struct am65_cpsw_swdata *swdata;
+	struct am65_cpsw_rx_flow *flow;
+	struct xdp_buff *xdp;
 	dma_addr_t buf_dma;
 	struct page *page;
 	u32 buf_dma_len;
@@ -784,13 +875,20 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 
 	desc_rx = k3_cppi_desc_pool_dma2virt(rx_chn->desc_pool, desc_dma);
 	swdata = cppi5_hdesc_get_swdata(desc_rx);
-	page = swdata->page;
 	flow_id = swdata->flow_id;
 	cppi5_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
 	k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
 	k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
-	am65_cpsw_put_page(&rx_chn->flows[flow_id], page, false);
+	flow = &rx_chn->flows[flow_id];
+	if (flow->xsk_pool) {
+		xdp = swdata->xdp;
+		xsk_buff_free(xdp);
+	} else {
+		page = swdata->page;
+		dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len,
+				 DMA_FROM_DEVICE);
+		am65_cpsw_put_page(flow, page, false);
+	}
 }
 
 static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
@@ -1267,6 +1365,151 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info)
 	}
 }
 
+static struct sk_buff *am65_cpsw_create_skb_zc(struct am65_cpsw_rx_flow *flow,
+					       struct xdp_buff *xdp)
+{
+	unsigned int metasize = xdp->data - xdp->data_meta;
+	unsigned int datasize = xdp->data_end - xdp->data;
+	struct sk_buff *skb;
+
+	skb = napi_alloc_skb(&flow->napi_rx,
+			     xdp->data_end - xdp->data_hard_start);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
+	return skb;
+}
+
+static void am65_cpsw_dispatch_skb_zc(struct am65_cpsw_rx_flow *flow,
+				      struct am65_cpsw_port *port,
+				      struct xdp_buff *xdp, u32 csum_info)
+{
+	struct am65_cpsw_common *common = flow->common;
+	unsigned int len = xdp->data_end - xdp->data;
+	struct am65_cpsw_ndev_priv *ndev_priv;
+	struct net_device *ndev = port->ndev;
+	struct sk_buff *skb;
+
+	skb = am65_cpsw_create_skb_zc(flow, xdp);
+	if (!skb) {
+		ndev->stats.rx_dropped++;
+		return;
+	}
+
+	ndev_priv = netdev_priv(ndev);
+	am65_cpsw_nuss_set_offload_fwd_mark(skb, ndev_priv->offload_fwd_mark);
+	if (port->rx_ts_enabled)
+		am65_cpts_rx_timestamp(common->cpts, skb);
+
+	skb_mark_for_recycle(skb);
+	skb->protocol = eth_type_trans(skb, ndev);
+	am65_cpsw_nuss_rx_csum(skb, csum_info);
+	napi_gro_receive(&flow->napi_rx, skb);
+	dev_sw_netstats_rx_add(ndev, len);
+}
+
+static int am65_cpsw_nuss_rx_zc(struct am65_cpsw_rx_flow *flow, int budget)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &flow->common->rx_chns;
+	u32 buf_dma_len, pkt_len, port_id = 0, csum_info;
+	struct am65_cpsw_common *common = flow->common;
+	struct cppi5_host_desc_t *desc_rx;
+	struct device *dev = common->dev;
+	struct am65_cpsw_swdata *swdata;
+	dma_addr_t desc_dma, buf_dma;
+	struct am65_cpsw_port *port;
+	struct net_device *ndev;
+	u32 flow_idx = flow->id;
+	struct xdp_buff *xdp;
+	int count = 0;
+	int xdp_status = 0;
+	u32 *psdata;
+	int ret;
+
+	while (count < budget) {
+		ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx,
+					      &desc_dma);
+		if (ret) {
+			if (ret != -ENODATA)
+				dev_err(dev, "RX: pop chn fail %d\n",
+					ret);
+			break;
+		}
+
+		if (cppi5_desc_is_tdcm(desc_dma)) {
+			dev_dbg(dev, "%s RX tdown flow: %u\n",
+				__func__, flow_idx);
+			if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ)
+				complete(&common->tdown_complete);
+			continue;
+		}
+
+		desc_rx = k3_cppi_desc_pool_dma2virt(rx_chn->desc_pool,
+						     desc_dma);
+		dev_dbg(dev, "%s flow_idx: %u desc %pad\n",
+			__func__, flow_idx, &desc_dma);
+
+		swdata = cppi5_hdesc_get_swdata(desc_rx);
+		xdp = swdata->xdp;
+		cppi5_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
+		k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
+		pkt_len = cppi5_hdesc_get_pktlen(desc_rx);
+		cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL);
+		dev_dbg(dev, "%s rx port_id:%d\n", __func__, port_id);
+		port = am65_common_get_port(common, port_id);
+		ndev = port->ndev;
+		psdata = cppi5_hdesc_get_psdata(desc_rx);
+		csum_info = psdata[2];
+		dev_dbg(dev, "%s rx csum_info:%#x\n", __func__, csum_info);
+		k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
+		count++;
+		xsk_buff_set_size(xdp, pkt_len);
+		xsk_buff_dma_sync_for_cpu(xdp);
+		/* check if this port has XSK enabled, else drop packet */
+		if (port_id != flow->xsk_port_id) {
+			dev_dbg(dev, "discarding non xsk port data\n");
+			xsk_buff_free(xdp);
+			ndev->stats.rx_dropped++;
+			continue;
+		}
+
+		ret = am65_cpsw_run_xdp(flow, port, xdp, &pkt_len);
+		switch (ret) {
+		case AM65_CPSW_XDP_PASS:
+			am65_cpsw_dispatch_skb_zc(flow, port, xdp, csum_info);
+			xsk_buff_free(xdp);
+			break;
+		case AM65_CPSW_XDP_CONSUMED:
+			xsk_buff_free(xdp);
+			break;
+		case AM65_CPSW_XDP_TX:
+		case AM65_CPSW_XDP_REDIRECT:
+			xdp_status |= ret;
+			break;
+		}
+	}
+
+	if (xdp_status & AM65_CPSW_XDP_REDIRECT)
+		xdp_do_flush();
+
+	ret = am65_cpsw_nuss_rx_alloc_zc(flow, count);
+
+	if (xsk_uses_need_wakeup(flow->xsk_pool)) {
+		/* We set wakeup if we are exhausted of new requests */
+		if (ret < count)
+			xsk_set_rx_need_wakeup(flow->xsk_pool);
+		else
+			xsk_clear_rx_need_wakeup(flow->xsk_pool);
+	}
+
+	return count;
+}
+
 static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
 				     int *xdp_state)
 {
@@ -1392,7 +1635,11 @@ static enum hrtimer_restart am65_cpsw_nuss_rx_timer_callback(struct hrtimer *tim
 					     struct am65_cpsw_rx_flow,
 					     rx_hrtimer);
 
-	enable_irq(flow->irq);
+	if (flow->irq_disabled) {
+		flow->irq_disabled = false;
+		enable_irq(flow->irq);
+	}
+
 	return HRTIMER_NORESTART;
 }
 
@@ -1406,17 +1653,21 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
 	int num_rx = 0;
 
 	/* process only this flow */
-	cur_budget = budget;
-	while (cur_budget--) {
-		ret = am65_cpsw_nuss_rx_packets(flow, &xdp_state);
-		xdp_state_or |= xdp_state;
-		if (ret)
-			break;
-		num_rx++;
-	}
+	if (flow->xsk_pool) {
+		num_rx = am65_cpsw_nuss_rx_zc(flow, budget);
+	} else {
+		cur_budget = budget;
+		while (cur_budget--) {
+			ret = am65_cpsw_nuss_rx_packets(flow, &xdp_state);
+			xdp_state_or |= xdp_state;
+			if (ret)
+				break;
+			num_rx++;
		}
 
-	if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
-		xdp_do_flush();
+		if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
+			xdp_do_flush();
+	}
 
 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
 
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 31789b5e5e1fc96be20cce17234d0e16cdcea796..2bf4d12f92764706719cc1d65001dbb53da58c38 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -15,6 +15,7 @@
 #include <linux/soc/ti/k3-ringacc.h>
 #include <net/devlink.h>
 #include <net/xdp.h>
+#include <net/xdp_sock_drv.h>
 #include "am65-cpsw-qos.h"
 
 struct am65_cpts;
@@ -107,6 +108,8 @@ struct am65_cpsw_rx_flow {
 	struct hrtimer rx_hrtimer;
 	unsigned long rx_pace_timeout;
 	struct page_pool *page_pool;
+	struct xsk_buff_pool *xsk_pool;
+	int xsk_port_id;
 	char name[32];
 };
 
@@ -120,7 +123,10 @@ struct am65_cpsw_tx_swdata {
 
 struct am65_cpsw_swdata {
 	u32 flow_id;
-	struct page *page;
+	union {
+		struct page *page;
+		struct xdp_buff *xdp;
+	};
 };
 
 struct am65_cpsw_rx_chn {
@@ -248,4 +254,8 @@ static inline bool am65_cpsw_xdp_is_enabled(struct am65_cpsw_port *port)
 {
 	return !!READ_ONCE(port->xdp_prog);
 }
+
+struct xsk_buff_pool *am65_cpsw_xsk_get_pool(struct am65_cpsw_port *port,
+					     u32 qid);
+
 #endif /* AM65_CPSW_NUSS_H_ */
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
index 89f43f7c83db35dba96621bae930172e0fc85b6a..0e37c27f77720713430a3e70f6c4b3dfb048cfc0 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-xdp.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -108,6 +108,9 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 {
 	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+	struct am65_cpsw_rx_flow *rx_flow;
+
+	rx_flow = &common->rx_chns.flows[qid];
 
 	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
 		return -ENETDOWN;
@@ -118,5 +121,26 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
 		return -EINVAL;
 
+	if (!rx_flow->xsk_pool)
+		return -EINVAL;
+
+	if (flags & XDP_WAKEUP_RX) {
+		if (!napi_if_scheduled_mark_missed(&rx_flow->napi_rx)) {
+			if (likely(napi_schedule_prep(&rx_flow->napi_rx)))
+				__napi_schedule(&rx_flow->napi_rx);
+		}
+	}
+
 	return 0;
 }
+
+struct xsk_buff_pool *am65_cpsw_xsk_get_pool(struct am65_cpsw_port *port,
+					     u32 qid)
+{
+	if (!am65_cpsw_xdp_is_enabled(port) ||
+	    !test_bit(qid, port->common->xdp_zc_queues) ||
+	    port->common->xsk_port_id[qid] != port->port_id)
+		return NULL;
+
+	return xsk_get_pool_from_qid(port->ndev, qid);
+}

-- 
2.34.1