From nobody Fri Dec 19 14:22:49 2025
From: Roger Quadros <rogerq@kernel.org>
Date: Sun, 09 Nov 2025 23:37:55 +0200
Subject: [PATCH net-next v2 5/7] net: ethernet: ti: am65-cpsw: Add AF_XDP zero copy for TX
Message-Id: <20251109-am65-cpsw-xdp-zc-v2-5-858f60a09d12@kernel.org>
References: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>
In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Sumit Semwal,
    Christian König, Stanislav Fomichev, Simon Horman
Cc: srk@ti.com, Meghana Malladi, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
    linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, Roger Quadros
X-Mailer: b4 0.14.2

Add zero copy support to the TX path.

Introduce xsk_pool and xsk_port_id to struct am65_cpsw_tx_chn. This way
we can quickly check whether the channel is set up with an XSK pool and
for which port. If the TX channel is set up with an XSK pool, frames
are taken directly from the pool and pushed to the TX channel.
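
In outline, the send loop looks as follows. This is a simplified
sketch, not the literal driver code: the function name is made up,
and CPPI descriptor programming, locking and error handling are
elided. See am65_cpsw_xsk_xmit_zc() in the diff for the real thing.

  static int xsk_xmit_zc_sketch(struct am65_cpsw_tx_chn *tx_chn)
  {
  	struct xsk_buff_pool *pool = tx_chn->xsk_pool;
  	int budget, sent, i;

  	/* Always leave MAX_SKB_FRAGS descriptors for the normal skb
  	 * TX path, which stops the queue itself when it runs short.
  	 */
  	budget = k3_cppi_desc_pool_avail(tx_chn->desc_pool);
  	if (budget <= MAX_SKB_FRAGS)
  		return 0;
  	budget -= MAX_SKB_FRAGS;

  	/* Take up to 'budget' descriptors from the XSK TX ring at once */
  	sent = xsk_tx_peek_release_desc_batch(pool, budget);

  	for (i = 0; i < sent; i++) {
  		/* The umem is already DMA mapped: no copy, just look up
  		 * the buffer's DMA address and sync it for the device.
  		 */
  		dma_addr_t dma = xsk_buff_raw_get_dma(pool,
  						      pool->tx_descs[i].addr);

  		xsk_buff_raw_dma_sync_for_device(pool, dma,
  						 pool->tx_descs[i].len);
  		/* ... build a CPPI5 host descriptor tagged
  		 * AM65_CPSW_TX_BUF_TYPE_XSK_TX and push it to the channel.
  		 * Completion later reports the frame back to the pool via
  		 * xsk_tx_completed().
  		 */
  	}

  	return sent;
  }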
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , Stanislav Fomichev , Simon Horman Cc: srk@ti.com, Meghana Malladi , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Roger Quadros X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=openpgp-sha256; l=13581; i=rogerq@kernel.org; h=from:subject:message-id; bh=yZDjsw7EWrQMT8Z5kEDaqMC+c+zkVy5QHbXnLygqNXs=; b=owEBbQKS/ZANAwAIAdJaa9O+djCTAcsmYgBpEQnZq0zF4jgKCs2OMHWL6QhnUAh909BXkpdoI y+GfkCC9huJAjMEAAEIAB0WIQRBIWXUTJ9SeA+rEFjSWmvTvnYwkwUCaREJ2QAKCRDSWmvTvnYw k5giD/0V24C1D4sCAm5QRfCSRJVGr0SElles+RWAIocGvfKtruXO4VfzQnQhncsI1rRGrXYOeeA BbrtF73Qk4QWRWNniPBSU9ulu1TuVuoSZV+OpReXNIwA+ZOOcMFCjy0jPMdjjELC7vGs4YAliOM HXPgoXYK0L4che90ydxmVfuflKGTMJOTqAmmGYgjsjQ4WMsUedCQFv4iUrtcLG75/CgYxDYAOZ/ yvY0FzyCKipide485NxQG+uyQkKNSswhbGCrsSUfOE/kCJitFl4jIdJq2kDewZtNNqWRHPsM1/e 9k0hvxu3mTxzu07h7B064ro3en0AEF9/qyF6NF8Iju0yAIHgtpgT5IcTacQwxMDLWR76Dr2LyxC 3pLvkElriLBS3U/dVBFign/a3e3q5+3Ukh9qVC5dnGM/bj2eynPO51Ljt0xG9cPptDWrAHAC7zz kMk5hZU8MgYoHi17mCYnMmoISaZses9Jc5jmqdqrwmwZCYTBnNCc79IeNWzBGKWrmAnPYMCTIID S82H9dAp4qvryGcXtkVEsVjQd1w1SKa2wgDjVNa/PLDhu6nitgoSsqiZjCwtDrqygqky3LYTC1m HBMmRxvYPVQR7ifwmAl3xgl3ZHIvM/jeRv7miIabVPIj174qsNP6xcC/Qrm3Q0xcDEKwsiT6Tjd NYtNfG7sx921CTA== X-Developer-Key: i=rogerq@kernel.org; a=openpgp; fpr=412165D44C9F52780FAB1058D25A6BD3BE763093 Add zero copy support to TX path. Introduce xsk_pool and xsk_port_id to struct am65_cpsw_tx_chn. This way we can quickly check if the flow is setup as XSK pool and for which port. If the TX channel is setup as XSK pool then get the frames from the pool and send it to the TX channel. Signed-off-by: Roger Quadros --- drivers/net/ethernet/ti/am65-cpsw-nuss.c | 171 +++++++++++++++++++++++++++= +--- drivers/net/ethernet/ti/am65-cpsw-nuss.h | 5 + drivers/net/ethernet/ti/am65-cpsw-xdp.c | 11 +- 3 files changed, 171 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/etherne= t/ti/am65-cpsw-nuss.c index afc0c8836fe242d8bf47ce9bcd3e6b725ca37bf9..2e06e7df23ad5249786d081e514= 34f87dd2a76b5 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c @@ -758,6 +758,8 @@ void am65_cpsw_destroy_txq(struct am65_cpsw_common *com= mon, int id) k3_udma_glue_reset_tx_chn(tx_chn->tx_chn, tx_chn, am65_cpsw_nuss_tx_cleanup); k3_udma_glue_disable_tx_chn(tx_chn->tx_chn); + tx_chn->xsk_pool =3D NULL; + tx_chn->xsk_port_id =3D -EINVAL; } =20 static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common) @@ -786,12 +788,25 @@ static void am65_cpsw_destroy_txqs(struct am65_cpsw_c= ommon *common) int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id) { struct am65_cpsw_tx_chn *tx_chn =3D &common->tx_chns[id]; - int ret; + int port, ret; =20 ret =3D k3_udma_glue_enable_tx_chn(tx_chn->tx_chn); if (ret) return ret; =20 + /* get first port with XSK pool & XDP program set */ + for (port =3D 0; port < common->port_num; port++) { + if (!common->ports[port].ndev) + continue; + + tx_chn->xsk_pool =3D am65_cpsw_xsk_get_pool(&common->ports[port], + id); + if (tx_chn->xsk_pool) + break; + } + + tx_chn->xsk_port_id =3D tx_chn->xsk_pool ? 

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index afc0c8836fe242d8bf47ce9bcd3e6b725ca37bf9..2e06e7df23ad5249786d081e51434f87dd2a76b5 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -758,6 +758,8 @@ void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
 	k3_udma_glue_reset_tx_chn(tx_chn->tx_chn, tx_chn,
 				  am65_cpsw_nuss_tx_cleanup);
 	k3_udma_glue_disable_tx_chn(tx_chn->tx_chn);
+	tx_chn->xsk_pool = NULL;
+	tx_chn->xsk_port_id = -EINVAL;
 }
 
 static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
@@ -786,12 +788,25 @@ static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
 int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
-	int ret;
+	int port, ret;
 
 	ret = k3_udma_glue_enable_tx_chn(tx_chn->tx_chn);
 	if (ret)
 		return ret;
 
+	/* get first port with XSK pool & XDP program set */
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			continue;
+
+		tx_chn->xsk_pool = am65_cpsw_xsk_get_pool(&common->ports[port],
+							  id);
+		if (tx_chn->xsk_pool)
+			break;
+	}
+
+	tx_chn->xsk_port_id = tx_chn->xsk_pool ?
+			      common->ports[port].port_id : -EINVAL;
 	napi_enable(&tx_chn->napi_tx);
 
 	return 0;
@@ -892,15 +907,18 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 }
 
 static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
-				     struct cppi5_host_desc_t *desc)
+				     struct cppi5_host_desc_t *desc,
+				     enum am65_cpsw_tx_buf_type buf_type)
 {
 	struct cppi5_host_desc_t *first_desc, *next_desc;
 	dma_addr_t buf_dma, next_desc_dma;
 	u32 buf_dma_len;
 
 	first_desc = desc;
-	next_desc = first_desc;
+	if (buf_type == AM65_CPSW_TX_BUF_TYPE_XSK_TX)
+		goto free_pool;
 
+	next_desc = first_desc;
 	cppi5_hdesc_get_obuf(first_desc, &buf_dma, &buf_dma_len);
 	k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &buf_dma);
 
@@ -923,6 +941,7 @@ static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
 		k3_cppi_desc_pool_free(tx_chn->desc_pool, next_desc);
 	}
 
+free_pool:
 	k3_cppi_desc_pool_free(tx_chn->desc_pool, first_desc);
 }
 
@@ -932,21 +951,32 @@ static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma)
 	enum am65_cpsw_tx_buf_type buf_type;
 	struct am65_cpsw_tx_swdata *swdata;
 	struct cppi5_host_desc_t *desc_tx;
+	struct xsk_buff_pool *xsk_pool;
 	struct xdp_frame *xdpf;
 	struct sk_buff *skb;
 
 	desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma);
 	swdata = cppi5_hdesc_get_swdata(desc_tx);
 	buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
-	if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
+	switch (buf_type) {
+	case AM65_CPSW_TX_BUF_TYPE_SKB:
 		skb = swdata->skb;
 		dev_kfree_skb_any(skb);
-	} else {
+		break;
+	case AM65_CPSW_TX_BUF_TYPE_XDP_TX:
+	case AM65_CPSW_TX_BUF_TYPE_XDP_NDO:
 		xdpf = swdata->xdpf;
 		xdp_return_frame(xdpf);
+		break;
+	case AM65_CPSW_TX_BUF_TYPE_XSK_TX:
+		xsk_pool = swdata->xsk_pool;
+		xsk_tx_completed(xsk_pool, 1);
+		break;
+	default:
+		break;
 	}
 
-	am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+	am65_cpsw_nuss_xmit_free(tx_chn, desc_tx, buf_type);
 }
 
 static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
@@ -1189,6 +1219,82 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
 	return ret;
 }
 
+static int am65_cpsw_xsk_xmit_zc(struct net_device *ndev,
+				 struct am65_cpsw_tx_chn *tx_chn)
+{
+	struct am65_cpsw_common *common = tx_chn->common;
+	struct xsk_buff_pool *pool = tx_chn->xsk_pool;
+	struct xdp_desc *xdp_descs = pool->tx_descs;
+	struct cppi5_host_desc_t *host_desc;
+	struct am65_cpsw_tx_swdata *swdata;
+	dma_addr_t dma_desc, dma_buf;
+	int num_tx = 0, pkt_len;
+	int descs_avail, ret;
+	int i;
+
+	descs_avail = k3_cppi_desc_pool_avail(tx_chn->desc_pool);
+	/* ensure that TX ring is not filled up by XDP, always MAX_SKB_FRAGS
+	 * will be available for normal TX path and queue is stopped there if
+	 * necessary
+	 */
+	if (descs_avail <= MAX_SKB_FRAGS)
+		return 0;
+
+	descs_avail -= MAX_SKB_FRAGS;
+	descs_avail = xsk_tx_peek_release_desc_batch(pool, descs_avail);
+
+	for (i = 0; i < descs_avail; i++) {
+		host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
+		if (unlikely(!host_desc))
+			break;
+
+		am65_cpsw_nuss_set_buf_type(tx_chn, host_desc,
+					    AM65_CPSW_TX_BUF_TYPE_XSK_TX);
+		dma_buf = xsk_buff_raw_get_dma(pool, xdp_descs[i].addr);
+		pkt_len = xdp_descs[i].len;
+		xsk_buff_raw_dma_sync_for_device(pool, dma_buf, pkt_len);
+
+		cppi5_hdesc_init(host_desc, CPPI5_INFO0_HDESC_EPIB_PRESENT,
+				 AM65_CPSW_NAV_PS_DATA_SIZE);
+		cppi5_hdesc_set_pkttype(host_desc, AM65_CPSW_CPPI_TX_PKT_TYPE);
+		cppi5_hdesc_set_pktlen(host_desc, pkt_len);
+		cppi5_desc_set_pktids(&host_desc->hdr, 0,
+				      AM65_CPSW_CPPI_TX_FLOW_ID);
+		cppi5_desc_set_tags_ids(&host_desc->hdr, 0,
+					tx_chn->xsk_port_id);
+
+		k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &dma_buf);
+		cppi5_hdesc_attach_buf(host_desc, dma_buf, pkt_len, dma_buf,
+				       pkt_len);
+
+		swdata = cppi5_hdesc_get_swdata(host_desc);
+		swdata->ndev = ndev;
+		swdata->xsk_pool = pool;
+
+		dma_desc = k3_cppi_desc_pool_virt2dma(tx_chn->desc_pool,
+						      host_desc);
+		if (AM65_CPSW_IS_CPSW2G(common)) {
+			ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn,
+						       host_desc, dma_desc);
+		} else {
+			spin_lock_bh(&tx_chn->lock);
+			ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn,
+						       host_desc, dma_desc);
+			spin_unlock_bh(&tx_chn->lock);
+		}
+
+		if (ret) {
+			ndev->stats.tx_errors++;
+			k3_cppi_desc_pool_free(tx_chn->desc_pool, host_desc);
+			break;
+		}
+
+		num_tx++;
+	}
+
+	return num_tx;
+}
+
 static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
 				  struct am65_cpsw_tx_chn *tx_chn,
 				  struct xdp_frame *xdpf,
@@ -1716,15 +1822,19 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 	struct netdev_queue *netif_txq;
 	unsigned int total_bytes = 0;
 	struct net_device *ndev;
+	int xsk_frames_done = 0;
 	struct xdp_frame *xdpf;
 	unsigned int pkt_len;
 	struct sk_buff *skb;
 	dma_addr_t desc_dma;
 	int res, num_tx = 0;
+	int xsk_tx = 0;
 
 	tx_chn = &common->tx_chns[chn];
 
 	while (true) {
+		pkt_len = 0;
+
 		if (!single_port)
 			spin_lock(&tx_chn->lock);
 		res = k3_udma_glue_pop_tx_chn(tx_chn->tx_chn, &desc_dma);
@@ -1746,25 +1856,36 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 		swdata = cppi5_hdesc_get_swdata(desc_tx);
 		ndev = swdata->ndev;
 		buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
-		if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
+		switch (buf_type) {
+		case AM65_CPSW_TX_BUF_TYPE_SKB:
 			skb = swdata->skb;
 			am65_cpts_tx_timestamp(tx_chn->common->cpts, skb);
 			pkt_len = skb->len;
 			napi_consume_skb(skb, budget);
-		} else {
+			total_bytes += pkt_len;
+			break;
+		case AM65_CPSW_TX_BUF_TYPE_XDP_TX:
+		case AM65_CPSW_TX_BUF_TYPE_XDP_NDO:
 			xdpf = swdata->xdpf;
 			pkt_len = xdpf->len;
+			total_bytes += pkt_len;
 			if (buf_type == AM65_CPSW_TX_BUF_TYPE_XDP_TX)
 				xdp_return_frame_rx_napi(xdpf);
 			else
 				xdp_return_frame(xdpf);
+			break;
+		case AM65_CPSW_TX_BUF_TYPE_XSK_TX:
+			pkt_len = cppi5_hdesc_get_pktlen(desc_tx);
+			xsk_frames_done++;
+			break;
+		default:
+			break;
 		}
 
-		total_bytes += pkt_len;
 		num_tx++;
-		am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+		am65_cpsw_nuss_xmit_free(tx_chn, desc_tx, buf_type);
 		dev_sw_netstats_tx_add(ndev, 1, pkt_len);
-		if (!single_port) {
+		if (!single_port && buf_type != AM65_CPSW_TX_BUF_TYPE_XSK_TX) {
 			/* as packets from multi ports can be interleaved
 			 * on the same channel, we have to figure out the
 			 * port/queue at every packet and report it/wake queue.
@@ -1781,6 +1902,19 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 		am65_cpsw_nuss_tx_wake(tx_chn, ndev, netif_txq);
 	}
 
+	if (tx_chn->xsk_pool) {
+		if (xsk_frames_done)
+			xsk_tx_completed(tx_chn->xsk_pool, xsk_frames_done);
+
+		if (xsk_uses_need_wakeup(tx_chn->xsk_pool))
+			xsk_set_tx_need_wakeup(tx_chn->xsk_pool);
+
+		ndev = common->ports[tx_chn->xsk_port_id].ndev;
+		netif_txq = netdev_get_tx_queue(ndev, chn);
+		txq_trans_cond_update(netif_txq);
+		xsk_tx = am65_cpsw_xsk_xmit_zc(ndev, tx_chn);
+	}
+
 	dev_dbg(dev, "%s:%u pkt:%d\n", __func__, chn, num_tx);
 
 	return num_tx;
@@ -1791,7 +1925,11 @@ static enum hrtimer_restart am65_cpsw_nuss_tx_timer_callback(struct hrtimer *timer)
 	struct am65_cpsw_tx_chn *tx_chns =
 			container_of(timer, struct am65_cpsw_tx_chn, tx_hrtimer);
 
-	enable_irq(tx_chns->irq);
+	if (tx_chns->irq_disabled) {
+		tx_chns->irq_disabled = false;
+		enable_irq(tx_chns->irq);
+	}
+
 	return HRTIMER_NORESTART;
 }
 
@@ -1811,7 +1949,8 @@ static int am65_cpsw_nuss_tx_poll(struct napi_struct *napi_tx, int budget)
 			hrtimer_start(&tx_chn->tx_hrtimer,
 				      ns_to_ktime(tx_chn->tx_pace_timeout),
 				      HRTIMER_MODE_REL_PINNED);
-		} else {
+		} else if (tx_chn->irq_disabled) {
+			tx_chn->irq_disabled = false;
 			enable_irq(tx_chn->irq);
 		}
 	}
@@ -1834,6 +1973,7 @@ static irqreturn_t am65_cpsw_nuss_tx_irq(int irq, void *dev_id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = dev_id;
 
+	tx_chn->irq_disabled = true;
 	disable_irq_nosync(irq);
 	napi_schedule(&tx_chn->napi_tx);
 
@@ -1998,14 +2138,14 @@ static netdev_tx_t am65_cpsw_nuss_ndo_slave_xmit(struct sk_buff *skb,
 	return NETDEV_TX_OK;
 
 err_free_descs:
-	am65_cpsw_nuss_xmit_free(tx_chn, first_desc);
+	am65_cpsw_nuss_xmit_free(tx_chn, first_desc, AM65_CPSW_TX_BUF_TYPE_SKB);
 err_free_skb:
 	ndev->stats.tx_dropped++;
 	dev_kfree_skb_any(skb);
 	return NETDEV_TX_OK;
 
 busy_free_descs:
-	am65_cpsw_nuss_xmit_free(tx_chn, first_desc);
+	am65_cpsw_nuss_xmit_free(tx_chn, first_desc, AM65_CPSW_TX_BUF_TYPE_SKB);
 busy_stop_q:
 	netif_tx_stop_queue(netif_txq);
 	return NETDEV_TX_BUSY;
@@ -2259,6 +2399,7 @@ static const struct net_device_ops am65_cpsw_nuss_netdev_ops = {
 	.ndo_xdp_xmit		= am65_cpsw_ndo_xdp_xmit,
 	.ndo_hwtstamp_get	= am65_cpsw_nuss_hwtstamp_get,
 	.ndo_hwtstamp_set	= am65_cpsw_nuss_hwtstamp_set,
+	.ndo_xsk_wakeup		= am65_cpsw_xsk_wakeup,
 };
 
 static void am65_cpsw_disable_phy(struct phy *phy)
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 2bf4d12f92764706719cc1d65001dbb53da58c38..ac2d9d32e95b932665131a317df8316cb6cb7f96 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -72,6 +72,7 @@ enum am65_cpsw_tx_buf_type {
 	AM65_CPSW_TX_BUF_TYPE_SKB,
 	AM65_CPSW_TX_BUF_TYPE_XDP_TX,
 	AM65_CPSW_TX_BUF_TYPE_XDP_NDO,
+	AM65_CPSW_TX_BUF_TYPE_XSK_TX,
 };
 
 struct am65_cpsw_host {
@@ -97,6 +98,9 @@ struct am65_cpsw_tx_chn {
 	unsigned char dsize_log2;
 	char tx_chn_name[128];
 	u32 rate_mbps;
+	struct xsk_buff_pool *xsk_pool;
+	int xsk_port_id;
+	bool irq_disabled;
 };
 
 struct am65_cpsw_rx_flow {
@@ -118,6 +122,7 @@ struct am65_cpsw_tx_swdata {
 	union {
 		struct sk_buff *skb;
 		struct xdp_frame *xdpf;
+		struct xsk_buff_pool *xsk_pool;
 	};
 };
 
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
index 0e37c27f77720713430a3e70f6c4b3dfb048cfc0..9adf13056f70fea36d9aeac157b7da0cae2c011e 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-xdp.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -109,8 +109,10 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
 	struct am65_cpsw_rx_flow *rx_flow;
+	struct am65_cpsw_tx_chn *tx_ch;
 
 	rx_flow = &common->rx_chns.flows[qid];
+	tx_ch = &common->tx_chns[qid];
 
 	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
 		return -ENETDOWN;
@@ -121,9 +123,16 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
 		return -EINVAL;
 
-	if (!rx_flow->xsk_pool)
+	if (!rx_flow->xsk_pool && !tx_ch->xsk_pool)
 		return -EINVAL;
 
+	if (flags & XDP_WAKEUP_TX) {
+		if (!napi_if_scheduled_mark_missed(&tx_ch->napi_tx)) {
+			if (likely(napi_schedule_prep(&tx_ch->napi_tx)))
+				__napi_schedule(&tx_ch->napi_tx);
+		}
+	}
+
 	if (flags & XDP_WAKEUP_RX) {
 		if (!napi_if_scheduled_mark_missed(&rx_flow->napi_rx)) {
 			if (likely(napi_schedule_prep(&rx_flow->napi_rx)))

-- 
2.34.1