From nobody Fri Dec 19 12:49:51 2025
From: Roger Quadros <rogerq@kernel.org>
Date: Tue, 20 May 2025 13:23:50 +0300
Subject: [PATCH RFC net-next 1/5] net: ethernet: ti: am65-cpsw: fix BPF Program change on multi-port CPSW
Message-Id: <20250520-am65-cpsw-xdp-zc-v1-1-45558024f566@kernel.org>
References: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
In-Reply-To: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, David S. Miller, Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König
Cc: srk@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 Roger Quadros <rogerq@kernel.org>

On a multi-port CPSW system, stopping and starting just one port (ndev)
will not restart the queues if other ports (ndevs) are open. Instead,
check the usage_count variable to know whether CPSW is running and, if
so, restart all the queues.

Signed-off-by: Roger Quadros <rogerq@kernel.org>
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 988ce9119306..cd713bb57b91 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -1924,18 +1924,33 @@ static int am65_cpsw_xdp_prog_setup(struct net_device *ndev,
 				    struct bpf_prog *prog)
 {
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
-	bool running = netif_running(ndev);
+	struct am65_cpsw_common *common = port->common;
+	bool running = !!port->common->usage_count;
 	struct bpf_prog *old_prog;
+	int ret;
 
-	if (running)
-		am65_cpsw_nuss_ndo_slave_stop(ndev);
+	if (running) {
+		/* stop all queues */
+		am65_cpsw_destroy_txqs(common);
+		am65_cpsw_destroy_rxqs(common);
+	}
 
 	old_prog = xchg(&port->xdp_prog, prog);
 	if (old_prog)
 		bpf_prog_put(old_prog);
 
-	if (running)
-		return am65_cpsw_nuss_ndo_slave_open(ndev);
+	if (running) {
+		/* start all queues */
+		ret = am65_cpsw_create_rxqs(common);
+		if (ret)
+			return ret;
+
+		ret = am65_cpsw_create_txqs(common);
+		if (ret) {
+			am65_cpsw_destroy_rxqs(common);
+			return ret;
+		}
+	}
 
 	return 0;
 }
-- 
2.34.1
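The fix above hinges on one detail: queues are shared across all CPSW ports, so the "is the hardware running?" decision must look at the shared `usage_count`, not at `netif_running()` of the single port whose XDP program is changing. The following userspace model is an editorial sketch (not part of the patch) with simplified stand-in types; it only illustrates the control flow of `am65_cpsw_xdp_prog_setup()` after the fix.

```c
#include <assert.h>
#include <stddef.h>

/* Editorial sketch, not part of the patch. Simplified stand-ins for the
 * driver structures: queues are shared, so they live in "common".
 */
struct common {
	int usage_count;	/* number of open ports (ndevs) */
	int queues_running;	/* state of the shared TX/RX queues */
};

struct port {
	struct common *common;
	int netif_running;	/* this one port's own up/down state */
	void *xdp_prog;
};

/* Models the fixed am65_cpsw_xdp_prog_setup(): gate the queue
 * teardown/rebuild on the shared usage_count, not on the single
 * port's netif_running state.
 */
void xdp_prog_setup(struct port *port, void *prog)
{
	struct common *common = port->common;
	int running = !!common->usage_count;	/* the fix */

	if (running)
		common->queues_running = 0;	/* destroy tx/rx queues */

	port->xdp_prog = prog;			/* xchg() in the driver */

	if (running)
		common->queues_running = 1;	/* recreate rx/tx queues */
}
```

Even when the reconfigured port itself is closed (`netif_running == 0`), an open sibling port keeps `usage_count` nonzero, so the shared queues are correctly stopped and restarted around the program swap.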
From nobody Fri Dec 19 12:49:51 2025
From: Roger Quadros <rogerq@kernel.org>
Date: Tue, 20 May 2025 13:23:51 +0300
Subject: [PATCH RFC net-next 2/5] net: ethernet: ti: am65-cpsw: add XSK pool helpers
Message-Id: <20250520-am65-cpsw-xdp-zc-v1-2-45558024f566@kernel.org>
References: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
In-Reply-To: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, David S. Miller, Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König
Cc: srk@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 Roger Quadros <rogerq@kernel.org>

To prepare for XSK zero-copy support, add XSK pool helpers in a new
file, am65-cpsw-xdp.c.

As queues are shared between ports, we can no longer support the case
where zero copy (an XSK pool) is enabled for a queue on one port but not
on the other ports. The current solution is to drop the packet if zero
copy is not enabled for that port + queue but is enabled for the same
queue on some other port.

The xdp_zc_queues bitmap tracks whether a queue is set up as an XSK
pool, and the xsk_port_id array tracks which port each XSK queue is
assigned to for zero copy.

Signed-off-by: Roger Quadros <rogerq@kernel.org>
---
 drivers/net/ethernet/ti/Makefile         |   2 +-
 drivers/net/ethernet/ti/am65-cpsw-nuss.c |  21 ++++--
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |  20 +++++
 drivers/net/ethernet/ti/am65-cpsw-xdp.c  | 122 ++++++++++++++++++++++++++++++
 4 files changed, 156 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/ti/Makefile b/drivers/net/ethernet/ti/Makefile
index cbcf44806924..48d07afe30f9 100644
--- a/drivers/net/ethernet/ti/Makefile
+++ b/drivers/net/ethernet/ti/Makefile
@@ -26,7 +26,7 @@ keystone_netcp_ethss-y := netcp_ethss.o netcp_sgmii.o netcp_xgbepcsr.o cpsw_ale.
 obj-$(CONFIG_TI_K3_CPPI_DESC_POOL) += k3-cppi-desc-pool.o
 
 obj-$(CONFIG_TI_K3_AM65_CPSW_NUSS) += ti-am65-cpsw-nuss.o
-ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o
+ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o am65-cpsw-xdp.o
 ti-am65-cpsw-nuss-$(CONFIG_TI_AM65_CPSW_QOS) += am65-cpsw-qos.o
 ti-am65-cpsw-nuss-$(CONFIG_TI_K3_AM65_CPSW_SWITCHDEV) += am65-cpsw-switchdev.o
 obj-$(CONFIG_TI_K3_AM65_CPTS) += am65-cpts.o
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index cd713bb57b91..a946bcd770c4 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -58,9 +58,6 @@
 
 #define AM65_CPSW_MAX_PORTS	8
 
-#define AM65_CPSW_MIN_PACKET_SIZE	VLAN_ETH_ZLEN
-#define AM65_CPSW_MAX_PACKET_SIZE	2024
-
 #define AM65_CPSW_REG_CTL		0x004
 #define AM65_CPSW_REG_STAT_PORT_EN	0x014
 #define AM65_CPSW_REG_PTYPE		0x018
@@ -505,7 +502,7 @@ static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow,
 static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma);
 static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma);
 
-static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
+void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_rx_flow *flow;
@@ -554,7 +551,7 @@ static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common)
 	k3_udma_glue_disable_rx_chn(common->rx_chns.rx_chn);
 }
 
-static int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
+int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct page_pool_params pp_params = {
@@ -658,7 +655,7 @@ static int am65_cpsw_create_rxqs(struct am65_cpsw_common *common)
 	return ret;
 }
 
-static void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
+void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
 
@@ -692,7 +689,7 @@ static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
 		am65_cpsw_destroy_txq(common, id);
 }
 
-static int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
+int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
 	int ret;
@@ -1324,7 +1321,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
 	dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
 	k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
 
-	if (port->xdp_prog) {
+	if (am65_cpsw_xdp_is_enabled(port)) {
 		xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq[flow->id]);
 		xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
 				 pkt_len, false);
@@ -1960,6 +1957,9 @@ static int am65_cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
 	switch (bpf->command) {
 	case XDP_SETUP_PROG:
 		return am65_cpsw_xdp_prog_setup(ndev, bpf->prog);
+	case XDP_SETUP_XSK_POOL:
+		return am65_cpsw_xsk_setup_pool(ndev, bpf->xsk.pool,
+						bpf->xsk.queue_id);
 	default:
 		return -EINVAL;
 	}
@@ -3527,7 +3527,12 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 	common = devm_kzalloc(dev, sizeof(struct am65_cpsw_common), GFP_KERNEL);
 	if (!common)
 		return -ENOMEM;
+	common->dev = dev;
+	common->xdp_zc_queues = devm_bitmap_zalloc(dev, AM65_CPSW_MAX_QUEUES,
+						   GFP_KERNEL);
+	if (!common->xdp_zc_queues)
+		return -ENOMEM;
 
 	of_id = of_match_device(am65_cpsw_nuss_of_mtable, dev);
 	if (!of_id)
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 917c37e4e89b..e80e74a74d71 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -23,8 +23,14 @@ struct am65_cpts;
 
 #define AM65_CPSW_MAX_QUEUES	8	/* both TX & RX */
 
+#define AM65_CPSW_MIN_PACKET_SIZE	VLAN_ETH_ZLEN
+#define AM65_CPSW_MAX_PACKET_SIZE	2024
+
 #define AM65_CPSW_PORT_VLAN_REG_OFFSET	0x014
 
+#define AM65_CPSW_RX_DMA_ATTR	(DMA_ATTR_SKIP_CPU_SYNC |\
+				 DMA_ATTR_WEAK_ORDERING)
+
 struct am65_cpsw_slave_data {
 	bool mac_only;
 	struct cpsw_sl *mac_sl;
@@ -190,6 +196,9 @@ struct am65_cpsw_common {
 	unsigned char switch_id[MAX_PHYS_ITEM_ID_LEN];
 	/* only for suspend/resume context restore */
 	u32 *ale_context;
+	/* XDP Zero Copy */
+	unsigned long *xdp_zc_queues;
+	int xsk_port_id[AM65_CPSW_MAX_QUEUES];
 };
 
 struct am65_cpsw_ndev_priv {
@@ -228,4 +237,15 @@ int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common,
 
 bool am65_cpsw_port_dev_check(const struct net_device *dev);
 
+int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id);
+void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id);
+int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id);
+void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id);
+int am65_cpsw_xsk_setup_pool(struct net_device *ndev,
+			     struct xsk_buff_pool *pool, u16 qid);
+int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags);
+static inline bool am65_cpsw_xdp_is_enabled(struct am65_cpsw_port *port)
+{
+	return !!READ_ONCE(port->xdp_prog);
+}
 #endif /* AM65_CPSW_NUSS_H_ */
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
new file mode 100644
index 000000000000..e1ab81cb4548
--- /dev/null
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Texas Instruments K3 AM65 Ethernet Switch SubSystem Driver
+ *
+ * Copyright (C) 2025 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ */
+
+#include
+#include
+#include "am65-cpsw-nuss.h"
+
+static int am65_cpsw_xsk_pool_enable(struct am65_cpsw_port *port,
+				     struct xsk_buff_pool *pool, u16 qid)
+{
+	struct am65_cpsw_common *common = port->common;
+	struct am65_cpsw_rx_chn *rx_chn;
+	bool need_update;
+	u32 frame_size;
+	int ret;
+
+	/*
+	 * As queues are shared between ports we can no longer
+	 * support the case where zero copy (XSK Pool) is enabled
+	 * for the queue on one port but not for other ports.
+	 *
+	 * Current solution is to drop the packet if Zero copy
+	 * is not enabled for that port + queue but enabled for
+	 * some other port + same queue.
+	 */
+	if (test_bit(qid, common->xdp_zc_queues))
+		return -EINVAL;
+
+	rx_chn = &common->rx_chns;
+	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
+		return -EINVAL;
+
+	frame_size = xsk_pool_get_rx_frame_size(pool);
+	if (frame_size < AM65_CPSW_MAX_PACKET_SIZE)
+		return -EOPNOTSUPP;
+
+	ret = xsk_pool_dma_map(pool, rx_chn->dma_dev, AM65_CPSW_RX_DMA_ATTR);
+	if (ret) {
+		netdev_err(port->ndev, "Failed to map xsk pool\n");
+		return ret;
+	}
+
+	need_update = common->usage_count &&
+		      am65_cpsw_xdp_is_enabled(port);
+	if (need_update) {
+		am65_cpsw_destroy_rxq(common, qid);
+		am65_cpsw_destroy_txq(common, qid);
+	}
+
+	set_bit(qid, common->xdp_zc_queues);
+	common->xsk_port_id[qid] = port->port_id;
+	if (need_update) {
+		am65_cpsw_create_rxq(common, qid);
+		am65_cpsw_create_txq(common, qid);
+	}
+
+	return 0;
+}
+
+static int am65_cpsw_xsk_pool_disable(struct am65_cpsw_port *port,
+				      struct xsk_buff_pool *pool, u16 qid)
+{
+	struct am65_cpsw_common *common = port->common;
+	bool need_update;
+
+	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
+		return -EINVAL;
+
+	if (!test_bit(qid, common->xdp_zc_queues))
+		return -EINVAL;
+
+	pool = xsk_get_pool_from_qid(port->ndev, qid);
+	if (!pool)
+		return -EINVAL;
+
+	need_update = common->usage_count && am65_cpsw_xdp_is_enabled(port);
+	if (need_update) {
+		am65_cpsw_destroy_rxq(common, qid);
+		am65_cpsw_destroy_txq(common, qid);
+		synchronize_rcu();
+	}
+
+	xsk_pool_dma_unmap(pool, AM65_CPSW_RX_DMA_ATTR);
+	clear_bit(qid, common->xdp_zc_queues);
+	common->xsk_port_id[qid] = -EINVAL;
+	if (need_update) {
+		am65_cpsw_create_rxq(common, qid);
+		am65_cpsw_create_txq(common, qid);
+	}
+
+	return 0;
+}
+
+int am65_cpsw_xsk_setup_pool(struct net_device *ndev,
+			     struct xsk_buff_pool *pool, u16 qid)
+{
+	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+
+	return pool ? am65_cpsw_xsk_pool_enable(port, pool, qid) :
+		      am65_cpsw_xsk_pool_disable(port, pool, qid);
+}
+
+int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
+{
+	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+
+	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
+		return -ENETDOWN;
+
+	if (!am65_cpsw_xdp_is_enabled(port))
+		return -EINVAL;
+
+	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
+		return -EINVAL;
+
+	return 0;
+}
-- 
2.34.1
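The ownership model the patch above describes — one bit per shared queue in `xdp_zc_queues`, plus `xsk_port_id[]` recording which port claimed the queue, with RX packets from any other port dropped — can be modeled in a few lines of plain C. This is an editorial sketch, not part of the patch: the names mirror the driver, but `test_bit`/`set_bit` are replaced with ordinary bit arithmetic on a single word.

```c
#include <assert.h>

/* Editorial sketch, not part of the patch: userspace model of the
 * xdp_zc_queues bitmap + xsk_port_id bookkeeping. Each shared queue
 * can be claimed for zero copy by at most one port.
 */
#define MAX_QUEUES 8

unsigned long xdp_zc_queues;		/* one bit per queue */
int xsk_port_id[MAX_QUEUES];

int model_pool_enable(int port_id, int qid)
{
	if (qid >= MAX_QUEUES)
		return -1;
	if (xdp_zc_queues & (1UL << qid))	/* already claimed */
		return -1;
	xdp_zc_queues |= 1UL << qid;
	xsk_port_id[qid] = port_id;
	return 0;
}

int model_pool_disable(int qid)
{
	if (qid >= MAX_QUEUES || !(xdp_zc_queues & (1UL << qid)))
		return -1;
	xdp_zc_queues &= ~(1UL << qid);
	xsk_port_id[qid] = -1;
	return 0;
}

/* Mirrors the RX-path policy: drop if the queue runs in zero-copy
 * mode but was claimed by a different port.
 */
int model_rx_should_drop(int rx_port_id, int qid)
{
	return (xdp_zc_queues & (1UL << qid)) &&
	       xsk_port_id[qid] != rx_port_id;
}
```

A second port trying to enable zero copy on an already-claimed queue fails with `-EINVAL` in the driver (here `-1`), and its traffic on that queue is dropped until the owning port releases the pool.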
From nobody Fri Dec 19 12:49:51 2025
From: Roger Quadros <rogerq@kernel.org>
Date: Tue, 20 May 2025 13:23:52 +0300
Subject: [PATCH RFC net-next 3/5] net: ethernet: ti: am65-cpsw: Add AF_XDP zero copy for RX
Message-Id: <20250520-am65-cpsw-xdp-zc-v1-3-45558024f566@kernel.org>
References: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
In-Reply-To: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, David S. Miller, Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König
Cc: srk@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 Roger Quadros <rogerq@kernel.org>

Add zero-copy support to the RX path.

Introduce xsk_pool and xsk_port_id to struct am65_cpsw_rx_flow. This way
we can quickly check whether the flow is set up as an XSK pool and for
which port.

If the RX flow is set up as an XSK pool, register it as
MEM_TYPE_XSK_BUFF_POOL. At queue creation, get free frames from the XSK
pool and push them to the RX ring.

Signed-off-by: Roger Quadros <rogerq@kernel.org>
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 307 ++++++++++++++++++++++++++-----
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |  12 +-
 drivers/net/ethernet/ti/am65-cpsw-xdp.c  |  24 +++
 3 files changed, 309 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index a946bcd770c4..5fbf04880722 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -429,6 +429,55 @@ static void am65_cpsw_nuss_ndo_host_tx_timeout(struct net_device *ndev,
 	}
 }
 
+static int am65_cpsw_nuss_rx_push_zc(struct am65_cpsw_rx_flow *flow,
+				     struct xdp_buff *xdp)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &flow->common->rx_chns;
+	struct cppi5_host_desc_t *desc_rx;
+	struct am65_cpsw_swdata *swdata;
+	u32 flow_id = flow->id;
+	dma_addr_t desc_dma;
+	dma_addr_t buf_dma;
+	int buf_len;
+
+	desc_rx = k3_cppi_desc_pool_alloc(rx_chn->desc_pool);
+	if (!desc_rx)
+		return -ENOMEM;
+
+	desc_dma = k3_cppi_desc_pool_virt2dma(rx_chn->desc_pool, desc_rx);
+	buf_dma = xsk_buff_xdp_get_dma(xdp);
+	cppi5_hdesc_init(desc_rx, CPPI5_INFO0_HDESC_EPIB_PRESENT,
+			 AM65_CPSW_NAV_PS_DATA_SIZE);
+	k3_udma_glue_rx_dma_to_cppi5_addr(rx_chn->rx_chn, &buf_dma);
+	buf_len = xsk_pool_get_rx_frame_size(flow->xsk_pool);
+	cppi5_hdesc_attach_buf(desc_rx, buf_dma, buf_len, buf_dma, buf_len);
+	swdata = cppi5_hdesc_get_swdata(desc_rx);
+	swdata->xdp = xdp;
+	swdata->flow_id = flow_id;
+
+	return k3_udma_glue_push_rx_chn(rx_chn->rx_chn, flow_id,
+					desc_rx, desc_dma);
+}
+
+static int am65_cpsw_nuss_rx_alloc_zc(struct am65_cpsw_rx_flow *flow,
+				      int budget)
+{
+	struct xdp_buff *xdp;
+	int i, ret;
+
+	for (i = 0; i < budget; i++) {
+		xdp = xsk_buff_alloc(flow->xsk_pool);
+		if (!xdp)
+			break;
+
+		ret = am65_cpsw_nuss_rx_push_zc(flow, xdp);
+		if (ret < 0)
+			break;
+	}
+
+	return i;
+}
+
 static int am65_cpsw_nuss_rx_push(struct am65_cpsw_common *common,
 				  struct page *page, u32 flow_idx)
 {
@@ -529,6 +578,9 @@ void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
 		page_pool_destroy(flow->page_pool);
 		flow->page_pool = NULL;
 	}
+
+	flow->xsk_pool = NULL;
+	flow->xsk_port_id = -EINVAL;
 }
 
 static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common)
@@ -568,6 +620,7 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 	struct page_pool *pool;
 	struct page *page;
 	int port, ret, i;
+	int port_id;
 
 	flow = &rx_chn->flows[id];
 	pp_params.napi = &flow->napi_rx;
@@ -582,9 +635,24 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 	/* using same page pool is allowed as no running rx handlers
 	 * simultaneously for both ndevs
 	 */
+
+	/* get first port with XSK pool & XDP program set */
 	for (port = 0; port < common->port_num; port++) {
-		if (!common->ports[port].ndev)
-			/* FIXME should we BUG here? */
+		flow->xsk_pool = am65_cpsw_xsk_get_pool(&common->ports[port],
+							id);
+		if (flow->xsk_pool)
+			break;
+	}
+
+	port_id = common->ports[port].port_id;
+	flow->xsk_port_id = flow->xsk_pool ? port_id : -EINVAL;
+	for (port = 0; port < common->port_num; port++) {
+		port_id = common->ports[port].port_id;
+
+		/* NOTE: if queue is XSK then only register it
+		 * for the relevant port it was assigned to
+		 */
+		if (flow->xsk_pool && port_id != flow->xsk_port_id)
 			continue;
 
 		rxq = &common->ports[port].xdp_rxq[id];
@@ -593,29 +661,44 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 		if (ret)
 			goto err;
 
-		ret = xdp_rxq_info_reg_mem_model(rxq,
-						 MEM_TYPE_PAGE_POOL,
-						 pool);
+		if (flow->xsk_pool) {
+			ret = xdp_rxq_info_reg_mem_model(rxq,
+							 MEM_TYPE_XSK_BUFF_POOL,
+							 NULL);
+			xsk_pool_set_rxq_info(flow->xsk_pool, rxq);
+		} else {
+			ret = xdp_rxq_info_reg_mem_model(rxq,
+							 MEM_TYPE_PAGE_POOL,
+							 pool);
+		}
+
 		if (ret)
 			goto err;
 	}
 
-	for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
-		page = page_pool_dev_alloc_pages(flow->page_pool);
-		if (!page) {
-			dev_err(common->dev, "cannot allocate page in flow %d\n",
-				id);
-			ret = -ENOMEM;
-			goto err;
-		}
+	if (flow->xsk_pool) {
+		/* get pages from xsk_pool and push to RX ring
+		 * queue as much as possible
+		 */
+		am65_cpsw_nuss_rx_alloc_zc(flow, AM65_CPSW_MAX_RX_DESC);
+	} else {
+		for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
+			page = page_pool_dev_alloc_pages(flow->page_pool);
+			if (!page) {
+				dev_err(common->dev, "cannot allocate page in flow %d\n",
					id);
+				ret = -ENOMEM;
+				goto err;
+			}
 
-		ret = am65_cpsw_nuss_rx_push(common, page, id);
-		if (ret < 0) {
-			dev_err(common->dev,
-				"cannot submit page to rx channel flow %d, error %d\n",
-				id, ret);
-			am65_cpsw_put_page(flow, page, false);
-			goto err;
+			ret = am65_cpsw_nuss_rx_push(common, page, id);
+			if (ret < 0) {
+				dev_err(common->dev,
+					"cannot submit page to rx channel flow %d, error %d\n",
+					id, ret);
+				am65_cpsw_put_page(flow, page, false);
+				goto err;
+			}
 		}
 	}
 
@@ -772,6 +855,8 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 	struct am65_cpsw_rx_chn *rx_chn = data;
 	struct cppi5_host_desc_t *desc_rx;
 	struct am65_cpsw_swdata *swdata;
+	struct am65_cpsw_rx_flow *flow;
+	struct xdp_buff *xdp;
 	dma_addr_t buf_dma;
 	struct page *page;
 	u32 buf_dma_len;
@@ -779,13 +864,20 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 
 	desc_rx = k3_cppi_desc_pool_dma2virt(rx_chn->desc_pool, desc_dma);
 	swdata = cppi5_hdesc_get_swdata(desc_rx);
-	page = swdata->page;
 	flow_id = swdata->flow_id;
 	cppi5_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
 	k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
-	dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
 	k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
-	am65_cpsw_put_page(&rx_chn->flows[flow_id], page, false);
+	flow = &rx_chn->flows[flow_id];
+	if (flow->xsk_pool) {
+		xdp = swdata->xdp;
+		xsk_buff_free(xdp);
+	} else {
+		page = swdata->page;
+		dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len,
+				 DMA_FROM_DEVICE);
+		am65_cpsw_put_page(flow, page, false);
+	}
 }
 
 static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
@@ -1264,6 +1356,151 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info)
 	}
 }
 
+static struct sk_buff *am65_cpsw_create_skb_zc(struct am65_cpsw_rx_flow *flow,
+					       struct xdp_buff *xdp)
+{
+	unsigned int metasize = xdp->data - xdp->data_meta;
+	unsigned int datasize = xdp->data_end - xdp->data;
+	struct sk_buff *skb;
+
+	skb = napi_alloc_skb(&flow->napi_rx,
+			     xdp->data_end - xdp->data_hard_start);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
+	return skb;
+}
+
+static void am65_cpsw_dispatch_skb_zc(struct am65_cpsw_rx_flow *flow,
+				      struct am65_cpsw_port *port,
+				      struct xdp_buff *xdp, u32 csum_info)
+{
+	struct am65_cpsw_common *common = flow->common;
+	unsigned int len = xdp->data_end - xdp->data;
+	struct am65_cpsw_ndev_priv *ndev_priv;
+	struct net_device *ndev = port->ndev;
+	struct sk_buff *skb;
+
+	skb = am65_cpsw_create_skb_zc(flow, xdp);
+	if (!skb) {
+		ndev->stats.rx_dropped++;
+		return;
+	}
+
+	ndev_priv = netdev_priv(ndev);
+	am65_cpsw_nuss_set_offload_fwd_mark(skb, ndev_priv->offload_fwd_mark);
+	if (port->rx_ts_enabled)
+		am65_cpts_rx_timestamp(common->cpts, skb);
+
+	skb_mark_for_recycle(skb);
+	skb->protocol = eth_type_trans(skb, ndev);
+	am65_cpsw_nuss_rx_csum(skb, csum_info);
+	napi_gro_receive(&flow->napi_rx, skb);
+	dev_sw_netstats_rx_add(ndev, len);
+}
+
+static int am65_cpsw_nuss_rx_zc(struct am65_cpsw_rx_flow *flow, int budget)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &flow->common->rx_chns;
+	u32 buf_dma_len, pkt_len, port_id = 0, csum_info;
+	struct am65_cpsw_common *common = flow->common;
+	struct cppi5_host_desc_t *desc_rx;
+	struct device *dev = common->dev;
+	struct am65_cpsw_swdata *swdata;
+	dma_addr_t desc_dma, buf_dma;
+	struct am65_cpsw_port *port;
+	struct net_device *ndev;
+	u32 flow_idx = flow->id;
+	struct xdp_buff *xdp;
+	int count = 0;
+	int xdp_status = 0;
+	u32 *psdata;
+	int ret;
+
+	while (count < budget) {
+		ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx,
+					      &desc_dma);
+		if (ret) {
+			if (ret != -ENODATA)
+				dev_err(dev, "RX: pop chn fail %d\n",
+					ret);
+			break;
+		}
+
+		if (cppi5_desc_is_tdcm(desc_dma)) {
+			dev_dbg(dev, "%s RX tdown flow: %u\n",
+				__func__, flow_idx);
+			if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ)
+				complete(&common->tdown_complete);
+			continue;
+		}
+
+		desc_rx = k3_cppi_desc_pool_dma2virt(rx_chn->desc_pool,
+						     desc_dma);
+		dev_dbg(dev, "%s flow_idx: %u desc %pad\n",
+			__func__, flow_idx, &desc_dma);
+
+		swdata = cppi5_hdesc_get_swdata(desc_rx);
+		xdp = swdata->xdp;
+		cppi5_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
+		k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
+		pkt_len = cppi5_hdesc_get_pktlen(desc_rx);
+		cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL);
+		dev_dbg(dev, "%s rx port_id:%d\n", __func__, port_id);
+		port = am65_common_get_port(common, port_id);
+		ndev = port->ndev;
+		psdata = cppi5_hdesc_get_psdata(desc_rx);
+		csum_info = psdata[2];
+		dev_dbg(dev, "%s rx csum_info:%#x\n", __func__, csum_info);
+		k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
+		count++;
+		xsk_buff_set_size(xdp, pkt_len);
+		xsk_buff_dma_sync_for_cpu(xdp);
+		/* check if this port has XSK enabled. else drop packet */
+		if (port_id != flow->xsk_port_id) {
+			dev_dbg(dev, "discarding non xsk port data\n");
+			xsk_buff_free(xdp);
+			ndev->stats.rx_dropped++;
+			continue;
+		}
+
+		ret = am65_cpsw_run_xdp(flow, port, xdp, &pkt_len);
+		switch (ret) {
+		case AM65_CPSW_XDP_PASS:
+			am65_cpsw_dispatch_skb_zc(flow, port, xdp, csum_info);
+			xsk_buff_free(xdp);
+			break;
+		case AM65_CPSW_XDP_CONSUMED:
+			xsk_buff_free(xdp);
+			break;
+		case AM65_CPSW_XDP_TX:
+		case AM65_CPSW_XDP_REDIRECT:
+			xdp_status |= ret;
+			break;
+		}
+	}
+
+	if (xdp_status & AM65_CPSW_XDP_REDIRECT)
+		xdp_do_flush();
+
+	ret = am65_cpsw_nuss_rx_alloc_zc(flow, count);
+
+	if (xsk_uses_need_wakeup(flow->xsk_pool)) {
+		/* We set wakeup if we are exhausted of new requests */
+		if (ret < count)
+			xsk_set_rx_need_wakeup(flow->xsk_pool);
+		else
+			xsk_clear_rx_need_wakeup(flow->xsk_pool);
+	}
+
+	return count;
+}
+
 static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
 				     int *xdp_state)
 {
@@ -1403,17 +1640,21 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
 	int num_rx = 0;
 
 	/* process only this flow */
-	cur_budget = budget;
-	while (cur_budget--) {
-		ret = am65_cpsw_nuss_rx_packets(flow, &xdp_state);
-		xdp_state_or |= xdp_state;
-		if (ret)
-			break;
-		num_rx++;
-	}
+	if (flow->xsk_pool) {
+		num_rx = am65_cpsw_nuss_rx_zc(flow, budget);
+	} else {
+		cur_budget = budget;
+		while (cur_budget--) {
+			ret = am65_cpsw_nuss_rx_packets(flow, &xdp_state);
+			xdp_state_or |= xdp_state;
+			if (ret)
+				break;
+			num_rx++;
+		}
 
-	if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
-		xdp_do_flush();
+		if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
+			xdp_do_flush();
+	}
 
 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
 
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index e80e74a74d71..0e44d8a6cd68 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include "am65-cpsw-qos.h"
 
 struct am65_cpts;
@@ -107,6 +108,8 @@ struct am65_cpsw_rx_flow {
 	struct hrtimer rx_hrtimer;
 	unsigned long rx_pace_timeout;
 	struct page_pool *page_pool;
+	struct xsk_buff_pool *xsk_pool;
+	int xsk_port_id;
 	char name[32];
 };
 
@@ -120,7 +123,10 @@ struct am65_cpsw_tx_swdata {
 
 struct am65_cpsw_swdata {
 	u32 flow_id;
-	struct page *page;
+	union {
+		struct page *page;
+		struct xdp_buff *xdp;
+	};
 };
 
 struct am65_cpsw_rx_chn {
@@ -248,4 +254,8 @@ static inline bool am65_cpsw_xdp_is_enabled(struct am65_cpsw_port *port)
 {
 	return !!READ_ONCE(port->xdp_prog);
 }
+
+struct xsk_buff_pool *am65_cpsw_xsk_get_pool(struct am65_cpsw_port *port,
+					     u32 qid);
+
 #endif /* AM65_CPSW_NUSS_H_ */
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
index e1ab81cb4548..e71ff38f851f 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-xdp.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -108,6 +108,9 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 {
 	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+	struct am65_cpsw_rx_flow *rx_flow;
+
+	rx_flow = &common->rx_chns.flows[qid];
 
 	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
 		return -ENETDOWN;
@@ -118,5 +121,26 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	if (qid >=
common->rx_ch_num_flows || qid >=3D common->tx_ch_num) return -EINVAL; =20 + if (!rx_flow->xsk_pool) + return -EINVAL; + + if (flags & XDP_WAKEUP_RX) { + if (!napi_if_scheduled_mark_missed(&rx_flow->napi_rx)) { + if (likely(napi_schedule_prep(&rx_flow->napi_rx))) + __napi_schedule(&rx_flow->napi_rx); + } + } + return 0; } + +struct xsk_buff_pool *am65_cpsw_xsk_get_pool(struct am65_cpsw_port *port, + u32 qid) +{ + if (!am65_cpsw_xdp_is_enabled(port) || + !test_bit(qid, port->common->xdp_zc_queues) || + port->common->xsk_port_id[qid] !=3D port->port_id) + return NULL; + + return xsk_get_pool_from_qid(port->ndev, qid); +} --=20 2.34.1 From nobody Fri Dec 19 12:49:51 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4004E26AA98; Tue, 20 May 2025 10:24:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747736661; cv=none; b=lzSfAKLGlfOnV6XsIDAGijgZPpkmTPP7QJ5seurI7KREN1Om9TB9DfTil2GUVDP082nxULVA3qk/y+Go0++ZWgnDMd8Y3NsYmUOgL4T7pKbyBsTcQ0hxToEcOCMbsRpLek0L9xHH6OlZuaMQqirxzOBGr6/jTldgxBE8MJy77OY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747736661; c=relaxed/simple; bh=jyablavjxf0FtJa9g7V3FV8B2RVy7Em6vHSsqZjLPcI=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=T0tbI4GoJ43Q6f8CsXvlV8I66kcD4t0JUe8rQUmFX6xgL7rZ1wca24GbEDSs+hHoqttn+SHvcd2mZ7wo2rqMAMrUavQ5A5KR+IUorX9ODeBhtXP0rMB65Vrurabvg+6n23dNMHIjzZg1urxoz38lepvdL605yrphEbguVNR782o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=TaBo/s6R; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; 
From: Roger Quadros
Date: Tue, 20 May 2025 13:23:53 +0300
Subject: [PATCH RFC net-next 4/5] net: ethernet: ti: am65-cpsw: Add AF_XDP zero copy for TX
Message-Id: <20250520-am65-cpsw-xdp-zc-v1-4-45558024f566@kernel.org>
References: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
In-Reply-To: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König
Cc: srk@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Roger Quadros

Add zero-copy support to the TX path. Introduce xsk_pool and xsk_port_id
to struct am65_cpsw_tx_chn so we can quickly check whether a TX channel
is set up with an XSK pool, and for which port. If the TX channel is set
up with an XSK pool, frames are taken from the pool and pushed to the TX
channel.
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 170 ++++++++++++++++++++++++++++---
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |   5 +
 drivers/net/ethernet/ti/am65-cpsw-xdp.c  |  11 +-
 3 files changed, 170 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 5fbf04880722..e89b3cefcb05 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -747,6 +747,8 @@ void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
 	k3_udma_glue_reset_tx_chn(tx_chn->tx_chn, tx_chn,
 				  am65_cpsw_nuss_tx_cleanup);
 	k3_udma_glue_disable_tx_chn(tx_chn->tx_chn);
+	tx_chn->xsk_pool = NULL;
+	tx_chn->xsk_port_id = -EINVAL;
 }
 
 static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
@@ -775,12 +777,22 @@ static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
 int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
-	int ret;
+	int port, ret;
 
 	ret = k3_udma_glue_enable_tx_chn(tx_chn->tx_chn);
 	if (ret)
 		return ret;
 
+	/* get first port with XSK pool & XDP program set */
+	for (port = 0; port < common->port_num; port++) {
+		tx_chn->xsk_pool = am65_cpsw_xsk_get_pool(&common->ports[port],
+							  id);
+		if (tx_chn->xsk_pool)
+			break;
+	}
+
+	tx_chn->xsk_port_id = tx_chn->xsk_pool ?
+			      common->ports[port].port_id : -EINVAL;
 	napi_enable(&tx_chn->napi_tx);
 
 	return 0;
@@ -881,15 +893,18 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 }
 
 static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
-				     struct cppi5_host_desc_t *desc)
+				     struct cppi5_host_desc_t *desc,
+				     enum am65_cpsw_tx_buf_type buf_type)
 {
 	struct cppi5_host_desc_t *first_desc, *next_desc;
 	dma_addr_t buf_dma, next_desc_dma;
 	u32 buf_dma_len;
 
 	first_desc = desc;
-	next_desc = first_desc;
+	if (buf_type == AM65_CPSW_TX_BUF_TYPE_XSK_TX)
+		goto free_pool;
 
+	next_desc = first_desc;
 	cppi5_hdesc_get_obuf(first_desc, &buf_dma, &buf_dma_len);
 	k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &buf_dma);
 
@@ -912,6 +927,7 @@ static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
 		k3_cppi_desc_pool_free(tx_chn->desc_pool, next_desc);
 	}
 
+free_pool:
 	k3_cppi_desc_pool_free(tx_chn->desc_pool, first_desc);
 }
 
@@ -921,21 +937,32 @@ static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma)
 	enum am65_cpsw_tx_buf_type buf_type;
 	struct am65_cpsw_tx_swdata *swdata;
 	struct cppi5_host_desc_t *desc_tx;
+	struct xsk_buff_pool *xsk_pool;
 	struct xdp_frame *xdpf;
 	struct sk_buff *skb;
 
 	desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma);
 	swdata = cppi5_hdesc_get_swdata(desc_tx);
 	buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
-	if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
+	switch (buf_type) {
+	case AM65_CPSW_TX_BUF_TYPE_SKB:
 		skb = swdata->skb;
 		dev_kfree_skb_any(skb);
-	} else {
+		break;
+	case AM65_CPSW_TX_BUF_TYPE_XDP_TX:
+	case AM65_CPSW_TX_BUF_TYPE_XDP_NDO:
 		xdpf = swdata->xdpf;
 		xdp_return_frame(xdpf);
+		break;
+	case AM65_CPSW_TX_BUF_TYPE_XSK_TX:
+		xsk_pool = swdata->xsk_pool;
+		xsk_tx_completed(xsk_pool, 1);
+		break;
+	default:
+		break;
 	}
 
-	am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+	am65_cpsw_nuss_xmit_free(tx_chn, desc_tx, buf_type);
 }
 
 static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
@@ -1180,6 +1207,82 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
 	return ret;
 }
 
+static int am65_cpsw_xsk_xmit_zc(struct net_device *ndev,
+				 struct am65_cpsw_tx_chn *tx_chn)
+{
+	struct am65_cpsw_common *common = tx_chn->common;
+	struct xsk_buff_pool *pool = tx_chn->xsk_pool;
+	struct xdp_desc *xdp_descs = pool->tx_descs;
+	struct cppi5_host_desc_t *host_desc;
+	struct am65_cpsw_tx_swdata *swdata;
+	dma_addr_t dma_desc, dma_buf;
+	int num_tx = 0, pkt_len;
+	int descs_avail, ret;
+	int i;
+
+	descs_avail = k3_cppi_desc_pool_avail(tx_chn->desc_pool);
+	/* ensure that TX ring is not filled up by XDP, always MAX_SKB_FRAGS
+	 * will be available for normal TX path and queue is stopped there if
+	 * necessary
+	 */
+	if (descs_avail <= MAX_SKB_FRAGS)
+		return 0;
+
+	descs_avail -= MAX_SKB_FRAGS;
+	descs_avail = xsk_tx_peek_release_desc_batch(pool, descs_avail);
+
+	for (i = 0; i < descs_avail; i++) {
+		host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
+		if (unlikely(!host_desc))
+			break;
+
+		am65_cpsw_nuss_set_buf_type(tx_chn, host_desc,
+					    AM65_CPSW_TX_BUF_TYPE_XSK_TX);
+		dma_buf = xsk_buff_raw_get_dma(pool, xdp_descs[i].addr);
+		pkt_len = xdp_descs[i].len;
+		xsk_buff_raw_dma_sync_for_device(pool, dma_buf, pkt_len);
+
+		cppi5_hdesc_init(host_desc, CPPI5_INFO0_HDESC_EPIB_PRESENT,
+				 AM65_CPSW_NAV_PS_DATA_SIZE);
+		cppi5_hdesc_set_pkttype(host_desc, AM65_CPSW_CPPI_TX_PKT_TYPE);
+		cppi5_hdesc_set_pktlen(host_desc, pkt_len);
+		cppi5_desc_set_pktids(&host_desc->hdr, 0,
+				      AM65_CPSW_CPPI_TX_FLOW_ID);
+		cppi5_desc_set_tags_ids(&host_desc->hdr, 0,
+					tx_chn->xsk_port_id);
+
+		k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &dma_buf);
+		cppi5_hdesc_attach_buf(host_desc, dma_buf, pkt_len, dma_buf,
+				       pkt_len);
+
+		swdata = cppi5_hdesc_get_swdata(host_desc);
+		swdata->ndev = ndev;
+		swdata->xsk_pool = pool;
+
+		dma_desc = k3_cppi_desc_pool_virt2dma(tx_chn->desc_pool,
+						      host_desc);
+		if (AM65_CPSW_IS_CPSW2G(common)) {
+			ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn,
+						       host_desc, dma_desc);
+		} else {
+			spin_lock_bh(&tx_chn->lock);
+			ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn,
+						       host_desc, dma_desc);
+			spin_unlock_bh(&tx_chn->lock);
+		}
+
+		if (ret) {
+			ndev->stats.tx_errors++;
+			k3_cppi_desc_pool_free(tx_chn->desc_pool, host_desc);
+			break;
+		}
+
+		num_tx++;
+	}
+
+	return num_tx;
+}
+
 static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
 				  struct am65_cpsw_tx_chn *tx_chn,
 				  struct xdp_frame *xdpf,
@@ -1703,15 +1806,19 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 	struct netdev_queue *netif_txq;
 	unsigned int total_bytes = 0;
 	struct net_device *ndev;
+	int xsk_frames_done = 0;
 	struct xdp_frame *xdpf;
 	unsigned int pkt_len;
 	struct sk_buff *skb;
 	dma_addr_t desc_dma;
 	int res, num_tx = 0;
+	int xsk_tx = 0;
 
 	tx_chn = &common->tx_chns[chn];
 
 	while (true) {
+		pkt_len = 0;
+
 		if (!single_port)
 			spin_lock(&tx_chn->lock);
 		res = k3_udma_glue_pop_tx_chn(tx_chn->tx_chn, &desc_dma);
@@ -1733,25 +1840,36 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 		swdata = cppi5_hdesc_get_swdata(desc_tx);
 		ndev = swdata->ndev;
 		buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
-		if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
+		switch (buf_type) {
+		case AM65_CPSW_TX_BUF_TYPE_SKB:
 			skb = swdata->skb;
 			am65_cpts_tx_timestamp(tx_chn->common->cpts, skb);
 			pkt_len = skb->len;
 			napi_consume_skb(skb, budget);
-		} else {
+			total_bytes += pkt_len;
+			break;
+		case AM65_CPSW_TX_BUF_TYPE_XDP_TX:
+		case AM65_CPSW_TX_BUF_TYPE_XDP_NDO:
 			xdpf = swdata->xdpf;
 			pkt_len = xdpf->len;
+			total_bytes += pkt_len;
 			if (buf_type == AM65_CPSW_TX_BUF_TYPE_XDP_TX)
 				xdp_return_frame_rx_napi(xdpf);
 			else
 				xdp_return_frame(xdpf);
+			break;
+		case AM65_CPSW_TX_BUF_TYPE_XSK_TX:
+			pkt_len = cppi5_hdesc_get_pktlen(desc_tx);
+			xsk_frames_done++;
+			break;
+		default:
+			break;
 		}
 
-		total_bytes += pkt_len;
 		num_tx++;
-		am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+		am65_cpsw_nuss_xmit_free(tx_chn, desc_tx, buf_type);
 		dev_sw_netstats_tx_add(ndev, 1, pkt_len);
-		if (!single_port) {
+		if (!single_port && buf_type != AM65_CPSW_TX_BUF_TYPE_XSK_TX) {
 			/* as packets from multi ports can be interleaved
 			 * on the same channel, we have to figure out the
 			 * port/queue at every packet and report it/wake queue.
@@ -1768,6 +1886,19 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 		am65_cpsw_nuss_tx_wake(tx_chn, ndev, netif_txq);
 	}
 
+	if (tx_chn->xsk_pool) {
+		if (xsk_frames_done)
+			xsk_tx_completed(tx_chn->xsk_pool, xsk_frames_done);
+
+		if (xsk_uses_need_wakeup(tx_chn->xsk_pool))
+			xsk_set_tx_need_wakeup(tx_chn->xsk_pool);
+
+		ndev = common->ports[tx_chn->xsk_port_id].ndev;
+		netif_txq = netdev_get_tx_queue(ndev, chn);
+		txq_trans_cond_update(netif_txq);
+		xsk_tx = am65_cpsw_xsk_xmit_zc(ndev, tx_chn);
+	}
+
 	dev_dbg(dev, "%s:%u pkt:%d\n", __func__, chn, num_tx);
 
 	return num_tx;
@@ -1778,7 +1909,11 @@ static enum hrtimer_restart am65_cpsw_nuss_tx_timer_callback(struct hrtimer *tim
 	struct am65_cpsw_tx_chn *tx_chns =
 			container_of(timer, struct am65_cpsw_tx_chn, tx_hrtimer);
 
-	enable_irq(tx_chns->irq);
+	if (tx_chns->irq_disabled) {
+		tx_chns->irq_disabled = false;
+		enable_irq(tx_chns->irq);
+	}
+
 	return HRTIMER_NORESTART;
 }
 
@@ -1799,7 +1934,10 @@ static int am65_cpsw_nuss_tx_poll(struct napi_struct *napi_tx, int budget)
 				      ns_to_ktime(tx_chn->tx_pace_timeout),
 				      HRTIMER_MODE_REL_PINNED);
 		} else {
-			enable_irq(tx_chn->irq);
+			if (tx_chn->irq_disabled) {
+				tx_chn->irq_disabled = false;
+				enable_irq(tx_chn->irq);
+			}
 		}
 	}
 
@@ -1821,6 +1959,7 @@ static irqreturn_t am65_cpsw_nuss_tx_irq(int irq, void *dev_id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = dev_id;
 
+	tx_chn->irq_disabled = true;
 	disable_irq_nosync(irq);
 	napi_schedule(&tx_chn->napi_tx);
 
@@ -1985,14 +2124,14 @@ static netdev_tx_t am65_cpsw_nuss_ndo_slave_xmit(struct sk_buff *skb,
 	return NETDEV_TX_OK;
 
 err_free_descs:
-	am65_cpsw_nuss_xmit_free(tx_chn, first_desc);
+	am65_cpsw_nuss_xmit_free(tx_chn, first_desc, AM65_CPSW_TX_BUF_TYPE_SKB);
 err_free_skb:
 	ndev->stats.tx_dropped++;
 	dev_kfree_skb_any(skb);
 	return NETDEV_TX_OK;
 
 busy_free_descs:
-	am65_cpsw_nuss_xmit_free(tx_chn, first_desc);
+	am65_cpsw_nuss_xmit_free(tx_chn, first_desc, AM65_CPSW_TX_BUF_TYPE_SKB);
 busy_stop_q:
 	netif_tx_stop_queue(netif_txq);
 	return NETDEV_TX_BUSY;
@@ -2246,6 +2385,7 @@ static const struct net_device_ops am65_cpsw_nuss_netdev_ops = {
 	.ndo_set_tx_maxrate	= am65_cpsw_qos_ndo_tx_p0_set_maxrate,
 	.ndo_bpf		= am65_cpsw_ndo_bpf,
 	.ndo_xdp_xmit		= am65_cpsw_ndo_xdp_xmit,
+	.ndo_xsk_wakeup		= am65_cpsw_xsk_wakeup,
 };
 
 static void am65_cpsw_disable_phy(struct phy *phy)
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 0e44d8a6cd68..0152767e8436 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -72,6 +72,7 @@ enum am65_cpsw_tx_buf_type {
 	AM65_CPSW_TX_BUF_TYPE_SKB,
 	AM65_CPSW_TX_BUF_TYPE_XDP_TX,
 	AM65_CPSW_TX_BUF_TYPE_XDP_NDO,
+	AM65_CPSW_TX_BUF_TYPE_XSK_TX,
 };
 
 struct am65_cpsw_host {
@@ -97,6 +98,9 @@ struct am65_cpsw_tx_chn {
 	unsigned char dsize_log2;
 	char tx_chn_name[128];
 	u32 rate_mbps;
+	struct xsk_buff_pool *xsk_pool;
+	int xsk_port_id;
+	bool irq_disabled;
 };
 
 struct am65_cpsw_rx_flow {
@@ -118,6 +122,7 @@ struct am65_cpsw_tx_swdata {
 	union {
 		struct sk_buff *skb;
 		struct xdp_frame *xdpf;
+		struct xsk_buff_pool *xsk_pool;
 	};
 };
 
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
index e71ff38f851f..b8b35ce702b1 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-xdp.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -109,8 +109,10 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
 	struct am65_cpsw_rx_flow *rx_flow;
+	struct am65_cpsw_tx_chn *tx_ch;
 
 	rx_flow = &common->rx_chns.flows[qid];
+	tx_ch = &common->tx_chns[qid];
 
 	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
 		return -ENETDOWN;
@@ -121,9 +123,16 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
 		return -EINVAL;
 
-	if (!rx_flow->xsk_pool)
+	if (!rx_flow->xsk_pool && !tx_ch->xsk_pool)
 		return -EINVAL;
 
+	if (flags & XDP_WAKEUP_TX) {
+		if (!napi_if_scheduled_mark_missed(&tx_ch->napi_tx)) {
+			if (likely(napi_schedule_prep(&tx_ch->napi_tx)))
+				__napi_schedule(&tx_ch->napi_tx);
+		}
+	}
+
 	if (flags & XDP_WAKEUP_RX) {
 		if (!napi_if_scheduled_mark_missed(&rx_flow->napi_rx)) {
 			if (likely(napi_schedule_prep(&rx_flow->napi_rx)))
-- 
2.34.1

From nobody Fri Dec 19 12:49:51 2025
From: Roger Quadros
Date: Tue, 20 May 2025 13:23:54 +0300
Subject: [PATCH RFC net-next 5/5] net: ethernet: ti: am65-cpsw: enable zero copy in XDP features
Message-Id: <20250520-am65-cpsw-xdp-zc-v1-5-45558024f566@kernel.org>
References: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
In-Reply-To: <20250520-am65-cpsw-xdp-zc-v1-0-45558024f566@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König
Cc: srk@ti.com, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Roger Quadros

Now that the plumbing for XDP zero-copy RX and TX is in place, enable
the zero-copy feature flag.
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index e89b3cefcb05..894a0bd2a810 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -3173,7 +3173,8 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
 			      NETIF_F_HW_VLAN_CTAG_FILTER;
 	port->ndev->xdp_features = NETDEV_XDP_ACT_BASIC |
 				   NETDEV_XDP_ACT_REDIRECT |
-				   NETDEV_XDP_ACT_NDO_XMIT;
+				   NETDEV_XDP_ACT_NDO_XMIT |
+				   NETDEV_XDP_ACT_XSK_ZEROCOPY;
 	port->ndev->vlan_features |= NETIF_F_SG;
 	port->ndev->netdev_ops = &am65_cpsw_nuss_netdev_ops;
 	port->ndev->ethtool_ops = &am65_cpsw_ethtool_ops_slave;
-- 
2.34.1