From nobody Fri Dec 19 12:48:28 2025
From: Roger Quadros <rogerq@kernel.org>
Date: Sun, 09 Nov 2025 23:37:51 +0200
Subject: [PATCH net-next v2 1/7] net: ethernet: ti: am65-cpsw: fix BPF Program change on multi-port CPSW
Message-Id: <20251109-am65-cpsw-xdp-zc-v2-1-858f60a09d12@kernel.org>
In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, "David S. Miller", Eric Dumazet,
 Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
 Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König,
 Stanislav Fomichev, Simon Horman
Cc: srk@ti.com, Meghana Malladi, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, bpf@vger.kernel.org,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linaro-mm-sig@lists.linaro.org, Roger Quadros

On a multi-port CPSW system, stopping and starting just one port (ndev)
will not restart the queues if other ports (ndevs) are open. Instead,
check the usage_count variable to know whether CPSW is running and, if
so, restart all the queues.

Signed-off-by: Roger Quadros <rogerq@kernel.org>
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index d5f358ec982050751a63039e73887bf6e7f684e7..f8beb1735fb9cb75577e60f5b22111cb3a66acb9 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -1919,18 +1919,33 @@ static int am65_cpsw_xdp_prog_setup(struct net_device *ndev,
 				    struct bpf_prog *prog)
 {
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
-	bool running = netif_running(ndev);
+	struct am65_cpsw_common *common = port->common;
+	bool running = !!port->common->usage_count;
 	struct bpf_prog *old_prog;
+	int ret;
 
-	if (running)
-		am65_cpsw_nuss_ndo_slave_stop(ndev);
+	if (running) {
+		/* stop all queues */
+		am65_cpsw_destroy_txqs(common);
+		am65_cpsw_destroy_rxqs(common);
+	}
 
 	old_prog = xchg(&port->xdp_prog, prog);
 	if (old_prog)
 		bpf_prog_put(old_prog);
 
-	if (running)
-		return am65_cpsw_nuss_ndo_slave_open(ndev);
+	if (running) {
+		/* start all queues */
+		ret = am65_cpsw_create_rxqs(common);
+		if (ret)
+			return ret;
+
+		ret = am65_cpsw_create_txqs(common);
+		if (ret) {
+			am65_cpsw_destroy_rxqs(common);
+			return ret;
+		}
+	}
 
 	return 0;
 }
-- 
2.34.1

From nobody Fri Dec 19 12:48:28 2025
From: Roger Quadros <rogerq@kernel.org>
Date: Sun, 09 Nov 2025 23:37:52 +0200
Subject: [PATCH net-next v2 2/7] net: ethernet: ti: am65-cpsw: Retain page_pool on XDP program exchange
Message-Id: <20251109-am65-cpsw-xdp-zc-v2-2-858f60a09d12@kernel.org>
In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>

Add a new 'retain_page_pool' flag to am65_cpsw_destroy_rxq/s() so that
the page pool allocation is retained while switching the XDP program.
This avoids any re-allocation, and the potential failures that come
with it, under low-memory conditions.

Signed-off-by: Roger Quadros <rogerq@kernel.org>
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 38 ++++++++++++++++++------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index f8beb1735fb9cb75577e60f5b22111cb3a66acb9..f9e2286efa29bbb7056fda1fc82c38b479aae8bd 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -505,7 +505,7 @@ static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow,
 static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma);
 static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma);
 
-static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
+static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id, bool retain_page_pool)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_rx_flow *flow;
@@ -528,13 +528,13 @@ static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id)
 		xdp_rxq_info_unreg(rxq);
 	}
 
-	if (flow->page_pool) {
+	if (flow->page_pool && !retain_page_pool) {
 		page_pool_destroy(flow->page_pool);
 		flow->page_pool = NULL;
 	}
 }
 
-static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common)
+static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common, bool retain_page_pool)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	int id;
@@ -549,7 +549,7 @@ static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common)
 	}
 
 	for (id = common->rx_ch_num_flows - 1; id >= 0; id--)
-		am65_cpsw_destroy_rxq(common, id);
+		am65_cpsw_destroy_rxq(common, id, retain_page_pool);
 
 	k3_udma_glue_disable_rx_chn(common->rx_chns.rx_chn);
 }
@@ -574,13 +574,18 @@ static int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 
 	flow = &rx_chn->flows[id];
 	pp_params.napi = &flow->napi_rx;
-	pool = page_pool_create(&pp_params);
-	if (IS_ERR(pool)) {
-		ret = PTR_ERR(pool);
-		return ret;
-	}
 
-	flow->page_pool = pool;
+	if (!flow->page_pool) {
+		pool = page_pool_create(&pp_params);
+		if (IS_ERR(pool)) {
+			ret = PTR_ERR(pool);
+			return ret;
+		}
+
+		flow->page_pool = pool;
+	} else {
+		pool = flow->page_pool;
+	}
 
 	/* using same page pool is allowed as no running rx handlers
 	 * simultaneously for both ndevs
@@ -626,7 +631,7 @@ static int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 	return 0;
 
 err:
-	am65_cpsw_destroy_rxq(common, id);
+	am65_cpsw_destroy_rxq(common, id, false);
 	return ret;
 }
 
@@ -653,7 +658,7 @@ static int am65_cpsw_create_rxqs(struct am65_cpsw_common *common)
 
 err:
 	for (--id; id >= 0; id--)
-		am65_cpsw_destroy_rxq(common, id);
+		am65_cpsw_destroy_rxq(common, id, false);
 
 	return ret;
 }
@@ -942,7 +947,7 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 	return 0;
 
 cleanup_rx:
-	am65_cpsw_destroy_rxqs(common);
+	am65_cpsw_destroy_rxqs(common, false);
 
 	return ret;
 }
@@ -956,7 +961,7 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 			     ALE_PORT_STATE, ALE_PORT_STATE_DISABLE);
 
 	am65_cpsw_destroy_txqs(common);
-	am65_cpsw_destroy_rxqs(common);
+	am65_cpsw_destroy_rxqs(common, false);
 	cpsw_ale_stop(common->ale);
 
 	writel(0, common->cpsw_base + AM65_CPSW_REG_CTL);
@@ -1927,7 +1932,8 @@ static int am65_cpsw_xdp_prog_setup(struct net_device *ndev,
 	if (running) {
 		/* stop all queues */
 		am65_cpsw_destroy_txqs(common);
-		am65_cpsw_destroy_rxqs(common);
+		/* Retain page pool */
+		am65_cpsw_destroy_rxqs(common, true);
 	}
 
 	old_prog = xchg(&port->xdp_prog, prog);
@@ -1942,7 +1948,7 @@ static int am65_cpsw_xdp_prog_setup(struct net_device *ndev,
 
 		ret = am65_cpsw_create_txqs(common);
 		if (ret) {
-			am65_cpsw_destroy_rxqs(common);
+			am65_cpsw_destroy_rxqs(common, false);
 			return ret;
 		}
 	}
-- 
2.34.1

From nobody Fri Dec 19 12:48:28 2025
From: Roger Quadros <rogerq@kernel.org>
Date: Sun, 09 Nov 2025 23:37:53 +0200
Subject: [PATCH net-next v2 3/7] net: ethernet: ti: am65-cpsw: add XSK pool helpers
Message-Id: <20251109-am65-cpsw-xdp-zc-v2-3-858f60a09d12@kernel.org>
In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>

To prepare for XSK zero copy support, add XSK pool helpers in a new
file, am65-cpsw-xdp.c.

As queues are shared between ports, we can no longer support the case
where zero copy (XSK pool) is enabled for a queue on one port but not
on the others. The current solution is to drop the packet if zero copy
is not enabled for that port + queue but is enabled for some other
port + the same queue.

The xdp_zc_queues bitmap tracks whether a queue is set up as an XSK
pool, and the xsk_port_id array tracks which port the XSK queue is
assigned to for zero copy.

Signed-off-by: Roger Quadros <rogerq@kernel.org>
---
 drivers/net/ethernet/ti/Makefile         |   2 +-
 drivers/net/ethernet/ti/am65-cpsw-nuss.c |  21 ++++--
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |  20 +++++
 drivers/net/ethernet/ti/am65-cpsw-xdp.c  | 122 +++++++++++++++++++++++++++++++
 4 files changed, 156 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/ti/Makefile b/drivers/net/ethernet/ti/Makefile
index 93c0a4d0e33a6fb725ad61c3ec0eab87d2d3f61a..96585a28fc7d73f61b888e5d1587d5123875db31 100644
--- a/drivers/net/ethernet/ti/Makefile
+++ b/drivers/net/ethernet/ti/Makefile
@@ -29,7 +29,7 @@ keystone_netcp_ethss-y := netcp_ethss.o netcp_sgmii.o netcp_xgbepcsr.o cpsw_ale.
 obj-$(CONFIG_TI_K3_CPPI_DESC_POOL) += k3-cppi-desc-pool.o
 
 obj-$(CONFIG_TI_K3_AM65_CPSW_NUSS) += ti-am65-cpsw-nuss.o
-ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o
+ti-am65-cpsw-nuss-y := am65-cpsw-nuss.o cpsw_sl.o am65-cpsw-ethtool.o cpsw_ale.o am65-cpsw-xdp.o
 ti-am65-cpsw-nuss-$(CONFIG_TI_AM65_CPSW_QOS) += am65-cpsw-qos.o
 ti-am65-cpsw-nuss-$(CONFIG_TI_K3_AM65_CPSW_SWITCHDEV) += am65-cpsw-switchdev.o
 obj-$(CONFIG_TI_K3_AM65_CPTS) += am65-cpts.o
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index f9e2286efa29bbb7056fda1fc82c38b479aae8bd..46523be93df27710be77b288c36c1a0f66d8ca8d 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -58,9 +58,6 @@
 
 #define AM65_CPSW_MAX_PORTS	8
 
-#define AM65_CPSW_MIN_PACKET_SIZE	VLAN_ETH_ZLEN
-#define AM65_CPSW_MAX_PACKET_SIZE	2024
-
 #define AM65_CPSW_REG_CTL		0x004
 #define AM65_CPSW_REG_STAT_PORT_EN	0x014
 #define AM65_CPSW_REG_PTYPE		0x018
@@ -505,7 +502,7 @@ static inline void am65_cpsw_put_page(struct am65_cpsw_rx_flow *flow,
 static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma);
 static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma);
 
-static void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id, bool retain_page_pool)
+void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id, bool retain_page_pool)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct am65_cpsw_rx_flow *flow;
@@ -554,7 +551,7 @@ static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common, bool retain_
 	k3_udma_glue_disable_rx_chn(common->rx_chns.rx_chn);
 }
 
-static int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
+int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	struct page_pool_params pp_params = {
@@ -663,7 +660,7 @@ static int am65_cpsw_create_rxqs(struct am65_cpsw_common *common)
 	return ret;
 }
 
-static void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
+void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
 
@@ -697,7 +694,7 @@ static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
 		am65_cpsw_destroy_txq(common, id);
 }
 
-static int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
+int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
 	int ret;
@@ -1327,7 +1324,7 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
 	dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
 	k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
 
-	if (port->xdp_prog) {
+	if (am65_cpsw_xdp_is_enabled(port)) {
 		xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq[flow->id]);
 		xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
 				 pkt_len, false);
@@ -1961,6 +1958,9 @@ static int am65_cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
 	switch (bpf->command) {
 	case XDP_SETUP_PROG:
 		return am65_cpsw_xdp_prog_setup(ndev, bpf->prog);
+	case XDP_SETUP_XSK_POOL:
+		return am65_cpsw_xsk_setup_pool(ndev, bpf->xsk.pool,
+						bpf->xsk.queue_id);
 	default:
 		return -EINVAL;
 	}
@@ -3553,7 +3553,12 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
 	common = devm_kzalloc(dev, sizeof(struct am65_cpsw_common), GFP_KERNEL);
 	if (!common)
 		return -ENOMEM;
+
 	common->dev = dev;
+	common->xdp_zc_queues = devm_bitmap_zalloc(dev, AM65_CPSW_MAX_QUEUES,
+						   GFP_KERNEL);
+	if (!common->xdp_zc_queues)
+		return -ENOMEM;
 
 	of_id = of_match_device(am65_cpsw_nuss_of_mtable, dev);
 	if (!of_id)
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 917c37e4e89bd933d3001f6c35a62db01cd8da4c..31789b5e5e1fc96be20cce17234d0e16cdcea796 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -23,8 +23,14 @@ struct am65_cpts;
 
 #define AM65_CPSW_MAX_QUEUES	8	/* both TX & RX */
 
+#define AM65_CPSW_MIN_PACKET_SIZE	VLAN_ETH_ZLEN
+#define AM65_CPSW_MAX_PACKET_SIZE	2024
+
 #define AM65_CPSW_PORT_VLAN_REG_OFFSET	0x014
 
+#define AM65_CPSW_RX_DMA_ATTR	(DMA_ATTR_SKIP_CPU_SYNC |\
+				 DMA_ATTR_WEAK_ORDERING)
+
 struct am65_cpsw_slave_data {
 	bool				mac_only;
 	struct cpsw_sl			*mac_sl;
@@ -190,6 +196,9 @@ struct am65_cpsw_common {
 	unsigned char switch_id[MAX_PHYS_ITEM_ID_LEN];
 	/* only for suspend/resume context restore */
 	u32			*ale_context;
+	/* XDP Zero Copy */
+	unsigned long		*xdp_zc_queues;
+	int			xsk_port_id[AM65_CPSW_MAX_QUEUES];
 };
 
 struct am65_cpsw_ndev_priv {
@@ -228,4 +237,15 @@ int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common,
 
 bool am65_cpsw_port_dev_check(const struct net_device *dev);
 
+int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id);
+void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id, bool retain_page_pool);
+int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id);
+void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id);
+int am65_cpsw_xsk_setup_pool(struct net_device *ndev,
+			     struct xsk_buff_pool *pool, u16 qid);
+int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags);
+static inline bool am65_cpsw_xdp_is_enabled(struct am65_cpsw_port *port)
+{
+	return !!READ_ONCE(port->xdp_prog);
+}
 #endif /* AM65_CPSW_NUSS_H_ */
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
new file mode 100644
index 0000000000000000000000000000000000000000..89f43f7c83db35dba96621bae930172e0fc85b6a
--- /dev/null
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Texas Instruments K3 AM65 Ethernet Switch SubSystem Driver
+ *
+ * Copyright (C) 2025 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ */
+
+#include
+#include
+#include "am65-cpsw-nuss.h"
+
+static int am65_cpsw_xsk_pool_enable(struct am65_cpsw_port *port,
+				     struct xsk_buff_pool *pool, u16 qid)
+{
+	struct am65_cpsw_common *common = port->common;
+	struct am65_cpsw_rx_chn *rx_chn;
+	bool need_update;
+	u32 frame_size;
+	int ret;
+
+	/*
+	 * As queues are shared between ports we can no longer
+	 * support the case where zero copy (XSK Pool) is enabled
+	 * for the queue on one port but not for other ports.
+	 *
+	 * Current solution is to drop the packet if Zero copy
+	 * is not enabled for that port + queue but enabled for
+	 * some other port + same queue.
+	 */
+	if (test_bit(qid, common->xdp_zc_queues))
+		return -EINVAL;
+
+	rx_chn = &common->rx_chns;
+	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
+		return -EINVAL;
+
+	frame_size = xsk_pool_get_rx_frame_size(pool);
+	if (frame_size < AM65_CPSW_MAX_PACKET_SIZE)
+		return -EOPNOTSUPP;
+
+	ret = xsk_pool_dma_map(pool, rx_chn->dma_dev, AM65_CPSW_RX_DMA_ATTR);
+	if (ret) {
+		netdev_err(port->ndev, "Failed to map xsk pool\n");
+		return ret;
+	}
+
+	need_update = common->usage_count &&
+		      am65_cpsw_xdp_is_enabled(port);
+	if (need_update) {
+		am65_cpsw_destroy_rxq(common, qid, true);
+		am65_cpsw_destroy_txq(common, qid);
+	}
+
+	set_bit(qid, common->xdp_zc_queues);
+	common->xsk_port_id[qid] = port->port_id;
+	if (need_update) {
+		am65_cpsw_create_rxq(common, qid);
+		am65_cpsw_create_txq(common, qid);
+	}
+
+	return 0;
+}
+
+static int am65_cpsw_xsk_pool_disable(struct am65_cpsw_port *port,
+				      struct xsk_buff_pool *pool, u16 qid)
+{
+	struct am65_cpsw_common *common = port->common;
+	bool need_update;
+
+	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
+		return -EINVAL;
+
+	if (!test_bit(qid, common->xdp_zc_queues))
+		return -EINVAL;
+
+	pool = xsk_get_pool_from_qid(port->ndev, qid);
+	if (!pool)
+		return -EINVAL;
+
+	need_update = common->usage_count && am65_cpsw_xdp_is_enabled(port);
+	if (need_update) {
+		am65_cpsw_destroy_rxq(common, qid, true);
+		am65_cpsw_destroy_txq(common, qid);
+		synchronize_rcu();
+	}
+
+	xsk_pool_dma_unmap(pool, AM65_CPSW_RX_DMA_ATTR);
+	clear_bit(qid, common->xdp_zc_queues);
+	common->xsk_port_id[qid] = -EINVAL;
+	if (need_update) {
+		am65_cpsw_create_rxq(common, qid);
+		am65_cpsw_create_txq(common, qid);
+	}
+
+	return 0;
+}
+
+int am65_cpsw_xsk_setup_pool(struct net_device *ndev,
+			     struct xsk_buff_pool *pool, u16 qid)
+{
+	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+
+	return pool ? am65_cpsw_xsk_pool_enable(port, pool, qid) :
+		      am65_cpsw_xsk_pool_disable(port, pool, qid);
+}
+
+int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
+{
+	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+
+	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
+		return -ENETDOWN;
+
+	if (!am65_cpsw_xdp_is_enabled(port))
+		return -EINVAL;
+
+	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
+		return -EINVAL;
+
+	return 0;
+}
-- 
2.34.1

From nobody Fri Dec 19 12:48:28 2025
smtp.kernel.org (Postfix) with ESMTPSA id 76E60C4CEFB; Sun, 9 Nov 2025 21:38:53 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1762724337; bh=bBSe/udiclVqrxJNKOKH/yd5ELEJknHndW0mtte2Njo=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=h/iOu1ZIkhQ/z+6Lrk2IQq86jfV9Nn035Oi69rQk4fCqkVEETrn3yV6oEnO+r2yjZ w6KMa298s/hxHrDx1teg6yuJAO3I8IAMaN1681xttX6cETDj2qjBNAdHnmCj853J7A 6glaGEl/9lm2D3uUs0jV3RMyCEtoWoyKm5Lgs2fLToerdL7UzVwDg1yKxMbd/9A4aj 4ESLSQ7s5lbvk+GFCuwaPTjzy+tZL1kP9FxD6s1JezBLXzXyHZCngfv9GfbySytwea fUNwWqUq5UXi94osM2jaggIDSE33HB0UzcEMXEtzmFA0fX6BoiLzqx/bM2rnkIQnLQ nfYOtgQSWSS3Q== From: Roger Quadros Date: Sun, 09 Nov 2025 23:37:54 +0200 Subject: [PATCH net-next v2 4/7] net: ethernet: ti: am65-cpsw: Add AF_XDP zero copy for RX Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251109-am65-cpsw-xdp-zc-v2-4-858f60a09d12@kernel.org> References: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org> In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org> To: Siddharth Vadapalli , Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , Stanislav Fomichev , Simon Horman Cc: srk@ti.com, Meghana Malladi , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Roger Quadros X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=openpgp-sha256; l=15517; i=rogerq@kernel.org; h=from:subject:message-id; bh=bBSe/udiclVqrxJNKOKH/yd5ELEJknHndW0mtte2Njo=; b=owEBbQKS/ZANAwAIAdJaa9O+djCTAcsmYgBpEQnZOF3kI4R2wCFWPLpqMfWd64Ljs/dc114u6 9yAn04Hf8SJAjMEAAEIAB0WIQRBIWXUTJ9SeA+rEFjSWmvTvnYwkwUCaREJ2QAKCRDSWmvTvnYw k66kD/4y2wDf5aLMC0JT5UBl+FnIE0WxT5bBIDuCRYtXKucVGjZlHMd17v6qjQoTgjHR6fohfd7 pnctQnW0g8rHej/QIRRCgqL7TdeF5emuo2KoroJjCzNjy4gG/67RqkIcQhPyF0Vnup5SONIjn7d iqyjcOHN9QlSRzo2D1P6vDuYI8Vxpj2xYWr7VrBTEwUmpqDcOmRIxyXtfZqwbhlFz6qJMoTHNav ZNBLbuY+Tdr36m8PspfdNZfuQ+xKm8GVmdesGjL/YzqlSTDwvXia3B3XD3f9Y5yQ5VQtmGIfzpS PsFEia/RHzPQTM0ZgAxsqJCCC4F5YY5K+HVakneMru+x7g8+AXLED226RDpXdC+kbemeCqoYmRc PiS/i2fR5rV9TrT9qx8vjA+6SBknhe5hWX5SM9sGNKfQaYRpI+3iHVdlsjF8Pxuu6HvQvS6UKCS Am5h9gv45xzKrAU3p1wUsbGSf8YjNb9efGUYPxP0vNLptoSnay70Cx4QofEV5PBVGWiDQbp6wlO oRDr+JPHI0CU0MIrVgKzav9W02VQzmclnoBnJU5wQW+4DHr1QazJ5Bvfz7hSco8KgyHTVuspuXm IBekOMB4kgd7iPba7FUOABTXOWVM/tsd7VccWNjUOvt//1u8aEESzzbdU13erT7baVVxEzvrNPg CVp0L5sv00ZUitw== X-Developer-Key: i=rogerq@kernel.org; a=openpgp; fpr=412165D44C9F52780FAB1058D25A6BD3BE763093 Add zero copy support to RX path. Introduce xsk_pool and xsk_port_id to struct am65_cpsw_rx_flow. This way we can quickly check if the flow is setup as XSK pool and for which port. If the RX flow is setup as XSK pool then register it as MEM_TYPE_XSK_BUFF_POOL. At queue creation get free frames from the XSK pool and push it to the RX ring. 
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 317 +++++++++++++++++++++++++++----
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |  12 +-
 drivers/net/ethernet/ti/am65-cpsw-xdp.c  |  24 +++
 3 files changed, 319 insertions(+), 34 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 46523be93df27710be77b288c36c1a0f66d8ca8d..afc0c8836fe242d8bf47ce9bcd3e6b725ca37bf9 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -429,6 +429,55 @@ static void am65_cpsw_nuss_ndo_host_tx_timeout(struct net_device *ndev,
 	}
 }
 
+static int am65_cpsw_nuss_rx_push_zc(struct am65_cpsw_rx_flow *flow,
+				     struct xdp_buff *xdp)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &flow->common->rx_chns;
+	struct cppi5_host_desc_t *desc_rx;
+	struct am65_cpsw_swdata *swdata;
+	u32 flow_id = flow->id;
+	dma_addr_t desc_dma;
+	dma_addr_t buf_dma;
+	int buf_len;
+
+	desc_rx = k3_cppi_desc_pool_alloc(rx_chn->desc_pool);
+	if (!desc_rx)
+		return -ENOMEM;
+
+	desc_dma = k3_cppi_desc_pool_virt2dma(rx_chn->desc_pool, desc_rx);
+	buf_dma = xsk_buff_xdp_get_dma(xdp);
+	cppi5_hdesc_init(desc_rx, CPPI5_INFO0_HDESC_EPIB_PRESENT,
+			 AM65_CPSW_NAV_PS_DATA_SIZE);
+	k3_udma_glue_rx_dma_to_cppi5_addr(rx_chn->rx_chn, &buf_dma);
+	buf_len = xsk_pool_get_rx_frame_size(flow->xsk_pool);
+	cppi5_hdesc_attach_buf(desc_rx, buf_dma, buf_len, buf_dma, buf_len);
+	swdata = cppi5_hdesc_get_swdata(desc_rx);
+	swdata->xdp = xdp;
+	swdata->flow_id = flow_id;
+
+	return k3_udma_glue_push_rx_chn(rx_chn->rx_chn, flow_id,
+					desc_rx, desc_dma);
+}
+
+static int am65_cpsw_nuss_rx_alloc_zc(struct am65_cpsw_rx_flow *flow,
+				      int budget)
+{
+	struct xdp_buff *xdp;
+	int i, ret;
+
+	for (i = 0; i < budget; i++) {
+		xdp = xsk_buff_alloc(flow->xsk_pool);
+		if (!xdp)
+			break;
+
+		ret = am65_cpsw_nuss_rx_push_zc(flow, xdp);
+		if (ret < 0)
+			break;
+	}
+
+	return i;
+}
+
 static int
 am65_cpsw_nuss_rx_push(struct am65_cpsw_common *common, struct page *page,
			u32 flow_idx)
 {
@@ -529,6 +578,9 @@ void am65_cpsw_destroy_rxq(struct am65_cpsw_common *common, int id, bool retain_
 		page_pool_destroy(flow->page_pool);
 		flow->page_pool = NULL;
 	}
+
+	flow->xsk_pool = NULL;
+	flow->xsk_port_id = -EINVAL;
 }
 
 static void am65_cpsw_destroy_rxqs(struct am65_cpsw_common *common, bool retain_page_pool)
@@ -568,6 +620,7 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 	struct page_pool *pool;
 	struct page *page;
 	int port, ret, i;
+	int port_id;
 
 	flow = &rx_chn->flows[id];
 	pp_params.napi = &flow->napi_rx;
@@ -587,9 +640,30 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 	/* using same page pool is allowed as no running rx handlers
 	 * simultaneously for both ndevs
 	 */
+
+	/* get first port with XSK pool & XDP program set */
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			continue;
+
+		flow->xsk_pool = am65_cpsw_xsk_get_pool(&common->ports[port],
+							id);
+		if (flow->xsk_pool)
+			break;
+	}
+
+	port_id = common->ports[port].port_id;
+	flow->xsk_port_id = flow->xsk_pool ? port_id : -EINVAL;
 	for (port = 0; port < common->port_num; port++) {
 		if (!common->ports[port].ndev)
-			/* FIXME should we BUG here? */
+			continue;
+
+		port_id = common->ports[port].port_id;
+
+		/* NOTE: if queue is XSK then only register it
+		 * for the relevant port it was assigned to
+		 */
+		if (flow->xsk_pool && port_id != flow->xsk_port_id)
 			continue;
 
 		rxq = &common->ports[port].xdp_rxq[id];
@@ -598,29 +672,44 @@ int am65_cpsw_create_rxq(struct am65_cpsw_common *common, int id)
 		if (ret)
 			goto err;
 
-		ret = xdp_rxq_info_reg_mem_model(rxq,
-						 MEM_TYPE_PAGE_POOL,
-						 pool);
+		if (flow->xsk_pool) {
+			ret = xdp_rxq_info_reg_mem_model(rxq,
+							 MEM_TYPE_XSK_BUFF_POOL,
+							 NULL);
+			xsk_pool_set_rxq_info(flow->xsk_pool, rxq);
+		} else {
+			ret = xdp_rxq_info_reg_mem_model(rxq,
+							 MEM_TYPE_PAGE_POOL,
+							 pool);
+		}
+
 		if (ret)
 			goto err;
 	}
 
-	for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
-		page = page_pool_dev_alloc_pages(flow->page_pool);
-		if (!page) {
-			dev_err(common->dev, "cannot allocate page in flow %d\n",
-				id);
-			ret = -ENOMEM;
-			goto err;
-		}
+	if (flow->xsk_pool) {
+		/* get pages from xsk_pool and push to RX ring
+		 * queue as much as possible
+		 */
+		am65_cpsw_nuss_rx_alloc_zc(flow, AM65_CPSW_MAX_RX_DESC);
+	} else {
+		for (i = 0; i < AM65_CPSW_MAX_RX_DESC; i++) {
+			page = page_pool_dev_alloc_pages(flow->page_pool);
+			if (!page) {
+				dev_err(common->dev, "cannot allocate page in flow %d\n",
+					id);
+				ret = -ENOMEM;
+				goto err;
+			}
 
-		ret = am65_cpsw_nuss_rx_push(common, page, id);
-		if (ret < 0) {
-			dev_err(common->dev,
-				"cannot submit page to rx channel flow %d, error %d\n",
-				id, ret);
-			am65_cpsw_put_page(flow, page, false);
-			goto err;
+			ret = am65_cpsw_nuss_rx_push(common, page, id);
+			if (ret < 0) {
+				dev_err(common->dev,
+					"cannot submit page to rx channel flow %d, error %d\n",
+					id, ret);
+				am65_cpsw_put_page(flow, page, false);
+				goto err;
+			}
 		}
 	}
 
@@ -777,6 +866,8 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 	struct am65_cpsw_rx_chn *rx_chn = data;
 	struct cppi5_host_desc_t *desc_rx;
 	struct am65_cpsw_swdata *swdata;
+	struct am65_cpsw_rx_flow *flow;
+	struct xdp_buff *xdp;
 	dma_addr_t buf_dma;
 	struct page *page;
 	u32 buf_dma_len;
@@ -784,13 +875,20 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 
 	desc_rx = k3_cppi_desc_pool_dma2virt(rx_chn->desc_pool, desc_dma);
 	swdata = cppi5_hdesc_get_swdata(desc_rx);
-	page = swdata->page;
 	flow_id = swdata->flow_id;
 	cppi5_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
 	k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
-	dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
 	k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
-	am65_cpsw_put_page(&rx_chn->flows[flow_id], page, false);
+	flow = &rx_chn->flows[flow_id];
+	if (flow->xsk_pool) {
+		xdp = swdata->xdp;
+		xsk_buff_free(xdp);
+	} else {
+		page = swdata->page;
+		dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len,
+				 DMA_FROM_DEVICE);
+		am65_cpsw_put_page(flow, page, false);
+	}
 }
 
 static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
@@ -1267,6 +1365,151 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info)
 	}
 }
 
+static struct sk_buff *am65_cpsw_create_skb_zc(struct am65_cpsw_rx_flow *flow,
+					       struct xdp_buff *xdp)
+{
+	unsigned int metasize = xdp->data - xdp->data_meta;
+	unsigned int datasize = xdp->data_end - xdp->data;
+	struct sk_buff *skb;
+
+	skb = napi_alloc_skb(&flow->napi_rx,
+			     xdp->data_end - xdp->data_hard_start);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	memcpy(__skb_put(skb, datasize), xdp->data, datasize);
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
+	return skb;
+}
+
+static void am65_cpsw_dispatch_skb_zc(struct am65_cpsw_rx_flow *flow,
+				      struct am65_cpsw_port *port,
+				      struct xdp_buff *xdp, u32 csum_info)
+{
+	struct am65_cpsw_common *common = flow->common;
+	unsigned int len = xdp->data_end - xdp->data;
+	struct am65_cpsw_ndev_priv *ndev_priv;
+	struct net_device *ndev = port->ndev;
+	struct sk_buff *skb;
+
+	skb = am65_cpsw_create_skb_zc(flow, xdp);
+	if (!skb) {
+		ndev->stats.rx_dropped++;
+		return;
+	}
+
+	ndev_priv = netdev_priv(ndev);
+	am65_cpsw_nuss_set_offload_fwd_mark(skb, ndev_priv->offload_fwd_mark);
+	if (port->rx_ts_enabled)
+		am65_cpts_rx_timestamp(common->cpts, skb);
+
+	skb_mark_for_recycle(skb);
+	skb->protocol = eth_type_trans(skb, ndev);
+	am65_cpsw_nuss_rx_csum(skb, csum_info);
+	napi_gro_receive(&flow->napi_rx, skb);
+	dev_sw_netstats_rx_add(ndev, len);
+}
+
+static int am65_cpsw_nuss_rx_zc(struct am65_cpsw_rx_flow *flow, int budget)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &flow->common->rx_chns;
+	u32 buf_dma_len, pkt_len, port_id = 0, csum_info;
+	struct am65_cpsw_common *common = flow->common;
+	struct cppi5_host_desc_t *desc_rx;
+	struct device *dev = common->dev;
+	struct am65_cpsw_swdata *swdata;
+	dma_addr_t desc_dma, buf_dma;
+	struct am65_cpsw_port *port;
+	struct net_device *ndev;
+	u32 flow_idx = flow->id;
+	struct xdp_buff *xdp;
+	int count = 0;
+	int xdp_status = 0;
+	u32 *psdata;
+	int ret;
+
+	while (count < budget) {
+		ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx,
+					      &desc_dma);
+		if (ret) {
+			if (ret != -ENODATA)
+				dev_err(dev, "RX: pop chn fail %d\n",
+					ret);
+			break;
+		}
+
+		if (cppi5_desc_is_tdcm(desc_dma)) {
+			dev_dbg(dev, "%s RX tdown flow: %u\n",
+				__func__, flow_idx);
+			if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ)
+				complete(&common->tdown_complete);
+			continue;
+		}
+
+		desc_rx = k3_cppi_desc_pool_dma2virt(rx_chn->desc_pool,
+						     desc_dma);
+		dev_dbg(dev, "%s flow_idx: %u desc %pad\n",
+			__func__, flow_idx, &desc_dma);
+
+		swdata = cppi5_hdesc_get_swdata(desc_rx);
+		xdp = swdata->xdp;
+		cppi5_hdesc_get_obuf(desc_rx, &buf_dma, &buf_dma_len);
+		k3_udma_glue_rx_cppi5_to_dma_addr(rx_chn->rx_chn, &buf_dma);
+		pkt_len = cppi5_hdesc_get_pktlen(desc_rx);
+		cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL);
+		dev_dbg(dev, "%s rx port_id:%d\n", __func__, port_id);
+		port = am65_common_get_port(common, port_id);
+		ndev = port->ndev;
+		psdata = cppi5_hdesc_get_psdata(desc_rx);
+		csum_info = psdata[2];
+		dev_dbg(dev, "%s rx csum_info:%#x\n", __func__, csum_info);
+		k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
+		count++;
+		xsk_buff_set_size(xdp, pkt_len);
+		xsk_buff_dma_sync_for_cpu(xdp);
+		/* check if this port has XSK enabled. else drop packet */
+		if (port_id != flow->xsk_port_id) {
+			dev_dbg(dev, "discarding non xsk port data\n");
+			xsk_buff_free(xdp);
+			ndev->stats.rx_dropped++;
+			continue;
+		}
+
+		ret = am65_cpsw_run_xdp(flow, port, xdp, &pkt_len);
+		switch (ret) {
+		case AM65_CPSW_XDP_PASS:
+			am65_cpsw_dispatch_skb_zc(flow, port, xdp, csum_info);
+			xsk_buff_free(xdp);
+			break;
+		case AM65_CPSW_XDP_CONSUMED:
+			xsk_buff_free(xdp);
+			break;
+		case AM65_CPSW_XDP_TX:
+		case AM65_CPSW_XDP_REDIRECT:
+			xdp_status |= ret;
+			break;
+		}
+	}
+
+	if (xdp_status & AM65_CPSW_XDP_REDIRECT)
+		xdp_do_flush();
+
+	ret = am65_cpsw_nuss_rx_alloc_zc(flow, count);
+
+	if (xsk_uses_need_wakeup(flow->xsk_pool)) {
+		/* We set wakeup if we are exhausted of new requests */
+		if (ret < count)
+			xsk_set_rx_need_wakeup(flow->xsk_pool);
+		else
+			xsk_clear_rx_need_wakeup(flow->xsk_pool);
+	}
+
+	return count;
+}
+
 static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
				     int *xdp_state)
 {
@@ -1392,7 +1635,11 @@ static enum hrtimer_restart am65_cpsw_nuss_rx_timer_callback(struct hrtimer *tim
 			struct am65_cpsw_rx_flow, rx_hrtimer);
 
-	enable_irq(flow->irq);
+	if (flow->irq_disabled) {
+		flow->irq_disabled = false;
+		enable_irq(flow->irq);
+	}
+
 	return HRTIMER_NORESTART;
 }
 
@@ -1406,17 +1653,21 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
 	int num_rx = 0;
 
 	/* process only this flow */
-	cur_budget = budget;
-	while (cur_budget--) {
-		ret = am65_cpsw_nuss_rx_packets(flow, &xdp_state);
-		xdp_state_or |= xdp_state;
-		if (ret)
-			break;
-		num_rx++;
-	}
+	if (flow->xsk_pool) {
+		num_rx = am65_cpsw_nuss_rx_zc(flow, budget);
+	} else {
+		cur_budget = budget;
+		while (cur_budget--) {
+			ret = am65_cpsw_nuss_rx_packets(flow, &xdp_state);
+			xdp_state_or |= xdp_state;
+			if (ret)
+				break;
+			num_rx++;
+		}
 
-	if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
-		xdp_do_flush();
+		if (xdp_state_or & AM65_CPSW_XDP_REDIRECT)
+			xdp_do_flush();
+	}
 
 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
 
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 31789b5e5e1fc96be20cce17234d0e16cdcea796..2bf4d12f92764706719cc1d65001dbb53da58c38 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include "am65-cpsw-qos.h"
 
 struct am65_cpts;
@@ -107,6 +108,8 @@ struct am65_cpsw_rx_flow {
 	struct hrtimer rx_hrtimer;
 	unsigned long rx_pace_timeout;
 	struct page_pool *page_pool;
+	struct xsk_buff_pool *xsk_pool;
+	int xsk_port_id;
 	char name[32];
 };
 
@@ -120,7 +123,10 @@ struct am65_cpsw_tx_swdata {
 
 struct am65_cpsw_swdata {
 	u32 flow_id;
-	struct page *page;
+	union {
+		struct page *page;
+		struct xdp_buff *xdp;
+	};
 };
 
 struct am65_cpsw_rx_chn {
@@ -248,4 +254,8 @@ static inline bool am65_cpsw_xdp_is_enabled(struct am65_cpsw_port *port)
 {
 	return !!READ_ONCE(port->xdp_prog);
 }
+
+struct xsk_buff_pool *am65_cpsw_xsk_get_pool(struct am65_cpsw_port *port,
+					     u32 qid);
+
 #endif /* AM65_CPSW_NUSS_H_ */
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
index 89f43f7c83db35dba96621bae930172e0fc85b6a..0e37c27f77720713430a3e70f6c4b3dfb048cfc0 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-xdp.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -108,6 +108,9 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 {
 	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+	struct am65_cpsw_rx_flow *rx_flow;
+
+	rx_flow = &common->rx_chns.flows[qid];
 
 	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
 		return -ENETDOWN;
@@ -118,5 +121,26 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
 		return -EINVAL;
 
+	if (!rx_flow->xsk_pool)
+		return -EINVAL;
+
+	if (flags & XDP_WAKEUP_RX) {
+		if (!napi_if_scheduled_mark_missed(&rx_flow->napi_rx)) {
+			if (likely(napi_schedule_prep(&rx_flow->napi_rx)))
+				__napi_schedule(&rx_flow->napi_rx);
+		}
+	}
+
 	return 0;
 }
+
+struct xsk_buff_pool *am65_cpsw_xsk_get_pool(struct am65_cpsw_port *port,
+					     u32 qid)
+{
+	if (!am65_cpsw_xdp_is_enabled(port) ||
+	    !test_bit(qid, port->common->xdp_zc_queues) ||
+	    port->common->xsk_port_id[qid] != port->port_id)
+		return NULL;
+
+	return xsk_get_pool_from_qid(port->ndev, qid);
+}
-- 
2.34.1
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 34D73C4AF0D; Sun, 9 Nov 2025 21:38:58 +0000 (UTC)
From: Roger Quadros
Date: Sun, 09 Nov 2025 23:37:55 +0200
Subject: [PATCH net-next v2 5/7] net: ethernet: ti: am65-cpsw: Add AF_XDP zero copy for TX
Message-Id: <20251109-am65-cpsw-xdp-zc-v2-5-858f60a09d12@kernel.org>
References: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>
In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org>
To: Siddharth Vadapalli, Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König, Stanislav Fomichev, Simon Horman
Cc: srk@ti.com, Meghana Malladi, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Roger Quadros
X-Mailer: b4 0.14.2

Add zero copy support to the TX path.

Introduce xsk_pool and xsk_port_id to struct am65_cpsw_tx_chn. This way
we can quickly check if the channel is set up as an XSK pool and for
which port. If the TX channel is set up as an XSK pool, get the frames
from the pool and send them to the TX channel.
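[Editorial illustration, not part of the patch: the zero-copy TX path in this patch refuses to use the last MAX_SKB_FRAGS free descriptors so the regular skb xmit path can always queue a maximally fragmented packet and stop the queue cleanly. A standalone C sketch of that budget rule (`xsk_tx_budget` and `SIM_MAX_SKB_FRAGS` are invented names; 17 is merely a typical MAX_SKB_FRAGS value, not taken from the patch):]

```c
#define SIM_MAX_SKB_FRAGS 17	/* illustrative stand-in for MAX_SKB_FRAGS */

/* How many XSK TX descriptors may be sent right now, given
 * `descs_avail` free ring descriptors and `want` frames waiting
 * in the pool's TX queue. Mirrors the reservation at the top of
 * am65_cpsw_xsk_xmit_zc(): the skb path keeps a guaranteed margin. */
int xsk_tx_budget(int descs_avail, int want)
{
	if (descs_avail <= SIM_MAX_SKB_FRAGS)
		return 0;	/* leave everything for the skb xmit path */

	descs_avail -= SIM_MAX_SKB_FRAGS;
	return want < descs_avail ? want : descs_avail;
}
```

Without such a reservation, a burst of XSK traffic could consume every descriptor and the skb path would have no headroom left between "ring looks usable" and "queue stopped".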
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 171 ++++++++++++++++++++++++++++---
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |   5 +
 drivers/net/ethernet/ti/am65-cpsw-xdp.c  |  11 +-
 3 files changed, 171 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index afc0c8836fe242d8bf47ce9bcd3e6b725ca37bf9..2e06e7df23ad5249786d081e51434f87dd2a76b5 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -758,6 +758,8 @@ void am65_cpsw_destroy_txq(struct am65_cpsw_common *common, int id)
 	k3_udma_glue_reset_tx_chn(tx_chn->tx_chn, tx_chn,
				   am65_cpsw_nuss_tx_cleanup);
 	k3_udma_glue_disable_tx_chn(tx_chn->tx_chn);
+	tx_chn->xsk_pool = NULL;
+	tx_chn->xsk_port_id = -EINVAL;
 }
 
 static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
@@ -786,12 +788,25 @@ static void am65_cpsw_destroy_txqs(struct am65_cpsw_common *common)
 int am65_cpsw_create_txq(struct am65_cpsw_common *common, int id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[id];
-	int ret;
+	int port, ret;
 
 	ret = k3_udma_glue_enable_tx_chn(tx_chn->tx_chn);
 	if (ret)
 		return ret;
 
+	/* get first port with XSK pool & XDP program set */
+	for (port = 0; port < common->port_num; port++) {
+		if (!common->ports[port].ndev)
+			continue;
+
+		tx_chn->xsk_pool = am65_cpsw_xsk_get_pool(&common->ports[port],
+							  id);
+		if (tx_chn->xsk_pool)
+			break;
+	}
+
+	tx_chn->xsk_port_id = tx_chn->xsk_pool ?
+			      common->ports[port].port_id : -EINVAL;
 	napi_enable(&tx_chn->napi_tx);
 
 	return 0;
@@ -892,15 +907,18 @@ static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 }
 
 static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
-				     struct cppi5_host_desc_t *desc)
+				     struct cppi5_host_desc_t *desc,
+				     enum am65_cpsw_tx_buf_type buf_type)
 {
 	struct cppi5_host_desc_t *first_desc, *next_desc;
 	dma_addr_t buf_dma, next_desc_dma;
 	u32 buf_dma_len;
 
 	first_desc = desc;
-	next_desc = first_desc;
+	if (buf_type == AM65_CPSW_TX_BUF_TYPE_XSK_TX)
+		goto free_pool;
 
+	next_desc = first_desc;
 	cppi5_hdesc_get_obuf(first_desc, &buf_dma, &buf_dma_len);
 	k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &buf_dma);
 
@@ -923,6 +941,7 @@ static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
 		k3_cppi_desc_pool_free(tx_chn->desc_pool, next_desc);
 	}
 
+free_pool:
 	k3_cppi_desc_pool_free(tx_chn->desc_pool, first_desc);
 }
 
@@ -932,21 +951,32 @@ static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma)
 	enum am65_cpsw_tx_buf_type buf_type;
 	struct am65_cpsw_tx_swdata *swdata;
 	struct cppi5_host_desc_t *desc_tx;
+	struct xsk_buff_pool *xsk_pool;
 	struct xdp_frame *xdpf;
 	struct sk_buff *skb;
 
 	desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma);
 	swdata = cppi5_hdesc_get_swdata(desc_tx);
 	buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
-	if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
+	switch (buf_type) {
+	case AM65_CPSW_TX_BUF_TYPE_SKB:
 		skb = swdata->skb;
 		dev_kfree_skb_any(skb);
-	} else {
+		break;
+	case AM65_CPSW_TX_BUF_TYPE_XDP_TX:
+	case AM65_CPSW_TX_BUF_TYPE_XDP_NDO:
 		xdpf = swdata->xdpf;
 		xdp_return_frame(xdpf);
+		break;
+	case AM65_CPSW_TX_BUF_TYPE_XSK_TX:
+		xsk_pool = swdata->xsk_pool;
+		xsk_tx_completed(xsk_pool, 1);
+		break;
+	default:
+		break;
 	}
 
-	am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+	am65_cpsw_nuss_xmit_free(tx_chn, desc_tx, buf_type);
 }
 
 static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
@@ -1189,6 +1219,82 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
 	return ret;
 }
 
+static int am65_cpsw_xsk_xmit_zc(struct net_device *ndev,
+				 struct am65_cpsw_tx_chn *tx_chn)
+{
+	struct am65_cpsw_common *common = tx_chn->common;
+	struct xsk_buff_pool *pool = tx_chn->xsk_pool;
+	struct xdp_desc *xdp_descs = pool->tx_descs;
+	struct cppi5_host_desc_t *host_desc;
+	struct am65_cpsw_tx_swdata *swdata;
+	dma_addr_t dma_desc, dma_buf;
+	int num_tx = 0, pkt_len;
+	int descs_avail, ret;
+	int i;
+
+	descs_avail = k3_cppi_desc_pool_avail(tx_chn->desc_pool);
+	/* ensure that TX ring is not filled up by XDP, always MAX_SKB_FRAGS
+	 * will be available for normal TX path and queue is stopped there if
+	 * necessary
+	 */
+	if (descs_avail <= MAX_SKB_FRAGS)
+		return 0;
+
+	descs_avail -= MAX_SKB_FRAGS;
+	descs_avail = xsk_tx_peek_release_desc_batch(pool, descs_avail);
+
+	for (i = 0; i < descs_avail; i++) {
+		host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
+		if (unlikely(!host_desc))
+			break;
+
+		am65_cpsw_nuss_set_buf_type(tx_chn, host_desc,
+					    AM65_CPSW_TX_BUF_TYPE_XSK_TX);
+		dma_buf = xsk_buff_raw_get_dma(pool, xdp_descs[i].addr);
+		pkt_len = xdp_descs[i].len;
+		xsk_buff_raw_dma_sync_for_device(pool, dma_buf, pkt_len);
+
+		cppi5_hdesc_init(host_desc, CPPI5_INFO0_HDESC_EPIB_PRESENT,
+				 AM65_CPSW_NAV_PS_DATA_SIZE);
+		cppi5_hdesc_set_pkttype(host_desc, AM65_CPSW_CPPI_TX_PKT_TYPE);
+		cppi5_hdesc_set_pktlen(host_desc, pkt_len);
+		cppi5_desc_set_pktids(&host_desc->hdr, 0,
+				      AM65_CPSW_CPPI_TX_FLOW_ID);
+		cppi5_desc_set_tags_ids(&host_desc->hdr, 0,
+					tx_chn->xsk_port_id);
+
+		k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &dma_buf);
+		cppi5_hdesc_attach_buf(host_desc, dma_buf, pkt_len, dma_buf,
+				       pkt_len);
+
+		swdata = cppi5_hdesc_get_swdata(host_desc);
+		swdata->ndev = ndev;
+		swdata->xsk_pool = pool;
+
+		dma_desc = k3_cppi_desc_pool_virt2dma(tx_chn->desc_pool,
+						      host_desc);
+		if (AM65_CPSW_IS_CPSW2G(common)) {
+			ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn,
+						       host_desc, dma_desc);
+		} else {
+			spin_lock_bh(&tx_chn->lock);
+			ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn,
+						       host_desc, dma_desc);
+			spin_unlock_bh(&tx_chn->lock);
+		}
+
+		if (ret) {
+			ndev->stats.tx_errors++;
+			k3_cppi_desc_pool_free(tx_chn->desc_pool, host_desc);
+			break;
+		}
+
+		num_tx++;
+	}
+
+	return num_tx;
+}
+
 static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
				  struct am65_cpsw_tx_chn *tx_chn,
				  struct xdp_frame *xdpf,
@@ -1716,15 +1822,19 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 	struct netdev_queue *netif_txq;
 	unsigned int total_bytes = 0;
 	struct net_device *ndev;
+	int xsk_frames_done = 0;
 	struct xdp_frame *xdpf;
 	unsigned int pkt_len;
 	struct sk_buff *skb;
 	dma_addr_t desc_dma;
 	int res, num_tx = 0;
+	int xsk_tx = 0;
 
 	tx_chn = &common->tx_chns[chn];
 
 	while (true) {
+		pkt_len = 0;
+
 		if (!single_port)
 			spin_lock(&tx_chn->lock);
 		res = k3_udma_glue_pop_tx_chn(tx_chn->tx_chn, &desc_dma);
@@ -1746,25 +1856,36 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 		swdata = cppi5_hdesc_get_swdata(desc_tx);
 		ndev = swdata->ndev;
 		buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
-		if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
+		switch (buf_type) {
+		case AM65_CPSW_TX_BUF_TYPE_SKB:
 			skb = swdata->skb;
 			am65_cpts_tx_timestamp(tx_chn->common->cpts, skb);
 			pkt_len = skb->len;
 			napi_consume_skb(skb, budget);
-		} else {
+			total_bytes += pkt_len;
+			break;
+		case AM65_CPSW_TX_BUF_TYPE_XDP_TX:
+		case AM65_CPSW_TX_BUF_TYPE_XDP_NDO:
 			xdpf = swdata->xdpf;
 			pkt_len = xdpf->len;
+			total_bytes += pkt_len;
 			if (buf_type == AM65_CPSW_TX_BUF_TYPE_XDP_TX)
 				xdp_return_frame_rx_napi(xdpf);
 			else
 				xdp_return_frame(xdpf);
+			break;
+		case AM65_CPSW_TX_BUF_TYPE_XSK_TX:
+			pkt_len = cppi5_hdesc_get_pktlen(desc_tx);
+			xsk_frames_done++;
+			break;
+		default:
+			break;
 		}
 
-		total_bytes += pkt_len;
 		num_tx++;
-		am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+		am65_cpsw_nuss_xmit_free(tx_chn, desc_tx, buf_type);
 		dev_sw_netstats_tx_add(ndev, 1, pkt_len);
-		if (!single_port) {
+		if (!single_port && buf_type != AM65_CPSW_TX_BUF_TYPE_XSK_TX) {
 			/* as packets from multi ports can be interleaved
 			 * on the same channel, we have to figure out the
 			 * port/queue at every packet and report it/wake queue.
@@ -1781,6 +1902,19 @@ static int am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common,
 		am65_cpsw_nuss_tx_wake(tx_chn, ndev, netif_txq);
 	}
 
+	if (tx_chn->xsk_pool) {
+		if (xsk_frames_done)
+			xsk_tx_completed(tx_chn->xsk_pool, xsk_frames_done);
+
+		if (xsk_uses_need_wakeup(tx_chn->xsk_pool))
+			xsk_set_tx_need_wakeup(tx_chn->xsk_pool);
+
+		ndev = common->ports[tx_chn->xsk_port_id].ndev;
+		netif_txq = netdev_get_tx_queue(ndev, chn);
+		txq_trans_cond_update(netif_txq);
+		xsk_tx = am65_cpsw_xsk_xmit_zc(ndev, tx_chn);
+	}
+
 	dev_dbg(dev, "%s:%u pkt:%d\n", __func__, chn, num_tx);
 
 	return num_tx;
@@ -1791,7 +1925,11 @@ static enum hrtimer_restart am65_cpsw_nuss_tx_timer_callback(struct hrtimer *tim
 	struct am65_cpsw_tx_chn *tx_chns =
			container_of(timer, struct am65_cpsw_tx_chn, tx_hrtimer);
 
-	enable_irq(tx_chns->irq);
+	if (tx_chns->irq_disabled) {
+		tx_chns->irq_disabled = false;
+		enable_irq(tx_chns->irq);
+	}
+
 	return HRTIMER_NORESTART;
 }
 
@@ -1811,7 +1949,8 @@ static int am65_cpsw_nuss_tx_poll(struct napi_struct *napi_tx, int budget)
 			hrtimer_start(&tx_chn->tx_hrtimer,
				      ns_to_ktime(tx_chn->tx_pace_timeout),
				      HRTIMER_MODE_REL_PINNED);
-		} else {
+		} else if (tx_chn->irq_disabled) {
+			tx_chn->irq_disabled = false;
 			enable_irq(tx_chn->irq);
 		}
 	}
@@ -1834,6 +1973,7 @@ static irqreturn_t am65_cpsw_nuss_tx_irq(int irq, void *dev_id)
 {
 	struct am65_cpsw_tx_chn *tx_chn = dev_id;
 
+	tx_chn->irq_disabled = true;
 	disable_irq_nosync(irq);
 	napi_schedule(&tx_chn->napi_tx);
 
@@ -1998,14 +2138,14 @@ static netdev_tx_t am65_cpsw_nuss_ndo_slave_xmit(struct sk_buff *skb,
 	return NETDEV_TX_OK;
 
 err_free_descs:
-	am65_cpsw_nuss_xmit_free(tx_chn, first_desc);
+	am65_cpsw_nuss_xmit_free(tx_chn, first_desc, AM65_CPSW_TX_BUF_TYPE_SKB);
 err_free_skb:
 	ndev->stats.tx_dropped++;
 	dev_kfree_skb_any(skb);
 	return NETDEV_TX_OK;
 
 busy_free_descs:
-	am65_cpsw_nuss_xmit_free(tx_chn, first_desc);
+	am65_cpsw_nuss_xmit_free(tx_chn, first_desc, AM65_CPSW_TX_BUF_TYPE_SKB);
 busy_stop_q:
 	netif_tx_stop_queue(netif_txq);
 	return NETDEV_TX_BUSY;
@@ -2259,6 +2399,7 @@ static const struct net_device_ops am65_cpsw_nuss_netdev_ops = {
 	.ndo_xdp_xmit		= am65_cpsw_ndo_xdp_xmit,
 	.ndo_hwtstamp_get	= am65_cpsw_nuss_hwtstamp_get,
 	.ndo_hwtstamp_set	= am65_cpsw_nuss_hwtstamp_set,
+	.ndo_xsk_wakeup		= am65_cpsw_xsk_wakeup,
 };
 
 static void am65_cpsw_disable_phy(struct phy *phy)
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 2bf4d12f92764706719cc1d65001dbb53da58c38..ac2d9d32e95b932665131a317df8316cb6cb7f96 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -72,6 +72,7 @@ enum am65_cpsw_tx_buf_type {
 	AM65_CPSW_TX_BUF_TYPE_SKB,
 	AM65_CPSW_TX_BUF_TYPE_XDP_TX,
 	AM65_CPSW_TX_BUF_TYPE_XDP_NDO,
+	AM65_CPSW_TX_BUF_TYPE_XSK_TX,
 };
 
 struct am65_cpsw_host {
@@ -97,6 +98,9 @@ struct am65_cpsw_tx_chn {
 	unsigned char dsize_log2;
 	char tx_chn_name[128];
 	u32 rate_mbps;
+	struct xsk_buff_pool *xsk_pool;
+	int xsk_port_id;
+	bool irq_disabled;
 };
 
 struct am65_cpsw_rx_flow {
@@ -118,6 +122,7 @@ struct am65_cpsw_tx_swdata {
 	union {
 		struct sk_buff *skb;
 		struct xdp_frame *xdpf;
+		struct xsk_buff_pool *xsk_pool;
 	};
 };
 
diff --git a/drivers/net/ethernet/ti/am65-cpsw-xdp.c b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
index 0e37c27f77720713430a3e70f6c4b3dfb048cfc0..9adf13056f70fea36d9aeac157b7da0cae2c011e 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-xdp.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-xdp.c
@@ -109,8 +109,10 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
 	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
 	struct am65_cpsw_rx_flow *rx_flow;
+	struct am65_cpsw_tx_chn *tx_ch;
 
 	rx_flow = &common->rx_chns.flows[qid];
+	tx_ch = &common->tx_chns[qid];
 
 	if (!netif_running(ndev) || !netif_carrier_ok(ndev))
 		return -ENETDOWN;
@@ -121,9 +123,16 @@ int am65_cpsw_xsk_wakeup(struct net_device *ndev, u32 qid, u32 flags)
 	if (qid >= common->rx_ch_num_flows || qid >= common->tx_ch_num)
 		return -EINVAL;
 
-	if (!rx_flow->xsk_pool)
+	if (!rx_flow->xsk_pool && !tx_ch->xsk_pool)
 		return -EINVAL;
 
+	if (flags & XDP_WAKEUP_TX) {
+		if (!napi_if_scheduled_mark_missed(&tx_ch->napi_tx)) {
+			if (likely(napi_schedule_prep(&tx_ch->napi_tx)))
+				__napi_schedule(&tx_ch->napi_tx);
+		}
+	}
+
 	if (flags & XDP_WAKEUP_RX) {
 		if (!napi_if_scheduled_mark_missed(&rx_flow->napi_rx)) {
 			if (likely(napi_schedule_prep(&rx_flow->napi_rx)))
-- 
2.34.1
b=UCC1CU8JIBevsJS8nALyNvAKJUMRp06EI4CtDAz1iN68f4G7nwHBRHo0PFeGHRj7JOdmV+vB41w7wZvnJBgdUJL0TCT8TfY45vH6cMi/Gir8UuRlNUEO5nHcN0YJr65GA/F2xDa4OT2yPOtZxDLnGJ6qG9O/12/RzNNZZwue3Vw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=AE/QsBKe; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="AE/QsBKe" Received: by smtp.kernel.org (Postfix) with ESMTPSA id DDBEBC4CEF7; Sun, 9 Nov 2025 21:39:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1762724347; bh=CN1oEXUSojkUBmaZlWlvuSVJlA/WWlAbJVlcXldCc2I=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=AE/QsBKeL4COZDbWNKyPSb/Yeb4U/0zyaGDjqv6fz7Yf7CZDZasUkJDL9fz2jvP1A guxGCe+ViOhnrhhSbMo4ARJjAB+LGMNHwSNoCASXHrfHftFfwDOYyMV2rffepGqX2J m7jLtHkNq9h6MYUTUl69gmK4j9+mGkgMkaCOoehqY30OB9SDoWRZxrgJQVw0+oB4dI RphQ7zt0ZYEtu9OfwHOQb7mg+M/1ZD57U2qYKkvWDkJHM4JUmU4GYhonLk3LF87t16 4EjtEkee052stjebv+wipougewFrKDaIRwEKHF/Otk1APFTM/YBsZwyIEKYz+bgoS2 KTVvo0R0iTAFQ== From: Roger Quadros Date: Sun, 09 Nov 2025 23:37:56 +0200 Subject: [PATCH net-next v2 6/7] net: ethernet: ti: am65-cpsw: enable zero copy in XDP features Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20251109-am65-cpsw-xdp-zc-v2-6-858f60a09d12@kernel.org> References: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org> In-Reply-To: <20251109-am65-cpsw-xdp-zc-v2-0-858f60a09d12@kernel.org> To: Siddharth Vadapalli , Andrew Lunn , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , Stanislav Fomichev , Simon Horman Cc: srk@ti.com, Meghana Malladi , netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Roger Quadros X-Mailer: b4 0.14.2 X-Developer-Signature: v=1; a=openpgp-sha256; l=1060; i=rogerq@kernel.org; h=from:subject:message-id; bh=CN1oEXUSojkUBmaZlWlvuSVJlA/WWlAbJVlcXldCc2I=; b=owEBbQKS/ZANAwAIAdJaa9O+djCTAcsmYgBpEQna96yZlfyKuLZK25llM6B9VnnOag/jnGh9D 91Dvo+PqVeJAjMEAAEIAB0WIQRBIWXUTJ9SeA+rEFjSWmvTvnYwkwUCaREJ2gAKCRDSWmvTvnYw k+LtEADUgCFmF6aZIvEYu7sabE8u6Bp5lcZDbwYqnRBpvb8Vj4koNaounoTMuRmXWpOpUPIX/Xm TWvBAML53q3C79zHyatbDkcp0TymeiAQybxky5gS61YJLI3Ti947kKm/590u+tUsSFcKUeYe/Ny Fz3LFHW0Bm/YcGuR7xytB4DAvhra8VO5LhQXVwmIbewxf0OG8R1k8i8jzfY3CUQbRTVeuuUY+j2 F+GcdAD3WqM9CIKrM1uhBJ9gAhstsl7XB4pVKUMosjJ5NnSwyTrUiDEkrUmx0hgQWtryXu6Yhjd bWW2+mYQmPPPssI6C3UROAD9j9sLnufwsY9mDzgQy/ko0jx2kH9KlQ2cp7p7dkIrukRVTXMTGiy lp1EgHGhUdjJszxldCwV08sx5O+IecOKlahvuU5GfGDrex0vRiTWe5AP7djJBCqAEySr0hHaG6F MtdyUDcfH8vbZau0Jgvaez5btY1MziZwQ0WTxZd2VROvdthaDeFR4pJGI9+ZnfmpJ8yjUCFyp7O mgFnof9Vp9f9TJSQhnFYPTQWOqUFsquvYoKvU1pszELddV2Uh9ncX3g9g1xZIECcJOH8suiIUVs nJ2cnfLqBvJkF+M4sGBcXkPT1DJ8yZcI2ODkZs5Pmb6RWfxOWysx3rveW7VM7+V77rq8TskBHQq ms57HQrnSzz98xQ== X-Developer-Key: i=rogerq@kernel.org; a=openpgp; fpr=412165D44C9F52780FAB1058D25A6BD3BE763093 Now that we have the plumbing in for XDP zero copy RX and TX, enable the zero copy feature flag. 
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 2e06e7df23ad5249786d081e51434f87dd2a76b5..9d1048eea7e4734873676026906e07babf0345f5 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -3210,7 +3210,8 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
 				  NETIF_F_HW_VLAN_CTAG_FILTER;
 	port->ndev->xdp_features = NETDEV_XDP_ACT_BASIC |
 				   NETDEV_XDP_ACT_REDIRECT |
-				   NETDEV_XDP_ACT_NDO_XMIT;
+				   NETDEV_XDP_ACT_NDO_XMIT |
+				   NETDEV_XDP_ACT_XSK_ZEROCOPY;
 	port->ndev->vlan_features |= NETIF_F_SG;
 	port->ndev->netdev_ops = &am65_cpsw_nuss_netdev_ops;
 	port->ndev->ethtool_ops = &am65_cpsw_ethtool_ops_slave;
-- 
2.34.1

From nobody Fri Dec 19 12:48:28 2025
From: Roger Quadros
Date: Sun, 09 Nov 2025 23:37:57 +0200
Subject: [PATCH net-next v2 7/7] net: ethernet: ti: am65-cpsw: Fix clearing of irq_disabled flag in rx_poll
Message-Id: <20251109-am65-cpsw-xdp-zc-v2-7-858f60a09d12@kernel.org>

In am65_cpsw_nuss_rx_poll() there is a possibility that the irq_disabled
flag is cleared but the IRQ is not enabled. This patch fixes that by
clearing the irq_disabled flag only when actually enabling the IRQ.
Fixes: da70d184a8c3 ("net: ethernet: ti: am65-cpsw: Introduce multi queue Rx")
Signed-off-by: Roger Quadros
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 9d1048eea7e4734873676026906e07babf0345f5..c0f891a91d7471364bd4c8b7d82da9967f1753b8 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -1778,15 +1778,13 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *napi_rx, int budget)
 	dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget);
 
 	if (num_rx < budget && napi_complete_done(napi_rx, num_rx)) {
-		if (flow->irq_disabled) {
+		if (unlikely(flow->rx_pace_timeout)) {
+			hrtimer_start(&flow->rx_hrtimer,
+				      ns_to_ktime(flow->rx_pace_timeout),
+				      HRTIMER_MODE_REL_PINNED);
+		} else if (flow->irq_disabled) {
 			flow->irq_disabled = false;
-			if (unlikely(flow->rx_pace_timeout)) {
-				hrtimer_start(&flow->rx_hrtimer,
-					      ns_to_ktime(flow->rx_pace_timeout),
-					      HRTIMER_MODE_REL_PINNED);
-			} else {
-				enable_irq(flow->irq);
-			}
+			enable_irq(flow->irq);
 		}
 	}
 
-- 
2.34.1