From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:24 +0100
Subject: [PATCH net-next 1/8] net: macb: make rx error messages rate-limited
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260304-macb-xsk-v1-1-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 Stanislav Fomichev, Richard Cochran
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Vladimir Kondratiev, Gregory CLEMENT,
 Benoît Monin, Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier,
 Théo Lebrun
X-Mailer: b4 0.14.3

If Rx codepath error messages trigger, they do not interrupt reception.
The kernel log gets spammed, we lose useful history and everything crawls
to a halt. Instead, make them rate-limited to keep old useful information
in the log and keep the system responsive.

No netdev_*_ratelimited() variants exist, so we switch to dev_*().

Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index a79daad275ba..ab73d1a522c2 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1303,11 +1303,12 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 static inline int gem_rx_data_len(struct macb *bp, struct macb_queue *queue,
 				  u32 desc_ctrl, bool rx_sof, bool rx_eof)
 {
+	struct device *dev = &bp->pdev->dev;
 	int len;
 
 	if (unlikely(!rx_sof && !queue->skb)) {
-		netdev_err(bp->dev,
-			   "Received non-starting frame while expecting a starting one\n");
+		dev_err_ratelimited(dev,
+				    "Received non-starting frame while expecting a starting one\n");
 		return -1;
 	}
 
@@ -1322,7 +1323,7 @@ static inline int gem_rx_data_len(struct macb *bp, struct macb_queue *queue,
 
 	if (rx_eof && !rx_sof) {
 		if (unlikely(queue->skb->len > len)) {
-			netdev_err(bp->dev, "Unexpected frame len: %d\n", len);
+			dev_err_ratelimited(dev, "Unexpected frame len: %d\n", len);
 			return -1;
 		}
 
@@ -1382,8 +1383,8 @@ static int gem_rx_refill(struct macb_queue *queue, bool napi)
 					    gem_total_rx_buffer_size(bp),
 					    gfp_alloc | __GFP_NOWARN);
 		if (!page) {
-			netdev_err(bp->dev,
-				   "Unable to allocate rx buffer\n");
+			dev_err_ratelimited(&bp->pdev->dev,
+					    "Unable to allocate rx buffer\n");
 			err = -ENOMEM;
 			break;
 		}
@@ -1666,8 +1667,8 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 
 		buff_head = queue->rx_buff[entry];
 		if (unlikely(!buff_head)) {
-			netdev_err(bp->dev,
-				   "inconsistent Rx descriptor chain\n");
+			dev_err_ratelimited(&bp->pdev->dev,
+					    "inconsistent Rx descriptor chain\n");
 			bp->dev->stats.rx_dropped++;
 			queue->stats.rx_dropped++;
 			break;
-- 
2.53.0

From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:25 +0100
Subject: [PATCH net-next 2/8] net: macb: account for stats in Rx XDP codepaths
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260304-macb-xsk-v1-2-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
 John Fastabend, Stanislav Fomichev, Richard Cochran
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Vladimir Kondratiev, Gregory CLEMENT,
 Benoît Monin, Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier,
 Théo Lebrun
X-Mailer: b4 0.14.3

gem_xdp_run() returns an action. With respect to stats, we land in three
different cases:

- Packet is handed to the stack (XDP_PASS), turns into an SKB and gets
  accounted for below in gem_rx(). No fix needed here.
- Packet is dropped (XDP_DROP|ABORTED): we must increment the dropped
  counter. Missing; add it.
- Packet is passed along (XDP_TX|REDIRECT): we must increment the bytes &
  packets counters. Missing; add it.

Along the way, use local variables to store rx_bytes, rx_packets and
rx_dropped, then update the stats only once at the end of gem_rx(). This
is simpler because all three stats must be modified on a per-interface and
per-queue basis.
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 47 +++++++++++++++++++++++---------
 1 file changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index ab73d1a522c2..1aa90499343a 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1627,6 +1627,7 @@ static u32 gem_xdp_run(struct macb_queue *queue, void *buff_head,
 static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		  int budget)
 {
+	unsigned int packets = 0, dropped = 0, bytes = 0;
 	struct skb_shared_info *shinfo;
 	struct macb *bp = queue->bp;
 	struct macb_dma_desc *desc;
@@ -1669,8 +1670,7 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		if (unlikely(!buff_head)) {
 			dev_err_ratelimited(&bp->pdev->dev,
 					    "inconsistent Rx descriptor chain\n");
-			bp->dev->stats.rx_dropped++;
-			queue->stats.rx_dropped++;
+			dropped++;
 			break;
 		}
 
@@ -1700,11 +1700,29 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		if (last_frame) {
 			ret = gem_xdp_run(queue, buff_head, &data_len, &headroom,
 					  addr - gem_rx_pad(bp));
-			if (ret == XDP_REDIRECT)
-				xdp_flush = true;
 
-			if (ret != XDP_PASS)
-				goto next_frame;
+			switch (ret) {
+			/* continue to SKB handling codepath */
+			case XDP_PASS:
+				break;
+
+			/* dropped packet cases */
+			case XDP_ABORTED:
+			case XDP_DROP:
+				dropped++;
+				queue->rx_buff[entry] = NULL;
+				continue;
+
+			/* redirect/tx cases */
+			case XDP_REDIRECT:
+				xdp_flush = true;
+				fallthrough;
+			case XDP_TX:
+				packets++;
+				bytes += data_len;
+				queue->rx_buff[entry] = NULL;
+				continue;
+			}
 		}
 
 		queue->skb = napi_build_skb(buff_head, gem_total_rx_buffer_size(bp));
@@ -1743,10 +1761,8 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 
 		/* now everything is ready for receiving packet */
 		if (last_frame) {
-			bp->dev->stats.rx_packets++;
-			queue->stats.rx_packets++;
-			bp->dev->stats.rx_bytes += queue->skb->len;
-			queue->stats.rx_bytes += queue->skb->len;
+			packets++;
+			bytes += queue->skb->len;
 
 			queue->skb->protocol = eth_type_trans(queue->skb, bp->dev);
 			skb_checksum_none_assert(queue->skb);
@@ -1769,7 +1785,6 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 			queue->skb = NULL;
 		}
 
-next_frame:
 		queue->rx_buff[entry] = NULL;
 		continue;
 
@@ -1784,11 +1799,17 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 				   virt_to_head_page(buff_head), false);
 
-		bp->dev->stats.rx_dropped++;
-		queue->stats.rx_dropped++;
+		dropped++;
 		queue->rx_buff[entry] = NULL;
 	}
 
+	bp->dev->stats.rx_packets += packets;
+	queue->stats.rx_packets += packets;
+	bp->dev->stats.rx_dropped += dropped;
+	queue->stats.rx_dropped += dropped;
+	bp->dev->stats.rx_bytes += bytes;
+	queue->stats.rx_bytes += bytes;
+
 	if (xdp_flush)
 		xdp_do_flush();
 
-- 
2.53.0

From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:26 +0100
Subject: [PATCH net-next 3/8] net: macb: account for stats
 in Tx XDP codepaths
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260304-macb-xsk-v1-3-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 Stanislav Fomichev, Richard Cochran
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Vladimir Kondratiev, Gregory CLEMENT,
 Benoît Monin, Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier,
 Théo Lebrun
X-Mailer: b4 0.14.3

The macb_tx_complete() processing loop assumes a packet is composed of
multiple frames and is organised around this idea. However, this is only
true in the SKB case, i.e. `tx_buff->type == MACB_TYPE_SKB`.

Rework macb_tx_complete() to bring the tx_buff->type switch statement
outside; the frame iteration loop now lives only inside the SKB case.

Fix Tx XDP stats that were not accounted for in the XDP_TX|NDO cases. Only
increment statistics once per macb_tx_complete() call rather than once per
frame.

The `bytes` and `packets` stack variables now get incremented for
completed XDP XMIT/TX packets. This implies the DQL subsystem, through
netdev_tx_completed_queue(), now gets notified of those packets
completing. We must therefore also report those bytes as sent, using
netdev_tx_sent_queue(), in macb_xdp_submit_frame(), which is called by:
- Rx XDP programs returning action XDP_TX and,
- the .ndo_xdp_xmit() callback.

Incrementing `packets` also implies XDP packets are accounted for in our
NAPI budget calculation.
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 71 +++++++++++++++------------------
 1 file changed, 33 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 1aa90499343a..c1677f1d8f23 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1212,7 +1212,7 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 {
 	struct macb *bp = queue->bp;
 	unsigned long flags;
-	int skb_packets = 0;
+	int xsk_frames = 0;
 	unsigned int tail;
 	unsigned int head;
 	u16 queue_index;
@@ -1227,7 +1227,6 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 		struct macb_tx_buff *tx_buff;
 		struct macb_dma_desc *desc;
 		struct sk_buff *skb;
-		void *data = NULL;
 		u32 ctrl;
 
 		desc = macb_tx_desc(queue, tail);
@@ -1243,52 +1242,46 @@ static int macb_tx_complete(struct macb_queue *queue, int budget)
 		if (!(ctrl & MACB_BIT(TX_USED)))
 			break;
 
-		/* Process all buffers of the current transmitted frame */
-		for (;; tail++) {
-			tx_buff = macb_tx_buff(queue, tail);
+		tx_buff = macb_tx_buff(queue, tail);
 
-			if (tx_buff->type != MACB_TYPE_SKB) {
-				data = tx_buff->ptr;
-				packets++;
-				goto unmap;
+		switch (tx_buff->type) {
+		case MACB_TYPE_SKB:
+			/* Process all buffers of the current transmitted frame */
+			while (!tx_buff->ptr) {
+				macb_tx_unmap(bp, tx_buff, budget);
+				tail++;
+				tx_buff = macb_tx_buff(queue, tail);
 			}
 
-			/* First, update TX stats if needed */
-			if (tx_buff->ptr) {
-				data = tx_buff->ptr;
-				skb = tx_buff->ptr;
+			skb = tx_buff->ptr;
 
-				if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
-				    !ptp_one_step_sync(skb))
-					gem_ptp_do_txstamp(bp, skb, desc);
+			if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+			    !ptp_one_step_sync(skb))
+				gem_ptp_do_txstamp(bp, skb, desc);
 
-				netdev_vdbg(bp->dev, "skb %u (data %p) TX complete\n",
-					    macb_tx_ring_wrap(bp, tail),
-					    skb->data);
-				bp->dev->stats.tx_packets++;
-				queue->stats.tx_packets++;
-				bp->dev->stats.tx_bytes += skb->len;
-				queue->stats.tx_bytes += skb->len;
-				skb_packets++;
-				packets++;
-				bytes += skb->len;
-			}
+			netdev_vdbg(bp->dev, "skb %u (data %p) TX complete\n",
+				    macb_tx_ring_wrap(bp, tail),
+				    skb->data);
+			bytes += skb->len;
+			break;
 
-unmap:
-			/* Now we can safely release resources */
-			macb_tx_unmap(bp, tx_buff, budget);
-
-			/* data is set only for the last buffer of the frame.
-			 * WARNING: at this point the buffer has been freed by
-			 * macb_tx_unmap().
-			 */
-			if (data)
-				break;
+		case MACB_TYPE_XDP_TX:
+		case MACB_TYPE_XDP_NDO:
+			bytes += tx_buff->size;
+			break;
 		}
+
+		packets++;
+		macb_tx_unmap(bp, tx_buff, budget);
 	}
 
+	bp->dev->stats.tx_packets += packets;
+	queue->stats.tx_packets += packets;
+	bp->dev->stats.tx_bytes += bytes;
+	queue->stats.tx_bytes += bytes;
+
 	netdev_tx_completed_queue(netdev_get_tx_queue(bp->dev, queue_index),
-				  skb_packets, bytes);
+				  packets, bytes);
 
 	queue->tx_tail = tail;
 	if (__netif_subqueue_stopped(bp->dev, queue_index) &&
@@ -1529,6 +1522,8 @@ static int macb_xdp_submit_frame(struct macb *bp, struct xdp_frame *xdpf,
 	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
 	spin_unlock(&bp->lock);
 
+	netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index), xdpf->len);
+
 	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
 		netif_stop_subqueue(dev, queue_index);
 
-- 
2.53.0

From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:27 +0100
Subject: [PATCH net-next 4/8] net: macb: drop handling of recycled buffers
 in gem_rx_refill()
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260304-macb-xsk-v1-4-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Alexei Starovoitov,
 Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
 Stanislav Fomichev, Richard Cochran
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 bpf@vger.kernel.org, Vladimir Kondratiev, Gregory CLEMENT,
 Benoît Monin, Tawfik Bayouk, Thomas Petazzoni, Maxime Chevallier,
 Théo Lebrun
X-Mailer: b4 0.14.3

The refill operation supports detecting whether a buffer is already
present in a slot; if it is, it updates the slot's DMA descriptor, reusing
the same buffer. This behavior can be dropped: all codepaths of gem_rx()
that let a buffer lay around to be reused by refill have disappeared.
Said another way: every time queue->rx_tail is incremented,
queue->rx_buff[entry] is set to NULL.

On the same occasion, move the `gfp_alloc` assignment out of the loop and
into the variable declarations. Its value is constant across the
function's lifetime.
Also fix a tiny alignment issue with the while statement.

Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 64 ++++++++++++++----------------
 1 file changed, 28 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index c1677f1d8f23..ed94f9f0894b 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1351,18 +1351,18 @@ static unsigned int gem_total_rx_buffer_size(struct macb *bp)
 
 static int gem_rx_refill(struct macb_queue *queue, bool napi)
 {
+	gfp_t gfp_alloc = napi ? GFP_ATOMIC : GFP_KERNEL;
 	struct macb *bp = queue->bp;
 	struct macb_dma_desc *desc;
 	unsigned int entry;
 	struct page *page;
 	dma_addr_t paddr;
-	gfp_t gfp_alloc;
 	int err = 0;
 	void *data;
 	int offset;
 
 	while (CIRC_SPACE(queue->rx_prepared_head, queue->rx_tail,
-			bp->rx_ring_size) > 0) {
+			  bp->rx_ring_size) > 0) {
 		entry = macb_rx_ring_wrap(bp, queue->rx_prepared_head);
 
 		/* Make hw descriptor updates visible to CPU */
@@ -1370,41 +1370,33 @@ static int gem_rx_refill(struct macb_queue *queue, bool napi)
 
 		desc = macb_rx_desc(queue, entry);
 
-		if (!queue->rx_buff[entry]) {
-			gfp_alloc = napi ? GFP_ATOMIC : GFP_KERNEL;
-			page = page_pool_alloc_frag(queue->page_pool, &offset,
-						    gem_total_rx_buffer_size(bp),
-						    gfp_alloc | __GFP_NOWARN);
-			if (!page) {
-				dev_err_ratelimited(&bp->pdev->dev,
-						    "Unable to allocate rx buffer\n");
-				err = -ENOMEM;
-				break;
-			}
-
-			paddr = page_pool_get_dma_addr(page) +
-				gem_rx_pad(bp) + offset;
-
-			dma_sync_single_for_device(&bp->pdev->dev,
-						   paddr, bp->rx_buffer_size,
-						   page_pool_get_dma_dir(queue->page_pool));
-
-			data = page_address(page) + offset;
-			queue->rx_buff[entry] = data;
-
-			if (entry == bp->rx_ring_size - 1)
-				paddr |= MACB_BIT(RX_WRAP);
-			desc->ctrl = 0;
-			/* Setting addr clears RX_USED and allows reception,
-			 * make sure ctrl is cleared first to avoid a race.
-			 */
-			dma_wmb();
-			macb_set_addr(bp, desc, paddr);
-		} else {
-			desc->ctrl = 0;
-			dma_wmb();
-			desc->addr &= ~MACB_BIT(RX_USED);
+		page = page_pool_alloc_frag(queue->page_pool, &offset,
+					    gem_total_rx_buffer_size(bp),
+					    gfp_alloc | __GFP_NOWARN);
+		if (!page) {
+			dev_err_ratelimited(&bp->pdev->dev,
+					    "Unable to allocate rx buffer\n");
+			err = -ENOMEM;
+			break;
 		}
+
+		paddr = page_pool_get_dma_addr(page) + gem_rx_pad(bp) + offset;
+
+		dma_sync_single_for_device(&bp->pdev->dev,
+					   paddr, bp->rx_buffer_size,
+					   page_pool_get_dma_dir(queue->page_pool));
+
+		data = page_address(page) + offset;
+		queue->rx_buff[entry] = data;
+
+		if (entry == bp->rx_ring_size - 1)
+			paddr |= MACB_BIT(RX_WRAP);
+		desc->ctrl = 0;
+		/* Setting addr clears RX_USED and allows reception,
+		 * make sure ctrl is cleared first to avoid a race.
+		 */
+		dma_wmb();
+		macb_set_addr(bp, desc, paddr);
 		queue->rx_prepared_head++;
 	}
 
-- 
2.53.0

From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:28 +0100
Subject: [PATCH net-next 5/8] net: macb: move
macb_xdp_submit_frame() body to helper function Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20260304-macb-xsk-v1-5-ba2ebe2bdaa3@bootlin.com> References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com> In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com> To: Nicolas Ferre , Claudiu Beznea , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Stanislav Fomichev , Richard Cochran Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Vladimir Kondratiev , Gregory CLEMENT , =?utf-8?q?Beno=C3=AEt_Monin?= , Tawfik Bayouk , Thomas Petazzoni , Maxime Chevallier , =?utf-8?q?Th=C3=A9o_Lebrun?= X-Mailer: b4 0.14.3 X-Last-TLS-Session-Version: TLSv1.3 Part of macb_xdp_submit_frame() is specific to the handling of an XDP buffer (pick a queue for emission, DMA map or sync, report emitted bytes), part is chitchat with hardware to update DMA descriptor and start transmit. Move the hardware specific code out of macb_xdp_submit_frame() into a macb_xdp_submit_buff() helper function. The goal is to make code reusable to support XSK buffers. The macb_xdp_submit_frame() body is modified slightly: we bring the dma_map_single() call outside of the queue->tx_ptr_lock critical section, to minimise its span. 
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 143 +++++++++++++++++------------
 1 file changed, 78 insertions(+), 65 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index ed94f9f0894b..65c2ec2a843c 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1208,6 +1208,52 @@ static bool ptp_one_step_sync(struct sk_buff *skb)
 	return false;
 }
 
+static void macb_xdp_submit_buff(struct macb *bp, unsigned int queue_index,
+				 struct macb_tx_buff buff)
+{
+	struct macb_queue *queue = &bp->queues[queue_index];
+	struct net_device *netdev = bp->dev;
+	struct macb_tx_buff *tx_buff;
+	struct macb_dma_desc *desc;
+	unsigned int next_head;
+	u32 ctrl;
+
+	next_head = queue->tx_head + 1;
+
+	ctrl = MACB_BIT(TX_USED);
+	desc = macb_tx_desc(queue, next_head);
+	desc->ctrl = ctrl;
+
+	desc = macb_tx_desc(queue, queue->tx_head);
+	tx_buff = macb_tx_buff(queue, queue->tx_head);
+	*tx_buff = buff;
+
+	ctrl = (u32)buff.size;
+	ctrl |= MACB_BIT(TX_LAST);
+
+	if (unlikely(macb_tx_ring_wrap(bp, queue->tx_head) == (bp->tx_ring_size - 1)))
+		ctrl |= MACB_BIT(TX_WRAP);
+
+	/* Set TX buffer descriptor */
+	macb_set_addr(bp, desc, buff.mapping);
+	/* desc->addr must be visible to hardware before clearing
+	 * 'TX_USED' bit in desc->ctrl.
+	 */
+	wmb();
+	desc->ctrl = ctrl;
+	queue->tx_head = next_head;
+
+	/* Make newly initialized descriptor visible to hardware */
+	wmb();
+
+	spin_lock(&bp->lock);
+	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
+	spin_unlock(&bp->lock);
+
+	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
+		netif_stop_subqueue(netdev, queue_index);
+}
+
 static int macb_tx_complete(struct macb_queue *queue, int budget)
 {
 	struct macb *bp = queue->bp;
@@ -1430,44 +1476,25 @@ static void discard_partial_frame(struct macb_queue *queue, unsigned int begin,
 }
 
 static int macb_xdp_submit_frame(struct macb *bp, struct xdp_frame *xdpf,
-				 struct net_device *dev, bool dma_map,
+				 struct net_device *netdev, bool dma_map,
 				 dma_addr_t addr)
 {
+	struct device *dev = &bp->pdev->dev;
 	enum macb_tx_buff_type buff_type;
-	struct macb_tx_buff *tx_buff;
 	int cpu = smp_processor_id();
-	struct macb_dma_desc *desc;
 	struct macb_queue *queue;
-	unsigned int next_head;
 	unsigned long flags;
 	dma_addr_t mapping;
 	u16 queue_index;
 	int err = 0;
-	u32 ctrl;
-
-	queue_index = cpu % bp->num_queues;
-	queue = &bp->queues[queue_index];
-	buff_type = dma_map ? MACB_TYPE_XDP_NDO : MACB_TYPE_XDP_TX;
-
-	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
-
-	/* This is a hard error, log it. */
-	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1) {
-		netif_stop_subqueue(dev, queue_index);
-		netdev_dbg(bp->dev, "tx_head = %u, tx_tail = %u\n",
-			   queue->tx_head, queue->tx_tail);
-		err = -ENOMEM;
-		goto unlock;
-	}
 
 	if (dma_map) {
-		mapping = dma_map_single(&bp->pdev->dev,
-					 xdpf->data,
-					 xdpf->len, DMA_TO_DEVICE);
-		if (unlikely(dma_mapping_error(&bp->pdev->dev, mapping))) {
-			err = -ENOMEM;
-			goto unlock;
-		}
+		mapping = dma_map_single(dev, xdpf->data, xdpf->len, DMA_TO_DEVICE);
+		err = dma_mapping_error(&bp->pdev->dev, mapping);
+		if (unlikely(err))
+			return err;
+
+		buff_type = MACB_TYPE_XDP_NDO;
 	} else {
 		/* progs can adjust the head. Sync and set the adjusted one.
 		 * This also implicitly takes into account ip alignment,
@@ -1476,52 +1503,38 @@ static int macb_xdp_submit_frame(struct macb *bp, struct xdp_frame *xdpf,
 		 */
 		mapping = addr + xdpf->headroom + sizeof(*xdpf);
 		dma_sync_single_for_device(&bp->pdev->dev, mapping, xdpf->len,
					   DMA_BIDIRECTIONAL);
+
+		buff_type = MACB_TYPE_XDP_TX;
 	}
 
-	next_head = queue->tx_head + 1;
+	queue_index = cpu % bp->num_queues;
+	queue = &bp->queues[queue_index];
 
-	ctrl = MACB_BIT(TX_USED);
-	desc = macb_tx_desc(queue, next_head);
-	desc->ctrl = ctrl;
+	spin_lock_irqsave(&queue->tx_ptr_lock, flags);
 
-	desc = macb_tx_desc(queue, queue->tx_head);
-	tx_buff = macb_tx_buff(queue, queue->tx_head);
-	tx_buff->ptr = xdpf;
-	tx_buff->type = buff_type;
-	tx_buff->mapping = dma_map ? mapping : 0;
-	tx_buff->size = xdpf->len;
-	tx_buff->mapped_as_page = false;
+	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1) {
+		/* This is a hard error, log it. */
+		netif_stop_subqueue(netdev, queue_index);
+		netdev_dbg(netdev, "tx_head = %u, tx_tail = %u\n",
+			   queue->tx_head, queue->tx_tail);
+		err = -ENOMEM;
+	} else {
+		macb_xdp_submit_buff(bp, queue_index, (struct macb_tx_buff){
+			.ptr = xdpf,
+			.mapping = dma_map ? mapping : 0,
+			.size = xdpf->len,
+			.mapped_as_page = false,
+			.type = buff_type,
+		});
 
-	ctrl = (u32)tx_buff->size;
-	ctrl |= MACB_BIT(TX_LAST);
+		netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index), xdpf->len);
+	}
 
-	if (unlikely(macb_tx_ring_wrap(bp, queue->tx_head) == (bp->tx_ring_size - 1)))
-		ctrl |= MACB_BIT(TX_WRAP);
-
-	/* Set TX buffer descriptor */
-	macb_set_addr(bp, desc, mapping);
-	/* desc->addr must be visible to hardware before clearing
-	 * 'TX_USED' bit in desc->ctrl.
-	 */
-	wmb();
-	desc->ctrl = ctrl;
-	queue->tx_head = next_head;
-
-	/* Make newly initialized descriptor visible to hardware */
-	wmb();
-
-	spin_lock(&bp->lock);
-	macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART));
-	spin_unlock(&bp->lock);
-
-	netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index), xdpf->len);
-
-	if (CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring_size) < 1)
-		netif_stop_subqueue(dev, queue_index);
-
-unlock:
 	spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
 
+	if (err && dma_map)
+		dma_unmap_single(dev, mapping, xdpf->len, DMA_TO_DEVICE);
+
 	return err;
 }
 
-- 
2.53.0

From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:29 +0100
Subject: [PATCH net-next 6/8] net: macb: add infrastructure for XSK buffer pool
Message-Id: <20260304-macb-xsk-v1-6-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>

Store an XSK buffer pool per queue, assigned through .ndo_bpf() with
command == XDP_SETUP_XSK_POOL.

We have no sequence upstream to disable a single queue, free its
buffers, refill it and re-enable the queue (without affecting other
queues). Therefore we protect our operation with an interface-wide
close and open.

Also, prepare the terrain with a .ndo_xsk_wakeup() operation that does
the pre-flight checks but is a no-op.

Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb.h      |  1 +
 drivers/net/ethernet/cadence/macb_main.c | 66 +++++++++++++++++++++++++++++-
 2 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
index 009a44e94726..a9e6f0289ecb 100644
--- a/drivers/net/ethernet/cadence/macb.h
+++ b/drivers/net/ethernet/cadence/macb.h
@@ -1278,6 +1278,7 @@ struct macb_queue {
 	struct napi_struct napi_rx;
 	struct queue_stats stats;
 	struct page_pool *page_pool;
+	struct xsk_buff_pool *xsk_pool;
 	struct sk_buff *skb;
 	struct xdp_rxq_info xdp_rxq;
 };

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 65c2ec2a843c..a72d59ffd1cf 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -38,6 +38,7 @@
 #include
 #include
 #include
+#include
 #include "macb.h"
 
 /* This structure is only used for MACB on SiFive FU540 devices */
@@ -1564,6 +1565,24 @@ static int gem_xdp_xmit(struct net_device *dev, int num_frame,
 	return xmitted;
 }
 
+static int gem_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
+{
+	struct macb *bp = netdev_priv(dev);
+	struct macb_queue *queue = &bp->queues[qid];
+
+	if (unlikely(!netif_carrier_ok(dev)))
+		return -ENETDOWN;
+
+	if (unlikely(qid >= bp->num_queues ||
+		     !rcu_access_pointer(bp->prog) ||
+		     !queue->xsk_pool))
+		return -ENXIO;
+
+	/* no-op, until rx/tx implement XSK support */
+
+	return 0;
+}
+
 static u32 gem_xdp_run(struct macb_queue *queue, void *buff_head,
 		       unsigned int *len, unsigned int *headroom,
 		       dma_addr_t addr)
@@ -3580,6 +3599,46 @@ static int gem_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
 	return err;
 }
 
+static int gem_xdp_setup_xsk_pool(struct net_device *netdev,
+				  struct xsk_buff_pool *pool, u16 qid)
+{
+	struct macb *bp = netdev_priv(netdev);
+	unsigned long attrs = DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING;
+	struct macb_queue *queue = &bp->queues[qid];
+	bool running = netif_running(netdev);
+	struct device *dev = &bp->pdev->dev;
+	int err = 0;
+
+	if (qid >= bp->num_queues)
+		return -EINVAL;
+
+	if (pool && queue->xsk_pool)
+		return -EBUSY;
+
+	if (running)
+		macb_close(netdev);
+
+	if (pool) {
+		err = xsk_pool_dma_map(pool, dev, attrs);
+		if (err)
+			netdev_err(netdev, "xdp: failed to DMA map XSK pool\n");
+		else
+			queue->xsk_pool = pool;
+	} else {
+		if (queue->xsk_pool)
+			xsk_pool_dma_unmap(queue->xsk_pool, attrs);
+		queue->xsk_pool = NULL;
+	}
+
+	if (running) {
+		int err_open = macb_open(netdev);
+
+		err = err ?: err_open;
+	}
+
+	return err;
+}
+
 static int gem_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
 	struct macb *bp = netdev_priv(dev);
@@ -3590,6 +3649,9 @@ static int gem_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return gem_xdp_setup(dev, xdp->prog, xdp->extack);
+	case XDP_SETUP_XSK_POOL:
+		return gem_xdp_setup_xsk_pool(dev, xdp->xsk.pool,
+					      xdp->xsk.queue_id);
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -4852,6 +4914,7 @@ static const struct net_device_ops macb_netdev_ops = {
 	.ndo_setup_tc = macb_setup_tc,
 	.ndo_bpf = gem_xdp,
 	.ndo_xdp_xmit = gem_xdp_xmit,
+	.ndo_xsk_wakeup = gem_xsk_wakeup,
 };
 
 /* Configure peripheral capabilities according to device tree
@@ -6156,7 +6219,8 @@ static int macb_probe(struct platform_device *pdev)
 
 		dev->xdp_features = NETDEV_XDP_ACT_BASIC |
 				    NETDEV_XDP_ACT_REDIRECT |
-				    NETDEV_XDP_ACT_NDO_XMIT;
+				    NETDEV_XDP_ACT_NDO_XMIT |
+				    NETDEV_XDP_ACT_XSK_ZEROCOPY;
 	}
 
 	netif_carrier_off(dev);

-- 
2.53.0

From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:30 +0100
Subject: [PATCH net-next 7/8] net: macb: add Rx zero-copy AF_XDP support
Message-Id: <20260304-macb-xsk-v1-7-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>

The Rx direction uses a page_pool instance as allocator, created at
open. If present, use our new xsk_buff_pool located at queue->xsk_pool
instead.

Allocate `struct xdp_buff` inside each queue->rx_buff[] slot instead of
raw pointers to the buffer start. Therefore, inside gem_rx() and
gem_xdp_run(), we get handed XDP buffers directly and need not allocate
one on the stack to pass to the XDP program.

As this is a fresh implementation, jump straight to batch alloc rather
than the xsk_buff_alloc() API. We need two batch alloc calls at
wrap-around.

--

At open, in gem_create_page_pool() renamed to gem_init_pool():
 - Stop creating a page_pool if we have an XSK one.
 - Report proper values to xdp_rxq.

While running, in gem_rx(), gem_rx_refill() and gem_xdp_run():
 - Refill buffer slots using one/two calls to xsk_buff_alloc_batch().
 - Support running the XDP program on a pre-allocated `struct xdp_buff`.
 - Adjust buffer free operations to support XSK. xsk_buff_free()
   replaces page_pool_put_full_page() if XSK is active.
 - End gem_rx() by marking the XSK need_wakeup flag.
 - When needed, wakeup is triggered by activating an IRQ from software,
   allowed by the hardware in the per-queue IMR register.

At close, in gem_free_rx_buffers():
 - Adjust the buffer free operation.
 - Don't destroy the page pool if we were in XSK mode.
Signed-off-by: Théo Lebrun
---
 drivers/net/ethernet/cadence/macb_main.c | 223 ++++++++++++++++++++++-------
 1 file changed, 161 insertions(+), 62 deletions(-)

diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index a72d59ffd1cf..ea1b0b8c4fab 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -1398,18 +1398,39 @@ static unsigned int gem_total_rx_buffer_size(struct macb *bp)
 
 static int gem_rx_refill(struct macb_queue *queue, bool napi)
 {
-	gfp_t gfp_alloc = napi ? GFP_ATOMIC : GFP_KERNEL;
 	struct macb *bp = queue->bp;
+	struct xdp_buff **xdp_buffs = (struct xdp_buff **)queue->rx_buff;
+	gfp_t gfp_alloc = napi ? GFP_ATOMIC : GFP_KERNEL;
+	struct xsk_buff_pool *xsk = queue->xsk_pool;
+	unsigned int size = bp->rx_ring_size;
 	struct macb_dma_desc *desc;
+	unsigned int offset;
 	unsigned int entry;
 	struct page *page;
 	dma_addr_t paddr;
 	int err = 0;
-	void *data;
-	int offset;
 
-	while (CIRC_SPACE(queue->rx_prepared_head, queue->rx_tail,
-			  bp->rx_ring_size) > 0) {
+	if (xsk) {
+		u32 head, tail, space_to_end, space_from_start, first_alloc;
+
+		/* CIRC_SPACE_TO_END() requires wrapping head & tail. */
+		head = macb_rx_ring_wrap(bp, queue->rx_prepared_head);
+		tail = macb_rx_ring_wrap(bp, queue->rx_tail);
+		space_to_end = CIRC_SPACE_TO_END(head, tail, size);
+		space_from_start = CIRC_SPACE(head, tail, size) - space_to_end;
+
+		first_alloc = xsk_buff_alloc_batch(xsk, xdp_buffs + head,
+						   space_to_end);
+
+		/*
+		 * Refill in two batch operations if we are wrapping around and
+		 * the first alloc batch gave us satisfaction.
+		 */
+		if (head + first_alloc == size && space_from_start)
+			xsk_buff_alloc_batch(xsk, xdp_buffs, space_from_start);
+	}
+
+	while (CIRC_SPACE(queue->rx_prepared_head, queue->rx_tail, size) > 0) {
 		entry = macb_rx_ring_wrap(bp, queue->rx_prepared_head);
 
 		/* Make hw descriptor updates visible to CPU */
@@ -1417,26 +1438,38 @@ static int gem_rx_refill(struct macb_queue *queue, bool napi)
 
 		desc = macb_rx_desc(queue, entry);
 
-		page = page_pool_alloc_frag(queue->page_pool, &offset,
-					    gem_total_rx_buffer_size(bp),
-					    gfp_alloc | __GFP_NOWARN);
-		if (!page) {
+		if (xsk) {
+			/* Remember xdp_buffs is an alias to queue->rx_buff. */
+			if (xdp_buffs[entry])
+				paddr = xsk_buff_xdp_get_dma(xdp_buffs[entry]);
+		} else {
+			page = page_pool_alloc_frag(queue->page_pool, &offset,
+						    gem_total_rx_buffer_size(bp),
+						    gfp_alloc | __GFP_NOWARN);
+			if (page) {
+				queue->rx_buff[entry] = page_address(page) +
							offset;
+				paddr = page_pool_get_dma_addr(page) +
					gem_rx_pad(bp) + offset;
+				dma_sync_single_for_device(&bp->pdev->dev,
							   paddr,
							   bp->rx_buffer_size,
							   page_pool_get_dma_dir(queue->page_pool));
+			}
+		}
+
+		/*
+		 * In case xsk_buff_alloc_batch() returned less than requested
+		 * or page_pool_alloc_frag() failed.
+		 */
+		if (!queue->rx_buff[entry]) {
 			dev_err_ratelimited(&bp->pdev->dev,
					    "Unable to allocate rx buffer\n");
 			err = -ENOMEM;
 			break;
 		}
 
-		paddr = page_pool_get_dma_addr(page) + gem_rx_pad(bp) + offset;
-
-		dma_sync_single_for_device(&bp->pdev->dev,
-					   paddr, bp->rx_buffer_size,
-					   page_pool_get_dma_dir(queue->page_pool));
-
-		data = page_address(page) + offset;
-		queue->rx_buff[entry] = data;
-
-		if (entry == bp->rx_ring_size - 1)
+		if (entry == size - 1)
 			paddr |= MACB_BIT(RX_WRAP);
 		desc->ctrl = 0;
 		/* Setting addr clears RX_USED and allows reception,
@@ -1569,6 +1602,7 @@ static int gem_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
 {
 	struct macb *bp = netdev_priv(dev);
 	struct macb_queue *queue = &bp->queues[qid];
+	u32 irqs = 0;
 
 	if (unlikely(!netif_carrier_ok(dev)))
 		return -ENETDOWN;
@@ -1578,7 +1612,12 @@ static int gem_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
		     !queue->xsk_pool))
 		return -ENXIO;
 
-	/* no-op, until rx/tx implement XSK support */
+	if ((flags & XDP_WAKEUP_RX) &&
+	    !napi_if_scheduled_mark_missed(&queue->napi_rx))
+		irqs |= MACB_BIT(RCOMP);
+
+	if (irqs)
+		queue_writel(queue, IMR, irqs);
 
 	return 0;
 }
@@ -1587,10 +1626,11 @@ static u32 gem_xdp_run(struct macb_queue *queue, void *buff_head,
 		       unsigned int *len, unsigned int *headroom,
 		       dma_addr_t addr)
 {
-	struct net_device *dev;
+	struct xsk_buff_pool *xsk = queue->xsk_pool;
+	struct net_device *dev = queue->bp->dev;
+	struct xdp_buff xdp, *xdp_ptr;
 	struct xdp_frame *xdpf;
 	struct bpf_prog *prog;
-	struct xdp_buff xdp;
 
 	u32 act = XDP_PASS;
 
@@ -1600,25 +1640,35 @@ static u32 gem_xdp_run(struct macb_queue *queue, void *buff_head,
 	if (!prog)
 		goto out;
 
-	xdp_init_buff(&xdp, gem_total_rx_buffer_size(queue->bp), &queue->xdp_rxq);
-	xdp_prepare_buff(&xdp, buff_head, *headroom, *len, false);
-	xdp_buff_clear_frags_flag(&xdp);
-	dev = queue->bp->dev;
+	if (xsk) {
+		/*
+		 * It was a lie all along: buff_head is not a buffer but a
+		 * struct xdp_buff that points to the actual buffer.
+		 */
+		xdp_ptr = buff_head;
+		xdp_ptr->data_end = xdp_ptr->data + *len;
+	} else {
+		/* Use a stack-allocated struct xdp_buff. */
+		xdp_init_buff(&xdp, gem_total_rx_buffer_size(queue->bp), &queue->xdp_rxq);
+		xdp_prepare_buff(&xdp, buff_head, *headroom, *len, false);
+		xdp_buff_clear_frags_flag(&xdp);
+		xdp_ptr = &xdp;
+	}
 
-	act = bpf_prog_run_xdp(prog, &xdp);
+	act = bpf_prog_run_xdp(prog, xdp_ptr);
 	switch (act) {
 	case XDP_PASS:
 		*len = xdp.data_end - xdp.data;
 		*headroom = xdp.data - xdp.data_hard_start;
 		goto out;
 	case XDP_REDIRECT:
-		if (unlikely(xdp_do_redirect(dev, &xdp, prog))) {
+		if (unlikely(xdp_do_redirect(dev, xdp_ptr, prog))) {
 			act = XDP_DROP;
 			break;
 		}
 		goto out;
 	case XDP_TX:
-		xdpf = xdp_convert_buff_to_frame(&xdp);
+		xdpf = xdp_convert_buff_to_frame(xdp_ptr);
 		if (unlikely(!xdpf) ||
		    macb_xdp_submit_frame(queue->bp, xdpf, dev, false, addr)) {
 			act = XDP_DROP;
@@ -1635,8 +1685,12 @@ static u32 gem_xdp_run(struct macb_queue *queue, void *buff_head,
 		break;
 	}
 
-	page_pool_put_full_page(queue->page_pool,
-				virt_to_head_page(xdp.data), true);
+	if (xsk)
+		xsk_buff_free(xdp_ptr);
+	else
+		page_pool_put_full_page(queue->page_pool,
+					virt_to_head_page(xdp.data), true);
+
 out:
 	rcu_read_unlock();
 
@@ -1647,14 +1701,17 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		  int budget)
 {
 	unsigned int packets = 0, dropped = 0, bytes = 0;
+	struct xsk_buff_pool *xsk = queue->xsk_pool;
 	struct skb_shared_info *shinfo;
 	struct macb *bp = queue->bp;
 	struct macb_dma_desc *desc;
+	struct xdp_buff *xsk_xdp;
 	bool xdp_flush = false;
 	unsigned int headroom;
 	unsigned int entry;
 	struct page *page;
 	void *buff_head;
+	int refill_err;
 	int count = 0;
 	int data_len;
 	int nr_frags;
@@ -1686,6 +1743,7 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		count++;
 
 		buff_head = queue->rx_buff[entry];
+		xsk_xdp = buff_head;
 		if (unlikely(!buff_head)) {
 			dev_err_ratelimited(&bp->pdev->dev,
					    "inconsistent Rx descriptor chain\n");
@@ -1701,10 +1759,14 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 		if (data_len < 0)
 			goto free_frags;
 
-		dma_sync_single_for_cpu(&bp->pdev->dev,
-					addr + (first_frame ? bp->rx_ip_align : 0),
-					data_len,
-					page_pool_get_dma_dir(queue->page_pool));
+		if (xsk) {
+			xsk_buff_dma_sync_for_cpu(xsk_xdp);
+		} else {
+			dma_sync_single_for_cpu(&bp->pdev->dev,
						addr + (first_frame ? bp->rx_ip_align : 0),
						data_len,
						page_pool_get_dma_dir(queue->page_pool));
+		}
 
 		if (first_frame) {
 			if (unlikely(queue->skb)) {
@@ -1813,10 +1875,13 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 			queue->skb = NULL;
 		}
 
-		if (buff_head)
+		if (buff_head && xsk) {
+			xsk_buff_free(xsk_xdp);
+		} else if (buff_head) {
 			page_pool_put_full_page(queue->page_pool,
						virt_to_head_page(buff_head),
						false);
+		}
 
 		dropped++;
 		queue->rx_buff[entry] = NULL;
@@ -1829,10 +1894,26 @@ static int gem_rx(struct macb_queue *queue, struct napi_struct *napi,
 	bp->dev->stats.rx_bytes += bytes;
 	queue->stats.rx_bytes += bytes;
 
+	if (!count) /* short-circuit */
+		return 0;
+
 	if (xdp_flush)
 		xdp_do_flush();
 
-	gem_rx_refill(queue, true);
+	refill_err = gem_rx_refill(queue, true);
+	if (refill_err)
+		count = budget;
+
+	if (xsk && xsk_uses_need_wakeup(xsk)) {
+		unsigned int desc_available = CIRC_SPACE(queue->rx_prepared_head,
							 queue->rx_tail,
							 bp->rx_ring_size);
+
+		if (refill_err || !desc_available)
+			xsk_set_rx_need_wakeup(xsk);
+		else
+			xsk_clear_rx_need_wakeup(xsk);
+	}
 
 	return count;
 }
@@ -2816,9 +2897,16 @@ static void gem_free_rx_buffers(struct macb *bp)
 		if (!data)
 			continue;
 
-		page_pool_put_full_page(queue->page_pool,
-					virt_to_head_page(data),
-					false);
+		if (queue->xsk_pool) {
+			struct xdp_buff *xdp = data;
+
+			xsk_buff_free(xdp);
+		} else {
+			page_pool_put_full_page(queue->page_pool,
						virt_to_head_page(data),
						false);
+		}
+
 		queue->rx_buff[i] = NULL;
 	}
 
@@ -2831,8 +2919,10 @@ static void gem_free_rx_buffers(struct macb *bp)
 		queue->rx_buff = NULL;
 		if (xdp_rxq_info_is_reg(&queue->xdp_rxq))
 			xdp_rxq_info_unreg(&queue->xdp_rxq);
-		page_pool_destroy(queue->page_pool);
-		queue->page_pool = NULL;
+		if (!queue->xsk_pool) {
+			page_pool_destroy(queue->page_pool);
+			queue->page_pool = NULL;
+		}
 	}
 }
 
@@ -2987,7 +3077,7 @@ static int macb_alloc_consistent(struct macb *bp)
 	return -ENOMEM;
 }
 
-static int gem_create_page_pool(struct macb_queue *queue, int qid)
+static int gem_init_pool(struct macb_queue *queue, int qid)
 {
 	struct page_pool_params pp_params = {
 		.order = 0,
@@ -3002,24 +3092,32 @@ static int gem_create_page_pool(struct macb_queue *queue, int qid)
 		.napi = &queue->napi_rx,
 		.max_len = PAGE_SIZE,
 	};
-	struct page_pool *pool;
-	int err;
+	struct xsk_buff_pool *xsk = queue->xsk_pool;
+	enum xdp_mem_type mem_type;
+	void *allocator;
+	int err = 0;
 
-	/* This can happen in the case of HRESP error.
-	 * Do nothing as page pool is already existing.
-	 */
-	if (queue->page_pool)
-		return 0;
+	if (xsk) {
+		mem_type = MEM_TYPE_XSK_BUFF_POOL;
+		allocator = xsk;
+	} else {
+		/* This can happen in the case of HRESP error.
+		 * Do nothing as page pool is already existing.
+		 */
+		if (queue->page_pool)
+			return 0;
 
-	pool = page_pool_create(&pp_params);
-	if (IS_ERR(pool)) {
-		netdev_err(queue->bp->dev, "cannot create rx page pool\n");
-		err = PTR_ERR(pool);
-		goto clear_pool;
+		queue->page_pool = page_pool_create(&pp_params);
+		if (IS_ERR(queue->page_pool)) {
+			netdev_err(queue->bp->dev, "cannot create rx page pool\n");
+			err = PTR_ERR(queue->page_pool);
+			goto clear_pool;
+		}
+
+		mem_type = MEM_TYPE_PAGE_POOL;
+		allocator = queue->page_pool;
 	}
 
-	queue->page_pool = pool;
-
 	err = xdp_rxq_info_reg(&queue->xdp_rxq, queue->bp->dev, qid,
			       queue->napi_rx.napi_id);
 	if (err < 0) {
@@ -3027,8 +3125,7 @@ static int gem_create_page_pool(struct macb_queue *queue, int qid)
 		goto destroy_pool;
 	}
 
-	err = xdp_rxq_info_reg_mem_model(&queue->xdp_rxq, MEM_TYPE_PAGE_POOL,
-					 queue->page_pool);
+	err = xdp_rxq_info_reg_mem_model(&queue->xdp_rxq, mem_type, allocator);
 	if (err) {
 		netdev_err(queue->bp->dev, "xdp: failed to register rxq memory model\n");
 		goto unreg_info;
@@ -3039,9 +3136,11 @@ static int gem_create_page_pool(struct macb_queue *queue, int qid)
 unreg_info:
 	xdp_rxq_info_unreg(&queue->xdp_rxq);
 destroy_pool:
-	page_pool_destroy(pool);
+	if (!xsk)
+		page_pool_destroy(queue->page_pool);
 clear_pool:
-	queue->page_pool = NULL;
+	if (!xsk)
+		queue->page_pool = NULL;
 
 	return err;
 }
@@ -3084,7 +3183,7 @@ static int gem_init_rings(struct macb *bp, bool fail_early)
 		/* This is a hard failure. In case of HRESP error
 		 * recovery we always reuse the existing page pool.
		 */
-		last_err = gem_create_page_pool(queue, q);
+		last_err = gem_init_pool(queue, q);
 		if (last_err)
 			break;
 
-- 
2.53.0

From nobody Fri Apr 10 01:00:55 2026
From: Théo Lebrun
Date: Wed, 04 Mar 2026 19:24:31 +0100
Subject: [PATCH net-next 8/8] net: macb: add Tx zero-copy AF_XDP support
Message-Id: <20260304-macb-xsk-v1-8-ba2ebe2bdaa3@bootlin.com>
References: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
In-Reply-To: <20260304-macb-xsk-v1-0-ba2ebe2bdaa3@bootlin.com>
To: Nicolas Ferre, Claudiu Beznea, Andrew Lunn, "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Alexei Starovoitov , Daniel Borkmann , Jesper Dangaard Brouer , John Fastabend , Stanislav Fomichev , Richard Cochran Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, Vladimir Kondratiev , Gregory CLEMENT , =?utf-8?q?Beno=C3=AEt_Monin?= , Tawfik Bayouk , Thomas Petazzoni , Maxime Chevallier , =?utf-8?q?Th=C3=A9o_Lebrun?= X-Mailer: b4 0.14.3 X-Last-TLS-Session-Version: TLSv1.3 Add a new buffer type (to `enum macb_tx_buff_type`). Near the end of macb_tx_complete(), we go and read the XSK buffers using xsk_tx_peek_release_desc_batch() and append those buffers to our Tx ring. Additionally, in macb_tx_complete(), we signal to the XSK subsystem number of bytes completed and conditionally mark the need_wakeup flag. Lastly, we update XSK wakeup by writing the TCOMP bit in the per-queue IMR register, to ensure NAPI scheduling will take place. Signed-off-by: Th=C3=A9o Lebrun --- drivers/net/ethernet/cadence/macb.h | 1 + drivers/net/ethernet/cadence/macb_main.c | 91 ++++++++++++++++++++++++++++= +--- 2 files changed, 86 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cad= ence/macb.h index a9e6f0289ecb..5700a285c08a 100644 --- a/drivers/net/ethernet/cadence/macb.h +++ b/drivers/net/ethernet/cadence/macb.h @@ -963,6 +963,7 @@ enum macb_tx_buff_type { MACB_TYPE_SKB, MACB_TYPE_XDP_TX, MACB_TYPE_XDP_NDO, + MACB_TYPE_XSK, }; =20 /* struct macb_tx_buff - data about an skb or xdp frame which is being diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/etherne= t/cadence/macb_main.c index ea1b0b8c4fab..fee1ebadcf20 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -986,21 +986,30 @@ static int macb_halt_tx(struct macb *bp) =20 static void macb_tx_release_buff(void *buff, enum macb_tx_buff_type type, = int budget) { - if (type =3D=3D MACB_TYPE_SKB) { + switch (type) { + case 
MACB_TYPE_SKB: napi_consume_skb(buff, budget); - } else if (type =3D=3D MACB_TYPE_XDP_TX) { - if (!budget) - xdp_return_frame(buff); - else + break; + case MACB_TYPE_XDP_TX: + if (budget) xdp_return_frame_rx_napi(buff); - } else { + else + xdp_return_frame(buff); + break; + case MACB_TYPE_XDP_NDO: xdp_return_frame(buff); + break; + case MACB_TYPE_XSK: + break; } } =20 static void macb_tx_unmap(struct macb *bp, struct macb_tx_buff *tx_buff, int budget) { + if (tx_buff->type =3D=3D MACB_TYPE_XSK) + return; + if (tx_buff->mapping) { if (tx_buff->mapped_as_page) dma_unmap_page(&bp->pdev->dev, tx_buff->mapping, @@ -1255,6 +1264,57 @@ static void macb_xdp_submit_buff(struct macb *bp, un= signed int queue_index, netif_stop_subqueue(netdev, queue_index); } =20 +static void macb_xdp_xmit_zc(struct macb *bp, unsigned int queue_index, in= t budget) +{ + struct macb_queue *queue =3D &bp->queues[queue_index]; + struct xsk_buff_pool *xsk =3D queue->xsk_pool; + dma_addr_t mapping; + u32 slot_available; + size_t bytes =3D 0; + u32 batch; + + guard(spinlock_irqsave)(&queue->tx_ptr_lock); + + /* This is a hard error, log it. 
*/ + slot_available =3D CIRC_SPACE(queue->tx_head, queue->tx_tail, bp->tx_ring= _size); + if (slot_available < 1) { + netif_stop_subqueue(bp->dev, queue_index); + netdev_dbg(bp->dev, "tx_head =3D %u, tx_tail =3D %u\n", + queue->tx_head, queue->tx_tail); + return; + } + + batch =3D min_t(u32, slot_available, budget); + batch =3D xsk_tx_peek_release_desc_batch(xsk, batch); + if (!batch) + return; + + for (u32 i =3D 0; i < batch; i++) { + struct xdp_desc *desc =3D &xsk->tx_descs[i]; + + mapping =3D xsk_buff_raw_get_dma(xsk, desc->addr); + xsk_buff_raw_dma_sync_for_device(xsk, mapping, desc->len); + + macb_xdp_submit_buff(bp, queue_index, (struct macb_tx_buff){ + .ptr =3D NULL, + .mapping =3D mapping, + .size =3D desc->len, + .mapped_as_page =3D false, + .type =3D MACB_TYPE_XSK, + }); + + bytes +=3D desc->len; + } + + /* Make newly initialized descriptor visible to hardware */ + wmb(); + spin_lock(&bp->lock); + macb_writel(bp, NCR, macb_readl(bp, NCR) | MACB_BIT(TSTART)); + spin_unlock(&bp->lock); + + netdev_tx_sent_queue(netdev_get_tx_queue(bp->dev, queue_index), bytes); +} + static int macb_tx_complete(struct macb_queue *queue, int budget) { struct macb *bp =3D queue->bp; @@ -1316,6 +1376,11 @@ static int macb_tx_complete(struct macb_queue *queue= , int budget) case MACB_TYPE_XDP_NDO: bytes +=3D tx_buff->size; break; + + case MACB_TYPE_XSK: + bytes +=3D tx_buff->size; + xsk_frames++; + break; } =20 packets++; @@ -1337,6 +1402,16 @@ static int macb_tx_complete(struct macb_queue *queue= , int budget) netif_wake_subqueue(bp->dev, queue_index); spin_unlock_irqrestore(&queue->tx_ptr_lock, flags); =20 + if (queue->xsk_pool) { + if (xsk_frames) + xsk_tx_completed(queue->xsk_pool, xsk_frames); + + if (xsk_uses_need_wakeup(queue->xsk_pool)) + xsk_set_tx_need_wakeup(queue->xsk_pool); + + macb_xdp_xmit_zc(bp, queue_index, budget); + } + return packets; } =20 @@ -1616,6 +1691,10 @@ static int gem_xsk_wakeup(struct net_device *dev, u3= 2 qid, u32 flags) 
!napi_if_scheduled_mark_missed(&queue->napi_rx)) irqs |=3D MACB_BIT(RCOMP); =20 + if ((flags & XDP_WAKEUP_TX) && + !napi_if_scheduled_mark_missed(&queue->napi_tx)) + irqs |=3D MACB_BIT(TCOMP); + if (irqs) queue_writel(queue, IMR, irqs); =20 --=20 2.53.0