From: Joe Damato <joe@dama.to>
To: netdev@vger.kernel.org, Michael Chan, "David S. Miller",
	Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: andrew+netdev@lunn.ch, horms@kernel.org, pavan.chebbi@broadcom.com,
	linux-kernel@vger.kernel.org, leon@kernel.org, Joe Damato
Subject: [net-next v6 08/12] net: bnxt: Implement software USO
Date: Thu, 26 Mar 2026 16:52:27 -0700
Message-ID: <20260326235238.2940471-9-joe@dama.to>
In-Reply-To: <20260326235238.2940471-1-joe@dama.to>
References: <20260326235238.2940471-1-joe@dama.to>

Implement bnxt_sw_udp_gso_xmit() using the core tso_dma_map API and the
pre-allocated TX inline buffer for per-segment headers.

The xmit path:

1. Calls tso_start() to initialize TSO state.
2. Stack-allocates a tso_dma_map and calls tso_dma_map_init() to
   DMA-map the linear payload and all frags up front.
3. For each segment:
   - Copies and patches headers via tso_build_hdr() into the
     pre-allocated tx_inline_buf (DMA-synced per segment).
   - Counts payload BDs via tso_dma_map_count().
   - Emits long BD (header) + ext BD + payload BDs.
   - Payload BDs use tso_dma_map_next(), which yields
     (dma_addr, chunk_len, mapping_len) tuples.

Header BDs set dma_unmap_len=0 since the inline buffer is pre-allocated
and unmapped only at ring teardown.

Suggested-by: Jakub Kicinski
Reviewed-by: Pavan Chebbi
Signed-off-by: Joe Damato <joe@dama.to>
---
v6:
- Addressed Paolo's feedback that the IOVA API could fail transiently,
  leaving stale state in iova_state. Fix this by always copying the
  state, noting that dma_iova_try_alloc() is called unconditionally in
  tso_dma_map_init() (via tso_dma_iova_try), which zeroes the state
  even when the API can't be used.
- Since this was a very minor change, I retained Pavan's Reviewed-by.

v5:
- Added __maybe_unused to last_unmap_len and last_unmap_addr to silence
  a build warning when CONFIG_NEED_DMA_MAP_STATE is disabled. No
  functional changes.
- Added Pavan's Reviewed-by.

v4:
- Fixed the early-return issue Pavan pointed out when num_segs <= 1;
  use the drop label instead of returning.

v3:
- Added iova_state and iova_total_len to struct bnxt_sw_tx_bd.
- Stored iova_state on the last segment's tx_buf during xmit.

rfcv2:
- Set the unmap len on the last descriptor, so that when completions
  fire only the last completion unmaps the region.
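
For reviewers, the per-segment flow described in the commit message
condenses to roughly the sketch below. The emit_*() helpers are
hypothetical stand-ins for the BD-writing code in the diff; error
paths, ring-index bookkeeping, and the VLAN/CFA details are elided:

	hdr_len = tso_start(skb, &tso);		/* parse headers, init state */
	if (tso_dma_map_init(&map, &pdev->dev, skb, hdr_len))
		goto drop;			/* maps linear payload + all frags */

	for (i = 0; i < num_segs; i++) {
		bool last = (i == num_segs - 1);

		/* patch this segment's header into its inline-buffer slot */
		tso_build_hdr(skb, this_hdr, &tso, seg_payload, last);
		dma_sync_single_for_device(&pdev->dev, this_hdr_dma,
					   hdr_len, DMA_TO_DEVICE);

		bd_count = tso_dma_map_count(&map, seg_payload);
		emit_long_bd(this_hdr_dma, hdr_len, 2 + bd_count);	/* hypothetical */
		emit_ext_bd(csum, vlan_tag_flags, cfa_action);		/* hypothetical */

		/* one payload BD per (dma_addr, chunk_len, mapping_len) tuple */
		while (tso_dma_map_next(&map, &dma_addr, &chunk_len,
					&mapping_len, seg_payload))
			emit_payload_bd(dma_addr, chunk_len);		/* hypothetical */
	}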
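
As a sanity check on the descriptor budget in the diff below
(bds_needed = 3 * num_segs + nr_frags + 1): each segment always
consumes 2 fixed BDs (long + ext), and per the comment in the code the
payload BDs across the whole skb total at most num_segs + nr_frags,
since a new payload BD can only start at a segment or frag boundary.
With illustrative numbers (not from the patch): total_payload = 28000,
mss = 1400, nr_frags = 3 gives num_segs = DIV_ROUND_UP(28000, 1400) =
20 and bds_needed = 3 * 20 + 3 + 1 = 64.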
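
The rfcv2 note (unmap only via the last descriptor) is the subtle part
of the patch, so here is the bookkeeping from the payload-BD loop
again, as an annotated excerpt; the logic is identical to the diff,
only the comments are added:

	if (mapping_len) {
		/* This BD starts a new DMA-mapped region, so the
		 * previously remembered BD is now known to be the last
		 * one touching the previous region: commit the pending
		 * (addr, len) to it so its completion does the unmap.
		 */
		if (last_unmap_buf) {
			dma_unmap_addr_set(last_unmap_buf, mapping,
					   last_unmap_addr);
			dma_unmap_len_set(last_unmap_buf, len,
					  last_unmap_len);
		}
		last_unmap_addr = dma_addr;
		last_unmap_len = mapping_len;
	}
	/* Every BD becomes a candidate "last toucher" of the current
	 * region; whatever is still pending after the loop is flushed
	 * onto the final BD before the doorbell. TX completions are
	 * in-order, so the unmap cannot run before every earlier BD of
	 * the same region has completed.
	 */
	last_unmap_buf = tx_buf;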
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |   4 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c | 210 ++++++++++++++++++
 2 files changed, 214 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 18b08789b3a4..865546f3bfce 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -11,6 +11,8 @@
 #ifndef BNXT_H
 #define BNXT_H
 
+#include <linux/dma-mapping.h>
+
 #define DRV_MODULE_NAME	"bnxt_en"
 
 /* DO NOT CHANGE DRV_VER_* defines
@@ -897,6 +899,8 @@ struct bnxt_sw_tx_bd {
 		u16 rx_prod;
 		u16 txts_prod;
 	};
+	struct dma_iova_state iova_state;
+	size_t iova_total_len;
 };
 
 #define BNXT_SW_GSO_MID	1
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
index b296769ee4fe..7c198847a771 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
@@ -19,11 +19,221 @@
 #include "bnxt.h"
 #include "bnxt_gso.h"
 
+static u32 bnxt_sw_gso_lhint(unsigned int len)
+{
+	if (len <= 512)
+		return TX_BD_FLAGS_LHINT_512_AND_SMALLER;
+	else if (len <= 1023)
+		return TX_BD_FLAGS_LHINT_512_TO_1023;
+	else if (len <= 2047)
+		return TX_BD_FLAGS_LHINT_1024_TO_2047;
+	else
+		return TX_BD_FLAGS_LHINT_2048_AND_LARGER;
+}
+
 netdev_tx_t bnxt_sw_udp_gso_xmit(struct bnxt *bp,
 				 struct bnxt_tx_ring_info *txr,
 				 struct netdev_queue *txq,
 				 struct sk_buff *skb)
 {
+	unsigned int last_unmap_len __maybe_unused = 0;
+	dma_addr_t last_unmap_addr __maybe_unused = 0;
+	struct bnxt_sw_tx_bd *last_unmap_buf = NULL;
+	unsigned int hdr_len, mss, num_segs;
+	struct pci_dev *pdev = bp->pdev;
+	unsigned int total_payload;
+	int i, bds_needed, slots;
+	struct tso_dma_map map;
+	u32 vlan_tag_flags = 0;
+	struct tso_t tso;
+	u16 cfa_action;
+	u16 prod;
+
+	hdr_len = tso_start(skb, &tso);
+	mss = skb_shinfo(skb)->gso_size;
+	total_payload = skb->len - hdr_len;
+	num_segs = DIV_ROUND_UP(total_payload, mss);
+
+	/* Zero the csum fields so tso_build_hdr will propagate zeroes into
+	 * every segment header. HW csum offload will recompute from scratch.
+	 */
+	udp_hdr(skb)->check = 0;
+	if (!tso.ipv6)
+		ip_hdr(skb)->check = 0;
+
+	if (unlikely(num_segs <= 1))
+		goto drop;
+
+	/* Upper bound on the number of descriptors needed.
+	 *
+	 * Each segment uses 1 long BD + 1 ext BD + payload BDs, which is
+	 * at most num_segs + nr_frags (each frag boundary crossing adds at
+	 * most 1 extra BD).
+	 */
+	bds_needed = 3 * num_segs + skb_shinfo(skb)->nr_frags + 1;
+
+	if (unlikely(bnxt_tx_avail(bp, txr) < bds_needed)) {
+		netif_txq_try_stop(txq, bnxt_tx_avail(bp, txr),
+				   bp->tx_wake_thresh);
+		return NETDEV_TX_BUSY;
+	}
+
+	slots = BNXT_SW_USO_MAX_SEGS - (txr->tx_inline_prod - txr->tx_inline_cons);
+
+	if (unlikely(slots < num_segs)) {
+		netif_txq_try_stop(txq, bnxt_tx_avail(bp, txr),
+				   bp->tx_wake_thresh);
+		return NETDEV_TX_BUSY;
+	}
+
+	if (unlikely(tso_dma_map_init(&map, &pdev->dev, skb, hdr_len)))
+		goto drop;
+
+	cfa_action = bnxt_xmit_get_cfa_action(skb);
+	if (skb_vlan_tag_present(skb)) {
+		vlan_tag_flags = TX_BD_CFA_META_KEY_VLAN |
+				 skb_vlan_tag_get(skb);
+		if (skb->vlan_proto == htons(ETH_P_8021Q))
+			vlan_tag_flags |= 1 << TX_BD_CFA_META_TPID_SHIFT;
+	}
+
+	prod = txr->tx_prod;
+
+	for (i = 0; i < num_segs; i++) {
+		unsigned int seg_payload = min_t(unsigned int, mss,
+						 total_payload - i * mss);
+		u16 slot = (txr->tx_inline_prod + i) &
+			   (BNXT_SW_USO_MAX_SEGS - 1);
+		struct bnxt_sw_tx_bd *tx_buf;
+		unsigned int mapping_len;
+		dma_addr_t this_hdr_dma;
+		unsigned int chunk_len;
+		unsigned int offset;
+		dma_addr_t dma_addr;
+		struct tx_bd *txbd;
+		void *this_hdr;
+		int bd_count;
+		__le32 csum;
+		bool last;
+		u32 flags;
+
+		last = (i == num_segs - 1);
+		offset = slot * TSO_HEADER_SIZE;
+		this_hdr = txr->tx_inline_buf + offset;
+		this_hdr_dma = txr->tx_inline_dma + offset;
+
+		tso_build_hdr(skb, this_hdr, &tso, seg_payload, last);
+
+		dma_sync_single_for_device(&pdev->dev, this_hdr_dma,
+					   hdr_len, DMA_TO_DEVICE);
+
+		bd_count = tso_dma_map_count(&map, seg_payload);
+
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
+
+		tx_buf->skb = skb;
+		tx_buf->nr_frags = bd_count;
+		tx_buf->is_push = 0;
+		tx_buf->is_ts_pkt = 0;
+
+		dma_unmap_addr_set(tx_buf, mapping, this_hdr_dma);
+		dma_unmap_len_set(tx_buf, len, 0);
+
+		tx_buf->is_sw_gso = last ? BNXT_SW_GSO_LAST : BNXT_SW_GSO_MID;
+
+		/* Store IOVA state on the last segment for completion.
+		 * Always copy so that a stale iova_state from a prior
+		 * occupant of this ring slot cannot be misread by
+		 * dma_use_iova() in the completion path.
+		 */
+		if (last) {
+			tx_buf->iova_state = map.iova_state;
+			tx_buf->iova_total_len = map.total_len;
+		}
+
+		flags = (hdr_len << TX_BD_LEN_SHIFT) |
+			TX_BD_TYPE_LONG_TX_BD |
+			TX_BD_CNT(2 + bd_count);
+
+		flags |= bnxt_sw_gso_lhint(hdr_len + seg_payload);
+
+		txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+		txbd->tx_bd_haddr = cpu_to_le64(this_hdr_dma);
+		txbd->tx_bd_opaque = SET_TX_OPAQUE(bp, txr, prod,
+						   2 + bd_count);
+
+		csum = cpu_to_le32(TX_BD_FLAGS_TCP_UDP_CHKSUM |
+				   TX_BD_FLAGS_IP_CKSUM);
+
+		prod = NEXT_TX(prod);
+		bnxt_init_ext_bd(bp, txr, prod, csum,
+				 vlan_tag_flags, cfa_action);
+
+		/* Set dma_unmap_len on the LAST BD touching each
+		 * region. Since completions are in-order, the last segment
+		 * completes after all earlier ones, so the unmap is safe.
+		 */
+		while (tso_dma_map_next(&map, &dma_addr, &chunk_len,
+					&mapping_len, seg_payload)) {
+			prod = NEXT_TX(prod);
+			txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
+			tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
+
+			txbd->tx_bd_haddr = cpu_to_le64(dma_addr);
+			dma_unmap_addr_set(tx_buf, mapping, dma_addr);
+			dma_unmap_len_set(tx_buf, len, 0);
+			tx_buf->skb = NULL;
+			tx_buf->is_sw_gso = 0;
+
+			if (mapping_len) {
+				if (last_unmap_buf) {
+					dma_unmap_addr_set(last_unmap_buf,
+							   mapping,
+							   last_unmap_addr);
+					dma_unmap_len_set(last_unmap_buf,
+							  len,
+							  last_unmap_len);
+				}
+				last_unmap_addr = dma_addr;
+				last_unmap_len = mapping_len;
+			}
+			last_unmap_buf = tx_buf;
+
+			flags = chunk_len << TX_BD_LEN_SHIFT;
+			txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+			txbd->tx_bd_opaque = 0;
+
+			seg_payload -= chunk_len;
+		}
+
+		txbd->tx_bd_len_flags_type |=
+			cpu_to_le32(TX_BD_FLAGS_PACKET_END);
+
+		prod = NEXT_TX(prod);
+	}
+
+	if (last_unmap_buf) {
+		dma_unmap_addr_set(last_unmap_buf, mapping, last_unmap_addr);
+		dma_unmap_len_set(last_unmap_buf, len, last_unmap_len);
+	}
+
+	txr->tx_inline_prod += num_segs;
+
+	netdev_tx_sent_queue(txq, skb->len);
+
+	WRITE_ONCE(txr->tx_prod, prod);
+	/* Sync BDs before doorbell */
+	wmb();
+	bnxt_db_write(bp, &txr->tx_db, prod);
+
+	if (unlikely(bnxt_tx_avail(bp, txr) <= bp->tx_wake_thresh))
+		netif_txq_try_stop(txq, bnxt_tx_avail(bp, txr),
+				   bp->tx_wake_thresh);
+
+	return NETDEV_TX_OK;
+
+drop:
 	dev_kfree_skb_any(skb);
 	dev_core_stats_tx_dropped_inc(bp->dev);
 	return NETDEV_TX_OK;
-- 
2.52.0