From: Joe Damato <joe@dama.to>
To: netdev@vger.kernel.org, Michael Chan, Pavan Chebbi, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: horms@kernel.org, linux-kernel@vger.kernel.org, leon@kernel.org,
 Joe Damato
Subject: [net-next v9 07/10] net: bnxt: Implement software USO
Date: Tue, 7 Apr 2026 15:03:03 -0700
Message-ID: <20260407220313.3990909-8-joe@dama.to>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260407220313.3990909-1-joe@dama.to>
References: <20260407220313.3990909-1-joe@dama.to>

Implement bnxt_sw_udp_gso_xmit() using the core tso_dma_map API and the
pre-allocated TX inline buffer for per-segment headers.

The xmit path:

1. Calls tso_start() to initialize TSO state.
2. Stack-allocates a tso_dma_map and calls tso_dma_map_init() to
   DMA-map the linear payload and all frags up front.
3. For each segment:
   - Copies and patches headers via tso_build_hdr() into the
     pre-allocated tx_inline_buf (DMA-synced per segment).
   - Counts payload BDs via tso_dma_map_count().
   - Emits a long BD (header) + an ext BD + payload BDs.
   - Payload BDs use tso_dma_map_next(), which yields (dma_addr,
     chunk_len, mapping_len) tuples.

Header BDs set dma_unmap_len=0 since the inline buffer is
pre-allocated and unmapped only at ring teardown.
Completion state is updated by calling tso_dma_map_completion_save()
for the last segment.

Suggested-by: Jakub Kicinski
Signed-off-by: Joe Damato
---
v9:
 - Added an inline slot check to prevent possible overwriting of
   in-flight headers (suggested by AI).
 - Set TX_BD_FLAGS_IP_CKSUM conditionally on !tso.ipv6 (suggested by
   AI).

v8:
 - Zero csum fields on the per-segment header copy after
   tso_build_hdr() instead of on the original skb, avoiding the need
   for skb_cow_head, as suggested by Eric Dumazet.

v7:
 - Dropped Pavan's Reviewed-by as some changes were made.
 - Updated struct bnxt_sw_tx_bd to embed a tso_dma_map_completion_state
   struct for tracking completion state.
 - Dropped an unnecessary slot check.
 - Eliminated an ugly-looking ternary to simplify the code.
 - Call tso_dma_map_completion_save() to update completion state.

v6:
 - Addressed Paolo's feedback where the IOVA API could fail
   transiently, leaving stale state in iova_state. Fix this by always
   copying the state, noting that dma_iova_try_alloc() is called
   unconditionally in tso_dma_map_init() (via tso_dma_iova_try), which
   zeroes the state even if the API can't be used.
 - Since this was a very minor change, I retained Pavan's Reviewed-by.

v5:
 - Added __maybe_unused to last_unmap_len and last_unmap_addr to
   silence a build warning when CONFIG_NEED_DMA_MAP_STATE is disabled.
   No functional changes.
 - Added Pavan's Reviewed-by.

v4:
 - Fixed the early return issue Pavan pointed out when num_segs <= 1;
   use the drop label instead of returning.

v3:
 - Added iova_state and iova_total_len to struct bnxt_sw_tx_bd.
 - Store iova_state on the last segment's tx_buf during xmit.

rfcv2:
 - Set the unmap len on the last descriptor, so that when completions
   fire only the last completion unmaps the region.
 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |   3 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c | 214 ++++++++++++++++++
 2 files changed, 217 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 6b38b84924e0..fe50576ae525 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -11,6 +11,8 @@
 #ifndef BNXT_H
 #define BNXT_H
 
+#include <net/tso.h>
+
 #define DRV_MODULE_NAME		"bnxt_en"
 
 /* DO NOT CHANGE DRV_VER_* defines
@@ -899,6 +901,7 @@ struct bnxt_sw_tx_bd {
 		u16 rx_prod;
 		u16 txts_prod;
 	};
+	struct tso_dma_map_completion_state sw_gso_cstate;
 };
 
 #define BNXT_SW_GSO_MID		1
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
index b296769ee4fe..0d4a59aae88e 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
@@ -19,11 +19,225 @@
 #include "bnxt.h"
 #include "bnxt_gso.h"
 
+static u32 bnxt_sw_gso_lhint(unsigned int len)
+{
+	if (len <= 512)
+		return TX_BD_FLAGS_LHINT_512_AND_SMALLER;
+	else if (len <= 1023)
+		return TX_BD_FLAGS_LHINT_512_TO_1023;
+	else if (len <= 2047)
+		return TX_BD_FLAGS_LHINT_1024_TO_2047;
+	else
+		return TX_BD_FLAGS_LHINT_2048_AND_LARGER;
+}
+
 netdev_tx_t bnxt_sw_udp_gso_xmit(struct bnxt *bp,
 				 struct bnxt_tx_ring_info *txr,
 				 struct netdev_queue *txq,
 				 struct sk_buff *skb)
 {
+	unsigned int last_unmap_len __maybe_unused = 0;
+	dma_addr_t last_unmap_addr __maybe_unused = 0;
+	struct bnxt_sw_tx_bd *last_unmap_buf = NULL;
+	unsigned int hdr_len, mss, num_segs;
+	struct pci_dev *pdev = bp->pdev;
+	unsigned int total_payload;
+	struct tso_dma_map map;
+	u32 vlan_tag_flags = 0;
+	int i, bds_needed;
+	struct tso_t tso;
+	u16 prod, slots;
+	u16 cfa_action;
+	__le32 csum;
+
+	hdr_len = tso_start(skb, &tso);
+	mss = skb_shinfo(skb)->gso_size;
+	total_payload = skb->len - hdr_len;
+	num_segs = DIV_ROUND_UP(total_payload, mss);
+
+	if (unlikely(num_segs <= 1))
+		goto drop;
+
+	/* Upper bound on the number of descriptors needed.
+	 *
+	 * Each segment uses 1 long BD + 1 ext BD + payload BDs, which is
+	 * at most num_segs + nr_frags (each frag boundary crossing adds at
+	 * most 1 extra BD).
+	 */
+	bds_needed = 3 * num_segs + skb_shinfo(skb)->nr_frags + 1;
+
+	if (unlikely(bnxt_tx_avail(bp, txr) < bds_needed)) {
+		netif_txq_try_stop(txq, bnxt_tx_avail(bp, txr),
+				   bp->tx_wake_thresh);
+		return NETDEV_TX_BUSY;
+	}
+
+	/* BD backpressure alone cannot prevent overwriting in-flight
+	 * headers in the inline buffer. Check slot availability directly.
+	 */
+	slots = txr->tx_inline_prod - txr->tx_inline_cons;
+	slots = BNXT_SW_USO_MAX_SEGS - slots;
+
+	if (unlikely(slots < num_segs)) {
+		netif_txq_try_stop(txq, slots, num_segs);
+		return NETDEV_TX_BUSY;
+	}
+
+	if (unlikely(tso_dma_map_init(&map, &pdev->dev, skb, hdr_len)))
+		goto drop;
+
+	cfa_action = bnxt_xmit_get_cfa_action(skb);
+	if (skb_vlan_tag_present(skb)) {
+		vlan_tag_flags = TX_BD_CFA_META_KEY_VLAN |
+				 skb_vlan_tag_get(skb);
+		if (skb->vlan_proto == htons(ETH_P_8021Q))
+			vlan_tag_flags |= 1 << TX_BD_CFA_META_TPID_SHIFT;
+	}
+
+	csum = cpu_to_le32(TX_BD_FLAGS_TCP_UDP_CHKSUM);
+	if (!tso.ipv6)
+		csum |= cpu_to_le32(TX_BD_FLAGS_IP_CKSUM);
+
+	prod = txr->tx_prod;
+
+	for (i = 0; i < num_segs; i++) {
+		unsigned int seg_payload = min_t(unsigned int, mss,
+						 total_payload - i * mss);
+		u16 slot = (txr->tx_inline_prod + i) &
+			   (BNXT_SW_USO_MAX_SEGS - 1);
+		struct bnxt_sw_tx_bd *tx_buf;
+		unsigned int mapping_len;
+		dma_addr_t this_hdr_dma;
+		unsigned int chunk_len;
+		unsigned int offset;
+		dma_addr_t dma_addr;
+		struct tx_bd *txbd;
+		struct udphdr *uh;
+		void *this_hdr;
+		int bd_count;
+		bool last;
+		u32 flags;
+
+		last = (i == num_segs - 1);
+		offset = slot * TSO_HEADER_SIZE;
+		this_hdr = txr->tx_inline_buf + offset;
+		this_hdr_dma = txr->tx_inline_dma + offset;
+
+		tso_build_hdr(skb, this_hdr, &tso, seg_payload, last);
+
+		/* Zero stale csum fields copied from the original skb;
+		 * HW offload recomputes from scratch.
+		 */
+		uh = this_hdr + skb_transport_offset(skb);
+		uh->check = 0;
+		if (!tso.ipv6) {
+			struct iphdr *iph = this_hdr + skb_network_offset(skb);
+
+			iph->check = 0;
+		}
+
+		dma_sync_single_for_device(&pdev->dev, this_hdr_dma,
+					   hdr_len, DMA_TO_DEVICE);
+
+		bd_count = tso_dma_map_count(&map, seg_payload);
+
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
+
+		tx_buf->skb = skb;
+		tx_buf->nr_frags = bd_count;
+		tx_buf->is_push = 0;
+		tx_buf->is_ts_pkt = 0;
+
+		dma_unmap_addr_set(tx_buf, mapping, this_hdr_dma);
+		dma_unmap_len_set(tx_buf, len, 0);
+
+		if (last) {
+			tx_buf->is_sw_gso = BNXT_SW_GSO_LAST;
+			tso_dma_map_completion_save(&map, &tx_buf->sw_gso_cstate);
+		} else {
+			tx_buf->is_sw_gso = BNXT_SW_GSO_MID;
+		}
+
+		flags = (hdr_len << TX_BD_LEN_SHIFT) |
+			TX_BD_TYPE_LONG_TX_BD |
+			TX_BD_CNT(2 + bd_count);
+
+		flags |= bnxt_sw_gso_lhint(hdr_len + seg_payload);
+
+		txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+		txbd->tx_bd_haddr = cpu_to_le64(this_hdr_dma);
+		txbd->tx_bd_opaque = SET_TX_OPAQUE(bp, txr, prod,
+						   2 + bd_count);
+
+		prod = NEXT_TX(prod);
+		bnxt_init_ext_bd(bp, txr, prod, csum,
+				 vlan_tag_flags, cfa_action);
+
+		/* Set dma_unmap_len on the LAST BD touching each
+		 * region. Since completions are in-order, the last segment
+		 * completes after all earlier ones, so the unmap is safe.
+		 */
+		while (tso_dma_map_next(&map, &dma_addr, &chunk_len,
+					&mapping_len, seg_payload)) {
+			prod = NEXT_TX(prod);
+			txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
+			tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
+
+			txbd->tx_bd_haddr = cpu_to_le64(dma_addr);
+			dma_unmap_addr_set(tx_buf, mapping, dma_addr);
+			dma_unmap_len_set(tx_buf, len, 0);
+			tx_buf->skb = NULL;
+			tx_buf->is_sw_gso = 0;
+
+			if (mapping_len) {
+				if (last_unmap_buf) {
+					dma_unmap_addr_set(last_unmap_buf,
+							   mapping,
+							   last_unmap_addr);
+					dma_unmap_len_set(last_unmap_buf,
+							  len,
+							  last_unmap_len);
+				}
+				last_unmap_addr = dma_addr;
+				last_unmap_len = mapping_len;
+			}
+			last_unmap_buf = tx_buf;
+
+			flags = chunk_len << TX_BD_LEN_SHIFT;
+			txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+			txbd->tx_bd_opaque = 0;
+
+			seg_payload -= chunk_len;
+		}
+
+		txbd->tx_bd_len_flags_type |=
+			cpu_to_le32(TX_BD_FLAGS_PACKET_END);
+
+		prod = NEXT_TX(prod);
+	}
+
+	if (last_unmap_buf) {
+		dma_unmap_addr_set(last_unmap_buf, mapping, last_unmap_addr);
+		dma_unmap_len_set(last_unmap_buf, len, last_unmap_len);
+	}
+
+	txr->tx_inline_prod += num_segs;
+
+	netdev_tx_sent_queue(txq, skb->len);
+
+	WRITE_ONCE(txr->tx_prod, prod);
+	/* Sync BDs before doorbell */
+	wmb();
+	bnxt_db_write(bp, &txr->tx_db, prod);
+
+	if (unlikely(bnxt_tx_avail(bp, txr) <= bp->tx_wake_thresh))
+		netif_txq_try_stop(txq, bnxt_tx_avail(bp, txr),
+				   bp->tx_wake_thresh);
+
+	return NETDEV_TX_OK;
+
+drop:
 	dev_kfree_skb_any(skb);
 	dev_core_stats_tx_dropped_inc(bp->dev);
 	return NETDEV_TX_OK;
-- 
2.52.0