From: Joe Damato <joe@dama.to>
To: netdev@vger.kernel.org, Michael Chan, Pavan Chebbi, Andrew Lunn,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: horms@kernel.org, linux-kernel@vger.kernel.org, leon@kernel.org,
	Joe Damato
Subject: [net-next v8 07/10] net: bnxt: Implement software USO
Date: Thu, 2 Apr 2026 17:35:14 -0700
Message-ID: <20260403003524.2564973-8-joe@dama.to>
In-Reply-To: <20260403003524.2564973-1-joe@dama.to>
References: <20260403003524.2564973-1-joe@dama.to>

Implement bnxt_sw_udp_gso_xmit() using the core tso_dma_map API and the
pre-allocated TX inline buffer for per-segment headers.

The xmit path:

1. Calls tso_start() to initialize TSO state.
2. Stack-allocates a tso_dma_map and calls tso_dma_map_init() to
   DMA-map the linear payload and all frags up front.
3. For each segment:
   - Copies and patches headers via tso_build_hdr() into the
     pre-allocated tx_inline_buf (DMA-synced per segment).
   - Counts payload BDs via tso_dma_map_count().
   - Emits the long BD (header), the ext BD, and the payload BDs.
     Payload BDs use tso_dma_map_next(), which yields
     (dma_addr, chunk_len, mapping_len) tuples.

Header BDs set dma_unmap_len = 0, since the inline buffer is
pre-allocated and unmapped only at ring teardown. Completion state is
updated by calling tso_dma_map_completion_save() for the last segment.
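In outline, the per-segment flow looks roughly like this (a minimal
sketch, assuming the tso_dma_map API added earlier in this series;
locals, ring bookkeeping, and error paths elided, and emit_payload_bd()
is a hypothetical stand-in for the BD-writing code):

	hdr_len = tso_start(skb, &tso);
	if (tso_dma_map_init(&map, dev, skb, hdr_len))
		goto drop;

	for (i = 0; i < num_segs; i++) {
		bool last = (i == num_segs - 1);

		/* copy + patch this segment's headers into tx_inline_buf */
		tso_build_hdr(skb, hdr, &tso, seg_payload, last);

		/* payload BDs this segment will consume */
		bd_count = tso_dma_map_count(&map, seg_payload);

		/* long BD (header) + ext BD, then one payload BD per
		 * (dma_addr, chunk_len) pair handed back by the map
		 */
		while (tso_dma_map_next(&map, &dma_addr, &chunk_len,
					&mapping_len, seg_payload))
			emit_payload_bd(dma_addr, chunk_len);

		if (last)
			tso_dma_map_completion_save(&map, &cstate);
	}

The map is built once up front, so the per-segment work reduces to
header patching plus walking the pre-mapped payload.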
Suggested-by: Jakub Kicinski
Signed-off-by: Joe Damato <joe@dama.to>
---
v8:
 - Zero the csum fields on the per-segment header copy after
   tso_build_hdr() instead of on the original skb, avoiding the need
   for skb_cow_head, as suggested by Eric Dumazet.

v7:
 - Dropped Pavan's Reviewed-by as some changes were made.
 - Updated struct bnxt_sw_tx_bd to embed a tso_dma_map_completion_state
   struct for tracking completion state.
 - Dropped an unnecessary slot check.
 - Eliminated an ugly-looking ternary to simplify the code.
 - Call tso_dma_map_completion_save() to update completion state.

v6:
 - Addressed Paolo's feedback that the IOVA API can fail transiently,
   leaving stale state in iova_state. Fix this by always copying the
   state; note that dma_iova_try_alloc() is called unconditionally from
   tso_dma_map_init() (via tso_dma_iova_try), which zeroes the state
   even when the API can't be used.
 - Since this was a very minor change, I retained Pavan's Reviewed-by.

v5:
 - Added __maybe_unused to last_unmap_len and last_unmap_addr to
   silence a build warning when CONFIG_NEED_DMA_MAP_STATE is disabled.
   No functional changes.
 - Added Pavan's Reviewed-by.

v4:
 - Fixed the early return issue Pavan pointed out when num_segs <= 1;
   use the drop label instead of returning.

v3:
 - Added iova_state and iova_total_len to struct bnxt_sw_tx_bd.
 - Store iova_state in the last segment's tx_buf during xmit.

rfcv2:
 - Set the unmap len on the last descriptor, so that when completions
   fire only the last completion unmaps the region.

 drivers/net/ethernet/broadcom/bnxt/bnxt.h     |   3 +
 drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c | 202 ++++++++++++++++++
 2 files changed, 205 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 6b38b84924e0..fe50576ae525 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -11,6 +11,8 @@
 #ifndef BNXT_H
 #define BNXT_H
 
+#include <net/tso.h>
+
 #define DRV_MODULE_NAME		"bnxt_en"
 
 /* DO NOT CHANGE DRV_VER_* defines
@@ -899,6 +901,7 @@ struct bnxt_sw_tx_bd {
 		u16 rx_prod;
 		u16 txts_prod;
 	};
+	struct tso_dma_map_completion_state sw_gso_cstate;
 };
 
 #define BNXT_SW_GSO_MID		1
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
index b296769ee4fe..7a7d40e36cea 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.c
@@ -19,11 +19,213 @@
 #include "bnxt.h"
 #include "bnxt_gso.h"
 
+static u32 bnxt_sw_gso_lhint(unsigned int len)
+{
+	if (len <= 512)
+		return TX_BD_FLAGS_LHINT_512_AND_SMALLER;
+	else if (len <= 1023)
+		return TX_BD_FLAGS_LHINT_512_TO_1023;
+	else if (len <= 2047)
+		return TX_BD_FLAGS_LHINT_1024_TO_2047;
+	else
+		return TX_BD_FLAGS_LHINT_2048_AND_LARGER;
+}
+
 netdev_tx_t bnxt_sw_udp_gso_xmit(struct bnxt *bp,
				  struct bnxt_tx_ring_info *txr,
				  struct netdev_queue *txq,
				  struct sk_buff *skb)
 {
+	unsigned int last_unmap_len __maybe_unused = 0;
+	dma_addr_t last_unmap_addr __maybe_unused = 0;
+	struct bnxt_sw_tx_bd *last_unmap_buf = NULL;
+	unsigned int hdr_len, mss, num_segs;
+	struct pci_dev *pdev = bp->pdev;
+	unsigned int total_payload;
+	struct tso_dma_map map;
+	u32 vlan_tag_flags = 0;
+	int i, bds_needed;
+	struct tso_t tso;
+	u16 cfa_action;
+	u16 prod;
+
+	hdr_len = tso_start(skb, &tso);
+	mss = skb_shinfo(skb)->gso_size;
+	total_payload = skb->len - hdr_len;
+	num_segs = DIV_ROUND_UP(total_payload, mss);
+
+	if (unlikely(num_segs <= 1))
+		goto drop;
+
+	/* Upper bound on the number of descriptors needed.
+	 *
+	 * Each segment uses 1 long BD + 1 ext BD + payload BDs, which is
+	 * at most num_segs + nr_frags (each frag boundary crossing adds
+	 * at most 1 extra BD).
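+	 *
+	 * For example (illustrative numbers): ~65 kB of payload at an
+	 * MSS of 1400 gives num_segs = 47, so with 17 frags the bound
+	 * below works out to 3 * 47 + 17 + 1 = 159 descriptors.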
+	 */
+	bds_needed = 3 * num_segs + skb_shinfo(skb)->nr_frags + 1;
+
+	if (unlikely(bnxt_tx_avail(bp, txr) < bds_needed)) {
+		netif_txq_try_stop(txq, bnxt_tx_avail(bp, txr),
+				   bp->tx_wake_thresh);
+		return NETDEV_TX_BUSY;
+	}
+
+	if (unlikely(tso_dma_map_init(&map, &pdev->dev, skb, hdr_len)))
+		goto drop;
+
+	cfa_action = bnxt_xmit_get_cfa_action(skb);
+	if (skb_vlan_tag_present(skb)) {
+		vlan_tag_flags = TX_BD_CFA_META_KEY_VLAN |
+				 skb_vlan_tag_get(skb);
+		if (skb->vlan_proto == htons(ETH_P_8021Q))
+			vlan_tag_flags |= 1 << TX_BD_CFA_META_TPID_SHIFT;
+	}
+
+	prod = txr->tx_prod;
+
+	for (i = 0; i < num_segs; i++) {
+		unsigned int seg_payload = min_t(unsigned int, mss,
+						 total_payload - i * mss);
+		u16 slot = (txr->tx_inline_prod + i) &
+			   (BNXT_SW_USO_MAX_SEGS - 1);
+		struct bnxt_sw_tx_bd *tx_buf;
+		unsigned int mapping_len;
+		dma_addr_t this_hdr_dma;
+		unsigned int chunk_len;
+		unsigned int offset;
+		dma_addr_t dma_addr;
+		struct tx_bd *txbd;
+		struct udphdr *uh;
+		void *this_hdr;
+		int bd_count;
+		__le32 csum;
+		bool last;
+		u32 flags;
+
+		last = (i == num_segs - 1);
+		offset = slot * TSO_HEADER_SIZE;
+		this_hdr = txr->tx_inline_buf + offset;
+		this_hdr_dma = txr->tx_inline_dma + offset;
+
+		tso_build_hdr(skb, this_hdr, &tso, seg_payload, last);
+
+		/* Zero stale csum fields copied from the original skb;
+		 * HW offload recomputes them from scratch.
+		 */
+		uh = this_hdr + skb_transport_offset(skb);
+		uh->check = 0;
+		if (!tso.ipv6) {
+			struct iphdr *iph = this_hdr + skb_network_offset(skb);
+
+			iph->check = 0;
+		}
+
+		dma_sync_single_for_device(&pdev->dev, this_hdr_dma,
+					   hdr_len, DMA_TO_DEVICE);
+
+		bd_count = tso_dma_map_count(&map, seg_payload);
+
+		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
+		txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
+
+		tx_buf->skb = skb;
+		tx_buf->nr_frags = bd_count;
+		tx_buf->is_push = 0;
+		tx_buf->is_ts_pkt = 0;
+
+		dma_unmap_addr_set(tx_buf, mapping, this_hdr_dma);
+		dma_unmap_len_set(tx_buf, len, 0);
+
+		if (last) {
+			tx_buf->is_sw_gso = BNXT_SW_GSO_LAST;
+			tso_dma_map_completion_save(&map, &tx_buf->sw_gso_cstate);
+		} else {
+			tx_buf->is_sw_gso = BNXT_SW_GSO_MID;
+		}
+
+		flags = (hdr_len << TX_BD_LEN_SHIFT) |
+			TX_BD_TYPE_LONG_TX_BD |
+			TX_BD_CNT(2 + bd_count);
+
+		flags |= bnxt_sw_gso_lhint(hdr_len + seg_payload);
+
+		txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+		txbd->tx_bd_haddr = cpu_to_le64(this_hdr_dma);
+		txbd->tx_bd_opaque = SET_TX_OPAQUE(bp, txr, prod,
+						   2 + bd_count);
+
+		csum = cpu_to_le32(TX_BD_FLAGS_TCP_UDP_CHKSUM |
+				   TX_BD_FLAGS_IP_CKSUM);
+
+		prod = NEXT_TX(prod);
+		bnxt_init_ext_bd(bp, txr, prod, csum,
+				 vlan_tag_flags, cfa_action);
+
+		/* Set dma_unmap_len on the LAST BD touching each
+		 * region. Since completions are in-order, the last
+		 * segment completes after all earlier ones, so the
+		 * unmap is safe.
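+		 *
+		 * E.g. if one mapped region spans BDs N..N+2, then BDs
+		 * N and N+1 keep dma_unmap_len == 0 and only BD N+2
+		 * carries the region's (addr, len), so the region is
+		 * unmapped exactly once, at the final completion.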
+		 */
+		while (tso_dma_map_next(&map, &dma_addr, &chunk_len,
+					&mapping_len, seg_payload)) {
+			prod = NEXT_TX(prod);
+			txbd = &txr->tx_desc_ring[TX_RING(bp, prod)][TX_IDX(prod)];
+			tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
+
+			txbd->tx_bd_haddr = cpu_to_le64(dma_addr);
+			dma_unmap_addr_set(tx_buf, mapping, dma_addr);
+			dma_unmap_len_set(tx_buf, len, 0);
+			tx_buf->skb = NULL;
+			tx_buf->is_sw_gso = 0;
+
+			if (mapping_len) {
+				if (last_unmap_buf) {
+					dma_unmap_addr_set(last_unmap_buf,
+							   mapping,
+							   last_unmap_addr);
+					dma_unmap_len_set(last_unmap_buf,
+							  len,
+							  last_unmap_len);
+				}
+				last_unmap_addr = dma_addr;
+				last_unmap_len = mapping_len;
+			}
+			last_unmap_buf = tx_buf;
+
+			flags = chunk_len << TX_BD_LEN_SHIFT;
+			txbd->tx_bd_len_flags_type = cpu_to_le32(flags);
+			txbd->tx_bd_opaque = 0;
+
+			seg_payload -= chunk_len;
+		}
+
+		txbd->tx_bd_len_flags_type |=
+			cpu_to_le32(TX_BD_FLAGS_PACKET_END);
+
+		prod = NEXT_TX(prod);
+	}
+
+	if (last_unmap_buf) {
+		dma_unmap_addr_set(last_unmap_buf, mapping, last_unmap_addr);
+		dma_unmap_len_set(last_unmap_buf, len, last_unmap_len);
+	}
+
+	txr->tx_inline_prod += num_segs;
+
+	netdev_tx_sent_queue(txq, skb->len);
+
+	WRITE_ONCE(txr->tx_prod, prod);
+	/* Sync BDs before doorbell */
+	wmb();
+	bnxt_db_write(bp, &txr->tx_db, prod);
+
+	if (unlikely(bnxt_tx_avail(bp, txr) <= bp->tx_wake_thresh))
+		netif_txq_try_stop(txq, bnxt_tx_avail(bp, txr),
+				   bp->tx_wake_thresh);
+
+	return NETDEV_TX_OK;
+
+drop:
 	dev_kfree_skb_any(skb);
 	dev_core_stats_tx_dropped_inc(bp->dev);
 	return NETDEV_TX_OK;
-- 
2.52.0