From: Joe Damato <joe@dama.to>
To: netdev@vger.kernel.org, Michael Chan, Pavan Chebbi, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: horms@kernel.org, linux-kernel@vger.kernel.org, leon@kernel.org,
 Joe Damato <joe@dama.to>
Subject: [net-next v10 08/10] net: bnxt: Add SW GSO completion and teardown support
Date: Wed, 8 Apr 2026 16:05:57 -0700
Message-ID: <20260408230607.2019402-9-joe@dama.to>
In-Reply-To: <20260408230607.2019402-1-joe@dama.to>
References: <20260408230607.2019402-1-joe@dama.to>

Update __bnxt_tx_int() and bnxt_free_one_tx_ring_skbs() to handle SW
GSO segments:

- MID segments: adjust the tx_pkts/tx_bytes accounting and skip the
  skb free (the skb is shared across all segments and freed only once).

- LAST segments: call tso_dma_map_complete() to tear down the IOVA
  mapping if one was used. On the fallback path, payload DMA unmapping
  is handled by the existing per-BD dma_unmap_len walk.

Both MID and LAST completions advance tx_inline_cons to release the
segment's inline header slot back to the ring.

is_sw_gso is initialized to zero, so the new code paths are not run.

Also add feature advertisement logic and guardrails on ring sizing.
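The MID/LAST accounting described above can be modeled in isolation. The sketch below is a simplified, self-contained stand-in (the enum, `model_tx_bd`, and `model_txr` are illustrative types, not the driver's `bnxt_sw_tx_bd` or ring structures), showing why the GSO skb is counted and freed exactly once even though every segment generates a completion:

```c
#include <assert.h>

enum { SW_GSO_NONE = 0, SW_GSO_MID = 1, SW_GSO_LAST = 2 };

struct model_tx_bd {
	int is_sw_gso;          /* stand-in for bnxt_sw_tx_bd.is_sw_gso */
	unsigned int skb_len;   /* stand-in for skb->len */
};

struct model_txr {
	unsigned short tx_inline_cons; /* inline header slots released */
	long tx_pkts;
	long tx_bytes;
	int skb_frees;                 /* times the shared skb is released */
};

/* One TX completion: nominally count the packet, then apply the SW GSO
 * adjustments the patch describes. */
static void complete_seg(struct model_txr *txr, struct model_tx_bd *bd)
{
	txr->tx_pkts++;
	txr->tx_bytes += bd->skb_len;

	if (bd->is_sw_gso) {
		/* every SW GSO completion releases one inline header slot */
		txr->tx_inline_cons++;
		if (bd->is_sw_gso == SW_GSO_LAST) {
			/* LAST: IOVA teardown happens here; the shared skb
			 * is freed exactly once, on this completion. */
			txr->skb_frees++;
		} else {
			/* MID: the skb is shared; undo the per-segment
			 * accounting and do not free it. */
			txr->tx_pkts--;
			txr->tx_bytes -= bd->skb_len;
		}
		bd->is_sw_gso = SW_GSO_NONE;
	} else {
		txr->skb_frees++;
	}
}
```

Completing three segments of one GSO skb (MID, MID, LAST) leaves tx_pkts at 1, tx_bytes at one skb's length, and one free — matching the "freed only once" invariant in the commit message.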
Suggested-by: Jakub Kicinski
Reviewed-by: Pavan Chebbi
Signed-off-by: Joe Damato <joe@dama.to>
---
v10:
 - Wrap tx_inline_cons in WRITE_ONCE to pair with READ_ONCE in
   bnxt_inline_avail.

v9:
 - Always allocate the header buffer for non-HW-USO NICs. Avoids a
   possible NULL deref if USO is toggled off and the device is brought
   down, up, and USO is re-enabled (suggested by AI).
 - Adjust bnxt_min_tx_desc_cnt to take a features parameter. This is
   needed to prevent stale features from being examined (suggested by
   AI).

v7:
 - Dropped Pavan's Reviewed-by because some changes were made.
 - Added helper bnxt_min_tx_desc_cnt to avoid repeated code computing
   descriptor counts.
 - Updated to use the tso_dma_map_complete helper instead of calling
   the DMA IOVA API directly.

v5:
 - Added Pavan's Reviewed-by. No functional changes.

v3:
 - Completion paths updated to use the DMA IOVA APIs to tear down
   mappings.

rfcv2:
 - Update the shared header buffer consumer on TX completion.

 drivers/net/ethernet/broadcom/bnxt/bnxt.c     | 75 ++++++++++++++++---
 .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 19 ++++-
 drivers/net/ethernet/broadcom/bnxt/bnxt_gso.h |  9 +++
 3 files changed, 92 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index bd93edb09ee0..26aae48a7d0e 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -74,6 +74,8 @@
 #include "bnxt_debugfs.h"
 #include "bnxt_coredump.h"
 #include "bnxt_hwmon.h"
+#include "bnxt_gso.h"
+#include <net/tso.h>
 
 #define BNXT_TX_TIMEOUT		(5 * HZ)
 #define BNXT_DEF_MSG_ENABLE	(NETIF_MSG_DRV | NETIF_MSG_HW | \
@@ -817,12 +819,13 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 	bool rc = false;
 
 	while (RING_TX(bp, cons) != hw_cons) {
-		struct bnxt_sw_tx_bd *tx_buf;
+		struct bnxt_sw_tx_bd *tx_buf, *head_buf;
 		struct sk_buff *skb;
 		bool is_ts_pkt;
 		int j, last;
 
 		tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
+		head_buf = tx_buf;
 		skb = tx_buf->skb;
 
 		if (unlikely(!skb)) {
@@ -869,6 +872,22 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 						       DMA_TO_DEVICE, 0);
 			}
 		}
+
+		if (unlikely(head_buf->is_sw_gso)) {
+			u16 inline_cons = txr->tx_inline_cons + 1;
+
+			WRITE_ONCE(txr->tx_inline_cons, inline_cons);
+			if (head_buf->is_sw_gso == BNXT_SW_GSO_LAST) {
+				tso_dma_map_complete(&pdev->dev,
+						     &head_buf->sw_gso_cstate);
+			} else {
+				tx_pkts--;
+				tx_bytes -= skb->len;
+				skb = NULL;
+			}
+			head_buf->is_sw_gso = 0;
+		}
+
 		if (unlikely(is_ts_pkt)) {
 			if (BNXT_CHIP_P5(bp)) {
 				/* PTP worker takes ownership of the skb */
@@ -3412,6 +3431,7 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 
 	for (i = 0; i < max_idx;) {
 		struct bnxt_sw_tx_bd *tx_buf = &txr->tx_buf_ring[i];
+		struct bnxt_sw_tx_bd *head_buf = tx_buf;
 		struct sk_buff *skb;
 		int j, last;
 
@@ -3466,7 +3486,20 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 					       DMA_TO_DEVICE, 0);
 			}
 		}
-		dev_kfree_skb(skb);
+		if (head_buf->is_sw_gso) {
+			u16 inline_cons = txr->tx_inline_cons + 1;
+
+			WRITE_ONCE(txr->tx_inline_cons, inline_cons);
+			if (head_buf->is_sw_gso == BNXT_SW_GSO_LAST) {
+				tso_dma_map_complete(&pdev->dev,
+						     &head_buf->sw_gso_cstate);
+			} else {
+				skb = NULL;
+			}
+			head_buf->is_sw_gso = 0;
+		}
+		if (skb)
+			dev_kfree_skb(skb);
 	}
 	netdev_tx_reset_queue(netdev_get_tx_queue(bp->dev, idx));
 }
@@ -3992,9 +4025,9 @@ static void bnxt_free_tx_inline_buf(struct bnxt_tx_ring_info *txr,
 	txr->tx_inline_size = 0;
 }
 
-static int __maybe_unused bnxt_alloc_tx_inline_buf(struct bnxt_tx_ring_info *txr,
-						   struct pci_dev *pdev,
-						   unsigned int size)
+static int bnxt_alloc_tx_inline_buf(struct bnxt_tx_ring_info *txr,
+				    struct pci_dev *pdev,
+				    unsigned int size)
 {
 	txr->tx_inline_buf = kmalloc(size, GFP_KERNEL);
 	if (!txr->tx_inline_buf)
@@ -4097,6 +4130,13 @@ static int bnxt_alloc_tx_rings(struct bnxt *bp)
 				sizeof(struct tx_push_bd);
 			txr->data_mapping = cpu_to_le64(mapping);
 		}
+		if (!(bp->flags & BNXT_FLAG_UDP_GSO_CAP)) {
+			rc = bnxt_alloc_tx_inline_buf(txr, pdev,
+						      BNXT_SW_USO_MAX_SEGS *
+						      TSO_HEADER_SIZE);
+			if (rc)
+				return rc;
+		}
 		qidx = bp->tc_to_qidx[j];
 		ring->queue_id = bp->q_info[qidx].queue_id;
 		spin_lock_init(&txr->xdp_tx_lock);
@@ -4635,10 +4675,13 @@ static int bnxt_init_rx_rings(struct bnxt *bp)
 
 static int bnxt_init_tx_rings(struct bnxt *bp)
 {
+	netdev_features_t features;
 	u16 i;
 
+	features = bp->dev->features;
+
 	bp->tx_wake_thresh = max_t(int, bp->tx_ring_size / 2,
-				   BNXT_MIN_TX_DESC_CNT);
+				   bnxt_min_tx_desc_cnt(bp, features));
 
 	for (i = 0; i < bp->tx_nr_rings; i++) {
 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
@@ -13837,6 +13880,11 @@ static netdev_features_t bnxt_fix_features(struct net_device *dev,
 	if ((features & NETIF_F_NTUPLE) && !bnxt_rfs_capable(bp, false))
 		features &= ~NETIF_F_NTUPLE;
 
+	if ((features & NETIF_F_GSO_UDP_L4) &&
+	    !(bp->flags & BNXT_FLAG_UDP_GSO_CAP) &&
+	    bp->tx_ring_size < 2 * BNXT_SW_USO_MAX_DESCS)
+		features &= ~NETIF_F_GSO_UDP_L4;
+
 	if ((bp->flags & BNXT_FLAG_NO_AGG_RINGS) || bp->xdp_prog)
 		features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW);
 
@@ -13882,6 +13930,9 @@ static int bnxt_set_features(struct net_device *dev, netdev_features_t features)
 	int rc = 0;
 	bool re_init = false;
 
+	bp->tx_wake_thresh = max_t(int, bp->tx_ring_size / 2,
+				   bnxt_min_tx_desc_cnt(bp, features));
+
 	flags &= ~BNXT_FLAG_ALL_CONFIG_FEATS;
 	if (features & NETIF_F_GRO_HW)
 		flags |= BNXT_FLAG_GRO;
@@ -16907,8 +16958,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 			   NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM |
 			   NETIF_F_GSO_PARTIAL | NETIF_F_RXHASH |
 			   NETIF_F_RXCSUM | NETIF_F_GRO;
-	if (bp->flags & BNXT_FLAG_UDP_GSO_CAP)
-		dev->hw_features |= NETIF_F_GSO_UDP_L4;
+	dev->hw_features |= NETIF_F_GSO_UDP_L4;
 
 	if (BNXT_SUPPORTS_TPA(bp))
 		dev->hw_features |= NETIF_F_LRO;
@@ -16941,8 +16991,15 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	dev->priv_flags |= IFF_UNICAST_FLT;
 
 	netif_set_tso_max_size(dev, GSO_MAX_SIZE);
-	if (bp->tso_max_segs)
+	if (!(bp->flags & BNXT_FLAG_UDP_GSO_CAP)) {
+		u16 max_segs = BNXT_SW_USO_MAX_SEGS;
+
+		if (bp->tso_max_segs)
+			max_segs = min_t(u16, max_segs, bp->tso_max_segs);
+		netif_set_tso_max_segs(dev, max_segs);
+	} else if (bp->tso_max_segs) {
 		netif_set_tso_max_segs(dev, bp->tso_max_segs);
+	}
 
 	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
 			    NETDEV_XDP_ACT_RX_SG;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 6826bf762d26..9ded88196bb4 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -33,6 +33,7 @@
 #include "bnxt_xdp.h"
 #include "bnxt_ptp.h"
 #include "bnxt_ethtool.h"
+#include "bnxt_gso.h"
 #include "bnxt_nvm_defs.h"	/* NVRAM content constant and structure defs */
 #include "bnxt_fw_hdr.h"	/* Firmware hdr constant and structure defs */
 #include "bnxt_coredump.h"
@@ -852,12 +853,18 @@ static int bnxt_set_ringparam(struct net_device *dev,
 	u8 tcp_data_split = kernel_ering->tcp_data_split;
 	struct bnxt *bp = netdev_priv(dev);
 	u8 hds_config_mod;
+	int rc;
 
 	if ((ering->rx_pending > BNXT_MAX_RX_DESC_CNT) ||
 	    (ering->tx_pending > BNXT_MAX_TX_DESC_CNT) ||
 	    (ering->tx_pending < BNXT_MIN_TX_DESC_CNT))
 		return -EINVAL;
 
+	if ((dev->features & NETIF_F_GSO_UDP_L4) &&
+	    !(bp->flags & BNXT_FLAG_UDP_GSO_CAP) &&
+	    ering->tx_pending < 2 * BNXT_SW_USO_MAX_DESCS)
+		return -EINVAL;
+
 	hds_config_mod = tcp_data_split != dev->cfg->hds_config;
 	if (tcp_data_split == ETHTOOL_TCP_DATA_SPLIT_DISABLED && hds_config_mod)
 		return -EINVAL;
@@ -882,9 +889,17 @@ static int bnxt_set_ringparam(struct net_device *dev,
 	bp->tx_ring_size = ering->tx_pending;
 	bnxt_set_ring_params(bp);
 
-	if (netif_running(dev))
-		return bnxt_open_nic(bp, false, false);
+	if (netif_running(dev)) {
+		rc = bnxt_open_nic(bp, false, false);
+		if (rc)
+			return rc;
+	}
 
+	/* ring size changes may affect features (SW USO requires a minimum
+	 * ring size), so recalculate features to ensure the correct features
+	 * are blocked/available.
+	 */
+	netdev_update_features(dev);
 	return 0;
 }
 
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.h
index 6ba8ccc451de..47528c20f311 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_gso.h
@@ -29,6 +29,15 @@ static inline u16 bnxt_inline_avail(struct bnxt_tx_ring_info *txr)
 		(u16)(txr->tx_inline_prod - READ_ONCE(txr->tx_inline_cons));
 }
 
+static inline int bnxt_min_tx_desc_cnt(struct bnxt *bp,
+				       netdev_features_t features)
+{
+	if (!(bp->flags & BNXT_FLAG_UDP_GSO_CAP) &&
+	    (features & NETIF_F_GSO_UDP_L4))
+		return BNXT_SW_USO_MAX_DESCS;
+	return BNXT_MIN_TX_DESC_CNT;
+}
+
 netdev_tx_t bnxt_sw_udp_gso_xmit(struct bnxt *bp,
				 struct bnxt_tx_ring_info *txr,
				 struct netdev_queue *txq,
-- 
2.52.0
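The v10 change only wraps the tx_inline_cons store in WRITE_ONCE to pair with the READ_ONCE in bnxt_inline_avail(); the occupancy math itself is ordinary 16-bit producer/consumer arithmetic, which stays correct across counter wraparound. A minimal standalone check (`inline_in_use` is our illustrative name for the subtraction, not a driver helper):

```c
#include <assert.h>
#include <stdint.h>

/* 16-bit producer/consumer occupancy, mirroring the
 * (u16)(tx_inline_prod - READ_ONCE(tx_inline_cons)) expression in
 * bnxt_inline_avail(): unsigned subtraction remains correct even
 * after either counter wraps past 65535. */
static uint16_t inline_in_use(uint16_t prod, uint16_t cons)
{
	return (uint16_t)(prod - cons);
}
```

This is why the completion paths can simply increment tx_inline_cons forever rather than reset it: the difference, not the absolute values, carries the ring state.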