From: Joe Damato
To: netdev@vger.kernel.org, Michael Chan, Pavan Chebbi, Andrew Lunn,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: horms@kernel.org, linux-kernel@vger.kernel.org, leon@kernel.org,
	Joe Damato
Subject: [net-next v9 04/10] net: bnxt: Use dma_unmap_len for TX completion unmapping
Date: Tue, 7 Apr 2026 15:03:00 -0700
Message-ID: <20260407220313.3990909-5-joe@dama.to>
In-Reply-To: <20260407220313.3990909-1-joe@dama.to>
References: <20260407220313.3990909-1-joe@dama.to>

Store the DMA mapping length in each TX buffer descriptor via
dma_unmap_len_set() at submit time, and use dma_unmap_len() at
completion time. This is a no-op for normal packets but prepares for
software USO, where header BDs set dma_unmap_len to 0 because the
header buffer is unmapped collectively rather than per-segment.

Suggested-by: Jakub Kicinski
Reviewed-by: Pavan Chebbi
Signed-off-by: Joe Damato
---
v4:
  - Added Pavan's Reviewed-by tag. No functional changes.

rfcv2:
  - Use some local variables to shorten long lines. No functional
    change from rfcv1.
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 63 ++++++++++++++---------
 1 file changed, 40 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index d1f0969b781c..32a0e71e9fb7 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -656,6 +656,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto tx_free;
 
 	dma_unmap_addr_set(tx_buf, mapping, mapping);
+	dma_unmap_len_set(tx_buf, len, len);
 	flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
 		TX_BD_CNT(last_frag + 2);
 
@@ -720,6 +721,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		netmem_dma_unmap_addr_set(skb_frag_netmem(frag), tx_buf,
 					  mapping, mapping);
+		dma_unmap_len_set(tx_buf, len, len);
 
 		txbd->tx_bd_haddr = cpu_to_le64(mapping);
 
@@ -809,7 +811,8 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 	u16 hw_cons = txr->tx_hw_cons;
 	unsigned int tx_bytes = 0;
 	u16 cons = txr->tx_cons;
-	skb_frag_t *frag;
+	unsigned int dma_len;
+	dma_addr_t dma_addr;
 	int tx_pkts = 0;
 	bool rc = false;
 
@@ -844,19 +847,27 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 			goto next_tx_int;
 		}
 
-		dma_unmap_single(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
-				 skb_headlen(skb), DMA_TO_DEVICE);
+		if (dma_unmap_len(tx_buf, len)) {
+			dma_addr = dma_unmap_addr(tx_buf, mapping);
+			dma_len = dma_unmap_len(tx_buf, len);
+
+			dma_unmap_single(&pdev->dev, dma_addr, dma_len,
+					 DMA_TO_DEVICE);
+		}
+
 		last = tx_buf->nr_frags;
 
 		for (j = 0; j < last; j++) {
-			frag = &skb_shinfo(skb)->frags[j];
 			cons = NEXT_TX(cons);
 			tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
-			netmem_dma_unmap_page_attrs(&pdev->dev,
-						    dma_unmap_addr(tx_buf,
-								   mapping),
-						    skb_frag_size(frag),
-						    DMA_TO_DEVICE, 0);
+			if (dma_unmap_len(tx_buf, len)) {
+				dma_addr = dma_unmap_addr(tx_buf, mapping);
+				dma_len = dma_unmap_len(tx_buf, len);
+
+				netmem_dma_unmap_page_attrs(&pdev->dev,
+							    dma_addr, dma_len,
+							    DMA_TO_DEVICE, 0);
+			}
 		}
 		if (unlikely(is_ts_pkt)) {
 			if (BNXT_CHIP_P5(bp)) {
@@ -3394,6 +3405,8 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 {
 	int i, max_idx;
 	struct pci_dev *pdev = bp->pdev;
+	unsigned int dma_len;
+	dma_addr_t dma_addr;
 
 	max_idx = bp->tx_nr_pages * TX_DESC_CNT;
 
@@ -3404,10 +3417,10 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 
 		if (idx < bp->tx_nr_rings_xdp &&
 		    tx_buf->action == XDP_REDIRECT) {
-			dma_unmap_single(&pdev->dev,
-					 dma_unmap_addr(tx_buf, mapping),
-					 dma_unmap_len(tx_buf, len),
-					 DMA_TO_DEVICE);
+			dma_addr = dma_unmap_addr(tx_buf, mapping);
+			dma_len = dma_unmap_len(tx_buf, len);
+
+			dma_unmap_single(&pdev->dev, dma_addr, dma_len, DMA_TO_DEVICE);
 			xdp_return_frame(tx_buf->xdpf);
 			tx_buf->action = 0;
 			tx_buf->xdpf = NULL;
@@ -3429,23 +3442,27 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 			continue;
 		}
 
-		dma_unmap_single(&pdev->dev,
-				 dma_unmap_addr(tx_buf, mapping),
-				 skb_headlen(skb),
-				 DMA_TO_DEVICE);
+		if (dma_unmap_len(tx_buf, len)) {
+			dma_addr = dma_unmap_addr(tx_buf, mapping);
+			dma_len = dma_unmap_len(tx_buf, len);
+
+			dma_unmap_single(&pdev->dev, dma_addr, dma_len, DMA_TO_DEVICE);
+		}
 
 		last = tx_buf->nr_frags;
 		i += 2;
 		for (j = 0; j < last; j++, i++) {
 			int ring_idx = i & bp->tx_ring_mask;
-			skb_frag_t *frag = &skb_shinfo(skb)->frags[j];
 
 			tx_buf = &txr->tx_buf_ring[ring_idx];
-			netmem_dma_unmap_page_attrs(&pdev->dev,
-						    dma_unmap_addr(tx_buf,
-								   mapping),
-						    skb_frag_size(frag),
-						    DMA_TO_DEVICE, 0);
+			if (dma_unmap_len(tx_buf, len)) {
+				dma_addr = dma_unmap_addr(tx_buf, mapping);
+				dma_len = dma_unmap_len(tx_buf, len);
+
+				netmem_dma_unmap_page_attrs(&pdev->dev,
+							    dma_addr, dma_len,
+							    DMA_TO_DEVICE, 0);
+			}
 		}
 		dev_kfree_skb(skb);
 	}
-- 
2.52.0