From: Joe Damato
To: netdev@vger.kernel.org, Michael Chan, Pavan Chebbi, Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: linux-kernel@vger.kernel.org, Joe Damato
Subject: [RFC net-next v2 05/12] net: bnxt: Use dma_unmap_len for TX completion unmapping
Date: Thu, 12 Mar 2026 15:34:42 -0700
Message-ID: <20260312223457.1999489-6-joe@dama.to>
In-Reply-To: <20260312223457.1999489-1-joe@dama.to>
References: <20260312223457.1999489-1-joe@dama.to>

Store the DMA mapping length in each TX buffer descriptor via
dma_unmap_len_set() at submit time, and use dma_unmap_len() at completion
time. This is a no-op for normal packets, but prepares for software USO,
where header BDs set dma_unmap_len to 0 because the header buffer is
unmapped collectively rather than per-segment.

Suggested-by: Jakub Kicinski
Signed-off-by: Joe Damato
---
rfcv2:
  - Use some local variables to shorten long lines. No functional change
    from rfcv1.
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 63 ++++++++++++++---------
 1 file changed, 40 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index d12e4fcd5063..ea8081aeb5ae 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -656,6 +656,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto tx_free;
 
 	dma_unmap_addr_set(tx_buf, mapping, mapping);
+	dma_unmap_len_set(tx_buf, len, len);
 	flags = (len << TX_BD_LEN_SHIFT) | TX_BD_TYPE_LONG_TX_BD |
 		TX_BD_CNT(last_frag + 2);
 
@@ -720,6 +721,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		tx_buf = &txr->tx_buf_ring[RING_TX(bp, prod)];
 		netmem_dma_unmap_addr_set(skb_frag_netmem(frag), tx_buf,
 					  mapping, mapping);
+		dma_unmap_len_set(tx_buf, len, len);
 
 		txbd->tx_bd_haddr = cpu_to_le64(mapping);
 
@@ -809,7 +811,8 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 	u16 hw_cons = txr->tx_hw_cons;
 	unsigned int tx_bytes = 0;
 	u16 cons = txr->tx_cons;
-	skb_frag_t *frag;
+	unsigned int dma_len;
+	dma_addr_t dma_addr;
 	int tx_pkts = 0;
 	bool rc = false;
 
@@ -844,19 +847,27 @@ static bool __bnxt_tx_int(struct bnxt *bp, struct bnxt_tx_ring_info *txr,
 			goto next_tx_int;
 		}
 
-		dma_unmap_single(&pdev->dev, dma_unmap_addr(tx_buf, mapping),
-				 skb_headlen(skb), DMA_TO_DEVICE);
+		if (dma_unmap_len(tx_buf, len)) {
+			dma_addr = dma_unmap_addr(tx_buf, mapping);
+			dma_len = dma_unmap_len(tx_buf, len);
+
+			dma_unmap_single(&pdev->dev, dma_addr, dma_len,
+					 DMA_TO_DEVICE);
+		}
+
 		last = tx_buf->nr_frags;
 
 		for (j = 0; j < last; j++) {
-			frag = &skb_shinfo(skb)->frags[j];
 			cons = NEXT_TX(cons);
 			tx_buf = &txr->tx_buf_ring[RING_TX(bp, cons)];
-			netmem_dma_unmap_page_attrs(&pdev->dev,
-						    dma_unmap_addr(tx_buf,
-								   mapping),
-						    skb_frag_size(frag),
-						    DMA_TO_DEVICE, 0);
+			if (dma_unmap_len(tx_buf, len)) {
+				dma_addr = dma_unmap_addr(tx_buf, mapping);
+				dma_len = dma_unmap_len(tx_buf, len);
+
+				netmem_dma_unmap_page_attrs(&pdev->dev,
+							    dma_addr, dma_len,
+							    DMA_TO_DEVICE, 0);
+			}
 		}
 		if (unlikely(is_ts_pkt)) {
 			if (BNXT_CHIP_P5(bp)) {
@@ -3400,6 +3411,8 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 {
 	int i, max_idx;
 	struct pci_dev *pdev = bp->pdev;
+	unsigned int dma_len;
+	dma_addr_t dma_addr;
 
 	max_idx = bp->tx_nr_pages * TX_DESC_CNT;
 
@@ -3410,10 +3423,10 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 
 		if (idx < bp->tx_nr_rings_xdp &&
 		    tx_buf->action == XDP_REDIRECT) {
-			dma_unmap_single(&pdev->dev,
-					 dma_unmap_addr(tx_buf, mapping),
-					 dma_unmap_len(tx_buf, len),
-					 DMA_TO_DEVICE);
+			dma_addr = dma_unmap_addr(tx_buf, mapping);
+			dma_len = dma_unmap_len(tx_buf, len);
+
+			dma_unmap_single(&pdev->dev, dma_addr, dma_len, DMA_TO_DEVICE);
 			xdp_return_frame(tx_buf->xdpf);
 			tx_buf->action = 0;
 			tx_buf->xdpf = NULL;
@@ -3435,23 +3448,27 @@ static void bnxt_free_one_tx_ring_skbs(struct bnxt *bp,
 			continue;
 		}
 
-		dma_unmap_single(&pdev->dev,
-				 dma_unmap_addr(tx_buf, mapping),
-				 skb_headlen(skb),
-				 DMA_TO_DEVICE);
+		if (dma_unmap_len(tx_buf, len)) {
+			dma_addr = dma_unmap_addr(tx_buf, mapping);
+			dma_len = dma_unmap_len(tx_buf, len);
+
+			dma_unmap_single(&pdev->dev, dma_addr, dma_len, DMA_TO_DEVICE);
+		}
 
 		last = tx_buf->nr_frags;
 		i += 2;
 		for (j = 0; j < last; j++, i++) {
 			int ring_idx = i & bp->tx_ring_mask;
-			skb_frag_t *frag = &skb_shinfo(skb)->frags[j];
 
 			tx_buf = &txr->tx_buf_ring[ring_idx];
-			netmem_dma_unmap_page_attrs(&pdev->dev,
-						    dma_unmap_addr(tx_buf,
-								   mapping),
-						    skb_frag_size(frag),
-						    DMA_TO_DEVICE, 0);
+			if (dma_unmap_len(tx_buf, len)) {
+				dma_addr = dma_unmap_addr(tx_buf, mapping);
+				dma_len = dma_unmap_len(tx_buf, len);
+
+				netmem_dma_unmap_page_attrs(&pdev->dev,
+							    dma_addr, dma_len,
+							    DMA_TO_DEVICE, 0);
+			}
 		}
 		dev_kfree_skb(skb);
 	}
-- 
2.52.0