From: Joe Damato
To: netdev@vger.kernel.org, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman
Cc: michael.chan@broadcom.com, pavan.chebbi@broadcom.com, linux-kernel@vger.kernel.org, Joe Damato
Subject: [RFC net-next v2 02/12] net: tso: Add tso_dma_map helpers
Date: Thu, 12 Mar 2026 15:34:39 -0700
Message-ID: <20260312223457.1999489-3-joe@dama.to>
In-Reply-To: <20260312223457.1999489-1-joe@dama.to>
References: <20260312223457.1999489-1-joe@dama.to>

Add helpers to initialize, iterate, and clean up a tso_dma_map:

tso_dma_map_init(): DMA-maps the linear payload region and all frags
upfront into the tso_dma_map struct. Returns 0 on success, cleans up
partial mappings on failure.

tso_dma_map_cleanup(): unmaps all DMA regions. Used on error paths.

tso_dma_map_count(): counts how many descriptors the next N bytes of
payload will need, without advancing the iterator.

tso_dma_map_next(): yields the next (dma_addr, chunk_len) pair.
Indicates when a chunk starts a new DMA mapping so the driver can set
dma_unmap_len on that BD for completion-time unmapping.
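A driver TX path is expected to use the helpers roughly as follows.
This is an illustrative sketch only: fill_hdr_bd(), fill_payload_bd(),
bds_available, mss, and hdr are hypothetical stand-ins for
driver-specific descriptor code, not part of this series.

	/* Sketch of the intended call sequence; fill_*_bd() and
	 * bds_available are hypothetical driver-side placeholders.
	 */
	struct tso_dma_map map;
	struct tso_t tso;
	int hdr_len = tso_start(skb, &tso);
	unsigned int remaining = skb->len - hdr_len;

	if (tso_dma_map_init(&map, dev, skb, hdr_len))
		return NETDEV_TX_BUSY;

	while (remaining > 0) {
		unsigned int seg_len = min(mss, remaining);
		unsigned int left = seg_len;
		unsigned int chunk, mapping;
		dma_addr_t dma;

		/* +1 for the header BD of this segment */
		if (tso_dma_map_count(&map, seg_len) + 1 > bds_available) {
			tso_dma_map_cleanup(&map);
			return NETDEV_TX_BUSY;
		}

		tso_build_hdr(skb, hdr, &tso, seg_len, seg_len == remaining);
		fill_hdr_bd(hdr, hdr_len);		/* hypothetical */

		while (tso_dma_map_next(&map, &dma, &chunk, &mapping, left)) {
			fill_payload_bd(dma, chunk, mapping); /* hypothetical */
			left -= chunk;
		}
		remaining -= seg_len;
	}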
Suggested-by: Jakub Kicinski
Signed-off-by: Joe Damato
---
 include/net/tso.h |   8 +++
 net/core/tso.c    | 165 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 173 insertions(+)

diff --git a/include/net/tso.h b/include/net/tso.h
index cd4b98dbea71..a1fa605f26b4 100644
--- a/include/net/tso.h
+++ b/include/net/tso.h
@@ -62,4 +62,12 @@ struct tso_dma_map {
 	} frags[MAX_SKB_FRAGS];
 };
 
+int tso_dma_map_init(struct tso_dma_map *map, struct device *dev,
+		     const struct sk_buff *skb, unsigned int hdr_len);
+void tso_dma_map_cleanup(struct tso_dma_map *map);
+unsigned int tso_dma_map_count(const struct tso_dma_map *map, unsigned int len);
+bool tso_dma_map_next(struct tso_dma_map *map, dma_addr_t *addr,
+		      unsigned int *chunk_len, unsigned int *mapping_len,
+		      unsigned int seg_remaining);
+
 #endif /* _TSO_H */
diff --git a/net/core/tso.c b/net/core/tso.c
index 6df997b9076e..fdbef4ca840d 100644
--- a/net/core/tso.c
+++ b/net/core/tso.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 
 void tso_build_hdr(const struct sk_buff *skb, char *hdr, struct tso_t *tso,
@@ -87,3 +88,167 @@ int tso_start(struct sk_buff *skb, struct tso_t *tso)
 	return hdr_len;
 }
 EXPORT_SYMBOL(tso_start);
+
+/**
+ * tso_dma_map_init - DMA-map GSO payload regions
+ * @map: map struct to initialize
+ * @dev: device for DMA mapping
+ * @skb: the GSO skb
+ * @hdr_len: per-segment header length in bytes
+ *
+ * DMA-maps the linear payload (after headers) and all frags.
+ * Positions the iterator at byte 0 of the payload.
+ *
+ * Returns 0 on success, -ENOMEM on DMA mapping failure (partial mappings
+ * are cleaned up internally).
+ */
+int tso_dma_map_init(struct tso_dma_map *map, struct device *dev,
+		     const struct sk_buff *skb, unsigned int hdr_len)
+{
+	unsigned int linear_len = skb_headlen(skb) - hdr_len;
+	unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+	int i;
+
+	map->dev = dev;
+	map->skb = skb;
+	map->hdr_len = hdr_len;
+	map->frag_idx = -1;
+	map->offset = 0;
+	map->linear_len = 0;
+	map->nr_frags = 0;
+
+	if (linear_len > 0) {
+		map->linear_dma = dma_map_single(dev, skb->data + hdr_len,
+						 linear_len, DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, map->linear_dma))
+			return -ENOMEM;
+		map->linear_len = linear_len;
+	}
+
+	for (i = 0; i < nr_frags; i++) {
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+		map->frags[i].len = skb_frag_size(frag);
+		map->frags[i].dma = skb_frag_dma_map(dev, frag, 0,
+						     map->frags[i].len,
+						     DMA_TO_DEVICE);
+		if (dma_mapping_error(dev, map->frags[i].dma)) {
+			tso_dma_map_cleanup(map);
+			return -ENOMEM;
+		}
+		map->nr_frags = i + 1;
+	}
+
+	if (linear_len == 0 && nr_frags > 0)
+		map->frag_idx = 0;
+
+	return 0;
+}
+EXPORT_SYMBOL(tso_dma_map_init);
+
+/**
+ * tso_dma_map_cleanup - unmap all DMA regions in a tso_dma_map
+ * @map: the map to clean up
+ *
+ * Unmaps linear payload and all mapped frags. Used on error paths.
+ * Success paths use the driver's completion path to handle unmapping.
+ */
+void tso_dma_map_cleanup(struct tso_dma_map *map)
+{
+	int i;
+
+	if (map->linear_len)
+		dma_unmap_single(map->dev, map->linear_dma, map->linear_len,
+				 DMA_TO_DEVICE);
+
+	for (i = 0; i < map->nr_frags; i++)
+		dma_unmap_page(map->dev, map->frags[i].dma, map->frags[i].len,
+			       DMA_TO_DEVICE);
+
+	map->linear_len = 0;
+	map->nr_frags = 0;
+}
+EXPORT_SYMBOL(tso_dma_map_cleanup);
+
+/**
+ * tso_dma_map_count - count descriptors for a payload range
+ * @map: the payload map
+ * @len: number of payload bytes in this segment
+ *
+ * Counts how many contiguous DMA region chunks the next @len bytes
+ * will span, without advancing the iterator. Uses region sizes from
+ * the current position.
+ *
+ * Returns the number of descriptors needed for @len bytes of payload.
+ */
+unsigned int tso_dma_map_count(const struct tso_dma_map *map, unsigned int len)
+{
+	unsigned int offset = map->offset;
+	int idx = map->frag_idx;
+	unsigned int count = 0;
+
+	while (len > 0) {
+		unsigned int region_len, chunk;
+
+		if (idx == -1)
+			region_len = map->linear_len;
+		else
+			region_len = map->frags[idx].len;
+
+		chunk = min(len, region_len - offset);
+		len -= chunk;
+		count++;
+		offset = 0;
+		idx++;
+	}
+
+	return count;
+}
+EXPORT_SYMBOL(tso_dma_map_count);
+
+/**
+ * tso_dma_map_next - yield the next DMA address range
+ * @map: the payload map
+ * @addr: output DMA address
+ * @chunk_len: output chunk length
+ * @mapping_len: full DMA mapping length when this chunk starts a new
+ *               mapping region, or 0 when continuing a previous one.
+ *               Driver can assign this to the last descriptor.
+ * @seg_remaining: bytes left in current segment
+ *
+ * Yields the next (dma_addr, chunk_len) pair and advances the iterator.
+ *
+ * Returns true if a chunk was yielded, false when @seg_remaining is 0.
+ */
+bool tso_dma_map_next(struct tso_dma_map *map, dma_addr_t *addr,
+		      unsigned int *chunk_len, unsigned int *mapping_len,
+		      unsigned int seg_remaining)
+{
+	unsigned int region_len, chunk;
+
+	if (!seg_remaining)
+		return false;
+
+	if (map->frag_idx == -1) {
+		region_len = map->linear_len;
+		chunk = min(seg_remaining, region_len - map->offset);
+		*addr = map->linear_dma + map->offset;
+		*mapping_len = (map->offset == 0) ? region_len : 0;
+	} else {
+		region_len = map->frags[map->frag_idx].len;
+		chunk = min(seg_remaining, region_len - map->offset);
+		*addr = map->frags[map->frag_idx].dma + map->offset;
+		*mapping_len = (map->offset == 0) ? region_len : 0;
+	}
+
+	*chunk_len = chunk;
+	map->offset += chunk;
+
+	if (map->offset >= region_len) {
+		map->frag_idx++;
+		map->offset = 0;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL(tso_dma_map_next);
-- 
2.52.0