From: David Howells
To: Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni
Cc: David Howells, Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton,
    Christian Brauner, Chuck Lever III, Linus Torvalds,
    netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 07/55] tcp: Support MSG_SPLICE_PAGES
Date: Fri, 31 Mar 2023 17:08:26 +0100
Message-Id: <20230331160914.1608208-8-dhowells@redhat.com>
In-Reply-To: <20230331160914.1608208-1-dhowells@redhat.com>
References: <20230331160914.1608208-1-dhowells@redhat.com>

Make TCP's sendmsg() support MSG_SPLICE_PAGES.  This causes pages to be
spliced from the source iterator.

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.

Signed-off-by: David Howells
cc: Eric Dumazet
cc: "David S. Miller"
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---
 net/ipv4/tcp.c | 67 ++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 60 insertions(+), 7 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 288693981b00..910b327c236e 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1220,7 +1220,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 	int flags, err, copied = 0;
 	int mss_now = 0, size_goal, copied_syn = 0;
 	int process_backlog = 0;
-	bool zc = false;
+	int zc = 0;
 	long timeo;
 
 	flags = msg->msg_flags;
@@ -1231,17 +1231,22 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		if (msg->msg_ubuf) {
 			uarg = msg->msg_ubuf;
 			net_zcopy_get(uarg);
-			zc = sk->sk_route_caps & NETIF_F_SG;
+			if (sk->sk_route_caps & NETIF_F_SG)
+				zc = 1;
 		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
 			uarg = msg_zerocopy_realloc(sk, size, skb_zcopy(skb));
 			if (!uarg) {
 				err = -ENOBUFS;
 				goto out_err;
 			}
-			zc = sk->sk_route_caps & NETIF_F_SG;
-			if (!zc)
+			if (sk->sk_route_caps & NETIF_F_SG)
+				zc = 1;
+			else
 				uarg_to_msgzc(uarg)->zerocopy = 0;
 		}
+	} else if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES) && size) {
+		if (sk->sk_route_caps & NETIF_F_SG)
+			zc = 2;
 	}
 
 	if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect) &&
@@ -1304,7 +1309,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		goto do_error;
 
 	while (msg_data_left(msg)) {
-		int copy = 0;
+		ssize_t copy = 0;
 
 		skb = tcp_write_queue_tail(sk);
 		if (skb)
@@ -1345,7 +1350,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		if (copy > msg_data_left(msg))
 			copy = msg_data_left(msg);
 
-		if (!zc) {
+		if (zc == 0) {
 			bool merge = true;
 			int i = skb_shinfo(skb)->nr_frags;
 			struct page_frag *pfrag = sk_page_frag(sk);
@@ -1390,7 +1395,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 				page_ref_inc(pfrag->page);
 			}
 			pfrag->offset += copy;
-		} else {
+		} else if (zc == 1) {
 			/* First append to a fragless skb builds initial
 			 * pure zerocopy skb
 			 */
@@ -1411,6 +1416,54 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			if (err < 0)
 				goto do_error;
 			copy = err;
+		} else if (zc == 2) {
+			/* Splice in data. */
+			struct page *page = NULL, **pages = &page;
+			size_t off = 0, part;
+			bool can_coalesce;
+			int i = skb_shinfo(skb)->nr_frags;
+
+			copy = iov_iter_extract_pages(&msg->msg_iter, &pages,
+						      copy, 1, 0, &off);
+			if (copy <= 0) {
+				err = copy ?: -EIO;
+				goto do_error;
+			}
+
+			can_coalesce = skb_can_coalesce(skb, i, page, off);
+			if (!can_coalesce && i >= READ_ONCE(sysctl_max_skb_frags)) {
+				tcp_mark_push(tp, skb);
+				iov_iter_revert(&msg->msg_iter, copy);
+				goto new_segment;
+			}
+			if (tcp_downgrade_zcopy_pure(sk, skb)) {
+				iov_iter_revert(&msg->msg_iter, copy);
+				goto wait_for_space;
+			}
+
+			part = tcp_wmem_schedule(sk, copy);
+			iov_iter_revert(&msg->msg_iter, copy - part);
+			if (!part)
+				goto wait_for_space;
+			copy = part;
+
+			if (can_coalesce) {
+				skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+			} else {
+				get_page(page);
+				skb_fill_page_desc_noacc(skb, i, page, off, copy);
+			}
+			page = NULL;
+
+			if (!(flags & MSG_NO_SHARED_FRAGS))
+				skb_shinfo(skb)->flags |= SKBFL_SHARED_FRAG;
+
+			skb->len += copy;
+			skb->data_len += copy;
+			skb->truesize += copy;
+			sk_wmem_queued_add(sk, copy);
+			sk_mem_charge(sk, copy);
+		}
 
 		if (!copied)