From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 01/16] net: Declare MSG_SPLICE_PAGES internal sendmsg() flag
Date: Mon, 22 May 2023 13:11:10 +0100
Message-Id: <20230522121125.2595254-2-dhowells@redhat.com>

Declare MSG_SPLICE_PAGES, an internal sendmsg() flag that hints to a
network protocol that it should splice pages from the source iterator
rather than copying the data if it can.  This flag is added to a list that
is cleared by sendmsg syscalls on entry.

This is intended as a replacement for the ->sendpage() op, allowing a way
to splice in several multipage folios in one go.
Signed-off-by: David Howells
Reviewed-by: Willem de Bruijn
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: io-uring@vger.kernel.org
cc: netdev@vger.kernel.org
---

Notes:
    ver #7)
     - In ____sys_sendmsg(), clear internal flags before setting msg_flags.
     - Clear internal flags in uring io_send{,_zc}().

 include/linux/socket.h | 3 +++
 io_uring/net.c         | 2 ++
 net/socket.c           | 2 ++
 3 files changed, 7 insertions(+)

diff --git a/include/linux/socket.h b/include/linux/socket.h
index 13c3a237b9c9..bd1cc3238851 100644
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -327,6 +327,7 @@ struct ucred {
  */

 #define MSG_ZEROCOPY	0x4000000	/* Use user data in kernel path */
+#define MSG_SPLICE_PAGES 0x8000000	/* Splice the pages from the iterator in sendmsg() */
 #define MSG_FASTOPEN	0x20000000	/* Send data in TCP SYN */
 #define MSG_CMSG_CLOEXEC 0x40000000	/* Set close_on_exec for file
					   descriptor received through
@@ -337,6 +338,8 @@ struct ucred {
 #define MSG_CMSG_COMPAT	0	/* We never have 32 bit fixups */
 #endif

+/* Flags to be cleared on entry by sendmsg and sendmmsg syscalls */
+#define MSG_INTERNAL_SENDMSG_FLAGS (MSG_SPLICE_PAGES)

 /* Setsockoptions(2) level. Thanks to BSD these must match IPPROTO_xxx */
 #define SOL_IP		0
diff --git a/io_uring/net.c b/io_uring/net.c
index 89e839013837..f7cbb3c7a575 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -389,6 +389,7 @@ int io_send(struct io_kiocb *req, unsigned int issue_flags)
 	if (flags & MSG_WAITALL)
 		min_ret = iov_iter_count(&msg.msg_iter);

+	flags &= ~MSG_INTERNAL_SENDMSG_FLAGS;
 	msg.msg_flags = flags;
 	ret = sock_sendmsg(sock, &msg);
 	if (ret < min_ret) {
@@ -1136,6 +1137,7 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
 		msg_flags |= MSG_DONTWAIT;
 	if (msg_flags & MSG_WAITALL)
 		min_ret = iov_iter_count(&msg.msg_iter);
+	msg_flags &= ~MSG_INTERNAL_SENDMSG_FLAGS;

 	msg.msg_flags = msg_flags;
 	msg.msg_ubuf = &io_notif_to_data(zc->notif)->uarg;
diff --git a/net/socket.c b/net/socket.c
index b7e01d0fe082..3df96e9ba4e2 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -2138,6 +2138,7 @@ int __sys_sendto(int fd, void __user *buff, size_t len, unsigned int flags,
 		msg.msg_name = (struct sockaddr *)&address;
 		msg.msg_namelen = addr_len;
 	}
+	flags &= ~MSG_INTERNAL_SENDMSG_FLAGS;
 	if (sock->file->f_flags & O_NONBLOCK)
 		flags |= MSG_DONTWAIT;
 	msg.msg_flags = flags;
@@ -2483,6 +2484,7 @@ static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys,
 		msg_sys->msg_control = ctl_buf;
 		msg_sys->msg_control_is_user = false;
 	}
+	flags &= ~MSG_INTERNAL_SENDMSG_FLAGS;
 	msg_sys->msg_flags = flags;

 	if (sock->file->f_flags & O_NONBLOCK)
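
For illustration, a minimal sketch of how an in-kernel caller is expected
to use the new flag (the helper name example_splice_page is hypothetical;
the conversions later in this series follow the same bvec-iterator
pattern).  The page must be one that sendpage_ok() accepts, i.e. not a
slab allocation and with a non-zero refcount:

	#include <linux/bvec.h>
	#include <linux/net.h>
	#include <linux/socket.h>
	#include <linux/uio.h>

	/* Sketch only: splice one refcounted page into a socket instead
	 * of copying it.  MSG_SPLICE_PAGES is internal; any
	 * userspace-supplied copy of the flag is cleared on syscall
	 * entry, so only kernel callers can request splicing.
	 */
	static int example_splice_page(struct socket *sock, struct page *page,
				       size_t offset, size_t len)
	{
		struct bio_vec bvec;
		struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };

		bvec_set_page(&bvec, page, len, offset);
		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);
		return sock_sendmsg(sock, &msg);
	}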

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 02/16] net: Pass max frags into skb_append_pagefrags()
Date: Mon, 22 May 2023 13:11:11 +0100
Message-Id: <20230522121125.2595254-3-dhowells@redhat.com>

Pass the maximum number of fragments into skb_append_pagefrags() rather
than using MAX_SKB_FRAGS so that it can be used from code that wants to
specify sysctl_max_skb_frags.

Signed-off-by: David Howells
cc: Eric Dumazet
Miller" cc: David Ahern cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org --- include/linux/skbuff.h | 2 +- net/core/skbuff.c | 4 ++-- net/ipv4/ip_output.c | 3 ++- net/unix/af_unix.c | 2 +- 4 files changed, 6 insertions(+), 5 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 8cff3d817131..15011408c47c 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -1383,7 +1383,7 @@ static inline int skb_pad(struct sk_buff *skb, int pa= d) #define dev_kfree_skb(a) consume_skb(a) =20 int skb_append_pagefrags(struct sk_buff *skb, struct page *page, - int offset, size_t size); + int offset, size_t size, size_t max_frags); =20 struct skb_seq_state { __u32 lower_offset; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 6724a84ebb09..7f53dcb26ad3 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -4188,13 +4188,13 @@ unsigned int skb_find_text(struct sk_buff *skb, uns= igned int from, EXPORT_SYMBOL(skb_find_text); =20 int skb_append_pagefrags(struct sk_buff *skb, struct page *page, - int offset, size_t size) + int offset, size_t size, size_t max_frags) { int i =3D skb_shinfo(skb)->nr_frags; =20 if (skb_can_coalesce(skb, i, page, offset)) { skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], size); - } else if (i < MAX_SKB_FRAGS) { + } else if (i < max_frags) { skb_zcopy_downgrade_managed(skb); get_page(page); skb_fill_page_desc_noacc(skb, i, page, offset, size); diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 61892268e8a6..52fc840898d8 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -1450,7 +1450,8 @@ ssize_t ip_append_page(struct sock *sk, struct flowi4= *fl4, struct page *page, if (len > size) len =3D size; =20 - if (skb_append_pagefrags(skb, page, offset, len)) { + if (skb_append_pagefrags(skb, page, offset, len, + MAX_SKB_FRAGS)) { err =3D -EMSGSIZE; goto error; } diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index cc695c9f09ec..dd55506b4632 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -2349,7 +2349,7 @@ static ssize_t unix_stream_sendpage(struct socket *so= cket, struct page *page, newskb =3D NULL; } =20 - if (skb_append_pagefrags(skb, page, offset, size)) { + if (skb_append_pagefrags(skb, page, offset, size, MAX_SKB_FRAGS)) { tail =3D skb; goto alloc_skb; } From nobody Fri Dec 19 02:50:37 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC755C77B75 for ; Mon, 22 May 2023 12:12:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233701AbjEVMMk (ORCPT ); Mon, 22 May 2023 08:12:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46426 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232845AbjEVMMb (ORCPT ); Mon, 22 May 2023 08:12:31 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 18971B9 for ; Mon, 22 May 2023 05:11:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1684757504; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 03/16] net: Add a function to splice pages into an skbuff for MSG_SPLICE_PAGES
Date: Mon, 22 May 2023 13:11:12 +0100
Message-Id: <20230522121125.2595254-4-dhowells@redhat.com>

Add a function to handle MSG_SPLICE_PAGES being passed internally to
sendmsg().  Pages are spliced into the given socket buffer if possible and
copied in if not (e.g. they're slab pages or have a zero refcount).

Signed-off-by: David Howells
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Al Viro
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---

Notes:
    ver #8)
     - Order local variables in reverse xmas tree order.
     - Remove duplicate coalescence check.
     - Warn if sendpage_ok() fails.

    ver #7)
     - Export function.
     - Never copy data, return -EIO if sendpage_ok() returns false.

 include/linux/skbuff.h |  3 ++
 net/core/skbuff.c      | 88 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 15011408c47c..1b2ebf6113e0 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -5097,5 +5097,8 @@ static inline void skb_mark_for_recycle(struct sk_buff *skb)
 #endif
 }

+ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
+			     ssize_t maxsize, gfp_t gfp);
+
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_SKBUFF_H */
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 7f53dcb26ad3..f4a5b51aed22 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6892,3 +6892,91 @@ nodefer:	__kfree_skb(skb);
 	if (unlikely(kick) && !cmpxchg(&sd->defer_ipi_scheduled, 0, 1))
 		smp_call_function_single_async(cpu, &sd->defer_csd);
 }
+
+static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,
+				 size_t offset, size_t len)
+{
+	const char *kaddr;
+	__wsum csum;
+
+	kaddr = kmap_local_page(page);
+	csum = csum_partial(kaddr + offset, len, 0);
+	kunmap_local(kaddr);
+	skb->csum = csum_block_add(skb->csum, csum, skb->len);
+}
+
+/**
+ * skb_splice_from_iter - Splice (or copy) pages to skbuff
+ * @skb: The buffer to add pages to
+ * @iter: Iterator representing the pages to be added
+ * @maxsize: Maximum amount of data to be added
+ * @gfp: Allocation flags
+ *
+ * This is a common helper function for supporting MSG_SPLICE_PAGES.  It
+ * extracts pages from an iterator and adds them to the socket buffer if
+ * possible, copying them to fragments if not possible (such as if they're
+ * slab pages).
+ *
+ * Returns the amount of data spliced/copied or -EMSGSIZE if there's
+ * insufficient space in the buffer to transfer anything.
+ */
+ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
+			     ssize_t maxsize, gfp_t gfp)
+{
+	size_t frag_limit = READ_ONCE(sysctl_max_skb_frags);
+	struct page *pages[8], **ppages = pages;
+	ssize_t spliced = 0, ret = 0;
+	unsigned int i;
+
+	while (iter->count > 0) {
+		ssize_t space, nr;
+		size_t off, len;
+
+		ret = -EMSGSIZE;
+		space = frag_limit - skb_shinfo(skb)->nr_frags;
+		if (space < 0)
+			break;
+
+		/* We might be able to coalesce without increasing nr_frags */
+		nr = clamp_t(size_t, space, 1, ARRAY_SIZE(pages));
+
+		len = iov_iter_extract_pages(iter, &ppages, maxsize, nr, 0, &off);
+		if (len <= 0) {
+			ret = len ?: -EIO;
+			break;
+		}
+
+		i = 0;
+		do {
+			struct page *page = pages[i++];
+			size_t part = min_t(size_t, PAGE_SIZE - off, len);
+
+			ret = -EIO;
+			if (WARN_ON_ONCE(!sendpage_ok(page)))
+				goto out;
+
+			ret = skb_append_pagefrags(skb, page, off, part,
+						   frag_limit);
+			if (ret < 0) {
+				iov_iter_revert(iter, len);
+				goto out;
+			}
+
+			if (skb->ip_summed == CHECKSUM_NONE)
+				skb_splice_csum_page(skb, page, off, part);
+
+			off = 0;
+			spliced += part;
+			maxsize -= part;
+			len -= part;
+		} while (len > 0);
+
+		if (maxsize <= 0)
+			break;
+	}
+
+out:
+	skb_len_add(skb, spliced);
+	return spliced ?: ret;
+}
+EXPORT_SYMBOL(skb_splice_from_iter);
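
A minimal sketch (hypothetical protocol code, not part of this patch) of
how a protocol's sendmsg() implementation consumes MSG_SPLICE_PAGES with
this helper; the TCP patch that follows takes essentially this shape:

	/* Sketch: inside a protocol's sendmsg() transmit loop. */
	if (msg->msg_flags & MSG_SPLICE_PAGES) {
		ssize_t n;

		n = skb_splice_from_iter(skb, &msg->msg_iter,
					 msg_data_left(msg),
					 sk->sk_allocation);
		if (n == -EMSGSIZE)
			goto new_segment;	/* frag slots full: new skb */
		if (n < 0)
			goto do_error;
		copied += n;
	}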

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 04/16] tcp: Support MSG_SPLICE_PAGES
Date: Mon, 22 May 2023 13:11:13 +0100
Message-Id: <20230522121125.2595254-5-dhowells@redhat.com>

Make TCP's sendmsg() support MSG_SPLICE_PAGES.  This causes pages to be
spliced or copied (if they cannot be spliced) from the source iterator.

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.

Signed-off-by: David Howells
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---

Notes:
    ver #9)
     - Fix a merge conflict with commit eea96a3e2c909.

    ver #7)
     - Missed a "zc = 1" in tcp_sendmsg_locked().

    ver #6)
     - Set zc to 0/MSG_ZEROCOPY/MSG_SPLICE_PAGES rather than 0/1/2.
     - Use common helper.

 net/ipv4/tcp.c | 43 ++++++++++++++++++++++++++++++++++++-------
 1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 3d18e295bb2f..2d61150d01f1 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1223,7 +1223,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 	int flags, err, copied = 0;
 	int mss_now = 0, size_goal, copied_syn = 0;
 	int process_backlog = 0;
-	bool zc = false;
+	int zc = 0;
 	long timeo;

 	flags = msg->msg_flags;
@@ -1231,7 +1231,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 	if ((flags & MSG_ZEROCOPY) && size) {
 		if (msg->msg_ubuf) {
 			uarg = msg->msg_ubuf;
-			zc = sk->sk_route_caps & NETIF_F_SG;
+			if (sk->sk_route_caps & NETIF_F_SG)
+				zc = MSG_ZEROCOPY;
 		} else if (sock_flag(sk, SOCK_ZEROCOPY)) {
 			skb = tcp_write_queue_tail(sk);
 			uarg = msg_zerocopy_realloc(sk, size, skb_zcopy(skb));
@@ -1239,10 +1240,14 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 				err = -ENOBUFS;
 				goto out_err;
 			}
-			zc = sk->sk_route_caps & NETIF_F_SG;
-			if (!zc)
+			if (sk->sk_route_caps & NETIF_F_SG)
+				zc = MSG_ZEROCOPY;
+			else
 				uarg_to_msgzc(uarg)->zerocopy = 0;
 		}
+	} else if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES) && size) {
+		if (sk->sk_route_caps & NETIF_F_SG)
+			zc = MSG_SPLICE_PAGES;
 	}

 	if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect) &&
@@ -1305,7 +1310,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		goto do_error;

 	while (msg_data_left(msg)) {
-		int copy = 0;
+		ssize_t copy = 0;

 		skb = tcp_write_queue_tail(sk);
 		if (skb)
@@ -1346,7 +1351,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 		if (copy > msg_data_left(msg))
 			copy = msg_data_left(msg);

-		if (!zc) {
+		if (zc == 0) {
 			bool merge = true;
 			int i = skb_shinfo(skb)->nr_frags;
 			struct page_frag *pfrag = sk_page_frag(sk);
@@ -1391,7 +1396,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 				page_ref_inc(pfrag->page);
 			}
 			pfrag->offset += copy;
-		} else {
+		} else if (zc == MSG_ZEROCOPY)  {
 			/* First append to a fragless skb builds initial
 			 * pure zerocopy skb
 			 */
@@ -1412,6 +1417,30 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 			if (err < 0)
 				goto do_error;
 			copy = err;
+		} else if (zc == MSG_SPLICE_PAGES) {
+			/* Splice in data if we can; copy if we can't. */
+			if (tcp_downgrade_zcopy_pure(sk, skb))
+				goto wait_for_space;
+			copy = tcp_wmem_schedule(sk, copy);
+			if (!copy)
+				goto wait_for_space;
+
+			err = skb_splice_from_iter(skb, &msg->msg_iter, copy,
+						   sk->sk_allocation);
+			if (err < 0) {
+				if (err == -EMSGSIZE) {
+					tcp_mark_push(tp, skb);
+					goto new_segment;
+				}
+				goto do_error;
+			}
+			copy = err;
+
+			if (!(flags & MSG_NO_SHARED_FRAGS))
+				skb_shinfo(skb)->flags |= SKBFL_SHARED_FRAG;
+
+			sk_wmem_queued_add(sk, copy);
+			sk_mem_charge(sk, copy);
 		}

 		if (!copied)
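
With this in place, a kernel caller can hand TCP pages to transmit without
an intermediate copy, roughly as below (a sketch with locking shown and
error handling elided; the do_tcp_sendpages() conversion in the next patch
adopts exactly this pattern):

	struct bio_vec bvec;
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES | MSG_MORE };
	int ret;

	bvec_set_page(&bvec, page, len, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, len);

	lock_sock(sk);
	ret = tcp_sendmsg_locked(sk, &msg, len);	/* returns bytes queued */
	release_sock(sk);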
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Al Viro , Christoph Hellwig , Jens Axboe , Jeff Layton , Christian Brauner , Chuck Lever III , Linus Torvalds , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH net-next v10 05/16] tcp: Convert do_tcp_sendpages() to use MSG_SPLICE_PAGES Date: Mon, 22 May 2023 13:11:14 +0100 Message-Id: <20230522121125.2595254-6-dhowells@redhat.com> In-Reply-To: <20230522121125.2595254-1-dhowells@redhat.com> References: <20230522121125.2595254-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.10 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Convert do_tcp_sendpages() to use sendmsg() with MSG_SPLICE_PAGES rather than directly splicing in the pages itself. do_tcp_sendpages() can then be inlined in subsequent patches into its callers. This allows ->sendpage() to be replaced by something that can handle multiple multipage folios in a single transaction. Signed-off-by: David Howells cc: Eric Dumazet cc: "David S. Miller" cc: David Ahern cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org --- net/ipv4/tcp.c | 158 +++---------------------------------------------- 1 file changed, 7 insertions(+), 151 deletions(-) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 2d61150d01f1..f3a0c02678e0 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -974,163 +974,19 @@ static int tcp_wmem_schedule(struct sock *sk, int co= py) return min(copy, sk->sk_forward_alloc); } =20 -static struct sk_buff *tcp_build_frag(struct sock *sk, int size_goal, int = flags, - struct page *page, int offset, size_t *size) -{ - struct sk_buff *skb =3D tcp_write_queue_tail(sk); - struct tcp_sock *tp =3D tcp_sk(sk); - bool can_coalesce; - int copy, i; - - if (!skb || (copy =3D size_goal - skb->len) <=3D 0 || - !tcp_skb_can_collapse_to(skb)) { -new_segment: - if (!sk_stream_memory_free(sk)) - return NULL; - - skb =3D tcp_stream_alloc_skb(sk, 0, sk->sk_allocation, - tcp_rtx_and_write_queues_empty(sk)); - if (!skb) - return NULL; - -#ifdef CONFIG_TLS_DEVICE - skb->decrypted =3D !!(flags & MSG_SENDPAGE_DECRYPTED); -#endif - tcp_skb_entail(sk, skb); - copy =3D size_goal; - } - - if (copy > *size) - copy =3D *size; - - i =3D skb_shinfo(skb)->nr_frags; - can_coalesce =3D skb_can_coalesce(skb, i, page, offset); - if (!can_coalesce && i >=3D READ_ONCE(sysctl_max_skb_frags)) { - tcp_mark_push(tp, skb); - goto new_segment; - } - if (tcp_downgrade_zcopy_pure(sk, skb)) - return NULL; - - copy =3D tcp_wmem_schedule(sk, copy); - if (!copy) - return NULL; - - if (can_coalesce) { - skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); - } else { - get_page(page); - skb_fill_page_desc_noacc(skb, i, page, offset, copy); - } - - if (!(flags & MSG_NO_SHARED_FRAGS)) - skb_shinfo(skb)->flags |=3D SKBFL_SHARED_FRAG; - - skb->len +=3D copy; - skb->data_len +=3D copy; - skb->truesize +=3D copy; - sk_wmem_queued_add(sk, copy); - sk_mem_charge(sk, copy); - WRITE_ONCE(tp->write_seq, tp->write_seq + copy); - TCP_SKB_CB(skb)->end_seq +=3D copy; - tcp_skb_pcount_set(skb, 0); - - *size =3D copy; - return skb; -} - ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset, size_t size, int flags) { - struct tcp_sock *tp =3D tcp_sk(sk); - int mss_now, size_goal; - int err; - ssize_t copied; - long timeo =3D sock_sndtimeo(sk, flags & 
-	long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);
-
-	if (IS_ENABLED(CONFIG_DEBUG_VM) &&
-	    WARN_ONCE(!sendpage_ok(page),
-		      "page must not be a Slab one and have page_count > 0"))
-		return -EINVAL;
-
-	/* Wait for a connection to finish. One exception is TCP Fast Open
-	 * (passive side) where data is allowed to be sent before a connection
-	 * is fully established.
-	 */
-	if (((1 << sk->sk_state) & ~(TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) &&
-	    !tcp_passive_fastopen(sk)) {
-		err = sk_stream_wait_connect(sk, &timeo);
-		if (err != 0)
-			goto out_err;
-	}
+	struct bio_vec bvec;
+	struct msghdr msg = { .msg_flags = flags | MSG_SPLICE_PAGES, };

-	sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+	bvec_set_page(&bvec, page, size, offset);
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);

-	mss_now = tcp_send_mss(sk, &size_goal, flags);
-	copied = 0;
+	if (flags & MSG_SENDPAGE_NOTLAST)
+		msg.msg_flags |= MSG_MORE;

-	err = -EPIPE;
-	if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN))
-		goto out_err;
-
-	while (size > 0) {
-		struct sk_buff *skb;
-		size_t copy = size;
-
-		skb = tcp_build_frag(sk, size_goal, flags, page, offset, &copy);
-		if (!skb)
-			goto wait_for_space;
-
-		if (!copied)
-			TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_PSH;
-
-		copied += copy;
-		offset += copy;
-		size -= copy;
-		if (!size)
-			goto out;
-
-		if (skb->len < size_goal || (flags & MSG_OOB))
-			continue;
-
-		if (forced_push(tp)) {
-			tcp_mark_push(tp, skb);
-			__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH);
-		} else if (skb == tcp_send_head(sk))
-			tcp_push_one(sk, mss_now);
-		continue;
-
-wait_for_space:
-		set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
-		tcp_push(sk, flags & ~MSG_MORE, mss_now,
-			 TCP_NAGLE_PUSH, size_goal);
-
-		err = sk_stream_wait_memory(sk, &timeo);
-		if (err != 0)
-			goto do_error;
-
-		mss_now = tcp_send_mss(sk, &size_goal, flags);
-	}
-
-out:
-	if (copied) {
-		tcp_tx_timestamp(sk, sk->sk_tsflags);
-		if (!(flags & MSG_SENDPAGE_NOTLAST))
-			tcp_push(sk, flags, mss_now, tp->nonagle, size_goal);
-	}
-	return copied;
-
-do_error:
-	tcp_remove_empty_skb(sk);
-	if (copied)
-		goto out;
-out_err:
-	/* make sure we wake any epoll edge trigger waiter */
-	if (unlikely(tcp_rtx_and_write_queues_empty(sk) && err == -EAGAIN)) {
-		sk->sk_write_space(sk);
-		tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED);
-	}
-	return sk_stream_error(sk, flags, err);
+	return tcp_sendmsg_locked(sk, &msg, size);
 }
 EXPORT_SYMBOL_GPL(do_tcp_sendpages);

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 06/16] tcp_bpf: Inline do_tcp_sendpages as it's now a wrapper around tcp_sendmsg
Date: Mon, 22 May 2023 13:11:15 +0100
Message-Id: <20230522121125.2595254-7-dhowells@redhat.com>

do_tcp_sendpages() is now just a small wrapper around tcp_sendmsg_locked(),
so inline it.  This is part of replacing ->sendpage() with a call to
sendmsg() with MSG_SPLICE_PAGES set.

Signed-off-by: David Howells
cc: John Fastabend
cc: Jakub Sitnicki
Miller" cc: Eric Dumazet cc: David Ahern cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org cc: bpf@vger.kernel.org --- net/ipv4/tcp_bpf.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c index 2e9547467edb..0291d15acd19 100644 --- a/net/ipv4/tcp_bpf.c +++ b/net/ipv4/tcp_bpf.c @@ -72,11 +72,13 @@ static int tcp_bpf_push(struct sock *sk, struct sk_msg = *msg, u32 apply_bytes, { bool apply =3D apply_bytes; struct scatterlist *sge; + struct msghdr msghdr =3D { .msg_flags =3D flags | MSG_SPLICE_PAGES, }; struct page *page; int size, ret =3D 0; u32 off; =20 while (1) { + struct bio_vec bvec; bool has_tx_ulp; =20 sge =3D sk_msg_elem(msg, msg->sg.start); @@ -88,16 +90,18 @@ static int tcp_bpf_push(struct sock *sk, struct sk_msg = *msg, u32 apply_bytes, tcp_rate_check_app_limited(sk); retry: has_tx_ulp =3D tls_sw_has_ctx_tx(sk); - if (has_tx_ulp) { - flags |=3D MSG_SENDPAGE_NOPOLICY; - ret =3D kernel_sendpage_locked(sk, - page, off, size, flags); - } else { - ret =3D do_tcp_sendpages(sk, page, off, size, flags); - } + if (has_tx_ulp) + msghdr.msg_flags |=3D MSG_SENDPAGE_NOPOLICY; =20 + if (flags & MSG_SENDPAGE_NOTLAST) + msghdr.msg_flags |=3D MSG_MORE; + + bvec_set_page(&bvec, page, size, off); + iov_iter_bvec(&msghdr.msg_iter, ITER_SOURCE, &bvec, 1, size); + ret =3D tcp_sendmsg_locked(sk, &msghdr, size); if (ret <=3D 0) return ret; + if (apply) apply_bytes -=3D ret; msg->sg.size -=3D ret; @@ -404,7 +408,7 @@ static int tcp_bpf_sendmsg(struct sock *sk, struct msgh= dr *msg, size_t size) long timeo; int flags; =20 - /* Don't let internal do_tcp_sendpages() flags through */ + /* Don't let internal sendpage flags through */ flags =3D (msg->msg_flags & ~MSG_SENDPAGE_DECRYPTED); flags |=3D MSG_NO_SHARED_FRAGS; From nobody Fri Dec 19 02:50:37 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 085DFC77B73 for ; Mon, 22 May 2023 12:13:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233065AbjEVMNk (ORCPT ); Mon, 22 May 2023 08:13:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233469AbjEVMNO (ORCPT ); Mon, 22 May 2023 08:13:14 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BD55FDC for ; Mon, 22 May 2023 05:11:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1684757519; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=tYeoTX9B8dad49qWPHePVLYzX6oyR1jBlVymL6Q5Zsc=; b=U+3QQm+v312ByLRBWyREcfENqCS583v+cFK+oa83ifmyvdb62SuqMx9uS3/FaejO7F+PKC eFTyVYJ2SZsIEG/YYiFZ6Ie1iKYGz/fb/x7EzL75GbQn6O3wgPbit16hd/VUMmXgM3bkZ0 rIMqJw2CoVrCBUoa8bqhKXrhWZBbJSM= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-329-z_k6b1dFNWyCPfMJlHKjGw-1; Mon, 22 May 2023 08:11:55 -0400 X-MC-Unique: 

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 07/16] espintcp: Inline do_tcp_sendpages()
Date: Mon, 22 May 2023 13:11:16 +0100
Message-Id: <20230522121125.2595254-8-dhowells@redhat.com>

do_tcp_sendpages() is now just a small wrapper around tcp_sendmsg_locked(),
so inline it, allowing do_tcp_sendpages() to be removed.  This is part of
replacing ->sendpage() with a call to sendmsg() with MSG_SPLICE_PAGES set.

Signed-off-by: David Howells
cc: Steffen Klassert
cc: Herbert Xu
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---
 net/xfrm/espintcp.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
index 872b80188e83..3504925babdb 100644
--- a/net/xfrm/espintcp.c
+++ b/net/xfrm/espintcp.c
@@ -205,14 +205,16 @@ static int espintcp_sendskb_locked(struct sock *sk, struct espintcp_msg *emsg,
 static int espintcp_sendskmsg_locked(struct sock *sk,
				     struct espintcp_msg *emsg, int flags)
 {
+	struct msghdr msghdr = { .msg_flags = flags | MSG_SPLICE_PAGES, };
 	struct sk_msg *skmsg = &emsg->skmsg;
 	struct scatterlist *sg;
 	int done = 0;
 	int ret;

-	flags |= MSG_SENDPAGE_NOTLAST;
+	msghdr.msg_flags |= MSG_SENDPAGE_NOTLAST;
 	sg = &skmsg->sg.data[skmsg->sg.start];
 	do {
+		struct bio_vec bvec;
 		size_t size = sg->length - emsg->offset;
 		int offset = sg->offset + emsg->offset;
 		struct page *p;
@@ -220,11 +222,13 @@ static int espintcp_sendskmsg_locked(struct sock *sk,
 		emsg->offset = 0;

 		if (sg_is_last(sg))
-			flags &= ~MSG_SENDPAGE_NOTLAST;
+			msghdr.msg_flags &= ~MSG_SENDPAGE_NOTLAST;

 		p = sg_page(sg);
retry:
-		ret = do_tcp_sendpages(sk, p, offset, size, flags);
+		bvec_set_page(&bvec, p, size, offset);
+		iov_iter_bvec(&msghdr.msg_iter, ITER_SOURCE, &bvec, 1, size);
+		ret = tcp_sendmsg_locked(sk, &msghdr, size);
 		if (ret < 0) {
 			emsg->offset = offset - sg->offset;
 			skmsg->sg.start += done;

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 08/16] tls: Inline do_tcp_sendpages()
Date: Mon, 22 May 2023 13:11:17 +0100
Message-Id: <20230522121125.2595254-9-dhowells@redhat.com>

do_tcp_sendpages() is now just a small wrapper around tcp_sendmsg_locked(),
so inline it, allowing do_tcp_sendpages() to be removed.  This is part of
replacing ->sendpage() with a call to sendmsg() with MSG_SPLICE_PAGES set.

Signed-off-by: David Howells
cc: Boris Pismenny
cc: John Fastabend
cc: Jakub Kicinski
Miller" cc: Eric Dumazet cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org Tested-by: Tariq Toukan --- include/net/tls.h | 2 +- net/tls/tls_main.c | 24 +++++++++++++++--------- 2 files changed, 16 insertions(+), 10 deletions(-) diff --git a/include/net/tls.h b/include/net/tls.h index 6056ce5a2aa5..5791ca7a189c 100644 --- a/include/net/tls.h +++ b/include/net/tls.h @@ -258,7 +258,7 @@ struct tls_context { struct scatterlist *partially_sent_record; u16 partially_sent_offset; =20 - bool in_tcp_sendpages; + bool splicing_pages; bool pending_open_record_frags; =20 struct mutex tx_lock; /* protects partially_sent_* fields and diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c index f2e7302a4d96..3d45fdb5c4e9 100644 --- a/net/tls/tls_main.c +++ b/net/tls/tls_main.c @@ -125,7 +125,10 @@ int tls_push_sg(struct sock *sk, u16 first_offset, int flags) { - int sendpage_flags =3D flags | MSG_SENDPAGE_NOTLAST; + struct bio_vec bvec; + struct msghdr msg =3D { + .msg_flags =3D MSG_SENDPAGE_NOTLAST | MSG_SPLICE_PAGES | flags, + }; int ret =3D 0; struct page *p; size_t size; @@ -134,16 +137,19 @@ int tls_push_sg(struct sock *sk, size =3D sg->length - offset; offset +=3D sg->offset; =20 - ctx->in_tcp_sendpages =3D true; + ctx->splicing_pages =3D true; while (1) { if (sg_is_last(sg)) - sendpage_flags =3D flags; + msg.msg_flags =3D flags; =20 /* is sending application-limited? */ tcp_rate_check_app_limited(sk); p =3D sg_page(sg); retry: - ret =3D do_tcp_sendpages(sk, p, offset, size, sendpage_flags); + bvec_set_page(&bvec, p, size, offset); + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size); + + ret =3D tcp_sendmsg_locked(sk, &msg, size); =20 if (ret !=3D size) { if (ret > 0) { @@ -155,7 +161,7 @@ int tls_push_sg(struct sock *sk, offset -=3D sg->offset; ctx->partially_sent_offset =3D offset; ctx->partially_sent_record =3D (void *)sg; - ctx->in_tcp_sendpages =3D false; + ctx->splicing_pages =3D false; return ret; } =20 @@ -169,7 +175,7 @@ int tls_push_sg(struct sock *sk, size =3D sg->length; } =20 - ctx->in_tcp_sendpages =3D false; + ctx->splicing_pages =3D false; =20 return 0; } @@ -247,11 +253,11 @@ static void tls_write_space(struct sock *sk) { struct tls_context *ctx =3D tls_get_ctx(sk); =20 - /* If in_tcp_sendpages call lower protocol write space handler + /* If splicing_pages call lower protocol write space handler * to ensure we wake up any waiting operations there. For example - * if do_tcp_sendpages where to call sk_wait_event. + * if splicing pages where to call sk_wait_event. 
 	 */
-	if (ctx->in_tcp_sendpages) {
+	if (ctx->splicing_pages) {
 		ctx->sk_write_space(sk);
 		return;
 	}

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 09/16] siw: Inline do_tcp_sendpages()
Date: Mon, 22 May 2023 13:11:18 +0100
Message-Id: <20230522121125.2595254-10-dhowells@redhat.com>

do_tcp_sendpages() is now just a small wrapper around tcp_sendmsg_locked(),
so inline it, allowing do_tcp_sendpages() to be removed.  This is part of
replacing ->sendpage() with a call to sendmsg() with MSG_SPLICE_PAGES set.
Signed-off-by: David Howells
Reviewed-by: Bernard Metzler
Reviewed-by: Tom Talpey
cc: Jason Gunthorpe
cc: Leon Romanovsky
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: linux-rdma@vger.kernel.org
cc: netdev@vger.kernel.org
---

Notes:
    ver #6)
     - Don't clear MSG_SPLICE_PAGES on the last page.

 drivers/infiniband/sw/siw/siw_qp_tx.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
index 4b292e0504f1..ffb16beb6c30 100644
--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
@@ -312,7 +312,7 @@ static int siw_tx_ctrl(struct siw_iwarp_tx *c_tx, struct socket *s,
 }

 /*
- * 0copy TCP transmit interface: Use do_tcp_sendpages.
+ * 0copy TCP transmit interface: Use MSG_SPLICE_PAGES.
  *
  * Using sendpage to push page by page appears to be less efficient
  * than using sendmsg, even if data are copied.
@@ -323,20 +323,27 @@ static int siw_tx_ctrl(struct siw_iwarp_tx *c_tx, struct socket *s,
 static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
			     size_t size)
 {
+	struct bio_vec bvec;
+	struct msghdr msg = {
+		.msg_flags = (MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST |
+			      MSG_SPLICE_PAGES),
+	};
 	struct sock *sk = s->sk;
-	int i = 0, rv = 0, sent = 0,
-	    flags = MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST;
+	int i = 0, rv = 0, sent = 0;

 	while (size) {
 		size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);

 		if (size + offset <= PAGE_SIZE)
-			flags = MSG_MORE | MSG_DONTWAIT;
+			msg.msg_flags &= ~MSG_SENDPAGE_NOTLAST;

 		tcp_rate_check_app_limited(sk);
+		bvec_set_page(&bvec, page[i], bytes, offset);
+		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
+
try_page_again:
 		lock_sock(sk);
-		rv = do_tcp_sendpages(sk, page[i], offset, bytes, flags);
+		rv = tcp_sendmsg_locked(sk, &msg, size);
 		release_sock(sk);

 		if (rv > 0) {

From: David Howells
To: netdev@vger.kernel.org
Subject: [PATCH net-next v10 10/16] tcp: Fold do_tcp_sendpages() into tcp_sendpage_locked()
Date: Mon, 22 May 2023 13:11:19 +0100
Message-Id: <20230522121125.2595254-11-dhowells@redhat.com>

Fold do_tcp_sendpages() into its last remaining caller,
tcp_sendpage_locked().

Signed-off-by: David Howells
cc: Eric Dumazet
cc: David Ahern
cc: "David S. Miller"
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---
 include/net/tcp.h |  2 --
 net/ipv4/tcp.c    | 21 +++++++--------------
 2 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index 04a31643cda3..02a6cff1827e 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -333,8 +333,6 @@ int tcp_sendpage(struct sock *sk, struct page *page, int offset, size_t size,
		 int flags);
 int tcp_sendpage_locked(struct sock *sk, struct page *page, int offset,
			size_t size, int flags);
-ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
-			 size_t size, int flags);
 int tcp_send_mss(struct sock *sk, int *size_goal, int flags);
 void tcp_push(struct sock *sk, int flags, int mss_now, int nonagle,
	      int size_goal);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index f3a0c02678e0..e9506cebecce 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -974,12 +974,17 @@ static int tcp_wmem_schedule(struct sock *sk, int copy)
 	return min(copy, sk->sk_forward_alloc);
 }

-ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
-			 size_t size, int flags)
+int tcp_sendpage_locked(struct sock *sk, struct page *page, int offset,
+			size_t size, int flags)
 {
 	struct bio_vec bvec;
 	struct msghdr msg = { .msg_flags = flags | MSG_SPLICE_PAGES, };

+	if (!(sk->sk_route_caps & NETIF_F_SG))
+		return sock_no_sendpage_locked(sk, page, offset, size, flags);
+
+	tcp_rate_check_app_limited(sk);	/* is sending application-limited? */
+
 	bvec_set_page(&bvec, page, size, offset);
 	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
 
@@ -988,18 +993,6 @@ ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset,
 
 	return tcp_sendmsg_locked(sk, &msg, size);
 }
-EXPORT_SYMBOL_GPL(do_tcp_sendpages);
-
-int tcp_sendpage_locked(struct sock *sk, struct page *page, int offset,
-			size_t size, int flags)
-{
-	if (!(sk->sk_route_caps & NETIF_F_SG))
-		return sock_no_sendpage_locked(sk, page, offset, size, flags);
-
-	tcp_rate_check_app_limited(sk);	/* is sending application-limited? */
-
-	return do_tcp_sendpages(sk, page, offset, size, flags);
-}
 EXPORT_SYMBOL_GPL(tcp_sendpage_locked);
 
 int tcp_sendpage(struct sock *sk, struct page *page, int offset,
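With do_tcp_sendpages() folded away, remaining in-kernel users go through
tcp_sendpage_locked() directly.  A minimal caller sketch, assuming the
caller manages the socket lock as before (example_send_page() is
hypothetical, not from the patch):

	/* Hypothetical caller: queue one full page on a TCP socket via the
	 * folded entry point. */
	static int example_send_page(struct sock *sk, struct page *page)
	{
		int ret;

		lock_sock(sk);
		ret = tcp_sendpage_locked(sk, page, 0, PAGE_SIZE, MSG_DONTWAIT);
		release_sock(sk);
		return ret;	/* bytes queued, or -ve error */
	}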
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Al Viro , Christoph Hellwig , Jens Axboe , Jeff Layton , Christian Brauner , Chuck Lever III , Linus Torvalds , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH net-next v10 11/16] ip, udp: Support MSG_SPLICE_PAGES Date: Mon, 22 May 2023 13:11:20 +0100 Message-Id: <20230522121125.2595254-12-dhowells@redhat.com> In-Reply-To: <20230522121125.2595254-1-dhowells@redhat.com> References: <20230522121125.2595254-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Make IP/UDP sendmsg() support MSG_SPLICE_PAGES. This causes pages to be spliced from the source iterator. This allows ->sendpage() to be replaced by something that can handle multiple multipage folios in a single transaction. Signed-off-by: David Howells cc: Willem de Bruijn cc: David Ahern cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org --- Notes: ver #6) - Use common helper. net/ipv4/ip_output.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 52fc840898d8..c7db973b5d29 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -1048,6 +1048,14 @@ static int __ip_append_data(struct sock *sk, skb_zcopy_set(skb, uarg, &extra_uref); } } + } else if ((flags & MSG_SPLICE_PAGES) && length) { + if (inet->hdrincl) + return -EPERM; + if (rt->dst.dev->features & NETIF_F_SG) + /* We need an empty buffer to attach stuff to */ + paged =3D true; + else + flags &=3D ~MSG_SPLICE_PAGES; } =20 cork->length +=3D length; @@ -1207,6 +1215,15 @@ static int __ip_append_data(struct sock *sk, err =3D -EFAULT; goto error; } + } else if (flags & MSG_SPLICE_PAGES) { + struct msghdr *msg =3D from; + + err =3D skb_splice_from_iter(skb, &msg->msg_iter, copy, + sk->sk_allocation); + if (err < 0) + goto error; + copy =3D err; + wmem_alloc_delta +=3D copy; } else if (!zc) { int i =3D skb_shinfo(skb)->nr_frags; From nobody Fri Dec 19 02:50:37 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 484FFC7EE2A for ; Mon, 22 May 2023 12:13:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233360AbjEVMNd (ORCPT ); Mon, 22 May 2023 08:13:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46804 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232889AbjEVMNM (ORCPT ); Mon, 22 May 2023 08:13:12 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8D29410C for ; Mon, 22 May 2023 05:12:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1684757538; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=5dEuCyEYbAeTLbptDWG+lpG3TePaGyDY6iKA6gIr8Gk=; 
From nobody Fri Dec 19 02:50:37 2025

From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Willem de Bruijn, David Ahern, Matthew Wilcox, Al Viro,
    Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner,
    Chuck Lever III, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v10 12/16] ip6, udp6: Support MSG_SPLICE_PAGES
Date: Mon, 22 May 2023 13:11:21 +0100
Message-Id: <20230522121125.2595254-13-dhowells@redhat.com>
In-Reply-To: <20230522121125.2595254-1-dhowells@redhat.com>
References: <20230522121125.2595254-1-dhowells@redhat.com>

Make IP6/UDP6 sendmsg() support MSG_SPLICE_PAGES.  This causes pages to be
spliced from the source iterator if possible, copying the data if not.

This allows ->sendpage() to be replaced by something that can handle
multiple multipage folios in a single transaction.

Signed-off-by: David Howells
cc: Willem de Bruijn
cc: David Ahern
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---

Notes:
    ver #6)
     - Use common helper.
 net/ipv6/ip6_output.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index 9554cf46ed88..c722cb881b2d 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -1589,6 +1589,14 @@ static int __ip6_append_data(struct sock *sk,
 				skb_zcopy_set(skb, uarg, &extra_uref);
 			}
 		}
+	} else if ((flags & MSG_SPLICE_PAGES) && length) {
+		if (inet_sk(sk)->hdrincl)
+			return -EPERM;
+		if (rt->dst.dev->features & NETIF_F_SG)
+			/* We need an empty buffer to attach stuff to */
+			paged = true;
+		else
+			flags &= ~MSG_SPLICE_PAGES;
 	}
 
 	/*
@@ -1778,6 +1786,15 @@ static int __ip6_append_data(struct sock *sk,
 				err = -EFAULT;
 				goto error;
 			}
+		} else if (flags & MSG_SPLICE_PAGES) {
+			struct msghdr *msg = from;
+
+			err = skb_splice_from_iter(skb, &msg->msg_iter, copy,
+						   sk->sk_allocation);
+			if (err < 0)
+				goto error;
+			copy = err;
+			wmem_alloc_delta += copy;
 		} else if (!zc) {
 			int i = skb_shinfo(skb)->nr_frags;
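Both __ip_append_data() and __ip6_append_data() apply the same gate before
splicing: the route's device must support scatter-gather, otherwise the
flag is dropped and the data is copied as before.  An illustrative
restatement of that rule (may_splice_pages() is not a kernel function):

	/* Sketch of the fallback rule shared by the IPv4 and IPv6 paths:
	 * splicing needs an SG-capable device; otherwise copy. */
	static bool may_splice_pages(const struct net_device *dev,
				     unsigned int *flags)
	{
		if ((*flags & MSG_SPLICE_PAGES) &&
		    !(dev->features & NETIF_F_SG))
			*flags &= ~MSG_SPLICE_PAGES;	/* fall back to copying */
		return *flags & MSG_SPLICE_PAGES;
	}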
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Al Viro , Christoph Hellwig , Jens Axboe , Jeff Layton , Christian Brauner , Chuck Lever III , Linus Torvalds , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH net-next v10 13/16] udp: Convert udp_sendpage() to use MSG_SPLICE_PAGES Date: Mon, 22 May 2023 13:11:22 +0100 Message-Id: <20230522121125.2595254-14-dhowells@redhat.com> In-Reply-To: <20230522121125.2595254-1-dhowells@redhat.com> References: <20230522121125.2595254-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Convert udp_sendpage() to use sendmsg() with MSG_SPLICE_PAGES rather than directly splicing in the pages itself. This allows ->sendpage() to be replaced by something that can handle multiple multipage folios in a single transaction. Signed-off-by: David Howells cc: Willem de Bruijn cc: David Ahern cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org --- Notes: ver #6) - udp_sendpage() shouldn't lock the socket around udp_sendpage(). - udp_sendpage() should only set MSG_MORE if MSG_SENDPAGE_NOTLAST is s= et. net/ipv4/udp.c | 51 ++++++-------------------------------------------- 1 file changed, 6 insertions(+), 45 deletions(-) diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index aa32afd871ee..2879dc6d66ea 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -1332,54 +1332,15 @@ EXPORT_SYMBOL(udp_sendmsg); int udp_sendpage(struct sock *sk, struct page *page, int offset, size_t size, int flags) { - struct inet_sock *inet =3D inet_sk(sk); - struct udp_sock *up =3D udp_sk(sk); - int ret; + struct bio_vec bvec; + struct msghdr msg =3D { .msg_flags =3D flags | MSG_SPLICE_PAGES }; =20 if (flags & MSG_SENDPAGE_NOTLAST) - flags |=3D MSG_MORE; - - if (!up->pending) { - struct msghdr msg =3D { .msg_flags =3D flags|MSG_MORE }; - - /* Call udp_sendmsg to specify destination address which - * sendpage interface can't pass. - * This will succeed only when the socket is connected. 
-		 */
-		ret = udp_sendmsg(sk, &msg, 0);
-		if (ret < 0)
-			return ret;
-	}
-
-	lock_sock(sk);
+		msg.msg_flags |= MSG_MORE;
 
-	if (unlikely(!up->pending)) {
-		release_sock(sk);
-
-		net_dbg_ratelimited("cork failed\n");
-		return -EINVAL;
-	}
-
-	ret = ip_append_page(sk, &inet->cork.fl.u.ip4,
-			     page, offset, size, flags);
-	if (ret == -EOPNOTSUPP) {
-		release_sock(sk);
-		return sock_no_sendpage(sk->sk_socket, page, offset,
-					size, flags);
-	}
-	if (ret < 0) {
-		udp_flush_pending_frames(sk);
-		goto out;
-	}
-
-	up->len += size;
-	if (!(READ_ONCE(up->corkflag) || (flags&MSG_MORE)))
-		ret = udp_push_pending_frames(sk);
-	if (!ret)
-		ret = size;
-out:
-	release_sock(sk);
-	return ret;
+	bvec_set_page(&bvec, page, size, offset);
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
+	return udp_sendmsg(sk, &msg, size);
 }
 
 #define UDP_SKB_IS_STATELESS 0x80000000
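One detail of the conversion worth calling out: the sendpage-only
MSG_SENDPAGE_NOTLAST hint becomes MSG_MORE on the sendmsg() path, so
corking behaviour is preserved.  A sketch of that translation in isolation
(sendpage_to_sendmsg_flags() is a hypothetical name, not in the patch):

	/* Hypothetical sketch of the flag translation udp_sendpage() now
	 * performs before handing off to udp_sendmsg(). */
	static unsigned int sendpage_to_sendmsg_flags(int flags)
	{
		unsigned int msg_flags = flags | MSG_SPLICE_PAGES;

		if (flags & MSG_SENDPAGE_NOTLAST)
			msg_flags |= MSG_MORE;	/* keep the datagram corked */
		return msg_flags;
	}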
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Al Viro , Christoph Hellwig , Jens Axboe , Jeff Layton , Christian Brauner , Chuck Lever III , Linus Torvalds , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH net-next v10 14/16] ip: Remove ip_append_page() Date: Mon, 22 May 2023 13:11:23 +0100 Message-Id: <20230522121125.2595254-15-dhowells@redhat.com> In-Reply-To: <20230522121125.2595254-1-dhowells@redhat.com> References: <20230522121125.2595254-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" ip_append_page() is no longer used with the removal of udp_sendpage(), so remove it. Signed-off-by: David Howells cc: Willem de Bruijn cc: David Ahern cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org --- Notes: ver #7) - Remove now-unused csum_page(). include/net/ip.h | 2 - net/ipv4/ip_output.c | 148 ++----------------------------------------- 2 files changed, 4 insertions(+), 146 deletions(-) diff --git a/include/net/ip.h b/include/net/ip.h index c3fffaa92d6e..7627a4df893b 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -220,8 +220,6 @@ int ip_append_data(struct sock *sk, struct flowi4 *fl4, unsigned int flags); int ip_generic_getfrag(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb); -ssize_t ip_append_page(struct sock *sk, struct flowi4 *fl4, struct page *p= age, - int offset, size_t size, int flags); struct sk_buff *__ip_make_skb(struct sock *sk, struct flowi4 *fl4, struct sk_buff_head *queue, struct inet_cork *cork); diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index c7db973b5d29..553c740a6bfb 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -946,17 +946,6 @@ ip_generic_getfrag(void *from, char *to, int offset, i= nt len, int odd, struct sk } EXPORT_SYMBOL(ip_generic_getfrag); =20 -static inline __wsum -csum_page(struct page *page, int offset, int copy) -{ - char *kaddr; - __wsum csum; - kaddr =3D kmap(page); - csum =3D csum_partial(kaddr + offset, copy, 0); - kunmap(page); - return csum; -} - static int __ip_append_data(struct sock *sk, struct flowi4 *fl4, struct sk_buff_head *queue, @@ -1327,10 +1316,10 @@ static int ip_setup_cork(struct sock *sk, struct in= et_cork *cork, } =20 /* - * ip_append_data() and ip_append_page() can make one large IP datagram - * from many pieces of data. Each pieces will be holded on the socket - * until ip_push_pending_frames() is called. Each piece can be a page - * or non-page data. + * ip_append_data() can make one large IP datagram from many pieces of + * data. Each piece will be held on the socket until + * ip_push_pending_frames() is called. Each piece can be a page or + * non-page data. * * Not only UDP, other transport protocols - e.g. raw sockets - can use * this interface potentially. 
@@ -1363,135 +1352,6 @@ int ip_append_data(struct sock *sk, struct flowi4 *fl4,
 				from, length, transhdrlen, flags);
 }
 
-ssize_t	ip_append_page(struct sock *sk, struct flowi4 *fl4, struct page *page,
-		       int offset, size_t size, int flags)
-{
-	struct inet_sock *inet = inet_sk(sk);
-	struct sk_buff *skb;
-	struct rtable *rt;
-	struct ip_options *opt = NULL;
-	struct inet_cork *cork;
-	int hh_len;
-	int mtu;
-	int len;
-	int err;
-	unsigned int maxfraglen, fragheaderlen, fraggap, maxnonfragsize;
-
-	if (inet->hdrincl)
-		return -EPERM;
-
-	if (flags&MSG_PROBE)
-		return 0;
-
-	if (skb_queue_empty(&sk->sk_write_queue))
-		return -EINVAL;
-
-	cork = &inet->cork.base;
-	rt = (struct rtable *)cork->dst;
-	if (cork->flags & IPCORK_OPT)
-		opt = cork->opt;
-
-	if (!(rt->dst.dev->features & NETIF_F_SG))
-		return -EOPNOTSUPP;
-
-	hh_len = LL_RESERVED_SPACE(rt->dst.dev);
-	mtu = cork->gso_size ? IP_MAX_MTU : cork->fragsize;
-
-	fragheaderlen = sizeof(struct iphdr) + (opt ? opt->optlen : 0);
-	maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen;
-	maxnonfragsize = ip_sk_ignore_df(sk) ? 0xFFFF : mtu;
-
-	if (cork->length + size > maxnonfragsize - fragheaderlen) {
-		ip_local_error(sk, EMSGSIZE, fl4->daddr, inet->inet_dport,
-			       mtu - (opt ? opt->optlen : 0));
-		return -EMSGSIZE;
-	}
-
-	skb = skb_peek_tail(&sk->sk_write_queue);
-	if (!skb)
-		return -EINVAL;
-
-	cork->length += size;
-
-	while (size > 0) {
-		/* Check if the remaining data fits into current packet. */
-		len = mtu - skb->len;
-		if (len < size)
-			len = maxfraglen - skb->len;
-
-		if (len <= 0) {
-			struct sk_buff *skb_prev;
-			int alloclen;
-
-			skb_prev = skb;
-			fraggap = skb_prev->len - maxfraglen;
-
-			alloclen = fragheaderlen + hh_len + fraggap + 15;
-			skb = sock_wmalloc(sk, alloclen, 1, sk->sk_allocation);
-			if (unlikely(!skb)) {
-				err = -ENOBUFS;
-				goto error;
-			}
-
-			/*
-			 *	Fill in the control structures
-			 */
-			skb->ip_summed = CHECKSUM_NONE;
-			skb->csum = 0;
-			skb_reserve(skb, hh_len);
-
-			/*
-			 *	Find where to start putting bytes.
-			 */
-			skb_put(skb, fragheaderlen + fraggap);
-			skb_reset_network_header(skb);
-			skb->transport_header = (skb->network_header +
-						 fragheaderlen);
-			if (fraggap) {
-				skb->csum = skb_copy_and_csum_bits(skb_prev,
-								   maxfraglen,
-						    skb_transport_header(skb),
-								   fraggap);
-				skb_prev->csum = csum_sub(skb_prev->csum,
-							  skb->csum);
-				pskb_trim_unique(skb_prev, maxfraglen);
-			}
-
-			/*
-			 * Put the packet on the pending queue.
-			 */
-			__skb_queue_tail(&sk->sk_write_queue, skb);
-			continue;
-		}
-
-		if (len > size)
-			len = size;
-
-		if (skb_append_pagefrags(skb, page, offset, len,
-					 MAX_SKB_FRAGS)) {
-			err = -EMSGSIZE;
-			goto error;
-		}
-
-		if (skb->ip_summed == CHECKSUM_NONE) {
-			__wsum csum;
-			csum = csum_page(page, offset, len);
-			skb->csum = csum_block_add(skb->csum, csum, skb->len);
-		}
-
-		skb_len_add(skb, len);
-		refcount_add(len, &sk->sk_wmem_alloc);
-		offset += len;
-		size -= len;
-	}
-	return 0;
-
-error:
-	cork->length -= size;
-	IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTDISCARDS);
-	return err;
-}
-
 static void ip_cork_release(struct inet_cork *cork)
 {
 	cork->flags &= ~IPCORK_OPT;
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Al Viro , Christoph Hellwig , Jens Axboe , Jeff Layton , Christian Brauner , Chuck Lever III , Linus Torvalds , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Kuniyuki Iwashima Subject: [PATCH net-next v10 15/16] af_unix: Support MSG_SPLICE_PAGES Date: Mon, 22 May 2023 13:11:24 +0100 Message-Id: <20230522121125.2595254-16-dhowells@redhat.com> In-Reply-To: <20230522121125.2595254-1-dhowells@redhat.com> References: <20230522121125.2595254-1-dhowells@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.6 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Make AF_UNIX sendmsg() support MSG_SPLICE_PAGES, splicing in pages from the source iterator if possible and copying the data in otherwise. This allows ->sendpage() to be replaced by something that can handle multiple multipage folios in a single transaction. Signed-off-by: David Howells cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Kuniyuki Iwashima cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org --- Notes: ver #6) - Use common helper. net/unix/af_unix.c | 49 +++++++++++++++++++++++++++++++--------------- 1 file changed, 33 insertions(+), 16 deletions(-) diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index dd55506b4632..976bc1c5e11b 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -2200,19 +2200,25 @@ static int unix_stream_sendmsg(struct socket *sock,= struct msghdr *msg, while (sent < len) { size =3D len - sent; =20 - /* Keep two messages in the pipe so it schedules better */ - size =3D min_t(int, size, (sk->sk_sndbuf >> 1) - 64); + if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES)) { + skb =3D sock_alloc_send_pskb(sk, 0, 0, + msg->msg_flags & MSG_DONTWAIT, + &err, 0); + } else { + /* Keep two messages in the pipe so it schedules better */ + size =3D min_t(int, size, (sk->sk_sndbuf >> 1) - 64); =20 - /* allow fallback to order-0 allocations */ - size =3D min_t(int, size, SKB_MAX_HEAD(0) + UNIX_SKB_FRAGS_SZ); + /* allow fallback to order-0 allocations */ + size =3D min_t(int, size, SKB_MAX_HEAD(0) + UNIX_SKB_FRAGS_SZ); =20 - data_len =3D max_t(int, 0, size - SKB_MAX_HEAD(0)); + data_len =3D max_t(int, 0, size - SKB_MAX_HEAD(0)); =20 - data_len =3D min_t(size_t, size, PAGE_ALIGN(data_len)); + data_len =3D min_t(size_t, size, PAGE_ALIGN(data_len)); =20 - skb =3D sock_alloc_send_pskb(sk, size - data_len, data_len, - msg->msg_flags & MSG_DONTWAIT, &err, - get_order(UNIX_SKB_FRAGS_SZ)); + skb =3D sock_alloc_send_pskb(sk, size - data_len, data_len, + msg->msg_flags & MSG_DONTWAIT, &err, + get_order(UNIX_SKB_FRAGS_SZ)); + } if (!skb) goto out_err; =20 @@ -2224,13 +2230,24 @@ static int unix_stream_sendmsg(struct socket *sock,= struct msghdr *msg, } fds_sent =3D true; =20 - skb_put(skb, size - data_len); - skb->data_len =3D data_len; - skb->len =3D size; - err =3D skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, size); - if (err) { - kfree_skb(skb); - goto out_err; + if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES)) { + err =3D skb_splice_from_iter(skb, &msg->msg_iter, size, + sk->sk_allocation); + if (err < 0) { + kfree_skb(skb); + goto out_err; + } + size =3D err; + refcount_add(size, &sk->sk_wmem_alloc); + } else { + skb_put(skb, size - data_len); + skb->data_len =3D data_len; + skb->len =3D size; + err =3D 
+			err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, size);
+			if (err) {
+				kfree_skb(skb);
+				goto out_err;
+			}
 		}
 
 		unix_state_lock(other);
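The splice branch added above has a simple shape: allocate an skb with no
linear data area, then attach the caller's pages to it as fragments and
account the spliced bytes against the socket's write allocation.  An
illustrative sketch of just that branch (unix_splice_into_skb() is a
hypothetical name; error paths beyond the essentials are trimmed):

	/* Hypothetical sketch of the MSG_SPLICE_PAGES branch in
	 * unix_stream_sendmsg(): empty skb, pages attached as frags. */
	static int unix_splice_into_skb(struct sock *sk, struct msghdr *msg,
					size_t size)
	{
		struct sk_buff *skb;
		int err;

		skb = sock_alloc_send_pskb(sk, 0, 0,
					   msg->msg_flags & MSG_DONTWAIT,
					   &err, 0);
		if (!skb)
			return err;

		err = skb_splice_from_iter(skb, &msg->msg_iter, size,
					   sk->sk_allocation);
		if (err < 0) {
			kfree_skb(skb);
			return err;
		}
		refcount_add(err, &sk->sk_wmem_alloc);	/* account spliced bytes */
		return err;	/* bytes attached to the skb */
	}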
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Kuniyuki Iwashima cc: Jens Axboe cc: Matthew Wilcox cc: netdev@vger.kernel.org --- Notes: ver #10) - Fix subject to refer to unix_stream_sendpage() not udp_sendpage(). net/unix/af_unix.c | 134 +++------------------------------------------ 1 file changed, 7 insertions(+), 127 deletions(-) diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index 976bc1c5e11b..115436ce1f8a 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -1839,24 +1839,6 @@ static void maybe_add_creds(struct sk_buff *skb, con= st struct socket *sock, } } =20 -static int maybe_init_creds(struct scm_cookie *scm, - struct socket *socket, - const struct sock *other) -{ - int err; - struct msghdr msg =3D { .msg_controllen =3D 0 }; - - err =3D scm_send(socket, &msg, scm, false); - if (err) - return err; - - if (unix_passcred_enabled(socket, other)) { - scm->pid =3D get_pid(task_tgid(current)); - current_uid_gid(&scm->creds.uid, &scm->creds.gid); - } - return err; -} - static bool unix_skb_scm_eq(struct sk_buff *skb, struct scm_cookie *scm) { @@ -2292,117 +2274,15 @@ static int unix_stream_sendmsg(struct socket *sock= , struct msghdr *msg, static ssize_t unix_stream_sendpage(struct socket *socket, struct page *pa= ge, int offset, size_t size, int flags) { - int err; - bool send_sigpipe =3D false; - bool init_scm =3D true; - struct scm_cookie scm; - struct sock *other, *sk =3D socket->sk; - struct sk_buff *skb, *newskb =3D NULL, *tail =3D NULL; - - if (flags & MSG_OOB) - return -EOPNOTSUPP; + struct bio_vec bvec; + struct msghdr msg =3D { .msg_flags =3D flags | MSG_SPLICE_PAGES }; =20 - other =3D unix_peer(sk); - if (!other || sk->sk_state !=3D TCP_ESTABLISHED) - return -ENOTCONN; - - if (false) { -alloc_skb: - unix_state_unlock(other); - mutex_unlock(&unix_sk(other)->iolock); - newskb =3D sock_alloc_send_pskb(sk, 0, 0, flags & MSG_DONTWAIT, - &err, 0); - if (!newskb) - goto err; - } - - /* we must acquire iolock as we modify already present - * skbs in the sk_receive_queue and mess with skb->len - */ - err =3D mutex_lock_interruptible(&unix_sk(other)->iolock); - if (err) { - err =3D flags & MSG_DONTWAIT ? 
-		err = flags & MSG_DONTWAIT ? -EAGAIN : -ERESTARTSYS;
-		goto err;
-	}
-
-	if (sk->sk_shutdown & SEND_SHUTDOWN) {
-		err = -EPIPE;
-		send_sigpipe = true;
-		goto err_unlock;
-	}
-
-	unix_state_lock(other);
+	if (flags & MSG_SENDPAGE_NOTLAST)
+		msg.msg_flags |= MSG_MORE;
 
-	if (sock_flag(other, SOCK_DEAD) ||
-	    other->sk_shutdown & RCV_SHUTDOWN) {
-		err = -EPIPE;
-		send_sigpipe = true;
-		goto err_state_unlock;
-	}
-
-	if (init_scm) {
-		err = maybe_init_creds(&scm, socket, other);
-		if (err)
-			goto err_state_unlock;
-		init_scm = false;
-	}
-
-	skb = skb_peek_tail(&other->sk_receive_queue);
-	if (tail && tail == skb) {
-		skb = newskb;
-	} else if (!skb || !unix_skb_scm_eq(skb, &scm)) {
-		if (newskb) {
-			skb = newskb;
-		} else {
-			tail = skb;
-			goto alloc_skb;
-		}
-	} else if (newskb) {
-		/* this is fast path, we don't necessarily need to
-		 * call to kfree_skb even though with newskb == NULL
-		 * this - does no harm
-		 */
-		consume_skb(newskb);
-		newskb = NULL;
-	}
-
-	if (skb_append_pagefrags(skb, page, offset, size, MAX_SKB_FRAGS)) {
-		tail = skb;
-		goto alloc_skb;
-	}
-
-	skb->len += size;
-	skb->data_len += size;
-	skb->truesize += size;
-	refcount_add(size, &sk->sk_wmem_alloc);
-
-	if (newskb) {
-		err = unix_scm_to_skb(&scm, skb, false);
-		if (err)
-			goto err_state_unlock;
-		spin_lock(&other->sk_receive_queue.lock);
-		__skb_queue_tail(&other->sk_receive_queue, newskb);
-		spin_unlock(&other->sk_receive_queue.lock);
-	}
-
-	unix_state_unlock(other);
-	mutex_unlock(&unix_sk(other)->iolock);
-
-	other->sk_data_ready(other);
-	scm_destroy(&scm);
-	return size;
-
-err_state_unlock:
-	unix_state_unlock(other);
-err_unlock:
-	mutex_unlock(&unix_sk(other)->iolock);
-err:
-	kfree_skb(newskb);
-	if (send_sigpipe && !(flags & MSG_NOSIGNAL))
-		send_sig(SIGPIPE, current, 0);
-	if (!init_scm)
-		scm_destroy(&scm);
-	return err;
+	bvec_set_page(&bvec, page, size, offset);
+	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
+	return unix_stream_sendmsg(socket, &msg, size);
 }
 
 static int unix_seqpacket_sendmsg(struct socket *sock, struct msghdr *msg,
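Taken together, each ->sendpage() implementation converted in this series
collapses to the same small wrapper around the protocol's sendmsg().  An
illustrative summary of that common shape (the function name here is
hypothetical; the real patches open-code it per protocol):

	/* Hypothetical sketch of the wrapper shape shared by the converted
	 * ->sendpage() implementations in this series. */
	static ssize_t sendpage_via_sendmsg(struct socket *sock,
					    struct page *page, int offset,
					    size_t size, int flags)
	{
		struct bio_vec bvec;
		struct msghdr msg = { .msg_flags = flags | MSG_SPLICE_PAGES };

		if (flags & MSG_SENDPAGE_NOTLAST)
			msg.msg_flags |= MSG_MORE;	/* preserve corking hint */

		bvec_set_page(&bvec, page, size, offset);
		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
		return sock_sendmsg(sock, &msg);
	}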