From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Alexander Duyck, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
    Matthew Wilcox, Jens Axboe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Bernard Metzler, Jason Gunthorpe,
    Leon Romanovsky, John Fastabend, Jakub Sitnicki, Karsten Graul,
    Wenjia Zhang, Jan Karcher, "D. Wythe", Tony Lu, Wen Gu,
    Boris Pismenny, Steffen Klassert, Herbert Xu, bpf@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: [PATCH net-next v5 01/16] tcp_bpf, smc, tls, espintcp, siw: Reduce MSG_SENDPAGE_NOTLAST usage
Date: Fri, 23 Jun 2023 23:54:58 +0100
Message-ID: <20230623225513.2732256-2-dhowells@redhat.com>
In-Reply-To: <20230623225513.2732256-1-dhowells@redhat.com>
References: <20230623225513.2732256-1-dhowells@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

As MSG_SENDPAGE_NOTLAST is being phased out along with sendpage(), don't
use it any further in than the sendpage methods, but rather translate it
to MSG_MORE at that boundary and use MSG_MORE from there on.
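For illustration only, here is a minimal userspace sketch of the flag
translation this series applies at the sendpage entry points.  The flag
values and the sendpage_flags_to_sendmsg_flags() helper below are invented
for the example (the real definitions live in include/linux/socket.h and
each protocol does the translation inside its own sendpage method); it
only models the idea that MSG_SENDPAGE_NOTLAST is folded into MSG_MORE at
the boundary so that everything deeper in the stack tests MSG_MORE only:

    #include <stdio.h>

    /* Illustrative stand-ins for the real flag bits. */
    #define MSG_MORE		0x8000
    #define MSG_SENDPAGE_NOTLAST	0x20000000

    /*
     * Model of the boundary translation: a sendpage wrapper folds the
     * legacy "not the last page" hint into MSG_MORE and drops the hint,
     * so the inner sendmsg path never sees MSG_SENDPAGE_NOTLAST.
     */
    static int sendpage_flags_to_sendmsg_flags(int flags)
    {
            if (flags & MSG_SENDPAGE_NOTLAST)
                    flags |= MSG_MORE;
            return flags & ~MSG_SENDPAGE_NOTLAST;
    }

    int main(void)
    {
            int flags = sendpage_flags_to_sendmsg_flags(MSG_SENDPAGE_NOTLAST);

            printf("MSG_MORE set: %d, NOTLAST set: %d\n",
                   !!(flags & MSG_MORE), !!(flags & MSG_SENDPAGE_NOTLAST));
            return 0;
    }

This mirrors, for example, the smc_tx_sendpage() hunk below, which sets
MSG_MORE in the locally built msghdr whenever the caller passed
MSG_SENDPAGE_NOTLAST.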
Signed-off-by: David Howells
cc: Willem de Bruijn
cc: Bernard Metzler
cc: Jason Gunthorpe
cc: Leon Romanovsky
cc: John Fastabend
cc: Jakub Sitnicki
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Karsten Graul
cc: Wenjia Zhang
cc: Jan Karcher
cc: "D. Wythe"
cc: Tony Lu
cc: Wen Gu
cc: Boris Pismenny
cc: Steffen Klassert
cc: Herbert Xu
cc: netdev@vger.kernel.org
cc: bpf@vger.kernel.org
cc: linux-s390@vger.kernel.org
cc: linux-rdma@vger.kernel.org
---

Notes:
    ver #3)
     - In tcp_bpf, reset msg_flags on each iteration to clear MSG_MORE.
     - In tcp_bpf, set MSG_MORE if there's more data in the sk_msg.

 drivers/infiniband/sw/siw/siw_qp_tx.c |  5 ++---
 net/ipv4/tcp_bpf.c                    |  5 +++--
 net/smc/smc_tx.c                      |  6 ++++--
 net/tls/tls_device.c                  |  4 ++--
 net/xfrm/espintcp.c                   | 10 ++++++----
 5 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
index ffb16beb6c30..7c7a51d36d0c 100644
--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
@@ -325,8 +325,7 @@ static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
 {
 	struct bio_vec bvec;
 	struct msghdr msg = {
-		.msg_flags = (MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST |
-			      MSG_SPLICE_PAGES),
+		.msg_flags = (MSG_MORE | MSG_DONTWAIT | MSG_SPLICE_PAGES),
 	};
 	struct sock *sk = s->sk;
 	int i = 0, rv = 0, sent = 0;
@@ -335,7 +334,7 @@ static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
 		size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
 
 		if (size + offset <= PAGE_SIZE)
-			msg.msg_flags &= ~MSG_SENDPAGE_NOTLAST;
+			msg.msg_flags &= ~MSG_MORE;
 
 		tcp_rate_check_app_limited(sk);
 		bvec_set_page(&bvec, page[i], bytes, offset);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 5a84053ac62b..31d6005cea9b 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -88,9 +88,9 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
 static int tcp_bpf_push(struct sock *sk, struct sk_msg *msg, u32 apply_bytes,
 			int flags, bool uncharge)
 {
+	struct msghdr msghdr = {};
 	bool apply = apply_bytes;
 	struct scatterlist *sge;
-	struct msghdr msghdr = { .msg_flags = flags | MSG_SPLICE_PAGES, };
 	struct page *page;
 	int size, ret = 0;
 	u32 off;
@@ -107,11 +107,12 @@ static int tcp_bpf_push(struct sock *sk, struct sk_msg *msg, u32 apply_bytes,
 
 		tcp_rate_check_app_limited(sk);
 retry:
+		msghdr.msg_flags = flags | MSG_SPLICE_PAGES;
 		has_tx_ulp = tls_sw_has_ctx_tx(sk);
 		if (has_tx_ulp)
 			msghdr.msg_flags |= MSG_SENDPAGE_NOPOLICY;
 
-		if (flags & MSG_SENDPAGE_NOTLAST)
+		if (size < sge->length && msg->sg.start != msg->sg.end)
 			msghdr.msg_flags |= MSG_MORE;
 
 		bvec_set_page(&bvec, page, size, off);
diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
index 45128443f1f1..9b9e0a190734 100644
--- a/net/smc/smc_tx.c
+++ b/net/smc/smc_tx.c
@@ -168,8 +168,7 @@ static bool smc_tx_should_cork(struct smc_sock *smc, struct msghdr *msg)
 	 * should known how/when to uncork it.
 	 */
 	if ((msg->msg_flags & MSG_MORE ||
-	     smc_tx_is_corked(smc) ||
-	     msg->msg_flags & MSG_SENDPAGE_NOTLAST) &&
+	     smc_tx_is_corked(smc)) &&
 	    atomic_read(&conn->sndbuf_space))
 		return true;
 
@@ -306,6 +305,9 @@ int smc_tx_sendpage(struct smc_sock *smc, struct page *page, int offset,
 	struct kvec iov;
 	int rc;
 
+	if (flags & MSG_SENDPAGE_NOTLAST)
+		msg.msg_flags |= MSG_MORE;
+
 	iov.iov_base = kaddr + offset;
 	iov.iov_len = size;
 	iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, size);
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index b82770f68807..975299d7213b 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -449,7 +449,7 @@ static int tls_push_data(struct sock *sk,
 		return -sk->sk_err;
 
 	flags |= MSG_SENDPAGE_DECRYPTED;
-	tls_push_record_flags = flags | MSG_SENDPAGE_NOTLAST;
+	tls_push_record_flags = flags | MSG_MORE;
 
 	timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT);
 	if (tls_is_partially_sent_record(tls_ctx)) {
@@ -532,7 +532,7 @@ static int tls_push_data(struct sock *sk,
 		if (!size) {
 last_record:
 			tls_push_record_flags = flags;
-			if (flags & (MSG_SENDPAGE_NOTLAST | MSG_MORE)) {
+			if (flags & MSG_MORE) {
 				more = true;
 				break;
 			}
diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
index 3504925babdb..d3b3f9e720b3 100644
--- a/net/xfrm/espintcp.c
+++ b/net/xfrm/espintcp.c
@@ -205,13 +205,15 @@ static int espintcp_sendskb_locked(struct sock *sk, struct espintcp_msg *emsg,
 static int espintcp_sendskmsg_locked(struct sock *sk,
 				     struct espintcp_msg *emsg, int flags)
 {
-	struct msghdr msghdr = { .msg_flags = flags | MSG_SPLICE_PAGES, };
+	struct msghdr msghdr = {
+		.msg_flags = flags | MSG_SPLICE_PAGES | MSG_MORE,
+	};
 	struct sk_msg *skmsg = &emsg->skmsg;
+	bool more = flags & MSG_MORE;
 	struct scatterlist *sg;
 	int done = 0;
 	int ret;
 
-	msghdr.msg_flags |= MSG_SENDPAGE_NOTLAST;
 	sg = &skmsg->sg.data[skmsg->sg.start];
 	do {
 		struct bio_vec bvec;
@@ -221,8 +223,8 @@ static int espintcp_sendskmsg_locked(struct sock *sk,
 
 		emsg->offset = 0;
 
-		if (sg_is_last(sg))
-			msghdr.msg_flags &= ~MSG_SENDPAGE_NOTLAST;
+		if (sg_is_last(sg) && !more)
+			msghdr.msg_flags &= ~MSG_MORE;
 
 		p = sg_page(sg);
 retry: