From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Eric Dumazet, Soheil Hassas Yeganeh, Wei Wang, Shakeel Butt, "David S. Miller", Sasha Levin
Subject: [PATCH 5.19 0364/1157] tcp: fix possible freeze in tx path under memory pressure
Date: Mon, 15 Aug 2022 19:55:20 +0200
Message-Id: <20220815180454.279576717@linuxfoundation.org>
In-Reply-To: <20220815180439.416659447@linuxfoundation.org>
References: <20220815180439.416659447@linuxfoundation.org>

From: Eric Dumazet

[ Upstream commit 849b425cd091e1804af964b771761cfbefbafb43 ]

The blamed commit only dealt with applications issuing small writes.

The issue here is that we allow forcing memory scheduling for the
sk_buff allocation, but we have no guarantee that sendmsg() is able
to copy any payload into it.

This patch makes sure the socket can use up to tcp_wmem[0] bytes.

For example, with tcp_wmem[0] = 4096 (the default on x86) and an
initial skb->truesize of 1280, tcp_sendmsg() is able to copy up to
4096 - 1280 = 2816 bytes under memory pressure.

Before this patch, a sendmsg() sending more than 2816 bytes would
either block forever (under persistent memory pressure) or return
-EAGAIN.

For bigger-MTU networks, it is advised to increase tcp_wmem[0] to
avoid sending packets that are too small.

v2: deal with zero-copy paths.
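As a quick illustration of the arithmetic above, here is a minimal
standalone userspace sketch (illustrative only, not part of the patch;
the variables are stand-ins for the kernel fields of the same names):

/*
 * Userspace sketch of the tcp_wmem_schedule() fallback arithmetic,
 * plugged with the example numbers from the commit message.
 * Not kernel code; values are illustrative.
 */
#include <stdio.h>

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

int main(void)
{
	int tcp_wmem0      = 4096;	/* tcp_wmem[0], default on x86 */
	int sk_wmem_queued = 1280;	/* initial skb->truesize already queued */
	int copy           = 8192;	/* bytes sendmsg() wants to copy */

	/* Under memory pressure, force-schedule only what is left of
	 * the tcp_wmem[0] budget, so sendmsg() still makes progress.
	 */
	int left = tcp_wmem0 - sk_wmem_queued;
	int granted = left > 0 ? min_int(left, copy) : 0;

	printf("bytes copyable under pressure: %d\n", granted);	/* 2816 */
	return 0;
}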
Fixes: 8e4d980ac215 ("tcp: fix behavior for epoll edge trigger")
Signed-off-by: Eric Dumazet
Acked-by: Soheil Hassas Yeganeh
Reviewed-by: Wei Wang
Reviewed-by: Shakeel Butt
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 net/ipv4/tcp.c | 33 +++++++++++++++++++++++++++++----
 1 file changed, 29 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 766881775abb..3ae2ea048883 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -952,6 +952,23 @@ static int tcp_downgrade_zcopy_pure(struct sock *sk, struct sk_buff *skb)
 	return 0;
 }
 
+static int tcp_wmem_schedule(struct sock *sk, int copy)
+{
+	int left;
+
+	if (likely(sk_wmem_schedule(sk, copy)))
+		return copy;
+
+	/* We could be in trouble if we have nothing queued.
+	 * Use whatever is left in sk->sk_forward_alloc and tcp_wmem[0]
+	 * to guarantee some progress.
+	 */
+	left = sock_net(sk)->ipv4.sysctl_tcp_wmem[0] - sk->sk_wmem_queued;
+	if (left > 0)
+		sk_forced_mem_schedule(sk, min(left, copy));
+	return min(copy, sk->sk_forward_alloc);
+}
+
 static struct sk_buff *tcp_build_frag(struct sock *sk, int size_goal, int flags,
 				      struct page *page, int offset, size_t *size)
 {
@@ -987,7 +1004,11 @@ static struct sk_buff *tcp_build_frag(struct sock *sk, int size_goal, int flags,
 		tcp_mark_push(tp, skb);
 		goto new_segment;
 	}
-	if (tcp_downgrade_zcopy_pure(sk, skb) || !sk_wmem_schedule(sk, copy))
+	if (tcp_downgrade_zcopy_pure(sk, skb))
+		return NULL;
+
+	copy = tcp_wmem_schedule(sk, copy);
+	if (!copy)
 		return NULL;
 
 	if (can_coalesce) {
@@ -1336,8 +1357,11 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 
 			copy = min_t(int, copy, pfrag->size - pfrag->offset);
 
-			if (tcp_downgrade_zcopy_pure(sk, skb) ||
-			    !sk_wmem_schedule(sk, copy))
+			if (tcp_downgrade_zcopy_pure(sk, skb))
+				goto wait_for_space;
+
+			copy = tcp_wmem_schedule(sk, copy);
+			if (!copy)
 				goto wait_for_space;
 
 			err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
@@ -1364,7 +1388,8 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
 				skb_shinfo(skb)->flags |= SKBFL_PURE_ZEROCOPY;
 
 			if (!skb_zcopy_pure(skb)) {
-				if (!sk_wmem_schedule(sk, copy))
+				copy = tcp_wmem_schedule(sk, copy);
+				if (!copy)
 					goto wait_for_space;
 			}
 
-- 
2.35.1
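For the tuning advice in the commit message (raising tcp_wmem[0] on
bigger-MTU networks), a small sketch follows that reads the current
tcp_wmem triple through the standard procfs sysctl mapping. It is
illustrative only and not part of the patch:

/*
 * Read net.ipv4.tcp_wmem (min, default, max) from procfs so an
 * operator can check whether tcp_wmem[0] needs raising.
 */
#include <stdio.h>

int main(void)
{
	int wmem_min, wmem_default, wmem_max;
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_wmem", "r");

	if (!f) {
		perror("open tcp_wmem");
		return 1;
	}
	if (fscanf(f, "%d %d %d", &wmem_min, &wmem_default, &wmem_max) != 3) {
		fclose(f);
		fprintf(stderr, "unexpected tcp_wmem format\n");
		return 1;
	}
	fclose(f);

	printf("tcp_wmem: min=%d default=%d max=%d\n",
	       wmem_min, wmem_default, wmem_max);

	/* Raising the minimum requires root and writes all three values
	 * back, e.g. sysctl -w net.ipv4.tcp_wmem="<new-min> <default> <max>";
	 * the right minimum depends on the network MTU.
	 */
	return 0;
}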