From: Paolo Abeni <pabeni@redhat.com>
To: mptcp@lists.linux.dev
Subject: [PATCH mptcp-next 2/4] mptcp: stop relying on tcp_tx_skb_cache
Date: Thu, 2 Sep 2021 16:20:30 +0200
Message-Id: <77a77e43a30e0338a1b7a0c811b8e6451862eb02.1630591985.git.pabeni@redhat.com>

We want to revert the skb TX cache, but MPTCP currently uses it
unconditionally. Rework the MPTCP tx code so that tcp_tx_skb_cache is no
longer needed: do the whole coalescing check, skb allocation and skb
initialization/update inside mptcp_sendmsg_frag(), much like the current
TCP code.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
 net/mptcp/protocol.c | 131 +++++++++++++++++++++++++------------------
 1 file changed, 76 insertions(+), 55 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index faf6e7000d18..98fdb0ebd68d 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1224,6 +1224,7 @@ static struct sk_buff *__mptcp_do_alloc_tx_skb(struct sock *sk, gfp_t gfp)
 		if (likely(__mptcp_add_ext(skb, gfp))) {
 			skb_reserve(skb, MAX_TCP_HEADER);
 			skb->reserved_tailroom = skb->end - skb->tail;
+			INIT_LIST_HEAD(&skb->tcp_tsorted_anchor);
 			return skb;
 		}
 		__kfree_skb(skb);
@@ -1233,31 +1234,23 @@ static struct sk_buff *__mptcp_do_alloc_tx_skb(struct sock *sk, gfp_t gfp)
 	return NULL;
 }
 
-static bool __mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, gfp_t gfp)
+static struct sk_buff *__mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, gfp_t gfp)
 {
 	struct sk_buff *skb;
 
-	if (ssk->sk_tx_skb_cache) {
-		skb = ssk->sk_tx_skb_cache;
-		if (unlikely(!skb_ext_find(skb, SKB_EXT_MPTCP) &&
-			     !__mptcp_add_ext(skb, gfp)))
-			return false;
-		return true;
-	}
-
 	skb = __mptcp_do_alloc_tx_skb(sk, gfp);
 	if (!skb)
-		return false;
+		return NULL;
 
 	if (likely(sk_wmem_schedule(ssk, skb->truesize))) {
-		ssk->sk_tx_skb_cache = skb;
-		return true;
+		skb_entail(ssk, skb);
+		return skb;
 	}
 	kfree_skb(skb);
-	return false;
+	return NULL;
 }
 
-static bool mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, bool data_lock_held)
+static struct sk_buff *mptcp_alloc_tx_skb(struct sock *sk, struct sock *ssk, bool data_lock_held)
 {
 	gfp_t gfp = data_lock_held ? GFP_ATOMIC : sk->sk_allocation;
 
@@ -1287,23 +1280,29 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
 			      struct mptcp_sendmsg_info *info)
 {
 	u64 data_seq = dfrag->data_seq + info->sent;
+	int offset = dfrag->offset + info->sent;
 	struct mptcp_sock *msk = mptcp_sk(sk);
 	bool zero_window_probe = false;
 	struct mptcp_ext *mpext = NULL;
-	struct sk_buff *skb, *tail;
-	bool must_collapse = false;
-	int size_bias = 0;
-	int avail_size;
-	size_t ret = 0;
+	bool can_coalesce = false;
+	bool reuse_skb = true;
+	struct sk_buff *skb;
+	size_t copy;
+	int i = 0;
 
 	pr_debug("msk=%p ssk=%p sending dfrag at seq=%llu len=%u already sent=%u",
 		 msk, ssk, dfrag->data_seq, dfrag->data_len, info->sent);
 
+	if (WARN_ON_ONCE(info->sent > info->limit ||
+			 info->limit > dfrag->data_len))
+		return 0;
+
 	/* compute send limit */
 	info->mss_now = tcp_send_mss(ssk, &info->size_goal, info->flags);
-	avail_size = info->size_goal;
+	copy = info->size_goal;
+
 	skb = tcp_write_queue_tail(ssk);
-	if (skb) {
+	if (skb && (copy > skb->len)) {
 		/* Limit the write to the size available in the
 		 * current skb, if any, so that we create at most a new skb.
 		 * Explicitly tells TCP internals to avoid collapsing on later
@@ -1316,53 +1315,75 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
 			goto alloc_skb;
 		}
 
-		must_collapse = (info->size_goal - skb->len > 0) &&
-				(skb_shinfo(skb)->nr_frags < sysctl_max_skb_frags);
-		if (must_collapse) {
-			size_bias = skb->len;
-			avail_size = info->size_goal - skb->len;
+		i = skb_shinfo(skb)->nr_frags;
+		can_coalesce = skb_can_coalesce(skb, i, dfrag->page, offset);
+		if (!can_coalesce && i >= sysctl_max_skb_frags) {
+			tcp_mark_push(tcp_sk(ssk), skb);
+			goto alloc_skb;
 		}
-	}
 
+		copy -= skb->len;
+	} else {
 alloc_skb:
-	if (!must_collapse && !ssk->sk_tx_skb_cache &&
-	    !mptcp_alloc_tx_skb(sk, ssk, info->data_lock_held))
-		return 0;
+		skb = mptcp_alloc_tx_skb(sk, ssk, info->data_lock_held);
+		if (!skb)
+			return -ENOMEM;
+
+		reuse_skb = false;
+		mpext = skb_ext_find(skb, SKB_EXT_MPTCP);
+	}
 
 	/* Zero window and all data acked? Probe. */
-	avail_size = mptcp_check_allowed_size(msk, data_seq, avail_size);
-	if (avail_size == 0) {
+	copy = mptcp_check_allowed_size(msk, data_seq, copy);
+	if (copy == 0) {
 		u64 snd_una = READ_ONCE(msk->snd_una);
 
-		if (skb || snd_una != msk->snd_nxt)
+		if (skb || snd_una != msk->snd_nxt) {
+			tcp_remove_empty_skb(ssk, tcp_write_queue_tail(ssk));
 			return 0;
+		}
+
 		zero_window_probe = true;
 		data_seq = snd_una - 1;
-		avail_size = 1;
-	}
+		copy = 1;
 
-	if (WARN_ON_ONCE(info->sent > info->limit ||
-			 info->limit > dfrag->data_len))
-		return 0;
+		/* all mptcp-level data is acked, no skbs should be present into the
+		 * ssk write queue
+		 */
+		WARN_ON_ONCE(reuse_skb);
+	}
 
-	ret = info->limit - info->sent;
-	tail = tcp_build_frag(ssk, avail_size + size_bias, info->flags,
-			      dfrag->page, dfrag->offset + info->sent, &ret);
-	if (!tail) {
-		tcp_remove_empty_skb(sk, tcp_write_queue_tail(ssk));
+	copy = min_t(size_t, copy, info->limit - info->sent);
+	if (!sk_wmem_schedule(ssk, copy)) {
+		tcp_remove_empty_skb(ssk, tcp_write_queue_tail(ssk));
 		return -ENOMEM;
 	}
 
-	/* if the tail skb is still the cached one, collapsing really happened.
-	 */
-	if (skb == tail) {
-		TCP_SKB_CB(tail)->tcp_flags &= ~TCPHDR_PSH;
-		mpext->data_len += ret;
+	if (can_coalesce) {
+		skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+	} else {
+		get_page(dfrag->page);
+		skb_fill_page_desc(skb, i, dfrag->page, offset, copy);
+	}
+
+	skb->len += copy;
+	skb->data_len += copy;
+	skb->truesize += copy;
+	sk_wmem_queued_add(ssk, copy);
+	sk_mem_charge(ssk, copy);
+	skb->ip_summed = CHECKSUM_PARTIAL;
+	WRITE_ONCE(tcp_sk(ssk)->write_seq, tcp_sk(ssk)->write_seq + copy);
+	TCP_SKB_CB(skb)->end_seq += copy;
+	tcp_skb_pcount_set(skb, 0);
+
+	/* on skb reuse we just need to update the DSS len */
+	if (reuse_skb) {
+		TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_PSH;
+		mpext->data_len += copy;
 		WARN_ON_ONCE(zero_window_probe);
 		goto out;
 	}
 
-	mpext = skb_ext_find(tail, SKB_EXT_MPTCP);
 	if (WARN_ON_ONCE(!mpext)) {
 		/* should never reach here, stream corrupted */
 		return -EINVAL;
@@ -1371,7 +1392,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
 	memset(mpext, 0, sizeof(*mpext));
 	mpext->data_seq = data_seq;
 	mpext->subflow_seq = mptcp_subflow_ctx(ssk)->rel_write_seq;
-	mpext->data_len = ret;
+	mpext->data_len = copy;
 	mpext->use_map = 1;
 	mpext->dsn64 = 1;
 
@@ -1380,18 +1401,18 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
 		 mpext->dsn64);
 
 	if (zero_window_probe) {
-		mptcp_subflow_ctx(ssk)->rel_write_seq += ret;
+		mptcp_subflow_ctx(ssk)->rel_write_seq += copy;
 		mpext->frozen = 1;
 		if (READ_ONCE(msk->csum_enabled))
-			mptcp_update_data_checksum(tail, ret);
+			mptcp_update_data_checksum(skb, copy);
 		tcp_push_pending_frames(ssk);
 		return 0;
 	}
 out:
 	if (READ_ONCE(msk->csum_enabled))
-		mptcp_update_data_checksum(tail, ret);
-	mptcp_subflow_ctx(ssk)->rel_write_seq += ret;
-	return ret;
+		mptcp_update_data_checksum(skb, copy);
+	mptcp_subflow_ctx(ssk)->rel_write_seq += copy;
+	return copy;
 }
 
 #define MPTCP_SEND_BURST_SIZE		((1 << 16) - \
-- 
2.26.3