[PATCH v3 mptcp-next 00/10] mptcp: address stall under memory pressure

Paolo Abeni posted 10 patches 19 hours ago
git fetch https://github.com/multipath-tcp/mptcp_net-next tags/patchew/cover.1777908248.git.pabeni@redhat.com
This is an attempt to fix the data transfer stall reported by Geliang and
Gang, by enforcing memory constraints more carefully at the MPTCP level.

Patch 1/10 moves the receive buffer bound check to before the data enters
the TCP socket.
Patches 2, 3, 4 and 5 are cleanups/refactors aimed at safely re-using
TCP helpers on MPTCP skbs.
Patch 6 makes the TCP pruning-related helpers available to MPTCP, and
patch 7 makes use of them. Patch 8 addresses an edge case that could
still lead to a transfer stall under memory pressure.
Finally, patches 9 and 10 improve the MPTCP-level retransmission scheme
to make recovery from memory pressure significantly faster.

Note that the diffstat is biased by the quite large patch 4, which
contains a mechanical transformation of existing code; the "real" changes
are noticeably smaller.

Tested successfully against the test cases proposed by Geliang and Gang
and against the selftests.
---
Some notes on each patch WRT issues noticed by sashiko so far that were
ignored or are false positives.

Paolo Abeni (10):
  mptcp: move checks vs rcvbuf size earlier in the RX path
  mptcp: drop the mptcp_ooo_try_coalesce() helper
  mptcp: drop the cant_coalesce CB field
  mptcp: remove CB offset field
  mptcp: sync mptcp skb cb layout with tcp one
  tcp: expose the tcp_collapse_ofo_queue() helper to mptcp usage, too
  mptcp: implement OoO queue pruning
  mptcp: track prune recovery status
  mptcp: move the retrans loop to a separate helper
  mptcp: let the retrans scheduler do its job.

 include/net/tcp.h    |   8 +
 net/ipv4/tcp_input.c |  55 +++---
 net/mptcp/fastopen.c |  17 +-
 net/mptcp/mib.c      |   3 +
 net/mptcp/mib.h      |   3 +
 net/mptcp/options.c  |  64 ++++++-
 net/mptcp/protocol.c | 399 ++++++++++++++++++++++++++++---------------
 net/mptcp/protocol.h |  24 ++-
 net/mptcp/subflow.c  |  11 ++
 9 files changed, 414 insertions(+), 170 deletions(-)

-- 
2.54.0
Re: [PATCH v3 mptcp-next 00/10] mptcp: address stall under memory pressure
Posted by MPTCP CI 18 hours ago
Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal (except selftest_mptcp_join): Success! ✅
- KVM Validation: normal (only selftest_mptcp_join): Success! ✅
- KVM Validation: debug (except selftest_mptcp_join): Unstable: 1 failed test(s): packetdrill_sockopts ⚠️ 
- KVM Validation: debug (only selftest_mptcp_join): Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/25329194631

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/4ee2213ed212
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=1089374


If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)