This series includes several changes to the MPTCP RX path. The main goals
are improving RX performance _and_ increasing long-term maintainability.

Some changes reflect recent (or not so recent) improvements introduced in
the TCP stack: patches 1, 2 and 3 are the MPTCP counterpart of the skb
deferral free and auto-tuning improvements. Note that patch 3 could
possibly fix issues/574 and, overall, such a patch should protect against
similar issues arising in the future.

All the other patches aim at introducing socket backlog usage to process
the packets received by the subflows while the msk socket is owned. That
(almost completely) replaces the processing currently happening in
mptcp_release_cb(). The actual job is done in patch 9, while the others
are cleanups needed to keep the change tidy and to allow further
follow-up cleanups.

Sharing this earlier, with known issues (at least on fallback sockets),
to raise awareness of this upcoming work.

v1 -> v2:
 - fix compile warning in patch 3
 - removed unneeded arg in patch 4
 - commit message clarification and rebase

Paolo Abeni (12):
  mptcp: leverage skb deferral free
  tcp: make tcp_rcvbuf_grow() accessible to mptcp code
  mptcp: rcvbuf auto-tuning improvement
  mptcp: introduce the mptcp_init_skb helper.
  mptcp: remove unneeded mptcp_move_skb()
  mptcp: factor out a basic skb coalesce helper
  mptcp: minor move_skbs_to_msk() cleanup
  mptcp: cleanup fallback data fin reception
  mptcp: leverage the sk backlog for RX packet processing.
  mptcp: prevent __mptcp_move_skbs() interfering with the fastpath
  mptcp: borrow forward memory from subflow
  mptcp: make fallback backlog aware

 include/net/tcp.h    |   1 +
 net/ipv4/tcp_input.c |   2 +-
 net/mptcp/ctrl.c     |   2 +
 net/mptcp/mib.c      |   2 +
 net/mptcp/mib.h      |   4 +
 net/mptcp/protocol.c | 335 ++++++++++++++++++++++++-------------------
 net/mptcp/protocol.h |   8 +-
 net/mptcp/subflow.c  |  24 ++--
 8 files changed, 217 insertions(+), 161 deletions(-)

-- 
2.51.0
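For readers less familiar with the mechanism the cover letter refers to:
"leveraging the sk backlog" means following the usual kernel pattern where,
if the socket is currently owned by process context, incoming skbs are
queued to sk->sk_backlog and drained later by release_sock() through the
protocol's sk_backlog_rcv() callback. Below is a minimal, illustrative
sketch of that generic pattern; proto_rcv() and proto_do_rcv() are made-up
names and this is not the actual code added by the series:

	#include <net/sock.h>
	#include <linux/skbuff.h>

	/* hypothetical per-protocol skb processing helper; also wired up
	 * as sk->sk_backlog_rcv so release_sock() can invoke it for each
	 * backlogged skb
	 */
	static int proto_do_rcv(struct sock *sk, struct sk_buff *skb);

	static int proto_rcv(struct sock *sk, struct sk_buff *skb)
	{
		int ret = 0;

		bh_lock_sock(sk);
		if (!sock_owned_by_user(sk)) {
			/* fast path: process directly in softirq context */
			ret = proto_do_rcv(sk, skb);
		} else if (sk_add_backlog(sk, skb, READ_ONCE(sk->sk_rcvbuf))) {
			/* backlog limit exceeded: drop the packet */
			kfree_skb(skb);
			ret = -ENOBUFS;
		}
		bh_unlock_sock(sk);
		return ret;
	}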
Hi Paolo,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Unstable: 5 failed test(s):
  packetdrill_mp_capable selftest_mptcp_connect
  selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
  selftest_mptcp_connect_sendfile 🔴
- KVM Validation: debug: Unstable: 5 failed test(s):
  packetdrill_mp_capable selftest_mptcp_connect
  selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
  selftest_mptcp_connect_sendfile 🔴
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task:
  https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17836736262

Initiator: Patchew Applier
Commits:
https://github.com/multipath-tcp/mptcp_net-next/commits/20282b41a458
Patchwork:
https://patchwork.kernel.org/project/mptcp/list/?series=1003937


If there are some issues, you can reproduce them using the same
environment as the one used by the CI thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts that have been already done to
have a stable tests suite when executed on a public CI like here, it is
possible some reported issues are not due to your modifications. Still,
do not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
Hi Paolo,

On Thu, 2025-09-18 at 20:33 +0000, MPTCP CI wrote:
> Hi Paolo,
> 
> Thank you for your modifications, that's great!
> 
> Our CI did some validations and here is its report:
> 
> - KVM Validation: normal: Unstable: 5 failed test(s):
>   packetdrill_mp_capable selftest_mptcp_connect
>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>   selftest_mptcp_connect_sendfile 🔴
> - KVM Validation: debug: Unstable: 5 failed test(s):
>   packetdrill_mp_capable selftest_mptcp_connect
>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>   selftest_mptcp_connect_sendfile 🔴
> - KVM Validation: btf-normal (only bpftest_all): Success! ✅
> - KVM Validation: btf-debug (only bpftest_all): Success! ✅
> - Task:
>   https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17836736262

CI reports that this series breaks the mptcp_connect.sh test:

# INFO: disconnect
# 63 ns1 MPTCP -> ns1 (10.0.1.1:20001   ) MPTCP   (duration  1348ms) [ OK ]
# 64 ns1 MPTCP -> ns1 (10.0.1.1:20002   ) TCP     main_loop_s: timed out
# (duration 61185ms) [FAIL] client exit code 124, server 2
#
# netns ns1-HugJD9 (listener) socket stat for 20002:
# Netid State     Recv-Q Send-Q Local Address:Port Peer Address:Port
# tcp   TIME-WAIT 0      0      10.0.1.1:20002     10.0.1.1:47722
#       timer:(timewait,,0) ino:0 sk:2066
# ^I
# TcpActiveOpens    2  0.0
# TcpPassiveOpens   2  0.0

I also tested it on my end and found that starting from patch 8 "mptcp:
cleanup fallback data fin reception", the mptcp_connect.sh test has been
failing.

I apologize for not providing this feedback during the v1 review, but I
was too busy yesterday to complete the testing.

Thanks,
-Geliang

> 
> Initiator: Patchew Applier
> Commits:
> https://github.com/multipath-tcp/mptcp_net-next/commits/20282b41a458
> Patchwork:
> https://patchwork.kernel.org/project/mptcp/list/?series=1003937
> 
> 
> If there are some issues, you can reproduce them using the same
> environment as the one used by the CI thanks to a docker image, e.g.:
> 
>     $ cd [kernel source code]
>     $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
>         --pull always mptcp/mptcp-upstream-virtme-docker:latest \
>         auto-normal
> 
> For more details:
> 
>     https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
> 
> 
> Please note that despite all the efforts that have been already done
> to have a stable tests suite when executed on a public CI like here,
> it is possible some reported issues are not due to your modifications.
> Still, do not hesitate to help us improve that ;-)
> 
> Cheers,
> MPTCP GH Action bot
> Bot operated by Matthieu Baerts (NGI0 Core)
On 9/19/25 4:22 AM, Geliang Tang wrote:
> On Thu, 2025-09-18 at 20:33 +0000, MPTCP CI wrote:
>> Hi Paolo,
>>
>> Thank you for your modifications, that's great!
>>
>> Our CI did some validations and here is its report:
>>
>> - KVM Validation: normal: Unstable: 5 failed test(s):
>>   packetdrill_mp_capable selftest_mptcp_connect
>>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>>   selftest_mptcp_connect_sendfile 🔴
>> - KVM Validation: debug: Unstable: 5 failed test(s):
>>   packetdrill_mp_capable selftest_mptcp_connect
>>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>>   selftest_mptcp_connect_sendfile 🔴
>> - KVM Validation: btf-normal (only bpftest_all): Success! ✅
>> - KVM Validation: btf-debug (only bpftest_all): Success! ✅
>> - Task:
>>   https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17836736262
>
> CI reports that this series breaks the mptcp_connect.sh test:
>
> # INFO: disconnect
> # 63 ns1 MPTCP -> ns1 (10.0.1.1:20001   ) MPTCP   (duration  1348ms) [ OK ]
> # 64 ns1 MPTCP -> ns1 (10.0.1.1:20002   ) TCP     main_loop_s: timed out
> # (duration 61185ms) [FAIL] client exit code 124, server 2
> #
> # netns ns1-HugJD9 (listener) socket stat for 20002:
> # Netid State     Recv-Q Send-Q Local Address:Port Peer Address:Port
> # tcp   TIME-WAIT 0      0      10.0.1.1:20002     10.0.1.1:47722
> #       timer:(timewait,,0) ino:0 sk:2066
> # ^I
> # TcpActiveOpens    2  0.0
> # TcpPassiveOpens   2  0.0
>
> I also tested it on my end and found that starting from patch 8 "mptcp:
> cleanup fallback data fin reception", the mptcp_connect.sh test has
> been failing.
>
> I apologize for not providing this feedback during the v1 review, but I
> was too busy yesterday to complete the testing.

Thank you very much for bisecting. I did not observe the failures on v1,
but I see them on v2 (which is quite surprising, given there are no big
changes there). I submitted v2 anyway to give syzkaller/CI a chance to
run more.

I think patches 1-3 should be sane and effective; we can probably merge
them.

Thanks,

Paolo
Hi Paolo,

On 19/09/2025 08:54, Paolo Abeni wrote:
> On 9/19/25 4:22 AM, Geliang Tang wrote:
>> On Thu, 2025-09-18 at 20:33 +0000, MPTCP CI wrote:
>>> Hi Paolo,
>>>
>>> Thank you for your modifications, that's great!
>>>
>>> Our CI did some validations and here is its report:
>>>
>>> - KVM Validation: normal: Unstable: 5 failed test(s):
>>>   packetdrill_mp_capable selftest_mptcp_connect
>>>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>>>   selftest_mptcp_connect_sendfile 🔴
>>> - KVM Validation: debug: Unstable: 5 failed test(s):
>>>   packetdrill_mp_capable selftest_mptcp_connect
>>>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>>>   selftest_mptcp_connect_sendfile 🔴
>>> - KVM Validation: btf-normal (only bpftest_all): Success! ✅
>>> - KVM Validation: btf-debug (only bpftest_all): Success! ✅
>>> - Task:
>>>   https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17836736262
>>
>> CI reports that this series breaks the mptcp_connect.sh test:
>>
>> # INFO: disconnect
>> # 63 ns1 MPTCP -> ns1 (10.0.1.1:20001   ) MPTCP   (duration  1348ms) [ OK ]
>> # 64 ns1 MPTCP -> ns1 (10.0.1.1:20002   ) TCP     main_loop_s: timed out
>> # (duration 61185ms) [FAIL] client exit code 124, server 2
>> #
>> # netns ns1-HugJD9 (listener) socket stat for 20002:
>> # Netid State     Recv-Q Send-Q Local Address:Port Peer Address:Port
>> # tcp   TIME-WAIT 0      0      10.0.1.1:20002     10.0.1.1:47722
>> #       timer:(timewait,,0) ino:0 sk:2066
>> # ^I
>> # TcpActiveOpens    2  0.0
>> # TcpPassiveOpens   2  0.0
>>
>> I also tested it on my end and found that starting from patch 8 "mptcp:
>> cleanup fallback data fin reception", the mptcp_connect.sh test has
>> been failing.
>>
>> I apologize for not providing this feedback during the v1 review, but I
>> was too busy yesterday to complete the testing.
>
> Thank you very much for bisecting. I did not observe the failures on v1,
> but I see them on v2 (which is quite surprising, given there are no big
> changes there). I submitted v2 anyway to give syzkaller/CI a chance to
> run more.

It looks like the CI had the same issues with the v1:

https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17795536153

> I think patches 1-3 should be sane and effective; we can probably merge
> them.

Thank you for that!

Cheers,
Matt

-- 
Sponsored by the NGI0 Core fund.
On 19/09/2025 10:14, Matthieu Baerts wrote:
> Hi Paolo,
>
> On 19/09/2025 08:54, Paolo Abeni wrote:
>> On 9/19/25 4:22 AM, Geliang Tang wrote:
>>> On Thu, 2025-09-18 at 20:33 +0000, MPTCP CI wrote:
>>>> Hi Paolo,
>>>>
>>>> Thank you for your modifications, that's great!
>>>>
>>>> Our CI did some validations and here is its report:
>>>>
>>>> - KVM Validation: normal: Unstable: 5 failed test(s):
>>>>   packetdrill_mp_capable selftest_mptcp_connect
>>>>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>>>>   selftest_mptcp_connect_sendfile 🔴
>>>> - KVM Validation: debug: Unstable: 5 failed test(s):
>>>>   packetdrill_mp_capable selftest_mptcp_connect
>>>>   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
>>>>   selftest_mptcp_connect_sendfile 🔴
>>>> - KVM Validation: btf-normal (only bpftest_all): Success! ✅
>>>> - KVM Validation: btf-debug (only bpftest_all): Success! ✅
>>>> - Task:
>>>>   https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17836736262
>>>
>>> CI reports that this series breaks the mptcp_connect.sh test:
>>>
>>> # INFO: disconnect
>>> # 63 ns1 MPTCP -> ns1 (10.0.1.1:20001   ) MPTCP   (duration  1348ms) [ OK ]
>>> # 64 ns1 MPTCP -> ns1 (10.0.1.1:20002   ) TCP     main_loop_s: timed out
>>> # (duration 61185ms) [FAIL] client exit code 124, server 2
>>> #
>>> # netns ns1-HugJD9 (listener) socket stat for 20002:
>>> # Netid State     Recv-Q Send-Q Local Address:Port Peer Address:Port
>>> # tcp   TIME-WAIT 0      0      10.0.1.1:20002     10.0.1.1:47722
>>> #       timer:(timewait,,0) ino:0 sk:2066
>>> # ^I
>>> # TcpActiveOpens    2  0.0
>>> # TcpPassiveOpens   2  0.0
>>>
>>> I also tested it on my end and found that starting from patch 8 "mptcp:
>>> cleanup fallback data fin reception", the mptcp_connect.sh test has
>>> been failing.
>>>
>>> I apologize for not providing this feedback during the v1 review, but I
>>> was too busy yesterday to complete the testing.
>>
>> Thank you very much for bisecting. I did not observe the failures on v1,
>> but I see them on v2 (which is quite surprising, given there are no big
>> changes there). I submitted v2 anyway to give syzkaller/CI a chance to
>> run more.
>
> It looks like the CI had the same issues with the v1:
>
> https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17795536153

Note that these issues were probably not visible on your side when you
developed the v1, with an older base.

For mptcp_connect*.sh, it looks like they are failing in the 'disconnect'
part, due to a recent modification in the selftests:

https://lore.kernel.org/20250912-net-mptcp-fix-sft-connect-v1-3-d40e77cbbf02@kernel.org

Now the TCP connections are listed, and one stays open.

The packetdrill failure looks strange: a RST is sent in reply to a FIN+ACK.

Cheers,
Matt

-- 
Sponsored by the NGI0 Core fund.
On Fri, 2025-09-19 at 08:54 +0200, Paolo Abeni wrote:
> On 9/19/25 4:22 AM, Geliang Tang wrote:
> > On Thu, 2025-09-18 at 20:33 +0000, MPTCP CI wrote:
> > > Hi Paolo,
> > > 
> > > Thank you for your modifications, that's great!
> > > 
> > > Our CI did some validations and here is its report:
> > > 
> > > - KVM Validation: normal: Unstable: 5 failed test(s):
> > >   packetdrill_mp_capable selftest_mptcp_connect
> > >   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
> > >   selftest_mptcp_connect_sendfile 🔴
> > > - KVM Validation: debug: Unstable: 5 failed test(s):
> > >   packetdrill_mp_capable selftest_mptcp_connect
> > >   selftest_mptcp_connect_checksum selftest_mptcp_connect_mmap
> > >   selftest_mptcp_connect_sendfile 🔴
> > > - KVM Validation: btf-normal (only bpftest_all): Success! ✅
> > > - KVM Validation: btf-debug (only bpftest_all): Success! ✅
> > > - Task:
> > >   https://github.com/multipath-tcp/mptcp_net-next/actions/runs/17836736262
> > 
> > CI reports that this series breaks the mptcp_connect.sh test:
> > 
> > # INFO: disconnect
> > # 63 ns1 MPTCP -> ns1 (10.0.1.1:20001   ) MPTCP   (duration  1348ms) [ OK ]
> > # 64 ns1 MPTCP -> ns1 (10.0.1.1:20002   ) TCP     main_loop_s: timed out
> > # (duration 61185ms) [FAIL] client exit code 124, server 2
> > # 
> > # netns ns1-HugJD9 (listener) socket stat for 20002:
> > # Netid State     Recv-Q Send-Q Local Address:Port Peer Address:Port
> > # tcp   TIME-WAIT 0      0      10.0.1.1:20002     10.0.1.1:47722
> > #       timer:(timewait,,0) ino:0 sk:2066
> > # ^I
> > # TcpActiveOpens    2  0.0
> > # TcpPassiveOpens   2  0.0
> > 
> > I also tested it on my end and found that starting from patch 8 "mptcp:
> > cleanup fallback data fin reception", the mptcp_connect.sh test has
> > been failing.
> > 
> > I apologize for not providing this feedback during the v1 review, but I
> > was too busy yesterday to complete the testing.
> 
> Thank you very much for bisecting. I did not observe the failures on v1,
> but I see them on v2 (which is quite surprising, given there are no big
> changes there). I submitted v2 anyway to give syzkaller/CI a chance to
> run more.
> 
> I think patches 1-3 should be sane and effective; we can probably merge
> them.

Yes, indeed. I think patch 5 and patch 7 can be merged too.

I have just applied these 5 patches (1-3, 5, 7) to my tree and started
testing. Once all the tests pass, I will add my Reviewed-by and Tested-by
tags for them today.

Thanks,
-Geliang

> 
> Thanks,
> 
> Paolo
> 