[PATCH mptcp-next 0/2] squash to "implement mptcp read_sock"
Posted by Geliang Tang 6 months ago
From: Geliang Tang <tanggeliang@kylinos.cn>

Two squash-to patches for "implement mptcp read_sock" v2:

 - Use sk->sk_rcvbuf instead of INT_MAX as the max len.
 - Close all fds in mptcp_splice.c.
 - Add splice io mode for mptcp_connect (see the sketch below).

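For reference, below is a minimal userspace sketch of what such a
splice-based transfer loop could look like: it moves data from a source
fd to a destination fd (e.g. an MPTCP socket) through an anonymous pipe.
This is only an illustration, not the code added by these patches; the
function name do_splice_copy, the SPLICE_CHUNK size and the error
handling are assumptions.

    /*
     * Minimal sketch of a splice() based transfer loop, similar in
     * spirit to what a "splice" IO mode in mptcp_connect.c could look
     * like. NOT the code from the patches: the function name, chunk
     * size and error handling are assumptions made for illustration.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define SPLICE_CHUNK 65536 /* arbitrary per-call limit (assumption) */

    /* Copy everything readable on in_fd to out_fd via an anonymous pipe. */
    static int do_splice_copy(int in_fd, int out_fd)
    {
            int pipefd[2];
            ssize_t in, out;

            if (pipe(pipefd) < 0) {
                    perror("pipe");
                    return -1;
            }

            for (;;) {
                    /* pull data from the source into the pipe */
                    in = splice(in_fd, NULL, pipefd[1], NULL,
                                SPLICE_CHUNK, SPLICE_F_MOVE);
                    if (in < 0) {
                            perror("splice in");
                            goto err;
                    }
                    if (in == 0) /* EOF on the source */
                            break;

                    /* push the pipe contents out to the destination */
                    while (in > 0) {
                            out = splice(pipefd[0], NULL, out_fd, NULL,
                                         in, SPLICE_F_MOVE);
                            if (out <= 0) {
                                    perror("splice out");
                                    goto err;
                            }
                            in -= out;
                    }
            }

            close(pipefd[0]);
            close(pipefd[1]);
            return 0;

    err:
            close(pipefd[0]);
            close(pipefd[1]);
            return -1;
    }

    int main(void)
    {
            /* e.g.: ./splice_copy < input_file > output_file */
            return do_splice_copy(STDIN_FILENO, STDOUT_FILENO) ? 1 : 0;
    }

In the selftests, such a mode would presumably be selected like the
existing poll/mmap/sendfile transfer modes of mptcp_connect (its -m
option), but the exact option value added by the patches is not shown
here.
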
Based-on: <cover.1749286212.git.tanggeliang@kylinos.cn>

Geliang Tang (2):
  Squash to "mptcp: implement .read_sock"
  Squash to "selftests: mptcp: add splice test"

 net/mptcp/protocol.c                          |  2 +-
 .../selftests/net/mptcp/mptcp_connect.c       | 61 ++++++++++++++++++-
 .../selftests/net/mptcp/mptcp_connect.sh      | 10 ++-
 .../selftests/net/mptcp/mptcp_splice.c        | 43 ++++++++-----
 4 files changed, 99 insertions(+), 17 deletions(-)

-- 
2.43.0
Re: [PATCH mptcp-next 0/2] squash to "implement mptcp read_sock"
Posted by MPTCP CI 6 months ago
Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/15581436214

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/b571299e8f55
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=970730


If there are any issues, you can reproduce them using the same environment as
the one used by the CI, thanks to a docker image, e.g.:

    $ cd [kernel source code]
    $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
        --pull always mptcp/mptcp-upstream-virtme-docker:latest \
        auto-normal

For more details:

    https://github.com/multipath-tcp/mptcp-upstream-virtme-docker


Please note that despite all the efforts already made to have a stable test
suite when executed on a public CI like this one, it is possible that some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve it ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)