[PATCH mptcp-next 1/2] Squash-to: "mptcp: move msk input path under full msk socket lock"

Paolo Abeni posted 2 patches 3 years, 3 months ago
Maintainers: Mat Martineau <mathew.j.martineau@linux.intel.com>, Matthieu Baerts <matthieu.baerts@tessares.net>, Paolo Abeni <pabeni@redhat.com>, "David S. Miller" <davem@davemloft.net>, Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>
Posted by Paolo Abeni 3 years, 3 months ago
Whoops, I forgot to actually test for pending data at release_cb time.

That omission causes several recurring failures in the self-tests.

Note that this could badly affect MPTCP performance (we now move a
relevant amount of CPU time from the subflow rx path/ksoftirqd to the
user-space process), although I haven't run perf tests on top of this
change yet.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
This should be placed just before "mptcp: move RCVPRUNE event later",
i.e. the last rx path refactor.
---
 net/mptcp/protocol.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 74699bd47edf..d47c19494023 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3008,7 +3008,8 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
 
 #define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
 				      BIT(MPTCP_RETRANSMIT) | \
-				      BIT(MPTCP_FLUSH_JOIN_LIST))
+				      BIT(MPTCP_FLUSH_JOIN_LIST) | \
+				      BIT(MPTCP_DEQUEUE))
 
 /* processes deferred events and flush wmem */
 static void mptcp_release_cb(struct sock *sk)
-- 
2.37.1
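
For context on what the extra bit buys: mptcp_release_cb() is the hook that
runs when the msk socket lock is released, replaying events that were
recorded while the socket was owned (e.g. by the subflow rx path running in
softirq context). Adding BIT(MPTCP_DEQUEUE) to MPTCP_FLAGS_PROCESS_CTX_NEED
makes pending rx data one of the events that forces that replay loop to run.
Below is a minimal user-space model of the pattern, assuming nothing beyond
what the hunk above shows; fake_sock, defer_event() and the MODEL_* names
are illustrative stand-ins, not kernel identifiers.

/*
 * User-space model of the deferred-event pattern used by
 * mptcp_release_cb(): event bits set while the "socket" is owned are
 * recorded in a flags word and replayed at lock-release time.
 * All names here (fake_sock, MODEL_*) are illustrative, not kernel API.
 */
#include <pthread.h>
#include <stdio.h>

enum {
	MODEL_PUSH_PENDING,
	MODEL_RETRANSMIT,
	MODEL_FLUSH_JOIN_LIST,
	MODEL_DEQUEUE,			/* the bit added by this patch */
};

#define MODEL_PROCESS_CTX_NEED ((1UL << MODEL_PUSH_PENDING) | \
				(1UL << MODEL_RETRANSMIT) | \
				(1UL << MODEL_FLUSH_JOIN_LIST) | \
				(1UL << MODEL_DEQUEUE))

struct fake_sock {
	pthread_mutex_t lock;		/* stands in for sk_lock.slock */
	unsigned long cb_flags;		/* events deferred while owned */
};

/* "Softirq" side: can't run the handler now, just record the event. */
static void defer_event(struct fake_sock *sk, int bit)
{
	pthread_mutex_lock(&sk->lock);
	sk->cb_flags |= 1UL << bit;
	pthread_mutex_unlock(&sk->lock);
}

/* Process side: replay everything recorded while the sock was owned. */
static void release_cb(struct fake_sock *sk)
{
	for (;;) {
		unsigned long flags;

		/* Snapshot and clear atomically w.r.t. defer_event(). */
		pthread_mutex_lock(&sk->lock);
		flags = sk->cb_flags & MODEL_PROCESS_CTX_NEED;
		sk->cb_flags &= ~flags;
		pthread_mutex_unlock(&sk->lock);
		if (!flags)
			break;

		/* Handlers run unlocked; events deferred meanwhile are
		 * picked up by the next loop iteration. */
		if (flags & (1UL << MODEL_DEQUEUE))
			printf("draining rx data into the msk\n");
		if (flags & (1UL << MODEL_PUSH_PENDING))
			printf("pushing pending data\n");
		if (flags & (1UL << MODEL_RETRANSMIT))
			printf("retransmitting\n");
	}
}

int main(void)
{
	struct fake_sock sk = { .lock = PTHREAD_MUTEX_INITIALIZER };

	defer_event(&sk, MODEL_DEQUEUE);	/* rx data arrived under lock */
	defer_event(&sk, MODEL_PUSH_PENDING);
	release_cb(&sk);			/* replayed at "unlock" time */
	return 0;
}

The snapshot-and-clear under the spinlock is what makes it safe to run the
handlers unlocked; without MPTCP_DEQUEUE in the mask, pending rx data set
during the handlers would never cause another iteration, which matches the
self-test failures described above.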


Re: [PATCH mptcp-next 1/2] Squash-to: "mptcp: move msk input path under full msk socket lock"
Posted by Paolo Abeni 3 years, 3 months ago
On Wed, 2022-08-24 at 13:18 +0200, Paolo Abeni wrote:
> Whoops, I forgot to actually test for pending data at release_cb time.
> 
> That omission causes several recurring failures in the self-tests.
> 
> Note that this could badly affect MPTCP performance (we now move a
> relevant amount of CPU time from the subflow rx path/ksoftirqd to the
> user-space process), although I haven't run perf tests on top of this
> change yet.

The first raw numbers are quite discouraging :/
-30% on the single-subflow test.

/P