From: Gang Yan <yangang@kylinos.cn>
This patch defers mptcp_check_data_fin from __mptcp_move_skbs to avoid
a deadlock.
When processing backlogged data in softirq context, __mptcp_move_skbs
directly calls mptcp_check_data_fin, which can lead to a deadlock if
the msk socket is in TCP_FIN_WAIT2 state. In that case,
mptcp_check_data_fin calls mptcp_shutdown_subflows, which attempts to
lock the subflow socket with lock_sock_fast() - a sleeping function
that cannot be called in atomic context (softirq).
The correct pattern is already used in move_skbs_to_msk: if a data fin
is pending, schedule the work to be processed later in process context
rather than handling it directly.
Reported-by: Geliang Tang <geliang@kernel.org>
Co-developed-by: Geliang Tang <geliang@kernel.org>
Signed-off-by: Geliang Tang <geliang@kernel.org>
Signed-off-by: Gang Yan <yangang@kylinos.cn>
---
Notes:
Hi Matt, Paolo:

Here are my responses to the AI review of this patch:
- Typo problem in subject line:
Already fixed in v5.
- Remove else branch:
It demonstrates a TOCTOU race, though the race window is extremely
narrow. The else branch is useless even without the race, so I
removed it in v5. WDYT?
Thanks,
Gang
Changelog:
v5:
- Fix typo in title.
- Remove the else branch.
---
net/mptcp/protocol.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 326b5d4d79e7..88e19fa61b84 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2247,8 +2247,9 @@ static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs, u32 *delt
 	}
 
 	__mptcp_ofo_queue(msk);
-	if (moved)
-		mptcp_check_data_fin((struct sock *)msk);
+	if (moved && mptcp_pending_data_fin(sk, NULL))
+		mptcp_schedule_work(sk);
+
 	return moved;
 }
--
2.43.0
On 4/1/26 10:54 AM, Gang Yan wrote:
> From: Gang Yan <yangang@kylinos.cn>
>
> This patch defers mptcp_check_data_fin from __mptcp_move_skbs to avoid
> deadlock.
>
> When processing backlogged data in softirq context, __mptcp_move_skbs
> directly calls mptcp_check_data_fin, which can lead to a deadlock if
> the msk socket is in TCP_FIN_WAIT2 state. In that case,
> mptcp_check_data_fin calls mptcp_shutdown_subflows, which attempts to
> lock the subflow socket with lock_sock_fast() - a sleeping function
> that cannot be called in atomic context (softirq).

The fix for such an issue should be squashed into the patch that
introduces the issue itself.

> The correct pattern is already used in move_skbs_to_msk: if a data fin
> is pending, schedule the work to be processed later in process context
> rather than handling it directly.
>
> Reported-by: Geliang Tang <geliang@kernel.org>
> Co-developed-by: Geliang Tang <geliang@kernel.org>
> Signed-off-by: Geliang Tang <geliang@kernel.org>
> Signed-off-by: Gang Yan <yangang@kylinos.cn>
> ---
> Notes:
>
> Hi Matt,Paolo:
>
> Here is the response of AI review for this patch:
>
> - Typo problem in subject line:
>   Already fixed in v5.
>
> - Remove else branch:
>   It demonstrates a TOCTOU race, though the race window is extremely
>   narrow. The else branch is useless even without the race. So I
>   removed it in v5. WDYT?
>
> Thanks,
> Gang
>
> Changelog:
> v5:
> - Fix typo in title.
> - Remove the else branch.
>
> ---
>  net/mptcp/protocol.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 326b5d4d79e7..88e19fa61b84 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -2247,8 +2247,9 @@ static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs, u32 *delt
>  	}
>  
>  	__mptcp_ofo_queue(msk);
> -	if (moved)
> -		mptcp_check_data_fin((struct sock *)msk);
> +	if (moved && mptcp_pending_data_fin(sk, NULL))
> +		mptcp_schedule_work(sk);

Scheduling the worker when we are not in BH could introduce problematic
delays. Instead you could factor out a ____mptcp_move_skbs() helper (a
better name would be nice!) that does not include the
mptcp_check_data_fin() call, use the latter in mptcp_data_ready(), and
in such function eventually schedule the mptcp work as needed.

/P
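For readers following along, the refactor Paolo suggests could look
roughly like the sketch below. This is untested and not a patch:
`____mptcp_move_skbs` is the placeholder name from his review, the
argument lists mirror the existing `__mptcp_move_skbs` signature, and
the function bodies are elided.

```c
/* Sketch only -- illustrates the suggested split, not tested code.
 * Factor the skb-moving logic into a helper that does NOT touch the
 * data fin, so BH (softirq) callers never reach lock_sock_fast().
 */
static bool ____mptcp_move_skbs(struct sock *sk, struct list_head *skbs,
				u32 *delta)
{
	/* ... current __mptcp_move_skbs() body, minus the
	 * mptcp_check_data_fin() / mptcp_schedule_work() tail ...
	 */
}

/* Process-context callers keep the direct, cheaper data fin check: */
static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs,
			      u32 *delta)
{
	bool moved = ____mptcp_move_skbs(sk, skbs, delta);

	if (moved)
		mptcp_check_data_fin(sk);
	return moved;
}

/* In the mptcp_data_ready() path (BH context), call the bare helper
 * and defer the data fin handling to the worker instead:
 *
 *	if (____mptcp_move_skbs(sk, skbs, delta) &&
 *	    mptcp_pending_data_fin(sk, NULL))
 *		mptcp_schedule_work(sk);
 */
```

This keeps the fast path unchanged for process-context callers while
only the atomic-context caller pays the cost of scheduling the worker.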