From: YonglongLi <liyonglong@chinatelecom.cn>
Receiving RM_ADDR only updates pm.add_addr_accepted if the removed id
matches the remote id of an existing subflow in conn_list. So, when
receiving ADD_ADDR, pm.add_addr_accepted should likewise only be updated
after the new subflow has actually been added to conn_list.
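
For context, the RM_ADDR side of this accounting (in
mptcp_pm_nl_rm_addr_or_subflow()) looks roughly like the simplified
sketch below; it is not the exact upstream code, and helper and field
names may differ between versions, but it shows that the counters are
only touched for subflows actually found in conn_list whose remote id
matches the removed id:

	mptcp_for_each_subflow_safe(msk, subflow, tmp) {
		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);

		/* skip subflows whose remote id does not match the removed id */
		if (rm_id != subflow->remote_id)
			continue;

		mptcp_close_ssk(sk, ssk, subflow);

		/* only now are the PM counters adjusted */
		if (rm_type == MPTCP_MIB_RMADDR)
			msk->pm.add_addr_accepted--;
		msk->pm.subflows--;
	}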
Fixes: f7d6a237d742 ("mptcp: fix per socket endpoint accounting")
Signed-off-by: YonglongLi <liyonglong@chinatelecom.cn>
---
net/mptcp/pm_netlink.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 766a840..117cc7b 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -676,6 +676,7 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
 	struct sock *sk = (struct sock *)msk;
 	unsigned int add_addr_accept_max;
 	struct mptcp_addr_info remote;
+	bool subflow_added = false;
 	unsigned int subflows_max;
 	int i, nr;
 
@@ -704,15 +705,18 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
 	if (nr == 0)
 		return;
 
-	msk->pm.add_addr_accepted++;
-	if (msk->pm.add_addr_accepted >= add_addr_accept_max ||
-	    msk->pm.subflows >= subflows_max)
-		WRITE_ONCE(msk->pm.accept_addr, false);
-
 	spin_unlock_bh(&msk->pm.lock);
 	for (i = 0; i < nr; i++)
-		__mptcp_subflow_connect(sk, &addrs[i], &remote);
+		if (!__mptcp_subflow_connect(sk, &addrs[i], &remote))
+			subflow_added = true;
 	spin_lock_bh(&msk->pm.lock);
+
+	if (subflow_added) {
+		msk->pm.add_addr_accepted++;
+		if (msk->pm.add_addr_accepted >= add_addr_accept_max ||
+		    msk->pm.subflows >= subflows_max)
+			WRITE_ONCE(msk->pm.accept_addr, false);
+	}
 }
 
 void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
--
1.8.3.1
Hi Yonglong Li,

On 24/05/2024 11:44, Yonglong Li wrote:
> From: YonglongLi <liyonglong@chinatelecom.cn>
>
> receive RM_ADDR will update pm.add_addr_accepted and pm.add_addr_accepted
> only if remove id match remote id of subflow. so receive ADD_ADDR should
> update pm.add_addr_accepted and pm.add_addr_accepted after subflow added
> to conn_list.

Do you think it could be possible to cover this case by modifying/adding
one subtest in our selftests?

The commit message is not very clear to me. ("add_addr_accepted" are also
always duplicated, I'm not sure why).

I understand we can have issues with __mptcp_subflow_connect() and we
might want to increment the counter only if it was possible to create a
subflow (even if the connection can fail later -- I'm not sure whether we
decrement the counter in this case as well), but I'm not sure to
understand the context: the reason why this call could fail, and the link
with RM_ADDR.

>
> Fixes: f7d6a237d742 ("mptcp: fix per socket endpoint accounting")
>

(no new line here)

> Signed-off-by: YonglongLi <liyonglong@chinatelecom.cn>
> ---
>  net/mptcp/pm_netlink.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
> index 766a840..117cc7b 100644
> --- a/net/mptcp/pm_netlink.c
> +++ b/net/mptcp/pm_netlink.c
> @@ -676,6 +676,7 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
>  	struct sock *sk = (struct sock *)msk;
>  	unsigned int add_addr_accept_max;
>  	struct mptcp_addr_info remote;
> +	bool subflow_added = false;
>  	unsigned int subflows_max;
>  	int i, nr;
>
> @@ -704,15 +705,18 @@ static void mptcp_pm_nl_add_addr_received(struct mptcp_sock *msk)
>  	if (nr == 0)
>  		return;
>
> -	msk->pm.add_addr_accepted++;
> -	if (msk->pm.add_addr_accepted >= add_addr_accept_max ||
> -	    msk->pm.subflows >= subflows_max)
> -		WRITE_ONCE(msk->pm.accept_addr, false);
> -
>  	spin_unlock_bh(&msk->pm.lock);
>  	for (i = 0; i < nr; i++)
> -		__mptcp_subflow_connect(sk, &addrs[i], &remote);
> +		if (!__mptcp_subflow_connect(sk, &addrs[i], &remote))
> +			subflow_added = true;

Probably clearer to use:

  if (__mptcp_subflow_connect(...) == 0)

Otherwise, we might think the subflow is added in case of issues.

>  	spin_lock_bh(&msk->pm.lock);
> +
> +	if (subflow_added) {
> +		msk->pm.add_addr_accepted++;
> +		if (msk->pm.add_addr_accepted >= add_addr_accept_max ||
> +		    msk->pm.subflows >= subflows_max)
> +			WRITE_ONCE(msk->pm.accept_addr, false);
> +	}
>  }
>
>  void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)

Cheers,
Matt

--
Sponsored by the NGI0 Core fund.
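For illustration, with that suggestion applied the loop in
mptcp_pm_nl_add_addr_received() would read as follows (an untested
sketch of the proposed change, not a submitted revision):

	/* only record success: a non-zero return means no subflow was created */
	spin_unlock_bh(&msk->pm.lock);
	for (i = 0; i < nr; i++)
		if (__mptcp_subflow_connect(sk, &addrs[i], &remote) == 0)
			subflow_added = true;
	spin_lock_bh(&msk->pm.lock);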
Hi YonglongLi,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal: Success! ✅
- KVM Validation: debug: Success! ✅
- KVM Validation: btf (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/9222300685

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/3e6551d3f61f
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=855647

If there are some issues, you can reproduce them using the same
environment as the one used by the CI thanks to a docker image, e.g.:

  $ cd [kernel source code]
  $ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
      --pull always mptcp/mptcp-upstream-virtme-docker:latest \
      auto-normal

For more details:

  https://github.com/multipath-tcp/mptcp-upstream-virtme-docker

Please note that despite all the efforts that have been already done to
have a stable tests suite when executed on a public CI like here, it is
possible some reported issues are not due to your modifications. Still,
do not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)