From: Geliang Tang <tanggeliang@kylinos.cn>
Reload now = tcp_jiffies32, as Eric suggested.
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
---
net/mptcp/sockopt.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 2ec2fdf9f4af..17223da5436d 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -944,6 +944,7 @@ void mptcp_diag_fill_info(struct mptcp_sock *msk, struct mptcp_info *info)
 	info->mptcpi_bytes_retrans = msk->bytes_retrans;
 	info->mptcpi_subflows_total = info->mptcpi_subflows +
 				      __mptcp_has_initial_subflow(msk);
+	now = tcp_jiffies32; /* reload it, lock_sock_fast can sleep and be quite slow */
 	info->mptcpi_last_data_sent = jiffies_to_msecs(now - msk->last_data_sent);
 	info->mptcpi_last_data_recv = jiffies_to_msecs(now - msk->last_data_recv);
 	unlock_sock_fast(sk, slow);
--
2.40.1
On Sat, 6 Apr 2024, Geliang Tang wrote:

> From: Geliang Tang <tanggeliang@kylinos.cn>
>
> Reload now = jiffies32 as Eric suggested.
>
> Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
> ---
>  net/mptcp/sockopt.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
> index 2ec2fdf9f4af..17223da5436d 100644
> --- a/net/mptcp/sockopt.c
> +++ b/net/mptcp/sockopt.c
> @@ -944,6 +944,7 @@ void mptcp_diag_fill_info(struct mptcp_sock *msk, struct mptcp_info *info)
>  	info->mptcpi_bytes_retrans = msk->bytes_retrans;
>  	info->mptcpi_subflows_total = info->mptcpi_subflows +
>  				      __mptcp_has_initial_subflow(msk);
> +	now = tcp_jiffies32; /* reload it, lock_sock_fast can sleep and be quite slow */
>  	info->mptcpi_last_data_sent = jiffies_to_msecs(now - msk->last_data_sent);
>  	info->mptcpi_last_data_recv = jiffies_to_msecs(now - msk->last_data_recv);
>  	unlock_sock_fast(sk, slow);

Geliang and Matthieu -

I'd like to minimize the time difference between the assignments for all of the "last time" fields. For example, if last_ack_recv is copied before the socket lock is acquired, the value in the info struct could become stale while waiting for the lock and not match up well with last_data_sent.

We could instead move the entire mptcp_data_lock()/mptcp_data_unlock() block inside the socket lock critical section. Then, if 'now' is assigned with both locks acquired, there should not be significant delays between them.

- Mat
Hi Mat,

On 6 Apr 2024 02:48:20, Mat Martineau <martineau@kernel.org> wrote:
> On Sat, 6 Apr 2024, Geliang Tang wrote:
>
>> From: Geliang Tang <tanggeliang@kylinos.cn>
>>
>> Reload now = jiffies32 as Eric suggested.
>>
>> Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
>> ---
>>  net/mptcp/sockopt.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
>> index 2ec2fdf9f4af..17223da5436d 100644
>> --- a/net/mptcp/sockopt.c
>> +++ b/net/mptcp/sockopt.c
>> @@ -944,6 +944,7 @@ void mptcp_diag_fill_info(struct mptcp_sock *msk, struct mptcp_info *info)
>>  	info->mptcpi_bytes_retrans = msk->bytes_retrans;
>>  	info->mptcpi_subflows_total = info->mptcpi_subflows +
>>  				      __mptcp_has_initial_subflow(msk);
>> +	now = tcp_jiffies32; /* reload it, lock_sock_fast can sleep and be quite slow */
>>  	info->mptcpi_last_data_sent = jiffies_to_msecs(now - msk->last_data_sent);
>>  	info->mptcpi_last_data_recv = jiffies_to_msecs(now - msk->last_data_recv);
>>  	unlock_sock_fast(sk, slow);
>
> Geliang and Matthieu -
>
> I'd like to minimize the time difference between the assignments for all of the "last time" fields. For example, if last_ack_recv is copied before the socket lock is acquired, the value in the info struct could become stale while waiting for the lock and not match up well with last_data_sent.
>
> We could instead move the entire mptcp_data_lock()/mptcp_data_unlock() block inside the socket lock critical section. Then, if 'now' is assigned with both locks acquired, there should not be significant delays between them.

Yes, that's what I had in mind when I replied to Eric. Because the data lock is a spin lock, maybe we don't need to reload 'now' there if this block is moved after the sk lock is taken? All the counters would then be synced.

Cheers,
Matt