If there is no cached TCP metrics entry for a connection, initialize
tp->snd_ssthresh from the corresponding dst entry. Also move the check
against tp->snd_cwnd_clamp to the common path to ensure that the ssthresh
value is never greater than the maximum cwnd, regardless of where it
came from.
Fixes: 51c5d0c4b169 ("tcp: Maintain dynamic metrics in local cache.")
Signed-off-by: Petr Tesarik <ptesarik@suse.com>
---
net/ipv4/tcp_metrics.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
index dd8f3457bd72e..b08920abec0e6 100644
--- a/net/ipv4/tcp_metrics.c
+++ b/net/ipv4/tcp_metrics.c
@@ -479,6 +479,9 @@ void tcp_init_metrics(struct sock *sk)
 	if (dst_metric_locked(dst, RTAX_CWND))
 		tp->snd_cwnd_clamp = dst_metric(dst, RTAX_CWND);
 
+	val = dst_metric(dst, RTAX_SSTHRESH);
+	if (val)
+		tp->snd_ssthresh = val;
 	rcu_read_lock();
 	tm = tcp_get_metrics(sk, dst, false);
@@ -489,11 +492,8 @@ void tcp_init_metrics(struct sock *sk)
 
 	val = READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) ?
 	      0 : tcp_metric_get(tm, TCP_METRIC_SSTHRESH);
-	if (val) {
+	if (val)
 		tp->snd_ssthresh = val;
-		if (tp->snd_ssthresh > tp->snd_cwnd_clamp)
-			tp->snd_ssthresh = tp->snd_cwnd_clamp;
-	}
 	val = tcp_metric_get(tm, TCP_METRIC_REORDERING);
 	if (val && tp->reordering != val)
 		tp->reordering = val;
@@ -537,6 +537,9 @@ void tcp_init_metrics(struct sock *sk)
 
 		inet_csk(sk)->icsk_rto = TCP_TIMEOUT_FALLBACK;
 	}
+
+	if (tp->snd_ssthresh > tp->snd_cwnd_clamp)
+		tp->snd_ssthresh = tp->snd_cwnd_clamp;
 }
 
 bool tcp_peer_is_proven(struct request_sock *req, struct dst_entry *dst)
--
2.49.0
On 6/13/25 12:20 PM, Petr Tesarik wrote:
> @@ -537,6 +537,9 @@ void tcp_init_metrics(struct sock *sk)
>  
>  		inet_csk(sk)->icsk_rto = TCP_TIMEOUT_FALLBACK;
>  	}
> +
> +	if (tp->snd_ssthresh > tp->snd_cwnd_clamp)
> +		tp->snd_ssthresh = tp->snd_cwnd_clamp;

I don't think we can do this unconditionally, as other parts of the TCP
stack check explicitly for TCP_INFINITE_SSTHRESH.

/P
On Tue, 17 Jun 2025 12:48:30 +0200 Paolo Abeni <pabeni@redhat.com> wrote:
> On 6/13/25 12:20 PM, Petr Tesarik wrote:
> > @@ -537,6 +537,9 @@ void tcp_init_metrics(struct sock *sk)
> >  
> > 		inet_csk(sk)->icsk_rto = TCP_TIMEOUT_FALLBACK;
> > 	}
> > +
> > +	if (tp->snd_ssthresh > tp->snd_cwnd_clamp)
> > +		tp->snd_ssthresh = tp->snd_cwnd_clamp;
> 
> I don't think we can do this unconditionally, as other parts of the TCP
> stack check explicitly for TCP_INFINITE_SSTHRESH.

Good catch! I noticed that the condition can never be true unless the
congestion window is explicitly clamped, but you're right that it is a
valid combination to lock the maximum cwnd but keep the initial TCP
Slow Start. I'll fix that in v2.

Thank you,
Petr T
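For reference, below is a minimal user-space sketch of the guarded clamp
discussed above. This is only an assumption about what the v2 guard will
look like, not the actual follow-up patch: clamp_ssthresh() is a
hypothetical stand-alone helper, while TCP_INFINITE_SSTHRESH is the real
sentinel from include/net/tcp.h (0x7fffffff) meaning "stay in slow start
until the first loss event", which must survive the clamp unmodified.

#include <stdint.h>
#include <stdio.h>

/* Sentinel value from include/net/tcp.h: "slow start until the
 * first loss event". */
#define TCP_INFINITE_SSTHRESH	0x7fffffff

/* Hypothetical helper mirroring the v2 guard: clamp ssthresh to the
 * cwnd clamp only when ssthresh holds a real cached value, so a route
 * that locks cwnd but has no saved ssthresh keeps slow start. */
static uint32_t clamp_ssthresh(uint32_t ssthresh, uint32_t cwnd_clamp)
{
	if (ssthresh != TCP_INFINITE_SSTHRESH && ssthresh > cwnd_clamp)
		return cwnd_clamp;
	return ssthresh;
}

int main(void)
{
	/* Infinite ssthresh with cwnd clamped to 10: left untouched,
	 * prints 2147483647. */
	printf("%u\n", clamp_ssthresh(TCP_INFINITE_SSTHRESH, 10));
	/* Cached ssthresh of 40 with cwnd clamped to 10: prints 10. */
	printf("%u\n", clamp_ssthresh(40, 10));
	return 0;
}

The combination being protected is, assuming iproute2's usual metric
syntax, a route such as "ip route replace 192.0.2.0/24 via 198.51.100.1
cwnd lock 10": the cwnd clamp is locked but no ssthresh metric is set
and no cached entry exists, so tp->snd_ssthresh stays at
TCP_INFINITE_SSTHRESH and the unconditional clamp would silently
disable slow start.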