From: "Matthieu Baerts (NGI0)"
Date: Tue, 03 Mar 2026 11:56:02 +0100
Subject: [PATCH net 1/5] selftests: mptcp: more stable simult_flows tests
X-Mailing-List: mptcp@lists.linux.dev
Message-Id: <20260303-net-mptcp-misc-fixes-7-0-rc2-v1-1-4b5462b6f016@kernel.org>
References: <20260303-net-mptcp-misc-fixes-7-0-rc2-v1-0-4b5462b6f016@kernel.org>
In-Reply-To: <20260303-net-mptcp-misc-fixes-7-0-rc2-v1-0-4b5462b6f016@kernel.org>
To: Mat Martineau, Geliang Tang, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Simon Horman, Shuah Khan
Cc: netdev@vger.kernel.org, mptcp@lists.linux.dev, linux-kselftest@vger.kernel.org,
    linux-kernel@vger.kernel.org, "Matthieu Baerts (NGI0)", stable@vger.kernel.org
X-Mailer: b4 0.14.3

From: Paolo Abeni

By default, the netem qdisc can keep up to 1000 packets under its belly
to deal with the configured rate and delay.

The simult_flows test-case simulates very low-speed links to avoid
problems due to slow CPUs, and the TCP stack tends to transmit at a
slightly higher rate than the (virtual) link constraints.
All the above causes a relatively large number of packets to be enqueued
in the netem qdiscs - the longer the transfer, the longer the queue -
producing increasingly high TCP RTT samples and, consequently, an
increasingly large receive buffer due to DRS.

When the receive buffer becomes considerably larger than needed, the
test results can flake, e.g. because a minimal inaccuracy in the pacing
rate can lead to a single subflow carrying a considerable amount of data
towards the end of the connection.

Address the issue by explicitly setting netem limits suitable for the
configured link speeds, unflaking all the affected tests.

Fixes: 1a418cb8e888 ("mptcp: simult flow self-tests")
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Abeni
Reviewed-by: Matthieu Baerts (NGI0)
Signed-off-by: Matthieu Baerts (NGI0)
---
 tools/testing/selftests/net/mptcp/simult_flows.sh | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
index 806aaa7d2d61..d11a8b949aab 100755
--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
+++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
@@ -237,10 +237,13 @@ run_test()
 	for dev in ns2eth1 ns2eth2; do
 		tc -n $ns2 qdisc del dev $dev root >/dev/null 2>&1
 	done
-	tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1
-	tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2
-	tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1
-	tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2
+
+	# keep the queued pkts number low, or the RTT estimator will see
+	# increasing latency over time.
+	tc -n $ns1 qdisc add dev ns1eth1 root netem rate ${rate1}mbit $delay1 limit 50
+	tc -n $ns1 qdisc add dev ns1eth2 root netem rate ${rate2}mbit $delay2 limit 50
+	tc -n $ns2 qdisc add dev ns2eth1 root netem rate ${rate1}mbit $delay1 limit 50
+	tc -n $ns2 qdisc add dev ns2eth2 root netem rate ${rate2}mbit $delay2 limit 50
 
 	# time is measured in ms, account for transfer size, aggregated link speed
 	# and header overhead (10%)
-- 
2.51.0
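
A way to see why a small fixed queue limit is enough here is the link's
bandwidth-delay product (BDP): the queue only needs to hold roughly the
in-flight data. The sketch below is an illustration with assumed example
values (10 mbit/s, 10 ms, 1500-byte packets); the patch itself simply
picks a fixed "limit 50" for all links, and this is not necessarily how
that value was derived.

```shell
# Estimate the per-link BDP in packets (illustrative values, not the
# selftest's actual configuration).
rate_mbit=10      # example link rate, in mbit/s
delay_ms=10       # example one-way delay, in ms
mtu=1500          # assume full-sized packets, in bytes

# BDP in bytes: rate [bit/s] * delay [s] / 8 bits-per-byte
bdp_bytes=$((rate_mbit * 1000000 * delay_ms / 1000 / 8))

# BDP in packets, rounded down
bdp_pkts=$((bdp_bytes / mtu))

echo "BDP: ${bdp_bytes} bytes, ~${bdp_pkts} packets"
```

With these numbers the BDP is only a handful of packets, so a 50-packet
limit leaves ample headroom while staying far below netem's default of
1000 queued packets, which is what kept inflating the RTT samples.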