Allow the max aggregated packet size to go up to GSO_MAX_SIZE for the MANA NIC.
This patch only increases the max allowable GSO/GRO packet size for MANA
devices and does not change the defaults.
Following are the perf benefits of increasing the packet aggregate size from
the legacy gso_max_size value (64K) to the new ceiling (up to 511K):
for i in {1..10}; do netperf -t TCP_RR -H 10.0.0.5 -p50000 -- -r80000,80000
-O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
min(us)  p90(us)  p99(us)  Throughput(trans/s)  gso_max_size
93 171 194 6594.25
97 154 180 7183.74
95 165 189 6927.86
96 165 188 6976.04
93 154 185 7338.05 64K
93 168 189 6938.03
94 169 189 6784.93
92 166 189 7117.56
94 179 191 6678.44
95 157 183 7277.81
min(us)  p90(us)  p99(us)  Throughput(trans/s)  gso_max_size
93 134 146 8448.75
95 134 140 8396.54
94 137 148 8204.12
94 137 148 8244.41
94 128 139 8666.52 80K
94 141 153 8116.86
94 138 149 8163.92
92 135 142 8362.72
92 134 142 8497.57
93 136 148 8393.23
Tested in an Azure environment with Accelerated Networking enabled and disabled.
Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
---
Changes in v2:
* Instead of 'tcp segment', used the more accurate term 'aggregated pkt size'
  throughout the patch
---
drivers/net/ethernet/microsoft/mana/mana_en.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index aa1e47233fe5..da630cb37cfb 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -2873,6 +2873,8 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
 	ndev->dev_port = port_idx;
 	SET_NETDEV_DEV(ndev, gc->dev);
 
+	netif_set_tso_max_size(ndev, GSO_MAX_SIZE);
+
 	netif_carrier_off(ndev);
 
 	netdev_rss_key_fill(apc->hashkey, MANA_HASH_KEY_SIZE);
--
2.34.1
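
For reference, netif_set_tso_max_size() only raises the ceiling that the device
advertises to the stack; the per-device gso_max_size that is actually applied to
traffic keeps its legacy 64K default until an administrator raises it. Below is
a simplified userspace model of that relationship (an illustration only, not the
kernel's net/core/dev.c code; the struct and function names here are made up):

#include <stdio.h>

#define GSO_LEGACY_MAX_SIZE	65536u
#define GSO_MAX_SIZE		524280u	/* the "up to 511K" ceiling above */

struct dev_model {
	unsigned int tso_max_size;	/* ceiling advertised by the driver */
	unsigned int gso_max_size;	/* value the stack actually applies */
};

static void model_set_tso_max_size(struct dev_model *dev, unsigned int size)
{
	/* Raise the ceiling, capped at the stack-wide maximum... */
	dev->tso_max_size = size < GSO_MAX_SIZE ? size : GSO_MAX_SIZE;

	/* ...but only ever clamp gso_max_size down, never up, so the 64K
	 * default stays in place until the admin raises it explicitly.
	 */
	if (dev->gso_max_size > dev->tso_max_size)
		dev->gso_max_size = dev->tso_max_size;
}

int main(void)
{
	struct dev_model mana = {
		.tso_max_size = GSO_LEGACY_MAX_SIZE,
		.gso_max_size = GSO_LEGACY_MAX_SIZE,	/* default */
	};

	model_set_tso_max_size(&mana, GSO_MAX_SIZE);
	printf("ceiling=%u effective gso_max_size=%u\n",
	       mana.tso_max_size, mana.gso_max_size);	/* 524280 65536 */
	return 0;
}

This is presumably why the 80K benchmark above required raising the per-device
value explicitly: the patch only makes that opt-in possible, the default stays
at 64K.
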
On Mon, Feb 10, 2025 at 5:40 AM Shradha Gupta
<shradhagupta@linux.microsoft.com> wrote:
>
> Allow the max aggregated packet size to go up to GSO_MAX_SIZE for the MANA NIC.
> This patch only increases the max allowable GSO/GRO packet size for MANA
> devices and does not change the defaults.
> Following are the perf benefits of increasing the packet aggregate size from
> the legacy gso_max_size value (64K) to the new ceiling (up to 511K):
>
> for i in {1..10}; do netperf -t TCP_RR -H 10.0.0.5 -p50000 -- -r80000,80000
> -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
Was this tested with IPv6 ?
On Mon, Feb 10, 2025 at 04:55:54PM +0100, Eric Dumazet wrote:
> On Mon, Feb 10, 2025 at 5:40 AM Shradha Gupta
> <shradhagupta@linux.microsoft.com> wrote:
> >
> > Allow the max aggregated packet size to go up to GSO_MAX_SIZE for the MANA NIC.
> > This patch only increases the max allowable GSO/GRO packet size for MANA
> > devices and does not change the defaults.
> > Following are the perf benefits of increasing the packet aggregate size from
> > the legacy gso_max_size value (64K) to the new ceiling (up to 511K):
> >
> > for i in {1..10}; do netperf -t TCP_RR -H 10.0.0.5 -p50000 -- -r80000,80000
> > -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
>
> Was this tested with IPv6 ?
Hi Eric,
Yes, sanity and functional testing were performed (manually) and passed on Azure
VMs with IPv6.
On Mon, Feb 10, 2025 at 09:57:53AM -0800, Shradha Gupta wrote:
> On Mon, Feb 10, 2025 at 04:55:54PM +0100, Eric Dumazet wrote:
> > On Mon, Feb 10, 2025 at 5:40 AM Shradha Gupta
> > <shradhagupta@linux.microsoft.com> wrote:
> > >
> > > Allow the max aggregated packet size to go up to GSO_MAX_SIZE for the MANA NIC.
> > > This patch only increases the max allowable GSO/GRO packet size for MANA
> > > devices and does not change the defaults.
> > > Following are the perf benefits of increasing the packet aggregate size from
> > > the legacy gso_max_size value (64K) to the new ceiling (up to 511K):
> > >
> > > for i in {1..10}; do netperf -t TCP_RR -H 10.0.0.5 -p50000 -- -r80000,80000
> > > -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
> >
> > Was this tested with IPv6 ?
>
> Hi Eric,
> Yes, sanity and functional testing were performed (manually) and passed on Azure
> VMs with IPv6.
Forgot to mention that the tests were performed on both IPv4 and IPv6,
and these numbers are from IPv4 testing.
regards,
Shradha.
On Mon, Feb 10, 2025 at 6:59 PM Shradha Gupta
<shradhagupta@linux.microsoft.com> wrote:
>
> On Mon, Feb 10, 2025 at 09:57:53AM -0800, Shradha Gupta wrote:
> > On Mon, Feb 10, 2025 at 04:55:54PM +0100, Eric Dumazet wrote:
> > > On Mon, Feb 10, 2025 at 5:40 AM Shradha Gupta
> > > <shradhagupta@linux.microsoft.com> wrote:
> > > >
> > > > Allow the max aggregated packet size to go up to GSO_MAX_SIZE for the MANA NIC.
> > > > This patch only increases the max allowable GSO/GRO packet size for MANA
> > > > devices and does not change the defaults.
> > > > Following are the perf benefits of increasing the packet aggregate size from
> > > > the legacy gso_max_size value (64K) to the new ceiling (up to 511K):
> > > >
> > > > for i in {1..10}; do netperf -t TCP_RR -H 10.0.0.5 -p50000 -- -r80000,80000
> > > > -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
> > >
> > > Was this tested with IPv6 ?
> >
> > Hi Eric,
> > Yes, sanity and functional testing were performed (manually) and passed on Azure
> > VMs with IPv6.
> Forgot to mention that the tests were performed on both IPv4 and IPv6,
> and these numbers are from IPv4 testing.
Where is the IPv6 jumbo header removed ?
On Mon, Feb 10, 2025 at 07:02:04PM +0100, Eric Dumazet wrote:
> On Mon, Feb 10, 2025 at 6:59 PM Shradha Gupta
> <shradhagupta@linux.microsoft.com> wrote:
> >
> > On Mon, Feb 10, 2025 at 09:57:53AM -0800, Shradha Gupta wrote:
> > > On Mon, Feb 10, 2025 at 04:55:54PM +0100, Eric Dumazet wrote:
> > > > On Mon, Feb 10, 2025 at 5:40 AM Shradha Gupta
> > > > <shradhagupta@linux.microsoft.com> wrote:
> > > > >
> > > > > Allow the max aggregated packet size to go up to GSO_MAX_SIZE for the MANA NIC.
> > > > > This patch only increases the max allowable GSO/GRO packet size for MANA
> > > > > devices and does not change the defaults.
> > > > > Following are the perf benefits of increasing the packet aggregate size from
> > > > > the legacy gso_max_size value (64K) to the new ceiling (up to 511K):
> > > > >
> > > > > for i in {1..10}; do netperf -t TCP_RR -H 10.0.0.5 -p50000 -- -r80000,80000
> > > > > -O MIN_LATENCY,P90_LATENCY,P99_LATENCY,THROUGHPUT|tail -1; done
> > > >
> > > > Was this tested with IPv6 ?
> > >
> > > Hi Eric,
> > > Yes, sanity and functional testing were performed (manually) and passed on Azure
> > > VMs with IPv6.
> > Forgot to mention that the tests were performed on both IPv4 and IPv6,
> > and these numbers are from IPv4 testing.
>
> Where is the IPv6 jumbo header removed ?
I think this is missing from this patchset. In our IPv6 tests, sanity testing of
the patch was performed without changing the default values.
I will add this support, thoroughly test IPv6, and send out another
version with complete IPv6 support and numbers.
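Roughly, the change would be something along the lines of the sketch below
(hypothetical and untested; the function name and call site are assumptions,
not the actual MANA patch): for IPv6 GSO packets larger than 64K the stack
inserts a hop-by-hop jumbogram header, which the driver must strip before
handing the packet to hardware, e.g. via ipv6_hopopt_jumbo_remove() in the
TX path:

#include <linux/netdevice.h>
#include <net/ipv6.h>

/* Hypothetical sketch of the missing piece, not the actual MANA change. */
static netdev_tx_t mana_xmit_sketch(struct sk_buff *skb, struct net_device *ndev)
{
	/* ipv6_hopopt_jumbo_remove() is a no-op when no jumbogram option is
	 * present; if it fails, the oversized packet cannot be sent as-is,
	 * so drop it and account for the drop.
	 */
	if (unlikely(ipv6_hopopt_jumbo_remove(skb))) {
		dev_kfree_skb_any(skb);
		ndev->stats.tx_dropped++;
		return NETDEV_TX_OK;
	}

	/* ...existing MANA transmit path would continue here... */
	return NETDEV_TX_OK;
}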
Thanks for the pointers Eric.
Regards,
Shradha Gupta.