Support dumping gro_flush_timeout for a NAPI ID.
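As a usage illustration (not part of this patch; the NAPI ID, interface and
reported values below are hypothetical), the new attribute should become
visible via the in-tree YNL CLI once the spec change is in place:

  $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
        --do napi-get --json='{"id": 345}'
  {'defer-hard-irqs': 0,
   'gro-flush-timeout': 0,
   'id': 345,
   'ifindex': 2,
   'irq': 105,
   'pid': 0}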
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 Documentation/netlink/specs/netdev.yaml | 6 ++++++
 include/uapi/linux/netdev.h             | 1 +
 net/core/netdev-genl.c                  | 6 ++++++
 tools/include/uapi/linux/netdev.h       | 1 +
 4 files changed, 14 insertions(+)
diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 585e87ec3c16..bf13613eaa0d 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -255,6 +255,11 @@ attribute-sets:
         type: u32
         checks:
           max: s32-max
+      -
+        name: gro-flush-timeout
+        doc: The timeout, in nanoseconds, of when to trigger the NAPI
+             watchdog timer and schedule NAPI processing.
+        type: uint
   -
     name: queue
     attributes:
@@ -644,6 +649,7 @@ operations:
             - irq
             - pid
             - defer-hard-irqs
+            - gro-flush-timeout
       dump:
         request:
           attributes:
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index 13dc0b027e86..cacd33359c76 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -123,6 +123,7 @@ enum {
 	NETDEV_A_NAPI_IRQ,
 	NETDEV_A_NAPI_PID,
 	NETDEV_A_NAPI_DEFER_HARD_IRQS,
+	NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
 
 	__NETDEV_A_NAPI_MAX,
 	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index de9bd76f43f8..64e5e4cee60d 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -161,6 +161,7 @@ static int
 netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
 			const struct genl_info *info)
 {
+	unsigned long gro_flush_timeout;
 	u32 napi_defer_hard_irqs;
 	void *hdr;
 	pid_t pid;
@@ -195,6 +196,11 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
 			napi_defer_hard_irqs))
 		goto nla_put_failure;
 
+	gro_flush_timeout = napi_get_gro_flush_timeout(napi);
+	if (nla_put_uint(rsp, NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
+			 gro_flush_timeout))
+		goto nla_put_failure;
+
 	genlmsg_end(rsp, hdr);
 
 	return 0;
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index 13dc0b027e86..cacd33359c76 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -123,6 +123,7 @@ enum {
 	NETDEV_A_NAPI_IRQ,
 	NETDEV_A_NAPI_PID,
 	NETDEV_A_NAPI_DEFER_HARD_IRQS,
+	NETDEV_A_NAPI_GRO_FLUSH_TIMEOUT,
 
 	__NETDEV_A_NAPI_MAX,
 	NETDEV_A_NAPI_MAX = (__NETDEV_A_NAPI_MAX - 1)
--
2.34.1
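Note: napi_get_gro_flush_timeout(), used in the net/core/netdev-genl.c hunk
above, is introduced by an earlier patch in this series and is not shown in
this diff. A minimal sketch of what such a helper could look like, assuming
the timeout is reachable from the NAPI instance itself (an assumption, not
the series' verbatim code):

	/* Hypothetical sketch only: assumes an earlier patch in the series
	 * made gro_flush_timeout reachable from struct napi_struct.
	 */
	static inline unsigned long napi_get_gro_flush_timeout(const struct napi_struct *n)
	{
		return READ_ONCE(n->gro_flush_timeout);
	}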
On Wed, 9 Oct 2024 00:54:58 +0000 Joe Damato wrote:
> +        name: gro-flush-timeout
> +        doc: The timeout, in nanoseconds, of when to trigger the NAPI
> +             watchdog timer and schedule NAPI processing.

You gotta respin because we reformatted the cacheline info.

So while at it perhaps throw in a sentence here about the GRO effects?
The initial use of GRO flush timeout was to hold incomplete GRO
super-frames in the GRO engine across NAPI cycles.
On Wed, Oct 09, 2024 at 08:14:40PM -0700, Jakub Kicinski wrote:
> On Wed, 9 Oct 2024 00:54:58 +0000 Joe Damato wrote:
> > +        name: gro-flush-timeout
> > +        doc: The timeout, in nanoseconds, of when to trigger the NAPI
> > +             watchdog timer and schedule NAPI processing.
>
> You gotta respin because we reformatted the cacheline info.

Yea, I figured I'd be racing with that change and would need a
respin.

I'm not sure how the queue works exactly, but it looks like I might
also be racing with another change [1], I think.

I think I'm just over 24hr and could respin and resend now, but
should I wait longer in case [1] is merged before you see my
respin?

Just trying to figure out how to get the fewest number of respins
possible ;)

> So while at it perhaps throw in a sentence here about the GRO effects?
> The initial use of GRO flush timeout was to hold incomplete GRO
> super-frames in the GRO engine across NAPI cycles.

From my reading of the code, if the timeout is non-zero, then
napi_gro_flush will flush only "old" super-frames in
napi_complete_done.

If that's accurate (and maybe I missed something?), then how about:

  doc: The timeout, in nanoseconds, of when to trigger the NAPI
       watchdog timer which schedules NAPI processing. Additionally, a
       non-zero value will also prevent GRO from flushing recent
       super-frames at the end of a NAPI cycle. This may add receive
       latency in exchange for reducing the number of frames processed
       by the network stack.

LMK if that's accurate and sounds OK or if it's wrong / too verbose?

[1]: https://lore.kernel.org/netdev/20241009232728.107604-1-edumazet@google.com/T/#m3f11aae53b3244037ac641ef36985c5e85e2ed5e
On Thu, Oct 10, 2024 at 6:34 AM Joe Damato <jdamato@fastly.com> wrote:
>
> On Wed, Oct 09, 2024 at 08:14:40PM -0700, Jakub Kicinski wrote:
> > On Wed, 9 Oct 2024 00:54:58 +0000 Joe Damato wrote:
> > > +        name: gro-flush-timeout
> > > +        doc: The timeout, in nanoseconds, of when to trigger the NAPI
> > > +             watchdog timer and schedule NAPI processing.
> >
> > You gotta respin because we reformatted the cacheline info.
>
> Yea, I figured I'd be racing with that change and would need a
> respin.
>
> I'm not sure how the queue works exactly, but it looks like I might
> also be racing with another change [1], I think.
>
> I think I'm just over 24hr and could respin and resend now, but
> should I wait longer in case [1] is merged before you see my
> respin?

I would avoid the rtnl_lock() addition in "netdev-genl: Support
setting per-NAPI config values"
before re-sending ?

>
> Just trying to figure out how to get the fewest number of respins
> possible ;)
>
> > So while at it perhaps throw in a sentence here about the GRO effects?
> > The initial use of GRO flush timeout was to hold incomplete GRO
> > super-frames in the GRO engine across NAPI cycles.
>
> From my reading of the code, if the timeout is non-zero, then
> napi_gro_flush will flush only "old" super-frames in
> napi_complete_done.
>
> If that's accurate (and maybe I missed something?), then how about:
>
>   doc: The timeout, in nanoseconds, of when to trigger the NAPI
>        watchdog timer which schedules NAPI processing. Additionally, a
>        non-zero value will also prevent GRO from flushing recent
>        super-frames at the end of a NAPI cycle. This may add receive
>        latency in exchange for reducing the number of frames processed
>        by the network stack.

Note that linux TCP always has a PSH flag at the end of each TSO packet,
so the latency increase is only possible in presence of tail drop,
if the last MSS (with the PSH) was dropped.

>
> LMK if that's accurate and sounds OK or if it's wrong / too verbose?

I do not think it is too verbose.

>
> [1]: https://lore.kernel.org/netdev/20241009232728.107604-1-edumazet@google.com/T/#m3f11aae53b3244037ac641ef36985c5e85e2ed5e
On Thu, Oct 10, 2024 at 06:45:11AM +0200, Eric Dumazet wrote:
> On Thu, Oct 10, 2024 at 6:34 AM Joe Damato <jdamato@fastly.com> wrote:
> >
> > On Wed, Oct 09, 2024 at 08:14:40PM -0700, Jakub Kicinski wrote:
> > > On Wed, 9 Oct 2024 00:54:58 +0000 Joe Damato wrote:
> > > > +        name: gro-flush-timeout
> > > > +        doc: The timeout, in nanoseconds, of when to trigger the NAPI
> > > > +             watchdog timer and schedule NAPI processing.
> > >
> > > You gotta respin because we reformatted the cacheline info.
> >
> > Yea, I figured I'd be racing with that change and would need a
> > respin.
> >
> > I'm not sure how the queue works exactly, but it looks like I might
> > also be racing with another change [1], I think.
> >
> > I think I'm just over 24hr and could respin and resend now, but
> > should I wait longer in case [1] is merged before you see my
> > respin?
>
> I would avoid the rtnl_lock() addition in "netdev-genl: Support
> setting per-NAPI config values"
> before re-sending ?

OK.

> >
> > Just trying to figure out how to get the fewest number of respins
> > possible ;)
> >
> > > So while at it perhaps throw in a sentence here about the GRO effects?
> > > The initial use of GRO flush timeout was to hold incomplete GRO
> > > super-frames in the GRO engine across NAPI cycles.
> >
> > From my reading of the code, if the timeout is non-zero, then
> > napi_gro_flush will flush only "old" super-frames in
> > napi_complete_done.
> >
> > If that's accurate (and maybe I missed something?), then how about:
> >
> >   doc: The timeout, in nanoseconds, of when to trigger the NAPI
> >        watchdog timer which schedules NAPI processing. Additionally, a
> >        non-zero value will also prevent GRO from flushing recent
> >        super-frames at the end of a NAPI cycle. This may add receive
> >        latency in exchange for reducing the number of frames processed
> >        by the network stack.
>
> Note that linux TCP always has a PSH flag at the end of each TSO packet,
> so the latency increase is only possible in presence of tail drop,
> if the last MSS (with the PSH) was dropped.

Would you like me to note that in the doc, as well?
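To make the GRO behavior discussed in this thread concrete, here is a heavily
simplified sketch of the relevant part of napi_complete_done() (paraphrased
from net/core/dev.c, not a verbatim copy; the real function also handles
defer-hard-irqs accounting, NAPI state bits, busy polling and its boolean
return value):

	/* Simplified, hypothetical sketch of the GRO-related behavior of
	 * napi_complete_done(); not the actual kernel function.
	 */
	static void napi_complete_done_gro_sketch(struct napi_struct *n, int work_done)
	{
		unsigned long timeout = 0;

		if (work_done && n->gro_bitmask)
			timeout = napi_get_gro_flush_timeout(n);

		if (n->gro_bitmask)
			/* With a non-zero timeout, only "old" super-frames are
			 * flushed here; recently merged ones stay in the GRO
			 * engine across NAPI cycles.
			 */
			napi_gro_flush(n, !!timeout);

		if (timeout)
			/* Arm the NAPI watchdog so the deferred work is still
			 * scheduled even if no further interrupt arrives.
			 */
			hrtimer_start(&n->timer, ns_to_ktime(timeout),
				      HRTIMER_MODE_REL_PINNED);
	}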