From: Joe Damato <jdamato@fastly.com>
To: netdev@vger.kernel.org
Cc: mkarsten@uwaterloo.ca, kuba@kernel.org, skhawaja@google.com, sdf@fomichev.me,
    bjorn@rivosinc.com, amritha.nambiar@intel.com, sridhar.samudrala@intel.com,
    Joe Damato, "David S. Miller", Eric Dumazet, Paolo Abeni,
    Jonathan Corbet, Jiri Pirko, Sebastian Andrzej Siewior,
    Lorenzo Bianconi, Johannes Berg, Breno Leitao, Alexander Lobakin,
    linux-doc@vger.kernel.org (open list:DOCUMENTATION),
    linux-kernel@vger.kernel.org (open list)
Subject: [RFC net-next v3 3/9] net: napi: Make gro_flush_timeout per-NAPI
Date: Thu, 12 Sep 2024 10:07:11 +0000
Message-Id: <20240912100738.16567-4-jdamato@fastly.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240912100738.16567-1-jdamato@fastly.com>
References: <20240912100738.16567-1-jdamato@fastly.com>

Allow per-NAPI gro_flush_timeout setting.

The existing sysfs parameter is respected; writes to sysfs will write to
all NAPI structs for the device and the net_device gro_flush_timeout
field. Reads from sysfs will read from the net_device field.

The ability to set gro_flush_timeout on specific NAPI instances will be
added in a later commit, via netdev-genl.
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
 .../networking/net_cachelines/net_device.rst |  2 +-
 include/linux/netdevice.h                    |  3 +-
 net/core/dev.c                               | 12 +++---
 net/core/dev.h                               | 39 +++++++++++++++++++
 net/core/net-sysfs.c                         |  2 +-
 5 files changed, 49 insertions(+), 9 deletions(-)

diff --git a/Documentation/networking/net_cachelines/net_device.rst b/Documentation/networking/net_cachelines/net_device.rst
index eeeb7c925ec5..3d02ae79c850 100644
--- a/Documentation/networking/net_cachelines/net_device.rst
+++ b/Documentation/networking/net_cachelines/net_device.rst
@@ -98,7 +98,6 @@ struct_netdev_queue*                _rx                     read_mostly
 unsigned_int                        num_rx_queues
 unsigned_int                        real_num_rx_queues      -                   read_mostly         get_rps_cpu
 struct_bpf_prog*                    xdp_prog                -                   read_mostly         netif_elide_gro()
-unsigned_long                       gro_flush_timeout       -                   read_mostly         napi_complete_done
 unsigned_int                        gro_max_size            -                   read_mostly         skb_gro_receive
 unsigned_int                        gro_ipv4_max_size       -                   read_mostly         skb_gro_receive
 rx_handler_func_t*                  rx_handler              read_mostly         -                   __netif_receive_skb_core
@@ -182,4 +181,5 @@ struct_devlink_port*                devlink_port
 struct_dpll_pin*                    dpll_pin
 struct hlist_head                   page_pools
 struct dim_irq_moder*               irq_moder
+unsigned_long                       gro_flush_timeout
 u32                                 napi_defer_hard_irqs
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index f28b96c95259..3e07ab8e0295 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -377,6 +377,7 @@ struct napi_struct {
 	struct list_head	dev_list;
 	struct hlist_node	napi_hash_node;
 	int			irq;
+	unsigned long		gro_flush_timeout;
 	u32			defer_hard_irqs;
 };
 
@@ -2075,7 +2076,6 @@ struct net_device {
 	int			ifindex;
 	unsigned int		real_num_rx_queues;
 	struct netdev_rx_queue	*_rx;
-	unsigned long		gro_flush_timeout;
 	unsigned int		gro_max_size;
 	unsigned int		gro_ipv4_max_size;
 	rx_handler_func_t __rcu	*rx_handler;
@@ -2398,6 +2398,7 @@ struct net_device {
 
 	/** @irq_moder: dim parameters used if IS_ENABLED(CONFIG_DIMLIB). */
 	struct dim_irq_moder	*irq_moder;
+	unsigned long		gro_flush_timeout;
 	u32			napi_defer_hard_irqs;
 
 	u8			priv[] ____cacheline_aligned
diff --git a/net/core/dev.c b/net/core/dev.c
index d3d0680664b3..f2fd503516de 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6220,12 +6220,12 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
 
 	if (work_done) {
 		if (n->gro_bitmask)
-			timeout = READ_ONCE(n->dev->gro_flush_timeout);
+			timeout = napi_get_gro_flush_timeout(n);
 		n->defer_hard_irqs_count = napi_get_defer_hard_irqs(n);
 	}
 	if (n->defer_hard_irqs_count > 0) {
 		n->defer_hard_irqs_count--;
-		timeout = READ_ONCE(n->dev->gro_flush_timeout);
+		timeout = napi_get_gro_flush_timeout(n);
 		if (timeout)
 			ret = false;
 	}
@@ -6360,7 +6360,7 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock,
 
 	if (flags & NAPI_F_PREFER_BUSY_POLL) {
 		napi->defer_hard_irqs_count = napi_get_defer_hard_irqs(napi);
-		timeout = READ_ONCE(napi->dev->gro_flush_timeout);
+		timeout = napi_get_gro_flush_timeout(napi);
 		if (napi->defer_hard_irqs_count && timeout) {
 			hrtimer_start(&napi->timer, ns_to_ktime(timeout), HRTIMER_MODE_REL_PINNED);
 			skip_schedule = true;
@@ -6642,6 +6642,7 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi,
 	hrtimer_init(&napi->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
 	napi->timer.function = napi_watchdog;
 	napi_set_defer_hard_irqs(napi, READ_ONCE(dev->napi_defer_hard_irqs));
+	napi_set_gro_flush_timeout(napi, READ_ONCE(dev->gro_flush_timeout));
 	init_gro_hash(napi);
 	napi->skb = NULL;
 	INIT_LIST_HEAD(&napi->rx_list);
@@ -11023,7 +11024,7 @@ void netdev_sw_irq_coalesce_default_on(struct net_device *dev)
 	WARN_ON(dev->reg_state == NETREG_REGISTERED);
 
 	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
-		dev->gro_flush_timeout = 20000;
+		netdev_set_gro_flush_timeout(dev, 20000);
 		netdev_set_defer_hard_irqs(dev, 1);
 	}
 }
@@ -11960,7 +11961,6 @@ static void __init net_dev_struct_check(void)
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, ifindex);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, real_num_rx_queues);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, _rx);
-	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_flush_timeout);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_max_size);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, gro_ipv4_max_size);
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, rx_handler);
@@ -11972,7 +11972,7 @@ static void __init net_dev_struct_check(void)
 #ifdef CONFIG_NET_XGRESS
 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, tcx_ingress);
 #endif
-	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 100);
+	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_rx, 92);
 }
 
 /*
diff --git a/net/core/dev.h b/net/core/dev.h
index f24fa38a2cac..a9d5f678564a 100644
--- a/net/core/dev.h
+++ b/net/core/dev.h
@@ -174,6 +174,45 @@ static inline void netdev_set_defer_hard_irqs(struct net_device *netdev,
 	napi_set_defer_hard_irqs(napi, defer);
 }
 
+/**
+ * napi_get_gro_flush_timeout - get the gro_flush_timeout
+ * @n: napi struct to get the gro_flush_timeout from
+ *
+ * Return: the per-NAPI value of the gro_flush_timeout field.
+ */
+static inline unsigned long napi_get_gro_flush_timeout(const struct napi_struct *n)
+{
+	return READ_ONCE(n->gro_flush_timeout);
+}
+
+/**
+ * napi_set_gro_flush_timeout - set the gro_flush_timeout for a napi
+ * @n: napi struct to set the gro_flush_timeout
+ * @timeout: timeout value to set
+ *
+ * napi_set_gro_flush_timeout sets the per-NAPI gro_flush_timeout
+ */
+static inline void napi_set_gro_flush_timeout(struct napi_struct *n,
+					      unsigned long timeout)
+{
+	WRITE_ONCE(n->gro_flush_timeout, timeout);
+}
+
+/**
+ * netdev_set_gro_flush_timeout - set gro_flush_timeout for all NAPIs of a netdev
+ * @netdev: the net_device for which all NAPIs will have their gro_flush_timeout set
+ * @timeout: the timeout value to set
+ */
+static inline void netdev_set_gro_flush_timeout(struct net_device *netdev,
+						unsigned long timeout)
+{
+	struct napi_struct *napi;
+
+	WRITE_ONCE(netdev->gro_flush_timeout, timeout);
+	list_for_each_entry(napi, &netdev->napi_list, dev_list)
+		napi_set_gro_flush_timeout(napi, timeout);
+}
+
 int rps_cpumask_housekeeping(struct cpumask *mask);
 
 #if defined(CONFIG_DEBUG_NET) && defined(CONFIG_BPF_SYSCALL)
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 25125f356a15..2d9afc6e2161 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -409,7 +409,7 @@ NETDEVICE_SHOW_RW(tx_queue_len, fmt_dec);
 
 static int change_gro_flush_timeout(struct net_device *dev, unsigned long val)
 {
-	WRITE_ONCE(dev->gro_flush_timeout, val);
+	netdev_set_gro_flush_timeout(dev, val);
 	return 0;
 }
 
-- 
2.25.1