From: I Viswanath <viswanathiyyappan@gmail.com>
To: andrew+netdev@lunn.ch, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, horms@kernel.org, sdf@fomichev.me,
 kuniyu@google.com, skhawaja@google.com, aleksander.lobakin@intel.com,
 mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com,
 eperezma@redhat.com
Cc: virtualization@lists.linux.dev, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kernel-mentees@lists.linux.dev,
 I Viswanath <viswanathiyyappan@gmail.com>
Subject: [PATCH net-next v5 1/2] net: refactor set_rx_mode into snapshot and deferred I/O
Date: Thu, 20 Nov 2025 19:43:53 +0530
Message-Id: <20251120141354.355059-2-viswanathiyyappan@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20251120141354.355059-1-viswanathiyyappan@gmail.com>
References: <20251120141354.355059-1-viswanathiyyappan@gmail.com>

ndo_set_rx_mode is problematic because it cannot sleep. Several drivers
circumvent this by deferring the rx_mode work to a work item, boilerplate
that could be avoided if the core provided such a mechanism itself. This
patch adds one: refactor set_rx_mode into two stages, a snapshot stage
and the actual I/O.

In this new model, when __dev_set_rx_mode() is called, we take a snapshot
of the current rx_mode configuration and commit it to the hardware later
via a work item. To accomplish this, reinterpret set_rx_mode as the ndo
for customizing the snapshot and enabling/disabling the rx_mode update,
and add a new ndo, write_rx_mode, for the deferred I/O.

Signed-off-by: I Viswanath <viswanathiyyappan@gmail.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
---
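Note for reviewers, not intended for the commit message: the sketch below
shows roughly what a driver conversion could look like in this model. It is
illustrative only; struct my_priv, the my_hw_*() register writers and
my_close() are hypothetical stand-ins, and only the netif_rx_mode_*()
helpers, the NETIF_RX_MODE_* flags and the two ndos come from this patch.

/* Stage 1: called under netif_addr_lock_bh(), must not sleep. It only
 * customizes the pending snapshot; no hardware I/O happens here.
 */
static void my_set_rx_mode(struct net_device *dev)
{
	struct my_priv *priv = netdev_priv(dev);

	/* Record a driver-specific knob in the snapshot */
	netif_rx_mode_set_bit(dev, NETIF_RX_MODE_VLAN_EN,
			      priv->vlan_filter_on);
}

/* Stage 2: runs later from the core's work item and may sleep */
static void my_write_rx_mode(struct net_device *dev)
{
	struct my_priv *priv = netdev_priv(dev);
	char *ha;
	int i;

	my_hw_set_promisc(priv,
			  netif_rx_mode_get_bit(dev, NETIF_RX_MODE_PROM_EN));
	my_hw_set_allmulti(priv,
			   netif_rx_mode_get_bit(dev, NETIF_RX_MODE_ALLMULTI_EN));

	/* Walk the snapshotted (not the live) multicast list */
	netif_rx_mode_for_each_mc_addr(dev, ha, i)
		my_hw_add_mc_filter(priv, ha);
}

static int my_close(struct net_device *dev)
{
	/* Replaces the driver-private flush_work()/cancel_work_sync() */
	netif_rx_mode_flush_work(dev);
	my_hw_down(netdev_priv(dev));
	return 0;
}

static const struct net_device_ops my_netdev_ops = {
	/* ... other ndos ... */
	.ndo_set_rx_mode	= my_set_rx_mode,
	.ndo_write_rx_mode	= my_write_rx_mode,
	.ndo_stop		= my_close,
};

The point of the split is visible above: ndo_set_rx_mode stays atomic and
only edits the pending snapshot, while all the sleeping register/bus I/O
lives in ndo_write_rx_mode, and the driver no longer owns a work item or
has to flush it by hand.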
 include/linux/netdevice.h | 104 ++++++++++++++++++-
 net/core/dev.c            | 208 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 305 insertions(+), 7 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index e808071dbb7d..e819426bb7cb 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1049,6 +1049,40 @@ struct netdev_net_notifier {
 	struct notifier_block *nb;
 };
 
+enum netif_rx_mode_flags {
+	/* enable flags */
+	NETIF_RX_MODE_ALLMULTI_EN,
+	NETIF_RX_MODE_PROM_EN,
+	NETIF_RX_MODE_VLAN_EN,
+
+	/* control flags */
+	/* pending config state */
+	NETIF_RX_MODE_CFG_READY,
+
+	/* if set, rx_mode config work will not be executed */
+	NETIF_RX_MODE_SET_DIS,
+
+	/* if set, uc/mc lists will not be part of rx_mode config */
+	NETIF_RX_MODE_UC_SKIP,
+	NETIF_RX_MODE_MC_SKIP
+};
+
+struct netif_rx_mode_config {
+	char *uc_addrs;
+	char *mc_addrs;
+	int uc_count;
+	int mc_count;
+	int ctrl_flags;
+	void *priv_ptr;
+};
+
+struct netif_rx_mode_ctx {
+	struct work_struct rx_mode_work;
+	struct net_device *dev;
+	struct netif_rx_mode_config *ready;
+	struct netif_rx_mode_config *pending;
+};
+
 /*
  * This structure defines the management hooks for network devices.
  * The following hooks can be defined; unless noted otherwise, they are
@@ -1101,9 +1135,14 @@ struct netdev_net_notifier {
  *	changes to configuration when multicast or promiscuous is enabled.
  *
  * void (*ndo_set_rx_mode)(struct net_device *dev);
- *	This function is called device changes address list filtering.
+ *	This function is called when device changes address list filtering.
  *	If driver handles unicast address filtering, it should set
- *	IFF_UNICAST_FLT in its priv_flags.
+ *	IFF_UNICAST_FLT in its priv_flags. This is used to configure
+ *	the rx_mode snapshot that will be written to the hardware.
+ *
+ * void (*ndo_write_rx_mode)(struct net_device *dev);
+ *	This function is scheduled after set_rx_mode and is responsible for
+ *	writing the rx_mode snapshot to the hardware.
 *
 * int (*ndo_set_mac_address)(struct net_device *dev, void *addr);
 *	This function is called when the Media Access Control address
@@ -1424,6 +1463,7 @@ struct net_device_ops {
 	void			(*ndo_change_rx_flags)(struct net_device *dev,
 						       int flags);
 	void			(*ndo_set_rx_mode)(struct net_device *dev);
+	void			(*ndo_write_rx_mode)(struct net_device *dev);
 	int			(*ndo_set_mac_address)(struct net_device *dev,
 						       void *addr);
 	int			(*ndo_validate_addr)(struct net_device *dev);
@@ -1926,7 +1966,7 @@ enum netdev_reg_state {
 *	@ingress_queue:		XXX: need comments on this one
 *	@nf_hooks_ingress:	netfilter hooks executed for ingress packets
 *	@broadcast:	hw bcast address
- *
+ *	@rx_mode_ctx:	context required for rx_mode config work
 *	@rx_cpu_rmap:	CPU reverse-mapping for RX completion interrupts,
 *			indexed by RX queue number. Assigned by driver.
 *			This must only be set if the ndo_rx_flow_steer
@@ -2337,6 +2377,7 @@ struct net_device {
 #endif
 
 	unsigned char		broadcast[MAX_ADDR_LEN];
+	struct netif_rx_mode_ctx	*rx_mode_ctx;
 #ifdef CONFIG_RFS_ACCEL
 	struct cpu_rmap		*rx_cpu_rmap;
 #endif
@@ -3360,6 +3401,63 @@ int dev_loopback_xmit(struct net *net, struct sock *sk, struct sk_buff *newskb);
 u16 dev_pick_tx_zero(struct net_device *dev, struct sk_buff *skb,
 		     struct net_device *sb_dev);
 
+void netif_rx_mode_schedule_work(struct net_device *dev, bool flush);
+
+/* Drivers that implement rx mode as work flush the work item when closing
+ * or suspending. This is the substitute for those calls.
+ */
+static inline void netif_rx_mode_flush_work(struct net_device *dev)
+{
+	flush_work(&dev->rx_mode_ctx->rx_mode_work);
+}
+
+/* Helpers to be used in the set_rx_mode implementation */
+static inline void netif_rx_mode_set_bit(struct net_device *dev, int b,
+					 bool val)
+{
+	if (val)
+		dev->rx_mode_ctx->pending->ctrl_flags |= BIT(b);
+	else
+		dev->rx_mode_ctx->pending->ctrl_flags &= ~BIT(b);
+}
+
+static inline void netif_rx_mode_set_priv_ptr(struct net_device *dev,
+					      void *priv)
+{
+	dev->rx_mode_ctx->pending->priv_ptr = priv;
+}
+
+/* Helpers to be used in the write_rx_mode implementation */
+static inline bool netif_rx_mode_get_bit(struct net_device *dev, int b)
+{
+	return !!(dev->rx_mode_ctx->ready->ctrl_flags & BIT(b));
+}
+
+static inline void *netif_rx_mode_get_priv_ptr(struct net_device *dev)
+{
+	return dev->rx_mode_ctx->ready->priv_ptr;
+}
+
+static inline int netif_rx_mode_get_mc_count(struct net_device *dev)
+{
+	return dev->rx_mode_ctx->ready->mc_count;
+}
+
+static inline int netif_rx_mode_get_uc_count(struct net_device *dev)
+{
+	return dev->rx_mode_ctx->ready->uc_count;
+}
+
+#define netif_rx_mode_for_each_uc_addr(dev, ha_addr, idx) \
+	for (ha_addr = (dev)->rx_mode_ctx->ready->uc_addrs, idx = 0; \
+	     idx < (dev)->rx_mode_ctx->ready->uc_count; \
+	     ha_addr += (dev)->addr_len, idx++)
+
+#define netif_rx_mode_for_each_mc_addr(dev, ha_addr, idx) \
+	for (ha_addr = (dev)->rx_mode_ctx->ready->mc_addrs, idx = 0; \
+	     idx < (dev)->rx_mode_ctx->ready->mc_count; \
+	     ha_addr += (dev)->addr_len, idx++)
+
 int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev);
 int __dev_direct_xmit(struct sk_buff *skb, u16 queue_id);
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 69515edd17bc..2be3ff8512b1 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1645,6 +1645,160 @@ static int napi_kthread_create(struct napi_struct *n)
 	return err;
 }
 
+/* The existence of pending/ready config is an implementation detail. The
+ * caller shouldn't be aware of them. This is a bit hacky. We read
+ * bits from pending because control bits need to be read before pending
+ * is prepared.
+ */
+static bool __netif_rx_mode_pending_get_bit(struct net_device *dev, int b)
+{
+	return !!(dev->rx_mode_ctx->pending->ctrl_flags & BIT(b));
+}
+
+/* This function attempts to copy the current state of the
+ * net device into pending (reallocating if necessary). If it fails,
+ * pending is guaranteed to be unmodified.
+ */
+static int netif_rx_mode_alloc_and_fill_pending(struct net_device *dev)
+{
+	struct netif_rx_mode_config *pending = dev->rx_mode_ctx->pending;
+	int uc_count = 0, mc_count = 0;
+	struct netdev_hw_addr *ha;
+	char *tmp;
+	int i;
+
+	/* The allocations need to be atomic since this will be called under
+	 * netif_addr_lock_bh()
+	 */
+	if (!__netif_rx_mode_pending_get_bit(dev, NETIF_RX_MODE_UC_SKIP)) {
+		uc_count = netdev_uc_count(dev);
+		tmp = krealloc(pending->uc_addrs,
+			       uc_count * dev->addr_len,
+			       GFP_ATOMIC);
+		if (!tmp)
+			return -ENOMEM;
+		pending->uc_addrs = tmp;
+	}
+
+	if (!__netif_rx_mode_pending_get_bit(dev, NETIF_RX_MODE_MC_SKIP)) {
+		mc_count = netdev_mc_count(dev);
+		tmp = krealloc(pending->mc_addrs,
+			       mc_count * dev->addr_len,
+			       GFP_ATOMIC);
+		if (!tmp)
+			return -ENOMEM;
+		pending->mc_addrs = tmp;
+	}
+
+	/* This function cannot fail after this point */
+
+	/* This is going to be the same for every single driver. Better to
+	 * do it here than in the set_rx_mode impl
+	 */
+	netif_rx_mode_set_bit(dev, NETIF_RX_MODE_ALLMULTI_EN,
+			      !!(dev->flags & IFF_ALLMULTI));
+
+	netif_rx_mode_set_bit(dev, NETIF_RX_MODE_PROM_EN,
+			      !!(dev->flags & IFF_PROMISC));
+
+	i = 0;
+	if (!__netif_rx_mode_pending_get_bit(dev, NETIF_RX_MODE_UC_SKIP)) {
+		pending->uc_count = uc_count;
+		netdev_for_each_uc_addr(ha, dev)
+			memcpy(pending->uc_addrs + (i++) * dev->addr_len,
+			       ha->addr,
+			       dev->addr_len);
+	}
+
+	i = 0;
+	if (!__netif_rx_mode_pending_get_bit(dev, NETIF_RX_MODE_MC_SKIP)) {
+		pending->mc_count = mc_count;
+		netdev_for_each_mc_addr(ha, dev)
+			memcpy(pending->mc_addrs + (i++) * dev->addr_len,
+			       ha->addr,
+			       dev->addr_len);
+	}
+	return 0;
+}
+
+static void netif_rx_mode_prepare_pending(struct net_device *dev)
+{
+	int rc;
+
+	lockdep_assert_held(&dev->addr_list_lock);
+
+	rc = netif_rx_mode_alloc_and_fill_pending(dev);
+	if (rc)
+		return;
+
+	netif_rx_mode_set_bit(dev, NETIF_RX_MODE_CFG_READY, true);
+}
+
+static void netif_rx_mode_write_active(struct work_struct *param)
+{
+	struct netif_rx_mode_ctx *rx_mode_ctx = container_of(param,
+			struct netif_rx_mode_ctx, rx_mode_work);
+	struct net_device *dev = rx_mode_ctx->dev;
+
+	/* Paranoia. */
+	WARN_ON(!dev->netdev_ops->ndo_write_rx_mode);
+
+	/* We could introduce a new lock for this but reusing the addr
+	 * lock works well enough
+	 */
+	netif_addr_lock_bh(dev);
+
+	/* There's no point continuing if the pending config is not ready */
+	if (!__netif_rx_mode_pending_get_bit(dev, NETIF_RX_MODE_CFG_READY)) {
+		netif_addr_unlock_bh(dev);
+		return;
+	}
+
+	/* We use the prepared pending config as the new ready config and
+	 * reuse old ready config's memory for the next pending config
+	 */
+	swap(rx_mode_ctx->ready, rx_mode_ctx->pending);
+	netif_rx_mode_set_bit(dev, NETIF_RX_MODE_CFG_READY, false);
+
+	netif_addr_unlock_bh(dev);
+
+	rtnl_lock();
+	dev->netdev_ops->ndo_write_rx_mode(dev);
+	rtnl_unlock();
+}
+
+static int alloc_rx_mode_ctx(struct net_device *dev)
+{
+	dev->rx_mode_ctx = kzalloc(sizeof(*dev->rx_mode_ctx), GFP_KERNEL);
+	if (!dev->rx_mode_ctx)
+		goto fail;
+
+	dev->rx_mode_ctx->ready = kzalloc(sizeof(*dev->rx_mode_ctx->ready),
+					  GFP_KERNEL);
+	if (!dev->rx_mode_ctx->ready)
+		goto fail_ready;
+
+	dev->rx_mode_ctx->pending = kzalloc(sizeof(*dev->rx_mode_ctx->pending),
+					    GFP_KERNEL);
+	if (!dev->rx_mode_ctx->pending)
+		goto fail_pending;
+
+	INIT_WORK(&dev->rx_mode_ctx->rx_mode_work, netif_rx_mode_write_active);
+	dev->rx_mode_ctx->dev = dev;
+
+	return 0;
+
+fail_pending:
+	kfree(dev->rx_mode_ctx->ready);
+fail_ready:
+	kfree(dev->rx_mode_ctx);
+fail:
+	return -ENOMEM;
+}
+
 static int __dev_open(struct net_device *dev, struct netlink_ext_ack *extack)
 {
 	const struct net_device_ops *ops = dev->netdev_ops;
@@ -1679,6 +1833,9 @@ static int __dev_open(struct net_device *dev, struct netlink_ext_ack *extack)
 	if (ops->ndo_validate_addr)
 		ret = ops->ndo_validate_addr(dev);
 
+	if (!ret && ops->ndo_write_rx_mode)
+		ret = alloc_rx_mode_ctx(dev);
+
 	if (!ret && ops->ndo_open)
 		ret = ops->ndo_open(dev);
 
@@ -1713,6 +1870,22 @@ int netif_open(struct net_device *dev, struct netlink_ext_ack *extack)
 	return ret;
 }
 
+static void cleanup_rx_mode_ctx(struct net_device *dev)
+{
+	/* cancel and wait for execution to complete */
+	cancel_work_sync(&dev->rx_mode_ctx->rx_mode_work);
+
+	kfree(dev->rx_mode_ctx->pending->uc_addrs);
+	kfree(dev->rx_mode_ctx->pending->mc_addrs);
+	kfree(dev->rx_mode_ctx->pending);
+
+	kfree(dev->rx_mode_ctx->ready->uc_addrs);
+	kfree(dev->rx_mode_ctx->ready->mc_addrs);
+	kfree(dev->rx_mode_ctx->ready);
+
+	kfree(dev->rx_mode_ctx);
+}
+
 static void __dev_close_many(struct list_head *head)
 {
 	struct net_device *dev;
@@ -1755,6 +1928,9 @@ static void __dev_close_many(struct list_head *head)
 		if (ops->ndo_stop)
 			ops->ndo_stop(dev);
 
+		if (ops->ndo_write_rx_mode)
+			cleanup_rx_mode_ctx(dev);
+
 		netif_set_up(dev, false);
 		netpoll_poll_enable(dev);
 	}
@@ -9613,6 +9789,33 @@ int netif_set_allmulti(struct net_device *dev, int inc, bool notify)
 	return 0;
 }
 
+/* netif_rx_mode_schedule_work - Sets up the rx_config snapshot and
+ * schedules the deferred I/O. If it's necessary to wait for completion
+ * of I/O, set flush to true.
+ */
+void netif_rx_mode_schedule_work(struct net_device *dev, bool flush)
+{
+	const struct net_device_ops *ops = dev->netdev_ops;
+
+	if (ops->ndo_set_rx_mode)
+		ops->ndo_set_rx_mode(dev);
+
+	/* Return early if ndo_write_rx_mode is not implemented */
+	if (!ops->ndo_write_rx_mode)
+		return;
+
+	/* If rx_mode config is disabled, we don't schedule the work */
+	if (__netif_rx_mode_pending_get_bit(dev, NETIF_RX_MODE_SET_DIS))
+		return;
+
+	netif_rx_mode_prepare_pending(dev);
+
+	schedule_work(&dev->rx_mode_ctx->rx_mode_work);
+	if (flush)
+		flush_work(&dev->rx_mode_ctx->rx_mode_work);
+}
+EXPORT_SYMBOL(netif_rx_mode_schedule_work);
+
 /*
  * Upload unicast and multicast address lists to device and
  * configure RX filtering. When the device doesn't support unicast
@@ -9621,8 +9824,6 @@ int netif_set_allmulti(struct net_device *dev, int inc, bool notify)
  */
 void __dev_set_rx_mode(struct net_device *dev)
 {
-	const struct net_device_ops *ops = dev->netdev_ops;
-
 	/* dev_open will call this function so the list will stay sane. */
 	if (!(dev->flags&IFF_UP))
 		return;
@@ -9643,8 +9844,7 @@ void __dev_set_rx_mode(struct net_device *dev)
 		}
 	}
 
-	if (ops->ndo_set_rx_mode)
-		ops->ndo_set_rx_mode(dev);
+	netif_rx_mode_schedule_work(dev, false);
 }
 
 void dev_set_rx_mode(struct net_device *dev)
-- 
2.34.1