Previously, e1000_down called cancel_work_sync for the e1000 reset task
(via e1000_down_and_stop), which takes RTNL.
As reported by users and syzbot, a deadlock is possible due to lock
inversion in the following scenario:
CPU 0:
- RTNL is held
- e1000_close
- e1000_down
- cancel_work_sync (takes the work queue mutex)
- e1000_reset_task
CPU 1:
- process_one_work (takes the work queue mutex)
- e1000_reset_task (takes RTNL)
To remedy this, avoid calling cancel_work_sync from e1000_down
(e1000_reset_task does nothing if the device is down anyway). Instead,
call cancel_work_sync for e1000_reset_task when the device is being
removed.
Fixes: e400c7444d84 ("e1000: Hold RTNL when e1000_down can be called")
Reported-by: syzbot+846bb38dc67fe62cc733@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/683837bf.a00a0220.52848.0003.GAE@google.com/
Reported-by: John <john.cs.hey@gmail.com>
Closes: https://lore.kernel.org/netdev/CAP=Rh=OEsn4y_2LvkO3UtDWurKcGPnZ_NPSXK=FbgygNXL37Sw@mail.gmail.com/
Signed-off-by: Joe Damato <jdamato@fastly.com>
---
drivers/net/ethernet/intel/e1000/e1000_main.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index 3f089c3d47b2..d8595e84326d 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -477,10 +477,6 @@ static void e1000_down_and_stop(struct e1000_adapter *adapter)
 	cancel_delayed_work_sync(&adapter->phy_info_task);
 	cancel_delayed_work_sync(&adapter->fifo_stall_task);
-
-	/* Only kill reset task if adapter is not resetting */
-	if (!test_bit(__E1000_RESETTING, &adapter->flags))
-		cancel_work_sync(&adapter->reset_task);
 }
 
 void e1000_down(struct e1000_adapter *adapter)
@@ -1266,6 +1262,10 @@ static void e1000_remove(struct pci_dev *pdev)
 	unregister_netdev(netdev);
 
+	/* Only kill reset task if adapter is not resetting */
+	if (!test_bit(__E1000_RESETTING, &adapter->flags))
+		cancel_work_sync(&adapter->reset_task);
+
 	e1000_phy_hw_reset(hw);
 
 	kfree(adapter->tx_ring);
--
2.43.0
On 05/30, Joe Damato wrote:
> Previously, e1000_down called cancel_work_sync for the e1000 reset task
> (via e1000_down_and_stop), which takes RTNL.
>
> As reported by users and syzbot, a deadlock is possible due to lock
> inversion in the following scenario:
>
> CPU 0:
> - RTNL is held
> - e1000_close
>   - e1000_down
>     - cancel_work_sync (takes the work queue mutex)
>       - e1000_reset_task
>
> CPU 1:
> - process_one_work (takes the work queue mutex)
>   - e1000_reset_task (takes RTNL)

nit: as Jakub mentioned in another thread, it seems more about the
flush_work waiting for the reset_task to complete rather than
wq mutexes (which are fake)?

CPU 0:
- RTNL is held
- e1000_close
  - e1000_down
    - cancel_work_sync
      - __flush_work
        - <wait here for the reset_task to finish>

CPU 1:
- process_one_work
  - e1000_reset_task (takes RTNL)
    - <but cpu 0 already holds rtnl>

The fix looks good!

Acked-by: Stanislav Fomichev <sdf@fomichev.me>
On Fri, May 30, 2025 at 08:07:29AM -0700, Stanislav Fomichev wrote:
> On 05/30, Joe Damato wrote:
> > Previously, e1000_down called cancel_work_sync for the e1000 reset task
> > (via e1000_down_and_stop), which takes RTNL.
> >
> > As reported by users and syzbot, a deadlock is possible due to lock
> > inversion in the following scenario:
> >
> > CPU 0:
> > - RTNL is held
> > - e1000_close
> >   - e1000_down
> >     - cancel_work_sync (takes the work queue mutex)
> >       - e1000_reset_task
> >
> > CPU 1:
> > - process_one_work (takes the work queue mutex)
> >   - e1000_reset_task (takes RTNL)
>
> nit: as Jakub mentioned in another thread, it seems more about the
> flush_work waiting for the reset_task to complete rather than
> wq mutexes (which are fake)?

Hm, I probably misunderstood something. Also, not sure what you
meant by the wq mutexes being fake?

My understanding (which is prob wrong) from the syzbot and user
report was that the order of wq mutex and rtnl are inverted in the
two paths, which can cause a deadlock if both paths run.

In the case you describe below, wouldn't cpu0's __flush_work
eventually finish, releasing RTNL, and allowing CPU 1 to proceed?

It seemed to me that the only way for deadlock to happen was with
the inversion described above -- but I'm probably missing something.

> CPU 0:
> - RTNL is held
> - e1000_close
>   - e1000_down
>     - cancel_work_sync
>       - __flush_work
>         - <wait here for the reset_task to finish>
>
> CPU 1:
> - process_one_work
>   - e1000_reset_task (takes RTNL)
>     - <but cpu 0 already holds rtnl>
>
> The fix looks good!

Thanks for taking a look.

> Acked-by: Stanislav Fomichev <sdf@fomichev.me>
On Fri, 30 May 2025 12:45:13 -0700 Joe Damato wrote:
> > nit: as Jakub mentioned in another thread, it seems more about the
> > flush_work waiting for the reset_task to complete rather than
> > wq mutexes (which are fake)?
>
> Hm, I probably misunderstood something. Also, not sure what you
> meant by the wq mutexes being fake?
>
> My understanding (which is prob wrong) from the syzbot and user
> report was that the order of wq mutex and rtnl are inverted in the
> two paths, which can cause a deadlock if both paths run.

Take a look at touch_work_lockdep_map(), there's no such thing as a wq mutex.
It's just a lockdep "annotation" that helps lockdep connect the dots
between waiting thread and the work item, not a real mutex. So the
commit msg may be better phrased like this (modulo the lines in front):

 CPU 0:
,  - RTNL is held
/  - e1000_close
|    - e1000_down
+-     - cancel_work_sync (cancel / wait for e1000_reset_task())
|
| CPU 1:
|  - process_one_work
\    - e1000_reset_task
 `-    take RTNL
On Fri, May 30, 2025 at 06:31:40PM -0700, Jakub Kicinski wrote:
> On Fri, 30 May 2025 12:45:13 -0700 Joe Damato wrote:
> > > nit: as Jakub mentioned in another thread, it seems more about the
> > > flush_work waiting for the reset_task to complete rather than
> > > wq mutexes (which are fake)?
> >
> > Hm, I probably misunderstood something. Also, not sure what you
> > meant by the wq mutexes being fake?
> >
> > My understanding (which is prob wrong) from the syzbot and user
> > report was that the order of wq mutex and rtnl are inverted in the
> > two paths, which can cause a deadlock if both paths run.
>
> Take a look at touch_work_lockdep_map(), there's no such thing as a wq mutex.
> It's just a lockdep "annotation" that helps lockdep connect the dots
> between waiting thread and the work item, not a real mutex. So the
> commit msg may be better phrased like this (modulo the lines in front):
>
> CPU 0:
> , - RTNL is held
> / - e1000_close
> | - e1000_down
> +- - cancel_work_sync (cancel / wait for e1000_reset_task())
> |
> | CPU 1:
> | - process_one_work
> \ - e1000_reset_task
> `- take RTNL
OK, I'll resubmit shortly with the following commit message:
e1000: Move cancel_work_sync to avoid deadlock
Previously, e1000_down called cancel_work_sync for the e1000 reset task
(via e1000_down_and_stop), which takes RTNL.
As reported by users and syzbot, a deadlock is possible in the following
scenario:
CPU 0:
- RTNL is held
- e1000_close
- e1000_down
- cancel_work_sync (cancel / wait for e1000_reset_task())
CPU 1:
- process_one_work
- e1000_reset_task
- take RTNL
To remedy this, avoid calling cancel_work_sync from e1000_down
(e1000_reset_task does nothing if the device is down anyway). Instead,
call cancel_work_sync for e1000_reset_task when the device is being
removed.
> -----Original Message-----
> From: Joe Damato <jdamato@fastly.com>
> Sent: Monday, June 2, 2025 1:32 PM
> To: Jakub Kicinski <kuba@kernel.org>
> Cc: Stanislav Fomichev <stfomichev@gmail.com>; netdev@vger.kernel.org;
> john.cs.hey@gmail.com; Keller, Jacob E <jacob.e.keller@intel.com>;
> syzbot+846bb38dc67fe62cc733@syzkaller.appspotmail.com; Nguyen, Anthony L
> <anthony.l.nguyen@intel.com>; Kitszel, Przemyslaw
> <przemyslaw.kitszel@intel.com>; Andrew Lunn <andrew+netdev@lunn.ch>; David
> S. Miller <davem@davemloft.net>; Eric Dumazet <edumazet@google.com>;
> Paolo Abeni <pabeni@redhat.com>; moderated list:INTEL ETHERNET DRIVERS
> <intel-wired-lan@lists.osuosl.org>; open list <linux-kernel@vger.kernel.org>
> Subject: Re: [PATCH iwl-net] e1000: Move cancel_work_sync to avoid deadlock
>
> On Fri, May 30, 2025 at 06:31:40PM -0700, Jakub Kicinski wrote:
> > On Fri, 30 May 2025 12:45:13 -0700 Joe Damato wrote:
> > > > nit: as Jakub mentioned in another thread, it seems more about the
> > > > flush_work waiting for the reset_task to complete rather than
> > > > wq mutexes (which are fake)?
> > >
> > > Hm, I probably misunderstood something. Also, not sure what you
> > > meant by the wq mutexes being fake?
> > >
> > > My understanding (which is prob wrong) from the syzbot and user
> > > report was that the order of wq mutex and rtnl are inverted in the
> > > two paths, which can cause a deadlock if both paths run.
> >
> > Take a look at touch_work_lockdep_map(), there's no such thing as a wq
> > mutex. It's just a lockdep "annotation" that helps lockdep connect the
> > dots between waiting thread and the work item, not a real mutex. So the
> > commit msg may be better phrased like this (modulo the lines in front):
> >
> >  CPU 0:
> > ,  - RTNL is held
> > /  - e1000_close
> > |    - e1000_down
> > +-     - cancel_work_sync (cancel / wait for e1000_reset_task())
> > |
> > | CPU 1:
> > |  - process_one_work
> > \    - e1000_reset_task
> >  `-    take RTNL
>
> OK, I'll resubmit shortly with the following commit message:
>
> e1000: Move cancel_work_sync to avoid deadlock
>
> Previously, e1000_down called cancel_work_sync for the e1000 reset task
> (via e1000_down_and_stop), which takes RTNL.
>
> As reported by users and syzbot, a deadlock is possible in the following
> scenario:
>
> CPU 0:
> - RTNL is held
> - e1000_close
>   - e1000_down
>     - cancel_work_sync (cancel / wait for e1000_reset_task())
>
> CPU 1:
> - process_one_work
>   - e1000_reset_task
>     - take RTNL
>
> To remedy this, avoid calling cancel_work_sync from e1000_down
> (e1000_reset_task does nothing if the device is down anyway). Instead,
> call cancel_work_sync for e1000_reset_task when the device is being
> removed.

Acked-by: Jacob Keller <jacob.e.keller@intel.com>