commit 25216afc9db5 ("PCI: Add managed pcim_intx()") moved the
allocation step for pci_intx()'s device resource from
pcim_enable_device() to pcim_intx(). As before, pcim_enable_device()
sets pci_dev.is_managed to true, and it is never set to false again.
Due to the lifecycle of a struct pci_dev, it can happen that a second
driver obtains the same pci_dev after a first driver ran.
If one driver uses pcim_enable_device() and the other doesn't,
the second driver ends up in the managed pcim_intx() path, which
tries to allocate the device resource when called for the first time.
Allocations might sleep, so calling pci_intx() while holding spinlocks
then becomes invalid, which causes lockdep warnings and could lead to
deadlocks:
========================================================
WARNING: possible irq lock inversion dependency detected
6.11.0-rc6+ #59 Tainted: G W
--------------------------------------------------------
CPU 0/KVM/1537 just changed the state of lock:
ffffa0f0cff965f0 (&vdev->irqlock){-...}-{2:2}, at: vfio_intx_handler+0x21/0xd0 [vfio_pci_core]
but this lock took another, HARDIRQ-unsafe lock in the past:
 (fs_reclaim){+.+.}-{0:0}
and interrupts could create inverse lock ordering between them.
other info that might help us debug this:
Possible interrupt unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               local_irq_disable();
                               lock(&vdev->irqlock);
                               lock(fs_reclaim);
  <Interrupt>
    lock(&vdev->irqlock);
*** DEADLOCK ***
Have pcim_enable_device()'s release function, pcim_disable_device(), set
pci_dev.is_managed to false so that subsequent drivers using the same
struct pci_dev do not implicitly run into managed code.
Fixes: 25216afc9db5 ("PCI: Add managed pcim_intx()")
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Closes: https://lore.kernel.org/all/20240903094431.63551744.alex.williamson@redhat.com/
Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Philipp Stanner <pstanner@redhat.com>
Tested-by: Alex Williamson <alex.williamson@redhat.com>
---
@Bjorn:
This problem was introduced in the v6.11 merge window. So one might
consider getting it into mainline before v6.11.0 gets tagged.
P.
---
drivers/pci/devres.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/pci/devres.c b/drivers/pci/devres.c
index 3780a9f9ec00..c7affbbf73ab 100644
--- a/drivers/pci/devres.c
+++ b/drivers/pci/devres.c
@@ -483,6 +483,8 @@ static void pcim_disable_device(void *pdev_raw)
 
 	if (!pdev->pinned)
 		pci_disable_device(pdev);
+
+	pdev->is_managed = false;
 }
 
 /**
--
2.46.0
On Thu, Sep 05, 2024 at 09:25:57AM +0200, Philipp Stanner wrote:
> commit 25216afc9db5 ("PCI: Add managed pcim_intx()") moved the
> allocation step for pci_intx()'s device resource from
> pcim_enable_device() to pcim_intx(). As before, pcim_enable_device()
> sets pci_dev.is_managed to true, and it is never set to false again.
>
> [...]
>
> @Bjorn:
> This problem was introduced in the v6.11 merge window. So one might
> consider getting it into mainline before v6.11.0 gets tagged.
Applied with Damien's Reviewed-by to pci/for-linus for v6.11, thanks.
On Wed, 2024-09-11 at 09:27 -0500, Bjorn Helgaas wrote:
> On Thu, Sep 05, 2024 at 09:25:57AM +0200, Philipp Stanner wrote:
> > commit 25216afc9db5 ("PCI: Add managed pcim_intx()") moved the
> > allocation step for pci_intx()'s device resource from
> > pcim_enable_device() to pcim_intx(). [...]
> >
> > Have pcim_enable_device()'s release function, pcim_disable_device(),
> > set pci_dev.is_managed to false so that subsequent drivers using the
> > same struct pci_dev do implicitly run into managed code.
Oops, that should obviously be "do *not* run into managed code."
Mea culpa. Maybe you can amend that, Bjorn?
>
> Applied with Damien's Reviewed-by to pci/for-linus for v6.11, thanks.
thx!
P.
On Thu, Sep 12, 2024 at 09:18:17AM +0200, Philipp Stanner wrote:
> On Wed, 2024-09-11 at 09:27 -0500, Bjorn Helgaas wrote:
> > On Thu, Sep 05, 2024 at 09:25:57AM +0200, Philipp Stanner wrote:
> > > commit 25216afc9db5 ("PCI: Add managed pcim_intx()") moved the
> > > allocation step for pci_intx()'s device resource from
> > > pcim_enable_device() to pcim_intx(). [...]
> >
> Oops, that should obviously be "do *not* run into managed code."
>
> Mea culpa. Maybe you can amend that, Bjorn?
Fixed, thanks for the pointer.
On 9/5/24 16:25, Philipp Stanner wrote:
> commit 25216afc9db5 ("PCI: Add managed pcim_intx()") moved the
> allocation step for pci_intx()'s device resource from
> pcim_enable_device() to pcim_intx(). [...]
>
Looks OK to me.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research