[PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64

Lecopzer Chen posted 6 patches 2 years ago
[PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64
Posted by Lecopzer Chen 2 years ago
We have been using the hld (hard lockup detector) internally for arm64 since
2020, but it still has no proper upstream support, and we badly need it.

This series is a rework, based on 5.17, of [1]; the original authors are
Pingfan Liu <kernelfans@gmail.com>
Sumit Garg <sumit.garg@linaro.org>

Quote from [1]:

> Hard lockup detector is helpful to diagnose unpaired irq enable/disable.
> But the current watchdog framework can not cope with arm64 hw perf event
> easily.

> On arm64, when lockup_detector_init()->watchdog_nmi_probe(), PMU is not
> ready until device_initcall(armv8_pmu_driver_init).  And it is deeply
> integrated with the driver model and cpuhp. Hence it is hard to push the
> initialization of armv8_pmu_driver_init() before smp_init().

> But it is easy to take an opposite approach by enabling watchdog_hld to
> get the capability of PMU async.
> The async model is achieved by expanding watchdog_nmi_probe() with
> -EBUSY, and a re-initializing work_struct which waits on a
> wait_queue_head.

Provide an API, retry_lockup_detector_init(), for anyone who needs to
delay the init of the lockup detector.

The original assumption is: nobody should use the delayed probe after
lockup_detector_check() (which has the __init attribute).
That is, anyone using this API must call it between lockup_detector_init()
and lockup_detector_check(), and the caller must have the __init attribute.
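
A minimal sketch of the caller side, assuming the armv8 PMU driver init path
(function names other than retry_lockup_detector_init() are recalled from the
arm64 PMU driver and the exact hunk in arch/arm64/kernel/perf_event.c may
differ):

/*
 * Hedged sketch, not the literal patch: the arm64 PMU driver finishes
 * probing at device_initcall() time, long after lockup_detector_init()
 * has run and failed its probe, so it kicks the retry from here.
 */
static int __init armv8_pmu_driver_init(void)
{
	int ret;

	if (acpi_disabled)
		ret = platform_driver_register(&armv8_pmu_driver);
	else
		ret = arm_pmu_acpi_probe(armv8_pmuv3_pmu_init);

	/* PMU is usable now; ask the watchdog core to probe again. */
	if (!ret)
		retry_lockup_detector_init();

	return ret;
}
device_initcall(armv8_pmu_driver_init)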

The delayed init flow is as follows (sketched in code after the list):
1. lockup_detector_init() -> watchdog_nmi_probe() gets a non-zero return,
   then sets allow_lockup_detector_init_retry to true, which means it is
   possible to do a delayed probe later.

2. PMU arch code init done, call retry_lockup_detector_init().

3. retry_lockup_detector_init() queues the work only when
   allow_lockup_detector_init_retry is true, which means nobody should call
   this before lockup_detector_init().

4. The work item lockup_detector_delay_init() runs without a wait event;
   if the probe succeeds, it sets allow_lockup_detector_init_retry to false.

5. At late_initcall_sync(), lockup_detector_check() sets
   allow_lockup_detector_init_retry to false first to avoid any later retry,
   and then flush_work() to make sure the __init section won't be freed
   before the work is done.
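
Put together, a minimal sketch of the kernel/watchdog.c side of this flow
(names follow the description above; the actual patch may differ in detail):

/* Step 1 lives in lockup_detector_init(): when watchdog_nmi_probe()
 * fails there, it sets allow_lockup_detector_init_retry = true. */
static bool allow_lockup_detector_init_retry __initdata;

/* Step 4: the work item retries the probe, no wait event involved. */
static void __init lockup_detector_delay_init(struct work_struct *work)
{
	int ret;

	ret = watchdog_nmi_probe();
	if (ret) {
		pr_info("Delayed init of the lockup detector failed: %d\n", ret);
		return;
	}

	allow_lockup_detector_init_retry = false;
	nmi_watchdog_available = true;
	lockup_detector_setup();
}

static struct work_struct detector_work __initdata =
	__WORK_INITIALIZER(detector_work, lockup_detector_delay_init);

/* Steps 2/3: called from PMU arch code once its init is done. */
void __init retry_lockup_detector_init(void)
{
	/* Only meaningful between lockup_detector_init() and _check(). */
	if (!allow_lockup_detector_init_retry)
		return;

	schedule_work(&detector_work);
}

/* Step 5: forbid further retries, then wait for any queued work so the
 * __init code and data above are not freed while the work still runs. */
static int __init lockup_detector_check(void)
{
	allow_lockup_detector_init_retry = false;
	flush_work(&detector_work);
	return 0;
}
late_initcall_sync(lockup_detector_check);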

[1]
https://lore.kernel.org/lkml/20211014024155.15253-1-kernelfans@gmail.com/

v7:
  rebase on v6.0-rc3

v6:
  fix build failure reported by kernel test robot <lkp@intel.com>
https://lore.kernel.org/lkml/20220614062835.7196-1-lecopzer.chen@mediatek.com/

v5:
  1. rebase on v5.19-rc2
  2. change to the proper schedule API
  3. return value checking before retry_lockup_detector_init()
https://lore.kernel.org/lkml/20220613135956.15711-1-lecopzer.chen@mediatek.com/

v4:
  1. remove the -EBUSY protocol; let any non-zero value from
     watchdog_nmi_probe() allow a retry.
  2. separate arm64 part patch into hw_nmi_get_sample_period and retry
     delayed init
  3. tweak commit msg to note that we don't have to limit to -EBUSY
  4. rebase on v5.18-rc4
https://lore.kernel.org/lkml/20220427161340.8518-1-lecopzer.chen@mediatek.com/

v3:
  1. Tweak commit message in patch 04
  2. Remove wait event
  3. s/lockup_detector_pending_init/allow_lockup_detector_init_retry/
  4. provide api retry_lockup_detector_init() 
https://lore.kernel.org/lkml/20220324141405.10835-1-lecopzer.chen@mediatek.com/ 

v2:
  1. Tweak commit message in patch 01/02/04/05 
  2. Remove verbose WARN in patch 04 within watchdog core.
  3. Change from the three-state variable detector_delay_init_state to
     the two-state variable allow_lockup_detector_init_retry

     Thanks Petr Mladek <pmladek@suse.com> for the idea.
     > 1. lockup_detector_work() called before lockup_detector_check().
     >    In this case, wait_event() will wait until lockup_detector_check()
     >    clears detector_delay_pending_init and calls wake_up().

     > 2. lockup_detector_check() called before lockup_detector_work().
     >    In this case, wait_event() will immediately continue because
     >    it will see the cleared detector_delay_pending_init.
  4. Add comments in code in patch 04/05 for the two-state variable change.
https://lore.kernel.org/lkml/20220307154729.13477-1-lecopzer.chen@mediatek.com/


Lecopzer Chen (5):
  kernel/watchdog: remove WATCHDOG_DEFAULT
  kernel/watchdog: change watchdog_nmi_enable() to void
  kernel/watchdog: Adapt the watchdog_hld interface for async model
  arm64: add hw_nmi_get_sample_period for preparation of lockup detector
  arm64: Enable perf events based hard lockup detector

Pingfan Liu (1):
  kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup
    detector event

 arch/arm64/Kconfig               |  2 +
 arch/arm64/kernel/Makefile       |  1 +
 arch/arm64/kernel/perf_event.c   | 12 +++++-
 arch/arm64/kernel/watchdog_hld.c | 39 +++++++++++++++++
 arch/sparc/kernel/nmi.c          |  8 ++--
 drivers/perf/arm_pmu.c           |  5 +++
 include/linux/nmi.h              |  4 +-
 include/linux/perf/arm_pmu.h     |  2 +
 kernel/watchdog.c                | 72 +++++++++++++++++++++++++++++---
 kernel/watchdog_hld.c            |  8 +++-
 10 files changed, 139 insertions(+), 14 deletions(-)
 create mode 100644 arch/arm64/kernel/watchdog_hld.c

-- 
2.25.1
RE: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64
Posted by Lecopzer Chen 2 years ago
Hi Will, Mark
 
Sorry for bothering you; this needs to be reviewed by the ARM perf maintainers.
Could you please help review this patchset or comment on it?

Thanks a lot.

 
> Hi Will, Mark
>
> Could you help review arm parts of this patchset, please?
> 
> For the question mention in both [1] and [2],
> 
> > I'd still like Mark's Ack on this, as the approach you have taken doesn't
> > really sit with what he was suggesting.
> >
> > I also don't understand how all the CPUs get initialised with your patch,
> > since the PMU driver will be initialised after SMP is up and running.
> 
> The hardlockup detector utilizes softlockup_start_all() to start on all the
> CPUs in watchdog_allowed_mask, which will do watchdog_nmi_enable() and
> register the perf event on each CPU.
> Thus we simply need to retry lockup_detector_init() on a single CPU, which
> will reconfigure and call softlockup_start_all().
> 
> Also, CONFIG_HARDLOCKUP_DETECTOR_PERF selects SOFTLOCKUP_DETECTOR;
> IMO, this shows that the hardlockup detector builds on top of the softlockup
> detector.
> 
> 
> > We should know whether pNMIs are possible once we've completed
> > setup_arch() (and possibly init_IRQ()), long before SMP, so I reckon
> > we should have all the information available once we get to
> > lockup_detector_init(), even if that requires some preparatory rework.
> 
> The hardlockup detector depends on the PMU driver; I think the only way is
> to move the PMU driver init into setup_arch() or some point earlier than
> lockup_detector_init(), and I guess we would have to reorganize the
> architecture of the arm PMU.
> 
> The retry function should benefit all of arch/, not only arm64.
> Any arch that needs to probe its PMU late (e.g. as a module) can use this
> without a chance of messing up the setup order.
> 
> 
> Please let me know if you have any concerns about this, thank you
> 
> 
> [1] https://lore.kernel.org/all/CAFA6WYPPgUvHCpN5=EpJ2Us5h5uVWCbBA59C-YwYQX2ovyVeEw@mail.gmail.com/
> [2] https://lore.kernel.org/linux-arm-kernel/20210419170331.GB31045@willie-the-truck/
> 
>
Re: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64
Posted by Doug Anderson 1 year, 4 months ago
Hi,

On Sat, Sep 3, 2022 at 2:35 AM Lecopzer Chen <lecopzer.chen@mediatek.com> wrote:
> [ full quote of the cover letter snipped ]

To leave some breadcrumbs here, I've included all the patches here in
my latest "buddy" hardlockup detector series. I'm hoping that the
cleanup patches that were part of your series can land as part of my
series. I'm not necessarily expecting that the arm64 perf hardlockup
detector patches will land as part of my series, though. See the cover
letter and "after-the-cut" notes on the later patches in my series for
details.

https://lore.kernel.org/r/20230504221349.1535669-1-dianders@chromium.org
Re: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for arm64
Posted by Will Deacon 1 year, 10 months ago
On Sat, Sep 03, 2022 at 05:34:09PM +0800, Lecopzer Chen wrote:
> We have been using the hld (hard lockup detector) internally for arm64 since
> 2020, but it still has no proper upstream support, and we badly need it.
> 
> This series is a rework, based on 5.17, of [1]; the original authors are
> Pingfan Liu <kernelfans@gmail.com>
> Sumit Garg <sumit.garg@linaro.org>

I'd definitely want Mark's ack on this, as he previously had suggestions
when we reverted the old broken code back in:

https://lore.kernel.org/r/20210113130235.GB19011@C02TD0UTHF1T.local

Will