On Thu, Oct 09, 2025 at 05:51:07PM -0700, Andrew Morton wrote:
> On Thu, 9 Oct 2025 18:55:36 +0800 Jinchao Wang <wangjinchao600@gmail.com> wrote:
>
> > This patch series introduces KStackWatch, a lightweight debugging tool to detect
> > kernel stack corruption in real time. It installs a hardware breakpoint
> > (watchpoint) at a function's specified offset using `kprobe.post_handler` and
> > removes it in `fprobe.exit_handler`. This covers the full execution window and
> > reports corruption immediately with time, location, and a call stack.
> >
> > The motivation comes from scenarios where corruption occurs silently in one
> > function but manifests later in another, without a direct call trace linking
> > the two. Such bugs are often extremely hard to debug with existing tools.
> > These scenarios are demonstrated in test 3–5 (silent corruption test, patch 20).
> >
> > ...
> >
> > 20 files changed, 1809 insertions(+), 62 deletions(-)
>
> It's obviously a substantial project. We need to decide whether to add
> this to Linux.
>
> There are some really important [0/N] changelog details which I'm not
> immediately seeing:
Thanks for the review and questions.
>
> Am I correct in thinking that it's x86-only? If so, what's involved in
> enabling other architectures? Is there any such work in progress?
Currently, yes.
There are two architecture-specific dependencies:
- Hardware breakpoint (HWBP) modification in atomic context.
  This is implemented for x86 in patches 1–3, and I do not expect it to
  be a major obstacle for other architectures.
- Stack canary locating mechanism, which does not work on parisc:
  - Automatic canary discovery scans from the stack base toward higher
    memory (rough sketch below).
  - This feature is optional; an explicit stack offset address can be
    provided instead.
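To illustrate the idea (this is only a rough sketch, not the code in
the series; find_canary_addr() and the exact bounds are made up here),
the scan is conceptually:

#include <linux/sched.h>
#include <linux/sched/task_stack.h>

/*
 * Illustrative only, assuming CONFIG_STACKPROTECTOR and a
 * downward-growing stack: walk from a known in-frame address toward
 * the high end of the task's stack until the per-task canary value
 * is found.
 */
static unsigned long *find_canary_addr(unsigned long *start)
{
        unsigned long *top = (unsigned long *)((char *)current->stack +
                                               THREAD_SIZE);
        unsigned long *p;

        for (p = start; p < top; p++) {
                if (*p == current->stack_canary)
                        return p;       /* candidate watch target */
        }

        return NULL;    /* not found: fall back to an explicit offset */
}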
Future work could include enabling support for other architectures such
as arm64 and riscv once their hardware breakpoint implementations allow
safe modification in atomic context. I do not currently have the
environment to test those architectures, but the framework was designed
to be generic and can be extended by contributors familiar with them.
> What motivated the work? Was there some particular class of failures
> which you were persistently seeing and wished to fix more efficiently?
>
> Has this code (or something like it) been used in production systems?
> If so, by whom and with what results?
The motivation came from silent stack corruption issues. They occur
rarely but are extremely difficult to debug. I personally encountered
two such bugs, each of which took weeks to isolate, and I know similar
issues exist in other environments. KStackWatch was developed as a
result of those debugging efforts. It has been used mainly in my own
debugging environment and verified with controlled test cases
(patches 17–21). If it had existed earlier, similar bugs could have
been resolved much faster.
>
> Has it actually found some kernel bugs yet? If so, details please.
It was designed to help diagnose bugs whose existence was already known
but whose root cause was difficult to locate. So far it has been used
in my personal environment and can be validated with controlled test
cases in patches 17–21.
>
> Can this be enabled on production systems? If so, what is the
> measured runtime overhead?
I believe it can. The overhead is summarized below.
Without watching:
- Per-task context: 2 * sizeof(unsigned long) + 4 bytes (≈20 bytes on
  x86_64; an illustrative layout is sketched after the table below)
With watching:
- Same per-task context as above
- One or more preallocated HWBPs (configurable, at least one)
- Small additional memory for managing HWBP and context state
- Runtime overhead (measured on x86_64):
  Type                | Time (ns) | Cycles
  --------------------+-----------+-------
  entry with watch    |     10892 |  32620
  entry without watch |       159 |    466
  exit with watch     |     12541 |  37556
  exit without watch  |       124 |    369
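The per-task context mentioned above corresponds to a small struct of
roughly this shape (field names here are illustrative, not the actual
layout in the series):

/*
 * Illustrative only: a per-task context of
 * 2 * sizeof(unsigned long) + 4 bytes, matching the size quoted above.
 */
struct ksw_task_ctx_example {
        unsigned long watch_addr;       /* address currently watched */
        unsigned long watch_val;        /* expected value at that address */
        u32 flags;                      /* bookkeeping/state bits */
};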
Would you prefer that I include the measurement code used to collect
these numbers in the next version of the series, or submit it
separately as an additional patch?
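For context, the measurement is conceptually along these lines
(simplified sketch, not the exact code; ksw_measure_entry() is a
made-up name):

#include <linux/ktime.h>
#include <linux/printk.h>
#include <asm/msr.h>            /* rdtsc(), x86 only */

/* Illustrative only: time one handler invocation in ns and TSC cycles. */
static void ksw_measure_entry(void (*handler)(void))
{
        u64 t0, t1, c0, c1;

        t0 = ktime_get_ns();
        c0 = rdtsc();
        handler();              /* entry/exit path under test */
        c1 = rdtsc();
        t1 = ktime_get_ns();

        pr_info("entry: %llu ns, %llu cycles\n", t1 - t0, c1 - c0);
}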
--
Jinchao