When a GHES (Generic Hardware Error Source) triggers a panic, add the
TAINT_MACHINE_CHECK taint flag to the kernel. This explicitly marks the
kernel as tainted due to a machine check event, improving diagnostics
and post-mortem analysis. The taint is set with LOCKDEP_STILL_OK to
indicate lockdep remains valid.
In large-scale deployments, this helps to quickly identify panics that
are caused by hardware failures.
Signed-off-by: Breno Leitao <leitao@debian.org>
---
drivers/acpi/apei/ghes.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index f0584ccad4519..3d44f926afe8e 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -1088,6 +1088,8 @@ static void __ghes_panic(struct ghes *ghes,
__ghes_print_estatus(KERN_EMERG, ghes->generic, estatus);
+ add_taint(TAINT_MACHINE_CHECK, LOCKDEP_STILL_OK);
+
ghes_clear_estatus(ghes, estatus, buf_paddr, fixmap_idx);
if (!panic_timeout)
---
base-commit: e96ee511c906c59b7c4e6efd9d9b33917730e000
change-id: 20250702-add_tain-902925f3eb96
Best regards,
--
Breno Leitao <leitao@debian.org>
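
As a usage sketch (not part of the submission): TAINT_MACHINE_CHECK is bit 4
of the taint mask and is printed as 'M' in the "Tainted:" line of the panic
output, so post-mortem tooling can test that bit in the tainted_mask captured
in a vmcore, or read /proc/sys/kernel/tainted on a still-running kernel for
other taints. A minimal userspace check, assuming only the standard taint bit
numbering:

    /* Illustrative only: report whether the machine-check taint is set. */
    #include <stdio.h>

    #define TAINT_MACHINE_CHECK_BIT 4   /* shown as 'M' in the taint string */

    int main(void)
    {
            unsigned long long taint = 0;
            FILE *f = fopen("/proc/sys/kernel/tainted", "r");

            if (!f || fscanf(f, "%llu", &taint) != 1) {
                    perror("/proc/sys/kernel/tainted");
                    return 1;
            }
            fclose(f);

            printf("machine-check taint %s\n",
                   (taint & (1ULL << TAINT_MACHINE_CHECK_BIT)) ? "set" : "not set");
            return 0;
    }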
On Wed, Jul 02, 2025 at 08:39:51AM -0700, Breno Leitao wrote:
> When a GHES (Generic Hardware Error Source) triggers a panic, add the
> TAINT_MACHINE_CHECK taint flag to the kernel. This explicitly marks the

While it might not strictly be a machine check that caused GHES to
panic, it seems close enough from the available TAINT options.

So unless someone feels it would be better to create a new TAINT
flag (TAINT_FATAL_GHES? TAINT_FIRMWARE_REPORTED_FATAL_ERROR?)
then this seems OK to me.

Reviewed-by: Tony Luck <tony.luck@intel.com>
On Wed, Jul 2, 2025 at 6:31 PM Luck, Tony <tony.luck@intel.com> wrote:
> On Wed, Jul 02, 2025 at 08:39:51AM -0700, Breno Leitao wrote:
> > When a GHES (Generic Hardware Error Source) triggers a panic, add the
> > TAINT_MACHINE_CHECK taint flag to the kernel. This explicitly marks the
>
> While it might not strictly be a machine check that caused GHES to
> panic, it seems close enough from the available TAINT options.
>
> So unless someone feels it would be better to create a new TAINT
> flag (TAINT_FATAL_GHES? TAINT_FIRMWARE_REPORTED_FATAL_ERROR?)
> then this seems OK to me.
>
> Reviewed-by: Tony Luck <tony.luck@intel.com>

Applied as 6.17 material, thanks!
On Wed, Jul 02, 2025 at 09:31:30AM -0700, Luck, Tony wrote:
> On Wed, Jul 02, 2025 at 08:39:51AM -0700, Breno Leitao wrote:
> > When a GHES (Generic Hardware Error Source) triggers a panic, add the
> > TAINT_MACHINE_CHECK taint flag to the kernel. This explicitly marks the
>
> While it might not strictly be a machine check that caused GHES to
> panic, it seems close enough from the available TAINT options.

Right, that was my reasoning as well. There are other cases where
TAINT_MACHINE_CHECK is set when the hardware is broken.

> So unless someone feels it would be better to create a new TAINT
> flag (TAINT_FATAL_GHES? TAINT_FIRMWARE_REPORTED_FATAL_ERROR?)
> then this seems OK to me.

Thanks. That brings up another topic. I am seeing crashes and warnings
that only happen after recoverable errors. I.e., there is a GHES
recoverable error, and then the machine crashes minutes later. A classic
example is when a PCI downstream port disappears and recovers later,
re-enumerating everything, which is simply chaotic.

I would like to be able to correlate the crash/warning with a machine
that had a recoverable error. At scale, this improves kernel monitoring
by a lot.

So, if we go toward using TAINT_FATAL_GHES, can we have two flavors?
TAINT_FATAL_GHES_RECOVERABLE and TAINT_FATAL_GHES_FATAL?

Thanks for the review,
--breno
On Wed, Jul 02, 2025 at 10:22:50AM -0700, Breno Leitao wrote:
> On Wed, Jul 02, 2025 at 09:31:30AM -0700, Luck, Tony wrote:
> > So unless someone feels it would be better to create a new TAINT
> > flag (TAINT_FATAL_GHES? TAINT_FIRMWARE_REPORTED_FATAL_ERROR?)
> > then this seems OK to me.
>
> Thanks. That brings up another topic. I am seeing crashes and warnings
> that only happen after recoverable errors. I.e., there is a GHES
> recoverable error, and then the machine crashes minutes later. A classic
> example is when a PCI downstream port disappears and recovers later,
> re-enumerating everything, which is simply chaotic.
>
> I would like to be able to correlate the crash/warning with a machine
> that had a recoverable error. At scale, this improves kernel monitoring
> by a lot.
>
> So, if we go toward using TAINT_FATAL_GHES, can we have two flavors?
> TAINT_FATAL_GHES_RECOVERABLE and TAINT_FATAL_GHES_FATAL?

Do you really want to TAINT for recoverable errors? If most errors
are successfully recovered, then a TAINT indication that a recovery
happened a week ago would be misleading.

Maybe better to save a timestamp for when the most recent recoverable
error occurred, then compare that against the current time in the panic()
path and print a warning if the recoverable error was "recent" (for
some TBD value of "recent").

-Tony
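
A rough sketch of the timestamp idea Tony describes above, with hypothetical
names (ghes_last_recoverable_jiffies, both helpers, and the ten-minute
threshold are made up for illustration, not an actual implementation):

    /* Hypothetical fragment; would live in drivers/acpi/apei/ghes.c. */
    #include <linux/jiffies.h>
    #include <linux/printk.h>

    static unsigned long ghes_last_recoverable_jiffies;   /* 0 = never seen one */

    /* Called when a recoverable GHES error is handled. */
    static void ghes_note_recoverable_error(void)
    {
            WRITE_ONCE(ghes_last_recoverable_jiffies, jiffies);
    }

    /* Called from the panic path; "recent" is an arbitrary ten minutes here. */
    static void ghes_warn_if_recent_recoverable_error(void)
    {
            unsigned long last = READ_ONCE(ghes_last_recoverable_jiffies);

            if (last && time_before(jiffies, last + 10 * 60 * HZ))
                    pr_emerg("GHES: recoverable hardware error seen %u ms before this panic\n",
                             jiffies_to_msecs(jiffies - last));
    }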
On Wed, Jul 02, 2025 at 10:54:39AM -0700, Luck, Tony wrote:
> Do you really want to TAINT for recoverable errors? If most errors
> are successfully recovered, then a TAINT indication that a recovery
> happened a week ago would be misleading.
>
> Maybe better to save a timestamp for when the most recent recoverable
> error occurred, then compare that against the current time in the panic()
> path and print a warning if the recoverable error was "recent" (for
> some TBD value of "recent").

Thanks for your insight.

I believe it would be simpler to just add support for a TAINT saying
that the hardware got an error while this kernel was booted. That is
what I would like to indicate to the user.

The user shouldn't correlate that with the crash or panic. As you said,
the hardware error could have happened weeks ago and be completely
unrelated. Tainting the kernel because a hardware error happened must
NOT imply that the kernel crashed because of that hardware error.

Something similar happens with the proprietary module taint
(PROPRIETARY_MODULE). The kernel is tainted when a proprietary module
is loaded, but that does not imply that the crash came from the
external module being loaded. It is just extra information that will
help the investigation later.

In summary, I don't think we should solve the problem of correlation
here, given it is not straightforward. I just want to tag that the
hardware got an error while the kernel was running, and the operator can
use this information the way they want.

Am I on the right track?

Thanks for the discussion,
--breno
> In summary, I don't think we should solve the problem of correlation
> here, given it is not straightforward. I just want to tag that the
> hardware got an error while the kernel was running, and the operator can
> use this information the way they want.
>
> Am I on the right track?

It seems that Rafael has just applied your patch for taint with the
machine check option. So you've got what you originally asked for.

If you want to pursue the idea of a taint for GHES warnings, then create
a new patch that does that to spark discussion.

Your case would be helped if you have some data to back up the need for
this. E.g. we have observed "X% of recovered GHES errors are followed
by a system crash within Y minutes". If you don't have hard numbers,
then at least something like "We often/sometimes see a crash shortly
after a recovered GHES error that appears related."

There are only a few unused capital letters for the taint summary:
H, Q, V, Y, Z. None are super-intuitive. Either pick one, or move into
the uncharted territory of using lower case ('g'?).

-Tony
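
For reference, a separate GHES taint would mean a new TAINT_* bit plus an
entry in the taint_flags[] table; the following is a hypothetical sketch
only, since the bit number, the chosen letter, and the exact table format
are assumptions that vary across kernel versions:

    /* include/linux/panic.h: hypothetical new flag (bit number illustrative),
     * with TAINT_FLAGS_COUNT bumped accordingly. */
    #define TAINT_FATAL_GHES	19

    /* kernel/panic.c: corresponding entry in the taint_flags[] table,
     * e.g. using 'Z', one of the unused letters listed above. */
    	[ TAINT_FATAL_GHES ] = { 'Z', ' ', false },

    /* __ghes_panic() would then call: */
    	add_taint(TAINT_FATAL_GHES, LOCKDEP_STILL_OK);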