Hi all,
I have rewritten the cover letter in the hope that the maintainers will fully
understand the necessity of this patch set. Both Alibaba and Huawei have hit
the same issue in production, and we hope it can be fixed as soon as possible.
## Changes Log
changes since v8:
- remove the bug fix tag of patch 2 (per Jarkko Sakkinen)
- remove the declaration of memory_failure_queue_kick (per Naoya Horiguchi)
- rewrite the return value comments of memory_failure (per Naoya Horiguchi)
changes since v7:
- rebase to Linux v6.6-rc2 (no code changed)
- rewritten the cover letter to explain the motivation of this patchset
changes since v6:
- add a more explicit error message, as suggested by Xiaofei
- pick up reviewed-by tag from Xiaofei
- pick up internal reviewed-by tag from Baolin
changes since v5 by addressing comments from Kefeng:
- document return value of memory_failure()
- drop redundant comments in call site of memory_failure()
- make ghes_do_proc void and handle abnormal case within it
- pick up reviewed-by tag from Kefeng Wang
changes since v4 by addressing comments from Xiaofei:
- do a force kill only for abnormal sync errors
changes since v3 by addressing comments from Xiaofei:
- do a force kill for abnormal memory failure error such as invalid PA,
unexpected severity, OOM, etc
- pick up tested-by tag from Ma Wupeng
changes since v2 by addressing comments from Naoya:
- rename mce_task_work to sync_task_work
- drop ACPI_HEST_NOTIFY_MCE case in is_hest_sync_notify()
- add steps to reproduce this problem in cover letter
changes since v1:
- synchronous events by notify type
- Link: https://lore.kernel.org/lkml/20221206153354.92394-3-xueshuai@linux.alibaba.com/
## Cover Letter
There are two major types of uncorrected recoverable (UCR) errors:
- Action Required (AR): The error is detected and the processor has already
consumed the memory. The OS is required to take action (for example, offline
the failure page / kill the failing thread) to recover from the error.
- Action Optional (AO): The error is detected outside of processor execution
context. Some data in memory is corrupted, but the data has not yet been
consumed. The OS may optionally take action to recover from the error.
The main difference between AR and AO errors is that AR errors are synchronous
events, while AO errors are asynchronous events. Synchronous exceptions, such as
Machine Check Exception (MCE) on X86 and Synchronous External Abort (SEA) on
Arm64, are signaled by the hardware when an error is detected and the memory
access has architecturally been executed.
Currently, both synchronous and asynchronous errors are queued as AO errors and
handled by a dedicated kernel thread in a work queue on the ARM64 platform. For
synchronous errors, memory_failure() is synced using a cancel_work_sync trick to
ensure that the corrupted page is unmapped and poisoned. Upon returning to
user-space, the process resumes at the current instruction, triggering a page
fault. As a result, the kernel sends a SIGBUS signal to the current process due
to VM_FAULT_HWPOISON.
However, this trick is not always effective. This patch set improves the
recovery process in three specific aspects:
1. Handle synchronous exceptions with proper si_code
ghes_handle_memory_failure() queues both synchronous and asynchronous errors
with flags=0. The kernel then notifies the process by sending a SIGBUS signal
from memory_failure() with the wrong si_code: BUS_MCEERR_AO is delivered to
the user-space process instead of BUS_MCEERR_AR. User-space processes rely on
the si_code to decide how to handle the memory failure.
For example, hwpoison-aware user-space processes use the si_code:
BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR
for 'action required' synchronous/late notifications. Specifically, when a
signal with SIGBUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA to
Guest kernel. In contrast, a signal with SIGBUS_MCEERR_AO will be ignored
by QEMU.[1]
Fix it by setting the memory failure flag MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
2. Handle abnormal memory_failure() failures to avoid an unnecessary reboot
If a process maps the faulting page but memory_failure() returns abnormally
before try_to_unmap(), for example because the faulting page is mapped as a
KSM page, then arm64 cannot rely on the page-fault path to terminate the
synchronous exception loop.[4]
This loop can potentially exceed the platform firmware threshold or even
trigger a kernel hard lockup, leading to a system reboot. However, the kernel
has the capability to recover from this error.
Fix it by performing a force kill when memory_failure() fails abnormally or
when other abnormal synchronous errors occur. These can include situations
such as an invalid PA, unexpected severity, no memory-failure config support,
an invalid GUID section, OOM, etc. (PATCH 2)
3. Handle memory_failure() in the context of the process consuming the poison
When a synchronous error occurs, memory_failure() assumes that the current
process context is exactly the one consuming the poison.
For example, kill_accessing_process() holds the mmap lock of current->mm,
walks the page tables to find the error virtual address, and sends SIGBUS with
the error info to the current process. However, the mm of a kworker is not
valid, resulting in a null-pointer dereference. I have fixed this in [3]:
commit 77677cdbc2aa ("mm,hwpoison: check mm when killing accessing process")
Another example: collect_procs()/kill_procs() walk the task list and only
collect and send SIGBUS to the tasks consuming the poison. But on the arm64
platform, memory_failure() is queued and handled by a dedicated kernel thread.
Fix it by queuing memory_failure() as a task work that runs in the current
execution context, synchronously sending SIGBUS before ret_to_user. (PATCH 2)
** In summary, this patch set handles synchronous errors in task work with
proper si_code so that hwpoison-aware process can recover from errors, and
fixes (potentially) abnormal cases. **
Lv Ying and XiuQi from Huawei have also proposed addressing a similar
problem [2][4]. Thanks to them for the discussions.
## Steps to Reproduce This Problem
To reproduce this problem:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1
# STEP2: inject a UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 5 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 5) reported by einj_mem_uc indicates a BUS_MCEERR_AO error,
which is incorrect.
After this patch set:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1
# STEP2: inject a UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 4 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 4) reported by einj_mem_uc indicates a BUS_MCEERR_AR error,
as expected.
[1] Add ARMv8 RAS virtualization support in QEMU https://patchew.org/QEMU/20200512030609.19593-1-gengdongjiu@huawei.com/
[2] https://lore.kernel.org/lkml/20221205115111.131568-3-lvying6@huawei.com/
[3] https://lkml.kernel.org/r/20220914064935.7851-1-xueshuai@linux.alibaba.com
[4] https://lore.kernel.org/lkml/20221209095407.383211-1-lvying6@huawei.com/
Shuai Xue (2):
ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on
synchronous events
ACPI: APEI: handle synchronous exceptions in task work
arch/x86/kernel/cpu/mce/core.c | 9 +--
drivers/acpi/apei/ghes.c | 113 ++++++++++++++++++++++-----------
include/acpi/ghes.h | 3 -
include/linux/mm.h | 1 -
mm/memory-failure.c | 22 ++-----
5 files changed, 82 insertions(+), 66 deletions(-)
--
2.39.3
On Sat, Oct 07, 2023 at 03:28:16PM +0800, Shuai Xue wrote:
> However, this trick is not always be effective
So far so good.
What's missing here is why "this trick" is not always effective.
Basically to explain what exactly the problem is.
> For example, hwpoison-aware user-space processes use the si_code:
> BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR
> for 'action required' synchronous/late notifications. Specifically, when a
> signal with SIGBUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA to
> Guest kernel. In contrast, a signal with SIGBUS_MCEERR_AO will be ignored
> by QEMU.[1]
>
> Fix it by seting memory failure flags as MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
So you're fixing qemu by "fixing" the kernel?
This doesn't make any sense.
Make errors which are ACPI_HEST_NOTIFY_SEA type return
MF_ACTION_REQUIRED so that it *happens* to fix your use case.
Sounds like a lot of nonsense to me.
What is the issue here you're trying to solve?
> 2. Handle memory_failure() abnormal fails to avoid a unnecessary reboot
>
> If process mapping fault page, but memory_failure() abnormal return before
> try_to_unmap(), for example, the fault page process mapping is KSM page.
> In this case, arm64 cannot use the page fault process to terminate the
> synchronous exception loop.[4]
>
> This loop can potentially exceed the platform firmware threshold or even trigger
> a kernel hard lockup, leading to a system reboot. However, kernel has the
> capability to recover from this error.
>
> Fix it by performing a force kill when memory_failure() abnormal fails or when
> other abnormal synchronous errors occur.
Just like that?
Without giving the process the opportunity to even save its other data?
So this all is still very confusing, patches definitely need splitting
and this whole thing needs restraint.
You go and do this: you split *each* issue you're addressing into
a separate patch and explain it like this:
---
1. Prepare the context for the explanation briefly.
2. Explain the problem at hand.
3. "It happens because of <...>"
4. "Fix it by doing X"
5. "(Potentially do Y)."
---
and each patch explains *exactly* *one* issue, what happens, why it
happens and just the fix for it and *why* it is needed.
Otherwise, this is unreviewable.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
On 2023/11/23 23:07, Borislav Petkov wrote:
Hi, Borislav,
Thank you for your reply and advice.
> On Sat, Oct 07, 2023 at 03:28:16PM +0800, Shuai Xue wrote:
>> However, this trick is not always be effective
>
> So far so good.
>
> What's missing here is why "this trick" is not always effective.
>
> Basically to explain what exactly the problem is.
I think the main point is that this trick is not effective for AR errors,
because:
- an AR error consumed by the current process is deferred to be handled in a
dedicated kernel thread, but memory_failure() assumes that it runs in the
current context
- the second page fault is unnecessary; we can send SIGBUS to the current
process directly upon the first Synchronous External Abort (SEA) on arm64
(the analogue of a Machine Check Exception on x86)
>
>> For example, hwpoison-aware user-space processes use the si_code:
>> BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR
>> for 'action required' synchronous/late notifications. Specifically, when a
>> signal with SIGBUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA to
>> Guest kernel. In contrast, a signal with SIGBUS_MCEERR_AO will be ignored
>> by QEMU.[1]
>>
>> Fix it by seting memory failure flags as MF_ACTION_REQUIRED on synchronous events. (PATCH 1)
>
> So you're fixing qemu by "fixing" the kernel?
>
> This doesn't make any sense.
I was just giving an example showing that user-space processes *really* rely
on the si_code of the signal to handle hardware errors.
>
> Make errors which are ACPI_HEST_NOTIFY_SEA type return
> MF_ACTION_REQUIRED so that it *happens* to fix your use case.
>
> Sounds like a lot of nonsense to me.
>
> What is the issue here you're trying to solve?
The SIGBUS si_codes defined in include/uapi/asm-generic/siginfo.h says:
/* hardware memory error consumed on a machine check: action required */
#define BUS_MCEERR_AR 4
/* hardware memory error detected in process but not consumed: action optional*/
#define BUS_MCEERR_AO 5
When a synchronous error is consumed by Guest, the kernel should send a
signal with BUS_MCEERR_AR instead of BUS_MCEERR_AO.
>
>> 2. Handle memory_failure() abnormal fails to avoid a unnecessary reboot
>>
>> If process mapping fault page, but memory_failure() abnormal return before
>> try_to_unmap(), for example, the fault page process mapping is KSM page.
>> In this case, arm64 cannot use the page fault process to terminate the
>> synchronous exception loop.[4]
>>
>> This loop can potentially exceed the platform firmware threshold or even trigger
>> a kernel hard lockup, leading to a system reboot. However, kernel has the
>> capability to recover from this error.
>>
>> Fix it by performing a force kill when memory_failure() abnormal fails or when
>> other abnormal synchronous errors occur.
>
> Just like that?
>
> Without giving the process the opportunity to even save its other data?
Exactly.
>
> So this all is still very confusing, patches definitely need splitting
> and this whole thing needs restraint.
>
> You go and do this: you split *each* issue you're addressing into
> a separate patch and explain it like this:
>
> ---
> 1. Prepare the context for the explanation briefly.
>
> 2. Explain the problem at hand.
>
> 3. "It happens because of <...>"
>
> 4. "Fix it by doing X"
>
> 5. "(Potentially do Y)."
> ---
>
> and each patch explains *exactly* *one* issue, what happens, why it
> happens and just the fix for it and *why* it is needed.
>
> Otherwise, this is unreviewable.
Thank you for your valuable suggestion, I will split the patches and
resubmit a new patch set.
>
> Thx.
>
Best Regards,
Shuai
On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
> - an AR error consumed by current process is deferred to handle in a
> dedicated kernel thread, but memory_failure() assumes that it runs in the
> current context
On x86? ARM?
Please point to the exact code flow.
> - another page fault is not unnecessary, we can send sigbus to current
> process in the first Synchronous External Abort SEA on arm64 (analogy
> Machine Check Exception on x86)
I have no clue what that means. What page fault?
> I just give an example that the user space process *really* relys on the
> si_code of signal to handle hardware errors
No, don't give examples.
Explain what the exact problem is you're seeing, in your use case, point
to the code and then state how you think it should be fixed and why.
Right now your text is "all over the place" and I have no clue what you
even want.
> The SIGBUS si_codes defined in include/uapi/asm-generic/siginfo.h says:
>
> /* hardware memory error consumed on a machine check: action required */
> #define BUS_MCEERR_AR 4
> /* hardware memory error detected in process but not consumed: action optional*/
> #define BUS_MCEERR_AO 5
>
> When a synchronous error is consumed by Guest, the kernel should send a
> signal with BUS_MCEERR_AR instead of BUS_MCEERR_AO.
Can you drop this "synchronous" bla and concentrate on the error
*severity*?
I think you want to say that there are some types of errors for which
error handling needs to happen immediately and for some reason that
doesn't happen.
Which errors are those? Types?
Why do you need them to be handled immediately?
> Exactly.
No, not exactly. Why is it ok to do that? What are the implications of
this?
Is immediate killing the right decision?
Is this ok for *every* possible kernel running out there - not only for
your use case?
And so on and so on...
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
On 2023/11/25 20:10, Borislav Petkov wrote:
Hi, Borislav,
Thank you for your reply, and sorry for the confusion. Please see my reply
inline.
Best Regards,
Shuai
> On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
>> - an AR error consumed by current process is deferred to handle in a
>> dedicated kernel thread, but memory_failure() assumes that it runs in the
>> current context
>
> On x86? ARM?
>
> Please point to the exact code flow.
An AR error consumed by the current process is deferred to be handled in a
dedicated kernel thread on the ARM platform. The AR error is handled in the
flow below:
-----------------------------------------------------------------------------
[user-space task einj_mem_uc consumed data poison, CPU 3] STEP 0
-----------------------------------------------------------------------------
[ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1
ghes_sdei_critical_callback
=> __ghes_sdei_callback
=> ghes_in_nmi_queue_one_entry // peek and read estatus
=> irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work
[ghes_sdei_critical_callback: return]
-----------------------------------------------------------------------------
[ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2
=> ghes_do_proc
=> ghes_handle_memory_failure
=> ghes_do_memory_failure
=> memory_failure_queue // put work task on current CPU
=> if (kfifo_put(&mf_cpu->fifo, entry))
schedule_work_on(smp_processor_id(), &mf_cpu->work);
=> task_work_add(current, &estatus_node->task_work, TWA_RESUME);
[ghes_proc_in_irq: return]
-----------------------------------------------------------------------------
// kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3
[memory_failure_work_func: current kworker, CPU 3]
=> memory_failure_work_func(&mf_cpu->work)
=> while kfifo_get(&mf_cpu->fifo, &entry); // until get no work
=> memory_failure(entry.pfn, entry.flags);
-----------------------------------------------------------------------------
[ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4
=> memory_failure_queue_kick
=> cancel_work_sync - waiting memory_failure_work_func finish
=> memory_failure_work_func(&mf_cpu->work)
=> kfifo_get(&mf_cpu->fifo, &entry); // no work
-----------------------------------------------------------------------------
[einj_mem_uc resumes at the same PC, triggers a page fault] STEP 5
STEP 0: A user-space task named einj_mem_uc consumes a poison. The firmware
notifies the kernel of the hardware error through SDEI
(ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() raises
an irq_work to handle the hardware error in IRQ context.
STEP 2: In IRQ context, ghes_proc_in_irq() queues the memory failure work on
the current CPU's workqueue and adds a task work to sync with the workqueue.
STEP 3: The kworker preempts the currently running thread and gets CPU 3.
memory_failure() is then processed in the kworker.
STEP 4: ghes_kick_task_work() is called as task work to ensure any queued
workqueue work has finished before returning to user space.
STEP 5: Upon returning to user space, the task einj_mem_uc resumes at the
current instruction; because the poisoned page was unmapped by
memory_failure() in step 3, a page fault is triggered.
memory_failure() assumes that it runs in the current context on both the x86
and ARM platforms.
for example:
memory_failure() in mm/memory-failure.c:
if (flags & MF_ACTION_REQUIRED) {
folio = page_folio(p);
res = kill_accessing_process(current, folio_pfn(folio), flags);
}
>
>> - another page fault is not unnecessary, we can send sigbus to current
>> process in the first Synchronous External Abort SEA on arm64 (analogy
>> Machine Check Exception on x86)
>
> I have no clue what that means. What page fault?
I mean the page fault in step 5. We can simplify the above flow by queuing
memory_failure() as a task work for AR errors directly in step 3.
>
>> I just give an example that the user space process *really* relys on the
>> si_code of signal to handle hardware errors
>
> No, don't give examples.
>
> Explain what the exact problem is you're seeing, in your use case, point
> to the code and then state how you think it should be fixed and why.
>
> Right now your text is "all over the place" and I have no clue what you
> even want.
Ok, got it. Thank you.
>
>> The SIGBUS si_codes defined in include/uapi/asm-generic/siginfo.h says:
>>
>> /* hardware memory error consumed on a machine check: action required */
>> #define BUS_MCEERR_AR 4
>> /* hardware memory error detected in process but not consumed: action optional*/
>> #define BUS_MCEERR_AO 5
>>
>> When a synchronous error is consumed by Guest, the kernel should send a
>> signal with BUS_MCEERR_AR instead of BUS_MCEERR_AO.
>
> Can you drop this "synchronous" bla and concentrate on the error
> *severity*?
>
> I think you want to say that there are some types of errors for which
> error handling needs to happen immediately and for some reason that
> doesn't happen.
>
> Which errors are those? Types?
>
> Why do you need them to be handled immediately?
Well, the severity classifications defined on the x86 and ARM platforms are
quite different. I guess you mean the taxonomy of producer error types.
- X86: Software recoverable action required (SRAR)
A UCR error that *requires* system software to take a recovery action on
this processor *before scheduling another stream of execution on this
processor*.
(15.6.3 UCR Error Classification in Intel® 64 and IA-32 Architectures
Software Developer’s Manual Volume 3)
- ARM: Recoverable state (UER)
The PE determines that software *must* take action to locate and repair
the error to successfully recover execution. This might be because the
exception was taken before the error was architecturally consumed by
the PE, at the point when the PE was not able to make correct
progress without either consuming the error or *otherwise making the
state of the PE unrecoverable*.
(2.3.2 PE error state classification in Arm RAS Supplement
https://documentation-service.arm.com/static/63185614f72fad1903828eda)
I think above two types of error need to be handled immediately.
>
>> Exactly.
>
> No, not exactly. Why is it ok to do that? What are the implications of
> this?
>
> Is immediate killing the right decision?
>
> Is this ok for *every* possible kernel running out there - not only for
> your use case?
>
> And so on and so on...
>
I don't have a clear answer here. I guess the poisoned data only affects the
user-space task that triggered the exception, so a panic is not necessary.
On the x86 platform, the current error handling of memory_failure() in
kill_me_maybe() simply sends a SIGBUS forcibly:
kill_me_maybe():
ret = memory_failure(pfn, flags);
if (ret == -EHWPOISON || ret == -EOPNOTSUPP)
return;
pr_err("Memory error not recovered");
kill_me_now(cb);
Do you have any comments or suggestions about this? I do not change the x86
behavior.
On the arm64 platform, in step 3 of the flow above, memory_failure_work_func(),
the call site of memory_failure(), does not handle the return code of
memory_failure(). I just add the same behavior.
Moving James to To:
On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
> > On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
> >> - an AR error consumed by current process is deferred to handle in a
> >> dedicated kernel thread, but memory_failure() assumes that it runs in the
> >> current context
> >
> > On x86? ARM?
> >
> > Please point to the exact code flow.
>
> An AR error consumed by current process is deferred to handle in a
> dedicated kernel thread on ARM platform. The AR error is handled in bellow
> flow:
>
> -----------------------------------------------------------------------------
> [usr space task einj_mem_uc consumd data poison, CPU 3] STEP 0
>
> -----------------------------------------------------------------------------
> [ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1
> ghes_sdei_critical_callback
> => __ghes_sdei_callback
> => ghes_in_nmi_queue_one_entry // peak and read estatus
> => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work
> [ghes_sdei_critical_callback: return]
> -----------------------------------------------------------------------------
> [ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2
> => ghes_do_proc
> => ghes_handle_memory_failure
> => ghes_do_memory_failure
> => memory_failure_queue // put work task on current CPU
> => if (kfifo_put(&mf_cpu->fifo, entry))
> schedule_work_on(smp_processor_id(), &mf_cpu->work);
> => task_work_add(current, &estatus_node->task_work, TWA_RESUME);
> [ghes_proc_in_irq: return]
> -----------------------------------------------------------------------------
> // kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3
> [memory_failure_work_func: current kworker, CPU 3]
> => memory_failure_work_func(&mf_cpu->work)
> => while kfifo_get(&mf_cpu->fifo, &entry); // until get no work
> => memory_failure(entry.pfn, entry.flags);
From the comment above that function:
* The function is primarily of use for corruptions that
* happen outside the current execution context (e.g. when
* detected by a background scrubber)
*
* Must run in process context (e.g. a work queue) with interrupts
* enabled and no spinlocks held.
> -----------------------------------------------------------------------------
> [ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4
> => memory_failure_queue_kick
> => cancel_work_sync - waiting memory_failure_work_func finish
> => memory_failure_work_func(&mf_cpu->work)
> => kfifo_get(&mf_cpu->fifo, &entry); // no work
> -----------------------------------------------------------------------------
> [einj_mem_uc resume at the same PC, trigger a page fault STEP 5
>
> STEP 0: A user space task, named einj_mem_uc consume a poison. The firmware
> notifies hardware error to kernel through is SDEI
> (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
>
> STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() rasie
> a irq_work to handle hardware errors in IRQ context
>
> STEP2: In IRQ context, ghes_proc_in_irq() queues memory failure work on
> current CPU in workqueue and add task work to sync with the workqueue.
>
> STEP3: The kworker preempts the current running thread and get CPU 3. Then
> memory_failure() is processed in kworker.
See above.
> STEP4: ghes_kick_task_work() is called as task_work to ensure any queued
> workqueue has been done before returning to user-space.
>
> STEP5: Upon returning to user-space, the task einj_mem_uc resumes at the
> current instruction, because the poison page is unmapped by
> memory_failure() in step 3, so a page fault will be triggered.
>
> memory_failure() assumes that it runs in the current context on both x86
> and ARM platform.
>
>
> for example:
> memory_failure() in mm/memory-failure.c:
>
> if (flags & MF_ACTION_REQUIRED) {
> folio = page_folio(p);
> res = kill_accessing_process(current, folio_pfn(folio), flags);
> }
And?
Do you see the check above it?
if (TestSetPageHWPoison(p)) {
test_and_set_bit() returns true only when the page was poisoned already.
* This function is intended to handle "Action Required" MCEs on already
* hardware poisoned pages. They could happen, for example, when
* memory_failure() failed to unmap the error page at the first call, or
* when multiple local machine checks happened on different CPUs.
And that's kill_accessing_process().
So AFAIU, the kworker running memory_failure() would only mark the page
as poison.
The killing happens when memory_failure() runs again and the process
touches the page again.
But I'd let James confirm here.
I still don't know what you're fixing here.
Is this something you're encountering on some machine or you simply
stared at code?
What does that
"Both Alibaba and Huawei met the same issue in products, and we hope it
could be fixed ASAP."
mean?
What did you meet?
What was the problem?
I still note that you're avoiding answering the question what the issue
is and if you keep avoiding it, I'll ignore this whole thread.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
Hi Boris, Shuai,
On 29/11/2023 18:54, Borislav Petkov wrote:
> On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
>>> On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
>>>> - an AR error consumed by current process is deferred to handle in a
>>>> dedicated kernel thread, but memory_failure() assumes that it runs in the
>>>> current context
>>>
>>> On x86? ARM?
>>>
>>> Please point to the exact code flow.
>> An AR error consumed by current process is deferred to handle in a
>> dedicated kernel thread on ARM platform. The AR error is handled in bellow
>> flow:
Please don't think of errors as "action required" - that's a user-space signal code. If
the page could be fixed by memory-failure(), you may never get a signal. (all this was the
fix for always sending an action-required signal)
I assume you mean the CPU accessed a poisoned location and took a synchronous error.
>> -----------------------------------------------------------------------------
>> [usr space task einj_mem_uc consumd data poison, CPU 3] STEP 0
>>
>> -----------------------------------------------------------------------------
>> [ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1
>> ghes_sdei_critical_callback
>> => __ghes_sdei_callback
>> => ghes_in_nmi_queue_one_entry // peak and read estatus
>> => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work
>> [ghes_sdei_critical_callback: return]
>> -----------------------------------------------------------------------------
>> [ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2
>> => ghes_do_proc
>> => ghes_handle_memory_failure
>> => ghes_do_memory_failure
>> => memory_failure_queue // put work task on current CPU
>> => if (kfifo_put(&mf_cpu->fifo, entry))
>> schedule_work_on(smp_processor_id(), &mf_cpu->work);
>> => task_work_add(current, &estatus_node->task_work, TWA_RESUME);
>> [ghes_proc_in_irq: return]
>> -----------------------------------------------------------------------------
>> // kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3
>> [memory_failure_work_func: current kworker, CPU 3]
>> => memory_failure_work_func(&mf_cpu->work)
>> => while kfifo_get(&mf_cpu->fifo, &entry); // until get no work
>> => memory_failure(entry.pfn, entry.flags);
>
> From the comment above that function:
>
> * The function is primarily of use for corruptions that
> * happen outside the current execution context (e.g. when
> * detected by a background scrubber)
> *
> * Must run in process context (e.g. a work queue) with interrupts
> * enabled and no spinlocks held.
>
>> -----------------------------------------------------------------------------
>> [ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4
>> => memory_failure_queue_kick
>> => cancel_work_sync - waiting memory_failure_work_func finish
>> => memory_failure_work_func(&mf_cpu->work)
>> => kfifo_get(&mf_cpu->fifo, &entry); // no work
>> -----------------------------------------------------------------------------
>> [einj_mem_uc resume at the same PC, trigger a page fault STEP 5
>>
>> STEP 0: A user space task, named einj_mem_uc consume a poison. The firmware
>> notifies hardware error to kernel through is SDEI
>> (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
>>
>> STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() rasie
>> a irq_work to handle hardware errors in IRQ context
>>
>> STEP2: In IRQ context, ghes_proc_in_irq() queues memory failure work on
>> current CPU in workqueue and add task work to sync with the workqueue.
>>
>> STEP3: The kworker preempts the current running thread and get CPU 3. Then
>> memory_failure() is processed in kworker.
>
> See above.
>
>> STEP 4: ghes_kick_task_work() is called as task work to ensure any queued
>> work has been done before returning to user-space.
>>
>> STEP 5: Upon returning to user-space, the task einj_mem_uc resumes at the
>> current instruction; because the poisoned page was unmapped by
>> memory_failure() in step 3, a page fault is triggered.
>>
>> memory_failure() assumes that it runs in the current context on both the x86
>> and ARM platforms.
>>
>>
>> for example:
>> memory_failure() in mm/memory-failure.c:
>>
>> if (flags & MF_ACTION_REQUIRED) {
>> folio = page_folio(p);
>> res = kill_accessing_process(current, folio_pfn(folio), flags);
>> }
>
> And?
>
> Do you see the check above it?
>
> if (TestSetPageHWPoison(p)) {
>
> test_and_set_bit() returns true only when the page was poisoned already.
>
> * This function is intended to handle "Action Required" MCEs on already
> * hardware poisoned pages. They could happen, for example, when
> * memory_failure() failed to unmap the error page at the first call, or
> * when multiple local machine checks happened on different CPUs.
>
> And that's kill_accessing_process().
>
> So AFAIU, the kworker running memory_failure() would only mark the page
> as poison.
>
> The killing happens when memory_failure() runs again and the process
> touches the page again.
>
> But I'd let James confirm here.
Yes, this is what is expected to happen with the existing code.
The first pass will remove the page from all processes that have it mapped before this
user-space task can restart. Restarting the task will make it access a poisoned page,
kicking off the second pass, which delivers the signal.
The reason for two passes is send_sig_mceerr() likes to clear_siginfo(), so even if you
queued action-required before leaving GHES, memory-failure() would stomp on it.
> I still don't know what you're fixing here.
The problem is if the user-space process registered for early messages, it gets a signal
on the first pass. If it returns from that signal, it will access the poisoned page and
get the action-required signal.
How is this making Qemu go wrong?
As to how this works for you given Boris' comments above: kill_procs() is also called from
hwpoison_user_mappings(), which takes the flags given to memory-failure(). This is where
the action-optional signals come from.
Thanks,
James
On 2023/12/1 01:39, James Morse wrote:
> Hi Boris, Shuai,
>
> On 29/11/2023 18:54, Borislav Petkov wrote:
>> On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
>>>> On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
>>>>> - an AR error consumed by the current process is deferred to be handled in a
>>>>> dedicated kernel thread, but memory_failure() assumes that it runs in the
>>>>> current context
>>>>
>>>> On x86? ARM?
>>>>
>>>> Please point to the exact code flow.
>
>
>>> An AR error consumed by the current process is deferred to be handled in a
>>> dedicated kernel thread on the ARM platform. The AR error is handled in the
>>> below flow:
>
> Please don't think of errors as "action required" - that's a user-space signal code. If
> the page could be fixed by memory-failure(), you may never get a signal. (all this was the
> fix for always sending an action-required signal)
>
> I assume you mean the CPU accessed a poisoned location and took a synchronous error.
Yes, I mean that CPU accessed a poisoned location and took a synchronous error.
>
>
>>> -----------------------------------------------------------------------------
>>> [user space task einj_mem_uc consumed data poison, CPU 3] STEP 0
>>>
>>> -----------------------------------------------------------------------------
>>> [ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1
>>> ghes_sdei_critical_callback
>>> => __ghes_sdei_callback
>>> => ghes_in_nmi_queue_one_entry // peek and read estatus
>>> => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work
>>> [ghes_sdei_critical_callback: return]
>>> -----------------------------------------------------------------------------
>>> [ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2
>>> => ghes_do_proc
>>> => ghes_handle_memory_failure
>>> => ghes_do_memory_failure
>>> => memory_failure_queue // put work task on current CPU
>>> => if (kfifo_put(&mf_cpu->fifo, entry))
>>> schedule_work_on(smp_processor_id(), &mf_cpu->work);
>>> => task_work_add(current, &estatus_node->task_work, TWA_RESUME);
>>> [ghes_proc_in_irq: return]
>>> -----------------------------------------------------------------------------
>>> // kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3
>>> [memory_failure_work_func: current kworker, CPU 3]
>>> => memory_failure_work_func(&mf_cpu->work)
>>> => while kfifo_get(&mf_cpu->fifo, &entry); // until get no work
>>> => memory_failure(entry.pfn, entry.flags);
>>
>> From the comment above that function:
>>
>> * The function is primarily of use for corruptions that
>> * happen outside the current execution context (e.g. when
>> * detected by a background scrubber)
>> *
>> * Must run in process context (e.g. a work queue) with interrupts
>> * enabled and no spinlocks held.
>>
>>> -----------------------------------------------------------------------------
>>> [ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4
>>> => memory_failure_queue_kick
>>> => cancel_work_sync - waiting memory_failure_work_func finish
>>> => memory_failure_work_func(&mf_cpu->work)
>>> => kfifo_get(&mf_cpu->fifo, &entry); // no work
>>> -----------------------------------------------------------------------------
>>> [einj_mem_uc resume at the same PC, trigger a page fault STEP 5
>>>
>>> STEP 0: A user-space task named einj_mem_uc consumes poisoned data. The
>>> firmware notifies the kernel of the hardware error through SDEI
>>> (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
>>>
>>> STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() raises
>>> an irq_work to handle the hardware error in IRQ context.
>>>
>>> STEP 2: In IRQ context, ghes_proc_in_irq() queues memory failure work on the
>>> current CPU's workqueue and adds a task work to sync with the workqueue.
>>>
>>> STEP 3: The kworker preempts the currently running thread and gets CPU 3.
>>> memory_failure() is then processed in the kworker.
>>
>> See above.
>>
>>> STEP 4: ghes_kick_task_work() is called as task work to ensure any queued
>>> work has been done before returning to user-space.
>>>
>>> STEP 5: Upon returning to user-space, the task einj_mem_uc resumes at the
>>> current instruction; because the poisoned page was unmapped by
>>> memory_failure() in step 3, a page fault is triggered.
>>>
>>> memory_failure() assumes that it runs in the current context on both the x86
>>> and ARM platforms.
>>>
>>>
>>> for example:
>>> memory_failure() in mm/memory-failure.c:
>>>
>>> if (flags & MF_ACTION_REQUIRED) {
>>> folio = page_folio(p);
>>> res = kill_accessing_process(current, folio_pfn(folio), flags);
>>> }
>>
>> And?
>>
>> Do you see the check above it?
>>
>> if (TestSetPageHWPoison(p)) {
>>
>> test_and_set_bit() returns true only when the page was poisoned already.
>>
>> * This function is intended to handle "Action Required" MCEs on already
>> * hardware poisoned pages. They could happen, for example, when
>> * memory_failure() failed to unmap the error page at the first call, or
>> * when multiple local machine checks happened on different CPUs.
>>
>> And that's kill_accessing_process().
>>
>> So AFAIU, the kworker running memory_failure() would only mark the page
>> as poison.
>>
>> The killing happens when memory_failure() runs again and the process
>> touches the page again.
>>
>> But I'd let James confirm here.
>
> Yes, this is what is expected to happen with the existing code.
>
> The first pass will remove the pages from all processes that have it mapped before this
> user-space task can restart. Restarting the task will make it access a poisoned page,
> kicking off the second pass, which delivers the signal.
>
> The reason for two passes is send_sig_mceerr() likes to clear_siginfo(), so even if you
> queued action-required before leaving GHES, memory-failure() would stomp on it.
>
>
>> I still don't know what you're fixing here.
>
> The problem is if the user-space process registered for early messages, it gets a signal
> on the first pass. If it returns from that signal, it will access the poisoned page and
> get the action-required signal.
>
> How is this making Qemu go wrong?
The problem here is that we have to assume the first pass of memory failure
handling unmaps the poisoned page successfully.
- If so, the second-pass action-required signal may work because the task
  accesses an unmapped page. But IMHO, we can improve this by sending the
  signal in just one pass, so that the Guest will vmexit only once, right?
- If not, there is no second-pass signal. The existing code does not handle
  the error code from memory_failure(), so an exception loop happens,
  resulting in a hard-lockup panic.
Besides, in a production environment, a second access to an already-known
poisoned page introduces more risk of error propagation.
>
>
> As to how this works for you given Boris' comments above: kill_procs() is also called from
> hwpoison_user_mappings(), which takes the flags given to memory-failure(). This is where
> the action-optional signals come from.
>
>
Thank you very much for getting involved to review and comment.
Best Regards,
Shuai
On 2023/11/30 02:54, Borislav Petkov wrote:
> Moving James to To:
>
> On Sun, Nov 26, 2023 at 08:25:38PM +0800, Shuai Xue wrote:
>>> On Sat, Nov 25, 2023 at 02:44:52PM +0800, Shuai Xue wrote:
>>>> - an AR error consumed by the current process is deferred to be handled in a
>>>> dedicated kernel thread, but memory_failure() assumes that it runs in the
>>>> current context
>>>
>>> On x86? ARM?
>>>
>>> Please point to the exact code flow.
>>
>> An AR error consumed by the current process is deferred to be handled in a
>> dedicated kernel thread on the ARM platform. The AR error is handled in the
>> below flow:
>>
>> -----------------------------------------------------------------------------
>> [user space task einj_mem_uc consumed data poison, CPU 3] STEP 0
>>
>> -----------------------------------------------------------------------------
>> [ghes_sdei_critical_callback: current einj_mem_uc, CPU 3] STEP 1
>> ghes_sdei_critical_callback
>> => __ghes_sdei_callback
>> => ghes_in_nmi_queue_one_entry // peek and read estatus
>> => irq_work_queue(&ghes_proc_irq_work) <=> ghes_proc_in_irq // irq_work
>> [ghes_sdei_critical_callback: return]
>> -----------------------------------------------------------------------------
>> [ghes_proc_in_irq: current einj_mem_uc, CPU 3] STEP 2
>> => ghes_do_proc
>> => ghes_handle_memory_failure
>> => ghes_do_memory_failure
>> => memory_failure_queue // put work task on current CPU
>> => if (kfifo_put(&mf_cpu->fifo, entry))
>> schedule_work_on(smp_processor_id(), &mf_cpu->work);
>> => task_work_add(current, &estatus_node->task_work, TWA_RESUME);
>> [ghes_proc_in_irq: return]
>> -----------------------------------------------------------------------------
>> // kworker preempts einj_mem_uc on CPU 3 due to RESCHED flag STEP 3
>> [memory_failure_work_func: current kworker, CPU 3]
>> => memory_failure_work_func(&mf_cpu->work)
>> => while kfifo_get(&mf_cpu->fifo, &entry); // until get no work
>> => memory_failure(entry.pfn, entry.flags);
>
> From the comment above that function:
>
> * The function is primarily of use for corruptions that
> * happen outside the current execution context (e.g. when
> * detected by a background scrubber)
> *
> * Must run in process context (e.g. a work queue) with interrupts
> * enabled and no spinlocks held.
Hi, Borislav,
Thank you for your comments.
But we are talking about an Action Required error; it does happen *inside the
current execution context*. The Action Required error does not match the
function's comment.
>
>> -----------------------------------------------------------------------------
>> [ghes_kick_task_work: current einj_mem_uc, other cpu] STEP 4
>> => memory_failure_queue_kick
>> => cancel_work_sync - waiting memory_failure_work_func finish
>> => memory_failure_work_func(&mf_cpu->work)
>> => kfifo_get(&mf_cpu->fifo, &entry); // no work
>> -----------------------------------------------------------------------------
>> [einj_mem_uc resume at the same PC, trigger a page fault STEP 5
>>
>> STEP 0: A user-space task named einj_mem_uc consumes poisoned data. The
>> firmware notifies the kernel of the hardware error through SDEI
>> (ACPI_HEST_NOTIFY_SOFTWARE_DELEGATED).
>>
>> STEP 1: The swapper running on CPU 3 is interrupted. irq_work_queue() raises
>> an irq_work to handle the hardware error in IRQ context.
>>
>> STEP 2: In IRQ context, ghes_proc_in_irq() queues memory failure work on the
>> current CPU's workqueue and adds a task work to sync with the workqueue.
>>
>> STEP 3: The kworker preempts the currently running thread and gets CPU 3.
>> memory_failure() is then processed in the kworker.
>
> See above.
>
>> STEP 4: ghes_kick_task_work() is called as task work to ensure any queued
>> work has been done before returning to user-space.
>>
>> STEP 5: Upon returning to user-space, the task einj_mem_uc resumes at the
>> current instruction; because the poisoned page was unmapped by
>> memory_failure() in step 3, a page fault is triggered.
>>
>> memory_failure() assumes that it runs in the current context on both the x86
>> and ARM platforms.
>>
>>
>> for example:
>> memory_failure() in mm/memory-failure.c:
>>
>> if (flags & MF_ACTION_REQUIRED) {
>> folio = page_folio(p);
>> res = kill_accessing_process(current, folio_pfn(folio), flags);
>> }
>
> And?
>
> Do you see the check above it?
>
> if (TestSetPageHWPoison(p)) {
>
> test_and_set_bit() returns true only when the page was poisoned already.
>
> * This function is intended to handle "Action Required" MCEs on already
> * hardware poisoned pages. They could happen, for example, when
> * memory_failure() failed to unmap the error page at the first call, or
> * when multiple local machine checks happened on different CPUs.
>
> And that's kill_accessing_process().
>
> So AFAIU, the kworker running memory_failure() would only mark the page
> as poison.
>
> The killing happens when memory_failure() runs again and the process
> touches the page again.
When an Action Required error occurs, it triggers an MCE-like exception
(SEA). In the first call of memory_failure(), the page is poisoned. If it
fails to unmap the error page, the user-space task resumes at the current
PC and triggers another SEA exception; the second call of memory_failure()
then runs into kill_accessing_process(), which does nothing and just
returns -EFAULT. As a result, a third SEA exception is triggered. Finally,
an exception loop happens, resulting in a hard-lockup panic.
>
> But I'd let James confirm here.
>
>
> I still don't know what you're fixing here.
On the ARM64 platform, when an Action Required error occurs, the kernel should
send SIGBUS with si_code BUS_MCEERR_AR instead of BUS_MCEERR_AO. (This is
also the subject of this thread.)
>
> Is this something you're encountering on some machine or you simply
> stared at code?
I met the wrong si_code problem on a Yitian 710 machine, which is based on the
ARM64 platform. And I think it is general on the ARM64 platform.
To reproduce this problem:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 5 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 5) from einj_mem_uc indicates that it is a BUS_MCEERR_AO
error, which is not the fact.
After this patch set:
# STEP1: enable early kill mode
#sysctl -w vm.memory_failure_early_kill=1
vm.memory_failure_early_kill = 1
# STEP2: inject an UCE error and consume it to trigger a synchronous error
#einj_mem_uc single
0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
injecting ...
triggering ...
signal 7 code 4 addr 0xffffb0d75000
page not present
Test passed
The si_code (code 4) from einj_mem_uc indicates that it is a BUS_MCEERR_AR
error, as we expected.
>
> What does that
>
> "Both Alibaba and Huawei met the same issue in products, and we hope it
> could be fixed ASAP."
>
> mean?
>
> What did you meet?
>
> What was the problem?
We both got the wrong si_code of SIGBUS from the kernel side on the ARM64
platform. The VMM in our product relies on the si_code of SIGBUS to handle
memory failure in userspace.
- For BUS_MCEERR_AO, we regard the corruption as happening *outside the
  current execution context*, e.g. detected by a background scrubber; the
  VMM will ignore the error and the VM will not be killed immediately.
- For BUS_MCEERR_AR, we regard the corruption as happening *inside the
  current execution context*, e.g. when data poison is consumed; the VMM
  will kill the VM immediately to avoid any further potential data
  propagation.
>
> I still note that you're avoiding answering the question what the issue
> is and if you keep avoiding it, I'll ignore this whole thread.
>
Sorry, Borislav, thank you for your patience and time. I really appreciate
that you are involved in reviewing this patchset. But I have to say it is
not true that I am avoiding anything. I tried my best to answer every comment
you raised and to give the details of the ARM RAS specifics and code flow.
Best Regards,
Shuai
FTR, this is starting to make sense, thanks for explaining.
Replying only to this one for now:
On Thu, Nov 30, 2023 at 10:58:53AM +0800, Shuai Xue wrote:
> To reproduce this problem:
>
> # STEP1: enable early kill mode
> #sysctl -w vm.memory_failure_early_kill=1
> vm.memory_failure_early_kill = 1
>
> # STEP2: inject an UCE error and consume it to trigger a synchronous error
So this is for ARM folks to deal with, BUT:
A consumed uncorrectable error on x86 means panic. On some hw like on
AMD, that error doesn't even get seen by the OS but the hw does
something called syncflood to prevent further error propagation. So
there's no action required - the hw does that.
But I'd like to hear from ARM folks whether consuming an uncorrectable
error even lets software run. Dunno.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
Hi Boris,

On 30/11/2023 14:40, Borislav Petkov wrote:
> FTR, this is starting to make sense, thanks for explaining.
>
> Replying only to this one for now:
>
> On Thu, Nov 30, 2023 at 10:58:53AM +0800, Shuai Xue wrote:
>> To reproduce this problem:
>>
>> # STEP1: enable early kill mode
>> #sysctl -w vm.memory_failure_early_kill=1
>> vm.memory_failure_early_kill = 1
>>
>> # STEP2: inject an UCE error and consume it to trigger a synchronous error
>
> So this is for ARM folks to deal with, BUT:
>
> A consumed uncorrectable error on x86 means panic. On some hw like on
> AMD, that error doesn't even get seen by the OS but the hw does
> something called syncflood to prevent further error propagation. So
> there's no action required - the hw does that.
>
> But I'd like to hear from ARM folks whether consuming an uncorrectable
> error even lets software run. Dunno.

I think we mean different things by 'consume' here.

I'd assume Shuai's test is poisoning a cache-line. When the CPU tries to access
that cache-line it will get an 'external abort' signal back from the memory
system. Shuai - is this what you mean by 'consume' - the CPU received an
external abort from the poisoned cache line?

It's then up to the CPU whether it can put the world back in order to take this
as a synchronous-external-abort or asynchronous-external-abort, which for arm64
are two different interrupt/exception types. The synchronous exceptions can't
be masked, but the asynchronous one can.

If by the time the asynchronous-external-abort interrupt/exception has been
unmasked the CPU has used the poisoned value in some calculation (which is what
we usually mean by consume) which has resulted in a memory access, it will
report the error as 'uncontained' because the error has been silently
propagated. APEI should always report those as 'fatal', and there is little
point getting the OS involved at this point. Also in this category are things
like 'tag ram corruption', where you can no longer trust anything about memory.

Everything in this thread is about synchronous errors where this can't happen.
The CPU stops and takes an interrupt/exception instead.

Thanks,

James
On 2023/12/1 01:43, James Morse wrote:
> Hi Boris,
>
> On 30/11/2023 14:40, Borislav Petkov wrote:
>> FTR, this is starting to make sense, thanks for explaining.
>>
>> Replying only to this one for now:
>>
>> On Thu, Nov 30, 2023 at 10:58:53AM +0800, Shuai Xue wrote:
>>> To reproduce this problem:
>>>
>>> # STEP1: enable early kill mode
>>> #sysctl -w vm.memory_failure_early_kill=1
>>> vm.memory_failure_early_kill = 1
>>>
>>> # STEP2: inject an UCE error and consume it to trigger a synchronous error
>>
>> So this is for ARM folks to deal with, BUT:
>>
>> A consumed uncorrectable error on x86 means panic. On some hw like on
>> AMD, that error doesn't even get seen by the OS but the hw does
>> something called syncflood to prevent further error propagation. So
>> there's no any action required - the hw does that.

The "consume" is from the application's point of view, e.g. a memory read. If
poison is enabled, then an SRAR error will be detected and an MCE raised at the
point of the consumption in the execution flow. A generic Intel x86 hw behaves
like below:

1. UE error injected at a known Physical Address (by einj_mem_uc through the
   EINJ interface).
2. Core issues a memory read to the same Physical Address (by a single memory
   read).
3. iMC detects the error.
4. HA logs the UCA error and signals CMCI if enabled.
5. HA forwards data with the poison indication bit set.
6. CBo detects the poison data. Does not log any error.
7. MLC detects the poison data.
8. DCU detects the poison data, logs an SRAR error and triggers MCERR if
   recoverable.
9. OS/VMM takes the corresponding recovery action based on the affected state.

In our example:
- step 2 is triggered by a single memory read.
- step 8: UCR errors detected on data load, MCACOD 134H, triggering MCERR
- step 9: the kernel is expected to send SIGBUS with si_code BUS_MCEERR_AR
  (code 4)

I also ran the same test on AMD EPYC platforms, e.g. Milan, Genoa, which behave
the same as Intel Xeon platforms, e.g. Icelake, SPR.

The ARMv8.2 RAS extension supports a similar data poison mechanism; a
Synchronous External Abort on arm64 (analogous to a Machine Check Exception on
x86) will be triggered in step 8. See James' comments for details. But the
kernel sends SIGBUS with si_code BUS_MCEERR_AO (code 5), tested on Alibaba
Yitian 710 and Huawei Kunpeng 920.

>> But I'd like to hear from ARM folks whether consuming an uncorrectable
>> error even lets software run. Dunno.
>
> I think we mean different things by 'consume' here.
>
> I'd assume Shuai's test is poisoning a cache-line. When the CPU tries to
> access that cache-line it will get an 'external abort' signal back from the
> memory system. Shuai - is this what you mean by 'consume' - the CPU received
> external abort from the poisoned cache line?

Yes, exactly. Thank you for pointing it out. We are talking about synchronous
errors.

> It's then up to the CPU whether it can put the world back in order to take
> this as synchronous-external-abort or asynchronous-external-abort, which for
> arm64 are two different interrupt/exception types.
> The synchronous exceptions can't be masked, but the asynchronous one can.
> If by the time the asynchronous-external-abort interrupt/exception has been
> unmasked, the CPU has used the poisoned value in some calculation (which is
> what we usually mean by consume) which has resulted in a memory access - it
> will report the error as 'uncontained' because the error has been silently
> propagated. APEI should always report those as 'fatal', and there is little
> point getting the OS involved at this point. Also in this category are
> things like 'tag ram corruption', where you can no longer trust anything
> about memory.
>
> Everything in this thread is about synchronous errors where this can't
> happen. The CPU stops and takes an interrupt/exception instead.

Thank you for explaining.

Best Regards,
Shuai
Hi, ALL,

Gentle ping.

Best Regards,
Shuai

On 2023/10/7 15:28, Shuai Xue wrote:
> Hi, ALL,
>
> I have rewritten the cover letter with the hope that the maintainer will truly
> understand the necessity of this patch. Both Alibaba and Huawei met the same
> issue in products, and we hope it could be fixed ASAP.
>
> ## Changes Log
>
> changes since v8:
> - remove the bug fix tag of patch 2 (per Jarkko Sakkinen)
> - remove the declaration of memory_failure_queue_kick (per Naoya Horiguchi)
> - rewrite the return value comments of memory_failure (per Naoya Horiguchi)
>
> changes since v7:
> - rebase to Linux v6.6-rc2 (no code changed)
> - rewritten the cover letter to explain the motivation of this patchset
>
> changes since v6:
> - add more explicit error messages suggested by Xiaofei
> - pick up reviewed-by tag from Xiaofei
> - pick up internal reviewed-by tag from Baolin
>
> changes since v5 by addressing comments from Kefeng:
> - document return value of memory_failure()
> - drop redundant comments in call site of memory_failure()
> - make ghes_do_proc void and handle abnormal case within it
> - pick up reviewed-by tag from Kefeng Wang
>
> changes since v4 by addressing comments from Xiaofei:
> - do a force kill only for abnormal sync errors
>
> changes since v3 by addressing comments from Xiaofei:
> - do a force kill for abnormal memory failure errors such as invalid PA,
>   unexpected severity, OOM, etc.
> - pick up tested-by tag from Ma Wupeng
>
> changes since v2 by addressing comments from Naoya:
> - rename mce_task_work to sync_task_work
> - drop ACPI_HEST_NOTIFY_MCE case in is_hest_sync_notify()
> - add steps to reproduce this problem in cover letter
>
> changes since v1:
> - synchronous events by notify type
> - Link: https://lore.kernel.org/lkml/20221206153354.92394-3-xueshuai@linux.alibaba.com/
>
> ## Cover Letter
>
> There are two major types of uncorrected recoverable (UCR) errors:
>
> - Action Required (AR): The error is detected and the processor has already
>   consumed the memory. The OS is required to take action (for example,
>   offline the failure page / kill the failure thread) to recover this error.
>
> - Action Optional (AO): The error is detected out of processor execution
>   context. Some data in the memory is corrupted, but the data has not
>   been consumed. The OS may optionally take action to recover this error.
>
> The main difference between AR and AO errors is that AR errors are synchronous
> events, while AO errors are asynchronous events. Synchronous exceptions, such as
> Machine Check Exception (MCE) on x86 and Synchronous External Abort (SEA) on
> Arm64, are signaled by the hardware when an error is detected and the memory
> access has architecturally been executed.
>
> Currently, both synchronous and asynchronous errors are queued as AO errors and
> handled by a dedicated kernel thread in a work queue on the ARM64 platform. For
> synchronous errors, memory_failure() is synced using a cancel_work_sync trick to
> ensure that the corrupted page is unmapped and poisoned. Upon returning to
> user-space, the process resumes at the current instruction, triggering a page
> fault. As a result, the kernel sends a SIGBUS signal to the current process due
> to VM_FAULT_HWPOISON.
>
> However, this trick is not always effective; this patch set improves the
> recovery process in three specific aspects:
>
> 1. Handle synchronous exceptions with the proper si_code
>
> ghes_handle_memory_failure() queues both synchronous and asynchronous errors
> with flags=0. The kernel will then notify the process by sending a SIGBUS
> signal in memory_failure() with the wrong si_code: BUS_MCEERR_AO to the actual
> user-space process instead of BUS_MCEERR_AR. User-space processes rely on the
> si_code to decide how to handle the memory failure.
>
> For example, hwpoison-aware user-space processes use the si_code
> BUS_MCEERR_AO for 'action optional' early notifications, and BUS_MCEERR_AR
> for 'action required' synchronous/late notifications. Specifically, when a
> signal with SIGBUS_MCEERR_AR is delivered to QEMU, it will inject a vSEA into
> the Guest kernel. In contrast, a signal with SIGBUS_MCEERR_AO will be ignored
> by QEMU.[1]
>
> Fix it by setting the memory failure flags to MF_ACTION_REQUIRED on
> synchronous events. (PATCH 1)
>
> 2. Handle memory_failure() abnormal failures to avoid an unnecessary reboot
>
> If a process maps the fault page but memory_failure() returns abnormally
> before try_to_unmap(), for example when the fault page mapped by the process
> is a KSM page, then arm64 cannot use the page fault path to terminate the
> synchronous exception loop.[4]
>
> This loop can potentially exceed the platform firmware threshold or even
> trigger a kernel hard lockup, leading to a system reboot. However, the kernel
> has the capability to recover from this error.
>
> Fix it by performing a force kill when memory_failure() fails abnormally or
> when other abnormal synchronous errors occur. These errors can include
> situations such as invalid PA, unexpected severity, no memory failure config
> support, invalid GUID section, OOM, etc. (PATCH 2)
>
> 3. Handle memory_failure() in the context of the process consuming poison
>
> When synchronous errors occur, memory_failure() assumes that the current
> process context is exactly the one consuming the synchronous error.
>
> For example, kill_accessing_process() holds the mmap lock of current->mm,
> does a pagetable walk to find the error virtual address, and sends SIGBUS to
> the current process with error info. However, the mm of a kworker is not
> valid, resulting in a null-pointer dereference. I have fixed this in [3].
>
>   commit 77677cdbc2aa mm,hwpoison: check mm when killing accessing process
>
> Another example is that collect_procs()/kill_procs() walk the task list and
> only collect and send SIGBUS to the task consuming poison. But
> memory_failure() is queued and handled by a dedicated kernel thread on the
> arm64 platform.
>
> Fix it by queuing memory_failure() as a task work which runs in the current
> execution context to synchronously send SIGBUS before ret_to_user. (PATCH 2)
>
> ** In summary, this patch set handles synchronous errors in task work with
> the proper si_code so that hwpoison-aware processes can recover from errors,
> and fixes (potentially) abnormal cases. **
>
> Lv Ying and XiuQi from Huawei also proposed to address a similar
> problem[2][4]. Acknowledgments to the discussion with them.
>
> ## Steps to Reproduce This Problem
>
> To reproduce this problem:
>
> # STEP1: enable early kill mode
> #sysctl -w vm.memory_failure_early_kill=1
> vm.memory_failure_early_kill = 1
>
> # STEP2: inject an UCE error and consume it to trigger a synchronous error
> #einj_mem_uc single
> 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
> injecting ...
> triggering ...
> signal 7 code 5 addr 0xffffb0d75000
> page not present
> Test passed
>
> The si_code (code 5) from einj_mem_uc indicates that it is a BUS_MCEERR_AO
> error, which is not the fact.
>
> After this patch set:
>
> # STEP1: enable early kill mode
> #sysctl -w vm.memory_failure_early_kill=1
> vm.memory_failure_early_kill = 1
>
> # STEP2: inject an UCE error and consume it to trigger a synchronous error
> #einj_mem_uc single
> 0: single vaddr = 0xffffb0d75400 paddr = 4092d55b400
> injecting ...
> triggering ...
> signal 7 code 4 addr 0xffffb0d75000
> page not present
> Test passed
>
> The si_code (code 4) from einj_mem_uc indicates that it is a BUS_MCEERR_AR
> error, as we expected.
>
> [1] Add ARMv8 RAS virtualization support in QEMU https://patchew.org/QEMU/20200512030609.19593-1-gengdongjiu@huawei.com/
> [2] https://lore.kernel.org/lkml/20221205115111.131568-3-lvying6@huawei.com/
> [3] https://lkml.kernel.org/r/20220914064935.7851-1-xueshuai@linux.alibaba.com
> [4] https://lore.kernel.org/lkml/20221209095407.383211-1-lvying6@huawei.com/
>
> Shuai Xue (2):
>   ACPI: APEI: set memory failure flags as MF_ACTION_REQUIRED on
>     synchronous events
>   ACPI: APEI: handle synchronous exceptions in task work
>
>  arch/x86/kernel/cpu/mce/core.c |   9 +--
>  drivers/acpi/apei/ghes.c       | 113 ++++++++++++++++++++++-----------
>  include/acpi/ghes.h            |   3 -
>  include/linux/mm.h             |   1 -
>  mm/memory-failure.c            |  22 ++----
>  5 files changed, 82 insertions(+), 66 deletions(-)