From: Tony Luck <tony.luck@intel.com>
To: Naoya Horiguchi, Andrew Morton
Cc: Miaohe Lin, Matthew Wilcox, Shuai Xue, Dan Williams,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 1/2] mm, hwpoison: Try to recover from copy-on-write faults
Date: Fri, 21 Oct 2022 13:01:19 -0700
Message-Id: <20221021200120.175753-2-tony.luck@intel.com>
In-Reply-To: <20221021200120.175753-1-tony.luck@intel.com>
References: <20221019170835.155381-1-tony.luck@intel.com>
 <20221021200120.175753-1-tony.luck@intel.com>

If the kernel is copying a page as the result of a copy-on-write
fault and runs into an uncorrectable error, Linux will crash because
it does not have recovery code for this case where poison is consumed
by the kernel.

It is easy to set up a test case. Just inject an error into a private
page, fork(2), and have the child process write to the page. I wrapped
that neatly into a test at:

  git://git.kernel.org/pub/scm/linux/kernel/git/aegl/ras-tools.git

just enable ACPI error injection and run:

  # ./einj_mem-uc -f copy-on-write

Add a new copy_mc_user_highpage() function that uses copy_mc_to_kernel()
on architectures where that is available (currently x86 and powerpc).
When an error is detected during the page copy, return VM_FAULT_HWPOISON
to the caller of wp_page_copy(). This propagates up the call stack.
Both x86 and powerpc have code in their fault handlers to deal with
this return code by sending a SIGBUS to the application.

Note that this patch avoids a system crash and signals the process
that triggered the copy-on-write action. It does not take any action
for the memory error that is still in the shared page.
To handle the error in the shared page a call to memory_failure() is
needed. But this cannot be done from wp_page_copy() because it holds
mmap_lock. Perhaps the architecture fault handlers can deal with this
loose end in a subsequent patch?

On Intel/x86 this loose end will often be handled automatically because
the memory controller provides an additional notification of the h/w
poison in memory, and the handler for this will call memory_failure().
This isn't a 100% solution. If there are multiple errors, not all may
be logged in this way.

Reviewed-by: Dan Williams
Signed-off-by: Tony Luck
Reviewed-by: Alexander Potapenko
Reviewed-by: Miaohe Lin
Reviewed-by: Naoya Horiguchi
Tested-by: Shuai Xue
---
Changes in V3 (suggested by Dan Williams):
 - Rename copy_user_highpage_mc() to copy_mc_user_highpage() for
   consistency with Linus' discussion on names of functions that
   check for machine check.
 - Write complete functions for the have/have-not copy_mc_to_kernel
   cases (so grep shows there are two versions).
 - Change __wp_page_copy_user() to return 0 for success, negative for
   failure [I picked -EAGAIN for both non-EHWPOISON cases].

Changes in V2 (suggested by Naoya Horiguchi):
 1) Use -EHWPOISON error code instead of minus one.
 2) Poison path also needs to deal with old_page.

Other changes (Tony Luck):
 - Rewrote commit message.
 - Added some powerpc folks to Cc: list.
---
 include/linux/highmem.h | 24 ++++++++++++++++++++++++
 mm/memory.c             | 30 ++++++++++++++++++++----------
 2 files changed, 44 insertions(+), 10 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index e9912da5441b..a32c64681f03 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -319,6 +319,30 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 
 #endif
 
+#ifdef copy_mc_to_kernel
+static inline int copy_mc_user_highpage(struct page *to, struct page *from,
+					unsigned long vaddr, struct vm_area_struct *vma)
+{
+	unsigned long ret;
+	char *vfrom, *vto;
+
+	vfrom = kmap_local_page(from);
+	vto = kmap_local_page(to);
+	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
+	kunmap_local(vto);
+	kunmap_local(vfrom);
+
+	return ret;
+}
+#else
+static inline int copy_mc_user_highpage(struct page *to, struct page *from,
+					unsigned long vaddr, struct vm_area_struct *vma)
+{
+	copy_user_highpage(to, from, vaddr, vma);
+	return 0;
+}
+#endif
+
 #ifndef __HAVE_ARCH_COPY_HIGHPAGE
 
 static inline void copy_highpage(struct page *to, struct page *from)
diff --git a/mm/memory.c b/mm/memory.c
index f88c351aecd4..b6056eef2f72 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2848,10 +2848,16 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 	return same;
 }
 
-static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
-				       struct vm_fault *vmf)
+/*
+ * Return:
+ *	0:		copy succeeded
+ *	-EHWPOISON:	copy failed due to hwpoison in source page
+ *	-EAGAIN:	copy failed (some other reason)
+ */
+static inline int __wp_page_copy_user(struct page *dst, struct page *src,
+				      struct vm_fault *vmf)
 {
-	bool ret;
+	int ret;
 	void *kaddr;
 	void __user *uaddr;
 	bool locked = false;
@@ -2860,8 +2866,9 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		copy_user_highpage(dst, src, addr, vma);
-		return true;
+		if (copy_mc_user_highpage(dst, src, addr, vma))
+			return -EHWPOISON;
+		return 0;
 	}
 
 	/*
@@ -2888,7 +2895,7 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 		 * and update local tlb only
 		 */
 		update_mmu_tlb(vma, addr, vmf->pte);
-		ret = false;
+		ret = -EAGAIN;
 		goto pte_unlock;
 	}
 
@@ -2913,7 +2920,7 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
 			/* The PTE changed under us, update local tlb */
 			update_mmu_tlb(vma, addr, vmf->pte);
-			ret = false;
+			ret = -EAGAIN;
 			goto pte_unlock;
 		}
 
@@ -2932,7 +2939,7 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 		}
 	}
 
-	ret = true;
+	ret = 0;
 
 pte_unlock:
 	if (locked)
@@ -3104,6 +3111,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	pte_t entry;
 	int page_copied = 0;
 	struct mmu_notifier_range range;
+	int ret;
 
 	delayacct_wpcopy_start();
 
@@ -3121,19 +3129,21 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		if (!new_page)
 			goto oom;
 
-		if (!__wp_page_copy_user(new_page, old_page, vmf)) {
+		ret = __wp_page_copy_user(new_page, old_page, vmf);
+		if (ret) {
 			/*
 			 * COW failed, if the fault was solved by other,
 			 * it's fine. If not, userspace would re-fault on
 			 * the same address and we will handle the fault
 			 * from the second attempt.
+			 * The -EHWPOISON case will not be retried.
 			 */
			put_page(new_page);
 			if (old_page)
 				put_page(old_page);
 
 			delayacct_wpcopy_end();
-			return 0;
+			return ret == -EHWPOISON ? VM_FAULT_HWPOISON : 0;
 		}
 		kmsan_copy_page_meta(new_page, old_page);
 	}
-- 
2.37.3

From: Tony Luck <tony.luck@intel.com>
To: Naoya Horiguchi, Andrew Morton
Cc: Miaohe Lin, Matthew Wilcox, Shuai Xue, Dan Williams,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v3 2/2] mm, hwpoison: When copy-on-write hits poison, take page offline
Date: Fri, 21 Oct 2022 13:01:20 -0700
Message-Id: <20221021200120.175753-3-tony.luck@intel.com>
In-Reply-To: <20221021200120.175753-1-tony.luck@intel.com>
References: <20221019170835.155381-1-tony.luck@intel.com>
 <20221021200120.175753-1-tony.luck@intel.com>

memory_failure() cannot be called directly from the fault handler
because mmap_lock (and other locks) are held.

It is important, but not urgent, to mark the source page as h/w
poisoned and unmap it from other tasks.

Use memory_failure_queue() to request a call to memory_failure() for
the page with the error.
Also provide a stub version for CONFIG_MEMORY_FAILURE=n.

Signed-off-by: Tony Luck
Reviewed-by: Miaohe Lin
Tested-by: Shuai Xue
---
 include/linux/mm.h | 5 ++++-
 mm/memory.c        | 4 +++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8bbcccbc5565..03ced659eb58 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3268,7 +3268,6 @@ enum mf_flags {
 int mf_dax_kill_procs(struct address_space *mapping, pgoff_t index,
 		      unsigned long count, int mf_flags);
 extern int memory_failure(unsigned long pfn, int flags);
-extern void memory_failure_queue(unsigned long pfn, int flags);
 extern void memory_failure_queue_kick(int cpu);
 extern int unpoison_memory(unsigned long pfn);
 extern int sysctl_memory_failure_early_kill;
@@ -3277,8 +3276,12 @@ extern void shake_page(struct page *p);
 extern atomic_long_t num_poisoned_pages __read_mostly;
 extern int soft_offline_page(unsigned long pfn, int flags);
 #ifdef CONFIG_MEMORY_FAILURE
+extern void memory_failure_queue(unsigned long pfn, int flags);
 extern int __get_huge_page_for_hwpoison(unsigned long pfn, int flags);
 #else
+static inline void memory_failure_queue(unsigned long pfn, int flags)
+{
+}
 static inline int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
 {
 	return 0;
diff --git a/mm/memory.c b/mm/memory.c
index b6056eef2f72..eae242351726 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2866,8 +2866,10 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma))
+		if (copy_mc_user_highpage(dst, src, addr, vma)) {
+			memory_failure_queue(page_to_pfn(src), 0);
 			return -EHWPOISON;
+		}
 		return 0;
 	}
 
-- 
2.37.3