From: Tony Luck
To: Andrew Morton
Cc: Alexander Potapenko, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
    Shuai Xue, Dan Williams, Michael Ellerman, Nicholas Piggin,
    Christophe Leroy, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v4 1/2] mm, hwpoison: Try to recover from copy-on-write faults
Date: Mon, 31 Oct 2022 13:10:28 -0700
Message-Id: <20221031201029.102123-2-tony.luck@intel.com>
In-Reply-To: <20221031201029.102123-1-tony.luck@intel.com>
References: <20221021200120.175753-1-tony.luck@intel.com>
 <20221031201029.102123-1-tony.luck@intel.com>

If the kernel is copying a page as the result of a copy-on-write
fault and runs into an uncorrectable error, Linux will crash because
it does not have recovery code for this case where poison is consumed
by the kernel.

It is easy to set up a test case. Just inject an error into a private
page, fork(2), and have the child process write to the page.

I wrapped that neatly into a test at:

  git://git.kernel.org/pub/scm/linux/kernel/git/aegl/ras-tools.git

just enable ACPI error injection and run:

  # ./einj_mem-uc -f copy-on-write

Add a new copy_mc_user_highpage() function that uses copy_mc_to_kernel()
on architectures where that is available (currently x86 and powerpc).
When an error is detected during the page copy, return VM_FAULT_HWPOISON
to the caller of wp_page_copy(). This propagates up the call stack.
Both x86 and powerpc have code in their fault handlers to deal with
this by sending a SIGBUS to the application.

Note that this patch avoids a system crash and signals the process
that triggered the copy-on-write action.
It does not take any action for the memory error that is still in the
shared page. To handle that a call to memory_failure() is needed. But
this cannot be done from wp_page_copy() because it holds mmap_lock.
Perhaps the architecture fault handlers can deal with this loose end
in a subsequent patch?

On Intel/x86 this loose end will often be handled automatically because
the memory controller provides an additional notification of the h/w
poison in memory, and the handler for this will call memory_failure().
This isn't a 100% solution. If there are multiple errors, not all may
be logged in this way.

Reviewed-by: Dan Williams
Reviewed-by: Miaohe Lin
Reviewed-by: Naoya Horiguchi
Tested-by: Shuai Xue
Signed-off-by: Tony Luck
Message-Id: <20221021200120.175753-2-tony.luck@intel.com>
Signed-off-by: Tony Luck
---
 include/linux/highmem.h | 26 ++++++++++++++++++++++++++
 mm/memory.c             | 30 ++++++++++++++++++++----------
 2 files changed, 46 insertions(+), 10 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index e9912da5441b..44242268f53b 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -319,6 +319,32 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 
 #endif
 
+#ifdef copy_mc_to_kernel
+static inline int copy_mc_user_highpage(struct page *to, struct page *from,
+					unsigned long vaddr, struct vm_area_struct *vma)
+{
+	unsigned long ret;
+	char *vfrom, *vto;
+
+	vfrom = kmap_local_page(from);
+	vto = kmap_local_page(to);
+	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
+	if (!ret)
+		kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
+	kunmap_local(vto);
+	kunmap_local(vfrom);
+
+	return ret;
+}
+#else
+static inline int copy_mc_user_highpage(struct page *to, struct page *from,
+					unsigned long vaddr, struct vm_area_struct *vma)
+{
+	copy_user_highpage(to, from, vaddr, vma);
+	return 0;
+}
+#endif
+
 #ifndef __HAVE_ARCH_COPY_HIGHPAGE
 
 static inline void copy_highpage(struct page *to, struct page *from)
diff --git a/mm/memory.c b/mm/memory.c
index f88c351aecd4..b6056eef2f72 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2848,10 +2848,16 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 	return same;
 }
 
-static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
-				       struct vm_fault *vmf)
+/*
+ * Return:
+ * 0:		copy succeeded
+ * -EHWPOISON:	copy failed due to hwpoison in source page
+ * -EAGAIN:	copy failed (some other reason)
+ */
+static inline int __wp_page_copy_user(struct page *dst, struct page *src,
+				      struct vm_fault *vmf)
 {
-	bool ret;
+	int ret;
 	void *kaddr;
 	void __user *uaddr;
 	bool locked = false;
@@ -2860,8 +2866,9 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		copy_user_highpage(dst, src, addr, vma);
-		return true;
+		if (copy_mc_user_highpage(dst, src, addr, vma))
+			return -EHWPOISON;
+		return 0;
 	}
 
 	/*
@@ -2888,7 +2895,7 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 		 * and update local tlb only
 		 */
 		update_mmu_tlb(vma, addr, vmf->pte);
-		ret = false;
+		ret = -EAGAIN;
 		goto pte_unlock;
 	}
 
@@ -2913,7 +2920,7 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
 			/* The PTE changed under us, update local tlb */
 			update_mmu_tlb(vma, addr, vmf->pte);
-			ret = false;
+			ret = -EAGAIN;
 			goto pte_unlock;
 		}
 
@@ -2932,7 +2939,7 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 		}
 	}
 
-	ret = true;
+	ret = 0;
 
 pte_unlock:
 	if (locked)
@@ -3104,6 +3111,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	pte_t entry;
 	int page_copied = 0;
 	struct mmu_notifier_range range;
+	int ret;
 
 	delayacct_wpcopy_start();
 
@@ -3121,19 +3129,21 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		if (!new_page)
 			goto oom;
 
-		if (!__wp_page_copy_user(new_page, old_page, vmf)) {
+		ret = __wp_page_copy_user(new_page, old_page, vmf);
+		if (ret) {
 			/*
 			 * COW failed, if the fault was solved by other,
 			 * it's fine. If not, userspace would re-fault on
 			 * the same address and we will handle the fault
 			 * from the second attempt.
+			 * The -EHWPOISON case will not be retried.
 			 */
 			put_page(new_page);
			if (old_page)
				put_page(old_page);
 
			delayacct_wpcopy_end();
-			return 0;
+			return ret == -EHWPOISON ? VM_FAULT_HWPOISON : 0;
		}
		kmsan_copy_page_meta(new_page, old_page);
	}
-- 
2.37.3


From: Tony Luck
To: Andrew Morton
Cc: Alexander Potapenko, Naoya Horiguchi, Miaohe Lin, Matthew Wilcox,
    Shuai Xue, Dan Williams, Michael Ellerman, Nicholas Piggin,
    Christophe Leroy, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v4 2/2] mm, hwpoison: When copy-on-write hits poison, take page offline
Date: Mon, 31 Oct 2022 13:10:29 -0700
Message-Id: <20221031201029.102123-3-tony.luck@intel.com>
In-Reply-To: <20221031201029.102123-1-tony.luck@intel.com>
References: <20221021200120.175753-1-tony.luck@intel.com>
 <20221031201029.102123-1-tony.luck@intel.com>

memory_failure() cannot be called directly from the fault handler
because mmap_lock (and others) are held.

It is important, but not urgent, to mark the source page as h/w
poisoned and unmap it from other tasks.

Use memory_failure_queue() to request a call to memory_failure() for
the page with the error.
Also provide a stub version for CONFIG_MEMORY_FAILURE=n.

Reviewed-by: Miaohe Lin
Tested-by: Shuai Xue
Signed-off-by: Tony Luck
Message-Id: <20221021200120.175753-3-tony.luck@intel.com>
Signed-off-by: Tony Luck
---
 include/linux/mm.h | 5 ++++-
 mm/memory.c        | 4 +++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8bbcccbc5565..03ced659eb58 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3268,7 +3268,6 @@ enum mf_flags {
 int mf_dax_kill_procs(struct address_space *mapping, pgoff_t index,
 		      unsigned long count, int mf_flags);
 extern int memory_failure(unsigned long pfn, int flags);
-extern void memory_failure_queue(unsigned long pfn, int flags);
 extern void memory_failure_queue_kick(int cpu);
 extern int unpoison_memory(unsigned long pfn);
 extern int sysctl_memory_failure_early_kill;
@@ -3277,8 +3276,12 @@ extern void shake_page(struct page *p);
 extern atomic_long_t num_poisoned_pages __read_mostly;
 extern int soft_offline_page(unsigned long pfn, int flags);
 #ifdef CONFIG_MEMORY_FAILURE
+extern void memory_failure_queue(unsigned long pfn, int flags);
 extern int __get_huge_page_for_hwpoison(unsigned long pfn, int flags);
 #else
+static inline void memory_failure_queue(unsigned long pfn, int flags)
+{
+}
 static inline int __get_huge_page_for_hwpoison(unsigned long pfn, int flags)
 {
 	return 0;
diff --git a/mm/memory.c b/mm/memory.c
index b6056eef2f72..eae242351726 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2866,8 +2866,10 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma))
+		if (copy_mc_user_highpage(dst, src, addr, vma)) {
+			memory_failure_queue(page_to_pfn(src), 0);
 			return -EHWPOISON;
+		}
 		return 0;
 	}
 
-- 
2.37.3
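On the receiving end, an application that wants to survive the SIGBUS these patches deliver can install a handler and inspect the siginfo; a minimal userspace sketch (the recovery policy itself is application specific, and the helper names here are illustrative):

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>

/* BUS_MCEERR_AR marks an "action required" machine check error and
 * si_addr points into the bad page. A real handler would remap or
 * abandon that region, and must restrict itself to async-signal-safe
 * calls; the fprintf() here is purely illustrative. */
static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	(void)sig;
	(void)ucontext;
	if (info->si_code == BUS_MCEERR_AR)
		fprintf(stderr, "hw memory error at %p\n", info->si_addr);
}

/* Register the handler with SA_SIGINFO so si_code/si_addr are
 * delivered. Returns 0 on success, -1 on failure. */
static int install_sigbus_handler(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	return sigaction(SIGBUS, &sa, NULL);
}
```

Note that simply returning from the handler on a BUS_MCEERR_AR fault re-executes the faulting access; practical consumers typically siglongjmp() away or remap the page before resuming.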