From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Kefeng Wang
Subject: [PATCH -next v3] mm: hwpoison: support recovery from ksm_might_need_to_copy()
Date: Tue, 13 Dec 2022 11:05:57 +0800
Message-ID: <20221213030557.143432-1-wangkefeng.wang@huawei.com>

When the kernel copies a page in ksm_might_need_to_copy() but runs into
an uncorrectable memory error, it will crash, because the poisoned page
is consumed by the kernel. This is similar to copy-on-write poison
recovery: when an error is detected during the page copy, return
VM_FAULT_HWPOISON in do_swap_page(), and install a hwpoison entry in
unuse_pte() during swapoff, which helps us avoid a system crash.

Note that memory failure on a KSM page will be skipped, but
memory_failure_queue() is still called, to stay consistent with the
general memory failure process.
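As an illustration (not part of the patch), here is a minimal sketch of
the calling convention this change establishes: callers of
ksm_might_need_to_copy() must now distinguish three outcomes. The
helper name ksm_copy_example() is hypothetical.

	/*
	 * Illustrative sketch only: how a caller is expected to handle
	 * the three possible return values of ksm_might_need_to_copy()
	 * after this change.
	 */
	static vm_fault_t ksm_copy_example(struct page *page,
					   struct vm_area_struct *vma,
					   unsigned long addr)
	{
		page = ksm_might_need_to_copy(page, vma, addr);
		if (unlikely(!page))
			return VM_FAULT_OOM;	/* copy needed, allocation failed */
		if (unlikely(PTR_ERR(page) == -EHWPOISON))
			return VM_FAULT_HWPOISON;	/* copy consumed a poisoned page */
		/* otherwise: the original page, or a fresh writable copy */
		return 0;
	}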
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/ksm.c      |  8 ++++++--
 mm/memory.c   |  3 +++
 mm/swapfile.c | 19 +++++++++++++------
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index dd02780c387f..83e2f74ae7da 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		new_page = NULL;
 	}
 	if (new_page) {
-		copy_user_highpage(new_page, page, address, vma);
-
+		if (copy_mc_user_highpage(new_page, page, address, vma)) {
+			put_page(new_page);
+			new_page = ERR_PTR(-EHWPOISON);
+			memory_failure_queue(page_to_pfn(page), 0);
+			return new_page;
+		}
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
diff --git a/mm/memory.c b/mm/memory.c
index aad226daf41b..5b2c137dfb2a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
 			goto out_page;
+		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+			ret = VM_FAULT_HWPOISON;
+			goto out_page;
 		}
 		folio = page_folio(page);
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 908a529bca12..06aaca111233 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1763,12 +1763,15 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	struct page *swapcache;
 	spinlock_t *ptl;
 	pte_t *pte, new_pte;
+	bool hwpoisoned = false;
 	int ret = 1;
 
 	swapcache = page;
 	page = ksm_might_need_to_copy(page, vma, addr);
 	if (unlikely(!page))
 		return -ENOMEM;
+	else if (unlikely(PTR_ERR(page) == -EHWPOISON))
+		hwpoisoned = true;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
@@ -1776,13 +1779,17 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		goto out;
 	}
 
-	if (unlikely(!PageUptodate(page))) {
-		pte_t pteval;
+	if (hwpoisoned || !PageUptodate(page)) {
+		swp_entry_t swp_entry;
 
 		dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
-		pteval = swp_entry_to_pte(make_swapin_error_entry());
-		set_pte_at(vma->vm_mm, addr, pte, pteval);
-		swap_free(entry);
+		if (hwpoisoned) {
+			swp_entry = make_hwpoison_entry(swapcache);
+			page = swapcache;
+		} else {
+			swp_entry = make_swapin_error_entry();
+		}
+		new_pte = swp_entry_to_pte(swp_entry);
 		ret = 0;
 		goto out;
 	}
@@ -1816,9 +1823,9 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		new_pte = pte_mksoft_dirty(new_pte);
 	if (pte_swp_uffd_wp(*pte))
 		new_pte = pte_mkuffd_wp(new_pte);
+out:
 	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
-out:
 	pte_unmap_unlock(pte, ptl);
 	if (page != swapcache) {
 		unlock_page(page);
-- 
2.35.3