From: Chih-En Lin
To: Andrew Morton, Qi Zheng, David Hildenbrand, Matthew Wilcox,
	Christophe Leroy, John Hubbard, Nadav Amit
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Steven Rostedt,
	Masami Hiramatsu, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Yang Shi, Peter Xu, Zach O'Keefe,
Howlett" , Alex Sierra , Xianting Tian , Colin Cross , Suren Baghdasaryan , Barry Song , Pasha Tatashin , Suleiman Souhlal , Brian Geffon , Yu Zhao , Tong Tiangen , Liu Shixin , Li kunyu , Anshuman Khandual , Vlastimil Babka , Hugh Dickins , Minchan Kim , Miaohe Lin , Gautam Menghani , Catalin Marinas , Mark Brown , Will Deacon , "Eric W . Biederman" , Thomas Gleixner , Sebastian Andrzej Siewior , Andy Lutomirski , Fenghua Yu , Barret Rhoden , Davidlohr Bueso , "Jason A . Donenfeld" , Dinglan Peng , Pedro Fonseca , Jim Huang , Huichun Feng , Chih-En Lin Subject: [PATCH v3 03/14] mm: Add break COW PTE fault and helper functions Date: Tue, 20 Dec 2022 15:27:32 +0800 Message-Id: <20221220072743.3039060-4-shiyn.lin@gmail.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: <20221220072743.3039060-1-shiyn.lin@gmail.com> References: <20221220072743.3039060-1-shiyn.lin@gmail.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add the function, break_cow_pte_fault(), to break (unshare) COW-ed PTE with the page fault that will modify the PTE table or the mapped page resided in COW-ed PTE (i.e., write, unshared, file read). When breaking COW PTE, it first checks COW-ed PTE's refcount to try to reuse it. If COW-ed PTE cannot be reused, allocates new PTE and duplicates all pte entries in COW-ed PTE. Moreover, flush TLB when we change the write protection of PTE. In addition, provide the helper functions, break_cow_pte{,_range}(), to let the other features (remap, THP, migration, swapfile, etc) to use. Signed-off-by: Chih-En Lin --- include/linux/mm.h | 4 + include/linux/pgtable.h | 6 + mm/memory.c | 319 +++++++++++++++++++++++++++++++++++++++- mm/mmap.c | 4 + mm/mremap.c | 2 + mm/swapfile.c | 2 + 6 files changed, 331 insertions(+), 6 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 8c6ec1da2336f..6a0eb01ee6f7e 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1894,6 +1894,10 @@ void pagecache_isize_extended(struct inode *inode, l= off_t from, loff_t to); void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t e= nd); int generic_error_remove_page(struct address_space *mapping, struct page *= page); =20 +int break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd, unsigned long ad= dr); +int break_cow_pte_range(struct vm_area_struct *vma, unsigned long start, + unsigned long end); + #ifdef CONFIG_MMU extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address, unsigned int flags, diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index a108b60a6962b..895fa18e3b011 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1395,6 +1395,12 @@ static inline int pmd_none_or_trans_huge_or_clear_ba= d(pmd_t *pmd) if (pmd_none(pmdval) || pmd_trans_huge(pmdval) || (IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION) && !pmd_present(pmdval))) return 1; + /* + * COW-ed PTE has write protection which can trigger pmd_bad(). + * To avoid this, return here if entry is write protection. 
+	 */
+	if (!pmd_write(pmdval))
+		return 0;
 	if (unlikely(pmd_bad(pmdval))) {
 		pmd_clear_bad(pmd);
 		return 1;
diff --git a/mm/memory.c b/mm/memory.c
index 5b474d14a5411..8ebff4cac2191 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -239,6 +239,35 @@ static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		/*
+		 * For a COW-ed PTE table, the pte entries still map to pages,
+		 * but the de-accounting has already been done for all of
+		 * them. So, even if the refcount is not the same as in the
+		 * normal zapping case, we can still fall back to the normal
+		 * PTE path without traversing the entries to de-account.
+		 */
+		if (test_bit(MMF_COW_PTE, &tlb->mm->flags)) {
+			if (!pmd_none(*pmd) && !pmd_write(*pmd)) {
+				spinlock_t *ptl = pte_lockptr(tlb->mm, pmd);
+
+				spin_lock(ptl);
+				if (!pmd_put_pte(pmd)) {
+					pmd_t new = pmd_mkwrite(*pmd);
+
+					set_pmd_at(tlb->mm, addr, pmd, new);
+					spin_unlock(ptl);
+					free_pte_range(tlb, pmd, addr);
+					continue;
+				}
+				spin_unlock(ptl);
+
+				pmd_clear(pmd);
+				mm_dec_nr_ptes(tlb->mm);
+				flush_tlb_mm_range(tlb->mm, addr, next,
+						   PAGE_SHIFT, false);
+			} else
+				VM_WARN_ON(cow_pte_count(pmd) != 1);
+		}
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
 		free_pte_range(tlb, pmd, addr);
@@ -1676,12 +1705,34 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	pte_t *start_pte;
 	pte_t *pte;
 	swp_entry_t entry;
+	bool pte_is_shared = false;
+
+	if (test_bit(MMF_COW_PTE, &mm->flags) && !pmd_write(*pmd)) {
+		if (!range_in_vma(vma, addr & PMD_MASK,
+				  (addr + PMD_SIZE) & PMD_MASK)) {
+			/*
+			 * We cannot guarantee this COW-ed PTE table will also
+			 * be zapped with the rest of the VMAs, so break here.
+			 */
+			break_cow_pte(vma, pmd, addr);
+		} else {
+			start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+			if (cow_pte_count(pmd) == 1) {
+				/* Reuse the COW-ed PTE table. */
+				pmd_t new = pmd_mkwrite(*pmd);
+				set_pmd_at(tlb->mm, addr, pmd, new);
+			} else
+				pte_is_shared = true;
+			pte_unmap_unlock(start_pte, ptl);
+		}
+	}
 
 	tlb_change_page_size(tlb, PAGE_SIZE);
again:
 	init_rss_vec(rss);
 	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	pte = start_pte;
+
 	flush_tlb_batched_pending(mm);
 	arch_enter_lazy_mmu_mode();
 	do {
@@ -1698,11 +1749,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
+			if (pte_is_shared)
+				ptent = *pte;
+			else
+				ptent = ptep_get_and_clear_full(mm, addr, pte,
+								tlb->fullmm);
 			tlb_remove_tlb_entry(tlb, pte, addr);
-			zap_install_uffd_wp_if_needed(vma, addr, pte, details,
-						      ptent);
+			if (!pte_is_shared)
+				zap_install_uffd_wp_if_needed(vma, addr, pte,
+							      details, ptent);
 			if (unlikely(!page))
 				continue;
 
@@ -1768,8 +1823,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			/* We should have covered all the swap entry types */
 			WARN_ON_ONCE(1);
 		}
-		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
-		zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
+
+		if (!pte_is_shared) {
+			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
+			zap_install_uffd_wp_if_needed(vma, addr, pte,
+						      details, ptent);
+		}
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 
 	add_mm_rss_vec(mm, rss);
@@ -2147,6 +2206,8 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
 	if (retval)
 		goto out;
 	retval = -ENOMEM;
+	if (break_cow_pte(vma, NULL, addr) < 0)
+		goto out;
 	pte = get_locked_pte(vma->vm_mm, addr, &ptl);
 	if (!pte)
 		goto out;
@@ -2406,6 +2467,9 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 	pte_t *pte, entry;
 	spinlock_t *ptl;
 
+	if (break_cow_pte(vma, NULL, addr) < 0)
+		return VM_FAULT_OOM;
+
 	pte = get_locked_pte(mm, addr, &ptl);
 	if (!pte)
 		return VM_FAULT_OOM;
@@ -2783,6 +2847,10 @@ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
 	BUG_ON(addr >= end);
 	pfn -= addr >> PAGE_SHIFT;
 	pgd = pgd_offset(mm, addr);
+
+	if (break_cow_pte_range(vma, addr, end))
+		return -ENOMEM;
+
 	flush_cache_range(vma, addr, end);
 	do {
 		next = pgd_addr_end(addr, end);
@@ -5143,6 +5211,226 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
 	return VM_FAULT_FALLBACK;
 }
 
+/* Break (unshare) COW PTE */
+static vm_fault_t handle_cow_pte_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t *pmd = vmf->pmd;
+	unsigned long start, end, addr = vmf->address;
+	struct mmu_notifier_range range;
+	pmd_t cowed_entry;
+	pte_t *orig_dst_pte, *orig_src_pte;
+	pte_t *dst_pte, *src_pte;
+	spinlock_t *dst_ptl, *src_ptl;
+	int ret = 0;
+
+	/*
+	 * Do nothing with a fault whose PTE table does not exist yet
+	 * (from lazy fork).
+	 */
+	if (pmd_none(*pmd) || pmd_write(*pmd))
+		return 0;
+	/* COW PTE does not handle huge pages. */
+	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
+		return 0;
+
+	mmap_assert_write_locked(mm);
+
+	start = addr & PMD_MASK;
+	end = (addr + PMD_SIZE) & PMD_MASK;
+	addr = start;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
+				0, vma, mm, start, end);
+	/*
+	 * Because the address range covers the whole PTE table rather than
+	 * only the faulted vma, there may be a mismatch, since the mmu
+	 * notifier is only registered for the faulted vma.
+	 * Do we really need to care about this kind of mismatch?
+	 */
+	mmu_notifier_invalidate_range_start(&range);
+	raw_write_seqcount_begin(&mm->write_protect_seq);
+
+	/*
+	 * Fast path: if the faulting task holds the only reference to
+	 * this COW-ed PTE table, reuse it.
+	 */
+	src_pte = pte_offset_map_lock(mm, pmd, addr, &src_ptl);
+	if (cow_pte_count(pmd) == 1) {
+		pmd_t new = pmd_mkwrite(*pmd);
+		set_pmd_at(mm, addr, pmd, new);
+		pte_unmap_unlock(src_pte, src_ptl);
+		goto flush_tlb;
+	}
+	pte_unmap_unlock(src_pte, src_ptl);
+
+	/*
+	 * Slow path. Since the accounting has already been done and the
+	 * mapped pages are still shared, we can simply clone the PTE table.
+	 */
+
+	cowed_entry = READ_ONCE(*pmd);
+	/* Decrease the pgtable_bytes of the COW-ed PTE table. */
+	mm_dec_nr_ptes(mm);
+	pmd_clear(pmd);
+	orig_dst_pte = dst_pte = pte_alloc_map_lock(mm, pmd, addr, &dst_ptl);
+	if (unlikely(!dst_pte)) {
+		/* If the allocation failed, restore the COW-ed PTE table. */
+		set_pmd_at(mm, addr, pmd, cowed_entry);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	/*
+	 * Hold the lock of the COW-ed PTE table until all the operations
+	 * have been done, including duplicating the entries, flushing the
+	 * TLB, and decreasing the refcount.
+	 */
+	src_pte = pte_offset_map_lock(mm, &cowed_entry, addr, &src_ptl);
+	orig_src_pte = src_pte;
+	arch_enter_lazy_mmu_mode();
+
+	do {
+		if (pte_none(*src_pte))
+			continue;
+		/*
+		 * Most cases should have been handled in copy_cow_pte_range(),
+		 * but we cannot distinguish whether the vma belongs to the
+		 * parent or the child here, so we need to take care of it.
+		 */
+		set_pte_at(mm, addr, dst_pte, *src_pte);
+	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
+
+	arch_leave_lazy_mmu_mode();
+	pte_unmap_unlock(orig_dst_pte, dst_ptl);
+
+	/* Decrease the refcount of the COW-ed PTE table. */
+	if (!pmd_put_pte(&cowed_entry)) {
+		/* The COW-ed (old) PTE table's refcount is 1, reuse it. */
+		pgtable_t token = pmd_pgtable(*pmd);
+		/* Reuse the COW-ed PTE table. */
+		pmd_t new = pmd_mkwrite(cowed_entry);
+
+		/* Clear all the entries of the new PTE table. */
+		addr = start;
+		dst_pte = pte_offset_map_lock(mm, pmd, addr, &dst_ptl);
+		orig_dst_pte = dst_pte;
+		do {
+			if (pte_none(*dst_pte))
+				continue;
+			if (pte_present(*dst_pte))
+				page_table_check_pte_clear(mm, addr, *dst_pte);
+			pte_clear(mm, addr, dst_pte);
+		} while (dst_pte++, addr += PAGE_SIZE, addr != end);
+		pte_unmap_unlock(orig_dst_pte, dst_ptl);
+		/* Now, we can safely free the new PTE table. */
+		pmd_clear(pmd);
+		pte_free(mm, token);
+		/* Reuse the COW-ed PTE table. */
+		set_pmd_at(mm, start, pmd, new);
+	}
+
+	pte_unmap_unlock(orig_src_pte, src_ptl);
+
+flush_tlb:
+	/*
+	 * If we changed the protection, flush the TLB.
+	 * flush_tlb_range() only uses the vma to get the mm, so we do not
+	 * need to worry about the address range not matching the vma here.
+	 */
+	flush_tlb_range(vma, start, end);
+out:
+	raw_write_seqcount_end(&mm->write_protect_seq);
+	mmu_notifier_invalidate_range_end(&range);
+
+	return ret;
+}
+
+static inline int __break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd,
+				  unsigned long addr)
+{
+	struct vm_fault vmf = {
+		.vma = vma,
+		.address = addr & PAGE_MASK,
+		.pmd = pmd,
+	};
+
+	return handle_cow_pte_fault(&vmf);
+}
+
+/**
+ * break_cow_pte - duplicate or reuse a shared, write-protected (COW-ed) PTE table
+ * @vma: target vma that wants to break COW
+ * @pmd: pmd entry that maps the shared PTE table
+ * @addr: the address that triggered the break COW PTE fault
+ *
+ * The address needs to be in the range of the shared, write-protected
+ * PTE table that the pmd entry maps. If @pmd is NULL, the pmd is looked
+ * up from @vma. Duplicate the COW-ed PTE table when others still map to
+ * it; otherwise, reuse it.
+ */
+int break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr)
+{
+	struct mm_struct *mm;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	if (!vma)
+		return -EINVAL;
+	mm = vma->vm_mm;
+
+	if (!test_bit(MMF_COW_PTE, &mm->flags))
+		return 0;
+
+	if (!pmd) {
+		pgd = pgd_offset(mm, addr);
+		if (pgd_none_or_clear_bad(pgd))
+			return 0;
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none_or_clear_bad(p4d))
+			return 0;
+		pud = pud_offset(p4d, addr);
+		if (pud_none_or_clear_bad(pud))
+			return 0;
+		pmd = pmd_offset(pud, addr);
+	}
+
+	/* We will check the type of the pmd entry later. */
+
+	return __break_cow_pte(vma, pmd, addr);
+}
+
+/**
+ * break_cow_pte_range - duplicate or reuse COW-ed PTE tables in a given range
+ * @vma: target vma that wants to break COW
+ * @start: start address of the range to break
+ * @end: end address of the range to break
+ *
+ * Return: zero on success, the number of failed PTE tables otherwise.
+ */
+int break_cow_pte_range(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end)
+{
+	unsigned long addr, next;
+	int nr_failed = 0;
+
+	if (!vma)
+		return -EINVAL;
+	if (!range_in_vma(vma, start, end))
+		return -EINVAL;
+
+	addr = start;
+	do {
+		next = pmd_addr_end(addr, end);
+		if (break_cow_pte(vma, NULL, addr) < 0)
+			nr_failed++;
+	} while (addr = next, addr != end);
+
+	return nr_failed;
+}
+
 /*
  * These routines also need to handle stuff like marking pages dirty
  * and/or accessed for architectures that don't do it in hardware (most
@@ -5355,8 +5643,27 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 				return 0;
 			}
 		}
+		/*
+		 * Duplicate the COW-ed PTE table when the page fault will
+		 * change the mapped pages (write or unshare fault) or the
+		 * COW-ed PTE table (file-mapped read fault, see do_read_fault()).
+		 */
+		if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE) ||
+		     vma->vm_ops) && test_bit(MMF_COW_PTE, &mm->flags)) {
+			ret = handle_cow_pte_fault(&vmf);
+			if (unlikely(ret == -ENOMEM))
+				return VM_FAULT_OOM;
+		}
 	}
 
+	/*
+	 * It will definitely break the kernel if the refcount of the PTE
+	 * table is higher than 1 while the PMD entry is writable. But we
+	 * want to see more information, so just warn here.
+	 */
+	if (likely(!pmd_none(*vmf.pmd)))
+		VM_WARN_ON(cow_pte_count(vmf.pmd) > 1 && pmd_write(*vmf.pmd));
+
 	return handle_pte_fault(&vmf);
 }
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 74a84eb33b904..3eb9b852adc3b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2208,6 +2208,10 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 		return err;
 	}
 
+	err = break_cow_pte(vma, NULL, addr);
+	if (err)
+		return err;
+
 	new = vm_area_dup(vma);
 	if (!new)
 		return -ENOMEM;
diff --git a/mm/mremap.c b/mm/mremap.c
index e465ffe279bb0..b4136b12f24b6 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -534,6 +534,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
+		/* TODO: does this flush the TLB twice? */
+		break_cow_pte(vma, old_pmd, old_addr);
 		new_pmd = alloc_new_pmd(vma->vm_mm, vma, new_addr);
 		if (!new_pmd)
 			break;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 72e481aacd5df..10af3e0a2eb5d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1911,6 +1911,8 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 		next = pmd_addr_end(addr, end);
 		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			continue;
+		if (break_cow_pte(vma, pmd, addr) < 0)
+			return -ENOMEM;
 		ret = unuse_pte_range(vma, pmd, addr, next, type);
 		if (ret)
 			return ret;
-- 
2.37.3
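
As a side note on the fast-path/slow-path split in handle_cow_pte_fault()
above, the decision is simply: reuse the shared PTE table if we hold the
last reference, otherwise make a private copy of its entries and drop one
reference. The stand-alone user-space sketch below models only that
decision; toy_pte_table, toy_break_cow(), and TOY_PTRS_PER_TABLE are
hypothetical names invented for the illustration and are not part of this
patch, which works on real pmd/pte entries under the page-table locks and
with the TLB flushing, accounting, and mmu notifier calls shown above.

/*
 * Illustrative user-space model of the reuse-vs-duplicate decision.
 * Hypothetical names; not kernel code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TOY_PTRS_PER_TABLE 8

struct toy_pte_table {
	int refcount;			/* plays the role of cow_pte_count() */
	unsigned long entry[TOY_PTRS_PER_TABLE];
};

/*
 * Break COW on @shared: reuse it when the caller holds the last
 * reference, otherwise duplicate the entries into a private table
 * and drop one reference on the shared one.
 */
static struct toy_pte_table *toy_break_cow(struct toy_pte_table *shared)
{
	struct toy_pte_table *copy;

	if (shared->refcount == 1)
		return shared;		/* fast path: reuse in place */

	copy = malloc(sizeof(*copy));	/* slow path: duplicate */
	if (!copy)
		return NULL;		/* the kernel maps this to -ENOMEM */
	memcpy(copy->entry, shared->entry, sizeof(copy->entry));
	copy->refcount = 1;
	shared->refcount--;
	return copy;
}

int main(void)
{
	struct toy_pte_table parent = { .refcount = 2, .entry = { 1, 2, 3 } };
	struct toy_pte_table *child = toy_break_cow(&parent);

	printf("parent refcount is now %d, child table %s the parent\n",
	       parent.refcount,
	       child == &parent ? "reuses" : "was duplicated from");
	if (child && child != &parent)
		free(child);
	return 0;
}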