From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH 16/18] mm: use a folio in zap_pte_range()
Date: Fri, 3 Nov 2023 22:01:17 +0800
Message-ID: <20231103140119.2306578-17-wangkefeng.wang@huawei.com>
In-Reply-To: <20231103140119.2306578-1-wangkefeng.wang@huawei.com>
References: <20231103140119.2306578-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Make should_zap_page() take a folio, and use a folio throughout
zap_pte_range(); this saves several compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
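[Note, not part of the patch: a minimal standalone sketch of why the
folio calls are cheaper. Each page-based test such as PageAnon() or
set_page_dirty() must first resolve a possibly-tail page to its head
page via compound_head(), while the folio_* variants start from the
head and skip that lookup. The struct layout and helper names below
(PageAnon_model, folio_test_anon_model, the one-bit flags field) are
simplified illustrations, not the exact kernel definitions.]

    #include <stdio.h>

    /*
     * Simplified model of the compound-page encoding: bit 0 of
     * "compound_head" marks a tail page, and the remaining bits
     * point back at the head page.
     */
    struct page {
            unsigned long flags;
            unsigned long compound_head;
    };

    /* A folio is a head page viewed through a different type. */
    struct folio { struct page page; };

    static struct page *compound_head(struct page *page)
    {
            unsigned long head = page->compound_head;

            return (head & 1) ? (struct page *)(head - 1) : page;
    }

    static struct folio *page_folio(struct page *page)
    {
            return (struct folio *)compound_head(page);
    }

    /* Page-based test: pays for the head lookup on every call. */
    static int PageAnon_model(struct page *page)
    {
            return compound_head(page)->flags & 1;
    }

    /* Folio-based test: the head was already resolved once. */
    static int folio_test_anon_model(struct folio *folio)
    {
            return folio->page.flags & 1;
    }

    int main(void)
    {
            struct page pages[4] = { { .flags = 1 } };
            struct folio *folio;
            int i;

            /* Make pages[1..3] tail pages of the head, pages[0]. */
            for (i = 1; i < 4; i++)
                    pages[i].compound_head = (unsigned long)&pages[0] | 1;

            /* The patch does the head lookup once, via page_folio()... */
            folio = page_folio(&pages[3]);

            /* ...so later tests are direct, with no repeated lookup. */
            printf("anon (via page):  %d\n", PageAnon_model(&pages[3]));
            printf("anon (via folio): %d\n", folio_test_anon_model(folio));
            return 0;
    }

[In the patch itself the same idea shows up as the single
page_folio(page) call after vm_normal_page(); the anon/dirty/accessed
tests and the RSS accounting then operate on the folio directly.]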
 mm/memory.c | 43 ++++++++++++++++++++++++-------------------
 1 file changed, 24 insertions(+), 19 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fcc04dce8e8a..9b4334de9bf0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1358,19 +1358,19 @@ static inline bool should_zap_cows(struct zap_details *details)
 	return details->even_cows;
 }
 
-/* Decides whether we should zap this page with the page pointer specified */
-static inline bool should_zap_page(struct zap_details *details, struct page *page)
+/* Decides whether we should zap this folio with the folio pointer specified */
+static inline bool should_zap_page(struct zap_details *details, struct folio *folio)
 {
-	/* If we can make a decision without *page.. */
+	/* If we can make a decision without *folio.. */
 	if (should_zap_cows(details))
 		return true;
 
-	/* E.g. the caller passes NULL for the case of a zero page */
-	if (!page)
+	/* E.g. the caller passes NULL for the case of a zero folio */
+	if (!folio)
 		return true;
 
-	/* Otherwise we should only zap non-anon pages */
-	return !PageAnon(page);
+	/* Otherwise we should only zap non-anon folios */
+	return !folio_test_anon(folio);
 }
 
 static inline bool zap_drop_file_uffd_wp(struct zap_details *details)
@@ -1423,6 +1423,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_enter_lazy_mmu_mode();
 	do {
 		pte_t ptent = ptep_get(pte);
+		struct folio *folio = NULL;
 		struct page *page;
 
 		if (pte_none(ptent))
@@ -1435,7 +1436,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			unsigned int delay_rmap;
 
 			page = vm_normal_page(vma, addr, ptent);
-			if (unlikely(!should_zap_page(details, page)))
+			if (page)
+				folio = page_folio(page);
+
+			if (unlikely(!should_zap_page(details, folio)))
 				continue;
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
 							tlb->fullmm);
@@ -1449,18 +1453,18 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			}
 
 			delay_rmap = 0;
-			if (!PageAnon(page)) {
+			if (!folio_test_anon(folio)) {
 				if (pte_dirty(ptent)) {
-					set_page_dirty(page);
+					folio_set_dirty(folio);
 					if (tlb_delay_rmap(tlb)) {
 						delay_rmap = 1;
 						force_flush = 1;
 					}
 				}
 				if (pte_young(ptent) && likely(vma_has_recency(vma)))
-					mark_page_accessed(page);
+					folio_mark_accessed(folio);
 			}
-			rss[mm_counter(page)]--;
+			rss[mm_counter_folio(folio)]--;
 			if (!delay_rmap) {
 				page_remove_rmap(page, vma, false);
 				if (unlikely(page_mapcount(page) < 0))
@@ -1477,9 +1481,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		entry = pte_to_swp_entry(ptent);
 		if (is_device_private_entry(entry) ||
 		    is_device_exclusive_entry(entry)) {
-			page = pfn_swap_entry_to_page(entry);
-			if (unlikely(!should_zap_page(details, page)))
+			folio = pfn_swap_entry_to_folio(entry);
+			if (unlikely(!should_zap_page(details, folio)))
 				continue;
+
 			/*
 			 * Both device private/exclusive mappings should only
 			 * work with anonymous page so far, so we don't need to
@@ -1487,10 +1492,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			 * see zap_install_uffd_wp_if_needed().
 			 */
 			WARN_ON_ONCE(!vma_is_anonymous(vma));
-			rss[mm_counter(page)]--;
+			rss[mm_counter_folio(folio)]--;
 			if (is_device_private_entry(entry))
 				page_remove_rmap(page, vma, false);
-			put_page(page);
+			folio_put(folio);
 		} else if (!non_swap_entry(entry)) {
 			/* Genuine swap entry, hence a private anon page */
 			if (!should_zap_cows(details))
@@ -1499,10 +1504,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (unlikely(!free_swap_and_cache(entry)))
 				print_bad_pte(vma, addr, ptent, NULL);
 		} else if (is_migration_entry(entry)) {
-			page = pfn_swap_entry_to_page(entry);
-			if (!should_zap_page(details, page))
+			folio = pfn_swap_entry_to_folio(entry);
+			if (!should_zap_page(details, folio))
 				continue;
-			rss[mm_counter(page)]--;
+			rss[mm_counter_folio(folio)]--;
 		} else if (pte_marker_entry_uffd_wp(entry)) {
 			/*
 			 * For anon: always drop the marker; for file: only
-- 
2.27.0