From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Matthew Wilcox, David Hildenbrand, Kefeng Wang, linux-kernel@vger.kernel.org
Subject: [PATCH 04/10] mm: huge_memory: use a folio in zap_huge_pmd()
Date: Mon, 6 Nov 2023 23:49:44 +0800
Message-ID: <20231106154950.3399469-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20231106154950.3399469-1-wangkefeng.wang@huawei.com>
References: <20231106154950.3399469-1-wangkefeng.wang@huawei.com>

Use a folio in zap_huge_pmd(), which is a preparation for converting
mm counter functions to take a folio.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 34dd01970927..78a00fe22c2d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1716,11 +1716,13 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
 	} else {
-		struct page *page = NULL;
+		struct folio *folio = NULL;
 		int flush_needed = 1;
 
 		if (pmd_present(orig_pmd)) {
-			page = pmd_page(orig_pmd);
+			struct page *page = pmd_page(orig_pmd);
+
+			folio = page_folio(page);
 			page_remove_rmap(page, vma, true);
 			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 			VM_BUG_ON_PAGE(!PageHead(page), page);
@@ -1729,12 +1731,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 			entry = pmd_to_swp_entry(orig_pmd);
-			page = pfn_swap_entry_to_page(entry);
+			folio = pfn_swap_entry_to_folio(entry);
 			flush_needed = 0;
 		} else
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
 
-		if (PageAnon(page)) {
+		if (folio_test_anon(folio)) {
 			zap_deposited_table(tlb->mm, pmd);
 			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 		} else {
@@ -1745,7 +1747,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		spin_unlock(ptl);
 		if (flush_needed)
-			tlb_remove_page_size(tlb, page, HPAGE_PMD_SIZE);
+			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
 	}
 	return 1;
 }
-- 
2.27.0
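
Two details make the diff above work: page_folio() maps the PMD-mapped
head page to its folio, and a folio's first struct page is embedded at
its start, so &folio->page recovers the head page for
tlb_remove_page_size(), which still takes a struct page.

The commit message frames this as preparation for converting mm counter
functions to take a folio. As a rough sketch of that direction (not
part of this patch; the folio-taking signature below is an assumption
about how the counter helpers might end up looking, and it presumes
kernel context such as linux/mm.h), a folio-based mm_counter_file()
could mirror today's page-based helper:

	/*
	 * Sketch only: a possible folio-taking mm_counter_file(),
	 * mirroring the existing page-based helper. Shmem folios are
	 * swap-backed, so they are accounted as MM_SHMEMPAGES; other
	 * file-backed folios fall through to MM_FILEPAGES.
	 */
	static inline int mm_counter_file(struct folio *folio)
	{
		if (folio_test_swapbacked(folio))
			return MM_SHMEMPAGES;
		return MM_FILEPAGES;
	}

With such a helper in place, the file-backed branch of zap_huge_pmd()
could pass the folio it already holds instead of deriving a struct page
first, which is exactly why this patch threads a folio through the
function ahead of that conversion.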