From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache,
    Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
    Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Kiryl Shutsemau,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 01/13] mm/huge_memory: simplify vma_is_special_huge()
Date: Fri, 20 Mar 2026 18:07:18 +0000

This function is confused - it overloads the term 'special' yet again, and
checks for DAX even though in many cases the code explicitly excludes DAX
before invoking the predicate. It also unnecessarily checks vma->vm_file -
this must be present for a driver to have set VMA_MIXEDMAP_BIT or
VMA_PFNMAP_BIT.

In fact, a far simpler form of this is to reverse the DAX predicate and
return false if DAX is set. This makes sense from the point of view of
'special' as defined in vm_normal_page(), since DAX does potentially have
retrievable folios.

There is also no need to have this in mm.h, so move it to huge_memory.c.

No functional change intended.
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 include/linux/huge_mm.h |  4 ++--
 include/linux/mm.h      | 16 ----------------
 mm/huge_memory.c        | 30 +++++++++++++++++++++++-------
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 8d801ed378db..af726f0aa30d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -83,7 +83,7 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
  * file is never split and the MAX_PAGECACHE_ORDER limit does not apply to
  * it. Same to PFNMAPs where there's neither page* nor pagecache.
  */
-#define THP_ORDERS_ALL_SPECIAL \
+#define THP_ORDERS_ALL_SPECIAL_DAX \
 	(BIT(PMD_ORDER) | BIT(PUD_ORDER))
 #define THP_ORDERS_ALL_FILE_DEFAULT \
 	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))
@@ -92,7 +92,7 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
  * Mask of all large folio orders supported for THP.
  */
 #define THP_ORDERS_ALL \
-	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_SPECIAL | THP_ORDERS_ALL_FILE_DEFAULT)
+	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_SPECIAL_DAX | THP_ORDERS_ALL_FILE_DEFAULT)
 
 enum tva_type {
 	TVA_SMAPS,	/* Exposing "THPeligible:" in smaps. */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8aadf115278e..6b07ee99b38b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -5078,22 +5078,6 @@ long copy_folio_from_user(struct folio *dst_folio,
 			  const void __user *usr_src, bool allow_pagefault);
 
-/**
- * vma_is_special_huge - Are transhuge page-table entries considered special?
- * @vma: Pointer to the struct vm_area_struct to consider
- *
- * Whether transhuge page-table entries are considered "special" following
- * the definition in vm_normal_page().
- *
- * Return: true if transhuge page-table entries should be considered special,
- * false otherwise.
- */
-static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
-{
-	return vma_is_dax(vma) || (vma->vm_file &&
-			(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
-}
-
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
 
 #if MAX_NUMNODES > 1
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e90d08db219d..2775309b317a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -103,6 +103,14 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
 	return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
 }
 
+/* If this returns true, we are unable to access the VMA's folios. */
+static bool vma_is_special_huge(const struct vm_area_struct *vma)
+{
+	if (vma_is_dax(vma))
+		return false;
+	return vma_test_any(vma, VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT);
+}
+
 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 					 vm_flags_t vm_flags,
 					 enum tva_type type,
@@ -116,8 +124,8 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	/* Check the intersection of requested and supported orders. */
 	if (vma_is_anonymous(vma))
 		supported_orders = THP_ORDERS_ALL_ANON;
-	else if (vma_is_special_huge(vma))
-		supported_orders = THP_ORDERS_ALL_SPECIAL;
+	else if (vma_is_dax(vma) || vma_is_special_huge(vma))
+		supported_orders = THP_ORDERS_ALL_SPECIAL_DAX;
 	else
 		supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
 
@@ -2338,7 +2346,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 						tlb->fullmm);
 	arch_check_zapped_pmd(vma, orig_pmd);
 	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
-	if (!vma_is_dax(vma) && vma_is_special_huge(vma)) {
+	if (vma_is_special_huge(vma)) {
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
@@ -2840,7 +2848,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	orig_pud = pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
 	arch_check_zapped_pud(vma, orig_pud);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
-	if (!vma_is_dax(vma) && vma_is_special_huge(vma)) {
+	if (vma_is_special_huge(vma)) {
 		spin_unlock(ptl);
 		/* No zero page support yet */
 	} else {
@@ -2991,7 +2999,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	if (arch_needs_pgtable_deposit())
 		zap_deposited_table(mm, pmd);
-	if (!vma_is_dax(vma) && vma_is_special_huge(vma))
+	if (vma_is_special_huge(vma))
 		return;
 	if (unlikely(pmd_is_migration_entry(old_pmd))) {
 		const softleaf_t old_entry = softleaf_from_pmd(old_pmd);
@@ -4517,8 +4525,16 @@ static void split_huge_pages_all(void)
 
 static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
 {
-	return vma_is_special_huge(vma) || (vma->vm_flags & VM_IO) ||
-	       is_vm_hugetlb_page(vma);
+	if (vma_is_dax(vma))
+		return true;
+	if (vma_is_special_huge(vma))
+		return true;
+	if (vma_test(vma, VMA_IO_BIT))
+		return true;
+	if (is_vm_hugetlb_page(vma))
+		return true;
+
+	return false;
 }
 
 static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
-- 
2.53.0

From nobody Sat Apr 4 03:18:42
2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache,
    Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
    Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Kiryl Shutsemau,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/13] mm/huge_memory: avoid big else branch in zap_huge_pmd()
Date: Fri, 20 Mar 2026 18:07:19 +0000
Message-ID: <6b4d5efdbf5554b8fe788f677d0b50f355eec999.1774029655.git.ljs@kernel.org>

We don't need an extra level of indentation here - we can simply exit
early in the first two branches.

No functional change intended.
Reviewed-by: Baolin Wang
Acked-by: Qi Zheng
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 87 +++++++++++++++++++++++++-----------------------
 1 file changed, 45 insertions(+), 42 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2775309b317a..4e8df3a35cab 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2328,8 +2328,10 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
 int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
-	pmd_t orig_pmd;
+	struct folio *folio = NULL;
+	int flush_needed = 1;
 	spinlock_t *ptl;
+	pmd_t orig_pmd;
 
 	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
 
@@ -2350,59 +2352,60 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-	} else if (is_huge_zero_pmd(orig_pmd)) {
+		return 1;
+	}
+	if (is_huge_zero_pmd(orig_pmd)) {
 		if (!vma_is_dax(vma) || arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-	} else {
-		struct folio *folio = NULL;
-		int flush_needed = 1;
+		return 1;
+	}
 
-		if (pmd_present(orig_pmd)) {
-			struct page *page = pmd_page(orig_pmd);
+	if (pmd_present(orig_pmd)) {
+		struct page *page = pmd_page(orig_pmd);
 
-			folio = page_folio(page);
-			folio_remove_rmap_pmd(folio, page, vma);
-			WARN_ON_ONCE(folio_mapcount(folio) < 0);
-			VM_BUG_ON_PAGE(!PageHead(page), page);
-		} else if (pmd_is_valid_softleaf(orig_pmd)) {
-			const softleaf_t entry = softleaf_from_pmd(orig_pmd);
+		folio = page_folio(page);
+		folio_remove_rmap_pmd(folio, page, vma);
+		WARN_ON_ONCE(folio_mapcount(folio) < 0);
+		VM_BUG_ON_PAGE(!PageHead(page), page);
+	} else if (pmd_is_valid_softleaf(orig_pmd)) {
+		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
-			folio = softleaf_to_folio(entry);
-			flush_needed = 0;
+		folio = softleaf_to_folio(entry);
+		flush_needed = 0;
 
-			if (!thp_migration_supported())
-				WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
-		}
+		if (!thp_migration_supported())
+			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+	}
 
-		if (folio_test_anon(folio)) {
+	if (folio_test_anon(folio)) {
+		zap_deposited_table(tlb->mm, pmd);
+		add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	} else {
+		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
-			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
-		} else {
-			if (arch_needs_pgtable_deposit())
-				zap_deposited_table(tlb->mm, pmd);
-			add_mm_counter(tlb->mm, mm_counter_file(folio),
-				       -HPAGE_PMD_NR);
-
-			/*
-			 * Use flush_needed to indicate whether the PMD entry
-			 * is present, instead of checking pmd_present() again.
-			 */
-			if (flush_needed && pmd_young(orig_pmd) &&
-			    likely(vma_has_recency(vma)))
-				folio_mark_accessed(folio);
-		}
+		add_mm_counter(tlb->mm, mm_counter_file(folio),
+			       -HPAGE_PMD_NR);
 
-		if (folio_is_device_private(folio)) {
-			folio_remove_rmap_pmd(folio, &folio->page, vma);
-			WARN_ON_ONCE(folio_mapcount(folio) < 0);
-			folio_put(folio);
-		}
+		/*
+		 * Use flush_needed to indicate whether the PMD entry
+		 * is present, instead of checking pmd_present() again.
+		 */
+		if (flush_needed && pmd_young(orig_pmd) &&
+		    likely(vma_has_recency(vma)))
+			folio_mark_accessed(folio);
+	}
 
-		spin_unlock(ptl);
-		if (flush_needed)
-			tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
+	if (folio_is_device_private(folio)) {
+		folio_remove_rmap_pmd(folio, &folio->page, vma);
+		WARN_ON_ONCE(folio_mapcount(folio) < 0);
+		folio_put(folio);
 	}
+
+	spin_unlock(ptl);
+	if (flush_needed)
+		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
+	return 1;
 }
 
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache,
    Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
    Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Kiryl Shutsemau,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 03/13] mm/huge_memory: have zap_huge_pmd() return a boolean, add kerneldoc
Date: Fri, 20 Mar 2026 18:07:20 +0000
Message-ID: <132274566cd49d2960a2294c36dd2450593dfc55.1774029655.git.ljs@kernel.org>

There's no need to use the ancient approach of returning an integer here -
just return a boolean. Similarly, update flush_needed to be a boolean.

Also add a kerneldoc comment describing the function.

No functional change intended.
Reviewed-by: Baolin Wang
Acked-by: Qi Zheng
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 include/linux/huge_mm.h |  4 ++--
 mm/huge_memory.c        | 23 ++++++++++++++++-------
 2 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index af726f0aa30d..1258fa37e85b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -27,8 +27,8 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
 bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			   pmd_t *pmd, unsigned long addr, unsigned long next);
-int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd,
-		 unsigned long addr);
+bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd,
+		  unsigned long addr);
 int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pud,
 		 unsigned long addr);
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4e8df3a35cab..3c9e2ebaacfa 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2325,11 +2325,20 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
 	mm_dec_nr_ptes(mm);
 }
 
-int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+/**
+ * zap_huge_pmd - Zap a huge THP which is of PMD size.
+ * @tlb: The MMU gather TLB state associated with the operation.
+ * @vma: The VMA containing the range to zap.
+ * @pmd: A pointer to the leaf PMD entry.
+ * @addr: The virtual address for the range to zap.
+ *
+ * Returns: %true on success, %false otherwise.
+ */
+bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
 	struct folio *folio = NULL;
-	int flush_needed = 1;
+	bool flush_needed = true;
 	spinlock_t *ptl;
 	pmd_t orig_pmd;
 
@@ -2337,7 +2346,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	ptl = __pmd_trans_huge_lock(pmd, vma);
 	if (!ptl)
-		return 0;
+		return false;
 	/*
 	 * For architectures like ppc64 we look at deposited pgtable
 	 * when calling pmdp_huge_get_and_clear. So do the
@@ -2352,13 +2361,13 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-		return 1;
+		return true;
 	}
 	if (is_huge_zero_pmd(orig_pmd)) {
 		if (!vma_is_dax(vma) || arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
-		return 1;
+		return true;
 	}
 
 	if (pmd_present(orig_pmd)) {
@@ -2372,7 +2381,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
 		folio = softleaf_to_folio(entry);
-		flush_needed = 0;
+		flush_needed = false;
 
 		if (!thp_migration_supported())
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
@@ -2406,7 +2415,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	if (flush_needed)
 		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
 
-	return 1;
+	return true;
 }
 
 #ifndef pmd_move_must_withdraw
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache,
    Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
    Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Kiryl Shutsemau,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 04/13] mm/huge_memory: handle buggy PMD entry in zap_huge_pmd()
Date: Fri, 20 Mar 2026 18:07:21 +0000

A recent bug I analysed managed, through a bug in the userfaultfd
implementation, to reach an invalid point in the zap_huge_pmd() code where
the PMD was none of:

- A non-DAX, PFN or mixed map.
- The huge zero folio.
- A present PMD entry.
- A softleaf entry.

The code at this point calls folio_test_anon() on a known-NULL folio.

Logic that explicitly dereferences NULL like this is hard to understand,
and makes debugging potentially more difficult.

Add an else branch to handle this case and WARN().

No functional change intended.
Link: https://lore.kernel.org/all/6b3d7ad7-49e1-407a-903d-3103704160d8@lucifer.local/
Reviewed-by: Baolin Wang
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3c9e2ebaacfa..0056ac27ec9a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2385,6 +2385,10 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		if (!thp_migration_supported())
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+	} else {
+		WARN_ON_ONCE(true);
+		spin_unlock(ptl);
+		return true;
 	}
 
 	if (folio_test_anon(folio)) {
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache,
    Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
    Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Kiryl Shutsemau,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 05/13] mm/huge_memory: add a common exit path to zap_huge_pmd()
Date: Fri, 20 Mar 2026 18:07:22 +0000
Message-ID: <6b281d8ed972dff0e89bdcbdd810c96c7ae8c9dc.1774029655.git.ljs@kernel.org>

Other than when we fail to acquire the PTL, we always need to unlock the
PTL on exit, and optionally need to flush. The code is currently very
duplicated in this respect, so default flush_needed to false, set it to
true in the one case which requires it, and share the same logic across
all exit paths.
This also makes flush_needed make more sense as a function-scope value (we
don't need to flush in the PFN map/mixed map, huge zero folio, or error
cases, for instance).

Reviewed-by: Baolin Wang
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0056ac27ec9a..b9d9acfef147 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2338,7 +2338,7 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
 	struct folio *folio = NULL;
-	bool flush_needed = true;
+	bool flush_needed = false;
 	spinlock_t *ptl;
 	pmd_t orig_pmd;
 
@@ -2360,19 +2360,18 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	if (vma_is_special_huge(vma)) {
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
-		spin_unlock(ptl);
-		return true;
+		goto out;
 	}
 	if (is_huge_zero_pmd(orig_pmd)) {
 		if (!vma_is_dax(vma) || arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
-		spin_unlock(ptl);
-		return true;
+		goto out;
 	}
 
 	if (pmd_present(orig_pmd)) {
 		struct page *page = pmd_page(orig_pmd);
 
+		flush_needed = true;
 		folio = page_folio(page);
 		folio_remove_rmap_pmd(folio, page, vma);
 		WARN_ON_ONCE(folio_mapcount(folio) < 0);
@@ -2381,14 +2380,12 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
 		folio = softleaf_to_folio(entry);
-		flush_needed = false;
 
 		if (!thp_migration_supported())
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
 	} else {
 		WARN_ON_ONCE(true);
-		spin_unlock(ptl);
-		return true;
+		goto out;
 	}
 
 	if (folio_test_anon(folio)) {
@@ -2415,10 +2412,10 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		folio_put(folio);
 	}
 
+out:
 	spin_unlock(ptl);
 	if (flush_needed)
 		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
-
 	return true;
 }
 
-- 
2.53.0

From nobody Sat Apr 4 03:18:42
uh/fo/k2BaWuOi7W4077ogPlVVMrWSK+9gSuNoRlv98dIx2LGZcYzQNxNDQnfOdJm0 BVSexAWDwkIdfyQzhpP9bT5oYRf5T4nY1YC7wuILcRjSnBluxAsLLh534poxGCqJDK MfGrawTbPgGUA== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , Zi Yan , Baolin Wang , "Liam R . Howlett" , Nico Pache , Ryan Roberts , Dev Jain , Barry Song , Lance Yang , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Kiryl Shutsemau , linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v3 06/13] mm/huge_memory: remove unnecessary VM_BUG_ON_PAGE() Date: Fri, 20 Mar 2026 18:07:23 +0000 Message-ID: X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" This has been around since the beginnings of the THP implementation. I think we can safely assume that, if we have a THP folio, it will have a head page. 
Reviewed-by: Baolin Wang
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b9d9acfef147..4add863cd18f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2375,7 +2375,6 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		folio = page_folio(page);
 		folio_remove_rmap_pmd(folio, page, vma);
 		WARN_ON_ONCE(folio_mapcount(folio) < 0);
-		VM_BUG_ON_PAGE(!PageHead(page), page);
 	} else if (pmd_is_valid_softleaf(orig_pmd)) {
 		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 07/13] mm/huge_memory: deduplicate zap deposited table call
Date: Fri, 20 Mar 2026 18:07:24 +0000
Message-ID: <71f576a1fbcd27a86322d12caa937bcdacf75407.1774029655.git.ljs@kernel.org>

Rather than having separate logic in each case to determine whether to
zap the deposited page table, simply track this via a boolean. We default
it to whether the architecture requires a deposit, and update it as
required elsewhere.
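[Editor's note: a minimal userspace sketch of the pattern this commit message describes. All names here (`arch_needs_deposit_stub()`, `zap_sketch()`) are hypothetical stand-ins, not the kernel functions themselves; the point is the shape - default the flag from a predicate, only flip it to true in branches, and perform the cleanup exactly once on the exit path.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for arch_needs_pgtable_deposit(); most architectures say no. */
static bool arch_needs_deposit_stub(void)
{
	return false;
}

/*
 * Returns whether the single cleanup call (zap_deposited_table() in the
 * real code) would fire. Branches never call the cleanup themselves;
 * they only set the flag, so the call site exists exactly once.
 */
static bool zap_sketch(bool is_anon, bool is_zero_pmd, bool is_dax)
{
	bool has_deposit = arch_needs_deposit_stub();

	if (is_zero_pmd) {
		if (!is_dax)
			has_deposit = true;
		goto out;
	}
	if (is_anon)
		has_deposit = true;
out:
	/* The real code calls zap_deposited_table() here when true. */
	return has_deposit;
}
```

This mirrors how the patch replaces three separate `zap_deposited_table()` call sites with one flag checked at `out:`.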
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Baolin Wang
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4add863cd18f..fca44aec6022 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2337,6 +2337,7 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
 bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
+	bool has_deposit = arch_needs_pgtable_deposit();
 	struct folio *folio = NULL;
 	bool flush_needed = false;
 	spinlock_t *ptl;
@@ -2357,23 +2358,19 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 				tlb->fullmm);
 	arch_check_zapped_pmd(vma, orig_pmd);
 	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
-	if (vma_is_special_huge(vma)) {
-		if (arch_needs_pgtable_deposit())
-			zap_deposited_table(tlb->mm, pmd);
+	if (vma_is_special_huge(vma))
 		goto out;
-	}
 	if (is_huge_zero_pmd(orig_pmd)) {
-		if (!vma_is_dax(vma) || arch_needs_pgtable_deposit())
-			zap_deposited_table(tlb->mm, pmd);
+		if (!vma_is_dax(vma))
+			has_deposit = true;
 		goto out;
 	}
 
 	if (pmd_present(orig_pmd)) {
-		struct page *page = pmd_page(orig_pmd);
+		folio = pmd_folio(orig_pmd);
 
 		flush_needed = true;
-		folio = page_folio(page);
-		folio_remove_rmap_pmd(folio, page, vma);
+		folio_remove_rmap_pmd(folio, &folio->page, vma);
 		WARN_ON_ONCE(folio_mapcount(folio) < 0);
 	} else if (pmd_is_valid_softleaf(orig_pmd)) {
 		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
@@ -2388,11 +2385,9 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	}
 
 	if (folio_test_anon(folio)) {
-		zap_deposited_table(tlb->mm, pmd);
+		has_deposit = true;
 		add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 	} else {
-		if (arch_needs_pgtable_deposit())
-			zap_deposited_table(tlb->mm, pmd);
 		add_mm_counter(tlb->mm, mm_counter_file(folio),
 			       -HPAGE_PMD_NR);
 
@@ -2412,6 +2407,9 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	}
 
 out:
+	if (has_deposit)
+		zap_deposited_table(tlb->mm, pmd);
+
 	spin_unlock(ptl);
 	if (flush_needed)
 		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 08/13] mm/huge_memory: remove unnecessary sanity checks
Date: Fri, 20 Mar 2026 18:07:25 +0000
Message-ID: <0c4c5ab247c90f80cf44718e8124b217d6a22544.1774029655.git.ljs@kernel.org>

These checks have been in place since 2014; I think we can safely assume
we no longer need them as runtime checks. In addition, there are four
other invocations of folio_remove_rmap_pmd(), none of which make this
assertion.

If we did need the assertion, it would belong in folio_remove_rmap_pmd()
itself, and as a VM_WARN_ON_ONCE(). However, these checks seem
superfluous, so simply remove them.
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fca44aec6022..c5b16c218900 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2371,7 +2371,6 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		flush_needed = true;
 		folio_remove_rmap_pmd(folio, &folio->page, vma);
-		WARN_ON_ONCE(folio_mapcount(folio) < 0);
 	} else if (pmd_is_valid_softleaf(orig_pmd)) {
 		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
@@ -2402,7 +2401,6 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	if (folio_is_device_private(folio)) {
 		folio_remove_rmap_pmd(folio, &folio->page, vma);
-		WARN_ON_ONCE(folio_mapcount(folio) < 0);
 		folio_put(folio);
 	}
 
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 09/13] mm/huge_memory: use mm instead of tlb->mm
Date: Fri, 20 Mar 2026 18:07:26 +0000
Message-ID: <98104cde87e4b2aabeb16f236b8731591594457f.1774029655.git.ljs@kernel.org>

Reduce repetition, and lay the groundwork for further refactoring, by
keeping this value in a separate local variable.
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Baolin Wang
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c5b16c218900..673d0c4734ad 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2338,6 +2338,7 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
 	bool has_deposit = arch_needs_pgtable_deposit();
+	struct mm_struct *mm = tlb->mm;
 	struct folio *folio = NULL;
 	bool flush_needed = false;
 	spinlock_t *ptl;
@@ -2385,9 +2386,9 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	if (folio_test_anon(folio)) {
 		has_deposit = true;
-		add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+		add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 	} else {
-		add_mm_counter(tlb->mm, mm_counter_file(folio),
+		add_mm_counter(mm, mm_counter_file(folio),
 			       -HPAGE_PMD_NR);
 
 		/*
@@ -2406,7 +2407,7 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 out:
 	if (has_deposit)
-		zap_deposited_table(tlb->mm, pmd);
+		zap_deposited_table(mm, pmd);
 
 	spin_unlock(ptl);
 	if (flush_needed)
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 10/13] mm/huge_memory: separate out the folio part of zap_huge_pmd()
Date: Fri, 20 Mar 2026 18:07:27 +0000
Message-ID: <6c4db67952f5529da4db102a6149b9050b5dda4e.1774029655.git.ljs@kernel.org>

Place the part of the logic that manipulates counters and possibly
updates the accessed bit of the folio into its own function, to make
zap_huge_pmd() more readable.

Also rename flush_needed to is_present, as we only require a flush for
present entries. Additionally, add comments explaining what we do with
softleaf entries and why.

This also lays the groundwork for further refactoring.

Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Baolin Wang
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 61 +++++++++++++++++++++++++++---------------------
 1 file changed, 35 insertions(+), 26 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 673d0c4734ad..9ddf38d68406 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2325,6 +2325,37 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
 	mm_dec_nr_ptes(mm);
 }
 
+static void zap_huge_pmd_folio(struct mm_struct *mm, struct vm_area_struct *vma,
+		pmd_t pmdval, struct folio *folio, bool is_present,
+		bool *has_deposit)
+{
+	const bool is_device_private = folio_is_device_private(folio);
+
+	/* Present and device private folios are rmappable. */
+	if (is_present || is_device_private)
+		folio_remove_rmap_pmd(folio, &folio->page, vma);
+
+	if (folio_test_anon(folio)) {
+		*has_deposit = true;
+		add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	} else {
+		add_mm_counter(mm, mm_counter_file(folio),
+			       -HPAGE_PMD_NR);
+
+		/*
+		 * Use flush_needed to indicate whether the PMD entry
+		 * is present, instead of checking pmd_present() again.
+		 */
+		if (is_present && pmd_young(pmdval) &&
+		    likely(vma_has_recency(vma)))
+			folio_mark_accessed(folio);
+	}
+
+	/* Device private folios are pinned. */
+	if (is_device_private)
+		folio_put(folio);
+}
+
 /**
  * zap_huge_pmd - Zap a huge THP which is of PMD size.
  * @tlb: The MMU gather TLB state associated with the operation.
@@ -2340,7 +2371,7 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	bool has_deposit = arch_needs_pgtable_deposit();
 	struct mm_struct *mm = tlb->mm;
 	struct folio *folio = NULL;
-	bool flush_needed = false;
+	bool is_present = false;
 	spinlock_t *ptl;
 	pmd_t orig_pmd;
 
@@ -2369,14 +2400,11 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	if (pmd_present(orig_pmd)) {
 		folio = pmd_folio(orig_pmd);
-
-		flush_needed = true;
-		folio_remove_rmap_pmd(folio, &folio->page, vma);
+		is_present = true;
 	} else if (pmd_is_valid_softleaf(orig_pmd)) {
 		const softleaf_t entry = softleaf_from_pmd(orig_pmd);
 
 		folio = softleaf_to_folio(entry);
-
 		if (!thp_migration_supported())
 			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
 	} else {
@@ -2384,33 +2412,14 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		goto out;
 	}
 
-	if (folio_test_anon(folio)) {
-		has_deposit = true;
-		add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
-	} else {
-		add_mm_counter(mm, mm_counter_file(folio),
-			       -HPAGE_PMD_NR);
-
-		/*
-		 * Use flush_needed to indicate whether the PMD entry
-		 * is present, instead of checking pmd_present() again.
-		 */
-		if (flush_needed && pmd_young(orig_pmd) &&
-		    likely(vma_has_recency(vma)))
-			folio_mark_accessed(folio);
-	}
-
-	if (folio_is_device_private(folio)) {
-		folio_remove_rmap_pmd(folio, &folio->page, vma);
-		folio_put(folio);
-	}
+	zap_huge_pmd_folio(mm, vma, orig_pmd, folio, is_present, &has_deposit);
 
 out:
 	if (has_deposit)
 		zap_deposited_table(mm, pmd);
 
 	spin_unlock(ptl);
-	if (flush_needed)
+	if (is_present)
 		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
 	return true;
 }
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 11/13] mm: add softleaf_is_valid_pmd_entry(), pmd_to_softleaf_folio()
Date: Fri, 20 Mar 2026 18:07:28 +0000

Separate pmd_is_valid_softleaf() into its component parts, then use the
new softleaf_is_valid_pmd_entry() predicate to implement
pmd_to_softleaf_folio(). This returns the folio associated with a
softleaf entry at PMD level, and expects the entry to be valid at PMD
level.

If CONFIG_DEBUG_VM is set, assert if the entry is invalid; either way,
return NULL in this case.

This lays the groundwork for further refactoring.
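[Editor's note: a userspace sketch of the refactoring shape described above, using hypothetical types and names rather than the kernel's `softleaf_t` API - the entry-level check is factored out of the pmd-level predicate so a converter can reuse it and return NULL instead of decoding an invalid entry.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a softleaf-style entry. */
enum leaf_kind { LEAF_DEVICE_PRIVATE, LEAF_MIGRATION, LEAF_OTHER };

typedef struct {
	enum leaf_kind kind;
} leaf_t;

/* Entry-level predicate, analogous to softleaf_is_valid_pmd_entry(). */
static bool leaf_is_valid_pmd_entry(leaf_t entry)
{
	/* Only device private and migration entries are valid at PMD level. */
	return entry.kind == LEAF_DEVICE_PRIVATE ||
	       entry.kind == LEAF_MIGRATION;
}

/*
 * Converter reusing the predicate, analogous to pmd_to_softleaf_folio():
 * invalid entries yield NULL (the kernel version additionally warns under
 * CONFIG_DEBUG_VM) rather than being decoded into garbage.
 */
static const char *leaf_to_folio_name(leaf_t entry)
{
	if (!leaf_is_valid_pmd_entry(entry))
		return NULL;
	return "folio";
}
```

The design point mirrored here: once the check operates on the decoded entry rather than the raw PMD, both the existing predicate and the new converter share one source of truth for validity.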
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 include/linux/leafops.h | 39 +++++++++++++++++++++++++++++++++++----
 1 file changed, 35 insertions(+), 4 deletions(-)

diff --git a/include/linux/leafops.h b/include/linux/leafops.h
index dd4130b7cb7f..65957283fa9f 100644
--- a/include/linux/leafops.h
+++ b/include/linux/leafops.h
@@ -603,7 +603,20 @@ static inline bool pmd_is_migration_entry(pmd_t pmd)
 }
 
 /**
- * pmd_is_valid_softleaf() - Is this PMD entry a valid leaf entry?
+ * softleaf_is_valid_pmd_entry() - Is the specified softleaf entry obtained from
+ * a PMD one that we support at PMD level?
+ * @entry: Entry to check.
+ * Returns: true if the softleaf entry is valid at PMD, otherwise false.
+ */
+static inline bool softleaf_is_valid_pmd_entry(softleaf_t entry)
+{
+	/* Only device private, migration entries valid for PMD. */
+	return softleaf_is_device_private(entry) ||
+	       softleaf_is_migration(entry);
+}
+
+/**
+ * pmd_is_valid_softleaf() - Is this PMD entry a valid softleaf entry?
  * @pmd: PMD entry.
  *
  * PMD leaf entries are valid only if they are device private or migration
@@ -616,9 +629,27 @@ static inline bool pmd_is_valid_softleaf(pmd_t pmd)
 {
 	const softleaf_t entry = softleaf_from_pmd(pmd);
 
-	/* Only device private, migration entries valid for PMD. */
-	return softleaf_is_device_private(entry) ||
-	       softleaf_is_migration(entry);
+	return softleaf_is_valid_pmd_entry(entry);
+}
+
+/**
+ * pmd_to_softleaf_folio() - Convert the PMD entry to a folio.
+ * @pmd: PMD entry.
+ *
+ * The PMD entry is expected to be a valid PMD softleaf entry.
+ *
+ * Returns: the folio the softleaf entry references if this is a valid softleaf
+ * entry, otherwise NULL.
+ */
+static inline struct folio *pmd_to_softleaf_folio(pmd_t pmd)
+{
+	const softleaf_t entry = softleaf_from_pmd(pmd);
+
+	if (!softleaf_is_valid_pmd_entry(entry)) {
+		VM_WARN_ON_ONCE(true);
+		return NULL;
+	}
+	return softleaf_to_folio(entry);
 }
 
 #endif /* CONFIG_MMU */
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 12/13] mm/huge_memory: add and use normal_or_softleaf_folio_pmd()
Date: Fri, 20 Mar 2026 18:07:29 +0000

Now that pmd_to_softleaf_folio() is available to us - which also raises a
CONFIG_DEBUG_VM warning on an unexpectedly invalid softleaf entry - we
can abstract the folio handling altogether.

vm_normal_folio_pmd() deals with the huge zero page (which is present),
as well as PFN map/mixed map mappings, in both cases returning NULL.
Otherwise, we try to obtain the softleaf folio.

This makes the logic far easier to comprehend, and has us use the
standard vm_normal_folio_pmd() path for decoding present entries.

Finally, we have to update the flushing logic to flush only if a folio is
established.

This patch also makes the 'is_present' value more accurate - PFN map,
mixed map and huge zero page entries are present, just not present and
'normal'.
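[Editor's note: a userspace sketch of the dispatch this commit message describes. The stub names are hypothetical; the shape is what matters - present entries go through the "normal folio" lookup, which returns NULL for special mappings (huge zero page, PFN/mixed maps), while non-present entries go through the softleaf decoder.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Stand-in for vm_normal_folio_pmd(): special present mappings
 * (huge zero page, PFN map, mixed map) have no normal folio.
 */
static const char *normal_folio_stub(bool is_special)
{
	return is_special ? NULL : "normal folio";
}

/* Stand-in for pmd_to_softleaf_folio() on a valid non-present entry. */
static const char *softleaf_folio_stub(void)
{
	return "softleaf folio";
}

/* Analogous to normal_or_softleaf_folio_pmd(): one dispatch point. */
static const char *folio_for_entry(bool is_present, bool is_special)
{
	if (is_present)
		return normal_folio_stub(is_special);
	return softleaf_folio_stub();
}
```

A NULL result then drives the caller's fallback path (in the patch, the huge-zero-page deposit check), and the flush is performed only when a folio was actually established.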
Signed-off-by: Lorenzo Stoakes (Oracle) Reviewed-by: Suren Baghdasaryan --- mm/huge_memory.c | 47 +++++++++++++++++++---------------------------- 1 file changed, 19 insertions(+), 28 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 9ddf38d68406..5831966391bd 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2342,10 +2342,6 @@ static void zap_huge_pmd_folio(struct mm_struct *mm,= struct vm_area_struct *vma, add_mm_counter(mm, mm_counter_file(folio), -HPAGE_PMD_NR); =20 - /* - * Use flush_needed to indicate whether the PMD entry - * is present, instead of checking pmd_present() again. - */ if (is_present && pmd_young(pmdval) && likely(vma_has_recency(vma))) folio_mark_accessed(folio); @@ -2356,6 +2352,17 @@ static void zap_huge_pmd_folio(struct mm_struct *mm,= struct vm_area_struct *vma, folio_put(folio); } =20 +static struct folio *normal_or_softleaf_folio_pmd(struct vm_area_struct *v= ma, + unsigned long addr, pmd_t pmdval, bool is_present) +{ + if (is_present) + return vm_normal_folio_pmd(vma, addr, pmdval); + + if (!thp_migration_supported()) + WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!"); + return pmd_to_softleaf_folio(pmdval); +} + /** * zap_huge_pmd - Zap a huge THP which is of PMD size. * @tlb: The MMU gather TLB state associated with the operation. 
@@ -2390,36 +2397,20 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm= _area_struct *vma, tlb->fullmm); arch_check_zapped_pmd(vma, orig_pmd); tlb_remove_pmd_tlb_entry(tlb, pmd, addr); - if (vma_is_special_huge(vma)) - goto out; - if (is_huge_zero_pmd(orig_pmd)) { - if (!vma_is_dax(vma)) - has_deposit =3D true; - goto out; - } =20 - if (pmd_present(orig_pmd)) { - folio =3D pmd_folio(orig_pmd); - is_present =3D true; - } else if (pmd_is_valid_softleaf(orig_pmd)) { - const softleaf_t entry =3D softleaf_from_pmd(orig_pmd); + is_present =3D pmd_present(orig_pmd); + folio =3D normal_or_softleaf_folio_pmd(vma, addr, orig_pmd, is_present); + if (folio) + zap_huge_pmd_folio(mm, vma, orig_pmd, folio, is_present, + &has_deposit); + else if (is_huge_zero_pmd(orig_pmd)) + has_deposit =3D !vma_is_dax(vma); =20 - folio =3D softleaf_to_folio(entry); - if (!thp_migration_supported()) - WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!"); - } else { - WARN_ON_ONCE(true); - goto out; - } - - zap_huge_pmd_folio(mm, vma, orig_pmd, folio, is_present, &has_deposit); - -out: if (has_deposit) zap_deposited_table(mm, pmd); =20 spin_unlock(ptl); - if (is_present) + if (is_present && folio) tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE); return true; } --=20 2.53.0 From nobody Sat Apr 4 03:18:42 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1EC653E316E for ; Fri, 20 Mar 2026 18:08:09 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774030089; cv=none; b=BZ5vAAZJZ+Z3gBsx25goRCwpMh88Mh6tPVSUqDi2nLSiepzF7xmOSc1FK2juINzRtaLdq8szQMkw3ZmIGddpU5vKkqajj2pz9hXM3rW9EvO/y3MzVKw8cutcjqSfy/ARnahZfU+sM1tnn6uWE9pJWDLDWxbBA2r9BadDhaJa5k8= 
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH v3 13/13] mm/huge_memory: add and use has_deposited_pgtable()
Date: Fri, 20 Mar 2026 18:07:30 +0000

Rather than threading has_deposited through zap_huge_pmd(), make things
clearer by adding has_deposited_pgtable(), with a comment on each case
describing why a page table is deposited.

Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 mm/huge_memory.c | 33 ++++++++++++++++++++++++---------
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5831966391bd..610a6184e92c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2326,8 +2326,7 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
 }
 
 static void zap_huge_pmd_folio(struct mm_struct *mm, struct vm_area_struct *vma,
-		pmd_t pmdval, struct folio *folio, bool is_present,
-		bool *has_deposit)
+		pmd_t pmdval, struct folio *folio, bool is_present)
 {
 	const bool is_device_private = folio_is_device_private(folio);
 
@@ -2336,7 +2335,6 @@ static void zap_huge_pmd_folio(struct mm_struct *mm, struct vm_area_struct *vma,
 	folio_remove_rmap_pmd(folio, &folio->page, vma);
 
 	if (folio_test_anon(folio)) {
-		*has_deposit = true;
 		add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
 	} else {
 		add_mm_counter(mm, mm_counter_file(folio),
@@ -2363,6 +2361,27 @@ static struct folio *normal_or_softleaf_folio_pmd(struct vm_area_struct *vma,
 	return pmd_to_softleaf_folio(pmdval);
 }
 
+static bool has_deposited_pgtable(struct vm_area_struct *vma, pmd_t pmdval,
+		struct folio *folio)
+{
+	/* Some architectures require unconditional depositing. */
+	if (arch_needs_pgtable_deposit())
+		return true;
+
+	/*
+	 * Huge zero always deposited except for DAX which handles itself, see
+	 * set_huge_zero_folio().
+	 */
+	if (is_huge_zero_pmd(pmdval))
+		return !vma_is_dax(vma);
+
+	/*
+	 * Otherwise, only anonymous folios are deposited, see
+	 * __do_huge_pmd_anonymous_page().
+	 */
+	return folio && folio_test_anon(folio);
+}
+
 /**
  * zap_huge_pmd - Zap a huge THP which is of PMD size.
  * @tlb: The MMU gather TLB state associated with the operation.
@@ -2375,7 +2394,6 @@ static struct folio *normal_or_softleaf_folio_pmd(struct vm_area_struct *vma,
 bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		pmd_t *pmd, unsigned long addr)
 {
-	bool has_deposit = arch_needs_pgtable_deposit();
 	struct mm_struct *mm = tlb->mm;
 	struct folio *folio = NULL;
 	bool is_present = false;
@@ -2401,12 +2419,9 @@ bool zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	is_present = pmd_present(orig_pmd);
 	folio = normal_or_softleaf_folio_pmd(vma, addr, orig_pmd, is_present);
 	if (folio)
-		zap_huge_pmd_folio(mm, vma, orig_pmd, folio, is_present,
-				&has_deposit);
-	else if (is_huge_zero_pmd(orig_pmd))
-		has_deposit = !vma_is_dax(vma);
+		zap_huge_pmd_folio(mm, vma, orig_pmd, folio, is_present);
 
-	if (has_deposited_pgtable(vma, orig_pmd, folio))
+	if (has_deposited_pgtable(vma, orig_pmd, folio))
 		zap_deposited_table(mm, pmd);
 
 	spin_unlock(ptl);
--
2.53.0