From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett", Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Kiryl Shutsemau,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 01/13] mm/huge_memory: simplify vma_is_special_huge()
Date: Fri, 20 Mar 2026 18:07:18 +0000

This function is confused: it overloads the term 'special' yet again, and
checks for DAX even though in many cases callers explicitly exclude DAX
before invoking the predicate. It also unnecessarily checks vma->vm_file -
this must be present for a driver to have set VMA_MIXEDMAP_BIT or
VMA_PFNMAP_BIT.

A far simpler form is to reverse the DAX predicate and return false if DAX
is set. This makes sense from the point of view of 'special' as defined in
vm_normal_page(), as DAX mappings do potentially have retrievable folios.

There is also no need to have this in mm.h, so move it to huge_memory.c.

No functional change intended.
Signed-off-by: Lorenzo Stoakes (Oracle)
Reviewed-by: Suren Baghdasaryan
---
 include/linux/huge_mm.h |  4 ++--
 include/linux/mm.h      | 16 ----------------
 mm/huge_memory.c        | 30 +++++++++++++++++++++++-------
 3 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 8d801ed378db..af726f0aa30d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -83,7 +83,7 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
  * file is never split and the MAX_PAGECACHE_ORDER limit does not apply to
  * it. Same to PFNMAPs where there's neither page* nor pagecache.
  */
-#define THP_ORDERS_ALL_SPECIAL \
+#define THP_ORDERS_ALL_SPECIAL_DAX \
 	(BIT(PMD_ORDER) | BIT(PUD_ORDER))
 #define THP_ORDERS_ALL_FILE_DEFAULT \
 	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))
@@ -92,7 +92,7 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
  * Mask of all large folio orders supported for THP.
  */
 #define THP_ORDERS_ALL \
-	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_SPECIAL | THP_ORDERS_ALL_FILE_DEFAULT)
+	(THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_SPECIAL_DAX | THP_ORDERS_ALL_FILE_DEFAULT)
 
 enum tva_type {
 	TVA_SMAPS,		/* Exposing "THPeligible:" in smaps. */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8aadf115278e..6b07ee99b38b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -5078,22 +5078,6 @@ long copy_folio_from_user(struct folio *dst_folio,
 			  const void __user *usr_src,
 			  bool allow_pagefault);
 
-/**
- * vma_is_special_huge - Are transhuge page-table entries considered special?
- * @vma: Pointer to the struct vm_area_struct to consider
- *
- * Whether transhuge page-table entries are considered "special" following
- * the definition in vm_normal_page().
- *
- * Return: true if transhuge page-table entries should be considered special,
- * false otherwise.
- */
-static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
-{
-	return vma_is_dax(vma) || (vma->vm_file &&
-	    (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
-}
-
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
 
 #if MAX_NUMNODES > 1
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e90d08db219d..2775309b317a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -103,6 +103,14 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
 	return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
 }
 
+/* If returns true, we are unable to access the VMA's folios. */
+static bool vma_is_special_huge(const struct vm_area_struct *vma)
+{
+	if (vma_is_dax(vma))
+		return false;
+	return vma_test_any(vma, VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT);
+}
+
 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 					 vm_flags_t vm_flags,
 					 enum tva_type type,
@@ -116,8 +124,8 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	/* Check the intersection of requested and supported orders. */
 	if (vma_is_anonymous(vma))
 		supported_orders = THP_ORDERS_ALL_ANON;
-	else if (vma_is_special_huge(vma))
-		supported_orders = THP_ORDERS_ALL_SPECIAL;
+	else if (vma_is_dax(vma) || vma_is_special_huge(vma))
+		supported_orders = THP_ORDERS_ALL_SPECIAL_DAX;
 	else
 		supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
 
@@ -2338,7 +2346,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 						tlb->fullmm);
 	arch_check_zapped_pmd(vma, orig_pmd);
 	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
-	if (!vma_is_dax(vma) && vma_is_special_huge(vma)) {
+	if (vma_is_special_huge(vma)) {
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(tlb->mm, pmd);
 		spin_unlock(ptl);
@@ -2840,7 +2848,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	orig_pud = pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
 	arch_check_zapped_pud(vma, orig_pud);
 	tlb_remove_pud_tlb_entry(tlb, pud, addr);
-	if (!vma_is_dax(vma) && vma_is_special_huge(vma)) {
+	if (vma_is_special_huge(vma)) {
 		spin_unlock(ptl);
 		/* No zero page support yet */
 	} else {
@@ -2991,7 +2999,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		 */
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(mm, pmd);
-		if (!vma_is_dax(vma) && vma_is_special_huge(vma))
+		if (vma_is_special_huge(vma))
 			return;
 		if (unlikely(pmd_is_migration_entry(old_pmd))) {
 			const softleaf_t old_entry = softleaf_from_pmd(old_pmd);
@@ -4517,8 +4525,16 @@ static void split_huge_pages_all(void)
 
 static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
 {
-	return vma_is_special_huge(vma) || (vma->vm_flags & VM_IO) ||
-	       is_vm_hugetlb_page(vma);
+	if (vma_is_dax(vma))
+		return true;
+	if (vma_is_special_huge(vma))
+		return true;
+	if (vma_test(vma, VMA_IO_BIT))
+		return true;
+	if (is_vm_hugetlb_page(vma))
+		return true;
+
+	return false;
 }
 
 static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
-- 
2.53.0