From: Yafang Shao <laoar.shao@gmail.com>
To: akpm@linux-foundation.org, david@redhat.com, ziy@nvidia.com,
	baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com,
	Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
	dev.jain@arm.com, hannes@cmpxchg.org, usamaarif642@gmail.com,
	gutierrez.asier@huawei-partners.com, willy@infradead.org,
	ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	ameryhung@gmail.com, rientjes@google.com, corbet@lwn.net,
	21cnbao@gmail.com, shakeel.butt@linux.dev, tj@kernel.org,
	lance.yang@linux.dev, rdunlap@infradead.org
Cc: bpf@vger.kernel.org, linux-mm@kvack.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH v9 mm-new 02/11] mm: thp: remove vm_flags parameter from thp_vma_allowable_order()
Date: Tue, 30 Sep 2025 13:58:17 +0800
Message-Id: <20250930055826.9810-3-laoar.shao@gmail.com>
In-Reply-To: <20250930055826.9810-1-laoar.shao@gmail.com>
References: <20250930055826.9810-1-laoar.shao@gmail.com>

Because all callers of thp_vma_allowable_order() pass vma->vm_flags as the
vm_flags argument, we can remove the parameter and have the function access
vma->vm_flags directly.

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Usama Arif <usamaarif642@gmail.com>
---
 fs/proc/task_mmu.c      |  3 +--
 include/linux/huge_mm.h | 16 ++++++++--------
 mm/huge_memory.c        |  4 ++--
 mm/khugepaged.c         | 10 +++++-----
 mm/memory.c             | 11 +++++------
 mm/shmem.c              |  2 +-
 6 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index fc35a0543f01..e713d1905750 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1369,8 +1369,7 @@ static int show_smap(struct seq_file *m, void *v)
 	__show_smap(m, &mss, false);

 	seq_printf(m, "THPeligible: %8u\n",
-		   !!thp_vma_allowable_orders(vma, vma->vm_flags, TVA_SMAPS,
-					      THP_ORDERS_ALL));
+		   !!thp_vma_allowable_orders(vma, TVA_SMAPS, THP_ORDERS_ALL));

 	if (arch_pkeys_enabled())
 		seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f327d62fc985..a635dcbb2b99 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -101,8 +101,8 @@ enum tva_type {
 	TVA_FORCED_COLLAPSE,	/* Forced collapse (e.g. MADV_COLLAPSE). */
 };

-#define thp_vma_allowable_order(vma, vm_flags, type, order) \
-	(!!thp_vma_allowable_orders(vma, vm_flags, type, BIT(order)))
+#define thp_vma_allowable_order(vma, type, order) \
+	(!!thp_vma_allowable_orders(vma, type, BIT(order)))

 #define split_folio(f) split_folio_to_list(f, NULL)

@@ -266,14 +266,12 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
 }

 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
-					 vm_flags_t vm_flags,
 					 enum tva_type type,
 					 unsigned long orders);

 /**
  * thp_vma_allowable_orders - determine hugepage orders that are allowed for vma
  * @vma: the vm area to check
- * @vm_flags: use these vm_flags instead of vma->vm_flags
  * @type: TVA type
  * @orders: bitfield of all orders to consider
  *
@@ -287,10 +285,11 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
  */
 static inline
 unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
-				       vm_flags_t vm_flags,
 				       enum tva_type type,
 				       unsigned long orders)
 {
+	vm_flags_t vm_flags = vma->vm_flags;
+
 	/*
 	 * Optimization to check if required orders are enabled early. Only
 	 * forced collapse ignores sysfs configs.
@@ -309,7 +308,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 		return 0;
 	}

-	return __thp_vma_allowable_orders(vma, vm_flags, type, orders);
+	return __thp_vma_allowable_orders(vma, type, orders);
 }

 struct thpsize {
@@ -329,8 +328,10 @@ struct thpsize {
  * through madvise or prctl.
  */
 static inline bool vma_thp_disabled(struct vm_area_struct *vma,
-				    vm_flags_t vm_flags, bool forced_collapse)
+				    bool forced_collapse)
 {
+	vm_flags_t vm_flags = vma->vm_flags;
+
 	/* Are THPs disabled for this VMA? */
 	if (vm_flags & VM_NOHUGEPAGE)
 		return true;
@@ -560,7 +561,6 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
 }

 static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
-						     vm_flags_t vm_flags,
 						     enum tva_type type,
 						     unsigned long orders)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ac6601f30e65..1ac476fe6dc5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -98,7 +98,6 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
 }

 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
-					 vm_flags_t vm_flags,
 					 enum tva_type type,
 					 unsigned long orders)
 {
@@ -106,6 +105,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	const bool in_pf = type == TVA_PAGEFAULT;
 	const bool forced_collapse = type == TVA_FORCED_COLLAPSE;
 	unsigned long supported_orders;
+	vm_flags_t vm_flags = vma->vm_flags;

 	/* Check the intersection of requested and supported orders. */
 	if (vma_is_anonymous(vma))
@@ -122,7 +122,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	if (!vma->vm_mm)		/* vdso */
 		return 0;

-	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags, forced_collapse))
+	if (thp_disabled_by_hw() || vma_thp_disabled(vma, forced_collapse))
 		return 0;

 	/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5088eedafc35..b60f1856714a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -466,7 +466,7 @@ void khugepaged_enter_mm(struct mm_struct *mm)

 void khugepaged_enter_vma(struct vm_area_struct *vma)
 {
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_KHUGEPAGED, PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, TVA_KHUGEPAGED, PMD_ORDER))
 		return;
 	khugepaged_enter_mm(vma->vm_mm);
 }
@@ -917,7 +917,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,

 	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
 		return SCAN_ADDRESS_RANGE;
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, type, PMD_ORDER))
 		return SCAN_VMA_CHECK;
 	/*
 	 * Anon VMA expected, the address may be unmapped then
@@ -1531,7 +1531,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	 * and map it by a PMD, regardless of sysfs THP settings. As such, let's
 	 * analogously elide sysfs THP settings here and force collapse.
 	 */
-	if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_FORCED_COLLAPSE, PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, TVA_FORCED_COLLAPSE, PMD_ORDER))
 		return SCAN_VMA_CHECK;

 	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
@@ -2426,7 +2426,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			progress++;
 			break;
 		}
-		if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_KHUGEPAGED, PMD_ORDER)) {
+		if (!thp_vma_allowable_order(vma, TVA_KHUGEPAGED, PMD_ORDER)) {
 skip:
 			progress++;
 			continue;
@@ -2757,7 +2757,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 	BUG_ON(vma->vm_start > start);
 	BUG_ON(vma->vm_end < end);

-	if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_FORCED_COLLAPSE, PMD_ORDER))
+	if (!thp_vma_allowable_order(vma, TVA_FORCED_COLLAPSE, PMD_ORDER))
 		return -EINVAL;

 	cc = kmalloc(sizeof(*cc), GFP_KERNEL);
diff --git a/mm/memory.c b/mm/memory.c
index 7e32eb79ba99..cd04e4894725 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4558,7 +4558,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
 	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
 	 * and suitable for swapping THP.
 	 */
-	orders = thp_vma_allowable_orders(vma, vma->vm_flags, TVA_PAGEFAULT,
+	orders = thp_vma_allowable_orders(vma, TVA_PAGEFAULT,
 					  BIT(PMD_ORDER) - 1);
 	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
 	orders = thp_swap_suitable_orders(swp_offset(entry),
@@ -5107,7 +5107,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	 * for this vma. Then filter out the orders that can't be allocated over
 	 * the faulting address and still be fully contained in the vma.
 	 */
-	orders = thp_vma_allowable_orders(vma, vma->vm_flags, TVA_PAGEFAULT,
+	orders = thp_vma_allowable_orders(vma, TVA_PAGEFAULT,
 					  BIT(PMD_ORDER) - 1);
 	orders = thp_vma_suitable_orders(vma, vmf->address, orders);

@@ -5379,7 +5379,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
 	 * PMD mappings if THPs are disabled. As we already have a THP,
 	 * behave as if we are forcing a collapse.
 	 */
-	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags,
+	if (thp_disabled_by_hw() || vma_thp_disabled(vma,
 						     /* forced_collapse=*/ true))
 		return ret;

@@ -6280,7 +6280,6 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		.gfp_mask = __get_fault_gfp_mask(vma),
 	};
 	struct mm_struct *mm = vma->vm_mm;
-	vm_flags_t vm_flags = vma->vm_flags;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	vm_fault_t ret;
@@ -6295,7 +6294,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		return VM_FAULT_OOM;
 retry_pud:
 	if (pud_none(*vmf.pud) &&
-	    thp_vma_allowable_order(vma, vm_flags, TVA_PAGEFAULT, PUD_ORDER)) {
+	    thp_vma_allowable_order(vma, TVA_PAGEFAULT, PUD_ORDER)) {
 		ret = create_huge_pud(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -6329,7 +6328,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		goto retry_pud;

 	if (pmd_none(*vmf.pmd) &&
-	    thp_vma_allowable_order(vma, vm_flags, TVA_PAGEFAULT, PMD_ORDER)) {
+	    thp_vma_allowable_order(vma, TVA_PAGEFAULT, PMD_ORDER)) {
 		ret = create_huge_pmd(&vmf);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 4855eee22731..cc2c90656b66 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1780,7 +1780,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
 	unsigned int global_orders;

-	if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
+	if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, shmem_huge_force)))
 		return 0;

 	global_orders = shmem_huge_global_enabled(inode, index, write_end,
-- 
2.47.3
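
For readers skimming the interface change, here is a minimal before/after sketch
of one call site, taken directly from the khugepaged_enter_vma() hunk above (it is
illustrative only and not part of the patch):

	/* Before: each caller passes vma->vm_flags alongside the vma. */
	if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_KHUGEPAGED, PMD_ORDER))
		return;

	/* After: the helper reads vma->vm_flags itself, so callers drop the argument. */
	if (!thp_vma_allowable_order(vma, TVA_KHUGEPAGED, PMD_ORDER))
		return;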