[PATCH v1 25/29] mm: simplify folio_expected_ref_count()

Posted by David Hildenbrand 3 months, 1 week ago
Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
folio_test_anon() test only.

... but staring at the users, this function should never even have been
called on movable_ops pages. E.g.,
* __buffer_migrate_folio() does not make sense for them
* folio_migrate_mapping() does not make sense for them
* migrate_huge_page_move_mapping() does not make sense for them
* __migrate_folio() does not make sense for them
* ... and khugepaged should never stumble over them

Let's simply refuse typed pages (which includes slab) except hugetlb,
and WARN.

Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/mm.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6a5447bd43fd8..f6ef4c4eb536b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2176,13 +2176,13 @@ static inline int folio_expected_ref_count(const struct folio *folio)
 	const int order = folio_order(folio);
 	int ref_count = 0;
 
-	if (WARN_ON_ONCE(folio_test_slab(folio)))
+	if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
 		return 0;
 
 	if (folio_test_anon(folio)) {
 		/* One reference per page from the swapcache. */
 		ref_count += folio_test_swapcache(folio) << order;
-	} else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
+	} else {
 		/* One reference per page from the pagecache. */
 		ref_count += !!folio->mapping << order;
 		/* One reference from PG_private. */
-- 
2.49.0
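With the hunk applied, the helper reads roughly as below. The lines past the hunk
(the PG_private and mapcount tail) are not shown in the diff and are assumed from
the current upstream definition, so treat this as a sketch rather than the exact
file contents:

static inline int folio_expected_ref_count(const struct folio *folio)
{
	const int order = folio_order(folio);
	int ref_count = 0;

	/*
	 * Typed pages (slab, page tables, buddy, ...) other than hugetlb
	 * should never show up here; warn and report no expected references.
	 */
	if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
		return 0;

	if (folio_test_anon(folio)) {
		/* One reference per page from the swapcache. */
		ref_count += folio_test_swapcache(folio) << order;
	} else {
		/* One reference per page from the pagecache. */
		ref_count += !!folio->mapping << order;
		/* One reference from PG_private. */
		ref_count += folio_test_private(folio);
	}

	/* Plus one reference per page table mapping (assumed tail, not part of the hunk). */
	return ref_count + folio_mapcount(folio);
}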
Re: [PATCH v1 25/29] mm: simplify folio_expected_ref_count()
Posted by Harry Yoo 3 months, 1 week ago
On Mon, Jun 30, 2025 at 03:00:06PM +0200, David Hildenbrand wrote:
> Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
> folio_test_anon() test only.
> 
> ... but staring at the users, this function should never even have been
> called on movable_ops pages. E.g.,
> * __buffer_migrate_folio() does not make sense for them
> * folio_migrate_mapping() does not make sense for them
> * migrate_huge_page_move_mapping() does not make sense for them
> * __migrate_folio() does not make sense for them
> * ... and khugepaged should never stumble over them
> 
> Let's simply refuse typed pages (which includes slab) except hugetlb,
> and WARN.
> 
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---

Yup, it doesn't really make sense to do this for typed pages other than
hugetlb, because they can't be mapped to userspace.

LGTM,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>

-- 
Cheers,
Harry / Hyeonggon
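The callers listed in the patch all use the helper in roughly the same way; a
simplified sketch along the lines of folio_migrate_mapping() (xarray locking and
the extra_count handling are left out, so this illustrates the pattern rather
than the exact code):

	/* Expect only the references counted by the helper, plus our own. */
	int expected_count = folio_expected_ref_count(folio) + 1;

	/* Freeze the refcount only if nobody holds an unexpected reference. */
	if (!folio_ref_freeze(folio, expected_count))
		return -EAGAIN;

	/* ... replace the mapping entries, then folio_ref_unfreeze() ... */

A typed page wrongly reaching such a path would have its references miscounted
(the helper just returns 0), which is what the new WARN_ON_ONCE() catches.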
Re: [PATCH v1 25/29] mm: simplify folio_expected_ref_count()
Posted by Lorenzo Stoakes 3 months, 1 week ago
On Mon, Jun 30, 2025 at 03:00:06PM +0200, David Hildenbrand wrote:
> Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
> folio_test_anon() test only.
>
> ... but staring at the users, this function should never even have been
> called on movable_ops pages. E.g.,
> * __buffer_migrate_folio() does not make sense for them
> * folio_migrate_mapping() does not make sense for them
> * migrate_huge_page_move_mapping() does not make sense for them
> * __migrate_folio() does not make sense for them
> * ... and khugepaged should never stumble over them
>
> Let's simply refuse typed pages (which includes slab) except hugetlb,
> and WARN.

I guess also:

* PGTY_buddy - raw buddy allocator pages shouldn't be here...
* PGTY_table - nor page tables...
* PGTY_guard - nor whatever kind of guard this is, I assume? (Not my precious guard regions :P)
* PGTY_unaccepted - nor unaccepted memory perhaps?
* PGTY_large_kmalloc - slab, shouldn't be here

I'd maybe delineate these cases also.
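
Delineating them might look something like the below (purely illustrative, and
only a subset of the types; the single page_has_type() check in the patch
already covers all of them at once):

	if (!folio_test_hugetlb(folio)) {
		if (WARN_ON_ONCE(folio_test_slab(folio)))
			return 0;
		if (WARN_ON_ONCE(PageTable(&folio->page)))
			return 0;
		if (WARN_ON_ONCE(PageBuddy(&folio->page)))
			return 0;
		/* ... and likewise for the remaining PGTY_* types ... */
	}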

>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

On the assumption that no typed page should be tolerated here:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> ---
>  include/linux/mm.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 6a5447bd43fd8..f6ef4c4eb536b 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2176,13 +2176,13 @@ static inline int folio_expected_ref_count(const struct folio *folio)
>  	const int order = folio_order(folio);
>  	int ref_count = 0;
>
> -	if (WARN_ON_ONCE(folio_test_slab(folio)))
> +	if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
>  		return 0;
>
>  	if (folio_test_anon(folio)) {
>  		/* One reference per page from the swapcache. */
>  		ref_count += folio_test_swapcache(folio) << order;
> -	} else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
> +	} else {
>  		/* One reference per page from the pagecache. */
>  		ref_count += !!folio->mapping << order;
>  		/* One reference from PG_private. */
> --
> 2.49.0
>