[PATCH v4 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.

Posted by Zi Yan 1 month, 2 weeks ago
try_folio_split_to_order(), folio_split(), __folio_split(), and
__split_unmapped_folio() do not have correct kernel-doc comment format.
Fix them.

Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 include/linux/huge_mm.h | 10 ++++++----
 mm/huge_memory.c        | 27 +++++++++++++++------------
 2 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 34f8d8453bf3..cbb2243f8e56 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -386,9 +386,9 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
 	return split_huge_page_to_list_to_order(page, NULL, new_order);
 }
 
-/*
- * try_folio_split_to_order - try to split a @folio at @page to @new_order using
- * non uniform split.
+/**
+ * try_folio_split_to_order() - try to split a @folio at @page to @new_order
+ * using non uniform split.
  * @folio: folio to be split
  * @page: split to @new_order at the given page
  * @new_order: the target split order
@@ -398,7 +398,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
  * folios are put back to LRU list. Use min_order_for_split() to get the lower
  * bound of @new_order.
  *
- * Return: 0: split is successful, otherwise split failed.
+ * Return: 0 - split is successful, otherwise split failed.
  */
 static inline int try_folio_split_to_order(struct folio *folio,
 		struct page *page, unsigned int new_order)
@@ -486,6 +486,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 /**
  * folio_test_pmd_mappable - Can we map this folio with a PMD?
  * @folio: The folio to test
+ *
+ * Return: true - @folio can be mapped, false - @folio cannot be mapped.
  */
 static inline bool folio_test_pmd_mappable(struct folio *folio)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0e24bb7e90d0..381a49c5ac3f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 		ClearPageCompound(&folio->page);
 }
 
-/*
- * It splits an unmapped @folio to lower order smaller folios in two ways.
+/**
+ * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
+ * two ways: uniform split or non-uniform split.
  * @folio: the to-be-split folio
  * @new_order: the smallest order of the after split folios (since buddy
  *             allocator like split generates folios with orders from @folio's
@@ -3603,8 +3604,8 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
  * folio containing @page. The caller needs to unlock and/or free after-split
  * folios if necessary.
  *
- * For !uniform_split, when -ENOMEM is returned, the original folio might be
- * split. The caller needs to check the input folio.
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
  */
 static int __split_unmapped_folio(struct folio *folio, int new_order,
 		struct page *split_at, struct xa_state *xas,
@@ -3722,8 +3723,8 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
 	return true;
 }
 
-/*
- * __folio_split: split a folio at @split_at to a @new_order folio
+/**
+ * __folio_split() - split a folio at @split_at to a @new_order folio
  * @folio: folio to split
  * @new_order: the order of the new folio
  * @split_at: a page within the new folio
@@ -3741,7 +3742,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
  * 1. for uniform split, @lock_at points to one of @folio's subpages;
  * 2. for buddy allocator like (non-uniform) split, @lock_at points to @folio.
  *
- * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
  * split but not to @new_order, the caller needs to check)
  */
 static int __folio_split(struct folio *folio, unsigned int new_order,
@@ -4130,14 +4131,13 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
 				unmapped);
 }
 
-/*
- * folio_split: split a folio at @split_at to a @new_order folio
+/**
+ * folio_split() - split a folio at @split_at to a @new_order folio
  * @folio: folio to split
  * @new_order: the order of the new folio
  * @split_at: a page within the new folio
- *
- * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
- * split but not to @new_order, the caller needs to check)
+ * @list: after-split folios are added to @list if not null, otherwise to LRU
+ *        list
  *
  * It has the same prerequisites and returns as
  * split_huge_page_to_list_to_order().
@@ -4151,6 +4151,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
  * [order-4, {order-3}, order-3, order-5, order-6, order-7, order-8].
  *
  * After split, folio is left locked for caller.
+ *
+ * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
+ * split but not to @new_order, the caller needs to check)
  */
 int folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct list_head *list)
-- 
2.43.0
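
For readers who do not touch kernel-doc often, the rules the hunks above apply
are: the block must open with "/**" (a plain "/*" is ignored by
scripts/kernel-doc), the first line names the function as "name() - short
description", each parameter gets an "@param:" line, and the return value is
documented under a "Return:" section. A minimal template, using placeholder
names rather than anything taken from the patch, looks like:

/**
 * my_helper() - One-line summary of what the helper does.
 * @folio: The folio being operated on.
 * @new_order: The order to split @folio to.
 *
 * Optional longer description. References like @folio and my_other_helper()
 * are turned into cross-links by the documentation build.
 *
 * Context: Optional locking or calling-context requirements.
 * Return: 0 on success, a negative errno on failure.
 */
int my_helper(struct folio *folio, unsigned int new_order);

Whether a file's comments parse cleanly can be spot-checked with, for example,
scripts/kernel-doc -none include/linux/huge_mm.h.
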
Re: [PATCH v4 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
Posted by Wei Yang 1 month, 2 weeks ago
On Wed, Oct 29, 2025 at 09:40:20PM -0400, Zi Yan wrote:
>try_folio_split_to_order(), folio_split, __folio_split(), and
>__split_unmapped_folio() do not have correct kernel-doc comment format.
>Fix them.
>
>Signed-off-by: Zi Yan <ziy@nvidia.com>
>Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>Acked-by: David Hildenbrand <david@redhat.com>

Generally looks good, though some nits below.

>---
> include/linux/huge_mm.h | 10 ++++++----
> mm/huge_memory.c        | 27 +++++++++++++++------------
> 2 files changed, 21 insertions(+), 16 deletions(-)
>
>diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>index 34f8d8453bf3..cbb2243f8e56 100644
>--- a/include/linux/huge_mm.h
>+++ b/include/linux/huge_mm.h
>@@ -386,9 +386,9 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
> 	return split_huge_page_to_list_to_order(page, NULL, new_order);
> }
> 
>-/*
>- * try_folio_split_to_order - try to split a @folio at @page to @new_order using
>- * non uniform split.
>+/**
>+ * try_folio_split_to_order() - try to split a @folio at @page to @new_order
>+ * using non uniform split.

This makes it look like try_folio_split_to_order() only performs a non-uniform
split, while the following comment mentions it will try a uniform split if the
non-uniform split is not supported.

Do you think this is a little confusing?

>  * @folio: folio to be split
>  * @page: split to @new_order at the given page
>  * @new_order: the target split order
>@@ -398,7 +398,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>  * folios are put back to LRU list. Use min_order_for_split() to get the lower
>  * bound of @new_order.
>  *
>- * Return: 0: split is successful, otherwise split failed.
>+ * Return: 0 - split is successful, otherwise split failed.
>  */
> static inline int try_folio_split_to_order(struct folio *folio,
> 		struct page *page, unsigned int new_order)
>@@ -486,6 +486,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
> /**
>  * folio_test_pmd_mappable - Can we map this folio with a PMD?
>  * @folio: The folio to test
>+ *
>+ * Return: true - @folio can be mapped, false - @folio cannot be mapped.
>  */
> static inline bool folio_test_pmd_mappable(struct folio *folio)
> {
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index 0e24bb7e90d0..381a49c5ac3f 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> 		ClearPageCompound(&folio->page);
> }
> 
>-/*
>- * It splits an unmapped @folio to lower order smaller folios in two ways.
>+/**
>+ * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
>+ * two ways: uniform split or non-uniform split.
>  * @folio: the to-be-split folio
>  * @new_order: the smallest order of the after split folios (since buddy
>  *             allocator like split generates folios with orders from @folio's

In the comment of __split_unmapped_folio(), there is still some description of
the split behavior, e.g. updating stats and unfreezing.

Is this outdated?

-- 
Wei Yang
Help you, Help me
Re: [PATCH v4 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
Posted by Zi Yan 1 month, 1 week ago
On 30 Oct 2025, at 22:55, Wei Yang wrote:

> On Wed, Oct 29, 2025 at 09:40:20PM -0400, Zi Yan wrote:
>> try_folio_split_to_order(), folio_split, __folio_split(), and
>> __split_unmapped_folio() do not have correct kernel-doc comment format.
>> Fix them.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>> Acked-by: David Hildenbrand <david@redhat.com>
>
> Generally looks good, though some nits below.
>
>> ---
>> include/linux/huge_mm.h | 10 ++++++----
>> mm/huge_memory.c        | 27 +++++++++++++++------------
>> 2 files changed, 21 insertions(+), 16 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 34f8d8453bf3..cbb2243f8e56 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -386,9 +386,9 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>> 	return split_huge_page_to_list_to_order(page, NULL, new_order);
>> }
>>
>> -/*
>> - * try_folio_split_to_order - try to split a @folio at @page to @new_order using
>> - * non uniform split.
>> +/**
>> + * try_folio_split_to_order() - try to split a @folio at @page to @new_order
>> + * using non uniform split.
>
> This makes it look like try_folio_split_to_order() only performs a non-uniform
> split, while the following comment mentions it will try a uniform split if the
> non-uniform split is not supported.
>
> Do you think this is a little confusing?

It says "try to", so it is possible that an alternative can be used.

>
>>  * @folio: folio to be split
>>  * @page: split to @new_order at the given page
>>  * @new_order: the target split order
>> @@ -398,7 +398,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>>  * folios are put back to LRU list. Use min_order_for_split() to get the lower
>>  * bound of @new_order.
>>  *
>> - * Return: 0: split is successful, otherwise split failed.
>> + * Return: 0 - split is successful, otherwise split failed.
>>  */
>> static inline int try_folio_split_to_order(struct folio *folio,
>> 		struct page *page, unsigned int new_order)
>> @@ -486,6 +486,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
>> /**
>>  * folio_test_pmd_mappable - Can we map this folio with a PMD?
>>  * @folio: The folio to test
>> + *
>> + * Return: true - @folio can be mapped, false - @folio cannot be mapped.
>>  */
>> static inline bool folio_test_pmd_mappable(struct folio *folio)
>> {
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 0e24bb7e90d0..381a49c5ac3f 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3567,8 +3567,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>> 		ClearPageCompound(&folio->page);
>> }
>>
>> -/*
>> - * It splits an unmapped @folio to lower order smaller folios in two ways.
>> +/**
>> + * __split_unmapped_folio() - splits an unmapped @folio to lower order folios in
>> + * two ways: uniform split or non-uniform split.
>>  * @folio: the to-be-split folio
>>  * @new_order: the smallest order of the after split folios (since buddy
>>  *             allocator like split generates folios with orders from @folio's
>
> In the comment of __split_unmapped_folio(), there is still some description of
> the split behavior, e.g. updating stats and unfreezing.
>
> Is this outdated?

OK, I will update it.

--
Best Regards,
Yan, Zi
Re: [PATCH v4 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
Posted by Miaohe Lin 1 month, 2 weeks ago
On 2025/10/30 9:40, Zi Yan wrote:
> try_folio_split_to_order(), folio_split, __folio_split(), and
> __split_unmapped_folio() do not have correct kernel-doc comment format.
> Fix them.
> 
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks.
.
Re: [PATCH v4 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
Posted by Barry Song 1 month, 2 weeks ago
On Thu, Oct 30, 2025 at 9:40 AM Zi Yan <ziy@nvidia.com> wrote:
>
> try_folio_split_to_order(), folio_split, __folio_split(), and
> __split_unmapped_folio() do not have correct kernel-doc comment format.
> Fix them.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---

LGTM,
Reviewed-by: Barry Song <baohua@kernel.org>
Re: [PATCH v4 3/3] mm/huge_memory: fix kernel-doc comments for folio_split() and related.
Posted by Lance Yang 1 month, 2 weeks ago

On 2025/10/30 09:40, Zi Yan wrote:
> try_folio_split_to_order(), folio_split, __folio_split(), and
> __split_unmapped_folio() do not have correct kernel-doc comment format.
> Fix them.
> 
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---

LGTM.

Reviewed-by: Lance Yang <lance.yang@linux.dev>
