[PATCH v2 1/4] mm/huge_memory: change folio_split_supported() to folio_check_splittable()

Zi Yan posted 4 patches 1 week, 2 days ago
There is a newer version of this series
Posted by Zi Yan 1 week, 2 days ago
folio_split_supported(), as used in try_folio_split_to_order(), requires
folio->mapping to be non-NULL, but try_folio_split_to_order() currently does
not check it. There is no issue in the current code, since
try_folio_split_to_order() is only used in truncate_inode_partial_folio(),
where folio->mapping is not NULL.

To prevent future misuse, move the folio->mapping NULL check (i.e., whether
the folio has been truncated) into folio_split_supported(). Since the
folio->mapping NULL check returns -EBUSY while folio_split_supported() ==
false means -EINVAL, change the folio_split_supported() return type from
bool to int and return error numbers accordingly. Rename
folio_split_supported() to folio_check_splittable() to match the return
type change.

While at it, move is_huge_zero_folio() check and folio_test_writeback()
check into folio_check_splittable() and add kernel-doc.

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 include/linux/huge_mm.h | 10 ++++--
 mm/huge_memory.c        | 74 +++++++++++++++++++++++++----------------
 2 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 1d439de1ca2c..97686fb46e30 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -375,8 +375,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
 int folio_split_unmapped(struct folio *folio, unsigned int new_order);
 int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
-bool folio_split_supported(struct folio *folio, unsigned int new_order,
-		enum split_type split_type, bool warns);
+int folio_check_splittable(struct folio *folio, unsigned int new_order,
+			   enum split_type split_type, bool warns);
 int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
 		struct list_head *list);
 
@@ -407,7 +407,11 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
 static inline int try_folio_split_to_order(struct folio *folio,
 		struct page *page, unsigned int new_order)
 {
-	if (!folio_split_supported(folio, new_order, SPLIT_TYPE_NON_UNIFORM, /* warns= */ false))
+	int ret;
+
+	ret = folio_check_splittable(folio, new_order, SPLIT_TYPE_NON_UNIFORM,
+				     /* warns= */ false);
+	if (ret)
 		return split_huge_page_to_order(&folio->page, new_order);
 	return folio_split(folio, new_order, page, NULL);
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 041b554c7115..c1f1055165dd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3688,15 +3688,43 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	return 0;
 }
 
-bool folio_split_supported(struct folio *folio, unsigned int new_order,
-		enum split_type split_type, bool warns)
+/**
+ * folio_check_splittable() - check if a folio can be split to a given order
+ * @folio: folio to be split
 + * @new_order: the smallest order of the folios after the split (a
 + *             buddy-allocator-like split generates folios with orders
 + *             from @folio's order - 1 down to @new_order)
 + * @split_type: uniform or non-uniform split
 + * @warns: whether to warn when a check fails
+ *
+ * folio_check_splittable() checks if @folio can be split to @new_order using
+ * @split_type method. The truncated folio check must come first.
+ *
+ * Context: folio must be locked.
+ *
+ * Return: 0 - @folio can be split to @new_order, otherwise an error number is
+ * returned.
+ */
+int folio_check_splittable(struct folio *folio, unsigned int new_order,
+			   enum split_type split_type, bool warns)
 {
+	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
+	/*
+	 * Folios that just got truncated cannot get split. Signal to the
+	 * caller that there was a race.
+	 *
+	 * TODO: this will also currently refuse shmem folios that are in the
+	 * swapcache.
+	 */
+	if (!folio_test_anon(folio) && !folio->mapping)
+		return -EBUSY;
+
 	if (folio_test_anon(folio)) {
 		/* order-1 is not supported for anonymous THP. */
 		VM_WARN_ONCE(warns && new_order == 1,
 				"Cannot split to order-1 folio");
 		if (new_order == 1)
-			return false;
+			return -EINVAL;
 	} else if (split_type == SPLIT_TYPE_NON_UNIFORM || new_order) {
 		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
 		    !mapping_large_folio_support(folio->mapping)) {
@@ -3719,7 +3747,7 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
 			 */
 			VM_WARN_ONCE(warns,
 				"Cannot split file folio to non-0 order");
-			return false;
+			return -EINVAL;
 		}
 	}
 
@@ -3734,10 +3762,18 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
 	if ((split_type == SPLIT_TYPE_NON_UNIFORM || new_order) && folio_test_swapcache(folio)) {
 		VM_WARN_ONCE(warns,
 			"Cannot split swapcache folio to non-0 order");
-		return false;
+		return -EINVAL;
 	}
 
-	return true;
+	if (is_huge_zero_folio(folio)) {
+		pr_warn_ratelimited("Called split_huge_page for huge zero page\n");
+		return -EINVAL;
+	}
+
+	if (folio_test_writeback(folio))
+		return -EBUSY;
+
+	return 0;
 }
 
 static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int new_order,
@@ -3922,7 +3958,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	int remap_flags = 0;
 	int extra_pins, ret;
 	pgoff_t end = 0;
-	bool is_hzp;
 
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_large(folio), folio);
@@ -3930,30 +3965,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (folio != page_folio(split_at) || folio != page_folio(lock_at))
 		return -EINVAL;
 
-	/*
-	 * Folios that just got truncated cannot get split. Signal to the
-	 * caller that there was a race.
-	 *
-	 * TODO: this will also currently refuse shmem folios that are in the
-	 * swapcache.
-	 */
-	if (!is_anon && !folio->mapping)
-		return -EBUSY;
-
 	if (new_order >= old_order)
 		return -EINVAL;
 
-	if (!folio_split_supported(folio, new_order, split_type, /* warn = */ true))
-		return -EINVAL;
-
-	is_hzp = is_huge_zero_folio(folio);
-	if (is_hzp) {
-		pr_warn_ratelimited("Called split_huge_page for huge zero page\n");
-		return -EBUSY;
-	}
-
-	if (folio_test_writeback(folio))
-		return -EBUSY;
+	ret = folio_check_splittable(folio, new_order, split_type,
+				     /* warn = */ true);
+	if (ret)
+		return ret;
 
 	if (is_anon) {
 		/*
-- 
2.51.0
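The error-code consolidation above can be sketched in userspace C. This is a toy model, not kernel code: the struct fields stand in for folio_test_anon(), folio->mapping, folio_test_writeback() and is_huge_zero_folio(), and the check ordering mirrors the patch (truncation first, so a racing truncation yields the retryable -EBUSY rather than -EINVAL):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for struct folio; fields model the predicates used above. */
struct folio {
    bool anon;          /* folio_test_anon() */
    void *mapping;      /* folio->mapping */
    bool writeback;     /* folio_test_writeback() */
    bool huge_zero;     /* is_huge_zero_folio() */
};

int folio_check_splittable_model(const struct folio *folio,
                                 unsigned int new_order)
{
    /* Truncation check comes first: a file folio whose mapping was
     * cleared raced with truncation -> transient condition, -EBUSY. */
    if (!folio->anon && !folio->mapping)
        return -EBUSY;

    /* order-1 anonymous THP is unsupported -> permanent, -EINVAL. */
    if (folio->anon && new_order == 1)
        return -EINVAL;

    /* The huge zero folio can never be split. */
    if (folio->huge_zero)
        return -EINVAL;

    /* Writeback in flight -> transient condition, -EBUSY. */
    if (folio->writeback)
        return -EBUSY;

    return 0;
}
```

With a single int return, the caller can propagate -EBUSY vs. -EINVAL directly instead of collapsing both into a bool.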
Re: [PATCH v2 1/4] mm/huge_memory: change folio_split_supported() to folio_check_splittable()
Posted by David Hildenbrand (Red Hat) 6 days, 16 hours ago
On 11/22/25 03:55, Zi Yan wrote:
> [...]
> +int folio_check_splittable(struct folio *folio, unsigned int new_order,
> +			   enum split_type split_type, bool warns)
>   {
> +	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
> +	/*
> +	 * Folios that just got truncated cannot get split. Signal to the
> +	 * caller that there was a race.
> +	 *
> +	 * TODO: this will also currently refuse shmem folios that are in the
> +	 * swapcache.
> +	 */

Per the other discussion, should this even be:

"this will also currently refuse folios without a mapping in the 
swapcache (shmem or to-be-anon folios)"

IOW, to spell out that anon folios that were read into the swapcache but 
not mapped yet into page tables (where we set folio->mapping).


-- 
Cheers

David
Re: [PATCH v2 1/4] mm/huge_memory: change folio_split_supported() to folio_check_splittable()
Posted by Andrew Morton 6 days, 7 hours ago
On Tue, 25 Nov 2025 09:58:03 +0100 "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:

> > + * Return: 0 - @folio can be split to @new_order, otherwise an error number is
> > + * returned.
> > + */
> > +int folio_check_splittable(struct folio *folio, unsigned int new_order,
> > +			   enum split_type split_type, bool warns)
> >   {
> > +	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
> > +	/*
> > +	 * Folios that just got truncated cannot get split. Signal to the
> > +	 * caller that there was a race.
> > +	 *
> > +	 * TODO: this will also currently refuse shmem folios that are in the
> > +	 * swapcache.
> > +	 */
> 
> Per the other discussion, should this even be:
> 
> "this will also currently refuse folios without a mapping in the 
> swapcache (shmem or to-be-anon folios)"
> 
> IOW, to spell out that anon folios that were read into the swapcache but 
> not mapped yet into page tables (where we set folio->mapping).

This?

--- a/mm/huge_memory.c~mm-huge_memory-change-folio_split_supported-to-folio_check_splittable-fix
+++ a/mm/huge_memory.c
@@ -3714,7 +3714,8 @@ int folio_check_splittable(struct folio
 	 * caller that there was a race.
 	 *
 	 * TODO: this will also currently refuse shmem folios that are in the
-	 * swapcache.
+	 * swapcache.  Currently it will also refuse folios without a mapping
+	 * in the swapcache (shmem or to-be-anon folios).
 	 */
 	if (!folio_test_anon(folio) && !folio->mapping)
 		return -EBUSY;
_
Re: [PATCH v2 1/4] mm/huge_memory: change folio_split_supported() to folio_check_splittable()
Posted by Barry Song 1 week, 1 day ago
Hi Zi Yan,

Thanks for the nice cleanup.

On Sat, Nov 22, 2025 at 10:55 AM Zi Yan <ziy@nvidia.com> wrote:
>
> [...]
>
> -bool folio_split_supported(struct folio *folio, unsigned int new_order,
> -               enum split_type split_type, bool warns);
> +int folio_check_splittable(struct folio *folio, unsigned int new_order,
> +                          enum split_type split_type, bool warns);


It feels a bit odd to have a warns parameter here, especially given that it's
a bool. I understand that in one case we're only checking whether a split is
possible, without actually performing it. In the other case, we are performing
the split, so we must confirm it's valid; otherwise it's a bug.

Could we replace split_type with something more like gfp_flags, with
variants such as __GFP_NOWARN? That would make the code much more readable.

[...]

>
> @@ -3734,10 +3762,18 @@ bool folio_split_supported(struct folio *folio, unsigned int new_order,
>         if ((split_type == SPLIT_TYPE_NON_UNIFORM || new_order) && folio_test_swapcache(folio)) {
>                 VM_WARN_ONCE(warns,
>                         "Cannot split swapcache folio to non-0 order");
> -               return false;
> +               return -EINVAL;
>         }
>
> -       return true;
> +       if (is_huge_zero_folio(folio)) {
> +               pr_warn_ratelimited("Called split_huge_page for huge zero page\n");
> +               return -EINVAL;
> +       }

However, I don't quite understand why this case doesn't check warns or
use VM_WARN_ONCE. Why is the huge-zero case different?

Thanks
Barry
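Barry's gfp_flags-style suggestion could look roughly like the userspace sketch below. The SPLIT_FLAG_* names are invented for this sketch and are not part of any kernel API; the global counter stands in for the VM_WARN_ONCE() side effect so the no-warn behavior is observable:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical flags word replacing the split_type and warns parameters. */
#define SPLIT_FLAG_NON_UNIFORM 0x1  /* was: split_type == SPLIT_TYPE_NON_UNIFORM */
#define SPLIT_FLAG_NOWARN      0x2  /* was: warns == false (cf. __GFP_NOWARN) */

int split_warn_count;  /* stands in for VM_WARN_ONCE() firing */

int check_split_model(unsigned int flags, unsigned int new_order, int anon)
{
    if (anon && new_order == 1) {
        /* Warn only when the caller did not pass the no-warn flag. */
        if (!(flags & SPLIT_FLAG_NOWARN))
            split_warn_count++;
        return -EINVAL;
    }
    return 0;
}
```

A probing caller such as try_folio_split_to_order() would pass SPLIT_FLAG_NOWARN, while the actual split path would pass 0 and get the warning.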
Re: [PATCH v2 1/4] mm/huge_memory: change folio_split_supported() to folio_check_splittable()
Posted by David Hildenbrand (Red Hat) 1 week ago
On 11/23/25 19:38, Barry Song wrote:
> Hi Zi Yan,
> 
> Thanks for the nice cleanup.
> 
> On Sat, Nov 22, 2025 at 10:55 AM Zi Yan <ziy@nvidia.com> wrote:
>> [...]
> 
> 
> It feels a bit odd to have a warns parameter here, especially given that it's
> a bool. I understand that in one case we're only checking whether a split is
> possible, without actually performing it. In the other case, we are performing
> the split, so we must confirm it's valid — otherwise it's a bug.
> 
> Could we rename split_type to something more like gfp_flags, where we have
> variants such as __GFP_NOWARN or something similar? That would make the code
> much more readable.

Could we get rid of the "warns" parameter and simply always do a 
pr_warn_once()?

As an alternative, simply move the warning to the single caller

VM_WARN_ONCE(ret == -EINVAL, "Tried to split an unsplittable folio");

-- 
Cheers

David
Re: [PATCH v2 1/4] mm/huge_memory: change folio_split_supported() to folio_check_splittable()
Posted by Zi Yan 1 week ago
On 24 Nov 2025, at 5:33, David Hildenbrand (Red Hat) wrote:

> On 11/23/25 19:38, Barry Song wrote:
>> Hi Zi Yan,
>>
>> Thanks for the nice cleanup.
>>
>> On Sat, Nov 22, 2025 at 10:55 AM Zi Yan <ziy@nvidia.com> wrote:
>>> [...]
>>
>>
>> It feels a bit odd to have a warns parameter here, especially given that it's
>> a bool. I understand that in one case we're only checking whether a split is
>> possible, without actually performing it. In the other case, we are performing
>> the split, so we must confirm it's valid — otherwise it's a bug.
>>
>> Could we rename split_type to something more like gfp_flags, where we have
>> variants such as __GFP_NOWARN or something similar? That would make the code
>> much more readable.

We do not want to make folio split more complicated, especially since the
long-term plan is to move entirely to non-uniform splits. The warns
parameter exists solely for CONFIG_READ_ONLY_THP_FOR_FS, since large folios
created via it cannot be split in a non-uniform way.

>
> Could we get rid of the "warns" parameter and simply always do a pr_warn_once()?

The issue with this method is that truncating a large folio created via
CONFIG_READ_ONLY_THP_FOR_FS would trigger an undesirable warning.

>
> As an alternative, simply move the warning to the single caller
>
> VM_WARN_ONCE(ret == -EINVAL, "Tried to split an unsplittable folio");

Sounds good to me. At most, whoever triggers the warning will need to add
some code to find the actual violation.

I will do this in the next version. All VM_WARN_ONCE() and pr_warn_ratelimited()
calls in folio_check_splittable() will be removed, and __folio_split() will
issue the warning when ret is -EINVAL.

Best Regards,
Yan, Zi
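The plan agreed above, keeping folio_check_splittable() silent and warning only in the split path, can be modeled in userspace as follows. The _model suffixes and MODEL_WARN_ONCE are stand-ins invented for this sketch, not kernel identifiers:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

int split_warned;  /* a once-only latch, like VM_WARN_ONCE() */

#define MODEL_WARN_ONCE(cond, msg) do {               \
        if ((cond) && !split_warned) {                \
            split_warned = 1;                         \
            fprintf(stderr, "%s\n", (msg));           \
        }                                             \
    } while (0)

/* The check itself stays silent and just reports an error code. */
int check_splittable_model(int anon, unsigned int new_order)
{
    if (anon && new_order == 1)
        return -EINVAL;
    return 0;
}

/* Split path: the single warning site, per David's suggestion. */
int folio_split_model(int anon, unsigned int new_order)
{
    int ret = check_splittable_model(anon, new_order);

    MODEL_WARN_ONCE(ret == -EINVAL, "Tried to split an unsplittable folio");
    return ret;
}

/* Try path: probes splittability without ever warning. */
int try_split_model(int anon, unsigned int new_order)
{
    return check_splittable_model(anon, new_order);
}
```

This removes the warns parameter entirely: the probing caller never warns, and the CONFIG_READ_ONLY_THP_FOR_FS truncation case no longer trips a spurious warning.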
Re: [PATCH v2 1/4] mm/huge_memory: change folio_split_supported() to folio_check_splittable()
Posted by Wei Yang 1 week, 1 day ago
On Fri, Nov 21, 2025 at 09:55:26PM -0500, Zi Yan wrote:
>[...]
>Signed-off-by: Zi Yan <ziy@nvidia.com>

LGTM, Thanks

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>


-- 
Wei Yang
Help you, Help me