[PATCH v4 2/3] mm/memory-failure: improve large block size folio handling.

Posted by Zi Yan 1 month, 2 weeks ago
Large block size (LBS) folios cannot be split all the way to order-0
folios, only down to min_order_for_split(). The split currently just
fails, which is not optimal. Split the folio to min_order_for_split()
instead, so that after the split only the smaller folio containing the
poisoned page becomes unusable rather than the whole large folio.

For soft offline, do not split the large folio if its
min_order_for_split() is not 0, since the folio is still accessible
from userspace and a premature split might cause a performance loss.
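
As a concrete illustration (the numbers here are assumed for the example,
not taken from this series): with 4 KiB base pages and a filesystem whose
minimum folio order is 4 (64 KiB blocks), a poisoned 2 MiB (order-9)
pagecache folio previously stayed unsplit, leaving all 512 pages unusable.
With this change it is split into 32 order-4 folios and only the 64 KiB
folio containing the poisoned page is lost:

	before: order-9 folio not split  -> 512 pages unusable
	after:  split to order-4 folios  ->  16 pages unusable, 496 reusable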

Suggested-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
 mm/memory-failure.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f698df156bf8..acc35c881547 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
  * there is still more to do, hence the page refcount we took earlier
  * is still needed.
  */
-static int try_to_split_thp_page(struct page *page, bool release)
+static int try_to_split_thp_page(struct page *page, unsigned int new_order,
+		bool release)
 {
 	int ret;
 
 	lock_page(page);
-	ret = split_huge_page(page);
+	ret = split_huge_page_to_order(page, new_order);
 	unlock_page(page);
 
 	if (ret && release)
@@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
 	folio_unlock(folio);
 
 	if (folio_test_large(folio)) {
+		const int new_order = min_order_for_split(folio);
+		int err;
+
 		/*
 		 * The flag must be set after the refcount is bumped
 		 * otherwise it may race with THP split.
@@ -2294,7 +2298,16 @@ int memory_failure(unsigned long pfn, int flags)
 		 * page is a valid handlable page.
 		 */
 		folio_set_has_hwpoisoned(folio);
-		if (try_to_split_thp_page(p, false) < 0) {
+		err = try_to_split_thp_page(p, new_order, /* release= */ false);
+		/*
+		 * If splitting a folio to order-0 fails, kill the process.
+		 * Split the folio regardless to minimize unusable pages.
+		 * Because the memory failure code cannot handle large
+		 * folios, this split is always treated as if it failed.
+		 */
+		if (err || new_order) {
+			/* get folio again in case the original one is split */
+			folio = page_folio(p);
 			res = -EHWPOISON;
 			kill_procs_now(p, pfn, flags, folio);
 			put_page(p);
@@ -2621,7 +2634,17 @@ static int soft_offline_in_use_page(struct page *page)
 	};
 
 	if (!huge && folio_test_large(folio)) {
-		if (try_to_split_thp_page(page, true)) {
+		const int new_order = min_order_for_split(folio);
+
+		/*
+		 * If new_order (target split order) is not 0, do not split the
+		 * folio at all to retain the still accessible large folio.
+		 * NOTE: if minimizing the number of soft offline pages is
+		 * preferred, split it to non-zero new_order like it is done in
+		 * memory_failure().
+		 */
+		if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
+						       /* release= */ true)) {
 			pr_info("%#lx: thp split failed\n", pfn);
 			return -EBUSY;
 		}
-- 
2.43.0
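
For context, min_order_for_split() is what bounds how far an LBS pagecache
folio can be split: 0 for anonymous folios, otherwise the mapping's minimum
folio order. A simplified sketch (the real function in mm/huge_memory.c has
additional handling, e.g. for folios whose mapping has already gone away):

int min_order_for_split(struct folio *folio)
{
	if (folio_test_anon(folio))
		return 0;

	return mapping_min_folio_order(folio->mapping);
}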
Re: [PATCH v4 2/3] mm/memory-failure: improve large block size folio handling.
Posted by David Hildenbrand 1 month, 1 week ago
On 30.10.25 02:40, Zi Yan wrote:
> Large block size (LBS) folios cannot be split all the way to order-0
> folios, only down to min_order_for_split(). The split currently just
> fails, which is not optimal. Split the folio to min_order_for_split()
> instead, so that after the split only the smaller folio containing the
> poisoned page becomes unusable rather than the whole large folio.
> 
> For soft offline, do not split the large folio if its
> min_order_for_split() is not 0, since the folio is still accessible
> from userspace and a premature split might cause a performance loss.
> 
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb
Re: [PATCH v4 2/3] mm/memory-failure: improve large block size folio handling.
Posted by Wei Yang 1 month, 2 weeks ago
On Wed, Oct 29, 2025 at 09:40:19PM -0400, Zi Yan wrote:
>Large block size (LBS) folios cannot be split all the way to order-0
>folios, only down to min_order_for_split(). The split currently just
>fails, which is not optimal. Split the folio to min_order_for_split()
>instead, so that after the split only the smaller folio containing the
>poisoned page becomes unusable rather than the whole large folio.
>
>For soft offline, do not split the large folio if its
>min_order_for_split() is not 0, since the folio is still accessible
>from userspace and a premature split might cause a performance loss.
>
>Suggested-by: Jane Chu <jane.chu@oracle.com>
>Signed-off-by: Zi Yan <ziy@nvidia.com>
>Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
>Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Looks reasonable.

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

-- 
Wei Yang
Help you, Help me
Re: [PATCH v4 2/3] mm/memory-failure: improve large block size folio handling.
Posted by Miaohe Lin 1 month, 2 weeks ago
On 2025/10/30 9:40, Zi Yan wrote:
> Large block size (LBS) folios cannot be split all the way to order-0
> folios, only down to min_order_for_split(). The split currently just
> fails, which is not optimal. Split the folio to min_order_for_split()
> instead, so that after the split only the smaller folio containing the
> poisoned page becomes unusable rather than the whole large folio.
> 
> For soft offline, do not split the large folio if its
> min_order_for_split() is not 0, since the folio is still accessible
> from userspace and a premature split might cause a performance loss.
> 
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks.
.
Re: [PATCH v4 2/3] mm/memory-failure: improve large block size folio handling.
Posted by Barry Song 1 month, 2 weeks ago
On Thu, Oct 30, 2025 at 9:40 AM Zi Yan <ziy@nvidia.com> wrote:
>
> Large block size (LBS) folios cannot be split all the way to order-0
> folios, only down to min_order_for_split(). The split currently just
> fails, which is not optimal. Split the folio to min_order_for_split()
> instead, so that after the split only the smaller folio containing the
> poisoned page becomes unusable rather than the whole large folio.
>
> For soft offline, do not split the large folio if its
> min_order_for_split() is not 0, since the folio is still accessible
> from userspace and a premature split might cause a performance loss.
>
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Reviewed-by: Barry Song <baohua@kernel.org>

> ---
>  mm/memory-failure.c | 31 +++++++++++++++++++++++++++----
>  1 file changed, 27 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index f698df156bf8..acc35c881547 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
...
> @@ -2294,7 +2298,16 @@ int memory_failure(unsigned long pfn, int flags)
>                  * page is a valid handlable page.
>                  */
>                 folio_set_has_hwpoisoned(folio);
> -               if (try_to_split_thp_page(p, false) < 0) {
> +               err = try_to_split_thp_page(p, new_order, /* release= */ false);
> +               /*
> +                * If splitting a folio to order-0 fails, kill the process.
> +                * Split the folio regardless to minimize unusable pages.
> +                * Because the memory failure code cannot handle large
> +                * folios, this split is always treated as if it failed.
> +                */
> +               if (err || new_order) {
> +                       /* get folio again in case the original one is split */
> +                       folio = page_folio(p);

It’s a bit hard to follow that we implicitly use p to get its original
folio for splitting in try_to_split_thp_page(), and then again use p to
get its new folio for kill_procs_now(). It might be more readable to move
try_to_split_thp_page() into a helper like try_to_split_folio(folio, …),
so it’s explicit that we’re splitting a folio rather than a page?
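
For illustration only (this is not code from the thread, just a rough
sketch of the suggestion, with the locking and refcount details guessed to
match try_to_split_thp_page()):

static int try_to_split_folio(struct folio **foliop, struct page *p,
			      unsigned int new_order, bool release)
{
	int ret;

	folio_lock(*foliop);
	ret = split_huge_page_to_order(p, new_order);
	/* on success, the (now smaller) folio containing p stays locked */
	folio_unlock(page_folio(p));

	if (ret && release)
		folio_put(*foliop);

	/* hand back the folio that now contains the poisoned page */
	*foliop = page_folio(p);
	return ret;
}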

Thanks
Barry
Re: [PATCH v4 2/3] mm/memory-failure: improve large block size folio handling.
Posted by Lance Yang 1 month, 2 weeks ago

On 2025/10/30 09:40, Zi Yan wrote:
> Large block size (LBS) folios cannot be split all the way to order-0
> folios, only down to min_order_for_split(). The split currently just
> fails, which is not optimal. Split the folio to min_order_for_split()
> instead, so that after the split only the smaller folio containing the
> poisoned page becomes unusable rather than the whole large folio.
> 
> For soft offline, do not split the large folio if its
> min_order_for_split() is not 0, since the folio is still accessible
> from userspace and a premature split might cause a performance loss.
> 
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---

LGTM! Feel free to add:

Reviewed-by: Lance Yang <lance.yang@linux.dev>

>   mm/memory-failure.c | 31 +++++++++++++++++++++++++++----
>   1 file changed, 27 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index f698df156bf8..acc35c881547 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
>    * there is still more to do, hence the page refcount we took earlier
>    * is still needed.
>    */
> -static int try_to_split_thp_page(struct page *page, bool release)
> +static int try_to_split_thp_page(struct page *page, unsigned int new_order,
> +		bool release)
>   {
>   	int ret;
>   
>   	lock_page(page);
> -	ret = split_huge_page(page);
> +	ret = split_huge_page_to_order(page, new_order);
>   	unlock_page(page);
>   
>   	if (ret && release)
> @@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
>   	folio_unlock(folio);
>   
>   	if (folio_test_large(folio)) {
> +		const int new_order = min_order_for_split(folio);
> +		int err;
> +
>   		/*
>   		 * The flag must be set after the refcount is bumped
>   		 * otherwise it may race with THP split.
> @@ -2294,7 +2298,16 @@ int memory_failure(unsigned long pfn, int flags)
>   		 * page is a valid handlable page.
>   		 */
>   		folio_set_has_hwpoisoned(folio);
> -		if (try_to_split_thp_page(p, false) < 0) {
> +		err = try_to_split_thp_page(p, new_order, /* release= */ false);
> +		/*
> +		 * If splitting a folio to order-0 fails, kill the process.
> +		 * Split the folio regardless to minimize unusable pages.
> +		 * Because the memory failure code cannot handle large
> +		 * folios, this split is always treated as if it failed.
> +		 */
> +		if (err || new_order) {
> +			/* get folio again in case the original one is split */
> +			folio = page_folio(p);
>   			res = -EHWPOISON;
>   			kill_procs_now(p, pfn, flags, folio);
>   			put_page(p);
> @@ -2621,7 +2634,17 @@ static int soft_offline_in_use_page(struct page *page)
>   	};
>   
>   	if (!huge && folio_test_large(folio)) {
> -		if (try_to_split_thp_page(page, true)) {
> +		const int new_order = min_order_for_split(folio);
> +
> +		/*
> +		 * If new_order (target split order) is not 0, do not split the
> +		 * folio at all to retain the still accessible large folio.
> +		 * NOTE: if minimizing the number of soft offline pages is
> +		 * preferred, split it to non-zero new_order like it is done in
> +		 * memory_failure().
> +		 */
> +		if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
> +						       /* release= */ true)) {
>   			pr_info("%#lx: thp split failed\n", pfn);
>   			return -EBUSY;
>   		}