Large block size (LBS) folios cannot be split to order-0 folios; the
lowest order they can be split to is min_order_for_split(). The current
code fails the split outright, which is not optimal. Split the folio to
min_order_for_split() instead, so that after the split only the folio
containing the poisoned page becomes unusable.

For soft offline, do not split the large folio if its
min_order_for_split() is not 0, since the folio is still accessible from
userspace and a premature split might cause a performance loss.
Suggested-by: Jane Chu <jane.chu@oracle.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
---
mm/memory-failure.c | 30 ++++++++++++++++++++++++++----
1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f698df156bf8..40687b7aa8be 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
* there is still more to do, hence the page refcount we took earlier
* is still needed.
*/
-static int try_to_split_thp_page(struct page *page, bool release)
+static int try_to_split_thp_page(struct page *page, unsigned int new_order,
+ bool release)
{
int ret;

lock_page(page);
- ret = split_huge_page(page);
+ ret = split_huge_page_to_order(page, new_order);
unlock_page(page);

if (ret && release)
@@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
folio_unlock(folio);

if (folio_test_large(folio)) {
+ int new_order = min_order_for_split(folio);
+ int err;
+
/*
* The flag must be set after the refcount is bumped
* otherwise it may race with THP split.
@@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
* page is a valid handlable page.
*/
folio_set_has_hwpoisoned(folio);
- if (try_to_split_thp_page(p, false) < 0) {
+ err = try_to_split_thp_page(p, new_order, /* release= */ false);
+ /*
+ * If the folio cannot be split to order-0, kill the process,
+ * but split the folio anyway to minimize the number of unusable
+ * pages.
+ */
+ if (err || new_order) {
+ /* get folio again in case the original one is split */
+ folio = page_folio(p);
res = -EHWPOISON;
kill_procs_now(p, pfn, flags, folio);
put_page(p);
@@ -2621,7 +2633,17 @@ static int soft_offline_in_use_page(struct page *page)
};

if (!huge && folio_test_large(folio)) {
- if (try_to_split_thp_page(page, true)) {
+ int new_order = min_order_for_split(folio);
+
+ /*
+ * If new_order (target split order) is not 0, do not split the
+ * folio at all to retain the still accessible large folio.
+ * NOTE: if minimizing the number of soft offline pages is
+ * preferred, split it to non-zero new_order like it is done in
+ * memory_failure().
+ */
+ if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
+ /* release= */ true)) {
pr_info("%#lx: thp split failed\n", pfn);
return -EBUSY;
}
--
2.51.0
On Tue, Oct 21, 2025 at 11:35:29PM -0400, Zi Yan wrote:
> Large block size (LBS) folios cannot be split to order-0 folios but
> min_order_for_folio(). Current split fails directly, but that is not
> optimal. Split the folio to min_order_for_folio(), so that, after split,
> only the folio containing the poisoned page becomes unusable instead.
>
> For soft offline, do not split the large folio if its min_order_for_folio()
> is not 0. Since the folio is still accessible from userspace and premature
> split might lead to potential performance loss.
>
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
LGTM, with David's comments addressed, feel free to add:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
On 22.10.25 05:35, Zi Yan wrote:
Subject: I'd drop the trailing "."
> Large block size (LBS) folios cannot be split to order-0 folios but
> min_order_for_folio(). Current split fails directly, but that is not
> optimal. Split the folio to min_order_for_folio(), so that, after split,
> only the folio containing the poisoned page becomes unusable instead.
>
> For soft offline, do not split the large folio if its min_order_for_folio()
> is not 0. Since the folio is still accessible from userspace and premature
> split might lead to potential performance loss.
>
> Suggested-by: Jane Chu <jane.chu@oracle.com>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
This is not a fix, correct? Because the fix for the issue we saw was
sent out separately.
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> if (folio_test_large(folio)) {
> + int new_order = min_order_for_split(folio);
could be const
> + int err;
> +
> /*
> * The flag must be set after the refcount is bumped
> * otherwise it may race with THP split.
> @@ -2294,7 +2298,15 @@ int memory_failure(unsigned long pfn, int flags)
> * page is a valid handlable page.
> */
> folio_set_has_hwpoisoned(folio);
> - if (try_to_split_thp_page(p, false) < 0) {
> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
> + /*
> + * If the folio cannot be split to order-0, kill the process,
> + * but split the folio anyway to minimize the amount of unusable
> + * pages.
You could briefly explain here that the remainder of memory failure
handling code cannot deal with large folios, which is why we treat it
just like failed split.
--
Cheers
David / dhildenb
On 22 Oct 2025, at 16:17, David Hildenbrand wrote:
> On 22.10.25 05:35, Zi Yan wrote:
>
> Subject: I'd drop the trailing "."
>
>> Large block size (LBS) folios cannot be split to order-0 folios but
>> min_order_for_folio(). Current split fails directly, but that is not
>> optimal. Split the folio to min_order_for_folio(), so that, after split,
>> only the folio containing the poisoned page becomes unusable instead.
>>
>> For soft offline, do not split the large folio if its min_order_for_folio()
>> is not 0. Since the folio is still accessible from userspace and premature
>> split might lead to potential performance loss.
>>
>> Suggested-by: Jane Chu <jane.chu@oracle.com>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>
> This is not a fix, correct? Because the fix for the issue we saw was sent out separately.
No. It is just an optimization.
>> + int new_order = min_order_for_split(folio);
>
> could be const
Sure.
>> folio_set_has_hwpoisoned(folio);
>> - if (try_to_split_thp_page(p, false) < 0) {
>> + err = try_to_split_thp_page(p, new_order, /* release= */ false);
>> + /*
>> + * If the folio cannot be split to order-0, kill the process,
>> + * but split the folio anyway to minimize the amount of unusable
>> + * pages.
>
> You could briefly explain here that the remainder of memory failure handling code cannot deal with large folios, which is why we treat it just like failed split.
Sure. Will add.
--
Best Regards,
Yan, Zi