folio split clears PG_has_hwpoisoned, but the flag should be preserved in
after-split folios that contain pages with the PG_hwpoisoned flag when the
folio is split to >0 order folios. Scan all pages in a to-be-split folio to
determine which after-split folios need the flag.

An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
avoid the scan and set it on all after-split folios, but the resulting
false positives would have an undesirable negative impact. To remove a
false positive, callers of folio_test_has_hwpoisoned() and
folio_contain_hwpoisoned_page() would need to do the scan themselves. That
would be a hassle for current and future callers and more costly than doing
the scan in the split code. More details are discussed in [1].

It is OK that the current implementation does not do this, because the
memory failure code always tries to split to order-0 folios, and if a folio
cannot be split to order-0, the memory failure code either gives a warning
or the split is not performed.

Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/huge_memory.c | 28 +++++++++++++++++++++++++---
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fc65ec3393d2..f3896c1f130f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
caller_pins;
}
+static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
+{
+ long i;
+
+ for (i = 0; i < nr_pages; i++)
+ if (PageHWPoison(first_page + i))
+ return true;
+
+ return false;
+}
+
/*
* It splits @folio into @new_order folios and copies the @folio metadata to
* all the resulting folios.
@@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
static void __split_folio_to_order(struct folio *folio, int old_order,
int new_order)
{
+ /* Scan poisoned pages when split a poisoned folio to large folios */
+ bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
+ new_order != 0;
long new_nr_pages = 1 << new_order;
long nr_pages = 1 << old_order;
long i;
+ folio_clear_has_hwpoisoned(folio);
+
+ /* Check first new_nr_pages since the loop below skips them */
+ if (check_poisoned_pages &&
+ page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
+ folio_set_has_hwpoisoned(folio);
/*
* Skip the first new_nr_pages, since the new folio from them have all
* the flags from the original folio.
*/
for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
struct page *new_head = &folio->page + i;
-
/*
* Careful: new_folio is not a "real" folio before we cleared PageTail.
* Don't pass it around before clear_compound_head().
*/
struct folio *new_folio = (struct folio *)new_head;
+ bool poisoned_new_folio = check_poisoned_pages &&
+ page_range_has_hwpoisoned(new_head, new_nr_pages);
VM_BUG_ON_PAGE(atomic_read(&new_folio->_mapcount) != -1, new_head);
@@ -3514,6 +3535,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
(1L << PG_dirty) |
LRU_GEN_MASK | LRU_REFS_MASK));
+ if (poisoned_new_folio)
+ folio_set_has_hwpoisoned(new_folio);
+
new_folio->mapping = folio->mapping;
new_folio->index = folio->index + i;
@@ -3600,8 +3624,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
int start_order = uniform_split ? new_order : old_order - 1;
int split_order;
- folio_clear_has_hwpoisoned(folio);
-
/*
* split to new_order one order at a time. For uniform split,
* folio is split to new_order directly.
--
2.51.0
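
For reference, here is a minimal userspace C model of the per-range scan
that the patch adds to __split_folio_to_order() above. The helper names and
the bool-array stand-in for struct page are made up for illustration; this
is only a sketch of the technique, not kernel code:

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for PageHWPoison(): one bool per page. */
static bool range_has_poison(const bool *poison, long nr_pages)
{
	for (; nr_pages; poison++, nr_pages--)
		if (*poison)
			return true;
	return false;
}

/*
 * Model of splitting an old_order folio into new_order pieces: flag
 * exactly those after-split pieces whose page range contains a poisoned
 * page, the way the patch sets PG_has_hwpoisoned per after-split folio.
 */
static void mark_after_split(const bool *poison, int old_order,
			     int new_order, bool *piece_flag)
{
	long new_nr_pages = 1L << new_order;
	long nr_pages = 1L << old_order;
	long i;

	for (i = 0; i < nr_pages; i += new_nr_pages)
		piece_flag[i / new_nr_pages] =
			range_has_poison(poison + i, new_nr_pages);
}

int main(void)
{
	/* order-4 "folio" (16 pages), page 5 poisoned, split to order 2. */
	bool poison[16] = { false };
	bool piece_flag[4] = { false };
	long i;

	poison[5] = true;
	mark_after_split(poison, 4, 2, piece_flag);

	for (i = 0; i < 4; i++)
		printf("after-split folio %ld: has_hwpoisoned=%d\n",
		       i, (int)piece_flag[i]);
	return 0;
}

Running it prints has_hwpoisoned=1 only for after-split folio 1 (pages
4-7), the piece that actually contains the poisoned page, which is the
behaviour the patch wants from the real split path.
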
On Tue, Oct 21, 2025 at 11:35:27PM -0400, Zi Yan wrote:
> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
> after-split folios containing pages with PG_hwpoisoned flag if the folio is
> split to >0 order folios. Scan all pages in a to-be-split folio to
> determine which after-split folios need the flag.
>
> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
> avoid the scan and set it on all after-split folios, but resulting false
> positive has undesirable negative impact. To remove false positive, caller
> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
> do the scan. That might be causing a hassle for current and future callers
> and more costly than doing the scan in the split code. More details are
> discussed in [1].
>
> It is OK that current implementation does not do this, because memory
> failure code always tries to split to order-0 folios and if a folio cannot
> be split to order-0, memory failure code either gives warnings or the split
> is not performed.
>
> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
> Signed-off-by: Zi Yan <ziy@nvidia.com>
I guess this was split out to [0]? :)
[0]: https://lore.kernel.org/linux-mm/44310717-347c-4ede-ad31-c6d375a449b9@linux.dev/
> ---
> mm/huge_memory.c | 28 +++++++++++++++++++++++++---
> 1 file changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fc65ec3393d2..f3896c1f130f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> caller_pins;
> }
>
> +static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
> +{
> + long i;
> +
> + for (i = 0; i < nr_pages; i++)
> + if (PageHWPoison(first_page + i))
> + return true;
> +
> + return false;
> +}
> +
> /*
> * It splits @folio into @new_order folios and copies the @folio metadata to
> * all the resulting folios.
> @@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> static void __split_folio_to_order(struct folio *folio, int old_order,
> int new_order)
> {
> + /* Scan poisoned pages when split a poisoned folio to large folios */
> + bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
> + new_order != 0;
> long new_nr_pages = 1 << new_order;
> long nr_pages = 1 << old_order;
> long i;
>
> + folio_clear_has_hwpoisoned(folio);
> +
> + /* Check first new_nr_pages since the loop below skips them */
> + if (check_poisoned_pages &&
> + page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
> + folio_set_has_hwpoisoned(folio);
> /*
> * Skip the first new_nr_pages, since the new folio from them have all
> * the flags from the original folio.
> */
> for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
> struct page *new_head = &folio->page + i;
> -
> /*
> * Careful: new_folio is not a "real" folio before we cleared PageTail.
> * Don't pass it around before clear_compound_head().
> */
> struct folio *new_folio = (struct folio *)new_head;
> + bool poisoned_new_folio = check_poisoned_pages &&
> + page_range_has_hwpoisoned(new_head, new_nr_pages);
>
> VM_BUG_ON_PAGE(atomic_read(&new_folio->_mapcount) != -1, new_head);
>
> @@ -3514,6 +3535,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> (1L << PG_dirty) |
> LRU_GEN_MASK | LRU_REFS_MASK));
>
> + if (poisoned_new_folio)
> + folio_set_has_hwpoisoned(new_folio);
> +
> new_folio->mapping = folio->mapping;
> new_folio->index = folio->index + i;
>
> @@ -3600,8 +3624,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> int start_order = uniform_split ? new_order : old_order - 1;
> int split_order;
>
> - folio_clear_has_hwpoisoned(folio);
> -
> /*
> * split to new_order one order at a time. For uniform split,
> * folio is split to new_order directly.
> --
> 2.51.0
>
On 24 Oct 2025, at 11:58, Lorenzo Stoakes wrote:

> On Tue, Oct 21, 2025 at 11:35:27PM -0400, Zi Yan wrote:
>> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
>> after-split folios containing pages with PG_hwpoisoned flag if the folio is
>> split to >0 order folios. Scan all pages in a to-be-split folio to
>> determine which after-split folios need the flag.
>>
>> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>> avoid the scan and set it on all after-split folios, but resulting false
>> positive has undesirable negative impact. To remove false positive, caller
>> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
>> do the scan. That might be causing a hassle for current and future callers
>> and more costly than doing the scan in the split code. More details are
>> discussed in [1].
>>
>> It is OK that current implementation does not do this, because memory
>> failure code always tries to split to order-0 folios and if a folio cannot
>> be split to order-0, memory failure code either gives warnings or the split
>> is not performed.
>>
>> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>
> I guess this was split out to [0]? :)
>
> [0]: https://lore.kernel.org/linux-mm/44310717-347c-4ede-ad31-c6d375a449b9@linux.dev/

Yes. The decision is based on the discussion with David[1] and announced
at [2].

[1] https://lore.kernel.org/all/d3d05898-5530-4990-9d61-8268bd483765@redhat.com/
[2] https://lore.kernel.org/all/1AE28DE5-1E0A-432B-B21B-61E0E3F54909@nvidia.com/

--
Best Regards,
Yan, Zi
On 22.10.25 05:35, Zi Yan wrote:
> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
> after-split folios containing pages with PG_hwpoisoned flag if the folio is
> split to >0 order folios. Scan all pages in a to-be-split folio to
> determine which after-split folios need the flag.
>
> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
> avoid the scan and set it on all after-split folios, but resulting false
> positive has undesirable negative impact. To remove false positive, caller
> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
> do the scan. That might be causing a hassle for current and future callers
> and more costly than doing the scan in the split code. More details are
> discussed in [1].
>
> It is OK that current implementation does not do this, because memory
> failure code always tries to split to order-0 folios and if a folio cannot
> be split to order-0, memory failure code either gives warnings or the split
> is not performed.
>
We're losing PG_has_hwpoisoned for large folios, so likely this should be
a stable fix for splitting anything to an order > 0?
> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
> mm/huge_memory.c | 28 +++++++++++++++++++++++++---
> 1 file changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fc65ec3393d2..f3896c1f130f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> caller_pins;
> }
>
> +static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
> +{
> + long i;
> +
> + for (i = 0; i < nr_pages; i++)
> + if (PageHWPoison(first_page + i))
> + return true;
> +
> + return false;
Nit: I'd just do
static bool page_range_has_hwpoisoned(struct page *page, unsigned long nr_pages)
{
	for (; nr_pages; page++, nr_pages--)
		if (PageHWPoison(page))
			return true;
	return false;
}
> +}
> +
> /*
> * It splits @folio into @new_order folios and copies the @folio metadata to
> * all the resulting folios.
> @@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
> static void __split_folio_to_order(struct folio *folio, int old_order,
> int new_order)
> {
> + /* Scan poisoned pages when split a poisoned folio to large folios */
> + bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
> + new_order != 0;
I'd shorten this to "handle_hwpoison" or sth like that.
Maybe we can make it const and fit it into a single line.
Comparison with 0 is not required.
const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
> long new_nr_pages = 1 << new_order;
> long nr_pages = 1 << old_order;
> long i;
>
> + folio_clear_has_hwpoisoned(folio);
> +
> + /* Check first new_nr_pages since the loop below skips them */
> + if (check_poisoned_pages &&
> + page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
> + folio_set_has_hwpoisoned(folio);
> /*
> * Skip the first new_nr_pages, since the new folio from them have all
> * the flags from the original folio.
> */
> for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
> struct page *new_head = &folio->page + i;
> -
> /*
> * Careful: new_folio is not a "real" folio before we cleared PageTail.
> * Don't pass it around before clear_compound_head().
> */
> struct folio *new_folio = (struct folio *)new_head;
> + bool poisoned_new_folio = check_poisoned_pages &&
> + page_range_has_hwpoisoned(new_head, new_nr_pages);
Is the temp variable really required? I'm afraid it is a bit ugly either way :)
I'd just move it into the if() below.
if (handle_hwpoison &&
    page_range_has_hwpoisoned(new_head, new_nr_pages))
	folio_set_has_hwpoisoned(new_folio);
--
Cheers
David / dhildenb
On 22 Oct 2025, at 16:09, David Hildenbrand wrote:
> On 22.10.25 05:35, Zi Yan wrote:
>> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
>> after-split folios containing pages with PG_hwpoisoned flag if the folio is
>> split to >0 order folios. Scan all pages in a to-be-split folio to
>> determine which after-split folios need the flag.
>>
>> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>> avoid the scan and set it on all after-split folios, but resulting false
>> positive has undesirable negative impact. To remove false positive, caller
>> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
>> do the scan. That might be causing a hassle for current and future callers
>> and more costly than doing the scan in the split code. More details are
>> discussed in [1].
>>
>> It is OK that current implementation does not do this, because memory
>> failure code always tries to split to order-0 folios and if a folio cannot
>> be split to order-0, memory failure code either gives warnings or the split
>> is not performed.
>>
>
> We're losing PG_has_hwpoisoned for large folios, so likely this should be
> a stable fix for splitting anything to an order > 0 ?
I was borderline on this, because:
1. before the hotfix, which prevents silently bumping target split order,
memory failure would give a warning when a folio is split to >0 order
folios. The warning is masking this issue.
2. after the hotfix, folios with PG_has_hwpoisoned will not be split
to >0 order folios since memory failure always wants to split a folio
to order 0 and a folio containing LBS folios will not be split, thus
without losing PG_has_hwpoisoned.
But one can use the debugfs interface to split a has_hwpoisoned folio to >0
order folios.
I will add
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
and cc stable in the next version.
>
>> Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>> mm/huge_memory.c | 28 +++++++++++++++++++++++++---
>> 1 file changed, 25 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index fc65ec3393d2..f3896c1f130f 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3455,6 +3455,17 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
>> caller_pins;
>> }
>> +static bool page_range_has_hwpoisoned(struct page *first_page, long nr_pages)
>> +{
>> + long i;
>> +
>> + for (i = 0; i < nr_pages; i++)
>> + if (PageHWPoison(first_page + i))
>> + return true;
>> +
>> + return false;
>
> Nit: I'd just do
>
> static bool page_range_has_hwpoisoned(struct page *page, unsigned long nr_pages)
> {
> for (; nr_pages; page++, nr_pages--)
> if (PageHWPoison(page))
> return true;
> }
> return false;
> }
>
OK, will use this one.
>> +}
>> +
>> /*
>> * It splits @folio into @new_order folios and copies the @folio metadata to
>> * all the resulting folios.
>> @@ -3462,22 +3473,32 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
>> static void __split_folio_to_order(struct folio *folio, int old_order,
>> int new_order)
>> {
>> + /* Scan poisoned pages when split a poisoned folio to large folios */
>> + bool check_poisoned_pages = folio_test_has_hwpoisoned(folio) &&
>> + new_order != 0;
>
> I'd shorten this to "handle_hwpoison" or sth like that.
>
> Maybe we can make it const and fit it into a single line.
>
> Comparison with 0 is not required.
>
> const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
Sure, will use this.
>
>> long new_nr_pages = 1 << new_order;
>> long nr_pages = 1 << old_order;
>> long i;
>> + folio_clear_has_hwpoisoned(folio);
>> +
>> + /* Check first new_nr_pages since the loop below skips them */
>> + if (check_poisoned_pages &&
>> + page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
>> + folio_set_has_hwpoisoned(folio);
>> /*
>> * Skip the first new_nr_pages, since the new folio from them have all
>> * the flags from the original folio.
>> */
>> for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
>> struct page *new_head = &folio->page + i;
>> -
>> /*
>> * Careful: new_folio is not a "real" folio before we cleared PageTail.
>> * Don't pass it around before clear_compound_head().
>> */
>> struct folio *new_folio = (struct folio *)new_head;
>> + bool poisoned_new_folio = check_poisoned_pages &&
>> + page_range_has_hwpoisoned(new_head, new_nr_pages);
>
> Is the temp variable really required? I'm afraid it is a bit ugly either way :)
>
> I'd just move it into the if() below.
>
> if (handle_hwpoison &&
> page_range_has_hwpoisoned(new_head, new_nr_pages)
> folio_set_has_hwpoisoned(new_folio);
>
Sure. :)
--
Best Regards,
Yan, Zi
On 22.10.25 22:27, Zi Yan wrote:
> On 22 Oct 2025, at 16:09, David Hildenbrand wrote:
>
>> On 22.10.25 05:35, Zi Yan wrote:
>>> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
>>> after-split folios containing pages with PG_hwpoisoned flag if the folio is
>>> split to >0 order folios. Scan all pages in a to-be-split folio to
>>> determine which after-split folios need the flag.
>>>
>>> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>>> avoid the scan and set it on all after-split folios, but resulting false
>>> positive has undesirable negative impact. To remove false positive, caller
>>> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
>>> do the scan. That might be causing a hassle for current and future callers
>>> and more costly than doing the scan in the split code. More details are
>>> discussed in [1].
>>>
>>> It is OK that current implementation does not do this, because memory
>>> failure code always tries to split to order-0 folios and if a folio cannot
>>> be split to order-0, memory failure code either gives warnings or the split
>>> is not performed.
>>>
>>
>> We're losing PG_has_hwpoisoned for large folios, so likely this should be
>> a stable fix for splitting anything to an order > 0 ?
>
> I was the borderline on this, because:
>
> 1. before the hotfix, which prevents silently bumping target split order,
> memory failure would give a warning when a folio is split to >0 order
> folios. The warning is masking this issue.
> 2. after the hotfix, folios with PG_has_hwpoisoned will not be split
> to >0 order folios since memory failure always wants to split a folio
> to order 0 and a folio containing LBS folios will not be split, thus
> without losing PG_has_hwpoisoned.
>
I was rather wondering about something like
a) memory failure wants to split to some order (order-0?) but fails the
split (e.g., raised reference). hwpoison is set.
b) Later, something else (truncation?) wants to split to order > 0 and
loses the hwpoison bit.
Would that be possible?
>
> I will add
> Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
> and cc stable in the next version.
That would be better I think. But then you have to pull this patch out
as well from this series, gah :)
--
Cheers
David / dhildenb
On 22 Oct 2025, at 16:34, David Hildenbrand wrote:
> On 22.10.25 22:27, Zi Yan wrote:
>> On 22 Oct 2025, at 16:09, David Hildenbrand wrote:
>>
>>> On 22.10.25 05:35, Zi Yan wrote:
>>>> folio split clears PG_has_hwpoisoned, but the flag should be preserved in
>>>> after-split folios containing pages with PG_hwpoisoned flag if the folio is
>>>> split to >0 order folios. Scan all pages in a to-be-split folio to
>>>> determine which after-split folios need the flag.
>>>>
>>>> An alternatives is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
>>>> avoid the scan and set it on all after-split folios, but resulting false
>>>> positive has undesirable negative impact. To remove false positive, caller
>>>> of folio_test_has_hwpoisoned() and folio_contain_hwpoisoned_page() needs to
>>>> do the scan. That might be causing a hassle for current and future callers
>>>> and more costly than doing the scan in the split code. More details are
>>>> discussed in [1].
>>>>
>>>> It is OK that current implementation does not do this, because memory
>>>> failure code always tries to split to order-0 folios and if a folio cannot
>>>> be split to order-0, memory failure code either gives warnings or the split
>>>> is not performed.
>>>>
>>>
>>> We're losing PG_has_hwpoisoned for large folios, so likely this should be
>>> a stable fix for splitting anything to an order > 0 ?
>>
>> I was the borderline on this, because:
>>
>> 1. before the hotfix, which prevents silently bumping target split order,
>> memory failure would give a warning when a folio is split to >0 order
>> folios. The warning is masking this issue.
>> 2. after the hotfix, folios with PG_has_hwpoisoned will not be split
>> to >0 order folios since memory failure always wants to split a folio
>> to order 0 and a folio containing LBS folios will not be split, thus
>> without losing PG_has_hwpoisoned.
>>
>
> I was rather wondering about something like
>
> a) memory failure wants to split to some order (order-0?) but fails the split (e.g., raised reference). hwpoison is set.
>
> b) Later, something else (truncation?) wants to split to order > 0 and loses the hwpoison bit.
>
> Would that be possible?
Yeah, that is possible after commit 7460b470a131 ("mm/truncate: use folio_split()
in truncate operation") when truncation splits a folio to >0 order folios.
>
>>
>> I will add
>> Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
>> and cc stable in the next version.
>
> That would be better I think. But then you have to pull this patch out as well from this series, gah :)
Yep, let me tell this horrible story in the cover letter.
--
Best Regards,
Yan, Zi