From: Kairui Song <kasong@tencent.com>
The helper for shmem swap freeing does not handle the order of swap
entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
it reads the entry order beforehand using xa_get_order without lock
protection, so it may see a stale order value if the entry is split or
changed in other ways between the xa_get_order and the xa_cmpxchg_irq.

Besides, the order could also grow larger than expected and cause
truncation to erase data beyond the end border. For example, if the
target entry and the following entries are swapped in or freed, and a
large folio is then added in their place and swapped out reusing the
same swap entry, the xa_cmpxchg_irq will still succeed. This is very
unlikely to happen, though.
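
For reference, this is roughly the pre-patch helper (reconstructed from
the removed lines in the hunk below), with the race window annotated;
it is only an illustration, not part of the patch:

static long shmem_free_swap(struct address_space *mapping,
			    pgoff_t index, void *radswap)
{
	/* (1) The order is read without holding the xa_lock... */
	int order = xa_get_order(&mapping->i_pages, index);
	void *old;

	/*
	 * (2) ...so the entry can be split, or freed and replaced by a
	 * larger one carrying the same value, before the cmpxchg below.
	 * The cmpxchg still succeeds, and then (1 << order) slots are
	 * freed based on the stale order.
	 */
	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
	if (old != radswap)
		return 0;
	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);

	return 1 << order;
}
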
To fix that, open code the Xarray cmpxchg and put the order retrieval
and value checking in the same critical section. Also ensure the freed
range won't exceed the end border: skip the entry if it crosses the
border.

Skipping large swap entries that cross the end border is safe here.
Shmem truncate iterates the range twice: in the first iteration,
find_lock_entries has already filtered out such entries, and shmem will
swap in the entries that cross the end border and partially truncate
the folio (splitting it or at least zeroing part of it). So if we see a
swap entry that crosses the end border in the second loop here, its
content must have been erased at least once already.

I observed random swapoff hangs and kernel panics when stress testing
ZSWAP with shmem. After applying this patch, all problems are gone.
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Cc: stable@vger.kernel.org
Signed-off-by: Kairui Song <kasong@tencent.com>
---
Changes in v3:
- Rebased on top of mainline.
- Fix nr_pages calculation [ Baolin Wang ]
- Link to v2: https://lore.kernel.org/r/20260119-shmem-swap-fix-v2-1-034c946fd393@tencent.com
Changes in v2:
- Fix a potential retry loop issue and improve code style, thanks to
  Baolin Wang. I didn't split the change into two patches because a
  separate patch doesn't stand well as a fix.
- Link to v1: https://lore.kernel.org/r/20260112-shmem-swap-fix-v1-1-0f347f4f6952@tencent.com
---
mm/shmem.c | 45 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 34 insertions(+), 11 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index ec6c01378e9d..6c3485d24d66 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
* being freed).
*/
static long shmem_free_swap(struct address_space *mapping,
- pgoff_t index, void *radswap)
+ pgoff_t index, pgoff_t end, void *radswap)
{
- int order = xa_get_order(&mapping->i_pages, index);
- void *old;
+ XA_STATE(xas, &mapping->i_pages, index);
+ unsigned int nr_pages = 0;
+ pgoff_t base;
+ void *entry;
- old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
- if (old != radswap)
- return 0;
- free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
+ xas_lock_irq(&xas);
+ entry = xas_load(&xas);
+ if (entry == radswap) {
+ nr_pages = 1 << xas_get_order(&xas);
+ base = round_down(xas.xa_index, nr_pages);
+ if (base < index || base + nr_pages - 1 > end)
+ nr_pages = 0;
+ else
+ xas_store(&xas, NULL);
+ }
+ xas_unlock_irq(&xas);
+
+ if (nr_pages)
+ free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);
- return 1 << order;
+ return nr_pages;
}
/*
@@ -1124,8 +1136,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
if (xa_is_value(folio)) {
if (unfalloc)
continue;
- nr_swaps_freed += shmem_free_swap(mapping,
- indices[i], folio);
+ nr_swaps_freed += shmem_free_swap(mapping, indices[i],
+ end - 1, folio);
continue;
}
@@ -1191,12 +1203,23 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
folio = fbatch.folios[i];
if (xa_is_value(folio)) {
+ int order;
long swaps_freed;
if (unfalloc)
continue;
- swaps_freed = shmem_free_swap(mapping, indices[i], folio);
+ swaps_freed = shmem_free_swap(mapping, indices[i],
+ end - 1, folio);
if (!swaps_freed) {
+ /*
+ * If found a large swap entry cross the end border,
+ * skip it as the truncate_inode_partial_folio above
+ * should have at least zerod its content once.
+ */
+ order = shmem_confirm_swap(mapping, indices[i],
+ radix_to_swp_entry(folio));
+ if (order > 0 && indices[i] + (1 << order) > end)
+ continue;
/* Swap was replaced by page: retry */
index = indices[i];
break;
---
base-commit: 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7
change-id: 20260111-shmem-swap-fix-8d0e20a14b5d
Best regards,
--
Kairui Song <kasong@tencent.com>
Kairui Song <ryncsn@gmail.com> wrote:
> From: Kairui Song <kasong@tencent.com>
>
> The helper for shmem swap freeing is not handling the order of swap
> entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
> it gets the entry order before that using xa_get_order without lock
> protection, and it may get an outdated order value if the entry is split
> or changed in other ways after the xa_get_order and before the
> xa_cmpxchg_irq.
>
> And besides, the order could grow and be larger than expected, and cause
> truncation to erase data beyond the end border. For example, if the
> target entry and following entries are swapped in or freed, then a large
> folio was added in place and swapped out, using the same entry, the
> xa_cmpxchg_irq will still succeed, it's very unlikely to happen though.
>
> To fix that, open code the Xarray cmpxchg and put the order retrieval
> and value checking in the same critical section. Also, ensure the order
> won't exceed the end border, skip it if the entry goes across the
> border.
>
> Skipping large swap entries crosses the end border is safe here.
> Shmem truncate iterates the range twice, in the first iteration,
> find_lock_entries already filtered such entries, and shmem will
> swapin the entries that cross the end border and partially truncate the
> folio (split the folio or at least zero part of it). So in the second
> loop here, if we see a swap entry that crosses the end order, it must
> at least have its content erased already.
>
> I observed random swapoff hangs and kernel panics when stress testing
> ZSWAP with shmem. After applying this patch, all problems are gone.
>
Hi everyone, another flag from running mm-next through AI review. I'm
low confidence on this one:
> diff --git a/mm/shmem.c b/mm/shmem.c
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
[ ... ]
> @@ -1191,12 +1203,23 @@
> folio = fbatch.folios[i];
>
> if (xa_is_value(folio)) {
> + int order;
> long swaps_freed;
>
> if (unfalloc)
> continue;
> - swaps_freed = shmem_free_swap(mapping, indices[i], folio);
> + swaps_freed = shmem_free_swap(mapping, indices[i],
> + end - 1, folio);
> if (!swaps_freed) {
> + /*
> + * If found a large swap entry cross the end border,
> + * skip it as the truncate_inode_partial_folio above
> + * should have at least zerod its content once.
> + */
> + order = shmem_confirm_swap(mapping, indices[i],
> + radix_to_swp_entry(folio));
> + if (order > 0 && indices[i] + (1 << order) > end)
> + continue;
> /* Swap was replaced by page: retry */
> index = indices[i];
> break;
Could this code infinite loop if a large swap entry crosses the START
boundary instead of the END boundary?
When shmem_free_swap() returns 0, it could be either because:
(a) The entry was replaced by a page, or
(b) The entry failed the boundary check (base < index ||
base + nr_pages - 1 > end)
The boundary check here only handles case (b) for entries crossing the
END boundary:
if (order > 0 && indices[i] + (1 << order) > end)
continue;
But what happens if the entry crosses the START boundary? If
find_get_entries() returns a large swap entry at indices[i] where
the entry's base (calculated as indices[i] & ~((1 << order) - 1)) is
less than the truncation start point, then shmem_free_swap() will
return 0 due to the "base < index" check. The code will then call
shmem_confirm_swap(), get the order, check if it crosses the END
boundary (which it doesn't), and retry with the same index:
index = indices[i];
break;
The next iteration will find the same entry again at the same index,
leading to an infinite loop. For example:
- Truncating range [18, 30]
- Large swap entry at [16, 23] (order 3, 8 pages)
- indices[i] = 18
- shmem_free_swap() sees base=16 < index=18, returns 0
- Check: 18 + 8 > 30 is false (26 <= 30)
- Retries with index=18
- Loop repeats indefinitely
Should the boundary check also handle the START case, perhaps:
if (order > 0) {
pgoff_t base = indices[i] & ~((1UL << order) - 1);
if (base + (1 << order) - 1 > end || base < start)
continue;
}
where 'start' is preserved from before the loop?
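
To make the boundary arithmetic concrete, here is a tiny standalone
check of the example above (hypothetical userspace code; round_down
mirrors the kernel macro for power-of-two sizes):

#include <stdio.h>

#define round_down(x, y)	((x) & ~((y) - 1UL))

int main(void)
{
	unsigned long start = 18, end = 30;	/* truncating [18, 30] */
	unsigned long index = 18;		/* indices[i] */
	unsigned int order = 3;			/* entry covers [16, 23] */
	unsigned long nr = 1UL << order;
	unsigned long base = round_down(index, nr);

	/* shmem_free_swap()'s "base < index" bail-out triggers: */
	printf("base=%lu < index=%lu -> shmem_free_swap returns 0\n",
	       base, index);
	/* the caller's END-only check does not skip it: */
	printf("end check: %lu + %lu > %lu ? %s -> retried at same index\n",
	       index, nr, end, index + nr > end ? "yes" : "no");
	/* a START check would catch it: */
	printf("start check: base=%lu < start=%lu ? %s -> would skip\n",
	       base, start, base < start ? "yes" : "no");
	return 0;
}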
On Wed, Jan 28, 2026 at 05:02:34AM +0800, Chris Mason wrote:
>
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
>
> [ ... ]
>
> > @@ -1191,12 +1203,23 @@
> > folio = fbatch.folios[i];
> >
> > if (xa_is_value(folio)) {
> > + int order;
> > long swaps_freed;
> >
> > if (unfalloc)
> > continue;
> > - swaps_freed = shmem_free_swap(mapping, indices[i], folio);
> > + swaps_freed = shmem_free_swap(mapping, indices[i],
> > + end - 1, folio);
> > if (!swaps_freed) {
> > + /*
> > + * If found a large swap entry cross the end border,
> > + * skip it as the truncate_inode_partial_folio above
> > + * should have at least zerod its content once.
> > + */
> > + order = shmem_confirm_swap(mapping, indices[i],
> > + radix_to_swp_entry(folio));
> > + if (order > 0 && indices[i] + (1 << order) > end)
> > + continue;
> > /* Swap was replaced by page: retry */
> > index = indices[i];
> > break;
>
> Could this code infinite loop if a large swap entry crosses the START
> boundary instead of the END boundary?
Thanks for the review! It's really helpful :)
> When shmem_free_swap() returns 0, it could be either because:
> (a) The entry was replaced by a page, or
> (b) The entry failed the boundary check (base < index ||
> base + nr_pages - 1 > end)
>
> The boundary check here only handles case (b) for entries crossing the
> END boundary:
>
> if (order > 0 && indices[i] + (1 << order) > end)
> continue;
>
> But what happens if the entry crosses the START boundary? If
> find_get_entries() returns a large swap entry at indices[i] where
> the entry's base (calculated as indices[i] & ~((1 << order) - 1)) is
> less than the truncation start point, then shmem_free_swap() will
> return 0 due to the "base < index" check. The code will then call
> shmem_confirm_swap(), get the order, check if it crosses the END
> boundary (which it doesn't), and retry with the same index:
>
> index = indices[i];
> break;
>
> The next iteration will find the same entry again at the same index,
> leading to an infinite loop. For example:
>
> - Truncating range [18, 30]
> - Large swap entry at [16, 23] (order 3, 8 pages)
> - indices[i] = 18
> - shmem_free_swap() sees base=16 < index=18, returns 0
> - Check: 18 + 8 > 30 is false (26 <= 30)
> - Retries with index=18
> - Loop repeats indefinitely
I think this is a valid issue. And it's worse than that: during the
`while (index < end)` loop, a new large entry can land anywhere in the
range. If one iteration's starting `index` points into the middle of
any large entry, an infinite loop will occur: indices[0] is always
equal to the `index` value at that moment, shmem_free_swap will fail
because the swap entry's index doesn't match indices[0], and so
`index = indices[i]; break;` keeps it looping forever.
The chance seems very low though.
> Should the boundary check also handle the START case, perhaps:
>
> if (order > 0) {
> pgoff_t base = indices[i] & ~((1UL << order) - 1);
> if (base + (1 << order) - 1 > end || base < start)
> continue;
> }
This still doesn't cover the case when a new large entry somehow lands in the range during the loop.
> where 'start' is preserved from before the loop?
How about following patch:
From 863f38c757ee0898b6b7f0f8c695f551a1380ce8 Mon Sep 17 00:00:00 2001
From: Kairui Song <kasong@tencent.com>
Date: Thu, 29 Jan 2026 00:19:23 +0800
Subject: [PATCH] mm, shmem: prevent infinite loop on truncate race
When truncating a large swap entry, shmem_free_swap() returns 0 when the
entry's index doesn't match the given index due to lookup alignment. The
failure fallback path checks if the entry crosses the end border and
aborts when it happens, so truncate won't erase an unexpected entry or
range. But one scenario was ignored.

When `index` points to the middle of a large swap entry, and the large
swap entry doesn't go across the end border, find_get_entries() will
return that large swap entry as the first item in the batch with
`indices[0]` equal to `index`. The entry's base index will be smaller
than `indices[0]`, so shmem_free_swap() will fail and return 0 due to
the "base < index" check. The code will then call shmem_confirm_swap(),
get the order, check if it crosses the END boundary (which it doesn't),
and retry with the same index.

The next iteration will find the same entry again at the same index with
the same indices, leading to an infinite loop.

Fix this by retrying with the rounded-down index, and abort if that
index falls below the start of the truncate range.
Reported-by: Chris Mason <clm@meta.com>
Closes: https://lore.kernel.org/linux-mm/20260128130336.727049-1-clm@meta.com/
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Fixes: 8a1968bd997f ("mm/shmem, swap: fix race of truncate and swap entry split")
Signed-off-by: Kairui Song <kasong@tencent.com>
---
mm/shmem.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index b9ddd38621a0..fe3719eb5a3c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1211,17 +1211,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
swaps_freed = shmem_free_swap(mapping, indices[i],
end - 1, folio);
if (!swaps_freed) {
- /*
- * If found a large swap entry cross the end border,
- * skip it as the truncate_inode_partial_folio above
- * should have at least zerod its content once.
- */
+ pgoff_t base = indices[i];
+
order = shmem_confirm_swap(mapping, indices[i],
radix_to_swp_entry(folio));
- if (order > 0 && indices[i] + (1 << order) > end)
- continue;
- /* Swap was replaced by page: retry */
- index = indices[i];
+ /*
+ * If found a large swap entry cross the end or start
+ * border, skip it as the truncate_inode_partial_folio
+ * above should have at least zerod its content once.
+ */
+ if (order > 0) {
+ base = round_down(base, 1 << order);
+ if (base < start || base + (1 << order) > end)
+ continue;
+ }
+ /* Swap was replaced by page or extended, retry */
+ index = base;
break;
}
nr_swaps_freed += swaps_freed;
--
2.52.0
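
As a quick sanity check of the new fallback on the scenario above that
the earlier check didn't cover (hypothetical numbers, standalone
userspace arithmetic only):

#include <stdio.h>

#define round_down(x, y)	((x) & ~((y) - 1UL))

int main(void)
{
	/*
	 * A large entry that landed fully inside the range during the
	 * loop: truncating [0, 64), the iteration index 18 points into
	 * the middle of an order-3 entry covering [16, 23].
	 */
	unsigned long start = 0, end = 64;
	unsigned long idx = 18;
	unsigned long nr = 1UL << 3;
	unsigned long base = round_down(idx, nr);

	if (base < start || base + nr > end)
		printf("entry crosses the range border: skip it\n");
	else
		printf("retry at the entry base %lu instead of %lu\n",
		       base, idx);
	return 0;
}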
And I think we really should simplify the whole truncate loop.
On 1/29/26 12:52 AM, Kairui Song wrote:
> On Wed, Jan 28, 2026 at 05:02:34AM +0800, Chris Mason wrote:
>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>
>> [ ... ]
>>
>>> @@ -1191,12 +1203,23 @@
>>> folio = fbatch.folios[i];
>>>
>>> if (xa_is_value(folio)) {
>>> + int order;
>>> long swaps_freed;
>>>
>>> if (unfalloc)
>>> continue;
>>> - swaps_freed = shmem_free_swap(mapping, indices[i], folio);
>>> + swaps_freed = shmem_free_swap(mapping, indices[i],
>>> + end - 1, folio);
>>> if (!swaps_freed) {
>>> + /*
>>> + * If found a large swap entry cross the end border,
>>> + * skip it as the truncate_inode_partial_folio above
>>> + * should have at least zerod its content once.
>>> + */
>>> + order = shmem_confirm_swap(mapping, indices[i],
>>> + radix_to_swp_entry(folio));
>>> + if (order > 0 && indices[i] + (1 << order) > end)
>>> + continue;
>>> /* Swap was replaced by page: retry */
>>> index = indices[i];
>>> break;
>>
>> Could this code infinite loop if a large swap entry crosses the START
>> boundary instead of the END boundary?
>
> Thanks for the review! It's really helpful :)
>
>> When shmem_free_swap() returns 0, it could be either because:
>> (a) The entry was replaced by a page, or
>> (b) The entry failed the boundary check (base < index ||
>> base + nr_pages - 1 > end)
>>
>> The boundary check here only handles case (b) for entries crossing the
>> END boundary:
>>
>> if (order > 0 && indices[i] + (1 << order) > end)
>> continue;
>>
>> But what happens if the entry crosses the START boundary? If
>> find_get_entries() returns a large swap entry at indices[i] where
>> the entry's base (calculated as indices[i] & ~((1 << order) - 1)) is
>> less than the truncation start point, then shmem_free_swap() will
>> return 0 due to the "base < index" check. The code will then call
>> shmem_confirm_swap(), get the order, check if it crosses the END
>> boundary (which it doesn't), and retry with the same index:
>>
>> index = indices[i];
>> break;
>>
>> The next iteration will find the same entry again at the same index,
>> leading to an infinite loop. For example:
>>
>> - Truncating range [18, 30]
>> - Large swap entry at [16, 23] (order 3, 8 pages)
>> - indices[i] = 18
>> - shmem_free_swap() sees base=16 < index=18, returns 0
>> - Check: 18 + 8 > 30 is false (26 <= 30)
>> - Retries with index=18
>> - Loop repeats indefinitely
>
> I think this is a valid issue. And it's worse than that: during the
> `while (index < end)` loop, a new large entry can land anywhere in the
> range. If one iteration's starting `index` points into the middle of
> any large entry, an infinite loop will occur: indices[0] is always
> equal to the `index` value at that moment, shmem_free_swap will fail
> because the swap entry's index doesn't match indices[0], and so
> `index = indices[i]; break;` keeps it looping forever.
>
> The chance seems very low though.
>
>> Should the boundary check also handle the START case, perhaps:
>>
>> if (order > 0) {
>> pgoff_t base = indices[i] & ~((1UL << order) - 1);
>> if (base + (1 << order) - 1 > end || base < start)
>> continue;
>> }
>
> This still doesn't cover the case when a new large entry somehow lands in the range during the loop.
>
>> where 'start' is preserved from before the loop?
>
> How about following patch:
>
> From 863f38c757ee0898b6b7f0f8c695f551a1380ce8 Mon Sep 17 00:00:00 2001
> From: Kairui Song <kasong@tencent.com>
> Date: Thu, 29 Jan 2026 00:19:23 +0800
> Subject: [PATCH] mm, shmem: prevent infinite loop on truncate race
>
> When truncating a large swap entry, shmem_free_swap() returns 0 when the
> entry's index doesn't match the given index due to lookup alignment. The
> failure fallback path checks if the entry crosses the end border and
> aborts when it happens, so truncate won't erase an unexpected entry or
> range. But one scenario was ignored.
>
> When `index` points to the middle of a large swap entry, and the large
> swap entry doesn't go across the end border, find_get_entries() will
> return that large swap entry as the first item in the batch with
> `indices[0]` equal to `index`. The entry's base index will be smaller
> than `indices[0]`, so shmem_free_swap() will fail and return 0 due to
> the "base < index" check. The code will then call shmem_confirm_swap(),
> get the order, check if it crosses the END boundary (which it doesn't),
> and retry with the same index.
>
> The next iteration will find the same entry again at the same index with
> same indices, leading to an infinite loop.
>
> Fix this by retrying with a round-down index, and abort if the index is
> smaller than the truncate range.
>
> Reported-by: Chris Mason <clm@meta.com>
> Closes: https://lore.kernel.org/linux-mm/20260128130336.727049-1-clm@meta.com/
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Fixes: 8a1968bd997f ("mm/shmem, swap: fix race of truncate and swap entry split")
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
Thanks. The fix looks good to me.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
(BTW, I think we can simplify the logic by moving the boundary
validation into shmem_free_swap() in the future).
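
(Purely as a sketch of that idea, and not something proposed concretely
in this thread: shmem_free_swap() could take the whole range and
distinguish "skip" from "retry" itself, roughly along these lines; the
return-value convention here is my own assumption:)

/*
 * Hypothetical variant: the caller passes [start, end) and the helper
 * does all boundary validation under the xa_lock.
 *   > 0  number of swap slots freed
 *     0  entry was replaced by a folio, caller retries
 *   < 0  entry crosses the range border, caller skips it
 */
static long shmem_free_swap(struct address_space *mapping, pgoff_t index,
			    pgoff_t start, pgoff_t end, void *radswap)
{
	XA_STATE(xas, &mapping->i_pages, index);
	unsigned int nr_pages;
	pgoff_t base;
	void *entry;

	xas_lock_irq(&xas);
	entry = xas_load(&xas);
	if (entry != radswap) {
		xas_unlock_irq(&xas);
		return 0;
	}
	nr_pages = 1 << xas_get_order(&xas);
	base = round_down(xas.xa_index, nr_pages);
	if (base < start || base + nr_pages > end) {
		xas_unlock_irq(&xas);
		return -1;
	}
	xas_store(&xas, NULL);
	xas_unlock_irq(&xas);

	free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);
	return nr_pages;
}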
> mm/shmem.c | 23 ++++++++++++++---------
> 1 file changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index b9ddd38621a0..fe3719eb5a3c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1211,17 +1211,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, uoff_t lend,
> swaps_freed = shmem_free_swap(mapping, indices[i],
> end - 1, folio);
> if (!swaps_freed) {
> - /*
> - * If found a large swap entry cross the end border,
> - * skip it as the truncate_inode_partial_folio above
> - * should have at least zerod its content once.
> - */
> + pgoff_t base = indices[i];
> +
> order = shmem_confirm_swap(mapping, indices[i],
> radix_to_swp_entry(folio));
> - if (order > 0 && indices[i] + (1 << order) > end)
> - continue;
> - /* Swap was replaced by page: retry */
> - index = indices[i];
> + /*
> + * If found a large swap entry cross the end or start
> + * border, skip it as the truncate_inode_partial_folio
> + * above should have at least zerod its content once.
> + */
> + if (order > 0) {
> + base = round_down(base, 1 << order);
> + if (base < start || base + (1 << order) > end)
> + continue;
> + }
> + /* Swap was replaced by page or extended, retry */
> + index = base;
> break;
> }
> nr_swaps_freed += swaps_freed;
On 1/28/26 11:52 AM, Kairui Song wrote:
> On Wed, Jan 28, 2026 at 05:02:34AM +0800, Chris Mason wrote:
>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>
>> [ ... ]
>>
>>> @@ -1191,12 +1203,23 @@
>>> folio = fbatch.folios[i];
>>>
>>> if (xa_is_value(folio)) {
>>> + int order;
>>> long swaps_freed;
>>>
>>> if (unfalloc)
>>> continue;
>>> - swaps_freed = shmem_free_swap(mapping, indices[i], folio);
>>> + swaps_freed = shmem_free_swap(mapping, indices[i],
>>> + end - 1, folio);
>>> if (!swaps_freed) {
>>> + /*
>>> + * If found a large swap entry cross the end border,
>>> + * skip it as the truncate_inode_partial_folio above
>>> + * should have at least zerod its content once.
>>> + */
>>> + order = shmem_confirm_swap(mapping, indices[i],
>>> + radix_to_swp_entry(folio));
>>> + if (order > 0 && indices[i] + (1 << order) > end)
>>> + continue;
>>> /* Swap was replaced by page: retry */
>>> index = indices[i];
>>> break;
>>
>> Could this code infinite loop if a large swap entry crosses the START
>> boundary instead of the END boundary?
>
> Thanks for the review! It's really helpful :)
>
>> When shmem_free_swap() returns 0, it could be either because:
>> (a) The entry was replaced by a page, or
>> (b) The entry failed the boundary check (base < index ||
>> base + nr_pages - 1 > end)
>>
>> The boundary check here only handles case (b) for entries crossing the
>> END boundary:
>>
>> if (order > 0 && indices[i] + (1 << order) > end)
>> continue;
>>
>> But what happens if the entry crosses the START boundary? If
>> find_get_entries() returns a large swap entry at indices[i] where
>> the entry's base (calculated as indices[i] & ~((1 << order) - 1)) is
>> less than the truncation start point, then shmem_free_swap() will
>> return 0 due to the "base < index" check. The code will then call
>> shmem_confirm_swap(), get the order, check if it crosses the END
>> boundary (which it doesn't), and retry with the same index:
>>
>> index = indices[i];
>> break;
>>
>> The next iteration will find the same entry again at the same index,
>> leading to an infinite loop. For example:
>>
>> - Truncating range [18, 30]
>> - Large swap entry at [16, 23] (order 3, 8 pages)
>> - indices[i] = 18
>> - shmem_free_swap() sees base=16 < index=18, returns 0
>> - Check: 18 + 8 > 30 is false (26 <= 30)
>> - Retries with index=18
>> - Loop repeats indefinitely
>
> I think this is a valid issue. And it's worse than that: during the
> `while (index < end)` loop, a new large entry can land anywhere in the
> range. If one iteration's starting `index` points into the middle of
> any large entry, an infinite loop will occur: indices[0] is always
> equal to the `index` value at that moment, shmem_free_swap will fail
> because the swap entry's index doesn't match indices[0], and so
> `index = indices[i]; break;` keeps it looping forever.
>
> The chance seems very low though.
>
>> Should the boundary check also handle the START case, perhaps:
>>
>> if (order > 0) {
>> pgoff_t base = indices[i] & ~((1UL << order) - 1);
>> if (base + (1 << order) - 1 > end || base < start)
>> continue;
>> }
>
> This still doesn't cover the case when a new large entry somehow lands in the range during the loop.
FWIW, I'd have been really surprised if Claude actually fixed the
bug... I haven't stopped the prompt from making suggestions, just
because I think it helps explain the potential issue.
>
>> where 'start' is preserved from before the loop?
>
> How about following patch:
>
> From 863f38c757ee0898b6b7f0f8c695f551a1380ce8 Mon Sep 17 00:00:00 2001
> From: Kairui Song <kasong@tencent.com>
> Date: Thu, 29 Jan 2026 00:19:23 +0800
> Subject: [PATCH] mm, shmem: prevent infinite loop on truncate race
>
I ran this incremental through and it didn't flag any issues, thanks!
-chris
On Mon, Jan 19, 2026 at 8:11 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> The helper for shmem swap freeing is not handling the order of swap
> entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
> it gets the entry order before that using xa_get_order without lock
> protection, and it may get an outdated order value if the entry is split
> or changed in other ways after the xa_get_order and before the
> xa_cmpxchg_irq.
>
> And besides, the order could grow and be larger than expected, and cause
> truncation to erase data beyond the end border. For example, if the
> target entry and following entries are swapped in or freed, then a large
> folio was added in place and swapped out, using the same entry, the
> xa_cmpxchg_irq will still succeed, it's very unlikely to happen though.
>
> To fix that, open code the Xarray cmpxchg and put the order retrieval
> and value checking in the same critical section. Also, ensure the order
> won't exceed the end border, skip it if the entry goes across the
> border.
>
> Skipping large swap entries crosses the end border is safe here.
> Shmem truncate iterates the range twice, in the first iteration,
> find_lock_entries already filtered such entries, and shmem will
> swapin the entries that cross the end border and partially truncate the
> folio (split the folio or at least zero part of it). So in the second
> loop here, if we see a swap entry that crosses the end order, it must
> at least have its content erased already.
>
> I observed random swapoff hangs and kernel panics when stress testing
> ZSWAP with shmem. After applying this patch, all problems are gone.
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kairui Song <kasong@tencent.com>
Acked-by: Chris Li <chrisl@kernel.org>
For the record, the two-stage retry loop in shmem_undo_range() is not
easy for me to follow. Thanks for the fix.
Chris
> ---
> Changes in v3:
> - Rebased on top of mainline.
> - Fix nr_pages calculation [ Baolin Wang ]
> - Link to v2: https://lore.kernel.org/r/20260119-shmem-swap-fix-v2-1-034c946fd393@tencent.com
>
> Changes in v2:
> - Fix a potential retry loop issue and improvement to code style thanks
> to Baoling Wang. I didn't split the change into two patches because a
> separate patch doesn't stand well as a fix.
> - Link to v1: https://lore.kernel.org/r/20260112-shmem-swap-fix-v1-1-0f347f4f6952@tencent.com
> ---
> mm/shmem.c | 45 ++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 34 insertions(+), 11 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index ec6c01378e9d..6c3485d24d66 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
> * being freed).
> */
> static long shmem_free_swap(struct address_space *mapping,
> - pgoff_t index, void *radswap)
> + pgoff_t index, pgoff_t end, void *radswap)
> {
> - int order = xa_get_order(&mapping->i_pages, index);
> - void *old;
> + XA_STATE(xas, &mapping->i_pages, index);
> + unsigned int nr_pages = 0;
> + pgoff_t base;
> + void *entry;
>
> - old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> - if (old != radswap)
> - return 0;
> - free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
> + xas_lock_irq(&xas);
> + entry = xas_load(&xas);
> + if (entry == radswap) {
> + nr_pages = 1 << xas_get_order(&xas);
> + base = round_down(xas.xa_index, nr_pages);
> + if (base < index || base + nr_pages - 1 > end)
> + nr_pages = 0;
> + else
> + xas_store(&xas, NULL);
> + }
> + xas_unlock_irq(&xas);
> +
> + if (nr_pages)
> + free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);
>
> - return 1 << order;
> + return nr_pages;
> }
>
> /*
> @@ -1124,8 +1136,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, uoff_t lend,
> if (xa_is_value(folio)) {
> if (unfalloc)
> continue;
> - nr_swaps_freed += shmem_free_swap(mapping,
> - indices[i], folio);
> + nr_swaps_freed += shmem_free_swap(mapping, indices[i],
> + end - 1, folio);
> continue;
> }
>
> @@ -1191,12 +1203,23 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, uoff_t lend,
> folio = fbatch.folios[i];
>
> if (xa_is_value(folio)) {
> + int order;
> long swaps_freed;
>
> if (unfalloc)
> continue;
> - swaps_freed = shmem_free_swap(mapping, indices[i], folio);
> + swaps_freed = shmem_free_swap(mapping, indices[i],
> + end - 1, folio);
> if (!swaps_freed) {
> + /*
> + * If found a large swap entry cross the end border,
> + * skip it as the truncate_inode_partial_folio above
> + * should have at least zerod its content once.
> + */
> + order = shmem_confirm_swap(mapping, indices[i],
> + radix_to_swp_entry(folio));
> + if (order > 0 && indices[i] + (1 << order) > end)
> + continue;
> /* Swap was replaced by page: retry */
> index = indices[i];
> break;
>
> ---
> base-commit: 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7
> change-id: 20260111-shmem-swap-fix-8d0e20a14b5d
>
> Best regards,
> --
> Kairui Song <kasong@tencent.com>
>
On 1/20/26 12:11 AM, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> The helper for shmem swap freeing is not handling the order of swap
> entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
> it gets the entry order before that using xa_get_order without lock
> protection, and it may get an outdated order value if the entry is split
> or changed in other ways after the xa_get_order and before the
> xa_cmpxchg_irq.
>
> And besides, the order could grow and be larger than expected, and cause
> truncation to erase data beyond the end border. For example, if the
> target entry and following entries are swapped in or freed, then a large
> folio was added in place and swapped out, using the same entry, the
> xa_cmpxchg_irq will still succeed, it's very unlikely to happen though.
>
> To fix that, open code the Xarray cmpxchg and put the order retrieval
> and value checking in the same critical section. Also, ensure the order
> won't exceed the end border, skip it if the entry goes across the
> border.
>
> Skipping large swap entries crosses the end border is safe here.
> Shmem truncate iterates the range twice, in the first iteration,
> find_lock_entries already filtered such entries, and shmem will
> swapin the entries that cross the end border and partially truncate the
> folio (split the folio or at least zero part of it). So in the second
> loop here, if we see a swap entry that crosses the end order, it must
> at least have its content erased already.
>
> I observed random swapoff hangs and kernel panics when stress testing
> ZSWAP with shmem. After applying this patch, all problems are gone.
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kairui Song <kasong@tencent.com>
LGTM. Thanks.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> Changes in v3:
> - Rebased on top of mainline.
> - Fix nr_pages calculation [ Baolin Wang ]
> - Link to v2: https://lore.kernel.org/r/20260119-shmem-swap-fix-v2-1-034c946fd393@tencent.com
>
> Changes in v2:
> - Fix a potential retry loop issue and improvement to code style thanks
> to Baoling Wang. I didn't split the change into two patches because a
> separate patch doesn't stand well as a fix.
> - Link to v1: https://lore.kernel.org/r/20260112-shmem-swap-fix-v1-1-0f347f4f6952@tencent.com
> ---
> mm/shmem.c | 45 ++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 34 insertions(+), 11 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index ec6c01378e9d..6c3485d24d66 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
> * being freed).
> */
> static long shmem_free_swap(struct address_space *mapping,
> - pgoff_t index, void *radswap)
> + pgoff_t index, pgoff_t end, void *radswap)
> {
> - int order = xa_get_order(&mapping->i_pages, index);
> - void *old;
> + XA_STATE(xas, &mapping->i_pages, index);
> + unsigned int nr_pages = 0;
> + pgoff_t base;
> + void *entry;
>
> - old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> - if (old != radswap)
> - return 0;
> - free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
> + xas_lock_irq(&xas);
> + entry = xas_load(&xas);
> + if (entry == radswap) {
> + nr_pages = 1 << xas_get_order(&xas);
> + base = round_down(xas.xa_index, nr_pages);
> + if (base < index || base + nr_pages - 1 > end)
> + nr_pages = 0;
> + else
> + xas_store(&xas, NULL);
> + }
> + xas_unlock_irq(&xas);
> +
> + if (nr_pages)
> + free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);
>
> - return 1 << order;
> + return nr_pages;
> }
>
> /*
> @@ -1124,8 +1136,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, uoff_t lend,
> if (xa_is_value(folio)) {
> if (unfalloc)
> continue;
> - nr_swaps_freed += shmem_free_swap(mapping,
> - indices[i], folio);
> + nr_swaps_freed += shmem_free_swap(mapping, indices[i],
> + end - 1, folio);
> continue;
> }
>
> @@ -1191,12 +1203,23 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, uoff_t lend,
> folio = fbatch.folios[i];
>
> if (xa_is_value(folio)) {
> + int order;
> long swaps_freed;
>
> if (unfalloc)
> continue;
> - swaps_freed = shmem_free_swap(mapping, indices[i], folio);
> + swaps_freed = shmem_free_swap(mapping, indices[i],
> + end - 1, folio);
> if (!swaps_freed) {
> + /*
> + * If found a large swap entry cross the end border,
> + * skip it as the truncate_inode_partial_folio above
> + * should have at least zerod its content once.
> + */
> + order = shmem_confirm_swap(mapping, indices[i],
> + radix_to_swp_entry(folio));
> + if (order > 0 && indices[i] + (1 << order) > end)
> + continue;
> /* Swap was replaced by page: retry */
> index = indices[i];
> break;
>
> ---
> base-commit: 24d479d26b25bce5faea3ddd9fa8f3a6c3129ea7
> change-id: 20260111-shmem-swap-fix-8d0e20a14b5d
>
> Best regards,
On Mon, Jan 19, 2026 at 8:11 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> The helper for shmem swap freeing is not handling the order of swap
> entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
> it gets the entry order before that using xa_get_order without lock
> protection, and it may get an outdated order value if the entry is split
> or changed in other ways after the xa_get_order and before the
> xa_cmpxchg_irq.
>
> And besides, the order could grow and be larger than expected, and cause
> truncation to erase data beyond the end border. For example, if the
> target entry and following entries are swapped in or freed, then a large
> folio was added in place and swapped out, using the same entry, the
> xa_cmpxchg_irq will still succeed, it's very unlikely to happen though.
>
> To fix that, open code the Xarray cmpxchg and put the order retrieval
> and value checking in the same critical section. Also, ensure the order
> won't exceed the end border, skip it if the entry goes across the
> border.
>
> Skipping large swap entries crosses the end border is safe here.
> Shmem truncate iterates the range twice, in the first iteration,
> find_lock_entries already filtered such entries, and shmem will
> swapin the entries that cross the end border and partially truncate the
> folio (split the folio or at least zero part of it). So in the second
> loop here, if we see a swap entry that crosses the end order, it must
> at least have its content erased already.
>
> I observed random swapoff hangs and kernel panics when stress testing
> ZSWAP with shmem. After applying this patch, all problems are gone.
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kairui Song <kasong@tencent.com>
Good catch.
From the swap POV:
Reviewed-by: Nhat Pham <nphamcs@gmail.com>