From: Kairui Song <kasong@tencent.com>
The helper for shmem swap freeing does not handle the order of swap
entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
it retrieves the entry order beforehand using xa_get_order without lock
protection, so it may see an outdated order value if the entry is split
or changed in other ways between the xa_get_order and the
xa_cmpxchg_irq.
Besides, the order could grow larger than expected and cause
truncation to erase data beyond the end border. For example, if the
target entry and the following entries are swapped in or freed, and then
a large folio is added in place and swapped out using the same swap
entry, the xa_cmpxchg_irq will still succeed. This is very unlikely to
happen, though.
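For illustration, the old sequence in shmem_free_swap() roughly looks
like the sketch below (simplified, with the race window marked):

    int order = xa_get_order(&mapping->i_pages, index); /* read without xa_lock */

    /*
     * <-- window: the entry at @index can be split or replaced here,
     * so @order may no longer describe what the cmpxchg removes.
     */
    old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
    if (old != radswap)
        return 0;
    /* Puts 1 << order entries based on a possibly stale order. */
    swap_put_entries_direct(radix_to_swp_entry(radswap), 1 << order);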
To fix that, open code the XArray cmpxchg and put the order retrieval
and the value check in the same critical section. Also, ensure the entry
won't exceed the end border, and skip it if it goes across the border.
Skipping large swap entries that cross the end border is safe here.
Shmem truncate iterates the range twice: in the first iteration,
find_lock_entries has already filtered out such entries, and shmem will
swap in the entries that cross the end border and partially truncate the
folio (splitting the folio or at least zeroing part of it). So in the
second loop here, if we see a swap entry that crosses the end border,
its content must have been erased already at least once.
I observed random swapoff hangs and kernel panics when stress testing
ZSWAP with shmem. After applying this patch, all problems are gone.
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Cc: stable@vger.kernel.org
Signed-off-by: Kairui Song <kasong@tencent.com>
---
Changes in v2:
- Fix a potential retry loop issue and improve the code style, thanks
  to Baolin Wang. I didn't split the change into two patches because a
  separate patch doesn't stand well as a fix.
- Link to v1: https://lore.kernel.org/r/20260112-shmem-swap-fix-v1-1-0f347f4f6952@tencent.com
---
mm/shmem.c | 45 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 34 insertions(+), 11 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 0b4c8c70d017..fadd5dd33d8b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
* being freed).
*/
static long shmem_free_swap(struct address_space *mapping,
- pgoff_t index, void *radswap)
+ pgoff_t index, pgoff_t end, void *radswap)
{
- int order = xa_get_order(&mapping->i_pages, index);
- void *old;
+ XA_STATE(xas, &mapping->i_pages, index);
+ unsigned int nr_pages = 0;
+ pgoff_t base;
+ void *entry;
- old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
- if (old != radswap)
- return 0;
- swap_put_entries_direct(radix_to_swp_entry(radswap), 1 << order);
+ xas_lock_irq(&xas);
+ entry = xas_load(&xas);
+ if (entry == radswap) {
+ nr_pages = 1 << xas_get_order(&xas);
+ base = round_down(xas.xa_index, nr_pages);
+ if (base < index || base + nr_pages - 1 > end)
+ nr_pages = 0;
+ else
+ xas_store(&xas, NULL);
+ }
+ xas_unlock_irq(&xas);
+
+ if (nr_pages)
+ swap_put_entries_direct(radix_to_swp_entry(radswap), nr_pages);
- return 1 << order;
+ return nr_pages;
}
/*
@@ -1124,8 +1136,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
if (xa_is_value(folio)) {
if (unfalloc)
continue;
- nr_swaps_freed += shmem_free_swap(mapping,
- indices[i], folio);
+ nr_swaps_freed += shmem_free_swap(mapping, indices[i],
+ end - 1, folio);
continue;
}
@@ -1191,12 +1203,23 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
folio = fbatch.folios[i];
if (xa_is_value(folio)) {
+ int order;
long swaps_freed;
if (unfalloc)
continue;
- swaps_freed = shmem_free_swap(mapping, indices[i], folio);
+ swaps_freed = shmem_free_swap(mapping, indices[i],
+ end - 1, folio);
if (!swaps_freed) {
+ /*
+ * If found a large swap entry cross the end border,
+ * skip it as the truncate_inode_partial_folio above
+ * should have at least zerod its content once.
+ */
+ order = shmem_confirm_swap(mapping, indices[i],
+ radix_to_swp_entry(folio));
+ if (order > 0 && indices[i] + order > end)
+ continue;
/* Swap was replaced by page: retry */
index = indices[i];
break;
---
base-commit: fe2c34b6ea5a0e1175c30d59bc1c28caafb02c62
change-id: 20260111-shmem-swap-fix-8d0e20a14b5d
Best regards,
--
Kairui Song <kasong@tencent.com>
On 1/19/26 12:55 AM, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> The helper for shmem swap freeing does not handle the order of swap
> entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
> it retrieves the entry order beforehand using xa_get_order without lock
> protection, so it may see an outdated order value if the entry is split
> or changed in other ways between the xa_get_order and the
> xa_cmpxchg_irq.
>
> Besides, the order could grow larger than expected and cause
> truncation to erase data beyond the end border. For example, if the
> target entry and the following entries are swapped in or freed, and then
> a large folio is added in place and swapped out using the same swap
> entry, the xa_cmpxchg_irq will still succeed. This is very unlikely to
> happen, though.
>
> To fix that, open code the XArray cmpxchg and put the order retrieval
> and the value check in the same critical section. Also, ensure the entry
> won't exceed the end border, and skip it if it goes across the border.
>
> Skipping large swap entries that cross the end border is safe here.
> Shmem truncate iterates the range twice: in the first iteration,
> find_lock_entries has already filtered out such entries, and shmem will
> swap in the entries that cross the end border and partially truncate the
> folio (splitting the folio or at least zeroing part of it). So in the
> second loop here, if we see a swap entry that crosses the end border,
> its content must have been erased already at least once.
>
> I observed random swapoff hangs and kernel panics when stress testing
> ZSWAP with shmem. After applying this patch, all problems are gone.
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
> Changes in v2:
> - Fix a potential retry loop issue and improve the code style, thanks
>   to Baolin Wang. I didn't split the change into two patches because a
>   separate patch doesn't stand well as a fix.
> - Link to v1: https://lore.kernel.org/r/20260112-shmem-swap-fix-v1-1-0f347f4f6952@tencent.com
> ---
> mm/shmem.c | 45 ++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 34 insertions(+), 11 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 0b4c8c70d017..fadd5dd33d8b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
> * being freed).
> */
> static long shmem_free_swap(struct address_space *mapping,
> - pgoff_t index, void *radswap)
> + pgoff_t index, pgoff_t end, void *radswap)
> {
> - int order = xa_get_order(&mapping->i_pages, index);
> - void *old;
> + XA_STATE(xas, &mapping->i_pages, index);
> + unsigned int nr_pages = 0;
> + pgoff_t base;
> + void *entry;
>
> - old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> - if (old != radswap)
> - return 0;
> - swap_put_entries_direct(radix_to_swp_entry(radswap), 1 << order);
> + xas_lock_irq(&xas);
> + entry = xas_load(&xas);
> + if (entry == radswap) {
> + nr_pages = 1 << xas_get_order(&xas);
> + base = round_down(xas.xa_index, nr_pages);
> + if (base < index || base + nr_pages - 1 > end)
> + nr_pages = 0;
> + else
> + xas_store(&xas, NULL);
> + }
> + xas_unlock_irq(&xas);
> +
> + if (nr_pages)
> + swap_put_entries_direct(radix_to_swp_entry(radswap), nr_pages);
>
> - return 1 << order;
> + return nr_pages;
> }
>
> /*
> @@ -1124,8 +1136,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
> if (xa_is_value(folio)) {
> if (unfalloc)
> continue;
> - nr_swaps_freed += shmem_free_swap(mapping,
> - indices[i], folio);
> + nr_swaps_freed += shmem_free_swap(mapping, indices[i],
> + end - 1, folio);
> continue;
> }
>
> @@ -1191,12 +1203,23 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
> folio = fbatch.folios[i];
>
> if (xa_is_value(folio)) {
> + int order;
> long swaps_freed;
>
> if (unfalloc)
> continue;
> - swaps_freed = shmem_free_swap(mapping, indices[i], folio);
> + swaps_freed = shmem_free_swap(mapping, indices[i],
> + end - 1, folio);
> if (!swaps_freed) {
> + /*
> + * If found a large swap entry cross the end border,
> + * skip it as the truncate_inode_partial_folio above
> + * should have at least zerod its content once.
> + */
> + order = shmem_confirm_swap(mapping, indices[i],
> + radix_to_swp_entry(folio));
> + if (order > 0 && indices[i] + order > end)
> + continue;
The latter check should be 'indices[i] + 1 << order > end', right?
On Mon, Jan 19, 2026 at 11:04 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
> On 1/19/26 12:55 AM, Kairui Song wrote:
> > From: Kairui Song <kasong@tencent.com>
> >
> > if (!swaps_freed) {
> > + /*
> > + * If found a large swap entry cross the end border,
> > + * skip it as the truncate_inode_partial_folio above
> > + * should have at least zerod its content once.
> > + */
> > + order = shmem_confirm_swap(mapping, indices[i],
> > + radix_to_swp_entry(folio));
> > + if (order > 0 && indices[i] + order > end)
> > + continue;
>
> The latter check should be 'indices[i] + 1 << order > end', right?
Yes, you are right, it should be 1 << order, thanks!
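For the record, the corrected check would presumably also need explicit
parentheses, since '+' binds tighter than '<<' in C (exact form to be
confirmed in the next revision):

    if (order > 0 && indices[i] + (1 << order) > end)
        continue;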
On Mon, 19 Jan 2026 00:55:59 +0800 Kairui Song <ryncsn@gmail.com> wrote:
> From: Kairui Song <kasong@tencent.com>
>
> The helper for shmem swap freeing does not handle the order of swap
> entries correctly. It uses xa_cmpxchg_irq to erase the swap entry, but
> it retrieves the entry order beforehand using xa_get_order without lock
> protection, so it may see an outdated order value if the entry is split
> or changed in other ways between the xa_get_order and the
> xa_cmpxchg_irq.
>
> Besides, the order could grow larger than expected and cause
> truncation to erase data beyond the end border. For example, if the
> target entry and the following entries are swapped in or freed, and then
> a large folio is added in place and swapped out using the same swap
> entry, the xa_cmpxchg_irq will still succeed. This is very unlikely to
> happen, though.
>
> To fix that, open code the XArray cmpxchg and put the order retrieval
> and the value check in the same critical section. Also, ensure the entry
> won't exceed the end border, and skip it if it goes across the border.
>
> Skipping large swap entries that cross the end border is safe here.
> Shmem truncate iterates the range twice: in the first iteration,
> find_lock_entries has already filtered out such entries, and shmem will
> swap in the entries that cross the end border and partially truncate the
> folio (splitting the folio or at least zeroing part of it). So in the
> second loop here, if we see a swap entry that crosses the end border,
> its content must have been erased already at least once.
>
> I observed random swapoff hangs and kernel panics when stress testing
> ZSWAP with shmem. After applying this patch, all problems are gone.
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
September 2024.
Seems about right. A researcher recently found that kernel bugs take two years
to fix. https://pebblebed.com/blog/kernel-bugs?ref=itsfoss.com
>
> ...
>
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
> * being freed).
> */
> static long shmem_free_swap(struct address_space *mapping,
> - pgoff_t index, void *radswap)
> + pgoff_t index, pgoff_t end, void *radswap)
> {
> - int order = xa_get_order(&mapping->i_pages, index);
> - void *old;
> + XA_STATE(xas, &mapping->i_pages, index);
> + unsigned int nr_pages = 0;
> + pgoff_t base;
> + void *entry;
>
> - old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> - if (old != radswap)
> - return 0;
> - swap_put_entries_direct(radix_to_swp_entry(radswap), 1 << order);
> + xas_lock_irq(&xas);
> + entry = xas_load(&xas);
> + if (entry == radswap) {
> + nr_pages = 1 << xas_get_order(&xas);
> + base = round_down(xas.xa_index, nr_pages);
> + if (base < index || base + nr_pages - 1 > end)
> + nr_pages = 0;
> + else
> + xas_store(&xas, NULL);
> + }
> + xas_unlock_irq(&xas);
> +
> + if (nr_pages)
> + swap_put_entries_direct(radix_to_swp_entry(radswap), nr_pages);
>
> - return 1 << order;
> + return nr_pages;
> }
>
What tree was this prepared against?
Both Linus mainline and mm.git have
: static long shmem_free_swap(struct address_space *mapping,
: pgoff_t index, void *radswap)
: {
: int order = xa_get_order(&mapping->i_pages, index);
: void *old;
:
: old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
: if (old != radswap)
: return 0;
: free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
:
: return 1 << order;
: }
but that free_swap_and_cache_nr() call is absent from your tree.
On Mon, Jan 19, 2026 at 3:33 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 19 Jan 2026 00:55:59 +0800 Kairui Song <ryncsn@gmail.com> wrote:
>
> > From: Kairui Song <kasong@tencent.com>
> >
> > I observed random swapoff hangs and kernel panics when stress testing
> > ZSWAP with shmem. After applying this patch, all problems are gone.
> >
> > Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
>
> September 2024.
>
> Seems about right. A researcher recently found that kernel bugs take two years
> to fix. https://pebblebed.com/blog/kernel-bugs?ref=itsfoss.com
>
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -962,17 +962,29 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
> > * being freed).
> > */
> > static long shmem_free_swap(struct address_space *mapping,
> > - pgoff_t index, void *radswap)
> > + pgoff_t index, pgoff_t end, void *radswap)
> > {
> > - int order = xa_get_order(&mapping->i_pages, index);
> > - void *old;
> > + XA_STATE(xas, &mapping->i_pages, index);
> > + unsigned int nr_pages = 0;
> > + pgoff_t base;
> > + void *entry;
> >
> > - old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> > - if (old != radswap)
> > - return 0;
> > - swap_put_entries_direct(radix_to_swp_entry(radswap), 1 << order);
> > + xas_lock_irq(&xas);
> > + entry = xas_load(&xas);
> > + if (entry == radswap) {
> > + nr_pages = 1 << xas_get_order(&xas);
> > + base = round_down(xas.xa_index, nr_pages);
> > + if (base < index || base + nr_pages - 1 > end)
> > + nr_pages = 0;
> > + else
> > + xas_store(&xas, NULL);
> > + }
> > + xas_unlock_irq(&xas);
> > +
> > + if (nr_pages)
> > + swap_put_entries_direct(radix_to_swp_entry(radswap), nr_pages);
> >
> > - return 1 << order;
> > + return nr_pages;
> > }
> >
>
> What tree was this prepared against?
>
> Both Linus mainline and mm.git have
>
> : static long shmem_free_swap(struct address_space *mapping,
> : pgoff_t index, void *radswap)
> : {
> : int order = xa_get_order(&mapping->i_pages, index);
> : void *old;
> :
> : old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> : if (old != radswap)
> : return 0;
> : free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
> :
> : return 1 << order;
> : }
>
> but that free_swap_and_cache_nr() call is absent from your tree.
Oh, I tested and sent this patch based on mm-unstable, because the bug
was found while I was testing the swap table series. This is a 2-year-old
existing bug though. Swapoff under high system pressure is not a very
common thing, and maybe mTHP for shmem is currently not very commonly
used either? So maybe that's why no one found this issue.
free_swap_and_cache_nr was renamed to swap_put_entries_direct in
mm-unstable; that is unrelated to this fix or bug. The rename was made
here:
https://lore.kernel.org/linux-mm/20251220-swap-table-p2-v5-14-8862a265a033@tencent.com/
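(For reference, a version based on mainline would presumably differ only
in the freeing call, something like:

    if (nr_pages)
        free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);

instead of the swap_put_entries_direct() call above.)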
Should I resend this patch based on mainline and rebase that series?
Or should we merge this in mm-unstable first, then I can send separate
fixes for stable?
On Mon, 19 Jan 2026 10:17:50 +0800 Kairui Song <ryncsn@gmail.com> wrote:
> >
> > What tree was this prepared against?
> >
> > Both Linus mainline and mm.git have
> >
> > : static long shmem_free_swap(struct address_space *mapping,
> > : pgoff_t index, void *radswap)
> > : {
> > : int order = xa_get_order(&mapping->i_pages, index);
> > : void *old;
> > :
> > : old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
> > : if (old != radswap)
> > : return 0;
> > : free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
> > :
> > : return 1 << order;
> > : }
> >
> > but that free_swap_and_cache_nr() call is absent from your tree.
>
> Oh, I tested and sent this patch based on mm-unstable, because the bug
> was found while I was testing the swap table series. This is a 2-year-old
> existing bug though. Swapoff under high system pressure is not a very
> common thing, and maybe mTHP for shmem is currently not very commonly
> used either? So maybe that's why no one found this issue.
>
> free_swap_and_cache_nr was renamed to swap_put_entries_direct in
> mm-unstable; that is unrelated to this fix or bug. The rename was made
> here:
> https://lore.kernel.org/linux-mm/20251220-swap-table-p2-v5-14-8862a265a033@tencent.com/
>
> Should I resend this patch based on mainline and rebase that series?
> Or should we merge this in mm-unstable first, then I can send separate
> fixes for stable?
I think a clean fix against Linus mainline, please. Then let's take a
look at what's needed to repair any later problems.