From: Kairui Song <kasong@tencent.com>

Shmem may replace a folio in the swap cache if the cached one doesn't
fit the swapin's GFP zone. When doing so, shmem has already double
checked that the swap cache folio is locked, still has the swap cache
flag set, and contains the wanted swap entry, so it is impossible to
fail due to an XArray mismatch. There is even a comment to that effect.

Delete the defensive error handling path and add a WARN_ON instead: if
that ever happens, something has broken the basic principle of how the
swap cache works, and we should catch and fix it.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
mm/shmem.c | 28 +++-------------------------
1 file changed, 3 insertions(+), 25 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index b4d39f2a1e0a..e03793cc5169 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2158,35 +2158,13 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	/* Swap cache still stores N entries instead of a high-order entry */
 	xa_lock_irq(&swap_mapping->i_pages);
 	for (i = 0; i < nr_pages; i++) {
-		void *item = xas_load(&xas);
-
-		if (item != old) {
-			error = -ENOENT;
-			break;
-		}
-
-		xas_store(&xas, new);
+		WARN_ON_ONCE(xas_store(&xas, new) != old);
 		xas_next(&xas);
 	}
-	if (!error) {
-		mem_cgroup_replace_folio(old, new);
-		shmem_update_stats(new, nr_pages);
-		shmem_update_stats(old, -nr_pages);
-	}
 	xa_unlock_irq(&swap_mapping->i_pages);
 
-	if (unlikely(error)) {
-		/*
-		 * Is this possible? I think not, now that our callers
-		 * check both the swapcache flag and folio->private
-		 * after getting the folio lock; but be defensive.
-		 * Reverse old to newpage for clear and free.
-		 */
-		old = new;
-	} else {
-		folio_add_lru(new);
-		*foliop = new;
-	}
+	folio_add_lru(new);
+	*foliop = new;
 
 	folio_clear_swapcache(old);
 	old->private = NULL;
--
2.51.0
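For context, the caller-side double check that the commit message relies
on looks roughly like the sketch below. It is condensed from
shmem_swapin_folio() in mm/shmem.c and is not the exact upstream code;
the error value and label are illustrative:

	/*
	 * Condensed sketch of the swapin caller's validation (modeled
	 * on shmem_swapin_folio(); simplified, not the exact upstream
	 * code). With the folio lock held, a folio that still has its
	 * swapcache flag set and still carries the expected swap entry
	 * cannot be removed from or moved within the swap cache, so
	 * every XArray slot covered by the entry must point at it.
	 */
	folio_lock(folio);
	if (!folio_test_swapcache(folio) || folio->swap.val != swap.val) {
		/* Raced with another swapin/swapoff: back out and retry. */
		error = -EEXIST;
		goto unlock;
	}
	/*
	 * From here on, shmem_replace_folio() cannot observe a
	 * mismatching XArray entry, which is why the xas_load()
	 * comparison can become a WARN_ON_ONCE().
	 */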
On 22.08.25 21:20, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> Shmem may replace a folio in the swap cache if the cached one doesn't
> fit the swapin's GFP zone. When doing so, shmem has already double
> checked that the swap cache folio is locked, still has the swap cache
> flag set, and contains the wanted swap entry, so it is impossible to
> fail due to an XArray mismatch. There is even a comment to that effect.
>
> Delete the defensive error handling path and add a WARN_ON instead: if
> that ever happens, something has broken the basic principle of how the
> swap cache works, and we should catch and fix it.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---

Sounds sensible to me.

With the code under the "!error" check not deleted:

Reviewed-by: David Hildenbrand <david@redhat.com>

--
Cheers

David / dhildenb
On 2025/8/23 03:20, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> [...]
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index b4d39f2a1e0a..e03793cc5169 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2158,35 +2158,13 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
>  	/* Swap cache still stores N entries instead of a high-order entry */
>  	xa_lock_irq(&swap_mapping->i_pages);
>  	for (i = 0; i < nr_pages; i++) {
> -		void *item = xas_load(&xas);
> -
> -		if (item != old) {
> -			error = -ENOENT;
> -			break;
> -		}
> -
> -		xas_store(&xas, new);
> +		WARN_ON_ONCE(xas_store(&xas, new) != old);
>  		xas_next(&xas);
>  	}
> -	if (!error) {
> -		mem_cgroup_replace_folio(old, new);
> -		shmem_update_stats(new, nr_pages);
> -		shmem_update_stats(old, -nr_pages);
> -	}

It looks like the shmem statistics update was mistakenly deleted?

(Still working through the whole series :))

> [...]
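For reference, the accounting in the hunk Baolin flags transfers
per-node shmem counters from the old folio to the new one. The helper
looks roughly like this sketch, reproduced from memory of mm/shmem.c
around this series rather than from the exact tree:

	/*
	 * Sketch of shmem_update_stats() (from memory; check the real
	 * tree). shmem_replace_folio() calls it with a positive count
	 * for the new folio and a negative count for the old one, so
	 * the NR_FILE_PAGES/NR_SHMEM accounting moves between them.
	 */
	static void shmem_update_stats(struct folio *folio, int nr_pages)
	{
		if (folio_test_pmd_mappable(folio))
			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages);
		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
		__lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages);
	}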
On Mon, Aug 25, 2025 at 11:09 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
> On 2025/8/23 03:20, Kairui Song wrote:
> > [...]
> > -	if (!error) {
> > -		mem_cgroup_replace_folio(old, new);
> > -		shmem_update_stats(new, nr_pages);
> > -		shmem_update_stats(old, -nr_pages);
> > -	}
>
> It looks like the shmem statistics update was mistakenly deleted?

Ah, you are right, I'll need to add it back; I somehow misread this as
the error handling path. The statistics updates should stay, and only
the !error check should be dropped.

Thanks for the review.
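Concretely, the fix Kairui describes would presumably land as something
like the following loop; this is a sketch of the agreed outcome, not
the actual next posting:

	xa_lock_irq(&swap_mapping->i_pages);
	for (i = 0; i < nr_pages; i++) {
		/*
		 * xas_store() returns the entry it displaces; per the
		 * invariants checked by the caller it must be `old`.
		 */
		WARN_ON_ONCE(xas_store(&xas, new) != old);
		xas_next(&xas);
	}
	/* Keep the accounting transfer, now without the !error guard. */
	mem_cgroup_replace_folio(old, new);
	shmem_update_stats(new, nr_pages);
	shmem_update_stats(old, -nr_pages);
	xa_unlock_irq(&swap_mapping->i_pages);

	folio_add_lru(new);
	*foliop = new;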
On Mon, Aug 25, 2025 at 2:46 AM Kairui Song <ryncsn@gmail.com> wrote:
> On Mon, Aug 25, 2025 at 11:09 AM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
> > It looks like the shmem statistics update was mistakenly deleted?
>
> Ah, you are right, I'll need to add it back; I somehow misread this as
> the error handling path. The statistics updates should stay, and only
> the !error check should be dropped.

+1, I will wait for your next version then. Otherwise the patch looks
fine to me.

Chris