Elsewhere, NR_SHMEM is updated at the same time as shmem NR_FILE_PAGES;
but shmem_replace_page() was forgetting to do that - so NR_SHMEM stats
could grow too big or too small, in those unusual cases when it's used.
Signed-off-by: Hugh Dickins <hughd@google.com>
---
This is not terribly important, and will clash with one of Matthew's
59 for 5.21; I don't mind if this gets delayed, and we just do it again
on top of his series later, or he fold the equivalent into his series;
but thought I'd better send it in as another fix to shmem_replace_page()
while that function is on our minds.
mm/shmem.c | 2 ++
1 file changed, 2 insertions(+)
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1659,7 +1659,9 @@ static int shmem_replace_page(struct page **pagep, gfp_t gfp,
new = page_folio(newpage);
mem_cgroup_migrate(old, new);
__inc_lruvec_page_state(newpage, NR_FILE_PAGES);
+ __inc_lruvec_page_state(newpage, NR_SHMEM);
__dec_lruvec_page_state(oldpage, NR_FILE_PAGES);
+ __dec_lruvec_page_state(oldpage, NR_SHMEM);
}
xa_unlock_irq(&swap_mapping->i_pages);
On Wed, Aug 10, 2022 at 10:06:33PM -0700, Hugh Dickins wrote:
> Elsewhere, NR_SHMEM is updated at the same time as shmem NR_FILE_PAGES;
> but shmem_replace_page() was forgetting to do that - so NR_SHMEM stats
> could grow too big or too small, in those unusual cases when it's used.
>
> Signed-off-by: Hugh Dickins <hughd@google.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

> ---
> This is not terribly important, and will clash with one of Matthew's
> 59 for 5.21; I don't mind if this gets delayed, and we just do it again
> on top of his series later, or he folds the equivalent into his series;
> but thought I'd better send it in as another fix to shmem_replace_page()
> while that function is on our minds.

Let's get this into 6.0 since it's a bugfix, and I'll rebase my patches
for 6.1 on top of it.