In production, show_mem() can be called concurrently from two
different paths, for example one from oom_kill_process() and
another from __alloc_pages_slowpath() on another kthread. This
patch adds a spinlock and takes it with trylock before printing
the kernel alloc info in show_mem(). This way two alloc info
dumps won't interleave with each other, which makes parsing easier.
Signed-off-by: Yueyang Pan <pyyjason@gmail.com>
---
mm/show_mem.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/show_mem.c b/mm/show_mem.c
index fd85a028a926..e9701d07549b 100644
--- a/mm/show_mem.c
+++ b/mm/show_mem.c
@@ -421,7 +421,9 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
 	printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
 #endif
 #ifdef CONFIG_MEM_ALLOC_PROFILING
-	{
+	static DEFINE_SPINLOCK(mem_alloc_profiling_spinlock);
+
+	if (spin_trylock(&mem_alloc_profiling_spinlock)) {
 		struct codetag_bytes tags[10];
 		size_t i, nr;
 
@@ -448,6 +450,7 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
 				       ct->lineno, ct->function);
 			}
 		}
+		spin_unlock(&mem_alloc_profiling_spinlock);
 	}
 #endif
 }
--
2.47.3
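The trylock-and-skip pattern used by the patch can be illustrated in userspace. This is a minimal sketch, not kernel code: it stands in spin_trylock() with a C11 atomic_flag (the classic test-and-set spinlock primitive), and the names `report_busy` and `try_show_report` are made up for the example. A concurrent caller that loses the race simply skips the report instead of blocking, so two multi-line dumps never interleave.

```c
#include <stdatomic.h>
#include <stdio.h>

/* Userspace analogue of the patch's spin_trylock() guard: the flag
 * plays the role of mem_alloc_profiling_spinlock. */
static atomic_flag report_busy = ATOMIC_FLAG_INIT;

/* Returns 1 if this caller printed the report, 0 if another caller
 * held the flag and the report was skipped. */
int try_show_report(void)
{
	/* test_and_set returns the previous value: nonzero means
	 * someone else is already printing, so bail out. */
	if (atomic_flag_test_and_set(&report_busy))
		return 0;

	/* Critical section: the multi-line output that must not be
	 * interleaved with a concurrent caller's output. */
	printf("report line 1\n");
	printf("report line 2\n");

	atomic_flag_clear(&report_busy);
	return 1;
}
```

As in the patch, the losing caller produces no partial output at all, which is what makes the serialized dump easy to parse afterwards.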
On Wed, Sep 03, 2025 at 04:16:14AM -0700, Yueyang Pan wrote:
> In production, show_mem() can be called concurrently from two
> different entities, for example one from oom_kill_process()
> another from __alloc_pages_slowpath from another kthread. This
> patch adds a spinlock and invokes trylock before printing out the
> kernel alloc info in show_mem(). This way two alloc info won't
> interleave with each other, which then makes parsing easier.
>
> Signed-off-by: Yueyang Pan <pyyjason@gmail.com>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
On 3 Sep 2025, at 7:16, Yueyang Pan wrote:
> In production, show_mem() can be called concurrently from two
> different entities, for example one from oom_kill_process()
> another from __alloc_pages_slowpath from another kthread. This
> patch adds a spinlock and invokes trylock before printing out the
> kernel alloc info in show_mem(). This way two alloc info won't
> interleave with each other, which then makes parsing easier.
>
> Signed-off-by: Yueyang Pan <pyyjason@gmail.com>
> ---
> mm/show_mem.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)

Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi
On Wed, Sep 3, 2025 at 9:30 AM Zi Yan <ziy@nvidia.com> wrote:
>
> On 3 Sep 2025, at 7:16, Yueyang Pan wrote:
>
> > In production, show_mem() can be called concurrently from two
> > different entities, for example one from oom_kill_process()
> > another from __alloc_pages_slowpath from another kthread. This
> > patch adds a spinlock and invokes trylock before printing out the
> > kernel alloc info in show_mem(). This way two alloc info won't
> > interleave with each other, which then makes parsing easier.
> >
> > Signed-off-by: Yueyang Pan <pyyjason@gmail.com>
> > ---
> > mm/show_mem.c | 5 ++++-
> > 1 file changed, 4 insertions(+), 1 deletion(-)
>
> Acked-by: Zi Yan <ziy@nvidia.com>

Acked-by: Suren Baghdasaryan <surenb@google.com>

>
> Best Regards,
> Yan, Zi
On 9/3/25 13:16, Yueyang Pan wrote:
> In production, show_mem() can be called concurrently from two
> different entities, for example one from oom_kill_process()
> another from __alloc_pages_slowpath from another kthread. This
> patch adds a spinlock and invokes trylock before printing out the
> kernel alloc info in show_mem(). This way two alloc info won't
> interleave with each other, which then makes parsing easier.
>
> Signed-off-by: Yueyang Pan <pyyjason@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
> mm/show_mem.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/show_mem.c b/mm/show_mem.c
> index fd85a028a926..e9701d07549b 100644
> --- a/mm/show_mem.c
> +++ b/mm/show_mem.c
> @@ -421,7 +421,9 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
>  	printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
>  #endif
>  #ifdef CONFIG_MEM_ALLOC_PROFILING
> -	{
> +	static DEFINE_SPINLOCK(mem_alloc_profiling_spinlock);
> +
> +	if (spin_trylock(&mem_alloc_profiling_spinlock)) {
>  		struct codetag_bytes tags[10];
>  		size_t i, nr;
>
> @@ -448,6 +450,7 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
>  				       ct->lineno, ct->function);
>  			}
>  		}
> +		spin_unlock(&mem_alloc_profiling_spinlock);
>  	}
>  #endif
>  }
On 03/09/2025 12:16, Yueyang Pan wrote:
> In production, show_mem() can be called concurrently from two
> different entities, for example one from oom_kill_process()
> another from __alloc_pages_slowpath from another kthread. This
> patch adds a spinlock and invokes trylock before printing out the
> kernel alloc info in show_mem(). This way two alloc info won't
> interleave with each other, which then makes parsing easier.
>
> Signed-off-by: Yueyang Pan <pyyjason@gmail.com>

Acked-by: Usama Arif <usamaarif642@gmail.com>

> ---
> mm/show_mem.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/show_mem.c b/mm/show_mem.c
> index fd85a028a926..e9701d07549b 100644
> --- a/mm/show_mem.c
> +++ b/mm/show_mem.c
> @@ -421,7 +421,9 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
>  	printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages));
>  #endif
>  #ifdef CONFIG_MEM_ALLOC_PROFILING
> -	{
> +	static DEFINE_SPINLOCK(mem_alloc_profiling_spinlock);
> +
> +	if (spin_trylock(&mem_alloc_profiling_spinlock)) {
>  		struct codetag_bytes tags[10];
>  		size_t i, nr;
>
> @@ -448,6 +450,7 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx)
>  				       ct->lineno, ct->function);
>  			}
>  		}
> +		spin_unlock(&mem_alloc_profiling_spinlock);
>  	}
>  #endif
>  }