We noticed high overhead for pages allocated for struct swap_cgroup in
our fleet. This patchset adds the number of pages allocated for struct
swap_cgroup to vmstat. This can be a useful metric for identifying
unneeded overhead on systems which configure swap.

Before adding the new stat, Patch 1 introduces a generic system-wide
counting interface.

Kinsey Ho (2):
  mm: add generic system-wide page counters
  mm, swap: add pages allocated for struct swap_cgroup to vmstat

 include/linux/vmstat.h | 11 +++++++++++
 mm/swap_cgroup.c       |  3 +++
 mm/vmstat.c            | 35 ++++++++++++++++++++++++++---------
 3 files changed, 40 insertions(+), 9 deletions(-)

--
2.47.0.163.g1226f6d8fa-goog
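For context, a generic system-wide page counter along the lines Patch 1
describes could look roughly like the sketch below. The identifiers
(global_page_state, mod_global_page_state(), NR_SWAP_CGROUP_PAGES) are
illustrative placeholders, not the names used in the actual patch:

#include <linux/atomic.h>

/*
 * Illustrative sketch only: the identifiers are placeholders, not the
 * interface introduced by the posted patch.
 */
enum global_page_state_item {
        NR_SWAP_CGROUP_PAGES,           /* pages backing struct swap_cgroup */
        NR_GLOBAL_PAGE_STATE_ITEMS,
};

static atomic_long_t global_page_state[NR_GLOBAL_PAGE_STATE_ITEMS];

static inline void mod_global_page_state(enum global_page_state_item item,
                                         long delta)
{
        atomic_long_add(delta, &global_page_state[item]);
}

static inline unsigned long get_global_page_state(enum global_page_state_item item)
{
        return atomic_long_read(&global_page_state[item]);
}

In such a scheme, swap_cgroup_prepare() would bump NR_SWAP_CGROUP_PAGES
once per page it allocates, and mm/vmstat.c would print the counter
alongside the existing system-wide entries.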
hm.

On Thu, 31 Oct 2024 22:45:49 +0000 Kinsey Ho <kinseyho@google.com> wrote:

> We noticed high overhead for pages allocated for struct swap_cgroup in
> our fleet.

This is scanty. Please describe the problem further.

> This patchset adds the number of pages allocated for struct
> swap_cgroup to vmstat. This can be a useful metric for identifying
> unneeded overhead on systems which configure swap.

Possibly dumb question: can we switch swap_cgroup_prepare to kmalloc()
(or kmem_cache_alloc()) and use slab's accounting to satisfy this
requirement?

> Before adding the new stat, Patch 1 introduces a generic system-wide
> counting interface.

I don't really understand this one and would like to hear much more
about the motivation.

And: "the existing use case" is OK with a global counter, but what about
future use cases?

And: what are the future use cases?

Thanks.
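For illustration, the slab-accounting idea might look roughly like the
following sketch; the cache name and init function are made up here, not
taken from any posted patch:

#include <linux/mm.h>
#include <linux/slab.h>

/* Sketch only: back the swap_cgroup map with a kmem_cache so the memory
 * shows up in /proc/slabinfo and, via SLAB_ACCOUNT, in memcg kernel
 * memory accounting.
 */
static struct kmem_cache *swap_cgroup_cachep;

static int __init swap_cgroup_cache_init(void)
{
        /* one PAGE_SIZE block of swap_cgroup entries per slab object */
        swap_cgroup_cachep = kmem_cache_create("swap_cgroup_block",
                                               PAGE_SIZE, 0,
                                               SLAB_ACCOUNT, NULL);
        return swap_cgroup_cachep ? 0 : -ENOMEM;
}

The per-page allocations in swap_cgroup_prepare() would then become
kmem_cache_zalloc(swap_cgroup_cachep, GFP_KERNEL) calls.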
On Thu, Oct 31, 2024 at 04:06:04PM -0700, Andrew Morton wrote:
> hm.
>
> On Thu, 31 Oct 2024 22:45:49 +0000 Kinsey Ho <kinseyho@google.com> wrote:
>
> > We noticed high overhead for pages allocated for struct swap_cgroup in
> > our fleet.
>
> This is scanty. Please describe the problem further.
>
> > This patchset adds the number of pages allocated for struct
> > swap_cgroup to vmstat. This can be a useful metric for identifying
> > unneeded overhead on systems which configure swap.
>
> Possibly dumb question: can we switch swap_cgroup_prepare to kmalloc()
> (or kmem_cache_alloc()) and use slab's accounting to satisfy this
> requirement?

Or vzalloc()/kvmalloc(), which are used for allocating the rest of the
swap-related metadata.
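A conversion along those lines could look roughly like this sketch. It
assumes the per-swapfile map is retyped from an array of struct page
pointers to a flat struct swap_cgroup array (one 2-byte entry per swap
slot); it is an untested illustration, not an actual patch:

#include <linux/vmalloc.h>
#include <linux/overflow.h>

/* Untested sketch: ctrl->map is assumed to become a flat array of
 * struct swap_cgroup rather than an array of page pointers.
 */
static int swap_cgroup_prepare(struct swap_cgroup_ctrl *ctrl,
                               unsigned long max_slots)
{
        struct swap_cgroup *map;

        map = vzalloc(array_size(max_slots, sizeof(*map)));
        if (!map)
                return -ENOMEM;

        ctrl->map = map;
        ctrl->length = max_slots;
        return 0;
}

A side benefit is that the allocation then shows up in
/proc/vmallocinfo, which already provides some of the visibility the
proposed vmstat counter was after.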
Hi Andrew,

Thank you for the review and comments!

On Fri, Nov 1, 2024 at 6:57 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> hm.
>
> On Thu, 31 Oct 2024 22:45:49 +0000 Kinsey Ho <kinseyho@google.com> wrote:
>
> > We noticed high overhead for pages allocated for struct swap_cgroup in
> > our fleet.
>
> This is scanty. Please describe the problem further.

In our fleet, we had machines with multiple large swap files configured,
and we noticed that we hadn't accounted for the overhead from the pages
allocated for struct swap_cgroup. In some cases, we saw a couple of GiB
of overhead from these pages, so this patchset's goal is to expose that
overhead for easier detection.

> And: "the existing use case" is OK with a global counter, but what about
> future use cases?
>
> And: what are the future use cases?

Global counting already exists for memmap/memmap_boot pages, so the
generic global counter interface was introduced only to consolidate that
counting code when adding another user.

However, since the number of pages allocated for struct swap_cgroup can
be derived from /proc/swaps, a new vmstat entry doesn't seem warranted.
We've decided to drop this patchset.

Thanks again!
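As a back-of-the-envelope version of that derivation (assuming 4 KiB
pages and that struct swap_cgroup is a single 2-byte id, so one entry
per swap slot):

/*
 * Rough estimate only; assumes 4 KiB pages and a 2-byte
 * struct swap_cgroup entry per swap slot.
 */
static unsigned long swap_cgroup_overhead_bytes(unsigned long swap_size_kib)
{
        unsigned long slots = swap_size_kib / 4;  /* "Size" in /proc/swaps is in KiB */

        return slots * 2;                         /* 2 bytes of swap_cgroup per slot */
}

For example, 64 GiB of configured swap works out to roughly 32 MiB of
swap_cgroup pages, and several TiB of swap to a couple of GiB, which
matches the overhead described above.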
On Thu, Oct 31, 2024 at 04:06:04PM -0700, Andrew Morton wrote:
> Possibly dumb question: can we switch swap_cgroup_prepare to kmalloc()
> (or kmem_cache_alloc()) and use slab's accounting to satisfy this
> requirement?

This looks to me like a bad reimplementation of vmalloc(). The code
seems to have made more sense once upon a time, but now it's really just
vmalloc(). Or did I miss something?