[PATCH v3 3/3] mm/memcg: use kmem_cache when alloc memcg pernode info

Posted by Huan Yang 7 months, 4 weeks ago
When tracing mem_cgroup_per_node allocations with the kmalloc ftrace event:

kmalloc: call_site=mem_cgroup_css_alloc+0x1d8/0x5b4 ptr=00000000d798700c
    bytes_req=2896 bytes_alloc=4096 gfp_flags=GFP_KERNEL|__GFP_ZERO node=0
    accounted=false

This reveals that the slab allocator hands out a 4096B chunk for each
2896B mem_cgroup_per_node because:

1. The slab allocator predefines kmalloc bucket sizes from 64B to 8192B.
2. The mem_cgroup_per_node allocation size (2896B) falls between the
   kmalloc-2k and kmalloc-4k buckets.
3. The allocator rounds the request up to the next larger bucket (4KB),
   wasting ~1.2KB (4096B - 2896B = 1200B) per memcg alloc - per node
   (see the sketch below).

This patch introduces a dedicated kmem_cache for the mem_cgroup_per_node
struct, so each allocation fits the object size exactly. Post-patch
ftrace verification shows:

kmem_cache_alloc: call_site=mem_cgroup_css_alloc+0x1b8/0x5d4
    ptr=000000002989e63a bytes_req=2896 bytes_alloc=2944
    gfp_flags=GFP_KERNEL|__GFP_ZERO node=0 accounted=false

Each mem_cgroup_per_node allocation now takes 2944 bytes, i.e. the 2896B
object rounded up to the hardware cacheline (here 64B: 46 * 64 = 2944).
Compared to the previous 4096 bytes, this saves 1152 bytes per node.

Signed-off-by: Huan Yang <link@vivo.com>
---
 mm/memcontrol.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e34216e55688..af1cd5adfd6c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -96,6 +96,7 @@ static bool cgroup_memory_nokmem __ro_after_init;
 static bool cgroup_memory_nobpf __ro_after_init;
 
 static struct kmem_cache *memcg_cachep;
+static struct kmem_cache *memcg_pn_cachep;
 
 #ifdef CONFIG_CGROUP_WRITEBACK
 static DECLARE_WAIT_QUEUE_HEAD(memcg_cgwb_frn_waitq);
@@ -3601,7 +3602,8 @@ static bool alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
 {
 	struct mem_cgroup_per_node *pn;
 
-	pn = kzalloc_node(sizeof(*pn), GFP_KERNEL, node);
+	pn = kmem_cache_alloc_node(memcg_pn_cachep, GFP_KERNEL | __GFP_ZERO,
+				   node);
 	if (!pn)
 		return false;
 
@@ -5062,6 +5064,9 @@ int __init mem_cgroup_init(void)
 	memcg_cachep = kmem_cache_create("mem_cgroup", memcg_size, 0,
 					 SLAB_PANIC | SLAB_HWCACHE_ALIGN, NULL);
 
+	memcg_pn_cachep = KMEM_CACHE(mem_cgroup_per_node,
+				     SLAB_PANIC | SLAB_HWCACHE_ALIGN);
+
 	return 0;
 }
 
-- 
2.48.1
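
For reference, KMEM_CACHE() is a convenience macro; based on its
definition in include/linux/slab.h, the call in the hunk above is
roughly equivalent to the following open-coded sketch, which keeps the
cache name and object size tied to the struct definition:

memcg_pn_cachep = kmem_cache_create("mem_cgroup_per_node",
				    sizeof(struct mem_cgroup_per_node),
				    __alignof__(struct mem_cgroup_per_node),
				    SLAB_PANIC | SLAB_HWCACHE_ALIGN, NULL);
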
Re: [PATCH v3 3/3] mm/memcg: use kmem_cache when alloc memcg pernode info
Posted by Johannes Weiner 7 months, 3 weeks ago
On Fri, Apr 25, 2025 at 11:19:25AM +0800, Huan Yang wrote:
[...]
> Signed-off-by: Huan Yang <link@vivo.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Re: [PATCH v3 3/3] mm/memcg: use kmem_cache when alloc memcg pernode info
Posted by Shakeel Butt 7 months, 4 weeks ago
On Thu, Apr 24, 2025 at 8:19 PM Huan Yang <link@vivo.com> wrote:
[...]
> Signed-off-by: Huan Yang <link@vivo.com>

Acked-by: Shakeel Butt <shakeel.butt@linux.dev>