[PATCH 2/2] memcg: fix kmem over-charging for embedded obj_exts array

ranxiaokai627@163.com posted 2 patches 4 weeks, 1 day ago
From: Ran Xiaokai <ran.xiaokai@zte.com.cn>

Since commit a77d6d338685 ("mm/slab: place slabobj_ext metadata
in unused space within s->size"), alloc_slab_obj_exts_early() can
place the struct slabobj_ext array in slab leftover space, or embed
it in the slab object itself, to save memory. In these cases, no
extra kmalloc() memory is allocated to store the obj_exts array.

Teach obj_full_size() about these cases so that kmem is no longer
over-charged for objects whose obj_exts array uses slab leftover
space or is embedded in the object.

Fixes: a77d6d338685 ("mm/slab: place slabobj_ext metadata in unused space within s->size")
Cc: stable@vger.kernel.org
Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
---
 mm/memcontrol.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 87614cfc4a3e..d6289a5cd6f3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3199,11 +3199,20 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 	refill_obj_stock(objcg, size, true, 0, NULL, 0);
 }
 
-static inline size_t obj_full_size(struct kmem_cache *s)
+static inline size_t obj_full_size(struct kmem_cache *s, struct slab *slab)
 {
+	if (obj_exts_in_slab(s, slab)) {
+		/*
+		 * If the obj_exts array uses slab leftover space or is
+		 * embedded in the object, no extra space was allocated;
+		 * charge only the object size.
+		 */
+		return s->size;
+	}
+
 	/*
-	 * For each accounted object there is an extra space which is used
-	 * to store obj_cgroup membership. Charge it too.
+	 * For caches whose obj_exts array is allocated separately,
+	 * outside the slab, also charge the objcg pointer.
 	 */
 	return s->size + sizeof(struct obj_cgroup *);
 }
@@ -3270,7 +3279,7 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 		 * TODO: we could batch this until slab_pgdat(slab) changes
 		 * between iterations, with a more complicated undo
 		 */
-		if (obj_cgroup_charge_account(objcg, flags, obj_full_size(s),
+		if (obj_cgroup_charge_account(objcg, flags, obj_full_size(s, slab),
 					slab_pgdat(slab), cache_vmstat_idx(s)))
 			return false;
 
@@ -3289,7 +3298,7 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 			    void **p, int objects, unsigned long obj_exts)
 {
-	size_t obj_size = obj_full_size(s);
+	size_t obj_size = obj_full_size(s, slab);
 
 	for (int i = 0; i < objects; i++) {
 		struct obj_cgroup *objcg;
-- 
2.25.1