From: Hui Zhu
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Muchun Song, Andrew Morton, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hui Zhu
Subject: [PATCH mm-stable v3] mm/memcontrol: batch memcg charging in __memcg_slab_post_alloc_hook
Date: Tue, 31 Mar 2026 17:17:07 +0800
Message-ID: <20260331091707.226786-1-hui.zhu@linux.dev>

When kmem_cache_alloc_bulk() allocates multiple objects, the post-alloc
hook __memcg_slab_post_alloc_hook() previously charged memcg one object
at a time, even though consecutive objects may reside on slabs backed by
the same pgdat node.

Batch the memcg charging by scanning ahead from the current position to
find a contiguous run of objects whose slabs share the same pgdat, then
issue a single __obj_cgroup_charge() / __consume_obj_stock() call for
the entire run. The per-object obj_ext assignment loop is preserved
as-is since it cannot be further collapsed.

This implements the TODO comment left in commit bc730030f956 ("memcg:
combine slab obj stock charging and accounting").
The existing error-recovery contract is unchanged: if size == 1 then
memcg_alloc_abort_single() will free the sole object, and for larger
bulk allocations kmem_cache_free_bulk() will uncharge any objects that
were already charged before the failure.

Benchmark using kmem_cache_alloc_bulk() with SLAB_ACCOUNT (iters=100000):

  bulk=32  before: 215 ns/object  after: 174 ns/object (-19%)
  bulk=1   before: 344 ns/object  after: 335 ns/object (~unchanged)

No measurable regression for bulk=1, as expected.

Signed-off-by: Hui Zhu
---
Changelog:
v3: Update base from "mm-unstable" to "mm-stable".
v2: Per the review comments in [1], handle a potential integer
    overflow when accumulating batched bytes.

[1] https://sashiko.dev/#/patchset/20260316084839.1342163-1-hui.zhu%40linux.dev

 mm/memcontrol.c | 77 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 58 insertions(+), 19 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 051b82ebf371..3159bf39e060 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3277,51 +3277,90 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 		return false;
 	}
 
-	for (i = 0; i < size; i++) {
+	for (i = 0; i < size; ) {
 		unsigned long obj_exts;
 		struct slabobj_ext *obj_ext;
 		struct obj_stock_pcp *stock;
+		struct pglist_data *pgdat;
+		int batch_bytes;
+		size_t run_len = 0;
+		size_t j;
+		size_t max_size;
+		bool skip_next = false;
 
 		slab = virt_to_slab(p[i]);
 
 		if (!slab_obj_exts(slab) &&
 		    alloc_slab_obj_exts(slab, s, flags, false)) {
+			i++;
 			continue;
 		}
 
+		pgdat = slab_pgdat(slab);
+		run_len = 1;
+
+		/*
+		 * The value of batch_bytes must not exceed
+		 * (INT_MAX - PAGE_SIZE) to prevent integer overflow in
+		 * the final accumulation performed by __account_obj_stock().
+		 */
+		max_size = min((size_t)((INT_MAX - PAGE_SIZE) / obj_size),
+			       size);
+
+		for (j = i + 1; j < max_size; j++) {
+			struct slab *slab_j = virt_to_slab(p[j]);
+
+			if (slab_pgdat(slab_j) != pgdat)
+				break;
+
+			if (!slab_obj_exts(slab_j) &&
+			    alloc_slab_obj_exts(slab_j, s, flags, false)) {
+				skip_next = true;
+				break;
+			}
+
+			run_len++;
+		}
+
 		/*
-		 * if we fail and size is 1, memcg_alloc_abort_single() will
+		 * If we fail and size is 1, memcg_alloc_abort_single() will
 		 * just free the object, which is ok as we have not assigned
-		 * objcg to its obj_ext yet
-		 *
-		 * for larger sizes, kmem_cache_free_bulk() will uncharge
-		 * any objects that were already charged and obj_ext assigned
+		 * objcg to its obj_ext yet.
 		 *
-		 * TODO: we could batch this until slab_pgdat(slab) changes
-		 * between iterations, with a more complicated undo
+		 * For larger sizes, kmem_cache_free_bulk() will uncharge
+		 * any objects that were already charged and obj_ext assigned.
 		 */
+		batch_bytes = obj_size * run_len;
 		stock = trylock_stock();
-		if (!stock || !__consume_obj_stock(objcg, stock, obj_size)) {
+		if (!stock || !__consume_obj_stock(objcg, stock, batch_bytes)) {
 			size_t remainder;
 
 			unlock_stock(stock);
-			if (__obj_cgroup_charge(objcg, flags, obj_size, &remainder))
+			if (__obj_cgroup_charge(objcg, flags, batch_bytes, &remainder))
 				return false;
 			stock = trylock_stock();
 			if (remainder)
 				__refill_obj_stock(objcg, stock, remainder, false);
 		}
-		__account_obj_stock(objcg, stock, obj_size,
-				    slab_pgdat(slab), cache_vmstat_idx(s));
+		__account_obj_stock(objcg, stock, batch_bytes,
+				    pgdat, cache_vmstat_idx(s));
 		unlock_stock(stock);
 
-		obj_exts = slab_obj_exts(slab);
-		get_slab_obj_exts(obj_exts);
-		off = obj_to_index(s, slab, p[i]);
-		obj_ext = slab_obj_ext(slab, obj_exts, off);
-		obj_cgroup_get(objcg);
-		obj_ext->objcg = objcg;
-		put_slab_obj_exts(obj_exts);
+		for (j = 0; j < run_len; j++) {
+			slab = virt_to_slab(p[i + j]);
+			obj_exts = slab_obj_exts(slab);
+			get_slab_obj_exts(obj_exts);
+			off = obj_to_index(s, slab, p[i + j]);
+			obj_ext = slab_obj_ext(slab, obj_exts, off);
+			obj_cgroup_get(objcg);
+			obj_ext->objcg = objcg;
+			put_slab_obj_exts(obj_exts);
+		}
+
+		if (skip_next)
+			i = i + run_len + 1;
+		else
+			i += run_len;
 	}
 
 	return true;
-- 
2.43.0