From: Johannes Weiner
To: Andrew Morton
Cc: Hao Li, Michal Hocko, Roman Gushchin, Shakeel Butt, Vlastimil Babka,
    Harry Yoo, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH 5/5] mm: memcg: separate slab stat accounting from objcg charge cache
Date: Mon, 2 Mar 2026 14:50:18 -0500
Message-ID: <20260302195305.620713-6-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260302195305.620713-1-hannes@cmpxchg.org>
References: <20260302195305.620713-1-hannes@cmpxchg.org>

Cgroup slab metrics are cached per-cpu the same way as the sub-page
charge cache. However, the intertwined code that manages these two
dependent caches is currently quite difficult to follow.

Specifically, cached slab stat updates occur in consume() if there was
enough charge cache to satisfy the new object. If that fails, whole
pages are reserved, and slab stats are updated when the remainder of
those pages, after subtracting the size of the new slab object, is put
into the charge cache. This already juggles a delicate mix of the
object size, the page charge size, and the remainder to put into the
byte cache. Doing slab accounting in this path as well is fragile, and
recently caused a bug where the input parameters of the two caches were
mixed up.

Refactor the consume() and refill() paths into unlocked and locked
variants that only do charge caching. Then let the slab path manage its
own lock section and open-code the charging and accounting. This makes
the slab stat cache subordinate to the charge cache:
__refill_obj_stock() is called first to prepare it, and
__account_obj_stock() follows to hitch a ride.

This results in a minor behavioral change: previously, a mismatching
percpu stock would always be drained in order to set up slab stat
caching, even if there was no byte remainder to put into the charge
cache. Now the stock is left alone, and slab accounting takes the
uncached path if there is a mismatch. This situation is exceedingly
rare, and it was probably never worth draining the whole stock just to
cache the slab stat update.
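To make the new ordering concrete, here is a small, self-contained
userspace model of the allocation-side flow described above. It is only
an illustration, not the kernel code: struct model_stock,
model_consume(), model_refill(), model_account() and
model_charge_object() are made-up stand-ins for the percpu obj_stock,
__consume_obj_stock(), __refill_obj_stock(), __account_obj_stock() and
the open-coded slab hook, and the lock section is reduced to a comment.

#include <stdbool.h>
#include <stdio.h>

#define MODEL_PAGE_SIZE 4096u

struct model_stock {
	unsigned int nr_bytes;	/* cached sub-page charge */
	int stat_delta;		/* cached slab stat update */
};

static unsigned int pages_charged;	/* stand-in for page-level charging */

/* Charge-cache helper: take @size from the cache if it has enough. */
static bool model_consume(struct model_stock *stock, unsigned int size)
{
	if (stock->nr_bytes >= size) {
		stock->nr_bytes -= size;
		return true;
	}
	return false;
}

/* Charge-cache helper: park the leftover of a page charge in the cache. */
static void model_refill(struct model_stock *stock, unsigned int remainder)
{
	stock->nr_bytes += remainder;
}

/* Stat caching is subordinate: it rides on whatever the cache now holds. */
static void model_account(struct model_stock *stock, int nr)
{
	stock->stat_delta += nr;
}

/*
 * Slab-side allocation path: one lock section per object, charge cache
 * first, stat accounting second.  In the kernel this is bracketed by
 * trylock_stock()/unlock_stock(); here a plain function stands in for it.
 */
static void model_charge_object(struct model_stock *stock, unsigned int size)
{
	if (!model_consume(stock, size)) {
		/* Round up to whole pages, charge those, cache the rest. */
		unsigned int pages = (size + MODEL_PAGE_SIZE - 1) / MODEL_PAGE_SIZE;
		unsigned int remainder = pages * MODEL_PAGE_SIZE - size;

		pages_charged += pages;
		if (remainder)
			model_refill(stock, remainder);
	}
	model_account(stock, (int)size);
}

int main(void)
{
	struct model_stock stock = { 0 };

	model_charge_object(&stock, 512);	/* charges 1 page, caches 3584 bytes */
	model_charge_object(&stock, 512);	/* satisfied from the cache */
	printf("cached bytes: %u, cached stats: %d, pages charged: %u\n",
	       stock.nr_bytes, stock.stat_delta, pages_charged);
	return 0;
}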
Signed-off-by: Johannes Weiner
Acked-by: Shakeel Butt
---
 mm/memcontrol.c | 100 +++++++++++++++++++++++++++++-------------------
 1 file changed, 61 insertions(+), 39 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4f12b75743d4..9c6f9849b717 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3218,16 +3218,18 @@ static struct obj_stock_pcp *trylock_stock(void)
 
 static void unlock_stock(struct obj_stock_pcp *stock)
 {
-	local_unlock(&obj_stock.lock);
+	if (stock)
+		local_unlock(&obj_stock.lock);
 }
 
+/* Call after __refill_obj_stock() to ensure stock->cached_objcg == objcg */
 static void __account_obj_stock(struct obj_cgroup *objcg,
 				struct obj_stock_pcp *stock, int nr,
 				struct pglist_data *pgdat, enum node_stat_item idx)
 {
 	int *bytes;
 
-	if (!stock)
+	if (!stock || READ_ONCE(stock->cached_objcg) != objcg)
 		goto direct;
 
 	/*
@@ -3274,8 +3276,20 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
 	mod_objcg_mlstate(objcg, pgdat, idx, nr);
 }
 
-static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
-			      struct pglist_data *pgdat, enum node_stat_item idx)
+static bool __consume_obj_stock(struct obj_cgroup *objcg,
+				struct obj_stock_pcp *stock,
+				unsigned int nr_bytes)
+{
+	if (objcg == READ_ONCE(stock->cached_objcg) &&
+	    stock->nr_bytes >= nr_bytes) {
+		stock->nr_bytes -= nr_bytes;
+		return true;
+	}
+
+	return false;
+}
+
+static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
 {
 	struct obj_stock_pcp *stock;
 	bool ret = false;
@@ -3284,14 +3298,7 @@ static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
 	if (!stock)
 		return ret;
 
-	if (objcg == READ_ONCE(stock->cached_objcg) && stock->nr_bytes >= nr_bytes) {
-		stock->nr_bytes -= nr_bytes;
-		ret = true;
-
-		if (pgdat)
-			__account_obj_stock(objcg, stock, nr_bytes, pgdat, idx);
-	}
-
+	ret = __consume_obj_stock(objcg, stock, nr_bytes);
 	unlock_stock(stock);
 
 	return ret;
@@ -3376,17 +3383,14 @@ static bool obj_stock_flush_required(struct obj_stock_pcp *stock,
 	return flush;
 }
 
-static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
-		bool allow_uncharge, int nr_acct, struct pglist_data *pgdat,
-		enum node_stat_item idx)
+static void __refill_obj_stock(struct obj_cgroup *objcg,
+			       struct obj_stock_pcp *stock,
+			       unsigned int nr_bytes,
+			       bool allow_uncharge)
 {
-	struct obj_stock_pcp *stock;
 	unsigned int nr_pages = 0;
 
-	stock = trylock_stock();
 	if (!stock) {
-		if (pgdat)
-			__account_obj_stock(objcg, NULL, nr_acct, pgdat, idx);
 		nr_pages = nr_bytes >> PAGE_SHIFT;
 		nr_bytes = nr_bytes & (PAGE_SIZE - 1);
 		atomic_add(nr_bytes, &objcg->nr_charged_bytes);
@@ -3404,20 +3408,25 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
 	}
 	stock->nr_bytes += nr_bytes;
 
-	if (pgdat)
-		__account_obj_stock(objcg, stock, nr_acct, pgdat, idx);
-
 	if (allow_uncharge && (stock->nr_bytes > PAGE_SIZE)) {
 		nr_pages = stock->nr_bytes >> PAGE_SHIFT;
 		stock->nr_bytes &= (PAGE_SIZE - 1);
 	}
 
-	unlock_stock(stock);
 out:
 	if (nr_pages)
 		obj_cgroup_uncharge_pages(objcg, nr_pages);
 }
 
+static void refill_obj_stock(struct obj_cgroup *objcg,
+			     unsigned int nr_bytes,
+			     bool allow_uncharge)
+{
+	struct obj_stock_pcp *stock = trylock_stock();
+	__refill_obj_stock(objcg, stock, nr_bytes, allow_uncharge);
+	unlock_stock(stock);
+}
+
 static int __obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp,
 			       size_t size, size_t *remainder)
 {
@@ -3432,13 +3441,12 @@ static int __obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp,
 	return ret;
 }
 
-static int obj_cgroup_charge_account(struct obj_cgroup *objcg, gfp_t gfp, size_t size,
-				     struct pglist_data *pgdat, enum node_stat_item idx)
+int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
 {
 	size_t remainder;
 	int ret;
 
-	if (likely(consume_obj_stock(objcg, size, pgdat, idx)))
+	if (likely(consume_obj_stock(objcg, size)))
 		return 0;
 
 	/*
@@ -3465,20 +3473,15 @@ static int obj_cgroup_charge_account(struct obj_cgroup *objcg, gfp_t gfp, size_t
 	 * race.
 	 */
 	ret = __obj_cgroup_charge(objcg, gfp, size, &remainder);
-	if (!ret && (remainder || pgdat))
-		refill_obj_stock(objcg, remainder, false, size, pgdat, idx);
+	if (!ret && remainder)
+		refill_obj_stock(objcg, remainder, false);
 
 	return ret;
 }
 
-int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
-{
-	return obj_cgroup_charge_account(objcg, gfp, size, NULL, 0);
-}
-
 void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 {
-	refill_obj_stock(objcg, size, true, 0, NULL, 0);
+	refill_obj_stock(objcg, size, true);
 }
 
 static inline size_t obj_full_size(struct kmem_cache *s)
@@ -3493,6 +3496,7 @@ static inline size_t obj_full_size(struct kmem_cache *s)
 bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 				  gfp_t flags, size_t size, void **p)
 {
+	size_t obj_size = obj_full_size(s);
 	struct obj_cgroup *objcg;
 	struct slab *slab;
 	unsigned long off;
@@ -3533,6 +3537,7 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 	for (i = 0; i < size; i++) {
 		unsigned long obj_exts;
 		struct slabobj_ext *obj_ext;
+		struct obj_stock_pcp *stock;
 
 		slab = virt_to_slab(p[i]);
 
@@ -3552,9 +3557,20 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
 		 * TODO: we could batch this until slab_pgdat(slab) changes
 		 * between iterations, with a more complicated undo
 		 */
-		if (obj_cgroup_charge_account(objcg, flags, obj_full_size(s),
-					      slab_pgdat(slab), cache_vmstat_idx(s)))
-			return false;
+		stock = trylock_stock();
+		if (!stock || !__consume_obj_stock(objcg, stock, obj_size)) {
+			size_t remainder;
+
+			unlock_stock(stock);
+			if (__obj_cgroup_charge(objcg, flags, obj_size, &remainder))
+				return false;
+			stock = trylock_stock();
+			if (remainder)
+				__refill_obj_stock(objcg, stock, remainder, false);
+		}
+		__account_obj_stock(objcg, stock, obj_size,
+				    slab_pgdat(slab), cache_vmstat_idx(s));
+		unlock_stock(stock);
 
 		obj_exts = slab_obj_exts(slab);
 		get_slab_obj_exts(obj_exts);
@@ -3576,6 +3592,7 @@ void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 	for (int i = 0; i < objects; i++) {
 		struct obj_cgroup *objcg;
 		struct slabobj_ext *obj_ext;
+		struct obj_stock_pcp *stock;
 		unsigned int off;
 
 		off = obj_to_index(s, slab, p[i]);
@@ -3585,8 +3602,13 @@ void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 			continue;
 
 		obj_ext->objcg = NULL;
-		refill_obj_stock(objcg, obj_size, true, -obj_size,
-				 slab_pgdat(slab), cache_vmstat_idx(s));
+
+		stock = trylock_stock();
+		__refill_obj_stock(objcg, stock, obj_size, true);
+		__account_obj_stock(objcg, stock, -obj_size,
+				    slab_pgdat(slab), cache_vmstat_idx(s));
+		unlock_stock(stock);
+
 		obj_cgroup_put(objcg);
 	}
 }
-- 
2.53.0