From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
	Vlastimil Babka, Alexei Starovoitov, Sebastian Andrzej Siewior,
	Harry Yoo, Yosry Ahmed, bpf@vger.kernel.org, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	Meta kernel team
Subject: [RFC PATCH 5/7] memcg: make __mod_memcg_lruvec_state re-entrant safe against irqs
Date: Mon, 12 May 2025 20:13:14 -0700
Message-ID: <20250513031316.2147548-6-shakeel.butt@linux.dev>
In-Reply-To: <20250513031316.2147548-1-shakeel.butt@linux.dev>
References: <20250513031316.2147548-1-shakeel.butt@linux.dev>

Let's make __mod_memcg_lruvec_state() re-entrant safe against irqs and
rename it to mod_memcg_lruvec_state(). The only change needed is to
convert the __this_cpu_add() calls to this_cpu_add(). Of its two
callers, __mod_objcg_mlstate() becomes re-entrant safe as well, so
rename it to mod_objcg_mlstate(). The other caller,
__mod_lruvec_state(), still calls __mod_node_page_state(), which is not
yet re-entrant safe, so keep its name as is.
Signed-off-by: Shakeel Butt
Acked-by: Vlastimil Babka
---
 mm/memcontrol.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9e7dc90cc460..adf2f1922118 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -731,7 +731,7 @@ unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
 }
 #endif
 
-static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
+static void mod_memcg_lruvec_state(struct lruvec *lruvec,
				     enum node_stat_item idx,
				     int val)
 {
@@ -743,16 +743,20 @@ static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
	if (WARN_ONCE(BAD_STAT_IDX(i), "%s: missing stat item %d\n", __func__, idx))
		return;
 
+	if (WARN_ONCE(in_nmi(), "%s: called in nmi context for stat item %d\n",
+		      __func__, idx))
+		return;
+
	pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
	memcg = pn->memcg;
 
	cpu = get_cpu();
 
	/* Update memcg */
-	__this_cpu_add(memcg->vmstats_percpu->state[i], val);
+	this_cpu_add(memcg->vmstats_percpu->state[i], val);
 
	/* Update lruvec */
-	__this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
+	this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
 
	val = memcg_state_val_in_pages(idx, val);
	memcg_rstat_updated(memcg, val, cpu);
@@ -779,7 +783,7 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 
	/* Update memcg and lruvec */
	if (!mem_cgroup_disabled())
-		__mod_memcg_lruvec_state(lruvec, idx, val);
+		mod_memcg_lruvec_state(lruvec, idx, val);
 }
 
 void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
@@ -2559,7 +2563,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
	folio->memcg_data = (unsigned long)memcg;
 }
 
-static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
+static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
				       struct pglist_data *pgdat,
				       enum node_stat_item idx, int nr)
 {
@@ -2570,7 +2574,7 @@ static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
	memcg = obj_cgroup_memcg(objcg);
	if (likely(!in_nmi())) {
		lruvec = mem_cgroup_lruvec(memcg, pgdat);
-		__mod_memcg_lruvec_state(lruvec, idx, nr);
+		mod_memcg_lruvec_state(lruvec, idx, nr);
	} else {
		struct mem_cgroup_per_node *pn = memcg->nodeinfo[pgdat->node_id];
 
@@ -2901,12 +2905,12 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
		struct pglist_data *oldpg = stock->cached_pgdat;
 
		if (stock->nr_slab_reclaimable_b) {
-			__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
+			mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
					  stock->nr_slab_reclaimable_b);
			stock->nr_slab_reclaimable_b = 0;
		}
		if (stock->nr_slab_unreclaimable_b) {
-			__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
+			mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
					  stock->nr_slab_unreclaimable_b);
			stock->nr_slab_unreclaimable_b = 0;
		}
@@ -2932,7 +2936,7 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
		}
	}
	if (nr)
-		__mod_objcg_mlstate(objcg, pgdat, idx, nr);
+		mod_objcg_mlstate(objcg, pgdat, idx, nr);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
@@ -3004,13 +3008,13 @@ static void drain_obj_stock(struct obj_stock_pcp *stock)
	 */
	if (stock->nr_slab_reclaimable_b || stock->nr_slab_unreclaimable_b) {
		if (stock->nr_slab_reclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
					  NR_SLAB_RECLAIMABLE_B,
					  stock->nr_slab_reclaimable_b);
			stock->nr_slab_reclaimable_b = 0;
		}
		if (stock->nr_slab_unreclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
					  NR_SLAB_UNRECLAIMABLE_B,
					  stock->nr_slab_unreclaimable_b);
			stock->nr_slab_unreclaimable_b = 0;
@@ -3050,7 +3054,7 @@ static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
 
	if (unlikely(in_nmi())) {
		if (pgdat)
-			__mod_objcg_mlstate(objcg, pgdat, idx, nr_bytes);
+			mod_objcg_mlstate(objcg, pgdat, idx, nr_bytes);
		nr_pages = nr_bytes >> PAGE_SHIFT;
		nr_bytes = nr_bytes & (PAGE_SIZE - 1);
		atomic_add(nr_bytes, &objcg->nr_charged_bytes);
-- 
2.47.1