From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
 Vlastimil Babka, Alexei Starovoitov, Sebastian Andrzej Siewior, Harry Yoo,
 Yosry Ahmed, bpf@vger.kernel.org, linux-mm@kvack.org,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Meta kernel team
Subject: [PATCH 5/7] memcg: make __mod_memcg_lruvec_state re-entrant safe against irqs
Date: Tue, 13 May 2025 22:08:11 -0700
Message-ID: <20250514050813.2526843-6-shakeel.butt@linux.dev>
In-Reply-To: <20250514050813.2526843-1-shakeel.butt@linux.dev>
References: <20250514050813.2526843-1-shakeel.butt@linux.dev>

Let's make __mod_memcg_lruvec_state() re-entrant safe against irqs and
rename it mod_memcg_lruvec_state(). The only change needed is to convert
the usage of __this_cpu_add() to this_cpu_add(). There are two callers of
mod_memcg_lruvec_state(), and one of them, __mod_objcg_mlstate(), becomes
re-entrant safe as well, so rename it mod_objcg_mlstate(). The remaining
caller, __mod_lruvec_state(), still calls __mod_node_page_state(), which
is not yet re-entrant safe, so keep it as is.
Signed-off-by: Shakeel Butt
Acked-by: Vlastimil Babka
---
 mm/memcontrol.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b666cdb1af68..4f19fe9de5bf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -728,7 +728,7 @@ unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
 }
 #endif
 
-static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
+static void mod_memcg_lruvec_state(struct lruvec *lruvec,
 				     enum node_stat_item idx,
 				     int val)
 {
@@ -746,10 +746,10 @@ static void __mod_memcg_lruvec_state(struct lruvec *lruvec,
 	cpu = get_cpu();
 
 	/* Update memcg */
-	__this_cpu_add(memcg->vmstats_percpu->state[i], val);
+	this_cpu_add(memcg->vmstats_percpu->state[i], val);
 
 	/* Update lruvec */
-	__this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
+	this_cpu_add(pn->lruvec_stats_percpu->state[i], val);
 
 	val = memcg_state_val_in_pages(idx, val);
 	memcg_rstat_updated(memcg, val, cpu);
@@ -776,7 +776,7 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 
 	/* Update memcg and lruvec */
 	if (!mem_cgroup_disabled())
-		__mod_memcg_lruvec_state(lruvec, idx, val);
+		mod_memcg_lruvec_state(lruvec, idx, val);
 }
 
 void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
@@ -2552,7 +2552,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 	folio->memcg_data = (unsigned long)memcg;
 }
 
-static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
+static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
 				       struct pglist_data *pgdat,
 				       enum node_stat_item idx, int nr)
 {
@@ -2562,7 +2562,7 @@ static inline void __mod_objcg_mlstate(struct obj_cgroup *objcg,
 	rcu_read_lock();
 	memcg = obj_cgroup_memcg(objcg);
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
-	__mod_memcg_lruvec_state(lruvec, idx, nr);
+	mod_memcg_lruvec_state(lruvec, idx, nr);
 	rcu_read_unlock();
 }
 
@@ -2872,12 +2872,12 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
 	struct pglist_data *oldpg = stock->cached_pgdat;
 
 	if (stock->nr_slab_reclaimable_b) {
-		__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
+		mod_objcg_mlstate(objcg, oldpg, NR_SLAB_RECLAIMABLE_B,
 				    stock->nr_slab_reclaimable_b);
 		stock->nr_slab_reclaimable_b = 0;
 	}
 	if (stock->nr_slab_unreclaimable_b) {
-		__mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
+		mod_objcg_mlstate(objcg, oldpg, NR_SLAB_UNRECLAIMABLE_B,
 				    stock->nr_slab_unreclaimable_b);
 		stock->nr_slab_unreclaimable_b = 0;
 	}
@@ -2903,7 +2903,7 @@ static void __account_obj_stock(struct obj_cgroup *objcg,
 		}
 	}
 	if (nr)
-		__mod_objcg_mlstate(objcg, pgdat, idx, nr);
+		mod_objcg_mlstate(objcg, pgdat, idx, nr);
 }
 
 static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes,
@@ -2972,13 +2972,13 @@ static void drain_obj_stock(struct obj_stock_pcp *stock)
 	 */
 	if (stock->nr_slab_reclaimable_b || stock->nr_slab_unreclaimable_b) {
 		if (stock->nr_slab_reclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
 					    NR_SLAB_RECLAIMABLE_B,
 					    stock->nr_slab_reclaimable_b);
 			stock->nr_slab_reclaimable_b = 0;
 		}
 		if (stock->nr_slab_unreclaimable_b) {
-			__mod_objcg_mlstate(old, stock->cached_pgdat,
+			mod_objcg_mlstate(old, stock->cached_pgdat,
 					    NR_SLAB_UNRECLAIMABLE_B,
 					    stock->nr_slab_unreclaimable_b);
 			stock->nr_slab_unreclaimable_b = 0;
-- 
2.47.1