From nobody Sat Feb 7 17:54:52 2026
From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Harry Yoo,
 Qi Zheng, Vlastimil Babka, linux-mm@kvack.org, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, Meta kernel team
Subject: [PATCH 1/4] memcg: use mod_node_page_state to update stats
Date: Mon, 10 Nov 2025 15:20:05 -0800
Message-ID: <20251110232008.1352063-2-shakeel.butt@linux.dev>
In-Reply-To: <20251110232008.1352063-1-shakeel.butt@linux.dev>
References: <20251110232008.1352063-1-shakeel.butt@linux.dev>

The memcg stats are safe to update from irq (and NMI) context and thus do
not require disabling irqs. However, some memcg stats code paths also
update node-level stats through an irq-unsafe interface, which forces
callers to disable irqs anyway. On architectures with HAVE_CMPXCHG_LOCAL
(all major ones), node-level stats provide an interface that does not
require irq disabling. Let's move the memcg stats code to that interface
for node-level stats.
Signed-off-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
Reviewed-by: Harry Yoo
---
 include/linux/memcontrol.h | 2 +-
 include/linux/vmstat.h     | 4 ++--
 mm/memcontrol.c            | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8c0f15e5978f..f82fac2fd988 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1408,7 +1408,7 @@ static inline void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
 {
 	struct page *page = virt_to_head_page(p);
 
-	__mod_node_page_state(page_pgdat(page), idx, val);
+	mod_node_page_state(page_pgdat(page), idx, val);
 }
 
 static inline void mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c287998908bf..11a37aaa4dd9 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -557,7 +557,7 @@ static inline void mod_lruvec_page_state(struct page *page,
 static inline void __mod_lruvec_state(struct lruvec *lruvec,
 				      enum node_stat_item idx, int val)
 {
-	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
+	mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 }
 
 static inline void mod_lruvec_state(struct lruvec *lruvec,
@@ -569,7 +569,7 @@ static inline void mod_lruvec_state(struct lruvec *lruvec,
 static inline void __lruvec_stat_mod_folio(struct folio *folio,
 					   enum node_stat_item idx, int val)
 {
-	__mod_node_page_state(folio_pgdat(folio), idx, val);
+	mod_node_page_state(folio_pgdat(folio), idx, val);
 }
 
 static inline void lruvec_stat_mod_folio(struct folio *folio,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 025da46d9959..f4b8a6414ed3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -770,7 +770,7 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 		int val)
 {
 	/* Update node */
-	__mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
+	mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 
 	/* Update memcg and lruvec */
 	if (!mem_cgroup_disabled())
@@ -789,7 +789,7 @@ void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg) {
 		rcu_read_unlock();
-		__mod_node_page_state(pgdat, idx, val);
+		mod_node_page_state(pgdat, idx, val);
 		return;
 	}
 
@@ -815,7 +815,7 @@ void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
 	 * vmstats to keep it correct for the root memcg.
 	 */
 	if (!memcg) {
-		__mod_node_page_state(pgdat, idx, val);
+		mod_node_page_state(pgdat, idx, val);
 	} else {
 		lruvec = mem_cgroup_lruvec(memcg, pgdat);
 		__mod_lruvec_state(lruvec, idx, val);
-- 
2.47.3

From nobody Sat Feb 7 17:54:52 2026
From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Harry Yoo,
 Qi Zheng, Vlastimil Babka, linux-mm@kvack.org, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, Meta kernel team
Subject: [PATCH 2/4] memcg: remove __mod_lruvec_kmem_state
Date: Mon, 10 Nov 2025 15:20:06 -0800
Message-ID: <20251110232008.1352063-3-shakeel.butt@linux.dev>
In-Reply-To: <20251110232008.1352063-1-shakeel.butt@linux.dev>
References: <20251110232008.1352063-1-shakeel.butt@linux.dev>

__mod_lruvec_kmem_state() is already irq-safe, so there is no need for a
separate interface (i.e. mod_lruvec_kmem_state) that wraps calls to it
with irq disabling and re-enabling.
Let's rename __mod_lruvec_kmem_state to mod_lruvec_kmem_state.

Signed-off-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
Reviewed-by: Harry Yoo
Reviewed-by: Qi Zheng
---
 include/linux/memcontrol.h | 28 +++++-----------------------
 mm/memcontrol.c            |  2 +-
 mm/workingset.c            |  2 +-
 3 files changed, 7 insertions(+), 25 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index f82fac2fd988..1384a9d305e1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -957,17 +957,7 @@ unsigned long lruvec_page_state_local(struct lruvec *lruvec,
 void mem_cgroup_flush_stats(struct mem_cgroup *memcg);
 void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg);
 
-void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val);
-
-static inline void mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
-					 int val)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__mod_lruvec_kmem_state(p, idx, val);
-	local_irq_restore(flags);
-}
+void mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val);
 
 void count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx,
 		unsigned long count);
@@ -1403,14 +1393,6 @@ static inline void mem_cgroup_flush_stats_ratelimited(struct mem_cgroup *memcg)
 {
 }
 
-static inline void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
-					   int val)
-{
-	struct page *page = virt_to_head_page(p);
-
-	mod_node_page_state(page_pgdat(page), idx, val);
-}
-
 static inline void mod_lruvec_kmem_state(void *p, enum node_stat_item idx,
 					 int val)
 {
@@ -1470,14 +1452,14 @@ struct slabobj_ext {
 #endif
 } __aligned(8);
 
-static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
+static inline void inc_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
-	__mod_lruvec_kmem_state(p, idx, 1);
+	mod_lruvec_kmem_state(p, idx, 1);
 }
 
-static inline void __dec_lruvec_kmem_state(void *p, enum node_stat_item idx)
+static inline void dec_lruvec_kmem_state(void *p, enum node_stat_item idx)
 {
-	__mod_lruvec_kmem_state(p, idx, -1);
+	mod_lruvec_kmem_state(p, idx, -1);
 }
 
 static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f4b8a6414ed3..3a59d3ee92a7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -799,7 +799,7 @@ void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 }
 EXPORT_SYMBOL(__lruvec_stat_mod_folio);
 
-void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
+void mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
 {
 	pg_data_t *pgdat = page_pgdat(virt_to_page(p));
 	struct mem_cgroup *memcg;
diff --git a/mm/workingset.c b/mm/workingset.c
index d32dc2e02a61..892f6fe94ea9 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -749,7 +749,7 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
 	if (WARN_ON_ONCE(node->count != node->nr_values))
 		goto out_invalid;
 	xa_delete_node(node, workingset_update_node);
-	__inc_lruvec_kmem_state(node, WORKINGSET_NODERECLAIM);
+	inc_lruvec_kmem_state(node, WORKINGSET_NODERECLAIM);
 
 out_invalid:
 	xa_unlock_irq(&mapping->i_pages);
-- 
2.47.3

From nobody Sat Feb 7 17:54:52 2026
From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Harry Yoo,
 Qi Zheng, Vlastimil Babka, linux-mm@kvack.org, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, Meta kernel team
Subject: [PATCH 3/4] memcg: remove __mod_lruvec_state
Date: Mon, 10 Nov 2025 15:20:07 -0800
Message-ID: <20251110232008.1352063-4-shakeel.butt@linux.dev>
In-Reply-To: <20251110232008.1352063-1-shakeel.butt@linux.dev>
References:
 <20251110232008.1352063-1-shakeel.butt@linux.dev>

__mod_lruvec_state() is already irq-safe, so there is no need for a
separate interface (i.e. mod_lruvec_state) that wraps calls to it with
irq disabling and re-enabling. Let's rename __mod_lruvec_state to
mod_lruvec_state.

Signed-off-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
Reviewed-by: Harry Yoo
---
 include/linux/mm_inline.h |  2 +-
 include/linux/vmstat.h    | 18 +-----------------
 mm/memcontrol.c           |  8 ++++----
 mm/migrate.c              | 20 ++++++++++----------
 mm/vmscan.c               |  4 ++--
 5 files changed, 18 insertions(+), 34 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 795b255abf65..d7b963255012 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -44,7 +44,7 @@ static __always_inline void __update_lru_size(struct lruvec *lruvec,
 	lockdep_assert_held(&lruvec->lru_lock);
 	WARN_ON_ONCE(nr_pages != (int)nr_pages);
 
-	__mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
+	mod_lruvec_state(lruvec, NR_LRU_BASE + lru, nr_pages);
 	__mod_zone_page_state(&pgdat->node_zones[zid],
 				NR_ZONE_LRU_BASE + lru, nr_pages);
 }
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 11a37aaa4dd9..4eb7753e6e5c 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -520,19 +520,9 @@ static inline const char *vm_event_name(enum vm_event_item item)
 
 #ifdef CONFIG_MEMCG
 
-void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
+void mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			int val);
 
-static inline void mod_lruvec_state(struct lruvec *lruvec,
-				    enum node_stat_item idx, int val)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__mod_lruvec_state(lruvec, idx, val);
-	local_irq_restore(flags);
-}
-
 void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 			     int val);
 
@@ -554,12 +544,6 @@ static inline void mod_lruvec_page_state(struct page *page,
 
 #else
 
-static inline void __mod_lruvec_state(struct lruvec *lruvec,
-				      enum node_stat_item idx, int val)
-{
-	mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
-}
-
 static inline void mod_lruvec_state(struct lruvec *lruvec,
 				    enum node_stat_item idx, int val)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3a59d3ee92a7..c31074e5852b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -757,7 +757,7 @@ static void mod_memcg_lruvec_state(struct lruvec *lruvec,
 }
 
 /**
- * __mod_lruvec_state - update lruvec memory statistics
+ * mod_lruvec_state - update lruvec memory statistics
  * @lruvec: the lruvec
  * @idx: the stat item
  * @val: delta to add to the counter, can be negative
@@ -766,7 +766,7 @@ static void mod_memcg_lruvec_state(struct lruvec *lruvec,
 * function updates the all three counters that are affected by a
 * change of state at this level: per-node, per-cgroup, per-lruvec.
 */
-void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
+void mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 		int val)
 {
 	/* Update node */
@@ -794,7 +794,7 @@ void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 	}
 
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
-	__mod_lruvec_state(lruvec, idx, val);
+	mod_lruvec_state(lruvec, idx, val);
 	rcu_read_unlock();
 }
 EXPORT_SYMBOL(__lruvec_stat_mod_folio);
@@ -818,7 +818,7 @@ void mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
 		mod_node_page_state(pgdat, idx, val);
 	} else {
 		lruvec = mem_cgroup_lruvec(memcg, pgdat);
-		__mod_lruvec_state(lruvec, idx, val);
+		mod_lruvec_state(lruvec, idx, val);
 	}
 	rcu_read_unlock();
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index 567dfae4d9f8..be00c3c82f3a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -675,27 +675,27 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 	old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 	new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
 
-	__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
-	__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
+	mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
+	mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
 	if (folio_test_swapbacked(folio) && !folio_test_swapcache(folio)) {
-		__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
-		__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
+		mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
+		mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
 
 		if (folio_test_pmd_mappable(folio)) {
-			__mod_lruvec_state(old_lruvec, NR_SHMEM_THPS, -nr);
-			__mod_lruvec_state(new_lruvec, NR_SHMEM_THPS, nr);
+			mod_lruvec_state(old_lruvec, NR_SHMEM_THPS, -nr);
+			mod_lruvec_state(new_lruvec, NR_SHMEM_THPS, nr);
 		}
 	}
 #ifdef CONFIG_SWAP
 	if (folio_test_swapcache(folio)) {
-		__mod_lruvec_state(old_lruvec, NR_SWAPCACHE, -nr);
-		__mod_lruvec_state(new_lruvec, NR_SWAPCACHE, nr);
+		mod_lruvec_state(old_lruvec, NR_SWAPCACHE, -nr);
+		mod_lruvec_state(new_lruvec, NR_SWAPCACHE, nr);
 	}
 #endif
 	if (dirty && mapping_can_writeback(mapping)) {
-		__mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
+		mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
 		__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);
-		__mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
+		mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
 		__mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
 	}
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ba760072830b..b3231bdde4e6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2019,7 +2019,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	spin_lock_irq(&lruvec->lru_lock);
 	move_folios_to_lru(lruvec, &folio_list);
 
-	__mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
+	mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
 			stat.nr_demoted);
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
@@ -4745,7 +4745,7 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 		reset_batch_size(walk);
 	}
 
-	__mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
+	mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
 			stat.nr_demoted);
 
 	item = PGSTEAL_KSWAPD + reclaimer_offset(sc);
-- 
2.47.3

From nobody Sat Feb 7 17:54:52 2026
From: Shakeel Butt
To: Andrew Morton
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song, Harry Yoo,
 Qi Zheng, Vlastimil Babka, linux-mm@kvack.org, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, Meta kernel team
Subject: [PATCH 4/4] memcg: remove __lruvec_stat_mod_folio
Date: Mon, 10 Nov 2025 15:20:08 -0800
Message-ID: <20251110232008.1352063-5-shakeel.butt@linux.dev>
In-Reply-To: <20251110232008.1352063-1-shakeel.butt@linux.dev>
References: <20251110232008.1352063-1-shakeel.butt@linux.dev>

__lruvec_stat_mod_folio() is already irq-safe, so there is no need for a
separate interface (i.e. lruvec_stat_mod_folio) that wraps calls to it
with irq disabling and re-enabling. Let's rename __lruvec_stat_mod_folio
to lruvec_stat_mod_folio.
Signed-off-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Vlastimil Babka
Reviewed-by: Harry Yoo
---
 include/linux/vmstat.h | 30 +-----------------------------
 mm/filemap.c           | 20 ++++++++++----------
 mm/huge_memory.c       |  4 ++--
 mm/khugepaged.c        |  8 ++++----
 mm/memcontrol.c        |  4 ++--
 mm/page-writeback.c    |  2 +-
 mm/rmap.c              |  4 ++--
 mm/shmem.c             |  6 +++---
 8 files changed, 25 insertions(+), 53 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 4eb7753e6e5c..3398a345bda8 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -523,19 +523,9 @@ static inline const char *vm_event_name(enum vm_event_item item)
 void mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 			int val);
 
-void __lruvec_stat_mod_folio(struct folio *folio,
+void lruvec_stat_mod_folio(struct folio *folio,
 			enum node_stat_item idx, int val);
 
-static inline void lruvec_stat_mod_folio(struct folio *folio,
-					 enum node_stat_item idx, int val)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-	__lruvec_stat_mod_folio(folio, idx, val);
-	local_irq_restore(flags);
-}
-
 static inline void mod_lruvec_page_state(struct page *page,
 			enum node_stat_item idx, int val)
 {
@@ -550,12 +540,6 @@ static inline void mod_lruvec_state(struct lruvec *lruvec,
 	mod_node_page_state(lruvec_pgdat(lruvec), idx, val);
 }
 
-static inline void __lruvec_stat_mod_folio(struct folio *folio,
-					   enum node_stat_item idx, int val)
-{
-	mod_node_page_state(folio_pgdat(folio), idx, val);
-}
-
 static inline void lruvec_stat_mod_folio(struct folio *folio,
 					 enum node_stat_item idx, int val)
 {
@@ -570,18 +554,6 @@ static inline void mod_lruvec_page_state(struct page *page,
 
 #endif /* CONFIG_MEMCG */
 
-static inline void __lruvec_stat_add_folio(struct folio *folio,
-					   enum node_stat_item idx)
-{
-	__lruvec_stat_mod_folio(folio, idx, folio_nr_pages(folio));
-}
-
-static inline void __lruvec_stat_sub_folio(struct folio *folio,
-					   enum node_stat_item idx)
-{
-	__lruvec_stat_mod_folio(folio, idx, -folio_nr_pages(folio));
-}
-
 static inline void lruvec_stat_add_folio(struct folio *folio,
 					 enum node_stat_item idx)
 {
diff --git a/mm/filemap.c b/mm/filemap.c
index 63eb163af99c..9a52fb3ba093 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -182,13 +182,13 @@ static void filemap_unaccount_folio(struct address_space *mapping,
 
 	nr = folio_nr_pages(folio);
 
-	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
+	lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
 	if (folio_test_swapbacked(folio)) {
-		__lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
+		lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
 		if (folio_test_pmd_mappable(folio))
-			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
+			lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr);
 	} else if (folio_test_pmd_mappable(folio)) {
-		__lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
+		lruvec_stat_mod_folio(folio, NR_FILE_THPS, -nr);
 		filemap_nr_thps_dec(mapping);
 	}
 	if (test_bit(AS_KERNEL_FILE, &folio->mapping->flags))
@@ -831,13 +831,13 @@ void replace_page_cache_folio(struct folio *old, struct folio *new)
 	old->mapping = NULL;
 	/* hugetlb pages do not participate in page cache accounting. */
 	if (!folio_test_hugetlb(old))
-		__lruvec_stat_sub_folio(old, NR_FILE_PAGES);
+		lruvec_stat_sub_folio(old, NR_FILE_PAGES);
 	if (!folio_test_hugetlb(new))
-		__lruvec_stat_add_folio(new, NR_FILE_PAGES);
+		lruvec_stat_add_folio(new, NR_FILE_PAGES);
 	if (folio_test_swapbacked(old))
-		__lruvec_stat_sub_folio(old, NR_SHMEM);
+		lruvec_stat_sub_folio(old, NR_SHMEM);
 	if (folio_test_swapbacked(new))
-		__lruvec_stat_add_folio(new, NR_SHMEM);
+		lruvec_stat_add_folio(new, NR_SHMEM);
 	xas_unlock_irq(&xas);
 	if (free_folio)
 		free_folio(old);
@@ -920,9 +920,9 @@ noinline int __filemap_add_folio(struct address_space *mapping,
 
 	/* hugetlb pages do not participate in page cache accounting */
 	if (!huge) {
-		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+		lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
 		if (folio_test_pmd_mappable(folio))
-			__lruvec_stat_mod_folio(folio,
+			lruvec_stat_mod_folio(folio,
 					NR_FILE_THPS, nr);
 	}
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 949250932bb4..943099eae8d5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3866,10 +3866,10 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		if (folio_test_pmd_mappable(folio) && new_order < HPAGE_PMD_ORDER) {
 			if (folio_test_swapbacked(folio)) {
-				__lruvec_stat_mod_folio(folio,
+				lruvec_stat_mod_folio(folio,
 						NR_SHMEM_THPS, -nr);
 			} else {
-				__lruvec_stat_mod_folio(folio,
+				lruvec_stat_mod_folio(folio,
 						NR_FILE_THPS, -nr);
 				filemap_nr_thps_dec(mapping);
 			}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1a08673b0d8b..2a460664a67d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2174,14 +2174,14 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	if (is_shmem)
-		__lruvec_stat_mod_folio(new_folio, NR_SHMEM_THPS, HPAGE_PMD_NR);
+		lruvec_stat_mod_folio(new_folio, NR_SHMEM_THPS, HPAGE_PMD_NR);
 	else
-		__lruvec_stat_mod_folio(new_folio, NR_FILE_THPS, HPAGE_PMD_NR);
+		lruvec_stat_mod_folio(new_folio, NR_FILE_THPS, HPAGE_PMD_NR);
 
 	if (nr_none) {
-		__lruvec_stat_mod_folio(new_folio, NR_FILE_PAGES, nr_none);
+		lruvec_stat_mod_folio(new_folio, NR_FILE_PAGES, nr_none);
 		/* nr_none is always 0 for non-shmem. */
-		__lruvec_stat_mod_folio(new_folio, NR_SHMEM, nr_none);
+		lruvec_stat_mod_folio(new_folio, NR_SHMEM, nr_none);
 	}
 
 	/*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c31074e5852b..7f074d72dabc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -777,7 +777,7 @@ void mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 	mod_memcg_lruvec_state(lruvec, idx, val);
 }
 
-void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
+void lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 		int val)
 {
 	struct mem_cgroup *memcg;
@@ -797,7 +797,7 @@ void __lruvec_stat_mod_folio(struct folio *folio, enum node_stat_item idx,
 	mod_lruvec_state(lruvec, idx, val);
 	rcu_read_unlock();
 }
-EXPORT_SYMBOL(__lruvec_stat_mod_folio);
+EXPORT_SYMBOL(lruvec_stat_mod_folio);
 
 void mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
 {
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index a124ab6a205d..ccdeb0e84d39 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2652,7 +2652,7 @@ static void folio_account_dirtied(struct folio *folio,
 		inode_attach_wb(inode, folio);
 		wb = inode_to_wb(inode);
 
-		__lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, nr);
+		lruvec_stat_mod_folio(folio, NR_FILE_DIRTY, nr);
 		__zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, nr);
 		__node_stat_mod_folio(folio, NR_DIRTIED, nr);
 		wb_stat_mod(wb, WB_RECLAIMABLE, nr);
diff --git a/mm/rmap.c b/mm/rmap.c
index 60c3cd70b6ea..1b3a3c7b0aeb 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1212,12 +1212,12 @@ static void __folio_mod_stat(struct folio *folio, int nr, int nr_pmdmapped)
 
 	if (nr) {
 		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
-		__lruvec_stat_mod_folio(folio, idx, nr);
+		lruvec_stat_mod_folio(folio, idx, nr);
 	}
 	if (nr_pmdmapped) {
 		if (folio_test_anon(folio)) {
 			idx = NR_ANON_THPS;
-			__lruvec_stat_mod_folio(folio, idx, nr_pmdmapped);
+			lruvec_stat_mod_folio(folio, idx, nr_pmdmapped);
 		} else {
 			/* NR_*_PMDMAPPED are not maintained per-memcg */
 			idx = folio_test_swapbacked(folio) ?
diff --git a/mm/shmem.c b/mm/shmem.c
index c3ed2dcd17f8..4fba8a597256 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -882,9 +882,9 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
 static void shmem_update_stats(struct folio *folio, int nr_pages)
 {
 	if (folio_test_pmd_mappable(folio))
-		__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages);
-	__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
-	__lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages);
+		lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages);
+	lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
+	lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages);
 }
 
 /*
-- 
2.47.3