From: Qi Zheng
To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com,
    roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev,
    david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com,
    harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com,
    axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
    akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    Muchun Song, Qi Zheng
Subject: [PATCH v1 01/26] mm: memcontrol: remove dead code of checking parent memory cgroup
Date: Tue, 28 Oct 2025 21:58:14 +0800

From: Muchun Song

The non-hierarchical mode was deprecated by commit bef8620cd8e0 ("mm:
memcg: deprecate the non-hierarchical mode"). As a result,
parent_mem_cgroup() cannot return NULL except when it is passed the root
memcg, and the root memcg cannot be offlined. Hence it is safe to remove
the check on the return value of parent_mem_cgroup(). Remove the
corresponding dead code.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
Acked-by: Johannes Weiner
Signed-off-by: Qi Zheng
Reviewed-by: Chen Ridong
Reviewed-by: Harry Yoo
---
 mm/memcontrol.c | 5 -----
 mm/shrinker.c   | 6 +-----
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 93f7c76f0ce96..d5257465c9d75 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3339,9 +3339,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
         return;

     parent = parent_mem_cgroup(memcg);
-    if (!parent)
-        parent = root_mem_cgroup;
-
     memcg_reparent_list_lrus(memcg, parent);

     /*
@@ -3632,8 +3629,6 @@ struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
             break;
         }
         memcg = parent_mem_cgroup(memcg);
-        if (!memcg)
-            memcg = root_mem_cgroup;
     }
     return memcg;
 }
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 4a93fd433689a..e8e092a2f7f41 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -286,14 +286,10 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
 {
     int nid, index, offset;
     long nr;
-    struct mem_cgroup *parent;
+    struct mem_cgroup *parent = parent_mem_cgroup(memcg);
     struct shrinker_info *child_info, *parent_info;
     struct shrinker_info_unit *child_unit, *parent_unit;

-    parent = parent_mem_cgroup(memcg);
-    if (!parent)
-        parent = root_mem_cgroup;
-
     /* Prevent from concurrent shrinker_info expand */
     mutex_lock(&shrinker_mutex);
     for_each_node(nid) {
-- 
2.20.1
From: Qi Zheng
Subject: [PATCH v1 02/26] mm: workingset: use folio_lruvec() in workingset_refault()
Date: Tue, 28 Oct 2025 21:58:15 +0800

From: Muchun Song

Use folio_lruvec() to simplify the code.

Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Signed-off-by: Qi Zheng
Reviewed-by: Harry Yoo
---
 mm/workingset.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 68a76a91111f4..8cad8ee6dec6a 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -534,8 +534,6 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 void workingset_refault(struct folio *folio, void *shadow)
 {
     bool file = folio_is_file_lru(folio);
-    struct pglist_data *pgdat;
-    struct mem_cgroup *memcg;
     struct lruvec *lruvec;
     bool workingset;
     long nr;
@@ -557,10 +555,7 @@ void workingset_refault(struct folio *folio, void *shadow)
      * locked to guarantee folio_memcg() stability throughout.
      */
     nr = folio_nr_pages(folio);
-    memcg = folio_memcg(folio);
-    pgdat = folio_pgdat(folio);
-    lruvec = mem_cgroup_lruvec(memcg, pgdat);
-
+    lruvec = folio_lruvec(folio);
     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);

     if (!workingset_test_recent(shadow, file, &workingset, true))
-- 
2.20.1
From: Qi Zheng
Subject: [PATCH v1 03/26] mm: rename unlock_page_lruvec_irq and its variants
Date: Tue, 28 Oct 2025 21:58:16 +0800

From: Muchun Song

It is inappropriate to use the folio_lruvec_lock() variants in
conjunction with the unlock_page_lruvec() variants, as this involves the
inconsistent operation of locking a folio while unlocking a page. To
rectify this, the functions unlock_page_lruvec{,_irq,_irqrestore} are
renamed to lruvec_unlock{,_irq,_irqrestore}.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
Acked-by: Johannes Weiner
Signed-off-by: Qi Zheng
Reviewed-by: Chen Ridong
Reviewed-by: Harry Yoo
---
 include/linux/memcontrol.h | 10 +++++-----
 mm/compaction.c            | 14 +++++++-------
 mm/huge_memory.c           |  2 +-
 mm/mlock.c                 |  2 +-
 mm/swap.c                  | 12 ++++++------
 mm/vmscan.c                |  4 ++--
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8d2e250535a8a..6185d8399a54e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1493,17 +1493,17 @@ static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
     return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
 }

-static inline void unlock_page_lruvec(struct lruvec *lruvec)
+static inline void lruvec_unlock(struct lruvec *lruvec)
 {
     spin_unlock(&lruvec->lru_lock);
 }

-static inline void unlock_page_lruvec_irq(struct lruvec *lruvec)
+static inline void lruvec_unlock_irq(struct lruvec *lruvec)
 {
     spin_unlock_irq(&lruvec->lru_lock);
 }

-static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
+static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec,
         unsigned long flags)
 {
     spin_unlock_irqrestore(&lruvec->lru_lock, flags);
@@ -1525,7 +1525,7 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
         if (folio_matches_lruvec(folio, locked_lruvec))
             return locked_lruvec;

-        unlock_page_lruvec_irq(locked_lruvec);
+        lruvec_unlock_irq(locked_lruvec);
     }

     return folio_lruvec_lock_irq(folio);
@@ -1539,7 +1539,7 @@ static inline void folio_lruvec_relock_irqsave(struct folio *folio,
         if (folio_matches_lruvec(folio, *lruvecp))
             return;

-        unlock_page_lruvec_irqrestore(*lruvecp, *flags);
+        lruvec_unlock_irqrestore(*lruvecp, *flags);
     }

     *lruvecp = folio_lruvec_lock_irqsave(folio, flags);
diff --git a/mm/compaction.c b/mm/compaction.c
index 8760d10bd0b32..4dce180f699b4 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -913,7 +913,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
          */
         if (!(low_pfn % COMPACT_CLUSTER_MAX)) {
             if (locked) {
-                unlock_page_lruvec_irqrestore(locked, flags);
+                lruvec_unlock_irqrestore(locked, flags);
                 locked = NULL;
             }

@@ -964,7 +964,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
             }
             /* for alloc_contig case */
             if (locked) {
-                unlock_page_lruvec_irqrestore(locked, flags);
+                lruvec_unlock_irqrestore(locked, flags);
                 locked = NULL;
             }

@@ -1053,7 +1053,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
         if (unlikely(page_has_movable_ops(page)) &&
                 !PageMovableOpsIsolated(page)) {
             if (locked) {
-                unlock_page_lruvec_irqrestore(locked, flags);
+                lruvec_unlock_irqrestore(locked, flags);
                 locked = NULL;
             }

@@ -1158,7 +1158,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
         /* If we already hold the lock, we can skip some rechecking */
         if (lruvec != locked) {
             if (locked)
-                unlock_page_lruvec_irqrestore(locked, flags);
+                lruvec_unlock_irqrestore(locked, flags);

             compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
             locked = lruvec;
@@ -1226,7 +1226,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
         /* Avoid potential deadlock in freeing page under lru_lock */
         if (locked) {
-            unlock_page_lruvec_irqrestore(locked, flags);
+            lruvec_unlock_irqrestore(locked, flags);
             locked = NULL;
         }
         folio_put(folio);
@@ -1242,7 +1242,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
          */
         if (nr_isolated) {
             if (locked) {
-                unlock_page_lruvec_irqrestore(locked, flags);
+                lruvec_unlock_irqrestore(locked, flags);
                 locked = NULL;
             }
             putback_movable_pages(&cc->migratepages);
@@ -1274,7 +1274,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,

 isolate_abort:
     if (locked)
-        unlock_page_lruvec_irqrestore(locked, flags);
+        lruvec_unlock_irqrestore(locked, flags);
     if (folio) {
         folio_set_lru(folio);
         folio_put(folio);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0a826b6e6aa7f..9d3594df6eedf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4014,7 +4014,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
         expected_refs = folio_expected_ref_count(folio) + 1;
         folio_ref_unfreeze(folio, expected_refs);

-        unlock_page_lruvec(lruvec);
+        lruvec_unlock(lruvec);

         if (ci)
             swap_cluster_unlock(ci);
diff --git a/mm/mlock.c b/mm/mlock.c
index bb0776f5ef7ca..5a81de8dd4875 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -205,7 +205,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
     }

     if (lruvec)
-        unlock_page_lruvec_irq(lruvec);
+        lruvec_unlock_irq(lruvec);
     folios_put(fbatch);
 }

diff --git a/mm/swap.c b/mm/swap.c
index 2260dcd2775e7..ec0c654e128dc 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -91,7 +91,7 @@ static void page_cache_release(struct folio *folio)

     __page_cache_release(folio, &lruvec, &flags);
     if (lruvec)
-        unlock_page_lruvec_irqrestore(lruvec, flags);
+        lruvec_unlock_irqrestore(lruvec, flags);
 }

 void __folio_put(struct folio *folio)
@@ -175,7 +175,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
     }

     if (lruvec)
-        unlock_page_lruvec_irqrestore(lruvec, flags);
+        lruvec_unlock_irqrestore(lruvec, flags);
     folios_put(fbatch);
 }

@@ -349,7 +349,7 @@ void folio_activate(struct folio *folio)

     lruvec = folio_lruvec_lock_irq(folio);
     lru_activate(lruvec, folio);
-    unlock_page_lruvec_irq(lruvec);
+    lruvec_unlock_irq(lruvec);
     folio_set_lru(folio);
 }
 #endif
@@ -963,7 +963,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)

         if (folio_is_zone_device(folio)) {
             if (lruvec) {
-                unlock_page_lruvec_irqrestore(lruvec, flags);
+                lruvec_unlock_irqrestore(lruvec, flags);
                 lruvec = NULL;
             }
             if (folio_ref_sub_and_test(folio, nr_refs))
@@ -977,7 +977,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
         /* hugetlb has its own memcg */
         if (folio_test_hugetlb(folio)) {
             if (lruvec) {
-                unlock_page_lruvec_irqrestore(lruvec, flags);
+                lruvec_unlock_irqrestore(lruvec, flags);
                 lruvec = NULL;
             }
             free_huge_folio(folio);
@@ -991,7 +991,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
         j++;
     }
     if (lruvec)
-        unlock_page_lruvec_irqrestore(lruvec, flags);
+        lruvec_unlock_irqrestore(lruvec, flags);
     if (!j) {
         folio_batch_reinit(folios);
         return;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c922bad2b8fd4..3a1044ce30f1e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1829,7 +1829,7 @@ bool folio_isolate_lru(struct folio *folio)
         folio_get(folio);
         lruvec = folio_lruvec_lock_irq(folio);
         lruvec_del_folio(lruvec, folio);
-        unlock_page_lruvec_irq(lruvec);
+        lruvec_unlock_irq(lruvec);
         ret = true;
     }

@@ -7849,7 +7849,7 @@ void check_move_unevictable_folios(struct folio_batch *fbatch)
     if (lruvec) {
         __count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
         __count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-        unlock_page_lruvec_irq(lruvec);
+        lruvec_unlock_irq(lruvec);
     } else if (pgscanned) {
         count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
     }
-- 
2.20.1
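As a quick illustration (not part of the patch series), the renamed helpers are meant to pair naturally with the folio_lruvec_lock*() variants; the sketch below uses only helpers that appear in the hunks above, and the wrapper function itself is invented for illustration:

/*
 * Illustrative sketch: take the lru_lock via the folio, release it via
 * the lruvec, with matching lruvec_* names on both sides.
 */
static void isolate_one_folio(struct folio *folio)
{
    struct lruvec *lruvec;

    lruvec = folio_lruvec_lock_irq(folio);
    lruvec_del_folio(lruvec, folio);
    lruvec_unlock_irq(lruvec);
}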
From: Qi Zheng
Subject: [PATCH v1 04/26] mm: vmscan: refactor move_folios_to_lru()
Date: Tue, 28 Oct 2025 21:58:17 +0800

From: Muchun Song

In a subsequent patch, we'll reparent the LRU folios. The folios that are
moved to the appropriate LRU list can undergo reparenting during the
move_folios_to_lru() process. Hence, it's incorrect for the caller to hold
a lruvec lock. Instead, we should utilize the more general interface of
folio_lruvec_relock_irq() to obtain the correct lruvec lock.

This patch involves only code refactoring and doesn't introduce any
functional changes.

Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Signed-off-by: Qi Zheng
---
 mm/vmscan.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3a1044ce30f1e..660cd40cfddd4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1883,24 +1883,27 @@ static bool too_many_isolated(struct pglist_data *pgdat, int file,
 /*
  * move_folios_to_lru() moves folios from private @list to appropriate LRU list.
  *
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to the appropriate lruvec.
+ *
+ * Note: The caller must not hold any lruvec lock.
  */
-static unsigned int move_folios_to_lru(struct lruvec *lruvec,
-        struct list_head *list)
+static unsigned int move_folios_to_lru(struct list_head *list)
 {
     int nr_pages, nr_moved = 0;
+    struct lruvec *lruvec = NULL;
     struct folio_batch free_folios;

     folio_batch_init(&free_folios);
     while (!list_empty(list)) {
         struct folio *folio = lru_to_folio(list);

+        lruvec = folio_lruvec_relock_irq(folio, lruvec);
         VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
         list_del(&folio->lru);
         if (unlikely(!folio_evictable(folio))) {
-            spin_unlock_irq(&lruvec->lru_lock);
+            lruvec_unlock_irq(lruvec);
             folio_putback_lru(folio);
-            spin_lock_irq(&lruvec->lru_lock);
+            lruvec = NULL;
             continue;
         }

@@ -1922,19 +1925,15 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,

         folio_unqueue_deferred_split(folio);
         if (folio_batch_add(&free_folios, folio) == 0) {
-                spin_unlock_irq(&lruvec->lru_lock);
+                lruvec_unlock_irq(lruvec);
                 mem_cgroup_uncharge_folios(&free_folios);
                 free_unref_folios(&free_folios);
-                spin_lock_irq(&lruvec->lru_lock);
+                lruvec = NULL;
             }

             continue;
         }

-        /*
-         * All pages were isolated from the same lruvec (and isolation
-         * inhibits memcg migration).
-         */
         VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
         lruvec_add_folio(lruvec, folio);
         nr_pages = folio_nr_pages(folio);
@@ -1943,11 +1942,12 @@ static unsigned int move_folios_to_lru(struct lruvec *lruvec,
         workingset_age_nonresident(lruvec, nr_pages);
     }

+    if (lruvec)
+        lruvec_unlock_irq(lruvec);
+
     if (free_folios.nr) {
-        spin_unlock_irq(&lruvec->lru_lock);
         mem_cgroup_uncharge_folios(&free_folios);
         free_unref_folios(&free_folios);
-        spin_lock_irq(&lruvec->lru_lock);
     }

     return nr_moved;
@@ -2016,9 +2016,9 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
     nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false,
                      lruvec_memcg(lruvec));

-    spin_lock_irq(&lruvec->lru_lock);
-    move_folios_to_lru(lruvec, &folio_list);
+    move_folios_to_lru(&folio_list);

+    spin_lock_irq(&lruvec->lru_lock);
     __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
                stat.nr_demoted);
     __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
@@ -2166,11 +2166,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
     /*
      * Move folios back to the lru list.
      */
-    spin_lock_irq(&lruvec->lru_lock);
-
-    nr_activate = move_folios_to_lru(lruvec, &l_active);
-    nr_deactivate = move_folios_to_lru(lruvec, &l_inactive);
+    nr_activate = move_folios_to_lru(&l_active);
+    nr_deactivate = move_folios_to_lru(&l_inactive);

+    spin_lock_irq(&lruvec->lru_lock);
     __count_vm_events(PGDEACTIVATE, nr_deactivate);
     count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);

@@ -4735,14 +4734,15 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
             set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS, BIT(PG_active));
     }

-    spin_lock_irq(&lruvec->lru_lock);
-
-    move_folios_to_lru(lruvec, &list);
+    move_folios_to_lru(&list);

+    local_irq_disable();
     walk = current->reclaim_state->mm_walk;
     if (walk && walk->batched) {
         walk->lruvec = lruvec;
+        spin_lock(&lruvec->lru_lock);
         reset_batch_size(walk);
+        spin_unlock(&lruvec->lru_lock);
     }

     __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(sc),
@@ -4754,7 +4754,7 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
     count_memcg_events(memcg, item, reclaimed);
     __count_vm_events(PGSTEAL_ANON + type, reclaimed);

-    spin_unlock_irq(&lruvec->lru_lock);
+    local_irq_enable();

     list_splice_init(&clean, &list);

-- 
2.20.1
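For readers following along, a minimal sketch (not part of the patch, walker name invented) of the relock pattern that move_folios_to_lru() now relies on; folio_lruvec_relock_irq() and the other helpers are the ones shown in the hunks above:

/*
 * Illustrative sketch: walk a private list without holding any lruvec
 * lock up front; folio_lruvec_relock_irq() keeps the current lock when
 * the next folio maps to the same lruvec and switches locks otherwise.
 */
static void putback_list(struct list_head *list)
{
    struct lruvec *lruvec = NULL;

    while (!list_empty(list)) {
        struct folio *folio = lru_to_folio(list);

        lruvec = folio_lruvec_relock_irq(folio, lruvec);
        list_del(&folio->lru);
        lruvec_add_folio(lruvec, folio);
    }
    if (lruvec)
        lruvec_unlock_irq(lruvec);
}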
From: Qi Zheng
Subject: [PATCH v1 05/26] mm: memcontrol: allocate object cgroup for non-kmem case
Date: Tue, 28 Oct 2025 21:58:18 +0800

From: Muchun Song

Pagecache pages are charged at allocation time and hold a reference to
the original memory cgroup until reclaimed. Depending on memory pressure,
page sharing patterns between different cgroups and cgroup
creation/destruction rates, many dying memory cgroups can be pinned by
pagecache pages, reducing page reclaim efficiency and wasting memory.
Converting LRU folios and most other raw memory cgroup pins to the object
cgroup direction can fix this long-standing problem. As a result, the
objcg infrastructure is no longer solely applicable to the kmem case.

In this patch, we extend the scope of the objcg infrastructure beyond the
kmem case, enabling LRU folios to reuse it for folio charging purposes.

It should be noted that LRU folios are not accounted for at the root
level, yet folio->memcg_data of an LRU folio still points to the
root_mem_cgroup, so it is always a valid pointer. However, the
root_mem_cgroup does not possess an object cgroup. Therefore, we also
allocate an object cgroup for the root_mem_cgroup.

Signed-off-by: Muchun Song
Signed-off-by: Qi Zheng
Reviewed-by: Harry Yoo
---
 mm/memcontrol.c | 51 +++++++++++++++++++++++--------------------------
 1 file changed, 24 insertions(+), 27 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d5257465c9d75..2afd7f99ca101 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -204,10 +204,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
     return objcg;
 }

-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
-                  struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
     struct obj_cgroup *objcg, *iter;
+    struct mem_cgroup *parent = parent_mem_cgroup(memcg);

     objcg = rcu_replace_pointer(memcg->objcg, NULL, true);

@@ -3302,30 +3302,17 @@ unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
     return val;
 }

-static int memcg_online_kmem(struct mem_cgroup *memcg)
+static void memcg_online_kmem(struct mem_cgroup *memcg)
 {
-    struct obj_cgroup *objcg;
-
     if (mem_cgroup_kmem_disabled())
-        return 0;
+        return;

     if (unlikely(mem_cgroup_is_root(memcg)))
-        return 0;
-
-    objcg = obj_cgroup_alloc();
-    if (!objcg)
-        return -ENOMEM;
-
-    objcg->memcg = memcg;
-    rcu_assign_pointer(memcg->objcg, objcg);
-    obj_cgroup_get(objcg);
-    memcg->orig_objcg = objcg;
+        return;

     static_branch_enable(&memcg_kmem_online_key);

     memcg->kmemcg_id = memcg->id.id;
-
-    return 0;
 }

 static void memcg_offline_kmem(struct mem_cgroup *memcg)
@@ -3340,12 +3327,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)

     parent = parent_mem_cgroup(memcg);
     memcg_reparent_list_lrus(memcg, parent);
-
-    /*
-     * Objcg's reparenting must be after list_lru's, make sure list_lru
-     * helpers won't use parent's list_lru until child is drained.
-     */
-    memcg_reparent_objcgs(memcg, parent);
 }

 #ifdef CONFIG_CGROUP_WRITEBACK
@@ -3862,9 +3843,9 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
     struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+    struct obj_cgroup *objcg;

-    if (memcg_online_kmem(memcg))
-        goto remove_id;
+    memcg_online_kmem(memcg);

     /*
      * A memcg must be visible for expand_shrinker_info()
@@ -3874,6 +3855,15 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
     if (alloc_shrinker_info(memcg))
         goto offline_kmem;

+    objcg = obj_cgroup_alloc();
+    if (!objcg)
+        goto free_shrinker;
+
+    objcg->memcg = memcg;
+    rcu_assign_pointer(memcg->objcg, objcg);
+    obj_cgroup_get(objcg);
+    memcg->orig_objcg = objcg;
+
     if (unlikely(mem_cgroup_is_root(memcg)) && !mem_cgroup_disabled())
         queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
                    FLUSH_TIME);
@@ -3896,9 +3886,10 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
     xa_store(&mem_cgroup_ids, memcg->id.id, memcg, GFP_KERNEL);

     return 0;
+free_shrinker:
+    free_shrinker_info(memcg);
 offline_kmem:
     memcg_offline_kmem(memcg);
-remove_id:
     mem_cgroup_id_remove(memcg);
     return -ENOMEM;
 }
@@ -3916,6 +3907,12 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)

     memcg_offline_kmem(memcg);
     reparent_deferred_split_queue(memcg);
+    /*
+     * The reparenting of objcg must be after the reparenting of the
+     * list_lru and deferred_split_queue above, which ensures that they will
+     * not mistakenly get the parent list_lru and deferred_split_queue.
+     */
+    memcg_reparent_objcgs(memcg);
     reparent_shrinker_deferred(memcg);
     wb_memcg_offline(memcg);
     lru_gen_offline_memcg(memcg);
-- 
2.20.1
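To make the new control flow easier to follow, here is a condensed sketch (not a literal copy; the helper name is invented) of the objcg setup that mem_cgroup_css_online() now performs for every memcg, including the root; it merely restates the hunk above with error handling trimmed:

/*
 * Illustrative sketch: every memcg gets an objcg at online time, so
 * folio charging can later go through the objcg indirection.
 */
static int online_objcg(struct mem_cgroup *memcg)
{
    struct obj_cgroup *objcg = obj_cgroup_alloc();

    if (!objcg)
        return -ENOMEM;

    objcg->memcg = memcg;
    rcu_assign_pointer(memcg->objcg, objcg);
    obj_cgroup_get(objcg);          /* reference held via memcg->orig_objcg */
    memcg->orig_objcg = objcg;
    return 0;
}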
From: Qi Zheng
Subject: [PATCH v1 06/26] mm: memcontrol: return root object cgroup for root memory cgroup
Date: Tue, 28 Oct 2025 21:58:19 +0800

From: Muchun Song

Memory cgroup functions such as get_mem_cgroup_from_folio() and
get_mem_cgroup_from_mm() return a valid memory cgroup pointer, even for
the root memory cgroup. In contrast, the situation for object cgroups has
been different. Previously, the root object cgroup couldn't be returned
because it didn't exist. Now that a valid root object cgroup exists, for
the sake of consistency, it's necessary to align the behavior of
object-cgroup-related operations with that of memory cgroup APIs.

Signed-off-by: Muchun Song
Signed-off-by: Qi Zheng
---
 include/linux/memcontrol.h | 29 +++++++++++++++++-------
 mm/memcontrol.c            | 45 ++++++++++++++++++++------------------
 mm/percpu.c                |  2 +-
 3 files changed, 46 insertions(+), 30 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6185d8399a54e..9fdbd4970021d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -332,6 +332,7 @@ struct mem_cgroup {
 #define MEMCG_CHARGE_BATCH 64U

 extern struct mem_cgroup *root_mem_cgroup;
+extern struct obj_cgroup *root_obj_cgroup;

 enum page_memcg_data_flags {
     /* page->memcg_data is a pointer to an slabobj_ext vector */
@@ -549,6 +550,11 @@ static inline bool mem_cgroup_is_root(struct mem_cgroup *memcg)
     return (memcg == root_mem_cgroup);
 }

+static inline bool obj_cgroup_is_root(const struct obj_cgroup *objcg)
+{
+    return objcg == root_obj_cgroup;
+}
+
 static inline bool mem_cgroup_disabled(void)
 {
     return !cgroup_subsys_enabled(memory_cgrp_subsys);
@@ -773,23 +779,26 @@ struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){

 static inline bool obj_cgroup_tryget(struct obj_cgroup *objcg)
 {
+    if (obj_cgroup_is_root(objcg))
+        return true;
     return percpu_ref_tryget(&objcg->refcnt);
 }

-static inline void obj_cgroup_get(struct obj_cgroup *objcg)
+static inline void obj_cgroup_get_many(struct obj_cgroup *objcg,
+                       unsigned long nr)
 {
-    percpu_ref_get(&objcg->refcnt);
+    if (!obj_cgroup_is_root(objcg))
+        percpu_ref_get_many(&objcg->refcnt, nr);
 }

-static inline void obj_cgroup_get_many(struct obj_cgroup *objcg,
-                       unsigned long nr)
+static inline void obj_cgroup_get(struct obj_cgroup *objcg)
 {
-    percpu_ref_get_many(&objcg->refcnt, nr);
+    obj_cgroup_get_many(objcg, 1);
 }

 static inline void obj_cgroup_put(struct obj_cgroup *objcg)
 {
-    if (objcg)
+    if (objcg && !obj_cgroup_is_root(objcg))
         percpu_ref_put(&objcg->refcnt);
 }

@@ -1094,6 +1103,11 @@ static inline bool mem_cgroup_is_root(struct mem_cgroup *memcg)
     return true;
 }

+static inline bool obj_cgroup_is_root(const struct obj_cgroup *objcg)
+{
+    return true;
+}
+
 static inline bool mem_cgroup_disabled(void)
 {
     return true;
@@ -1710,8 +1724,7 @@ static inline struct obj_cgroup *get_obj_cgroup_from_current(void)
 {
     struct obj_cgroup *objcg = current_obj_cgroup();

-    if (objcg)
-        obj_cgroup_get(objcg);
+    obj_cgroup_get(objcg);

     return objcg;
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2afd7f99ca101..d484b632c790f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -83,6 +83,8 @@ EXPORT_SYMBOL(memory_cgrp_subsys);
 struct mem_cgroup *root_mem_cgroup __read_mostly;
 EXPORT_SYMBOL(root_mem_cgroup);

+struct obj_cgroup *root_obj_cgroup __read_mostly;
+
 /* Active memory cgroup to use from an interrupt context */
 DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg);
 EXPORT_PER_CPU_SYMBOL_GPL(int_active_memcg);
@@ -2642,15 +2644,14 @@ struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)

 static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
 {
-    struct obj_cgroup *objcg = NULL;
+    for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+        struct obj_cgroup *objcg = rcu_dereference(memcg->objcg);

-    for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
-        objcg = rcu_dereference(memcg->objcg);
         if (likely(objcg && obj_cgroup_tryget(objcg)))
-            break;
-        objcg = NULL;
+            return objcg;
     }
-    return objcg;
+
+    return NULL;
 }

 static struct obj_cgroup *current_objcg_update(void)
@@ -2724,18 +2725,17 @@ __always_inline struct obj_cgroup *current_obj_cgroup(void)
          * Objcg reference is kept by the task, so it's safe
          * to use the objcg by the current task.
          */
-        return objcg;
+        return objcg ? : root_obj_cgroup;
     }

     memcg = this_cpu_read(int_active_memcg);
     if (unlikely(memcg))
         goto from_memcg;

-    return NULL;
+    return root_obj_cgroup;

 from_memcg:
-    objcg = NULL;
-    for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
+    for (; memcg; memcg = parent_mem_cgroup(memcg)) {
         /*
          * Memcg pointer is protected by scope (see set_active_memcg())
          * and is pinning the corresponding objcg, so objcg can't go
@@ -2744,10 +2744,10 @@ __always_inline struct obj_cgroup *current_obj_cgroup(void)
          */
         objcg = rcu_dereference_check(memcg->objcg, 1);
         if (likely(objcg))
-            break;
+            return objcg;
     }

-    return objcg;
+    return root_obj_cgroup;
 }

 struct obj_cgroup *get_obj_cgroup_from_folio(struct folio *folio)
@@ -2761,14 +2761,8 @@ struct obj_cgroup *get_obj_cgroup_from_folio(struct folio *folio)
         objcg = __folio_objcg(folio);
         obj_cgroup_get(objcg);
     } else {
-        struct mem_cgroup *memcg;
-
         rcu_read_lock();
-        memcg = __folio_memcg(folio);
-        if (memcg)
-            objcg = __get_obj_cgroup_from_memcg(memcg);
-        else
-            objcg = NULL;
+        objcg = __get_obj_cgroup_from_memcg(__folio_memcg(folio));
         rcu_read_unlock();
     }
     return objcg;
@@ -2871,7 +2865,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
     int ret = 0;

     objcg = current_obj_cgroup();
-    if (objcg) {
+    if (!obj_cgroup_is_root(objcg)) {
         ret = obj_cgroup_charge_pages(objcg, gfp, 1 << order);
         if (!ret) {
             obj_cgroup_get(objcg);
@@ -3172,7 +3166,7 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
      * obj_cgroup_get() is used to get a permanent reference.
      */
     objcg = current_obj_cgroup();
-    if (!objcg)
+    if (obj_cgroup_is_root(objcg))
         return true;

     /*
@@ -3859,6 +3853,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
     if (!objcg)
         goto free_shrinker;

+    if (unlikely(mem_cgroup_is_root(memcg)))
+        root_obj_cgroup = objcg;
+
     objcg->memcg = memcg;
     rcu_assign_pointer(memcg->objcg, objcg);
     obj_cgroup_get(objcg);
@@ -5479,6 +5476,9 @@ void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size)
     if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
         return;

+    if (obj_cgroup_is_root(objcg))
+        return;
+
     VM_WARN_ON_ONCE(!(current->flags & PF_MEMALLOC));

     /* PF_MEMALLOC context, charging must succeed */
@@ -5506,6 +5506,9 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
     if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
         return;

+    if (obj_cgroup_is_root(objcg))
+        return;
+
     obj_cgroup_uncharge(objcg, size);

     rcu_read_lock();
diff --git a/mm/percpu.c b/mm/percpu.c
index 81462ce5866e1..78bdffe1fcb57 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1616,7 +1616,7 @@ static bool pcpu_memcg_pre_alloc_hook(size_t size, gfp_t gfp,
         return true;

     objcg = current_obj_cgroup();
-    if (!objcg)
+    if (obj_cgroup_is_root(objcg))
         return true;

     if (obj_cgroup_charge(objcg, gfp, pcpu_obj_full_size(size)))
-- 
2.20.1
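A short sketch (illustrative only, helper name invented) of what the call sites above now look like: current_obj_cgroup() always returns a valid objcg, and callers test for the root objcg instead of NULL:

/*
 * Illustrative sketch: skip accounting for the root objcg, charge
 * otherwise, mirroring the call-site changes in this patch.
 */
static bool try_charge_pages(gfp_t gfp, unsigned int nr_pages)
{
    struct obj_cgroup *objcg = current_obj_cgroup();

    if (obj_cgroup_is_root(objcg))
        return true;    /* nothing is accounted at the root level */

    return obj_cgroup_charge_pages(objcg, gfp, nr_pages) == 0;
}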
From: Qi Zheng
Subject: [PATCH v1 07/26] mm: memcontrol: prevent memory cgroup release in get_mem_cgroup_from_folio()
Date: Tue, 28 Oct 2025 21:58:20 +0800

From: Muchun Song

In the near future, a folio will no longer pin its corresponding memory
cgroup. To ensure safety, it will only be appropriate to hold the rcu
read lock or acquire a reference to the memory cgroup returned by
folio_memcg(), thereby preventing it from being released.

In the current patch, the rcu read lock is employed to safeguard against
the release of the memory cgroup in get_mem_cgroup_from_folio().

This serves as a preparatory measure for the reparenting of the LRU
pages.

Signed-off-by: Muchun Song
Signed-off-by: Qi Zheng
Reviewed-by: Harry Yoo
---
 mm/memcontrol.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d484b632c790f..1da3ad77054d3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -973,14 +973,19 @@ struct mem_cgroup *get_mem_cgroup_from_current(void)
  */
 struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
 {
-    struct mem_cgroup *memcg = folio_memcg(folio);
+    struct mem_cgroup *memcg;

     if (mem_cgroup_disabled())
         return NULL;

+    if (!folio_memcg_charged(folio))
+        return root_mem_cgroup;
+
     rcu_read_lock();
-    if (!memcg || WARN_ON_ONCE(!css_tryget(&memcg->css)))
-        memcg = root_mem_cgroup;
+retry:
+    memcg = folio_memcg(folio);
+    if (unlikely(!css_tryget(&memcg->css)))
+        goto retry;
     rcu_read_unlock();
     return memcg;
 }
-- 
2.20.1
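The caller-side pattern that the following patches convert to can be sketched as below (illustrative only; the function name is invented, the helpers are the ones used in this series):

/*
 * Illustrative sketch: once folios stop pinning their memcg, callers
 * that need a stable memcg take a reference and drop it when done.
 */
static void with_folio_memcg(struct folio *folio)
{
    struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);

    /* ... use memcg, e.g. as the active memcg for an allocation ... */

    mem_cgroup_put(memcg);
}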
From: Qi Zheng
Subject: [PATCH v1 08/26] buffer: prevent memory cgroup release in folio_alloc_buffers()
Date: Tue, 28 Oct 2025 21:58:21 +0800

From: Muchun Song

In the near future, a folio will no longer pin its corresponding memory
cgroup. To ensure safety, it will only be appropriate to hold the rcu
read lock or acquire a reference to the memory cgroup returned by
folio_memcg(), thereby preventing it from being released.

In the current patch, the function get_mem_cgroup_from_folio() is
employed to safeguard against the release of the memory cgroup.

This serves as a preparatory measure for the reparenting of the LRU
pages.

Signed-off-by: Muchun Song
Signed-off-by: Qi Zheng
Reviewed-by: Harry Yoo
---
 fs/buffer.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 6a8752f7bbedb..bc93d0b1d0c30 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -925,8 +925,7 @@ struct buffer_head *folio_alloc_buffers(struct folio *folio, unsigned long size,
     long offset;
     struct mem_cgroup *memcg, *old_memcg;

-    /* The folio lock pins the memcg */
-    memcg = folio_memcg(folio);
+    memcg = get_mem_cgroup_from_folio(folio);
     old_memcg = set_active_memcg(memcg);

     head = NULL;
@@ -947,6 +946,7 @@ struct buffer_head *folio_alloc_buffers(struct folio *folio, unsigned long size,
     }
 out:
     set_active_memcg(old_memcg);
+    mem_cgroup_put(memcg);
     return head;
     /*
      * In case anything failed, we just free everything we got.
-- 
2.20.1

From: Qi Zheng
Subject: [PATCH v1 09/26] writeback: prevent memory cgroup release in writeback module
Date: Tue, 28 Oct 2025 21:58:22 +0800

From: Muchun Song

In the near future, a folio will no longer pin its corresponding memory
cgroup. To ensure safety, it will only be appropriate to hold the rcu
read lock or acquire a reference to the memory cgroup returned by
folio_memcg(), thereby preventing it from being released.
In the current patch, the function get_mem_cgroup_css_from_folio() and the rcu read lock are employed to safeguard against the release of the memory cgroup. This serves as a preparatory measure for the reparenting of the LRU pages. Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- fs/fs-writeback.c | 22 +++++++++++----------- include/linux/memcontrol.h | 9 +++++++-- include/trace/events/writeback.h | 3 +++ mm/memcontrol.c | 14 ++++++++------ 4 files changed, 29 insertions(+), 19 deletions(-) diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 2b35e80037fee..afd81fb8100cb 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -269,15 +269,13 @@ void __inode_attach_wb(struct inode *inode, struct fo= lio *folio) if (inode_cgwb_enabled(inode)) { struct cgroup_subsys_state *memcg_css; =20 - if (folio) { - memcg_css =3D mem_cgroup_css_from_folio(folio); - wb =3D wb_get_create(bdi, memcg_css, GFP_ATOMIC); - } else { - /* must pin memcg_css, see wb_get_create() */ + /* must pin memcg_css, see wb_get_create() */ + if (folio) + memcg_css =3D get_mem_cgroup_css_from_folio(folio); + else memcg_css =3D task_get_css(current, memory_cgrp_id); - wb =3D wb_get_create(bdi, memcg_css, GFP_ATOMIC); - css_put(memcg_css); - } + wb =3D wb_get_create(bdi, memcg_css, GFP_ATOMIC); + css_put(memcg_css); } =20 if (!wb) @@ -968,16 +966,16 @@ void wbc_account_cgroup_owner(struct writeback_contro= l *wbc, struct folio *folio if (!wbc->wb || wbc->no_cgroup_owner) return; =20 - css =3D mem_cgroup_css_from_folio(folio); + css =3D get_mem_cgroup_css_from_folio(folio); /* dead cgroups shouldn't contribute to inode ownership arbitration */ if (!(css->flags & CSS_ONLINE)) - return; + goto out; =20 id =3D css->id; =20 if (id =3D=3D wbc->wb_id) { wbc->wb_bytes +=3D bytes; - return; + goto out; } =20 if (id =3D=3D wbc->wb_lcand_id) @@ -990,6 +988,8 @@ void wbc_account_cgroup_owner(struct writeback_control = *wbc, struct folio *folio wbc->wb_tcand_bytes +=3D bytes; else wbc->wb_tcand_bytes -=3D min(bytes, wbc->wb_tcand_bytes); +out: + css_put(css); } EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner); =20 diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 9fdbd4970021d..174e52d8e7039 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -895,7 +895,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm, return match; } =20 -struct cgroup_subsys_state *mem_cgroup_css_from_folio(struct folio *folio); +struct cgroup_subsys_state *get_mem_cgroup_css_from_folio(struct folio *fo= lio); ino_t page_cgroup_ino(struct page *page); =20 static inline bool mem_cgroup_online(struct mem_cgroup *memcg) @@ -1577,9 +1577,14 @@ static inline void mem_cgroup_track_foreign_dirty(st= ruct folio *folio, if (mem_cgroup_disabled()) return; =20 + if (!folio_memcg_charged(folio)) + return; + + rcu_read_lock(); memcg =3D folio_memcg(folio); - if (unlikely(memcg && &memcg->css !=3D wb->memcg_css)) + if (unlikely(&memcg->css !=3D wb->memcg_css)) mem_cgroup_track_foreign_dirty_slowpath(folio, wb); + rcu_read_unlock(); } =20 void mem_cgroup_flush_foreign(struct bdi_writeback *wb); diff --git a/include/trace/events/writeback.h b/include/trace/events/writeb= ack.h index c08aff044e807..3680828727f23 100644 --- a/include/trace/events/writeback.h +++ b/include/trace/events/writeback.h @@ -295,7 +295,10 @@ TRACE_EVENT(track_foreign_dirty, __entry->ino =3D inode ? 
inode->i_ino : 0; __entry->memcg_id =3D wb->memcg_css->id; __entry->cgroup_ino =3D __trace_wb_assign_cgroup(wb); + + rcu_read_lock(); __entry->page_cgroup_ino =3D cgroup_ino(folio_memcg(folio)->css.cgroup); + rcu_read_unlock(); ), =20 TP_printk("bdi %s[%llu]: ino=3D%lu memcg_id=3D%u cgroup_ino=3D%lu page_cg= roup_ino=3D%lu", diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 1da3ad77054d3..aa8945c4ee383 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -241,7 +241,7 @@ DEFINE_STATIC_KEY_FALSE(memcg_bpf_enabled_key); EXPORT_SYMBOL(memcg_bpf_enabled_key); =20 /** - * mem_cgroup_css_from_folio - css of the memcg associated with a folio + * get_mem_cgroup_css_from_folio - acquire a css of the memcg associated w= ith a folio * @folio: folio of interest * * If memcg is bound to the default hierarchy, css of the memcg associated @@ -251,14 +251,16 @@ EXPORT_SYMBOL(memcg_bpf_enabled_key); * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup * is returned. */ -struct cgroup_subsys_state *mem_cgroup_css_from_folio(struct folio *folio) +struct cgroup_subsys_state *get_mem_cgroup_css_from_folio(struct folio *fo= lio) { - struct mem_cgroup *memcg =3D folio_memcg(folio); + struct mem_cgroup *memcg; =20 - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) - memcg =3D root_mem_cgroup; + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) + return &root_mem_cgroup->css; =20 - return &memcg->css; + memcg =3D get_mem_cgroup_from_folio(folio); + + return memcg ? &memcg->css : &root_mem_cgroup->css; } =20 /** --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-182.mta0.migadu.com (out-182.mta0.migadu.com [91.218.175.182]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 05F9133970D for ; Tue, 28 Oct 2025 14:03:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660200; cv=none; b=d2zTLW+BZO3b6icsthKE99jI4QQVbAUxAHobmBlDB9CepAXVEk/ziPFDAE1dJXCgzTj026pvRedC7KpGNYJ+QdeXg+utsFfG0VnOP4rzxQp1GgfEdMZIUNrF84RDl1ldmsNC8y8q9tO8+yY/pSG5gBrdBG5Soi07iTAHRhWqfuE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660200; c=relaxed/simple; bh=NacU3FVS+RRrB8wkdEVX01eh4VOJCi4K0F2ydqNV8rU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PyCSkW8hmRbSVcfjDB8wBmDDSqC7hRhEFYtV8JBDxJGHhoF6NN+oRSaCxq0LMcViF5pGfP7sT7/GbpaAV8cjcsYfHiEh/fRfIZQPjoC1HsaPNKrECSmeYA9moptIzKgGoQN//UXpPCZKYlvcpyQezNKEKM5LAzrcPF9rMwIwMkk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=Keb41diP; arc=none smtp.client-ip=91.218.175.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="Keb41diP" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660197; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=AQ5H2mz4dVXx5yLN5zTg1jKj+s12XH2F8FoyP4DTscE=; b=Keb41diPvx6YSmv3KjV5tfMFVwgCTvM3vVnU5B8tPPTriLp0WGiAeHuYMTVXp2SxD5lxBu 33n4iaxgrmdpRj6MsPU9h3TczXEoHjcIiBgf6j5GU4SPKcdnBad+SPu7i363kDDpf4bSkC 3R2eVAJrKlZIF4uljuyHmUYjf+6rNsw= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 10/26] mm: memcontrol: prevent memory cgroup release in count_memcg_folio_events() Date: Tue, 28 Oct 2025 21:58:23 +0800 Message-ID: In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. To ensure safety, it will only be appropriate to hold the rcu read lock or acquire a reference to the memory cgroup returned by folio_memcg(), thereby preventing it from being released. In the current patch, the rcu read lock is employed to safeguard against the release of the memory cgroup in count_memcg_folio_events(). This serves as a preparatory measure for the reparenting of the LRU pages. 
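The resulting helper, reconstructed here from the diff below for easier reading, shows the idiom used throughout this series: bail out early for uncharged folios, then keep every dereference of folio_memcg() inside a single RCU read-side section.

```
static inline void count_memcg_folio_events(struct folio *folio,
					    enum vm_event_item idx,
					    unsigned long nr)
{
	struct mem_cgroup *memcg;

	if (!folio_memcg_charged(folio))	/* uncharged: nothing to count */
		return;

	rcu_read_lock();
	memcg = folio_memcg(folio);		/* only stable under RCU */
	count_memcg_events(memcg, idx, nr);
	rcu_read_unlock();
}
```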
Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- include/linux/memcontrol.h | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 174e52d8e7039..ca8d4e09cbe7d 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -984,10 +984,15 @@ void count_memcg_events(struct mem_cgroup *memcg, enu= m vm_event_item idx, static inline void count_memcg_folio_events(struct folio *folio, enum vm_event_item idx, unsigned long nr) { - struct mem_cgroup *memcg =3D folio_memcg(folio); + struct mem_cgroup *memcg; =20 - if (memcg) - count_memcg_events(memcg, idx, nr); + if (!folio_memcg_charged(folio)) + return; + + rcu_read_lock(); + memcg =3D folio_memcg(folio); + count_memcg_events(memcg, idx, nr); + rcu_read_unlock(); } =20 static inline void count_memcg_events_mm(struct mm_struct *mm, --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-177.mta0.migadu.com (out-177.mta0.migadu.com [91.218.175.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 26D48339B2D for ; Tue, 28 Oct 2025 14:03:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.177 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660206; cv=none; b=BAJ3ckq4Juh5/eLt1/xpgICQSNUoGSkx9kThMiQNX4CHWrjVRHf8E4wM3VidPkUhmepdOHFb/5H80gnmhZMNPtZCSlWK91QIxtaXG8iU34+isAVPIsyeMcPvShn5DFZDSd8TyLssDSDl/eZh0AN0ELASQ36k/qtf14c2Vtg16B8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660206; c=relaxed/simple; bh=5l4p/fTqHCysr0qDVUg5+YIYU3cjNanZoTEF1SLsXxU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=R2MVjUTcPTUFoi06i7Hh6ivwpdq7o95AkYWiIdr0q7yaz+TMx3swgjJTr2q3FXhv6rlmEBRt4rgbbt1619GDxalmByI9u53gFXIKGGMtc1Z3NlTvMQ0/S/zffGSKQ+nr6G2YXtMbV2sNSbYHvi0FH8gef7KTRqKNT06q+ALOKYM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=H7iclBlV; arc=none smtp.client-ip=91.218.175.177 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="H7iclBlV" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660203; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=kl2sza6IcUK9wJN5eVQSzKCSp8AzW1LJjYyxxBCgR78=; b=H7iclBlVKKGyl5kABnOqTcFgYu4LyN2DakLdUh4eUBdDZZKIf7nXXfSrmelvGDHdZ/RE7R JiOTF3qla1+kXPKP7kNhzLw7lqeqAYyiS8DvVBp+Kkx3JJEd/2hF6rjGF95YeVK/Cv0NB8 s6zT9kgDPlM9lfU1CZd/lJ9D8N0y+eA= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 11/26] mm: page_io: prevent memory cgroup release in page_io module Date: Tue, 28 Oct 2025 21:58:24 +0800 Message-ID: <3076467321061767e16ca7abbaa33998bfee97cc.1761658310.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. To ensure safety, it will only be appropriate to hold the rcu read lock or acquire a reference to the memory cgroup returned by folio_memcg(), thereby preventing it from being released. In the current patch, the rcu read lock is employed to safeguard against the release of the memory cgroup in swap_writepage() and bio_associate_blkg_from_page(). This serves as a preparatory measure for the reparenting of the LRU pages. 
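For reference, bio_associate_blkg_from_page() after the change reads roughly as below (reconstructed from the diff that follows). Note that the css derived from the memcg is consumed before rcu_read_unlock(), so nothing obtained via folio_memcg() escapes the RCU section.

```
static void bio_associate_blkg_from_page(struct bio *bio, struct folio *folio)
{
	struct cgroup_subsys_state *css;
	struct mem_cgroup *memcg;

	if (!folio_memcg_charged(folio))
		return;

	rcu_read_lock();
	memcg = folio_memcg(folio);
	/* css is derived from the memcg, so use it inside the same section */
	css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
	bio_associate_blkg_from_css(bio, css);
	rcu_read_unlock();
}
```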
Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/page_io.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/mm/page_io.c b/mm/page_io.c index 3c342db77ce38..ec7720762042c 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -276,10 +276,14 @@ int swap_writeout(struct folio *folio, struct swap_io= cb **swap_plug) count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT); goto out_unlock; } + + rcu_read_lock(); if (!mem_cgroup_zswap_writeback_enabled(folio_memcg(folio))) { + rcu_read_unlock(); folio_mark_dirty(folio); return AOP_WRITEPAGE_ACTIVATE; } + rcu_read_unlock(); =20 __swap_writepage(folio, swap_plug); return 0; @@ -307,11 +311,11 @@ static void bio_associate_blkg_from_page(struct bio *= bio, struct folio *folio) struct cgroup_subsys_state *css; struct mem_cgroup *memcg; =20 - memcg =3D folio_memcg(folio); - if (!memcg) + if (!folio_memcg_charged(folio)) return; =20 rcu_read_lock(); + memcg =3D folio_memcg(folio); css =3D cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys); bio_associate_blkg_from_css(bio, css); rcu_read_unlock(); --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-182.mta0.migadu.com (out-182.mta0.migadu.com [91.218.175.182]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F30E433A019 for ; Tue, 28 Oct 2025 14:03:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660216; cv=none; b=LNuNC1p3Hfls7yS5tHbI4jcHU5MK7FdoJIJcU3LvzmeuAJh/HCUMUO8sQfjrKn8V3wTXCS0smjfQk2BdAr9GJo1rg6XR7kv18PFTuGCtH520XbLz/swxPRZHPedJWglKyS4wfYSq5IMVHWyQDveGdkJbOiAa8PktbUJcMW/rbvk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660216; c=relaxed/simple; bh=Yc9DHXI7rCsRGLUF4OcSKmvJ70rdkg6hmVUqegpf2ws=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ewdnP6NXSSLokNVaY0RNzLCYdYAvHBXp4bqbXNosOfmvibkYRx+2PbUYo4dHz0qmw4atg8vzIY5q+eYYROJ6lH5VtT25mfCvqmvebWbhj+52kW9tkHbc2MNtPZjQ9dVIgjBUxIyFgD4ELzHdnr+AAdFBk6Ag9sqMxicyi6YtOdE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=MbNGCPjX; arc=none smtp.client-ip=91.218.175.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="MbNGCPjX" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660211; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=4oMNhuRAi27cTHrhaZltmQus/gsvCkLckLS+iJA38nw=; b=MbNGCPjX2eMT1kl0IeUIiPgzGhk9+LXl8jF+2FbuFvx9cFOCbfxtLGhzIX0YHs43F8rw0q MOMy/qbzVn5dCpvh5smAIvBZR6AFMi1nbijvzmHsPNRZR6Crox1yE20Tq1CRzKFdbfjJ5E /C/lzYoQFfNgDHCjcMQg1QkXB/WzS4c= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 12/26] mm: migrate: prevent memory cgroup release in folio_migrate_mapping() Date: Tue, 28 Oct 2025 21:58:25 +0800 Message-ID: <2ff7f2f1ac1d3884c549d9b5134322df21703018.1761658310.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. To ensure safety, it will only be appropriate to hold the rcu read lock or acquire a reference to the memory cgroup returned by folio_memcg(), thereby preventing it from being released. In the current patch, the rcu read lock is employed to safeguard against the release of the memory cgroup in folio_migrate_mapping(). This serves as a preparatory measure for the reparenting of the LRU pages. 
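The shape of the change is easier to see outside the diff context. Below is a minimal sketch of the accounting block in __folio_migrate_mapping(), using a hypothetical wrapper name purely for illustration and eliding the individual counter updates.

```
/* Hypothetical wrapper for illustration only; in the tree this code sits
 * inline in __folio_migrate_mapping(). */
static void migrate_account_sketch(struct folio *folio, long nr,
				   struct zone *oldzone, struct zone *newzone)
{
	struct lruvec *old_lruvec, *new_lruvec;
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = folio_memcg(folio);
	old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
	new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);

	/*
	 * ... __mod_lruvec_state()/__mod_zone_page_state() calls that move
	 * the file/dirty counts from old_lruvec/oldzone to the new ones ...
	 */
	rcu_read_unlock();
}
```

Both lruvecs are derived from the same folio_memcg() lookup, so the whole transfer has to stay inside one RCU section.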
Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/migrate.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mm/migrate.c b/mm/migrate.c index ceee354ef2152..cdab1b652c530 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -664,6 +664,7 @@ static int __folio_migrate_mapping(struct address_space= *mapping, struct lruvec *old_lruvec, *new_lruvec; struct mem_cgroup *memcg; =20 + rcu_read_lock(); memcg =3D folio_memcg(folio); old_lruvec =3D mem_cgroup_lruvec(memcg, oldzone->zone_pgdat); new_lruvec =3D mem_cgroup_lruvec(memcg, newzone->zone_pgdat); @@ -691,6 +692,7 @@ static int __folio_migrate_mapping(struct address_space= *mapping, __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr); __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr); } + rcu_read_unlock(); } local_irq_enable(); =20 --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-174.mta0.migadu.com (out-174.mta0.migadu.com [91.218.175.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 85A3633B973 for ; Tue, 28 Oct 2025 14:03:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660224; cv=none; b=fB15HrqTdlW5BoTaViYCHVSQnCCocOP1rexIodshIAuTakSu91yWCYXfm9uFyw6akSLFhNeXFL76tsaf3mtVipith5EdKm1Uzii9DYZh93w5JpscAWfbrVjc8P1dcMyx1TGGsZjc9U/KrU5SiwCOuMjGH792xvazf95OoSNXEoY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660224; c=relaxed/simple; bh=0WOzV3O7yun4QmZNfkHeGCtNjrtReHHgOdRYuRd4qyA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=TSkH9L5IoObIVKaJFYWtK4ctP8QViyTtuSS9+Cb76kBNopW4Mm55PT7oaOpj/b/cRXtMGip9Gph9KSmq3M/p13cpQnivzEVxmVKu2+8t5kDjZysbtpqA6uB57nTsQcHbrMr4jpxkgbxfKNTp3+KBMj3iw3rgllEq+3wcvQ2JD4w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=fhoHKTrq; arc=none smtp.client-ip=91.218.175.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="fhoHKTrq" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660219; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=wkLnu/O9q4xdyqjaFOUvki0UTYleR0McbTd1ETEhnKE=; b=fhoHKTrqeVk6EZTuFAS26dH7KFQqARavNeR76iE73qBaEIWQbclTIzwrgj8JSL8qtfhWPz Prin+M4U0nQHEyZAfPOwRr6LzXUBd1RKMclEfqJMc+oKdYk9NayqXqvzChMmpi6ZmtuPOL IHh2SSrRo7IPh5r1+4Fh+biiYMfo3io= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 13/26] mm: mglru: prevent memory cgroup release in mglru Date: Tue, 28 Oct 2025 21:58:26 +0800 Message-ID: In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. To ensure safety, it will only be appropriate to hold the rcu read lock or acquire a reference to the memory cgroup returned by folio_memcg(), thereby preventing it from being released. In the current patch, the rcu read lock is employed to safeguard against the release of the memory cgroup in mglru. This serves as a preparatory measure for the reparenting of the LRU pages. 
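The get_pfn_folio() hunk below is the clearest illustration: even a plain pointer comparison against folio_memcg() has to happen under RCU once folios no longer pin their memcg. A sketch of that check, wrapped in a hypothetical helper name purely for illustration:

```
/* Hypothetical helper; in the tree this is the tail of get_pfn_folio(). */
static struct folio *folio_if_memcg_matches(struct folio *folio,
					    struct mem_cgroup *memcg)
{
	rcu_read_lock();
	if (folio_memcg(folio) != memcg)	/* the comparison needs RCU too */
		folio = NULL;
	rcu_read_unlock();

	return folio;
}
```

In lru_gen_look_around(), the same idea stretches further: the memcg, lruvec, max_seq, gen and mm_state lookups all move into one RCU section that spans the PTE scan.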
Signed-off-by: Muchun Song Signed-off-by: Qi Zheng --- mm/vmscan.c | 23 +++++++++++++++++------ 1 file changed, 17 insertions(+), 6 deletions(-) diff --git a/mm/vmscan.c b/mm/vmscan.c index 660cd40cfddd4..676e6270e5b45 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3445,8 +3445,10 @@ static struct folio *get_pfn_folio(unsigned long pfn= , struct mem_cgroup *memcg, if (folio_nid(folio) !=3D pgdat->node_id) return NULL; =20 + rcu_read_lock(); if (folio_memcg(folio) !=3D memcg) - return NULL; + folio =3D NULL; + rcu_read_unlock(); =20 return folio; } @@ -4203,12 +4205,12 @@ bool lru_gen_look_around(struct page_vma_mapped_wal= k *pvmw) unsigned long addr =3D pvmw->address; struct vm_area_struct *vma =3D pvmw->vma; struct folio *folio =3D pfn_folio(pvmw->pfn); - struct mem_cgroup *memcg =3D folio_memcg(folio); + struct mem_cgroup *memcg; struct pglist_data *pgdat =3D folio_pgdat(folio); - struct lruvec *lruvec =3D mem_cgroup_lruvec(memcg, pgdat); - struct lru_gen_mm_state *mm_state =3D get_mm_state(lruvec); - DEFINE_MAX_SEQ(lruvec); - int gen =3D lru_gen_from_seq(max_seq); + struct lruvec *lruvec; + struct lru_gen_mm_state *mm_state; + unsigned long max_seq; + int gen; =20 lockdep_assert_held(pvmw->ptl); VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio); @@ -4243,6 +4245,13 @@ bool lru_gen_look_around(struct page_vma_mapped_walk= *pvmw) } } =20 + rcu_read_lock(); + memcg =3D folio_memcg(folio); + lruvec =3D mem_cgroup_lruvec(memcg, pgdat); + max_seq =3D READ_ONCE((lruvec)->lrugen.max_seq); + gen =3D lru_gen_from_seq(max_seq); + mm_state =3D get_mm_state(lruvec); + arch_enter_lazy_mmu_mode(); =20 pte -=3D (addr - start) / PAGE_SIZE; @@ -4279,6 +4288,8 @@ bool lru_gen_look_around(struct page_vma_mapped_walk = *pvmw) =20 arch_leave_lazy_mmu_mode(); =20 + rcu_read_unlock(); + /* feedback from rmap walkers to page table walkers */ if (mm_state && suitable_to_scan(i, young)) update_bloom_filter(mm_state, max_seq, pvmw->pmd); --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-171.mta0.migadu.com (out-171.mta0.migadu.com [91.218.175.171]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D502533C52C for ; Tue, 28 Oct 2025 14:03:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.171 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660231; cv=none; b=Qdc83bS7jqJvEQ+sadv11Il1h5MyT1NvA1HCYWaCOAF9bsrrnYVw3zFK5N5vVQVOQZdRs9lb/i9OGciMrPr/MsGiuw19dqOmISbQVHNM/35GdzYdrF1ACo7W6TqiD4Or3eySZM4MUVOk17V3h0exOVIF076Fx81Wt3YQ75sZmj4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660231; c=relaxed/simple; bh=g6mBa5snmAUsToF8dqEegQmzk3lpmdjATko7jyh+zNo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LAgKFHugkv2HTAT2JcMCJSwfMjx+OyAr8R98QccDOSdyUNuUcmgSpxRT2sSFDubFFy7I/DF5QvgXEg1CVkrFJham/rmcX9r6ANujdL4x5bFgaiTLhNV7VJMhDxeX2JlKl0zp5DJPT7Wks7fcT1Az/MIV096JrGTOZ1b4a6emxi0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=MfRN3t19; arc=none smtp.client-ip=91.218.175.171 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: 
smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="MfRN3t19" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660225; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=F4kFK5BJ7jqgsuSDiSoEfAf9VHgtSpfSrzUfS3UnF3Y=; b=MfRN3t19XYpdxb4wcQVFPd3/mu0+cZ9GQ7ccvIU9cBaG4F2YOqrqyvYKeVRMsZKhWRNqj7 eZDaToOPOy6Fi94Ky9Hi3X1Abh0FGtpGNvKor+IdkbwbYytdVygxKfFANTcCJ8v46nZnuG nroMiEABUxj0K9UP7lCdMidgPxFPqE0= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 14/26] mm: memcontrol: prevent memory cgroup release in mem_cgroup_swap_full() Date: Tue, 28 Oct 2025 21:58:27 +0800 Message-ID: <1593e9efc2de666e9f7e7659d5c61d2ccdd17a8d.1761658310.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. To ensure safety, it will only be appropriate to hold the rcu read lock or acquire a reference to the memory cgroup returned by folio_memcg(), thereby preventing it from being released. In the current patch, the rcu read lock is employed to safeguard against the release of the memory cgroup in mem_cgroup_swap_full(). This serves as a preparatory measure for the reparenting of the LRU pages. 
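Reconstructed from the diff below, the function ends up looking roughly like this (the early exits at the top of the function are elided):

```
bool mem_cgroup_swap_full(struct folio *folio)
{
	struct mem_cgroup *memcg;

	/* ... early returns for the disabled / memsw-accounting cases ... */

	if (!folio_memcg_charged(folio))
		return false;

	rcu_read_lock();
	memcg = folio_memcg(folio);
	/* The walk towards the root stays inside a single RCU section. */
	for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
		unsigned long usage = page_counter_read(&memcg->swap);

		if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
		    usage * 2 >= READ_ONCE(memcg->swap.max)) {
			rcu_read_unlock();	/* early return must unlock */
			return true;
		}
	}
	rcu_read_unlock();

	return false;
}
```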
Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/memcontrol.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index aa8945c4ee383..4b3c7d4f346b5 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5275,17 +5275,21 @@ bool mem_cgroup_swap_full(struct folio *folio) if (do_memsw_account()) return false; =20 - memcg =3D folio_memcg(folio); - if (!memcg) + if (!folio_memcg_charged(folio)) return false; =20 + rcu_read_lock(); + memcg =3D folio_memcg(folio); for (; !mem_cgroup_is_root(memcg); memcg =3D parent_mem_cgroup(memcg)) { unsigned long usage =3D page_counter_read(&memcg->swap); =20 if (usage * 2 >=3D READ_ONCE(memcg->swap.high) || - usage * 2 >=3D READ_ONCE(memcg->swap.max)) + usage * 2 >=3D READ_ONCE(memcg->swap.max)) { + rcu_read_unlock(); return true; + } } + rcu_read_unlock(); =20 return false; } --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-179.mta0.migadu.com (out-179.mta0.migadu.com [91.218.175.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0805E33B973 for ; Tue, 28 Oct 2025 14:03:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.179 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660237; cv=none; b=eJfKVQYkYUiom/OD+30tS/AGj4M4oQyTJdHGU7LJ8mKa8tQ8zmqB0jIjk+lthAWblC4tn9o3RMWE9vQ7Urg89/SvuCk4ZgU9hw2YFquMoyPcTtLKxrsFhLv1ZOvpedX6HRXNLBjhCKTPFW/Oz1UWzj0xlOVwlCqBGjHz82l4SOs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660237; c=relaxed/simple; bh=B99n2BJg7ZZJCSR2ssO24WoXB6kUezL1KP4/WR6sRm0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=k8FBo/X0T6WoXxcGU+jsXS+VnEk3J8sIopw+udL/y5ZaGLN1bpPinlnHTj7H4g/2/EBjKETP305VaLSswNsxqbBwgW6pacfwIvNgojMrfuRgRq7JNnBatE3EjaMBz9BKqdiCx7/gIO+v2myE/rGigCQkopPQ2cPKlAP63UV5fpc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=Mcp43v4U; arc=none smtp.client-ip=91.218.175.179 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="Mcp43v4U" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660232; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=N6cy0qJU4vZTky6WsO6fPgmyrH0v2+1c03Oi7wqa5+0=; b=Mcp43v4Uw1DN1te6MLoGfIu4JBZPPNa/YdsIikEd59YRXiDP67BQ+YkTF+91+BLyLuIPFr EZQ+DNJ0Tz9/WrVUMTS7op86uCagBE5emzwbk0TROnKZweY3Opo2prHAUcrb+eDsox0mLI BN26/J6xOdjx6l2HOhJ2q5avXrQhhH8= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 15/26] mm: workingset: prevent memory cgroup release in lru_gen_eviction() Date: Tue, 28 Oct 2025 21:58:28 +0800 Message-ID: <847c35eb649fde525eecde97fcee3d01708b7b3a.1761658310.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. To ensure safety, it will only be appropriate to hold the rcu read lock or acquire a reference to the memory cgroup returned by folio_memcg(), thereby preventing it from being released. In the current patch, the rcu read lock is employed to safeguard against the release of the memory cgroup in lru_gen_eviction(). This serves as a preparatory measure for the reparenting of the LRU pages. 
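The interesting detail, visible in the diff below, is that only plain values leave the RCU section: the memcg id is sampled before rcu_read_unlock() and the shadow entry is packed afterwards. A trimmed sketch of the tail of lru_gen_eviction(), with declarations and the statistics update elided:

```
	rcu_read_lock();
	memcg = folio_memcg(folio);
	lruvec = mem_cgroup_lruvec(memcg, pgdat);
	/* ... account the eviction in lruvec->lrugen ... */
	memcg_id = mem_cgroup_id(memcg);	/* capture while still protected */
	rcu_read_unlock();

	return pack_shadow(memcg_id, pgdat, token, workingset);
```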
Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/workingset.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/mm/workingset.c b/mm/workingset.c index 8cad8ee6dec6a..c4d21c15bad51 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -241,11 +241,14 @@ static void *lru_gen_eviction(struct folio *folio) int refs =3D folio_lru_refs(folio); bool workingset =3D folio_test_workingset(folio); int tier =3D lru_tier_from_refs(refs, workingset); - struct mem_cgroup *memcg =3D folio_memcg(folio); + struct mem_cgroup *memcg; struct pglist_data *pgdat =3D folio_pgdat(folio); + unsigned short memcg_id; =20 BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH > BITS_PER_LONG - EVICTION_SH= IFT); =20 + rcu_read_lock(); + memcg =3D folio_memcg(folio); lruvec =3D mem_cgroup_lruvec(memcg, pgdat); lrugen =3D &lruvec->lrugen; min_seq =3D READ_ONCE(lrugen->min_seq[type]); @@ -253,8 +256,10 @@ static void *lru_gen_eviction(struct folio *folio) =20 hist =3D lru_hist_from_seq(min_seq); atomic_long_add(delta, &lrugen->evicted[hist][type][tier]); + memcg_id =3D mem_cgroup_id(memcg); + rcu_read_unlock(); =20 - return pack_shadow(mem_cgroup_id(memcg), pgdat, token, workingset); + return pack_shadow(memcg_id, pgdat, token, workingset); } =20 /* --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-186.mta0.migadu.com (out-186.mta0.migadu.com [91.218.175.186]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C438033CEB1 for ; Tue, 28 Oct 2025 14:04:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.186 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660243; cv=none; b=m2ap1FeImhx1EvjPi8cuJaL6+g1tuL6wospWkEZ9NcADvQe0QK+tz2pgNNlleDslU7hqXb2MiroLPPnE1AAVBD8c0Xsl9QgSklGOvU/a0e4bLRKdx2eYuF7573TC0a0OjetH/ev1cQ+7ww2tpofiv4CxFD7ea0fzp+cLEGZe0Jk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660243; c=relaxed/simple; bh=vKMiI/fCaDbQtvED7yaLQh4IT6nXVGp8S4zEKJEZe+c=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=RkygsFy79Ea4qtGlwT7GupxbfQzYPiYRrIselrYFQ6Wpi4O7nUsH96ZyB6WzbvUpBDT5j2cOvRAOC7WuePFHZT3les2QHHecie3+no8kaWpvop7M3KZNJjJiMPnJ3eLt09EJ+cmGtNT0y3W2fLrn7+o+8l5jZwqJRinAF2y6NLM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=Qdyjhz16; arc=none smtp.client-ip=91.218.175.186 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="Qdyjhz16" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660238; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=H+7NrfaZDIPUuVApNFCo2ZEP+9yr6hXQlGbR4tT0+ik=; b=Qdyjhz16yrrQUs+gx1nzi89faKxHADcP4j9vTU0L4BargQ0ss1oUx66KBzo6X3MeKKARg+ i7TdrDOyRKnHNz6X7Ryn/w5Lf8VqKa61i19yefLGT69//fwtHrH24n4kSvJiW1mDdOEeSP 2QfnOZ1TikLPFn1LqXa/r/dZnOq6tdw= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Qi Zheng Subject: [PATCH v1 16/26] mm: thp: prevent memory cgroup release in folio_split_queue_lock{_irqsave}() Date: Tue, 28 Oct 2025 21:58:29 +0800 Message-ID: In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Qi Zheng In the near future, a folio will no longer pin its corresponding memory cgroup. To ensure safety, it will only be appropriate to hold the rcu read lock or acquire a reference to the memory cgroup returned by folio_memcg(), thereby preventing it from being released. In the current patch, the rcu read lock is employed to safeguard against the release of the memory cgroup in folio_split_queue_lock{_irqsave}(). 
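After the change, the wrapper reads as follows (reconstructed from the diff below; the _irqsave variant is identical apart from the flags argument):

```
static struct deferred_split *folio_split_queue_lock(struct folio *folio)
{
	struct deferred_split *queue;

	rcu_read_lock();
	/* Resolve folio_memcg() and take its split-queue lock under RCU. */
	queue = split_queue_lock(folio_nid(folio), folio_memcg(folio));
	rcu_read_unlock();

	return queue;
}
```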
Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/huge_memory.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 9d3594df6eedf..067259a9e0809 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1153,13 +1153,25 @@ split_queue_lock_irqsave(int nid, struct mem_cgroup= *memcg, unsigned long *flags =20 static struct deferred_split *folio_split_queue_lock(struct folio *folio) { - return split_queue_lock(folio_nid(folio), folio_memcg(folio)); + struct deferred_split *queue; + + rcu_read_lock(); + queue =3D split_queue_lock(folio_nid(folio), folio_memcg(folio)); + rcu_read_unlock(); + + return queue; } =20 static struct deferred_split * folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags) { - return split_queue_lock_irqsave(folio_nid(folio), folio_memcg(folio), fla= gs); + struct deferred_split *queue; + + rcu_read_lock(); + queue =3D split_queue_lock_irqsave(folio_nid(folio), folio_memcg(folio), = flags); + rcu_read_unlock(); + + return queue; } =20 static inline void split_queue_unlock(struct deferred_split *queue) --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-179.mta0.migadu.com (out-179.mta0.migadu.com [91.218.175.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 47863335BAF for ; Tue, 28 Oct 2025 14:04:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.179 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660249; cv=none; b=E3MpCP8Fl2+ETB+s4Dcf2UsOg6abt92bgAPB6Pc7CEgguhgCvtyeuRpbfGmgzygxfXeGTz/C/wi+ci024w6q9wTJiuU2WaEfTClubaTGWMv5Ovbq+gPysqBgH6zUjnbThS1UJ0wf1kIEWPzk6vJv2KWYkmf3J7xTd0BDNh31YBo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660249; c=relaxed/simple; bh=OSy/9GGCdN7m8F02iyVvXHlEK7sNx7mHMO14zaERKGQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=j+zWjkzxsoNL+gWGYsRTul9ZRZqgfQOTgns7FT7XHL5fR/dClUPL49YgeXwb+ob8dBGIi+DrjA8o8U50qYVhYogTKU3n16Tan88NemQAY2RWOLKe1utjv/kQnzc6/cK9xFX+7fpbHXBq/Vjy1SDSRuyk33zI9fTbea7jtfBvWrY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=pWzdGFK9; arc=none smtp.client-ip=91.218.175.179 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="pWzdGFK9" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660244; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=d2EOOSlZh8oRwtRfQtnzgpJLzbPe1acyr5pcLzwWU68=; b=pWzdGFK9jp5/gKWOoMPYfN/1zgwDvyk5R6UyUH2ocXAJcYLiJkqMhI+d2hIraioYk881Hw 7rzlQkLG3uO06lrW3XshXYI5VB2JcP+sh5murHeNw1f62HR7cktjcNoiJHB7FiwL0hTSM1 7bK+5TfatVR2jrRm9JSuNcGrGdlqOCQ= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 17/26] mm: workingset: prevent lruvec release in workingset_refault() Date: Tue, 28 Oct 2025 21:58:30 +0800 Message-ID: <7d58f7f924961bc9ce386b3101448a49d7aa1daa.1761658310.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. So an lruvec returned by folio_lruvec() could be released without the rcu read lock or a reference to its memory cgroup. In the current patch, the rcu read lock is employed to safeguard against the release of the lruvec in workingset_refault(). This serves as a preparatory measure for the reparenting of the LRU pages. Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/workingset.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/mm/workingset.c b/mm/workingset.c index c4d21c15bad51..a69cc7bf9246d 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -560,11 +560,12 @@ void workingset_refault(struct folio *folio, void *sh= adow) * locked to guarantee folio_memcg() stability throughout. 
*/ nr =3D folio_nr_pages(folio); + rcu_read_lock(); lruvec =3D folio_lruvec(folio); mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr); =20 if (!workingset_test_recent(shadow, file, &workingset, true)) - return; + goto out; =20 folio_set_active(folio); workingset_age_nonresident(lruvec, nr); @@ -580,6 +581,8 @@ void workingset_refault(struct folio *folio, void *shad= ow) lru_note_cost_refault(folio); mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr); } +out: + rcu_read_unlock(); } =20 /** --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-173.mta0.migadu.com (out-173.mta0.migadu.com [91.218.175.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5705433DEE8 for ; Tue, 28 Oct 2025 14:04:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660258; cv=none; b=MedBnGf/8FFKxhiEMKMUCVFrEqfqXTCQG86uKuUiQqRPbHB8PzBcukZV9gdaTm4SMnKwuXCqZhnpQV/MbeVrCrIna02q/ikLyzEmg04OJDeINM/O3WlyzfL26iK6fMVj4Y/N7RPAgSVO1bOqahf604YMygw3x/jUqkjoKOCxcE0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660258; c=relaxed/simple; bh=P0q6MjSkVjsNV2grrn3T44FIUMkX9PSRxmfEJUTIUEA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=hfPTUgEmtEh/BiiRQyTZGHW9uMdqvZKhuNEHalVJYNZv+Hou2o+DxQjsS3XwF07sPt8fTgoVb4UfIhFW48IfXpznHPWLmZ9bKBBFhNsx+HQqWBcvMpv+JR8RuVQiEgPkqzbdXoZjspIcrX+aYrmvgjyW0e4un6VXTmBww7NVA2Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=mQRNNpfr; arc=none smtp.client-ip=91.218.175.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="mQRNNpfr" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660252; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=awA3itBKj19H7/d+bEsq6yy17boeItLnhliB54mswXw=; b=mQRNNpfrKBpzz1ab1nTDgyCoguSHCTkk95shXK3iNipGI04oEMEXi5IIHEPqlqy+SEmRpM qyRZZQYqvRsB5YE+THqhsZdE833GfIrCf6qfwNtQCy481oRQtt+89LTqZmnI4GWpd8q08w S9hcQ/U+3SEnqwhb2eichfz//vZJxFk= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Nhat Pham , Chengming Zhou , Qi Zheng Subject: [PATCH v1 18/26] mm: zswap: prevent lruvec release in zswap_folio_swapin() Date: Tue, 28 Oct 2025 21:58:31 +0800 Message-ID: In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. So an lruvec returned by folio_lruvec() could be released without the rcu read lock or a reference to its memory cgroup. In the current patch, the rcu read lock is employed to safeguard against the release of the lruvec in zswap_folio_swapin(). This serves as a preparatory measure for the reparenting of the LRU pages. 
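Reconstructed from the diff below, the whole function after the change is:

```
void zswap_folio_swapin(struct folio *folio)
{
	struct lruvec *lruvec;

	if (folio) {
		rcu_read_lock();
		lruvec = folio_lruvec(folio);
		atomic_long_inc(&lruvec->zswap_lruvec_state.nr_disk_swapins);
		rcu_read_unlock();
	}
}
```

The lruvec pointer never leaves the RCU section; only the counter increment happens under it.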
Signed-off-by: Muchun Song Acked-by: Nhat Pham Reviewed-by: Chengming Zhou Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/zswap.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mm/zswap.c b/mm/zswap.c index 5d0f8b13a958d..a341814468b95 100644 --- a/mm/zswap.c +++ b/mm/zswap.c @@ -664,8 +664,10 @@ void zswap_folio_swapin(struct folio *folio) struct lruvec *lruvec; =20 if (folio) { + rcu_read_lock(); lruvec =3D folio_lruvec(folio); atomic_long_inc(&lruvec->zswap_lruvec_state.nr_disk_swapins); + rcu_read_unlock(); } } =20 --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-188.mta0.migadu.com (out-188.mta0.migadu.com [91.218.175.188]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AAC013328E5 for ; Tue, 28 Oct 2025 14:04:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.188 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660265; cv=none; b=RdgyJM5gO9hQnTDhxVnfj+vbEsF4dnAaWLaxpTU9nYcTELx73FDgmXTVvqk5I8n/i4uruNQzmQTbA4aDyRcQQPsof1Uhd6qe/6rqBr8vDUSLP02Ms1n8A+nrWt71ABezv+zENDmtUuketieHBGme4Zngv+W28xYQtANhnqq0VvA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660265; c=relaxed/simple; bh=DzOkPJH5frJK/3HizU5/9Z+USdIFx5BFtceq8DvgOxI=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=gb3LXg7ZSwzVbgHSRbU4eeMSM3+Lp+nGvL3xVnwWXoSQKYwP85+X001ETkPUyTqmB5rNHnULs3YsCkg11+/1wImnW776RF5FyHylzWsz5BLDmkJhqE9jOiLkwQip5ZQn5q2xd8g1DdXlq9kAsqCO1q6p/kjq2KSEKwLoheCM+1o= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=NVJozt+A; arc=none smtp.client-ip=91.218.175.188 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="NVJozt+A" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660260; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=aojrlW5QM0w8YO0ADBtzTkECPDUg/Z5IsfaLfTy6jEI=; b=NVJozt+AdVtBShZigadtoWRY1WbWR7ZpmjhNfD5oqkNoSunHh7O3bg0e+LXQkCnelRdVjj X2BHIa3BeXi0buSPXZBq2lboVKZxjmF0FxqfxBUTEsaKvpiCYNFVxur3NjE4kwmfm6s7zF MSovgm+6YsP2wyb9q6luD8mzY6ZfbjY= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 19/26] mm: swap: prevent lruvec release in swap module Date: Tue, 28 Oct 2025 21:58:32 +0800 Message-ID: In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. So an lruvec returned by folio_lruvec() could be released without the rcu read lock or a reference to its memory cgroup. In the current patch, the rcu read lock is employed to safeguard against the release of the lruvec in lru_note_cost_refault() and lru_activate(). This serves as a preparatory measure for the reparenting of the LRU pages. 
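For readability, the function touched by the diff below, lru_gen_clear_refs(), reads as follows after the change (reconstructed from the diff):

```
static bool lru_gen_clear_refs(struct folio *folio)
{
	int gen = folio_lru_gen(folio);
	int type = folio_is_file_lru(folio);
	unsigned long seq;

	if (gen < 0)
		return true;

	set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS | BIT(PG_workingset), 0);

	/* Sample min_seq under RCU; the lruvec pointer itself never escapes. */
	rcu_read_lock();
	seq = READ_ONCE(folio_lruvec(folio)->lrugen.min_seq[type]);
	rcu_read_unlock();

	/* whether can do without shuffling under the LRU lock */
	return gen == lru_gen_from_seq(seq);
}
```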
Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/swap.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/mm/swap.c b/mm/swap.c index ec0c654e128dc..0606795f3ccf3 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -412,18 +412,20 @@ static void lru_gen_inc_refs(struct folio *folio) =20 static bool lru_gen_clear_refs(struct folio *folio) { - struct lru_gen_folio *lrugen; int gen =3D folio_lru_gen(folio); int type =3D folio_is_file_lru(folio); + unsigned long seq; =20 if (gen < 0) return true; =20 set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS | BIT(PG_workingset), 0); =20 - lrugen =3D &folio_lruvec(folio)->lrugen; + rcu_read_lock(); + seq =3D READ_ONCE(folio_lruvec(folio)->lrugen.min_seq[type]); + rcu_read_unlock(); /* whether can do without shuffling under the LRU lock */ - return gen =3D=3D lru_gen_from_seq(READ_ONCE(lrugen->min_seq[type])); + return gen =3D=3D lru_gen_from_seq(seq); } =20 #else /* !CONFIG_LRU_GEN */ --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-172.mta0.migadu.com (out-172.mta0.migadu.com [91.218.175.172]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5BF5A338929; Tue, 28 Oct 2025 14:04:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660271; cv=none; b=c995w0vE73AKrWaxxEWcdMyh+PMmSRv0vDoJoooGO9mdpIU8eiff9/OFB9u6ZFl6uHZyK230brAlYWfCtJ7UeWUhRGIxbPLrz+Eyu/rAKKEpJ3uagCKnCLcj8Uv1iPry6L1g7Pd+tcZvdpje2r2bgUbbfspPnFUgv/DC0smmFP8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660271; c=relaxed/simple; bh=0kgT6fzD5akkCPqm03VJrzchYDCochSXofUVtd05LKc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=sJy9HJKSi1KJ49biJXJ0l51qqOGE8JpCnRMmcIcbfC3Xz5/lcMWyd68KIXLTaBWf4H5DX/vMYNkrp+ja0lpYLkT2beeg9EYxp3T67ZOKkshZ+p+kRYo/gogRZKNlm1iKa9ubegoWhHzloDPdd0Er4ycNz4d+cUHLbMamzrmdq9Q= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=GZWIQPg2; arc=none smtp.client-ip=91.218.175.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="GZWIQPg2" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660266; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MfFT50NAM4YJdcJwEiZWm9eoPKYAch/I/HKN7VyhW9I=; b=GZWIQPg2QKwgut9Dq3hgo3nl0aQwy3J/E5mWPtGNg4bdNwkQ+AI6uq3lfLJklHFfbHo+Pg 3E7xR05XF643Awodwuy+5roySsCLCwXL2xAjK8TTY1IsG7QNx+MN4lIQAYKtvBXnOk8QdZ 7rTdqleL9uVxBl3NuOZLwvW0LxrK80E= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 20/26] mm: workingset: prevent lruvec release in workingset_activation() Date: Tue, 28 Oct 2025 21:58:33 +0800 Message-ID: <03a26f7d8723a42c29a24b03ed318a2830faff02.1761658311.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song In the near future, a folio will no longer pin its corresponding memory cgroup. So an lruvec returned by folio_lruvec() could be released without the rcu read lock or a reference to its memory cgroup. In the current patch, the rcu read lock is employed to safeguard against the release of the lruvec in workingset_activation(). This serves as a preparatory measure for the reparenting of the LRU pages. Signed-off-by: Muchun Song Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- mm/workingset.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/mm/workingset.c b/mm/workingset.c index a69cc7bf9246d..d0eb3636dcd1d 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -595,8 +595,11 @@ void workingset_activation(struct folio *folio) * Filter non-memcg pages here, e.g. unmap can call * mark_page_accessed() on VDSO pages. 
*/ - if (mem_cgroup_disabled() || folio_memcg_charged(folio)) + if (mem_cgroup_disabled() || folio_memcg_charged(folio)) { + rcu_read_lock(); workingset_age_nonresident(folio_lruvec(folio), folio_nr_pages(folio)); + rcu_read_unlock(); + } } =20 /* --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-172.mta0.migadu.com (out-172.mta0.migadu.com [91.218.175.172]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C3F8C340A57 for ; Tue, 28 Oct 2025 14:04:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.172 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660278; cv=none; b=glfFttevSWpdMmnwEsqjTKckpuCZpD1WYhaUjqhxOi55behb9p/WVC/JV4o51uUGSASklRdp9/rDItHm/cRuTgjZJj6SxTkwb/ZIpA1fYc5YrCcqVsbamXt8WhM/dYdl8DL0OO2Gukd26XPEqJ6p8E8lqwehVaYKQ+1KmHwSbJs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660278; c=relaxed/simple; bh=hgQYbDTPaCKHBmP6pQl2g+PhfDm3Z6m75kSoL5hyhf4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=odhITohRaXm47o0hekWXBnJa7mV6ZyFMo58ciynexsHmi54ZzNJlK7yQtwE5tcoGI2oyce2vdc/UhR9NRU6pg1/XAktC9G1q20b23O5X1ufLvuP8EWfdZfXdA5Qu386nEjVPT550GE3wn2+HeNcaA9ZilJid3yKZJ6fu+f9Oun4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=FPqX5Jsy; arc=none smtp.client-ip=91.218.175.172 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="FPqX5Jsy" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660274; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SIvMMZtiHeleJuOL4dKZoaUL1E0ZJO7RAB2Ee9n3WYU=; b=FPqX5JsybzsHXjrfTFcOA99OddeCuoln/4P04FFv3wrwoVbySXrfzU6CQQOXPMx37blhfD 77oJfK9rQYFNirve77cNm4PdoucPClzqnu50HZhpoCtQMIMUxecqzmfJKxkP/F+cMUgOI4 AyEhJaySa9fVyrJk41bgUddT+I3p7GQ= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 21/26] mm: memcontrol: prepare for reparenting LRU pages for lruvec lock Date: Tue, 28 Oct 2025 21:58:34 +0800 Message-ID: In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song The following diagram illustrates how to ensure the safety of the folio lruvec lock when LRU folios undergo reparenting. In the folio_lruvec_lock(folio) function: ``` rcu_read_lock(); retry: lruvec =3D folio_lruvec(folio); /* There is a possibility of folio reparenting at this point. */ spin_lock(&lruvec->lru_lock); if (unlikely(lruvec_memcg(lruvec) !=3D folio_memcg(folio))) { /* * The wrong lruvec lock was acquired, and a retry is required. * This is because the folio resides on the parent memcg lruvec * list. */ spin_unlock(&lruvec->lru_lock); goto retry; } /* Reaching here indicates that folio_memcg() is stable. */ ``` In the memcg_reparent_objcgs(memcg) function: ``` spin_lock(&lruvec->lru_lock); spin_lock(&lruvec_parent->lru_lock); /* Transfer folios from the lruvec list to the parent's. */ spin_unlock(&lruvec_parent->lru_lock); spin_unlock(&lruvec->lru_lock); ``` After acquiring the lruvec lock, it is necessary to verify whether the folio has been reparented. If reparenting has occurred, the new lruvec lock must be reacquired. During the LRU folio reparenting process, the lruvec lock will also be acquired (this will be implemented in a subsequent patch). Therefore, folio_memcg() remains unchanged while the lruvec lock is held. Given that lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after the lruvec lock is acquired, the lruvec_memcg_debug() check is redundant. Hence, it is removed. This patch serves as a preparation for the reparenting of LRU folios. Signed-off-by: Muchun Song Signed-off-by: Qi Zheng --- include/linux/memcontrol.h | 23 ++++++----------- mm/compaction.c | 29 ++++++++++++++++----- mm/memcontrol.c | 53 +++++++++++++++++++------------------- 3 files changed, 58 insertions(+), 47 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index ca8d4e09cbe7d..6f6b28f8f0f63 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -740,7 +740,11 @@ static inline struct lruvec *mem_cgroup_lruvec(struct = mem_cgroup *memcg, * folio_lruvec - return lruvec for isolating/putting an LRU folio * @folio: Pointer to the folio. 
* - * This function relies on folio->mem_cgroup being stable. + * The user should hold an rcu read lock to protect lruvec associated with + * the folio from being released. But it does not prevent binding stability + * between the folio and the returned lruvec from being changed to its par= ent + * or ancestor (e.g. like folio_lruvec_lock() does that holds LRU lock to + * prevent the change). */ static inline struct lruvec *folio_lruvec(struct folio *folio) { @@ -763,15 +767,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *fol= io); struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags); =20 -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio); -#else -static inline -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ -} -#endif - static inline struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){ return css ? container_of(css, struct mem_cgroup, css) : NULL; @@ -1204,11 +1199,6 @@ static inline struct lruvec *folio_lruvec(struct fol= io *folio) return &pgdat->__lruvec; } =20 -static inline -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ -} - static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memc= g) { return NULL; @@ -1515,17 +1505,20 @@ static inline struct lruvec *parent_lruvec(struct l= ruvec *lruvec) static inline void lruvec_unlock(struct lruvec *lruvec) { spin_unlock(&lruvec->lru_lock); + rcu_read_unlock(); } =20 static inline void lruvec_unlock_irq(struct lruvec *lruvec) { spin_unlock_irq(&lruvec->lru_lock); + rcu_read_unlock(); } =20 static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec, unsigned long flags) { spin_unlock_irqrestore(&lruvec->lru_lock, flags); + rcu_read_unlock(); } =20 /* Test requires a stable folio->memcg binding, see folio_memcg() */ diff --git a/mm/compaction.c b/mm/compaction.c index 4dce180f699b4..0d2a0e6239eb4 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -518,6 +518,24 @@ static bool compact_lock_irqsave(spinlock_t *lock, uns= igned long *flags, return true; } =20 +static struct lruvec * +compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flag= s, + struct compact_control *cc) +{ + struct lruvec *lruvec; + + rcu_read_lock(); +retry: + lruvec =3D folio_lruvec(folio); + compact_lock_irqsave(&lruvec->lru_lock, flags, cc); + if (unlikely(lruvec_memcg(lruvec) !=3D folio_memcg(folio))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } + + return lruvec; +} + /* * Compaction requires the taking of some coarse locks that are potentially * very heavily contended. 
The lock should be periodically unlocked to avo= id @@ -839,7 +857,7 @@ isolate_migratepages_block(struct compact_control *cc, = unsigned long low_pfn, { pg_data_t *pgdat =3D cc->zone->zone_pgdat; unsigned long nr_scanned =3D 0, nr_isolated =3D 0; - struct lruvec *lruvec; + struct lruvec *lruvec =3D NULL; unsigned long flags =3D 0; struct lruvec *locked =3D NULL; struct folio *folio =3D NULL; @@ -1153,18 +1171,17 @@ isolate_migratepages_block(struct compact_control *= cc, unsigned long low_pfn, if (!folio_test_clear_lru(folio)) goto isolate_fail_put; =20 - lruvec =3D folio_lruvec(folio); + if (locked) + lruvec =3D folio_lruvec(folio); =20 /* If we already hold the lock, we can skip some rechecking */ - if (lruvec !=3D locked) { + if (lruvec !=3D locked || !locked) { if (locked) lruvec_unlock_irqrestore(locked, flags); =20 - compact_lock_irqsave(&lruvec->lru_lock, &flags, cc); + lruvec =3D compact_folio_lruvec_lock_irqsave(folio, &flags, cc); locked =3D lruvec; =20 - lruvec_memcg_debug(lruvec, folio); - /* * Try get exclusive access under lock. If marked for * skip, the scan is aborted unless the current context diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 4b3c7d4f346b5..7969dd93d858a 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1184,23 +1184,6 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg, } } =20 -#ifdef CONFIG_DEBUG_VM -void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio) -{ - struct mem_cgroup *memcg; - - if (mem_cgroup_disabled()) - return; - - memcg =3D folio_memcg(folio); - - if (!memcg) - VM_BUG_ON_FOLIO(!mem_cgroup_is_root(lruvec_memcg(lruvec)), folio); - else - VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) !=3D memcg, folio); -} -#endif - /** * folio_lruvec_lock - Lock the lruvec for a folio. * @folio: Pointer to the folio. @@ -1210,14 +1193,20 @@ void lruvec_memcg_debug(struct lruvec *lruvec, stru= ct folio *folio) * - folio_test_lru false * - folio frozen (refcount of 0) * - * Return: The lruvec this folio is on with its lock held. + * Return: The lruvec this folio is on with its lock held and rcu read loc= k held. */ struct lruvec *folio_lruvec_lock(struct folio *folio) { - struct lruvec *lruvec =3D folio_lruvec(folio); + struct lruvec *lruvec; =20 + rcu_read_lock(); +retry: + lruvec =3D folio_lruvec(folio); spin_lock(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, folio); + if (unlikely(lruvec_memcg(lruvec) !=3D folio_memcg(folio))) { + spin_unlock(&lruvec->lru_lock); + goto retry; + } =20 return lruvec; } @@ -1232,14 +1221,20 @@ struct lruvec *folio_lruvec_lock(struct folio *foli= o) * - folio frozen (refcount of 0) * * Return: The lruvec this folio is on with its lock held and interrupts - * disabled. + * disabled and rcu read lock held. */ struct lruvec *folio_lruvec_lock_irq(struct folio *folio) { - struct lruvec *lruvec =3D folio_lruvec(folio); + struct lruvec *lruvec; =20 + rcu_read_lock(); +retry: + lruvec =3D folio_lruvec(folio); spin_lock_irq(&lruvec->lru_lock); - lruvec_memcg_debug(lruvec, folio); + if (unlikely(lruvec_memcg(lruvec) !=3D folio_memcg(folio))) { + spin_unlock_irq(&lruvec->lru_lock); + goto retry; + } =20 return lruvec; } @@ -1255,15 +1250,21 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *= folio) * - folio frozen (refcount of 0) * * Return: The lruvec this folio is on with its lock held and interrupts - * disabled. + * disabled and rcu read lock held. 
*/ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags) { - struct lruvec *lruvec =3D folio_lruvec(folio); + struct lruvec *lruvec; =20 + rcu_read_lock(); +retry: + lruvec =3D folio_lruvec(folio); spin_lock_irqsave(&lruvec->lru_lock, *flags); - lruvec_memcg_debug(lruvec, folio); + if (unlikely(lruvec_memcg(lruvec) !=3D folio_memcg(folio))) { + spin_unlock_irqrestore(&lruvec->lru_lock, *flags); + goto retry; + } =20 return lruvec; } --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-178.mta0.migadu.com (out-178.mta0.migadu.com [91.218.175.178]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0CADF337108 for ; Tue, 28 Oct 2025 14:04:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.178 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660286; cv=none; b=M/+VAG+PlJPvZNp9BnARCnUiod7plgzaCurfJnZBlNXfJbxKurxOuApH1ygjAs6VPE7bnVndkwnXtUGDyPAVhnRZCxLYs/S5/mTY+e5BdKmT0DIVGp0OrewZGZ8YEpsgy/ZulB71EQOHLzJIjKNe6BeRc2H4OHDO5Ds6a6qMROI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660286; c=relaxed/simple; bh=tQPVFYoqRrmI1p/WiQPQYEXxJTCm73j5VaCKJSaWw/Q=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rPBdT252PQFSCUmQfAYKcCgEL20rtJgR8DxBfshblR1inoFoKqV/ubFiVgpM8zKXs/qu+qBynK3Wo+QYED8g+uw/mrNvGS/sppiwpmVkCnxj32840KWHFCOobf+h736t1ClAik2VH0TQpPBpC3myoNGgUgJQib+1amJma+5iJN0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=WK8tb2Ee; arc=none smtp.client-ip=91.218.175.178 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="WK8tb2Ee" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660290; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/NlFh6JZBSK01bz7xFAPu6gc4AMfcAw/edRnb/5hy/w=; b=rVxvyEdKqZsKYbizlGoj5w1+xNdO0odOM+rCoiyzALnMIy7ike96ihWGIzWYiy4tbbGDIM MFwF4TLRfWcQFpmKN7ZvRLWlEEuHXF3FEA3g8mCAQJ9g4p+yLKsbKZDNT459dRFG2IZcSB B8MqCIlQiBRG+tgkrW4RswTqYCqY1Y8= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Qi Zheng Subject: [PATCH v1 22/26] mm: vmscan: prepare for reparenting traditional LRU folios Date: Tue, 28 Oct 2025 21:58:35 +0800 Message-ID: <77c64e29b70bad6ca303e0e591624f9cdf2a750b.1761658311.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Qi Zheng To resolve the dying memcg issue, we need to reparent the LRU folios of a child memcg to its parent memcg. For the traditional LRU, each lruvec of every memcg comprises four LRU lists. Due to the symmetry of the LRU lists, it is feasible to transfer the LRU lists from a memcg to its parent memcg during the reparenting process. This commit implements the specific function, which will be used during the reparenting process.
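The splice-and-transfer step described above is small enough to model outside the kernel. The following is a minimal userspace sketch, not kernel code: toy_lruvec, toy_reparent() and the list helpers are simplified stand-ins for the lruvec, mem_cgroup_per_node and list_head machinery that the actual lru_reparent_memcg()/lruvec_reparent_lru() pair in the diff below operates on. It shows the two halves of the operation: every LRU list of the child is spliced onto the tail of the parent's corresponding list, and the per-zone size counters are moved over in the same step.
```
/*
 * Userspace sketch of the splice-and-transfer idea. Illustrative only;
 * all types and helpers here are simplified stand-ins, not kernel code.
 */
#include <stdio.h>

#define NR_ZONES 3
#define NR_LISTS 4			/* the four evictable LRU lists */

struct list_head { struct list_head *prev, *next; };

struct toy_lruvec {
	struct list_head lists[NR_LISTS];
	long lru_zone_size[NR_ZONES][NR_LISTS];
};

static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

/* Move all entries of @list to the tail of @head and leave @list empty. */
static void list_splice_tail_init(struct list_head *list, struct list_head *head)
{
	if (list_empty(list))
		return;
	list->next->prev = head->prev;
	head->prev->next = list->next;
	list->prev->next = head;
	head->prev = list->prev;
	INIT_LIST_HEAD(list);
}

/* The whole reparenting step: splice each list, transfer the sizes. */
static void toy_reparent(struct toy_lruvec *child, struct toy_lruvec *parent)
{
	for (int lru = 0; lru < NR_LISTS; lru++) {
		list_splice_tail_init(&child->lists[lru], &parent->lists[lru]);
		for (int zid = 0; zid < NR_ZONES; zid++) {
			parent->lru_zone_size[zid][lru] +=
				child->lru_zone_size[zid][lru];
			child->lru_zone_size[zid][lru] = 0;
		}
	}
}

int main(void)
{
	struct toy_lruvec child, parent;
	struct list_head folio_a, folio_b;

	for (int lru = 0; lru < NR_LISTS; lru++) {
		INIT_LIST_HEAD(&child.lists[lru]);
		INIT_LIST_HEAD(&parent.lists[lru]);
		for (int zid = 0; zid < NR_ZONES; zid++) {
			child.lru_zone_size[zid][lru] = 0;
			parent.lru_zone_size[zid][lru] = 0;
		}
	}

	/* Two "folios" sit on the child's first LRU list, accounted in zone 0. */
	list_add_tail(&folio_a, &child.lists[0]);
	list_add_tail(&folio_b, &child.lists[0]);
	child.lru_zone_size[0][0] = 2;

	toy_reparent(&child, &parent);

	/* Prints: child 0, parent 2, child list empty: 1 */
	printf("child %ld, parent %ld, child list empty: %d\n",
	       child.lru_zone_size[0][0], parent.lru_zone_size[0][0],
	       list_empty(&child.lists[0]));
	return 0;
}
```
Because both lruvecs index their lists by the same enum, the transfer needs no per-folio work, which is what makes reparenting the traditional LRU cheap.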
Signed-off-by: Qi Zheng Reviewed-by: Harry Yoo --- include/linux/mmzone.h | 4 ++++ mm/vmscan.c | 39 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 43 insertions(+) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 4398e027f450e..0d8776e5b6747 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -366,6 +366,10 @@ enum lruvec_flags { LRUVEC_NODE_CONGESTED, }; =20 +#ifdef CONFIG_MEMCG +void lru_reparent_memcg(struct mem_cgroup *src, struct mem_cgroup *dst); +#endif /* CONFIG_MEMCG */ + #endif /* !__GENERATING_BOUNDS_H */ =20 /* diff --git a/mm/vmscan.c b/mm/vmscan.c index 676e6270e5b45..7aa8e1472d10d 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2649,6 +2649,45 @@ static bool can_age_anon_pages(struct lruvec *lruvec, lruvec_memcg(lruvec)); } =20 + +#ifdef CONFIG_MEMCG +static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst, + enum lru_list lru) +{ + int zid; + struct mem_cgroup_per_node *mz_src, *mz_dst; + + mz_src =3D container_of(src, struct mem_cgroup_per_node, lruvec); + mz_dst =3D container_of(dst, struct mem_cgroup_per_node, lruvec); + + if (lru !=3D LRU_UNEVICTABLE) + list_splice_tail_init(&src->lists[lru], &dst->lists[lru]); + + for (zid =3D 0; zid < MAX_NR_ZONES; zid++) { + mz_dst->lru_zone_size[zid][lru] +=3D mz_src->lru_zone_size[zid][lru]; + mz_src->lru_zone_size[zid][lru] =3D 0; + } +} + +void lru_reparent_memcg(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + int nid; + + for_each_node(nid) { + enum lru_list lru; + struct lruvec *src_lruvec, *dst_lruvec; + + src_lruvec =3D mem_cgroup_lruvec(src, NODE_DATA(nid)); + dst_lruvec =3D mem_cgroup_lruvec(dst, NODE_DATA(nid)); + dst_lruvec->anon_cost +=3D src_lruvec->anon_cost; + dst_lruvec->file_cost +=3D src_lruvec->file_cost; + + for_each_lru(lru) + lruvec_reparent_lru(src_lruvec, dst_lruvec, lru); + } +} +#endif + #ifdef CONFIG_LRU_GEN =20 #ifdef CONFIG_LRU_GEN_ENABLED --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-182.mta0.migadu.com (out-182.mta0.migadu.com [91.218.175.182]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E9A293396E0 for ; Tue, 28 Oct 2025 14:04:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.182 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660295; cv=none; b=qinN84VjWGOqE3cRybJOu6GgKFtSh4RGoSDPaCA7cBbk2/GA0iSOqQ+tyewughMy73ZNvZe9p7BhhxkVCkkr3fYpZIsx9kpI3AOAHmq+8cKZNv89CVeVrPcEI0j7BHys/qxngzRxa721duN7qvYAo1O5OcPkuZdIMTxXw2CoEcg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660295; c=relaxed/simple; bh=EKDh1bG6TFboUMyykCUwFenrp6Gkt0w8MiIp22DFWIM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=ZDIhpYAh4aNW3sgfrCxTLrdN1SYz6dHvjqNlNdOYtiwQQICXW52hstb7TUghVBlM7gIbQDMwlsVgdWqKAjn0DqT3idqXoVpUSsYVu2VxO/LNXHifC+jvBNWHNsZHbgOWpLB7VqsSzMz+faxyYpDmJXXbmSPUx6pO3+r7undKeQk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=rVxvyEdK; arc=none smtp.client-ip=91.218.175.182 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit 
key) header.d=linux.dev header.i=@linux.dev header.b="rVxvyEdK" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660298; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=u1RiVu6Vt2llYHxIz14p3hNB1v2Bo6KqNLmF3dplOT8=; b=xBN0+XgvMzEZ0CVS0wiMq+TSAfwuuche1Bos4yWYPX1Tui4h8RXSxjPTErRLvGhdCYqgHW DM2gjYat2LC4eZVWnOU0yZ14JQwberQ4wxKRvzyduIgakeP7D7tFGbwE/FVTMn/ZQp7TeB 2CuDjqJ1RidrsBxK6dAHAxAMERleDos= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Qi Zheng Subject: [PATCH v1 23/26] mm: vmscan: prepare for reparenting MGLRU folios Date: Tue, 28 Oct 2025 21:58:36 +0800 Message-ID: In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Qi Zheng Similar to traditional LRU folios, in order to solve the dying memcg problem, we also need to reparent MGLRU folios to the parent memcg when the memcg is offlined. However, there are the following challenges:
1. Each lruvec has between MIN_NR_GENS and MAX_NR_GENS generations, and the parent and child memcg may have different numbers of generations, so we cannot simply transfer MGLRU folios from the child memcg to the parent memcg as we do for traditional LRU folios.
2. The generation information is stored in folio->flags, but we cannot traverse these folios while holding the lru lock, otherwise it may cause a softlockup.
3. In walk_update_folio(), the gen of a folio and the corresponding lru size may be updated, but the folio is not immediately moved to the corresponding lru list. Therefore, folios of different generations may coexist on an LRU list.
4. In lru_gen_del_folio(), the generation to which a folio belongs is derived from the generation information in folio->flags, and the corresponding LRU size is updated. Therefore, we need to update the lru size correctly during reparenting, otherwise the lru size may later be updated incorrectly in lru_gen_del_folio().
Finally, this patch chooses a compromise method: during reparenting, splice each lru list in the child memcg onto the lru list of the same generation in the parent memcg. In order to ensure that the parent memcg has matching generations, we need to increase the number of generations in the parent memcg to MAX_NR_GENS before reparenting. Of course, the same generation has different meanings in the parent and child memcg, which will blur the hot and cold information of the reparented folios. But other than that, this method is simple enough, the lru size stays correct, and there is no need to consider certain concurrency issues (such as lru_gen_del_folio()). To prepare for the above work, this commit implements the specific functions, which will be used during reparenting.
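The generation-alignment argument can be checked with a little arithmetic. The sketch below is illustrative only: the MIN_NR_GENS/MAX_NR_GENS values and the modulo mapping in gen_from_seq() are assumptions mirroring how lru_gen_from_seq() indexes generation slots, and none of this is kernel code. Once the parent has been raised to MAX_NR_GENS generations, every slot index 0..MAX_NR_GENS-1 is in use, so whatever slot a child generation occupies also exists in the parent and can be spliced into it, even though that slot stands for a different sequence number (and therefore a different age) on each side.
```
/*
 * Sketch of why the parent is brought up to MAX_NR_GENS generations before
 * splicing. Assumed values and indexing; not kernel code.
 */
#include <stdio.h>

#define MIN_NR_GENS 2
#define MAX_NR_GENS 4

/* Same idea as lru_gen_from_seq(): a seq maps to a slot modulo MAX_NR_GENS. */
static int gen_from_seq(unsigned long seq)
{
	return seq % MAX_NR_GENS;
}

static void dump(const char *who, unsigned long max_seq, int nr_gens)
{
	printf("%s (max_seq=%lu, %d gens): slots", who, max_seq, nr_gens);
	for (int i = 0; i < nr_gens; i++)
		printf(" %d(seq=%lu)", gen_from_seq(max_seq - i), max_seq - i);
	printf("\n");
}

int main(void)
{
	unsigned long src_max_seq = 9;	/* child stayed at MIN_NR_GENS gens */
	unsigned long dst_max_seq = 23;	/* parent forced up to MAX_NR_GENS gens */

	dump("child ", src_max_seq, MIN_NR_GENS);
	dump("parent", dst_max_seq, MAX_NR_GENS);

	/*
	 * With MAX_NR_GENS generations the parent uses every slot index, so
	 * any slot the child splices from exists in the parent as well, even
	 * though the same slot represents a different seq on each side.
	 */
	for (int i = 0; i < MIN_NR_GENS; i++) {
		int gen = gen_from_seq(src_max_seq - i);

		printf("child slot %d (seq %lu) splices into parent slot %d (parent seq %lu)\n",
		       gen, src_max_seq - i, gen,
		       dst_max_seq - (dst_max_seq - gen) % MAX_NR_GENS);
	}
	return 0;
}
```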
Suggested-by: Harry Yoo Suggested-by: Imran Khan Signed-off-by: Qi Zheng --- include/linux/mmzone.h | 16 ++++++++ mm/vmscan.c | 86 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 102 insertions(+) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 0d8776e5b6747..0a71bf015d12b 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -628,6 +628,9 @@ void lru_gen_online_memcg(struct mem_cgroup *memcg); void lru_gen_offline_memcg(struct mem_cgroup *memcg); void lru_gen_release_memcg(struct mem_cgroup *memcg); void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid); +void max_lru_gen_memcg(struct mem_cgroup *memcg); +bool recheck_lru_gen_max_memcg(struct mem_cgroup *memcg); +void lru_gen_reparent_memcg(struct mem_cgroup *src, struct mem_cgroup *dst= ); =20 #else /* !CONFIG_LRU_GEN */ =20 @@ -668,6 +671,19 @@ static inline void lru_gen_soft_reclaim(struct mem_cgr= oup *memcg, int nid) { } =20 +static inline void max_lru_gen_memcg(struct mem_cgroup *memcg) +{ +} + +static inline bool recheck_lru_gen_max_memcg(struct mem_cgroup *memcg) +{ + return true; +} + +static inline void lru_gen_reparent_memcg(struct mem_cgroup *src, struct m= em_cgroup *dst) +{ +} + #endif /* CONFIG_LRU_GEN */ =20 struct lruvec { diff --git a/mm/vmscan.c b/mm/vmscan.c index 7aa8e1472d10d..3ee7fb96b8aeb 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4468,6 +4468,92 @@ void lru_gen_soft_reclaim(struct mem_cgroup *memcg, = int nid) lru_gen_rotate_memcg(lruvec, MEMCG_LRU_HEAD); } =20 +bool recheck_lru_gen_max_memcg(struct mem_cgroup *memcg) +{ + int nid; + + for_each_node(nid) { + struct lruvec *lruvec =3D get_lruvec(memcg, nid); + int type; + + for (type =3D 0; type < ANON_AND_FILE; type++) { + if (get_nr_gens(lruvec, type) !=3D MAX_NR_GENS) + return false; + } + } + + return true; +} + +/* + * We need to ensure that the folios of child memcg can be reparented to t= he + * same gen of the parent memcg, so the gens of the parent memcg needed be + * incremented to the MAX_NR_GENS before reparenting. + */ +void max_lru_gen_memcg(struct mem_cgroup *memcg) +{ + int nid; + + for_each_node(nid) { + struct lruvec *lruvec =3D get_lruvec(memcg, nid); + int type; + + for (type =3D 0; type < ANON_AND_FILE; type++) { + while (get_nr_gens(lruvec, type) < MAX_NR_GENS) { + DEFINE_MAX_SEQ(lruvec); + + inc_max_seq(lruvec, max_seq, mem_cgroup_swappiness(memcg)); + cond_resched(); + } + } + } +} + +static void __lru_gen_reparent_memcg(struct lruvec *src_lruvec, struct lru= vec *dst_lruvec, + int zone, int type) +{ + struct lru_gen_folio *src_lrugen, *dst_lrugen; + enum lru_list lru =3D type * LRU_INACTIVE_FILE; + int i; + + src_lrugen =3D &src_lruvec->lrugen; + dst_lrugen =3D &dst_lruvec->lrugen; + + for (i =3D 0; i < get_nr_gens(src_lruvec, type); i++) { + int gen =3D lru_gen_from_seq(src_lrugen->max_seq - i); + int nr_pages =3D src_lrugen->nr_pages[gen][type][zone]; + int src_lru_active =3D lru_gen_is_active(src_lruvec, gen) ? LRU_ACTIVE := 0; + int dst_lru_active =3D lru_gen_is_active(dst_lruvec, gen) ? 
LRU_ACTIVE := 0; + + list_splice_tail_init(&src_lrugen->folios[gen][type][zone], + &dst_lrugen->folios[gen][type][zone]); + + WRITE_ONCE(src_lrugen->nr_pages[gen][type][zone], 0); + WRITE_ONCE(dst_lrugen->nr_pages[gen][type][zone], + dst_lrugen->nr_pages[gen][type][zone] + nr_pages); + + __update_lru_size(src_lruvec, lru + src_lru_active, zone, -nr_pages); + __update_lru_size(dst_lruvec, lru + dst_lru_active, zone, nr_pages); + } +} + +void lru_gen_reparent_memcg(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + int nid; + + for_each_node(nid) { + struct lruvec *src_lruvec, *dst_lruvec; + int type, zone; + + src_lruvec =3D get_lruvec(src, nid); + dst_lruvec =3D get_lruvec(dst, nid); + + for (zone =3D 0; zone < MAX_NR_ZONES; zone++) + for (type =3D 0; type < ANON_AND_FILE; type++) + __lru_gen_reparent_memcg(src_lruvec, dst_lruvec, zone, type); + } +} + #endif /* CONFIG_MEMCG */ =20 /*************************************************************************= ***** --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-181.mta0.migadu.com (out-181.mta0.migadu.com [91.218.175.181]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A5DA7339709 for ; Tue, 28 Oct 2025 14:05:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.181 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660302; cv=none; b=gVNpkyHuaHKWfNi31HhMjQYwfMKzfHUVJGLh6ahWAz/LIpJqsqTAXjT2QHWZDC3MszLAGZ6M1duZlsXOWnjZbk9YldrETz+ETJ8cpKr5FdQum8AOKxo02R/8rh/gORZGXauff6e39qj2/V/uq6/X+t+yRm1tNbltn5Fuj+zdSr4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660302; c=relaxed/simple; bh=JiM3ipnlBJOzX0zcLtp9ixiYB7jD4RiHN2hmztb9BG4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=eE1c2Nnq46Fywd6jiu3HuE7hecZTDIXbmgQDgMxV12jxHgULTvabXBWwjlzE8yW/dE8NrA9PZg3NZubHYeDrIVCitwDtmwRORbZmLaKtOMQeArk7PBlncBE0e9trwgN42M+Dj6nT8EitMFFViUaGGWxalTXgBUgJ1DOre5qCYKw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=xBN0+Xgv; arc=none smtp.client-ip=91.218.175.181 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="xBN0+Xgv" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660298; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=u1RiVu6Vt2llYHxIz14p3hNB1v2Bo6KqNLmF3dplOT8=; b=xBN0+XgvMzEZ0CVS0wiMq+TSAfwuuche1Bos4yWYPX1Tui4h8RXSxjPTErRLvGhdCYqgHW DM2gjYat2LC4eZVWnOU0yZ14JQwberQ4wxKRvzyduIgakeP7D7tFGbwE/FVTMn/ZQp7TeB 2CuDjqJ1RidrsBxK6dAHAxAMERleDos= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Qi Zheng Subject: [PATCH v1 24/26] mm: memcontrol: refactor memcg_reparent_objcgs() Date: Tue, 28 Oct 2025 21:58:37 +0800 Message-ID: <21f6b42ec2441372bdfb540d2e7e0fef770c0f6c.1761658311.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Qi Zheng Refactor the memcg_reparent_objcgs() to facilitate subsequent reparenting LRU folios here. Signed-off-by: Qi Zheng --- mm/memcontrol.c | 37 +++++++++++++++++++++++++++---------- mm/vmscan.c | 1 - 2 files changed, 27 insertions(+), 11 deletions(-) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 7969dd93d858a..ee98c9e8fdcea 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -206,24 +206,41 @@ static struct obj_cgroup *obj_cgroup_alloc(void) return objcg; } =20 -static void memcg_reparent_objcgs(struct mem_cgroup *memcg) +static void __memcg_reparent_objcgs(struct mem_cgroup *src, + struct mem_cgroup *dst) { struct obj_cgroup *objcg, *iter; - struct mem_cgroup *parent =3D parent_mem_cgroup(memcg); - - objcg =3D rcu_replace_pointer(memcg->objcg, NULL, true); - - spin_lock_irq(&objcg_lock); =20 + objcg =3D rcu_replace_pointer(src->objcg, NULL, true); /* 1) Ready to reparent active objcg. */ - list_add(&objcg->list, &memcg->objcg_list); + list_add(&objcg->list, &src->objcg_list); /* 2) Reparent active objcg and already reparented objcgs to parent. 
*/ - list_for_each_entry(iter, &memcg->objcg_list, list) - WRITE_ONCE(iter->memcg, parent); + list_for_each_entry(iter, &src->objcg_list, list) + WRITE_ONCE(iter->memcg, dst); /* 3) Move already reparented objcgs to the parent's list */ - list_splice(&memcg->objcg_list, &parent->objcg_list); + list_splice(&src->objcg_list, &dst->objcg_list); +} + +static void reparent_locks(struct mem_cgroup *src, struct mem_cgroup *dst) +{ + spin_lock_irq(&objcg_lock); +} =20 +static void reparent_unlocks(struct mem_cgroup *src, struct mem_cgroup *ds= t) +{ spin_unlock_irq(&objcg_lock); +} + +static void memcg_reparent_objcgs(struct mem_cgroup *src) +{ + struct obj_cgroup *objcg =3D rcu_dereference_protected(src->objcg, true); + struct mem_cgroup *dst =3D parent_mem_cgroup(src); + + reparent_locks(src, dst); + + __memcg_reparent_objcgs(src, dst); + + reparent_unlocks(src, dst); =20 percpu_ref_kill(&objcg->refcnt); } diff --git a/mm/vmscan.c b/mm/vmscan.c index 3ee7fb96b8aeb..82c4cc6edbca5 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2649,7 +2649,6 @@ static bool can_age_anon_pages(struct lruvec *lruvec, lruvec_memcg(lruvec)); } =20 - #ifdef CONFIG_MEMCG static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst, enum lru_list lru) --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-174.mta0.migadu.com (out-174.mta0.migadu.com [91.218.175.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D3341330D25 for ; Tue, 28 Oct 2025 14:05:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660310; cv=none; b=WpJ23neTPQzI5rvCHduiJaPfH4gId4kFz8iFhU7e9U9pLgN8WtPUZ+G7zAlx18hiXHPJ2frWvHYUiTEA8MpLMRz0Oq7wudNoivwf4PIXkz8MUt1eMQvGXm4xLbleh6EXuLjukenCofr4unhwVBMoWBOIZcWQfvG/RQ2YtnQ5JXA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660310; c=relaxed/simple; bh=iTzQ8dauVHrviATDie28Ac+pzGp/fJ9c5/a7F0LK22U=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rZp6tzdNJ7cKbZNxjnnqbzXC0hDdgSg509t8DySFk+gO7HZ4beju9daTHDeWO4R9Sy9sBoO0xbQadRm704QJfb2pyD+x4SUUjw41PVJVqbwYmKFY/uPm7u4d8tHSOOExCjTt7P741tDfWoHzvhdt8fZmuKtzKsYdHordXMGaQIQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=RwRM9kGf; arc=none smtp.client-ip=91.218.175.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="RwRM9kGf" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660306; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=w3zBlBNUuKPlPCh2ctZnhmavZnVuki2UU+VV30hL06Q=; b=RwRM9kGfFzoMLN2zCLsCYv3ADLwuki5uxTnC3P6Q3arhqps8eRzM5R7Ev7FqYVDdxHjhXN +H0qY3eVtox6p9kj3DStZaj+yMzCtajUO3UbHd8INL7Zj92vTmZBqhD4XPOUAD0QQuKdri v2xjFfSAEp88jCgyj+dQtUmhaC5U4lA= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 25/26] mm: memcontrol: eliminate the problem of dying memory cgroup for LRU folios Date: Tue, 28 Oct 2025 21:58:38 +0800 Message-ID: <44fd54721dfa74941e65a82e03c23d9c0bff9feb.1761658311.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song Pagecache pages are charged at allocation time and hold a reference to the original memory cgroup until reclaimed. Depending on memory pressure, page sharing patterns between different cgroups and cgroup creation/destruction rates, many dying memory cgroups can be pinned by pagecache pages, reducing page reclaim efficiency and wasting memory. Converting LRU folios and most other raw memory cgroup pins to the object cgroup direction can fix this long-living problem. Finally, folio->memcg_data of LRU folios and kmem folios will always point to an object cgroup pointer. The folio->memcg_data of slab folios will point to an vector of object cgroups. Signed-off-by: Muchun Song Signed-off-by: Qi Zheng --- include/linux/memcontrol.h | 77 +++++---------- mm/memcontrol-v1.c | 15 +-- mm/memcontrol.c | 189 +++++++++++++++++++++++-------------- 3 files changed, 150 insertions(+), 131 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 6f6b28f8f0f63..f87aa43d8e54a 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -369,9 +369,6 @@ enum objext_flags { #define OBJEXTS_FLAGS_MASK (__NR_OBJEXTS_FLAGS - 1) =20 #ifdef CONFIG_MEMCG - -static inline bool folio_memcg_kmem(struct folio *folio); - /* * After the initialization objcg->memcg is always pointing at * a valid memcg, but can be atomically swapped to the parent memcg. @@ -385,43 +382,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(str= uct obj_cgroup *objcg) } =20 /* - * __folio_memcg - Get the memory cgroup associated with a non-kmem folio - * @folio: Pointer to the folio. - * - * Returns a pointer to the memory cgroup associated with the folio, - * or NULL. This function assumes that the folio is known to have a - * proper memory cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * kmem folios. 
- */ -static inline struct mem_cgroup *__folio_memcg(struct folio *folio) -{ - unsigned long memcg_data =3D folio->memcg_data; - - VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); - - return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); -} - -/* - * __folio_objcg - get the object cgroup associated with a kmem folio. + * folio_objcg - get the object cgroup associated with a folio. * @folio: Pointer to the folio. * * Returns a pointer to the object cgroup associated with the folio, * or NULL. This function assumes that the folio is known to have a - * proper object cgroup pointer. It's not safe to call this function - * against some type of folios, e.g. slab folios or ex-slab folios or - * LRU folios. + * proper object cgroup pointer. */ -static inline struct obj_cgroup *__folio_objcg(struct folio *folio) +static inline struct obj_cgroup *folio_objcg(struct folio *folio) { unsigned long memcg_data =3D folio->memcg_data; =20 VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); - VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); =20 return (struct obj_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } @@ -435,21 +408,30 @@ static inline struct obj_cgroup *__folio_objcg(struct= folio *folio) * proper memory cgroup pointer. It's not safe to call this function * against some type of folios, e.g. slab folios or ex-slab folios. * - * For a non-kmem folio any of the following ensures folio and memcg bindi= ng - * stability: + * For a folio any of the following ensures folio and objcg binding stabil= ity: * * - the folio lock * - LRU isolation * - exclusive reference * - * For a kmem folio a caller should hold an rcu read lock to protect memcg - * associated with a kmem folio from being released. + * Based on the stable binding of folio and objcg, for a folio any of the + * following ensures folio and memcg binding stability: + * + * - cgroup_mutex + * - the lruvec lock + * + * If the caller only want to ensure that the page counters of memcg are + * updated correctly, ensure that the binding stability of folio and objcg + * is sufficient. + * + * Note: The caller should hold an rcu read lock or cgroup_mutex to protect + * memcg associated with a folio from being released. */ static inline struct mem_cgroup *folio_memcg(struct folio *folio) { - if (folio_memcg_kmem(folio)) - return obj_cgroup_memcg(__folio_objcg(folio)); - return __folio_memcg(folio); + struct obj_cgroup *objcg =3D folio_objcg(folio); + + return objcg ? obj_cgroup_memcg(objcg) : NULL; } =20 /* @@ -473,15 +455,10 @@ static inline bool folio_memcg_charged(struct folio *= folio) * has an associated memory cgroup pointer or an object cgroups vector or * an object cgroup. * - * For a non-kmem folio any of the following ensures folio and memcg bindi= ng - * stability: + * The page and objcg or memcg binding rules can refer to folio_memcg(). * - * - the folio lock - * - LRU isolation - * - exclusive reference - * - * For a kmem folio a caller should hold an rcu read lock to protect memcg - * associated with a kmem folio from being released. + * A caller should hold an rcu read lock to protect memcg associated with a + * page from being released. */ static inline struct mem_cgroup *folio_memcg_check(struct folio *folio) { @@ -490,18 +467,14 @@ static inline struct mem_cgroup *folio_memcg_check(st= ruct folio *folio) * for slabs, READ_ONCE() should be used here. 
*/ unsigned long memcg_data =3D READ_ONCE(folio->memcg_data); + struct obj_cgroup *objcg; =20 if (memcg_data & MEMCG_DATA_OBJEXTS) return NULL; =20 - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg =3D (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg =3D (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); =20 - return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } =20 static inline struct mem_cgroup *page_memcg_check(struct page *page) diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c index 6eed14bff7426..23c07df2063c8 100644 --- a/mm/memcontrol-v1.c +++ b/mm/memcontrol-v1.c @@ -591,6 +591,7 @@ void memcg1_commit_charge(struct folio *folio, struct m= em_cgroup *memcg) void memcg1_swapout(struct folio *folio, swp_entry_t entry) { struct mem_cgroup *memcg, *swap_memcg; + struct obj_cgroup *objcg; unsigned int nr_entries; =20 VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); @@ -602,12 +603,13 @@ void memcg1_swapout(struct folio *folio, swp_entry_t = entry) if (!do_memsw_account()) return; =20 - memcg =3D folio_memcg(folio); - - VM_WARN_ON_ONCE_FOLIO(!memcg, folio); - if (!memcg) + objcg =3D folio_objcg(folio); + VM_WARN_ON_ONCE_FOLIO(!objcg, folio); + if (!objcg) return; =20 + rcu_read_lock(); + memcg =3D obj_cgroup_memcg(objcg); /* * In case the memcg owning these pages has been offlined and doesn't * have an ID allocated to it anymore, charge the closest online @@ -625,7 +627,7 @@ void memcg1_swapout(struct folio *folio, swp_entry_t en= try) folio_unqueue_deferred_split(folio); folio->memcg_data =3D 0; =20 - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) page_counter_uncharge(&memcg->memory, nr_entries); =20 if (memcg !=3D swap_memcg) { @@ -646,7 +648,8 @@ void memcg1_swapout(struct folio *folio, swp_entry_t en= try) preempt_enable_nested(); memcg1_check_events(memcg, folio_nid(folio)); =20 - css_put(&memcg->css); + rcu_read_unlock(); + obj_cgroup_put(objcg); } =20 /* diff --git a/mm/memcontrol.c b/mm/memcontrol.c index ee98c9e8fdcea..759197e19c50b 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -223,22 +223,55 @@ static void __memcg_reparent_objcgs(struct mem_cgroup= *src, =20 static void reparent_locks(struct mem_cgroup *src, struct mem_cgroup *dst) { + int nid, nest =3D 0; + spin_lock_irq(&objcg_lock); + for_each_node(nid) { + spin_lock_nested(&mem_cgroup_lruvec(src, + NODE_DATA(nid))->lru_lock, nest++); + spin_lock_nested(&mem_cgroup_lruvec(dst, + NODE_DATA(nid))->lru_lock, nest++); + } } =20 static void reparent_unlocks(struct mem_cgroup *src, struct mem_cgroup *ds= t) { + int nid; + + for_each_node(nid) { + spin_unlock(&mem_cgroup_lruvec(dst, NODE_DATA(nid))->lru_lock); + spin_unlock(&mem_cgroup_lruvec(src, NODE_DATA(nid))->lru_lock); + } spin_unlock_irq(&objcg_lock); } =20 +static void memcg_reparent_lru_folios(struct mem_cgroup *src, + struct mem_cgroup *dst) +{ + if (lru_gen_enabled()) + lru_gen_reparent_memcg(src, dst); + else + lru_reparent_memcg(src, dst); +} + static void memcg_reparent_objcgs(struct mem_cgroup *src) { struct obj_cgroup *objcg =3D rcu_dereference_protected(src->objcg, true); struct mem_cgroup *dst =3D parent_mem_cgroup(src); =20 +retry: + if (lru_gen_enabled()) + max_lru_gen_memcg(dst); + reparent_locks(src, dst); + if (lru_gen_enabled() && !recheck_lru_gen_max_memcg(dst)) { + reparent_unlocks(src, dst); + cond_resched(); + goto retry; + } =20 __memcg_reparent_objcgs(src, dst); + memcg_reparent_lru_folios(src, dst); 
=20 reparent_unlocks(src, dst); =20 @@ -989,6 +1022,8 @@ struct mem_cgroup *get_mem_cgroup_from_current(void) /** * get_mem_cgroup_from_folio - Obtain a reference on a given folio's memcg. * @folio: folio from which memcg should be extracted. + * + * The folio and objcg or memcg binding rules can refer to folio_memcg(). */ struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio) { @@ -2557,17 +2592,17 @@ static inline int try_charge(struct mem_cgroup *mem= cg, gfp_t gfp_mask, return try_charge_memcg(memcg, gfp_mask, nr_pages); } =20 -static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) +static void commit_charge(struct folio *folio, struct obj_cgroup *objcg) { VM_BUG_ON_FOLIO(folio_memcg_charged(folio), folio); /* - * Any of the following ensures page's memcg stability: + * Any of the following ensures folio's objcg stability: * * - the page lock * - LRU isolation * - exclusive reference */ - folio->memcg_data =3D (unsigned long)memcg; + folio->memcg_data =3D (unsigned long)objcg; } =20 #ifdef CONFIG_MEMCG_NMI_SAFETY_REQUIRES_ATOMIC @@ -2679,6 +2714,17 @@ static struct obj_cgroup *__get_obj_cgroup_from_memc= g(struct mem_cgroup *memcg) return NULL; } =20 +static inline struct obj_cgroup *get_obj_cgroup_from_memcg(struct mem_cgro= up *memcg) +{ + struct obj_cgroup *objcg; + + rcu_read_lock(); + objcg =3D __get_obj_cgroup_from_memcg(memcg); + rcu_read_unlock(); + + return objcg; +} + static struct obj_cgroup *current_objcg_update(void) { struct mem_cgroup *memcg; @@ -2779,17 +2825,10 @@ struct obj_cgroup *get_obj_cgroup_from_folio(struct= folio *folio) { struct obj_cgroup *objcg; =20 - if (!memcg_kmem_online()) - return NULL; - - if (folio_memcg_kmem(folio)) { - objcg =3D __folio_objcg(folio); + objcg =3D folio_objcg(folio); + if (objcg) obj_cgroup_get(objcg); - } else { - rcu_read_lock(); - objcg =3D __get_obj_cgroup_from_memcg(__folio_memcg(folio)); - rcu_read_unlock(); - } + return objcg; } =20 @@ -3296,7 +3335,7 @@ void folio_split_memcg_refs(struct folio *folio, unsi= gned old_order, return; =20 new_refs =3D (1 << (old_order - new_order)) - 1; - css_get_many(&__folio_memcg(folio)->css, new_refs); + obj_cgroup_get_many(folio_objcg(folio), new_refs); } =20 unsigned long mem_cgroup_usage(struct mem_cgroup *memcg, bool swap) @@ -4745,16 +4784,20 @@ void mem_cgroup_calculate_protection(struct mem_cgr= oup *root, static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg, gfp_t gfp) { - int ret; - - ret =3D try_charge(memcg, gfp, folio_nr_pages(folio)); - if (ret) - goto out; + int ret =3D 0; + struct obj_cgroup *objcg; =20 - css_get(&memcg->css); - commit_charge(folio, memcg); + objcg =3D get_obj_cgroup_from_memcg(memcg); + /* Do not account at the root objcg level. 
*/ + if (!obj_cgroup_is_root(objcg)) + ret =3D try_charge(memcg, gfp, folio_nr_pages(folio)); + if (ret) { + obj_cgroup_put(objcg); + return ret; + } + commit_charge(folio, objcg); memcg1_commit_charge(folio, memcg); -out: + return ret; } =20 @@ -4840,7 +4883,7 @@ int mem_cgroup_swapin_charge_folio(struct folio *foli= o, struct mm_struct *mm, } =20 struct uncharge_gather { - struct mem_cgroup *memcg; + struct obj_cgroup *objcg; unsigned long nr_memory; unsigned long pgpgout; unsigned long nr_kmem; @@ -4854,58 +4897,52 @@ static inline void uncharge_gather_clear(struct unc= harge_gather *ug) =20 static void uncharge_batch(const struct uncharge_gather *ug) { + struct mem_cgroup *memcg; + + rcu_read_lock(); + memcg =3D obj_cgroup_memcg(ug->objcg); if (ug->nr_memory) { - memcg_uncharge(ug->memcg, ug->nr_memory); + memcg_uncharge(memcg, ug->nr_memory); if (ug->nr_kmem) { - mod_memcg_state(ug->memcg, MEMCG_KMEM, -ug->nr_kmem); - memcg1_account_kmem(ug->memcg, -ug->nr_kmem); + mod_memcg_state(memcg, MEMCG_KMEM, -ug->nr_kmem); + memcg1_account_kmem(memcg, -ug->nr_kmem); } - memcg1_oom_recover(ug->memcg); + memcg1_oom_recover(memcg); } =20 - memcg1_uncharge_batch(ug->memcg, ug->pgpgout, ug->nr_memory, ug->nid); + memcg1_uncharge_batch(memcg, ug->pgpgout, ug->nr_memory, ug->nid); + rcu_read_unlock(); =20 /* drop reference from uncharge_folio */ - css_put(&ug->memcg->css); + obj_cgroup_put(ug->objcg); } =20 static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) { long nr_pages; - struct mem_cgroup *memcg; struct obj_cgroup *objcg; =20 VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); =20 /* * Nobody should be changing or seriously looking at - * folio memcg or objcg at this point, we have fully - * exclusive access to the folio. + * folio objcg at this point, we have fully exclusive + * access to the folio. */ - if (folio_memcg_kmem(folio)) { - objcg =3D __folio_objcg(folio); - /* - * This get matches the put at the end of the function and - * kmem pages do not hold memcg references anymore. 
- */ - memcg =3D get_mem_cgroup_from_objcg(objcg); - } else { - memcg =3D __folio_memcg(folio); - } - - if (!memcg) + objcg =3D folio_objcg(folio); + if (!objcg) return; =20 - if (ug->memcg !=3D memcg) { - if (ug->memcg) { + if (ug->objcg !=3D objcg) { + if (ug->objcg) { uncharge_batch(ug); uncharge_gather_clear(ug); } - ug->memcg =3D memcg; + ug->objcg =3D objcg; ug->nid =3D folio_nid(folio); =20 - /* pairs with css_put in uncharge_batch */ - css_get(&memcg->css); + /* pairs with obj_cgroup_put in uncharge_batch */ + obj_cgroup_get(objcg); } =20 nr_pages =3D folio_nr_pages(folio); @@ -4913,20 +4950,17 @@ static void uncharge_folio(struct folio *folio, str= uct uncharge_gather *ug) if (folio_memcg_kmem(folio)) { ug->nr_memory +=3D nr_pages; ug->nr_kmem +=3D nr_pages; - - folio->memcg_data =3D 0; - obj_cgroup_put(objcg); } else { /* LRU pages aren't accounted at the root level */ - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) ug->nr_memory +=3D nr_pages; ug->pgpgout++; =20 WARN_ON_ONCE(folio_unqueue_deferred_split(folio)); - folio->memcg_data =3D 0; } =20 - css_put(&memcg->css); + folio->memcg_data =3D 0; + obj_cgroup_put(objcg); } =20 void __mem_cgroup_uncharge(struct folio *folio) @@ -4950,7 +4984,7 @@ void __mem_cgroup_uncharge_folios(struct folio_batch = *folios) uncharge_gather_clear(&ug); for (i =3D 0; i < folios->nr; i++) uncharge_folio(folios->folios[i], &ug); - if (ug.memcg) + if (ug.objcg) uncharge_batch(&ug); } =20 @@ -4967,6 +5001,7 @@ void __mem_cgroup_uncharge_folios(struct folio_batch = *folios) void mem_cgroup_replace_folio(struct folio *old, struct folio *new) { struct mem_cgroup *memcg; + struct obj_cgroup *objcg; long nr_pages =3D folio_nr_pages(new); =20 VM_BUG_ON_FOLIO(!folio_test_locked(old), old); @@ -4981,21 +5016,24 @@ void mem_cgroup_replace_folio(struct folio *old, st= ruct folio *new) if (folio_memcg_charged(new)) return; =20 - memcg =3D folio_memcg(old); - VM_WARN_ON_ONCE_FOLIO(!memcg, old); - if (!memcg) + objcg =3D folio_objcg(old); + VM_WARN_ON_ONCE_FOLIO(!objcg, old); + if (!objcg) return; =20 + rcu_read_lock(); + memcg =3D obj_cgroup_memcg(objcg); /* Force-charge the new page. The old one will be freed soon */ - if (!mem_cgroup_is_root(memcg)) { + if (!obj_cgroup_is_root(objcg)) { page_counter_charge(&memcg->memory, nr_pages); if (do_memsw_account()) page_counter_charge(&memcg->memsw, nr_pages); } =20 - css_get(&memcg->css); - commit_charge(new, memcg); + obj_cgroup_get(objcg); + commit_charge(new, objcg); memcg1_commit_charge(new, memcg); + rcu_read_unlock(); } =20 /** @@ -5011,7 +5049,7 @@ void mem_cgroup_replace_folio(struct folio *old, stru= ct folio *new) */ void mem_cgroup_migrate(struct folio *old, struct folio *new) { - struct mem_cgroup *memcg; + struct obj_cgroup *objcg; =20 VM_BUG_ON_FOLIO(!folio_test_locked(old), old); VM_BUG_ON_FOLIO(!folio_test_locked(new), new); @@ -5022,18 +5060,18 @@ void mem_cgroup_migrate(struct folio *old, struct f= olio *new) if (mem_cgroup_disabled()) return; =20 - memcg =3D folio_memcg(old); + objcg =3D folio_objcg(old); /* - * Note that it is normal to see !memcg for a hugetlb folio. + * Note that it is normal to see !objcg for a hugetlb folio. * For e.g, itt could have been allocated when memory_hugetlb_accounting * was not selected. 
*/ - VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(old) && !memcg, old); - if (!memcg) + VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(old) && !objcg, old); + if (!objcg) return; =20 - /* Transfer the charge and the css ref */ - commit_charge(new, memcg); + /* Transfer the charge and the objcg ref */ + commit_charge(new, objcg); =20 /* Warning should never happen, so don't worry about refcount non-0 */ WARN_ON_ONCE(folio_unqueue_deferred_split(old)); @@ -5208,22 +5246,27 @@ int __mem_cgroup_try_charge_swap(struct folio *foli= o, swp_entry_t entry) unsigned int nr_pages =3D folio_nr_pages(folio); struct page_counter *counter; struct mem_cgroup *memcg; + struct obj_cgroup *objcg; =20 if (do_memsw_account()) return 0; =20 - memcg =3D folio_memcg(folio); - - VM_WARN_ON_ONCE_FOLIO(!memcg, folio); - if (!memcg) + objcg =3D folio_objcg(folio); + VM_WARN_ON_ONCE_FOLIO(!objcg, folio); + if (!objcg) return 0; =20 + rcu_read_lock(); + memcg =3D obj_cgroup_memcg(objcg); if (!entry.val) { memcg_memory_event(memcg, MEMCG_SWAP_FAIL); + rcu_read_unlock(); return 0; } =20 memcg =3D mem_cgroup_id_get_online(memcg); + /* memcg is pined by memcg ID. */ + rcu_read_unlock(); =20 if (!mem_cgroup_is_root(memcg) && !page_counter_try_charge(&memcg->swap, nr_pages, &counter)) { --=20 2.20.1 From nobody Sat Feb 7 15:40:25 2026 Received: from out-188.mta0.migadu.com (out-188.mta0.migadu.com [91.218.175.188]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D3006344024 for ; Tue, 28 Oct 2025 14:05:16 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=91.218.175.188 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660318; cv=none; b=CAb9JpcsCKc9BJvK/ae5Q4Q18hFyJR9VWYyxZt1sKg8HfgK+VSZJwj6/60lEoUdC8kXoXGV7Nf4Qjh5hjQ2HSH0dvlLhAwSgkQnfx8jELJqdfrME2YZmQ1g9CJwX9I9TdTt2Ba5Bt/5YCyv1chFl7YHMU+tFrISA4b4O1TVLqIE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1761660318; c=relaxed/simple; bh=Do0bHbOeCjLDlB59DgVp8QPSFDjKiJqFiJEFN3AuOk8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=SF2qriVJbmHrmVzpze6J3JrY4iTQ1v8shpxLsIbeU+6wSDp2zr7kwzEEWfC05y42zMiUZHnnRKAJsDGKEfNOOWf7bvsHXpD4JfXnCoh36hD6ca2JAWWalR2zbl8/4wgYERoz+5zaUyA7I0dEu7dzYE0Wc3EthkOZBqkm7Q1DpSo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev; spf=pass smtp.mailfrom=linux.dev; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b=QZ08CyZX; arc=none smtp.client-ip=91.218.175.188 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.dev Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.dev Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.dev header.i=@linux.dev header.b="QZ08CyZX" X-Report-Abuse: Please report any abuse attempt to abuse@migadu.com and include these headers. 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.dev; s=key1; t=1761660314; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Y+4JuPe6Xrl/SYCmzewScvNRpN/5M/JdTL4WT8/nRkM=; b=QZ08CyZXMwfT4eFZs6LeoiXExg+Zb0ZEn7nn6b+Q3xbDLDOgxfqeGdLARk6QkvAIoaFDgy jdI3nKcmsJi02Hqo4qkI42o7inxB3rf196178kVRJxdYQwi/2sajqr6xsiNVgF234tsjYX CA0xYbgVOcQNCaafV2FFIVuOMfiUDfo= From: Qi Zheng To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@redhat.com, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, akpm@linux-foundation.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song , Qi Zheng Subject: [PATCH v1 26/26] mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance helpers Date: Tue, 28 Oct 2025 21:58:39 +0800 Message-ID: <7c59a234362d080af554061941983588feffce01.1761658311.git.zhengqi.arch@bytedance.com> In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Migadu-Flow: FLOW_OUT Content-Type: text/plain; charset="utf-8" From: Muchun Song We must ensure the folio is deleted from or added to the correct lruvec list. So, add VM_WARN_ON_ONCE_FOLIO() to catch invalid users. The VM_BUG_ON_PAGE() in move_pages_to_lru() can be removed as add_page_to_lru_list() will perform the necessary check. Signed-off-by: Muchun Song Acked-by: Roman Gushchin Signed-off-by: Qi Zheng --- include/linux/mm_inline.h | 6 ++++++ mm/vmscan.c | 1 - 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index f6a2b2d200162..dfed0523e0c43 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -342,6 +342,8 @@ void lruvec_add_folio(struct lruvec *lruvec, struct fol= io *folio) { enum lru_list lru =3D folio_lru_list(folio); =20 + VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + if (lru_gen_add_folio(lruvec, folio, false)) return; =20 @@ -356,6 +358,8 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struc= t folio *folio) { enum lru_list lru =3D folio_lru_list(folio); =20 + VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + if (lru_gen_add_folio(lruvec, folio, true)) return; =20 @@ -370,6 +374,8 @@ void lruvec_del_folio(struct lruvec *lruvec, struct fol= io *folio) { enum lru_list lru =3D folio_lru_list(folio); =20 + VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); + if (lru_gen_del_folio(lruvec, folio, false)) return; =20 diff --git a/mm/vmscan.c b/mm/vmscan.c index 82c4cc6edbca5..5f22ec438c018 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1934,7 +1934,6 @@ static unsigned int move_folios_to_lru(struct list_he= ad *list) continue; } =20 - VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio); lruvec_add_folio(lruvec, folio); nr_pages =3D folio_nr_pages(folio); nr_moved +=3D nr_pages; --=20 2.20.1
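As a closing illustration of what these new warnings guard, the userspace model below spells out the invariant in plain C. It is only a sketch: the toy_* types are stand-ins, and matches_lruvec() merely paraphrases the idea behind folio_matches_lruvec() (the folio's current memcg and node must be the ones the lruvec belongs to), which is exactly the binding that can go stale if a cached lruvec pointer is used after the folio has been reparented.
```
/* Userspace model of the lruvec/folio binding check; not kernel code. */
#include <assert.h>
#include <stdio.h>

struct toy_memcg { const char *name; };

struct toy_lruvec {
	struct toy_memcg *memcg;
	int nid;
};

struct toy_folio {
	struct toy_memcg *memcg;	/* changes when the folio is reparented */
	int nid;
};

static int matches_lruvec(const struct toy_folio *folio,
			  const struct toy_lruvec *lruvec)
{
	return folio->memcg == lruvec->memcg && folio->nid == lruvec->nid;
}

static void toy_lruvec_add_folio(struct toy_lruvec *lruvec, struct toy_folio *folio)
{
	/* roughly what VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(...)) checks */
	assert(matches_lruvec(folio, lruvec));
	/* ... the real helper would now link the folio into lruvec's list ... */
}

int main(void)
{
	struct toy_memcg child = { "child" }, parent = { "parent" };
	struct toy_lruvec child_lruvec = { &child, 0 }, parent_lruvec = { &parent, 0 };
	struct toy_folio folio = { &child, 0 };

	toy_lruvec_add_folio(&child_lruvec, &folio);	/* matches, fine */

	folio.memcg = &parent;				/* folio gets reparented */
	toy_lruvec_add_folio(&parent_lruvec, &folio);	/* matches again */
	/*
	 * Using the stale child_lruvec here would trip the check:
	 * toy_lruvec_add_folio(&child_lruvec, &folio);
	 */
	printf("all lruvec/folio bindings matched\n");
	return 0;
}
```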