From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 01/11] mm: memcontrol: remove dead code and comments
Date: Mon, 30 May 2022 15:49:09 +0800
Message-Id: <20220530074919.46352-2-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>
Since the no-hierarchy mode was deprecated by commit bef8620cd8e0 ("mm:
memcg: deprecate the non-hierarchical mode"), parent_mem_cgroup() can
only return NULL for the root memcg, and the root memcg can never be
offline. It is therefore safe to drop the NULL check on the return
value of parent_mem_cgroup(). Remove that dead code.

The comment in memcg_offline_kmem() above memcg_reparent_list_lrus()
has been out of date since commit 5abc1e37afa0 ("mm: list_lru: allocate
list_lru_one only when needed"): there is no longer any ordering
requirement between memcg_reparent_list_lrus() and
memcg_reparent_objcgs(). Remove the outdated comment as well.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
---
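[Editor's note: a minimal sketch, not part of the patch, of why the
dropped check is dead. parent_mem_cgroup() is a thin wrapper around the
css parent pointer (simplified here from include/linux/memcontrol.h),
and that pointer is NULL only for the root cgroup; since the root memcg
is never offlined, the offline paths in this patch always see a
non-NULL parent.]

	static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
	{
		/* css.parent is NULL only for the root memcg. */
		return mem_cgroup_from_css(memcg->css.parent);
	}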
 include/linux/memcontrol.h |  3 +--
 mm/memcontrol.c            | 16 ----------------
 mm/vmscan.c                |  6 +-----
 3 files changed, 2 insertions(+), 23 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 89b14729d59f..0833be256134 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -851,8 +851,7 @@ static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
  * parent_mem_cgroup - find the accounting parent of a memcg
  * @memcg: memcg whose parent to find
  *
- * Returns the parent memcg, or NULL if this is the root or the memory
- * controller is in legacy no-hierarchy mode.
+ * Returns the parent memcg, or NULL if this is the root.
  */
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 598fece89e2b..13da256ff2e4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3622,17 +3622,7 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 		return;
 
 	parent = parent_mem_cgroup(memcg);
-	if (!parent)
-		parent = root_mem_cgroup;
-
 	memcg_reparent_objcgs(memcg, parent);
-
-	/*
-	 * After we have finished memcg_reparent_objcgs(), all list_lrus
-	 * corresponding to this cgroup are guaranteed to remain empty.
-	 * The ordering is imposed by list_lru_node->lock taken by
-	 * memcg_reparent_list_lrus().
-	 */
 	memcg_reparent_list_lrus(memcg, parent);
 }
 #else
@@ -6593,10 +6583,6 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 		return;
 
 	parent = parent_mem_cgroup(memcg);
-	/* No parent means a non-hierarchical mode on v1 memcg */
-	if (!parent)
-		return;
-
 	if (parent == root) {
 		memcg->memory.emin = READ_ONCE(memcg->memory.min);
 		memcg->memory.elow = READ_ONCE(memcg->memory.low);
@@ -7050,8 +7036,6 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
 			break;
 		}
 		memcg = parent_mem_cgroup(memcg);
-		if (!memcg)
-			memcg = root_mem_cgroup;
 	}
 	return memcg;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1678802e03e7..8c6054e06087 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -409,13 +409,9 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
 {
 	int i, nid;
 	long nr;
-	struct mem_cgroup *parent;
+	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
 	struct shrinker_info *child_info, *parent_info;
 
-	parent = parent_mem_cgroup(memcg);
-	if (!parent)
-		parent = root_mem_cgroup;
-
 	/* Prevent from concurrent shrinker_info expand */
 	down_read(&shrinker_rwsem);
 	for_each_node(nid) {
-- 
2.11.0
From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 02/11] mm: rename unlock_page_lruvec{_irq, _irqrestore} to lruvec_unlock{_irq, _irqrestore}
Date: Mon, 30 May 2022 15:49:10 +0800
Message-Id: <20220530074919.46352-3-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

It is inconsistent to use the folio_lruvec_lock() variants together
with the unlock_page_lruvec() variants, i.e. to lock with a folio but
unlock with a page. Rename unlock_page_lruvec{_irq, _irqrestore} to
lruvec_unlock{_irq, _irqrestore} so the names match.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
---
 include/linux/memcontrol.h | 10 +++++-----
 mm/compaction.c            | 12 ++++++------
 mm/huge_memory.c           |  2 +-
 mm/mlock.c                 |  2 +-
 mm/swap.c                  | 16 ++++++++--------
 mm/vmscan.c                |  4 ++--
 6 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0833be256134..6d7f97cc3fd4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1538,17 +1538,17 @@ static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
 	return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
 }
 
-static inline void unlock_page_lruvec(struct lruvec *lruvec)
+static inline void lruvec_unlock(struct lruvec *lruvec)
 {
 	spin_unlock(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irq(struct lruvec *lruvec)
+static inline void lruvec_unlock_irq(struct lruvec *lruvec)
 {
 	spin_unlock_irq(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
+static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec,
 		unsigned long flags)
 {
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
@@ -1570,7 +1570,7 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irq(locked_lruvec);
+		lruvec_unlock_irq(locked_lruvec);
 	}
 
 	return folio_lruvec_lock_irq(folio);
@@ -1584,7 +1584,7 @@ static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+		lruvec_unlock_irqrestore(locked_lruvec, *flags);
 	}
 
 	return folio_lruvec_lock_irqsave(folio, flags);
diff --git a/mm/compaction.c b/mm/compaction.c
index fe915db6149b..4f155df6b39c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -874,7 +874,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	 */
 	if (!(low_pfn % SWAP_CLUSTER_MAX)) {
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 
@@ -987,7 +987,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -1070,7 +1070,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;
@@ -1129,7 +1129,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		put_page(page);
@@ -1145,7 +1145,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (nr_isolated) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
@@ -1177,7 +1177,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_abort:
 	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+		lruvec_unlock_irqrestore(locked, flags);
 	if (page) {
 		SetPageLRU(page);
 		put_page(page);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 910a138e9859..b17b9d25d045 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2404,7 +2404,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	ClearPageCompound(head);
-	unlock_page_lruvec(lruvec);
+	lruvec_unlock(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, nr);
diff --git a/mm/mlock.c b/mm/mlock.c
index 716caf851043..6649f3dda56e 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -205,7 +205,7 @@ static void mlock_pagevec(struct pagevec *pvec)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
diff --git a/mm/swap.c b/mm/swap.c
index 7e320ec08c6a..0a8ee33116c5 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -87,7 +87,7 @@ static void __page_cache_release(struct page *page)
 		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
 		del_page_from_lru_list(page, lruvec);
 		__clear_page_lru_flags(page);
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	}
 	/* See comment on PageMlocked in release_pages() */
 	if (unlikely(PageMlocked(page))) {
@@ -209,7 +209,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 		SetPageLRU(page);
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
@@ -369,7 +369,7 @@ static void folio_activate(struct folio *folio)
 	if (folio_test_clear_lru(folio)) {
 		lruvec = folio_lruvec_lock_irq(folio);
 		__folio_activate(folio, lruvec);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		folio_set_lru(folio);
 	}
 }
@@ -915,7 +915,7 @@ void release_pages(struct page **pages, int nr)
 	 * same lruvec. The lock is held only if lruvec != NULL.
 	 */
 	if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 		lruvec = NULL;
 	}
 
@@ -925,7 +925,7 @@ void release_pages(struct page **pages, int nr)
 
 		if (is_zone_device_page(page)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			if (put_devmap_managed_page(page))
@@ -940,7 +940,7 @@ void release_pages(struct page **pages, int nr)
 
 		if (PageCompound(page)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			__put_compound_page(page);
@@ -974,7 +974,7 @@ void release_pages(struct page **pages, int nr)
 			list_add(&page->lru, &pages_to_free);
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 
 	mem_cgroup_uncharge_list(&pages_to_free);
 	free_unref_page_list(&pages_to_free);
@@ -1060,7 +1060,7 @@ void __pagevec_lru_add(struct pagevec *pvec)
 		__pagevec_lru_add_fn(folio, lruvec);
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8c6054e06087..a611ccf03c9b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2171,7 +2171,7 @@ int folio_isolate_lru(struct folio *folio)
 		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		ret = 0;
 	}
 
@@ -4806,7 +4806,7 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	} else if (pgscanned) {
 		count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
 	}
-- 
2.11.0
From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage
Date: Mon, 30 May 2022 15:49:11 +0800
Message-Id: <20220530074919.46352-4-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

Pagecache pages are charged at allocation time and hold a reference to
the original memory cgroup until they are reclaimed. Depending on
memory pressure, on the specific patterns of page sharing between
cgroups, and on the cgroup creation and destruction rates, a large
number of dying memory cgroups can be pinned by pagecache pages, which
makes page reclaim less efficient and wastes memory.

We can fix this by converting LRU pages and most other raw memcg pins
to the objcg direction, after which page->memcg always points to an
object cgroup. The objcg infrastructure then no longer serves only
CONFIG_MEMCG_KMEM, so this patch moves it out of the CONFIG_MEMCG_KMEM
scope so that LRU pages can reuse it for charging.

LRU pages are not accounted at the root level, yet their
page->memcg_data points to root_mem_cgroup, so it always holds a valid
pointer. The root_mem_cgroup, however, does not have an object cgroup.
If we charge LRU pages through the obj_cgroup APIs, page->memcg_data
must point to a root object cgroup, so also allocate an object cgroup
for root_mem_cgroup.

Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Acked-by: Roman Gushchin
Reviewed-by: Michal Koutný
---
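[Editor's note: a minimal sketch, not part of the patch, of the objcg
indirection this message describes. A page pins an obj_cgroup rather
than the memcg itself; on offline, the objcgs are re-pointed at the
parent, so readers must resolve the memcg under RCU. obj_cgroup_memcg()
is the existing helper in include/linux/memcontrol.h for this.]

	struct mem_cgroup *memcg;

	rcu_read_lock();
	/* objcg->memcg may be re-pointed to the parent at any time. */
	memcg = obj_cgroup_memcg(objcg);
	/* ... use memcg; stable only inside this RCU read section ... */
	rcu_read_unlock();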
 include/linux/memcontrol.h |  2 +-
 mm/memcontrol.c            | 56 +++++++++++++++++++++++++++-------------------
 2 files changed, 34 insertions(+), 24 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6d7f97cc3fd4..27f3171f42a1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -315,10 +315,10 @@ struct mem_cgroup {
 
 #ifdef CONFIG_MEMCG_KMEM
 	int kmemcg_id;
+#endif
 	struct obj_cgroup __rcu *objcg;
 	/* list of inherited objcgs, protected by objcg_lock */
 	struct list_head objcg_list;
-#endif
 
 	MEMCG_PADDING(_pad2_);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 13da256ff2e4..739a1d58ce97 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -254,9 +254,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
 	return container_of(vmpr, struct mem_cgroup, vmpressure);
 }
 
-#ifdef CONFIG_MEMCG_KMEM
 static DEFINE_SPINLOCK(objcg_lock);
 
+#ifdef CONFIG_MEMCG_KMEM
 bool mem_cgroup_kmem_disabled(void)
 {
 	return cgroup_memory_nokmem;
@@ -265,12 +265,10 @@ bool mem_cgroup_kmem_disabled(void)
 static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
 				      unsigned int nr_pages);
 
-static void obj_cgroup_release(struct percpu_ref *ref)
+static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
 {
-	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
 	unsigned int nr_bytes;
 	unsigned int nr_pages;
-	unsigned long flags;
 
 	/*
 	 * At this point all allocated objects are freed, and
@@ -284,9 +282,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 	 * 3) CPU1: a process from another memcg is allocating something,
 	 *          the stock if flushed,
 	 *          objcg->nr_charged_bytes = PAGE_SIZE - 92
-	 * 5) CPU0: we do release this object,
+	 * 4) CPU0: we do release this object,
 	 *          92 bytes are added to stock->nr_bytes
-	 * 6) CPU0: stock is flushed,
+	 * 5) CPU0: stock is flushed,
 	 *          92 bytes are added to objcg->nr_charged_bytes
 	 *
 	 * In the result, nr_charged_bytes == PAGE_SIZE.
@@ -298,6 +296,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 
 	if (nr_pages)
 		obj_cgroup_uncharge_pages(objcg, nr_pages);
+}
+#else
+static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
+{
+}
+#endif
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+	unsigned long flags;
+
+	obj_cgroup_release_bytes(objcg);
 
 	spin_lock_irqsave(&objcg_lock, flags);
 	list_del(&objcg->list);
@@ -326,10 +337,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
-				  struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg, *iter;
+	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
 
 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
 
@@ -348,6 +359,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
 	percpu_ref_kill(&objcg->refcnt);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 /*
  * A lot of the calls to the cache allocation functions are expected to be
 * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
@@ -3589,21 +3601,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
-	struct obj_cgroup *objcg;
-
 	if (cgroup_memory_nokmem)
 		return 0;
 
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return 0;
 
-	objcg = obj_cgroup_alloc();
-	if (!objcg)
-		return -ENOMEM;
-
-	objcg->memcg = memcg;
-	rcu_assign_pointer(memcg->objcg, objcg);
-
 	static_branch_enable(&memcg_kmem_enabled_key);
 
 	memcg->kmemcg_id = memcg->id.id;
@@ -3613,17 +3616,13 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
-	struct mem_cgroup *parent;
-
 	if (cgroup_memory_nokmem)
 		return;
 
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return;
 
-	parent = parent_mem_cgroup(memcg);
-	memcg_reparent_objcgs(memcg, parent);
-	memcg_reparent_list_lrus(memcg, parent);
+	memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
 }
 #else
 static int memcg_online_kmem(struct mem_cgroup *memcg)
@@ -5106,8 +5105,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 	memcg->socket_pressure = jiffies;
 #ifdef CONFIG_MEMCG_KMEM
 	memcg->kmemcg_id = -1;
-	INIT_LIST_HEAD(&memcg->objcg_list);
 #endif
+	INIT_LIST_HEAD(&memcg->objcg_list);
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&memcg->cgwb_list);
 	for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5169,6 +5168,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct obj_cgroup *objcg;
 
 	if (memcg_online_kmem(memcg))
 		goto remove_id;
@@ -5181,6 +5181,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	if (alloc_shrinker_info(memcg))
 		goto offline_kmem;
 
+	objcg = obj_cgroup_alloc();
+	if (!objcg)
+		goto free_shrinker;
+
+	objcg->memcg = memcg;
+	rcu_assign_pointer(memcg->objcg, objcg);
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
@@ -5189,6 +5196,8 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 		queue_delayed_work(system_unbound_wq, &stats_flush_dwork, 2UL*HZ);
 
 	return 0;
+free_shrinker:
+	free_shrinker_info(memcg);
 offline_kmem:
 	memcg_offline_kmem(memcg);
 remove_id:
@@ -5216,6 +5225,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	page_counter_set_min(&memcg->memory, 0);
 	page_counter_set_low(&memcg->memory, 0);
 
+	memcg_reparent_objcgs(memcg);
 	memcg_offline_kmem(memcg);
 	reparent_shrinker_deferred(memcg);
 	wb_memcg_offline(memcg);
-- 
2.11.0
From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 04/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
Date: Mon, 30 May 2022 15:49:12 +0800
Message-Id: <20220530074919.46352-5-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

The diagram below shows how to make the folio lruvec lock safe when LRU
pages are reparented.

    folio_lruvec_lock(folio)
        rcu_read_lock();
    retry:
        lruvec = folio_lruvec(folio);
        // The folio is reparented at this time.
        spin_lock(&lruvec->lru_lock);
        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
            // Acquired the wrong lruvec lock and need to retry,
            // because this folio is on the parent memcg lruvec list.
            spin_unlock(&lruvec->lru_lock);
            goto retry;
        // If we reach here, folio_memcg(folio) is stable.

    memcg_reparent_objcgs(memcg)
        // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        // Move all the pages from the lruvec list to the parent lruvec list.

        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the folio
has been reparented. If so, we need to reacquire the new lruvec lock.
On the LRU-page reparenting path we will also acquire the lruvec lock
(implemented in a later patch), so folio_memcg() cannot change while we
hold the lruvec lock.

Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once
we hold the lruvec lock, the lruvec_memcg_debug() check is pointless;
remove it.

This is a preparation for reparenting the LRU pages.

Signed-off-by: Muchun Song
---
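[Editor's note: a minimal caller-side sketch, not part of the patch, of
the contract the retry loop provides. Once folio_lruvec_lock_irq()
returns, the folio-to-lruvec binding is stable until the lock is
dropped; the body mirrors folio_isolate_lru() from patch 02.]

	struct lruvec *lruvec;

	lruvec = folio_lruvec_lock_irq(folio);
	/* Here lruvec_memcg(lruvec) == folio_memcg(folio) cannot change. */
	lruvec_del_folio(lruvec, folio);
	lruvec_unlock_irq(lruvec);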
 include/linux/memcontrol.h | 18 +++-------------
 mm/compaction.c            | 27 +++++++++++++++++++----
 mm/memcontrol.c            | 53 ++++++++++++++++++++++++++-----------------
 mm/swap.c                  |  5 +++++
 4 files changed, 61 insertions(+), 42 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 27f3171f42a1..e390aaa46776 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -752,7 +752,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
  * folio_lruvec - return lruvec for isolating/putting an LRU folio
  * @folio: Pointer to the folio.
  *
- * This function relies on folio->mem_cgroup being stable.
+ * The lruvec can be changed to its parent lruvec when the page reparented.
+ * The caller need to recheck if it cares about this changes (just like
+ * folio_lruvec_lock() does).
  */
 static inline struct lruvec *folio_lruvec(struct folio *folio)
 {
@@ -771,15 +773,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 						unsigned long *flags);
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
-#else
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-#endif
-
 static inline
 struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
 	return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -1240,11 +1233,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio)
 	return &pgdat->__lruvec;
 }
 
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
 	return NULL;
diff --git a/mm/compaction.c b/mm/compaction.c
index 4f155df6b39c..29ff111e5711 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -509,6 +509,25 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+		struct compact_control *cc)
+{
+	struct lruvec *lruvec;
+
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
+	compact_lock_irqsave(&lruvec->lru_lock, flags, cc);
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+		goto retry;
+	}
+	rcu_read_unlock();
+
+	return lruvec;
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -844,6 +863,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 	/* Time to isolate some pages for migration */
 	for (; low_pfn < end_pfn; low_pfn++) {
+		struct folio *folio;
 
 		if (skip_on_failure && low_pfn >= next_skip_pfn) {
 			/*
@@ -1065,18 +1085,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!TestClearPageLRU(page))
 			goto isolate_fail_put;
 
-		lruvec = folio_lruvec(page_folio(page));
+		folio = page_folio(page);
+		lruvec = folio_lruvec(folio);
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
 				lruvec_unlock_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page_folio(page));
-
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
 				skip_updated = true;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 739a1d58ce97..9d98a791353c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1199,23 +1199,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 	return ret;
 }
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-	struct mem_cgroup *memcg;
-
-	if (mem_cgroup_disabled())
-		return;
-
-	memcg = folio_memcg(folio);
-
-	if (!memcg)
-		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
-	else
-		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
-}
-#endif
-
 /**
  * folio_lruvec_lock - Lock the lruvec for a folio.
  * @folio: Pointer to the folio.
@@ -1230,10 +1213,18 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
  */
 struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock(&lruvec->lru_lock);
+		goto retry;
+	}
+	rcu_read_unlock();
 
 	return lruvec;
 }
@@ -1253,10 +1244,18 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
  */
 struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irq(&lruvec->lru_lock);
+		goto retry;
+	}
+	rcu_read_unlock();
 
 	return lruvec;
 }
@@ -1278,10 +1277,18 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 		unsigned long *flags)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+		goto retry;
+	}
+	rcu_read_unlock();
 
 	return lruvec;
 }
diff --git a/mm/swap.c b/mm/swap.c
index 0a8ee33116c5..6cea469b6ff2 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -303,6 +303,11 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
 
 void lru_note_cost_folio(struct folio *folio)
 {
+	WARN_ON_ONCE(!rcu_read_lock_held());
+	/*
+	 * The rcu read lock is held by the caller, so we do not need to
+	 * care about the lruvec returned by folio_lruvec() being released.
+	 */
 	lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio),
 		      folio_nr_pages(folio));
 }
-- 
2.11.0

From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 05/11] mm: vmscan: rework move_pages_to_lru()
Date: Mon, 30 May 2022 15:49:13 +0800
Message-Id: <20220530074919.46352-6-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>
A later patch will reparent the LRU pages. Pages being moved to the
appropriate LRU list can be reparented while move_pages_to_lru() runs,
so it is wrong for the caller to hold a single lruvec lock across the
whole operation. Use the more general folio_lruvec_relock_irq()
interface to acquire the correct lruvec lock for each folio instead.

Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Acked-by: Roman Gushchin
---
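[Editor's note: a minimal sketch, not part of the patch, of the relock
idiom the message refers to. folio_lruvec_relock_irq() keeps the held
lock when the next folio belongs to the same lruvec and otherwise drops
it and takes the right one; the loop variable names are illustrative.]

	struct lruvec *lruvec = NULL;

	list_for_each_entry(folio, list, lru) {
		/* Lock this folio's lruvec, reusing the lock if unchanged. */
		lruvec = folio_lruvec_relock_irq(folio, lruvec);
		/* ... operate on the folio under the correct lru_lock ... */
	}
	if (lruvec)
		lruvec_unlock_irq(lruvec);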
 mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a611ccf03c9b..67f1462b150d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2226,23 +2226,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
  * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
  * On return, @list is reused as a list of pages to be freed by the caller.
  *
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to the appropriate LRU list.
+ *
+ * Note: The caller must not hold any lruvec lock.
  */
-static unsigned int move_pages_to_lru(struct lruvec *lruvec,
-				      struct list_head *list)
+static unsigned int move_pages_to_lru(struct list_head *list)
 {
-	int nr_pages, nr_moved = 0;
+	int nr_moved = 0;
+	struct lruvec *lruvec = NULL;
 	LIST_HEAD(pages_to_free);
-	struct page *page;
 
 	while (!list_empty(list)) {
-		page = lru_to_page(list);
+		int nr_pages;
+		struct folio *folio = lru_to_folio(list);
+		struct page *page = &folio->page;
+
+		lruvec = folio_lruvec_relock_irq(folio, lruvec);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
-			spin_unlock_irq(&lruvec->lru_lock);
+			lruvec_unlock_irq(lruvec);
 			putback_lru_page(page);
-			spin_lock_irq(&lruvec->lru_lock);
+			lruvec = NULL;
 			continue;
 		}
 
@@ -2263,20 +2268,16 @@ static unsigned int move_pages_to_lru(struct list_head *list)
 			__clear_page_lru_flags(page);
 
 			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&lruvec->lru_lock);
+				lruvec_unlock_irq(lruvec);
 				destroy_compound_page(page);
-				spin_lock_irq(&lruvec->lru_lock);
+				lruvec = NULL;
 			} else
 				list_add(&page->lru, &pages_to_free);
 
 			continue;
 		}
 
-		/*
-		 * All pages were isolated from the same lruvec (and isolation
-		 * inhibits memcg migration).
-		 */
-		VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
+		VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
 		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
@@ -2284,6 +2285,8 @@ static unsigned int move_pages_to_lru(struct list_head *list)
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
+	if (lruvec)
+		lruvec_unlock_irq(lruvec);
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
@@ -2355,16 +2358,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
 
-	spin_lock_irq(&lruvec->lru_lock);
-	move_pages_to_lru(lruvec, &page_list);
+	move_pages_to_lru(&page_list);
 
+	local_irq_disable();
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_reclaimed);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
 	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
-	spin_unlock_irq(&lruvec->lru_lock);
+	local_irq_enable();
 
 	lru_note_cost(lruvec, file, stat.nr_pageout);
 	mem_cgroup_uncharge_list(&page_list);
@@ -2494,18 +2497,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	/*
 	 * Move pages back to the lru list.
 	 */
-	spin_lock_irq(&lruvec->lru_lock);
-
-	nr_activate = move_pages_to_lru(lruvec, &l_active);
-	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+	nr_activate = move_pages_to_lru(&l_active);
+	nr_deactivate = move_pages_to_lru(&l_inactive);
+
 	/* Keep all free pages in l_active list */
 	list_splice(&l_inactive, &l_active);
 
+	local_irq_disable();
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
-	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
-	spin_unlock_irq(&lruvec->lru_lock);
+	local_irq_enable();
 
 	mem_cgroup_uncharge_list(&l_active);
 	free_unref_page_list(&l_active);
-- 
2.11.0
From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 06/11] mm: thp: make split queue lock safe when LRU pages are reparented
Date: Mon, 30 May 2022 15:49:14 +0800
Message-Id: <20220530074919.46352-7-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

Similar to the lruvec lock, we use the same retry approach to make the
THP deferred split queue lock safe when LRU pages are reparented.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
---
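[Editor's note: a minimal usage sketch, not part of the patch, of the
new lock helpers; the lock/unlock pairing below mirrors
free_transhuge_page() in the diff that follows.]

	struct deferred_split *ds_queue;
	unsigned long flags;

	/* The lock follows the folio to its current (possibly reparented) queue. */
	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
	if (!list_empty(page_deferred_list(page))) {
		ds_queue->split_queue_len--;
		list_del(page_deferred_list(page));
	}
	split_queue_unlock_irqrestore(ds_queue, flags);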
 include/linux/memcontrol.h |  10 ++++
 mm/huge_memory.c           | 116 +++++++++++++++++++++++++++++++++++--------
 2 files changed, 100 insertions(+), 26 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e390aaa46776..56227603dcb8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1650,6 +1650,11 @@ int alloc_shrinker_info(struct mem_cgroup *memcg);
 void free_shrinker_info(struct mem_cgroup *memcg);
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
 void reparent_shrinker_deferred(struct mem_cgroup *memcg);
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return shrinker->id;
+}
 #else
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
@@ -1663,6 +1668,11 @@ static inline void set_shrinker_bit(struct mem_cgroup *memcg,
 				    int nid, int shrinker_id)
 {
 }
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return -1;
+}
 #endif
 
 #ifdef CONFIG_MEMCG_KMEM
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b17b9d25d045..d3411dc291ab 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -503,25 +503,90 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+		struct deferred_split *queue)
 {
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	if (mem_cgroup_disabled())
+		return NULL;
+	if (&NODE_DATA(folio_nid(folio))->deferred_split_queue == queue)
+		return NULL;
+	return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}
 
-	if (memcg)
-		return &memcg->deferred_split_queue;
-	else
-		return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+
+	return memcg ? &memcg->deferred_split_queue : NULL;
 }
 #else
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+		struct deferred_split *queue)
 {
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	return NULL;
+}
 
-	return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	return NULL;
 }
 #endif
 
+static struct deferred_split *folio_split_queue(struct folio *folio)
+{
+	struct deferred_split *queue = folio_memcg_split_queue(folio);
+
+	return queue ? : &NODE_DATA(folio_nid(folio))->deferred_split_queue;
+}
+
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
+{
+	struct deferred_split *queue;
+
+	rcu_read_lock();
+retry:
+	queue = folio_split_queue(folio);
+	spin_lock(&queue->split_queue_lock);
+
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
+	rcu_read_unlock();
+
+	return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+	struct deferred_split *queue;
+
+	rcu_read_lock();
+retry:
+	queue = folio_split_queue(folio);
+	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
+	rcu_read_unlock();
+
+	return queue;
+}
+
+static inline void split_queue_unlock(struct deferred_split *queue)
+{
+	spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
+		unsigned long flags)
+{
+	spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
 void prep_transhuge_page(struct page *page)
 {
 	/*
@@ -2489,7 +2554,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
-	struct deferred_split *ds_queue = get_deferred_split_queue(head);
+	struct deferred_split *ds_queue;
 	XA_STATE(xas, &head->mapping->i_pages, head->index);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
@@ -2581,13 +2646,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
-	spin_lock(&ds_queue->split_queue_lock);
+	ds_queue = folio_split_queue_lock(folio);
 	if (page_ref_freeze(head, 1 + extra_pins)) {
 		if (!list_empty(page_deferred_list(head))) {
 			ds_queue->split_queue_len--;
 			list_del(page_deferred_list(head));
 		}
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 		if (mapping) {
 			int nr = thp_nr_pages(head);
 
@@ -2605,7 +2670,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		__split_huge_page(page, list, end);
 		ret = 0;
 	} else {
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 fail:
 		if (mapping)
 			xas_unlock(&xas);
@@ -2630,25 +2695,23 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 void free_transhuge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
+	struct deferred_split *ds_queue;
 	unsigned long flags;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
 	if (!list_empty(page_deferred_list(page))) {
 		ds_queue->split_queue_len--;
 		list_del(page_deferred_list(page));
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 	free_compound_page(page);
 }
 
 void deferred_split_huge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
-#ifdef CONFIG_MEMCG
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-#endif
+	struct deferred_split *ds_queue;
 	unsigned long flags;
+	struct folio *folio = page_folio(page);
 
 	VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 
@@ -2665,18 +2728,19 @@ void deferred_split_huge_page(struct page *page)
From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe
Date: Mon, 30 May 2022 15:49:15 +0800
Message-Id: <20220530074919.46352-8-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

When we use the objcg APIs to charge LRU pages, a page no longer holds a reference to the memcg associated with it. The callers of {folio,page}_memcg() therefore must hold an rcu read lock, or obtain a reference to the memcg associated with the page, to keep the memcg from being released.

So introduce get_mem_cgroup_from_{page,folio}() to obtain a reference to the memory cgroup associated with a page, and make every caller either hold an rcu read lock or take such a reference, so that the memcg cannot be released while LRU pages are being reparented.

The callers of {folio,page}_memcg() in mem_cgroup_move_task() do not need to be adjusted: cgroup migration and memory cgroup offlining are serialized by cgroup_mutex, so in that routine the LRU pages cannot be reparented to the parent memory cgroup, and {folio,page}_memcg() is stable and cannot be released.

This is a preparation for reparenting the LRU pages.
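A hypothetical caller, shaped after the alloc_page_buffers() change in the diff below, illustrates the reference-taking idiom this patch asks of {folio,page}_memcg() users; the function name example_charge_buffers() is made up for illustration:

	static void example_charge_buffers(struct page *page)
	{
		struct mem_cgroup *memcg, *old_memcg;

		/* Take a reference (or get NULL) instead of a bare page_memcg(). */
		memcg = get_mem_cgroup_from_page(page);
		old_memcg = set_active_memcg(memcg);
		/* ... allocations here are charged to memcg ... */
		set_active_memcg(old_memcg);
		mem_cgroup_put(memcg);	/* NULL-safe */
	}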
Signed-off-by: Muchun Song Acked-by: Roman Gushchin --- fs/buffer.c | 4 +-- fs/fs-writeback.c | 23 +++++++------- include/linux/memcontrol.h | 61 +++++++++++++++++++++++++++++++---- include/trace/events/writeback.h | 5 +++ mm/memcontrol.c | 68 +++++++++++++++++++++++++++++-------= ---- mm/migrate.c | 4 +++ mm/page_io.c | 5 +-- 7 files changed, 131 insertions(+), 39 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 2b5561ae5d0b..80975a457670 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -819,8 +819,7 @@ struct buffer_head *alloc_page_buffers(struct page *pag= e, unsigned long size, if (retry) gfp |=3D __GFP_NOFAIL; =20 - /* The page lock pins the memcg */ - memcg =3D page_memcg(page); + memcg =3D get_mem_cgroup_from_page(page); old_memcg =3D set_active_memcg(memcg); =20 head =3D NULL; @@ -840,6 +839,7 @@ struct buffer_head *alloc_page_buffers(struct page *pag= e, unsigned long size, set_bh_page(bh, page, offset); } out: + mem_cgroup_put(memcg); set_active_memcg(old_memcg); return head; /* diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index 1fae0196292a..56612ace8778 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct pa= ge *page) if (inode_cgwb_enabled(inode)) { struct cgroup_subsys_state *memcg_css; =20 - if (page) { - memcg_css =3D mem_cgroup_css_from_page(page); - wb =3D wb_get_create(bdi, memcg_css, GFP_ATOMIC); - } else { - /* must pin memcg_css, see wb_get_create() */ + /* must pin memcg_css, see wb_get_create() */ + if (page) + memcg_css =3D get_mem_cgroup_css_from_page(page); + else memcg_css =3D task_get_css(current, memory_cgrp_id); - wb =3D wb_get_create(bdi, memcg_css, GFP_ATOMIC); - css_put(memcg_css); - } + wb =3D wb_get_create(bdi, memcg_css, GFP_ATOMIC); + css_put(memcg_css); } =20 if (!wb) @@ -868,16 +866,16 @@ void wbc_account_cgroup_owner(struct writeback_contro= l *wbc, struct page *page, if (!wbc->wb || wbc->no_cgroup_owner) return; =20 - css =3D mem_cgroup_css_from_page(page); + css =3D get_mem_cgroup_css_from_page(page); /* dead cgroups shouldn't contribute to inode ownership arbitration */ if (!(css->flags & CSS_ONLINE)) - return; + goto out; =20 id =3D css->id; =20 if (id =3D=3D wbc->wb_id) { wbc->wb_bytes +=3D bytes; - return; + goto out; } =20 if (id =3D=3D wbc->wb_lcand_id) @@ -890,6 +888,9 @@ void wbc_account_cgroup_owner(struct writeback_control = *wbc, struct page *page, wbc->wb_tcand_bytes +=3D bytes; else wbc->wb_tcand_bytes -=3D min(bytes, wbc->wb_tcand_bytes); + +out: + css_put(css); } EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner); =20 diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 56227603dcb8..16464116f94a 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -373,7 +373,7 @@ static inline bool folio_memcg_kmem(struct folio *folio= ); * a valid memcg, but can be atomically swapped to the parent memcg. * * The caller must ensure that the returned memcg won't be released: - * e.g. acquire the rcu_read_lock or css_set_lock. + * e.g. acquire the rcu_read_lock or objcg_lock or cgroup_mutex. */ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg) { @@ -439,8 +439,8 @@ static inline struct obj_cgroup *__folio_objcg(struct f= olio *folio) * - lock_page_memcg() * - exclusive reference * - * For a kmem folio a caller should hold an rcu read lock to protect memcg - * associated with a kmem folio from being released. 
+ * Note: The caller should hold an rcu read lock to protect memcg associated
+ * with a folio from being released.
  */
 static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 {
@@ -449,12 +449,48 @@ static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 	return __folio_memcg(folio);
 }

+/*
+ * page_memcg - Get the memory cgroup associated with a page.
+ * @page: Pointer to the page.
+ *
+ * See the comments in folio_memcg().
+ */
 static inline struct mem_cgroup *page_memcg(struct page *page)
 {
 	return folio_memcg(page_folio(page));
 }

-/**
+/*
+ * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup
+ * associated with a folio.
+ * @folio: Pointer to the folio.
+ *
+ * Returns a pointer to the memory cgroup (and obtains a reference on it)
+ * associated with the folio, or NULL. This function assumes that the
+ * folio is known to have a proper memory cgroup pointer. It's not safe
+ * to call this function against some types of pages, e.g. slab pages or
+ * ex-slab pages.
+ */
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+retry:
+	memcg = folio_memcg(folio);
+	if (unlikely(memcg && !css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	return memcg;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return get_mem_cgroup_from_folio(page_folio(page));
+}
+
+/*
  * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
  * @folio: Pointer to the folio.
  *
@@ -873,7 +909,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	return match;
 }

-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
 ino_t page_cgroup_ino(struct page *page);

 static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
@@ -1047,10 +1083,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
 static inline void count_memcg_page_event(struct page *page,
 					  enum vm_event_item idx)
 {
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct mem_cgroup *memcg;

+	rcu_read_lock();
+	memcg = page_memcg(page);
 	if (memcg)
 		count_memcg_events(memcg, idx, 1);
+	rcu_read_unlock();
 }

 static inline void count_memcg_event_mm(struct mm_struct *mm,
@@ -1129,6 +1168,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return NULL;
 }

+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	return NULL;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
 	WARN_ON_ONCE(!rcu_read_lock_held());
diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
index 86b2a82da546..cdb822339f13 100644
--- a/include/trace/events/writeback.h
+++ b/include/trace/events/writeback.h
@@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty,
 		__entry->ino		= inode ? inode->i_ino : 0;
 		__entry->memcg_id	= wb->memcg_css->id;
 		__entry->cgroup_ino	= __trace_wb_assign_cgroup(wb);
+		/*
+		 * TP_fast_assign() runs with preemption disabled, which can
+		 * serve as an RCU read-side critical section so that the
+		 * memcg returned by folio_memcg() cannot be freed.
+ */ __entry->page_cgroup_ino =3D cgroup_ino(folio_memcg(folio)->css.cgroup); ), =20 diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 9d98a791353c..4cc392741753 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -371,7 +371,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); #endif =20 /** - * mem_cgroup_css_from_page - css of the memcg associated with a page + * get_mem_cgroup_css_from_page - get css of the memcg associated with a p= age * @page: page of interest * * If memcg is bound to the default hierarchy, css of the memcg associated @@ -381,13 +381,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key); * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup * is returned. */ -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page) +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page) { struct mem_cgroup *memcg; =20 - memcg =3D page_memcg(page); + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) + return &root_mem_cgroup->css; =20 - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) + memcg =3D get_mem_cgroup_from_page(page); + if (!memcg) memcg =3D root_mem_cgroup; =20 return &memcg->css; @@ -770,13 +772,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum n= ode_stat_item idx, void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx, int val) { - struct page *head =3D compound_head(page); /* rmap on tail pages */ + struct folio *folio =3D page_folio(page); /* rmap on tail pages */ struct mem_cgroup *memcg; pg_data_t *pgdat =3D page_pgdat(page); struct lruvec *lruvec; =20 rcu_read_lock(); - memcg =3D page_memcg(head); + memcg =3D folio_memcg(folio); /* Untracked pages have no memcg, no lruvec. Update only the node */ if (!memcg) { rcu_read_unlock(); @@ -2049,7 +2051,9 @@ void folio_memcg_lock(struct folio *folio) * The RCU lock is held throughout the transaction. The fast * path can get away without acquiring the memcg->move_lock * because page moving starts with an RCU grace period. - */ + * + * The RCU lock also protects the memcg from being freed. + */ rcu_read_lock(); =20 if (mem_cgroup_disabled()) @@ -3287,7 +3291,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, si= ze_t size) void split_page_memcg(struct page *head, unsigned int nr) { struct folio *folio =3D page_folio(head); - struct mem_cgroup *memcg =3D folio_memcg(folio); + struct mem_cgroup *memcg =3D get_mem_cgroup_from_folio(folio); int i; =20 if (mem_cgroup_disabled() || !memcg) @@ -3300,6 +3304,8 @@ void split_page_memcg(struct page *head, unsigned int= nr) obj_cgroup_get_many(__folio_objcg(folio), nr - 1); else css_get_many(&memcg->css, nr - 1); + + css_put(&memcg->css); } =20 #ifdef CONFIG_MEMCG_SWAP @@ -4496,7 +4502,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, un= signed long *pfilepages, void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, struct bdi_writeback *wb) { - struct mem_cgroup *memcg =3D folio_memcg(folio); + struct mem_cgroup *memcg =3D get_mem_cgroup_from_folio(folio); struct memcg_cgwb_frn *frn; u64 now =3D get_jiffies_64(); u64 oldest_at =3D now; @@ -4543,6 +4549,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct f= olio *folio, frn->memcg_id =3D wb->memcg_css->id; frn->at =3D now; } + css_put(&memcg->css); } =20 /* issue foreign writeback flushes for recorded foreign dirtying events */ @@ -6077,6 +6084,14 @@ static void mem_cgroup_move_charge(void) atomic_dec(&mc.from->moving_account); } =20 +/* + * The cgroup migration and memory cgroup offlining are serialized by + * @cgroup_mutex. 
If we reach here, it means that the LRU pages cannot + * be reparented to its parent memory cgroup. So during the whole process + * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not + * need to worry about the memcg (returned from page_memcg()) being + * released even if we do not hold an rcu read lock. + */ static void mem_cgroup_move_task(void) { if (mc.to) { @@ -6876,7 +6891,7 @@ void mem_cgroup_migrate(struct folio *old, struct fol= io *new) if (folio_memcg(new)) return; =20 - memcg =3D folio_memcg(old); + memcg =3D get_mem_cgroup_from_folio(old); VM_WARN_ON_ONCE_FOLIO(!memcg, old); if (!memcg) return; @@ -6895,6 +6910,8 @@ void mem_cgroup_migrate(struct folio *old, struct fol= io *new) mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(new)); local_irq_restore(flags); + + css_put(&memcg->css); } =20 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); @@ -7079,6 +7096,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_ent= ry_t entry) if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) return; =20 + /* + * Interrupts should be disabled by the caller (see the comments below), + * which can serve as RCU read-side critical sections. + */ memcg =3D folio_memcg(folio); =20 VM_WARN_ON_ONCE_FOLIO(!memcg, folio); @@ -7140,19 +7161,21 @@ int __mem_cgroup_try_charge_swap(struct page *page,= swp_entry_t entry) struct page_counter *counter; struct mem_cgroup *memcg; unsigned short oldid; + int ret =3D 0; =20 if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) return 0; =20 + rcu_read_lock(); memcg =3D page_memcg(page); =20 VM_WARN_ON_ONCE_PAGE(!memcg, page); if (!memcg) - return 0; + goto out; =20 if (!entry.val) { memcg_memory_event(memcg, MEMCG_SWAP_FAIL); - return 0; + goto out; } =20 memcg =3D mem_cgroup_id_get_online(memcg); @@ -7162,7 +7185,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, s= wp_entry_t entry) memcg_memory_event(memcg, MEMCG_SWAP_MAX); memcg_memory_event(memcg, MEMCG_SWAP_FAIL); mem_cgroup_id_put(memcg); - return -ENOMEM; + ret =3D -ENOMEM; + goto out; } =20 /* Get references for the tail pages, too */ @@ -7171,8 +7195,10 @@ int __mem_cgroup_try_charge_swap(struct page *page, = swp_entry_t entry) oldid =3D swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages); VM_BUG_ON_PAGE(oldid, page); mod_memcg_state(memcg, MEMCG_SWAP, nr_pages); +out: + rcu_read_unlock(); =20 - return 0; + return ret; } =20 /** @@ -7217,6 +7243,7 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *= memcg) bool mem_cgroup_swap_full(struct page *page) { struct mem_cgroup *memcg; + bool ret =3D false; =20 VM_BUG_ON_PAGE(!PageLocked(page), page); =20 @@ -7225,19 +7252,24 @@ bool mem_cgroup_swap_full(struct page *page) if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) return false; =20 + rcu_read_lock(); memcg =3D page_memcg(page); if (!memcg) - return false; + goto out; =20 for (; memcg !=3D root_mem_cgroup; memcg =3D parent_mem_cgroup(memcg)) { unsigned long usage =3D page_counter_read(&memcg->swap); =20 if (usage * 2 >=3D READ_ONCE(memcg->swap.high) || - usage * 2 >=3D READ_ONCE(memcg->swap.max)) - return true; + usage * 2 >=3D READ_ONCE(memcg->swap.max)) { + ret =3D true; + goto out; + } } +out: + rcu_read_unlock(); =20 - return false; + return ret; } =20 static int __init setup_swap_account(char *s) diff --git a/mm/migrate.c b/mm/migrate.c index 6c31ee1e1c9b..59e97a8a64a0 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -430,6 +430,10 @@ int folio_migrate_mapping(struct address_space *mappin= g, struct lruvec *old_lruvec, 
*new_lruvec;
 		struct mem_cgroup *memcg;

+		/*
+		 * Irqs are disabled here, which can serve as an RCU
+		 * read-side critical section.
+		 */
 		memcg = folio_memcg(folio);
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
diff --git a/mm/page_io.c b/mm/page_io.c
index 89fbf3cae30f..a0d9cd68e87a 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -221,13 +221,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
 	struct cgroup_subsys_state *css;
 	struct mem_cgroup *memcg;

+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return;
+		goto out;

-	rcu_read_lock();
 	css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
 	bio_associate_blkg_from_css(bio, css);
+out:
 	rcu_read_unlock();
 }
 #else
-- 
2.11.0
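The rule this patch enforces can be summed up by two idioms, shown here in schematic form. Both are sketches restating the safe patterns used throughout the diff above, not new kernel API; the function names are made up for illustration:

	/* Idiom 1: short, non-sleeping access under the rcu read lock. */
	static void peek_memcg_event(struct page *page)
	{
		struct mem_cgroup *memcg;

		rcu_read_lock();
		memcg = page_memcg(page);	/* stable only inside this section */
		if (memcg)
			count_memcg_events(memcg, PGFAULT, 1);
		rcu_read_unlock();
	}

	/* Idiom 2: longer or sleepable use via a proper reference. */
	static void hold_memcg(struct page *page)
	{
		struct mem_cgroup *memcg = get_mem_cgroup_from_page(page);

		/* ... work that needs the memcg to stay alive ... */
		mem_cgroup_put(memcg);
	}

Sections that already run with interrupts disabled (as in folio_migrate_mapping() and mem_cgroup_swapout() above) count as RCU read-side critical sections, so idiom 1 applies there without an explicit rcu_read_lock().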
From nobody Tue Apr 28 19:32:30 2026

From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 08/11] mm: memcontrol: introduce memcg_reparent_ops
Date: Mon, 30 May 2022 15:49:16 +0800
Message-Id: <20220530074919.46352-9-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

The previous patch showed how to make the lruvec lock safe when LRU pages are reparented. The sequence needs to look like the following:

    memcg_reparent_objcgs(memcg)
        1) lock
           // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
           spin_lock(&lruvec->lru_lock);
           spin_lock(&lruvec_parent->lru_lock);

        2) relocate from current memcg to its parent
           // Move all the pages from the lruvec list to the parent lruvec list.

        3) unlock
           spin_unlock(&lruvec_parent->lru_lock);
           spin_unlock(&lruvec->lru_lock);

Apart from the page lruvec lock, the deferred split queue lock (THP only) needs the same treatment. So extract those three steps into an operations structure driven from memcg_reparent_objcgs():

    memcg_reparent_objcgs(memcg)
        1) lock
           memcg_reparent_ops->lock(memcg, parent);

        2) relocate
           memcg_reparent_ops->relocate(memcg, parent);

        3) unlock
           memcg_reparent_ops->unlock(memcg, parent);

For now, two different locks (the lruvec lock and the deferred split queue lock) need to use this infrastructure. The next patch uses these APIs to make both locks safe when the LRU pages are reparented.
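Schematically, the infrastructure amounts to running each registered triple phase by phase: all locks, then all relocations, then all unlocks, so every queue stays stable while any of them is being spliced. A condensed sketch follows (the actual patch generates three per-phase helpers with the DEFINE_MEMCG_REPARENT_FUNC macro, as the diff below shows; the function reparent_all() here is hypothetical):

	static void reparent_all(struct mem_cgroup *src, struct mem_cgroup *dst,
				 const struct memcg_reparent_ops **ops, int nr)
	{
		int i;

		for (i = 0; i < nr; i++)
			ops[i]->lock(src, dst);
		for (i = 0; i < nr; i++)
			ops[i]->relocate(src, dst);
		for (i = 0; i < nr; i++)
			ops[i]->unlock(src, dst);
	}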
Signed-off-by: Muchun Song
---
 include/linux/memcontrol.h | 20 +++++++++++
 mm/memcontrol.c            | 62 ++++++++++++++++++++++++++++++++----------
 2 files changed, 69 insertions(+), 13 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 16464116f94a..c2ac98a0ece4 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -347,6 +347,26 @@ struct mem_cgroup {
 	struct mem_cgroup_per_node *nodeinfo[];
 };

+struct memcg_reparent_ops {
+	/*
+	 * Note that interrupts are disabled before calling these callbacks,
+	 * so they must remain disabled when leaving them.
+	 */
+	void (*lock)(struct mem_cgroup *src, struct mem_cgroup *dst);
+	void (*relocate)(struct mem_cgroup *src, struct mem_cgroup *dst);
+	void (*unlock)(struct mem_cgroup *src, struct mem_cgroup *dst);
+};
+
+#define DEFINE_MEMCG_REPARENT_OPS(name)					\
+	const struct memcg_reparent_ops memcg_##name##_reparent_ops = {	\
+		.lock = name##_reparent_lock,				\
+		.relocate = name##_reparent_relocate,			\
+		.unlock = name##_reparent_unlock,			\
+	}
+
+#define DECLARE_MEMCG_REPARENT_OPS(name)				\
+	extern const struct memcg_reparent_ops memcg_##name##_reparent_ops
+
 /*
  * size of first charge trial. "32" comes from vmscan.c's magic value.
  * TODO: maybe necessary to use big numbers in big irons.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4cc392741753..059188eeb80c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -337,24 +337,60 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }

-static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
+static void objcg_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	spin_lock(&objcg_lock);
+}
+
+static void objcg_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
 {
 	struct obj_cgroup *objcg, *iter;
-	struct mem_cgroup *parent = parent_mem_cgroup(memcg);

-	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
+	objcg = rcu_replace_pointer(src->objcg, NULL, true);
+	/* 1) Ready to reparent active objcg. */
+	list_add(&objcg->list, &src->objcg_list);
+	/* 2) Reparent active objcg and already reparented objcgs to dst. */
+	list_for_each_entry(iter, &src->objcg_list, list)
+		WRITE_ONCE(iter->memcg, dst);
+	/* 3) Move already reparented objcgs to the dst's list. */
+	list_splice(&src->objcg_list, &dst->objcg_list);
+}
+
+static void objcg_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	spin_unlock(&objcg_lock);
+}

-	spin_lock_irq(&objcg_lock);
+static DEFINE_MEMCG_REPARENT_OPS(objcg);

-	/* 1) Ready to reparent active objcg. */
-	list_add(&objcg->list, &memcg->objcg_list);
-	/* 2) Reparent active objcg and already reparented objcgs to parent. */
-	list_for_each_entry(iter, &memcg->objcg_list, list)
-		WRITE_ONCE(iter->memcg, parent);
-	/* 3) Move already reparented objcgs to the parent's list */
-	list_splice(&memcg->objcg_list, &parent->objcg_list);
-
-	spin_unlock_irq(&objcg_lock);
+static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
+	&memcg_objcg_reparent_ops,
+};
+
+#define DEFINE_MEMCG_REPARENT_FUNC(phase)				\
+	static void memcg_reparent_##phase(struct mem_cgroup *src,	\
+					   struct mem_cgroup *dst)	\
+	{								\
+		int i;							\
+									\
+		for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)	\
+			memcg_reparent_ops[i]->phase(src, dst);		\
+	}
+
+DEFINE_MEMCG_REPARENT_FUNC(lock)
+DEFINE_MEMCG_REPARENT_FUNC(relocate)
+DEFINE_MEMCG_REPARENT_FUNC(unlock)
+
+static void memcg_reparent_objcgs(struct mem_cgroup *src)
+{
+	struct mem_cgroup *dst = parent_mem_cgroup(src);
+	struct obj_cgroup *objcg = rcu_dereference_protected(src->objcg, true);
+
+	local_irq_disable();
+	memcg_reparent_lock(src, dst);
+	memcg_reparent_relocate(src, dst);
+	memcg_reparent_unlock(src, dst);
+	local_irq_enable();

 	percpu_ref_kill(&objcg->refcnt);
 }
-- 
2.11.0

From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song
Subject: [PATCH v5 09/11] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
Date: Mon, 30 May 2022 15:49:17 +0800
Message-Id: <20220530074919.46352-10-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

We will reuse the obj_cgroup APIs to charge the LRU pages. Afterwards, page->memcg_data has two different meanings:

  - For slab pages, page->memcg_data points to an object cgroups vector.
  - For kmem pages (excluding slab pages) and LRU pages, page->memcg_data points to an object cgroup.

In this patch, we reuse the obj_cgroup APIs to charge LRU pages. As a result, long-living page cache objects no longer pin the original memory cgroup in memory. At the same time, the rules for the stability of the page-to-objcg and page-to-memcg bindings change. The new rules are as follows.
For a page, any of the following ensures page and objcg binding stability:

  - the page lock
  - LRU isolation
  - lock_page_memcg()
  - exclusive reference

Based on the stable binding of page and objcg, for a page any of the following ensures page and memcg binding stability:

  - objcg_lock
  - cgroup_mutex
  - the lruvec lock
  - the split queue lock (THP pages only)

If the caller only wants the page counters of the memcg to be updated correctly, the binding stability of page and objcg is sufficient.

Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
Reviewed-by: Michal Koutný
---
 include/linux/memcontrol.h |  89 +++++---------
 mm/huge_memory.c           |  35 ++++++
 mm/memcontrol.c            | 289 ++++++++++++++++++++++++++++++++-----------
 3 files changed, 276 insertions(+), 137 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c2ac98a0ece4..e3a4354e20da 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -386,8 +386,6 @@ enum page_memcg_data_flags {

 #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)

-static inline bool folio_memcg_kmem(struct folio *folio);
-
 /*
  * After the initialization objcg->memcg is always pointing at
  * a valid memcg, but can be atomically swapped to the parent memcg.
@@ -401,43 +399,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 }

 /*
- * __folio_memcg - Get the memory cgroup associated with a non-kmem folio
- * @folio: Pointer to the folio.
- *
- * Returns a pointer to the memory cgroup associated with the folio,
- * or NULL. This function assumes that the folio is known to have a
- * proper memory cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * kmem folios.
- */
-static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
-{
-	unsigned long memcg_data = folio->memcg_data;
-
-	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
-
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-}
-
-/*
- * __folio_objcg - get the object cgroup associated with a kmem folio.
+ * folio_objcg - get the object cgroup associated with a folio.
  * @folio: Pointer to the folio.
  *
  * Returns a pointer to the object cgroup associated with the folio,
  * or NULL. This function assumes that the folio is known to have a
- * proper object cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * LRU folios.
+ * proper object cgroup pointer.
  */
-static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
+static inline struct obj_cgroup *folio_objcg(struct folio *folio)
 {
 	unsigned long memcg_data = folio->memcg_data;

 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
 	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
-	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);

 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
@@ -451,22 +425,33 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  * proper memory cgroup pointer. It's not safe to call this function
  * against some type of folios, e.g. slab folios or ex-slab folios.
* - * For a non-kmem folio any of the following ensures folio and memcg bindi= ng - * stability: + * For a folio any of the following ensures folio and objcg binding stabil= ity: * * - the folio lock * - LRU isolation * - lock_page_memcg() * - exclusive reference * + * Based on the stable binding of folio and objcg, for a folio any of the + * following ensures folio and memcg binding stability: + * + * - objcg_lock + * - cgroup_mutex + * - the lruvec lock + * - the split queue lock (only THP page) + * + * If the caller only want to ensure that the page counters of memcg are + * updated correctly, ensure that the binding stability of folio and objcg + * is sufficient. + * * Note: The caller should hold an rcu read lock to protect memcg associat= ed * with a folio from being released. */ static inline struct mem_cgroup *folio_memcg(struct folio *folio) { - if (folio_memcg_kmem(folio)) - return obj_cgroup_memcg(__folio_objcg(folio)); - return __folio_memcg(folio); + struct obj_cgroup *objcg =3D folio_objcg(folio); + + return objcg ? obj_cgroup_memcg(objcg) : NULL; } =20 /* @@ -490,6 +475,8 @@ static inline struct mem_cgroup *page_memcg(struct page= *page) * folio is known to have a proper memory cgroup pointer. It's not safe * to call this function against some type of pages, e.g. slab pages or * ex-slab pages. + * + * The page and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *f= olio) { @@ -520,22 +507,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_= page(struct page *page) * * Return: A pointer to the memory cgroup associated with the folio, * or NULL. + * + * The folio and objcg or memcg binding rules can refer to folio_memcg(). */ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) { unsigned long memcg_data =3D READ_ONCE(folio->memcg_data); + struct obj_cgroup *objcg; =20 VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); WARN_ON_ONCE(!rcu_read_lock_held()); =20 - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; + objcg =3D (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); =20 - objcg =3D (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } - - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } =20 /* @@ -548,16 +533,10 @@ static inline struct mem_cgroup *folio_memcg_rcu(stru= ct folio *folio) * has an associated memory cgroup pointer or an object cgroups vector or * an object cgroup. * - * For a non-kmem page any of the following ensures page and memcg binding - * stability: + * The page and objcg or memcg binding rules can refer to page_memcg(). * - * - the page lock - * - LRU isolation - * - lock_page_memcg() - * - exclusive reference - * - * For a kmem page a caller should hold an rcu read lock to protect memcg - * associated with a kmem page from being released. + * A caller should hold an rcu read lock to protect memcg associated with a + * page from being released. */ static inline struct mem_cgroup *page_memcg_check(struct page *page) { @@ -566,18 +545,14 @@ static inline struct mem_cgroup *page_memcg_check(str= uct page *page) * for slab pages, READ_ONCE() should be used here. 
*/ unsigned long memcg_data =3D READ_ONCE(page->memcg_data); + struct obj_cgroup *objcg; =20 if (memcg_data & MEMCG_DATA_OBJCGS) return NULL; =20 - if (memcg_data & MEMCG_DATA_KMEM) { - struct obj_cgroup *objcg; - - objcg =3D (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); - return obj_cgroup_memcg(objcg); - } + objcg =3D (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); =20 - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return objcg ? obj_cgroup_memcg(objcg) : NULL; } =20 static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgro= up *objcg) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index d3411dc291ab..931d0c2ce062 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -503,6 +503,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struc= t *vma) } =20 #ifdef CONFIG_MEMCG +static struct shrinker deferred_split_shrinker; + static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *fol= io, struct deferred_split *queue) { @@ -519,6 +521,39 @@ static inline struct deferred_split *folio_memcg_split= _queue(struct folio *folio =20 return memcg ? &memcg->deferred_split_queue : NULL; } + +static void thp_sq_reparent_lock(struct mem_cgroup *src, struct mem_cgroup= *dst) +{ + spin_lock(&src->deferred_split_queue.split_queue_lock); + spin_lock_nested(&dst->deferred_split_queue.split_queue_lock, + SINGLE_DEPTH_NESTING); +} + +static void thp_sq_reparent_relocate(struct mem_cgroup *src, struct mem_cg= roup *dst) +{ + int nid; + struct deferred_split *src_queue, *dst_queue; + + src_queue =3D &src->deferred_split_queue; + dst_queue =3D &dst->deferred_split_queue; + + if (!src_queue->split_queue_len) + return; + + list_splice_tail_init(&src_queue->split_queue, &dst_queue->split_queue); + dst_queue->split_queue_len +=3D src_queue->split_queue_len; + src_queue->split_queue_len =3D 0; + + for_each_node(nid) + set_shrinker_bit(dst, nid, deferred_split_shrinker.id); +} + +static void thp_sq_reparent_unlock(struct mem_cgroup *src, struct mem_cgro= up *dst) +{ + spin_unlock(&dst->deferred_split_queue.split_queue_lock); + spin_unlock(&src->deferred_split_queue.split_queue_lock); +} +DEFINE_MEMCG_REPARENT_OPS(thp_sq); #else static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *fol= io, struct deferred_split *queue) diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 059188eeb80c..f4db3cb2aedc 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -76,6 +76,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly; EXPORT_SYMBOL(memory_cgrp_subsys); =20 struct mem_cgroup *root_mem_cgroup __read_mostly; +static struct obj_cgroup *root_obj_cgroup __read_mostly; =20 /* Active memory cgroup to use from an interrupt context */ DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg); @@ -256,6 +257,11 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressu= re *vmpr) =20 static DEFINE_SPINLOCK(objcg_lock); =20 +static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg) +{ + return objcg =3D=3D root_obj_cgroup; +} + #ifdef CONFIG_MEMCG_KMEM bool mem_cgroup_kmem_disabled(void) { @@ -363,8 +369,77 @@ static void objcg_reparent_unlock(struct mem_cgroup *s= rc, struct mem_cgroup *dst =20 static DEFINE_MEMCG_REPARENT_OPS(objcg); =20 +static void lruvec_reparent_lock(struct mem_cgroup *src, struct mem_cgroup= *dst) +{ + int nid, nest =3D 0; + + for_each_node(nid) { + spin_lock_nested(&mem_cgroup_lruvec(src, + NODE_DATA(nid))->lru_lock, nest++); + spin_lock_nested(&mem_cgroup_lruvec(dst, + NODE_DATA(nid))->lru_lock, nest++); + } +} + +static void 
lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst, + enum lru_list lru) +{ + int zid; + struct mem_cgroup_per_node *mz_src, *mz_dst; + + mz_src =3D container_of(src, struct mem_cgroup_per_node, lruvec); + mz_dst =3D container_of(dst, struct mem_cgroup_per_node, lruvec); + + if (lru !=3D LRU_UNEVICTABLE) + list_splice_tail_init(&src->lists[lru], &dst->lists[lru]); + + for (zid =3D 0; zid < MAX_NR_ZONES; zid++) { + mz_dst->lru_zone_size[zid][lru] +=3D mz_src->lru_zone_size[zid][lru]; + mz_src->lru_zone_size[zid][lru] =3D 0; + } +} + +static void lruvec_reparent_relocate(struct mem_cgroup *src, struct mem_cg= roup *dst) +{ + int nid; + + for_each_node(nid) { + enum lru_list lru; + struct lruvec *src_lruvec, *dst_lruvec; + + src_lruvec =3D mem_cgroup_lruvec(src, NODE_DATA(nid)); + dst_lruvec =3D mem_cgroup_lruvec(dst, NODE_DATA(nid)); + + dst_lruvec->anon_cost +=3D src_lruvec->anon_cost; + dst_lruvec->file_cost +=3D src_lruvec->file_cost; + + for_each_lru(lru) + lruvec_reparent_lru(src_lruvec, dst_lruvec, lru); + } +} + +static void lruvec_reparent_unlock(struct mem_cgroup *src, struct mem_cgro= up *dst) +{ + int nid; + + for_each_node(nid) { + spin_unlock(&mem_cgroup_lruvec(dst, NODE_DATA(nid))->lru_lock); + spin_unlock(&mem_cgroup_lruvec(src, NODE_DATA(nid))->lru_lock); + } +} + +static DEFINE_MEMCG_REPARENT_OPS(lruvec); + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +DECLARE_MEMCG_REPARENT_OPS(thp_sq); +#endif + static const struct memcg_reparent_ops *memcg_reparent_ops[] =3D { &memcg_objcg_reparent_ops, + &memcg_lruvec_reparent_ops, +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + &memcg_thp_sq_reparent_ops, +#endif }; =20 #define DEFINE_MEMCG_REPARENT_FUNC(phase) \ @@ -2818,18 +2893,33 @@ static inline void cancel_charge(struct mem_cgroup = *memcg, unsigned int nr_pages page_counter_uncharge(&memcg->memsw, nr_pages); } =20 -static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) +static void commit_charge(struct folio *folio, struct obj_cgroup *objcg) { - VM_BUG_ON_FOLIO(folio_memcg(folio), folio); + VM_BUG_ON_FOLIO(folio_objcg(folio), folio); /* - * Any of the following ensures page's memcg stability: + * Any of the following ensures page's objcg stability: * * - the page lock * - LRU isolation * - lock_page_memcg() * - exclusive reference */ - folio->memcg_data =3D (unsigned long)memcg; + folio->memcg_data =3D (unsigned long)objcg; +} + +static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *m= emcg) +{ + struct obj_cgroup *objcg =3D NULL; + + rcu_read_lock(); + for (; memcg; memcg =3D parent_mem_cgroup(memcg)) { + objcg =3D rcu_dereference(memcg->objcg); + if (objcg && obj_cgroup_tryget(objcg)) + break; + } + rcu_read_unlock(); + + return objcg; } =20 #ifdef CONFIG_MEMCG_KMEM @@ -2960,12 +3050,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_f= rom_current(void) else memcg =3D mem_cgroup_from_task(current); =20 - for (; memcg !=3D root_mem_cgroup; memcg =3D parent_mem_cgroup(memcg)) { - objcg =3D rcu_dereference(memcg->objcg); - if (objcg && obj_cgroup_tryget(objcg)) - break; + if (mem_cgroup_is_root(memcg)) + goto out; + + objcg =3D __get_obj_cgroup_from_memcg(memcg); + if (obj_cgroup_is_root(objcg)) { + obj_cgroup_put(objcg); objcg =3D NULL; } +out: rcu_read_unlock(); =20 return objcg; @@ -3062,13 +3155,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp= _t gfp, int order) void __memcg_kmem_uncharge_page(struct page *page, int order) { struct folio *folio =3D page_folio(page); - struct obj_cgroup *objcg; + struct obj_cgroup *objcg =3D 
folio_objcg(folio);
 	unsigned int nr_pages = 1 << order;

-	if (!folio_memcg_kmem(folio))
+	if (!objcg)
 		return;

-	objcg = __folio_objcg(folio);
+	VM_BUG_ON_FOLIO(!folio_memcg_kmem(folio), folio);
 	obj_cgroup_uncharge_pages(objcg, nr_pages);
 	folio->memcg_data = 0;
 	obj_cgroup_put(objcg);
@@ -3322,26 +3415,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 #endif /* CONFIG_MEMCG_KMEM */

 /*
- * Because page_memcg(head) is not set on tails, set it now.
+ * Because page_objcg(head) is not set on tails, set it now.
  */
 void split_page_memcg(struct page *head, unsigned int nr)
 {
 	struct folio *folio = page_folio(head);
-	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
+	struct obj_cgroup *objcg = folio_objcg(folio);
 	int i;

-	if (mem_cgroup_disabled() || !memcg)
+	if (mem_cgroup_disabled() || !objcg)
 		return;

 	for (i = 1; i < nr; i++)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;

-	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
-	else
-		css_get_many(&memcg->css, nr - 1);
-
-	css_put(&memcg->css);
+	obj_cgroup_get_many(objcg, nr - 1);
 }

 #ifdef CONFIG_MEMCG_SWAP
@@ -5238,6 +5326,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	objcg->memcg = memcg;
 	rcu_assign_pointer(memcg->objcg, objcg);

+	if (unlikely(mem_cgroup_is_root(memcg)))
+		root_obj_cgroup = objcg;
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
@@ -5642,10 +5733,12 @@ static int mem_cgroup_move_account(struct page *page,
 	 */
 	smp_mb();

-	css_get(&to->css);
-	css_put(&from->css);
+	rcu_read_lock();
+	obj_cgroup_get(rcu_dereference(to->objcg));
+	obj_cgroup_put(rcu_dereference(from->objcg));
+	rcu_read_unlock();

-	folio->memcg_data = (unsigned long)to;
+	folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);

 	__folio_memcg_unlock(from);

@@ -6118,6 +6211,42 @@ static void mem_cgroup_move_charge(void)

 	mmap_read_unlock(mc.mm);
 	atomic_dec(&mc.from->moving_account);
+
+	/*
+	 * Moving its pages to another memcg is finished. Wait for already
+	 * started RCU-only updates to finish to make sure that the caller
+	 * of lock_page_memcg() can unlock the correct move_lock. A possible
+	 * bad scenario would be:
+	 *
+	 * CPU0:                          CPU1:
+	 * mem_cgroup_move_charge()
+	 *     walk_page_range()
+	 *
+	 *                                lock_page_memcg(page)
+	 *                                memcg = folio_memcg()
+	 *                                spin_lock_irqsave(&memcg->move_lock)
+	 *                                memcg->move_lock_task = current
+	 *
+	 * atomic_dec(&mc.from->moving_account)
+	 *
+	 * mem_cgroup_css_offline()
+	 *     memcg_offline_kmem()
+	 *         memcg_reparent_objcgs() <== reparented
+	 *
+	 *                                unlock_page_memcg(page)
+	 *                                memcg = folio_memcg() <== memcg has been changed
+	 *                                if (memcg->move_lock_task == current) <== false
+	 *                                    spin_unlock_irqrestore(&memcg->move_lock)
+	 *
+	 * Once mem_cgroup_move_charge() returns (meaning the cgroup_mutex
+	 * will be released soon), the page can be reparented to its parent
+	 * memcg. When unlock_page_memcg() is then called for the page, we
+	 * would miss unlocking the move_lock. So use synchronize_rcu() to
+	 * wait for already started RCU-only updates to finish before this
+	 * function returns (mem_cgroup_move_charge() and
+	 * mem_cgroup_css_offline() are serialized by cgroup_mutex).
+ */ + synchronize_rcu(); } =20 /* @@ -6673,21 +6802,26 @@ void mem_cgroup_calculate_protection(struct mem_cgr= oup *root, static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg, gfp_t gfp) { + struct obj_cgroup *objcg; long nr_pages =3D folio_nr_pages(folio); - int ret; + int ret =3D 0; =20 - ret =3D try_charge(memcg, gfp, nr_pages); + objcg =3D __get_obj_cgroup_from_memcg(memcg); + /* Do not account at the root objcg level. */ + if (!obj_cgroup_is_root(objcg)) + ret =3D try_charge(memcg, gfp, nr_pages); if (ret) goto out; =20 - css_get(&memcg->css); - commit_charge(folio, memcg); + obj_cgroup_get(objcg); + commit_charge(folio, objcg); =20 local_irq_disable(); mem_cgroup_charge_statistics(memcg, nr_pages); memcg_check_events(memcg, folio_nid(folio)); local_irq_enable(); out: + obj_cgroup_put(objcg); return ret; } =20 @@ -6773,7 +6907,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entr= y) } =20 struct uncharge_gather { - struct mem_cgroup *memcg; + struct obj_cgroup *objcg; unsigned long nr_memory; unsigned long pgpgout; unsigned long nr_kmem; @@ -6788,63 +6922,56 @@ static inline void uncharge_gather_clear(struct unc= harge_gather *ug) static void uncharge_batch(const struct uncharge_gather *ug) { unsigned long flags; + struct mem_cgroup *memcg; =20 + rcu_read_lock(); + memcg =3D obj_cgroup_memcg(ug->objcg); if (ug->nr_memory) { - page_counter_uncharge(&ug->memcg->memory, ug->nr_memory); + page_counter_uncharge(&memcg->memory, ug->nr_memory); if (do_memsw_account()) - page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory); + page_counter_uncharge(&memcg->memsw, ug->nr_memory); if (ug->nr_kmem) - memcg_account_kmem(ug->memcg, -ug->nr_kmem); - memcg_oom_recover(ug->memcg); + memcg_account_kmem(memcg, -ug->nr_kmem); + memcg_oom_recover(memcg); } =20 local_irq_save(flags); - __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout); - __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory); - memcg_check_events(ug->memcg, ug->nid); + __count_memcg_events(memcg, PGPGOUT, ug->pgpgout); + __this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory); + memcg_check_events(memcg, ug->nid); local_irq_restore(flags); + rcu_read_unlock(); =20 /* drop reference from uncharge_folio */ - css_put(&ug->memcg->css); + obj_cgroup_put(ug->objcg); } =20 static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug) { long nr_pages; - struct mem_cgroup *memcg; struct obj_cgroup *objcg; =20 VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); =20 /* * Nobody should be changing or seriously looking at - * folio memcg or objcg at this point, we have fully - * exclusive access to the folio. + * folio objcg at this point, we have fully exclusive + * access to the folio. */ - if (folio_memcg_kmem(folio)) { - objcg =3D __folio_objcg(folio); - /* - * This get matches the put at the end of the function and - * kmem pages do not hold memcg references anymore. 
- */ - memcg =3D get_mem_cgroup_from_objcg(objcg); - } else { - memcg =3D __folio_memcg(folio); - } - - if (!memcg) + objcg =3D folio_objcg(folio); + if (!objcg) return; =20 - if (ug->memcg !=3D memcg) { - if (ug->memcg) { + if (ug->objcg !=3D objcg) { + if (ug->objcg) { uncharge_batch(ug); uncharge_gather_clear(ug); } - ug->memcg =3D memcg; + ug->objcg =3D objcg; ug->nid =3D folio_nid(folio); =20 - /* pairs with css_put in uncharge_batch */ - css_get(&memcg->css); + /* pairs with obj_cgroup_put in uncharge_batch */ + obj_cgroup_get(objcg); } =20 nr_pages =3D folio_nr_pages(folio); @@ -6852,19 +6979,15 @@ static void uncharge_folio(struct folio *folio, str= uct uncharge_gather *ug) if (folio_memcg_kmem(folio)) { ug->nr_memory +=3D nr_pages; ug->nr_kmem +=3D nr_pages; - - folio->memcg_data =3D 0; - obj_cgroup_put(objcg); } else { /* LRU pages aren't accounted at the root level */ - if (!mem_cgroup_is_root(memcg)) + if (!obj_cgroup_is_root(objcg)) ug->nr_memory +=3D nr_pages; ug->pgpgout++; - - folio->memcg_data =3D 0; } =20 - css_put(&memcg->css); + folio->memcg_data =3D 0; + obj_cgroup_put(objcg); } =20 void __mem_cgroup_uncharge(struct folio *folio) @@ -6872,7 +6995,7 @@ void __mem_cgroup_uncharge(struct folio *folio) struct uncharge_gather ug; =20 /* Don't touch folio->lru of any random page, pre-check: */ - if (!folio_memcg(folio)) + if (!folio_objcg(folio)) return; =20 uncharge_gather_clear(&ug); @@ -6895,7 +7018,7 @@ void __mem_cgroup_uncharge_list(struct list_head *pag= e_list) uncharge_gather_clear(&ug); list_for_each_entry(folio, page_list, lru) uncharge_folio(folio, &ug); - if (ug.memcg) + if (ug.objcg) uncharge_batch(&ug); } =20 @@ -6912,6 +7035,7 @@ void __mem_cgroup_uncharge_list(struct list_head *pag= e_list) void mem_cgroup_migrate(struct folio *old, struct folio *new) { struct mem_cgroup *memcg; + struct obj_cgroup *objcg; long nr_pages =3D folio_nr_pages(new); unsigned long flags; =20 @@ -6924,30 +7048,33 @@ void mem_cgroup_migrate(struct folio *old, struct f= olio *new) return; =20 /* Page cache replacement: new folio already charged? */ - if (folio_memcg(new)) + if (folio_objcg(new)) return; =20 - memcg =3D get_mem_cgroup_from_folio(old); - VM_WARN_ON_ONCE_FOLIO(!memcg, old); - if (!memcg) + objcg =3D folio_objcg(old); + VM_WARN_ON_ONCE_FOLIO(!objcg, old); + if (!objcg) return; =20 + rcu_read_lock(); + memcg =3D obj_cgroup_memcg(objcg); + /* Force-charge the new page. 
The old one will be freed soon */
-	if (!mem_cgroup_is_root(memcg)) {
+	if (!obj_cgroup_is_root(objcg)) {
 		page_counter_charge(&memcg->memory, nr_pages);
 		if (do_memsw_account())
 			page_counter_charge(&memcg->memsw, nr_pages);
 	}

-	css_get(&memcg->css);
-	commit_charge(new, memcg);
+	obj_cgroup_get(objcg);
+	commit_charge(new, objcg);

 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);

-	css_put(&memcg->css);
+	rcu_read_unlock();
 }

 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7120,6 +7247,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 {
 	struct mem_cgroup *memcg, *swap_memcg;
+	struct obj_cgroup *objcg;
 	unsigned int nr_entries;
 	unsigned short oldid;

@@ -7132,15 +7260,16 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;

+	objcg = folio_objcg(folio);
+	VM_WARN_ON_ONCE_FOLIO(!objcg, folio);
+	if (!objcg)
+		return;
+
 	/*
 	 * Interrupts should be disabled by the caller (see the comments below),
 	 * which can serve as RCU read-side critical sections.
 	 */
-	memcg = folio_memcg(folio);
-
-	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
-	if (!memcg)
-		return;
+	memcg = obj_cgroup_memcg(objcg);

 	/*
 	 * In case the memcg owning these pages has been offlined and doesn't
@@ -7159,7 +7288,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)

 	folio->memcg_data = 0;

-	if (!mem_cgroup_is_root(memcg))
+	if (!obj_cgroup_is_root(objcg))
 		page_counter_uncharge(&memcg->memory, nr_entries);

 	if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7179,7 +7308,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	memcg_stats_unlock();
 	memcg_check_events(memcg, folio_nid(folio));

-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }

 /**
-- 
2.11.0
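With this patch applied, the ownership chain for every charged folio is folio->memcg_data -> objcg -> memcg. A simplified sketch of what folio_memcg() reduces to follows (flag-bit masking and the slab special case are omitted; see the real implementation in the diff above):

	static inline struct mem_cgroup *folio_memcg_sketch(struct folio *folio)
	{
		struct obj_cgroup *objcg = folio_objcg(folio);

		/*
		 * objcg->memcg is switched to the parent memcg on offline,
		 * hence the binding stability rules listed in the commit
		 * message above.
		 */
		return objcg ? obj_cgroup_memcg(objcg) : NULL;
	}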
-- 
2.11.0

From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song 
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song 
Subject: [PATCH v5 10/11] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
Date: Mon, 30 May 2022 15:49:18 +0800
Message-Id: <20220530074919.46352-11-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

We need to make sure that a page is deleted from or added to the
correct lruvec list, so add a VM_WARN_ON_ONCE_FOLIO() to the lru
maintenance helpers to catch invalid users. The VM_BUG_ON_PAGE() in
move_pages_to_lru() can then be removed, since
add_page_to_lru_list() now performs the same check.
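The added check is a plain invariant assertion at the top of each list-maintenance helper. A standalone sketch of the idea follows, with warn_once() standing in for VM_WARN_ON_ONCE_FOLIO() and deliberately simplified stand-in types; this is not the kernel code:

/*
 * Sketch: warn (once) if a caller hands us a folio that does not
 * belong to the lruvec it names. Types and names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

struct lruvec { int id; };
struct folio  { int lruvec_id; };

static bool folio_matches_lruvec(const struct folio *folio,
				 const struct lruvec *lruvec)
{
	return folio->lruvec_id == lruvec->id;
}

#define warn_once(cond) do {						\
	static bool warned;						\
	if ((cond) && !warned) {					\
		warned = true;						\
		fprintf(stderr, "once: %s failed\n", #cond);		\
	}								\
} while (0)

static void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
{
	/* Assert the invariant before any list surgery. */
	warn_once(!folio_matches_lruvec(folio, lruvec));
	/* ...update_lru_size() and list_add() would follow here... */
}

int main(void)
{
	struct lruvec v = { 1 };
	struct folio ok = { 1 }, bad = { 2 };

	lruvec_add_folio(&v, &ok);	/* silent */
	lruvec_add_folio(&v, &bad);	/* warns, once */
	lruvec_add_folio(&v, &bad);	/* suppressed */
	return 0;
}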
Signed-off-by: Muchun Song 
Acked-by: Roman Gushchin 
---
 include/linux/mm_inline.h | 6 ++++++
 mm/vmscan.c               | 1 -
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ac32125745ab..e13e56c7fdbd 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -97,6 +97,8 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 {
 	enum lru_list lru = folio_lru_list(folio);
 
+	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
 			folio_nr_pages(folio));
 	if (lru != LRU_UNEVICTABLE)
@@ -114,6 +116,8 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 {
 	enum lru_list lru = folio_lru_list(folio);
 
+	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
 			folio_nr_pages(folio));
 	/* This is not expected to be used on LRU_UNEVICTABLE */
@@ -131,6 +135,8 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 {
 	enum lru_list lru = folio_lru_list(folio);
 
+	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
 	if (lru != LRU_UNEVICTABLE)
 		list_del(&folio->lru);
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 67f1462b150d..51853d6df7b4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2277,7 +2277,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
 			continue;
 		}
 
-		VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
 		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
-- 
2.11.0
From nobody Tue Apr 28 19:32:30 2026
From: Muchun Song 
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com, akpm@linux-foundation.org
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song 
Subject: [PATCH v5 11/11] mm: lru: use lruvec lock to serialize memcg changes
Date: Mon, 30 May 2022 15:49:19 +0800
Message-Id: <20220530074919.46352-12-songmuchun@bytedance.com>
In-Reply-To: <20220530074919.46352-1-songmuchun@bytedance.com>
References: <20220530074919.46352-1-songmuchun@bytedance.com>

As described by commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to
serialize mem_cgroup_move_account() during pagevec_lru_move_fn().
Now folio_lruvec_lock*() can detect whether the page memcg has been
changed, so the lruvec lock itself can serialize
mem_cgroup_move_account() during pagevec_lru_move_fn(). This is a
partial revert of commit fc574c23558c.

Since pagevec_lru_move_fn() is much hotter than
mem_cgroup_move_account(), removing the atomic TestClearPageLRU()
from the hot path is an optimization. This change also avoids
dirtying the cacheline of a page that is not on the LRU.
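The locking scheme this relies on reduces to lock-then-revalidate: read the folio's lruvec mapping without the lock, take that lruvec's lock, then re-read the mapping under the lock and retry if the folio was moved in the window. A minimal sketch with stand-in types and a pthread mutex in place of the kernel spinlock and RCU machinery (illustrative only, not the kernel implementation):

/*
 * Sketch of the lock-then-revalidate pattern behind
 * folio_lruvec_lock(). Stand-in types; a pthread mutex replaces the
 * kernel spinlock.
 */
#include <pthread.h>

struct lruvec {
	pthread_mutex_t lru_lock;
};

struct folio {
	struct lruvec *_Atomic lruvec;	/* may be changed concurrently */
};

static struct lruvec *folio_lruvec_lock(struct folio *folio)
{
	for (;;) {
		struct lruvec *lruvec = folio->lruvec;	/* racy read */

		pthread_mutex_lock(&lruvec->lru_lock);
		/*
		 * Revalidate under the lock: movers rewrite the mapping
		 * only while holding the same lru_lock, so a mapping that
		 * still matches here can no longer change under us.
		 */
		if (lruvec == folio->lruvec)
			return lruvec;		/* locked and stable */
		/* Wrong lock: the folio moved first. Retry. */
		pthread_mutex_unlock(&lruvec->lru_lock);
	}
}

int main(void)
{
	struct lruvec v = { PTHREAD_MUTEX_INITIALIZER };
	struct folio f = { &v };
	struct lruvec *locked = folio_lruvec_lock(&f);

	pthread_mutex_unlock(&locked->lru_lock);
	return 0;
}

The sketch is sound only because the mover side cooperates: in this patch, mem_cgroup_move_account() takes the source lruvec's lru_lock around the folio->memcg_data rewrite, which is what makes the revalidation step conclusive.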
Signed-off-by: Muchun Song 
---
 mm/memcontrol.c | 34 ++++++++++++++++++++++++++++++++++
 mm/swap.c       | 45 ++++++++++++++-------------------------------
 mm/vmscan.c     |  9 ++++-----
 3 files changed, 52 insertions(+), 36 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f4db3cb2aedc..3a0f3838f02d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1333,10 +1333,39 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
 
+	/*
+	 * The memcg of the page can be changed by either of the
+	 * following routines:
+	 *
+	 * 1) mem_cgroup_move_account() or
+	 * 2) memcg_reparent_objcgs()
+	 *
+	 * A possible bad scenario looks like this:
+	 *
+	 * CPU0:                CPU1:                CPU2:
+	 * lruvec = folio_lruvec()
+	 *
+	 *                      if (!isolate_lru_page())
+	 *                              mem_cgroup_move_account()
+	 *
+	 *                                           memcg_reparent_objcgs()
+	 *
+	 * spin_lock(&lruvec->lru_lock)
+	 *                ^^^^^^
+	 *              wrong lock
+	 *
+	 * Either CPU1 or CPU2 can change the page memcg, so we need to
+	 * check whether the page memcg has changed and, if so, reacquire
+	 * the new lruvec lock.
+	 */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock(&lruvec->lru_lock);
 		goto retry;
 	}
+
+	/*
+	 * When we reach here, it means that the folio_memcg(folio) is
+	 * stable.
+	 */
 	rcu_read_unlock();
 
 	return lruvec;
@@ -1364,6 +1393,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irq(&lruvec->lru_lock);
 		goto retry;
@@ -1397,6 +1427,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
 		goto retry;
@@ -5738,7 +5769,10 @@ static int mem_cgroup_move_account(struct page *page,
 	obj_cgroup_put(rcu_dereference(from->objcg));
 	rcu_read_unlock();
 
+	/* See the comments in folio_lruvec_lock(). */
+	spin_lock(&from_vec->lru_lock);
 	folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);
+	spin_unlock(&from_vec->lru_lock);
 
 	__folio_memcg_unlock(from);
 
diff --git a/mm/swap.c b/mm/swap.c
index 6cea469b6ff2..1b893c157bd1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -199,14 +199,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 		struct page *page = pvec->pages[i];
 		struct folio *folio = page_folio(page);
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
-			continue;
-
 		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
 		(*move_fn)(page, lruvec);
-
-		SetPageLRU(page);
 	}
 	if (lruvec)
 		lruvec_unlock_irqrestore(lruvec, flags);
@@ -218,7 +212,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 {
 	struct folio *folio = page_folio(page);
 
-	if (!folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_unevictable(folio)) {
 		lruvec_del_folio(lruvec, folio);
 		folio_clear_active(folio);
 		lruvec_add_folio_tail(lruvec, folio);
@@ -314,7 +308,8 @@ void lru_note_cost_folio(struct folio *folio)
 
 static void __folio_activate(struct folio *folio, struct lruvec *lruvec)
 {
-	if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_active(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
@@ -371,12 +366,9 @@ static void folio_activate(struct folio *folio)
 {
 	struct lruvec *lruvec;
 
-	if (folio_test_clear_lru(folio)) {
-		lruvec = folio_lruvec_lock_irq(folio);
-		__folio_activate(folio, lruvec);
-		lruvec_unlock_irq(lruvec);
-		folio_set_lru(folio);
-	}
+	lruvec = folio_lruvec_lock_irq(folio);
+	__folio_activate(folio, lruvec);
+	lruvec_unlock_irq(lruvec);
 }
 #endif
 
@@ -519,6 +511,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 	bool active = PageActive(page);
 	int nr_pages = thp_nr_pages(page);
 
+	if (!PageLRU(page))
+		return;
+
 	if (PageUnevictable(page))
 		return;
 
@@ -556,7 +551,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageActive(page) && !PageUnevictable(page)) {
+	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int nr_pages = thp_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec);
@@ -572,7 +567,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 
 static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageAnon(page) && PageSwapBacked(page) &&
+	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		int nr_pages = thp_nr_pages(page);
 
@@ -1007,8 +1002,9 @@ void __pagevec_release(struct pagevec *pvec)
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 {
+	struct folio *folio = page_folio(page);
 	int was_unevictable = folio_test_clear_unevictable(folio);
 	long nr_pages = folio_nr_pages(folio);
 
@@ -1054,20 +1050,7 @@ static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
  */
 void __pagevec_lru_add(struct pagevec *pvec)
 {
-	int i;
-	struct lruvec *lruvec = NULL;
-	unsigned long flags = 0;
-
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct folio *folio = page_folio(pvec->pages[i]);
-
-		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
-		__pagevec_lru_add_fn(folio, lruvec);
-	}
-	if (lruvec)
-		lruvec_unlock_irqrestore(lruvec, flags);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
 }
 
 /**
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 51853d6df7b4..c591d071a598 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4789,18 +4789,17 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		nr_pages = thp_nr_pages(page);
 		pgscanned += nr_pages;
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
+		lruvec = folio_lruvec_relock_irq(folio, lruvec);
+
+		if (!PageLRU(page) || !PageUnevictable(page))
 			continue;
 
-		lruvec = folio_lruvec_relock_irq(folio, lruvec);
-		if (page_evictable(page) && PageUnevictable(page)) {
+		if (page_evictable(page)) {
 			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
 			add_page_to_lru_list(page, lruvec);
 			pgrescued += nr_pages;
 		}
-		SetPageLRU(page);
 	}
 
 	if (lruvec) {
-- 
2.11.0
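The folio_lruvec_relock_irq()/folio_lruvec_relock_irqsave() calls used in both hunks above follow one batching pattern: keep at most one lruvec lock held while walking a pagevec, and cycle the lock only when the next folio maps to a different lruvec. A self-contained sketch with the same stand-in types as the previous example (illustrative only, not the kernel implementation):

/*
 * Sketch of the relock batching pattern: hold one lruvec lock across
 * a batch, switching locks only when the lruvec changes. Stand-in
 * types; pthread mutexes replace the kernel spinlocks.
 */
#include <pthread.h>

struct lruvec {
	pthread_mutex_t lru_lock;
};

struct folio {
	struct lruvec *_Atomic lruvec;
};

/* Lock-and-revalidate, as sketched after the patch description above. */
static struct lruvec *folio_lruvec_lock(struct folio *folio)
{
	for (;;) {
		struct lruvec *lruvec = folio->lruvec;

		pthread_mutex_lock(&lruvec->lru_lock);
		if (lruvec == folio->lruvec)
			return lruvec;
		pthread_mutex_unlock(&lruvec->lru_lock);
	}
}

static struct lruvec *folio_lruvec_relock(struct folio *folio,
					  struct lruvec *locked)
{
	if (locked) {
		/* Under our lock the mapping is stable if it matches. */
		if (locked == folio->lruvec)
			return locked;		/* same lruvec: keep the lock */
		pthread_mutex_unlock(&locked->lru_lock);
	}
	return folio_lruvec_lock(folio);	/* lock (and revalidate) anew */
}

/* Walk a batch with at most one lock held at a time. */
static void move_batch(struct folio **folios, int nr)
{
	struct lruvec *lruvec = NULL;
	int i;

	for (i = 0; i < nr; i++) {
		lruvec = folio_lruvec_relock(folios[i], lruvec);
		/* ...delete from / add to lruvec's lists here... */
	}
	if (lruvec)
		pthread_mutex_unlock(&lruvec->lru_lock);
}

int main(void)
{
	struct lruvec v1 = { PTHREAD_MUTEX_INITIALIZER };
	struct lruvec v2 = { PTHREAD_MUTEX_INITIALIZER };
	struct folio f1 = { &v1 }, f2 = { &v1 }, f3 = { &v2 };
	struct folio *batch[] = { &f1, &f2, &f3 };

	move_batch(batch, 3);
	return 0;
}

When a pagevec's folios cluster in the same lruvec, which is the common case, the lock is taken once per run rather than once per folio, which is what makes dropping the per-page TestClearPageLRU() affordable.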