From: Qi Zheng
To: hannes@cmpxchg.org, hughd@google.com, mhocko@suse.com, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, david@kernel.org, lorenzo.stoakes@oracle.com, ziy@nvidia.com, harry.yoo@oracle.com, imran.f.khan@oracle.com, kamalesh.babulal@oracle.com, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, chenridong@huaweicloud.com, mkoutny@suse.com, akpm@linux-foundation.org, hamzamahfooz@linux.microsoft.com, apais@linux.microsoft.com, lance.yang@linux.dev
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, Muchun Song, Qi Zheng, Chen Ridong
Subject: [PATCH v2 03/28] mm: rename unlock_page_lruvec_irq and its variants
Date: Wed, 17 Dec 2025 15:27:27 +0800

From: Muchun Song

It is inappropriate to use the folio_lruvec_lock() variants in conjunction
with the unlock_page_lruvec() variants, as this pairs the inconsistent
operations of locking a folio while unlocking a page. To rectify this,
rename unlock_page_lruvec{,_irq,_irqrestore} to
lruvec_unlock{,_irq,_irqrestore} so that the unlock helpers mirror the
lruvec-based lock side.
Signed-off-by: Muchun Song
Acked-by: Roman Gushchin
Acked-by: Johannes Weiner
Signed-off-by: Qi Zheng
Reviewed-by: Harry Yoo
Reviewed-by: Chen Ridong
Acked-by: David Hildenbrand (Red Hat)
Acked-by: Shakeel Butt
---
 include/linux/memcontrol.h | 10 +++++-----
 mm/compaction.c            | 14 +++++++-------
 mm/huge_memory.c           |  2 +-
 mm/mlock.c                 |  2 +-
 mm/swap.c                  | 12 ++++++------
 mm/vmscan.c                |  4 ++--
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6a48398a1f4e7..288dd6337f80f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1465,17 +1465,17 @@ static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
 	return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
 }
 
-static inline void unlock_page_lruvec(struct lruvec *lruvec)
+static inline void lruvec_unlock(struct lruvec *lruvec)
 {
 	spin_unlock(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irq(struct lruvec *lruvec)
+static inline void lruvec_unlock_irq(struct lruvec *lruvec)
 {
 	spin_unlock_irq(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
+static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec,
 		unsigned long flags)
 {
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
@@ -1497,7 +1497,7 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irq(locked_lruvec);
+		lruvec_unlock_irq(locked_lruvec);
 	}
 
 	return folio_lruvec_lock_irq(folio);
@@ -1511,7 +1511,7 @@ static inline void folio_lruvec_relock_irqsave(struct folio *folio,
 		if (folio_matches_lruvec(folio, *lruvecp))
 			return;
 
-		unlock_page_lruvec_irqrestore(*lruvecp, *flags);
+		lruvec_unlock_irqrestore(*lruvecp, *flags);
 	}
 
 	*lruvecp = folio_lruvec_lock_irqsave(folio, flags);
diff --git a/mm/compaction.c b/mm/compaction.c
index 1e8f8eca318c6..c3e338aaa0ffb 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -913,7 +913,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (!(low_pfn % COMPACT_CLUSTER_MAX)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -964,7 +964,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		}
 		/* for alloc_contig case */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 
@@ -1053,7 +1053,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (unlikely(page_has_movable_ops(page)) &&
 		    !PageMovableOpsIsolated(page)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -1158,7 +1158,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;
@@ -1226,7 +1226,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		folio_put(folio);
@@ -1242,7 +1242,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (nr_isolated) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
@@ -1274,7 +1274,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_abort:
 	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+		lruvec_unlock_irqrestore(locked, flags);
 	if (folio) {
 		folio_set_lru(folio);
 		folio_put(folio);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 40cf59301c21a..12b46215b30c1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3899,7 +3899,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
 	folio_ref_unfreeze(folio, folio_cache_ref_count(folio) + 1);
 
 	if (do_lru)
-		unlock_page_lruvec(lruvec);
+		lruvec_unlock(lruvec);
 
 	if (ci)
 		swap_cluster_unlock(ci);
diff --git a/mm/mlock.c b/mm/mlock.c
index 2f699c3497a57..66740e16679c3 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -205,7 +205,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	folios_put(fbatch);
 }
 
diff --git a/mm/swap.c b/mm/swap.c
index 2260dcd2775e7..ec0c654e128dc 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -91,7 +91,7 @@ static void page_cache_release(struct folio *folio)
 
 	__page_cache_release(folio, &lruvec, &flags);
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 }
 
 void __folio_put(struct folio *folio)
@@ -175,7 +175,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	folios_put(fbatch);
 }
 
@@ -349,7 +349,7 @@ void folio_activate(struct folio *folio)
 
 		lruvec = folio_lruvec_lock_irq(folio);
 		lru_activate(lruvec, folio);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		folio_set_lru(folio);
 	}
 #endif
@@ -963,7 +963,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 
 		if (folio_is_zone_device(folio)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			if (folio_ref_sub_and_test(folio, nr_refs))
@@ -977,7 +977,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 		/* hugetlb has its own memcg */
 		if (folio_test_hugetlb(folio)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			free_huge_folio(folio);
@@ -991,7 +991,7 @@ void folios_put_refs(struct folio_batch *folios, unsigned int *refs)
 		j++;
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	if (!j) {
 		folio_batch_reinit(folios);
 		return;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 670fe9fae5baa..28d9b3af47130 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1829,7 +1829,7 @@ bool folio_isolate_lru(struct folio *folio)
 		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		ret = true;
 	}
 
@@ -7855,7 +7855,7 @@ void check_move_unevictable_folios(struct folio_batch *fbatch)
 	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	} else if (pgscanned) {
 		count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
 	}
-- 
2.20.1