From: Illia Ostapyshyn <illia@yshyn.com>
To: Jonathan Corbet, Andrew Morton, Matthew Wilcox
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Illia Ostapyshyn <illia@yshyn.com>
Subject: [PATCH] mm/vmscan: Update stale references to shrink_page_list
Date: Fri, 17 May 2024 11:13:48 +0200
Message-Id: <20240517091348.1185566-1-illia@yshyn.com>
X-Mailer: git-send-email 2.39.2

Commit 49fd9b6df54e ("mm/vmscan: fix a lot of comments") renamed
shrink_page_list() to shrink_folio_list().  Fix up the remaining
references to the old name in comments and documentation.

Signed-off-by: Illia Ostapyshyn <illia@yshyn.com>
---
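A note for reviewers, not meant for the commit message: a quick way to
check that the rename is complete is a tree-wide search along the lines
of

  $ git grep -n shrink_page_list

Any hits remaining after this patch (in translated documentation, for
example, should any exist) would warrant a separate follow-up.
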
 Documentation/mm/unevictable-lru.rst | 10 +++++-----
 mm/memory.c                          |  2 +-
 mm/swap_state.c                      |  2 +-
 mm/truncate.c                        |  2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index b6a07a26b10d..2feb2ed51ae2 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -191,13 +191,13 @@ have become evictable again (via munlock() for example) and have been "rescued"
 from the unevictable list.  However, there may be situations where we decide,
 for the sake of expediency, to leave an unevictable folio on one of the regular
 active/inactive LRU lists for vmscan to deal with.  vmscan checks for such
-folios in all of the shrink_{active|inactive|page}_list() functions and will
+folios in all of the shrink_{active|inactive|folio}_list() functions and will
 "cull" such folios that it encounters: that is, it diverts those folios to the
 unevictable list for the memory cgroup and node being scanned.
 
 There may be situations where a folio is mapped into a VM_LOCKED VMA,
 but the folio does not have the mlocked flag set.  Such folios will make
-it all the way to shrink_active_list() or shrink_page_list() where they
+it all the way to shrink_active_list() or shrink_folio_list() where they
 will be detected when vmscan walks the reverse map in folio_referenced()
 or try_to_unmap().  The folio is culled to the unevictable list when it
 is released by the shrinker.
@@ -269,7 +269,7 @@ the LRU.  Such pages can be "noticed" by memory management in several places:
 
 (4) in the fault path and when a VM_LOCKED stack segment is expanded; or
 
-(5) as mentioned above, in vmscan:shrink_page_list() when attempting to
+(5) as mentioned above, in vmscan:shrink_folio_list() when attempting to
     reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().
 
 mlocked pages become unlocked and rescued from the unevictable list when:
@@ -548,12 +548,12 @@ Some examples of these unevictable pages on the LRU lists are:
 (3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
     but events left mlock_count too low, so they were munlocked too early.
 
-vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
+vmscan's shrink_inactive_list() and shrink_folio_list() also divert obviously
 unevictable pages found on the inactive lists to the appropriate memory cgroup
 and node unevictable list.
 
 rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
-shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
+shrink_folio_list(), and rmap's try_to_unmap_one() called via shrink_folio_list(),
 check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
 to correct them.  Such pages are culled to the unevictable list when released
 by the shrinker.
diff --git a/mm/memory.c b/mm/memory.c
index 0201f50d8307..c58b3d92e6a8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4511,7 +4511,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 	 * lock_page(B)
 	 *				lock_page(B)
 	 * pte_alloc_one
-	 *   shrink_page_list
+	 *   shrink_folio_list
 	 *     wait_on_page_writeback(A)
 	 *				SetPageWriteback(B)
 	 *				unlock_page(B)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index bfc7e8c58a6d..3d163ec1364a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -28,7 +28,7 @@
 
 /*
  * swapper_space is a fiction, retained to simplify the path through
- * vmscan's shrink_page_list.
+ * vmscan's shrink_folio_list.
  */
 static const struct address_space_operations swap_aops = {
 	.writepage	= swap_writepage,
diff --git a/mm/truncate.c b/mm/truncate.c
index 725b150e47ac..e1c352bb026b 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -554,7 +554,7 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
  * This is like mapping_evict_folio(), except it ignores the folio's
  * refcount.  We do this because invalidate_inode_pages2() needs stronger
  * invalidation guarantees, and cannot afford to leave folios behind because
- * shrink_page_list() has a temp ref on them, or because they're transiently
+ * shrink_folio_list() has a temp ref on them, or because they're transiently
  * sitting in the folio_add_lru() caches.
  */
 static int invalidate_complete_folio2(struct address_space *mapping,
-- 
2.39.2