From nobody Wed Sep 17 01:34:39 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett, William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit()
Date: Mon, 26 Dec 2022 08:44:19 +0000
This performs the same task as pagevec_reinit(), only modifying a folio
batch rather than a pagevec.

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 include/linux/pagevec.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 215eb6c3bdc9..2a6f61a0c10a 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -103,6 +103,11 @@ static inline void folio_batch_init(struct folio_batch *fbatch)
 	fbatch->percpu_pvec_drained = false;
 }
 
+static inline void folio_batch_reinit(struct folio_batch *fbatch)
+{
+	fbatch->nr = 0;
+}
+
 static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
 {
 	return fbatch->nr;
-- 
2.39.0
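The distinction between the two helpers above is that folio_batch_init()
clears both the count and the percpu_pvec_drained flag, while
folio_batch_reinit() resets only the count. A stand-alone sketch of that
difference, using a simplified stand-in struct rather than the kernel's
real folio_batch:

#include <stdbool.h>
#include <stdio.h>

#define BATCH_SIZE 15

/* Simplified stand-in for the kernel's folio_batch; field names follow
 * the patch but the layout here is illustrative only. */
struct example_batch {
	unsigned char nr;
	bool percpu_pvec_drained;
	void *folios[BATCH_SIZE];
};

/* Like folio_batch_init(): start from a completely clean state. */
static void batch_init(struct example_batch *b)
{
	b->nr = 0;
	b->percpu_pvec_drained = false;
}

/* Like folio_batch_reinit(): drop the contents, keep the drained hint. */
static void batch_reinit(struct example_batch *b)
{
	b->nr = 0;
}

int main(void)
{
	struct example_batch b;

	batch_init(&b);
	b.nr = 3;
	b.percpu_pvec_drained = true;

	batch_reinit(&b);
	printf("nr=%d drained=%d\n", (int)b.nr, (int)b.percpu_pvec_drained);
	return 0;
}

Patch 2 below uses folio_batch_reinit() in mlock_folio_batch() after
releasing the folios, mirroring the pagevec_reinit() call it replaces.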
From nobody Wed Sep 17 01:34:39 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett, William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally
Date: Mon, 26 Dec 2022 08:44:20 +0000
Message-Id: <03ac78b416be5a361b79464acc3da7f93b9c37e8.1672043615.git.lstoakes@gmail.com>

This brings mlock in line with the folio batches declared in mm/swap.c and
makes the code more consistent across the two.

The existing mechanism for identifying which operation each folio in the
batch is undergoing is maintained, i.e. using the lower 2 bits of the
struct folio address (previously struct page address). This should continue
to function correctly as folios remain at least system word-aligned.

All invocations of mlock() pass either a non-compound page or the head of a
THP-compound page and no tail pages need updating, so this functionality
works with struct folios being used internally rather than struct pages.

In this patch the external interface is kept identical to before in order
to maintain separation between patches in the series, using a rather
awkward conversion from struct page to struct folio in relevant functions.

However, this maintenance of the existing interface is intended to be
temporary - the next patch in the series will update the interfaces to
accept folios directly.

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 mm/mlock.c | 238 +++++++++++++++++++++++++++--------------------------
 1 file changed, 120 insertions(+), 118 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 7032f6dd0ce1..e9ba47fe67ed 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -28,12 +28,12 @@
 
 #include "internal.h"
 
-struct mlock_pvec {
+struct mlock_fbatch {
 	local_lock_t lock;
-	struct pagevec vec;
+	struct folio_batch fbatch;
 };
 
-static DEFINE_PER_CPU(struct mlock_pvec, mlock_pvec) = {
+static DEFINE_PER_CPU(struct mlock_fbatch, mlock_fbatch) = {
 	.lock = INIT_LOCAL_LOCK(lock),
 };
 
@@ -48,192 +48,192 @@ bool can_do_mlock(void)
 EXPORT_SYMBOL(can_do_mlock);
 
 /*
- * Mlocked pages are marked with PageMlocked() flag for efficient testing
+ * Mlocked folios are marked with the PG_mlocked flag for efficient testing
  * in vmscan and, possibly, the fault path; and to support semi-accurate
  * statistics.
  *
- * An mlocked page [PageMlocked(page)] is unevictable.  As such, it will
- * be placed on the LRU "unevictable" list, rather than the [in]active lists.
- * The unevictable list is an LRU sibling list to the [in]active lists.
- * PageUnevictable is set to indicate the unevictable state.
+ * An mlocked folio [folio_test_mlocked(folio)] is unevictable.  As such, it
+ * will be ostensibly placed on the LRU "unevictable" list (actually no such
+ * list exists), rather than the [in]active lists. PG_unevictable is set to
+ * indicate the unevictable state.
  */
 
-static struct lruvec *__mlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
 	/* There is nothing more we can do while it's off LRU */
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		return lruvec;
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (unlikely(page_evictable(page))) {
+	if (unlikely(folio_evictable(folio))) {
 		/*
-		 * This is a little surprising, but quite possible:
-		 * PageMlocked must have got cleared already by another CPU.
-		 * Could this page be on the Unevictable LRU?  I'm not sure,
-		 * but move it now if so.
+		 * This is a little surprising, but quite possible: PG_mlocked
+		 * must have got cleared already by another CPU.  Could this
+		 * folio be unevictable?  I'm not sure, but move it now if so.
 		 */
-		if (PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
-			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+		if (folio_test_unevictable(folio)) {
+			lruvec_del_folio(lruvec, folio);
+			folio_clear_unevictable(folio);
+			lruvec_add_folio(lruvec, folio);
+
 			__count_vm_events(UNEVICTABLE_PGRESCUED,
-					  thp_nr_pages(page));
+					  folio_nr_pages(folio));
 		}
 		goto out;
 	}
 
-	if (PageUnevictable(page)) {
-		if (PageMlocked(page))
-			page->mlock_count++;
+	if (folio_test_unevictable(folio)) {
+		if (folio_test_mlocked(folio))
+			folio->mlock_count++;
 		goto out;
 	}
 
-	del_page_from_lru_list(page, lruvec);
-	ClearPageActive(page);
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	add_page_to_lru_list(page, lruvec);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	lruvec_del_folio(lruvec, folio);
+	folio_clear_active(folio);
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	lruvec_add_folio(lruvec, folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	SetPageLRU(page);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__mlock_new_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_new_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
 	/* As above, this is a little surprising, but possible */
-	if (unlikely(page_evictable(page)))
+	if (unlikely(folio_evictable(folio)))
 		goto out;
 
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	add_page_to_lru_list(page, lruvec);
-	SetPageLRU(page);
+	lruvec_add_folio(lruvec, folio);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	int nr_pages = thp_nr_pages(page);
+	int nr_pages = folio_nr_pages(folio);
 	bool isolated = false;
 
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		goto munlock;
 
 	isolated = true;
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (PageUnevictable(page)) {
+	if (folio_test_unevictable(folio)) {
 		/* Then mlock_count is maintained, but might undercount */
-		if (page->mlock_count)
-			page->mlock_count--;
-		if (page->mlock_count)
+		if (folio->mlock_count)
+			folio->mlock_count--;
+		if (folio->mlock_count)
 			goto out;
 	}
 	/* else assume that was the last mlock: reclaim will fix it if not */
 
 munlock:
-	if (TestClearPageMlocked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
-		if (isolated || !PageUnevictable(page))
+	if (folio_test_clear_mlocked(folio)) {
+		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
+		if (isolated || !folio_test_unevictable(folio))
 			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
 		else
 			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
 	}
 
-	/* page_evictable() has to be checked *after* clearing Mlocked */
-	if (isolated && PageUnevictable(page) && page_evictable(page)) {
-		del_page_from_lru_list(page, lruvec);
-		ClearPageUnevictable(page);
-		add_page_to_lru_list(page, lruvec);
+	/* folio_evictable() has to be checked *after* clearing Mlocked */
+	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
+		lruvec_del_folio(lruvec, folio);
+		folio_clear_unevictable(folio);
+		lruvec_add_folio(lruvec, folio);
 		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	}
 out:
 	if (isolated)
-		SetPageLRU(page);
+		folio_set_lru(folio);
 	return lruvec;
 }
 
 /*
- * Flags held in the low bits of a struct page pointer on the mlock_pvec.
+ * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
  */
 #define LRU_PAGE 0x1
 #define NEW_PAGE 0x2
-static inline struct page *mlock_lru(struct page *page)
+static inline struct folio *mlock_lru(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + LRU_PAGE);
+	return (struct folio *)((unsigned long)folio + LRU_PAGE);
 }
 
-static inline struct page *mlock_new(struct page *page)
+static inline struct folio *mlock_new(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + NEW_PAGE);
+	return (struct folio *)((unsigned long)folio + NEW_PAGE);
 }
 
 /*
- * mlock_pagevec() is derived from pagevec_lru_move_fn():
- * perhaps that can make use of such page pointer flags in future,
- * but for now just keep it for mlock.  We could use three separate
- * pagevecs instead, but one feels better (munlocking a full pagevec
- * does not need to drain mlocking pagevecs first).
+ * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
+ * make use of such page pointer flags in future, but for now just keep it for
+ * mlock.  We could use three separate folio batches instead, but one feels
+ * better (munlocking a full folio batch does not need to drain mlocking folio
+ * batches first).
  */
-static void mlock_pagevec(struct pagevec *pvec)
+static void mlock_folio_batch(struct folio_batch *fbatch)
 {
 	struct lruvec *lruvec = NULL;
 	unsigned long mlock;
-	struct page *page;
+	struct folio *folio;
 	int i;
 
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		page = pvec->pages[i];
-		mlock = (unsigned long)page & (LRU_PAGE | NEW_PAGE);
-		page = (struct page *)((unsigned long)page - mlock);
-		pvec->pages[i] = page;
+	for (i = 0; i < folio_batch_count(fbatch); i++) {
+		folio = fbatch->folios[i];
+		mlock = (unsigned long)folio & (LRU_PAGE | NEW_PAGE);
+		folio = (struct folio *)((unsigned long)folio - mlock);
+		fbatch->folios[i] = folio;
 
 		if (mlock & LRU_PAGE)
-			lruvec = __mlock_page(page, lruvec);
+			lruvec = __mlock_folio(folio, lruvec);
 		else if (mlock & NEW_PAGE)
-			lruvec = __mlock_new_page(page, lruvec);
+			lruvec = __mlock_new_folio(folio, lruvec);
 		else
-			lruvec = __munlock_page(page, lruvec);
+			lruvec = __munlock_folio(folio, lruvec);
 	}
 
 	if (lruvec)
 		unlock_page_lruvec_irq(lruvec);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	release_pages(fbatch->folios, fbatch->nr);
+	folio_batch_reinit(fbatch);
 }
 
 void mlock_page_drain_local(void)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 void mlock_page_drain_remote(int cpu)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
 	WARN_ON_ONCE(cpu_online(cpu));
-	pvec = &per_cpu(mlock_pvec.vec, cpu);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
+	fbatch = &per_cpu(mlock_fbatch.fbatch, cpu);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
 }
 
 bool need_mlock_page_drain(int cpu)
 {
-	return pagevec_count(&per_cpu(mlock_pvec.vec, cpu));
+	return folio_batch_count(&per_cpu(mlock_fbatch.fbatch, cpu));
 }
 
 /**
@@ -242,10 +242,10 @@ bool need_mlock_page_drain(int cpu)
  */
 void mlock_folio(struct folio *folio)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 
 	if (!folio_test_set_mlocked(folio)) {
 		int nr_pages = folio_nr_pages(folio);
@@ -255,10 +255,10 @@ void mlock_folio(struct folio *folio)
 	}
 
 	folio_get(folio);
-	if (!pagevec_add(pvec, mlock_lru(&folio->page)) ||
+	if (!folio_batch_add(fbatch, mlock_lru(folio)) ||
 	    folio_test_large(folio) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -267,20 +267,22 @@ void mlock_folio(struct folio *folio)
  */
 void mlock_new_page(struct page *page)
 {
-	struct pagevec *pvec;
-	int nr_pages = thp_nr_pages(page);
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
+	int nr_pages = folio_nr_pages(folio);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	SetPageMlocked(page);
-	mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	folio_set_mlocked(folio);
+
+	zone_stat_mod_folio(folio, NR_MLOCK, nr_pages);
 	__count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
 
-	get_page(page);
-	if (!pagevec_add(pvec, mlock_new(page)) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, mlock_new(folio)) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -289,20 +291,20 @@ void mlock_new_page(struct page *page)
  */
 void munlock_page(struct page *page)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 	/*
-	 * TestClearPageMlocked(page) must be left to __munlock_page(),
-	 * which will check whether the page is multiply mlocked.
+	 * folio_test_clear_mlocked(folio) must be left to __munlock_folio(),
+	 * which will check whether the folio is multiply mlocked.
 	 */
-
-	get_page(page);
-	if (!pagevec_add(pvec, page) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, folio) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
-- 
2.39.0
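The tagging scheme used by mlock_lru()/mlock_new() above depends only on
folios being at least word-aligned, so the bottom two bits of a valid
pointer are guaranteed to be clear. A stand-alone illustration of the same
encode/decode arithmetic in plain user-space C (not kernel code):

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define LRU_FLAG 0x1UL
#define NEW_FLAG 0x2UL

/* Fold a small flag into the low bits of a word-aligned pointer, as
 * mlock_lru()/mlock_new() do with LRU_PAGE/NEW_PAGE. */
static void *tag_ptr(void *p, unsigned long flag)
{
	return (void *)((unsigned long)p + flag);
}

/* Recover the flag and the original pointer, as mlock_folio_batch() does
 * before dispatching each batch entry. */
static void *untag_ptr(void *tagged, unsigned long *flag)
{
	unsigned long v = (unsigned long)tagged;

	*flag = v & (LRU_FLAG | NEW_FLAG);
	return (void *)(v - *flag);
}

int main(void)
{
	/* malloc() returns memory aligned well beyond 4 bytes, so the low
	 * two bits of the pointer are known to be zero. */
	long *obj = malloc(sizeof(*obj));
	unsigned long flag;
	void *tagged, *orig;

	tagged = tag_ptr(obj, NEW_FLAG);
	orig = untag_ptr(tagged, &flag);

	assert(orig == (void *)obj && flag == NEW_FLAG);
	printf("recovered flag 0x%lx\n", flag);
	free(obj);
	return 0;
}

mlock_folio_batch() recovers the flag bits with exactly this mask-and-subtract
step before choosing between __mlock_folio(), __mlock_new_folio() and
__munlock_folio().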
From nobody Wed Sep 17 01:34:39 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett, William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type
Date: Mon, 26 Dec 2022 08:44:21 +0000
Message-Id: <4b59f47ff4cd89ff76a5b6edbef6e8e0b37046f1.1672043615.git.lstoakes@gmail.com>

Failing to specify a specific type here breaks anything that relies on the
type being explicitly known, such as page_folio().

Make explicit the type of null pointer returned here.

Signed-off-by: Lorenzo Stoakes
Acked-by: Geert Uytterhoeven
Acked-by: Vlastimil Babka
---
 arch/m68k/include/asm/motorola_pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index 7ac3d64c6b33..562b54e09850 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -124,7 +124,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp)
  * expects pmd_page() to exists, only to then DCE it all. Provide a dummy to
  * make the compiler happy.
  */
-#define pmd_page(pmd) NULL
+#define pmd_page(pmd) ((struct page *)NULL)
 
 
 #define pud_none(pud) (!pud_val(pud))
-- 
2.39.0
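The reason the bare NULL broke page_folio() is that type-generic selection
needs the argument's static type to be struct page *; an untyped null
pointer constant matches no such association. A minimal illustration of the
failure mode, using a _Generic selector written here for demonstration only
(it is not the kernel's actual page_folio() definition):

#include <stdio.h>

struct page { int dummy; };
struct folio { int dummy; };

/* A selector in the spirit of page_folio(): only struct page * is accepted. */
#define to_folio(p) _Generic((p), struct page *: (struct folio *)(p))

#define pmd_page_untyped(pmd) NULL                  /* untyped null pointer constant */
#define pmd_page_typed(pmd)   ((struct page *)NULL) /* explicitly typed, as in the patch */

int main(void)
{
	struct folio *f;

	/* f = to_folio(pmd_page_untyped(0)); */
	/* The line above would not compile: NULL carries no struct page * type,
	 * so the _Generic selection has no matching association. */
	f = to_folio(pmd_page_typed(0));	/* fine: matches struct page * */
	printf("%p\n", (void *)f);
	return 0;
}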
From nobody Wed Sep 17 01:34:39 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett, William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v3 4/5] mm: mlock: update the interface to use folios
Date: Mon, 26 Dec 2022 08:44:22 +0000

This patch updates the mlock interface to accept folios rather than pages,
bringing the interface in line with the internal implementation.

munlock_vma_page() still requires a page_folio() conversion; however, this
is consistent with the existing mlock_vma_page() implementation and a
product of rmap still dealing in pages rather than folios.

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 mm/internal.h | 26 ++++++++++++++++----------
 mm/mlock.c    | 32 +++++++++++++++-----------------
 mm/swap.c     |  2 +-
 3 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 1d6f4e168510..8a6e83315369 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -515,10 +515,9 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
  * should be called with vma's mmap_lock held for read or write,
  * under page table lock for the pte/pmd being added or removed.
  *
- * mlock is usually called at the end of page_add_*_rmap(),
- * munlock at the end of page_remove_rmap(); but new anon
- * pages are managed by lru_cache_add_inactive_or_unevictable()
- * calling mlock_new_page().
+ * mlock is usually called at the end of page_add_*_rmap(), munlock at
+ * the end of page_remove_rmap(); but new anon folios are managed by
+ * folio_add_lru_vma() calling mlock_new_folio().
  *
  * @compound is used to include pmd mappings of THPs, but filter out
  * pte mappings of THPs, which cannot be consistently counted: a pte
@@ -547,15 +546,22 @@ static inline void mlock_vma_page(struct page *page,
 	mlock_vma_folio(page_folio(page), vma, compound);
 }
 
-void munlock_page(struct page *page);
-static inline void munlock_vma_page(struct page *page,
+void munlock_folio(struct folio *folio);
+
+static inline void munlock_vma_folio(struct folio *folio,
 		struct vm_area_struct *vma, bool compound)
 {
 	if (unlikely(vma->vm_flags & VM_LOCKED) &&
-	    (compound || !PageTransCompound(page)))
-		munlock_page(page);
+	    (compound || !folio_test_large(folio)))
+		munlock_folio(folio);
+}
+
+static inline void munlock_vma_page(struct page *page,
+		struct vm_area_struct *vma, bool compound)
+{
+	munlock_vma_folio(page_folio(page), vma, compound);
 }
-void mlock_new_page(struct page *page);
+void mlock_new_folio(struct folio *folio);
 bool need_mlock_page_drain(int cpu);
 void mlock_page_drain_local(void);
 void mlock_page_drain_remote(int cpu);
@@ -647,7 +653,7 @@ static inline void mlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
 static inline void munlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
-static inline void mlock_new_page(struct page *page) { }
+static inline void mlock_new_folio(struct folio *folio) { }
 static inline bool need_mlock_page_drain(int cpu) { return false; }
 static inline void mlock_page_drain_local(void) { }
 static inline void mlock_page_drain_remote(int cpu) { }
diff --git a/mm/mlock.c b/mm/mlock.c
index e9ba47fe67ed..3982ef4d1632 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -262,13 +262,12 @@ void mlock_folio(struct folio *folio)
 }
 
 /**
- * mlock_new_page - mlock a newly allocated page not yet on LRU
- * @page: page to be mlocked, either a normal page or a THP head.
+ * mlock_new_folio - mlock a newly allocated folio not yet on LRU
+ * @folio: folio to be mlocked, either normal or a THP head.
  */
-void mlock_new_page(struct page *page)
+void mlock_new_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 	int nr_pages = folio_nr_pages(folio);
 
 	local_lock(&mlock_fbatch.lock);
@@ -286,13 +285,12 @@ void mlock_new_page(struct page *page)
 }
 
 /**
- * munlock_page - munlock a page
- * @page: page to be munlocked, either a normal page or a THP head.
+ * munlock_folio - munlock a folio
+ * @folio: folio to be munlocked, either normal or a THP head.
  */
-void munlock_page(struct page *page)
+void munlock_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 
 	local_lock(&mlock_fbatch.lock);
 	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
@@ -314,7 +312,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	struct vm_area_struct *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *start_pte, *pte;
-	struct page *page;
+	struct folio *folio;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -322,11 +320,11 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			goto out;
 		if (is_huge_zero_pmd(*pmd))
 			goto out;
-		page = pmd_page(*pmd);
+		folio = page_folio(pmd_page(*pmd));
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 		goto out;
 	}
 
@@ -334,15 +332,15 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
-		if (PageTransCompound(page))
+		if (folio_test_large(folio))
 			continue;
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 	}
 	pte_unmap(start_pte);
 out:
diff --git a/mm/swap.c b/mm/swap.c
index e54e2a252e27..7df297b143f9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -562,7 +562,7 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
 	if (unlikely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED))
-		mlock_new_page(&folio->page);
+		mlock_new_folio(folio);
 	else
 		folio_add_lru(folio);
 }
-- 
2.39.0
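Both before and after this series, the shape of the fast path is the same:
take the CPU-local lock, append the (possibly tagged) folio to the per-CPU
batch, and process the whole batch once it fills or batching is disabled.
The sketch below models that add-then-flush flow in user-space C, with a
single global batch and a mutex standing in for the kernel's per-CPU data
and local_lock; the names are illustrative, not kernel APIs:

#include <pthread.h>
#include <stdio.h>

#define BATCH_SIZE 15

static pthread_mutex_t batch_lock = PTHREAD_MUTEX_INITIALIZER;
static void *batch[BATCH_SIZE];
static unsigned int batch_nr;

/* Stand-in for mlock_folio_batch(): process every queued item, then empty
 * the batch (compare release_pages() + folio_batch_reinit()). */
static void flush_batch(void)
{
	for (unsigned int i = 0; i < batch_nr; i++)
		printf("processing item %u at %p\n", i, batch[i]);
	batch_nr = 0;
}

/* Stand-in for mlock_folio(): queue one item, flushing once the batch is
 * full (the kernel also flushes for large folios or when the LRU cache is
 * disabled). */
static void queue_item(void *item)
{
	pthread_mutex_lock(&batch_lock);
	batch[batch_nr++] = item;
	if (batch_nr == BATCH_SIZE)
		flush_batch();
	pthread_mutex_unlock(&batch_lock);
}

int main(void)
{
	int items[40];

	for (int i = 0; i < 40; i++)
		queue_item(&items[i]);

	/* Stand-in for mlock_page_drain_local(): flush whatever is left. */
	pthread_mutex_lock(&batch_lock);
	flush_batch();
	pthread_mutex_unlock(&batch_lock);
	return 0;
}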
From nobody Wed Sep 17 01:34:39 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett, William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport, Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v3 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio()
Date: Mon, 26 Dec 2022 08:44:23 +0000

We now pass folios to these functions, so update the documentation
accordingly.

Additionally, correct the outdated reference to __pagevec_lru_add_fn(); the
referenced action now occurs directly in __munlock_folio().

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 Documentation/mm/unevictable-lru.rst | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 4a0e158aa9ce..153629e0c100 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -308,22 +308,22 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
 
 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
+calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page).
 Or when it is a newly allocated anonymous page, lru_cache_add_inactive_or_unevictable()
-calls mlock_new_page() instead: similar to mlock_page(), but can make better
+calls mlock_new_folio() instead: similar to mlock_folio(), but can make better
 judgments, since this page is held exclusively and known not to be on LRU yet.
 
-mlock_page() sets PageMlocked immediately, then places the page on the CPU's
-mlock pagevec, to batch up the rest of the work to be done under lru_lock by
-__mlock_page().  __mlock_page() sets PageUnevictable, initializes mlock_count
+mlock_folio() sets PageMlocked immediately, then places the page on the CPU's
+mlock folio batch, to batch up the rest of the work to be done under lru_lock by
+__mlock_folio().  __mlock_folio() sets PageUnevictable, initializes mlock_count
 and moves the page to unevictable state ("the unevictable LRU", but with
 mlock_count in place of LRU threading).  Or if the page was already PageLRU
 and PageUnevictable and PageMlocked, it simply increments the mlock_count.
 
 But in practice that may not work ideally: the page may not yet be on an LRU, or
 it may have been temporarily isolated from LRU.  In such cases the mlock_count
-field cannot be touched, but will be set to 0 later when __pagevec_lru_add_fn()
+field cannot be touched, but will be set to 0 later when __munlock_folio()
 returns the page to "LRU".  Races prohibit mlock_count from being set to 1 then:
 rather than risk stranding a page indefinitely as unevictable, always err with
 mlock_count on the low side, so that when munlocked the page will be rescued to
-- 
2.39.0
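For completeness, the kernel paths this series reworks are driven from user
space by the standard mlock(2)/munlock(2) calls; a trivial example that
would exercise mlock_folio()/munlock_folio() on the faulted-in pages (plain
POSIX, nothing specific to this series):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4 * (size_t)sysconf(_SC_PAGESIZE);
	char *buf = malloc(len);

	if (!buf)
		return 1;
	memset(buf, 0, len);	/* fault the pages in */

	/* Pages backing this range are passed down to mlock_folio() /
	 * mlock_new_folio() and end up on the unevictable LRU. */
	if (mlock(buf, len) != 0)
		perror("mlock");

	/* munlock() sends the same folios through munlock_folio(). */
	if (munlock(buf, len) != 0)
		perror("munlock");

	free(buf);
	return 0;
}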