From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)"
Subject: [PATCH mm-unstable v4 1/6] mm: add folio_estimated_sharers()
Date: Mon, 30 Jan 2023 12:18:28 -0800
Message-Id: <20230130201833.27042-2-vishal.moola@gmail.com>
In-Reply-To: <20230130201833.27042-1-vishal.moola@gmail.com>

folio_estimated_sharers() takes in a folio and returns the precise number
of times the first subpage of the folio is mapped. This function aims to
provide an estimate of the number of sharers of a folio. This is necessary
for folio conversions where we care about the number of processes that
share a folio, but don't necessarily want to check every single page
within that folio.
This is in contrast to folio_mapcount() which calculates the total number
of times a folio and all its subpages are mapped.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: David Hildenbrand
Reviewed-by: Yin Fengwei
---
 include/linux/mm.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27b34f7730e7..c91bf9cdb3d0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1905,6 +1905,24 @@ static inline size_t folio_size(struct folio *folio)
 	return PAGE_SIZE << folio_order(folio);
 }
 
+/**
+ * folio_estimated_sharers - Estimate the number of sharers of a folio.
+ * @folio: The folio.
+ *
+ * folio_estimated_sharers() aims to serve as a function to efficiently
+ * estimate the number of processes sharing a folio. This is done by
+ * looking at the precise mapcount of the first subpage in the folio, and
+ * assuming the other subpages are the same. This may not be true for large
+ * folios. If you want exact mapcounts for exact calculations, look at
+ * page_mapcount() or folio_total_mapcount().
+ *
+ * Return: The estimated number of processes sharing a folio.
+ */
+static inline int folio_estimated_sharers(struct folio *folio)
+{
+	return page_mapcount(folio_page(folio, 0));
+}
+
 #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
 static inline int arch_make_page_accessible(struct page *page)
 {
-- 
2.38.1
From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)"
Subject: [PATCH mm-unstable v4 2/6] mm/mempolicy: convert queue_pages_pmd() to queue_folios_pmd()
Date: Mon, 30 Jan 2023 12:18:29 -0800
Message-Id: <20230130201833.27042-3-vishal.moola@gmail.com>
In-Reply-To: <20230130201833.27042-1-vishal.moola@gmail.com>

The function now operates on a folio instead of the page associated with
a pmd.
This change is in preparation for the conversion of queue_pages_required()
to queue_folio_required() and migrate_page_add() to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle)
---
 mm/mempolicy.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 7686f40c9750..fc754dbcbbcd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -442,21 +442,21 @@ static inline bool queue_pages_required(struct page *page,
 }
 
 /*
- * queue_pages_pmd() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pmd() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
  *     special page is met, i.e. huge zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
  * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
- *        existing page was already on a node that does not follow the
+ *        existing folio was already on a node that does not follow the
  *        policy.
  */
-static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
 	__releases(ptl)
 {
 	int ret = 0;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags;
 
@@ -464,19 +464,19 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		ret = -EIO;
 		goto unlock;
 	}
-	page = pmd_page(*pmd);
-	if (is_huge_zero_page(page)) {
+	folio = pfn_folio(pmd_pfn(*pmd));
+	if (is_huge_zero_page(&folio->page)) {
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(page, qp))
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	flags = qp->flags;
-	/* go to thp migration */
+	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(page, qp->pagelist, flags)) {
+		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -512,7 +512,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl)
-		return queue_pages_pmd(pmd, ptl, addr, end, walk);
+		return queue_folios_pmd(pmd, ptl, addr, end, walk);
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
-- 
2.38.1
From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)"
Subject: [PATCH mm-unstable v4 3/6] mm/mempolicy: convert queue_pages_pte_range() to queue_folios_pte_range()
Date: Mon, 30 Jan 2023 12:18:30 -0800
Message-Id: <20230130201833.27042-4-vishal.moola@gmail.com>
In-Reply-To: <20230130201833.27042-1-vishal.moola@gmail.com>

This function now operates on folios associated with ptes instead of
pages.

This change is in preparation for the conversion of queue_pages_required()
to queue_folio_required() and migrate_page_add() to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle)
---
 mm/mempolicy.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index fc754dbcbbcd..b0805bb87655 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -491,19 +491,19 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
  *
- * queue_pages_pte_range() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pte_range() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
  *     special page is met, i.e. zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
- * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
+ * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already
  *        on a node that does not follow the policy.
  */
-static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
 {
 	struct vm_area_struct *vma = walk->vma;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 	bool has_unmovable = false;
@@ -521,16 +521,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
 		/*
-		 * vm_normal_page() filters out zero pages, but there might
-		 * still be PageReserved pages to skip, perhaps in a VDSO.
+		 * vm_normal_folio() filters out zero pages, but there might
+		 * still be reserved folios to skip, perhaps in a VDSO.
 		 */
-		if (PageReserved(page))
+		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(page, qp))
+		if (!queue_pages_required(&folio->page, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -544,7 +544,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range. Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(page, qp->pagelist, flags))
+			if (migrate_page_add(&folio->page, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -704,7 +704,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
 	.hugetlb_entry		= queue_pages_hugetlb,
-	.pmd_entry		= queue_pages_pte_range,
+	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
-- 
2.38.1
From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)"
Subject: [PATCH mm-unstable v4 4/6] mm/mempolicy: convert queue_pages_hugetlb() to queue_folios_hugetlb()
Date: Mon, 30 Jan 2023 12:18:31 -0800
Message-Id: <20230130201833.27042-5-vishal.moola@gmail.com>
In-Reply-To: <20230130201833.27042-1-vishal.moola@gmail.com>
This change is in preparation for the conversion of queue_pages_required()
to queue_folio_required() and migrate_page_add() to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle)
---
 mm/mempolicy.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b0805bb87655..668392493500 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -558,7 +558,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 	return addr != end ? -EIO : 0;
 }
 
-static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 			       unsigned long addr, unsigned long end,
 			       struct mm_walk *walk)
 {
@@ -566,7 +566,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 #ifdef CONFIG_HUGETLB_PAGE
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = (qp->flags & MPOL_MF_VALID);
-	struct page *page;
+	struct folio *folio;
 	spinlock_t *ptl;
 	pte_t entry;
 
@@ -574,13 +574,13 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	entry = huge_ptep_get(pte);
 	if (!pte_present(entry))
 		goto unlock;
-	page = pte_page(entry);
-	if (!queue_pages_required(page, qp))
+	folio = pfn_folio(pte_pfn(entry));
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
 		/*
-		 * STRICT alone means only detecting misplaced page and no
+		 * STRICT alone means only detecting misplaced folio and no
 		 * need to further check other vma.
 		 */
 		ret = -EIO;
@@ -591,21 +591,28 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 		/*
 		 * Must be STRICT with MOVE*, otherwise .test_walk() have
 		 * stopped walking current vma.
-		 * Detecting misplaced page but allow migrating pages which
+		 * Detecting misplaced folio but allow migrating folios which
 		 * have been queued.
 		 */
 		ret = 1;
 		goto unlock;
 	}
 
-	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
+	/*
+	 * With MPOL_MF_MOVE, we try to migrate only unshared folios. If it
+	 * is shared it is likely not worth migrating.
+	 *
+	 * To check if the folio is shared, ideally we want to make sure
+	 * every page is mapped to the same process. Doing that is very
+	 * expensive, so check the estimated mapcount of the folio instead.
+	 */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
-	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 &&
+	    (flags & MPOL_MF_MOVE && folio_estimated_sharers(folio) == 1 &&
 	     !hugetlb_pmd_shared(pte))) {
-		if (isolate_hugetlb(page_folio(page), qp->pagelist) &&
+		if (isolate_hugetlb(folio, qp->pagelist) &&
 		    (flags & MPOL_MF_STRICT))
 			/*
-			 * Failed to isolate page but allow migrating pages
+			 * Failed to isolate folio but allow migrating pages
 			 * which have been queued.
 			 */
 			ret = 1;
@@ -703,7 +710,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 }
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
-	.hugetlb_entry		= queue_pages_hugetlb,
+	.hugetlb_entry		= queue_folios_hugetlb,
 	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
-- 
2.38.1
From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)"
Subject: [PATCH mm-unstable v4 5/6] mm/mempolicy: convert queue_pages_required() to queue_folio_required()
Date: Mon, 30 Jan 2023 12:18:32 -0800
Message-Id: <20230130201833.27042-6-vishal.moola@gmail.com>
In-Reply-To: <20230130201833.27042-1-vishal.moola@gmail.com>

Replace queue_pages_required() with
queue_folio_required(). queue_folio_required() does the same as
queue_pages_required(), except it takes in a folio instead of a page.

Signed-off-by: Vishal Moola (Oracle)
---
 mm/mempolicy.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 668392493500..6a68dbce3b70 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -427,15 +427,15 @@ struct queue_pages {
 };
 
 /*
- * Check if the page's nid is in qp->nmask.
+ * Check if the folio's nid is in qp->nmask.
  *
  * If MPOL_MF_INVERT is set in qp->flags, check if the nid is
  * in the invert of qp->nmask.
  */
-static inline bool queue_pages_required(struct page *page,
+static inline bool queue_folio_required(struct folio *folio,
 					struct queue_pages *qp)
 {
-	int nid = page_to_nid(page);
+	int nid = folio_nid(folio);
 	unsigned long flags = qp->flags;
 
 	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
@@ -469,7 +469,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	flags = qp->flags;
@@ -530,7 +530,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 		 */
 		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(&folio->page, qp))
+		if (!queue_folio_required(folio, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -575,7 +575,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto unlock;
 	folio = pfn_folio(pte_pfn(entry));
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
-- 
2.38.1
From: "Vishal Moola (Oracle)"
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
    "Vishal Moola (Oracle)"
Subject: [PATCH mm-unstable v4 6/6] mm/mempolicy: convert migrate_page_add() to migrate_folio_add()
Date: Mon, 30 Jan 2023 12:18:33 -0800
Message-Id: <20230130201833.27042-7-vishal.moola@gmail.com>
In-Reply-To: <20230130201833.27042-1-vishal.moola@gmail.com>
References: <20230130201833.27042-1-vishal.moola@gmail.com>

Replace migrate_page_add() with migrate_folio_add(). migrate_folio_add()
does the same as migrate_page_add(), but takes in a folio instead of a
page. This removes a couple of calls to compound_head().
Signed-off-by: Vishal Moola (Oracle)
Reviewed-by: Yin Fengwei
---
 mm/mempolicy.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 6a68dbce3b70..0919c7a719d4 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -414,7 +414,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 	},
 };
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags);
 
 struct queue_pages {
@@ -476,7 +476,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
+		    migrate_folio_add(folio, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -544,7 +544,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range. Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(&folio->page, qp->pagelist, flags))
+			if (migrate_folio_add(folio, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -1021,27 +1021,28 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 }
 
 #ifdef CONFIG_MIGRATION
-/*
- * page migration, thp tail pages can be passed.
- */
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 		unsigned long flags)
 {
-	struct page *head = compound_head(page);
 	/*
-	 * Avoid migrating a page that is shared with others.
+	 * We try to migrate only unshared folios. If it is shared it
+	 * is likely not worth migrating.
+	 *
+	 * To check if the folio is shared, ideally we want to make sure
+	 * every page is mapped to the same process. Doing that is very
+	 * expensive, so check the estimated mapcount of the folio instead.
 	 */
-	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(head) == 1) {
-		if (!isolate_lru_page(head)) {
-			list_add_tail(&head->lru, pagelist);
-			mod_node_page_state(page_pgdat(head),
-				NR_ISOLATED_ANON + page_is_file_lru(head),
-				thp_nr_pages(head));
+	if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) {
+		if (!folio_isolate_lru(folio)) {
+			list_add_tail(&folio->lru, foliolist);
+			node_stat_mod_folio(folio,
+				NR_ISOLATED_ANON + folio_is_file_lru(folio),
+				folio_nr_pages(folio));
 		} else if (flags & MPOL_MF_STRICT) {
 			/*
-			 * Non-movable page may reach here. And, there may be
-			 * temporary off LRU pages or non-LRU movable pages.
-			 * Treat them as unmovable pages since they can't be
+			 * Non-movable folio may reach here. And, there may be
+			 * temporary off LRU folios or non-LRU movable folios.
+			 * Treat them as unmovable folios since they can't be
 			 * isolated, so they can't be moved at the moment. It
 			 * should return -EIO for this case too.
 			 */
@@ -1235,7 +1236,7 @@ static struct page *new_page(struct page *page, unsigned long start)
 }
 #else
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 		unsigned long flags)
 {
 	return -EIO;
-- 
2.38.1