From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable v3 1/6] mm: Add folio_estimated_mapcount()
Date: Wed, 25 Jan 2023 15:41:29 -0800
Message-Id: <20230125234134.227244-2-vishal.moola@gmail.com>
In-Reply-To: <20230125234134.227244-1-vishal.moola@gmail.com>

folio_estimated_mapcount() takes in a folio and returns the precise
mapcount of the folio's first subpage. That value serves as an estimate
of the mapcount of any subpage within the folio. This is necessary for
folio conversions where we care about the mapcount of a subpage, but not
necessarily the whole folio.
This is in contrast to folio_mapcount(), which calculates the total
number of times a folio and all its subpages are mapped.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/mm.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9db257f09b3..fdd5b77ac209 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1817,6 +1817,23 @@ static inline size_t folio_size(struct folio *folio)
 	return PAGE_SIZE << folio_order(folio);
 }
 
+/**
+ * folio_estimated_mapcount - Estimate a folio's per-page mapcount.
+ * @folio: The folio.
+ *
+ * folio_estimated_mapcount() aims to serve as a function to efficiently
+ * estimate the number of times each page in a folio is mapped.
+ * This may not be accurate for large folios. If you want exact mapcounts,
+ * look at page_mapcount() or folio_total_mapcount().
+ *
+ * Return: The precise mapcount of the first subpage, meant to estimate
+ * the mapcount of any subpage within the folio.
+ */
+static inline int folio_estimated_mapcount(struct folio *folio)
+{
+	return page_mapcount(folio_page(folio, 0));
+}
+
 #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
 static inline int arch_make_page_accessible(struct page *page)
 {
-- 
2.38.1
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable v3 2/6] mm/mempolicy: Convert queue_pages_pmd() to queue_folios_pmd()
Date: Wed, 25 Jan 2023 15:41:30 -0800
Message-Id: <20230125234134.227244-3-vishal.moola@gmail.com>
In-Reply-To: <20230125234134.227244-1-vishal.moola@gmail.com>

The function now operates on a folio instead of the page associated
with a pmd. This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index fd99d303e34f..00fffa93adae 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -442,21 +442,21 @@ static inline bool queue_pages_required(struct page *page,
 }
 
 /*
- * queue_pages_pmd() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pmd() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
 *     special page is met, i.e. huge zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
 *     specified.
 * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
- *        existing page was already on a node that does not follow the
+ *        existing folio was already on a node that does not follow the
 *        policy.
 */
-static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		unsigned long end, struct mm_walk *walk)
 	__releases(ptl)
 {
 	int ret = 0;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags;
 
@@ -464,19 +464,19 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		ret = -EIO;
 		goto unlock;
 	}
-	page = pmd_page(*pmd);
-	if (is_huge_zero_page(page)) {
+	folio = pfn_folio(pmd_pfn(*pmd));
+	if (is_huge_zero_page(&folio->page)) {
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(page, qp))
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	flags = qp->flags;
-	/* go to thp migration */
+	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(page, qp->pagelist, flags)) {
+		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -512,7 +512,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl)
-		return queue_pages_pmd(pmd, ptl, addr, end, walk);
+		return queue_folios_pmd(pmd, ptl, addr, end, walk);
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
-- 
2.38.1
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable v3 3/6] mm/mempolicy: Convert queue_pages_pte_range() to queue_folios_pte_range()
Date: Wed, 25 Jan 2023 15:41:31 -0800
Message-Id: <20230125234134.227244-4-vishal.moola@gmail.com>
In-Reply-To: <20230125234134.227244-1-vishal.moola@gmail.com>

This function now operates on folios associated with ptes instead of
pages. This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 00fffa93adae..ae9d16124f45 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -491,19 +491,19 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
 *
- * queue_pages_pte_range() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pte_range() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
 *     special page is met, i.e. zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
 *     specified.
- * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
+ * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already
 *        on a node that does not follow the policy.
 */
-static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 		unsigned long end, struct mm_walk *walk)
 {
 	struct vm_area_struct *vma = walk->vma;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 	bool has_unmovable = false;
@@ -521,16 +521,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
 		/*
-		 * vm_normal_page() filters out zero pages, but there might
-		 * still be PageReserved pages to skip, perhaps in a VDSO.
+		 * vm_normal_folio() filters out zero pages, but there might
+		 * still be reserved folios to skip, perhaps in a VDSO.
		 */
-		if (PageReserved(page))
+		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(page, qp))
+		if (!queue_pages_required(&folio->page, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -544,7 +544,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range. Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(page, qp->pagelist, flags))
+			if (migrate_page_add(&folio->page, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -703,7 +703,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
 	.hugetlb_entry		= queue_pages_hugetlb,
-	.pmd_entry		= queue_pages_pte_range,
+	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
-- 
2.38.1
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable v3 4/6] mm/mempolicy: Convert queue_pages_hugetlb() to queue_folios_hugetlb()
Date: Wed, 25 Jan 2023 15:41:32 -0800
Message-Id: <20230125234134.227244-5-vishal.moola@gmail.com>
In-Reply-To: <20230125234134.227244-1-vishal.moola@gmail.com>

This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ae9d16124f45..ea8cac447e04 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -558,7 +558,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 	return addr != end ? -EIO : 0;
 }
 
-static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 			       unsigned long addr, unsigned long end,
 			       struct mm_walk *walk)
 {
@@ -566,7 +566,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 #ifdef CONFIG_HUGETLB_PAGE
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = (qp->flags & MPOL_MF_VALID);
-	struct page *page;
+	struct folio *folio;
 	spinlock_t *ptl;
 	pte_t entry;
 
@@ -574,13 +574,13 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	entry = huge_ptep_get(pte);
 	if (!pte_present(entry))
 		goto unlock;
-	page = pte_page(entry);
-	if (!queue_pages_required(page, qp))
+	folio = pfn_folio(pte_pfn(entry));
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
 		/*
-		 * STRICT alone means only detecting misplaced page and no
+		 * STRICT alone means only detecting misplaced folio and no
 		 * need to further check other vma.
 		 */
 		ret = -EIO;
@@ -591,20 +591,27 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 		/*
 		 * Must be STRICT with MOVE*, otherwise .test_walk() have
 		 * stopped walking current vma.
-		 * Detecting misplaced page but allow migrating pages which
+		 * Detecting misplaced folio but allow migrating folios which
 		 * have been queued.
 		 */
 		ret = 1;
 		goto unlock;
 	}
 
-	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
+	/*
+	 * With MPOL_MF_MOVE, we try to migrate only unshared folios. If it
+	 * is shared it is likely not worth migrating.
+	 *
+	 * To check if the folio is shared, ideally we want to make sure
+	 * every page is mapped to the same process. Doing that is very
+	 * expensive, so check the estimated mapcount of the folio instead.
+	 */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
-	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
-		if (isolate_hugetlb(page_folio(page), qp->pagelist) &&
+	    (flags & MPOL_MF_MOVE && folio_estimated_mapcount(folio) == 1)) {
+		if (isolate_hugetlb(folio, qp->pagelist) &&
 		    (flags & MPOL_MF_STRICT))
 			/*
-			 * Failed to isolate page but allow migrating pages
+			 * Failed to isolate folio but allow migrating folios
 			 * which have been queued.
 			 */
 			ret = 1;
@@ -702,7 +709,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 }
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
-	.hugetlb_entry		= queue_pages_hugetlb,
+	.hugetlb_entry		= queue_folios_hugetlb,
 	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
-- 
2.38.1
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable v3 5/6] mm/mempolicy: Convert queue_pages_required() to queue_folio_required()
Date: Wed, 25 Jan 2023 15:41:33 -0800
Message-Id: <20230125234134.227244-6-vishal.moola@gmail.com>
In-Reply-To: <20230125234134.227244-1-vishal.moola@gmail.com>

Replace queue_pages_required() with queue_folio_required().
queue_folio_required() does the same as queue_pages_required(), except
it takes in a folio instead of a page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ea8cac447e04..da87644430e3 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -427,15 +427,15 @@ struct queue_pages {
 };
 
 /*
- * Check if the page's nid is in qp->nmask.
+ * Check if the folio's nid is in qp->nmask.
 *
 * If MPOL_MF_INVERT is set in qp->flags, check if the nid is
 * in the invert of qp->nmask.
 */
-static inline bool queue_pages_required(struct page *page,
+static inline bool queue_folio_required(struct folio *folio,
 					struct queue_pages *qp)
 {
-	int nid = page_to_nid(page);
+	int nid = folio_nid(folio);
 	unsigned long flags = qp->flags;
 
 	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
@@ -469,7 +469,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	flags = qp->flags;
@@ -530,7 +530,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 		 */
 		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(&folio->page, qp))
+		if (!queue_folio_required(folio, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -575,7 +575,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto unlock;
 	folio = pfn_folio(pte_pfn(entry));
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
-- 
2.38.1
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable v3 6/6] mm/mempolicy: Convert migrate_page_add() to migrate_folio_add()
Date: Wed, 25 Jan 2023 15:41:34 -0800
Message-Id: <20230125234134.227244-7-vishal.moola@gmail.com>
In-Reply-To: <20230125234134.227244-1-vishal.moola@gmail.com>

Replace migrate_page_add() with migrate_folio_add(). migrate_folio_add()
does the same as migrate_page_add(), except it takes in a folio instead
of a page. This removes a couple of calls to compound_head().
Signed-off-by: Vishal Moola (Oracle)
---
 mm/mempolicy.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index da87644430e3..9bb4600c4294 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -414,7 +414,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 	},
 };
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags);
 
 struct queue_pages {
@@ -476,7 +476,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
+		    migrate_folio_add(folio, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -544,7 +544,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range. Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(&folio->page, qp->pagelist, flags))
+			if (migrate_folio_add(folio, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -1029,27 +1029,28 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 }
 
 #ifdef CONFIG_MIGRATION
-/*
- * page migration, thp tail pages can be passed.
- */
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 		unsigned long flags)
 {
-	struct page *head = compound_head(page);
 	/*
-	 * Avoid migrating a page that is shared with others.
+	 * We try to migrate only unshared folios. If it is shared it
+	 * is likely not worth migrating.
+	 *
+	 * To check if the folio is shared, ideally we want to make sure
+	 * every page is mapped to the same process. Doing that is very
+	 * expensive, so check the estimated mapcount of the folio instead.
 	 */
-	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(head) == 1) {
-		if (!isolate_lru_page(head)) {
-			list_add_tail(&head->lru, pagelist);
-			mod_node_page_state(page_pgdat(head),
-				NR_ISOLATED_ANON + page_is_file_lru(head),
-				thp_nr_pages(head));
+	if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_mapcount(folio) == 1) {
+		if (!folio_isolate_lru(folio)) {
+			list_add_tail(&folio->lru, foliolist);
+			node_stat_mod_folio(folio,
+				NR_ISOLATED_ANON + folio_is_file_lru(folio),
+				folio_nr_pages(folio));
 		} else if (flags & MPOL_MF_STRICT) {
 			/*
-			 * Non-movable page may reach here. And, there may be
-			 * temporary off LRU pages or non-LRU movable pages.
-			 * Treat them as unmovable pages since they can't be
+			 * Non-movable folio may reach here. And, there may be
+			 * temporary off LRU folios or non-LRU movable folios.
+			 * Treat them as unmovable folios since they can't be
 			 * isolated, so they can't be moved at the moment. It
 			 * should return -EIO for this case too.
 			 */
@@ -1241,7 +1242,7 @@ static struct page *new_page(struct page *page, unsigned long start)
 }
 #else
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 		unsigned long flags)
 {
 	return -EIO;
-- 
2.38.1