From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable 1/5] mm/mempolicy: Convert queue_pages_pmd() to queue_folios_pmd()
Date: Wed, 18 Jan 2023 15:22:15 -0800
Message-Id: <20230118232219.27038-2-vishal.moola@gmail.com>
In-Reply-To: <20230118232219.27038-1-vishal.moola@gmail.com>
References: <20230118232219.27038-1-vishal.moola@gmail.com>

The function now operates on a folio instead of the page associated
with a pmd. This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index fd99d303e34f..00fffa93adae 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -442,21 +442,21 @@ static inline bool queue_pages_required(struct page *page,
 }
 
 /*
- * queue_pages_pmd() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pmd() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
  *     special page is met, i.e. huge zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
  * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
- *        existing page was already on a node that does not follow the
+ *        existing folio was already on a node that does not follow the
  *        policy.
  */
-static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		unsigned long end, struct mm_walk *walk)
 	__releases(ptl)
 {
 	int ret = 0;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags;
 
@@ -464,19 +464,19 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		ret = -EIO;
 		goto unlock;
 	}
-	page = pmd_page(*pmd);
-	if (is_huge_zero_page(page)) {
+	folio = pfn_folio(pmd_pfn(*pmd));
+	if (is_huge_zero_page(&folio->page)) {
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(page, qp))
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	flags = qp->flags;
-	/* go to thp migration */
+	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(page, qp->pagelist, flags)) {
+		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -512,7 +512,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl)
-		return queue_pages_pmd(pmd, ptl, addr, end, walk);
+		return queue_folios_pmd(pmd, ptl, addr, end, walk);
 
 	if (pmd_trans_unstable(pmd))
 		return 0;
-- 
2.38.1

From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable 2/5] mm/mempolicy: Convert queue_pages_pte_range() to queue_folios_pte_range()
Date: Wed, 18 Jan 2023 15:22:16 -0800
Message-Id: <20230118232219.27038-3-vishal.moola@gmail.com>
In-Reply-To: <20230118232219.27038-1-vishal.moola@gmail.com>
References: <20230118232219.27038-1-vishal.moola@gmail.com>

This function now operates on folios associated with ptes instead of
pages. This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 00fffa93adae..ae9d16124f45 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -491,19 +491,19 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
  * Scan through pages checking if pages follow certain conditions,
  * and move them to the pagelist if they do.
  *
- * queue_pages_pte_range() has three possible return values:
- * 0 - pages are placed on the right node or queued successfully, or
+ * queue_folios_pte_range() has three possible return values:
+ * 0 - folios are placed on the right node or queued successfully, or
  *     special page is met, i.e. zero page.
- * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
+ * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
  *     specified.
- * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
+ * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already
  *        on a node that does not follow the policy.
  */
-static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
 {
 	struct vm_area_struct *vma = walk->vma;
-	struct page *page;
+	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 	bool has_unmovable = false;
@@ -521,16 +521,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
 		/*
-		 * vm_normal_page() filters out zero pages, but there might
-		 * still be PageReserved pages to skip, perhaps in a VDSO.
+		 * vm_normal_folio() filters out zero pages, but there might
+		 * still be reserved folios to skip, perhaps in a VDSO.
 		 */
-		if (PageReserved(page))
+		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(page, qp))
+		if (!queue_pages_required(&folio->page, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -544,7 +544,7 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range.  Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(page, qp->pagelist, flags))
+			if (migrate_page_add(&folio->page, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -703,7 +703,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
 	.hugetlb_entry		= queue_pages_hugetlb,
-	.pmd_entry		= queue_pages_pte_range,
+	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
 
-- 
2.38.1

From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable 3/5] mm/mempolicy: Convert queue_pages_hugetlb() to queue_folios_hugetlb()
Date: Wed, 18 Jan 2023 15:22:17 -0800
Message-Id: <20230118232219.27038-4-vishal.moola@gmail.com>
In-Reply-To: <20230118232219.27038-1-vishal.moola@gmail.com>
References: <20230118232219.27038-1-vishal.moola@gmail.com>

This change is in preparation for the conversion of
queue_pages_required() to queue_folio_required() and migrate_page_add()
to migrate_folio_add().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index ae9d16124f45..0b82f8159541 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -558,7 +558,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 	return addr != end ?
		-EIO : 0;
 }
 
-static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 			       unsigned long addr, unsigned long end,
 			       struct mm_walk *walk)
 {
@@ -566,7 +566,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 #ifdef CONFIG_HUGETLB_PAGE
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = (qp->flags & MPOL_MF_VALID);
-	struct page *page;
+	struct folio *folio;
 	spinlock_t *ptl;
 	pte_t entry;
 
@@ -574,13 +574,13 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	entry = huge_ptep_get(pte);
 	if (!pte_present(entry))
 		goto unlock;
-	page = pte_page(entry);
-	if (!queue_pages_required(page, qp))
+	folio = pfn_folio(pte_pfn(entry));
+	if (!queue_pages_required(&folio->page, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
 		/*
-		 * STRICT alone means only detecting misplaced page and no
+		 * STRICT alone means only detecting misplaced folio and no
 		 * need to further check other vma.
 		 */
 		ret = -EIO;
@@ -591,7 +591,7 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 	/*
 	 * Must be STRICT with MOVE*, otherwise .test_walk() have
 	 * stopped walking current vma.
-	 * Detecting misplaced page but allow migrating pages which
+	 * Detecting misplaced folio but allow migrating folios which
 	 * have been queued.
 	 */
 	ret = 1;
@@ -600,11 +600,11 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
 
 	/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
 	if (flags & (MPOL_MF_MOVE_ALL) ||
-	    (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
-		if (isolate_hugetlb(page_folio(page), qp->pagelist) &&
+	    (flags & MPOL_MF_MOVE && folio_mapcount(folio) == 1)) {
+		if (isolate_hugetlb(folio, qp->pagelist) &&
 		    (flags & MPOL_MF_STRICT))
 			/*
-			 * Failed to isolate page but allow migrating pages
+			 * Failed to isolate folio but allow migrating folios
 			 * which have been queued.
 			 */
 			ret = 1;
@@ -702,7 +702,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 }
 
 static const struct mm_walk_ops queue_pages_walk_ops = {
-	.hugetlb_entry		= queue_pages_hugetlb,
+	.hugetlb_entry		= queue_folios_hugetlb,
 	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
 };
-- 
2.38.1

From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable 4/5] mm/mempolicy: Convert queue_pages_required() to queue_folio_required()
Date: Wed, 18 Jan 2023 15:22:18 -0800
Message-Id: <20230118232219.27038-5-vishal.moola@gmail.com>
In-Reply-To: <20230118232219.27038-1-vishal.moola@gmail.com>
References: <20230118232219.27038-1-vishal.moola@gmail.com>

Replace queue_pages_required() with queue_folio_required().
queue_folio_required() does the same as queue_pages_required(), except
it takes in a folio instead of a page.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0b82f8159541..0a3690ecab7d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -427,15 +427,15 @@ struct queue_pages {
 };
 
 /*
- * Check if the page's nid is in qp->nmask.
+ * Check if the folio's nid is in qp->nmask.
  *
  * If MPOL_MF_INVERT is set in qp->flags, check if the nid is
  * in the invert of qp->nmask.
  */
-static inline bool queue_pages_required(struct page *page,
+static inline bool queue_folio_required(struct folio *folio,
 					struct queue_pages *qp)
 {
-	int nid = page_to_nid(page);
+	int nid = folio_nid(folio);
 	unsigned long flags = qp->flags;
 
 	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
@@ -469,7 +469,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 		walk->action = ACTION_CONTINUE;
 		goto unlock;
 	}
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	flags = qp->flags;
@@ -530,7 +530,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 		 */
 		if (folio_test_reserved(folio))
 			continue;
-		if (!queue_pages_required(&folio->page, qp))
+		if (!queue_folio_required(folio, qp))
 			continue;
 		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 			/* MPOL_MF_STRICT must be specified if we get here */
@@ -575,7 +575,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
 	if (!pte_present(entry))
 		goto unlock;
 	folio = pfn_folio(pte_pfn(entry));
-	if (!queue_pages_required(&folio->page, qp))
+	if (!queue_folio_required(folio, qp))
 		goto unlock;
 
 	if (flags == MPOL_MF_STRICT) {
-- 
2.38.1
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH mm-unstable 5/5] mm/mempolicy: Convert migrate_page_add() to migrate_folio_add()
Date: Wed, 18 Jan 2023 15:22:19 -0800
Message-Id: <20230118232219.27038-6-vishal.moola@gmail.com>
In-Reply-To: <20230118232219.27038-1-vishal.moola@gmail.com>
References: <20230118232219.27038-1-vishal.moola@gmail.com>

Replace migrate_page_add() with migrate_folio_add(). migrate_folio_add()
does the same as migrate_page_add() but takes in a folio instead of a
page. This removes a couple of calls to compound_head().
Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/mempolicy.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0a3690ecab7d..253ce368cf16 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -414,7 +414,7 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
 	},
 };
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags);
 
 struct queue_pages {
@@ -476,7 +476,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
 	/* go to folio migration */
 	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
 		if (!vma_migratable(walk->vma) ||
-		    migrate_page_add(&folio->page, qp->pagelist, flags)) {
+		    migrate_folio_add(folio, qp->pagelist, flags)) {
 			ret = 1;
 			goto unlock;
 		}
@@ -544,7 +544,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			 * temporary off LRU pages in the range.  Still
 			 * need migrate other LRU pages.
 			 */
-			if (migrate_page_add(&folio->page, qp->pagelist, flags))
+			if (migrate_folio_add(folio, qp->pagelist, flags))
 				has_unmovable = true;
 		} else
 			break;
@@ -1022,27 +1022,23 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 }
 
 #ifdef CONFIG_MIGRATION
-/*
- * page migration, thp tail pages can be passed.
- */
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags)
 {
-	struct page *head = compound_head(page);
 	/*
-	 * Avoid migrating a page that is shared with others.
+	 * Avoid migrating a folio that is shared with others.
 	 */
-	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(head) == 1) {
-		if (!isolate_lru_page(head)) {
-			list_add_tail(&head->lru, pagelist);
-			mod_node_page_state(page_pgdat(head),
-				NR_ISOLATED_ANON + page_is_file_lru(head),
-				thp_nr_pages(head));
+	if ((flags & MPOL_MF_MOVE_ALL) || folio_mapcount(folio) == 1) {
+		if (!folio_isolate_lru(folio)) {
+			list_add_tail(&folio->lru, foliolist);
+			node_stat_mod_folio(folio,
+				NR_ISOLATED_ANON + folio_is_file_lru(folio),
+				folio_nr_pages(folio));
 		} else if (flags & MPOL_MF_STRICT) {
 			/*
-			 * Non-movable page may reach here. And, there may be
-			 * temporary off LRU pages or non-LRU movable pages.
-			 * Treat them as unmovable pages since they can't be
+			 * Non-movable folio may reach here. And, there may be
+			 * temporary off LRU folios or non-LRU movable folios.
+			 * Treat them as unmovable folios since they can't be
 			 * isolated, so they can't be moved at the moment. It
 			 * should return -EIO for this case too.
 			 */
@@ -1234,7 +1230,7 @@ static struct page *new_page(struct page *page, unsigned long start)
 }
 #else
 
-static int migrate_page_add(struct page *page, struct list_head *pagelist,
+static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
 				unsigned long flags)
 {
 	return -EIO;
-- 
2.38.1