Date: Mon, 25 Sep 2023 01:32:02 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Andi Kleen, Christoph Lameter, Matthew Wilcox, Mike Kravetz,
	David Hildenbrand, Suren Baghdasaryan, Yang Shi, Sidhartha Kumar,
	Vishal Moola, Kefeng Wang, Greg Kroah-Hartman, Tejun Heo,
	Mel Gorman, Michal Hocko, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 09/12] mm: add page_rmappable_folio() wrapper
In-Reply-To: <2d872cef-7787-a7ca-10e-9d45a64c80b4@google.com>
References: <2d872cef-7787-a7ca-10e-9d45a64c80b4@google.com>

folio_prep_large_rmappable() is being used repeatedly along with a
conversion from page to folio, a check for non-NULL, and a check for
order > 1: wrap it all up into struct folio *page_rmappable_folio(struct
page *).

Signed-off-by: Hugh Dickins
---
 include/linux/huge_mm.h | 13 +++++++++++++
 mm/mempolicy.c          | 17 +++--------------
 mm/page_alloc.c         |  8 ++------
 3 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index fa0350b0812a..58e7662a8a62 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -141,6 +141,15 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 
 void folio_prep_large_rmappable(struct folio *folio);
+static inline struct folio *page_rmappable_folio(struct page *page)
+{
+	struct folio *folio = (struct folio *)page;
+
+	if (folio && folio_order(folio) > 1)
+		folio_prep_large_rmappable(folio);
+	return folio;
+}
+
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
@@ -281,6 +290,10 @@ static inline bool hugepage_vma_check(struct vm_area_struct *vma,
 }
 
 static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *page_rmappable_folio(struct page *page)
+{
+	return (struct folio *)page;
+}
 
 #define transparent_hugepage_flags 0UL
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 7ab6102d7da4..4c3b3f535630 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2137,10 +2137,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		mpol_cond_put(pol);
 		gfp |= __GFP_COMP;
 		page = alloc_page_interleave(gfp, order, nid);
-		folio = (struct folio *)page;
-		if (folio && order > 1)
-			folio_prep_large_rmappable(folio);
-		goto out;
+		return page_rmappable_folio(page);
 	}
 
 	if (pol->mode == MPOL_PREFERRED_MANY) {
@@ -2150,10 +2147,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
 		gfp |= __GFP_COMP;
 		page = alloc_pages_preferred_many(gfp, order, node, pol);
 		mpol_cond_put(pol);
-		folio = (struct folio *)page;
-		if (folio && order > 1)
-			folio_prep_large_rmappable(folio);
-		goto out;
+		return page_rmappable_folio(page);
 	}
 
 	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
@@ -2247,12 +2241,7 @@ EXPORT_SYMBOL(alloc_pages);
 
 struct folio *folio_alloc(gfp_t gfp, unsigned order)
 {
-	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
-	struct folio *folio = (struct folio *)page;
-
-	if (folio && order > 1)
-		folio_prep_large_rmappable(folio);
-	return folio;
+	return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order));
 }
 EXPORT_SYMBOL(folio_alloc);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95546f376302..5b1707d9025a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4456,12 +4456,8 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
 		nodemask_t *nodemask)
 {
 	struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
-			preferred_nid, nodemask);
-	struct folio *folio = (struct folio *)page;
-
-	if (folio && order > 1)
-		folio_prep_large_rmappable(folio);
-	return folio;
+			preferred_nid, nodemask);
+	return page_rmappable_folio(page);
 }
 EXPORT_SYMBOL(__folio_alloc);
-- 
2.35.3
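
[ Aside, not part of the patch: a self-contained user-space sketch of
the same refactoring, in case it helps review. struct page, struct
folio, folio_order() and folio_prep_large_rmappable() below are toy
stand-ins for the kernel definitions, not the real ones. ]

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins: in the kernel, struct folio overlays struct page. */
struct page  { unsigned int order; };
struct folio { unsigned int order; };

static unsigned int folio_order(const struct folio *folio)
{
	return folio->order;
}

static void folio_prep_large_rmappable(struct folio *folio)
{
	printf("prepared order-%u folio for rmap\n", folio->order);
}

/*
 * The wrapper the patch adds: page-to-folio conversion, the non-NULL
 * check and the order > 1 check, all kept in one place.
 */
static struct folio *page_rmappable_folio(struct page *page)
{
	struct folio *folio = (struct folio *)page;

	if (folio && folio_order(folio) > 1)
		folio_prep_large_rmappable(folio);
	return folio;
}

int main(void)
{
	struct page page = { .order = 2 };

	/* Each former open-coded sequence collapses to one call. */
	struct folio *folio = page_rmappable_folio(&page);

	return folio ? EXIT_SUCCESS : EXIT_FAILURE;
}

With every caller reduced to a single call, the order > 1 rule and the
prep step can later change in one place rather than at four call sites.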