From: SeongJae Park
To: Andrew Morton
Cc: SeongJae Park, willy@infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] include/linux/page-flags: add folio_headpage()
Date: Fri, 6 Jan 2023 17:40:26 +0000
Message-Id: <20230106174028.151384-2-sj@kernel.org>
In-Reply-To: <20230106174028.151384-1-sj@kernel.org>
References: <20230106174028.151384-1-sj@kernel.org>

The standard idiom for getting the head page of a given folio is
'&folio->page'.  It is efficient, and it is safe even if the folio is
NULL, because the offset of the 'page' field in 'struct folio' is zero.
However, the code is not that easy to understand at first glance,
especially the NULL safety.  Also, people sometimes forget the idiom
and use 'folio_page(folio, 0)' instead.  To make it easier to read and
remember, add a new macro, 'folio_headpage()', documenting the NULL
case.

Signed-off-by: SeongJae Park
---
 include/linux/page-flags.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 69e93a0c1277..5a22bd823a5d 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -285,6 +285,14 @@ static inline unsigned long _compound_head(const struct page *page)
  */
 #define folio_page(folio, n)	nth_page(&(folio)->page, n)
 
+/**
+ * folio_headpage - Return the head page from a folio.
+ * @folio: The pointer to the folio.
+ *
+ * Return: The head page of the folio, or NULL if the folio is NULL.
+ */
+#define folio_headpage(folio)	(&(folio)->page)
+
 static __always_inline int PageTail(struct page *page)
 {
 	return READ_ONCE(page->compound_head) & 1 || page_is_fake_head(page);
-- 
2.25.1


From: SeongJae Park
To: Andrew Morton
Cc: SeongJae Park, willy@infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] mm: use folio_headpage() instead of folio_page()
Date: Fri, 6 Jan 2023 17:40:27 +0000
Message-Id: <20230106174028.151384-3-sj@kernel.org>
In-Reply-To: <20230106174028.151384-1-sj@kernel.org>
References: <20230106174028.151384-1-sj@kernel.org>

Several places in mm use 'folio_page(folio, 0)' to get the head pages
of folios.  That is not the standard idiom, and it is inefficient.
Replace those calls with 'folio_headpage()'.
Signed-off-by: SeongJae Park
---
 mm/shmem.c       | 4 ++--
 mm/slab.c        | 6 +++---
 mm/slab_common.c | 4 ++--
 mm/slub.c        | 4 ++--
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index bc5c156ef470..8ae73973a7fc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3211,7 +3211,7 @@ static const char *shmem_get_link(struct dentry *dentry,
 		folio = filemap_get_folio(inode->i_mapping, 0);
 		if (!folio)
 			return ERR_PTR(-ECHILD);
-		if (PageHWPoison(folio_page(folio, 0)) ||
+		if (PageHWPoison(folio_headpage(folio)) ||
 		    !folio_test_uptodate(folio)) {
 			folio_put(folio);
 			return ERR_PTR(-ECHILD);
@@ -3222,7 +3222,7 @@ static const char *shmem_get_link(struct dentry *dentry,
 			return ERR_PTR(error);
 		if (!folio)
 			return ERR_PTR(-ECHILD);
-		if (PageHWPoison(folio_page(folio, 0))) {
+		if (PageHWPoison(folio_headpage(folio))) {
 			folio_unlock(folio);
 			folio_put(folio);
 			return ERR_PTR(-ECHILD);
diff --git a/mm/slab.c b/mm/slab.c
index 7a269db050ee..a6f8f95678c9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1373,7 +1373,7 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 	/* Make the flag visible before any changes to folio->mapping */
 	smp_wmb();
 	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
-	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
+	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_headpage(folio)))
 		slab_set_pfmemalloc(slab);
 
 	return slab;
@@ -1389,7 +1389,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 
 	BUG_ON(!folio_test_slab(folio));
 	__slab_clear_pfmemalloc(slab);
-	page_mapcount_reset(folio_page(folio, 0));
+	page_mapcount_reset(folio_headpage(folio));
 	folio->mapping = NULL;
 	/* Make the mapping reset visible before clearing the flag */
 	smp_wmb();
@@ -1398,7 +1398,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += 1 << order;
 	unaccount_slab(slab, order, cachep);
-	__free_pages(folio_page(folio, 0), order);
+	__free_pages(folio_headpage(folio), order);
 }
 
 static void kmem_rcu_free(struct rcu_head *head)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bf4e777cfe90..34a0b9988d12 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -939,9 +939,9 @@ void free_large_kmalloc(struct folio *folio, void *object)
 	kasan_kfree_large(object);
 	kmsan_kfree_large(object);
 
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+	mod_lruvec_page_state(folio_headpage(folio), NR_SLAB_UNRECLAIMABLE_B,
			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
+	__free_pages(folio_headpage(folio), order);
 }
 
 static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
diff --git a/mm/slub.c b/mm/slub.c
index 13459c69095a..1f0cbb4c2288 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1859,7 +1859,7 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 	__folio_set_slab(folio);
 	/* Make the flag visible before any changes to folio->mapping */
 	smp_wmb();
-	if (page_is_pfmemalloc(folio_page(folio, 0)))
+	if (page_is_pfmemalloc(folio_headpage(folio)))
 		slab_set_pfmemalloc(slab);
 
 	return slab;
@@ -2066,7 +2066,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
 	unaccount_slab(slab, order, s);
-	__free_pages(folio_page(folio, 0), order);
+	__free_pages(folio_headpage(folio), order);
 }
 
 static void rcu_free_slab(struct rcu_head *h)
-- 
2.25.1


From: SeongJae Park
To: Andrew Morton
Cc: willy@infradead.org, Xiubo Li, Ilya Dryomov, Jeff Layton, ceph-devel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, SeongJae Park
Subject: [PATCH 3/3] fs/ceph/addr: use folio_headpage() instead of folio_page()
Date: Fri, 6 Jan 2023 17:40:28 +0000
Message-Id: <20230106174028.151384-4-sj@kernel.org>
In-Reply-To: <20230106174028.151384-1-sj@kernel.org>
References: <20230106174028.151384-1-sj@kernel.org>

Using 'folio_page(folio, 0)' to get the head page of a folio is not the
standard idiom, and it is inefficient.  Replace the call in fs/ceph/
with 'folio_headpage()'.

Signed-off-by: SeongJae Park
---
 fs/ceph/addr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 8c74871e37c9..b76e94152b21 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1290,7 +1290,7 @@ static int ceph_netfs_check_write_begin(struct file *file, loff_t pos, unsigned
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	struct ceph_snap_context *snapc;
 
-	snapc = ceph_find_incompatible(folio_page(*foliop, 0));
+	snapc = ceph_find_incompatible(folio_headpage(*foliop));
 	if (snapc) {
 		int r;
 
-- 
2.25.1
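
A minimal, self-contained userspace sketch (not part of the series, and
not kernel code) illustrating the NULL-safety argument in patch 1:
because 'page' is the first member of 'struct folio', '&folio->page' is
just the folio pointer itself, so a NULL folio yields a NULL head page.
The 'struct page' and 'struct folio' definitions below are simplified
stand-ins, not the real kernel layouts.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins; the only property relied on here is that
 * 'page' is the first member of 'struct folio', as in the kernel. */
struct page { unsigned long flags; };
struct folio { struct page page; };

/* Same shape as the macro added in patch 1. */
#define folio_headpage(folio)	(&(folio)->page)

int main(void)
{
	struct folio f = { .page = { .flags = 0 } };
	struct folio *null_folio = NULL;

	/* The head page sits at offset zero inside the folio ... */
	assert(offsetof(struct folio, page) == 0);
	/* ... so the macro is just the folio pointer, reinterpreted. */
	assert((void *)folio_headpage(&f) == (void *)&f);

	/* NULL in, NULL out: taking the address of the first member does
	 * not dereference the pointer, which is the NULL-safety point
	 * made in the commit message. */
	assert(folio_headpage(null_folio) == NULL);

	printf("folio_headpage() preserves NULL and the folio address\n");
	return 0;
}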