From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    David Hildenbrand, Andrew Morton, Matthew Wilcox, Peter Xu,
    Catalin Marinas, Will Deacon, Hugh Dickins, Seth Jennings,
    Dan Streetman, Vitaly Wool
Subject: [PATCH mm-unstable v1 4/4] mm/huge_memory: work on folio->swap instead of page->private when splitting folio
Date: Mon, 21 Aug 2023 18:08:49 +0200
Message-ID: <20230821160849.531668-5-david@redhat.com>
In-Reply-To: <20230821160849.531668-1-david@redhat.com>
References: <20230821160849.531668-1-david@redhat.com>

Let's work on folio->swap instead. While at it, use folio_test_anon()
and folio_test_swapcache() -- the original folio remains valid even
after splitting (but is then an order-0 folio).

We can probably convert a lot more to folios in that code; let's focus
on folio->swap handling only for now.
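
Not part of this patch, but for illustration: a minimal sketch of the
access pattern this change moves us to, assuming the folio is known to
be anonymous and in the swap cache (the "entry" local is hypothetical):

	/* Before: reinterpret the raw page->private value as a swp_entry_t. */
	swp_entry_t entry = { .val = page_private(&folio->page) };

	/* After: the folio carries its swap entry directly in folio->swap. */
	swp_entry_t entry = folio->swap;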
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Chris Li
---
 mm/huge_memory.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c04702ae71d2..4465915711c3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2401,10 +2401,16 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct page *head, int tail,
+static void __split_huge_page_tail(struct folio *folio, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
+	struct page *head = &folio->page;
 	struct page *page_tail = head + tail;
+	/*
+	 * Careful: new_folio is not a "real" folio before we cleared PageTail.
+	 * Don't pass it around before clear_compound_head().
+	 */
+	struct folio *new_folio = (struct folio *)page_tail;
 
 	VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
 
@@ -2453,8 +2459,8 @@ static void __split_huge_page_tail(struct page *head, int tail,
 		VM_WARN_ON_ONCE_PAGE(true, page_tail);
 		page_tail->private = 0;
 	}
-	if (PageSwapCache(head))
-		set_page_private(page_tail, (unsigned long)head->private + tail);
+	if (folio_test_swapcache(folio))
+		new_folio->swap.val = folio->swap.val + tail;
 
 	/* Page flags must be visible before we make the page non-compound. */
 	smp_wmb();
@@ -2500,11 +2506,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* complete memcg works before add pages to LRU */
 	split_page_memcg(head, nr);
 
-	if (PageAnon(head) && PageSwapCache(head)) {
-		swp_entry_t entry = { .val = page_private(head) };
-
-		offset = swp_offset(entry);
-		swap_cache = swap_address_space(entry);
+	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
+		offset = swp_offset(folio->swap);
+		swap_cache = swap_address_space(folio->swap);
 		xa_lock(&swap_cache->i_pages);
 	}
 
@@ -2514,7 +2518,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	ClearPageHasHWPoisoned(head);
 
 	for (i = nr - 1; i >= 1; i--) {
-		__split_huge_page_tail(head, i, lruvec, list);
+		__split_huge_page_tail(folio, i, lruvec, list);
 		/* Some pages can be beyond EOF: drop them from page cache */
 		if (head[i].index >= end) {
 			struct folio *tail = page_folio(head + i);
@@ -2559,11 +2563,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	remap_page(folio, nr);
 
-	if (PageSwapCache(head)) {
-		swp_entry_t entry = { .val = page_private(head) };
-
-		split_swap_cluster(entry);
-	}
+	if (folio_test_swapcache(folio))
+		split_swap_cluster(folio->swap);
 
 	for (i = 0; i < nr; i++) {
 		struct page *subpage = head + i;
-- 
2.41.0