Date: Fri, 21 Oct 2022 16:36:44 +0000
In-Reply-To: <20221021163703.3218176-1-jthoughton@google.com>
Mime-Version: 1.0
References: <20221021163703.3218176-1-jthoughton@google.com>
X-Mailer: git-send-email 2.38.0.135.g90850a2211-goog
Message-ID: <20221021163703.3218176-29-jthoughton@google.com>
Subject: [RFC PATCH v2 28/47] rmap: in try_to_{migrate,unmap}_one, check
 head page for page flags
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
 "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
 "Dr. David Alan Gilbert", "Matthew Wilcox (Oracle)", Vlastimil Babka,
 Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, James Houghton

The main complication here is that HugeTLB pages have their poison
status stored in the head page as the HWPoison page flag.
Because HugeTLB high-granularity mapping can create PTEs that point to
subpages instead of always the head of a hugepage, we need to check the
compound_head for page flags.

Signed-off-by: James Houghton
---
 mm/rmap.c | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index a8359584467e..d5e1eb6b8ce5 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1474,10 +1474,11 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
 	pte_t pteval;
-	struct page *subpage;
+	struct page *subpage, *page_flags_page;
 	bool anon_exclusive, ret = true;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
+	bool page_poisoned;
 
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
@@ -1530,9 +1531,17 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 
 		subpage = folio_page(folio,
					pte_pfn(*pvmw.pte) - folio_pfn(folio));
+		/*
+		 * We check the page flags of HugeTLB pages by checking the
+		 * head page.
+		 */
+		page_flags_page = folio_test_hugetlb(folio)
+					? &folio->page
+					: subpage;
+		page_poisoned = PageHWPoison(page_flags_page);
 		address = pvmw.address;
 		anon_exclusive = folio_test_anon(folio) &&
-				 PageAnonExclusive(subpage);
+				 PageAnonExclusive(page_flags_page);
 
 		if (folio_test_hugetlb(folio)) {
 			bool anon = folio_test_anon(folio);
@@ -1541,7 +1550,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 * The try_to_unmap() is only passed a hugetlb page
 			 * in the case where the hugetlb page is poisoned.
 			 */
-			VM_BUG_ON_PAGE(!PageHWPoison(subpage), subpage);
+			VM_BUG_ON_FOLIO(!page_poisoned, folio);
 			/*
 			 * huge_pmd_unshare may unmap an entire PMD page.
 			 * There is no way of knowing exactly which PMDs may
@@ -1630,7 +1639,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		/* Update high watermark before we lower rss */
 		update_hiwater_rss(mm);
 
-		if (PageHWPoison(subpage) && !(flags & TTU_IGNORE_HWPOISON)) {
+		if (page_poisoned && !(flags & TTU_IGNORE_HWPOISON)) {
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
 				hugetlb_count_sub(1UL << pvmw.pte_order, mm);
@@ -1656,7 +1665,9 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			mmu_notifier_invalidate_range(mm, address,
						      address + PAGE_SIZE);
 		} else if (folio_test_anon(folio)) {
-			swp_entry_t entry = { .val = page_private(subpage) };
+			swp_entry_t entry = {
+				.val = page_private(page_flags_page)
+			};
 			pte_t swp_pte;
 			/*
 			 * Store the swap location in the pte.
@@ -1855,7 +1866,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
 	pte_t pteval;
-	struct page *subpage;
+	struct page *subpage, *page_flags_page;
 	bool anon_exclusive, ret = true;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
@@ -1935,9 +1946,16 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			subpage = folio_page(folio,
					pte_pfn(*pvmw.pte) - folio_pfn(folio));
 		}
+		/*
+		 * We check the page flags of HugeTLB pages by checking the
+		 * head page.
+		 */
+		page_flags_page = folio_test_hugetlb(folio)
+					? &folio->page
+					: subpage;
 		address = pvmw.address;
 		anon_exclusive = folio_test_anon(folio) &&
-				 PageAnonExclusive(subpage);
+				 PageAnonExclusive(page_flags_page);
 
 		if (folio_test_hugetlb(folio)) {
 			bool anon = folio_test_anon(folio);
@@ -2048,7 +2066,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 * No need to invalidate here it will synchronize on
 			 * against the special swap migration pte.
 			 */
-		} else if (PageHWPoison(subpage)) {
+		} else if (PageHWPoison(page_flags_page)) {
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
 				hugetlb_count_sub(1L << pvmw.pte_order, mm);
-- 
2.38.0.135.g90850a2211-goog