From: James Houghton
Date: Thu, 5 Jan 2023 10:18:29 +0000
Subject: [PATCH 31/46] hugetlb: sort hstates in hugetlb_init_hstates
Message-ID: <20230105101844.1893104-32-jthoughton@google.com>
In-Reply-To: <20230105101844.1893104-1-jthoughton@google.com>
References: <20230105101844.1893104-1-jthoughton@google.com>
X-Mailer: git-send-email 2.39.0.314.g84b9a713c41-goog
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    Zach O'Keefe, Manish Mishra, Naoya Horiguchi, Dr. David Alan Gilbert,
    Matthew Wilcox (Oracle), Vlastimil Babka, Baolin Wang, Miaohe Lin,
    Yang Shi, Andrew Morton, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, James Houghton
David Alan Gilbert" , "Matthew Wilcox (Oracle)" , Vlastimil Babka , Baolin Wang , Miaohe Lin , Yang Shi , Andrew Morton , linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" When using HugeTLB high-granularity mapping, we need to go through the supported hugepage sizes in decreasing order so that we pick the largest size that works. Consider the case where we're faulting in a 1G hugepage for the first time: we want hugetlb_fault/hugetlb_no_page to map it with a PUD. By going through the sizes in decreasing order, we will find that PUD_SIZE works before finding out that PMD_SIZE or PAGE_SIZE work too. This commit also changes bootmem hugepages from storing hstate pointers directly to storing the hstate sizes. The hstate pointers used for boot-time-allocated hugepages become invalid after we sort the hstates. `gather_bootmem_prealloc`, called after the hstates have been sorted, now converts the size to the correct hstate. Signed-off-by: James Houghton --- include/linux/hugetlb.h | 2 +- mm/hugetlb.c | 49 ++++++++++++++++++++++++++++++++--------- 2 files changed, 40 insertions(+), 11 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index daf993fdbc38..8a664a9dd0a8 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -789,7 +789,7 @@ struct hstate { =20 struct huge_bootmem_page { struct list_head list; - struct hstate *hstate; + unsigned long hstate_sz; }; =20 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *lis= t); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 2fb95ecafc63..1e9e149587b3 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -34,6 +34,7 @@ #include #include #include +#include =20 #include #include @@ -49,6 +50,10 @@ =20 int hugetlb_max_hstate __read_mostly; unsigned int default_hstate_idx; +/* + * After hugetlb_init_hstates is called, hstates will be sorted from large= st + * to smallest. + */ struct hstate hstates[HUGE_MAX_HSTATE]; =20 #ifdef CONFIG_CMA @@ -3347,7 +3352,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int n= id) /* Put them into a private list first because mem_map is not up yet */ INIT_LIST_HEAD(&m->list); list_add(&m->list, &huge_boot_pages); - m->hstate =3D h; + m->hstate_sz =3D huge_page_size(h); return 1; } =20 @@ -3362,7 +3367,7 @@ static void __init gather_bootmem_prealloc(void) list_for_each_entry(m, &huge_boot_pages, list) { struct page *page =3D virt_to_page(m); struct folio *folio =3D page_folio(page); - struct hstate *h =3D m->hstate; + struct hstate *h =3D size_to_hstate(m->hstate_sz); =20 VM_BUG_ON(!hstate_is_gigantic(h)); WARN_ON(folio_ref_count(folio) !=3D 1); @@ -3478,9 +3483,38 @@ static void __init hugetlb_hstate_alloc_pages(struct= hstate *h) kfree(node_alloc_noretry); } =20 +static int compare_hstates_decreasing(const void *a, const void *b) +{ + unsigned long sz_a =3D huge_page_size((const struct hstate *)a); + unsigned long sz_b =3D huge_page_size((const struct hstate *)b); + + if (sz_a < sz_b) + return 1; + if (sz_a > sz_b) + return -1; + return 0; +} + +static void sort_hstates(void) +{ + unsigned long default_hstate_sz =3D huge_page_size(&default_hstate); + + /* Sort from largest to smallest. */ + sort(hstates, hugetlb_max_hstate, sizeof(*hstates), + compare_hstates_decreasing, NULL); + + /* + * We may have changed the location of the default hstate, so we need to + * update it. 

 include/linux/hugetlb.h |  2 +-
 mm/hugetlb.c            | 49 ++++++++++++++++++++++++++++++++---------
 2 files changed, 40 insertions(+), 11 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index daf993fdbc38..8a664a9dd0a8 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -789,7 +789,7 @@ struct hstate {
 
 struct huge_bootmem_page {
 	struct list_head list;
-	struct hstate *hstate;
+	unsigned long hstate_sz;
 };
 
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2fb95ecafc63..1e9e149587b3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include <linux/sort.h>
 
 #include
 #include
@@ -49,6 +50,10 @@
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
+/*
+ * After hugetlb_init_hstates is called, hstates will be sorted from largest
+ * to smallest.
+ */
 struct hstate hstates[HUGE_MAX_HSTATE];
 
 #ifdef CONFIG_CMA
@@ -3347,7 +3352,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	/* Put them into a private list first because mem_map is not up yet */
 	INIT_LIST_HEAD(&m->list);
 	list_add(&m->list, &huge_boot_pages);
-	m->hstate = h;
+	m->hstate_sz = huge_page_size(h);
 	return 1;
 }
 
@@ -3362,7 +3367,7 @@ static void __init gather_bootmem_prealloc(void)
 	list_for_each_entry(m, &huge_boot_pages, list) {
 		struct page *page = virt_to_page(m);
 		struct folio *folio = page_folio(page);
-		struct hstate *h = m->hstate;
+		struct hstate *h = size_to_hstate(m->hstate_sz);
 
 		VM_BUG_ON(!hstate_is_gigantic(h));
 		WARN_ON(folio_ref_count(folio) != 1);
@@ -3478,9 +3483,38 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	kfree(node_alloc_noretry);
 }
 
+static int compare_hstates_decreasing(const void *a, const void *b)
+{
+	unsigned long sz_a = huge_page_size((const struct hstate *)a);
+	unsigned long sz_b = huge_page_size((const struct hstate *)b);
+
+	if (sz_a < sz_b)
+		return 1;
+	if (sz_a > sz_b)
+		return -1;
+	return 0;
+}
+
+static void sort_hstates(void)
+{
+	unsigned long default_hstate_sz = huge_page_size(&default_hstate);
+
+	/* Sort from largest to smallest. */
+	sort(hstates, hugetlb_max_hstate, sizeof(*hstates),
+	     compare_hstates_decreasing, NULL);
+
+	/*
+	 * We may have changed the location of the default hstate, so we need to
+	 * update it.
+	 */
+	default_hstate_idx = hstate_index(size_to_hstate(default_hstate_sz));
+}
+
 static void __init hugetlb_init_hstates(void)
 {
-	struct hstate *h, *h2;
+	struct hstate *h;
+
+	sort_hstates();
 
 	for_each_hstate(h) {
 		/* oversize hugepages were init'ed in early boot */
@@ -3499,13 +3533,8 @@ static void __init hugetlb_init_hstates(void)
 			continue;
 		if (hugetlb_cma_size && h->order <= HUGETLB_PAGE_ORDER)
 			continue;
-		for_each_hstate(h2) {
-			if (h2 == h)
-				continue;
-			if (h2->order < h->order &&
-			    h2->order > h->demote_order)
-				h->demote_order = h2->order;
-		}
+		if (h + 1 < &hstates[hugetlb_max_hstate])
+			h->demote_order = huge_page_order(h + 1);
 	}
 }
 
-- 
2.39.0.314.g84b9a713c41-goog