Date: Fri, 20 Mar 2026 18:23:33 +0000
In-Reply-To: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260320-page_alloc-unmapped-v2-9-28bf1bd54f41@google.com>
Subject: [PATCH v2 09/22] mm/page_alloc: don't overload migratetype in find_suitable_fallback()
From: Brendan Jackman
To: Borislav Petkov, Dave Hansen, Peter Zijlstra, Andrew Morton,
 David Hildenbrand, Vlastimil Babka, Wei Xu, Johannes Weiner, Zi Yan,
 Lorenzo Stoakes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org,
 rppt@kernel.org, Sumit Garg, derkling@google.com, reijiw@google.com,
 Will Deacon, rientjes@google.com, "Kalyazin, Nikita",
 patrick.roy@linux.dev, "Itazuri, Takahiro", Andy Lutomirski,
 David Kaplan, Thomas Gleixner, Brendan Jackman, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

This function currently returns a signed integer that encodes status
in-band, as negative numbers, along with a migratetype. This function is
about to be updated to a mode where this in-band signaling no longer
makes sense. Therefore, switch to a more explicit/verbose style that
encodes the status and migratetype separately.

In the spirit of making things more explicit, also create an enum to
avoid using magic integer literals with special meanings. This enables
documenting the values at their definition instead of in one of the
callers.

Signed-off-by: Brendan Jackman
---
 mm/compaction.c |  3 ++-
 mm/internal.h   | 14 +++++++++++---
 mm/page_alloc.c | 40 +++++++++++++++++++++++-----------------
 3 files changed, 36 insertions(+), 21 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 32623894a6327..25371a75471dd 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2358,7 +2358,8 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 		 * Job done if allocation would steal freepages from
 		 * other migratetype buddy lists.
 		 */
-		if (find_suitable_fallback(area, order, migratetype, true) >= 0)
+		if (find_suitable_fallback(area, order, migratetype, true, NULL)
+		    == FALLBACK_FOUND)
 			/*
 			 * Movable pages are OK in any pageblock. If we are
 			 * stealing for a non-movable allocation, make sure
diff --git a/mm/internal.h b/mm/internal.h
index f4c59534670e4..e3782721a588b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1059,9 +1059,17 @@ static inline void init_cma_pageblock(struct page *page)
 }
 #endif
 
-
-int find_suitable_fallback(struct free_area *area, unsigned int order,
-			   int migratetype, bool claimable);
+enum fallback_result {
+	/* Found suitable migratetype, *mt_out is valid. */
+	FALLBACK_FOUND,
+	/* No fallback found in requested order. */
+	FALLBACK_EMPTY,
+	/* Passed @claimable, but claiming whole block is a bad idea. */
+	FALLBACK_NOCLAIM,
+};
+enum fallback_result
+find_suitable_fallback(struct free_area *area, unsigned int order,
+		       int migratetype, bool claimable, unsigned int *mt_out);
 
 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0a1dc7866068f..ac077d98019f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2248,25 +2248,29 @@ static bool should_try_claim_block(unsigned int order, int start_mt)
  * we would do this whole-block claiming. This would help to reduce
  * fragmentation due to mixed migratetype pages in one pageblock.
  */
-int find_suitable_fallback(struct free_area *area, unsigned int order,
-			   int migratetype, bool claimable)
+enum fallback_result
+find_suitable_fallback(struct free_area *area, unsigned int order,
+		       int migratetype, bool claimable, unsigned int *mt_out)
 {
 	int i;
 
 	if (claimable && !should_try_claim_block(order, migratetype))
-		return -2;
+		return FALLBACK_NOCLAIM;
 
 	if (area->nr_free == 0)
-		return -1;
+		return FALLBACK_EMPTY;
 
 	for (i = 0; i < MIGRATE_PCPTYPES - 1 ; i++) {
 		int fallback_mt = fallbacks[migratetype][i];
 
-		if (!free_area_empty(area, fallback_mt))
-			return fallback_mt;
+		if (!free_area_empty(area, fallback_mt)) {
+			if (mt_out)
+				*mt_out = fallback_mt;
+			return FALLBACK_FOUND;
+		}
 	}
 
-	return -1;
+	return FALLBACK_EMPTY;
 }
 
 /*
@@ -2376,16 +2380,16 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
 	 */
 	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
 			--current_order) {
-		area = &(zone->free_area[current_order]);
-		fallback_mt = find_suitable_fallback(area, current_order,
-						start_migratetype, true);
+		enum fallback_result result;
 
-		/* No block in that order */
-		if (fallback_mt == -1)
+		area = &(zone->free_area[current_order]);
+		result = find_suitable_fallback(area, current_order,
+						start_migratetype, true, &fallback_mt);
+
+		if (result == FALLBACK_EMPTY)
 			continue;
 
-		/* Advanced into orders too low to claim, abort */
-		if (fallback_mt == -2)
+		if (result == FALLBACK_NOCLAIM)
 			break;
 
 		page = get_page_from_free_area(area, fallback_mt);
@@ -2415,10 +2419,12 @@ __rmqueue_steal(struct zone *zone, int order, int start_migratetype)
 	int fallback_mt;
 
 	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
+		enum fallback_result result;
+
 		area = &(zone->free_area[current_order]);
-		fallback_mt = find_suitable_fallback(area, current_order,
-						start_migratetype, false);
-		if (fallback_mt == -1)
+		result = find_suitable_fallback(area, current_order,
+						start_migratetype, false, &fallback_mt);
+		if (result == FALLBACK_EMPTY)
 			continue;
 
 		page = get_page_from_free_area(area, fallback_mt);
-- 
2.51.2