From nobody Mon Apr 6 09:18:49 2026
Date: Fri, 20 Mar 2026 18:23:32 +0000
In-Reply-To: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260320-page_alloc-unmapped-v2-0-28bf1bd54f41@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260320-page_alloc-unmapped-v2-8-28bf1bd54f41@google.com>
Subject: [PATCH v2 08/22] mm: introduce for_each_free_list()
From: Brendan Jackman
To: Borislav Petkov, Dave Hansen, Peter Zijlstra, Andrew Morton,
 David Hildenbrand, Vlastimil Babka, Wei Xu, Johannes Weiner, Zi Yan,
 Lorenzo Stoakes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org,
 rppt@kernel.org, Sumit Garg, derkling@google.com, reijiw@google.com,
 Will Deacon, rientjes@google.com, "Kalyazin, Nikita", patrick.roy@linux.dev,
 "Itazuri, Takahiro", Andy Lutomirski, David Kaplan, Thomas Gleixner,
 Brendan Jackman, Yosry Ahmed
Content-Type: text/plain; charset="utf-8"

Later patches will rearrange the free areas, but a couple of places
iterate over them assuming their current structure. Ideally, code
outside of mm would not be aware of struct free_area at all, but that
awareness is relatively harmless, so make only the minimal change:
instead of letting users iterate over the free lists by hand, provide
a macro that does it, and adopt that macro in the places that currently
open-code the iteration.
Signed-off-by: Brendan Jackman
---
 include/linux/mmzone.h  |  7 +++++--
 kernel/power/snapshot.c |  8 ++++----
 mm/mm_init.c            | 11 +++++++----
 3 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7bd0134c241ce..c49e3cdf4f6bb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -177,9 +177,12 @@ static inline bool migratetype_is_mergeable(int mt)
 	return mt < MIGRATE_PCPTYPES;
 }
 
-#define for_each_migratetype_order(order, type) \
+#define for_each_free_list(list, zone, order) \
 	for (order = 0; order < NR_PAGE_ORDERS; order++) \
-		for (type = 0; type < MIGRATE_TYPES; type++)
+		for (unsigned int type = 0; \
+		     list = &zone->free_area[order].free_list[type], \
+		     type < MIGRATE_TYPES; \
+		     type++) \
 
 extern int page_group_by_mobility_disabled;
 
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 7dcccf378cc2f..abd33ca13eec4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1245,8 +1245,9 @@ unsigned int snapshot_additional_pages(struct zone *zone)
 static void mark_free_pages(struct zone *zone)
 {
 	unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;
+	struct list_head *free_list;
 	unsigned long flags;
-	unsigned int order, t;
+	unsigned int order;
 	struct page *page;
 
 	if (zone_is_empty(zone))
@@ -1270,9 +1271,8 @@ static void mark_free_pages(struct zone *zone)
 			swsusp_unset_page_free(page);
 	}
 
-	for_each_migratetype_order(order, t) {
-		list_for_each_entry(page,
-				&zone->free_area[order].free_list[t], buddy_list) {
+	for_each_free_list(free_list, zone, order) {
+		list_for_each_entry(page, free_list, buddy_list) {
 			unsigned long i;
 
 			pfn = page_to_pfn(page);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 969048f9b320c..f6f9455bc42b6 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1445,11 +1445,14 @@ static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx,
 
 static void __meminit zone_init_free_lists(struct zone *zone)
 {
-	unsigned int order, t;
-	for_each_migratetype_order(order, t) {
-		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
+	struct list_head *list;
+	unsigned int order;
+
+	for_each_free_list(list, zone, order)
+		INIT_LIST_HEAD(list);
+
+	for (order = 0; order < NR_PAGE_ORDERS; order++)
 		zone->free_area[order].nr_free = 0;
-	}
 
 #ifdef CONFIG_UNACCEPTED_MEMORY
 	INIT_LIST_HEAD(&zone->unaccepted_pages);
-- 
2.51.2