The freelist hygiene patches made migratetype accesses fully protected
under the zone->lock. Remove remnants of handling the race conditions
that existed before from the MIGRATE_HIGHATOMIC code.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
mm/page_alloc.c | 50 ++++++++++++++++---------------------------------
1 file changed, 16 insertions(+), 34 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9ea14ec52449..53d315aa69c4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1991,20 +1991,10 @@ static inline bool boost_watermark(struct zone *zone)
static struct page *
try_to_steal_block(struct zone *zone, struct page *page,
int current_order, int order, int start_type,
- unsigned int alloc_flags)
+ int block_type, unsigned int alloc_flags)
{
int free_pages, movable_pages, alike_pages;
unsigned long start_pfn;
- int block_type;
-
- block_type = get_pageblock_migratetype(page);
-
- /*
- * This can happen due to races and we want to prevent broken
- * highatomic accounting.
- */
- if (is_migrate_highatomic(block_type))
- return NULL;
/* Take ownership for orders >= pageblock_order */
if (current_order >= pageblock_order) {
@@ -2179,33 +2169,22 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
spin_lock_irqsave(&zone->lock, flags);
for (order = 0; order < NR_PAGE_ORDERS; order++) {
struct free_area *area = &(zone->free_area[order]);
- int mt;
+ unsigned long size;
page = get_page_from_free_area(area, MIGRATE_HIGHATOMIC);
if (!page)
continue;
- mt = get_pageblock_migratetype(page);
/*
- * In page freeing path, migratetype change is racy so
- * we can counter several free pages in a pageblock
- * in this loop although we changed the pageblock type
- * from highatomic to ac->migratetype. So we should
- * adjust the count once.
+ * It should never happen but changes to
+ * locking could inadvertently allow a per-cpu
+ * drain to add pages to MIGRATE_HIGHATOMIC
+ * while unreserving so be safe and watch for
+ * underflows.
*/
- if (is_migrate_highatomic(mt)) {
- unsigned long size;
- /*
- * It should never happen but changes to
- * locking could inadvertently allow a per-cpu
- * drain to add pages to MIGRATE_HIGHATOMIC
- * while unreserving so be safe and watch for
- * underflows.
- */
- size = max(pageblock_nr_pages, 1UL << order);
- size = min(size, zone->nr_reserved_highatomic);
- zone->nr_reserved_highatomic -= size;
- }
+ size = max(pageblock_nr_pages, 1UL << order);
+ size = min(size, zone->nr_reserved_highatomic);
+ zone->nr_reserved_highatomic -= size;
/*
* Convert to ac->migratetype and avoid the normal
@@ -2217,10 +2196,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
* may increase.
*/
if (order < pageblock_order)
- ret = move_freepages_block(zone, page, mt,
+ ret = move_freepages_block(zone, page,
+ MIGRATE_HIGHATOMIC,
ac->migratetype);
else {
- move_to_free_list(page, zone, order, mt,
+ move_to_free_list(page, zone, order,
+ MIGRATE_HIGHATOMIC,
ac->migratetype);
change_pageblock_range(page, order,
ac->migratetype);
@@ -2294,7 +2275,8 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
page = get_page_from_free_area(area, fallback_mt);
page = try_to_steal_block(zone, page, current_order, order,
- start_migratetype, alloc_flags);
+ start_migratetype, fallback_mt,
+ alloc_flags);
if (page)
goto got_one;
}
--
2.48.1
On Mon, Feb 24, 2025 at 07:08:25PM -0500, Johannes Weiner wrote:
> The freelist hygiene patches made migratetype accesses fully protected
> under the zone->lock. Remove remnants of handling the race conditions
> that existed before from the MIGRATE_HIGHATOMIC code.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Aside from my WARN bikeshedding, which isn't really about this patch
anyway:
Reviewed-by: Brendan Jackman <jackmanb@google.com>
> - if (is_migrate_highatomic(mt)) {
> - unsigned long size;
> - /*
> - * It should never happen but changes to
> - * locking could inadvertently allow a per-cpu
> - * drain to add pages to MIGRATE_HIGHATOMIC
> - * while unreserving so be safe and watch for
> - * underflows.
> - */
> - size = max(pageblock_nr_pages, 1UL << order);
> - size = min(size, zone->nr_reserved_highatomic);
> - zone->nr_reserved_highatomic -= size;
> - }
> + size = max(pageblock_nr_pages, 1UL << order);
> + size = min(size, zone->nr_reserved_highatomic);
> + zone->nr_reserved_highatomic -= size;
Now that the locking is a bit cleaner, would it make sense to add a
[VM_]WARN_ON[_ONCE] for underflow?
On Tue, Feb 25, 2025 at 01:43:35PM +0000, Brendan Jackman wrote:
> On Mon, Feb 24, 2025 at 07:08:25PM -0500, Johannes Weiner wrote:
> > The freelist hygiene patches made migratetype accesses fully protected
> > under the zone->lock. Remove remnants of handling the race conditions
> > that existed before from the MIGRATE_HIGHATOMIC code.
> >
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
>
> Aside from my WARN bikeshedding, which isn't really about this patch
> anyway:
>
> Reviewed-by: Brendan Jackman <jackmanb@google.com>
Thanks
> > - if (is_migrate_highatomic(mt)) {
> > - unsigned long size;
> > - /*
> > - * It should never happen but changes to
> > - * locking could inadvertently allow a per-cpu
> > - * drain to add pages to MIGRATE_HIGHATOMIC
> > - * while unreserving so be safe and watch for
> > - * underflows.
> > - */
> > - size = max(pageblock_nr_pages, 1UL << order);
> > - size = min(size, zone->nr_reserved_highatomic);
> > - zone->nr_reserved_highatomic -= size;
> > - }
> > + size = max(pageblock_nr_pages, 1UL << order);
> > + size = min(size, zone->nr_reserved_highatomic);
> > + zone->nr_reserved_highatomic -= size;
>
> Now that the locking is a bit cleaner, would it make sense to add a
> [VM_]WARN_ON[_ONCE] for underflow?
Yeah I think that would be a nice additional cleanup. Do you want to
send a patch? Otherwise, I can.
On Tue, 25 Feb 2025 at 16:09, Johannes Weiner <hannes@cmpxchg.org> wrote:
> > Now that the locking is a bit cleaner, would it make sense to add a
> > [VM_]WARN_ON[_ONCE] for underflow?
>
> Yeah I think that would be a nice additional cleanup. Do you want to
> send a patch? Otherwise, I can.

Yep I'll kick off some tests to check it doesn't fire and send it once
that's done.
On 2/25/25 01:08, Johannes Weiner wrote:
> The freelist hygiene patches made migratetype accesses fully protected
> under the zone->lock. Remove remnants of handling the race conditions
> that existed before from the MIGRATE_HIGHATOMIC code.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>