[PATCH 1/3] mm/page_alloc: add per-migratetype counts to buddy allocator

Posted by Hongru Zhang 3 days, 18 hours ago
From: Hongru Zhang <zhanghongru@xiaomi.com>

On mobile devices, some user-space memory management components check
memory pressure and fragmentation status periodically or via PSI, and
take actions such as killing processes or performing memory compaction
based on this information.

Under high load, reading /proc/pagetypeinfo can block either the memory
management components or the memory allocation/free paths for extended
periods while waiting for the zone lock, leading to the following
issues:
1. Long interrupt-disabled spinlock hold times - occasionally exceeding
   10ms on Qcom 8750 platforms, degrading system real-time performance
2. Memory management components being blocked for extended periods,
   preventing rapid acquisition of memory fragmentation information for
   critical memory management decisions and actions
3. Increased latency in memory allocation and free paths due to prolonged
   zone lock contention

This patch adds per-migratetype counts to the buddy allocator in
preparation for optimizing /proc/pagetypeinfo access.

The optimized implementation (a sketch of the lock-free reader follows
this list):
- Makes per-migratetype count updates protected by the zone lock on the
  write side while /proc/pagetypeinfo reads are lock-free, which reduces
  interrupt-disabled spinlock duration and improves system real-time
  performance (addressing issue #1)
- Reduces blocking time for memory management components when reading
  /proc/pagetypeinfo, enabling faster acquisition of memory
  fragmentation information (addressing issue #2)
- Minimizes the zone lock critical section during /proc/pagetypeinfo
  reads, reducing contention on the memory allocation and free paths
  (addressing issue #3)
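
For illustration, such a lock-free reader could look roughly like the
sketch below; the function name pagetypeinfo_showfree_mt and its exact
shape are only illustrative, not the reader introduced later in this
series:

static void pagetypeinfo_showfree_mt(struct seq_file *m, struct zone *zone,
				     int mtype)
{
	unsigned int order;

	/* Read the per-migratetype counters without taking zone->lock. */
	for (order = 0; order < NR_PAGE_ORDERS; order++)
		seq_printf(m, "%6lu ",
			   READ_ONCE(zone->free_area[order].mt_nr_free[mtype]));
	seq_putc(m, '\n');
}

The writers keep updating mt_nr_free under zone->lock as in this patch,
so a lock-free reader may observe slightly stale counts, which is
acceptable for fragmentation reporting.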

The main overhead is a slight increase in latency on the memory
allocation and free paths due to the additional per-migratetype
counting; the impact on overall performance should be minimal.

Signed-off-by: Hongru Zhang <zhanghongru@xiaomi.com>
---
 include/linux/mmzone.h | 1 +
 mm/mm_init.c           | 1 +
 mm/page_alloc.c        | 7 ++++++-
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7fb7331c5725..6eeefe6a3727 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -138,6 +138,7 @@ extern int page_group_by_mobility_disabled;
 struct free_area {
 	struct list_head	free_list[MIGRATE_TYPES];
 	unsigned long		nr_free;
+	unsigned long		mt_nr_free[MIGRATE_TYPES];
 };
 
 struct pglist_data;
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 7712d887b696..dca2be8cc3b1 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1439,6 +1439,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 	for_each_migratetype_order(order, t) {
 		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
 		zone->free_area[order].nr_free = 0;
+		zone->free_area[order].mt_nr_free[t] = 0;
 	}
 
 #ifdef CONFIG_UNACCEPTED_MEMORY
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ed82ee55e66a..9431073e7255 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -818,6 +818,7 @@ static inline void __add_to_free_list(struct page *page, struct zone *zone,
 	else
 		list_add(&page->buddy_list, &area->free_list[migratetype]);
 	area->nr_free++;
+	area->mt_nr_free[migratetype]++;
 
 	if (order >= pageblock_order && !is_migrate_isolate(migratetype))
 		__mod_zone_page_state(zone, NR_FREE_PAGES_BLOCKS, nr_pages);
@@ -840,6 +841,8 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 		     get_pageblock_migratetype(page), old_mt, nr_pages);
 
 	list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
+	area->mt_nr_free[old_mt]--;
+	area->mt_nr_free[new_mt]++;
 
 	account_freepages(zone, -nr_pages, old_mt);
 	account_freepages(zone, nr_pages, new_mt);
@@ -855,6 +858,7 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 static inline void __del_page_from_free_list(struct page *page, struct zone *zone,
 					     unsigned int order, int migratetype)
 {
+	struct free_area *area = &zone->free_area[order];
 	int nr_pages = 1 << order;
 
         VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
@@ -868,7 +872,8 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zon
 	list_del(&page->buddy_list);
 	__ClearPageBuddy(page);
 	set_page_private(page, 0);
-	zone->free_area[order].nr_free--;
+	area->nr_free--;
+	area->mt_nr_free[migratetype]--;
 
 	if (order >= pageblock_order && !is_migrate_isolate(migratetype))
 		__mod_zone_page_state(zone, NR_FREE_PAGES_BLOCKS, -nr_pages);
-- 
2.43.0
Re: [PATCH 1/3] mm/page_alloc: add per-migratetype counts to buddy allocator
Posted by Barry Song 2 days, 20 hours ago
On Fri, Nov 28, 2025 at 11:12 AM Hongru Zhang <zhanghongru06@gmail.com> wrote:
>
[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ed82ee55e66a..9431073e7255 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -818,6 +818,7 @@ static inline void __add_to_free_list(struct page *page, struct zone *zone,
>         else
>                 list_add(&page->buddy_list, &area->free_list[migratetype]);
>         area->nr_free++;
> +       area->mt_nr_free[migratetype]++;
>
>         if (order >= pageblock_order && !is_migrate_isolate(migratetype))
>                 __mod_zone_page_state(zone, NR_FREE_PAGES_BLOCKS, nr_pages);
> @@ -840,6 +841,8 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>                      get_pageblock_migratetype(page), old_mt, nr_pages);
>
>         list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
> +       area->mt_nr_free[old_mt]--;
> +       area->mt_nr_free[new_mt]++;

The overhead comes from effectively counting twice. Have we checked whether
the readers of area->nr_free are on a hot path? If not, we might just drop
nr_free and compute the sum each time.

Buddyinfo and compaction do not seem to be on a hot path?
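
Something along these lines, purely as a sketch (free_area_nr_free is
an illustrative name, not an existing helper):

static inline unsigned long free_area_nr_free(const struct free_area *area)
{
	unsigned long nr_free = 0;
	int mt;

	/* Sum the per-migratetype counters instead of caching nr_free. */
	for (mt = 0; mt < MIGRATE_TYPES; mt++)
		nr_free += area->mt_nr_free[mt];

	return nr_free;
}

The non-hot readers could then call this instead of relying on a
separately maintained nr_free field.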

Thanks
Barry