[PATCH v3 3/7] xen/page_alloc: Add and track per_node(avail_pages)

From: Alejandro Vallejo <alejandro.garciavallejo@amd.com>

Add a per-NUMA-node count of free pages, maintained as the sum of the
free memory in all zones of the node. Caching this sum is an
optimisation: it avoids recomputing it frequently in the following
patches, which introduce per-NUMA-node claims.
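
For illustration (not part of the patch): without the cached counter,
the per-node count would need recomputing on demand from the existing
avail[] array, roughly as in the sketch below. node_avail_pages() is a
hypothetical helper; NR_ZONES, nodeid_t and avail[] are the existing
page_alloc.c/Xen definitions.

    /* Hypothetical helper, shown only to illustrate the avoided cost. */
    static unsigned long node_avail_pages(nodeid_t node)
    {
        unsigned long sum = 0;
        unsigned int zone;

        /* Sum the free pages of every zone on this node. */
        for ( zone = 0; zone < NR_ZONES; zone++ )
            sum += avail[node][zone];

        return sum;
    }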

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Signed-off-by: Bernhard Kaindl <bernhard.kaindl@cloud.com>

---
Changes in v2:
- Added ASSERT(per_node(avail_pages, node) >= request) as requested
  by Roger during review. Note that because

  ASSERT(avail[node][zone] >= request);

  appears directly before it, the request is already known to be
  valid; the new assertion therefore checks that
  per_node(avail_pages, node) has not been mis-accounted too low.
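
  In other words, while the heap_lock is held the cached counter
  should always equal the per-zone sum. A hypothetical debug-only
  check (not in the patch, reusing the node_avail_pages() sketch
  above) would be:

    ASSERT(per_node(avail_pages, node) == node_avail_pages(node));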

Changes in v3:
- Converted from a static array to using per_node(avail_pages, node)
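
  For context, a sketch of the conversion (the v2 array name is
  assumed, not taken from the actual v2 posting):

    /* v2 (assumed): plain array indexed by NUMA node id. */
    static unsigned long avail_pages[MAX_NUMNODES];

    /* v3: per-node variable built on the per-node infrastructure. */
    DEFINE_PER_NODE(unsigned long, avail_pages);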
---
 xen/common/page_alloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index e056624583..b8acb500da 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -486,6 +486,10 @@ static unsigned long node_need_scrub[MAX_NUMNODES];
 static unsigned long *avail[MAX_NUMNODES];
 static long total_avail_pages;
 
+/* Per-NUMA-node counts of free pages */
+DECLARE_PER_NODE(unsigned long, avail_pages);
+DEFINE_PER_NODE(unsigned long, avail_pages);
+
 static DEFINE_SPINLOCK(heap_lock);
 static long outstanding_claims; /* total outstanding claims by all domains */
 
@@ -1074,6 +1078,8 @@ static struct page_info *alloc_heap_pages(
 
     ASSERT(avail[node][zone] >= request);
     avail[node][zone] -= request;
+    ASSERT(per_node(avail_pages, node) >= request);
+    per_node(avail_pages, node) -= request;
     total_avail_pages -= request;
     ASSERT(total_avail_pages >= 0);
 
@@ -1234,6 +1240,8 @@ static int reserve_offlined_page(struct page_info *head)
             continue;
 
         avail[node][zone]--;
+        ASSERT(per_node(avail_pages, node) > 0);
+        per_node(avail_pages, node)--;
         total_avail_pages--;
         ASSERT(total_avail_pages >= 0);
 
@@ -1558,6 +1566,7 @@ static void free_heap_pages(
     }
 
     avail[node][zone] += 1 << order;
+    per_node(avail_pages, node) += 1 << order;
     total_avail_pages += 1 << order;
     if ( need_scrub )
     {
-- 
2.43.0