From: Joshua Hahn
To: Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, "Liam R. Howlett", Brendan Jackman,
    David Hildenbrand, Johannes Weiner, Lorenzo Stoakes, Michal Hocko,
    Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka, Zi Yan,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 1/3] mm/page_alloc/vmstat: Simplify refresh_cpu_vm_stats change detection
Date: Mon, 13 Oct 2025 12:08:09 -0700
Message-ID: <20251013190812.787205-2-joshua.hahnjy@gmail.com>
In-Reply-To: <20251013190812.787205-1-joshua.hahnjy@gmail.com>
References: <20251013190812.787205-1-joshua.hahnjy@gmail.com>

Currently, refresh_cpu_vm_stats returns an int indicating how many
changes were made during its updates. Using this information, callers
like vmstat_update can heuristically determine if more work will be
done in the future.

However, all of refresh_cpu_vm_stats's callers either (a) ignore the
result, only caring about performing the updates, or (b) only care
about whether changes were made, but not *how many* changes were made.

Simplify the code by returning a bool instead to indicate whether
updates were made. In addition, simplify fold_diff and decay_pcp_high
to return a bool for the same reason.
Reviewed-by: Vlastimil Babka
Reviewed-by: SeongJae Park
Signed-off-by: Joshua Hahn
---
 include/linux/gfp.h |  2 +-
 mm/page_alloc.c     |  8 ++++----
 mm/vmstat.c         | 28 +++++++++++++++-------------
 3 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0ceb4e09306c..f46b066c7661 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -386,7 +386,7 @@ extern void free_pages(unsigned long addr, unsigned int order);
 #define free_page(addr) free_pages((addr), 0)
 
 void page_alloc_init_cpuhp(void);
-int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp);
+bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp);
 void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp);
 void drain_all_pages(struct zone *zone);
 void drain_local_pages(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 600d9e981c23..bbc3282fdffc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2557,10 +2557,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
  * Called from the vmstat counter updater to decay the PCP high.
  * Return whether there are addition works to do.
  */
-int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
+bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 {
 	int high_min, to_drain, batch;
-	int todo = 0;
+	bool todo = false;
 
 	high_min = READ_ONCE(pcp->high_min);
 	batch = READ_ONCE(pcp->batch);
@@ -2573,7 +2573,7 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 		pcp->high = max3(pcp->count - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
 				 pcp->high - (pcp->high >> 3), high_min);
 		if (pcp->high > high_min)
-			todo++;
+			todo = true;
 	}
 
 	to_drain = pcp->count - pcp->high;
@@ -2581,7 +2581,7 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 		spin_lock(&pcp->lock);
 		free_pcppages_bulk(zone, to_drain, pcp, 0);
 		spin_unlock(&pcp->lock);
-		todo++;
+		todo = true;
 	}
 
 	return todo;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index bb09c032eecf..98855f31294d 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -771,25 +771,25 @@ EXPORT_SYMBOL(dec_node_page_state);
 
 /*
  * Fold a differential into the global counters.
- * Returns the number of counters updated.
+ * Returns whether counters were updated.
  */
 static int fold_diff(int *zone_diff, int *node_diff)
 {
 	int i;
-	int changes = 0;
+	bool changed = false;
 
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
 		if (zone_diff[i]) {
 			atomic_long_add(zone_diff[i], &vm_zone_stat[i]);
-			changes++;
+			changed = true;
 		}
 
 	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
 		if (node_diff[i]) {
 			atomic_long_add(node_diff[i], &vm_node_stat[i]);
-			changes++;
+			changed = true;
 		}
-	return changes;
+	return changed;
 }
 
 /*
@@ -806,16 +806,16 @@ static int fold_diff(int *zone_diff, int *node_diff)
  * with the global counters. These could cause remote node cache line
  * bouncing and will have to be only done when necessary.
  *
- * The function returns the number of global counters updated.
+ * The function returns whether global counters were updated.
  */
-static int refresh_cpu_vm_stats(bool do_pagesets)
+static bool refresh_cpu_vm_stats(bool do_pagesets)
 {
 	struct pglist_data *pgdat;
 	struct zone *zone;
 	int i;
 	int global_zone_diff[NR_VM_ZONE_STAT_ITEMS] = { 0, };
 	int global_node_diff[NR_VM_NODE_STAT_ITEMS] = { 0, };
-	int changes = 0;
+	bool changed = false;
 
 	for_each_populated_zone(zone) {
 		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
@@ -839,7 +839,8 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
 		if (do_pagesets) {
 			cond_resched();
 
-			changes += decay_pcp_high(zone, this_cpu_ptr(pcp));
+			if (decay_pcp_high(zone, this_cpu_ptr(pcp)))
+				changed = true;
 #ifdef CONFIG_NUMA
 			/*
 			 * Deal with draining the remote pageset of this
@@ -861,13 +862,13 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
 			}
 
 			if (__this_cpu_dec_return(pcp->expire)) {
-				changes++;
+				changed = true;
 				continue;
 			}
 
 			if (__this_cpu_read(pcp->count)) {
 				drain_zone_pages(zone, this_cpu_ptr(pcp));
-				changes++;
+				changed = true;
 			}
 #endif
 		}
@@ -887,8 +888,9 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
 		}
 	}
 
-	changes += fold_diff(global_zone_diff, global_node_diff);
-	return changes;
+	if (fold_diff(global_zone_diff, global_node_diff))
+		changed = true;
+	return changed;
 }
 
 /*
-- 
2.47.3
From: Joshua Hahn
To: Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Brendan Jackman, Johannes Weiner,
    Michal Hocko, Suren Baghdasaryan, Vlastimil Babka, Zi Yan,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kernel-team@meta.com
Subject: [PATCH v4 2/3] mm/page_alloc: Batch page freeing in decay_pcp_high
Date: Mon, 13 Oct 2025 12:08:10 -0700
Message-ID: <20251013190812.787205-3-joshua.hahnjy@gmail.com>
In-Reply-To: <20251013190812.787205-1-joshua.hahnjy@gmail.com>
References: <20251013190812.787205-1-joshua.hahnjy@gmail.com>

It is possible for pcp->count - pcp->high to exceed pcp->batch by a
lot. When this happens, we should perform batching to ensure that
free_pcppages_bulk isn't called with too many pages to free at once and
starve out other threads that need the pcp or zone lock.

Since we are still only freeing the difference between the initial
pcp->count and pcp->high values, there should be no change to how many
pages are freed.

Suggested-by: Chris Mason
Suggested-by: Andrew Morton
Co-developed-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Signed-off-by: Joshua Hahn
---
 mm/page_alloc.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bbc3282fdffc..8ecd48be8bdd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2559,7 +2559,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
  */
 bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 {
-	int high_min, to_drain, batch;
+	int high_min, to_drain, to_drain_batched, batch;
 	bool todo = false;
 
 	high_min = READ_ONCE(pcp->high_min);
@@ -2577,11 +2577,14 @@ bool decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
 	}
 
 	to_drain = pcp->count - pcp->high;
-	if (to_drain > 0) {
+	while (to_drain > 0) {
+		to_drain_batched = min(to_drain, batch);
 		spin_lock(&pcp->lock);
-		free_pcppages_bulk(zone, to_drain, pcp, 0);
+		free_pcppages_bulk(zone, to_drain_batched, pcp, 0);
 		spin_unlock(&pcp->lock);
 		todo = true;
+
+		to_drain -= to_drain_batched;
 	}
 
 	return todo;
-- 
2.47.3
From: Joshua Hahn
To: Andrew Morton
Cc: Chris Mason, Kiryl Shutsemau, Brendan Jackman, Johannes Weiner,
    Michal Hocko, Suren Baghdasaryan, Vlastimil Babka, Zi Yan,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kernel-team@meta.com
Subject: [PATCH v4 3/3] mm/page_alloc: Batch page freeing in free_frozen_page_commit
Date: Mon, 13 Oct 2025 12:08:11 -0700
Message-ID: <20251013190812.787205-4-joshua.hahnjy@gmail.com>
In-Reply-To: <20251013190812.787205-1-joshua.hahnjy@gmail.com>
References: <20251013190812.787205-1-joshua.hahnjy@gmail.com>

Before returning, free_frozen_page_commit calls free_pcppages_bulk
using nr_pcp_free to determine how many pages can appropriately be
freed, based on the tunable parameters stored in pcp. While this number
is an accurate representation of how many pages should be freed in
total, it is not an appropriate number of pages to free at once with
free_pcppages_bulk: we have seen the value consistently go above 2000
in the Meta fleet on larger machines. As such, perform batched page
freeing using the pcp->batch member.
To ensure that other processes are not starved of the zone lock,
release both the zone lock and the pcp lock between batches to yield to
other threads.

Note that because free_frozen_page_commit now takes the pcp spinlock
inside the function (and the trylock can fail), the function may return
with the pcp lock released. To handle this, return true if the pcp is
still locked on exit and false otherwise.

In addition, since free_frozen_page_commit must now be aware of what UP
flags were stored at the time of the spin lock, and because we must be
able to report new UP flags to the callers, add a new unsigned long*
parameter UP_flags to keep track of this.

The following are a few synthetic benchmarks, run on three machines.
The first is a large machine with 754GiB memory and 316 processors. The
second is a relatively smaller machine with 251GiB memory and 176
processors. The third and final is the smallest of the three, which has
62GiB memory and 36 processors. On all machines, I kick off a kernel
build with -j$(nproc). Negative delta is better (faster compilation).

Large machine (754GiB memory, 316 processors)
make -j$(nproc)
+------------+---------------+-----------+
| Metric (s) | Variation (%) | Delta (%) |
+------------+---------------+-----------+
| real       |        0.8070 |  - 1.4865 |
| user       |        0.2823 |  + 0.4081 |
| sys        |        5.0267 |  -11.8737 |
+------------+---------------+-----------+

Medium machine (251GiB memory, 176 processors)
make -j$(nproc)
+------------+---------------+----------+
| Metric (s) | Variation (%) | Delta(%) |
+------------+---------------+----------+
| real       |        0.2806 |  +0.0351 |
| user       |        0.0994 |  +0.3170 |
| sys        |        0.6229 |  -0.6277 |
+------------+---------------+----------+

Small machine (62GiB memory, 36 processors)
make -j$(nproc)
+------------+---------------+----------+
| Metric (s) | Variation (%) | Delta(%) |
+------------+---------------+----------+
| real       |        0.1503 |  -2.6585 |
| user       |        0.0431 |  -2.2984 |
| sys        |        0.1870 |  -3.2013 |
+------------+---------------+----------+

Here, variation
is the coefficient of variation, i.e. standard deviation / mean.

Suggested-by: Chris Mason
Co-developed-by: Johannes Weiner
Signed-off-by: Joshua Hahn
---
 mm/page_alloc.c | 66 ++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 57 insertions(+), 9 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8ecd48be8bdd..e85770dd54bd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2818,12 +2818,22 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 	return high;
 }
 
-static void free_frozen_page_commit(struct zone *zone,
+/*
+ * Tune pcp alloc factor and adjust count & free_count. Free pages to bring the
+ * pcp's watermarks below high.
+ *
+ * May return a freed pcp, if during page freeing the pcp spinlock cannot be
+ * reacquired. Return true if pcp is locked, false otherwise.
+ */
+static bool free_frozen_page_commit(struct zone *zone,
 		struct per_cpu_pages *pcp, struct page *page, int migratetype,
-		unsigned int order, fpi_t fpi_flags)
+		unsigned int order, fpi_t fpi_flags, unsigned long *UP_flags)
 {
 	int high, batch;
+	int to_free, to_free_batched;
 	int pindex;
+	int cpu = smp_processor_id();
+	int ret = true;
 	bool free_high = false;
 
 	/*
@@ -2861,15 +2871,47 @@ static void free_frozen_page_commit(struct zone *zone,
 		 * Do not attempt to take a zone lock. Let pcp->count get
 		 * over high mark temporarily.
 		 */
-		return;
+		return true;
 	}
 
 	high = nr_pcp_high(pcp, zone, batch, free_high);
 	if (pcp->count < high)
-		return;
+		return true;
+
+	to_free = nr_pcp_free(pcp, batch, high, free_high);
+	if (to_free == 0)
+		return true;
+
+	while (to_free > 0 && pcp->count >= high) {
+		to_free_batched = min(to_free, batch);
+		free_pcppages_bulk(zone, to_free_batched, pcp, pindex);
+		to_free -= to_free_batched;
+		if (pcp->count >= high) {
+			pcp_spin_unlock(pcp);
+			pcp_trylock_finish(*UP_flags);
+
+			pcp_trylock_prepare(*UP_flags);
+			pcp = pcp_spin_trylock(zone->per_cpu_pageset);
+			if (!pcp) {
+				pcp_trylock_finish(*UP_flags);
+				ret = false;
+				break;
+			}
+
+			/*
+			 * Check if this thread has been migrated to a different
+			 * CPU. If that is the case, give up and indicate that
+			 * the pcp is returned in an unlocked state.
+			 */
+			if (smp_processor_id() != cpu) {
+				pcp_spin_unlock(pcp);
+				pcp_trylock_finish(*UP_flags);
+				ret = false;
+				break;
+			}
+		}
+	}
 
-	free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
-			   pcp, pindex);
 	if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
 	    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
 			      ZONE_MOVABLE, 0)) {
@@ -2887,6 +2929,7 @@ static void free_frozen_page_commit(struct zone *zone,
 		    next_memory_node(pgdat->node_id) < MAX_NUMNODES)
 			atomic_set(&pgdat->kswapd_failures, 0);
 	}
+	return ret;
 }
 
 /*
@@ -2934,7 +2977,9 @@ static void __free_frozen_pages(struct page *page, unsigned int order,
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 	if (pcp) {
-		free_frozen_page_commit(zone, pcp, page, migratetype, order, fpi_flags);
+		if (!free_frozen_page_commit(zone, pcp, page, migratetype,
+					     order, fpi_flags, &UP_flags))
+			return;
 		pcp_spin_unlock(pcp);
 	} else {
 		free_one_page(zone, page, pfn, order, fpi_flags);
@@ -3034,8 +3079,11 @@ void free_unref_folios(struct folio_batch *folios)
 			migratetype = MIGRATE_MOVABLE;
 
 		trace_mm_page_free_batched(&folio->page);
-		free_frozen_page_commit(zone, pcp, &folio->page, migratetype,
-				order, FPI_NONE);
+		if (!free_frozen_page_commit(zone, pcp, &folio->page,
+				migratetype, order, FPI_NONE, &UP_flags)) {
+			pcp = NULL;
+			locked_zone = NULL;
+		}
 	}
 
 	if (pcp) {
-- 
2.47.3