From: Joshua Hahn
Cc: "Liam R. Howlett", Andrew Morton, Baolin Wang, Barry Song,
 David Hildenbrand, Dev Jain, Lance Yang, Lorenzo Stoakes,
 Masami Hiramatsu, Mathieu Desnoyers, Nico Pache, Ryan Roberts,
 Steven Rostedt, Zi Yan, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org
Subject: [RFC LPC2025 PATCH 1/4] mm/khugepaged: Remove hpage_collapse_scan_abort
Date: Fri, 5 Dec 2025 15:32:12 -0800
Message-ID: <20251205233217.3344186-2-joshua.hahnjy@gmail.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>
References: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>

Commit 14a4e2141e24 ("mm, thp: only collapse hugepages to nodes with
affinity for zone_reclaim_mode") introduced khugepaged_scan_abort, which
was later renamed to hpage_collapse_scan_abort. It prevents collapsing
hugepages to remote nodes when zone_reclaim_mode is enabled, so as to
prefer reclaiming and allocating locally over allocating on a distant
remote node (distance > RECLAIM_DISTANCE).

With the zone_reclaim_mode sysctl being deprecated later in the series,
remove hpage_collapse_scan_abort, its callers, and its associated values
in the scan_result enum.
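To illustrate what remains after the removal: khugepaged still tallies,
per candidate range, how many pages each node backs (cc->node_load[])
and collapses onto the most-loaded node. A minimal standalone sketch of
that selection (illustrative only, not code from this patch; the
MAX_NUMNODES value is an assumption for the example):

  #include <stdio.h>

  #define MAX_NUMNODES 4	/* assumed; the kernel value is config-dependent */

  /* Pick the node backing the most scanned pages in the range. */
  static int pick_target_node(const int node_load[MAX_NUMNODES])
  {
  	int nid, target = 0, max_load = 0;

  	for (nid = 0; nid < MAX_NUMNODES; nid++) {
  		if (node_load[nid] > max_load) {
  			max_load = node_load[nid];
  			target = nid;
  		}
  	}
  	return target;
  }

  int main(void)
  {
  	int load[MAX_NUMNODES] = { 120, 300, 80, 12 };	/* example tally */

  	printf("collapse target: node %d\n", pick_target_node(load));
  	return 0;
  }

With the distance-based abort gone, this load tally is what continues
to drive node placement in the scan path.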
Signed-off-by: Joshua Hahn
---
 include/trace/events/huge_memory.h |  1 -
 mm/khugepaged.c                    | 34 ------------------------------
 2 files changed, 35 deletions(-)

diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 4cde53b45a85..1c0b146d1286 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -20,7 +20,6 @@
 	EM( SCAN_PTE_MAPPED_HUGEPAGE,	"pte_mapped_hugepage")		\
 	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
 	EM( SCAN_PAGE_NULL,		"page_null")			\
-	EM( SCAN_SCAN_ABORT,		"scan_aborted")			\
 	EM( SCAN_PAGE_COUNT,		"not_suitable_page_count")	\
 	EM( SCAN_PAGE_LRU,		"page_not_in_lru")		\
 	EM( SCAN_PAGE_LOCK,		"page_locked")			\
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 97d1b2824386..a93228a53ee4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -40,7 +40,6 @@ enum scan_result {
 	SCAN_PTE_MAPPED_HUGEPAGE,
 	SCAN_LACK_REFERENCED_PAGE,
 	SCAN_PAGE_NULL,
-	SCAN_SCAN_ABORT,
 	SCAN_PAGE_COUNT,
 	SCAN_PAGE_LRU,
 	SCAN_PAGE_LOCK,
@@ -830,30 +829,6 @@ struct collapse_control khugepaged_collapse_control = {
 	.is_khugepaged = true,
 };
 
-static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
-{
-	int i;
-
-	/*
-	 * If node_reclaim_mode is disabled, then no extra effort is made to
-	 * allocate memory locally.
-	 */
-	if (!node_reclaim_enabled())
-		return false;
-
-	/* If there is a count for this node already, it must be acceptable */
-	if (cc->node_load[nid])
-		return false;
-
-	for (i = 0; i < MAX_NUMNODES; i++) {
-		if (!cc->node_load[i])
-			continue;
-		if (node_distance(nid, i) > node_reclaim_distance)
-			return true;
-	}
-	return false;
-}
-
 #define khugepaged_defrag()					\
 	(transparent_hugepage_flags &				\
 	 (1<<TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG))
@@ … @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 		node = folio_nid(folio);
-		if (hpage_collapse_scan_abort(node, cc)) {
-			result = SCAN_SCAN_ABORT;
-			goto out_unmap;
-		}
 		cc->node_load[node]++;
 		if (!folio_test_lru(folio)) {
 			result = SCAN_PAGE_LRU;
@@ -2342,11 +2313,6 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	node = folio_nid(folio);
-	if (hpage_collapse_scan_abort(node, cc)) {
-		result = SCAN_SCAN_ABORT;
-		folio_put(folio);
-		break;
-	}
 	cc->node_load[node]++;
 
 	if (!folio_test_lru(folio)) {
-- 
2.47.3

From: Joshua Hahn
Cc: "Liam R. Howlett", Andrew Morton, Axel Rasmussen, Brendan Jackman,
 David Hildenbrand, Johannes Weiner, Lorenzo Stoakes, Michal Hocko,
 Mike Rapoport, Qi Zheng, Shakeel Butt, Suren Baghdasaryan,
 Vlastimil Babka, Wei Xu, Yuanchu Xie, Zi Yan,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC LPC2025 PATCH 2/4] mm/vmscan/page_alloc: Remove node_reclaim
Date: Fri, 5 Dec 2025 15:32:13 -0800
Message-ID: <20251205233217.3344186-3-joshua.hahnjy@gmail.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>
References: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>

node_reclaim() is currently called only when the zone_reclaim_mode
sysctl is set, from get_page_from_freelist() when the current node is
full. With the zone_reclaim_mode sysctl being deprecated later in the
series, there are no remaining callsites for node_reclaim. Remove
node_reclaim and its associated return values NODE_RECLAIM_{NOSCAN,
FULL, SOME, SUCCESS}, as well as the zone_reclaim_{success, failed}
vmstat items.

We can also remove zone_allows_reclaim, since with node_reclaim_enabled
always returning false, it is never evaluated.
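For reference, the gate being removed preferred local reclaim over
remote fallback only for nodes within node_reclaim_distance. A
standalone sketch of that predicate, with an assumed two-node
SLIT-style distance table (30 is the generic RECLAIM_DISTANCE default
from include/linux/topology.h; the table values are examples):

  #include <stdbool.h>
  #include <stdio.h>

  /* Assumed distance table: 10 local, 21 to the other socket. */
  static const int node_distance[2][2] = { { 10, 21 }, { 21, 10 } };
  static const int node_reclaim_distance = 30;

  /* Sketch of zone_allows_reclaim(): reclaim rather than fall back
   * only when the candidate node is "close enough". */
  static bool allowed_reclaim(int local, int candidate)
  {
  	return node_distance[local][candidate] <= node_reclaim_distance;
  }

  int main(void)
  {
  	/* 21 <= 30, so node 1 would have been eligible for reclaim. */
  	printf("reclaim allowed 0->1: %d\n", allowed_reclaim(0, 1));
  	return 0;
  }

After this patch, a zone that fails its watermark check is simply
skipped in favor of the next zone in the zonelist, with no reclaim
detour.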
Signed-off-by: Joshua Hahn
---
 include/linux/vm_event_item.h |  4 ---
 mm/internal.h                 | 11 ------
 mm/page_alloc.c               | 34 ------------------
 mm/vmscan.c                   | 67 -----------------------------------
 mm/vmstat.c                   |  4 ---
 5 files changed, 120 deletions(-)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 92f80b4d69a6..2520200b65f0 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -53,10 +53,6 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		PGSCAN_FILE,
 		PGSTEAL_ANON,
 		PGSTEAL_FILE,
-#ifdef CONFIG_NUMA
-		PGSCAN_ZONE_RECLAIM_SUCCESS,
-		PGSCAN_ZONE_RECLAIM_FAILED,
-#endif
 		PGINODESTEAL, SLABS_SCANNED, KSWAPD_INODESTEAL,
 		KSWAPD_LOW_WMARK_HIT_QUICKLY, KSWAPD_HIGH_WMARK_HIT_QUICKLY,
 		PAGEOUTRUN, PGROTATED,
diff --git a/mm/internal.h b/mm/internal.h
index 04c307ee33ae..743fcebe53a8 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1196,24 +1196,13 @@ static inline void mminit_verify_zonelist(void)
 }
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
 
-#define NODE_RECLAIM_NOSCAN	-2
-#define NODE_RECLAIM_FULL	-1
-#define NODE_RECLAIM_SOME	0
-#define NODE_RECLAIM_SUCCESS	1
-
 #ifdef CONFIG_NUMA
 extern int node_reclaim_mode;
 
-extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int);
 extern int find_next_best_node(int node, nodemask_t *used_node_mask);
 #else
 #define node_reclaim_mode 0
 
-static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask,
-			unsigned int order)
-{
-	return NODE_RECLAIM_NOSCAN;
-}
 static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
 {
 	return NUMA_NO_NODE;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d0f026ec10b6..010a035e81bd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3684,17 +3684,6 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 
 #ifdef CONFIG_NUMA
 int __read_mostly node_reclaim_distance = RECLAIM_DISTANCE;
-
-static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
-{
-	return node_distance(zone_to_nid(local_zone), zone_to_nid(zone)) <=
-				node_reclaim_distance;
-}
-#else	/* CONFIG_NUMA */
-static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
-{
-	return true;
-}
 #endif	/* CONFIG_NUMA */
 
 /*
@@ -3868,8 +3857,6 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		if (!zone_watermark_fast(zone, order, mark,
 				       ac->highest_zoneidx, alloc_flags,
 				       gfp_mask)) {
-			int ret;
-
 			if (cond_accept_memory(zone, order, alloc_flags))
 				goto try_this_zone;
 
@@ -3885,27 +3872,6 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			BUILD_BUG_ON(ALLOC_NO_WATERMARKS < NR_WMARK);
 			if (alloc_flags & ALLOC_NO_WATERMARKS)
 				goto try_this_zone;
-
-			if (!node_reclaim_enabled() ||
-			    !zone_allows_reclaim(zonelist_zone(ac->preferred_zoneref), zone))
-				continue;
-
-			ret = node_reclaim(zone->zone_pgdat, gfp_mask, order);
-			switch (ret) {
-			case NODE_RECLAIM_NOSCAN:
-				/* did not scan */
-				continue;
-			case NODE_RECLAIM_FULL:
-				/* scanned but unreclaimable */
-				continue;
-			default:
-				/* did we reclaim enough */
-				if (zone_watermark_ok(zone, order, mark,
-					ac->highest_zoneidx, alloc_flags))
-					goto try_this_zone;
-
-				continue;
-			}
 		}
 
 try_this_zone:
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3b85652a42b9..d07acd76fdea 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -7537,13 +7537,6 @@ module_init(kswapd_init)
  */
 int node_reclaim_mode __read_mostly;
 
-/*
- * Priority for NODE_RECLAIM. This determines the fraction of pages
- * of a node considered for each zone_reclaim. 4 scans 1/16th of
- * a zone.
- */
-#define NODE_RECLAIM_PRIORITY 4
-
 /*
  * Percentage of pages in a zone that must be unmapped for node_reclaim to
  * occur.
@@ -7646,66 +7639,6 @@ static unsigned long __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask,
 	return sc->nr_reclaimed;
 }
 
-int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
-{
-	int ret;
-	/* Minimum pages needed in order to stay on node */
-	const unsigned long nr_pages = 1 << order;
-	struct scan_control sc = {
-		.nr_to_reclaim = max(nr_pages, SWAP_CLUSTER_MAX),
-		.gfp_mask = current_gfp_context(gfp_mask),
-		.order = order,
-		.priority = NODE_RECLAIM_PRIORITY,
-		.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
-		.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
-		.may_swap = 1,
-		.reclaim_idx = gfp_zone(gfp_mask),
-	};
-
-	/*
-	 * Node reclaim reclaims unmapped file backed pages and
-	 * slab pages if we are over the defined limits.
-	 *
-	 * A small portion of unmapped file backed pages is needed for
-	 * file I/O otherwise pages read by file I/O will be immediately
-	 * thrown out if the node is overallocated. So we do not reclaim
-	 * if less than a specified percentage of the node is used by
-	 * unmapped file backed pages.
-	 */
-	if (node_pagecache_reclaimable(pgdat) <= pgdat->min_unmapped_pages &&
-	    node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B) <=
-						pgdat->min_slab_pages)
-		return NODE_RECLAIM_FULL;
-
-	/*
-	 * Do not scan if the allocation should not be delayed.
-	 */
-	if (!gfpflags_allow_blocking(gfp_mask) || (current->flags & PF_MEMALLOC))
-		return NODE_RECLAIM_NOSCAN;
-
-	/*
-	 * Only run node reclaim on the local node or on nodes that do not
-	 * have associated processors. This will favor the local processor
-	 * over remote processors and spread off node memory allocations
-	 * as wide as possible.
-	 */
-	if (node_state(pgdat->node_id, N_CPU) && pgdat->node_id != numa_node_id())
-		return NODE_RECLAIM_NOSCAN;
-
-	if (test_and_set_bit_lock(PGDAT_RECLAIM_LOCKED, &pgdat->flags))
-		return NODE_RECLAIM_NOSCAN;
-
-	ret = __node_reclaim(pgdat, gfp_mask, nr_pages, &sc) >= nr_pages;
-	clear_bit_unlock(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
-
-	if (ret)
-		count_vm_event(PGSCAN_ZONE_RECLAIM_SUCCESS);
-	else
-		count_vm_event(PGSCAN_ZONE_RECLAIM_FAILED);
-
-	return ret;
-}
-
 enum {
 	MEMORY_RECLAIM_SWAPPINESS = 0,
 	MEMORY_RECLAIM_SWAPPINESS_MAX,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 65de88cdf40e..3564bc62325a 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1349,10 +1349,6 @@ const char * const vmstat_text[] = {
 	[I(PGSTEAL_ANON)] = "pgsteal_anon",
 	[I(PGSTEAL_FILE)] = "pgsteal_file",
 
-#ifdef CONFIG_NUMA
-	[I(PGSCAN_ZONE_RECLAIM_SUCCESS)] = "zone_reclaim_success",
-	[I(PGSCAN_ZONE_RECLAIM_FAILED)] = "zone_reclaim_failed",
-#endif
 	[I(PGINODESTEAL)] = "pginodesteal",
 	[I(SLABS_SCANNED)] = "slabs_scanned",
 	[I(KSWAPD_INODESTEAL)] = "kswapd_inodesteal",
-- 
2.47.3

From: Joshua Hahn
Cc: "Liam R. Howlett", Alex Shi, Andrew Morton, Axel Rasmussen,
 Baoquan He, Barry Song, Brendan Jackman, Chris Li, David Hildenbrand,
 Dongliang Mu, Johannes Weiner, Jonathan Corbet, Kairui Song,
 Kemeng Shi, Lorenzo Stoakes, Michal Hocko, Mike Rapoport, Nhat Pham,
 Qi Zheng, Shakeel Butt, Suren Baghdasaryan, Vlastimil Babka, Wei Xu,
 Yanteng Si, Yuanchu Xie, Zi Yan, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC LPC2025 PATCH 3/4] mm/vmscan/page_alloc: Deprecate min_{slab, unmapped}_ratio
Date: Fri, 5 Dec 2025 15:32:14 -0800
Message-ID: <20251205233217.3344186-4-joshua.hahnjy@gmail.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>
References: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>

The min_slab_ratio and min_unmapped_ratio sysctls allow the user to
tune how much reclaimable slab or reclaimable pagecache a node must
have before __node_reclaim will shrink it. Prior to this series, these
checks were reached in two ways:

1. When zone_reclaim_mode is enabled, the local node is full, and
   node_reclaim is called to shrink the current node
2. When the user directly asks to shrink a node by writing to the
   memory.reclaim file (i.e. proactive reclaim)

In the first scenario, the two parameters ensure that node reclaim is
only performed when the cost of reclaiming is outweighed by the amount
of memory that can easily be freed. In other words, they throttle node
reclaim when the local node runs out of memory, falling back to
allocations on a remote node instead.

With the zone_reclaim_mode sysctl being deprecated later in the series,
only the second scenario remains in the system. The implications here
are slightly different: node_reclaim is now only called when the user
explicitly asks for it, so there is less reason to throttle it. In
fact, it would be counterintuitive from the user's perspective if
triggering direct reclaim freed no memory at all, even when there is
reclaimable memory (albeit little of it).

Deprecate the min_{slab, unmapped}_ratio sysctls now that node_reclaim
no longer needs to be throttled. This leaves fewer sysctls to maintain
and makes __node_reclaim more intuitive.
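For a sense of scale, the setup helpers removed below turn each ratio
into a per-node page threshold, min_*_pages = managed_pages * ratio /
100. A worked example under assumed numbers (a 64 GiB node with 4 KiB
pages and the default ratios of 1% and 5%; the node size is an
assumption, not from the patch):

  #include <stdio.h>

  int main(void)
  {
  	unsigned long managed_pages = (64UL << 30) >> 12;	/* 16777216 pages */
  	unsigned long min_unmapped = managed_pages * 1 / 100;	/* default 1% */
  	unsigned long min_slab = managed_pages * 5 / 100;	/* default 5% */

  	printf("min_unmapped_pages: %lu (~%lu MiB)\n",
  	       min_unmapped, (min_unmapped << 12) >> 20);	/* ~655 MiB */
  	printf("min_slab_pages:     %lu (~%lu MiB)\n",
  	       min_slab, (min_slab << 12) >> 20);		/* ~3276 MiB */
  	return 0;
  }

So on such a node, a user-requested reclaim could previously be refused
outright unless hundreds of MiB were already reclaimable.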
Signed-off-by: Joshua Hahn
---
 Documentation/admin-guide/sysctl/vm.rst       | 37 ---------
 Documentation/mm/physical_memory.rst          |  9 --
 .../translations/zh_CN/mm/physical_memory.rst |  8 --
 include/linux/mmzone.h                        |  8 --
 include/linux/swap.h                          |  5 --
 mm/page_alloc.c                               | 82 -------------------
 mm/vmscan.c                                   | 73 ++---------------
 7 files changed, 7 insertions(+), 215 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 4d71211fdad8..ea2fd3feb9c6 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -49,8 +49,6 @@ Currently, these files are in /proc/sys/vm:
 - memory_failure_early_kill
 - memory_failure_recovery
 - min_free_kbytes
-- min_slab_ratio
-- min_unmapped_ratio
 - mmap_min_addr
 - mmap_rnd_bits
 - mmap_rnd_compat_bits
@@ -549,41 +547,6 @@ become subtly broken, and prone to deadlock under high loads.
 Setting this too high will OOM your machine instantly.
 
 
-min_slab_ratio
-==============
-
-This is available only on NUMA kernels.
-
-A percentage of the total pages in each zone.  On Zone reclaim
-(fallback from the local zone occurs) slabs will be reclaimed if more
-than this percentage of pages in a zone are reclaimable slab pages.
-This insures that the slab growth stays under control even in NUMA
-systems that rarely perform global reclaim.
-
-The default is 5 percent.
-
-Note that slab reclaim is triggered in a per zone / node fashion.
-The process of reclaiming slab memory is currently not node specific
-and may not be fast.
-
-
-min_unmapped_ratio
-==================
-
-This is available only on NUMA kernels.
-
-This is a percentage of the total pages in each zone. Zone reclaim will
-only occur if more than this percentage of pages are in a state that
-zone_reclaim_mode allows to be reclaimed.
-
-If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
-against all file-backed unmapped pages including swapcache pages and tmpfs
-files. Otherwise, only unmapped pages backed by normal files but not tmpfs
-files and similar are considered.
-
-The default is 1 percent.
-
-
 mmap_min_addr
 =============
 
diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
index b76183545e5b..ee8fd939020d 100644
--- a/Documentation/mm/physical_memory.rst
+++ b/Documentation/mm/physical_memory.rst
@@ -296,15 +296,6 @@ See also Documentation/mm/page_reclaim.rst.
 ``kswapd_failures``
   Number of runs kswapd was unable to reclaim any pages
 
-``min_unmapped_pages``
-  Minimal number of unmapped file backed pages that cannot be reclaimed.
-  Determined by ``vm.min_unmapped_ratio`` sysctl. Only defined when
-  ``CONFIG_NUMA`` is enabled.
-
-``min_slab_pages``
-  Minimal number of SLAB pages that cannot be reclaimed. Determined by
-  ``vm.min_slab_ratio sysctl``. Only defined when ``CONFIG_NUMA`` is enabled
-
 ``flags``
   Flags controlling reclaim behavior.
 
diff --git a/Documentation/translations/zh_CN/mm/physical_memory.rst b/Documentation/translations/zh_CN/mm/physical_memory.rst
index 4594d15cefec..670bd8103c3b 100644
--- a/Documentation/translations/zh_CN/mm/physical_memory.rst
+++ b/Documentation/translations/zh_CN/mm/physical_memory.rst
@@ -280,14 +280,6 @@ kswapd线程可以回收的最高区域索引。
 ``kswapd_failures``
   kswapd无法回收任何页面的运行次数。
 
-``min_unmapped_pages``
-  无法回收的未映射文件支持的最小页面数量。由 ``vm.min_unmapped_ratio``
-  系统控制台（sysctl）参数决定。在开启 ``CONFIG_NUMA`` 配置时定义。
-
-``min_slab_pages``
-  无法回收的SLAB页面的最少数量。由 ``vm.min_slab_ratio`` 系统控制台
-  （sysctl）参数决定。在开启 ``CONFIG_NUMA`` 时定义。
-
 ``flags``
   控制回收行为的标志位。
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 75ef7c9f9307..4be84764d097 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1451,14 +1451,6 @@ typedef struct pglist_data {
 	 */
 	unsigned long		totalreserve_pages;
 
-#ifdef CONFIG_NUMA
-	/*
-	 * node reclaim becomes active if more unmapped pages exist.
-	 */
-	unsigned long		min_unmapped_pages;
-	unsigned long		min_slab_pages;
-#endif /* CONFIG_NUMA */
-
 	/* Write-intensive fields used by page reclaim */
 	CACHELINE_PADDING(_pad1_);
 
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 38ca3df68716..c5915d787852 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -411,11 +411,6 @@ static inline void reclaim_unregister_node(struct node *node)
 }
 #endif /* CONFIG_SYSFS && CONFIG_NUMA */
 
-#ifdef CONFIG_NUMA
-extern int sysctl_min_unmapped_ratio;
-extern int sysctl_min_slab_ratio;
-#endif
-
 void check_move_unevictable_folios(struct folio_batch *fbatch);
 
 extern void __meminit kswapd_run(int nid);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 010a035e81bd..9524713c81b7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5676,8 +5676,6 @@ int local_memory_node(int node)
 }
 #endif
 
-static void setup_min_unmapped_ratio(void);
-static void setup_min_slab_ratio(void);
 #else	/* CONFIG_NUMA */
 
 static void build_zonelists(pg_data_t *pgdat)
@@ -6487,11 +6485,6 @@ int __meminit init_per_zone_wmark_min(void)
 	refresh_zone_stat_thresholds();
 	setup_per_zone_lowmem_reserve();
 
-#ifdef CONFIG_NUMA
-	setup_min_unmapped_ratio();
-	setup_min_slab_ratio();
-#endif
-
 	khugepaged_min_free_kbytes_update();
 
 	return 0;
@@ -6534,63 +6527,6 @@ static int watermark_scale_factor_sysctl_handler(const struct ctl_table *table,
 	return 0;
 }
 
-#ifdef CONFIG_NUMA
-static void setup_min_unmapped_ratio(void)
-{
-	pg_data_t *pgdat;
-	struct zone *zone;
-
-	for_each_online_pgdat(pgdat)
-		pgdat->min_unmapped_pages = 0;
-
-	for_each_zone(zone)
-		zone->zone_pgdat->min_unmapped_pages += (zone_managed_pages(zone) *
-							 sysctl_min_unmapped_ratio) / 100;
-}
-
-
-static int sysctl_min_unmapped_ratio_sysctl_handler(const struct ctl_table *table, int write,
-		void *buffer, size_t *length, loff_t *ppos)
-{
-	int rc;
-
-	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
-	if (rc)
-		return rc;
-
-	setup_min_unmapped_ratio();
-
-	return 0;
-}
-
-static void setup_min_slab_ratio(void)
-{
-	pg_data_t *pgdat;
-	struct zone *zone;
-
-	for_each_online_pgdat(pgdat)
-		pgdat->min_slab_pages = 0;
-
-	for_each_zone(zone)
-		zone->zone_pgdat->min_slab_pages += (zone_managed_pages(zone) *
-						     sysctl_min_slab_ratio) / 100;
-}
-
-static int sysctl_min_slab_ratio_sysctl_handler(const struct ctl_table *table, int write,
-		void *buffer, size_t *length, loff_t *ppos)
-{
-	int rc;
-
-	rc = proc_dointvec_minmax(table, write, buffer, length, ppos);
-	if (rc)
-		return rc;
-
-	setup_min_slab_ratio();
-
-	return 0;
-}
-#endif
-
 /*
  * lowmem_reserve_ratio_sysctl_handler - just a wrapper around
  * proc_dointvec() so that we can call setup_per_zone_lowmem_reserve()
@@ -6720,24 +6656,6 @@ static const struct ctl_table page_alloc_sysctl_table[] = {
 		.mode		= 0644,
 		.proc_handler	= numa_zonelist_order_handler,
 	},
-	{
-		.procname	= "min_unmapped_ratio",
-		.data		= &sysctl_min_unmapped_ratio,
-		.maxlen		= sizeof(sysctl_min_unmapped_ratio),
-		.mode		= 0644,
-		.proc_handler	= sysctl_min_unmapped_ratio_sysctl_handler,
-		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE_HUNDRED,
-	},
-	{
-		.procname	= "min_slab_ratio",
-		.data		= &sysctl_min_slab_ratio,
-		.maxlen		= sizeof(sysctl_min_slab_ratio),
-		.mode		= 0644,
-		.proc_handler	= sysctl_min_slab_ratio_sysctl_handler,
-		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE_HUNDRED,
-	},
 #endif
 };
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d07acd76fdea..4e23289efba4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -7537,62 +7537,6 @@ module_init(kswapd_init)
  */
 int node_reclaim_mode __read_mostly;
 
-/*
- * Percentage of pages in a zone that must be unmapped for node_reclaim to
- * occur.
- */
-int sysctl_min_unmapped_ratio = 1;
-
-/*
- * If the number of slab pages in a zone grows beyond this percentage then
- * slab reclaim needs to occur.
- */
-int sysctl_min_slab_ratio = 5;
-
-static inline unsigned long node_unmapped_file_pages(struct pglist_data *pgdat)
-{
-	unsigned long file_mapped = node_page_state(pgdat, NR_FILE_MAPPED);
-	unsigned long file_lru = node_page_state(pgdat, NR_INACTIVE_FILE) +
-		node_page_state(pgdat, NR_ACTIVE_FILE);
-
-	/*
-	 * It's possible for there to be more file mapped pages than
-	 * accounted for by the pages on the file LRU lists because
-	 * tmpfs pages accounted for as ANON can also be FILE_MAPPED
-	 */
-	return (file_lru > file_mapped) ? (file_lru - file_mapped) : 0;
-}
-
-/* Work out how many page cache pages we can reclaim in this reclaim_mode */
-static unsigned long node_pagecache_reclaimable(struct pglist_data *pgdat)
-{
-	unsigned long nr_pagecache_reclaimable;
-	unsigned long delta = 0;
-
-	/*
-	 * If RECLAIM_UNMAP is set, then all file pages are considered
-	 * potentially reclaimable. Otherwise, we have to worry about
-	 * pages like swapcache and node_unmapped_file_pages() provides
-	 * a better estimate
-	 */
-	if (node_reclaim_mode & RECLAIM_UNMAP)
-		nr_pagecache_reclaimable = node_page_state(pgdat, NR_FILE_PAGES);
-	else
-		nr_pagecache_reclaimable = node_unmapped_file_pages(pgdat);
-
-	/*
-	 * Since we can't clean folios through reclaim, remove dirty file
-	 * folios from consideration.
-	 */
-	delta += node_page_state(pgdat, NR_FILE_DIRTY);
-
-	/* Watch for any possible underflows due to delta */
-	if (unlikely(delta > nr_pagecache_reclaimable))
-		delta = nr_pagecache_reclaimable;
-
-	return nr_pagecache_reclaimable - delta;
-}
-
 /*
  * Try to free up some pages from this node through reclaim.
  */
@@ -7617,16 +7561,13 @@ static unsigned long __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask,
 	noreclaim_flag = memalloc_noreclaim_save();
 	set_task_reclaim_state(p, &sc->reclaim_state);
 
-	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages ||
-	    node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B) > pgdat->min_slab_pages) {
-		/*
-		 * Free memory by calling shrink node with increasing
-		 * priorities until we have enough memory freed.
-		 */
-		do {
-			shrink_node(pgdat, sc);
-		} while (sc->nr_reclaimed < nr_pages && --sc->priority >= 0);
-	}
+	/*
+	 * Free memory by calling shrink node with increasing priorities until
+	 * we have enough memory freed.
+	 */
+	do {
+		shrink_node(pgdat, sc);
+	} while (sc->nr_reclaimed < nr_pages && --sc->priority >= 0);
 
 	set_task_reclaim_state(p, NULL);
 	memalloc_noreclaim_restore(noreclaim_flag);
-- 
2.47.3

From: Joshua Hahn
Cc: "Liam R. Howlett", Alistair Popple, Andrew Morton, Axel Rasmussen,
 Brendan Jackman, Byungchul Park, Christophe Leroy, David Hildenbrand,
 Gregory Price, Johannes Weiner, Jonathan Corbet, Lorenzo Stoakes,
 Madhavan Srinivasan, Matthew Brost, Michael Ellerman, Michal Hocko,
 Mike Rapoport, Nicholas Piggin, Qi Zheng, Rakie Kim, Shakeel Butt,
 Suren Baghdasaryan, Vlastimil Babka, Wei Xu, Ying Huang, Yuanchu Xie,
 Zi Yan, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC LPC2025 PATCH 4/4] mm/vmscan: Deprecate zone_reclaim_mode
Date: Fri, 5 Dec 2025 15:32:15 -0800
Message-ID: <20251205233217.3344186-5-joshua.hahnjy@gmail.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>
References: <20251205233217.3344186-1-joshua.hahnjy@gmail.com>

zone_reclaim_mode was introduced in 2005 to work around the NUMA
penalties associated with allocating memory on remote nodes. It changed
the page allocator's behavior to prefer stalling and performing direct
reclaim locally over allocating on a remote node. In 2014,
zone_reclaim_mode was disabled by default, as it was deemed unsuitable
for most workloads [1].

Since then, and even more so since 2005, a lot has changed. NUMA
penalties are lower than they used to be, and we now have much more
extensive infrastructure to control NUMA spillage (NUMA balancing,
memory.reclaim, tiering / promotion / demotion). Together, these
changes make remote memory access a much more appealing alternative to
stalling the system when there may be free memory on other nodes.

This is not to say that there are no workloads that perform better with
NUMA locality. However, zone_reclaim_mode is a system-wide setting that
makes this bet for all running workloads on the machine. Today, we have
many more alternatives that provide fine-grained control over
allocation strategy, such as mbind or set_mempolicy.

Deprecate zone_reclaim_mode in favor of modern alternatives, such as
NUMA balancing, membinding, and promotion/demotion mechanisms. This
improves code readability and maintainability, especially in the page
allocation code.

[1] Commit 4f9b16a64753 ("mm: disable zone_reclaim_mode by default")
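As a concrete illustration of the per-workload alternatives mentioned
above, a minimal sketch that binds a single mapping to one node with
mbind(2) instead of relying on a system-wide knob (node 0 and the 1 MiB
size are arbitrary example values; build with -lnuma):

  #include <numaif.h>		/* mbind(), MPOL_BIND */
  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
  	size_t len = 1 << 20;
  	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
  		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  	unsigned long nodemask = 1UL << 0;	/* node 0 only */

  	if (p == MAP_FAILED)
  		return 1;
  	/* Per-mapping policy: only this range is restricted to node 0. */
  	if (mbind(p, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask) + 1, 0))
  		perror("mbind");	/* e.g. kernel without NUMA support */
  	return 0;
  }

Unlike zone_reclaim_mode, this constrains placement only for the
workload that asked for it.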
Signed-off-by: Joshua Hahn
---
 Documentation/admin-guide/sysctl/vm.rst | 41 -------------------------
 arch/powerpc/include/asm/topology.h     |  4 ---
 include/linux/topology.h                |  6 ----
 include/uapi/linux/mempolicy.h          | 14 ---------
 mm/internal.h                           | 11 -------
 mm/page_alloc.c                         |  4 +--
 mm/vmscan.c                             | 18 ------------
 7 files changed, 2 insertions(+), 96 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index ea2fd3feb9c6..635b16c1867e 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -76,7 +76,6 @@ Currently, these files are in /proc/sys/vm:
 - vfs_cache_pressure_denom
 - watermark_boost_factor
 - watermark_scale_factor
-- zone_reclaim_mode
 
 
 admin_reserve_kbytes
@@ -1046,43 +1045,3 @@ going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate that
 the number of free pages kswapd maintains for latency reasons is too small
 for the allocation bursts occurring in the system. This knob can then be
 used to tune kswapd aggressiveness accordingly.
-
-
-zone_reclaim_mode
-=================
-
-Zone_reclaim_mode allows someone to set more or less aggressive approaches to
-reclaim memory when a zone runs out of memory. If it is set to zero then no
-zone reclaim occurs. Allocations will be satisfied from other zones / nodes
-in the system.
-
-This is value OR'ed together of
-
-=	===================================
-1	Zone reclaim on
-2	Zone reclaim writes dirty pages out
-4	Zone reclaim swaps pages
-=	===================================
-
-zone_reclaim_mode is disabled by default. For file servers or workloads
-that benefit from having their data cached, zone_reclaim_mode should be
-left disabled as the caching effect is likely to be more important than
-data locality.
-
-Consider enabling one or more zone_reclaim mode bits if it's known that the
-workload is partitioned such that each partition fits within a NUMA node
-and that accessing remote memory would cause a measurable performance
-reduction. The page allocator will take additional actions before
-allocating off node pages.
-
-Allowing zone reclaim to write out pages stops processes that are
-writing large amounts of data from dirtying pages on other nodes. Zone
-reclaim will write out dirty pages if a zone fills up and so effectively
-throttle the process. This may decrease the performance of a single process
-since it cannot use all of system memory to buffer the outgoing writes
-anymore but it preserve the memory on other nodes so that the performance
-of other processes running on other nodes will not be affected.
-
-Allowing regular swap effectively restricts allocations to the local
-node unless explicitly overridden by memory policies or cpuset
-configurations.
diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
index f19ca44512d1..49015b2b0d8d 100644
--- a/arch/powerpc/include/asm/topology.h
+++ b/arch/powerpc/include/asm/topology.h
@@ -10,10 +10,6 @@ struct drmem_lmb;
 
 #ifdef CONFIG_NUMA
 
-/*
- * If zone_reclaim_mode is enabled, a RECLAIM_DISTANCE of 10 will mean that
- * all zones on all nodes will be eligible for zone_reclaim().
- */
 #define RECLAIM_DISTANCE 10
 
 #include <asm/mmzone.h>
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 6575af39fd10..37018264ca1e 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -50,12 +50,6 @@ int arch_update_cpu_topology(void);
 #define node_distance(from,to)	((from) == (to) ? LOCAL_DISTANCE : REMOTE_DISTANCE)
 #endif
 #ifndef RECLAIM_DISTANCE
-/*
- * If the distance between nodes in a system is larger than RECLAIM_DISTANCE
- * (in whatever arch specific measurement units returned by node_distance())
- * and node_reclaim_mode is enabled then the VM will only call node_reclaim()
- * on nodes within this distance.
- */
 #define RECLAIM_DISTANCE 30
 #endif
 
diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 8fbbe613611a..194f922dad9b 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -65,18 +65,4 @@ enum {
 #define MPOL_F_MOF	(1 << 3) /* this policy wants migrate on fault */
 #define MPOL_F_MORON	(1 << 4) /* Migrate On protnone Reference On Node */
 
-/*
- * Enabling zone reclaim means the page allocator will attempt to fulfill
- * the allocation request on the current node by triggering reclaim and
- * trying to shrink the current node.
- * Fallback allocations on the next candidates in the zonelist are considered
- * when reclaim fails to free up enough memory in the current node/zone.
- *
- * These bit locations are exposed in the vm.zone_reclaim_mode sysctl.
- * New bits are OK, but existing bits should not be changed.
- */
-#define RECLAIM_ZONE	(1<<0)	/* Enable zone reclaim */
-#define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
-#define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */
-
 #endif /* _UAPI_LINUX_MEMPOLICY_H */
diff --git a/mm/internal.h b/mm/internal.h
index 743fcebe53a8..a2df0bf3f458 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1197,24 +1197,13 @@ static inline void mminit_verify_zonelist(void)
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
 
 #ifdef CONFIG_NUMA
-extern int node_reclaim_mode;
-
 extern int find_next_best_node(int node, nodemask_t *used_node_mask);
 #else
-#define node_reclaim_mode 0
-
 static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
 {
 	return NUMA_NO_NODE;
 }
 #endif
-
-static inline bool node_reclaim_enabled(void)
-{
-	/* Is any node_reclaim_mode bit set? */
-	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
-}
-
 /*
  * mm/memory-failure.c
  */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9524713c81b7..bf4faec4ebe6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3823,8 +3823,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		 * If kswapd is already active on a node, keep looking
 		 * for other nodes that might be idle. This can happen
 		 * if another process has NUMA bindings and is causing
-		 * kswapd wakeups on only some nodes. Avoid accidental
-		 * "node_reclaim_mode"-like behavior in this case.
+		 * kswapd wakeups on only some nodes. Avoid accidentally
+		 * overpressuring the local node when remote nodes are free.
 		 */
 		if (skip_kswapd_nodes &&
 		    !waitqueue_active(&zone->zone_pgdat->kswapd_wait)) {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4e23289efba4..f480a395df65 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -7503,16 +7503,6 @@ static const struct ctl_table vmscan_sysctl_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= SYSCTL_TWO_HUNDRED,
 	},
-#ifdef CONFIG_NUMA
-	{
-		.procname	= "zone_reclaim_mode",
-		.data		= &node_reclaim_mode,
-		.maxlen		= sizeof(node_reclaim_mode),
-		.mode		= 0644,
-		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= SYSCTL_ZERO,
-	}
-#endif
 };
 
 static int __init kswapd_init(void)
@@ -7529,14 +7519,6 @@ static int __init kswapd_init(void)
 module_init(kswapd_init)
 
 #ifdef CONFIG_NUMA
-/*
- * Node reclaim mode
- *
- * If non-zero call node_reclaim when the number of free pages falls below
- * the watermarks.
- */
-int node_reclaim_mode __read_mostly;
-
 /*
  * Try to free up some pages from this node through reclaim.
  */
-- 
2.47.3