From: Rakie Kim <rakie.kim@sk.com>
To: akpm@linux-foundation.org
Cc: gourry@gourry.net, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-cxl@vger.kernel.org, ziy@nvidia.com, matthew.brost@intel.com,
    joshua.hahnjy@gmail.com, byungchul@sk.com, ying.huang@linux.alibaba.com,
    apopple@nvidia.com, david@kernel.org, lorenzo.stoakes@oracle.com,
    Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
    surenb@google.com, mhocko@suse.com, dave@stgolabs.net,
    jonathan.cameron@huawei.com, dave.jiang@intel.com,
    alison.schofield@intel.com, vishal.l.verma@intel.com,
    ira.weiny@intel.com, dan.j.williams@intel.com, kernel_team@skhynix.com,
    honggyu.kim@sk.com, yunjeong.mun@sk.com, rakie.kim@sk.com
Subject: [RFC PATCH 1/4] mm/numa: introduce nearest_nodes_nodemask()
Date: Mon, 16 Mar 2026 14:12:49 +0900
Message-ID: <20260316051258.246-2-rakie.kim@sk.com>
In-Reply-To: <20260316051258.246-1-rakie.kim@sk.com>
References: <20260316051258.246-1-rakie.kim@sk.com>
Add a new NUMA helper, nearest_nodes_nodemask(), to find all nodes in a
given nodemask that are located at the minimum distance from a specified
source node.

Unlike nearest_node_nodemask(), which returns only a single node, this
function identifies all nodes that share the closest distance value. This
is useful when multiple nodes are equally near in the NUMA topology and a
complete set of nearest candidates is required.

The helper clears the output nodemask and sets every node that meets the
minimum-distance condition. It returns 0 on success or -EINVAL if the
output argument is invalid.

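The selection rule described above (keep every candidate that ties for the minimum distance, restart the set whenever a strictly closer candidate appears) can be sketched in plain userspace C. The 4-node distance matrix below is a made-up example standing in for node_distance(), and the bool arrays stand in for nodemask_t; this is an illustration of the algorithm, not the kernel code itself:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define NR_NODES 4

/* Hypothetical SLIT-style distances; toy_distance[a][b] plays the
 * role of node_distance(a, b). */
static const int toy_distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 20, 40 },
	{ 20, 10, 40, 20 },
	{ 20, 40, 10, 40 },
	{ 40, 20, 40, 10 },
};

/* Mirror of the patch's loop: mark out[n] for every candidate in
 * mask[] whose distance from @node equals the running minimum.
 * Returns the number of nearest nodes found. */
static int nearest_nodes(int node, const bool *mask, bool *out)
{
	int n, dist, min_dist = INT_MAX, count = 0;

	for (n = 0; n < NR_NODES; n++)
		out[n] = false;

	for (n = 0; n < NR_NODES; n++) {
		if (!mask[n])
			continue;
		dist = toy_distance[node][n];
		if (dist < min_dist) {
			/* Strictly closer: restart the result set. */
			min_dist = dist;
			for (int i = 0; i < NR_NODES; i++)
				out[i] = false;
			out[n] = true;
		} else if (dist == min_dist) {
			/* Tied with the current minimum: add to the set. */
			out[n] = true;
		}
	}

	for (n = 0; n < NR_NODES; n++)
		count += out[n];
	return count;
}
```

With the toy matrix, querying node 0 against candidates {1, 2, 3} finds nodes 1 and 2 tied at distance 20, so both land in the output set while node 3 (distance 40) is excluded.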
Signed-off-by: Rakie Kim <rakie.kim@sk.com>
---
 include/linux/numa.h |  8 ++++++++
 mm/mempolicy.c       | 41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/include/linux/numa.h b/include/linux/numa.h
index e6baaf6051bc..aa9526e9078b 100644
--- a/include/linux/numa.h
+++ b/include/linux/numa.h
@@ -33,6 +33,8 @@ int numa_nearest_node(int node, unsigned int state);
 
 int nearest_node_nodemask(int node, nodemask_t *mask);
 
+int nearest_nodes_nodemask(int node, const nodemask_t *mask, nodemask_t *out);
+
 #ifndef memory_add_physaddr_to_nid
 int memory_add_physaddr_to_nid(u64 start);
 #endif
@@ -54,6 +56,12 @@ static inline int nearest_node_nodemask(int node, nodemask_t *mask)
 	return NUMA_NO_NODE;
 }
 
+static inline int nearest_nodes_nodemask(int node, const nodemask_t *mask,
+					 nodemask_t *out)
+{
+	return 0;
+}
+
 static inline int memory_add_physaddr_to_nid(u64 start)
 {
 	return 0;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 68a98ba57882..a3f0fde6c626 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -338,6 +338,47 @@ int nearest_node_nodemask(int node, nodemask_t *mask)
 }
 EXPORT_SYMBOL_GPL(nearest_node_nodemask);
 
+/**
+ * nearest_nodes_nodemask - Find all nodes in @mask that are nearest to @node
+ * @node: The reference node ID to measure distance from
+ * @mask: The set of candidate nodes to compare against
+ * @out: Pointer to a nodemask that will store the nearest node(s)
+ *
+ * This function iterates over all nodes in @mask and measures the distance
+ * between each candidate node and the given @node using node_distance().
+ * It finds the minimum distance and then records all nodes in @mask that
+ * share that same minimum distance into the output mask @out.
+ *
+ * For example, if multiple nodes have equal minimal distance to @node, all
+ * of them are included in @out.
+ *
+ * Return: 0 on success, or -EINVAL if @out is NULL.
+ */
+int nearest_nodes_nodemask(int node, const nodemask_t *mask, nodemask_t *out)
+{
+	int dist, n, min_dist = INT_MAX;
+
+	if (!out)
+		return -EINVAL;
+
+	nodes_clear(*out);
+
+	for_each_node_mask(n, *mask) {
+		dist = node_distance(node, n);
+
+		if (dist < min_dist) {
+			min_dist = dist;
+			nodes_clear(*out);
+			node_set(n, *out);
+		} else if (dist == min_dist) {
+			node_set(n, *out);
+		}
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nearest_nodes_nodemask);
+
 struct mempolicy *get_task_policy(struct task_struct *p)
 {
 	struct mempolicy *pol = p->mempolicy;
-- 
2.34.1

From: Rakie Kim <rakie.kim@sk.com>
To: akpm@linux-foundation.org
Cc: gourry@gourry.net, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-cxl@vger.kernel.org, ziy@nvidia.com, matthew.brost@intel.com,
    joshua.hahnjy@gmail.com, byungchul@sk.com, ying.huang@linux.alibaba.com,
    apopple@nvidia.com, david@kernel.org, lorenzo.stoakes@oracle.com,
    Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
    surenb@google.com, mhocko@suse.com, dave@stgolabs.net,
    jonathan.cameron@huawei.com, dave.jiang@intel.com,
    alison.schofield@intel.com, vishal.l.verma@intel.com,
    ira.weiny@intel.com, dan.j.williams@intel.com, kernel_team@skhynix.com,
    honggyu.kim@sk.com, yunjeong.mun@sk.com, rakie.kim@sk.com
Subject: [RFC PATCH 2/4] mm/memory-tiers: introduce socket-aware topology management for NUMA nodes
Date: Mon, 16 Mar 2026 14:12:50 +0900
Message-ID: <20260316051258.246-3-rakie.kim@sk.com>
In-Reply-To: <20260316051258.246-1-rakie.kim@sk.com>
References: <20260316051258.246-1-rakie.kim@sk.com>
The existing NUMA distance model provides only relative latency values
between nodes and lacks any notion of structural grouping such as socket
or package boundaries. As a result, memory policies based solely on
distance cannot differentiate between nodes that are physically local to
the same socket and those that belong to different sockets. This often
leads to inefficient cross-socket demotion and suboptimal memory
placement.

This patch introduces a socket-aware topology management layer that
groups NUMA nodes according to their physical package (socket)
association. Each group forms a "memory package" that explicitly links
CPU and memory-only nodes (such as CXL or HBM) under the same socket.

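The grouping idea can be illustrated with a small userspace model. The topology below is hypothetical (two sockets, each pairing one CPU node with one CXL memory-only node), and demotion_target() mirrors only the socket-local selection principle, not the kernel implementation in this patch:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_NODES 4

/* Hypothetical 4-node box: nodes 0/1 are CPU nodes on sockets 0/1,
 * nodes 2/3 are CXL memory-only nodes attached to sockets 0/1. */
static const int package_of[NR_NODES] = { 0, 1, 0, 1 };
static const bool has_cpu[NR_NODES]  = { true, true, false, false };

/* Socket-local demotion: from a CPU node, pick a memory-only node in
 * the same package; return -1 (standing in for NUMA_NO_NODE) if the
 * package has no memory-only member. */
static int demotion_target(int nid)
{
	for (int n = 0; n < NR_NODES; n++)
		if (n != nid && !has_cpu[n] &&
		    package_of[n] == package_of[nid])
			return n;
	return -1;
}
```

Under this model, demotion from CPU node 0 always lands on node 2 (same socket) and never on node 3, which a flat distance table could not guarantee if the distances happened to tie.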
This structure allows the kernel to interpret NUMA topology in a way
that reflects real hardware locality rather than relying solely on flat
distance values. By maintaining socket-level grouping, the kernel can:

- Enforce demotion and promotion policies that stay within the same
  socket.
- Avoid unintended cross-socket migrations that degrade performance.
- Provide a structural abstraction for future policy and tiering logic.

Unlike ACPI-provided distance tables, which offer static and symmetric
relationships, this socket-aware model captures the true hardware
hierarchy and provides a flexible foundation for systems where the
distance matrix alone cannot accurately express socket boundaries or
asymmetric topologies.

This establishes a topology-aware basis for more predictable and
performance-consistent NUMA memory management.

Signed-off-by: Rakie Kim <rakie.kim@sk.com>
---
 include/linux/memory-tiers.h |  93 +++++
 mm/memory-tiers.c            | 766 +++++++++++++++++++++++++++++++++++
 2 files changed, 859 insertions(+)

diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 7a805796fcfd..406b50ac7d88 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -52,10 +52,24 @@ int mt_perf_to_adistance(struct access_coordinate *perf, int *adist);
 struct memory_dev_type *mt_find_alloc_memory_type(int adist,
 						  struct list_head *memory_types);
 void mt_put_memory_types(struct list_head *memory_types);
+
+int register_mp_package_notifier(struct notifier_block *notifier);
+void unregister_mp_package_notifier(struct notifier_block *notifier);
+int mp_probe_package_id(int nid);
+int mp_add_package_node_by_initiator(int nid, int initiator_nid);
+int mp_add_package_node(int nid);
+int mp_get_package_nodes(int nid, nodemask_t *out);
+int mp_get_package_cpu_nodes(int nid, nodemask_t *out);
+int mp_get_package_memory_only_nodes(int nid, nodemask_t *out);
 #ifdef CONFIG_MIGRATION
 int next_demotion_node(int node);
 void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets);
 bool node_is_toptier(int node);
+
+int mp_next_demotion_nodemask(int nid, nodemask_t *out);
+int mp_next_demotion_node(int nid);
+int mp_next_promotion_nodemask(int nid, nodemask_t *out);
+int mp_next_promotion_node(int nid);
 #else
 static inline int next_demotion_node(int node)
 {
@@ -71,6 +85,26 @@ static inline bool node_is_toptier(int node)
 {
 	return true;
 }
+
+static inline int mp_next_demotion_nodemask(int nid, nodemask_t *out)
+{
+	return 0;
+}
+
+static inline int mp_next_demotion_node(int nid)
+{
+	return 0;
+}
+
+static inline int mp_next_promotion_nodemask(int nid, nodemask_t *out)
+{
+	return 0;
+}
+
+static inline int mp_next_promotion_node(int nid)
+{
+	return 0;
+}
 #endif
 
 #else
@@ -151,5 +185,64 @@ static inline struct memory_dev_type *mt_find_alloc_memory_type(int adist,
 static inline void mt_put_memory_types(struct list_head *memory_types)
 {
 }
+
+static inline int register_mp_package_notifier(struct notifier_block *notifier)
+{
+	return 0;
+}
+
+static inline void unregister_mp_package_notifier(struct notifier_block *notifier)
+{
+}
+
+static inline int mp_probe_package_id(int nid)
+{
+	return NOTIFY_DONE;
+}
+
+static inline int mp_add_package_node_by_initiator(int nid, int initiator_nid)
+{
+	return 0;
+}
+
+static inline int mp_add_package_node(int nid)
+{
+	return 0;
+}
+
+static inline int mp_get_package_nodes(int nid, nodemask_t *out)
+{
+	return 0;
+}
+
+static inline int mp_get_package_cpu_nodes(int nid, nodemask_t *out)
+{
+	return 0;
+}
+
+static inline int mp_get_package_memory_only_nodes(int nid, nodemask_t *out)
+{
+	return 0;
+}
+
+static inline int mp_next_demotion_nodemask(int nid, nodemask_t *out)
+{
+	return 0;
+}
+
+static inline int mp_next_demotion_node(int nid)
+{
+	return 0;
+}
+
+static inline int mp_next_promotion_nodemask(int nid, nodemask_t *out)
+{
+	return 0;
+}
+
+static inline int mp_next_promotion_node(int nid)
+{
+	return 0;
+}
 #endif /* CONFIG_NUMA */
 #endif /* _LINUX_MEMORY_TIERS_H */
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index 864811fff409..47d323e5466e 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -998,3 +998,769 @@ static int __init numa_init_sysfs(void)
 subsys_initcall(numa_init_sysfs);
 #endif /* CONFIG_SYSFS */
 #endif
+
+/**
+ * enum mp_nodes_type - Selector for which subset of a package to return
+ * @MP_NODES_ALL: All NUMA nodes that belong to the package.
+ * @MP_NODES_CPU: Only CPU nodes in the package.
+ * @MP_NODES_MEM_ONLY: Only memory-only nodes (e.g. CXL/HBM) in the package.
+ *
+ * Used internally to choose which nodemask to expose for a given package.
+ */
+enum mp_nodes_type {
+	MP_NODES_ALL,
+	MP_NODES_CPU,
+	MP_NODES_MEM_ONLY
+};
+
+/**
+ * struct memory_package - Per-socket (physical package) container
+ * @package_id: Physical socket/package id (from topology).
+ * @nodes: Nodemask of all member nodes in this package.
+ * @cpu_nodes: Nodemask of CPU nodes in this package.
+ * @memory_only_nodes: Nodemask of memory-only nodes in this package.
+ * @cpu_list: List head of CPU-type members.
+ * @memory_only_list: List head of memory-only members.
+ * @list: Linkage on the global @memory_packages list.
+ *
+ * A memory_package groups NUMA nodes that share the same physical CPU package.
+ * The masks are used to implement socket-local placement/demotion/promotion.
+ */
+struct memory_package {
+	int package_id;
+	nodemask_t nodes;
+	nodemask_t cpu_nodes;
+	nodemask_t memory_only_nodes;
+	struct list_head cpu_list;
+	struct list_head memory_only_list;
+	struct list_head list;
+};
+
+/**
+ * enum mpn_source_flags - Source used to resolve a node's package membership
+ * @MPN_SRC_UNKNOWN: Unknown/unspecified.
+ * @MPN_SRC_CPU: Directly resolved from a CPU node (1:1).
+ * @MPN_SRC_INITIATOR: Resolved via an initiator CPU node provided by a driver.
+ * @MPN_SRC_SLIT: Resolved via SLIT/nearest-node.
+ *
+ * These flags are informational; they describe how a given node was bound to
+ * its package and help with policy decisions later.
+ */
+enum mpn_source_flags {
+	MPN_SRC_UNKNOWN = 0,
+	MPN_SRC_CPU = BIT(1),
+	MPN_SRC_INITIATOR = BIT(2),
+	MPN_SRC_SLIT = BIT(3)
+};
+
+/**
+ * struct memory_package_node - Per-node membership and preferences
+ * @nid: NUMA node id for this entry.
+ * @initiator_nid: CPU nid that served as the initiator when resolving @nid.
+ * @package_id: Resolved package id that @nid belongs to.
+ * @source_flags: One of &enum mpn_source_flags describing the resolution.
+ * @preferred: Opposite-type nearest candidates inside the same package.
+ * @package: Pointer to the owning &struct memory_package (NULL until bound).
+ * @package_entry: Linkage on the owning package's type list.
+ *
+ * Each NUMA node that participates in socket-aware policy gets a wrapper entry
+ * that caches package membership and the precomputed set of preferred targets.
+ */
+struct memory_package_node {
+	int nid;
+	int initiator_nid;
+	int package_id;
+	int source_flags;
+	nodemask_t preferred;
+	struct memory_package *package;
+	struct list_head package_entry;
+};
+
+#define node_is_memory_only(_nid) \
+	(node_state((_nid), N_MEMORY) && !node_state((_nid), N_CPU))
+
+static BLOCKING_NOTIFIER_HEAD(mp_package_algorithms);
+
+static LIST_HEAD(memory_packages);
+static struct memory_package_node *mpns[MAX_NUMNODES];
+static DEFINE_MUTEX(memory_package_lock);
+
+/**
+ * register_mp_package_notifier - Register a package resolution algorithm
+ * @notifier: Notifier called with the nid to resolve (see mp_probe_package_id()).
+ *
+ * Drivers (e.g., CXL region/decoder code) register here to supply a package
+ * hint for newly appearing nodes. The notifier is invoked during nid->package
+ * resolution.
+ *
+ * Return: 0 on success, negative errno on failure.
+ */
+int register_mp_package_notifier(struct notifier_block *notifier)
+{
+	return blocking_notifier_chain_register(&mp_package_algorithms, notifier);
+}
+EXPORT_SYMBOL_GPL(register_mp_package_notifier);
+
+/**
+ * unregister_mp_package_notifier - Unregister a package resolution algorithm
+ * @notifier: Notifier previously registered with register_mp_package_notifier().
+ */
+void unregister_mp_package_notifier(struct notifier_block *notifier)
+{
+	blocking_notifier_chain_unregister(&mp_package_algorithms, notifier);
+}
+EXPORT_SYMBOL_GPL(unregister_mp_package_notifier);
+
+/**
+ * mp_probe_package_id - Invoke registered notifiers to resolve a node's package
+ * @nid: NUMA node id to resolve.
+ *
+ * Calls the blocking notifier chain to let subsystems provide an initiator or
+ * package id for @nid.
+ *
+ * Return: Notifier return code (>= 0 typically); negative errno on failure.
+ */
+int mp_probe_package_id(int nid)
+{
+	return blocking_notifier_call_chain(&mp_package_algorithms, nid, NULL);
+}
+EXPORT_SYMBOL_GPL(mp_probe_package_id);
+
+static int mp_node_to_package_id(int nid)
+{
+	int package_id = -EINVAL;
+	unsigned int first_cpu;
+	const struct cpumask *cpu_mask;
+
+	if (!node_state(nid, N_CPU))
+		goto out;
+
+	cpu_mask = cpumask_of_node(nid);
+	if (cpumask_empty(cpu_mask)) {
+		pr_err("node%d: CPU mask is empty\n", nid);
+		goto out;
+	}
+
+	first_cpu = cpumask_first(cpu_mask);
+	if (first_cpu >= nr_cpu_ids) {
+		pr_err("node%d: CPU (%d) out of range\n", nid, first_cpu);
+		goto out;
+	}
+
+	/*
+	 * Map the first CPU in this node's cpumask to its physical package id.
+	 * This ties the NUMA node to a socket (package) using topology info.
+	 */
+	package_id = topology_physical_package_id(first_cpu);
+	if (package_id < 0) {
+		pr_err("node%d: failed to resolve package id (%d)\n", nid, package_id);
+		package_id = -EINVAL;
+		goto out;
+	}
+
+out:
+	return package_id;
+}
+
+static void update_package_preferred(struct memory_package *mp)
+{
+	struct memory_package_node *mpn;
+
+	lockdep_assert_held(&memory_package_lock);
+
+	/*
+	 * For each CPU node, compute its preferred set as the nearest
+	 * memory-only node(s) within the same package. If the package has
+	 * no memory-only nodes, fall back to a self-reference so callers
+	 * never see an empty preferred set.
+	 */
+	list_for_each_entry(mpn, &mp->cpu_list, package_entry) {
+		nodes_clear(mpn->preferred);
+		if (!nodes_empty(mp->memory_only_nodes))
+			nearest_nodes_nodemask(mpn->nid, &mp->memory_only_nodes,
+					       &mpn->preferred);
+		else
+			node_set(mpn->nid, mpn->preferred);
+	}
+
+	/*
+	 * Symmetrically, for each memory-only node, compute its preferred set
+	 * as the nearest CPU node(s) within the same package. If the package
+	 * has no CPU nodes, fall back to a self-reference.
+	 */
+	list_for_each_entry(mpn, &mp->memory_only_list, package_entry) {
+		nodes_clear(mpn->preferred);
+		if (!nodes_empty(mp->cpu_nodes))
+			nearest_nodes_nodemask(mpn->nid, &mp->cpu_nodes,
+					       &mpn->preferred);
+		else
+			node_set(mpn->nid, mpn->preferred);
+	}
+}
+
+static inline bool memory_package_is_empty(struct memory_package *mp)
+{
+	lockdep_assert_held(&memory_package_lock);
+
+	return (nodes_empty(mp->cpu_nodes) && nodes_empty(mp->memory_only_nodes));
+}
+
+static inline bool package_node_is_valid(int nid)
+{
+	if (!mpns[nid]) {
+		pr_err("mpns[%d] is NULL\n", nid);
+		return false;
+	}
+
+	if (nodes_empty(mpns[nid]->preferred) || (mpns[nid]->package == NULL)) {
+		pr_err("nid %d: package or preferred mask not initialized\n", nid);
+		return false;
+	}
+
+	return true;
+}
+
+static struct memory_package *create_memory_package(int package_id)
+{
+	struct memory_package *mempackage;
+
+	mempackage = kzalloc(sizeof(*mempackage), GFP_KERNEL);
+	if (!mempackage)
+		return ERR_PTR(-ENOMEM);
+
+	mempackage->package_id = package_id;
+	mempackage->nodes = NODE_MASK_NONE;
+	mempackage->cpu_nodes = NODE_MASK_NONE;
+	mempackage->memory_only_nodes = NODE_MASK_NONE;
+	INIT_LIST_HEAD(&mempackage->cpu_list);
+	INIT_LIST_HEAD(&mempackage->memory_only_list);
+	INIT_LIST_HEAD(&mempackage->list);
+
+	return mempackage;
+}
+
+static void destroy_memory_package(struct memory_package *mp)
+{
+	lockdep_assert_held(&memory_package_lock);
+
+	if (memory_package_is_empty(mp)) {
+		list_del(&mp->list);
+		kfree(mp);
+	}
+}
+
+static struct memory_package *find_create_memory_package(int package_id)
+{
+	struct memory_package *mempackage;
+
+	mutex_lock(&memory_package_lock);
+	list_for_each_entry(mempackage, &memory_packages, list) {
+		/*
+		 * If a package for this package_id already exists, reuse it
+		 * instead of allocating a new one.
+		 */
+		if (mempackage->package_id == package_id) {
+			mutex_unlock(&memory_package_lock);
+			return mempackage;
+		}
+	}
+	mutex_unlock(&memory_package_lock);
+
+	mempackage = create_memory_package(package_id);
+	if (IS_ERR(mempackage))
+		return ERR_PTR(-ENOMEM);
+
+	mutex_lock(&memory_package_lock);
+	list_add(&mempackage->list, &memory_packages);
+	mutex_unlock(&memory_package_lock);
+
+	return mempackage;
+}
+
+static int bind_node_to_package(int nid)
+{
+	int ret = 0, package_id;
+	struct memory_package *mp;
+
+	mutex_lock(&memory_package_lock);
+	if (!mpns[nid]) {
+		ret = -EINVAL;
+		goto unlock_out;
+	}
+	package_id = mpns[nid]->package_id;
+	mutex_unlock(&memory_package_lock);
+
+	mp = find_create_memory_package(package_id);
+	if (IS_ERR(mp)) {
+		ret = PTR_ERR(mp);
+		goto out;
+	}
+
+	mutex_lock(&memory_package_lock);
+	mpns[nid]->package = mp;
+	node_set(mpns[nid]->nid, mp->nodes);
+	if (node_is_memory_only(mpns[nid]->nid)) {
+		node_set(mpns[nid]->nid, mp->memory_only_nodes);
+		list_add(&mpns[nid]->package_entry, &mp->memory_only_list);
+	} else {
+		node_set(mpns[nid]->nid, mp->cpu_nodes);
+		list_add(&mpns[nid]->package_entry, &mp->cpu_list);
+	}
+	update_package_preferred(mp);
+
+unlock_out:
+	mutex_unlock(&memory_package_lock);
+out:
+	/* Only dereference @mp on success; on the error paths it is
+	 * either an ERR_PTR or not yet assigned. */
+	if (!ret)
+		pr_info("memory_package %d: nodes=%*pbl cpu=%*pbl memory_only=%*pbl\n",
+			mp->package_id,
+			nodemask_pr_args(&mp->nodes),
+			nodemask_pr_args(&mp->cpu_nodes),
+			nodemask_pr_args(&mp->memory_only_nodes));
+
+	return ret;
+}
+
+static void unbind_node_to_package(struct memory_package *mp, int nid)
+{
+	lockdep_assert_held(&memory_package_lock);
+
+	node_clear(nid, mp->nodes);
+	if (node_state(nid, N_CPU))
+		node_clear(nid, mp->cpu_nodes);
+	else
+		node_clear(nid, mp->memory_only_nodes);
+
+	if (mpns[nid])
+		list_del(&mpns[nid]->package_entry);
+
+	update_package_preferred(mp);
+}
+
+static struct memory_package_node *create_package_node(int nid, int initiator_nid)
+{
+	int cpu_nid, package_id;
+	int source_flags;
+	struct memory_package_node *mpn;
+
+	if (node_state(nid, N_CPU)) {
+		cpu_nid = nid;
+		source_flags = MPN_SRC_CPU;
+	} else {
+		if (initiator_nid >= 0) {
+			cpu_nid = initiator_nid;
+			source_flags = MPN_SRC_INITIATOR;
+		} else {
+			/*
+			 * No driver-supplied initiator: fall back to the
+			 * nearest CPU node (via SLIT/numa_distance).
+			 */
+			cpu_nid = numa_nearest_node(nid, N_CPU);
+			source_flags = MPN_SRC_SLIT;
+		}
+	}
+
+	package_id = mp_node_to_package_id(cpu_nid);
+	if (package_id < 0)
+		return ERR_PTR(-EINVAL);
+
+	mpn = kzalloc(sizeof(*mpn), GFP_KERNEL);
+	if (!mpn)
+		return ERR_PTR(-ENOMEM);
+
+	mpn->nid = nid;
+	mpn->initiator_nid = cpu_nid;
+	mpn->package_id = package_id;
+	mpn->source_flags = source_flags;
+	mpn->preferred = NODE_MASK_NONE;
+	mpn->package = NULL;
+	INIT_LIST_HEAD(&mpn->package_entry);
+
+	return mpn;
+}
+
+static void __destroy_package_node(int nid)
+{
+	struct memory_package_node *mpn;
+	struct memory_package *mp;
+
+	lockdep_assert_held(&memory_package_lock);
+
+	mpn = mpns[nid];
+	if (!mpn)
+		return;
+
+	mp = mpn->package;
+	if (mp) {
+		unbind_node_to_package(mp, nid);
+		mpn->package = NULL;
+
+		if (memory_package_is_empty(mp))
+			destroy_memory_package(mp);
+	}
+
+	mpns[nid] = NULL;
+	kfree(mpn);
+}
+
+static void destroy_package_node(int nid)
+{
+	mutex_lock(&memory_package_lock);
+	__destroy_package_node(nid);
+	mutex_unlock(&memory_package_lock);
+}
+
+static int find_package_node(int nid, int initiator_nid)
+{
+	int mpn_nid = NUMA_NO_NODE;
+
+	mutex_lock(&memory_package_lock);
+	if (mpns[nid]) {
+		/*
+		 * SLIT-derived entries are provisional; if a driver later
+		 * provides an explicit initiator, drop the provisional
+		 * entry and rebuild with the stronger hint.
+		 */
+		if (mpns[nid]->source_flags == MPN_SRC_SLIT && initiator_nid >= 0)
+			__destroy_package_node(nid);
+		else
+			mpn_nid = nid;
+	}
+	mutex_unlock(&memory_package_lock);
+
+	return mpn_nid;
+}
+
+static int find_create_package_node(int nid, int initiator_nid)
+{
+	int mpn_nid;
+	struct memory_package_node *mpn;
+
+	mpn_nid = find_package_node(nid, initiator_nid);
+	if (mpn_nid != NUMA_NO_NODE)
+		return mpn_nid;
+
+	mpn = create_package_node(nid, initiator_nid);
+	if (IS_ERR(mpn))
+		return PTR_ERR(mpn);
+
+	mutex_lock(&memory_package_lock);
+	mpns[nid] = mpn;
+	mutex_unlock(&memory_package_lock);
+
+	return nid;
+}
+
+static int create_node_with_package(int nid)
+{
+	int ret;
+
+	ret = find_create_package_node(nid, NUMA_NO_NODE);
+	if (ret < 0) {
+		pr_err("package_node(%d) failed: %d\n", nid, ret);
+		return ret;
+	}
+
+	ret = bind_node_to_package(nid);
+	if (ret) {
+		pr_err("bind_node_to_package(%d) failed: %d\n", nid, ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * mp_add_package_node_by_initiator - Add a node with an initiator
+ * @nid: Target NUMA node to add.
+ * @initiator_nid: CPU nid used to resolve @nid's package (>= 0).
+ *
+ * Ensures that a &struct memory_package_node exists for @nid and that its
+ * package_id is determined using @initiator_nid when provided. Binding to the
+ * package is not performed here.
+ *
+ * Return: 0 on success; negative errno on failure.
+ */
+int mp_add_package_node_by_initiator(int nid, int initiator_nid)
+{
+	int ret;
+
+	ret = find_create_package_node(nid, initiator_nid);
+	if (ret < 0) {
+		pr_err("find_create_package_node(nid=%d, initiator=%d) failed: %d\n",
+		       nid, initiator_nid, ret);
+		return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mp_add_package_node_by_initiator);
+
+/**
+ * mp_add_package_node - Add a node, resolving package automatically
+ * @nid: Target NUMA node to add.
+ *
+ * Wrapper over mp_add_package_node_by_initiator() that requests automatic
+ * initiator resolution (e.g., nearest CPU).
+ *
+ * Return: 0 on success; negative errno on failure.
+ */
+int mp_add_package_node(int nid)
+{
+	return mp_add_package_node_by_initiator(nid, NUMA_NO_NODE);
+}
+EXPORT_SYMBOL_GPL(mp_add_package_node);
+
+static int __mp_get_preferred_nodemask(int nid, enum mp_nodes_type node_type,
+				       nodemask_t *out)
+{
+	int ret = 0;
+
+	if (!out) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	nodes_clear(*out);
+
+	if (nid < 0 || nid >= MAX_NUMNODES) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (node_type == MP_NODES_CPU) {
+		if (node_is_memory_only(nid)) {
+			pr_err("nid %d is a memory-only node\n", nid);
+			ret = -EINVAL;
+			goto out;
+		}
+	} else if (node_type == MP_NODES_MEM_ONLY) {
+		if (!node_is_memory_only(nid)) {
+			pr_err("nid %d is a CPU node\n", nid);
+			ret = -EINVAL;
+			goto out;
+		}
+	} else {
+		pr_err("invalid node type: %d\n", (int)node_type);
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (!package_node_is_valid(nid)) {
+		ret = -ENOENT;
+		goto out;
+	}
+
+	nodes_copy(*out, mpns[nid]->preferred);
+
+out:
+	return ret;
+}
+
+static int __mp_get_package_nodemask(int nid, enum mp_nodes_type node_type,
+				     nodemask_t *out)
+{
+	int ret = 0;
+
+	if (!out) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	nodes_clear(*out);
+
+	if (nid < 0 || nid >= MAX_NUMNODES) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	if (!package_node_is_valid(nid)) {
+		ret = -ENOENT;
+		goto out;
+	}
+
+	switch (node_type) {
+	case MP_NODES_ALL:
+		nodes_copy(*out, mpns[nid]->package->nodes);
+		break;
+	case MP_NODES_CPU:
+		nodes_copy(*out, mpns[nid]->package->cpu_nodes);
+		break;
+	case MP_NODES_MEM_ONLY:
+		nodes_copy(*out, mpns[nid]->package->memory_only_nodes);
+		break;
+	default:
+		ret = -EINVAL;
+		goto out;
+	}
+
+out:
+	return ret;
+}
+
+#ifdef CONFIG_MIGRATION
+/**
+ * mp_next_demotion_nodemask - Demotion candidates within a package
+ * @nid: CPU node from which memory would be demoted.
+ * @out: Output nodemask of nearest memory-only targets in the same package.
+ *
+ * Return: 0 on success; negative errno if @nid is invalid or not initialized.
+ */
+int mp_next_demotion_nodemask(int nid, nodemask_t *out)
+{
+	return __mp_get_preferred_nodemask(nid, MP_NODES_CPU, out);
+}
+EXPORT_SYMBOL_GPL(mp_next_demotion_nodemask);
+
+/**
+ * mp_next_demotion_node - Pick one demotion target
+ * @nid: CPU node from which memory would be demoted.
+ *
+ * Picks one target (random among the nearest) from mp_next_demotion_nodemask().
+ *
+ * Return: target nid on success, or NUMA_NO_NODE if no candidate is available.
+ */
+int mp_next_demotion_node(int nid)
+{
+	int target_nid;
+	nodemask_t target_nodemask;
+
+	if (mp_next_demotion_nodemask(nid, &target_nodemask))
+		return NUMA_NO_NODE;
+	if (nodes_empty(target_nodemask))
+		return NUMA_NO_NODE;
+
+	target_nid = node_random(&target_nodemask);
+
+	return target_nid;
+}
+EXPORT_SYMBOL_GPL(mp_next_demotion_node);
+
+/**
+ * mp_next_promotion_nodemask - Promotion candidates within a package
+ * @nid: Memory-only node towards which promotion seeks CPU locality.
+ * @out: Output nodemask of nearest CPU targets in the same package.
+ *
+ * Return: 0 on success; negative errno if @nid is invalid or not initialized.
+ */
+int mp_next_promotion_nodemask(int nid, nodemask_t *out)
+{
+	return __mp_get_preferred_nodemask(nid, MP_NODES_MEM_ONLY, out);
+}
+EXPORT_SYMBOL_GPL(mp_next_promotion_nodemask);
+
+/**
+ * mp_next_promotion_node - Pick one promotion target
+ * @nid: Memory-only node to be promoted towards CPUs.
+ *
+ * Picks one target (random among the nearest) from mp_next_promotion_nodemask().
+ *
+ * Return: target nid on success, or NUMA_NO_NODE if no candidate is available.
+ */
+int mp_next_promotion_node(int nid)
+{
+	int target_nid;
+	nodemask_t target_nodemask;
+
+	if (mp_next_promotion_nodemask(nid, &target_nodemask))
+		return NUMA_NO_NODE;
+	if (nodes_empty(target_nodemask))
+		return NUMA_NO_NODE;
+
+	target_nid = node_random(&target_nodemask);
+
+	return target_nid;
+}
+EXPORT_SYMBOL_GPL(mp_next_promotion_node);
+#endif /* CONFIG_MIGRATION */
+
+/**
+ * mp_get_package_nodes - Return all members of @nid's package
+ * @nid: Any NUMA node in the package.
+ * @out: Output nodemask to receive all members.
+ *
+ * Return: 0 on success; negative errno if @nid is invalid or not initialized.
+ */
+int mp_get_package_nodes(int nid, nodemask_t *out)
+{
+	return __mp_get_package_nodemask(nid, MP_NODES_ALL, out);
+}
+EXPORT_SYMBOL_GPL(mp_get_package_nodes);
+
+/**
+ * mp_get_package_cpu_nodes - Return CPU members of @nid's package
+ * @nid: Any NUMA node in the package.
+ * @out: Output nodemask to receive CPU members.
+ *
+ * Return: 0 on success; negative errno if @nid is invalid or not initialized.
+ */
+int mp_get_package_cpu_nodes(int nid, nodemask_t *out)
+{
+	return __mp_get_package_nodemask(nid, MP_NODES_CPU, out);
+}
+EXPORT_SYMBOL_GPL(mp_get_package_cpu_nodes);
+
+/**
+ * mp_get_package_memory_only_nodes - Return memory-only members of @nid's package
+ * @nid: Any NUMA node in the package.
+ * @out: Output nodemask to receive memory-only members.
+ *
+ * Return: 0 on success; negative errno if @nid is invalid or not initialized.
+ */
+int mp_get_package_memory_only_nodes(int nid, nodemask_t *out)
+{
+	return __mp_get_package_nodemask(nid, MP_NODES_MEM_ONLY, out);
+}
+EXPORT_SYMBOL_GPL(mp_get_package_memory_only_nodes);
+
+static int __meminit mp_hotplug_callback(struct notifier_block *nb,
+					 unsigned long action, void *_arg)
+{
+	int nid;
+	struct node_notify *nn = _arg;
+
+	nid = nn->nid;
+	if (nid < 0)
+		return notifier_from_errno(0);
+
+	switch (action) {
+	case NODE_REMOVED_LAST_MEMORY:
+		destroy_package_node(nid);
+		break;
+
+	case NODE_ADDED_FIRST_MEMORY:
+		create_node_with_package(nid);
+		break;
+
+	default:
+		break;
+	}
+
+	return notifier_from_errno(0);
+}
+
+static int __init memory_package_init(void)
+{
+	int ret = 0, nid;
+
+	for_each_online_node(nid) {
+		if (!node_state(nid, N_MEMORY))
+			continue;
+
+		/*
+		 * On boot, enumerate already-present NUMA nodes and build the
+		 * initial package topology. CPU nodes are the common case,
+		 * but memory-only nodes are handled as well.
+		 */
+		ret = create_node_with_package(nid);
+		if (ret) {
+			pr_err("create nid(%d) failed: %d\n", nid, ret);
+			goto out;
+		}
+	}
+
+	hotplug_node_notifier(mp_hotplug_callback, MEMTIER_HOTPLUG_PRI);
+
+out:
+	return ret;
+}
+late_initcall(memory_package_init);
-- 
2.34.1

From nobody Tue Apr 7 04:36:16 2026
From: Rakie Kim <rakie.kim@sk.com>
To: akpm@linux-foundation.org
Cc: gourry@gourry.net, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-cxl@vger.kernel.org, ziy@nvidia.com, matthew.brost@intel.com,
	joshua.hahnjy@gmail.com, byungchul@sk.com, ying.huang@linux.alibaba.com,
	apopple@nvidia.com, david@kernel.org, lorenzo.stoakes@oracle.com,
	Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, dave@stgolabs.net,
	jonathan.cameron@huawei.com, dave.jiang@intel.com,
	alison.schofield@intel.com, vishal.l.verma@intel.com,
	ira.weiny@intel.com, dan.j.williams@intel.com, kernel_team@skhynix.com,
	honggyu.kim@sk.com, yunjeong.mun@sk.com, rakie.kim@sk.com
Subject: [RFC PATCH 3/4] mm/memory-tiers: register CXL nodes to socket-aware packages via initiator
Date: Mon, 16 Mar 2026 14:12:51 +0900
Message-ID: <20260316051258.246-4-rakie.kim@sk.com>
In-Reply-To: <20260316051258.246-1-rakie.kim@sk.com>
References: <20260316051258.246-1-rakie.kim@sk.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
CXL memory nodes appear without an explicit socket association.
Relying on plain NUMA distance does not convey which physical package
(CPU socket) they should belong to, which in turn makes locality-aware
placement ambiguous.

This change introduces a registration path that binds a CXL memory node
to a socket-aware "memory package" using an initiator CPU node. The
initiator is the CPU nid that best represents the host-side attachment
of the region (e.g., the CPU closest to the region's target). By using
this nid to resolve the package, the CXL node is grouped with the CPUs
it actually services.

The flow is:
- Determine an initiator CPU nid for the CXL region.
- Register the CXL node with the package layer using that initiator.

This provides a deterministic and topology-consistent way to place CXL
nodes into the correct socket grouping, reducing the risk of inadvertent
cross-socket choices that distance alone cannot prevent.

Signed-off-by: Rakie Kim <rakie.kim@sk.com>
---
 drivers/cxl/core/region.c | 46 +++++++++++++++++++++++++++++++++++++++
 drivers/cxl/cxl.h         |  1 +
 drivers/dax/kmem.c        |  2 ++
 3 files changed, 49 insertions(+)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 5bd1213737fa..2733e0d465cc 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2570,6 +2570,47 @@ static int cxl_region_calculate_adistance(struct notifier_block *nb,
 	return NOTIFY_STOP;
 }
 
+static int cxl_region_find_nearest_node(struct cxl_region *cxlr)
+{
+	struct cxl_region_params *p = &cxlr->params;
+	struct cxl_endpoint_decoder *cxled = NULL;
+	struct cxl_memdev *cxlmd = NULL;
+	int i, numa_node;
+
+	for (i = 0; i < p->nr_targets; i++) {
+		cxled = p->targets[i];
+		cxlmd = cxled_to_memdev(cxled);
+		numa_node = dev_to_node(&cxlmd->dev);
+		if (numa_node != NUMA_NO_NODE)
+			return numa_node;
+	}
+	return NUMA_NO_NODE;
+}
+
+static int cxl_region_add_package_node(struct notifier_block *nb,
+				       unsigned long dax_nid, void *data)
+{
+	int region_nid, nearest_nid, ret;
+	struct cxl_region *cxlr = container_of(nb, struct cxl_region,
+					       package_notifier);
+
+	region_nid = phys_to_target_node(cxlr->params.res->start);
+	if (region_nid != dax_nid)
+		return NOTIFY_DONE;
+
+	nearest_nid = cxl_region_find_nearest_node(cxlr);
+	if (nearest_nid == NUMA_NO_NODE)
+		return NOTIFY_DONE;
+
+	ret = mp_add_package_node_by_initiator(dax_nid, nearest_nid);
+	if (ret) {
+		dev_info(&cxlr->dev, "failed to add package node (%lu), nearest_nid (%d)\n",
+			 dax_nid, nearest_nid);
+		return NOTIFY_DONE;
+	}
+
+	return NOTIFY_OK;
+}
+
 /**
  * devm_cxl_add_region - Adds a region to a decoder
  * @cxlrd: root decoder
@@ -3788,6 +3829,7 @@ static void shutdown_notifiers(void *_cxlr)
 
 	unregister_node_notifier(&cxlr->node_notifier);
 	unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
+	unregister_mp_package_notifier(&cxlr->package_notifier);
 }
 
 static void remove_debugfs(void *dentry)
@@ -3940,6 +3982,10 @@ static int cxl_region_probe(struct device *dev)
 	cxlr->adist_notifier.priority = 100;
 	register_mt_adistance_algorithm(&cxlr->adist_notifier);
 
+	cxlr->package_notifier.notifier_call = cxl_region_add_package_node;
+	cxlr->package_notifier.priority = 100;
+	register_mp_package_notifier(&cxlr->package_notifier);
+
 	rc = devm_add_action_or_reset(&cxlr->dev, shutdown_notifiers, cxlr);
 	if (rc)
 		return rc;
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index ba17fa86d249..6b6653e31135 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -551,6 +551,7 @@ struct cxl_region {
 	struct access_coordinate coord[ACCESS_COORDINATE_MAX];
 	struct notifier_block node_notifier;
 	struct notifier_block adist_notifier;
+	struct notifier_block package_notifier;
 };
 
 struct cxl_nvdimm_bridge {
diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index c036e4d0b610..32ee66b82cd3 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -94,6 +94,8 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
 	if (IS_ERR(mtype))
 		return PTR_ERR(mtype);
 
+	mp_probe_package_id(numa_node);
+
 	for (i = 0; i < dev_dax->nr_range; i++) {
 		struct range range;
 
-- 
2.34.1

From nobody Tue Apr 7 04:36:16 2026
From: Rakie Kim <rakie.kim@sk.com>
To: akpm@linux-foundation.org
Cc: gourry@gourry.net, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-cxl@vger.kernel.org, ziy@nvidia.com, matthew.brost@intel.com,
	joshua.hahnjy@gmail.com, byungchul@sk.com, ying.huang@linux.alibaba.com,
	apopple@nvidia.com, david@kernel.org, lorenzo.stoakes@oracle.com,
	Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, dave@stgolabs.net,
	jonathan.cameron@huawei.com, dave.jiang@intel.com,
	alison.schofield@intel.com, vishal.l.verma@intel.com,
	ira.weiny@intel.com, dan.j.williams@intel.com, kernel_team@skhynix.com,
	honggyu.kim@sk.com, yunjeong.mun@sk.com, rakie.kim@sk.com
Subject: [RFC PATCH 4/4] mm/mempolicy: enhance weighted interleave with socket-aware locality
Date: Mon, 16 Mar 2026 14:12:52 +0900
Message-ID: <20260316051258.246-5-rakie.kim@sk.com>
In-Reply-To: <20260316051258.246-1-rakie.kim@sk.com>
References: <20260316051258.246-1-rakie.kim@sk.com>
MIME-Version: 1.0
eM9LJo9NqzrZPDZ9msTucWLGbxaPnQ8tPXqb37F5fHx6i8Xj220Pj8UvPjB5TJ1d77F+y1UW jzMLjrAHCEZx2aSk5mSWpRbp2yVwZbx8vZ+p4KB5xfzf81kbGFdpdTFyckgImEh8mfKerYuR g4NNQEni2N4YkLCIgKzE1L/nWboYuTiYBVaySpw/+ZsZpEZYIFbi48QQkBoWAVWJL7MmsoDY vALGElsnTWKDGKkpsW7jLRaQck6g8dsWGIOEhYBK5j35wA5RLihxcuYTsFZmAXmJ5q2zmScw 8sxCkpqFJLWAkWkVo0hmXlluYmaOqV5xdkZlXmaFXnJ+7iZGYIQuq/0zcQfjl8vuhxgFOBiV eHgzDm3LFGJNLCuuzD3EKMHBrCTCu+wIUIg3JbGyKrUoP76oNCe1+BCjNAeLkjivV3hqgpBA emJJanZqakFqEUyWiYNTqoGRZ3dI+3bxaV/7Mz4vfXsj/0LV26Jws7Damw86M8/Hi/opxKhL azRHLbx5JuVD0WORrwsObFZ3rc/p22fUJ3dOcOPPzi1rOW6dn7snUNqwqmsy15e8WNHrM5q0 /m+f5rX/7tL83iyVf1xNXAH9fw9VSOw2n3dReOmOjRMWybjfuRy8U2YqR/MvJZbijERDLeai 4kQAbKmZC8wCAAA= X-CFilter-Loop: Reflected Content-Type: text/plain; charset="utf-8" Flat weighted interleave applies one global weight vector regardless of where a task runs. On multi-socket systems this ignores inter-socket interconnect costs and can steer allocations to remote sockets even when local capacity exists, degrading effective bandwidth and increasing latency. Consider a dual-socket system: node0 node1 +-------+ +-------+ | CPU0 |---------| CPU1 | +-------+ +-------+ | DRAM0 | | DRAM1 | +---+---+ +---+---+ | | +---+---+ +---+---+ | CXL0 | | CXL1 | +-------+ +-------+ node2 node3 Local device capabilities (GB/s) versus cross-socket effective bandwidth: 0 1 2 3 0 300 150 100 50 1 150 300 50 100 A reasonable global weight vector reflecting device capabilities is: node0=3D3 node1=3D3 node2=3D1 node3=3D1 However, applying it flat to all sources yields the effective map: 0 1 2 3 0 3 3 1 1 1 3 3 1 1 This does not account for the interconnect penalty (e.g., node0->node1 drops 300->150, node0->node3 drops 100->50) and thus permits cross-socket allocations that underutilize local bandwidth. This patch makes weighted interleave socket-aware. Before weighting is applied, the candidate nodes are restricted to the current socket; only if no eligible local nodes remain does the policy fall back to the wider set. 
The resulting effective map becomes:

         0    1    2    3
    0    3    0    1    0
    1    0    3    0    1

Now tasks running on node0 prefer DRAM0(3) and CXL0(1), while tasks on
node1 prefer DRAM1(3) and CXL1(1). This aligns allocation with actual
effective bandwidth, preserves NUMA locality, and reduces cross-socket
traffic.

Signed-off-by: Rakie Kim <rakie.kim@sk.com>
---
 mm/mempolicy.c | 94 +++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 90 insertions(+), 4 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a3f0fde6c626..541853ac08bc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -117,6 +117,7 @@
 #include
 #include
 #include
+#include
 
 #include "internal.h"
 
@@ -2134,17 +2135,87 @@ bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
 	return zone >= dynamic_policy_zone;
 }
 
+/**
+ * policy_resolve_package_nodes - Restrict policy nodes to the current package
+ * @policy: Target mempolicy whose user-selected nodes are in @policy->nodes.
+ * @mask: Output nodemask. On success, contains policy->nodes limited to
+ *        the package that should be used for the allocation.
+ *
+ * This helper combines two constraints to decide where within a socket/package
+ * memory may be allocated:
+ *
+ *   1) The caller's package: derived via mp_get_package_nodes(numa_node_id()).
+ *   2) The user's preselected set @policy->nodes (cpusets/mempolicy).
+ *
+ * The function obtains the nodemask of the current CPU's package and
+ * intersects it with @policy->nodes. If the intersection is empty (e.g. the
+ * user excluded every node of the current package), it falls back to the
+ * first node in @policy->nodes, derives that node's package, and intersects
+ * again. If the fallback also yields an empty set, @mask stays empty and a
+ * non-zero error is returned.
+ *
+ * Examples (packages: P0={CPU:0, MEM:2}, P1={CPU:1, MEM:3}):
+ *   - policy->nodes = {0,1,2,3}
+ *       on P0: mask = {0,2}; on P1: mask = {1,3}.
+ *   - policy->nodes = {0,1,3}
+ *       on P0: mask = {0} (only node 0 from P0 is allowed).
+ *   - policy->nodes = {1,2,3}
+ *       on P0: mask = {2} (only node 2 from P0 is allowed).
+ *   - policy->nodes = {1,3}
+ *       on P0: current package (P0) & policy = NULL -> fallback to policy=1,
+ *       package(1)=P1, mask = {1,3}. (User effectively opted out of P0.)
+ *
+ * Return:
+ *   0 on success with @mask set as above;
+ *   -EINVAL if @policy/@mask is NULL;
+ *   Propagated error from mp_get_package_nodes() on failure.
+ */
+static int policy_resolve_package_nodes(struct mempolicy *policy, nodemask_t *mask)
+{
+	int node, ret = 0;
+	nodemask_t package_mask;
+
+	if (!policy || !mask)
+		return -EINVAL;
+
+	nodes_clear(*mask);
+
+	node = numa_node_id();
+	ret = mp_get_package_nodes(node, &package_mask);
+	if (!ret) {
+		nodes_and(*mask, package_mask, policy->nodes);
+
+		if (nodes_empty(*mask)) {
+			node = first_node(policy->nodes);
+			ret = mp_get_package_nodes(node, &package_mask);
+			if (ret)
+				goto out;
+			nodes_and(*mask, package_mask, policy->nodes);
+			if (nodes_empty(*mask))
+				ret = -ENOENT;
+		}
+	}
+
+out:
+	return ret;
+}
+
 static unsigned int weighted_interleave_nodes(struct mempolicy *policy)
 {
 	unsigned int node;
 	unsigned int cpuset_mems_cookie;
+	nodemask_t mask;
 
 retry:
 	/* to prevent miscount use tsk->mems_allowed_seq to detect rebind */
 	cpuset_mems_cookie = read_mems_allowed_begin();
 	node = current->il_prev;
-	if (!current->il_weight || !node_isset(node, policy->nodes)) {
-		node = next_node_in(node, policy->nodes);
+
+	if (policy_resolve_package_nodes(policy, &mask))
+		mask = policy->nodes;
+
+	if (!current->il_weight || !node_isset(node, mask)) {
+		node = next_node_in(node, mask);
 		if (read_mems_allowed_retry(cpuset_mems_cookie))
 			goto retry;
 		if (node == MAX_NUMNODES)
@@ -2237,6 +2308,21 @@ static unsigned int read_once_policy_nodemask(struct mempolicy *pol,
 	return nodes_weight(*mask);
 }
 
+static unsigned int read_once_policy_package_nodemask(struct mempolicy *pol,
+						      nodemask_t *mask)
+{
+	nodemask_t package_mask;
+
+	barrier();
+	if (policy_resolve_package_nodes(pol, &package_mask))
+		memcpy(mask, &pol->nodes, sizeof(nodemask_t));
+	else
+		memcpy(mask, &package_mask, sizeof(nodemask_t));
+	barrier();
+
+	return nodes_weight(*mask);
+}
+
 static unsigned int weighted_interleave_nid(struct mempolicy *pol, pgoff_t ilx)
 {
 	struct weighted_interleave_state *state;
@@ -2247,7 +2333,7 @@ static unsigned int weighted_interleave_nid(struct mempolicy *pol, pgoff_t ilx)
 	u8 weight;
 	int nid = 0;
 
-	nr_nodes = read_once_policy_nodemask(pol, &nodemask);
+	nr_nodes = read_once_policy_package_nodemask(pol, &nodemask);
 	if (!nr_nodes)
 		return numa_node_id();
 
@@ -2691,7 +2777,7 @@ static unsigned long alloc_pages_bulk_weighted_interleave(gfp_t gfp,
 	/* read the nodes onto the stack, retry if done during rebind */
 	do {
 		cpuset_mems_cookie = read_mems_allowed_begin();
-		nnodes = read_once_policy_nodemask(pol, &nodes);
+		nnodes = read_once_policy_package_nodemask(pol, &nodes);
 	} while (read_mems_allowed_retry(cpuset_mems_cookie));
 
 	/* if the nodemask has become invalid, we cannot do anything */
-- 
2.34.1