From: Donet Tom
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Aneesh Kumar, Huang Ying, Michal Hocko, Dave Hansen, Mel Gorman,
    Feng Tang, Andrea Arcangeli, Peter Zijlstra, Ingo Molnar,
    Rik van Riel, Johannes Weiner, Matthew Wilcox, Vlastimil Babka,
    Dan Williams, Hugh Dickins, Kefeng Wang, Suren Baghdasaryan,
    Donet Tom
Subject: [PATCH v4 1/2] mm/mempolicy: Use numa_node_id() instead of cpu_to_node()
Date: Mon, 25 Mar 2024 09:24:13 -0500
Message-Id: <6059f034f436734b472d066db69676fb3a459864.1711373653.git.donettom@linux.ibm.com>

Instead of using cpu_to_node(), use numa_node_id(), which is faster.
The smp processor id is guaranteed to be stable in mpol_misplaced()
because the function is called with the ptl held, so the task cannot
be preempted; a lockdep_assert_held() is added to enforce that.

No functional change in this patch.
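For context, a minimal sketch of why the substitution is cheaper,
simplified from the generic per-CPU definitions (see
include/linux/topology.h under CONFIG_USE_PERCPU_NUMA_NODE_ID; the
exact definitions are config- and architecture-dependent):

	/* Simplified sketch, not the literal kernel source. */
	DECLARE_PER_CPU(int, numa_node);

	static inline int numa_node_id(void)
	{
		/* One direct read of the current CPU's cached node id. */
		return raw_cpu_read(numa_node);
	}

	static inline int cpu_to_node(int cpu)
	{
		/* Must first materialize a CPU number, then index into
		 * that CPU's per-CPU area. */
		return per_cpu(numa_node, cpu);
	}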
Signed-off-by: Aneesh Kumar K.V (IBM)
Signed-off-by: Donet Tom
Reviewed-by: "Huang, Ying"
---
 include/linux/mempolicy.h |  5 +++--
 mm/huge_memory.c          |  2 +-
 mm/internal.h             |  2 +-
 mm/memory.c               |  8 +++++---
 mm/mempolicy.c            | 14 ++++++++++----
 5 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 931b118336f4..1add16f21612 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -167,7 +167,8 @@ extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 /* Check if a vma is migratable */
 extern bool vma_migratable(struct vm_area_struct *vma);
 
-int mpol_misplaced(struct folio *, struct vm_area_struct *, unsigned long);
+int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
+		   unsigned long addr);
 extern void mpol_put_task_policy(struct task_struct *);
 
 static inline bool mpol_is_preferred_many(struct mempolicy *pol)
@@ -282,7 +283,7 @@ static inline int mpol_parse_str(char *str, struct mempolicy **mpol)
 #endif
 
 static inline int mpol_misplaced(struct folio *folio,
-				 struct vm_area_struct *vma,
+				 struct vm_fault *vmf,
 				 unsigned long address)
 {
 	return -1; /* no node preference */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..b40bd9f3ead5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1754,7 +1754,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	 */
 	if (node_is_toptier(nid))
 		last_cpupid = folio_last_cpupid(folio);
-	target_nid = numa_migrate_prep(folio, vma, haddr, nid, &flags);
+	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
 		folio_put(folio);
 		goto out_map;
diff --git a/mm/internal.h b/mm/internal.h
index 7e486f2c502c..e0001c681c56 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1088,7 +1088,7 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
 
 void __vunmap_range_noflush(unsigned long start, unsigned long end);
 
-int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma,
+int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
 		      unsigned long addr, int page_nid, int *flags);
 
 void free_zone_device_page(struct page *page);
diff --git a/mm/memory.c b/mm/memory.c
index f2bc6dd15eb8..29e240978f45 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5033,9 +5033,11 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma,
+int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
 		      unsigned long addr, int page_nid, int *flags)
 {
+	struct vm_area_struct *vma = vmf->vma;
+
 	folio_get(folio);
 
 	/* Record the current PID acceesing VMA */
@@ -5047,7 +5049,7 @@ int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma,
 		*flags |= TNF_FAULT_LOCAL;
 	}
 
-	return mpol_misplaced(folio, vma, addr);
+	return mpol_misplaced(folio, vmf, addr);
 }
 
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
@@ -5121,7 +5123,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		last_cpupid = (-1 & LAST_CPUPID_MASK);
 	else
 		last_cpupid = folio_last_cpupid(folio);
-	target_nid = numa_migrate_prep(folio, vma, vmf->address, nid, &flags);
+	target_nid = numa_migrate_prep(folio, vmf, vmf->address, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
 		folio_put(folio);
 		goto out_map;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0fe77738d971..aa48376e2d34 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2718,7 +2718,7 @@ static void sp_free(struct sp_node *n)
  * mpol_misplaced - check whether current folio node is valid in policy
  *
  * @folio: folio to be checked
- * @vma: vm area where folio mapped
+ * @vmf: structure describing the fault
  * @addr: virtual address in @vma for shared policy lookup and interleave policy
  *
  * Lookup current policy node id for vma,addr and "compare to" folio's
@@ -2728,18 +2728,24 @@ static void sp_free(struct sp_node *n)
  * Return: NUMA_NO_NODE if the page is in a node that is valid for this
  * policy, or a suitable node ID to allocate a replacement folio from.
  */
-int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
+int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
 		   unsigned long addr)
 {
 	struct mempolicy *pol;
 	pgoff_t ilx;
 	struct zoneref *z;
 	int curnid = folio_nid(folio);
+	struct vm_area_struct *vma = vmf->vma;
 	int thiscpu = raw_smp_processor_id();
-	int thisnid = cpu_to_node(thiscpu);
+	int thisnid = numa_node_id();
 	int polnid = NUMA_NO_NODE;
 	int ret = NUMA_NO_NODE;
 
+	/*
+	 * Make sure ptl is held so that we don't preempt and we
+	 * have a stable smp processor id
+	 */
+	lockdep_assert_held(vmf->ptl);
 	pol = get_vma_policy(vma, addr, folio_order(folio), &ilx);
 	if (!(pol->flags & MPOL_F_MOF))
 		goto out;
@@ -2781,7 +2787,7 @@ int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
 	if (node_isset(curnid, pol->nodes))
 		goto out;
 	z = first_zones_zonelist(
-			node_zonelist(numa_node_id(), GFP_HIGHUSER),
+			node_zonelist(thisnid, GFP_HIGHUSER),
 			gfp_zone(GFP_HIGHUSER),
 			&pol->nodes);
 	polnid = zone_to_nid(z->zone);
-- 
2.39.3

From: Donet Tom
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Aneesh Kumar, Huang Ying, Michal Hocko, Dave Hansen, Mel Gorman,
    Feng Tang, Andrea Arcangeli, Peter Zijlstra, Ingo Molnar,
    Rik van Riel, Johannes Weiner, Matthew Wilcox, Vlastimil Babka,
    Dan Williams, Hugh Dickins, Kefeng Wang, Suren Baghdasaryan,
    Donet Tom
Subject: [PATCH v4 2/2] mm/numa_balancing: Allow migrate on protnone reference with MPOL_PREFERRED_MANY policy
Date: Mon, 25 Mar 2024 09:24:14 -0500
Message-Id: <158acc57319129aa46d50fd64c9330f3e7c7b4bf.1711373653.git.donettom@linux.ibm.com>
commit bda420b98505 ("numa balancing: migrate on fault among multiple
bound nodes") added support for migrate-on-protnone-reference with the
MPOL_BIND memory policy. This allowed numa fault migration when the
executing node is part of the policy mask for MPOL_BIND. This patch
extends migration support to the MPOL_PREFERRED_MANY policy.

Currently, we cannot specify MPOL_PREFERRED_MANY with the mempolicy
flag MPOL_F_NUMA_BALANCING. This causes issues when we want to use
NUMA_BALANCING_MEMORY_TIERING. To effectively use the slow memory
tier, the kernel should not allocate pages from the slower memory tier
via allocation-control zonelist fallback. Instead, we should move cold
pages from the faster memory node via memory demotion. For a page
allocation, kswapd is only woken up after we try to allocate pages
from all nodes in the allocation zonelist. This implies that, without
using memory policies, we will end up allocating hot pages in the
slower memory tier.

MPOL_PREFERRED_MANY was added by commit b27abaccf8e8 ("mm/mempolicy:
add MPOL_PREFERRED_MANY for multiple preferred nodes") to allow better
allocation control when we have memory tiers in the system. With
MPOL_PREFERRED_MANY, the user can use a policy node mask consisting
only of faster memory nodes. When we fail to allocate pages from the
faster memory nodes, kswapd is woken up, allowing demotion of cold
pages to slower memory nodes.

With the current kernel, such usage of memory policies implies we
cannot do page promotion from a slower memory tier to a faster memory
tier using numa fault. This patch fixes this issue.

For MPOL_PREFERRED_MANY, if the executing node is in the policy node
mask, we allow numa migration to the executing node. If the executing
node is not in the policy node mask, we do not allow numa migration.

Example: on a 2-socket system, NUMA nodes N0, N1 and N2 are in
socket 0 and N3 is in socket 1. N0, N1 and N3 have fast memory and
CPUs, while N2 has slow memory and no CPUs. For a workload, we may use
MPOL_PREFERRED_MANY with nodemask N0 and N1 set, because the workload
runs on the CPUs of socket 0 most of the time. Then, even if the
workload occasionally runs on the CPUs of N3, we will not try to
migrate the workload's pages from N2 to N3, because users may want to
avoid cross-socket access as much as possible in the long term.
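For illustration, a minimal userspace sketch of the usage this patch
enables (set_mempolicy(2) and both constants are from the real UAPI in
<linux/mempolicy.h>; the fallback #defines are only for illustration,
and before this patch the call fails with -EINVAL):

	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef MPOL_PREFERRED_MANY
	#define MPOL_PREFERRED_MANY	5		/* <linux/mempolicy.h> */
	#endif
	#ifndef MPOL_F_NUMA_BALANCING
	#define MPOL_F_NUMA_BALANCING	(1 << 13)	/* mode flag, v5.12+ */
	#endif

	int main(void)
	{
		/* Prefer the fast-memory nodes N0 and N1. */
		unsigned long nodemask = (1UL << 0) | (1UL << 1);

		if (syscall(SYS_set_mempolicy,
			    MPOL_PREFERRED_MANY | MPOL_F_NUMA_BALANCING,
			    &nodemask, sizeof(nodemask) * 8 + 1)) {
			perror("set_mempolicy");
			return 1;
		}
		/* ... run the workload; hot pages demoted to a slow node
		 * may now be promoted back via numa fault ... */
		return 0;
	}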
In the table below, Process is the node the process is executing on,
and Curr Loc Pgs is the NUMA node where the page currently resides
(the folio node).

===========================================================
Process   Policy    Curr Loc Pgs   Observation
-----------------------------------------------------------
N0        N0 N1     N1             Pages Migrated from N1 to N0
N0        N0 N1     N2             Pages Migrated from N2 to N0
N0        N0 N1     N3             Pages Migrated from N3 to N0
N3        N0 N1     N0             Pages NOT Migrated to N3
N3        N0 N1     N1             Pages NOT Migrated to N3
N3        N0 N1     N2             Pages NOT Migrated to N3
-----------------------------------------------------------

Signed-off-by: Aneesh Kumar K.V (IBM)
Signed-off-by: Donet Tom
Reviewed-by: "Huang, Ying"
---
 mm/mempolicy.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index aa48376e2d34..13100a290918 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1504,9 +1504,10 @@ static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
 	if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
 		return -EINVAL;
 	if (*flags & MPOL_F_NUMA_BALANCING) {
-		if (*mode != MPOL_BIND)
+		if (*mode == MPOL_BIND || *mode == MPOL_PREFERRED_MANY)
+			*flags |= (MPOL_F_MOF | MPOL_F_MORON);
+		else
 			return -EINVAL;
-		*flags |= (MPOL_F_MOF | MPOL_F_MORON);
 	}
 	return 0;
 }
@@ -2770,15 +2771,26 @@ int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
 		break;
 
 	case MPOL_BIND:
-		/* Optimize placement among multiple nodes via NUMA balancing */
+	case MPOL_PREFERRED_MANY:
+		/*
+		 * Even though MPOL_PREFERRED_MANY can allocate pages outside
+		 * policy nodemask we don't allow numa migration to nodes
+		 * outside policy nodemask for now. This is done so that if we
+		 * want demotion to slow memory to happen, before allocating
+		 * from some DRAM node say 'x', we will end up using a
+		 * MPOL_PREFERRED_MANY mask excluding node 'x'. In such scenario
+		 * we should not promote to node 'x' from slow memory node.
+		 */
 		if (pol->flags & MPOL_F_MORON) {
+			/*
+			 * Optimize placement among multiple nodes
+			 * via NUMA balancing
+			 */
 			if (node_isset(thisnid, pol->nodes))
 				break;
 			goto out;
 		}
-		fallthrough;
 
-	case MPOL_PREFERRED_MANY:
 		/*
 		 * use current page if in policy nodemask,
 		 * else select nearest allowed node, if any.
-- 
2.39.3
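As a standalone model of the rule the table above encodes (a
hypothetical userspace check, not kernel code): with MPOL_F_MORON set,
a page is migrated to the executing node only when that node is in the
policy nodemask.

	#include <assert.h>
	#include <stdbool.h>

	/* Hypothetical model of the new MPOL_PREFERRED_MANY rule:
	 * promote to the executing node only if it is in the mask. */
	static bool migrate_on_fault_allowed(unsigned long policy_nodes,
					     int exec_node)
	{
		return policy_nodes & (1UL << exec_node);
	}

	int main(void)
	{
		unsigned long pol = (1UL << 0) | (1UL << 1); /* {N0, N1} */

		assert(migrate_on_fault_allowed(pol, 0));  /* rows 1-3 */
		assert(!migrate_on_fault_allowed(pol, 3)); /* rows 4-6 */
		return 0;
	}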