From: Usama Arif <usama.arif@linux.dev>
To: Andrew Morton, david@kernel.org, Lorenzo Stoakes, willy@infradead.org,
	linux-mm@kvack.org
Cc: fvdl@google.com, hannes@cmpxchg.org, riel@surriel.com,
	shakeel.butt@linux.dev, kas@kernel.org, baohua@kernel.org,
	dev.jain@arm.com, baolin.wang@linux.alibaba.com, npache@redhat.com,
	Liam.Howlett@oracle.com, ryan.roberts@arm.com, Vlastimil Babka,
	lance.yang@linux.dev, linux-kernel@vger.kernel.org, kernel-team@meta.com,
	maddy@linux.ibm.com, mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
	hca@linux.ibm.com, gor@linux.ibm.com, agordeev@linux.ibm.com,
	borntraeger@linux.ibm.com, svens@linux.ibm.com, linux-s390@vger.kernel.org,
	Usama Arif
Subject: [v3 02/24] mm: thp: propagate split failure from vma_adjust_trans_huge()
Date: Thu, 26 Mar 2026 19:08:44 -0700
Message-ID: <20260327021403.214713-3-usama.arif@linux.dev>
In-Reply-To: <20260327021403.214713-1-usama.arif@linux.dev>
References: <20260327021403.214713-1-usama.arif@linux.dev>

With lazy PTE page table allocation, split_huge_pmd_if_needed() and thus
vma_adjust_trans_huge() can now fail: splitting a huge PMD requires an
order-0 page table allocation, which may not succeed. Callers must check
for this failure to avoid leaving a huge PMD straddling a VMA boundary.
The vma_adjust_trans_huge() call is moved before vma_prepare() in all
three callers (__split_vma, vma_shrink, commit_merge). Previously it sat
between vma_prepare() and vma_complete(), where there is no mechanism to
abort: once vma_prepare() has been called, we must reach vma_complete().
By moving the call earlier, a split failure can return -ENOMEM cleanly
without needing to undo VMA preparation.

This move is safe because vma_adjust_trans_huge() acquires its own
pmd_lock() internally and does not depend on any locks or state changes
from vma_prepare(). The VMA boundaries are also unchanged at the new
call site, satisfying __split_huge_pmd_locked()'s requirement that the
VMA covers the full PMD range.

All three callers already return -ENOMEM on other allocation failures
(for example, when vma_iter_prealloc() fails); this change follows the
same pattern.

Signed-off-by: Usama Arif <usama.arif@linux.dev>
---
 include/linux/huge_mm.h           | 13 ++++++-----
 mm/huge_memory.c                  | 21 +++++++++++++-----
 mm/vma.c                          | 37 +++++++++++++++++++++----------
 tools/testing/vma/include/stubs.h |  9 ++++----
 4 files changed, 53 insertions(+), 27 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b081ce044c735..224965fce4e66 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -483,8 +483,8 @@ int hugepage_madvise(struct vm_area_struct *vma, vm_flags_t *vm_flags,
 			int advice);
 int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
 			unsigned long end, bool *lock_dropped);
-void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
-			unsigned long end, struct vm_area_struct *next);
+int vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end, struct vm_area_struct *next);
 spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
 spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma);
 
@@ -685,11 +685,12 @@ static inline int madvise_collapse(struct vm_area_struct *vma,
 	return -EINVAL;
 }
 
-static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
-					 unsigned long start,
-					 unsigned long end,
-					 struct vm_area_struct *next)
+static inline int vma_adjust_trans_huge(struct vm_area_struct *vma,
+					unsigned long start,
+					unsigned long end,
+					struct vm_area_struct *next)
 {
+	return 0;
 }
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 		struct vm_area_struct *vma)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 976a1c74c0870..99f3b8b24c682 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3326,20 +3326,31 @@ static inline int split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned
 	return 0;
 }
 
-void vma_adjust_trans_huge(struct vm_area_struct *vma,
+int vma_adjust_trans_huge(struct vm_area_struct *vma,
 			unsigned long start,
 			unsigned long end,
 			struct vm_area_struct *next)
 {
+	int err;
+
 	/* Check if we need to split start first. */
-	split_huge_pmd_if_needed(vma, start);
+	err = split_huge_pmd_if_needed(vma, start);
+	if (err)
+		return err;
 
 	/* Check if we need to split end next. */
-	split_huge_pmd_if_needed(vma, end);
+	err = split_huge_pmd_if_needed(vma, end);
+	if (err)
+		return err;
 
 	/* If we're incrementing next->vm_start, we might need to split it. */
-	if (next)
-		split_huge_pmd_if_needed(next, end);
+	if (next) {
+		err = split_huge_pmd_if_needed(next, end);
+		if (err)
+			return err;
+	}
+
+	return 0;
 }
 
 static void unmap_folio(struct folio *folio)
diff --git a/mm/vma.c b/mm/vma.c
index a43f3c5d4b3dd..b4a3839a8036e 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -513,6 +513,15 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		return err;
 	}
 
+	/*
+	 * Split any THP straddling the split boundary before splitting
+	 * the VMA itself. Do this before vma_prepare() so we can
+	 * cleanly fail without undoing VMA preparation.
+	 */
+	err = vma_adjust_trans_huge(vma, vma->vm_start, addr, NULL);
+	if (err)
+		return err;
+
 	new = vm_area_dup(vma);
 	if (!new)
 		return -ENOMEM;
@@ -550,11 +559,6 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	vp.insert = new;
 	vma_prepare(&vp);
 
-	/*
-	 * Get rid of huge pages and shared page tables straddling the split
-	 * boundary.
-	 */
-	vma_adjust_trans_huge(vma, vma->vm_start, addr, NULL);
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_split(vma, addr);
 
@@ -732,6 +736,7 @@ static int commit_merge(struct vma_merge_struct *vmg)
 {
 	struct vm_area_struct *vma;
 	struct vma_prepare vp;
+	int err;
 
 	if (vmg->__adjust_next_start) {
 		/* We manipulate middle and adjust next, which is the target. */
@@ -743,6 +748,16 @@ static int commit_merge(struct vma_merge_struct *vmg)
 		vma_iter_config(vmg->vmi, vmg->start, vmg->end);
 	}
 
+	/*
+	 * THP pages may need to do additional splits if we increase
+	 * middle->vm_start. Do this before vma_prepare() so we can
+	 * cleanly fail without undoing VMA preparation.
+	 */
+	err = vma_adjust_trans_huge(vma, vmg->start, vmg->end,
+			vmg->__adjust_middle_start ? vmg->middle : NULL);
+	if (err)
+		return err;
+
 	init_multi_vma_prep(&vp, vma, vmg);
 
 	/*
@@ -755,12 +770,6 @@ static int commit_merge(struct vma_merge_struct *vmg)
 		return -ENOMEM;
 
 	vma_prepare(&vp);
-	/*
-	 * THP pages may need to do additional splits if we increase
-	 * middle->vm_start.
-	 */
-	vma_adjust_trans_huge(vma, vmg->start, vmg->end,
-			vmg->__adjust_middle_start ? vmg->middle : NULL);
 	vma_set_range(vma, vmg->start, vmg->end, vmg->pgoff);
 	vmg_adjust_set_range(vmg);
 	vma_iter_store_overwrite(vmg->vmi, vmg->target);
@@ -1248,9 +1257,14 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	       unsigned long start, unsigned long end, pgoff_t pgoff)
 {
 	struct vma_prepare vp;
+	int err;
 
 	WARN_ON((vma->vm_start != start) && (vma->vm_end != end));
 
+	err = vma_adjust_trans_huge(vma, start, end, NULL);
+	if (err)
+		return err;
+
 	if (vma->vm_start < start)
 		vma_iter_config(vmi, vma->vm_start, start);
 	else
@@ -1263,7 +1277,6 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	init_vma_prep(&vp, vma);
 	vma_prepare(&vp);
-	vma_adjust_trans_huge(vma, start, end, NULL);
 
 	vma_iter_clear(vmi);
 	vma_set_range(vma, start, end, pgoff);
diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/stubs.h
index a30b8bc849557..952e3cc88ef10 100644
--- a/tools/testing/vma/include/stubs.h
+++ b/tools/testing/vma/include/stubs.h
@@ -419,11 +419,12 @@ static inline int vma_dup_policy(struct vm_area_struct *src, struct vm_area_stru
 	return 0;
 }
 
-static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
-					 unsigned long start,
-					 unsigned long end,
-					 struct vm_area_struct *next)
+static inline int vma_adjust_trans_huge(struct vm_area_struct *vma,
+					unsigned long start,
+					unsigned long end,
+					struct vm_area_struct *next)
 {
+	return 0;
 }
 
 static inline void hugetlb_split(struct vm_area_struct *, unsigned long) {}
-- 
2.52.0