From nobody Fri Jan  2 20:32:58 2026
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Alexander Viro, Christian Brauner
Cc: Liam R. Howlett, Vlastimil Babka, linux-fsdevel@vger.kernel.org,
    Lorenzo Stoakes
Subject: [PATCH 1/4] mm: abstract the vma_merge()/split_vma() pattern for
 mprotect() et al.
Date: Sun, 8 Oct 2023 21:23:13 +0100
X-Mailer: git-send-email 2.42.0

mprotect() and other functions which change VMA parameters over a range
each employ a pattern of:

1. Attempt to merge the range with adjacent VMAs.
2. If this fails, and the range spans a subset of the VMA, split it
   accordingly.

This is open-coded and duplicated in each case. Also, in each case, most
of the parameters passed to vma_merge() remain the same.
Create a new static function, vma_modify(), which abstracts this
operation, accepting only those parameters which can be changed.

To avoid the mess of invoking each function call with unnecessary
parameters, create wrapper functions for each of the modify operations,
parameterised only by what is required to perform the action.

Note that the userfaultfd_release() case works even though it does not
split VMAs - since start is set to vma->vm_start and end is set to
vma->vm_end, the split logic does not trigger.

In addition, since we calculate pgoff to be equal to
vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT), and
start - vma->vm_start will be 0 in this instance, this invocation will
remain unchanged.

Signed-off-by: Lorenzo Stoakes
---
 fs/userfaultfd.c   | 53 +++++++++-----------------
 include/linux/mm.h | 23 ++++++++++++
 mm/madvise.c       | 25 ++++---------
 mm/mempolicy.c     | 20 ++--------
 mm/mlock.c         | 24 ++++--------
 mm/mmap.c          | 93 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/mprotect.c      | 27 ++++----------
 7 files changed, 157 insertions(+), 108 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index a7c6ef764e63..9e5232d23927 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -927,11 +927,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
 			continue;
 		}
 		new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
-		prev = vma_merge(&vmi, mm, prev, vma->vm_start, vma->vm_end,
-				 new_flags, vma->anon_vma,
-				 vma->vm_file, vma->vm_pgoff,
-				 vma_policy(vma),
-				 NULL_VM_UFFD_CTX, anon_vma_name(vma));
+		prev = vma_modify_uffd(&vmi, prev, vma, vma->vm_start,
+				       vma->vm_end, new_flags,
+				       NULL_VM_UFFD_CTX);
+
 		if (prev) {
 			vma = prev;
 		} else {
@@ -1331,7 +1330,6 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 	unsigned long start, end, vma_end;
 	struct vma_iterator vmi;
 	bool wp_async = userfaultfd_wp_async_ctx(ctx);
-	pgoff_t pgoff;
 
 	user_uffdio_register = (struct uffdio_register __user *) arg;
 
@@ -1484,26 +1482,18 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		vma_end = min(end, vma->vm_end);
 
 		new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
-		pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
-		prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
-				 vma->anon_vma, vma->vm_file, pgoff,
-				 vma_policy(vma),
-				 ((struct vm_userfaultfd_ctx){ ctx }),
-				 anon_vma_name(vma));
+		prev = vma_modify_uffd(&vmi, prev, vma, start, vma_end,
+				       new_flags,
+				       ((struct vm_userfaultfd_ctx){ ctx }));
 		if (prev) {
 			/* vma_merge() invalidated the mas */
 			vma = prev;
 			goto next;
 		}
-		if (vma->vm_start < start) {
-			ret = split_vma(&vmi, vma, start, 1);
-			if (ret)
-				break;
-		}
-		if (vma->vm_end > end) {
-			ret = split_vma(&vmi, vma, end, 0);
-			if (ret)
-				break;
+
+		if (IS_ERR(prev)) {
+			ret = PTR_ERR(prev);
+			break;
 		}
 	next:
 		/*
@@ -1568,7 +1558,6 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 	const void __user *buf = (void __user *)arg;
 	struct vma_iterator vmi;
 	bool wp_async = userfaultfd_wp_async_ctx(ctx);
-	pgoff_t pgoff;
 
 	ret = -EFAULT;
 	if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister)))
@@ -1671,24 +1660,16 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		uffd_wp_range(vma, start, vma_end - start, false);
 
 		new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
-		pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
-		prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
-				 vma->anon_vma, vma->vm_file, pgoff,
-				 vma_policy(vma),
-				 NULL_VM_UFFD_CTX, anon_vma_name(vma));
+		prev = vma_modify_uffd(&vmi, prev, vma, start, vma_end,
+				       new_flags, NULL_VM_UFFD_CTX);
 		if (prev) {
 			vma = prev;
 			goto next;
 		}
-		if (vma->vm_start < start) {
-			ret = split_vma(&vmi, vma, start, 1);
-			if (ret)
-				break;
-		}
-		if (vma->vm_end > end) {
-			ret = split_vma(&vmi, vma, end, 0);
-			if (ret)
-				break;
+
+		if (IS_ERR(prev)) {
+			ret = PTR_ERR(prev);
+			break;
 		}
 	next:
 		/*
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7b667786cde..c069813f215f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3253,6 +3253,29 @@ extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
 	unsigned long addr, unsigned long len, pgoff_t pgoff,
 	bool *need_rmap_locks);
 extern void exit_mmap(struct mm_struct *);
+struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
+					struct vm_area_struct *prev,
+					struct vm_area_struct *vma,
+					unsigned long start, unsigned long end,
+					unsigned long new_flags);
+struct vm_area_struct *vma_modify_flags_name(struct vma_iterator *vmi,
+					     struct vm_area_struct *prev,
+					     struct vm_area_struct *vma,
+					     unsigned long start,
+					     unsigned long end,
+					     unsigned long new_flags,
+					     struct anon_vma_name *new_name);
+struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
+					 struct vm_area_struct *prev,
+					 struct vm_area_struct *vma,
+					 unsigned long start, unsigned long end,
+					 struct mempolicy *new_pol);
+struct vm_area_struct *vma_modify_uffd(struct vma_iterator *vmi,
+				       struct vm_area_struct *prev,
+				       struct vm_area_struct *vma,
+				       unsigned long start, unsigned long end,
+				       unsigned long new_flags,
+				       struct vm_userfaultfd_ctx new_ctx);
 
 static inline int check_data_rlimit(unsigned long rlim,
 				    unsigned long new,
diff --git a/mm/madvise.c b/mm/madvise.c
index a4a20de50494..73024693d5c8 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -141,7 +141,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	int error;
-	pgoff_t pgoff;
+	struct vm_area_struct *merged;
 	VMA_ITERATOR(vmi, mm, start);
 
 	if (new_flags == vma->vm_flags && anon_vma_name_eq(anon_vma_name(vma), anon_name)) {
@@ -149,28 +149,17 @@ static int madvise_update_vma(struct vm_area_struct *vma,
 		return 0;
 	}
 
-	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
-	*prev = vma_merge(&vmi, mm, *prev, start, end, new_flags,
-			  vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
-			  vma->vm_userfaultfd_ctx, anon_name);
-	if (*prev) {
-		vma = *prev;
+	merged = vma_modify_flags_name(&vmi, *prev, vma, start, end, new_flags,
+				       anon_name);
+	if (merged) {
+		vma = *prev = merged;
 		goto success;
 	}
 
 	*prev = vma;
 
-	if (start != vma->vm_start) {
-		error = split_vma(&vmi, vma, start, 1);
-		if (error)
-			return error;
-	}
-
-	if (end != vma->vm_end) {
-		error = split_vma(&vmi, vma, end, 0);
-		if (error)
-			return error;
-	}
+	if (IS_ERR(merged))
+		return PTR_ERR(merged);
 
 success:
 	/* vm_flags is protected by the mmap_lock held in write mode. */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b01922e88548..b608b1744197 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -786,8 +786,6 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
 {
 	struct vm_area_struct *merged;
 	unsigned long vmstart, vmend;
-	pgoff_t pgoff;
-	int err;
 
 	vmend = min(end, vma->vm_end);
 	if (start > vma->vm_start) {
@@ -802,26 +800,14 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		return 0;
 	}
 
-	pgoff = vma->vm_pgoff + ((vmstart - vma->vm_start) >> PAGE_SHIFT);
-	merged = vma_merge(vmi, vma->vm_mm, *prev, vmstart, vmend, vma->vm_flags,
-			   vma->anon_vma, vma->vm_file, pgoff, new_pol,
-			   vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+	merged = vma_modify_policy(vmi, *prev, vma, vmstart, vmend, new_pol);
 	if (merged) {
 		*prev = merged;
 		return vma_replace_policy(merged, new_pol);
 	}
 
-	if (vma->vm_start != vmstart) {
-		err = split_vma(vmi, vma, vmstart, 1);
-		if (err)
-			return err;
-	}
-
-	if (vma->vm_end != vmend) {
-		err = split_vma(vmi, vma, vmend, 0);
-		if (err)
-			return err;
-	}
+	if (IS_ERR(merged))
+		return PTR_ERR(merged);
 
 	*prev = vma;
 	return vma_replace_policy(vma, new_pol);
diff --git a/mm/mlock.c b/mm/mlock.c
index 42b6865f8f82..50ebea3b7885 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -476,10 +476,10 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	       unsigned long end, vm_flags_t newflags)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	pgoff_t pgoff;
 	int nr_pages;
 	int ret = 0;
 	vm_flags_t oldflags = vma->vm_flags;
+	struct vm_area_struct *merged;
 
 	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
 	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
@@ -487,25 +487,15 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
 
-	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
-	*prev = vma_merge(vmi, mm, *prev, start, end, newflags,
-			vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
-			vma->vm_userfaultfd_ctx, anon_vma_name(vma));
-	if (*prev) {
-		vma = *prev;
+	merged = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
+	if (merged) {
+		vma = *prev = merged;
 		goto success;
 	}
 
-	if (start != vma->vm_start) {
-		ret = split_vma(vmi, vma, start, 1);
-		if (ret)
-			goto out;
-	}
-
-	if (end != vma->vm_end) {
-		ret = split_vma(vmi, vma, end, 0);
-		if (ret)
-			goto out;
+	if (IS_ERR(merged)) {
+		ret = PTR_ERR(merged);
+		goto out;
 	}
 
 success:
diff --git a/mm/mmap.c b/mm/mmap.c
index 673429ee8a9e..8c21171b431f 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2437,6 +2437,99 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	return __split_vma(vmi, vma, addr, new_below);
 }
 
+/*
+ * We are about to modify one or multiple of a VMA's flags, policy, userfaultfd
+ * context and anonymous VMA name within the range [start, end).
+ *
+ * As a result, we might be able to merge the newly modified VMA range with an
+ * adjacent VMA with identical properties.
+ *
+ * If no merge is possible and the range does not span the entirety of the VMA,
+ * we then need to split the VMA to accommodate the change.
+ */
+static struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
+					 struct vm_area_struct *prev,
+					 struct vm_area_struct *vma,
+					 unsigned long start, unsigned long end,
+					 unsigned long vm_flags,
+					 struct mempolicy *policy,
+					 struct vm_userfaultfd_ctx uffd_ctx,
+					 struct anon_vma_name *anon_name)
+{
+	pgoff_t pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
+	struct vm_area_struct *merged;
+
+	merged = vma_merge(vmi, vma->vm_mm, prev, start, end, vm_flags,
+			   vma->anon_vma, vma->vm_file, pgoff, policy,
+			   uffd_ctx, anon_name);
+	if (merged)
+		return merged;
+
+	if (vma->vm_start < start) {
+		int err = split_vma(vmi, vma, start, 1);
+
+		if (err)
+			return ERR_PTR(err);
+	}
+
+	if (vma->vm_end > end) {
+		int err = split_vma(vmi, vma, end, 0);
+
+		if (err)
+			return ERR_PTR(err);
+	}
+
+	return NULL;
+}
+
+/* We are about to modify the VMA's flags. */
+struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
+					struct vm_area_struct *prev,
+					struct vm_area_struct *vma,
+					unsigned long start, unsigned long end,
+					unsigned long new_flags)
+{
+	return vma_modify(vmi, prev, vma, start, end, new_flags,
+			  vma_policy(vma), vma->vm_userfaultfd_ctx,
+			  anon_vma_name(vma));
+}
+
+/* We are about to modify the VMA's flags and/or anon_name. */
+struct vm_area_struct *vma_modify_flags_name(struct vma_iterator *vmi,
+					     struct vm_area_struct *prev,
+					     struct vm_area_struct *vma,
+					     unsigned long start,
+					     unsigned long end,
+					     unsigned long new_flags,
+					     struct anon_vma_name *new_name)
+{
+	return vma_modify(vmi, prev, vma, start, end, new_flags,
+			  vma_policy(vma), vma->vm_userfaultfd_ctx, new_name);
+}
+
+/* We are about to modify the VMA's memory policy. */
+struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
+					 struct vm_area_struct *prev,
+					 struct vm_area_struct *vma,
+					 unsigned long start, unsigned long end,
+					 struct mempolicy *new_pol)
+{
+	return vma_modify(vmi, prev, vma, start, end, vma->vm_flags,
+			  new_pol, vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+}
+
+/* We are about to modify the VMA's uffd context and/or flags. */
+struct vm_area_struct *vma_modify_uffd(struct vma_iterator *vmi,
+				       struct vm_area_struct *prev,
+				       struct vm_area_struct *vma,
+				       unsigned long start, unsigned long end,
+				       unsigned long new_flags,
+				       struct vm_userfaultfd_ctx new_ctx)
+{
+	return vma_modify(vmi, prev, vma, start, end, new_flags,
+			  vma_policy(vma), new_ctx, anon_vma_name(vma));
+}
+
 /*
  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
  * @vmi: The vma iterator
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b94fbb45d5c7..fdc94453bced 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -581,7 +581,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	long nrpages = (end - start) >> PAGE_SHIFT;
 	unsigned int mm_cp_flags = 0;
 	unsigned long charged = 0;
-	pgoff_t pgoff;
+	struct vm_area_struct *merged;
 	int error;
 
 	if (newflags == oldflags) {
@@ -625,31 +625,18 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 		}
 	}
 
-	/*
-	 * First try to merge with previous and/or next vma.
-	 */
-	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
-	*pprev = vma_merge(vmi, mm, *pprev, start, end, newflags,
-			   vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
-			   vma->vm_userfaultfd_ctx, anon_vma_name(vma));
-	if (*pprev) {
-		vma = *pprev;
+	merged = vma_modify_flags(vmi, *pprev, vma, start, end, newflags);
+	if (merged) {
+		vma = *pprev = merged;
 		VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
 		goto success;
 	}
 
 	*pprev = vma;
 
-	if (start != vma->vm_start) {
-		error = split_vma(vmi, vma, start, 1);
-		if (error)
-			goto fail;
-	}
-
-	if (end != vma->vm_end) {
-		error = split_vma(vmi, vma, end, 0);
-		if (error)
-			goto fail;
+	if (IS_ERR(merged)) {
+		error = PTR_ERR(merged);
+		goto fail;
 	}
 
 success:
-- 
2.42.0
From nobody Fri Jan  2 20:32:58 2026
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Alexander Viro, Christian Brauner
Cc: Liam R. Howlett, Vlastimil Babka, linux-fsdevel@vger.kernel.org,
    Lorenzo Stoakes
Subject: [PATCH 2/4] mm: make vma_merge() and split_vma() internal
Date: Sun, 8 Oct 2023 21:23:14 +0100
Message-ID: <6237f46d751d5dca385242a92c09169ad4d277ee.1696795837.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.42.0

Now that the vma_merge()/split_vma() pattern has been abstracted,
split_vma() is used entirely internally within mm/mmap.c, so make the
function static. We also no longer need vma_merge() anywhere else except
mm/mremap.c, so make it internal by moving its declaration to
mm/internal.h. In addition, the split_vma() nommu variant also need not
be exported.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
---
 include/linux/mm.h | 9 ---------
 mm/internal.h      | 9 +++++++++
 mm/mmap.c          | 8 ++++----
 mm/nommu.c         | 4 ++--
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c069813f215f..6aa532682094 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3237,16 +3237,7 @@ extern int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		      struct vm_area_struct *next);
 extern int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		      unsigned long start, unsigned long end, pgoff_t pgoff);
-extern struct vm_area_struct *vma_merge(struct vma_iterator *vmi,
-	struct mm_struct *, struct vm_area_struct *prev, unsigned long addr,
-	unsigned long end, unsigned long vm_flags, struct anon_vma *,
-	struct file *, pgoff_t, struct mempolicy *, struct vm_userfaultfd_ctx,
-	struct anon_vma_name *);
 extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
-extern int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *,
-		       unsigned long addr, int new_below);
-extern int split_vma(struct vma_iterator *vmi, struct vm_area_struct *,
-		     unsigned long addr, int new_below);
 extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
 extern void unlink_file_vma(struct vm_area_struct *);
 extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
diff --git a/mm/internal.h b/mm/internal.h
index 3a72975425bb..ddaeb9f2d9d7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1011,6 +1011,15 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 				   unsigned long addr, pmd_t *pmd,
 				   unsigned int flags);
 
+/*
+ * mm/mmap.c
+ */
+struct vm_area_struct *vma_merge(struct vma_iterator *vmi,
+	struct mm_struct *, struct vm_area_struct *prev, unsigned long addr,
+	unsigned long end, unsigned long vm_flags, struct anon_vma *,
+	struct file *, pgoff_t, struct mempolicy *, struct vm_userfaultfd_ctx,
+	struct anon_vma_name *);
+
 enum {
 	/* mark page accessed */
 	FOLL_TOUCH = 1 << 16,
diff --git a/mm/mmap.c b/mm/mmap.c
index 8c21171b431f..58d71f84e917 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2346,8 +2346,8 @@ static void unmap_region(struct mm_struct *mm, struct ma_state *mas,
  * has already been checked or doesn't make sense to fail.
  * VMA Iterator will point to the end VMA.
  */
-int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		unsigned long addr, int new_below)
+static int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		       unsigned long addr, int new_below)
 {
 	struct vma_prepare vp;
 	struct vm_area_struct *new;
@@ -2428,8 +2428,8 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
  * Split a vma into two pieces at address 'addr', a new vma is allocated
  * either for the first part or the tail.
  */
-int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
-	      unsigned long addr, int new_below)
+static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		     unsigned long addr, int new_below)
 {
 	if (vma->vm_mm->map_count >= sysctl_max_map_count)
 		return -ENOMEM;
diff --git a/mm/nommu.c b/mm/nommu.c
index f9553579389b..fc4afe924ad5 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1305,8 +1305,8 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
  * split a vma into two pieces at address 'addr', a new vma is allocated either
  * for the first part or the tail.
  */
-int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
-	      unsigned long addr, int new_below)
+static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		     unsigned long addr, int new_below)
 {
 	struct vm_area_struct *new;
 	struct vm_region *region;
-- 
2.42.0
From nobody Fri Jan  2 20:32:58 2026
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
    Alexander Viro, Christian Brauner
Cc: Liam R. Howlett, Vlastimil Babka, linux-fsdevel@vger.kernel.org,
    Lorenzo Stoakes
Subject: [PATCH 3/4] mm: abstract merge for new VMAs into vma_merge_new_vma()
Date: Sun, 8 Oct 2023 21:23:15 +0100
X-Mailer: git-send-email 2.42.0

Only in mmap_region() and copy_vma() do we add VMAs which occupy entirely
new regions of virtual memory.

We can share the logic between these invocations and make it absolutely
explicit to reduce confusion around the rather inscrutable parameters
possessed by vma_merge(). This also paves the way for a simplification of
the core vma_merge() implementation, as we seek to make the function
entirely an implementation detail.

Note that in mmap_region(), vma fields are initialised to zero, so we can
simply reference these rather than explicitly specifying NULL.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
---
 mm/mmap.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 58d71f84e917..51be864b876b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2530,6 +2530,22 @@ struct vm_area_struct *vma_modify_uffd(struct vma_iterator *vmi,
 			  vma_policy(vma), new_ctx, anon_vma_name(vma));
 }
 
+/*
+ * Attempt to merge a newly mapped VMA with those adjacent to it. The caller
+ * must ensure that [start, end) does not overlap any existing VMA.
+ */
+static struct vm_area_struct *vma_merge_new_vma(struct vma_iterator *vmi,
+						struct vm_area_struct *prev,
+						struct vm_area_struct *vma,
+						unsigned long start,
+						unsigned long end,
+						pgoff_t pgoff)
+{
+	return vma_merge(vmi, vma->vm_mm, prev, start, end, vma->vm_flags,
+			 vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
+			 vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+}
+
 /*
  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
  * @vmi: The vma iterator
@@ -2885,10 +2901,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		 * vma again as we may succeed this time.
 		 */
 		if (unlikely(vm_flags != vma->vm_flags && prev)) {
-			merge = vma_merge(&vmi, mm, prev, vma->vm_start,
-				vma->vm_end, vma->vm_flags, NULL,
-				vma->vm_file, vma->vm_pgoff, NULL,
-				NULL_VM_UFFD_CTX, NULL);
+			merge = vma_merge_new_vma(&vmi, prev, vma,
+						  vma->vm_start, vma->vm_end,
+						  pgoff);
 			if (merge) {
 				/*
 				 * ->mmap() can change vma->vm_file and fput
@@ -3430,9 +3445,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 	if (new_vma && new_vma->vm_start < addr + len)
 		return NULL;	/* should never get here */
 
-	new_vma = vma_merge(&vmi, mm, prev, addr, addr + len, vma->vm_flags,
-			    vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
-			    vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+	new_vma = vma_merge_new_vma(&vmi, prev, vma, addr, addr + len, pgoff);
 	if (new_vma) {
 		/*
 		 * Source vma may have been merged into new_vma
-- 
2.42.0
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344600AbjJHUX3 (ORCPT ); Sun, 8 Oct 2023 16:23:29 -0400 Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com [IPv6:2a00:1450:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B40D4BA; Sun, 8 Oct 2023 13:23:27 -0700 (PDT) Received: by mail-wr1-x431.google.com with SMTP id ffacd0b85a97d-313e742a787so2252059f8f.1; Sun, 08 Oct 2023 13:23:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1696796606; x=1697401406; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=VCWwJGD+i6SLQnaAJLGE5OXPFaOejL4paLAiGQJbd2k=; b=JXydqT+4775nGmmfh0XcabAYNa1Fv1ov5so2IJYRqRAAp0mnnMeFxLldC1mTnv8bUR /pL8wutVkmOOcAAql0K1rUyzlyLFbxJC2B5MleTdUqHmHFq72K1zHzQTpg8RiCKzGZdp iWWJEFQmEb0fQskDzjSDiMq0W2L751YAt6QFcSXGgs5ZlwDNAMKcpGL9ujMN6dYlb9EQ jSCdk2hWmUcCaYkQPpEtEbWGjx0RJuvsKwwSeQMRHq8X3LDX+8Ogbx7mJlhj3fqm1wKQ bF9cJtE5wfpBMyZe+BYj1hZeKdaNOznVeFgSDuVqBnBGFuScRNMRoIc4PhoTTRpAneFb 9ZVw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1696796606; x=1697401406; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VCWwJGD+i6SLQnaAJLGE5OXPFaOejL4paLAiGQJbd2k=; b=e65AX4jpDe1aUXi6uqh8k6qyc7AMEvcJNYjcYltW6ikgVULeby/PYFhoawG289S39D U8dEKCvw9FM+na/aROOvPz462GWNg1l/UAxpjaSA4PASexq2PPjwUp3wcuc1RNlhDueJ ahzWWh/2zKAMrphaaVxlu0/RMEdMoYCT26vR1NtdiUgjNweMMSJ0Ui57KmT2/oclziS5 sGCc3s+L1txfx/TLeoubYdp4gMieXO2CS7My8K3hDs0GV54fnrs0u2lf9u6d3V559sKn 3vLvEb9upbFDo0nxeGhPWwe7Gb3zeW2uVdhwIFREvklu8TtoxV6Rgo29dMVqx5/oB/5q e63Q== X-Gm-Message-State: AOJu0Ywxq2hax+YvaGiDML3z2wlz1QlU86C+KHhm3T1fTpsdup/M1mhk fsBnLJwKQjpVo9EIVJfRP2JMNjZMR9w= X-Google-Smtp-Source: 
AGHT+IFCA87467gu7z5IDWce4HL8A128HwTPWTaB+9+eEVOi6NwgXaeueapZGaEywr2dFnR51gtXgg== X-Received: by 2002:a05:6000:1112:b0:317:6734:c2ae with SMTP id z18-20020a056000111200b003176734c2aemr7884139wrw.11.1696796606069; Sun, 08 Oct 2023 13:23:26 -0700 (PDT) Received: from lucifer.home ([2a00:23c5:dc8c:8701:1663:9a35:5a7b:1d76]) by smtp.googlemail.com with ESMTPSA id c5-20020a05600c0ac500b0040586360a36sm11474879wmr.17.2023.10.08.13.23.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 08 Oct 2023 13:23:25 -0700 (PDT) From: Lorenzo Stoakes To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton , Alexander Viro , Christian Brauner Cc: "=Liam R . Howlett" , Vlastimil Babka , linux-fsdevel@vger.kernel.org, Lorenzo Stoakes Subject: [PATCH 4/4] mm: abstract VMA extension and merge into vma_merge_extend() helper Date: Sun, 8 Oct 2023 21:23:16 +0100 Message-ID: <1ed3d1ba0069104e1685298aa2baf980c38a85ff.1696795837.git.lstoakes@gmail.com> X-Mailer: git-send-email 2.42.0 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" mremap uses vma_merge() in the case where a VMA needs to be extended. This can be significantly simplified and abstracted. This makes it far easier to understand what the actual function is doing, avoids future mistakes in use of the confusing vma_merge() function and importantly allows us to make future changes to how vma_merge() is implemented by knowing explicitly which merge cases each invocation uses. Note that in the mremap() extend case, we perform this merge only when old_len =3D=3D vma->vm_end - addr. The extension_start, i.e. the start of t= he extended portion of the VMA is equal to addr + old_len, i.e. vma->vm_end. With this refactoring, vma_merge() is no longer required anywhere except mm/mmap.c, so mark it static. 
Signed-off-by: Lorenzo Stoakes
Reviewed-by: Vlastimil Babka
---
 mm/internal.h |  8 +++-----
 mm/mmap.c     | 32 +++++++++++++++++++++++++-------
 mm/mremap.c   | 30 +++++++++++++-----------------
 3 files changed, 41 insertions(+), 29 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index ddaeb9f2d9d7..6fa722b07a94 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1014,11 +1014,9 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 /*
  * mm/mmap.c
  */
-struct vm_area_struct *vma_merge(struct vma_iterator *vmi,
-	struct mm_struct *, struct vm_area_struct *prev, unsigned long addr,
-	unsigned long end, unsigned long vm_flags, struct anon_vma *,
-	struct file *, pgoff_t, struct mempolicy *, struct vm_userfaultfd_ctx,
-	struct anon_vma_name *);
+struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
+					struct vm_area_struct *vma,
+					unsigned long delta);
 
 enum {
 	/* mark page accessed */
diff --git a/mm/mmap.c b/mm/mmap.c
index 51be864b876b..5d2f2e8d7307 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -860,13 +860,13 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
  * **** is not represented - it will be merged and the vma containing the
  * area is returned, or the function will return NULL
  */
-struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
-			struct vm_area_struct *prev, unsigned long addr,
-			unsigned long end, unsigned long vm_flags,
-			struct anon_vma *anon_vma, struct file *file,
-			pgoff_t pgoff, struct mempolicy *policy,
-			struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
-			struct anon_vma_name *anon_name)
+static struct vm_area_struct
+*vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
+	   struct vm_area_struct *prev, unsigned long addr, unsigned long end,
+	   unsigned long vm_flags, struct anon_vma *anon_vma, struct file *file,
+	   pgoff_t pgoff, struct mempolicy *policy,
+	   struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+	   struct anon_vma_name *anon_name)
 {
 	struct vm_area_struct *curr, *next, *res;
 	struct vm_area_struct *vma, *adjust, *remove, *remove2;
@@ -2546,6 +2546,24 @@ static struct vm_area_struct *vma_merge_new_vma(struct vma_iterator *vmi,
 			 vma->vm_userfaultfd_ctx, anon_vma_name(vma));
 }
 
+/*
+ * Expand vma by delta bytes, potentially merging with an immediately adjacent
+ * VMA with identical properties.
+ */
+struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
+					struct vm_area_struct *vma,
+					unsigned long delta)
+{
+	pgoff_t pgoff = vma->vm_pgoff +
+		((vma->vm_end - vma->vm_start) >> PAGE_SHIFT);
+
+	/* vma is specified as prev, so case 1 or 2 will apply. */
+	return vma_merge(vmi, vma->vm_mm, vma, vma->vm_end, vma->vm_end + delta,
+			 vma->vm_flags, vma->anon_vma, vma->vm_file, pgoff,
+			 vma_policy(vma), vma->vm_userfaultfd_ctx,
+			 anon_vma_name(vma));
+}
+
 /*
  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
  * @vmi: The vma iterator
diff --git a/mm/mremap.c b/mm/mremap.c
index ce8a23ef325a..38d98465f3d8 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -1096,14 +1096,12 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	/* old_len exactly to the end of the area.. */
 	if (old_len == vma->vm_end - addr) {
+		unsigned long delta = new_len - old_len;
+
 		/* can we just expand the current mapping? */
-		if (vma_expandable(vma, new_len - old_len)) {
-			long pages = (new_len - old_len) >> PAGE_SHIFT;
-			unsigned long extension_start = addr + old_len;
-			unsigned long extension_end = addr + new_len;
-			pgoff_t extension_pgoff = vma->vm_pgoff +
-				((extension_start - vma->vm_start) >> PAGE_SHIFT);
-			VMA_ITERATOR(vmi, mm, extension_start);
+		if (vma_expandable(vma, delta)) {
+			long pages = delta >> PAGE_SHIFT;
+			VMA_ITERATOR(vmi, mm, vma->vm_end);
 			long charged = 0;
 
 			if (vma->vm_flags & VM_ACCOUNT) {
@@ -1115,17 +1113,15 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 			}
 
 			/*
-			 * Function vma_merge() is called on the extension we
-			 * are adding to the already existing vma, vma_merge()
-			 * will merge this extension with the already existing
-			 * vma (expand operation itself) and possibly also with
-			 * the next vma if it becomes adjacent to the expanded
-			 * vma and otherwise compatible.
+			 * Function vma_merge_extend() is called on the
+			 * extension we are adding to the already existing vma,
+			 * vma_merge_extend() will merge this extension with the
+			 * already existing vma (expand operation itself) and
+			 * possibly also with the next vma if it becomes
+			 * adjacent to the expanded vma and otherwise
+			 * compatible.
 			 */
-			vma = vma_merge(&vmi, mm, vma, extension_start,
-				extension_end, vma->vm_flags, vma->anon_vma,
-				vma->vm_file, extension_pgoff, vma_policy(vma),
-				vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+			vma = vma_merge_extend(&vmi, vma, delta);
 			if (!vma) {
 				vm_unacct_memory(charged);
 				ret = -ENOMEM;
-- 
2.42.0