From nobody Wed Sep 10 09:29:25 2025
Date: Fri, 4 Aug 2023 08:27:24 -0700
In-Reply-To: <20230804152724.3090321-1-surenb@google.com>
References: <20230804152724.3090321-1-surenb@google.com>
Message-ID: <20230804152724.3090321-7-surenb@google.com>
Subject: [PATCH v4 6/6] mm: move vma locking out of vma_prepare and dup_anon_vma
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, jannh@google.com, willy@infradead.org,
	liam.howlett@oracle.com, david@redhat.com, peterx@redhat.com,
	ldufour@linux.ibm.com, vbabka@suse.cz, michel@lespinasse.org,
	jglisse@google.com, mhocko@suse.com, hannes@cmpxchg.org,
	dave@stgolabs.net, hughd@google.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org,
	stable@vger.kernel.org, kernel-team@android.com,
	Suren Baghdasaryan <surenb@google.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	"Liam R. Howlett" <liam.howlett@oracle.com>

vma_prepare() is currently the central place where vmas are locked
before vma_complete() applies changes to them. While this is
convenient, it also obscures vma locking and makes it harder to follow
the locking rules. Move vma locking out of vma_prepare() and take vma
locks explicitly at the locations where vmas are being modified. Move
vma locking out of dup_anon_vma() as well, replacing it with an
assertion, to further clarify the locking pattern inside vma_merge().

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Liam R. Howlett <liam.howlett@oracle.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/mmap.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)
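For readers who want to see the resulting contract outside the diff,
here is a minimal userspace sketch: callers write-lock each vma
explicitly before modifying it, and dup_anon_vma() only asserts that
the lock is already held. Everything below (the struct vma mock, the
boolean lock flag, vma_expand_like()) is a simplified stand-in for
illustration, not the kernel implementation.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct vma {				/* stand-in for struct vm_area_struct */
	bool write_locked;
	void *anon_vma;
};

static void vma_start_write(struct vma *vma)
{
	vma->write_locked = true;	/* real version takes the per-VMA lock */
}

static void vma_assert_write_locked(struct vma *vma)
{
	assert(vma->write_locked);	/* real version asserts, never locks */
}

/* After this patch: dup_anon_vma() asserts; callers must lock first. */
static int dup_anon_vma(struct vma *dst, struct vma *src)
{
	if (src->anon_vma && !dst->anon_vma) {
		vma_assert_write_locked(dst);
		dst->anon_vma = src->anon_vma;
	}
	return 0;
}

/* Caller-side pattern, modeled on vma_expand() after this patch. */
static int vma_expand_like(struct vma *vma, struct vma *next, bool remove_next)
{
	vma_start_write(vma);		/* lock explicitly at the call site */
	if (remove_next) {
		vma_start_write(next);	/* lock next before dup_anon_vma() */
		if (dup_anon_vma(vma, next))
			return -1;
	}
	/* ... vma_prepare()/vma_complete() would apply the change here ... */
	return 0;
}

int main(void)
{
	struct vma vma = { 0 }, next = { 0 };
	int token = 0;

	next.anon_vma = &token;
	if (vma_expand_like(&vma, &next, true) == 0)
		printf("anon_vma imported: %s\n", vma.anon_vma ? "yes" : "no");
	return 0;
}

The point of the pattern is that grepping for vma_start_write() now
shows exactly which code paths take the lock, instead of the lock
being hidden inside vma_prepare().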
diff --git a/mm/mmap.c b/mm/mmap.c
index 850a39dee075..16661427d3e8 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -476,16 +476,6 @@ static inline void init_vma_prep(struct vma_prepare *vp,
  */
 static inline void vma_prepare(struct vma_prepare *vp)
 {
-	vma_start_write(vp->vma);
-	if (vp->adj_next)
-		vma_start_write(vp->adj_next);
-	if (vp->insert)
-		vma_start_write(vp->insert);
-	if (vp->remove)
-		vma_start_write(vp->remove);
-	if (vp->remove2)
-		vma_start_write(vp->remove2);
-
 	if (vp->file) {
 		uprobe_munmap(vp->vma, vp->vma->vm_start, vp->vma->vm_end);
 
@@ -618,7 +608,7 @@ static inline int dup_anon_vma(struct vm_area_struct *dst,
 	 * anon pages imported.
 	 */
 	if (src->anon_vma && !dst->anon_vma) {
-		vma_start_write(dst);
+		vma_assert_write_locked(dst);
 		dst->anon_vma = src->anon_vma;
 		return anon_vma_clone(dst, src);
 	}
@@ -650,10 +640,12 @@ int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	bool remove_next = false;
 	struct vma_prepare vp;
 
+	vma_start_write(vma);
 	if (next && (vma != next) && (end == next->vm_end)) {
 		int ret;
 
 		remove_next = true;
+		vma_start_write(next);
 		ret = dup_anon_vma(vma, next);
 		if (ret)
 			return ret;
@@ -708,6 +700,8 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	if (vma_iter_prealloc(vmi))
 		return -ENOMEM;
 
+	vma_start_write(vma);
+
 	init_vma_prep(&vp, vma);
 	vma_prepare(&vp);
 	vma_adjust_trans_huge(vma, start, end, 0);
@@ -940,16 +934,21 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
 	if (!merge_prev && !merge_next)
 		return NULL;		/* Not mergeable. */
 
+	if (merge_prev)
+		vma_start_write(prev);
+
 	res = vma = prev;
 	remove = remove2 = adjust = NULL;
 
 	/* Can we merge both the predecessor and the successor? */
 	if (merge_prev && merge_next &&
 	    is_mergeable_anon_vma(prev->anon_vma, next->anon_vma, NULL)) {
+		vma_start_write(next);
 		remove = next;				/* case 1 */
 		vma_end = next->vm_end;
 		err = dup_anon_vma(prev, next);
 		if (curr) {				/* case 6 */
+			vma_start_write(curr);
 			remove = curr;
 			remove2 = next;
 			if (!next->anon_vma)
@@ -957,6 +956,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
 		}
 	} else if (merge_prev) {			/* case 2 */
 		if (curr) {
+			vma_start_write(curr);
 			err = dup_anon_vma(prev, curr);
 			if (end == curr->vm_end) {	/* case 7 */
 				remove = curr;
@@ -966,8 +966,10 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
 			}
 		}
 	} else { /* merge_next */
+		vma_start_write(next);
 		res = next;
 		if (prev && addr < prev->vm_end) {	/* case 4 */
+			vma_start_write(prev);
 			vma_end = addr;
 			adjust = next;
 			adj_start = -(prev->vm_end - addr);
@@ -983,6 +985,7 @@ struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
 			vma_pgoff = next->vm_pgoff - pglen;
 			if (curr) {			/* case 8 */
 				vma_pgoff = curr->vm_pgoff;
+				vma_start_write(curr);
 				remove = curr;
 				err = dup_anon_vma(next, curr);
 			}
@@ -2373,6 +2376,9 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	if (new->vm_ops && new->vm_ops->open)
 		new->vm_ops->open(new);
 
+	vma_start_write(vma);
+	vma_start_write(new);
+
 	init_vma_prep(&vp, vma);
 	vp.insert = new;
 	vma_prepare(&vp);
@@ -3078,6 +3084,8 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	if (vma_iter_prealloc(vmi))
 		goto unacct_fail;
 
+	vma_start_write(vma);
+
 	init_vma_prep(&vp, vma);
 	vma_prepare(&vp);
 	vma_adjust_trans_huge(vma, vma->vm_start, addr + len, 0);
-- 
2.41.0.585.gd2178a4bd4-goog