Date: Thu, 15 Feb 2024 10:27:56 -0800
In-Reply-To: <20240215182756.3448972-1-lokeshgidra@google.com>
References: <20240215182756.3448972-1-lokeshgidra@google.com>
Message-ID: <20240215182756.3448972-5-lokeshgidra@google.com>
Subject: [PATCH v7 4/4] userfaultfd: use per-vma locks in userfaultfd operations
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
    kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
    david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
    willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
    ngeoffray@google.com, timmurray@google.com, rppt@kernel.org,
    Liam.Howlett@oracle.com, ryan.roberts@arm.com

All userfaultfd operations, except write-protect, opportunistically use
per-vma locks to lock vmas. If the per-vma lock cannot be acquired, the
operation is retried inside the mmap_lock critical section. The
write-protect operation still requires the mmap_lock, as it iterates
over multiple vmas.

Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 fs/userfaultfd.c              |  13 +-
 include/linux/userfaultfd_k.h |   5 +-
 mm/huge_memory.c              |   5 +-
 mm/userfaultfd.c              | 380 ++++++++++++++++++++++++++--------
 4 files changed, 299 insertions(+), 104 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index c00a021bcce4..60dcfafdc11a 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(mm)) {
-		mmap_read_lock(mm);
-
-		/* Re-check after taking map_changing_lock */
-		down_read(&ctx->map_changing_lock);
-		if (likely(!atomic_read(&ctx->mmap_changing)))
-			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
-					 uffdio_move.len, uffdio_move.mode);
-		else
-			ret = -EAGAIN;
-		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(mm);
+		ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src,
+				 uffdio_move.len, uffdio_move.mode);
 		mmput(mm);
 	} else {
 		return -ESRCH;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 3210c3552976..05d59f74fc88 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma,
 /* move_pages */
 void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
 void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 flags);
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 flags);
 int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
 			pmd_t dst_pmdval, struct vm_area_struct *dst_vma,
 			struct vm_area_struct *src_vma,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 016e20bd813e..c337ebb4f7ab 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2158,7 +2158,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 #ifdef CONFIG_USERFAULTFD
 /*
- * The PT lock for src_pmd and the mmap_lock for reading are held by
+ * The PT lock for src_pmd and dst_vma/src_vma (for reading) are locked by
  * the caller, but it must return after releasing the page_table_lock.
  * Just move the page from src_pmd to dst_pmd if possible.
  * Return zero if succeeded in moving the page, -EAGAIN if it needs to be
@@ -2181,7 +2181,8 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
 	src_ptl = pmd_lockptr(mm, src_pmd);
 
 	lockdep_assert_held(src_ptl);
-	mmap_assert_locked(mm);
+	vma_assert_locked(src_vma);
+	vma_assert_locked(dst_vma);
 
 	/* Sanity checks before the operation */
 	if (WARN_ON_ONCE(!pmd_none(dst_pmdval)) || WARN_ON_ONCE(src_addr & ~HPAGE_PMD_MASK) ||
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 74aad0831e40..4744d6a96f96 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -20,19 +20,11 @@
 #include "internal.h"
 
 static __always_inline
-struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
-				    unsigned long dst_start,
-				    unsigned long len)
+bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
 {
-	/*
-	 * Make sure that the dst range is both valid and fully within a
-	 * single existing vma.
-	 */
-	struct vm_area_struct *dst_vma;
-
-	dst_vma = find_vma(dst_mm, dst_start);
-	if (!range_in_vma(dst_vma, dst_start, dst_start + len))
-		return NULL;
+	/* Make sure that the dst range is fully within dst_vma. */
+	if (dst_end > dst_vma->vm_end)
+		return false;
 
 	/*
 	 * Check the vma is registered in uffd, this is required to
@@ -40,11 +32,122 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	 * time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
-		return NULL;
+		return false;
+
+	return true;
+}
+
+static __always_inline
+struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
+						 unsigned long addr)
+{
+	struct vm_area_struct *vma;
+
+	mmap_assert_locked(mm);
+	vma = vma_lookup(mm, addr);
+	if (!vma)
+		vma = ERR_PTR(-ENOENT);
+	else if (!(vma->vm_flags & VM_SHARED) &&
+		 unlikely(anon_vma_prepare(vma)))
+		vma = ERR_PTR(-ENOMEM);
+
+	return vma;
+}
+
+#ifdef CONFIG_PER_VMA_LOCK
+/*
+ * lock_vma() - Lookup and lock vma corresponding to @address.
+ * @mm: mm to search vma in.
+ * @address: address that the vma should contain.
+ *
+ * Should be called without holding mmap_lock. vma should be unlocked after use
+ * with unlock_vma().
+ *
+ * Return: A locked vma containing @address, -ENOENT if no vma is found, or
+ * -ENOMEM if anon_vma couldn't be allocated.
+ */
+static struct vm_area_struct *lock_vma(struct mm_struct *mm,
+				       unsigned long address)
+{
+	struct vm_area_struct *vma;
+
+	vma = lock_vma_under_rcu(mm, address);
+	if (vma) {
+		/*
+		 * lock_vma_under_rcu() only checks anon_vma for private
+		 * anonymous mappings. But we need to ensure it is assigned in
+		 * private file-backed vmas as well.
+		 */
+		if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma))
+			vma_end_read(vma);
+		else
+			return vma;
+	}
+
+	mmap_read_lock(mm);
+	vma = find_vma_and_prepare_anon(mm, address);
+	if (!IS_ERR(vma)) {
+		/*
+		 * We cannot use vma_start_read() as it may fail due to
+		 * false locked (see comment in vma_start_read()). We
+		 * can avoid that by directly locking vm_lock under
+		 * mmap_lock, which guarantees that nobody can lock the
+		 * vma for write (vma_start_write()) under us.
+		 */
+		down_read(&vma->vm_lock->lock);
+	}
+
+	mmap_read_unlock(mm);
+	return vma;
+}
+
+static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
+					      unsigned long dst_start,
+					      unsigned long len)
+{
+	struct vm_area_struct *dst_vma;
+
+	dst_vma = lock_vma(dst_mm, dst_start);
+	if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len))
+		return dst_vma;
+
+	vma_end_read(dst_vma);
+	return ERR_PTR(-ENOENT);
+}
+
+static void uffd_mfill_unlock(struct vm_area_struct *vma)
+{
+	vma_end_read(vma);
+}
+
+#else
+
+static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
+					      unsigned long dst_start,
+					      unsigned long len)
+{
+	struct vm_area_struct *dst_vma;
+
+	mmap_read_lock(dst_mm);
+	dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start);
+	if (IS_ERR(dst_vma))
+		goto out_unlock;
+
+	if (validate_dst_vma(dst_vma, dst_start + len))
+		return dst_vma;
+
+	dst_vma = ERR_PTR(-ENOENT);
+out_unlock:
+	mmap_read_unlock(dst_mm);
 	return dst_vma;
 }
 
+static void uffd_mfill_unlock(struct vm_area_struct *vma)
+{
+	mmap_read_unlock(vma->vm_mm);
+}
+#endif
+
 /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
 static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
 				 unsigned long dst_addr)
@@ -350,7 +453,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 #ifdef CONFIG_HUGETLB_PAGE
 /*
  * mfill_atomic processing for HUGETLB vmas.  Note that this routine is
- * called with mmap_lock held, it will release mmap_lock before returning.
+ * called with either vma-lock or mmap_lock held, it will release the lock
+ * before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
 					      struct userfaultfd_ctx *ctx,
@@ -361,7 +465,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 					      uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
-	int vm_shared = dst_vma->vm_flags & VM_SHARED;
 	ssize_t err;
 	pte_t *dst_pte;
 	unsigned long src_addr, dst_addr;
@@ -380,7 +483,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
 		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(dst_mm);
+		uffd_mfill_unlock(dst_vma);
 		return -EINVAL;
 	}
 
@@ -403,24 +506,28 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * retry, dst_vma will be set to NULL and we must lookup again.
 	 */
 	if (!dst_vma) {
+		dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
+		if (IS_ERR(dst_vma)) {
+			err = PTR_ERR(dst_vma);
+			goto out;
+		}
+
 		err = -ENOENT;
-		dst_vma = find_dst_vma(dst_mm, dst_start, len);
-		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
-			goto out_unlock;
+		if (!is_vm_hugetlb_page(dst_vma))
+			goto out_unlock_vma;
 
 		err = -EINVAL;
 		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
-			goto out_unlock;
-
-		vm_shared = dst_vma->vm_flags & VM_SHARED;
-	}
+			goto out_unlock_vma;
 
-	/*
-	 * If not shared, ensure the dst_vma has a anon_vma.
-	 */
-	err = -ENOMEM;
-	if (!vm_shared) {
-		if (unlikely(anon_vma_prepare(dst_vma)))
+		/*
+		 * If memory mappings are changing because of non-cooperative
+		 * operation (e.g. mremap) running in parallel, bail out and
+		 * request the user to retry later
+		 */
+		down_read(&ctx->map_changing_lock);
+		err = -EAGAIN;
+		if (atomic_read(&ctx->mmap_changing))
 			goto out_unlock;
 	}
 
@@ -465,7 +572,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 		if (unlikely(err == -ENOENT)) {
 			up_read(&ctx->map_changing_lock);
-			mmap_read_unlock(dst_mm);
+			uffd_mfill_unlock(dst_vma);
 			BUG_ON(!folio);
 
 			err = copy_folio_from_user(folio,
@@ -474,17 +581,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				err = -EFAULT;
 				goto out;
 			}
-			mmap_read_lock(dst_mm);
-			down_read(&ctx->map_changing_lock);
-			/*
-			 * If memory mappings are changing because of non-cooperative
-			 * operation (e.g. mremap) running in parallel, bail out and
-			 * request the user to retry later
-			 */
-			if (atomic_read(&ctx->mmap_changing)) {
-				err = -EAGAIN;
-				break;
-			}
 
 			dst_vma = NULL;
 			goto retry;
@@ -505,7 +601,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
-	mmap_read_unlock(dst_mm);
+out_unlock_vma:
+	uffd_mfill_unlock(dst_vma);
 out:
 	if (folio)
 		folio_put(folio);
@@ -597,7 +694,15 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	copied = 0;
 	folio = NULL;
 retry:
-	mmap_read_lock(dst_mm);
+	/*
+	 * Make sure the vma is not shared, that the dst range is
+	 * both valid and fully within a single existing vma.
+	 */
+	dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
+	if (IS_ERR(dst_vma)) {
+		err = PTR_ERR(dst_vma);
+		goto out;
+	}
 
 	/*
 	 * If memory mappings are changing because of non-cooperative
@@ -609,15 +714,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
-	/*
-	 * Make sure the vma is not shared, that the dst range is
-	 * both valid and fully within a single existing vma.
-	 */
-	err = -ENOENT;
-	dst_vma = find_dst_vma(dst_mm, dst_start, len);
-	if (!dst_vma)
-		goto out_unlock;
-
 	err = -EINVAL;
 	/*
 	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
@@ -647,16 +743,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
 		goto out_unlock;
 
-	/*
-	 * Ensure the dst_vma has a anon_vma or this page
-	 * would get a NULL anon_vma when moved in the
-	 * dst_vma.
-	 */
-	err = -ENOMEM;
-	if (!(dst_vma->vm_flags & VM_SHARED) &&
-	    unlikely(anon_vma_prepare(dst_vma)))
-		goto out_unlock;
-
 	while (src_addr < src_start + len) {
 		pmd_t dst_pmdval;
 
@@ -699,7 +785,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 			void *kaddr;
 
 			up_read(&ctx->map_changing_lock);
-			mmap_read_unlock(dst_mm);
+			uffd_mfill_unlock(dst_vma);
 			BUG_ON(!folio);
 
 			kaddr = kmap_local_folio(folio, 0);
@@ -730,7 +816,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
-	mmap_read_unlock(dst_mm);
+	uffd_mfill_unlock(dst_vma);
 out:
 	if (folio)
 		folio_put(folio);
@@ -1267,27 +1353,136 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
 	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
 		return -EINVAL;
 
+	return 0;
+}
+
+static __always_inline
+int find_vmas_mm_locked(struct mm_struct *mm,
+			unsigned long dst_start,
+			unsigned long src_start,
+			struct vm_area_struct **dst_vmap,
+			struct vm_area_struct **src_vmap)
+{
+	struct vm_area_struct *vma;
+
+	mmap_assert_locked(mm);
+	vma = find_vma_and_prepare_anon(mm, dst_start);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	*dst_vmap = vma;
+	/* Skip finding src_vma if src_start is in dst_vma */
+	if (src_start >= vma->vm_start && src_start < vma->vm_end)
+		goto out_success;
+
+	vma = vma_lookup(mm, src_start);
+	if (!vma)
+		return -ENOENT;
+out_success:
+	*src_vmap = vma;
+	return 0;
+}
+
+#ifdef CONFIG_PER_VMA_LOCK
+static int uffd_move_lock(struct mm_struct *mm,
+			  unsigned long dst_start,
+			  unsigned long src_start,
+			  struct vm_area_struct **dst_vmap,
+			  struct vm_area_struct **src_vmap)
+{
+	struct vm_area_struct *vma;
+	int err;
+
+	vma = lock_vma(mm, dst_start);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	*dst_vmap = vma;
 	/*
-	 * Ensure the dst_vma has a anon_vma or this page
-	 * would get a NULL anon_vma when moved in the
-	 * dst_vma.
+	 * Skip finding src_vma if src_start is in dst_vma. This also ensures
+	 * that we don't lock the same vma twice.
 	 */
-	if (unlikely(anon_vma_prepare(dst_vma)))
-		return -ENOMEM;
+	if (src_start >= vma->vm_start && src_start < vma->vm_end) {
+		*src_vmap = vma;
+		return 0;
+	}
 
-	return 0;
+	/*
+	 * Using lock_vma() to get src_vma can lead to following deadlock:
+	 *
+	 * Thread1				Thread2
+	 * -------				-------
+	 * vma_start_read(dst_vma)
+	 *					mmap_write_lock(mm)
+	 *					vma_start_write(src_vma)
+	 * vma_start_read(src_vma)
+	 * mmap_read_lock(mm)
+	 *					vma_start_write(dst_vma)
+	 */
+	*src_vmap = lock_vma_under_rcu(mm, src_start);
+	if (likely(*src_vmap))
+		return 0;
+
+	/* Undo any locking and retry in mmap_lock critical section */
+	vma_end_read(*dst_vmap);
+
+	mmap_read_lock(mm);
+	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
+	if (!err) {
+		/*
+		 * See comment in lock_vma() as to why not using
+		 * vma_start_read() here.
+		 */
+		down_read(&(*dst_vmap)->vm_lock->lock);
+		if (*dst_vmap != *src_vmap)
+			down_read(&(*src_vmap)->vm_lock->lock);
+	}
+	mmap_read_unlock(mm);
+	return err;
+}
+
+static void uffd_move_unlock(struct vm_area_struct *dst_vma,
+			     struct vm_area_struct *src_vma)
+{
+	vma_end_read(src_vma);
+	if (src_vma != dst_vma)
+		vma_end_read(dst_vma);
 }
 
+#else
+
+static int uffd_move_lock(struct mm_struct *mm,
+			  unsigned long dst_start,
+			  unsigned long src_start,
+			  struct vm_area_struct **dst_vmap,
+			  struct vm_area_struct **src_vmap)
+{
+	int err;
+
+	mmap_read_lock(mm);
+	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
+	if (err)
+		mmap_read_unlock(mm);
+	return err;
+}
+
+static void uffd_move_unlock(struct vm_area_struct *dst_vma,
+			     struct vm_area_struct *src_vma)
+{
+	mmap_assert_locked(src_vma->vm_mm);
+	mmap_read_unlock(dst_vma->vm_mm);
+}
+#endif
+
 /**
  * move_pages - move arbitrary anonymous pages of an existing vma
  * @ctx: pointer to the userfaultfd context
- * @mm: the address space to move pages
  * @dst_start: start of the destination virtual memory range
  * @src_start: start of the source virtual memory range
  * @len: length of the virtual memory range
  * @mode: flags from uffdio_move.mode
  *
- * Must be called with mmap_lock held for read.
+ * It will either use the mmap_lock in read mode or per-vma locks
  *
  * move_pages() remaps arbitrary anonymous pages atomically in zero
  * copy. It only works on non shared anonymous pages because those can
@@ -1355,10 +1550,10 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
  * could be obtained. This is the only additional complexity added to
  * the rmap code to provide this anonymous page remapping functionality.
  */
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 mode)
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 mode)
 {
+	struct mm_struct *mm = ctx->mm;
 	struct vm_area_struct *src_vma, *dst_vma;
 	unsigned long src_addr, dst_addr;
 	pmd_t *src_pmd, *dst_pmd;
@@ -1376,28 +1571,34 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 	    WARN_ON_ONCE(dst_start + len <= dst_start))
 		goto out;
 
+	err = uffd_move_lock(mm, dst_start, src_start, &dst_vma, &src_vma);
+	if (err)
+		goto out;
+
+	/* Re-check after taking map_changing_lock */
+	err = -EAGAIN;
+	down_read(&ctx->map_changing_lock);
+	if (likely(atomic_read(&ctx->mmap_changing)))
+		goto out_unlock;
 	/*
 	 * Make sure the vma is not shared, that the src and dst remap
 	 * ranges are both valid and fully within a single existing
 	 * vma.
 	 */
-	src_vma = find_vma(mm, src_start);
-	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
-		goto out;
-	if (src_start < src_vma->vm_start ||
-	    src_start + len > src_vma->vm_end)
-		goto out;
+	err = -EINVAL;
+	if (src_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
+	if (src_start + len > src_vma->vm_end)
+		goto out_unlock;
 
-	dst_vma = find_vma(mm, dst_start);
-	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
-		goto out;
-	if (dst_start < dst_vma->vm_start ||
-	    dst_start + len > dst_vma->vm_end)
-		goto out;
+	if (dst_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
+	if (dst_start + len > dst_vma->vm_end)
+		goto out_unlock;
 
 	err = validate_move_areas(ctx, src_vma, dst_vma);
 	if (err)
-		goto out;
+		goto out_unlock;
 
 	for (src_addr = src_start, dst_addr = dst_start;
 	     src_addr < src_start + len;) {
@@ -1514,6 +1715,9 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 		moved += step_size;
 	}
 
+out_unlock:
+	up_read(&ctx->map_changing_lock);
+	uffd_move_unlock(dst_vma, src_vma);
 out:
 	VM_WARN_ON(moved < 0);
 	VM_WARN_ON(err > 0);
-- 
2.43.0.687.g38aa6559b0-goog
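
For readers unfamiliar with the path this patch optimizes, below is a
minimal userspace sketch of a UFFDIO_MOVE call, the ioctl whose kernel
side (move_pages()) now takes per-vma locks instead of mmap_lock. This
sketch is not part of the patch: it assumes a kernel with UFFDIO_MOVE
support (v6.8 or later) and abbreviates error handling.

/*
 * Sketch: move a prepared anonymous page into an unpopulated,
 * uffd-registered destination range with UFFDIO_MOVE (zero copy).
 */
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
		return 1;

	/* src holds a prepared page; dst stays unpopulated for now. */
	char *src = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *dst = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(src, 0x5a, page);	/* fault src in and fill it */

	/* dst must be registered with this uffd for the move to pass
	 * validate_move_areas(). */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)dst, .len = page },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		return 1;

	/* Remap src's page into dst without copying; on a v6.9+ kernel
	 * with this series, the common case takes only the two vma read
	 * locks rather than mmap_lock. */
	struct uffdio_move mv = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = page,
		.mode = 0,
	};
	if (ioctl(uffd, UFFDIO_MOVE, &mv) || mv.move != page)
		return 1;

	printf("moved: dst[0] = 0x%x\n", dst[0]);	/* prints 0x5a */
	return 0;
}

Because the operation no longer holds mmap_lock, concurrent page faults
and mmap/munmap activity elsewhere in the same address space are not
serialized behind it, which is the contention this series targets.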