From nobody Sun Sep 8 03:09:49 2024
Date: Mon, 12 Feb 2024 16:19:18 -0800
In-Reply-To: <20240213001920.3551772-1-lokeshgidra@google.com>
References: <20240213001920.3551772-1-lokeshgidra@google.com>
Message-ID: <20240213001920.3551772-2-lokeshgidra@google.com>
Subject: [PATCH v5 1/3] userfaultfd: move userfaultfd_ctx struct to header file
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org,
 Liam.Howlett@oracle.com

Move the userfaultfd_ctx struct to userfaultfd_k.h so that it is
accessible from mm/userfaultfd.c. There are no other changes to the
struct.

This is required to prepare for using per-vma locks in userfaultfd
operations.

Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 fs/userfaultfd.c              | 39 -----------------------------------
 include/linux/userfaultfd_k.h | 39 +++++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 05c8e8a05427..58331b83d648 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -50,45 +50,6 @@ static struct ctl_table vm_userfaultfd_table[] = {
 
 static struct kmem_cache *userfaultfd_ctx_cachep __ro_after_init;
 
-/*
- * Start with fault_pending_wqh and fault_wqh so they're more likely
- * to be in the same cacheline.
- *
- * Locking order:
- *	fd_wqh.lock
- *		fault_pending_wqh.lock
- *			fault_wqh.lock
- *				event_wqh.lock
- *
- * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
- * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
- * also taken in IRQ context.
- */
-struct userfaultfd_ctx {
-	/* waitqueue head for the pending (i.e. not read) userfaults */
-	wait_queue_head_t fault_pending_wqh;
-	/* waitqueue head for the userfaults */
-	wait_queue_head_t fault_wqh;
-	/* waitqueue head for the pseudo fd to wakeup poll/read */
-	wait_queue_head_t fd_wqh;
-	/* waitqueue head for events */
-	wait_queue_head_t event_wqh;
-	/* a refile sequence protected by fault_pending_wqh lock */
-	seqcount_spinlock_t refile_seq;
-	/* pseudo fd refcounting */
-	refcount_t refcount;
-	/* userfaultfd syscall flags */
-	unsigned int flags;
-	/* features requested from the userspace */
-	unsigned int features;
-	/* released */
-	bool released;
-	/* memory mappings are changing because of non-cooperative event */
-	atomic_t mmap_changing;
-	/* mm with one or more vmas attached to this userfaultfd_ctx */
-	struct mm_struct *mm;
-};
-
 struct userfaultfd_fork_ctx {
 	struct userfaultfd_ctx *orig;
 	struct userfaultfd_ctx *new;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index e4056547fbe6..691d928ee864 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -36,6 +36,45 @@
 #define UFFD_SHARED_FCNTL_FLAGS (O_CLOEXEC | O_NONBLOCK)
 #define UFFD_FLAGS_SET (EFD_SHARED_FCNTL_FLAGS)
 
+/*
+ * Start with fault_pending_wqh and fault_wqh so they're more likely
+ * to be in the same cacheline.
+ *
+ * Locking order:
+ *	fd_wqh.lock
+ *		fault_pending_wqh.lock
+ *			fault_wqh.lock
+ *				event_wqh.lock
+ *
+ * To avoid deadlocks, IRQs must be disabled when taking any of the above locks,
+ * since fd_wqh.lock is taken by aio_poll() while it's holding a lock that's
+ * also taken in IRQ context.
+ */
+struct userfaultfd_ctx {
+	/* waitqueue head for the pending (i.e. not read) userfaults */
+	wait_queue_head_t fault_pending_wqh;
+	/* waitqueue head for the userfaults */
+	wait_queue_head_t fault_wqh;
+	/* waitqueue head for the pseudo fd to wakeup poll/read */
+	wait_queue_head_t fd_wqh;
+	/* waitqueue head for events */
+	wait_queue_head_t event_wqh;
+	/* a refile sequence protected by fault_pending_wqh lock */
+	seqcount_spinlock_t refile_seq;
+	/* pseudo fd refcounting */
+	refcount_t refcount;
+	/* userfaultfd syscall flags */
+	unsigned int flags;
+	/* features requested from the userspace */
+	unsigned int features;
+	/* released */
+	bool released;
+	/* memory mappings are changing because of non-cooperative event */
+	atomic_t mmap_changing;
+	/* mm with one or more vmas attached to this userfaultfd_ctx */
+	struct mm_struct *mm;
+};
+
 extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);
 
 /* A combined operation mode + behavior flags. */
-- 
2.43.0.687.g38aa6559b0-goog

From nobody Sun Sep 8 03:09:49 2024
Date: Mon, 12 Feb 2024 16:19:19 -0800
In-Reply-To: <20240213001920.3551772-1-lokeshgidra@google.com>
References: <20240213001920.3551772-1-lokeshgidra@google.com>
Message-ID: <20240213001920.3551772-3-lokeshgidra@google.com>
Subject: [PATCH v5 2/3] userfaultfd: protect mmap_changing with rw_sem in userfaultfd_ctx
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org,
 Liam.Howlett@oracle.com

Increments and loads of mmap_changing are currently always done within
an mmap_lock critical section, which is what guarantees that no
userfaultfd operation runs concurrently with a non-cooperative
operation (e.g. mremap) when userspace has requested event
notification for it.

The same guarantee can be provided by a separate read-write semaphore
in userfaultfd_ctx: increments of mmap_changing are done in write
mode, while userfaultfd operations, which load mmap_changing, hold the
semaphore in read mode. This eliminates the dependency on mmap_lock
for this purpose.

This is a preparatory step before we replace mmap_lock usage with
per-vma locks in the fill/move ioctls.
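
To illustrate the intended protocol, here is a minimal userspace
analogue of the locking pattern this patch introduces, written with
pthreads rather than kernel primitives; the struct and function names
below are illustrative only, not the kernel's:

	#include <pthread.h>
	#include <stdatomic.h>

	struct uffd_ctx_demo {
		pthread_rwlock_t map_changing_lock;
		atomic_int mmap_changing;
	};

	/* e.g. static initialization for the sketch: */
	static struct uffd_ctx_demo demo = { PTHREAD_RWLOCK_INITIALIZER };

	/* A non-cooperative event (e.g. an mremap notification) begins:
	 * taking the lock in write mode waits for every in-flight
	 * operation to drop its read lock before the counter is bumped.
	 * (The matching decrement at event completion is elided.)
	 */
	static void event_start(struct uffd_ctx_demo *ctx)
	{
		pthread_rwlock_wrlock(&ctx->map_changing_lock);
		atomic_fetch_add(&ctx->mmap_changing, 1);
		pthread_rwlock_unlock(&ctx->map_changing_lock);
	}

	/* A fill/move/wp-style operation begins: hold the lock in read
	 * mode for the whole operation and bail (EAGAIN-style) if an
	 * event is pending; the caller drops the read lock when done.
	 */
	static int op_begin(struct uffd_ctx_demo *ctx)
	{
		pthread_rwlock_rdlock(&ctx->map_changing_lock);
		if (atomic_load(&ctx->mmap_changing)) {
			pthread_rwlock_unlock(&ctx->map_changing_lock);
			return -1;
		}
		return 0;
	}

This gives the same ordering mmap_lock used to provide: an event that
has bumped the counter knows no operation is still in flight, and any
later operation observes the raised counter and retries.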
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 fs/userfaultfd.c              | 40 ++++++++++++----------
 include/linux/userfaultfd_k.h | 31 ++++++++++--------
 mm/userfaultfd.c              | 62 ++++++++++++++++++++---------------
 3 files changed, 75 insertions(+), 58 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 58331b83d648..c00a021bcce4 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -685,12 +685,15 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
 		ctx->flags = octx->flags;
 		ctx->features = octx->features;
 		ctx->released = false;
+		init_rwsem(&ctx->map_changing_lock);
 		atomic_set(&ctx->mmap_changing, 0);
 		ctx->mm = vma->vm_mm;
 		mmgrab(ctx->mm);
 
 		userfaultfd_ctx_get(octx);
+		down_write(&octx->map_changing_lock);
 		atomic_inc(&octx->mmap_changing);
+		up_write(&octx->map_changing_lock);
 		fctx->orig = octx;
 		fctx->new = ctx;
 		list_add_tail(&fctx->list, fcs);
@@ -737,7 +740,9 @@ void mremap_userfaultfd_prep(struct vm_area_struct *vma,
 	if (ctx->features & UFFD_FEATURE_EVENT_REMAP) {
 		vm_ctx->ctx = ctx;
 		userfaultfd_ctx_get(ctx);
+		down_write(&ctx->map_changing_lock);
 		atomic_inc(&ctx->mmap_changing);
+		up_write(&ctx->map_changing_lock);
 	} else {
 		/* Drop uffd context if remap feature not enabled */
 		vma_start_write(vma);
@@ -783,7 +788,9 @@ bool userfaultfd_remove(struct vm_area_struct *vma,
 		return true;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	mmap_read_unlock(mm);
 
 	msg_init(&ewq.msg);
@@ -825,7 +832,9 @@ int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
 		return -ENOMEM;
 
 	userfaultfd_ctx_get(ctx);
+	down_write(&ctx->map_changing_lock);
 	atomic_inc(&ctx->mmap_changing);
+	up_write(&ctx->map_changing_lock);
 	unmap_ctx->ctx = ctx;
 	unmap_ctx->start = start;
 	unmap_ctx->end = end;
@@ -1709,9 +1718,8 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 	if (uffdio_copy.mode & UFFDIO_COPY_MODE_WP)
 		flags |= MFILL_ATOMIC_WP;
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_copy(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
-					uffdio_copy.len, &ctx->mmap_changing,
-					flags);
+		ret = mfill_atomic_copy(ctx, uffdio_copy.dst, uffdio_copy.src,
+					uffdio_copy.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1761,9 +1769,8 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_zeropage(ctx->mm, uffdio_zeropage.range.start,
-					    uffdio_zeropage.range.len,
-					    &ctx->mmap_changing);
+		ret = mfill_atomic_zeropage(ctx, uffdio_zeropage.range.start,
+					    uffdio_zeropage.range.len);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1818,9 +1825,8 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
-					  uffdio_wp.range.len, mode_wp,
-					  &ctx->mmap_changing);
+		ret = mwriteprotect_range(ctx, uffdio_wp.range.start,
+					  uffdio_wp.range.len, mode_wp);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1870,9 +1876,8 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		flags |= MFILL_ATOMIC_WP;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_continue(ctx->mm, uffdio_continue.range.start,
-					    uffdio_continue.range.len,
-					    &ctx->mmap_changing, flags);
+		ret = mfill_atomic_continue(ctx, uffdio_continue.range.start,
+					    uffdio_continue.range.len, flags);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -1925,9 +1930,8 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;
 
 	if (mmget_not_zero(ctx->mm)) {
-		ret = mfill_atomic_poison(ctx->mm, uffdio_poison.range.start,
-					  uffdio_poison.range.len,
-					  &ctx->mmap_changing, 0);
+		ret = mfill_atomic_poison(ctx, uffdio_poison.range.start,
+					  uffdio_poison.range.len, 0);
 		mmput(ctx->mm);
 	} else {
 		return -ESRCH;
@@ -2003,13 +2007,14 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 	if (mmget_not_zero(mm)) {
 		mmap_read_lock(mm);
 
-		/* Re-check after taking mmap_lock */
+		/* Re-check after taking map_changing_lock */
+		down_read(&ctx->map_changing_lock);
 		if (likely(!atomic_read(&ctx->mmap_changing)))
 			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
 					 uffdio_move.len, uffdio_move.mode);
 		else
 			ret = -EAGAIN;
-
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(mm);
 		mmput(mm);
 	} else {
@@ -2216,6 +2221,7 @@ static int new_userfaultfd(int flags)
 	ctx->flags = flags;
 	ctx->features = 0;
 	ctx->released = false;
+	init_rwsem(&ctx->map_changing_lock);
 	atomic_set(&ctx->mmap_changing, 0);
 	ctx->mm = current->mm;
 	/* prevent the mm struct to be freed */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 691d928ee864..3210c3552976 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -69,6 +69,13 @@ struct userfaultfd_ctx {
 	unsigned int features;
 	/* released */
 	bool released;
+	/*
+	 * Prevents userfaultfd operations (fill/move/wp) from happening while
+	 * some non-cooperative event(s) is taking place. Increments are done
+	 * in write mode, whereas userfaultfd operations, which include
+	 * reading mmap_changing, are done in read mode.
+	 */
+	struct rw_semaphore map_changing_lock;
 	/* memory mappings are changing because of non-cooperative event */
 	atomic_t mmap_changing;
 	/* mm with one or more vmas attached to this userfaultfd_ctx */
@@ -113,22 +120,18 @@ extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, uffd_flags_t flags);
 
-extern ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 				 unsigned long src_start, unsigned long len,
-				 atomic_t *mmap_changing, uffd_flags_t flags);
-extern ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm,
+				 uffd_flags_t flags);
+extern ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
 				     unsigned long dst_start,
-				     unsigned long len,
-				     atomic_t *mmap_changing);
-extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst_start,
-				     unsigned long len, atomic_t *mmap_changing,
-				     uffd_flags_t flags);
-extern ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-				   unsigned long len, atomic_t *mmap_changing,
-				   uffd_flags_t flags);
-extern int mwriteprotect_range(struct mm_struct *dst_mm,
-			       unsigned long start, unsigned long len,
-			       bool enable_wp, atomic_t *mmap_changing);
+				     unsigned long len);
+extern ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+				     unsigned long len, uffd_flags_t flags);
+extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+				   unsigned long len, uffd_flags_t flags);
+extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			       unsigned long len, bool enable_wp);
 extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);
 
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9cc93cc1330b..74aad0831e40 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -353,11 +353,11 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 * called with mmap_lock held, it will release mmap_lock before returning.
 */
 static __always_inline ssize_t mfill_atomic_hugetlb(
+					      struct userfaultfd_ctx *ctx,
 					      struct vm_area_struct *dst_vma,
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
-					      atomic_t *mmap_changing,
 					      uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
@@ -379,6 +379,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * feature is not supported.
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
+		up_read(&ctx->map_changing_lock);
 		mmap_read_unlock(dst_mm);
 		return -EINVAL;
 	}
@@ -463,6 +464,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -473,12 +475,13 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				goto out;
 			}
 			mmap_read_lock(dst_mm);
+			down_read(&ctx->map_changing_lock);
 			/*
 			 * If memory mappings are changing because of non-cooperative
 			 * operation (e.g. mremap) running in parallel, bail out and
 			 * request the user to retry later
 			 */
-			if (mmap_changing && atomic_read(mmap_changing)) {
+			if (atomic_read(&ctx->mmap_changing)) {
 				err = -EAGAIN;
 				break;
 			}
@@ -501,6 +504,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -512,11 +516,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
+				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
-				    atomic_t *mmap_changing,
 				    uffd_flags_t flags);
 #endif /* CONFIG_HUGETLB_PAGE */
 
@@ -564,13 +568,13 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 	return err;
 }
 
-static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    unsigned long dst_start,
 					    unsigned long src_start,
 					    unsigned long len,
-					    atomic_t *mmap_changing,
 					    uffd_flags_t flags)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	struct vm_area_struct *dst_vma;
 	ssize_t err;
 	pmd_t *dst_pmd;
@@ -600,8 +604,9 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	/*
@@ -633,8 +638,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return  mfill_atomic_hugetlb(dst_vma, dst_start, src_start,
-					     len, mmap_changing, flags);
+		return  mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
+					     src_start, len, flags);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
@@ -693,6 +698,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		if (unlikely(err == -ENOENT)) {
 			void *kaddr;
 
+			up_read(&ctx->map_changing_lock);
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!folio);
 
@@ -723,6 +729,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	}
 
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 out:
 	if (folio)
@@ -733,34 +740,33 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	return copied ? copied : err;
 }
 
-ssize_t mfill_atomic_copy(struct mm_struct *dst_mm, unsigned long dst_start,
+ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 			  unsigned long src_start, unsigned long len,
-			  atomic_t *mmap_changing, uffd_flags_t flags)
+			  uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, dst_start, src_start, len, mmap_changing,
+	return mfill_atomic(ctx, dst_start, src_start, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_COPY));
 }
 
-ssize_t mfill_atomic_zeropage(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing)
+ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
+			      unsigned long start,
+			      unsigned long len)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(0, MFILL_ATOMIC_ZEROPAGE));
 }
 
-ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
-			      unsigned long len, atomic_t *mmap_changing,
-			      uffd_flags_t flags)
+ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start,
+			      unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE));
 }
 
-ssize_t mfill_atomic_poison(struct mm_struct *dst_mm, unsigned long start,
-			    unsigned long len, atomic_t *mmap_changing,
-			    uffd_flags_t flags)
+ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
+			    unsigned long len, uffd_flags_t flags)
 {
-	return mfill_atomic(dst_mm, start, 0, len, mmap_changing,
+	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
 }
 
@@ -793,10 +799,10 @@ long uffd_wp_range(struct vm_area_struct *dst_vma,
 	return ret;
 }
 
-int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
-			unsigned long len, bool enable_wp,
-			atomic_t *mmap_changing)
+int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
+			unsigned long len, bool enable_wp)
 {
+	struct mm_struct *dst_mm = ctx->mm;
 	unsigned long end = start + len;
 	unsigned long _start, _end;
 	struct vm_area_struct *dst_vma;
@@ -820,8 +826,9 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	 * operation (e.g. mremap) running in parallel, bail out and
 	 * request the user to retry later
 	 */
+	down_read(&ctx->map_changing_lock);
 	err = -EAGAIN;
-	if (mmap_changing && atomic_read(mmap_changing))
+	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
 	err = -ENOENT;
@@ -850,6 +857,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		err = 0;
 	}
 out_unlock:
+	up_read(&ctx->map_changing_lock);
 	mmap_read_unlock(dst_mm);
 	return err;
 }
-- 
2.43.0.687.g38aa6559b0-goog

From nobody Sun Sep 8 03:09:49 2024
Date: Mon, 12 Feb 2024 16:19:20 -0800
In-Reply-To: <20240213001920.3551772-1-lokeshgidra@google.com>
References: <20240213001920.3551772-1-lokeshgidra@google.com>
Message-ID: <20240213001920.3551772-4-lokeshgidra@google.com>
Subject: [PATCH v5 3/3] userfaultfd: use per-vma locks in userfaultfd operations
From: Lokesh Gidra <lokeshgidra@google.com>
To: akpm@linux-foundation.org
Cc: lokeshgidra@google.com, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
 kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
 david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
 willy@infradead.org, jannh@google.com, kaleshsingh@google.com,
 ngeoffray@google.com, timmurray@google.com, rppt@kernel.org,
 Liam.Howlett@oracle.com

All userfaultfd operations except write-protect now opportunistically
use per-vma locks to lock the vma they operate on. If per-vma locking
fails, they retry inside an mmap_lock critical section. The
write-protect operation still requires mmap_lock, as it iterates over
multiple vmas.
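
The heart of the change is the lookup helper that tries the per-vma
lock first and falls back to mmap_lock. Condensed from the diff below
(error paths and the anon_vma preparation are elided; see lock_vma()
in the patch for the full version), the pattern is:

	static struct vm_area_struct *lock_vma_sketch(struct mm_struct *mm,
						      unsigned long address)
	{
		/* Fast path: per-vma read lock, no mmap_lock involved. */
		struct vm_area_struct *vma = lock_vma_under_rcu(mm, address);

		if (vma)
			return vma;

		/*
		 * Slow path: take mmap_lock to find the vma, then acquire
		 * its vm_lock in read mode while mmap_lock is still held
		 * so that no writer can slip in, and drop mmap_lock before
		 * returning.
		 */
		mmap_read_lock(mm);
		vma = vma_lookup(mm, address);
		if (vma)
			down_read(&vma->vm_lock->lock);
		mmap_read_unlock(mm);
		return vma ? vma : ERR_PTR(-ENOENT);
	}

Taking vm_lock directly under mmap_lock, rather than calling
vma_start_read(), avoids the false-failure window described in the
patch's comments, while mmap_lock itself is held only for the lookup.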
Signed-off-by: Lokesh Gidra <lokeshgidra@google.com>
---
 fs/userfaultfd.c              |  13 +-
 include/linux/userfaultfd_k.h |   5 +-
 mm/userfaultfd.c              | 392 ++++++++++++++++++++++++++--------
 3 files changed, 312 insertions(+), 98 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index c00a021bcce4..60dcfafdc11a 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2005,17 +2005,8 @@ static int userfaultfd_move(struct userfaultfd_ctx *ctx,
 		return -EINVAL;
 
 	if (mmget_not_zero(mm)) {
-		mmap_read_lock(mm);
-
-		/* Re-check after taking map_changing_lock */
-		down_read(&ctx->map_changing_lock);
-		if (likely(!atomic_read(&ctx->mmap_changing)))
-			ret = move_pages(ctx, mm, uffdio_move.dst, uffdio_move.src,
-					 uffdio_move.len, uffdio_move.mode);
-		else
-			ret = -EAGAIN;
-		up_read(&ctx->map_changing_lock);
-		mmap_read_unlock(mm);
+		ret = move_pages(ctx, uffdio_move.dst, uffdio_move.src,
+				 uffdio_move.len, uffdio_move.mode);
 		mmput(mm);
 	} else {
 		return -ESRCH;
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 3210c3552976..05d59f74fc88 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -138,9 +138,8 @@ extern long uffd_wp_range(struct vm_area_struct *vma,
 /* move_pages */
 void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
 void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 flags);
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 flags);
 int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
 			struct vm_area_struct *dst_vma,
 			struct vm_area_struct *src_vma,
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 74aad0831e40..eb7ff220f315 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -20,19 +20,11 @@
 #include "internal.h"
 
 static __always_inline
-struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
-				    unsigned long dst_start,
-				    unsigned long len)
+bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
 {
-	/*
-	 * Make sure that the dst range is both valid and fully within a
-	 * single existing vma.
-	 */
-	struct vm_area_struct *dst_vma;
-
-	dst_vma = find_vma(dst_mm, dst_start);
-	if (!range_in_vma(dst_vma, dst_start, dst_start + len))
-		return NULL;
+	/* Make sure that the dst range is fully within dst_vma. */
+	if (dst_end > dst_vma->vm_end)
+		return false;
 
 	/*
 	 * Check the vma is registered in uffd, this is required to
@@ -40,10 +32,118 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	 * time.
 	 */
 	if (!dst_vma->vm_userfaultfd_ctx.ctx)
-		return NULL;
+		return false;
+
+	return true;
+}
+
+static __always_inline
+struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
+						 unsigned long addr)
+{
+	struct vm_area_struct *vma;
+
+	mmap_assert_locked(mm);
+	vma = vma_lookup(mm, addr);
+	if (!vma)
+		vma = ERR_PTR(-ENOENT);
+	else if (!(vma->vm_flags & VM_SHARED) && anon_vma_prepare(vma))
+		vma = ERR_PTR(-ENOMEM);
+
+	return vma;
+}
+
+#ifdef CONFIG_PER_VMA_LOCK
+/*
+ * lock_vma() - Lookup and lock vma corresponding to @address.
+ * @mm: mm to search vma in.
+ * @address: address that the vma should contain.
+ *
+ * Should be called without holding mmap_lock. vma should be unlocked after use
+ * with unlock_vma().
+ *
+ * Return: A locked vma containing @address, -ENOENT if no vma is found, or
+ * -ENOMEM if anon_vma couldn't be allocated.
+ */
+static struct vm_area_struct *lock_vma(struct mm_struct *mm,
+				       unsigned long address)
+{
+	struct vm_area_struct *vma;
+
+	vma = lock_vma_under_rcu(mm, address);
+	if (vma) {
+		/*
+		 * lock_vma_under_rcu() only checks anon_vma for private
+		 * anonymous mappings. But we need to ensure it is assigned in
+		 * private file-backed vmas as well.
+		 */
+		if (!(vma->vm_flags & VM_SHARED) && !vma->anon_vma)
+			vma_end_read(vma);
+		else
+			return vma;
+	}
+
+	mmap_read_lock(mm);
+	vma = find_vma_and_prepare_anon(mm, address);
+	if (!IS_ERR(vma)) {
+		/*
+		 * We cannot use vma_start_read() as it may fail due to
+		 * false locked (see comment in vma_start_read()). We
+		 * can avoid that by directly locking vm_lock under
+		 * mmap_lock, which guarantees that nobody can lock the
+		 * vma for write (vma_start_write()) under us.
+		 */
+		down_read(&vma->vm_lock->lock);
+	}
+
+	mmap_read_unlock(mm);
+	return vma;
+}
+
+static void unlock_vma(struct vm_area_struct *vma)
+{
+	vma_end_read(vma);
+}
+
+static struct vm_area_struct *find_and_lock_dst_vma(struct mm_struct *dst_mm,
+						    unsigned long dst_start,
+						    unsigned long len)
+{
+	struct vm_area_struct *dst_vma;
 
-	return dst_vma;
+	dst_vma = lock_vma(dst_mm, dst_start);
+	if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len))
+		return dst_vma;
+
+	unlock_vma(dst_vma);
+	return ERR_PTR(-ENOENT);
+}
+
+#else
+
+static struct vm_area_struct *lock_mm_and_find_dst_vma(struct mm_struct *dst_mm,
+						       unsigned long dst_start,
+						       unsigned long len)
+{
+	struct vm_area_struct *dst_vma;
+	int err;
+
+	mmap_read_lock(dst_mm);
+	dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start);
+	if (IS_ERR(dst_vma)) {
+		err = PTR_ERR(dst_vma);
+		goto out_unlock;
+	}
+
+	if (validate_dst_vma(dst_vma, dst_start + len))
+		return dst_vma;
+
+	err = -ENOENT;
+out_unlock:
+	mmap_read_unlock(dst_mm);
+	return ERR_PTR(err);
 }
+#endif
 
 /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
 static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
@@ -350,7 +450,8 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 #ifdef CONFIG_HUGETLB_PAGE
 /*
  * mfill_atomic processing for HUGETLB vmas. Note that this routine is
- * called with mmap_lock held, it will release mmap_lock before returning.
+ * called with either vma-lock or mmap_lock held, it will release the lock
+ * before returning.
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
 					      struct userfaultfd_ctx *ctx,
@@ -361,7 +462,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 					      uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
-	int vm_shared = dst_vma->vm_flags & VM_SHARED;
 	ssize_t err;
 	pte_t *dst_pte;
 	unsigned long src_addr, dst_addr;
@@ -380,7 +480,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 */
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
 		up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+		unlock_vma(dst_vma);
+#else
 		mmap_read_unlock(dst_mm);
+#endif
 		return -EINVAL;
 	}
 
@@ -403,24 +507,32 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	 * retry, dst_vma will be set to NULL and we must lookup again.
 	 */
 	if (!dst_vma) {
+#ifdef CONFIG_PER_VMA_LOCK
+		dst_vma = find_and_lock_dst_vma(dst_mm, dst_start, len);
+#else
+		dst_vma = lock_mm_and_find_dst_vma(dst_mm, dst_start, len);
+#endif
+		if (IS_ERR(dst_vma)) {
+			err = PTR_ERR(dst_vma);
+			goto out;
+		}
+
 		err = -ENOENT;
-		dst_vma = find_dst_vma(dst_mm, dst_start, len);
-		if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
-			goto out_unlock;
+		if (!is_vm_hugetlb_page(dst_vma))
+			goto out_unlock_vma;
 
 		err = -EINVAL;
 		if (vma_hpagesize != vma_kernel_pagesize(dst_vma))
-			goto out_unlock;
-
-		vm_shared = dst_vma->vm_flags & VM_SHARED;
-	}
+			goto out_unlock_vma;
 
-	/*
-	 * If not shared, ensure the dst_vma has a anon_vma.
-	 */
-	err = -ENOMEM;
-	if (!vm_shared) {
-		if (unlikely(anon_vma_prepare(dst_vma)))
+		/*
+		 * If memory mappings are changing because of non-cooperative
+		 * operation (e.g. mremap) running in parallel, bail out and
+		 * request the user to retry later
+		 */
+		down_read(&ctx->map_changing_lock);
+		err = -EAGAIN;
+		if (atomic_read(&ctx->mmap_changing))
 			goto out_unlock;
 	}
 
@@ -465,7 +577,11 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 		if (unlikely(err == -ENOENT)) {
 			up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+			unlock_vma(dst_vma);
+#else
 			mmap_read_unlock(dst_mm);
+#endif
 			BUG_ON(!folio);
 
 			err = copy_folio_from_user(folio,
@@ -474,17 +590,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 				err = -EFAULT;
 				goto out;
 			}
-			mmap_read_lock(dst_mm);
-			down_read(&ctx->map_changing_lock);
-			/*
-			 * If memory mappings are changing because of non-cooperative
-			 * operation (e.g. mremap) running in parallel, bail out and
-			 * request the user to retry later
-			 */
-			if (atomic_read(&ctx->mmap_changing)) {
-				err = -EAGAIN;
-				break;
-			}
 
 			dst_vma = NULL;
 			goto retry;
@@ -505,7 +610,12 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
+out_unlock_vma:
+#ifdef CONFIG_PER_VMA_LOCK
+	unlock_vma(dst_vma);
+#else
 	mmap_read_unlock(dst_mm);
+#endif
 out:
 	if (folio)
 		folio_put(folio);
@@ -597,7 +707,19 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	copied = 0;
 	folio = NULL;
 retry:
-	mmap_read_lock(dst_mm);
+	/*
+	 * Make sure the vma is not shared, that the dst range is
+	 * both valid and fully within a single existing vma.
+	 */
+#ifdef CONFIG_PER_VMA_LOCK
+	dst_vma = find_and_lock_dst_vma(dst_mm, dst_start, len);
+#else
+	dst_vma = lock_mm_and_find_dst_vma(dst_mm, dst_start, len);
+#endif
+	if (IS_ERR(dst_vma)) {
+		err = PTR_ERR(dst_vma);
+		goto out;
+	}
 
 	/*
 	 * If memory mappings are changing because of non-cooperative
@@ -609,15 +731,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	if (atomic_read(&ctx->mmap_changing))
 		goto out_unlock;
 
-	/*
-	 * Make sure the vma is not shared, that the dst range is
-	 * both valid and fully within a single existing vma.
-	 */
-	err = -ENOENT;
-	dst_vma = find_dst_vma(dst_mm, dst_start, len);
-	if (!dst_vma)
-		goto out_unlock;
-
 	err = -EINVAL;
 	/*
 	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
@@ -647,16 +760,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
 		goto out_unlock;
 
-	/*
-	 * Ensure the dst_vma has a anon_vma or this page
-	 * would get a NULL anon_vma when moved in the
-	 * dst_vma.
-	 */
-	err = -ENOMEM;
-	if (!(dst_vma->vm_flags & VM_SHARED) &&
-	    unlikely(anon_vma_prepare(dst_vma)))
-		goto out_unlock;
-
 	while (src_addr < src_start + len) {
 		pmd_t dst_pmdval;
 
@@ -699,7 +802,11 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 			void *kaddr;
 
 			up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+			unlock_vma(dst_vma);
+#else
 			mmap_read_unlock(dst_mm);
+#endif
 			BUG_ON(!folio);
 
 			kaddr = kmap_local_folio(folio, 0);
@@ -730,7 +837,11 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 out_unlock:
 	up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+	unlock_vma(dst_vma);
+#else
 	mmap_read_unlock(dst_mm);
+#endif
 out:
 	if (folio)
 		folio_put(folio);
@@ -1267,27 +1378,119 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
 	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
 		return -EINVAL;
 
+	return 0;
+}
+
+static __always_inline
+long find_vmas_mm_locked(struct mm_struct *mm,
+			 unsigned long dst_start,
+			 unsigned long src_start,
+			 struct vm_area_struct **dst_vmap,
+			 struct vm_area_struct **src_vmap)
+{
+	struct vm_area_struct *vma;
+
+	mmap_assert_locked(mm);
+	vma = find_vma_and_prepare_anon(mm, dst_start);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	*dst_vmap = vma;
+	/* Skip finding src_vma if src_start is in dst_vma */
+	if (src_start >= vma->vm_start && src_start < vma->vm_end)
+		goto out_success;
+
+	vma = vma_lookup(mm, src_start);
+	if (!vma)
+		return -ENOENT;
+out_success:
+	*src_vmap = vma;
+	return 0;
+}
+
+#ifdef CONFIG_PER_VMA_LOCK
+static long find_and_lock_vmas(struct mm_struct *mm,
+			       unsigned long dst_start,
+			       unsigned long src_start,
+			       struct vm_area_struct **dst_vmap,
+			       struct vm_area_struct **src_vmap)
+{
+	struct vm_area_struct *vma;
+	long err;
+
+	vma = lock_vma(mm, dst_start);
+	if (IS_ERR(vma))
+		return PTR_ERR(vma);
+
+	*dst_vmap = vma;
 	/*
-	 * Ensure the dst_vma has a anon_vma or this page
-	 * would get a NULL anon_vma when moved in the
-	 * dst_vma.
+	 * Skip finding src_vma if src_start is in dst_vma. This also ensures
+	 * that we don't lock the same vma twice.
 	 */
-	if (unlikely(anon_vma_prepare(dst_vma)))
-		return -ENOMEM;
+	if (src_start >= vma->vm_start && src_start < vma->vm_end) {
+		*src_vmap = vma;
+		return 0;
+	}
 
-	return 0;
+	/*
+	 * Using lock_vma() to get src_vma can lead to following deadlock:
+	 *
+	 * Thread1				Thread2
+	 * -------				-------
+	 * vma_start_read(dst_vma)
+	 *					mmap_write_lock(mm)
+	 *					vma_start_write(src_vma)
+	 * vma_start_read(src_vma)
+	 * mmap_read_lock(mm)
+	 *					vma_start_write(dst_vma)
+	 */
+	*src_vmap = lock_vma_under_rcu(mm, src_start);
+	if (likely(*src_vmap))
+		return 0;
+
+	/* Undo any locking and retry in mmap_lock critical section */
+	vma_end_read(*dst_vmap);
+
+	mmap_read_lock(mm);
+	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
+	if (!err) {
+		/*
+		 * See comment in lock_vma() as to why not using
+		 * vma_start_read() here.
+		 */
+		down_read(&(*dst_vmap)->vm_lock->lock);
+		if (*dst_vmap != *src_vmap)
+			down_read(&(*src_vmap)->vm_lock->lock);
+	}
+	mmap_read_unlock(mm);
+	return err;
+}
+#else
+static long lock_mm_and_find_vmas(struct mm_struct *mm,
+				  unsigned long dst_start,
+				  unsigned long src_start,
+				  struct vm_area_struct **dst_vmap,
+				  struct vm_area_struct **src_vmap)
+{
+	long err;
+
+	mmap_read_lock(mm);
+	err = find_vmas_mm_locked(mm, dst_start, src_start, dst_vmap, src_vmap);
+	if (err)
+		mmap_read_unlock(mm);
+	return err;
 }
+#endif
 
 /**
  * move_pages - move arbitrary anonymous pages of an existing vma
  * @ctx: pointer to the userfaultfd context
- * @mm: the address space to move pages
  * @dst_start: start of the destination virtual memory range
  * @src_start: start of the source virtual memory range
  * @len: length of the virtual memory range
  * @mode: flags from uffdio_move.mode
  *
- * Must be called with mmap_lock held for read.
+ * It will either use the mmap_lock in read mode or per-vma locks
  *
  * move_pages() remaps arbitrary anonymous pages atomically in zero
  * copy. It only works on non shared anonymous pages because those can
@@ -1355,10 +1558,10 @@ static int validate_move_areas(struct userfaultfd_ctx *ctx,
 * could be obtained. This is the only additional complexity added to
 * the rmap code to provide this anonymous page remapping functionality.
 */
-ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
-		   unsigned long dst_start, unsigned long src_start,
-		   unsigned long len, __u64 mode)
+ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
+		   unsigned long src_start, unsigned long len, __u64 mode)
 {
+	struct mm_struct *mm = ctx->mm;
 	struct vm_area_struct *src_vma, *dst_vma;
 	unsigned long src_addr, dst_addr;
 	pmd_t *src_pmd, *dst_pmd;
@@ -1376,28 +1579,40 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 	    WARN_ON_ONCE(dst_start + len <= dst_start))
 		goto out;
 
+#ifdef CONFIG_PER_VMA_LOCK
+	err = find_and_lock_vmas(mm, dst_start, src_start,
+				 &dst_vma, &src_vma);
+#else
+	err = lock_mm_and_find_vmas(mm, dst_start, src_start,
+				    &dst_vma, &src_vma);
+#endif
+	if (err)
+		goto out;
+
+	/* Re-check after taking map_changing_lock */
+	err = -EAGAIN;
+	down_read(&ctx->map_changing_lock);
+	if (likely(atomic_read(&ctx->mmap_changing)))
+		goto out_unlock;
 	/*
 	 * Make sure the vma is not shared, that the src and dst remap
 	 * ranges are both valid and fully within a single existing
 	 * vma.
 	 */
-	src_vma = find_vma(mm, src_start);
-	if (!src_vma || (src_vma->vm_flags & VM_SHARED))
-		goto out;
-	if (src_start < src_vma->vm_start ||
-	    src_start + len > src_vma->vm_end)
-		goto out;
+	err = -EINVAL;
+	if (src_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
+	if (src_start + len > src_vma->vm_end)
+		goto out_unlock;
 
-	dst_vma = find_vma(mm, dst_start);
-	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
-		goto out;
-	if (dst_start < dst_vma->vm_start ||
-	    dst_start + len > dst_vma->vm_end)
-		goto out;
+	if (dst_vma->vm_flags & VM_SHARED)
+		goto out_unlock;
+	if (dst_start + len > dst_vma->vm_end)
+		goto out_unlock;
 
 	err = validate_move_areas(ctx, src_vma, dst_vma);
 	if (err)
-		goto out;
+		goto out_unlock;
 
 	for (src_addr = src_start, dst_addr = dst_start;
 	     src_addr < src_start + len;) {
@@ -1514,6 +1729,15 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, struct mm_struct *mm,
 		moved += step_size;
 	}
 
+out_unlock:
+	up_read(&ctx->map_changing_lock);
+#ifdef CONFIG_PER_VMA_LOCK
+	unlock_vma(src_vma);
+	if (src_vma != dst_vma)
+		unlock_vma(dst_vma);
+#else
+	mmap_read_unlock(mm);
+#endif
 out:
 	VM_WARN_ON(moved < 0);
 	VM_WARN_ON(err > 0);
-- 
2.43.0.687.g38aa6559b0-goog